
## STEP Support Programme


## Introduction

This is the third of the 25 Foundation modules. We suggest working through the first assignment and second assignment before starting this.

STEP is a challenging examination, and is different in style from A-level, although the mathematical content is the same. STEP questions are longer: they are designed to take around 30 minutes, rather than the typical 10 minutes required for an A-level question.

Do not worry if the STEP problems appear difficult: they are meant to be! However, you should not be daunted. These assignments are designed to help you to develop the skills you need, over time, so that by the time you sit the STEP exam in the summer of Y13 you will feel well-prepared.

The assignment is published as a pdf file below. Each STEP Support assignment module starts with a warm-up exercise, followed by preparatory work leading to a STEP question. Finally, there is a warm-down exercise.

The warm up for this assignment involves the sigma notation, and a proof of the formula for the sum of the terms of a geometric progression. For the last part, use the formula rather than summing the individual terms, and try to do this without using a calculator.
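The formula in question, for first term $a$ and common ratio $r \ne 1$, is $a + ar + \cdots + ar^{n-1} = a\,\frac{1-r^n}{1-r}$. A quick sketch checking the closed form against a direct sum (illustrative values only, not part of the assignment):

```python
from fractions import Fraction

def geometric_sum(a, r, n):
    """Sum of a + ar + ... + ar^(n-1) via the closed-form formula."""
    r = Fraction(r)
    return a * (1 - r**n) / (1 - r)

# Check the formula against summing the terms one by one.
a, r, n = 2, Fraction(1, 3), 5
assert geometric_sum(a, r, n) == sum(a * r**i for i in range(n))
```

Using `Fraction` keeps the arithmetic exact, which mirrors the "without a calculator" spirit of the exercise.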

The main STEP question (2004 STEP 1 Question 2) introduces the “floor” notation. More information on this and related functions can be found here.
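As a quick check on the notation: $\lfloor x \rfloor$ is the greatest integer not exceeding $x$, which matters most for negative numbers (it rounds towards minus infinity, not towards zero):

```python
import math

# floor(x) is the greatest integer <= x.
assert math.floor(2.7) == 2
assert math.floor(-1.2) == -2   # not -1: floor rounds down, not towards zero
assert math.floor(5) == 5       # integers are unchanged
```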

The final question involves a linear Diophantine equation, i.e. one of the form ax + by = c where a, b and c are given integers and we are looking for solutions where x and y are also integers.
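A standard tool here is the extended Euclidean algorithm, which finds integers $x, y$ with $ax + by = \gcd(a, b)$; scaling then gives a solution of $ax + by = c$ exactly when $\gcd(a, b)$ divides $c$. A sketch (illustrative, not part of the assignment):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_diophantine(a, b, c):
    """One integer solution of a*x + b*y == c, or None if none exists."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None          # gcd(a, b) must divide c
    k = c // g
    return x * k, y * k

x, y = solve_diophantine(7, 5, 1)
assert 7 * x + 5 * y == 1
```

All other solutions differ from this one by multiples of $(b/g, -a/g)$.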

More information on Diophantine equations can be found in this article on Plus, the free online mathematics magazine, and in this Wikipedia entry. You may also enjoy watching this talk by Dr Vicky Neale, 'How to Solve Equations'. This Plus article shows how we can integrate from first principles. This Numberphile video discusses the "Monkeys and Coconuts" problem. Do not watch it until you have tried the assignment!

## Hints, support and self evaluation

The “Hints and partial solutions for Assignment 3” file gives suggestions on how you can tackle the questions, and some common pitfalls to avoid, as well as some partial solutions and answers.

Here is a Worked Video Solution to the STEP question from this assignment.

Underground Mathematics: Selected worked STEP questions

STEP Question database

University of Cambridge Mathematics Faculty: What do we look for?

University of Cambridge Mathematics Faculty: Information about STEP

MEI: Worked solutions to STEP questions (external link)



Assignment 3

Your assignment consists of a short document or extract of a larger document. Your assignment submission should be within the range of 4-10 pages. Contact your tutor to ensure the scope is appropriate.

You will submit assignments in two sections. The planning documents are the first section of the assignment (Part A). Submit these at the end of the section on organizing, as you prepare to draft your report.

Part B, the document (report, proposal) or a section of that document and an accompanying briefing note, is due when you complete your proofreading. Use the feedback your tutor provided on your planning documents to inform your writing.

More specific instructions for Part A and Part B of your assignment are included on the following screens.

## Assignment 3 CS6910

Instructions.

• The goal of this assignment is fourfold: (i) learn how to model sequence to sequence learning problems using Recurrent Neural Networks (ii) compare different cells such as vanilla RNN, LSTM and GRU (iii) understand how attention networks overcome the limitations of vanilla seq2seq models (iv) visualise the interactions between different components in a RNN based model.
• We strongly recommend that you work on this assignment in a team of size 2. Both the members of the team are expected to work together (in a subsequent viva both members will be expected to answer questions, explain the code, etc).
• Collaborations and discussions with other groups are strictly prohibited.
• You must use Python (numpy and pandas) for your implementation.
• You can use any and all packages from keras, pytorch, tensorflow
• You can run the code in a jupyter notebook on colab by enabling GPUs.
• You have to generate the report in the same format as shown below using wandb.ai. You can start by cloning this report using the clone option above. Most of the plots that we have asked for below can be (automatically) generated using the apis provided by wandb.ai. You will upload a link to this report on gradescope.
• You also need to provide a link to your github code as shown below. Follow good software engineering practices and set up a github repo for the project on Day 1. Please do not write all code on your local machine and push everything to github on the last day. The commits in github should reflect how the code has evolved during the course of the assignment.
• You have to check moodle regularly for updates regarding the assignment.

## Problem Statement

Question 1 (15 marks).

The model equations for one timestep are:

$x_i = E I_i$

$s_i = \sigma(U x_i + W s_{i-1} + b)$

$y_i = \text{softmax}(V' s_i + c)$

Shapes of the quantities involved:

• $I_i$ → $(V, 1)$ (one-hot vector)
• $E$ → $(m, V)$ (embedding matrix)
• $U$ → $(k, m)$
• $W$ → $(k, k)$
• $s_{i-1}$ → $(k, 1)$
• $b$ → $(k, 1)$
• $V'$ → $(k, k)$ (for the decoder, $V'$ → $(V, k)$)
• $s_i$ → $(k, 1)$
• $c$ → $(k, 1)$ (for the decoder, $c$ → $(V, 1)$)
• $x_i$ → $(m, 1)$

Cost of one hidden-state update:

• Multiplications: $O(km + k^2)$
• Additions: $O(2k)$
• Total: $O(km + k^2 + 2k)$

Adding the embedding lookup and the non-linearity, one encoder timestep costs:

• $E I_i$ → $O(mV)$
• Sigmoid → $O(k)$
• Per timestep: $O(mk + k^2 + mV + 3k)$
• Over $T$ timesteps: $O(T(mk + k^2 + mV + 3k))$

A decoder timestep additionally computes the output distribution:

• $y_i$ → $O(Vk + V)$
• Softmax → $O(V)$
• Per timestep: $O(mk + k^2 + mV + Vk + 2V + 2k)$
• Over $T$ timesteps: $O(T(mk + k^2 + mV + Vk + 2V + 2k))$

Total for the full model:

$O(T(2mk + 2mV + 2k^2 + Vk + 2V + 5k))$

Number of parameters on the encoder side:

• $E_1$ → $(m, V)$ (embedding matrix)
• $V'$ → $(k, k)$
• $c$ → $(k, 1)$
• Total: $O(mV + km + 2k^2 + 2k)$

And on the decoder side:

• $E_2$ → $(m, V)$ (embedding matrix)
• $V'$ → $(V, k)$
• $c$ → $(V, 1)$
• Total: $O(mV + km + k^2 + Vk + V + k)$

Overall parameter count:

$O(2mV + 2km + 3k^2 + 3k + Vk + V)$
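The shape bookkeeping above can be sanity-checked with a small sketch (toy sizes, zero weights; illustrative only, not the assignment's model code):

```python
def matmul(A, B):
    """Multiply matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

def shape(M):
    return (len(M), len(M[0]))

V, m, k = 10, 4, 6            # vocab, embedding and hidden sizes (toy values)

E, U, W = zeros(m, V), zeros(k, m), zeros(k, k)
Vp = zeros(V, k)              # decoder-side V'
I_i = zeros(V, 1); I_i[3][0] = 1.0   # one-hot input
s_prev = zeros(k, 1)

x_i = matmul(E, I_i)                         # (m, 1): embedding lookup
s_i = matmul(U, x_i)                         # (k, 1): input contribution to the state
assert shape(matmul(W, s_prev)) == (k, 1)    # recurrent term has matching shape
y_i = matmul(Vp, s_i)                        # (V, 1): pre-softmax logits

assert shape(x_i) == (m, 1) and shape(s_i) == (k, 1) and shape(y_i) == (V, 1)
```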

## Question 2 (10 Marks)

• input embedding size: 16, 32, 64, 256, ...
• number of encoder layers: 1, 2, 3
• number of decoder layers: 1, 2, 3
• hidden layer size: 16, 32, 64, 256, ...
• cell type: RNN, GRU, LSTM
• dropout: 20%, 30% (btw, where will you add dropout? you should read up a bit on this)
• beam search in decoder with different beam sizes:
• accuracy vs. created plot (I would like to see the number of experiments you ran to get the best configuration)
• parallel co-ordinates plot
• correlation summary table (to see the correlation of each hyperparameter with the loss/accuracy)
• input embedding size: 32, 64, 256
• hidden layer size: 32, 64, 256
• dropout: 0%, 25%, 40% (recurrent dropout is given the same value as normal dropout)
• beam width: 1, 5
• Since each run takes around 30 minutes, it is imperative to keep the number of values per hyperparameter in the sweep as small as possible. We used small exploratory sweeps for this purpose: many parameter values were tried over a small number of runs, and the ranges that gave good accuracy were kept and put into the configuration for a large sweep of more than 150 runs. For example, an input embedding size of 16 performed consistently worse than the other values, so that option was removed from the large sweep. The small sweeps were also used to adjust other parts of the architecture, such as the hidden layer size and the optimizer. We observed that the Adam optimizer works best, and that a hidden layer size of 16 performs worse than the other values.
• We use early stopping with a patience of 5, monitoring validation accuracy and stopping training if there is no improvement for 5 consecutive epochs. The wandb callback passed to training not only logs the training metrics but also stores the model with the best validation accuracy (so the stored parameters are not always those of the last epoch). These weights can then be loaded to train the model further or to analyse it later.
• We use the Bayesian search provided by wandb to sweep over the hyperparameters. This helps discard combinations that give poor accuracy, so we obtain many models with good accuracy for a given number of runs.
• val_exact_accuracy refers to the word-level validation accuracy, whereas val_accuracy refers to the character-level validation accuracy.
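To illustrate what the beam-width hyperparameter controls, here is a minimal beam-search decoder over a toy next-character model (the model and all names are illustrative, not the assignment's code):

```python
import math

def beam_search(step_probs, beam_width, length):
    """step_probs(prefix) -> {char: prob} for the next character.
    Keeps the beam_width highest-scoring prefixes at each step."""
    beams = [("", 0.0)]                      # (prefix, log-probability)
    for _ in range(length):
        candidates = []
        for prefix, logp in beams:
            for ch, p in step_probs(prefix).items():
                candidates.append((prefix + ch, logp + math.log(p)))
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_width]
    return beams

# Toy model: 'a' slightly preferred first, then 'b' strongly preferred.
def toy_model(prefix):
    return {"a": 0.6, "b": 0.4} if not prefix else {"a": 0.1, "b": 0.9}

print(beam_search(toy_model, beam_width=2, length=2)[0][0])  # prints "ab"
```

With beam width 1 (greedy decoding) only the single best prefix survives each step; widths above 1 let a slightly worse early choice win if its continuation is better, which is why the top models in the sweep all used beam width 5.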

## Parameter importance with respect to val_exact_accuracy

Question 3 (15 marks).

• RNN based model takes longer time to converge than GRU or LSTM
• using smaller sizes for the hidden layer does not give good results
• dropout leads to better performance
• All the top performing models have a beam width of 5 and hence we can say that having a beam width greater than 1 is definitely an advantage.
• RNN performs worse compared to either LSTM or GRU. The situation remains the same even after adding more layers.
• LSTM gives good performance with a lower number of layers in the encoder and decoder.
• GRU requires more layers in the encoder and decoder for better performance.
• The number of hidden layers is positively correlated with accuracy: more layers give better performance.
• A dropout value of 0.4 seems high (for most models) and a value of 0 leads to overfitting. The value of 0.25 in between gives good performance. We use the same value of dropout for the normal and recurrent dropout (though, a non-zero recurrent dropout makes training not compatible with CuDNN for some models and hence takes more time). Presence of recurrent dropout improved the performance of models with any configuration on the validation set (by a small margin), hence was worth the sacrifice on compute time.
• Optimizers other than Adam failed to converge quickly. Hence the optimizer was fixed as Adam.
• A larger input embedding size helps increase performance by letting the model represent the inputs more distinctly.
• LSTMs are able to memorise better even with a smaller hidden layer size: an LSTM achieved a validation accuracy of 50% with a hidden layer size of 64. In contrast, the best validation accuracy for GRUs with a hidden layer size of 64 is 24%.
• GRU and LSTM are able to achieve a good level of validation accuracy (around 35%) with beam width 1, better than any configuration of RNN. This shows how they are able to memorise the input and reproduce the output better.
• The models perform better with a greater number of encoder and decoder layers when the other hyperparameters are favourable. More encoder/decoder layers alone may not help much, but with the other hyperparameters in a good configuration, extra layers push the accuracy up further. As evidence of this, the best model obtained in the sweep has 3 encoder and 3 decoder layers.

## Question 4 (10 Marks)

• The model makes more errors on consonants than vowels
• The model makes more errors on longer sequences
• I am thinking confusion matrix, but maybe it's just me!

• The model gets confused by English letter groups with multiple pronunciations (e.g., notice how it is unsure about ‘ou’ in youth and ‘ch’ in chaya).
• Since it predicts after seeing the whole encoder input just once, it does not remember certain characters well (mainly the vowels that indicate which form of a consonant to use). For example, the ‘o’ in counteron is forgotten while transliterating it.
• For the same reason, the prediction sometimes jumbles the input characters while transliterating; e.g., the 3rd prediction for phantom looks like the output for phantamo.
• There are errors in predicting the half characters in words like nasht and arch, i.e., the model struggles to decide when to join letters and when not to.

## Question 5 (20 Marks)

• The RNNs’ performance increased greatly with attention. Some RNN-based models now crossed 70% val_exact_accuracy (using beam_width = 5) while they were struggling to reach 30% without attention (with the same beam_width). LSTMs and GRUs still outperform RNNs, but by a much smaller margin than without attention.
• Though the models with greater hidden layer size and input embedding size still dominated, the dependence of validation accuracy on the number of encoder and decoder layers was reduced. In fact, the best attention-based model had just 1 encoder and 1 decoder layer, even though there were models with more layers and otherwise identical configurations. Hence, increasing the number of layers does not help (and can even hurt, as with the best model's configuration) when it comes to performance on the validation set.
• The best model has a dropout of 0.25, but a model with a dropout of 0.4 is very close behind. Depending on the other hyperparameter values, dropouts of 0.25 and 0.4 both work (almost) equally well.

• the model now pays attention to the relevant parts of the input at every timestep rather than looking at the input in one go, thereby overcoming the forgetting problem to an extent
• the model incorporates the influence of surrounding characters effectively when predicting the output at a timestep, and hence performs better in cases where the same English letter group has different pronunciations.
• The model now is able to predict chaya correctly by paying attention to the ‘h’ after the ‘c’ (see the 8th plot in the below heatmap).
• The ‘o’ in counteron is not forgotten while transliterating it, unlike the model without attention.
• The model no longer predicts transliteration of phantamo as a top output and hence is clear about the order of characters in the input sequence.
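A minimal sketch of the attention mechanism being described (toy vectors and a plain dot-product score; illustrative, not the report's actual implementation):

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the
    current decoder state, softmax the scores, and return the
    attention weights and the weighted context vector."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(encoder_states[0])
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

# The decoder state is most similar to the second encoder state,
# so that timestep receives the largest attention weight.
enc = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = attention([0.0, 2.0], enc)
assert max(range(3), key=lambda i: weights[i]) == 1
```

The context vector is recomputed at every decoder step, which is exactly why the model no longer has to carry the whole input in a single fixed-size state.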

## Question 7 (10 Marks)

Question 8 (0 marks), self declaration.

• coded the model creation and training for attention based models
• set up WANDB sweep for no attention models
• trained and reported test metrics and predictions for the best vanilla model
• coded the inference models
• coded the function for finding test set metrics and saving predictions
• coded the attention visualization
• reported observations on sweep of vanilla models
• reported improvements by attention models
• did the overall report formatting and proof-reading
• coded the model creation and training for vanilla models
• set up WANDB sweep for attention based models
• trained and reported test metrics and predictions for the best attention model
• coded the beam decoder and custom callback function
• coded the attention heatmaps and sample output visualization function
• coded the GUI part for attention visualization
• reported observations on sweep of attention models
• reported shortcomings of vanilla models
• did the Github readme and youtube video for demonstrating attention visualization



## Assignment 3

The assignment.

• Sessions should be defined by a random Session Management String. The version of sessionServlet.java in the .zip file below begins to implement this.
• Bean instantiation should be handled by the servlet rather than the new JStoreView.jsp. In particular, the database should be locked while instantiating a requested Bean.
• Views should have a "Close Session" button.
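The assignment code is Java, but the idea of a random session management string is language-independent. A minimal sketch in Python (the function name is illustrative, not from sessionServlet.java):

```python
import secrets

def new_session_id(nbytes=16):
    """Generate a random, URL-safe session management string.
    128 bits of randomness makes guessing or colliding impractical."""
    return secrets.token_urlsafe(nbytes)

sid = new_session_id()
assert len(sid) >= 16 and sid != new_session_id()  # unique per call
```

The key property is that the string comes from a cryptographically secure source, not a counter or timestamp, so one client cannot predict another's session ID.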

## Source Files and References

• Link references to localhost:8080 in various JSPs need to be changed to hoare.cs.umsl.edu/servlet.
• The directory structure under js_test needs to be moved under j-<yourname>.
• getSimpleNotes.jsp
• sessionServlet.java- First Draft
• sessionServlet.java- Second Draft
• Session.java
• JStore - Browser Client
• Session Management .
• The Database table .
• In particular, The Java Server Page .
• Chapter 7 Sessions and Cookies .
• Some Notes .

## Assignment 3: Sushi Go

Comp1100 assignment 3, semester 2 2023.

In this assignment, you will develop a software agent or AI bot that plays a board game called Sushi Go. We have implemented the rules of the game for you. Your task is to decide how best to play.

Sushi Go is a card drafting game where players compete to build collections of sushi-themed cards that score the most points. This delightful game was designed by Phil Walker-Harding and published by Gamewright. For the purposes of this assignment, we have made some modifications to the rules.

Deadline: Sunday 29th of October, 2023, at 11:00pm Canberra time sharp

Note: Late submissions will not be marked unless you have an approved extension .

We highly recommend committing your work as you go, and pushing regularly. Do not wait until you are finished to commit and push for the first time, as you run the risk of missing the deadline and having nothing submitted. An unfinished assignment is worth more than no assignment.

This assignment is marked out of 100.

As with assignment 2, code that does not compile will be penalised heavily. This means that both the commands cabal v2-run game (with sensible arguments) and cabal v2-test must run without errors. If either command fails with an error, a significant mark deduction will be applied. If you have a partial solution that you cannot get working, you should comment it out and write an additional comment directing your tutor’s attention to it. You are also welcome to ask questions about error messages in drop-ins or on Ed.

If marks are deducted for warnings, they will only be from warnings in AI.hs and AITests.hs .

To help you ensure your submitted code is compiling, we’ve added a Continuous Integration (CI) script to this assignment. This will check that your code compiles, and mark any commit that fails with a red x on the commit, as shown below. You need to ensure that your latest commit on submission displays a green tick. You will also need to separately check that cabal v2-test succeeds on your own computer, because our CI is only set up to check cabal v2-run game .

## Getting Started

• Fork the assignment repository and create a project for it in VS Code/Codium, following the same steps as in Lab 2 . The assignment repository is at

https://gitlab.cecs.anu.edu.au/comp1100/2023s2/studentfiles/asst3-1100_s2_2023 .

Add our version of the repository as a remote called upstream . This allows us to provide additional fixes in the unlikely case that they are required. To do this:

Go to the command palette in VSCode (or VSCodium) by pressing Ctrl + Shift + p .

Type git remote .

Enter upstream into the box for the remote name.

Put the following URL as the remote url: https://gitlab.cecs.anu.edu.au/comp1100/2023s2/studentfiles/asst3-1100_s2_2023.git .

## Overview of the Game

The aim of Sushi Go (and our restricted version) is to collect cards (representing dishes) to make the highest-scoring (most delicious) meal possible. There are two players: Player 1 ( Player1 ) and Player 2 ( Player2 ).

There is a deck of cards which consists of the following:

At the start of the game the cards are shuffled and each player is dealt 20 cards. (Note: as the original deck has more than 40 cards, the set of cards in play differs from game to game.)

There are four collections of cards: a hand and a set of chosen cards for each player. Player1’s hand holds the cards Player1 chooses from, while Player1’s cards are the ones Player1 has already selected; Player2’s hand and cards work the same way.

Thus when the game starts, both players’ cards are empty, while each player has a hand of 20 cards to choose from.

First, Player 1 chooses a single card from their hand to keep. After placing the selected card, Player 2 does the same. After Player 2’s turn, both players swap hands and keep going. The game is over when both players’ hands are empty. The winner is the player whose cards are worth the most points, according to the scoring rules below.

Nigiri ( Nigiri Int ) is your basic meal, and is worth the number of points in its argument (between 1 and 3, inclusive). It is more delicious if you add Wasabi .

Wasabi ( Wasabi (Maybe Card) ) makes Nigiri taste better! The next Nigiri that you take will be worth triple the points (for example Wasabi (Nigiri 2) will be worth $2\times3 =6$ points). If you take a Wasabi but do not later get a Nigiri to put on it, the Wasabi card will have no effect, so pay attention to the order in which you take cards.

Dumplings are delicious! The more you have, the more you want! Their score depends on how many you have.

Eel ( Eel ) tastes terrible when you first try it, but you grow to like it after that.

Tofu ( Tofu ) is great initially, but after a point it starts to make you sick!

Tempura ( Tempura ) is amazing! But you have to have it in pairs. Each pair is worth 5 points.

Tea ( Tea ) gives your meal that amazing ending you were craving! At the end of the round, each tea card is worth the count of the card type you hold the most of.

• Note that if you have Wasabi (Nigiri 1) it counts as both a Wasabi and a Nigiri
• If you have more tea cards than other cards, then each tea card will be worth the count of tea cards.
• For example, if we have Nigiri 1, Wasabi Nothing, Wasabi (Nigiri 1), Dumpling, Wasabi (Nigiri 2), Tea , this counts as 3 Wasabi , 2 Nigiri 1 , 1 Nigiri 2 , 1 Dumpling , 1 Tea , so each tea card is worth 3 points.

(This is slightly different from the original rules.)

With Temaki ( Temaki ), you wouldn’t want anyone else to have it! At the end of the game, the player with the most temaki will get 4 points, while the player with the least will get (-4) points. If both players have the same amount, neither player will be awarded any points.
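The Tea example above can be checked with a short sketch (Python for illustration only; the assignment itself is in Haskell, and tea_score is a hypothetical helper):

```python
from collections import Counter

def tea_score(cards):
    """Each Tea card is worth the size of your largest card group
    (Tea itself included), per the modified Sushi Go rules above."""
    counts = Counter(cards)
    return counts["Tea"] * max(counts.values())

# The example from the rules: 3 Wasabi, 2 Nigiri 1, 1 Nigiri 2,
# 1 Dumpling, 1 Tea -> each Tea card is worth 3 points.
hand = ["Wasabi"] * 3 + ["Nigiri 1"] * 2 + ["Nigiri 2", "Dumpling", "Tea"]
assert tea_score(hand) == 3
```

Because `max` runs over all counts including Tea itself, the "more tea cards than other cards" special case falls out automatically.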

## Initial set up

As mentioned above, each player will be dealt 20 cards. Neither player starts with any chosen cards ( Player1 cards and Player2 cards are empty).

Player1 is first to make a move.

## Overview of the Repository

Most of your code will be written in src/AI.hs , but you will also need to write tests in src/AITests.hs .

## Other Files

src/SushiGo.hs implements the rules of SushiGo. You should read through this file and familiarise yourself with the data declarations and the type signatures of the functions in it, as you will use some of these to analyse the game states. You do not need to understand how the functions in this file work in detail. You do not need to change or implement anything in this file.

src/SushiGoTests.hs implements some unit tests for the game. You are welcome to read through it.

src/AITests.hs is an empty file for you to write tests for your agent.

src/Testing.hs is a simple test framework similar to the one in Assignment 2. However, it has been extended so that you can group related tests together for clarity.

src/Dragons contains all the other code that makes the framework run. You do not need to read or understand anything in this directory. Here be dragons! (On medieval maps they drew pictures of dragons or sea monsters over uncharted areas.) The code in those files is beyond the areas of Haskell which this course explores.

Setup.hs tells cabal that this is a normal package with no unusual build steps. Some complex packages (that we will not see in this course) need to put more complex code here. You are not required to understand it.

comp1100-assignment3.cabal tells the cabal build tool how to build your assignment. We will discuss how to use cabal below.

.ghcid tells the ghcid tool which command to run, which is what supplies VSCodium with error highlighting that automatically updates when you save a file.

.gitignore tells git which files it should not put into version control. These are often generated files, so it doesn’t make sense to place them under version control.

## Overview of Cabal

As before, we are using the cabal tool to build the assignment code. The commands provided are very similar to last time:

cabal v2-build : Compile your assignment.

cabal v2-run game : Build your assignment (if necessary), and run the test program. We discuss the test program in detail below, as there are a number of ways to launch it.

cabal repl comp1100-assignment3 : Run the GHCi interpreter over your project so you can test functions interactively. It’s a good idea to run a cabal v2-clean before you run the above.

cabal v2-test : Build and run the tests. This assignment is set up to run a unit test suite, and as with Assignment 2 you will be writing tests. The unit tests will abort on the first failure, or the first call to a function that is undefined .

cabal v2-haddock : Generate documentation in HTML format, which you can read with a web browser. This might be a nice way to read a summary of the game module, but it also documents the Dragons modules which you can safely ignore.

cabal v2-clean : Cleans up the temporary files and build artifacts stored in the dist-newstyle folder.

You should execute these cabal commands in the top-level directory of your project: comp1100-assignment3 (i.e., the directory you are in when you launch a terminal from VSCodium).

## Overview of the Test Program

To run the test program, you need to provide it with command line arguments that tell it who is playing. This command will let you play against the current "default" AI bot. Before you replace this with your own bot, the default will be firstLegalMove :

Using the "default" AI is part of how we mark your assignment, so it is vital that you update your default bot to be whatever you want to be marked!

For instance if you want your minimax to be the bot that is marked, you should change your code to look like this

Note that there should be only one default ai.

To play against your AI, you type the letters next to a card to pick it from your hand.

To play with a differently named AI, say one you have named "greedy" , use:

In general, the command to run the game looks like this:

Replace ARGS with a collection of arguments from the following list:

--seed INT : Sets a seed for how the deck is shuffled; INT must be a positive number. This allows you to check your performance on a particular game setup.

--timeout DURATION : Change the amount of time (in decimal seconds) that AI functions are given to think of a move (default = 4.0 ). You may want to set this to a smaller number when testing your program, so that things run faster.

--debug-lookahead : When an AI is done thinking, print out how many moves ahead it considered, and the candidate move it came up with at each level of lookahead. The first item in the printed list is the move it came up with at lookahead 1, the second item is the move it came up with at lookahead 2, and so on.

--ui text : Show the game in the terminal.

--ui json : Run a non-interactive game (i.e., AI vs. AI, or AI vs network), and output a report of the game in JSON format. You probably won’t have a use for this, but it’s documented here for completeness.

--host PORT : Listen for a network connection on PORT . You only need this for network play (see below).

--connect HOST:PORT : Connect to someone else’s game. You only need this for network play (see below).

--p1 PLAYER : Specify the first player. Required.

--p2 PLAYER : Specify the second player. Required.

The PLAYER parameters describe who is playing, and can take one of the following forms:

## Network Play

Network play is provided in the hope that it will be useful, but we are unable to provide support for this feature, or diagnose problems related to tunnelling network connections between computers.

The assignment framework supports network play, so that you can test agents against each other without sharing code. One machine must host the game, and the other machine must connect to the game. In the example below, machine A hosts a game on port 5000 with the agent crashOverride as player 1, then machine B connects to the game, providing the AI chinook as player 2:

Under the bonnet, the network code makes a single TCP connection, and moves are sent over the network in JSON. You will need to set up your modem/router to forward connections to the machine running your assignment. A service like ngrok may help, but as previously mentioned we are unable to provide any support for this feature.

## Main Task: Sushi Go AI (55 Marks)

Implement an AI (of type AI , defined in src/AI.hs ). There is a list called ais in that file, and we will mark the AI you call "default" in that list. This list is also where the framework looks when it tries to load an AI by name.

We will test your AI’s performance by comparing it to implementations written by course staff, using a variety of standard approaches. Its performance against these AIs will form a large part of the marks for this task.

It is vital that you indicate one AI as "default" , otherwise we will not know which one to mark. To indicate an AI as "default" , you must have a (String, AI) pair in the ais list of AIs in src/AI.hs where the String is precisely "default" .

## Understanding the AI Type

The AI type has two constructors, depending on whether you are implementing a simple AI that looks only at the current state, or a more complicated AI that performs look-ahead.

The NoLookahead constructor takes as its argument a function of type GameState -> Move . That is, the function you provide should look at the current state of the game and return the move to play. This constructor is intended for very simple AIs that do not look ahead in the game tree. As such, this function should never run for more than a moment at a time, but nevertheless, it is also subject to the timeout of 4 seconds.

The WithLookahead constructor takes as its argument a function of type GameState -> Int -> Move. The Int parameter may be used for any purpose, but we anticipate that you will use it to represent how many steps to look ahead in the game tree. The assignment framework will call your function over and over, with look-ahead 1, then 2, then 3, etc., until it runs out of time. The framework will take the result of the most recent successful function call as your AI's best move. If your AI does not return a move in time, the program will stop with an error.

This is a very open-ended task, and it will probably help if you build up your solution a little at a time. We suggest some approaches below.

Your AI should inspect the Turn within the Game to see whose turn it is. You may call error if the Turn is GameOver - your AI should never be called on a finished game. Your AI can then use the Player value and otherPlayer function to work out how to evaluate the board.

You may also assume that we will only ever call your AI if there is a legal move it can make (i.e., the player's hand is non-empty). In particular, this means that we will not deduct marks for assuming that a list of legal moves is non-empty (e.g., if you used the head function). Note that gratuitous use of head and tail is still poor style, and that, in general, you cannot make this assumption about GameStates you have generated within your AI function.

## First Legal Move

The simplest AI you can build is one that makes the first legal move it can. We have provided this for you, so you can see what a simple AI looks like.

## Interlude: Heuristics

Heuristic functions are discussed in the lecture on game trees. We expect the quality of your heuristic function—how accurately it scores game states—to have a large impact on how well your AI performs.
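As a starting point, a heuristic might simply be the difference between the two players' scores. The sketch below reuses the Player and otherPlayer names mentioned in this specification, but the definitions are stand-ins and the framework's real signatures will differ:

```haskell
-- Stand-in Player type; the framework defines the real one.
data Player = Player1 | Player2 deriving (Eq, Show)

otherPlayer :: Player -> Player
otherPlayer Player1 = Player2
otherPlayer Player2 = Player1

-- A first heuristic: my score minus the opponent's, given some scoring
-- function. Positive values favour "me"; negative values favour the opponent.
heuristic :: Player -> (Player -> Int) -> Int
heuristic me scoreFor = scoreFor me - scoreFor (otherPlayer me)
```

A richer heuristic would also value cards in hand and partially completed sets, not just the score so far.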

## Greedy Strategy

“Greedy strategies” are the class of strategies that make the move providing the greatest immediate advantage. In the context of this game, that means always making the move that gives the greatest immediate increase in the heuristic. Try writing a simple heuristic and a greedy strategy, and see whether it beats your “firstLegalMove” AI. Bear in mind that in a game like this, firstLegalMove will not play terribly, as it still must capture when given the opportunity.
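A greedy strategy can then be sketched as follows. The legalMoves and playMove helpers, and the toy GameState, are invented for this illustration; your AI would use the framework's real functions:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Toy stand-ins so the sketch runs on its own; the framework's real types differ.
type GameState = Int
type Move = Int

legalMoves :: GameState -> [Move]
legalMoves _ = [1, 2, 3]

playMove :: Move -> GameState -> GameState
playMove m s = s + m

myHeuristic :: GameState -> Int
myHeuristic s = s

-- Greedy: of all legal moves, pick the one whose resulting state the
-- heuristic rates highest. No look-ahead beyond one move.
greedyMove :: GameState -> Move
greedyMove s =
  maximumBy (comparing (\m -> myHeuristic (playMove m s))) (legalMoves s)
```

Note that maximumBy assumes a non-empty list, which is safe here because your AI is only called when a legal move exists.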

## Interlude: Game Trees

To make your AI smarter, it is a good idea for it to look into the future and consider responses to its moves, its responses to those responses, and so on. The lecture on game trees may help you here.

Greedy strategies can often miss opportunities that need some planning, and get tricked into silly traps by smarter opponents. The Minimax algorithm was discussed in the lecture on game trees and will likely give better performance than a greedy strategy.
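As a concrete sketch, here is Minimax over a toy game tree. The Tree type is invented for this illustration; your real AI would search GameStates rather than a pre-built tree:

```haskell
-- Minimax over a toy game tree. Leaves carry heuristic values; internal
-- nodes alternate between the maximising player and the minimising opponent.
data Tree = Leaf Int
          | Node [Tree]  -- assumed non-empty for this sketch

minimax :: Bool -> Tree -> Int
minimax _          (Leaf v)  = v
minimax maximising (Node ts)
  | maximising = maximum (map (minimax False) ts)
  | otherwise  = minimum (map (minimax True) ts)
```

In a real AI the depth parameter from WithLookahead would bound the recursion, with the heuristic applied at the cut-off depth instead of only at true leaves.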

Once you have Minimax working, you may find that your AI explores a number of options that cannot possibly influence the result. Cutting off branches of the search space early is called pruning, and one effective method is alpha-beta pruning, as discussed in lectures. Good pruning may allow your search to explore deeper within the time limit it has to make its move.
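As a hedged sketch of the idea, alpha-beta pruning over a toy game tree (the Tree type is invented for this illustration, not part of the framework) returns the same answer as plain Minimax but stops examining a branch as soon as it cannot change the result:

```haskell
data Tree = Leaf Int
          | Node [Tree]  -- assumed non-empty for this sketch

-- alpha: best value the maximiser can already guarantee.
-- beta:  best value the minimiser can already guarantee.
-- A branch is abandoned once its value falls outside (alpha, beta).
alphabeta :: Int -> Int -> Bool -> Tree -> Int
alphabeta _     _    _     (Leaf v)  = v
alphabeta alpha beta True  (Node ts) = goMax alpha ts
  where
    goMax a []         = a
    goMax a (t : rest)
      | a' >= beta = a'            -- cut-off: the minimiser will avoid this node
      | otherwise  = goMax a' rest
      where a' = max a (alphabeta a beta False t)
alphabeta alpha beta False (Node ts) = goMin beta ts
  where
    goMin b []         = b
    goMin b (t : rest)
      | b' <= alpha = b'           -- cut-off: the maximiser will avoid this node
      | otherwise   = goMin b' rest
      where b' = min b (alphabeta alpha b True t)
```

An initial call would use the widest possible window, e.g. alphabeta minBound maxBound True tree.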

## Other Hints

Look-ahead: If your function runs efficiently, it can see further into the future before it runs out of time. The more moves into the future it looks, the more likely it will find good moves that are not immediately obvious. Example: at 1 level of look-ahead, a move may let you capture a lot of pieces, but at deeper look-ahead you might see that it leaves you open to a large counter-capture.

Heuristic: You will not have time to look all the way to the end of every possible game. Your heuristic function guesses how good a Game is for each player. If your heuristic is accurate, it will correctly identify strong and weak states.

Search Strategy: This determines how your AI decides which state to aim for. Greedy strategies look for the best state they can reach immediately (according to the heuristic) and move towards that state. More sophisticated strategies like Minimax consider the opponent's moves when planning.

Pruning: if you can discard parts of the game tree without considering them in detail, you can process game trees faster and achieve a deeper look-ahead in the allotted running time. Alpha-beta pruning is one example; there are others.

Choosing a good heuristic function is very important, as it gives your AI a way to value its position that is smarter than just looking at the current score. If there is only one copy of Sashimi in the game, you will never collect the 3 copies needed to score 10 points, so the card is worth zero and you probably will not want to pick it. If you can complete the set, each card in the set is effectively worth a pretty good 3⅓ points.
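To make that arithmetic concrete, here is a tiny illustrative valuation; sashimiCardValue is a made-up helper, not part of the framework:

```haskell
-- Value of a single Sashimi card, given how many copies remain obtainable.
-- A completed set of 3 scores 10 points, so each card in a completable set
-- is worth 10/3 points; an uncompletable card is worth nothing.
sashimiCardValue :: Int -> Double
sashimiCardValue copiesAvailable
  | copiesAvailable >= 3 = 10 / 3
  | otherwise            = 0
```

A full heuristic would apply this kind of reasoning to every set-collection card, weighting by how likely the set is to be completed.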

Do not try to do everything at once. This does not work in production code and often does not work in assignment code either. Get something working, then take your improved understanding of the problem to the more complex algorithms.

As you refine your bots, test them against each other to see whether your changes are actually an improvement.

## Unit Tests (10 Marks)

As with Assignment 2, you will be expected to write unit tests to convince yourself that your code is correct. The testing code has been extended from last time— src/Testing.hs now allows you to group tests into a tree structure. As before, you run the tests using cabal v2-test .
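The snippet below is only a toy illustration of the idea of tree-structured tests; the real types and function names live in src/Testing.hs and will differ:

```haskell
-- A toy test tree: single named checks grouped under named collections.
-- These names are invented for the sketch, not the framework's real API.
data TestTree = Check String Bool        -- one named assertion
              | Group String [TestTree]  -- a named group of tests

passed :: TestTree -> Bool
passed (Check _ ok) = ok
passed (Group _ ts) = all passed ts

heuristicTests :: TestTree
heuristicTests = Group "heuristic"
  [ Check "empty game scores 0"             True  -- stand-in assertions
  , Check "completed Sashimi set scores 10" True
  ]
```

Grouping related checks this way makes it obvious which part of your AI a failing test belongs to.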

Most of the hints from Assignment 2 apply here. Re-read those.

If a function is giving you an unexpected result, try breaking it into parts and writing tests for each part. This helps you isolate the incorrect parts, and gives you smaller functions to fix.

If your function has subtle details that need to be correct, think about writing tests to ensure those details do not get missed as you work on your code.

## Coding Style (10 Marks)

As you write increasingly complex code, it is increasingly important that the code remains readable. Readable code avoids wasted effort spent deciphering messy code, which makes it easier to think about the problem and your solution to it.

If you wish, you know how, and you have a good reason, you may split your code into multiple modules. However this is not a requirement, and you will never be penalised for not doing so.

You MUST NOT edit any of the files in the framework, with the obvious exceptions of AI.hs and AITests.hs. (You may also edit SushiGoTests.hs, but there should be no reason to.)

Note: We will not regard code that is used by any of your AIs as ‘dead code’, even if it is not used by your submitted default AI. However, we will only mark your default AI.

## Technical Report (COMP1100: 25 marks)

You should write a concise technical report explaining your design choices in implementing your program. The maximum word count is 1500. This is a limit, not a quota; concise presentation is a virtue.

Once again: this is not a required word count. It is the maximum number of words that your marker will read. If you can do it in fewer words without compromising the presentation, please do so.

Your report must be in PDF format, located at the root of your assignment repository on GitLab, and named Report.pdf. Otherwise, it may not be marked, or will be marked with a penalty. You should double-check on GitLab that the name is typed correctly.

The report must have a title page with the following items:

Your laboratory time and tutor; and

## Content and Structure

Your audience is the tutors and lecturers, who are proficient at programming and understand most concepts. Therefore you should not, for example, waste words describing the syntax of Haskell or how recursion works. After reading your technical report, the reader should thoroughly understand:

• What problem your program is trying to solve;
• The reasons behind major design choices in it; as well as
• How it was tested.

Your report should give a broad overview of your program, but focus on the specifics of what you did and why.

Remember that the tutors have access to the above assignment specification, and if your report only contains details from it then you will only receive minimal marks. Below is a potential outline for the structure of your report and some things you might discuss in it.

## Introduction

If you wish, you can write an introduction. In it, give a brief overview of your program:

• how it works; and

• what it is designed to do.

If you have changed the way the controls work, or added something that may make your program behave unexpectedly, then it would be worth making a note of it here.

This section is particularly relevant to more complicated programs.

The purpose of this section is to describe your program to the reader, both in detail and at a high level.

Talk about what features your program actually has. We know what we asked for (the features in this document!), but what does your program actually let a user do? How does your program work as a whole?

How does it achieve this? Let us know how each individual function works and how they work together to solve particular design goals.

A successful report will demonstrate conceptual understanding of all relevant functions, and depict a holistic view of program structure through discussion of what the program is and how it works.

## Rationale and Reflection

The purpose of this section is to describe to the reader the design decisions you made while writing the program.

Tell us the reasoning behind the choices you detailed above. Tell us the assumptions you made about user behaviour. Why did you solve the problems the way you did? Why did you write the functions you wrote? Did you make any other assumptions?

For example:

“I implemented the checkFirst helper function after reading this blog post (citing the post as a reference), which claimed that users of quadrant-based drawing programs virtually always draw their first shape in the top-right quadrant. Deciding to use this as my base assumption for user behaviour, I decided to save on quadrant-dependent calculation of trigonometric ratios by always assuming the first shape is drawn in this quadrant. This in turn meant I needed a function to check if a shape was the first one drawn.”

This is a critical reflection, not a personal one. You’re explaining the justification and reasoning behind the choices you made.

A successful report will give a thorough explanation of the process followed to reach a final design, including relevant reasoning and assumptions / influences.

In this section, you might also reflect on any conceptual or technical issues you encountered, particularly if you were unable to complete your program. Try to include details such as:

• theories as to what caused the problem;
• suggestions of things that might have fixed it; and
• discussion about what you did try, and the results of these attempts.
## Testing

• Describe the tests that show individual functions behave as expected on their own (e.g. testing a function with different inputs and doing a calculation by hand to check that the outputs are correct).
• How did you test the entire program? What tests did you perform to show that the program behaves as expected in all (even unexpected) cases?
• How did you test the quality of your AI’s play?

A successful report will demonstrate evidence of a process that checked most, if not all, of the relevant parts of the program through testing. Such a report would combine this with some discussion of why these testing results prove or justify program correctness.

A successful report should have excellent structure, writing style, and formatting. Write professionally, use diagrams only where they help, and ensure your report has correct grammar and spelling.

This is a list of suggestions , not requirements. You should only discuss items from this list if you have something interesting to write.

## Things to avoid in a technical report

Line by line explanations of large portions of code. (If you want to include a specific line of code, be sure to format as described in the “Format” section below).

Pictures of code, VSCodium or your IDE.

Content that is not your own, unless cited.

Grammatical errors or misspellings. Proof-read it before submission.

Informal language - a technical report is a professional document, and as such should avoid things such as:

• Unnecessary abbreviations (atm, btw, ps, and so on), emojis, and emoticons; and

• Stories / recounts of events not relevant to the development of the program.

Irrelevant diagrams, graphs, and charts. Unnecessary elements will distract from the important content. Keep it succinct and focused.

If you need additional help with report writing, the academic skills writing centre has a peer writing service and writing coaches.

You are not required to follow any specific style guide (such as APA or Harvard). However, here are some tips which will make your report more pleasant to read, and make more sense to someone with a computer science background.

Colours should be kept minimal; use colour only when it is absolutely necessary.

If you are using graphics, make sure they are vector graphics (that stay sharp even as the reader zooms in on them).

Any code, including type/function/module names or file names, that appears in your document should have a monospaced font (such as Consolas, Courier New, Lucida Console, or Monaco).

Other text should be set in serif fonts (popular choices are Times, Palatino, Sabon, Minion, or Caslon).

When available, automatic ligatures should be activated.

Do not use underlining to highlight your text.

Text should be at least 1.5 spaced.

## Communication

Do not post your code publicly, either on Ed or via other forums. Posts on Ed trigger emails to all students, so if by mistake you post your code publicly, others will have access to your code and you may be held responsible for plagiarism.

Once again, and we cannot stress this enough: do not post your code publicly . If you need help with your code, post it privately to the instructors.

When brainstorming with your friends, do not share code . There might be pressure from your friends, but this is for both your and their benefit. Anything that smells of plagiarism will be investigated and there may be serious consequences.

Sharing ideas and sketches is perfectly fine, but sharing should stop before you risk handing in suspiciously similar solutions.

Course staff will not look at assignment code unless it is posted privately on Ed, or shared in a drop-in consultation.

Course staff will typically give assistance by asking questions, directing you to relevant exercises from the labs, or definitions and examples from the lectures.

Before the assignment is due, course staff will not give individual tips on writing functions for the assignment or how your code can be improved. We will help you get unstuck by asking questions and pointing you to relevant lecture and lab material. You will receive feedback on your work when marks are released.

Start early, and aim to finish the assignment several days before the due date. At least 24 hours before the deadline, you should check that:

• You have re-read the entire assignment specification one final time and covered everything. See the “Overview of Tasks” section to check that you have completed all tasks.

• The latest version of your code has been pushed to GitLab. Confirm this by using your browser to visit https://gitlab.cecs.anu.edu.au/uXXXXXXX/asst3-1100_s2_2023, where XXXXXXX is your student number.

• Ensure your program compiles and runs, including the cabal v2-test test suite, and that you have a green CI tick on your latest commit in GitLab.

• Ensure your submission works on the lab machines. If it does not, it may fail tests used by the instructors.

• Verify that your report is in PDF format, located at the root of your project directory (not in src), and named Report.pdf. That capital R is important - Linux uses a case-sensitive file system. Otherwise, it may not be marked.

• Check that all work, including your report, is submitted, by viewing your assignment repo on GitLab.

We recommend that you do not wait until you are finished to commit and push your work. Commit and push as you work, to reduce the risk of submission errors at the last minute.

## Assignment 3

This assignment is due on Wednesday, May 27 2020 at 11:59pm PDT.

• Option A: Colab starter code
• Option B: Jupyter starter code

## Option A: Google Colaboratory (Recommended)

• Option B: Local Development
• Q1: Image Captioning with Vanilla RNNs (29 points)
• Q2: Image Captioning with LSTMs (23 points)
• Q3: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images (15 points)
• Q4: Style Transfer (15 points)
• Q5: Generative Adversarial Networks (15 points)
• Submitting your work

In this assignment, you will implement recurrent neural networks and apply them to image captioning on the Microsoft COCO data. You will also explore methods for visualizing the features of a pretrained model on ImageNet, and use this model to implement Style Transfer. Finally, you will train a Generative Adversarial Network to generate images that look like a training dataset!

The goals of this assignment are as follows:

• Understand the architecture of recurrent neural networks (RNNs) and how they operate on sequences by sharing weights over time.
• Understand and implement both Vanilla RNNs and Long Short-Term Memory (LSTM) networks.
• Understand how to combine convolutional neural nets and recurrent nets to implement an image captioning system.
• Explore various applications of image gradients, including saliency maps, fooling images, class visualizations.
• Understand and implement techniques for image style transfer.
• Understand how to train and implement a Generative Adversarial Network (GAN) to produce images that resemble samples from a dataset.

You should be able to use your setup from assignments 1 and 2.

You can work on the assignment in one of two ways: remotely on Google Colaboratory or locally on your own machine.

Regardless of the method chosen, ensure you have followed the setup instructions before proceeding.

If you choose to work with Google Colab, please familiarize yourself with the recommended workflow .

Note . Ensure you are periodically saving your notebook ( File -> Save ) so that you don’t lose your progress if you step away from the assignment and the Colab VM disconnects.

Once you have completed all Colab notebooks except collect_submission.ipynb , proceed to the submission instructions .

Install Packages . Once you have the starter code, activate your environment (the one you installed in the Software Setup page) and run pip install -r requirements.txt .

Download data . Next, you will need to download the COCO captioning data, a pretrained SqueezeNet model (for TensorFlow), and a few ImageNet validation images. Run the following from the assignment3 directory:

Start Jupyter Server . After you’ve downloaded the data, you can start the Jupyter server from the assignment3 directory by executing jupyter notebook in your terminal.

Complete each notebook, then once you are done, go to the submission instructions .

You can do Questions 3, 4, and 5 in TensorFlow or PyTorch. There are two versions of each of these notebooks, one for TensorFlow and one for PyTorch. No extra credit will be awarded if you do a question in both TensorFlow and PyTorch.

The notebook RNN_Captioning.ipynb will walk you through the implementation of an image captioning system on MS-COCO using vanilla recurrent networks.

The notebook LSTM_Captioning.ipynb will walk you through the implementation of Long Short-Term Memory (LSTM) RNNs, and apply them to image captioning on MS-COCO.

The notebooks NetworkVisualization-TensorFlow.ipynb and NetworkVisualization-PyTorch.ipynb will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.

In the notebooks StyleTransfer-TensorFlow.ipynb or StyleTransfer-PyTorch.ipynb you will learn how to create images with the content of one image but the style of another. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.

In the notebooks GANS-TensorFlow.ipynb or GANS-PyTorch.ipynb you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.

Important . Please make sure that the submitted notebooks have been run and the cell outputs are visible.

Once you have completed all notebooks and filled out the necessary code, there are two steps you must follow to submit your assignment:

1. If you selected Option A and worked on the assignment in Colab, open collect_submission.ipynb in Colab and execute the notebook cells. If you selected Option B and worked on the assignment locally, run the bash script in assignment3 by executing bash collectSubmission.sh .

This notebook/script will:

• Generate a zip file of your code ( .py and .ipynb ) called a3.zip .
• Convert all notebooks into a single PDF file.

Note for Option B users . You must have (a) nbconvert installed with Pandoc and Tex support and (b) PyPDF2 installed to successfully convert your notebooks to a PDF file. Please follow these installation instructions to install (a) and run pip install PyPDF2 to install (b). If you are, for some inexplicable reason, unable to successfully install the above dependencies, you can manually convert each jupyter notebook to HTML ( File -> Download as -> HTML (.html) ), save the HTML page as a PDF, then concatenate all the PDFs into a single PDF submission using your favorite PDF viewer.

If your submission for this step was successful, you should see the following display message:

2. Submit the PDF and the zip file to Gradescope .

Note for Option A users . Remember to download a3.zip and assignment.pdf locally before submitting to Gradescope.
