## Assignment 4: Word Embeddings


Welcome to the fourth (and last) programming assignment of Course 2!

In this assignment, you will practice how to compute word embeddings and use them for sentiment analysis.

To implement sentiment analysis, you can go beyond counting the number of positive words and negative words.

You can find a way to represent each word numerically, by a vector.

The vector could then represent syntactic (i.e. parts of speech) and semantic (i.e. meaning) structures.

In this assignment, you will explore a classic way of generating word embeddings or representations.

You will implement a famous model called the continuous bag of words (CBOW) model.

By completing this assignment you will:

Train word vectors from scratch.

Learn how to create batches of data.

Understand how backpropagation works.

Plot and visualize your learned word vectors.

Knowing how to train these models will give you a better understanding of word vectors, which are building blocks to many applications in natural language processing.

## Important Note on Submission to the AutoGrader #

You have not added any extra print statement(s) in the assignment.

You have not added any extra code cell(s) in the assignment.

You have not changed any of the function parameters.

You are not using any global variables inside your graded exercises. Unless specifically instructed to do so, please refrain from it and use the local variables instead.

You are not changing the assignment code where it is not required, like creating extra variables.

If you have done any of the above, you will get something like a Grader not found (or similarly unexpected) error upon submitting your assignment. Before asking for help with debugging errors in your assignment, check for these first. If this is the case and you don't remember the changes you have made, you can get a fresh copy of the assignment by following these instructions.

• 1 The Continuous Bag of Words Model
• 2 Training the Model
• 2.0 Initialize the Model (Exercise 01)
• 2.1 Softmax Function (Exercise 02)
• 2.2 Forward Propagation (Exercise 03)
• 2.3 Cost Function
• 2.4 Backpropagation (Exercises 04 and 05)
• 3 Visualizing the Word Vectors

## 1. The Continuous bag of words model #

Let’s take a look at the following sentence:

‘I am happy because I am learning’ .

In continuous bag of words (CBOW) modeling, we try to predict the center word given a few context words (the words around the center word).

For example, if you were to choose a context half-size of say $$C = 2$$ , then you would try to predict the word happy given the context that includes 2 words before and 2 words after the center word:

$$C$$ words before: [I, am]
$$C$$ words after: [because, I]

In other words, the input to your model is $$\bar x$$, the average of all the one-hot vectors of the context words.

Once you have encoded all the context words, you can use $$\bar x$$ as the input to your model.
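As a concrete sketch of this encoding step, the snippet below averages the one-hot vectors of the context words into $$\bar x$$ (the `word2Ind` mapping and `context_to_x_bar` name here are illustrative, not the assignment's helpers):

```python
import numpy as np

# Toy vocabulary for the example sentence; a hypothetical word->index map.
word2Ind = {'am': 0, 'because': 1, 'happy': 2, 'i': 3, 'learning': 4}
V = len(word2Ind)

def context_to_x_bar(context_words, word2Ind, V):
    """Average the one-hot vectors of the context words into x_bar."""
    x_bar = np.zeros((V, 1))
    for word in context_words:
        one_hot = np.zeros((V, 1))
        one_hot[word2Ind[word]] = 1   # one-hot encode this context word
        x_bar += one_hot
    return x_bar / len(context_words)

# Context for the center word 'happy' with C = 2: [I, am] + [because, I]
x_bar = context_to_x_bar(['i', 'am', 'because', 'i'], word2Ind, V)
```

Note that $$\bar x$$ sums to 1 and puts weight 0.5 on 'i' here, since it appears twice among the four context words.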

The architecture you will be implementing is shown in Figure 1.

## Mapping words to indices and indices to words #

We provide a helper function to create a dictionary that maps words to indices and indices to words.
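The helper might look roughly like the following sketch (the provided function's exact name, signature, and tie-breaking may differ):

```python
# A hypothetical sketch of the provided helper: build word->index and
# index->word dictionaries from a tokenized corpus, sorting the words
# so indices are deterministic.
def get_dict(tokens):
    words = sorted(set(tokens))
    word2Ind = {word: i for i, word in enumerate(words)}
    Ind2word = {i: word for word, i in word2Ind.items()}
    return word2Ind, Ind2word

word2Ind, Ind2word = get_dict(['i', 'am', 'happy', 'because', 'i', 'am', 'learning'])
```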

## 2 Training the Model #

## 2.0 Initializing the Model #

You will now initialize two matrices and two vectors.

The first matrix ( $$W_1$$ ) is of dimension $$N \times V$$ , where $$V$$ is the number of words in your vocabulary and $$N$$ is the dimension of your word vector.

The second matrix ( $$W_2$$ ) is of dimension $$V \times N$$ .

Vector $$b_1$$ has dimensions $$N\times 1$$

Vector $$b_2$$ has dimensions $$V\times 1$$ .

$$b_1$$ and $$b_2$$ are the bias vectors of the linear layers from matrices $$W_1$$ and $$W_2$$ .

The overall structure of the model will look as in Figure 1, but at this stage we are just initializing the parameters.

## Exercise 01 #

Please use numpy.random.rand to generate matrices that are initialized with random values from a uniform distribution, ranging between 0 and 1.

Note: In the next cell you will encounter a random seed. Please DO NOT modify this seed so your solution can be tested correctly.
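A minimal sketch of this initialization, under the shapes stated above (the graded function's exact argument order and seed handling may differ):

```python
import numpy as np

# Initialize the two weight matrices and two bias vectors with
# uniform random values in [0, 1), using the shapes described above.
def initialize_model(N, V, random_seed=1):
    np.random.seed(random_seed)
    W1 = np.random.rand(N, V)   # (N, V)
    W2 = np.random.rand(V, N)   # (V, N)
    b1 = np.random.rand(N, 1)   # (N, 1)
    b2 = np.random.rand(V, 1)   # (V, 1)
    return W1, W2, b1, b2

W1, W2, b1, b2 = initialize_model(N=4, V=10)
```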

## Expected Output #

## 2.1 Softmax #

Before we can start training the model, we need to implement the softmax function as defined in equation 5:

$$\hat{y}_i = \frac{e^{z_i}}{\sum_{j=0}^{V-1} e^{z_j}} \tag{5}$$

Array indexing in code starts at 0.

$$V$$ is the number of words in the vocabulary (which is also the number of rows of $$z$$ ).

$$i$$ goes from 0 to |V| - 1.

## Exercise 02 #

Instructions : Implement the softmax function below.

Assume that the input $$z$$ to softmax is a 2D array

Each training example is represented by a vector of shape (V, 1) in this 2D array.

There may be more than one column, in the 2D array, because you can put in a batch of examples to increase efficiency. Let’s call the batch size lowercase $$m$$ , so the $$z$$ array has shape (V, m)

When taking the sum from $$i=0 \cdots V-1$$, take the sum for each column (each example) separately.

numpy.sum (set the axis so that you take the sum of each column in z)
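A possible implementation following these hints (the max-subtraction is a standard numerical-stability trick that does not change the result of equation 5, though the graded version may omit it):

```python
import numpy as np

# Column-wise softmax for a (V, m) batch of logits. Subtracting the
# per-column max keeps exp() from overflowing without changing the output.
def softmax(z):
    z = z - np.max(z, axis=0, keepdims=True)
    e_z = np.exp(z)
    return e_z / np.sum(e_z, axis=0, keepdims=True)  # sum over each column

yhat = softmax(np.array([[1.0, 2.0], [1.0, 2.0], [1.0, 5.0]]))
```

Each column of `yhat` sums to 1; the first column is uniform because its logits are equal.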

## Expected Output #

## 2.2 Forward Propagation #

## Exercise 03 #

Implement forward propagation to compute $$z$$ according to equations (1) to (3).

For that, you will use the Rectified Linear Unit (ReLU) activation, given by:

$$f(h) = \max(0, h)$$

• You can use numpy.maximum(x1,x2) to get the maximum of two values
• Use numpy.dot(A,B) to matrix multiply A and B

## Expected output #

## 2.3 Cost Function #

We have implemented the cross-entropy cost function for you.
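For reference, the provided function computes something along these lines: the standard batch cross-entropy between the one-hot targets and the softmax outputs, averaged over the batch (the provided implementation's exact form may differ):

```python
import numpy as np

# Batch cross-entropy: y and yhat are (V, m); y has one-hot columns.
# The cost averages -log(probability of the true word) over m examples.
def compute_cost(y, yhat, batch_size):
    logprobs = np.multiply(np.log(yhat), y)
    cost = -1 / batch_size * np.sum(logprobs)
    return np.squeeze(cost)

# One example, two words: the true word gets probability 0.5,
# so the cost is -log(0.5) ≈ 0.6931.
cost = compute_cost(np.array([[1.0], [0.0]]), np.array([[0.5], [0.5]]), batch_size=1)
```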

## 2.4 Training the Model - Backpropagation #

## Exercise 04 #

Now that you have understood how the CBOW model works, you will train it. You created a function for the forward propagation. Now you will implement a function that computes the gradients to backpropagate the errors.
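A sketch of those gradients, derived from the forward pass $$z = W_2\,\mathrm{ReLU}(W_1 x + b_1) + b_2$$ with softmax cross-entropy (the graded back_prop's signature and return order may differ):

```python
import numpy as np

# Gradients for the CBOW layers. l1 is the error backpropagated
# through the output layer and then through the ReLU.
def back_prop(x, yhat, y, h, W1, W2, b1, b2, batch_size):
    l1 = np.dot(W2.T, yhat - y)  # backprop through the output layer
    l1[h <= 0] = 0               # backprop through the ReLU
    grad_W1 = np.dot(l1, x.T) / batch_size
    grad_W2 = np.dot(yhat - y, h.T) / batch_size
    grad_b1 = np.sum(l1, axis=1, keepdims=True) / batch_size
    grad_b2 = np.sum(yhat - y, axis=1, keepdims=True) / batch_size
    return grad_W1, grad_W2, grad_b1, grad_b2

# Toy shapes: V = 10, N = 3, m = 4
np.random.seed(1)
x, y = np.random.rand(10, 4), np.eye(10)[:, :4]
W1, W2 = np.random.rand(3, 10), np.random.rand(10, 3)
b1, b2 = np.random.rand(3, 1), np.random.rand(10, 1)
h = np.maximum(0, W1 @ x + b1)
e = np.exp(W2 @ h + b2)
yhat = e / e.sum(axis=0, keepdims=True)
grad_W1, grad_W2, grad_b1, grad_b2 = back_prop(x, yhat, y, h, W1, W2, b1, b2, 4)
```

Each gradient has the same shape as the parameter it updates, which is a useful sanity check before plugging it into gradient descent.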

## Exercise 05 #

Now that you have implemented a function to compute the gradients, you will implement batch gradient descent over your training set.

Hint: For that, you will use initialize_model and the back_prop functions which you just created (and the compute_cost function). You can also use the provided get_batches helper function:

```python
for x, y in get_batches(data, word2Ind, V, C, batch_size):
```

Also: print the cost after each batch is processed (use a batch size of 128).

Your numbers may differ a bit depending on which version of Python you’re using.
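To see how the pieces fit together, here is a self-contained sketch of a single batch gradient-descent step on random stand-in data (all names here are local to this sketch, not the graded functions, and the learning rate is an arbitrary choice):

```python
import numpy as np

np.random.seed(1)
V, N, m, alpha = 10, 4, 128, 0.01

W1, W2 = np.random.rand(N, V), np.random.rand(V, N)
b1, b2 = np.random.rand(N, 1), np.random.rand(V, 1)
x = np.random.rand(V, m)                      # averaged context vectors
y = np.eye(V)[:, np.random.randint(0, V, m)]  # one-hot center words

def cost_fn(W1, W2, b1, b2):
    """Forward pass + stabilized softmax + batch cross-entropy."""
    h = np.maximum(0, W1 @ x + b1)
    e = np.exp(W2 @ h + b2 - (W2 @ h + b2).max(axis=0))
    yhat = e / e.sum(axis=0)
    return h, yhat, -np.sum(y * np.log(yhat)) / m

h, yhat, cost_before = cost_fn(W1, W2, b1, b2)

# One gradient-descent step on all four parameters.
l1 = W2.T @ (yhat - y)
l1[h <= 0] = 0
W1 -= alpha * (l1 @ x.T) / m
W2 -= alpha * ((yhat - y) @ h.T) / m
b1 -= alpha * l1.sum(axis=1, keepdims=True) / m
b2 -= alpha * (yhat - y).sum(axis=1, keepdims=True) / m

_, _, cost_after = cost_fn(W1, W2, b1, b2)   # the cost should go down
```

In the real exercise, this step runs once per batch yielded by get_batches, with the cost printed as you go.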

## 3.0 Visualizing the word vectors #

In this part you will visualize the word vectors trained using the function you just coded above.

You can see that man and king are next to each other. However, we have to be careful with the interpretation of these projected word vectors, since the result depends on the PCA projection, as shown in the following illustration.
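A sketch of the PCA projection used for this kind of visualization; the embedding matrix below is random stand-in data (in the assignment, the final embeddings are typically derived from W1, e.g. by averaging W1 and W2ᵀ, but treat that as an assumption here):

```python
import numpy as np

# Project row-vector embeddings onto their first two principal components.
def compute_pca(X, n_components=2):
    X_centered = X - X.mean(axis=0)
    # The right singular vectors of the centered data are the principal axes.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

np.random.seed(0)
embeddings = np.random.rand(5, 50)   # 5 words, 50-dimensional vectors
result = compute_pca(embeddings, 2)  # (5, 2), ready for a 2D scatter plot
```

With matplotlib you would then scatter `result[:, 0]` against `result[:, 1]` and annotate each point with its word.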

## Assignment 4

From this assignment forward, you will use autograd in PyTorch to perform backpropagation for you. This will enable you to easily build complex models without worrying about writing code for the backward pass by hand.

The goals of this assignment are:

• See how to use PyTorch Modules to build up complex neural network architectures
• Understand and implement recurrent neural networks
• See how recurrent neural networks can be used for image captioning
• Understand how to augment recurrent neural networks with attention
• Use image gradients to synthesize saliency maps, adversarial examples, and perform class visualizations
• Combine content and style losses to perform artistic style transfer

This assignment is due on Friday, October 30 at 11:59pm EDT .

## Q1: PyTorch Autograd (30 points)

The notebook pytorch_autograd_and_nn.ipynb will introduce you to the different levels of abstraction that PyTorch provides for building neural network models. You will use this knowledge to implement and train Residual Networks for image classification.

## Q2: Image Captioning with Recurrent Neural Networks (40 points)

The notebook rnn_lstm_attention_captioning.ipynb will walk you through the implementation of vanilla recurrent neural networks (RNN) and Long Short Term Memory (LSTM) RNNs. You will use these networks to train an image captioning model. You will then augment your implementation to perform spatial attention over image regions while generating captions.

## Q3: Network Visualization (15 points)

The notebook network_visualization.ipynb will walk you through the use of image gradients for generating saliency maps, adversarial examples, and class visualizations.

## Q4: Style Transfer (15 points)

In the notebook style_transfer.ipynb , you will learn how to create images with the artistic style of one image and the content of another image.

## 3. Work on the assignment

Work through the notebook, executing cells and writing code in *.py, as indicated. You can save your work, both *.ipynb and *.py, in Google Drive (click “File” -> “Save”) and resume later if you don’t want to complete it all at once.

While working on the assignment, keep the following in mind:

• The notebook and the python file have clearly marked blocks where you are expected to write code. Do not write or modify any code outside of these blocks .
• Do not add or delete cells from the notebook . You may add new cells to perform scratch computations, but you should delete them before submitting your work.
• Run all cells, and do not clear out the outputs, before submitting. You will only get credit for code that has been run.

When you want to evaluate your implementation, submit the *.py, *.ipynb, and other required files to the Autograder, either partway through the assignment or after implementing everything. You can grade some of the files partway through, but keep in mind that every submission counts against the daily submission quota. Please check our Autograder tutorial for details.

Once you have completed the notebooks, download the completed uniqueid_umid_A4.zip file, which is generated by the last cell of style_transfer.ipynb. Before executing that last cell, manually run all the cells of the notebooks and save your results so that the zip file includes all updates. The zip file should contain:

• rnn_lstm_attention_captioning.ipynb
• network_visualization.ipynb
• style_transfer.ipynb
• rnn_lstm_attention_captioning.py
• network_visualization.py
• style_transfer.py
• rnn_lstm_attention_submission.pkl
• saliency_maps_results.jpg
• class_viz_result.jpg
• style_transfer_result.jpg
• feature_inversion_result.jpg

## Deep-Learning-Specialization-Coursera

This repo contains the updated versions of all the assignments/labs (done by me) of the Deep Learning Specialization on Coursera by Andrew Ng [updated version 2021]. It includes building various deep learning models from scratch and implementing them for object detection, facial recognition, autonomous driving, neural machine translation, trigger word detection, etc.

## Announcement

[!IMPORTANT] Check our latest paper (accepted in ICDAR’23) on Urdu OCR

This repo contains all of the solved assignments of Coursera’s most famous Deep Learning Specialization of 5 courses offered by deeplearning.ai

Instructor: Prof. Andrew Ng

This Specialization was updated in April 2021 to include developments in deep learning and programming frameworks. One of the biggest changes was the shift from TensorFlow 1 to TensorFlow 2. New materials were also added. However, most of the old online repositories still don't have the updated code. This repo contains updated versions of the assignments. Happy Learning :)

## Programming Assignments

Course 1: Neural Networks and Deep Learning

• W2A1 - Logistic Regression with a Neural Network mindset
• W2A2 - Python Basics with Numpy
• W3A1 - Planar data classification with one hidden layer
• W4A1 - Building your Deep Neural Network: Step by Step
• W4A2 - Deep Neural Network for Image Classification: Application

Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

• W1A1 - Initialization
• W1A2 - Regularization
• W2A1 - Optimization Methods
• W3A1 - Introduction to TensorFlow

Course 3: Structuring Machine Learning Projects

• There were no programming assignments in this course. It was completely theoretical.
• Here is a link to the course

Course 4: Convolutional Neural Networks

• W1A1 - Convolutional Model: step by step
• W1A2 - Convolutional Model: application
• W2A1 - Residual Networks
• W2A2 - Transfer Learning with MobileNet
• W3A1 - Autonomous Driving - Car Detection
• W3A2 - Image Segmentation - U-net
• W4A1 - Face Recognition
• W4A2 - Neural Style transfer

Course 5: Sequence Models

• W1A1 - Building a Recurrent Neural Network - Step by Step
• W1A2 - Character level language model - Dinosaurus land
• W1A3 - Improvise A Jazz Solo with an LSTM Network
• W2A1 - Operations on word vectors
• W2A2 - Emojify
• W3A1 - Neural Machine Translation With Attention
• W3A2 - Trigger Word Detection
• W4A1 - Transformer Network
• W4A2 - Named Entity Recognition - Transformer Application
• W4A3 - Extractive Question Answering - Transformer Application

I've uploaded these solutions here only to help those who get stuck somewhere; it may save them some time. I strongly recommend that you do not directly copy any part of the code (from here or anywhere else) while doing the assignments of this specialization. The assignments are fairly easy, and one learns a great deal by doing them. Thanks to the deeplearning.ai team for giving this treasure to us.

## Connect with me

Name: Abdur Rahman

Institution: Indian Institute of Technology Delhi

Find me on:

## Deep-Learning-Specialization

Coursera Deep Learning Specialization: Neural Networks and Deep Learning.

In this course, you will learn the foundations of deep learning. When you finish this class, you will:

• Understand the major technology trends driving Deep Learning.
• Be able to build, train and apply fully connected deep neural networks.
• Know how to implement efficient (vectorized) neural networks.
• Understand the key parameters in a neural network’s architecture.

## Week 1: Introduction to deep learning

Be able to explain the major trends driving the rise of deep learning, and understand where and how it is applied today.

• Quiz 1: Introduction to deep learning

## Week 2: Neural Networks Basics

Learn to set up a machine learning problem with a neural network mindset. Learn to use vectorization to speed up your models.

• Quiz 2: Neural Network Basics
• Programming Assignment: Python Basics With Numpy
• Programming Assignment: Logistic Regression with a Neural Network mindset

## Week 3: Shallow neural networks

Learn to build a neural network with one hidden layer, using forward propagation and backpropagation.

• Quiz 3: Shallow Neural Networks
• Programming Assignment: Planar Data Classification with One Hidden Layer

## Week 4: Deep Neural Networks

Understand the key computations underlying deep learning, use them to build and train deep neural networks, and apply it to computer vision.

• Quiz 4: Key concepts on Deep Neural Networks
• Programming Assignment: Building your Deep Neural Network Step by Step
• Programming Assignment: Deep Neural Network Application

## Course Certificate


## Coursera: Neural Networks and Deep Learning (Week 4A) [Assignment Solution] - deeplearning.ai

• Initialize the parameters for a two-layer network and for an  L -layer neural network.
• Complete the LINEAR part of a layer's forward propagation step (resulting in  Z [ l ] ).
• We give you the ACTIVATION function (relu/sigmoid).
• Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
• Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer L). This gives you a new L_model_forward function.
• Compute the loss.
• Complete the LINEAR part of a layer's backward propagation step.
• We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
• Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
• Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
• Finally update the parameters.

## 3 - Initialization

## 3.1 - 2-layer Neural Network

• The model's structure is:  LINEAR -> RELU -> LINEAR -> SIGMOID .
• Use random initialization for the weight matrices. Use  np.random.randn(shape)*0.01  with the correct shape.
• Use zero initialization for the biases. Use  np.zeros(shape) .

## 3.2 - L-layer Neural Network

• The model's structure is: [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID. I.e., it has $$L-1$$ layers using a ReLU activation function, followed by an output layer with a sigmoid activation function.
• Use random initialization for the weight matrices. Use  np.random.randn(shape) * 0.01 .
• Use zeros initialization for the biases. Use  np.zeros(shape) .
• We will store $$n^{[l]}$$, the number of units in the different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: there were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $$L$$ layers!
• Here is the implementation for $$L = 1$$ (a one-layer neural network). It should inspire you to implement the general case (an L-layer neural network).
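The generalization can be sketched as follows, driven by layer_dims and following the scheme above (small random weights, zero biases):

```python
import numpy as np

# Initialize W1..W(L-1) and b1..b(L-1) for an L-layer network from
# layer_dims, e.g. [2, 4, 1] for the two-layer example above.
def initialize_parameters_deep(layer_dims):
    parameters = {}
    L = len(layer_dims)  # number of layers, counting the input layer
    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

params = initialize_parameters_deep([2, 4, 1])  # the [2,4,1] example above
```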

## 4 - Forward propagation module

## 4.1 - Linear Forward

• LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
• [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID (whole model)

Expected output: `[[ 3.26295337 -1.23429987]]`

## 4.2 - Linear-Activation Forward

• Sigmoid : $$\sigma(Z) = \sigma(WA + b) = \frac{1}{1 + e^{-(WA + b)}}$$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call: A, activation_cache = sigmoid(Z)
• ReLU : The mathematical formula for ReLU is $$A = RELU(Z) = \max(0, Z)$$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call: A, activation_cache = relu(Z)

Expected output: `[[ 0.96890023 0.11013289]]` and `[[ 3.43896131 0. ]]`

## d) L-Layer Model

• Use the functions you had previously written
• Use a for loop to replicate [LINEAR->RELU] (L-1) times
• Don't forget to keep track of the caches in the "caches" list. To add a new value  c  to a  list , you can use  list.append(c) .
Expected output: `AL = [[ 0.03921668 0.70498921 0.19734387 0.04728177]]` with a caches list of length 3.

## 5 - Cost function

Expected output: `cost = 0.414932`

## 6 - Backward propagation module

• LINEAR backward
• LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
• [LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID backward (whole model)
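The LINEAR backward step can be sketched as below, using the shape conventions above (the graded function unpacks the same cache that linear_forward stored):

```python
import numpy as np

# Given dZ for layer l and the cached (A_prev, W, b), recover the
# gradients dW, db, and dA_prev for the previous layer.
def linear_backward(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]                          # number of examples
    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db

# Toy shapes: 3 units in layer l-1, 4 units in layer l, 2 examples
np.random.seed(1)
A_prev, W, b = np.random.randn(3, 2), np.random.randn(4, 3), np.random.randn(4, 1)
dZ = np.random.randn(4, 2)
dA_prev, dW, db = linear_backward(dZ, (A_prev, W, b))
```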

## 6.2 - Linear-Activation backward

• sigmoid_backward : Implements the backward propagation for SIGMOID unit. You can call it as follows:
• relu_backward : Implements the backward propagation for RELU unit. You can call it as follows:

## 6.3 - L-Model Backward

```
dW1 = [[ 0.41010002  0.07807203  0.13798444  0.10502167]
       [ 0.          0.          0.          0.        ]
       [ 0.05283652  0.01005865  0.01777766  0.0135308 ]]
db1 = [[-0.22007063]
       [ 0.        ]
       [-0.02835349]]
dA1 = [[ 0.12913162 -0.44014127]
       [-0.14175655  0.48317296]
       [ 0.01663708 -0.05670698]]
```

## 7 - Conclusion

• A two-layer neural network
• An L-layer neural network

Hi bro... I was working on the Week 4 assignment and I am getting an assertion error in the compute_cost function, but the same function works for the L-layer model. Help me with this:

```
AssertionError                            Traceback (most recent call last)
in ()
----> 1 parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost= True)

in two_layer_model(X, Y, layers_dims, learning_rate, num_iterations, print_cost)
     46     # Compute cost
     47     ### START CODE HERE ### (≈ 1 line of code)
---> 48     cost = compute_cost(A2, Y)
     49     ### END CODE HERE ###

/home/jovyan/work/Week 4/Deep Neural Network Application: Image Classification/dnn_app_utils_v3.py in compute_cost(AL, Y)
    265
    266     cost = np.squeeze(cost)   # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
--> 267     assert(cost.shape == ())
    268
    269     return cost

AssertionError:
```

Hey, I am facing a problem in the linear activation forward function of the Week 4 assignment, Building Deep Neural Network. I think I have implemented it correctly and the output matches the expected one. I also cross-checked it with your solution and both were the same. But the grader marks it, and all the functions in which it is called, as incorrect. I am unable to find any error in the code, as it was straightforward and I used the built-in SIGMOID and RELU functions. Please guide.

Hi bro, I am always getting the grading error although I am getting the correct output for all of them.

## Winter 2018

Introduction to Deep Learning

## Assignment 4: tf.Estimator & Assorted Programming Puzzles

Same submission options as last time.

## High-level Modeling in Tensorflow

You have hopefully seen that working with neural networks becomes more comfortable as you move from low- to higher-level interfaces. However, if you have been following the tutorials so far, you have been defining things such as training loops, evaluation procedures etc. yourself. This is rather annoying – compare this to libraries such as scikit-learn where such things can usually be done in a single line of code.

Luckily, Tensorflow also comes with similar high-level interfaces. In this assignment, we will be having a look at Tensorflow’s own Estimator class. Unfortunately, you will need to do quite a bit of extra reading again to get an overview. You can have a look at some or all of the following docs:

• An extremely high-level overview
• A tutorial using pre-built models for the well-known Iris classification task.
• The tutorial on checkpoints gives a quick overview over saving and restoring trained models.
• There is another tf.data overview .
• The last introductory tutorial shows how to build your own models for the Estimator API.
• Finally, this tutorial walks you through building a CNN for the MNIST task, allowing you to place the Estimator API in context with the methods you used in previous assignments. This is likely the most interesting/complete of these tutorials.

Note that the above tutorials mention “feature columns” quite a lot; feel free to ignore these beyond what is needed to follow along since we won’t be needing them anytime soon (they are not needed for “custom” estimators and are most interesting for categorical input data).

Build a functioning CNN using the Estimator interface, supporting all of training, evaluation and prediction on new inputs. You should have a grasp on these core components of building models with tf.Estimator :

• Build an input function, preferably using tf.data .
• Build a model function that can operate in train , predict and evaluate modes, usually based on tf.layers functions.
• Set up hooks as desired and run the model in the appropriate mode.

Once again, if you haven’t done so, use the Fashion-MNIST dataset instead for more of a challenge (or feel free to train models on other datasets). With the more convenient Estimator interface, experimenting with different models/ hyperparameters is hopefully more comfortable. Try to achieve the best results you can!

Some Estimator fun facts:

• A tf.summary.merge_all() op is automatically set up. No need to include this in your model functions when using Tensorboard.
• Summaries are saved every 100 steps by default, but you can change this if you want. They are saved in the same directory as the checkpoints are stored (the model_dir ).
• A tf.summary.scalar is automatically set up for the loss in training mode.
• Similarly, a logging hook is set up for the loss when training, and a streaming metric when evaluating. No need to include these yourself.
• Never forget: tf.logging.set_verbosity(tf.logging.INFO) or else the estimator won’t talk to you.

## Exploring Tensorflow

Following all those tutorials can be boring, so we will now be focusing on getting to know the Tensorflow core a little more. In the long run, knowing what tools you have available will allow you to write better/shorter/faster programs when you go beyond the straightforward models we’ve been looking at. Below you will find several (mostly disconnected) small programming tasks to be solved using Tensorflow functions. Most of these can be solved in just a few lines of code, but you will need to find the right tools first. The API docs will be indispensable here. Note: Below we are sometimes linking to the 1.10 API because for some reason the “API Guides” grouping functions by their usage etc. have been removed in the most recent API versions. If this leads to any issues, you might want to look at the most recent API version of the function you are interested in after you found it using the API Guides in the old version. Since the API can be a bit overwhelming at first, hints are included for each task.

• Given a 2D tensor of shape (?, n) , extract the k (k <= n) highest values for each row into a tensor of shape (?, k) . Hint: There might be a function to get the “top k” values of a tensor.
• As above, but instead of “extracting” the top k values, create a new tensor with shape (?, n) where all but the top k values for each row are zero. Try doing this with a 1D tensor of shape (n,) (i.e. one row) first. Getting it right for a 2D tensor is more tricky; consider this a bonus. Hint: You should look for a way to “scatter” a tensor of values into a different tensor.
• Implement an exponential moving average. That is, given a decay rate a and an input tensor of length T , create a new length T tensor where new[0] = input[0] and new[t] = a * new[t-1] + (1-a) * input[t] otherwise. Do not use tf.train.ExponentialMovingAverage . Hint: You might want to have a look at higher order functions to simulate a loop over the input. Alternatively, with the full input already being available you might be able to find a way to do this without looping. Do not use Python loops!
• Given three integer tensors x, y, z all of the same (arbitrary) shape, create a new tensor that takes values from y where x is even and from z where x is odd. Hint: An op from Sequence Comparison and Indexing could help.
• Given a tensor of arbitrary and unknown shape (but at least one dimension), return 100 if the last dimension has size > 100, 12 if the last dimension has size <= 100 and > 44, and return 0 otherwise. Hint: You will need some Control Flow Operation for this.
• Given two 1D tensors of equal length n , create a tensor of shape (n, n) where element i,j is the ith element of the first tensor plus the jth element of the second tensor. No loops! Hint: Tensorflow supports broadcasting much like numpy.
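For the moving-average task, it helps to pin down the semantics with a plain NumPy reference implementation; a TF solution (e.g. with a higher-order function like tf.scan) should match this output. The Python-side loop here is only the reference spec, not the TF solution, which must avoid Python loops:

```python
import numpy as np

# Reference EMA: new[0] = input[0], new[t] = a*new[t-1] + (1-a)*input[t].
def ema_reference(values, a):
    out = np.empty_like(values, dtype=float)
    out[0] = values[0]
    for t in range(1, len(values)):
        out[t] = a * out[t - 1] + (1 - a) * values[t]
    return out

ema_vals = ema_reference(np.array([1.0, 2.0, 3.0, 4.0]), a=0.5)
# → [1.0, 1.5, 2.25, 3.125]
```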


Programming assignments from all courses in the Coursera Natural Language Processing Specialization offered by deeplearning.ai.


This repo contains my work for this specialization. The code base, quiz questions and diagrams are taken from the Natural Language Processing Specialization , unless specified otherwise.

The Natural Language Processing Specialization on Coursera contains four courses:

## Course 1: Natural Language Processing with Classification and Vector Spaces

## Course 2: Natural Language Processing with Probabilistic Models

## Course 3: Natural Language Processing with Sequence Models

## Course 4: Natural Language Processing with Attention Models

## Specialization Info

Natural Language Processing (NLP) uses algorithms to understand and manipulate human language. This technology is one of the most broadly applied areas of machine learning. As AI continues to expand, so will the demand for professionals skilled at building models that analyze speech and language, uncover contextual patterns, and produce insights from text and audio.

By the end of this specialization, you will be ready to design NLP applications that perform question-answering and sentiment analysis, create tools to translate languages and summarize text, and even build chatbots. These and other NLP applications are going to be at the forefront of the coming transformation to an AI-powered future.

This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of Tensorflow, the Tensor2Tensor and Trax libraries, and the Transformer paper.

## Topics Covered

This Specialization will equip you with the state-of-the-art deep learning techniques needed to build cutting-edge NLP systems:

Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies, and translate words, and use locality sensitive hashing for approximate nearest neighbors.

Use dynamic programming, hidden Markov models, and word embeddings to autocorrect misspelled words, autocomplete partial sentences, and identify part-of-speech tags for words.

Use dense and recurrent neural networks, LSTMs, GRUs, and Siamese networks in TensorFlow and Trax to perform advanced sentiment analysis, text generation, named entity recognition, and to identify duplicate questions.

Use encoder-decoder, causal, and self-attention to perform advanced machine translation of complete sentences, text summarization, question-answering and to build chatbots. Models covered include T5, BERT, transformer, reformer, and more! Enjoy!

## Programming Assignments

• Sentiment Analysis with Logistic Regression
• Natural language Preprocessing
• Visualizing word frequencies
• Visualizing tweets and Logistic Regression models
• Naive Bayes
• Visualizing likelihoods and confidence ellipses
• Word Embeddings: Hello Vectors
• Linear algebra in Python with Numpy
• Manipulating word embeddings
• Word Translation
• Rotation matrices in R2
• Hash tables
• Autocorrect
• Building the vocabulary
• Candidates from edits
• Part of Speech Tagging
• Working with text data
• Working with tags and NumPy
• Autocomplete
• Corpus preprocessing for N-grams
• Building the language model
• Language model generalization
• Word Embeddings
• Data Preparation
• Intro to CBOW model
• Training the CBOW model
• Word Embeddings Step by Step
• Sentiment with Deep Neural Networks
• Introduction to Trax
• Classes and Subclasses
• Data Generators
• Deep N-grams
• Hidden State Activation
• Working with JAX NumPy and Calculating Perplexity
• Vanilla RNNs, GRUs and the scan function
• Creating a GRU model using Trax
• Named Entity Recognition (NER)
• Question duplicates
• Creating a Siamese Model using Trax
• Modified Triplet Loss
• Evaluate a Siamese Model
• NMT with Attention
• Stack Semantics
• Transformer Summarizer
• The Transformer Decoder
• SentencePiece and BPE
• Reformer LSH
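
Since the assignment this post covers trains the CBOW model, here is a compressed sketch of one training step: average the one-hot context vectors, pass them through a ReLU hidden layer whose width is the embedding size, apply softmax over the vocabulary, and backpropagate the cross-entropy loss. The `cbow_step` helper and the tiny shapes are my own simplification of the assignment's batched version:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def cbow_step(x, y, W1, b1, W2, b2, lr):
    # Forward: averaged context -> ReLU hidden layer -> softmax over vocab.
    h = np.maximum(0, W1 @ x + b1)
    yhat = softmax(W2 @ h + b2)
    # Backward pass for the cross-entropy loss.
    l1 = W2.T @ (yhat - y)
    l1[h <= 0] = 0                       # gradient blocked where ReLU was inactive
    W2 -= lr * ((yhat - y) @ h.T)
    b2 -= lr * (yhat - y)
    W1 -= lr * (l1 @ x.T)
    b1 -= lr * l1
    return -float(np.sum(y * np.log(yhat + 1e-12)))   # cross-entropy loss

V, N = 6, 4                              # tiny vocabulary, embedding size
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((N, V)), rng.standard_normal((V, N))
b1, b2 = np.zeros((N, 1)), np.zeros((V, 1))
x = np.full((V, 1), 1 / V)               # average of one-hot context words
y = np.zeros((V, 1)); y[2] = 1           # one-hot center word
losses = [cbow_step(x, y, W1, b1, W2, b2, lr=0.05) for _ in range(50)]
print(losses[-1] < losses[0])            # loss shrinks on this toy example
```

After training, the columns of W1 (or the average of W1 and W2ᵀ, as the assignment suggests) serve as the word embeddings.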

I recognize the time and effort people spend building intuition, understanding new concepts, and debugging assignments. The solutions uploaded here are for reference only; they are meant to unblock you if you get stuck somewhere. Please do not copy any part of the code as-is (the programming assignments are fairly straightforward if you read the instructions carefully). Similarly, attempt the quizzes yourself before referring to the quiz solutions.

## Contributors (3)

## Languages

• Jupyter Notebook 98.4%
• Python 1.6%
