5. Sequence Models: Slides

5.1 Week 1 - Recurrent Neural Networks
  5.1.1 Why Sequence Models?
  5.1.2 Notation
  5.1.3 Recurrent Neural Network Model
  5.1.4 Backpropagation Through Time
  5.1.5 Different Types of RNNs
  5.1.6 Language Model and Sequence Generation
  5.1.7 Sampling Novel Sequences
  5.1.8 Vanishing Gradients with RNNs
  5.1.9 Gated Recurrent Unit (GRU)
  5.1.10 Long Short Term Memory (LSTM)
  5.1.11 Bidirectional RNN
  5.1.12 Deep RNNs

5.2 Week 2 - Introduction to Word Embeddings / Learning Word Embeddings: Word2vec & GloVe / Applications Using Word Embeddings
  5.2.1 Word Representation
  5.2.2 Using Word Embeddings
  5.2.3 Properties of Word Embeddings
  5.2.4 Embedding Matrix
  5.2.5 Learning Word Embeddings
  5.2.6 Word2Vec
  5.2.7 Negative Sampling
  5.2.8 GloVe Word Vectors
  5.2.9 Sentiment Classification
  5.2.10 Debiasing Word Embeddings

5.3 Week 3 - Various Sequence To Sequence Architectures / Speech Recognition
  5.3.1 Basic Models
  5.3.2 Picking the Most Likely Sentence
  5.3.3 Beam Search
  5.3.4 Refinements to Beam Search
  5.3.5 Error Analysis in Beam Search
  5.3.6 Bleu Score (Optional)
  5.3.7 Attention Model Intuition
  5.3.8 Attention Model
  5.3.9 Speech Recognition
  5.3.10 Trigger Word Detection

5.4 Week 4 - Transformers
  5.4.1 Transformer Network Intuition
  5.4.2 Self-Attention
  5.4.3 Multi-Head Attention
  5.4.4 Transformer Network