2. Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

Slides

2.1.1 Week 1 - Setting up your Machine Learning Application

2.1.1.1 Train / Dev / Test sets

2.1.1.2 Bias / Variance

2.1.1.3 Basic Recipe for Machine Learning

2.1.2 Week 1 - Regularizing your Neural Network

2.1.2.1 Regularization

2.1.2.2 Why Regularization Reduces Overfitting?

2.1.2.3 Dropout Regularization

2.1.2.4 Understanding Dropout

2.1.2.5 Other Regularization Methods
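
The dropout items above are easiest to remember with a tiny example. Below is a minimal NumPy sketch of inverted dropout on one layer's activations, following the keep_prob convention from the lectures; the layer shape and keep_prob value are arbitrary placeholders.

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8                               # probability of keeping each unit

a3 = np.random.randn(50, 64)                  # activations of layer 3: (units, examples)

# Inverted dropout: randomly shut off units, then scale the survivors up
# so the expected value of the activations stays the same.
d3 = np.random.rand(*a3.shape) < keep_prob    # boolean keep-mask
a3 = a3 * d3                                  # zero out roughly 20% of the units
a3 = a3 / keep_prob                           # the "inverted" rescaling step

# At test time no mask is applied and no rescaling is needed; that is the
# point of dividing by keep_prob during training.
```

For L2 regularization, the recipe is simpler still: add (lambd / (2m)) times the sum of squared weights to the cost, which shows up as an extra (lambd / m) * W term in each dW during backprop.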

2.1.3 Week 1 - Setting Up your Optimization Problem

2.1.3.1 Normalizing Inputs

2.1.3.2 Vanishing / Exploding Gradients

2.1.3.3 Weight Initialization for Deep Networks

2.1.3.4 Numerical Approximation of Gradients

2.1.3.5 Gradient Checking

2.1.3.6 Gradient Checking Implementation Notes
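
To make the gradient-checking recipe concrete, here is a small sketch that compares an analytic gradient against the centred-difference approximation and reports the relative error used in the lecture. The quadratic cost J is only a stand-in for a real network's cost.

```python
import numpy as np

def J(theta):
    """Toy cost function; any differentiable scalar function of theta works."""
    return np.sum(theta ** 2)

def dJ(theta):
    """Analytic gradient of J (the quantity we want to verify)."""
    return 2 * theta

def grad_check(theta, epsilon=1e-7):
    grad = dJ(theta)
    grad_approx = np.zeros_like(theta)
    for i in range(theta.size):
        theta_plus = theta.copy()
        theta_minus = theta.copy()
        theta_plus[i] += epsilon
        theta_minus[i] -= epsilon
        # Centred difference: (J(theta + eps) - J(theta - eps)) / (2 * eps)
        grad_approx[i] = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)
    # Relative difference; roughly below 1e-7 is great, above 1e-3 suggests a bug.
    return (np.linalg.norm(grad_approx - grad)
            / (np.linalg.norm(grad_approx) + np.linalg.norm(grad)))

print(grad_check(np.random.randn(10)))
```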

2.2 Week 2 - Optimization Algorithms

2.2.1 Mini-batch Gradient Descent

2.2.2 Understanding Mini-batch Gradient Descent

2.2.3 Exponentially Weighted Averages

2.2.4 Understanding Exponentially Weighted Averages

2.2.5 Bias Correction in Exponentially Weighted Averages

2.2.6 Gradient Descent with Momentum

2.2.7 RMSprop

2.2.8 Adam Optimization Algorithm

2.2.9 Learning Rate Decay

2.2.10 The Problem of Local Optima
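
Since Adam combines the exponentially weighted averages, bias correction, momentum, and RMSprop ideas from the items above, one hedged NumPy sketch covers all of them. The hyperparameter defaults (beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8) are the ones quoted in the lecture; `params`, `grads`, `v`, and `s` are assumed to be dicts of NumPy arrays.

```python
import numpy as np

def adam_update(params, grads, v, s, t,
                learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam step for a dict of parameters and matching gradients."""
    for key in params:
        # Momentum: exponentially weighted average of the gradients.
        v[key] = beta1 * v[key] + (1 - beta1) * grads[key]
        # RMSprop: exponentially weighted average of the squared gradients.
        s[key] = beta2 * s[key] + (1 - beta2) * grads[key] ** 2
        # Bias correction compensates for the zero-initialised averages
        # during the first few iterations (t starts at 1).
        v_corrected = v[key] / (1 - beta1 ** t)
        s_corrected = s[key] / (1 - beta2 ** t)
        params[key] -= learning_rate * v_corrected / (np.sqrt(s_corrected) + epsilon)
    return params, v, s

# v and s are initialised to zeros with the same shapes as params, and t is
# incremented once per mini-batch update. Learning rate decay (2.2.9) can be
# layered on top, e.g. alpha = alpha0 / (1 + decay_rate * epoch_num).
```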

2.3.1 Week 3 - Hyperparameter Tuning

2.3.1.1 Tuning Process

2.3.1.2 Using an Appropriate Scale to pick Hyperparameters

2.3.1.3 Hyperparameters Tuning in Practice: Pandas vs. Caviar
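
The "appropriate scale" idea is easy to misremember, so here is a short sketch of the sampling trick from the lecture: draw the exponent uniformly, not the value itself. The range 1e-4 to 1 for alpha matches the lecture's example; beta is handled through 1 - beta.

```python
import numpy as np

# Sample alpha uniformly on a log scale between 1e-4 and 1,
# rather than uniformly on a linear scale.
r = -4 * np.random.rand()            # r in [-4, 0]
alpha = 10 ** r                      # alpha in [1e-4, 1]

# For beta in [0.9, 0.999], sample 1 - beta on a log scale instead,
# since beta's effect is most sensitive near 1.
r = np.random.uniform(-3, -1)        # 1 - beta in [1e-3, 1e-1]
beta = 1 - 10 ** r

print(alpha, beta)
```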

2.3.2 Week 3 - Batch Normalization

2.3.2.1 Normalizing Activations in a Network

2.3.2.2 Fitting Batch Norm into a Neural Network

2.3.2.3 Why does Batch Norm work?

2.3.2.4 Batch Norm at Test Time
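
As a companion to the batch-norm items above, a minimal NumPy sketch of the training-time forward computation on one layer's pre-activations Z, with gamma and beta as the learnable scale and shift; the test-time behaviour is noted in the comments.

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, epsilon=1e-8):
    """Normalize Z (shape: units x examples) over the mini-batch, then scale and shift."""
    mu = np.mean(Z, axis=1, keepdims=True)        # per-unit mean over the batch
    var = np.var(Z, axis=1, keepdims=True)        # per-unit variance over the batch
    Z_norm = (Z - mu) / np.sqrt(var + epsilon)    # zero mean, unit variance
    Z_tilde = gamma * Z_norm + beta               # learnable scale and shift
    return Z_tilde, mu, var

Z = np.random.randn(4, 32)                        # 4 units, mini-batch of 32
gamma = np.ones((4, 1))
beta = np.zeros((4, 1))
Z_tilde, mu, var = batch_norm_forward(Z, gamma, beta)

# At test time, mu and var are replaced by exponentially weighted averages
# of the mini-batch statistics collected during training.
```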

2.3.3 Week 3 - Multi-class Classification

2.3.3.1 Softmax Regression

2.3.3.2 Training a Softmax Classifier
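
A short sketch of the softmax activation and its cross-entropy loss as defined in the lectures; the max-subtraction inside softmax is a standard numerical-stability trick rather than something the slides dwell on, and the example logits reuse the lecture's [5, 2, -1, 3] vector.

```python
import numpy as np

def softmax(Z):
    """Softmax over classes; Z has shape (classes, examples)."""
    t = np.exp(Z - np.max(Z, axis=0, keepdims=True))   # subtract max for stability
    return t / np.sum(t, axis=0, keepdims=True)

def cross_entropy(Y, Y_hat):
    """Average of -sum_j y_j * log(y_hat_j) over the mini-batch; Y is one-hot."""
    m = Y.shape[1]
    return -np.sum(Y * np.log(Y_hat + 1e-12)) / m

Z = np.array([[5.0, 1.0], [2.0, 3.0], [-1.0, 0.0], [3.0, 2.0]])  # 4 classes, 2 examples
Y = np.array([[1, 0], [0, 1], [0, 0], [0, 0]])                   # one-hot labels
Y_hat = softmax(Z)
print(Y_hat.sum(axis=0))          # each column sums to 1
print(cross_entropy(Y, Y_hat))
# Handy fact from the lecture: with softmax plus cross-entropy, dZ = Y_hat - Y.
```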

2.3.4 Week 3 - Deep Learning Frameworks

2.3.4.1 Deep Learning Frameworks

2.3.4.2 TensorFlow
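
The TensorFlow lecture's running example is minimizing the toy cost w^2 - 10w + 25, whose optimum is w = 5. Below is a hedged sketch of that idea in TensorFlow 2's GradientTape style, which differs from the session-based code shown in the original video.

```python
import tensorflow as tf

w = tf.Variable(0.0, dtype=tf.float32)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        cost = w ** 2 - 10 * w + 25      # the lecture's toy cost, minimized at w = 5
    grads = tape.gradient(cost, [w])      # TensorFlow handles backprop automatically
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())   # approaches 5.0
```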
