1. Neural Networks and Deep Learning: Slides

1.1 Week 1 - Introduction to Deep Learning
    1.1.1 What is a Neural Network?
    1.1.2 Supervised Learning with Neural Networks
    1.1.3 Why is Deep Learning taking off?
    1.1.4 About this Course

1.2 Week 2 - Logistic Regression as a Neural Network
    1.2.1 Binary Classification
    1.2.2 Logistic Regression
    1.2.3 Logistic Regression Cost Function
    1.2.4 Gradient Descent
    1.2.5 Derivatives
    1.2.6 More Derivative Examples
    1.2.7 Computation Graph
    1.2.8 Derivatives with a Computation Graph
    1.2.9 Logistic Regression Gradient Descent
    1.2.10 Gradient Descent on m Examples
    1.2.11 Vectorization
    1.2.12 More Vectorization Examples
    1.2.13 Vectorizing Logistic Regression
    1.2.14 Vectorizing Logistic Regression's Gradient Output
    1.2.15 Broadcasting in Python
    1.2.16 A Note on Python/Numpy Vectors
    1.2.17 Quick Tour of Jupyter/iPython Notebooks
    1.2.18 Explanation of Logistic Regression Cost Function

1.3 Week 3 - Shallow Neural Networks
    1.3.1 Neural Networks Overview
    1.3.2 Neural Network Representation
    1.3.3 Computing a Neural Network's Output
    1.3.4 Vectorizing Across Multiple Examples
    1.3.5 Explanation for Vectorized Implementation
    1.3.6 Activation Functions
    1.3.7 Why do you need Non-Linear Activation Functions?
    1.3.8 Derivatives of Activation Functions
    1.3.9 Gradient Descent for Neural Networks
    1.3.10 Backpropagation Intuition (Optional)
    1.3.11 Random Initialization

1.4 Week 4 - Deep Neural Networks
    1.4.1 Deep L-layer Neural Network
    1.4.2 Forward Propagation in a Deep Network
    1.4.3 Getting your Matrix Dimensions Right
    1.4.4 Why Deep Representations?
    1.4.5 Building Blocks of Deep Neural Networks
    1.4.6 Forward and Backward Propagation
    1.4.7 Parameters vs Hyperparameters