You can access the lecture slides here.

  • (dlc-0) Course Introduction
    tl;dr: Introduction and logistics of the course.
    [slides]
  • (dlc-1.1) From NNs to DL
    tl;dr: What is DL?
    [slides]
  • (dlc-1.2) Basics of Tensors and an example
    tl;dr: Basics of tensors, with a linear regression example
    [Tensor basics-codes] [Linear Regression-codes] [slides]
  • (dlc-2.1) ML Concepts - Risk
    tl;dr: A quick review of ML concepts - Risk
    [slides]
  • (dlc-2.2) ML Concepts - Over and underfitting
    tl;dr: A quick review of ML concepts - Over- and underfitting
    [Model Capacity-codes] [slides]
  • (dlc-2.3) ML Concepts - Bias and Variance Trade-off
    tl;dr: A quick review of ML concepts - The bias-variance trade-off
    [Model Capacity-codes] [slides]
  • (dlc-3.1) Perceptron
    tl;dr: Neuron models - the TLU and the perceptron
    [Perceptron-codes] [slides]
  • (dlc-3.2) MLP
    tl;dr: Multilayer Perceptrons (MLPs)
    [Approximation example-Codes] [slides]
  • (dlc-3.3) Gradient Descent
    tl;dr: Gradient Descent
    [slides]
  • (dlc-3.4) Backpropagation
    tl;dr: Gradient Descent in NNs
    [slides]
  • (dlc-3.5) More on Gradient Descent
    tl;dr: Variants of gradient descent
    [slides]
  • (dlc-3.6) Backprop beyond MLP and Autograd
    tl;dr: Computational graphs and automatic differentiation
    [Codes - Autograd and MLP training] [slides]
  • (dlc-4.1) Convolution
    tl;dr: The convolution operation in CNNs
    [Codes - Convolution] [slides]
  • (dlc-4.2) Pooling
    tl;dr: The pooling operation in CNNs
    [slides]
  • (dlc-4.3) Putting it all together
    tl;dr: General architecture of CNNs with a case study
    [slides]
  • (dlc-5.1) Cross-entropy loss
    tl;dr: The cross-entropy loss used to train classifiers
    [Codes - Cross-entropy] [slides]
  • (dlc-6.1) Going Deeper
    tl;dr: Benefits and challenges of depth in DNNs
    [slides]
  • (dlc-6.2) Rectifiers and Dropout
    tl;dr: Rectified activations and dropout, two important techniques for training DNNs
    [slides]
  • (dlc-2.4) Regularization
    tl;dr: Regularization in Machine Learning
    [slides]
  • (dlc-6.1a) Regularization in Deep Learning
    tl;dr: Regularizers for training DNNs
    [slides]
  • (dlc-7.1) Transposed Convolutions
    tl;dr: Operations for increasing the spatial dimensions of signals in DNNs
    [slides]
  • (dlc-7.2) Autoencoders
    tl;dr: Special DNNs trained to map their input back to itself
    [slides]
  • (dlc-7.3) Denoising Autoencoders
    tl;dr: Autoencoders that learn the dependencies among signal components.
    [slides]
  • (dlc-8.1) Recurrent Neural Networks
    tl;dr: Neural networks with memory that can handle variable-length inputs.
    [Elman Network] [slides]
  • (dlc-8.2) Word Embeddings
    tl;dr: Vectors representing words in NLP applications.
    [slides]
  • (dlc-9.1) GANs
    tl;dr: Generative Adversarial Networks
    [Simple GAN] [slides]
  • (dlc-7.4) VAE
    tl;dr: Variational Autoencoders
    [slides]
  • (dlc-10.1) Inside DNNs
    tl;dr: Visualizing deep neural networks
    [slides]