Mathematical Foundations of Deep Neural Networks, Fall 2022

This is the primary course website for Mathematical Foundations of Deep Neural Networks (심층신경망의 수학적 기초), M1407.001200, Fall 2022. We will also use eTL as a secondary website.


Announcements

  • This year (2022), lectures will be in a hybrid format. You may attend in person or virtually.


Homework

This class will have weekly homework assignments. Submit completed assignments through eTL.


Lecture Plans

  • [Week 1] Optimization and stochastic gradient descent (a minimal SGD sketch appears after this list)
  • [Week 2] Shallow neural networks and logistic regression
  • [Week 3] Multi-layer perceptron, softmax regression
  • [Week 4] Convolutional layers, pooling layers, GPU computing, LeNet
  • [Week 5] Data augmentation, regularization techniques: dropout, weight decay, early stopping
  • [Week 6] Weight initialization, VGGNet, backprop
  • [Week 7] Optimizers (Adam, RMSProp), NiN network, GoogLeNet
  • [Week 8] Batch normalization, ResNet, DenseNet
  • [Week 9] ResNeXt, SENet, DnCNN, super-resolution, inverse problems
  • [Weeks 10-11] Flow models
  • [Weeks 12-13] Variational auto-encoders
  • [Weeks 14-15] Generative adversarial networks
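
As a taste of the Week 1 material, here is a minimal sketch of stochastic gradient descent on a toy least-squares problem. The data, step size, and batch size are illustrative choices, not taken from the course materials.

```python
import numpy as np

# Toy least-squares problem: minimize f(w) = (1/2n) ||X w - y||^2.
# The data, step size, and batch size below are illustrative choices.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)           # initial iterate
lr, batch_size = 0.1, 32  # step size and minibatch size

for step in range(500):
    idx = rng.integers(0, n, size=batch_size)             # sample a minibatch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size  # stochastic gradient
    w -= lr * grad                                        # SGD update

print("distance to w_true:", np.linalg.norm(w - w_true))
```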

Course Information

Course material will be posted on this website. eTL will be used for announcements, homework submission, and receiving homework and exam scores.

Instructor

Ernest K. Ryu, 27-205

Graduate Teaching Assistants

Joo Young Choi

Haneol Kijm

Sehyun Kwon

Jongmin Lee

Undergraduate Teaching Assistants

Dongyeob Kim

Jae yeon Kim

Wonseok Lee

Yeeseok Oh

Kunwoo Na

Jaeseung Park

Lectures

Hybrid lectures, in person at 43-1-201 and on Zoom, Tuesdays and Thursdays 5:00–6:15pm. We thank the Faculty of Liberal Education (기초교육원) for helping us run the hybrid lectures. Live (in-person or online) attendance is required.

Exams

The midterm and final exams will be hand-written (no computers), in-person, and 4 hours long.

Grading

Homework 30%, midterm exam 30%, final exam 40%.

Non-programming prerequisites

Good knowledge of the following subjects is required.

  • Vector calculus: proficiency with gradients and the chain rule.
  • Linear algebra: proficiency with matrix-vector products.
  • Basic probability: random variables, expectation, mean, variance, discrete and continuous random variables, Gaussian random variables.

No prior background in machine learning or deep learning is required.

Programming prerequisites

Although programming is an essential part of deep learning, I do not want to make substantial Python programming background a prerequisite for this course. Therefore, I will record and upload a Python tutorial for students to study before the start of the semester. I expect this tutorial to be about 5 hours long, and it will cover the necessary Python programming concepts such as variables, loops, functions, classes, objects, inheritance, exceptions, Python lists, Python tuples, Python decorators, Python iterators, the NumPy library, and the Matplotlib library. Of course, students already familiar with these concepts need not spend time with the tutorial.
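
To give a sense of the level of the tutorial, here is a brief sketch of a few of the listed concepts (classes, inheritance, iterators, and NumPy). This is an illustration, not an excerpt from the tutorial.

```python
import numpy as np

class Animal:
    """Small base class illustrating classes and objects."""
    def __init__(self, name):
        self.name = name

    def speak(self):
        return f"{self.name} makes a sound"

class Dog(Animal):       # inheritance: Dog extends Animal
    def speak(self):     # overriding a method of the base class
        return f"{self.name} says woof"

print(Dog("Rex").speak())

# Iterators: objects you can loop over, e.g. a generator expression.
squares = (i ** 2 for i in range(5))
print(list(squares))

# NumPy: vectorized array arithmetic instead of explicit Python loops.
x = np.arange(5)
print(x * 2 + 1)
```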

Programming in deep learning tends to be relatively simple if one utilizes frameworks such as PyTorch. These frameworks do utilize advanced programming concepts such as inheritance, decorators, and iterators, but if you are roughly familiar with these concepts, I do not expect the Python programming to be a major challenge in this course.
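
To make this concrete, the sketch below shows how inheritance and iterators typically appear in PyTorch code. The model architecture and the random data are illustrative choices, not course material.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Inheritance: models are defined by subclassing nn.Module.
class TwoLayerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 1)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Iterators: a DataLoader is iterated over to obtain minibatches.
X = torch.randn(256, 10)   # made-up data for illustration
y = torch.randn(256, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = TwoLayerNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for x_batch, y_batch in loader:                # one pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(x_batch), y_batch)
    loss.backward()                            # backpropagation computes gradients
    optimizer.step()                           # SGD update
```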

Textbooks and other references

The course does not have a designated textbook or reference. However, you may find the following references useful: Fei-Fei Li's CS231n at Stanford, Pieter Abbeel's Deep Unsupervised Learning at Berkeley, Dive into Deep Learning by Zhang, Lipton, Li, and Smola, and Deep Learning by Goodfellow, Bengio, and Courville.

Throughout this class, you will spend many hours computing gradients. While the chain rule is taught in any vector calculus curriculum, chain rule formulae in vector and matrix forms are not. Fundamentally, you should first carry out the gradient calculations elementwise and then collect the result into a vectorized form. Nevertheless, you may find the following references useful: Matrix calculus notes 1, matrix calculus notes 2, and matrix calculus notes 3 (Chapters 1–3).
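
As a small worked example of the "elementwise first, then vectorize" approach (the function here is chosen purely for illustration), take f(w) = \frac{1}{2} \|Xw - y\|^2 with X an n×d matrix. Elementwise,

\frac{\partial f}{\partial w_k} = \sum_{i=1}^{n} \Big( \sum_{j=1}^{d} X_{ij} w_j - y_i \Big) X_{ik}, \qquad k = 1, \dots, d,

and collecting these d partial derivatives into a column vector gives the vectorized form \nabla f(w) = X^\top (Xw - y).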