Rapid Introduction to Machine Learning / Deep Learning
Professor Hyeong In Choi (Seoul National University, Math)
Lectures: Seven lectures delivered biweekly in the fall of 2015 (for dates, see Lecture Schedule)
Lecture hours: each lecture starts at 7:30 pm and lasts about two to three hours
Venue: Room 104, Sangsan Mathematical Sciences Bldg., Seoul National University
Aim & Scope:
The aim of this lecture series is to introduce the basics of current machine learning to those who are interested in deep learning. The intended audience is diverse, ranging from practitioners in various industries to researchers in mathematics, statistics, computer science, and engineering. Due to time limitations, we will cover the various topics only briefly, leaving out many important details. Our approach is somewhat theoretical/mathematical, although we skip the proofs in most places.
Audience: Anyone is welcome.
Lecture Schedule (tentative; subject to change)
Lecture 1: (Sept. 18, 2015)
Unit 1a : Introduction
- Very brief history of machine learning from a deep learning perspective
- Course outline
- Bare minimum of probability: marginalization and Bayes rule
- Short demo
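As a taste of the probability minimum above, marginalization and the Bayes rule can be illustrated with a toy diagnostic test (all the numbers below are made up for the example):

```python
# Bayes rule on a toy diagnostic example (illustrative numbers only).
p_d = 0.01          # prior P(disease)
p_t_given_d = 0.95  # sensitivity P(test+ | disease)
p_t_given_nd = 0.05 # false-positive rate P(test+ | no disease)

# Marginalization: P(T) = P(T|D)P(D) + P(T|~D)P(~D)
p_t = p_t_given_d * p_d + p_t_given_nd * (1 - p_d)

# Bayes rule: P(D|T) = P(T|D)P(D) / P(T)
p_d_given_t = p_t_given_d * p_d / p_t
```

Even with a 95%-sensitive test, the posterior P(disease | test+) stays small here because the prior is small; this is exactly the kind of computation the short demo illustrates.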
Unit 1b : Logistic regression & neural network
- Probabilistic formalism of logistic regression
- Symmetric (redundant) form of the log-likelihood function
- Training
- Exponential family of distributions and the generalized linear model
- Softmax regression
- XOR problem & neural network with a hidden layer
- Universal approximation theorem
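A minimal sketch of the softmax function and its relation to the logistic sigmoid (logistic regression is the two-class special case of softmax regression); the scores below are arbitrary:

```python
import math

def softmax(z):
    # Subtract the max before exponentiating for numerical stability.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(t):
    # Logistic sigmoid: the 2-class special case of softmax.
    return 1.0 / (1.0 + math.exp(-t))

probs = softmax([2.0, 1.0, 0.1])  # a valid probability distribution
```

Note that softmax([t, 0]) gives [sigmoid(t), 1 - sigmoid(t)], which is the "symmetric (redundant)" parametrization mentioned above.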
Lecture 2: (Oct. 2, 2015)
Unit 2a : SVM & kernel machine
- Support Vector Machine (SVM)
- Reproducing Kernel Hilbert Space (RKHS) and kernel machine
- Deep vs. shallow learning
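A minimal sketch of the kernel idea behind SVMs and the RKHS machinery: a positive-definite kernel turns a finite point set into a Gram matrix, which is all a kernel machine ever touches. The Gaussian (RBF) kernel and the gamma value below are illustrative choices:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2);
    # gamma = 0.5 is an arbitrary choice for this sketch.
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

points = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
# Gram matrix K[i][j] = k(x_i, x_j): symmetric, with ones on the diagonal.
K = [[rbf_kernel(p, q) for q in points] for p in points]
```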
Unit 2b : Statistical learning theory and its consequences
- Brief overview of statistical learning theory
- Overfitting
- Regularization
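Regularization can be previewed in the simplest possible setting, one-dimensional ridge regression, where the penalty visibly shrinks the fitted weight (the data below are made up for the example):

```python
def ridge_1d(xs, ys, lam):
    # 1-D ridge regression without intercept:
    # minimize sum (y - w*x)^2 + lam * w^2  =>  w = sum(x*y) / (sum(x^2) + lam)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]            # data lies exactly on y = 2x
w_unreg = ridge_1d(xs, ys, 0.0)  # ordinary least squares: w = 2
w_reg = ridge_1d(xs, ys, 10.0)   # penalty shrinks w toward 0
```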
Lecture 3: (Oct. 16, 2015)
Unit 3a : Model selection
- Bias-variance tradeoff
- Testing, validation, and model selection
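Model selection in practice rests on held-out data. A minimal k-fold splitting sketch, working on indices only (k = 3 on ten points, all illustrative):

```python
def kfold_indices(n, k):
    # Partition indices 0..n-1 into k contiguous validation folds;
    # each index lands in exactly one validation fold.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds = []
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n) if i not in val_set]
        folds.append((train, val))
        start += size
    return folds

splits = kfold_indices(10, 3)  # 3 (train, validation) index pairs
```

One would fit on each training part and score on the corresponding validation part, then select the model with the best average validation score.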
Unit 3b : Aggregation and randomization
- Bootstrap
- Bagging
- Random Forests
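The bootstrap and bagging ideas in this unit can be sketched in a few lines: resample the data with replacement, compute a statistic on each resample, and average over resamples (here the statistic is the sample mean, on made-up data):

```python
import random

def bootstrap_sample(data, rng):
    # Draw len(data) items with replacement: one bootstrap resample.
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]

def bagged_mean(data, n_rounds, seed=0):
    # "Bag" the sample mean: average it over many bootstrap resamples.
    rng = random.Random(seed)
    means = [sum(bootstrap_sample(data, rng)) / len(data)
             for _ in range(n_rounds)]
    return sum(means) / n_rounds

data = [1.0, 2.0, 3.0, 4.0, 5.0]
est = bagged_mean(data, n_rounds=200)  # close to the plain mean, 3.0
```

Random Forests apply the same resampling idea to decision trees, adding random feature selection at each split.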
Lecture 4: (Oct. 30, 2015)
Unit 4a : Feedforward neural network
- Multilayer perceptron (MLP)
- Backpropagation algorithm
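Backpropagation is the chain rule applied layer by layer. A minimal sketch on a two-weight sigmoid chain (input, target, and weights are arbitrary), whose gradients can be verified against finite differences:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def forward(x, w1, w2):
    # A minimal two-layer chain: h = sigmoid(w1*x), y = sigmoid(w2*h).
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    return h, y

def grads(x, target, w1, w2):
    # Backpropagate the squared loss L = 0.5*(y - target)^2
    # through the chain, one layer at a time.
    h, y = forward(x, w1, w2)
    dy = (y - target) * y * (1 - y)   # dL/d(output pre-activation)
    dw2 = dy * h                      # dL/dw2
    dh = dy * w2                      # dL/dh
    dw1 = dh * h * (1 - h) * x        # dL/dw1
    return dw1, dw2

x, target, w1, w2 = 0.5, 1.0, 0.3, -0.7
dw1, dw2 = grads(x, target, w1, w2)
```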
Unit 4b : Convolutional network & computer vision
- Convolution (filter)
- Convolutional network
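A naive sketch of the convolution (filter) operation: slide a small kernel over an image and sum elementwise products. The tiny image with a vertical step edge and the edge-detecting kernel below are made up for illustration:

```python
def conv2d_valid(image, kernel):
    # Plain 'valid' 2-D cross-correlation (what deep-learning libraries
    # usually call convolution): no padding, stride 1.
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1]]               # responds to left-to-right increases
edges = conv2d_valid(image, kernel)
```

The output fires exactly at the step edge, which is the intuition behind learned convolutional filters.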
Lecture 5: (Nov. 13, 2015)
Unit 5a : Bayesian network
- Dependency and independency model
- D-separation
- Examples
Unit 5b : Markov random field
- Energy and probability
- Boltzmann machine
- Restricted Boltzmann machine (RBM)
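The "energy and probability" connection can be sketched with a tiny RBM: every joint configuration gets an energy, and the Boltzmann distribution exponentiates the negative energy and normalizes (the parameters below are made up for the example):

```python
import math
from itertools import product

def rbm_energy(v, h, W, a, b):
    # RBM energy for binary units: E(v,h) = -a.v - b.h - v^T W h
    term_v = sum(ai * vi for ai, vi in zip(a, v))
    term_h = sum(bj * hj for bj, hj in zip(b, h))
    term_w = sum(v[i] * W[i][j] * h[j]
                 for i in range(len(v)) for j in range(len(h)))
    return -(term_v + term_h + term_w)

# Tiny RBM: 2 visible units, 1 hidden unit, made-up parameters.
W = [[1.0], [-1.0]]
a = [0.0, 0.0]
b = [0.0]

# Boltzmann distribution: P(v,h) = exp(-E(v,h)) / Z
states = list(product([0, 1], repeat=3))  # (v1, v2, h)
Z = sum(math.exp(-rbm_energy([v1, v2], [h], W, a, b))
        for v1, v2, h in states)
p = {s: math.exp(-rbm_energy([s[0], s[1]], [s[2]], W, a, b)) / Z
     for s in states}
```

Low-energy configurations get high probability: here (v1, v2, h) = (1, 0, 1) has lower energy than (0, 1, 1), hence higher probability.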
Lecture 6: (Nov. 27, 2015)
Unit 6a : MCMC
- Markov chain and its limiting probability distribution
- Gibbs sampling
- Metropolis-Hastings algorithm
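A minimal random-walk Metropolis-Hastings sampler, targeting a standard normal through its unnormalized log-density (the step size, seed, and chain length below are arbitrary choices for the sketch):

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step=1.0, seed=0):
    # Random-walk Metropolis-Hastings with a symmetric Gaussian proposal:
    # accept x' with probability min(1, p(x')/p(x)).
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_p(x_new)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Target: standard normal; only the unnormalized log-density is needed.
samples = metropolis_hastings(lambda t: -0.5 * t * t, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The empirical mean and variance of the chain approach those of the target distribution, illustrating the limiting-distribution property of the Markov chain.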
Unit 6b : Unsupervised feature learning : RBM
- Contrastive divergence (CD)
- Stacked RBMs
- Deep belief network (DBN)
Lecture 7: (Dec. 11, 2015)
Unit 7a : Unsupervised feature learning : Auto-encoder & sparse coding
- Auto-encoder and denoising auto-encoder (DAE)
Unit 7b : Sparse coding
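The auto-encoder idea in its simplest form: a tied-weight linear encoder/decoder with a one-dimensional code. Data lying along the code direction is reconstructed exactly; real auto-encoders learn the weights by minimizing reconstruction error, whereas the direction and data point below are fixed by hand for illustration:

```python
import math

def encode(x, w):
    # Linear encoder with a single code unit: z = w . x
    return sum(wi * xi for wi, xi in zip(w, x))

def decode(z, w):
    # Tied-weight linear decoder: x_hat = w * z
    return [wi * z for wi in w]

# Unit direction (made up); inputs on this line reconstruct exactly.
w = [1 / math.sqrt(2), 1 / math.sqrt(2)]
x = [3.0, 3.0]                       # lies on the line spanned by w
x_hat = decode(encode(x, w), w)
err = sum((a - b) ** 2 for a, b in zip(x, x_hat))
```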
Post Scriptum:
If demand warrants it, we may organize more lectures/seminars in 2016 on various aspects of deep learning. For example, we could cover some important topics left untouched in this lecture series, or delve deeper into the theoretical/mathematical aspects. Most of all, though, we want to explore together the more practical issues that arise when implementing or programming with the likes of Caffe, Torch, and Theano.