1/19 |
overview, MLE
[slides] |
Read: ESL {1, 4.3}
Sam Roweis' probability and statistics review; Iain Murray's crib-sheet |
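For reference, a minimal NumPy sketch of maximum-likelihood estimation for a univariate Gaussian (an illustrative example, not taken from the lecture slides; the simulated data are arbitrary):

    import numpy as np

    # MLE for a univariate Gaussian N(mu, sigma^2): the estimates are the
    # sample mean and the (biased, 1/n) sample variance.
    x = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=1000)
    mu_hat = x.mean()
    sigma2_hat = ((x - mu_hat) ** 2).mean()
    print(mu_hat, sigma2_hat)   # close to 2.0 and 1.5**2 = 2.25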
1/21 |
classifiers via generative models
[slides] |
Read: ESL {4.3} |
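A minimal sketch of a generative classifier in the spirit of ESL 4.3, with Gaussian class-conditional densities (simplified here to independent features, a naive-Bayes-style variant rather than the full-covariance models the reading also covers); the function names and the 1e-9 variance floor are illustrative choices, not course code:

    import numpy as np

    def fit_gaussian_generative(X, y):
        """Estimate class priors and per-class, per-feature Gaussian parameters."""
        classes = np.unique(y)
        priors = np.array([np.mean(y == c) for c in classes])
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        variances = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
        return classes, priors, means, variances

    def predict_gaussian_generative(model, X):
        """Classify by the largest log posterior: log prior + log likelihood."""
        classes, priors, means, variances = model
        log_post = (np.log(priors)
                    - 0.5 * np.log(2 * np.pi * variances).sum(axis=1)
                    - 0.5 * ((X[:, None, :] - means) ** 2 / variances).sum(axis=2))
        return classes[np.argmax(log_post, axis=1)]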
1/26 |
nearest neighbor classifiers
[slides] |
Read: CML {2.1, 2.2}
HW1 (due 2/9). Data set: usps.mat. HW1 solution and code. |
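A brute-force 1-nearest-neighbor classifier, for reference (illustrative code, not the HW1 solution; it could be run on the usps.mat digits after loading them with scipy.io.loadmat):

    import numpy as np

    def one_nn_predict(X_train, y_train, X_test):
        """Predict each test point's label as the label of its nearest
        training point (Euclidean distance, computed by brute force)."""
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        return y_train[np.argmin(d2, axis=1)]

The (n_test, n_train, d) broadcast is memory-hungry; chunking the test set is the usual fix on larger data.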
1/28 |
nearest neighbor classifiers (contd.), decision trees
[slides] |
Read: CML {1}, ESL {9.2} |
2/2 |
linear classifiers
[slides] |
Read: UML {Chapter 9 introduction to 9.1.1} |
2/4 |
perceptron
[slides, more slides] |
Read: CML {3}, ESL {4.5.1}, Daniel Hsu's online-to-batch notes,
voted-perceptron paper {1, 2, 3.1, 5} |
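A minimal sketch of the classic perceptron's mistake-driven update (illustrative only; the epoch count and the treatment of boundary cases are arbitrary, and the voted/averaged variants from the paper are not shown):

    import numpy as np

    def perceptron_train(X, y, epochs=10):
        """Perceptron for labels y in {-1, +1}: update w, b on every mistake."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for x_i, y_i in zip(X, y):
                if y_i * (w @ x_i + b) <= 0:   # misclassified (or on the boundary)
                    w += y_i * x_i
                    b += y_i
        return w, b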
2/9 |
feature expansions, kernels
[slides] |
Read: CML {3.7, 4.4, 9.1, 9.2, 9.4}, Daniel Hsu's notes on kernels {1}
HW2 (due 2/23). Data set: spam.mat. HW2 solution and code. |
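To connect feature expansions and kernels, a sketch of the kernel perceptron with a Gaussian (RBF) kernel: the classifier keeps a per-example mistake count instead of an explicit weight vector. Illustrative code only; the bandwidth gamma is an arbitrary default.

    import numpy as np

    def rbf_kernel(A, B, gamma=0.5):
        """Kernel matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)

    def kernel_perceptron_train(X, y, kernel=rbf_kernel, epochs=10):
        """Kernel perceptron: alpha[i] counts mistakes on training point i;
        the decision value at x is sum_i alpha[i] * y[i] * K(x_i, x)."""
        K = kernel(X, X)
        alpha = np.zeros(len(X))
        for _ in range(epochs):
            for i in range(len(X)):
                if y[i] * np.sum(alpha * y * K[:, i]) <= 0:
                    alpha[i] += 1
        return alpha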
2/11 |
SVMs, SVM dual problem (Guest lecture by Daniel Hsu)
[slides] |
Read: CML {6.1, 6.7}, ESL {4.5.2, 12.2}
[Optional: SVM tutorial] |
2/16 |
convex losses and ERM (Guest lecture by Daniel Hsu)
[slides] |
Read: CML {6.2, 6.3}
[Optional but recommended: cvxbook {2, 3, 4}] |
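A small sketch of regularized ERM with a convex surrogate loss: L2-regularized logistic loss minimized by plain full-batch gradient descent. The step size, iteration count, and regularization strength are illustrative defaults, not recommendations from the lecture.

    import numpy as np

    def logistic_erm_gd(X, y, lam=0.01, lr=0.1, iters=500):
        """Minimize mean log(1 + exp(-y * (X @ w))) + (lam/2) * ||w||^2
        by gradient descent; labels y must be in {-1, +1}."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            margins = y * (X @ w)
            grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + lam * w
            w -= lr * grad
        return w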
2/18 |
convex optimization (Guest lecture by Daniel Hsu)
[slides] |
Read: CML {6.4, 6.5}
[Optional but recommended: cvxbook {9.2, 9.3}] |
2/23 |
learning theory
[slides] |
Read: UML {3.1 - 3.2.1, 6.1 - 6.4} |
2/25 |
boosting
[slides] |
Read: CML {11.2}, intro (from this book)
[Optional: ESL {10}] |
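An AdaBoost sketch with exhaustive decision stumps as the weak learners (illustrative code; the round count and the guard against zero weighted error are arbitrary choices):

    import numpy as np

    def best_stump(X, y, w):
        """Exhaustive search for the decision stump with smallest weighted error."""
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (+1, -1):
                    pred = s * np.where(X[:, j] > t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        return best

    def adaboost(X, y, rounds=20):
        """AdaBoost with stumps; labels y in {-1, +1}."""
        n = len(y)
        w = np.full(n, 1.0 / n)
        ensemble = []
        for _ in range(rounds):
            err, j, t, s = best_stump(X, y, w)
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            pred = s * np.where(X[:, j] > t, 1, -1)
            w = w * np.exp(-alpha * y * pred)
            w = w / w.sum()
            ensemble.append((alpha, j, t, s))
        return ensemble

    def adaboost_predict(ensemble, X):
        """Sign of the weighted vote of all stumps."""
        score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
        return np.sign(score)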
3/1 |
boosting (contd.), importance weighted and multiclass prediction
[slides] |
Read: CML {5.1, 5.2} |
3/3 | In-class exam #1 from 10:10 am to 11:25 am | Practice set of problems for exam 1. |
3/8 |
beyond prediction error, cross-validation
[same slides as previous lecture] |
Read: ESL {7.10} |
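A generic k-fold cross-validation loop for estimating classification error; train_fn and predict_fn are placeholder hooks for whichever learner is being evaluated (illustrative code, not the course's):

    import numpy as np

    def k_fold_cv_error(X, y, train_fn, predict_fn, k=5, seed=0):
        """Average held-out error over k folds.
        train_fn(X, y) -> model;  predict_fn(model, X) -> predicted labels."""
        idx = np.random.default_rng(seed).permutation(len(y))
        folds = np.array_split(idx, k)
        errors = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            model = train_fn(X[train_idx], y[train_idx])
            errors.append(np.mean(predict_fn(model, X[test_idx]) != y[test_idx]))
        return float(np.mean(errors))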
3/10 |
linear regression
[slides] |
Read: ESL {3.2 - 3.2.2}
[if needed: Daniel Hsu's linear algebra review] |
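Ordinary least squares in a few lines, for reference; using a least-squares solver rather than explicitly inverting X^T X is the standard numerically safer choice (illustrative code):

    import numpy as np

    def ols_fit(X, y):
        """Least-squares fit with an intercept, via np.linalg.lstsq."""
        Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend a column of ones
        coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return coef                                  # coef[0] is the intercept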
3/22 |
regularized regression
[slides] |
Read: ESL {3.4.1, 3.4.2, 3.4.3, 3.5.1}
HW3 (due 4/5). Data set: spam.mat. HW3 solution and code. Kaggle competition starts: description. |
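A ridge-regression sketch via its closed form (illustrative; it penalizes every coefficient, so in practice one usually centers the data or leaves the intercept unpenalized, and the lasso from ESL 3.4.2-3.4.3 has no such closed form):

    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        """Ridge regression: solve (X^T X + lam * I) w = X^T y."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)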
3/24 |
regularized regression
[same slides as last class] |
Read: ESL {3.4.1, 3.4.2, 3.4.3, 3.5.1} |
3/29 |
regularized regression
[same slides as the previous two classes] |
Read: ESL {3.4.1, 3.4.2, 3.4.3, 3.5.1} |
3/31 |
principal component analysis
[slides] |
Read: ESL {14.5.1}, CML {13.2}
[if needed: Daniel Hsu's linear algebra review] |
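PCA of a centered data matrix via the SVD, for reference (illustrative code; the rows of Vt are the principal directions and the scores are the k-dimensional representation of the data):

    import numpy as np

    def pca(X, k):
        """Top-k principal components via the SVD of the centered data matrix."""
        Xc = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:k]            # each row is a principal direction
        scores = Xc @ components.T     # k-dimensional representation of each point
        return components, scores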
4/5 |
principal component analysis
[same slides as previous lecture] |
Read: ESL {14.5.1}, CML {13.2}
[if needed: Daniel Hsu's linear algebra review] HW4 (due 4/19). HW4 solutions. |
4/7 |
k-means clustering
[slides] |
Read: CML {2.4, 13.1} |
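A sketch of Lloyd's algorithm for k-means (illustrative; initialization here is a uniform random sample of the data rather than k-means++, and empty clusters simply keep their old center):

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = np.argmin(d2, axis=1)
            new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, labels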
4/12 |
dictionary learning, mixture models
[same slides as previous lecture, mixture models slides] |
Read: UML {24.4}, ESL {8.5} |
4/14 |
mixture models, expectation maximization
[slides] |
Read: UML {24.4}, ESL {8.5}
[Optional: Daniel Hsu's E-M notes] |
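A compact EM sketch for a Gaussian mixture with diagonal covariances (illustrative only; the initialization and the 1e-6 variance floor are arbitrary, and the E-step works on the log scale for numerical stability):

    import numpy as np

    def em_gmm_diag(X, k, iters=100, seed=0):
        """EM for a mixture of k Gaussians with diagonal covariances."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(k, 1.0 / k)                          # mixing weights
        mu = X[rng.choice(n, size=k, replace=False)]      # means start at data points
        var = np.ones((k, d)) * (X.var(axis=0) + 1e-6)    # per-component variances
        for _ in range(iters):
            # E-step: responsibilities r[i, j] = P(component j | x_i), via log densities.
            log_r = (np.log(pi)
                     - 0.5 * np.log(2 * np.pi * var).sum(axis=1)
                     - 0.5 * ((X[:, None, :] - mu) ** 2 / var).sum(axis=2))
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and variances from responsibilities.
            nk = r.sum(axis=0)
            pi = nk / n
            mu = (r.T @ X) / nk[:, None]
            var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
        return pi, mu, var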
4/19 |
neural networks
[slides (adapted from Stuart Russell's slides)] |
Read: UML {20}
[Recommended: practical, efficient backprop] HW5 (due 4/29). This homework is optional and purely for extra credit. HW5 solutions. |
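A minimal one-hidden-layer network trained by backpropagation, to make the forward/backward structure concrete (illustrative code: tanh hidden units, a sigmoid output with cross-entropy loss, full-batch gradient descent, and arbitrary hyperparameter defaults):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_mlp(X, y, hidden=16, lr=0.5, epochs=500, seed=0):
        """One hidden tanh layer, sigmoid output, mean cross-entropy loss,
        full-batch gradient descent; labels y should be in {0, 1}."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W1 = rng.normal(scale=0.1, size=(d, hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.1, size=hidden)
        b2 = 0.0
        for _ in range(epochs):
            # Forward pass.
            H = np.tanh(X @ W1 + b1)          # hidden activations, shape (n, hidden)
            p = sigmoid(H @ W2 + b2)          # predicted P(y = 1), shape (n,)
            # Backward pass: gradients of the mean cross-entropy loss.
            dout = (p - y) / n                # dL/d(output pre-activation)
            gW2 = H.T @ dout
            gb2 = dout.sum()
            dH = np.outer(dout, W2) * (1.0 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
            gW1 = X.T @ dH
            gb1 = dH.sum(axis=0)
            # Gradient-descent updates.
            W1 -= lr * gW1
            b1 -= lr * gb1
            W2 -= lr * gW2
            b2 -= lr * gb2
        return W1, b1, W2, b2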
4/21 |
neural networks (contd.)
[same slides as previous lecture] |
Read: UML {20}
[Recommended: practical, efficient backprop] [Cool demo] |
4/26 |
online learning
[slides] |
Read: UML {21}
[Recommended: Avrim Blum's survey, decision tree paper. Optional: online learning survey] [Neat online learning demo] |
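A sketch of the multiplicative-weights (Hedge) algorithm for prediction with expert advice, a staple of the online-learning readings (illustrative code; the learning rate eta is an arbitrary default and losses are assumed to lie in [0, 1]):

    import numpy as np

    def hedge(expert_losses, eta=0.1):
        """Multiplicative weights over experts: after each round, multiply each
        expert's weight by exp(-eta * its loss) and play the normalized weights.
        expert_losses has shape (rounds, n_experts), with losses in [0, 1]."""
        T, n = expert_losses.shape
        w = np.ones(n)
        learner_loss = 0.0
        for t in range(T):
            p = w / w.sum()                        # current distribution over experts
            learner_loss += p @ expert_losses[t]   # expected loss suffered this round
            w = w * np.exp(-eta * expert_losses[t])
        return learner_loss, w / w.sum()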
4/28 |
online learning (contd.)
[same slides as previous lecture] |
Read: UML {21}
[Recommended: Avrim Blum's survey, decision tree paper. Optional: online learning survey] [Neat online learning demo] |
5/6 | Kaggle competition ends. | |
5/10 |
Final exam in 1127 Mudd and 233 Mudd
9:00 am - 12 noon |
Homework write-ups should be submitted as a single PDF file named uni.pdf, where uni is replaced with your UNI (e.g., abc1234.pdf), on Columbia Canvas by 1:00 pm of the specified due date. If any code is required, separate instructions will be provided.