- Course Info: CS7545, Fall 2019
- Instructor: Jacob Abernethy
- Office: Coda S1221
- Email: prof_at_gatech_dot_edu
- Office Hours: Wednesdays, 10-11am, in Coda S1221
- Course Time&Place: MW 4:30-5:45pm, Weber SST III (Lecture Hall 2)
- Teaching Assistants:
- Bhuvesh Kumar
- Email: bhuvesh_at_gatech.edu
- Office Hours: Thursdays 3:00-4:00pm, alcove of Klaus 2116/2124
- Zihao Hu
- Email: zihaohu_at_gatech.edu
- Office Hours: Mondays 3:00-4:00pm, alcove of Klaus 2116/2124
This course will study theoretical aspects of prediction and decision-making problems, where our goal is to understand the mathematical underpinnings of machine learning. A primary objective of the class is to bring students to the frontiers of research and to prepare students to publish in this area. The course will cover, among other things, concentration inequalities, uniform deviation bounds, Vapnik-Chervonenkis Theory, Rademacher Complexity, margin bounds, boosting, some theoretical aspects of deep learning, online learning theory, regret minimization, multi-armed bandit algorithms, and connections to convex optimization. Along the way, we may dive into several related topics, including minimax equilibrium in games, calibration, sequential portfolio selection, option pricing, and differential privacy.
Prerequisites: Familiarity with the analysis of algorithms, probabilistic analysis, and related topics. CS7641 (Machine Learning) will be quite helpful but not strictly necessary. The material is going to be about 90% “theory,” and thus potential students must have a strong mathematical background. We shall rely heavily on techniques from calculus, probability, and convex analysis, but many tools will be presented in lecture.
Coursework: There will be 5 problem sets throughout the semester.
- 50% - Homeworks
- 40% - Final Exam
- 10% - Participation
Note: The final exam will be held on Wednesday, December 11, from 2:40-5:30pm.
Roughly half of the course will follow material from the following text:
- “Foundations of Machine Learning” by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar
Much of the material in online learning (aka regret minimization) reflects my own taste, and I will present these topics in my own way. But for students who want reading material on this topic, there are several surveys released in recent years that cover many of the topics we shall discuss. I will link to them here, and will mention them in various lectures when appropriate:
- The Multiplicative Weights Update Method by Sanjeev Arora, Elad Hazan, and Satyen Kale.
- Online Learning and Online Convex Optimization survey by Shai Shalev-Shwartz.
- The convex optimization approach to regret minimization survey by Elad Hazan.
- Sasha Rakhlin’s Lecture Notes.
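For a first taste of the regret-minimization material covered in these surveys, here is a minimal sketch of the exponential weights (Hedge) algorithm over a set of experts. This is an illustrative example, not course-provided code; the loss sequence and the learning-rate tuning below are assumptions chosen for the demonstration.

```python
import math

def hedge(losses, eta):
    """Exponential weights (Hedge): keep a weight per expert and
    multiplicatively shrink each weight according to its observed loss."""
    n = len(losses[0])
    weights = [1.0] * n
    total = 0.0
    for round_losses in losses:
        z = sum(weights)
        p = [w / z for w in weights]  # play the normalized weight distribution
        total += sum(pi * li for pi, li in zip(p, round_losses))
        # exponential update: down-weight experts in proportion to their loss
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    return total

# Toy example: two experts over T = 100 rounds; expert 0 always incurs
# loss 1, expert 1 always incurs loss 0.
T = 100
losses = [[1.0, 0.0] for _ in range(T)]
eta = math.sqrt(2 * math.log(2) / T)  # a standard tuning for N = 2 experts
print(hedge(losses, eta))
```

The algorithm's cumulative loss stays within roughly sqrt((T ln N)/2) of the best expert in hindsight, which is the flavor of regret bound developed in the lectures on the Weighted Majority and Exponential Weights algorithms.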
| # | Date | Topic |
|---|------|-------|
| 1 | 19 Aug 2019 | Introduction and Linear Algebra Review |
| 2 | 21 Aug 2019 | Convex Analysis |
| 3 | 26 Aug 2019 | Convex Analysis and Deviation Bounds |
| 4 | 28 Aug 2019 | Chernoff Bounds |
| 5 | 04 Sep 2019 | Martingale and Online Learning Intro |
| 6 | 09 Sep 2019 | Weighted Majority Algorithm |
| 7 | 11 Sep 2019 | Exponential Weights Algorithm |
| 8 | 16 Sep 2019 | Perceptron and Game Theory Intro |
| 9 | 18 Sep 2019 | Game Theory and Boosting |
| 10 | 23 Sep 2019 | Boosting |
| 11 | 25 Sep 2019 | Online Convex Optimization |
| 12 | 30 Sep 2019 | Online Convex Optimization |
| 13 | 2 Oct 2019 | SGD and Mirror Descent |
| 14 | 7 Oct 2019 | Mirror Descent Continued |
| 15 | 9 Oct 2019 | FTRL, Multi-Armed Bandits, & EXP3 |
| 16 | 21 Oct 2019 | EXP3 and Stochastic Bandits |
| 17 | 23 Oct 2019 | Stochastic Bandits & UCB |
| Homework | Due Date |
|---|---|
| 1 | Sep 8, 2019, 11:59 pm |
| 2 | Sep 30, 2019, 2:00 pm |
| 3 | Oct 29, 2019, 11:59 pm |
| 4 | Nov 14, 2019, 11:59 pm |
Previous offerings of the course: Fall 2018