
CSE 250C: Machine Learning Theory

Time and Place: 
Tue-Thu 5 - 6:20 PM in HSS 1330 (Humanities and Social Sciences Bldg).

Instructor: 
Raef Bassily 
Email: rbassily at ucsd dot edu
Office Hrs: Thu 3-4 PM, Atkinson Hall 4111.

TAs: 
- Andrew Leverentz (aleveren@eng.ucsd.edu) - Office Hrs: Wed 4-5 PM (CSE Basement B260A)
- Chicheng Zhang (chz038@cs.ucsd.edu) - Office Hrs: Mon 11 AM - 12 PM (CSE Basement B240A)

Course Schedule: (Times and content may vary slightly depending on class progress) 

Week 1-2: 4/4, 6, 11
- Introduction and Course Overview
- Probability Tools and Concentration Inequalities
    Notes: Part 1 
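As a quick numerical companion to these notes, Hoeffding's inequality (for n i.i.d. samples in [0, 1], P(|empirical mean − true mean| ≥ t) ≤ 2·exp(−2nt²)) can be checked empirically; the sample size, deviation t, and trial count below are arbitrary illustrative choices, not course specifics:

```python
import math
import random

# Hoeffding's inequality: for n i.i.d. samples bounded in [0, 1],
# P(|empirical mean - true mean| >= t) <= 2 * exp(-2 * n * t**2).
random.seed(0)
n, t, trials = 100, 0.15, 2000

deviations = 0
for _ in range(trials):
    # Uniform[0, 1] samples, so the true mean is 0.5.
    mean = sum(random.random() for _ in range(n)) / n
    if abs(mean - 0.5) >= t:
        deviations += 1

empirical = deviations / trials
bound = 2 * math.exp(-2 * n * t ** 2)
print(empirical, "<=", bound)  # the observed frequency respects the bound
```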
Week 2: 4/11, 13
- Framework of Statistical Learning, Empirical Risk Minimization
- PAC Learning Model
Week 3: 4/18, 20
- PAC Learning: Examples
- General Bound on Sample Complexity for Finite Hypothesis Classes (Occam's Razor)
    Notes: Part 2 
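The finite-class bound above is easy to evaluate numerically: in the realizable case, m ≥ (ln|H| + ln(1/δ))/ε samples suffice for (ε, δ)-PAC learning. The class, accuracy, and confidence values below are illustrative choices (boolean conjunctions over 10 variables, where each variable appears positively, negatively, or not at all):

```python
import math

def occam_sample_complexity(h_size, eps, delta):
    """Realizable-case bound for a finite class H:
    m >= (ln|H| + ln(1/delta)) / eps samples suffice for (eps, delta)-PAC learning."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

# Boolean conjunctions over 10 variables: 3^10 conjunctions
# plus the always-false hypothesis.
m = occam_sample_complexity(3 ** 10 + 1, eps=0.1, delta=0.05)
print(m)  # 140
```

Note the logarithmic dependence on |H|: squaring the class size only doubles the first term.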
Week 4: 4/25, 27
- Agnostic PAC Learning and the Uniform Convergence Principle
- Set Shattering: Intro to the Vapnik-Chervonenkis (VC) Dimension
- VC Dimension: Examples, Discussion and Implications, and the Growth Function
Week 5: 5/2, 4
- VC Dimension: Sauer's Lemma
- The Fundamental Theorem of Learning (Characterization of Learnability via VC Dimension)
    Notes: Part 3 
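Sauer's lemma bounds the growth function of a class of VC dimension d by the sum of binomial coefficients C(m, i) for i = 0, …, d, which is polynomial in m rather than 2^m. A quick check with illustrative values of m and d:

```python
import math

def sauer_bound(m, d):
    """Sauer's lemma: a class of VC dimension d labels m points
    in at most sum_{i=0}^{d} C(m, i) distinct ways."""
    return sum(math.comb(m, i) for i in range(d + 1))

# With d = 3 and m = 10 points: 1 + 10 + 45 + 120 = 176,
# far below the 2^10 = 1024 labelings of an unrestricted class.
print(sauer_bound(10, 3), "vs", 2 ** 10)  # 176 vs 1024
```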
Week 6: 5/9, 11
- Boosting: Weak vs. Strong Learnability
- AdaBoost
    Notes: Part 4 
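As a rough companion to this week's material, AdaBoost with exhaustively searched decision stumps can be sketched in plain Python. The toy one-dimensional dataset below is an illustrative choice: no single stump classifies it perfectly, but the boosted ensemble does.

```python
import math

def stump_predict(threshold, sign, x):
    """Decision stump on the line: sign * (+1 if x > threshold else -1)."""
    return sign * (1 if x > threshold else -1)

def best_stump(xs, ys, weights):
    """Exhaustively pick the stump with the smallest weighted error."""
    best = None
    for threshold in set(xs):
        for sign in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(threshold, sign, x) != y)
            if best is None or err < best[0]:
                best = (err, threshold, sign)
    return best

def adaboost(xs, ys, rounds):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, threshold, sign = best_stump(xs, ys, weights)
        err = max(err, 1e-10)  # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, threshold, sign))
        # Reweight: misclassified points get more weight next round.
        weights = [w * math.exp(-alpha * y * stump_predict(threshold, sign, x))
                   for x, y, w in zip(xs, ys, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def ensemble_predict(ensemble, x):
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score > 0 else -1

# Toy 1-D data that no single stump classifies perfectly.
xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, -1, -1, 1, 1]
h = adaboost(xs, ys, rounds=10)
print([ensemble_predict(h, x) for x in xs])  # matches ys: [1, 1, -1, -1, 1, 1]
```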
Week 7: 5/16, 18
- Agnostic PAC Learning in the Generalized Loss Model
- Intro to Convex Analysis: Convex and Lipschitz Functions
- Learnability of Convex-Lipschitz-Bounded Problems
    Notes: Part 5 
 
Week 8: 5/23, 25
- Stochastic Gradient Descent:
    * Basic GD Algorithm and Convergence Guarantees
    * Projected Stochastic Gradient Descent
- Learning via Stochastic Gradient Descent
    Notes: Part 6-A
    Notes: Part 6-B
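Projected SGD with iterate averaging can be sketched in a few lines of plain Python; the objective, step size, ball radius, and step count below are arbitrary illustrative choices, not course specifics:

```python
import random

def project(w, radius):
    """Euclidean projection of w onto the ball of the given radius."""
    norm = sum(x * x for x in w) ** 0.5
    if norm <= radius:
        return w
    return [x * radius / norm for x in w]

def projected_sgd(grad_sample, dim, radius, steps, eta):
    """Projected SGD with a fixed step size; returns the average iterate."""
    w = [0.0] * dim
    avg = [0.0] * dim
    for _ in range(steps):
        g = grad_sample(w)  # stochastic (noisy) gradient at the current point
        w = project([wi - eta * gi for wi, gi in zip(w, g)], radius)
        avg = [a + wi / steps for a, wi in zip(avg, w)]
    return avg

# Toy problem: minimize E[(w - z)^2] over the ball of radius 2,
# where z is 0.8 or 1.2 with equal probability; the minimizer is w = 1.
random.seed(0)
grad = lambda w: [2 * (w[0] - random.choice([0.8, 1.2]))]
w_bar = projected_sgd(grad, dim=1, radius=2.0, steps=5000, eta=0.01)
print(w_bar)  # close to [1.0]
```

Returning the averaged iterate rather than the last one is what the convergence guarantees for convex-Lipschitz problems are stated for.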
Weeks 9-10: 6/1, 6, 8
- Regularization and Stability
    * Regularized Loss Minimization and Balancing Bias-Complexity
    * Regularization as a Stabilizer: Stable Algorithms Do Not Overfit
    * Learning via RLM
    Notes: Part 7 
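Regularized loss minimization in its simplest concrete form is one-dimensional ridge regression: the regularized empirical risk (1/n)·Σᵢ (w·xᵢ − yᵢ)² + λ·w² has a closed-form minimizer, obtained by setting its derivative in w to zero. The data and λ values below are illustrative:

```python
def rlm_ridge_1d(xs, ys, lam):
    """Minimizer of (1/n) * sum_i (w*x_i - y_i)^2 + lam * w^2:
    setting the derivative to zero gives w = sum(x*y) / (sum(x*x) + n*lam)."""
    n = len(xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + n * lam)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]
print(rlm_ridge_1d(xs, ys, lam=0.0))   # plain least squares, about 1.99
print(rlm_ridge_1d(xs, ys, lam=10.0))  # regularization shrinks w toward 0
```

Larger λ trades a worse empirical fit for a more stable (and hence better-generalizing) solution, which is exactly the bias-complexity balance listed above.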
   
    Suggested Future Topics & Concluding Remarks

Announcements: 
  • The final exam has been posted on Piazza (due Thursday, June 15, 5 PM; submissions should be made on Gradescope).
  • Homework 3 (mini-project) has been posted (due June 6; see the assignment description, the LaTeX template, and a PDF version of the template).
  • The mid-term exam is here (due May 25; no collaboration is allowed; individual submissions should be made on Gradescope).
  • Homework 2 is here (due May 11; submissions should be made on Gradescope).
  • Homework 1 is up; view it here (due May 2; submissions to be made on Gradescope: entry code 9DB489).
  • The calibration assignment is up here (due April 13; submissions to be made on Gradescope: entry code 9DB489).
  • Introductory Lecture: Course overview and Administrative Information can be found here.

Course Overview: 

The main goal of this course is to explain the fundamental concepts underlying machine learning and the techniques that transform those concepts into practical algorithms. The main focus will be on the theoretical foundations of the subject, and the material covered will include rigorous mathematical proofs and analyses. The class will cover a wide array of topics: starting from basic concepts such as PAC learning, uniform convergence, generalization, and the Vapnik-Chervonenkis (VC) dimension, it will build upon those to study more advanced concepts and algorithmic techniques such as boosting, convex learning, stochastic gradient descent, regularization, and stability.


Prerequisites:

Good knowledge of probability and multivariate calculus is required. Students should be comfortable working with mathematical abstractions and proofs. Some previous exposure to machine learning is recommended. 


Homeworks:
  • There will be at least 4 assignments (including a take-home midterm exam) in addition to an initial calibration assignment. 
  • Each homework assignment will be posted on Gradescope (sign up using entry code: 9DB489). An announcement will be made in class and on this page when a homework is up and a due date will also be specified. 
  • Students should submit their homework on Gradescope by the specified due date.
  • No late homework will be accepted. 
  • Solutions to most problems will involve proofs. Grading will be based on both correctness and clarity; solutions should also be concise. 
  • The last homework will potentially include a mini-project on the implementation/evaluation of one of the algorithmic techniques discussed in class.
  • Each student may choose one homework partner to collaborate with on the assignments, excluding the take-home midterm and final exams. 
  • It is up to you whether to have a homework partner or to work by yourself; however, choosing not to have a homework partner earns no extra credit. 
  • Each homework group needs to send me their names by email no later than April 18.
  • Collaboration with students other than your homework partner is not allowed. However, discussion of the class material (not including homework problems) among students is encouraged, outside the classroom as well as on Piazza.  
  • Collaboration with the homework partner is not allowed on the midterm or the final exams (both are take-home exams).

Grading Policy: 
  • This is a 4-unit course. The evaluation in this course will be based on:
    • Calibration assignment: 3% (serious solution attempts will receive full credit)
    • Homework assignments: 45%
    • Mid-term take-home exam: 18%
    • Final take-home exam: 34%
    • A bonus of up to 5% for those who actively engage in discussions and answer questions on Piazza.

Class Forum on Piazza

Please sign up here to join the discussion forum of this class on Piazza. Do not miss the important announcements and discussions that take place there!

Supplementary Readings: 
  • S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms.
  • M. J. Kearns and U. V. Vazirani, An Introduction to Computational Learning Theory.