Time and Place: Tue/Thu 5-6:20 PM in HSS 1330 (Humanities and Social Sciences Bldg).
Instructor: Raef Bassily. Email: rbassily at ucsd dot edu. Office Hrs: Thu 3-4 PM, Atkinson Hall 4111.
TAs:
- Andrew Leverentz (aleveren@eng.ucsd.edu). Office Hrs: Wed 4-5 PM (CSE Basement B260A)
- Chicheng Zhang (chz038@cs.ucsd.edu). Office Hrs: Mon 11 AM-12 PM (CSE Basement B240A)
Tentative Course Schedule: (times and content may vary slightly depending on the class's progress)

Weeks 1-2 (4/4, 6, 11):
- Introduction and Course Overview
- Probability Tools and Concentration Inequalities
Notes: Part 1 (now up!)

Week 2 (4/11, 13):
- Framework of Statistical Learning, Empirical Risk Minimization
- PAC Learning Model

Week 3 (4/18, 20):
- PAC Learning: Examples
- General Bound on Sample Complexity for Finite Hypothesis Classes (Occam's Razor)
Notes: Part 2 (now up!)

Week 4 (4/25, 27):
- Agnostic PAC Learning and the Uniform Convergence Principle
- Set Shattering: Intro to Vapnik-Chervonenkis (VC) Dimension
- VC Dimension: Examples, Discussion and Implications, and the Growth Function

Week 5 (5/2, 4):
- VC Dimension: Sauer's Lemma
- The Fundamental Theorem of Learning (Characterization of Learnability via VC Dimension)
Notes: Part 3 (to be added)

Week 6 (5/9, 11):
- Boosting: Weak vs. Strong Learnability
- AdaBoost
Notes: Part 4 (to be added)

Week 7 (5/16, 18):
- Agnostic-PAC Learning in the Generalized Loss Model
- Intro to Convex Analysis: Convex, Lipschitz Functions
- Learnability of Convex-Lipschitz-Bounded Problems
Notes: Part 5 (to be added)

Week 8 (5/23, 25):
- Stochastic Gradient Descent:
  * Basic GD Algorithm and Convergence Guarantees
  * Projected Stochastic Gradient Descent
- Learning via Stochastic Gradient Descent (see the illustrative sketch after this schedule)
Notes: Part 6 (to be added)

Weeks 9 & 10 (6/1, 6, 8):
- Regularization and Stability:
  * Regularized Loss Minimization and Balancing Bias-Complexity
  * Regularization as a Stabilizer: Stable Algorithms Do Not Overfit
  * Learning via RLM
Notes: Part 7 (to be added)
- Other Suggested Topics and Concluding Remarks
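To give a concrete taste of the algorithmic side of the course, below is a minimal illustrative sketch (not official course code) of projected stochastic gradient descent from Week 8, applied to a convex-Lipschitz-bounded problem. The loss, synthetic data, ball radius, and step size are hypothetical choices made purely for illustration.

```python
import numpy as np

def projected_sgd(grad_sample, radius, T, dim):
    """Projected SGD over the Euclidean ball of radius `radius`.

    grad_sample(w) should return an unbiased stochastic (sub)gradient at w.
    For convex L-Lipschitz losses, the averaged iterate has expected excess
    risk on the order of radius * L / sqrt(T).
    """
    w = np.zeros(dim)
    eta = radius / np.sqrt(T)  # illustrative step size (assumes L ~ 1)
    iterates = []
    for _ in range(T):
        g = grad_sample(w)
        w = w - eta * g                     # stochastic gradient step
        norm = np.linalg.norm(w)
        if norm > radius:                   # project back onto the ball
            w *= radius / norm
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)        # average of the iterates

# Illustrative usage: linear prediction with the absolute loss on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=1000)

def grad_sample(w):
    i = rng.integers(len(X))                # sample one data point uniformly
    return np.sign(X[i] @ w - y[i]) * X[i]  # subgradient of |<w, x_i> - y_i|

w_hat = projected_sgd(grad_sample, radius=5.0, T=5000, dim=5)
```

Note that the sketch returns the average of the iterates: the standard convergence guarantee in this setting is stated for the averaged iterate rather than the last one.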
Announcements:
- Homework 1 is up. View it here. (Due date: May 2. Submissions to be made on Gradescope: entry code 9DB489.)
- Calibration Assignment is up here. (Due date: April 13. Submissions to be made on Gradescope: entry code 9DB489.)
- Introductory Lecture: the course overview and administrative information can be found here.
Course Overview:
The main goal of this course is to explain the fundamental concepts underlying machine learning and the techniques that turn those concepts into practical algorithms. The main focus is on the theoretical foundations of the subject, and the material covered includes rigorous mathematical proofs and analyses. The class covers a wide array of topics: starting from basic concepts such as PAC learning, uniform convergence, generalization, and Vapnik-Chervonenkis (VC) dimension, it builds on those to study more advanced concepts and algorithmic techniques such as boosting, convex learning, stochastic gradient descent, regularization, and stability.
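As a sample of the flavor of the results covered (Weeks 3-4), here is one standard form of the finite-class sample-complexity bound (Occam's Razor), following the statement in the Shalev-Shwartz and Ben-David text listed under the supplementary readings:

```latex
% Realizable PAC learning of a finite hypothesis class H:
% with probability at least 1 - \delta over an i.i.d. sample of size m,
% ERM over H returns a hypothesis of true error at most \epsilon whenever
\[
  m \;\ge\; \frac{\ln\left(|\mathcal{H}|/\delta\right)}{\epsilon} .
\]
% The required sample size grows only logarithmically in |H|.
```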
Prerequisites:
Good knowledge of probability and multivariate calculus is required. Students should be comfortable working with mathematical abstractions and proofs. Some previous exposure to machine learning is recommended.
Homeworks:
- There will be at least 4 assignments (including a take-home midterm exam), in addition to an initial calibration assignment.
- Each homework assignment will be posted on Gradescope (sign up using entry code: 9DB489). An announcement will be made in class and on this page when a homework is up, and its due date will be specified.
- Students should submit their homework on Gradescope by the specified due date.
- No late homeworks will be accepted.
- Solutions to most problems will involve proofs. Grading will be based on both correctness and clarity. Solutions should also be concise.
- The last homework may include a mini-project on implementing and evaluating one of the algorithmic techniques discussed in class.
- Each student may choose one homework partner to collaborate with on the assignments, excluding the take-home midterm and final exams!
- It is up to you whether to work with a homework partner or by yourself; however, there is no extra credit for working alone.
- Each homework group should email me their names no later than April 18.
- Collaboration with students other than your homework partner is not allowed. However, discussion of the class material (not including homework problems) among students is encouraged, outside the classroom as well as on Piazza.
- Collaboration with the homework partner is not allowed on the midterm or final exams (both are take-home exams).
Grading Policy: This is a 4-unit course. The evaluation will be based on:
- Calibration assignment: 3% (serious solution attempts will receive full credit)
- Homework assignments: 45%
- Midterm take-home exam: 18%
- Final take-home exam: 34%
- A bonus of up to 5% for those who actively engage in discussions and answer questions on Piazza!
Class Forum on Piazza:
Please sign up here to join the discussion forum of this class on Piazza. Do not miss the important announcements and discussions that take place there!
Supplementary readings:
- S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms.
- M. J. Kearns and U. V. Vazirani, An Introduction to Computational Learning Theory.
