
Classical Theory of Machine Learning

INTRODUCING THE PAC PARADIGM

The course commences with a rigorous exposition of the classical theory of ML. Here the classical Probably Approximately Correct (PAC) paradigm is introduced, and through it the notions of true and empirical error are systematically studied.
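For orientation, the two error notions can be stated as follows (standard textbook definitions, rendered here in notation chosen for this aside): for a distribution $\mathcal{D}$ over examples, a predictor $h$, and a sample $S = ((x_1, y_1), \dots, (x_m, y_m))$,

L_{\mathcal{D}}(h) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\bigl[h(x) \neq y\bigr],
\qquad
L_{S}(h) \;=\; \frac{1}{m}\,\bigl|\{\, i \in [m] : h(x_i) \neq y_i \,\}\bigr|.

The true error $L_{\mathcal{D}}(h)$ is unobservable, whereas the empirical error $L_{S}(h)$ is computable from the sample; relating the two is the business of the PAC paradigm.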


The key message of this part of the course is the development of techniques for estimating how large a training sample fed to a learning algorithm (model) must be in order to control the true and empirical errors of the model.
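To indicate the flavour of such sample-size estimates (a standard bound for a finite hypothesis class $\mathcal{H}$, stated here for orientation rather than as part of the course text): in the realizable case, any sample size

m \;\geq\; \frac{1}{\epsilon}\,\ln\frac{|\mathcal{H}|}{\delta}

guarantees that, with probability at least $1 - \delta$ over the draw of the sample, empirical risk minimization returns a hypothesis whose true error is at most $\epsilon$. In the agnostic case, Hoeffding's inequality together with a union bound shows that

m \;\geq\; \frac{1}{2\epsilon^{2}}\,\ln\frac{2|\mathcal{H}|}{\delta}

suffices for $|L_{\mathcal{D}}(h) - L_{S}(h)| \leq \epsilon$ to hold simultaneously for all $h \in \mathcal{H}$, again with probability at least $1 - \delta$.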

Lecture notes & videos

[Lecture thumbnails: Introduction, PAC Errors, PAC, Agnostic, VC Dimension, Sauer, No Free Lunch, PAC_The, Model Selection]

Classical ML Models

In this section of the course, we study some of the more classical models (learning algorithms) of ML. The notions of true and empirical error introduced in the previous section follow us here; for each model, we seek to provide estimates for the errors associated with it. A minimal illustration of this error bookkeeping follows below.
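To make this concrete, here is a minimal, self-contained Python sketch (not part of the course materials; the function names erm_threshold, empirical_error, and draw, the data-generating process, and the 10% noise rate are all assumptions made for illustration). It runs empirical risk minimization over the class of one-dimensional threshold predictors, reports the empirical error on the training sample, and estimates the true error on a large fresh sample.

import random

def empirical_error(h, sample):
    # Fraction of labelled examples (x, y) in the sample that h misclassifies.
    return sum(1 for x, y in sample if h(x) != y) / len(sample)

def erm_threshold(sample):
    # ERM over the class of 1-D threshold predictors h_t(x) = 1[x >= t]:
    # try each sample point as a candidate threshold and keep the one
    # with the smallest empirical error.
    best_t, best_err = None, float("inf")
    for t in sorted(x for x, _ in sample):
        err = empirical_error(lambda x, t=t: int(x >= t), sample)
        if err < best_err:
            best_t, best_err = t, err
    return (lambda x: int(x >= best_t)), best_err

# Toy data: the true label is 1[x >= 0.5], flipped with probability 0.1.
random.seed(0)
def draw(m):
    return [(x, int(x >= 0.5) ^ int(random.random() < 0.1))
            for x in (random.random() for _ in range(m))]

train, test = draw(200), draw(5000)
h, train_err = erm_threshold(train)
print(f"empirical error on the training sample: {train_err:.3f}")
print(f"true error, estimated on a fresh sample: {empirical_error(h, test):.3f}")

On such data the empirical error typically sits near the 10% noise floor, while the fresh-sample estimate of the true error indicates how well the learned threshold generalizes.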

[Lecture thumbnails: Linear Predictors, Boosting, Model, SVM, Nearest Neighbours, Decision Trees, Convex]

ENGLISH VIDEOS

[Lecture thumbnails: SGD, Kernel, Online]

