Friday, March 28, 2008

Active Learning

D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2), 1994.

S. Dasgupta, D.J. Hsu, and C. Monteleoni. A general agnostic active learning algorithm.
Neural Information Processing Systems (NIPS), 2007.

S. Dasgupta. Coarse sample complexity bounds for active learning.
Neural Information Processing Systems (NIPS), 2005.

S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning.
Eighteenth Annual Conference on Learning Theory (COLT), 2005.

The A2 algorithm described in class:
M-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic Active Learning, ICML 2006.
Video lecture on A2.

S. Hanneke. Teaching Dimension and the Complexity of Active Learning. Twentieth Annual Conference on Learning Theory (COLT), 2007.

S. Hanneke. A Bound on the Label Complexity of Agnostic Active Learning. Twenty-Fourth Annual International Conference on Machine Learning (ICML), 2007.

M-F. Balcan, A. Broder, and T. Zhang. Margin Based Active Learning, COLT 2007.
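
To make the active learning setting concrete, here is a minimal sketch of disagreement-based querying in the spirit of CAL and A2, specialized to threshold classifiers on the line. This is a sketch under assumptions: the function names, the data, and the realizable (noise-free) setting are all illustrative, not code from the papers above.

    import random

    def active_learn_threshold(pool, oracle, budget):
        """Query labels only inside the current region of disagreement.

        pool   -- list of unlabeled points (floats)
        oracle -- function x -> +1/-1 for some h_t(x) = +1 iff x >= t;
                  one call = one label request
        budget -- maximum number of label queries
        """
        # Version space: every threshold t with lo < t < hi is still
        # consistent with the labels seen so far (boundary ties ignored).
        lo, hi = min(pool) - 1.0, max(pool) + 1.0
        order = pool[:]
        random.shuffle(order)
        queries = 0
        for x in order:
            if not (lo < x < hi):
                continue  # all consistent hypotheses agree on x: no query
            if queries == budget:
                break
            y = oracle(x)
            queries += 1
            if y == +1:
                hi = min(hi, x)  # the true threshold satisfies t <= x
            else:
                lo = max(lo, x)  # the true threshold satisfies t > x
        return (lo + hi) / 2.0, queries

    # Illustration: a handful of queries localizes the threshold tightly.
    oracle = lambda x: +1 if x >= 0.37 else -1
    pool = [random.uniform(-1.0, 1.0) for _ in range(1000)]
    print(active_learn_threshold(pool, oracle, budget=20))

Each query answered inside the disagreement region shrinks the version space, so for thresholds the label complexity scales like log(1/epsilon) rather than the 1/epsilon of passive learning; quantifying this, and extending it to the agnostic (noisy) setting as A2 does, is the subject of the papers above.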

Friday, March 14, 2008

Midterm statistics

Midterm statistics:

Average grade: 17.14
Median grade: 18
Min: 0
Max: 33
Total number of students: 53

Deergha will email the individual grades.

Saturday, March 8, 2008

Midterm

The midterm is worth 25 points (25% of your final grade).

There will be three problems (8 points each) and three simple exercises (3 points each). Thus if you solve any two problems plus all three exercises, you will still get the maximum grade (2 x 8 + 3 x 3 = 25 points). If you do everything, you will get 8 bonus points.

The material covered in the following lectures will NOT be on the midterm:
  1. Generalization bounds (the lecture before the midterm)
  2. Reductions
  3. Sanjoy's talk on projections, Gaussian scale mixtures, and the EM algorithm

One of the problems will be on the semantics of loss functions. You should be able to answer questions of the form: What loss function does the following quantity minimize? (See this post, for example.)
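
As a worked illustration (the data below are made up): the mean of a sample is the constant prediction that minimizes squared loss, while the median minimizes absolute loss, which a brute-force search over candidate predictions confirms.

    # Which constant prediction minimizes a given loss over a sample?
    # The data are illustrative, not from the course.

    data = [1.0, 2.0, 2.0, 3.0, 10.0]

    def best_constant(loss):
        # Brute-force search over a fine grid of candidate predictions c.
        grid = [i / 100.0 for i in range(0, 1101)]
        return min(grid, key=lambda c: sum(loss(x, c) for x in data))

    print(best_constant(lambda x, c: (x - c) ** 2))  # 3.6, the mean
    print(best_constant(lambda x, c: abs(x - c)))    # 2.0, the median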

The other two problems will be on any two of the following algorithms (a rough sketch of one of them appears at the end of this post):
  1. Winnow
  2. Halving
  3. (Randomized) Weighted Majority
  4. Perceptron

The exercises will cover the rest of the material.
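
For reference, here is a minimal sketch of one of the listed algorithms, Randomized Weighted Majority; the round-by-round expert setup below is an illustrative assumption, not the course's exact formulation. Recall the bound: the expected number of mistakes is at most (m ln(1/beta) + ln N) / (1 - beta), where m is the best expert's mistake count and N is the number of experts.

    import random

    def rwm(expert_predictions, outcomes, beta=0.5):
        """Each round: sample an expert with probability proportional to
        its weight and predict as it does; afterwards, multiply the
        weight of every expert that was wrong this round by beta.

        expert_predictions -- one list of 0/1 predictions (one entry per
                              expert) for each round
        outcomes           -- the true 0/1 outcome of each round
        beta               -- penalty factor in (0, 1)
        """
        n = len(expert_predictions[0])
        weights = [1.0] * n
        mistakes = 0
        for preds, y in zip(expert_predictions, outcomes):
            chosen = random.choices(range(n), weights=weights)[0]
            if preds[chosen] != y:
                mistakes += 1
            weights = [w * (beta if p != y else 1.0)
                       for w, p in zip(weights, preds)]
        return mistakes

    # Illustration: expert 0 is always right, expert 1 always wrong,
    # expert 2 random; with beta = 0.5 and m = 0, the bound above gives
    # expected mistakes at most 2 ln 3 (about 2.2), regardless of horizon.
    ys = [random.randint(0, 1) for _ in range(200)]
    rounds = [[y, 1 - y, random.randint(0, 1)] for y in ys]
    print(rwm(rounds, ys))

The deterministic Weighted Majority algorithm replaces the sampling step with a weighted majority vote over the experts, at the cost of a weaker mistake bound.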