Class Schedule, Notes 
	 There is no single textbook for the course. We will be drawing from several sources, and I will post references (or lecture notes) each week.
	A) Randomized Algorithms
	Week 1
	- Finding hay in a haystack; examples: probabilistic method (e.g., constructing an error correcting code), testing matrix multiplication
	- Randomized vs deterministic algorithms: success probability, expected running time (Las Vegas vs Monte Carlo); probability amplification
	
	- Quicksort, Sampling and estimation
	- Concentration, Markov's inequality 
  
	- Reading: Draft Lecture Notes
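To make the "testing matrix multiplication" example concrete, here is a small Python sketch of Freivalds' check; the matrices and trial count below are illustrative, not from lecture:

```python
import random

def freivalds(A, B, C, trials=20):
    """Test whether A @ B == C by checking A(Bx) == Cx for random 0/1 vectors x.
    Each trial costs O(n^2), and a single trial errs with probability <= 1/2
    when A @ B != C, so 'trials' repetitions drive the error to 2^(-trials)."""
    n = len(A)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False      # a witness x: certainly A @ B != C
    return True               # correct with probability >= 1 - 2^(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))   # True: this is the real product
print(freivalds(A, B, [[19, 22], [43, 51]]))   # False (with overwhelming probability)
```

Note the one-sided error: a "False" answer is always correct, which is exactly the Monte Carlo flavor discussed above.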
	Week 2
	- Chernoff and Bernstein inequalities, union bound
	- Application to dimension reduction (Johnson-Lindenstrauss lemma)
 
	- Construction of error correcting codes 
  (Here is the video recording of Lecture #4, where we covered the Johnson-Lindenstrauss lemma. Here are the pdf slides. Also, here are some lecture notes that contain the details that we skipped.)
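A quick numerical illustration of the Johnson-Lindenstrauss lemma; the dimensions and seed below are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def jl_project(X, k):
    """Project the rows of X (n points in R^d) to R^k using a random Gaussian
    matrix with entries N(0, 1/k).  By Johnson-Lindenstrauss, taking
    k = O(log n / eps^2) preserves all pairwise distances to within a
    (1 +/- eps) factor, with high probability."""
    G = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ G

X = rng.normal(size=(50, 1000))    # 50 points in R^1000
Y = jl_project(X, 200)             # ... mapped down to R^200

orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(proj / orig)                 # ratio should be close to 1
```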
	
Week 3
	- Streaming algorithms: number of distinct elements, frequency moments; frequent elements (count sketch)
	- Lower bounds on randomized algorithms (?)
	- More examples: Max 3-SAT, Random walks, randomized rounding, ... (?)
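For a taste of the streaming model, here is a small-space estimator for the number of distinct elements (a k-minimum-values sketch; the constants are illustrative, and the estimators covered in lecture may differ):

```python
import hashlib

def h(x):
    """Hash x to a pseudo-uniform number in (0, 1)."""
    d = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2.0**64

def distinct_estimate(stream, k=128):
    """k-minimum-values sketch: remember only the k smallest distinct hash
    values.  If the k-th smallest is v, the distinct count is about (k-1)/v.
    Memory is O(k) regardless of the stream's length."""
    mins = set()
    for item in stream:
        mins.add(h(item))
        if len(mins) > k:
            mins.discard(max(mins))
    if len(mins) < k:                  # fewer than k distinct items: exact count
        return len(mins)
    return int((k - 1) / max(mins))

stream = [i % 1000 for i in range(5000)]   # 1000 distinct values, each repeated
print(distinct_estimate(stream))           # concentrates around 1000
```

The estimate's relative error is roughly 1/sqrt(k), which is the concentration phenomenon from Week 1 at work.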
	
	
	
	 B) Convex programming, relax-and-round paradigm
	
	Lecture 7 (Jan 31)
	- Linear programming, convex optimization
	- Extreme points and the simplex algorithm (outline)
	- Reading: Chapters 1, 2, and 5 of Matousek and Gartner's book
	Lecture 8 (Feb 5)
	- Ellipsoid algorithm, separation oracles 
 
	- Reading: Lecture notes by Goemans (look out for two things we skipped: proving a lower bound on the volume assuming feasibility, and the example of matching)
	
Lecture 9 (Feb 7)
	- LP relaxations for discrete problems; Examples: matching, vertex cover
	- Relax and round paradigm
	- Bipartite matching polytope is integral
	- 2 approximation for vertex cover using an LP relaxation
	- Reading: Chapter 3 of Matousek and Gartner's book
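The vertex-cover rounding fits in a few lines of Python; the fractional solution below is supplied by hand rather than computed by an LP solver:

```python
def round_vertex_cover(edges, x):
    """Relax-and-round for vertex cover: given a feasible fractional solution
    (x[u] + x[v] >= 1 on every edge), output C = {v : x[v] >= 1/2}.  C covers
    every edge, and |C| <= 2 * sum(x) <= 2 * OPT."""
    C = {v for v, xv in x.items() if xv >= 0.5}
    assert all(u in C or v in C for u, v in edges)   # rounding preserves feasibility
    return C

# triangle: the LP optimum puts 1/2 everywhere (value 3/2); integral OPT is 2
edges = [(0, 1), (1, 2), (0, 2)]
x = {0: 0.5, 1: 0.5, 2: 0.5}
print(round_vertex_cover(edges, x))   # {0, 1, 2}: size 3 <= 2 * (3/2)
```

The triangle also shows an integrality gap: the LP value 3/2 is strictly below the integral optimum 2.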
	Lecture 10 (Feb 12)
	- LP for Set Cover and Randomized rounding
	- Facility location: introduction and LP relaxation
	- Example fractional solutions
	- Reading: These lecture notes do a slightly different rounding compared to what we saw in class, but the idea is basically the same. The notes also cover the original application that introduced randomized rounding.
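A sketch of randomized rounding for set cover, again starting from a hand-built fractional solution; the number of rounds t below is one standard choice, and details vary by presentation:

```python
import math
import random

def round_set_cover(sets, x, universe, seed=0):
    """Randomized rounding for set cover: given a fractional LP solution x
    (every element is fractionally covered to extent >= 1), run t = O(log n)
    independent rounds, each picking set S with probability x[S].  The union
    covers everything with high probability, at expected cost t * (LP value)."""
    rng = random.Random(seed)
    t = 2 * math.ceil(math.log(len(universe) + 1)) + 2
    chosen = set()
    for _ in range(t):
        for S in sets:
            if rng.random() < x[S]:
                chosen.add(S)
    covered = set().union(*(sets[S] for S in chosen)) if chosen else set()
    return chosen, covered == universe

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {1, 4}}
x = {"A": 1.0, "B": 0.5, "C": 0.5}        # feasible: every element covered to >= 1
universe = {1, 2, 3, 4}
chosen, ok = round_set_cover(sets, x, universe)
print(sorted(chosen), ok)
```

An element that is fractionally covered to extent 1 survives one round uncovered with probability at most 1/e, so t rounds leave it uncovered with probability about e^(-t); a union bound over the n elements gives the high-probability guarantee.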
	
Lecture 11 (Feb 14)
	- Rounding the Facility Location LP
	- Randomized rounding (incurs an overhead of a log n factor; see Homework)
	- Constant factor approximation
	- Reading: Section 9.2 of these notes
	- Related problems: k-median and k-means.
	
Lecture 12 (Feb 21)
	- Summary of the relax-and-round paradigm (Optional reading: some notes from a previous offering.)
	- Other relaxations: Semidefinite programming as "vector programs"
	- Max Cut: Goemans-Williamson algorithm
	- Analysis of random hyperplane rounding
	- Reading: Chapter 1 of the book of Gartner and Matousek.
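Random hyperplane rounding is easy to simulate. The sketch below uses the optimal SDP vectors for the 5-cycle (unit vectors at angles 4πk/5), which are known in closed form; everything else is illustrative:

```python
import math
import random

def gw_round(vectors, edges, seed=0):
    """Random hyperplane rounding: draw a random Gaussian direction g and put
    vertex u on the side sign(<g, v_u>).  An edge whose vectors make angle
    theta is cut with probability theta/pi, which is what drives the 0.878
    guarantee in the Goemans-Williamson analysis."""
    rng = random.Random(seed)
    g = [rng.gauss(0, 1) for _ in range(len(vectors[0]))]
    side = [sum(gi * vi for gi, vi in zip(g, v)) >= 0 for v in vectors]
    return sum(1 for u, v in edges if side[u] != side[v])

# optimal SDP vectors for the 5-cycle: unit vectors at angles 4*pi*k/5
vecs = [(math.cos(4 * math.pi * k / 5), math.sin(4 * math.pi * k / 5))
        for k in range(5)]
edges = [(k, (k + 1) % 5) for k in range(5)]
cuts = [gw_round(vecs, edges, seed=s) for s in range(50)]
print(cuts[:10])   # for C5, these vectors make every hyperplane recover a max cut (4 edges)
```

Adjacent vectors make angle 4π/5, so each edge is cut with probability 4/5; the SDP value is 5(1 - cos(4π/5))/2 ≈ 4.52 while the max cut is 4, and 4/4.52 ≈ 0.88 is consistent with the 0.878 bound.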
	
Lecture 13 (Feb 26)
	- Semidefinite programming: formulation as LP with matrix variables and infinitely many constraints
	- Efficient separation oracle
	- Cholesky decomposition and finding vector solution
	- Other examples of SDP algorithms: 3-coloring (Karger, Motwani, Sudan), quadratic programming
	- Reading: Chapter 13 of this excellent book.
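The Cholesky step can be seen in two lines of NumPy: given a feasible (positive definite) SDP matrix X, the rows of its Cholesky factor are vectors whose inner products realize X. The matrix below is a made-up example:

```python
import numpy as np

# A feasible SDP solution is a PSD matrix X of inner products.  The vector
# solution v_1, ..., v_n with <v_i, v_j> = X_ij comes from the Cholesky
# factorization X = L L^T: take v_i to be the i-th row of L.
X = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
L = np.linalg.cholesky(X)        # raises LinAlgError if X is not positive definite
print(np.allclose(L @ L.T, X))   # True: the rows of L realize X
```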
	
	 C) Graph analysis
	Lecture 14 (Feb 28)
	- Introduction, graph partitioning
	- Balanced cut, sparsity (and the sparsest cut problem)
	- Examples: sparsest cuts in some basic graphs (complete graph, n-cycle, 2-D grid)
	- Tangent 1: Max-flow Min-cut theorem (see these lecture slides)
	- Tangent 2: Hardness of approximation vs NP-hardness (see Chapter 16 of the book referred to above; the first section contains a couple of basic reductions)
	- Reading: Old lecture notes on partitioning and objectives.
	
Lecture 15 (Mar 11)
	- Sparsest cut problem, derivation of continuous relaxation
	- Adjacency matrix and Laplacian of graphs
	- Reading: Lecture notes on partitioning and objectives.
	
Lecture 16 (Mar 13)
	- Laplacian quadratic form and sparsest cut
	- Recap of basic linear algebra -- eigenvalues & eigenvectors of a symmetric matrix, spectral decomposition
	- Eigenvalues of a Laplacian & number of connected components
	- Top eigenvalue/vector as optimizing a quadratic form (or Rayleigh coefficient)
	- Reading: Lecture notes -- these also discuss how to deal with non-regular graphs
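A small NumPy check of the eigenvalue/connectivity fact from this lecture: for two disjoint triangles, the Laplacian has eigenvalue 0 with multiplicity exactly 2 (the example graph is made up for illustration):

```python
import numpy as np

def laplacian(n, edges):
    """L = D - A for an undirected graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

# two disjoint triangles: eigenvalue 0 appears once per connected component
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
eig = np.linalg.eigvalsh(laplacian(6, edges))
print(int(np.sum(eig < 1e-9)))   # 2 -- the number of connected components
```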
	
Lecture 17 (Mar 18)
	- Second eigenvector of Laplacian and sparsest cut
	- Cheeger's inequality + a very rough sketch of the proof (a similar treatment can be found in these notes); for a full proof, see my notes from a previous offering; the notation is slightly different, so please pay attention!
	- Power iteration for finding the top eigenvalue/vector
	- Analysis in terms of eigenvalue gap
	- More reading: Notes on the power method. A classic paper on how spectral clustering recovers a "good" partition in planar graphs
	- A video with some example graphs and how spectral partitioning performs (fast-forward to 3:00). Also, one of the top-cited papers in CS (over 20k citations) is an application of spectral partitioning to image segmentation
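Power iteration itself is only a few lines; the matrix and iteration count below are illustrative:

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Repeatedly apply A and renormalize.  The iterate converges to the top
    eigenvector, at a rate governed by the ratio of the top two eigenvalues
    (the 'eigenvalue gap' analysis from lecture)."""
    x = np.random.default_rng(seed).normal(size=A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x          # Rayleigh quotient ~ top eigenvalue

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # eigenvalues 3 and 1
lam, v = power_iteration(A)
print(round(lam, 6))             # 3.0
```

With eigenvalue ratio 3, each step shrinks the component along the second eigenvector by a factor of 3, so 200 iterations are far more than enough here.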
	Lecture 18 (Mar 20)
	- Spectral partitioning review
	- Random walks in undirected graphs: walk on a line and Brownian motion
	- The random walk transition matrix
	- Random walk as power iteration
	- Reading: Old lecture notes
	- Optional reading: Link to Einstein's paper on Brownian motion (only Sections 4 & 5 seem relevant!). Elegant book of Doyle and Snell on random walks and their connection to electrical networks
	
Lecture 19 (Mar 25)
	- Stationary probability distribution (in terms of degrees)
	- Convergence to stationary distribution, notion of "mixing time"
	- "Applications" of random walks
	- Random walk on a line revisited (quadratic time bound)
	- 2-Satisfiability -- algorithm using random walks (see these notes for more details; another reference)
	- Graph search (connectivity) in log space
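A sketch of the random-walk 2-SAT algorithm (Papadimitriou's algorithm); the clause encoding and the cutoff constant below are illustrative choices:

```python
import random

def two_sat_walk(n, clauses, seed=0):
    """Random walk for 2-SAT: start from a random assignment; while some
    clause is violated, flip one of its two variables at random.  If the
    formula is satisfiable, this succeeds within O(n^2) expected flips (the
    Hamming distance to a satisfying assignment performs a random walk on a
    line, as in the 'random walk on a line revisited' analysis)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n)]

    def holds(lit):                     # literal +i means x_i, -i means NOT x_i
        v = assign[abs(lit) - 1]
        return v if lit > 0 else not v

    for _ in range(100 * n * n):        # cutoff; reaching it => probably unsatisfiable
        bad = [c for c in clauses if not (holds(c[0]) or holds(c[1]))]
        if not bad:
            return assign
        lit = rng.choice(rng.choice(bad))
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [(1, 2), (-1, 3), (-2, -3)]
print(two_sat_walk(3, clauses))         # some satisfying assignment
```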
	
Lecture 20 (Mar 27)
	- Log-space algorithm for graph search (see these notes for some of the details that we skipped)
	- Volume of a convex body given membership oracle
	- Lower bound for any deterministic algorithm
	- Randomized algorithm using sampling within K ("dart throwing")
	- Reading: Lecture notes by Santosh Vempala for details on the lower bound of Elekes
	
Lecture 21 (Apr 1)
	- Approximating volume: two steps -- (i) sampling => volume estimation, (ii) algorithms for sampling via random walks
	- Slides used for recap and overview
	- Reading: Lecture notes by Santosh Vempala; those interested can find a link to the analysis of the Ball walk
	- Introduction -- learning with expert advice
	- Reading: Some lecture notes. Also see the introductory part of this classic survey
	
	 D) Efficient algorithm design from "online" optimization
	Lecture 22 (Apr 3)
	- Learning with expert advice
	- The weighted majority algorithm (deterministic) and analysis
	- Multiplicative weight update (MWU) rule, discussion on implementation
	- Reading: Notes linked above
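The deterministic weighted majority algorithm in a few lines; the toy prediction data below are made up:

```python
def weighted_majority(predictions, outcomes, eta=0.5):
    """Deterministic weighted majority: predict with the weighted vote of the
    experts, then multiply each wrong expert's weight by (1 - eta).  The
    number of mistakes is bounded by roughly 2(1 + eta) times the best
    expert's mistakes, plus O(log n / eta)."""
    w = [1.0] * len(predictions[0])
    mistakes = 0
    for preds, y in zip(predictions, outcomes):
        vote_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote_for_1 >= sum(w) / 2 else 0
        mistakes += (guess != y)
        w = [wi * (1 - eta) if p != y else wi for wi, p in zip(w, preds)]
    return mistakes

# three experts; expert 0 happens to be always right on this toy data
predictions = [(1, 0, 1), (0, 0, 1), (1, 1, 0), (1, 0, 0), (0, 1, 1)]
outcomes = [1, 0, 1, 1, 0]
print(weighted_majority(predictions, outcomes))   # 0 mistakes here
```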
	Lecture 23 (Apr 8)
	- Learning with expert advice
	- Randomized sampling algorithm, (1+epsilon) approximation
	- The "loss" viewpoint on the randomized analysis
	- Linear classification
	- 
Reading: Lecture notes (these cover the weighted majority algorithm as well; sections 4 & 5 are most relevant for this lecture)
	
Lecture 24 (Apr 10)
	- (Relevant aside) Perceptron algorithm. Reading: Lecture notes
	- Winnow algorithm is very similar, but based on learning with expert predictions. Reading: Lecture notes
	- Introduction to solving linear programs using the MWU framework
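A minimal perceptron sketch on made-up separable data; the mistake bound (R/gamma)^2 guarantees termination when a separating hyperplane exists:

```python
def perceptron(points, labels, max_passes=100):
    """Perceptron: on a misclassified point, add y * x to the weight vector.
    If the data are linearly separable with margin gamma inside radius R, the
    total number of updates is at most (R / gamma)^2, so this terminates."""
    w = [0.0] * (len(points[0]) + 1)           # last coordinate = bias term
    for _ in range(max_passes):
        updated = False
        for x, y in zip(points, labels):       # labels are -1 / +1
            xb = list(x) + [1.0]
            if y * sum(wi * xi for wi, xi in zip(w, xb)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, xb)]
                updated = True
        if not updated:
            break                              # every point classified correctly
    return w

# separable toy data: labels follow the sign of x + y - 1
points = [(0, 0), (2, 2), (0, 2), (2, 0), (-1, 0), (3, 1)]
labels = [-1, 1, 1, 1, -1, 1]
w = perceptron(points, labels)
print(w)
```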
	
Lecture 25 (Apr 15)
	- How to solve a linear program using the MWU framework
	- Reading: These lecture notes
Lectures 26 and 27 -- COURSE REVIEW
	- Reading: Notes about the centroid-based algorithm for solving LPs and other optimization problems