I taught my first class last quarter, and it was an enjoyable and eye-opening experience on many levels. First, some background. The class was undergraduate algorithms, popularly known at UCLA as CS180. There were 129 students (kind of like jumping into the deep end to test the waters). Like most other CS curricula, it is a … Continue reading Doing a 180 and still spinning
Author: Raghu Meka
Discrepancy and Rounding Linear Programs
In the previous posts we talked about various discrepancy questions and saw a proof of the six standard deviations suffice result. Besides being of interest in combinatorics, discrepancy theory has several remarkable applications in algorithms; see this excellent book for a taste of these results. Here I will briefly discuss two (one old and one … Continue reading Discrepancy and Rounding Linear Programs
Discrepancy Bounds from Convex Geometry
In the last post we discussed some questions about discrepancy and the 'Six Standard Deviations Suffice' theorem stated below (without the $6$, which is not too important, but makes for a great title): Theorem 1 For vectors $a^1,\ldots,a^n \in \{1,-1\}^n$, there exists $\epsilon \in \{1,-1\}^n$ such that for every $j \in$ … Continue reading Discrepancy Bounds from Convex Geometry
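As a quick numerical aside (not the theorem's proof, which is famously non-constructive in Spencer's original argument): a naive random coloring already gets discrepancy $O(\sqrt{n \log n})$ by the union bound, and in practice the best of a few random trials lands well under the $6\sqrt{n}$ guarantee. The sketch below is illustrative only; `discrepancy` and `best_random_coloring` are hypothetical helper names, not from the post.

```python
import random

def discrepancy(vectors, signs):
    # Discrepancy of a coloring: max over rows a^j of |<signs, a^j>|.
    return max(abs(sum(s * x for s, x in zip(signs, a))) for a in vectors)

def best_random_coloring(vectors, trials=200, seed=0):
    # Try several uniformly random +/-1 colorings and keep the best
    # discrepancy seen. This is the "union bound" baseline, not
    # Spencer's partial-coloring argument.
    rng = random.Random(seed)
    n = len(vectors[0])
    best = None
    for _ in range(trials):
        eps = [rng.choice((1, -1)) for _ in range(n)]
        d = discrepancy(vectors, eps)
        if best is None or d < best:
            best = d
    return best

if __name__ == "__main__":
    n = 64
    rng = random.Random(1)
    vecs = [[rng.choice((1, -1)) for _ in range(n)] for _ in range(n)]
    # Empirically the best random coloring sits comfortably below 6*sqrt(n).
    print(best_random_coloring(vecs), 6 * n ** 0.5)
```

The point of the comparison is exactly the gap the posts discuss: the union bound only promises roughly $\sqrt{2n \ln 2n}$, while the theorem guarantees a coloring at $6\sqrt{n}$ regardless of how the vectors are chosen.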
Discrepancy and Beating the Union Bound
In this series of three posts I want to discuss some recent and old advances in discrepancy theory and their applications to algorithms. Discrepancy minimization is quite a rich and beautiful area as evidenced in these two books. Here I will focus on a specific perspective -- that of "Beating the Union Bound" -- which … Continue reading Discrepancy and Beating the Union Bound