Theory Seminar – MSR-Silicon Valley – RIP

The seminar took place on Wednesdays, 1:30-2:30, at MSR Silicon Valley. The 09/18/2014 talk was canceled to allow the local audience some time to pack their offices.

Organizers (in order of their service): Omer Reingold, Parikshit Gopalan, Guy Rothblum and Raghu Meka


09/18/2014 Madhu Sudan (Microsoft Research) (Note unusual time: Thursday 2:30 – 3:30)

Communication with Imperfectly Shared Randomness

A common feature of most natural communication is that it relies on a large shared context between speaker and listener, where the context is shared only vaguely rather than precisely. Knowledge of language, technical terms, socio-political events, history, etc. all combine to form this shared context, and it should be obvious that no one can expect any part of this context to be shared perfectly by the two parties. This shared context helps in compressing communication (else I would have to attach an English dictionary, a book on grammar, etc. to this abstract); but the imperfection of the sharing potentially leads to misunderstanding and ambiguity. The challenge of reaping the benefits provided by the shared context without introducing new errors due to this imperfection leads to a host of new mathematical questions.

In this talk we will focus on one specific setting for this tension between shared context and imperfection of sharing, namely the use of shared randomness in communication complexity. It is well known that shared randomness between speaker and listener can lead to immense savings in communication complexity for certain communication tasks. What happens when this randomness is not perfectly shared? We model this by saying that the sender has access to a sequence of uniform random bits and the receiver has access to a noisy version of the same sequence, in which each bit is flipped independently with some probability p. While most known communication protocols fail when the randomness is imperfectly shared, it turns out that many of the effects of shared randomness can be recovered, with a slight loss, by more carefully designed protocols. Among other results we will describe a general one which shows that any k-bit one-way communication protocol with perfectly shared randomness can be “simulated” by a protocol using O(2^k) bits of communication with imperfectly shared randomness, and this is essentially tight.
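
To make the randomness model concrete, here is a minimal Python sketch of the noise model only (the flip probability p and all names are illustrative; nothing below is from the paper):

```python
import random

def shared_randomness(n_bits, p, seed=None):
    """Return (sender_bits, receiver_bits): the receiver sees each of the
    sender's uniform bits flipped independently with probability p."""
    rng = random.Random(seed)
    sender = [rng.randint(0, 1) for _ in range(n_bits)]
    receiver = [b ^ (rng.random() < p) for b in sender]
    return sender, receiver

# Example: with p = 0.1, roughly 10% of the positions disagree.
s, r = shared_randomness(10000, p=0.1, seed=42)
print(sum(a != b for a, b in zip(s, r)) / len(s))
```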

Based on joint work with Clément Canonne (Columbia), Venkatesan Guruswami (CMU), and Raghu Meka (MSR).


09/17/2014 Tim Roughgarden (Stanford University)

Barriers to Near-Optimal Equilibria

We explain when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the worst-case quality of equilibria in games derived from the problem. The most straightforward use of our lower bound framework is to harness an existing computational or communication lower bound to derive a lower bound on the worst-case price of anarchy (POA) in a class of games. This is a new approach to POA lower bounds, which relies on reductions in lieu of explicit constructions. Our lower bounds are particularly significant for problems of game design — ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks — where the goal is to design a game that has only near-optimal equilibria.


09/10/2014 Dawn Woodard (Cornell, visiting MSR)

Characterizing the Efficiency of Markov Chain Monte Carlo

The introduction of simulation-based computational methods has allowed the field of Bayesian statistics to dramatically expand, achieving widespread use in areas from finance to information technology. Despite this, the biggest challenge for the adoption of Bayesian approaches is still the computational methods used in implementation: understanding of the error associated with these methods has lagged far behind their use, and for some statistical models all available methods take too long to converge. I will describe work characterizing the efficiency of particular computational methods used for Bayesian statistical inference, including adaptive Markov chain Monte Carlo and a Gibbs sampler for Bayesian motif discovery in genomics. 

I will highlight in particular our recent results on the efficiency of approximate Bayesian computation (ABC), which has become a fundamental tool in population genetics and systems biology.  ABC is used when evaluation of the likelihood function is prohibitively expensive or impossible; it instead approximates the likelihood function by drawing pseudo-samples from the model. We address both the rejection sampling and Markov chain Monte Carlo versions of ABC, presenting the surprising result that multiple pseudo-samples typically do not improve the efficiency of the algorithm as compared to employing a high-variance estimate computed using a single pseudo-sample. This result means that it is unnecessary to tune the number of pseudo-samples, and is in contrast to particle MCMC methods, in which many particles are often required to provide sufficient accuracy.
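
For readers unfamiliar with ABC, a bare-bones rejection-sampling version looks roughly as follows. This is a hedged sketch for intuition, not the algorithms analyzed in the talk; the prior, simulator, tolerance, and the parameter m (the number of pseudo-samples averaged per proposal) are all placeholders:

```python
import random

def abc_rejection(observed, prior, simulate, distance, tol, m=1, n_accept=1000):
    """Basic ABC rejection: keep a proposed parameter theta if the (average)
    distance between m pseudo-samples simulated at theta and the observed
    data falls below tol. m=1 corresponds to the single high-variance
    pseudo-sample discussed in the abstract."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior()
        d = sum(distance(simulate(theta), observed) for _ in range(m)) / m
        if d < tol:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a normal distribution with known variance 1.
obs = 2.0
posterior = abc_rejection(
    observed=obs,
    prior=lambda: random.uniform(-5, 5),
    simulate=lambda theta: random.gauss(theta, 1.0),
    distance=lambda x, y: abs(x - y),
    tol=0.2,
)
print(sum(posterior) / len(posterior))  # should be near 2.0
```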


09/03/2014 Omri Weinstein (Princeton University)

From honest but curious to malicious: An Interactive Information Odometer and Applications to Privacy

We introduce a novel technique which enables two players to maintain an estimate of the internal information cost of their conversation in an online fashion without revealing much extra information. We use this construction to obtain new results about information-theoretically secure computation and interactive compression.

More specifically, we show that the information odometer can be used to achieve information-theoretically secure computation between two untrusted parties: if the players’ goal is to compute a function f(x,y), and f admits a protocol with information cost I and communication cost C, then our odometer can be used to produce a “robust” protocol which:

(i) assuming both players are honest, computes f with high probability, and

(ii) even if one party is malicious/adversarial, then for any k, the probability that the honest player reveals more than O(k(I + log C)) bits of information to the other player is at most 2^{-Omega(k)}.

We also outline a potential approach which uses our odometer as a proxy for breaking state-of-the-art interactive compression results: any progress on interactive compression in the regime where I = O(\log C) will lead to new *general* compression results in all regimes, and hence to new direct sum theorems in communication complexity.

Joint work with Mark Braverman


08/27/2014 Amit Daniely (Hebrew University)

From average case complexity to improper learning complexity

It is presently still unknown how to show hardness of learning problems, and there are huge gaps between our upper and lower bounds in the area. The main obstacle is that standard NP-reductions do not yield hardness of learning. All known lower bounds rely on (unproved) cryptographic assumptions.

We introduce a new technique to this area, using reductions from problems that are hard on average. We put forward a natural generalization of Feige’s assumption about the complexity of refuting random K-SAT instances. Under this assumption we show:

1. Learning DNFs is hard.
2. Learning an intersection of a super-logarithmic number of halfspaces is hard.

In addition, the same assumption implies the hardness of virtually all learning problems that were previously shown hard under cryptographic assumptions.

Joint with Nati Linial and Shai Shalev-Shwartz.


08/20/2014 Robi Krauthgamer (Weizmann Institute)

Adaptive Metric Dimensionality Reduction

I plan to discuss data-adaptive dimensionality reduction in the context of supervised learning in general metric spaces. Our contribution is twofold. Statistically, we present a generalization bound for Lipschitz functions in a metric space that is doubling, or nearly doubling. Consequently, we provide a new theoretical explanation for empirically reported improvements gained by preprocessing Euclidean data by PCA (Principal Components Analysis) prior to constructing a linear classifier.

On the algorithmic front, we introduce a PCA-analogue for general metric spaces, namely, an efficient procedure to approximate the data’s intrinsic dimension, which is often much lower than the ambient dimension. Our results thus leverage the dual benefits of low dimensionality:
(1) more efficient algorithms, e.g., for proximity search, and
(2) more optimistic generalization bounds.

Joint work with Lee-Ad Gottlieb and Aryeh Kontorovich.


08/13/2014 Moni Naor (Weizmann Institute)

Physical Zero-Knowledge Proofs of Physical Properties

Is it possible to prove that two DNA-fingerprints match, or that they do not match, without revealing any further information about the fingerprints? Is it possible to prove that two objects have the same design without revealing the design itself?

In the digital domain, zero-knowledge is an established concept where a prover convinces a verifier of a statement without revealing any information beyond the statement’s validity. However, zero-knowledge is not as well-developed in the context of problems that are inherently physical.

In this talk, we are interested in protocols that prove physical properties of physical objects without revealing further information. The literature lacks a unified formal framework for designing and analyzing such protocols. We suggest the first paradigm for formally defining, modeling, and analyzing physical zero-knowledge (PhysicalZK) protocols, using the Universal Composability framework. We demonstrate applications of physical zero-knowledge to DNA profiling and neutron radiography. Finally, we explore an analog of public-coin proofs in the context of PhysicalZK that we call publicly observable proofs.

Joint work with Ben Fisch and Daniel Freund


08/06/2014 Lyle Ramshaw

Stråhle’s equal-tempered triumph

In 1743, the Swedish organ builder Daniel P. Stråhle gave an elegant geometric construction for determining the precise pitches of musical notes — for example, for locating frets along the neck of a guitar.  Stråhle chose 24/17 as the tilt ratio of the perspectivity in his construction.  That ratio needs to be roughly the square root of 2; but why did he choose 24/17 over the continued-fraction convergent 17/12?  Stråhle’s choice turns out to have the musical advantage that the frets that are furthest from being equal-tempered appear higher up on the fretboard.
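
For reference, these are standard numerical facts about $\sqrt{2}$ rather than anything taken from the talk: both candidate ratios approximate $\sqrt{2}$ to within roughly 0.2%, one from above and one from below.

$\sqrt{2} \approx 1.41421, \qquad 17/12 \approx 1.41667 \;(\text{about } 0.17\% \text{ high}), \qquad 24/17 \approx 1.41176 \;(\text{about } 0.17\% \text{ low}).$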


07/30/2014 Ryan Williams (Stanford University)

Faster all-pairs shortest paths via circuit complexity

The good old all-pairs shortest path problem in dense n-node directed graphs with arbitrary edge weights (APSP) has been known for 50 years (by Floyd and Warshall) to be solvable in O(n^3) time on the real RAM, where additions and comparisons of reals take unit cost (but all other operations have typical logarithmic cost). Faster algorithms (starting with Fredman in 1975) were known to take O(n^3/(log^c n)) time for various constants c <= 2. It’s a longstanding open problem if APSP can be solved in n^{3-e} time for some constant e > 0. A first step towards a positive answer would be to determine if even an n^3/(log^c n) time algorithm is possible, for *every* constant c.

I will outline a new randomized method for computing the min-plus product (a.k.a. tropical product) of two n by n matrices, yielding a faster algorithm for APSP. On the real RAM, the algorithm runs in time O~(n^3 / 2^{(log n)^{1/2}}) and works with high probability on any pair of matrices. The algorithm applies tools from low-level circuit complexity.
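
For readers unfamiliar with the min-plus (tropical) product, the direct cubic-time definition below is the baseline that the new algorithm improves on; this is only a reference implementation, not the method from the talk:

```python
INF = float("inf")

def min_plus_product(A, B):
    """(A*B)[i][j] = min over k of A[i][k] + B[k][j]. Repeatedly squaring the
    weighted adjacency matrix under this product yields all-pairs shortest
    path distances, so a faster min-plus product gives a faster APSP."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            if aik == INF:
                continue
            for j in range(n):
                if aik + B[k][j] < C[i][j]:
                    C[i][j] = aik + B[k][j]
    return C

# About log n min-plus squarings of the adjacency matrix give APSP distances.
```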


07/23/2014 Shachar Lovett (University of California, San Diego)

List decoding Reed-Muller codes over small fields

The list decoding problem for a code asks for the maximal radius up to which any ball of that radius contains only a constant number of codewords. The list decoding radius is not well understood even for well-studied codes, like Reed-Solomon or Reed-Muller codes.

Fix a finite field $\F$. The Reed-Muller code $\RM_{\F}(n,d)$ is defined by $n$-variate degree-$d$ polynomials over $\F$. In this work, we study the list decoding radius of Reed-Muller codes over a constant prime field $\F=\F_p$, constant degree $d$ and large $n$. We show that the list decoding radius is equal to the minimal distance of the code. That is, if we denote by $\delta(d)$ the normalized minimal distance of $\RM_{\F}(n,d)$, then the number of codewords in any ball of radius $\delta(d)-\eps$ is bounded by $c=c(p,d,\eps)$, independent of $n$. This resolves a conjecture of Gopalan-Klivans-Zuckerman [STOC 2008], who among other results proved it in the special case of $\F=\F_2$, and extends the work of Gopalan [FOCS 2010], who proved the conjecture in the case of $d=2$.

We also analyse the number of codewords in balls of radius exceeding the minimal distance of the code. For $e \leq d$, we show that the number of codewords of $\RM_{\F}(n,d)$ in a ball of radius $\delta(e) - \eps$ is bounded by $\exp(c \cdot n^{d-e})$, where $c=c(p,d,\eps)$ is independent of $n$. The dependence on $n$ is tight. This extends the work of Kaufman-Lovett-Porat [IEEE Inf. Theory 2012], who proved similar bounds over $\F_2$.

The proof relies on several new ingredients: an extension of the Frieze-Kannan weak regularity lemma to general function spaces, higher-order Fourier analysis, and an extension of the Schwartz-Zippel lemma to compositions of polynomials.

Joint work with Abhishek Bhowmick.


07/16/2014 Noam Nisan (Hebrew University of Jerusalem)

Economic Efficiency Requires Interaction

We study the necessity of interaction between individuals for obtaining approximately efficient allocations. The role of interaction in markets has received significant attention in economic thinking, e.g. in Hayek’s 1945 classic paper.

We consider this problem in the framework of simultaneous communication complexity. We analyze the amount of simultaneous communication required for achieving an approximately efficient allocation. In particular, we consider two settings: combinatorial auctions with unit demand bidders (bipartite matching) and combinatorial auctions with sub-additive bidders. For both settings we first show that non-interactive systems have enormous communication costs relative to interactive ones. On the other hand, we show that limited interaction enables us to find approximately efficient allocations.

Joint work with Shahar Dobzinski and Sigal Oren.


07/09/2014 Anupam Gupta (CMU)

The Power of Recourse: Online Spanning Trees with Alterations
(UNUSUAL TIME: Talk will start at 3 PM)

In the online spanning (or Steiner) tree problem, we are given a metric space, vertices from this metric space arrive online, and we want to buy connections to maintain a tree spanning all arrived points at the least possible cost. It is known that the greedy algorithm maintains an O(log n)-competitive spanning tree, and this is optimal.

But suppose the decisions of the online algorithm are not irrevocable. When a new vertex arrives, in addition to adding an edge to connect the newly arrived terminal, we are allowed to exchange a small number of previously bought edges for other edges. Can we maintain a better solution? We show a positive answer to this problem: even allowing us to change a single edge at each point in time is enough to maintain a constant-competitive tree.

Joint work with Amit Kumar and Albert Gu.


07/02/2014 David Zuckerman (UT Austin)

Pseudorandomness from Shrinkage

One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don’t know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs that are essentially best possible without in turn improving the lower bounds.

More specifically, say that a circuit family has shrinkage exponent Gamma if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Gamma + o(1)}. Our PRG uses a seed of length s^{1/(Gamma + 1) + o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths:

1. For de Morgan formulas, seed length s^{1/3+o(1)};
2. For formulas over an arbitrary basis, seed length s^{1/2+o(1)};
3. For read-once de Morgan formulas, seed length s^{.234…};
4. For branching programs of size s, seed length s^{1/2+o(1)}.

The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s=O(n).
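
As a sanity check on the seed lengths listed above: plugging in what are, to my understanding, the standard shrinkage exponents (Gamma = 2 for de Morgan formulas, Gamma = 1 for formulas over an arbitrary basis and for branching programs, and Gamma ≈ 3.27 for read-once de Morgan formulas; these values are not stated in the abstract itself), the generic bound $s^{1/(\Gamma+1)+o(1)}$ instantiates as

$s^{1/(2+1)} = s^{1/3}, \qquad s^{1/(1+1)} = s^{1/2}, \qquad s^{1/(3.27+1)} \approx s^{0.234}.$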

Joint work with Russell Impagliazzo and Raghu Meka.


6/25/2014 Rocco Servedio (Columbia University)

A complexity theoretic perspective on unsupervised learning

We introduce and study a new type of learning problem for probability distributions over the n-dimensional Boolean hypercube.  A learning problem in our framework is defined by a class C of Boolean functions over the hypercube; in our model the learning algorithm is given uniform random satisfying assignments of an unknown function f in C, and its goal is to output a high-accuracy approximation of the uniform distribution over the space of satisfying assignments of f.  We discuss connections between the existence of efficient learning algorithms in this framework and evading the “curse of dimensionality” for more traditional density estimation problems.

As our main results, we show that the two most widely studied classes of Boolean functions in computational learning theory — linear threshold functions and DNF formulas — have efficient distribution learning algorithms in our model. Our algorithm for linear threshold functions runs in time poly(n,1/epsilon) and our algorithm for polynomial-size DNF runs in quasipolynomial time.  On the other hand, we also prove complementary hardness results which show that under cryptographic assumptions, learning monotone 2-CNFs, intersections of 2 halfspaces, and degree-2 PTFs are all hard. This shows that our algorithms are close to the limits of what is efficiently learnable in this model.

Joint work with Anindya De and Ilias Diakonikolas.


6/18/2014 Haim Kaplan (Tel-Aviv University)

Adjacency labeling schemes and induced-universal graphs

We describe a way of assigning \emph{labels} to the vertices of any undirected graph on up to $n$ vertices, each composed of $n/2+O(1)$ bits, such that given the labels of two vertices, and no other information regarding the graph, it is possible to decide whether or not the vertices are adjacent in the graph. This is optimal, up to an additive constant, and constitutes the first improvement in almost 50 years over the $n/2+O(\log n)$ bound of Moon. As a consequence, we obtain an \emph{induced-universal} graph for $n$-vertex graphs containing only $O(2^{n/2})$ vertices, which is optimal up to a multiplicative constant, solving an open problem of Vizing from 1968. We obtain similar tight results for directed graphs, tournaments and bipartite graphs.


6/17/2014 Amit Sahai (UCLA)

NOTE UNUSUAL DATE AND TIME: Tuesday, 2:30-3:30 in Titan

Advances in Program Obfuscation

Is it possible for someone to keep a secret, when an adversary can see how every neuron in her brain behaves, even while she is thinking about her secret? The problem of Program Obfuscation asks the analogue of this question for software: can we write software that uses a secret built into its code, and yet keep this secret safe from an adversary that obtains the entire source code of the software? The secret should remain hidden no matter what the adversary does with the code, including modifying or executing it. For decades, achieving this goal for general programs remained elusive.

This changed with our recent work (FOCS 2013), which gave the first candidate constructions of secure general-purpose program obfuscation. These new constructions are based on novel mathematical ideas. However, especially because these ideas are new, we must ask the question: How can we gain confidence in the security of these new constructions? This talk will discuss recent works that address this question.


6/11/2014 Bernhard Haeupler (MSR)

Making Conversations Robust to Noise Made Easy and Rate Optimal

Interactive Coding schemes add redundancy to any interactive protocol (= conversation) such that it can be correctly executed over a noisy channel which can corrupt any eps fraction of the transmitted symbols.

The fact that coding schemes that tolerate a constant fraction of errors even exist is a surprising result of Schulman. His and most subsequent coding schemes achieve this feat by using tree codes, intricate structures proven to exist by Schulman but for which no efficient constructions or encoding/decoding procedures are known. Until the recent work of Kol and Raz [STOC '13], all these schemes required a large (unspecified) constant-factor overhead in their communication. Kol and Raz showed that if errors are random then a rate approaching one can be achieved; in particular, they prove a 1 – Theta(\sqrt{H(eps)}) upper and lower bound.

This talk will show that one can break this bound even for adversarial errors. In particular, we give coding schemes that achieve a rate of 1 – Theta(\sqrt{\eps}) for random or oblivious channels, and a rate of 1 – Theta(\sqrt{eps log log 1/eps}) for fully adversarial channels. We conjecture these bounds to be tight. The coding schemes are extremely natural and simple in their design and essentially consist of both parties having their original conversation as-is (no coding!!), interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack.

This will be an interactive board talk. 😉 Come and participate!


6/4/2014 No Theory Seminar (STOC week)


5/28/2014 Paris Siminelakis (Stanford University)

Convex Random Graph Models

Realistic models of random graphs are important both for design purposes (predicting the average performance of different protocols/algorithms) and for network inference (extracting latent group membership, clustering, etc.). There are by now thousands of papers defining different random graph models. I will present a principled framework for deriving random graph models by dramatically generalizing the approach of Erdos-Renyi in defining their classic G(n,m) model. Our central principle is to study the uniform measure over symmetric sets of graphs, i.e., sets that are invariant under a group of transformations. Our main contribution is to derive natural sufficient conditions under which the uniform measure over a symmetric set of graphs (i) collapses asymptotically to a distribution in which edges appear independently, and (ii) allows the probability of each edge to be computed from the property through the solution of an optimization problem. Time permitting, I will also present an application of this work to resolving the fragility of the navigable graphs produced by Kleinberg’s augmentation scheme.

Based on joint work with Dimitris Achlioptas.


5/21/2014 Chen Avin (Ben-Gurion University; visiting professor at Brown and ICERM)

Homophily and the Emergence of a Glass Ceiling Effect in Social Networks

The glass ceiling may be defined as “the unseen, yet unbreakable barrier that keeps minorities and women from rising to the upper rungs of the corporate ladder, regardless of their qualifications or achievements”. Although undesirable, it is well documented that many societies and organizations exhibit a glass ceiling.

In this paper we formally define and study the glass ceiling effect in social networks and provide a natural mathematical model that (partially) explains it. We propose a biased preferential attachment model that has two types of nodes and is based on three well-known social phenomena: i) a minority of females in the network, ii) rich get richer (preferential attachment), and iii) homophily (liking those who are the same). We prove that our model exhibits a strong glass ceiling effect and that all three conditions are necessary, i.e., removing any one of them causes the model not to exhibit a glass ceiling.

Additionally, we present empirical evidence from a student-mentor network of researchers (based on DBLP data) that exhibits all the above properties: female minority, preferential attachment, homophily and a glass ceiling.

Joint work with: Barbara Keller, Zvi Lotker, Claire Mathieu, David Peleg, Yvonne-Ann Pignolet


5/14/2014 Swastik Kopparty  (Rutgers University)

Simultaneous approximation for Constraint Satisfaction Problems

Given k collections of 2SAT clauses on the same set of variables V, can we find one assignment to the variables in V that satisfies a large fraction of clauses from each collection? We consider such simultaneous constraint satisfaction problems, and design the first nontrivial approximation algorithms for them.

Our main result is that for every CSP F, when k =  log^{O(1)}(n), there is a polynomial time constant factor “Pareto” approximation algorithm for k simultaneous MAX-F-CSP instances. In contrast, we show that for every nontrivial Boolean CSP, when k = log^{\Omega(1)}(n), no nonzero approximation factor for k simultaneous MAX-F-CSP instances can be achieved in polynomial time (assuming the Exponential Time Hypothesis).

Joint work with Amey Bhangale and Sushant Sachdeva


5/7/2014 Shubhangi Saraf  (Rutgers University)

Lower bounds for bounded depth arithmetic circuits

In the last few years there have been several exciting results related to depth reduction of arithmetic circuits and strong lower bounds for bounded depth arithmetic circuits. I will survey these results and highlight some of the main challenges and open directions in this area.

4/23/2014 Motty Perry (Hebrew University of Jerusalem)

Implementing the “Wisdom of the Crowd”

We study a novel mechanism design model in which agents arrive sequentially and each chooses one action from a set of actions with unknown rewards. The information revealed by the principal affects the incentives of the agents to explore and generate new information. We characterize the optimal disclosure policy of a planner whose goal is to maximize social welfare. One interpretation of our result is the implementation of what is known as the “wisdom of the crowd”. This topic has become increasingly relevant with the rapid spread of the Internet over the past decade.

Joint with Ilan Kremer and Yishay Mansour

4/16/2014 Greg Valiant (Stanford)

An Automated Inequality Prover and Instance Optimal Identity Testing

This talk will have two parts.  In the first part, I’ll discuss the problem of verifying the identity of a distribution: given the description of a distribution, p, over a discrete support, how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p=q from the case that the total variation distance (L1 distance) is at least eps? In joint work with Paul Valiant, we resolve this question, up to constant factors, on an instance-by-instance basis: there exist universal constants c, c’ and a function f(p,eps) on distributions and error parameters, such that our tester distinguishes the two cases using f(p,eps) samples with success probability 2/3, but no tester can distinguish the case that p=q from the case that the total variation distance is at least c*eps when given c’*f(p,eps) samples. The function f(p,eps) is upper bounded by the 2/3-norm of p divided by eps^2, but is more complicated. This result significantly generalizes and tightens previous results: since distributions of support at most n have 2/3-norm bounded by sqrt(n), this result immediately shows that for such distributions, O(sqrt(n)/eps^2) samples suffice, matching the (tight) results for the case that p is the uniform distribution over support n.
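
To unpack the norm in the bound above (standard definitions, not specific to this paper): $\|p\|_{2/3} = (\sum_i p_i^{2/3})^{3/2}$, and for the uniform distribution $U_n$ on a support of size n this recovers exactly the $\sqrt{n}$ bound quoted above:

$\|U_n\|_{2/3} = \big(\textstyle\sum_{i=1}^{n} (1/n)^{2/3}\big)^{3/2} = (n^{1/3})^{3/2} = \sqrt{n}.$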

The second part of the talk will focus on the main analysis tool that we leverage to obtain the testing results.  The analysis of our simple testing algorithm involves several hairy inequalities; to enable this analysis, we give a complete characterization of a general class of inequalities, generalizing Cauchy-Schwarz, Hölder’s inequality, and the monotonicity of L_p norms. Our characterization is of a perhaps non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand. We do not believe such a characterization has appeared in the literature, and have found its computational nature extremely useful.

4/02/2014 Thomas Vidick (Simons Institute)

Fully device independent quantum key distribution

Quantum cryptography promises levels of security that are impossible to replicate in a classical world. Can this security be guaranteed even when the quantum devices on which the protocol relies are untrusted?

This central question in quantum cryptography dates back to the early nineties when the challenge of achieving device independent quantum key distribution, or DIQKD, was first formulated.

In this talk I will give a positive answer to this challenge by exhibiting a robust protocol for DIQKD and rigorously proving its security. The proof of security is based on a fundamental property of quantum entanglement, called monogamy. The resulting protocol is robust: while assuming only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and from any adversary’s laboratory, it achieves a linear key rate and tolerates a constant noise rate in the devices.

The talk will be at an introductory level and accessible without prior knowledge of quantum information or cryptography.

Based on joint work with Umesh Vazirani.

3/26/2014 No Seminar
3/19/2014 No Seminar

3/12/2014 Gilad Tsur (TAU)

Fast Affine Template Matching

In this work we consider approximately matching a template to a grayscale image under affine transformations. We give theoretical results and find that our algorithm is surprisingly successful in practice.

Given a grayscale template M_1 of dimensions n_1 \times n_1 and a grayscale image M_2 of dimensions n_2 \times n_2, our goal is to find an affine transformation that maps pixels from M_1 to pixels in M_2, minimizing the sum-of-absolute-differences error. We present sublinear algorithms that give an approximate result for this problem; that is, we perform this task while querying as few pixels from both images as possible, and give a transformation that comes close to minimizing the difference.

Our major contribution is an algorithm for a natural family of images, namely, smooth images. We call an image smooth when the total difference between neighboring pixels is O(n). For such images we provide an approximation of the distance between the images to within an additive error of \epsilon, using a number of queries that depends polynomially only on 1/\epsilon and on n_2/n_1.
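
As a cartoon of the query model (my own simplification, not the algorithm from the paper): for a fixed candidate affine map T, the normalized sum-of-absolute-differences can be estimated to small additive error by sampling pixel locations instead of scanning both images.

```python
import random

def estimate_sad(M1, M2, T, n_samples=2000):
    """Monte Carlo estimate of the normalized sum-of-absolute-differences
    between template M1 and image M2 under an affine map T (a function taking
    template coordinates to integer image coordinates). Pixel intensities are
    assumed to lie in [0, 1]; out-of-range targets count as error 1."""
    n1, n2 = len(M1), len(M2)
    total = 0.0
    for _ in range(n_samples):
        x, y = random.randrange(n1), random.randrange(n1)
        u, v = T(x, y)
        if 0 <= u < n2 and 0 <= v < n2:
            total += abs(M1[x][y] - M2[u][v])
        else:
            total += 1.0
    # Standard Hoeffding bounds give additive error eps with O(1/eps^2) samples.
    return total / n_samples
```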

The implementation of this sublinear algorithm works surprisingly well. We performed several experiments on three different datasets, and got very good results, showing resilience to noise and the ability to match real-world templates to images.

Joint work with Simon Korman (TAU), Daniel Reichman (WIS), and Shai Avidan (TAU).

3/5/2014     No Seminar
2/26/2014   No Seminar

2/21/2014 Anup Rao (University of Washington)
NOTE UNUSUAL DAY AND TIME (Thursday, 10:30 AM)

Circuits with Large Fan-In

We consider boolean circuits where every gate in the circuit may compute an arbitrary function of k other gates, for a parameter k. We give an explicit function f : {0, 1}^n → {0, 1} that requires at least Ω(log^2 n) non-input gates in this model when k = 2n/3. When the circuit is restricted to being depth 2, we prove a stronger lower bound of n^Ω(1), and when it is restricted to being a formula, our lower bound is strengthened to Ω(n^2 /k log n) gates.

Our model is connected to some well known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the Number-On-Forehead model in communication complexity, or for bounded depth circuits in AC0, or for bounded depth monotone circuits, or extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would prove new time-space tradeoffs for branching programs and impossibility results for (fan-in 2) circuits with linear size and logarithmic depth. In particular, our lower bound gives a different proof for the best known time-space tradeoff for oblivious branching programs.

Joint work with Pavel Hrubes.

2/5/2014 Mariana Raykova (SRI)

Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits 

In this work, we study indistinguishability obfuscation and functional encryption for general circuits:

Indistinguishability obfuscation requires that given any two equivalent circuits $C_0$ and $C_1$ of similar size, the obfuscations of $C_0$ and $C_1$ should be computationally indistinguishable.

In functional encryption, ciphertexts encrypt inputs $x$ and keys are issued for circuits $C$. Using the key $\SK_C$ to decrypt a ciphertext $\CT_x=\enc(x)$, yields the value $C(x)$ but does not reveal anything else about $x$. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually.

We give constructions for indistinguishability obfuscation and functional encryption that support all polynomial-size circuits.  We accomplish this goal in three steps:

1. We describe a candidate construction for indistinguishability obfuscation for $NC^1$ circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.

2. We show how to use indistinguishability obfuscation for $NC^1$ together with Fully Homomorphic Encryption (with decryption in $NC^1$) to achieve indistinguishability obfuscation for all circuits.

3. Finally, we show how to use indistinguishability obfuscation for circuits, public-key encryption, and non-interactive zero knowledge to achieve functional encryption for all circuits.  The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.

Joint work with Sanjam Garg, Craig Gentry, Shai Halevi, Amit Sahai, Brent Waters


1/22/2014 George Varghese (Microsoft Research)

Reconciling Differences

If you and I both have a million song titles, of which almost all are the same, how can we efficiently communicate the songs that are different? I will describe a new and practical algorithm (joint work with D. Eppstein, M. Goodrich, and F. Uyeda) for computing the set difference using communication proportional to the difference, linear computation, and small latency. A key component is a new estimator for the set difference that outperforms earlier estimators, such as MinWise sketches, for small values of the set difference. Potential applications include trading blocks in a peer-to-peer environment, link-state routing, and data deduplication. I will show that the similarity to the “peeling algorithm” used in, say, Tornado codes is not surprising, because there is a reduction from set difference to coding.

In the second part I will describe generalizations to reconciling sequences under the edit distance metric (joint work with J. Ullman), and to reconciling sets on graphs (a generalization of the celebrated rumor-spreading problem; joint work with N. Goyal and R. Kannan). A “Steiner” version of the graph problem suggests new network coding problems. If time permits, I will describe a simple connection that shows that the basic data structure can be used to design new error-correcting codes that are efficiently decodable (joint work with M. Mitzenmacher). I will describe some open problems in this space.
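
The abstract does not spell out the data structure, so the following is only my own illustrative sketch, in the spirit of the peeling connection mentioned above: a small table of count/XOR cells that two parties can compute locally and subtract, so that common elements cancel and the (small) difference is recovered by peeling “pure” cells.

```python
import hashlib

K = 3        # cells per element
CELLS = 64   # table size; should be a few times the expected difference

def _hashes(x):
    h = hashlib.sha256(str(x).encode()).digest()
    idx = [int.from_bytes(h[4*i:4*i+4], "big") % CELLS for i in range(K)]
    chk = int.from_bytes(h[16:24], "big")  # checksum used to detect pure cells
    return idx, chk

def make_table(items):
    table = [[0, 0, 0] for _ in range(CELLS)]  # [count, xor of keys, xor of checksums]
    for x in items:
        idx, chk = _hashes(x)
        for i in set(idx):
            table[i][0] += 1
            table[i][1] ^= x
            table[i][2] ^= chk
    return table

def decode_difference(ta, tb):
    """Peel the cell-wise difference of two tables; returns (only_in_a, only_in_b).
    Decoding may fail if the difference is too large for CELLS."""
    diff = [[a - b, xa ^ xb, ca ^ cb] for (a, xa, ca), (b, xb, cb) in zip(ta, tb)]
    only_a, only_b = set(), set()
    progress = True
    while progress:
        progress = False
        for cell in diff:
            cnt, key, chk = cell
            if cnt in (1, -1) and _hashes(key)[1] == chk:  # pure cell
                (only_a if cnt == 1 else only_b).add(key)
                idx, _ = _hashes(key)
                for i in set(idx):  # remove the recovered element everywhere
                    diff[i][0] -= cnt
                    diff[i][1] ^= key
                    diff[i][2] ^= chk
                progress = True
    return only_a, only_b

A = set(range(1000))
B = (A - {3, 17}) | {2001}
da, db = decode_difference(make_table(A), make_table(B))
print(sorted(da), sorted(db))  # [3, 17] [2001]
```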

1/15/2014 Iftach Haitner (Tel-Aviv University)

Coin Flipping of Any Constant Bias Implies One-Way Functions

We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS ’11], who proved this implication for protocols with bias ~.207. Unlike the result of Haitner and Omri, our result holds also for weak coin-flipping protocols.

Joint work with Itay Berman and Aris Tentes.

*** Winter Break 11/21/2013-1/14/2014***

11/20/2013 Daniel Reichman (Weizmann Institute, Israel)

Smoothed analysis on connected graphs

The main paradigm of smoothed analysis on graphs suggests that for any large graph G in a certain class of graphs, slightly perturbing the edges of G at random (usually by adding a few random edges to G) typically results in a graph with much nicer properties.

In this talk we discuss smoothed analysis on trees, or equivalently on connected graphs. A connected graph G on n vertices can be a very bad expander, can have very large diameter and very high mixing time, and may have no long paths. The situation changes dramatically when \epsilon n random edges are added on top of G: the resulting graph G* has, with high probability, the following properties:

– its edge expansion is at least c/log n;

– its diameter is O(log n);

– its vertex expansion is at least c/log n;

– it has a linearly long path;

– its mixing time is O(log^2n)

(the last three items assuming the base graph G has bounded degrees). All of the above estimates are asymptotically tight.

Joint work with Michael Krivelevich (Tel Aviv) and Wojciech Samotij (Tel Aviv/Cambridge).

11/13/2013 Ankit Sharma (CMU)

Multiway Cut

Multiway cut is a generalization of the min-cut problem to more than two terminals. Given a set of terminals in a graph, the objective is to assign each vertex to some terminal while minimizing the number of ‘cut’ edges — edges whose endpoints are assigned to different terminals. The special case of this problem with two terminals is the “max-flow/min-cut problem”. With three or more terminals, the problem is NP-hard.

The problem has a rich history in approximation algorithms. Starting with a 2-approximation by Dahlhaus et al. in 1994, the problem received a major improvement in a paper by Calinescu et al., who presented a relaxation of the problem and a 1.5-approximation based on it. It was subsequently shown that it is UGC-hard to beat the integrality gap of this relaxation. The rounding schemes for the relaxation were also subsequently improved, and in a recent result, Buchbinder et al. (STOC 2013) introduced a new rounding scheme that gave a 1.32388-approximation. In work with Jan Vondrak, we first present the best combination of the rounding schemes used by Buchbinder et al., and show that it is limited to achieving a factor of 1.30902 (= (3+sqrt(5))/4). We then introduce a new rounding scheme and show that the new combination of rounding schemes achieves an approximation factor close to 1.297. Under UGC, it is NP-hard to go below 1.14.

This is a joint work with Jan Vondrak.

Bio: Ankit Sharma is a graduate student at Carnegie Mellon University. He is advised by Avrim Blum and Anupam Gupta. His research focuses on approximation algorithms and algorithmic game theory.

11/6/2013 Ashwinkumar Badanidiyuru Varadaraja (Cornell)

Bandits with Knapsacks

Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in machine learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising, to dynamic pricing. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called “bandits with knapsacks”, that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.

We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel “balanced exploration” paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors.

Joint work with Robert Kleinberg, Alex Slivkins. Appeared at FOCS 2013.

10/30/2013 NO SEMINAR (FOCS)

10/25/2013 Adam Smith (Penn State)
NOTE UNUSUAL LOCATION (TELSTAR) AND TIME (1:30 PM)

Coupled-Worlds Privacy: Exploiting Adversarial Uncertainty in Statistical Data Privacy

In this talk, I will present a new framework for defining privacy in statistical databases that enables reasoning about and exploiting adversarial uncertainty about the data. Roughly, our framework requires indistinguishability of the real world in which a mechanism is computed over the real dataset, and an ideal world in which a simulator outputs some function of a “scrubbed” version of the dataset (e.g., one in which an individual user’s data is removed). In each world, the underlying dataset is drawn from the same distribution in some class (specified as part of the definition), which models the adversary’s uncertainty about the dataset.

We argue that our framework provides meaningful guarantees in a broader range of settings as compared to previous efforts to model privacy in the presence of adversarial uncertainty. We present several natural, “noiseless” mechanisms that satisfy our definitional framework under realistic assumptions on the distribution of the underlying data.

Joint work with Raef Bassily, Adam Groce and Jonathan Katz, to appear at FOCS 2013.

10/24/2013 Omri Weinstein (Princeton)
NOTE UNUSUAL LOCATION (LUNA) and TIME (10:30 AM)

Information complexity and applications

Over the past three decades, communication complexity has found applications in nearly every area of computer science, and constitutes one of the few known techniques for proving unconditional lower bounds. Developing tools in communication complexity is therefore a promising approach for making progress in other computational models such as circuit complexity, streaming, data structures, and privacy to mention a few.

One striking example of such a tool is information theory, introduced by Shannon in the late 1940’s in the context of the one-way data transmission problem. Shannon’s work revealed the intimate connection between information and communication, namely, that the amortized transmission cost of a random message is equal to the amount of information it contains. This compression theory, however, does not readily carry over to interactive setups, where two (or more) parties must engage in a multi-round conversation to accomplish a task.

The goal of our ongoing research is to extend this theory, develop the right tools, and understand how information behaves in interactive setups, such as the communication complexity model. In this introductory talk, I will give an overview of Information Complexity, an interactive analogue of Shannon’s theory. I will describe some of the main open problems in this emerging field, and some of the interesting applications we found, including an exact bound on the communication complexity of the Set Disjointness function (0.48n), and how information helped us understand the limits of parallel computation.

10/23/2013 Jonathan Ullman (Harvard)
NOTE UNUSUAL LOCATION (LUNA)
(time is unchanged)

Fingerprinting Codes and the Price of Approximate Differential Privacy

We show new lower bounds on the sample complexity of (eps, delta)-differentially private algorithms that accurately answer large sets of counting queries.  A counting query on a database D in ({0,1}^d)^n has the form “What fraction of the individual records in the database satisfy the property q?”  We show that in order to answer an arbitrary set of >> nd counting queries, Q, on D to within error +/- alpha it is necessary that n > Omega~(d^{1/2} log|Q| / alpha^2 eps).  This bound is optimal up to poly-logarithmic factors, as demonstrated by the Private Multiplicative Weights algorithm of Hardt and Rothblum (FOCS’10).  In particular, our lower bound is the first to show that the sample complexity required for accuracy and (eps, delta)-differential privacy is asymptotically larger than what is required merely for accuracy, which is O(log|Q| / alpha^2).  In addition, we show that our lower bound holds for the specific case of k-way marginals (where |Q|~(2d)^k) when alpha is a constant.

Our results rely on the existence of short fingerprinting codes (Boneh-Shaw, CRYPTO’95), which we show are closely connected to the sample complexity of differentially private data release.  We also give a new method for combining certain types of sample complexity lower bounds into stronger lower bounds.

Joint work with Mark Bun and Salil Vadhan.

10/16/2013 Justin Thaler (Simons Institute, UC Berkeley)

Time-Optimal Interactive Proofs for Circuit Evaluation

Considerable attention has recently been devoted to the development of protocols for verifiable computation. These protocols enable a computationally weak verifier to offload computations to a powerful but untrusted prover, while providing the verifier with a guarantee that the prover performed the computations correctly. Despite substantial progress, existing implementations fall short of full practicality, with the main bottleneck typically being the extra effort required by the prover to return an answer with a guarantee of correctness.

I will describe very recent work that addresses this bottleneck by substantially reducing the prover’s runtime in a powerful interactive proof protocol originally due to Goldwasser, Kalai, and Rothblum (GKR), and previously refined and implemented by Cormode, Mitzenmacher, and Thaler.

This talk will include a detailed technical overview of the GKR protocol and the algorithmic techniques underlying its efficient implementation.

10/9/2013 Nikhil Srivastava (MSR India)

(This will be a two-hour talk with a 15-minute break in between. The first part will be self-contained for the most part.)

Interlacing Families, Ramanujan Graphs, and the Kadison-Singer Problem

We introduce a new type of existence argument based on random polynomials and use it to prove the following two results.

(1) Expander graphs are very sparse graphs which are nonetheless very well-connected, in the sense that their adjacency matrices have large spectral gap. There is a limit to how large this gap can be for a d-regular graph, and graphs which achieve the limit are called Ramanujan graphs. A beautiful number-theoretic construction of Lubotzky-Phillips-Sarnak and Margulis shows that infinite families of Ramanujan graphs exist for every d=p+1 where p is prime, leaving open the question of whether they exist for other degrees. We prove that there exist infinite families of bipartite Ramanujan graphs of every degree bigger than 2. We do this by proving a variant of a conjecture of Bilu and Linial about the existence of good 2-lifts of every graph.

(2) The Kadison-Singer problem is a question in operator theory which arose while trying to make certain assertions in Dirac’s formulation of quantum mechanics mathematically rigorous. Over several decades, this question was shown to be equivalent to several discrepancy-type conjectures about finite matrices, with applications in signal processing, harmonic analysis, and computer science. We prove a strong variant of the conjecture due to Nik Weaver, which says that every set of vectors satisfying some mild conditions can be divided into two sets each of which approximates the whole set spectrally.

Both proofs are based on two significant ingredients: a new existence argument, which reduces the existence of the desired object to bounding the roots of the expected characteristic polynomial of a certain random matrix, and systematic techniques for proving sharp bounds on the roots of such polynomials. The techniques are mostly elementary and based on tools from the theory of real stable polynomials.

Joint work with Adam Marcus and Dan Spielman.

10/2/2013 Eli Gafni (UCLA)

Adaptive Register Allocation with a Linear Number of Registers

We give an adaptive algorithm in which processes use multi-writer, multi-reader registers to acquire exclusive write access to their own single-writer, multi-reader registers. It is the first such algorithm that uses a number of registers linear in the number of participating processes. Previous adaptive algorithms require at least Omega(n^{3/2}) registers.

Joint work with: Carole Delporte-Gallet (Paris 7), Hugues Fauconnier (Paris 7), and Leslie Lamport (MSR-SVC).

10/1/2013 Milan Vojnovic (MSR Cambridge)
NOTE UNUSUAL DAY (Tuesday) AND TIME (10:30AM)

Cooperation and Efficiency in Utility Maximization Games

We consider a framework for studying the effect of cooperation on the quality of outcomes in utility maximization games. This is a class of games that accommodates, as a special case, a game where individuals make strategic investments of effort across a set of available projects. A key feature of interest in such environments is the effect of strategic coalitional deviations on the value produced. In the talk, we shall discuss how the recently developed framework of smooth games allows one to derive price of anarchy bounds for utility maximization games. Specifically, we shall discuss a novel concept of coalitional smoothness and show how it implies strong price of anarchy bounds in utility maximization games.

This talk is based on joint works with Yoram Bachrach, Vasilis Syrgkanis and Eva Tardos.

9/25/2013 Vijay V. Vazirani (Georgia Tech)

Dichotomies in Equilibrium Computation: Markets Provide a Surprise

Equilibrium computation is among the most significant additions to the theory of algorithms and computational complexity in the last decade — it has its own character, quite distinct from the complexity of optimization problems.

Our contribution to this evolving theory can be summarized in the following sentence: natural equilibrium computation problems tend to exhibit striking dichotomies. The dichotomy for Nash equilibrium, showing a qualitative difference between 2-Nash and k-Nash for k > 2, has been known for some time. We establish a dichotomy for market equilibrium.

For this purpose, we need to define the notion of Leontief-free functions, which help capture the joint utility of a bundle of goods that are substitutes, e.g., bread and bagels. We note that when goods are complements, e.g., bread and butter, the classical Leontief function does a splendid job. Surprisingly enough, for the former case, utility functions had been defined only for special cases in economics, e.g., the CES utility function. A new min-max relation supports the claim that our notion is well-founded.

We were led to our notion from the high vantage point provided by an algorithmic approach to market equilibria.

Note: Joint work with Jugal Garg and Ruta Mehta.

9/18/2013 Jelani Nelson (Harvard)

OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings

An “oblivious subspace embedding” (OSE) is a distribution over matrices S such that for any low-dimensional subspace V, with high probability over the choice of S, ||Sx||_2 approximately equals ||x||_2 (up to 1+eps multiplicative error) for all x in V simultaneously. Sarlos in 2006 pioneered the use of OSEs for speeding up algorithms for several numerical linear algebra problems. Problems that benefit from OSEs include: approximate least squares regression, low-rank approximation, l_p regression, approximating leverage scores, and constructing good preconditioners.

We give a class of OSE distributions we call “oblivious sparse norm-approximating projections” (OSNAP) that yield matrices S with few rows that are also extremely sparse, yielding improvements over recent work in this area by (Clarkson, Woodruff STOC ’13). In particular, we show S can have O(d^2) rows and 1 non-zero entry per column, or even O(d^{1+gamma}) rows and poly(1/gamma) non-zero entries per column for any desired constant gamma > 0. When applying the latter bound, for example, to the approximate least squares regression problem of finding x to minimize ||Ax – b||_2 up to a constant factor, where A is n x d for n >> d, we obtain an algorithm with running time O(nnz(A) + d^{omega + gamma}). Here nnz(A) is the number of non-zero entries in A, and omega is the exponent of square matrix multiplication.
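
To illustrate the “one non-zero entry per column” regime, here is a minimal numpy sketch of a sparse sign embedding in that spirit; the row count, the test, and the helper name are my own choices, not the paper's:

```python
import numpy as np

def sparse_sign_embedding(m, n, rng=None):
    """Return an m x n matrix S with exactly one non-zero entry (+1 or -1) per
    column, placed in a uniformly random row. Applying S to an n-vector costs
    O(n), and computing S*A costs O(nnz(A)) for a sparse matrix A."""
    if rng is None:
        rng = np.random.default_rng(0)
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S[rows, np.arange(n)] = signs
    return S

# Quick check that the norm of a vector from a low-dimensional subspace is
# roughly preserved when the number of rows is on the order of d^2.
n, d = 10000, 10
U, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(n, d)))
S = sparse_sign_embedding(4 * d * d, n)
x = U @ np.random.default_rng(2).normal(size=d)
print(np.linalg.norm(S @ x) / np.linalg.norm(x))  # should be close to 1
```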

Our main technical result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: i.e. we show that for any U in R^{n x d} with orthonormal columns and random sparse S with appropriately chosen entries and sufficiently many rows, all singular values of SU lie in the interval [1-eps, 1+eps] with good probability.

Joint work with Huy Lê Nguyễn (Princeton).

9/11/2013 Graham Cormode (University of Warwick)

Small summaries for Big Data

In dealing with big data, we often need to look at a small summary to get the big picture. Over recent years, many novel techniques have been developed which allow important properties of large distributions to be extracted from compact and easy-to-build summaries. This talk highlights some examples of different types of summarization: sampling, sketching, and special-purpose. It concludes by outlining the road ahead for further development and adoption of such summaries.

9/4/2013 Aleksander Madry (EPFL)

Navigating Central Path with Electrical Flows: from Flows to Matchings, and Back

We describe a new way of employing electrical flow computations to solve the maximum flow and minimum s-t cut problems. This approach draws on ideas underlying path-following interior-point methods (IPMs) – a powerful tool in convex optimization, and exploits certain interplay between maximum flows and bipartite matchings.

The resulting algorithm provides improvements over some long-standing running time bounds for the maximum flow and minimum s-t cut problems, as well as the closely related bipartite matching problem. Additionally, we establish a connection between the primal-dual structure of electrical flows and the convergence behavior of IPMs when applied to flow problems. This connection enables us to overcome the notorious Omega(sqrt(m))-iteration convergence barrier that all known interior-point methods suffer from.

8/28/2013 Shai Vardi (TAU)

Local Computation Algorithms and Local Mechanism Design

The talk is divided into two parts. In the first part I will give an introduction to Local Computation Algorithms (LCAs). LCAs implement query access to a global solution to computational problems, using polylogarithmic time and space. I will also discuss how to construct LCAs using a reduction from online algorithms.

In the second part I will explore Local Mechanism Design – designing truthful mechanisms that run in polylogarithmic time and space. I will focus on local scheduling algorithms.

The talk is based on joint works with Noga Alon, Yishay Mansour, Ronitt Rubinfeld, Aviad Rubinstein and Ning Xie

8/26/2013 Kai-Min Chung (Institute of Information Science, Academia Sinica, Taiwan)

Interactive Coding, Revisited

How can we encode a communication protocol between two parties to become resilient to adversarial errors on the communication channel? This question dates back to the seminal works of Shannon and Hamming from the 1940’s, initiating the study of error-correcting codes (ECC). But even if we encode each message in the communication protocol with a “good” ECC, the error rate of the encoded protocol becomes poor (namely O(1/m), where m is the number of communication rounds). Towards addressing this issue, Schulman (FOCS ’92, STOC ’93) introduced the notion of interactive coding. We argue that whereas the method of separately encoding each message with an ECC ensures that the encoded protocol carries the same amount of information as the original protocol, this may no longer be the case if using interactive coding. In particular, the encoded protocol may completely leak a player’s private input, even if it would remain secret in the original protocol. Towards addressing this problem, we introduce the notion of knowledge-preserving interactive coding, where the interactive coding protocol is required to preserve the “knowledge” transmitted in the original protocol. Our main results are as follows.
  • The method of separately applying ECCs to each message is essentially optimal: No knowledge-preserving interactive coding scheme can have an error rate of 1/m, where m is the number of rounds in the original protocol.
  • If we restrict to computationally-bounded (polynomial-time) adversaries, then assuming the existence of one-way functions (resp. subexponentially-hard one-way functions), for every eps > 0 there exists a knowledge-preserving interactive coding scheme with constant error rate and information rate 1/n^eps (resp. 1/polylog(n)), where n is the security parameter; additionally, achieving an error rate of even 1/m requires the existence of one-way functions.
  • Finally, even if we restrict to computationally-bounded adversaries, knowledge-preserving interactive coding schemes with constant error rate can have an information rate of at most o(1/log n). This result applies even to non-constructive interactive coding schemes.
Joint work with Rafael Pass and Sidharth Telang.

8/21/2013 Thomas Steinke (Harvard)

Pseudorandomness for Regular Branching Programs via Fourier Analysis

We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $O(\log^2 n)$, where $n$ is the length of the branching program. The previous best seed length known for this model was $n^{1/2+o(1)}$, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of $s^{1/2+o(1)}$ for arbitrary branching programs of size $s$). Our techniques also give seed length $n^{1/2+o(1)}$ for general oblivious, read-once branching programs of width $2^{n^{o(1)}}$, which is incomparable to the results of Impagliazzo et al.

Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width $w$ has Fourier mass at most $(2w^2)^k$ at level $k$, independent of the length of the program.

Joint work with Omer Reingold and Salil Vadhan. See http://eccc.hpi-web.de/report/2013/086/

8/14/2013 Renato Paes Leme (MSR SV)

Efficiency Guarantees in Auctions with Budgets

In settings where players have a limited access to liquidity, represented in the form of budget constraints, efficiency maximization has proven to be a challenging goal. In particular, the social welfare cannot be approximated by a factor better than the number of players. Therefore, the literature has mainly resorted to Pareto-efficiency as a way to achieve efficiency in such settings. While successful in some important scenarios, in many settings it is known either that there is exactly one incentive-compatible auction that always outputs a Pareto-efficient solution, or that no truthful mechanism can always guarantee a Pareto-efficient outcome. Traditionally, impossibility results can be avoided by considering approximations. However, Pareto-efficiency is a binary property (it is either satisfied or not), which does not allow for approximations.
In this paper we propose a new notion of efficiency, called \emph{liquid welfare}. This is the maximum amount of revenue an omniscient seller would be able to extract from a given instance. We explain the intuition behind this objective function and show that it can be 2-approximated by two different auctions. Moreover, we show that no truthful algorithm can guarantee an approximation factor better than 4/3 with respect to the liquid welfare, and provide a truthful auction that attains this bound in a special case.
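For concreteness, the liquid welfare of an allocation $x$ is usually formalized as $\mathrm{LW}(x) = \sum_i \min(v_i(x_i), B_i)$, where $v_i$ is bidder $i$’s valuation, $x_i$ the bundle allocated to bidder $i$, and $B_i$ bidder $i$’s budget; the benchmark is the maximum of $\mathrm{LW}(x)$ over feasible allocations. (This formula is our paraphrase of the verbal definition above, not a quote from the paper.)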
Importantly, the liquid welfare benchmark also overcomes impossibilities for some settings. While it is impossible to design Pareto-efficient auctions for multi-unit auctions where players have decreasing marginal values, we give a deterministic $O(\log n)$-approximation for the liquid welfare in this setting.

(Joint work with Shahar Dobzinski)
arxiv: http://arxiv.org/abs/1304.7048

8/7/2013 Abhradeep Guha Thakurta (Stanford and MSR SV)

(Near) Dimension Independent Differentially Private Learning

In this talk I will present some of the recent developments in the area of differentially private machine learning. More specifically, I will present results which provide exponential improvement (in terms of dimensions) over the error guarantees of existing differentially private learning algorithms (namely, output perturbation, objective perturbation and private follow the perturbed leader). In fact, for some of these algorithms the error bounds have no explicit dependence on the dimension.

I will also provide experimental results which support these error bounds.

Joint work with Prateek Jain from Microsoft Research India.

7/31/2013 Amitabh Trehan (Technion)

Networks that Fix Themselves aka Self-Healing Networks

Given a connected graph, two players play a turn-based game: First, the red guy removes a node (and therefore its adjoining edges too); then the blue guy adds edges between the remaining nodes. What edges should the blue guy add so that over a whole run of the game, the network remains connected, no node gets too many new edges, and the distance between any pair of nodes (i.e. the network stretch) does not blow up by much? Now, imagine that the nodes in the graph are computers and the graph is a distributed network; the nodes themselves are the blue guys, but they do not know anybody beyond the nodes they share an edge with. Solving such problems is the essence of self-healing distributed networks.

We shall present the distributed self-healing model, which is especially applicable to reconfigurable networks such as peer-to-peer and wireless mesh networks, and present fully distributed algorithms that can ‘heal’ certain global and topological properties using only local information. ForgivingTree [PODC 2008] and Forgiving Graph [PODC 2009; DC 2012] use a ‘virtual graphs’ approach, maintaining connectivity, low degree increase and closeness of nodes (i.e. diameter, stretch). Xheal [PODC 2011; Xheal: localized self-healing using expanders] further maintains expansion and spectral properties of the network. We present a fully distributed implementation in the LOCAL message passing model. However, we are working on ideas to allow even more efficient implementations and stronger guarantees.

Joint works with Thomas P Hayes, Jared Saia, Navin Rustagi and Gopal Pandurangan.

7/24/2013 Haim Kaplan (Tel Aviv University)

Submatrix maximum queries in Monge matrices and Monge partial matrices, and their applications

We describe a data structure for submatrix maximum queries in Monge matrices or partial Monge matrices, where a query seeks the maximum element in a contiguous submatrix of the given matrix. The structure, for an $n \times n$ Monge matrix, takes $O(n \log n)$ space, $O(n \log^2 n)$ preprocessing time, and answers queries in $O(\log^2 n)$ time. For partial Monge matrices the space and preprocessing grow by $\alpha(n)$ (the inverse Ackermann function), and the query remains $O(\log^2 n)$. Our design exploits an interpretation of the column maxima in a Monge (resp., partial Monge) matrix as an upper envelope of pseudo-lines (resp., pseudo-segments).
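
As a reminder of the setting (our own illustration, not part of the data structure): the Monge condition can be verified on adjacent 2x2 submatrices, and works on maximum queries often use the reversed, “inverse Monge”, inequality. A minimal Python check:

    def is_monge(A):
        # Monge condition: A[i][j] + A[i+1][j+1] <= A[i][j+1] + A[i+1][j] for all
        # adjacent pairs of rows and columns; this implies it for all i < i', j < j'.
        n, m = len(A), len(A[0])
        return all(A[i][j] + A[i + 1][j + 1] <= A[i][j + 1] + A[i + 1][j]
                   for i in range(n - 1) for j in range(m - 1))

    A = [[0, 2, 4, 7],
         [1, 2, 3, 5],
         [3, 3, 3, 4],
         [6, 5, 4, 4]]
    print(is_monge(A))  # True for this hand-picked example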

This data structure has already found a few applications: to dynamic distance oracles in planar graphs, to maximum flow in planar graphs, and to some geometric problems on empty rectangles.

7/17/2013 Moni Naor (The Weizmann Institute)

Cryptography and Data Structures: A Match Made in Heaven

The developments of cryptography and complexity theory often go hand in hand. In this talk I will survey the connection of cryptography with a different area of computer science: data structures. There are numerous cases where developments in one area have been fruitfully applied in the other. Early examples include Hellman’s Time/Space Tradeoffs from 1980 and there are developments to this day.

7/10/2013 Toniann Pitassi (University of Toronto)

Average Case Lower Bounds for Monotone Switching Networks

(Joint work with Yuval Filmus, Robert Robere and Stephen Cook)

7/3/2013 Andrew V. Goldberg (MSR SV)

The Hub Labeling Algorithm

Given a weighted graph, a distance oracle takes as an input a pair of vertices and returns the distance between them. The labeling approach to distance oracle design is to precompute a label for every vertex so that the distances can be computed from the corresponding labels, without looking at the graph. We survey results on hub labeling (HL), a labeling algorithm that received a lot of attention recently.
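
To make the query mechanism concrete: each vertex stores a label of (hub, distance-to-hub) pairs, and a query returns the minimum of d(u,h) + d(h,v) over hubs h common to both labels; correctness requires the cover property that some shortest u-v path passes through a common hub. A minimal Python sketch with hypothetical labels (the results surveyed in the talk concern how to compute small labels, not this query step):

    def hl_query(label_u, label_v):
        # Each label maps hub -> distance from the vertex to that hub.
        small, large = sorted((label_u, label_v), key=len)
        best = float('inf')
        for hub, d1 in small.items():
            d2 = large.get(hub)
            if d2 is not None:
                best = min(best, d1 + d2)
        return best

    label_u = {'a': 3, 'b': 5, 'c': 2}   # hypothetical labels
    label_v = {'a': 4, 'c': 9, 'd': 1}
    print(hl_query(label_u, label_v))    # 7, realized through hub 'a'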

HL query time and memory requirements depend on the label size. While some graphs admit small labels, one can prove that there are graphs for which the labels must be large. Computing optimal hub labels is hard, but in polynomial time one can approximate them up to a factor of O(log(n)). This can be done for the total label size (i.e., memory required to store the labels), the maximum label size (which determines the worst-case query time), and in general for an Lp norm of the vector induced by the vertex label sizes. One can also simultaneously approximate Lp and Lq norms.

Hierarchical labels are a special class of HL. For networks with small highway dimension, one can compute provably small hierarchical labels in polynomial time. On the other hand, one can prove that for some graphs hierarchical labels are significantly larger than the general ones. A heuristic for computing hierarchical labels leads to the fastest implementation of distance oracles for road networks. One can use label compression to trade off time for space, making the algorithm practical for a wider range of applications. We give experimental results showing that the heuristic hierarchical labels work well on road networks as well as some other graph classes, but not on all graphs. We also discuss efficient implementations of the provably good approximation algorithms and give experimental results.

Finally, we show that the labels can be stored in a database and HL queries can be implemented in SQL, making the algorithm accessible to SQL programmers.

6/26/2013 David Woodruff (IBM Almaden)

Low Rank Approximation and Regression in Input Sparsity Time

We improve the running times of algorithms for least squares regression and low-rank approximation to account for the sparsity of the input matrix. Namely, if nnz(A) denotes the number of non-zero entries of an input matrix A:
– we show how to solve approximate least squares regression given an n x d matrix A in nnz(A) + poly(d log n) time
– we show how to find an approximate best rank-k approximation of an n x n matrix in nnz(A) + n*poly(k log n) time

All approximations are relative error. Previous algorithms based on fast Johnson-Lindenstrauss transforms took at least nd log d or nnz(A)*k time.
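
To convey the flavor of such input-sparsity-time results, here is a toy sketch of the CountSketch-style idea: apply a sparse random sign/hash transform S (computable in time proportional to the number of nonzeros of the input) and solve the much smaller sketched regression problem. The sketch size and constants below are illustrative assumptions, not the paper’s exact construction:

    import numpy as np

    def countsketch(M, m, rng):
        # Hash each row of M to one of m buckets with a random sign; applying the
        # transform touches each nonzero of M once.
        n = M.shape[0]
        buckets = rng.integers(0, m, size=n)
        signs = rng.choice([-1.0, 1.0], size=n)
        SM = np.zeros((m, M.shape[1]))
        np.add.at(SM, buckets, signs[:, None] * M)
        return SM

    rng = np.random.default_rng(0)
    n, d = 10000, 10
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

    m = 20 * d * d                             # poly(d) rows; constant chosen for the demo
    SAb = countsketch(np.hstack([A, b[:, None]]), m, rng)
    x_sketch = np.linalg.lstsq(SAb[:, :d], SAb[:, d], rcond=None)[0]
    x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
    print(np.linalg.norm(x_sketch - x_exact))  # small with high probability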

Joint work with Ken Clarkson.

6/19/2013 Siu On Chan (UC Berkeley)

Approximation Resistance from Pairwise Independent Subgroups

6/10/2013 Prateek Jain (MSR)

1:30-2:30PM. Please note the unusual day (Monday)!

Provable Alternating Minimization methods for Low-rank Matrix Estimation Problems

Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge.

In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However, the overall problem is non-convex, and there has been almost no theoretical understanding of when this approach yields a good result.
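
A bare-bones version of this alternation for matrix completion, written as alternating least squares over the observed entries (a generic sketch with random initialization; the analyzed algorithms rely on specific initialization and sampling assumptions):

    import numpy as np

    def altmin_complete(M_obs, mask, k, iters=50, seed=0):
        # Alternate: fix V and solve a small least-squares problem for each row of U
        # over its observed entries, then fix U and do the same for each column of V.
        rng = np.random.default_rng(seed)
        n1, n2 = M_obs.shape
        U = rng.standard_normal((n1, k))
        V = rng.standard_normal((n2, k))
        for _ in range(iters):
            for i in range(n1):
                obs = mask[i]
                if obs.any():
                    U[i] = np.linalg.lstsq(V[obs], M_obs[i, obs], rcond=None)[0]
            for j in range(n2):
                obs = mask[:, j]
                if obs.any():
                    V[j] = np.linalg.lstsq(U[obs], M_obs[obs, j], rcond=None)[0]
        return U @ V.T

    # Tiny demo: recover a random rank-2 matrix from roughly 60% of its entries.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
    mask = rng.random(M.shape) < 0.6
    print(np.linalg.norm(altmin_complete(M * mask, mask, k=2) - M) / np.linalg.norm(M))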

In this talk, we present one of the first theoretical analyses of alternating minimization methods for several different low-rank matrix estimation problems, e.g., matrix completion and inductive matrix completion. For both of these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.

This is joint work with Praneeth Netrapalli, Sujay Sanghavi, and Inderjit Dhillon.

6/5/2013 *No Talk* (STOC@Stanford)

5/28/2013 *No Talk* (Visions Symposium@Berkeley)

5/21/2013 Daniel Weitzner (MIT)

1:30-2:30PM. Please note the unusual day (Tuesday)!

Real Privacy: Context and Individual Control as the Path to Genuine Privacy Protection for the 21st Century.

Daniel J. Weitzner
Director, Decentralized Information Group
MIT Computer Science and Artificial Intelligence Laboratory

We hear that “your privacy matters to us,” but does anyone really know what that means? As public awareness of privacy grows there is also a growing divergence about what privacy really is. Whether we call privacy a fundamental human right, something that exists in the penumbra of other constitutional rights or a matter of consumer fairness, 20th century implementations of privacy are both unsatisfying for individuals and burdensome for innovators. This is a talk about rediscovering the foundations of privacy: freedom of association, protection from discrimination and limiting the tyranny of large institutions, be they public or private. In returning to privacy basics we see that real privacy depends little on ‘notice’ but places a high value on respect for context and individual control. Pseudo-contractual notions of ‘choice’ are unhelpful to users, but respect for the context of relationships matters. Formalistic misunderstandings about the differences between US and European privacy frameworks are based on the simplistic view that Europe has ‘more’ privacy and the US has ‘less’, leaving us to believe we can choose privacy levels along a linear scale. In reality, we face a more complex set of technical and legal choices about how to enforce highly nuanced rules in complex information relationships. Finally, secrecy through encryption or barriers to the free flow of information are generally fig-leaves over privacy risks, but accountability for how information is actually used can build a world in which real privacy exists alongside a vibrant, open information society.

5/15/2013 Anupam Gupta (CMU)

How to Run your Chores (and Get to Dinner on Time)

In the orienteering problem, we are given a metric space (the distances are supposed to represent travel times between the locations), a start vertex (“home”) and a deadline B, and want to visit as many points as possible using a tour of length at most B. We know constant-factor approximation algorithms for this problem since the work of Blum et al in 2002.

However, suppose it is not enough for us to visit the nodes: upon reaching a location, we also have to wait for some (random) time at each location before we can get the reward. Each such waiting time is drawn from a known probability distribution. What can we do then? In this talk, we will discuss adaptive and non-adaptive approximation algorithms for this stochastic orienteering problem.

This is based on work with Ravi Krishnaswamy, Viswanath Nagarajan, and R.Ravi.

5/8/2013 Vitaly Feldman (IBM Research, Almaden)

Statistical Algorithms and a Lower Bound for Detecting Planted Cliques

We introduce a framework for proving lower bounds on computational problems over distributions, based on a class of algorithms called statistical algorithms. For such algorithms, access to the input distribution is limited to obtaining an estimate of the expectation of any given function on a sample drawn randomly from the input distribution, rather than directly accessing samples. Most natural algorithms of interest in theory and in practice, e.g., moments-based methods, local search, standard iterative methods for convex optimization, MCMC and simulated annealing, are statistical algorithms or have statistical counterparts. Our framework is inspired by and generalizes the statistical query model in learning theory.

Our main application is a nearly optimal lower bound on the complexity of any statistical algorithm for detecting planted bipartite clique distributions (or planted dense subgraph distributions) when the planted clique has size O(n^(1/2-\delta)) for any constant \delta > 0. Variants of these problems have been assumed to be hard in order to prove hardness of other problems and for cryptographic applications. Our lower bounds provide concrete evidence of hardness, thus supporting these assumptions.

Joint work with Elena Grigorescu, Lev Reyzin, Santosh Vempala and Ying Xiao

5/1/2013 Chandra Chekuri (University of Illinois, Urbana-Champaign)

Large-Treewidth Graph Decompositions and Applications

Treewidth is a graph parameter that plays a fundamental role in many structural and algorithmic results. We study the problem of decomposing a given graph $G$ into several node-disjoint subgraphs, where each subgraph has sufficiently large treewidth. We prove two theorems on the tradeoff between the number of the desired subgraphs $h$, and the desired lower bound $r$ on the treewidth of each subgraph. The theorems assert that, given a graph $G$ with treewidth $k$, a decomposition with parameters $h,r$ is feasible whenever $hr^2 \le k/\polylog(k)$, or $h^3r \le k/\polylog(k)$ holds.

The decomposition theorems are inspired by the breakthrough work of Chuzhoy on the maximum edge-disjoint paths problem in undirected graphs and follow-up work that extended the ideas to node-disjoint paths. The goal of the talk is to explain the background for the theorems and their applications to routing, to fixed-parameter tractability, and to Erdos-Posa-type theorems.

The latter applications allow one to bypass the well-known Grid-Minor theorem of Robertson and Seymour. No prior knowledge of treewidth will be assumed.

The decomposition theorems are from a joint paper with Julia Chuzhoy that is to appear in STOC 2013, but the talk is also based on several previous papers on the maximum disjoint paths problem.

4/24/2013 Robert Krauthgamer (Weizmann Institute of Science)

Cutting corners cheaply, or how to remove Steiner points

The main result I will present is that the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which resolves in the affirmative a question posed by Chan, Xia, Konjevod, and Richa (2006). Specifically, for every edge-weighted graph $G=(V,E,w)$ and a subset of terminals $T\subset V$, there is a graph only on the terminals, denoted $G’=(T,E’,w’)$, which is a minor of $G$ and the shortest-path distance between any two terminals is approximately equal in $G’$ and in $G$, namely within factor $O(\log^6|T|)$. Our existence proof actually gives a randomized polynomial-time algorithm.

Our proof features a new variant of metric decomposition. It is well-known that every $n$-point metric space $(X,d)$ admits an $O(\log n)$-separating decomposition, which roughly speaking says there is a randomized partitioning of $X$, with a certain bound on the probability of separating any two points $x,y \in X$.  We introduce an additional requirement, which is a tail bound on the following random variable $Z_P$: the number of clusters of the partition that meet any shortest-path $P$ whose length is not too large.

Joint work with Lior Kamma and Huy L. Nguyen

4/17/2013 Yaron Singer (Google / Harvard)

Adaptive Seeding in Social Networks

The rapid adoption of social networking technologies throughout the past decade is bringing special attention to algorithmic and data mining techniques designed for maximizing information cascades in social networks.  Despite the immense progress which has been made in the past decade, due to limited data access and the structure of social networks, the application of state-of-the-art techniques often results in poor performance.

In this talk we will introduce a new framework we call adaptive seeding.  The framework is a two-stage model which leverages a phenomenon known as the “friendship paradox” in social networks in order to dramatically increase the spread of information cascades.  Our main result shows that constant factor approximations are achievable for the most well-studied models of information spreading in social networks.  The result follows from new techniques and concepts that may be of independent interest for those interested in stochastic optimization and machine learning.

Joint work with Lior Seeman

4/10/2013 Yoram Moses (Technion, on sabbatical at Stanford)

Knowledge as a Window into Distributed Coordination

This talk will review the knowledge-based approach in distributed systems, show a few fundamental connections between knowledge and multi-party coordination, and illustrate how these can provide insight into the interplay between time and communication in enabling coordination. The talk will be self-contained, intended for a general CS audience. The latter part of the talk is based on joint work with Ido Ben Zvi.

3/13/2013 Anupam Datta (CMU)

Naturally Rehearsing Passwords

We introduce quantitative usability and security models to guide the design of password management schemes — systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user’s visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks.

Observing that current password management schemes are either insecure or unusable, we present Shared Cues — a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem in a non-standard manner to achieve these competing goals.

Joint work with Jeremiah Blocki and Manuel Blum at CMU

2/20-3/6/2013: BREAK, no theory seminars are currently planned for these dates.

2/13/2013 Sergiu Hart (Hebrew University)

Dynamics and Equilibrium

An overview of a body of work on dynamical systems in multi-player environments. On the one hand, the natural informational restriction that each participant does not know the payoff functions of the other participants — “uncoupledness” — severely limits the possibilities to converge to Nash equilibria. On the other hand, there are simple adaptive heuristics — such as “regret matching” — that lead in the long run to correlated equilibria, a concept that embodies full rationality. Connections to behavioral economics, neurobiological studies, and engineering, are also mentioned.

1/16/2013 Konstantin Makarychev (MSR Redmond)
Notice Unusual Room: Telstar (usual building, SVC6)

Sorting Noisy Data with Partial Information

I will talk about semi-random models for the Minimum Feedback Arc Set Problem. In the Minimum Feedback Arc Set Problem, we are given a directed graph and our goal is to remove as few edges as possible to make this graph acyclic. This is a classical optimization problem. The best known approximation algorithm, due to Seymour, gives O(log n loglog n) approximation in the worst case. I will discuss whether we can do better in “real life” than in the worst case. To this end, I will present two models that try to capture “real-life” instances of Minimum Feedback Arc Set. Then, I will talk in more detail about one of the models. I will give an approximation algorithm that finds a solution of cost (1+epsilon) OPT + n polylog n, where OPT is the cost of the optimal solution.

Joint work with Yury Makarychev (TTIC) and Aravindan Vijayaraghavan (CMU).

1/9/2013 Shafi Goldwasser (MIT and Weizmann Institute)

Pseudo Deterministic Algorithms

We will present a new type of probabilistic algorithm, which we call pseudo-deterministic: such algorithms cannot be distinguished from deterministic algorithms by a probabilistic polynomial-time observer with black box access.

We will show a necessary and sufficient condition for the existence of such an algorithm, and several examples of Bellagio algorithms which improve on deterministic solutions.

The notion of pseudo-deterministic computations extends beyond sequential polynomial time algorithms, to other domains where the use of randomization is essential such as distributed algorithms and sub-linear algorithms.  We will discuss these extensions.

12/5/2012 Ashwin Badanidiyuru Varadaraja (Cornell)

Fast algorithms for maximizing submodular functions

There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this work we develop general techniques to get very fast approximation algorithms for maximizing submodular functions subject to various constraints. These include speeding up greedy and continuous greedy based algorithms, and a new potential function based local search algorithm to handle multiple constraints.
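
For context, the classical greedy step and its “lazy” variant look roughly as follows; laziness exploits the fact that marginal gains only shrink under submodularity, so stale gains stored in a heap remain valid upper bounds. This is a generic illustration (Minoux-style lazy greedy under a cardinality constraint), not one of the algorithms from the paper:

    import heapq

    def lazy_greedy(f, ground, k):
        # Greedy maximization of a monotone submodular f subject to |S| <= k.
        S, fS = set(), f(set())
        heap = [(-(f({e}) - fS), e) for e in ground]   # stale upper bounds on gains
        heapq.heapify(heap)
        while len(S) < k and heap:
            neg_bound, e = heapq.heappop(heap)
            gain = f(S | {e}) - fS                     # refresh this element's gain
            if not heap or gain >= -heap[0][0]:        # still beats every stale bound
                S.add(e)
                fS += gain
            else:
                heapq.heappush(heap, (-gain, e))       # re-insert with the fresh bound
        return S

    # Toy example: a coverage function, a canonical monotone submodular objective.
    sets = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}, 4: {1, 6}}
    cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
    print(lazy_greedy(cover, sets.keys(), k=2))        # {1, 3}: covers all six items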

(Based on joint work with Jan Vondrak)

11/28/2012 Ilan Lobel (NYU)

Intertemporal Price Discrimination: Structure and Computation of Optimal Policies

We consider the question of how a firm should optimally set a sequence of prices in order to maximize its long-term average revenue, given a continuous flow of strategic customers. In particular, customers arrive over time, are strategic in timing their purchases and are heterogeneous along two dimensions: their valuation for the firm’s product and their willingness to wait before purchasing or leaving.

The customers’ patience and valuation may be correlated in an arbitrary fashion. For this general formulation, we prove that the firm may restrict attention to short cyclic pricing policies, which have length twice the maximum willingness to wait of the customer population. We further establish results on the suboptimality of monotone pricing policies in general, and illustrate the structure of optimal policies. These are, in a typical scenario, characterized by nested sales, where the firm offers partial discounts throughout each cycle, a significant discount halfway through the cycle, and the largest discount at the end of the cycle. From a computational perspective, we exploit the structure of the underlying problem to develop a novel dynamic programming formulation for the problem that computes an optimal pricing policy in polynomial time (in the maximum willingness-to-wait). We further establish a form of equivalence between the problem of pricing for a stream of heterogeneous strategic customers and pricing for a pool of heterogeneous customers who may stockpile units of the product. Joint work with Omar Besbes (Columbia).

11/9/2012 Elchanan Mossel (U C Berkeley)

 Some new proofs of Gaussian and discrete noise stability

I will discuss some new proofs of Borell’s result on Gaussian noise stability and of the “Majority is Stablest” theorem, and new applications that follow from these proofs in hardness of approximation and social choice theory.

11/2/2012  Aravind Srinivasan (University of Maryland)

The Lovasz Local Lemma — constructive and nonconstructive

 Abstract: The Lovasz Local Lemma is a powerful probabilistic tool. We start by reviewing the breakthrough by Moser and Tardos on its constructive aspects, connections to other fields, and extensions due to Haeupler, Saha, and the speaker. We will then outline some very recent further extensions, due to David Harris and the speaker, which are nonconstructive thus far.

10/24/2012 No Talk (FOCS)

10/17/2012 Sanjeev Arora (Princeton)

Is Machine Learning Tractable? — Three Vignettes

Please note unusual time (1-2PM at Titan)

Many tasks in machine learning (especially unsupervised learning) are provably intractable: NP-hard or worse. Nevertheless, researchers have developed heuristic algorithms to try to solve these tasks in practice. In most cases, these algorithms are heuristics with no  provable guarantees on their running time or on the quality of solutions they return.  Can we change this state of affairs?

This talk will suggest that the answer is yes, and describe three of our recent works as illustration. (a) A new algorithm for learning topic models. (It applies to the Latent Dirichlet Allocation model of Blei et al. and also to more general topic models. It provably works under some reasonable assumptions and in practice is up to 50 times faster than existing software like Mallet. It relies upon a new procedure for nonnegative matrix factorization.) (b) What classifiers are worth learning? (Can theory illuminate the contentious question of what binary classifier to learn: SVM, Decision tree, etc.) (c) Provable ICA with unknown Gaussian noise. (An algorithm to provably learn a “manifold” with a small number of parameters but exponentially many “interesting regions.”)

(Based upon joint works with Rong Ge, Ravi Kannan, Ankur Moitra, Sushant Sachdeva.)

10/10/2012 Grigory Yaroslavtsev (Penn State)

Learning and testing submodular functions

Submodular functions capture the law of diminishing returns and can be viewed as a generalization of convexity to functions over the Boolean cube. Such functions arise in different areas, such as combinatorial optimization, machine learning and economics. In this talk we will focus on positive results about learning such functions from examples and testing whether a given function is submodular with a small number of queries.

For the class of submodular functions taking values in a discrete integral range of size R, we show a structural result giving a concise representation for this class. The representation can be described as a maximum over a collection of threshold functions, each expressed by an R-DNF formula. This leads to efficient PAC-learning algorithms for this class, as well as testing algorithms with running time independent of the size of the domain.

Joint work with Sofya Raskhodnikova and Rocco Servedio

10/5/2012, 11AM, Fred Cate (Indiana University Maurer School of Law)

Big Data in Healthcare: The Future of Healthcare Innovation and the Regulations that Will Kill It

Please note unusual day (Friday) and time (11AM)

Personal information is increasingly recognized as the most critical resource for healthcare treatment, research, and management. While healthcare providers generate large amounts of information associated with patient interactions, far more health information, including genetic and behavioral data, is now being generated directly by individuals and as a result of individuals’ interactions with home healthcare and mobile devices, social media sites, and personal health records. These data are critical to the transformation of healthcare and the evolution of truly personalized medicine, yet privacy law imposes an inexplicably restrictive and inconsistent approach to their use. For the past three years, a blue-ribbon panel of medical practitioners and researchers, ethicists, lawyers, technologists, and privacy experts, funded by the NIH, has worked to craft an alternative approach to protecting privacy while facilitating medical research, to try to eliminate the threats posed by privacy regulations to health and privacy today and to the future of medical innovation tomorrow.

Fred H. Cate is a Distinguished Professor and C. Ben Dutton Professor of Law at the Indiana University Maurer School of Law. He is managing director of the Center for Law, Ethics, and Applied Research in Health Information and director of the Center for Applied Cybersecurity Research (a National Center of Academic Excellence in both Information Assurance Research and Information Assurance Education).  A member of Microsoft’s Trustworthy Computing Academic Advisory Board, Cate serves on numerous government and industry advisory and oversight committees, and testifies frequently before congressional committees on information privacy and security issues. He is the author of more than 150 articles and books and is one of the founding editors of the Oxford University Press journal, International Data Privacy Law. He is the PI on the NIH grant “Protecting Privacy in Health Research.”

9/19/2012 Zvika Brakerski (Stanford)

Efficient Interactive Coding Against Adversarial Noise

We study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS ’92, STOC ’93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.

We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity ($\cc$). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\cc)}$. Prior works could not achieve efficient simulation in the adversarial case.

Joint work with Yael Tauman Kalai

9/12/2012 Moritz Hardt (IBM Almaden)

Incoherence and privacy in spectral analysis of data

Matrix incoherence is a frequently observed property of large real-world matrices. Intuitively, the coherence of a matrix is low if the singular vectors of the matrix bear little resemblance to the individual rows of the matrix. We show that this property is quite useful in the design of differentially private approximate singular vector computation and low-rank approximation.

Our algorithms for these tasks turn out to be significantly more accurate under a low coherence assumption than what a well known worst-case lower bound would suggest. While not straightforward to analyze, our algorithms are highly efficient and easy to implement. We complement our theoretical results with several experiments on real and synthetic data.

Based on joint works with Aaron Roth

9/5/2012 Brendan Lucier (MSR-NE)

Stable Pricing and Partitions

In a combinatorial auction, a seller has m items to sell and n buyers willing to purchase, where each buyer has a value for each subset of the items.  In general such auctions are complex, with specification of bids and determination of optimal allocations having exponential complexity in n and/or m.  In some special cases (e.g. the “gross substitutes” condition) there exist ways to set prices on the items for sale so that a socially efficient outcome occurs when each buyer takes his most-preferred set.  Such a wonderful pricing outcome is known as a Walrasian Equilibrium (WE), but unfortunately a WE does not always exist in general.

In this talk I will present some new results (joint with Michal Feldman and Nick Gravin) in which we relax the concept of a pricing equilibrium to the combinatorial setting.  The essential feature of our relaxation is the ability for the seller to pre-partition the items prior to sale, then price the resulting bundles.  I will describe some of the properties of this notion and some of the algorithmic problems that arise and solutions we provide.  In particular, for general buyer values, we give a black-box reduction that converts an arbitrary allocation into such a “stable partition pricing” outcome with only a constant factor loss in social welfare.

8/29/2012 Noam Nisan (MSR SV)

Should Auctions Be Complex?

We consider the menu-size of an auction as a complexity measure and ask whether simple auctions suffice for obtaining high revenue.  For the case of one item and IID bidders, Myerson shows that the answer is “yes”:  the revenue-maximizing auction has a  single deterministically chosen reserve price.  For two (or more) items we show that the answer is “no”: complex auctions can yield infinitely more revenue than simple ones, even for a single bidder with an additive valuation.  However, when the bidder’s values for the two items are independently distributed,  the answer is “approximately yes”:  selling each of two items simply and separately yields at least half the revenue of any auction.

Joint work with Sergiu Hart

8/22/2012 Shiri Chechik (MSR SV)

Fully Dynamic Approximate Distance Oracles for Planar Graphs via Forbidden-Set Distance Labels

A distance oracle is a data structure that provides fast answers to distance queries. Recently, the problem of designing distance oracles capable of answering restricted distance queries, that is, estimating distances on a subgraph avoiding some forbidden vertices, has attracted a lot of attention. In this talk, we will consider forbidden-set distance oracles for planar graphs. I’ll present an efficient compact distance oracle that is capable of handling any number of failures.

In addition, we will consider a closely related notion of fully dynamic distance oracles. In the dynamic distance oracle problem instead of getting the failures in the query phase, we rather need to handle an adversarial online sequence of update and query operations. Each query operation involves two vertices s and t whose distance needs to be estimated. Each update operation involves inserting/deleting a vertex/edge from the graph.

I’ll show that our forbidden set distance oracle can be tweaked to give fully dynamic distance oracle with improved bounds compared to the previously known fully dynamic distance oracle for planar graphs.

Based on a joint work with Ittai Abraham and Cyril Gavoille

8/15/2012 Raghu Meka (IAS)

Constructive discrepancy minimization by walking on the edges.

Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): In any system of n sets in a universe of size n, there always exists a coloring which achieves discrepancy 6\sqrt{n}. The original proof of Spencer was existential in nature, and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure.
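
To fix the quantity under discussion: the discrepancy of a ±1 coloring is the largest imbalance over the sets, and the discrepancy of the set system is the minimum of this over all colorings. A brute-force toy computation of our own (the Edge-Walk algorithm itself is not shown here):

    from itertools import product

    def disc(sets, coloring):
        # Discrepancy of one coloring: max over sets of |sum of colors in the set|.
        return max(abs(sum(coloring[i] for i in s)) for s in sets)

    def min_disc(sets, n):
        # Minimum over all 2^n colorings; feasible only for tiny n.
        return min(disc(sets, c) for c in product([-1, 1], repeat=n))

    sets = [{0, 1}, {1, 2, 3}, {0, 2}, {0, 1, 2, 3}]
    print(min_disc(sets, 4))   # 1 for this toy system; Spencer gives 6*sqrt(n) in general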

In this work we give a new randomized algorithm to find a coloring as in Spencer’s result based on a restricted random walk we call “Edge-Walk”. Our algorithm and its analysis use only basic linear algebra and are “truly” constructive in that they do not appeal to the existential arguments, giving a new proof of Spencer’s theorem and the partial coloring lemma.

Joint work with Shachar Lovett.

8/8/2012 Rocco Servedio (Columbia)

Inverse Problems for Power Indices in Weighted Voting Games

Suppose we must design a weighted voting scheme to be used by n voters to choose between two candidates.  We want the i-th voter to have a certain prescribed amount of “influence” over the final outcome of the vote — for example, the voters may correspond to states with different populations, or shareholders who hold different numbers of shares in a company.  How can we design a weighted voting scheme that gives each voter the prescribed amount of influence?

Of course, in order to even hope to answer such a question we need a well-defined notion of the influence of a voter in a weighted voting scheme. Many such measures of influence have been studied in the voting theory literature; such measures are sometimes called “power indices.” In this talk we’ll consider two of the most popular power indices:  the “Banzhaf indices” (known in theoretical computer science as “Chow Parameters”) and the “Shapley-Shubik indices.” These are two quite different natural ways of quantifying how much influence each voter has in a given weighted voting scheme.
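
For intuition, both indices can be computed by brute force on a tiny, made-up weighted voting game (exponential-time enumeration, purely illustrative; the talk is about the much harder inverse problem):

    from itertools import permutations
    from math import factorial

    def wins(weights, quota, coalition):
        return sum(weights[i] for i in coalition) >= quota

    def banzhaf(weights, quota):
        # Banzhaf index of voter i: fraction of coalitions of the other voters for
        # which adding i turns a losing coalition into a winning one.
        n = len(weights)
        counts = [0] * n
        for m in range(1 << n):
            coal = {i for i in range(n) if m >> i & 1}
            for i in range(n):
                if (i not in coal and not wins(weights, quota, coal)
                        and wins(weights, quota, coal | {i})):
                    counts[i] += 1
        return [c / 2 ** (n - 1) for c in counts]

    def shapley_shubik(weights, quota):
        # Shapley-Shubik index of voter i: fraction of orderings in which i is pivotal.
        n = len(weights)
        counts = [0] * n
        for order in permutations(range(n)):
            running = set()
            for i in order:
                if not wins(weights, quota, running) and wins(weights, quota, running | {i}):
                    counts[i] += 1
                    break
                running.add(i)
        return [c / factorial(n) for c in counts]

    w, q = [4, 3, 2, 1], 6    # hypothetical weights and quota
    print(banzhaf(w, q))
    print(shapley_shubik(w, q))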

As our main results, we’ll describe algorithms that solve the inverse problem of designing a weighted voting scheme for each of these power indices. Specifically,

(1) Given a vector of desired Banzhaf indices for the n voters, our first algorithm efficiently constructs a weighted voting scheme which has Banzhaf indices very close to the target indices (if any such weighted voting scheme exists).  Our result gives an almost doubly exponential (in terms of the closeness parameter) running time improvement over the only previous provably correct solution.

(2) Given a vector of desired Shapley-Shubik indices, our second algorithm efficiently constructs a weighted voting scheme which has Shapley-Shubik indices very close to the target indices (if any such weighted voting scheme exists).  This is the first algorithm for this problem with a poly(n) as opposed to exp(n) runtime.

A common algorithmic ingredient underlies these two results, but the structural results used to prove correctness are very different for the two indices.  Our results for Banzhaf indices are based on structural properties of linear threshold functions and geometric and linear-algebraic arguments about how hyperplanes interact with the Boolean hypercube.  Our results for Shapley-Shubik indices are based on anticoncentration bounds for sums of non-independent random variables.

No background in voting theory is required for the talk.

Based on joint works with Anindya De, Ilias Diakonikolas and Vitaly Feldman.

8/1/2012 Avi Wigderson (IAS Princeton)

Restriction Access, Population Recovery, and Partial Identification

We study several natural problems in which an unknown distribution over an unknown population of vectors needs to be recovered from partial or noisy samples. Such problems naturally arise in a variety of contexts in learning, clustering, statistics, data mining and database privacy, where loss and error may be introduced by nature, inaccurate measurements, or on purpose. We give fairly efficient algorithms to recover the data under fairly general assumptions, when loss and noise are close to the information theoretic limit (namely, nearly completely obliterate the original data).

Underlying one of our algorithms is a new structure we call a partial identification (PID) graph. While standard IDs are subsets of features (vector coordinates) that uniquely identify an individual in a population, partial IDs allow ambiguity (and “imposters”), and thus can be shorter. PID graphs capture this imposter-structure. PID graphs yield strategies for dimension reduction of recovery problems, and for the re-assembly of these local pieces of statistical information into a global one. The combinatorial heart of this work is proving that every set of vectors admits partial IDs with “cheap” PID graphs (and hence efficient recovery). We further show how to find such near-optimal PIDs efficiently.

Time permitting, I will also describe our original motivation for studying these recovery problems above: a new learning model we call “restriction access”. This model aims at generalizing prevailing “black-box” access to functions when trying to learn the “device” (e.g. circuit, decision tree, polynomial, …) which computes them. We propose a “grey-box” access that allows certain partial views of the device, obtained from random restrictions. Our recovery algorithms above allow positive learning results for the PAC-learning analog of our model, for such devices as decision trees and DNFs, which are currently beyond reach in the standard “black-box” version of PAC-learning.

Based on joint works with Zeev Dvir, Anup Rao and Amir Yehudayoff.

7/25/2012 Abhradeep Guha Thakurta (Pennsylvania State University)

Differentially Private Empirical Risk Minimization and High-dimensional Regression

In the recent past there have been several high-profile privacy breaches on machine learning based systems which satisfied various ad-hoc notions of privacy. Examples include the attack on Amazon’s recommendation system by Calandrino et al. in 2011 and the attack on Facebook’s advertisement system by Korolova in 2011. With these breaches in mind, an obvious question that arises is “How can we design learning algorithms with rigorous privacy guarantees?”

In this talk, I will focus on designing convex empirical risk minimization (ERM) algorithms (a special class of learning algorithms) with differential privacy guarantees. In the recent past, differential privacy has emerged as one of the most commonly used rigorous notions of privacy.

My talk will be logically segregated into two parts:

Part a) Private ERM on offline data sets: In this part, I will discuss various approaches for differentially private ERM when the complete data set is available at once (as opposed to the online setting, which I will consider in the next part). One of my main focuses will be two of our new approaches to private ERM in the offline setting: i) an improved objective perturbation algorithm (which improves on the initial algorithm proposed by Chaudhuri et al. in 2008 and 2011) and ii) an online convex programming (OCP) based algorithm. Additionally, I will discuss our first private ERM algorithms in high dimensions (i.e., where the number of data elements is much smaller than the dimensionality of the underlying model parameter).

Part b) Private Online Learning: Online learning involves learning from private data in real-time, due to which the learned model as well as its predictions are continuously changing. We study the problem in the framework of online convex programming (OCP) while preserving differential privacy. For this problem, we provide a generic framework that can be used to convert any given OCP algorithm into its private variant while preserving privacy as well as sub-linear regret, provided that the given OCP algorithm satisfies the following two criteria:  1) linearly decreasing sensitivity, i.e., the effect of the new data points on the learned model decreases linearly, and 2) sub-linear regret. We instantiate our framework with two commonly used OCP algorithms: i) Generalized Infinitesimal Gradient Ascent (GIGA) and ii) Implicit Gradient Descent (IGD).

Joint work with Prateek Jain [MSR, India], Daniel Kifer [Penn State], Pravesh Kothari [University of Texas, Austin] and Adam Smith [Penn State].

7/18/2012 Alex Samorodnitsky (Hebrew University, Jerusalem)

Discrete tori and bounds for linear codes.

Let C be a linear subspace of the Hamming cube H. Let C’ be the dual code. Following Friedman and Tillich, we try to estimate the rate of growth of metric balls in the discrete “torus” T = H/C’ and use this to upper-bound the cardinality of T, and therefore of C.

A notion of discrete Ricci curvature of metric spaces, as defined by Ollivier, turns out to be useful in the cases where C’ has local structure (that is C is locally correctable / locally testable).

This approach leads to different (and, we would like to think, easier) proofs of some known upper bounds and to some occasional improvements in the bounds.

Joint work with Eran Iceland.

7/11/2012 Aviv Zohar (MSR SVC)

Critiques of Game Theory

Can game theorists predict social and political outcomes successfully? Should we listen when a game theorist argues that Israel should attack Iran to prevent it from obtaining nuclear weapons? How useful has game theory been in designing interactions? Do computers help somehow? In this talk I’d like to survey criticisms of Game Theory’s practical applicability. The talk will be non-technical and will assume only basic knowledge of concepts from Game Theory.

7/4/2012 No seminar (holiday)

6/27/2012 No seminar (intern-mentor baseball game)

6/20/2012 Piotr Indyk (MIT)

Faster Algorithms for Sparse Fourier Transform

The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm plays an important role in many areas.

In many applications (e.g., audio, image or video compression), most of the Fourier coefficients of a signal are “small” or equal to zero, i.e., the output of the transform is (approximately) sparse. In this case, there are algorithms that enable computing the non-zero coefficients faster than the FFT. However, in practice, the exponents in the runtimes of these algorithms and their complex structure have limited their applicability to very sparse signals.
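
A small numpy illustration of this input model (a signal whose spectrum is exactly k-sparse, recovered here with the ordinary O(n log n) FFT; the algorithms in the talk aim to do this in time closer to O(k log n)):

    import numpy as np

    n, k = 1024, 5
    rng = np.random.default_rng(0)

    # Plant k random frequencies with random complex coefficients.
    freqs = rng.choice(n, size=k, replace=False)
    coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    t = np.arange(n)
    x = sum(c * np.exp(2j * np.pi * f * t / n) for f, c in zip(freqs, coeffs))

    X = np.fft.fft(x) / n                        # dense FFT for the demo
    support = np.flatnonzero(np.abs(X) > 1e-8)   # exactly the k planted frequencies
    print(sorted(support) == sorted(freqs))      # True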

In this talk, I will describe a new set of algorithms for sparse Fourier Transform. Their key feature is simplicity, which leads to efficient running times with low overhead, both in theory and in practice. One of those algorithms achieves a runtime of O(k log n), where k is the number of non-zero Fourier coefficients of the signal. This improves over the runtime of the FFT for any k = o(n).

Joint work with Haitham Hassanieh, Dina Katabi and Eric Price

6/13/2012 Guy Rothblum (MSR SVC)

How to Compute in the Presence of Leakage

We address the following problem: how to execute any algorithm, for an unbounded number of executions, in the presence of an attacker who gets to observe partial information on the internal state of the computation during executions. This general problem has been  addressed in the last few years with varying degrees of success. It is important for running cryptographic algorithms in the presence of side-channel attacks, as well as for running non-cryptographic algorithms, such as a proprietary search algorithm or a game, on a cloud server where parts of the execution’s internals might be observed.

In this work, we view algorithms as running on a leaky CPU. In each (sub)-computation run on the CPU, we allow the adversary to observe the output of an arbitrary and adaptively chosen length-bounded function on the CPU’s input, output, and randomness.

Our main result is a general compiler for transforming any algorithm into one that is secure in the presence of this family of partial observation attacks (while maintaining the algorithm’s functionality). This result is unconditional, it does not rely on any secure hardware components or cryptographic assumptions.

Joint work with Shafi Goldwasser

6/6/2012 Jasmin Fisher  (MSR Cambridge)

From Coding the Genome to Algorithms Decoding Life

The decade of genomic revolution following the human genome’s sequencing has produced significant medical advances, and yet again, revealed how complicated human biology is, and how much more remains to be understood. Biology is an extraordinarily complicated puzzle; we may know some of its pieces but have no clue how they are assembled to orchestrate the symphony of life, which renders the comprehension and analysis of living systems a major challenge. Recent efforts to create executable models of complex biological phenomena – an approach we call Executable Biology – entail great promise for new scientific discoveries, shedding new light on the puzzle of life. At the same time, this new wave of the future forces computer science to stretch far and beyond, and in ways never considered before, in order to deal with the enormous complexity observed in biology. This talk will focus on our recent success stories in using formal methods to model cell fate decisions during development and cancer, and on-going efforts to develop dedicated tools for biologists to model cellular processes in a visual-friendly way.

5/29/2012 Madhu Sudan (MSR New England)

TBA. Note Unusual Day and Time (Tuesday 11am-12pm)

5/23/2012 No Seminar (STOC week)

5/16/2012 Edith Cohen (AT&T)

How to Get the Most out of Your Sampled Data

Random sampling is an important tool for retaining the ability to query data under resource limitations. It is used to summarize data too large to store or manipulate, and to meet resource constraints on bandwidth or battery power.  Estimators that are applied to the sample provide fast approximate answers to queries posed over the original data, and the value of the sample hinges on the quality of these estimators.

We are interested in queries, such as maximum or range, that span multiple data points.  Sum aggregates of these queries correspond to distinct counts and difference norms and are used for planning or change/anomaly detection over traffic logs and measurement data.  Unbiased estimators are particularly effective: while the estimate of each basic query inevitably has high variance, the relative error decreases with aggregation.

The sample may provide no information, the exact value, or partial information on the queried value.  The Horvitz-Thompson estimator, known to minimize variance for sampling with “all or nothing” outcomes (which reveal either the exact value or no information on the queried value), is not optimal for, or is altogether inapplicable to, such queries.
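
For readers less familiar with it: the Horvitz-Thompson estimator weights each sampled value by the inverse of its inclusion probability, which makes estimates of sums unbiased. A minimal sketch under independent (Poisson) sampling with made-up values and probabilities; the point of the talk is precisely that this estimator is suboptimal or inapplicable for the richer query outcomes described above:

    import random

    def ht_sum_estimate(values, p, rng):
        # Include item i independently with probability p[i]; weight sampled values
        # by 1/p[i].  E[1{i sampled}/p[i]] = 1, so the sum estimate is unbiased.
        return sum(v / pi for v, pi in zip(values, p) if rng.random() < pi)

    values = [10, 3, 25, 7, 60, 1]
    p = [min(1.0, v / 30) for v in values]   # size-biased inclusion probabilities
    est = [ht_sum_estimate(values, p, random.Random(s)) for s in range(2000)]
    print(sum(values), sum(est) / len(est))  # true sum vs. average HT estimate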

We present a general principled methodology for the derivation of (Pareto) optimal nonnegative unbiased estimators over sampled data and aim to understand its potential.  We demonstrate significant improvement in estimation accuracy.

This work is joint with Haim Kaplan (Tel Aviv University).

5/9/2012 Andrew Drucker (MIT)

New Limits to Instance Compression for Hard Problems

Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem.  Studying the power and limits of instance compression involves an intriguing interplay between computational and information-theoretic ideas.

As a representative problem, say we are given a Boolean formula Psi over n variables, and of size m >> n, and we want to determine if Psi is satisfiable.  Can we efficiently reduce this question to an equivalent problem instance of size poly(n), independent of m?  Harnik and Naor (FOCS ’06) and Bodlaender et al. (ICALP ’08) showed that this question has important connections to cryptography and to fixed-parameter tractability theory.  Fortnow and Santhanam (STOC ’08) gave a negative answer for deterministic compression, assuming that NP is not contained in coNP/poly.

We will describe new and improved evidence against efficient instance compression schemes.  Our method applies to probabilistic compression for SAT, and also gives the first evidence against deterministic compression for a number of problems.  To prove our results, we exploit the information bottleneck of an instance compression scheme, using a new method to “disguise” information being fed into a compressive mapping.

5/2/2012 Matthew Weinberg (MIT)

Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization 

We provide a reduction from revenue maximization to welfare maximization in multi-dimensional Bayesian combinatorial auctions with arbitrary (possibly combinatorial) feasibility constraints and independent additive bidders with arbitrary (possibly combinatorial) demand constraints, appropriately extending Myerson’s single-dimensional result [Myerson81] to this setting. We show that every feasible Bayesian auction can be implemented as a distribution over virtual VCG allocation rules. A virtual VCG allocation rule has the following simple form: Every bidder’s bid vector v_i is transformed into a virtual bid vector f_i(v_i), via a bidder-specific function. Then, the allocation maximizing virtual welfare is chosen. Using this characterization, we show how to find and run the revenue-optimal auction given only black box access to an implementation of the VCG allocation rule. We generalize this result to arbitrarily correlated bidders, introducing the notion of a second-order VCG allocation rule.

We obtain our reduction from revenue to welfare optimization via two algorithmic results on reduced form auctions in settings with arbitrary feasibility and demand constraints. First, we provide a separation oracle for determining feasibility of a reduced form auction. Second, we provide a geometric algorithm to decompose any feasible reduced form into a distribution over virtual VCG allocation rules. In addition, we show how to approximately execute both algorithms computationally efficiently given only black box access to an implementation of the VCG allocation rule, providing two fully polynomial-time randomized approximation schemes (FPRASs). With high probability, the separation oracle is correct on all points that are eps-away from the boundary of the set of feasible reduced forms (in the infinity-norm), and the decomposition algorithm returns a distribution over virtual VCG allocation rules whose reduced form is within eps (in the infinity-norm) of any given feasible reduced form that is eps-away from the boundary.

Our mechanisms run in time polynomial in the number of bidder types and not type profiles. This running time is always polynomial in the number of bidders, and scales with the cardinality of the support of each bidder’s value distribution. The running time can be improved to polynomial in both the number of bidders and number of items in item-symmetric settings by making use of results from [Daskalakis-Weinberg 12].

Joint work with Yang Cai and Costis Daskalakis.

4/25/2012 Or Meir (Stanford).

Combinatorial Construction of Locally Testable Codes

An error correcting code is said to be locally testable if there is a test that can check whether a given string is a codeword of the code, or rather far from the code, by reading only a constant number of symbols of the string. Locally Testable Codes (LTCs) were first explicitly studied by Goldreich and Sudan (J. ACM 53(4)) and since then several constructions of LTCs were suggested.

While the best known construction of LTCs achieves very efficient parameters, it relies heavily on algebraic tools and on PCP machinery. We present a new and arguably simpler construction of LTCs that matches the parameters of the best known construction while not relying on either heavy algebra or PCP machinery. However, our construction is a probabilistic one.

4/18/2012 Shayan Oveis Gharan (Stanford).

Multi-way Spectral Partitioning and Higher-Order Cheeger Inequalities.

A basic fact in algebraic graph theory is that the number of connected components in an undirected graph is equal to the multiplicity of the eigenvalue zero in the Laplacian matrix of the graph. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to zero. Cheeger’s inequality and its variants provide an approximate version of the latter fact; they state that a graph has a sparse cut if and only if there are at least two eigenvalues that are close to zero.

In this talk I show that an analogous characterization holds for higher multiplicities, i.e., there are k eigenvalues close to zero if and only if the vertex set can be partitioned into k subsets, each defining a sparse cut. Our result also provides a theoretical justification for clustering algorithms that use the bottom k eigenvectors to embed the vertices into R^k, and then apply geometric considerations to the embedding. Our techniques also yield a nearly optimal tradeoff between the expansion of sets of size n/k, and the k-th smallest eigenvalue of the normalized Laplacian matrix.
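For intuition, a minimal sketch (in Python, with illustrative names) of the embedding step this result justifies: take the bottom k eigenvectors of the normalized Laplacian as coordinates, and hand the embedded points to any geometric clustering routine.

    import numpy as np

    def spectral_embed(A, k):
        """Embed the vertices of a graph with adjacency matrix A into R^k using
        the bottom-k eigenvectors of the normalized Laplacian
        L = I - D^{-1/2} A D^{-1/2}."""
        d = A.sum(axis=1)
        d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))   # guard isolated vertices
        L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
        vals, vecs = np.linalg.eigh(L)                      # eigenvalues in ascending order
        return vals[:k], vecs[:, :k]                        # k small eigenvalues <=> k sparse cuts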

Based on joint work with James R. Lee and Luca Trevisan.

4/11/2012 Jan Vondrak (IBM Almaden).

Hardness of randomized truthful mechanisms for combinatorial auctions.

The problem of combinatorial auctions is one of the basic questions in algorithmic mechanism design: how can we allocate m items to n agents with private valuations of different combinations of items, so that the agents are motivated to reveal their true valuations and the outcome is (approximately) optimal in terms of social welfare? While approximation algorithms exist for several non-trivial classes of valuations, they typically do not motivate agents to report truthfully. The classical VCG mechanism, although truthful, is not computationally efficient. Thus the main question is whether the requirements of truthfulness and computational efficiency can be combined, or whether they are incompatible.

We identify a class of explicit (succinctly represented) submodular valuations, for which it is known that combinatorial auctions without the requirement of truthfulness admit a (1-1/e)-approximation; however, we prove that unless NP \subset P/poly, there is no truthful (even truthful-in-expectation) mechanism for this class with an approximation ratio better than polynomially small in the number of agents. (Previous work by Dobzinski already ruled out deterministic and universally truthful mechanisms for submodular valuations given by a value oracle.)

Joint work with Shaddin Dughmi and Shahar Dobzinski.

4/4/2012 No Seminar.

3/28/2012 1:30-2:30 Justin Thaler (Harvard)

Practical Verified Computation with Streaming Interactive Proofs

A potential problem in outsourcing work to commercial cloud computing services is trust. If we store a large data set with a service provider, and ask them to perform a computation on that data set — for example, to compute the eigenvalues of a large graph, or to solve a linear program on a large matrix derived from a database — how can we know the computation was performed correctly? Obviously we don’t want to compute the result ourselves, and we might not even be able to store all the data locally. This leads to new problems in the streaming paradigm: we consider a streaming algorithm (modeling a user with limited memory and computational resources) that can be assisted by a powerful helper (the service provider). The goal of the service provider is not only to provide the user with the answer, but also to convince the user that the answer is correct.

In this talk, I will give a unified overview of a recent line of work exploring the application of proof systems to problems that are streaming in nature. In all of these protocols, an honest service provider can always convince the data owner that the answer is correct, while a dishonest prover will be caught with high probability. The protocols I will discuss utilize and extend powerful ideas from communication complexity and the theory of interactive proofs, and I will argue that many are highly practical, achieving millions of updates per second and requiring little space and communication.

Joint work with Amit Chakrabarti, Graham Cormode, Andrew McGregor, Michael Mitzenmacher, and Ke Yi

3/21/2012 1:30-2:30 Mehrdad Nojoumian (University of Waterloo)

Secret Sharing Based on the Social Behaviors of Players

Initially, a mathematical model for “trust computation” in social networks is provided. Subsequently, the notion of a “social secret sharing scheme” is introduced in which shares are allocated based on a player’s reputation and the way she interacts with other parties. In other words, this scheme renews shares at each cycle without changing the secret, and allows trusted parties to gain more authority.
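The share allocation and renewal described above sit on top of a standard threshold scheme. As background (an assumption about the underlying primitive, not this paper's contribution), a minimal sketch of Shamir's (t, n) secret sharing:

    import random

    PRIME = 2**127 - 1   # a Mersenne prime; secrets are assumed to be smaller than this

    def share(secret, t, n):
        """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial with
        constant term `secret` at the points x = 1, ..., n."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at zero recovers the secret from any t shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

A player's weight can then be adjusted across cycles, e.g., by giving trusted players more such shares, in the spirit of the reputation-based allocation above.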

Finally, a novel “socio-rational secret sharing” scheme is proposed in which rational foresighted players have long-term interactions in a social context, i.e., players run secret sharing while founding and sustaining a trust network. To motivate this, consider a repeated game such as sealed-bid auctions. If we assume each party has a reputation value, we can then penalize (or reward) players who are selfish (or unselfish) from game to game. This social reinforcement stimulates players to be cooperative.

3/14/2012 1:30-2:30 Venkatesan Guruswami (Carnegie Mellon University)

Lasserre hierarchy, higher eigenvalues, and graph partitioning

Partitioning the vertices of a graph into two (roughly) equal parts to minimize the weight of edges cut is a fundamental optimization problem, arising in diverse applications. Despite intense research, there remains a huge gap in our understanding of the approximability of these problems: the best algorithms achieve a super-constant approximation factor, whereas even a factor 1.1 approximation is not known to be NP-hard.

We describe an approximation scheme for various graph partitioning problems such as sparsest cut, minimum bisection, and small set expansion. Specifically, we give an algorithm running in time n^{O_epsilon(r)} with approximation ratio (1+epsilon)/min(1,lambda_r), where lambda_r is the r’th smallest eigenvalue of the normalized graph Laplacian matrix. This perhaps indicates why even showing very weak hardness for these problems has been elusive, since the reduction must produce hard instances with slowly growing spectra.

Our algorithm is based on a rounding procedure for semidefinite programming relaxations from a strong class called the Lasserre hierarchy. The analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix.

Our methods apply more broadly to optimizing certain Quadratic Integer Programming problems with positive semidefinite objective functions and global linear constraints. This framework includes other notorious problems such as Unique Games, which we again show to be easy when the normalized Laplacian doesn’t have too many small eigenvalues.

Joint work with Ali Kemal Sinop.

3/7/2012 – Canceled (TechFest)

2/29/2012 1:30-2:30 Parikshit Gopalan (MSR-SVC)

The Short Code

The Long Code plays a crucial role in our understanding of the approximability of NP-hard problems. True to its name however, it is rather long. We construct a shorter code which enjoys many of the desirable properties of the Long code and can replace it in some scenarios.

The Short Code is derived from an explicit construction of a graph with several large eigenvalues which is nevertheless a good small-set expander. This answers a question raised by Arora, Barak and Steurer. We present a general recipe for constructing small-set expanders from certain locally testable codes.

Joint work with Boaz Barak, Johann Hastad, Raghu Meka, Prasad Raghavendra and David Steurer.

2/22/2012 1:30-2:30 Aleksander Madry (MSR New-England)

Online Algorithms and the K-server Conjecture

Traditionally, in the problems considered in optimization, one produces the solution only after the whole input is made available. However, in many real-world scenarios the input is revealed gradually, and one needs to make irrevocable decisions along the way while having only partial information on the whole input. This motivates us to develop models that allow us to address such scenarios.

In this talk, I will consider one of the most popular approaches to dealing with uncertainty in optimization: the online model and competitive analysis; and focus on a central problem in this area: the k-server problem. This problem captures many online scenarios – in particular, the widely studied caching problem – and is considered by many to be the “holy grail” problem of the field.

I will present a new randomized algorithm for the k-server problem that is the first online algorithm for this problem that achieves polylogarithmic competitiveness.

Based on joint work with Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor.

2/15/2012 1:30-2:30 Dorothea Wagner (Karlsruhe Institute of Technology)

Algorithm Engineering for Graph Clustering

Graph clustering has become a central tool for the analysis of networks in general, with applications ranging from the field of social sciences to biology and to the growing field of complex systems. The general aim of graph clustering is to identify dense groups in networks. Countless formalizations thereof exist, among them the widespread measure modularity. However, the overwhelming majority of algorithms for graph clustering rely on heuristics, e.g., for some NP-hard optimization problem, and do not allow for any structural guarantee on their output. Moreover, most networks in the real world are not static but evolve over time, and so do their group structures.

The lecture will focus on algorithmic aspects of graph clustering, especially on quality measures and algorithms that are based on the intuition of identifying as clusters dense subgraphs that are loosely connected among one another. We will discuss different quality measures, in particular the quality index modularity, and present an algorithm engineering approach for modularity maximization and related problems.
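For concreteness, a small sketch (in Python, with an illustrative adjacency-dict input format) of the modularity index discussed above, for an unweighted undirected graph:

    def modularity(adj, communities):
        """Newman-Girvan modularity: Q = sum over communities of
        (internal_edges / m) - (degree_sum / (2 m))^2."""
        m = sum(len(nbrs) for nbrs in adj.values()) / 2            # total number of edges
        Q = 0.0
        for comm in communities:
            comm = set(comm)
            internal = sum(1 for u in comm for v in adj[u] if v in comm) / 2
            degree_sum = sum(len(adj[u]) for u in comm)
            Q += internal / m - (degree_sum / (2 * m)) ** 2
        return Q

Maximizing this index is NP-hard, which is why the heuristics and the algorithm engineering techniques discussed in the lecture matter.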

2/9/2012 10:30-11:30 Michael Kapralov (Stanford)

 Algorithms for Bipartite Matching Problems with Connections to Sparsification and Streaming

The need to process massive modern data sets necessitates rethinking of some classical algorithmic solutions from the point of view of modern data processing architectures. Over the past years sparsification has emerged as an important primitive in the algorithmic toolkit for graph algorithms that allows one to obtain a small space representation that approximately preserves some useful properties of the graph. This talk is centered around two topics. First, we give new algorithms for some bipartite matching problems, which use both sparsification and random walks in novel ways. Second, we give efficient algorithms for constructing sparsifiers on modern computing platforms.

In the first part of the talk we consider the problem of finding perfect matchings in regular bipartite graphs, a classical problem with applications to edge-colorings, routing and scheduling. A sequence of improvements over the years has culminated in a linear-time algorithm. We use both sparsification and random walks to obtain efficient sublinear time algorithms for the problem. In particular, we give an algorithm that recovers a perfect matching in O(n log n) time, where n is the number of vertices in the graph, when the graph is given in adjacency array representation. The runtime is within O(log n) of the output complexity, essentially closing the problem. Our approach also yields extremely efficient and easy-to-implement algorithms for edge-coloring bipartite multigraphs and computing the Birkhoff-von-Neumann decomposition of a doubly-stochastic matrix.

In the second part of the talk we describe an efficient algorithm for single pass graph sparsification in distributed stream processing systems such as Twitter’s recently introduced Storm. We also present a novel approach to obtaining spectral sparsifiers, based on a new notion of distance between nodes in a graph related to shortest path distance on random samples of the graph.

Finally, in the last part of the talk we introduce and study a notion of sparsification relevant to matching problems in general graphs, and show applications to the problem of approximating maximum matchings in a single pass in the streaming model.

1/26/2012 10:30-11:30 Gregory Valiant (UC Berkeley)

Algorithmic Solutions to Some Statistical Questions

I will discuss three classical statistical problems for which the computational perspective unlocks insights into the fundamental nature of these tasks, and suggests new approaches to coping with the increasing size of real-world datasets.

The first problem is recovering the parameters of a mixture of Gaussian distributions. Given data drawn from a single Gaussian distribution, the sample mean and covariance of the data trivially yield good estimates of the parameters of the true distribution; if, however, some of the data points are drawn according to one Gaussian, and the rest of the data points are drawn according to a different Gaussian, how can one recover the parameters of each Gaussian component? This problem was first proposed by Pearson in the 1890’s, and, in the last decade, was revisited by computer scientists. In a pair of papers with Adam Kalai and Ankur Moitra, we established that both the sample complexity, and computational complexity of this problem are polynomial in the relevant parameters (the dimension, and the inverse of the desired accuracy).

The second problem, investigated in a series of papers with Paul Valiant, considers the tasks of estimating a broad class of statistical properties, which includes entropy, L_k distances between pairs of distributions, and support size. There are several implications of our results, including resolving the sample complexity of the `distinct elements problem’ (i.e. given a data table with n rows, how many random rows must one query to accurately estimate the number of distinct rows?). We show that on the order of n/log n rows is both necessary and sufficient, improving significantly on both the prior upper and lower bounds for this problem.

Finally I’ll describe some new bounds for the problem of learning noisy juntas (and parities). Roughly, this problem captures the task of determining the `relevant’ variables—for example, given a large table with columns representing the expression of many different genes, and one final column representing the incidence of some medical condition, how can one efficiently find the (possibly small) subset of genes that is relevant to predicting the condition?

1/25/2012 1:30-2:30 Virginia Vassilevska Williams (UC Berkeley)

Multiplying Matrices Faster than Coppersmith-Winograd

In 1987 Coppersmith and Winograd presented an algorithm to multiply two n by n matrices using O(n^{2.3755}) arithmetic operations. This algorithm has remained the theoretically fastest approach for matrix multiplication for 24 years. We have recently been able to design an algorithm that multiplies n by n matrices and uses at most O(n^{2.3727}) arithmetic operations, thus improving the Coppersmith-Winograd running time.

The improvement is based on a recursive application of the original Coppersmith-Winograd construction, together with a general theorem that reduces the analysis of the algorithm running time to solving a nonlinear constraint program. The final analysis is then done by numerically solving this program. To fully optimize the running time we utilize an idea from independent work by Stothers who claimed an O(n^{2.3737}) runtime in his Ph.D. thesis.
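These asymptotically fast algorithms are far from a few lines of code, but the underlying idea of trading multiplications for additions inside a recursive block decomposition is already visible in Strassen's algorithm. A minimal sketch for intuition only (assuming the matrix dimension is a power of two); this is Strassen's construction, not the Coppersmith-Winograd one:

    import numpy as np

    def strassen(A, B, cutoff=64):
        """Multiply square matrices with 7 recursive products instead of 8,
        giving an O(n^2.807) algorithm; below `cutoff` we fall back to BLAS."""
        n = A.shape[0]
        if n <= cutoff:
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C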

The aim of the talk will be to give some intuition and to highlight the main new ideas needed to obtain the improvement.

1/24/2012 10:30-11:30 Roy Schwartz (Technion Israel Institute of Technology)

Submodular Maximization

The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. In addition to the fact that it is common for utility functions in economics and algorithmic game theory to be submodular, such functions also play a major role in combinatorics, graph theory and combinatorial optimization. A partial list of well known problems captured by submodular maximization includes: Max-Cut, Max-DiCut, Max-k-Cover, Generalized-Assignment, several variants of Max-SAT and some welfare and scheduling problems.

Classical works on submodular maximization problems are mostly combinatorial in nature.

Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck in the continuous approach is how to approximately solve a non-convex relaxation for the submodular problem at hand. A simple and elegant method, called “continuous greedy”, successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. We present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for various applications. Some notable immediate implications are information-theoretic tight approximations for Submodular Max-SAT and Submodular-Welfare with k players, for any number of players k, and an improved (1/e)-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints.

We show that continuous methods can be further used to obtain improved results in other settings. Perhaps the most basic submodular maximization problem is the problem of Unconstrained Submodular Maximization, which captures some well studied problems, such as: Max-Cut, Max-DiCut, and some variants of maximum facility location and Max-SAT. Exploiting some symmetry properties of the problem, we present a simple information-theoretic tight (1/2)-approximation algorithm, which unlike previous known algorithms keeps a fractional inner state, i.e., it is based on a continuous approach. We note that our algorithm can be further simplified to obtain a purely combinatorial algorithm which runs only in linear time.
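One way to realize a combinatorial linear-time (1/2)-approximation of the kind alluded to above is the randomized "double greedy" rule sketched below (a hedged illustration that may differ in details from the algorithm of the talk); f is assumed to be a nonnegative submodular set function given as a value oracle.

    import random

    def double_greedy(f, ground_set):
        """Unconstrained submodular maximization: X grows from the empty set,
        Y shrinks from the full ground set, and each element joins X or leaves Y
        with probability proportional to the clipped marginal gains."""
        X, Y = set(), set(ground_set)
        for e in ground_set:
            a = max(f(X | {e}) - f(X), 0.0)     # gain of adding e to X
            b = max(f(Y - {e}) - f(Y), 0.0)     # gain of removing e from Y
            if a + b == 0 or random.random() < a / (a + b):
                X.add(e)
            else:
                Y.discard(e)
        return X                                # X == Y after the last element

For instance, taking f to be the cut function of a graph turns this into a simple randomized heuristic for Max-Cut.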

1/18/2012 1:30-2:30 Paul Ohm (University of Colorado Law School)

Computer Science and the Law

Today more than ever, computer science shapes law and law shapes computer science. Computer scientists pave the way for new computing and networking platforms and applications that then set the stage for new forms of collaboration and conflict. These create opportunities for lawyers and lawmakers to respond. The relationship works in the reverse order, too, when lawyers and lawmakers enact laws in ways that constrain or (less often) expand what computer scientists are allowed to do. This interdisciplinary interplay has taken place in many different specific conflicts, from network neutrality to the crypto wars and from battles over copyright to information privacy, but there is value in looking at these more generally, examining the messy but necessary relationship between computer science and law (and policy too).

In this talk, Paul Ohm, a law professor at the University of Colorado, will examine the relationship between computer science and law and policy. He will draw from his experiences as an undergraduate computer science major, professional systems administrator, Department of Justice computer crimes prosecutor, and leading scholar in cyberlaw and information privacy, to talk about whether and how computer science can influence law and policy and vice versa. He will draw from specific examples including debates over anonymization, domain name seizure, and deep packet inspection. This is intended to be a dynamic and interactive session, one in which the audience will help shape the direction of discussion.

1/12/2012 10:30-11:30 Shiri Chechik (Weizmann Institute of Science)

Compact Routing with Failures

Routing is one of the most fundamental problems in distributed networks. A routing scheme is a mechanism that allows delivering messages between the nodes of the graph.

In this talk I’ll discuss a natural extension of compact routing schemes – routing in the presence of failures. Suppose that some of the links crash from time to time and it is still required to deliver messages between the nodes of the graph, if possible. I’ll present a compact routing scheme capable of handling multiple edge failures.

1/11/2012 1:30-2:30 Balasubramanian Sivan (University of Wisconsin-Madison)

 Bayesian Multi-Parameter Scheduling

We study the makespan minimization problem with unrelated selfish machines. Specifically, our goal is to schedule a given set of jobs on n machines; the machines are strategic agents whose private information is the run-time of each job on them. The goal is to design a mechanism that minimizes makespan — the time taken for the last job to complete. In this strategic setting, strong impossibility results are known, showing that no truthful (anonymous) mechanism can achieve better than a factor-n approximation. We show that under mild Bayesian assumptions it is possible to circumvent such negative results and obtain a constant approximation.

Joint work with Shuchi Chawla, Jason Hartline and David Malec.

1/10/2012 10:30-11:30 Christian Sommer (MIT)

Exact and Approximate Shortest-Path Queries

We discuss the problem of efficiently computing a shortest path between two nodes of a network — a problem with numerous applications. The shortest-path query problem in particular occurs in transportation (route planning and navigation or also logistics and traffic simulations), in packet routing, in social networks, and in many other scenarios. Furthermore, shortest-path problems occur as subproblems in various optimization problems.

Strategies for computing answers to shortest-path queries may involve the use of pre-computed data structures (also called distance oracles) in order to improve the query time. Designing a shortest-path-query processing method raises questions such as: How can these data structures be computed efficiently? What amount of storage is necessary? How much improvement of the query time is possible? How good is the approximation quality (also termed stretch) of the query result? And, in particular, what are the tradeoffs between pre-computation time, storage, query time, and approximation quality?

The talk provides answers to these questions for static networks. In particular, we consider the tradeoff between storage and query time, both from a theoretical and from an experimental perspective. We focus on two application scenarios: First, we discuss shortest-path query methods for planar graphs, motivated by route planning in road networks. Second, we discuss distance oracles and shortest-path query methods for complex networks, motivated by small-world phenomena in social networks and Internet routing. We also outline which methods and techniques can or cannot be extended to more general networks.

Joint work with Takuya Akiba, Wei Chen, Ken-ichi Kawarabayashi, Philip Klein, Shay Mozes, Shang-Hua Teng, Mikkel Thorup, Elad Verbin, Yajun Wang, Wei Yu, and others.

12/14/2011 1:30-2:30 Dan Spielman (Yale)

 Algorithms, Graph Theory and the Solution of Laplacian Linear Equations

We survey several fascinating concepts and algorithms in graph theory that arise in the design of fast algorithms for solving linear equations in the Laplacian matrices of graphs. We will begin by explaining why linear equations in these matrices are so interesting.

The problem of solving linear equations in these matrices motivates a new notion of what it means for one graph to approximate another. This leads to a problem of graph sparsification–the approximation of a graph by a sparser graph. Our algorithms for solving Laplacian linear equations will exploit surprisingly strong approximations of graphs by sparse graphs, and even by trees.

We will survey the roles that spectral graph theory, random matrix theory, graph sparsification, low-stretch spanning trees and local clustering algorithms play in the design of fast algorithms for solving Laplacian linear equations.
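As a baseline for what these fast solvers replace, a dense direct solve in Python (a toy illustration with illustrative names; the solvers surveyed in the talk run in time nearly linear in the number of edges):

    import numpy as np

    def graph_laplacian(edges, n):
        """Build the Laplacian L = D - A of an undirected weighted graph
        given as a list of (u, v, w) edges on vertices 0..n-1."""
        L = np.zeros((n, n))
        for u, v, w in edges:
            L[u, u] += w
            L[v, v] += w
            L[u, v] -= w
            L[v, u] -= w
        return L

    def solve_laplacian(L, b):
        """Solve Lx = b when b sums to zero; the minimum-norm least-squares
        solution handles the all-ones kernel of a connected graph's Laplacian."""
        x, *_ = np.linalg.lstsq(L, b, rcond=None)
        return x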

12/13/2011 1:30-2:30 Dan Spielman (Yale)

Fitting a Graph to Vectors

We ask “What is the right graph to fit to a set of vectors?”

We would like to associate one vertex with each vector, and choose the edges in a natural way.

We propose one solution that provides good answers to standard Machine Learning problems such as classification and regression, that has interesting combinatorial properties, and that we can compute efficiently.

Joint work with Jonathan Kelner and Samuel Daitch.

12/7/2011 1:30-2:30 Salil Vadhan (Harvard)

Computational Entropy

Shannon’s notion of entropy measures the amount of “randomness” in a process. However, to an algorithm with bounded resources, the amount of randomness can appear to be very different from the Shannon entropy. Indeed, various measures of “computational entropy” have been very useful in computational complexity and the foundations of cryptography.

In this talk, I will describe two new measures of computational entropy (“next-bit pseudoentropy” and “inaccessible entropy”) that have enabled much simpler and more efficient constructions of cryptographic primitives from one-way functions. In particular, I will present ideas underlying a construction of pseudorandom generators of seed length O(n^3) from a one-way function on n bits, improving the seed length of O(n^8) in the classic construction of Hastad, Impagliazzo, Levin, and Luby.

Joint works with Iftach Haitner, Thomas Holenstein, Omer Reingold, Hoeteck Wee, and Colin Jia Zheng.

12/6/2011 1:30-2:30 Aleksandar Nikolov (Rutgers)

Optimal Private Halfspace Counting via Discrepancy

In the range counting problem we are given a set $P$ of $n$ points in $d$-dimensional Euclidean space, an integer weight $x_p$ for each point $p$ in $P$, and a collection ${\cal R}$ of ranges, i.e. subsets of $P$. Given a query range, the task is to output the sum of weights of the points belonging to that range. Range counting is a fundamental problem in computational geometry.

We study $(\epsilon, \delta)$-differentially private algorithms for range counting. Our main results are for range spaces given by halfspaces, i.e. the halfspace counting problem. We present an $(\epsilon, \delta)$-differentially private algorithm for halfspace counting in $d$ dimensions which achieves $O(n^{1-1/d})$ average squared error. We also show a matching lower bound of $\Omega(n^{1-1/d})$ for any $(\epsilon, \delta)$-differentially private algorithm.
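For orientation, the standard Gaussian mechanism already answers a single halfspace counting query with $(\epsilon, \delta)$-differential privacy; the paper's algorithm must instead achieve the stated error for all halfspace queries simultaneously. A sketch with illustrative names and conventions:

    import numpy as np

    def private_halfspace_count(points, weights, a, b, eps, delta):
        """Answer one query 'sum of weights of points p with <a, p> <= b' by
        adding Gaussian noise calibrated to sensitivity 1 (neighboring databases
        are assumed to change a single point's weight by at most 1; eps <= 1)."""
        true_count = sum(w for p, w in zip(points, weights) if np.dot(a, p) <= b)
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return true_count + np.random.normal(0.0, sigma)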

Both bounds are obtained using discrepancy theory. Our lower bound approach also yields a lower bound of $\Omega((\log n)^{d-1})$ average squared error for $(\epsilon, \delta)$-differentially private orthogonal range counting, the first known lower bound for this problem. Our upper bound methods yield $(\epsilon, \delta)$-differentially private algorithms for range counting with polynomially bounded shatter function range spaces.

Joint work with S. Muthukrishnan.

11/30/2011 1:30-2:30 Cynthia Dwork (MSR Silicon Valley)

Lipschitz Mappings, Differential Privacy, and Fairness Through Awareness

We study fairness in classification, where individuals are classified, e.g., admitted (or not) to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). We present a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly; and (3) an adaptation for achieving the complementary goal of “fair affirmative action,” which guarantees statistical parity (the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible.  Our approach handles arbitrary classifiers, with arbitrary utilities. We also establish a connection to differential privacy, where similar databases give rise to similar output distributions.

Joint work with Moritz Hardt, Toni Pitassi, Omer Reingold, and Richard Zemel.

11/16/2011 1:30-2:30 Renato Paes Leme (Cornell)

Polyhedral Clinching Auctions and the Adwords Polytope

A central problem in mechanism design is how to deal with budget-constrained agents, and the goal is to produce auctions that are incentive compatible, individually rational, budget feasible and Pareto-optimal. Variations of Ausubel’s clinching auction have produced successful results for multi-unit auctions (Dobzinski et al.) and certain matching markets (Fiat et al.). In this paper, we extend Ausubel’s clinching auction to the setting where the allocation is a general polymatroid given to us by a separation oracle. We also show that the clinching step can be calculated efficiently using submodular minimization. Moreover, we show that polymatroids are the most general setting under which clinching auctions can be applied. Many settings of interest to sponsored search can be expressed as polymatroids. In particular, we define the AdWords polytope, which can be of independent interest.

Furthermore, for the case where only one player is budget constrained, we define an auction for any generic environment satisfying incentive compatibility, individual rationality, budget feasibility and Pareto optimality. The technique here involves approximating a convex set by smooth surfaces, designing an auction for the smooth environments and taking its limit.

For the case of two players with budget constraints and a general polyhedral (but not polymatroidal) environment, we show an impossibility result. As a byproduct, we get an impossibility result for multi-unit auctions with decreasing marginal utilities. It was conjectured by Ausubel that a variation of the clinching auction with budgets would work for this setting; this conjecture was later reinforced by Dobzinski, Lavi and Nisan. We resolve the problem in the negative, via a characterization of Pareto-optimal truthful auctions for polyhedral environments.

This is a joint work with Vahab Mirrokni and Gagan Goel.

11/9/2011 1:30-2:30 Moshe Babaioff (MSR Silicon Valley)

Peaches, Lemons, and Cookies: Designing Auction Markets with Dispersed Information

This paper studies the role of information asymmetries in second price, common value auctions. Motivated by information structures that arise commonly in applications such as online advertising, we seek to understand what types of information asymmetries lead to substantial reductions in revenue for the auctioneer. One application of our results concerns online advertising auctions in the presence of “cookies”, which allow individual advertisers to recognize advertising opportunities for users who, for example, are customers of their websites. Cookies create substantial information asymmetries both ex ante and at the interim stage, when advertisers form their beliefs. The paper proceeds by first introducing a new refinement, which we call “tremble robust equilibrium” (TRE), which overcomes the problem of multiplicity of equilibria in many domains of interest.

Second, we consider a special information structure, where only one bidder has access to superior information, and show that the seller’s revenue in the unique TRE is equal to the expected value of the object conditional on the lowest possible signal, no matter how unlikely it is that this signal is realized. Thus, if cookies identify especially good users, revenue may not be affected much, but if cookies can (even occasionally) be used to identify very poor users, the revenue consequences are severe. In the third part of the paper, we study the case where multiple bidders may be informed, providing additional characterizations of the impact of information structure on revenue. Finally, we consider richer market designs that ensure greater revenue for the auctioneer, for example by auctioning the right to participate in the mechanism.

This is joint work with Ittai Abraham, Susan Athey and Michael Grubb.

11/3/2011 1:30-2:30 Karthik Chandrasekaran (Georgia Tech)

Discrete entropy and the complexity of random integer programming

We consider integer feasibility of random polytopes specified by m random tangential hyperplanes to a ball of radius R centered around an arbitrary point in n-dimensional space. We show that with high probability, the random polytope defined by m=O(n) such constraints contains an integer point provided the radius R is larger than a universal constant.

For the random polytope with m constraints in n-dimensional space, where n < m < 2^O(sqrt(n)), a ball of radius about Omega(sqrt(log (2m/n))) suffices. Moreover, if the polytope contains a ball of radius Omega(log (2m/n)), then we can find an integer solution with high probability (over the input) in randomized polynomial time. Our work provides a connection between integer programming and discrepancy of set systems – in particular, we use the entropy technique from Spencer’s classical result on discrepancy of set systems and build on Bansal’s recent algorithm for finding low-discrepancy solutions efficiently.

This is joint work with Santosh Vempala.

10/28/2011 1:30-2:30 Mihai Patrascu (AT&T Labs Research)

Hashing for Linear Probing

Hash tables are ubiquitous data structures solving the dictionary problem, and they often show up in inner loops, making performance critical.

A hash table algorithm relies crucially on a hash function, which quasirandomly maps a large domain (the input keys) to a small domain (the memory space available). Many “hash tables” (in effect, algorithms for dealing with collisions) have been proposed, the best known being collision chaining, linear probing and cuckoo hashing.

Among these, linear probing is ideally suited for modern computer architectures, which tend to favor linear scans. However, linear probing is quite sensitive to the quality of the hash function and, traditionally, good performance was only guaranteed by using highly independent (but slow) hash functions.

Our finding is that tabulation hashing, despite its low degree of independence, can actually guarantee very robust performance in linear probing. This function is both easy to implement and extremely fast on current hardware (faster than 2 multiplications), offering an ideal solution both in theory and in practice.
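A minimal sketch of simple tabulation hashing feeding a linear-probing table (illustrative Python; a real implementation would use machine-word operations in C and would resize before the table fills):

    import random

    # Simple tabulation: split a 32-bit key into four bytes and XOR together
    # four entries from independent random tables.
    TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

    def tab_hash(key):
        h = 0
        for i in range(4):
            h ^= TABLES[i][(key >> (8 * i)) & 0xFF]
        return h

    class LinearProbingTable:
        def __init__(self, capacity=1024):
            self.cap = capacity
            self.slots = [None] * capacity          # entries are (key, value) pairs

        def insert(self, key, value):
            i = tab_hash(key) % self.cap
            while self.slots[i] is not None and self.slots[i][0] != key:
                i = (i + 1) % self.cap              # scan linearly for a free slot
            self.slots[i] = (key, value)

        def lookup(self, key):
            i = tab_hash(key) % self.cap
            while self.slots[i] is not None:
                if self.slots[i][0] == key:
                    return self.slots[i][1]
                i = (i + 1) % self.cap
            return None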

10/20/2011 1:30-2:30 Roy Schwartz (Technion)

Continuous Methods for Submodular Maximization

The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged.

We present several new techniques and algorithmic tools which are based on the continuous approach, yielding improved approximations for various applications and in some cases information-theoretic tight approximations.

1. The main bottleneck in the continuous approach is how to approximately solve a non-convex relaxation for the submodular problem at hand. A simple and elegant method, called “continuous greedy”, successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. We present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for various applications. Some notable immediate implications are information-theoretic tight approximations for Submodular Max-SAT and Submodular-Welfare with $k$ players, for {\em any} number of players $k$, and an improved $1/e$-approximation for maximizing a non-monotone submodular function subject to a matroid or $O(1)$-knapsack constraints.

2. Consider the Unconstrained Submodular Maximization problem in which we are given a non-negative (and possibly non-monotone) submodular function $f$ over a domain $N$, and the objective is to find a subset $S\subseteq N$ maximizing $f(S)$. This is considered one of the basic submodular optimization problems, generalizing well known problems such as Max-CUT, Max-DICUT and variants of maximum facility location and Max-SAT. We present the first rounding based algorithm for this problem, providing an improved approximation of roughly $0.45$ (a hardness of $0.5+\varepsilon$ for every constant $\varepsilon >0$ was given by Feige et al.). To achieve this goal we present a new algorithmic tool, which given a suboptimal solution $S$ whose value is small in comparison to $OPT$ but is “structurally” similar to $OPT$, improves the value of $S$ solely on this information. When two coupled executions of the unified continuous greedy algorithm are carefully combined with the above algorithmic tool, one can achieve the stated approximation of $0.45$.

Joint work with Moran Feldman and Seffi Naor.

10/19/2011 1:30-2:30 Aranyak Mehta (Google)

Online Matching and Ad Allocation

The spectacular success of search and display advertising and its huge growth potential has attracted the attention of researchers from many aspects of computer science. A core problem in this area is that of Ad allocation, an inherently algorithmic and game theoretic question – that of matching ad slots to advertisers, online, under demand and supply constraints. Put very simply: the better the matching, the more efficient the market.

The seminal algorithmic work on online matching, by Karp, Vazirani and Vazirani, was done over two decades ago, well before this motivation existed. In this talk, I will present an overview of several key algorithmic papers in this area, starting with its purely academic beginnings, to the general problem motivated by the application. The theory behind these algorithms involves new combinatorial, probabilistic and linear programming techniques. Besides the analytical results, I will also touch upon how these algorithmic ideas can be applied in practice.
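For reference, the Ranking algorithm of Karp, Vazirani and Vazirani that starts this line of work fits in a few lines; the input conventions below are illustrative.

    import random

    def ranking_online_matching(offline, online_neighbors):
        """KVV Ranking: offline vertices receive a random rank up front; each
        arriving online vertex is matched to its unmatched neighbor of lowest
        rank (if any). In expectation this is (1 - 1/e)-competitive."""
        order = list(offline)
        random.shuffle(order)
        rank = {u: r for r, u in enumerate(order)}
        match = {}                                   # offline vertex -> online vertex
        for v, nbrs in online_neighbors.items():     # online vertices in arrival order
            free = [u for u in nbrs if u not in match]
            if free:
                match[min(free, key=rank.get)] = v
        return match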

10/11/2011 1:30-2:30 Lorenzo Orecchia (UC Berkeley)

A \tilde{O}(m)-time Spectral Algorithm for Balanced Cut

In the Balanced Cut problem, on input an undirected graph G with m edges, a balance parameter b and a target conductance value \gamma, we are asked to decide whether G has any b-balanced cut of conductance at most \gamma.

Approximation algorithms for Balanced Cut are a fundamental primitive in the design of recursive graph algorithms for a variety of combinatorial and numerical problems. In many of these practical applications, it is of crucial importance for the underlying Balanced-Cut algorithm to run in time as close to linear as possible and have an efficient implementation. For this reason, researchers have focused on spectral methods that guarantee speed and ease of implementation, albeit at the loss of approximation quality on some graphs.

The simplest spectral algorithm for this problem computes the graph’s slowest-mixing eigenvector, removes any unbalanced low-conductance cut it finds and recurses on the rest of the graph. This algorithm achieves an approximation guarantee that is asymptotically optimal for spectral algorithms, but unfortunately, it may need \Omega(n) recursive calls and runs in worst-case quadratic time.

In recent work with Nisheeth Vishnoi and Sushant Sachdeva, we give a spectral algorithm that achieves the same approximation guarantee, by designing a more principled recursion that allows our algorithm to remove unbalanced sparse cuts in O(\log n) stages and to run in time \tilde{O}(m). The main idea behind this improvement is the use of certain random walks as a regularized, more robust analogue of the graph’s slowest-mixing eigenvector. Our second novel contribution is a method to compute the required random-walk vectors in time \tilde{O}(m) that combines ideas in Approximation theory, Lanczos methods, and the linear equation solver by Spielman and Teng.

Our algorithm is thus the first spectral algorithm for Balanced Cut that runs in time \tilde{O}(m) and achieves the asymptotically optimal approximation guarantee for spectral methods, i.e. is able to distinguish between conductance \gamma and \Omega(\sqrt{\gamma}).

10/5/2011 1:30-2:30 Arnab Bhattacharyya (Princeton)

Tight lower bounds for 2-query locally correctable codes over finite fields

A Locally Correctable Code (LCC) is an error correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes) which have received a lot of attention recently due to their many applications and surprising constructions.

In this work, we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form p^Ω(δd) on the length of linear 2-query LCCs over F_p, that encode messages of length d. Our bound improves over the known bound of 2^Ω(δd) which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in Theoretical Computer Science.

We also obtain, as corollaries of our main theorem, new results in incidence geometry over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields and the second is a new analog of Beck’s theorem over finite fields.

Joint work with Zeev Dvir, Shubhangi Saraf, and Amir Shpilka.

9/7/2011 1:30-2:30 Aditya Bhaskara (Princeton)

Delocalization Properties of Eigenvectors of Random Graphs

A lot is known about the spectrum and the distribution of eigenvalues of G(n,p) random graphs. However, many elementary questions regarding the eigenvectors remain open: for instance, do they ‘behave’ like random vectors in $\mathbb{R}^n$? In particular, can we say that each coordinate (in the normalized eigenvector) is at most $C/\sqrt{n}$ in magnitude with high probability? Does an eigenvector have a ‘significant’ mass on a $\delta n$ fraction of the coordinates w.h.p.?

We use recent techniques due to Erdos, Schlein and Yau, and Tao and Vu to answer questions of this nature. Using these results, we answer an open question of Dekel, Lee and Linial on the number of ‘nodal domains’ in the eigenvectors of G(n,p) random graphs, when $p$ is `reasonably large’ (earlier such results were not known even when $p=1/2$).

This is joint work with Sanjeev Arora.

8/31/2011 1:30-2:30 Shahar Dobzinski (Cornell)

(Approximating) Optimal Auctions with Correlated Bidders

We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We show that one can find in poly time a deterministic revenue-maximizing auction that guarantees at least 3/5 of the revenue of the best truthful-in-expectation auction. The proof consists of three steps:

(1) Showing that the optimal truthful-in-expectation auction for a constant number of bidders can be computed in polynomial time.

(2) A “derandomization” algorithm that implements every 2-player truthful-in-expectation mechanism as a universally truthful mechanism.

(3) An analysis of three new auctions that have complementary properties.

Joint work with Hu Fu and Bobby Kleinberg.

8/24/2011 1:30-2:30 Yevgeniy Dodis (NYU)

Leftover Hash Lemma, Revisited

The famous Leftover Hash Lemma (LHL) states that (almost) universal hash functions are good randomness extractors. Despite its numerous applications, LHL-based extractors suffer from the following two drawbacks:

(1) Large Entropy Loss: to extract v bits from distribution X of min-entropy m which are e-close to uniform, one must set v <= m - 2*log(1/e), meaning that the entropy loss L = m - v >= 2*log(1/e).

(2) Large Seed Length: the seed length n of universal hash function required by the LHL must be linear in the length of the source.

Quite surprisingly, we show that both limitations of the LHL — large entropy loss and large seed — can often be overcome (or, at least, mitigated) in various quite general scenarios. First, we show that entropy loss could be reduced to L=log(1/e) for the setting of deriving secret keys for most cryptographic applications, including signatures/MACs, CPA/CCA-secure encryption, etc. Specifically, the security of these schemes gracefully degrades from e to at most e + sqrt(e * 2^{-L}). (Notice that, unlike the standard LHL, this bound is meaningful even for negative entropy loss, when we extract more bits than the min-entropy we have!) Second, we study the soundness of the natural *expand-then-extract* approach, where one uses a pseudorandom generator (PRG) to expand a short “input seed” S into a longer “output seed” S’, and then uses the resulting S’ as the seed required by the LHL (or, more generally, any randomness extractor). Unfortunately, we show that, in general, the expand-then-extract approach is not sound if the Decisional Diffie-Hellman assumption is true.

Despite that, we show that it is sound either: (1) when extracting a “small” (logarithmic in the security of the PRG) number of bits; or (2) in *minicrypt*. Implication (2) suggests that the expand-then-extract approach is likely secure when used with “practical” PRGs, despite lacking a reductionist proof of security!

The paper can be found at http://eprint.iacr.org/2011/088.
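As background, the extraction step in the LHL can be sketched with a random Toeplitz matrix over GF(2) as the universal hash; the entropy-loss accounting that the paper improves is omitted here, and the names are illustrative.

    import secrets

    def toeplitz_extract(source_bits, v):
        """Extract v bits from a weak source (a list of 0/1 values of length n)
        by multiplying it with a random {0,1} Toeplitz matrix; the n + v - 1
        seed bits define the matrix through its constant diagonals."""
        n = len(source_bits)
        seed = [secrets.randbits(1) for _ in range(n + v - 1)]   # public random seed
        out = []
        for i in range(v):
            bit = 0
            for j in range(n):
                bit ^= seed[i - j + n - 1] & source_bits[j]      # T[i][j] = seed[i - j + n - 1]
            out.append(bit)
        return out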

8/23/2011 1:30-2:30 Amos Fiat (Tel-Aviv University)

Beyond Myopic Best Response (For Cournot Competition)

A Nash Equilibrium is a joint strategy profile at which each agent myopically plays a best response to the other agents’ strategies, ignoring the possibility that deviating from the equilibrium could lead to an avalanche of successive changes by other agents. However, such changes could potentially be beneficial to the agent, creating incentive to act non-myopically, so as to take advantage of others’ responses.

To study this phenomenon, we consider a non-myopic Cournot competition, where each firm selects whether it wants to maximize profit (as in the classical Cournot competition) or to maximize revenue (by masquerading as a firm with zero production costs).

The key observation is that profit may actually be higher when acting to maximize revenue: (1) this will depress market prices, (2) which will reduce the production of other firms, (3) which will gain market share for the revenue-maximizing firm, and (4) which will, overall, increase profits for the revenue-maximizing firm. Implicit in this line of thought is that one might take other firms’ responses into account when choosing a market strategy.

The Nash Equilibria of the non-myopic Cournot competition capture this action/response issue appropriately, and this work is a step towards understanding the impact of such strategic manipulative play in markets. We study the properties of Nash Equilibria of non-myopic Cournot competition with linear demand functions and show existence of pure Nash Equilibria, that simple best response dynamics will produce such an equilibrium, and that for some natural dynamics this convergence is within linear time. This is in contrast to the well known fact that best response dynamics need not converge in the standard myopic Cournot competition.

Furthermore, we compare the outcome of the non-myopic Cournot competition with that of the standard myopic Cournot competition. Not surprisingly, perhaps, prices in the non-myopic game are lower and the firms, in total, produce more and have a lower aggregate utility.

Joint work with Elias Koutsoupias, Katrina Ligett, Yishay Mansour, and Svetlana Olonetsky.

8/17/2011 1:30-2:30 Rocco Servedio (Columbia)

Learning and Testing k-Modal Distributions

A k-modal probability distribution over the domain {1,…,N} is one whose  histogram has at most k “peaks” and “valleys”. Such distributions are a natural generalization of the well-studied class of monotone increasing (or monotone decreasing) probability distributions.

We study the problem of learning an unknown k-modal distribution from samples. We also study a related problem in property testing: given access to samples from an unknown k-modal distribution p, determine whether p is identical to q or p is “far” from q. Here q is a k-modal distribution which may be given explicitly or may be available via sample access.

We give algorithms for these problems that are provably close to the best possible in terms of sample and time complexity. An interesting feature of our approach is that our learning algorithms use ingredients from property testing and vice versa.

Joint work with Costis Daskalakis and Ilias Diakonikolas.

8/10/2011 1:30-2:30 Vasilis Gkatzelis (NYU)

Inner Product Spaces for Minsum Coordination Mechanisms

In this work we study the machine scheduling problem of minimizing the weighted sum of completion times of jobs on unrelated machines. We begin by studying the selfish scheduling setting in which each job is controlled by a selfish agent who can strategically choose the machine that will process its job, aiming to minimize its completion time. In order to reduce the inefficiency caused by selfishness, we use local scheduling policies (coordination mechanisms) and analyze their approximation ratio at equilibrium points. Finally, using the intuition acquired from these coordination mechanisms, we develop a simple local search constant-factor approximation algorithm imitating best response dynamics for an appropriately chosen local scheduling policy.

Joint work with: R. Cole, J. Correa, V. Mirrokni, and N. Olver.

8/3/2011 1:30-2:30 Adel Javanmard (Stanford)

Localization from Incomplete Noisy Distance Measurements

We consider the problem of positioning a cloud of points in the Euclidean space R^d, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. Also, it is closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: In the noiseless case, we find a radius r0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.
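As a simplified point of comparison (not the algorithm of the talk, which handles noisy and missing distances via an SDP), classical multidimensional scaling recovers positions up to rigid motion when all pairwise distances are observed:

    import numpy as np

    def classical_mds(D, d):
        """Given the full n x n matrix D of *squared* pairwise distances, recover
        an n x d configuration of points whose distances reproduce D (up to a
        rigid transformation), by eigendecomposing the centered Gram matrix."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        G = -0.5 * J @ D @ J                     # Gram matrix of centered points
        vals, vecs = np.linalg.eigh(G)
        top = np.argsort(vals)[::-1][:d]         # d largest eigenvalues
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))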

This is joint work with Andrea Montanari.

7/27/2011 1:30-2:30 Jon Kleinberg (Cornell)

Which Networks Are Least Susceptible to Cascading Failures? 

Perhaps the most basic model of contagion in a network draws its motivation from epidemic disease: when a node is affected, it has a given probability of affecting its neighbors as well. In recent prior work with Blume, Easley, Kleinberg, and Tardos, we considered how strategic agents should form links when there is a danger that failures can spread through the resulting network according to such a process.

This type of probabilistic contagion, however, represents just a small portion of a much broader space of “threshold cascade models”. This more general class of threshold models has a simple formulation: each node v picks a threshold t(v) according to an underlying distribution, and it fails if at any point in time it has at least t(v) failed neighbors. The expressive power of this framework lies in the way it can capture a range of fundamentally different types of failure processes, including those where exposure to failed nodes increases one’s chance of failing in the future. Despite the simplicity of the formulation, however, it has been very challenging to analyze the failure processes that arise from arbitrary thresholds; even qualitative questions concerning which graphs are the most resilient to cascading failures in these models have been difficult to resolve.

Here we consider the full space of threshold cascade models, and develop a set of new techniques for comparing arbitrary networks with respect to their failure resilience under different distributions of thresholds. We find that the space has a surprisingly rich structure: small shifts in the behavior of the thresholds can favor clustered clique-like graph structures, branching tree-like ones, or even intermediate hybrids.

Joint work with Larry Blume, David Easley, Bobby Kleinberg, and Eva Tardos (to appear at FOCS 2011).

7/20/2011 1:30-2:30 Sigal Oren (Cornell)

Mechanisms for (Mis)allocating Scientific Credit

Scientific communities confer many forms of credit — both implicit and explicit — on their successful members, and it has long been argued that the motivation provided by these forms of credit helps to shape a community’s collective attention toward different lines of research. The allocation of scientific credit, however, has also been the focus of long-documented pathologies: certain research questions are said to command too much credit, at the expense of other equally important questions; and certain researchers (in a version of Robert Merton’s Matthew Effect) seem to receive a disproportionate share of the credit, even when the contributions of others are similar.

Here we show that the presence of each of these pathologies can in fact increase the collective productivity of a community. We consider a model for the allocation of credit, in which individuals can choose among projects of varying levels of importance and difficulty, and they compete to receive credit with others who choose the same project. Under the most natural mechanism for allocating credit, in which it is divided among those who succeed at a project in proportion to the project’s importance, the resulting selection of projects by self-interested, credit-maximizing individuals will in general be socially sub-optimal. However, we show that there exist ways of allocating credit out of proportion to the true importance of the projects, as well as mechanisms that assign credit out of proportion to the relative contributions of the individuals, that lead credit-maximizing individuals to collectively achieve social optimality. These results therefore suggest how well-known forms of misallocation of scientific credit can in fact serve to channel self-interested behavior into socially optimal outcomes.

This is joint work with Jon Kleinberg.

7/13/2011 1:30-2:30 Elette Boyle (MIT)

Leakage-Resilient Coin Tossing

The ability to collectively toss a common coin among n parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than 1/r bias in O(r) rounds (Cleve STOC ’86). In the case of honest majority, in contrast, unconditionally secure O(1)-round protocols for generating common unbiased coins follow from general completeness theorems on multi-party secure protocols in the secure channels model (e.g., BGW, CCD STOC ’88).

However, in the protocols with honest majority, parties must generate and hold local secret values which are assumed to be perfectly hidden from malicious parties: an assumption which is crucial to proving the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets.

In this work, we present an O(1)-round protocol for collectively generating an unbiased common coin, in the presence of leakage on the local state of the honest parties. We tolerate t ≤ (1/3 − ϵ)n computationally unbounded Byzantine faults and, in addition, leakage of an Ω(1) fraction of each honest party’s secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan ’08) adapted to the distributed setting.

This is joint work with Shafi Goldwasser and Yael Tauman Kalai.

7/6/2011 1:30-2:30 Ohad Shamir (MSR New-England)

Information Trade-offs in Learning

When originally formulated in the 1980’s, the classic PAC model of learning focused on how one can learn with a small amount of high-quality data. In contrast, much of machine learning today is done on huge amounts of low-quality data. While in some sense the “total” amount of information provided remains the same, the question is how one can trade-off between the data size and the amount of information provided by any individual example. In this talk, I will discuss two concrete settings where this trade-off occurs. The first deals with learning linear predictors, where we can only obtain a few attributes from any individual example. The second deals with multi-armed-bandits problems, where one can obtain varying amounts of side-information on actions not chosen. The resulting algorithms are theoretically-grounded and completely practical.

Based on joint works with Nicolò Cesa-Bianchi, Shai Shalev-Shwartz and Shie Mannor.

6/22/2011 1:30-2:30 Huy L. Nguyen (Princeton)

Near linear lower bound for dimension reduction in L1

Given a set of n points in L1, how many dimensions are needed to represent all pairwise distances within a given distortion? This dimension-distortion tradeoff is well understood for the L2 norm, where O((log n)/epsilon^2) dimensions suffice to achieve distortion 1+epsilon. In sharp contrast, there is a significant gap between the upper and lower bounds for dimension reduction in L1. A recent result shows that distortion 1+epsilon can be achieved with n/epsilon^2 dimensions. On the other hand, the only lower bounds known are that distortion delta requires n^{Omega(1/delta^2)} dimensions and that distortion 1+epsilon requires n^{1/2 - O(epsilon*log(1/epsilon))} dimensions. In this work, we show the first near-linear lower bounds for dimension reduction in L1. In particular, we show that distortion 1+epsilon requires at least n^{1-O(1/log(1/epsilon))} dimensions.
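For reference, the bounds quoted above can be collected in one display (the labels "prior" and "this work" are editorial):

    \ell_2:\quad O\!\left(\tfrac{\log n}{\epsilon^2}\right) \text{ dimensions suffice for distortion } 1+\epsilon
    \ell_1 \text{ (prior upper bound)}:\quad \tfrac{n}{\epsilon^2} \text{ dimensions suffice for distortion } 1+\epsilon
    \ell_1 \text{ (prior lower bounds)}:\quad n^{\Omega(1/\delta^2)} \text{ for distortion } \delta, \qquad n^{1/2 - O(\epsilon \log(1/\epsilon))} \text{ for distortion } 1+\epsilon
    \ell_1 \text{ (this work)}:\quad n^{1 - O(1/\log(1/\epsilon))} \text{ dimensions are necessary for distortion } 1+\epsilon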

Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP-based proof of Brinkman and Charikar for lower bounds on dimension reduction in L1.

This is joint work with Alexandr Andoni, Moses Charikar, and Ofer Neiman.

6/15/2011 1:30-2:30 Giuseppe F. Italiano (University of Rome “Tor Vergata”)

Finding Strong Bridges and Strong Articulation Points in Linear Time

Given a directed graph G, an edge is a strong bridge if its removal increases the number of strongly connected components of G. Similarly, we say that a vertex is a strong articulation point if its removal increases the number of strongly connected components of G. In this paper, we present linear-time algorithms for computing all the strong bridges and all the strong articulation points of directed graphs, solving an open problem posed in [BFL05].
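The definitions above translate directly into a brute-force check that is handy for small graphs and for testing; this is only a naive quadratic-time baseline using networkx, not the linear-time algorithms of the talk, and the example graph is hypothetical.

    import networkx as nx

    def num_sccs(G):
        return nx.number_strongly_connected_components(G)

    def strong_bridges(G):
        """Edges whose removal increases the number of SCCs (naive check)."""
        base = num_sccs(G)
        bridges = []
        for e in list(G.edges()):
            H = G.copy()
            H.remove_edge(*e)
            if num_sccs(H) > base:
                bridges.append(e)
        return bridges

    def strong_articulation_points(G):
        """Vertices whose removal increases the number of SCCs (naive check)."""
        base = num_sccs(G)
        points = []
        for v in list(G.nodes()):
            H = G.copy()
            H.remove_node(v)
            if num_sccs(H) > base:
                points.append(v)
        return points

    # Hypothetical example: in a directed 4-cycle, every edge is a strong bridge
    # and every vertex is a strong articulation point.
    G = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 0)])
    print(strong_bridges(G), strong_articulation_points(G))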

Joint work with Luigi Laura and Federico Santaroni.

6/14/2011 1:30-2:30 Robert Kleinberg (Cornell)

Network Formation in the Presence of Contagious Risk

There are a number of domains where agents must collectively form a network in the face of the following trade-off: each agent receives benefits from the direct links it forms to others, but these links expose it to the risk of being hit by a cascading failure that might spread over multi-step paths. Financial contagion, epidemic disease, and the exposure of covert organizations to discovery are all settings in which such issues have been articulated.

Here we formulate the problem in terms of strategic network formation, and provide asymptotically tight bounds on the welfare of both optimal and stable networks. We find that socially optimal networks are, in a precise sense, situated just beyond a phase transition in the behavior of the cascading failures, and that stable graphs lie slightly further beyond this phase transition, at a point where most of the available welfare has been lost. Our analysis enables us to explore such issues as the trade-offs between clustered and anonymous market structures, and it exposes a fundamental sense in which very small amounts of “over-linking” in networks with contagious risk can have strong consequences for the welfare of the participants.

Joint work with Larry Blume, David Easley, Jon Kleinberg, and Eva Tardos.

6/9/2011 2:30-3:30 Rasmus Pagh (IT University of Copenhagen)

A New Data Layout For Set Intersection on GPUs

Set intersection is at the core of a variety of problems, e.g. frequent itemset mining and sparse boolean matrix multiplication. It is well known that large speed gains can, for some computational problems, be obtained by using a graphics processing unit (GPU) as a massively parallel computing device. However, GPUs require highly regular control flow and memory access patterns, and for this reason previous GPU methods for intersecting sets have used a simple bitmap representation. This representation requires excessive space on sparse data sets. In this paper we present a new data layout, BATMAP, that is particularly well suited for parallel processing and is compact even for sparse data. We also describe experiments comparing the data structure against CPU alternatives.
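For context, here is the simple bitmap representation the abstract describes as the prior GPU-friendly approach, sketched on the CPU with numpy; the universe size and sets are hypothetical, and this is not the BATMAP layout itself, only the baseline it improves on for sparse data.

    import numpy as np

    UNIVERSE = 1 << 20   # hypothetical universe size

    def to_bitmap(elements):
        """Dense bitmap: one bit per possible element; perfectly regular access,
        but wasteful when the sets are sparse."""
        bits = np.zeros(UNIVERSE // 8, dtype=np.uint8)
        idx = np.asarray(list(elements), dtype=np.int64)
        np.bitwise_or.at(bits, idx // 8, (1 << (idx % 8)).astype(np.uint8))
        return bits

    def intersection_size(bm_a, bm_b):
        # Bitwise AND followed by a popcount over the packed bytes.
        return int(np.unpackbits(bm_a & bm_b).sum())

    a = to_bitmap({3, 17, 1000, 999999})
    b = to_bitmap({17, 42, 999999})
    print(intersection_size(a, b))   # 2

The dense layout gives regular memory access, but even a set of a handful of elements from a universe of 2^20 pays for the full 128 KB bitmap, which is the space blow-up on sparse data that BATMAP is designed to avoid.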

Joint work with Rasmus Resen Amossen.

6/1/2011 1:30-2:30 Vinod Vaikuntanathan (MSR Redmond)

New Paradigms for Efficient Fully Homomorphic Encryption

We present a fully homomorphic encryption scheme that is based solely on the standard learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the classical worst-case hardness of short vector problems on arbitrary lattices. As icing on the cake, our scheme is quite efficient, and has very short ciphertexts.

Our construction introduces a new paradigm and two new techniques for the construction of homomorphic encryption, and improves on previous works in two aspects.

1) We show that “somewhat homomorphic” encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.

2) More importantly, we deviate from the “squashing paradigm” used in all previous works. We introduce a new dimension reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption).

A by-product of our work is a new type of fully homomorphic identity-based encryption scheme.
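As background on the assumption, here is a toy symmetric-key "encrypt one bit under LWE" sketch; the parameters are arbitrary and insecure, and this is only the standard textbook LWE encryption pattern, not the scheme of the talk or its re-linearization and dimension reduction steps.

    import random

    # Toy parameters (hypothetical and far too small to be secure).
    n, q = 16, 3329

    def noise():
        return random.randint(-2, 2)

    def keygen():
        return [random.randrange(q) for _ in range(n)]            # secret s in Z_q^n

    def encrypt(s, bit):
        a = [random.randrange(q) for _ in range(n)]               # fresh randomness
        b = (sum(ai * si for ai, si in zip(a, s)) + noise() + bit * (q // 2)) % q
        return a, b                                               # (a, <a,s> + e + bit*q/2)

    def decrypt(s, ct):
        a, b = ct
        v = (b - sum(ai * si for ai, si in zip(a, s))) % q        # = e + bit*q/2 (mod q)
        return 1 if q // 4 < v < 3 * q // 4 else 0                # round to 0 or q/2

    s = keygen()
    assert all(decrypt(s, encrypt(s, m)) == m for m in (0, 1, 1, 0))

Roughly speaking, adding two such ciphertexts coordinate-wise decrypts to the XOR of the two bits as long as the accumulated noise stays small; handling products of ciphertexts is where a technique such as re-linearization becomes necessary.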

Joint work with Zvika Brakerski (Weizmann).

5/25/2011 1:30-2:30 Amin Saberi (Stanford)

A Randomized Rounding Approach to the Traveling Salesman Problem

For some positive constant c, we give a (3/2 − c)-approximation algorithm for the following problem: given a graph G = (V, E), find the shortest tour that visits every vertex at least once. This is a special case of the metric traveling salesman problem in which the underlying metric is defined by shortest-path distances in G. The result improves on the 3/2-approximation algorithm due to Christofides.

Joint work with M. Singh and S. Oveis Gharan.

5/18/2011 1:30-2:30 Gil Segev (MSR Silicon Valley)

Public-Key Cryptographic Primitives Provably as Secure as Subset Sum

We present a public-key encryption scheme whose security is equivalent to the hardness of solving random instances of the subset sum problem. The subset sum assumption required for the security of our scheme is weaker than that of existing subset-sum based encryption schemes, namely the lattice-based schemes of Ajtai and Dwork (STOC ’97), Regev (STOC ’03, STOC ’05), and Peikert (STOC ’09). Our proof of security is simple and direct.

Joint work with Vadim Lyubashevsky and Adriana Palacio.

5/11/2011 1:30-2:30 Mukund Sundararajan (Google)

Axiomatic Attribution

We study the attribution problem, that is, the problem of attributing a change in the value of a characteristic function to its independent variables. We make three contributions. First, we propose a formalization of the problem based on a standard cost-sharing model. Second, in our most important technical contribution, we show that there is a unique path attribution method that satisfies Affine Scale Invariance and Anonymity if and only if the characteristic function is multilinear. We term this the Aumann-Shapley-Shubik method. Third, we study multilinear characteristic functions in detail; we describe a computationally efficient implementation of the Aumann-Shapley-Shubik method and discuss practical applications to portfolio analysis, e-commerce, and sports statistics.
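As a small numerical illustration of path attribution along the straight line from a baseline to the point of interest (the flavor of method the talk is about), here is a sketch; the toy multilinear function, baseline, and step count are hypothetical, and the finite-difference gradients are only for illustration.

    import numpy as np

    def path_attribution(f, baseline, point, steps=1000):
        """Attribute f(point) - f(baseline) to coordinates by integrating the
        partial derivatives along the straight-line path from baseline to point."""
        baseline, point = np.asarray(baseline, float), np.asarray(point, float)
        alphas = (np.arange(steps) + 0.5) / steps            # midpoint rule on [0, 1]
        attr = np.zeros_like(baseline)
        eps = 1e-6
        for a in alphas:
            z = baseline + a * (point - baseline)
            for i in range(len(z)):
                zp, zm = z.copy(), z.copy()
                zp[i] += eps
                zm[i] -= eps
                grad_i = (f(zp) - f(zm)) / (2 * eps)          # numerical partial derivative
                attr[i] += grad_i * (point[i] - baseline[i]) / steps
        return attr

    # Toy multilinear characteristic function of three variables.
    f = lambda x: 2 * x[0] * x[1] + x[2] + 0.5 * x[0] * x[2]
    base, pt = [0.0, 0.0, 0.0], [1.0, 2.0, 3.0]
    attr = path_attribution(f, base, pt)
    print(attr, attr.sum(), f(pt) - f(np.array(base)))        # attributions sum to the change

The attributions always sum to f(point) − f(baseline) (up to numerical error), which the final print statement checks.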

5/4/2011 1:30-2:30 Paul Valiant (UC Berkeley)

Central Limit Theorems and Tight Lower Bounds for Entropy Estimation

In joint work with Gregory Valiant, we have been investigating a new approach to estimating symmetric properties of distributions, including the standard and fundamental properties of entropy and support size. In a previous talk, Greg introduced the first explicit sublinear-sample estimators for these problems, which estimate these quantities to constant additive error using O(n/log n) samples. In this independent but complementary talk, we show that this result is in fact optimal up to constant factors. The analysis makes crucial use of two new multivariate central limit theorems that appear quite natural and general. The first is proven directly via Stein’s method; the second is proven in terms of the first using a recent generalization of Newton’s inequalities. The talk will include a high level overview of these techniques, and their application both in our context and more generally.

4/27/2011 1:30-2:30 Mihai Budiu and others (MSR Silicon Valley)

DryadLINQ and Theory Challenges

4/20/2011 1:30-2:30 Gregory Valiant (UC Berkeley)

Estimating the Unseen: A Sublinear-Sample Estimator of Distributions

In joint work with Paul Valiant, we introduce a new approach to characterizing the unobserved portion of a distribution, which provides sublinear-sample additive estimators for a class of properties that includes entropy and distribution support size (the ‘distinct elements’ problem). Together with our new lower bounds (which Paul will discuss in a separate talk), this settles the longstanding question of the sample complexities of these estimation problems, up to constant factors. Our algorithm estimates these properties up to an arbitrarily small additive constant, using O(n/log n) samples, where n is a bound on the support size (or, in the case of estimating the support size, 1/n is a lower bound on the probability of any element of the domain). Previously, no explicit sublinear-sample algorithms for either of these problems were known.
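For contrast with the sublinear-sample estimator, the naive plug-in approach below simply uses the empirical distribution; with far fewer than n samples it systematically underestimates both entropy and support size, which is exactly the regime the talk addresses. The demo distribution and sample size are hypothetical, and this is not the estimator from the talk.

    import math
    import random
    from collections import Counter

    def plugin_entropy(samples):
        """Entropy (in nats) of the empirical distribution of the samples."""
        counts = Counter(samples)
        m = len(samples)
        return -sum((c / m) * math.log(c / m) for c in counts.values())

    def plugin_support_size(samples):
        """Number of distinct elements actually observed."""
        return len(set(samples))

    # Hypothetical demo: uniform distribution on n = 10000 elements,
    # observed through only 2000 samples.
    random.seed(1)
    n = 10_000
    samples = [random.randrange(n) for _ in range(2000)]
    print(plugin_entropy(samples), math.log(n))        # plug-in estimate vs. true entropy log(n)
    print(plugin_support_size(samples), n)             # observed support vs. true support size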

3/30/2011 1:30-2:30 Yaron Singer (UC Berkeley)

Budget Feasible Mechanism Design

In this talk we will discuss the budget feasibility framework, where the goal is to design incentive-compatible mechanisms under a budget constraint on the mechanism’s payments. Such a challenge typically arises in markets where the mechanism designer wishes to procure items or services from strategic agents.

While in many cases the budget constraint forces mechanisms to perform poorly in terms of the buyer’s utility, there are broad classes of utility functions for which desirable approximation guarantees are achievable. We will present some of the positive results for submodular, subadditive, and certain non-monotone utilities, and discuss computational and strategic intricacies that arise in this setting.

We will also highlight the relevance of this framework to privacy, crowdsourcing and social networks. In particular, we will show the framework’s application to the influence maximization problem in social networks and give some evidence that suggests that beyond provable guarantees, the mechanisms perform well in practice.

3/16/2011 1:30-2:30 Jason Hartline (Northwestern University)

The Theory of Crowdsourcing

Crowdsourcing contests have been popularized by the Netflix challenge and websites like TopCoder and Taskcn. What is crowdsourcing? Imagine you are designing a new web service, you have it all coded up, but the site looks bad because you haven’t got any graphic design skills. You could hire an artist to design your logo, or you could post the design task as a competition to crowdsourcing website Taskcn with a monetary reward of $100. Contestants on Taskcn would then compete to produce the best logo. You then select your favorite logo and award that contestant the $100 prize.

In this talk, I discuss the theory of crowdsourcing contests. First, I will show how to model crowdsourcing contests using auction theory. Second, I will discuss how to solve for contestant strategies: suppose you were entering such a programming contest on TopCoder, how much work should you do on your entry to optimize your gains from winning less the cost of doing the work? Finally, I will discuss the inefficiency that arises because the effort of losing contestants is wasted (e.g., every contestant has to do work to design a logo, but you only value your favorite logo). I will show that this wasted effort is at most half of the total effort. A consequence is that crowdsourcing is approximately as efficient a means of procurement as conventional methods (e.g., auctions or negotiations).
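As one standard benchmark for the "how much work should you do" question: if a contest is modeled as a symmetric all-pay auction with n contestants whose values for winning are i.i.d. uniform on [0, 1] (a textbook setting, not necessarily the exact model of the talk), the equilibrium effort of a contestant with value v is

    b(v) = \int_0^{v} x \, d\!\left(x^{\,n-1}\right) = \frac{n-1}{n}\, v^{\,n},

so each contestant exerts effort well below their value, and increasingly so as the number of contestants grows.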

Joint work with Shuchi Chawla and Balu Sivan.

2/23/2011 1:30-2:30 Yael Tauman Kalai (MSR New-England)

Leaky Pseudo Entropy Functions

Pseudo-random functions (PRFs), introduced by Goldreich, Goldwasser, and Micali (FOCS 1984), are functions that look truly random! These functions are associated with a short random seed s, and the guarantee is that no efficient adversary can tell the difference between getting oracle access to a random PRF function f_s, and getting oracle access to a truly random function.

The question we ask in this work is: Can we generate randomness (or even entropy) from a short random seed s that is partially leaked? Unfortunately, as deterministic extractors do not exist, leaky PRFs do not exist, even if a single bit about the secret seed s is leaked. Thus, we consider the following relaxation: Instead of requiring that for each input x, the value f_s(x) looks random, we require that it looks like it has high min-entropy, even given oracle access to f_s everywhere except at point x. We call such a function family a pseudo-entropy function (PEF) family. We construct such a leakage-resilient PEF family under standard cryptographic assumptions (such as DDH).

 
