This is a followup to the previous post on program obfuscation written jointly with Guy Rothblum.

The problem of program obfuscation is fascinating.  The question at hand is whether one can transform a program (say, described as a Boolean circuit) into a form that is executable (i.e., has the same input/output behavior), but is otherwise completely unintelligible.  This problem was originally formalized by Barak et al. [BGI+01], who constructed (contrived) function families that are not obfuscatable under the natural definition of virtual black-box (VBB) security, as well as under various relaxations of it.  Loosely speaking, VBB security requires that anything that can be efficiently computed given an obfuscation of a program could be efficiently computed from black-box access to the program.
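To fix notation (this is one standard way of formalizing the requirement, with a decision predicate as the adversary's output; the notation here is ours): an obfuscator $O$ for a circuit family $\mathcal{C}$ is VBB secure if for every PPT adversary $A$ there exist a PPT simulator $S$ and a negligible function $\mu$ such that for every $C \in \mathcal{C}$,

$\left| \Pr[A(O(C)) = 1] - \Pr[S^{C}(1^{|C|}) = 1] \right| \leq \mu(|C|),$

where $S^{C}$ denotes that $S$ only gets black-box (oracle) access to $C$.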

The result of Barak et al. (and followup work [GK05]) left researchers quite pessimistic about solving the general problem of program obfuscation, and (until recently) most of the effort was on obfuscating very simple functions, such as point functions [Canetti97] (functions that are zero everywhere except for a single input), variants of point functions, hyper-planes [CRV10], conjunctions [BR13a], and CNFs [BR13b].

The challenging and cryptographically meaningful function families to obfuscate are those that have high “pseudo-entropy”.  Extreme examples are pseudo-random functions and decryption algorithms. Note that it is trivial (and uninteresting) to obfuscate functions that are learnable from black-box access.

The focus of the previous post was on exciting recent progress on program obfuscation, initiated by the fascinating work of Garg et al. [GGHRSW13], which gave the first plausible candidate for general-purpose obfuscation.  They conjectured that it is an indistinguishability obfuscator; i.e., given any two circuits C_1 and C_2 of the same size and computing the same function, no polynomial time adversary can distinguish between the obfuscation of C_1 and that of C_2.  Unfortunately, it is not clear how meaningful this notion is, since an indistinguishability obfuscator is not guaranteed to hide the secrets of the underlying program (or circuit); indeed, as we know from [BGI+01], there are function families that can always be reverse engineered. One of the motivations for indistinguishability obfuscation is that it was proven to be equivalent to “best possible” obfuscation by Goldwasser and Rothblum [GR07].
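Spelled out in symbols (our notation), the requirement is that for every pair of circuits $C_1, C_2$ of the same size satisfying $C_1(x) = C_2(x)$ for all inputs $x$, and for every PPT distinguisher $D$, there is a negligible function $\mu$ such that

$\left| \Pr[D(O(C_1)) = 1] - \Pr[D(O(C_2)) = 1] \right| \leq \mu(|C_1|).$

Note that nothing in this definition says what, if anything, the obfuscation hides about the circuit beyond its functionality.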

The work of Garg et al. gave optimism to many cryptographers, and several tried to analyze and prove security of their construction (and its variations) [GGHRSW13,BR13c,BGKPS13], with limited success.  To date, a variant of the construction is known to be VBB secure against a subclass of adversaries, known as algebraic adversaries.  However, we have no evidence as to its security level against non-algebraic adversaries.  Moreover, as mentioned above, even if we were able to prove that it is an indistinguishability obfuscator, it is still unclear how meaningful it is.

Nevertheless, to my surprise, subsequent to [GGHRSW13] a flood of results has appeared showing that indistinguishability obfuscation suffices for many other cryptographic applications, such as the construction of public-key encryption from private-key encryption, the existence of deniable encryption, multi-input functional encryption, multiparty key exchange, broadcast encryption, traitor tracing, and more [SW13,GGHRSW13,HSW13,GGJS13,BZ13,BCP13,BCPR13].

Unfortunately, in this post we diverge from the optimistic view of the crowd.  In a somewhat strange twist, in joint work with Cohn and Goldwasser [CGK14], we show that there are negative implications of [GGHRSW13] to accompany the positive ones.  In particular, we show that the existence of indistinguishability obfuscation implies strong limitations on the possibility of VBB obfuscation with a universal simulator for any function family with high pseudo-entropy.  What is VBB obfuscation with a universal simulator?

The Barak et al. definition of VBB obfuscation of a circuit family requires that for each probabilistic polynomial-time (PPT) adversary A there exists a PPT simulator S that succeeds in simulating the output of A, when A is given the obfuscated circuit but S is given only black-box access to the circuit. Unfortunately, this definition does not say how hard (or easy) it is to find this simulator S. This sufficed for the Barak et al. work, as they were after showing impossibility results and thus were happy to work with an obfuscation definition that didn’t address how one may find S.

A stronger and arguably more meaningful definition requires that there exist a *universal* PPT simulator capable of simulating any PPT adversary A given the code of A.  Such a definition is referred to as VBB with a universal simulator. Ideally, we would like to construct an obfuscator that is VBB secure with a universal simulator.  However, given the negative result of [BGI+01] we know that we cannot hope to construct such an obfuscator for all function families, and we must focus on specific function families.  That said, it may be the case that all “natural” function families are obfuscatable.
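The difference can be made explicit (again, this is one way to formalize it, in our notation): instead of allowing the simulator to depend arbitrarily on the adversary, we ask for a single PPT simulator $S$ that is given the adversary's code as input, so that for every PPT adversary $A$ and every circuit $C$ in the family,

$\left| \Pr[A(O(C)) = 1] - \Pr[S^{C}(A, 1^{|C|}) = 1] \right| \leq \mu(|C|)$

for some negligible function $\mu$.  The point is that $S$ must be constructed effectively from the description of $A$, rather than merely being guaranteed to exist.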

In [CGK14] we show that, assuming the existence of indistinguishability obfuscation, *all* function families with super-polynomial “pseudo-entropy” cannot be VBB obfuscated with a universal simulator.  Informally, a function family has super-polynomial pseudo-entropy if, given black-box access to the function, it appears to have super-polynomial min-entropy.  Such families include all pseudo-random function families, as well as every semantically secure secret-key and public-key encryption scheme, and any secure digital signature scheme (where randomness is generated by using a pseudo-random function).

We obtain this result by exploiting a connection between obfuscation with a universal simulator and obfuscation with auxiliary inputs, and by showing new impossibility results for obfuscation with auxiliary inputs.

In light of this, where should we be heading?  It seems that in the quest for positive results, we should either restrict our attention to function families that do not have super-polynomial pseudo-entropy, or try to bypass these negative results by considering relaxed definitions of security.  If we stick to our goal of VBB security for functions with super-polynomial pseudo-entropy, we will need to use non-black-box techniques, where the simulator uses the adversary in an inefficient manner.  To my knowledge, to date such a technique has been used only once, in [Canetti97].

A class of functions without super-polynomial pseudo-entropy that would be interesting to study is that of evasive function families: functions that are zero almost everywhere, such that any PPT adversary who is given oracle access to a random function in the family cannot find a non-zero input.  We have no negative results for such families, and some partial positive results are known [BBCPKS13].

These are fascinating questions and I am excited to see new developments in the upcoming months.

References:

[BBCPKS13] Boaz Barak, Nir Bitansky, Ran Canetti, Omer Paneth, Yael Tauman Kalai, Amit Sahai: Obfuscation for Evasive Functions. Cryptology ePrint Archive, Report 2013/668

[BGKPS13] Boaz Barak, Sanjam Garg, Yael Tauman Kalai, Omer Paneth, Amit Sahai: Protecting Obfuscation against Algebraic Attacks. Cryptology ePrint Archive, Report 2013/631

[BGI+01] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, Ke Yang: On the (im)possibility of obfuscating programs. Crypto 2001, Journal of the ACM 2012

[BCPR13] Nir Bitansky, Ran Canetti, Omer Paneth, Alon Rosen: Indistinguishability Obfuscation vs. Auxiliary-Input Extractable Functions: One Must Fall. Cryptology ePrint Archive, Report 2013/641

[BZ13] Dan Boneh, Mark Zhandry: Multiparty Key Exchange, Efficient Traitor Tracing, and More from Indistinguishability Obfuscation.  Cryptology ePrint Archive, Report 2013/642

[BCP13] Elette Boyle, Kai-Min Chung, Rafael Pass: On Extractability Obfuscation. Cryptology ePrint Archive, Report 2013/650

[BR13a] Zvika Brakerski, Guy N. Rothblum: Obfuscating Conjunctions. CRYPTO 2013

[BR13b] Zvika Brakerski, Guy N. Rothblum: Black-Box Obfuscation for d-CNFs. ITCS 2014

[BR13c] Zvika Brakerski, Guy N. Rothblum: Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding. TCC 2014

[Canetti97] Ran Canetti: Towards Realizing Random Oracles: Hash Functions That Hide All Partial Information. CRYPTO 1997

[CRV10] Ran Canetti, Guy N. Rothblum, Mayank Varia: Obfuscation of Hyperplane Membership. TCC 2010

[CGK14] Henry Cohn, Shafi Goldwasser, Yael Tauman Kalai: The Impossibility of Obfuscation with a Universal Simulator. Cryptology ePrint Archive, Report 2013/665

[GGHRSW13] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, Brent Waters: Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits. FOCS 2013

[GGJS13] Shafi Goldwasser, Vipul Goyal, Abhishek Jain, Amit Sahai: Multi-Input Functional Encryption. Cryptology ePrint Archive, Report 2013/727

[GK05] Shafi Goldwasser, Yael Tauman Kalai:  On the Impossibility of Obfuscation with Auxiliary Input.  FOCS 2005

[GR07] Shafi Goldwasser, Guy N. Rothblum: On Best-Possible Obfuscation. TCC 2007

[HSW13] Susan Hohenberger, Amit Sahai, Brent Waters: Replacing a Random Oracle: Full Domain Hash from Indistinguishability Obfuscation. Cryptology ePrint Archive, Report 2013/509

[SW13] Amit Sahai, Brent Waters: How to Use Indistinguishability Obfuscation: Deniable Encryption, and More. Cryptology ePrint Archive, Report 2013/454

(joint post by Yael Kalai and Guy Rothblum)

It feels especially appropriate to write about recent developments in cryptography and code obfuscation while basking in the afterglow of a wonderful workshop at the Weizmann Institute of Science, celebrating the work of Shafi Goldwasser and Silvio Micali—this year’s Turing Award recipients. Shafi and Silvio repeatedly demonstrated that in cryptography we can obtain seemingly impossible or self-contradictory goals, such as zero-knowledge proofs that convey no information beyond their validity, or pseudorandom functions whose input-output behavior appears completely random (even though they have a succinct description).

Our blog post is about another such “pie in the sky” problem in cryptography: code obfuscation. The question at hand is whether one can transform a program (say, described as a Boolean circuit), maintaining its input/output behavior but making it otherwise unintelligible. This problem was originally formalized by Barak et al. [BGI+01] (following earlier work by Hada [Hada00]). However, rather than providing tools to obfuscate programs, Barak et al. gave impossibility results.  They considered the natural definition of virtual-black-box obfuscation (VBB-Obf): anything that can be computed efficiently given a program’s obfuscation should be efficiently computable from black-box access to the program.  This natural definition is quite strong, and in particular general-purpose VBB-Obf (under the “right” formalization) has fantastic cryptographic applications. Unfortunately, Barak et al. proved a strong negative result, showing that general-purpose VBB obfuscation is impossible. Namely, they constructed a (contrived) function family for which there exists a PPT adversary that, given *any* code of a function f in the family, can find the secret key associated with f, whereas this key remains completely hidden given only black-box access to f.  Thus, access to the code is *very different* from black-box access, and the family seems very difficult to obfuscate in any meaningful sense.

Following this thought-provoking work, much effort has been devoted by the cryptographic community to constructing obfuscators for natural classes of programs.  However, until recently, all known obfuscation candidates were for limited classes of functions, such as (for example) point functions [Canetti97] (functions that are zero everywhere except for a single input), variants of point functions, hyper-planes [CRV10], conjunctions [BR13a], and CNFs [BR13b]. It was not clear how to extend these works to get obfuscation for more complex classes of functions, and until recently there weren’t even suggestions for candidates or heuristics.

This changed with a fascinating recent work of Garg et al. [GGHRSW13]. They propose a candidate general-purpose program obfuscator.  Namely, they construct an obfuscator that takes as input any program (or circuit) and outputs another program (or circuit) that has the same functionality as the input program, and seems to hide secret information.  The big question is whether this candidate construction indeed has a “meaningful” secrecy guarantee. One could simply assume that the [GGHRSW13] obfuscator is secure.  However, due to the negative result of [BGI+01], the assumption that the [GGHRSW13] obfuscator always offers the strongest “meaningful” security guarantee (VBB security) is simply false.

To bypass these negative results, [GGHRSW13] study the possibility that their obfuscator is an indistinguishability obfuscator (Ind-Obf); i.e., given any two circuits C and C’ of the same size that compute the same functionality f, no polynomial time adversary can distinguish between the obfuscation of C and the obfuscation of C’. There are no known impossibility results for Ind-Obf. Indistinguishability obfuscation provides an intuitively appealing notion of security via an equivalence to “best possible” obfuscation, a notion put forth by Goldwasser and Rothblum [GR07].  This notion makes the relaxed requirement that the obfuscated program leaks as little information as any other program with the same functionality (and of similar size). Further, in a fascinating recent work, Sahai and Waters [SW13] show that Ind-Obf has many exciting cryptographic applications (e.g., deniable encryption [CDNO97]).

In [GGHRSW13], and in followup works [BR13c,BGKPS13], there is some evidence that this obfuscator and variants of it do have some secrecy guarantees. The obfuscator of Garg et al. makes use of multi-linear maps, a powerful new cryptographic tool introduced by [GGH13]. Loosely speaking, such maps allow one to encode elements in a way that one can efficiently add encodings, multiply encodings (a bounded number of times), and check whether an encoding is an encoding of zero. [BR13c] prove that a variant of the [GGHRSW13] obfuscator does indeed satisfy the Ind-Obf definition if the adversary is limited to “algebraic” attacks, which means that it can only add and multiply the multi-linear encodings, and check whether an encoding is an encoding of zero, but cannot do anything else with these encodings. I.e., they prove security for a limited class of attackers. Moreover, [BR13c,BGKPS13] show that variants of the construction even satisfy the stronger VBB-Obf definition for “algebraic” attacks.
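To make the *interface* (not the security!) concrete, here is a toy Python sketch of what a $k$-multilinear (graded) encoding scheme exposes: encodings at the same level can be added, encodings can be multiplied as long as their levels sum to at most $k$, and only top-level encodings can be zero-tested. The names, parameters, and the mod-$p$ “encoding” below are ours purely for illustration and are completely insecure; real candidates such as [GGH13] hide the elements behind noisy lattice-based encodings and are far more involved.

```python
# Toy interface of a k-multilinear (graded) encoding scheme.
# WARNING: purely illustrative and completely insecure -- here an "encoding"
# is just the element itself mod p. Real candidates (e.g. [GGH13]) hide the
# element behind noisy lattice-based encodings.

from dataclasses import dataclass

P = 2**61 - 1   # toy prime modulus (hypothetical parameter)
K = 5           # multilinearity: products of up to K level-1 encodings

@dataclass(frozen=True)
class Encoding:
    value: int   # the encoded element (hidden in a real scheme)
    level: int   # the "grade" of the encoding

def encode(x: int, level: int = 1) -> Encoding:
    return Encoding(x % P, level)

def add(a: Encoding, b: Encoding) -> Encoding:
    assert a.level == b.level, "can only add encodings at the same level"
    return Encoding((a.value + b.value) % P, a.level)

def multiply(a: Encoding, b: Encoding) -> Encoding:
    assert a.level + b.level <= K, "exceeded the multilinearity bound"
    return Encoding((a.value * b.value) % P, a.level + b.level)

def is_zero(a: Encoding) -> bool:
    assert a.level == K, "zero-testing is only allowed at the top level"
    return a.value == 0

# Example: verify the identity 2*3 - 6 = 0 using only the interface.
x = multiply(encode(2), encode(3))                             # level-2 encoding of 6
y = encode(-6, level=2)                                        # level-2 encoding of -6
padding = multiply(encode(1), multiply(encode(1), encode(1)))  # level-3 encoding of 1
print(is_zero(multiply(add(x, y), padding)))                   # True
```

In this toy version an attacker can of course read `value` directly; the whole difficulty, and the gap between “algebraic” attacks and arbitrary ones discussed next, concerns what happens when the encodings genuinely hide the values.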

Of course, the attentive reader may be left puzzled by this state of affairs, as Barak et al. showed that satisfying VBB-Obf is impossible! There is no contradiction, because there is no reason for attackers to limit themselves to algebraic attacks.  For example, an attacker can feed the obfuscated circuit (which contains these encodings) as input to another circuit.  Barak et al. [BGI+01] make use of such attackers to obtain their negative results.

To summarize (and interpret) the recent developments:

• Using powerful new cryptographic tools (multilinear maps), Garg et al. present a candidate for obfuscation that may provide meaningful security guarantees.
Namely, it is a candidate for Indistinguishability Obfuscation, which provides an appealing semantic notion of “best possible security”, and has exciting cryptographic applications.
• There is some evidence for the security of this construction and variants of it: we have obfuscators that provably resist the rich family of “algebraic attacks”.
• We know that adversaries may mount non-algebraic attacks against obfuscators, and indeed restricting our attention to algebraic attackers lets us bypass known impossibility results.
• It is an outstanding open problem to either prove the security of an Indistinguishability Obfuscator under standard assumptions (or under any “falsifiable” assumption), or to show impossibility for general-purpose Indistinguishability Obfuscation.

In a follow-up post, Yael will describe even more recent work that leads her to be pessimistic about the possibility of obtaining strong positive results on obfuscation for many natural classes of functions.

In conclusion, there have been many exciting recent developments in cryptography (perhaps most notably in the study of fully homomorphic encryption), and it appears that we may be on the brink of another exciting wave of developments in the study of code obfuscation. At the very least, there are fascinating new foundational problems for the field to study.

Going back to the Weizmann workshop, one recurring theme in this workshop was participants recounting how, again and again, the question has been raised: “what is left to do in cryptography?” (As early as the early 80’s). Again and again, however, we are surprised and delighted by new conceptual and technical breakthroughs in the field. Nowadays it seems clear that while much has been done in cryptography, even more remains to be explored.

References

◾[BGI+01] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, Ke Yang: On the (im)possibility of obfuscating programs. Crypto 2001, Journal of the ACM 2012
◾[BGKPS13] Boaz Barak, Sanjam Garg, Yael Tauman Kalai, Omer Paneth, Amit Sahai: Protecting Obfuscation against Algebraic Attacks. Cryptology ePrint Archive, Report 2013/631
◾[BR13a] Zvika Brakerski, Guy N. Rothblum: Obfuscating Conjunctions. CRYPTO 2013
◾[BR13b] Zvika Brakerski, Guy N. Rothblum: Black-Box Obfuscation for d-CNFs. ITCS 2014
◾[BR13c] Zvika Brakerski, Guy N. Rothblum: Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding. TCC 2014
◾[CDNO97] Ran Canetti, Cynthia Dwork, Moni Naor, Rafail Ostrovsky: Deniable Encryption. CRYPTO 1997
◾[Canetti97] Ran Canetti: Towards Realizing Random Oracles: Hash Functions That Hide All Partial Information. CRYPTO 1997
◾[CRV10] Ran Canetti, Guy N. Rothblum, Mayank Varia: Obfuscation of Hyperplane Membership. TCC 2010
◾[GGH13] Sanjam Garg, Craig Gentry, Shai Halevi: Candidate Multilinear Maps from Ideal Lattices. EUROCRYPT 2013
◾[GGHRSW13] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, Brent Waters: Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits. FOCS 2013
◾[GR07] Shafi Goldwasser, Guy N. Rothblum: On Best-Possible Obfuscation. TCC 2007
◾[Hada00] Satoshi Hada: Zero-Knowledge and Code Obfuscation. ASIACRYPT 2000
◾[SW13] Amit Sahai, Brent Waters: How to Use Indistinguishability Obfuscation: Deniable Encryption, and More. Cryptology ePrint Archive, Report 2013/454

The various MSR labs are looking for postdocs and full-time researchers in many scientific fields, including all areas of theoretical Computer Science. You can apply via this website. Please don’t forget to specify in the form all the labs you may be interested in.

For Microsoft Research Silicon Valley, applications submitted by December first will receive full consideration.

The deadline for the Schramm Postdoctoral Fellowship (joint with MIT math and Microsoft Research New England)  is also on December first.

As Omer says: don’t trust the machine. Make sure that somebody relevant knows you are applying.

The 15th ACM Conference on Economics and Computation (EC’14, formerly known as the “ACM Conference on Electronic Commerce”) will be held June 8-12, 2014 at Stanford University, Palo Alto, California, United States.

The CFP is now public and can be found here.

EC’14 will be co-located with a meeting of the NBER Market Design working group, and with the NSF/CEME Decentralization Conference.

While this blog is mostly about theory, today I would like to talk about a real auction and how it relates to theory. I’ll focus on a fundraising auction for my child’s elementary school.

As is common around here, the school has an annual community event called a “Walk-a-Thon” in which donations are collected towards the funding of various programs such as music, band, science, arts, field trips and technology. Most of the money comes from parents (and relatives) that commit to payments proportional to the distance (number of laps) walked by their kids. The next picture shows the first lap.

Kids walking the first lap of the Walk-a-Thon

As part of the event, items are collected and are auctioned throughout the afternoon of the Walk-a-Thon day. The items presented in the auction are very diverse, and include both physical objects and services. Typical items could be crafts made by the kids, parties that parents organize, events like a pizza party organized by teachers, items donated by parents as well as items donated by corporations (and even a pumpkin grown in the school’s garden).

A general view of the auction.

A pumpkin from the school’s garden, ready for Halloween.

The first year I joined the school, the auction was run as a “silent auction” (a format used by many schools for their charity auction). The auction was about 4 hours long with a fixed end time. Items were placed on tables, and each item had a bid sheet on which bidders bid (simply a piece of paper attached to the item). The highest bidder for an item won it and paid his bid. Each item had a minimum bid, and the bid increment was $1. Bidders were encouraged to submit multiple bids, outbidding each other. When multiple identical items were sold (as in the case of tickets to a party), the highest bidders would win the items, paying their bids. Naturally I was very interested to see how the auction performed. Looking at the auction format, one sees that this is essentially an English auction implemented using bid sheets, so at least in theory (for items with private values) one would expect the allocation to be efficient (and the same as that of the second-price auction). Revenue should also be very high, as long as each item has enough bidders interested in it.

But when attending my first Walk-a-Thon event it was quite clear to me that this optimistic outcome was not being realized, and many prices were much lower than I would have expected. Why was this happening? What are the main differences between this auction and the clean theoretical model?

The Walk-a-Thon day is a very busy day for most of the potential bidders (= parents), with many of them volunteering in various activities (like handing water bottles to the kids, or serving food) or simply taking care of the children once they become exhausted from walking. So essentially the bidders in our auction have a high cost of submitting bids, as submitting a bid requires them to leave whatever they are doing, go to the auction, find the item they are interested in, and then write their bid. All this hassle only buys you one bid, while an English auction requires you to keep coming back and submitting bids until the auction ends or someone bids above your true value. Another fundamental difference is that it is not clear that the private-value model holds for some of the items (e.g., when the item is a $50 voucher for a restaurant). Moreover, it is not clear that bidders in a charity auction aim at maximizing their utility in the standard way. Perhaps they have a fixed amount of money they want to spend and will maximize their utility given that they exhaust their budget? Finally, the value of some items (like party tickets) depends on whether the same group of friends manages to secure enough tickets, so there are plenty of externalities.

Some people realized that there is no reason to be active until the last minute, and used the “sniping” strategy: bid at the last minute, just as the auction closes (this strategy is well documented in online bidding). With many people occupied with other things, only a few were able to execute this strategy; there were no last-minute bidding wars, and the lack of competition resulted in low prices.

It seems as though when people did submit bids earlier, many were basically bidding just $1 above the previous highest bid and not more than that, and many never came back to bid again. While this seems not very rational for a bidder who knows that he might never return to submit another bid, large jumps in the bids were very rare.

So how could the revenue of the auction be increased? It seemed to me that the main goals should be to decrease the cost of bidding and to increase the participation rate. A natural solution is to have a sealed-bid second-price auction on each item, thus eliminating the need to bid multiple times on each item. This idea was not adopted by the organizing committee, as it seemed like such a big change might be confusing for the bidders, and the auction committee preferred incremental changes (here’s a constraint that is very rarely put into a formal model, yet exists in many real auctions, including large-scale internet auctions). The ideas we came up with will not surprise anyone who is familiar with eBay’s auctions. Instead of moving all bidders to the new auction all at once, the changes allow “traditional” bidders to continue as before and not do anything new, while allowing (but not mandating) more exploratory bidders to try new options.

In the following year we implemented two changes. First, for some items we added a “buy-it-now” (BIN) price. People could still bid for the item on the bid sheet, but could also buy the item at any time during the auction for the BIN price, taking the item off the auction. The BIN option worked very well. It was extensively used by people who wanted to secure the acquisition of items immediately as the auction opened. That option worked particularly well for parties, as it allowed buyers to coordinate: people would buy tickets together with friends (their valuations had significant positive externalities; and unlike in the auction, with BIN it was easy to guarantee that some set of bidders win together). Of course, one has to set a proper BIN price, and that is a non-trivial task. Although the BIN prices we posted were much higher than the selling prices of the previous year, the first year we added BIN for parties the tickets were taken off the auction almost immediately. We raised the price in subsequent years, and the result is that we almost quadrupled the revenue from one of the parties! In general it is hard to know how to set the BIN price. One would like to update the BIN price of sold-out items in future auctions, but by how much? A possible solution is to sell some of the identical items with BIN and others in the auction, but that was determined to be too complicated by the committee. Another benefit of using BIN is that more people pay for items throughout the auction and not at the end, when there are crowds in line for payments. This further reduces the indirect costs of participating in the auction, which is significant when running this kind of physical auction. In the last 3 years we added BIN options to more and more items, mainly of high value, and to essentially all parties. For most party tickets with a BIN option we observed that usually all tickets were sold at the BIN price, even though we consistently increased the BIN price every year.

The second change we introduced was the addition of proxy bidding for some of the items. Proxy bids are submitted in a sealed box and are secret (see the picture of the table with the boxes).
People can still bid on the bid sheet, but the highest of all bids (proxy or open) wins. A proxy bidder who wins pays the minimal amount needed to win. This option is an incremental step towards a sealed-bid second-price auction. It was mainly useful for people who were busy with other activities during the day; it was used extensively, resulted in much more aggressive bids than the open bids, and almost always won the auction. Surprisingly, open bids were still used for items with the proxy option.

The auction table, with both bid sheets and boxes for proxy bids.

While proxy bids are trivial to implement in an electronic system, there are surprising complications when they are handled in a physical auction. Once the auction ends, people immediately want to know whether they won and to pay for the item, and it takes some time to manually process the proxy bids. Nevertheless, this option was successful and was adopted extensively, mainly for items of unclear (and private) value, like participating in a party with the teachers.

Two other changes we made were aimed directly at accelerating the auction and decreasing the overall cost of bidding. First, we increased the reserve prices. Second, we increased the bid increment from $1 to $5 on items of high value. Both changes seem effective.
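To make the closing-time bookkeeping concrete, here is a minimal Python sketch of how a single item could be settled under the rules described above (the function name, the tie-breaking rule, and the rounding of the proxy price to the bid increment are our own assumptions, not the committee’s exact procedure): a buy-it-now purchase takes the item immediately; otherwise the highest of all open and proxy bids wins, an open winner pays what is written on the sheet, and a proxy winner pays the minimal increment-aligned amount needed to beat the runner-up.

```python
import math

def resolve_item(open_bids, proxy_bids, increment=1, reserve=0,
                 bin_buyer=None, bin_price=None):
    """Settle a single auction item.

    open_bids / proxy_bids: lists of (bidder, amount) pairs.
    Returns (winner, price), or (None, None) if the reserve was not met.
    A hypothetical sketch of the rules described above, not the committee's
    exact procedure.
    """
    # A buy-it-now purchase ends the auction for this item immediately.
    if bin_buyer is not None:
        return bin_buyer, bin_price

    all_bids = [(amt, "open", bidder) for bidder, amt in open_bids] + \
               [(amt, "proxy", bidder) for bidder, amt in proxy_bids]
    all_bids.sort(reverse=True)        # highest amount first (tie-breaking here is arbitrary)
    if not all_bids or all_bids[0][0] < reserve:
        return None, None

    top_amt, top_kind, top_bidder = all_bids[0]
    if top_kind == "open":
        return top_bidder, top_amt     # open winners pay what they wrote on the sheet
    # A proxy winner pays just enough to beat the runner-up (or the reserve),
    # rounded up to the bid increment, but never more than their own bid.
    runner_up = all_bids[1][0] if len(all_bids) > 1 else reserve
    price = math.ceil((runner_up + increment) / increment) * increment
    return top_bidder, max(reserve, min(price, top_amt))

# Example: open bids of $20 and $25 on the sheet, and a sealed proxy bid of $60.
print(resolve_item([("Ann", 20), ("Bob", 25)], [("Carol", 60)], increment=5))
# -> ('Carol', 30): the proxy bid wins and pays the smallest multiple of $5 above $25.
```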

So what have we learned? In the physical world, implementation issues can cause big gaps between the basic theoretical model and practice. Understanding the reasons for these gaps enabled us to make a set of changes that were very effective, and increased the revenue considerably. This was achieved although we do not have a complete theoretical understanding of people’s behavior in this auction, and thus are unable to come up with the optimal auction.

In this post I’d like to report a new and simple proof we found for a theorem by Berenbrink, Czumaj, Steger, and Vöcking on the imbalance of the balls-into-bins process known as the multiple-choice scheme. Full details can be found in the paper. Many readers are probably familiar with this process (Kunal blogged about some variations of it a few months ago); in any case, I’ll start with a quick primer (shamelessly taken from Kunal’s blog post):

Balls-and-bins processes are a name for randomized allocation processes, typically used to model the performance of hashing or more general load balancing schemes. Suppose there are $m$ balls (think items) to be thrown into $n$ bins (think hash buckets). We want a simple process that will keep the loads balanced, while allowing quick decentralized lookup. We measure the balance in terms of the additive gap: the difference between the maximum load and the average load.

The simplest randomized process one can think of is what is called the one-choice process: place each ball in an independently and uniformly sampled bin. Many of us studied the $m=n$ case of this process in a randomized algorithms class, and we know that the maximum load is $\Theta(\log n/\log\log n)$ except with negligible probability (for the rest of the post, I will skip this “except with negligible probability” qualifier). What happens when $m$ is much larger than $n$? It can be shown that for large enough $m$, the maximum load is roughly $m/n + \sqrt{ \tfrac{m\log n}{n}}$ (see this paper for more precise behavior of the gap).

Can we do better? Azar, Broder, Karlin and Upfal analyzed the following two-choice process: balls come sequentially, and each ball picks two bins independently and uniformly at random (say, with replacement), and goes into the less loaded of the two bins (with ties broken arbitrarily). They showed that when $m=n$ the maximum load of the two-choice process is significantly better: only about $\log\log n$.
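A quick way to get a feel for the difference is to simulate both processes. Here is a small Python sketch (parameter choices are arbitrary) that throws $m$ balls into $n$ bins with one and with two choices and reports the additive gap, i.e., the maximum load minus the average $m/n$.

```python
import random

def max_gap(n, m, choices):
    """Throw m balls into n bins; each ball goes into the least loaded of
    `choices` bins sampled independently and uniformly (with replacement,
    ties broken arbitrarily). Returns the additive gap: max load minus m/n."""
    loads = [0] * n
    for _ in range(m):
        candidates = [random.randrange(n) for _ in range(choices)]
        best = min(candidates, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads) - m / n

random.seed(0)
n, m = 100, 100_000
print("one choice :", max_gap(n, m, 1))  # on the order of sqrt(m log n / n): typically 60-90 here
print("two choices:", max_gap(n, m, 2))  # stays around log log n, i.e. a small constant
```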

The proof uses a simple but clever induction. The induction is not on the number of balls nor on the number of bins. Say we know an upper bound $\beta_i$ on the fraction of bins with load at least $i$; the proof shows how to derive a bound $\beta_{i+1}$ on the fraction of bins with load at least $i+1$.

Induction base: Since only $n$ balls are thrown we know that at most $n/3$ bins contain at least $3$ balls, so $\beta_3 \leq 1/3.$

Induction step: As more balls are put in the bins, the fraction of bins with load at least $i$ only increases, so if $\beta_i$ is an upper bound on the fraction of bins with load at least $i$ at the end of the process, it is also an upper bound at any earlier point in time. The key observation is that the number of bins of load at least $i+1$ is bounded by the number of balls which were placed at ‘height’ at least $i+1$. For a ball to land at height at least $i+1$, both its choices need to be at height at least $i$. The inductive hypothesis asserts that the fraction of such bins is at most $\beta_i$, so the probability a ball indeed lands at height at least $i+1$ is at most $\beta_i^2$. There are $n$ such balls, so the (expected) total number of balls (and thus bins) of height at least $i+1$ is at most $n\beta_i^2$. In other words, the inductive step gives the following recurrence: $\beta_{i+1} = \beta_i^2.$

Since in each inductive step the $\beta$’s are squared, after $\log\log n$ steps the fraction is less than $1/n$ and (modulo some details) we are done!
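Concretely, unfolding the recurrence from the base case gives

$\beta_{3+k} \leq \beta_3^{2^k} \leq (1/3)^{2^k},$

which drops below $1/n$ as soon as $2^k \geq \log_3 n$, i.e., after $k = O(\log\log n)$ squaring steps. At that point the expected number of bins of load $3+k$ is less than one, and a slightly more careful argument (the “details” alluded to above) turns this into a high-probability statement.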

The proof as sketched above relies on the assumption that the number of balls is $n$. We needed that for the base of the induction and for the inductive step. What happens when $m>>n$? The technique as described above seems unfit to handle the heavily loaded case. Other proof techniques that use differential equations and witness trees also seem unsuited to analyze the heavily loaded case (See this nice survey).

A breakthrough was achieved in a remarkable paper by Berenbrink et al. In that paper they proved that the same bound on the gap holds for any, arbitrarily large number of balls:

Theorem: For every $m$, the maximum load is $m/n + \log \log n + O(1)$ w.h.p.

Note that the additive gap between maximum and average is independent of $m$. Contrast this with the one-choice case, in which the additive gap diverges with the number of balls.  This is very meaningful: say there are 100 bins and 100,000 balls; the gap between max and average is less than 3 with two choices, while it is roughly 80 when using one choice.

At a (very) high level, their approach is the following: first they show that all the action is in the first poly$(n)$ balls; in other words, the gap after $m$ balls are thrown is distributed similarly to the gap after only poly$(n)$ balls are thrown. This is done by bounding the mixing time of the underlying Markov chain. The second step is to extend the induction sketched above to the case of poly$(n)$ balls. This turns out to be a major technical challenge which involves four inductive invariants and plenty of calculations, some of which are computer aided. Moreover, these calculations need to be redone whenever the model is tweaked.

Our hope is that our simpler proof will make this beautiful theorem more accessible.

We take a different approach. The main tool we use is a theorem from our previous paper on the topic, which directly implies a weaker bound. This bound is then sharpened to yield the tight $\log \log n$ bound on the gap.  First we need a simple definition: the gap vector at time $m$, denoted by $X(m)$, is the $n$-dimensional vector specifying the additive gap (= load − average) of the $n$ bins after $m$ balls were thrown.  The lemma we use states that an exponential potential function has linear expectation:

Lemma (1): There exist universal constants $a>0, b>0$ such that for every $m$, $E[ \sum_i e^{a|X_i(m)|}] \leq bn$.

What can we learn from the Lemma? It immediately implies a bound of $O(\log n)$ on the gap for every given $m$.  In fact, the bound is on the gap between the maximum and the minimum, and as such it is tight. What we need, however, is a finer bound of $\log\log n$ on the gap between maximum and average.
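To see why, apply Markov’s inequality to the potential (this is the standard argument; constants are not optimized). For any threshold $t$,

$\Pr[\max_i |X_i(m)| \geq t] = \Pr[\max_i e^{a|X_i(m)|} \geq e^{at}] \leq \frac{E[\sum_i e^{a|X_i(m)|}]}{e^{at}} \leq \frac{bn}{e^{at}},$

so taking $t = \tfrac{2}{a}\ln n$ makes the right-hand side at most $b/n$: except with polynomially small probability, no bin deviates from the average by more than $O(\log n)$, in either direction.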

So, let’s assume $m$ balls were thrown, and the max gap is, say, smaller than $\log n$. The key idea is to analyze what happens after we throw $n\log n$ additional balls, i.e., at time $m+n\log n$. First of all, the average moves up from $m/n$ to $m/n+\log n$, which is at least the current maximum load, so all balls contributing to the gap at time $m+n\log n$ were indeed thrown in the last $n\log n$ time steps. This means that we can now prove a bound for time $m+n\log n$ by essentially repeating the inductive step as described above. In the description above we used the fact that $n$ balls were thrown, but nothing terrible happens if $n\log n$ balls are thrown instead. The more serious obstacle is to find a base case for time $m+n\log n$. Recall that when $m=n$ it must be the case that $\beta_3 \leq 1/3$; this of course doesn’t hold when the number of balls is arbitrarily large.

The crucial observation is that Lemma (1) gives just that! It shows that the fraction of bins with gap at least $\log \log n$ (at time $m+n\log n$) is at most $O(1/\log n)$, so we can start the argument with the base case $\beta_{\log\log n} \leq 1/\log n$ and then continue as usual.
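Quantitatively, the same Markov-style calculation gives the base case (taking logarithms to base $e$ for concreteness): the expected number of bins whose gap is at least $c\log\log n$ is at most

$bn \cdot e^{-ac\log\log n} = \frac{bn}{(\log n)^{ac}},$

which for $c \geq 1/a$ is an $O(1/\log n)$ fraction of the bins. Up to routine concentration details, this is exactly the base case $\beta_{O(\log\log n)} \leq 1/\log n$ used above.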

The induction essentially tells us that if we know that at time $m$ the gap was at most $\log n$, then after throwing $n\log n$ additional balls the gap is reduced to $O(\log \log n)$. If we now throw $O(n\log\log n)$ additional balls and repeat the argument, the gap is reduced to $(1+o(1))\log \log n$.

It is important to point out that we haven’t recovered the theorem of Berenbrink et al. at full strength. We get a weaker tail bound on the probability that the gap is larger than the expectation, as well as worse lower-order terms in the expectation itself.

On the plus side, this technique is general enough to capture many variations on the basic model. Consider for instance a scenario where balls are assigned random weights: when a ball is placed, its weight is uniformly sampled from the set $\{1,2\}$. One would naturally expect the maximum gap to be bounded by $2\log\log n$. Indeed, the proof above shows just that, with minimal changes.


FOCS 2013 is over, and as predicted here it was very successful. I am happy to have taken part in its success (though all of the credit goes of course to the authors).

I also predicted that “A significant fraction of the community will think the PC messed up badly.” Naturally, most of the people with a paper at FOCS were generally happy with the PC work so I did not get too many complaints :-) (but I did get some). What I did find was that FOCS/STOC attendees passionately care about FOCS/STOC. I therefore got to hear quite a few personal philosophies about what’s good and bad in FOCS/STOC and what should be changed. The interesting aspect is that the opinions were at times the exact opposite:

• Some think (as also reflected in the comments here) that the conference has become too competitive and selective, which is why FOCS/STOC is less important now. Others think that FOCS/STOC are accepting too many papers, so that the PC is not doing its job of filtering, and therefore FOCS/STOC does not serve the purpose of bringing TCS together and is less useful.
• Some feel that authors and speakers in FOCS/STOC are drowning the audience in technical details rather than sharing ideas, and that this happens in part because FOCS/STOC papers are judged by experts in their subfield rather than by the community at large. Others expressed rage that the papers accepted in their subfield are far from being the best papers submitted; they say that this happens because the papers are judged by general TCS experts rather than by the subfield experts.
• Some think that FOCS and STOC should be joined into a single bigger event that will draw the attention of the entire community (but with multiple submission deadlines, as is now happening in other fields). Others think it is a horrible idea and that having two conferences is extremely important.
• The business meeting saw some exciting arguments on various topics.
• Sanjeev and I had a vigorous discussion (to be continued, I am sure) on the no-page-limit policy initiated by the FOCS 2013 PC.

As you might have realized by now, I am passionate about FOCS/STOC as well. The experience of chairing FOCS made me think about it much more, and I have various ideas about changes to be made (I expressed some of these ideas even before, but I have refined them since). But the above-mentioned discussions indicate to me that (1) FOCS/STOC is still extremely important to many of us, and (2) changes should be gradual and controlled, as we have such a diverse set of expectations.

After things settle in my mind, I may blog some more about what I think should be done, but for now let me just say that I believe the following combination is important and possible: FOCS/STOC should allow wide active participation (in numbers larger than today and by a more diverse community). On the other hand, a small enough (and humanly digestible) part of the program should facilitate cross-fertilization and information flow between the sub-communities. The federated theory conference which so many of us support is a good idea, but only part of the solution.