# Indistinguishability Obfuscation and Multi-linear Maps: A Brave New World – Guest Post by Ran Canetti

A bunch of us hapless cryptographers got the following boilerplate comment from the FOCS’15 PC:

*“Overall, submissions related to multi-linear maps and indistinguishability obfuscation were held to a somewhat higher standard. The PC expressed some concern with the recent flurry of activities pertaining to multi-linear maps and indistinguishability obfuscation, given how little we understand and can say and **prove** about the underlying hardness assumptions.”*

This comment was clearly written with the best of intentions, to explain views expressed at the PC deliberations. And I’m thankful for it – mainly since it made the underlying misconceptions so explicit that it mandated a response. So, after discussing and commiserating with colleagues here at Simons, and after amusing ourselves with some analogues of the above statement (e.g., “results on NP completeness are held to a higher standard given how little we understand and can say and **prove** about the hardness of solving SAT in polynomial time”), I decided to try to write an – obviously subjective – account of the recent developments in multilinear maps and indistinguishability obfuscation (IO), and why this exciting research should be embraced and highlighted rather than “held to a somewhat higher standard” – in spite of how little we understand about the underlying assumptions. The account is aimed at the general CS theorist.

Let me start by giving rough definitions of the concepts involved. An Indistinguishability Obfuscator (IO) is a randomized algorithm O that takes as input a circuit C and outputs a (distribution over) circuits O(C) with the properties that:

- C and O(C) have the same functionality,
- O(C) is only polynomially larger than C, and
- for any two same-size, functionally equivalent circuits C and C’ we have that O(C) ~ O(C’) (i.e., the distributions over strings representing O(C) and O(C’) are computationally indistinguishable).
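To make the definition concrete, here is a toy Python sketch (entirely illustrative, not a real obfuscator, with all names made up): for circuits on very few inputs, one can “obfuscate” by canonicalizing to a truth table, so that the output depends only on the circuit’s functionality. This trivially satisfies the indistinguishability requirement, but blows up exponentially in the input length – which is precisely why the polynomial-size requirement makes genuine IO hard to construct.

```python
# Toy illustration of the IO definition (NOT a real obfuscator):
# canonicalizing a circuit to its truth table makes O(C) depend only
# on C's functionality, so two equivalent circuits obfuscate to
# literally the same object. The catch: the table has size 2^n,
# violating the "only polynomially larger" requirement for large n.

from itertools import product

def truth_table(circuit, n_inputs):
    """Canonical form: the circuit's value on every possible input."""
    return tuple(circuit(bits) for bits in product([0, 1], repeat=n_inputs))

def toy_obfuscate(circuit, n_inputs):
    table = truth_table(circuit, n_inputs)
    def obfuscated(bits):
        # Look up the input's row in the canonical table.
        index = int("".join(map(str, bits)), 2)
        return table[index]
    return obfuscated, table

# Two different-looking but functionally equivalent 2-input circuits:
C1 = lambda b: b[0] & b[1]
C2 = lambda b: 1 - ((1 - b[0]) | (1 - b[1]))  # De Morgan rewrite of AND

O1, t1 = toy_obfuscate(C1, 2)
O2, t2 = toy_obfuscate(C2, 2)

assert t1 == t2  # identical canonical forms: trivially indistinguishable
assert all(O1(b) == C1(b) for b in product([0, 1], repeat=2))  # same functionality
```

The interesting regime is exactly the one this sketch fails at: keeping O(C) polynomial-size while still hiding everything except the functionality.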

IO was proposed as a notion of obfuscation in 2000 (Hada; Barak-Goldreich-Impagliazzo-Sahai-Vadhan-Yang). Indeed, it is arguably a clean and appealing notion – in some sense the natural extension of semantic security of standard encryption to “functionality-preserving encryption of programs”. However, it was largely viewed as too weak to be of real applicability or interest. (There were also no candidate polytime IO schemes, but this in my eyes is a secondary point, see below.)

Things changed dramatically in 2013 when Sahai and Waters demonstrated how IO schemes can be ingeniously combined with other rather “mundane” cryptographic constructs to do some amazing things. Since then, dozens of papers have appeared that extend the SW techniques and apply them to obtain even more amazing things – results that by now have transcended crypto and spilled over to other areas (e.g., deniable encryption, succinct delegation, succinct multi-party computation with hardly any interaction, one-message succinct witness-hiding and witness-indistinguishable proofs, hash functions with random-oracle-like properties, hardness results for PPAD, and many more). In fact, think about a result in your area that assumes that some computation is done inside a black box – most probably IO can replace that assumption in one way or another…

Still, my (subjective but distinct) feeling is that we are far from understanding the limits and full power of IO. Furthermore, the study of IO has brought with it a whole new toolbox of techniques that are intriguing in their own right, and teach us about the power and limitations of working with “encrypted computations”.

So far I have not mentioned any candidate constructions of IO – and indeed the above study is arguably valuable as a pure study of this amazing concept, even without any candidate constructions. (Paraphrasing Levin on quantum computers, one can take the viewpoint that the above is the study of impossibility results for IO…)

However, unlike quantum computers, here we also have candidate constructions. This is where multilinear maps come to play.

Multi-linear maps are this cool new technical tool (or set of tools) that was recently put forth. (The general concept was proposed by Boneh and Silverberg around 2000, and the first candidate construction of one of the current variants was presented in 2012 by Garg, Gentry and Halevi.) Essentially, a multilinear map scheme is a fully homomorphic encryption scheme where the public key provides, in addition to the ability to encrypt elements and perform homomorphic operations on ciphertexts, also the ability to partially decrypt ciphertexts under certain restrictions. There are many incomparable variants of this general paradigm, which differ both in the functionality provided and in the security guarantees. Indeed, variants appear to be closely tied to candidate constructions. Furthermore, our understanding of what’s possible here has been evolving considerably, with multiple new constructions, attacks, and fixes reported.
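To fix ideas, here is a minimal Python sketch of the kind of interface such a scheme exposes – a symmetric graded encoding with a fixed multilinearity degree. Everything below is made up for illustration and completely insecure (the encodings are transparent integers); it only shows the *functionality* described above: encoding elements, homomorphic operations under level restrictions, and a zero-test that plays the role of the restricted “partial decryption” at the top level only.

```python
# Toy, insecure sketch of a symmetric graded-encoding interface.
# In a real candidate the 'value' would be a noisy ciphertext and
# zero-testing would use a public parameter; here everything is in
# the clear, so only the shape of the API is meaningful.

from dataclasses import dataclass

P = 2**61 - 1   # toy prime modulus (an arbitrary choice for this sketch)
K = 3           # multilinearity degree: zero-testing allowed only at level K

@dataclass(frozen=True)
class Encoding:
    value: int
    level: int

def encode(x, level=1):
    return Encoding(x % P, level)

def add(a, b):
    # Homomorphic addition, restricted to encodings at the same level.
    assert a.level == b.level, "can only add at the same level"
    return Encoding((a.value + b.value) % P, a.level)

def mul(a, b):
    # Homomorphic multiplication; levels accumulate, capped at K.
    assert a.level + b.level <= K, "cannot exceed the top level"
    return Encoding((a.value * b.value) % P, a.level + b.level)

def is_zero(a):
    # The restricted "partial decryption": reveals only whether a
    # top-level encoding encodes zero, and nothing more.
    assert a.level == K, "zero-testing only at the top level"
    return a.value == 0

# A degree-3 identity: x*y*z - y*x*z encodes zero at the top level.
x, y, z = encode(5), encode(7), encode(11)
lhs = mul(mul(x, y), z)
rhs = mul(mul(y, x), z)
diff = add(lhs, Encoding((-rhs.value) % P, rhs.level))
assert is_zero(diff)
```

The different variants mentioned above differ in exactly these knobs: which levels exist (graded vs. asymmetric), which operations are permitted, and how much the zero-test leaks.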

Still, the number and variety of applications of multi-linear maps makes it clear that this “family of primitives” is extremely powerful and well worth studying – both at the level of candidate constructions, at the level of finding the “right” computational abstractions, and at the level of applications. In a sense, we are here back to the 70’s: we are faced with this new set of algebraic and number theoretic tools, and are struggling to find good ways to use them and abstract them.

Indeed, some of the most powerful applications of multilinear maps are candidate constructions of IO schemes. The first such candidate construction (by Garg, Gentry, Halevi, Raykova, Sahai and Waters in 2013) came with only heuristic arguments for security; however, more rigorous analyses of this and other constructions, based on well-defined formulations of multi-linear map variants, soon followed suit. Some of these analyses have since been “broken”, in the sense that we currently don’t have candidate constructions that satisfy the properties they assume. Still, other analyses do remain valid. Indeed, there are no known attacks against the actual basic IO scheme of Garg et al.

The fact that the only current candidate constructions of IO need to assume the existence of some variant of multi-linear maps at some point or another may make it seem as if the two concepts are somehow tied together. However, there is no reason to believe that this is the case. For all we know, multi-linear maps are just the first path uncovered to IO, and other paths may well be found. Similarly, even if IO turns out to be unobtainable for some reason, the study of multilinear maps and their power will remain very relevant.

So, to sum up this long-winded account:

- IO is a natural and fascinating computational concept. Studying its consequences (both within and outside cryptography) is a worthwhile endeavor.
- Studying new candidate constructions of IO and/or new analyses of their security is another worthwhile endeavor.
- Multilinear maps are an intriguing and powerful set of techniques and tools. Finding better candidate constructions and abstractions is of central importance to cryptography. Finding cool new uses of these maps is another intriguing challenge.
- The three should be treated as separate (although touching and potentially interleaving) research efforts.

———–

I’d like to thank Guy Rothblum and Vinod Vaikuntanathan for great comments that significantly improved this post.


“results on NP completeness are held to a higher standard given how little we understand and can say and *prove* about the hardness of solving SAT in polynomial time”

I don’t think this parodic example proves anything. On the contrary, it is in fact a correct statement. NP-completeness results are indeed not extremely interesting nowadays. Unconditional lower bounds, even for restricted models, for instance, would have a better chance to get accepted.

The point in the analogy was different: NP completeness was coined as a concise and natural notion that you can work with and build on, in face of our inability to argue hardness of problems.

It allows demonstrating relative hardness *regardless* of our inability to actually prove hardness of problems.

A similar thing happens here – IO (and also MMAPs, although to a much lesser extent) is a concise and natural notion that allows us to separate out the study of the hardness/security of constructions from the security of schemes that use it as a building block. So the study of IO is relevant regardless of our inability to prove its security.

Of course, NPC is just one obvious example. The UGC is another – the fact that we don’t know how to prove/disprove it doesn’t prevent us from studying its consequences (in fact, it only enhances our interest). One can think of others…

So it should be

“results based on NP completeness are held to a higher standard”

and not as written. But I’m still doubtful about the argument. Obviously, results based on NPC or NP hardness are legitimate, but they are held to higher standards. For instance, you should show that NP hardness doesn’t almost directly imply your result.

Another thing is that NP hardness is an established assumption, while UGC, and even more so indistinguishability obfuscation (IO), is apparently not established to the same extent as NP hardness. Hence, it does seem reasonable that concentrating on the implications of such assumptions should be held to higher standards than unconditional results.

Sure, unconditional results are better than conditional ones, and IO is much less established than NPC. But this is not the issue – the interest in studying consequences of a mathematical construct that is not certain to exist is much more tied to the interest/novelty of this construct than to our level of understanding or belief in its existence.

Indeed, if a construct is not interesting, then who cares if it exists. But if it is, then studying its consequences is just as interesting, even if we have no idea whether the said construct exists – in some sense this study is even more interesting in that case, since it may shed light on the existence of said construct.

I think obfuscation (and IO as a special case) has earned the right to be considered of high interest to the general TCS community. So the “flurry of recent results on IO” should be embraced, rather than being a source of concern.

Guy/Ran, can you clarify my confusion as to the difference between multi-linear maps & Boolean circuits? Also, am presuming you can derive BC’s from MM’s?

I assume you mean the relation of MMAPs and circuit IO. We currently know how to construct IO for NC1 circuits from several variants of the MMAPs primitive. For some of these variants we don’t currently have candidate constructions. For others we do. (This may be TMI, but for instance one can use the construction of Barak-Garg-Kalai-Paneth-Sahai14 instantiated with the Garg-Gentry-Halevi12 MMAPs, assuming that the GGH12 construction is “semantically secure”, as coined and demonstrated in Pass-Seth-Telang14. One can then bootstrap to IO for P/poly, and even IO for Turing and RAM machines, assuming subexponential security of the underlying IO scheme, plus subexponentially secure OWFs.)

Going in the other direction, what we know today is how to obtain some form of MMAPs from a notion of obfuscation that’s slightly stronger than IO, called strong IO. (In fact, we know that SIO exists if and only if that form of MMAPs exists; see Bitansky-C-Kalai-Paneth, Crypto 14.) It may well be possible to obtain such an equivalence for plain IO and some variant of MMAPs as well. Still, this does not rule out the possibility that IO can be naturally constructed in other ways (say, directly from LWE or from a completely different algebraic construct) without going via MMAPs. It also doesn’t rule out the possibility that other forms of MMAPs may be possible even if IO is proven impossible or implausible.

Hope this clarifies.

Ran, I think it will, when I have digested the links. Appreciate your reply, thank you.

Please, don’t take the following quote too seriously (taken from http://mathoverflow.net/a/53127).

The story goes that once upon a time a student wrote his thesis on Hölder-continuous maps with α>1, since he had only seen the case α≤1 addressed in his books. The student proved many wonderful theorems about these maps and was very excited for his defense.

At his thesis defense, one of the examiners (is that the right word?) asked him to provide a nontrivial example of such a map. The student was flustered. As it turns out, all such maps are constant – no wonder the theorems were so nice.

“there are no attacks against the actual basic IO scheme of Garg et al.”

Which scheme are you referring to?

The Garg et al. “Candidate Multilinear Maps from Ideal Lattices” scheme used in Garg et al. “Candidate indistinguishability obfuscation and functional encryption for all circuits” was broken as far as I know (Jung Hee Cheon, Changmin Lee. “Cryptanalysis of the multilinear map on the ideal lattices”).

Hi Mario. While there have been strong attacks against the GGH scheme, these make use of encodings of zero at lower levels of the graded encoding scheme. Since the Garg et al. obfuscation candidate does not release such encodings, to the best of my knowledge its instantiation with GGH has not been broken.