# On Leakage-Resilient Cryptography

In this post I wanted to share some background and thoughts about *leakage-resilient cryptography*, which has recently been the focus of a rich body of research. Before saying anything more, I do want to emphasize that this is not meant to be a survey – just a collection of thoughts and observations about recent work and where it may go next. Also, I apologize in advance to the authors of many beautiful and important works that I will not mention here.

**Introduction**

So, what is leakage resilient cryptography all about? To begin, we will need a bit of background. In modern cryptography, algorithms (e.g. for encryption) are often modeled as “black boxes”. It is commonly assumed that their keys, internal computations, and randomness are opaque to external adversaries (note, however, that we assume that the algorithms themselves are public). Using this modeling, cryptographic algorithms can be proven secure under various hardness assumptions. This methodology has been a phenomenal success story, both in theory and in practice, transforming cryptography from an “art” into a “science”.

This, however, is not the end of the story: cryptographic algorithms are routinely run in adversarial settings, where keys are compromised and computations are not fully opaque, e.g. because of *side channel attacks*. Side channel attacks exploit the physical implementation of cryptographic algorithms. The physical implementation might enable “leakage”: observations and measurements on the cryptographic algorithm’s internals. Attacks such as these can and have broken systems with a mathematical security proof, without violating any of the underlying mathematical principles (see, for example, the differential power analysis attacks of [KJJ99]). The problem was not with the proofs, but rather with the model of the cryptographic algorithm as a fully opaque black box.

Leakage-resilient (or side-channel resilient) cryptography attempts to tackle such attacks. One main goal is building more robust models of adversarial access to a cryptographic algorithm, and developing methods grounded in modern cryptography to provably resist such attacks.

One digression I would like to make at this point is to note that while leakage can be particularly devastating to cryptographic algorithms, it also poses a threat to non-cryptographic algorithms and computations. Consider, for example, a valuable proprietary algorithm running on an untrusted remote cloud server. If some of the algorithm’s internals, say its cache use, are exposed to other entities on the same server, might the code be reverse-engineered? It certainly seems interesting to protect the valuable algorithm from an adversary with partial access to its internals.

So the question remains: which kinds of algorithms can be protected against which kinds of attacks? I will mention two paradigms that have led to two lines of work on this question.

**Paradigm I: Resisting Leakage at Design Time**

One line of research takes leakage into account *at design time*. Namely, we consider a specific task or cryptographic primitive (e.g. encryption), and a specific security requirement (e.g. semantic security), and attempt to design an algorithm that maintains security even in the face of some family of leakage attacks.

For example, a beautiful paper of Akavia, Goldwasser and Vaikuntanathan [AGV09] showed how to build a public-key cryptosystem that remains semantically secure even under leakage on the secret key: in particular, the system remains secure even when the adversary can see the output of an arbitrary (efficiently computable) shrinking function of the secret key, so long as the output length is bounded by a constant fraction of the secret key size. So, for example, the adversary could choose to see half of the bits of the secret key and still the scheme remains secure. ([AGV09] showed this under lattice assumptions; Naor and Segev [NS09] showed even stronger security, and under a wider variety of assumptions. Both works were motivated by the so-called “cold boot” attacks of Halderman et al [H+08].)
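To make this model concrete, here is a toy Python sketch of the bounded-leakage interface: the adversary submits an arbitrary (efficiently computable) leakage function, and the model enforces only a bound on its output length. All names and sizes are illustrative assumptions of mine; this is not code from [AGV09].

```python
import os

KEY_BYTES = 32       # a 256-bit secret key (illustrative size)
MAX_LEAK_BITS = 128  # the adversary may see at most half the key bits

def sample_key() -> bytes:
    return os.urandom(KEY_BYTES)

def leak(sk: bytes, leakage_fn) -> bytes:
    """Apply an adversarially chosen leakage function; only its output length is bounded."""
    out = leakage_fn(sk)
    assert len(out) * 8 <= MAX_LEAK_BITS, "leakage exceeds the allowed bound"
    return out

# The example from the text: the adversary simply reads the first half of the key.
first_half = lambda sk: sk[: len(sk) // 2]

sk = sample_key()
leaked = leak(sk, first_half)
assert leaked == sk[: KEY_BYTES // 2]
```

The point of the model is that `leakage_fn` is arbitrary, not fixed in advance; the scheme must stay semantically secure for every admissible choice.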

One shortcoming of these schemes was that the total number of bits of leakage that could be handled was fixed in advance. Brakerski, Kalai, Katz and Vaikuntanathan [BKKV10] showed how to handle an unbounded total number of leaked bits, while maintaining semantic security. In particular, they built a scheme where the key is periodically “refreshed”, and security holds as long as no more than a constant fraction of the key's bits of information is leaked between successive updates. ([BKKV10] assumed a leak-free refresh phase; Lewko, Lewko and Waters [LLW11] gave a scheme that can handle bounded-length leakage even from each key refresh.)
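As a loose illustration of the refresh idea, the sketch below stores a key as two XOR-shares and periodically re-randomizes them without ever reconstructing the key, so leakage in one period says nothing about the representation in another. This conveys only the flavor: the actual [BKKV10] construction refreshes algebraically structured keys inside a public-key scheme, not XOR shares.

```python
import os

KEY_BYTES = 32

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(sk: bytes):
    """Store the key as two shares whose XOR equals the key."""
    r = os.urandom(KEY_BYTES)
    return r, xor(sk, r)

def refresh(s0: bytes, s1: bytes):
    """Re-randomize both shares (with the same fresh mask) without reconstructing the key."""
    r = os.urandom(KEY_BYTES)
    return xor(s0, r), xor(s1, r)

sk = os.urandom(KEY_BYTES)
s0, s1 = split(sk)
for _ in range(10):          # ten refresh periods
    s0, s1 = refresh(s0, s1)
assert xor(s0, s1) == sk     # functionality is preserved across refreshes
```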

**Paradigm II: automatic “leakage-resilience” compiler?**

The works above (and many others that I did not mention) demonstrate that one can design cryptographic algorithms whose security guarantees are extremely robust in the presence of leakage. However, the approach of considering leakage at design time has its limitations:

- The first, and more obvious, limitation is that whenever we want to accomplish a new task we need to design a new leakage-resilient algorithm. It would be much easier if we had an automatic *“leakage resilience compiler”* that could transform any algorithm secure in the “black-box” opaque model of cryptography into an algorithm that remains secure even under leakage. In particular, this would potentially let us protect even non-cryptographic algorithms from side-channel attacks.
- The second limitation has to do with what exactly we mean by “secure under leakage”. In the leakage-at-design-time paradigm, we considered (at design time) a *specific security property* (e.g. semantic security). The leakage-resilient algorithm we designed will not necessarily guarantee *any* other security properties under leakage: in particular, even if the scheme was CCA secure without leakage, it will not be CCA secure in the presence of leakage. For example, one bit of leakage from the secret key, say leakage obtained from observing the decryption of some ciphertext *c₁*, might be used to decrypt a different and unrelated ciphertext *c₂*. Ideally, we’d want a compiler that lets the transformed algorithm inherit all the security properties of the original algorithm. For example, we may ask that a leakage adversary cannot learn *anything* more given leakage from the algorithm’s internals than what it could learn in the “black box” model. Slightly more formally, we seek a “simulation-based” guarantee: any harm that an adversary could inflict via leakage access to the transformed program could also be inflicted on the original program using only black-box access (I focus throughout on passive adversaries that simply observe the program’s execution). Such goals were first considered in the work of Goldreich and Ostrovsky [GO96], and also in the work of Ishai, Sahai and Wagner [ISW03].

The above motivates what we call “leakage-resilience compilers” for automatically transforming programs to resist leakage in the strong black-box sense above. The big question is whether we can hope for such “automatic” leakage-resilience compilers. To the best of my knowledge, this question was first raised explicitly by Ishai, Sahai and Wagner [ISW03]. The answer is both **NO** and **YES**: it depends on the class of leakage attacks we want to protect against. Ideally, we might hope to protect against general length-bounded leakage on (each execution of) the transformed algorithm:

- **NO**, because there provably cannot exist a leakage-resilience compiler that handles even *a single bit* of general leakage from each execution (i.e. an arbitrary efficiently computable function with a 1-bit output). This is a consequence of the negative result for program obfuscation in the work of Barak et al [B+01]; it was pointed out to me in a discussion with Russell Impagliazzo.
- **YES**, because if we are willing to consider more restricted classes of leakage, then automatic leakage-resilience compilers become possible. [ISW03] showed how to handle “wire-probe” leakage, where the adversary can see the values on a bounded number of wires in the transformed circuit. A line of research extended the leakage to arbitrary length-bounded AC⁰ functions, first in the work of Faust et al [F+10] (who also handled “noisy” leakage); this was more recently extended in [R12]. Another family that has been studied is length-bounded “local” leakage functions that leak separately on different parts of the (transformed) computation (this is also known as OC or “only computation leaks” leakage, due to Micali and Reyzin [MR04]). This was considered by [GR10], by Juma and Vahlis [JV10], by Dziembowski and Faust [DF12], and most recently in [GR12]. Note that the compilers above all incur different overheads in their transformations (which I will not get into).
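The starting point of the wire-probe compilers is masking: every wire value is split into random shares, so that any strict subset of the shares is uniformly random and reveals nothing. Below is a minimal sketch of XOR-sharing a single bit (my own illustration, not code from [ISW03]; the hard part of the compiler, computing through nonlinear gates on shared values, is omitted):

```python
import secrets
from functools import reduce

def share_bit(b: int, n: int) -> list:
    """Split a bit into n XOR-shares; any n-1 of them are uniformly random."""
    shares = [secrets.randbelow(2) for _ in range(n - 1)]
    # The last share is chosen so that all shares XOR back to the secret bit.
    shares.append(b ^ reduce(lambda x, y: x ^ y, shares, 0))
    return shares

def reconstruct(shares: list) -> int:
    return reduce(lambda x, y: x ^ y, shares, 0)

for b in (0, 1):
    assert reconstruct(share_bit(b, 4)) == b
```

An adversary probing up to 3 of the 4 wires here sees independent uniform bits, which is exactly why a bounded number of wire probes leaks nothing about the computation's secrets.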

The body of work on leakage-resilience compilation indicates that, while certain limitations do exist, it is possible to build leakage-resilience compilers against rich families of side-channel attacks.

Note that even more recently, there has been further work on *leakage-resilient protocols*; see, e.g., Boyle, Goldwasser, Jain and Kalai [BGJK12]. I will not discuss this more recent line of work here (even though I find it very interesting).

**Afterword: A Foundational Perspective**

While the study of leakage-resilient cryptography is motivated by real-world attacks and security considerations, at its heart it explores a foundational question: the difference between giving an adversary *black-box access to a program* and *access to the program’s code or internals*. This question is central to the foundations and the practice of cryptography. In [GR12] we noted that the connection between obfuscation and leakage-resilience hinted at above is no accident: code obfuscation considers the task of compiling a program to make it completely unintelligible, or “impervious to all leakage” (i.e. even to an adversary with complete access to the program’s code). Unfortunately, full-blown obfuscation is provably impossible in many settings [B+01, GK05], and is considered intractable in practice. Perhaps as a result of this impossibility, much of cryptography only considers adversaries that have (at best) “black box” access to the programs under attack. Leakage-resilience compilation can be viewed as exploring the middle ground between full access to the code and black-box access: giving the adversary *limited access* to the program’s internals and its code. One of my primary motivations in working on leakage-resilience compilers is understanding which kinds of restricted access to code permit secure compilation (i.e. leakage resilience).

From a real-world security perspective, I would like to note that I do not necessarily view the leakage models above as realistically capturing *all* side-channel attacks; see, e.g., the work of Renauld et al [R+11]. I view this as motivation for further study of leakage resilience against richer classes of attacks.


**Bibliography**

[AGV09] Adi Akavia, Shafi Goldwasser, and Vinod Vaikuntanathan. Simultaneous hardcore bits and cryptography against memory attacks. TCC 2009: 474-495

[B+01] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. CRYPTO 2001: 1-18

[BGJK12] Elette Boyle, Shafi Goldwasser, Abhishek Jain, and Yael Tauman Kalai. Multiparty computation secure against continual memory leakage. STOC 2012: 1235-1254

[BKKV10] Zvika Brakerski, Yael Tauman Kalai, Jonathan Katz, and Vinod Vaikuntanathan. Overcoming the hole in the bucket: Public-key cryptography resilient to continual memory leakage. FOCS 2010: 501-510

[DF12] Stefan Dziembowski, Sebastian Faust. Leakage-Resilient Circuits without Computational Assumptions. TCC 2012: 230-247

[F+10] Sebastian Faust, Tal Rabin, Leonid Reyzin, Eran Tromer, and Vinod Vaikuntanathan. Protecting circuits from leakage: the computationally-bounded and noisy cases. EUROCRYPT 2010: 135–156

[GO96] Oded Goldreich, Rafail Ostrovsky. Software Protection and Simulation on Oblivious RAMs. J. ACM 43(3): 431-473 (1996)

[GK05] Shafi Goldwasser, Yael Tauman Kalai. On the Impossibility of Obfuscation with Auxiliary Input. FOCS 2005: 553-562

[GR10] Shafi Goldwasser, Guy N. Rothblum. Securing Computation against Continuous Leakage. CRYPTO 2010: 59-79

[GR12] Shafi Goldwasser, Guy N. Rothblum. How to compute in the presence of leakage. FOCS 2012

[H+08] J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, Edward W. Felten. Lest We Remember: Cold Boot Attacks on Encryption Keys. USENIX Security Symposium 2008: 45-60

[ISW03] Yuval Ishai, Amit Sahai, and David Wagner. Private circuits: Securing hardware against probing attacks. CRYPTO 2003: 463-481

[JV10] Ali Juma and Yevgeniy Vahlis. Protecting cryptographic keys against continual leakage. CRYPTO 2010: 41-58

[KJJ99] Paul Kocher, Joshua Jaffe, and Benjamin Jun. Differential power analysis. CRYPTO 1999: 388-397

[LLW11] Allison B. Lewko, Mark Lewko, Brent Waters. How to leak on key updates. STOC 2011: 725-734

[MR04] Silvio Micali and Leonid Reyzin. Physically observable cryptography (extended abstract). TCC 2004: 278-296

[NS09] Moni Naor, Gil Segev. Public-Key Cryptosystems Resilient to Key Leakage. CRYPTO 2009: 18-35

[R+11] Mathieu Renauld, François-Xavier Standaert, Nicolas Veyrat-Charvillon, Dina Kamel, and Denis Flandre. A formal study of power variability issues and side-channel attacks for nanoscale devices. EUROCRYPT 2011: 109-128

[R12] Guy N. Rothblum. How to Compute under AC0 Leakage without Secure Hardware. CRYPTO 2012

**Comments**

Thanks for the excellent post, Guy! Can you sketch out more of the negative result that Russell mentioned to you?

Sure Moritz, thanks for the question. Barak et al showed the following: there exists a family f_s of functions (where each function is keyed by s) such that, for a randomly chosen s, given *any circuit* computing f_s you can compute a bit of s. On the other hand, this bit looks uniformly random given only black-box access to f_s.

Think now of trying to protect a program computing f_s from leakage. Any allegedly leakage-resilient program computing f_s is itself a circuit that computes the function, and so the leakage function can compute the non-black-box bit of information from the Barak et al attack. This means we cannot even get a leakage-resilience compiler that protects against a single bit of arbitrary (efficiently computable) leakage.

In fact the impossibility is even worse: Barak et al build a family of PRFs such that, given any circuit implementing a random function in the family, you can reconstruct the entire key. If the key is k bits long, then k one-bit leakage attacks suffice to reconstruct the key.

Thanks for your introduction to leakage-resilient cryptography. When we face key leakage, how can we eliminate its consequences?

As far as I know, hash proof systems (HPS) can provide such a solution. Actually, there are lots of solutions based on algebraic and number-theoretic techniques.

Could you list the possible leakage attacks and give the corresponding solutions?

Thanks in advance.