Privacy Loss as a Random Variable

This post is about differential privacy (DP), with a focus on what the differential privacy literature often (and somewhat colloquially) calls “privacy loss”. A brief recap of the setting: a trusted data curator holds a database of sensitive information about individuals. The curator wants to release aggregate statistical information about the data without compromising any individual’s privacy. Differential privacy, due to Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam Smith, is a rigorous framework and security definition for algorithms that operate on sensitive data and publish aggregate statistics; it aims to prevent any individual’s privacy from being compromised. Two distinctive advantages of differential privacy are: (1) it resists linkage attacks and the use of auxiliary information, and (2) it supplies a quantifiable measure of the harm incurred by individuals, and this measure composes nicely.

In this blog post I’d like to discuss the second property, focusing on a quantitative measure of the “harm” incurred by an individual. In a future post I’ll build on this and elaborate on the “nice” composition properties of Differential Privacy, based on work with Cynthia Dwork and Salil Vadhan, and also on more recent work with Cynthia Dwork (but, since I couldn’t resist, I’ll give a couple of teasers about composition in this post).

Differential Privacy: Definition. An algorithm {A} (often referred to as a mechanism in the DP literature) satisfies {\epsilon}-DP if, for any database {DB}, individual’s record {I}, and event {S} (over {A}’s output space), the probabilities (over {A}’s coins) that event {S} occurs when {A} is run on (1) the database {(DB+I)}, comprising all records in {DB} together with record {I}, and (2) the database {DB} (without record {I}), are roughly similar: they are within a small multiplicative factor (close to 1) of each other.

More formally, {A} is {\epsilon}-DP iff for all {DB,I,S}:

\displaystyle \left| \ln \frac{\Pr_A [A(DB+I) \in S]}{\Pr_A [A(DB)\in S]} \right| \leq \epsilon

Typically, we think of {\epsilon} as a small positive constant. Intuitively, DP means that any event that can occur when my data are considered would have occurred with roughly the same probability even if my data were never considered. One interpretation of this guarantee is that individuals’ participation in the analysis will not lead to significant harm.
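
To make the definition concrete, here is a minimal sketch of one standard {\epsilon}-DP mechanism: a counting query answered with Laplace noise of scale {1/\epsilon}. The toy database and query below are illustrative assumptions only; the point is that adding one record changes the true count by at most 1, which is exactly what makes noise of scale {1/\epsilon} suffice.

```python
import numpy as np

def noisy_count(db, epsilon, rng=None):
    """Answer "how many records are in db?" with epsilon-DP.

    Adding one individual's record changes the true count by at most 1,
    so Laplace noise of scale 1/epsilon keeps the output distributions on
    DB and DB+I within a multiplicative factor of e^epsilon of each other.
    """
    rng = np.random.default_rng() if rng is None else rng
    return len(db) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

db = ["record_1", "record_2", "record_3"]   # DB
db_plus_i = db + ["record_of_I"]            # DB + I

print(noisy_count(db, epsilon=0.1))
print(noisy_count(db_plus_i, epsilon=0.1))
```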

Privacy Loss for Outcome {s}. For database {DB}, individual {I}, and outcome {s}, the privacy loss under {s} is:

\displaystyle PrivLoss_A(s) = \ln \frac{\Pr_A [A(DB+I)=s]}{\Pr_A [A(DB)=s]}

Intuitively, this is meant to capture the “harm” incurred by individual {I} when the algorithm outputs {s}. For example, if outcome {s} is more likely with {I} in the database than without {I}, then when we see outcome {s} we are more likely to think that {I} was in the database, and the privacy loss is large. In particular, if the probability of {s} is greater than 0 on {(DB+I)} but 0 on database {DB}, then whenever {s} occurs we know with certainty that {I} was in the database; in this case, the privacy loss is infinite. On the other hand, if {s} is equally likely with or without {I} in the database, then outcome {s} does not reveal any information about whether {I} was in the database, and the privacy loss is 0. If outcome {s} is more likely without {I} in the database than with {I}, then when we see outcome {s} we are less likely to think that {I} was in the database, and the privacy loss is negative. Thus, the privacy loss gives us a quantifiable measure of the harm incurred by an individual. Note that, in fact, a negative privacy loss might also be dangerous, as exposing non-participation might itself lead to harm, and so we often look at the absolute value of the privacy loss.
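
As a toy illustration (continuing the assumed noisy-count sketch above), for a mechanism with continuous output such as the Laplace mechanism the probabilities of individual outcomes are replaced by densities, and the privacy loss at an outcome {s} is simply the log of a density ratio:

```python
import numpy as np

def laplace_density(x, mu, scale):
    """Density of the Laplace(mu, scale) distribution at x."""
    return np.exp(-abs(x - mu) / scale) / (2.0 * scale)

def privacy_loss_at(s, count_with_i, count_without_i, epsilon):
    """PrivLoss_A(s): log density ratio of the noisy count at outcome s,
    run on DB+I (true count count_with_i) versus DB (count_without_i)."""
    scale = 1.0 / epsilon
    return np.log(laplace_density(s, count_with_i, scale)
                  / laplace_density(s, count_without_i, scale))

# Outcomes far above both true counts point towards I being present
# (positive loss); outcomes far below point the other way (negative loss).
for s in [-5.0, 3.5, 10.0]:
    print(s, privacy_loss_at(s, count_with_i=4, count_without_i=3, epsilon=0.1))
```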

The Privacy Loss Random Variable. The notion of privacy loss defined above was tied to a specific outcome {s}. We can analyze the privacy loss (or harm) incurred by participation in the analysis, by considering the privacy loss random variable. This is the random variable obtained by sampling an outcome {s}, and then examining its privacy loss. For database {DB} and individual {I}, the random variable {PrivLoss_A} is sampled by running {A} on {(DB+I)} to produce outcome {s}, and outputting {PrivLoss_A (s)}.
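
Continuing the same assumed example, sampling the privacy loss random variable is exactly this two-step recipe: draw an outcome {s} from {A(DB+I)}, then evaluate the log density ratio at {s}.

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.1
count_without_i, count_with_i = 3, 4     # true counts on DB and DB + I
scale = 1.0 / epsilon

# Step 1: sample outcomes s ~ A(DB+I) (Laplace noise around the larger count).
s = count_with_i + rng.laplace(scale=scale, size=100_000)
# Step 2: evaluate PrivLoss_A(s), the log ratio of the two Laplace densities.
loss = epsilon * (np.abs(s - count_without_i) - np.abs(s - count_with_i))

print(loss.min(), loss.max())   # never exceeds epsilon in absolute value
print(loss.mean())              # on average, much smaller than epsilon
```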

With this random variable in mind, note that {\epsilon}-Differential Privacy simply means that for every {DB} and {I}, the absolute value of {PrivLoss_A} is always bounded by {\epsilon}. A relaxed guarantee, {(\epsilon,\delta)}-Differential Privacy, simply means that with all but {\delta} probability over the coins of {A}, the absolute value of {PrivLoss_A} is bounded by {\epsilon}.
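
The Gaussian mechanism is not discussed in this post, but it is a standard illustration of the relaxed guarantee (the choice of noise scale below is a commonly cited calibration and an assumption of this sketch): its privacy loss is unbounded, yet exceeds {\epsilon} in absolute value only with small probability.

```python
import numpy as np

rng = np.random.default_rng(1)
epsilon, delta = 0.5, 1e-2
# A common calibration of Gaussian noise for a sensitivity-1 query:
sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

count_without_i, count_with_i = 3, 4     # true counts on DB and DB + I

# Sample outcomes s ~ A(DB+I), where A adds N(0, sigma^2) noise to the count,
# and evaluate the privacy loss (log ratio of the two Gaussian densities).
s = count_with_i + rng.normal(scale=sigma, size=1_000_000)
loss = ((s - count_without_i) ** 2 - (s - count_with_i) ** 2) / (2.0 * sigma ** 2)

# Unlike the Laplace example, the loss is unbounded, but here it exceeds
# epsilon in absolute value only rarely (with probability below delta).
print(np.mean(np.abs(loss) > epsilon), "<=", delta)
```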

Expected Privacy Loss. Once we define the privacy loss random variable, we can begin to study its behavior. Of particular interest is its expectation. The expected privacy loss is:

\displaystyle E_{s \leftarrow A(DB+I) } \left[ \ln \frac{\Pr_A [A(DB+I)=s]}{\Pr_A [A(DB)=s]} \right]

(in other words, this is the KL-divergence, or relative entropy, between {A}‘s output distributions on {(DB+I)} and on {DB}).
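
For the assumed Laplace noisy-count example, this expectation can be estimated by Monte Carlo, or written as the KL divergence between two Laplace distributions whose means are one apart (with common scale {1/\epsilon}); either way, it comes out far below the worst-case bound {\epsilon}.

```python
import numpy as np

rng = np.random.default_rng(2)
epsilon = 0.1
scale = 1.0 / epsilon
count_without_i, count_with_i = 3, 4     # true counts on DB and DB + I

# Monte Carlo estimate of the expected privacy loss E[PrivLoss_A] under s ~ A(DB+I):
s = count_with_i + rng.laplace(scale=scale, size=1_000_000)
loss = epsilon * (np.abs(s - count_without_i) - np.abs(s - count_with_i))
print("Monte Carlo estimate:", loss.mean())

# The same quantity in closed form: the KL divergence between Laplace(count+1, 1/eps)
# and Laplace(count, 1/eps), which equals eps + exp(-eps) - 1.
print("Closed form:         ", epsilon + np.exp(-epsilon) - 1.0)
print("Worst-case bound:    ", epsilon)
```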

I will elaborate on the expected privacy loss and its relationship to composition, based on work with Cynthia Dwork and Salil Vadhan, in a future post. But, as I warned above, I can’t resist a brief teaser.

Irit Dinur, Cynthia Dwork and Kobbi Nissim were the first to analyze and bound the expected privacy loss. They did this for a particular algorithm (the binomial noise mechanism), and showed that the expected privacy loss is much smaller than the worst-case behavior. In the next post, I’ll show that this is true for any {\epsilon}-DP algorithm: for small {\epsilon}, the expected privacy loss is always bounded by {2\epsilon^2}. This improved bound on the expected privacy loss (compared with the worst case guarantee) leads to significant improvements in our understanding of the way Differentially Private algorithms behave under composition, i.e. when an individual’s data are involved in many analyses.
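
As a quick sanity check of that teaser against the running (assumed) Laplace example: its expected privacy loss, {\epsilon + e^{-\epsilon} - 1 \approx \epsilon^2/2} for small {\epsilon}, indeed sits below {2\epsilon^2}.

```python
import numpy as np

for epsilon in [0.01, 0.05, 0.1, 0.5]:
    expected_loss = epsilon + np.exp(-epsilon) - 1.0   # Laplace noisy count
    print(epsilon, expected_loss, "<=", 2 * epsilon ** 2)
```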
