[Below is the transcript of a talk I gave at the Harvard Lectures that Last event. This was a talk for a general (non technical) audience, but I still thought some people might find it interesting. For some more technical discussion of the notion of “computational Bayesian probabilities” I hint at toward the end, see the following talk and blog post.]
Today I want to talk about one of humanity’s greatest inventions. This is an invention you already encountered when you were 5 or 6. No, I’m not talking about the wheel or fire or even the iPad, but about the place-value number system.
Because we use the place-value decimal system, we only need six digits to express the number of miles from here to the moon. But if you wanted to write this distance using Roman numerals, it would take more than 200 letters. The distance to the sun would fill a small book.
This means that for cultures without place-value systems, these quantities were not merely unknown — they were unspeakable. This is why the place-value system is such a great invention. It did more than simply help us do better calculations: it expanded mankind’s imagination and gave us access to concepts that we simply couldn’t comprehend otherwise.
What does an ancient Babylonian concept have to do with Computer Science? Computers, like people, need to be taught to do even simple things like multiplying numbers.
And it matters how they do it. For example, according to my unscientific and non-IRB-approved experiments, a 9-year-old will take about an hour to multiply two 30-digit numbers using the place-value algorithm she learned in elementary school. In contrast, even the fastest supercomputer on earth would take more than a century to do this computation if it followed the naive Roman-numeral way of repeatedly adding the numbers to one another. So you see, even with all the technological advances of the last 3000 years, we still need those old Babylonian insights.
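As a back-of-the-envelope check of this claim (the numbers below are my own rough estimates, not from the talk): repeated addition of a 30-digit number requires on the order of 10^30 additions, while the place-value algorithm needs only about 30 × 30 single-digit steps.

```python
# Rough operation counts for multiplying two 30-digit numbers.
n_digits = 30
repeated_additions = 10 ** n_digits      # Roman-numeral style: ~10^30 additions
place_value_steps = n_digits * n_digits  # grade-school algorithm: ~900 digit steps

# Even a machine doing 10^18 additions per second (roughly the speed
# of today's fastest supercomputers) would need:
seconds = repeated_additions / 10 ** 18
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.0f} years")  # tens of thousands of years -- far more than a century
print(place_value_steps)     # 900
```

So the supercomputer loses to the 9-year-old not because of hardware, but because of the algorithm.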
In fact, Computer Science has about as much to do with computers as English has to do with printing technology. The point of computer science is not about mechanizing some calculation but about finding a novel way to express a concept we couldn’t grasp before. Finding such a way enriches us whether or not it ends up being programmed into a computer, since it lets us access what was heretofore unspeakable.
Some of the phenomena computer scientists investigate arise from modern challenges, whether it is finding a new language to express social interactions among millions of people or ways to speak about biochemical interactions among thousands of genes. But we also study age-old riddles that have been around since the dawn of civilization. For example, it turns out that the basic algorithm we all learn in elementary school is not the best way to multiply two numbers. Computer scientists found new and much better ways in the 20th century, and the currently best known algorithm was only discovered in 2007.
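The 20th-century improvements alluded to above began with Karatsuba’s algorithm from 1960, which multiplies two n-digit numbers with roughly n^1.58 digit operations instead of the grade-school n^2. A minimal sketch (my own illustration, not part of the talk):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply x and y with Karatsuba's divide-and-conquer trick:
    three half-size multiplications instead of the obvious four."""
    if x < 10 or y < 10:
        return x * y  # base case: single-digit multiplication
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # (high_x + low_x)(high_y + low_y) - z2 - z0 gives the cross terms
    z1 = karatsuba(high_x + low_x, high_y + low_y) - z2 - z0
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(123456789, 987654321) == 123456789 * 987654321)  # True
```

The 2007 algorithm mentioned above (due to Fürer) pushes this much further, but the basic idea of trading one big multiplication for a few cleverly combined smaller ones starts here.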
Computer science has had amazing advances that expanded our horizons both materially and intellectually. But I am personally even more fascinated by the discovery that some notions are in fact inherently unspeakable. That is, no matter what technological advances or new inventions humanity comes up with, some tasks will be forever out of our reach. These limitations tell us about the fundamental laws of computation, in the same way that the physical laws of nature prohibit a perpetual motion machine or faster-than-light communication. Much of computer science is about dealing with these inherent limitations — how can we talk about what we cannot talk about?
Let me give an example loosely related to my own work. Can we really grasp probabilities? We talk about them all the time. You hear in the news that with probability 50% the global sea level will rise by half a meter before 2100, or that Hillary Clinton has a 54% chance of becoming president in 2016. These are important events that have serious consequences. But what do these numbers actually mean?
Economists have long thought they can answer this question. These probabilities are supposed to reflect the predictions of a perfectly rational person who has accumulated all available evidence. This is the same hypothetical rational person that makes free markets work. The only problem is that this person doesn’t exist. And this is not just because we humans are biased, lazy, or stupid. Rather, it turns out that these probabilities are often inherently unspeakable. For example, to truly compute the probability that Hillary Clinton becomes president, you would need to know the probability that she wins Florida, the probability that she wins Ohio, as well as the probability that she wins both Florida and Ohio, and so on and so forth. Unfortunately, there are literally a quadrillion such combinations of the 50 states. If you considered all combinations of the 3000+ counties you would get a number so large that it doesn’t even have a name. There is no way for any human or computer to compute all these probabilities. And yet this is what is needed to obtain such perfectly rational predictions.
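To see where that “quadrillion” comes from, note that every subset of the 50 states is one possible “who wins where” event whose probability the perfectly rational forecaster would need. Counting these subsets directly (my own arithmetic, not from the talk):

```python
# Each subset of states/counties is one combination of wins whose
# probability a perfectly rational forecaster would have to know.
state_combinations = 2 ** 50     # subsets of 50 states
county_combinations = 2 ** 3000  # subsets of 3000 counties

print(state_combinations)             # about 1.1 quadrillion
print(len(str(county_combinations)))  # the county count has over 900 digits
```

A quadrillion (10^15) is already beyond any human bookkeeping, and the county number dwarfs even the number of atoms in the observable universe.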
So, how do we talk about what we cannot talk about? This is a topic for a longer lecture, but one approach involves borrowing insights from Voltaire and Ralph Waldo Emerson, as well as quantum mechanics. To make these probabilities “speakable” we need to realize that the “perfect is the enemy of the good” and give up on the “hobgoblin of consistency”. In fact, we sometimes even need to consider probabilities that are negative numbers. If this sounds mysterious, I hope it entices you to learn some more about computer science.
This term I am teaching cryptography at Harvard. There have been several advances in crypto since I last taught this course at Princeton, and so I will do several things differently.
The slow but steady progress towards constructing quantum computers has caused the NSA to announce that users should transition away from RSA, Diffie-Hellman, and Elliptic Curves (which can be broken by Shor’s algorithm) in the “not too distant future”. That, coupled with the fact that most of the exciting recent theoretical advances (such as fully homomorphic encryption and indistinguishability obfuscators) have used lattice-based cryptography, means that I will shift focus away from the number-theoretic constructions and toward lattice-based crypto.
Also, while the security of the assumptions underlying the recent obfuscation constructions remains murky, and they are still too complicated to be covered in full gory detail, I am going to talk about obfuscation, and in particular use it as a pedagogical tool. One thing that has always bothered me about teaching crypto is that we teach all these wonderfully amazing notions such as public key crypto, zero knowledge proofs, secure multiparty computation, fully homomorphic encryption, etc., as if they “fell from the sky”. The very existence of each one of them comes as a huge surprise, and the students don’t get a sense of why people were hopeful enough to look for a construction in the first place. Obfuscation gives, at least at some intuitive level, an explanation as to why someone who is not on drugs might be bold enough to imagine that such concepts could exist, and I hope to convey this intuition to the students.
Also, I strongly believe that despite the need to cover the fundamentals, an undergraduate cryptography course should not consist mostly of a “dry” collection of basic theorems, definitions, and reductions, and I do want to talk about some more exciting notions such as Bitcoin, quantum computing, fully homomorphic encryption, and more. This of course requires some tradeoffs, and my typical way out is that I simply expect the students to work harder…
If you are not around Harvard and are interested in the course, you can always take it as part of Harvard’s extension school, or you can simply follow the lecture notes that I will post on the course web page.
Below are my draft notes for Tuesday’s lecture (I won’t post future lecture notes here, so follow the website for more). I am not going to actually cover all of this in class: I am doing a “quasi-flipped” class where I expect the students to read the lecture notes before each class, so I can talk about high level intuitions and answer questions, rather than going through all the details of the proofs.
The graduating bits event at ITCS was held last Friday. You can see the list of all presenters and their slides on the webpage.
I enjoyed it and learned quite a bit, even if I was rather strict towards the poor presenters with the timer… Here is a summary of some of the talks based on my faulty memory and the slides. I apologize in advance for the (probably numerous) errors and omissions, and encourage you to take a look at the web pages of the presenters for much better information.
Ron Rothblum talked about new results on proving that a computation is correct, including some wonderful connections between cryptography and quantum information, as well as an exciting new result with his brother Guy and Omer Reingold, showing a (at the moment quantitatively suboptimal) variant of the classical IP=PSPACE result with an efficient prover.
Jayadev Acharya talked about computational versions of classical questions in statistics, such as hypothesis testing and testing distributions for monotonicity and independence, in particular getting the first linear-time algorithms for several classical tasks.
Mohsen Ghaffari talked about his work on distributed algorithms. He’s had a great number of impressive results but focused on his recent SODA 16 paper on distributed maximal independent set, which is the first that matches the 2006 lower bound of Kuhn et al. in some range of parameters.
Nima Haghpanah talked about his work on incentives and computation and in particular about a new result with Hartline showing that despite some daunting hardness results, in several natural settings, it is possible to efficiently maximize revenue even when users have multidimensional preferences (e.g. think of users of an Internet Service Provider where some care more about the price and others care more about the bandwidth).
Z. Steven Wu gave an excellent short talk on differential privacy, a subject on which he has had a number of recent results, both giving new differentially private mechanisms and using them for other applications in mechanism design and learning.
Clément Canonne had probably the most interesting set of slides I have seen. He gave a high level talk about his work in learning, property testing, and distribution testing.
Aviad Rubinstein is a non-graduating student who has done several interesting works on the complexity of problems arising in computational economics, but he talked about his recent thought-provoking work on the fundamental “Sparse PCA” problem, discussing the relation between approximation ratios and success on actual data, and (apparently Berkeley-inspired) “best case analysis”.
Laura Florescu couldn’t make it to the event, but her slides describe some interesting results on the stochastic block model, which has attracted much attention from people in statistics, learning, statistical physics, and more, as well as on rotor walks, which are a deterministic variant of some natural physical processes.
Bo Waggoner opened his talk with a joke that probably resonated with many of the students: when a graduating student is asked what he’s working on, he says “last year I worked on analysis of Boolean functions and lower bounds, but this year I’m graduating, so my area is now data science”. He has generally worked in algorithmic game theory and has recently studied markets for information, as opposed to physical goods.
Ilya Volkovich spoke about the notorious polynomial identity testing (PIT) problem. Most of us run away in fear from it, but he actually managed to solve it for some restricted circuit classes, as well as show connections between it and other open problems.
Elad Haramaty talked about algebraic property testing. Here the canonical example is the low-degree test (i.e., property testing of the Reed-Muller code) that was the technical heart of the original PCP Theorem. He’s had some results in this area and also gave generalizations of the Reed-Muller code that have various applications.
Alexander Golovnev’s talk was inspiring: apparently he missed the memo that general circuit lower bounds are too difficult and should not be attempted by anyone, let alone a student. He and his co-authors have obtained the strongest known lower bound for general Boolean circuits, and he talked about a program to obtain stronger bounds via improved constructions of randomness dispersers.
Finally, Mark Bun missed Bo’s joke, and despite graduating, was not afraid to declare his love for lower bounds. Like Steven Wu, he too works on differential privacy (which for some reason is highly correlated with Star Wars imagery on one’s slides). He has worked on the lower bound side, with works answering the fundamental question of whether we need to pay a price in accuracy or efficiency if we want privacy.
A couple announcements on happenings in Cambridge:
- STOC/SoCG joint workshop day: this year, STOC and SoCG are co-located in Cambridge, MA, and will hold a joint workshop day on June 18th, 2016. The goal of this day is to have events of common interest to both communities and thus foster further collaboration. The schedule already includes plenary talks by two invited speakers: Timothy Chan and Santosh Vempala. You are invited to submit your workshop proposal by February 22nd, to contribute to this Nobel-Peace-Prize-worthy event. You can find more details on the workshop day page.
If you are a student or a postdoc looking for a new position, please come to ITCS 2016 (January 14-16 in Cambridge MA) and give a short presentation on your work during the graduating bits event (Friday, Jan 15 6:30pm).
If you are interested, please send me your information and presentation (see here for precise details on how to do so). There is no deadline per se, but the presentations will be scheduled in the order of submissions, so it’s “first come, first served”. I will also maintain a website with the photos and slides of all presenters.
Happy new year to everyone and hope to see you at ITCS!
ITCS 2016 will be held in Cambridge January 14-16. As in previous years, we will have a “graduating bits” session where students graduating this year can present a (very) short talk on their work. This year I am in charge of the graduating bits event, which means that details on how to sign up for it will only appear at the last minute, so look out for further announcements….