
Experts shmexperts

April 9, 2020

(If you’re not already following him, I highly recommend reading Luca Trevisan’s dispatches from Milan, much more interesting than what I write below.)

On the topic of my last post, Ross Douthat writes in the New York Times that “In the fog of coronavirus, there are no experts”, even citing Scott Aaronson’s post. Both Aaronson and Douthat make the point that the COVID-19 crisis is so surprising and unprecedented, and that experts were so much in the dark, that there is no reason to trust them over non-expert “common sense” or “armchair epidemiologists”.

It’s true that the “expert models” have significant uncertainty, hardwired constants, noisy data, and dubious assumptions. It is also true that many countries (especially those that didn’t learn from the 2003 SARS epidemic) bungled their initial response. But do we really need to challenge the notion of expertise itself? To what extent did this pandemic go unpredicted by experts, or progress in ways that defied their expectations?

Here is what some of these experts and institutions were saying in the recent past:

“The thing I’m most concerned about … is the emergence of a new virus that the body doesn’t have any background experience with, that is very transmissible, highly transmissible from person to person, and has a high degree of morbidity and mortality … a respiratory illness that can spread even before someone is so sick that you want to keep them in bed.” Dr. Anthony Fauci, 2019.

“High-impact respiratory pathogens … pose particular global risks … [they] are spread via respiratory droplets; they can infect a large number of people very quickly and with today’s transportation infrastructure, move rapidly across multiple geographies. … There is insufficient R&D investment and planning for innovative vaccine development and manufacture, broad-spectrum antivirals, … In addition, such a pandemic requires advance planning across multiple sectors … Epidemic control costs would completely overwhelm the current financing arrangements for emergency response.” WHO world at risk report, 2019.

“respiratory transmission … is the transmission route posing the greatest pandemic risk … [since it] can occur with coughing or simply breathing (in aerosol transmission), making containment much more challenging. … If a pathogen is capable of causing asymptomatic or mildly symptomatic infections that either do not or only minimally interrupt activities of daily living, many individuals may be exposed. Viruses that cause the common cold, including coronaviruses, have this attribute.” JHU report, 2019.

As an experiment, I also tried to compare the response of experts and “contrarians” in real time as the novel coronavirus was discovered, trying to see if it’s really the case that, as Douthat says, “up until mid-March you were better off trusting the alarmists of anonymous Twitter than the official pronouncements from the custodians of public health”. I chose both experts and contrarians who are active on Twitter. I was initially planning to look at several people, but due to laziness I am just taking Imperial College’s J-IDEA institute as the expert and Robin Hanson as the contrarian. I also looked at Douthat’s Twitter feed, to see if he followed his own advice. Initially I thought I would go all the way to March, but I had no time, so I just looked at the period from January 1 until February 14. I leave any conclusions to the reader.

January 1-19:

(Context: novel coronavirus confirmed in Wuhan, initially unclear if there is human to human transmission – this was confirmed by China on January 20 though suspected before.)

Here is one of several tweets by Imperial from this period:

I didn’t see any tweets from Hanson or Douthat on this topic.

January 20-31:

(Context: first confirmed cases in several countries, including the US; WHO declares an emergency on January 30; US restricts travel from China on January 31. By then there are about 10K confirmed cases and 213 deaths worldwide.)

On January 25th Imperial College estimated the novel coronavirus “R0” parameter as 2.6:

Hanson tweeted approvingly about China’s response and that this situation might help the “more authoritarian” U.S. presidential candidate:

Still no tweet from Douthat on this topic, though he did say on January 29th that, compared to issues of the past, the U.S.’s problems in the 2020s are a “problem of decadence” rather than any crisis like the late 1970s:

February 1-14:

(Context: Diamond Princess cruise ship quarantined, the disease gets its official name COVID-19, first death in Europe)

Imperial continues to tweet extensively, including the following early estimates of the case fatality rates:

Robin Hanson correctly realizes this is going to spread wide:

Hanson tweets quite a lot about this, including potential social implications. Up to February 13th there is nothing too “contrarian”, but also no information that could not be gotten from the experts:

On February 14 Hanson stakes out a very contrarian position when he proposes “controlled infection” as a solution:

To the anticipated “you first” objection he responds: “I proposed compensating volunteers via cash or medical priority for associates, & I’d seriously consider such offers.” He doesn’t mention that he is much less strapped for cash than some of the would-be “volunteers”.

Still no tweet from Douthat about COVID-19 though he does write that we live in an “age of decadence”:

In defense of expertise

April 4, 2020

Scott Aaronson blogged in defense of “armchair epidemiology”. Scott makes several points I agree with, but he also advocates that rather than discounting ideas from “contrarians” who have no special expertise in the matter, each one of us should evaluate the input of such people on its merits.

I disagree. I can judge on their merits the validity of a proposed P vs NP proof or a quantum algorithm for SAT, but I have seen time and again smart and educated non-experts misjudge such proposals. As much as I’d like to think otherwise, I would probably be fooled just as easily by a well-presented proposal in area X that “brushes under the rug” subtleties that experts in X would immediately notice.

This is not to say that non-experts should stay completely out of the matter. Just as science journalists such as Erica Klarreich and Kevin Hartnett of Quanta can do a great job of explaining computer science topics to a lay audience, so can other well-read people serve as “signal boosters” and highlight the work of experts in epidemiology. Journalist Helen Branswell of STAT News has been following the novel coronavirus since January 4th.

The difference is that these journalists don’t pretend to see what the experts are missing, but rather highlight and simplify the work that experts are already doing. This is unlike “contrarians” such as Robin Hanson who do their own analysis on a spreadsheet and come up with a “home-brewed” policy proposal such as deliberate infection or variolation (with “hero hotels” where people go to be deliberately infected). I am not saying that such proposals are necessarily wrong, but I am saying that I (or anyone else without experience in this field) am not qualified to judge them. Even if they did “make sense” to me (they don’t), I would not feel any more confident in judging them than I would in reviewing a paper in astronomy. There is a reason why Wikipedia has a “no original research” policy.

Moreover, the attitude of dismissing expertise can be dangerous, whether it comes in the form of “teach the debate” in the context of evolution, or “ClimateGate” in the context of climate change. Contrary to the narrative of a few brave “dissenters” or “contrarians”, in the case of COVID-19 the experts, as well as the World Health Organization, have been literally sounding the alarm (see also this timeline, as well as this NPR story on the US response). Yes, some institutions, and especially the U.S., failed in several aspects (most importantly in the early production of testing). But one of the most troubling aspects is the constant sense of “daylight” and distrust between the current U.S. administration and its own medical experts. Moreover, the opinions of people such as law professor Richard Epstein are listened to even when they are far out of their depth. It is one thing to entertain the opinions of non-expert contrarians when we have all the time in the world to debate, discuss, and debunk. It’s quite another to do so in the context of a fast-moving health emergency. COVID-19 is an emergency that has medical, social, economic, and technological aspects, but it will best be addressed if each person contributes according to their skill set and collaborates with people of complementary backgrounds.

Technology for theory: COVID-19 edition

March 26, 2020

The new coronavirus upended much of society, including our little corner of it. I believe at this point almost all theorists are teaching and doing research at home, and I thought it would be good to share some of the tools we use for doing so. Below I will describe my setup, but I hope other people share theirs too.

Teaching and virtual whiteboards

I am teaching using Zoom and using an iPad Pro with a pencil to simulate a whiteboard. I use a laptop to connect to Zoom for the camera and chat window, and then join the session from the iPad as well. There are several ways to connect an iPad to a Zoom session:

  1. Join the session from the iPad separately using the iPad Zoom app. (To do so you might need to log out of your other account.)
  2. Within the Zoom program on your computer you can choose “share screen”, and then one of the options is to join an iPad connected to the same wireless network as the laptop/desktop and use “screen mirroring” on the iPad. (You can find the screen mirroring option on the iPad by swiping from the top right corner in a diagonal motion.)
  3. Another variant of this is to use a third-party app such as Reflector 3. Reflector 3 sets up an AirPlay server on your PC so you can mirror the iPad screen to it. You can then share the Reflector 3 window from Zoom (see screenshots below). If you do use Reflector 3, you can remove its annoying iPad-like frame.
  4. You can use a wired connection: either connect directly through USB (on a Mac), or use a more complex combination of an adapter that takes an HDMI signal out of the iPad together with an HDMI capture card that streams this signal to the computer.

I use either option 2 or 3. (I might have used 4 if I had a Mac.) The main reason I prefer these to option 1 is that the application I use for a whiteboard – GoodNotes – has a presentation mode that behaves differently when you are connected to an external display or use AirPlay (which is what options 2 and 3 do). In this presentation mode the students don’t see your interface, so you can zoom in, go to the page selector, and more, without disturbing what they see. GoodNotes also has a great “laser pointer”. I set the pages to a landscape orientation, and pre-write a good chunk of what I plan to present before the lecture. I also use the ability to “duplicate pages” to achieve the PowerPoint-like effect of gradual reveal.

It is not perfect – I’ve discovered that the screen share sometimes stops refreshing, and I need to leave GoodNotes and return to it for the sharing to resume (so far this seems to work better in Reflector 3 for me).

Monitoring the chat window and raised hands in Zoom is non-trivial. It helps a lot that I have a teaching assistant that participates in lecture and notices if I missed something.

Some people say that a “matte screen protector” such as PaperLike makes the iPad more comfortable to write on – haven’t yet tried it. (Update 4/1/2020: I now tried PaperLike and can vouch for it – it greatly improves my handwriting! shipping from the UK did take some time though.)

I have a good webcam (Logitech Brio), but at the moment I’m not using it since it seems too taxing on my laptop, so I went back to the native webcam. I have a very nice wireless headset/mic combo (Jabra Evolve 75) that I am constantly using and have been very happy with. I particularly like the feature of being able to unmute and mute yourself by raising and lowering the mic.

[Screenshots: using Zoom’s screen share to share either the iPad or the Reflector 3 window; choosing which source to mirror to from the iPad; the screen mirroring option reached by swiping from the top right corner of the iPad.]

Research

For research, Slack continues to be extremely useful. For working jointly on a paper, Overleaf is of course great, but for maintaining a shared document it is sometimes useful to use simpler platforms that are not full-fledged LaTeX. Some options include:

Google JamBoard is an interactive whiteboard, also with an iPad app. I haven’t tried it yet but it seems promising.

Keeping children busy

For many people I imagine childcare is one of the most challenging aspects. At the moment at least the Cambridge Public Schools are not keeping my kids too busy. While probably most of their time is spent in non educational pursuits, we try to also encourage (i.e., bribe/extort/threaten/beg) them to do some learning. If your kids are interested in math, I highly recommend the courses offered by the Art of Problem Solving (they also have a theory connection: one of their books was co-authored by theorist Ravi Boppana). For younger kids you can also try their Beast Academy.

The AOPS program is not free but over the years my kids (especially my 13 year old daughter Alma) have also spent a lot of time on the free and amazing Khan Academy. In fact, last year she was inspired enough by Khan’s JavaScript course to write the following poem which (for obvious reasons) moved me very much:

Tending / Alma Barak

My mind is a desert
of boredom
of blankness
of leaning-back-in-your-chairness
and of simply-staring-at-the-ceilingness
I kick at a crumpled
deserted baby-blonde post-it
with my spotted socked feet
waiting for Aba to come.

We dive when he comes
into the seas of Khan
free-styling our way
into the lakes of Java
we pass codes we already
knew
while loops
for loops
arrays
reminding me of what I’d learned

We frog-kick to an
uncharted
unfamiliar
lagoon of code I don’t know
sometimes I get swept away by currents
of confusion
but my aba, my dad
grabs my hand
and shows me the way through
teaching me
tending to me
washing away the sands of boredom with the sea of Khan.

New CS theory talk aggregator

March 25, 2020

Shachar Lovett has put together a new website aggregating information about virtual talks in CS theory: https://cstheorytalks.wordpress.com/

It has a Google calendar that people can add to their own, and a form to submit a new talk that automatically gets added to the Google calendar.

This can be a fantastic resource in these days when almost no one can travel – please publicize it and also submit talks that you are organizing.

FOCS deadline pushed back 6 days

March 19, 2020

From Sandy Irani, FOCS 2020 PC chair:

In light of the very unusual developments in the world due to the spread of Covid-19 and the disruption it is causing to many people in our field, especially those with young children at home, the FOCS PC has decided to push back the final deadline for papers by six days. We would have liked to do more, but we still are trying to stick to our timeline for notification since that date is coordinated with other deadlines in the theory community. In addition, some members of the committee are also affected by this crisis and there is concern that we may not be able to do our job as a committee in a shorter time frame. Pre-registration (including titles and abstracts) is still required by the original deadline. Here are the new dates:

Pre-registration (including titles and abstracts): Thursday April 9, 11:59 PM (EDT)

Final Paper Submission: Wednesday April 15, 11:59 PM (EDT)

Conference url: https://focs2020.cs.duke.edu/

We hope you all stay healthy!

–Sandy Irani, for the FOCS 2020 Committee

Life and CS theory in the age of Coronavirus

March 12, 2020

Harvard University, as well as most other places that I know of, will be moving to remote lectures. I just gave the last in-person lecture in my cryptography course. I would appreciate technical suggestions on the best format for teaching remotely. At the moment I plan to use Zoom and log in from both my laptop (for the video) and my iPad Pro (for the interactive whiteboard).

The one silver lining is that more lectures will be available online. In particular, if you’re teaching algorithms, you might find Eric Vigoda’s videos helpful. (If you know of more sources, please let us know.)

I was hoping that reducing meetings and activities would be good for research, but I currently find it hard to concentrate on anything except this train wreck of a situation. The Financial Times has a chart that summarizes the progress of the disease in several countries:

The number of confirmed cases grows by about 33% each day. This growth in confirmed cases is partially due to increased testing as cases increase – there is some evidence that the doubling time of the disease (the time for the number of infected people to go from n to 2n) is about 6 days (rather than the 2.4 days that this figure suggests). However, a doubling time of 6 days still means that the number of cases grows more than 10-fold in a month, and so if there are 10K actual cases in the U.S. today, there would be well over 100K by mid April and over 1M by mid May.
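To make this back-of-the-envelope arithmetic concrete, here is a minimal Python sketch. The 10K starting count and 6-day doubling time are just the rough figures from the paragraph above, not actual data; note that doubling every 6 days in fact gives about a 32-fold increase per month.

    # Back-of-the-envelope projection under exponential growth.
    # The starting count and doubling time are the rough figures quoted above, not real data.
    def projected_cases(initial_cases, doubling_time_days, days_ahead):
        """Cases after `days_ahead` days if the count doubles every `doubling_time_days` days."""
        return initial_cases * 2 ** (days_ahead / doubling_time_days)

    initial, doubling = 10_000, 6     # assumed actual U.S. cases "today" and doubling time in days
    for days in (30, 60):             # roughly mid April and mid May
        print(f"after {days} days: ~{projected_cases(initial, doubling, days):,.0f} cases")
    # -> after 30 days: ~320,000 cases; after 60 days: ~10,240,000 cases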

Strong quarantine regimes, contact tracing, and drastically reducing activity and increasing “social distance” can very significantly reduce the base of this exponent. Reducing the base of the exponent is more than simply “delaying the inevitable”. The mortality statistics mask the fact that this can be a very serious illness even for people who don’t die of it – about 5% of cases need intensive care (see this Economist article). Spreading the infections over time will enable the healthcare system to handle the increased caseload, which would otherwise completely overwhelm it.

Such steps are clearly much easier to achieve before the number of cases grows too large to be manageable, but despite having “advance warning” from other countries, this lesson does not at the moment seem to have sunk in, at least here in the U.S. No such initiatives are being taken at the federal level, the states are doing more but still not enough, and it’s up to private companies and institutions to come up with their own policies. As faculty and citizens there is not much we can do about it except support such decisions even when they are unpopular, and try to make the remote experience as good as possible for us, our colleagues, and our students.

Summer School on Statistical Physics and Machine Learning

January 21, 2020

Gerard Ben Arous, Surya Ganguli, Florent Krzakala and Lenka Zdeborova are organizing a summer school on statistical physics of machine learning on August 2-28, 2020 in Les Houches, France. If you don’t know Les Houches, it apparently looks like this:

They are looking for applications from students, postdocs, and young researchers in physics & math, as well as computer scientists. While I am biased (I will be lecturing there too), I think the combination of lecturers, speakers, and audience members will yield a unique opportunity for interaction across communities, and I strongly encourage theoretical computer scientists to apply (which you can do from the website). Let me also use this opportunity to remind people again of Tselil Schramm’s blog post, where she collected some of the lecture notes from the seminar we ran on physics & computation.

More information about the summer school:

The “Les Houches school of physics”, situated close to Chamonix and the Mont Blanc in the French Alps, has a long history of forming generations of young researchers on the frontiers of their fields. Our school is aimed primarily at the growing audience of theoretical physicists and applied mathematicians interested in machine learning and high-dimensional data analysis, as well as to colleagues from other fields interested in this interface. [my emphasis –Boaz] We will cover basics and frontiers of high-dimensional statistics, machine learning, the theory of computing and learning, and probability theory. We will focus in particular on methods of statistical physics and their results in the context of current questions and theories related to machine learning and neural networks. The school will also cover examples of applications of machine learning methods in physics research, as well as other emerging applications of wide interest. Open questions and directions will be presented as well.

Students, postdocs and young researchers interested to participate in the event are invited to apply on the website http://leshouches2020.krzakala.org/ before March 15, 2020. The capacity of the school is limited, and due to this constraint participants will be selected from the applicants and participants will be required to attend the whole event.

Lecturers:

  • Boaz Barak (Harvard): Computational hardness perspectives
  • Giulio Biroli (ENS, Paris): High-dimensional dynamics
  • Michael Jordan (UC Berkeley): Optimization, diffusion & economics
  • Marc Mézard (ENS, Paris): Message-Passing algorithms
  • Yann LeCun (Facebook AI, NYU). Challenges and directions in machine learning
  • Remi Monasson (ENS, Paris): Statistical physics of learning in neural networks
  • Andrea Montanari (Stanford): High-dimensional statistics & neural networks
  • Maria Schuld (Univ. KwaZulu Natal & Xanadu): Quantum machine learning
  • Haim Sompolinsky (Harvard & Hebrew Univ.): Statistical mechanics of deep neural networks
  • Nathan Srebro (TTI-Chicago): Optimization and implicit regularisation
  • Miles Stoudenmire (Flatiron, NYC): Tensor network methods
  • Pierre Vandergheynst (EPFL, Lausanne): Graph signal processing & neural networks

Invited Speakers (to be completed):

  • Christian Borgs (UC Berkeley)
  • Jennifer Chayes (UC Berkeley)
  • Shirley Ho (Flatiron NYC)
  • Levent Sagun (Facebook AI)

Intro TCS recap

January 15, 2020

This semester I taught another iteration of my “Introduction to Theoretical Computer Science” course, based on my textbook in progress. The book was also used in University of Virginia CS 3102 by David Evans and Nathan Brunelle.

The main changes I made to the text and course since their original versions were to make them less “idiosyncratic”: while I still think using programming language terminology is the conceptually “right” way to teach this material, there is a lot to be said for sticking with well-established models. So, I used Boolean circuits as the standard model for finite-input non-uniform computation, and Turing machines as the standard model for unbounded-input uniform computation. (I do talk about the equivalent programming-languages view of both models, which can be a more useful perspective for some results, and is also easier to work with in code.)
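To illustrate the “easier to work with in code” point, here is a minimal sketch of my own (not the textbook’s actual code) of evaluating a Boolean circuit given as a straight-line list of AND/OR/NOT gates:

    # Minimal sketch (not the textbook's code): a circuit as a list of gates,
    # each gate being (output_wire, operation, input_wire(s)), evaluated in order.
    def eval_circuit(gates, inputs):
        wires = dict(inputs)                     # wire name -> bit
        for out, op, *args in gates:
            vals = [wires[a] for a in args]
            if op == "AND":
                wires[out] = vals[0] & vals[1]
            elif op == "OR":
                wires[out] = vals[0] | vals[1]
            elif op == "NOT":
                wires[out] = 1 - vals[0]
            else:
                raise ValueError(f"unknown gate {op}")
        return wires

    # XOR(x0, x1) built from AND/OR/NOT gates:
    xor = [("u", "OR", "x0", "x1"), ("v", "AND", "x0", "x1"),
           ("w", "NOT", "v"), ("y", "AND", "u", "w")]
    print(eval_circuit(xor, {"x0": 1, "x1": 0})["y"])   # prints 1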

In any intro to theoretical CS course there are always beautiful topics that are left on the “cutting room floor”. To partially compensate for that, we had an entirely optional “advanced section” where guest speakers talked about topics such as error-correcting codes, circuit lower bounds, communication complexity, interactive proofs, and more. The TA in charge of this section – an amazing sophomore named Noah Singer – wrote very detailed lecture notes for it.

This semester, students in CS 121 could also do an optional project. Many chose to do a video about topics related to the course; here are some examples:

There is still much work to do on both the text and the course. Though the text has improved a lot (we do have 267 closed issues after all), some students still justifiably complained about typos, which can throw off people who are just getting introduced to the topic. I also want to add significantly more solved exercises and examples, since students find them extremely useful. I need to significantly beef up the NP-completeness chapter with more examples of reductions, though I do have Python implementations of several reductions and of the Cook-Levin theorem.

This type of course is often known as a “great ideas in computer science” course, and so in the book I also added a “Big Idea” environment to highlight these ideas. Of course some of them are bigger than others, but I think the list below reflects well the contents of the course:

  • If we can represent objects of type T as strings, then we can represent tuples of objects of type T as strings as well.
  • A function is not the same as a program. A program computes a function.
  • Two models are equivalent in power if they can be used to compute the same set of functions.
  • Every finite function can be computed by a large enough Boolean circuit.
  • A program is a piece of text, and so it can be fed as input to other programs (see the sketch after this list).
  • Some functions f:\{0,1\}^n \rightarrow \{0,1\} cannot be computed by a Boolean circuit using fewer than an exponential (in n) number of gates.
  • We can precisely define what it means for a function to be computable by any possible algorithm.
  • Using equivalence results such as those between Turing and RAM machines, we can “have our cake and eat it too”: We can use a simpler model such as Turing machines when we want to prove something can’t be done, and use a feature-rich model such as RAM machines when we want to prove something can be done.
  • There is a “universal” algorithm that can evaluate arbitrary algorithms on arbitrary inputs.
  • There are some functions that cannot be computed by any algorithm.
  • If a function F is uncomputable we can show that another function H is uncomputable by giving a way to reduce the task of computing F to computing H.
  • We can use restricted computational models to bypass limitations such as the uncomputability of the Halting problem and Rice’s Theorem. Such models can compute only a restricted subclass of functions, but allow us to answer at least some semantic questions about programs.
  • A proof is just a string of text whose meaning is given by a verification algorithm.
  • The running time of an algorithm is not a number, it is a function of the length of the input.
  • For a function F:\{0,1\}^* \rightarrow \{0,1\} and T:\mathbb{N} \rightarrow \mathbb{N}, we can formally define what it means for F to be computable in time at most T(n), where n is the size of the input.
  • All “reasonable” computational models are equivalent if we only care about the distinction between polynomial and exponential. (The book immediately notes quantum computers as a possible exception to this.)
  • If we have more time, we can compute more functions.
  • By “unrolling the loop” we can transform an algorithm that takes T(n) steps to compute F into a circuit that uses poly(T(n)) gates to compute the restriction of F to \{0,1\}^n.
  • A reduction F \leq_p G shows that F is “no harder than G” or equivalently that G is “no easier than F“.
  • If a single \mathbf{NP}-complete problem has a polynomial-time algorithm, then there is such an algorithm for every decision problem that corresponds to the existence of an efficiently-verifiable solution.
  • If \mathbf{P}=\mathbf{NP}, we can efficiently solve a fantastic number of decision, search, optimization, counting, and sampling problems from all areas of human endeavor.
  • A randomized algorithm outputs the correct value with good probability on every possible input.
  • We can amplify the success of randomized algorithms to a value that is arbitrarily close to 1.
  • There is no secrecy without randomness.
  • Computational hardness is necessary and sufficient for almost all cryptographic applications.
  • Just as we did with classical computation, we can define mathematical models for quantum computation, and represent quantum algorithms as binary strings.
  • Quantum computers are not a panacea and are unlikely to solve \mathbf{NP} complete problems, but they can provide exponential speedups to certain structured problems.

These are all ideas that I believe are important for Computer Science undergraduates to be exposed to, but covering all of them does make for a very challenging course, which gets literally mixed reviews from the students, with some loving it and some hating it. (I post all reviews on the course home page.) Running a 200-student class is definitely something that I’m still learning how to do.

MIP*=RE, disproving Connes embedding conjecture.

January 14, 2020

In an exciting manuscript just posted on the arXiv, Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen prove that there is a 2-prover quantum protocol (with shared entanglement) for the halting problem. As a consequence they resolve negatively a host of open problems in quantum information theory and operator algebras, including refuting the longstanding Connes embedding conjecture. See also Scott’s post and this blog post of Thomas Vidick discussing his personal history with these questions, which started with his Masters project under Julia Kempe’s supervision 14 years ago.

I am not an expert in this area, and I still have to look at the paper beyond the first few pages, but I find the result astounding. In particular, the common intuition is that since all physical quantities are “nice” functions (continuous, differentiable, etc.), we could never distinguish between the case that the universe is infinite and the case that it is discretized on a fine enough grid. The new work (as far as I understand) provides a finite experiment that can potentially succeed with probability 1 if the two provers use an infinite amount of shared entangled state, but would succeed with probability at most 1/2 if they use only a finite amount. A priori you would expect that if there is a strategy that succeeds with probability 1 with infinite entanglement, then you could succeed with probability at least 1-\epsilon with a finite entangled state whose dimension depends only on \epsilon.

The result was preceded by Ito and Vidick’s 2012 result that \mathbf{NEXP} \subseteq \mathbf{MIP^*} and Natarajan and Wright’s result last year that \mathbf{NEEXP} (non-deterministic doubly exponential time) is contained in \mathbf{MIP^*}. This brings to mind Edmonds’ classic quote:

“For practical purposes the difference between algebraic and exponential order is often more crucial than the difference between finite and non-finite”

Sometimes, the difference between double-exponential and infinite turns out to be non-existent…

A bet for the new decade

December 30, 2019

I am at the Tel Aviv Theory Fest this week – a fantastic collection of talks and workshops organized by Yuval Filmus, Gil Kalai, Ronen Eldan, and Muli Safra.

It was a good chance to catch up with many friends and colleagues. In particular I met Elchanan Mossel and Subhash Khot, who asked me to serve as a “witness” for their bet on the Unique Games conjecture. I am recording it here so we can remember it a decade from now.

Specifically, Elchanan bets that the Unique Games conjecture will be proven in the next decade – sometime between January 1, 2020 and December 31, 2029 there will be a paper uploaded to the arxiv with a correct proof of the conjecture. Subhash bets that this won’t happen. They were not sure what to bet on, but eventually agreed to take my offer that the loser will have to collaborate on a problem chosen by the winner, so I think science will win in either case. (For what it’s worth, I think there is a good chance that Subhash will lose the bet because he himself will prove the UGC in this decade, though it’s always possible Subhash can both win the bet and prove the UGC if he manages to do it by tomorrow 🙂 )

The conference itself is, as I mentioned, wonderful with an amazing collection of speakers. Let me mention just a couple of talks from this morning. Shafi Goldwasser talked about “Law and Algorithms”. There is a recent area of research studying how to regulate algorithms, but Shafi’s talk focused mostly on the other direction: how algorithms and cryptography can help achieve legal objectives such as the “right to be forgotten” or the ability to monitor secret proceedings such as wiretap requests.

Christos Papadimitriou talked about “Language, Brain, and Computation”. Christos is obviously excited about understanding the language mechanisms in the brain. He said that studying the brain gives him the same feeling you get when you sit in a coffee shop in Cambridge and hear intellectual discussions all around you: you don’t understand why everyone is not dropping everything they are doing and coming here. (Well, his actual words were “sunsets over the Berkeley hills” but I think the Cambridge coffee shops are a better metaphor 🙂 )