
Free trade and CS

December 1, 2016

Economists generally agree that free trade agreements between countries such as the U.S. and Mexico or China that have complementary strengths result in a net benefit to both sides. But this doesn’t mean that every individual citizen benefits. There are definitely winners and losers, and as we have seen in this election, the losers are anything but “happy campers”.

NAFTA’s effect on U.S. employment has probably been somewhere between a modest gain and a loss of several hundred thousand U.S. jobs. The effect of trade with China has probably been greater, resulting in a loss of perhaps a million or more jobs. But both of these effects are likely to be much smaller than the result of the U.S.’s completely unregulated trade with a different country, one that has no labor protections and whose workers work very long hours for very low wages.

I am talking about the “Nation of AI”. According to the Bureau of Labor Statistics, in the U.S. there are 3.85 million drivers (of trucks, buses, taxis, etc.), 3.5 million cashiers, 2.6 million customer service representatives, and many other people working in jobs that could be automated in the near future. It is sometimes said that “routine” jobs are the ones most at risk, but perhaps a better term for these is quantifiable jobs. If your job consists of performing well-defined tasks that have a clear criterion of success (like “getting from point A to point B”), then it is at risk of first being “Uberized” (or “M-Turk’ed”) and then automated. After all, optimizing well-defined objectives is what computers do best.

Of course, like other trade deals and technological advances in the past, it could well be that the eventual net effect of artificial intelligence on human employment is zero or even positive. But it will undoubtedly involve a shifting of jobs, and, especially if it happens on a short time scale, many people whose jobs are eliminated would be unable to acquire the skills needed for the jobs that are created.

Understanding how to deal with this (arguably more realistic) type of “AI risk” is a grand challenge at the interface of Economics and Computer Science, as well as many other areas. As with questions of incentives, privacy, and fairness, I believe theoretical computer science can and should play some role in addressing this challenge.

 

Some announcements

November 20, 2016

As also posted by Michael Mitzenmacher, we have several postdoc positions at Harvard; please apply by December 1st.

In particular, in 2017-2018 Harvard’s Center of Mathematical Sciences and Applications will be hosting a special year on combinatorics and complexity, organized by Noga Alon, me, Jacob Fox, Madhu Sudan, Salil Vadhan, and Leslie Valiant. I am quite excited about the workshops and events we have planned, so it should be a great time to be in the area.

 

The two sister sum-of-squares seminars at Cambridge and Princeton have been maintaining a fairly extensive set of online lecture notes (with links to videos of Cambridge lectures added as we go along). While these notes are still a work in progress, I am already quite happy with how they turned out (but would be grateful for any feedback to help make them more accessible).

As I mentioned before, if you want to see the live version, David Steurer and I are going to teach a Sum of Squares winter course at UC San Diego on January 4-7, 2017. Should be fun!!

Finally, please send in your suggestions for papers to invite for Theory Fest presentations by December 12, 2016. I’ve been having some issues with the dedicated email I set up for this, so if you sent in a suggestion and didn’t get a response, please send me a copy at my personal email as well.

Post-deadline diversion: Election predictions

November 3, 2016

Now that the important event of the STOC deadline has passed, we can talk about trivial matters such as the future of the free world (but of course, like a broken record, I will come back to talking about sum of squares by the end of the post).

A priori, predicting the result of the election seems like an unglamorous and straightforward exercise: you ask n people for their opinions x_1,\ldots,x_n on whether they prefer candidate 0 or candidate 1, and you predict that the result will be the majority opinion, with probability that is about 1-\exp(-|\sum x_i - n/2|^2/n). This means that if the two candidates are at least 2 percent apart, then you should get extremely high confidence if you ask some constant factor times 2,500 people.
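To make the back-of-the-envelope calculation concrete, here is a minimal Python sketch (my own illustration, not any pollster’s methodology) that evaluates the 1-\exp(-|\sum x_i - n/2|^2/n) estimate for a given lead and sample size; the function name and the sample sizes below are of course made up.

```python
import math

def poll_confidence(n, margin):
    """Heuristic confidence that a poll of n respondents identifies the leader,
    using the Chernoff-style estimate 1 - exp(-|sum x_i - n/2|^2 / n).
    `margin` is the leader's advantage (e.g. 0.02 for a 2-point lead), so the
    expected deviation of sum x_i from n/2 is (margin / 2) * n."""
    deviation = (margin / 2) * n
    return 1 - math.exp(-deviation ** 2 / n)

# With a 2-point lead, a constant factor times 2,500 respondents already
# pushes the confidence close to 1.
for n in [2_500, 10_000, 40_000]:
    print(n, round(poll_confidence(n, 0.02), 3))
```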

Yet somehow, different analysts looking at the polls come up with very different numbers for the probability that Trump will win. As of today, 538 says it is 34.2%, the NY Times’ Upshot says it is 14%, David Rothschild’s PredictWise says it is 16%, and Sam Wang’s PEC says it is 2%.

There are several reasons for this discrepancy, including the fact that the U.S. election is not decided by the popular vote (though the two almost always agree), that we need to estimate the fraction among actual voters as opposed to the general population, that polls could have systematic errors, and of course that there is genuine uncertainty in the sense that some people might change their minds.

But at least one of the reasons seems to stem from a problem that TCS folks are familiar with, one that arises in the context of rounding algorithms for convex optimization: understanding higher-level correlations. For example, essentially all these predictors think that there are a few states, such as Florida, New Hampshire, Nevada, and North Carolina, that have a reasonable chance of going either way, but that Trump cannot afford to lose even one of them. Clearly these are neither perfectly independent nor perfectly correlated events, but understanding the correlations between them seems hard, and appears to account for much of the huge discrepancy between the topline predictions.

Even if you do understand the correlations, using them to come up with predictions can be a computationally hard task. The typical way these forecasters do it is to come up with an efficiently samplable distribution that matches the given marginals and correlations, and then run many simulations from this distribution to predict the outcome.
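To illustrate that simulation step, here is a toy sketch (my own, not any forecaster’s actual model): given any efficiently samplable joint distribution over the battleground states, the topline probability is just the fraction of simulations in which the candidate reaches 270 electoral votes. The baseline electoral-vote count and the placeholder independent sampler below are made up for illustration.

```python
import numpy as np

# Electoral votes of the four battleground states mentioned above, plus a
# made-up baseline of votes already locked up, chosen so that Trump needs
# to carry all four (matching the scenario the forecasters describe).
STATES = {"FL": 29, "NC": 15, "NV": 6, "NH": 4}
BASELINE = 216
NEEDED = 270

def estimate_topline(sampler, n_sims=100_000, seed=0):
    """Estimate Pr[Trump wins]. `sampler(rng)` returns a dict mapping each
    state to True if Trump carries it in that simulation."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_sims):
        outcome = sampler(rng)
        electoral_votes = BASELINE + sum(v for s, v in STATES.items() if outcome[s])
        wins += electoral_votes >= NEEDED
    return wins / n_sims

def independent_sampler(rng, p=0.45):
    """Placeholder that (unrealistically) treats the states as independent."""
    return {s: rng.random() < p for s in STATES}

print(estimate_topline(independent_sampler))
```

The whole difficulty, of course, is in replacing independent_sampler with a sampler that actually matches the polls’ marginals and correlations, which is exactly the problem discussed next.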

But coming up with efficiently samplable distributions that match even simple pairwise or three-wise correlations is a computationally hard task. (For constrained pairwise moments this is related to MAX-CUT, while for unconstrained higher moments it is related to SAT; it is possible to do this for unconstrained pairwise moments using the quadratic sampling lemma, also known as the Gaussian copula, which is related to the hyperplane rounding technique of Goemans and Williamson.) For an empirical demonstration, see this blog post of David Rothschild on how problematic it is to find a distribution that matches both the statewise and topline marginals of prediction markets (despite the fact that such a distribution should exist under the efficient market hypothesis). The fact that the moment-matching problem is hard in general means that people use different heuristics to achieve it, and I don’t know if they have a good understanding of how the choice of heuristic affects the quality of the prediction.
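For unconstrained pairwise moments, the quadratic sampling lemma gives such a sampler: draw correlated Gaussians and threshold them, exactly as in Goemans-Williamson hyperplane rounding. Below is a minimal sketch (with made-up marginals and Gaussian correlations, using numpy and scipy); the resulting binary correlations are a fixed monotone function of the Gaussian ones rather than equal to them, and the construction only works if the chosen correlation matrix is positive semidefinite, which is precisely the kind of consistency condition that can fail.

```python
import numpy as np
from scipy.stats import norm

# Made-up marginals Pr[Trump wins state] and a (PSD) correlation matrix
# for the latent Gaussians; these numbers are purely illustrative.
p = np.array([0.45, 0.45, 0.40, 0.40])            # FL, NC, NV, NH
corr = np.array([[1.0, 0.6, 0.5, 0.5],
                 [0.6, 1.0, 0.5, 0.5],
                 [0.5, 0.5, 1.0, 0.6],
                 [0.5, 0.5, 0.6, 1.0]])

def copula_sampler(rng, n_samples=100_000):
    """Quadratic sampling lemma / Gaussian copula: threshold correlated
    Gaussians so that coordinate i is True with probability p[i]."""
    g = rng.multivariate_normal(np.zeros(len(p)), corr, size=n_samples)
    return g < norm.ppf(p)        # boolean array, one row per simulation

rng = np.random.default_rng(0)
x = copula_sampler(rng)
print("empirical marginals:", np.round(x.mean(axis=0), 3))
print("empirical Pr[FL and NC]:", round((x[:, 0] & x[:, 1]).mean(), 3))
```

A sampler like this could be plugged into the simulation sketch above; the heuristic choices are all in how one picks the Gaussian correlation matrix in the first place.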

In our sum of squares lecture notes we discuss (with a JavaScript-powered figure!) this issue with respect to the clique problem. Given a graph G with a hidden k-clique S, if we are computationally bounded and want to make predictions on events such as A \subseteq S, we cannot find an efficiently samplable distribution over sets S' that would not make nonsensical predictions such as \Pr[ |S'| \neq k ] > 0 or \Pr[ \{ i,j \} \subseteq S' ] > 0 for some non-neighbors i,j. The sos algorithm (and other convex optimization frameworks) can be thought of as a way to come up with predictions without a matching efficiently samplable distribution, by generalizing to the notion of a pseudo-distribution.
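The lecture notes are the place for the details, but here is a minimal sketch (using cvxpy, on a made-up 5-vertex graph, so purely illustrative) of the degree-2 version of such a pseudo-distribution for the clique problem: we only ask for a positive semidefinite pseudo-moment matrix consistent with “S' is a clique of size k”, without requiring an actual samplable distribution behind it.

```python
import numpy as np
import cvxpy as cp

# A made-up 5-vertex graph containing the triangle {0, 1, 2}; we look for a
# degree-2 pseudo-distribution over cliques of size k = 3.
n, k = 5, 3
A = np.zeros((n, n), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1

# M is the (n+1) x (n+1) pseudo-moment matrix: index 0 stands for the
# constant 1 and index i+1 for the indicator x_i, so M[0, i+1] plays the
# role of the pseudo-probability that vertex i is in the clique.
M = cp.Variable((n + 1, n + 1), symmetric=True)
constraints = [M >> 0, M[0, 0] == 1]
constraints += [M[i + 1, i + 1] == M[0, i + 1] for i in range(n)]   # x_i^2 = x_i
constraints += [M[i + 1, j + 1] == 0                                # non-edges excluded
                for i in range(n) for j in range(i + 1, n) if A[i, j] == 0]
constraints += [cp.sum(M[0, 1:]) == k]                              # |S'| = k

cp.Problem(cp.Minimize(0), constraints).solve()
print("pseudo-Pr[i in S']:", np.round(M.value[0, 1:], 3))
```

Any feasible M answers prediction queries such as \Pr[ \{i,j\} \subseteq S' ] in a way that respects the constraints we imposed, even though rounding it to a genuine distribution over cliques may be computationally out of reach.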

Theory Fest short presentations – call for suggestions

October 22, 2016

As mentioned before on this blog,   STOC 2017 will be part of an expanded “Theory Fest” (http://acm-stoc.org/stoc2017/) which is being planned by a small committee (Sanjeev Arora, Paul Beame, Avrim Blum, and Ryan Williams, as well as SIGACT chair Michael Mitzenmacher and STOC’17 PC chair Valerie King).  

One component of Theory Fest will be a series of presentations highlighting some of the best theoretical work of the past two years from other conferences or journals. (A non-exhaustive list of sample venues: ICALP, SODA, CRYPTO, LICS, PODS, PODC, QIP, SPAA, KDD, CCC, SIGMETRICS, Transactions on IT, WWW, ICML/NIPS, Science/Nature, etc.) Invited papers from these venues will be presented in short (20-30 minute) plenary presentations at Theory Fest.

We (the sub-committee in charge of this component) seek suggestions of theoretical papers that have made breakthrough advances, opened up new questions or areas, made unexpected connections, or had significant impact on practice or other sciences. Anyone is welcome to contact us, but we especially invite members of PCs or editorial boards in various venues to send us suggestions.

If you can think of any recent result that satisfies these criteria, or have any related questions, please email stoc17presentations@boazbarak.org (please CC my personal email as well). Suggestions for presentations should include the following information:

  1. Name of the paper and authors.
  2. Publication venue (conference / journal), with publication date no earlier than January 1, 2015.
  3. Short (1-3 paragraph) explanation of the paper and its importance.
  4. (Optional) Names of 1-3 knowledgeable experts on the area of this paper.

To ensure maximum consideration, please send us your suggestions by December 12, 2016. Self-nominations are discouraged.

Thank you,

Theory Fest 2017 short plenaries committee:

Dorit Aharonov (Hebrew University)
Boaz Barak (Harvard University, committee chair)
Paul Beame (University of Washington)
Kamalika Chaudhuri (UC San Diego)
Ronald Fagin (IBM Research – Almaden)
Piotr Indyk (MIT)
Friedhelm Meyer auf der Heide (University of Paderborn)
Eva Tardos (Cornell)
Suresh Venkatasubramanian (University of Utah)

TOCA-Revolution Begins: Nov 4

October 14, 2016

The success of TOC is due not only to the intrinsic intellectual merit of our field but also to its many fruitful connections: connections between subfields of TOC (see Avi Wigderson’s depth through breadth), connections with Mathematics, connections with other fields in Science and the Humanities via the computational lens, and connections with industry. All of these powerful connections contribute to the impact and vitality of TOC.

In this spirit, the theory group at Stanford and our theory colleagues from all around Silicon Valley are establishing a forum for continuous collaboration (through meetings and electronic means) between theoreticians in industry and academia in the SF Bay Area. We name it TOCA-SV. TOCA stands for Theory of Computing Associated and also, quite appropriately, means “touches” in Spanish. Like others calling for a revolution, I hope that TOCA groups will surface all over.

Our first meeting will take place at Stanford on November 4th. Please join us if you are around and please don’t forget to register.

Students who want to present at this event (in a lightning-talk session) are asked to email us at tocasvalley@gmail.com. Please specify your name, affiliation, adviser (if you have one), year, and email address. We will try to accommodate as many as possible (but future opportunities are planned).

Finally, to stay in touch for future information, please join our Google group.

Occupy TOC, power to the people, onwards and upwards or whatever seems appropriate here🙂

Live from Princeton, NJ: Avi60

October 4, 2016

I hope to see many readers of this blog in person tomorrow for the workshop in honor of Avi Wigderson’s 60th birthday (Wed-Sat), which will feature a collection of great speakers talking on a variety of areas in theoretical computer science and mathematics.

But, if you can’t make it in person, the talks will be streamed live on the workshop’s website. These talks can be a great resource for anyone, but I especially encourage beginning graduate students to watch them. Watching the kind of high-level talks featured in this workshop (as well as in other places, including the Theoretically Speaking and open lectures series of the Simons Institute for the Theory of Computing) can be extremely useful for students trying to figure out what area of research to focus on.

 

An optimal weather variant of the sum of squares algorithm

September 18, 2016

As I mentioned before, this term Pablo Parrilo, David Steurer, Pravesh Kothari, and I are teaching two sister seminars at Harvard/MIT and Princeton on the Sum of Squares algorithm. See the website http://sumofsquares.org for details, lecture notes, and links to lecture videos, as well as how to sign up to follow the course on Piazza.

But, if you want the short, in-person, better-weather version, you might want to sign up for the winter course that David Steurer and I will teach on this topic at UC San Diego, January 4-7, 2017.

We hope to touch on how the sos algorithm interacts with questions in computational complexity, approximation algorithms, machine learning, quantum information theory, extremal combinatorics, and more.