
On “external” definitions for computation

March 14, 2017

I recently stumbled upon a fascinating talk by the physicist Nima Arkani-Hamed on The Morality of Fundamental Physics. (“Moral” here is in the sense of “morally correct”, as opposed to understanding the impact of science on society. Perhaps “beauty” would have been a better term.)

In this talk, Arkani-Hamed describes the quest for scientific theories in much the same terms as solving an optimization problem: the solution is easy to verify (or “inevitable”, in his words) once you see it, but the difficulty is that you might get stuck in a local optimum:

The classical picture of the world is the top of a local mountain in the space of ideas. And you go up to the top and it looks amazing up there and absolutely incredible. And you learn that there is a taller mountain out there. Find it, Mount Quantum…. they’re not smoothly connected … you’ve got to make a jump to go from classical to quantum … This also tells you why we have such major challenges in trying to extend our understanding of physics. We don’t have these knobs, and little wheels, and twiddles that we can turn. We have to learn how to make these jumps. And it is a tall order. And that’s why things are difficult.

But what actually caught my attention in this talk is his observation that part of what enabled progress beyond Newtonian mechanics was a different, dual way of looking at classical physics. That is, instead of the Newtonian picture of particles evolving according to clockwork rules, we think that

The particle takes every path it could between A and B, every possible one. And then imagine that it just sort of sniffs them all out; and looks around; and says, I’m going to take the one that makes the following quantity as small as possible.
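
(The quantity he is alluding to is the classical action; the following standard formulation is my own gloss, not something from the talk. A path q(t) from A to B is assigned the action S[q] = \int_{t_A}^{t_B} L(q(t), \dot{q}(t)) \, dt, where the Lagrangian L is the kinetic minus the potential energy, and requiring S to be stationary, \delta S = 0, recovers the Newtonian equations of motion via the Euler-Lagrange equation \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q}.)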


I know almost no physics and only a limited amount of math, but this seems to me to be an instance of moving to an external, as opposed to internal, definition in the sense described by Tao. (Please correct me if I’m wrong!) As Arkani-Hamed describes, a hugely important paper of Emmy Noether showed how this viewpoint immediately implies the conservation laws (for example, symmetry under time translation yields conservation of energy), and that this second viewpoint, in his words, is

simple, and deep, and will always be the right way of thinking about these things.


Since determinism is not “hardwired” into this second viewpoint, it is much easier to generalize it to incorporate quantum mechanics.

This talk got me thinking about whether we can find an “external” definition for computation. Our usual notion of computation via Turing machines or circuits involves a “clockwork”-like view of a state evolving through the composition of basic steps. Perhaps one of the reasons we can’t make progress on lower bounds is that we don’t have a more “global” or “external” definition that would somehow capture the property that a function F is “easy” without giving an explicit way to compute it.

Alas, there is a good reason that we lack such a definition. The natural proofs barrier tells us that any property that is efficiently computable from the truth table and contains all the “easy” functions (which are an exponentially small fraction of all functions) must also contain many, many other functions (in fact, more than 99.99% of all functions). It is sometimes suggested that the way to bypass this barrier is to avoid the “largeness” condition, since, for example, even a property that contains all functions except a single function G would be useful for proving a lower bound for G if we could show that it contains all the easy functions. However, I think that to obtain a true understanding of computation, and not just a lower bound for a single function, we will need to find completely new types of nonconstructive arguments.
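
(To get a sense of the counting behind the “exponentially small fraction”: the standard back-of-the-envelope argument, stated here under the usual convention that “easy” means computable by circuits of size s = \mathrm{poly}(n), is that there are 2^{2^n} Boolean functions on n bits but at most 2^{O(s \log s)} circuits of size s, so the easy functions make up at most a 2^{\mathrm{poly}(n)} / 2^{2^n} fraction of all functions, which is exponentially small in the truth-table size 2^n.)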

The immigration ban is still antithetical to scientific progress

March 7, 2017

By Boaz Barak and Omer Reingold

President Trump has just signed a new executive order revising the prior ban on visitors from seven (now six) Muslim-majority countries. It is fundamentally the same, imposing a blanket 90-day ban on entry of people from these six countries, with the conditions for lifting the ban depending on the cooperation of their governments.

One good analysis of the original order called it “malevolence tempered by incompetence”, and indeed the fact that it was incompetently drafted is the main reason the original ban did not survive court challenges. The new version has obviously been crafted with more input from competent people, but it does not change anything about the points we wrote before.

Every country has a duty to protect its citizens, and we have never advocated “open borders”. Indeed, as many people who have visited or immigrated to the U.S. know, the visa process is already very arduous and involves extensive vetting. A blanket policy does not make the U.S. safer; in fact, as opposed to individual vetting, it actually removes an element of unpredictability for any group planning to carry out a terror attack in the U.S. Moreover, this policy (whose first, hastily drafted version was written without much input from the intelligence community) is not the result of a careful balancing of risks and benefits, but rather an attempt to fulfill an obviously unconstitutional campaign promise of a “Muslim ban” while tailoring it to try to get it through the courts.

This ban hurts the U.S. and science. Much of the progress in science during the 20th century can be attributed to the U.S. becoming a central hub for scientists, welcoming them even from countries it was in conflict with (including 1930s Germany, the cold-war Soviet Union, and, more recently, Iran). This has benefited the whole world, but in particular the U.S., which as a result became the world leader in science and technology. Science is not a zero-sum game, and collaborations and interactions are better for all of us. We continue to strenuously object to this ban, and call on all scientists to do the same.


Immigration ban is antithetical to scientific progress

January 26, 2017

By Boaz Barak and Omer Reingold

Update (1/28): If you are an academic who opposes this action, please consider signing the following open letter.

Today, leaked drafts of planned executive actions showed that President Trump apparently intends to issue an order suspending (and possibly permanently banning) entry to the U.S. of citizens of seven countries: Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen. As Scott Aaronson points out, one consequence of this is that students from these countries would not be able to study or do research in U.S. universities.

The U.S. has mostly known how to separate its treatment of foreign governments from its treatment of their citizens, whether Cubans, Russians, or others. Based on past records, the danger of terrorism by lawful visitors to the U.S. from the seven countries above is slim to none. But over the years, visitors and immigrants from these countries have contributed immensely to U.S. society, its economy, and the scientific world at large.

We personally have collaborated with, and built on the scientific work of, colleagues from these countries. In particular, both of us are originally from Israel, but have collaborated with scientists from Iran who knew that the issues between the two governments should not stop scientific cooperation.

This new proposed policy is not just misguided; it also directly contradicts the interests of the U.S. and the advancement of science. We call on all our fellow scientists to express their strong disagreement with it, and their solidarity with and gratitude for the contributions of visiting and immigrant scientists, without which the U.S., and the state of human knowledge, would not have been the same.

On exp(exp(sqrt(log n))) algorithms.

January 5, 2017

Update: I made a bit of a mess in my original description of the technical details, which was based on my faulty memory from Laci’s talk a year ago at Harvard. See Laci’s and Harald’s posts for more technical information.


Laci Babai has posted an update on his graph isomorphism algorithm. While preparing a Bourbaki talk on the work, Harald Helfgott found an error in the original running-time analysis of one of the subroutines. Laci fixed the error, but with a running time that is quantitatively weaker than originally stated, namely \exp(\exp(\sqrt{\log n})) time (hiding poly log log factors). Harald has verified that the modified version is correct.

This is a large quantitative difference, but I think it makes very little actual difference to the (great) significance of this paper. It is tempting to judge theoretical computer science papers by the “headline result”, and hence be more impressed with an algorithm that improves 2^n time to n^3 than with one that improves, say, n^3 to n^{2.9}. However, this is almost always the wrong metric to use.

Improving quantitative parameters such as running time or approximation factors is very useful as an intermediate challenge that forces us to create new ideas, but ultimately the important contribution of a theoretical work is the ideas it introduces, not the actual numbers. In the context of algorithmic results, for me the best way to understand what a bound such as \exp(\exp(\sqrt{\log n})) says about the inherent complexity of a problem is whether you meet it “on the way up” or “on the way down”.

Often, if you have a hardness result showing (for example, based on the exponential time hypothesis) that some problem (e.g., shortest codeword) cannot be solved faster than \exp(\exp(\sqrt{\log n})), then you could expect that eventually this hardness would improve further, and that the true complexity of the problem is \exp(n^c) for some c>0 (maybe even c=1). That is, when you meet such a bound “on your way up”, it sometimes makes sense to treat \exp(\sqrt{\log n}) as a function that is “almost polynomial”.

On the other hand, if you meet this bound “on your way down” in an algorithmic result, as in this case, or in cases where, for example, you improve an n^2 algorithm to n\exp(\sqrt{\log n}), then one expects further improvements, and so in that context it sometimes makes sense to treat \exp(\sqrt{\log n}) as “almost polylogarithmic”.
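
(A back-of-the-envelope identity, which is my own aside and not from the posts above, helps locate \exp(\sqrt{\log n}): writing it as n^{1/\sqrt{\log n}} shows it is n^{o(1)}, i.e., smaller than every fixed polynomial, while writing it as (\log n)^{\sqrt{\log n}/\log\log n} shows it grows faster than every fixed power of \log n. So it really does sit strictly between “polylogarithmic” and “polynomial”, which is why both readings make sense.)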

Of course it could be that for some problems this kind of bound is actually their inherent complexity and not simply the temporary state of our knowledge. Understanding whether and how “unnatural” complexity bounds can arise for natural problems is a very interesting problem in its own right.

Motwani Postdoctoral Fellowship at Stanford

December 25, 2016

I’m happy to invite (on behalf of the Stanford theory group) applications for the inaugural Motwani Postdoctoral Fellowship in Theoretical Computer Science, made possible by a gift from the Motwani-Jadeja foundation. Please see the application instructions. Please apply by Jan 6, 2017 for full consideration.

Free trade and CS

December 1, 2016

Economists generally agree that free trade agreements between countries with complementary strengths, such as the U.S. and Mexico or China, result in a net benefit to both sides. But this doesn’t mean that every individual citizen benefits. There are definitely winners and losers, and as we have seen in this election, the losers are anything but “happy campers”.

NAFTA’s effect on U.S. employment has probably been somewhere between a modest gain and a loss of several hundred thousand U.S. jobs. The effect of trade with China has probably been greater, resulting in a loss of perhaps a million or more jobs. But both of these effects are likely to be much smaller than the result of the U.S.’s completely unregulated trade with a different country, one that has no labor protections and whose workers work very long hours for very low wages.

I am talking about the “Nation of AI”. According to the Bureau of Labor Statistics, in the U.S. there are 3.85 million drivers (of trucks, buses, taxis, etc.), 3.5 million cashiers, 2.6 million customer service representatives, and many other people working in jobs that could be automated in the near future. It is sometimes said that “routine” jobs are the ones most at risk, but perhaps a better term is quantifiable jobs. If your job consists of performing well-defined tasks with a clear criterion of success (like “getting from point A to point B”), then it is at risk of first being “Uberized” (or “M-Turk’ed”) and then automated. After all, optimizing well-defined objectives is what computers do best.

Of course, like other trade deals and technological advances in the past, it could well be that the eventual net effect of artificial intelligence on human employment is zero or even positive. But it will undoubtedly involve a shifting of jobs, and, especially if it happens on a short time scale, many people whose jobs are eliminated will be unable to acquire the skills needed for the jobs that are created.

Understanding how to deal with this (arguably more realistic) type of “AI risk” is a grand challenge at the interface of economics and computer science, as well as many other areas. As with questions of incentives, privacy, and fairness, I believe theoretical computer science can and should play some role in addressing this challenge.


Some announcements

November 20, 2016

As also posted by Michael Mitzenmacher, we have several postdoc positions at Harvard; please apply by December 1st.

In particular, in 2017-2018 Harvard’s Center for Mathematical Sciences and Applications will be hosting a special year on combinatorics and complexity, organized by Noga Alon, me, Jacob Fox, Madhu Sudan, Salil Vadhan, and Leslie Valiant. I am quite excited about the workshops and events we have planned, so it should be a great time to be in the area.


The two sister sum-of-squares seminars at Cambridge and Princeton have been maintaining a fairly extensive set of online lecture notes (with links to videos of the Cambridge lectures added as we go along). While these notes are still a work in progress, I am already quite happy with how they turned out (but would be grateful for any feedback to help make them more accessible).

As I mentioned before, if you want to see the live version, David Steurer and I are going to teach a Sum of Squares Winter Course at UC San Diego on January 4-7, 2017. Should be fun!

Finally, please send in your suggestions for papers to invite for Theory Fest presentations by December 12, 2016. I’ve been having some issues with the dedicated email I set up for this, so if you sent in a suggestion and didn’t get a response, please also send a copy to my personal email.