**Update:** I made a bit of a mess in my original description of the technical details, which was based on my faulty memory from Laci’s talk a year ago at Harvard. See Laci’s and Harald’s posts for more technical information.

Laci Babai has posted an update on his graph isomorphism algorithm. While preparing a Bourbaki talk on the work, Harald Helfgott found an error in the original analysis of the running time of one of the subroutines. Laci fixed the error, but with a running time that is quantitatively weaker than originally stated, namely $\exp\big(2^{O(\sqrt{\log n})}\big)$ time instead of the quasipolynomial $\exp(\mathrm{polylog}(n))$ (hiding poly log log factors). Harald has verified that the modified version is correct.

This is a large quantitative difference, but I think it makes very little actual difference to the (great) significance of this paper. It is tempting to judge theoretical computer science papers by the “headline result”, and hence be more impressed with an algorithm that improves the running time from $\exp\big(2^{O(\sqrt{\log n})}\big)$ to $\exp(\mathrm{polylog}(n))$ than with, say, one that improves it from $O(n^2)$ to $O(n \log n)$. However, this is almost always the wrong metric to use.
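
To get a feel for how little the quantitative weakening matters at any remotely realistic input size, here is a small numeric comparison. The expressions $n^{\log_2 n}$ and $2^{2^{\sqrt{\log_2 n}}}$ are simple stand-ins I chose for "quasipolynomial" and the weaker bound; the constants and poly log log factors are dropped, so the exact crossover point below is an artifact of that choice, not a property of the actual algorithm.

```python
import math

# Stand-ins (constants and polyloglog factors dropped, for illustration only):
#   quasipolynomial:  n^(log2 n)          -> log2(time) = (log2 n)^2
#   weaker bound:     2^(2^sqrt(log2 n))  -> log2(time) = 2^sqrt(log2 n)
for k in [16, 64, 256, 1024]:        # n = 2^k vertices
    quasi = k * k                    # log2 of the quasipolynomial bound
    weaker = 2 ** math.isqrt(k)      # log2 of the weaker bound
    print(f"n = 2^{k:<5} log2(quasipoly) = {quasi:<8} log2(weaker) = {weaker}")
```

With these stand-ins the two bounds coincide at $n = 2^{256}$ and the weaker one only pulls ahead for even more astronomical $n$ — one way to see why the headline change affects no input anyone will ever run the algorithm on.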

Improving quantitative parameters such as running times or approximation factors is very useful as an intermediate challenge that forces us to create new ideas, but ultimately the important contribution of a theoretical work is the ideas it introduces and not the actual numbers. In the context of algorithmic results, for me the best way to understand what a bound such as $\exp\big(2^{O(\sqrt{\log n})}\big)$ says about the inherent complexity of the problem is whether you meet it **“on the way up”** or **“on the way down”**.

Often, if you have a *hardness result* that (for example, based on the exponential time hypothesis) shows that some problem (e.g., shortest codeword) cannot be solved faster than $\exp\big(2^{O(\sqrt{\log n})}\big)$, then you could expect that eventually this hardness would improve further, and that the true complexity of the problem is $\exp(n^{\epsilon})$ for some $\epsilon > 0$ (maybe even $\epsilon = 1$). That is, when you meet such a bound “on your way up”, it sometimes makes sense to treat $\exp\big(2^{O(\sqrt{\log n})}\big)$ as a function that is “almost polynomial”.

On the other hand, if you meet this bound “on your way down” in an *algorithmic result*, such as in this case, or in cases where, for example, you improve an $\exp(n^{\epsilon})$-time algorithm to an $\exp\big(2^{O(\sqrt{\log n})}\big)$-time one, then one expects further improvements, and so in that context it sometimes makes sense to treat the exponent $2^{O(\sqrt{\log n})}$ as “almost polylogarithmic”.

Of course, it could be that for some problems this kind of bound *is* actually their inherent complexity, and not simply a reflection of the temporary state of our knowledge. Understanding whether and how “unnatural” complexity bounds can arise for natural problems is a very interesting problem in its own right.
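
The “way up”/“way down” intuition rests on the fact that a quantity like $2^{\sqrt{\log n}}$ eventually sits strictly between every polylogarithmic function and every polynomial. A quick numeric check of this (the exponent $c = 3$ and $\epsilon = 0.01$ are arbitrary choices of mine, and I compare $\log_2$ of each quantity at $n = 2^k$ to keep the numbers manageable):

```python
import math

def log2_polylog(k, c=3):      # log2 of (log2 n)^c, at n = 2^k
    return c * math.log2(k)

def log2_mid(k):               # log2 of 2^sqrt(log2 n), at n = 2^k
    return math.sqrt(k)

def log2_npow(k, eps=0.01):    # log2 of n^eps, at n = 2^k
    return eps * k

k = 10**6                      # n = 2^1000000
# polylog < 2^sqrt(log n) < n^eps, once n is large enough
assert log2_polylog(k) < log2_mid(k) < log2_npow(k)
print(log2_polylog(k), log2_mid(k), log2_npow(k))
```

For smaller $k$ the ordering can flip (e.g., $\sqrt{k} < 3\log_2 k$ for $k \le 400$ or so), which is exactly why such a bound, met in the middle, gives so little away about which side it will settle on.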

Nice points. What do you think of the ideas in the latest Lipton/Regan blogpost, which they call “extended complexity neighbourhoods”?

In most cases I think of a quantity such as $2^{O(\sqrt{\log n})}$ as not inherently being in the neighborhood of either $\mathrm{polylog}(n)$ or $n^{\Omega(1)}$.

Rather, it’s the complexity analog of a pool that has both water and ice. It’s not a steady state, and it helps to know the past in order to predict what equilibrium it will end up in.