
Looking for car keys under the streetlight

February 12, 2018

At NIPS 2017, Ali Rahimi and Ben Recht won the Test of Time Award for their paper “Random Features for Large-scale Kernel Machines”. Ali delivered the following acceptance speech (see also the addendum), in which he said that machine learning has become “alchemy”, in the sense that it relies more and more on “tricks” or “hacks” that work well in practice but are very poorly understood. (Alchemists, too, apparently made many significant practical discoveries.) Similarly, when I teach cryptography I often compare the state of “pre-modern” cryptography (before Shannon and Diffie–Hellman) to alchemy.
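For context, the central idea of the Rahimi–Recht paper fits in a few lines: a shift-invariant kernel such as the Gaussian (RBF) kernel can be approximated by an explicit, randomized low-dimensional feature map, so that an inner product of features approximates the kernel value. A minimal sketch (the function name and parameter choices are mine, not from the paper):

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    """Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    by a random feature map z, so that z(x) . z(y) ~ k(x, y)
    (Rahimi & Recht, "Random Features for Large-scale Kernel Machines")."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the kernel's Fourier transform:
    # for the RBF kernel above, that is a Gaussian with std sqrt(2 * gamma).
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    # Random phases make a single cosine per frequency suffice.
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

With enough features, the inner product of two feature vectors concentrates around the true kernel value, which is what lets linear methods on the features stand in for a kernel machine.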

Yann LeCun was not impressed with the speech, saying that sticking to methods for which we have theoretical understanding is “akin to looking for your lost car keys under the street light knowing you lost them someplace else.” There is a sense in which LeCun is very right. For example, already in the seminal paper in which Jack Edmonds defined the notion of polynomial time, he wrote that “it would be unfortunate for any rigid criterion to inhibit the practical development of algorithms which are either not known or known not to conform nicely to the criterion.” But I do want to say something in defense of “looking under the streetlight”. When we want to understand the terrain, rather than achieve some practical goal, it can make a lot of sense to start in the simplest regime (the “most visible” or “well lit”) and then expand our understanding (“shine new lights”). Heck, it may well be that when the superintelligent robots arrive, they too will look for their keys by first making observations under the light and then extrapolating to the unlit area.

  1. tas permalink
    February 12, 2018 3:55 am

    Woah! I missed this debate.

    It sounds like Yann LeCun is attacking a strawman theorist who insists that we shouldn’t use things we don’t fully understand. A good theorist is someone who seeks to understand (and improve) the things we use.

  2. February 12, 2018 4:38 pm

    LeCun, while a brilliant and dogged visionary, has become a bit of a curmudgeon these days and is picking a lot of fights lately. He provides a great experimentalist point of view, but tends to feel threatened by advancing theory and higher demands for mathematical rigor and understanding. Based on his (numerous) tweets he seems to feel as if ML is besieged, when in fact it’s exactly the opposite: it’s undergoing a wild embrace by many “external” communities, and broadening. He got into a big “debate” with Gary Marcus (to put it politely/diplomatically, which LeCun’s fiery tweets often are not) over deep learning’s limitations. Merely pointing out limitations is not attacking a field. LeCun had to suffer through maybe more than one AI winter over the decades and is a bit gun-shy, so to speak.

    LeCun is also upset about Sophia, the human-like robot recently making headlines at the Consumer Electronics Show. The robot and the associated research seem highly innovative and mostly harmless to me. LeCun has a point that they shouldn’t make wild claims, but some of this is obviously due to marketing.

  3. February 12, 2018 4:45 pm

    Btw, another thought: the “drunk looking for keys under the streetlight” is a great metaphor for science and various human endeavors (e.g., capitalism), but notice it could be argued to apply in exactly the opposite way LeCun asserted. One could say that practitioners are too focused on “merely getting stuff to work” with existing technology, and miss the forest for the trees. Actually, the relatively new papers/projects that attack networks with adversarial methods, and show that very small perturbations of their inputs cause them to fail in dramatic ways, suggest exactly that case: the theoretical understanding of how to make these algorithms “really work” needs major attention at this point, and semi-blindly optimizing metrics can ultimately lead to a dead end lacking real understanding.
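The phenomenon this comment describes can be illustrated even on a plain linear classifier: a small step against the gradient of the score flips the prediction. A minimal sketch (the function name and the choice of a signed margin loss are mine, not from any particular paper; the same idea applied to deep networks is the FGSM-style attack the comment alludes to):

```python
import numpy as np

def adversarial_perturb(x, w, y, eps):
    """Move x by eps (per coordinate) in the direction that lowers the
    signed margin y * (w . x) of a linear classifier sign(w . x),
    where y is the true label in {-1, +1}. For the loss L = -y * (w . x),
    the gradient w.r.t. x is -y * w, so we step along its sign."""
    grad_sign = np.sign(-y * w)
    return x + eps * grad_sign
```

Even though each coordinate moves by only eps, in high dimensions the total change in the score scales with the number of coordinates, which is one intuition for why such tiny perturbations are so effective.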

