Scott Aaronson blogged in defense of “armchair epidemiology”. Scott makes several points I agree with, but he also advocates that rather than discounting ideas from “contrarians” who have no special expertise in the matter, each one of us should evaluate the input of such people on its merits.
I disagree. I can judge on their merits the validity of a proposed P vs NP proof or a quantum algorithm for SAT, but I have seen time and again smart and educated non-experts misjudge such proposals. As much as I’d like to think otherwise, I would probably be fooled just as easily by a well-presented proposal in area X that “brushes under the rug” subtleties that experts in X would immediately notice.
This is not to say that non-experts should stay completely out of the matter. Just as science journalists such as Erica Klarreich and Kevin Hartnett of Quanta can do a great job of explaining computer science topics to a lay audience, so can other well-read people serve as “signal boosters” and highlight the work of experts in epidemiology. Journalist Helen Branswell of STAT News has been following the novel coronavirus since January 4th.
The difference is that these journalists don’t pretend to see what the experts are missing but rather highlight and simplify the work that experts are already doing. This is unlike “contrarians” such as Robin Hanson who do their own analysis on a spreadsheet and come up with “home-brewed” policy proposals such as deliberate infection or variolation (with “hero hotels” where people go to be deliberately infected). I am not saying that such proposals are necessarily wrong, but I am saying that I (or anyone else without experience in this field) am not qualified to judge them. Even if they did “make sense” to me (they don’t), I would not feel any more confident judging them than I would reviewing a paper in astronomy. There is a reason why Wikipedia has a “no original research” policy.
Moreover, the attitude of dismissing expertise can be dangerous, whether it comes in the form of “teach the debate” in the context of evolution, or “ClimateGate” in the context of climate change. Contrary to the narrative of a few brave “dissenters” or “contrarians”, in the case of COVID-19, experts as well as the World Health Organization have been literally sounding the alarm (see also the timeline, as well as this NPR story on the US response). Yes, some institutions, and especially the U.S., failed in several aspects (most importantly in the early production of testing). But one of the most troubling aspects is the constant sense of “daylight” and distrust between the current U.S. administration and its own medical experts. Moreover, the opinions of people such as law professor Richard Epstein are listened to even when they are far out of their depth. It is one thing to entertain the opinion of non-expert contrarians when we have all the time in the world to debate, discuss, and debunk. It’s quite another to do so in the context of a fast-moving health emergency. COVID-19 is an emergency with medical, social, economic, and technological aspects, but it would best be addressed if each person contributes according to their skill set and collaborates with people of complementary backgrounds.
30 thoughts on “In defense of expertise”
Thank you. Thank you so much for this.
+1!! (Is this “signal-boosting”? I kid…)
Not all experts are created equal. In fields with strong filtering there are “good experts”.
Pilots who were fake experts ended up crashing and dying a long time ago. Restaurant owners who were fake experts went bankrupt. Investment bankers who were fake experts got bailed out, promoted, and got their bonuses. Experts who said “this is just a flu, nothing to see here” continue to peddle their BS with the same confidence and condescending tone.
What makes an expert an expert? If you are a non-expert, how can you tell an expert from a fake expert?
Some messages from experts on the current topic have been seriously, consistently, grotesquely wrong from the beginning, when many “armchair analysts” already made the right call.
Can you give an example of experts (e.g., academics in the field of infectious diseases) who said “this is just the flu”?
The communications I’ve seen from people like Marc Lipsitch, Neil Ferguson, Tom Frieden, and others look nothing like that.
The first response of the UK government to the emergency (i.e., a laissez-faire approach to COVID-19) was suggested by its Scientific Advisory Group on Emergencies (i.e., “experts”).
In Italy, the Director of the Microbiology, Virology and Bioemergencies Unit of the Sacco Hospital (one of the leading Italian institutions for the treatment of epidemics) claimed several times that COVID-19 is “less dangerous than the flu” ( https://www.ilsole24ore.com/art/coronavirus-sfogo-direttrice-analisi-sacco-e-follia-uccide-piu-l-influenza-ACq3ISLB ).
“Health Experts” were warning us that travel bans would backfire…
I could easily go on…
I would like to correct the quote of the Director of the Microbiology, Virology and Bioemergencies Unit of the Sacco Hospital, who said:
“people have confused a virus slightly more dangerous than flu with a lethal pandemic”.
This happened on the 22nd of February, when the Italian government believed there were only two isolated clusters. It was not known that the virus had been circulating for weeks in many other communities.
In any case, it didn’t age well.
It is important to notice that the infamous comment was criticized by most of the other national experts in the field.
Perhaps another key word here is consensus.
If the overwhelming majority of the experts agree that the virus is dangerous, shouldn’t we non-experts simply classify the outliers as noise?
I did not say that all experts are always right. Clearly experts can disagree with one another. As a non-expert, however, I do believe we should restrict attention to arguments put forward by experts and try to judge one against the other via internal consistency, consensus, and more. It’s much easier for non-experts to miss glaring issues, and unlike armchair commenters, experts also put their reputation on the line.
Re travel bans, many experts agree they can help, but their effect is much smaller than touted, and sometimes on net can be negative, especially if they become the highlight of the policy response. Some of the countries that did the best job containing the epidemic had very limited bans (e.g., South Korea, Hong Kong, Singapore) and some of the countries that did the worst job had early ones (e.g., Italy, US; see https://www.forbes.com/sites/davekeating/2020/03/12/italy-banned-flights-from-china-before-americait-didnt-work/ ). By the time the US banned entry from Europe, for example, it was pretty clear that community transmission inside the US already dominated the number of imported cases by foreign nationals.
In particular, a 14-day quarantine for everyone coming in, regardless of whether they are citizens or not, makes much more sense than a US-style ban which allowed citizens to come freely, but then ensured they all spent many hours together in packed airports due to the new procedures.
@ziotom: I was not speaking about a blunder made by the Italian government, so I do not understand why you want to justify that comment by Dr. Gismondo by saying what you said.
Anyway: Ilaria Capua, head of the One Health Center in Florida said: “In Italy we have more cases simply because we seek them more carefully” (!)
That (in retrospect) terrible statement, which delayed the Italian lockdown, was also made several times by Walter Ricciardi (member of the European Advisory Committee on Health Research of the WHO). Please, just google and check.
Should we also talk about the very same WHO, which initially gave guidelines to test only people with symptoms, and subsequently admitted the mistake with the (in)famous tweet by its Director: “Test, test, test!”? If you want to know why the initial policy was a mistake, please read about what they are doing in Korea and about Dr. Andrea Crisanti (who is writing a paper with Neil Ferguson about that).
@Boaz: I am not trying to put the blame on anyone, and I know that being an expert does not mean being correct all the time. The Great Recession https://en.wikipedia.org/wiki/Great_Recession is too recent an episode to sustain any belief that being an expert means being infallible…
So what is my point? Consensus among experts is fine and dandy in sciences like math, physics, TCS… In many other areas it is just useful to fix a prior, then apply Bayes… 😉
My point in talking about the Italian government was to underline that the information available was limited and that the true state of the epidemic was not known.
This was only to provide (useful?) context on the day in which the comment first appeared online. After all it is obvious that today nobody would make such a comment, at least in Italy.
I am not sure I understand the point you are trying to make concerning testing: I think the topic of the post was about “defending expertise”. If, after gathering more information, the experts at the WHO realise that mistakes have been made in the fight against the virus and change the guidelines, how does this invalidate their opinion? How does this make the opinion of contrarians more relevant? And how does this make it easier for us to distrust experts?
@ziotom: I sympathize with your questions, and I really wish I had sensible answers, but unfortunately I have not. My general point is that in many areas (medicine, economics, etc…) consensus of experts has been oftentimes wrong. Therefore, there is room for educated and informed laypeople to form their own judgment using their intelligence guided by experience, as someone said. If I understood correctly Scott Aaronson’s piece, that was also the point he was (more or less) trying to make.
I believe you are right when the subject matter is highly mathematical, such as TCS. However, once you get a bit outside of math, things become very different. Things like the opinions of the leaders of the field become highly important in the absence of a mathematical ground truth. After all, they say “science advances one funeral at a time.” You hardly see that happen in TCS, but from discussions I have had with people in other fields, this is too often the case.
I completely agree with this. In particular, I heard from many non-experts that they should do group testing, which means that one test is used to check whether there is a positive case among, say, 10 blood samples. But I haven’t seen this used in practice or considered by experts. Does anyone happen to know why? Here is an article on it: https://www.forbes.com/sites/kotlikoff/2020/03/29/group-testing-is-our-secret-weapon-against-coronavirus/#13df896c36a6
Same Anon. as before.
It seems that in Germany they are doing it
and also in Israel
These are theoretical results (of which I also know some in Hungarian), but in practice it’s not done this way anywhere. I wonder why.
Don’t you mean Kevin Hartnett?
You’re right – sorry! Fixed!
According to this https://www.3newsnow.com/news/coronavirus/live-gov-ricketts-provides-coronavirus-briefing-3-24-20 , in Nebraska they are already using group testing (in groups of five).
The main problem is the sensitivity of the test to pooling samples, false positives/negatives, and all that jazz… I have done some work on this; google for “group testing, dilution effect”. What the Hungarians have done in that area is very nice mathematics, but not very practical.
Also, if I understood correctly the link I gave, in Israel they are going to use it in practice.
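To make the discussion above concrete, here is a minimal sketch of why pooling saves tests at low prevalence. It simulates two-stage Dorfman pooling; the sensitivity and dilution issues raised in this thread are deliberately ignored, and the prevalence and pool size are made-up illustrative numbers:

```python
import random

def pooled_tests(statuses, pool_size):
    """Two-stage Dorfman pooling: test each pool once; if a pool is
    positive, retest each member individually. Returns total tests used."""
    tests = 0
    for i in range(0, len(statuses), pool_size):
        pool = statuses[i:i + pool_size]
        tests += 1              # one test for the whole pool
        if any(pool):           # positive pool -> retest every member
            tests += len(pool)
    return tests

random.seed(0)
prevalence = 0.01               # assumed 1% infection rate (illustrative)
population = [random.random() < prevalence for _ in range(10_000)]
print("individual tests:", len(population))
print("pooled (size 10): ", pooled_tests(population, 10))
```

At 1% prevalence this uses roughly a fifth of the tests; as prevalence rises, more pools come back positive and the savings evaporate, which is one reason the scheme only pays off early or in low-prevalence screening.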
A number of commenters make the distinction between more mathy fields such as TCS and less mathy fields such as epidemiology.
While there are of course mathematical models also in epidemiology, it is true that there are more “error bars” around the parameters of these models. For example, in a novel virus, we often don’t know precisely the “base of the exponent” (i.e., the parameter “R0”) which of course makes a huge difference. We also do not know in advance the precise impact of steps such as closing schools, businesses, etc…
But these subtleties should be a cause for more humility, not less, for us non-experts. In a “mathy” field such as number theory, I would, at least in principle, be able to verify the proof of a statement despite being a non-expert. In a field such as epidemiology, I don’t stand a chance. This does not mean that we need to accept all expert opinion as gospel. We can question the assumptions, and many of us are equipped to follow the math. But we should be very wary of any amateur who claims to see something that all the experts missed.
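A quick illustration of why error bars in the “base of the exponent” matter so much: small differences in R0 compound over successive generations of infection. The R0 values and the ten-generation horizon below are made-up but plausible numbers, purely for illustration:

```python
# One index case, growing for 10 generations at different assumed R0 values.
# The spread in outcomes dwarfs the spread in the estimates themselves.
for r0 in (2.0, 2.5, 3.0):      # hypothetical range of R0 estimates
    cases = r0 ** 10            # descendants of one case after 10 generations
    print(f"R0 = {r0}: ~{cases:,.0f} cases after 10 generations")
```

A 1.5x disagreement in the base becomes a ~60x disagreement in cases after ten generations, which is why a parameter uncertainty that looks modest can swamp everything else in the model.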
Yes, the models have grotesque errors. But this is not a cause to be more humble, but to switch to a mode where the goal is not to model correctly, but to protect against worst-case risk. In terms that TCS people can relate to, the experts you are referring to are doing average case analysis, with possibly wrong distribution over inputs. The preppers and the armchair analysts are doing worst-case analysis with possibly pessimistic assumptions but protecting against downside.
It was perfectly rational to say around February: I don’t trust the experts because their models have large errors, therefore I will stop traveling for now, self-isolate, buy a mask, stock up on food and protect myself and my family. Such an attitude would have been ridiculed as “non-scientific” or “not evidence based”.
An expert knows much more about virology and aerosol transmission than the average person, but if I err vastly on the safe side, then I can largely ignore most of that knowledge. There are different fields of expertise, and one in which we are all experts is *survival*.
I completely agree that everyone is an expert on themselves: different people have different risk tolerances, different preexisting conditions, and different circumstances regarding how easy it is for them to self-isolate, stock up, etc. When discussing “armchair epidemiologists” I emphatically don’t mean people who decide to err on the side of safety for themselves and their families.
One comment I would make is that in an exponentially growing infection, the right response for both individuals and societies is highly time dependent. In the U.S. for example, I think that by late February there were perhaps at most thousands of undetected cases, which was the seed of a huge public health problem but still very small personal risk to the vast majority of the population.
So, L, if I understand your point, it is that experts (and that includes public health experts, epidemiologists, and people literally working on pandemics/global epidemics) are only preparing for “average-case” events? Do you have a reference for that? The concept of “average-case pandemic” sounds by itself oxymoronic to me.
I don’t dispute the fact that not all epidemiologists work on pandemics. I dispute the fact that “the experts you are referring to are doing average case analysis”, which appears to be a very broad and all-encompassing statement.
You might enjoy reading this NYT piece: https://www.nytimes.com/2020/04/07/opinion/coronavirus-science-experts.html?smid=fb-share&fbclid=IwAR1KIcMblvNsaKaAQTh2vCpB2yARYL8O5TVy9j3ZxSqsXsRAMK2GGcvF81o about the performance of experts in the present circumstances, starting from the WHO and all the way down… (btw, our own Scott A. is quoted too! 😉)
This opinion piece is exactly the attitude that I am warning about, and unfortunately I see that Scott’s piece is already being used as “evidence” that experts were somehow less reliable than Medium armchair epidemiologists.
Somehow the “worst offender” is the WHO, even though they called it a health emergency on January 30th, while Trump was still minimizing it (and not declaring an emergency) more than a *month* later.
The issue with masks is more complex (Scott Alexander did a nice overview of it here: https://slatestarcodex.com/2020/03/23/face-masks-much-more-than-you-wanted-to-know/ ).
They are not some “magic bullet” but yes, I think probably in the future part of pandemic preparedness would be to ensure there are enough surgical masks for the entire population and require people to wear them. This is more about preventing them from infecting others than the other way around.
Generally, experts definitely have a lot of “error bars” and as I said when error bars are in the base of the exponent they can make a huge difference. Also part of the things that are hardest to model are the public response to directives.
But this piece is very dangerous. For example, it comes very close to suggesting that doctors on the front line should take their medical advice from Trump. (This is not to say that doctors should not be using experimental treatment protocols – they have and they are – but the choice of which protocols to apply and which drugs to use should be made based on the data they have at the moment, not on which story happened to show up in Trump’s twitter feed.)
Boaz, I don’t think I am contradicting anything you’re saying, but I will add my two cents. I think there is a need to distinguish between the qualitative recommendations of public health experts and the quantitative analysis that leads them to these recommendations. As an example, I believe that stay-at-home orders (or for that matter something far more binding) are completely warranted and, in fact, should have been issued a month ago.
However, I have a hard time putting much confidence in the numbers these experts have come up with. As an example, Lipsitch is one of the people most quoted in various analyses concerning COVID-19. His analysis concerning the last pandemic, swine flu, is noted here: https://www.webmd.com/cold-and-flu/news/20091207/h1n1-swine-flu-less-severe-than-feared#1 . To pick a sentence from the article: “He [Lipsitch] noted that between one in 70 and one in 600 people who fall ill with H1N1 swine flu will be hospitalized.” That’s a prediction where the upper and lower bounds differ by a factor of 9. While this gap probably does not change the qualitative recommendations that one would make, I think it’s important to be aware that when various fields make predictions, they do not hold themselves to the same level of rigor or accuracy as some other fields.
So, yes, I think in times of emergency we should all listen to the opinions of experts (as in their recommendations). However, having followed the recommendations (in this case, staying home 24/7), I have definitely been wondering about the quantitative aspects of the predictions.
I definitely agree that the experts’ estimates have a huge amount of uncertainty.
In particular there are several different approaches to make predictions on future cases and also work out rates for hospitalization and fatalities. Some models use census data to try to model every individual person, some use some synthetic networks, some infer parameters from number of positively tested people, some infer parameters from deaths. The fraction of people that are assumed to follow social distancing instructions is often a hardwired constant. There are also different ways to try to account for the inevitable lag between infection, testing, and recovery/death. Ultimately you have to take these numbers with a large degree of uncertainty (though they probably have the right numbers up to a factor of 5-10 and not off by factors of 100), and mainly trust them when different ways of calculating the same number lead to similar results.
There may well be ways in which data scientists and computer scientists can team up with such experts and improve the estimates or find better ways to aggregate them. I just think that it’s presumptuous to assume that if it is this hard for the experts to get right, then someone whose qualification is “understanding virality, how things grow, and data” (to quote https://www.zerohedge.com/health/covid-19-evidence-over-hysteria ) can figure it out better than them by fitting a bell curve to the data.
On the topic of experts, this is a quote from an interview with Dr. Fauci from about a year ago (see https://fivethirtyeight.com/features/dr-fauci-has-been-dreading-a-pandemic-like-covid-19-for-years/ ):
“the thing I’m most concerned about as an infectious disease physician and as a public health person is the emergence of a new virus that the body doesn’t have any background experience with, that is very transmissible, highly transmissible from person to person, and has a high degree of morbidity and mortality.
Now what I’ve essentially done is paint the picture of a pandemic influenza. Now it doesn’t have to be influenza. It could be something like SARS. SARS was really quite scary. Thankfully, it kind of burned itself out by good public health measures. But the thing that worries most of us in the field of public health is a respiratory illness that can spread even before someone is so sick that you want to keep them in bed. And that’s really the difference.”
I completely agree — In particular, when I expressed skepticism about numbers from epidemiologists, I did not mean to endorse the estimates from “data scientists” (I do not place any credence on estimates obtained by somebody who just blindly fit a ML model).
So, aside from expressing skepticism at all numbers 🙂 (maybe a natural outcome of being a cynical person), let me end with a question. Right now, as you said, no matter how these numbers are calculated, the results (at least the recommendations) will be similar. But at some point, the cost of extended lockdowns on people’s health (as an example, most places are no longer allowing routine checkups) might start to conflict with the benefits of the lockdown with respect to COVID-19. I don’t know where that point is, but finding it will depend on more rigorous quantitative analysis than we have now. Maybe at that point we will need non-epidemiologists to weigh in.
I am keeping my fingers crossed and hoping that something magical happens that does not put us at the point of doing cost-benefit analysis of various measures, but if it does not, we might have to look at the numbers more carefully.
I am definitely interested in looking at these numbers and models critically, if only out of intellectual curiosity. As I said, it seems that there are a variety of ways to model the infection dynamics, and it’s interesting to figure out what the differences, assumptions, and error bars are. These things will absolutely inform decisions. I think at a crude level you want to weigh the effect of measure X on reducing the “R0” parameter against the cost of measure X in other ways. Initially, I guess, the idea is to throw in the “kitchen sink” and hope R0 shrinks below one, but then you want to see how much you can relax while still keeping it under 1.
I think collaborations between epidemiologists and others can be very fruitful. In particular, technology allows new ways to pull data into these models, as well as to test the assumptions that underlie them, and I don’t think people have yet fully explored all these possibilities.
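The “kitchen sink, then relax” idea in the comment above can be sketched as a toy search. Everything here (the list of measures, their assumed multiplicative effects on the reproduction number, and their relative costs) is invented purely for illustration; real epidemiological models are far more involved:

```python
from itertools import combinations

# Hypothetical interventions: (multiplier on R_eff, relative social cost).
# All numbers are made up for illustration only.
measures = {
    "masks":            (0.85, 1),
    "close schools":    (0.80, 5),
    "close businesses": (0.70, 9),
    "stay at home":     (0.60, 10),
}

def r_eff(base_r0, chosen):
    """Effective reproduction number after applying the chosen measures."""
    r = base_r0
    for m in chosen:
        r *= measures[m][0]
    return r

def cheapest_under_one(base_r0):
    """Brute-force search for the lowest-cost set of measures with R_eff < 1."""
    best = None
    for k in range(len(measures) + 1):
        for combo in combinations(measures, k):
            if r_eff(base_r0, combo) < 1:
                cost = sum(measures[m][1] for m in combo)
                if best is None or cost < best[0]:
                    best = (cost, combo)
    return best

print(cheapest_under_one(2.5))
```

The point of the sketch is only the shape of the problem: with multiplicative effects, throwing everything at the epidemic is easy to justify at first, and the interesting (and contested) question is which measures can later be dropped while keeping the product below 1.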