It was Daniel Kahneman's 80th birthday last week, and it was celebrated in style by Edge.org. Richard Thaler suggested that the question "How Has Kahneman's Work Influenced Your Own?" be put to friends working in behavioural economics, psychology, cognitive psychology, law, and medicine, and a steady stream of responses has flowed in.
“It is not just a celebration of Danny. It is a celebration of behavioural science” – Richard Nisbett
(Kahneman is the author of Thinking, Fast and Slow — if you have not read it, purchase a copy tomorrow and make time to read it; you won't regret it.)
There were over 25 entries posted to Edge.org and, I will be honest, I did not read them all in detail. I am not familiar with most of the contributors, so my choice of who to include and exclude here is subjective and prejudicial. Provided below are comments from Jason Zweig, WSJ journalist and author of Your Money and Your Brain; Nassim Taleb, author of Antifragile, The Black Swan and Fooled By Randomness; and Steven Pinker, Professor of Psychology at Harvard University. (Richard Thaler had not provided a comment under his own section as of March 31st.)
While I worked with Danny on a project, many things amazed me about this man whom I had believed I already knew well: his inexhaustible mental energy, his complete comfort in saying “I don’t know,” his ability to wield a softly spoken “Why?” like the swipe of a giant halberd that could cleave overconfidence with a single blow.
But nothing amazed me more about Danny than his ability to detonate what we had just done.
Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day’s work.
You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”
The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you've never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper shaft of brilliance.
The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I’ve never forgotten: “I have no sunk costs.”
To most people, rewriting is an act of cosmetology: You nip, you tuck, you slather on lipstick. To Danny, rewriting is an act of war: If something needs to be rewritten then it needs to be destroyed. The enemy in that war is yourself.
After decades of trying, I still hadn’t learned how to be a writer until I worked with Danny.
I no longer try to fix what I've just written if it doesn't work. I try to destroy it instead, and start all over as if I had never written a word.
Danny taught me that you can never create something worth reading unless you are committed to the total destruction of everything that isn’t. He taught me to have no sunk costs.
The Problem of Multiple Counterfactuals
Here is an insight Danny K. triggered that changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a 1986 paper by Kahneman and Miller:
A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete’s physique.
This idea of varying one side, not the other, also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals (we treat alternative past and future histories in exactly the same analytical manner).

It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias in which variable people choose to vary; the choice is not random. But the paper set off in my mind the following realization: what if we were to go one step beyond and perturbate both?
The response would be nonlinear. I had never considered the effect of such nonlinearity earlier nor seen it explicitly made in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.
Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weights of the barbell and decreasing the bodyweight of the weightlifter.
This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is real life. Simple.
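To make the compounding concrete, here is a minimal Monte Carlo sketch (my own toy illustration with invented numbers, not Taleb's model): an outcome that depends on two quantities crosses an "extreme" threshold noticeably more often when both quantities are random than when only one is.

```python
import random

random.seed(42)
N = 100_000
# "extreme" outcome: lifted weight exceeding 3x bodyweight
# (the baseline ratio is 200 kg / 100 kg = 2, so 3x is a tail event)
THRESHOLD = 3.0

def tail_freq(vary_weight, vary_body):
    extremes = 0
    for _ in range(N):
        # lognormal perturbations around the baseline values
        weight = 200.0 * (random.lognormvariate(0, 0.3) if vary_weight else 1.0)
        body = 100.0 * (random.lognormvariate(0, 0.3) if vary_body else 1.0)
        if weight / body > THRESHOLD:
            extremes += 1
    return extremes / N

one_random = tail_freq(vary_weight=True, vary_body=False)
both_random = tail_freq(vary_weight=True, vary_body=True)
print(f"P(extreme) with weight random only: {one_random:.4f}")
print(f"P(extreme) with both random:        {both_random:.4f}")
```

With both variables perturbed, the log of the ratio has variance 0.09 + 0.09 instead of 0.09, so the tail probability roughly doubles; with fat-tailed perturbations instead of lognormal ones the compounding is far more dramatic.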
Now, in the real world we never face one variable without something else moving with it. In academic experiments, we do. This is the serious difference between the laboratory (or the casino's "ludic" setup) and real life, between academia and practice. And such difference is, sort of, tractable.
I rushed to change a section for the 2003 printing of one of my books. Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter’s story.
But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to be lucky, invent better products, or find cheaper technologies. So not only do you need to predict your own fate (with errors) but also that of the competition (also with errors). And the variances of these errors add arithmetically when one focuses on differences. Financial analysts make a serious version of this error.
When comparing strategy A and strategy B, people in finance compare the Sharpe ratio (that is, the mean divided by the standard deviation of a stream of returns) of A to the Sharpe ratio of B and look at the difference between the two. This is very different from the correct method of looking at the Sharpe ratio of the difference, A-B, which requires a full distribution.
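A toy numerical illustration of the gap (hypothetical returns, my own sketch): two strategies sharing a common market exposure. The market term cancels in A-B, so the Sharpe ratio of the difference bears little relation to the difference of the individual Sharpe ratios.

```python
import random
import statistics

random.seed(0)

def sharpe(returns):
    # mean over standard deviation of a stream of returns
    return statistics.mean(returns) / statistics.stdev(returns)

# hypothetical daily returns: both strategies ride the same market factor
n = 1000
market = [random.gauss(0.01, 0.05) for _ in range(n)]
a = [m + 0.005 + random.gauss(0, 0.02) for m in market]
b = [m + random.gauss(0, 0.02) for m in market]

naive = sharpe(a) - sharpe(b)                    # difference of Sharpe ratios
correct = sharpe([x - y for x, y in zip(a, b)])  # Sharpe ratio of A-B
print(f"difference of Sharpes: {naive:.3f}")
print(f"Sharpe of difference:  {correct:.3f}")
```

Because the shared market exposure cancels in A-B, the difference has a much smaller standard deviation than either strategy alone, and the correct statistic can differ substantially in magnitude (and even in sign) from the naive one.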
Now, the bad news: the misunderstanding of the problem is general. Because scientists (not just financial analysts) use statistical methods blindly and mechanistically, like cooking recipes, they tend to make the mistake when consciously comparing two variables.
About a decade after I exposed the Sharpe ratio problem, Nieuwenhuis et al. in 2011 found that 50% of neuroscience papers (peer-reviewed in “prestigious journals”) that compared variables got it wrong, using the single variable methodology.
In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05).
We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience.
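The fallacy is easy to reproduce with made-up numbers (a hypothetical z-test sketch, not the Nieuwenhuis data): each effect is tested against zero, one clears P < 0.05 and the other does not, yet the direct test of their difference is nowhere near significant.

```python
import math

def two_sided_p(mean, sigma, n):
    # two-sided p-value for H0: mu = 0, known sigma (z-test)
    z = abs(mean) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

n, sigma = 25, 1.0
effect_a, effect_b = 0.45, 0.35  # hypothetical measured effects

p_a = two_sided_p(effect_a, sigma, n)  # ~0.024: "significant"
p_b = two_sided_p(effect_b, sigma, n)  # ~0.080: "not significant"
# correct procedure: test the difference of the effects directly
# (the difference of two independent means has standard deviation sigma * sqrt(2))
p_diff = two_sided_p(effect_a - effect_b, sigma * math.sqrt(2), n)
print(p_a, p_b, p_diff)  # p_diff ~0.72: no evidence the effects differ
```

Concluding that the effects differ because one passed the threshold and the other failed is exactly the incorrect two-separate-tests procedure the survey describes.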
Sadly, ten years after I reported the problem to investment professionals, the mistake is still being made. Ten years from now, they will still be making the same mistake.
Now that was the mild problem. There is worse. We were discussing two variables. Now assume the entire environment is random, and you will see that standard analyses of future events are doomed to underestimate tails. In risk studies, a severe blindness to multivariate tails prevails.
The discussions of the systemic risks of genetically modified organisms (GMOs) by "experts" fall for such butchering of risk management, invoking some biological mechanism and missing the properties of the joint distribution of tails.
As many Edge readers know, my recent work has involved presenting copious data indicating that rates of violence have fallen over the years, decades, and centuries, including the number of annual deaths in war, terrorism, and homicide.
Most people find this claim incredible on the face of it. Why the discrepancy between data and belief? The answer comes right out of Danny’s work with Amos Tversky on the Availability Heuristic. People estimate the probability of an event by the ease of recovering vivid examples from memory. As I explained, “Scenes of carnage are more likely to be beamed into our homes and burned into our memories than footage of people dying of old age.
No matter how small the percentage of violent deaths may be, in absolute numbers there will always be enough of them to fill the evening news, so people’s impressions of violence will be disconnected from the actual proportions.”
The availability heuristic also explains a paradox in people’s perception of the risks of terrorism. The world was turned upside-down in response to the terrorist attacks on 9/11.
But putting aside the entirely hypothetical scenario of nuclear terrorism, even the worst terrorist attacks kill a trifling number of people compared to other causes of violent death such as war, genocide, and homicide, to say nothing of other risks of death. Terrorists know this, and draw disproportionate attention to their grievances by killing a relatively small number of innocent people in the most attention-getting ways they can think of.
Even the perceived probability of nuclear terrorism is almost certainly exaggerated by the imaginability of the scenario (predicted at various times to be near-certain by 1990, 2000, 2005, and 2010, and notoriously justifying the 2003 invasion of Iraq).
I did an internet survey which showed that people judge it more probable that "a nuclear bomb would be set off in the United States or Israel by a terrorist group that obtained it from Iran" than that "a nuclear bomb would be set off." It's an excellent example of Kahneman and Tversky's Conjunction Fallacy, which they famously illustrated with the articulate activist Linda, who was judged more likely to be a feminist bank teller than a bank teller.
If somebody were to ask me what are the most important contributions to human life from psychology, I would identify this work [by Kahneman & Tversky] as maybe number one, and certainly in the top two or three.
In fact, I would identify the work on reasoning as one of the most important things that we've learned about anywhere. When we were trying to identify what any educated person should know in the entire expanse of knowledge, I argued unsuccessfully that the work on human cognition and probabilistic reasoning should be up there as one of the first things any educated person should know.