Knowing About Biases Can Hurt People

Author: Eliezer Yudkowsky. Link to original: http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ (English).
Tags: lesswrong, Rationalization

Once upon a time I tried to tell my mother about the problem of expert calibration, saying: "So when an expert says they're 99% confident, it only happens about 70% of the time." Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: "Of course, you've got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—"

And my mother said: "Are you kidding? This is great! I'm going to use it all the time!"

Taber and Lodge's "Motivated Skepticism in the Evaluation of Political Beliefs" describes the confirmation of six predictions:

1. Prior attitude effect. Subjects who feel strongly about an issue, even when encouraged to be objective, will evaluate supportive arguments more favorably than contrary arguments.

2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.

3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.

4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.

5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.

6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

If you're irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren't perfect Bayes-wielders; if we're not careful, we can cut ourselves.
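The claim about a true Bayesian is the standard decision-theoretic result that the expected value of information is never negative: an ideal agent can always ignore a signal, so observing it before acting cannot lower expected utility. A minimal numerical sketch, using a made-up two-state, two-action decision problem (the states, actions, and all numbers are illustrative assumptions, not from the essay):

```python
# Toy illustration: for an ideal Bayesian, the expected value of
# observing a signal before acting is never negative, because the
# agent can always fall back on the prior-optimal action.

prior = {"rain": 0.3, "sun": 0.7}  # belief over world states
utility = {  # utility of each (action, state) pair
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.2,
    ("no_umbrella", "rain"): -1.0, ("no_umbrella", "sun"): 1.0,
}
# Likelihood of a noisy forecast signal given the true state.
likelihood = {("wet", "rain"): 0.8, ("dry", "rain"): 0.2,
              ("wet", "sun"): 0.1, ("dry", "sun"): 0.9}

actions = ["umbrella", "no_umbrella"]
states = list(prior)

def expected_utility(action, belief):
    return sum(belief[s] * utility[(action, s)] for s in states)

# Best expected utility acting on the prior alone.
eu_prior = max(expected_utility(a, prior) for a in actions)

# Expected utility of observing the signal, updating by Bayes' rule,
# then choosing the best action under the posterior.
eu_with_info = 0.0
for signal in ["wet", "dry"]:
    p_signal = sum(likelihood[(signal, s)] * prior[s] for s in states)
    posterior = {s: likelihood[(signal, s)] * prior[s] / p_signal
                 for s in states}
    eu_with_info += p_signal * max(expected_utility(a, posterior)
                                   for a in actions)

value_of_information = eu_with_info - eu_prior
assert value_of_information >= 0.0  # holds for any such toy problem
```

The point of the sketch is the asymmetry the essay relies on: the guarantee holds only for an agent who updates correctly; a motivated reasoner who deploys the same information selectively gets no such guarantee.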

I've seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don't like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich's "dysrationalia" sense of stupidity.

You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you'd be helping them—making them more effective rationalists—if you just told them about a list of classic biases?

I recall someone who learned about the calibration / overconfidence problem. Soon after, he said: "Well, you can't trust experts; they're wrong so often, as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—" and went off into a whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.

I told him about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn't like, he accused me of being a sophisticated arguer. He didn't try to point out any particular sophisticated argument, any particular flaw—he just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.

Even the notion of a "sophisticated arguer" can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don't like.

I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.

I wanted to get my audience interested in the subject. A simple description of the conjunction fallacy and the representativeness heuristic would have sufficed for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology's sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn't hear them at all.

Whether on paper or in speech, I now try never to mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!