In the field of mind science, the inability of people to think straight about odds and probability is such a well-known phenomenon that the term "probability blindness" has been coined to describe it. Is it a simple matter of innumeracy? In "How the Mind Works," Steven Pinker reports the results of some experimental problems in probability. My favorite concerns a woman, "Linda," 31, bright and outspoken, a college philosophy major interested in social justice and the anti-nuke cause. Given this description, test subjects are asked to rate the likelihood of the following statements: (1) Linda is a bank teller, and (2) Linda is a bank teller who is active in the feminist movement. One is tempted to say that (1) is "obviously" more likely, but in tests there is considerable support for (2). It seems that the mind forms an idea of Linda's character and scans the propositions, searching for "hits." It is the binary world of computer science. The first statement is a "miss," the second at least a partial "hit," and thus the tendency to rate it more likely than (1).
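The trap here is the conjunction rule: a conjunction can never be more probable than either of its parts. A sketch in Python makes the point, using invented numbers for Linda (both probabilities below are assumptions for illustration, not anything from Pinker's data):

```python
# A hypothetical illustration of the conjunction rule: whatever probabilities
# we assign (the numbers below are invented), the chance that Linda is a bank
# teller AND a feminist can never exceed the chance that she is a bank teller.
p_bank_teller = 0.05            # assumed: P(Linda is a bank teller)
p_feminist_given_teller = 0.6   # assumed: P(feminist | she is a bank teller)

p_both = p_bank_teller * p_feminist_given_teller  # P(teller and feminist)

assert p_both <= p_bank_teller  # holds for any choice of the two numbers
print(round(p_both, 2), "<=", p_bank_teller)
```

However sure we are that Linda is a feminist, multiplying by a probability no greater than 1 can only shrink the number, so statement (2) can never beat statement (1).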

So some people are bad at math, others just slow all around: what is so interesting about that? Try another problem described by Pinker, one almost identical to the coin problem at Mathproblems.info. A certain invisible medical condition afflicts one person in a thousand. It can be detected before onset by a test that yields 5% false positives. If a randomly selected person tests positive, what is the probability she has the condition?

The answer is: just about exactly 2%. You may satisfy yourself of this, I hope, by considering (as in the coin problem) what results you'd expect with more than a single trial--in this case, say, 100,000. Since the condition afflicts one person in a thousand, you'd expect 100 to test positive on account of having the condition (assuming the test catches every true case). Of the remaining 99,900, five per cent, or 4995, would also test positive. So the ratio of afflicted to all positive results is given by 100/5095, or 0.0196.
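The arithmetic above can be reproduced in a few lines. This sketch makes the same assumption the text does, namely that the test catches every true case:

```python
# Reproducing the text's arithmetic over 100,000 imaginary test-takers,
# assuming (as the text implicitly does) that every afflicted person
# tests positive.
population = 100_000
incidence = 1 / 1000          # one person in a thousand
false_positive_rate = 0.05    # 5% of healthy people test positive anyway

afflicted = population * incidence                 # 100 true positives
healthy = population - afflicted                   # 99,900 people
false_positives = healthy * false_positive_rate    # 4,995 spurious positives

p_afflicted_given_positive = afflicted / (afflicted + false_positives)
print(round(p_afflicted_given_positive, 4))        # 0.0196
```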

According to Pinker, the average wrong answer submitted by Ivy League test subjects is on the order of 0.5, which is around 25 times the actual risk. I know that I myself, barred from doing the number grinding, would have guessed a number much higher than 0.02. Something about the way the problem presents itself to the mind seduces us toward the conclusion that the subject is much more likely to be afflicted than she actually is. And the error, it seems, is similar in all the problems I have discussed in the last two posts--a failure to recognize, or to weigh, different bits of information. In the coin problem, people tend to consider only that a "head" result does not eliminate the possibility that the conventional coin has been selected; in the "Linda" problem, people are so sure she is a feminist that any proposition including "feminist" is deemed more likely than one containing only dissonant information; and in the problem relating to the test for an invisible medical condition, people tend to discount the low incidence of the condition in favor of elevating the significance of a quite unreliable test.
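If the arithmetic still feels suspicious, a quick Monte Carlo sketch tells the same story: simulate a million test-takers with the stated rates (again assuming the test never misses a true case) and count what fraction of the positive results belong to the truly afflicted.

```python
# Monte Carlo check of the medical-test problem: with a 1-in-1000 incidence
# and a 5% false-positive rate, what fraction of positives are real?
import random

random.seed(0)                  # fixed seed so the run is repeatable
trials = 1_000_000
positives = 0
true_positives = 0
for _ in range(trials):
    has_condition = random.random() < 1 / 1000
    # Afflicted people always test positive (assumed); healthy ones do 5% of the time.
    tests_positive = has_condition or random.random() < 0.05
    if tests_positive:
        positives += 1
        if has_condition:
            true_positives += 1

print(true_positives / positives)   # hovers around 0.02, matching the arithmetic
```

The low incidence does all the work: the healthy majority generates roughly fifty false positives for every genuine one.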

Pinker thinks that these tendencies are evidence that our minds, the products of evolution by natural selection, are at bottom machines for information processing of the binary variety. That lower function is still there, too, tempting us, at least on first consideration, toward false conclusions on problems that require subtler reasoning, the deployment of a higher order of processing. I think it is exciting, in a humbling sort of way, to consider that our lizard minds, unelected monarchs in certain conditions, such as the presence of physical danger, may also shed light on our fondness for casino gambling and state lotteries.

Thanks. The article is good for understanding our biases in judging probable events.

Posted by: Arun Sharma | January 15, 2016 at 11:30 PM