*Linda Fallacy. Availability Heuristics. Confirmation Bias.*

Well, yes, I am dropping jargon like a consultant, as my new bunch of friends would say.

What I am essentially talking about here are probabilistic cognitive illusions - malfunctions of our evolved cognitive map that make us prone to errors when faced with choices involving probability.

Let me start with the Linda Fallacy, also known as the Conjunction Fallacy. The two most famous names associated with the psychology of decisions and choice, Daniel Kahneman and Amos Tversky, state it as follows:

*Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.*

*Which is more probable?*

*A)*

*Linda is a bank teller.*

*B)*

*Linda is a bank teller and is active in the feminist movement.*

A whopping 85% of people, when asked this question, choose B, although in probabilistic terms event B is contained in event A and therefore cannot be more probable. Human beings, however, are not engineered for proper probabilistic thinking.
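The conjunction rule is mechanical: the probability of two things both being true can never exceed the probability of either one alone. A minimal sketch, with made-up numbers (not from the study) chosen to be as favourable to option B as possible:

```python
# The conjunction rule: P(A and B) can never exceed P(A).
# All numbers below are hypothetical, purely for illustration.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.90   # even if almost every such teller is a feminist...

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.3f}")

# ...the conjunction is still less probable than "bank teller" alone.
assert p_teller_and_feminist <= p_teller
```

However plausible the description makes "feminist" sound, multiplying by a conditional probability (which is at most 1) can only shrink the answer.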

Or take the celebrated taxicab problem.

In another study done by Tversky and Kahneman, subjects were given the following problem:

*A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue.*

*A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.*

*What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?*

Most subjects gave probabilities over 50%, and some gave answers over 80%.

*The correct answer, found using Bayes' theorem, is lower than these estimates:*

*There is a 12% chance (15% times 80%) of the witness correctly identifying a blue cab.*

*There is a 17% chance (85% times 20%) of the witness incorrectly identifying a green cab as blue.*

*There is therefore a 29% chance (12% plus 17%) the witness will identify the cab as blue.*

*This results in a 41% chance (12% divided by 29%) that the cab identified as blue is actually blue.*
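The derivation above can be checked in a few lines. This sketch simply mirrors the three steps of the quoted solution (variable names are my own):

```python
# Bayes' theorem applied to the taxicab problem.
p_blue, p_green = 0.15, 0.85      # base rates of cab colours in the city
p_correct = 0.80                  # witness reliability under test conditions

p_says_blue_given_blue = p_correct        # correctly identifies a blue cab
p_says_blue_given_green = 1 - p_correct   # misidentifies a green cab as blue

# Total probability the witness says "Blue" (12% + 17% = 29%).
p_says_blue = p_blue * p_says_blue_given_blue + p_green * p_says_blue_given_green

# Posterior: probability the cab really was Blue, given the testimony.
p_blue_given_says_blue = (p_blue * p_says_blue_given_blue) / p_says_blue

print(f"P(witness says Blue)       = {p_says_blue:.2f}")            # 0.29
print(f"P(cab is Blue | says Blue) = {p_blue_given_says_blue:.2f}") # 0.41
```

The counterintuitive part is the base rate: green cabs are so common that the witness's occasional mistakes on them (17%) outweigh her correct identifications of the rare blue cabs (12%).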

Once a problem turns Bayesian, it is almost impossible for anyone but a trained statistician or mathematical probabilist to reason through it correctly.

And even trained statisticians are fallible, as Marilyn vos Savant demonstrated with the famed *Monty Hall problem*. Here is this curious problem:

*Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?*

Although not explicitly stated in this version, solutions are almost always based on the additional assumptions that the car is initially equally likely to be behind each door and that the host must open a door showing a goat, must randomly choose which door to open if both hide goats, and must make the offer to switch.

As the player cannot be certain which of the two remaining unopened doors is the winning door, and initially all doors were equally likely, most people assume that each of two remaining closed doors has an equal probability and conclude that switching does not matter; hence the usual answer is "stay with your original door". However, under standard assumptions, the player should switch—doing so doubles the overall probability of winning the car from 1/3 to 2/3.
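For the disbelievers, the cleanest argument is brute force. A short simulation under the standard assumptions (car placed uniformly at random, host always opens a goat door, choosing randomly when both unpicked doors hide goats):

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the game under the standard assumptions; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)        # car placed uniformly at random
        pick = random.randrange(3)       # player's initial choice
        # Host opens a goat door that is neither the pick nor the car,
        # choosing randomly when both such doors hide goats.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # close to 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # close to 2/3
```

Staying wins only when the first pick was right (1/3 of the time); switching wins in exactly the complementary cases, hence 2/3.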

The Monty Hall problem, in its usual interpretation, is mathematically equivalent to the earlier Three Prisoners problem, and both bear some similarity to the much older Bertrand's box paradox. These and other problems involving unequal distributions of probability are notoriously difficult for people to solve correctly; when the Monty Hall problem appeared in Parade magazine, approximately 10,000 readers, *including nearly 1,000 with PhDs*, wrote to the magazine claiming the published solution ("switch!") was wrong. Numerous psychological studies examine how these kinds of problems are perceived. Even when given a completely unambiguous statement of the Monty Hall problem, explanations, simulations, and formal mathematical proofs, many people still meet the correct answer with disbelief.

Why this inability to solve probability problems?

One school of thought is that complex events and situations confronted humans only relatively recently in the history of mankind. Interactions between people and events started to grow exponentially more complicated with the growth of communities. With small populations, life is simple - the probability space is narrow. If the cave painter is not etching mammoths on the wall (I guess primitive man also had some way of signalling that he liked whatever was on the wall), he is probably with his woman. That is where probability came to a stop. But with community growth, cultural exchange, communication methods and complicated networks between an ever-expanding group of human beings, there are complicated events to consider: some independent, some dependent, some included and some excluded. Our genes have not kept up with the speed of community building, and so, when we make choices, we most often go by gut feel rather than probabilistic reasoning. This is where anchoring, the bandwagon effect and a total lack of understanding of the law of large numbers enter the picture. In finance, many people blow up because their gut feel, after serving them through these various alternatives to probabilistic thinking, finally runs out of luck.

And think of the world now, after connections and parameters have taken on a completely new meaning with the advent of the internet and Web 2.0 in the form of social networking. If the human brain is not equipped to deal with probability in ordinary society, how does it fare in the socially networked world, where connections are infinite in the literal sense? Decision making based on all the channels of association available to human beings in the modern world is, in one word, impossible. The brain is simply not tuned to work with so many parameters.

And if I suggest that a major reason for the panic that snowballed into the crisis was socially networked communication, which took away people's last rational power to make an informed choice, will I be too far from the truth?
