Thinking Fast and Slow


11 interesting paragraphs from the book "Thinking, Fast and Slow" by Daniel Kahneman. I highly recommend this book to every person who has important decisions to make. The paragraphs are:

To get the idea, take a sheet of paper and, without a ruler, draw a 2½-inch line going up, starting at the bottom of the page. Now take another sheet, start at the top, and draw a line going down until it is 2½ inches from the bottom. Compare the lines. There is a good chance that your first estimate of 2½ inches was shorter than the second. The reason is that you do not know exactly what such a line looks like; there is a range of uncertainty. You stop near the bottom of the region of uncertainty when you start from the bottom of the page and near the top of the region when you start from the top. Robyn LeBoeuf and Shafir found many examples of that mechanism in daily experience. Insufficient adjustment neatly explains why you are likely to drive too fast when you come off the highway onto city streets, especially if you are talking with someone as you drive. Insufficient adjustment is also a source of tension between exasperated parents and teenagers who enjoy loud music in their room. LeBoeuf and Shafir note that a "well-intentioned child who turns down exceptionally loud music to meet a parental demand that it be played at a 'reasonable' volume may fail to adjust sufficiently from a high anchor and may feel that genuine attempts at compromise are being overlooked." The driver and the child both deliberately adjust down, and both fail to adjust enough.

Differences between experts and the public are explained in part by biases in lay judgments, but Slovic draws attention to situations in which the differences reflect a genuine conflict of values. He points out that experts often measure risks by the number of lives (or life-years) lost, while the public draws finer distinctions, for example between "good deaths" and "bad deaths," or between random accidental fatalities and deaths that occur in the course of voluntary activities such as skiing. These legitimate distinctions are often ignored in statistics that merely count cases. Slovic argues from such observations that the public has a richer conception of risks than the experts do. Consequently, he strongly resists the view that experts should rule and that their opinions should be accepted without question when they conflict with the opinions and wishes of other citizens. When experts and the public disagree on their priorities, he says, "Each side must respect the insights and intelligence of the other."

As in the Müller-Lyer illusion, the fallacy remains attractive even when you recognize it for what it is. The naturalist Stephen Jay Gould described his own struggle with the Linda problem. He knew the correct answer, of course, and yet, he wrote, "a little homunculus in my head continues to jump up and down, shouting at me, 'but she can't just be a bank teller; read the description.'" The little homunculus is, of course, Gould's system 1 speaking to him in insistent tones. (The two-system terminology had not yet been introduced when he wrote.)
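For context (this is background from the book, not part of the excerpt above): in the Linda problem, people judge "bank teller and active in the feminist movement" to be more probable than "bank teller" alone, even though a conjunction can never be more probable than one of its parts. A minimal Python sketch of that conjunction rule, with made-up probabilities purely for illustration:

```python
# Toy simulation of the conjunction rule behind the Linda problem.
# The 5% and 30% base rates are arbitrary assumptions for illustration.
import random

random.seed(1)
population = [
    {"bank_teller": random.random() < 0.05,
     "feminist":    random.random() < 0.30}
    for _ in range(100_000)
]

p_teller = sum(p["bank_teller"] for p in population) / len(population)
p_both   = sum(p["bank_teller"] and p["feminist"] for p in population) / len(population)

print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_both:.3f}")
# Every feminist bank teller is also a bank teller, so the conjunction
# can never come out more probable than the single event.
assert p_both <= p_teller
```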

Stereotyping is a bad word in our culture, but in my usage it is neutral. One of the basic characteristics of system 1 is that it represents categories as norms and prototypical exemplars. This is how we think of horses, refrigerators, and New York police officers; we hold in memory a representation of one or more "normal" members of each of these categories. When the categories are social, these representations are called stereotypes. Some stereotyping can have dreadful consequences, but the psychological facts cannot be avoided: stereotypes, both correct and false, are how we think of categories.

This statement is obviously true and not interesting at all. Who would expect the correlation to be perfect? There is nothing to explain. But the statement you found interesting and the statement you found trivial are algebraically equivalent. If the correlation between the intelligence of spouses is less than perfect (and if men and women on average do not differ in intelligence), then it is a mathematical inevitability that highly intelligent women will be married to husbands who are on average less intelligent than they are (and, vice versa, of course). The observed regression to the mean cannot be more interesting or more explainable than the imperfect correlation.
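A minimal simulation (my illustration, not from the book) of the point above: if spouses' IQs are imperfectly correlated, the husbands of highly intelligent women will, on average, be less intelligent than their wives, purely as a matter of regression to the mean. The correlation of 0.4 and the cutoff of 130 are arbitrary assumptions.

```python
# Regression to the mean with an imperfect correlation between spouses' IQs.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.4          # assumed correlation between spouses' IQs
n = 500_000

# Draw spouse IQs from a bivariate normal with mean 100 and SD 15.
cov = np.array([[1.0, rho], [rho, 1.0]]) * 15**2
wives, husbands = rng.multivariate_normal([100, 100], cov, size=n).T

top = wives > 130  # "highly intelligent" wives (arbitrary cutoff)
print(f"Mean IQ of these wives:    {wives[top].mean():.1f}")
print(f"Mean IQ of their husbands: {husbands[top].mean():.1f}")
# The husbands' mean falls between 100 and the wives' mean:
# regression to the mean, nothing more to explain.
```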

By now you should easily recognize that all these operations are features of system 1. I listed them here as an orderly sequence of steps, but of course, the spread of activation in associative memory does not work this way. You should imagine a process of spreading activation that is initially prompted by the evidence and the question, feeds back upon itself and eventually settles on the most coherent solution possible.

Finally, the illusions of validity and skill are supported by a powerful professional culture. We know that people can maintain an unshakable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers. Given the professional culture of the financial community, it is not surprising that large numbers of individuals in that world believe themselves to be among the chosen few who can do what they believe others cannot.

At the end of our journey, Gary Klein and I agreed on a general answer to our initial question: when can you trust an experienced professional who claims to have an intuition? Our conclusion was that for the most part it is possible to distinguish intuitions that are likely to be valid from those that are likely to be bogus. As in the judgment of whether a work of art is genuine or a fake, you will usually do better by focusing on its provenance than by looking at the piece itself. If the environment is sufficiently regular and if the judge has had a chance to learn its regularities, the associative machinery will recognize situations and generate quick and accurate predictions and decisions. You can trust someone's intuitions if these conditions are met.

Fortunately, I had read Paul Meehl's "little book," which had appeared just a year earlier. I was convinced by his argument that simple statistical rules are superior to intuitive "clinical" judgments. I concluded that the then-current interview had failed at least in part because it allowed the interviewers to do what they found most interesting, which was to learn about the dynamics of the interviewee's mental life. Instead, we should use the limited time at our disposal to obtain as much specific information as possible about the interviewee's life in his normal environment. Another lesson I learned from Meehl was that we should abandon the procedure in which the interviewers' global evaluations of the recruit determined the final decision. Meehl's book suggested that such evaluations should not be trusted and that statistical summaries of separately evaluated attributes would achieve higher validity.
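A minimal sketch of what "statistical summaries of separately evaluated attributes" means in practice; the attribute names and the 1-5 scale are illustrative placeholders I made up, not the traits Kahneman actually used. The point is only that each trait is rated on its own and the ratings are combined by a fixed rule rather than by a global impression:

```python
# Toy version of the Meehl-style procedure described above.
# Trait names and the 1-5 scale are placeholders, not Kahneman's actual form.
def statistical_summary(ratings: dict) -> float:
    """Combine separately evaluated attributes with a fixed rule (a plain mean)."""
    traits = ["responsibility", "sociability", "punctuality",
              "energy", "realism", "independence"]
    return sum(ratings[t] for t in traits) / len(traits)

candidate = {"responsibility": 4, "sociability": 3, "punctuality": 5,
             "energy": 2, "realism": 3, "independence": 4}

print(f"Summary score: {statistical_summary(candidate):.2f}")  # 3.50
```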

This embarrassing episode remains one of the most instructive experiences of my professional life. I eventually learned three lessons from it. The first was immediately apparent: I had stumbled into a distinction between two profoundly different approaches to forecasting, which Amos and I later labeled the inside view and the outside view. The second lesson was that our initial forecast of about two years for the completion of the project exhibited a planning fallacy. Our estimates were closer to a best-case scenario than to a realistic assessment. I was slower to accept the third lesson, which I call irrational perseverance: the folly we displayed that day in failing to abandon the project. Facing a choice, we gave up rationality rather than give up the enterprise.

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it. Many scholars have surely thought at one time or another of stories such as those of Anthony and Betty, or Jack and Jill, and casually noted that these stories did not jibe with utility theory. But they did not pursue the idea to the point of saying, "This theory is seriously wrong because it ignores the fact that utility depends on the history of one's wealth, not only on present wealth." As the psychologist Daniel Gilbert observed, disbelieving is hard work, and system 2 is easily tired.

Will you Share, Follow, and Like?

Stay Connected for more updates, till then

Look Sharp, Feel Strong, and Create Paradise.


What's your opinion about it?