We usually think of bias in the context of underlying motivations or interests, particularly in the political realm. The underlying premise of this book is that there are much more fundamental biases in human judgments. Humans aren’t perfectly logical creatures. Even when we have perfectly good information, and we are free from motivational biases, we still make poor decisions.
I picked up this book after a few passing references to it in “Harry Potter and the Methods of Rationality.” In this alternate world, Harry Potter is supposed to represent the paragon of Baconian rationalism, and he cites the fact that he has read Kahneman’s work as evidence of his rationality.
The book itself isn’t for the faint of heart; it is a collection of scientific articles published by psychologists with research interests in the science of judgment. The subject matter itself is interesting. If you are interested in the material, I would recommend reading the introduction and conclusion of each essay, and if it captures your interest, reading further into the experimental sections. I got less thorough in my reading as I got through the book, because I was anxious to be done; I already read enough scientific papers as a graduate student, and I didn’t want to read more!
The material itself is fascinating. I believe the editor, Kahneman, has written another book directed more towards a lay audience, “Thinking, Fast and Slow.” I will look into reading that too, but I’m sure a lot of the material in it is drawn from these scientific studies.
I think the material itself would be good for most readers to be aware of, to recognize the limitations and tendencies of human thought. I felt it was especially relevant to me as a graduate student, as I could picture many of these logical fallacies in experimental settings. It brought to mind the modern movement against bad science and bad reporting of science, e.g. Calling BS here at the University of Washington. Here are a few examples:
Humans tend to be uncharitable in making judgments of others; when seeking explanations of others’ behavior, we attribute more to characteristics of the individual than to the situation in which they find themselves. For instance, if a student is doing badly in school, we are more likely to think they are lazy than to consider the circumstances going on in their home.
Humans tend to be very bad predictors of outcomes that have multiple steps. For instance, when trying to predict how long a project will take to complete, we very easily underestimate the time required. Why? Every step of the process must be completed successfully, and the small chances of delay at each step compound, so a single fudge factor doesn’t account for all the delay.
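To make the compounding concrete, here’s a toy calculation (the step count and per-step success probability are invented for illustration):

```python
# Toy planning-fallacy arithmetic (made-up numbers): a project with
# 8 steps, each with a 95% chance of finishing on schedule.
p_step = 0.95
n_steps = 8

# The whole project is on time only if *every* step is,
# so the probabilities multiply.
p_on_time = p_step ** n_steps
print(f"P(one step on time) = {p_step:.2f}")
print(f"P(project on time)  = {p_on_time:.2f}")  # ~0.66
```

Even though each individual step looks safe, the project as a whole runs late about a third of the time.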
Humans rarely take into account base rate statistics. For instance, if an editor is very confident a manuscript will get published because of the excellent writing, he rarely takes into account the success rates of similar books. The integration of base rate data AND intuitive judgments is referred to as regression, and leads to better estimates.
Definitely a good read, but I just wasn’t in the mood for scientific papers!
Here’s a list of the essays contained in the book and a brief description of each:
Judgment under uncertainty: heuristics and biases
More of a summary of the entire book with introductory concepts, including representativeness (e.g. what is the probability that object A belongs to class B?), misconceptions of chance (truly random events don’t seem random to humans), and sample size (humans are bad at taking into account the effects of sample size when making decisions).
Belief in the law of small numbers
The believer in the law of small numbers gambles his hypotheses on small sample sizes without realizing that the odds against him are unreasonably high. Bad for scientists who only do 5-6 replicates in a study.
Subjective probability: A judgment of representativeness
Humans evaluate the representativeness of a sample by looking for similarities to the population of interest and the apparent “randomness” of the sample.
On the psychology of prediction
Predictions have three sources of information: (1) prior general knowledge, e.g. base rates, (2) information specific to the case at hand, and (3) information on the reliability of the information you have been given. Humans generally ignore (3) entirely and rely almost exclusively on (2).
Studies of representativeness
When seeking to attribute causes to effects, the lay person has three sources of information: distinctiveness information (how does this situation differ from others?), consistency information (does this happen in repeated experiments?), and consensus information (does everyone respond this way?). Humans generally ignore consensus information.
Judgments of and by representativeness
Popular induction: Information is not necessarily informative
Causal schemas in judgments under uncertainty
It is a psychological commonplace that people strive to achieve a coherent interpretation of the events that surround them, and that the organization of events by schemas of cause-effect relations serves to achieve this goal.
Shortcomings in the attribution process: On the origins
This chapter examines “non-motivational attribution biases”, biases that aren’t due to self-serving motives. For example, the fundamental attribution error, in which we “infer broad personal dispositions and expect consistency in behavior or outcomes across widely disparate situations and contexts.”
Evidential impact of base rates
Even when given base rate data, humans rarely take it into account, using their initial intuitions rather than the hard numbers provided by scientific studies.
Availability: A heuristic for judging frequency and probability
Introduces a new heuristic: availability. Humans estimate probabilities by how easily information is retrieved from memory. For instance, if I asked you to compare the frequency of words that start with r to words that have r in the third position, you would have an easier time recalling words that begin with r, and so you would likely overestimate their frequency relative to the latter.
Egocentric biases in availability and attribution
This looks at how the ego plays a role in availability. For instance, I tend to focus on my own inputs into a project rather than my teammates’, and am likely to overestimate my contribution. This can result in tensions, such as disputes over who gets authorship on a paper.
The availability bias in social perception and interaction
The simulation heuristic
There are two kinds of judgments where availability can play a role: how easily past information is recalled, and how easily new scenarios are constructed in the imagination. The latter is called the simulation heuristic. For instance, if I asked you to think up all the ways you could kill someone with a paper clip, the ease with which scenarios came to mind would give you a sense of their availability.
Informal covariation assessment: Data-based versus theory-based judgments
Humans are really bad at evaluating covariation, because they look at a limited selection of the data. For instance, when evaluating the question “does God answer prayers?”, you have to take into account (a) the times you prayed and the prayer was answered, (b) the times you prayed and the prayer wasn’t answered, (c) the times you didn’t pray and you still got positive results, and (d) the times you didn’t pray and you got negative results. Hard to evaluate, right?
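A toy 2×2 table (counts invented for illustration) shows why focusing on cell (a) alone is misleading:

```python
# Made-up counts for the four cells of the prayer example.
a = 30  # prayed, positive outcome
b = 70  # prayed, negative outcome
c = 30  # didn't pray, positive outcome
d = 70  # didn't pray, negative outcome

# Cell (a) alone -- "30 answered prayers!" -- feels persuasive, but
# comparing rates across all four cells tells the real story.
rate_prayed = a / (a + b)
rate_not_prayed = c / (c + d)
print(rate_prayed, rate_not_prayed)  # 0.3 0.3 -- no covariation at all
```

The outcome rate is identical whether you prayed or not, which you can only see by using all four cells.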
The illusion of control
Situations where the actor has absolutely no control (e.g. rolling dice) don’t stop the actor from behaving as if he has some control over the situation, resulting in all sorts of odd behaviors.
Test results are what you think they are
This was one of my favorites, and is basically what it says. In many instances, psychologists interpret what they want to see. The authors look at the example of how Rorschach blot tests were used to evaluate whether patients were homosexual or not.
Probabilistic reasoning in clinical medicine: Problems and opportunities
Oftentimes, physicians don’t have proper training in probability and aren’t using diagnostic tests appropriately. False positives and false negatives should be taken seriously, and understanding what diagnostic results actually mean is vital to the recommendations physicians make.
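A quick Bayes’ theorem sketch (the sensitivity, false-positive rate, and prevalence are all invented for illustration) shows why the base rate matters so much when interpreting a positive test:

```python
# Hypothetical diagnostic test -- all numbers invented for illustration.
sensitivity = 0.95   # P(test+ | disease)
fp_rate = 0.05       # P(test+ | no disease)
prevalence = 0.01    # base rate of the disease in the tested population

# Bayes' theorem: P(disease | test+)
p_pos = sensitivity * prevalence + fp_rate * (1 - prevalence)
p_disease_given_pos = sensitivity * prevalence / p_pos
print(f"{p_disease_given_pos:.2f}")  # ~0.16
```

With a 1% base rate, roughly five out of six positive results are false positives, even though the test itself is 95% sensitive.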
Learning from experience and suboptimal rules in decision making
Overconfidence in case-study judgments
The more information decision-makers have, the more confident they are in their decisions. But this isn’t reflective of the actual accuracy of their predictions.
A progress report on the training of probability assessors
Calibration of probabilities: The state of the art to 1980
How do you tell how good someone is at making predictions? “If a person assesses the probability of a proposition being true as .7 and later finds that the proposition is false, that in itself does not invalidate the assessment. However, if a judge assigns .7 to 10,000 independent propositions, only 25 of which subsequently are found to be true, there is something wrong with these assessments.”
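The check described in that quote can be sketched in a few lines (the judge’s stated confidence and actual hit rate here are made up):

```python
import random
random.seed(42)

# A mis-calibrated judge: says "70% confident" on 10,000 independent
# propositions, but the propositions are actually true only 30% of the time.
n = 10_000
outcomes = [random.random() < 0.30 for _ in range(n)]

hit_rate = sum(outcomes) / n
print(f"stated: 0.70, observed: {hit_rate:.2f}")  # observed lands near 0.30
# A well-calibrated judge's "70%" claims would come true about 70% of the time.
```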
For those condemned to study the past: Heuristics and biases in hindsight
The idea that we can learn from the past is in some aspects overrated. We tend to focus on salient details rather than the ordinary ones, and string them together in causal diagrams. We also tend to view the past with the foreknowledge of how it will end. In real decisions in the present, we do not have that luxury. “Inevitably we are all captives of our present personal perspective. We know things that those living in the past did not… Historians do ‘play new tricks on the dead in every generation.'”
Evaluation of compound probabilities in sequential choice
Humans are really bad at compound probabilities, probabilities based on sequential events.
Conservatism in human information processing
Bayes’ theorem gives the user updated probabilities based on new information. But the probabilities prescribed by Bayes’ theorem are often much more extreme than the ones humans actually report; humans are much more conservative, probably in part because we are reluctant to commit to probabilities above 90%. That’s why odds are often easier to interpret than probabilities.
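In odds form, Bayes’ theorem is just multiplication, which makes it easy to see how quickly accumulating evidence should push a judge toward extreme values (the likelihood ratio here is invented for illustration):

```python
# Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio.
# Made-up example: a hypothesis starts at even odds (1:1), and each of three
# independent observations is 3x more likely if the hypothesis is true.
prior_odds = 1.0
likelihood_ratio = 3.0

posterior_odds = prior_odds * likelihood_ratio ** 3  # 27:1
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"odds {posterior_odds:.0f}:1, probability {posterior_prob:.3f}")
# odds 27:1, probability 0.964 -- far more extreme than most judges will go
```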
The best-guess hypothesis in multi-stage inference
When making multi-stage inferences, humans tend to use the best-guess hypothesis: make your best guess, and pretend it’s actually 100% true when taking action, rather than taking into account other possibilities that still might exist.
Inferences of personal characteristics on the basis of information retrieved from one’s memory
When making decisions based on memory, the user should take into account (1) diagnostic value of the information available and (2) the reliability of the information available. Humans tend to ignore the latter.
The robust beauty of improper linear models in decision making
This was an interesting topic. Let’s say we’re evaluating student applications for graduate school. You could use a regression model that takes into account GRE scores, GPA, etc., or you could use human evaluators. It turns out that even an imperfect model (one with improper weights) will still do better than human judges. Humans are still useful, though, because their intuition about which factors actually matter is usually pretty good.
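A small simulation (entirely made-up data, in the spirit of Dawes’ argument) shows how little the exact weights matter:

```python
import random
import statistics as st
random.seed(1)

def corr(xs, ys):
    """Pearson correlation via population moments."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (st.pstdev(xs) * st.pstdev(ys) * len(xs))

# Simulated admissions data: the outcome depends on three standardized
# predictors with unequal "true" weights, plus a lot of noise.
n = 2000
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
true_w = [0.6, 0.3, 0.1]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 1) for row in X]

proper = [sum(w * x for w, x in zip(true_w, row)) for row in X]  # true weights
improper = [sum(row) for row in X]                               # unit weights

print(round(corr(y, proper), 2), round(corr(y, improper), 2))
```

The unit-weight “improper” model tracks the outcome nearly as well as the model with the true weights, which is the core of Dawes’ result.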
The vitality of mythical numbers
Another good one, it looks at how humans can be overconfident in quick calculations. The author looks at one quick calculation of how much stolen property in NYC is attributable to heroin addicts. The numbers sound good, perhaps to a newspaper reporter, but they are terrible, and can be countered by starting with different sets of data to arrive at different conclusions.
Intuitive prediction: Biases and corrective procedures
Improving inductive inference
Facts versus fears: Understanding perceived risk
On the study of statistical intuitions
Variants of uncertainty