Summary

Judgment under Uncertainty: Heuristics and Biases
Amos Tversky and Daniel Kahneman
Tversky and Kahneman use this article to summarize and explain a compilation of heuristics and biases that hinder our ability to judge the probabilities of uncertain events. The article is organized into discussions of three main heuristics, with examples of the biases each heuristic leads to. I will summarize these in outline form for ease of organization.
• Representativeness heuristic. When using the representativeness heuristic, people make judgments about probability based on how well an object or event represents, or is similar to, a stereotype they are familiar with. The closer it resembles the stereotype, the higher they consider the probability that it fits the stereotype. This heuristic is usually used when one is asked to judge the probability that an object or event belongs to a specific class or process.
o Insensitivity to prior probability of outcomes. This is a phenomenon where people ignore prior probabilities when they evaluate probability by representativeness. However, people do use prior probabilities correctly when they have no other information to go on.
o Insensitivity to sample size. People assume that characteristics of a population will hold no matter what the sample size is, whereas this is not a safe assumption in small sample sizes.
o Misconceptions of chance. One consequence of this is the gambler’s fallacy, where chance is viewed as a self-correcting process, which is not true in a series of independent events (a small simulation of this and of the sample-size bias follows this outline).
o Insensitivity to predictability. This describes the bias in which people feel comfortable making intuitive predictions based on insufficient information.
o Illusion of validity. The confidence a person has in a prediction depends primarily on how representative the input is of the outcome being predicted, without regard for the factors that limit predictive accuracy.
o Misconceptions of regression. People look at individual instances of performance independently, without considering the effects of regression toward the mean.
• Availability heuristic. People sometimes judge the frequency of an event based on instances that can be brought to mind at the time. This heuristic is often used when one is asked to assess the frequency of an event or the plausibility of a development.
o Biases due to the retrievability of instances. If one group has more instances that are more familiar or more salient than another group, the first group will seem to be bigger, even if the two groups are the same size.
o Biases due to the effectiveness of a search set. When people are asked to solve a problem that requires them to construct a search set, they base their answer on how easily instances can be brought to mind through the search, rather than on the actual frequency of those instances.
o Biases of imaginability. When trying to judge the frequency of an event in which instances need to be imagined to try to decide on the frequency, the frequency will be based on how easy it is to imagine various instances of the event.
o Illusory correlation. If two events are strongly associated, they are judged to occur together more frequently than they actually do.
• Adjustment and Anchoring Heuristic. People often make estimates by adjusting from an initial value (an anchor) to obtain a final value. This heuristic is used in numerical prediction when a relevant value is available.
o Insufficient adjustment. When starting from an initial value and adjusting, the adjustment is often much smaller than what it should be.
o Biases in the evaluation of conjunctive and disjunctive events. Because of anchoring, probability is often overestimated in conjunctive problems and underestimated in disjunctive problems.
o Anchoring in the assessment of subjective probability distributions. Because of anchoring, when people form subjective probability distributions, the resulting distributions are often too tight relative to the actual probability distributions.
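Two of these biases lend themselves to a quick simulation. Below is a minimal Python sketch, my own illustration rather than anything from the paper, of the gambler’s fallacy and of the hospital problem Tversky and Kahneman use for insensitivity to sample size; the trial counts and seed are arbitrary assumptions.

```python
import random

random.seed(42)

# Gambler's fallacy: after a run of 5 heads, the chance of heads is unchanged.
def prob_heads_after_streak(streak_len=5, trials=200_000):
    """Estimate P(heads | the previous streak_len flips were all heads)."""
    flips = [random.random() < 0.5 for _ in range(trials)]
    follow_ups = [flips[i] for i in range(streak_len, trials)
                  if all(flips[i - streak_len:i])]
    return sum(follow_ups) / len(follow_ups)

# Insensitivity to sample size (the paper's hospital problem): small samples
# stray from 50% boys far more often than large ones.
def frac_days_over_60pct_boys(births_per_day, days=20_000):
    over = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        over += boys / births_per_day > 0.6
    return over / days

print(f"P(heads after 5 heads) ~ {prob_heads_after_streak():.3f}")    # ~0.5
print(f"small hospital (15 births/day): {frac_days_over_60pct_boys(15):.3f}")
print(f"large hospital (45 births/day): {frac_days_over_60pct_boys(45):.3f}")
```

The small hospital records many more 60%-boy days, which is exactly the intuition the sample-size bias overrides.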
In the discussion, the authors point out that these heuristics and biases are experienced not only by naïve subjects but also by researchers who are aware of the theory. The lack of an appropriate code is one reason people don’t detect their own biases when they make decisions. Also, internal consistency alone is not enough to consider judged probabilities accurate.

Analysis of Heuristics and Biases paper according to Meister’s characteristics:
Primary Topic. The topic is Heuristics and Biases.
Specific Theme. The theme is that certain heuristics in decision making lead to various biases.
Venue. Various venues were used for research, mostly universities.
Subjects. Various subjects were used, including undergraduate students and researchers.
Methodology. Various studies performed contributed to this work, but many of the studies involved asking subjects to solve problems.
Theory Involvement. The theory discussed in this paper evolved from previous studies, which are used to explain and validate the theory.
Research Type. This article is essentially a theoretical discussion of a compilation of previous research.
Unit of Analysis. The unit of analysis in this paper is the individual person making decisions.
Hypothesis. Biases in judgments reveal some heuristics of thinking under uncertainty.
Design Application. Heuristics and biases are important to consider in the design of any system where human decision making will occur.

Prospect Theory: An Analysis of Decision under Risk
Daniel Kahneman and Amos Tversky
Kahneman and Tversky begin this paper by giving a critique of expected utility theory. They state that expected utility theory is based on the tenets of expectation, asset integration, and risk aversion. They concede that their methods raise questions of validity and generalizability, but they point out that all other methods used to test utility theory raise the same concerns. The discussion then turns to observed phenomena that refute expected utility theory. For each phenomenon, examples are given of studies in which hypothetical problems were posed to subjects, and the results validate the phenomena used to refute the tenets of expected utility theory.

The first phenomenon is the certainty effect, in which people overweight outcomes that are considered certain relative to outcomes that are only probable. The next is the reflection effect, in which the reflection of prospects around zero reverses the preference order of the prospects. The reflection effect implies that risk aversion in the positive domain is accompanied by risk seeking in the negative domain. It also eliminates aversion to uncertainty or variability as an explanation of the certainty effect. Kahneman and Tversky then note that many believe the purchasing of insurance against both large and small losses gives evidence for the concavity of the utility function for money, but they test this with the notion of probabilistic insurance. The results of studies involving probabilistic insurance indicate that the intuitive notion of risk is not adequately captured by the assumed concavity of the utility function for wealth. The isolation effect is the phenomenon in which people disregard components that alternatives share and focus on components that distinguish them, in order to simplify the choice. Instances of this effect violate a basic tenet of utility theory: that choices between prospects are determined completely by the probabilities of final states.
After giving examples that clearly violate the basic principles of expected utility theory, Kahneman and Tversky explain the theory behind their descriptive model of decision making under risk, called prospect theory. Prospect theory differentiates two phases in the process of making a choice: editing followed by evaluation.
The editing phase consists of a preliminary analysis of the prospects to reorganize them into a simpler form for evaluation. It is composed of several operations: coding, combination, segregation, cancellation, simplification, and detection of dominance. When people are presented with multiple alternatives to choose from, they first code the alternatives into gains and losses relative to a reference point. They also combine probabilities associated with identical outcomes, segregate riskless components from risky components, and discard components that are shared by all of the alternatives. Additionally, people simplify prospects by rounding probabilities or outcomes and disregard alternatives that are clearly dominated.
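To make the editing phase concrete, here is a hypothetical sketch of two of these operations, combination and detection of dominance. The paper describes the operations only verbally; representing prospects as (outcome, probability) pairs and the simple dominance test below are my own assumptions.

```python
from collections import defaultdict

def combine(prospect):
    """Combination: merge probabilities of identical outcomes, e.g. the
    prospect (200, .25; 200, .25) is edited into (200, .50)."""
    merged = defaultdict(float)
    for outcome, prob in prospect:
        merged[outcome] += prob
    return sorted(merged.items())

def dominates(a, b):
    """Crude dominance check: a dominates b if, at every outcome level,
    a offers at least as high a chance of doing that well or better."""
    thresholds = {x for x, _ in a + b}
    p_at_least = lambda pr, t: sum(p for x, p in pr if x >= t)
    return all(p_at_least(a, t) >= p_at_least(b, t) for t in thresholds)

print(combine([(200, 0.25), (200, 0.25), (100, 0.50)]))
# -> [(100, 0.5), (200, 0.5)]
print(dominates([(100, 0.5), (200, 0.5)], [(100, 0.5), (150, 0.5)]))
# -> True: the second alternative would be discarded without evaluation
```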
The next phase of decision making is the evaluation of the edited prospects. It is assumed that the evaluation will conclude in choosing the prospect with the highest value. Two basic equations describe how the two scales π and v combine to determine the overall value V. The scale π associates with each probability p a decision weight π(p), which reflects the impact of p on V, while the scale v assigns to each outcome x a number v(x), which reflects the subjective value of that outcome. The basic equation of the theory is given as
V(x, p; y, q) = π(p)v(x) + π(q)v(y)
where v(0) = 0, π(0) = 0, and π(1) = 1.
This equation generalizes expected utility theory by relaxing the expectation principle. The evaluation of strictly positive or strictly negative prospects is described by the equation
V(x, p; y, q) = v(y) + π(p)[v(x) - v(y)]
where p + q = 1 and either x > y > 0 or x < y < 0.
The essential feature of this equation is that a decision weight is applied to the difference in value between the alternatives, which represents the risky component, but not to the riskless component v(y).
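The two equations translate directly into code. The sketch below is an illustration rather than the authors’ implementation; the value function v and the weighting function π are passed in as parameters because the paper characterizes their shapes without fixing formulas.

```python
import math

def is_strictly_positive_or_negative(x, p, y, q):
    """The paper's condition: p + q = 1 and both outcomes fall on the
    same side of zero."""
    return math.isclose(p + q, 1.0) and (x > y > 0 or x < y < 0)

def prospect_value(x, p, y, q, v, pi):
    """Overall value V of the two-outcome prospect (x, p; y, q)."""
    if is_strictly_positive_or_negative(x, p, y, q):
        # V = v(y) + pi(p)[v(x) - v(y)]: the decision weight applies only
        # to the risky component, not to the riskless component v(y).
        return v(y) + pi(p) * (v(x) - v(y))
    # Regular prospects: V = pi(p)v(x) + pi(q)v(y), with v(0) = pi(0) = 0.
    return pi(p) * v(x) + pi(q) * v(y)
```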
After definitions of the various components of the theory are given, they are discussed in more detail, again with examples of studies given as validation. First, the value function is discussed. The value function is defined on deviations from a reference point (not on final states); it is generally concave for gains, convex for losses, and steeper for losses than for gains. Next, the weighting function is discussed. One notable property is that very low probabilities are usually overweighted. The property of subcertainty states that the decision weights of complementary events generally sum to less than 1, i.e., π(p) + π(1 − p) < 1, even though the probabilities themselves sum to 1. The weighting function is also not well behaved near the endpoints 0 and 1, where π departs sharply from linearity. The authors point out that the theory is applied to the simple situation where a person is presented with two alternatives, and that different processes may be used to make decisions in more complicated situations.
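To make these properties concrete, illustrative functional forms can be plugged into the sketch above. The power-law value function and the weighting function below, with their parameter values, come from Tversky and Kahneman’s later (1992) cumulative prospect theory work, not from this paper; they are used here only to demonstrate the stated properties.

```python
def v(x, alpha=0.88, lam=2.25):
    """Concave for gains, convex for losses, steeper for losses (by lam)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def pi(p, gamma=0.61):
    """Overweights small probabilities and exhibits subcertainty."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(v(100), v(-100))      # ~57.5 vs ~-129.5: losses loom larger than gains
print(pi(0.01))             # ~0.055: a 1% chance is overweighted
print(pi(0.4) + pi(0.6))    # ~0.84: subcertainty, pi(p) + pi(1-p) < 1
```

With these forms, prospect_value(4000, 0.80, 0, 0.20, v, pi) comes out near 898, below v(3000) ≈ 1148, which matches the majority preference for a certain $3,000 over an 80% chance of $4,000 reported in the paper.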
In the discussion section, the authors first explain how prospect theory accounts for observed attitudes toward risk. They then point out that there are situations in which a person’s frame of reference shifts and decisions are made based on expectations of future states rather than the current state. In addition, a person who has not accepted their current state, and still uses a recent state as the reference point, will make decisions accordingly. The authors finally discuss the directions in which they see future research on this theory continuing.

Analysis of Prospect Theory paper according to Meister’s characteristics:
Primary Topic. The topic is Prospect Theory.
Specific Theme. The theme is that expected utility theory is not accurate in explaining how decisions are made under risk, so a new theory is developed called Prospect Theory.
Venue. Various venues were used for research, mostly universities.
Subjects. Various subjects were used, including undergraduate students and faculty.
Methodology. Various studies performed contributed to this work, but many of the studies involved asking subjects to solve problems.
Theory Involvement. The paper begins by refuting expected utility theory and then develops and tests Prospect Theory.
Research Type. This article is a theoretical study.
Unit of Analysis. The unit of analysis in this paper is the individual person making decisions.
Hypothesis. Prospect Theory is a better descriptive model of decision under risk than expected utility theory.
Design Application. Prospect Theory would be important to consider in the design of any system where human decision making will occur under risk.

Theories of Decision-Making in Economics and Behavioral Science
Herbert A. Simon
Simon divides this article into seven main sections: a discussion of how much psychology economics needs, developments in the theory of utility and consumer choice, the motivation of managers, the conflict of goals and the phenomena of bargaining, work on uncertainty and the formation of expectations, recent developments in the theory of human problem-solving with their implications for economic decision-making, and conclusions.
I. How Much Psychology Does Economics Need?
Simon uses a metaphor of molasses in a container to explain how the human has been thought of in economics as a simple object with clear characteristics and goals. He then discusses how economics has been changing to incorporate the more complicated system of a human and its environment, which is actually a more realistic picture. He lists specific problems with classical theory.
II. The Utility Function
Simon brings up the concepts of utility theory from von Neumann and Morgenstern, and then considers the validity of the assumption of utility theory that decisions are made based on objectively determined probabilities. Understanding decision-making then becomes increasingly complicated as the situations studied become more realistic. Tools such as linear and dynamic programming help determine optimal decisions based on expected values of outcomes. The type of experiment most often used to study decision making, a situation where a person must choose between two alternatives, is defined as a binary choice experiment.
III. The Goals of Firms
Simon presents attacks on the hypothesis that the entrepreneur strives to maximize profit. Firms’ goals are not to maximize profit, but to attain a certain level or rate of profit, hold a certain share of the market, or reach a certain level of sales; Simon calls this satisficing. There is some empirical evidence to support this theory.
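The contrast between satisficing and maximizing fits in a few lines of code. This is a minimal sketch of the idea, not Simon’s model; the profit distribution and the aspiration level are assumptions made for the example.

```python
import random

def satisfice(alternatives, aspiration):
    """Examine alternatives sequentially and stop at the first one that
    meets the aspiration level, rather than scanning for the maximum."""
    for alt in alternatives:
        if alt >= aspiration:
            return alt
    return None  # search failed; per Simon, the aspiration level would adjust downward

random.seed(0)
profits = [random.gauss(100, 30) for _ in range(50)]
print(satisfice(profits, aspiration=120))  # the first "good enough" option
print(max(profits))                        # the maximizer's answer, for contrast
```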
IV. Conflict of Interest
Since perfect competition is a very poor assumption in reality, difficulties in imperfect competition need to be considered. Different concepts, such as game theory, power and bargaining, and games against nature, attempt to explain how decisions are made with imperfect competition assumed.
V. The Formation of Expectations
Expectations about future states have an impact on economic decisions made in the present, and empirical research that supports this is presented. It must also be assumed that people will consider the cost of information when compiling it to make a decision.
VI. Human Cognition and Economics
The limitations of the decision-maker and the complexity of the environment in which the decision-maker is operating need to be considered in predicting a decision.

Analysis of the Simon paper according to Meister’s characteristics:
Primary Topic. The topic is decision-making in economics.
Specific Theme. The theme is that considering psychological concepts in studying economic decision-making may help to predict behavior more realistically.
Venue. N/A because this is a theoretical paper.
Subjects. N/A because this is a theoretical paper.
Methodology. N/A because this is a theoretical paper.
Theory Involvement. The theory discussed in this paper comes from a compilation of studies and concepts from both psychology and economics.
Research Type. This article is a theoretical paper.
Unit of Analysis. The unit of analysis in this paper spans from the individual person making decisions to a firm or organization making decisions.
Hypothesis. Psychological concepts should be used to analyze economic decision-making in the complex world that actually exists, rather than in an ideal world.
Design Application. The theory in this paper applies to any system in which economic decision making occurs.

Comments (5)

Oyku Asikoglu:

In the Tversky and Kahneman paper, How are humans rational as they exhibit biases in the form of representativeness, availability, and adjustment and anchoring? Contrast this notion of rationality with the assumptions underlying the Von Neumann and Morgenstern Axioms.
Humans exhibit biases in representativeness, which is employed when people are asked to judge the probability that an object or event belongs to a class. The representativeness of an object is assessed by the degree to which it is representative of, or similar to, the stereotype of that class. Humans show bias by neglecting the prior probabilities involved. The example in the paper illustrates that, given a set of characteristics describing a specific person, people tend to think of him as a librarian rather than a farmer, even though statistically he is more likely to be a farmer.
The other heuristic through which humans show biases is availability, which comes into play when people are asked about the frequency of an event or the plausibility of a particular development, such as assessing the rate of heart attacks among middle-aged people. This heuristic is affected by biases due to the retrievability of instances, which may be related to the familiarity or salience of the subject. Another bias originates from the effectiveness of the search set, which is particular to each problem. Biases of imaginability are a further factor affecting the availability heuristic, arising when people cannot distinguish between remembered and imagined instances.
The last heuristic, adjustment and anchoring, concerns numerical predictions in which a relevant initial value is available. This heuristic includes insufficient adjustment away from the anchor, biases in the evaluation of conjunctive and disjunctive events, and anchoring in the assessment of subjective probability distributions.
In this research, a psychophysical distinction is made between normative (objective) and descriptive (subjective) theory. Since individuals’ subjective perceptions of utilities and probabilities have been shown to differ from their objective values, the main focus of the psychological line of work is to conceptually define and measure subjective utilities and probabilities [1].
On the other hand, Von Neumann and Morgenstern try to model human decisions mathematically, constructing a set of rules to describe the preferences of human beings. These rules are integrated into axioms stating the following assumptions about human decisions:
“a) The preference ordering assumption. This assumption holds that in a set of options there always is a preference ordering.
b) The choice according to preference assumption. This assumption states that if an individual prefers one option over another, she chooses that option.
c) The transitivity assumption. This assumption holds that the preference ordering of the individual is consistent. That is, that the preferences do not contradict one another.
d) The independence of irrelevant alternatives assumption. This means that the individual’s preference is independent of other considerations, including other options.
e) The invariance assumption. For the preference relation it does not matter how the options are presented as long as the different presentations are logically equivalent.” [1]
After the work of Von Neumann and Morgenstern, later researchers such as Savage, Tversky, and Kahneman introduced the notion of subjective expected utility into the literature, and it has been shown that human decision making, with its biases, is not completely in accordance with the VNM axioms.

[1] Heukelom, F. (2007). “Kahneman and Tversky and the Origin of Behavioral Economics.” Universiteit van Amsterdam, Tinbergen Institute Discussion Paper.

Monifa Vaughn-Cooke:

Does Prospect Theory violate any of the Von Neumann and Morgenstern Axioms? Why or why not?
Prospect theory allows one to describe how people make choices in situations where they have to decide between alternatives that involve risk. The theory describes how individuals evaluate potential losses and gains, which comes into conflict with several VNM axioms.
One very important result of Kahneman and Tversky’s work is demonstrating that people’s attitudes toward risks concerning gains may be quite different from their attitudes toward risks concerning losses. For example, when given a choice between an 80% chance of getting $4,000 and getting $3,000 with certainty, most people choose the certain $3,000, even though the gamble has the higher expected value. This is a perfectly reasonable attitude described as risk aversion. But Kahneman and Tversky found that the same people, when confronted with a 20% chance of getting $4,000 versus a 25% chance of getting $3,000, often choose the riskier $4,000 gamble. This is a version of the Allais paradox, which demonstrates an inconsistency of actual observed choices with the predictions of expected utility theory. The second pair of gambles is simply the first pair with both probabilities reduced by the same factor, so the observed reversal violates the independence axiom: the payoffs are identical, and scaling both probabilities equally should not change the preference order.
The substitution axiom of utility theory says that if B is preferred to A, then any probability mixture (B, p) must be preferred to the mixture (A, p). However, Kahneman and Tversky showed this axiom is violated: respondents preferred the certain option over the probable one in the first problem, but reversed their preference in the second, after both probabilities dropped by the same factor. This is known as the certainty effect, which states that people place a higher weight on outcomes that are certain relative to outcomes that are merely probable; it is in direct conflict with the substitution axiom of utility theory. Kahneman and Tversky’s discussion of the reflection effect, which states that the reflection of prospects around 0 reverses the preference order, also supports this violation: contrary to utility theory, the reflection effect shows that certainty increases the aversiveness of losses and the attractiveness of gains.
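A quick numeric check, my own illustration, makes the violation explicit; the two choice problems described above correspond to Problems 3 and 4 in the paper.

```python
ev = lambda amount, p: amount * p  # expected value of a one-outcome gamble

print("Problem 3: A = 4000 at .80 -> EV", ev(4000, 0.80),  # 3200
      "| B = 3000 for sure -> EV", ev(3000, 1.00))         # 3000
print("Problem 4: C = 4000 at .20 -> EV", ev(4000, 0.20),  # 800
      "| D = 3000 at .25 -> EV", ev(3000, 0.25))           # 750

# Majorities chose B in Problem 3 (despite its lower EV) but C in Problem 4.
# Since C and D are just A and B with both probabilities multiplied by 1/4,
# the substitution axiom demands the same preference order in both problems;
# the observed reversal violates it.
```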

Ben Donaldson:

In their paper on Prospect Theory, what do K&T mean by, "the decision weights may be affected by other considerations such as ambiguity or vagueness. Indeed, the work of Ellsberg and Fellner implies that vagueness reduces decision weights. Consequently, subcertainty should be more pronounced for vague than for clear probabilities"? (p. 190)

The Ellsberg and Fellner papers argue that ambiguous or vague problems mean the subject will not have concrete certainty about what the actual probabilities are (whether because of a lack of prior knowledge, a vague question itself, or both). Because of this, their probability ranking is more relative than absolute:

Example 1: Instead of Option A having 90% probability and Option B 10%...
Option A should have a higher probability than Option B, but by how much I don’t know.

In other words, even though a decision maker might be able to figure out which options occur more often than others, vague problems make it harder to understand the exact probabilities.
So why would a decision-maker with only relative probabilities have lower decision weights than someone who found absolute probabilities? As shown in a plot on page 184 of our textbook, there is a tendency for π(p) to undervalue the actual probability, except for very small probabilities, which are overvalued. If the decision-maker has three choices to make and only knows that some are more likely than others, he presumably doesn’t have a large enough sample size to figure out the exact probabilities…

Example 2: Say Option C showed up 1 time in the 10 examples we had, while Option B showed up 3 times and Option A 6 times. Why would we assume that Option C’s probability is much lower than 10%?
Example 3: If Option C shows up 0 times, would we assume that it never shows up or instead put some small probability just to include it?

Because of the vagueness, the decision-maker’s biases help construct the probabilities before he even applies them to choose among the options. So in Examples 2 and 3, he is likely to push the estimated probabilities up, in line with the bias curve on page 184, which suggests he is reluctant to riskily assume that an option almost never occurs.
So vague problems mean the decision-maker cautiously chooses safe probabilities, and those probabilities stay above the very lowest levels more often. Since the probabilities are NOT at these lowest levels, the plot on page 184 shows that they will be farther from the bottom region where decision weights are overestimated.
Caution pushes the probabilities up into the range where decision weights are underestimated, and so the sum of the safely estimated π(p)’s is less than the sum of the real π(p)’s.

Beant Dhillon:

Prospect Theory vs Satisficing:

The gist of prospect theory is expressed in the hypothetical value function in Fig. 3, which depicts that the value of a prospect is judged not by the final state or asset position but rather by the change in the asset position with respect to a reference point; e.g., an object is judged hot or cold to the touch depending on the temperature to which one has adapted rather than the absolute value of the temperature. Further, the decision maker makes risk-averse choices in the case of gains and risk-seeking choices in the case of losses, and the higher the losses, the more risk-seeking the behavior becomes; e.g., investors may be very risk-averse for small losses but take on investments with a small chance of very large losses. The decision weights also affect the degree of risk aversion and risk seeking.
So the decision rests to a large extent on the choice of the reference point, which determines whether the outcomes are perceived as gains or losses by the decision maker. The reference point could be the current asset position or the aspiration level, and each reference point would result in different preferences. Also, according to this, people would rather eliminate risk than reduce it.
According to the satisficing theory, “the motive to act stems from drives, and action terminates when the drive is satisfied. Also, the conditions for satisfying are not fixed but may be specified by an aspiration level that adjusts itself up or down based on experience.”
The satisficing principle differs from prospect theory first and foremost in that it also takes into account the method of reaching the decision: if performance falls short of aspiration, a search for new alternatives is induced, and at the same time the aspiration level begins to adjust itself downward until goals reach attainable levels; and if the adaptation process is too slow, rational behavior transforms into apathy or aggression. For example, a firm doesn’t necessarily operate to maximize its profit; rather, it may operate to maintain a particular market share, maintain last year’s profit margins, etc.
The decision maker will have a flat utility function for returns beyond the satisfactory level, which means that once the drive is satisfied, the decision maker is indifferent even to so-called “better” choices, while according to prospect theory the value function resembles an S-shape.
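The shape difference can be sketched directly. Below is my own illustration with assumed functional forms: a power-law value function of the kind used in Tversky and Kahneman’s later work standing in for the S-shape, and a simple capped utility for the satisficer.

```python
def prospect_v(x, alpha=0.88, lam=2.25):
    """S-shaped: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def satisficer_u(x, aspiration=100):
    """Flat beyond the aspiration level: all 'better' outcomes are equal."""
    return min(x, aspiration)

for outcome in (50, 100, 200, 400):
    print(outcome, round(prospect_v(outcome), 1), satisficer_u(outcome))
# prospect_v keeps rising past 100, though ever more slowly;
# satisficer_u stops caring once the aspiration level is reached.
```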
Also, prospect theory assumes that the decision maker makes an “optimized” choice among the various available options, while according to satisficing it is not always an optimized choice, since in real-life situations a person doesn’t always have all the information needed for a decision, the outcomes are uncertain, the cognitive abilities of a human are limited, and, most importantly, there usually is a time constraint. Thus, in simple words, “people are motivated to choose a thing that’s good enough, even if not ideal.”
The two theories are similar in that the reference point discussed in prospect theory may be the aspiration level of the decision maker, who may judge the respective outcomes as gains or losses in that light. Also, both theories give importance to the weight placed by the decision maker: the decision maker eliminates alternatives that don’t meet the minimum requirements according to satisficing, while in prospect theory the reference point or the weighting function may be such that a particular item or decision is perceived as a loss and thus not chosen.

Gary Ezekian:

Simon’s paper focuses on the shortcomings he believes exist in broadly applying the VN-M axioms to realistic economic practice. At no time are the VN-M rules determined to be false, and there is an implication that they are still a good guide, though perhaps not one to put too much faith in. The main criticism is that the traditional rules are such a simplification of reality that they are almost useless unless other factors are considered.

The article specifically mentions the VN-M axioms when explaining traditional utility functions. It notes that someone following those rules would act to maximize the expected value of the outcome in all situations; the axioms provide cardinal utility functions. Problems arise, however, when the simple, small-value money lotteries are set aside for more realistic scenarios. Studies are cited to show that even switching to something like a choice between records produces much less consistency. In all cases, there is also the assumption of fixed and known alternatives, whereas this is rarely the case in realistic situations. Additionally, the gambler’s fallacy is discussed as a potential issue where the DM would not correctly identify probabilities.

It is proposed that the real world is so complex that utility maximization has little bearing on complex decisions; most realistic situations are too complex for the decision maker to even know what the maximum-utility decision is. The case of binary choice experiments is brought up to discuss a phenomenon called event matching, where people practice non-optimal decision making by matching their prediction rates to the observed ratio of events.
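Simon’s binary choice experiment and the event-matching result can be reproduced in a short simulation. This is a sketch of the setup as I understand it; the 70/30 event ratio and trial count are assumptions.

```python
import random

random.seed(1)
P_EVENT = 0.7      # the more frequent event occurs 70% of the time
TRIALS = 100_000

events = [random.random() < P_EVENT for _ in range(TRIALS)]

# Maximizing: always predict the majority event -> ~70% correct.
maximize_acc = sum(events) / TRIALS

# Event matching: predict the event 70% of the time -> .7*.7 + .3*.3 = 58%.
match_acc = sum((random.random() < P_EVENT) == e for e in events) / TRIALS

print(f"always-majority accuracy: {maximize_acc:.3f}")
print(f"event-matching accuracy:  {match_acc:.3f}")
```

Matching the observed ratio feels natural but is strictly worse than always predicting the majority event, which is the non-optimal behavior Simon highlights.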

The consistency assumption of VN-M is also called into question with the idea of probabilistic preferences, where it is not considered unreasonable to waver when picking one of two choices if their utilities are close in value. VN-M is based on a strictly deterministic actor, which is rarely accurate in the human context.

While the VN-M axioms focus on the behavior of the individual, Simon also brings up questions of decision making for a firm. In particular, this is used to address the difference between satiation and maximization. VN-M works on the assumption that the human DM is trying to maximize utility, while a plausible alternative explanation is that DMs instead set certain goals (aspiration levels) and try to meet as many of them as possible.

As the environment becomes more interactive, Simon points out more potential fallacies of traditional theories of economic rationality. Conflicts of interest are bound to arise as DMs try to predict and act on the interactions of others in the environment. Expectations about future conditions are bound to be important in real decision-making scenarios. The VN-M rules rely solely on the mean, ignoring other qualities of the data distribution, such as variance, that have a real effect on the situation. Adaptive learning is likewise another confounding factor that is not done justice by the traditional rules.

In the end, Simon uses his paper to find faults in the VN-M oversimplifications as applied to the real world. He doesn’t dispute their validity, but rather their practicality and value in determining human behavior. Simon faults the traditional theory for ignoring the cognitive processes which he believes must be analyzed to generate a full picture of rational decision making.
