Do empirical studies of human reasoning show that humans are fundamentally irrational?


Prior to the 1970s, mainstream psychology of decision-making assumed people to be fairly good and reasonable statisticians (Edwards, 1966; Lopes, 1991, p. 66). Since then, however, this assumption has been gradually undermined. This shift was sparked by a series of empirical findings published by psychologists Daniel Kahneman and Amos Tversky as part of their heuristics and biases program (summarised in Tversky and Kahneman, 1974; see also Kahneman, Slovic and Tversky, 1982). They suggested that, in assessing probabilities, people rely on a limited number of rules of thumb, or heuristics, which reduce complex reasoning tasks to simpler, more intuitive judgmental operations. Drawing on this idea, several researchers from various disciplines have argued, in a pessimistic vein, that humans are fundamentally irrational.


Evaluating some of the heuristics-and-biases tradition's empirical findings will indeed reveal seemingly irrational patterns of reasoning (I). Nevertheless, I will contend that these results should be approached with scepticism, as they are ultimately embedded in an unwarranted and problematic picture of human cognition. Counterarguments and evidence advanced by evolutionary psychologists will show that many of the alleged cognitive illusions, or biases, proposed by Kahneman, Tversky and several of their colleagues can be avoided by adopting a more instrumental approach to rationality (II). Against these two conflicting extremes, I will finally propose and defend a more moderate 'middle way' offered by dual-process theories (III).

(I)

In their widely cited articles and books, Tversky and Kahneman set out to describe and discuss how people make judgments under uncertainty. In doing so, they designed a series of experiments devised to reveal people's underlying reasoning processes (Tversky and Kahneman, 1974, p. 1124; McKenzie, 2005, p. 323). To better understand their work, it is useful to engage directly with some of their most notable experiments. In the famous Linda problem, Tversky and Kahneman presented a group of statistically naïve subjects with this simple personality sketch:

'Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations' (Tversky and Kahneman, 1982, pp. 84–98).

Participants were then asked to rank the following statements by their probability:

a) Linda is a bank teller (T)

b) Linda is a bank teller and is active in the feminist movement (T&F) (Tversky and Kahneman, 1982, pp. 84–98).

The overwhelming majority of subjects (89%) ranked the compound target (T&F) as more probable than the simple target (T). This, however, clearly violates the conjunction rule – i.e. the requirement that a conjunction cannot be more probable than either of its conjuncts. All feminist bank tellers are, by definition, bank tellers; a person cannot be more likely to be a feminist bank teller than to be a bank teller (Tversky and Kahneman, 1982, pp. 84–98; McKenzie, 2005, p. 326). Drawing upon these results, Tversky and Kahneman posited that, when asked to estimate the probability that A belongs to B, or that B will generate A, people rely on the representativeness heuristic; that is, on the degree to which A is representative of, or resembles, B (Tversky and Kahneman, 1974, p. 1124; 1982, pp. 84–98). Accordingly, because the description of Linda is highly consistent with the stereotype of a feminist but not of a bank teller, subjects replaced a correct probability judgment with this more readily available heuristic. However, because similarity is not a factor that affects probability, judgments based on representativeness are frequently biased (Tversky and Kahneman, 1982, pp. 90, 92–93; Newell, 2013, pp. 606–607).
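
To make the conjunction rule concrete, the following minimal sketch (with hypothetical numbers – Tversky and Kahneman report rankings, not these probabilities) shows that a conjunction can never exceed its less probable conjunct, whatever values are chosen:

```python
# Conjunction rule: P(A and B) <= min(P(A), P(B)).
# The numbers below are hypothetical, for illustration only.
p_teller = 0.05                  # P(T): Linda is a bank teller
p_feminist_given_teller = 0.95   # P(F | T): feminist, given she is a teller

# Product rule: P(T and F) = P(T) * P(F | T).
p_teller_and_feminist = p_teller * p_feminist_given_teller

print(f"P(T)     = {p_teller:.4f}")               # 0.0500
print(f"P(T & F) = {p_teller_and_feminist:.4f}")  # 0.0475
# Since P(F | T) <= 1, the inequality holds for ANY choice of probabilities.
assert p_teller_and_feminist <= p_teller
```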

Impressively, this pattern of reasoning – labelled the conjunction fallacy – has been found repeatedly, not only in later, similar experiments, but also among groups with backgrounds in statistics and probability theory, at both intermediate and sophisticated levels (Tversky and Kahneman, 1982, pp. 92–93). Moreover, representativeness-based biases have also been reported in problems concerning the assessment of prior probabilities. In the well-known lawyers–engineers problem, two groups of subjects were presented with personality sketches of several individuals allegedly sampled at random from a group of 100 lawyers and engineers (Tversky and Kahneman, 1974, pp. 1124–1125). In one condition participants were told that the group comprised 70 lawyers and 30 engineers; in the other condition the composition was reversed. Both groups were then asked to assess the probability that a given personality sketch belonged to an engineer rather than a lawyer.

According to Bayesian reasoning, the provided base rates of lawyers and engineers should have influenced the reported probabilities (Tversky and Kahneman, 1974, pp. 1124–1125; Samuels and Stich, 2004, pp. 4–5). However, Tversky and Kahneman observed that the subjects in the two conditions produced essentially the same probability judgments. This indicates that participants systematically ignored base-rate information, relying instead on the degree to which a given description was representative of either lawyers or engineers. Interestingly, in the absence of descriptive material, prior probabilities were correctly employed. Nevertheless, they were again ignored every time a personality sketch was introduced – even when the sketch was completely uninformative about whether the person was a lawyer or an engineer (Tversky and Kahneman, 1974, pp. 1124–1125; Samuels and Stich, 2004, pp. 4–5).
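
A short sketch may help to show what Bayesian reasoning would demand here. The 70/30 and 30/70 base rates come from the experiment's design; the likelihood ratio below is an invented figure, used purely for illustration:

```python
# How base rates should shift a Bayesian judge's posterior.
# Only the 70/30 vs 30/70 base rates come from Tversky and Kahneman's
# design; the likelihood ratio is a hypothetical value.

def posterior_engineer(prior_eng, likelihood_ratio):
    """P(engineer | sketch) via Bayes' theorem in odds form,
    where likelihood_ratio = P(sketch | engineer) / P(sketch | lawyer)."""
    prior_odds = prior_eng / (1 - prior_eng)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 4.0  # suppose the sketch is 4x more typical of engineers (assumed)
print(posterior_engineer(0.30, lr))  # ~0.63 in the 30-engineer condition
print(posterior_engineer(0.70, lr))  # ~0.90 in the 70-engineer condition
# Same sketch, different base rates => different posteriors.
# Subjects, however, reported roughly the same judgment in both conditions.
```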

Involving fairly obvious errors, base-rate neglect and the conjunction fallacy are, perhaps, the most striking phenomena discovered in decision-making research. However, they are far from isolated. Proponents of the heuristics-and-biases approach have reported a large number of empirical findings concerning common fallacies in probabilistic judgment.[1] Notably, for example, Peter Wason's selection task seems to indicate that people are biased towards confirmation. In his experiment, Wason presented subjects with four cards bearing letters on one side (e.g. 'A' and 'K') and numbers on the other side (e.g. '2' and '7'). Two cards were displayed with the letter side up, two with the number side up. Participants were then asked to select just those cards that, if turned over, would show whether or not the following statement is true: 'if there is a consonant on one side of a card, then there is an odd number on the other side'. Subjects mostly selected the K card alone, or the K and 7 cards, rarely choosing the 2 card. Yet, if the 2 card had a consonant on its other side, the rule would be false, whereas the 7 card can never falsify it. Drawing on these results, Wason concluded that people are biased towards confirmation and fail to see the importance of the falsifying card (Wason, 1968, as quoted in McKenzie, 2005, p. 328).
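
For readers who want to verify the logic, the brief sketch below enumerates which of the four visible cards could in principle falsify the rule; it is an illustrative reconstruction, not part of Wason's procedure:

```python
# Which visible cards can falsify "if consonant on one side, then odd
# on the other"? Each card shows one face; the hidden face is of the
# other kind (letter cards hide a number, number cards hide a letter).

VOWELS = set("AEIOU")

def can_falsify(visible):
    """A card can falsify the rule iff its hidden face might yield
    a consonant paired with an even number."""
    if visible.isalpha():
        # Hidden side is a number: falsified only if this letter is a
        # consonant and the hidden number turns out to be even.
        return visible.upper() not in VOWELS
    # Hidden side is a letter: falsified only if this number is even
    # and the hidden letter turns out to be a consonant.
    return int(visible) % 2 == 0

for card in ["A", "K", "2", "7"]:
    print(card, "->", "must be checked" if can_falsify(card) else "irrelevant")
# K and 2 must be checked; A and 7 can never falsify the rule.
```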

Against these unsettling results, one might argue that many of the reasoning problems explored in the heuristics-and-biases literature have little practical importance. Yet, worryingly, this does not appear to be the case (Lopes, 1991, pp. 78–81; Gigerenzer, 1991, p. 85). For instance, in a renowned study, Casscells, Schoenberger and Grayboys (1978) presented the following reasoning task to a group of staff members and students at Harvard Medical School:

'If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs? __%' (Casscells, Schoenberger and Grayboys, 1978, pp. 999–1000).

The authors found that, under the most plausible interpretation of the problem, the majority of their subjects neglected probabilistic reasoning. Only eighteen percent of the participants gave the correct Bayesian answer (2%), while a striking forty-five percent ignored the base-rate information, assuming the correct answer to be 95% (Casscells, Schoenberger and Grayboys, 1978, pp. 999–1000). In this particular case, the base-rate neglect cannot be explained in terms of the representativeness heuristic. Accordingly, it seems plausible to argue, as Kahneman and Tversky did, that judgmental biases are widespread beyond the laboratory's walls, making disquieting inroads into applied disciplines with real-world consequences (Tversky and Kahneman, 1982, p. 154; Casscells, Schoenberger and Grayboys, 1978, pp. 999–1000; Cosmides and Tooby, 1996, pp. 21–22; Samuels, Stich and Bishop, 2002, p. 240).
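
The correct Bayesian answer can be reproduced in a few lines. Following the usual reading of the problem, the sketch assumes a perfectly sensitive test, since the original wording leaves the true positive rate unstated:

```python
# Bayes' theorem applied to the Harvard medical diagnosis problem.
# Assumption: the test is perfectly sensitive, P(positive | disease) = 1,
# as the problem statement leaves sensitivity implicit.

prevalence = 1 / 1000   # P(disease)
false_positive = 0.05   # P(positive | no disease)
sensitivity = 1.0       # assumed

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # ~0.020, i.e. ~2%
# The popular answer, 95%, simply inverts the false positive rate
# and ignores the 1-in-1000 base rate.
```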

On their face, these results show that, in making intuitive judgements involving probabilities and uncertainties, people systematically deviate from the appropriate statistical, mathematical and logical rules. Instead, they employ normatively problematic heuristics which, more often than not, lead to biases (Tversky and Kahneman, 1974, p. 1124). Thus, some researchers have painted a rather bleak picture of human rationality, claiming that people repeatedly commit errors in probabilistic judgement because they have not evolved 'an intellect capable of dealing conceptually with uncertainty' (Slovic, Fischhoff and Lichtenstein, 1976, p. 174; Nisbett and Borgida, 1975, p. 935). Kahneman and Tversky themselves seem to endorse this pessimistic interpretation, arguing that 'people do not appear to follow the calculus of chance or the statistical theory of prediction' not just in some or many cases, but in all cases – including those in which they get the right answer (Kahneman and Tversky, 1973, p. 48; Samuels, Stich and Bishop, 2002, p. 241).

This pessimistic view has some weight to it. The empirical findings discussed above do indeed seem to demonstrate that the untutored mind only makes use of 'quick-and-dirty' heuristics. Nonetheless, I find such a conclusion contentious. One problem with the pessimistic interpretation is the yardstick against which proponents of the heuristics-and-biases tradition assess people's cognitive mechanisms. Adopting the so-called 'standard picture of rationality', they maintain that to be rational is to reason in accordance with the rules of classical logic and probability theory (Samuels, Stich and Bishop, 2002, p. 247). However, this assumption is problematic. Firstly, the concept of 'probability' itself is hotly debated. For instance, some argue that the rules of probability theory apply to single events, while others contend that they apply only to classes of events. If the latter camp is correct, then this would invalidate many of the heuristics-and-biases experiments involving unique events (Cosmides and Tooby, 1996, p. 3; Chase, Hertwig and Gigerenzer, 1998, p. 207). Secondly, this classical interpretation of human rationality is content-blind. In other words, it assumes the laws of logic and probability to be normative a priori, independently of the problem context and of subjects' judgements about it (Gigerenzer, 2006, pp. 106, 121–122; Chase, Hertwig and Gigerenzer, 1998, p. 207). Such criticisms indicate that a re-evaluation of the criteria used to assess rationality is needed. Thus, in the following section, I will consider evolutionary psychologists' call for a more instrumental view of human cognition.

(II)

As mentioned, following the classical picture of rationality, proponents of the heuristics-and-biases approach define errors in reasoning as discrepancies between people's judgments and probabilistic norms. However, as evolutionary psychologists contend, these laws of logic are neither necessary nor sufficient for making rational inferences in a world of uncertainties: normative theories and their rules are relevant to people only in some contexts (Gigerenzer, 2006, p. 118; 1991, p. 86; Over, 2007, p. 5). Echoing the tradition of Simon's bounded rationality (1956), these authors place the emphasis on the relationship between mind and environment, rejecting the 'cognition in a vacuum' of the heuristics-and-biases approach. In particular, given that the human mind has been shaped by evolution, Gigerenzer (1994) and Cosmides and Tooby (1996) suggest that researchers should present problems in a more 'ecological' way; that is, in a way that resembles humans' natural evolutionary environment. In such an environment, they insist, probabilistic information is framed in terms of natural frequencies – e.g., 'X will happen 3 out of 20 times' – rather than percentages (Cosmides and Tooby, 1996; Gigerenzer, 1994; 2006, p. 119).

This speculation – the frequentist hypothesis – has prompted several evolutionary psychologists to re-design some of Kahneman and Tversky's most famous reasoning tasks in terms of natural frequencies. For example, Fiedler (1988) proposed a frequentist version of the Linda problem, phrased as follows:

There are 100 people who fit [Linda's description]. How many of them are:

a) bank tellers

b) bank tellers and active in the feminist movement

In this version of the experiment, as Fiedler predicted, the conjunction fallacy was significantly reduced: only 22% of participants judged (b) more probable than (a) (Fiedler, 1988, as quoted in Gigerenzer, 1991, pp. 91–92). Cosmides and Tooby (1996) have presented even more impressive results by running a frequentist version of Casscells, Schoenberger and Grayboys's medical diagnosis problem. In contrast to the original findings, their version of the problem elicited the correct Bayesian answer from 76% of subjects; base-rate neglect simply seemed to disappear (Cosmides and Tooby, 1996, pp. 21–22). These findings do not invalidate those produced by the heuristics-and-biases approach; they do, however, show that people's inductive mechanisms embody aspects of the calculus of probability – mechanisms designed to take frequency information as input and produce frequencies as output. Just as the frequentist school does, the untutored mind distinguishes between frequencies and single events (Cosmides and Tooby, 1996, p. 5; Gigerenzer, 1991, p. 104). Moreover, and more importantly, these results demonstrate that good judgment under uncertainty is more than mechanically applying a formula of classical logic or probability theory: in making decisions, intuitive statisticians must first check the problem's context and content (Gigerenzer, 2006, p. 119; Newell, 2013, pp. 610–613).
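
To see why the frequency format helps, consider the same diagnosis problem recast as counts; the arithmetic reduces to tallying cases rather than normalising probabilities (the sketch again assumes a perfectly sensitive test):

```python
# The medical diagnosis problem in natural frequencies: the computation
# becomes a matter of counting cases, which is arguably why frequency
# formats improve performance.

population = 1000
sick = 1                                  # 1 out of 1000 has the disease
healthy = population - sick               # 999
true_positives = sick                     # assuming every sick person tests positive
false_positives = round(0.05 * healthy)   # ~50 healthy people test positive

answer = true_positives / (true_positives + false_positives)
print(f"{true_positives} out of {true_positives + false_positives} "
      f"positives are truly sick (~{answer:.0%})")  # 1 out of 51, ~2%
```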

This argument introduces and informs Gigerenzer's notion of an adaptive toolbox of fast and frugal heuristics (Todd, Gigerenzer and the ABC Research Group, 2012). Briefly, he compares the mind to an adaptive toolbox containing specific heuristics, selected according to the constraints of the environment and the goals of the decision maker. The emphasis is on heuristics that perform well, rapidly, and on the basis of small amounts of information (Gigerenzer, 2006, pp. 124–126; Goldstein and Gigerenzer, 2002). The following example illustrates the approach.

Which US city has more inhabitants: San Diego or San Antonio?

Goldstein and Gigerenzer (2002) posed this question to groups of students from the University of Chicago and the University of Munich. Sixty-two percent of the Chicago students correctly inferred that San Diego was larger; but, surprisingly, every single Munich student answered correctly (Gigerenzer, 2006, pp. 124–126). Goldstein and Gigerenzer explained this result through the operation of the recognition heuristic: when faced with two objects, only one of which you recognise, infer that the recognised object has the higher value. Most of the Chicago students had heard of both cities and so could not rely on this heuristic; in contrast, the relative ignorance of the Munich students – very few had heard of San Antonio – facilitated their judgment (Gigerenzer, 2006, pp. 124–126).
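
A minimal sketch of the heuristic's logic may be useful; the recognition sets below are invented for illustration and are not data from Goldstein and Gigerenzer's study:

```python
# Recognition heuristic (Goldstein and Gigerenzer, 2002): if exactly one
# of two objects is recognised, infer that it has the larger criterion
# value (here, city population). Recognition sets are hypothetical.

def recognition_heuristic(a, b, recognised):
    if (a in recognised) != (b in recognised):  # exactly one recognised
        return a if a in recognised else b
    return None  # heuristic does not apply; fall back on other knowledge

munich_student = {"San Diego"}                  # has not heard of San Antonio
chicago_student = {"San Diego", "San Antonio"}  # has heard of both

print(recognition_heuristic("San Diego", "San Antonio", munich_student))
# -> 'San Diego' (correct)
print(recognition_heuristic("San Diego", "San Antonio", chicago_student))
# -> None: recognition cannot discriminate, so knowledge must decide
```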

Evolutionary psychologists' results and conclusions urge a reconsideration of the heuristics-and-biases pessimistic view. They demonstrate that, if mental tasks are posed in a more instrumental, ecological frame, then people do not deviate from the appropriate norms of rationality. Most of people's reasoning is carried out by 'elegant machines' shaped to survive an intricate evolutionary environment. Moreover, as Gigerenzer argues, 'fast and frugal' heuristics are useful strategies insofar as they capitalise on evolved abilities and environmental structure to make smart inferences (Gigerenzer, 2006, p. 120). On this view, concerns about human irrationality are ultimately unsubstantiated.

(III)

Unsurprisingly, the debate between heuristics-and-biases proponents and evolutionary psychologists has received enormous attention. Some have attempted to make the dispute disappear, claiming that there is no fundamental disagreement between the two sides (Samuels, Stich and Bishop, 2002). For instance, Samuels, Stich and Bishop (2002) note that the empirical findings of the heuristics-and-biases approach do not provide any compelling reason to think that people base their judgments only on normatively problematic mechanisms of reasoning. At the same time, evolutionary psychologists have offered no empirical proof that all reasoning and decision-making is carried out by normatively unproblematic 'elegant machines' (Samuels, Stich and Bishop, 2002, pp. 245–260). This argument, however, ignores the extent of the differences between the pessimistic and optimistic views of rationality (see Kahneman and Tversky, 1996; Gigerenzer, 1991, pp. 101–103). Nevertheless, it does correctly suggest that the two approaches do not necessarily invalidate each other.

I have suggested that the fast-and-frugal approach has helpfully refocused questions of human rationality on the relationship between mind and environment. Sometimes, however, the information needed for a correct judgment cannot be found in the external environment; in these cases, careful thought about the available information and its cognitive representation can help to avoid erroneous judgments. Moreover, as Evans and Stanovich (2013) note, both the heuristics-and-biases tradition and evolutionary psychologists largely neglect individual differences. After all, some participants in the heuristics-and-biases experiments do give the standard normative response, whereas some subjects in the experiments championed by evolutionary psychologists still commit fairly obvious errors (Evans and Stanovich, 2013, pp. 234–235).

Drawing on this consideration, proponents of dual-process theories have claimed that human reasoning and related higher cognitive processes – such as judgement and decision-making – are underpinned by two kinds of thinking: one intuitive, the other reflective. The former – Type 1 processing – is fast, automatic, holistic, largely unconscious, and makes minimal cognitive demands; the latter – Type 2 processing – is relatively slow, rule-based, deliberately controlled, and requires more cognitive capacity (Evans and Stanovich, 2013, p. 225). Further, Evans and Stanovich speculate that Type 1 processing, as evolutionary psychologists suggest, has been shaped by natural selection to make smart judgements attuned to the environment; whereas Type 2 processing developed more recently, aims at maximising personal utility, and is more strongly influenced by culture and education. Accordingly, individual differences can be explained in terms of subjects' cognitive abilities: participants who are better trained in Type 2 processing will be more likely to find the correct answer, independently of how the problem is framed (Evans and Stanovich, 2013, pp. 236–237; Stanovich, 1999, p. 150).

Although I am sympathetic to evolutionary psychologists' argument for human rationality, the empirical findings marshalled by dual-process theorists provide a tenable and, in some respects, more persuasive 'middle way'. Reviewing the experiments proposed by the heuristics-and-biases tradition and by evolutionary psychologists has shown that people are inclined to make errors, as well as to reason in accordance with optimal models of information processing. Although very influential, both views ultimately oversimplify questions of human rationality, failing to see the complexities of the mind and its mechanisms. In contrast, by accommodating both the pessimistic and the optimistic interpretations, dual-process theories move beyond the blunt question 'are people rational?', acknowledging that the mind is neither a perfectly working machine nor a fundamentally flawed one. On these considerations, researchers should abandon the 'monolithic' views proposed by the heuristics-and-biases and evolutionary approaches, and focus instead on questions concerning the extent of human cognitive abilities and the specific reasoning processes at play under particular conditions.

Reference List

Casscells, W., Schoenberger, A. and Grayboys, T. B., (1978), 'Interpretation by Physicians of Clinical Laboratory Results', New England Journal of Medicine, Vol. 299, No. 18: pp. 999–1001.

Chase, V. M., Hertwig, R. and Gigerenzer, G. (1998), 'Visions of Rationality', Trends in Cognitive Science, Vol. 2, No. 6: pp. 206–214.

Cohen, L. J., (1981), 'Can Human Irrationality Be Experimentally Demonstrated?', Behavioral and Brain Sciences, Vol. 4, pp. 317–370.

Cosmides, L. and Tooby, J., (1996), 'Are Humans Good Intuitive Statisticians After All? Rethinking Some Conclusions from The Literature on Judgment Under Uncertainty', Cognition, Vol. 58, pp. 1–73.

Edwards, W., (1966), Nonconservative Information Processing Systems. Ann Arbor: University of Michigan.

Evans, J. St. B. T. and Stanovich, K. E., (2013), 'Dual-Process Theories of Higher Cognition: Advancing the Debate', Perspectives on Psychological Science, Vol. 8, No. 3: pp. 223–241.

Evans, J. St. B. T., (2008), 'Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition', Annual Review of Psychology, Vol. 59, pp. 255–278.

Fiedler, K., (1988), 'The Dependence of the Conjunction Fallacy on Subtle Linguistic Factors', Psychological Research, Vol. 50, No. 3: pp. 123–129.

Gigerenzer G., Hertwig R. and Pachur, T. (eds), (2011), Heuristics: The Foundations of Adaptive Behavior. Oxford: Oxford University Press.

Gigerenzer, G. and Goldstein, D. G., (1996), 'Reasoning the Fast and Frugal Way: Models of Bounded Rationality', Psychological Review, Vol. 103, No. 4, pp. 650–669.

Gigerenzer, G., (1991), 'How to Make Cognitive Illusions Disappear: Beyond "Heuristics and Biases"', European Review of Social Psychology, Vol. 2, No. 1: pp. 83–115.

Gigerenzer, G., (1994), 'Why the Distinction between Single-event Probabilities and Frequencies is Important for Psychology (and Vice Versa)', in G. Wright and P. Ayton (eds), Subjective Probability. New York: John Wiley & Sons.

Gigerenzer, G., (2006), 'Bounded and Rational', in R. Stainton (ed), Contemporary Debates in Cognitive Science. Oxford: Blackwell Publishing.

Gilovich, T., Griffin, D. and Kahneman, D., (2002), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press.
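
Goldstein, D. G. and Gigerenzer, G., (2002), 'Models of Ecological Rationality: The Recognition Heuristic', Psychological Review, Vol. 109, No. 1: pp. 75–90.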

Kahneman, D. and Tversky, A., (1973), 'On the Psychology of Prediction', Psychological Review, Vol. 80: pp. 237–251.

Kahneman, D. and Tversky, A., (1996), 'On the Reality of Cognitive Illusions', Psychological Review, Vol. 103, No. 3: pp. 582–591.

Kahneman, D., Slovic, P. and Tversky, A. (eds), (1982), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Lopes, L. L. (1991), 'The Rhetoric of Irrationality', Theory & Psychology, Vol. 1, No. 1: pp. 65–82.

McKenzie, C. R. M., (2005), 'Judgment and Decision Making', in K. Lamberts and R. L. Goldstone (eds), Handbook of Cognition. London: SAGE Publications.

Newell, B. R., (2013), 'Judgement Under Uncertainty', in D. Reisberg (ed.), The Oxford Handbook of Cognitive Psychology. Oxford: Oxford University Press.

Nisbett, R. and Borgida, E., (1975), 'Attribution and the Social Psychology of Prediction', Journal of Personality and Social Psychology, Vol. 32, pp. 932–943.

Over, D., (2007), 'Rationality and the Normative/Descriptive Distinction', in D. J. Koehler and N. Harvey (eds), Blackwell Handbook of Judgment and Decision Making. London: John Wiley & Sons.

Samuels, R. and Stich, S., (2004), 'Rationality and Psychology' in A. R. Mele and P. Rawling (eds), The Oxford Handbook of Rationality. Oxford: Oxford University Press.

Samuels, R., Stich, S. and Bishop, M., (2002), 'Ending the Rationality Wars: How to Make Disputes About Human Rationality Disappear', in R. Elio (ed), Common Sense, Reasoning and Rationality. Oxford: Oxford University Press.

Simon, H. A., (1956), 'Rational Choice And The Structure Of The Environment', Psychological Review, Vol. 63, No. 2: pp. 129–138.

Slovic, P., Fischhoff, B. and Lichtenstein, S., (1976), 'Cognitive Processes and Societal Risk Taking', in J. S. Carroll and J. W. Payne (eds), Cognition and Social Behavior. Hillsdale, NJ: Lawrence Erlbaum.

Stanovich, K. E., (1999), Who Is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum.

Todd, P. M., Gigerenzer, G. and the ABC Research Group, (2012), Ecological Rationality: Intelligence in the World. Oxford: Oxford University Press.

Tversky, A. and Kahneman, D., (1982), 'Judgments of and by Representativeness', in D. Kahneman, P. Slovic and A. Tversky (eds), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Tversky, A. and Kahneman, D., (1974), 'Judgment Under Uncertainty: Heuristics and Biases', Science, Vol. 185, No. 4157: pp. 1124–1131.

Wason, P. C., (1968), 'Reasoning About a Rule', Quarterly Journal of Experimental Psychology, Vol. 20, No. 3: pp. 273–281.


[1] Amongst the most frequently cited are overconfidence bias and anchoring and framing effects. For a complete account, see Kahneman, D., Slovic, P. and Tversky, A. (eds), (1982), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

 
