
Education and a Civil Society: Teaching Evidence-Based Decision Making

Chapter 2: Can Reasoning Be Taught?

Authors
Eamonn Callan, Tina Grotzer, Jerome Kagan, Richard E. Nisbett, David N. Perkins, and Lee S. Shulman
Project
Teaching Evidence-Based Decision Making in K-16 Education

Richard E. Nisbett

Can people be taught to use abstract inference rules, such as the rules of logic, to reason about events in everyday life?

For 2,500 years the Western world believed that the answer to this was yes. Plato said “those who have a natural talent for calculation are generally quick at every other kind of knowledge; and even the dull, if they have had an arithmetical training . . . become much quicker than they would otherwise have been . . .” and “We must endeavor to persuade those who are to be the principal men of our state to go and learn arithmetic” (Plato 1875, 785).

The Romans agreed with their Greek predecessors, adding the study of grammar to the curriculum. The medieval scholastics were not ones to doubt the wisdom of the ancients and added to the curriculum the study of logic, especially syllogisms. The humanists of the Renaissance, even more in thrall to the ancients, added the study of Latin and Greek, and the curriculum was set for the next 400 years, culminating in the English public (sic) school system of the nineteenth century. One educator proclaimed the utility of Latin for teaching people how to think:

My claim for Latin, as an Englishman and a . . . teacher is simply that it would be impossible to devise for English boys a better teaching instrument. . . . The acquisition of a language is educationally of no importance; what is important is the process of acquiring it. . . . The one great merit of Latin as a teaching instrument is its tremendous difficulty.

But the contention that the learning of abstract rule systems has any effect on people’s ability to reason about everyday life problems was one of the first ideas to be attacked by the new discipline of psychology at the turn of the twentieth century. William James ridiculed the idea that the mind had muscles that could be exercised by arithmetic or Latin. The learning theorists of the 1920s and 1930s provided a theoretical rationale for rejecting the idea of highly general rules: behavior and thought consisted of responses to concrete stimuli, and what was learned was a limited stimulus-response link. “[T]he amount of general influence from special training [is] much less than common opinion supposes” (Thorndike 1906, 246). Even the early cognitive scientists of the 1960s and 1970s rejected the view that reasoning was much influenced by general rules. Allen Newell declared that “the modern position is that learned problem-solving skills are, in general, idiosyncratic to the task” (Newell 1980, 178).

An exception to the anti-inferential-rules position of many twentieth-century psychologists was Jean Piaget, who held that people do have abstract inferential rules, including rules corresponding to those of propositional logic, as well as particular cognitive schemas, including schemas for proportionality, probability, and the mechanical equilibrium principle of action-reaction (e.g., Inhelder and Piaget 1958; Piaget and Inhelder 1951/1975). However, Piaget was as insistent as other twentieth-century psychologists that such rules cannot be taught but can only be induced from experience of living in the world.

But the antirule, anti-instruction position of much of twentieth-century psychology is mistaken. My colleagues and I have shown that people do have inferential rules corresponding to a number of inferential rule systems—including probability and statistics, methodological principles relied on by social scientists, the rules of cost-benefit decision theory, and what we call “pragmatic reasoning schemas” (Larrick et al. 1990). Moreover, these rules can be readily taught, some to a significant degree in classroom or laboratory settings lasting an hour or less.

Consider the following problem:

Catherine is a manufacturer’s representative. She likes her job, which takes her to cities all over the country. Something of a gourmet, she eats at restaurants that are recommended to her. When she has a particularly excellent meal at a restaurant, she usually goes for a return visit. But she is frequently disappointed. Subsequent meals are rarely as good as the first meal. Why do you suppose this is?

We have posed this sort of question to scores of people having educational levels ranging from college freshmen to Ph.D.-level staff at major research institutions (Nisbett et al. 1987; Nisbett 1992). From freshmen, we almost never get anything other than a causal hypothesis: “Maybe the chefs change a lot” or “Maybe her expectations are so high that the reality will disappoint her.” Such hypotheses are not necessarily wrong, but they miss the underlying statistical points. From undergraduates who have had a course in statistics, we often get answers that reflect an appreciation of the probabilistic nature of restaurant meal quality: “Maybe it was just by chance that she got such a good meal the first time,” which is surely right, as far as it goes. From graduate students in psychology, who have typically had two or three courses in statistics, we usually get a statistical answer, often of high quality, such as “There are probably many more restaurants where you can get an excellent meal some of the time than there are restaurants where you can get an excellent meal all of the time. So if she gets an excellent meal, it’s probably in a restaurant that is only very good on average. Therefore if she gets a truly excellent meal there’s no place to go but down—on average.” From Ph.D.s in science we nearly always get a statistical answer, usually of high quality.
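
The regression effect behind the graduate students’ answer is easy to demonstrate with a short simulation. The sketch below is illustrative only; the quality scale, noise level, and “excellent” cutoff are invented for the example. Each restaurant serves meals that vary around a stable average, and conditioning on an excellent first meal guarantees that return visits are worse on average:

```python
import random

# Illustrative assumptions (not from the chapter): each restaurant has a
# stable underlying quality, and any single meal is that quality plus luck.
random.seed(0)

first_meals, second_meals = [], []
for _ in range(100_000):
    true_quality = random.gauss(5.0, 1.0)          # restaurant's average meal
    first = true_quality + random.gauss(0.0, 1.5)  # one meal = quality + luck
    second = true_quality + random.gauss(0.0, 1.5)
    if first >= 8.0:                 # Catherine returns only after an excellent meal
        first_meals.append(first)
        second_meals.append(second)

avg = lambda xs: sum(xs) / len(xs)
print(f"mean first meal (selected): {avg(first_meals):.2f}")
print(f"mean return visit:          {avg(second_meals):.2f}")
# The return visit is lower on average: an excellent first meal usually
# reflects good luck as well as a good restaurant, and the luck regresses.
```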

Can you teach people statistical principles that can affect their understanding of everyday life events without having them take hundreds of hours of courses? Yes, we can teach people how to reason about events like Catherine’s disappointment in laboratory sessions lasting less than an hour (Fong et al. 1986). We can teach rules like the law of large numbers in purely abstract fashion by talking about urns with balls of different colors; defining the concepts of population parameter, sample parameter, and sample size; and pointing out that larger samples will on average reflect population parameters better than smaller samples. We can also teach by presenting people with a number of concrete problems in everyday life that require the law of large numbers for solutions. For example, we can ask people to think about the following sort of problem:

David is a high school senior choosing between two colleges. He has friends at both colleges. His friends at College A like it a lot on social and academic grounds. His friends at College B are not so satisfied, being generally unenthusiastic about the college. He visits both of the colleges for a day and meets some students at A who are not very interesting and a professor who gives him a curt brushoff. He meets several students at College B who are lively and intelligent and a couple of professors take a personal interest in him. Which college do you think he should go to?

Most people who are uninstructed in statistics think David should go to the place he likes, not the place his friends like. But you can massage people’s probabilistic intuitions by saying, in effect, “We can think of David’s impressions of each campus as a sample parameter. But David’s sample is very small and could well be misleading. His friends have a much larger sample of the colleges, and we would expect that their sample parameters are closer to the population parameter of college satisfaction for people like David. So David should probably go with what his friends think.”

Teaching people the rule in the abstract, using arbitrary events like balls in an urn, and teaching people the rule using concrete examples like the college-choice problem are both effective in getting people to apply statistical solutions to problems that require them. The two together are even more effective. Moreover, the effects last over a period of at least several weeks, and for the concretely trained subjects an influence is found even for problems far removed in surface content from those they were trained on (Fong and Nisbett 1991).
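
For readers who want the abstract urn framing made concrete, here is a minimal simulation (the urn proportion and sample sizes are arbitrary choices for illustration) showing that larger samples track the population parameter more closely than smaller ones:

```python
import random

# Law of large numbers in the urn framing: estimate the proportion of red
# balls from samples of different sizes and measure the average error.
random.seed(0)
P_RED = 0.7      # population parameter: proportion of red balls in the urn
TRIALS = 10_000

for n in (4, 20, 100):                             # sample sizes
    errors = []
    for _ in range(TRIALS):
        reds = sum(random.random() < P_RED for _ in range(n))
        errors.append(abs(reds / n - P_RED))       # |sample - population|
    print(f"n={n:3d}: mean error of sample proportion = {sum(errors)/TRIALS:.3f}")
# The mean error shrinks as n grows. David's one-day visit is the small-n
# case; his friends' accumulated experience is the large-n case.
```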

We find similar benefits for teaching people principles of behavioral science methodology. Think about the following problem:

The promoters of a local Lose Weight Now! organization have claimed that, on the average, their members lose ten pounds during their first three months of attending meetings. To test this claim, a public health nurse kept records of weight lost by every new member who joined the Lose Weight Now! branch during 2006–2007. Out of 138 people who started to attend meetings, 81 kept attending for at least three months, and, indeed, the average amount of weight lost by these people was 9.7 pounds. Does this study establish that the Lose Weight Now! program is effective in helping people to lose weight?

The flaw in any conclusion that the program is effective is the “self-selection” possibility. That is, the people who stuck with the program may have been those who were going to lose weight anyway. Those who didn’t stick with the program may have been those who weren’t losing weight and had given up; they might even have gained weight. It is therefore not clear that you will lose weight if you enroll in the Lose Weight Now! program.
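
A short simulation makes the artifact vivid. The numbers below are hypothetical, not the study’s: even a program with no effect at all will show its completers “losing weight” if dropping out is related to failure to lose:

```python
import random

# Hypothetical setup: the program has NO true effect (mean weight change is
# zero), but people who aren't losing weight are likelier to drop out.
random.seed(0)

completers, dropouts = [], []
for _ in range(138):                      # new members, as in the example
    change = random.gauss(0.0, 12.0)      # pounds lost; mean 0 = no effect
    p_stay = 0.8 if change > 0 else 0.35  # weight losers are likelier to stay
    (completers if random.random() < p_stay else dropouts).append(change)

print(f"completers: n={len(completers)}, "
      f"avg loss = {sum(completers)/len(completers):.1f} lb")
# Even with zero true effect, the completers' average shows a sizable "loss",
# because those who weren't losing weight selected themselves out.
```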

We find that two years of graduate training in psychology greatly increases the likelihood that people will spot the potential artifact in problems that require understanding of the self-selection principle, or the relevance of control groups, or consideration of the base rate for a given outcome (Lehman et al. 1988). Medical training improves people’s solutions to such problems somewhat, and training in chemistry and law does nothing whatever (Lehman et al. 1988). We have not tried laboratory training sessions to teach such principles, but I don’t doubt that this could easily be done, with results that would be lasting.

How about logic? Taking a course in formal logic actually does nothing for the ability of undergraduates to reason about everyday life problems that require the logic of the conditional (modus ponens, modus tollens, etc.; Lehman and Nisbett 1990).

Even two years of graduate education in philosophy does nothing for conditional reasoning, although it is effective for some types of syllogistic reasoning and for the ability to come up with damaging counterarguments to a proposition (Morris and Nisbett 1992).
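
For concreteness, the conditional rules at issue can be checked mechanically with a truth table. The sketch below is standard propositional logic, not material from the studies: it verifies that modus ponens and modus tollens are valid, while the tempting fallacy of affirming the consequent is not:

```python
from itertools import product

# Truth-table validity check for two-variable argument forms.
implies = lambda p, q: (not p) or q

def valid(premises, conclusion):
    """An argument is valid iff no truth assignment makes every premise
    true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# modus ponens: from (p -> q) and p, infer q
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))         # True  (valid)
# modus tollens: from (p -> q) and not-q, infer not-p
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p))     # True  (valid)
# affirming the consequent: from (p -> q) and q, infer p
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))         # False (invalid)
```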

I would explain the difference between the teachability of statistical and methodological reasoning on the one hand and the difficulty in teaching logical reasoning on the other as being due to the differential “gracefulness” of increments to the rule system. People already have rudimentary, intuitive versions of probabilistic and methodological rules, and when we teach them we are improving on rule systems about which they already have some inkling. The difficulty in teaching logic may be a matter of alarm to some, but not to me. I think the rules of logic fall into two categories—those everybody induces by virtue of long practice in the world, such as “or exclusion” (either A or B is the case but not both), and those that are highly artificial, such as the more nonintuitive implications of the conditional or most syllogistic forms. (Bertrand Russell said about the medieval monks’ development of the syllogism that it was as barren intellectually as they themselves were reproductively.)

How about cost-benefit rules? Are people capable of making their choices in the highly rational fashion required by the formal axioms of choice? Economists have gone through three phases on this question. In the first phase, economists maintained that all choices are in fact made in accordance with those abstract rules. Then Herbert Simon loosened the requirements a bit by introducing the concept of “bounded rationality”: given the brevity of life, people consider choices only to the extent that they are important and information about the relevant utilities and probabilities is easy to come by (Simon 1955). This bounded rationality sounded to most people as if it were rational enough. But then research, especially that by psychologists Daniel Kahneman and Amos Tversky, showed that people weren’t even very boundedly rational. They spend as much time shopping for a shirt as for a refrigerator; they are risk-averse in situations where there is a potential gain and risk-seeking in situations where there is a potential loss; they are subject to severe framing effects (for example, reaching opposite conclusions about the same formal problem depending on whether they are encouraged to think about it as a possible gain or as a possible loss); they calculate value not with respect to some absolute scale but merely with respect to their current state; they don’t use proper probabilities at all but rather something more like “decision weights”; and the list could go on (Tversky and Kahneman 1981).
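
One way to make the reference-dependence point concrete is the value function from Kahneman and Tversky’s prospect theory. The sketch below uses the functional form and parameter estimates they published in later work (alpha = 0.88, lambda = 2.25); the numbers are illustrative, not part of this chapter’s studies:

```python
# Value is computed over gains and losses relative to the current state,
# with losses looming larger than gains (loss aversion).

def prospect_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a change x from the status quo."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(prospect_value(100))    # value of gaining $100 -> about +57.5
print(prospect_value(-100))   # value of losing $100  -> about -129.5
# The asymmetry is why the same outcome feels different framed as a gain
# ("keep $100 of your $200") versus a loss ("lose $100 of your $200"),
# even though the end state is identical.
```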

While not denying any of these pejorative characterizations of the decision maker, I have good news. Formal principles of cost-benefit analysis can be taught in such a way as to make people more likely to employ them in everyday choices (Larrick et al. 1990). Moreover, people are better off when they do use those principles.1

Consider the following problem:

Several months ago you bought tickets to a basketball game in a nearby city. That game will be played tonight. However, the star of your team is not playing, the opposing team is weaker than expected, and snow has begun to fall. Should you go to the game or tear up the tickets?

If you said you should go to the game because it would be uneconomical not to consume something you’ve paid for, you’re not thinking like an economist, who has a valuable rule for making such decisions; namely, the “sunk cost” rule, which follows from the choice axioms. This rule says that you should not consume something that you’ve paid for, unless its value is positive at the present time. You’ve already spent the money; it’s sunk. You can’t retrieve any part of it by suffering through an unpleasant and possibly risky drive to watch a game that is likely to be boring.
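
Expressed as a computation, the rule is simply that the ticket price never enters the comparison of options; only prospective costs and benefits do. The utilities below are invented for illustration:

```python
# A minimal sketch of the sunk cost rule with hypothetical utilities.

TICKET_PRICE = 80        # sunk: paid months ago, unrecoverable either way

def net_value(option):
    # Present-tense utilities, in "dollar-equivalent enjoyment" (made up).
    if option == "go":
        return 15 - 25 - 10   # dull game, risky snowy drive, lost evening
    if option == "stay home":
        return 20             # pleasant evening in
    raise ValueError(option)

best = max(["go", "stay home"], key=net_value)
print(best)   # "stay home" -- the $80 is sunk and never enters the comparison
```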

Do economists live their lives using such principles? We gave policy problems and everyday life-choice problems to University of Michigan economists, biologists, and humanities professors (Larrick et al. 1993). They included, in addition to sunk cost problems, “opportunity cost” problems, where the trick is to recognize that you shouldn’t pursue some course of action when another course of action offers a likely lower cost and a likely higher benefit. For example, we gave study participants a sunk cost problem that asked whether they agreed with the university’s decision to tear down the old hospital and put up a new one, even though the cost of putting up a new one was nearly as high as the cost of renovating the old one and the old one had been extremely expensive to build. Economists and everyone else should say that the cost of the old hospital is sunk and therefore irrelevant to the present choice.

Economists do in fact say that, and they are much more likely to apply cost-benefit principles in the avowedly normatively correct way to all kinds of policy problems and personal choices (e.g., walk out of a lousy movie or stay to the bitter end?). But biologists and humanities professors are not nearly as likely to use those principles as economists. And biologists and humanities professors are not much more likely to use those principles than their students.
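
The opportunity-cost rule described above can likewise be stated as a mechanical check, sketched here with hypothetical numbers: a course of action is ruled out whenever some alternative is expected both to cost less and to yield more:

```python
# Toy dominance check for the opportunity-cost rule (hypothetical numbers).

courses = {
    # name: (expected future cost, expected future benefit)
    "course A": (50, 60),
    "course B": (30, 80),   # cheaper AND better: pursuing A forgoes B's surplus
}

def dominates(x, y):
    """x dominates y if x is expected to cost less and to yield more."""
    return x[0] < y[0] and x[1] > y[1]

a, b = courses["course A"], courses["course B"]
if dominates(b, a):
    print("choose course B; sticking with A carries an opportunity cost")
```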

How about training in economics short of a doctorate? Does a course in economics make you more likely to use the cost-benefit rules? No. Undergraduates who have had a single course in economics are no more likely to use cost-benefit rules in analyzing policy questions or making personal decisions than are undergraduates who have never had a course in economics (Larrick et al. 1993).

But we can teach college students how to use the sunk cost and opportunity cost principles in sessions lasting less than an hour (Larrick et al. 1990). And the new rules stick around. Two weeks after training them, we call them in the guise of an opinion poll and ask questions that don’t look at all like the ones on which they were trained. The students use the normatively correct rules more than do untrained students.

But how do I know they’re better off using “normatively correct” rules? I’ve already admitted that neither the rules’ abstract optimality nor the fact that corporations hire decision experts provides much reason to believe that the rules are beneficial.

But our research shows that use of the rules is associated with life outcomes that people desire. Faculty members who regularly use cost-benefit rules when making choices get higher salaries and bigger raises than do those who use the rules less regularly (Larrick et al. 1993). Undergraduates who more frequently use the rules have higher GPAs. One might ask whether this finding can be attributed to the self-selection principle: Are the students who use the rules, or claim to do so, simply more intelligent than those who do not? Actually, students whose grades are higher than their SAT scores (a pretty good indicator of intelligence) would predict are more likely to use the rules. Being wise about choices helps you achieve more than you otherwise would.
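
The logic of that control is worth spelling out. The sketch below runs the analysis on simulated data (all numbers and variable names are hypothetical, not the study’s): regress GPA on SAT, take the residuals (grades above or below what SAT predicts), and compare rule users with non-users:

```python
import numpy as np

# Residual analysis on simulated data: does rule use predict grades beyond
# what SAT alone predicts?
rng = np.random.default_rng(0)
n = 500
sat = rng.normal(1200, 150, n)
uses_rules = rng.random(n) < 0.4
# Simulated world: GPA tracks SAT, plus a bonus for cost-benefit rule use.
gpa = 1.0 + 0.0015 * sat + 0.15 * uses_rules + rng.normal(0, 0.3, n)

slope, intercept = np.polyfit(sat, gpa, 1)    # GPA predicted from SAT alone
residual = gpa - (slope * sat + intercept)    # grades beyond SAT's prediction

print(f"mean residual, rule users: {residual[uses_rules].mean():+.3f}")
print(f"mean residual, non-users:  {residual[~uses_rules].mean():+.3f}")
# Rule users exceed their SAT-predicted grades, mirroring the reported pattern.
```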

To date we have studied and found effective ways of teaching a significant number of rule systems. I expect that we will continue to identify many more inferential rules that are pragmatically helpful to people, and that we will find surprisingly efficient ways to teach those rules.

ENDNOTES

1. The arguments by economists on this latter point are not very persuasive: The cost-benefit rules must be beneficial because 1) under the assumptions made by economists they can be shown to maximize outcomes and 2) corporations pay decision experts for advice. Unfortunately, corporations also pay for handwriting analysts, lie detectors, and motivation experts to jump around on stage to unknown effect.

REFERENCES

Fong, G. T., D. H. Krantz, and R. E. Nisbett. 1986. The effects of statistical training on thinking about everyday problems. Cognitive Psychology 18:253–292.

Fong, G. T., and R. E. Nisbett. 1991. Immediate and delayed transfer of training effects in statistical reasoning. Journal of Experimental Psychology: General 120:34–45.

Inhelder, B., and J. Piaget. 1958. The Growth of Logical Thinking from Childhood to Adolescence. New York: Basic Books.

Larrick, R. P., J. N. Morgan, and R. E. Nisbett. 1990. Teaching the use of cost-benefit reasoning in everyday life. Psychological Science 1:362–370.

Larrick, R. P., R. E. Nisbett, and J. N. Morgan. 1993. Who uses the cost-benefit rules of choice? Implications for the normative status of microeconomic theory. Organizational Behavior and Human Decision Processes 56:331–347.

Lehman, D. R., R. O. Lempert, and R. E. Nisbett. 1988. The effects of graduate training on reasoning: Formal discipline and thinking about everyday life events. American Psychologist 43:431–443.

Lehman, D. R., and R. E. Nisbett. 1990. A longitudinal study of the effects of undergraduate education on reasoning. Developmental Psychology 26:952–960.

Morris, M. W., and R. E. Nisbett. 1992. Tools of the trade: Deductive reasoning schemas taught in psychology and philosophy. In Rules for Reasoning, ed. R. E. Nisbett. Hillsdale, NJ: Erlbaum.

Newell, A. 1980. One last word. In Problem Solving and Education, ed. D. Tuma and F. Reif. Hillsdale, NJ: Erlbaum.

Nisbett, R. E., ed. 1992. Rules for Reasoning. Hillsdale, NJ: Erlbaum.

Nisbett, R. E., et al. 1987. Teaching reasoning. Science 238:625–631.

Piaget, J., and B. Inhelder. 1951/1975. The Origin of the Idea of Chance in Children. New York: Norton.

Plato. 1875. The Dialogues of Plato. Oxford: Oxford University Press.

Simon, H. A. 1955. A behavioral model of rational choice. Quarterly Journal of Economics 69:99–118.

Thorndike, E. 1906. Principles of Teaching. New York: Seiler.

Tversky, A., and D. Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211:453–458.