Intuitive ethics: how innately prepared intuitions generate culturally viable virtues
Strangeness is fascinating. Medieval maps embellished with fantastical beasts, sixteenth-century wonder chambers filled with natural and technological marvels, even late-twentieth-century supermarket tabloids–all attest to the human fascination with things that violate our basic ideas about reality. The study of morality and culture is therefore an intrinsically fascinating topic. People have created moralities as divergent as those of Nazis and Quakers, headhunters and Jains. And yet, when we look closely at the daily lives of people in divergent cultures, we can find elements that arise in nearly all of them– for example, reciprocity, loyalty, respect for (some) authority, limits on physical harm, and regulation of eating and sexuality. What are we to make of this pattern of similarity within profound difference? Social scientists have traditionally taken two approaches.
The empiricist approach posits that moral knowledge, moral beliefs, moral action, and all the other stuff of morality are learned in childhood. There is no moral faculty or moral anything else built into the human mind, although there may be some innate learning mechanisms that enable the acquisition of later knowledge. To the extent that there are similarities across cultures, they arise because all cultures face similar problems (e.g., how to divide power and resources, care for children, and resolve disputes) for which they have often developed similar solutions.
The nativist approach, on the other hand, holds that knowledge about such issues as fairness, harm, and respect for authority has been built into the human mind by evolution. All children who are raised in a reasonable environment will come to develop these ideas, even if they are not taught by adults. To the extent that there are differences across cultures, they arise because of local variation in the implementation of universal moral knowledge (e.g., should relations among siblings be guided by rank and respect for elders, or by equality and reciprocity?).
We would like to take the opportunity afforded by this Dædalus issue on human nature to work through one aspect of the idea that morality is both innate and learned. We are not going to offer a wishy-washy, split-the-difference approach. Rather, we will present a modified nativist view that we believe fully respects the depth and importance of cultural variation in morality. We will do this by focusing attention on a heretofore ignored link: the link between intuitions, especially a subset of intuitions that we argue are innate in important respects, and virtues, which by and large are social constructions.
We propose that human beings come equipped with an intuitive ethics, an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings. The four patterns for which we believe the evidence is best are those surrounding suffering, hierarchy, reciprocity, and purity. These intuitions undergird the moral systems that cultures develop, including their understandings of virtues and character. By recognizing that cultures build incommensurable moralities on top of a foundation of shared intuitions, we can develop new approaches to moral education and to the moral conflicts that divide our diverse society.
Anthropologists often begin with sociological facts and then try to work down one level of analysis to psychology. Laws, customs, rituals, and norms obviously vary, and from that variation it is reasonable to conclude that many psychological facts, such as beliefs, values, feelings, and habits, vary too. Evolutionary psychologists, in contrast, work mostly in the space between psychological and biological levels of analysis. Human brains are obviously products of natural selection, adapted to solve problems that faced our hominid ancestors for millions of years. Since infant brains hardly vary across cultures and races, it is reasonable to suppose that many psychological facts (e.g., emotions, motivations, and ways of processing social information) are part of the factory-installed equipment that evolution built into us to solve those recurrent problems.
So how can we get those working down from sociological facts to connect with those working up from biological facts? Where exactly should we drive the golden spike to link the two approaches? The meeting point must be somewhere in the territory of psychology, and we suggest that the exact spot is the intuitions. Intuitions are the judgments, solutions, and ideas that pop into consciousness without our being aware of the mental processes that led to them. When you suddenly know the answer to a problem you’ve been mulling, or when you know that you like someone but can’t tell why, your knowledge is intuitive. Moral intuitions are a subclass of intuitions, in which feelings of approval or disapproval pop into awareness as we see or hear about something someone did, or as we consider choices for ourselves.1
Intuitions arise because the mind is composed of two distinct processing systems. Most cognition takes place in the first of these, the intuitive, or automatic, system. The human mind, like animal minds, does most of its work by automatic pattern matching and distributed processing. Our visual system, for example, makes thousands of interpretations each second, without any conscious effort or even awareness. It does this by relying in part on built-in processing shortcuts, or heuristics (e.g., the assumption that lines continue behind obstacles that block parts of them), which are integrated with learned knowledge about the things in one’s visible world. Analogously, many psychologists now believe that most social cognition occurs rapidly, automatically, and effortlessly– in a word, intuitively–as our minds appraise the people we encounter on such features as attractiveness, threat, gender, and status. The mind accomplishes this by relying in part on heuristics, which are then integrated with learned facts about the social world.
But human minds are unlike other animal minds in having a well-developed second system in which processing occurs slowly, deliberately, and fully within conscious awareness. When you think in words or reason through a problem or work backward from a goal to your present position, you are using the reasoning, or controlled, system. Most psychological research on morality has looked at deliberative moral reasoning, in part because it is so accessible. All you have to do is ask someone, as Lawrence Kohlberg did, “Do you think that Heinz should break into the pharmacy to steal the drug to save his wife’s life?”2 Kohlberg developed a comprehensive account of moral development by looking at how people’s answers to these sorts of dilemmas changed over the years of childhood and adolescence.
Yet recent research in social psychology suggests that the responses to such dilemmas mostly emerge from the intuitive system: people have quick gut feelings that come into consciousness as soon as a situation is presented to them. Most decide within a second or two whether Heinz should steal the drug. Then when asked to explain their judgments, people search for supporting arguments and justifications using the reasoning system.3 As with the visual system, we can’t know how we came to see something; we can only know that we see it. If you focus on the reasons people give for their judgments, you are studying the rational tail that got wagged by the emotional dog.
We propose that intuition is a fertile but under-studied construct for research on morality. It is here that we can find a small number of basic units that might underlie a great diversity of cultural products. Analogous units make up our perceptual systems. Three kinds of receptors in the skin (for pressure, temperature, and pain) work together to give us our varied experiences of touch. Five kinds of receptors on the tongue (for salt, sweet, bitter, sour, and, oddly, glutamate) work together with our sense of smell to give us a great variety of gustatory experiences. Might there be a few different kinds of social receptors that form the foundation of our highly elaborated and culturally diverse moral sense?
What can evolution put into a mind, and how does it put it there? Some have argued that the evolutionary process has created innate knowledge of various kinds.4 For example, infants appear to have hard-wired knowledge of faces and sweet tastes, because their brains come equipped with cells and circuits that recognize them. But our more complex abilities are often better described as a ‘preparedness’ to learn something. For example, humans are born with few hardwired fears, but we come prepared to acquire certain fears easily (e.g., of snakes, spiders, mice, open spaces), and cultures vary in the degree to which they reinforce or oppose such fears. On the other hand, it is very difficult to create a fear of flowers, or even of such dangerous things as knives and fire, because evolution did not ‘prepare’ our minds to learn such associations.
So what moral intuitions might the mind be prepared to develop? What are the patterns in the social world to which human beings might easily come to react with approval or disapproval? There is more than one way to answer these questions; in this essay we take what might be called a meta-empirical approach, surveying works by a variety of social scientists to locate a common core of moral values, concerns, and issues.
We focused on five works–two that aim to describe what is universal,5 two that describe what is culturally variable,6 and one that describes the building blocks of morality that are visible in other primates.7 We began by simply listing the major kinds of social situations these five authors said people (or chimpanzees) react to with a clear evaluation as positive or negative. We then tallied the number of ‘votes’ each item got, that is, the number of authors, out of the five, who referred to it directly.
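To make the tallying procedure concrete, here is a minimal sketch in Python of the kind of count described above. The theme sets are placeholders standing in for what each of the five works discusses; they are not a reproduction of the actual coding of those works.

```python
# Illustrative sketch of the tallying procedure described above. The theme sets
# below are placeholders; they are not the actual coding of the five works.
from collections import Counter

surveyed_works = {
    "universals_source_1": {"suffering/compassion", "reciprocity/fairness", "hierarchy/respect", "purity"},
    "universals_source_2": {"suffering/compassion", "reciprocity/fairness", "hierarchy/respect"},
    "variation_source_1":  {"suffering/compassion", "reciprocity/fairness", "hierarchy/respect", "purity"},
    "variation_source_2":  {"suffering/compassion", "reciprocity/fairness", "hierarchy/respect"},
    "primate_source":      {"suffering/compassion", "reciprocity/fairness", "hierarchy/respect"},
}

# One 'vote' per author who refers to a theme directly.
votes = Counter(theme for themes in surveyed_works.values() for theme in themes)

# The 'winners' are the themes mentioned by all five authors.
winners = sorted(theme for theme, n in votes.items() if n == len(surveyed_works))
print(votes)
print(winners)
```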
The winners, showing up in all five works, were suffering/compassion, reciprocity/fairness, and hierarchy/respect. It seems that in all human cultures, individuals often react with flashes of feeling linked to moral intuitions when they perceive certain events in their social worlds: when they see others (particularly young others) suffering, and others causing that suffering; when they see others cheat or fail to repay favors; and when they see others who are disrespectful or who do not behave in a manner befitting their status in the group. With chimpanzees, these reactions occur mostly in the individual that is directly harmed. The hallmark of human morality is third-party concern: person A can get angry at person B for what she did to person C. In fact, people love to exercise their third-party moral intuitions so much that they pay money to see and hear stories about fictional strangers who do bad things to each other.
The best way to understand our argument is to begin with the notion of long-standing adaptive challenges, and then to scan down each of the columns in table 1. For example, the prolonged dependence characteristic of primates, especially humans, made it necessary, or at least beneficial, for mothers to detect signs of suffering and distress in their offspring. Mothers who were good at detecting such signals went on to rear more surviving offspring, and over time a communication system developed in which children’s stylized distress signals triggered maternal aid. Psychological preparation for hierarchy evolved to help animals living in social groups make the most of their relative abilities to dominate others. Given the unequal distribution of strength, skill, and luck, those individuals who had the right emotional reactions to play along successfully and work their way up through the ranks did better than those who refused to play a subordinate role or who failed to handle the perks of power gracefully.8 Similarly, a readiness for reciprocity evolved to help animals, particularly primates, reap the benefits of cooperating with non-kin. Individuals who felt bad when they cheated, and who were motivated to get revenge when they were cheated, were able to engage successfully in more non-zero-sum games with others.9
Table 1
| | Suffering | Hierarchy | Reciprocity | Purity |
| --- | --- | --- | --- | --- |
| Proper domain (original triggers) | Suffering and vulnerability of one’s children | Physical size and strength, domination, and protection | Cheating vs. cooperation in joint ventures, food sharing | People with diseases or parasites, waste products |
| Actual domain (modern examples) | Baby seals, cartoon characters | Bosses, gods | Marital fidelity, broken vending machines | Taboo ideas (communism, racism) |
| Characteristic emotions | Compassion | Resentment vs. respect/awe | Anger/guilt vs. gratitude | Disgust |
| Relevant virtues | Kindness, compassion | Obedience, deference, loyalty | Fairness, justice, trustworthiness | Cleanliness, purity, chastity |
A useful set of terms for analyzing the ways in which such abilities get built into minds comes from recent research into the modularity of mental functioning.10 An evolved cognitive module is a processing system that was designed to handle problems or opportunities that presented themselves for many generations in the ancestral environment of a species. Modules are little bits of input-output programming, ways of enabling fast and automatic responses to specific environmental triggers. In this respect, modules behave very much like what cognitive psychologists call heuristics, shortcuts or rules of thumb that we often apply to get an approximate solution quickly (and usually intuitively).
One useful distinction in the modularity literature is that between the proper and actual domains of a module. The proper domain is the set of specific scenarios or stimuli that the module was evolved to handle. In the case of a suffering/compassion module, the proper domain is the sight of one’s own child showing the stereotypical signs of distress or fear. The proper domain may have extended to distress shown by all kin as well. The actual domain, in contrast, is the set of all things in the world that now happen to trigger the module. This includes the suffering of other people’s children, starving adults seen on television, images of baby seals being clubbed to death, and our pet dogs that droop, mope, whine, and break our hearts as we prepare to go off to work each morning.
The concept of modules is helpful for thinking about moral intuitions. One possibility is that moral intuitions are the output of a small set of modules. When a module takes the conduct or character of another person as its input and then emits a feeling of approval or disapproval, that output is a moral intuition. In strong cases, each of these moral modules triggers a full-fledged emotion: suffering triggers compassion; arrogant behavior by subordinates triggers contempt; cheating triggers anger. But in most cases our moral modules are triggered by minor events, by gossip, by things we read in the newspaper, and we do not truly get angry, or feel compassion; we just feel small flashes of approval or disapproval.
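Since the essay itself describes modules as "little bits of input-output programming," a minimal computational caricature may help fix ideas. In the sketch below, each module maps features of a perceived event to an affective flash, and only strong activations rise to a full-fledged emotion. All of the trigger lists, weights, thresholds, and emotion labels are invented for illustration; this is not a model proposed in the essay.

```python
# A toy caricature of the module idea sketched in the text: each 'module' maps
# perceived features of an event to a flash of approval or disapproval, and only
# strong activations rise to a full-fledged emotion. All triggers, weights, and
# labels are illustrative placeholders, not the authors' model.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    triggers: dict   # feature -> activation weight (hypothetical values)
    emotion: str     # emotion produced in strong cases

MODULES = [
    Module("suffering",   {"child_in_distress": 0.9, "sad_news_story": 0.2}, "compassion"),
    Module("hierarchy",   {"insolent_subordinate": 0.8, "gossip_about_boss": 0.2}, "contempt"),
    Module("reciprocity", {"cheated_in_deal": 0.9, "unreturned_favor": 0.3}, "anger"),
    Module("purity",      {"contact_with_corpse": 0.9, "taboo_food": 0.4}, "disgust"),
]

def appraise(event_features, threshold=0.7):
    """Return (module, reaction) pairs: a mere flash below the threshold, a full emotion above it."""
    reactions = []
    for m in MODULES:
        activation = sum(w for f, w in m.triggers.items() if f in event_features)
        if activation > 0:
            reactions.append((m.name, m.emotion if activation >= threshold else "mild flash"))
    return reactions

print(appraise({"cheated_in_deal"}))    # strong trigger -> full-fledged anger
print(appraise({"gossip_about_boss"}))  # weak trigger -> only a flash
```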
For the three sets of moral intuitions we have examined so far, the persistent adaptive challenge is a social challenge. But there is an odd corner of moral life, odd at least for modern Westerners, who tend to think of morality as strictly concerned with how we treat other people. That corner is the profound moralization of the body and bodily activities, such as menstruation, eating, bathing, sex, and the handling of corpses. A great deal of the moral law of Judaism, Hinduism, Islam, and many traditional societies is explicitly concerned with regulating purity and pollution.
Based on our research and that of others, we propose that culturally widespread concerns with purity and pollution can be traced to a purity module evolved to deal with the adaptive challenges of life in a world full of dangerous microbes and parasites. The proper domain of the purity module is the set of things that were associated with these dangers in our evolutionary history, things like rotting corpses, excrement, and scavenger animals. Such things, and people who come into contact with them, trigger a fast, automatic feeling of disgust. Over time, this purity module and its affective output have been elaborated by many cultures into sets of rules, sometimes quite elaborate, regulating a great many bodily functions and practices, including diet and hygiene. Once norms were in place for such practices, violations of those norms produced negative affective flashes, that is, moral intuitions.11
Purity and pollution were important ideas in Europe from antiquity through the Victorian age, but they began to fade as the twentieth century replaced them with an increasingly medical and utilitarian understanding of hygiene and an increasing emphasis on personal liberty and privacy in regard to bodily matters. However, even contemporary American college students, when we interview them in our studies of moral judgment, will confess to feeling flashes of disgust and disapproval when asked about violations of purity taboos. Stories about eating one’s dead pet dog, about harmless cases of cannibalism, or even about homosexuality may elicit feelings of disgust, which the students attempt, often comically, to justify afterward. The intuition is produced by the module, but the culture does not support a purity-based morality anymore (at least for liberal college students), so the students are left to struggle with the reasoning system to explain a judgment produced by the intuitive system.
Thus far, we have argued two points: that much of mature moral functioning is intuitive rather than deliberative; and that among our moral intuitions are a small number that are primitive and innate, or at least innately prepared. In addition to reflecting persistent adaptive tasks in the human evolutionary past, these prepared intuitions influence moral development and functioning by constraining our moral attention and laying the foundation for the development of other moral concepts. We will now link these observations to another area of philosophical and psychological thinking about morality, namely, the area of virtue theory.
Virtue theorists are a contentious lot, but most would agree at least that virtues are characteristics of a person that are morally praiseworthy. Virtues are therefore traits as John Dewey conceived them–as dynamic patternings of perception, emotion, judgment, and action.12 Virtues are social skills. To possess a virtue is to have disciplined one’s faculties so they are fully and properly responsive to one’s local sociomoral context. To be kind, for example, is to have a perceptual sensitivity to certain features of situations, including those having to do with the well-being of others, and to be sensitive such that those features have an appropriate impact on one’s motivations and other responses. To be courageous is to have a different kind of sensitivity and well-formedness of response; to be patient, still another.
Virtues, on this understanding, are closely connected to the intuitive system. A virtuous person is one who has the proper automatic reactions to ethically relevant events and states of affairs, for example, another person’s suffering, an unfair distribution of a good, a dangerous but necessary mission. Part of the appeal of virtue theory has always been that it sees morality as embodied in the very structure of the self, not merely as one of the activities of the self. Even Aristotle supposed that in developing the virtues we acquire a second nature, a refinement of our basic nature, an alteration of our automatic responses.
One of the crucial tenets of virtue theory is that the virtues are acquired inductively, that is, through the acquisition, mostly in childhood but also throughout the life course, of many examples of a virtue in practice. Often these examples come from the child’s everyday experience of construing, responding, and getting feedback, but they also come from the stories that permeate the culture. Each of these examples contains information about a number of aspects of the situation, including the protagonists’ motivations, the protagonists’ state of being (suffering, disabled, hostile, rich, etc.), the categorization of the situation, and the evaluation of the outcome offered by more experienced others. Only over time will the moral learner recognize what information is important to retain and what can be safely disregarded.
As philosophers and cognitive scientists have recently been arguing, with respect both to morality and to cognition more generally, this kind of learning cannot be replaced with top-down learning, such as the acceptance of a rule or principle and the deduction of specific responses from it. Interestingly, this aspect of virtue theory makes Aristotle a forerunner of the neural network account of morality now being developed by Paul Churchland, Andy Clark, and others.13 In this model, the mind, like the brain itself, is a network that gets tuned up gradually by experience. With training, the mind does a progressively better job of recognizing important patterns of input and of responding with the appropriate patterns of output.
For those who emphasize the importance of virtues in moral functioning, then, moral maturity is a matter of achieving a comprehensive attunement to the world, a set of highly sophisticated sensitivities embodied in the individual virtues. Of course, reasoning and deliberation play important roles in this conception as well; indeed, part of being a virtuous person is being able to reason in the right way about difficult situations. But virtue theory is nevertheless a departure from theories of morality that see deliberation as the basic moral psychological activity.
We believe that virtue theories are the most psychologically sound approach to morality. Such theories fit more neatly with what we know about moral development, judgment, and behavior than do theories that focus on moral reasoning or on the acceptance of high-level moral principles such as justice. But a fundamental problem with many virtue theories is that they assume that virtues are learned exclusively from environmental inputs. They implicitly endorse the old behaviorist notion that if we could just set up our environment properly, we could inculcate any virtue imaginable, even virtues such as ‘love all people equally’ and ‘be deferential to those who are smaller, younger, or weaker than you.’ Yet one of the deathblows to behaviorism was the demonstration that animals have constraints on learning: some pairings of stimuli and responses are so heavily prepared that the animal can learn them on a single training trial, while other associations go against the animal’s nature and cannot be learned in thousands of trials. Virtue theories would thus be improved if they took account of the kinds of virtues that ‘fit’ with the human mind and of the kinds that do not. Virtues are indeed cultural achievements, but they are cultural achievements built on and partly constrained by deeply rooted preparednesses to construe and respond to the social world in particular ways.
Aristotle himself recognized the constraining effect of human beings’ embodied and situated nature on ethical experience. As Martha Nussbaum points out, Aristotle defined virtues by reference to universal features of human beings and their environments that combine to define spheres of human experience in which we make normative appraisals of our own and others’ conduct14–not unlike what above we called persistent adaptive challenges. Aristotle’s and Nussbaum’s approach is also a nativist one, albeit one that locates the innate moral content in both the organism and the environment. Our four modules of intuitive ethics are in a sense a pursuit of this Aristotelian project. Like Aristotle, we are seeking a deeper structure to our moral functioning, though in the form of a smaller number of phenomena that are located more in the organism than in the environment.
Let us now link our account of moral intuitions with this account of virtues. Briefly, we propose that the human mind comes equipped with at least the four modules we describe above.15 These modules provide little more than flashes of affect when certain patterns are encountered in the social world. A great deal of cultural learning is required to respond to the actual domain that a particular culture has created, but it may take little or no learning to recognize cases at the heart of the proper domain for each module (e.g., seeing the facial and bodily signals of distress in a child or seeing a large male display signs of dominance and threat while staring down at you).
These flashes are the building blocks that make it easy for children to develop certain virtues and virtue concepts. For example, when we try to teach our children virtues of kindness and compassion, we commonly use stories about mean people who lack those virtues. While hearing such stories, children feel sympathy for the victim and condemnation for the perpetrator. Adults cannot create such flashes out of thin air; they can only put children into situations in which these flashes are likely to happen. We should emphasize that a flash of intuition is not a virtue. But it is an essential tool in the construction of a virtue.
Of course, it is possible to teach children to be cruel to certain classes of people, but how would adults accomplish such training? Most likely by exploiting other moral modules. Racism, for example, can be taught by invoking the purity module and triggering flashes of disgust at the ‘dirtiness’ of certain groups, or by invoking the reciprocity module and triggering flashes of anger at the cheating ways of a particular group (Hitler used both strategies against Jews). In this way, cultures can create variable actual domains that are much broader than the universal proper domains for each module.
A second way in which cultures vary is in their relative use of the four modules. In our own research we have found that American Muslims and American political conservatives value virtues of kindness, respect for authority, fairness, and spiritual purity. American liberals, however, rely more heavily on virtues rooted in the suffering module (liberals have a much keener ability to detect victimization) and the reciprocity module (virtues of equality, rights, and fairness). For liberals, the conservative virtues of hierarchy and order seem too closely related to oppression, and the conservative virtues of purity seem to have too often been used to exclude or morally taint whole groups (e.g., blacks, homosexuals, sexually active women).16
A third way in which cultures diverge is in their assignment of very different meanings and intuitive underpinnings to particular virtues. Take, for example, the virtue of loyalty. Certainly there is a difference between loyalty to peers and friends on the one hand (that is, loyalty grounded in reciprocity intuitions), and, on the other, loyalty to chiefs, generals, and other superiors (that is, loyalty in the context of hierarchy), even though both have much in common. Similarly, the virtue of honor can be incarnated as integrity (in reciprocity), as chivalry or masculine honor more generally (in hierarchy), or as chastity or feminine honor (in purity). And temperance is one thing in the context of reciprocity, where it may be essential for the flourishing of the group in conditions of scarcity, and something quite different in the context of purity, where it is often construed as a means of enlightenment or spiritual development. In each of these cases, different moral underpinnings provide the virtue with different eliciting conditions and different appropriate behaviors and responses.
A fourth source of cultural variation is the complex interactions that virtues can generate, forming what one might call virtue complexes, which express a great deal of a society’s conception of human nature and moral character. One excellent example comes from Reynold A. Nicholson’s Literary History of the Arabs, a masterful survey of pre-Islamic and Islamic Arab culture. One of the moral concepts elucidated by Nicholson is that of hamasa, which is often glossed simply as ‘valor.’ Nicholson, however, defines it this way: “‘Hamasa’ denotes the virtues most highly prized by the Arabs–bravery in battle, patience in misfortune, persistence in revenge, protection of the weak and defiance of the strong.”17 There is no necessary connection between these qualities; one could imagine someone brave in battle and protective of the weak, but impatient in misfortune and inclined to bide his time when challenged by someone stronger. But the point is that the Arabs do not imagine this set of traits, or at least they do not award it their ultimate praise. Even if some virtues tend to go together across cultures, the virtue complexes that each culture generates are likely to be unique.
On the account we have sketched, morality is innate (as a small set of modules) and socially constructed (as sets of interlocking virtues). It is cognitive (intuitions are pattern-recognition systems) and it is emotional (intuitions often launch moral emotions). But above all, morality is important to people in their daily lives, and to societies that seem forever to lament the declining morals of today’s youth. We will therefore close with suggestions for using intuitive ethics in moral education and in dealing with moral diversity.
Moral education, on our account, is a matter of linking up the innate intuitions and virtues already learned with a skill that one wants to encourage. Parents and educators should therefore recognize the limits of the ‘direct route’ to moral education. It is helpful to espouse rules and principles, but only as an adjunct to more indirect approaches, which include immersing children in environments that are rich in stories and examples that adults interpret with emotion. Those stories and examples should trigger the innate moral modules, if possible, and link them to broader virtues and principles. Another indirect approach involves arranging environments so that messages about what is good and bad are consistent across sources (parents, teachers, television, movies, afterschool activities, etc.). Conservative parents who homeschool their children, limit what they can watch on television, and read to them from William Bennett’s Book of Virtues are therefore likely to be successful in tuning up their children’s moral-perceptual systems in the desired ways. Liberal parents who try not to ‘impose their morality’ on their children, by contrast, may well be disappointed by the results. Depriving children of frequent moral feedback, including displays of the parent’s moral emotions, or exposing them to many conflicting messages, may deprive the intuitive system of the experiences it needs to properly tune up. If virtues are social skills, then moral education should be a comprehensive and sustained training regimen with regular feedback.
Moral diversity, on our account, results from differences in moral education and enculturation. As we suggested above, one of the main sources of moral diversity is political diversity. On such currently divisive issues as gay marriage, therapeutic cloning, and stem cell research, liberals focus on promoting individual welfare and individual rights. Conservatives understand these arguments, but they have a more multivocal moral life, drawing on a wider set of moral intuitions.18 They must also integrate their deeply intuitive aversion to ‘playing God’ with their more finely honed and valued sense of disgust. Leon Kass, President Bush’s bioethics advisor, for instance, bases his critique of human cloning in part on the fact that it offends and repulses many people. He grants that disgust is not by itself an argument, but he suggests that there is a form of wisdom in repugnance. “Shallow are the souls that have forgotten how to shudder,” he wrote.19
So how can we all get along in a morally diverse society? The first step is simply to recognize that all sides in the debate are morally motivated. We tend to assume the worst about our opponents, to regard them as perfectly villainous. But when liberals assume that conservatives are motivated by little more than hatred and bigotry, they show about as much psychological insight as President Bush did when he declared that the 9/11 hijackers did what they did because they “hate our freedom.” Only when moral motives are acknowledged can intelligent discourse begin.
The second step is to try to frame appeals in language that may trigger new intuitions on the other side. For example, conservatives tend to value social order and stability; a concerted effort to show that gay marriage is about order and stability, that it’s about helping people to form life-long commitments that will often create stability for children, may be more effective in changing hearts and minds than the familiar arguments about rights and fairness.
It is our hope that a fuller understanding of the links between virtues and intuitions will lead to greater tolerance and respect–between liberals and conservatives, between people of different nations, and, perhaps in the far distant future, between nativists and empiricists.
Endnotes
1. For more on moral intuition, see Jonathan Haidt, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review 108 (2001): 814–834; James Q. Wilson, The Moral Sense (New York: Free Press, 1993).
2. Lawrence Kohlberg, “Stage and Sequence: The Cognitive-Developmental Approach to Socialization,” in David A. Goslin, ed., Handbook of Socialization Theory and Research (Chicago: Rand McNally, 1969).
3. See Haidt, “The Emotional Dog”; Richard E. Nisbett and Timothy DeCamp Wilson, “Telling More Than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84 (1977): 231–259.
4. For reviews see Steven Pinker, The Blank Slate (New York: Viking, 2002); Jerome Barkow, Leda Cosmides, and John Tooby, eds., The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press, 1992).
5. Donald E. Brown, Human Universals (Philadelphia: Temple University Press, 1991); Alan P. Fiske, Structures of Social Life (New York: Free Press, 1991).
6. Shalom H. Schwartz and Wolfgang Bilsky, “Toward a Theory of the Universal Content and Structure of Values: Extensions and Cross-Cultural Replications,” Journal of Personality and Social Psychology 58 (1990): 878–891; Richard A. Shweder et al., “The ‘Big Three’ of Morality (Autonomy, Community, and Divinity) and the ‘Big Three’ Explanations of Suffering,” in Allan M. Brandt and Paul Rozin, eds., Morality and Health (New York: Routledge, 1997), 119–169.
7. Frans de Waal, Good Natured: The Origins of Right and Wrong in Humans and Other Animals (Cambridge, Mass.: Harvard University Press, 1996).
8. Frans de Waal, Chimpanzee Politics (New York: Harper & Row, 1982).
9. See Robert L. Trivers, “The Evolution of Reciprocal Altruism,” Quarterly Review of Biology 46 (1971): 35–57; Robert Wright, Nonzero: The Logic of Human Destiny (New York: Vintage, 2000).
10. Modularity was first proposed for perceptual processes by Jerry Fodor, The Modularity of Mind (Cambridge, Mass.: MIT Press, 1983). However, more recent modularity theorists argue that more flexible and only partially modularized cognitive systems play a role in most areas of higher cognition. See Dan Sperber and Lawrence A. Hirschfeld, “The Cognitive Foundations of Cultural Stability and Diversity,” Trends in Cognitive Sciences 8 (2004): 40–46; Gerd Gigerenzer, Adaptive Thinking: Rationality in the Real World (Oxford: Oxford University Press, 2002).
11. For the complete story of how the actual domain of disgust expanded into the social world, see Paul Rozin, Jonathan Haidt, and Clark R. McCauley, “Disgust,” in Michael Lewis and Jeanette M. Haviland-Jones, eds., Handbook of Emotions, 2nd ed. (New York: Guilford Press, 2000), 637–653.
12. John Dewey, Human Nature and Conduct: An Introduction to Social Psychology (New York: Holt, 1922). See also Paul M. Churchland, “Toward a Cognitive Neurobiology of the Moral Virtues,” Topoi 17 (1998): 83–96.
13. See Larry May, Marilyn Friedman, and Andy Clark, eds., Mind and Morals (Cambridge, Mass.: MIT Press, 1997).
14. Martha C. Nussbaum, “Non-Relative Virtues: An Aristotelian Approach,” in Martha C. Nussbaum and Amartya Sen, eds., The Quality of Life (New York: Oxford University Press, 1993).
15. There are probably many others. The best candidate for a fifth might be an ‘ingroup’ module whose proper domain is the boundaries of a co-residing kin group, and whose actual domain now includes all the ethnic groups, teams, and hobbyist gatherings that contribute to modern identities. To the extent that people feel a bond of trust or loyalty toward strangers, the operation of such an ingroup module seems likely.
16. For a compatible and much more comprehensive treatment see George Lakoff, Moral Politics: What Conservatives Know That Liberals Don’t (Chicago: University of Chicago Press, 1996).
17. Reynold A. Nicholson, A Literary History of the Arabs (Cambridge: Cambridge University Press, 1930), 79.
18. Lakoff, Moral Politics.
19. Leon Kass, “The Wisdom of Repugnance,” New Republic, 2 June 1997, 17–26.