
Contents
- Topics in Neuroethics
- Jonathan Haidt’s Social Intuitionist Model of Moral Judgment
- Marc Hauser’s Universal Moral Grammar
- Joshua Greene’s Dual Process Account
- References
Neuroethics: Moral Cognition
Neil Levy, Professor of Philosophy, Macquarie University
Published: 01 July 2014
Abstract
This chapter sketches and discusses the most prominent contemporary accounts of moral cognition, concentrating on the normative implications that have been or might be drawn from each. After a brief overview of the neuroethical landscape, the chapter presents Jonathan Haidt’s social intuitionism and his more recent moral foundations theory, Marc Hauser’s universal moral grammar account, and Joshua Greene’s dual process account. While each account is able to explain a great deal of data, none provides a wholly satisfying explanation of the entire range, the chapter claims. Proponents of each account have also attempted to show that their views are not merely descriptively accurate but also have implications for normative ethics. This chapter argues that these claims are unconvincing.
The 1960s and 1970s saw the introduction of a number of novel techniques and therapies in medical practice, giving physicians unprecedented powers over the beginning and end of life. In vitro fertilization and surrogacy, among other new reproductive techniques, raised new challenges and concerns regarding the creation of life, while artificial respiration raised new challenges having to do with sustaining life. In part in response to these developments, a new field of applied philosophy calling itself “bioethics” or “medical ethics” was born. More recent years have seen the development of new and apparently unprecedented abilities to intervene in the brain. Psychopharmaceuticals, in the form of antipsychotics and antidepressants (not to mention illegal drugs), date back many decades, but it is relatively recently that they have come to be widely used. They are joined by a host of more exotic techniques for altering brain function and thereby the mind: deep brain stimulation, transcranial magnetic stimulation, transcranial direct current stimulation, and more. At the same time, we have acquired new powers to study the living brain in patients and in volunteer subjects: functional magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and more. These new powers raise new concerns. Once again, these concerns have helped to spawn a new philosophical subdiscipline: neuroethics.
While the analogy with bioethics is illuminating, neuroethics is in many ways broader and more ambitious than bioethics. Neuroethics is often said to have two branches (Roskies, 2002). One branch—dubbed by Roskies the ethics of neuroscience—stands to the sciences of the mind as bioethics stands to the sciences of life. Just as bioethics is concerned primarily with whether it is permissible, or morally advisable, to utilize particular medical interventions in particular contexts, so this branch of neuroethics is concerned with the permissibility or moral advisability of utilizing scientific interventions into the mind in particular circumstances. In this guise, neuroethics utilizes approaches that are familiar to anyone acquainted with applied philosophy more generally, assessing particular uses of technology in light of the motivations of physicians and patients, the consequences for the patient and for the broader society, and so on.
But many neuroethicists are instead, or also, concerned with the second branch of neuroethics, which Roskies dubs the neuroscience of ethics. This branch of neuroethics is concerned with how the sciences of the mind can illuminate traditional philosophical questions, principally those questions that concern the normative dimensions of human life. Neuroethics, in this guise, is continuous with main currents in contemporary naturalized philosophy.
The two branches of neuroethics need not be independent of one another. As we shall see in what follows, work on the neuroscience of ethics may have normative implications, entailing that the neuroscience of ethics can illuminate the ethics of neuroscience. Whereas bioethicists, qua bioethicists, seek to apply ethical theories to dilemmas in the medical field, neuroethicists may seek to test and refine the very tools they then apply. To that extent, neuroethics may be qualitatively different from bioethics. For some of those involved in it, it represents one of the most exciting challenges in contemporary philosophy.
In what follows, I briefly outline some of the major topics that have preoccupied neuroethicists, attending first to problems in neuroethics in its guise as a branch of applied ethics and then to topics in neuroethics considered as the neuroscience of ethics. I aim only to sketch some of the kinds of concerns central to neuroethics as a branch of applied ethics; my focus is on neuroethical work on moral cognition. Since this work is not only a central topic within the neuroscience of ethics but has also been seen by some as having normative implications, it nicely illustrates how the two branches of neuroethics may be mutually illuminating. Accordingly, my overview considers the implications of work on moral cognition for issues in normative and applied ethics.
A note before I begin. The attentive reader may have noticed that although Roskies speaks of the neuroscience of ethics and the ethics of neuroscience, I have spoken of “the sciences of the mind.” The actual practice of many neuroethicists is to focus on the full range of sciences of the mind, not on neuroscience alone. This is entirely appropriate, since to a large extent the psychological theories upon which the hypotheses of cognitive neuroscience are based are very significantly shaped by work in longer-established areas of cognitive science (which are, in turn, increasingly shaped by work in cognitive neuroscience). It would be artificial to restrict attention to neuroscience alone in attempting to understand the human mind. Indeed, the topic on which I concentrate here, moral cognition, well illustrates how restricting attention to neuroscience alone distorts our understanding of the debates. Although they utilize different methodologies to understand the mind/brain, the psychologists and neuroscientists I discuss see each other as offering rival accounts of the same subject matter: They see their accounts as in competition with one another. Restricting attention to the neuroscientific alone would leave us unable fully to understand what the neuroscientists see themselves as doing. Moreover, the neuroscientists cite behavioral evidence and the psychologists cite brain imaging studies in support of their views. Although some may prefer a more literal construal of “neuroethics,” the broader construal I here adopt is intellectually defensible and reflects the practice of many people who call themselves neuroethicists; I make no apologies for it.
Topics in Neuroethics
Neuroethicists are concerned with a large range of different topics—that is, with any normative question illuminated by (or illuminating) the sciences of the mind. Thus, for instance, the ethics of psychiatry (and, relatedly, the nature of mental illness and the best explanation of the etiology, phenomenology, and nature of specific psychiatric conditions: schizophrenia, delusions, depression, addiction, and so on) falls within the purview of neuroethics (Levy & Clarke 2008). So does the neuroscience of religious belief and the extent to which explaining it might explain it away (Newberg 2010), and so do questions concerning the application of cognitive science to influence consumer behavior (“neuromarketing”; Ariely & Berns 2010). Although these and many other topics have all concerned neuroethicists at one time or another, the majority of attention has focused on a narrower range of topics.
Within neuroethics understood as an applied discipline, the single most discussed topic is almost certainly the question whether the use of cognitively enhancing technologies is permissible or advisable. There is a wide range of such technologies. A number of psychopharmaceuticals have been promoted as effective enhancers of attention and concentration. Modafinil, a drug developed for the treatment of narcolepsy, is reported to be widely used off label for these purposes (Sahakian & Morein-Zamir 2007). Methylphenidate, utilized for the treatment of ADHD, has also been promoted as a cognitive enhancer (Mehta et al. 2000). More exotic technologies for cognitive enhancement include transcranial magnetic stimulation (Snyder et al. 2003) and—most promisingly—transcranial direct current stimulation (Kadosh et al. 2012). Opponents of these technologies have raised a number of concerns about them. They have worried about the extent to which they threaten the authenticity of users (Elliott 1998), the extent to which they give those who use them an unfair advantage (Schermer 2008), broader social effects of widespread use (Wexler 2011), and their potential to increase inequality between rich and poor (Metzinger & Hildt 2011), among other topics.
Somewhat related to the question of the permissibility or advisability of cognitive enhancement is the question of the permissibility or advisability of memory modification utilizing new technologies. A number of psychopharmaceuticals (most prominently propranolol) have been suggested as possible memory modifiers. Opponents of memory modification have worried that such technologies might threaten the identity of those who use them, cut users off from valuable truths concerning themselves (Hurley 2007), and threaten their authenticity (Henry, Fishman, & Youngner 2007).
Neuroethicists have also been very concerned with the use of neuroscientific evidence in legal settings. They have worried about the persuasive power of fMRI (functional magnetic resonance imaging) images and the extent to which courts might be swayed inappropriately by claims about the predictive power of biomarkers for certain conditions (Goodenough & Tucker 2010). More broadly but relatedly, a number of neuroethicists have focused on the question whether findings in neuroscience and in psychology might threaten free will and (therefore) moral responsibility. Several scientists have advanced evidence that they claim demonstrates the epiphenomenality of consciousness in behavior; on this basis they have argued that we lack free will (Libet 1999; Soon et al. 2008; Wegner 2002). In response, philosophers have criticized the methodologies used (Mele 2009), the logical basis of the epiphenomenalism claim (Nahmias 2002), and the need for consciousness in freely performed action (King & Carruthers 2012). Other philosophers have focused on an apparent threat to free will from findings in social psychology, especially the claim that situational influences on behavior undermine freedom (Doris 2002).
Space prevents me from discussing any of these topics here. Instead, I focus on one major topic within neuroethics: The psychological and neuropsychological underpinnings of moral cognition itself. This work is of great intrinsic interest, which in itself is sufficient to justify it. Insofar as neuroethics is concerned centrally with the cognitive science of the normative, the way in which moral cognition is implemented neurally and psychologically is clearly at its heart. Moreover, the topic is also potentially of direct relevance to the other branch of neuroethics: the branch concerned with the normative assessment of technologies stemming from the sciences of the mind. At least some of the theorists who have engaged in the project of identifying the psychological and neural correlates of moral judgments have hoped thereby to show that some moral judgments are less reliable than others. If these claims can be substantiated, work on moral cognition might shape the very tools that neuroethicists use when they ask about the permissibility or advisability of using technologies to shape the mind.
In the next section, I describe and assess the most influential descriptive work in this burgeoning field, including the normative claims theorists have advanced on the basis of this work.
Jonathan Haidt’s Social Intuitionist Model of Moral Judgment
Earlier work in moral cognition was firmly in a rationalist mode. Building on Piaget, Kohlberg (1981) argued that children pass through different developmental phases of moral reasoning; those with sufficiently advanced reasoning capacities might achieve a postconventional stage of moral development, where reasoning is governed by universal principles. Although Kohlberg was influentially criticized by Gilligan (1982), who argued that the model was blind to an equally legitimate care-based approach to moral thinking, more often associated with women, Kohlberg’s rationalist approach to moral cognition dominated the psychological literature for decades.
Jonathan Haidt’s work could hardly be more different. Haidt describes his own view as “sentimentalist” (the reader should beware: The way these terms are used in the psychological literature tracks the processes involved in generating the judgment and not the content of the judgment; therefore the terms fail to map well onto cognate vocabulary in metaethics [Joyce 2008]). Haidt (2001; Haidt, Koller, & Dias 1993) presented high- and low-socioeconomic-status subjects in the United States and subjects in developing countries with a series of vignettes describing taboo actions. Haidt’s vignettes were designed so that the obvious basis for a negative moral judgment was removed. For instance, one vignette concerned an adult brother and sister who had consensual sex. Haidt stipulated that they used two forms of contraception, that they enjoyed the experience but decided not to repeat it, that it had no negative psychological impact on them, and so on. Thus the obvious objections to incest (that it might result in deformities; that it can be expected to cause psychological harm) are avoided. Other vignettes featured subjects cleaning a toilet with their national flag, a man masturbating using a chicken bought from the supermarket prior to very carefully cleaning it and cooking it, a family who decided to cook and eat the family dog after it died in an accident, and so on. Again, all potential harms are stipulated away.
Haidt found that low-socioeconomic-status American subjects and subjects in developing nations condemned the actions. But they were “morally dumbfounded”: They were unable to justify their judgments. High-socioeconomic-status subjects were more likely to abandon their judgments when they found themselves unable to justify them, but they did so reluctantly, if at all. The inability to justify the judgment was experienced as highly uncomfortable, but for most subjects this discomfort was not sufficient to cause them to abandon the judgment.
On the basis of this kind of data, Haidt suggests that moral judgments are not caused by reasoning; rather, our moral judgments are caused by the gut feelings generated by thinking about or perceiving cases. (Initially, Haidt suggested that the contemplation of cases caused an emotional response, which then caused or constituted—Haidt is not clear on this point—a moral judgment with a matching content; in later work (Haidt 2007), he claims only that gut reactions typically have an affective valence.) This kind of data is correlational; although it suggests a causal process (and sufficient controlling for confounds may make the causal inference irresistible), it leaves open the possibility that the results are explained by some factor influencing both the explicit judgment and the gut reaction. Haidt’s next step was to attempt to show causation by manipulating the gut reaction directly.
Wheatley and Haidt (2005) used posthypnotic suggestion to generate a disgust response to the word often. Then, to assess the extent to which the induced disgust modulated subjects’ moral judgments, they presented their subjects—the posthypnotic group and a control group—with vignettes that differed only in whether they featured the trigger word. They found that the manipulation intensified subjects’ negative moral judgments, as expected. Schnall, Haidt, Clore, and Jordan (2008) utilized a different manipulation to the same end. They found that in a subset of subjects—those high in bodily self-consciousness—being seated at a dirty desk intensified negative moral judgments. Once again, a negatively valenced feeling played a causal role in moral judgment, although the cause of the feeling was in fact not the content of the vignette to which the subjects responded.
On the basis of this evidence, Haidt concludes that contrary to Kohlberg’s claims, we do not reason to moral judgments. Rather, for him, the role of reasoning in moral judgment is post hoc and often confabulatory. We are more like lawyers defending a client—where the ‘client’ is a moral judgment generated by nonconscious processes—than seekers after truth. The social intuitionist model does not entirely eliminate reasoning from a causal role. Haidt acknowledges that on rare occasions a person may reject a moral judgment because he or she cannot justify it. Further, reasoning may have an effect on moral judgments via a socially mediated process of reason giving; moral intuitions (as I think we may reasonably describe the proximate causes of Haidtian moral judgments) may gradually shift under pressure, including pressure from the exchange of reasons (Haidt & Bjorklund 2008). However, moral judgments are almost never caused by a process of reasoning, at least when “reasoning” is understood as an explicit inferential process, and they rarely shift once formed.
In more recent work, Haidt and colleagues (Haidt 2012; Haidt & Kesebir 2010) have identified six alleged “foundations” of morality, where a foundation is a dimension along which we implicitly categorize actions and agents; this categorization generates an intuition. Thus, for instance, if we categorize an action as high on the dimension of “fairness” (to mention one foundation), we are likely to generate an approving intuition. The six foundations Haidt identifies are
Care/harm
Fairness/cheating
Liberty/oppression
Loyalty/betrayal
Authority/subversion
Sanctity/degradation
These foundations are supposed to be innate, having an evolutionary explanation. For instance, the concern about fairness may have its origins in reciprocal altruism—behavior which boosts organisms’ inclusive fitness by trading aid—while concerns with sanctity might have an original basis in adaptations designed for pathogen avoidance. More controversially, Haidt argues that moral foundations theory helps to explain and, to some extent, dissolve political controversies. He claims that “liberals”—using that word in its US meaning, to refer, roughly, to social democrats (rather than to refer to people who subscribe to liberal political philosophy)—have a morality that is based almost exclusively on concerns about harm and fairness, whereas conservatives have a morality that is sensitive to all six moral foundations. More controversially still, Haidt argues that the fact that moral judgments are ultimately founded on intuitions entails that there is no way to resolve disagreements between “liberals” and “conservatives,” insofar as many of these conflicts turn simply on such intuitive responses, none of which is better justified than any of the others. Haidt (2012) therefore calls for greater mutual tolerance of opposing views. Haidt’s work might thus be cited by those engaged in first-order work in neuroethics in support of the claim that some disputes may not be resolvable because they reflect different, equally well grounded and perhaps incommensurable worldviews. Indeed, Erik Parens has argued for a view somewhat along these lines (without citing Haidt) with regard to debates over authenticity in neuroethics. Parens (2005) suggests that different conceptions of authenticity reflect different conceptions of the self, each of which is a reflection of a different outlook on human life. Parens thinks that disputes like these cannot be rationally resolved and that it is incumbent on ethicists to acknowledge the merits of the conflicting views.
Haidt’s work has been influential on philosophers (Levy 2007; Prinz 2007), but it is fair to say that it remains somewhat speculative. A number of philosophers have claimed, for instance, that Haidt has shown that emotions cause moral judgments, without paying sufficient attention to the details of Haidt’s data. In the well-known experiment on hypnotically induced disgust, the manipulation does not seem to have caused subjects to confabulate moral wrongness when there was no basis for it (May 2014). In fact, on most vignettes, whether they featured moral transgressions or not (where I operationalize “featuring a moral transgression” as leading the overwhelming majority of controls to characterize the action as morally wrong), the manipulation did not significantly modulate moral judgments. In the much discussed “Dan” case, in which Dan is innocent of any conceivable wrongdoing, the manipulation did indeed have a significant effect, but it did not alter the character of the judgment. Rather, neither controls nor the experimental group found wrongdoing on Dan’s part (subjects rated the actions on a 100-point scale, with 0 representing “not at all morally wrong” and 100 representing “extremely morally wrong.” Controls rated Dan’s action as a 7 on this scale; the experimental group rated it at 14—a significant difference, but in both cases the action was rated close to the “not wrong at all” end of the spectrum).
There are also good reasons to hesitate before concluding that Haidt has shown that reason plays a relatively small role in moral cognition. For most moral philosophers, intuitions are among the data for moral theorizing. That we generate such intuitive responses to cases is common ground; the mere fact that we respond like this has not been seen as evidence in favor of one view in metaethics rather than another. Haidt’s case rests on the claim that subjects’ intuitions cause their judgments and that these judgments do not shift under pressure from rational argument. But the evidence is weaker than he seems to think.
Much of the evidence Haidt (2001, 2010) cites for the conclusion that reasoning is powerless against motivational factors is domain-general rather than morality-specific. Thus it would seem to be equally good evidence for the conclusion that reasoning is inefficacious in all domains. But while the power of reason is more limited than we might hope (and indeed some thinkers have defended views reminiscent of, if more circumspect than, Haidt’s with regard to reasoning more generally; see Mercier & Sperber 2011, for instance), the claims that reason’s power is much smaller than we might like and that people reason more to persuade each other than to seek the truth do not support the kind of relativism Haidt espouses. The success of science is powerful testimony to the fact that our severe cognitive limitations are not an insurmountable obstacle to the successful pursuit of truth.
The success of science may be due to its social organization (Goldman 1999); knowledge seeking is a distributed enterprise, with truth claims generated and tested by competing and cooperating groups of researchers. To the extent that moral progress has been made, it is plausible to think that a similar process has been at work in the moral domain. It is also plausible, moreover, that the success of science is due to institutional features lacking from moral debates—features that constrain and direct debates in ways that make them more fruitful. There may be good reasons, including good moral reasons, why we cannot replicate these institutional structures in the moral domain. But we may, consistent with these constraints, be able to move a great deal closer to such structures. Indeed, perhaps we are already doing so: it is not implausible that the rate of moral progress has sped up considerably in recent decades (think of how recent a genuine commitment to the equality of women is in western countries, or to a recognition of the equal standing of gays, and compare these movements to the centuries-long struggle to abolish slavery). To the extent to which moral argument should be understood as socially distributed, Haidt’s individualistic methodology simply may not be well designed to capture it at work.
Further, reflection on the actual disputes between “liberals” and “conservatives” reveals that much of the conflict actually turns on empirical claims. This is true of the dispute between “liberals” and the group upon which Haidt focuses, the religious right (one need only think of global warming, evolution, or the effects of gun ownership on violent death), or their sometime allies, proponents of deregulated markets (here the disputes are largely economic). There are therefore good reasons to think that Haidt’s intuitionism underplays both the extent to which many moral debates turn on straightforward matters of fact and the extent to which even in the moral domain rational argument alters judgments.
Reflection on free-market thinking raises the question whether Haidt’s moral foundationalism has the resources to understand all varieties of conservative political thought. Advocates of free markets represent the dominant strand on the political right in many western countries. But laissez-faire capitalism is arguably far more corrosive of traditional family and institutional structures than is liberalism. It was not liberalism but capitalism that Marx described with his famous lines from The Communist Manifesto: “All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned” (Marx 1848/1988).
Free-market thinkers therefore appear to recognize no more foundations than do “liberals”; indeed, perhaps they recognize fewer (certainly they might be thought to give less weight to fairness). Foundations of morality do not appear to map, as Haidt seems to think, onto political perspectives. Haidt’s inability to account for this strand of conservative thought prompts the suspicion that the moral foundations theory reflects prevalent political currents in the United States rather than the nature of moral thought per se. This would be ironic, given that Haidt (2012) has repeatedly chastised psychologists for failing to attend to the thought of other cultures.
In the wider cognitive science literature, it remains controversial to what extent emotional arousal actually causes moral judgments, as Haidt thinks is generally the case, and to what extent emotions instead follow from moral judgments, causing behavioral tendencies and perhaps intensifying the judgment (see Avramova & Inbar 2013 for review). The gradually evolving picture seems to be more subtle than Haidt’s groundbreaking work suggested. In particular, there is emerging evidence of emotional specificity and interactions with other elements: Different emotions have different effects on different kinds of judgments in different kinds of people. For instance, one study found that consuming a bitter liquid made subjects judge moral transgressions more harshly, but only in people with conservative political attitudes (Eskine et al. 2011). This evidence is consistent with Haidt’s claim that conservatives are more sensitive to the concerns captured by the sanctity/degradation foundation, since disgust seems to heighten judgments of offenses against purity specifically, leaving judgments about harm unaffected (Horberg et al. 2009). Much more work remains to be done on the interaction between political perspectives, personality traits, emotions, and arguments.
Marc Hauser’s Universal Moral Grammar
Building on foundations laid by the important earlier work of John Mikhail (2000, 2007, 2011), Hauser aims to demonstrate that morality is innate in the same sense in which language is innate: not in its precise content but in its form. Hauser argues that just as, according to Chomsky, each child comes into the world with a brain wired for language acquisition, so we are each born ready to acquire a moral system. Just as the principles innate in the human mind tightly constrain the grammar of any possible human language, so the innate moral principles constrain—without determining—the content of any possible moral system (Hauser 2006; Hauser, Young, & Cushman 2008).
On Hauser’s view, the moral faculty is triggered by the perception of an action or omission and automatically produces a judgment that is sensitive to its causes, consequences, and whether the consequences were intended or foreseen. Actions are perceived as more morally significant than omissions and intended harms are seen as morally worse than foreseen harms. Through web-based surveys (backed up by more traditional lab-based studies as well as fieldwork), Hauser and colleagues have amassed an enormous amount of evidence showing that these distinctions are made in the same way in all cultures, across all educational levels, and by both genders. But subjects are typically unable to articulate adequate justifications for their judgments (Cushman, Young, & Hauser 2006). Hauser takes this inability to be evidence for his view that we have a moral faculty that operates below the level of conscious awareness. Once again, as Hauser points out, the parallels to linguistic competence are clear. Just as we are able effortlessly and automatically to judge whether a sentence is grammatical but our ability to justify our judgments is typically small, so we are able automatically to judge the permissibility of an action but often cannot explain our judgments.
Since our ability to make moral judgments outpaces our ability to justify them, moral judgment does not seem to be the product of rational reflection. Might it be the product of an emotional system instead, as Haidt suggested? Hauser grants that moral judgments are typically accompanied by emotions, but he suggests that these emotions are caused by moral judgments rather than causing them. Part of his evidence for this claim comes from studies of patients with damage to the ventromedial prefrontal cortex. These patients appear to have significantly reduced emotional responses to harms, yet their moral judgments appear to be identical to those of normal subjects on most dilemmas. Hauser believes that emotions powerfully influence moral performance—by influencing our motivation to act morally—but that the moral faculty itself is independent of the emotional system. Moral competence is the product of the innate moral faculty, which has its optional parameters and exceptions set by the culture into which a child is born. Just as there is universal grammar innately encoded in the brain, so there is a universal moral grammar (UMG), which careful study of the responses of subjects across cultures is gradually uncovering.
Like Haidt, Hauser claims that his views have normative implications. He seems to advance two different (but compatible) claims:
(1) Policymakers should take the UMG into consideration in formulating legislation.
(2) Learning the details of the UMG will affect the performance of ordinary agents.
It is easy to see how (1) could be true. Although there is room for parametric variation in the settings of the moral faculty, the points at which parameters can be varied are limited and the extent of permissible variation is constrained. Beyond these permissible variations, we must accept the limitations imposed by the UMG. However, this claim is rather trivial. Hauser may be correct in saying that policy and intuition sometimes conflict, “and when policy and intuition conflict, policy is in trouble” (2006, p. xix). To the extent that it is compatible with our goals, policies ought to be written to minimize these kinds of conflicts. But this is surely obvious. It is certainly not the case, and Hauser does not make the mistake of claiming that it is, that our policies should simply express our intuitions. It is not clear that they can, because it is not clear that our intuitions are stable and consistent enough for policy to express them. In any case, it is clear that attempts to write policies that track intuitions do not always lead to policies with which people are happy to live. Indeed, Hauser’s own example of how conflict with intuitions causes unworkable policy actually illustrates precisely the opposite point. The example he utilizes is the American Medical Association’s policy on terminating patients’ lives, which turns on an act/omission distinction. As Hauser points out, it is an open secret that doctors quite routinely ignore this distinction, actively hastening a patient’s death. Hauser takes this as evidence that the policy is not intuitive, but the lesson should be precisely the opposite: Because doctors see that the distinction, though intuitive, is hard to justify, they sometimes ignore it. What the case illustrates, if anything, is that the fact that a policy is in line with our intuitions is no guarantee that it will be stable. It certainly demonstrates, contra Hauser’s apparent intention in citing it, that our intuitions provide only limited guidance when it comes to policy formulation.
Claim (2) is the more interesting one. Is it true that learning about the UMG and its parametric variations might alter our moral performance? If it is true, what might the effects be? Hauser advocates a kind of moral pluralism and sees “adherence to a single system as oppressive” (2006, p. 425). He hopes that understanding moral differences as stemming from parametric variations of an underlying, universal competence will increase mutual toleration and understanding.
There is reason to be more cautious and less optimistic than Hauser seems to be. He places more weight on the analogy with languages and Chomsky’s universal grammar than it seems able to bear. There are reasons to hope that learning the languages of other cultures might help with intercultural understanding, tolerance, and perhaps some degree of moral progress to the benefit of both cultures. But there are significant differences between languages and the parametric variations to which morality is supposed to be subject. Most obviously, the referents of various languages are (more or less) the same: To learn a new language is to learn a new way of talking about (much) the same old things. But to learn a new moral language is to learn about new things: new (purported) moral facts. One can learn such a language, but one cannot speak both the old language and the new one in propria persona: One cannot believe that the old way of talking and the new are both right. When they conflict, at least one of them must be wrong. One can learn the new language as a fictional language (as one might learn about Hopi physics, for instance). Alternatively, one can adopt the new way of speaking at the expense of the old. Finally, one can take the availability of both ways of talking as evidence that neither is true; that is, one might become a relativist.
As a matter of fact, moral relativism is frequently motivated by recognition of cultural differences. When we see that cultures differ, we might conclude that culture explains away morality (especially if we accept Mill’s [1859, p. 78] claim that “the same causes which make [someone] a churchman in London, would have made him a Buddhist or a Confucian in Peking”). Moral relativism is standardly seen as both philosophically confused and morally reprehensible insofar as it seems to allow that anything goes. Against the latter charge, Hauser might argue that moral relativism motivated by parametric variations in the UMG is less corrosive of morality than cultural relativism is often taken to be: It emphasizes what is shared across cultures as well as what varies. To that extent, it might lead to skepticism only about the objectivity of the features that vary across cultures. This is small comfort, however, when we recognize the extent and the significance of parametric variation.
Indeed, Hauser argues that honor killing is within the range of permissible variation (where “permissibility” is specified by availability of parametric settings of the UMG [2006: pp. 144–146]). Among the parameters that can be set are the permissible targets of killing. I take it, though, that any relativism according to which some actions are universally impermissible but honor killing is not among them is too extreme a relativism for most philosophers to stomach. In any case, although it might follow as a matter of psychological fact that acquaintance with the parametric variations of the UMG led to this kind of relativism, it is clear that it need not: It remains possible that there might be evidence that settles the question whether a particular parametric setting is correct. The evidence might be moral, in the sense that it falls within the purview of reflective equilibrium narrowly construed (i.e., we might find that one permissible setting is inconsistent with other moral beliefs, including, perhaps, others to which we are deeply committed), or it might be nonmoral and thus within the purview of wide reflective equilibrium (perhaps a society would be more stable with one setting than another, or we might find that honor killing settings rest in part on false empirical beliefs about women). The conclusion that UMG has implications for normative ethics therefore seems at best premature.
Joshua Greene’s Dual Process Account
The account of moral cognition that has been most influential in debates in neuroethics, and has been the source of the most interesting normative claims as well, is undoubtedly Joshua Greene’s dual process account. Greene, who originally trained as a philosopher, came to prominence studying the neural correlates of moral judgments modeled on the famous trolley dilemma (Foot 1978; Thomson 1976). The original trolley dilemma contrasted two cases, in both of which it is possible to act so that one person will die and several will live, but which generate opposing intuitions. In Switch, the agent may divert an oncoming trolley to a sidetrack where it will hit and kill one person; if she fails to divert it, the trolley will hit and kill five workers on the track. In Footbridge, the agent may stop an oncoming trolley from hitting and killing five workers only by pushing a large bystander onto the track. Although the numbers of people saved and killed are held constant across scenarios, most philosophers think it is impermissible to push the bystander, though it may be permissible to divert the trolley. Ordinary people agree with these judgments (Cushman et al. 2006; Hauser 2006). Greene aimed to explain the neuropsychological mechanisms at work.
Call the judgment that we ought to act to save more lives rather than fewer a consequentialist judgment and the judgment that there are some actions we may not perform even if they produce better consequences than alternatives a deontological judgment. In that case, we can say that most subjects and most philosophers produce consequentialist judgments in the Switch case and deontological judgments in the Footbridge case (with a minority—around 10%—of subjects producing consequentialist judgments in the Footbridge case). Greene et al. (2001) claimed that consequentialist judgments are associated with activation in dorsolateral prefrontal cortex and inferior parietal lobe, which Greene claims are associated with working memory and therefore rational processes. Deontological judgments, by contrast, are associated with the ventromedial prefrontal cortex, the posterior cingulate cortex, and other regions that Greene associates with emotional processing.
In light of this evidence, Greene argues that consequentialist judgments are produced by reflective processes but deontological judgments are not. Greene interprets these results in line with the influential (though now highly controversial) dual process theory in cognitive psychology, which depicts cognition as produced through the interaction of fast, effortless, ballistic, unconscious, and encapsulated system 1 processes and slow, effortful, conscious and domain-general system 2 processes. Although the original 2001 paper has come in for devastating criticism (see, e.g., Kahane & Shackel 2008), Greene argues that later work, from his laboratory and from other labs, supports the dual process account of moral judgment. He mentions, for instance, the work of Koenigs et al. (2007), who administered moral dilemmas to subjects who had suffered damage to the ventromedial prefrontal cortex. This kind of brain injury is associated with emotional blunting; if emotions play a more significant role in deontological judgments than in consequentialist judgments, subjects with VMPFC damage should be significantly more consequentialist in their judgments. This is indeed what was found.
Against this suggestion, Avramova and Inbar (2013) have argued that the VMPFC plays too many different roles in cognition for us to be able to infer that emotional blunting is what drives the judgments Koenigs et al. cite. Moreover, there is evidence that apparently conflicts with the dual process model. Terbeck et al. (2013), for instance, found that propranolol, a beta blocker, increased the likelihood that subjects would judge “up close and personal harms” as morally unacceptable. This is apparently inconsistent with the dual process view, since propranolol has an inhibitory effect on emotional arousal. It seems fair to say that no current theory of moral judgment neatly explains all the available data and that the dual process view remains a viable contender.
Greene (2003) and some of his allies (Singer 2005) have argued that his results support a debunking explanation for deontological intuitions. They do not merely explain deontological intuitions; they explain them away. One argument for this conclusion is what Berker (2009, p. 316) calls the “emotions bad, reason good argument”—that is, the fact that deontological intuitions are generated by emotional processes is sufficient to cast doubt on their warrant. But this argument is not persuasive: By itself, the claim that consequentialist judgments are associated with working memory and therefore rational processes and deontological judgments are associated with emotional processes—even if these claims stand up to scrutiny—does not deliver the result that Greene and his allies seek. The claim that a judgment is untrustworthy (only) because it is generated by emotional processes begs the question against cognitivist accounts of emotions. However, Greene and Singer have resources to strengthen the argument. They claim that at least some of our intuitive judgments track irrelevant factors. In the environment of evolutionary adaptiveness, to which our moral responses are (allegedly) attuned, harms to persons had to be “up close and personal”; the limitations in technologies available ensured that when one agent harmed another, the two were never further away than a stone’s throw. For this reason, moral mechanisms may respond to harms that are caused by low-tech means while failing to respond to those caused by means that allow causation at an (apparent) distance. Greene and Singer suggest that it is because dilemmas like Footbridge involve an up close and personal application of force that subjects feel a powerful intuitive revulsion toward the act. In cases like Switch, however, the infliction of harm is technologically mediated and the relevant mechanisms are not triggered.
It is plausible that this difference between the cases triggers different mechanisms. Intuitive mechanisms—the mechanisms associated with system 1—may indeed play a greater role when stimuli match those to which they are attuned, and reflective processing may be more predominant in other circumstances. However, it is difficult to see how we might construct a good argument for the conclusion that deontological judgments ought to be rejected on this basis. Indeed, it is as easy to construct an argument for the opposite view. There is a case, after all, for identifying the output of the relevant system 1 process with a moral judgment (so, for instance, Hauser might claim). That is, we might argue that in cases like Switch, subjects judge it permissible to throw the switch only because the case has features that their moral faculty is not designed to be able to process; the case bypasses the moral faculty, and the subjects fail to make a moral judgment at all. Indeed, this claim is not an entirely implausible response, if not (so much) to cases like Switch as to our responses to some aspects of modern warfare, to take one example. Drone warfare may seem less morally problematic because the harms involved are multiply technologically mediated, and the correct response to these cases might be to override our reflective judgment in favor of the judgment we would have were the harms caused up close and personal. This line of thinking would support deontological responses rather than consequentialist responses.
If Singer and Greene are to be able to support the conclusion that deontological intuitions are to be rejected because they fail to be truth-tracking, they will need additional resources. So far as I can tell, these resources must be philosophical; they must offer an account of morality that entails that deontological intuitions do not feature among the foundations of morality. In recent work, Singer (together with Katarzyna de Lazari-Radek) has attempted to shoulder this burden. De Lazari-Radek and Singer (2012) argue that morality has its foundations in reason, entirely independent of our intuitions. Many people will find the claim wildly implausible, both because of the continuities between human morality and animal behavior and because it is very plausible to think that all moral theories, including utilitarianism, owe their plausibility to how well they systematize intuitions. But these objections need not concern us. What matters here is that the account of morality offered by de Lazari-Radek and Singer entails that deontological intuitions—indeed all intuitions—are to be set aside for reasons entirely independent of details concerning their implementation in the brain. For this reason, the argument does not succeed in doing what Greene (at least) might have hoped: showing that the neuroscience of morality supports one normative view over another. All the work is done by the account of the nature of morality and the neuroscience has no independent evidential value.
The three theories of the psychological and neuropsychological foundations of morality reviewed above are the three most ambitious and influential. Other neuroscientists—most notably Jorge Moll (2002a,b; 2008a,b)—have offered alternative accounts of the neural underpinnings of moral judgments, though without the ambition of drawing normative conclusions. Some philosophers have utilized work in cognitive psychology or experimental philosophy to argue for limited revisions in normative thought. For instance, Horowitz (1998) has suggested that the doing/allowing distinction is an output of the same heuristic that generates an asymmetry between losses and forgone gains; since the latter heuristic can be shown unequivocally to generate irrational judgments (since it can lead to divergent responses to the same problem, depending upon how the problem is framed; see Tversky & Kahneman 1981), the doing/allowing distinction may inherit that unreliability (see Kamm 2007 for a reply to Horowitz).
More recently, Levy (2011) has suggested that the venerable doctrine of double effect—according to which intentionally caused harms are morally more significant than harms caused as a foreseen by-product of pursuing another goal—might be the product of the same mechanisms that produce the Knobe effect (Knobe 2003, 2006). The Knobe effect is an apparent biasing effect of normative considerations on subjects’ judgments as to whether a certain act was intentional or not; subjects are significantly more likely to judge that an agent intentionally brought about a state of affairs of which they disapprove than one of which they approve. Levy suggests that the doctrine of double effect may trade on this mechanism. If that is correct, he suggests, it cannot play the role in normative ethics that it has been invoked to play. The doctrine of double effect is supposed to allow us to distinguish between actions that are permissible (e.g., intending to relieve pain while foreseeing that the means taken will result in the death of a terminally ill patient) and those that are impermissible (e.g., intending to bring about the death of a terminally ill patient). If the intuition that a particular act is permissible because the agent merely foresees causing a bad state of affairs is produced by mechanisms that take normative considerations as inputs, the doctrine might be circular. Rather than being a neutral mechanism for distinguishing between the permissible and the impermissible, it responds to intuitions of permissibility prior to generating the judgment that because a harm is merely foreseen, an action is permissible.
There are strategies available to resist Levy’s argument, as there are to resist Horowitz’s claims, those of Greene and his supporters, and those of other theorists engaged in attempting to explain moral judgments and thereby to generate normative conclusions. These arguments ought, however, not to be held to unreasonably high standards. It is highly unlikely that anyone will succeed anytime soon in offering an argument, from the implementation details of moral judgments, for the conclusion that a whole class of moral judgments is unreliable, which must be accepted on pain of irrationality. Rather, the best that such arguments can hope for is to lower our confidence that a certain kind of moral judgment is reliable. Whether any of the arguments mentioned above meet even this lowered standard is left for the reader to judge.
References
Ariely, D., & Berns, G. S.
Avramova, Y. R., & Inbar, Y.
Berker, S.
Cushman, F. A., Young, L., & Hauser, M. D.
De Lazari-Radek, K., & Singer, P.
Doris, J.
Elliott, C.
Eskine, K., Kacinik, N., & Prinz, J.
Henry, M., Fishman, J. R., & Youngner, S. J.
Foot, P.
Gilligan, C.
Goldman, A. I.
Goodenough, O. R., & Tucker, M.
Greene, J.
Greene, J., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D.
Haidt, J.
Haidt, J.
Haidt, J.
Haidt, J., & Bjorklund, F.
Haidt, J., Koller, S. H., & Dias, M. G.
Haidt, J., & Kesebir, S.
Hauser, M.
Hauser, M., Young, L., & Cushman, F.
Horberg, E. J., Oveis, C., Keltner, D., & Cohen, A. B.
Horowitz, T.
Hurley, E. A.
Joyce, R.
Kadosh, R., Levy, C., O’Shea, N., Shea, J., & Savulescu, J.
Kahane, G., & Shackel, N.
Kamm, F. M.
King, M., & Carruthers, P.
Knobe, J.
Knobe, J.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Hauser, M., Cushman, F., & Damasio, A.
Kohlberg, L.
Levy, N.
Levy, N.
Levy, N., & Clarke, S.
Libet, B.
Marx, K.
May, J.
Mehta, M. A., Owen, A. M., Sahakian, B. J., Mavaddat, N., Pickard, J. D., & Robbins, T. W.
Mele, A.
Mercier, H., & Sperber, D.
Metzinger, T., & Hildt, E.
Mikhail, J.
Mikhail, J. 2000. Rawls’ linguistic analogy: A study of the “generative grammar” model of moral theory described by John Rawls in A theory of justice. Ph.D. dissertation. Ithaca, NY: Cornell University.
Mikhail, J.
Mill J. S.
Moll, J., De Oliveira-Souza, R., Eslinger, P., Bramati, I., Mourao-Miranda, J., Andreiuolo, P., & Pessoa, L.
Moll, J., De Oliveira-Souza, R., Bramati, I. E., & Grafman, J.
Moll, J., De Oliveira-Souza, R., & Zahn, R.
Moll, J., De Oliveira-Souza, R., Zahn, R., & Grafman, J.
Nahmias, E.
Newberg, A. B.
Parens, E.
Prinz, J.
Roskies, A.
Sahakian, B. J., & Morein-Zamir, S.
Schermer, M.
Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H.
Singer, P.
Snyder, A. W., Mulcahy, E., Taylor, J. L., Mitchell, D. J., Sachdev, P., & Gandevia, S. C.
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D.
Terbeck, S., Kahane, G., McTavish, S., Savulescu, J., Levy, N., Hewstone, M., & Cowen, P.
Thomson, J. J.
Tversky, A., & Kahneman, D.
Wegner, D. M.
Wexler, B. E.
Wheatley, T., & Haidt, J.