Titles and abstracts

Conference: Norms and moral psychology

Early moral reasoning about interactions between ingroup and outgroup agents

Renée Baillargeon (Psychology, University of Illinois at Urbana-Champaign)

Over the past 15 years, a great deal of research has examined infants' psychological reasoning: their ability to make sense of the actions of agents. This research suggests that, when watching an agent act in a scene, infants can attribute to the agent a whole host of mental states, including motivational states (eg goals), reality-congruent informational states (eg knowledge), and reality-incongruent informational states (eg false beliefs). Furthermore, infants expect the agent to act in accordance with a Principle of Rationality, which states that, when pursuing a goal, agents will select means that are both causally appropriate and efficient.

More recently, investigators have begun to examine early moral reasoning: infants' and toddlers' expectations about interactions among agents. My collaborators and I have begun to explore three of the principles that might guide these expectations: Reciprocity, Fairness, and Group Loyalty. The Principle of Reciprocity states that if A acts toward B, and B chooses to respond, then B's action will match A's action in value (ie valence and magnitude), though not necessarily in form.

The Principle of Fairness states that agents should be fair in distributing resources, costs, rewards, punishments, and so on. Finally, the Principle of Group Loyalty states that agents should act in ways that support their group. Most of our research examines children's responses in third-party situations involving novel groups. Our results so far suggest that (1) infants and toddlers generally expect interactions among agents to unfold in accordance with the Principles of Reciprocity and Fairness, and (2) these expectations are moderated in predictable ways by considerations of Group Loyalty.


Sacred values in the service of good consequences

Will Bennis (Psychology, Northwestern University)

Sacred values concern goods that are taken to be so important that the mere contemplation of their sacrifice is deemed inappropriate and induces (often strong) negative emotions. They pose important paradoxes for decision theory, not the least of which is that the domain where consequences are subjectively most important (that of the sacred) is precisely the domain where people refuse to consider costs and benefits. By extension, people wilfully fail to maximise expected utility just when it matters most (at least for those who accept cost-benefit analysis as a normative standard).

This talk will discuss four reasons we should take this interpretation - that decision makers who refuse to weigh costs and benefits are not intending to maximise expected utility - with a degree of scepticism:

  1. Gaps between researchers and research participants in the semantic scope of costs and benefits

  2. The participants' desire to avoid slippery slopes

  3. The participants' scepticism about the reliability of cost-benefit analysis as a means to maximise expected utility relative to other available decision modes

  4. Justified rejection of the study scenarios' unrealistic constraints

The research is based on interviews with a variety of cultural groups in northeast Wisconsin; the four reasons above are largely culled from participants' own explanations of why weighing costs and benefits is inappropriate. The talk concludes with a plea for more qualitative field studies (as a precursor to null-hypothesis testing) in research on the cultural psychology of morality.


Two functions of morality

Fiery Cushman (Psychology, Harvard University)

Sometimes people cause harm accidentally; other times they attempt to cause harm, but fail. How do ordinary people treat cases where intentions and outcomes are mismatched? While people's judgments of moral wrongness depend overwhelmingly on an assessment of intent, their judgments of deserved punishment exhibit substantial reliance on accidental outcomes as well. This pattern of behavior is present at an early age and consistent across both survey-based and behavioral economic paradigms. This raises a question about the function of our moral psychology: why do we judge moral wrongness and deserved punishment by different standards?

Models of the evolution of social behaviour emphasise a reciprocal relationship between punishment and prosociality. Punishment is worthwhile if it enforces prosociality; prosociality is worthwhile when enforced by punishment. This poses two functional challenges for an individual: determining which behaviours to punish in others, and determining which behaviours to perform oneself. I present evidence that these distinct functional demands cause us to punish accidents, while not regarding them as wrongful.


Lies in disguise - an experimental study on cheating

Urs Fischbacher (Economics, University of Konstanz)

How and why do people lie? In a new experimental design, we can investigate the distribution of lying behaviour in a population. In this experiment, participants receive a die and instructions on how the reported number translates into their payoff. The roll of the die is not monitored, so subjects can report a number that yields a higher payoff. About one fifth of the subjects are full liars, and 39% can be categorised as honest. Among the remaining subjects, we found some who lied but did not maximise their income by doing so.
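As a rough illustration of how such an unmonitored-report design supports inferences about lying at the population level, the sketch below simulates reported rolls under a mix of reporting types and compares them with the uniform distribution expected from honest reporting. It is a hypothetical example, not Fischbacher's protocol or analysis; the payoff rule, the type shares, and the 'partial liar' strategy are assumptions made purely for illustration.

```python
import random

# Hypothetical sketch of an unmonitored die-roll report task.
# Assumed payoff rule: a report of 1-5 pays that many units; a report of 6 pays nothing.
# Type shares are taken loosely from the abstract; the "partial liar" strategy is invented.

N = 10_000
SHARE_HONEST = 0.39     # report the true roll
SHARE_FULL_LIAR = 0.20  # always report the highest-paying number
# Everyone else is a "partial liar" who inflates the report without jumping to the maximum.

def payoff(report: int) -> int:
    """Payoff under the assumed rule."""
    return report if report <= 5 else 0

def reported(true_roll: int, subject_type: str) -> int:
    if subject_type == "honest":
        return true_roll
    if subject_type == "full":
        return 5  # highest-paying report under the assumed rule
    return min(true_roll + 1, 5)  # partial liar: inflate by one step, capped at the top payoff

reports = []
for _ in range(N):
    roll = random.randint(1, 6)
    u = random.random()
    kind = ("honest" if u < SHARE_HONEST
            else "full" if u < SHARE_HONEST + SHARE_FULL_LIAR
            else "partial")
    reports.append(reported(roll, kind))

# Under fully honest reporting, each number would appear about 1/6 of the time;
# excess mass on high-paying reports is the population-level footprint of lying.
for k in range(1, 7):
    print(k, round(reports.count(k) / N, 3))
```

Because no individual roll is observed, no single report proves a lie; only the shape of the reported distribution, relative to the uniform benchmark, reveals how much and what kind of lying is going on.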

A broad class of lying-aversion models cannot explain the observed data. Trying to appear honest toward the experimenter explains a small part of the incomplete lying. The desire to maintain a favourable self-concept appears to be a crucial element in explaining our data.


Of trolleys and cheaters: Automatic and controlled processes in moral judgement

Joshua Greene (Psychology, Harvard University)

This talk will cover three related topics. First, I'll review evidence supporting and extending the 'dual-process' theory of moral judgment, according to which deontological moral judgments (favouring 'rights' over the 'greater good') are driven by automatic emotional responses, while utilitarian judgments (favouring the 'greater good' over 'rights') are enabled by controlled cognitive mechanisms. This will include new behavioural and fMRI studies of healthy individuals and psychopaths.

Second, I'll discuss a series of behavioural experiments aimed at identifying factors to which people's automatic emotional responses are sensitive, with an eye on evaluating the quality of the judgments based on these responses. Finally, I'll discuss a new line of research on the neural and cognitive bases of honesty and dishonesty.


Guilt and shame in philosophy and psychology

Corey Maley and Gilbert Harman (Philosophy, Princeton University)

Philosophers often see a deep connection between morality and guilt or shame, but disagree about what the connection is and indeed about what guilt and shame consist in. For example, Brandt (1967, 1979, 1992) and Gibbard (1992) suggest that one's moral principles can be identified with those principles one would feel (or would be warranted in feeling) guilt or shame for violating. Nietzsche distinguishes two basic types of morality depending on whether the relevant feeling is guilt (slave morality) or shame (master morality).

Walter Kaufmann (1973) argues against a morality of guilt. Bernard Williams (1993) says that an ethics of shame is to be preferred to a morality of guilt. On the other hand, recent psychological research has been described as showing that people are better off if they are subject to guilt rather than shame (Tangney et al. 2007).

Matters are complicated by differences in how the terms guilt and shame are used. Here we follow the usage in Tangney et al. We will use the word guilt for a negative feeling directed at one's act or failure to act. We will use the word shame for a negative feeling directed at oneself. In this usage, guilt is a certain way of feeling that what one has done or not done was bad and shame is a certain way of feeling that one is bad for having acted or not acted. Bernard Williams' and Nietzsche's usage appears to coincide with ours (based on Tangney's).

Gibbard and Kaufmann appear to take guilt to include what we are calling shame. Tangney et al. say that most people do not distinguish guilt and shame for having acted badly. Of course, in ordinary usage, shame, but not guilt, is used more widely to include negative feelings about oneself because of other things: misbehaviour of one's relative or country, for example.

Given our terminological agreement with Tangney et al. and Williams, we ask and try to answer such questions as:

  1. Are there people who are not susceptible to guilt as we are using this term? If so, can such people be moral people? Can they have a sense of what it is to do something because that is the morally right thing to do?

  2. Is it a good thing to be susceptible to guilt? Should children be brought up to be susceptible to guilt?

  3. Similar issues about shame.

Brandt, R. B. (1967). 'Some merits of one form of rule-utilitarianism,' University of Colorado Studies, Series in Philosophy, No. 3.

Brandt, R. B. (1979). A Theory of the Good and the Right. Oxford: Clarendon Press.

Brandt, R. B. (1992). 'Introductory comments,' Morality, Utilitarianism, and Rights. Cambridge, England: Cambridge University Press.

Gibbard, A. (1992). 'Moral concepts: Substance and sentiment,' Philosophical Perspectives, 6, Ethics.

Kaufmann, W. (1973). Without Guilt and Justice: From Decidophobia to Autonomy (New York: Peter Wyden).

Nietzsche, F. (1966). Thus Spoke Zarathustra: A Book for All and None, translated by W. Kaufmann. New York: Viking Press.

Tangney, J. P., Stuewig, J. and Mashek, D. J. (2007). 'Moral emotions and moral behavior.' Annual Review of Psychology, 58, 345-72.

Williams, B. (1993). Shame and Necessity. Berkeley, CA: University of California Press.


'Any animal whatever': Harmful battery and its elements as building blocks of human and nonhuman moral cognition

John Mikhail (Psychology, Law and Philosophy, Georgetown University)

Darwin's (1871) observation that evolution has produced in us "certain emotions responding to right and wrong conduct, which have no apparent basis in the individual experiences of utility" is a useful springboard from which to clarify an important problem in the cognitive science of moral judgment. The problem is whether a certain class of moral judgments is constituted or driven by emotion (eg Greene, 2008; Greene and Haidt, 2002), or merely correlated with emotion while being generated by unconscious computations (eg Mikhail, 2007; also Huebner et al., 2008).

Although the point has escaped notice until now, all 25 of the 'personal' vignettes used by Greene and colleagues (2001) in their fMRI study of emotional engagement in moral judgment describe well-known crimes or torts. Specifically, 22 of these scenarios describe actions that satisfy a prima facie case of purposeful battery and/or intentional homicide (ie murder). Two other cases describe acts that constitute rape and sexual battery, while the final case describes a prima facie case of negligence. With one exception, then, what Greene actually did in the 'personal' condition of his experiment was to put subjects in the scanner and ask them to respond to a series of violent crimes and torts.

By contrast, only 5 of 19 cases in Greene's 'impersonal' condition are batteries, and only one of these batteries is purposeful. The other four batteries involve foreseeable but non-purposeful harmful contact, at least two of which admit of a clear necessity defence. The remaining 14 'impersonal' scenarios are a hodgepodge of cases that raise a variety of legal issues: fraud, tax evasion, insider trading, public corruption, theft, unjust enrichment, and necessity as a defence to trespass to chattels, among others. Finally, 5 of these residual cases describe risk-risk tradeoffs in the context of vaccinations and environmental policy.

The upshot is that Greene's initial fMRI experiments did not really test two patterns of moral judgment - one 'deontological' and the other 'consequentialist' - so much as different categories of potentially wrongful behaviour. The basic cleavage he identified in the brain was not Kant versus Mill, but purposeful battery, rape, and murder, on the one hand, and a disorderly grab bag of theft crimes, regulatory crimes, torts against non-personal interests, and risk-risk tradeoffs, on the other. Moreover, his finding that the MPFC, PCC, STS, and amygdala are recruited for judgment tasks involving purposeful battery, rape, and murder does not undermine the traditional rationalist claim that moral rules are engraved in the mind (eg Grotius, 1625; Kant, 1788; Leibniz, 1704).

On the contrary, Greene's evidence largely supports that thesis. Crimes and torts have *elements*, and the relevant pattern of intuitions is best explained by assuming that humans possess tacit knowledge of legal *rules*. Naturally, violent crimes and torts are more emotionally engaging than insider trading or environmental risk analysis, but it does not follow that emotion 'constitutes' or 'drives' the judgment that the former acts are wrong.

Rather, what drives these intuitions are the unconscious computations that characterise these acts as battery, rape, or murder in the first place. By mischaracterising his own stimuli, Greene and other researchers (eg Koenigs et al., 2007) have misinterpreted their own findings and misunderstood the nature of the problem.

Returning to Darwin, the main questions for cognitive science going forward include (1) how the brain computes unconscious representations of battery, murder, rape, negligence, and other forms of harmful trespass, and (2) how these computations and the negative emotions they typically elicit are related to the complex socio-emotional capacities that humans share with nonhuman animals. Future research should focus more squarely on these topics and move beyond misleading pseudo-problems such as how emotion and reason 'duke it out' in the brain.

Darwin, C. (1981/1871). The Descent of Man, and Selection in Relation to Sex. Princeton, NJ: Princeton University Press.

Greene, J. (2008). The secret joke of Kant's soul; and Reply to Mikhail and Timmons. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. Cambridge, MA: MIT Press, pp. 35-79, 105-117.

Greene, J. and Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517-523.

Greene, J., Sommerville, R., Nystrom, L., Darley, J. and Cohen, J. (2001). An fMRI investigation of emotional engagement in moral Judgment. Science, 293, 2105-2108.

Grotius, H. (1925/1625). On the Law of War and Peace. (F. W. Kelsey, Trans.). Oxford: Clarendon Press.

Huebner, B., Dwyer, S. and Hauser, M. (2008). The role of emotion in moral psychology. Trends in Cognitive Sciences, 13, 1-6.

Kant, I. (1993/1788). Critique of Practical Reason. (L. W. Beck, Trans.). New York: Macmillan.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., and Damasio, A. (2007). Damage to ventromedial prefrontal cortex increases utilitarian moral judgments. Nature, 446, 908-911.

Leibniz, G. (1981/1704). New Essays on Human Understanding. (P. Remnant and J. Bennett, Eds.). Cambridge: Cambridge University Press.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Sciences, 11, 143-152.


The norm of self-sacrifice

Sonya Sachdeva (Psychology, Northwestern University)

Recent work in moral psychology has begun to acknowledge that culture may play a large role in shaping people's moral values. Cultural factors can affect how our moral modules are structured and which specific moral principles are most salient in a community (Hauser, 2006; Shweder, Mahapatra and Miller, 1997; Haidt and Joseph, 2004). Moreover, the way an individual construes herself in relation to her social group can affect her broader orientation toward moral concepts (Moghaddam, Slocum, Finkel, Mor and Harre, 2002).

Specifically, the two orientations toward morality that I explore in this work are rights-based and duty-based systems of morality. As a result of these culturally specific systems of morality, the norm of self-sacrifice may evolve to be an important moral virtue among certain groups.

The current work investigates how having self-sacrifice as an important part of one's moral system can affect perceptions of moral behaviour and the formation of moral judgments. In a series of five studies, I show that self-sacrifice is linked to a duty-based orientation toward morality and is more salient among some social classes than others. I also demonstrate that the value of self-sacrifice is limited by certain cultural constraints, such as social role expectations and other contextual factors.

These results have implications for behavioural scientists' understanding of individuals' motivation to engage in social action, and they suggest that self-interest may not be the most useful framework across all cultures and social contexts.


The role of intent across distinct moral domains

Liane Young (Brain and Cognitive Sciences, MIT)

When we make moral judgments of people's actions, we consider not only the outcomes of the actions but also people's mental states concerning their actions. Did she believe she would cause harm? Did she intend to harm? Typically, beliefs and intentions match the outcomes: when a person thinks she is sweetening her friend's coffee by putting sugar in it, she is usually not mistaken. Mismatches occur, however, in the case of accidents (eg when the 'sugar' is in fact poison) and failed attempts to harm (eg when the 'poison' is in fact sugar).

The current work uses behavioural methods, functional neuroimaging (fMRI), transcranial magnetic stimulation (TMS), and neuropsychological methods to characterise the cognitive and neural mechanisms for judgments of innocence and guilt. First, this work reveals mental states as key cognitive inputs to moral judgment and identifies specific neural substrates that support mental state processing. For example, activity in the right temporo-parietal junction (RTPJ) is correlated with the use of mental states for moral judgment, and disrupting RTPJ activity disrupts the use of mental states for moral judgment.

Second, this work investigates whether the role of mental states is stable across distinct moral domains. Behavioural and fMRI results suggest that mental states such as beliefs and intentions matter more for moral judgments of harmful actions than for actions considered morally impure (eg incest, food taboos). Together, the evidence informs our understanding of the functional role of moral norms and mental state reasoning.

