It is a philosophical truism that we must think of others as moral agents, not merely as causal or statistical objects. But why? I argue that this follows from the best resolution of an antinomy between our experience of morality as necessarily binding on the will and our knowledge that all moral beliefs originate in contingent histories. We can address this antinomy only by understanding moral deliberation via interpersonal relationships, which simultaneously vindicate and constrain morality’s binding of the will. This means that moral agency is fundamentally social. I model an attitude toward our causal nature on sociologist Erving Goffman’s concept of ‘civil inattention’; our social practice of agency requires that we give minimal attention to the contingent origins of moral judgments in ourselves and others. Understood this way, seeing ourselves as moral agents requires avoiding appeal to causal aetiology to settle substantive moral disagreement.
We ought to treat others’ moral views with respect, even when we disagree. But what does that mean? This paper articulates a moral obligation to make ourselves open to sincere moral persuasion by others. Doing so allows us to participate in valuable relationships of reciprocal respect for agency. Yet this proposal can sound tritely agreeable. To explore its full implications, the paper applies the general obligation to one of the most challenging topics of moral disagreement: the morality of abortion. I consider and reject arguments that abortion decisions have special features exempting them from the obligation to be open to moral persuasion. Further, I argue that viewing fetal ultrasound images can accomplish moral persuasion. Accordingly, in at least some cases a woman seeking abortion has an obligation to view fetal ultrasound images as a means of being open to moral persuasion. However, this conclusion does not support recent laws compelling women seeking abortion to view ultrasound images; such laws are in fact incompatible with the respect for agency that underwrites the obligation to be open to persuasion.
This paper does four things: (1) It provides an analysis of the concept ‘fake news’. (2) It identifies distinctive epistemic features of social media testimony. (3) It argues that partisanship-in-testimony-reception is not always epistemically vicious; in fact some forms of partisanship are consistent with individual epistemic virtue. (4) It argues that a solution to the problem of fake news will require changes to institutions, such as social media platforms, not just to individual epistemic practices.
Learning the psychological origins of our moral judgments can lead us to lose confidence in them. In this paper I explain why. I consider two explanations drawn from existing literature – regarding epistemic unreliability and automaticity – and argue that neither is fully adequate. I then propose a new explanation, according to which psychological research reveals the extent to which we are disturbingly disunified as moral agents.
This paper presents a regress challenge to the selective psychological debunking of moral judgments. A selective psychological debunking argument conjoins an empirical claim about the psychological origins of certain moral judgments to a theoretical claim that these psychological origins cannot track moral truth, leading to the conclusion that the moral judgments are unreliable. I argue that psychological debunking arguments are vulnerable to a regress challenge, because the theoretical claim that ‘such-and-such psychological process is not moral-truth-tracking’ relies upon moral judgments. We must then ask about the psychological origins of these judgments, and then make a further evaluative judgment about these psychological origins… and so on. This chain of empirical and evaluative claims may continue indefinitely and, I will argue, proponents of the debunking argument are in a dialectical position where they may not simply call a halt to the process. Hence, their argument cannot terminate, and its debunking conclusion cannot be upheld.
Recent empirical work appears to suggest that the moral intuitions of professional philosophers are just as vulnerable to distorting psychological factors as are those of ordinary people. This paper assesses these recent tests of the ‘expertise defense’ of philosophical intuition. I argue that the use of familiar cases and principles constitutes a methodological problem. Since these items are familiar to philosophers, but not ordinary people, the two subject groups do not confront identical cognitive tasks. Reflection on this point shows that these findings do not threaten philosophical expertise – though we can draw lessons for more effective empirical tests.
A popular argument form uses general theories of cognitive architecture to motivate conclusions about the nature of moral cognition. This paper highlights the possibility for modus tollens reversal of this argument form. If theories of cognitive architecture generate predictions for moral cognition, then tests of moral thinking provide feedback to cognitive science. In certain circumstances, philosophers’ introspective attention to their own moral deliberations can provide unique data for these tests. Recognizing the possibility for this sort of feedback helps to illuminate a deep continuity between the disciplines.
The evidential value of moral intuitions has been challenged by psychological work showing that the intuitions of ordinary people are affected by distorting factors. One reply to this challenge, the expertise defence, claims that training in philosophical thinking confers enhanced reliability on the intuitions of professional philosophers. This defence is often expressed through analogy: since we do not allow doubts about folk judgments in domains like mathematics or physics to undermine the plausibility of judgments by experts in these domains, we also should not do so in philosophy. In this paper I clarify the logic of the analogy strategy, and defend it against recent challenges by Jesper Ryberg. The discussion exposes an interesting divide: while Ryberg’s challenges may weaken analogies between morality and domains like mathematics, they do not affect analogies to other domains, such as physics. I conclude that the expertise defence can be supported by analogical means, though care is required in selecting an appropriate analog. I discuss implications of this conclusion for the expertise defence debate and for study of the moral domain itself.
The Science of Morality and its Normative Implications (with Tommaso Bruni and Matteo Mameli)
Neuromoral theorists are those who claim that a scientific understanding of moral judgment through the methods of psychology, neuroscience and related disciplines can have normative implications and can be used to improve the human ability to make moral judgments. We consider three neuromoral theories: one suggested by Gazzaniga, one put forward by Gigerenzer, and one developed by Greene. By contrasting these theories we reveal some of the fundamental issues that neuromoral theories in general have to address. One important issue concerns whether the normative claims that neuromoral theorists would like to make are to be understood in moral terms or in non-moral terms. We argue that, on either a moral or a non-moral interpretation of these claims, neuromoral theories face serious problems. Therefore, neither the moral nor the non-moral reading of the normative claims makes them philosophically viable.
The debate between proponents and opponents of a role for empirical psychology in ethical theory seems to be deadlocked. This paper aims to clarify the terms of that debate, and to defend a principled middle position. I argue against extreme views, which see empirical psychology either as irrelevant to, or as wholly displacing, reflective moral inquiry. Instead, I argue that moral theorists of all stripes are committed to a certain conception of moral thought – as aimed at abstracting away from individual inclinations and toward interpersonal norms – and that this conception tells against both extremes. Since we cannot always know introspectively whether our particular moral judgments achieve this interpersonal standard, we must seek the sort of self-knowledge offered by empirical psychology. Yet reflective assessment of this new information remains a matter of substantive normative theorizing, rather than an immediate consequence of empirical findings themselves.
In a recent paper, Giubilini and Minerva argue for the moral permissibility of what they call ‘after-birth abortion’, or infanticide. Here I suggest that they actually conflate two distinct arguments: one relying on the purportedly identical moral status of a fetus and a newborn, and the second giving an independent argument for the denial of moral personhood to infants (independent of whatever one might say about fetuses). After distinguishing these arguments, I suggest that neither one is capable of supporting Giubilini and Minerva’s conclusion. The first argument is at best neutral between permitting infanticide and prohibiting abortion, and may in fact more strongly support the latter. The second argument, I suggest, contains an ambiguity in its key premise, and can be shown to fail on either resolution of that ambiguity. Hence, I conclude that Giubilini and Minerva have not demonstrated the permissibility of infanticide, or even great moral similarity between abortion and infanticide.
2017. Moral Inferences, eds. Jean-Francois Bonnefon and Bastien Tremoliere
Some ethicists try to settle moral disagreement by ruling out particular types of moral reasoning on the basis of cognitive scientific evidence. We argue that the cognitive science of reasoning is not well-suited to this Archimedean role. Through discussion of several influential research programs, we show that such attempts tend either to fail to be Archimedean (by assuming controversial moral views) or to fail to settle disagreement (by getting caught up in unsettled debates about rationality). We speculate that these outcomes reflect a fundamental sort of normative disagreement, which can be reshuffled between the domains of morality and rationality, but cannot be avoided.
2016. APA Newsletter on Teaching Philosophy 15(2)
Why do students resist engaging with philosophical thought experiments? How can we address this in the classroom? What should you do, as a teacher, if you harbor some skepticism about thought experiment methodology yourself?
2015. Internet Encyclopedia of Philosophy
What do we know about how people make moral judgments? And what should moral philosophers do with this knowledge? This article addresses the cognitive science of moral judgment. It reviews important empirical findings and discusses how philosophers have reacted to them.
Neuromodulators and the (in)stability of moral cognition (with Molly Crockett)
2015. The Moral Brain – A Multidisciplinary Perspective, eds. Jean Decety and Thalia Wheatley. 221-238.
2015. Springer Handbook of Neuroethics, eds. Jens Clausen and Neil Levy. 149-168.
This chapter discusses the philosophical relevance of empirical research on moral cognition. It distinguishes three central aims of normative ethical theory: understanding the nature of moral agency, identifying morally right actions, and determining the justification of moral beliefs. For each of these aims, the chapter considers and rejects arguments against employing cognitive scientific research in normative inquiry. It concludes by suggesting that, whichever of the central aims one begins from, normative ethics is improved by engaging with the science of moral cognition.
2018. Ethics. 128(4)
2017. Notre Dame Philosophical Reviews.
2017. Hypatia Reviews Online.
2016. Mind 125(500): 1227-1236.
2016. The Philosophical Quarterly