Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent?

Kenneth Einar Himma, Ph.D., J.D.
Associate Professor, Department of Philosophy
Seattle Pacific University
3307 Third Avenue West, Seattle, WA 98119, USA
206.281.2038 (office); 206.281.2335 (fax)
[email protected]

* I’m indebted to Rebekah Rice, Fran Grodzinsky, and Keith Miller for very helpful comments on an earlier draft.

A spate of papers has recently appeared raising the possibility not only of artificial agency, but also of artificial moral agency, a possibility that raises all sorts of substantive questions of moral responsibility. Suppose, for example, that we are epistemically justified in believing that an ICT engineer has designed and produced an ICT that is capable of acting and satisfies the criteria for moral agency. If this ICT turns out to do something bad that no one anticipated, who is morally responsible: the designer, the ICT, or some combination of both? Are there special moral constraints governing the efforts of ICT designers in this regard? Is there an absolute prohibition on attempting to produce the sorts of technologies, such as neural nets, that might actually result in artificial moral agents?

These are not trivial questions; nor are any of the possible answers non-starters. Consider, for example, the relationship of parent to child. There are special duties parents owe not only to their children but also to society at large to try to ensure that their children become productive citizens who generally respect the demands of morality. Of course, these duties do not usually result in parental responsibility for the bad acts of their children; no one, I suppose, is tempted to blame Hitler’s parents for Hitler’s wickedness. Hitler, as a moral matter, owns his own wickedness and is, other things being equal, solely responsible for his terrible deeds.

But other things are not always equal, as can be seen by considering the case of Robert Alton Harris, who was executed a decade or so ago in California for two shockingly callous murders. Harris and his brother ordered two kids at a fast-food restaurant to give up their car to serve as a get-away vehicle and promised they would not be harmed if they did so. When they arrived at what Harris deemed an appropriate location, he shot both of the kids point blank in the head and calmly remained in the car eating their sandwiches, while his stunned brother staggered out of the car and vomited.


This seemed as clear a case for the death penalty as there could possibly be until the details of Harris’s life began to emerge. Harris’s mother drank excessively during the pregnancy, causing fetal alcohol syndrome that many people believe caused “organic brain damage” and affected Harris’s ability to empathize with other people.[1] Harris was born three months prematurely after his father kicked his mother in the stomach.[2] He was physically abused during most of his life, beaten severely by parents and siblings; indeed, his father once chased him around a table with a loaded gun when Harris was 2 years old, claiming that he would kill Harris. The results were both psychological and, if you believe many advocates, physical: damage to areas of the brain having to do with personality. In 1992, Harris was executed after then-Governor Wilson revived the death penalty. Wilson himself rejected Harris’s appeal for clemency, even while observing that Harris’s life was “a living nightmare.”[3]

Should Harris’s parents be punished for the murders? Of course not, though they deserve punishment for child abuse. Do they share any of the blame? It seems obvious that they do, although much more would need to be done to work out the relationship between standards governing the rearing of children and standards governing the creation of artificial moral agents – something I do not propose to attempt in this paper.

What I do wish to attempt is to work out some conceptual issues regarding the concepts of agency, natural agency, artificial agency, and moral agency, as well as to articulate the criteria for moral agency, as a first step towards a consideration of these important questions of professional responsibility. Much of what I take myself to be doing enjoys a consensus in the literature – so much so that many crucial claims on which I rely are taken for granted in such widely-used and respected professional resources as the Stanford Encyclopedia of Philosophy, the Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I attempt to flesh out some of the implications of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I will begin with analyses of the more basic concepts, like that of agency, and work up to analyses of the more complex concepts, like those of moral agency and rational free agency, subsequently considering the criteria something must satisfy to be accountable for its behavior; all of this will be largely uncontroversial in the literature.

[1] See “California Revives the Death Penalty,” Time Magazine (April 27, 1992); available at http://www.time.com/time/magazine/article/0,9171,975376,00.html?promoid=googlep

[2] See Jennifer McNulty, “Crime and Punishment,” University of California Review (Summer 2000); available at http://review.ucsc.edu/summer.00/crime_and_punishment.html

[3] See Time Magazine, Note 1, supra.


I will then argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious. Along the way, I consider an influential recent attempt to articulate a non-standard analysis of each of the relevant notions and argue that it is unmotivated: any of the ideas the authors are trying to capture could, and should, simply be called something else to avoid the confusion associated with taking well-established and supported philosophical analyses of concepts and denying them without much by way of good reason. I will also argue that the non-standard analysis is unhelpful in the sense that it contributes nothing to solving the difficult problems of professional responsibility described in the introduction to this essay.

1. The Concept of Agency

The idea of agency is conceptually associated with the idea of being capable of doing something that counts as an act or action. As a conceptual matter, X is an agent if and only if X is capable of performing actions. Actions are doings, but not every doing is an action; breathing is something we do, but it does not count as an action. Typing these words is an action, and it is in virtue of my ability to do this kind of thing that, as a conceptual matter, I am an agent.

It might not be possible for an agent to avoid doings that count as actions. Someone who can act but chooses always to do nothing is, according to the majority view, doing something that counts as an action – though in this case it is an omission that counts as the relevant act. I have decided not to have a second cup of coffee this morning, and my ability to execute that decision in the form of an omission counts, in a somewhat technical sense, as an act, albeit one negative in character. Agents are not merely capable of performing acts; they inevitably perform them (in the relevant sense) – sometimes when they do nothing. An omission with the right properties counts as an act in the sense necessary to constituting a being as an agent.

The difference between breathing and typing words is that the latter depends on my having a certain kind of mental state, while the former does not. Intuitively, my typing these words is caused by a certain kind of mental state. Some theorists, like Davidson, regard the relevant mental state as a belief/desire pair; on this view, if I want x and believe y is a necessary means to achieving x, my belief and desire will cause my doing y – or will cause something that counts as an “intention” to do y, which will cause the doing of y. Others, including myself, regard the relevant mental state as a “volition” or a “willing.”[4]

[4] I want to remain agnostic with respect to theories of mind. A mental state might be a private, inner state that is non-extended and non-material, as substance dualism and non-reductive physicalist theories assert, or it might be nothing more than a brain state, as reductive physicalism (e.g., identity theory) asserts. I make no assumptions here about the nature of a volition or the nature of a mental state generally – though I think reductive physicalism is false.


A volition is a mental state, like a desire is a mental state, but volitions – unlike desires by themselves – are the mental states that cause actions. For example, if I introspect my inner mental states after I have made a decision to raise my right arm and then do so, I will notice that the movement is preceded by a somewhat mysterious mental state (perhaps itself a doing of some kind) that is traditionally characterized as a “willing” or a “volition.”

The fact that I have a certain desire might help to explain why a rational being like me would reach a decision about typing these words and hence execute that decision by means of a volition; as has been uncontroversial since Hobbes and Hume, desires clearly motivate. But while it may be true that desires motivate, motivation alone cannot cause action. Some other mental element is needed. Again, theorists disagree about what that element might be. It might simply be something fairly characterized as an intent (although an “intent” seems simply, as a conceptual matter, to express what it is the actor seeks to accomplish); it might be a belief-desire pair (a belief in relation to a desire being the additional element); or it might be the instantiation of something that counts as a volition. Either way, it is a necessary condition for something to count as X’s doing y that y be causally related to some mental state other than simply a desire or simply a belief. Breathing is not an action precisely because my taking a breath at this moment doesn’t depend directly on an intent, a belief/desire pair of the right kind, or a volition – though it might depend indirectly on my not having such a mental state directed at ending my life. Waking up in the morning is something I do, but it is not an action, at least not most of the time, because it doesn’t involve one of these conscious states – though getting out of bed does.

The relevant mental states might be free or they might not be free. Volitions, belief-desire pairs, and intentions might be mechanistically caused and hence determined – or they might be free in some libertarian or compatibilist sense. Likewise, the relevant mental states might be related to something that counts as the kind of mental calculation (e.g., a deliberation) that we associate with rational beings – or they might not be. Agency is therefore an atomic concept and hence is a more basic notion than the compound concepts of free agency, rational agency, or rational free agency. One need not be either rational or free to be an agent.

While dogs are neither rational nor free (in the relevant sense), it makes sense to think of them as being capable of performing actions because some of their doings seem to be related to the right kinds of mental states – states that are intentional in the sense that they are about something else. Such states should not be confused either with the notion of having an intent or intention (which seems to presuppose linguistic or quasi-linguistic abilities absent in dogs) or with the notion of intension (which connotes ideational or conceptual content).[5] After all, we try to train dogs not to engage in certain kinds of behavior, and the relevant methods are directed at producing some sort of mental association between the behavior and an unpleasant consequence; that is an intentional state because it is clearly a state that is about something (though it may not, strictly speaking, constitute or figure into the production of having an intent). The relevant association is not, of course, an idea; but it is a mental state that would produce something that we intuitively count as an action. If this is correct, then it makes sense to think that the relevant mental states in dogs are unfree, on the one hand, but that dogs are agents, on the other.

Only beings capable of intentional states (i.e., mental states that are about something else, like a desire for X), then, are agents. People and dogs are both capable of performing acts because both are capable of intentional states; people are, while dogs are not, rational agents because only people can deliberate on reasons, but both seem to be agents. In contrast, trees are not agents, at bottom, because trees are incapable of intentional states (or any other mental state, for that matter). Trees grow leaves, but growing leaves is not something that happens as the result of an action on the part of the tree.

[5] See Pierre Jacob, “Intentionality,” Stanford Encyclopedia of Philosophy (Edward Zalta, ed.); available at http://plato.stanford.edu/entries/intentionality/

2. Artificial and Natural Agents

One can distinguish natural agents from artificial agents. Some agents are natural in the sense that their existence can be explained by biological considerations; people and dogs are natural agents insofar as they exist in consequence of biological reproductive capacities – and are hence biologically alive. Some agents might be artificial in the sense that they are manufactured by intentional agents out of pre-existing materials external to the manufacturers; such agents are artifacts. Highly sophisticated computers might be artificial agents; they are clearly artificial and would be artificial agents if they satisfy the criteria for agency – in particular, if they are capable of instantiating intentional states that cause performances.

The distinction between natural and artificial agents should not be thought to logically preclude an artificial agent that is biologically alive. Suppose, just for the sake of argument, that a literal interpretation of the Genesis creation story is true. If so, then Adam and Eve would be living human beings who are plausibly characterized as both “natural agents” and “artificial agents,” since, on this assumption, God is an agent who manufactured Adam and Eve out of materials external to God (and presumably pre-existing in the sense that they pre-existed the lives of Adam and Eve). While all this, of course, might be false, it is surely not incoherent as a conceptual matter. Another example of an agent that is both artificial and biologically alive would be certain kinds of clones.

If we could manufacture living DNA out of pre-existing non-genetic materials, then the resulting organism would be both artificial and biologically alive. If it were sufficiently complex to constitute an agent, it would be an agent that is artificial but nonetheless alive.

The above examples show that, as a conceptual matter, something can be both artificial and biologically alive and hence that something can be both an artificial agent and a natural agent. Agency, as a conceptual matter, is simply the capacity to cause actions – and this requires the capacity to instantiate certain intentional mental states that are capable of causing performances (though, again, I make no claims here about exactly what that state is). Thus, the following constitutes a rough but accurate characterization of an agent: X is an agent if and only if X can instantiate intentional mental states capable of directly causing a performance; here it is important to remember that intentional states include beliefs, desires, intentions, and volitions (or perhaps neurophysiological correlates that perform the relevant functions). In any event, on the received view, doing α is an action if and only if α is caused by an intentional state (and is hence performed by an agent).
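
These two biconditionals can be regimented a bit more explicitly. The following rendering is mine and is offered only to display the structure of the received view as just stated; the predicate letters are illustrative shorthands, not part of any standard formulation:

$$\mathrm{Agent}(x) \;\leftrightarrow\; \exists s\,[\,\mathrm{IntentionalState}(s) \,\wedge\, \mathrm{CanInstantiate}(x, s) \,\wedge\, \mathrm{CanDirectlyCausePerformance}(s)\,]$$

$$\mathrm{Action}(\alpha) \;\leftrightarrow\; \exists s\,[\,\mathrm{IntentionalState}(s) \,\wedge\, \mathrm{Causes}(s, \alpha)\,]$$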

3. The Concept of Moral Agency

The concept of moral agency is ultimately a normative notion that picks out the class of beings whose behavior is subject to moral requirements. The idea is that, as a conceptual matter, the behavior of a moral agent is subject to being governed by moral standards, while the behavior of something that is not a moral agent is not governed by moral standards. Adult human beings are, for example, typically thought to be moral agents, while cats and dogs are not.

The concept of moral agency should be distinguished from the concept of moral patiency. Whereas a moral agent is something that has duties or obligations, a moral patient is something that is owed at least one duty or obligation. Moral agents are usually, if not always, moral patients; all adult human beings are moral patients. But there are many moral patients that are not moral agents; a newborn infant is a moral patient (indeed, a moral person[6]) but is not a moral agent – though it will, other things being equal, become a moral agent. Moral agency and moral patiency are not just analytically distinct concepts; a different set of substantive moral standards applies in each case.


[6] By “moral person,” I do not mean a person who behaves in a morally appropriate way; rather, I mean a being that has the special moral status of personhood, which confers upon it certain fundamental moral rights. This is, for example, the sense of the term “moral person” that is at stake in debates about whether abortion should be criminalized as murder. According to the classic pro-life position, a human conceptus is a moral person and has the same right to life as any adult human person; since intentionally killing an innocent person is murder, abortion is murder at any stage of the pregnancy and should be prohibited by law. Most, but not all, abortion-rights supporters deny the claim that a human fetus is a person from the moment of conception and hence deny that abortion is murder at every stage during a pregnancy. Although somewhat awkward, the adjective “moral” in “moral person” is used to distinguish that notion from the purely descriptive notion of a “genetic person.” A being is a genetic person if and only if it is alive and has human DNA. See Mary Anne Warren, “The Moral and Legal Status of Abortion.”


The idea of moral agency is conceptually associated with the idea of being accountable for one’s behavior. To say that one’s behavior is governed by moral standards and hence that one has moral duties or moral obligations is to say that one’s behavior should be guided by and hence evaluated under those standards. Something subject to moral standards is accountable (or morally responsible) for its behavior under those standards.[7] The idea is that it is appropriate – in some sense of the word – to hold moral agents accountable for their behavior. There are potentially two distinct ideas here: (1) it is rational to hold moral agents accountable for their behavior; and (2) it is just to hold moral agents accountable for their behavior. While (2) presumably implies (1), it is not the case that (1) implies (2); while it is reasonable to think that moral standards figure into a determination of what is rational, they are not the only standards of rationality – and there might be other considerations (perhaps prudential in character) that imply the rationality of holding someone accountable.

Although only an agent can be a moral agent, it is crucial to realize that agency is different from moral agency: the idea of moral agency is conceptually tied to accountability in just the way described above. These are utterly uncontroversial conceptual claims (i.e., claims about the content of the concept) in the literature. As the Routledge Encyclopedia of Philosophy explains the notion, “[m]oral agents are those agents expected to meet the demands of morality.”[8] According to the Stanford Encyclopedia of Philosophy, “a moral agent [is] one who qualifies generally as an agent open to responsibility ascriptions.”[9] According to the Internet Encyclopedia of Philosophy, moral agents “can be held accountable for their actions – justly praised or blamed, deservedly punished or rewarded.”[10] These claims are conceptual in the same sense that the claim that a bachelor is unmarried is conceptual: they are true in virtue of the core conventions for using the relevant terms – and will remain true for as long as those conventions are practiced.


[7] Though the notions of moral responsibility and accountability are not synonyms, they are intimately related. To say X is morally responsible for y is to say y is causally related to some act of X that violates a moral obligation for which X is appropriately held accountable. As a general matter, the term “moral responsibility” is used to designate acts or events that involve breaches of one’s moral duties.

[8] Vinit Haksar, “Moral Agents,” Routledge Encyclopedia of Philosophy.


[9] Andrew Eshelman, “Moral Responsibility,” Stanford Encyclopedia of Philosophy.


[10] Garrath Matthews, “Responsibility,” Internet Encyclopedia of Philosophy (James Fieser, ed.); available at http://www.iep.utm.edu/r/responsi.htm#SH2a


A moral agent, as a conceptual matter, is thus a special kind of agent, one that is subject to the norms of morality – and, unlike most philosophical theories of the content of some concept, this one is so basic, unproblematic, and uncontroversial that it represents the foundation not only for substantive moral theorizing about the obligations of individuals but also for substantive moral theorizing about the legitimacy of many areas of law (like criminal law).

Even so, the notion of accountability suggests some additional constraints that might very well negate the difference between (1) and (2). To hold something accountable is to respond to the being’s behavior by giving her what her behavior deserves – and what a behavior deserves is a substantive moral matter. Behaviors that violate a moral obligation deserve (and perhaps require) blame, censure, or punishment.[11] Behaviors that go beyond the call of duty (i.e., so-called “supererogatory” acts), in the sense that the agent has sacrificed important interests of her own in order to produce a great moral good that she was not required to produce, deserve some sort of praise or recognition. Behaviors that merely satisfy one’s obligations deserve neither praise nor censure; ordinarily, one does not deserve praise, for example, for not violating the obligation to refrain from violence.

The notion of desert, which underlies the notion of moral accountability, is a purely backward-looking notion. What one deserves is not directly concerned with changing or reinforcing one’s behavior so as to ensure that one behaves properly in the future; regardless of whether one can change someone who is culpable for committing a murder[12] by censuring him, he deserves censure. To put it in somewhat metaphorical terms, desert is concerned with maintaining the balance of justice. When someone commits a bad act, the balance of justice is disturbed by his act and can be restored, if at all, only by an appropriate act of censure or punishment.[13] When someone performs a supererogatory act, she is owed a debt of gratitude, praise, or recognition; until that debt is discharged, the balance of justice remains disturbed.

For all practical purposes, then, it doesn’t make any difference whether we define “moral agency” in terms of (1) or (2). Regardless of which formulation we choose, it seems clear that dogs will not count as moral agents. Although it is rational to train dogs by using rewards and punishments, the point of using such measures is to train (as opposed to teach) dogs to behave in desirable ways, and not to respond to disturbances in the balance of justice. Strictly speaking, dogs do not “deserve” punishment for their bad acts or “praise” for their good acts – though there might be other practical reasons for responding in such ways.

[11] From here on, I will use the term “censure” as including acts of blame and acts of punishment, to streamline the exposition.

[12] Insane folks are not culpable and hence do not deserve censure.


[13] I say “if at all” here because an act of censure cannot erase the bad act. Punishing a murderer, for example, cannot bring her victim back to life. In such cases, it seems not possible to fully restore the balance of justice. In other cases, an act of compensation, together with censure, might be enough.


The fact that there are many stray dogs running around doing undesirable things like digging up gardens might invoke a host of moral considerations having to do with the welfare of stray animals, but the fact that such “misdeeds” go unpunished involves no disturbance to the balance of justice. It would be silly to think it unfair that stray dogs go unpunished for their misdeeds.

3.1 Can the Standard Account Bear All the Needed Weight?

One might worry that defining “moral agency” as “accountability” has the effect of depleting moral discourse of concepts or principles necessary to express certain kinds of value judgments that do not attribute responsibility to some agent. Floridi and Sanders (2001) worry, for example, that we need a way to make sense of the idea that parents evaluate their children before the children are morally accountable for their behavior.[14] Likewise, we need a way to express that poverty is, from a moral point of view, bad – even if no one is morally accountable for having acted culpably in producing it. The worry here is ultimately that the analysis suggested above falsely presupposes or implies that all morally evaluative talk is reducible to responsibility talk – a claim that is inconsistent with deeply entrenched and uncontroversial moral practice.

Accordingly, Floridi and Sanders (2004) “argue” for the following non-standard analysis of moral agency. For all X, X is a moral agent if and only if X has the following properties:

(1) X and its environment are capable of acting upon each other (the property of “interactivity”);
(2) X is able to change its states internally without the stimuli of interaction with the external world (the property of “autonomy”);
(3) X is capable of changing the transition rules by which it changes state (the property of “adaptability”); and
(4) X is capable of acting such as to have morally significant effects on the world.

The first three conditions are necessary and sufficient for something to count as an “agent.” The fourth, in effect, adds the requirement that one has the capacity to affect the world in ways that matter from the standpoint of moral evaluation.

[14] This line of argument owes to Floridi and Sanders (2001), who argue as follows: “[T]he objection [that ‘moral agent’ picks out only those who are morally responsible] has finally shown its fundamental presupposition: that we should reduce all prescriptive discourse to responsibility analysis. But this is an unacceptable assumption, a juridical fallacy. There is plenty of room for prescriptive discourse that is independent of responsibility-assignment and hence requires a clear identification of moral agents. Good parents, for example, commonly engage in moral-evaluation practices when interacting with their children, even at an age when the latter are not yet responsible agents, and this is not only perfectly acceptable but something to be expected.” The response to this objection is as follows. We are not really holding children morally accountable before they are moral agents; at least early on, we are training them (much like animals) to incorporate certain moral principles that are (1) largely correct and (2) capable of shaping their behavior; otherwise put, we are training children to become moral agents. The idea that a 3-year-old is a “moral agent” is utterly preposterous – and inconsistent with legal practices, which purport to incorporate the relevant moral standards in every western industrial society.


Accordingly, the notion of a moral agent, as Floridi and Sanders define it, requires a particular kind of agency that includes the ability to impact the world in morally relevant ways. There are two problems here.

First, if the received view is correct, then it is a necessary condition for being an agent that an entity be capable of initiating changes through intentional mental states. While the notion of a mental state is frequently thought to be, as a conceptual matter, conscious, many thinkers (including Freud) believe that we are capable of unconscious mental states. If this is correct, then it is conceptually possible for there to be unconscious intentional states. But we are also beings capable of conscious mental states, and the intentional states that cause our behavior are conscious mental states. The problem here is that it seems incoherent to think that an entity utterly lacking the capacity for consciousness could have anything fairly characterized as a “mental state.” Unconscious mental states might be conceptually possible in beings with the capacity for consciousness, but it is hard to make sense of the idea that something utterly lacking conscious states might have anything that would count as a mind or a mental state. While Floridi and Sanders’s conditions (1) through (3) might be necessary conditions for something to count as having the capacity for consciousness and hence for having mental states, it seems clear that they are not sufficient. I would imagine that an ordinary PC with software that automatically searches for and changes software content (or the most basic neural net) satisfies (1) through (3), but it is clearly not something that has mental states, much less intentional states, and hence could not count even as an “agent.” Satisfaction of condition (4) does seem to imply that a being is conscious, because it explicitly incorporates the notion of “acting,” which logically presupposes, on the received view, that the being instantiates intentional mental states capable of bringing about performances. But this is not of much help as an explanation here. It is (1) through (3) that are supposed to explain the concept of agency, while (4) is satisfied by something that is, on their view, already an agent in virtue of satisfying (1) through (3). So while (4) presupposes that an entity has the right kind of states, it is not intended to explain the notion of agency – and simply repeats what is trivially true of agency, namely that agents act. Conditions (1) through (3), which are supposed to do the heavy lifting, do not adequately explain the notion of agency and do not show the possibility of artificial agency.

Second, their analysis of moral agency is also deeply problematic – even if we assume (1) through (3) are sufficient to explain agency and hence to show the possibility of artificial agency. The problem is not that the analysis implies that human beings are not the only existing moral agents in the world; for all we know, there might be some other rational free agents in the universe accountable for their behavior. Rather, the problem is that the analysis seems to imply that certain things are moral agents that are clearly not morally accountable for their behavior – which is the accepted equivalent of moral agency. Rattlesnakes and even poisonous spiders might count as moral agents on this definition.
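
To see why conditions (1) through (4) fail to track the received notions, it may help to spell them out schematically. The sketch below is mine, not Floridi and Sanders’s; it simply reads each condition as a yes/no property (an assumption introduced only to make the structure of the counterexamples explicit, and the example candidates are the ones discussed in the text):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A toy stand-in for anything we might test against the four conditions."""
    name: str
    interactive: bool                   # (1) it and its environment can act upon each other
    autonomous: bool                    # (2) it can change state without external stimuli
    adaptable: bool                     # (3) it can change the rules by which it changes state
    morally_significant_effects: bool   # (4) it can act so as to have morally significant effects

def fs_agent(c: Candidate) -> bool:
    # On this analysis, (1)-(3) are necessary and sufficient for agency.
    return c.interactive and c.autonomous and c.adaptable

def fs_moral_agent(c: Candidate) -> bool:
    # Adding (4) is supposed to yield moral agency.
    return fs_agent(c) and c.morally_significant_effects

# An ordinary PC running software that automatically searches for and changes
# software content plausibly satisfies (1)-(3), yet has nothing like a mental state.
updater_pc = Candidate("self-updating PC", True, True, True, False)

# A rattlesnake interacts with the world, has internal states that cause other
# internal states, occasionally learns, and can kill a human being.
rattlesnake = Candidate("rattlesnake", True, True, True, True)

print(fs_agent(updater_pc))         # True  -> counts as an "agent" on the definition
print(fs_moral_agent(rattlesnake))  # True  -> counts as a "moral agent" on the definition
```

Nothing in the conditions, so construed, adverts to intentional mental states at all, which is why both verdicts come out true – and that, on the standard analysis defended here, is exactly what is wrong with the definition.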

It is clear, for example, that rattlesnakes and spiders have the properties of interactivity and adaptability; both interact with the world and at least occasionally learn. It is also clear that both are capable of acting such as to have morally significant effects on the world, since both are capable of killing human beings. One might doubt that they satisfy the autonomy requirement as formulated by Floridi and Sanders, but that would be wrong. If, as seems reasonable, a spider’s or rattlesnake’s apprehension of hunger (a purely internal state) is capable of causing a desire to eat (a purely internal state), then it follows that spiders and rattlesnakes have the property of autonomy as Floridi and Sanders define it. While it is true that the causal chain leading to the apprehension of hunger might lead outside the organism to the external world, there is nothing in their analysis of autonomy to preclude this. As long as even one internal state of the organism is sufficient to cause a distinct internal state, the organism is autonomous according to the definition above.

As a conceptual matter, however, it is clear that rattlesnakes and spiders are not moral agents in the traditional sense; it is part of the core content of shared practices regarding “moral agency” that the term applies only to beings that are accountable for their behavior – and snakes and spiders are clearly not. In fact, it is not clear that they are even agents on the received view. Spiders have functional nervous systems, but it is not obvious that the system is complex enough to give rise to something that counts as a mental state – much less an intentional state. Spiders might be comparatively unsophisticated zombie automatons that are moved about by a comparatively uncomplicated brain. If they lack intentional states, then they are clearly not agents, according to the received view.

Of course, one might think that Floridi and Sanders do not mean to be analyzing existing concepts, but rather creating a new taxonomy.[15] Even if this is true, the problem here is that this new taxonomy provides nothing that would enable us to say anything we cannot already say or that would enable us to solve some problem we cannot already solve. For example, we can already say that rattlesnakes, whether they are agents or not, are capable of causing effects that are undesirable from a moral point of view – and say so with much less ambiguity than if we try to do it by introducing new meanings for words that already have well-entrenched usages. A taxonomy that redefines such words simply makes the relevant discourse ambiguous; it creates, rather than solves, confusion.

The most plausible explanation of what they are doing is ordinary conceptual analysis of a sort that is very common in philosophy. First, they explicitly engage the existing accounts of the concepts and argue against them, which suggests they are attempting to analyze the ordinary concepts. Second, as is evident from the last paragraph, a new taxonomy might be useful, but it ought to utilize new terminology so as to avoid serious confusions – something that is usually obvious to theorists creating new taxonomies.

[15] I am indebted to Keith Miller for this interpretation of their project.


If Floridi and Sanders were concerned to invent a new taxonomy, one would think they would invent new terms (as Floridi frequently does in other areas) to express that taxonomy. (One plausible candidate here would be “moral potent,” the term “potent” being intended to express the ability to cause effects that are morally significant.)

It is true that ordinary conceptual analysis might sometimes serve a corrective function: if a particular set of conceptual practices is logically incoherent or inconsistent with certain phenomena of the world it purports to explain, that is a good reason for revising existing concepts. But Floridi and Sanders provide nothing whatsoever to suggest that existing practices are flawed in one of these two ways and hence provide no argument for accepting an analysis of moral agency that is inconsistent with ordinary understandings of this comparatively well-established, well-understood notion. It is simply not at all clear what the motivation would be for abandoning an analysis of moral agency that is comparatively unproblematic.

It is important to get the problem right here. Conceived as a piece of traditional conceptual analysis, the Floridi/Sanders analysis fails because it is inconsistent with core intuitions and practices regarding the term “moral agent.” While there is no a priori reason to rule out the possibility of artificial agents (maybe we are simply computers made of meat), we simply do not talk as though spiders and rattlesnakes are moral agents; and we do not treat them as such. Indeed, a spider or rattlesnake cannot be literally “evil”; spiders and rattlesnakes might be “harmful” and “pests,” but they cannot be “evil,” according to the received view, precisely because they are not moral agents whose behavior is subject to moral requirements.

Nevertheless, there is nothing in the standard definition that precludes our making other sorts of moral claims than responsibility-assignments. For example, one who accepts the standard definition can easily hold that very young children are not moral agents and hence not responsible for their behavior, but that parents owe a moral obligation to their children (who are moral patients even if not moral agents) to raise them in a way that is most likely to inculcate a grasp of and respect for morality. As an empirical matter, the most effective way to do this is to behave towards them as if they are moral agents by praising good behavior and censuring bad behavior – even though they lack the requisite knowledge of right and wrong to genuinely qualify as moral agents. These practices inculcate a sense of right and wrong in children and help them to develop the capacities to become moral agents – which is what arguably justifies such practices. There is simply nothing in the standard definition of “moral agency” that would preclude such talk.[16]

[16] Similarly, we can easily express the idea that poverty is bad from a moral point of view without explicitly or implicitly attributing responsibility for it to some class of agents. Insofar as persons are moral patients with certain vital interests, states of affairs that adversely impact these interests are easily characterized as bad or undesirable from a moral point of view. There is nothing in the traditional view that precludes being able to say that poverty is bad or undesirable from a moral point of view because it causes tremendous suffering to human beings. Not surprisingly, such claims have been made for at least as long as people have been writing about ethics.


The reason for this is that the above analysis depends on substantive moral claims about the duties of one class of persons towards another – and not merely on the claim that moral agents are, as a conceptual matter, accountable for their behavior. It is true that the latter claim constrains what one can say by way of defense of the above practices: the proponent of the standard claim has to say, for example, that we are not really holding the youngest children accountable (if they are not full or limited moral agents). But there is nothing particularly controversial about this; indeed, it is presupposed by the claim that such children are not yet subject to the process of moral evaluation. To evaluate such practices, we need recourse to much more than a definition of “moral agency”; we need to consider the content of a number of moral norms regarding parenthood.[17]

Again, it is true that our conceptual practices should sometimes be revised because they rest on mistaken presuppositions. But one needs a special kind of reason for abandoning traditional usages in this way. In particular, one must show that the core conventions for using a concept are either incoherent or inconsistent with some other substantive commitment that is more central to the relevant practices than the conceptual commitments themselves. The vast majority of the philosophical community believes that the standard conception of moral agency is flawed in neither of these ways – and Floridi and Sanders offer nothing that would count as a reason to deny this and hence nothing that would count as a defense of their idiosyncratic conception of moral agency. To be quite candid about the matter, their analysis does much more to confuse than to clarify any ethical issue an ICT professional needs to worry about.

[17] Floridi and Sanders argue against the traditional view as follows: “There is nothing wrong with identifying a dog as a source of a morally good action, hence as an agent playing a crucial role in a moral situation, and therefore as a moral agent. Search-and-rescue dogs are trained to track missing people. They often help save lives, for which they receive much praise and rewards from both their owners and the people they have located. But that is not the point. Emotionally, people may be very grateful to the animals, but for the dogs it is a game and they cannot be considered morally responsible for their actions. At the same time, the dogs are involved in a moral game as main players and we rightly identify them as moral agents that may cause good or evil.” This argument is problematic. It is certainly true that dogs may perform actions that have consequences that we regard as desirable or undesirable and would regard as good or bad if produced by the acts of a moral agent. It is also true that people get emotionally attached to animals and often feel gratitude and loyalty towards them. While it might be true to characterize dogs as “loving,” “loyal,” or “brave,” it is misleading in the extreme to describe them as “moral agents.” There is a difference between saying “X’s behavior resulted in consequences that are undesirable or bad” and “X is a moral agent.” The proponent of the standard analysis can acknowledge all this, and say everything that Floridi and Sanders want to say about search-and-rescue dogs, while respecting the common core usage of “moral agency.”

4. Necessary and Sufficient Conditions for Moral Agency


The issue of which conditions are necessary and sufficient for something to qualify as a moral agent is a different issue from that of identifying the content of the concept. Whereas an analysis of the content of the concept must begin with the core conventions people follow in using the term, an analysis of the capacities something must have to be appropriately held accountable for its behavior is a substantive meta-ethical issue, and not a linguistic or conceptual issue. It is generally thought there are two capacities that are necessary and jointly sufficient for moral agency.

The first capacity is the capacity to freely choose one’s acts.[18] The idea here is that, at the very least, one must be the direct cause of one’s behavior in order to be characterized as freely choosing that behavior; something whose behavior is directly caused by something other than itself has not freely chosen its behavior. If, for example, A injects B with a drug that makes B so uncontrollably angry that B is helpless to resist it, then B has not freely chosen his or her behavior.

This should not be taken to deny that external influences are relevant with respect to the actions of even moral agents. It might be, for example, that human beings come pre-programmed into the world with a certain set of desires and emotional reactions that condition their moral views. If this is correct, it does not follow that we are not moral agents, and it should not be thought to rule out the possibility of artificial moral agents that are programmed by other persons. All that is being claimed here is that it is a necessary condition for being free, and hence a moral agent, that one is the direct cause of one’s behavior in the sense that one’s behavior is not directly compelled by something external to oneself.

Moreover, the relevant cause of a moral agent’s behavior must have something to do with a decision that is reached by some sort of deliberative process. Consider a dog, for example, trained to respond to someone wearing red by attacking that person. Although the dog is the direct cause of its behavior in the sense that its mental states produce the behavior, it has not freely chosen its behavior because dogs do not deliberate and hence do not make decisions. In contrast, the choices that cause a person’s behavior are sometimes related to some sort of deliberative process in which the pros and cons of the various options are considered and weighed. It is the ability to ground choice in this deliberative process, rather than having it caused by instinct, that warrants characterizing the behavior as free.

The issue of whether a person’s deliberations are causally determined and causally determine her choices is deeply contested. A compatibilist accepts the determinist thesis that all events have some sort of material cause and hence attempts to articulate the notions of free will and moral agency in a way that is compatible with the determinist thesis.

[18] Not surprisingly, this entails that only agents are moral agents. Agents are distinguished from non-agents in that agents initiate responses to the world that count as acts. Only something that is capable of acting counts as an agent, and only something that is capable of acting is capable of acting freely.


A libertarian denies that choices are causally determined by one’s deliberations and hence denies that all events have a material cause; on the libertarian’s view, it is always possible for the agent to choose otherwise – which is not true under the determinist thesis.

Either way, the idea that moral agents are free presupposes that they are rational. Regardless of whether one’s deliberations are caused or cause one’s behavior, one can deliberate only to the extent that one is capable of reasoning. Something that makes “decisions” wholly on the basis of random considerations is neither deliberating nor acting rationally (assuming that it has not rationally decided that it is good to make decisions on such a basis). Someone who acts on the basis of some unthinking compulsion is not deliberating, acting rationally, or freely choosing her behavior. Insofar as one must reason to deliberate, one must have the capacity to reason, and hence be rational, to deliberate.

The second capacity necessary for moral agency is also related to rationality. As traditionally expressed, the capacity is “knowing the difference between right and wrong”; someone who does not know the difference between right and wrong is not a moral agent and is not appropriately censured for her behaviors. This is, of course, why we do not punish people with severe cognitive disabilities, like a psychotic condition that interferes with the ability to understand the moral character of one’s behavior.

As traditionally described, however, the condition is too strong because it presupposes a general ability to get the moral calculus correct. Knowledge, as a conceptual matter, requires justified true belief. But it is not clear that any fallible human being generally knows which acts are right and which acts are wrong; this would require one to have some sort of generally reliable methodology for determining what is right and what is wrong – and it is clear that no fallible human being can claim such a methodology. In any event, this much is certainly clear: many (if not most) adult human beings, notwithstanding their own views to the contrary, do not know which acts are right and which are wrong.

About the most that we can confidently say about moral agents is that they have the ability to engage in something that is fairly characterized as moral reasoning. This ability may be more or less developed. But anyone who is justly or rationally held accountable for her behavior must have the potential to engage in something that is reliable, much of the time, in identifying the requirements of morality. The idea that a being should conform her behavior to moral requirements presupposes that she has the ability to do so; and this requires not only that she have free will, but also that she have the potential to correctly identify moral requirements (even if she frequently fails to do so). At the very least, it requires people to be able to correctly identify core requirements – such as that it is wrong to kill innocent persons for no reason.

How this ability is developed does not seem important. As far as I can tell, it does not matter whether such an ability is consciously learned, as is at least partially true of human beings, or whether the being comes into the world, so to speak, pre-programmed with it. If people had an instinctual ability to engage in
something fairly characterized as moral reasoning, this would not be incompatible with their being moral agents.

Moral reasoning requires a number of capacities. First, and most obviously, it requires a minimally adequate understanding of moral concepts like “good,” “bad,” “obligatory,” “wrong,” and “permissible,” and thus requires the capacity to form and use concepts. Second, it requires an ability to grasp at least those moral principles that we take to be basic – like the idea that it is wrong to intentionally cause harm to human beings unless they have done some sort of wrong that would warrant it (which might very well be a principle that is universally accepted across cultures). Third, it requires the ability to identify the facts that make one rule relevant and another irrelevant. For example, one must be able to see that pointing a loaded gun at a person’s head and pulling the trigger implicates such rules. Finally, it requires the ability to correctly apply these rules to certain paradigm situations that constitute the meaning of the rule. Someone who has the requisite ability will be able to determine that setting fire to a child is morally prohibited by the rule governing murder.[19]

[19] Floridi and Sanders seem to agree that moral accountability presupposes free will and consciousness. As to free will, they assert that “if the agent failed to interact properly with the environment, for example, because it actually lacked sufficient information or had no choice, we should [not] hold an agent morally responsible for an action it has committed because this would be morally unfair” (18). As to consciousness, they assert that “[i]t can be immediately conceded that it would be ridiculous to praise or blame an AA [i.e., artificial agent] for its behaviour or charge it with a moral accusation. You do not scold your webbot, that is obvious” (17). As noted above, they believe moral agency does not necessarily involve moral accountability.

5. Consciousness as Implicitly Necessary for Moral Agency

Our moral and legal practices reflect this common understanding of moral agency. Though we attempt to teach young children right and wrong by reward and punishment, we do not, strictly speaking, hold them accountable for their behavior morally or legally, because their capacity for moral understanding is limited by their emotional and intellectual immaturity. Nor do we hold persons with severe psychological illnesses morally or legally accountable for wrongful behavior insofar as their capacity to freely choose the behavior is diminished by something that is fairly characterized as a behavioral compulsion. Conversely, any person of reasonable intelligence and ordinary psychological characteristics is treated as a moral agent responsible for her behavior.

While the necessary conditions for moral agency as I have described them do not explicitly contain any reference to consciousness, it is reasonable to think that each of the necessary capacities presupposes the capacity for consciousness. The idea of accountability, which is central to the meaning of “moral agency,” is sensibly attributed only to conscious beings. There are three reasons for this.

First, it is a conceptual truth that an action is the result of some intentional state – and intentional states are mental states. While this is not intended to rule out the Identity Theorist’s implausible claim that mental states are brain states and nothing else, only a being that has something
fairly characterized as a conscious mental state is also fairly characterized as having intentional states like volitions – regardless of what the ultimate analysis of a mental state turns out to be. It is a conceptual truth, then, that agents have mental states and that some of these mental states explain the distinguishing feature of agents – namely, the production of doings that count as actions.

Second, Jaegwon Kim argues that if we lacked some sort of access to those mental states that constitute reasons, then we would lack a first-person self-conscious perspective that seems necessary for agency. It cannot, for example, be the external presence of a stop sign that directly causes a performance that counts as an action; the cause must have something to do with a reason that is internal – like a belief about the risks or consequences of running a stop sign and a desire to avoid them. If I don’t have some sort of access to something that would count as a reason for doing X, doing X is utterly arbitrary – akin to a random production by a device lacking a first-person perspective. Although Kim does not explicitly claim that the access must be conscious, it is quite natural to think that it must be. Reasons are, in some sense, grasped – and this is a conscious process. While grasping a reason need not entail an ability to articulate it, an agent must have some sense for why she is doing X. If our ordinary intuitions are correct, even a dog has something resembling conscious access to the fact that she eats because she is hungry or because what is offered is tasty.[20]

[20] Jaegwon Kim, “Reasons and the First Person,” Human Action, Deliberation, and Causation, eds. Bransen and Cuypers (Kluwer, 1998). I am indebted to Rebekah Rice for pointing this out to me.

Third, as a substantive matter of practical rationality, it is irrational to praise or censure something that lacks conscious mental states – no matter how otherwise sophisticated its computational abilities might be. Praise, reward, censure, and punishment are rational responses only to beings capable of experiencing conscious states like pride and shame. Indeed, it is conceptually impossible to reward or punish something that is not conscious. As a conceptual matter, punishment is something that is reasonably contrived to produce an unpleasant mental state. While the justification for inflicting punitive discomfort might be to rehabilitate the offender or deter others, something must be reasonably calculated to cause some discomfort to count as punishment; if it isn’t calculated to hurt in some way, then it isn’t punishment. Similarly, a reward is something that is reasonably calculated to produce a pleasurable mental state; if it isn’t calculated to feel good in some way, then it isn’t a reward. Only conscious beings can have pleasant and unpleasant mental states.

As it turns out, each of the substantive capacities needed for moral agency also presupposes consciousness. It is hard to make sense of the idea of a non-conscious thing freely choosing anything. It is reasonable to think that there are only two possible explanations for the behavior of any non-conscious thing: its behavior will either be (1) purely random in the sense of being arbitrary
and lacking any causal antecedents or (2) fully determined (and explainable) in terms of the mechanistic interactions of either mereological simples or higherorder but equally mechanistic interactions that emerge from higher order structures composed of mereological simples. It is not implausible to think that novel properties that transcend explanation in terms of causal interactions of atomic constituents emerge from sufficiently complex biological systems. The capacity for conscious deliberation seems to make possible a third possible explanation for behavior in beings like us. The behavior of something that deliberates consciously seems to us neither purely arbitrary nor fully determined by mechanistic interactions among ontological simples (i.e., the most basic constituents of material reality – like subatomic particles). Deliberations influence and rationalize (in the sense of making rational) decisions without mechanistically determining them. The behavior of a being that decides what it will do seems to us freely chosen unlike the behavior of things that do not decide what to do – and deciding what to do requires consciousness (or at least the capacity for consciousness). We might, of course, be wrong about this; but if so, the mistake will be in thinking that we freely choose our behavior. It might be that our conscious deliberations play no role in explaining our acts and that our behavior can be fully explained entirely in terms of the mechanistic interactions of ontological simples. Our sense that we decide how we will act would be, in that case, mistaken; our behavior would be as mechanistically determined as the behavior of any other material thing in the universe – though the causal explanation for any piece of human behavior will be quite complicated. This does not mean that a behavior is freely chosen only if preceded by some self-conscious assessment of reasons that are themselves articulated in a language. Very few acts are preceded by a conscious process of reasoning; most of what we do during the day is done without much, if any, conscious thought. My decision this morning to make two cups of coffee instead of three was not preceded by any conscious process of reasoning. But it seems no less free to me because I did not have to think about it. Beings that can freely choose their behavior by consciously deliberating about it can sometimes freely choose behavior without consciously deliberating about it. Nevertheless, it seems to be a necessary condition for something to freely choose its behavior that it be capable of conscious deliberation. If there are beings in the universe with free will, then they will certainly be conscious and capable of consciously deciding what to do. As it turns out, the same is true of the capacity for moral understanding: it is a necessary condition for something to know, believe, think, or understand that it has conscious mental states. Believing, as a conceptual matter, involves a disposition to assent to P when one considers the content of P; assenting and considering are conscious acts. Similarly, thinking, as a conceptual matter, involves a process of conscious reasoning. While it may turn out that thinking can be explained entirely in terms of some sort of computational process,


While it may turn out that thinking can be explained entirely in terms of some sort of computational process, thinking and computation are analytically distinct processes; an ordinary calculator can compute, but it cannot think. Terms like “know,” “believe,” “think,” and “understand” are intentional terms that apply only to conscious beings.

This is a point that emerges indirectly from the debate about John Searle’s famous “Chinese Room” argument that conscious states cannot be fully explained in computational terms.21 As is well known, Searle asks us to suppose that we are locked in a room and given a rule book in English for responding in Chinese to incoming Chinese symbols; in effect, the rule book maps Chinese sentences to other Chinese sentences that are appropriate responses. Searle argues that neither we nor the system for responding to Chinese inputs that contains us “understands” Chinese. Searle believes that the situation is exactly the same with a computer; as he makes the argument:

The point of the story is this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don’t understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese. And again, the reason for this can be stated quite simply. If you don’t understand Chinese, then no other computer could understand Chinese because no digital computer, just by virtue of running a program, has anything that you don’t have. All that the computer has, as you have, is a formal program for manipulating uninterpreted Chinese symbols. To repeat, a computer has a syntax, but no semantics. The whole point of the parable of the Chinese room is to remind us of a fact that we knew all along. Understanding a language, or indeed, having mental states at all, involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols. And a digital computer, as defined, cannot have more than just formal symbols because the operation of the computer, as I said earlier, is defined in terms of its ability to implement programs. And these programs are purely formally specifiable – that is, they have no semantic content.
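Searle’s claim that a program has “a syntax, but no semantics” can be made vivid with a toy sketch. The following Python fragment is offered purely as an illustration – it is not drawn from Searle or from any existing system, and the rule table and romanized phrases in it are invented for the example. The program maps incoming symbol strings to canned replies by lookup, and nothing in its operation attaches any interpretation to those symbols.

# A toy "rule book": incoming symbol strings mapped to canned replies.
# Both sides of the table are treated as opaque, uninterpreted tokens.
RULE_BOOK = {
    "ni hao ma?": "wo hen hao, xie xie.",
    "jin tian tian qi zen me yang?": "tian qi hen hao.",
}

def chinese_room(incoming: str) -> str:
    """Return whatever reply the rule book pairs with the incoming string.

    The lookup is purely formal symbol manipulation; success here is not
    "understanding" in any sense relevant to Searle's argument.
    """
    return RULE_BOOK.get(incoming, "dui bu qi, wo ting bu dong.")

print(chinese_room("ni hao ma?"))  # produces a fluent-looking reply by lookup

To an outside observer the exchange may look competent; that, of course, is precisely Searle’s point – formal input–output success of this kind settles nothing about whether anything is understood.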

21. As Searle elsewhere describes the view, “The brain just happens to be one of an indefinitely large number of different kinds of hardware computers that could sustain the programs which make up human intelligence. On this view, any physical system whatever that had the right program with the right inputs and outputs would have a mind in exactly the same sense that you and I have minds. So, for example, if you made a computer out of old beer cans powered by windmills; if it had the right program, it would have to be a mind. And the point is not that for all we know it might have thoughts and feelings, but rather that it must have thoughts and feelings, because that is all there is to having thoughts and feelings: implementing the right program.”


It is true, of course, that Searle’s argument remains controversial to this day, but no one disputes the conceptual presupposition that only conscious beings can fairly be characterized as “understanding” a language. The continuing dispute is about whether consciousness can be fully explained in terms of sufficiently powerful computing hardware running the right sort of software. Proponents of this view believe that the fact that a functioning brain is contained in a living organism is irrelevant with respect to explaining why it is conscious; an isomorphic processing system made entirely of non-organic materials that runs similar software would be conscious – regardless of whether it is fairly characterized as “biologically alive.” If so, then it is capable of understanding, believing, knowing, and thinking. Thus, the dispute is about whether consciousness can be fully explained in terms of computational processes, and not about whether non-conscious beings can know, believe, think, or understand. All sides agree that such terms apply only to conscious beings.

Either way, it seems clear that only conscious beings can be moral agents. While consciousness, of course, is not a sufficient condition for moral agency (as there are many conscious beings, like cats, that are neither free nor rational), it is a necessary condition for being a moral agent. Nothing that isn’t capable of conscious mental states is a moral agent accountable for its behavior.22

None of this should be taken to deny that conscious beings sometimes act in concert or that these collective acts are rightly subject to moral evaluation. As a moral and legal matter, we frequently have occasion to evaluate the acts of corporate bodies, like governments and business entities. The law includes a variety of principles, for example, that make it possible to hold business corporations liable under civil and criminal law.

22. Floridi and Sanders argue that the idea that moral agency presupposes consciousness is problematic: “the [view that only beings with intentional states are moral agents] presupposes the availability of some sort of privileged access (a God’s eye perspective from without or some sort of Cartesian internal intuition from within) to the agent’s mental or intentional states that, although possible in theory, cannot be easily guaranteed in practice” (16). There are a number of problems with this line of objection. First, it is not the idea of moral agency that presupposes that we can determine which beings are conscious and which beings are not; it is rather the ability to reliably determine which beings are moral agents and which beings are not that presupposes that we can reliably determine which beings are conscious and which beings are not. If moral agency presupposes consciousness, then we cannot be justified in characterizing a being as a moral agent unless we are justified in characterizing that being as conscious. Second, and more importantly, this is as much a problem for Floridi and Sanders as it is for anyone else. Floridi and Sanders, as noted above, concede that moral responsibility presupposes consciousness, but fail to notice that their own view is therefore vulnerable to the very same charge: they deny that moral agency presupposes consciousness, but affirm that moral responsibility presupposes consciousness – and exactly the objection quoted at the beginning of this footnote can be directed at their own view. While it is probably true that we cannot reliably access the intentional states of other beings and cannot reliably determine which beings are conscious (Are birds conscious? Are roaches?), this does not bear on the issue of whether moral agency and moral responsibility presuppose consciousness. Both pretty clearly seem to.


Strictly speaking, corporate entities are not moral agents, for a basic reason. Any corporate entity is a set of objects, which includes, of course, conscious moral agents who are accountable for their behavior, but which also includes, at the very least, legal instruments like a certificate of incorporation and bylaws. The problem here is that a set is an abstract object and, as such, incapable of doing anything that would count as an “act.” Sets (as opposed to a representation of a set on a piece of paper) are no more capable of acting than numbers (as opposed to representations of numbers); they have nothing that would count as a “state,” internal or otherwise, that is capable of changing – a necessary precondition for being able to act.23 Sets are not, strictly speaking, moral agents because they are not agents at all.

The acts that we attribute to corporations are really acts of individual directors, officers, and employees acting in coordinated ways. Officers sign a contract on behalf of the organization, and new obligations are created that are backed by certain assets also attributed to the corporation. Officers decide to release a product and instruct various parties to behave in certain ways that have the effect of releasing the product into the stream of commerce. Though we attribute these acts to the corporate entity for purposes of legal liability, corporate entities, qua abstract objects, do not act; corporate officers, employees, and so on do. Indeed, the law acknowledges as much, characterizing corporations as a “legal fiction.” The justification for the fiction of treating corporations as agents is to encourage productive behavior by allowing persons who make decisions on behalf of the corporation to shield their personal assets from civil liability – at least in the case of acts that are reasonably done within the scope of the corporation’s charter. If the assets of, for example, individual directors were exposed to liability for bad business decisions, people would be much less likely to serve as business directors.

Our moral practices are somewhat different and less dependent upon fictional ascriptions of agency to corporate entities. Most people rightly seek to attribute moral fault for corporate misdeeds to those persons who are most fairly characterized as responsible for them. It is clear, for example, that we cannot incarcerate a corporation for concealing debts to artificially inflate shareholder value, but we can – and do – incarcerate individual officers for their participation in schemes to conceal debts. We do not say Enron was bad; we say that the people running Enron were. And we would make this distinction even if every person on Enron’s payroll were behaving badly.
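To put the contrast concretely: the following Python sketch is purely illustrative (the class and its attributes are invented for the example), but it shows the difference between an abstract object, which has no state that can change, and a trivial artifact that can perform the kind of internal state transitions Floridi and Sanders treat as a precondition of agency (see note 23). Having such changeable state is, of course, nowhere near sufficient for agency, let alone moral agency.

from dataclasses import dataclass

# An abstract collection: once constructed, it has no state that can change.
corporate_entity = frozenset({"certificate_of_incorporation", "bylaws", "officers"})

@dataclass
class Thermostat:
    """A trivial stateful artifact: it can perform internal state transitions."""
    set_point: float
    heating: bool = False

    def update(self, room_temp: float) -> None:
        # An internal transition: the object's own state changes over time.
        self.heating = room_temp < self.set_point

t = Thermostat(set_point=20.0)
t.update(room_temp=17.5)  # the thermostat's state changes: heating is now True
# The frozenset, by contrast, admits no state transition at all.

The point of the sketch is only that changeable internal state is a minimal precondition for anything that could count as acting; nothing in it bears on consciousness, free choice, or moral understanding.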

6. Artificial Agents

This, of course, is not to suggest that artificial entities that satisfy the first three conditions of Floridi’s and Sanders’s analysis – and would hence be, on their view, “artificial agents” – cannot count as moral agents.

23. This, it is worth noting, is true of Floridi’s and Sanders’s own analysis of an agent. According to Floridi and Sanders, it is a necessary condition for being an agent that one “is able to change state without direct response to interaction: it can perform internal transitions to change its state” (9). Sets, numbers, and other abstract objects do not change states.


It is rather to suggest that more is required than simply an ability to cause effects that are morally relevant, which will be trivially true in a large variety of cases (after all, it is not difficult to make someone frustrated or unhappy – states that are morally relevant). For an artificial agent to be a moral agent (properly understood), it must have the capacity to choose its actions “freely” and to understand the basic concepts and requirements of morality.

It is clear that an artificial agent would have to be a remarkably sophisticated piece of technology to be a moral agent. A great deal of processing power would be needed to enable an artificial entity to (in some relevant sense) “process” moral standards. Artificial free will presents different challenges: it is not entirely clear what sorts of technologies would have to be developed to enable an artificial entity to make “free” choices – in part, because it is not entirely clear in what sense our own choices are free. Free will poses tremendous philosophical difficulties that would have to be worked out before the technology could be worked out; if we don’t know what free will is, we are not going to be able to model it technologically.

While the necessary conditions for moral agency as I have described them do not explicitly contain any reference to consciousness, each presupposes consciousness, as we saw above. The idea of accountability, which is central to the meaning of “moral agency,” is sensibly attributed only to conscious beings. It seems irrational to praise or censure something that isn’t conscious – no matter how otherwise sophisticated its computational abilities might be. Praise, reward, censure, and punishment are rational responses only to beings capable of experiencing conscious states like pride and shame.

Likewise, it is hard to make sense of the idea that a non-conscious thing might freely choose its behavior. It is reasonable to think that there are only two possible explanations for the behavior of any non-conscious thing: its behavior will either be (1) purely random in the sense of being arbitrary and lacking any causal antecedents or (2) fully determined (and explainable) in terms of mechanistic interactions of some sort. These interactions might simply involve ontological simples, like quarks and electrons, or they might involve higher-order emergent entities and properties. The property of being alive and certain properties associated with being alive are thought to be higher-order properties that emerge from lower-order properties and states of simpler entities and that cannot be reduced to causal explanations among those lower-order entities; it is implausible to think that the property of self-regeneration or reproduction could be reduced to properties associated with quarks and electrons. But, either way, it seems reasonable to think that it is simply not metaphysically possible for a non-conscious thing to freely choose its behavior. Choice is the product of a volition, and volitions are thought to be conscious, higher-order states that, on the most widely accepted theories of mind (dualist and non-dualist), cannot be reduced to neurophysiological states or to causal interactions among those states. It is a necessary, though not sufficient, condition for a being to have free will that it has the capacity for consciousness; if free will is a necessary condition for moral agency, then consciousness is a necessary condition for moral agency.


Thus, if consciousness is a conceptual or moral prerequisite for moral agency, then an artificial agent would have to be conscious in order to be a moral agent. This raises two related problems having to do with our ability to determine whether an artificial agent is conscious.

First, philosophers of mind disagree about whether it is even possible for an artificial ICT (I suppose we are an example of a natural ICT) to be conscious. Some philosophers believe that only beings that are biologically alive are conscious, while others believe that any entity with a brain as complex as ours will produce consciousness regardless of the materials of which that brain is composed.24 In any event, if the foregoing analysis is correct, only conscious artificial agents could be moral agents.

Second, even if it should turn out that we can show conclusively that it is possible for artificial ICTs to be conscious, there are potentially insuperable epistemic difficulties in determining whether any particular ICT is conscious. It is worth remembering that philosophers have yet to solve even the problem of justifying the belief that human beings other than ourselves are conscious (“the problem of other minds”); the best we can do is an argument by analogy. Applying that strategy to other kinds of things, such as animals and artificial ICTs, weakens it considerably, because the force of the analogy diminishes as the number of properties such a thing shares with human beings diminishes. Indeed, it is not at all clear at this point how we could even begin to determine that a machine is conscious. So there are extreme epistemological difficulties associated with trying to determine whether a machine is a moral agent; these difficulties might well imply much greater moral responsibility for the behavior of highly complex ICTs that might be conscious than even the very worst parents have for the behavior of the children they have damaged by their cruelty and incompetence.

This, of course, should not be thought incompatible with the many existing research projects that aim to produce ICTs that either are agents or that model the behavior of agents. It does, however, call attention to the need to be clear, not only about what these research projects are intended to accomplish, but also about the content of the concepts of agency, artificial agency, and moral agency.
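As a purely illustrative example of the gap between such projects and moral agency proper, consider the following Python sketch. The encoded “moral standards” and action names are invented for the example, and nothing of the sort is being attributed to any particular research project. A program of this kind can mechanically filter candidate actions against encoded prohibitions, but on the analysis defended here such rule-following, however elaborate, is not yet moral understanding, free choice, or accountability.

# Invented, illustrative "moral standards" encoded as simple prohibitions.
PROHIBITED_EFFECTS = {"deceives_user", "causes_physical_harm", "violates_privacy"}

def permissible(predicted_effects: set) -> bool:
    """Mechanically flag an action as impermissible if any predicted effect
    falls under an encoded prohibition. Nothing here grasps why the
    prohibited effects are bad; this is rule application, not understanding."""
    return not (predicted_effects & PROHIBITED_EFFECTS)

print(permissible({"sends_reminder"}))                      # True
print(permissible({"sends_reminder", "violates_privacy"}))  # False

Such a program “processes” moral standards only in the thin sense of applying them as constraints; it is not the sort of thing that could intelligibly be praised or blamed for its output.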

24. It is worth noting here that there are also serious epistemic difficulties involved in determining whether something is conscious. Since we have direct access only to our own consciousness, knowledge of other minds is necessarily indirect. We can infer that something is conscious only on the strength of analogical similarities (e.g., it has the same kind of brain I do and behaves the same way in response to certain kinds of stimuli). But philosophers of mind have shown that such analogical similarities may not be sufficient to justify thinking someone is conscious. In essence, someone who infers that X is conscious based on X’s similarity to himself is illegitimately generalizing on the strength of just one observed case – his own. Again, I can directly observe the consciousness of only one being, myself; and in no other context is an inductive argument sufficiently grounded in one observed case. The further from our own case some entity is, the more difficult it is for us to be justified in thinking that it is conscious.


It seems clear, for example, that there could be artificial entities that satisfy all four of Floridi’s and Sanders’s conditions, but it should also, by now, be clear that satisfaction of (1) through (3) is not enough to imply agency and that satisfaction of (1) through (4) does not imply moral agency. While it is surely correct to think that these empirical research projects raise novel ethical issues, it is critically important to be much clearer about which issues are raised than the literature on agency and moral agency has been up to this point.

References

Coleman, K. (2004) “Computing and Moral Responsibility,” Stanford Encyclopedia of Philosophy; available at http://plato.stanford.edu/entries/computing-responsibility/

Eshleman, A. (2001) “Moral Responsibility,” Stanford Encyclopedia of Philosophy; available at http://plato.stanford.edu/entries/moral-responsibility/

Floridi, L. (1999) “Information Ethics: On the Philosophical Foundation of Computer Ethics,” Ethics and Information Technology, vol. 1, no. 1

Floridi, L. and Sanders, J. (2001) “Artificial Evil and the Foundation of Computer Ethics,” Ethics and Information Technology, vol. 3, no. 1

Keulartz et al. (2004) “Pragmatism in Progress,” Techne: Journal of the Society for Philosophy and Technology, vol. 7, no. 3

Kim, J. (1998) “Reasons and the First Person,” in Human Action, Deliberation, and Causation, eds. Bransen and Cuypers. Kluwer Publishers.

Latour, B. (1994) “On Technical Mediation – Philosophy, Sociology, Genealogy,” Common Knowledge, vol. 3

Miller, K. and Larson, D. (2005) “Angels and Artifacts: Moral Agents in the Age of Computers and Networks,” Journal of Information, Communication & Ethics in Society, vol. 3, no. 3

Moor, J. (1998) “Reason, Relativity, and Responsibility in Computer Ethics,” Computers and Society, vol. 28, no. 1
