Is Technology Innocent?

Holding Technologies to Moral Account

MICHAEL ARNOLD AND CHRIS PEARCE

Digital Object Identifier 10.1109/MTS.2008.924868

Sophisticated technologies – in different times, this could be a stone axe or a computer system – are important to humans in cultural, economic, and existential terms. The performance of these technologies requires careful assessment. Some forms of technology assessment are fine-grained and small-scale, attending to specific technologies in specific contexts (for example, HCI evaluations, usability studies, and user-centered design methods), while other approaches are more sweeping, critiquing technology in epochal terms rather than focusing on this or that example (see for example [13]–[15]). It is proposed here that critical assessments of technology should hold sophisticated artifacts to moral account. The normative standards by which technologies are judged are thus extended from exclusively instrumental concerns to the noninstrumental realm. Technologies must be held morally accountable for their actions in order for those actions to be assessed appropriately.

Technologies Act

Humans and technologies interact in a variety of ways, and together, humans and their technologies act in the world. Some of these actions are performed at the direct command of humans, while others are autonomous, in that they are prompted by triggers internal to the technology. Some actions are optional for the operator; others are mandated by the technology, or perhaps by other technical systems to which the system in question is answerable. Some human actions can be performed without the participation of technology, but for pragmatic reasons are not. Other human actions require the participation of technologies. Technologies thereby respond to action, intervene with their own acts, make requests for action, receive requests for action, make demands on others, and receive demands from others. Technology is thus an actor in human affairs, in at least the literal and weakest sense of the term – the technology acts; it performs actions in the world that are part of the processes and outcomes of human affairs.

In so much as the technology acts in this weak sense, it might be held to account for its actions against instrumental criteria that compare the technology's actions with human will for instrumental action. Where human will or human purposes are not faithfully executed by a technology, the technology might be held to be deficient in some way – to be slow, inefficient, faulty, costly, error prone, difficult to use, and so forth. The more interesting question is whether a stronger claim can be made about the actions of the technology, a claim that better represents its role and its place in our considerations. A phenomenological approach to the relation between humans and artifacts supports such a stronger claim. On this stronger claim, the technology does not simply act to implement the will of the operator and/or the designer; rather, it mediates the will of the operator and designer [16].

Activity Theory as a Potential Framework

When a doctor's computer system acts to receive and store an updated patient record, the computer system is not simply acting in mechanistic obeisance to the doctor's will, any more than it is acting autonomously, in and of itself. Rather, by requiring data in a certain format, by requiring that certain fields be filled, by limiting the format and length of that data, by requiring that certain buttons be clicked in a certain order, and by requiring that the doctor sit in a certain position and attend to certain things, the actions of the computer can be understood as mediating the doctor's relation to the patient record (an electronic version of the patient), and to the world of the consultation. The doctor and the computer system thus work their way through the consultation in consort, with a few tasks completed more or less autonomously by one or the other, depending on respective competencies, while many other tasks are completed together.
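To make the kind of mediation described above concrete, here is a minimal sketch of a record form whose constraints do some of this work. It is purely illustrative and assumes a hypothetical, drastically simplified record format; the field names, length limits, and rules are invented for the example and are not drawn from any actual clinical system.

```python
# Purely illustrative: a hypothetical, much-simplified patient-record form.
# Field names, length limits, and rules are invented for the example.

REQUIRED_FIELDS = ("patient_id", "presenting_problem", "diagnosis_code")
MAX_LENGTHS = {"presenting_problem": 200, "diagnosis_code": 8}


def enter_field(record, name, value):
    """The system, not the doctor, decides which fields exist and how long they may be."""
    if name not in REQUIRED_FIELDS:
        raise ValueError(f"'{name}' is not part of the record format")
    limit = MAX_LENGTHS.get(name)
    if limit is not None and len(value) > limit:
        raise ValueError(f"'{name}' exceeds the {limit}-character limit")
    record[name] = value


def save_record(record):
    """Saving is permitted only once every mandated field has been supplied."""
    missing = [f for f in REQUIRED_FIELDS if f not in record]
    if missing:
        raise ValueError(f"record cannot be saved; missing fields: {missing}")
    return dict(record)


# The doctor's entries are accepted or refused on the system's terms.
record = {}
enter_field(record, "patient_id", "A-1042")
enter_field(record, "presenting_problem", "persistent cough, two weeks")
enter_field(record, "diagnosis_code", "R05")
saved = save_record(record)  # succeeds only because every constraint was met
```

The point of the sketch is simply that the doctor's entry is accepted or refused on the system's terms: which fields exist, how long they may be, and when the record may be saved are decided in advance by the artifact, not by the doctor in the moment.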

Such is also the case in other systems; the argument extends to artifacts less complex than the doctor's system (say, a stone axe), or more complex (say, self-modeling artificial intelligence (AI)). The phenomenological primitive is the relation between humans and the world [1]. Phenomenologically (to continue our example), the doctor's relation to the world of the consultation comprises an interplay of perception of the world {I < world} and action in the world {I > world}, and the computer system acts to mediate both relations. This notion of "mediation" captures the detours, the compromises, the extended capacities, the constraints, the shifts in priorities, the altered methods, the new points of focus, the new objectives, the changed criteria, and other alterations to will that occur when one negotiates action in partnership with another. The doctor's actions in the world of the consultation and the computer system's actions in the world of the consultation are not independent or autonomous – they are negotiated in a dance of reciprocal accommodation and resistance. Neither acts in response to independent will. Both sets of acts are mediated. The weak claim that the computer acts is thus amended to a stronger claim: its actions are not strictly the mechanistic reflection of human will, and its competencies and actions shape the will and the actions of humans. The computer can, for example, make the doctor a better doctor by checking the myriad of drug interactions, or by suggesting the current "best practice" for a treatment.
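The drug-interaction example can be sketched in the same spirit. The check below is hypothetical: the two listed interactions are well-known ones, but the tiny hand-made table merely stands in for the curated databases a real prescribing system would consult.

```python
# Purely illustrative: a hypothetical interaction check with a tiny, hand-made table.
# A real prescribing system would query a curated drug-interaction database.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin concentration",
}


def check_prescription(current_medications, new_drug):
    """Return the warnings the system raises before a new prescription is accepted."""
    warnings = []
    for existing in current_medications:
        pair = frozenset({existing.lower(), new_drug.lower()})
        note = KNOWN_INTERACTIONS.get(pair)
        if note is not None:
            warnings.append(f"{new_drug} with {existing}: {note}")
    return warnings


# The system's query interrupts and reshapes the prescribing act.
alerts = check_prescription(["warfarin"], "aspirin")
if alerts:
    print("Review before prescribing:", alerts)
```

When the check returns a warning, the prescribing act is renegotiated: the system's query co-shapes what the doctor goes on to do.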


In so much as human and nonhuman co-participation is intrinsic to the phenomenological relation, it is tempting to conflate the human and the technology, and to conduct an analysis and critical assessment of action in terms of a human–nonhuman hybrid. One might argue, for example, that the act of updating a patient record intertwines and mixes up the respective competencies and inadequacies of the doctor and the computer system to the point where it makes more sense to talk of the record-keeping system as a hybrid – in terms which cast both doctor and computer as components of the system. Contemporary Western medical practice (accounting practice, stock control practice, manufacturing practice, communication practice, etc.) has integrated technology to the point where human and non-human contributions to that practice are seamlessly alloyed. In such a hybrid, or heterogeneous "actor network" [2], separating out the human contribution to outcomes from the non-human is an analytic achievement subject to argument, not a self-evident fact of nature.

In some analytic contexts, a "systems approach" that is blind to distinctions between human and non-human action is appropriate. (The Turing test for artificial intelligence [11] and Floridi and Sanders' levels-of-abstraction arguments [12] are both relevant here.) The system as a whole constitutes the action and co-determines the outcomes. In other contexts, however, this approach serves to gloss the differences between humans and technologies in ways that are unhelpful. In designing a medical system for General Practice, for example, we should be mindful that the computer is better than the human at "remembering" the details of a patient's record through multiple appointments, and so it is best that this task be delegated to the computer. On the other hand, the information presented by the patient is rarely organized in a way suitable for the computer, and is best mediated by the doctor. So while the responsibility for capturing and storing the record of presentation may be a hybridized "system responsibility," the humans and non-humans in that system remain distinguishable, and are not commensurable.

We note that these delegations are subject to review and reassessment as the competencies and inadequacies of humans and technologies change relative to one another. Recently, this ongoing review has resulted in the delegation of more and more responsibility for action in the world to non-humans. (Indeed, it is now quite likely to be unethical as well as impractical for a doctor to practice without a computer, an accountant to practice without a spreadsheet, or a salesperson to practice without a mobile telephone.) So for some purposes at least, it continues to make sense to distinguish between humans and technologies, in order to focus on their respective competencies and their delegations of responsibilities and actions, while at the same time acknowledging the stronger claim that humans and technologies act and co-shape one another's actions in performance [17], [18].

To return to our example: if the doctor's computer system a) acts in the consultation, b) acts in ways that do not simply reflect the will of the doctor, c) acts to co-shape the actions of the doctor, but d) remains distinguishable from the doctor, can we take the next step and make the still stronger claim that the computer system may be held to account for these actions in terms drawn from a moral discourse? Such a claim shifts an analysis of the performance of the technology from the descriptive-analytic realm, where the discourse assesses action in terms of efficiency, accuracy, speed, and so forth, to the normative realm of ethics and morality, where the discourse assesses action using terms such as right and wrong, or good and bad. Building upon the insightful contributions of others [3]-[9], this is the claim we wish to pursue.

Normative Realm of Morality

In the Western metaphysical tradition, to be held accountable for one's actions in a moral or ethical sense is a privilege and a responsibility afforded only to humans; non-humans (such as an animal, a rock, or a machine) are "accountable" in a much more limited sense. The limited sense in which a non-human might be held to account is the sense in which its action in a chain of events is identified – for example, a dog growls at a child, unstable rocks threaten a house, car headlights fail because of a faulty magneto – and remedial action is directed at the dog, the rocks, or the magneto. In some limited, pragmatic sense, then, the dog may be "held to account" by being disciplined, the rock formation may be "blamed" and then removed or reinforced, and the magneto identified as the "culprit" and replaced. In the same limited sense, a doctor, an accountant, or a salesperson will not tolerate a computer system that crashes repeatedly or loses files. The dog, the rocks, the magneto, and the computer system are held to be "at fault," and are the focus of intervention, but this focus is limited in so much as moral responsibility for the action of the dog, rocks, magneto, or computer system is not held to be intrinsic to them, but is commonly held to be located in the dog's human owner, the human planners that co-located the house and the rocks, the human owner of the car, or the computer system's human programmer. Although the actions of non-human technologies may be the immediate cause of great harm, it is humans further back in the chain of causality that are held to be morally accountable for the harm.

The reasons for this have been pretty much settled after a couple of thousand years of discussion among metaphysicians and moral philosophers. The actions of a technology such as a computer system may be proximate in a chain of causality culminating in great harm, but a technology such as a computer system is held to be beyond the realm of moral accountability on any one of five grounds. These grounds are not necessarily exhaustive, and other objections may be raised, but these five do represent significant hurdles the argument must jump to be accepted.

1) The a priori argument. The computer system is not human. A priori, only humans may be held to be morally accountable for their actions.

2) The dumb instrument argument. The actions of a computer system may be the immediate cause of harm, but the computer is created and operated by humans. The creators and operators are responsible for the action of the computer system, not the computer system itself.

3) The free will argument. The computer system is deterministic. That is, it has no will, and it is not free to choose its actions. Without this freedom there can be no moral responsibility.

4) The right mind argument. Computer systems have no knowledge of the wider circumstances in which they act, or of the consequences of their actions, and cannot reasonably be expected to have an awareness of these contexts or consequences. To be held accountable for the consequences of an action, a computer system must have the capacity to foresee those consequences.

5) The dilution of responsibility argument. The computer system is ill-defined, is not a discrete entity, has no clear boundaries in space and time, and its extensive constituent elements and boundaries are not clearly evident. Responsibility that is located everywhere is located nowhere.

Do any of these objections provide sufficient reason to exclude artifacts such as computer systems from moral accountability?


A Priori Argument

The first objection is reasonably easy to dismiss. The proposal that, a priori, only humans, by virtue of their very humanness, may be held morally responsible is not so much an argument as an unreasonable statement of faith in the privileged position of a particular species. One may easily imagine non-humans that might be held to account within a moral framework (for example, beings from another planet, angels, or highly intelligent animals), and not all humans are held to be morally accountable for their actions (for example, children and the mentally ill). Indeed, non-human entities such as corporations are commonly held to be accountable for their actions in moral as well as legal terms. There are therefore criteria other than humanness per se for granting or withholding moral accountability.

Dumb Instrument Argument

The second objection seeks to divert accountability for the acts of a technology from the system itself to the system's creators and operators. This of course assumes objection three – that the technology is an instrument that faithfully translates the instructions of its designers and operators into action. As the technology is merely obeying human orders, and has no alternative course of action other than to obey orders, it follows that it is those who issue the orders who are responsible for outcomes, not the purely instrumental intermediary. This "dumb instrument" objection is not sustainable, based as it is on a series of erroneous assumptions about the relationship between humans and technologies. As has been argued, it is clearly not the case that our technologies are designed and manufactured strictly in accordance with the pure will of humans, nor is it the case that their operation is a reflection of the will of humans and humans alone.

The design and manufacture of an artifact – a stone axe or a computer system – is a process of negotiating with the materiality of the world to establish possibilities for action in the world. These possibilities emerge with the stone tool or computer system – they do not prefigure the tool or system. Technologies are in this sense not entirely of our own making, and what we do with these technologies is certainly not unilaterally determined. We negotiate with the stone. We argue with the binary signals. We compromise with the obduracy of the world of things. We are surprised by the unimagined possibilities that present themselves only when the stone is in the hand, or the computer system is booted. Indeed, it is clear that the qualities and characteristics of the non-human elements of the stone in the hand and the computer system in the GP's office are as influential in the shape and performance of the system as the hapless humans who are blamed for every mishap and imperfection. The human designers and manufacturers may well wish for a different artifact, and would do better were it up to them. But of course it is not. Things will have their say.

A similar story might be told of the operators of the stone axe or the computer system. Do we not all wish that technologies were not of themselves so wilful? But this too is an impulse that misreads the role of the technology. We have seen that the computer system is not simply a faithful and obedient servant, and nor would we wish it to be. The computer system or the axe prescribes certain actions, enables some, and proscribes others, and out of this negotiation new means and new ends arise. The technology shapes how we do things, shapes what we can do, and shapes what we want to do, in ways that are productive as well as constraining. Our stone and electronic tools have no doubt changed us as much as we have changed them, and this two-way flow of reciprocal influence is our way of being – not a way wherein we stand unmoved while we change the world around us.


So when something goes wrong, and a computer system (or a stone axe) is proximate in a chain of causality connected to a wrong, it is not necessarily correct to single out and blame the operator, for it is not the operator alone who determines and executes the means and the ends; nor is it correct to blame the many designers and manufacturers of the computer system (or stone axe), for they too are constrained in certain directions, and extended in unexpected others, through negotiation with non-human technologies. We humans do not have it all our own way. But this is not to say that humans are never to blame where computer systems or stone axes are implicated in wrong – human designers, manufacturers, and operators are not absolved of all responsibility no matter how neglectful or reckless their actions might be. That is not the point of the argument. The point is not to absolve humans of responsibility, but to attribute responsibility where it lies, regardless of humanness.

All human action that is mediated through the co-participation of technologies is action that is co-determined, in both its means and its ends, by the participation of those technologies. Without a stone axe in the hand, both the means and the ends would be different. Without napalm, cluster bombs, and antipersonnel mines, the means and the ends of war would be different. Without the computer system in the doctor's office, the means and the ends of general practice would be different. It does not make sense to argue that the designers and manufacturers of the artifact are entirely responsible for events far removed from design and manufacture. It does not make sense to argue that the operator of the artifact is solely culpable in selecting and determining events, as though the same means and the same ends were available in the absence of the axe or the computer system. It does make sense to make moral judgments about the acts that are co-shaped by the edged stone or the doctor's software. One might well say that on this or that occasion, in this or that circumstance, the act was a good act or a bad act, a moral act or an immoral act, and that the action of the axe, or of the computer system – and not just the acts of the human designer, manufacturer, or operator – was good or bad, moral or immoral. The stone in the hand energizes the blow and the skull is fractured – this may be assessed as a good thing or a bad thing, but either way the axe is implicated. The doctor's computer system queries the drug prescription and the prescription is altered – this may be assessed as a good thing or a bad thing, and either way the computer system is implicated. Stone axes and computer systems are not innocent. They are co-participants in emergent good and bad.

Free Will and Foreknowledge Arguments

The third objection to the inclusion of technologies in the domain of moral or ethical assessment is that, unlike humans, non-humans have no will and are not free to determine their actions. Closely tied to the free will requirement is the fourth objection – that non-human technologies have no capacity to foresee the outcomes of their actions, and without this knowledge cannot be held responsible for those outcomes. To be morally accountable, an actor must choose to act in circumstances in which they might have chosen otherwise, and must have chosen to act in this way in knowledge of the likely consequences, or in circumstances where the actor might reasonably be expected to anticipate those consequences. It can be argued that some applications of artificial intelligence have demonstrated goal-setting and semi-autonomous learning capacities that might be regarded as akin to the possession of free will and knowledge of consequences. If this is so, or if it becomes so at some point in the future, the argument that technology be brought into the realm of moral assessment is strengthened. However, the possession of these exotic qualities is not required for our argument.

The requirement that moral actors exercise will has been used for hundreds of years in classical moral philosophy to differentiate between humans and non-humans, and the requirement that this will be exercised in knowledge of the likely consequences has been used to differentiate among humans. In the case of the first line of differentiation, the line lies between, say, a cracked skull resulting from one person choosing to hit another over the head with a stone axe, and, say, a cracked skull resulting from a falling coconut. It is argued that the human assailant with axe in hand had the choice of action, and might have acted differently. The coconut palm did not exercise will to release the coconut, and the coconut must obey gravity. In the case of the second line of differentiation, the ability to apprehend circumstances and consequences differentiates between categories of humans, with the effect of excluding some from moral assessment – typically children, the mentally ill, and others "not of right mind." In other times and places, the categories of humans excluded from normative moral assessment were more numerous, and included slaves, women, natives, and other Others. It is argued that while the actions of a two-year-old or an adult suffering hallucinations might have been wilful, the absence of a state of mind capable of comprehending the consequences of those actions excludes the actor from moral responsibility. But while it might be true that coconut trees, children, and those hallucinating are rightly excused on grounds of an absence of will, and/or an absence of knowledge of consequences, the same cannot be said in the case of computer systems or stone axes.

For unlike the coconut palm, the computer system and the axe are artifacts, and here we must draw a new line of differentiation – not among humans (to separate the morally competent from the incompetent), nor between morally competent humans and all non-humans, as is now standard, but among non-humans, to differentiate between those that are artifacts and those that are not. For artifacts are designed, manufactured, and operated in consort with human will [18], [19]. They are not formed as they are, and do not act as they do, innocent of will and separated from prescience of consequences. The stone axe or the doctor's computer system is without an autonomous will. They do not have a will of their own to exercise freely in knowledge of likely consequences. Nor do they act on their own, as autonomous beings. And nor do we. As has been argued, a human's will to act, in both means and ends, is not unilaterally exercised. A will to act, and the choice of actions available to us, are formed in relation to our capacity to act, which is mediated in conjunction with technologies. We are both in it together, Neanderthals and axes, doctors and computer systems. Just as we act together, our will to act, our ability to act otherwise, and our prescience of means and ends emerge in relation to one another – humans and technologies. A technology cannot be excused from the realm of will simply because that will is not possessed independent of human design, manufacture, and operation. The edge of the axe materializes, in its very materiality, an "in order to" that stands independent of the origins of that ordering. The doctor's computer system is not without purpose. The material ordering of an "in order to" – of purposes and functions – is the materialization of will, and there is nothing in any of this that might not have been otherwise, given different human conditions and different material conditions.


A stone cannot take just any shape whatsoever, but it can take many shapes, materializing many different relations and orderings, suggesting many strategies, foreseeing many different outcomes, depending upon the particular negotiations of human will and the materiality of the stone. A vast number of choices (though not entirely free choices) are exercised in the design, manufacture, and use of complex technologies like a stone axe or a computer system, and the choices that have been negotiated are material in this substance, in this place, doing this. The doctor's system, for example, might be other than it is and might act in ways other than it acts; that it is as it is, is an act of will (though not of free will), emergent as humans negotiate with non-human technologies in design, manufacture, and operation. When the anthropologists arrive from Andromeda and start looking for expressions of desire, of will, they will examine the "in order to" of constructions like stone axes or computer systems, and the choices that have been made as we have negotiated means and ends with the world. Our will is not free to express as we might wish; it is constrained, and it is opened out, by an obdurate world that provides the resources and sets the rules for ordering. But the will we are able to express, however much compromised, is present in our technologies. In this sense, technologies do "have" will, and the choices that have been made in negotiating an "in order to" with an obdurate world are made through a process of negotiating both the means and the likely ends with the world of non-humans.

Dilution of Responsibility Argument

The fifth problem to overcome is particularly obvious in the contemporary era of distributed technologies, though it also applies in the case of technologies like the stone axe. If a technology is to be held to account on any grounds – instrumental, legal, moral, or other – we must define just what the technology is, where it begins and ends, what is part of the technology and what is not. This problem is a real one, for all technologies are extensive in time and space. The stone axe and the computer system are artifacts that "fold up" [10] millions of years of geological activity and thousands of years of human practice, pulled together from places as diverse as Korean factories, Californian software houses, flint mines, and forests. We might reasonably argue (and scholars in Science and Technology Studies do make this argument) that the doctor's system comprises not just the doctor and all the particular hardware and software that resides in the doctor's rooms, but also the global communications hardware and software it depends upon, and all the people, companies, and organizations responsible for its construction, operation, and maintenance, now and back through time; the specific-purpose applications software and the system software that the doctor uses, and all of the people, companies, organizations, tools, code libraries, and testing regimes responsible for its construction, operation, and maintenance, now and back through time; the educational systems responsible for training all of the above people; the manufacturing systems responsible for constructing all of the non-human technologies that are part of the system, now and back through time; and so on. Similar chains may be drawn through time and space in the case of the manufacture of a stone axe. If we are to say that accountability is distributed among all humans and non-human technologies whose actions are implicated in a causal chain that has led to a bad outcome, the cast is so huge, and the guilt spread so thin, that there is scarcely any point in attributing accountability or responsibility at all. If everyone and everything is responsible, nothing is.


This is clearly a problem, but it is a problem that can be, and has been, overcome by many inquiries, inquests, and tribunals charged with the task of distributing responsibility for events in circumstances where many elements participated in the causal web that culminated in some accident or disaster. Accountability, in instrumental and legal terms if not in moral terms, can be and has been distributed through extensive and complex systems of interaction. The question here is whether any inquiry or investigation of moral culpability, or any assessment couched in moral or ethical terms, should regard the actions of technologies as relevant, and whether the distribution of normative moral accountability should include those technologies. The pragmatic problems raised by the fifth objection are real, but are not sufficient to exclude non-humans, given that there is good reason in principle to include them. If the axe or the doctor's computer system has acted in a way that is implicated in a bad outcome; if the outcome might have been other than bad were it not for the actions of the axe or the computer system; if the axe or computer system materialized an "in order to" that is manifest in that bad outcome; if the relationship between the "in order to" and the bad outcome was foreseeable; and if the "in order to" might have been different, given a different will and a different course of negotiations – then the non-human axe or computer system bears some responsibility for the badness of that outcome. Whether others are also responsible (say, the maker of the axe, or the programmer of the computer) is a matter for determination. To decline to attribute a responsibility that in principle might be attributed, simply because responsibility is distributed rather than concentrated, may be a pragmatic response, but it is not justified in principle, and the fifth objection may therefore be dismissed.

What Are We Going to Do?

But does any of this matter? Even if the argument is accepted, and the actions of non-human technologies become subject to moral as well as instrumental evaluation, what are we going to do – put the doctor's computer system in jail? Send the axe to confession? Fine it? Execute it? Re-educate and rehabilitate it? Well, yes. Although it is allowed that in a literal sense jail and fines might not work, the figurative execution or rehabilitation of non-human technologies are certainly options that should be considered. Execution, for example, is an attractive option in some circumstances. There are technologies whose constitution is so aligned to bad outcomes, or whose actions are so deeply implicated in bad outcomes, that rehabilitation is improbable, and the permanent banning of the technology (that is, its "execution") becomes the best way to avoid those bad outcomes. One might think here of napalm, cluster bombs, CFC spray, dioxin, battery cages, and automated spam. To blame humans and only humans for the acts of napalm or CFC spray, and to direct moral outrage at humans and only humans, while implicitly or explicitly finding the napalm or the CFC spray in themselves irrelevant to a moral assessment, lying outside an ethical jurisdiction, and thereby innocent of all moral wrong, is perverse. These technologies act in the world, the world is different for their actions, and at the very least they share responsibility for the outcomes of their actions in the world. Accordingly, certain technologies should share the focus of moral outrage, and their moral assessment has a place in the restitution of moral order.

The eradication of certain technologies on moral grounds will lead to better outcomes, but this can occur only if those technologies are held to bear responsibility for those outcomes. If we continue to conclude that technologies are beyond moral accountability, and that only humans are morally responsible for bad outcomes, the technologies that might otherwise be eradicated will continue to threaten and cause tragedy, and the technologies that might otherwise be rehabilitated will continue to cause nuisance and accident. But making decisions about which technologies are to be eradicated, and why, and how, and which are to be rehabilitated, and why, and how, requires a technology assessment capable of reaching those decisions. To allow that technologies are legitimate subjects of moral assessment alters the grounds for assessment in ways that make this possible. When considered without an explicitly moral assessment, technologies may be judged only in instrumental terms. They may be held to be inefficient, inaccurate, slow, costly, cumbersome to use, and so on. But strictly instrumental measures are inadequate yardsticks for taking the full measure of these powerful actors. To draw on concepts from the moral domain is to assess technological actions within an appropriate framework. Right and wrong, virtue and wickedness, good and bad, are yardsticks by which we can more meaningfully comment upon the actions of cluster bombs or automated spam. A position at the intersection of moral axes provides a more meaningful conceptual context, and a more appropriate language, for the assessment of technologies than a position solely on instrumental axes. Only in a moral context can the actions of technologies be assessed with appropriate conceptual tools, and with appropriate power and authority.

Author Information

Michael Arnold is with the School of Social and Environmental Enquiry, University of Melbourne, Victoria 3010, Australia; email: mvarnold@unimelb.edu.au.

References

[1] D. Ihde, Technics and Praxis. Dordrecht, The Netherlands, and Boston, MA: Reidel, 1979.
[2] J. Law and J. Hassard, Eds., Actor Network Theory and After. Oxford, U.K.: Blackwell.
[3] B. Latour, "Morality and technology: The end of the means," Theory, Culture & Society, vol. 19, no. 5/6, pp. 247-260, 2002.
[4] H. Nissenbaum, "Computing and accountability (social computing)," Commun. ACM, vol. 37, no. 1, pp. 72-81, 1994.
[5] D.G. Johnson and T.M. Powers, "Computers as surrogate agents," in Ethicomp 2004: Challenges for the Citizen of the Information Society, Proc. Seventh Int. Conf. Syros, Greece: University of the Aegean, 2004.
[6] D. Johnson, Computer Systems: Moral Entities but not Moral Agents. Canberra, Australia: Centre for Applied Philosophy and Public Ethics, 2006; http://www.cappe.edu.au/abstracts_06.htm.
[7] P.-P. Verbeek, "Materializing morality: Design ethics and technological mediation," Science, Technology, & Human Values, vol. 31, no. 3, pp. 361-380, 2006.
[8] D. Birsch, "Moral responsibility for harm caused by computer system failures," Ethics & Information Technology, vol. 6, pp. 233-245, 2005.
[9] T. Swierstra and J. Jelsma, "Responsibility without moralism in technoscientific design practice," Science, Technology, & Human Values, vol. 31, no. 3, pp. 309-332, 2006.
[10] B. Latour and C. Venn, "Morality and technology: The end of the means," Theory, Culture & Society, vol. 19, no. 5-6, pp. 247-260, 2002.
[11] J.H. Moor, "The status and future of the Turing Test," Minds Mach., vol. 11, no. 1, pp. 77-93, Feb. 2001.
[12] L. Floridi and J.W. Sanders, "On the morality of artificial agents," Minds Mach., vol. 14, no. 3, pp. 349-379, Aug. 2004.
[13] M. Heidegger, The Question Concerning Technology, and Other Essays, W. Lovitt, trans. New York, NY: Garland, 1977.
[14] J. Ellul, The Technological Bluff, G.W. Bromiley, trans. Grand Rapids, MI: W.B. Eerdmans, 1990.
[15] L. Mumford, The Lewis Mumford Reader, D.L. Miller, Ed. New York, NY: Pantheon, 1986.
[16] B. Nardi, Ed., Context and Consciousness: Activity Theory and Human Computer Interaction. Cambridge, MA: MIT Press, 1995, pp. 17-44.
[17] D.G. Johnson and T.M. Powers, "Computers as surrogate agents," in Ethicomp 2004: Challenges for the Citizen of the Information Society, Proc. Seventh Int. Conf., T.W. Bynum, N. Pouloudi, S. Rogerson, and T. Spyrou, Eds., Syros, Greece, Apr. 14-16, 2004, vol. 2. Syros, Greece: Univ. of the Aegean, 2004, pp. 422-435.
[18] B. Latour, Pandora's Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard Univ. Press, 1999.
[19] B. Latour, Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford, U.K., and New York, NY: Oxford Univ. Press, 2005.
