An Artificial Neural Network Approach for Creating an Ethical Artificial Agent
Ali Reza Honarvar 1, Nasser Ghasem-Aghaee 2

Abstract— The capabilities of autonomous robotic systems and intelligent artificial agents have advanced dramatically. As intelligent artificial agents become more autonomous and human-like, their capacity to make moral decisions becomes an important issue. In this work we developed an artificial neural network that considers various effective factors in the ethical assessment of an action in order to determine whether a behavior or an action is ethically permissible. We integrated this network into the BDI-Agent model as part of its reasoning process so that the agent behaves ethically in various environments.

Index Terms— Artificial ethical agent, ethical reasoning, AMA, BDI-Agent, machine ethics, artificial neural network.

I. INTRODUCTION

“AI will produce robots that have tremendous power and behave immorally” [1]. “With the advancement of technological development, autonomous artificial intelligent agents play increasingly important roles in our lives” [2]. “With respect to robot–human relationships trust is a key concept” [3]. “Autonomous systems must make choices in the course of flexibly fulfilling their missions, and some of those choices will have potentially harmful consequences for humans and other subjects of moral concern” [4]. “The greater the freedom of a machine, the more it will need moral standards” [5]. These observations indicate that the growing number of increasingly autonomous software agents and robots that we interact with, or that operate on our behalf, should be equipped with moral reasoning capability. A computer system, robot, or android capable of making moral judgments would be an artificial moral agent (AMA), and the proper design of such systems is perhaps the most important and challenging task facing developers of fully autonomous systems [4]. In [6] Anderson proposed Machine Ethics as a new field of inquiry that considers the consequences of machines' behavior toward humans.

Machine Ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps toward other machines as well, is ethically acceptable [7]. The ideal and ultimate goal of this field is the implementation of ethics in machines, so that machines can autonomously detect the ethical effect of their behavior and follow an ideal ethical principle or set of principles; that is to say, a machine is guided by this principle or these principles in the decisions it makes about possible courses of action it could take [6]. Moor proposes a classification of four types of ethical agents: ethical impact agents, implicit ethical agents, explicit ethical agents and full ethical agents, ranging from agents that have an impact by their very existence to full-blown, human-like reasoning agents that have consciousness and intentionality. Given our current understanding of moral reasoning, artificial intelligence, epistemological devices, etc., the best we can try to construct are explicit ethical agents that can make some ethical judgments that are not hard-wired into their make-up and have some ability to provide an account of how they arrived at their judgments [3]. In [8] Colin Allen, Iva Smit and Wendell Wallach discuss the philosophical roots and computational possibilities of top-down and bottom-up strategies for designing artificial moral agents (AMAs). Top-down approaches to this task involve turning explicit theories of moral behavior into algorithms. Bottom-up approaches involve attempts to train or evolve agents whose behavior emulates morally praiseworthy human behavior. The idea behind top-down approaches to the design of AMAs is that moral principles or theories may be used as rules for the selection of ethically appropriate actions. In this research we combine top-down and bottom-up approaches to create an explicit ethical agent that classifies moral and immoral actions by using an artificial neural network. Section 2 discusses some preliminaries in the AI and ethics domain that help the reader understand the main objectives of this paper. Related work in machine ethics is surveyed in Section 3. Sections 4 and 5 discuss the architecture of the designed AMA and the artificial neural network used by this AMA, respectively, and finally some conclusions are given in Section 6.

Manuscript received August 29, 2009.

1 Ali Reza Honarvar is with the Computer Engineering Department of Islamic Azad University, Arsanjan Branch, Arsanjan, Fars, Iran (e-mail: [email protected]).

2 Nasser Ghasem-Aghaee is with the Computer Engineering Department of Sheikh Bahaei University and University of Isfahan, Isfahan, Iran (e-mail: [email protected]).

II. PRELIMINARIES

A. BDI Agents

BDI agents have been widely used in relatively complex

and dynamically changing environments. BDI agents are based on the following core data structures: beliefs, desires, intentions (BDI) and plans [9]. These data structures represent, respectively, information gathered from the environment, a set of tasks or goals contextual to the environment, a set of sub-goals to which the agent is currently committed, and specifications of how sub-goals may be achieved via primitive actions.

B. Case-based Reasoning

Case-based reasoning (CBR) has emerged in the recent past as a popular approach to learning from experience. CBR is a reasoning method based on the reuse of past experiences, which are called cases [10]. Cases are descriptions of situations in which agents with goals interact with the world around them. Cases in CBR are represented by a triple (p, s, o), where p is a problem, s is the solution of the problem, and o is the outcome (the resulting state of the world when the solution is carried out). The basic philosophy of CBR is that the solutions of successful cases should be reused as a basis for future problems that present a certain similarity [11]. Cases with unsuccessful outcomes, or negative cases, may provide additional knowledge to the system by preventing the agent from repeating similar actions that lead to unsuccessful results or states.

C. Ethical systems

Ethics has several branches, among them meta-ethics, applied ethics and normative ethics. Meta-ethics deals, among other things, with the possibility of ethics, what moral knowledge is, and how it is possible, if at all, to have moral knowledge. Applied ethics focuses on specific domains, such as medical applications, environmental issues, etc. Normative ethics deals with what is to be considered right and wrong conduct. Normative ethics itself can be divided into three broad categories: teleological ethics, virtue ethics, and deontological ethics. Teleological ethics looks at the consequences of an act to evaluate it morally: ‘the outcome justifies the action’; utilitarianism, for example, is a well-known form of teleological ethics. Virtue ethics looks at the character traits of an individual for its moral evaluation. In deontological ethics the focus is on the action or state in its own right (‘killing is wrong’ and ‘lying is wrong’, no matter what the consequences are) [3]. Hedonistic Act Utilitarianism (HAU) [1, 12], Ross’s theory of multiple prima facie duties [1], the Catholic theory of ethics [4] and a basic model of an agent and a patient for ethical evaluations were examined to elicit effective parameters for the ethical assessment of an action. These parameters are used as inputs to the neural net for classifying moral and immoral actions. Artificial agents can be conceived of as moral patients (entities that can be acted upon for good or evil) and also as moral agents (entities that can perform actions, again for good or evil). The agent and patient model is not an ethical theory, but it can be used for the ethical evaluation of an action. Based on this model, ethical situations are divided into different categories using the notion of typification of the participating entities in the ethical situations. For instance, a human acting on a human, a robot acting on a human, and a robot acting on a robot are three different kinds of ethical categories. The types (e.g., human) and properties (e.g., “good”, “evil”) of agents and patients are factors that determine the ethical category of a situation [13].
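To make the case representation of subsection B concrete, the following minimal Python sketch shows one possible encoding of the (p, s, o) triple together with a naive similarity-based retrieval step; the class, field and threshold names are illustrative assumptions on our part, not part of the cited CBR literature.

from dataclasses import dataclass

@dataclass
class Case:
    """A CBR case: the problem faced, the solution applied, and its outcome."""
    problem: dict     # p: description of the situation (e.g., sensed state and goals)
    solution: str     # s: the action chosen in that situation
    outcome: str      # o: the resulting state or evaluation after applying the solution

def similarity(p1: dict, p2: dict) -> float:
    """Naive similarity: the fraction of attributes with matching values."""
    keys = set(p1) | set(p2)
    return sum(p1.get(k) == p2.get(k) for k in keys) / len(keys) if keys else 0.0

def retrieve(case_base: list, problem: dict, threshold: float = 0.8):
    """Return the most similar stored case, or None if nothing is similar enough."""
    best = max(case_base, key=lambda c: similarity(c.problem, problem), default=None)
    return best if best and similarity(best.problem, problem) >= threshold else None

# Example: reuse the evaluation of a past "conceal item details" case.
past = Case({"patient": "human", "act": "conceal_details"}, "conceal_details", "unethical")
print(retrieve([past], {"patient": "human", "act": "conceal_details"}))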

III. RELATED WORKS

In this section we describe the attempts that have been made in machine ethics to create an ethical artificial agent. Michael Anderson and Susan Leigh Anderson [1] combined a bottom-up casuistry approach with a top-down implementation of an ethical theory to develop a system that uses machine learning to abstract relationships between prima facie ethical duties from cases of particular types of ethical dilemmas where ethicists agree on the correct action. They proposed two systems (MedEthEx and EthEl) to show the practicality of their approach: MedEthEx, a system that uses a machine-discovered ethical principle to resolve particular cases of a general type of biomedical ethical dilemma, and EthEl, a prototype eldercare system that uses the same principle to provide guidance for its actions. Rafal Rzepka and Kenji Araki [17] suggested that, just as we learn language without learning grammar, most people can behave ethically without learning ethics. They claimed that the WWW can be used to elicit patterns of ethical behavior and safety valves for artificial intelligent agents. In [18, 19] Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello formalized ethical codes with deontic logic and then insisted that robots only perform actions that can be proved ethically permissible. They discussed two conditions needed to ensure that the behavior of artificial agents or robots is ethical: 1. robots only take permissible actions; 2. robots perform all obligatory actions relevant to them, subject to ties and conflicts among available actions. Vincent Wiegel and Jan van den Berg [2, 3] related the knowledge about what is morally (un)acceptable to the desires (goals) of the agents and the formation of intentions (adaptation of plans) by agents. To do this they used the belief-desire-intention model (BDI model) by Bratman and an extensive framework of modal logic, DEAL. They only provide an informal description and not a complete formalization; for testing purposes some implementations were done using a particular multi-agent software system named JACK. In the deontic-epistemic-action logic framework (DEAL) [20], deontic logic covers the deontic concepts of ‘obligation’, ‘permission’, and ‘forbidden’; epistemic logic expresses the things we know and believe; and action logic allows us to reason about actions through the STIT (see to it that) operator. Wendell Wallach and Colin Allen [4] proposed to focus on current research involving the assembly of subsystems implementing relatively discrete human

capacities, with the goal of creating a system with sufficient complexity to provide a substrate for artificial morality. Such approaches are “bottom-up” in their sense because the development and deployment of these discrete subsystems is not itself explicitly guided by any ethical theory. In [21] we proposed the Casuist BDI-Agent architecture, which extends the power of the BDI architecture by combining the CBR method from AI with the bottom-up casuist approach in ethics in order to add the capability of ethical reasoning to the BDI-Agent. In [22], based on the main assumption that ethical reasoning is by its nature always uncertain reasoning, three main approaches to uncertain reasoning were selected and applied to moral reasoning: in addition to Bayesian networks, Dempster-Shafer theory and assumption-based truth maintenance systems were considered for modeling rules that determine the effect of the voluntariness of an action on the ethical responsibility for it, with the Catholic ethical theory used to elicit the voluntariness rules. Thomas M. Powers [23] reformulated Kant’s ethics for the purposes of machine ethics; however, in [5] Ryan Tonkens argued that the development of Kantian AMAs is against Kantian ethics. Jean-Gabriel Ganascia [24, 25] showed that progress in non-monotonic logics, which simulate default reasoning, could provide a clear formalization of ethical rules of behavior for intelligent agents. Marcello Guarini [15] investigated a neural network approach in which particular actions concerning killing and allowing to die are classified as acceptable or unacceptable depending on different motives and consequences.

IV. A NEURAL NETWORK BASED ARTIFICIAL MORAL AGENT (THE NN-BASED AMA)

Value is the only basis for determining what it is moral to do [14]. If we can propose a mechanism for determining the value of each action from an ethical perspective, then we can claim that it is possible for humans and non-humans to behave ethically by using that mechanism. Guarini [15] proposed that an artificial neural network can be used for the moral classification of cases. He designed an artificial neural network (ANN) that determines whether the action of killing or allowing to die is acceptable according to the motive and consequence of the action. In this paper we designed an ANN (which we call the ANN ethical classifier) that can detect whether any action is ethical or not; compared to Guarini’s work, our approach is more general in determining the morality or immorality of an action. We integrated this ANN into a BDI-Agent reasoning process to construct an artificial ethical agent, or AMA, which we call the NN-based AMA. The NN-based AMA uses a CBR-like mechanism for its reasoning process. It also uses an ANN to determine the ethical level (high, average or low) of an action. Algorithm 1 describes the process of ethical decision making of the AMA. According to this algorithm, when the AMA is situated in its application environment, in the first step it senses the

environment to gather the necessary information for decision making. In this algorithm < S > denotes the information collected from the sensed environment by the AMA. In step 2, the AMA’s cognitive state (BDI) and < S > are combined to create the current state of the AMA and the environment; the variable < C > contains this state. In step 3, the AMA decides to select an appropriate action according to < C >; suppose it selects an action < A >. In step 4, it searches its memory for a past experience identical to < C > and the decided action < A >, in order to obtain the ethical evaluation of the action < A > in the situation < C >. In step 5 it checks whether any experience was found. If no experience was found, in step 6 it performs the action < A >. After performing the action < A >, the AMA obtains the necessary inputs for the ANN ethical classifier (described in the next section) by analyzing the patient’s feedback in steps 7 and 8. In step 9 the ANN ethical classifier evaluates the performed action < A > in situation < S > from the ethical aspect. In step 10 this evaluation, together with < S > and < A >, is stored in memory as a new experience for future use by the AMA. If, in step 4, the AMA finds an experience identical to the situation < S > and the action < A >, the AMA jumps from step 5 to step 11, where it checks whether the action < A > is ethical. If the action < A > is ethical then the AMA performs that action; otherwise it selects another action according to < C > and repeats the process again from step 4.

Algorithm 1. The process of ethical decision making of the NN-based AMA:

1.  < S > = AMA.senseEnvironment()
2.  < C > = AMA.BDI + < S >
3.  < A > = AMA.makeDecision( < C > )
4.  < E > = AMA.searchEthicalEvaluation( < C >, < A > )
5.  If < E > == null then {
6.      AMA.perform( < A > )
7.      < feedback > = AMA.collectFeedback()
8.      < net-input > = AMA.analyze( < feedback > )
9.      < E > = ANN( < net-input > )    // ANN: the artificial neural network ethical classifier
10.     AMA.storeNewExperience( < S >, < A >, < E > )
11. } else if < E > is ethical then AMA.perform( < A > )
12. else { < A > = AMA.selectAnotherAction( < C > )
13.     Goto step 4
14. }
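A minimal Python sketch of this decision loop is given below. The method names mirror the pseudocode, but the AMA and ANN objects are assumed interfaces, not the authors' implementation.

def ethical_decision_step(ama, ann):
    """One pass of the NN-based AMA's ethical decision making (Algorithm 1)."""
    s = ama.sense_environment()                 # step 1: sensed information <S>
    c = ama.combine(ama.bdi_state(), s)         # step 2: current state <C> = BDI + <S>
    a = ama.make_decision(c)                    # step 3: candidate action <A>
    while True:
        e = ama.search_ethical_evaluation(c, a) # step 4: look up a stored experience
        if e is None:                           # step 5: no matching experience found
            ama.perform(a)                      # step 6: perform the action
            feedback = ama.collect_feedback()   # step 7: observe the patient's feedback
            net_input = ama.analyze(feedback)   # step 8: derive the ANN input factors
            e = ann.classify(net_input)         # step 9: ethical level of <A> in <S>
            ama.store_new_experience(s, a, e)   # step 10: remember for future use
            return a
        if ama.is_ethical(e):                   # step 11: known ethical action
            ama.perform(a)
            return a
        a = ama.select_another_action(c)        # steps 12-13: try a different action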

V. THE ANN ETHICAL CLASSIFIER

For determining the effective factors in the ethical assessment of an action by an ANN, we examined various ethical theories and a basic agent/patient model. The effective factors for the ethical assessment of an action are:
1. The voluntariness of the agent (AMA)
2. The amount of the human patients' pleasure
3. The amount of the non-human patients' pleasure
4. The duration of the human patients' pleasure
5. The duration of the non-human patients' pleasure
6. The number of pleasured human patients
7. The number of pleasured non-human patients
8. The amount of the human patients' displeasure
9. The amount of the non-human patients' displeasure

10. The duration of the human patients' displeasure
11. The duration of the non-human patients' displeasure
12. The number of displeasured human patients
13. The number of displeasured non-human patients
14. The properties of the human patients
15. The properties of the non-human patients

Table 1 shows the possible values of these factors for both human and non-human patients.

TABLE 1. THE NET'S INPUTS AND POSSIBLE VALUES
Voluntariness, pleasure, displeasure, duration, number: Zero, Low, Average, High
Property: Neutral, Good, Evil

We designed a feed-forward backpropagation two-layer neural network. In the architecture of the ANN ethical classifier, three or four input units are allocated to each effective factor (input parameter) of the net, depending on the possible values of that parameter. For example, the amount of the human patient's pleasure requires four input units, one for each possible value of this parameter, whereas the human patient's property requires three input units. We used the bipolar sigmoid function as the transfer (activation) function for the hidden and output units of the net.

The value of each input is specified as 1 or 0. We use a < Zero, Low, Average, High > vector for the four-value parameters and a < Neutral, Evil, Good > vector for the three-value parameters. For example, if the parameter human patient's property has the value good, then the input vector for this parameter is equal to < 0, 0, 1 >. In practice we need an input vector with 58 elements (52 elements for the four-value parameters and 6 for the three-value parameters). In the architecture of our designed net we use four output units to show the ethical evaluation of an action. For example, if the ethical evaluation of an action is average and the output vector has the format < Zero, Low, Average, High >, then the output vector of the action is < 0, 0, 1, 0 >, which shows that the ethical evaluation of that action is average because the third element of the vector is 1 and the others are 0. To show the practicality of our approach, we trained the net with examples of different selling behaviors in an e-commerce domain. We assumed that the agent is an autonomous artificial agent and that the patient(s) can be an autonomous artificial agent or a human. We expect that, after training the net with training examples of different behaviors in the selling domain, the net can respond correctly at least to other examples (cases) that lie close to the training cases in that domain.
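As an illustration of this encoding, the short Python sketch below maps 15 symbolic parameter values onto the 58-element input vector and a four-element target vector. The ordering here follows the test-case vectors reported later in this section (the two three-value property parameters sit in positions 8 and 15), which is our assumption; the function names are ours as well.

FOUR_VALUE = ["Z", "L", "A", "H"]          # Zero, Low, Average, High
THREE_VALUE = ["N", "E", "G"]              # Neutral, Evil, Good
THREE_VALUE_POSITIONS = {7, 14}            # 0-based indices of the two property parameters

def one_hot(value, alphabet):
    vec = [0] * len(alphabet)
    vec[alphabet.index(value)] = 1
    return vec

def encode_case(symbols):
    """Encode 15 symbolic parameter values into the 58-element input vector
    (13 four-value parameters * 4 units + 2 three-value parameters * 3 units)."""
    assert len(symbols) == 15
    vec = []
    for i, s in enumerate(symbols):
        alphabet = THREE_VALUE if i in THREE_VALUE_POSITIONS else FOUR_VALUE
        vec.extend(one_hot(s, alphabet))
    return vec

def encode_target(level):
    """Encode an ethical level (Z/L/A/H) as the four-element output vector."""
    return one_hot(level, FOUR_VALUE)

# The first test case reported later: < H, H, L, L, A, H, H, G, Z, Z, Z, Z, Z, Z, N >
x = encode_case(list("HHLLAHHGZZZZZZN"))
assert len(x) == 58
print(encode_target("L"))   # -> [0, 1, 0, 0]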

TABLE 2. THE VALUES OF TRAINING CASES
(Columns 1-15 are the input parameters; the last column is the target ethical evaluation.)

Case |  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 | Target
Example A
  1  |  A  A  L  L  H  A  A  G  Z  Z  Z  Z  Z  Z  N |   L
  2  |  H  A  L  L  H  A  L  G  Z  Z  Z  Z  Z  Z  N |   L
  3  |  A  A  L  A  H  L  L  E  Z  Z  Z  Z  Z  Z  N |   A
  4  |  L  Z  Z  Z  L  L  L  E  L  L  L  A  L  L  G |   L
  5  |  H  Z  Z  Z  Z  Z  Z  N  A  L  L  H  A  L  E |   L
  6  |  H  H  A  L  H  H  H  E  Z  Z  Z  Z  Z  Z  N |   L
Example B
  7  |  H  H  H  L  L  L  A  G  Z  Z  Z  Z  Z  Z  N |   A
  8  |  H  H  H  L  L  L  A  G  Z  Z  Z  Z  Z  Z  N |   A
  9  |  H  H  H  L  L  L  A  E  Z  Z  Z  Z  Z  Z  N |   A
 10  |  A  H  L  L  Z  Z  Z  E  H  Z  Z  Z  A  L  G |   A
 11  |  A  H  L  A  L  A  A  G  Z  Z  Z  Z  Z  Z  N |   A
 12  |  H  Z  Z  Z  Z  Z  Z  N  A  A  H  L  L  L  E |   A
Example C
 13  |  L  A  L  L  A  H  A  G  Z  Z  Z  Z  Z  Z  N |   L
 14  |  L  A  L  L  H  A  L  G  Z  Z  Z  Z  Z  Z  N |   L
 15  |  A  Z  Z  Z  H  A  A  G  Z  Z  Z  Z  Z  Z  N |   L
 16  |  Z  A  L  A  H  H  L  E  Z  Z  Z  Z  Z  Z  N |   Z
 17  |  Z  A  L  A  H  H  L  G  Z  Z  Z  Z  Z  Z  N |   Z
 18  |  H  Z  Z  Z  Z  Z  Z  N  L  A  L  L  L  H  E |   A
Example D
 19  |  H  A  A  L  H  H  H  G  Z  Z  Z  Z  Z  Z  N |   L
 20  |  Z  L  L  L  A  H  L  E  Z  Z  Z  Z  Z  Z  N |   Z
 21  |  H  A  L  L  Z  Z  Z  G  Z  Z  Z  Z  Z  Z  N |   A
 22  |  H  A  L  L  Z  Z  Z  E  Z  Z  Z  Z  Z  Z  N |   A
 23  |  H  Z  Z  Z  Z  Z  Z  N  L  A  L  L  L  L  G |   H
 24  |  A  Z  Z  Z  Z  Z  Z  N  L  L  L  L  L  L  G |   L
Example E
 25  |  H  H  H  L  Z  Z  Z  G  L  L  L  Z  Z  Z  G |   H
 26  |  H  A  A  H  L  L  L  E  Z  Z  Z  Z  Z  Z  N |   A
 27  |  A  A  L  L  Z  Z  Z  E  Z  Z  Z  Z  Z  Z  N |   A
 28  |  A  A  L  L  L  L  L  E  Z  Z  Z  Z  Z  Z  N |   L
 29  |  L  A  A  L  Z  Z  Z  G  Z  Z  Z  Z  Z  Z  N |   L
 30  |  L  Z  Z  Z  Z  Z  Z  N  H  L  L  L  L  L  G |   L

For training the net we selected examples of selling ethics from [16]. By considering the net's input parameters and their different possible values, we can derive different training examples with different ethical evaluation levels from these examples. In all examples we assumed that, in the phase of pre-contract negotiations of a sale, the seller agent has obtained the customer's needs and profile, which helps the seller agent to select the best action according to its goal(s) and agenda (increasing its income in these examples).

Example A: The seller agent conceals the concrete details of item X from a customer in order to increase the chance of selling that item (not ethical).
Example B: This example is the negation of example A, i.e. the seller agent does not conceal any information about item X in order to increase the chance of selling that item (ethical).
Example C: In the phase of pre-contract negotiations, the seller agent understands that if any non-business expense is put on the account expenses of the customer, the customer will not notice it in the payment or buying phase, so it puts a non-business expense on the account expenses of the customer in order to increase its income (not ethical).
Example D: The seller agent divulges or does not divulge confidential information about one customer to another in order to facilitate a sale (not ethical and ethical, respectively).
Example E: The seller agent has a long-term plan for increasing its income through a policy of attracting more customers, so it provides the cheapest item that best fits the customer's need, even though selling that item may not yield considerable profit (ethical).

Table 2 shows the values of the different parameters of the training cases derived from the above examples. Columns 1-15 show the values of the effective parameters for the ethical evaluation of an action; the ethical evaluation of the action is placed in the last column. The values in each row are used as the input values and the target value of a training case for training the net. Each row of the table represents a training case, where the symbols 'Z' (Zero), 'L' (Low), 'A' (Average), 'H' (High), 'N' (Neutral), 'E' (Evil) and 'G' (Good) in each cell denote the values of the net's inputs and targets. Each symbol represents a four-element or three-element vector; for example, the value Average (symbol 'A') is equal to the vector < 0, 0, 1, 0 >. As an illustration, training case 6 shows that if the seller agent conceals the concrete details of item X from a customer in order to increase the chance of selling that item, where the seller agent is completely autonomous and its action gives a high pleasure for a short time to a small number of human patients with the history or property of "Evil", then the level of the ethical evaluation of the seller agent's action in this situation is low. The parameter of the property of the human/non-human patient is an implementation of the justice duty in Ross's theory of prima facie duties for the ethical assessment of an action. In training cases 16, 17 and 20 we cannot judge the behavior of the agent from the ethical aspect because it has no voluntariness or autonomy; the behavior of this kind of agent is pre-programmed into its structure, so it has no choice in each situation except the option that has been selected for it by its designer(s).

We simulated the ANN ethical classifier in MATLAB with 15 neurons in the hidden layer, Levenberg-Marquardt as the training algorithm and mean squared error (MSE) as the performance function. The net was trained in 79 epochs. The results showed that our net gives the correct responses for all training cases and a good approximation for cases that lie close to the training cases. In the remainder of this section we present four test cases that were evaluated by the net.

Test 1: The fully autonomous seller agent lies about the properties of item X to a customer (human patient). The net's inputs are: < H, H, L, L, A, H, H, G, Z, Z, Z, Z, Z, Z, N >. In practice, as previously mentioned, each symbol is replaced by the appropriate sequence of zeros and ones; for example, the symbol 'H' is replaced by < 0, 0, 0, 1 > and 'N' by < 1, 0, 0 >. The net's outputs are: < 0.0027, 0.9056, 0.0000, 0.0000 >, which show that the ethical evaluation of this action is low, since the output format of the net is < Zero, Low, Average, High >, where only one element should take the value 1 and the others 0. For example, if the ethical evaluation of an action is low, the corresponding vector is < 0, 1, 0, 0 >. The ethical evaluation of this test is low because the second element of the net's output is close to 1. If no element is close to 1, or if two or more elements are close to 1, then the net's outputs should be ignored because they cannot show the ethical evaluation of the action.

Test 2: The fully non-autonomous seller agent receives the item's code and quantity from a customer (human patient) and then computes the amount of the invoice. The net's inputs are: < Z, A, L, L, Z, Z, Z, E, Z, Z, Z, Z, Z, Z, N >. The net's outputs are: < 0.0130, 0.0000, 0.8905, 0.0000 >, which is not correct because the seller agent in test 2 has no autonomy or voluntariness, so its action cannot be evaluated from the ethical aspect. This error comes from the fact that the net has not been trained on examples close or similar to this test; the net's output would only be correct if the first element of the output vector were close or equal to 1. However, if we add the training cases illustrated in Table 3 to the set of training cases of example B and train the net again with the new collection of training cases, the output of the net changes to < 0.9999, 0.0000, 0.0001, 0.0000 >, which is correct. After adding these four training cases to the previous collection and retraining the net, the net's output for the first test changed to < 0.0000, 0.9902, 0.0072, 0.0000 >, which is likewise correct.

Test 3: The autonomous seller agent bribes a customer (human patient) to influence her purchase decision. The net's inputs are: < H, H, L, L, H, H, L, G, Z, Z, Z, Z, Z, Z, N >. The net's outputs are: < 0.0000, 0.9944, 0.0002, 0.0000 >, which show that the action is not ethical. This evaluation corresponds to ethicists' intuition.
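The MATLAB model itself is not listed in the paper; the sketch below is a rough Python equivalent, assuming scikit-learn's MLPRegressor with tanh hidden units (a bipolar-sigmoid analogue) and the L-BFGS solver in place of Levenberg-Marquardt, which scikit-learn does not offer. Only two illustrative rows of Table 2 are encoded here (the full 30-case table, plus the Table 3 cases, would be used in practice), and the interpret() helper applies the "exactly one output close to 1" rule described under Test 1.

import numpy as np
from sklearn.neural_network import MLPRegressor

FOUR = ["Z", "L", "A", "H"]           # Zero, Low, Average, High
THREE = ["N", "E", "G"]               # Neutral, Evil, Good
THREE_POS = {7, 14}                   # 0-based positions of the two property parameters

def one_hot(value, alphabet):
    vec = [0.0] * len(alphabet)
    vec[alphabet.index(value)] = 1.0
    return vec

def encode(symbols):
    """15 symbols -> 58-element input vector (13 * 4 + 2 * 3 units)."""
    out = []
    for i, s in enumerate(symbols):
        out += one_hot(s, THREE if i in THREE_POS else FOUR)
    return out

# Two illustrative training cases: case 6 (example A, target Low)
# and case 7 (example B, target Average), taken from Table 2.
X = np.array([encode("HHALHHHEZZZZZZN"),
              encode("HHHLLLAGZZZZZZN")])
Y = np.array([one_hot("L", FOUR), one_hot("A", FOUR)])

# Two-layer feed-forward net: 15 tanh hidden units, 4 output units.
net = MLPRegressor(hidden_layer_sizes=(15,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, Y)

def interpret(output, threshold=0.8):
    """Return the ethical level if exactly one output unit is close to 1, else None."""
    hits = [i for i, v in enumerate(output) if v >= threshold]
    return FOUR[hits[0]] if len(hits) == 1 else None

print(interpret(net.predict(X[:1])[0]))   # expected to reproduce the training target 'L'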

TABLE 3. MORE TRAINING CASES FOR EXAMPLE "B"

Case |  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 | Target
  1  |  Z  H  L  L  Z  Z  Z  E  Z  Z  Z  Z  Z  Z  N |   Z
  2  |  Z  A  L  L  Z  Z  Z  G  Z  Z  Z  Z  Z  Z  N |   Z
  3  |  Z  A  L  L  L  L  L  E  Z  Z  Z  Z  Z  Z  N |   Z
  4  |  Z  A  A  L  L  L  L  G  Z  Z  Z  Z  Z  Z  N |   Z

Test 4: The seller agent, which is not completely autonomous (it has no voluntariness), gives wrong information about item X, which is produced by producer Y, to an artificial buyer agent (non-human patient) in order to dissuade it from buying item X; the artificial buyer agent has a good property or history in the seller agent's view. The net's inputs are: < A, Z, Z, Z, Z, Z, Z, N, L, L, L, H, H, L, G >. The net's outputs are: < 0.0000, 1.0000, 0.0000, 0.0000 >, which show that the ethical evaluation of this action is low, which is correct.

VI. CONCLUSION

In this paper we designed a new architecture for an AMA based on the BDI-Agent model, in which an ANN is used to determine whether a behavior is ethically right or wrong. In designing the ANN ethical classifier, we considered various factors that can affect the ethical assessment of an action. As the results showed, this net can respond correctly to various cases that lie close to the trained cases. Using this new architecture, and providing sufficient training cases for the ANN ethical classifier, assures confidence in using autonomous artificial agents or robots that act on our behalf without immoral consequences.

REFERENCES

[1] M. Anderson and S. Anderson, "Ethical Healthcare Agents", Studies in Computational Intelligence, Springer, vol. 107, pp. 233-25, 2008.
[2] Vincent Wiegel, "SophoLab: Experimental Computational Philosophy", Ph.D. dissertation, Faculty of Technology, Policy and Management, Delft University of Technology, 2007.
[3] Vincent Wiegel and Jan van den Berg, "Combining Moral Theory, Modal Logic and MAS to Create Well-Behaving Artificial Agents", Social Robotics, Springer, vol. 1, 2009.
[4] Wendell Wallach and Colin Allen, "EthicALife: A New Field of Inquiry", AnAlifeX workshop, USA, 2006.
[5] Ryan Tonkens, "A Challenge for Machine Ethics", Minds and Machines, Springer, 2009.
[6] M. Anderson, S. Anderson, and C. Armen, "Toward Machine Ethics: Implementing Two Action-Based Ethical Theories", AAAI 2005 Fall Symposium on Machine Ethics, AAAI Press, pp. 1-16, 2005.
[7] Michael Anderson and Susan Leigh Anderson, "Machine Ethics: Creating an Ethical Intelligent Agent", AI Magazine, vol. 28, no. 4, 2007.
[8] C. Allen, I. Smit, and W. Wallach, "Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches", Ethics and Information Technology, vol. 7, pp. 149-155, 2006.
[9] A. S. Rao and M. P. Georgeff, "BDI Agents: From Theory to Practice", Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), San Francisco, USA, 1995.


[10] S. Pal and S. Shiu, Foundations of Soft Case-Based Reasoning, Wiley-Interscience, 2004.
[11] J. Kolodner, Case-Based Reasoning, Morgan Kaufmann, San Mateo, CA, 1993.
[12] J. Gips, "Towards the Ethical Robot", Android Epistemology, Cambridge, MA: MIT Press, pp. 243-252, 1995.
[13] Sabah S. Al-Fedaghi, "Typification-Based Ethics for Artificial Agents", Proceedings of the Second IEEE International Conference on Digital Ecosystems and Technologies, 2008.
[14] Matthew Keefer, "Moral Reasoning and Case-Based Approaches to Ethical Instruction in Science", in The Role of Moral Reasoning on Socioscientific Issues and Discourse in Science Education, Springer, 2003.
[15] M. Guarini, "Particularism and the Classification and Reclassification of Moral Cases", IEEE Intelligent Systems, vol. 21, no. 4, 2006.
[16] Barton A. Weitz, Stephen Bryon Castleberry, and John F. Tanner, Selling: Building Partnerships, McGraw-Hill, 2004.
[17] R. Rzepka and K. Araki, "What Statistics Could Do for Ethics? The Idea of Common Sense Processing Based Safety Valve", in Machine Ethics: Papers from the AAAI Fall Symposium, Technical Report FS-05-06, pp. 85-87, American Association of Artificial Intelligence, Menlo Park, CA, 2005.
[18] Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello, "Toward a General Logicist Methodology for Engineering Ethically Correct Robots", IEEE Intelligent Systems, vol. 21, no. 4, 2006.
[19] K. Arkoudas and S. Bringsjord, "Toward Ethical Robots via Mechanized Deontic Logic", in Machine Ethics: Papers from the AAAI Fall Symposium, Technical Report FS-05-06, American Association of Artificial Intelligence, Menlo Park, CA, 2005.
[20] J. van den Hoven and G.-J. Lokhorst, "Deontic Logic and Computer-Supported Computer Ethics", Cyberphilosophy: The Intersection of Computing and Philosophy, 2002.
[21] Ali Reza Honarvar and Nasser Ghasem-Aghaee, "Casuist BDI-Agent: A New Extended BDI Architecture with the Capability of Ethical Reasoning", Artificial Intelligence and Computational Intelligence: International Conference, AICI 2009, Shanghai, China, Springer-Verlag, 2009.
[22] Wilhelmiina Hämäläinen, "Do Computers Have Conscience? Implementing Artificial Morality", available at http://cs.joensuu.fi/~whamalai/articles/ethics.pdf and http://citeseerx.ist.psu.edu.
[23] Thomas M. Powers, "Prospects for a Kantian Machine", IEEE Intelligent Systems, vol. 21, no. 4, 2006.
[24] Jean-Gabriel Ganascia, "Using Non-Monotonic Logic to Model Machine Ethics", Seventh International Computer Ethics Conference, University of San Diego, USA, 2007.
[25] Jean-Gabriel Ganascia, "Modelling Ethical Rules of Lying with Answer Set Programming", Ethics and Information Technology, vol. 9, no. 1, 2007.