Bab 6: Logical Agents

The story so far...
Simple situation:
+ agent has to deal with a SINGLE goal (e.g., get to B; clean house; ...)
+ agent (designer) has an accurate model of the world
+ agent can determine its state by sensing
+ agent's actions are deterministic
+ world does not change while the agent is thinking
+ ...
Designer only needs one "program" (e.g., a heuristic function, ...)
Agent does not need an internal model of world state or task (goal)
Here: simple Search-based Agents are adequate

But many situations require more...
World not always accessible
... i.e., cannot simply "read off" the state to determine the best action

"perceptual aliasing": so we need an internal model to help

Have diverse "goals" or "worlds" (TAXI: different destinations, different traffic patterns), so we need an easy way to change agents (rather than "re-programming" each time)

Knowledge-based approach
To be effective, an agent may need to know
+ the current state of the world
+ unseen properties of the world
+ how the world evolves
+ what it wants to achieve
+ what its actions do in various situations
so we need to go beyond simple Search-based Agents.

Current Focus:
  Deterministic
  Discrete
  World is Known (but state may not be)
  Static (initially)
Basic search techniques are still used, but perhaps wrt other "spaces"


Example: A Wumpus world

Sensors: Stench, Breeze, Glitter, Bump, Scream
         Sense = [±Stench, ±Breeze, ±Glitter, ±Bump, ±Scream]
Actions: Forward, Turn Right, Turn Left, Shoot, Grab, Climb, Release
Goals:   Get gold back to start without entering a pit or wumpus square
         +1,000 for climbing out with the gold
         -1 for each action taken
         -10,000 for dying

Environment
• Stench in every room adjacent to the Wumpus.
• Breeze in every room adjacent to a pit.
• Glitter in the room with the gold.
• If the agent Shoots while facing the Wumpus, the arrow kills the Wumpus and the agent hears a Scream. (Only 1 arrow.)
• If the agent Grabs while in the room with the gold, it will pick up the gold.
• If the agent Climbs while in the start square, it will leave the maze (and end the game).
• The agent can travel by moving Forward; if it moves into a wall, it will feel a Bump.
• The agent can Turn Right or Turn Left.
• The agent dies on entering a room with a live Wumpus, or a pit.
• Releasing drops the gold in the same square.

Wumpus world characterization
Is the world deterministic?     Yes, outcomes exactly specified
Is the world fully accessible?  No, only local perception
Is the world static?            Yes, Wumpus and pits do not move
Is the world discrete?          Yes


Exploring a wumpus world

In (a): (Location = [1, 1])
Sense = [-Stench, -Breeze, -Glitter, -Bump, -Scream]
Can reason...
  As -Stench, Wumpus ∉ { [1,2], [2,1] }
  As -Breeze, Pit ∉ { [1,2], [2,1] }
Conclude: [1,2] is "safe"; [2,1] is "safe"
Action = Forward (to [2,1])

In (b): (Location = [2, 1])
Sense = [-Stench, +Breeze, -Glitter, -Bump, -Scream]
Can reason:
  As -Stench, Wumpus ∉ { [1,1], [3,1], [2,2] }
  As +Breeze, Pit ∈ { [1,1], [3,1], [2,2] }
  Note: Pit NOT in [1,1] (agent was there, did not fall in)
Only GUARANTEED safe move is...
Action = "Return to [1,1]" (Turn Left, Turn Left, Forward)
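The "mark neighbours safe" step of this reasoning can be sketched in a few lines. This is a minimal illustration, not the lecture's agent; the function names and the percept dictionary are assumptions.

```python
def neighbours(x, y, size=4):
    """4-connected neighbours inside a size x size grid (1-indexed)."""
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(i, j) for i, j in cand if 1 <= i <= size and 1 <= j <= size]

def safe_from_percept(loc, percept):
    """If neither Stench nor Breeze is sensed at loc, no adjacent square
    can hold the Wumpus or a pit, hence every adjacent square is safe."""
    if not percept["Stench"] and not percept["Breeze"]:
        return set(neighbours(*loc))
    return set()

# In (a): at [1,1] with [-Stench, -Breeze, ...]
print(sorted(safe_from_percept((1, 1), {"Stench": False, "Breeze": False})))
# -> [(1, 2), (2, 1)]
```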


In (c): (Location = [1, 2])
Sense = [+Stench, -Breeze, -Glitter, -Bump, -Scream]
Can reason:
  As +Stench, Wumpus ∈ { [1,1], [1,3], [2,2] }
  As -Breeze, Pit ∉ { [1,1], [1,3], [2,2] }
  Note: Wumpus NOT in [1,1] (agent was there, not eaten)
        Wumpus NOT in [2,2] (else +Stench in [2,1])
  So the Wumpus is in [1,3]
Only unvisited adjacent OK square = [2,2]
Action = "Go to [2,2]" (Turn Right, Forward)

Challenges
Need to encode what is known / observed:
+ partial information (we don't know where the Wumpus is exactly, but it is constrained to {...})
+ information obtained at different times, from different locations
... and use it to reach appropriate conclusions. E.g.:
  As +Stench @ [1,2], Wumpus ∈ { [1,1], [1,3], [2,2] }
  As the agent was earlier in [1,1] but not eaten, Wumpus NOT in [1,1]
  As -Stench in [2,1], Wumpus NOT in [2,2]
  So the Wumpus is in [1,3]
We need to use "LOGIC" for this.
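The case analysis above amounts to intersecting constraint sets. A minimal sketch (the variable names are illustrative, not from the lecture):

```python
# Each percept constrains the Wumpus's possible locations; combine them.
stench_at_12 = {(1, 1), (1, 3), (2, 2)}    # +Stench @ [1,2]: Wumpus adjacent
visited_safely = {(1, 1), (2, 1)}          # agent survived these squares
no_stench_at_21 = {(1, 1), (3, 1), (2, 2)} # -Stench @ [2,1]: Wumpus not adjacent

wumpus_candidates = stench_at_12 - visited_safely - no_stench_at_21
print(wumpus_candidates)  # {(1, 3)}
```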

Logic/Knowledge-Based Approach
Represent knowledge as declarative statements.
Use an inference / reasoning mechanism to
• derive "new" (implicit) information
• make decisions
Key problem: we need to express partial knowledge about the current state.
Solution: use an intensional representation based on formal logic
(i.e., a logical language (propositional / first-order) combined with a logical inference mechanism).
Close to human thought?? ... but it appears a reasonable strategy for machines.

Reasoning Agents
• Knowledge base (KB) = set of sentences in a formal language (an abstract data type)
• Declarative approach to building an agent (or other system):
  1. Tell it what it needs to know: Tell(KB, Fact) records Fact into the KB.
  2. Then it can ask itself what to do: Ask(KB, Query) asks whether Query is true wrt the KB.
  3. Answers should follow from the KB; Ask may return information (e.g., an "action") specifying when the query is true.
• Agents can be viewed at the knowledge level, i.e., what they know, regardless of how it is implemented
• ...or at the implementation level, i.e., the data structures in the KB and the algorithms that manipulate them
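The Tell/Ask interface can be sketched as a toy class. Here "sentences" are just atomic facts, so Ask reduces to set membership; this is a placeholder for a real inference procedure, and the design is an assumption, not the lecture's.

```python
class KB:
    """Toy knowledge base: Tell records sentences, Ask checks membership."""
    def __init__(self):
        self.sentences = set()

    def tell(self, fact):
        self.sentences.add(fact)

    def ask(self, query):
        # A real KB would run inference here; this one only looks up facts.
        return query in self.sentences

kb = KB()
kb.tell(("Smelly", (1, 2)))
print(kb.ask(("Smelly", (1, 2))))  # True
print(kb.ask(("Breezy", (1, 2))))  # False
```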


A simple knowledge-based agent

function KB-AGENT(percept) returns an action
  static: KB, a knowledge base
          t, a counter, initially 0, indicating time

  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action

Challenge: the "world" is not in the computer; the computer holds only a "representation" of the world.
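A direct transcription of the pseudocode into Python, assuming a KB object with tell/ask methods. The sentence-building helpers are placeholders (the pseudocode leaves them abstract), so their tuple representation is an assumption.

```python
def make_percept_sentence(percept, t):
    # Placeholder encoding of "percept happened at time t".
    return ("Percept", percept, t)

def make_action_query(t):
    # Placeholder query: "what action should be taken at time t?"
    return ("Action?", t)

def make_action_sentence(action, t):
    # Placeholder encoding of "action was taken at time t".
    return ("Action", action, t)

class KBAgent:
    def __init__(self, kb):
        self.kb = kb   # any object with tell(sentence) / ask(query)
        self.t = 0     # time counter, initially 0

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))
        action = self.kb.ask(make_action_query(self.t))
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action
```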

The computer only has sentences (hopefully about the world); sensors can provide some grounding.

FOLLOWS: "If Fact1 is true of the world, then Fact2 is also true"
  ... a property of the WORLD.

Wrt the representation, given...
  Sentence1 represents Fact1
  Sentence2 represents Fact2

If the representation includes Sentence1, it should also include Sentence2.
If you believe Sentence1, you MUST believe Sentence2.
This is called "entailment", written Sentence1 ╞ Sentence2.

Meaning of Entailment

KB ╞ K means: all models of KB are models of K, i.e., Models(KB) ⊆ Models(K).

If we observe each βi ∈ KB, the real world is a model of each βi, so the real world is a model of KB: RW ∈ Models(KB).

As Models(KB) ⊆ Models(K), the real world is a model of K; i.e., K must hold.

What is logic?
Logic involves three important aspects:

Syntax: what do "sentences" look like?

Semantics: what does a sentence mean? How is it linked to the world?

Proof theory: what else can be derived? (find implicit information)
  "Pushing symbols"; we want to derive all-and-only the "entailed" information.

How is logic used to represent the world?
Preferably it should be: expressive, concise, unambiguous, independent of context, and have an effective procedure to derive implied (implicit) information.
It is not easy to meet all these goals... propositional / first-order logic meet some.
Must be able to handle incompleteness / uncertainty.
Contrast with programming languages...
From the logical expressions used to represent knowledge about the real world, an agent is expected to be able to:
  Represent states, actions, etc.
  Incorporate new percepts
  Update internal representations of the world
  Deduce hidden properties of the world
  Deduce appropriate actions
Next we study the various logic concepts commonly used for representing the world in the KB.


Propositional Logic: Syntax
Propositional logic is the simplest logic; it illustrates the basic ideas.
The proposition symbols P1, P2, etc. are sentences
If S is a sentence, ¬S is a sentence
If S1 and S2 are sentences, S1 ∧ S2 is a sentence
If S1 and S2 are sentences, S1 ∨ S2 is a sentence
If S1 and S2 are sentences, S1 ⇒ S2 is a sentence
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence

Semantics
Each model specifies true/false for each proposition symbol, e.g.

  A = True    B = True    C = False

Rules for evaluating truth with respect to a model m:
  ¬S       is true  iff  S is false
  S1 ∧ S2  is true  iff  S1 is true and S2 is true
  S1 ∨ S2  is true  iff  S1 is true or S2 is true
  S1 ⇒ S2  is true  iff  S1 is false or S2 is true
  S1 ⇒ S2  is false iff  S1 is true and S2 is false
  S1 ⇔ S2  is true  iff  S1 ⇒ S2 is true and S2 ⇒ S1 is true
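These evaluation rules translate directly into a recursive function. The nested-tuple sentence representation below is chosen for illustration; it is not part of the lecture.

```python
def holds(s, m):
    """Truth of sentence s in model m (a dict: symbol -> bool)."""
    if isinstance(s, str):          # a proposition symbol
        return m[s]
    op = s[0]
    if op == "not":
        return not holds(s[1], m)
    if op == "and":
        return holds(s[1], m) and holds(s[2], m)
    if op == "or":
        return holds(s[1], m) or holds(s[2], m)
    if op == "=>":                  # S1 => S2: false only when S1 true, S2 false
        return (not holds(s[1], m)) or holds(s[2], m)
    if op == "<=>":                 # S1 <=> S2: both implications hold
        return holds(s[1], m) == holds(s[2], m)
    raise ValueError("unknown connective: " + op)

m = {"A": True, "B": True, "C": False}
print(holds(("=>", "C", "A"), m))   # True (C is false)
print(holds(("and", "A", "C"), m))  # False
```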

Propositional inference: Enumeration method
Let F = A ∨ B and KB = (A ∨ C) ∧ (B ∨ ¬C).
Is it the case that KB ╞ F?
Check all possible models: F must be true wherever KB is true.

  A      B      C      A∨C    B∨¬C   KB
  False  False  False
  False  False  True
  False  True   False
  False  True   True
  True   False  False
  True   False  True
  True   True   False
  True   True   True

Propositional inference: Solution

  A      B      C      A∨C    B∨¬C   KB     F=A∨B
  False  False  False  False  True   False  False
  False  False  True   True   False  False  False
  False  True   False  False  True   False  True
  False  True   True   True   True   True   True
  True   False  False  True   True   True   True
  True   False  True   True   False  False  True
  True   True   False  True   True   True   True
  True   True   True   True   True   True   True

F is true in every row where KB is true, so KB ╞ F.
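The enumeration check can be done by brute force over all eight models. A minimal sketch, with KB and F written as Python predicates over a model dictionary (this encoding is an assumption):

```python
from itertools import product

def entails(kb, f, symbols):
    """KB |= F iff F holds in every model where KB holds."""
    models = [dict(zip(symbols, vals))
              for vals in product([False, True], repeat=len(symbols))]
    return all(f(m) for m in models if kb(m))

KB = lambda m: (m["A"] or m["C"]) and (m["B"] or not m["C"])
F  = lambda m: m["A"] or m["B"]

print(entails(KB, F, ["A", "B", "C"]))  # True
```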

Normal forms
Other approaches to inference use syntactic operations on sentences, often expressed in standardized forms.

Conjunctive Normal Form (CNF -- universal): a conjunction of disjunctions of literals ("clauses")
  e.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)

Disjunctive Normal Form (DNF -- universal): a disjunction of conjunctions of literals ("terms")
  e.g., (A ∧ B) ∨ (A ∧ ¬C) ∨ (A ∧ ¬D) ∨ (¬B ∧ ¬C) ∨ (¬B ∧ ¬D)

Horn Form (restricted): a conjunction of Horn clauses (clauses with at most 1 positive literal)
  e.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)
  Often written as a set of implications: B ⇒ A and (C ∧ D) ⇒ B
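The "at most one positive literal" condition defining a Horn clause is easy to check mechanically. A sketch, using an assumed representation where a clause is a set of (symbol, positive?) pairs:

```python
def is_horn(clause):
    """A clause is Horn iff it contains at most one positive literal."""
    positives = sum(1 for _symbol, positive in clause if positive)
    return positives <= 1

print(is_horn({("A", True), ("B", False)}))                # A ∨ ¬B: Horn
print(is_horn({("B", True), ("C", False), ("D", False)}))  # B ∨ ¬C ∨ ¬D: Horn
print(is_horn({("A", True), ("B", True)}))                 # A ∨ B: not Horn
```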

Validity and Satisfiability
A sentence is valid if it is true in all models
  e.g., A ∨ ¬A,   A ⇒ A,   (A ∧ (A ⇒ B)) ⇒ B
Validity is connected to inference via the Deduction Theorem:
  KB ╞ F if and only if (KB ⇒ F) is valid


A sentence is satisfiable if it is true in some model
  e.g., A ∨ B,   C
A sentence is unsatisfiable if it is true in no models
  e.g., A ∧ ¬A
Satisfiability is connected to inference via the following:
  KB ╞ F if and only if (KB ∧ ¬F) is unsatisfiable
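This reduction to unsatisfiability can be demonstrated by brute force, reusing the earlier example (KB and F encoded as Python predicates, an assumed representation):

```python
from itertools import product

KB = lambda m: (m["A"] or m["C"]) and (m["B"] or not m["C"])
F  = lambda m: m["A"] or m["B"]

models = [dict(zip("ABC", vals)) for vals in product([False, True], repeat=3)]

# KB ∧ ¬F is unsatisfiable iff no model makes KB true and F false.
unsat = not any(KB(m) and not F(m) for m in models)
print(unsat)  # True, so KB |= F
```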

Proof methods
Proof methods divide into (roughly) two kinds:
Model checking
  truth-table enumeration (sound and complete for propositional logic)
  heuristic search in model space (sound but incomplete)
Application of inference rules
  legitimate (sound) generation of new sentences from old
  Proof = a sequence of inference-rule applications
  Can use inference rules as operators in a standard search algorithm

Summary
Logical agents apply inference to a knowledge base to derive new information and make decisions.
Basic concepts of logic:
  syntax: formal structure of sentences
  semantics: truth of sentences wrt models
  entailment: necessary truth of one sentence given another
  inference: deriving sentences from other sentences
  soundness: derivations produce only entailed sentences
  completeness: derivations can produce all entailed sentences
The Wumpus world requires the ability to represent partial and negated information, reason by cases, etc.
Propositional logic suffices for some of these tasks.
The truth-table method is sound and complete for propositional logic.

First Order Logic (FOL)
Basic elements:
  Constants:    KingJohn; 2; UCB; etc.
  Predicates:   Brother; >; etc.
  Functions:    Sqrt; LeftLegOf; etc.
  Variables:    x; y; a; b; etc.
  Connectives:  ∧, ∨, ¬, ⇒, ⇔
  Equality:     =
  Quantifiers:  ∀, ∃


Atomic sentences
Atomic sentence = predicate(term1, ..., termn) or term1 = term2
Term = function(term1, ..., termn) or constant or variable
e.g., Brother(KingJohn, RichardTheLionheart)
      >(Length(LeftLegOf(Richard)), Length(LeftLegOf(KingJohn)))

Complex sentences
Complex sentences are made from atomic sentences using connectives:
  ¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2
E.g., Sibling(KingJohn, Richard) ⇒ Sibling(Richard, KingJohn)
      >(1, 2) ∨ ≤(1, 2)
      >(1, 2) ∧ ¬>(1, 2)

Truth in first-order logic
• Sentences are true with respect to a model and an interpretation
• A model contains objects and relations among them
• An interpretation specifies referents for
    constant symbols → objects
    predicate symbols → relations
    function symbols → functional relations
• An atomic sentence predicate(term1, ..., termn) is true iff the objects referred to by term1, ..., termn are in the relation referred to by predicate

  relations: sets of tuples of objects
  functional relations: tuples of objects plus one "value" object

Universal quantification ∀
Example: Everyone at Trisakti is smart:
  ∀x At(x, Trisakti) ⇒ Smart(x)
∀x P is equivalent to the conjunction of the instantiations of P:
  (At(Ali, Trisakti) ⇒ Smart(Ali))
  ∧ (At(Saman, Trisakti) ⇒ Smart(Saman))
  ∧ (At(BomBom, Trisakti) ⇒ Smart(BomBom))
  ∧ ...


Typically, ⇒ is the main connective with ∀.

Common mistake: using ∧ as the main connective with ∀:
  ∀x At(x, Trisakti) ∧ Smart(x)
means "Everyone is at Trisakti and everyone is smart".

Existential quantification ∃
Someone at ITB is smart:
  ∃x At(x, ITB) ∧ Smart(x)
∃x P is equivalent to the disjunction of the instantiations of P:
  (At(Ali, ITB) ∧ Smart(Ali))
  ∨ (At(Abdul, ITB) ∧ Smart(Abdul))
  ∨ (At(Amin, ITB) ∧ Smart(Amin))
  ∨ ...
Typically, ∧ is the main connective with ∃.
Common mistake: using ⇒ as the main connective with ∃:
  ∃x At(x, ITB) ⇒ Smart(x)
is true if there is anyone who is not at ITB.

Properties of quantifiers
∀x ∀y is the same as ∀y ∀x
∃x ∃y is the same as ∃y ∃x
∃x ∀y is not the same as ∀y ∃x
  ∃x ∀y Loves(x, y): "There is a person who loves everyone in the world"
  ∀y ∃x Loves(x, y): "Everyone in the world is loved by at least one person"

Quantifier duality: each quantifier can be expressed using the other
  ∀x Likes(x, IceCream)  ≡  ¬∃x ¬Likes(x, IceCream)
  ∃x Likes(x, IceCream)  ≡  ¬∀x ¬Likes(x, IceCream)
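Over a finite domain, this duality can be checked directly, with all()/any() standing in for ∀/∃. The domain and predicate below are made up for illustration:

```python
# ∀x P(x) ≡ ¬∃x ¬P(x), checked over a finite domain.
domain = [0, 1, 2, 3]
likes_ice_cream = lambda x: x >= 0   # an arbitrary predicate

forall = all(likes_ice_cream(x) for x in domain)
not_exists_not = not any(not likes_ice_cream(x) for x in domain)
print(forall == not_exists_not)  # True, for any predicate and domain
```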


Examples
Brothers are siblings:
  ∀x, y Brother(x, y) ⇒ Sibling(x, y)
"Sibling" is symmetric:
  ∀x, y Sibling(x, y) ⇔ Sibling(y, x)
One's mother is one's female parent:
  ∀x, y Mother(x, y) ⇔ (Female(x) ∧ Parent(x, y))
A first cousin is a child of a parent's sibling:
  ∀x, y FirstCousin(x, y) ⇔ ∃p, ps Parent(p, x) ∧ Sibling(ps, p) ∧ Parent(ps, y)

Equality
term1 = term2 is true under a given interpretation if and only if term1 and term2 refer to the same object.
e.g., 1 = 2 and ∀x ×(Sqrt(x), Sqrt(x)) = x are satisfiable; 2 = 2 is valid.
e.g., definition of (full) Sibling in terms of Parent:
  ∀x, y Sibling(x, y) ⇔
    [¬(x = y) ∧ ∃m, f ¬(m = f) ∧ Parent(m, x) ∧ Parent(f, x) ∧ Parent(m, y) ∧ Parent(f, y)]

Interacting with FOL KBs
Suppose a wumpus-world agent is using an FOL KB and perceives a smell and a breeze (but no glitter) at t = 5:
  Tell(KB, Percept([Smell, Breeze, None], 5))
  Ask(KB, ∃a Action(a, 5))
I.e., does the KB entail any particular actions at t = 5?
Answer: Yes, {a/Shoot} (a substitution, or binding list)

Given a sentence S and a substitution σ, Sσ denotes the result of plugging σ into S, e.g.,
  S = Smarter(x, y)
  σ = {x/Aminah, y/Ali}
  Sσ = Smarter(Aminah, Ali)
Ask(KB, S) returns some/all σ such that KB ╞ Sσ
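Applying a substitution to a term can be sketched in a few lines, with terms represented as nested tuples (functor, arg1, ...); this representation is an assumption for illustration.

```python
def substitute(term, sigma):
    """Apply binding list sigma (dict: variable -> value) to a term."""
    if isinstance(term, tuple):  # compound term: substitute into arguments
        return (term[0],) + tuple(substitute(a, sigma) for a in term[1:])
    return sigma.get(term, term)  # variable (bound) or constant (unchanged)

S = ("Smarter", "x", "y")
sigma = {"x": "Aminah", "y": "Ali"}
print(substitute(S, sigma))  # ('Smarter', 'Aminah', 'Ali')
```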

Knowledge base for the wumpus world
Perception:
  ∀b, g, t Percept([Smell, b, g], t) ⇒ Smelt(t)
  ∀s, b, t Percept([s, b, Glitter], t) ⇒ AtGold(t)
Reflex:
  ∀t AtGold(t) ⇒ Action(Grab, t)


Reflex with internal state: do we have the gold already?
  ∀t AtGold(t) ∧ ¬Holding(Gold, t) ⇒ Action(Grab, t)
Holding(Gold, t) cannot be observed.

Deducing hidden properties
Properties of locations:
  ∀loc, t At(Agent, loc, t) ∧ Smelt(t) ⇒ Smelly(loc)
  ∀loc, t At(Agent, loc, t) ∧ Breeze(t) ⇒ Breezy(loc)

Squares are breezy near a pit:
Diagnostic rules (infer cause from effect):
  ∀y Breezy(y) ⇒ ∃x Pit(x) ∧ Adjacent(x, y)
Causal rules (infer effect from cause):
  ∀x, y Pit(x) ∧ Adjacent(x, y) ⇒ Breezy(y)

Neither of these is complete; e.g., the causal rule doesn't say whether squares far away from pits can be breezy.
Definition for the Breezy predicate:
  ∀y Breezy(y) ⇔ [∃x Pit(x) ∧ Adjacent(x, y)]

Keeping track of change
Facts hold in situations, rather than eternally
  e.g., Holding(Gold, Now) rather than just Holding(Gold).
Situation calculus is one way to represent change in FOL (by adding a situation argument to each non-eternal predicate)
  e.g., Now in Holding(Gold, Now) denotes a situation.
Situations are connected by the Result function:
  Result(a, s) is the situation that results from doing a in s.

Describing actions
"Effect" axiom: describes changes due to an action
  ∀s AtGold(s) ⇒ Holding(Gold, Result(Grab, s))
"Frame" axiom: describes non-changes due to an action
  ∀s HaveArrow(s) ⇒ HaveArrow(Result(Grab, s))
Frame problem: find an elegant way to handle non-change
  (a) representation: avoid frame axioms
  (b) inference: avoid repeated "copy-overs" to keep track of state


Qualification problem: true descriptions of real actions require endless caveats (what if the gold is slippery, or nailed down, or ...)
Ramification problem: real actions have many secondary consequences (what about the dust on the gold, wear and tear on gloves, ...)

Successor-state axioms solve the representational frame problem.
Each axiom is "about" a predicate (not an action):
  P true afterwards ⇔ [an action made P true ∨ P was true already and no action made P false]
For holding the gold:
  ∀a, s Holding(Gold, Result(a, s)) ⇔
    [(a = Grab ∧ AtGold(s)) ∨ (Holding(Gold, s) ∧ a ≠ Release)]

Making plans
Initial conditions in KB:
  At(Agent, [1, 1], S0)
  At(Gold, [1, 2], S0)
Query: Ask(KB, ∃s Holding(Gold, s))
  i.e., in what situation will I be holding the gold?
Answer: {s/Result(Grab, Result(Forward, S0))}
  i.e., go forward and then grab the gold.
This assumes that the agent is interested in plans starting at S0 and that S0 is the only situation described in the KB.

Making plans: A better way
Represent plans as action sequences [a1, a2, ..., an].
PlanResult(p, s) is the result of executing p in s.
Then the query Ask(KB, ∃p Holding(Gold, PlanResult(p, S0)))
has the solution {p/[Forward, Grab]}.
Definition of PlanResult in terms of Result:
  ∀s PlanResult([], s) = s
  ∀a, p, s PlanResult([a|p], s) = PlanResult(p, Result(a, s))
Planning systems are special-purpose reasoners designed to do this type of inference more efficiently than a general-purpose reasoner.
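The recursive definition of PlanResult translates directly into code. In this sketch Result(a, s) is kept symbolic as a nested tuple so the final situation term can be inspected; the representation is an assumption.

```python
def result(a, s):
    """Symbolic situation term Result(a, s)."""
    return ("Result", a, s)

def plan_result(plan, s):
    if not plan:                 # PlanResult([], s) = s
        return s
    a, rest = plan[0], plan[1:]  # PlanResult([a|p], s) = PlanResult(p, Result(a, s))
    return plan_result(rest, result(a, s))

print(plan_result(["Forward", "Grab"], "S0"))
# ('Result', 'Grab', ('Result', 'Forward', 'S0'))
```

Note that the actions nest inside-out: the first action ends up innermost, matching the answer {s/Result(Grab, Result(Forward, S0))} from the previous query.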
