UNIT 3 KNOWLEDGE REPRESENTATION


First Order Predicate Logic – Prolog Programming – Unification – Forward Chaining – Backward Chaining – Resolution – Knowledge Representation – Ontological Engineering – Categories and Objects – Events – Mental Events and Mental Objects – Reasoning Systems for Categories – Reasoning with Default Information.

First-Order Logic

 

Propositional logic is a declarative language because its semantics is based on a truth relation between sentences and possible worlds. It also has sufficient expressive power to deal with partial information, using disjunction and negation.

First-Order Logic is a logic which is sufficiently expressive to represent a good deal of our commonsense knowledge.

·                It either includes or forms the foundation of many other representation languages.

·                It is also called First-Order Predicate Calculus.

·                It is abbreviated as FOL or FOPC.

FOL adopts the foundation of propositional logic with all its advantages to build a more expressive logic on that foundation, borrowing representational ideas from natural language while avoiding its drawbacks.

The syntax of natural language contains elements such as:

 

1.     Nouns and noun phrases that refer to objects (squares, pits, wumpuses)

2.     Verbs and verb phrases that refer to relations among objects (is breezy, is adjacent to)

 

Some of these relations are functions: relations in which there is only one “value” for a given “input”. Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contains:

Objects: people, houses, numbers, colors, baseball games, wars, …

Relations: red, round, prime, brother of, bigger than, part of, comes between, …

Functions: father of, best friend, one more than, plus, …


3.1           SPECIFY THE SYNTAX OF FIRST-ORDER LOGIC IN BNF FORM

 

The domain of a model is the set of objects or domain elements it contains. The domain is required to be nonempty—every possible world must contain at least one object. Figure 8.2 shows a model with five objects: Richard the Lionheart, King of England from 1189 to 1199; his younger brother, the evil King John, who ruled from 1199 to 1215; the left legs of Richard and John; and a crown. The objects in the model may be related in various ways. In the figure, Richard and John are brothers. Formally speaking, a relation is just the set of tuples of objects that are related. (A tuple is a collection of objects arranged in a fixed order and is written with angle brackets surrounding the objects.) Thus, the brotherhood relation in this model is the set


 

 

Figure 3.1 First-Order Logic in BNF Form

 

The crown is on King John’s head, so the “on head” relation contains just one tuple, ⟨the crown, King John⟩. The “brother” and “on head” relations are binary relations — that is, they relate pairs of objects. Certain kinds of relationships are best considered as functions, in that a given object must be related to exactly one object in this way. For example, each person has one left leg, so the model has a unary “left leg” function that includes the following mappings:

 



The five objects are,

 

       Richard the Lionheart
       His younger brother, the evil King John
       The left legs of Richard and John
       A crown

·                The objects in the model may be related in various ways. In the figure, Richard and John are brothers.

·                Formally speaking, a relation is just the set of tuples of objects that are related.

 

·                A tuple is a collection of Objects arranged in a fixed order and is written with angle brackets surrounding the objects.

·                Thus, the brotherhood relation in this model is the set {(Richard the Lionheart, King John),(King John, Richard the Lionheart)}

·                The crown is on King John’s head, so the “on head” relation contains just one tuple, (the crown, King John).

o The relation can be a binary relation relating pairs of objects (Ex:- “Brother”) or a unary relation representing a property of a single object (Ex:- “Person”, which holds of both Richard and John)

Certain kinds of relationships are best considered as functions that relates an object to exactly one object.

       For Example:- each person has one left leg, so the model has a unary “left leg” function that includes the following mappings:

(Richard the Lionheart) → Richard’s left leg
(King John) → John’s left leg
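The model just described can be written down as plain data structures. The following Python sketch (the names are illustrative, not from the text) represents the domain as a set, relations as sets of tuples, and functions as dictionaries, so that each function input maps to exactly one value:

```python
# A sketch of the five-object model: domain as a set, relations as sets of
# tuples, functions as dictionaries (each input maps to exactly one value).
domain = {"Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "Crown"}

relations = {
    "Brother": {("Richard", "John"), ("John", "Richard")},  # binary relation
    "OnHead":  {("Crown", "John")},                         # just one tuple
    "Person":  {("Richard",), ("John",)},                   # unary relation
}

functions = {
    # the unary "left leg" function: exactly one value per input
    "LeftLeg": {"Richard": "RichardsLeftLeg", "John": "JohnsLeftLeg"},
}

# A relation holds of some objects iff their tuple is in the set.
print(("Richard", "John") in relations["Brother"])  # True
print(functions["LeftLeg"]["John"])                 # JohnsLeftLeg
```

Representing a function as a dictionary rather than a set of tuples captures the “only one value for a given input” property directly.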

 

       Symbols and Interpretations:

 

       The basic syntactic elements of first-order logic are the symbols that stand for objects, relations and functions

       Kinds of Symbols

 

       The symbols come in three kinds namely,

 

       Constant Symbols, standing for objects (Ex:- Richard)
       Predicate Symbols, standing for relations (Ex:- King)
       Function Symbols, standing for functions (Ex:- LeftLeg)


o  Symbols will begin with uppercase letters

o  The choice of names is entirely up to the user

o  Each predicate and function symbol comes with an arity

o  Arity fixes the number of arguments.

 

       The semantics must relate sentences to models in order to determine truth.

 

       To do this, an interpretation is needed specifying exactly which objects, relations and functions are referred to by the constant, predicate and function symbols.

       One possible interpretation, called the intended interpretation, is as follows:

       Richard refers to Richard the Lion heart and John refers to the evil King John.

 

       Brother refers to the brotherhood relation, that is the set of tuples of objects given in equation {(Richard the Lionheart, King John),(King John, Richard the Lionheart)}

       On Head refers to the “on head” relation that holds between the crown and King John;

Person, King and Crown refer to the set of objects that are persons, kings and crowns.

 

       LeftLeg refers to the “left leg” function, that is, the mapping {(Richard the Lionheart) → Richard’s left leg, (King John) → John’s left leg}

A complete description from the formal grammar is as follows



Term: A term is a logical expression that refers to an object. Constant symbols are therefore terms, but it is not always convenient to have a distinct symbol to name every object. For example, in English we might use the expression “King John’s left leg” rather than giving a name to his leg. This is what function symbols are for: instead of using a constant symbol, we use LeftLeg(John). The formal semantics of terms is straightforward. Consider a term f(t1, . . . , tn). The function symbol f refers to some function in the model.

Atomic sentences: An atomic sentence (or atom for short) is formed from a predicate symbol optionally followed by a parenthesized list of terms, such as Brother(Richard, John). Atomic sentences can have complex terms as arguments. Thus, Married(Father(Richard), Mother(John)) states that Richard the Lionheart’s father is married to King John’s mother.

Complex Sentences

 

We can use logical connectives to construct more complex sentences, with the same syntax and semantics as in propositional calculus

¬Brother(LeftLeg(Richard), John)

Brother(Richard, John) ∧ Brother(John, Richard)

King(Richard) ∨ King(John)

¬King(Richard) ⇒ King(John).

 

Quantifiers

 

Quantifiers are used to express properties of entire collections of objects, instead of enumerating the objects by name. First-order logic contains two standard quantifiers, called universal and existential

Universal quantification (∀)

 

“All kings are persons,” is written in first-order logic as

 

∀x King(x) ⇒ Person(x)

 

∀ is usually pronounced “For all. . .” Thus, the sentence says, “For all x, if x is a king, then x is a person.” The symbol x is called a variable. A term with no variables is called a ground term.

Consider the model shown in Figure 8.2 and the intended interpretation that goes with it. We can extend the interpretation in five ways:

x → Richard the Lionheart,
x → King John,
x → Richard’s left leg,
x → John’s left leg,
x → the crown.


The universally quantified sentence ∀x King(x) ⇒ Person(x) is true in the original model if the sentence King(x) ⇒ Person(x) is true under each of the five extended interpretations. That is, the universally quantified sentence is equivalent to asserting the following five sentences:

Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
King John is a king ⇒ King John is a person.
Richard’s left leg is a king ⇒ Richard’s left leg is a person.
John’s left leg is a king ⇒ John’s left leg is a person.
The crown is a king ⇒ the crown is a person.

 

Existential quantification (∃)

 

Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the universe without naming it, by using an existential quantifier. To say, for example, that King John has a crown on his head, we write

∃x Crown(x) ∧ OnHead(x, John)

 

∃x is pronounced “There exists an x such that . . .” or “For some x . . .” More precisely, ∃x P is true in a given model if P is true in at least one extended interpretation that assigns x to a domain element. That is, at least one of the following is true:

Richard the Lionheart is a crown ∧ Richard the Lionheart is on John’s head;
King John is a crown ∧ King John is on John’s head;
Richard’s left leg is a crown ∧ Richard’s left leg is on John’s head;
John’s left leg is a crown ∧ John’s left leg is on John’s head;
The crown is a crown ∧ the crown is on John’s head.

 

The fifth assertion is true in the model, so the original existentially quantified sentence is true in the model. Just as ⇒ appears to be the natural connective to use with ∀, ∧ is the natural connective to use with ∃.

Using ∧ as the main connective with ∀ led to an overly strong statement in the example in the previous section; using ⇒ with ∃ usually leads to a very weak statement, indeed. Consider the following sentence:

∃x Crown(x) ⇒ OnHead(x, John)

 

Applying the semantics, we see that the sentence says that at least one of the following assertions is true:

Richard the Lionheart is a crown ⇒ Richard the Lionheart is on John’s head;
King John is a crown ⇒ King John is on John’s head;
Richard’s left leg is a crown ⇒ Richard’s left leg is on John’s head;


and so on. Now an implication is true if both premise and conclusion are true, or if its premise is false. So if Richard the Lionheart is not a crown, then the first assertion is true and the existential is satisfied. So, an existentially quantified implication sentence is true whenever any object fails to satisfy the premise.
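These truth conditions can be checked mechanically by enumerating the five objects. The following Python sketch evaluates the universal sentence, the ∧-existential, and the weak ⇒-existential; the predicate extensions are assumptions standing in for the intended interpretation:

```python
# Evaluating quantified sentences over the five-object model by enumeration.
domain = ["Richard", "John", "RichardsLeg", "JohnsLeg", "TheCrown"]
King   = {"John"}                      # assumed extension of King
Person = {"Richard", "John"}
Crown  = {"TheCrown"}
OnHead = {("TheCrown", "John")}

# ∀x King(x) ⇒ Person(x): must hold under every extended interpretation.
forall_sent = all((x not in King) or (x in Person) for x in domain)

# ∃x Crown(x) ∧ OnHead(x, John): must hold under at least one.
exists_and = any((x in Crown) and ((x, "John") in OnHead) for x in domain)

# ∃x Crown(x) ⇒ OnHead(x, John): true as soon as ANY object is not a crown,
# which is why the implication makes the existential so weak.
exists_impl = any((x not in Crown) or ((x, "John") in OnHead) for x in domain)

print(forall_sent, exists_and, exists_impl)  # True True True
```

Note that `exists_impl` would remain true even if nothing were on John’s head, as long as some object (say, Richard) is not a crown — exactly the weakness described above.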

Nested quantifiers

 

We will often want to express more complex sentences using multiple quantifiers. The simplest case is where the quantifiers are of the same type. For example, “Brothers are siblings” can be written as

∀x ∀y Brother(x, y) ⇒ Sibling(x, y)

 

Consecutive quantifiers of the same type can be written as one quantifier with several variables. For example, to say that siblinghood is a symmetric relationship, we can write

∀x, y Sibling(x, y) ⇔ Sibling(y, x). In other cases we will have mixtures. “Everybody loves somebody” means that for every person, there is someone that person loves:

∀x ∃y Loves(x, y).

 

On the other hand, to say “There is someone who is loved by everyone,” we write

 

∃y ∀x Loves(x, y).

 

The order of quantification is therefore very important. It becomes clearer if we insert parentheses.

∀x (∃y Loves(x, y)) says that everyone has a particular property, namely, the property that they love someone. On the other hand,

∃y (∀x Loves(x, y)) says that someone in the world has a particular property, namely the property of being loved by everybody.

Connections between ∀ and ∃

 

The two quantifiers are actually intimately connected with each other, through negation. Asserting that everyone dislikes parsnips is the same as asserting there does not exist someone who likes them, and vice versa:

∀x ¬Likes(x, Parsnips) is equivalent to ¬∃x Likes(x, Parsnips).

We can go one step further: “Everyone likes ice cream” means that there is no one who does not like ice cream:

∀x Likes(x, IceCream) is equivalent to ¬∃x ¬Likes(x, IceCream).


Equality

 

We can use the equality symbol to signify that two terms refer to the same object. For example,

Father (John)=Henry

 

says that the object referred to by Father (John) and the object referred to by Henry are the same.

 

The equality symbol can be used to state facts about a given function, as we just did for the Father symbol. It can also be used with negation to insist that two terms are not the same object. To say that Richard has at least two brothers, we would write

∃x, y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x=y).

 

Compare different knowledge representation languages

 

Figure 3.2 Formal languages and their ontological and epistemological commitments

 

What are the syntactic elements of First Order Logic?

 

The basic syntactic elements of first-order logic are the symbols that stand for objects, relations, and functions. The symbols come in three kinds:

a)       constant symbols, which stand for objects;

b)      predicate symbols, which stand for relations;

c)       and function symbols, which stand for functions.

We adopt the convention that these symbols will begin with uppercase letters.

Example:
Constant symbols: Richard and John;
Predicate symbols: Brother, OnHead, Person, King, and Crown;
Function symbol: LeftLeg.


Quantifiers

 

Quantifiers are used to express properties of entire collections of objects, instead of enumerating the objects by name.

There are two types of standard quantifiers:
Universal
Existential

 

Universal quantification

 

Explain Universal Quantifiers with an example.

 

Rules such as “All kings are persons” are written in first-order logic as

∀x King(x) ⇒ Person(x)

where ∀ is pronounced “For all. . .”

Thus, the sentence says, “For all x, if x is a king, then x is a person.” The symbol x is called a variable (written with lowercase letters).

The sentence ∀x P, where P is a logical expression, says that P is true for every object x.



The universally quantified sentence is equivalent to asserting the following five sentences

 

Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
King John is a king ⇒ King John is a person.
Richard’s left leg is a king ⇒ Richard’s left leg is a person.
John’s left leg is a king ⇒ John’s left leg is a person.
The crown is a king ⇒ the crown is a person.

 

Existential quantification

 

Universal quantification makes statements about every object.

 

It is possible to make a statement about some object in the universe without naming it, by using an existential quantifier.

Example

 

“King John has a crown on his head” is written as

∃x Crown(x) ∧ OnHead(x, John)

∃x is pronounced “There exists an x such that . . .” or “For some x . . .”


 

 

Nested Quantifiers

 

More complex sentences are expressed using multiple quantifiers. The following are some cases of multiple quantifiers.

The simplest case is where the quantifiers are of the same type.


For Example:- “Brothers are Siblings” can be written as

∀x ∀y Brother(x, y) ⇒ Sibling(x, y)
 

Consecutive quantifiers of the same type can be written as one quantifier with several variables. For Example:- to say that siblinghood is a symmetric relationship, we can write

∀x, y Sibling(x, y) ⇔ Sibling(y, x)
 

3.2           THE WUMPUS WORLD

 

The wumpus world is a cave consisting of rooms connected by passageways. Lurking somewhere in the cave is the terrible wumpus, a beast that eats anyone who enters its room. The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms contain bottomless pits that will trap anyone who wanders into these rooms (except for the wumpus, which is too big to fall in). The only mitigating feature of this bleak environment is the possibility of finding a heap of gold. Although the wumpus world is rather tame by modern computer game standards, it illustrates some important points about intelligence. A sample wumpus world is shown in Figure

 

 

Figure 3.3 wumpus world


To specify the agent's task, we specify its percepts, actions, and goals. In the wumpus world, these are as follows:

                In the square containing the wumpus and in the directly (not diagonally) adjacent squares the agent will perceive a stench.

                In the squares directly adjacent to a pit, the agent will perceive a breeze.

               In the square where the gold is, the agent will perceive a glitter.

               When an agent walks into a wall, it will perceive a bump.

                When the wumpus is killed, it gives out a woeful scream that can be perceived anywhere in the cave.

                The percepts will be given to the agent in the form of a list of five symbols; for example, if there is a stench, a breeze, and a glitter but no bump and no scream, the agent will receive the percept [Stench, Breeze, Glitter, None, None]. The agent cannot perceive its own location.

                Just as in the vacuum world, there are actions to go forward, turn right by 90°, and turn left by 90°. In addition, the action Grab can be used to pick up an object that is in the same square as the agent. The action Shoot can be used to fire an arrow in a straight line in the direction the agent is facing. The arrow continues until it either hits and kills the wumpus or hits the wall. The agent only has one arrow, so only the first Shoot action has any effect.

The wumpus agent receives a percept vector with five elements. The corresponding first order sentence stored in the knowledge base must include both the percept and the time at which it occurred; otherwise, the agent will get confused about when it saw what. We use integers for time steps. A typical percept sentence would be

Percept ([Stench, Breeze, Glitter, None, None], 5).

 

Here, Percept is a binary predicate, and Stench and so on are constants placed in a list.

The actions in the wumpus world can be represented by logical terms:

 

Turn(Right ), Turn(Left ), Forward, Shoot, Grab, Climb.

 

To determine which is best, the agent program executes the query

 

ASKVARS(∃a BestAction(a, 5)),

 

which returns a binding list such as {a/Grab}. The agent program can then return Grab as the action to take. The raw percept data implies certain facts about the current state.

For example:

 

∀t,s,g,m,c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t),
∀t,s,b,m,c Percept([s, b, Glitter, m, c], t) ⇒ Glitter(t)


These rules exhibit a trivial form of the reasoning process called perception. Simple “reflex” behavior can also be implemented by quantified implication sentences.

For example, we have ∀t Glitter(t) ⇒ BestAction(Grab, t).

 

Given the percept and rules from the preceding paragraphs, this would yield the desired conclusion

BestAction(Grab, 5)—that is, Grab is the right thing to do. For example, if the agent is at a square and perceives a breeze, then that square is breezy:

∀s, t At(Agent, s, t) ∧ Breeze(t) ⇒ Breezy(s).

 

It is useful to know that a square is breezy because we know that the pits cannot move about. Notice that Breezy has no time argument. Having discovered which places are breezy (or smelly) and, very important, not breezy (or not smelly), the agent can deduce where the pits are (and where the wumpus is). First-order logic just needs one axiom:

∀s Breezy(s) ⇔ ∃r Adjacent(r, s) ∧ Pit(r).
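The perception and reflex rules above can be mimicked procedurally. In this minimal Python sketch (function names are illustrative, not from the text), a percept is a five-element list and each quantified implication becomes a check over it:

```python
# Percepts are five-element lists: [stench, breeze, glitter, bump, scream].
def breeze(percept):
    # ∀t,s,g,m,c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t)
    return percept[1] == "Breeze"

def glitter(percept):
    # ∀t,s,b,m,c Percept([s, b, Glitter, m, c], t) ⇒ Glitter(t)
    return percept[2] == "Glitter"

def best_action(percept, t):
    # reflex rule: ∀t Glitter(t) ⇒ BestAction(Grab, t)
    return ("Grab", t) if glitter(percept) else None

p5 = ["Stench", "Breeze", "Glitter", "None", "None"]  # percept at time 5
print(breeze(p5))          # True
print(best_action(p5, 5))  # ('Grab', 5)
```

This reproduces the conclusion BestAction(Grab, 5) drawn in the text from the percept at time 5.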

 

3.3           SUBSTITUTION

 

Let us begin with universal quantifiers

 

∀x King(x) ∧ Greedy(x) ⇒ Evil(x).

 

Then it seems quite permissible to infer any of the following sentences:

 

King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John)).

 

The rule of Universal Instantiation (UI for short) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the variable.

Let SUBST(θ,α) denote the result of applying the substitution θ to the sentence α. Then the rule is written

∀v α
SUBST({v/g}, α)

for any variable v and ground term g.

For example, the three sentences given earlier are obtained with the substitutions

{x/John}, {x/Richard }, and {x/Father (John)}.

 

In the rule for Existential Instantiation, the variable is replaced by a single new constant symbol. The formal statement is as follows: for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base,

∃v α
SUBST({v/k}, α)

For example, from the sentence


∃x Crown(x) ∧ OnHead(x, John)

we can infer the sentence

Crown(C1) ∧ OnHead(C1, John)

 

EXAMPLE

 

Suppose our knowledge base contains just the sentences

∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(John)
Brother(Richard, John)

 

Then we apply UI to the first sentence using all possible ground-term substitutions from the vocabulary of the knowledge base—in this case,

{x/John} and {x/Richard }. We obtain

King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard),

and we discard the universally quantified sentence. Now, the knowledge base is essentially propositional if we view the ground atomic sentences

King(John),

Greedy(John), and so on—as proposition symbols.
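The propositionalization step can be sketched in a few lines of Python (string templates are an illustrative encoding, not the text’s notation): apply every ground substitution for x to the universally quantified rule, then discard the quantified original.

```python
# Universal Instantiation: substitute each ground term for x in
# ∀x King(x) ∧ Greedy(x) ⇒ Evil(x), producing purely propositional rules.
ground_terms = ["John", "Richard"]
rule_premise, rule_conclusion = "King({x}) ∧ Greedy({x})", "Evil({x})"

instances = [(rule_premise.format(x=g), rule_conclusion.format(x=g))
             for g in ground_terms]

for premise, conclusion in instances:
    print(premise, "⇒", conclusion)
# King(John) ∧ Greedy(John) ⇒ Evil(John)
# King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
```

Each resulting ground sentence can now be treated as a proposition symbol, exactly as the text describes.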

 

3.4           UNIFICATION

 

Lifted inference rules require finding substitutions that make different logical expressions look identical. This process is called unification and is a key component of all first- order inference algorithms. The UNIFY algorithm takes two sentences and returns a unifier for them if one exists: UNIFY(p, q)=θ where SUBST(θ, p)= SUBST(θ, q).

Suppose we have a query AskVars(Knows(John, x)): whom does John know? Answers can be found by finding all sentences in the knowledge base that unify with Knows(John, x).

Here are the results of unification with four different sentences that might be in the knowledge base:

UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y, Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail.

The last unification fails because x cannot take on the values John and Elizabeth at the same time. Now, remember that Knows(x, Elizabeth) means “Everyone knows Elizabeth,” so we should be able to infer that John knows Elizabeth. The problem arises only because the two sentences happen to use the same variable name, x. The problem can be avoided by


standardizing apart one of the two sentences being unified, which means renaming its variables to avoid name clashes. For example, we can rename x in

Knows(x, Elizabeth) to x17 (a new variable name) without changing its meaning.

 

Now the unification will work

 

UNIFY(Knows(John, x), Knows(x17, Elizabeth)) = {x/Elizabeth, x17/John} UNIFY should return a substitution that makes the two arguments look the same. But there could be more than one such unifier.

For example,

 

UNIFY(Knows(John, x), Knows(y, z)) could return

{y/John, x/z} or {y/John, x/John, z/John}.

 

The first unifier gives Knows(John, z) as the result of unification, whereas the second gives Knows(John, John). The second result could be obtained from the first by an additional substitution {z/John}; we say that the first unifier is more general than the second, because it places fewer restrictions on the values of the variables. An algorithm for computing most general unifiers is shown in Figure.

The process is simple: recursively explore the two expressions simultaneously “side by side,” building up a unifier along the way, but failing if two corresponding points in the structures do not match. There is one expensive step: when matching a variable against a complex term, one must check whether the variable itself occurs inside the term; if it does, the match fails because no consistent unifier can be constructed.

 

 

Figure 3.4 The unification algorithm
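The recursive procedure just described can be sketched in Python. This is a minimal sketch under an assumed representation: variables are lowercase strings, constants are capitalized strings, and compound terms are tuples such as ("Knows", "John", "x"). The occurs check implements the “expensive step” from the text.

```python
def is_var(t):
    # variables are lowercase strings in this encoding
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta=None):
    """Return a substitution (dict) making x and y identical, or False."""
    if theta is None:
        theta = {}
    if theta is False:
        return False
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        # recursively explore the two expressions "side by side"
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is False:
                return False
        return theta
    return False

def occurs(v, t, theta):
    # the occurs check: v must not appear inside its own binding
    if v == t:
        return True
    if is_var(t) and t in theta:
        return occurs(v, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(v, ti, theta) for ti in t)
    return False

def unify_var(v, t, theta):
    if v in theta:
        return unify(theta[v], t, theta)
    if is_var(t) and t in theta:
        return unify(v, theta[t], theta)
    if occurs(v, t, theta):
        return False
    return {**theta, v: t}

print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))
# {'y': 'John', 'x': 'Bill'}
print(unify(("Knows", "John", "x"), ("Knows", "x", "Elizabeth")))
# False -- the clash the text resolves by standardizing apart
```

The sketch returns the most general unifier because it only ever binds a variable when forced to, adding no extra restrictions.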


In artificial intelligence, forward and backward chaining are important topics, but before understanding forward and backward chaining, let us first understand where these two terms come from.

Inference engine

 

The inference engine is the component of the intelligent system in artificial intelligence, which applies logical rules to the knowledge base to infer new information from known facts. The first inference engine was part of the expert system. Inference engine commonly proceeds in two modes, which are:

a.     Forward chaining

b.     Backward chaining

 

Horn Clause and Definite clause

 

Horn clauses and definite clauses are forms of sentences that enable the knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require the KB to be in the form of first-order definite clauses.

Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict Horn clause.

Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as a Horn clause. Hence all definite clauses are Horn clauses.

Example: (¬p ∨ ¬q ∨ k). It has only one positive literal, k. It is equivalent to p ∧ q → k.

A.   Forward Chaining

 

Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining

 

o    It is a bottom-up approach, as it moves from bottom to top.

 

o    It is a process of drawing a conclusion from known facts or data, starting from the initial state and reaching the goal state.


o    The forward-chaining approach is also called data-driven, as we reach the goal using the available data.

o    The forward-chaining approach is commonly used in expert systems (such as CLIPS), business rule systems, and production rule systems.

Consider the following famous example which we will use in both approaches:

 

Example

 

"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."

Prove that "Robert is criminal."

 

To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and then we will use a forward-chaining algorithm to reach the goal.

Facts Conversion into FOL

 

o        It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)

American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p)    …(1)

 

o        Country A has some missiles. ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1.

Owns(A, T1)    …(2)
Missile(T1)    …(3)

 

o        All of the missiles were sold to country A by Robert.

∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A)    …(4)

 

o        Missiles are weapons.

Missile(p) → Weapon(p)    …(5)

 

o        Enemy of America is known as hostile.

Enemy(p, America) → Hostile(p)    …(6)

 

o        Country A is an enemy of America.

Enemy (A, America)                                                                                             …(7)

 

o        Robert is American

American(Robert).                                                                                             …(8)


Forward chaining proof

Step-1

In the first step we will start with the known facts and will choose the sentences which do not have implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts will be represented as below.

 

 


Figure 3.5

 

 

Step-2

 

At the second step, we will see which facts can be inferred from the available facts via rules whose premises are satisfied.

 

Rule-(1) does not have all its premises satisfied, so it will not fire in the first iteration. Facts (2) and (3) are already added.

Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of facts (2) and (3).

Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from fact (7).

 

 


Figure 3.6

 

 

Step-3

 

At step-3, as we can check, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. And hence we have reached our goal statement.


 


Figure 3.7

 

 

Hence it is proved that Robert is Criminal using forward chaining approach.
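The three steps above can be replayed in code. This Python sketch uses an assumed, already-ground encoding of the rules (in a full implementation the substitutions {p/T1}, {p/A}, {p/Robert, q/T1, r/A} would come from unification); it fires every rule whose premises are all known until nothing new is added:

```python
# Known facts from step 1 (sentences with no implications).
facts = {("American", "Robert"), ("Enemy", "A", "America"),
         ("Owns", "A", "T1"), ("Missile", "T1")}

# Ground instances of rules (1), (4), (5), (6): (premises, conclusion).
rules = [
    ([("Missile", "T1"), ("Owns", "A", "T1")], ("Sells", "Robert", "T1", "A")),
    ([("Missile", "T1")], ("Weapon", "T1")),
    ([("Enemy", "A", "America")], ("Hostile", "A")),
    ([("American", "Robert"), ("Weapon", "T1"),
      ("Sells", "Robert", "T1", "A"), ("Hostile", "A")],
     ("Criminal", "Robert")),
]

changed = True
while changed:                       # repeat until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if all(p in facts for p in premises) and conclusion not in facts:
            facts.add(conclusion)    # trigger the rule, add its conclusion
            changed = True

print(("Criminal", "Robert") in facts)  # True
```

The loop reaches Criminal(Robert) in the same order as the proof: Sells, Weapon, and Hostile first, then Rule-(1).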

 

B.                  Backward Chaining

 

Backward-chaining is also known as a backward deduction or backward reasoning method when using an inference engine. A backward chaining algorithm is a form of reasoning, which starts with the goal and works backward, chaining through rules to find known facts that support the goal.

Properties of backward chaining

 

o        It is known as a top-down approach.

o        Backward-chaining is based on modus ponens inference rule.

o        In backward chaining, the goal is broken into sub-goal or sub-goals to prove the facts true.

o        It is called a goal-driven approach, as a list of goals decides which rules are selected and used.

o        Backward -chaining algorithm is used in game theory, automated theorem proving tools, inference engines, proof assistants, and various AI applications.

o        The backward-chaining method mostly used a depth-first search strategy for proof.

 

Example

 

In backward chaining, we will use the same example as above, and will rewrite all the rules.

 

o    American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p)    …(1)

o    Owns(A, T1)    …(2)

o    Missile(T1)    …(3)

o    ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A)    …(4)

o    Missile(p) → Weapon(p)    …(5)

o    Enemy(p, America) → Hostile(p)    …(6)

o    Enemy(A, America)    …(7)

o    American(Robert).    …(8)

 

Backward-Chaining proof

 

In Backward chaining, we will start with our goal predicate, which is Criminal (Robert), and then infer further rules.

Step-1

 

In the first step, we take the goal fact, and from the goal fact we infer other facts, finally proving those facts true. Our goal fact is "Robert is criminal," so Criminal(Robert) is the goal predicate.


 

Step-2

 

In the second step, we infer other facts from the goal fact that satisfy the rules. As we can see in Rule (1), the goal predicate Criminal(Robert) is present with substitution {Robert/p}. So we add all the conjunctive facts below the first level and replace p with Robert.

Here we can see American (Robert) is a fact, so it is proved here.

 

 


Figure 3.8


Step-3

 

At step 3, we extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule (5). Weapon(q) is also true with the substitution of the constant T1 for q.

 

 


Figure 3.9

 

Step-4

 

At step 4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule (4) with the substitution of A in place of r. So these two statements are proved here.

 

 


Figure 3.10


Step-5

 

At step 5, the sub-goal Hostile(A) is reduced, using Rule (6), to Enemy(A, America), which is a known fact. Hence all the statements are proved true using backward chaining.

 

 


Figure 3.11
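The backward-chaining proof in steps 1-5 above can be sketched in code. The sketch below is illustrative rather than the textbook algorithm verbatim: it represents atoms as tuples, treats lowercase strings as variables and capitalized strings as constants and predicate names, and renames rule variables on each use (standardizing apart).

```python
def is_var(t):
    """Variables are lowercase strings; constants/predicates are capitalized."""
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, s):
    """Unify two terms under substitution s; return an extended s or None."""
    if s is None:
        return None
    if x == y:
        return s
    if is_var(x):
        return unify_var(x, y, s)
    if is_var(y):
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
        return s
    return None

def unify_var(v, t, s):
    if v in s:
        return unify(s[v], t, s)   # variable already bound: unify its value
    out = dict(s)
    out[v] = t
    return out

def subst(t, s):
    """Apply substitution s to term t, following binding chains."""
    if isinstance(t, tuple):
        return tuple(subst(a, s) for a in t)
    while is_var(t) and t in s:
        t = s[t]
    return t

def rename(t, n):
    """Standardize apart: give every variable in a rule a fresh suffix."""
    if isinstance(t, tuple):
        return tuple(rename(a, n) for a in t)
    return t + '_' + str(n) if is_var(t) else t

# Horn rules as (conclusion, premises); facts have empty premise lists.
KB = [
    (('Criminal', 'p'), [('American', 'p'), ('Weapon', 'q'),
                         ('Sells', 'p', 'q', 'r'), ('Hostile', 'r')]),
    (('Owns', 'A', 'T1'), []),
    (('Missile', 'T1'), []),
    (('Sells', 'Robert', 'p', 'A'), [('Missile', 'p'), ('Owns', 'A', 'p')]),
    (('Weapon', 'p'), [('Missile', 'p')]),
    (('Hostile', 'p'), [('Enemy', 'p', 'America')]),
    (('Enemy', 'A', 'America'), []),
    (('American', 'Robert'), []),
]

_ctr = [0]

def back_chain(goals, s):
    """Goal-driven search: yield every substitution proving all goals."""
    if not goals:
        yield s
        return
    goal, rest = subst(goals[0], s), goals[1:]
    for concl, prems in KB:
        _ctr[0] += 1
        n = _ctr[0]
        s2 = unify(goal, rename(concl, n), s)
        if s2 is not None:
            yield from back_chain([rename(p, n) for p in prems] + rest, s2)

proved = next(back_chain([('Criminal', 'Robert')], {}), None) is not None
print(proved)  # True: the goal Criminal(Robert) is proved
```

Note how the sub-goals are expanded in the same order as in the proof tree above: American, Weapon, Sells, Hostile.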

 

Exercise: Suppose you have a production system with four rules: R1: IF A AND C THEN F; R2: IF A AND E THEN G; R3: IF B THEN E; R4: IF G THEN D, and three initial facts: A, B, C. Show that D is true. Explain what is meant by "forward chaining", and show explicitly how it can be used in this case to determine new facts.
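For the exercise above, a minimal forward-chaining loop (data-driven rule firing until no new fact appears) might look like the following; the encoding of rules as (premises, conclusion) pairs is an illustrative assumption.

```python
# Rules R1-R4 from the exercise, as (premise set, conclusion) pairs.
rules = [
    ({'A', 'C'}, 'F'),   # R1: IF A AND C THEN F
    ({'A', 'E'}, 'G'),   # R2: IF A AND E THEN G
    ({'B'}, 'E'),        # R3: IF B THEN E
    ({'G'}, 'D'),        # R4: IF G THEN D
]
facts = {'A', 'B', 'C'}  # initial facts

changed = True
while changed:           # keep firing rules until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # data-driven: premises satisfied, fire
            changed = True

print(sorted(facts))  # D is derived via R3 -> E, then R2 -> G, then R4 -> D
```

The run derives E (from B), F (from A, C), G (from A, E), and finally the goal D (from G), exactly the fact-by-fact derivation the exercise asks for.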

3.5                  RESOLUTION IN FOL

Resolution is a theorem-proving technique that proceeds by building refutation proofs, i.e., proofs by contradiction. It was invented by the mathematician John Alan Robinson in 1965.

Resolution is used when various statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which can efficiently operate on the conjunctive normal form or clausal form.

Clause: A disjunction of literals (atomic sentences) is called a clause. A clause containing exactly one literal is known as a unit clause.

Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to be in conjunctive normal form or CNF.


Steps for Resolution

 

1.     Convert the facts into first-order logic.

2.     Convert the FOL statements into CNF.

3.     Negate the statement to be proved (proof by contradiction).

4.     Draw the resolution graph (unification).
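Before working the full first-order example, the core of steps 3 and 4 can be sketched for the propositional case, where no unification is needed. The clause encoding below (clauses as sets of string literals, with "~p" for ¬p) is an illustrative assumption.

```python
def negate(lit):
    """Negate a literal: p <-> ~p."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (sets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refutes(clauses):
    """True iff the clause set is unsatisfiable (empty clause derivable)."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:
                            return True   # empty clause: contradiction found
                        new.add(frozenset(r))
        if new <= clauses:
            return False                  # closure reached, no contradiction
        clauses |= new

# KB: p -> q, q -> r, p; negated goal: ~r. Resolution derives the empty clause.
print(refutes([{'~p', 'q'}, {'~q', 'r'}, {'p'}, {'~r'}]))  # True
```

The refutation succeeds exactly when the knowledge base together with the negated goal is unsatisfiable, which is what step 3 (negating the goal) sets up.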

 

To better understand all the above steps, we will take an example in which we will apply resolution.

Example

a.     John likes all kinds of food.

b.     Apples and vegetables are food.

c.     Anything anyone eats and is not killed by is food.

d.     Anil eats peanuts and is still alive.

e.     Harry eats everything that Anil eats.

Prove by resolution that:

f.      John likes peanuts.

 

Step-1: Conversion of Facts into FOL

 

In the first step we will convert all the given statements into first-order logic:

a.     ∀x food(x) → likes(John, x)

b.     food(Apple) ∧ food(vegetables)

c.     ∀x ∀y eats(x, y) ∧ ¬killed(x) → food(y)

d.     eats(Anil, Peanuts) ∧ alive(Anil)

e.     ∀x eats(Anil, x) → eats(Harry, x)

f.      ∀x ¬killed(x) → alive(x)

g.     ∀x alive(x) → ¬killed(x)

h.     likes(John, Peanuts)   (conclusion to be proved)

Step-2: Conversion of FOL into CNF

o        Eliminate all implications (→) and rewrite

1.     ∀x ¬food(x) ∨ likes(John, x)

2.     food(Apple) ∧ food(vegetables)

3.     ∀x ∀y ¬[eats(x, y) ∧ ¬killed(x)] ∨ food(y)

4.     eats(Anil, Peanuts) ∧ alive(Anil)

5.     ∀x ¬eats(Anil, x) ∨ eats(Harry, x)

6.     ∀x killed(x) ∨ alive(x)

7.     ∀x ¬alive(x) ∨ ¬killed(x)

8.     likes(John, Peanuts).


o        Move negation (¬) inwards and rewrite

1.     ∀x ¬food(x) ∨ likes(John, x)

2.     food(Apple) ∧ food(vegetables)

3.     ∀x ∀y ¬eats(x, y) ∨ killed(x) ∨ food(y)

4.     eats(Anil, Peanuts) ∧ alive(Anil)

5.     ∀x ¬eats(Anil, x) ∨ eats(Harry, x)

6.     ∀x killed(x) ∨ alive(x)

7.     ∀x ¬alive(x) ∨ ¬killed(x)

8.     likes(John, Peanuts).

 

o        Rename variables or standardize variables

1.     ∀x ¬food(x) ∨ likes(John, x)

2.     food(Apple) ∧ food(vegetables)

3.     ∀y ∀z ¬eats(y, z) ∨ killed(y) ∨ food(z)

4.     eats(Anil, Peanuts) ∧ alive(Anil)

5.     ∀w ¬eats(Anil, w) ∨ eats(Harry, w)

6.     ∀g killed(g) ∨ alive(g)

7.     ∀k ¬alive(k) ∨ ¬killed(k)

8.     likes(John, Peanuts).

 

o        Eliminate existential quantifiers (Skolemization). In this step we eliminate the existential quantifier ∃; this process is known as Skolemization. Since there is no existential quantifier in this example, all the statements remain the same in this step.

o        Drop universal quantifiers. In this step we drop all universal quantifiers, since all the statements are implicitly universally quantified, so we do not need to write them.

1.     ¬food(x) ∨ likes(John, x)

2.     food(Apple)

3.     food(vegetables)

4.     ¬eats(y, z) ∨ killed(y) ∨ food(z)

5.     eats(Anil, Peanuts)

6.     alive(Anil)

7.     ¬eats(Anil, w) ∨ eats(Harry, w)

8.     killed(g) ∨ alive(g)

9.     ¬alive(k) ∨ ¬killed(k)

10.  likes(John, Peanuts).

 

o        Distribute disjunction ∨ over conjunction ∧. This step makes no change in this problem.
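The propositional core of the CNF-conversion steps above (eliminate →, move ¬ inwards, distribute ∨ over ∧) can be sketched as follows. The nested-tuple formula encoding with binary connectives is an illustrative assumption; Skolemization and quantifier handling are omitted.

```python
def elim_imp(f):
    """Rewrite p -> q as ~p v q, recursively."""
    if isinstance(f, tuple):
        op, *args = f
        args = [elim_imp(a) for a in args]
        if op == '->':
            return ('or', ('not', args[0]), args[1])
        return (op, *args)
    return f

def nnf(f):
    """Push negation inwards (De Morgan, double negation)."""
    if isinstance(f, tuple):
        op, *args = f
        if op == 'not':
            a = args[0]
            if isinstance(a, tuple):
                if a[0] == 'not':
                    return nnf(a[1])                  # double negation
                if a[0] == 'and':
                    return ('or', *[nnf(('not', x)) for x in a[1:]])
                if a[0] == 'or':
                    return ('and', *[nnf(('not', x)) for x in a[1:]])
            return f
        return (op, *[nnf(a) for a in args])
    return f

def cnf(f):
    """Distribute or over and, after the first two steps."""
    f = nnf(elim_imp(f))
    if isinstance(f, tuple) and f[0] == 'or':
        a, b = cnf(f[1]), cnf(f[2])
        if isinstance(a, tuple) and a[0] == 'and':
            return ('and', *[cnf(('or', x, b)) for x in a[1:]])
        if isinstance(b, tuple) and b[0] == 'and':
            return ('and', *[cnf(('or', a, x)) for x in b[1:]])
        return ('or', a, b)
    if isinstance(f, tuple) and f[0] == 'and':
        return ('and', *[cnf(a) for a in f[1:]])
    return f

# Statement (c), propositionalized: (eats and not killed) -> food
print(cnf(('->', ('and', 'eats', ('not', 'killed')), 'food')))
```

The result is the single clause ¬eats ∨ killed ∨ food, matching statement 3 after the "move negation inwards" step above.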


Step-3: Negate the statement to be proved

 

In this step, we apply negation to the conclusion statement, which will be written as ¬likes(John, Peanuts).

Step-4: Draw Resolution graph

 

o        Now in this step, we will solve the problem by resolution tree using substitution. For the above problem, it will be given as follows:

 

 

Resolution in FOL

Figure 3.12

 

Explanation of Resolution graph

 

o        In the first step of the resolution graph, ¬likes(John, Peanuts) and likes(John, x) get resolved (cancelled) by the substitution {Peanuts/x}, and we are left with ¬food(Peanuts).

o        In the second step of the resolution graph, ¬food(Peanuts) and food(z) get resolved (cancelled) by the substitution {Peanuts/z}, and we are left with ¬eats(y, Peanuts) ∨ killed(y).

o        In the third step of the resolution graph, ¬eats(y, Peanuts) and eats(Anil, Peanuts) get resolved by the substitution {Anil/y}, and we are left with killed(Anil).

o        In the fourth step of the resolution graph, killed(Anil) and ¬killed(k) get resolved by the substitution {Anil/k}, and we are left with ¬alive(Anil).


o        In the last step of the resolution graph, ¬alive(Anil) and alive(Anil) get resolved, producing the empty clause and completing the proof by contradiction.

Difference between Predicate Logic and Propositional Logic

Table 3.1

 

1.

Predicate logic: Predicate logic is a generalization of propositional logic that allows us to express and infer arguments in infinite models.

Propositional logic: A proposition is a declarative statement that is either TRUE or FALSE (but not both).

2.

Predicate logic: Predicate logic (also called predicate calculus and first-order logic) is an extension of propositional logic to formulas involving terms and predicates. Full predicate logic is undecidable.

Propositional logic: Propositional logic is an axiomatization of Boolean logic. Propositional logic is decidable, for example by the method of truth tables.

3.

Predicate logic: Predicate logic has variables.

Propositional logic: Propositional logic has no variables; all parameters are constants.

4.

Predicate logic: A predicate is a logical statement that depends on one or more variables (not necessarily Boolean variables).

Propositional logic: Propositional logic deals solely with propositions and logical connectives.

5.

Predicate logic: In predicate logic, objects, properties, and functions (relations) are involved.

Propositional logic: Propositional logic is represented in terms of Boolean variables and logical connectives.

6.

Predicate logic: In predicate logic, we symbolize subject and predicate separately. Logicians often use lowercase letters to symbolize subjects (or objects) and uppercase letters to symbolize predicates.

Propositional logic: In propositional logic, we use letters to symbolize entire propositions. Propositions are statements of the form "x is y", where x is a subject and y is a predicate.

7.

Predicate logic: Predicate logic uses quantifiers such as the universal quantifier ("∀") and the existential quantifier ("∃").

Propositional logic: Propositional logic has no quantifiers.

8.

Predicate logic example: "Everything is green" as "∀x Green(x)", or "Something is blue" as "∃x Blue(x)".

Propositional logic example: "Everything is green" as "G", or "Something is blue" as "B".

 

3.6           KNOWLEDGE REPRESENTATION – ONTOLOGICAL ENGINEERING

 

Concepts such as Events, Time, Physical Objects, and Beliefs occur in many different domains. Representing these abstract concepts is sometimes called ontological engineering.


 

Figure 3.13 The upper ontology of the world, showing the topics to be covered later in the chapter. Each link indicates that the lower concept is a specialization of the upper one. Specializations are not necessarily disjoint; a human is both an animal and an agent, for example.

 

 

The general framework of concepts is called an upper ontology because of the convention of drawing graphs with the general concepts at the top and the more specific concepts below them, as in Figure 3.13.

Categories and Objects

 

The organization of objects into categories is a vital part of knowledge representation. Although interaction with the world takes place at the level of individual objects, much reasoning takes place at the level of categories.

For example, a shopper would normally have the goal of buying a basketball, rather than a particular basketball such as BB9. There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs.

We could then say Member(b, Basketballs), which we will abbreviate as b ∈ Basketballs, to say that b is a member of the category of basketballs. We say Subset(Basketballs, Balls), abbreviated as Basketballs ⊂ Balls, to say that Basketballs is a subcategory of Balls.

Categories serve to organize and simplify the knowledge base through inheritance. If we say that all instances of the category Food are edible, and if we assert that Fruit is a subclass of Food and Apples is a subclass of Fruit, then we can infer that every apple is edible. We say that the individual apples inherit the property of edibility, in this case from their membership in the Food category.

First-order logic makes it easy to state facts about categories, either by relating objects to categories or by quantifying over their members. Here are some types of facts, with examples of each:

                An object is a member of a category.


BB9 ∈ Basketballs

                A category is a subclass of another category. Basketballs ⊂ Balls

                All members of a category have some properties.

 

(x ∈ Basketballs) ⇒ Spherical(x)

                 Members of a category can be recognized by some properties. Orange(x) ∧ Round(x) ∧ Diameter(x) = 9.5 ∧ x ∈ Balls ⇒ x ∈ Basketballs

                A category as a whole has some properties.

 

Dogs ∈ DomesticatedSpecies

 

Notice that because Dogs is a category and is a member of DomesticatedSpecies, the latter must be a category of categories. Categories can also be defined by providing necessary and sufficient conditions for membership. For example, a bachelor is an unmarried adult male:

x ∈ Bachelors ⇔ Unmarried(x) ∧ x ∈ Adults ∧ x ∈ Males
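The inheritance example above (every apple is edible because Apples is a subclass of Fruit, which is a subclass of Food) can be rendered as a toy lookup; the dictionary encoding of MemberOf, SubsetOf, and category properties is an illustrative assumption.

```python
# Category hierarchy and per-category properties, as plain dictionaries.
subclass_of = {'Apples': 'Fruit', 'Fruit': 'Food', 'Basketballs': 'Balls'}
properties  = {'Food': {'edible'}}
member_of   = {'Apple1': 'Apples', 'BB9': 'Basketballs'}

def inherited(obj):
    """Collect properties by walking up from the object's category."""
    props, cat = set(), member_of.get(obj)
    while cat is not None:
        props |= properties.get(cat, set())   # inherit from this category
        cat = subclass_of.get(cat)            # climb one SubsetOf link
    return props

print(inherited('Apple1'))  # {'edible'}: inherited from Food via Fruit
```

Apple1 inherits edibility even though no fact mentions Apple1 and Food together, which is exactly the point of inheritance.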

 

Physical Composition

 

We use the general PartOf relation to say that one thing is part of another. Objects can be grouped into part of hierarchies, reminiscent of the Subset hierarchy:

PartOf(Bucharest, Romania)
PartOf(Romania, EasternEurope)
PartOf(EasternEurope, Europe)
PartOf(Europe, Earth)

The PartOf relation is transitive and reflexive; that is,

PartOf(x, y) ∧ PartOf(y, z) ⇒ PartOf(x, z)

PartOf(x, x)

Therefore, we can conclude PartOf (Bucharest, Earth).

For example, if the apples are Apple1, Apple2, and Apple3, then

BunchOf ({Apple1,Apple2,Apple3})

denotes the composite object with the three apples as parts (not elements). We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of BunchOf(s):

∀x x ∈ s ⇒ PartOf(x, BunchOf(s))

Furthermore, BunchOf(s) is the smallest object satisfying this condition. In other words, BunchOf(s) must be part of any object that has all the elements of s as parts:

∀y [∀x x ∈ s ⇒ PartOf(x, y)] ⇒ PartOf(BunchOf(s), y)
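Because PartOf is transitive and reflexive, PartOf(Bucharest, Earth) follows from the chain above; a simple reachability check sketches this, with the direct-link table as an illustrative encoding.

```python
# Direct PartOf links from the text; transitivity is recovered by walking up.
part_of = {
    'Bucharest': 'Romania',
    'Romania': 'EasternEurope',
    'EasternEurope': 'Europe',
    'Europe': 'Earth',
}

def is_part_of(x, y):
    """Transitive-reflexive closure of the direct PartOf links."""
    while x is not None:
        if x == y:
            return True          # reflexivity: PartOf(x, x)
        x = part_of.get(x)       # transitivity: follow one more link
    return False

print(is_part_of('Bucharest', 'Earth'))  # True
```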


Measurements

 

In both scientific and commonsense theories of the world, objects have height, mass, cost, and so on. The values that we assign for these properties are called measures. For example,

Length(L1) = Inches(1.5) = Centimeters(3.81)

Conversion between units is done by equating multiples of one unit to another:

Centimeters(2.54 × d) = Inches(d)

 

Similar axioms can be written for pounds and kilograms, seconds and days, and dollars and cents. Measures can be used to describe objects as follows:

Diameter(Basketball12) = Inches(9.5)
ListPrice(Basketball12) = $(19)

∀d d ∈ Days ⇒ Duration(d) = Hours(24)
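The conversion axiom Centimeters(2.54 × d) = Inches(d) reads directly as a pair of conversion functions; a minimal sketch:

```python
def inches_to_cm(d):
    """Centimeters(2.54 * d) = Inches(d), read left to right."""
    return 2.54 * d

def cm_to_inches(c):
    """The same axiom, read right to left."""
    return c / 2.54

print(inches_to_cm(1.5))  # 3.81, matching Length(L1) above
```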

 

Time Intervals

 

Event calculus opens us up to the possibility of talking about time, and time intervals. We will consider two kinds of time intervals: moments and extended intervals. The distinction is that only moments have zero duration:

Partition({Moments, ExtendedIntervals}, Intervals)
∀i i ∈ Moments ⇔ Duration(i) = Seconds(0)

The functions Begin and End pick out the earliest and latest moments in an interval, and the function Time delivers the point on the time scale for a moment.

The function Duration gives the difference between the end time and the start time.

 

∀i Interval(i) ⇒ Duration(i) = (Time(End(i)) − Time(Begin(i)))
Time(Begin(AD1900)) = Seconds(0)

Time(Begin(AD2001))=Seconds(3187324800)

Time(End(AD2001))=Seconds(3218860800)

Duration(AD2001)=Seconds(31536000)

 

Two intervals Meet if the end time of the first equals the start time of the second. The complete set of interval relations, as proposed by Allen (1983), is shown graphically in Figure 3.14 and logically below:

 

Meet(i, j) ⇔ End(i) = Begin(j)
Before(i, j) ⇔ End(i) < Begin(j)
After(j, i) ⇔ Before(i, j)
During(i, j) ⇔ Begin(j) < Begin(i) < End(i) < End(j)

Overlap(i, j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i, j) ⇔ Begin(i) = Begin(j)
Finishes(i, j) ⇔ End(i) = End(j)
Equals(i, j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)

 

 

Figure 3.14 Predicates on time intervals
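The interval predicates translate directly into comparisons on (begin, end) pairs. The tuple encoding below is an illustrative assumption, using the reigns of Richard and John from earlier in the unit as sample intervals.

```python
# Allen's interval relations as predicates on (begin, end) tuples.
def meet(i, j):     return i[1] == j[0]
def before(i, j):   return i[1] < j[0]
def after(j, i):    return before(i, j)
def during(i, j):   return j[0] < i[0] and i[1] < j[1]
def overlap(i, j):  return i[0] < j[0] < i[1] < j[1]
def begins(i, j):   return i[0] == j[0]
def finishes(i, j): return i[1] == j[1]
def equals(i, j):   return i[0] == j[0] and i[1] == j[1]

reign_richard = (1189, 1199)   # Richard the Lionheart, 1189-1199
reign_john = (1199, 1215)      # King John, 1199-1215
print(meet(reign_richard, reign_john))  # True: the reigns meet
```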

 

3.7           EVENTS

 

Event calculus reifies fluents and events. The fluent At(Shankar, Berkeley) is an object that refers to the fact of Shankar being in Berkeley, but does not by itself say anything about whether it is true. To assert that a fluent is actually true at some point in time we use the predicate T, as in T(At(Shankar, Berkeley), t).

Events are described as instances of event categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is described as

E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)

Alternatively, we can define a three-argument version of the category of flying events and say E1 ∈ Flyings(Shankar, SF, DC). We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we say the same thing in functional form with Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2. The complete set of predicates for one version of the event calculus is:

T(f, t)                      Fluent f is true at time t
Happens(e, i)           Event e happens over the time interval i
Initiates(e, f, t)        Event e causes fluent f to start to hold at time t
Terminates(e, f, t)    Event e causes fluent f to cease to hold at time t
Clipped(f, i)             Fluent f ceases to be true at some point during time interval i
Restored(f, i)           Fluent f becomes true sometime during time interval i

We assume a distinguished event, Start, that describes the initial state by saying which fluents are initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time if the fluent was initiated by an event at some time in the past and was not made false (clipped) by an intervening event. A fluent does not hold if it was terminated by an event and not made true (restored) by another event. Formally, the axioms are:

Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)

where Clipped and Restored are defined by

Clipped(f, (t1, t2)) ⇔ ∃e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
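A minimal sketch of the T(f, t) axiom: a fluent holds at time t if some earlier event initiated it and no intervening event terminated (clipped) it since. The event-record format below is an illustrative assumption.

```python
# Each event record: (time, fluents it initiates, fluents it terminates).
events = [
    (1, {'At(Shankar, Berkeley)'}, set()),   # arrival initiates the fluent
    (5, set(), {'At(Shankar, Berkeley)'}),   # departure terminates it
]

def T(fluent, t):
    """True iff the fluent holds at time t, per the event-calculus axiom."""
    holds = False
    for time, initiates, terminates in sorted(events):
        if time >= t:
            break                 # only events strictly before t matter
        if fluent in initiates:
            holds = True          # Initiates(e, f, t1) with t1 < t
        if fluent in terminates:
            holds = False         # a later event clips the fluent
    return holds

print(T('At(Shankar, Berkeley)', 3), T('At(Shankar, Berkeley)', 7))
```

At t = 3 the fluent holds (initiated at 1, not yet clipped); at t = 7 it does not (terminated at 5).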

3.8           MENTAL EVENTS AND MENTAL OBJECTS

 

What we need is a model of the mental objects that are in someone’s head (or something’s knowledge base) and of the mental processes that manipulate those mental objects. The model does not have to be detailed. We do not have to be able to predict how many milliseconds it will take for a particular agent to make a deduction. We will be happy just to be able to conclude that mother knows whether or not she is sitting.

We begin with the propositional attitudes that an agent can have toward mental objects: attitudes such as Believes, Knows, Wants, Intends, and Informs. The difficulty is that these attitudes do not behave like “normal” predicates.

For example, suppose we try to assert that Lois knows that Superman can fly: Knows(Lois, CanFly(Superman)). One minor issue with this is that we normally think of CanFly(Superman) as a sentence, but here it appears as a term. That issue can be patched up just by reifying CanFly(Superman), making it a fluent. A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude that Lois knows that Clark can fly:

(Superman = Clark) ∧ Knows(Lois, CanFly(Superman)) |= Knows(Lois, CanFly(Clark))

Modal logic is designed to address this problem. Regular logic is concerned with a single modality, the modality of truth, allowing us to express "P is true." Modal logic includes special modal operators that take sentences (rather than terms) as arguments.

For example, "A knows P" is represented with the notation KAP, where K is the modal operator for knowledge. It takes two arguments, an agent (written as the subscript) and a sentence. The syntax of modal logic is the same as that of first-order logic, except that sentences can also be formed with modal operators.

In first-order logic a model contains a set of objects and an interpretation that maps each name to the appropriate object, relation, or function. In modal logic we want to be able to consider both the possibility that Superman's secret identity is Clark and the possibility that it isn't. Therefore, we need a more complicated model, one that consists of a collection of possible worlds rather than just one true world. The worlds are connected in a graph by accessibility relations, one relation for each modal operator. We say that world w1 is accessible from world w0 with respect to the modal operator KA if everything in w1 is consistent with what A knows in w0, and we write this as Acc(KA, w0, w1). In diagrams such as Figure 3.15 we show accessibility as an arrow between possible worlds.

In general, a knowledge atom KAP is true in world w if and only if P is true in every world accessible from w. The truth of more complex sentences is derived by recursive application of this rule and the normal rules of first-order logic. That means that modal logic can be used to reason about nested knowledge sentences: what one agent knows about another agent's knowledge. For example, we can say that, even though Lois doesn't know whether Superman's secret identity is Clark Kent, she does know that Clark knows:

KLois [KClark Identity(Superman, Clark) ∨ KClark ¬Identity(Superman, Clark)]

Figure 3.15 shows some possible worlds for this domain, with accessibility relations for Lois and Superman.

 

 

Figure 3.15

 

In the TOP-LEFT diagram, it is common knowledge that Superman knows his own identity, and neither he nor Lois has seen the weather report. So in w0 the worlds w0 and w2 are accessible to Superman; maybe rain is predicted, maybe not. For Lois all four worlds are accessible from each other; she doesn't know anything about the report or whether Clark is Superman. But she does know that Superman knows whether he is Clark, because in every world that is accessible to Lois, either Superman knows I, or he knows ¬I. Lois does not know which is the case, but either way she knows Superman knows.

In the TOP-RIGHT diagram it is common knowledge that Lois has seen the weather report. So in w4 she knows rain is predicted and in w6 she knows rain is not predicted. Superman does not know the report, but he knows that Lois knows, because in every world that is accessible to him, either she knows R or she knows ¬R.

In the BOTTOM diagram we represent the scenario where it is common knowledge that Superman knows his identity, and Lois might or might not have seen the weather report. We represent this by combining the two top scenarios, and adding arrows to show that Superman does not know which scenario actually holds. Lois does know, so we don't need to add any arrows for her. In w0 Superman still knows I but not R, and now he does not know whether Lois knows R. From what Superman knows, he might be in w0 or w2, in which case Lois does not know whether R is true, or he could be in w4, in which case she knows R, or w6, in which case she knows ¬R.
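The possible-worlds semantics can be sketched directly: KAP holds in world w iff P holds in every world accessible to A from w. The four worlds and accessibility sets below loosely follow the top-left diagram and are illustrative assumptions.

```python
# Propositions true in each world: I = "Superman is Clark", R = "rain predicted".
worlds = {
    'w0': {'I'},
    'w2': {'I', 'R'},
    'w4': set(),
    'w6': {'R'},
}
# Worlds accessible to each agent from w0 (one accessibility relation per agent).
acc = {
    ('Superman', 'w0'): {'w0', 'w2'},
    ('Lois', 'w0'): {'w0', 'w2', 'w4', 'w6'},
}

def knows(agent, prop, w):
    """K_agent(prop) at w: prop holds in every accessible world."""
    return all(prop in worlds[v] for v in acc[(agent, w)])

print(knows('Superman', 'I', 'w0'))  # True: I holds in both w0 and w2
print(knows('Lois', 'I', 'w0'))      # False: I fails in w4
```

Superman knows his identity because I holds in every world he considers possible, while Lois does not, because she also considers worlds where I is false.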

3.9           REASONING SYSTEMS FOR CATEGORIES


This section describes systems specially designed for organizing and reasoning with categories. There are two closely related families of systems: semantic networks provide graphical aids for visualizing a knowledge base and efficient algorithms for inferring properties of an object on the basis of its category membership; and description logics provide a formal language for constructing and combining category definitions and efficient algorithms for deciding subset and superset relationships between categories.

3.10        SEMANTIC NETWORKS

 

There are many variants of semantic networks, but all are capable of representing individual objects, categories of objects, and relations among objects. A typical graphical notation displays object or category names in ovals or boxes, and connects them with labeled links. For example, Figure 3.16 has a MemberOf link between Mary and FemalePersons, corresponding to the logical assertion Mary ∈ FemalePersons; similarly, the SisterOf link between Mary and John corresponds to the assertion SisterOf(Mary, John). We can connect categories using SubsetOf links, and so on.

We know that persons have female persons as mothers, so can we draw a HasMother link from Persons to FemalePersons? The answer is no, because HasMother is a relation between a person and his or her mother, and categories do not have mothers. For this reason, we have used a special notation—the double-boxed link—in Figure 3.16. This link asserts that

∀x x ∈ Persons ⇒ [∀y HasMother(x, y) ⇒ y ∈ FemalePersons]

We might also want to assert that persons have two legs—that is,

∀x x ∈ Persons ⇒ Legs(x, 2)

The semantic network notation makes it convenient to perform inheritance reasoning. For example, by virtue of being a person, Mary inherits the property of having two legs. Thus, to find out how many legs Mary has, the inheritance algorithm follows the MemberOf link from Mary to the category she belongs to, and then follows SubsetOf links up the hierarchy until it finds a category for which there is a boxed Legs link—in this case, the Persons category.

 

 

Figure 3.16 A semantic network with four objects (John, Mary, 1, and 2) and four categories. Relations are denoted by labeled links.
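The inheritance algorithm described above (follow the MemberOf link, then SubsetOf links upward until a boxed value is found, with more specific facts taking precedence) can be sketched as follows; the graph encoding loosely mirrors Figure 3.16 and is an illustrative assumption.

```python
# Links of the semantic network, as plain dictionaries.
member_of = {'Mary': 'FemalePersons', 'John': 'MalePersons'}
subset_of = {'FemalePersons': 'Persons', 'MalePersons': 'Persons'}
boxed = {'Persons': {'Legs': 2}}     # boxed (default) values per category
local = {'John': {'Legs': 1}}        # object-specific facts override defaults

def lookup(obj, prop):
    """Inheritance: specific fact first, then climb the category hierarchy."""
    if prop in local.get(obj, {}):
        return local[obj][prop]      # more specific information wins
    cat = member_of.get(obj)
    while cat is not None:
        if prop in boxed.get(cat, {}):
            return boxed[cat][prop]  # boxed link found at this category
        cat = subset_of.get(cat)     # follow a SubsetOf link upward
    return None

print(lookup('Mary', 'Legs'), lookup('John', 'Legs'))  # 2 1
```

Mary inherits two legs from Persons, while John's more specific one-leg fact overrides the default, illustrating the default status of boxed links discussed below.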

 

Inheritance becomes complicated when an object can belong to more than one category or when a category can be a subset of more than one other category; this is called multiple inheritance. The drawback of semantic network notation, compared to first-order logic, is the fact that links between bubbles represent only binary relations. For example, the sentence Fly(Shankar, NewYork, NewDelhi, Yesterday) cannot be asserted directly in a semantic network. Nonetheless, we can obtain the effect of n-ary assertions by reifying the proposition itself as an event belonging to an appropriate event category. Figure 3.17 shows the semantic network structure for this particular event. Notice that the restriction to binary relations forces the creation of a rich ontology of reified concepts.

 

 

Figure 3.17 A fragment of a semantic network showing the representation of the logical assertion Fly(Shankar, NewYork, NewDelhi, Yesterday)

 

One of the most important aspects of semantic networks is their ability to represent default values for categories. Examining Figure 3.16 carefully, one notices that John has one leg, despite the fact that he is a person and all persons have two legs. In a strictly logical KB this would be a contradiction, but in a semantic network the assertion that all persons have two legs has only default status; that is, a person is assumed to have two legs unless this is contradicted by more specific information.


We also include the negated goal ¬Criminal(West). The resolution proof is shown in Figure 3.18.


 

Figure 3.18 A resolution proof that West is a criminal. At each step, the literals that unify are in bold.

 

Notice the structure: a single "spine" beginning with the goal clause, resolving against clauses from the knowledge base until the empty clause is generated. This is characteristic of resolution on Horn clause knowledge bases. In fact, the clauses along the main spine correspond exactly to the consecutive values of the goals variable in the backward-chaining algorithm. This is because we always choose to resolve with a clause whose positive literal unifies with the leftmost literal of the "current" clause on the spine; this is exactly what happens in backward chaining. Thus, backward chaining is just a special case of resolution with a particular control strategy to decide which resolution to perform next.

EXAMPLE 2

 

Our second example makes use of Skolemization and involves clauses that are not definite clauses. This results in a somewhat more complex proof structure. In English, the problem is as follows:

Everyone who loves all animals is loved by someone. Anyone who kills an animal is loved by no one.

Jack loves all animals.

Either Jack or Curiosity killed the cat, who is named Tuna.

Did Curiosity kill the cat? First, we express the original sentences, some background knowledge, and the negated goal G in first-order logic:

A.                 ∀x [∀y Animal(y) ⇒ Loves(x, y)] ⇒ [∃y Loves(y, x)]

B.                 ∀x [∃z Animal(z) ∧ Kills(x, z)] ⇒ [∀y ¬Loves(y, x)]

C.                 ∀x Animal(x) ⇒ Loves(Jack, x)

D.                 Kills(Jack, Tuna) ∨ Kills(Curiosity, Tuna)

E.                  Cat(Tuna)

F.                  ∀x Cat(x) ⇒ Animal(x)

¬G.     ¬Kills(Curiosity, Tuna)


Now we apply the conversion procedure to convert each sentence to CNF:

 

A1.      Animal(F(x)) ∨ Loves(G(x), x)

A2.      ¬Loves(x, F(x)) ∨ Loves(G(x), x)

B.             ¬Loves(y, x) ∨ ¬Animal(z) ∨ ¬Kills(x, z)

C.             ¬Animal(x) ∨ Loves(Jack, x)

D.             Kills(Jack, Tuna) ∨ Kills(Curiosity, Tuna)

E.             Cat(Tuna)

F.              ¬Cat(x) ∨ Animal(x)

¬G.     ¬Kills(Curiosity, Tuna)

 

The resolution proof that Curiosity kills the cat is given in Figure. In English, the proof could be paraphrased as follows:

Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus Jack must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because anyone who kills an animal is loved by no one, we know that no one loves Jack. On the other hand, Jack loves all animals, so someone loves him; so we have a contradiction. Therefore Curiosity killed the cat.

 

 

Figure 3.19 A resolution proof that Curiosity killed the cat. Notice the use of factoring in the derivation of the clause Loves(G(Jack), Jack). Notice also in the upper right that the unification of Loves(x, F(x)) and Loves(Jack, x) can only succeed after the variables have been standardized apart.

 

The proof answers the question "Did Curiosity kill the cat?", but often we want to pose more general questions, such as "Who killed the cat?" Resolution can do this, but it takes a little more work to obtain the answer. The goal is ∃w Kills(w, Tuna), which, when negated, becomes ¬Kills(w, Tuna) in CNF. Repeating the proof in Figure 3.19 with the new negated goal, we obtain a similar proof tree, but with the substitution {w/Curiosity} in one of the steps. So, in this case, finding out who killed the cat is just a matter of keeping track of the bindings for the query variables in the proof.

EXAMPLE 3

 

1.              All people who are graduating are happy.


2.              All happy people smile.

3.              Someone is graduating.

4.              Conclusion: Is someone smiling?

 

Solution

 

Convert the sentences into predicate Logic

1.              ∀x graduating(x) → happy(x)

2.              ∀x happy(x) → smile(x)

3.              ∃x graduating(x)

4.              ∃x smile(x)

 

Convert to clausal form

(i)                  Eliminate the → sign

1.                   ∀x ¬graduating(x) ∨ happy(x)

2.                   ∀x ¬happy(x) ∨ smile(x)

3.                   ∃x graduating(x)

4.                   ¬∃x smile(x)   (negated conclusion)

 

(ii)                Reduce the scope of negation

1.              ∀x ¬graduating(x) ∨ happy(x)

2.              ∀x ¬happy(x) ∨ smile(x)

3.              ∃x graduating(x)

4.              ∀x ¬smile(x)

 

(iii)              Standardize variables apart

1.              ∀x ¬graduating(x) ∨ happy(x)

2.              ∀y ¬happy(y) ∨ smile(y)

3.              ∃z graduating(z)

4.              ∀w ¬smile(w)

 

(iv)              Move all quantifiers to the left

1.              ∀x ¬graduating(x) ∨ happy(x)

2.              ∀y ¬happy(y) ∨ smile(y)

3.              ∃z graduating(z)

4.              ∀w ¬smile(w)

 

(v)                Eliminate ∃

1.              ∀x ¬graduating(x) ∨ happy(x)

2.              ∀y ¬happy(y) ∨ smile(y)

3.              graduating(name1)

4.              ∀w ¬smile(w)

 

(vi)              Eliminate ∀

1.              ¬graduating(x) ∨ happy(x)

2.              ¬happy(y) ∨ smile(y)

3.              graduating(name1)

4.              ¬smile(w)

 

(vii)            Convert to conjunct of disjuncts form

 

(viii)          Make each conjunct a separate clause.

 

(ix)              Standardize variables apart again.

 

 

Figure 3.20 Standardize variables

 

Thus, we proved someone is smiling.

 

EXAMPLE 4

 

Explain the unification algorithm used for reasoning under predicate logic with an example. Consider the following facts:

a.              Team India

b.              Team Australia

c.              Final match between India and Australia

d.              India scored 350 runs, Australia scored 350 runs, India lost 5 wickets, Australia lost 7 wickets.

e.              The team which scores the maximum runs wins the match.

f.               If the scores are the same, the team which lost the minimum wickets wins the match.

Represent the facts in predicate logic, convert to clause form, and prove by resolution that "India wins the match".


Solution

 

Convert into predicate logic

(a)                team(India)

(b)                team(Australia)

(c)                team(India) ∧ team(Australia) → final_match(India, Australia)

(d)                score(India, 350) ∧ score(Australia, 350) ∧ wicket(India, 5) ∧ wicket(Australia, 7)

(e)                ∃x team(x) ∧ wins(x) → score(x, max_runs)

(f)                 ∃x ∃y score(x, equal(y)) ∧ wicket(x, min_wicket) ∧ final_match(x, y) → win(x)

 

Convert to clausal form

(i)                  Eliminate the → sign

(a)                team(India)

(b)                team(Australia)

(c)                ¬(team(India) ∧ team(Australia)) ∨ final_match(India,Australia)

(d)                score(India,350) ∧ score(Australia,350) ∧ wicket(India,5) ∧ wicket(Australia,7)

(e)                ∀x ¬(team(x) ∧ wins(x)) ∨ score(x,max_runs)

(f)                 ∀x ∀y ¬(score(x,equal(y)) ∧ wicket(x,min_wicket) ∧ final_match(x,y)) ∨ win(x)

 

(ii)                Reduce the scope of negation

(a)                team(India)

(b)                team(Australia)

(c)                ¬team(India) ∨ ¬team(Australia) ∨ final_match(India, Australia)

(d)                score(India,350) ∧ score(Australia,350) ∧ wicket(India,5) ∧ wicket(Australia,7)

(e)                ∀x ¬team(x) ∨ ¬wins(x) ∨ score(x,max_runs)

(f)                 ∀x ∀y ¬score(x,equal(y)) ∨ ¬wicket(x,min_wicket) ∨ ¬final_match(x,y) ∨ win(x)

 

(iii)              Standardize variables apart

 

(iv)              Move all quantifiers to the left

 

(v)                Eliminate ∃

(a)                team(India)

(b)                team(Australia)

(c)                ¬team(India) ∨ ¬team(Australia) ∨ final_match(India,Australia)

(d)                score(India,350) ∧ score(Australia,350) ∧ wicket(India,5) ∧ wicket(Australia,7)

(e)                ¬team(x) ∨ ¬wins(x) ∨ score(x,max_runs)

(f)                 ¬score(x,equal(y)) ∨ ¬wicket(x,min_wicket) ∨ ¬final_match(x,y) ∨ win(x)

(vi)              Eliminate ∀


(vii)            Convert to conjunct of disjuncts form.

 

(viii)          Make each conjunct a separate clause.

(a)                team(India)

(b)                team(Australia)

(c)                ¬team(India) ∨ ¬team(Australia) ∨ final_match(India,Australia)

(d)                score(India,350)
                    score(Australia,350)
                    wicket(India,5)
                    wicket(Australia,7)

(e)                ¬team(x) ∨ ¬wins(x) ∨ score(x,max_runs)

(f)                 ¬score(x,equal(y)) ∨ ¬wicket(x,min_wicket) ∨ ¬final_match(x,y) ∨ win(x)

(ix)   Standardize variables apart again.

To prove: win(India). Add the negated goal ¬win(India) and derive the empty clause by resolution.

 

 

Figure 3.21 Standardize variables

 

Thus, we proved that India wins the match.

 

EXAMPLE 5

 

Problem 3

Consider the following facts and represent them in predicate form:

F1.            There are 500 employees in ABC company.

F2.            Employees earning more than Rs. 5000 pay tax.

F3.            John is a manager in ABC company.

F4.            A manager earns Rs. 10,000.


Convert the facts in predicate form to clauses and then prove by resolution: “John pays tax”.

 

Solution

 

Convert into predicate Logic

1.              company(ABC) ∧ employee(500,ABC)

2.              ∀x company(ABC) ∧ employee(x,ABC) ∧ earns(x,5000) → pays(x,tax)

3.              manager(John,ABC)

4.              ∀x manager(x,ABC) → earns(x,10000)

 

Convert to clausal form

 

(i)                  Eliminate the → sign

1.              company(ABC) ∧ employee(500,ABC)

2.              ∀x ¬(company(ABC) ∧ employee(x,ABC) ∧ earns(x,5000)) ∨ pays(x,tax)

3.              manager(John,ABC)

4.              ∀x ¬manager(x,ABC) ∨ earns(x,10000)

 

(ii)                Reduce the scope of negation

1.              company(ABC) ∧ employee(500,ABC)

2.              ∀x ¬company(ABC) ∨ ¬employee(x,ABC) ∨ ¬earns(x,5000) ∨ pays(x,tax)

3.              manager(John,ABC)

4.              ∀x ¬manager(x,ABC) ∨ earns(x,10000)

 

(iii)              Standardize variables apart

1.              company(ABC) ∧ employee(500,ABC)

2.              ∀x ¬company(ABC) ∨ ¬employee(x,ABC) ∨ ¬earns(x,5000) ∨ pays(x,tax)

3.              manager(John,ABC)

4.              ∀x ¬manager(x,ABC) ∨ earns(x,10000)

 

(iv)              Move all quantifiers to the left

 

(v)                Eliminate ∃

1.              company(ABC) ∧ employee(500,ABC)

2.              ¬company(ABC) ∨ ¬employee(x,ABC) ∨ ¬earns(x,5000) ∨ pays(x,tax)

3.              manager(John,ABC)

4.              ¬manager(x,ABC) ∨ earns(x,10000)

 

(vi)              Eliminate ∀

 

(vii)            Convert to conjunct of disjuncts form


(viii)          Make each conjunct a separate clause.

1.              (a) company(ABC)

(b) employee(500,ABC)

2.              ¬company(ABC) ∨ ¬employee(x,ABC) ∨ ¬earns(x,5000) ∨ pays(x,tax)

3.              manager(John,ABC)

4.              ¬manager(x,ABC) ∨ earns(x,10000)

 

(ix)   Standardize variables apart again.

To prove: pays(John,tax). Add the negated goal ¬pays(John,tax) and derive the empty clause by resolution.

 

 

Figure 3.22 Standardize variables

 

Thus, we proved that John pays tax.

 

 

EXAMPLE 6 & EXAMPLE 7

 

Problem 4

If a perfect square is divisible by a prime p, then it is also divisible by the square of p. Every perfect square is divisible by some prime.

36 is a perfect square.

 

Convert into predicate Logic

 

1.              ∀x ∀y perfect_sq(x) ∧ prime(y) ∧ divides(x,y) → divides(x,square(y))

2.              ∀x ∃y perfect_sq(x) → prime(y) ∧ divides(x,y)

3.              perfect_sq(36)

 

Problem 5

 

1.              Marcus was a man.
man(Marcus)

2.              Marcus was a Pompeian.
Pompeian(Marcus)

3.              All Pompeians were Romans.
∀x (Pompeian(x) → Roman(x))

4.              Caesar was a ruler.
ruler(Caesar)

5.              All Romans were either loyal to Caesar or hated him.
∀x (Roman(x) → loyalto(x,Caesar) ∨ hate(x,Caesar))

6.              Everyone is loyal to someone.
∀x ∃y (person(x) → person(y) ∧ loyalto(x,y))

7.              People only try to assassinate rulers they are not loyal to.
∀x ∀y (person(x) ∧ ruler(y) ∧ tryassassinate(x,y) → ¬loyalto(x,y))

8.              Marcus tried to assassinate Caesar.
tryassassinate(Marcus, Caesar)

9.              All men are persons.
∀x (man(x) → person(x))

 

Example

 

Trace the operation of the unification algorithm on each of the following pairs of literals:

 

i)                    f(Marcus) and f(Caesar)

ii)                  f(x) and f(g(y))

iii)                f(Marcus, g(x,y)) and f(x, g(Caesar, Marcus))

 

In propositional logic it is easy to determine that two literals cannot both be true at the same time: simply look for L and ¬L. In predicate logic this matching process is more complicated, since the bindings of variables must be considered.

For example, man(John) and ¬man(John) is a contradiction, while man(John) and ¬man(Himalayas) is not. Thus, in order to detect contradictions, we need a matching procedure that compares two literals and discovers whether there exists a set of substitutions that makes them identical. There is a recursive procedure that does this matching; it is called the unification algorithm.


In the unification algorithm each literal is represented as a list, whose first element is the name of a predicate and whose remaining elements are arguments. An argument may be a single element (atom) or may itself be a list. For example, we can have literals such as

(tryassassinate Marcus Caesar)
(tryassassinate Marcus (ruler of Rome))

To unify two literals, first check whether their first elements are the same. If so, proceed; otherwise they cannot be unified. For example, the literals

(tryassassinate Marcus Caesar)
(hate Marcus Caesar)

cannot be unified. The unification algorithm recursively matches pairs of elements, one pair at a time. The matching rules are:

i)                    Different constants, functions, or predicates cannot match, whereas identical ones can.

ii)                  A variable can match another variable, any constant, or a function or predicate expression, subject to the condition that the function or predicate expression must not contain any instance of the variable being matched (otherwise matching would lead to infinite recursion).

iii)                The substitution must be consistent. Substituting y for x now and then z for x later is inconsistent (a substitution of y for x is written y/x).

The unification algorithm is listed below as a procedure UNIFY(L1, L2). It returns a list representing the composition of the substitutions that were performed during the match. The empty list NIL indicates that a match was found without any substitutions. If the list contains the single value F, the unification procedure failed.

UNIFY(L1, L2)

1.  If L1 or L2 is an atom (or an empty list), then:

(a)   if L1 and L2 are identical, then return NIL

(b)   else if L1 is a variable, then:

(i) if L1 occurs in L2, then return F, else return (L2/L1)

(c)   else if L2 is a variable, then:

(i) if L2 occurs in L1, then return F, else return (L1/L2)

(d)   else return F.

2.  If length(L1) is not equal to length(L2), then return F.

3.  Set SUBST to NIL.

(At the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2.)

4.  For i = 1 to the number of elements in L1:

(i)  call UNIFY with the i-th element of L1 and the i-th element of L2, putting the result in S

(ii)  if S = F, then return F

(iii)  if S is not equal to NIL, then:

(A)   apply S to the remainder of both L1 and L2

(B)   set SUBST := APPEND(S, SUBST)

5.  Return SUBST.
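The procedure above can be rendered as a short Python sketch. This is an illustration of the same idea, not the textbook's exact code: literals are tuples such as ("f", "Marcus"), lowercase single atoms such as "x" and "y" stand for variables, capitalized atoms for constants, and the substitution is returned as a dict rather than a (L2/L1) list.

```python
def is_variable(t):
    # Convention for this sketch: lowercase non-tuple atoms are variables.
    return isinstance(t, str) and t[0].islower()

def occurs(var, term):
    """Occurs check: does var appear anywhere inside term?"""
    if term == var:
        return True
    return isinstance(term, tuple) and any(occurs(var, a) for a in term)

def substitute(subst, term):
    """Apply a substitution dict to a term."""
    if isinstance(term, tuple):
        return tuple(substitute(subst, a) for a in term)
    return subst.get(term, term)

def unify(l1, l2, subst=None):
    """Return a substitution dict unifying l1 and l2, or None on failure."""
    if subst is None:
        subst = {}
    l1, l2 = substitute(subst, l1), substitute(subst, l2)
    if l1 == l2:
        return subst
    if is_variable(l1):
        return None if occurs(l1, l2) else {**subst, l1: l2}
    if is_variable(l2):
        return None if occurs(l2, l1) else {**subst, l2: l1}
    if isinstance(l1, tuple) and isinstance(l2, tuple) and len(l1) == len(l2):
        # Match element by element (predicate symbols match positionally here).
        for a, b in zip(l1, l2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # different constants/predicates cannot match

# The three trace examples from the text:
print(unify(("f", "Marcus"), ("f", "Caesar")))   # None: constants clash
print(unify(("f", "x"), ("f", ("g", "y"))))      # {'x': ('g', 'y')}
print(unify(("f", "Marcus", ("g", "x", "y")),
            ("f", "x", ("g", "Caesar", "Marcus"))))  # None: Marcus vs Caesar
```

The third pair fails exactly as the rules predict: x is first bound to Marcus, and applying that binding makes g(Marcus, y) clash with g(Caesar, Marcus).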

Consider a knowledge base containing just two sentences: P(a) and P(b). Does this knowledge base entail ∀x P(x)? Explain your answer in terms of models.

The knowledge base does not entail ∀x P(x). To show this, we must give a model where P(a) and P(b) hold but ∀x P(x) is false. Consider any model with three domain elements, where a and b refer to the first two elements and the relation referred to by P holds only for those two elements.

Is the sentence ∃x, y x = y valid? Explain.

The sentence ∃x, y x = y is valid. A sentence is valid if it is true in every model. An existentially quantified sentence is true in a model if it holds under some extended interpretation in which its variables are assigned to domain elements. According to the standard semantics of FOL as given in the chapter, every model contains at least one domain element; hence, for any model, there is an extended interpretation in which x and y are assigned to the first domain element. In such an interpretation, x = y is true.
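The countermodel argument can be checked mechanically. The sketch below is purely illustrative (the variable names are my own): it enumerates every interpretation of a, b, and P over a three-element domain and confirms that some model satisfies P(a) and P(b) while falsifying ∀x P(x).

```python
from itertools import product

domain = [0, 1, 2]
found_countermodel = False

# Enumerate every interpretation of the constants a and b, and every
# possible extension of the unary relation P over the domain.
for a_ref, b_ref in product(domain, repeat=2):
    for p_ext in product([False, True], repeat=len(domain)):
        kb_true = p_ext[a_ref] and p_ext[b_ref]   # P(a) and P(b) hold
        forall_p = all(p_ext)                     # forall x P(x) holds
        if kb_true and not forall_p:
            found_countermodel = True

print(found_countermodel)   # True: the KB does not entail forall x P(x)
```

One such countermodel is exactly the one described above: a and b name the first two elements, P holds for them, and P fails on the third element.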

What is the ontological commitment (what exists in the world) of first-order logic? Represent the sentence “Brothers are siblings” in first-order logic.

Ontological commitment means the assumptions a language makes about the nature of reality. The representation of “Brothers are siblings” in first-order logic is ∀x, y [Brother(x, y) ⇒ Sibling(x, y)].

Differentiate between propositional logic and first-order predicate logic.

The following are the comparative differences between propositional logic and first-order logic:

 

1)                  Propositional logic is less expressive and does not reflect an individual object's properties explicitly. First-order logic is more expressive and can represent an individual object along with all its properties.

2)                  Propositional logic cannot represent relationships among objects, whereas first-order logic can.

3)                  Propositional logic does not consider generalization over objects, whereas first-order logic handles generalization.

4)                  Propositional logic includes sentence letters (A, B, C) and logical connectives, but no quantifiers. First-order logic has the same connectives as propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions, and symbols for relations.

Represent the following sentence in predicate form: “All the children like sweets”

∀x ∀y child(x) ∧ sweet(y) → likes(x, y)

 

Illustrate the use of first order logic to represent knowledge.

The best way to see first-order logic in use is through examples drawn from simple domains. In knowledge representation, a domain is just some part of the world about which we wish to express some knowledge.

Assertions and queries in first-order logic. Sentences are added to a knowledge base using TELL, exactly as in propositional logic; such sentences are called assertions. For example, we can assert that John is a king and that kings are persons (where KB is the knowledge base):

TELL(KB, King(John))
TELL(KB, ∀x King(x) ⇒ Person(x))

We can ask questions of the knowledge base using ASK; questions asked with ASK are called queries or goals. For example, ASK(KB, Person(John)) returns true, and ASK(KB, ∃x Person(x)) asks whether some person exists.

The kinship domain. The first example we consider is the domain of family relationships, or kinship. This domain includes facts such as "Elizabeth is the mother of Charles" and "Charles is the father of William", and rules such as "One's grandmother is the mother of one's parent." Clearly, the objects in our domain are people. We will have two unary predicates, Male and Female. Kinship relations (parenthood, brotherhood, marriage, and so on) will be represented by binary predicates: Parent, Sibling, Brother, Sister, Child, Daughter, Son, Spouse, Husband, Grandparent, Grandchild, Cousin, Aunt, and Uncle. We will use functions for Mother and Father.
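A toy version of TELL and ASK makes this concrete. The sketch below is purely illustrative (the KB class and its single hard-coded rule are my own assumptions, not a real library): TELL adds a fact and immediately forward-chains the rule King(x) ⇒ Person(x), and ASK simply checks membership in the fact set.

```python
class KB:
    """A toy knowledge base with one built-in rule: King(x) => Person(x)."""

    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        # A fact is a (predicate, argument) tuple, e.g. ("King", "John").
        self.facts.add(fact)
        pred, arg = fact
        if pred == "King":              # forward-chain the single rule
            self.facts.add(("Person", arg))

    def ask(self, query):
        return query in self.facts

kb = KB()
kb.tell(("King", "John"))
print(kb.ask(("Person", "John")))   # True, derived via King(x) => Person(x)
```

A full first-order system would, of course, store arbitrary quantified rules and answer ASK by inference rather than by set lookup; this sketch only shows the TELL/ASK interface shape.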

What are the characteristics of multi-agent systems?

1)                  Each agent has only incomplete information and is restricted in its capabilities.

2)                  System control is distributed.

3)                  Data is decentralized.

4)                  Computation is asynchronous.

5)                  Multi-agent environments are typically open and have no centralized designer.

6)                  Multi-agent environments provide an infrastructure specifying communication and interaction protocols.

7)                  Agents are autonomous and distributed, and may be self-interested or cooperative.
