Knowledge-based agents:
-Form representations of a complex world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do
-Humans seem to know things, and what they know helps them do things; in AI, knowledge-based agents use a process of reasoning over an internal representation of knowledge to decide what actions to take
-Problem-solving agents know things, but only in a very limited, inflexible sense: they know what actions are available and what the result of performing a specific action from a specific state will be, but they don't know general facts
-A route-finding agent does not know that it is impossible for a road to be a negative number of kilometers long; the knowledge it has is very useful for finding a path from the start to a goal, but not for anything else
-Knowledge-based agents can accept new tasks in the form of explicitly described goals; they can achieve competence quickly by being told or by learning new knowledge about the environment; and they can adapt to changes in the environment by updating the relevant knowledge
-Logic is a general class of representations to support knowledge-based agents, who can combine and recombine information to suit myriad purposes
-The central component of a knowledge-based agent is its knowledge base, or KB: a set of sentences (not identical to the sentences of English and other natural languages), each expressed in a language called a knowledge representation language
-There must be a way to add new sentences to the KB and a way to query what is known; the standard operations are TELL and ASK, both of which may involve inference - that is, deriving new sentences from old
function KB-AGENT(percept) returns an action
persistent: KB, a knowledge base
t, a counter, initially 0, indicating time
TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
action <- ASK(KB, MAKE-ACTION-QUERY(t))
TELL(KB, MAKE-ACTION-SENTENCE(action, t))
t <- t+1
return action
-Each time the agent program is called, it does three things:
1. It TELLs the knowledge base what it perceives
2. It ASKs the knowledge base what action it should perform; in the process of answering the query, extensive reasoning may be done about the current state of the world, about the outcomes of possible action sequences, and so on
3. It TELLS the knowledge base which action was chosen, and returns the action so that it can be executed
-MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived the given percept at the given time
-MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at the current time
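The KB-AGENT loop above can be sketched in Python. This is a minimal illustration, not the book's implementation: the knowledge base is a bare set of sentence strings, ASK is a stub lookup where a real agent would run inference, and all names and the breeze/grab rule are made up for the example.

```python
# Minimal sketch of the KB-AGENT loop. The KB is a bare set of sentence
# strings; tell() adds a sentence, ask() is a stand-in for real inference.

class KnowledgeBase:
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        # A real agent would run inference here; this stub just does a lookup.
        return query in self.sentences

def kb_agent(kb, percept, t):
    kb.tell(f"Percept({percept}, {t})")   # 1. TELL the KB what it perceives
    # 2. ASK the KB what to do (a made-up rule stands in for real reasoning)
    action = "Forward" if kb.ask(f"Percept(Breeze, {t})") else "Grab"
    kb.tell(f"Action({action}, {t})")     # 3. TELL the KB which action was chosen
    return action

kb = KnowledgeBase()
print(kb_agent(kb, "Breeze", 0))  # -> Forward
```

The point of the structure is that the agent program never manipulates percepts directly; everything flows through TELL and ASK on the knowledge base.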
>A knowledge-based agent can be built simply by TELLing it what it needs to know: starting with an empty knowledge base, the agent designer can TELL sentences one by one until the agent knows how to operate in its environment; this is known as the declarative approach to system building
>In contrast, the procedural approach encodes desired behaviors directly as program code; a successful agent often combines both declarative and procedural elements in its design
Logic:
-Knowledge bases consist of sentences, where a logic defines the semantics, or meaning, of the sentences; in standard logics, every sentence must be either true or false in each possible world - there is no in between
(that is, a sentence has to be either true or false, nothing in between)
-When there is a need to be precise, the term model is used in place of possible world; whereas possible worlds might be thought of as real environments that the agent might or might not be in, models are mathematical abstractions, each of which has a fixed truth value for every relevant sentence
-If a sentence a is true in model m, we say that m satisfies a, or sometimes that m is a model of a; M(a) denotes the set of all models of a
Logic reasoning:
-Logical entailment is the idea that a sentence follows logically from another sentence; we write a |= b to mean that the sentence a entails the sentence b; formally, a |= b if and only if, in every model in which a is true, b is also true; equivalently, we can write
a |= b if and only if M(a) is a subset of M(b)
-The sentence x = 0 entails the sentence xy = 0: in any model where x is zero, xy is zero, regardless of the value of y
-Think of the set of all consequences of KB as a haystack and of a as a needle: entailment is like the needle being in the haystack; inference is like finding it; we write KB |-i a if an inference algorithm i can derive a from KB
1. An inference algorithm that derives only entailed sentences is called sound or truth-preserving; an unsound procedure essentially makes things up as it goes along - it announces the discovery of nonexistent needles
2. The property of completeness is desirable: an inference algorithm is complete if it can derive any sentence that is entailed; for a finite KB, a systematic examination can always decide whether the needle is in the haystack, but for an infinite haystack of consequences, completeness becomes an issue
[so what we need is a representation language with a semantics, plus an inference algorithm]
Propositional Logic:
-The syntax of propositional logic defines the allowable sentences; atomic sentences consist of a single proposition symbol, each of which stands for a proposition that can be true or false
-Complex sentences are constructed from simpler sentences, using parentheses and operators called logical connectives
-[not, and, or, implies, if and only if]
-Think of P->Q as 'if P is true, then I am claiming that Q is true; otherwise I am making no claim', the only way for this sentence to be false is if P is true but Q is false
Simple inference procedure:
-An inference procedure decides whether KB |= a for some sentence a; simple model checking carries out logical inference by enumerating all possible models to check that a is true in every model in which KB is true, that is, M(KB) is a subset of M(a); this works if the space of models is finite, and is a direct implementation of the definition of entailment: enumerate the models
-[the KB is a set of sentences; assign true or false to each of the n symbols, giving 2^n combinations, and check in each assignment whether the KB is true - the KB is true when all of its sentences are true]
-[then, in each model where the KB is true, check whether the query sentence over the same symbols is also true]
-Performs a recursive enumeration of a finite space of assignments to symbols; it is sound because it directly implements the definition of entailment, and complete because it works for any KB and a and always terminates - there are only finitely many models to examine
-If KB and a contain n symbols in all, then there are 2^n models; the time complexity of the algorithm is O(2^n) and the space complexity is O(n), since the enumeration is depth-first; propositional entailment is co-NP-complete
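The enumeration above can be sketched directly. This is a minimal, assumption-laden version: sentences are represented as Python functions over a model dictionary (a convenience, not the textbook's data structure), and the example KB is made up.

```python
# Entailment by model checking: enumerate all 2^n assignments to the symbols
# and verify that alpha holds in every model where the KB holds.

from itertools import product

def tt_entails(symbols, kb, alpha):
    """True iff alpha is true in every model in which kb is true.
    O(2^n) time; each model is a single dict, so O(n) space at a time."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False            # found a model of KB that is not a model of alpha
    return True

# Example: KB = (P -> Q) and P; query alpha = Q.
kb    = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
print(tt_entails(["P", "Q"], kb, alpha))  # -> True
```

Returning False early as soon as one counter-model is found is what makes this a direct implementation of the definition of entailment.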
-Entailment can also be established by theorem proving - applying rules of inference directly to the sentences in our knowledge base to construct a proof of the desired sentence without consulting models
-Principles (concepts)
1. Logical equivalence:
-Two sentences a and b are logically equivalent if they are true in the same set of models; the symbol ≡ is used to make claims about sentences, while <-> is used as part of a sentence
-Any two sentences a and b are equivalent if and only if each of them entails the other:
a ≡ b if and only if a |= b and b |= a
2. Validity:
-A sentence is valid if it is true in all models; valid sentences are also known as tautologies - they are necessarily true
-From the definition of entailment:
| For any sentences a and b, (a |= b) if and only if the sentence a -> b is valid
3. Satisfiability:
-A sentence is satisfiable if it is true in, or satisfied by, some model; this can be checked by enumerating the possible models until one is found that satisfies the sentence (the satisfiability problem, SAT, is NP-complete)
-Validity and satisfiability are connected: a is valid iff ~a is unsatisfiable; conversely, a is satisfiable iff ~a is not valid; also,
a |= b if and only if the sentence (a and ~b) is unsatisfiable
-Proving b from a by checking the unsatisfiability of (a and ~b) corresponds exactly to the standard mathematical proof technique of reductio ad absurdum, also called proof by contradiction
[relations among entailment, validity, and satisfiability]
[what follow now are actual inference procedures, like model checking]
Inference and proofs:
-Inference rules can be applied to derive a proof - a chain of conclusions that leads to the desired goal
1. Modus ponens: from a -> b and a, infer b
2. Modus tollens: from a -> b and ~b, infer ~a
3. Hypothetical syllogism: from a -> b and b -> c, infer a -> c
4. Disjunctive syllogism: from a or b and ~a, infer b
5. Addition: from a, infer a or b
6. Simplification: from a and b, infer a
7. Conjunction: from a and from b, infer a and b
[essentially all of the standard logical equivalences can also serve as inference rules]
x. Logical equivalences
[sense, distributivity, implication elimination, contraposition, De Morgan's laws]
[what about completeness?]
Resolution:
-Resolution is a single inference rule that yields a complete inference algorithm when coupled with any complete search algorithm
[the usual way of writing it:
p or q
~p or r
Hence, q or r
]
-Unit resolution:
l1 or ... or lk,    m
l1 or ... or li-1 or li+1 or ... or lk
where each l is a literal (positive or negative) and li and m are complementary literals (i.e., one is the negation of the other); thus the unit resolution rule takes a clause - a disjunction of literals - and a literal, and produces a new clause
-Can be generalized to the full resolution rule:
l1 or ... or lk, m1 or ... or mn
l1 or ... or li-1 or li+1 or ... or lk or m1 or ... or mj-1 or mj+1 or ... or mn
where li and mj are complementary literals; this says that resolution takes two clauses and produces a new clause containing all the literals of the two original clauses except the two complementary literals
-The resulting clause should contain only one copy of each literal (sometimes two pairs of literals could be cancelled, but resolution cancels only one pair at a time); the removal of multiple copies of a literal is called factoring
-A resolution-based theorem prover can, for any sentences a and b in propositional logic, decide whether a |= b
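The full resolution rule can be sketched as follows. The representation is an assumption made for this sketch: a clause is a frozenset of literal strings, with a leading "~" marking negation; using a set automatically performs the factoring mentioned above.

```python
# Sketch of the full resolution rule with factoring. Clauses are frozensets
# of string literals; "~" marks a negative literal (a convention chosen here).

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of clauses c1 and c2, cancelling one
    complementary pair at a time."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            # Union of the remaining literals; the frozenset union
            # also removes duplicate copies (factoring).
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# (P or Q) with (~P or R) resolves to (Q or R).
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```

Note that resolving {P} with {~P} yields the empty frozenset - the empty clause discussed below in the resolution algorithm.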
CNF:
-The resolution rule applies only to clauses (i.e., disjunctions of literals), so it is relevant only to knowledge bases consisting of clauses
-This nevertheless leads to complete inference for all of propositional logic, since every sentence of propositional logic is logically equivalent to a conjunction of clauses
-A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form, or CNF
-Procedure for CNF:
1. Eliminate <->, replacing a <-> b with (a -> b) and (b -> a)
2. Eliminate ->, replacing a -> b with ~a or b
3. Move ~ inwards by repeated application of De Morgan's laws
4. Distribute or over and wherever possible
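The four steps above can be sketched as a recursive transformation. The tuple-based syntax tree - ("iff", a, b), ("implies", a, b), ("not", a), ("and", a, b), ("or", a, b), or a bare symbol string - is a representation invented for this sketch, not from the text.

```python
# Illustrative CNF conversion over a tiny tuple-based syntax tree.

def to_cnf(s):
    if isinstance(s, str):
        return s
    op = s[0]
    if op == "iff":                      # 1. a <-> b  ==>  (a -> b) and (b -> a)
        a, b = s[1], s[2]
        return to_cnf(("and", ("implies", a, b), ("implies", b, a)))
    if op == "implies":                  # 2. a -> b   ==>  ~a or b
        return to_cnf(("or", ("not", s[1]), s[2]))
    if op == "not":                      # 3. push negation inwards (De Morgan)
        a = s[1]
        if isinstance(a, str):
            return s
        if a[0] == "not":                # double negation: ~~a ==> a
            return to_cnf(a[1])
        if a[0] == "and":
            return to_cnf(("or", ("not", a[1]), ("not", a[2])))
        if a[0] == "or":
            return to_cnf(("and", ("not", a[1]), ("not", a[2])))
        return to_cnf(("not", to_cnf(a)))  # rewrite the inner connective first
    if op == "and":
        return ("and", to_cnf(s[1]), to_cnf(s[2]))
    if op == "or":                       # 4. distribute or over and
        a, b = to_cnf(s[1]), to_cnf(s[2])
        if isinstance(a, tuple) and a[0] == "and":
            return ("and", to_cnf(("or", a[1], b)), to_cnf(("or", a[2], b)))
        if isinstance(b, tuple) and b[0] == "and":
            return ("and", to_cnf(("or", a, b[1])), to_cnf(("or", a, b[2])))
        return ("or", a, b)

# P -> (Q and R) becomes (~P or Q) and (~P or R):
print(to_cnf(("implies", "P", ("and", "Q", "R"))))
```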
Resolution algorithm:
-Inference procedures based on resolution work by using the principle of proof by contradiction, i.e., to show KB |= a, we show that (KB and ~a) is unsatisfiable
-First (KB and ~a) is converted into CNF; then the resolution rule is applied to the resulting clauses: each pair that contains complementary literals is resolved to produce a new clause, which is added to the set if it is not already present; the process continues until one of two things happens:
1. There are no new clauses that can be added, in which case KB does not entail a; or,
2. Two clauses resolve to yield the empty clause, in which case KB entails a
-The empty clause - a disjunction of no disjuncts - is equivalent to False, because a disjunction is true only if at least one of its disjuncts is true
[proof of completeness?]
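The two-outcome loop described above can be sketched end to end. As before, the frozenset-of-literals clause representation with "~" for negation is an assumption made for this sketch, and the example KB is made up.

```python
# Sketch of the resolution procedure: to show KB |= alpha, add the clauses of
# ~alpha and resolve until the empty clause appears (entailed) or no new
# clauses can be added (not entailed).

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def pl_resolution(kb_clauses, neg_alpha_clauses):
    clauses = set(kb_clauses) | set(neg_alpha_clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for r in resolvents(c1, c2):
                    if not r:            # 2. empty clause: contradiction, so KB |= alpha
                        return True
                    new.add(r)
        if new <= clauses:               # 1. no new clauses: KB does not entail alpha
            return False
        clauses |= new

# KB: (P or Q), (~P or Q); query alpha = Q, so we add the clause ~Q.
kb = [frozenset({"P", "Q"}), frozenset({"~P", "Q"})]
print(pl_resolution(kb, [frozenset({"~Q"})]))  # -> True
```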
Horn clauses:
-Some real-world knowledge bases satisfy certain restrictions on the form of sentences they contain, which enables them to use a more restricted and efficient inference algorithm
1. Definite clause:
-A disjunction of literals of which exactly one is positive
2. Horn clause:
-Slightly more general: a disjunction of literals of which at most one is positive
-All definite clauses are Horn clauses
-If you resolve two Horn clauses, you get back a Horn clause
>Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal, which is easier to understand; a sentence consisting of a single positive literal can likewise be written in implication form as True -> L
>Inference with Horn clauses can be done through forward-chaining and backward-chaining algorithms, both of which are natural in that the inference steps are obvious and easy for humans to follow; this is the basis for logic programming
>Deciding entailment with Horn clauses can be done in time that is linear in the size of the knowledge base - a pleasant surprise
Forward and backward chaining:
Forward:
-Determines if a single proposition symbol q - the query - is entailed by a knowledge base of definite clauses
-Begins from known facts (positive literals) in the knowledge base; if all the premises of an implication are known, then its conclusion is added to the set of known facts
(as noted earlier, definite clauses can be written in this convenient implication form)
-The process continues until the query q is added or until no further inferences can be made
-[memorize one good worked example]
-The knowledge base can be drawn as an AND-OR graph, where multiple edges joined by an arc indicate a conjunction, while multiple edges without an arc indicate a disjunction
-The known leaves are set, and inference propagates up the graph as far as possible; wherever a conjunction appears, the propagation waits until all the conjuncts are known before proceeding
Performance:
1. It is sound: every inference is essentially an application of Modus Ponens
2. It is complete: every entailed atomic sentence will be derived
-Consider the final state of the inferred table after the algorithm reaches a fixed point where no new inferences are possible: the table contains true for each symbol inferred during the process, and false for all other symbols
-Forward chaining is an example of the general concept of data-driven reasoning - that is, reasoning in which the focus of attention starts with the known data
-It can be used within an agent to derive conclusions from incoming percepts, often without a specific query in mind
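The forward-chaining procedure can be sketched with a premise counter per rule, so each rule fires exactly when its last premise becomes known. The (premises, conclusion) rule format and the example rules are assumptions made for this sketch.

```python
# Sketch of forward chaining over definite clauses. Each rule is a pair
# (set_of_premises, conclusion); facts are the initially known positive literals.

from collections import deque

def fc_entails(rules, facts, query):
    """Data-driven inference: a rule's conclusion is added once all its
    premises are known; stop when the query appears or the agenda empties."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue                     # each symbol is processed only once
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:        # all premises known: fire the rule
                    agenda.append(concl)
    return False

# Rules: P and Q -> R, R -> S; known facts: P, Q; query: S.
rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(fc_entails(rules, ["P", "Q"], "S"))  # -> True
```

The per-rule counters are what make this linear in the size of the knowledge base, matching the claim above.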
Backward:
-The backward-chaining algorithm works backward from the query; if the query q is known to be true, then no work is needed
-Otherwise, the algorithm finds those implications in the KB whose conclusion is q; if all the premises of one of those implications can be proved by backward chaining recursively, then q is true
-When applied to a query q, it works back down the graph until it reaches a set of known facts that forms the basis for a proof
-Is a form of goal-directed reasoning, useful for answering specific questions
-Often, the cost of backward chaining is much less than linear in the size of the knowledge base, because the process touches only relevant facts.
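The recursive goal-directed procedure can be sketched as follows; the in_progress set guards against looping on circular rules. The (premises, conclusion) rule format is an assumption chosen for this sketch.

```python
# Sketch of backward chaining: to prove q, find rules concluding q and
# recursively prove every premise.

def bc_entails(rules, facts, query, in_progress=None):
    in_progress = in_progress or set()
    if query in facts:                   # known fact: nothing to prove
        return True
    if query in in_progress:             # avoid infinite regress on cycles
        return False
    for prem, concl in rules:
        if concl == query and all(
            bc_entails(rules, facts, p, in_progress | {query}) for p in prem
        ):
            return True                  # one rule with all premises proved suffices
    return False

# Rules: P and Q -> R, R -> S; known facts: P, Q; query: S.
rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(bc_entails(rules, {"P", "Q"}, "S"))  # -> True
```

Unlike forward chaining, this touches only rules whose conclusions are relevant to the goal, which is why its cost is often much less than linear in the size of the knowledge base.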
......
First-Order Logic:
-...
-Propositional logic assumes that there are facts that either hold or do not hold in the world; each fact can be in one of two states - true or false - and each model assigns true or false to each proposition symbol
-FOPL assumes more; namely that the world consists of objects with certain relations among them that do or do not hold
-A relation is just the set of tuples of objects that are related
-The crown is on King John's head so the 'on head' relation contains just one tuple, <the crown, King John>
-There can also be properties, i.e., unary relations: for example, King John has the property 'person'
-Certain kinds of relationships are best considered as functions, in that a given object must be related to exactly one object in this way; for example, each person has one left leg, so the model has a unary 'left leg' function - a mapping from a one-element tuple to an object - that includes the following mappings:
<Richard> -> Richard's left leg
<King John> -> King John's left leg
-Moving straight to inference:
>To perform inference we could convert the FOPL sentences to propositional logic, where the first step is to eliminate universal quantifiers by generating an instantiated sentence for each object in the KB
>Then eliminate existential quantifiers by substituting constant symbols that do not already appear in the KB; the more general version of this process is called Skolemization
>The problem here is that universal quantifiers can generate infinitely many instantiated sentences when the KB contains function symbols
>Still, if a sentence is entailed, there is a proof using only a finite subset of the instantiations (Herbrand's theorem), so propositionalization works in principle - just very inefficiently
---Now, a better rule:
>Instead of all that work, if there is some substitution theta that makes each of the conjuncts in the premise of an implication identical to sentences already in the KB, then we can assert the conclusion of the implication after applying theta
>Unification is the process of making logical expressions look identical
-Lifted inference rules require finding substitutions that make different logical expressions look identical; this process is called unification and is a key component
-The UNIFY algorithm takes two sentences and returns a unifier for them (a substitution) if one exists
-UNIFY(p,q) = theta where SUBST(theta, p) = SUBST(theta, q)
-For example, suppose we want to know all the objects that John knows, i.e.,
find x such that Knows(John, x)
-It could be done by unifying:
UNIFY(Knows(John, x), Knows(John, Jane))
UNIFY(Knows(John, x), Knows(y, Bill)) and so on
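A minimal UNIFY can be sketched on a tiny term language: lowercase strings are variables, capitalized strings are constants, and compound sentences are tuples like ("Knows", "John", "x"). These conventions, and the omission of the occurs-check, are simplifying assumptions for this sketch.

```python
# Sketch of UNIFY: find a substitution theta that makes two expressions
# identical, or return None if no unifier exists.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta=None):
    """Return a substitution (dict) unifying x and y, or None on failure."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):                        # bind x, or follow its existing binding
        return unify(theta[x], y, theta) if x in theta else {**theta, x: y}
    if is_var(y):
        return unify(x, theta[y], theta) if y in theta else {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):         # unify argument lists element by element
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None                          # mismatched constants or arities

print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))  # -> {'x': 'Jane'}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))
```

The second call succeeds with {y: John, x: Bill}, mirroring the UNIFY examples above.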
1. The first step is to convert the sentences to CNF; the key fact is that every sentence of first-order logic can be converted into an inferentially equivalent CNF sentence
2. The procedure is the same as in the propositional case, but differences arise from the need to eliminate the existential quantifiers
Example:
>Everyone who loves all animals is loved by someone:
>For all x [for all y Animal(y) -> Loves(x,y)] -> [there exists y Loves(y,x)]
A. Eliminate implications: in the same way as before
B. Move negation inwards
becomes
>Either there is some animal that x does not love, or (if that is not the case) someone loves x
>Standardize variables: variables are dummies, so the same name may be reused across quantifiers; rename them so that each quantifier uses a distinct variable
>SKOLEMIZE:
Skolemization is the process of removing existential quantifiers by replacing each existentially quantified variable with a Skolem constant or function
-That is, an existential quantifier inside the scope of a universal quantifier cannot simply be dropped
-Given for all x [there exists y Loves(x,y)], we cannot just remove the y part
-Instead we write for all x [Loves(x, F(x))], replacing y with a Skolem function of x
-Here F is a Skolem function; the general rule is that the arguments of a Skolem function are all the universally quantified variables in whose scope the existential quantifier appears
>Drop the universal quantifiers
>Distribute or over and:
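Carrying the steps above through on the animals sentence gives the following worked sketch (the standard conversion for this example):

```latex
% Everyone who loves all animals is loved by someone:
\forall x\,[\forall y\,\mathit{Animal}(y)\Rightarrow \mathit{Loves}(x,y)]\Rightarrow[\exists y\,\mathit{Loves}(y,x)]
% A. Eliminate implications:
\forall x\,[\neg\forall y\,(\neg\mathit{Animal}(y)\lor \mathit{Loves}(x,y))]\lor[\exists y\,\mathit{Loves}(y,x)]
% B. Move negation inwards (\neg\forall y\,p becomes \exists y\,\neg p):
\forall x\,[\exists y\,(\mathit{Animal}(y)\land\neg\mathit{Loves}(x,y))]\lor[\exists y\,\mathit{Loves}(y,x)]
% Standardize variables apart:
\forall x\,[\exists y\,(\mathit{Animal}(y)\land\neg\mathit{Loves}(x,y))]\lor[\exists z\,\mathit{Loves}(z,x)]
% Skolemize (y \mapsto F(x), z \mapsto G(x)):
\forall x\,[\mathit{Animal}(F(x))\land\neg\mathit{Loves}(x,F(x))]\lor \mathit{Loves}(G(x),x)
% Drop universal quantifiers and distribute \lor over \land:
[\mathit{Animal}(F(x))\lor \mathit{Loves}(G(x),x)]\land[\neg\mathit{Loves}(x,F(x))\lor \mathit{Loves}(G(x),x)]
```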
Fuzzy Logic:
-Fuzzy logic is a mathematical framework that allows reasoning with uncertain or imprecise information; it is a form of multi-valued logic that allows values between true and false, represented as degrees of membership in a fuzzy set
-Fuzzy sets admit a certain amount of vagueness to reduce the complexity of comprehension; a fuzzy set consists of elements that signify a degree or grade of membership in some fuzzy aspect
-Three example sets: child, young, old; each is now a fuzzy set, and people of different ages have some degree or grade of membership in each
-Table: |AGE | CHILD | YOUNG | OLD|
-Operations:
1. Union: the maximum of the two memberships, u_{A union B} = max(u_A, u_B); similarly, intersection is the minimum, u_{A intersect B} = min(u_A, u_B)
Advantages:
1. Ability to handle uncertainty: can handle uncertain, vague, or incomplete information, which is common in many real-life settings
2. Flexibility: provides a flexible framework for decision making by allowing degrees of truth or membership in a fuzzy set
3. Robustness: robust in the face of variability in input data, whereas crisp logic is sensitive to it
4. Natural language processing: can interpret natural-language terms, which are usually vague or imprecise
5. Intuitive reasoning: supports human-like reasoning
6. Cost-effective solutions: reduces complexity
-Steps, with the example of a water cooler where you need to control the temperature of the room by controlling the water flow rate, given the temperature and the speed of the fan - that is, fan speed and temperature are given, and we need to adjust the flow rate
1. Fuzzification:
-When real data is available we define profiles for each of these parameters by assigning memberships to their respective values
-For example, we may say that when the temperature is 25 degrees, its membership in the fuzzy set 'moderate' is 1 (100%), but as we drift away from 25 degrees, its membership in this set decreases while its membership in the set 'warm' starts to increase
-When the sensors report the values of temperature and fan speed, they are mapped, based on their memberships, to the respective fuzzy regions they belong to
2. Fuzzy rules and engine:
-The fuzzy rules form the triggers of the fuzzy engine; after a study of the system, we can write linguistic rules akin to natural language, such as:
1. If temperature is HOT and fan speed is LOW then flow rate is POSITIVE
2. If temperature is COOL and fan speed is LOW then flow rate is NEGATIVE
3. Defuzzification:
-The fuzzy outputs LOW-POSITIVE, POSITIVE, and HIGH-POSITIVE are to be converted to a single crisp value, which can then be delivered to the final actuator of the pump
-[show the MIN-based example]
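One common way to collapse the fuzzy outputs to a crisp value is a centroid-style weighted average. This is a sketch under assumptions: the representative flow-rate value for each output set and the membership values are made up for illustration, and real controllers often integrate over full membership curves instead.

```python
# Sketch of centroid-style defuzzification: the crisp output is the
# membership-weighted average of each output set's representative value.

def defuzzify_centroid(memberships, representatives):
    """Weighted average: sum(mu_i * v_i) / sum(mu_i); 0.0 if all mu_i are 0."""
    num = sum(mu * representatives[name] for name, mu in memberships.items())
    den = sum(memberships.values())
    return num / den if den else 0.0

# Hypothetical output memberships and representative flow rates (L/min):
memberships = {"LOW-POSITIVE": 0.2, "POSITIVE": 0.7, "HIGH-POSITIVE": 0.1}
reps        = {"LOW-POSITIVE": 2.0, "POSITIVE": 5.0, "HIGH-POSITIVE": 8.0}
print(round(defuzzify_centroid(memberships, reps), 2))  # -> 4.7
```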
-Widely applied in control applications