Reasoning Under Uncertainty
Agents may need to handle uncertainty, whether due to partial observability, non-determinism, or a
combination of the two.
An agent may never know for certain what state it’s in or where it will end up after a sequence of
actions.
With logical knowledge representation, we might write A→B, meaning that if A is true then B is true. But if we are not sure whether A is true or not, we cannot express this statement; this situation is called uncertainty.
Causes of uncertainty:
The following are some of the leading causes of uncertainty in the real world:
1) Information obtained from unreliable sources.
2) Experimental Errors
3) Equipment fault
4) Temperature variation
5) Climate change.
Summarizing uncertainty
Let us consider an example of uncertain reasoning: diagnosing a dental patient’s toothache. We will try to write rules for dental diagnosis using propositional logic.
Toothache ⇒ Cavity.
The problem is that this rule is wrong. Not all patients with toothaches have cavities; some
of them have gum disease, an abscess, or one of several other problems:
Toothache ⇒ Cavity ∨ GumProblem ∨ Abscess . . .
Unfortunately, to make the rule true, we would have to add an almost unlimited list of possible problems. We could instead try turning it into a causal rule:
Cavity ⇒ Toothache
but this rule is not right either, because not every cavity causes pain.
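To see concretely why the diagnostic rule fails, here is a minimal sketch (the patient worlds are invented for illustration, not taken from the text) that evaluates Toothache ⇒ Cavity as a material implication over a few possible worlds:

```python
# A minimal sketch: checking the propositional rule Toothache => Cavity
# against a few hypothetical possible worlds (invented for illustration).

# Each world assigns a truth value to the propositions.
worlds = [
    {"Toothache": True,  "Cavity": True},    # cavity causing pain
    {"Toothache": True,  "Cavity": False},   # pain from gum disease instead
    {"Toothache": False, "Cavity": True},    # painless cavity
    {"Toothache": False, "Cavity": False},   # healthy patient
]

def implies(p, q):
    """Material implication: p => q is false only when p is true and q is false."""
    return (not p) or q

for w in worlds:
    holds = implies(w["Toothache"], w["Cavity"])
    print(w, "satisfies the rule" if holds else "FALSIFIES Toothache => Cavity")
```

The world with a toothache but no cavity falsifies the rule, which is why we fall back on degrees of belief rather than strict implications.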
The agent’s knowledge can at best provide only a degree of belief (a number between 0 and 1) in the relevant sentences.
Our main tool for dealing with degrees of belief is probability theory. The ontological commitments of logic and probability theory are the same: in both, the world is composed of facts that either hold or do not hold in any particular case.
Probability summarizes the uncertainty that comes from our laziness and ignorance, thereby
solving the qualification problem.
Uncertainty and rational decisions
To make choices, an agent must first have preferences between the different possible outcomes of
the various plans.
An outcome is a completely specified state.
Utility theory says that every state has a degree of usefulness, or utility, to an agent and that the
agent will prefer states with higher utility. The term utility is used here in the sense of “the quality of
being useful”.
Preferences, as expressed by utilities, are combined with probabilities in the general theory of
rational decisions called decision theory.
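To make the combination concrete, here is a minimal decision-theoretic sketch (the actions, probabilities, and utility values are invented purely for illustration): each action leads to outcomes with given probabilities and utilities, and a rational agent picks the action with the highest expected utility.

```python
# A minimal decision-theory sketch: choose the action that maximizes expected
# utility. The actions, probabilities, and utilities below are illustrative.

actions = {
    # action: list of (probability of outcome, utility of outcome)
    "treat_cavity": [(0.90, 80), (0.10, -20)],   # likely relief, small chance of complication
    "wait_and_see": [(0.40, 50), (0.60, -40)],   # pain may go away, or get worse
}

def expected_utility(outcomes):
    """Expected utility = sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(f"{a}: EU = {expected_utility(outcomes):.1f}")
print("Rational choice:", best_action)
```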
Review Of Probability
Probabilistic reasoning is a way of representing knowledge in which we apply the concept of probability to indicate uncertainty in that knowledge. In probabilistic reasoning, we combine probability theory with logic, because probability provides a principled way to handle uncertainty.
In the real world, there are many scenarios where the certainty of something cannot be confirmed, such as "It will rain today," "how someone will behave in a given situation," or "the outcome of a match between two teams or two players." These are probable statements: we can assume they may happen, but we cannot be sure of it, so we use probabilistic reasoning.
Need for probabilistic reasoning in AI:
When there are unpredictable outcomes.
When specifications or possibilities of predicates become too large to handle.
When an unknown error occurs during an experiment.
In probabilistic reasoning, there are two ways to solve problems with uncertain knowledge:
1) Bayes' rule
2) Bayesian Statistics
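As a quick preview (the disease/test numbers below are invented for illustration), Bayes' rule, P(A | B) = P(B | A) P(A) / P(B), lets us invert a conditional probability; it follows directly from the product rule reviewed in the next subsection.

```python
# A minimal sketch of Bayes' rule: P(A | B) = P(B | A) * P(A) / P(B).
# The disease/test numbers below are invented purely for illustration.

p_disease = 0.01                 # prior P(A): patient has the disease
p_pos_given_disease = 0.95       # P(B | A): test is positive if diseased
p_pos_given_healthy = 0.05       # P(B | not A): false positive rate

# Total probability of a positive test, P(B).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule inverts the conditional: P(A | B).
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print("P(disease | positive test) =", round(p_disease_given_pos, 3))  # about 0.161
```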
Common terms used in probabilistic reasoning:
Probability: Probability can be defined as the chance that an uncertain event will occur; it is a numerical measure of the likelihood of that event. The value of a probability always lies between 0 and 1:
0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
P(A) = 0 indicates that event A is impossible (it will certainly not occur).
P(A) = 1 indicates that event A is certain to occur.
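As a tiny illustration (the outcomes and numbers are made up), a discrete probability distribution assigns each outcome a value in [0, 1], and the values over all mutually exclusive, exhaustive outcomes sum to 1:

```python
# A minimal sketch of a discrete probability distribution.
# The weather outcomes and their probabilities are invented for illustration.

weather = {"sunny": 0.6, "rain": 0.3, "cloudy": 0.1}

# Every probability must lie between 0 and 1 ...
assert all(0.0 <= p <= 1.0 for p in weather.values())
# ... and the probabilities of the exhaustive, mutually exclusive outcomes sum to 1.
assert abs(sum(weather.values()) - 1.0) < 1e-9

print("P(rain) =", weather["rain"])
```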
How can we work out the likelihood of two events occurring together, given their base and conditional probabilities?
P(a | b) = P(a ∧ b) / P(b) ......(1)
P(b | a) = P(a ∧ b) / P(a) ......(2)
P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a) ......(3)
So, in our toothache example, applying the product rule (3):
P(toothache ∧ cavity) = P(toothache | cavity)P(cavity)
= P(cavity | toothache)P(toothache)
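The sketch below works these quantities out numerically from a small joint distribution. The probability numbers are invented for illustration; only the algebra (the definition of conditional probability and the product rule above) comes from the text.

```python
# A minimal sketch: conditional probability and the product rule,
# using an invented joint distribution over two Boolean variables.

# P(Toothache, Cavity) -- the four entries sum to 1 (numbers are illustrative).
joint = {
    (True,  True):  0.12,   # toothache and cavity
    (True,  False): 0.08,   # toothache, no cavity
    (False, True):  0.08,   # cavity, no toothache
    (False, False): 0.72,   # neither
}

def p_toothache():
    return sum(p for (t, _), p in joint.items() if t)

def p_cavity():
    return sum(p for (_, c), p in joint.items() if c)

p_t_and_c = joint[(True, True)]

# Equation (1): P(a | b) = P(a ∧ b) / P(b)
p_toothache_given_cavity = p_t_and_c / p_cavity()
# Equation (2): P(b | a) = P(a ∧ b) / P(a)
p_cavity_given_toothache = p_t_and_c / p_toothache()

# Equation (3), the product rule, recovers the joint either way:
assert abs(p_toothache_given_cavity * p_cavity() - p_t_and_c) < 1e-9
assert abs(p_cavity_given_toothache * p_toothache() - p_t_and_c) < 1e-9

print("P(toothache | cavity)  =", round(p_toothache_given_cavity, 3))
print("P(cavity | toothache)  =", round(p_cavity_given_toothache, 3))
```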
First-Order Logic (Predicate Logic)
First-order logic (FOL), also known as predicate logic or first-order predicate calculus, is a
powerful framework used in various fields such as mathematics, philosophy, linguistics, and
computer science. In artificial intelligence (AI), FOL plays a crucial role in knowledge
representation, automated reasoning, and natural language processing.
The key components of FOL include constants, variables, predicates, functions, quantifiers,
and logical connectives.
1. Constants: Constants represent specific objects within the domain of discourse. For
example, in a given domain, Alice, 2, and NewYork could be constants.
2. Variables: Variables stand for unspecified objects in the domain. Commonly used
symbols for variables include x, y, and z.
3. Predicates: Predicates are functions that return true or false, representing properties
of objects or relationships between them. For example, Likes(Alice, Bob) indicates
that Alice likes Bob, and GreaterThan(x, 2) means that x is greater than 2.
4. Functions: Functions map objects to other objects. For instance, MotherOf(x) might
denote the mother of x.
5. Quantifiers: Quantifiers specify the scope of variables. The two main quantifiers are:
Universal Quantifier (∀): Indicates that a predicate applies to all elements in the domain. For example, ∀x (Person(x) → Mortal(x)) means “All persons are mortal.”
Existential Quantifier (∃): Indicates that there is at least one element in the domain for which the predicate holds. For example, ∃x (Person(x) ∧ Likes(x, IceCream)) means “There exists a person who likes ice cream.”
6. Logical Connectives: