AI-Unit 1_Notes
UNIT-1
Syllabus:
Unit I:
Meaning and definition of artificial intelligence, Production systems, Characteristics of
production systems, Study and comparison of breadth first search and depth first search
techniques, other search techniques like hill climbing, best first search, the A* algorithm,
the AO* algorithm, etc., and various types of control strategies.
Unit II:
Knowledge Representation, Problems in representing knowledge, knowledge representation
using propositional and predicate logic, comparison of propositional and predicate logic,
Resolution, refutation, deduction, theorem proving, inferencing, monotonic and non-
monotonic reasoning.
Unit III:
Probabilistic reasoning, Bayes' theorem, semantic networks, scripts, schemas, frames,
conceptual dependency, fuzzy logic, forward and backward reasoning.
Unit IV:
Game playing techniques like minimax procedure, alpha-beta cut-offs etc, planning, Study
of the block world problem in robotics, Introduction to understanding, natural language
processing.
Unit V:
Introduction to learning, Various techniques used in learning, Introduction to neural
networks, applications of neural networks, common sense reasoning, some examples of
expert systems.
Course Objectives:
When most people use the term AI today, they're referring to a suite of machine
learning-powered technologies, such as ChatGPT or computer vision, that
enable machines to perform tasks that previously only humans could do, like
generating written content, steering a car, or analyzing data.
Key Concepts Used in AI:
1. Machine Learning: A subset of AI where systems learn from data to
improve their performance over time without being explicitly programmed.
Key techniques include supervised learning, unsupervised learning, and
reinforcement learning.
2. Neural Networks: Inspired by the human brain, neural networks are
algorithms designed to recognize patterns and make decisions. Deep
learning, a specialized area of machine learning, involves using large neural
networks with many layers to tackle complex tasks.
3. Natural Language Processing (NLP): This field focuses on enabling
computers to understand, interpret, and generate human language.
Applications include language translation, sentiment analysis, and chatbots.
4. Computer Vision: AI systems that can interpret and make decisions
based on visual information from the world. Examples include facial
recognition, object detection, and image classification.
5. Robotics: The integration of AI into robots allows them to perform tasks
autonomously or semi-autonomously.
Examples of AI:
• ChatGPT: Uses large language models (LLMs) to generate text in
response to questions or comments posed to it.
AI in healthcare:
Early diagnosis: AI can analyze patient and disease data to predict
the likelihood of a patient developing a disease and either diagnose
it early or help to prevent it entirely.
Disease tracking: Using predictive analytics, AI can model
how a contagious disease could spread over the course of time
or across a specific area.
Drug discovery: AI models can discover new applications for existing
drugs or flag potentially harmful interactions between different drugs.
Evolution of AI
1. Early Foundations (Before 1950)
•Philosophical Beginnings: The concept of machines or artificial beings with
intelligence dates back to ancient civilizations. Philosophers like Aristotle
developed syllogistic logic, a form of reasoning that influenced later AI
developments.
•Mechanical Automata: In the 18th and 19th centuries, inventors created
mechanical devices that could perform simple tasks, laying the groundwork for
the idea of machines simulating human activities.
2. Birth of AI (1950s)
•Alan Turing's Influence: The British mathematician and logician Alan Turing
is often considered the father of AI. In 1950, he proposed the "Turing Test" to
determine if a machine could exhibit human-like intelligence.
•The Dartmouth Conference (1956): The official birth of AI as a field is often
traced to this conference, where researchers like John McCarthy, Marvin
Minsky, Nathaniel Rochester, and Claude Shannon gathered to explore the
possibility of creating "thinking machines."
•Early Programs: In the late 1950s, AI pioneers developed early programs like
the Logic Theorist, which could prove mathematical theorems, and the General
Problem Solver (GPS), designed to solve a wide range of problems.
3. The Golden Age (1950s-1970s)
•Symbolic AI: This period saw the rise of symbolic AI, where researchers focused
on developing systems that manipulated symbols to solve problems and reason
about the world. LISP, a programming language developed by John McCarthy,
became the primary language for AI research.
•Expert Systems: In the 1970s, expert systems like DENDRAL (used for chemical
analysis) and MYCIN (used for medical diagnosis) demonstrated AI's potential in
specialized domains.
•Challenges: Despite early successes, AI faced significant challenges, including
limitations in computational power, the brittleness of systems, and the inability to
handle real-world complexity. This led to the first "AI winter" in the 1970s, where
funding and interest in AI research waned.
4. The Rise of Machine Learning (1980s-1990s)
•Connectionism: The 1980s saw a resurgence in AI through connectionism,
particularly with the development of artificial neural networks. Researchers like
Geoffrey Hinton revived interest in neural networks with the backpropagation
algorithm, allowing for more efficient training of multi-layered networks.
•Machine Learning: AI shifted focus toward machine learning, where systems could
learn from data rather than rely solely on predefined rules. This approach proved
more scalable and effective in many real-world applications.
•AI in Industry: During this period, AI began to be adopted in various
industries, from finance (for credit scoring) to manufacturing (for automation).
5. Modern AI and the Deep Learning Revolution (2000s-Present)
•Big Data and Increased Computing Power: The explosion of data and
advancements in computing power, particularly with the advent of GPUs, enabled
the training of deep neural networks on a scale previously unimaginable.
•Deep Learning: Deep learning, a subset of machine learning, revolutionized AI
by allowing machines to achieve state-of-the-art performance in tasks such as
image recognition, natural language processing, and game playing. Notable
breakthroughs include Google's AlphaGo defeating a world champion Go
player in 2016.
•AI in Everyday Life: Today, AI is embedded in countless applications, from
voice assistants like Siri and Alexa to recommendation systems on platforms like
Netflix and YouTube. It’s also making strides in areas like autonomous driving,
healthcare diagnostics, and more.
•Ethical and Societal Considerations: As AI becomes more pervasive, concerns
around ethics, privacy, bias, and the impact on jobs have grown, prompting
discussions about the responsible development and deployment of AI
technologies.
6. The Future of AI
•Integration with Other Technologies: AI is increasingly being integrated with
other emerging technologies such as the Internet of Things (IoT), blockchain,
and quantum computing, potentially unlocking new capabilities and applications.
Production System in AI
In artificial intelligence (AI), a production system refers to a type of rule-based
system that is designed to provide a structured approach to problem solving
and decision-making. A production system has four main components:
1. Knowledge Base: This is the core repository where all the rules and facts are stored. In AI, the
knowledge base is critical as it contains the domain-specific information and the if-then rules
that dictate how decisions are made or actions are taken.
2. Inference Engine: The inference engine is the mechanism that applies the rules to the known
facts to derive new facts or to make decisions. It scans the rules and decides which ones are
applicable based on the current facts in the working memory. It can operate in two modes:
a. Forward Chaining (Data-driven): This method starts with the available data and
uses the inference rules to extract more data until a goal is reached.
b. Backward Chaining (Goal-driven): This approach starts with a list of goals and
works backwards to determine what data is required to achieve those goals.
3. Working Memory: Sometimes referred to as the fact list, working memory holds the dynamic
information that changes as the system operates. It represents the current state of knowledge,
including facts that are initially known and those that are deduced throughout the operation of
the system.
4. Control Mechanism: This governs the order in which rules are applied by the inference
engine and manages the flow of the process. It ensures that the system responds appropriately
to changes in the working memory and applies rules effectively to reach conclusions or
solutions.
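The four components above can be sketched as a tiny forward-chaining production system. The (conditions, conclusion) rule format and the medical toy facts below are illustrative assumptions, not a standard library API:

```python
# Minimal sketch of a production system with forward chaining (data-driven
# inference). The rule format and the toy medical facts are made up for
# illustration.

def forward_chain(rules, facts):
    """Fire every rule whose conditions are all in working memory,
    adding new facts until nothing more can be derived."""
    working_memory = set(facts)          # working memory: the dynamic fact list
    changed = True
    while changed:                       # control mechanism: loop until stable
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= working_memory and conclusion not in working_memory:
                working_memory.add(conclusion)   # a newly derived fact
                changed = True
    return working_memory

# Knowledge base: if-then rules as (conditions, conclusion) pairs.
rules = [
    (["has_fever", "has_cough"], "flu_suspected"),
    (["flu_suspected"], "recommend_rest"),
]

derived = forward_chain(rules, {"has_fever", "has_cough"})
print(sorted(derived))
# ['flu_suspected', 'has_cough', 'has_fever', 'recommend_rest']
```

Here the inference engine is the inner `for` loop that matches rule conditions against working memory, and the outer `while` loop is the control mechanism that decides when to stop.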
Breadth-First Search (BFS):
BFS, Breadth-First Search, is a vertex-based technique for finding the shortest path
(fewest edges) in an unweighted graph. It uses a queue data structure that follows first
in, first out (FIFO). In BFS, one vertex is selected at a time; when it is visited and
marked, its adjacent vertices are visited and stored in the queue. Both BFS and DFS run
in O(V + E) time, but BFS can need more memory on graphs with many vertices per level.
Depth-First Search (DFS):
DFS, Depth-First Search, is an edge-based technique. It uses the stack data
structure and works in two stages: first, visited vertices are pushed onto the stack, and
second, when a vertex has no unvisited neighbors left, it is popped.
Comparison of BFS and DFS
• Stands for: BFS stands for Breadth First Search; DFS stands for Depth First Search.
• Conceptual difference: BFS builds the tree level by level; DFS builds the tree sub-tree by sub-tree.
• Approach used: BFS works on the concept of FIFO (First In, First Out); DFS works on the concept of LIFO (Last In, First Out).
• Suitable for: BFS is more suitable for searching vertices closer to the given source; DFS is more suitable when there are solutions away from the source.
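A minimal sketch of both traversals, assuming an adjacency-list graph; the sample graph below is made up for illustration:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: a FIFO queue visits vertices level by level."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()           # first in, first out
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(graph, start):
    """Depth-first traversal: a LIFO stack follows one branch as deep as possible."""
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()               # last in, first out
        if node not in visited:
            visited.add(node)
            order.append(node)
            # push in reverse so neighbors are explored in their listed order
            for neighbor in reversed(graph.get(node, [])):
                stack.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```

Note how BFS reaches C (a vertex close to the source) before D, while DFS runs down the A-B-D branch before backtracking to C.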
Hill Climbing
Hill climbing is a local search technique with the following features:
1. Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to decide
which direction to move in the search space.
2. Greedy approach: The hill-climbing search moves in the direction which
optimizes the cost.
3. No backtracking: It does not backtrack through the search space, as it does not
remember previous states.
4. Deterministic nature: Hill climbing is a deterministic optimization algorithm, which
means that given the same initial conditions and the same problem, it will always
produce the same result. There is no randomness or uncertainty in its operation.
5. Local neighborhood: Hill climbing operates within a small area around the current
solution. It explores solutions that are closely related to the current state by making
small, gradual changes. This approach allows it to find a solution that is better than
the current one, although it may not be the global optimum.
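The greedy, deterministic, no-backtracking behavior described above can be sketched as follows; the objective function and the step-by-one integer neighborhood are illustrative assumptions:

```python
# Greedy hill climbing over an integer search space.

def hill_climb(f, neighbors, start):
    """Move to the best neighbor while it improves f.
    Deterministic, no backtracking; may stop at a local (not global) maximum."""
    current = start
    while True:
        best = max(neighbors(current), key=f)   # greedy: best nearby state
        if f(best) <= f(current):
            return current                      # no improving neighbor: a peak
        current = best

# Maximize f(x) = -(x - 3)^2; the unique maximum is at x = 3.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(f, step, start=0))  # 3
```

Because the algorithm keeps only the current state and its immediate neighbors, its memory use is constant regardless of how large the search space is.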
Advantages of the Hill Climbing Algorithm:
1. Simplicity: Hill climbing is easy to understand and implement, which makes it a
convenient first choice for complex optimization problems.
2. Low memory use: It stores only the current problem state and the solutions located
around it, rather than the entire search tree that a tree-search method would have to
inspect. Consequently, the total memory required is much smaller.
3. Fast local results: It usually converges quickly to a nearby local maximum. This is
the route to take when quickly getting a good solution matters more than guaranteeing
a global maximum.
Different regions in the state space landscape:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighboring states, but another, higher state (the global maximum) exists
elsewhere in the landscape.
Solution: A backtracking technique can be a solution to the local maximum problem. Keep a
list of promising paths so that the algorithm can backtrack through the search space and
explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of
the current state contain the same value; because of this, the algorithm cannot find a best
direction to move. A hill-climbing search might get lost in a plateau area.
Solution: The solution for a plateau is to take bigger or smaller steps while searching.
Alternatively, randomly select a state far away from the current state, so that the algorithm
may land in a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area higher than its
surrounding areas, but it itself has a slope, and it cannot be reached in a single move.
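The remedies above (backtracking, random jumps) are often combined into random-restart hill climbing: run plain hill climbing from several random starting states and keep the best peak found. A sketch, with a made-up one-dimensional landscape containing both a local and a global maximum:

```python
import random

# Random-restart hill climbing: a common remedy for local maxima, plateaus,
# and ridges. The landscape below is made up for illustration.

def random_restart_hill_climb(f, neighbors, random_state, restarts=20, seed=0):
    """Run plain hill climbing from several random starts, keep the best peak."""
    rng = random.Random(seed)            # fixed seed: reproducible sketch
    best = None
    for _ in range(restarts):
        current = random_state(rng)
        while True:                      # one plain hill-climbing run
            candidate = max(neighbors(current), key=f)
            if f(candidate) <= f(current):
                break                    # reached a peak
            current = candidate
        if best is None or f(current) > f(best):
            best = current               # keep the best peak seen so far
    return best

# Local maximum at index 2 (value 3); global maximum at index 9 (value 10).
landscape = [0, 1, 3, 1, 0, 2, 4, 6, 8, 10, 7]
f = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
best_index = random_restart_hill_climb(
    f, neighbors, lambda rng: rng.randrange(len(landscape)))
print(best_index, landscape[best_index])
```

A single run starting at index 0 would stop on the local maximum at index 2; with multiple random restarts, some run almost certainly starts on the right-hand slope and climbs to the global maximum instead.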
Comparison of A* and AO* Algorithms
• The A* algorithm and the AO* algorithm both work on best-first search.
• Both are informed searches and work on given heuristic values.
• A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
• Once AO* finds a solution, it does not explore all possible paths, but A* explores all paths.
• When compared to the A* algorithm, the AO* algorithm uses less memory.
In a typical AO* worked example, the level-1 heuristic values are calculated first and the
minimum-cost path is chosen, i.e., f(A⇢B). Then, according to the result of step 1, the
level-2 values are computed and propagated upward; once the path f(A⇢C+D) gets solved,
the tree becomes a solved tree. In simple words, the main flow of this algorithm is: first
find the level-1 heuristic values, then the level-2 values, and after that update the values
while moving upward toward the root node.
Route Optimization:
Route optimization means choosing routes that satisfy multiple objectives, for example
minimizing both travel time and cost. The AO* algorithm can be used to find the optimal
routes that satisfy both objectives.
Portfolio Optimization:
Portfolio optimization is the task of choosing a set of investments that maximizes
returns while minimizing risk.
The AO* algorithm can be used to find the optimal portfolio that satisfies both
objectives, such as maximizing the expected return and minimizing the standard
deviation.
An Introduction to A* Search Algorithm in AI
A* (pronounced "A-star") is a powerful graph traversal and pathfinding
algorithm widely used in artificial intelligence and computer science. It is
mainly used to find the shortest path between two nodes in a graph, given the
estimated cost of getting from the current node to the destination node. The
main advantage of the algorithm is its ability to provide an optimal path by
exploring the graph in a more informed way compared to traditional search
algorithms such as Dijkstra's algorithm.
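A compact sketch of A* on a small weighted graph; the graph and the hand-picked admissible heuristic below are illustrative assumptions:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the node with the lowest f(n) = g(n) + h(n),
    where g is the cost so far and h is an admissible heuristic estimate."""
    # priority queue entries: (f, g, node, path taken so far)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                      # first goal pop is optimal
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None, float("inf")                   # goal unreachable

# Toy weighted graph: node -> list of (neighbor, edge cost).
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
# Hand-picked heuristic that never overestimates the true remaining cost.
h = {"A": 3, "B": 2, "C": 1, "D": 0}.get
path, cost = a_star(graph, h, "A", "D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 4
```

With h(n) = 0 everywhere this degenerates to Dijkstra's algorithm; the heuristic is what lets A* explore the graph in a more informed, goal-directed way.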