
Intelligent Systems and Machine Learning Algorithms

Gahan A V
Assistant Professor
Department of Electronics and Communication Engineering
Bangalore Institute of Technology

Module 2


• The reflex agents mentioned in Chapter 2 are the simplest type of agents.

• These agents choose actions based on a simple rule: "If I'm in this state, take this action." They don’t
think about the future or the long-term consequences of their actions.

• This works fine in small or simple environments, but in more complex situations, where there are many
possible states or outcomes, this approach fails because there’s just too much information to store or learn.

• In contrast, goal-based agents are smarter because they think about the future. Instead of just reacting to
the current state, they consider what will happen next and whether their actions will help them reach a
specific goal.

• This chapter talks about a specific type of goal-based agent called a problem-solving agent. These
agents simplify their thinking by treating each state of the world as a whole (like snapshots), without
worrying about the details inside that state.

• They use this simplified view to figure out the best sequence of actions to solve a problem.


• More advanced goal-based agents, called planning agents, handle more complex problems.

In simple terms:

• Reflex agents = react without thinking ahead.

• Goal-based agents = plan for the future.

• Problem-solving agents = use simple views of the world to solve problems.

• Planning agents = use detailed views to solve more complex problems.

• In problem-solving, we first need to clearly define what the problem is and what a solution looks like.

• Once a problem is defined, we can use search algorithms to find solutions.

There are two main types of search algorithms:

1. Uninformed search algorithms: These algorithms don't have any extra knowledge about the problem
beyond its basic definition. They explore all possibilities blindly, without knowing which paths are better.
Some uninformed algorithms can solve any solvable problem, but they often do so very inefficiently (they
take a lot of time or resources).

2. Informed search algorithms: These algorithms are smarter because they use additional information (or
"guidance") about the problem to help them focus on more promising paths. This allows them to solve
problems more efficiently than uninformed search algorithms.

In summary:
- Uninformed search: No extra information, slower, tries everything.
- Informed search: Uses guidance to find solutions faster.


PROBLEM-SOLVING AGENTS
In this scenario, we are looking at an intelligent agent that tries to make decisions to maximize its performance. To do
this effectively, the agent often sets a specific goal and focuses on achieving it.

Example: The Tourist

• Imagine an agent on a holiday in Arad, Romania, with many objectives like improving its suntan, learning
Romanian, sightseeing, and enjoying the nightlife.

• The agent faces a complex decision problem because there are many factors to consider.

• But let’s say this agent has a nonrefundable flight from Bucharest the next day.

• Now, instead of juggling multiple activities, the agent can adopt a clear goal: get to Bucharest in time.


• This simplifies things because any action that doesn't help the agent reach Bucharest can be ignored.

• Goals make decision-making easier by narrowing down the options.

This means that by setting a clear goal—reaching Bucharest—the agent no longer has to consider
unnecessary actions. For example, activities like sightseeing or enjoying the nightlife, which don't help the
agent get to Bucharest, can be ignored.

• Goal Formulation
The agent decides what it wants to achieve based on its current situation. In this case, the goal is to reach Bucharest. This is
the first step in solving the problem because it clearly defines what the agent is trying to accomplish.

• Problem Formulation
Next, the agent figures out how to reach its goal. This is called problem formulation. Instead of focusing on tiny details like
how to steer the car slightly, the agent thinks about bigger actions, like driving from one town to another. This approach
helps simplify decision-making by reducing the number of unnecessary steps.

• Exploring Options
Now the agent must decide how to get from Arad to Bucharest. There are three roads to choose from: one to Sibiu, one to
Timisoara, and one to Zerind. The agent isn’t sure which road is best since it doesn't know Romanian geography.

If the agent has no information (like a map), it might have to randomly pick one road. However, if the agent has a map, it can
look at future possibilities. The map shows what happens after going to each town and helps the agent figure out which
path will eventually lead to Bucharest. By using this information, the agent can create a plan to successfully reach its goal.

Key Ideas

•Goals simplify decision-making by narrowing down options.

•Problem formulation is deciding what actions and states to consider to achieve the goal.

•Without information, the agent might have to guess; with information (like a map), the agent can plan a better path
toward the goal.

In summary, intelligent agents set goals, plan their actions based on available information, and use that planning to
reach their objectives more efficiently.


Well-defined problems and solutions

A problem can be defined using the following main components:

1. Initial State
- This is where the agent starts. For our tourist in Romania, the initial state is In(Arad), meaning they are in the city of
Arad.

2. Actions
- These are the possible things the agent can do from a given state. For example, from In(Arad), the tourist can choose
to Go(Sibiu), Go(Timisoara), or Go(Zerind). So, the set of actions available is {Go(Sibiu), Go(Timisoara),
Go(Zerind)}.

3. Transition Model
- This describes what happens when an action is taken. It’s a function called RESULT(s, a), which tells us the new
state after doing action a in state s. For example, if the agent is In(Arad) and takes the action Go(Zerind), the result
will be In(Zerind).


4. State Space
- This is the collection of all possible states the agent can reach from the initial state by taking various actions. It forms a
graph where each state is a node and the actions connecting them are the links. For example, a map of Romania can be
seen as a state-space graph, showing all the towns and roads.

5. Goal Test
- This checks if the agent has reached its goal. For our tourist, the goal is to be In(Bucharest). The goal test will confirm if
the current state is Bucharest.

6. Path Cost
- This is a way to measure how good or efficient a particular path is. The agent assigns a cost to each action based on its
performance measure. For example, if time is important, the cost could be based on the distance traveled. The total cost of
a journey is the sum of the costs of all actions taken.

Summary
When these elements are combined, they create a problem that can be solved by a problem-solving algorithm. A solution
is a sequence of actions that takes the agent from the initial state to the goal state. The quality of the solution is measured
by its path cost, with an optimal solution being the one with the lowest cost among all possible solutions.
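The components above can be sketched as a small Python class for the Romania route-finding example. The road map below is a small subset of the full map, and the class and method names are illustrative choices, not a fixed API:

```python
# A sketch of the problem components for the Romania route-finding example.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial = initial            # initial state, e.g. In(Arad)
        self.goal = goal
        self.roads = roads                # implicitly defines the state space

    def actions(self, state):             # actions available in a state
        return [f"Go({city})" for city in self.roads.get(state, {})]

    def result(self, state, action):      # transition model RESULT(s, a)
        return action[3:-1]               # "Go(Zerind)" -> "Zerind"

    def goal_test(self, state):           # goal test
        return state == self.goal

    def step_cost(self, state, action):   # path cost = sum of step costs
        return self.roads[state][self.result(state, action)]

problem = RouteProblem("Arad", "Bucharest", ROADS)
print(problem.actions("Arad"))            # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(problem.result("Arad", "Go(Zerind)"))   # Zerind
print(problem.step_cost("Arad", "Go(Sibiu)")) # 140
```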

The SIMPLE-PROBLEM-SOLVING-AGENT is a basic type of intelligent agent that makes decisions based on its surroundings.

How It Works

Inputs:
- percept: the current information the agent receives about its environment.

Persistent variables:
- seq: a sequence of actions to be taken, starting off as empty.
- state: describes the current situation of the agent in the world.
- goal: the objective the agent wants to achieve, initially set to null (nothing).
- problem: a formulation of the problem that needs solving.

Steps:
1. Update state: The agent updates its state based on the current percept it receives from the environment.
2. Formulate goal and problem: If the action sequence seq is empty (meaning there are no actions to take yet), the agent formulates a goal based on the current state and creates a problem formulation using the current state and the newly defined goal.
3. Search for actions: The agent searches for a sequence of actions that can solve the problem and stores it in seq. If the search fails (no solution found), the agent returns a null action (essentially doing nothing).


4. Execute actions: The agent takes the first action from the sequence seq, removes it from the sequence, and returns it to be executed.

5. Repeat: After the sequence is used up, the agent repeats the process by formulating a new goal and starting over.
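The steps above can be sketched as a short Python loop. The helper functions (update_state, formulate_goal, formulate_problem, search) are toy stand-ins assumed here only so the loop runs end to end; `memory` holds the persistent variables:

```python
# A Python sketch of the SIMPLE-PROBLEM-SOLVING-AGENT loop.
def simple_problem_solving_agent(percept, memory):
    memory["state"] = update_state(memory.get("state"), percept)
    if not memory["seq"]:                        # no plan yet
        goal = formulate_goal(memory["state"])
        problem = formulate_problem(memory["state"], goal)
        memory["seq"] = search(problem) or []    # search may fail
        if not memory["seq"]:
            return None                          # null action
    return memory["seq"].pop(0)                  # first action, then remove it

# Toy stand-ins (assumptions for illustration only):
def update_state(state, percept): return percept
def formulate_goal(state): return "Bucharest"
def formulate_problem(state, goal): return (state, goal)
def search(problem): return ["Go(Sibiu)", "Go(Fagaras)", "Go(Bucharest)"]

memory = {"seq": []}
print(simple_problem_solving_agent("Arad", memory))   # Go(Sibiu)
print(simple_problem_solving_agent("Sibiu", memory))  # Go(Fagaras)
```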


Formulating problems

In this section, we discuss the concept of abstraction in problem-solving, especially in the context of
planning a trip to Bucharest. Explanation follows:

Key Points:

1. Abstract Model vs. Real World:


- The problem formulation for reaching Bucharest includes an initial state, actions, transition model, goal
test, and path cost. However, this is an abstract model that doesn’t capture all real-world details, such as
traveling companions, the radio program, scenery, law enforcement presence, and weather conditions.
These details, while important in reality, are not necessary for solving the routing problem.

2. Abstraction of States and Actions:


- Abstraction is the process of simplifying complex details. In our formulation, we only focus on the
essential aspects, like the locations (e.g., moving from one town to another) without considering every little
action (like turning on the radio or adjusting the steering wheel).


3. Choosing Levels of Abstraction:


- We choose abstract states and actions that represent larger sets of more detailed situations. For example, the route
from Arad to Bucharest could include various detailed paths, like listening to the radio or taking breaks, but we only need
to consider the main route for the planning.

4. Validity of Abstraction:
- An abstraction is considered valid if any abstract solution can be expanded into a detailed solution. For instance, if
there’s a path from a detailed state in Arad to a detailed state in Sibiu, it means the abstraction works.

5. Ease of Execution:
- The abstract actions should be easier to carry out than dealing with all the details of the original problem. For example,
deciding to drive from one town to another is simpler than planning every single action along the way.

6. Importance of Abstraction:
- Abstraction allows intelligent agents to focus on relevant information without being overwhelmed by the complexities
of the real world. Without useful abstractions, agents would struggle to make decisions effectively.

EXAMPLE PROBLEMS
The problem-solving approach can be applied to many different types of environments, and we can categorize
these into toy problems and real-world problems.

Toy Problems
•Definition: Toy problems are simplified scenarios created primarily to demonstrate and test problem-solving
methods.
•Characteristics:
• They have a clear and concise description.
• They can be easily understood and replicated by researchers.
• They are often used for comparing the performance of different algorithms.
•Examples: Classic puzzles like the Tower of Hanoi, the 8-puzzle, or algorithms used in simple board games.

Real-World Problems
•Definition: Real-world problems are practical issues that people face and care about solving.
•Characteristics:
• They are more complex and often lack a single, agreed-upon description.
• They involve various factors and uncertainties that make them harder to define precisely.
•Examples: Navigating traffic in a city, optimizing supply chain logistics, or diagnosing medical conditions.


Toy problems

The vacuum world is a classic example used to illustrate problem-solving in artificial intelligence.

Formulation of the Vacuum World Problem

1. States:
- The state of the environment is defined by the agent's location and the dirt locations.
- The agent can be in one of two locations (let’s say Location A and Location B).
- Each location may or may not contain dirt.
- Therefore, there are 2 (agent locations) × 4 (dirt configurations) = 8 states in total. A larger environment with n locations has n × 2^n states.

Example states include:
- Agent in Location A with dirt in both squares.
- Agent in Location A with dirt only in Location B.
- Agent in Location B with dirt in Location A.
- Agent in Location B with dirt in both squares, and so on.

2. Initial State:
- Any one of the possible states can be chosen as the initial state where the problem starts.


3. Actions:
- The agent can perform three possible actions in this environment:
- Left: Move to the left location.
- Right: Move to the right location.
- Suck: Clean the current location.
- In larger environments, actions like Up and Down could also be added.

4. Transition Model:
- The effects of the actions are defined as follows:
- Moving Left at the leftmost location has no effect (the agent cannot move further left).
- Moving Right at the rightmost location has no effect.
- Sucking in a clean square has no effect (the square is already clean).
- This defines how the state changes in response to each action.

5. Goal Test:
- The goal of the agent is to ensure that all squares are clean. This test checks whether the agent has achieved this state.

6. Path Cost:
- Each action taken (each step) costs 1 unit. Therefore, the total path cost is simply the number of steps taken to reach the
goal.
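The formulation above can be sketched in a few lines of Python. A state is encoded as (agent_location, set_of_dirty_squares); this encoding is one convenient choice, not the only one:

```python
# A minimal sketch of the two-square vacuum world.
from itertools import chain, combinations

def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)              # no effect if already leftmost
    if action == "Right":
        return ("B", dirt)              # no effect if already rightmost
    if action == "Suck":
        return (loc, dirt - {loc})      # no effect on a clean square
    raise ValueError(action)

def goal_test(state):
    return not state[1]                 # goal: every square is clean

# Count the states: 2 agent locations x every subset of dirty squares.
locations = ["A", "B"]
dirt_subsets = list(chain.from_iterable(
    combinations(locations, r) for r in range(len(locations) + 1)))
print(len(locations) * len(dirt_subsets))   # 2 x 4 = 8, i.e. n * 2**n for n = 2

state = ("A", {"A", "B"})
for a in ["Suck", "Right", "Suck"]:
    state = result(state, a)
print(goal_test(state))                 # True
```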


8-puzzle

The 8-puzzle is a classic problem in artificial intelligence and problem-solving. Here’s a breakdown of how the 8-
puzzle can be formally defined:

Formulation of the 8-Puzzle Problem

1. States:
- A state describes the arrangement of the eight numbered tiles and the blank space on a 3x3 board.
- Each configuration can be represented by the positions of the tiles and the blank space in the nine squares.


2. Initial State:
- Any configuration of the tiles can be the initial state.
- Notably, only half of the possible configurations can reach a given goal state (9!/2 = 181,440 reachable states), which is an important characteristic of the puzzle.

3. Actions:
- The actions available are based on moving the blank space:
- Left: Slide a tile from the left into the blank space.
- Right: Slide a tile from the right into the blank space.
- Up: Slide a tile from above into the blank space.
- Down: Slide a tile from below into the blank space.
- The available actions depend on the position of the blank space (e.g., if the blank is in a corner, some moves won’t be
possible).

4. Transition Model:
- This model describes how the state changes when an action is taken.
- For example, if you apply the Left action to a state where the blank is next to tile 5, the result would be a new state where
tile 5 and the blank are swapped.


5. Goal Test:
- The goal test checks if the current state matches the target configuration of the tiles, which is usually a sequential
arrangement with the blank in a specific position (e.g., from 1 to 8 in order).

6. Path Cost:
- Each move (action) has a cost of 1. Therefore, the path cost is the total number of moves taken to reach the goal state
from the initial state.
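The actions and transition model above can be sketched as follows. The board encoding (a flat 9-tuple read row by row, with 0 for the blank) and the particular goal configuration are illustrative assumptions; actions name the direction the blank moves:

```python
# A sketch of the 8-puzzle actions and transition model.
def actions(state):
    row, col = divmod(state.index(0), 3)   # position of the blank
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves                            # corner squares allow fewer moves

def result(state, action):
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]  # swap blank with neighbouring tile
    return tuple(board)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)           # assumed goal: blank top-left

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)          # blank one square right of its goal spot
print(actions(start))                        # ['Left', 'Right', 'Down']
print(result(start, "Left") == GOAL)         # True
```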


8-queens problem

The 8-queens problem is a puzzle where the goal is to place 8 queens on a chessboard in such a way that none of
the queens can attack each other. Queens in chess can attack in straight lines (rows, columns) and diagonals, so
the challenge is to arrange the queens without any two being able to attack.


There are two main ways to approach this problem using search algorithms:

1. Incremental Formulation (Step-by-Step Approach):

An incremental formulation is a method where you start with an empty board (no queens) and gradually add queens one by one to build up the solution. Each action (or step) adds a queen to the board, and the goal is to keep adding queens until all 8 are placed without attacking each other.

In short: you build the solution step by step, starting from nothing and adding pieces until the problem is solved.

•States: A state is just the current arrangement of queens on the board. It could have anywhere from 0 to 8 queens.

•Starting Point: The board starts empty with no queens.

•Actions: You add one queen at a time to any empty square on the board.

•Goal: To have 8 queens on the board, with none attacking each other.

But if you try every possible sequence of placing queens, it leads to an enormous number of possibilities (about 180 trillion
possibilities).
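The attack test behind "without them attacking each other" can be sketched as a small function. It assumes the common one-queen-per-column encoding (queens[c] = row of the queen in column c), which is an assumption for illustration, since the formulation above allows any empty square:

```python
# Safety check for placing a queen in the next empty column.
def safe(queens, row):
    """Can a queen go in the next column at `row` without being attacked?"""
    col = len(queens)                      # next column to fill
    for c, r in enumerate(queens):
        if r == row:                       # same row
            return False
        if abs(r - row) == abs(c - col):   # same diagonal
            return False
    return True                            # same column is impossible by construction

print(safe([0, 4], 4))   # False: row 4 is already taken
print(safe([0, 4], 2))   # False: diagonal attack from the queen at (0, 0)
print(safe([0, 4], 1))   # True: no existing queen attacks (row 1, column 2)
```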

2. Complete-state Formulation:

A complete-state formulation is a method where you start with all 8 queens already placed on the board, but they might be attacking each other. The goal is to move the queens around to find a configuration where no queens can attack each other. In this approach, you're not adding queens but adjusting their positions; the path cost doesn't matter, only the final arrangement with no attacks is important.

A smarter version of the incremental formulation never places a queen on a square where it could be attacked by a queen already on the board. This reduces the number of possibilities from about 180 trillion to just 2,057:

•States: A state is an arrangement of up to 8 queens, none of them attacking each other.
•Starting Point: Start with 0 queens and gradually add queens.
•Actions: Add a queen to a valid spot where it won't be attacked by others.
•Goal: Place all 8 queens without any of them attacking each other.

This smaller number of possibilities makes it much easier to find a solution. However, for a much larger chessboard (like with 100 queens), even this smarter method leaves a huge number of possibilities, though it's still a big improvement.


In simple terms:

•The "naive" approach tries every possible way to place the queens, which takes forever.

•The "smart" approach only places queens in safe spots, cutting down the number of possibilities a lot, making it easier
to solve the puzzle.


Real-world problems
The route-finding problem is about finding the best path between two locations, usually through a network
of connected points (like cities or intersections). This problem can be seen in real life through various
applications, such as:

•Driving directions: When apps like Google Maps help you navigate from one place to another by showing
the best route.

•Computer networks: When video streams or data are sent from one computer to another, routing decides
the best path for the data to travel across the internet.

•Military operations: Planning the best routes for moving troops or supplies.

•Airline travel: Finding the best flights between two airports, potentially with layovers.
Some of these problems are relatively simple, like finding the best driving route, while others involve much
more complex decisions and constraints, such as coordinating multiple flights or handling the massive data
traffic on the internet.

The airline travel problem for a travel-planning website involves solving several steps to find the best flights for a user.

•States: Each "state" represents where you are (airport) and the time. It also keeps track of things like past flights, fare
classes (economy/business), and whether the flight is domestic or international.

•Initial state: The starting point, which is based on the user’s search (e.g., where you’re flying from, where you want to
go, and when).

•Actions: The available options are any flights leaving from your current airport after your current time. You also need to
consider things like enough time for layovers or transfers.

•Transition model: When you take a flight, you update your current location to the flight's destination and the current time
to when the flight lands.

•Goal test: The goal is to reach the final destination specified by the user.

•Path cost: This is more complex. The total cost considers factors like the price of the flight, how long you wait between
flights, total travel time, seat class (economy/business), and even things like customs and immigration. Other factors could
include comfort, time of day, or airline points.

Travel websites use this problem-solving approach, but the real world is more complicated, so good systems also plan for
problems. For example, they might book backup flights in case your original flight gets delayed or canceled.
The touring problem is similar to the route-finding problem, but with one key difference: instead of just finding a path
from one city to another, the goal is to visit every city in a set at least once and return to the starting point.

Let’s break it down using an example:

•Problem: You need to visit all cities in a map (like in Figure 3.2) at least once, starting and ending in Bucharest.

•Actions: You can travel between cities that are connected by a direct road (like in route-finding problems).

•State: This is different from simple route-finding. A state includes:


• Current location: Which city you are in right now (e.g., Bucharest).
• Visited cities: A list of cities you’ve already visited. For example, if you've been to Bucharest and Urziceni, the
state would record this.

•Initial state: You start in Bucharest with only Bucharest marked as visited. So the state is: In(Bucharest),
Visited({Bucharest}).
•Intermediate state: As you travel, the state updates to reflect your new location and the cities you've visited. For
example, if you move to Vaslui after visiting Bucharest and Urziceni, the state becomes: In(Vaslui), Visited({Bucharest,
Urziceni, Vaslui}).

•Goal: The goal is to end up in Bucharest after visiting all the cities. The system checks whether you have visited all the
cities and returned to Bucharest.
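The state described above (current city plus the set of visited cities) can be sketched directly. Road connectivity is omitted for brevity, so this shows only the state bookkeeping, not which moves are legal; using frozenset keeps states hashable for search:

```python
# Touring-problem state: (current city, set of cities visited so far).
def initial_state(start):
    return (start, frozenset([start]))           # In(start), Visited({start})

def result(state, destination):
    city, visited = state
    return (destination, visited | {destination})

def goal_test(state, all_cities, start):
    city, visited = state
    return city == start and visited == all_cities

cities = frozenset({"Bucharest", "Urziceni", "Vaslui"})
s = initial_state("Bucharest")                   # In(Bucharest), Visited({Bucharest})
s = result(s, "Urziceni")
s = result(s, "Vaslui")                          # In(Vaslui), Visited({B, U, V})
print(goal_test(s, cities, "Bucharest"))         # False: not back in Bucharest yet
print(goal_test(result(s, "Bucharest"), cities, "Bucharest"))  # True
```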
The Traveling Salesperson Problem (TSP) is a special type of touring problem where the goal is to visit a set of cities
exactly once and return to the starting point, while minimizing the total distance traveled. In simple terms, you need to
find the shortest route that visits all the cities and brings you back to where you started.

Key points about TSP:

•Problem: Visit each city exactly once and return to the starting city.

•Goal: Find the shortest possible route that completes the tour.

This problem is very difficult (it is NP-hard), meaning no efficient algorithm is known for large numbers of cities. As the number of cities increases, the number of possible routes grows factorially, making the problem intractable for exhaustive search.
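A brute-force sketch makes the growth concrete: with n cities there are (n-1)! tours from a fixed start, so trying them all only works for tiny n. The distances below are made up purely for illustration:

```python
# Brute-force TSP on a tiny, made-up distance table.
from itertools import permutations

def tour_length(tour, dist):
    # Sum the legs, including the closing leg back to the start.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def shortest_tour(cities, dist, start):
    rest = [c for c in cities if c != start]
    return min(([start] + list(p) for p in permutations(rest)),
               key=lambda t: tour_length(t, dist))

dist = {
    "A": {"B": 1, "C": 4, "D": 3},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"A": 3, "B": 5, "C": 1},
}
best = shortest_tour(list(dist), dist, "A")
print(best, tour_length(best, dist))   # ['A', 'B', 'C', 'D'] 7
# 4 cities -> 3! = 6 tours; 20 cities -> 19! (about 1.2e17) tours.
```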

Applications of TSP:

1.Travel planning: Like a salesperson planning visits to customers in different cities.


2.Circuit board drilling: Optimizing the order in which holes are drilled to minimize the drill’s travel time.
3.Manufacturing: Optimizing the movements of machines or robots on a shop floor to complete tasks more efficiently


The VLSI layout problem involves organizing millions of tiny components and connections on a microchip to make the
chip as efficient as possible.

The goal is to:

•Minimize the area: Make the chip as small as possible.


•Minimize circuit delays: Ensure the signals travel quickly across the chip.
•Minimize stray capacitances: Reduce unwanted interactions between electrical components.
•Maximize manufacturing yield: Increase the number of chips that can be produced without errors.

The layout problem happens after the logical design of the chip is done and is usually split into two main tasks:
1. Cell layout:
- Cells are small groups of components that perform specific functions on the chip.
- Each cell has a fixed size and shape and needs to be connected to other cells.
- The challenge is to place these cells on the chip so that they don't overlap and there is enough space for the wires (connections) between them.
2. Channel routing:
- After placing the cells, you need to route the wires that connect them.
- The task is to find paths for these wires through the gaps between the cells, ensuring they don't cross each other unnecessarily.


Robot navigation extends the route-finding problem by allowing robots to move in a continuous space rather than just
along discrete routes.

Key Features of Robot Navigation:


1. Continuous Space:
- Unlike a car that follows set roads, a robot can move anywhere in a flat area, creating potentially infinite routes and positions. This makes the navigation problem much more complex.

2. Dimensionality:
- For a simple robot that moves on a flat surface, the navigation space is two-dimensional (its position in the plane).
- If the robot has limbs, wheels, or other moving parts, the space becomes multi-dimensional, making it even harder to navigate.

3. Finite Search Space:
- Due to the complexity, advanced techniques are needed to limit the search space to a manageable size, allowing the robot to effectively plan its movements.

4. Sensor and Control Errors:
- Real robots face challenges like inaccuracies in their sensors (which detect the robot's environment) and motor controls (which move the robot). These errors can affect the robot's ability to navigate accurately.

Automatic assembly sequencing involves using robots to assemble complex objects in a specific order to ensure that all
parts fit together correctly. Here’s a simplified explanation of the key concepts:

Automatic Assembly Sequencing:

1. Historical Context:
- The first successful demonstration of automatic assembly sequencing by a robot was done by Freddy in 1972. Since then, progress has gradually improved, allowing robots to assemble intricate items like electric motors efficiently.

2. Objective:
- The main goal is to determine the correct order in which to assemble the components of an object. If parts are assembled in the wrong order, it may become impossible to add later components without dismantling previously completed work.

3. Feasibility Checking:
- Assessing whether a certain step in the assembly process is possible is a complex geometrical search problem, similar to the challenges faced in robot navigation: the robot must determine its path and actions based on the environment.

4. Efficiency in Action Generation:
- Generating legal actions for assembly (i.e., determining which parts can be assembled next) is the most computationally intensive aspect of assembly sequencing. A practical algorithm needs to avoid checking every possible arrangement and focus only on a small subset of feasible actions.

SEARCHING FOR SOLUTIONS
In problem-solving using search algorithms, we are essentially exploring all possible paths (action sequences) from the
start to the goal. These paths can be visualized as a search tree.
•Search Tree: It starts with the initial state (like the city "Arad") at the root.
•Nodes: Each node represents a state (e.g., being in a specific city).
•Branches: Actions, like moving from one city to another, form branches between nodes.
Steps:
1. Initial node: Start at the initial state (e.g., In(Arad)).
2. Goal test: Check whether this initial state is the goal (e.g., "Is Arad the goal city?").
3. Expanding: If it's not the goal, explore other possibilities by expanding the current node, i.e., applying each possible action to move to new states.
4. Generating new states: For example, from "Arad" you can travel to "Sibiu," "Timisoara," or "Zerind." These become new child nodes in the tree.
5. Choosing the next step: Decide which of these new nodes (states) to explore next.
This process repeats until you find a path to the goal (e.g., reaching Bucharest).

In search algorithms, the goal is to explore one option (path) at a time while saving other options for
later in case the first doesn't lead to a solution.
For example, if we first choose to explore "Sibiu" after leaving "Arad":
1. Goal test: Check if "Sibiu" is the goal state (it's not).
2. Expand Sibiu: Generate more options from "Sibiu", like going to "Arad," "Fagaras," "Oradea," or "Rimnicu Vilcea."
3. Leaf nodes: Each of these new states is a leaf node, a node that doesn't yet have child nodes.
4. Frontier: All current leaf nodes (like "Timisoara," "Zerind," and the new cities from "Sibiu") form the frontier, the set of nodes that are ready to be expanded next.
Essentially, you keep exploring the frontier (the set of available options) until you find the goal.


When searching for a solution, you keep expanding nodes on the frontier (the current set of unexplored
options) until either:
•A solution is found (you reach the goal), or
•There are no more states left to explore.
Search Strategy:
•The key difference between search algorithms is how they decide which node to expand next. This
choice is called the search strategy.
Repeated States and Loopy Paths:
•In the search tree (Figure 3.6), you might see repeated states. For instance, after traveling from "Arad" to
"Sibiu," you might generate the state "Arad" again, creating a loopy path (a path that loops back to where
you started).
•Loopy paths can cause the search tree to become infinitely large, even though the actual number of unique
states (like cities) is limited.

Avoiding Loopy and Redundant Paths:


•Fortunately, loops never need to be considered. Because path costs add up, following a loop can never reduce the total cost, so a loopy path is always worse than the same path with the loop removed.
•This concept extends to redundant paths: different routes that lead to the same state. For example, traveling
from "Arad" to "Sibiu" directly (140 km) is clearly better than a longer, roundabout way through multiple
cities (297 km).
Thus, there's no need to keep multiple paths to the same state; the goal can always be reached through the best
available path.

The two algorithms, TREE-SEARCH and GRAPH-SEARCH, are designed to find a solution to a problem
by exploring a series of possible actions. The main difference between them is how they handle repeated or
redundant states to avoid unnecessary exploration.
TREE-SEARCH:
1.Initialize the Frontier: Start with the initial state of the problem in the frontier (the list of unexplored
nodes).
2.Loop Until a Solution or Failure:
1. If the frontier is empty (no more nodes to explore), return failure.
2. Otherwise, pick a leaf node from the frontier (an unexplored node) and remove it.
3. If the node is a goal state, return the solution.
4. If not, expand the node (generate new states by taking actions) and add the new nodes to the frontier.
In TREE-SEARCH, every node is considered, even if it has already been explored, which may lead to
revisiting the same states multiple times (such as loopy or redundant paths).
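The TREE-SEARCH loop above can be sketched in Python. This is an illustrative sketch, not AIMA's pseudocode verbatim: the `Problem` interface names (`initial_state`, `actions`, `result`, `goal_test`) are assumptions for this example, and each frontier entry carries the action path taken so far.

```python
from collections import deque

def tree_search(problem):
    """Generic TREE-SEARCH: no explored set, so states may be revisited."""
    frontier = deque([(problem.initial_state, [])])  # (state, actions so far)
    while frontier:
        state, path = frontier.popleft()             # FIFO -> breadth-first order
        if problem.goal_test(state):
            return path                              # action sequence to the goal
        for action in problem.actions(state):        # expand the chosen node
            child = problem.result(state, action)
            frontier.append((child, path + [action]))
    return None                                      # frontier empty: failure
```

Because nothing is remembered, a state space with loops (for example, two cities with a road in both directions) can make this loop generate the same states over and over.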

GRAPH-SEARCH:
1.Initialize the Frontier and Explored Set: Start with the initial state in the frontier and an explored set (to
keep track of already visited nodes).
2.Loop Until a Solution or Failure:
1. If the frontier is empty, return failure.
2. Pick a leaf node from the frontier and remove it.
3. If it’s a goal state, return the solution.
4. Add the node to the explored set (mark it as visited).
5. Expand the node, generating new states, and add the resulting nodes to the frontier only if they are not
already in the frontier or the explored set.
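The GRAPH-SEARCH steps above can be sketched the same way; the only changes from TREE-SEARCH are the explored set and the membership checks before inserting a child. The `Problem` interface names are again assumptions for illustration.

```python
from collections import deque

def graph_search(problem):
    """GRAPH-SEARCH: TREE-SEARCH plus an explored set and a frontier check."""
    frontier = deque([(problem.initial_state, [])])
    frontier_states = {problem.initial_state}   # states currently on the frontier
    explored = set()                            # states already expanded
    while frontier:
        state, path = frontier.popleft()
        frontier_states.discard(state)
        if problem.goal_test(state):
            return path
        explored.add(state)                     # mark as visited
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in frontier_states:
                frontier.append((child, path + [action]))
                frontier_states.add(child)
    return None
```

On a loopy map (Arad to Sibiu and back again) this terminates where TREE-SEARCH would not, because no state is ever expanded twice.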

Key Differences:
•TREE-SEARCH does not track which states have been explored, so it might revisit the same state multiple times,
which can lead to inefficiency (e.g., exploring loopy or redundant paths).
•GRAPH-SEARCH avoids revisiting the same state by maintaining an explored set. This prevents cycles and
redundant paths, making it more efficient for problems where the search space has repeated states


• In some problems, such as route-finding or sliding-block puzzles, paths can be reversible, meaning that actions can
undo themselves (e.g., moving left and then right returns you to the same place). This leads to redundant paths—
multiple ways of reaching the same state.
Example of Redundant Paths:
In a rectangular grid, each state (position) has four possible actions (up, down, left, right). While this creates many
possible paths, the number of unique states (positions) is much smaller than the number of paths. For instance, at depth d
(meaning after d steps), there may be 4^d possible paths, but only about 2d² unique positions to reach.
This results in:
•Huge search trees (many possible paths) with few distinct states.
•Following redundant paths could turn a manageable problem into a much larger one.
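The path-versus-state gap is easy to check numerically. The count below uses the exact number of grid cells within Manhattan distance d of the start, 2d² + 2d + 1, which is roughly 2d² for large d:

```python
def paths_vs_states(d):
    """Compare action sequences of length d with reachable grid cells."""
    paths = 4 ** d                    # four moves per step: up/down/left/right
    states = 2 * d * d + 2 * d + 1    # cells with |x| + |y| <= d (a diamond)
    return paths, states

print(paths_vs_states(10))  # (1048576, 221): a million paths, only 221 cells
```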

Avoiding Redundant Paths:


• The key to avoiding redundant paths is to remember where you’ve been, so you don’t revisit the same state
unnecessarily. To do this, we introduce an explored set (or closed list):
•Explored Set (Closed List): This data structure keeps track of every node that has already been expanded.
•Before adding a new node to the frontier (unexplored options), the algorithm checks if the node is already in the explored
set or the frontier itself. If it is, the node is discarded, preventing unnecessary work.

GRAPH-SEARCH Algorithm:
• The GRAPH-SEARCH algorithm is the improved version of TREE-SEARCH, using an explored set to avoid redundant
paths. This ensures that the search tree only grows on the state-space graph, which is a representation of all unique states.
Key Property of GRAPH-SEARCH:
In GRAPH-SEARCH, the frontier separates the state space into:
• Explored region: States that have already been visited and expanded.
• Unexplored region: States that have not yet been reached.
• Every path from the initial state to an unexplored state has to pass through a state in the frontier. This guarantees that the
algorithm systematically explores all reachable states until a solution is found, ensuring efficient exploration without
repetition.
• Thus, GRAPH-SEARCH systematically moves states from the unexplored region to the explored region, gradually
expanding the frontier and avoiding redundant paths, making it far more efficient in many cases.

Infrastructure for search algorithms

Given the components for a parent node, it is easy to see how to compute the necessary components for
a child node. The function CHILD-NODE takes a parent node and an action and returns the resulting
child node:

This function, `CHILD-NODE`, is used to create a new node in the search tree based on a given parent node and an action
applied to it. It is a fundamental part of search algorithms, as it helps expand nodes and generate new ones during the
search process.

Key Components of the `CHILD-NODE` Function:


1. Inputs:
- `problem`: The problem being solved, which includes information about the state space and actions.
- `parent`: The parent node, representing the current state from which the new child node will be created.
- `action`: The action to be applied to the parent node's state to generate the new child state.

2. Outputs:
- A new node (the child node) is returned, with the following attributes:
- STATE: The new state, generated by applying the given action to the parent's state. This is calculated using the
`problem.RESULT(parent.STATE, action)` function, which returns the state resulting from taking the action.
- PARENT: The parent node, indicating the source of this new node.
- ACTION: The action that was applied to generate this new node.
- PATH-COST: The total cost of reaching this node. It is the parent's path cost plus the cost of the action taken to move
from the parent to this new state. This is calculated using `parent.PATH-COST + problem.STEP-COST(parent.STATE,
action)`.

Explanation:
- STATE: The new situation (state) resulting from taking the given action in the current state.
- PARENT: Keeps track of where the node came from, helping to build the full path later.
- ACTION: Records the specific move or action that led to this state.
- PATH-COST: Keeps track of the cumulative cost of the path so far, allowing us to evaluate the cost of reaching a particular
node (important for algorithms like A* or uniform-cost search).

In summary, the `CHILD-NODE` function generates a new node (child) by applying an action to a parent node’s state and
updating the necessary information like state, action, and path cost for the search process.
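A minimal Python version of CHILD-NODE, using a plain dict as the node structure. The dict-based representation is an illustrative choice, not a requirement; `problem.result` and `problem.step_cost` mirror the RESULT and STEP-COST functions named above.

```python
def child_node(problem, parent, action):
    """Create the child node reached by applying `action` in parent's state."""
    state = problem.result(parent['state'], action)
    return {
        'state': state,                    # STATE: the resulting situation
        'parent': parent,                  # PARENT: back-pointer for path recovery
        'action': action,                  # ACTION: the move that produced it
        'path_cost': parent['path_cost']   # PATH-COST: parent's cost + step cost
                     + problem.step_cost(parent['state'], action),
    }
```

Following the PARENT pointers from a goal node back to the root reconstructs the full solution path.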

To handle the frontier in a search algorithm, we need a way to store nodes that are waiting to be explored. This is done
using a queue, which allows us to manage the order in which nodes are processed.

Basic Queue Operations:


1. EMPTY?(queue): Checks if the queue is empty. Returns `true` if there are no more elements (nodes) left to explore.
2. POP(queue): Removes and returns the first element of the queue. This is the next node to be explored based on the
queue's strategy.
3. INSERT(element, queue): Adds a new element (node) to the queue. The node is inserted in the appropriate position,
depending on the type of queue.

Different Queue Strategies:


- FIFO (First-In, First-Out): The first node added is the first one to be removed. Used in breadth-first search.
- LIFO (Last-In, First-Out): The last node added is the first one to be removed. Used in depth-first search (acts like a
stack).
- Priority Queue: Nodes are removed based on their priority (like path cost or heuristic), not the order they were added.
Used in algorithms like A* or uniform-cost search.

This queue structure ensures that nodes are expanded in the right order, according to the search algorithm's strategy.
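In Python, all three strategies are available in the standard library; a hypothetical frontier of Romania-map nodes keyed by path cost illustrates the priority queue:

```python
from collections import deque
import heapq

# FIFO queue (breadth-first search): pop from the front, insert at the back.
fifo = deque([1, 2, 3])
assert fifo.popleft() == 1                   # first in, first out

# LIFO stack (depth-first search): insert and pop at the same end.
lifo = [1, 2, 3]
assert lifo.pop() == 3                       # last in, first out

# Priority queue (uniform-cost search, A*): pop the lowest-cost entry.
pq = []
heapq.heappush(pq, (140, 'Sibiu'))
heapq.heappush(pq, (75, 'Zerind'))
heapq.heappush(pq, (118, 'Timisoara'))
assert heapq.heappop(pq) == (75, 'Zerind')   # lowest path cost comes out first
```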

Measuring problem-solving performance

When evaluating search algorithms, there are four key criteria to consider:

1. Completeness: Will the algorithm always find a solution if one exists?


2. Optimality: Does the algorithm find the best possible solution (the one with the lowest cost)?
3. Time Complexity: How long does the algorithm take to find a solution?
4. Space Complexity: How much memory does the algorithm use?

Complexity Factors:
Search algorithms' performance is often described using three important factors:
- b: The branching factor – the maximum number of successors (child nodes) from any node.
- d: The depth of the shallowest goal – how many steps it takes to reach the nearest goal.
- m: The maximum path length – the longest path in the state space.

Time and Space:


- Time complexity is measured by how many nodes are generated.
- Space complexity is measured by the maximum number of nodes stored in memory at any time.

Cost:
- Search cost: The time taken by the algorithm to search.
- Total cost: The combination of search cost and the solution path cost (e.g., the length of a path in kilometers).

For example, if you're driving from Arad to Bucharest, the search cost is how long the algorithm takes to find a route, and
the solution cost is the length of the route in kilometers. You might combine both to find the best trade-off between
computation time and path length to make the search efficient.

UNINFORMED SEARCH STRATEGIES


This section talks about uninformed search strategies, also known as blind search. Here’s a simple breakdown:

Uninformed Search:
- What It Is: These strategies explore possible paths in a problem without any extra information about the states.
They only know how to generate new states and whether a state is a goal or not.

Key Characteristics:
- Node Expansion: Different uninformed search strategies vary based on the order in which they explore nodes.
They don't have a method to decide which node might lead to a solution faster.

Informed Search Comparison:


- Informed search strategies, also called heuristic search, use extra information to figure out which nodes are more
promising to explore. This usually makes them more efficient than uninformed searches.

In summary, uninformed search strategies work blindly without any extra hints about which paths to take, while informed
searches use additional knowledge to make smarter decisions.

Breadth-first search
Breadth-First Search (BFS) is a straightforward search strategy that explores nodes level by level.

How BFS Works:


1. Expansion Order: Start with the root node, then expand all its immediate successors (children), followed by their
successors, and so on. This means all nodes at a particular depth in the search tree are expanded before moving deeper.

2. Queue Structure: BFS uses a FIFO queue to keep track of which nodes to explore next. When a new node is added, it
goes to the back of the queue, ensuring that older nodes (which are at a shallower depth) are expanded first.

3. Goal Test: The algorithm checks if a node is the goal as soon as it is generated (created) rather than waiting until it is
selected for expansion. This helps in quickly identifying a solution.

4. Avoiding Redundancy: BFS avoids adding paths to states already in the queue or explored set, ensuring that it always
finds the shortest path to any node on the frontier.

Key Points:
- Level by Level: BFS explores nodes one level at a time.
- Shallowest Paths: Because of its structure, BFS always finds the shallowest goal (the path with the fewest steps) if a solution exists.
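Putting the four points together gives a compact BFS sketch (same assumed `Problem` interface used for the earlier sketches); note that the goal test runs when a child is generated, not when it is later selected for expansion:

```python
from collections import deque

def breadth_first_search(problem):
    """BFS with the early goal test described above."""
    start = problem.initial_state
    if problem.goal_test(start):
        return []                            # trivial solution: already at goal
    frontier = deque([(start, [])])
    seen = {start}                           # states on the frontier or expanded
    while frontier:
        state, path = frontier.popleft()     # FIFO: shallowest node first
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in seen:
                if problem.goal_test(child):     # test at generation time
                    return path + [action]
                seen.add(child)
                frontier.append((child, path + [action]))
    return None
```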

Measured against the four criteria discussed earlier: BFS is complete (given a finite branching factor b), optimal when all step costs are equal, and requires O(b^d) time and O(b^d) space, where d is the depth of the shallowest goal.

BFS systematically explores each level of the search tree, ensuring that all possible states are considered in
order before moving deeper. It keeps track of explored states to avoid unnecessary work and guarantees that
the first solution found is the shallowest one if one exists. This algorithm is effective for finding solutions in
problems with finite state spaces.


Challenges of Breadth-First Search (BFS), especially its high resource demands when searching for solutions in
complex problems.
Main Points:
1. Exponential Growth:
- BFS's time and memory requirements increase rapidly as the depth of the search tree increases. This means that the
deeper the solution is, the more resources are needed to find it.
2. Resource Assumptions:
- It’s assumed that a computer can generate 1 million nodes every second, and each node uses about 1000 bytes of
memory.
3. Memory Issues:
- At greater depths (like depth 12), BFS can require huge amounts of memory (around 1 petabyte), which is far beyond
what a typical computer can handle.


4. Time Issues:
- Even if you can wait a long time for a solution, it can take an impractical amount of time. For example, finding a solution at
depth 16 could take about 350 years.

5. Conclusion:
- Because of these high demands on time and memory, BFS can only be used effectively for simpler problems. For more
complex ones, other search strategies are needed to make the search process feasible.

In short, while BFS is a straightforward method, its high resource requirements make it impractical for deep searches,
highlighting the need for more efficient algorithms.
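The figures quoted above (about a petabyte at depth 12, about 350 years at depth 16) follow directly from the stated assumptions. A quick sanity check, assuming branching factor b = 10:

```python
def bfs_cost(b, d, nodes_per_sec=1e6, bytes_per_node=1000):
    """Total nodes generated by BFS to depth d: b + b^2 + ... + b^d."""
    nodes = sum(b ** i for i in range(1, d + 1))
    seconds = nodes / nodes_per_sec
    gigabytes = nodes * bytes_per_node / 1e9
    return nodes, seconds, gigabytes

_, _, gb = bfs_cost(10, 12)
print(f"depth 12: {gb / 1e6:.1f} petabytes of memory")        # about 1 PB
_, secs, _ = bfs_cost(10, 16)
print(f"depth 16: {secs / 3.15e7:.0f} years of computation")  # about 350 years
```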


Uniform-cost search

The passage explains Uniform-Cost Search, a variation of Breadth-First Search that is optimal for problems where step
costs can vary. Here are the key points:

Key Points:

1. Optimality with Equal Step Costs:


- When all step costs are the same, Breadth-First Search (BFS) is optimal because it always expands the shallowest
(closest to the start) node first.

2. Uniform-Cost Search:
- Uniform-Cost Search improves upon BFS by expanding the node with the lowest path cost g(n), which is the total cost to reach that node from the start. This ensures that the search finds the optimal path even when step costs differ.

3. Using a Priority Queue:


- The frontier (the set of nodes to explore) is stored in a priority queue. This queue is ordered by the path costs, allowing
the algorithm to always expand the least costly node next.


4. Goal Test Timing:


- In Uniform-Cost Search, the goal test (checking if a node is the goal) is performed when the node is selected for
expansion, not when it’s generated. This is important because the first goal node generated might not be the best
(optimal) path.

5. Updating Paths:
- If a better path to a node already in the frontier is found, the algorithm updates the frontier to reflect this new, lower-
cost path. This ensures that the search remains focused on the most promising routes.

Summary:
In essence, Uniform-Cost Search enhances Breadth-First Search by prioritizing nodes based on their total path cost,
ensuring that it finds the most optimal solution regardless of the costs of actions. This makes it suitable for more
complex problems where step costs can vary.


The function UNIFORM-COST-SEARCH is an algorithm designed to find the optimal solution to a search problem,
particularly when the costs of steps (or actions) may vary. Here’s a breakdown of the function in simple terms:

Function Breakdown:

1. Initialization:
- Create the Initial Node: A node is created representing the starting state of the problem with a path cost of 0.
- Set Up the Frontier: The frontier is a priority queue that will hold nodes to be explored, ordered by their path cost.
Initially, it contains just the starting node.
- Create Explored Set: This is an empty set that will keep track of the states that have already been explored.

2. Main Loop:
- The loop continues until a solution is found or the frontier is empty (meaning there are no more nodes to explore).

3. Check if Frontier is Empty:


- If the frontier is empty, it returns failure, indicating that no solution exists.

4. Choose Node to Expand:


- The node with the lowest path cost is removed from the frontier for expansion.


5. Goal Test:
- The algorithm checks if this node is the goal state. If it is, the solution is returned.

6. Add Node to Explored Set:


- If the node is not the goal, its state is added to the explored set.

7. Expand the Node:


- For each possible action from the current node, a child node is generated.

8. Check for Child Node States:


- If the state of the child node is not in the explored set or the frontier, it is inserted into the frontier.
- If the state of the child node is already in the frontier but has a lower path cost than the existing node in the frontier, the
existing node is replaced with the new child node (because the child represents a better path).

Summary:
The UNIFORM-COST-SEARCH algorithm effectively explores paths based on their total cost, ensuring it always expands
the least costly option. This approach guarantees finding the optimal solution, as it considers all potential paths and their
associated costs, updating the frontier as necessary to prioritize more efficient routes.
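A Python sketch of UNIFORM-COST-SEARCH using `heapq`. One liberty is taken: since `heapq` cannot replace an entry in place, the cheaper path is simply pushed as a new entry and stale (more expensive) copies are skipped when popped, which is functionally equivalent to the replacement rule in step 8. The `Problem` interface names are the same illustrative assumptions as before.

```python
import heapq

def uniform_cost_search(problem):
    """Always expand the frontier node with the lowest path cost g(n)."""
    start = problem.initial_state
    frontier = [(0, start, [])]               # priority queue of (g, state, path)
    best_g = {start: 0}                       # cheapest known cost to each state
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state in explored:
            continue                          # stale entry: cheaper copy already expanded
        if problem.goal_test(state):          # goal test at expansion, not generation
            return g, path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action)
            if child not in explored and g2 < best_g.get(child, float('inf')):
                best_g[child] = g2            # found a cheaper path: push it
                heapq.heappush(frontier, (g2, child, path + [action]))
    return None                               # frontier empty: failure
```

Returning g alongside the path makes it easy to confirm that the cost of the returned solution is optimal.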


The Uniform-Cost Search algorithm works by always expanding the node that has the lowest total path cost. Here’s how it
operates, illustrated with the example of finding a route from Sibiu to Bucharest:
Example Walkthrough:
1. Starting Node: The algorithm starts at Sibiu. Its successors are Rimnicu Vilcea (cost 80) and Fagaras (cost 99).
2. Expanding Nodes: It expands Rimnicu Vilcea first because it has the lower cost (80), which adds Pitesti to the frontier with a total cost of 177 (80 + 97). It then expands Fagaras (cost 99), which adds Bucharest with a total cost of 310 (99 + 211).
3. Goal Node Generated: A goal node (Bucharest) has now been generated, but Uniform-Cost Search continues exploring.


4. Further Exploration: The algorithm expands Pitesti next, creating a new path to Bucharest with a total cost of 278 (80 + 97 + 101). It compares this new path to the existing path to Bucharest (310); since 278 is cheaper, it discards the old path.

5. Optimal Path: Bucharest now has an optimal cost of 278. When it is finally selected for expansion, that path is guaranteed to be the best option available.

Key Points:
•Guaranteed Optimality: Whenever a node n is selected for expansion in Uniform-Cost Search, the optimal path to that node has already been found. If a cheaper path existed, it would have been selected for expansion first because of its lower cost.
•Non-Negative Costs: Since all step costs are non-negative, adding more nodes to a path can never reduce its total
cost.
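The walkthrough can be replayed numerically with a priority queue over the same road costs; watch the recorded cost to Bucharest drop from 310 (via Fagaras) to 278 (via Pitesti) before Bucharest is ever popped for expansion:

```python
import heapq

# Road costs from the Sibiu-to-Bucharest fragment of the Romania map.
roads = {'Sibiu': [('Rimnicu Vilcea', 80), ('Fagaras', 99)],
         'Rimnicu Vilcea': [('Pitesti', 97)],
         'Fagaras': [('Bucharest', 211)],
         'Pitesti': [('Bucharest', 101)],
         'Bucharest': []}

frontier, best = [(0, 'Sibiu')], {'Sibiu': 0}
while frontier:
    g, city = heapq.heappop(frontier)
    if city == 'Bucharest':
        break                        # popped for expansion: this cost is optimal
    for nxt, step in roads[city]:
        if g + step < best.get(nxt, float('inf')):
            best[nxt] = g + step     # 278 via Pitesti supersedes 310 via Fagaras
            heapq.heappush(frontier, (g + step, nxt))

print(g)  # 278
```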

Iterative deepening depth-first search

What It Is:
•Iterative Deepening Search is a search strategy that combines the advantages of both Depth-First Search (DFS) and
Breadth-First Search (BFS).

How It Works:
1.Gradual Depth Increase: The algorithm starts with a depth limit of 0 and gradually increases it (1, 2, 3, etc.) until it
finds the goal node.
2.Repeated Searches: For each depth limit, it performs a Depth-First Search until it reaches that limit. If it doesn't find
the goal, it increases the limit and repeats the process.
3.Finding the Goal: The search continues until it discovers the goal node at depth d (the depth of the shallowest goal).


Benefits:
•Memory Efficiency: The memory requirement is modest, specifically O(bd), where b is the branching factor and d is the
depth of the shallowest solution. This is similar to DFS.
•Completeness: Iterative deepening is complete, meaning it will find a solution if one exists (given a finite branching
factor).
•Optimality: It is optimal when the path cost does not decrease with depth, similar to BFS.


Explanation
1. Outer Loop: The algorithm iterates through increasing depth limits, starting from 0, until a solution is found.
2. Depth-Limited Search: For each depth limit, it calls the DEPTH-LIMITED-SEARCH function, which attempts to find a solution within the specified depth. If the search reaches the depth limit without finding a goal, it returns a special value called "cutoff".
3. Check for Solution: After each call to DEPTH-LIMITED-SEARCH, the result is checked:
   - If the result is not "cutoff", a solution has been found (or failure is proven), and the algorithm returns it.
   - If the result is "cutoff", the search reached the depth limit without finding a goal, and the outer loop continues with the next depth limit.
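The two-function structure above translates directly to Python. In this sketch, `'cutoff'` represents the cutoff value, `None` means definite failure, and a list of actions is a solution; the `Problem` interface names are the same illustrative assumptions used in the earlier sketches.

```python
def depth_limited_search(problem, state, limit, path=()):
    """Recursive DLS: returns a solution path, 'cutoff', or None (failure)."""
    if problem.goal_test(state):
        return list(path)
    if limit == 0:
        return 'cutoff'                  # limit reached; the goal may lie deeper
    cutoff_occurred = False
    for action in problem.actions(state):
        child = problem.result(state, action)
        result = depth_limited_search(problem, child, limit - 1, path + (action,))
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result                # a solution bubbled up from below
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening_search(problem, max_depth=50):
    """Run DLS with depth limits 0, 1, 2, ... until the result is not 'cutoff'."""
    for depth in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial_state, depth)
        if result != 'cutoff':
            return result                # solution found, or None: proven failure
```

Note that the depth limit also bounds the recursion, so even a loopy state space (such as two states that lead back to each other) cannot make a single depth-limited pass run forever.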


Summary
•Iterative Deepening Search efficiently combines depth-first search’s low memory usage with the completeness of
breadth-first search, making it ideal for large search problems.

•The method’s node generation strategy helps mitigate concerns about inefficiency due to repeated node expansion.
