Module 2
Algorithms
Gahan A V
Assistant Professor
Department of Electronics and Communication Engineering
Bangalore Institute of Technology
• The reflex agents mentioned in Chapter 2 are the simplest type of agents.
• These agents choose actions based on a simple rule: "If I'm in this state, take this action." They don’t
think about the future or the long-term consequences of their actions.
• This works fine in small or simple environments, but in more complex situations, where there are many
possible states or outcomes, this approach fails because there’s just too much information to store or learn.
• In contrast, goal-based agents are smarter because they think about the future. Instead of just reacting to
the current state, they consider what will happen next and whether their actions will help them reach a
specific goal.
• This chapter talks about a specific type of goal-based agent called a problem-solving agent. These
agents simplify their thinking by treating each state of the world as a whole (like snapshots), without
worrying about the details inside that state.
• They use this simplified view to figure out the best sequence of actions to solve a problem.
• More advanced goal-based agents, called planning agents, handle more complex problems.
In simple terms:
1. Uninformed search algorithms: These algorithms don't have any extra knowledge about the problem
beyond its basic definition. They explore all possibilities blindly, without knowing which paths are better.
Some uninformed algorithms can solve any solvable problem, but they often do so very inefficiently (they
take a lot of time or resources).
2. Informed search algorithms: These algorithms are smarter because they use additional information (or
"guidance") about the problem to help them focus on more promising paths. This allows them to solve
problems more efficiently than uninformed search algorithms.
In summary:
- Uninformed search: No extra information, slower, tries everything.
- Informed search: Uses guidance to find solutions faster.
PROBLEM-SOLVING AGENTS
In this scenario, we are looking at an intelligent agent that tries to make decisions to maximize its performance. To do
this effectively, the agent often sets a specific goal and focuses on achieving it.
• Imagine an agent on a holiday in Arad, Romania, with many objectives like improving its suntan, learning
Romanian, sightseeing, and enjoying the nightlife.
• The agent faces a complex decision problem because there are many factors to consider.
• But let’s say this agent has a nonrefundable flight from Bucharest the next day.
• Now, instead of juggling multiple activities, the agent can adopt a clear goal: get to Bucharest in time.
• This simplifies things because any action that doesn't help the agent reach Bucharest can be ignored.
This means that by setting a clear goal—reaching Bucharest—the agent no longer has to consider
unnecessary actions. For example, activities like sightseeing or enjoying the nightlife, which don't help the
agent get to Bucharest, can be ignored.
• Problem Formulation
Next, the agent figures out how to reach its goal. This is called problem formulation. Instead of focusing on tiny details like
how to steer the car slightly, the agent thinks about bigger actions, like driving from one town to another. This approach
helps simplify decision-making by reducing the number of unnecessary steps.
• Exploring Options
Now the agent must decide how to get from Arad to Bucharest. There are three roads to choose from: one to Sibiu, one to
Timisoara, and one to Zerind. The agent isn’t sure which road is best since it doesn't know Romanian geography.
If the agent has no information (like a map), it might have to randomly pick one road. However, if the agent has a map, it can
look at future possibilities. The map shows what happens after going to each town and helps the agent figure out which
path will eventually lead to Bucharest. By using this information, the agent can create a plan to successfully reach its goal.
Key Ideas
•Problem formulation is deciding what actions and states to consider to achieve the goal.
•Without information, the agent might have to guess; with information (like a map), the agent can plan a better path
toward the goal.
In summary, intelligent agents set goals, plan their actions based on available information, and use that planning to
reach their objectives more efficiently.
1. Initial State
- This is where the agent starts. For our tourist in Romania, the initial state is In(Arad), meaning they are in the city of
Arad.
2. Actions
- These are the possible things the agent can do from a given state. For example, from In(Arad), the tourist can choose
to Go(Sibiu), Go(Timisoara), or Go(Zerind). So, the set of actions available is {Go(Sibiu), Go(Timisoara),
Go(Zerind)}.
3. Transition Model
- This describes what happens when an action is taken. It’s a function called RESULT(s, a), which tells us the new
state after doing action a in state s. For example, if the agent is In(Arad) and takes the action Go(Zerind), the result
will be In(Zerind).
4. State Space
- This is the collection of all possible states the agent can reach from the initial state by taking various actions. It forms a
graph where each state is a node and the actions connecting them are the links. For example, a map of Romania can be
seen as a state-space graph, showing all the towns and roads.
5. Goal Test
- This checks if the agent has reached its goal. For our tourist, the goal is to be In(Bucharest). The goal test will confirm if
the current state is Bucharest.
6. Path Cost
- This is a way to measure how good or efficient a particular path is. The agent assigns a cost to each action based on its
performance measure. For example, if time is important, the cost could be based on the distance traveled. The total cost of
a journey is the sum of the costs of all actions taken.
Summary
When these elements are combined, they create a problem that can be solved by a problem-solving algorithm. A solution
is a sequence of actions that takes the agent from the initial state to the goal state. The quality of the solution is measured
by its path cost, with an optimal solution being the one with the lowest cost among all possible solutions.
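To make these six components concrete, here is a minimal Python sketch of the Romania route-finding problem, using a small excerpt of the road map (the distances in km are the usual textbook values). The class and method names are illustrative choices, not a fixed API.

```python
# A minimal sketch of the six problem components for the Romania map excerpt.
ROADS = {
    "Arad":          {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":         {"Arad": 140, "Fagaras": 99, "RimnicuVilcea": 80},
    "Fagaras":       {"Sibiu": 99, "Bucharest": 211},
    "RimnicuVilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":       {"RimnicuVilcea": 97, "Bucharest": 101},
    "Bucharest":     {"Fagaras": 211, "Pitesti": 101},
    "Timisoara":     {"Arad": 118},
    "Zerind":        {"Arad": 75},
}  # 4. the state space is implicit in this graph

class RouteProblem:
    def __init__(self, initial="Arad", goal="Bucharest"):
        self.initial = initial            # 1. initial state: In(Arad)
        self.goal = goal

    def actions(self, state):             # 2. actions available in a state,
        return list(ROADS[state])         #    e.g. {Go(Sibiu), Go(Timisoara), Go(Zerind)}

    def result(self, state, action):      # 3. transition model RESULT(s, a):
        return action                     #    Go(Zerind) in Arad yields In(Zerind)

    def goal_test(self, state):           # 5. goal test: In(Bucharest)?
        return state == self.goal

    def step_cost(self, state, action):   # 6. path cost is the sum of step costs
        return ROADS[state][action]
```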
How It Works
1.Inputs:
1. percept: This is the current information the agent receives about its environment.
2.Persistent Variables:
1. seq: A sequence of actions to be taken, starting off as empty.
2. state: Describes the current situation of the agent in the world.
3. goal: The objective the agent wants to achieve, initially set to null (nothing).
4. problem: A formulation of the problem that needs solving.
Steps:
1.Update State:
1. The agent updates its state based on the current percept it receives from the environment.
2.Formulate Goal and Problem:
1. If the action sequence seq is empty (meaning there are no actions to take yet), the agent:
1. Formulates a goal based on the current state.
2. Creates a problem formulation using the current state and the newly defined goal.
3.Search for Actions:
1. The agent searches for a sequence of actions that can solve the problem. This is stored in seq.
2. If the search fails (no solution found), the agent returns a null action (essentially doing nothing).
4. Execute Actions:
1. The agent takes the first action from the sequence seq.
2. It then removes this action from the sequence.
3. Finally, it returns the action to be executed.
5.Repeat: After completing the action, the agent repeats the process, formulating a new goal and starting over.
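A compact sketch of this agent loop in Python follows. The four helpers are passed in as functions because their details depend on the task; the names here are illustrative, not a fixed API.

```python
# A sketch of the problem-solving agent loop described above.
class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []                 # persistent: planned actions, initially empty
        self.state = None             # persistent: current view of the world
        self.goal = None              # persistent: current objective
        self.problem = None           # persistent: current problem formulation
        self._update_state = update_state
        self._formulate_goal = formulate_goal
        self._formulate_problem = formulate_problem
        self._search = search

    def __call__(self, percept):
        self.state = self._update_state(self.state, percept)   # 1. update state
        if not self.seq:                                       # 2. nothing planned yet?
            self.goal = self._formulate_goal(self.state)
            self.problem = self._formulate_problem(self.state, self.goal)
            self.seq = self._search(self.problem) or []        # 3. search for a plan
            if not self.seq:
                return None                                    # search failed: do nothing
        return self.seq.pop(0)                                 # 4. execute the first action
```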
Formulating problems
In this section, we discuss the concept of abstraction in problem-solving, especially in the context of
planning a trip to Bucharest. Explanation follows:
Key Points:
1. Validity of Abstraction:
- An abstraction is considered valid if any abstract solution can be expanded into a detailed solution. For instance, if
there’s a path from a detailed state in Arad to a detailed state in Sibiu, it means the abstraction works.
2. Ease of Execution:
- The abstract actions should be easier to carry out than dealing with all the details of the original problem. For example,
deciding to drive from one town to another is simpler than planning every single action along the way.
3. Importance of Abstraction:
- Abstraction allows intelligent agents to focus on relevant information without being overwhelmed by the complexities
of the real world. Without useful abstractions, agents would struggle to make decisions effectively.
Toy Problems
•Definition: Toy problems are simplified scenarios created primarily to demonstrate and test problem-solving
methods.
•Characteristics:
• They have a clear and concise description.
• They can be easily understood and replicated by researchers.
• They are often used for comparing the performance of different algorithms.
•Examples: Classic puzzles like the Tower of Hanoi, the 8-puzzle, or algorithms used in simple board games.
Real-World Problems
•Definition: Real-world problems are practical issues that people face and care about solving.
•Characteristics:
• They are more complex and often lack a single, agreed-upon description.
• They involve various factors and uncertainties that make them harder to define precisely.
•Examples: Navigating traffic in a city, optimizing supply chain logistics, or diagnosing medical conditions.
Toy problems
The vacuum world is a classic example used to illustrate problem-solving in artificial intelligence.
1. States:
- The state of the environment is defined by the agent's location and the dirt locations.
- The agent can be in one of two locations (let’s say Location A and Location B).
- Each location may or may not contain dirt.
- Therefore, there are 2 × 2² = 8 possible states in total (2 agent positions × 2 dirt possibilities for each of the 2 squares).
A larger environment with n locations has n × 2^n states.
2. Initial State:
- Any one of the possible states can be chosen as the initial state where the problem starts.
3. Actions:
- The agent can perform three possible actions in this environment:
- Left: Move to the left location.
- Right: Move to the right location.
- Suck: Clean the current location.
- In larger environments, actions like Up and Down could also be added.
4. Transition Model:
- The effects of the actions are defined as follows:
- Moving Left at the leftmost location has no effect (the agent cannot move further left).
- Moving Right at the rightmost location has no effect.
- Sucking in a clean square has no effect (the square is already clean).
- This defines how the state changes in response to each action.
5. Goal Test:
- The goal of the agent is to ensure that all squares are clean. This test checks whether the agent has achieved this state.
6. Path Cost:
- Each action taken (each step) costs 1 unit. Therefore, the total path cost is simply the number of steps taken to reach the
goal.
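As a sketch, the two-location vacuum world's transition model and goal test can be written as follows; representing a state as a location plus a frozenset of dirty squares is one convenient choice, not the only one.

```python
# A sketch of the vacuum-world transition model under the assumptions above.
# A state is (agent_location, dirt), where dirt is a frozenset of dirty squares
# and the two locations are labelled "A" (left) and "B" (right).
def vacuum_result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)              # no effect if already at the leftmost square
    if action == "Right":
        return ("B", dirt)              # no effect if already at the rightmost square
    if action == "Suck":
        return (loc, dirt - {loc})      # no effect on an already clean square
    raise ValueError(action)

def vacuum_goal_test(state):
    return not state[1]                 # goal: all squares clean

# Example: sucking in A from the dirtiest state leaves only B dirty.
s = ("A", frozenset({"A", "B"}))
s = vacuum_result(s, "Suck")            # -> ("A", frozenset({"B"}))
```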
8-puzzle
The 8-puzzle is a classic problem in artificial intelligence and problem-solving. Here’s a breakdown of how the 8-
puzzle can be formally defined:
1. States:
- A state describes the arrangement of the eight numbered tiles and the blank space on a 3x3 board.
- Each configuration can be represented by the positions of the tiles and the blank space in the nine squares.
2. Initial State:
- Any configuration of the tiles can be the initial state.
- Notably, only half of the possible configurations can reach a specific goal state, which is an important characteristic of the
puzzle.
3. Actions:
- The actions available are based on moving the blank space:
- Left: Slide a tile from the left into the blank space.
- Right: Slide a tile from the right into the blank space.
- Up: Slide a tile from above into the blank space.
- Down: Slide a tile from below into the blank space.
- The available actions depend on the position of the blank space (e.g., if the blank is in a corner, some moves won’t be
possible).
4. Transition Model:
- This model describes how the state changes when an action is taken.
- For example, if you apply the Left action to a state where the blank is next to tile 5, the result would be a new state where
tile 5 and the blank are swapped.
5. Goal Test:
- The goal test checks if the current state matches the target configuration of the tiles, which is usually a sequential
arrangement with the blank in a specific position (e.g., from 1 to 8 in order).
6. Path Cost:
- Each move (action) has a cost of 1. Therefore, the path cost is the total number of moves taken to reach the goal state
from the initial state.
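One possible encoding of the 8-puzzle's actions and transition model, assuming a state is a 9-tuple read row by row with 0 standing for the blank, is sketched below; the action names follow the convention above (the named tile slides into the blank).

```python
# A sketch of the 8-puzzle actions and transition model.
def actions_8puzzle(state):
    i = state.index(0)                  # index of the blank (0..8)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")    # a tile from the left can slide in
    if col < 2: moves.append("Right")   # a tile from the right can slide in
    if row > 0: moves.append("Up")      # a tile from above can slide in
    if row < 2: moves.append("Down")    # a tile from below can slide in
    return moves                        # corner/edge blanks naturally get fewer moves

def result_8puzzle(state, action):
    i = state.index(0)
    j = i + {"Left": -1, "Right": +1, "Up": -3, "Down": +3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]             # the tile and the blank swap places
    return tuple(s)
```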
8-queens problem
The 8-queens problem is a puzzle where the goal is to place 8 queens on a chessboard in such a way that none of
the queens can attack each other. Queens in chess can attack in straight lines (rows, columns) and diagonals, so
the challenge is to arrange the queens without any two being able to attack.
There are two main ways to approach this problem using search algorithms:
An incremental formulation is a method where you start with an empty board (no queens) and gradually add queens
one by one to build up the solution. Each action (or step) adds a queen to the board, and the goal is to keep adding
queens until you have all 8 placed without them attacking each other.
In short: you build the solution step by step, starting from nothing and adding pieces until the problem is solved.
•States: A state is just the current arrangement of queens on the board. It could have anywhere from 0 to 8 queens.
•Actions: You add one queen at a time to any empty square on the board.
•Goal: To have 8 queens on the board, with none attacking each other.
But if you try every possible sequence of placing queens, it leads to an enormous number of possibilities (about 180
trillion).
A smarter approach is to never place a queen in a spot where it could be attacked by another queen. This reduces the number
of possibilities from 180 trillion to just 2,057.
A complete-state formulation is a method where you start with all 8 queens already placed on the board, but they
might be attacking each other. The goal is to move the queens around to find a configuration where no queens can
attack each other.
In this approach, you're not adding queens but adjusting their positions to solve the problem. The process of moving queens
(or path cost) doesn’t matter—only the final arrangement where no queens are attacking each other is important.
The improved incremental formulation, by contrast, looks like this:
•States: Any arrangement of 0 to 8 queens in which no queen attacks another.
•Starting Point: An empty board with 0 queens; queens are added one at a time.
•Actions: Add a queen to a square where it is not attacked by any queen already placed.
•Goal: All 8 queens on the board with none attacking each other.
This smaller number of possibilities makes it much easier to find a solution. However, when trying to solve the same
problem for a much larger chessboard (like with 100 queens), even this smarter method leaves a huge number of
possibilities, though it's still a big improvement.
In simple terms:
•The "naive" approach tries every possible way to place the queens, which takes forever.
•The "smart" approach only places queens in safe spots, cutting down the number of possibilities a lot, making it easier
to solve the puzzle.
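Here is a sketch of the "smart" incremental approach: a state is a tuple of queen rows, one entry per column already filled, and a queen is only ever placed on a square that is not attacked. Counting the complete placements reached this way yields the 92 known solutions of the puzzle.

```python
# A sketch of the safe-placement incremental formulation of 8-queens.
def safe(rows, new_row):
    col = len(rows)                               # column index for the new queen
    return all(r != new_row and                   # not in the same row
               abs(r - new_row) != col - c        # not on the same diagonal
               for c, r in enumerate(rows))

def extend(rows):
    return [rows + (r,) for r in range(8) if safe(rows, r)]

# Grow placements column by column, keeping only non-attacking states.
states = [()]
for _ in range(8):
    states = [t for s in states for t in extend(s)]
print(len(states))   # 92: the number of distinct 8-queens solutions
```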
Real-world problems
The route-finding problem is about finding the best path between two locations, usually through a network
of connected points (like cities or intersections). This problem can be seen in real life through various
applications, such as:
•Driving directions: When apps like Google Maps help you navigate from one place to another by showing
the best route.
•Computer networks: When video streams or data are sent from one computer to another, routing decides
the best path for the data to travel across the internet.
•Military operations: Planning the best routes for moving troops or supplies.
•Airline travel: Finding the best flights between two airports, potentially with layovers.
Some of these problems are relatively simple, like finding the best driving route, while others involve much
more complex decisions and constraints, such as coordinating multiple flights or handling the massive data
traffic on the internet.
•States: Each "state" represents where you are (airport) and the time. It also keeps track of things like past flights, fare
classes (economy/business), and whether the flight is domestic or international.
•Initial state: The starting point, which is based on the user’s search (e.g., where you’re flying from, where you want to
go, and when).
•Actions: The available options are any flights leaving from your current airport after your current time. You also need to
consider things like enough time for layovers or transfers.
•Transition model: When you take a flight, you update your current location to the flight's destination and the current time
to when the flight lands.
•Goal test: The goal is to reach the final destination specified by the user.
•Path cost: This is more complex. The total cost considers factors like the price of the flight, how long you wait between
flights, total travel time, seat class (economy/business), and even things like customs and immigration. Other factors could
include comfort, time of day, or airline points.
Travel websites use this problem-solving approach, but the real world is more complicated, so good systems also plan for
problems. For example, they might book backup flights in case your original flight gets delayed or canceled.
The touring problem is similar to the route-finding problem, but with one key difference: instead of just finding a path
from one city to another, the goal is to visit every city in a set at least once and return to the starting point.
•Problem: You need to visit all cities in a map (like in Figure 3.2) at least once, starting and ending in Bucharest.
•Actions: You can travel between cities that are connected by a direct road (like in route-finding problems).
•Initial state: You start in Bucharest with only Bucharest marked as visited. So the state is: In(Bucharest),
Visited({Bucharest}).
•Intermediate state: As you travel, the state updates to reflect your new location and the cities you've visited. For
example, if you move to Vaslui after visiting Bucharest and Urziceni, the state becomes: In(Vaslui), Visited({Bucharest,
Urziceni, Vaslui}).
•Goal: The goal is to end up in Bucharest after visiting all the cities. The system checks whether you have visited all the
cities and returned to Bucharest.
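A minimal sketch of the touring-problem state follows, showing why the state must carry the set of visited cities as well as the current city. The roads argument is assumed to be a road map like the ROADS table sketched earlier.

```python
# A sketch of the touring-problem state: a (city, visited) pair, where
# visited is a frozenset of the cities seen so far.
def touring_actions(state, roads):
    city, _visited = state
    return list(roads[city])                     # any directly connected city

def touring_result(state, next_city):
    city, visited = state
    return (next_city, visited | {next_city})    # remember where we have been

def touring_goal_test(state, all_cities):
    city, visited = state
    return city == "Bucharest" and visited == all_cities

start = ("Bucharest", frozenset({"Bucharest"}))  # In(Bucharest), Visited({Bucharest})
```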
The Traveling Salesperson Problem (TSP) is a special type of touring problem where the goal is to visit a set of cities
exactly once and return to the starting point, while minimizing the total distance traveled. In simple terms, you need to
find the shortest route that visits all the cities and brings you back to where you started.
•Problem: Visit each city exactly once and return to the starting city.
•Goal: Find the shortest possible route that completes the tour.
This problem is very difficult (known as NP-hard), meaning there is no efficient solution for large numbers of cities. As
the number of cities increases, the number of possible routes grows exponentially, making it hard to solve.
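The exponential growth is easy to see in a brute-force sketch that simply tries all (n-1)! orderings of the remaining cities; dist is an assumed symmetric distance table, and the function is only usable for very small n.

```python
# A brute-force TSP sketch illustrating the exponential blow-up.
from itertools import permutations

def shortest_tour(cities, dist, start):
    rest = [c for c in cities if c != start]
    best = None
    for order in permutations(rest):                  # (n-1)! candidate tours
        tour = (start, *order, start)                 # must return to the start
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)                     # keep the shortest so far
    return best
```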
Applications of TSP: Beyond routing a traveling salesperson, TSP algorithms have been applied to tasks such as planning the movements of automatic circuit-board drills and of stocking machines on shop floors.
The VLSI layout problem involves organizing millions of tiny components and connections on a microchip to make the
chip as efficient as possible.
The layout problem happens after the logical design of the chip is done and is usually split into two main tasks:
1.Cell layout:
1. Cells are small groups of components that perform specific functions on the chip.
2. Each cell has a fixed size and shape and needs to be connected to other cells.
3. The challenge is to place these cells on the chip so that they don’t overlap, and there’s enough space for the wires
(connections) between them.
2.Channel routing:
1. After placing the cells, you need to route the wires that connect them.
2. The task is to find paths for these wires through the gaps between the cells, ensuring they don’t cross each other
unnecessarily.
Robot navigation extends the route-finding problem by allowing robots to move in a continuous space rather than just
along discrete routes.
1.Dimensionality:
1. For a simple robot that moves on a flat surface, the navigation space is two-dimensional (length and width).
2. If the robot has limbs, wheels, or other moving parts, the space becomes multi-dimensional, making it even harder
to navigate.
Automatic assembly sequencing is another real-world problem: a robot must work out the order in which to put together the parts of an object.
1.Historical Context:
1. The first successful demonstration of automatic assembly sequencing by a robot was done by Freddy in 1972. Since
then, progress has gradually improved, allowing robots to assemble intricate items like electric motors efficiently.
2.Objective:
1. The main goal is to determine the correct order in which to assemble the components of an object. If parts are
assembled in the wrong order, it may become impossible to add later components without dismantling previously
completed work.
3.Feasibility Checking:
1. Assessing whether a certain step in the assembly process is possible is a complex geometrical search problem. This is
similar to the challenges faced in robot navigation, where the robot must determine its path and actions based on the
environment.
4.Efficiency in Action Generation:
1. Generating legal actions for assembly (i.e., determining which parts can be assembled next) is the most
computationally intensive aspect of assembly sequencing. A practical algorithm needs to avoid checking every
possible arrangement and focus only on a small subset of feasible actions.
In search algorithms, the goal is to explore one option (path) at a time while saving other options for
later in case the first doesn't lead to a solution.
For example, if we first choose to explore "Sibiu" after leaving "Arad":
1.Goal Test: Check if "Sibiu" is the goal state (it's not).
2.Expand Sibiu: Generate more options from "Sibiu"—like going to "Arad," "Fagaras," "Oradea," or
"Rimnicu Vilcea."
3.Leaf Nodes: Each of these new states is a leaf node—nodes that don't yet have child nodes.
4.Frontier: All current leaf nodes (like "Timisoara," "Zerind," and the new cities from "Sibiu") form the
frontier. The frontier is the set of nodes that are ready to be expanded next.
Essentially, you keep exploring the frontier (the set of available options) until you find the goal.
When searching for a solution, you keep expanding nodes on the frontier (the current set of unexplored
options) until either:
•A solution is found (you reach the goal), or
•There are no more states left to explore.
Search Strategy:
•The key difference between search algorithms is how they decide which node to expand next. This
choice is called the search strategy.
Repeated States and Loopy Paths:
•In the search tree (Figure 3.6), you might see repeated states. For instance, after traveling from "Arad" to
"Sibiu," you might generate the state "Arad" again, creating a loopy path (a path that loops back to where
you started).
•Loopy paths can cause the search tree to become infinitely large, even though the actual number of unique
states (like cities) is limited.
GRAPH-SEARCH:
1.Initialize the Frontier and Explored Set: Start with the initial state in the frontier and an explored set (to
keep track of already visited nodes).
2.Loop Until a Solution or Failure:
1. If the frontier is empty, return failure.
2. Pick a leaf node from the frontier and remove it.
3. If it’s a goal state, return the solution.
4. Add the node to the explored set (mark it as visited).
5. Expand the node, generating new states, and add the resulting nodes to the frontier only if they are not
already in the frontier or the explored set.
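A sketch of these steps in Python follows, using a FIFO frontier and a problem object offering initial, actions, result, and goal_test, as in the earlier route-finding sketch.

```python
# A sketch of GRAPH-SEARCH with a FIFO frontier and an explored set.
from collections import deque

def graph_search(problem):
    frontier = deque([(problem.initial, [])])    # (state, path of actions so far)
    explored = set()
    while frontier:                              # empty frontier -> failure
        state, path = frontier.popleft()         # pick a leaf node and remove it
        if problem.goal_test(state):
            return path                          # goal state: return the solution
        explored.add(state)                      # mark the node as visited
        for action in problem.actions(state):    # expand, generating new states
            child = problem.result(state, action)
            in_frontier = any(s == child for s, _ in frontier)
            if child not in explored and not in_frontier:
                frontier.append((child, path + [action]))
    return None                                  # frontier exhausted: failure
```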
Key Differences:
•TREE-SEARCH does not track which states have been explored, so it might revisit the same state multiple times,
which can lead to inefficiency (e.g., exploring loopy or redundant paths).
•GRAPH-SEARCH avoids revisiting the same state by maintaining an explored set. This prevents cycles and
redundant paths, making it more efficient for problems where the search space has repeated states
• In some problems, such as route-finding or sliding-block puzzles, paths can be reversible, meaning that actions can
undo themselves (e.g., moving left and then right returns you to the same place). This leads to redundant paths—
multiple ways of reaching the same state.
Example of Redundant Paths:
In a rectangular grid, each state (position) has four possible actions (up, down, left, right). While this creates many
possible paths, the number of unique states (positions) is much smaller than the number of paths. For instance, at depth d
(meaning after d steps), there may be 4^d possible paths, but only about 2d² unique positions to reach.
This results in:
•Huge search trees (many possible paths) with few distinct states.
•Following redundant paths could turn a manageable problem into a much larger one.
GRAPH-SEARCH Algorithm:
• The GRAPH-SEARCH algorithm is the improved version of TREE-SEARCH, using an explored set to avoid redundant
paths. The search tree it builds contains at most one copy of each state, so it can be seen as growing directly on the
state-space graph, which represents all unique states.
Key Property of GRAPH-SEARCH:
In GRAPH-SEARCH, the frontier separates the state space into:
• Explored region: States that have already been visited and expanded.
• Unexplored region: States that have not yet been reached.
• Every path from the initial state to an unexplored state has to pass through a state in the frontier. This guarantees that the
algorithm systematically explores all reachable states until a solution is found, ensuring efficient exploration without
repetition.
• Thus, GRAPH-SEARCH systematically moves states from the unexplored region to the explored region, gradually
expanding the frontier and avoiding redundant paths, making it far more efficient in many cases.
Given the components for a parent node, it is easy to see how to compute the necessary components for
a child node. The function CHILD-NODE takes a parent node and an action and returns the resulting
child node:
1. Inputs:
- The parent node and the action to apply in the parent's state, as described above.
2. Outputs:
- A new node (the child node) is returned, with the following attributes:
- STATE: The new state, generated by applying the given action to the parent's state. This is calculated using the
`problem.RESULT(parent.STATE, action)` function, which returns the state resulting from taking the action.
- PARENT: The parent node, indicating the source of this new node.
- ACTION: The action that was applied to generate this new node.
- PATH-COST: The total cost of reaching this node. It is the parent's path cost plus the cost of the action taken to move
from the parent to this new state. This is calculated using `parent.PATH-COST + problem.STEP-COST(parent.STATE,
action)`.
Explanation:
- STATE: The new situation (state) resulting from taking the given action in the current state.
- PARENT: Keeps track of where the node came from, helping to build the full path later.
- ACTION: Records the specific move or action that led to this state.
- PATH-COST: Keeps track of the cumulative cost of the path so far, allowing us to evaluate the cost of reaching a particular
node (important for algorithms like A* or uniform-cost search).
In summary, the `CHILD-NODE` function generates a new node (child) by applying an action to a parent node’s state and
updating the necessary information like state, action, and path cost for the search process.
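Here is a sketch of CHILD-NODE using a small Node class whose attributes mirror STATE, PARENT, ACTION, and PATH-COST; a helper shows how the PARENT pointers recover the full action sequence once a goal node is found.

```python
# A sketch of node bookkeeping for search, following the description above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # the world state this node represents
        self.parent = parent          # the node this one was generated from
        self.action = action          # the action that produced this node
        self.path_cost = path_cost    # cumulative cost from the initial state

def child_node(problem, parent, action):
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state, parent, action, cost)

def solution(node):
    """Follow PARENT pointers back to the root to recover the action path."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```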
To handle the frontier in a search algorithm, we need a way to store nodes that are waiting to be explored. This is done
using a queue, which allows us to manage the order in which nodes are processed.
This queue structure ensures that nodes are expanded in the right order, according to the search algorithm's strategy.
When evaluating search algorithms, there are four key criteria to consider: completeness (does the algorithm always find a solution when one exists?), optimality (does it find the lowest-cost solution?), time complexity (how long does it take?), and space complexity (how much memory does it need?).
Complexity Factors:
Search algorithms' performance is often described using three important factors:
- b: The branching factor – the maximum number of successors (child nodes) from any node.
- d: The depth of the shallowest goal – how many steps it takes to reach the nearest goal.
- m: The maximum path length – the longest path in the state space.
Cost:
- Search cost: The time taken by the algorithm to search.
- Total cost: The combination of search cost and the solution path cost (e.g., the length of a path in kilometers).
For example, if you're driving from Arad to Bucharest, the search cost is how long the algorithm takes to find a route, and
the solution cost is the length of the route in kilometers. You might combine both to find the best trade-off between
computation time and path length to make the search efficient.
Uninformed Search:
- What It Is: These strategies explore possible paths in a problem without any extra information about the states.
They only know how to generate new states and whether a state is a goal or not.
Key Characteristics:
- Node Expansion: Different uninformed search strategies vary based on the order in which they explore nodes.
They don't have a method to decide which node might lead to a solution faster.
In summary, uninformed search strategies work blindly without any extra hints about which paths to take, while informed
searches use additional knowledge to make smarter decisions.
Breadth-first search
Breadth-First Search (BFS) is a straightforward search strategy that explores nodes level by level.
1. Expansion Order: The root node is expanded first, then all of its successors, then their successors, and so on, so all nodes at one depth are expanded before any node at the next depth.
2. Queue Structure: BFS uses a FIFO queue to keep track of which nodes to explore next. When a new node is added, it
goes to the back of the queue, ensuring that older nodes (which are at a shallower depth) are expanded first.
3. Goal Test: The algorithm checks if a node is the goal as soon as it is generated (created) rather than waiting until it is
selected for expansion. This helps in quickly identifying a solution.
4. Avoiding Redundancy: BFS avoids adding paths to states already in the queue or explored set, ensuring that it always
finds the shortest path to any node on the frontier.
Key Points:
- Level by Level: BFS explores nodes one level at a time.
- Shallowest Paths: Because of its structure, BFS always finds the shortest path to the goal if one exists.
Here is how Breadth-First Search (BFS) measures up against the four criteria discussed earlier:
BFS systematically explores each level of the search tree, ensuring that all possible states are considered in
order before moving deeper. It keeps track of explored states to avoid unnecessary work and guarantees that
the first solution found is the shallowest one if one exists. This algorithm is effective for finding solutions in
problems with finite state spaces.
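A sketch of BFS as just described: a FIFO queue, the goal test applied when a node is generated, and states already seen skipped so no path is explored twice.

```python
# A sketch of breadth-first search with goal test at generation time.
from collections import deque

def breadth_first_search(problem):
    if problem.goal_test(problem.initial):
        return []                                  # trivial solution at the root
    frontier = deque([(problem.initial, [])])
    reached = {problem.initial}                    # states in frontier or explored
    while frontier:
        state, path = frontier.popleft()           # shallowest node first (FIFO)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in reached:
                if problem.goal_test(child):       # test as soon as it is generated
                    return path + [action]
                reached.add(child)
                frontier.append((child, path + [action]))
    return None                                    # no solution exists
```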
The main challenge of Breadth-First Search (BFS) is its high resource demand when searching for solutions in complex problems.
Main Points:
1. Exponential Growth:
- BFS's time and memory requirements increase rapidly as the depth of the search tree increases. This means that the
deeper the solution is, the more resources are needed to find it.
2. Resource Assumptions:
- It’s assumed that a computer can generate 1 million nodes every second, and each node uses about 1000 bytes of
memory.
3. Memory Issues:
- At greater depths (like depth 12), BFS can require huge amounts of memory (around 1 petabyte), which is far beyond
what a typical computer can handle.
4. Time Issues:
- Even if you can wait a long time for a solution, it can take an impractical amount of time. For example, finding a solution at
depth 16 could take about 350 years.
5. Conclusion:
- Because of these high demands on time and memory, BFS can only be used effectively for simpler problems. For more
complex ones, other search strategies are needed to make the search process feasible.
In short, while BFS is a straightforward method, its high resource requirements make it impractical for deep searches,
highlighting the need for more efficient algorithms.
Uniform-cost search
Uniform-Cost Search is a variation of Breadth-First Search that is optimal for problems where step costs can vary.
Key Points:
1. Uniform-Cost Search:
- Uniform-Cost Search improves upon BFS by expanding the node with the lowest path cost g(n), which is the total
cost to reach that node from the start. This ensures that the search finds the optimal path even when step costs differ.
2. Updating Paths:
- If a better path to a node already in the frontier is found, the algorithm updates the frontier to reflect this new, lower-
cost path. This ensures that the search remains focused on the most promising routes.
Summary:
In essence, Uniform-Cost Search enhances Breadth-First Search by prioritizing nodes based on their total path cost,
ensuring that it finds the most optimal solution regardless of the costs of actions. This makes it suitable for more
complex problems where step costs can vary.
The function UNIFORM-COST-SEARCH is an algorithm designed to find the optimal solution to a search problem,
particularly when the costs of steps (or actions) may vary. Here’s a breakdown of the function in simple terms:
Function Breakdown:
1. Initialization:
- Create the Initial Node: A node is created representing the starting state of the problem with a path cost of 0.
- Set Up the Frontier: The frontier is a priority queue that will hold nodes to be explored, ordered by their path cost.
Initially, it contains just the starting node.
- Create Explored Set: This is an empty set that will keep track of the states that have already been explored.
2. Main Loop:
- The loop continues until a solution is found or the frontier is empty (meaning there are no more nodes to explore).
3. Select Node:
- The node with the lowest path cost is removed from the frontier for expansion.
4. Goal Test:
- The algorithm checks whether this node is the goal state. Note that the test is applied when a node is selected for
expansion rather than when it is generated; this is what guarantees optimality. If it is the goal, the solution is returned.
5. Expand Node:
- Otherwise, the node's state is added to the explored set and its successors are generated. Each successor is added to
the frontier if its state is new; if a cheaper path to a state already in the frontier is found, that frontier entry is replaced.
Summary:
The UNIFORM-COST-SEARCH algorithm effectively explores paths based on their total cost, ensuring it always expands
the least costly option. This approach guarantees finding the optimal solution, as it considers all potential paths and their
associated costs, updating the frontier as necessary to prioritize more efficient routes.
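A sketch of uniform-cost search using Python's heapq as the priority queue follows. Instead of editing frontier entries in place, this version re-queues cheaper paths and skips stale, dearer entries when they are popped, which has the same effect as the frontier replacement described above.

```python
# A sketch of uniform-cost search ordered by path cost g(n).
import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                                  # breaks ties between equal costs
    frontier = [(0, next(tie), problem.initial, [])]
    best_cost = {problem.initial: 0}               # cheapest known cost per state
    explored = set()
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)   # lowest-cost node first
        if problem.goal_test(state):               # test on expansion, not generation:
            return cost, path                      # this is what guarantees optimality
        if state in explored:
            continue                               # stale entry for a dearer path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            c = cost + problem.step_cost(state, action)
            if c < best_cost.get(child, float("inf")):
                best_cost[child] = c               # better path found: re-queue it
                heapq.heappush(frontier, (c, next(tie), child, path + [action]))
    return None
```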
The Uniform-Cost Search algorithm works by always expanding the node that has the lowest total path cost. Here’s how it
operates, illustrated with the example of finding a route from Sibiu to Bucharest:
Example Walkthrough:
1.Starting Node: The algorithm starts at Sibiu.
1. Successors: The possible next locations (successors) are Rimnicu Vilcea (cost 80) and Fagaras (cost 99).
2.Expanding Nodes:
1. First Expansion: It expands Rimnicu Vilcea first because it has the lower cost (80).
1. This adds Pitesti to the frontier with a total cost of 177 (80 + 97).
2. Next Expansion: Then it expands Fagaras (cost 99), which adds Bucharest with a total cost of 310 (99 + 211).
3.Goal Node Generated: Now, a goal node (Bucharest) has been generated, but Uniform-Cost Search continues exploring.
4. Further Exploration:
1. The algorithm expands Pitesti next, creating a new path to Bucharest with a total cost of 278 (80 + 97 +
101).
2. It compares this new path to the existing path to Bucharest (310). Since 278 is cheaper, it discards the old
path.
5. Optimal Path: Now, Bucharest has a new optimal cost of 278. When it is selected for expansion again, it
means that this path is the best option available.
Key Points:
•Guaranteed Optimality: Whenever a node n is selected for expansion in Uniform-Cost Search, it means the
optimal path to that node has already been found. If a better path existed, it would have already been chosen for
expansion due to its lower cost.
•Non-Negative Costs: Since all step costs are non-negative, adding more nodes to a path can never reduce its total
cost.
What It Is:
•Iterative Deepening Search is a search strategy that combines the advantages of both Depth-First Search (DFS) and
Breadth-First Search (BFS).
How It Works:
1.Gradual Depth Increase: The algorithm starts with a depth limit of 0 and gradually increases it (1, 2, 3, etc.) until it
finds the goal node.
2.Repeated Searches: For each depth limit, it performs a Depth-First Search until it reaches that limit. If it doesn't find
the goal, it increases the limit and repeats the process.
3.Finding the Goal: The search continues until it discovers the goal node at depth d (the depth of the shallowest goal).
Benefits:
•Memory Efficiency: The memory requirement is modest, specifically O(bd), where b is the branching factor and d is the
depth of the shallowest solution. This is similar to DFS.
•Completeness: Iterative deepening is complete, meaning it will find a solution if one exists (given a finite branching
factor).
•Optimality: It is optimal when the path cost does not decrease with depth, similar to BFS.
Explanation
1.Outer Loop:
1. The algorithm iterates through increasing depth limits starting from 0 and continues indefinitely (or until a solution
is found).
2.Depth-Limited Search:
1. For each depth limit, it calls the DEPTH-LIMITED-SEARCH function. This function attempts to find a solution within
the specified depth.
2. If the search exceeds the depth limit without finding a goal, it will return a special value, often called "cutoff."
3.Check for Solution:
1. After each call to DEPTH-LIMITED-SEARCH, it checks the result:
1. If the result is not equal to "cutoff," it means a solution has been found, and the algorithm returns that
solution.
2. If it is "cutoff," it means the search reached the depth limit without finding a goal, and the outer loop continues
to the next depth limit.
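A sketch of the two functions in Python, with the string "cutoff" standing for the special cutoff value described above:

```python
# A sketch of iterative deepening: repeated depth-limited DFS with a growing limit.
from itertools import count

CUTOFF, FAILURE = "cutoff", None

def depth_limited_search(problem, state, limit):
    if problem.goal_test(state):
        return []                                 # goal reached: empty remaining plan
    if limit == 0:
        return CUTOFF                             # depth limit hit
    cutoff_occurred = False
    for action in problem.actions(state):
        result = depth_limited_search(problem, problem.result(state, action), limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True                # some branch was cut short
        elif result is not FAILURE:
            return [action] + result              # prepend this action to the plan
    return CUTOFF if cutoff_occurred else FAILURE

def iterative_deepening_search(problem):
    for depth in count():                         # depth limit 0, 1, 2, ...
        result = depth_limited_search(problem, problem.initial, depth)
        if result != CUTOFF:
            return result                         # a solution, or definite failure
```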
Summary
•Iterative Deepening Search efficiently combines depth-first search’s low memory usage with the completeness of
breadth-first search, making it ideal for large search problems.
•The method’s node generation strategy helps mitigate concerns about inefficiency due to repeated node expansion.