Searching in AI
Search algorithms are one of the most important topics in Artificial Intelligence.
Problem-solving agents:
State space search in artificial intelligence is a fundamental technique used to solve problems
by navigating through a series of states and transitions. In this approach, a problem is
represented as a collection of states, each depicting a specific configuration, and the transitions
represent possible actions or moves between these states. The objective is to find a sequence
of actions that leads from an initial state to a goal state.
This concept is analogous to finding a path through a complex maze: each decision or action
leads to a new state, and the goal is to discover the optimal sequence of actions that leads to a
desired outcome.
State Space Representation involves identifying an initial state and a goal state, then finding a
sequence of actions (states) to navigate from the former to the latter.
o Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
o Actions: A description of all the actions available to the agent.
o Transition model: A description of what each action does; this can be represented as a transition model.
o Solution: An action sequence that leads from the start node to the goal node.
o Optimal Solution: A solution that has the lowest cost among all solutions.
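To make these terms concrete, the sketch below models a tiny route-finding problem in Python; the RouteProblem class, the roads map, and the method names are illustrative choices, not part of any standard library.

# A minimal, illustrative search-problem description: initial state, actions,
# transition model, goal test, and a toy state space (all names are hypothetical).

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial = initial      # initial state
        self.goal = goal            # goal state
        self.roads = roads          # dict: state -> {neighbouring state: step cost}

    def actions(self, state):
        # The actions available to the agent in this state.
        return list(self.roads.get(state, {}))

    def result(self, state, action):
        # Transition model: taking the action "move to X" yields state X.
        return action

    def is_goal(self, state):
        return state == self.goal

# Toy state space: four locations connected by roads.
roads = {"A": {"B": 1, "C": 4}, "B": {"D": 2}, "C": {"D": 1}, "D": {}}
problem = RouteProblem("A", "D", roads)
print(problem.actions("A"), problem.is_goal("D"))   # ['B', 'C'] True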
• Step 1: Define the Problem: Clearly state the problem, specify the initial node, the goal
node, and the set of possible transitions (edges) between nodes.
• Step 2: Create the Graph: Represent the problem as a directed or undirected graph, where
nodes represent states and edges represent possible transitions or actions.
• Step 3: Select a Search Algorithm: Choose an appropriate search algorithm based on the
characteristics of the problem and the graph. Common choices include Breadth-First Search
(BFS) and Depth-First Search (DFS).
• Step 4: Initialize Data Structures: Create a data structure to keep track of visited nodes and
nodes to be explored. Add the initial node to the list of nodes to be explored.
• Step 5: Iterate through the Graph: While nodes remain to explore, extract the next node.
Verify if it’s the goal; if true, a solution is attained. If not, generate successor nodes by
traversing edges from the current node.
• Step 6: Check for Goal Node: Upon generating successor nodes, check if any of them are
the goal node. If so, the search is complete, and a solution has been found.
• Step 7: Manage Visited Nodes: Keep track of visited nodes to avoid revisiting them and
potentially entering loops.
• Step 8: Update Data Structures: Update the data structures with newly generated nodes.
Add them to the list of nodes to be explored.
• Step 9: Repeat Steps 5-8 until Goal Node is Found: Continue iterating through the graph,
generating successor nodes, and checking for the goal node until a solution is found.
• Step 10: Backtrack (If Necessary): In some cases, if a dead-end is reached, backtrack to a
previous node and explore alternative paths.
• Step 11: Retrieve Solution Path: Once the goal node is reached, trace back the path from the
goal node to the initial node to retrieve the sequence of actions or transitions that led to the
solution.
• Step 12: Evaluate Solution: Evaluate the solution based on relevant metrics, such as path
cost, optimality, and completeness.
• Step 13: Implement post-processing (if needed): Depending on the problem domain,
additional steps may be required to implement the solution.
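Taken together, Steps 1 to 12 describe a generic graph search loop. Below is a minimal Python sketch of that loop, assuming a simple adjacency-list graph (a dict mapping each node to its neighbours); the function name graph_search is illustrative.

from collections import deque

def graph_search(graph, start, goal):
    """Generic graph search over an adjacency-list graph (Steps 4 to 11)."""
    frontier = deque([start])        # Step 4: nodes still to be explored
    visited = {start}                # Step 7: visited nodes, to avoid loops
    parent = {start: None}           # records how each node was reached
    while frontier:                  # Step 5: iterate while nodes remain
        node = frontier.popleft()
        if node == goal:             # Step 6: goal test
            path = []                # Step 11: trace the solution path back
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for successor in graph.get(node, []):    # generate successor nodes
            if successor not in visited:         # Steps 7 and 8
                visited.add(successor)
                parent[successor] = node
                frontier.append(successor)
    return None                      # no solution exists

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(graph_search(graph, "A", "D"))   # ['A', 'B', 'D']

With a FIFO deque the loop explores breadth-first; swapping the deque for a stack or a priority queue gives depth-first or best-first behaviour instead.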
The 8-puzzle is a popular sliding puzzle that involves a 3×3 grid where eight numbered tiles
and one blank space are arranged. The objective is to rearrange the tiles from an initial
configuration to a target configuration using a sequence of valid moves. Here’s a step-by-step
explanation of the 8-puzzle:
Step 1: Initial State
The 8-puzzle starts with an initial configuration in which eight numbered tiles (usually numbered 1 to 8) and one empty space are arranged randomly within a 3×3 grid.
Step 2: Goal State
The goal is to reach a predefined target state, which typically has the tiles arranged in numerical order with the empty space in the bottom-right corner:
1 2 3
4 5 6
7 8 _
Step 3: Valid Moves
Tiles can only be moved into the empty space if they are adjacent to it (either horizontally or vertically, not diagonally). This means that at any given state, you can slide a neighboring tile into the empty space.
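To make the move rule concrete, here is a minimal Python sketch of successor generation for the 8-puzzle; the board representation (a 3×3 tuple of tuples with 0 for the blank) and the function name successors are illustrative assumptions.

def successors(board):
    """Yield every board reachable by sliding one adjacent tile into the blank."""
    # Locate the blank (0).
    br, bc = next((r, c) for r in range(3) for c in range(3) if board[r][c] == 0)
    # A tile may slide into the blank only from directly above, below,
    # left, or right of it (never diagonally).
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        tr, tc = br + dr, bc + dc
        if 0 <= tr < 3 and 0 <= tc < 3:
            new = [list(row) for row in board]
            new[br][bc], new[tr][tc] = new[tr][tc], new[br][bc]   # slide the tile
            yield tuple(tuple(row) for row in new)

start = ((1, 2, 3),
         (4, 0, 5),
         (7, 8, 6))
print(len(list(successors(start))))   # the central blank allows 4 moves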
Step 4: Objective
The objective is to find a sequence of moves that transforms the initial configuration into the
goal configuration. Each move represents a state transition.
Step 5: State Space Search
State space search algorithms, like A* or Breadth-First Search, are used to systematically
explore possible states and find an optimal solution. These algorithms evaluate and prioritize
states based on certain criteria, such as distance to the goal state.
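One common "distance to the goal state" criterion for the 8-puzzle is the Manhattan distance of each tile from its target square. A minimal sketch, assuming the standard goal layout (tiles 1 to 8 in order, blank last):

GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))          # 0 marks the blank space

# Where each tile belongs in the goal configuration.
GOAL_POS = {GOAL[r][c]: (r, c) for r in range(3) for c in range(3)}

def manhattan(board):
    """Sum of horizontal and vertical distances of every tile from its goal square."""
    total = 0
    for r in range(3):
        for c in range(3):
            tile = board[r][c]
            if tile != 0:                 # the blank itself is not counted
                gr, gc = GOAL_POS[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

print(manhattan(((1, 2, 3), (4, 0, 5), (7, 8, 6))))   # -> 2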
Step 6: Solution
The solution is a series of moves (states) that, when applied to the initial configuration, leads to the goal configuration.
State space search has many practical applications:
• Puzzle Solving: Solving puzzles like the 8-puzzle, Rubik's Cube, and Sudoku using state space search algorithms
• Pathfinding in Games: Finding the shortest path for characters or agents in video games is a
common use case for algorithms like A*
• Automated Planning: In areas like logistics, transportation, and manufacturing, state space
search helps in planning and scheduling tasks
• Natural Language Processing: In tasks like machine translation, state space search can be
used to generate optimal translations
• Chess and Games: Determining optimal moves in games with well-defined rules and states,
like chess, checkers, and Go
Following are the four essential properties of search algorithms, used to compare their efficiency:
Completeness: A search algorithm is complete if it is guaranteed to find a solution whenever at least one solution exists.
Optimality: A solution is optimal if it has the lowest cost among all solutions; an algorithm is optimal if it always finds such a solution.
Time Complexity: Time complexity is a measure of how long an algorithm takes to complete its task.
Space Complexity: Space complexity is the maximum storage space required at any point during the search.
Based on the search problem, we can classify search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.
Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as how close a state is to the goal or where the goal is located. It operates in a brute-force way, since it only includes information about how to traverse the tree and how to identify leaf and goal nodes. The search tree is explored using nothing beyond the problem definition (the initial state, the operators, and the goal test), so this approach is also called blind search. It examines the nodes of the tree one by one until it reaches the goal node.
o Breadth-first search
o Depth-first search
o Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem-specific information is available that can guide the search. Informed search strategies can find a solution more efficiently than uninformed search strategies. Informed search is also called heuristic search.
A heuristic is a technique that is not guaranteed to find the best solution, but it aims to find a good solution in a reasonable time.
Informed search can solve much more complex problems that could not be solved otherwise.
1. Greedy Search
2. A* Search
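The two differ mainly in how they order the frontier: greedy search ranks nodes by the heuristic estimate h(n) alone, while A* ranks them by g(n) + h(n), the cost incurred so far plus the heuristic. Below is a minimal Python sketch of both, using an illustrative graph and heuristic values; the function name and data are assumptions, not a standard API.

import heapq

def best_first_search(graph, h, start, goal, use_path_cost=True):
    """A* when use_path_cost is True, greedy best-first search when it is False."""
    # Frontier entries: (priority, cost so far g, node, path taken).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, step_cost in graph.get(node, {}).items():
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                priority = (new_g if use_path_cost else 0) + h[neighbour]
                heapq.heappush(frontier, (priority, new_g, neighbour, path + [neighbour]))
    return None, float("inf")

# Illustrative graph (node -> {neighbour: cost}) and heuristic estimates h(n).
graph = {"A": {"B": 1, "C": 4}, "B": {"D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 3, "B": 4, "C": 1, "D": 0}
print(best_first_search(graph, h, "A", "D"))                       # A*: orders by g + h
print(best_first_search(graph, h, "A", "D", use_path_cost=False))  # greedy: orders by h only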
The main differences between informed and uninformed search are summarized below.
o Knowledge: Informed search uses knowledge to guide the searching process, whereas uninformed search does not use any knowledge.
o Direction: Informed search gives a direction towards the solution, whereas uninformed search gives no suggestion regarding the solution.
o Efficiency: Informed search is more efficient, as it takes cost and performance into account; the incurred cost is lower and solutions are found quickly. Uninformed search is comparatively less efficient, as the incurred cost is higher and the speed of finding a solution is slow.
Breadth-First Search (BFS)
• Breadth-first search (BFS) expands the root node's immediate neighbours first, then their neighbours, and so on, level by level, until all the nodes in the graph have been visited or a specific condition is met. In the running example (nodes A to G), after visiting node B the search moves to node C; once level 1 is completed, it moves on to the next level, i.e. level 2, and explores node D, then systematically moves to node E, node F, and node G. After visiting node G it terminates.
1. First-in-First-Out (FIFO): A FIFO queue is typically preferred in BFS because it is faster than a priority queue and yields the correct order of node expansion. With a FIFO frontier, new nodes, which are deeper in the search tree, are added to the back of the queue, while the older, shallower nodes are expanded first.
2. Early goal test: A traditional BFS implementation maintains the set of states reached during the search. BFS can also apply an early goal test: it checks whether a newly generated node meets the goal criteria as soon as the node is generated, rather than waiting until the node is removed from the frontier.
3. Cost-optimal: BFS prioritizes the shortest path. When BFS generates nodes at a certain depth d, it has already generated all the nodes at the previous depth d-1. Consequently, if a solution exists in the search space, BFS discovers it as soon as it reaches the corresponding depth. BFS is therefore cost-optimal when all actions have the same cost.
Algorithm
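A minimal Python sketch of breadth-first search with a FIFO frontier and an early goal test, run on a small graph shaped like the A-to-G example discussed above (the adjacency-list representation is an assumption):

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO frontier and an early goal test."""
    if start == goal:
        return [start]
    frontier = deque([start])      # FIFO queue: shallower nodes are expanded first
    parents = {start: None}        # also serves as the set of reached states
    while frontier:
        node = frontier.popleft()
        for child in graph.get(node, []):
            if child not in parents:
                parents[child] = node
                if child == goal:  # early goal test: check as soon as the node is generated
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return list(reversed(path))
                frontier.append(child)
    return None

# Level-by-level example from the text: A, then B and C, then D, E, F, G.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(tree, "A", "G"))   # ['A', 'C', 'G']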
Depth-First Search (DFS)
The depth-first search (DFS) algorithm in artificial intelligence is like an explorer: it is a graph traversal algorithm that begins at a starting point and keeps going deeper along one path before moving to new places, repeating this pattern until it has explored the whole graph.
Depth-first search (DFS) explores a graph by selecting a path and traversing it as deeply as
possible before backtracking.
• DFS starts at the root node and expands along one branch until it reaches a dead end, then backtracks to the most recent node with unexplored successors, repeating until all nodes are visited or a specific condition is met. (In the running example, starting from node A, DFS explores its successor B, then proceeds to its descendants until reaching a dead end at node D. It then backtracks to node B and explores its remaining successor, E.)
• This systematic exploration continues until all nodes are visited or the search terminates. (In our case, after exploring all the descendants of B, DFS explores the right-hand node C, then F, and then G. After visiting node G, all the nodes have been visited and the search terminates.)
In simple terms, the DFS algorithm in AI extends the current path as deeply as possible before considering other options.
• DFS is not cost-optimal, since it does not guarantee finding the shortest path.
• DFS uses a simple principle to keep track of visited nodes: it uses a stack to record the nodes visited so far, which supports backtracking through the graph. When DFS encounters a new node, it pushes it onto the stack to explore its neighbours. If it reaches a node with no successors (a leaf node), it backtracks by popping nodes off the stack to explore alternative paths.
• Backtracking search: A variant of DFS called backtracking search uses even less memory than traditional depth-first search. Rather than generating all the successors at once, backtracking search generates only one successor at a time. This approach allows dynamic state modification: successors are generated by directly modifying the current state description instead of allocating memory for a brand-new state, reducing the memory requirement to a single state description and a path of actions.
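For comparison, here is a minimal Python sketch of iterative DFS with an explicit stack, run on the same illustrative A-to-G graph used for BFS above:

def dfs(graph, start):
    """Iterative depth-first traversal using an explicit stack for backtracking."""
    stack = [start]                # nodes waiting to be explored
    visited = []
    while stack:
        node = stack.pop()         # popping returns to the most recent branch point
        if node in visited:
            continue
        visited.append(node)
        # Push successors in reverse so the leftmost branch is explored first.
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(child)
    return visited

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(tree, "A"))   # ['A', 'B', 'D', 'E', 'C', 'F', 'G']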