The document discusses various search algorithms for complex environments, including local search algorithms and systematic algorithms. Local search algorithms like hill climbing, simulated annealing, and local beam search use iterative improvement to explore neighborhoods of solutions and may converge to local optima. Systematic algorithms are characterized by observable, deterministic, and known environments where the solution is a sequence of actions. The document provides details on different local search techniques, explaining concepts like hill climbing, simulated annealing, and local beam search.
SEARCH IN COMPLEX ENVIRONMENTS (Slides are based on Artificial Intelligence: A Modern Approach by Stuart Russell & Peter Norvig)
Dr. K. Venkateswara Rao
Professor, CSE

Contents
• Local search algorithms and optimization problems
– Hill-climbing search
– Simulated annealing
– Local beam search
– Evolutionary algorithms
• Optimal decisions in games
– The minimax search algorithm
– Optimal decisions in multiplayer games
• Alpha-beta pruning
– Move ordering
• Monte Carlo tree search
• Kalman filter

Systematic Algorithms
• Systematic algorithms are characterized by observable, deterministic, known environments, where both the solution (goal configuration) and the path to the goal (a sequence of actions) are important.
• Observable: in an observable environment, the agent (the entity trying to solve the problem) can directly observe the current state of the environment. This means the agent has complete information about the state of the environment at any given time.
• Deterministic: in a deterministic environment, the outcome of an action is fully determined by the current state of the environment and the action taken by the agent. There is no randomness or uncertainty in the environment's response to the agent's actions.
• Known: in a known environment, the agent has complete knowledge of the environment's rules and dynamics. There are no hidden states or unknown elements in the environment.
• Solution as a sequence of actions: in such environments, the solution to a problem is typically represented as a sequence of actions that the agent can take to transition from the initial state of the environment to the goal state.
• Local search, on the other hand, is a problem-solving technique used in optimization and search problems.
It differs from the scenario described above (observable, deterministic, known environments where a solution is a sequence of actions) in several ways.

Local Search
• Local search is an optimization technique used in artificial intelligence to find solutions to problems by iteratively exploring the space of possible solutions, improving the current solution through small incremental changes.
• It is particularly useful for problems where the search space is large and it is impractical to explore all possible solutions.
• Characteristics of local search algorithms:
1. Iterative improvement
2. Exploration of neighborhoods
3. No backtracking
4. Heuristic guidance
5. Stochastic or deterministic
6. Convergence to local optima

Local Search
• Local search algorithms start with an initial solution and iteratively explore its neighborhood by applying small changes to the current solution.
• At each iteration, the algorithm evaluates neighboring solutions and selects the one that offers the most improvement according to some evaluation function or objective.
• Local search algorithms often use heuristic information to guide the search process.
• One of the main limitations of local search algorithms is that they may converge to local optima: solutions that are locally optimal but not globally optimal.

Local Search
• Local search algorithms do not backtrack.
– They only move forward, considering neighboring solutions and selecting the one that offers the most improvement. This can make them more efficient in large search spaces, but it also means they may get stuck in local optima.
• Local search algorithms can be stochastic or deterministic.
– Stochastic algorithms introduce randomness into the search process, which can help escape local optima and explore a broader range of solutions.
– Deterministic algorithms, on the other hand, follow a fixed set of rules for selecting neighboring solutions and may be more predictable in their behavior.

Local Search
• Local search algorithms are not systematic: the path followed by the agent is not retained.
• Two key advantages of local search algorithms:
1. They use very little memory, usually a constant amount.
2. They can often find reasonable solutions in large or infinite search spaces.
• A state-space landscape has both location (defined by the state) and elevation (defined by the value of the heuristic cost function, if the aim is to find a global minimum, or of the objective function, if the aim is to find a global maximum).
• A complete local search algorithm always finds a goal if one exists.

Local Search Algorithms
1. Hill-climbing search
2. Simulated annealing
3. Local beam search
4. Evolutionary algorithms

Hill-Climbing Search
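As a concrete illustration, hill-climbing search in its steepest-ascent form can be sketched in Python. This is a minimal sketch, not from the slides: the `neighbors` and `value` functions and the toy one-dimensional problem are assumptions for demonstration.

```python
def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        # Pick a highest-valued successor of the current state.
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # local maximum (or plateau) reached
        current = best

# Toy usage: maximize f(x) = -(x - 4)^2 over the integers, stepping by 1.
peak = hill_climbing(
    start=0,
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 4) ** 2,
)
```

Note that the loop returns as soon as no successor is strictly better, so on a real problem it may stop at a local maximum rather than the global one.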
Steepest-Ascent Hill Climbing
• current ← start node
• loop do
– neighbor ← a highest-valued successor of current
– if neighbor.Value <= current.Value then return current.State
– current ← neighbor
• end loop

Hill-Climbing Example & Problems

Local Search – Hill Climbing

Ridges Illustration

Variants of Hill Climbing
• Allow backtracking.
• Stochastic hill climbing: choose at random from among the uphill moves, with the probability of selection varying with the steepness of the uphill move. This usually converges more slowly than steepest ascent.
• First-choice hill climbing: implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state.
• Random-restart hill climbing: "If at first you don't succeed, try, try again." It conducts a series of hill-climbing searches from randomly generated initial states until a goal is found.

Simulated Annealing
• Hill climbing is incomplete.
• A pure random walk, keeping track of the best state found so far, is complete but very inefficient because it relies solely on random moves without any intelligent guidance.
• Simulated annealing is a variant of hill climbing that combines hill climbing with a random walk in a way that yields both efficiency and completeness.
• Simulated annealing takes its name from the physical process of annealing, which is used to temper or harden metals and glass by heating them to a high temperature (high energy level) and then gradually cooling them, allowing the material to reach a low-energy crystalline state.

Simulated Annealing Algorithm
• Initialize: start with an initial solution to the optimization problem.
• Define a temperature schedule: the algorithm requires a cooling schedule, which determines how the temperature decreases over time. Initially the temperature is high, allowing more exploration of the solution space; it then gradually decreases to focus more on exploitation.
• Iterate: perform a series of iterations, each consisting of the following steps:
1. Generate a neighbor.
2. Evaluate the neighbor.
3. Accept or reject the neighbor: compare the cost of the new solution with the cost of the current solution. If the new solution is better (i.e., has a lower cost), accept it. If it is worse, accept it with a probability that depends on the temperature and the difference in cost between the current and new solutions. This probabilistic acceptance allows the algorithm to escape local optima.
4. Update the temperature: adjust the temperature according to the cooling schedule.
• Termination: repeat the iterations until a stopping criterion is met. This could be a maximum number of iterations, reaching a certain temperature threshold, or finding a solution that meets some desired criteria.

Simulated Annealing
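The iteration loop described above can be sketched in Python. This is a minimal sketch for a generic minimization problem; the `neighbor` and `cost` callables and the geometric cooling schedule (`T ← αT`) are assumptions for illustration, not from the slides.

```python
import math
import random

def simulated_annealing(initial, neighbor, cost,
                        t_start=1.0, t_min=1e-4, alpha=0.95, steps_per_t=100):
    """Minimize cost(state) via simulated annealing.

    neighbor(state) returns a randomly perturbed copy of state
    (an assumed helper supplied by the caller).
    """
    current = initial
    best = current
    t = t_start
    while t > t_min:  # cooling loop driven by the temperature schedule
        for _ in range(steps_per_t):
            nxt = neighbor(current)                # 1. generate a neighbor
            delta = cost(nxt) - cost(current)      # 2. evaluate it
            # 3. accept if better, or with probability e^(-delta/T) if worse
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = nxt
                if cost(current) < cost(best):
                    best = current
        t *= alpha                                 # 4. update the temperature
    return best

# Toy usage: minimize f(x) = (x - 3)^2 over the reals.
random.seed(0)
result = simulated_annealing(
    initial=10.0,
    neighbor=lambda x: x + random.uniform(-1, 1),
    cost=lambda x: (x - 3) ** 2,
)
```

The geometric schedule is only one choice; any sequence of decreasing temperatures works, and slower cooling trades running time for a better chance of reaching the global optimum.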
The probability of moving to a higher-energy state instead of a lower one is

p = e^(−ΔE/kT)

where ΔE is the positive change in energy level, T is the temperature, and k is Boltzmann's constant.
At the beginning, the temperature is high. As the temperature becomes lower:
– kT becomes lower
– ΔE/kT gets bigger
– (−ΔE/kT) gets smaller
– e^(−ΔE/kT) gets smaller
As the process continues, the probability of accepting a worse (higher-energy) move gets smaller and smaller.

Simulated Annealing Algorithm
• current ← start node /* initial state of the problem (input) */
• for each T on the schedule /* need a schedule (input) */
1. next ← a randomly selected successor of current
2. evaluate next; if it is a goal, return it
3. ΔE ← next.Value − current.Value /* already negated */
4. if ΔE > 0 then current ← next /* better than current */
else current ← next with probability e^(ΔE/T)
• ΔE represents the change in the value of the objective function.
• Since the physical relationships no longer apply, drop k. So p = e^(−ΔE/T).
• We need an annealing schedule, which is a sequence of values of T: T0, T1, T2, ...

Simulated Annealing Applications
• Basic problems
– Traveling salesman
– Graph partitioning
– Matching problems
– Graph coloring
– Scheduling
• Engineering
– VLSI design
• Placement
• Routing
• Array logic minimization
• Layout
– Facilities layout
– Image processing
– Code design in information theory

Local Beam Search Technique

Local Beam Search Algorithm
1. Initialization: start with k randomly generated initial solutions or paths, where k is the beam width (the number of beams). k is an input to the algorithm.
2. Expansion: for each beam, generate successor solutions by applying possible operators or actions. These successor solutions represent neighboring states in the search space.
3. Evaluation: evaluate the quality of each successor solution using a heuristic or objective function, which measures how close a solution is to the desired goal or optimal solution. If any successor is a goal, return the solution (the algorithm halts).
4. Selection: select the top k successor solutions across all beams based on their evaluation scores. These top k solutions become the new set of beams for the next iteration.
5. Termination: repeat steps 2-4 until a termination condition is met. This could be a maximum number of iterations, reaching a certain quality threshold, or finding a solution that satisfies specific criteria.
6. Result: once the termination condition is satisfied, return the best solution found among all beams.

Local Beam Search Characteristics
• Parallel exploration: local beam search explores multiple paths simultaneously by maintaining multiple beams. This parallel exploration can help the algorithm avoid getting stuck in local optima and increases the chances of finding a good solution.
• Beam selection: at each iteration, only the top k successor solutions are selected to become the new set of beams. This selective mechanism helps focus the search on the most promising regions of the search space.
• Diversity: since local beam search maintains multiple beams, it can explore different regions of the search space concurrently. This diversity in exploration can be beneficial for finding a variety of solutions or for escaping local optima.
• Stochastic beam search: instead of choosing the k best nodes from the pool of candidate successors, stochastic beam search chooses k successors, with the probability of choosing a given successor being an increasing function of its value.
• Stochastic beam search resembles the process of natural selection: the successors (offspring) of a state (organism) populate the next generation according to its value (fitness).

Genetic Algorithms
• A variant of stochastic beam search
• Combines two parent states to generate successors
• Uses one fitness function and two operators called crossover and mutation
• Start with a random population of states
– Representation serialized (i.e.,
strings of characters or bits)
– States are ranked with a "fitness function"
• Produce a new generation
– Select random pair(s) using probability:
• probability ~ fitness
– Randomly choose a "crossover point"
• Offspring mix halves
– Randomly mutate bits

Genetic Algorithm
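The generational loop outlined above can be sketched in Python. This is a minimal illustrative sketch, not from the slides: individuals are bit strings, selection is fitness-proportional, crossover is single-point, mutation flips bits, and the toy fitness function simply counts 1-bits (the "OneMax" problem).

```python
import random

def genetic_algorithm(pop_size=20, genome_len=16, generations=100,
                      mutation_rate=0.01):
    """Maximize a toy fitness (number of 1-bits) with a genetic algorithm."""
    fitness = lambda ind: sum(ind)  # toy fitness function: count of 1-bits

    # Start with a random population of bit-string states.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]

    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(pop_size):
            # Select a random pair with probability ~ fitness.
            mom, dad = random.choices(population, weights=weights, k=2)
            # Randomly choose a crossover point; the offspring mixes halves.
            point = random.randrange(1, genome_len)
            child = mom[:point] + dad[point:]
            # Randomly mutate bits.
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]
            next_gen.append(child)
        population = next_gen

    return max(population, key=fitness)

random.seed(1)
best = genetic_algorithm()
```

The fitness-proportional pairing here is what ties genetic algorithms back to stochastic beam search: fitter states are more likely to contribute offspring to the next generation, but crossover mixes material from two parents rather than perturbing a single state.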