Homework 1
Diego Campos Peña
222978969
I. ESSAY
A. Introduction
The main purpose of this document is to understand the importance of metaheuristic algorithms and their
uses in the computing field. Nowadays, we have many code examples that we can copy and paste to
see how they work, but that does not mean we understand what 'metaheuristics' means. We are not
going to look at any code here; we want to make sure we have a clear and solid concept.
B. Body
The first thing we have to know is what heuristics are. The concept comes from classical Greece
and derives from eureka, the famous exclamation attributed to Archimedes. In short, a heuristic means
solving problems with the available information; it refers to discovery and invention through reflection,
not to chance.
Now let us look at metaheuristics; we can think of them as a family of heuristics. They are
strategies to design or enhance heuristic procedures, and they work with approximations to the correct
or best solution, for problems where a classical heuristic is not enough to reach the best
solution.
When we use metaheuristics, we have room to create and develop new algorithms with better
performance and to combine multiple fields and concepts, for example, biology or artificial intelligence.
This freedom lets us solve not just one specific problem: the same algorithm can often be
applied to multiple problems with a few changes. In general, metaheuristics are made to optimize.
In the real world, we have many problems that cannot be solved by analytical methods alone, and
this is where we use the power of metaheuristic algorithms: to optimize a process we are
already using and obtain an approximation of a better solution for that problem or, as we saw before,
for a whole family of problems. Some examples of problems we could solve with these optimization
methods are design problems, minimizing the fabrication cost of a product, topology optimization
(which we could also call shape optimization), and many other possibilities.
Another characteristic and advantage of metaheuristic algorithms is that we can set multiple objectives
and address them with a single algorithm. The trade-off is that we sacrifice some quality in each
objective to satisfy more than one at once. For example, suppose we want to produce something faster
and cheaper; if we search for the best solution considering these two characteristics together, we will
find one, but that does not mean there is no solution that is cheaper on its own or faster on its own.
There is always some loss on both sides.
C. Conclusion
There is much room to improve, innovate, or do something different to get more and better results.
The field of applications is large and competitive, and we must work hard to make these methods
excellent and functional. Maybe we will not be able to apply all those algorithms in the real world,
but I am hopeful that someday I will see it.
A. Process
Given the assigned formula, write a program (this time in Matlab) that performs gradient descent and
visualizes the results.
I split three functions out of the test file to keep the code clear and readable: a function
to evaluate a point and get its value (called 'f1'), one to generate the random numbers
for the start point (called 'randN'), and one to compute the gradient (called 'gradiente').
The function f1 receives two variables, passes them through the formula, and returns the result of the
operation.
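Since the assignment's formula did not survive into this document, here is a minimal sketch of f1 that assumes the sphere function x1^2 + x2^2 as a stand-in; the actual formula from the assignment should go in its place.

% f1.m - evaluates a point through the objective formula.
% NOTE: the sphere function below is only an assumed placeholder;
% substitute the formula given in the assignment.
function y = f1(x1, x2)
    y = x1.^2 + x2.^2;
end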
The function randN receives two variables: the limits to consider. We use only two dimensions here,
but this depends on how many dimensions we are considering. For example, if the dimensions have
different limits, we would need specific limits for each dimension and would have to redefine the
function's parameters. For now, we will use the same limits for both dimensions.
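A minimal sketch of randN under the description above: it receives the lower and upper limits and returns a uniform random value between them, one call per dimension.

% randN.m - uniform random number inside [lo, up].
function r = randN(lo, up)
    r = lo + (up - lo) * rand();   % rescale rand() from [0,1] to [lo,up]
end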
The gradiente function receives the values of the two dimensions (in this case, x1 and x2), the current
value of the point (obtained by evaluating those dimensions through f1), the step h, and the f1 function
itself to perform the evaluations. All these parameters satisfy the following formula:
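The formula itself was lost in extraction; given the parameters described (the point, its current evaluation, the step h, and f1), it is presumably the forward-difference approximation of each partial derivative:

df/dx1 ≈ (f1(x1 + h, x2) − f1(x1, x2)) / h
df/dx2 ≈ (f1(x1, x2 + h) − f1(x1, x2)) / h

A minimal sketch of gradiente under that assumption:

% gradiente.m - forward-difference approximation of the gradient.
% fx is the current evaluation f(x1, x2); f is a handle to f1.
function g = gradiente(x1, x2, fx, h, f)
    g = [ (f(x1 + h, x2) - fx) / h;    % partial derivative w.r.t. x1
          (f(x1, x2 + h) - fx) / h ];  % partial derivative w.r.t. x2
end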
Now, let us look at the test file where the algorithm is implemented. The first things we have to
define are the upper and lower limits, the number of points to evaluate and plot, the number of
iterations, alfa, and the step h. This is the first part of our code:
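The original snippet was an image; this is a sketch of what that first part presumably looks like (all names and values here are illustrative assumptions):

% Test file: parameters of the search.
lower = -5;  upper = 5;   % lower and upper limits of the search space
nPoints = 50;             % points per axis used to plot the surface
nIter = 200;              % number of iterations
alfa = 0.1;               % step-size factor of the descent
h = 1e-4;                 % step used by the finite-difference gradient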
After that, using our randN function, we generate two random numbers to set our entry or start point.
Then we can start our iterative process. For this example, we will run 200 iterations; this is a
configurable parameter.
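A minimal sketch of that iterative process, assuming the functions defined above: pick a random start point, then repeatedly step against the gradient.

% Random start point inside the limits.
x1 = randN(lower, upper);
x2 = randN(lower, upper);

for k = 1:nIter
    fx = f1(x1, x2);                     % evaluate the current point
    g  = gradiente(x1, x2, fx, h, @f1);  % approximate the gradient
    x1 = x1 - alfa * g(1);               % move against the gradient
    x2 = x2 - alfa * g(2);
end
fprintf('Minimum near (%.4f, %.4f), f = %.6f\n', x1, x2, f1(x1, x2));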
B. Results
A. Decreasing alfa
In this method, we can use a larger value for alfa because it decreases at each iteration,
taking bigger steps at first and smaller ones at the end.
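The exact decay rule is not shown in the document; a sketch of one common choice, dividing the initial alfa by the iteration counter so that steps shrink over time:

alfa0 = 1.0;                 % a larger initial alfa is now safe
for k = 1:nIter
    alfa = alfa0 / k;        % the step factor decreases every iteration
    fx = f1(x1, x2);
    g  = gradiente(x1, x2, fx, h, @f1);
    x1 = x1 - alfa * g(1);
    x2 = x2 - alfa * g(2);
end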
1) Results
2) Conclusion
The difference from the classical or generic form is that in each iteration the step taken is smaller
than the previous one, which is why we can use a larger value of alfa. This simple operation makes the
process faster.
B. Scaling by the difference of evaluations
Compared with the implementation seen before, it is necessary to change the calculation of the new
best position. For this specific variation, we made the following changes:
The changes are evident. Before, we calculated the new best solution using only alfa, the
gradient, and the last value considered the best solution; now we also use the evaluation
of that last best solution and the difference between the evaluations.
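The modified update was shown as an image; the sketch below is one plausible reading of the description, in which the step is also scaled by the difference between the previous and current evaluations. This interpretation is an assumption, not the author's confirmed code.

fPrev = Inf;                 % no previous evaluation yet
for k = 1:nIter
    fx = f1(x1, x2);
    g  = gradiente(x1, x2, fx, h, @f1);
    % Scale the step by the change in the evaluation (assumed reading);
    % on the first pass fall back to a plain step.
    if isfinite(fPrev)
        s = abs(fPrev - fx);
    else
        s = 1;
    end
    x1 = x1 - alfa * s * g(1);
    x2 = x2 - alfa * s * g(2);
    fPrev = fx;
end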
1) Results
As we can see, the steps taken in this process are large compared with the previous variation.
2) Conclusion
This is a great way to reach the best solution quickly, but we must be more careful when using this
variation. We need a larger value of alfa because the difference between the evaluated function values
gets smaller and smaller with each iteration, to the point of reaching zero.
C. Multi-start
As we can see, many points could be the best solution (considering that our purpose is to
minimize), so if we run the algorithm once, we will not necessarily get the best solution, which is
why we implement the multi-start mode.
So, the point is to start at random points of our search space N times. If we get a solution better
than the previous one, we store it in a variable that is constantly updated and compared with the
current solution, so we end up with the best result.
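A minimal sketch of the multi-start wrapper, assuming N random restarts around the same descent loop used before:

N = 20;                      % number of random restarts
bestF = Inf;                 % best evaluation found so far
for s = 1:N
    x1 = randN(lower, upper);            % fresh random start point
    x2 = randN(lower, upper);
    for k = 1:nIter                      % same descent loop as before
        fx = f1(x1, x2);
        g  = gradiente(x1, x2, fx, h, @f1);
        x1 = x1 - alfa * g(1);
        x2 = x2 - alfa * g(2);
    end
    fs = f1(x1, x2);
    if fs < bestF                        % keep only the best run
        bestF = fs;
        bestX = [x1, x2];
    end
end
fprintf('Best of %d starts: f = %.6f at (%.4f, %.4f)\n', N, bestF, bestX);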
1) Results
The graphical representation of the multi-start implementation is as follows:
2) Conclusion
The multi-start form is a very useful way to find the best solution when there is more than one
candidate solution. The main impediment could be the computing power of our equipment.