Algorithm
As an effective method, an algorithm can be expressed within a finite amount of space and
time[4] and in a well-defined formal language[5] for calculating a function.[6] Starting from an
initial state and initial input (perhaps empty),[7] the instructions describe a computation that,
when executed, proceeds through a finite[8] number of well-defined successive states, eventually
producing "output"[9] and terminating at a final ending state. The transition from one state to the
next is not necessarily deterministic; some algorithms, known as randomized algorithms,
incorporate random input.[10]
Etymology
Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb
al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī
("Addition and subtraction in Indian arithmetic"). Both of these texts are lost in the original Arabic
at this time. However, his other book on algebra remains.[1]
In the early 12th century, Latin translations of said al-Khwarizmi texts involving the Hindu–Arabic
numeral system and arithmetic appeared: Liber Alghoarismi de practica arismetrice (attributed to
John of Seville) and Liber Algorismi de numero Indorum (attributed to Adelard of Bath).[2] Here,
alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the
phrase Dixit Algorismi ("Thus spoke Al-Khwarizmi").[3]
The English word algorism is attested around 1230, and then, with Chaucer in 1391, English
adopted the French term.[4][5] In the 15th century, under the influence of the Greek word ἀριθμός
(arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.
Definition
One informal definition is "a set of rules that precisely defines a sequence of operations",[11]
which would include all computer programs (including programs that do not perform numeric
calculations), and (for example) any prescribed bureaucratic procedure[12] or cook-book
recipe.[13] In general, a program is an algorithm only if it stops eventually[14]—even though infinite
loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be a
set of instructions for determining an output, given explicitly, in a form that can be followed by
either a computing machine, or a human who could only carry out specific elementary
operations on symbols.[15]
The concept of algorithm is also used to define the notion of decidability—a notion that is central
for explaining how formal systems come into being starting from a small set of axioms and
rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not
apparently related to any customary physical dimension. Such uncertainties, which characterize
ongoing work, explain why no definition of algorithm is yet available that suits both the concrete
(in some sense) and the abstract usage of the term.
History
Ancient algorithms
Since antiquity, step-by-step procedures for solving mathematical problems have been attested.
This includes Babylonian mathematics (around 2500 BC),[16] Egyptian mathematics (around
1550 BC),[16] Indian mathematics (around 800 BC and later; e.g. Shulba Sutras, Kerala School,
and Brāhmasphuṭasiddhānta),[17][18] the Ifa Oracle
(around 500 BC), Greek mathematics (around 240 BC, e.g. sieve of Eratosthenes and Euclidean
algorithm),[19] and Arabic mathematics (9th century, e.g. cryptographic algorithms for code-
breaking based on frequency analysis).[20] The first cryptographic algorithm for deciphering
encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript
On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by
frequency analysis, the earliest codebreaking algorithm.[20]
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the
Rhind Mathematical Papyrus c. 1550 BC.[16] Algorithms were later used in ancient Hellenistic
mathematics. Two examples are the Sieve of Eratosthenes, which was described in the
Introduction to Arithmetic by Nicomachus,[23][19]: Ch 9.2 and the Euclidean algorithm, which was
first described in Euclid's Elements (c. 300 BC).[19]: Ch 9.1
Computers
Weight-driven clocks
Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the
Middle Ages]", in particular, the verge escapement[24] that provides us with the tick and tock of a
mechanical clock. "The accurate automatic machine"[25] led immediately to "mechanical
automata" beginning in the 13th century and finally to "computational machines"—the difference
engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th
century.[26] Lovelace is credited with the first creation of an algorithm intended for processing on
a computer—Babbage's analytical engine, the first device considered a real Turing-complete
computer instead of just a calculator—and is sometimes called "history's first programmer" as a
result, though a full implementation of Babbage's second device would not be realized until
decades after her lifetime.
Electromechanical relay
Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards
(punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to
the development of the first computers.[27] By the mid-19th century the telegraph, the precursor
of the telephone, was in use throughout the world, its discrete and distinguishable encoding of
letters as "dots and dashes" a common sound. By the late 19th century, the ticker tape (c. 1870s)
was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter
(c. 1910) with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays (invented 1835) were behind the work
of George Stibitz (1937), the inventor of the digital adding device. While working at Bell
Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went
home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had
constructed a binary adding device".[28] The mathematician Martin Davis supported the
particular importance of the electromechanical relay.[29]
Formalization
In 1928, a partial formalization of the modern concept of algorithms began with attempts to
solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations
were framed as attempts to define "effective calculability"[30] or "effective method".[31] Those
formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and
1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan
Turing's Turing machines of 1936–37 and 1939.
Representations
Algorithms can be expressed in many kinds of notation, including natural languages,
pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by
interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous
and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts
and control tables are structured ways to express algorithms that avoid many of the ambiguities
common in statements based on natural language. Programming languages are primarily
intended for expressing algorithms in a form that can be executed by a computer, but they are
also often used as a way to define or document algorithms.
Turing machines
There is a wide variety of representations possible and one can express a given Turing machine
program as a sequence of machine tables (see finite-state machine, state-transition table and
control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a
form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing
machine for more). Representations of algorithms can also be classified into three accepted
levels of Turing machine description: high level description, implementation description, and
formal description.[32] A high level description describes qualities of the algorithm itself, ignoring
how it is implemented on the Turing machine.[32] An implementation description describes the
general manner in which the machine moves its head and stores data in order to carry out the
algorithm, but doesn't give exact states.[32] In the most detail, a formal description gives the
exact state table and list of transitions of the Turing machine.[32]
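As a sketch of what the most detailed level contains, the following C program encodes the exact state table and transition list of a trivial Turing machine that flips every bit of a binary string; the machine, its states, and the tape encoding are assumptions invented for this illustration, not taken from the source:

    #include <stdio.h>

    /* Illustrative "formal description": the exact state table of a
       one-tape Turing machine that flips every bit until it reads the
       blank symbol '_'. Everything here is an invented example. */

    enum { STATE_SCAN, STATE_HALT };

    struct rule {
        int state;    /* current state             */
        char read;    /* symbol under the head     */
        char write;   /* symbol to write           */
        int move;     /* head movement: +1 = right */
        int next;     /* next state                */
    };

    static const struct rule table[] = {
        { STATE_SCAN, '0', '1', +1, STATE_SCAN },
        { STATE_SCAN, '1', '0', +1, STATE_SCAN },
        { STATE_SCAN, '_', '_',  0, STATE_HALT },
    };

    int main(void) {
        char tape[] = "1011_";   /* tape with a trailing blank */
        int state = STATE_SCAN, head = 0;

        while (state != STATE_HALT) {
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
                if (table[i].state == state && table[i].read == tape[head]) {
                    tape[head] = table[i].write;
                    head += table[i].move;
                    state = table[i].next;
                    break;
                }
            }
        }
        printf("%s\n", tape);    /* prints 0100_ */
        return 0;
    }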
Flowchart representation
The graphical aid called a flowchart offers a way to describe and document an algorithm (and a
computer program corresponding to it). Like the program flow of a Minsky machine, a flowchart
always starts at the top of a page and proceeds down. Its primary symbols are only four: the
directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-
ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these
primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the
superstructure. The symbols and their use to build the canonical structures are shown in the
diagram.[33]
Algorithmic analysis
It is frequently important to know how much of a particular resource (such as time or storage) is
theoretically required for a given algorithm. Methods have been developed for the analysis of
algorithms to obtain such quantitative answers (estimates); for example, an algorithm which
adds up the elements of a list of n numbers would have a time requirement of O(n), using big O
notation. At all times the algorithm only needs to remember two values: the sum of all the
elements so far, and its current position in the input list. Therefore, it is said to have a space
requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it
is counted.
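A minimal C sketch of this summation example (the function name is an assumption for this illustration): one pass over the input, remembering only the running sum and the current position:

    #include <stdio.h>

    /* One pass over the list: O(n) time, O(1) extra space
       (not counting the input array itself). */
    double sum_list(const double *list, int n) {
        double sum = 0.0;             /* sum of all elements so far */
        for (int i = 0; i < n; i++)   /* i is the current position  */
            sum += list[i];
        return sum;
    }

    int main(void) {
        double numbers[] = { 3.0, 1.5, 2.5 };
        printf("%.1f\n", sum_list(numbers, 3));   /* prints 7.0 */
        return 0;
    }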
Different algorithms may complete the same task with a different set of instructions in less or
more time, space, or 'effort' than others. For example, a binary search algorithm (with cost
O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted
lists or arrays.
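The following C sketch contrasts the two lookup strategies on a sorted array (illustrative code, not from the source); both return the index of the key, or -1 if it is absent:

    #include <stdio.h>

    int sequential_search(const int *a, int n, int key) {   /* O(n) */
        for (int i = 0; i < n; i++)
            if (a[i] == key) return i;
        return -1;
    }

    int binary_search(const int *a, int n, int key) {       /* O(log n) */
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    int main(void) {
        int sorted[] = { 2, 3, 5, 7, 11, 13 };
        printf("%d %d\n", sequential_search(sorted, 6, 7),
                          binary_search(sorted, 6, 7));     /* prints 3 3 */
        return 0;
    }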
Execution efficiency
To illustrate the potential improvements possible even in well-established algorithms, a recent
significant innovation, relating to FFT algorithms (used heavily in the field of image processing),
can decrease processing time up to 1,000 times for applications like medical imaging.[35] In
general, speed improvements depend on special properties of the problem, which are very
common in practical applications.[36] Speedups of this magnitude enable computing devices
that make extensive use of image processing (like digital cameras and medical equipment) to
consume less power.
Design
Algorithm design refers to a method or a mathematical process for problem-solving and
engineering algorithms. The design of algorithms is part of many solution theories, such as
divide-and-conquer or dynamic programming within operations research. Techniques for
designing and implementing algorithm designs are also called algorithm design patterns,[37] with
examples including the template method pattern and the decorator pattern. One of the most
important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big
O notation is used to describe e.g., an algorithm's run-time growth as the size of its input
increases.
Structured programming
Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing
complete. In fact, it has been demonstrated that Turing completeness requires only four
instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny
and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-
THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using
only these instructions; on the other hand "it is also possible, and not too hard, to write badly
structured programs in a structured language".[38] Tausworthe augments the three Böhm-
Jacopini canonical structures:[39] SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-
WHILE and CASE.[40] An additional benefit of a structured program is that it lends itself to proofs
of correctness using mathematical induction.[41]
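As an illustration of this contrast (the functions and the summing task are assumptions for this sketch), here is the same loop written in the "four instruction" GOTO style and as a structured program:

    #include <stdio.h>

    /* Both functions sum the integers 1..n. */

    int sum_goto(int n) {
        int i = 1, sum = 0;
    top:
        if (i > n) goto done;   /* conditional GOTO   */
        sum += i;               /* assignment         */
        i += 1;
        goto top;               /* unconditional GOTO */
    done:
        return sum;             /* HALT               */
    }

    int sum_structured(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++)   /* a WHILE-DO style loop */
            sum += i;
        return sum;
    }

    int main(void) {
        printf("%d %d\n", sum_goto(10), sum_structured(10));  /* 55 55 */
        return 0;
    }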
Classification
There are various ways to classify algorithms, each with its own merits.
By implementation
One way to classify algorithms is by implementation means.
Recursion
A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain
condition (also known as a termination condition) matches, which is a method common to
functional programming. Iterative algorithms use repetitive constructs like loops and sometimes
additional data structures like stacks to solve the given problems. Some problems are naturally
suited for one implementation or the other. For example, towers of Hanoi is well understood
using recursive implementation. Every recursive version has an equivalent (but possibly more or
less complex) iterative version, and vice versa.

Recursive C implementation of Euclid's algorithm:

    /* Subtraction-based Euclid's algorithm; assumes A and B are positive
       (gcd(0, B) with B > 0 would not terminate in this version). */
    int gcd(int A, int B) {
        if (B == 0)
            return A;
        else if (A > B)
            return gcd(A - B, B);
        else
            return gcd(A, B - A);
    }
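To illustrate that equivalence, here is an iterative version of the same subtraction-based computation (a sketch under the same assumption of positive inputs):

    /* Iterative equivalent of the recursive gcd above. */
    int gcd_iterative(int A, int B) {
        while (B != 0) {
            if (A > B)
                A = A - B;
            else
                B = B - A;
        }
        return A;
    }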
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of
an algorithm at a time. Those computers are sometimes called serial computers. An algorithm
designed for such an environment is called a serial algorithm, as opposed to parallel algorithms
or distributed algorithms. Parallel algorithms take advantage of computer architectures where
multiple processors can work on a problem at the same time. Distributed algorithms use
multiple machines connected with a computer network. Parallel and distributed algorithms
divide the problem into more symmetrical or asymmetrical subproblems and collect the results
back together. For example, an algorithm that splits its work across the cores of a multi-core
CPU is a parallel algorithm. The resource consumption in such algorithms is not only processor
cycles on each processor but also the communication overhead between the processors. Some
sorting algorithms can be parallelized efficiently, but their communication overhead is
expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel
algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decision at every step of the algorithm,
whereas non-deterministic algorithms solve problems via guessing, although typical guesses are
made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an
approximation that is close to the true solution. The approximation can be reached by either
using a deterministic or a random strategy. Such algorithms have practical value for many hard
problems. An example of a problem commonly attacked with approximate algorithms is the
knapsack problem, where there is a set of given items and each item has some weight and
some value. The goal is to pack the knapsack to get the maximum total value, while the total
weight that can be carried is no more than some fixed number X. So, the solution must consider
the weights of items as well as their value.[42]
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used
for those algorithms which seem inherently quantum, or use some essential feature of quantum
computing such as quantum superposition or quantum entanglement.
By design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. There are a
number of paradigms, each different from the others. Furthermore, each of these categories
includes many different types of algorithms. Some common paradigms are:
Optimization problems
For optimization problems there is a more specific classification of algorithms; an algorithm for
such problems may fall into one or more of the general categories described above as well as
into one of the following:
Linear programming
When searching for optimal solutions to a linear function bound to linear equality and inequality
constraints, the constraints of the problem can be used directly in producing the optimal
solutions. There are algorithms that can solve any problem in this category, such as the popular
simplex algorithm.[44] Problems that can be solved with linear programming include the
maximum flow problem for directed graphs. If a problem additionally requires that one or more
of the unknowns must be an integer, then it is classified in integer programming. A linear
programming algorithm can solve such a problem if it can be proved that all restrictions for
integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general
case, a specialized algorithm or an algorithm that finds approximate solutions is used,
depending on the difficulty of the problem.
Dynamic programming
When a problem shows optimal substructures—meaning the optimal solution to a problem can
be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning
the same subproblems are used to solve many different problem instances, a quicker approach
called dynamic programming avoids recomputing solutions that have already been computed.
For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a
weighted graph can be found by using the shortest path to the goal from all adjacent vertices.
Dynamic programming and memoization go together. The main difference between dynamic
programming and divide and conquer is that subproblems are more or less independent in
divide and conquer, whereas subproblems overlap in dynamic programming. The difference
between dynamic programming and straightforward recursion is the caching or memoization of
recursive calls. When subproblems are independent and there is no repetition, memoization
does not help; hence dynamic programming is not a solution for all complex problems. By using
memoization or maintaining a table of subproblems already solved, dynamic programming
reduces the exponential nature of many problems to polynomial complexity.
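A standard illustration of memoization, not drawn from the text above, is computing Fibonacci numbers in C, where a table of already-solved subproblems collapses the exponential recursion tree into a linear-time computation:

    #include <stdio.h>

    /* Fibonacci has overlapping subproblems, so caching each fib(n)
       once turns the exponential naive recursion into O(n) calls.
       Indices up to 92 keep fib(n) within a signed 64-bit range. */
    static long long memo[93];   /* 0 means "not computed yet" */

    long long fib(int n) {
        if (n < 2) return n;
        if (memo[n] == 0)
            memo[n] = fib(n - 1) + fib(n - 2);
        return memo[n];
    }

    int main(void) {
        printf("%lld\n", fib(50));   /* prints 12586269025 */
        return 0;
    }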
The greedy method
A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining
substructures, in this case not of the problem but of a given solution. Such algorithms start with
some solution, which may be given or have been constructed in some way, and improve it by
making small modifications. For some problems they can find the optimal solution, while for
others they stop at local optima, that is, at solutions that cannot be improved by the algorithm
but are not optimum. The most popular use of greedy algorithms is for finding the minimal
spanning tree, where finding the optimal solution is possible with this method. Huffman tree,
Kruskal's, Prim's, and Sollin's are greedy algorithms that can solve this optimization problem.
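A minimal C sketch of Prim's greedy method on a small example graph (the graph and its weights are arbitrary choices for this illustration): at each step it adds the cheapest edge leaving the tree built so far:

    #include <stdio.h>
    #include <limits.h>

    #define N 5   /* number of vertices in the example graph */

    int main(void) {
        int w[N][N] = {            /* adjacency matrix; 0 = no edge */
            { 0, 2, 0, 6, 0 },
            { 2, 0, 3, 8, 5 },
            { 0, 3, 0, 0, 7 },
            { 6, 8, 0, 0, 9 },
            { 0, 5, 7, 9, 0 },
        };
        int in_tree[N] = { 1, 0, 0, 0, 0 };   /* grow the tree from vertex 0 */
        int total = 0;

        for (int added = 1; added < N; added++) {
            int best_u = -1, best_v = -1, best_w = INT_MAX;
            /* Greedy choice: the cheapest edge from the tree to a new vertex. */
            for (int u = 0; u < N; u++)
                if (in_tree[u])
                    for (int v = 0; v < N; v++)
                        if (!in_tree[v] && w[u][v] && w[u][v] < best_w) {
                            best_u = u; best_v = v; best_w = w[u][v];
                        }
            in_tree[best_v] = 1;
            total += best_w;
            printf("edge %d-%d (weight %d)\n", best_u, best_v, best_w);
        }
        printf("total weight %d\n", total);   /* prints 16 for this graph */
        return 0;
    }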
The heuristic method
In optimization problems, heuristic algorithms can be used to find a solution close to the
optimal solution in cases where finding the optimal solution is impractical. These algorithms
work by getting closer and closer to the optimal solution as they progress. In principle, if run for
an infinite amount of time, they will find the optimal solution. Their merit is that they can find a
solution very close to the optimal solution in a relatively short time. Such algorithms include
local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like
simulated annealing, are non-deterministic algorithms while others, like tabu search, are
deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is
further categorized as an approximation algorithm.
Legal status
Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting
solely of simple manipulations of abstract concepts, numbers, or signals does not constitute
"processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson).
However, practical applications of algorithms are sometimes patentable. For example, in
Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic
rubber was deemed patentable. The patenting of software is controversial,[45] and there are
criticized patents involving algorithms, especially data compression algorithms, such as Unisys's
LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of
cryptography).
Examples
One of the simplest algorithms is to find the largest number in a list of numbers of random order.
Finding the solution requires looking at every number in the list. From this follows a simple
algorithm, which can be stated in a high-level description in English prose, as:
High-level description:

    Algorithm LargestNumber
      Input: A list of numbers L.
      Output: The largest number in the list L.

      if L.size = 0 return null
      largest ← L[0]
      for each item in L, do
        if item > largest, then
          largest ← item
      return largest
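A direct C rendering of this pseudocode might look like the following sketch; since C has no null integer, this version signals the empty-list case through its return value (an assumption of this illustration):

    #include <stdio.h>

    /* Returns 1 and writes the largest element through *largest,
       or returns 0 if the list is empty (the pseudocode's "null"). */
    int largest_number(const int *L, int size, int *largest) {
        if (size == 0) return 0;
        *largest = L[0];
        for (int i = 1; i < size; i++)
            if (L[i] > *largest)
                *largest = L[i];
        return 1;
    }

    int main(void) {
        int numbers[] = { 4, 9, 2, 9, 7 };
        int largest;
        if (largest_number(numbers, 5, &largest))
            printf("%d\n", largest);   /* prints 9 */
        return 0;
    }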
See also

Algorithm characterizations
Algorithmic bias
Algorithmic composition
Algorithmic entities
Algorithmic synthesis
Algorithmic technique
Algorithmic topology
Garbage in, garbage out
Introduction to Algorithms (textbook)
Government by algorithm
List of algorithms
List of algorithm general topics
Regulation of algorithms
Theory of computation
Computability theory
Computational complexity theory
Computational mathematics
External links
Algorithm (https://mathworld.wolfram.com/Algorithm.html) at Wolfram MathWorld
Dictionary of Algorithms and Data Structures (https://www.nist.gov/dads/) – National Institute
of Standards and Technology

Algorithm repositories

The Stony Brook Algorithm Repository (http://www.cs.sunysb.edu/~algorith/) – State University
of New York at Stony Brook
Collected Algorithms of the ACM (http://calgo.acm.org/) – Association for Computing Machinery
The Stanford GraphBase (http://www-cs-staff.stanford.edu/~knuth/sgb.html) Archived
(https://web.archive.org/web/20151206222112/http://www-cs-staff.stanford.edu/%7Eknuth/sgb.html)
December 6, 2015, at the Wayback Machine – Stanford University