Report on AI
Table of contents:
1. Introduction
2. A history of artificial intelligence
1. Introduction
Technology has progressed geometrically over the last few decades, and artificial intelligence (AI) research has subsequently reached new peaks. Today, AI plays a sizeable role across many economic sectors throughout the world. Furthermore, AI seems to be here to stay, as is evident from the mass digitization of the world. Traditional operations are less able to survive, let alone thrive, in a global market where efficiency of scope and scale is achieved through state-of-the-art technology. With that in mind, I set out to research modern AI applications in business as well as the chronological steps which led to its current state. My paper is divided into six distinct parts. The first part is concerned with the history of artificial intelligence. It is purposefully called ‘A history of AI’ because it is not an all-encompassing, condensed history, but my selection of arguably the most important points in time. The second part describes the two main types of AI, along with their respective subsections; there I present our current feats in the field and, hopefully, the future forms it will take. In the third section my attention falls on the ways AI is achieved at the present moment, including the two main approaches: Machine Learning and Deep Learning. In the fourth section I describe some examples of AI in business practice, focusing mainly on banking and trading in the financial sector, and on transportation. I chose these two fields partly because of my personal interests and partly because of the gravity of AI developments within them. In the fifth section I present a critical commentary on the selected case studies and on the fields in general. In the last section I comment on the future prospects of applied AI, and in the conclusion I summarize the key points of this paper.
2. A history of artificial intelligence
The beginnings of AI thought reach deep into the past and span many fields of science. Philosophers such as R. Descartes and G. W. Leibniz imagined mechanical men and mechanical reasoning devices, respectively. The first steps toward mechanized calculation were taken when the famous mathematician and philosopher B. Pascal created the mechanical calculator called “Pascaline” in 1642. “It could only do addition and subtraction, with numbers being entered by manipulating its dials”1. Later on, science fiction drew on the possibility of intelligent machines, appealing to the fantasy of intelligent non-humans.
The recent history of AI is also multidisciplinary. Notable people with different backgrounds left their marks on its history, as Bruce Buchanan wrote in his journal article: “The inspiration of modern AI thought came from people working in engineering (such as Norbert Wiener’s work on cybernetics, which includes feedback and control), biology (for example, W. Ross Ashby and Warren McCulloch and Walter Pitts’s work on neural networks in simple organisms), experimental psychology (see Newell and Simon [1972]), communication theory (for example, Claude Shannon’s theoretical work), game theory (notably by John Von Neumann and Oskar Morgenstern), mathematics and statistics (for example, Irving J. Good), logic and philosophy (for example, Alan Turing, Alonzo Church, and Carl Hempel), and linguistics (such as Noam Chomsky’s work on grammar).”4 However, it wasn’t until the latter half of the 20th century that researchers had enough computing power and suitable programming languages to conduct experiments toward realizing such visions.

1 Pascaline – Encyclopedia Britannica. Retrieved June/July 2019, from https://www.britannica.com/technology/Pascaline
2 The Turk. Retrieved June/July 2019, from https://interestingengineering.com/the-turk-fake-automaton-chess-player
3 The Chess Turk explained. Retrieved June/July 2019, from https://youtu.be/0DbJUTsUwZE
A major turning point in AI history was marked by Alan Turing’s 1950 paper in the philosophy journal Mind, in which he crystallized the idea of programming an intelligent computing device, eventually leading to the imitation game known as the Turing test. In layman’s terms, the Turing test is an imitation game in which a human being and a computer are interrogated under conditions where the interrogator does not know which is which. Communication takes place through textual messages, and if the interrogator cannot distinguish the two by questioning, the computer is deemed intelligent5. Turing’s argument relied on our own propensity to judge intelligence based on communication capabilities.
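The protocol itself is simple enough to sketch in code. The following Python fragment is a minimal illustration of the setup described above, not Turing’s own formulation; the callables ask, guess_machine, human and machine are hypothetical stand-ins for the interrogator and the two respondents.

import random
from typing import Callable, Dict, List, Tuple

Respondent = Callable[[str], str]  # maps a question to a textual answer

def imitation_game(ask: Callable[[list], str],
                   guess_machine: Callable[[list], str],
                   human: Respondent,
                   machine: Respondent,
                   num_questions: int = 5) -> bool:
    """Run one round of the imitation game; return True if the machine
    fooled the interrogator, i.e. the final guess was wrong."""
    # Hide the two respondents behind randomly assigned labels.
    labels = ["A", "B"]
    random.shuffle(labels)
    players: Dict[str, Respondent] = {labels[0]: human, labels[1]: machine}
    machine_label = labels[1]

    transcript: List[Tuple[str, Dict[str, str]]] = []
    for _ in range(num_questions):
        question = ask(transcript)
        # All communication is textual, so only the answers can give a clue.
        answers = {label: players[label](question) for label in players}
        transcript.append((question, answers))

    return guess_machine(transcript) != machine_label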
In 1956, the work of Allen Newell, J. C. Shaw and Herb Simon was presented at the landmark conference on artificial intelligence held at Dartmouth College. That conference might as well have engraved the initials “AI” into marble, as artificial intelligence got its name then and there. Their presentation revolved around the Logic Theorist (LT) program, which startled the world as it could invent proofs of logic theorems. This feat certainly required creativity as well as considerable programming ingenuity, and LT was deemed the first program to utilize artificial intelligence. The program was deliberately engineered to mimic the problem-solving skills of humans and was based on the system of Principia Mathematica by A. N. Whitehead and B. Russell6. In the end, LT could prove theorems just as well as a talented mathematician, which was an astounding success.
Another important example from that era is the checker-playing program written by Arthur Samuel in 1956. The program ran on an IBM 701 computer, and in 1962 a master checkers player lost a game against its mechanical opponent, although he managed to win the subsequent games7. Although simple, the program was inspiring, as it learned from both its human and computer opponents. However, computing power and programming languages were still very limited. In the 1950s and 1960s, new programming languages such as Lisp, POP and IPL blew more wind into the sails of AI research, but the sluggishness and clumsiness of early operating systems, as well as the sheer size of the machines themselves, still posed a major problem.

4 Buchanan, B. G. (2006). A (Very) Brief History of Artificial Intelligence. AI Magazine, 26(4), 56. Retrieved June/July 2019.
5 Alan Turing's scrapbook. Retrieved June/July 2019, from https://www.turing.org.uk/scrapbook/test.html
6 The development of the first artificial intelligence program. Retrieved June/July 2019, from http://www.historyofinformation.com/detail.php?id=742
7 IBM icons of progress. Retrieved June/July 2019, from https://www.ibm.com/ibm/history/ibm100/us/en/icons/ibm700series/impacts/
Other examples from the subsequent decade include T. Evans’s 1963 thesis on solving analogy problems similar to the ones given on standardized IQ tests, J. Slagle’s dissertation program, which used heuristics to solve integration problems from freshman calculus, and D. Waterman’s 1970 dissertation, in which he used a production system to play draw poker alongside another program that learned how to play it better.
Meanwhile, two major approaches to AI emerged: the rule-based approach and the learning approach. Proponents of the rule-based approach, also called the symbolic or expert-systems approach, made an effort to teach computers to think according to preset rules based on logic. In simplified terms, these logical rules are coded in the form of if-then-else statements, and the approach has worked well for simple games with relatively few decisions. Its disadvantage is its reliance on the knowledge of a human expert in a very specialized domain; it therefore fails to deliver optimal performance when the scope of possible combinations of choices expands. Because of this reliance on human knowledge it is sometimes referred to as fake AI, and the scientific community is divided on its potential. Maintaining these systems is cumbersome and expensive, and their scope of application is limited by the inability to expand the knowledge base without eventually setting contradictory rules. On the other hand, the learning or Artificial Neural Network (ANN) approach set out to reconstruct the human brain instead of teaching the program human logic. This approach has had monumentally more success in practical applications. The machine’s resulting ability to learn produces adaptive intelligence, meaning that knowledge can be altered or rejected as new knowledge is accumulated8. Engineers therefore created intricate webs of artificial neurons which are fed massive amounts of data (photos, chess games, Go games, sounds, and so on), letting the networks learn to identify patterns in the data. The difference between the two approaches is best portrayed by the modus operandi of an image recognition task. Suppose that both methods are used to recognize pictures of cats. The rule-based approach would be to program the algorithm with rules describing a cat: if the image portrays two triangular shapes on top of a circular shape, the object in the image is probably a cat. The learning approach, on the other hand, operates by feeding the program millions of photos labeled “cat” or “no cat”, letting it decide which features are consistently seen across the images, as the sketch below illustrates.

8 Paragraph based on Tricentis - Artificial intelligence software testing. (n.d.). Retrieved June/July 2019, from https://www.tricentis.com/artificial-intelligence-software-testing/ai-approaches-rule-based-testing-vs-learning/#
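To make the contrast concrete, the following Python sketch sets the two approaches side by side under stated assumptions: the feature names, thresholds and the single-neuron learner are illustrative inventions, not a real cat detector. The rule-based version hard-codes the “two triangles above a circle” heuristic as if-then-else checks, while the learning version adjusts its weights from labeled examples.

def rule_based_is_cat(image_features):
    """Rule-based approach: expert-written if-then-else checks over
    hand-picked features, e.g. {"triangular_shapes": 2, "circular_shapes": 1}."""
    if image_features.get("triangular_shapes", 0) >= 2:      # two "ears"
        if image_features.get("circular_shapes", 0) >= 1:    # one "head"
            return True
    return False

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learning approach: fit a single artificial neuron on
    (feature_vector, is_cat) pairs instead of writing rules by hand."""
    n_features = len(examples[0][0])
    weights, bias = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict, then nudge the weights toward the correct label.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            error = label - (1 if activation > 0 else 0)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def learned_is_cat(features, weights, bias):
    return sum(w * x for w, x in zip(weights, features)) + bias > 0

The point of the sketch is the division of labour: in the first function a human expert enumerates the rules, while in the second the program derives its own decision boundary from labeled data, which is why the learning approach scales to problems whose rules no expert can fully write down.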
The neural network approach excels in environments where differences between observed objects hinder logical approaches. This learning approach was very prominent in the beginnings of AI thought during the 1950s and 1960s, and it delivered some impressive results. Yet 1969 marked the end of an optimistic era for the neural network approach, when a group of rule-based researchers convinced others that it was very limited in use, in addition to being unreliable. This event plunged AI research into the first of many winters. The mid-1980s sparked a new fire in the implementation of AI through the use of the Hidden Markov Model technique; however, the revival was not long lived, as most of the 1990s were marked by another AI winter.
Still, near the end of the nineties, IBM’s computer Deep Blue rekindled the lost vigor for using chess as the game of choice for displaying the full range of artificially intelligent brainpower. A brief history of Deep Blue is best summarized by IBM in their “Icons of Progress” series: “In 1985, a graduate student at Carnegie Mellon University, Feng-hsiung Hsu, began working on his dissertation project: a chess playing machine he called ChipTest. A classmate of his, Murray Campbell, worked on the project, too, and in 1989, both were hired to work at IBM Research. There, they continued their work with the help of other computer scientists, including Joe Hoane, Jerry Brody and C. J. Tan. The team named the project Deep Blue. The human chess champion won in 1996 against an earlier version of Deep Blue; the 1997 match was billed as a ‘rematch’.”9 The famous 1997 match was held at the Equitable Center in New York. Millions of people watched the broadcast glued to their small screens with uncertain expectations about the outcome. The first game was won by the chess champion, Garry Kasparov, while the second went to Deep Blue. The next three games ended in draws, but the final game was claimed by Deep Blue, giving IBM the match. The story made headlines as a precedent for future dominations of machine over man. In later years Deep Blue served a number of uses, ranging from playing other strategic games to solving complex problems. Its architecture was also used in financial modeling, including risk analysis and data mining, as well as in pharmaceutical and biological research. Ultimately, Deep Blue was retired to the Smithsonian museum. However, its legacy lives on through IBM’s latest computer, “Watson”, which helps humans detect cancers in medicine, solve simple legal cases, and perform deep financial analyses quicker and better than its human counterparts.

9 IBM Deep Blue. Retrieved June/July 2019, from https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
What, then, has happened recently to reignite the lost flame of AI research and implementation, and why does it seem to be omnipresent, both now and seemingly onwards? One explanation can be found in the requirements of AI, which are twofold. First, AI requires highly developed, reliable and fast computing power, and second, large amounts of data. The data is used to train the algorithms, which is done by feeding them monumental quantities of specific information. This synergy allows for quick analysis of vast amounts of data. Both of these requirements were in scarce supply during the last century. Through the years, Moore’s law held true, and the development of ever more powerful processors created an exponential leap in hardware capabilities. Additionally, the internet led to a burst of rich data, from pictures and videos to purchase records. The end result allowed researchers to make use of artificial neural networks with relatively cheap computing power and a plethora of interesting data.