
Unit I Notes


MAHARANA PRATAP GROUP OF INSTITUTIONS

KOTHI MANDHANA, KANPUR


(Approved by AICTE, New Delhi and Affiliated to Dr. AKTU, Lucknow)

Digital Notes
[Department of Computer Application]
Subject Name : Artificial Intelligence
Subject Code : KCA301
Course : MCA
Branch : -
Semester : IIIrd
Prepared by : Mr. Narendra Kumar Sharma

Reference No./MCA/NARENDRA/KCA301/2/3
Unit – 1
Artificial Intelligence

1.1 Artificial Intelligence (AI)


Definition 1: Artificial intelligence enables computers and machines to mimic the perception,
learning, problem-solving, and decision-making capabilities of the human mind.

Definition 2: AI is a branch of computer science concerned with creating intelligent machines that can behave like humans, think like humans, and make decisions.

In computer science, the term artificial intelligence (AI) refers to any human-like
intelligence exhibited by a computer, robot, or other machine. In popular usage, artificial
intelligence refers to the ability of a computer or machine to mimic the capabilities of the human
mind—learning from examples and experience, recognizing objects, understanding and
responding to language, making decisions, solving problems—and combining these and other
capabilities to perform functions a human might perform, such as greeting a hotel guest or
driving a car.

After decades of being relegated to science fiction, today AI is part of our everyday lives. The surge in AI development is made possible by the sudden availability of large amounts of data and the corresponding development and wide availability of computer systems that can process all that data faster and more accurately than humans can. AI is completing our words as we type them, providing driving directions when we ask, vacuuming our floors, and recommending what we should buy or binge-watch next. And it is driving applications, such as medical image analysis, that help skilled professionals do important work faster and with greater success.
As common as artificial intelligence is today, understanding AI and AI terminology can
be difficult because many of the terms are used interchangeably; and while they are actually
interchangeable in some cases, they aren’t in other cases. What’s the difference between artificial
intelligence and machine learning? Between machine learning and deep learning? Between
speech recognition and natural language processing? Between weak AI and strong AI? This
article will try to help you sort through these and other terms and understand the basics of how
AI works.

1.2 Why Artificial Intelligence?


o With the help of AI, you can create software or devices which can solve real-world problems easily and accurately, such as health issues, marketing, traffic issues, etc.
o With the help of AI, you can create your own personal virtual assistant, such as Cortana, Google Assistant, Siri, etc.
o With the help of AI, you can build robots which can work in environments where human survival is at risk.
o AI opens a path for other new technologies, new devices, and new opportunities.

1.3 Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:


1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning a surgical operation
o Driving a car in traffic
5. Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.

1.4 What Comprises Artificial Intelligence?


Artificial Intelligence is not just a part of computer science; it is vast and draws on many other disciplines. To create AI, we should first understand how intelligence is composed: intelligence is an intangible property of our brain which is a combination of reasoning, learning, problem-solving, perception, language understanding, etc.

To achieve these capabilities in a machine or software, Artificial Intelligence draws on the following disciplines:

o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neuroscience
o Statistics

1.5 Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
o High accuracy with fewer errors: AI machines or systems are less prone to errors and more accurate, as they take decisions based on prior experience or information.
o High speed: AI systems can be very fast at decision-making; because of this, AI systems can beat a chess champion in the game of chess.
o High reliability: AI machines are highly reliable and can perform the same action multiple times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human can be risky.
o Digital assistant: AI can be very useful for providing digital assistance to users; for example, AI technology is currently used by various e-commerce websites to show products according to customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars which can make our journey safer and hassle-free, facial recognition for security purposes, natural language processing to communicate with humans in human language, etc.

1.6 Disadvantages of Artificial Intelligence


Every technology has some disadvantages, and the same goes for Artificial Intelligence. Despite being such an advantageous technology, it still has some disadvantages which we need to keep in mind while creating an AI system. Following are the disadvantages of AI:
o High cost: The hardware and software requirements of AI are very costly, as they require a lot of maintenance to meet current-world requirements.
o Can't think out of the box: Even though we are making smarter machines with AI, they still cannot work outside the box; a robot will only do the work for which it is trained or programmed.
o No feelings and emotions: An AI machine can be an outstanding performer, but it does not have feelings, so it cannot form any kind of emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: With the growth of technology, people are becoming more dependent on devices and hence losing some of their mental capabilities.
o No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence; they cannot be creative and imaginative.

2. History of Artificial Intelligence


Artificial Intelligence is not a new term or a new technology for researchers. The idea is much older than you might imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. Following are some milestones in the history of AI, which trace the journey from the beginnings of AI to its development till date.
Maturation of Artificial Intelligence (1943-1952)

 Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
 Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
 Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. He published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, called the Turing test.

The birth of Artificial Intelligence (1952-1956)


 Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some of them.
 Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)


 Year 1966: The researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
 Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.

The first AI winter (1974-1980)


 The duration between 1974 and 1980 was the first AI winter. An AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
 During AI winters, public interest in artificial intelligence decreased.

A boom of AI (1980-1987)
 Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the decision-making ability of a human expert.
 In the year 1980, the first national conference of the American Association for Artificial Intelligence was held at Stanford University.

The second AI winter (1987-1993)


 The duration between 1987 and 1993 was the second AI winter.
 Again, investors and the government stopped funding AI research due to high costs and inefficient results, even though expert systems such as XCON had initially been very cost effective.

The emergence of intelligent agents (1993-2011)


 Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov and became the first computer to defeat a reigning world chess champion.
 Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
 Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.

Deep learning, big data and artificial general intelligence (2011-present)


 Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
 Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as predictions.
 Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test".
 Year 2018: The "Project Debater" from IBM debated complex topics with two master debaters and performed extremely well.
 Google also demonstrated an AI program, "Duplex", a virtual assistant that took a hairdresser appointment over a phone call, and the lady on the other side did not notice that she was talking with a machine.
Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science are now booming. Nowadays companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and promises high intelligence.

3. Application of AI
Artificial Intelligence has various applications in today's society. It is becoming essential for today's time because it can solve complex problems in an efficient way across multiple industries, such as healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and fast.
Following are some sectors which have applications of Artificial Intelligence:
1. AI in Astronomy
o Artificial Intelligence can be very useful for solving complex problems about the universe. AI technology can be helpful for understanding the universe, such as how it works, its origin, etc.

2. AI in Healthcare
o In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it.
o Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can warn when a patient is worsening so that medical help can reach the patient before hospitalization.

3. AI in Gaming
o AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think about a large number of possible positions.

4. AI in Finance
o AI and finance industries are the best matches for each other. The finance industry is
implementing automation, chatbot, adaptive intelligence, algorithm trading, and machine
learning into financial processes.

5. AI in Data Security
o The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the digital world. AI can be used to make your data more safe and secure. Some examples, such as the AEG bot and the AI2 platform, are used to detect software bugs and cyber-attacks in a better way.

6. AI in Social Media
o Social Media sites such as Facebook, Twitter, and Snapchat contain billions of user
profiles, which need to be stored and managed in a very efficient way. AI can organize
and manage massive amounts of data. AI can analyze lots of data to identify the latest
trends, hashtag, and requirement of different users.
7. AI in Travel & Transport
o AI is becoming highly demanded in the travel industry. AI is capable of doing various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel industries are using AI-powered chatbots which can make human-like interactions with customers for better and faster responses.

8. AI in Automotive Industry
o Some automotive industries are using AI to provide virtual assistants to their users for better performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
o Various industries are currently working on developing self-driving cars which can make your journey safer and more secure.

9. AI in Robotics:
o Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform some repetitive task, but with the help of AI we can create intelligent robots which can perform tasks based on their own experience without being pre-programmed.
o Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots Erica and Sophia have been developed, which can talk and behave like humans.

10. AI in Entertainment
o We are currently using some AI-based applications in our daily life through entertainment services such as Netflix or Amazon. With the help of ML/AI algorithms, these services show recommendations for programs or shows.

11. AI in Agriculture
o Agriculture is an area which requires various resources, labor, money, and time for the best result. Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI for agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
12. AI in E-commerce
o AI is providing a competitive edge to the e-commerce industry, and it is becoming increasingly in demand in the e-commerce business. AI helps shoppers discover associated products in their preferred size, color, or even brand.

13. AI in education
o AI can automate grading so that tutors have more time to teach. An AI chatbot can communicate with students as a teaching assistant.
o In the future, AI could work as a personal virtual tutor for students, easily accessible at any time and any place.

4. Types of Artificial Intelligence


Artificial Intelligence can be divided into various types. There are mainly two categorizations: one based on capabilities and one based on functionality. The types under each categorization are described below.

AI type-1: Based on Capabilities


1. Weak AI or Narrow AI:
o Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. Narrow AI is the most common and currently available form of AI in the world of Artificial Intelligence.
o Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task. Hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if it goes beyond its limits.
o Apple Siri is a good example of Narrow AI, but it operates with a limited pre-defined range of functions.
o IBM's Watson supercomputer also comes under Narrow AI, as it uses an expert system approach combined with machine learning and natural language processing.
o Some examples of Narrow AI are playing chess, purchase suggestions on e-commerce sites, self-driving cars, speech recognition, and image recognition.

2. General AI:
o General AI is a type of intelligence which could perform any intellectual task as efficiently as a human.
o The idea behind general AI is to make a system that could be smarter and think like a human on its own.
o Currently, no system exists which could come under general AI and perform any task as perfectly as a human.
o Researchers worldwide are now focused on developing machines with general AI.
o Systems with general AI are still under research, and it will take a lot of effort and time to develop them.

3. Super AI:
o Super AI is a level of system intelligence at which machines could surpass human intelligence and could perform any task better than a human, with cognitive properties. It is an outcome of general AI.
o Some key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
o Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in the real world is still a world-changing task.
Artificial Intelligence type-2: Based on functionality
1. Reactive Machines
o Purely reactive machines are the most basic types of Artificial Intelligence.
o Such AI systems do not store memories or past experiences for future actions.
o These machines only focus on current scenarios and react to them with the best possible action.
o IBM's Deep Blue system is an example of reactive machines.
o Google's AlphaGo is also an example of reactive machines.

2. Limited Memory
o Limited memory machines can store past experiences or some data for a short period of
time.
o These machines can use stored data for a limited time period only.
o Self-driving cars are one of the best examples of Limited Memory systems. These cars
can store recent speed of nearby cars, the distance of other cars, speed limit, and other
information to navigate the road.

3. Theory of Mind
o Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
o This type of AI machine has not yet been developed, but researchers are making a lot of effort and progress toward developing such AI machines.
4. Self-Awareness
o Self-aware AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness.
o These machines will be smarter than the human mind.
o Self-aware AI does not yet exist in reality; it is a hypothetical concept.

5. Agents in Artificial Intelligence


An AI system can be defined in terms of a rational agent and its environment. The agents sense the environment through sensors and act on their environment through actuators. An AI agent can have mental properties such as knowledge, belief, intention, etc.

5.1 What is an Agent?


An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.
An agent can be:

o Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which work as actuators.
o Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.
o Software Agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

Hence, the world around us is full of agents such as thermostats, cellphones, and cameras; even we ourselves are agents. Before moving forward, we should first know about sensors, effectors, and actuators.

Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of machines that convert energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels,
arms, fingers, wings, fins, and display screen.

5.2 Intelligent Agent


An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the main four rules for an AI agent:


o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: Decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

5.3 Rational Agent


A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
A rational agent is said to do the right thing. AI is about creating rational agents for use in game theory and decision theory in various real-world scenarios.

For an AI agent, rational action is most important because in AI reinforcement learning algorithms, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

Note: Rational agents in AI are very similar to intelligent agents.

5.4 Rationality
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:
o The performance measure, which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts.

Note: Rationality differs from omniscience, because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.

5.5 Structure of an AI Agent


The task of AI is to design an agent program which implements the agent function. The structure
of an intelligent agent is a combination of architecture and agent program. It can be viewed as:

Agent = Architecture + Agent program

Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is the machinery that the AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action:

f : P* → A

Agent program: The agent program is an implementation of the agent function. The agent program executes on the physical architecture to produce the function f. A minimal sketch follows.
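To make the Agent = Architecture + Agent program idea concrete, the following minimal Python sketch (an illustrative assumption, not part of the original notes) implements the agent function f : P* → A as a table lookup over the percept history, with a small loop standing in for the architecture.

class TableDrivenAgentProgram:
    """Agent program: implements the agent function by looking up the
    full percept history (P*) in a table of percept-sequence -> action."""

    def __init__(self, table):
        self.table = table            # maps tuples of percepts to actions
        self.percept_history = []     # P*: everything perceived so far

    def __call__(self, percept):
        self.percept_history.append(percept)
        # Default to a no-op if the sequence is not in the table
        return self.table.get(tuple(self.percept_history), "NoOp")


def run_agent(program, percepts):
    # "Architecture": feeds percepts from the sensors to the program and
    # passes each chosen action to the actuators (here, just printed).
    for p in percepts:
        print(p, "->", program(p))


table = {(("A", "Dirty"),): "Suck",
         (("A", "Dirty"), ("A", "Clean")): "MoveRight"}
run_agent(TableDrivenAgentProgram(table), [("A", "Dirty"), ("A", "Clean")])
# Output: ('A', 'Dirty') -> Suck   then   ('A', 'Clean') -> MoveRight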

5.6 PEAS Representation


PEAS is a type of model on which an AI agent acts. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.

5.7 PEAS for self-driving cars:

For a self-driving car, the PEAS representation will be:

Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrians
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
Examples of agents with their PEAS representation:

1. Medical Diagnosis
Performance measure: Healthy patient, minimized cost
Environment: Patient, hospital, staff
Actuators: Tests, treatments
Sensors: Keyboard (entry of symptoms)

2. Vacuum Cleaner
Performance measure: Cleanness, efficiency, battery life, security
Environment: Room, table, wood floor, carpet, various obstacles
Actuators: Wheels, brushes, vacuum extractor
Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

3. Part-picking Robot
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arms, hand
Sensors: Camera, joint angle sensors
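As a small illustration (an assumed example, not from the syllabus), a PEAS description can also be written down as plain Python data, which makes it easy to compare agents side by side:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)

print(self_driving_car.sensors)   # ['camera', 'GPS', 'speedometer', ...]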

6. Types of AI Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These are given below:
o Simple Reflex Agent
o Model-based reflex agent
o Goal-based agents
o Utility-based agent
o Learning agent
1. Simple Reflex agent:
o Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percept and ignore the rest of the percept history.
o These agents only succeed in a fully observable environment.
o The simple reflex agent does not consider any part of the percept history during its decision and action process.
o The simple reflex agent works on the condition-action rule, which means it maps the current state to an action; for example, a room cleaner agent works only if there is dirt in the room (a minimal sketch follows this list).
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They do not have knowledge of non-perceptual parts of the current state.
o The condition-action rules are mostly too big to generate and store.
o They are not adaptive to changes in the environment.
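The following minimal sketch (an assumed room-cleaner example) shows the condition-action rule idea: the agent looks only at the current percept and maps it directly to an action.

def simple_reflex_vacuum_agent(percept):
    location, status = percept        # current percept only, no history
    # Condition-action rules
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "MoveRight"
    else:
        return "MoveLeft"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # MoveLeft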

2. Model-based reflex agent


o The model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important factors:
o Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
o Internal state: a representation of the current state based on the percept history.
o These agents maintain a model, which is knowledge of the world, and perform actions based on that model.
o Updating the agent state requires information about:
a. how the world evolves, and
b. how the agent's actions affect the world.
A minimal sketch of such an agent follows.
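This sketch is an illustrative assumption, not the book's implementation: the agent keeps an internal state built from the percept history plus a simple model of how its own action changes the world.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each location (unknown at first)
        self.world = {"A": None, "B": None}
        self.location = "A"

    def update_state(self, percept):
        # How the world evolves, as observed through the current percept
        self.location, status = percept
        self.world[self.location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        if self.world[self.location] == "Dirty":
            self.world[self.location] = "Clean"   # model: sucking cleans the cell
            return "Suck"
        # Otherwise move toward the square whose state is still unknown or dirty
        return "MoveRight" if self.location == "A" else "MoveLeft"

agent = ModelBasedVacuumAgent()
print(agent.choose_action(("A", "Dirty")))   # Suck
print(agent.choose_action(("A", "Clean")))   # MoveRight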

3. Goal-based agents
o The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
o They choose an action so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
4. Utility-based agents
o These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action to perform.
o The utility function maps each state to a real number to check how efficiently each action achieves the goals (a small sketch follows this list).
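A tiny sketch (assumed example) of the utility idea: a utility function maps each predicted resulting state to a real number, and the agent picks the action whose outcome scores highest.

def utility(state):
    # Hypothetical scoring: reward cleaned cells, penalize energy use
    return 10.0 * state["cleaned_cells"] - 1.0 * state["energy_used"]

def result(state, action):
    # The agent's model of what each action does to the state
    nxt = dict(state)
    if action == "Suck":
        nxt["cleaned_cells"] += 1
        nxt["energy_used"] += 2
    elif action == "Move":
        nxt["energy_used"] += 1
    return nxt

def best_action(state, actions):
    return max(actions, key=lambda a: utility(result(state, a)))

state = {"cleaned_cells": 0, "energy_used": 0}
print(best_action(state, ["Suck", "Move", "NoOp"]))   # Suck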

5. Learning Agents
o A learning agent in AI is a type of agent which can learn from its past experiences; that is, it has learning capabilities.
o It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has mainly four conceptual components, which are:
a. Learning element: It is responsible for making improvements by learning from the environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it. A minimal sketch follows.
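The four conceptual components can be wired together in a compact sketch (illustrative only; the rewards below are a hypothetical stand-in for the environment):

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}     # learned knowledge

    def performance_element(self):
        # Selects the external action (here, the best-valued one)
        return max(self.actions, key=lambda a: self.values[a])

    def critic(self, action, reward):
        # Feedback: how well the agent did w.r.t. a fixed performance standard
        return reward - self.values[action]

    def learning_element(self, action, feedback, rate=0.5):
        # Improves the agent using the critic's feedback
        self.values[action] += rate * feedback

    def problem_generator(self):
        # Suggests exploratory actions that lead to new experiences
        return random.choice(self.actions)

agent = LearningAgent(["left", "right"])
for _ in range(20):
    action = agent.problem_generator()              # explore
    reward = 1.0 if action == "right" else 0.0      # assumed environment
    agent.learning_element(action, agent.critic(action, reward))
print(agent.performance_element())                  # usually "right"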

7. Agent Environment in AI
An environment is everything in the world that surrounds the agent, but it is not a part of the agent itself. An environment can be described as the situation in which an agent is present.

The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is usually said to be non-deterministic.

7.1 Features of Environment


As per Russell and Norvig, an environment can have various features from the point of view of
an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

1. Fully observable vs Partially Observable:


o If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
o A fully observable environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.
o If an agent has no sensors at all, then the environment is called unobservable.

2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next state of
the environment, then such environment is called a deterministic environment.
o A stochastic environment is random in nature and cannot be determined completely by an
agent.
o In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
o However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.

4. Single-agent vs Multi-agent
o If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
o However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
o The agent design problems in the multi-agent environment are different from single agent
environment.

5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating then such environment
is called a dynamic environment else it is called a static environment.
o Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
o However, in a dynamic environment, agents need to keep looking at the world before each action.
o Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an
example of a static environment.

6. Discrete vs Continuous:
o If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment else it is
called continuous environment.
o A chess game comes under a discrete environment, as there is a finite number of moves that can be performed.
o A self-driving car is an example of a continuous environment.
7. Known vs Unknown
o Known and unknown are not actually features of an environment; they describe an agent's state of knowledge needed to perform an action.
o In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how it works in order to perform an action.
o It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.

8. Accessible vs Inaccessible
o If an agent can obtain complete and accurate information about the environment's state, then such an environment is called an accessible environment; otherwise it is called inaccessible.
o An empty room whose state can be defined by its temperature is an example of an
accessible environment.
o Information about an event on earth is an example of Inaccessible environment.

8. Turing Test in AI
In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test. In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence", which considered the question, "Can machines think?"

The Turing test is based on a party game, the "imitation game", with some modifications. This game involves three players: one player is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to find out which of the two is the machine.

Consider, Player A is a computer, Player B is human, and Player C is an interrogator.


Interrogator is aware that one of them is machine, but he needs to identify this on the basis of
questions and their responses.

The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to convert words into speech.

The test result does not depend on each answer being correct, but only on how closely the responses resemble a human's answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.

The questions and answers can be like:


Interrogator: Are you a computer?
PlayerA (Computer): No
Interrogator: Multiply two large numbers such as (256896489*456725896)
Player A: Long pause and give the wrong answer.

In this game, if the interrogator is not able to identify which player is a machine and which is human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.

"In 1991, the New York businessman Hugh Loebner announces the prize competition, offering a
$100,000 prize for the first computer to pass the Turing test. However, no AI program to till
date, come close to passing an undiluted Turing test".

8.1 Chatbots to attempt the Turing test:


ELIZA: ELIZA was a natural language processing computer program created by Joseph Weizenbaum. It was created to demonstrate the ability of communication between machines and humans. It was one of the first chatterbots to attempt the Turing Test.
Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed to simulate a person with paranoid schizophrenia (a chronic mental disorder). Parry was described as "ELIZA with attitude". Parry was tested using a variation of the Turing Test in the early 1970s.

Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001. This bot has competed in a number of Turing tests. In June 2012, at an event promoted as the largest-ever Turing test contest, Goostman won the competition, convincing 29% of the judges that it was human. Goostman was portrayed as a 13-year-old virtual boy.

8.2 The Chinese Room Argument:


Many philosophers disagreed with the whole concept of Artificial Intelligence. The most famous argument on this list is the "Chinese Room".

In 1980, John Searle presented the "Chinese Room" thought experiment in his paper "Minds, Brains, and Programs", which argued against the validity of Turing's test. According to his argument, "Programming a computer may make it appear to understand a language, but it will not produce a real understanding of language or consciousness in a computer."

He argued that machines such as ELIZA and Parry could easily pass the Turing test by manipulating keywords and symbols, but they would have no real understanding of language, so this cannot be described as a "thinking" capability of a machine in the way a human thinks.

8.3 Features required for a machine to pass the Turing test:


o Natural language processing: NLP is required to communicate with Interrogator in
general human language like English.
o Knowledge representation: To store and retrieve information during the test.
o Automated reasoning: To use the previously stored information for answering the
questions.
o Machine learning: To adapt to new changes and detect generalized patterns.
o Vision (for the total Turing test): To recognize the interrogator's actions and other objects during the test.
o Motor control (for the total Turing test): To act upon objects if requested.
9. Computer Vision
Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and locate objects and then react to what they "see".
Starting in the late 1950s and early 1960s, the goal of image analysis was to mimic human vision systems and to ask computers what they see. Prior to this, image analysis had been done manually using X-rays, MRIs, or high-resolution space photography. NASA's mapping of the moon took the lead with digital image processing, but the approach wasn't fully accepted until 1969.

As computer vision evolved, programming algorithms were created to solve individual challenges. Machines became better at the job of vision recognition with repetition. Over the years, there has been a huge improvement in deep learning techniques and technology. We now have the ability to program supercomputers to train themselves, self-improve over time, and provide capabilities to businesses as online applications.

9.1 The Breakdown of Computer Vision


Images are broken down into pixels, which are considered the elements of the picture, or the smallest units of information that make up the picture.
Computer vision is not just about converting a picture into pixels and then trying to make sense of what's in the picture through those pixels; you have to understand the bigger picture of how to extract information from those pixels and interpret what they represent.

Each pixel value is typically represented by 8 bits of memory; in computer science, each color is represented by a numeric value (0 to 255).
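As a small illustration (an assumed example using NumPy, which is not mentioned in the notes), a grayscale image is simply a 2-D array of 8-bit values, one number from 0 to 255 per pixel, and a color image adds one such value per channel:

import numpy as np

# A tiny 3x3 grayscale "image": 0 = black, 255 = white
image = np.array([[  0, 128, 255],
                  [ 64, 128, 192],
                  [255, 255,   0]], dtype=np.uint8)

print(image.shape)        # (3, 3)  -> 9 pixels
print(image.dtype)        # uint8   -> 8 bits per pixel value
print(int(image[0, 2]))   # 255     -> brightest pixel in the first row

# A color image adds a third dimension: one 8-bit value per channel (R, G, B)
color = np.zeros((3, 3, 3), dtype=np.uint8)
color[..., 0] = 255       # a pure red image
print(color.shape)        # (3, 3, 3)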
9.2 Neural networks and Deep Learning are Making Computer Vision More Capable of
Replicating Human Vision
"Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated."

9.3 Image Classification and Segmentation


A simple explanation of image classification is that the computer assigns an image to a certain category; in an example image of sheep in a field, the classification of the first object would be "sheep".
Localization identifies the location of an object, shown by a box (a bounding box) surrounding the object in the picture.
Object detection detects all instances of semantic objects of a certain class; if the image contains three sheep, they are classified (with boxes) as sheep1, sheep2, and sheep3.
In semantic segmentation, every pixel belongs to a particular class (for example sheep, grass, or road), and pixels within the same class are represented by the same color (sheep orange, road gray, grass green).
With instance segmentation, different objects of the same class get different colors (sheep1 = bright yellow, sheep2 = dark yellow, sheep3 = orange).
9.4 Computer Vision Application
Computer vision has a wide variety of applications, both old (e.g., mobile robot navigation,
industrial inspection, and military intelligence) and new (e.g., human computer interaction,
image retrieval in digital libraries, medical image analysis, and the realistic rendering of
synthetic scenes in computer graphics).

It is a broad area of study with many specialized tasks and techniques, as well as specializations
to target application domains.
1. Optical character recognition (OCR)
2. Machine inspection
3. Retail (e.g. automated checkouts)
4. 3D model building (photogrammetry)
5. Medical imaging
6. Automotive safety
7. Match move (e.g. merging CGI with live actors in movies)
8. Motion capture (mocap)
9. Surveillance

10. Fingerprint recognition and biometrics


Many popular computer vision applications involve trying to recognize things in photographs;
for example:

 Object Classification: What broad category of object is in this photograph?


 Object Identification: Which type of a given object is in this photograph?
 Object Verification: Is the object in the photograph?
 Object Detection: Where are the objects in the photograph?
 Object Landmark Detection: What are the key points for the object in the
photograph?
 Object Segmentation: What pixels belong to the object in the image?
 Object Recognition: What objects are in this photograph and where are they?

10. Natural Language Processing


Natural Language Processing (NLP) is a branch of AI that helps computers understand, interpret, and manipulate human languages like English or Hindi in order to analyze them and derive their meaning. NLP helps developers organize and structure knowledge to perform tasks like translation, summarization, named entity recognition, relationship extraction, speech recognition, topic segmentation, etc.

10.1 History of NLP


1950 - NLP started when Alan Turing published his article "Computing Machinery and Intelligence."
1950s - Attempts to automate translation between Russian and English.
1960s - The work of Chomsky and others on formal language theory and generative syntax.
1990s - Probabilistic and data-driven models had become quite standard.
2000s - Large amounts of spoken and textual data became available.

10.2 How Does NLP Work?


Every day, we say thousands of words that other people interpret to do countless things. We consider it simple communication, but we all know that words run much deeper than that. There is always some context that we derive from what we say and how we say it. NLP in Artificial Intelligence does not focus on voice modulation; it draws on contextual patterns.
Example:

Man is to woman as king is to __________?


Meaning (king) – meaning (man) + meaning (woman) = ?
The answer is: queen

Here, we can easily correlate, because man is the male gender and woman is the female gender. In the same way, king is the masculine form, and its feminine counterpart is queen.

Example:

Is king to kings as queen is to _______?

The answer is: queens

Here, we see two words, king and kings, where one is singular and the other is plural. Therefore, when the word queen appears, it automatically correlates with queens, again singular to plural.

Here, the biggest question is: how do we know what words mean? Say, who would we call a queen?

The answer is that we learn these things through experience. However, the main question is: how does a computer learn the same?
We need to provide enough data for machines to learn through experience. We can feed details like:
 Her Majesty the Queen.
 The Queen's speech during the State visit.
 The crown of Queen Elizabeth.
 The Queen's Mother.
 The queen is generous.
With the above examples, the machine understands the entity Queen.
The machine then creates word vectors. A word vector is built using the surrounding words. The machine creates these vectors:

 as it learns from multiple datasets, and
 using machine learning (e.g., deep learning algorithms).


Formula:
 Meaning (king) – meaning (man) + meaning (woman) = ?
 This amounts to performing simple algebraic operations on word vectors:
 Vector (king) – vector (man) + vector (woman) = vector (?)
 To which the machine answers: queen. (A short code sketch follows.)
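A short, hedged sketch of the king – man + woman analogy using the gensim library (listed in the NLP libraries section later in these notes); it downloads a small pretrained GloVe model on first use, and the exact neighbour and score depend on that model.

import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")   # ~65 MB download on first run

# vector(king) - vector(man) + vector(woman) ~= vector(queen)
result = model.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)   # e.g. [('queen', 0.85...)]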

10.3 Components of NLP


There are the following two components of NLP -

1. Natural Language Understanding (NLU)


Natural Language Understanding (NLU) helps the machine understand and analyse human language by extracting metadata from content, such as concepts, entities, keywords, emotion, relations, and semantic roles.
NLU is mainly used in business applications to understand the customer's problem in both spoken and written language.

NLU involves the following tasks -


o It is used to map the given input into useful representation.
o It is used to analyze different aspects of the language.

2. Natural Language Generation (NLG)


Natural Language Generation (NLG) acts as a translator that converts the computerized data into
natural language representation. It mainly involves Text planning, Sentence planning, and Text
Realization.

10.4 Difference between NLU and NLG

o NLU is the process of reading and interpreting language, whereas NLG is the process of writing or generating language.
o NLU produces non-linguistic outputs from natural language inputs, whereas NLG produces natural language outputs from non-linguistic inputs.
10.5 Applications of NLP
There are the following applications of NLP -

1. Question Answering
Question Answering focuses on building systems that automatically answer the questions asked
by humans in a natural language.

2. Spam Detection
Spam detection is used to detect unwanted e-mails getting to a user's inbox.

3. Sentiment Analysis
Sentiment analysis is also known as opinion mining. It is used on the web to analyse the attitude, behaviour, and emotional state of the sender. This application is implemented through a combination of NLP and statistics, assigning values to the text (positive, negative, or neutral) and identifying the mood of the context (happy, sad, angry, etc.). A short sketch follows.
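A brief sketch of sentiment analysis using TextBlob (one of the NLP libraries listed later in these notes); polarity ranges from -1 (negative) to +1 (positive), and the exact scores depend on TextBlob's built-in lexicon.

from textblob import TextBlob

reviews = ["The product is excellent and I am very happy with it.",
           "This was a terrible experience and I am angry."]

for text in reviews:
    polarity = TextBlob(text).sentiment.polarity
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    print(f"{label:8s} ({polarity:+.2f}): {text}")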

4. Machine Translation
Machine translation is used to translate text or speech from one natural language to another
natural language.

Example: Google Translator

5. Spelling correction
Microsoft Corporation provides word processing software such as MS Word and PowerPoint that perform spelling correction.
6. Speech Recognition
Speech recognition is used for converting spoken words into text. It is used in applications, such
as mobile, home automation, video recovery, dictating to Microsoft Word, voice biometrics,
voice user interface, and so on.

7. Chatbot
Implementing chatbots is one of the important applications of NLP. Chatbots are used by many companies to provide chat services to their customers.

8. Information extraction
Information extraction is one of the most important applications of NLP. It is used for extracting structured information from unstructured or semi-structured machine-readable documents.
9. Natural Language Understanding (NLU)
It converts large amounts of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate than the notation of natural language.

11. Phases of NLP


There are the following five phases of NLP:

1. Lexical and Morphological Analysis

The first phase of NLP is lexical analysis. This phase scans the source text as a stream of characters and converts it into meaningful lexemes. It divides the whole text into paragraphs, sentences, and words.
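A small sketch of this step using NLTK (listed in the NLP libraries section): splitting raw text into sentences and then into words (lexemes). It assumes the "punkt" tokenizer data has been downloaded once.

import nltk
nltk.download("punkt", quiet=True)   # one-time download of the tokenizer data

text = "AI is part of our everyday lives. NLP helps computers understand language."

sentences = nltk.sent_tokenize(text)
words = [nltk.word_tokenize(s) for s in sentences]

print(sentences)   # ['AI is part of our everyday lives.', 'NLP helps computers ...']
print(words[0])    # ['AI', 'is', 'part', 'of', 'our', 'everyday', 'lives', '.']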

2. Syntactic Analysis (Parsing)


Syntactic analysis is used to check grammar and word arrangements, and it shows the relationships among the words.

Example: "Agra goes to the Poonam"

In the real world, "Agra goes to the Poonam" does not make any sense, so this sentence is rejected by the syntactic analyzer.

3. Semantic Analysis
Semantic analysis is concerned with the meaning representation. It mainly focuses on the literal
meaning of words, phrases, and sentences.

4. Discourse Integration
Discourse integration depends upon the sentences that precede a given sentence and also invokes the meaning of the sentences that follow it.

5. Pragmatic Analysis
Pragmatic analysis is the fifth and last phase of NLP. It helps you discover the intended effect by applying a set of rules that characterize cooperative dialogues.

For Example: "Open the door" is interpreted as a request instead of an order.

12. Why NLP is difficult?


NLP is difficult because Ambiguity and Uncertainty exist in the language.

12.1 Ambiguity: There are the following three types of ambiguity -

1. Lexical Ambiguity
Lexical ambiguity exists when a single word in a sentence has two or more possible meanings.

Example:
Manya is looking for a match.

In the above example, the word match may mean either that Manya is looking for a partner or that Manya is looking for a match (a cricket match or some other game).

2. Syntactic Ambiguity
Syntactic Ambiguity exists in the presence of two or more possible meanings within the
sentence.

Example:
I saw the girl with the binoculars.

In the above example, did I have the binoculars? Or did the girl have the binoculars?

3. Referential Ambiguity
Referential ambiguity exists when you are referring to something using a pronoun.

Example: Kiran went to Sunita. She said, "I am hungry."

In the above sentence, you do not know who is hungry, Kiran or Sunita.

13. NLP APIs


Natural Language Processing APIs allow developers to integrate human-to-machine
communications and complete several useful tasks such as speech recognition, chatbots, spelling
correction, sentiment analysis, etc.

A list of NLP APIs is given below:


o IBM Watson API
IBM Watson API combines different sophisticated machine learning techniques to enable
developers to classify text into various custom categories. It supports multiple languages,
such as English, French, Spanish, German, Chinese, etc. With the help of IBM Watson API,
you can extract insights from texts, add automation in workflows, enhance search, and
understand the sentiment. The main advantage of this API is that it is very easy to use.
Pricing: Firstly, it offers a free 30-day trial IBM Cloud account. You can also opt for its paid plans.
o Chatbot API
Chatbot API allows you to create intelligent chatbots for any service. It supports Unicode characters, text classification, multiple languages, etc. It is very easy to use and helps you create a chatbot for your web applications.


Pricing: Chatbot API is free for 150 requests per month. You can also opt for its paid
version, which starts from $100 to $5,000 per month.
o Speech to Text API
Speech to Text API is used to convert speech to text.
Pricing: The Speech to Text API is free for converting 60 minutes per month. Its paid version starts from $500 to $1,500 per month.
o Sentiment Analysis API
Sentiment Analysis API, also called 'opinion mining', is used to identify the tone of a user (positive, negative, or neutral).
Pricing: The Sentiment Analysis API is free for fewer than 500 requests per month. Its paid version starts from $19 to $99 per month.
o Translation API by SYSTRAN
The Translation API by SYSTRAN is used to translate the text from the source language to
the target language. You can use its NLP APIs for language detection, text segmentation,
named entity recognition, tokenization, and many other tasks.
Pricing: This API is available for free. But for commercial users, you need to use its paid
version.
o Text Analysis API by AYLIEN
Text Analysis API by AYLIEN is used to derive meaning and insights from textual content. It is easy to use.
Pricing: This API is available free for 1,000 hits per day. You can also use its paid version, which starts from $199 to $1,399 per month.
o Cloud NLP API
The Cloud NLP API is used to improve the capabilities of an application using natural language processing technology. It allows you to carry out various natural language processing functions like sentiment analysis and language detection. It is easy to use.
Pricing: Cloud NLP API is available for free.
o Google Cloud Natural Language API
Google Cloud Natural Language API allows you to extract beneficial insights from unstructured text. This API allows you to perform entity recognition, sentiment analysis, content classification, and syntax analysis in more than 700 predefined categories. It also allows you to perform text analysis in multiple languages such as English, French, Chinese, and German.
Pricing: After performing entity analysis for 5,000 to 10,000,000 units, you need to pay $1.00 per 1,000 units per month.
Pricing: After performing entity analysis for 5,000 to 10,000,000 units, you need to pay
$1.00 per 1000 units per month.

14. NLP Libraries


1. Scikit-learn: It provides a wide range of algorithms for building machine learning
models in Python.
2. Natural Language Toolkit (NLTK): NLTK is a complete toolkit for all NLP techniques.
3. Pattern: It is a web mining module for NLP and machine learning.
4. TextBlob: It provides an easy interface to learn basic NLP tasks like sentiment analysis,
noun phrase extraction, or pos-tagging.
5. Quepy: Quepy is used to transform natural language questions into queries in a database
query language.
6. SpaCy: SpaCy is an open-source NLP library which is used for data extraction, data analysis, sentiment analysis, and text summarization (a short usage example follows this list).
7. Gensim: Gensim works with large datasets and processes data streams.
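A brief, hedged usage sketch with spaCy (item 6 above) for basic data extraction: tokenization, part-of-speech tags, and named entity recognition. It assumes the small English model has been installed with: python -m spacy download en_core_web_sm

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("IBM's Watson won Jeopardy in 2011 against human champions.")

print([(token.text, token.pos_) for token in doc][:5])   # tokens with POS tags
print([(ent.text, ent.label_) for ent in doc.ents])      # e.g. ('IBM', 'ORG'), ('2011', 'DATE')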

Difference between Natural Language and Computer Language

o Natural language has a very large vocabulary, whereas a computer language has a very limited vocabulary.
o Natural language is easily understood by humans, whereas a computer language is easily understood by machines.
o Natural language is ambiguous in nature, whereas a computer language is unambiguous.
References

1. Stuart Russell, Peter Norvig, "Artificial Intelligence – A Modern Approach", Pearson Education
2. Elaine Rich and Kevin Knight, "Artificial Intelligence", McGraw-Hill
3. E. Charniak and D. McDermott, "Introduction to Artificial Intelligence", Pearson Education
4. Dan W. Patterson, "Artificial Intelligence and Expert Systems", Prentice Hall of India
5. https://www.ibm.com/in-en/cloud/learn/what-is-artificial-intelligence
6. https://www.oracle.com/in/artificial-intelligence/what-is-ai/
7. https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/what-is-artificial-intelligence
8. https://www.javatpoint.com/history-of-artificial-intelligence
9. https://www.geeksforgeeks.org/agents-artificial-intelligence/
10. https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_agents_and_environments.htms
11. https://towardsdatascience.com/an-overview-of-computer-vision-1f75c2ab1b66
12. https://www.javatpoint.com/nlp
13. https://www.guru99.com/nlp-tutorial.html
