What Is Artificial Intelligence
(Source: https://www.ibm.com/topics/artificial-intelligence)
While a number of definitions of artificial intelligence (AI) have surfaced over the last
few decades, John McCarthy offers the following definition in his 2004 paper (link resides
outside ibm.com): "It is the science and engineering of making intelligent machines,
especially intelligent computer programs. It is related to the similar task of using computers
to understand human intelligence, but AI does not have to confine itself to methods that are
biologically observable."
However, decades before this definition, the birth of the artificial intelligence
conversation was marked by Alan Turing's seminal work, "Computing Machinery and
Intelligence" (link resides outside ibm.com), which was published in 1950. In this paper,
Turing, often referred to as the "father of computer science," asks the question,
"Can machines think?" From there, he offers a test, now famously known as the "Turing
Test," in which a human interrogator tries to distinguish between a computer's and a human's
text responses. While this test has undergone much scrutiny since its publication, it remains an
important part of the history of AI, as well as an ongoing concept within philosophy, as it
draws on ideas from linguistics.
Stuart Russell and Peter Norvig later published Artificial Intelligence: A
Modern Approach (link resides outside ibm.com), which became one of the leading textbooks in
the study of AI. In it, they delve into four potential goals or definitions of AI, which
differentiate computer systems on the basis of rationality and of thinking vs. acting:
Human approach:
- Systems that think like humans
- Systems that act like humans
Ideal approach:
- Systems that think rationally
- Systems that act rationally
In its simplest form, artificial intelligence is a field that combines computer science
and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine
learning and deep learning, which are frequently mentioned in conjunction with artificial
intelligence. These disciplines comprise AI algorithms that seek to create expert
systems that make predictions or classifications based on input data.
Over the years, artificial intelligence has gone through many cycles of hype, but even
to skeptics, the release of OpenAI’s ChatGPT seems to mark a turning point. The last time
generative AI loomed this large, the breakthroughs were in computer vision, but now the leap
forward is in natural language processing. And it’s not just language: Generative models can
also learn the grammar of software code, molecules, natural images, and a variety of other
data types.
The applications for this technology are growing every day, and we're just starting to
explore the possibilities. But as the hype around the use of AI in business takes off,
conversations around ethics become critically important. To learn where IBM stands
in the conversation around AI ethics, read more here.
"Deep" machine learning can leverage labeled datasets, also known as supervised
learning, to inform its algorithm, but it doesn't necessarily require a labeled dataset. It can
ingest unstructured data in its raw form (e.g., text, images), and it can automatically determine
the hierarchy of features that distinguish different categories of data from one another.
Unlike classical machine learning, it doesn't require human intervention to process data, which
allows us to scale machine learning in more interesting ways.
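The labeled vs. unlabeled distinction above can be illustrated with a toy sketch (this is not code from the article; the data, labels, and function names are invented for illustration). A supervised learner needs labeled examples, while an unsupervised one groups raw, unlabeled points on its own:

```python
# Supervised: predict a label using the closest labeled training example.
def nearest_neighbor_predict(train, labels, point):
    dists = [(sum((a - b) ** 2 for a, b in zip(x, point)), y)
             for x, y in zip(train, labels)]
    return min(dists)[1]

# Unsupervised: split unlabeled points into two clusters (k-means with k=2).
def two_means(points, iters=10):
    c1, c2 = points[0], points[-1]  # naive initial centroids
    for _ in range(iters):
        g1 = [p for p in points
              if sum((a - b) ** 2 for a, b in zip(p, c1))
              <= sum((a - b) ** 2 for a, b in zip(p, c2))]
        g2 = [p for p in points if p not in g1]
        c1 = tuple(sum(v) / len(g1) for v in zip(*g1))
        c2 = tuple(sum(v) / len(g2) for v in zip(*g2))
    return g1, g2

# Supervised learning requires the labels alongside the data...
labeled = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
labels = ["cat", "cat", "dog", "dog"]
pred = nearest_neighbor_predict(labeled, labels, (0.05, 0.1))  # "cat"

# ...while clustering works on the raw points alone, no labels given.
raw = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
g1, g2 = two_means(raw)  # two groups of two points each
```

Deep learning takes the second idea much further: instead of hand-picked coordinates, a deep network learns its own hierarchy of features directly from raw text or pixels.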
Generative models have been used for years in statistics to analyze numerical data.
The rise of deep learning, however, made it possible to extend them to images, speech, and
other complex data types. Among the first class of models to achieve this cross-over feat were
variational autoencoders, or VAEs, introduced in 2013. VAEs were the first deep-learning
models to be widely used for generating realistic images and speech.
“VAEs opened the floodgates to deep generative modeling by making models easier to
scale,” said Akash Srivastava, an expert on generative AI at the MIT-IBM Watson AI Lab.
“Much of what we think of today as generative AI started here.”
Early examples of models like GPT-3, BERT, and DALL-E 2 have shown what's
possible. The future lies in models trained on broad sets of unlabeled data that can be
used for different tasks with minimal fine-tuning. Systems that execute specific tasks in a
single domain are giving way to broad AI that learns more generally and works across
domains and problems. Foundation models, trained on large, unlabeled datasets and
fine-tuned for an array of applications, are driving this shift.
Reading
1. According to John McCarthy's definition in his 2004 paper, artificial intelligence (AI) is
primarily concerned with:
a) Creating intelligent machines and computer programs.
b) Understanding human intelligence through computer methods.
c) Observing biological methods to develop intelligent systems.
d) Engineering systems for specific tasks.
2. Who is credited with proposing the Turing Test to distinguish between human and
computer intelligence?
a) John McCarthy
b) Alan Turing
c) Stuart Russell
d) Peter Norvig
3. Stuart Russell and Peter Norvig's book, "Artificial Intelligence: A Modern Approach,"
outlines four potential goals or definitions of AI, including:
a) Systems that act intuitively.
b) Systems that think biologically.
c) Systems that think rationally.
d) Systems that learn autonomously.
4. Deep learning differs from classical, or "non-deep," machine learning primarily in:
a) Its reliance on human intervention for learning.
b) Its ability to process structured data.
c) Its use of neural networks with more than three layers.
d) Its requirement for labeled datasets.
5. Generative AI models, such as GPT-3 and BERT, are distinguished by their ability to:
a) Analyze numerical data.
b) Generate statistically probable outputs.
c) Classify images with high accuracy.
d) Process structured datasets.
6. What is one of the real-world applications mentioned for computer vision technology?
a) Automated stock trading
b) Natural language processing
c) Radiology imaging in healthcare
d) Cross-selling strategies in e-commerce
7. Which event in the history of artificial intelligence marked the advent of deep generative
modeling, according to the text?
a) Alan Turing's proposal of the Turing Test
b) John McCarthy's coinage of the term "artificial intelligence"
c) Introduction of variational autoencoders (VAEs) in 2013
d) IBM's Deep Blue defeating Garry Kasparov in chess
8. The year 2016 saw a significant milestone in artificial intelligence when DeepMind's
AlphaGo program:
a) Defeated Ken Jennings and Brad Rutter at Jeopardy!
b) Identified and categorized images with high accuracy
c) Surpassed human intelligence in the game of Go
d) Introduced the concept of generative AI
9. According to the text, what marked a significant change in the performance of AI and its
potential to drive enterprise value in 2023?
a) The invention of variational autoencoders (VAEs)
b) The rise of large language models (LLMs) like ChatGPT
c) IBM's Deep Blue defeating Garry Kasparov in chess
d) Google's purchase of DeepMind for $400 million
Mixed-up Summary:
1. The text discusses various definitions and milestones in the field of artificial
intelligence (AI).
A. Stuart Russell and Peter Norvig outline four potential goals or definitions of AI.
B. Deep learning and machine learning are sub-fields of AI, with deep learning being a
subset of machine learning.
C. Weak AI, also known as Narrow AI, is focused on performing specific tasks.
D. Generative AI models, such as GPT-3 and BERT, have revolutionized the field by
generating statistically probable outputs.
E. The history of AI includes significant events like Alan Turing's proposal of the Turing
Test and DeepMind's AlphaGo defeating the world champion Go player.
F. The rise of large language models, like ChatGPT, has drastically improved AI
performance and its potential applications.
Listening
AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED
1. What kind of email did Sasha Luccioni receive that made her think about her job?
2. Does AI have something in common with basic social prejudices? Which ones? What
problems can it cause?
Speaking
1. Taking into account the text you read, give a brief summary of AI's history.
2. Taking into account the video you watched, give your opinion on these topics:
a) What, in your opinion, is the worst thing about AI nowadays?
b) How does AI impact global climate change?
c) Is it possible to regulate AI politically? Is that important?