
NATURAL LANGUAGE PROCESSING

QUESTIONS AND ANSWERS (SET 01) - 1 mark

1. What is a Chatbot?
A chatbot is a computer program that is designed to simulate human conversation through voice
commands or text chats, or both, e.g., Mitsuku Bot, Jabberwacky, etc.
OR
A chatbot is a computer program that can learn over time how to best interact with humans. It can
answer questions and troubleshoot customer problems, evaluate and qualify prospects, generate sales
leads and increase sales on an ecommerce site.
OR
A chatbot is a computer program designed to simulate conversation with human users. A chatbot is
also known as an artificial conversational entity (ACE), chat robot, talk bot, chatterbot or chatterbox.
OR
A chatbot is a software application used to conduct an on-line chat conversation via text or text-to-
speech, in lieu of providing direct contact with a live human agent.

2. What is the full form of NLP?


Natural Language Processing
3. While working with NLP, what is the meaning of the following?
a. Syntax
b. Semantics
Syntax: Syntax refers to the grammatical structure of a sentence.
Semantics: It refers to the meaning of the sentence.

4. What is the difference between stemming and lemmatization?


Stemming is a technique used to extract the base form of the words by removing affixes from them. It
is just like cutting down the branches of a tree to its stems. For example, the stem of the words eating,
eats, eaten is eat.
Lemmatization is the grouping together of different forms of the same word. In search queries,
lemmatization allows end users to query any version of a base word and get relevant results.
OR
Stemming is the process in which the affixes of words are removed and the words are converted to
their base form.
In lemmatization, the word we get after affix removal (also known as lemma) is a meaningful one.
Lemmatization makes sure that lemma is a word with meaning and hence it takes a longer time to
execute than stemming.
OR
Stemming algorithms work by cutting off the end or the beginning of the word, taking into account a
list of common prefixes and suffixes that can be found in an inflected word.
Lemmatization on the other hand, takes into consideration the morphological analysis of the words.
To do so, it is necessary to have detailed dictionaries which the algorithm can look through to link the
form back to its lemma.
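
As a quick illustration, here is a minimal sketch using NLTK's PorterStemmer and
WordNetLemmatizer (one common choice of stemmer and lemmatizer; exact outputs can vary
between implementations):

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # the lemmatizer's dictionary, needed once

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["eating", "eats", "studies", "caring"]:
    print(word, "->", stemmer.stem(word), "|", lemmatizer.lemmatize(word, pos="v"))

# Stemming just chops affixes, so "studies" typically becomes the non-word
# "studi"; lemmatization looks the form up in a dictionary and returns the
# real word "study".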

5. What is the full form of TFIDF?


Term Frequency and Inverse Document Frequency

6. What is meant by a dictionary in NLP?


Dictionary in NLP means a list of all the unique words occurring in the corpus. Even if some words
are repeated in different documents, they are written just once while creating the dictionary.
7. What is term frequency?
Term frequency is the frequency of a word in one document. Term frequency can easily be found
from the document vector table as in that table we mention the frequency of each word of the
vocabulary in each document.

8. What is a document vector table?


Document Vector Table is used while implementing Bag of Words algorithm.
In a document vector table, the header row contains the vocabulary of the corpus and other rows
correspond to different documents.
If the document contains a particular word it is represented by 1 and absence of word is represented
by 0 value.
OR
Document Vector Table is a table containing the frequency of each word of the vocabulary in each
document.

9. What do you mean by corpus?


In Text Normalization, we work on text from multiple documents, and the term used for the whole
textual data from all the documents together is known as the corpus.
OR
A corpus is a large and structured set of machine-readable texts that have been produced in a natural
communicative setting.
OR
A corpus can be defined as a collection of text documents. It can be thought of as just a bunch of text
files in a directory, often alongside many other directories of text files.

QUESTIONS AND ANSWERS - 2 marks

1. What are the types of data used for Natural Language Processing applications?
Natural Language Processing takes in data of natural languages, in the form of the written and
spoken words which humans use in their daily lives, and operates on it.

2. Differentiate between a script-bot and a smart-bot. (Any 2 differences)


Script-bot:
● A scripted chatbot doesn't carry even a glimpse of AI.
● Script bots are easy to make.
● Script bot functioning is very limited as they are less powerful.
● Script bots work around a script which is programmed into them.
● No or little language processing skill is required.
● Limited functionality.
Smart-bot:
● Smart bots are built on NLP and ML.
● Smart bots are comparatively difficult to make.
● Smart bots are flexible and powerful.
● Smart bots work on bigger databases and other resources directly.
● NLP and Machine Learning skills are required.
● Wide functionality.

3. Give an example of the following:


● Multiple meanings of a word
● Perfect syntax, no meaning
Example of Multiple meanings of a word –
His face turns red after consuming the medicine
Meaning - Is he having an allergic reaction? Or is he not able to bear the taste of that medicine?
Example of Perfect syntax, no meaning-
Chickens feed extravagantly while the moon drinks tea.
This statement is correct grammatically but it does not make any sense. In Human language, a perfect
balance of syntax and semantics is important for better understanding.

4. Define the following:


● Stemming
● Lemmatization
Stemming: Stemming is a rudimentary rule-based process of stripping the suffixes (“ing”, “ly”, “es”,
“s” etc) from a word.
Stemming is a process of reducing words to their word stem, base or root form (for example, books
— book, looked — look).
Lemmatization: Lemmatization, on the other hand, is an organized & step by step procedure of
obtaining the root form of the word, it makes use of vocabulary (dictionary importance of words) and
morphological analysis (word structure and grammar relations).
The aim of lemmatization, like stemming, is to reduce inflectional forms to a common base form. As
opposed to stemming, lemmatization does not simply chop off inflections. Instead it uses lexical
knowledge bases to get the correct base forms of words.

5. What do you mean by document vectors?


Document Vector contains the frequency of each word of the vocabulary in a particular document.
In a document vector table, the vocabulary is written in the top row. Now, for each word in the
document, if it matches the vocabulary, put a 1 under it. If the same word appears again, increment
the previous value by 1. And if the word does not occur in that document, put a 0 under it.
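
For illustration, a document vector for a single document can be built with plain Python (a sketch;
the vocabulary and document below are made up):

from collections import Counter

vocabulary = ["we", "are", "going", "to", "mumbai", "famous"]
document = "we are going to mumbai because mumbai is famous".split()

counts = Counter(document)                      # term frequency of each word
vector = [counts[word] for word in vocabulary]  # 0 for words that are absent
print(vector)                                   # [1, 1, 1, 1, 2, 1]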

6. Which words in a corpus have the highest values and which ones have the least?
Rare or valuable words occur the least in a corpus but add the most value to it, so they have the
highest values. Stop words like - and, this, is, the, etc. - occur the most, but they do not tell us
anything about the corpus; hence they have the least value and are mostly removed at the
pre-processing stage itself. So, when we look at the text, we take both frequent and rare words into
consideration.
7. Does the vocabulary of a corpus remain the same before and after text normalization? Why?
No, the vocabulary of a corpus does not remain the same before and after text normalization.
Reasons are -
● In normalization the text is normalized through various steps and is reduced to a minimal
vocabulary, since the machine does not require grammatically correct statements, only the essence of
the text.
● In normalization, stop words, special characters and numbers are removed.
● In stemming, the affixes of words are removed and the words are converted to their base form.
So, after normalization, we get a reduced vocabulary.

8. What is the significance of converting the text into a common case?


In Text Normalization, we undergo several steps to normalize the text to a lower level.
After the removal of stop words, we convert the whole text into a similar case, preferably lower case.
This ensures that the case-sensitivity of the machine does not make it treat the same words as
different just because they appear in different cases.

9. Mention some applications of Natural Language Processing.


Natural Language Processing Applications-
● Sentiment Analysis.
● Chatbots & Virtual Assistants.
● Text Classification.
● Text Extraction.
● Machine Translation
● Text Summarization
● Market Intelligence
● Auto-Correct

10. What is the need of text normalization in NLP?


The language of computers is numerical, so the very first step is to convert our language to
numbers. This conversion takes a few steps to happen, and the first of them is Text Normalization.
Since human languages are complex, we need to simplify them first in order to make sure that
understanding becomes possible. Text Normalization helps in cleaning up the textual data in such a
way that it comes down to a level where its complexity is lower than that of the actual data.

11. Explain the concept of Bag of Words.


Bag of Words is a Natural Language Processing model which helps in extracting features out of the
text which can be helpful in machine learning algorithms. In bag of words, we get the occurrences of
each word and construct the vocabulary for the corpus.
Bag of Words just creates a set of vectors containing the count of word occurrences in the document
(reviews). Bag of Words vectors are easy to interpret.
12. Explain the relation between occurrence and value of a word.
As shown in the graph, occurrence and value of a word are inversely proportional. The words which
occur most (like stop words) have negligible value. As the occurrence of words drops, the value of
such words rises. These words are termed as rare or valuable words. These words occur the least but
add the most value to the corpus.

13. What are stop words? Explain with the help of examples.
“Stop words” are the most common words in a language like “the”, “a”, “on”, “is”, “all”. These
words do not carry important meaning and are usually removed from texts. It is possible to remove
stop words using Natural Language Toolkit (NLTK), a suite of libraries and programs for symbolic
and statistical natural language processing.
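
A minimal sketch of stop word removal using NLTK's built-in English stop word list:

import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)  # fetch the stop word lists, needed once

stop_words = set(stopwords.words("english"))
tokens = "this is all we know about the corpus".split()
print([t for t in tokens if t not in stop_words])  # ['know', 'corpus']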

14. Differentiate between Human Language and Computer Language.


Humans communicate through language which we process all the time. Our brain keeps on
processing the sounds that it hears around itself and tries to make sense out of them all the time.
On the other hand, the computer understands the language of numbers. Everything that is sent to the
machine has to be converted to numbers. And while typing, if a single mistake is made, the computer
throws an error and does not process that part. The communications made by the machines are very
basic and simple.

QUESTIONS AND ANSWERS (SET 01) - 3/4 marks

1. Create a document vector table for the given corpus:


Document 1: We are going to Mumbai
Document 2: Mumbai is a famous place.
Document 3: We are going to a famous place.
Document 4: I am famous in Mumbai.

             we  are  going  to  mumbai  is  a  famous  place  i  am  in
Document 1    1    1      1   1       1   0  0       0      0  0   0   0
Document 2    0    0      0   0       1   1  1       1      1  0   0   0
Document 3    1    1      1   1       0   0  1       1      1  0   0   0
Document 4    0    0      0   0       1   0  0       1      0  1   1   1
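
The same table can be reproduced with scikit-learn's CountVectorizer (shown as a sketch; any
bag-of-words implementation would do). Note that its default token pattern drops one-letter words
such as "a" and "I", so it is widened here, and that the vocabulary comes out alphabetically sorted
rather than in order of first appearance:

from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "We are going to Mumbai",
    "Mumbai is a famous place.",
    "We are going to a famous place.",
    "I am famous in Mumbai.",
]

# Widen the default token pattern so one-letter words like "a" and "I" survive.
vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
matrix = vectorizer.fit_transform(corpus)

print(vectorizer.get_feature_names_out())  # alphabetically sorted vocabulary
print(matrix.toarray())                    # one row of 0s and 1s per document
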
2. Classify each of the images according to how well the model’s output matches the data
samples:

Here, the red dashed line is the model's output while the blue crosses are the actual data samples.
● In the first case, the model's output does not match the true function at all. Hence the model is
said to be underfitting and its accuracy is lower.
● In the second case, the model tries to cover all the data samples even if they are out of alignment
with the true function. This model is said to be overfitting, and this too has a lower accuracy.
● In the third case, the model's output matches the true function well, which means the model has
optimum accuracy; such a model is called a perfect fit.

3. Explain how AI can play a role in sentiment analysis of human beings?


The goal of sentiment analysis is to identify sentiment among several posts or even in the same post
where emotion is not always explicitly expressed.
Companies use Natural Language Processing applications, such as sentiment analysis, to identify
opinions and sentiment online to help them understand what customers think about their products and
services (e.g., "I love the new iPhone" and, a few lines later, "But sometimes it doesn't work well",
where the person is still talking about the iPhone) and overall indicators of their reputation.
Beyond determining simple polarity, sentiment analysis understands sentiment in context to help
better understand what's behind an expressed opinion, which can be extremely relevant in
understanding and driving purchasing decisions.

4. Why are human languages complicated for a computer to understand? Explain.


The communications made by the machines are very basic and simple. Human communication is
complex. There are multiple characteristics of the human language that might be easy for a human to
understand but extremely difficult for a computer to understand.
For machines it is difficult to understand our language. Let us take a look at some of these
characteristics here:
Arrangement of the words and meaning - There are rules in human language. There are nouns, verbs,
adverbs, adjectives. A word can be a noun at one time and an adjective some other time. This can
create difficulty while processing by computers.
Analogy with programming languages - Different syntax, same semantics: 2+3 = 3+2. Here the way
these statements are written is different, but their meanings are the same, that is 5. Different
semantics, same syntax: 3/2 (Python 2.7) ≠ 3/2 (Python 3). Here the statements written have the
same syntax but their meanings are different: in Python 2.7 this statement would result in 1, while in
Python 3 it would give an output of 1.5.
Multiple Meanings of a word - In natural language, it is important to understand that a word can have
multiple meanings and the meanings fit into the statement according to the context of it.
Perfect Syntax, no Meaning - Sometimes, a statement can have a perfectly correct syntax but it does
not mean anything. In Human language, a perfect balance of syntax and semantics is important for
better understanding.
These are some of the challenges we might have to face if we try to teach computers how to
understand and interact in human language.

5. What are the steps of text Normalization? Explain them in brief.


In Text Normalization, we undergo several steps to normalize the text to a lower level.
Sentence Segmentation - Under sentence segmentation, the whole corpus is divided into sentences.
Each sentence is taken as a different data so now the whole corpus gets reduced to sentences.
Tokenization - After segmenting the sentences, each sentence is then further divided into tokens.
Token is a term used for any word, number or special character occurring in a sentence. Under
tokenization, every word, number and special character is considered separately and each of them is
now a separate token.
Removing Stop words, Special Characters and Numbers - In this step, the tokens which are not
necessary are removed from the token list.
Converting text to a common case - After the stop words removal, we convert the whole text into a
similar case, preferably lower case. This ensures that the case-sensitivity of the machine does not
make it treat the same words as different just because they appear in different cases.
Stemming - In this step, the remaining words are reduced to their root words. In other words,
stemming is the process in which the affixes of words are removed and the words are converted to
their base form.
Lemmatization - In lemmatization, the word we get after affix removal (also known as the lemma) is
a meaningful one.
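
Putting these steps together, here is a rough sketch of the whole pipeline using NLTK (newer NLTK
releases may need the "punkt_tab" resource instead of "punkt"; the sample corpus is made up):

import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

for resource in ("punkt", "stopwords"):
    nltk.download(resource, quiet=True)  # one-time downloads

corpus = "Raj likes to play football. Vijay prefers online games."

sentences = sent_tokenize(corpus)                          # sentence segmentation
tokens = [t for s in sentences for t in word_tokenize(s)]  # tokenization

stop_words = set(stopwords.words("english"))
tokens = [t for t in tokens                       # remove stop words and
          if t not in string.punctuation          # special characters
          and t.lower() not in stop_words]

tokens = [t.lower() for t in tokens]              # convert to a common case

stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])          # stemming
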
With this we have normalized our text to tokens, which are the simplest form of words present in
the corpus. Now it is time to convert the tokens into numbers. For this, we would use the Bag of
Words algorithm.
6. Normalize the given text and comment on the vocabulary before and after the normalization:
Raj and Vijay are best friends. They play together with other friends. Raj likes to play football
but Vijay prefers to play online games. Raj wants to be a footballer. Vijay wants to become an
online gamer.
Normalization of the given text:
Sentence Segmentation:
1. Raj and Vijay are best friends.
2. They play together with other friends.
3. Raj likes to play football but Vijay prefers to play online games.
4. Raj wants to be a footballer.
5. Vijay wants to become an online gamer.

Tokenization:
Sentence 1: Raj | and | Vijay | are | best | friends | .
Sentence 2: They | play | together | with | other | friends | .
The same will be done for all the sentences.


Removing Stop words, Special Characters and Numbers:
In this step, the tokens which are not necessary are removed from the token list.
So, the words and, are, to, a, an and the punctuation will be removed.
Converting text to a common case:
After the stop words removal, we convert the whole text into a similar case, preferably lower case.
Here we don't have the same word in different cases, so this step is not required for the given text.
Stemming:
In this step, the remaining words are reduced to their root words. In other words, stemming is the
process in which the affixes of words are removed and the words are converted to their base form.

Word       Affixes   Stem
Likes      -s        Like
Prefers    -s        Prefer
Wants      -s        Want

In the given text Lemmatization is not required.


Given Text
Raj and Vijay are best friends. They play together with other friends. Raj likes to play football but
Vijay prefers to play online games. Raj wants to be a footballer. Vijay wants to become an online
gamer.
Normalized Text
Raj Vijay best friends They play together with other friends Raj like play football but Vijay prefer
play online games Raj want be footballer Vijay want become online gamer
7. What do you mean by Natural Language Processing?
Answer – Natural Language Processing, or NLP, is the area of artificial intelligence dedicated to
making it possible for computers to comprehend and process human languages. It is a subfield of
linguistics, computer science, information engineering and artificial intelligence concerned with the
interaction between computers and human (natural) languages, in particular with how to programme
computers to process and analyze large amounts of natural language data.

8. What are the different applications of NLP which are used in real-life scenarios?
Answer – Some of the applications which are used in real-life scenarios are –
• Automatic Summarization – Automatic summarization is useful for gathering data from social
media and other online sources, as well as for summarizing the meaning of documents and other
written materials. When utilized to give a summary of a news story or blog post while eliminating
redundancy from different sources and enhancing the diversity of content acquired, automatic
summarizing is particularly pertinent.
• Sentiment Analysis – In posts when emotion is not always directly expressed, or even in the same
post, the aim of sentiment analysis is to detect sentiment. To better comprehend what internet users
are saying about a company’s goods and services, businesses employ natural language processing
tools like sentiment analysis.
• Text Classification – Text classification enables you to classify a document and organize it to
make it easier to find the information you need or to carry out certain tasks. Spam screening in email
is one example of how text categorization is used.
• Virtual Assistants – These days, digital assistants like Google Assistant, Cortana, Siri, and Alexa
play a significant role in our lives. Not only can we communicate with them, but they can also
facilitate our life. They can assist us in making notes about our responsibilities, making calls for us,
sending messages, and much more by having access to our data.
9. Explain the types of chatbots.
Answer – There are two types of chatbots –
• Script Bot – An Internet bot, sometimes known as a web robot, robot, or simply bot, is a software
programme that does automated operations (scripts) over the Internet, typically with the aim of
simulating extensive human online activity like communicating.
• Smart Bot – An artificial intelligence (AI) system that can learn from its surroundings and past
experiences and develop new skills based on that knowledge is referred to as a smart bot. Smart bots
that are intelligent enough can operate alongside people and learn from their actions.

10. What is Text Normalisation?


Answer – The process of converting a text into a canonical (standard) form is known as text
normalisation. For instance, the canonical form of the word “good” can be created from the words
“gooood” and “gud.” Another case is the reduction of terms that are nearly identical, such as
“stopwords,” “stop-words,” and “stop words,” to just “stopwords.”
Before we start, we must be aware that we will be working on a collection of written text, i.e., we
will be analysing text from a variety of documents. This collection of text from all the documents is
referred to as a corpus. We would go through each stage of Text Normalization and test it on a
corpus.

11. What is Sentence Segmentation in AI?


Answer – The challenge of breaking down a string of written language into its individual sentences
is known as sentence segmentation. It is the method used in NLP to determine where sentences
actually begin and end; in other words, it is how we divide a text into sentences. Using the spaCy
library, we can implement this portion of NLP in Python.
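
A minimal sketch with the spaCy library (it assumes the small English model has been installed
with: python -m spacy download en_core_web_sm):

import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline, installed separately
doc = nlp("Raj wants to be a footballer. Vijay wants to become an online gamer.")

for sentence in doc.sents:  # doc.sents yields one span per detected sentence
    print(sentence.text)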

12. What is Tokenisation in AI?


Answer – The challenge of breaking down a string of written language into its individual words is
known as word tokenization (also known as word segmentation). Space is a good approximation of a
word divider in English and many other languages that use some variation of the Latin alphabet.
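
For example, a plain whitespace split already approximates tokenization, while NLTK's tokenizer
(one common choice; it needs the "punkt" resource, called "punkt_tab" on newer releases) also
separates punctuation into its own tokens:

import nltk

nltk.download("punkt", quiet=True)

text = "Raj and Vijay are best friends."
print(text.split())              # whitespace split keeps 'friends.' as one token
print(nltk.word_tokenize(text))  # [..., 'best', 'friends', '.']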

13. What is the purpose of stopwords?


Answer – Stopwords are words that are used frequently in a corpus but provide nothing useful.
Humans utilize grammar to make their sentences clear and understandable for the other person.
However, grammatical terms fall under the category of stopwords because they do not add any
significance to the information that is to be communicated through the statement. Stopword examples
include –
a/ an/ and/ are/ as/ for/ it/ is/ into/ in/ if/ on/ or/ such/ the/ there/ to
14. What is Stemming in AI?
Answer – The act of stripping words of their affixes and returning them to their original forms is
known as stemming. The process of stemming can be carried out manually or by an algorithm that an
AI system may use. Any inflected form that is encountered can be reduced to its root by using a
variety of stemming techniques. A stemming algorithm can be created easily.
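
As a sketch of that claim, here is a toy suffix-stripping stemmer (illustrative only; real stemmers
such as Porter's apply many more rules and safeguards):

SUFFIXES = ["ing", "ly", "es", "s", "ed"]  # checked in this order

def simple_stem(word):
    for suffix in SUFFIXES:
        # Strip the suffix only if a reasonably long stem would remain.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

print([simple_stem(w) for w in ["eating", "eats", "quickly", "played"]])
# ['eat', 'eat', 'quick', 'play']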

15. What is Lemmatization?


Answer – Stemming and lemmatization are alternative techniques to one another, as they both
function to remove affixes. However, lemmatization differs in that the word that results from the
elimination of the affix (also known as the lemma) is meaningful.
Lemmatization takes more time to complete than stemming because it ensures that the lemma is a
word with meaning.

16. What is Bag of Words?


Answer – Bag of Words is a model for natural language processing that aids in extracting features
from text that can be used by machine learning techniques. We obtain each word's occurrences from
the bag of words and create the corpus's vocabulary.
An approach to extracting features from text for use in modelling, such as with machine learning
techniques, is known as a bag-of-words model, or BoW for short. The method is really
straightforward and adaptable, and it may be applied in a variety of ways to extract features from
documents.
17. Write the steps necessary to implement the bag of words algorithm.
Answer – The steps to implement bag of words algorithm are as follows:
1. Text Normalisation: Collect data and pre-process it
2. Create Dictionary: Make a list of all the unique words occurring in the corpus.
3. Create document vectors: For each document in the corpus, find out how many times the word
from the unique list of words has occurred.
4. Create document vectors for all the documents.
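
Assuming the corpus has already been normalized (step 1), the remaining steps fit in a few lines of
plain Python (a sketch with a made-up corpus):

corpus = [
    "welcome to ai learning",
    "learning ai is fun",
    "learning ai is future",
]

# Step 2: create the dictionary of unique words.
dictionary = []
for document in corpus:
    for word in document.split():
        if word not in dictionary:
            dictionary.append(word)

# Steps 3 and 4: create a document vector for every document.
vectors = [[document.split().count(word) for word in dictionary]
           for document in corpus]

print(dictionary)   # ['welcome', 'to', 'ai', 'learning', 'is', 'fun', 'future']
for vector in vectors:
    print(vector)   # e.g. the first document -> [1, 1, 1, 1, 0, 0, 0]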

18. What does the relationship between a word's value and its frequency in a corpus look like in the
given graph?
Answer – The graph demonstrates the inverse relationship between word frequency and word value.
The most frequent terms, such as stop words, are of little significance. The value of words increases
as their frequency decreases. These words are referred to as precious or uncommon words: they occur
the least frequently but are the most valuable terms in the corpus.

19. In data processing, define the term “Text Normalization.”


Answer – Text normalisation is the initial step in data processing. Text normalisation assists in
reducing the complexity of the textual data to a level lower than that of the actual data. To normalise
the text to a lower level, we go through several steps. We work with text from several sources, and
the collective textual data from all the documents is referred to as a corpus.

20. Explain the differences between lemmatization and stemming. Give an example to support your
explanation.
Answer – Stemming is the process of stripping words of their affixes and returning them to their
original form.
After the affix is removed during lemmatization, we are left with a meaningful word known as a
lemma. Lemmatization takes more time to complete than stemming because it ensures that the lemma
is a word with meaning.
The following example illustrates the distinction between stemming and lemmatization:
Caring >> Lemmatization >> Care
Caring >> Stemming >> Car
