404 BA P2: Artificial Intelligence in Business Applications
BUSINESS APPLICATIONS
(404BA)
(2019 Pattern) (Semester-IV)
Time : 2½ Hours] [Max. Marks : 50
Instructions to the candidates:
1) All questions are compulsory.
2) Neat diagrams must be drawn wherever necessary.
3) Assume suitable data, if necessary.
Breadth-first search (BFS) is a graph search algorithm that explores the nodes of a
graph in a breadth-first manner. This means that the algorithm starts at the root
node and then explores all of the nodes at the same level before moving on to the
next level.
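As a rough illustration, here is a minimal BFS sketch in Python; the adjacency-list graph used at the end is a made-up example, not one from the question.

from collections import deque

def bfs(graph, start, goal):
    # Breadth-first search over an adjacency-list graph.
    # Returns the path from start to goal, or None if no path exists.
    queue = deque([[start]])              # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()            # explore the shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(bfs(graph, 'A', 'E'))               # ['A', 'B', 'D', 'E']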
d) Define machine learning.
Machine learning (ML) is a field of computer science that gives computers the
ability to learn without being explicitly programmed. ML algorithms are trained on
data, and they learn to make predictions or decisions based on that data.
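As a toy illustration of this train-then-predict idea, here is a hedged sketch using scikit-learn; the tiny data set and the pass/fail task are invented purely for illustration.

from sklearn.tree import DecisionTreeClassifier

# Toy training data: [hours_studied, hours_slept] -> pass (1) or fail (0)
X_train = [[1, 4], [2, 5], [6, 7], [8, 8]]
y_train = [0, 0, 1, 1]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the model learns a decision rule from the data
print(model.predict([[7, 6]]))     # prediction for an unseen example, e.g. [1]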
Artificial neural networks (ANNs) are a type of machine learning algorithm that
are inspired by the human brain. ANNs are made up of interconnected nodes,
which are similar to neurons in the brain. ANNs learn by adjusting the weights of
the connections between the nodes.
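A minimal sketch of this idea, assuming a single hidden layer; the weight values are made up rather than learned, and no training loop is shown.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny network: 2 inputs -> 3 hidden nodes -> 1 output.
# The weight matrices play the role of connection strengths between "neurons".
W1 = np.array([[0.2, -0.4, 0.7],
               [0.5,  0.1, -0.3]])     # input-to-hidden weights (illustrative values)
W2 = np.array([[0.6], [-0.2], [0.4]])  # hidden-to-output weights

x = np.array([1.0, 0.5])               # one input example
hidden = sigmoid(x @ W1)               # activations of the hidden nodes
output = sigmoid(hidden @ W2)          # network prediction
print(output)

# Learning would adjust W1 and W2 (for example by gradient descent) to reduce error.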
Uniform-cost search (UCS) is a graph search algorithm that expands nodes in order of the cost of the path from the start node to each node. This means that the algorithm always chooses the unexplored node with the lowest path cost from the start node to explore next, so the first time the goal node is reached it has been reached along a least-cost path.
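A minimal UCS sketch using a priority queue (heapq); the weighted graph is a hypothetical example.

import heapq

def uniform_cost_search(graph, start, goal):
    # graph: dict mapping node -> list of (neighbour, edge_cost) pairs.
    frontier = [(0, start, [start])]           # (path_cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

graph = {'A': [('B', 1), ('C', 4)], 'B': [('D', 5)], 'C': [('D', 1)], 'D': []}
print(uniform_cost_search(graph, 'A', 'D'))    # (5, ['A', 'C', 'D'])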
The key differences between forward chaining and backward chaining can be summarized as follows (a small forward-chaining sketch in code follows the list):
• Direction of reasoning: forward chaining is data-driven, reasoning from known facts towards conclusions; backward chaining is goal-driven, reasoning from the goal back to the facts that would support it.
• Starting point: forward chaining starts from the available facts; backward chaining starts from the goal or hypothesis to be proved.
• Typical use: forward chaining suits monitoring, planning and production systems; backward chaining suits diagnosis and answering specific queries.
• Efficiency: forward chaining may derive many facts that are irrelevant to the goal; backward chaining only explores rules that are relevant to the goal.
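As a rough sketch, here is a very small forward-chaining example in Python; the rule base and fact names are invented for illustration, not part of the question.

def forward_chain(facts, rules):
    # Repeatedly apply any rule whose premises are all known facts,
    # adding its conclusion, until nothing new can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({'rainy', 'no_umbrella'}, 'gets_wet'),
         ({'gets_wet'}, 'catches_cold')]
print(forward_chain({'rainy', 'no_umbrella'}, rules))
# {'rainy', 'no_umbrella', 'gets_wet', 'catches_cold'}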
b) What is the common way to represent and parse grammars for natural
language processing?
The most common way to represent and parse grammars for natural language
processing is using context-free grammars (CFGs). CFGs are a type of formal
grammar that is used to describe the structure of sentences in a natural language.
CFGs are made up of production rules, which are statements that describe how a
sequence of symbols can be generated.
For example, the following CFG production rule describes how a noun phrase
(NP) can be generated:
NP → Det N
This rule states that a noun phrase can be generated by starting with a determiner
(Det) and then following it with a noun (N).
CFGs can be used to parse sentences by starting with the start symbol of the grammar and then applying the production rules until the words of the sentence are generated. For example, if the grammar is extended with the rules S → NP VP and VP → V NP, the sentence "the boy ate the apple" can be parsed as follows:
The parser would start with the start symbol of the grammar, which is S, and expand it to NP VP. It would then apply the production rule NP → Det N to generate the phrase "the boy", expand VP to V NP with the verb "ate", and apply NP → Det N again to generate the phrase "the apple". Combining these pieces yields a parse tree for the complete sentence.
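A small, hedged sketch of this parse using NLTK's CFG and chart parser; the grammar and sentence are the illustrative ones above, and NLTK is assumed to be installed.

import nltk

grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  VP -> V NP
  NP -> Det N
  Det -> 'the'
  N  -> 'boy' | 'apple'
  V  -> 'ate'
""")

parser = nltk.ChartParser(grammar)
sentence = ['the', 'boy', 'ate', 'the', 'apple']
for tree in parser.parse(sentence):
    tree.pretty_print()    # prints the tree (S (NP (Det the) (N boy)) (VP (V ate) (NP ...)))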
CFGs are a powerful tool for representing and parsing grammars for natural
language processing. They are relatively easy to understand and implement, and
they can be used to represent a wide variety of grammatical structures.
Here are some other ways to represent and parse grammars for natural language processing:
• Dependency grammars, which describe sentences in terms of head-dependent relations between words rather than phrase structure.
• Probabilistic context-free grammars (PCFGs), which attach probabilities to production rules so that the most likely parse can be chosen.
• Statistical and neural parsers trained on treebanks, which learn parsing decisions directly from annotated data.
The state space approach can be used to solve any AI problem that can be represented as a state space. Some examples of AI problems that can be solved using the state space approach include:
• Sliding-tile puzzles such as the 8-puzzle
• Route finding between two locations
• The water-jug problem
• Game playing, for example tic-tac-toe or chess
Here are some of the steps involved in the state space approach (a small sketch of such a formulation follows the list):
• Define the initial state, the goal state (or a goal test), and the operators that transform one state into another.
• Apply a search strategy (such as BFS, DFS, or A*) to explore the states reachable from the initial state.
• Stop when a goal state is reached and return the sequence of operators that leads to it (the solution path).
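As a hedged illustration of such a formulation, here is the classic water-jug problem (a 4-litre and a 3-litre jug, goal: measure 2 litres) written as a state space and solved with breadth-first search; all names and the goal amount are illustrative.

from collections import deque

def successors(state):
    # States are (x, y): litres currently in the 4-litre and 3-litre jugs.
    x, y = state
    moves = {(4, y), (x, 3), (0, y), (x, 0)}               # fill or empty either jug
    pour = min(x, 3 - y); moves.add((x - pour, y + pour))  # pour 4L jug into 3L jug
    pour = min(y, 4 - x); moves.add((x + pour, y - pour))  # pour 3L jug into 4L jug
    return moves

def solve(start=(0, 0), goal_amount=2):
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if goal_amount in path[-1]:        # goal test: some jug holds exactly 2 litres
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])

print(solve())    # one shortest solution, e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]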
The following predicate logic statement represents the rule that if John is tall and Mary is short, then John is taller than Mary:
tall(john) ∧ short(mary) → taller(john, mary)
Together with the facts tall(john) and short(mary), this rule can be used to reason about John and Mary.
Predicate logic can be used to reason about knowledge in a variety of ways. For
example, predicate logic can be used to:
• Deduce new knowledge: New knowledge can be deduced from existing
knowledge by using the rules of logic. For example, the following predicate
logic statement can be deduced from the statements above:
taller(john, mary)
• Answer questions: Questions can be answered by using predicate logic to
reason about the knowledge that is represented. For example, the following
question can be answered using the statements above:
Is John taller than Mary?
Predicate logic is a powerful tool for representing and reasoning about knowledge
in AI. It is a versatile language that can be used to represent a wide variety of
knowledge, and it can be used to reason about knowledge in a variety of ways.
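As a rough sketch of this kind of deduction, here is a minimal Python example that applies the rule above to a small set of facts; the tuple representation is illustrative and this is not a full theorem prover.

# Facts, and a rule of the form: if all premises hold, then the conclusion holds.
facts = {('tall', 'john'), ('short', 'mary')}
rule = ({('tall', 'john'), ('short', 'mary')}, ('taller', 'john', 'mary'))

premises, conclusion = rule
if premises <= facts:             # modus ponens: every premise is a known fact
    facts.add(conclusion)         # deduce the new fact taller(john, mary)

# Answering the question "Is John taller than Mary?"
print(('taller', 'john', 'mary') in facts)    # True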
The A* search algorithm works by maintaining a priority queue of nodes that have not yet been explored. The priority queue is ordered by the estimated total cost of a solution through each node, f(n) = g(n) + h(n), where g(n) is the cost of the path from the start node to the node and h(n) is a heuristic estimate of the cost from the node to the goal node. The algorithm then pops the node with the lowest estimated total cost from the priority queue and explores it. If the node is the goal node, then the algorithm terminates. Otherwise, the algorithm expands the node and adds its children to the priority queue.
Here is an example of how the A* search algorithm can be used to find the shortest path between two nodes in a graph. Suppose the graph contains the nodes A, B, C, D and E, where A is connected to B (cost 1) and to C (cost 3), B is connected to D (cost 2), D is connected to E (cost 1), and C is connected to E (cost 2); the goal is to reach E from A. The heuristic estimates of the remaining cost to E are h(A) = 4, h(B) = 3, h(C) = 2, h(D) = 1 and h(E) = 0.
The A* search algorithm would start by adding the node A to the priority queue. The estimated total cost of the path from A to E is f(A) = 0 + 4 = 4.
The algorithm would then pop the node A from the priority queue and explore it. The algorithm would then add the nodes B and C to the priority queue, with f(B) = 1 + 3 = 4 and f(C) = 3 + 2 = 5.
The algorithm would then pop the node B, which now has the lowest estimated total cost, and explore it. The algorithm would then add the node D to the priority queue, with f(D) = 3 + 1 = 4.
The algorithm would then pop the node D from the priority queue and explore it. The algorithm would then add the node E to the priority queue, with f(E) = 4 + 0 = 4.
The algorithm would then pop the node E and terminate, because E is the goal node. The shortest path found is A → B → D → E, with a total cost of 4.
If the heuristic function is admissible, meaning that it never overestimates the true remaining cost to the goal, then the A* search algorithm will always find the shortest path between two nodes in the graph.
Overall, the A* search algorithm is a powerful and efficient algorithm for finding
the shortest path between two nodes in a graph. However, it can be
computationally expensive for graphs with a large number of nodes.
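A minimal A* sketch in Python using heapq, run on the hypothetical graph and heuristic values from the example above.

import heapq

def a_star(graph, h, start, goal):
    # graph: dict node -> list of (neighbour, edge_cost); h: heuristic estimates to the goal.
    frontier = [(h[start], 0, start, [start])]     # (f = g + h, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier) # lowest estimated total cost first
        if node == goal:
            return g, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in visited:
                new_g = g + cost
                heapq.heappush(frontier, (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None

graph = {'A': [('B', 1), ('C', 3)], 'B': [('D', 2)], 'C': [('E', 2)], 'D': [('E', 1)], 'E': []}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}
print(a_star(graph, h, 'A', 'E'))     # (4, ['A', 'B', 'D', 'E'])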
Here is an example of how the steps involved in NLP of an English sentence can
be applied to the sentence "The quick brown fox jumps over the lazy dog":
• Tokenization: The sentence is tokenized into the following tokens: "The",
"quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog".
• Part-of-speech tagging: The tokens are tagged with their part-of-speech tags.
For example, the token "The" is tagged as a determiner, the token "quick" is
tagged as an adjective, and the token "jumps" is tagged as a verb.
• Stemming: The words are stemmed to their root form. For example, the
words "jumping" and "jumped" are both stemmed to the root word "jump".
• Lemmatization: The words are lemmatized to their lemma. For example, the
words "jumping", "jumped", and "jump" are all lemmatized to the lemma
"jump".
• Named entity recognition: The named entities (people, organisations, places, and so on) in the sentence are identified. The example sentence contains no proper names, so no named entities are found; in a sentence such as "John works at Google", the entities "John" (a person) and "Google" (an organisation) would be identified.
• Parsing: The sentence is parsed into a tree structure that shows the
relationships between the different words in the sentence.
• Semantic analysis: The meaning of the sentence is determined. This can be
done by using a variety of techniques, such as word sense disambiguation,
coreference resolution, and sentiment analysis.
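A hedged sketch of the first few of these steps using NLTK; it assumes the standard NLTK models (punkt, the POS tagger and WordNet) have already been downloaded.

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

sentence = "The quick brown fox jumps over the lazy dog"

tokens = nltk.word_tokenize(sentence)          # tokenization
tags = nltk.pos_tag(tokens)                    # part-of-speech tagging, e.g. ('jumps', 'VBZ')

stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in tokens]      # stemming, e.g. "jumps" -> "jump"

lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(t, pos='v') if tag.startswith('V')
          else lemmatizer.lemmatize(t)
          for t, (_, tag) in zip(tokens, tags)]  # lemmatization

print(tokens)
print(tags)
print(stems)
print(lemmas)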
Depth-first search (DFS) is a graph search algorithm that explores the nodes of a graph in a depth-first manner. This means that DFS starts at the start node and follows a single branch as deep as possible, backtracking and trying an alternative branch only when it reaches a node with no unexplored neighbours.
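A minimal recursive DFS sketch in Python; the adjacency-list graph is a hypothetical example.

def dfs(graph, node, goal, visited=None):
    # Depth-first search: follow one branch as deep as possible before backtracking.
    # Returns a path from node to goal, or None if no path exists.
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for neighbour in graph.get(node, []):
        if neighbour not in visited:
            path = dfs(graph, neighbour, goal, visited)
            if path is not None:
                return [node] + path
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['E']}
print(dfs(graph, 'A', 'E'))    # ['A', 'B', 'D', 'E']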
Here is an example of a problem for which BFS would work better than DFS:
• Finding the shortest path between two nodes in an unweighted graph: BFS is a good algorithm for this problem because it explores all of the nodes at one distance from the start node before it explores nodes that are further away. As a result, the first time BFS reaches the goal node it has found a path with the fewest possible edges, whereas DFS may commit to a deep branch first and return a much longer path.
Here is an example of a problem for which DFS would work better than BFS:
• Finding a path to a goal node that lies deep in a very large graph when memory is limited: DFS is a good algorithm for this problem because it only needs to keep the nodes on the current path in memory, so it can reach deep goal nodes quickly and cheaply, backtracking out of dead ends as it meets them. BFS, on the other hand, must store every node at each depth level, which can require an enormous amount of memory before a deep goal node is ever reached.
A* search: A* search is a graph search algorithm that is used to find the shortest path between two nodes in a graph. It is an informed (best-first) search algorithm: at each step it chooses the node that currently appears most promising, judged by the cost incurred so far plus a heuristic estimate of the cost remaining.
A* search works by maintaining a priority queue of nodes that have not yet been explored. The priority queue is ordered by f(n) = g(n) + h(n), the cost of the path from the start node to the node plus the estimated cost from the node to the goal node. The algorithm then pops the node with the lowest value of f(n) from the priority queue and explores it. If the node is the goal node, then the algorithm terminates. Otherwise, the algorithm expands the node and adds its children to the priority queue.
The estimate h(n) is provided by a heuristic function that approximates the cost of the path from the node to the goal node. The heuristic function should be admissible, which means that it should never overestimate that cost.
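For instance, a commonly used admissible heuristic for path-finding on a grid is the Manhattan distance, sketched below with illustrative coordinates.

def manhattan(node, goal):
    # Admissible for a grid where only horizontal and vertical moves of cost 1 are
    # allowed: it never overestimates the true remaining cost.
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))    # 7, never more than the cost of any real path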
Overall, informed search strategies are a powerful tool for solving problems that
involve searching a graph. However, they can be more computationally expensive
than uninformed search strategies.
Here are some of the techniques that can be used for disambiguation in AI:
• Context: the words surrounding an ambiguous word or phrase are used to decide which meaning is intended.
• Knowledge bases: dictionaries, ontologies and lexical resources such as WordNet provide the candidate senses and the relations between them.
• Statistical methods: machine learning models trained on annotated text choose the most probable sense given the context.
Overall, ambiguity and disambiguation are important concepts in AI that are used
to resolve the multiple meanings of words and phrases. The techniques that can be
used for disambiguation in AI include context, knowledge bases, and statistical
methods.
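As a small, hedged illustration of word sense disambiguation, here is the Lesk algorithm as implemented in NLTK; it assumes the WordNet corpus and tokenizer models have been downloaded.

from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

sentence = "He went to the bank to deposit his money"
tokens = word_tokenize(sentence)

# Lesk picks the WordNet sense of "bank" whose definition best overlaps the context.
sense = lesk(tokens, 'bank')
print(sense, '-', sense.definition() if sense else 'no sense found')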