SPCC Case Study Parser


Name : Sayyed Sohail Rashid

Roll no: 18CO48


Sub: SPCC
Date: 9/3/2022

Case Study

Aim: Case study on what a parser is and how to implement it.

Theory:

What is a Parser?

Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of
symbols, either in natural language, computer languages, or data structures, conforming to
the rules of a formal grammar.

Process / Working:

1) The first stage is the token generation, or lexical analysis, by which the input character
stream is split into meaningful symbols defined by a grammar of regular expressions. For
example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it
into the tokens 12, *, (, 3, +, 4, ), ^, 2, each of which is a meaningful symbol in the context
of an arithmetic expression. The lexer would contain rules to tell it that the characters *, +,
^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be
generated.
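
A minimal sketch of this lexical-analysis stage in Python is shown below; the TOKEN_PATTERN regular expression and the tokenize function are illustrative names chosen for this example, not part of any standard library.

import re

# Regular-expression rules for the token classes used in the example above:
# integers, the operators * + ^, and parentheses. Whitespace is matched so it
# can be skipped explicitly.
TOKEN_PATTERN = re.compile(r"\d+|[*+^()]|\s+")

def tokenize(text):
    tokens = []
    for match in TOKEN_PATTERN.finditer(text):
        lexeme = match.group()
        if not lexeme.isspace():      # whitespace separates tokens but is not one
            tokens.append(lexeme)
    return tokens

print(tokenize("12 * (3 + 4)^2"))
# ['12', '*', '(', '3', '+', '4', ')', '^', '2']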
2) The next stage is parsing or syntactic analysis, which is checking that the tokens form an
allowable expression. This is usually done with reference to a context-free grammar which
recursively defines components that can make up an expression and the order in which they
must appear. However, not all rules defining programming languages can be expressed by
context-free grammars alone, for example type validity and proper declaration of identifiers.
These rules can be formally expressed with attribute grammars.
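
A minimal sketch of this stage is shown below: a hypothetical recursive-descent checker that tests whether a token list (as produced by the lexer sketch above) conforms to a small context-free grammar for arithmetic expressions. The accepts function and the grammar itself are assumptions made for this example.

# Grammar assumed for this sketch:
#   expr   -> term   (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> base   ('^' factor)?        (right-associative exponent)
#   base   -> NUMBER | '(' expr ')'

def accepts(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError("unexpected token %r" % (tok,))
        pos += 1

    def expr():
        term()
        while peek() in ('+', '-'):
            eat()
            term()

    def term():
        factor()
        while peek() in ('*', '/'):
            eat()
            factor()

    def factor():
        base()
        if peek() == '^':
            eat()
            factor()

    def base():
        if peek() == '(':
            eat('(')
            expr()
            eat(')')
        elif peek() is not None and peek().isdigit():
            eat()
        else:
            raise SyntaxError("unexpected token %r" % (peek(),))

    try:
        expr()
        return pos == len(tokens)     # every token must be consumed
    except SyntaxError:
        return False

print(accepts(['12', '*', '(', '3', '+', '4', ')', '^', '2']))   # True
print(accepts(['12', '*']))                                      # False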
3) The final phase is semantic parsing or analysis, which is working out the implications of
the expression just validated and taking the appropriate action. In the case of a calculator or
interpreter, the action is to evaluate the expression or program; a compiler, on the other
hand, would generate some kind of code. Attribute grammars can also be used to define
these actions.
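
For the running example, a calculator's semantic action would evaluate 12 * (3 + 4)^2 as 12 * 7^2 = 12 * 49 = 588, while a compiler would instead emit code that performs the same computation at run time.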
Types of Parsers:

1) The task of the parser is essentially to determine if and how the input can be derived from
the start symbol of the grammar. This can be done in essentially two ways:
• Top-down parsing - Top-down parsing can be viewed as an attempt to find left-most
derivations of an input stream by searching for parse trees using a top-down
expansion of the given formal grammar rules. Tokens are consumed from left to
right. Inclusive choice is used to accommodate ambiguity by expanding all
alternative right-hand sides of grammar rules; this is known as the primordial soup
approach. Much like sentence diagramming, it breaks a sentence down into its
constituents.
• Bottom-up parsing - A parser can start with the input and attempt to rewrite it to the
start symbol. Intuitively, the parser attempts to locate the most basic elements, then
the elements containing these, and so on. LR parsers are examples of bottom-up
parsers. Another term for this type of parser is shift-reduce parser; a short
shift-reduce trace is sketched after this list.
2) LL parsers and recursive-descent parsers are examples of top-down parsers that cannot
accommodate left-recursive production rules. Although it was long believed that simple
implementations of top-down parsing cannot accommodate direct and indirect left recursion
and may require exponential time and space while parsing ambiguous context-free
grammars, more sophisticated algorithms for top-down parsing have been created by
Frost, Hafiz, and Callaghan which accommodate ambiguity and left recursion in polynomial
time and which generate polynomial-size representations of the potentially exponential
number of parse trees. Their algorithm is able to produce both left-most and right-most
derivations of an input with regard to a given context-free grammar.
3) An important distinction with regard to parsers is whether a parser generates a leftmost
derivation or a rightmost derivation. LL parsers will generate a leftmost derivation and LR
parsers will generate a rightmost derivation (although usually in reverse).
4) Some graphical parsing algorithms have been designed for visual programming
languages. Parsers for visual languages are sometimes based on graph grammars.
5) Adaptive parsing algorithms have been used to construct "self-extending" natural
language user interfaces.
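
A hand-worked shift-reduce trace for the input 3 + 4 * 5 is sketched below, using the common expression grammar E -> E + T | T, T -> T * F | F, F -> num. The shift/reduce decisions shown are the ones an LR parser table for this grammar would make.

Stack          Input          Action
(empty)        3 + 4 * 5      shift 3
3              + 4 * 5        reduce F -> 3
F              + 4 * 5        reduce T -> F
T              + 4 * 5        reduce E -> T
E              + 4 * 5        shift +
E +            4 * 5          shift 4
E + 4          * 5            reduce F -> 4
E + F          * 5            reduce T -> F
E + T          * 5            shift *   (* binds tighter, so no reduction to E yet)
E + T *        5              shift 5
E + T * 5      (empty)        reduce F -> 5
E + T * F      (empty)        reduce T -> T * F
E + T          (empty)        reduce E -> E + T
E              (empty)        accept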
Flowchart:

Working of Parser
Implementation:

# Note: the built-in parser module is deprecated since Python 3.9 and was
# removed in Python 3.10, so this program requires Python 3.9 or earlier.
import parser

print("Program to demonstrate parser module in Python")
print("\n")

exp = "5 + 8"
print("The given expression for parsing is as follows:")
print(exp)
print("\n")

# Parse the expression string into a syntax tree (ST) object.
print("Parsing of given expression results as: ")
st = parser.expr(exp)
print(st)
print("\n")

# Convert the parsed syntax tree into an executable code object.
print("The parsed object is converted to the code object")
code = st.compile()
print(code)
print("\n")

# Evaluate the code object to obtain the value of the expression.
print("The evaluated result of the given expression is as follows:")
res = eval(code)
print(res)
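
On Python 3.10 and later, where the parser module has been removed, a roughly equivalent sketch can be written with the standard ast module and the built-in compile function:

import ast

exp = "5 + 8"

# Parse the expression string into an abstract syntax tree.
tree = ast.parse(exp, mode="eval")
print(ast.dump(tree))

# Compile the tree into a code object, then evaluate it.
code = compile(tree, filename="<string>", mode="eval")
print(code)

res = eval(code)
print(res)    # 13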

Output:

Conclusion:

Case study on parsers and their implementation has been successfully performed.
