Compiler Design UNIT-1

 
o   A compiler is a translator that converts the high-level language into the machine language.
o   High-level language is written by a developer and machine language can be understood by the processor.
o   The compiler also reports errors in the source program to the programmer.
o   The main purpose of a compiler is to translate code written in one language into another language without changing the meaning of the program.
o   When you execute a program written in a HLL programming language, it is executed in two parts.
o   In the first part, the source program is compiled and translated into the object program (low-level language).
o   In the second part, the object program is translated into the target program through the assembler.
Fig: Execution process of source program in Compiler
 
Compiler Phases
The compilation process contains a sequence of phases. Each phase takes the source program in one representation and produces output in another representation. Each phase takes its input from the output of the previous phase.
The various phases of a compiler are:

Fig: Phases of compiler
Ø  Lexical Analysis:
The lexical analysis phase is the first phase of the compilation process. It takes source code as input, reads the source program one character at a time and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens.
Syntax Analysis
Syntax analysis is the second phase of the compilation process. It takes tokens as input and generates a parse tree as output. In the syntax analysis phase, the parser checks whether the expression made by the tokens is syntactically correct.
Ø  Semantic Analysis
Semantic analysis is the third phase of the compilation process. It checks whether the parse tree follows the rules of the language. The semantic analyzer keeps track of identifiers, their types and expressions. The output of the semantic analysis phase is the annotated syntax tree.
Ø  Intermediate Code Generation
In intermediate code generation, the compiler translates the source code into an intermediate code. The intermediate code lies between the high-level language and the machine language. It should be generated in such a way that it can easily be translated into the target machine code.
Ø  Code Optimization
Code optimization is an optional phase. It is used to improve the intermediate code so that the output of the program could run faster and take less space. It removes the unnecessary lines of the code and arranges the sequence of statements in order to speed up the program execution.
Ø  Code Generation
Code generation is the final stage of the compilation process. It takes the optimized intermediate code as input and maps it to the target machine language. Code generator translates the intermediate code into the machine code of the specified computer.
 
Example:
 
 
                  Sum=Old sum+Rate*50
Fig: Compiler phases applied to the statement Sum = Old sum + Rate * 50
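A plausible sketch of what each phase might produce for this statement (the token names, temporaries and machine instructions below are illustrative assumptions, not taken from the figure):

Lexical analysis     : id1 = id2 + id3 * 50
                       (id1 = Sum, id2 = Old sum, id3 = Rate, 50 = constant)
Syntax/semantic      : assignment tree  id1 := (id2 + (id3 * 50)), types checked
Intermediate code    : t1 = id3 * 50
                       t2 = id2 + t1
                       id1 = t2
Code optimization    : t1 = id3 * 50
                       id1 = id2 + t1
Code generation      : MOV  R1, id3
                       MUL  R1, #50
                       ADD  R1, id2
                       MOV  id1, R1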
 
 
 
 
 
 
 
 
Compiler Passes
A pass is one complete traversal of the source program. With respect to passes, compilers are of two kinds:
Ø  Multi-pass Compiler
o   A multi-pass compiler processes the source code of a program several times.
o   In the first pass, the compiler reads the source program, scans it, extracts the tokens and stores the result in an output file.
o   In the second pass, the compiler reads the output file produced by the first pass, builds the syntactic tree and performs the syntactical analysis. The output of this phase is a file that contains the syntactic tree.
o   In the third pass, the compiler reads the output file produced by the second pass and checks whether the tree follows the rules of the language. The output of the semantic analysis phase is the annotated syntax tree.
o   This process continues until the target output is produced.
Ø  One-pass Compiler
o   A one-pass compiler traverses the program only once. The one-pass compiler passes only once through the parts of each compilation unit and translates each part into its final machine code.
o   In a one-pass compiler, when a line of source is processed, it is scanned and the tokens are extracted.
o   Then the syntax of the line is analyzed and the tree structure is built. After the semantic part, the code is generated.
o   The same process is repeated for each line of code until the entire program is compiled.
 
Bootstrapping
o   Bootstrapping is widely used in compiler development.
o   Bootstrapping is used to produce a self-hosting compiler. Self-hosting compiler is a type of compiler that can compile its own source code.
o   A bootstrap compiler is used to compile the compiler, and then this compiled compiler can be used to compile everything else, as well as future versions of itself.
A compiler can be characterized by three languages:
1.      Source Language
2.      Target Language
3.      Implementation Language
The T-diagram notation SCIT denotes a compiler for source language S and target language T, implemented in language I.
Fig: T-diagram of the compiler SCIT
Follow these steps to produce a compiler for a new language L on machine A:
1. Create a compiler SCAA for a subset S of the desired language L, written in language A; this compiler runs on machine A.
 
Fig: T-diagram of SCAA
2. Create a compiler LCSA for the full language L, written in the subset S of L.
Fig: T-diagram of LCSA
3. Compile LCSA using the compiler SCAA to obtain LCAA. LCAA is a compiler for language L which runs on machine A and produces code for machine A.
Fig: Compiling LCSA with SCAA to obtain LCAA (T-diagrams)
The process described by the T-diagrams is called bootstrapping.
 
Finite state machine
o   A finite state machine is used to recognize patterns.
o   A finite automaton takes a string of symbols as input and changes its state accordingly. When a desired symbol is found in the input, the transition occurs.
o   During a transition, the automaton can either move to the next state or stay in the same state.
o   An FA has two possible outcomes for a string: accept or reject. When the input string has been successfully processed and the automaton is in a final state, the string is accepted.
A finite automaton consists of the following:
Q: finite set of states
∑: finite set of input symbols
q0: initial state
F: set of final states
δ: transition function
The transition function can be defined as:
1.      δ: Q x ∑ → Q  
FA is classified into two types:
1.      DFA (deterministic finite automata)  
2.      NDFA (non deterministic finite automata)
DFA
DFA stands for Deterministic Finite Automata. Deterministic refers to the uniqueness of the computation. In DFA, the input character goes to one state only. DFA doesn't accept the null move that means the DFA cannot change state without any input character.
A DFA is defined by the five tuples {Q, ∑, q0, F, δ}:
Q: set of all states
∑: finite set of input symbols
q0: initial state
F: set of final states
δ: transition function, δ: Q x ∑ → Q
Example
See an example of deterministic finite automata:
 
1.      Q = {q0, q1, q2}  
2.      ∑ = {0, 1}  
3.      q0 = {q0}  
4.      F = {q3}  
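In practice, the transition function of a DFA is stored as a table indexed by state and input symbol. Below is a minimal C sketch that simulates a DFA; since the example above does not list its transition function, the machine used here (a DFA over {0, 1} accepting strings that end in "01") is an assumed one, not the one from the example.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* delta[state][symbol]: assumed DFA accepting strings ending in "01".
       State 0: no progress, state 1: last symbol was '0',
       state 2: string ends in "01" (accepting). */
    int delta[3][2] = {
        {1, 0},   /* state 0 */
        {1, 2},   /* state 1 */
        {1, 0}    /* state 2 */
    };
    const char *input = "110101";
    int state = 0;                        /* start in the initial state q0 */
    for (size_t i = 0; i < strlen(input); i++)
        state = delta[state][input[i] - '0'];   /* one transition per symbol */
    printf("%s is %s\n", input, state == 2 ? "accepted" : "rejected");
    return 0;
}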
 
NDFA
NDFA refers to Non-Deterministic Finite Automata. For a particular input it can move to any number of states. NDFA accepts the NULL (ε) move, which means it can change state without reading an input symbol.
NDFA is defined by the same five tuples as a DFA, but it has a different transition function.
Transition function of NDFA can be defined as:
δ: Q x ∑ → 2^Q
Example
See an example of non deterministic finite automata:
 
1.      Q = {q0, q1, q2}  
2.      ∑ = {0, 1}  
3.      q0 = {q0}  
4.      F = {q3}  
Derivation
Derivation is a sequence of applications of production rules. It is used to derive the input string through these production rules. During parsing we have to take two decisions. These are as follows:
o   We have to decide the non-terminal which is to be replaced.
o   We have to decide the production rule by which the non-terminal will be replaced.
We have two options to decide which non-terminal is to be replaced with a production rule:
Left-most Derivation
In the left-most derivation, the input is scanned and replaced with the production rule from left to right. So in a left-most derivation we read the input string from left to right.
Production rules:
1.      S = S + S  
2.      S = S - S  
3.      S = a | b |c  
Input:
a - b + c
 
The left-most derivation is:
 
1.      S = S + S  
2.      S = S - S + S  
3.      S = a - S + S  
4.      S = a - b + S  
5.      S = a - b + c  
 
Right-most Derivation
In the right-most derivation, the input is scanned and replaced with the production rule from right to left. So in a right-most derivation we read the input string from right to left.
Example:
 
1.      S = S + S  
2.      S = S - S  
3.      S = a | b |c  
 
Input:
a - b + c
The right-most derivation is:
 
1.      S = S - S  
2.      S = S - S + S  
3.      S = S - S + c  
4.      S = S - b + c  
5.      S = a - b + c  
 
 
 
 
 
 
 
Parse tree
o   A parse tree is the graphical representation of a derivation. Its nodes are symbols, which can be terminals or non-terminals.
o   In parsing, the string is derived using the start symbol. The root of the parse tree is that start symbol.
o   The parse tree follows the precedence of operators. The deepest sub-tree is traversed first, so the operator in a parent node has lower precedence than the operator in its sub-tree.
The parse tree follows these points:
o   All leaf nodes have to be terminals.
o   All interior nodes have to be non-terminals.
o   In-order traversal gives original input string.
Example:
Production rules:
1.      S = S + S | S * S  
2.      S= a|b|c  
Input:
a * b + c
Steps 1 to 5:
Fig: Parse tree for a * b + c, built up step by step
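The step-by-step figures are not reproduced here; as a sketch, the final parse tree for a * b + c under the rules above looks as follows. Because a * b forms the deepest sub-tree, * is applied before +:

            S
          / | \
        S   +   S
      / | \     |
    S   *   S   c
    |       |
    a       b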
 
Ambiguity
A grammar is said to be ambiguous if there exists more than one leftmost derivation, more than one rightmost derivation, or more than one parse tree for a given input string. If the grammar is not ambiguous then it is called unambiguous.
Example:
1.      S = aSb | SS  
2.      S = ε  
For the string aabb, the above grammar generates two parse trees:
Fig: Two parse trees for the string aabb
If a grammar has ambiguity then it is not good for compiler construction. No method can automatically detect and remove ambiguity, but ambiguity can be removed by rewriting the whole grammar without ambiguity.
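For illustration, the two parse trees above correspond to two distinct leftmost derivations of aabb (this assumes the ε-production completes the grammar as written above):

Derivation 1: S ⇒ aSb ⇒ aaSbb ⇒ aabb                 (S → aSb, S → aSb, S → ε)
Derivation 2: S ⇒ SS ⇒ S ⇒ aSb ⇒ aaSbb ⇒ aabb        (S → SS, S → ε, S → aSb, S → aSb, S → ε)

Since the same string has two different leftmost derivations (and therefore two parse trees), the grammar is ambiguous.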
 
BNF Notation
BNF stands for Backus-Naur Form. It is used to write a formal representation of a context-free grammar. It is also used to describe the syntax of a programming language.
BNF notation is basically just a variant of a context-free grammar.
In BNF, productions have the form:
leftside → definition  
Where leftside ∈ (Vn ∪ Vt)+ and definition ∈ (Vn ∪ Vt)*. In BNF, the leftside contains exactly one non-terminal.
We can define several productions with the same leftside. The productions are separated by a vertical bar symbol "|".
For example, consider a grammar with the following productions:
S → aSa  
S → bSb  
S → c  
In BNF, we can represent above grammar as follows:
S → aSa| bSb| c  
 
Regular expression
A regular expression is a pattern that defines a set of strings. It is used to denote regular languages.
It is also used to match character combinations in strings. String-searching algorithms use such patterns to find matches in strings.
In a regular expression, x* means zero or more occurrences of x. It can generate {ε, x, xx, xxx, xxxx, .....}
In a regular expression, x+ means one or more occurrences of x. It can generate {x, xx, xxx, xxxx, .....}
Operations on Regular Language
The various operations on regular language are:
Union: If L and M are two regular languages then their union L U M is also a regular language.
L U M = {s | s is in L or s is in M}  
Intersection: If L and M are two regular languages then their intersection L ⋂ M is also a regular language.
L ⋂ M = {s | s is in L and s is in M}  
Kleene closure: If L is a regular language then its Kleene closure L* is also a regular language.
L* = zero or more occurrences of strings from language L.  
Example
Write the regular expression for the language:
L = {ab^n w : n ≥ 3, w ∈ (a + b)+}
Solution:
Every string of language L starts with "a" followed by at least three b's, and then contains at least one more "a" or "b". Example strings are abbba, abbbbbba, abbbbbbbb, abbbba, and so on.
So regular expression is:
r = ab^3 b* (a + b)+
Here + denotes the positive closure, i.e. (a + b)+ = (a + b)* - {ε}
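A small C sketch that checks membership in L directly (the function and test strings are illustrative assumptions; a real scanner would be generated from the regular expression instead):

#include <stdio.h>
#include <string.h>

/* Check s against L = { a b^n w : n >= 3, w in (a+b)+ }. */
static int in_L(const char *s) {
    size_t len = strlen(s), i, k = 0;
    for (i = 0; i < len; i++)
        if (s[i] != 'a' && s[i] != 'b') return 0;   /* alphabet is {a, b} */
    if (len == 0 || s[0] != 'a') return 0;          /* must start with 'a' */
    for (i = 1; i < len && s[i] == 'b'; i++) k++;    /* leading run of b's */
    if (i < len)                 /* some suffix starting with 'a' remains */
        return k >= 3;
    return k >= 4;               /* otherwise w must be carved out of the b-run */
}

int main(void) {
    const char *tests[] = { "abbba", "abbb", "abbbb", "abbbbbba", "aabbb" };
    for (int t = 0; t < 5; t++)
        printf("%-10s %s\n", tests[t], in_L(tests[t]) ? "in L" : "not in L");
    return 0;
}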
 
Optimization of DFA
To optimize the DFA you have to follow the various steps. These are as follows:
Step 1: Remove all the states that are unreachable from the initial state via any set of the transition of DFA.
Step 2: Draw the transition table for the remaining states.
Step 3: Now split the transition table into two tables T1 and T2. T1 contains all final states and T2 contains non-final states.
Step 4: Find the similar rows from T1 such that:
δ (q, a) = p  
δ (r, a) = p  
That means, find two states which have the same transitions on every input symbol and remove one of them.
Step 5: Repeat Step 4 until there are no similar rows left in the transition table T1.
Step 6: Repeat Step 4 and Step 5 for table T2 also.
Step 7: Now combine the reduced T1 and T2 tables. The combined transition table is the transition table of the minimized DFA.
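A minimal C sketch of the row-merging idea in Steps 4 to 6, assuming the unreachable states have already been removed; the transition table used here is an assumed example, not the one from the figures below:

#include <stdio.h>

#define N 4   /* number of reachable states in this assumed example */

int main(void) {
    int delta[N][2] = {
        {1, 2},   /* q0 */
        {1, 3},   /* q1 */
        {1, 3},   /* q2: same row as q1, so q2 can be merged into q1 */
        {3, 3}    /* q3 */
    };
    int is_final[N] = {0, 0, 0, 1};
    int rep[N];                        /* representative of each state */
    for (int i = 0; i < N; i++) rep[i] = i;

    /* One merging pass; as in Step 5, this would be repeated until no rows change. */
    for (int i = 0; i < N; i++) {
        if (rep[i] != i) continue;
        for (int j = i + 1; j < N; j++)
            if (rep[j] == j && is_final[i] == is_final[j] &&
                delta[i][0] == delta[j][0] && delta[i][1] == delta[j][1])
                rep[j] = i;            /* merge j into i */
    }

    /* Print the reduced table, redirecting transitions to representatives. */
    printf("state  0  1\n");
    for (int i = 0; i < N; i++) {
        if (rep[i] != i) continue;     /* skip merged-away states */
        printf("%sq%d    q%d q%d\n", is_final[i] ? "*" : " ",
               i, rep[delta[i][0]], rep[delta[i][1]]);
    }
    return 0;
}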
Example
Fig: DFA to be minimized
Solution:
Step 1: In the given DFA, q2 and q4 are the unreachable states so remove them.
Step 2: Draw the transition table for rest of the states.
Fig: Transition table for the remaining states
Step 3:
Now divide rows of transition table into two sets as:
1. One set contains those rows which start from non-final states:
Fig: Set 1 - rows starting from non-final states
2. The other set contains those rows which start from final states.
Fig: Set 2 - rows starting from final states
Step 4: Set 1 has no similar rows so set 1 will be the same.
Step 5: In set 2, row 1 and row 2 are similar since q3 and q5 transit to same state on 0 and 1. So skip q5 and then replace q5 by q3 in the rest.
Fig: Set 2 after merging q5 into q3
Step 6: Now combine set 1 and set 2 as:
Fig: Combined transition table
Now it is the transition table of minimized DFA.
Transition diagram of minimized DFA:
         Fig: Minimized DFA
 
 
 
LEX
Lex is a program that generates a lexical analyzer. It is used with the YACC parser generator.
The lexical analyzer is a program that transforms an input stream into a sequence of tokens.
Lex reads an input specification and produces C source code that implements the lexical analyzer.
The function of Lex is as follows:
Firstly, the lexical analyzer specification lex.l is written in the Lex language. Then the Lex compiler runs on the lex.l program and produces a C program lex.yy.c.
Finally, the C compiler compiles the lex.yy.c program and produces an object program a.out.
a.out is the lexical analyzer that transforms an input stream into a sequence of tokens.

Fig: Working of Lex
Lex file format
A Lex program is separated into three sections by %% delimiters. The format of a Lex source file is as follows:
{ definitions }   
%%  
 { rules }   
%%   
{ user subroutines }  
Definitions include declarations of constants, variables and regular definitions.
Rules define statements of the form p1 {action1} p2 {action2} .... pn {actionn}.
Where pi describes a regular expression and actioni describes what action the lexical analyzer should take when pattern pi matches a lexeme.
User subroutines are auxiliary procedures needed by the actions. The subroutine can be loaded with the lexical analyzer and compiled separately.
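As an illustration of this layout, here is a minimal Lex specification (an assumed example, not from the source) that prints the tokens it finds on standard input:

%{
#include <stdio.h>
%}
%%
[0-9]+                    { printf("NUMBER: %s\n", yytext); }
[a-zA-Z_][a-zA-Z0-9_]*    { printf("IDENTIFIER: %s\n", yytext); }
"+"|"-"|"*"|"/"|"="       { printf("OPERATOR: %s\n", yytext); }
[ \t\n]+                  { /* skip whitespace */ }
.                         { printf("UNKNOWN: %s\n", yytext); }
%%
int yywrap(void) { return 1; }
int main(void) {
    yylex();               /* run the generated scanner on stdin */
    return 0;
}

Running the Lex compiler on such a file produces lex.yy.c, which the C compiler then turns into the scanner a.out, as described above.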

 
Formal grammar
Formal grammar is a set of rules. It is used to identify correct or incorrect strings of tokens in a language. The formal grammar is represented as G.
Formal grammar is used to generate all possible strings over the alphabet that is syntactically correct in the language.
Formal grammar is used mostly in the syntactic analysis phase (parsing) particularly during the compilation.
Formal grammar G is written as follows:
G = <V, N, P, S>  
Where:
N describes a finite set of non-terminal symbols.
V describes a finite set of terminal symbols.
P describes a set of production rules
S is the start symbol.
Example:
V = {a, b}, N = {S, R, B}  
Production rules:
S = bR  
R = aR  
R = aB   
B = b  
Through these productions we can produce strings like bab, baab, baaab, etc.
These productions describe strings of the shape ba^n b, where n ≥ 1.
Context free grammar
Context free grammar is a formal grammar which is used to generate all possible strings in a given formal language.
A context free grammar G can be defined by a four-tuple as:
G= (V, T, P, S)  
Where,
G describes the grammar
T describes a finite set of terminal symbols.
V describes a finite set of non-terminal symbols
P describes a set of production rules
S is the start symbol.
In CFG, the start symbol is used to derive the string. You can derive the string by repeatedly replacing a non-terminal by the right hand side of a production, until all non-terminals have been replaced by terminal symbols.
Example:
L = {wcw^R | w ∈ (a, b)*}
Production rules:
S → aSa  
S → bSb  
S → c  
Now check whether the string abbcbba can be derived from the given CFG:
S ⇒ aSa  
S ⇒ abSba  
S ⇒ abbSbba  
S ⇒ abbcbba  
By applying the production S → aSa, S → bSb recursively and finally applying the production S → c, we get the string abbcbba.
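A small C sketch that mirrors this derivation process by recursively matching the productions S → aSa | bSb | c against a string (illustrative code, not part of the source text):

#include <stdio.h>
#include <string.h>

/* Returns 1 if s[lo..hi] can be derived from S in the grammar
   S -> aSa | bSb | c, i.e. if it has the form w c w^R. */
static int derives_S(const char *s, size_t lo, size_t hi) {
    if (lo > hi) return 0;
    if (lo == hi) return s[lo] == 'c';            /* S -> c */
    if (s[lo] == 'a' && s[hi] == 'a')             /* S -> aSa */
        return derives_S(s, lo + 1, hi - 1);
    if (s[lo] == 'b' && s[hi] == 'b')             /* S -> bSb */
        return derives_S(s, lo + 1, hi - 1);
    return 0;
}

int main(void) {
    const char *w = "abbcbba";
    printf("%s %s derivable from S\n", w,
           derives_S(w, 0, strlen(w) - 1) ? "is" : "is not");
    return 0;
}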
 
Capabilities of CFG
The various capabilities of CFG are:
Context free grammar is useful to describe most of the programming languages.
If the grammar is properly designed then an efficient parser can be constructed automatically.
Using the associativity and precedence information, suitable grammars for expressions can be constructed.
Context free grammar is capable of describing nested structures like: balanced parentheses, matching begin-end, corresponding if-then-else's & so on.
 
NFA TO DFA
Example 1:
Convert the given NFA to DFA.
Fig: Given NFA
Solution: For the given transition diagram we will first construct the transition table.

State       0            1
→q0         q0           q1
q1          {q1, q2}     q1
*q2         q2           {q1, q2}

Now we will obtain δ' transition for state q0.
1.      δ'([q0], 0) = [q0]  
2.      δ'([q0], 1) = [q1]  
The δ' transition for state q1 is obtained as:
1.      δ'([q1], 0) = [q1, q2]       (new state generated)  
2.      δ'([q1], 1) = [q1]  
The δ' transition for state q2 is obtained as:
1.      δ'([q2], 0) = [q2]  
2.      δ'([q2], 1) = [q1, q2]  
Now we will obtain δ' transition on [q1, q2].
1.      δ'([q1, q2], 0) = δ(q1, 0) ∪ δ(q2, 0)  
2.                            = {q1, q2} ∪ {q2}  
3.                            = [q1, q2]  
4.      δ'([q1, q2], 1) = δ(q1, 1) ∪ δ(q2, 1)  
5.                            = {q1} ∪ {q1, q2}  
6.                            = {q1, q2}  
7.                            = [q1, q2]  
The state [q1, q2] is a final state as well because it contains the final state q2. The transition table for the constructed DFA will be:

State          0            1
→[q0]          [q0]         [q1]
[q1]           [q1, q2]     [q1]
*[q2]          [q2]         [q1, q2]
*[q1, q2]      [q1, q2]     [q1, q2]

The Transition diagram will be:
Fig: Transition diagram of the resulting DFA
The state [q2] can be eliminated because it is unreachable from the start state.
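A compact C sketch of this subset construction for the NFA of Example 1 (subsets are represented as bitmasks; the transitions are taken from the transition table above):

#include <stdio.h>

#define NSTATES 3   /* q0, q1, q2 */

/* Print a subset of NFA states encoded as a bitmask. */
void print_set(int mask) {
    printf("{");
    for (int q = 0; q < NSTATES; q++)
        if (mask & (1 << q)) printf(" q%d", q);
    printf(" }");
}

int main(void) {
    /* nfa[state][symbol] = bitmask of successor states (from the table above) */
    int nfa[NSTATES][2] = {
        {1 << 0, 1 << 1},               /* q0: on 0 -> {q0},     on 1 -> {q1}     */
        {(1 << 1) | (1 << 2), 1 << 1},  /* q1: on 0 -> {q1, q2}, on 1 -> {q1}     */
        {1 << 2, (1 << 1) | (1 << 2)}   /* q2: on 0 -> {q2},     on 1 -> {q1, q2} */
    };
    int dstates[1 << NSTATES];          /* DFA states discovered so far */
    int ndstates = 0;
    dstates[ndstates++] = 1 << 0;       /* start state [q0] */

    for (int i = 0; i < ndstates; i++) {
        for (int a = 0; a < 2; a++) {
            int next = 0;               /* union of the moves of all members */
            for (int q = 0; q < NSTATES; q++)
                if (dstates[i] & (1 << q)) next |= nfa[q][a];
            int known = 0;
            for (int j = 0; j < ndstates; j++)
                if (dstates[j] == next) known = 1;
            if (!known) dstates[ndstates++] = next;   /* new DFA state found */
            printf("delta'(");
            print_set(dstates[i]);
            printf(", %d) = ", a);
            print_set(next);
            printf("\n");
        }
    }
    return 0;
}

Only DFA states reachable from [q0] are generated by this worklist, which is why [q2] never appears, in line with the remark above.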
Example 2:
Convert the given NFA to DFA.
Fig: Given NFA
Solution: For the given transition diagram we will first construct the transition table.

State       0            1
→q0         {q0, q1}     {q1}
*q1         ϕ            {q0, q1}

Now we will obtain δ' transition for state q0.
1.      δ'([q0], 0) = {q0, q1}  
2.                     = [q0, q1]       (new state generated)  
3.      δ'([q0], 1) = {q1} = [q1]  
The δ' transition for state q1 is obtained as:
1.      δ'([q1], 0) = ϕ  
2.      δ'([q1], 1) = [q0, q1]  
Now we will obtain δ' transition on [q0, q1].
1.      δ'([q0, q1], 0) = δ(q0, 0) ∪ δ(q1, 0)  
2.                            = {q0, q1} ∪ ϕ  
3.                            = {q0, q1}  
4.                            = [q0, q1]  
Similarly,
1.      δ'([q0, q1], 1) = δ(q0, 1) ∪ δ(q1, 1)  
2.                            = {q1} ∪ {q0, q1}  
3.                            = {q0, q1}  
4.                            = [q0, q1]  
As q1 is a final state in the given NFA, every DFA state in which q1 appears becomes a final state. Hence in the DFA the final states are [q1] and [q0, q1]. Therefore the set of final states F = {[q1], [q0, q1]}.
The transition table for the constructed DFA will be:

State           0             1
→[q0]           [q0, q1]      [q1]
*[q1]           ϕ             [q0, q1]
*[q0, q1]       [q0, q1]      [q0, q1]

The Transition diagram will be:
Fig: Transition diagram of the resulting DFA
We can even rename the states of the DFA.
Suppose
1.      A = [q0]  
2.      B = [q1]  
3.      C = [q0, q1]  
With these new names the DFA will be as follows:
Fig: Renamed DFA
 
NFA with ε-moves: If any FA contains an ε transition or move, the finite automaton is called an NFA with ε-moves.
ε-closure: The ε-closure of a state A is the set of states which can be reached from state A with only ε (null) moves, including the state A itself.
Steps for converting NFA with ε to DFA:
Step 1: We will take the ε-closure for the starting state of NFA as a starting state of DFA.
Step 2: Find the states that can be reached from the present state for each input symbol. That means taking the union of the transition values and their ε-closures for each state of the NFA present in the current state of the DFA.
Step 3: If we found a new state, take it as current state and repeat step 2.
Step 4: Repeat Step 2 and Step 3 until there is no new state present in the transition table of DFA.
Step 5: Mark a state of the DFA as a final state if it contains a final state of the NFA.
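A small C sketch of the ε-closure computation in Step 1, using a simple worklist; the ε-edges below (q0 → q1 and q1 → q2) are an assumed example chosen to reproduce ε-closure{q0} = {q0, q1, q2} from Example 1:

#include <stdio.h>

#define NSTATES 5

/* eps[i][j] = 1 if there is an ε-move from qi to qj (assumed edges). */
int eps[NSTATES][NSTATES] = {
    {0, 1, 0, 0, 0},   /* q0 --ε--> q1 */
    {0, 0, 1, 0, 0},   /* q1 --ε--> q2 */
    {0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0},
    {0, 0, 0, 0, 0}
};

void eps_closure(int start, int closure[NSTATES]) {
    int stack[NSTATES], top = 0;
    for (int i = 0; i < NSTATES; i++) closure[i] = 0;
    closure[start] = 1;              /* a state is in its own ε-closure */
    stack[top++] = start;
    while (top > 0) {
        int s = stack[--top];
        for (int t = 0; t < NSTATES; t++)
            if (eps[s][t] && !closure[t]) {
                closure[t] = 1;      /* reachable using ε-moves only */
                stack[top++] = t;
            }
    }
}

int main(void) {
    int closure[NSTATES];
    eps_closure(0, closure);
    printf("eps-closure(q0) = {");
    for (int i = 0; i < NSTATES; i++)
        if (closure[i]) printf(" q%d", i);
    printf(" }\n");                  /* prints { q0 q1 q2 } for this example */
    return 0;
}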
Example 1:
Convert the NFA with ε into its equivalent DFA.
Fig: Given NFA with ε-moves
Solution:
Let us obtain ε-closure of each state.
1.      ε-closure {q0} = {q0, q1, q2}  
2.      ε-closure {q1} = {q1}  
3.      ε-closure {q2} = {q2}  
4.      ε-closure {q3} = {q3}  
5.      ε-closure {q4} = {q4}  
Now, let ε-closure {q0} = {q0, q1, q2} be state A.
Hence
δ'(A, 0) = ε-closure {δ((q0, q1, q2), 0)}
              = ε-closure {δ(q0, 0) ∪ δ(q1, 0) ∪ δ(q2, 0)}
              = ε-closure {q3}
              = {q3}            call it state B
 
δ'(A, 1) = ε-closure {δ((q0, q1, q2), 1)}
              = ε-closure {δ(q0, 1) ∪ δ(q1, 1) ∪ δ(q2, 1)}
              = ε-closure {q3}
              = {q3} = B
The partial DFA will be
Fig: Partial DFA
Now,
δ'(B, 0) = ε-closure {δ(q3, 0) }
              = ϕ
δ'(B, 1) = ε-closure {δ(q3, 1) }
              = ε-closure {q4}
              = {q4}            i.e. state C
For state C:
1.      δ'(C, 0) = ε-closure {δ(q4, 0) }  
2.                    = ϕ  
3.      δ'(C, 1) = ε-closure {δ(q4, 1) }  
4.                    = ϕ  
The DFA will be,
Fig: Resulting DFA
Example 2:
Convert the given NFA into its equivalent DFA.
Fig: Given NFA with ε-moves
Solution: Let us obtain the ε-closure of each state.
1.      ε-closure(q0) = {q0, q1, q2}  
2.      ε-closure(q1) = {q1, q2}  
3.      ε-closure(q2) = {q2}  
Now we will obtain the δ' transitions. Let ε-closure(q0) = {q0, q1, q2} and call it state A.
δ'(A, 0) = ε-closure{δ((q0, q1, q2), 0)}
              = ε-closure{δ(q0, 0) ∪ δ(q1, 0) ∪ δ(q2, 0)}
              = ε-closure{q0}
              = {q0, q1, q2}
 
δ'(A, 1) = ε-closure{δ((q0, q1, q2), 1)}
              = ε-closure{δ(q0, 1) ∪ δ(q1, 1) ∪ δ(q2, 1)}
              = ε-closure{q1}
              = {q1, q2}         call it state B
 
δ'(A, 2) = ε-closure{δ((q0, q1, q2), 2)}
              = ε-closure{δ(q0, 2) ∪ δ(q1, 2) ∪ δ(q2, 2)}
              = ε-closure{q2}
              = {q2}         call it state C
Thus we have obtained
1.      δ'(A, 0) = A  
2.      δ'(A, 1) = B  
3.      δ'(A, 2) = C  
The partial DFA will be:
Fig: Partial DFA
Now we will find the transitions on states B and C for each input.
Hence
δ'(B, 0) = ε-closure{δ((q1, q2), 0)}
              = ε-closure{δ(q1, 0) ∪ δ(q2, 0)}
              = ε-closure{ϕ}
              = ϕ
 
δ'(B, 1) = ε-closure{δ((q1, q2), 1)}
              = ε-closure{δ(q1, 1) ∪ δ(q2, 1)}
              = ε-closure{q1}
              = {q1, q2}         i.e. state B itself
 
δ'(B, 2) = ε-closure{δ((q1, q2), 2)}
              = ε-closure{δ(q1, 2) ∪ δ(q2, 2)}
              = ε-closure{q2}
              = {q2}         i.e. state C itself
Thus we have obtained
1.      δ'(B, 0) = ϕ  
2.      δ'(B, 1) = B  
3.      δ'(B, 2) = C  
The partial transition diagram will be
Fig: Partial transition diagram
Now we will obtain transitions for C:
δ'(C, 0) = ε-closure{δ(q2, 0)}
              = ε-closure{ϕ}
              = ϕ
 
δ'(C, 1) = ε-closure{δ(q2, 1)}
              = ε-closure{ϕ}
              = ϕ
 
δ'(C, 2) = ε-closure{δ(q2, 2)}
              = {q2}
Hence the DFA is
Fig: Resulting DFA
Since the final state q2 of the NFA lies in A = {q0, q1, q2}, A is a final state. B = {q1, q2} also contains q2, so B is a final state. C = {q2} contains q2, so C is a final state as well.
 
 
YACC
YACC stands for Yet Another Compiler Compiler.
YACC provides a tool to produce a parser for a given grammar.
YACC is a program designed to compile a LALR (1) grammar.
It is used to produce the source code of the syntactic analyzer for the language defined by an LALR (1) grammar.
The input of YACC is the rule or grammar and the output is a C program.
These are some points about YACC:
Input: a CFG (file.y)
Output: a parser y.tab.c (yacc)
The output file "file.output" contains the parsing tables.
The file "file.tab.h" contains declarations.
The parser is invoked by calling yyparse().
The parser expects to use a function called yylex() to get tokens.
The basic operational sequence is as follows:
gram.y (the file containing the desired grammar in YACC format)
   → YACC (the YACC program)
   → y.tab.c (the C source program created by YACC)
   → C compiler
   → a.out (an executable file that will parse the grammar given in gram.y)
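As an illustration, here is a minimal YACC grammar file (an assumed example, not from the source) for simple additions and multiplications. A hand-written yylex() is included so the file is self-contained, though in practice the tokens would usually come from a Lex-generated scanner:

%{
#include <stdio.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
%}
%token NUMBER
%left '+'
%left '*'
%%
input : /* empty */
      | input line
      ;
line  : expr '\n'        { printf("= %d\n", $1); }
      ;
expr  : expr '+' expr    { $$ = $1 + $3; }
      | expr '*' expr    { $$ = $1 * $3; }
      | NUMBER           { $$ = $1; }
      ;
%%
/* A tiny hand-written scanner: digits become NUMBER tokens,
   everything else is returned as a single character. */
int yylex(void) {
    int c = getchar();
    while (c == ' ' || c == '\t') c = getchar();
    if (c >= '0' && c <= '9') {
        yylval = c - '0';
        while ((c = getchar()) >= '0' && c <= '9')
            yylval = yylval * 10 + (c - '0');
        ungetc(c, stdin);
        return NUMBER;
    }
    if (c == EOF) return 0;
    return c;
}
int main(void) { return yyparse(); }

Processing such a file with yacc produces y.tab.c, which the C compiler then turns into an executable parser, matching the sequence shown above.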

 
