Lexical Analysis
Outline
 Role of lexical analyzer
 Specification of tokens
 Recognition of tokens
 Lexical analyzer generator
 Finite automata
 Design of lexical analyzer generator
The role of lexical analyzer
The main task of the lexical analyzer is to read
the input characters of the source program,
group them into lexemes, and produce as
output a sequence of tokens, one for each lexeme
in the source program.
If a lexeme is an identifier, that lexeme is entered into
the symbol table.
The lexical analyzer not only identifies the lexemes but
also pre-processes the source text, e.g. removing
comments and white space.
The role of lexical analyzer
[Diagram: the source program feeds the Lexical Analyzer; the Parser
requests tokens via getNextToken and passes its output on to semantic
analysis; both components consult the Symbol table.]
Why separate lexical analysis
and parsing
1. Simplicity of design
2. Improving compiler efficiency
3. Enhancing compiler portability
Lexical analysis
Lexical analyzers are divided into a cascade of two
processes:
 Scanning - It consists of simple processes that do not
require the tokenization of the input such as deletion
of comments, compaction of consecutive white space
characters into one.
 Lexical analysis - This is the more complex portion,
where the scanner produces a sequence of tokens as
output.
Tokens, Patterns and Lexemes
 A token is a pair of a token name and an optional token
value
 A pattern is a description of the form that the lexemes
of a token may take
 Specification of tokens: regular expressions are an
important part of specifying lexeme patterns. While
they cannot express all possible patterns, they are very
effective in specifying the kinds of patterns that we
actually need for tokens.
 A lexeme is a sequence of characters in the source
program that matches the pattern for a token
Example
Token        Informal description                    Sample lexemes
if           Characters i, f                         if
else         Characters e, l, s, e                   else
comparison   < or > or <= or >= or == or !=          <=, !=
id           Letter followed by letters and digits   pi, score, D2
number       Any numeric constant                    3.14159, 0, 6.02e23
literal      Anything but " surrounded by "          "core dumped"

Example statement: printf("total = %d\n", score);
Attributes for tokens
 E = M * C ** 2
 <id, pointer to symbol table entry for E>
 <assign-op>
 <id, pointer to symbol table entry for M>
 <mult-op>
 <id, pointer to symbol table entry for C>
 <exp-op>
 <number, integer value 2>
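The token stream above can be modeled with a small struct pairing a token name with an optional attribute. This is a minimal sketch; the names TokenName and make_number are illustrative, not taken from the slides:

```c
/* Minimal token representation: a token name plus an optional attribute
   (a symbol-table pointer/index for identifiers, a value for numbers).
   All names here are illustrative, not from a real lexer. */
enum TokenName { ID, ASSIGN_OP, MULT_OP, EXP_OP, NUMBER };

struct Token {
    enum TokenName name;
    int attribute;            /* e.g. symbol-table index or literal value */
};

/* build the <number, 2> token from  E = M * C ** 2  */
struct Token make_number(int value) {
    struct Token t = { NUMBER, value };
    return t;
}
```

make_number(2) yields the <number, 2> pair; identifier tokens would instead carry a symbol-table index as their attribute.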
Lexical errors
 Some errors are beyond the power of the lexical analyzer
to recognize:
 fi (a == f(x)) …
 However it may be able to recognize errors like:
 d = 2r
 Such errors are recognized when no pattern for tokens
matches a character sequence
Error recovery
 Panic mode: successive characters are ignored until we
reach a well-formed token
 Delete one character from the remaining input
 Insert a missing character into the remaining input
 Replace a character by another character
 Transpose two adjacent characters
Input buffering
 Sometimes the lexical analyzer needs to look ahead some
symbols to decide which token to return
 In the C language: we need to look ahead after -, = or < to
decide what token to return
 We need to introduce a two-buffer scheme to handle
large look-aheads safely
E = M * C * * 2 eof
Sentinels
switch (*forward++) {
case eof:
    if (forward is at end of first buffer) {
        reload second buffer;
        forward = beginning of second buffer;
    }
    else if (forward is at end of second buffer) {
        reload first buffer;
        forward = beginning of first buffer;
    }
    else /* eof within a buffer marks the end of input */
        terminate lexical analysis;
    break;
cases for the other characters;
}
E = M eof * C * * 2 eof eof
Specification of tokens
 In theory of compilation regular expressions are used
to formalize the specification of tokens
 Regular expressions are means for specifying regular
languages
 Example:
 letter_ (letter_ | digit)*
 Each regular expression is a pattern specifying the
form of strings
Regular expressions
 Ɛ is a regular expression, L(Ɛ) = {Ɛ}
 If a is a symbol in ∑, then a is a regular expression, L(a)
= {a}
 (r) | (s) is a regular expression denoting the language
L(r) ∪ L(s)
 (r)(s) is a regular expression denoting the language
L(r)L(s)
 (r)* is a regular expression denoting (L(r))*
 (r) is a regular expression denoting L(r)
Regular definitions
d1 -> r1
d2 -> r2
…
dn -> rn
 Example:
letter_ -> A | B | … | Z | a | b | … | z | _
digit -> 0 | 1 | … | 9
id -> letter_ (letter_ | digit)*
Extensions
 One or more instances: (r)+
 Zero or one instance: r?
 Character classes: [abc]
 Example:
 letter_ -> [A-Za-z_]
 digit -> [0-9]
 id -> letter_ (letter_ | digit)*
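As a sanity check of the id pattern, a hand-written matcher for letter_ (letter_ | digit)* might look like this (a sketch only, not part of any generated lexer):

```c
#include <ctype.h>

/* Does s match  letter_ (letter_ | digit)*  ?
   letter_ = [A-Za-z_], digit = [0-9].  Sketch only. */
int is_id(const char *s) {
    if (!(isalpha((unsigned char)*s) || *s == '_'))
        return 0;                     /* must start with a letter or _ */
    for (s++; *s != '\0'; s++)
        if (!(isalnum((unsigned char)*s) || *s == '_'))
            return 0;                 /* rest: letters, digits, _ only */
    return 1;
}
```

is_id("D2") and is_id("_pi") match; is_id("2r") does not, which is exactly the d = 2r error case from the earlier slide.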
Recognition of tokens
 Starting point is the language grammar to understand
the tokens:
stmt -> if expr then stmt
| if expr then stmt else stmt
| Ɛ
expr -> term relop term
| term
term -> id
| number
Recognition of tokens (cont.)
 The next step is to formalize the patterns:
digit -> [0-9]
digits -> digit+
number -> digits (. digits)? (E [+-]? digits)?
letter -> [A-Za-z_]
id -> letter (letter | digit)*
if -> if
then -> then
else -> else
relop -> < | > | <= | >= | = | <>
 We also need to handle whitespaces:
ws -> (blank | tab | newline)+
Transition diagrams
 Transition diagram for relop
Transition diagrams (cont.)
 Transition diagram for reserved words and identifiers
Transition diagrams (cont.)
 Transition diagram for unsigned numbers
Transition diagrams (cont.)
 Transition diagram for whitespace
Architecture of a transition-
diagram-based lexical analyzer
TOKEN getRelop()
{
    TOKEN retToken = new(RELOP);
    while (1) { /* repeat character processing until a
                   return or failure occurs */
        switch (state) {
        case 0: c = nextchar();
            if (c == '<') state = 1;
            else if (c == '=') state = 5;
            else if (c == '>') state = 6;
            else fail(); /* lexeme is not a relop */
            break;
        case 1: …
        …
        case 8: retract();
            retToken.attribute = GT;
            return (retToken);
        }
    }
}
Lexical Analyzer Generator - Lex
Lex source program lex.l  -->  [Lexical compiler (Lex)]  -->  lex.yy.c
lex.yy.c                  -->  [C compiler]              -->  a.out
Input stream              -->  [a.out]                   -->  Sequence of tokens
Structure of Lex programs
declarations
%%
translation rules
%%
auxiliary functions
Pattern {Action}
Example
%{
/* definitions of manifest constants
LT, LE, EQ, NE, GT, GE,
IF, THEN, ELSE, ID, NUMBER, RELOP */
%}
/* regular definitions */
delim    [ \t\n]
ws       {delim}+
letter   [A-Za-z]
digit    [0-9]
id       {letter}({letter}|{digit})*
number   {digit}+(\.{digit}+)?(E[+-]?{digit}+)?
%%
{ws} {/* no action and no return */}
if {return(IF);}
then {return(THEN);}
else {return(ELSE);}
{id} {yylval = (int) installID(); return(ID); }
{number} {yylval = (int) installNum(); return(NUMBER);}
…
int installID() { /* function to install the
    lexeme, whose first character is
    pointed to by yytext, and whose
    length is yyleng, into the symbol
    table, and return a pointer thereto */
}

int installNum() { /* similar to
    installID, but puts numerical
    constants into a separate table */
}
Finite Automata
 Regular expressions = specification
 Finite automata = implementation
 A finite automaton consists of
 An input alphabet Σ
 A set of states S
 A start state n
 A set of accepting states F ⊆ S
 A set of transitions state --input--> state
Finite Automata
 Transition
s1 a s2
 Is read
In state s1 on input “a” go to state s2
 If end of input
 If in accepting state => accept, othewise => reject
 If no transition possible => reject
Finite Automata State Graphs
[Diagram: a state is a circle; the start state has an incoming arrow;
an accepting state is a double circle; a transition is an arrow labeled
with an input symbol such as a.]
A Simple Example
 A finite automaton that accepts only “1”
 A finite automaton accepts a string if we can follow
transitions labeled with the characters in the string
from the start to some accepting state
[Diagram: start state --1--> accepting state.]
Another Simple Example
 A finite automaton accepting any number of 1’s
followed by a single 0
 Alphabet: {0,1}
 Check that “1110” is accepted but “110…” is not
[Diagram: start state with a self-loop on 1 and a transition on 0 to an
accepting state.]
And Another Example
 Alphabet {0,1}
 What language does this recognize?
[Diagram: a three-state automaton with transitions labeled 0 and 1.]
And Another Example
 Alphabet still { 0, 1 }
 The operation of the automaton is not completely
defined by the input
 On input “11” the automaton could be in either state
[Diagram: a state with two different transitions on input 1.]
Epsilon Moves
 Another kind of transition: Ɛ-moves
• The machine can move from state A to state B
without reading input
[Diagram: A --Ɛ--> B]
Deterministic and
Nondeterministic Automata
 Deterministic Finite Automata (DFA)
 One transition per input per state
 No Ɛ-moves
 Nondeterministic Finite Automata (NFA)
 Can have multiple transitions for one input in a given
state
 Can have Ɛ-moves
 Finite automata have finite memory
 Need only to encode the current state
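The point that a DFA needs only its current state can be made concrete by encoding the earlier "any number of 1's followed by a single 0" automaton directly. A sketch with an assumed state numbering (0 = start, 1 = accepting, 2 = dead):

```c
/* DFA for the language 1*0 over {0,1}; only the current state is stored.
   States (assumed numbering): 0 = start (loops on 1), 1 = accepting
   (just read the 0), 2 = dead (any input after the 0). */
int accepts_ones_then_zero(const char *input) {
    int state = 0;
    for (; *input != '\0'; input++) {
        switch (state) {
        case 0: state = (*input == '1') ? 0 : 1; break;
        case 1: state = 2; break;   /* extra input past the 0 rejects */
        default: break;             /* dead state absorbs everything */
        }
    }
    return state == 1;              /* accept iff we ended right after the 0 */
}
```

As on the earlier slide, "1110" is accepted while "1100" (more input after the 0) is not.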
Execution of Finite Automata
 A DFA can take only one path through the state graph
 Completely determined by input
 NFAs can choose
 Whether to make Ɛ-moves
 Which of multiple transitions for a single input to take
Acceptance of NFAs
 An NFA can get into multiple states
• Input: 1 0 1
[Diagram: an NFA over {0,1} shown occupying several states at once
while reading the input.]
• Rule: an NFA accepts if it can get into a final state
NFA vs. DFA (1)
 NFAs and DFAs recognize the same set of languages
(regular languages)
 DFAs are easier to implement
 There are no choices to consider
NFA vs. DFA (2)
 For a given language the NFA can be simpler than the
DFA
[Diagrams: an NFA and a DFA for the same language over {0,1}; the DFA
needs more states.]
• The DFA can be exponentially larger than the NFA
Regular Expressions to Finite
Automata
 High-level sketch:
Lexical Specification → Regular expressions → NFA → DFA →
Table-driven Implementation of DFA
Regular Expressions to NFA (1)
 For each kind of rexp, define an NFA
 Notation: the NFA for rexp A is drawn as a box labeled A
with one start and one accepting state
• For Ɛ: a single Ɛ-transition from the start state to the
accepting state
• For input a: a single transition labeled a from the start
state to the accepting state
Regular Expressions to NFA (2)
 For AB: connect the accepting state of A's NFA to the
start state of B's NFA with an Ɛ-transition
• For A | B: a new start state with Ɛ-transitions into the
NFAs for A and B, and Ɛ-transitions from their accepting
states into a new accepting state
Regular Expressions to NFA (3)
 For A*: a new start state with Ɛ-transitions to A's start
state and to a new accepting state; A's accepting state has
Ɛ-transitions back to A's start state and on to the new
accepting state
Example of RegExp -> NFA
conversion
 Consider the regular expression
(1 | 0)*1
 The NFA is
[Diagram: NFA with states A through J; A is the start state and J is
accepting; C --1--> E, D --0--> F, I --1--> J; the remaining arrows are
Ɛ-moves.]
Next
Lexical Specification → Regular expressions → NFA → DFA →
Table-driven Implementation of DFA
NFA to DFA. The Trick
 Simulate the NFA
 Each state of the resulting DFA
= a non-empty subset of the states of the NFA
 Start state
= the set of NFA states reachable through Ɛ-moves from the
NFA start state
 Add a transition S --a--> S' to the DFA iff
 S' is the set of NFA states reachable from the states in S
after seeing the input a
 considering Ɛ-moves as well
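The subset construction can also be run on the fly while reading input. The sketch below simulates a small hypothetical NFA (not the slide's A–J automaton) with bit sets: each bit is one NFA state, the Ɛ-closure is computed to a fixpoint, and each DFA state is a set of NFA states.

```c
/* On-the-fly subset construction (NFA simulation) with bit sets.
   The NFA here is a small hypothetical example: over {0,1} it accepts
   strings ending in 1.  NFA state i is represented by bit (1 << i). */
enum { NSTATES = 3 };

static const unsigned START_SET  = 1u << 0;  /* start state 0 */
static const unsigned ACCEPT_SET = 1u << 2;  /* accepting state 2 */

/* delta[i][c]: set of successors of state i on input symbol c ('0'/'1') */
static const unsigned delta[NSTATES][2] = {
    { 1u << 0, (1u << 0) | (1u << 1) },  /* 0 loops; on 1 also to 1 */
    { 0u, 0u },                          /* 1 has only an eps-move out */
    { 0u, 0u },                          /* 2 is accepting, no moves */
};
static const unsigned eps[NSTATES] = { 0u, 1u << 2, 0u };  /* 1 -eps-> 2 */

static unsigned eps_closure(unsigned set) {
    unsigned prev;
    do {                                  /* add eps-successors to fixpoint */
        prev = set;
        for (int i = 0; i < NSTATES; i++)
            if (set & (1u << i)) set |= eps[i];
    } while (set != prev);
    return set;
}

int nfa_accepts(const char *input) {      /* input must be over {0,1} */
    unsigned S = eps_closure(START_SET);  /* DFA start = closure of NFA start */
    for (; *input != '\0'; input++) {
        unsigned next = 0u;
        for (int i = 0; i < NSTATES; i++)
            if (S & (1u << i)) next |= delta[i][*input - '0'];
        S = eps_closure(next);            /* each DFA state = NFA-state set */
    }
    return (S & ACCEPT_SET) != 0;         /* accept iff some final state */
}
```

Running nfa_accepts("101") visits exactly the state sets a DFA built by the subset construction would visit.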
NFA -> DFA Example
[Diagram: the NFA for (1 | 0)*1 with states A–J and the DFA produced by
the subset construction. The DFA states are the NFA-state sets ABCDHI
(start), FGABCDHI, and EJGABCDHI; from every DFA state, input 0 leads to
FGABCDHI and input 1 leads to EJGABCDHI.]
NFA to DFA. Remark
 An NFA may be in many states at any time
 How many different states?
 If there are N states, the NFA must be in some subset
of those N states
 How many non-empty subsets are there?
 2^N - 1 = finitely many, but exponentially many (e.g. for
N = 10 there are up to 1023 subsets)
Implementation
 A DFA can be implemented by a 2D table T
 One dimension is "states"
 The other dimension is "input symbols"
 For every transition Si --a--> Sk define T[i,a] = k
 DFA "execution"
 If in state Si on input a, read T[i,a] = k and move to
state Sk
 Very efficient
Table Implementation of a DFA
[Diagram: DFA with states S, T, U; S is the start state; from every
state, input 0 goes to T and input 1 goes to U.]

      0   1
  S   T   U
  T   T   U
  U   T   U
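The transition table above drives execution with one array lookup per input character. A sketch in C, assuming (since the slide's figure is not fully recoverable) that U is the accepting state:

```c
/* Table-driven execution of the S/T/U DFA above: from every state,
   input 0 goes to T and input 1 goes to U.  Treating U as the
   accepting state is an assumption, not stated on the slide. */
enum { S, T, U };
static const int trans[3][2] = {
    /*        0  1 */
    /* S */ { T, U },
    /* T */ { T, U },
    /* U */ { T, U },
};

int table_dfa_accepts(const char *input) {   /* input over {0,1} only */
    int state = S;
    for (; *input != '\0'; input++)
        state = trans[state][*input - '0'];  /* one lookup per character */
    return state == U;
}
```

With U accepting, this DFA accepts exactly the strings that end in 1.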
Implementation (Cont.)
 NFA -> DFA conversion is at the heart of tools such as
flex or jflex
 But, DFAs can be huge
 In practice, flex-like tools trade off speed for space in
the choice of NFA and DFA representations

Lecture 1 - Lexical Analysis.ppt
