
Jayawant Shikshan Prasarak Mandal

T.E.COMP
System Programming & Operating System Lab

Laboratory Manual

Computer Engineering
Laboratory Objectives:
1. Understand and develop programs in LEX & YACC for illustrating the lexical and
syntax analysis phases of a compiler.
2. To implement different scheduling algorithms.
3. To implement language translators (system software) like an assembler and a macro
processor.

Experiment Learning Outcome: -

Students will be able to:


1. Explain different concepts in a lexical analyzer and write a program for a lexical analyzer.
2. Write a program to implement the syntax analyzer phase of a compiler.
GROUP A
Assignment no.:-A1

TITLE: IMPLEMENTATION OF PASS – I OF TWO PASS ASSEMBLER

OBJECTIVES:
 To study the basic translation process of assembly language to machine language.
 To study the different data structures required to implement Pass I of an assembler.

PROBLEM STATEMENT:
Design suitable data structures and implement Pass I of a two-pass assembler for a
pseudo-machine in Java using object-oriented features. The implementation should consist
of a few instructions from each category and a few assembler directives.

SOFTWARE & HARDWARE REQUIREMENTS:


1. 64-bit Open source Linux or its derivative

2. Eclipse

3. JDK

THEORY:
A language translator bridges the execution gap to the machine language of a computer system. An
assembler is one such language translator. Language processing activity consists of two phases,
the analysis phase and the synthesis phase. Analysis of a source program involves three sets of rules:
lexical rules, syntax rules and semantic rules. Lexical rules govern the formation of valid lexical
units, while syntax rules govern the formation of valid statements in the source language.

Semantic rules associate meaning with valid statements of the language. The synthesis phase
is concerned with the construction of target language statements, which have the same
meaning as the source language statements. This consists of memory allocation and code generation.

WHAT IS AN ASSEMBLER?

An assembler is a translator which takes its input in the form of an assembly language program
and produces machine language code as its output.

Assembly Language statements: -

There are three types of statements,


1] Imperative - An imperative statement indicates an action to be performed during the execution
of the assembled program. Each imperative statement usually translates into one machine
instruction.

2] Declarative - A declarative statement, e.g. DS (Declare Storage), reserves areas of memory and associates names

with them. DC (Declare Constant) constructs memory words containing constants.

3] Assembler directives - Assembler directives instruct the assembler to perform certain actions

during the assembly of a program. For example, the START <constant> directive indicates that the first word of

the target program generated by the assembler should be placed at the memory word with address

<constant>.

DESIGN OF TWO PASS ASSEMBLER

Pass I: -

1. Separate the symbol, mnemonic opcode and operand fields.

2. Build the symbol table and the literal table.

3. Perform LC processing.

4. Construct the intermediate representation code for every assembly language statement.

Pass II: -

Synthesize the target code by processing the intermediate code generated during pass1

DATA STRUCTURES REQUIRED BY PASS I:

 OPTAB – a table of mnemonic op codes


 Contains mnemonic op code, class and mnemonic info
 Class field indicates whether the op code corresponds to Imperative Statement (IS),
Declaration Statement (DL) or Assembler Directive (AD)

For IS, the mnemonic info field contains the pair (machine opcode, instruction length)

 SYMTAB – Symbol Table; each entry contains a symbol's address and length


 LOCCTR - Location Counter
 LITTAB – a table of literals used in the program; each entry contains a literal and its address
 Literals are allocated addresses starting with the current value in LC, and LC is
incremented appropriately.

List of hypothetical instructions:

Advanced Assembler Directives :

• LTORG

• ORIGIN

• EQU
LTORG and Literal Pool

The LTORG directive, which stands for ‘origin for literals’, allows a programmer to specify
where literals should be placed. The assembler uses the following scheme for placement of
literals: when the use of a literal is seen in a statement, the assembler enters it into a literal
pool unless a matching literal already exists in the pool. At every LTORG statement, as also at
the END statement, the assembler allocates memory to the literals of the literal pool and clears
the literal pool. This way, a literal pool contains all literals used in the program since the
start of the program or since the previous LTORG statement. If a program does not use an
LTORG statement, the assembler enters all literals used in the program into a single pool and
allocates memory to them when it encounters the END statement.
Advantages of Literal Pool

• Automatic organization of the literal data into sections that are correctly aligned and

arranged so that minimal space is wasted in the literal pool.

• Assembling of duplicate data into the same area.

ORIGIN Directive

The syntax of this directive is

ORIGIN <address specification>

Where <address specification> is an <operand specification> or <constant>. This directive
instructs the assembler to set the location counter to the specified address.

EQU Directive

The EQU directive has the syntax

<symbol> EQU <address specification>

Where <address specification> is an <operand specification> or <constant>

The EQU statement simply associates the name <symbol> with the address specified by

<address specification>.

However the address in the location counter is not affected.

Algorithm (Pass I of Two – Pass Assembler)

1. LC := 0; (This is a default value)

littab_ptr := 1;

pooltab_ptr := 1;

POOLTABLE[1].first := 1;
2. While the next statement is not an END statement

(a) If a symbol is present in label field then

this_label := symbol in label field;

Make an entry (this_label, <LC>, _) in SYMTAB.

(b) If an LTORG statement then

(i) If POOLTAB[pooltab_ptr].#literals > 0 then allocate memory to the literals

LITTAB[POOLTAB[pooltab_ptr].first] … LITTAB[littab_ptr − 1] and update LC;

(ii) pooltab_ptr := pooltab_ptr + 1;

(iii) POOLTAB[pooltab_ptr].first := littab_ptr;

POOLTAB[pooltab_ptr].#literals := 0;

(c) If a START or ORIGIN statement then

LC : = value specified in operand field;

(d) If an EQU statement then

(i) this_addr := value of <address specification>;

(ii) Correct the SYMTAB entry for this_label to (this_label, this_addr, 1)

(e) If a declaration statement then

(i) Invoke the routine whose id is mentioned in the mnemonic info field.

(ii) If a symbol is present in the label field, correct the SYMTAB entry for

this_label to (this_label, <LC>, size)

(iii) LC := LC + size;

(iv) Generate intermediate code for the declaration statement.

(f) If an imperative statement then

(i) code : = machine code from mnemonic info field of OPTAB;

(ii) LC := LC + instruction length from length field of OPTAB;

(iii) If Operand is literal then

this_literal : = literal in operand field;

If this_literal does not match any literal in LITTAB then

LITTAB[littab_ptr].value := this_literal;

POOLTAB[pooltab_ptr].#literals := POOLTAB[pooltab_ptr].#literals + 1;

littab_ptr := littab_ptr + 1;

Else (i.e. the operand is a symbol)

this_entry := SYMTAB entry number of operand;


Generate intermediate code for the imperative statement.

3. Processing of END statement

(a) Perform actions (i) – (iii) of Step 2(b)

(b) Generate intermediate code for the END statement.
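The Pass I loop above can be condensed into a small Java sketch. It handles only a label field, START, DS, and single-word imperative statements with an optional literal operand; the statement format, the opcode set, and all names are simplified assumptions, not the manual's full algorithm.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Condensed sketch of the Pass I loop: builds SYMTAB/LITTAB and advances LC.
public class PassOne {
    // Instruction lengths for a few assumed imperative opcodes (all 1 word).
    static final Map<String, Integer> LENGTH = Map.of("ADD", 1, "SUB", 1, "MOVER", 1);

    final Map<String, Integer> symtab = new LinkedHashMap<>(); // symbol -> address
    final List<String> littab = new ArrayList<>();             // literal pool
    int lc = 0;                                                // location counter

    void process(String stmt) {
        String[] f = stmt.trim().split("\\s+");
        int i = 0;
        boolean startsWithOpcode = LENGTH.containsKey(f[0])
                || f[0].equals("START") || f[0].equals("DS");
        if (!startsWithOpcode) symtab.put(f[i++], lc);   // step 2(a): label entry
        String op = f[i];
        if (op.equals("START")) {
            lc = Integer.parseInt(f[i + 1]);             // step 2(c): set LC
        } else if (op.equals("DS")) {
            lc += Integer.parseInt(f[i + 1]);            // step 2(e): reserve words
        } else {                                         // step 2(f): imperative
            String operand = f[f.length - 1];
            if (operand.startsWith("=") && !littab.contains(operand))
                littab.add(operand);                     // step 2(f)(iii): new literal
            lc += LENGTH.get(op);                        // step 2(f)(ii): advance LC
        }
    }
}
```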

FLOWCHART FOR PASS 1


EXAMPLE:

CONCLUSION: Thus we have designed suitable data structures and implemented Pass I of a two-pass assembler.


Assignment no.:-A2
TITLE: Implement Pass II OF TWO PASS ASSEMBLER

PROBLEM STATEMENT: Implement Pass-II of a two-pass assembler for a pseudo-machine in

Java using object-oriented features. The output of Assignment A1 (intermediate file and symbol

table) should be the input for this assignment.

OBJECTIVES:

 To study the basic translation process of intermediate code to machine language.

 To study and implement Pass II of a two-pass assembler.

SOFTWARE & HARDWARE REQUIREMENTS:

1. 64-bit Open source Linux or its derivative

2. Eclipse

3. JDK

THEORY:

Data Structure used by Pass II:

1. OPTAB: A table of mnemonic opcodes and related information.

2. SYMTAB: The symbol table

3. LITTAB: A table of literals used in the program

4. Intermediate code generated by Pass I

5. Output files containing Target code / error listing.

Pass – II of Two Pass Assembler Algorithm

 LC : Location Counter

 littab_ptr : Points to an entry in LITTAB

 Pooltab_ptr : Points to an entry in POOLTAB

 machine_code_buffer : Area for constructing the code for one statement

 code_area : Area for assembling the target program

 code_area_address : Contains address of code_area

1. code_area_address := address of code_area;


pooltab_ptr := 1;

LC := 0;

2. While the next statement is not an END statement

(a) Clear machine_code_buffer;

(b) If an LTORG statement

(i) If POOLTAB[pooltab_ptr].#literals > 0 then

process the literals in the entries LITTAB[POOLTAB[pooltab_ptr].first] …

LITTAB[POOLTAB[pooltab_ptr + 1].first − 1] in the same way as

constants in a DC statement. This results in

assembling the literals in machine_code_buffer.

(ii) size := size of memory area required for literals;

(iii) pooltab_ptr := pooltab_ptr + 1;

(c) If a START or ORIGIN statement

(i) LC := value specified in operand field;

ii) size := 0;

(d) If a declaration statement

(i) If a DC statement then

Assemble the constant in machine_code_buffer.

(ii) size := size of the memory area required by the declaration

statement;

(e) If an Imperative statement

(i) Get address of the operand from its entry in SYMTAB or LITTAB,

(ii) Assemble the instruction in machine_code_buffer.

(iii) size := size of the instruction;

(f) If size ≠ 0 then

(i) Move contents of machine_code_buffer to memory word with the address

code_area_address + <LC>;

(ii) LC := LC + size;

3. Processing of END statement


(a) Perform actions (i) – (iii) of Step 2(b)

(b) Perform actions (i) – (ii) of Step 2(f)

(c) Write code_area into the output file.
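A fragment of the Pass II loop can be sketched in Java. It handles only imperative statements whose intermediate-code form is "(IS,opcode) (S,symbol)", resolves the operand through SYMTAB, and appends one word to the code area; the intermediate-code format, the one-word instruction size, and all names are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of Pass II synthesis for imperative intermediate-code lines.
public class PassTwo {
    final Map<String, Integer> symtab;               // from Pass I
    final List<String> codeArea = new ArrayList<>(); // assembled target program
    int lc;

    PassTwo(Map<String, Integer> symtab, int startLc) {
        this.symtab = new LinkedHashMap<>(symtab);
        this.lc = startLc;
    }

    void process(String icLine) {
        String[] f = icLine.split("\\s+");           // e.g. ["(IS,01)", "(S,AGAIN)"]
        String opcode = f[0].replaceAll("[()]", "").split(",")[1];
        if (f.length > 1) {
            // Step 2(e)(i): get the operand address from its SYMTAB entry.
            String sym = f[1].replaceAll("[()]", "").split(",")[1];
            int addr = symtab.get(sym);
            // Step 2(e)(ii): assemble the instruction (here, straight into code_area).
            codeArea.add(String.format("%03d) %s %03d", lc, opcode, addr));
        } else {
            codeArea.add(String.format("%03d) %s", lc, opcode));
        }
        lc += 1;                                     // step 2(f)(ii): size = 1 word
    }
}
```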

Flowchart

Conclusion:

Assignment
no.:-A3
TITLE: Design suitable data structures and implement Pass I of a two-pass macro processor using
object-oriented features in Java.

OBJECTIVES:
To understand the macro facility, its features and its use in assembly language programming.
To study how a macro definition is processed and how a macro call results in the expansion of code.

PROBLEM STATEMENT:
Implement Pass-I of two-pass macroprocessor.

SOFTWARE REQUIRED: 64-bit Open Source Linux or its derivative, JDK

INPUT: Input data as sample program.

OUTPUT: It will generate the data structures by scanning all the macro definitions.

Macro:
A macro allows a sequence of source language code to be defined once and then referred to by name
each time it is needed. Each time this macro name occurs in a program, the sequence of code is
substituted at that point.

Macro Definition
A macro definition typically appears at the start of a program. Each macro definition is enclosed between
a macro header statement and a macro end statement, having the mnemonic opcodes MACRO and MEND.
Statements included in the macro definition can use formal parameters; an ‘&’ is prefixed to the name
of a formal parameter to differentiate a formal parameter’s name from a symbolic name.
A macro prototype statement declares the name of the macro and the names and kinds of its formal
parameters.

<macro name> [<formal parameter specification> [,…]]

Macro Expansion
The macro processor substitutes the definition for all occurrences of abbreviations (macro call) in the
program. Replacement of macro call by corresponding sequence of instructions is called as macro
expansion. Macro expansion can be performed by using two kinds of language processors.
• Macro Assembler
• Macro Preprocessor
A macro assembler performs expansion of each macro call in a program into sequence of assembly
statements and also assembles the resulting program.

A macro preprocessor merely performs macro expansion of macro calls in a program.

It produces assembly program in which a macro call has been replaced by statements that resulted
from its expansion but statements that were not macro calls have been retained in their original form.
This program can be assembled using assembler.
Design of a Macro preprocessor
The macro preprocessor accepts an assembly program containing definitions and calls and translates it
into an assembly program which does not contain any macro definition or call. The program form
output by the macro preprocessor would be handed over to an assembler to obtain the target program.
This way, a macro facility is made available independent of an assembler, which makes it both simpler
and less expensive to develop.

Design Procedure:-
Tasks involved in macro expansion

• Identify macro calls in the program.


• Determine the values of formal parameters.
• Maintain the values of expansion time variables declared in a macro.
• Organize expansion time control flow.
• Finding the statement that defines a specific sequencing symbol.
• Perform expansion of a model statement.

Design of Two – Pass Macro Processor


The Two – Pass Macro Processor will have two systematic scans or passes, over the input text, searching
first for macro definitions and then for macro calls.

Just as an assembler cannot process a reference to a symbol before its definition, a macro processor
cannot expand a macro call before having found and saved the macro definition. Thus we need two
passes over the input text, one to handle definitions and one to handle calls.

Design of Two – Pass Macro Processor Pass I:


• Generate Macro Name Table (MNT)
• Generate Macro Definition Table (MDT)
• Generate IC i.e. a copy of source code without macro definitions.

Pass II:
• Replace every occurrence of macro call with macro definition.

Macro Processor Pass – I Data Structures


1. The input file (Source Program)
2. The output file (To be used by Pass – II)
3. The Macro Definition Table (MDT), used to store the body of the macro definitions.
4. The Macro Name Table (MNT), used to store the names of defined macros
5. The Macro Definition Table Counter (MDTC), used to indicate the next available entry in the MDT
6. The Macro Name Table Counter (MNTC), used to indicate the next available entry in the
MNT

7. The Argument List Array (ALA), used to substitute index markers for dummy arguments before
storing a macro definition.

Flowchart:

Data Structures Used in Pass I:

1. Macro Definition Table (MDT): the entire macro definition (its body) is stored in the MDT.

2. Macro Name Table (MNT): used to store the names of macros; each entry also stores the

index of the macro's definition in the MDT.

3. Argument List Array (ALA): used to substitute index markers for formal
parameters before storing a macro definition in the MDT.

4. Macro Name Table Pointer (MNTP): gives the next available entry in the MNT.
5. Macro Definition Table Pointer (MDTP): gives the next available entry in the MDT.
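The MNT/MDT/ALA scheme above can be sketched in Java. The MNT maps a macro name to the MDT index of its definition, and the ALA replaces each formal parameter with an index marker (#1, #2, …) before the body is stored; the marker format, table representation, and method names are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of macro Pass I: store a definition in the MDT and record it in the MNT.
public class MacroPassOne {
    final Map<String, Integer> mnt = new LinkedHashMap<>(); // name -> MDT index
    final List<String> mdt = new ArrayList<>();             // stored definition lines

    void define(String prototype, List<String> body) {
        String[] parts = prototype.split("\\s+", 2);        // name [formal params]
        mnt.put(parts[0], mdt.size());                      // MNTC/MDTC are implicit
        List<String> ala = parts.length > 1
            ? Arrays.asList(parts[1].split(",")) : List.of();
        mdt.add(prototype);                                 // prototype line first
        for (String line : body) {
            for (int k = 0; k < ala.size(); k++)            // ALA: formal -> #index
                line = line.replace(ala.get(k).trim(), "#" + (k + 1));
            mdt.add(line);
        }
        mdt.add("MEND");                                    // definition terminator
    }
}
```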

Example:

MACRO FIRST

ADD AREG, BREG

MEND

MACRO

SECOND &ARG1,&ARG2

ADD AREG,&ARG1

SUB BREG,&ARG2

MEND

MACRO THIRD &ARG1=DATA3,&ARG2=DATA1

ADD AREG,&ARG1

ADD BREG,&ARG2

SUB BREG,&ARG1

MEND

START

FIRST

:
SECOND DATA1,DATA2

THIRD DATA1,DATA3

DATA1 DS 3

DATA2 DS 2

DATA3 DC ‘3’
END

PASS I Output

CONCLUSION:

Assignment no.:-A4
TITLE: Implement Pass-II of two-pass macroprocessor

OBJECTIVES: To understand the macro facility, its features and its use in assembly language programming.
To study how a macro definition is processed and how a macro call results in the expansion of code.

PROBLEM STATEMENT: Implement Pass-II of a two-pass macro processor.

SOFTWARE REQUIRED: 64-bit Open Source Linux or its derivative, JDK

INPUT: Input data as sample program.

OUTPUT: It will generate the expanded code by scanning the program.

THEORY:

Design Procedure:-
Tasks involved in macro expansion

• Identify macro calls in the program.


• Determine the values of formal parameters.
• Maintain the values of expansion time variables declared in a macro.
• Organize expansion time control flow.
• Finding the statement that defines a specific sequencing symbol.
• Perform expansion of a model statement.

Design of Two – Pass Macro Processor


The Two – Pass Macro Processor will have two systematic scans or passes, over the input text, searching
first for macro definitions and then for macro calls.

Just as an assembler cannot process a reference to a symbol before its definition, a macro processor
cannot expand a macro call before having found and saved the macro definition. Thus we need two
passes over the input text, one to handle definitions and one to handle calls.

Design of Two – Pass Macro Processor Pass I:


• Generate Macro Name Table (MNT)
• Generate Macro Definition Table (MDT)
• Generate IC i.e. a copy of source code without macro definitions.

Pass II:
• Replace every occurrence of macro call with macro definition.

Macro Pass – II (Macro Calls & Expansion)


The algorithm for Pass II tests the mnemonic of each line to see if it is a name in the MNT. When a call is
found, the call processor sets a pointer, the Macro Definition Table Pointer (MDTP), to the
corresponding macro definition stored in the MDT.
The initial value of the MDTP is obtained from the “MDT index” field of the MNT entry.

The macro expander prepares the Argument List Array (ALA) consisting of a table of dummy argument
indices and corresponding arguments to the call.

As each successive line is read, the values from the argument list are substituted for dummy argument
indices in the macro definition. Reading of the MEND line in the MDT terminates expansion of the
macro, and scanning continues from the input file. When the END is encountered, the expanded source
program is transferred to the assembler for further processing.

Macro Processor Pass – II Data Structures

1. The input file (Output of Pass - I)


2. The output file (To store Target Program)
3. The Macro Definition Table (MDT), created by Pass – I
4. The Macro Name Table (MNT), created by Pass – I; the MNTC gives the number of entries in the MNT.
5. The Macro Definition Table Pointer (MDTP), used to indicate the next line of text to be used during
macro expansion
6. The Argument List Array (ALA), used to substitute macro call arguments for the index markers in the
stored macro definition
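The expansion step described above can be sketched in Java against the Pass I table layout: set the MDTP to the macro's definition, then copy MDT lines until MEND, substituting actual arguments for the stored index markers. The #index marker convention and all names are assumptions carried over from the Pass I sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of macro Pass II expansion: substitute ALA arguments into MDT lines.
public class MacroPassTwo {
    static List<String> expand(List<String> mdt, int mdtp, String[] actuals) {
        List<String> out = new ArrayList<>();
        // mdtp points at the prototype line; the body starts on the next line.
        for (int i = mdtp + 1; !mdt.get(i).equals("MEND"); i++) {
            String line = mdt.get(i);
            for (int k = 0; k < actuals.length; k++)   // #k+1 -> actual argument
                line = line.replace("#" + (k + 1), actuals[k]);
            out.add(line);                             // expanded statement
        }
        return out;                                    // MEND terminates expansion
    }
}
```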
FLOWCHART:

Example:

START

ADD AREG,BREG

ADD AREG,DATA2

SUB BREG,DATA1

ADD AREG,DATA3

ADD BREG,DATA1

SUB AREG,DATA3

DATA1 DS 3
DATA2 DS 2

DATA3 DC ‘3’

END

PASS I OUTPUT:

ALA FOR SECOND MACRO:

Sr.No Formal Args Actual Args


1 &ARG1
2 &ARG2

ALA FOR THIRD MACRO:

Sr.No Formal Args Actual Args


1 &ARG1
2 &ARG2

PASS II OUTPUT:

ALA FOR SECOND MACRO:

Sr.No Formal Args Actual Args


1 &ARG1 DATA1
2 &ARG2 DATA2

ALA FOR THIRD MACRO:


Sr.No Formal Args Actual Args
1 &ARG1 DATA1
2 &ARG2 DATA3

Conclusion:

GROUP B
Assignment no.:-B2 and B3
TITLE: Implement Lexical analyzer for sample language using LEX

OBJECTIVES:
Understand the importance and usage of LEX automated tool.

PROBLEM STATEMENT:
Implement a lexical analyzer for a sample language using LEX.

SOFTWARE REQUIRED: Linux Operating Systems, GCC

INPUT: Input data as a sample language program.

OUTPUT: It will generate tokens for sample language.


THEORY:

As the first phase of a compiler, the main task of the lexical analyzer is to read the input
characters of the source program, group them into lexemes, and produce as output a sequence of
tokens for each lexeme in the source program.

Additionally, it will filter out whatever separates the tokens (the so-called white-space), i.e., layout
characters (spaces, newlines, etc.) and comments.

THE LEXICAL ANALYZER GENERATOR LEX


Lex (or Flex) is a tool that allows one to specify a lexical analyzer by writing regular expressions to
describe patterns for tokens. The input notation for the Lex tool is referred to as the Lex
language and the tool itself is the Lex compiler.

TOKENS, PATTERNS, AND LEXEMES:


When discussing lexical analysis, we use three related but distinct terms:
Token:
A token is a pair consisting of a token name and an optional attribute value. The token name is an
abstract symbol representing a kind of lexical unit, e.g., a particular keyword, or a sequence of
input characters denoting an identifier. The token names are the input symbols that the parser
processes.
Pattern:
A pattern is a description of the form that the lexemes of a token may take. In the case of a
keyword as a token, the pattern is just the sequence of characters that form the keyword.
Lexeme:
A lexeme is a sequence of characters in the source program that matches the pattern for a token
and is identified by the lexical analyzer as an instance of that token.

REGULAR EXPRESSIONS
Regular expressions in lex are composed of metacharacters (Table 1). Pattern-matching
examples are shown in Table 2. Within a character class, normal operators lose their meaning.
Two operators allowed in a character class are the hyphen (“-”) and circumflex (“^”). When
used between two characters, the hyphen represents a range of characters. The circumflex, when
used as the first character, negates the expression. If two patterns match the same string, the
longest match wins. In case both matches are the same length, then the first pattern listed is used.
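The two disambiguation rules above (longest match wins; on a tie, the first-listed pattern wins) can be demonstrated in Java with java.util.regex rather than Lex itself. The token names, the patterns, and the rule table are illustrative assumptions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Demonstrates Lex-style disambiguation: longest match, then rule order.
public class LongestMatch {
    static final String[][] RULES = {                  // listed in priority order
        {"IF",     "if"},                              // keyword rule, listed first
        {"ID",     "[a-zA-Z][a-zA-Z0-9]*"},
        {"NUMBER", "[0-9]+"},
    };

    static String nextToken(String input) {
        String bestName = null;
        int bestLen = 0;
        for (String[] rule : RULES) {
            Matcher m = Pattern.compile(rule[1]).matcher(input);
            // lookingAt() anchors the match at the start of the input.
            if (m.lookingAt() && m.end() > bestLen) {  // strictly longer only,
                bestLen = m.end();                     // so a tie keeps the
                bestName = rule[0];                    // earlier-listed rule
            }
        }
        return bestName + ":" + input.substring(0, bestLen);
    }
}
```

For the input "if", both the IF and ID rules match two characters, so the first-listed rule (IF) wins; for "ifx", the ID rule's longer match wins.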

Table 1: Pattern Matching Primitives


Table 2: Pattern Matching Examples

STRUCTURE OF A LEX PROGRAM:


A Lex program has the following form:
----------------------
Declarations
----------------------
%%
----------------------
translation rules
----------------------
%%
---------------------------
auxiliary functions
----------------------------
 The declarations section includes declarations of variables, manifest constants (identifiers
declared to stand for a constant, e.g., the name of a token), and regular definitions
 The translation rules each have the form
Pattern { Action }
Each pattern is a regular expression, which may use the regular definitions of the
declaration section. The actions are fragments of code, typically written in C.
 The third section holds whatever additional functions are used in the actions.
Alternatively, these functions can be compiled separately and loaded with the
lexical analyzer.

 Built-in Functions i.e. yylex() , yyerror() , yywrap() etc.


1) yylex() :- This function is used for calling lexical analyzer for given translation rules.

2) yyerror() :- This function is used for displaying any error message.

3) yywrap() :- This function is used for taking i/p from more than one file.

 Built-in Variables i.e. yylval, yytext, yyin, yyout etc.


1) yylval :- This is a global variable used to store the value of any token.

2) yytext :-This is global variable which stores current token.

3) yyin :- This is input file pointer used to change value of input file pointer. Default file
pointer is pointing to stdin i.e. keyboard.

4) yyout :- This is output file pointer used to change value of output file pointer. Default
output file pointer is pointing to stdout i.e. Monitor.

 How to execute a LEX program :- Follow these steps

 Compile the *.l file with the lex command
# lex *.l
It will generate the lex.yy.c file for your lexical analyzer.
 Compile the lex.yy.c file with the cc command
# cc lex.yy.c
It will generate an object file with the name a.out.
 Execute the a.out file to see the output
# ./a.out
Assignment no.:-B4 and B5
TITLE: Parser for sample language using YACC.

OBJECTIVES:
1. To understand Second phase of compiler: Syntax Analysis.
2. Understand the importance and usage of YACC automated tool.

PROBLEM STATEMENT:
Write a program using YACC specification to implement syntax analysis phase of compiler.
SOFTWARE REQUIRED:Linux Operating Systems, GCC

INPUT:Input data as Sample language.

OUTPUT: It will generate parser for sample language.

THEORY:
1) YACC Specifications: -
 A parser generator facilitates the construction of the front end of a compiler. YACC is an
LALR parser generator. It has been used to implement hundreds of compilers. YACC is a
command (utility) of the UNIX system. YACC stands for “Yet Another Compiler
Compiler”.
 The file in which the parser is specified has a .y extension, e.g. parser.y, which contains the
YACC specification of the translator. After the specification is complete, the UNIX command
yacc transforms parser.y into a C program called y.tab.c using the LR parsing method. The program
y.tab.c is automatically generated. We can use the command with the –d option as yacc –d
parser.y. By using the –d option two files get generated, namely y.tab.c and y.tab.h.
 The header file y.tab.h stores all the token information, so you need not
create y.tab.h explicitly. The program y.tab.c is a representation of an LALR parser
written in C, along with other C routines that the user may have prepared. By compiling
y.tab.c with the ly library that contains the LR parsing program, using the command cc
y.tab.c -ly, we obtain the desired object program a.out that performs the translation
specified by the original program. If other procedures are needed, they can be compiled or loaded
with y.tab.c, just as with any C program.
 LEX recognizes regular expressions, whereas YACC recognizes entire grammar. LEX
divides the input stream into tokens, while YACC uses these tokens and groups them
together logically.LEX and YACC work together to analyze the program syntactically.
The YACC can report conflicts or ambiguities (if at all) in the form of error messages.
Structure of the YACC Program is as follows
-----------------------------
Declaration Section
-----------------------------
%%
-----------------------------
Grammar Rule Section
-----------------------------
%%
-----------------------------
Auxiliary Procedures Section
-----------------------------
Declaration Section :-

In the declaration section, the %{ and %} symbols are used for C declarations. This section is used for
the definition of tokens, and the associativity and precedence of operators. The statements between %{ and
%} are passed as-is to the C program.

For token declaration the statement is:


%token DIGIT
which declares DIGIT to be a token.

Associativity and precedence are defined like


%left ‘*’ ‘/’
which declares that the operators * and / are left associative and have the same
precedence, which is higher than that of +.
Tokens declared in this section can be used in the second and third part of YACC
specification.
%token SUM
%token LOG
The YACC program is produced by the command $ yacc -d demo.y (here the program
name is assumed to be demo with extension .y). This command generates
a file y.tab.h which contains information about the token declarations of the first section,
e.g.
# define SUM 100
# define LOG 112
Grammar rule Section :-
In the YACC specification, after the first %% pair, we put the translation rules. Each rule
consists of a grammar production and the associated semantic action. This means that the YACC rules
define what is a legal sequence of tokens in our specification language.
A set of productions
<left side> → <alt 1> | <alt 2> | … | <alt n>

can be written in YACC as


<left side> : <alt 1> {action 1}
| <alt 2> {action 2}

| <alt n> {action n}
;

Auxiliary Procedure Section :-


 YACC generates a single function called yyparse(). This function requires no
parameters and returns 0 on success or 1 on failure (a syntax error causes it to
return 1).
 The special function yyerror() is called when YACC encounters an invalid syntax.
yyerror() is passed a single string (char *) argument. By default this function just prints a “parse
error” message; it is possible to give your own message in this function, like
yyerror (char *err)
{
fprintf (stderr, “%s\n”, err);
}

Built-in Types :
 %token
Used to declare the tokens used in the grammar. The tokens that are declared in the
declaration section will be identified by the parser.
Eg.:- %token NAME NUMBER
 %start
Used to declare the start symbol of the grammar.
Eg.:- %start STMT
 %left
Used to assign left associativity to operators.
Eg.:- %left ‘+’ ‘-‘ - Assigns left associativity to + & – with lowest precedence.
%left ‘*’ ‘/‘ - Assigns left associativity to * & / with highest precedence.
 %right
Used to assign right associativity to operators.
Eg.:- 1) %right ‘+’ ‘-‘
- Assigns right associativity to + & – with lowest precedence
2) %right ‘*’ ‘/‘
- Assigns right associativity to * & / with highest precedence.
 %nonassoc
Used to declare an operator as non-associative.
Eg.:- %nonassoc UMINUS
 %prec
Used to tell the parser to use the precedence of the given token.
Eg.:- %prec UMINUS
 %type
Used to define the type of a token or a non-terminal of the productions written in the rules
section of the .y file.
Eg.:- %type <name of any variable> exp

CONCLUSION:

GROUP C
Assignment no.:-C1
TITLE: Process Scheduling Algorithms.
OBJECTIVES:
1. To understand CPU scheduling algorithms.
PROBLEM STATEMENT:
Write a Java program to implement scheduling algorithms like FCFS, SJF, Priority, and Round Robin.

SOFTWARE REQUIRED: Linux Operating Systems, GCC, a Java editor, JDK

INPUT: Arrival time and burst time of processes.

OUTPUT: It will generate the total turnaround time and waiting time.

THEORY:
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.
First come, first served:
With this scheme, the process that requests the CPU first is allocated the CPU first. Consider the
following set of processes that arrive at time 0, burst time is given.
Process Burst Time
P1 24
P2 3
P3 3

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, the Gantt chart is:

| P1 | P2 | P3 |
0    24   27   30

Thus, the average waiting time is (0 + 24 + 27)/3 = 17 ms.
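The FCFS computation above can be sketched in Java: since all processes arrive at time 0, each process waits exactly the sum of the bursts before it. The class and method names are illustrative.

```java
// FCFS average waiting time for processes that all arrive at time 0,
// served in the order given in the burst-time array.
public class Fcfs {
    static double avgWaitingTime(int[] burst) {
        int elapsed = 0;     // time already consumed by earlier processes
        int totalWait = 0;   // sum of per-process waiting times
        for (int b : burst) {
            totalWait += elapsed;  // this process waited for all earlier bursts
            elapsed += b;
        }
        return (double) totalWait / burst.length;
    }
}
```

For the example, avgWaitingTime(new int[]{24, 3, 3}) reproduces the (0 + 24 + 27)/3 = 17 ms result.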
Shortest Job First:
This algorithm associates with each process the length of the latter’s next CPU burst. When the
CPU is available, it is assigned to the process that has the smallest next CPU burst. If the two
process have the same length next CPU burst, FCFS scheduling is used to break the tie.
Consider the following set of processes.
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, we would schedule these processes according to the following Gantt
Chart :

| P4 | P1 | P3 | P2 |
0    3    9    16   24
Thus, the average waiting time is ( 3 + 16 + 9 + 0)/4 = 7 ms.
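Non-preemptive SJF with all processes available at time 0 reduces to sorting the bursts ascending and then applying the FCFS waiting-time formula, as this Java sketch shows; the names are illustrative.

```java
import java.util.Arrays;

// Non-preemptive SJF average waiting time, all processes arriving at time 0.
public class Sjf {
    static double avgWaitingTime(int[] burst) {
        int[] order = burst.clone();
        Arrays.sort(order);        // shortest next CPU burst is served first
        int elapsed = 0, totalWait = 0;
        for (int b : order) {      // then it is FCFS over the sorted bursts
            totalWait += elapsed;
            elapsed += b;
        }
        return (double) totalWait / burst.length;
    }
}
```

For the example, avgWaitingTime(new int[]{6, 8, 7, 3}) reproduces the 7 ms average.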
Preemptive SJF
When new process arrives at the ready queue while previous process is executing, the new
process may have a shorter next CPU burst than what is left of the currently executing process,
preemptive algo. will preempt the currently executing process.
E.g.
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Corresponding Gantt Chart :

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

The avg. waiting time for this example is ((10 – 1)+(1 – 1) + (17 – 2) + (5 – 3))/4 = 6.5 ms.

Priority Scheduling :
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order. In this example a smaller
number denotes a higher priority.

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Using priority scheduling, we would schedule processes according to the following Gantt chart
| P2 |   P5   |      P1      | P3 | P4 |
0    1        6              16   18   19

The waiting times are 6, 0, 16, 18 and 1 ms for P1 through P5, so the avg. waiting time is 41/5 = 8.2 ms.
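Non-preemptive priority scheduling can be sketched by sorting process indices by priority (lower number = higher priority, as above) and accumulating waits; names are illustrative:

```java
import java.util.Arrays;
import java.util.Comparator;

public class PriorityScheduling {
    // Non-preemptive priority scheduling, all arrivals at time 0.
    // Lower priority number means higher priority.
    static double averageWait(int[] burst, int[] priority) {
        Integer[] order = new Integer[burst.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // Stable sort, so equal priorities keep FCFS order.
        Arrays.sort(order, Comparator.comparingInt(i -> priority[i]));
        long total = 0, elapsed = 0;
        for (int i : order) {
            total += elapsed;
            elapsed += burst[i];
        }
        return (double) total / burst.length;
    }

    public static void main(String[] args) {
        // P1..P5 with bursts {10,1,2,1,5} and priorities {3,1,4,5,2}
        // prints "Average waiting time = 8.2 ms"
        System.out.println("Average waiting time = "
                + averageWait(new int[]{10, 1, 2, 1, 5}, new int[]{3, 1, 4, 5, 2}) + " ms");
    }
}
```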

Round Robin Scheduling :

Round robin gives each process the CPU for at most one time quantum, then moves it to the back
of the ready queue. Consider the same set of processes, all arriving at time 0, with a time
quantum of 4 ms.

Process Burst Time
P1 24
P2 3
P3 3

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

P1 waits 6 ms (10 - 4), P2 waits 4 ms and P3 waits 7 ms, so the avg. waiting time is 17/3 = 5.66 ms.
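The ready queue described above maps directly onto a FIFO queue in Java; this sketch assumes all processes arrive at time 0 (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobin {
    // Round robin with all arrivals at time 0 and a fixed time quantum.
    static double averageWait(int[] burst, int quantum) {
        int n = burst.length;
        int[] remaining = burst.clone();
        long time = 0, totalWait = 0;
        Queue<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++) ready.add(i);
        while (!ready.isEmpty()) {
            int p = ready.poll();
            int run = Math.min(quantum, remaining[p]);
            time += run;
            remaining[p] -= run;
            if (remaining[p] > 0) ready.add(p);     // quantum expired: back of the queue
            else totalWait += time - burst[p];      // wait = finish - burst (arrival 0)
        }
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        // P1=24, P2=3, P3=3 with quantum 4, as in the Gantt chart above (≈ 5.67 ms)
        System.out.println("Average waiting time = " + averageWait(new int[]{24, 3, 3}, 4) + " ms");
    }
}
```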

Assignment no.:-C2
TITLE: Banker’s Algorithm

OBJECTIVES:
1. To understand deadlock and deadlock avoidance concepts.
PROBLEM STATEMENT:
Write a java program to implement Banker’s Algorithm.

SOFTWARE REQUIRED: Linux operating system, GCC, Java editor, JDK software


INPUT: Number of resources, number of processes, allocation, max/claim
matrix

OUTPUT: It will generate the safe sequence if process is executed.

THEORY:

The banker’s algorithm is a resource-allocation and deadlock-avoidance algorithm. It tests for
safety by simulating allocation of the predetermined maximum possible amounts of all
resources, then performs a safe-state check before deciding whether the allocation should be
allowed to continue.

Following Data structures are used to implement the Banker’s Algorithm:

Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.

Available : 

 It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
 Available[ j ] = k means there are ‘k’ instances of resource type Rj

Max :

 It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
 Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.

Allocation :

 It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
 Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource type
Rj

Need :

 It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
 Need [ i, j ] = k means process Pi may need ‘k’ more instances of resource type Rj to complete its task.
 Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.
Banker’s algorithm consists of a Safety algorithm and a Resource-Request algorithm.
Safety Algorithm
1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an index i such that Finish[i] == false and Needi <= Work. If no such i exists, go to step 4.
3. Set Work = Work + Allocationi and Finish[i] = true, then go to step 2.
4. If Finish[i] == true for all i, the system is in a safe state.

Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti[j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1. If Requesti <= Needi, go to step 2; otherwise raise an error, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3; otherwise Pi must wait, since the resources are not yet available.
3. Pretend to allocate the requested resources: Available = Available - Requesti, Allocationi = Allocationi + Requesti, Needi = Needi - Requesti. Then run the Safety algorithm on the resulting state; if it is safe, grant the request, otherwise restore the old state and make Pi wait.

Example:
Consider a system with five processes P0 through P4 and three resource types A, B, C.
Resource type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at time
t0 the following snapshot of the system has been taken:

Process   Allocation   Max
          A B C        A B C
P0        0 1 0        7 5 3
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3

Available : A = 3, B = 3, C = 2

Need Matrix :

Need [i, j] = Max [i, j] – Allocation [i, j]

So, the content of Need Matrix is:

Process   Need
          A B C
P0        7 4 3
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1
Safe state and the safe sequence
Applying the Safety algorithm to this snapshot shows that the system is in a safe state, with safe
sequence < P1, P3, P4, P0, P2 >.
Now suppose process P1 requests one additional instance of resource type A and two instances of
resource type C, i.e. Request1 = (1, 0, 2).

We must determine whether this new system state is safe. To do so, we again execute Safety
algorithm on the above data structures.
After applying the Safety algorithm, the new system state is still safe (safe sequence P1, P3, P4,
P0, P2), so we can immediately grant the request of process P1.
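The Safety algorithm can be sketched in Java; the matrices follow the classic example above (resources A = 10, B = 5, C = 7), and class and method names are illustrative:

```java
public class Banker {
    // Safety algorithm: returns a safe sequence of process indices, or null if unsafe.
    static int[] safeSequence(int[] available, int[][] max, int[][] alloc) {
        int n = max.length, m = available.length;
        int[] work = available.clone();          // Work = Available
        boolean[] finish = new boolean[n];
        int[] seq = new int[n];
        int filled = 0;
        while (filled < n) {
            boolean progress = false;
            for (int i = 0; i < n; i++) {
                if (finish[i]) continue;
                boolean canRun = true;           // Need[i] <= Work ?
                for (int j = 0; j < m; j++)
                    if (max[i][j] - alloc[i][j] > work[j]) { canRun = false; break; }
                if (canRun) {
                    for (int j = 0; j < m; j++) work[j] += alloc[i][j]; // release resources
                    finish[i] = true;
                    seq[filled++] = i;
                    progress = true;
                }
            }
            if (!progress) return null;          // no runnable process left: unsafe
        }
        return seq;
    }

    public static void main(String[] args) {
        int[][] alloc = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        int[][] max   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
        int[] seq = safeSequence(new int[]{3, 3, 2}, max, alloc);
        StringBuilder sb = new StringBuilder("Safe sequence:");
        for (int p : seq) sb.append(" P").append(p);
        System.out.println(sb);  // prints "Safe sequence: P1 P3 P4 P0 P2"
    }
}
```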
Assignment no.:C3
TITLE: Process Management in UNIX System

OBJECTIVES:
1. To understand the basic UNIX system calls for process.
PROBLEM STATEMENT:
Implement Unix System Calls like fork, ps, join, exec family and wait for process management

SOFTWARE REQUIRED: Linux operating system, GCC or Java editor and JDK software

INPUT: UNIX system calls.

OUTPUT: It will generate the output for UNIX system calls.

THEORY:

A process is an executing program. A process is created whenever the user gives a command to
the computer, and each process is assigned a unique number known as the process ID, or simply PID.
System Calls for Process Control

fork():
#include <sys/types.h>
#include <unistd.h>
pid_t fork(void);

The fork function is used to create a process from within a process.


The resultant new process created by fork() is known as child process while the original process
(from which fork() was called) becomes the parent process.
The fork system call does not take an argument.
If the fork system call fails, it will return a -1.
If the fork system call is successful, the process ID of the child process is returned in the parent
process and a 0 is returned in the child process.
When a fork() system call is made, the operating system generates a copy of the parent process
which becomes the child process.

ps (process status)
Purpose: it is used to list the running processes.
Syntax: $ ps [options]
Options:
a - displays all processes
e - displays every process running at the moment
u (username) - displays processes of the single mentioned user
t (terminal name) - displays the processes running on the mentioned terminal
l - displays running processes in long format with memory-related information
wait()
#include <sys/types.h>
#include <sys/wait.h>
pid_t wait(int *status);
 A parent process usually needs to synchronize its actions by waiting until the child
process has either stopped or terminated its actions.
 The wait() system call allows the parent process to suspend its activities until one of these
actions has occurred.
 The wait() system call accepts a single argument, which is a pointer to an integer and
returns a value defined as type pid_t.
 If the calling process does not have any child associated with it, wait will return
immediately with a value of -1.
 For the cases where a parent process has more than one child processes, there is a
function waitpid() that can be used by the parent process to query the change state of a
particular child.
The signature of waitpid() is:
pid_t waitpid(pid_t pid, int *status, int options);
By default,  waitpid() waits only for terminated children, but this behavior is modifiable via the
options argument, as described below.
The value of pid can be:
 < -1 : Wait for any child process whose process group ID is equal to the absolute value of
pid.
 -1 : Wait for any child process.
 0 : Wait for any child process whose process group ID is equal to that of the calling
process.
 >0 : Wait for the child whose process ID is equal to the value of pid.

exec*()
#include <unistd.h>
extern char **environ;

int execl(const char *path, const char *arg, ...);
int execlp(const char *file, const char *arg, ...);
int execle(const char *path, const char *arg, ..., char *const envp[]);
int execv(const char *path, char *const argv[]);
int execvp(const char *file, char *const argv[]);

 The exec family of functions replaces the current process image with a new process
image.
 Commonly a process generates a child process because it would like to transform the
child process by changing the program code the child process is executing.
 The text, data and stack segment of the process are replaced and only the u (user) area of
the process remains the same.
 If successful, the exec system calls do not return to the invoking program as the calling
image is lost.
 It is possible for a user at the command line to issue an exec system call, but it takes over
the current shell and terminates the shell.

The naming convention: exec*

 'l' indicates a list arrangement (a series of null terminated arguments)


 'v' indicate the array or vector arrangement (like the argv structure).
 'e' indicates the programmer will construct (in the array/vector format) and pass their own
environment variable list
 'p' indicates the current PATH string should be used when the system searches for
executable files.
 In the four system calls where the PATH string is not used (execl, execv, execle, and
execve) the path to the program to be executed must be fully specified.
Conclusion:-
Group:-D
Assignment no.:-D1

TITLE: Page replacement policies

OBJECTIVES:
1. To understand and compare page replacement methods.

PROBLEM STATEMENT:
Write a java program to implement page replacement policies like LRU (Least Recently Used),
Optimal and FIFO.

SOFTWARE REQUIRED: Linux operating system, GCC, Java editor, JDK software

INPUT: Page reference string and number of memory frames.

OUTPUT: It will generate the page faults, page hits, fault ratio and hit ratio.

THEORY:
Paging is a memory management scheme that permits the physical address space of a process to
be non-contiguous.
In paged memory management, each job’s address space is divided into equal-sized pieces
called pages, and physical memory is divided into pieces of the same size called blocks or
frames.
Whenever there is a reference to a page that is not in memory, that event is called a page
fault.
Suppose all page frames are occupied and a new page has to be brought in due to a page
fault; in that case we must make space in memory for the new page by replacing an existing
page. There are several algorithms (policies) for page replacement.
FIFO page replacement:
When a page must be replaced, the oldest page is chosen.
Consider the reference string 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5 with memory of three references

Ref.   4  3  2  1  4  3  5  4  3  2  1  5
F1     4  4  4  1  1  1  5  5  5  5  5  5
F2        3  3  3  4  4  4  4  4  2  2  2
F3           2  2  2  3  3  3  3  3  1  1
Fault  +  +  +  +  +  +  +        +  +

Page faults = 9, hit ratio = 3/12, fault ratio = 9/12.
This algorithm is subject to Belady’s anomaly (the page-fault rate may increase as the number of
allocated frames increases).
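FIFO replacement maps naturally onto a queue of loaded pages; a minimal Java sketch (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public class FifoPaging {
    // FIFO replacement: on a fault, evict the page that entered memory first.
    static int pageFaults(int[] refs, int frames) {
        Set<Integer> inMemory = new HashSet<>();
        Queue<Integer> fifo = new ArrayDeque<>();   // pages in load order
        int faults = 0;
        for (int page : refs) {
            if (inMemory.contains(page)) continue;  // hit
            faults++;
            if (inMemory.size() == frames) {
                int victim = fifo.poll();           // oldest loaded page
                inMemory.remove(victim);
            }
            inMemory.add(page);
            fifo.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5};
        int faults = pageFaults(refs, 3);
        // prints "Faults = 9, fault ratio = 9/12"
        System.out.println("Faults = " + faults + ", fault ratio = " + faults + "/" + refs.length);
    }
}
```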
Optimal page replacement:
The basic idea of this algorithm is to replace the page that will not be used for the longest period
of time. It has the lowest page-fault rate of all algorithms and never suffers from Belady’s anomaly.
Consider the string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three memory frames.
Ref.   7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
F1     7  7  7  2  2  2  2  2  2  2  2  2  2  2  2  2  2  7  7  7
F2        0  0  0  0  0  0  4  4  4  0  0  0  0  0  0  0  0  0  0
F3           1  1  1  3  3  3  3  3  3  3  3  1  1  1  1  1  1  1
Fault  +  +  +  +     +     +        +        +           +

Page faults = 9, hit ratio = 11/20, fault ratio = 9/20.
This algorithm is difficult to implement because it requires future knowledge of the reference string.
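Since the full reference string is known in a simulation, the optimal policy can still be sketched in Java by scanning forward for each resident page's next use (names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class OptimalPaging {
    // Optimal replacement: evict the resident page whose next use is farthest away
    // (or that is never used again).
    static int pageFaults(int[] refs, int frames) {
        Set<Integer> inMemory = new HashSet<>();
        int faults = 0;
        for (int i = 0; i < refs.length; i++) {
            if (inMemory.contains(refs[i])) continue;   // hit
            faults++;
            if (inMemory.size() == frames) {
                int victim = -1, farthest = -1;
                for (int page : inMemory) {
                    int next = refs.length;             // never used again
                    for (int j = i + 1; j < refs.length; j++)
                        if (refs[j] == page) { next = j; break; }
                    if (next > farthest) { farthest = next; victim = page; }
                }
                inMemory.remove(victim);
            }
            inMemory.add(refs[i]);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        System.out.println("Faults = " + pageFaults(refs, 3));  // prints "Faults = 9"
    }
}
```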
LRU page replacement:
We replace the page that has not been used for the longest period of time. This algorithm also
never suffers from Belady’s anomaly.
Consider the string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three memory frames
Ref.   7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
F1     7  7  7  2  2  2  2  4  4  4  0  0  0  1  1  1  1  1  1  1
F2        0  0  0  0  0  0  0  0  3  3  3  3  3  3  0  0  0  0  0
F3           1  1  1  3  3  3  2  2  2  2  2  2  2  2  2  7  7  7
Fault  +  +  +  +     +     +  +  +  +        +     +     +

Page faults = 12, hit ratio = 8/20, fault ratio = 12/20.
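LRU can be sketched in Java with a LinkedHashSet kept in recency order, so the least recently used page is always at the front (names are illustrative):

```java
import java.util.LinkedHashSet;

public class LruPaging {
    // LRU replacement: evict the page unused for the longest time.
    // The LinkedHashSet is kept in recency order: front = least recently used.
    static int pageFaults(int[] refs, int frames) {
        LinkedHashSet<Integer> memory = new LinkedHashSet<>();
        int faults = 0;
        for (int page : refs) {
            if (memory.remove(page)) {        // hit: re-insert to refresh recency
                memory.add(page);
                continue;
            }
            faults++;
            if (memory.size() == frames)      // evict least recently used page
                memory.remove(memory.iterator().next());
            memory.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        System.out.println("Faults = " + pageFaults(refs, 3));  // prints "Faults = 12"
    }
}
```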
