
Compiler Design (CD)

(CE6003)

Unit – 6
Code Generation and Code Optimization

Mr. Viral H. Panchal


Computer Engineering
Department
CGPIT, UTU, BARDOLI
viral.panchal@utu.ac.in
Topics to be covered
• Role of code generator
• Issues in the design of a code generator
• The Target machine
• Basic block and flow-graph
• Transformation on basic block
• Next-use information
• Loops in flow graph
• DAG Representation of Basic Block
• A simple code generator
• Generating Code from DAGs
• Code optimization (Machine independent)
• Code optimization (Machine dependent)
o Peephole optimization
Role of Code Generator
Role of Code Generator
 The final phase of compilation is code generation.
 It takes an intermediate representation of the source program as input
and produces an equivalent target program as output.
Figure: Position of code generator
 The target program should have the following properties:
1. Correctness, i.e., it must produce correct code.
2. High quality, i.e., it must produce high-quality code.
3. Efficient use of the resources of the target machine.
4. Quick code generation.
Issues in the Design of
a Code Generator
Issues in the Design of a Code Generator
 Issues in Code Generation are:
1. Input to code generator
2. Target program
3. Memory management
4. Instruction selection
5. Register allocation
6. Choice of evaluation order
7. Approaches to code generation
Input to code generator
 Input to the code generator consists of the intermediate representation
of the source program.
 There are several types of intermediate languages such as
1. Postfix notation
2. Three address code
3. Syntax trees or DAGs
 The intermediate code generated by the front end should be such that
the target machine can easily manipulate it, in order to generate
appropriate target code.
 In the front end of the compiler, the necessary type checking and type
conversion need to be done.
 The detection of semantic error should be done before submitting the
input to the code generator.
Target program
 The output of the code generator is the target program.
 The output may be in one of the following forms:
1. Absolute machine language: An absolute machine language
program can be placed in a fixed memory location and
immediately executed.
2. Relocatable machine language: Subroutines can be
compiled separately. A set of relocatable object modules
can be linked together and loaded for execution.
3. Assembly language: Producing an assembly language
program as output makes the process of code generation
easier; an assembler is then required to convert the code
into binary form.
Memory management
 Both the front end and the code generator perform the task of
mapping names in the source program to addresses of data
objects in run-time memory.
 The names used in three address code refer to entries in
the symbol table.
 The type in a declaration statement determines the amount of
storage (memory) needed to store the declared identifier.
 Thus, using the symbol table information about memory
requirements, code generator determines the addresses in the
target code.
 Similarly, if the three address code contains labels, those labels
are converted into equivalent target-code addresses.
Instruction selection
 The uniformity and completeness of the instruction set are important
factors for the code generator. The selection of instructions depends
upon the instruction set of the target machine.
 If we do not consider the efficiency of the target code then the
instruction selection becomes a straightforward task.
 For example, x := y + z then the code sequence that can be
generated as
MOV y, R0 /* load y into register R0 */
ADD z, R0 /* add z to R0 */
MOV R0, x /* store R0 into x */
 In this example, the code is generated statement by statement, which
often produces poor code: redundancies introduced across successive
statements are not detected and eliminated when each statement is
translated in isolation.
Instruction selection
 For example, the sequence of statements
a := b + c
d := a + e
 would be translated into
MOV b, R0
ADD c, R0
MOV R0, a
MOV a, R0
ADD e, R0
MOV R0, d
 This generated code is poor: the fourth statement MOV a, R0 is redundant because a is already in R0, and the third statement MOV R0, a is unnecessary if a is not used subsequently.
 Hence the efficient code is
MOV b, R0
ADD c, R0
ADD e, R0
MOV R0, d
Register allocation
 Instructions involving register operands are usually shorter and faster
than those whose operands are in memory.
 Hence, while generating a good code efficient utilization of register is an
important factor.
 The use of registers is often subdivided into two sub problems:
1. During register allocation, we select the set of variables that will
reside in registers at a point in the program.
2. During a subsequent register assignment phase, we pick the specific
register that a variable will reside in.
 Finding an optimal assignment of registers to variables is difficult, even
for machines with a single register.
 Certain machines require register pairs (an even and the next odd
numbered register) for some operands and results.
 For example, in IBM systems integer multiplication requires register pairs.
Register allocation
 Consider the three address code sequence below.
t := a + b
t := t * c
t := t / d
 The shortest assembly code sequence will be
MOV a, R0
ADD b, R0
MUL c, R0
DIV d, R0
Choice of evaluation order
 The order in which computations are performed can affect the
efficiency of the target code.
 Some computation orders require fewer registers to hold
intermediate results than others.
 Picking the best order is one of the difficult problems in code generation.
 Mostly, we can avoid this problem by generating code for the
three-address statements in the order in which they have been
produced by the intermediate code generator.
Approaches to code generation
 The most important criterion for a code generator is that it
produces correct code.
 Correctness takes on special significance because of the
number of special cases that a code generator might face.
 Given the premium on correctness, designing a code generator
so it can be easily implemented, tested and maintained is an
important design goal.
Basic Blocks and Flow
Graphs
Basic Blocks
 A basic block is a sequence of consecutive statements in which flow
of control enters at the beginning and leaves at the end without halt
or possibility of branching except at the end.
 There are no jumps into or out of the middle of a basic block.
 The following sequence of three-address statements forms a basic
block:
t1 := a*a
t2 := a*b
t3 := 2*t2
t4 := t1+t3
t5 := b*b
t6 := t4+t5
Terminologies used in basic blocks
 Define and use –
 A three-address statement x := y + z is said to define x and to
use y and z.
 Live and dead –
 A name (variable) in a basic block is said to be live at a given
point if its value is used after that point in the program.
 A name (variable) in the basic block is said to be dead at a given
point if its value is never used after that point in the program.
Algorithm: Partition into basic blocks
Input: A sequence of three-address statements.
Output: A list of basic blocks with each three-address statement in
exactly one block.
Method:
1. We first determine the set of leaders, for that we use the following
rules:
i. The first statement is a leader.
ii. Any statement that is the target of a conditional or
unconditional goto is a leader.
iii. Any statement that immediately follows a goto or conditional
goto statement is a leader.
2. For each leader, its basic block consists of the leader and all
statements up to but not including the next leader or the end
of the program.
Example: Partition into basic blocks
Source program:
begin
    prod := 0;
    i := 1;
    do
        prod := prod + a[i] * b[i];
        i := i + 1;
    while i <= 20
end

Three address code:
(1)  prod := 0          - leader (Block B1)
(2)  i := 1
(3)  t1 := 4*i          - leader (Block B2)
(4)  t2 := a[t1]
(5)  t3 := 4*i
(6)  t4 := b[t3]
(7)  t5 := t2*t4
(8)  t6 := prod + t5
(9)  prod := t6
(10) t7 := i + 1
(11) i := t7
(12) if i <= 20 goto (3)
 Statement (1) is a leader by rule (i) and statement (3) is a leader by rule (ii), since the last statement can jump to it.
 Therefore, statements (1) and (2) form one basic block (B1).
 The remainder of the program, beginning with statement (3), forms the second basic block (B2).
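 The leader rules above can be turned directly into a small program. The following Python sketch is illustrative only: the representation of a statement as a dict with an optional goto field (holding the 1-based index of the jump target) is an assumption made for this sketch, not something prescribed by the slides.

# Minimal sketch: partition three-address code into basic blocks.
# Statements are modelled as dicts; 'goto' holds the 1-based index of
# the jump target, mirroring the numbered statements above.

def find_leaders(stmts):
    leaders = {1}                                  # rule (i): first statement
    for idx, s in enumerate(stmts, start=1):
        if s.get("goto") is not None:              # conditional or unconditional jump
            leaders.add(s["goto"])                 # rule (ii): jump target
            if idx + 1 <= len(stmts):
                leaders.add(idx + 1)               # rule (iii): statement after a jump
    return sorted(leaders)

def partition(stmts):
    leaders = find_leaders(stmts)
    blocks = []
    for i, lead in enumerate(leaders):
        end = leaders[i + 1] - 1 if i + 1 < len(leaders) else len(stmts)
        blocks.append(stmts[lead - 1:end])         # rule 2: leader up to next leader
    return blocks

if __name__ == "__main__":
    code = [
        {"text": "prod := 0"}, {"text": "i := 1"},
        {"text": "t1 := 4*i"}, {"text": "t2 := a[t1]"},
        {"text": "t3 := 4*i"}, {"text": "t4 := b[t3]"},
        {"text": "t5 := t2*t4"}, {"text": "t6 := prod + t5"},
        {"text": "prod := t6"}, {"text": "t7 := i + 1"},
        {"text": "i := t7"}, {"text": "if i <= 20 goto (3)", "goto": 3},
    ]
    for n, b in enumerate(partition(code), start=1):
        print("Block B%d:" % n, [s["text"] for s in b])

 Running this on the twelve statements above prints the same two blocks B1 and B2 found by hand.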
Flow Graph
 A graph representation of three-address statements, called a flow graph,
is useful for understanding code-generation algorithms.
 Nodes in the flow graph represent computations, and the edges represent
the flow of control.
 Example of a flow graph for the following three address code:

Block B1:
    prod := 0
    i := 1
Block B2:
    t1 := 4*i
    t2 := a[t1]
    t3 := 4*i
    t4 := b[t3]
    t5 := t2*t4
    t6 := prod + t5
    prod := t6
    t7 := i + 1
    i := t7
    if i <= 20 goto B2

 In the flow graph there is an edge from B1 to B2 and, because of the conditional jump, a back edge from B2 to itself.
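 Continuing the sketch from the previous slide, flow-graph edges can be derived from the last statement of each block: a jump edge to the block that starts at the jump target, plus a fall-through edge when the block does not end in an unconditional jump. The encoding below (a 'conditional' flag on the jump statement) is again an assumption for illustration.

# Minimal sketch: build flow-graph edges from basic blocks.
# Each block is a list of statements; a statement may carry 'goto'
# (1-based index of the target statement) and 'conditional' (True/False).

def flow_edges(blocks, leaders):
    # leaders[i] is the index (1-based) of the first statement of block i
    block_of = {lead: i for i, lead in enumerate(leaders)}
    edges = set()
    for i, block in enumerate(blocks):
        last = block[-1]
        target = last.get("goto")
        if target is not None:
            edges.add((i, block_of[target]))            # jump edge
        if target is None or last.get("conditional"):   # fall through
            if i + 1 < len(blocks):
                edges.add((i, i + 1))
    return sorted(edges)

# Example: two blocks B1, B2 where B2 ends with "if i <= 20 goto (3)".
blocks = [
    [{"text": "prod := 0"}, {"text": "i := 1"}],
    [{"text": "..."}, {"text": "if i <= 20 goto (3)", "goto": 3, "conditional": True}],
]
print(flow_edges(blocks, leaders=[1, 3]))   # [(0, 1), (1, 1)] : B1->B2 and B2->B2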
Transformation on
Basic Blocks
Optimization of Basic block
Transformation on Basic Blocks
 A number of transformations can be applied to a basic block
without changing the set of expressions computed by the block.
 Many of these transformations are useful for improving the
quality of the code.
 There are two important classes of local transformations that
can be applied to basic block. These are:
1. Structure preserving transformation
2. Algebraic transformation
Structure Preserving Transformations
 The primary structure-preserving transformations on basic
blocks are:
1. Common sub-expression elimination
2. Dead-code elimination
3. Renaming of temporary variables
4. Interchange of two independent adjacent statements
Common sub-expression elimination
 Consider the basic block,
a := b+c
b := a-d
c := b+c
d := a-d
 The second and fourth statements compute the same expression, hence this basic block may be transformed into the equivalent block:
a := b+c
b := a-d
c := b+c
d := b
 Although the 1st and 3rd statements in both cases appear to have the same expression on the right, the 2nd statement redefines b. Therefore, the value of b in the 3rd statement is different from the value of b in the 1st, and the 1st and 3rd statements do not compute the same expression.
Dead-code elimination
 Suppose x is dead, that is, never subsequently used, at the point
where a statement x := y + z appears in a basic block.
 Then this statement may be safely removed without changing
the value of the basic block.
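 A dead assignment inside a block can be found with one backward pass, keeping the set of names still needed. The Python sketch below is an illustration under an assumed statement format (dest, op, arg1, arg2); it is a simplification, not the textbook's algorithm verbatim.

# Minimal sketch: dead-code elimination inside one basic block.
# An assignment is dead if its target is not live on exit and is not
# used before being redefined; one backward pass suffices.

def eliminate_dead(block, live_on_exit):
    live = set(live_on_exit)
    kept = []
    for dest, op, a1, a2 in reversed(block):
        if dest in live:
            kept.append((dest, op, a1, a2))
            live.discard(dest)
            live.update(a for a in (a1, a2) if isinstance(a, str))
        # otherwise the assignment is dead and is dropped
    return list(reversed(kept))

block = [("t1", "+", "a", "b"),
         ("t2", "*", "t1", "c"),   # t2 is never used afterwards -> dead
         ("x",  "+", "t1", "d")]
print(eliminate_dead(block, live_on_exit={"x"}))
# [('t1', '+', 'a', 'b'), ('x', '+', 't1', 'd')]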
Renaming of temporary variables
 Suppose we have a statement
t:=b+c, where t is a temporary variable.
 If we change this statement to
u:= b+c, where u is a new temporary variable,
 and change all uses of this instance of t to u, then the value of the
basic block is not changed.
 In fact, we can always transform a basic block into an
equivalent block in which each statement that defines a
temporary defines a new temporary.
 We call such a basic block a normal-form block.
Interchange of two independent adjacent
statements
 Suppose we have a block with the two adjacent statements,
t1:= b+c
t2:= x+y
 Then we can interchange the two statements without affecting
the value of the block if and only if neither x nor y is t1 and
neither b nor c is t2.
 A normal-form basic block permits all statement interchanges
that are possible.
Algebraic Transformation
 Countless algebraic transformation can be used to change the set of
expressions computed by the basic block into an algebraically
equivalent set.
 The useful ones are those that simplify expressions or replace
expensive operations by cheaper ones.
 For example, statements such as x := x + 0 or x := x * 1 can be
eliminated from a basic block without changing the set of
expressions it computes.
 The exponentiation operator in the statement
x := y ** 2
usually requires a function call to implement.
 Using an algebraic transformation, this statement can be replaced
by the cheaper, but equivalent, statement
x := y * y
Next Use Information
Next-Use Information
 The next-use information records, for each name used in a block,
the statement (if any) at which its value will next be used.
 The use of a name is defined as follows. Consider two statements in a block:
i:  x := ...
j:  y := x op z
 where statement j follows i and no statement between them assigns to x.
Then statement j uses the value of x computed at statement i.
 The next-use information can be collected by making the
backward scan of the programming code in that specific block.
Next-Use Information
 Suppose the three-address statement is as given below-
L: a = b op c
 Then the steps in backward scan are –
i. The currently found information in symbol table regarding the next
use and liveness of a, b, and c is associated with the statement L.
ii. In the symbol table set ‘a’ to “not live” and “no next use”.
iii. In the symbol table, set ‘b’ and ‘c’ to “live” and record statement L as their next use.
 Note that the order of step (ii) and (iii) may not be
interchanged because a may be b or c.
 If three-address statement L is of the form a = b or a = op b,
the steps are same as above, ignoring c.
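 The backward scan above can be sketched in a few lines of Python. The statement tuple format and the table of (is_live, next_use) pairs standing in for symbol-table fields are assumptions made for this illustration.

# Minimal sketch: next-use / liveness computation by a backward scan
# of one basic block.  Each statement is (dest, op, arg1, arg2).

def next_use_info(block, live_on_exit):
    # table maps a name to (is_live, index of next use or None)
    table = {n: (True, None) for n in live_on_exit}
    info = [None] * len(block)
    for i in range(len(block) - 1, -1, -1):          # backward scan
        dest, op, a1, a2 = block[i]
        # step (i): attach the information currently in the table to statement i
        info[i] = {n: table.get(n, (False, None)) for n in (dest, a1, a2) if n}
        # step (ii): the defined name has no next use before this point
        table[dest] = (False, None)
        # step (iii): the used names are live with next use at statement i
        for n in (a1, a2):
            if n:
                table[n] = (True, i)
    return info

block = [("t", "-", "a", "b"),     # t := a - b
         ("u", "-", "a", "c"),     # u := a - c
         ("v", "+", "t", "u"),     # v := t + u
         ("d", "+", "v", "u")]     # d := v + u
for i, inf in enumerate(next_use_info(block, live_on_exit={"d"})):
    print(i, inf)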
Storage for Temporary Names
 Normally, a distinct name is created each time a temporary is needed,
and space is allocated for each such temporary.
 To optimize this during code generation, we can pack two temporaries
into the same location if they are not live simultaneously.
 Consider the three address code
t1 = a*a
t2 = a*b
t3 = 4*t2
t4 = t1+t3
t5 = b*b
t6 = t4+t5
 It can be packed into two temporaries as
t1 = a*a
t2 = a*b
t2 = 4*t2
t1 = t1+t2
t2 = b*b
t1 = t1+t2
 Many times the temporaries can also be packed into registers.
Loops in Flow Graph
Dominators
 In a flow graph, a node d dominates a node n if every path from the initial
node to n goes through d.
 This is denoted 'd dom n'.
 The initial node dominates all the remaining nodes in the flow graph.
 Every node dominates itself.

Figure: flow graph with nodes 1-5 and edges 1→2, 2→3, 2→4, 3→5, 4→5

 Node 1 is the initial node, so it dominates every node.
 Node 2 dominates 3, 4 and 5, because every path from the initial node to
these nodes must pass through node 2.
 Node 3 dominates only itself: node 5 can also be reached by the path
1-2-4-5, which avoids 3. Similarly, node 4 dominates only itself, since
node 5 can be reached by 1-2-3-5, which avoids 4.
 Node 5 dominates only itself.
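 Dominator sets for a small flow graph such as the one above can be computed with the standard iterative data-flow method. The Python sketch below is illustrative; the edge list encodes the five-node graph described in the bullets, and every non-initial node is assumed to have at least one predecessor.

# Minimal sketch: iterative computation of dominator sets.
# dom(entry) = {entry};  dom(n) = {n} U intersection of dom(p) over preds p.

def dominators(nodes, edges, entry):
    preds = {n: set() for n in nodes}
    for a, b in edges:
        preds[b].add(a)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# flow graph from the slide: 1->2, 2->3, 2->4, 3->5, 4->5
print(dominators([1, 2, 3, 4, 5], [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)], 1))
# {1: {1}, 2: {1, 2}, 3: {1, 2, 3}, 4: {1, 2, 4}, 5: {1, 2, 5}}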
Natural Loops
 There are two essential properties of a natural loop:
 A loop must have a single entry point, called the header. This point
dominates all nodes in the loop.
 There must be at least one way to iterate the loop, i.e., at least one
path back to the header.

Figure: flow graph in which the back edge 6→1 forms a natural loop with node 1 as its header
Inner Loops
 The inner loop is a loop that contains no other loop.
 Here the inner loop is the one formed by the back edge 4→2, i.e., the
loop consisting of nodes 2, 3 and 4.
Reducible Flow Graph
 A reducible flow graph is a flow graph whose edges can be partitioned
into two disjoint groups: forward edges and back edges.
 These edges have the following properties:
 The forward edges form an acyclic graph in which every node can be
reached from the initial node.
 The back edges are edges whose heads dominate their tails.

Figure: reducible flow graph (deleting its back edges leaves an acyclic graph)
Non-reducible Flow Graph
 A non-reducible flow graph is a flow graph in which:
 no edge's head dominates its tail, so there are no back edges, and
 the remaining (forward) edges contain a cycle.

Figure: flow graph whose nodes 2 and 3 form a cycle, with neither dominating the other

 This flow graph has no back edges, since no head of an edge
dominates the tail of that edge.
 Thus it could only be reducible if the entire graph were acyclic;
since it is not, the flow graph is not reducible.
DAG Representation of
Basic Block
DAG Representation of Basic Block
 The Directed Acyclic Graph (DAG) is used to apply transformations on the basic
block.
 A DAG gives a picture of how the value computed by each statement in a basic
block is used in subsequent statements of the block.
 To apply the transformations on basic block, a DAG is constructed from three-
address statements.
 A DAG is constructed for the following types of labels on nodes:
 Leaf nodes are labeled by identifiers or variable names or constants.
Generally leaves represent r-values.
 Interior nodes store operator values.
 Nodes are also optionally given a sequence of identifiers for label.
 The DAG and flow graphs are two different pictorial representations.
 Each node of the flow graph can be represented by DAG because each node of
the flow graph is a basic block.
Algorithm for Construction of DAG
 We assume that a three address statement could be of any of the following types:
Case (i) x:=y op z
Case (ii) x:=op y
Case (iii) x:=y
 With the help of following steps, the DAG can be constructed.
 Step 1: If y is undefined then create node(y). Similarly if z is undefined, create a
node(z).
 Step 2:
Case (i): Check whether there is already a node labeled op with left child node(y) and right child node(z) (a common sub-expression). If not, create such a node. Call the node found or created n.
Case (ii): Check whether there is already a node labeled op whose only child is node(y). If not, create such a node. Call it n.
Case (iii): Let n be node(y).
 Step 3: Delete x from the list of attached identifiers of the node it was previously attached to, node(x), and append x to the list of attached identifiers of the node n found in step 2; set node(x) := n.
Example: DAG Representation of Basic Block

Example:
(1) t1 := 4*i
(2) t2 := a[t1]
(3) t3 := 4*i
(4) t4 := b[t3]
(5) t5 := t2*t4
(6) t6 := prod + t5
(7) prod := t6
(8) t7 := i+1
(9) i := t7

Figure: DAG for the block. The leaves are prod, a, b, 4, i and 1. The * node for 4*i is labeled t1, t3 (statements (1) and (3) share it); the [ ] nodes give t2 and t4; the * node above them gives t5; the + node labeled t6, prod adds prod and t5; the + node labeled t7, i adds i and 1.
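 The three cases of the construction algorithm can be coded compactly by keying existing nodes on (op, children), which is also what detects common sub-expressions. The Python sketch below is an illustration under assumed data structures (tuples for nodes, dicts for node_of and the attached-identifier lists); running it on statements (1)-(5) of the example shows t1 and t3 attached to the same node.

# Minimal sketch: DAG construction for a basic block.
# 'node_of' maps a name to the node currently holding its value,
# 'existing' detects common sub-expressions, 'attached' holds labels.

def build_dag(block):
    node_of, existing, nodes, attached = {}, {}, [], {}

    def leaf(name):
        if name not in node_of:                  # step 1: create node(y) / node(z)
            n = ("leaf", name)
            node_of[name] = n
            nodes.append(n)
        return node_of[name]

    for x, op, y, z in block:                    # z may be None for unary/copy
        if op is None:                           # case (iii): x := y
            n = leaf(y)
        else:                                    # cases (i) and (ii)
            key = (op, leaf(y), leaf(z) if z else None)
            if key not in existing:              # common sub-expression check
                existing[key] = key
                nodes.append(key)
            n = existing[key]
        old = node_of.get(x)
        if old is not None and x in attached.get(old, []):
            attached[old].remove(x)              # step 3: delete x from its old node
        attached.setdefault(n, []).append(x)     # ... and attach it to node n
        node_of[x] = n
    return nodes, attached

nodes, attached = build_dag([
    ("t1", "*", "4", "i"), ("t2", "[]", "a", "t1"),
    ("t3", "*", "4", "i"), ("t4", "[]", "b", "t3"),
    ("t5", "*", "t2", "t4"),
])
for n in nodes:
    print(n, "->", attached.get(n, []))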
Applications of DAG
 The DAGs are used in following:
1. Determining the common sub-expressions.
2. Determining which names are used inside the block and
computed outside the block.
3. Determining which statements of the block could have their
computed values used outside the block.
4. Simplifying the list of quadruples by eliminating common
sub-expressions and not performing copy assignments of the
form x := y unless they are necessary.
A Simple Code
Generator
A Simple Code Generator
 The code generation strategy generates target code for a sequence
of three address statements.
 It uses function getReg() to assign register to variable.
 The code generator algorithm uses descriptors to keep track of
register contents and addresses for names.
 Address descriptor stores the location where the current value of
the name can be found at run time. The information about locations
can be stored in the symbol table and is used to access the
variables.
 Register descriptor is used to keep track of what is currently in
each register. The register descriptor shows that initially all the
registers are empty. As the generation for the block progresses the
registers will hold the values of computation.
A Code Generation Algorithm
 The algorithm takes a sequence of three-address statements as input. For
each three address statement of the form x:= y op z perform the various
actions. Assume L is the location where the output of operation y op z is
stored.
1. Invoke a function getReg() to find out the location L where the result of
computation y op z should be stored.
2. Determine the present location of y by consulting the address descriptor
for y. If y is not already in location L, generate the instruction
MOV y', L (where y' is a current location of y) to place a copy of y in L.
3. Determine the present location of z as in step 2 and generate the
instruction OP z', L.
4. If L is a register, update its register descriptor to indicate that it
contains the value of x, and update the address descriptor of x to
indicate that x is in L.
5. If the current values of y and/or z have no next uses, are not live on
exit from the block, and are in registers, alter the register descriptors
to indicate that those registers no longer hold y and/or z.
Generating a code for assignment statement
 The assignment statement d:= (a-b) + (a-c) + (a-c) can be translated into
the following sequence of three address code:
t := a - b
u := a - c
v := t + u
d := v + u

Statement     Code Generated     Register descriptor          Address descriptor
t := a - b    MOV a, R0          R0 contains t                t in R0
              SUB b, R0
u := a - c    MOV a, R1          R0 contains t                t in R0
              SUB c, R1          R1 contains u                u in R1
v := t + u    ADD R1, R0         R0 contains v                u in R1
                                 R1 contains u                v in R0
d := v + u    ADD R1, R0         R0 contains d                d in R0
              MOV R0, d                                       d in R0 and memory
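 A compressed Python sketch of this algorithm is given below. The getReg policy used here (reuse a register that already holds y, otherwise pick any empty register) is a simplification assumed for illustration and ignores spilling; on the block for d := (a-b) + (a-c) + (a-c) above it reproduces the same seven-instruction sequence shown in the table.

# Minimal sketch: code generation for x := y op z using register and
# address descriptors.  Registers R0, R1 only; getReg is simplified.

REGS = ["R0", "R1"]

def gen(block, live_on_exit):
    reg_desc = {r: None for r in REGS}      # register -> name it holds
    addr_desc = {}                          # name -> set of locations
    out = []

    def get_reg(y):
        for r in REGS:                      # reuse a register that already holds y
            if reg_desc[r] == y:
                return r
        for r in REGS:                      # otherwise any empty register
            if reg_desc[r] is None:
                return r
        return REGS[0]                      # naive fallback, no spilling (assumption)

    for x, op, y, z in block:
        L = get_reg(y)
        if reg_desc[L] != y:                # y not already in L
            out.append("MOV %s, %s" % (y, L))
        zloc = next((r for r in REGS if reg_desc[r] == z), z)
        out.append("%s %s, %s" % (op, zloc, L))
        reg_desc[L] = x                     # L now holds x
        addr_desc[x] = {L}
    for x in live_on_exit:                  # store live names back to memory
        r = next((r for r in REGS if reg_desc[r] == x), None)
        if r:
            out.append("MOV %s, %s" % (r, x))
    return out

print("\n".join(gen([("t", "SUB", "a", "b"),
                     ("u", "SUB", "a", "c"),
                     ("v", "ADD", "t", "u"),
                     ("d", "ADD", "v", "u")], live_on_exit=["d"])))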
Generating Code from
DAGs
Generating Code from DAGs
 Methods for generating code from DAGs are:
1. Rearranging Order
2. Heuristic Ordering
3. Labeling Algorithm
Rearranging Order
 The order of three address code affects the cost of the object code being
generated.
 By changing the order in which computations are done we can obtain the
object code with minimum cost.
 Example:
t1 := a+b
t2 := c+d
t3 := e-t2
t4 := t1-t3
Three Address Code

Figure: DAG for the block, with root t4 = t1 - t3, where t1 = a + b, t3 = e - t2 and t2 = c + d
Example: Rearranging Order

Original order                      Re-arranged order
t1 := a+b                           t2 := c+d
t2 := c+d                           t3 := e-t2
t3 := e-t2                          t1 := a+b
t4 := t1-t3                         t4 := t1-t3
Three Address Code                  Three Address Code

MOV a, R0                           MOV c, R0
ADD b, R0                           ADD d, R0
MOV c, R1                           MOV e, R1
ADD d, R1                           SUB R0, R1
MOV R0, t1                          MOV a, R0
MOV e, R0                           ADD b, R0
SUB R1, R0                          SUB R1, R0
MOV t1, R1                          MOV R0, t4
SUB R0, R1
MOV R1, t4
Assembly Code (10 instructions)     Assembly Code (8 instructions)
Heuristic Ordering
 The heuristic ordering algorithm is as follows:
1. Obtain all the interior nodes. Consider these interior nodes as unlisted
nodes.
2. while(unlisted interior nodes remain)
3. {
4. pick up an unlisted node n, whose parents have been listed
5. list n;
6. while(the left child m of n has no unlisted parent AND is not leaf)
7. {
8. List m;
9. n=m;
10. }
11. }
Example: Heuristic Ordering

Interior nodes = {1, 2, 3, 4, 5, 6, 8}; initially all of them are unlisted.

Figure: DAG with root 1 (*) having children 2 (+) and 3 (-); node 2 has children 6 and 4; node 3 has children 4 and e; node 4 (*) has children 5 and 8; node 5 (-) has children 6 and c; node 6 (+) has children a and b; node 8 (+) has children d and e.

 Pick an unlisted node whose parents have all been listed: node 1 (the root has no parents), so list 1.
 The left child of 1 is node 2; its only parent (1) is listed and 2 is not a leaf, so list 2.
 The left child of 2 is node 6, but node 6 has an unlisted parent (5), so 6 cannot be listed and the inner loop stops.
Listed nodes: 1 2
Example: Heuristic Ordering

Unlisted nodes = {3, 4, 5, 6, 8}
 Pick an unlisted node whose parents have been listed: node 3 (its parent 1 is listed), so list 3.
 The left child of 3 is node 4; both of its parents (2 and 3) are now listed, so list 4.
Listed nodes: 1 2 3 4
Example: Heuristic Ordering

Unlisted nodes = {5, 6, 8}
 The left child of 4 is node 5; its only parent (4) is listed, so list 5.
 The left child of 5 is node 6; both of its parents (2 and 5) are now listed, so list 6.
Listed nodes: 1 2 3 4 5 6
Example: Heuristic Ordering

Unlisted nodes = {8}
 The left child of 6 is a leaf, so the inner loop stops. The only remaining unlisted node is 8 (the right child of 4); its parent 4 is listed, so list 8.
Listed nodes: 1 2 3 4 5 6 8
Example: Heuristic Ordering

Listed nodes: 1 2 3 4 5 6 8
 Reversing this order gives the evaluation order 8 6 5 4 3 2 1, from which the three address code is generated:
t8 = d+e
t6 = a+b
t5 = t6-c
t4 = t5*t8
t3 = t4-e
t2 = t6+t4
t1 = t2*t3
Optimal Three Address Code
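 The heuristic-ordering pseudocode translates almost line for line into the Python sketch below. The DAG is encoded as an assumed children table keyed by the node numbers of the worked example (leaves 7, 9, 10, 11, 12 stand for c, a, b, d, e); running it reproduces the listing 1 2 3 4 5 6 8 and the reversed evaluation order 8 6 5 4 3 2 1.

# Minimal sketch: heuristic node listing of a DAG.
# children[n] = [left, right]; interior nodes have entries, leaves do not.

def node_listing(children):
    interior = set(children)
    parents = {}
    for n, kids in children.items():
        for k in kids:
            parents.setdefault(k, set()).add(n)
    listed, order = set(), []
    while len(listed) < len(interior):
        # pick an unlisted interior node all of whose parents are listed
        n = next(n for n in sorted(interior - listed)
                 if parents.get(n, set()) <= listed)
        listed.add(n); order.append(n)
        m = children[n][0]                       # follow left children
        while m in interior and m not in listed \
                and parents.get(m, set()) <= listed:
            listed.add(m); order.append(m)
            m = children[m][0]
    return order

# DAG from the worked example: node -> [left child, right child]
children = {1: [2, 3], 2: [6, 4], 3: [4, 12], 4: [5, 8],
            5: [6, 7], 6: [9, 10], 8: [11, 12]}
order = node_listing(children)
print(order)            # [1, 2, 3, 4, 5, 6, 8]
print(order[::-1])      # evaluation order: [8, 6, 5, 4, 3, 2, 1]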
Labeling Algorithm
 The labeling algorithm is used by the compiler during the code generation
phase. It is used to find out how many registers are required to evaluate
an expression without storing intermediate results into memory.
 Using the labeling algorithm, the labels are computed by visiting the
nodes of the tree in bottom-up order.
 For an interior node n whose left child has label L1 and whose right
child has label L2:
label(n) = max(L1, L2)   if L1 ≠ L2
label(n) = L1 + 1        if L1 = L2
 We start in bottom-up fashion and label each left leaf as 1 and each
right leaf as 0.
Example: Labeling Algorithm

t1 := a+b
t2 := c+d
t3 := e-t2
t4 := t1-t3
Three Address Code

Labels (computed bottom-up):
a = 1, b = 0, c = 1, d = 0, e = 1
label(t1) = max(1, 0) = 1
label(t2) = max(1, 0) = 1
label(t3) = 1 + 1 = 2      (since label(e) = label(t2) = 1)
label(t4) = max(1, 2) = 2

Postorder traversal: a b t1 e c d t2 t3 t4
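 The labeling rule can be checked with a few lines of Python; the tuple encoding of the expression tree below is an assumption made for this illustration. Applied to the tree for t4 above, it reproduces the labels 1, 1, 2, 2.

# Minimal sketch: Sethi-Ullman style labeling of an expression tree.
# A node is ("leaf", name) or (op, left_subtree, right_subtree).

def label(node, is_left=True):
    if node[0] == "leaf":
        return 1 if is_left else 0          # left leaf -> 1, right leaf -> 0
    _, l, r = node
    L1 = label(l, True)
    L2 = label(r, False)
    return max(L1, L2) if L1 != L2 else L1 + 1

# t4 = (a+b) - (e - (c+d)) : the tree from the example above
t1 = ("+", ("leaf", "a"), ("leaf", "b"))
t2 = ("+", ("leaf", "c"), ("leaf", "d"))
t3 = ("-", ("leaf", "e"), t2)
t4 = ("-", t1, t3)
print(label(t1), label(t2), label(t3), label(t4))   # 1 1 2 2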
Code Optimization
Machine independent optimization
Code Optimization
 Code optimization is a program transformation technique which tries to
improve the code by eliminating unnecessary code lines and arranging
the statements in a sequence that speeds up execution without wasting
resources.
Advantages
1. Faster execution
2. Better performance
3. Improves the efficiency
Code Optimization Techniques (Machine
Independent Techniques)
Techniques
1. Compile time evaluation
2. Common sub expressions elimination
3. Copy propagation
4. Code movement or code motion
5. Reduction in strength
6. Dead code elimination
Compile time evaluation
 Compile time evaluation means shifting of computations from run time to
compile time.
 There are two methods used to obtain the compile time evaluation.
Folding
 In the folding technique the computation of constant expressions is done at
compile time instead of run time.
Example: length = (22/7) * d
 Here folding is applied by performing the computation of 22/7 at compile
time.
Constant propagation
 In this technique the value of variable is replaced and computation of an
expression is done at compilation time.
Example : pi = 3.14; r = 5;
Area = pi * r * r;
 Here, at compilation time, the value of pi is replaced by 3.14 and r by 5,
so the expression 3.14 * 5 * 5 can be evaluated at compile time.
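 A tiny Python sketch of folding combined with constant propagation over straight-line three-address statements follows; the (dest, op, arg1, arg2) statement format and the consts table are assumptions made for this illustration.

# Minimal sketch: constant propagation + folding over a straight-line block.
# Each statement is (dest, op, arg1, arg2); op None means a plain copy.

def fold_constants(block):
    consts, out = {}, []
    for dest, op, a1, a2 in block:
        v1 = consts.get(a1, a1)                     # propagate known constants
        v2 = consts.get(a2, a2)
        if op is None and isinstance(v1, (int, float)):
            consts[dest] = v1                       # dest is now a known constant
            continue
        if op and isinstance(v1, (int, float)) and isinstance(v2, (int, float)):
            val = {"+": v1 + v2, "-": v1 - v2,
                   "*": v1 * v2, "/": v1 / v2}[op]  # fold at "compile time"
            consts[dest] = val
            continue
        out.append((dest, op, v1, v2))              # keep, with constants substituted
        consts.pop(dest, None)
    return out, consts

block = [("pi", None, 3.14, None),
         ("r",  None, 5,    None),
         ("area", "*", "pi", "r"),      # becomes 3.14 * 5 at compile time
         ("area", "*", "area", "r")]
print(fold_constants(block))            # no statements remain; area = 78.5 is known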
Common sub expressions elimination
 The common sub expression is an expression appearing repeatedly in the
program which is computed previously.
 If the operands of this sub expression do not get changed at all then result
of such sub expression is used instead of re-computing it each time.
 Example:
Before optimization:          After optimization:
t1 := 4 * i                   t1 := 4 * i
t2 := a + 2                   t2 := a + 2
t3 := 4 * j                   t3 := 4 * j
t4 := 4 * i                   t5 := n
t5 := n                       t6 := b[t1] + t5
t6 := b[t4] + t5
 The sub-expression 4 * i has already been computed into t1, so the
statement t4 := 4 * i is eliminated and t6 uses t1 instead of t4.
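 A local (within one block) version of this technique can be sketched with a table of available expressions, as below. The statement format is assumed for illustration, and the resulting copy t4 := t1 would normally be removed by a later copy-propagation pass.

# Minimal sketch: local common sub-expression elimination.
# Statements are (dest, op, arg1, arg2); a recomputation of an expression
# whose operands are unchanged is replaced by a copy of the earlier result.

def eliminate_cse(block):
    available, out = {}, []                       # (op, a1, a2) -> name holding its value
    for dest, op, a1, a2 in block:
        key = (op, a1, a2)
        if key in available:                      # already computed with same operands
            out.append((dest, "copy", available[key], None))
        else:
            out.append((dest, op, a1, a2))
        # redefining dest kills every expression that uses it or is held in it
        available = {k: v for k, v in available.items()
                     if v != dest and dest not in (k[1], k[2])}
        if key not in available and dest not in (a1, a2):
            available[key] = dest
    return out

block = [("t1", "*", 4, "i"), ("t2", "+", "a", 2), ("t3", "*", 4, "j"),
         ("t4", "*", 4, "i"), ("t5", None, "n", None)]
for s in eliminate_cse(block):
    print(s)          # t4 becomes a copy of t1; the other statements are unchanged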
Copy Propagation
 Copy propagation means use of one variable instead of
another.
 Example:
x = pi;
area = x * r * r;
 After copy propagation (using pi in place of x):
area = pi * r * r;
Code Movement or Code Motion
 Optimization can be obtained by moving some amount of code outside
the loop and placing it just before entering in the loop.
 It won't have any difference if it executes inside or outside the loop.
 This method is also called loop invariant computation.
 Example:
Before optimization:          After optimization:
while(i <= max-1)             N = max-1;
{                             while(i <= N)
    sum = sum + a[i];         {
}                                 sum = sum + a[i];
                              }
Reduction in Strength
 Certain operators are costlier (stronger) than others; for instance,
the strength of * is higher than that of +.
 In this technique, occurrences of a higher-strength operator are
replaced by equivalent lower-strength operators.
 Example:
Before optimization:          After optimization:
for(i=1; i<=50; i++)          temp = 7;
{                             for(i=1; i<=50; i++)
    count = i*7;              {
}                                 count = temp;
                                  temp = temp + 7;
                              }
 Here count takes the values 7, 14, 21, ... and so on in both versions,
but the multiplication has been replaced by an addition.
Dead Code Elimination
 A variable is said to be dead at a point in a program if the value it
contains is never used after that point.
 Code that only computes such variables is said to be dead code.
 Example:
i=0;
if(i==1)
{
Dead Code
a=x+5;
}

 The if statement is dead code, as its condition will never be satisfied;
hence the statement can be eliminated and the code optimized.
Code Optimization
Machine dependent optimization
Machine Dependent Optimization
 Machine dependent optimization may vary from machine to machine.
 Machine-dependent optimization is done after the target code has been
generated and when the code is transformed according to the target
machine architecture.
 Machine-dependent optimizers try to take maximum advantage of the
memory hierarchy.
Techniques
1. Register allocation – the number of registers varies from machine to
machine, and the registers used may be 32-bit or 64-bit.
2. Use of addressing modes – addressing modes also vary from machine
to machine.
3. Peephole optimization
Peephole Optimization
 Peephole optimization is a simple and effective technique for locally
improving target code.
 This technique is applied to improve the performance of the target
program by examining the short sequence of target instructions (called
the peephole) and replacing these instructions by shorter or faster
sequence whenever possible.
Characteristics of peephole optimization
 The peephole is a small, moving window on the target program.

1. Redundant loads and stores


2. Flow of control optimization
3. Algebraic simplification
4. Reduction in strength
5. Machine idioms
Redundant Loads and Stores
 Redundant loads and stores can be eliminated by transformations of the
following type.
 Example:
MOV R0, x
MOV x, R0
 We can eliminate the second instruction, since the value of x is already
in R0 (provided the second instruction is not a jump target).
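 A minimal Python sketch of such a peephole pass slides a two-instruction window over the code and drops a load that immediately follows the matching store; the instruction strings and the omission of label handling are simplifying assumptions for this illustration.

# Minimal sketch: peephole removal of a redundant load after a store.
# Pattern:  MOV R, x  followed by  MOV x, R  ->  drop the second MOV
# (assumes the second instruction is not a jump target).

def remove_redundant_loads(code):
    out = []
    for instr in code:
        if out:
            prev = out[-1]
            if prev.startswith("MOV") and instr.startswith("MOV"):
                _, pargs = prev.split(None, 1)
                _, iargs = instr.split(None, 1)
                psrc, pdst = [s.strip() for s in pargs.split(",")]
                isrc, idst = [s.strip() for s in iargs.split(",")]
                if psrc == idst and pdst == isrc:     # MOV R0,x ; MOV x,R0
                    continue                          # x already in R0 - skip the load
        out.append(instr)
    return out

print(remove_redundant_loads(["MOV R0, x", "MOV x, R0", "ADD y, R0"]))
# ['MOV R0, x', 'ADD y, R0']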
Flow of Control Optimization
 The unnecessary jumps can be eliminated in either the intermediate code
or the target code by the following types of peephole optimizations.
 We can replace a jump to a jump:
goto L1                     goto L2
......           becomes    ......
L1: goto L2                 L1: goto L2
 It may then be possible to eliminate the statement L1: goto L2 entirely,
provided it is preceded by an unconditional jump and no other statements
jump to L1.
 Similarly, the conditional sequence can be replaced:
if a < b goto L1            if a < b goto L2
......           becomes    ......
L1: goto L2                 L1: goto L2
Algebraic Simplification
 Peephole optimization is an effective technique for algebraic
simplification.
 Statements such as x := x + 0 or x := x * 1 can be eliminated by
peephole optimization.
Reduction in Strength
 Certain machine instructions are cheaper than the other.
 In order to improve performance of the intermediate code we can replace
these instructions by equivalent cheaper instruction.
 For example, x * x is cheaper than x2 computed through a call to an
exponentiation routine.
 Similarly, addition and subtraction are cheaper than multiplication and
division, so where possible a multiplication or division can be replaced
by an equivalent sequence of additions, subtractions or shifts.
Machine Idioms
 The target machine may have specialized instructions (machine idioms)
that implement certain operations efficiently.
 Hence we can replace sequences of target instructions by these
equivalent machine instructions in order to improve efficiency.
 Example: Some machines have auto-increment or auto-decrement
addressing modes.
(Example : INC i)
 These modes can be used in code for statement like i=i+1.
Thank You