
Reduced Instruction Set Computers

Chapter 13, William Stallings, Computer Organization and Architecture, 7th Edition

Major Advances in Computers


The family concept
IBM System/360 in 1964; DEC PDP-8
Separates architecture from implementation

Cache memory
IBM S/360 model 85 in 1968

Pipelining
Introduces parallelism into a sequential process

Multiple processors

The Next Step - RISC


Reduced Instruction Set Computer
Key features
Large number of general purpose registers, or use of compiler technology to optimize register use
Limited and simple instruction set
Emphasis on optimising the instruction pipeline

Comparison of processors

Driving force for CISC


Increasingly complex high-level languages (HLLs): structured and object-oriented programming
Semantic gap: narrowed by the implementation of complex instructions
Leads to:
Large instruction sets
More addressing modes
Hardware implementations of HLL statements, e.g. CASE (switch) on VAX
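As a concrete illustration (not from the slides) of the kind of HLL statement CISC designers targeted: a VAX can dispatch a C switch like the one below with a single CASE instruction, while a RISC compiler lowers it to a sequence of simple compares and branches or a jump table built from ordinary loads and an indirect jump. The function and its data are invented for illustration.

/* Illustrative only: the multiway branch a VAX CASE instruction
 * implements directly, and a RISC lowers to simple instructions. */
#include <stdio.h>

static const char *day_kind(int day)   /* hypothetical helper */
{
    switch (day) {                      /* the "CASE" construct */
    case 0:
    case 6:  return "weekend";
    case 1: case 2: case 3: case 4: case 5:
             return "weekday";
    default: return "invalid";
    }
}

int main(void)
{
    for (int d = 0; d < 8; d++)
        printf("%d -> %s\n", d, day_kind(d));
    return 0;
}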

Intention of CISC
Ease compiler writing (narrowing the semantic gap)
Improve execution efficiency
Complex operations implemented in microcode (the programming language of the control unit)

Support more complex HLLs

Execution Characteristics
Operations performed (types of instructions)
Operands used (memory organization, addressing modes)
Execution sequencing (pipeline organization)

Dynamic Program Behaviour


Studies have been done based on programs written in HLLs
Dynamic studies measure behaviour during the execution of the program
Areas studied: operations, operands, procedure calls

Operations
Assignments
Simple movement of data

Conditional statements (IF, LOOP)


Compare and branch instructions => Sequence control

Procedure call-return is very time consuming
Some HLL instructions lead to many machine code operations and memory references

Weighted Relative Dynamic Frequency of HLL Operations [PATT82a]


          Dynamic               Machine-Instruction  Memory-Reference
          Occurrence            Weighted              Weighted
          Pascal     C          Pascal     C          Pascal     C
ASSIGN    45%        38%        13%        13%        14%        15%
LOOP      5%         3%         42%        32%        33%        26%
CALL      15%        12%        31%        33%        44%        45%
IF        29%        43%        11%        21%        7%         13%
GOTO      -          3%         -          -          -          -
OTHER     6%         1%         3%         1%         2%         1%

Operands
Mainly local scalar variables
Optimisation should concentrate on accessing local variables

                    Pascal    C         Average
Integer constant    16%       23%       20%
Scalar variable     58%       53%       55%
Array/structure     26%       24%       25%

Procedure Calls
Very time consuming
Load depends on the number of parameters passed
Load depends on the level of nesting
Most programs do not do a lot of calls followed by lots of returns
Limited depth of nesting
Most variables are local

Why CISC (1)?
Compiler simplification?

Disputed:
Complex machine instructions are harder to exploit
Optimization is more difficult

Smaller programs?
Program takes up less memory, but memory is now cheap
May not occupy fewer bits, just look shorter in symbolic form
More instructions require longer op-codes
Register references require fewer bits

Why CISC (2)?


Faster programs?
Bias towards use of simpler instructions
More complex control unit
Thus even simple instructions take longer to execute

It is far from clear that CISC is the appropriate solution

Implications - RISC

Best support is given by optimising the most used and most time-consuming features
Large number of registers
Operand referencing (assignments, locality)

Careful design of pipelines


Conditional branches and procedures

Simplified (reduced) instruction set - for optimization of pipelining and efficient use of registers

RISC v CISC
Not clear cut; many designs borrow from both design strategies, e.g. PowerPC and Pentium II
No pair of RISC and CISC machines that are directly comparable
No definitive set of test programs
Difficult to separate hardware effects from compiler effects
Most comparisons done on toy rather than production machines

RISC v CISC (cont.)
No. of instructions: 69 - 303
No. of instruction sizes: 1 - 56
Max. instruction size (bytes): 4 - 56
No. of addressing modes: 1 - 44
Indirect addressing: no - yes
Move combined with arithmetic: no - yes
Max. no. of memory operands: 1 - 6

Large Register File


Software solution
Require compiler to allocate registers
Allocation is based on most used variables in a given time
Requires sophisticated program analysis

Hardware solution
Have more registers
Thus more variables will be held in registers

Registers for Local Variables


Store local scalar variables in registers
Reduces memory access and simplifies addressing
Every procedure (function) call changes locality:
Parameters must be passed down
Results must be returned
Variables from calling programs must be restored

Register Windows
Only a few parameters are passed between procedures
Limited depth of procedure calls
Use multiple small sets of registers
A call switches to a different set of registers
A return switches back to a previously used set of registers

Register Windows cont.


Three areas within a register set
1. Parameter registers
2. Local registers
3. Temporary registers

Temporary registers from one set overlap with parameter registers from the next
This allows parameter passing without moving data

Overlapping Register Windows

Circular Buffer diagram

Operations of Circular Buffer


When a call is made, a current window pointer is moved to show the currently active register window
If all windows are in use and a new procedure is called, an interrupt is generated and the oldest window (the one furthest back in the call nesting) is saved to memory

Operations of Circular Buffer (cont.)


At a return, a window may have to be restored from main memory
A saved window pointer indicates where the next saved window should be restored
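A minimal sketch in C of the circular-buffer bookkeeping described above, assuming six windows and treating them as non-overlapping for simplicity; the names cwp, swp and depth follow the current/saved window pointers on the slides, but the overflow/underflow handling is illustrative rather than tied to any real processor.

/* Sketch: circular buffer of register windows (N_WINDOWS assumed).
 * cwp = current window pointer (active window);
 * swp = saved window pointer (oldest window still held in registers);
 * windows behind swp have been spilled to memory. */
#include <stdio.h>

#define N_WINDOWS 6

static int cwp = 0;     /* current window pointer */
static int swp = 0;     /* saved window pointer   */
static int depth = 1;   /* activations currently resident in windows */

static void procedure_call(void)
{
    if (depth == N_WINDOWS) {                 /* all windows in use: window overflow */
        printf("overflow: save window %d to memory\n", swp);
        swp = (swp + 1) % N_WINDOWS;          /* oldest window spilled */
        depth--;
    }
    cwp = (cwp + 1) % N_WINDOWS;              /* call switches to the next window */
    depth++;
}

static void procedure_return(void)
{
    if (depth == 1) {                         /* caller's window not resident: underflow */
        swp = (swp - 1 + N_WINDOWS) % N_WINDOWS;
        printf("underflow: restore window %d from memory\n", swp);
        depth++;
    }
    cwp = (cwp - 1 + N_WINDOWS) % N_WINDOWS;  /* return switches back to the caller's window */
    depth--;
}

int main(void)
{
    for (int i = 0; i < 8; i++) procedure_call();    /* nest 8 calls deep: oldest windows spill */
    for (int i = 0; i < 8; i++) procedure_return();  /* unwind: spilled windows are restored */
    printf("cwp=%d swp=%d depth=%d\n", cwp, swp, depth);
    return 0;
}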

Global Variables
Allocated by the compiler to memory
Inefficient for frequently accessed variables

Alternative: have a set of registers dedicated to storing global variables

SPARC register windows


Scalable Processor Architecture (Sun)
Physical registers: 0-135
Logical registers:
Global variables: 0-7
Procedure A: parameters 135-128, locals 127-120, temporary 119-112
Procedure B: parameters 119-112, etc.
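A minimal sketch of how this logical-to-physical numbering can be computed, assuming the layout shown above: 8 globals, windows of 8 parameter, 8 local and 8 temporary registers overlapping by 8, numbered downward from physical register 135. The function and enum names are illustrative assumptions.

/* Sketch of the SPARC-style numbering on this slide.
 * window = call depth (0 for procedure A, 1 for B, ...), i = 0..7. */
#include <stdio.h>
#include <assert.h>

enum reg_class { GLOBAL, PARAM, LOCAL, TEMP };

static int physical_reg(enum reg_class c, int window, int i)
{
    assert(i >= 0 && i < 8);
    switch (c) {
    case GLOBAL: return i;                        /* physical 0-7, shared by all windows */
    case PARAM:  return 135 - 16 * window - i;    /* procedure A: 135-128 */
    case LOCAL:  return 127 - 16 * window - i;    /* procedure A: 127-120 */
    case TEMP:   return 119 - 16 * window - i;    /* procedure A: 119-112 */
    }
    return -1;
}

int main(void)
{
    /* A's temporaries are the same physical registers as B's parameters,
     * so parameters are passed without moving any data. */
    printf("A temp 0  -> physical %d\n", physical_reg(TEMP, 0, 0));   /* 119 */
    printf("B param 0 -> physical %d\n", physical_reg(PARAM, 1, 0));  /* 119 */
    return 0;
}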

Compiler Based Register Optimization
Assume a small number of registers (16-32)
Optimizing use is up to the compiler
HLL programs usually have no explicit references to registers
Assign a symbolic or virtual register to each candidate variable
Map (unlimited) symbolic registers to real registers
Symbolic registers that do not overlap can share a real register
If you run out of real registers some variables use memory

Graph Coloring
Given a graph of nodes and edges:
Assign a color to each node
Adjacent nodes have different colors
Use a minimum number of colors
Nodes are symbolic registers
Two registers that are live in the same program fragment are joined by an edge
Try to color the graph with n colors, where n is the number of real registers
Nodes that cannot be colored are placed in memory
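A minimal sketch of this idea, using a simple greedy coloring rather than a full Chaitin-style allocator: symbolic registers are nodes, an edge joins two registers that are live at the same time, n colors correspond to n real registers, and uncolored nodes spill to memory. The interference graph is invented example data, and three real registers are assumed so that one node spills.

/* Greedy graph-coloring register allocation sketch (illustrative data). */
#include <stdio.h>
#include <stdbool.h>

#define N_SYMBOLIC 6
#define N_REAL     3   /* number of real registers (colors) */
#define SPILL     -1

int main(void)
{
    /* interference[i][j] = true if symbolic registers i and j are live
     * in the same program fragment. */
    bool interference[N_SYMBOLIC][N_SYMBOLIC] = {{false}};
    int  edges[][2] = {{0,1},{0,2},{1,2},{1,3},{2,3},{3,4},{4,5},{2,5},{0,5}};
    for (unsigned k = 0; k < sizeof edges / sizeof edges[0]; k++) {
        interference[edges[k][0]][edges[k][1]] = true;
        interference[edges[k][1]][edges[k][0]] = true;
    }

    int color[N_SYMBOLIC];
    for (int i = 0; i < N_SYMBOLIC; i++) {
        bool used[N_REAL] = {false};
        for (int j = 0; j < i; j++)               /* colors taken by already-colored neighbours */
            if (interference[i][j] && color[j] != SPILL)
                used[color[j]] = true;
        color[i] = SPILL;                         /* spill if no color is free */
        for (int c = 0; c < N_REAL; c++)
            if (!used[c]) { color[i] = c; break; }
    }

    for (int i = 0; i < N_SYMBOLIC; i++) {
        if (color[i] == SPILL)
            printf("symbolic r%d -> memory (spilled)\n", i);
        else
            printf("symbolic r%d -> real R%d\n", i, color[i]);
    }
    return 0;
}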

Graph Coloring Approach

RISC Pipelining
Most instructions are register to register
Arithmetic/logic instruction:
I: Instruction fetch
E: Execute (ALU operation with register input and output)

Load/store instruction:
I: Instruction fetch
E: Execute (calculate memory address)
D: Memory (register-to-memory or memory-to-register operation)

Delay Slots in the Pipeline


Sequential
                    1   2   3   4   5   6   7   8   9   10  11
LOAD  rA, m1        I   E   D
LOAD  rB, m2                    I   E   D
ADD   rC, rA, rB                            I   E
STORE m3, rC                                        I   E   D

Pipelined
                    1   2   3   4   5   6
LOAD  rA, m1        I   E   D
LOAD  rB, m2            I   E   D
ADD   rC, rA, rB            I   E
STORE m3, rC                    I   E   D

Optimization of Pipelining
Code reorganization techniques to reduce data and branch dependencies
Delayed branch:
Branch does not take effect until after execution of the following instruction
This following instruction is the delay slot
More successful with unconditional branches

1st approach: insert a NOOP (prevents fetching the wrong instruction, avoids a pipeline flush, and delays the effect of the jump)
2nd approach: reorder instructions so that a useful instruction fills the delay slot (as sketched below)
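A minimal sketch of the second approach for unconditional branches, assuming the instruction moved into the delay slot is not itself a branch or a branch target; the instruction representation is invented for illustration, and the program mirrors the example in the Normal and Delayed Branch table below.

/* Delay-slot filling sketch: move the instruction just before an
 * unconditional JUMP into the delay slot; if there is no safe
 * candidate, pad with a NOOP instead. */
#include <stdio.h>

#define MAX 32

typedef struct {
    char text[32];
    int  is_jump;     /* unconditional branch: reads no registers */
} Insn;

/* Rewrite src[0..n-1] into dst; returns the new length
 * (it grows only when a NOOP has to be inserted). */
static int fill_delay_slots(const Insn *src, int n, Insn *dst)
{
    int out = 0;
    for (int i = 0; i < n; i++) {
        if (src[i].is_jump && out > 0 && !dst[out - 1].is_jump) {
            dst[out] = dst[out - 1];       /* hoist previous instruction into the delay slot */
            dst[out - 1] = src[i];
            out++;
        } else if (src[i].is_jump) {
            dst[out++] = src[i];
            Insn noop = {"NOOP", 0};
            dst[out++] = noop;             /* nothing safe to move: pad with NOOP */
        } else {
            dst[out++] = src[i];
        }
    }
    return out;
}

int main(void)
{
    Insn prog[] = {
        {"LOAD rA, X", 0},
        {"ADD rA, 1", 0},
        {"JUMP 105", 1},
        {"STORE Z, rA", 0},   /* jump target, shown only for context */
    };
    Insn out[MAX];
    int n = fill_delay_slots(prog, 4, out);
    for (int i = 0; i < n; i++)
        printf("%s\n", out[i].text);       /* prints LOAD, JUMP, ADD, STORE */
    return 0;
}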

Normal and Delayed Branch


Address   Normal branch     1st Delayed branch    2nd Delayed branch
100       LOAD rA, X        LOAD rA, X            LOAD rA, X
101       ADD rA, 1         ADD rA, 1             JUMP 105
102       JUMP 105          JUMP 106              ADD rA, 1
103       ADD rA, rB        NOOP                  ADD rA, rB
104       SUB rC, rB        ADD rA, rB            SUB rC, rB
105       STORE Z, rA       SUB rC, rB            STORE Z, rA
106                         STORE Z, rA

Use of Delayed Branch


Normal branch
                    1   2   3   4   5   6   7
100 LOAD  rA, X     I   E   D
101 ADD   rA, 1         I   E
102 JUMP  105               I   E
103 ADD   rA, rB                I   (fetched, then discarded)
105 STORE Z, rA                     I   E   D

Delayed branch
                    1   2   3   4   5   6
100 LOAD  rA, X     I   E   D
102 JUMP  105           I   E
101 ADD   rA, 1             I   E
105 STORE Z, rA                 I   E   D

MIPS S Series - Instructions


All instructions are 32 bits; three instruction formats
6-bit opcode; 5-bit register addresses or a 26-bit instruction address (e.g. jump), plus additional parameters (e.g. amount of shift)
ALU instructions: immediate or register addressing
Memory addressing: base (32-bit) + offset (16-bit)
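A minimal sketch of pulling these fields apart with shifts and masks, using the standard MIPS field layout (6-bit opcode, 5-bit register numbers, 16-bit offset, 26-bit jump target); the example encodings are ordinary MIPS instructions chosen for illustration, not taken from the slides.

/* Decode the common MIPS fields; for R-format the low 6 bits are the
 * function code, but all fields are printed here regardless of format. */
#include <stdio.h>
#include <stdint.h>

static void decode(uint32_t insn)
{
    uint32_t opcode = insn >> 26;                 /* bits 31..26 */
    uint32_t rs     = (insn >> 21) & 0x1F;        /* 5-bit register fields */
    uint32_t rt     = (insn >> 16) & 0x1F;
    uint32_t rd     = (insn >> 11) & 0x1F;
    uint32_t shamt  = (insn >>  6) & 0x1F;        /* amount of shift */
    int32_t  offset = (int16_t)(insn & 0xFFFF);   /* 16-bit signed offset (I-format) */
    uint32_t target = insn & 0x03FFFFFF;          /* 26-bit jump target (J-format) */

    printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u offset=%d target=0x%07X\n",
           (unsigned)opcode, (unsigned)rs, (unsigned)rt, (unsigned)rd,
           (unsigned)shamt, (int)offset, (unsigned)target);
}

int main(void)
{
    decode(0x8C880004);   /* lw  $t0, 4($a0)   I-format: base + 16-bit offset */
    decode(0x00851020);   /* add $v0, $a0, $a1 R-format: register addressing  */
    decode(0x08100000);   /* j   0x00400000    J-format: 26-bit target        */
    return 0;
}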

MIPS S Series - Pipelining


60 ns clock, 30 ns substages (superpipeline)
1. Instruction fetch
2. Decode / register read
3. ALU operation / memory address calculation
4. Cache access
5. Register write

MIPS R4000 pipeline


1. Instruction fetch 1: address generated
2. Instruction fetch 2: instruction fetched from cache
3. Register file: instruction decoded and operands fetched from registers
4. Instruction execute: ALU operation, virtual address calculation, or branch condition check
5. Data cache 1: virtual address sent to cache
6. Data cache 2: cache access
7. Tag check: checks on cache tags
8. Write back: result written into register
