SOEN 228 Final Review
Generic Operation (Hyde, 4)
Pipeline
Pipelining
Fetch instruction
Decode instruction
Calculate operands (i.e. effective addresses, EAs)
Fetch operands
Execute instruction
Write result
Overlap these operations (illustrated in the sketch below)
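Not taken from the slides: a minimal C sketch of how the six stages listed above overlap. Instruction i simply occupies stage s during clock cycle i + s, so once the pipeline is full, one instruction completes every cycle. The stage abbreviations (FI, DI, CO, FO, EI, WO) just label the six operations above.

```c
/* Minimal sketch of six-stage pipeline overlap (illustrative only):
 * instruction i occupies stage s during clock cycle i + s. */
#include <stdio.h>

int main(void) {
    const char *stages[] = {"FI", "DI", "CO", "FO", "EI", "WO"};
    const int n_stages = 6, n_instr = 4;

    printf("cycle:");
    for (int c = 1; c <= n_instr + n_stages - 1; c++)
        printf("%4d", c);
    printf("\n");

    for (int i = 0; i < n_instr; i++) {
        printf("  I%d: ", i + 1);
        for (int c = 1; c <= n_instr + n_stages - 1; c++) {
            int s = c - 1 - i;              /* stage this instruction is in */
            if (s >= 0 && s < n_stages)
                printf("%4s", stages[s]);
            else
                printf("%4s", ".");
        }
        printf("\n");
    }
    return 0;
}
```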
Six Stage Instruction Pipeline
Timing Diagram for Instruction Pipeline Operation
The Effect of a Conditional Branch on Instruction Pipeline Operation
Stalls in a Pipeline
Data Hazards
A data hazard is a conflict in access to an operand location:
Two instructions are to be executed in sequence
Both access a particular memory or register operand
If executed in strict sequence, no problem occurs
If executed in a pipeline, the operand value could be updated so as to produce a different result from strict sequential execution
E.g. x86 machine instruction sequence:
ADD EAX, EBX   /* EAX = EAX + EBX */
SUB ECX, EAX   /* ECX = ECX - EAX */
The ADD instruction does not update EAX until the end of stage 5, at clock cycle 5
The SUB instruction needs the value at the beginning of its stage 2, at clock cycle 4
The pipeline must stall for two clock cycles (derived in the sketch below)
Without special hardware and specific avoidance algorithms, this results in inefficient pipeline usage
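A hedged sketch of where the "two clock cycles" figure comes from, using only the numbers stated on the slide: the result becomes usable at the start of the cycle after it is written, so the dependent operand fetch slips from cycle 4 to cycle 6.

```c
/* Sketch deriving the two-cycle stall in the ADD/SUB example above.
 * Numbers follow the slide: EAX is written at the end of clock cycle 5,
 * but SUB needs it at the beginning of clock cycle 4. */
#include <stdio.h>

int main(void) {
    int write_done_cycle   = 5;  /* ADD updates EAX at the end of this cycle     */
    int operand_need_cycle = 4;  /* SUB needs EAX at the beginning of this cycle */

    /* The value is usable from the start of cycle write_done_cycle + 1,
     * so SUB's operand fetch must be delayed until then. */
    int stalls = (write_done_cycle + 1) - operand_need_cycle;
    if (stalls < 0)
        stalls = 0;
    printf("stall cycles needed: %d\n", stalls);   /* prints 2 */
    return 0;
}
```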
Data Hazard (Randall Hyde, fig. 4.10, p. 30)
How Intel Handles the Data Hazard (Randall Hyde, fig. 4.11, p. 32)
Harvard Architecture
The architecture where the CPU has two separate memory spaces, one for instructions and one for data, each with its own bus, is called the Harvard architecture, since the first such machine was built at Harvard. On a Harvard machine there would be no contention for the bus: the BIU could continue to fetch opcodes on the instruction bus while accessing memory on the data/memory bus (a toy model is sketched below).
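A toy model (not from the slides) of the idea: because the instruction and data memories sit on physically separate buses, an opcode fetch and a data access can happen in the same cycle with no bus contention. All sizes and contents below are arbitrary.

```c
/* Toy model of a Harvard machine: instruction and data memories are
 * physically separate, so a fetch and a data access in the same cycle
 * never contend for the same bus. (Illustrative sketch only.) */
#include <stdint.h>
#include <stdio.h>

#define IMEM_WORDS 256
#define DMEM_WORDS 256

typedef struct {
    uint32_t imem[IMEM_WORDS];   /* instruction memory, instruction bus */
    uint32_t dmem[DMEM_WORDS];   /* data memory, data bus               */
} harvard_machine;

/* In one clock cycle the BIU can fetch the next opcode while the
 * execution unit reads or writes data -- no bus conflict. */
static void one_cycle(harvard_machine *m, uint32_t pc, uint32_t data_addr) {
    uint32_t opcode = m->imem[pc % IMEM_WORDS];         /* instruction bus */
    uint32_t data   = m->dmem[data_addr % DMEM_WORDS];  /* data bus        */
    printf("fetched opcode %08x, read data %08x\n",
           (unsigned)opcode, (unsigned)data);
}

int main(void) {
    harvard_machine m = {{0}, {0}};
    m.imem[0] = 0x90909090;   /* arbitrary contents for the demo */
    m.dmem[4] = 0xDEADBEEF;
    one_cycle(&m, 0, 4);
    return 0;
}
```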
The Harvard Machine
Application of Harvard Architecture
The extra pins needed on the processor to support two physically separate buses increase the cost of the processor and introduce many other engineering problems. Separate on-chip caches for data and instructions give most of the advantages at little extra cost. Advanced CPUs therefore use an internal Harvard architecture and an external von Neumann architecture.
Internal Harvard Architecture
Superscalar Architecture
A superscalar CPU has, essentially, several execution units. If it encounters two or more instructions in the instruction stream (i.e., the prefetch queue) which can execute independently, it will do so. Most superscalar CPUs do not completely duplicate the execution unit; there might be multiple ALUs, floating-point units, etc. This means that certain instruction sequences can execute very quickly while others won't (a sketch of the independence test follows).
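A hedged sketch of the independence test an issue stage might apply: two instructions can be issued together only if neither reads or writes a register the other writes and they do not compete for the same (single) execution unit. The structure and field names are invented for illustration; real dual-issue logic is considerably more involved.

```c
/* Sketch: can two decoded instructions issue in the same cycle?
 * They must have no register dependence and must not need the same
 * (single) execution unit. Field names are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { UNIT_ALU, UNIT_FPU } unit_t;

typedef struct {
    unit_t unit;    /* which execution unit the instruction needs */
    int dst;        /* destination register number                */
    int src1, src2; /* source register numbers                    */
} instr_t;

static bool can_dual_issue(instr_t a, instr_t b) {
    bool raw = (b.src1 == a.dst) || (b.src2 == a.dst); /* b reads what a writes  */
    bool war = (a.src1 == b.dst) || (a.src2 == b.dst); /* b writes what a reads  */
    bool waw = (a.dst == b.dst);                       /* both write same reg    */
    bool unit_conflict = (a.unit == b.unit);           /* only one of each unit  */
    return !raw && !war && !waw && !unit_conflict;
}

int main(void) {
    /* Register numbering is arbitrary: 0 = EAX, 1 = EBX, 2 = ECX.      */
    instr_t add = { UNIT_ALU, /*dst=*/0, /*src=*/0, 1 };  /* ADD EAX, EBX */
    instr_t sub = { UNIT_ALU, /*dst=*/2, /*src=*/2, 0 };  /* SUB ECX, EAX */
    printf("dual issue possible: %s\n", can_dual_issue(add, sub) ? "yes" : "no");
    return 0;
}
```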
Superscalar machine
DMA
Commands control DMA I/O. A DMA command, held in main memory, contains:
Control bits (read/write)
Track/sector
# bytes
Memory address
Chain address
Commands are linked by their chain addresses into a DMA command list (a hypothetical C layout follows).
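The fields above map naturally onto a command block in memory. A hypothetical C layout (field names and widths are invented, not a real controller's format):

```c
/* Hypothetical layout of one DMA command block (names and widths are
 * illustrative, not a real controller's register map). Commands can be
 * chained into a DMA command list via chain_address. */
#include <stdint.h>

struct dma_command {
    uint32_t control_bits;    /* read/write and other control flags        */
    uint32_t track_sector;    /* disk track/sector to transfer             */
    uint32_t byte_count;      /* number of bytes to transfer               */
    uint32_t memory_address;  /* main-memory buffer address                */
    uint32_t chain_address;   /* address of next command, 0 = end of list  */
};
```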
DMA Example: Disk DMA
The command has a memory address, a disk/sector address, R/W, a number of bytes, and a chain address
The CPU sends it the address of the command
The CPU sends it a Start I/O
The disk controller picks up the command and starts the transfer (sketched in code below)
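A sketch of the CPU side of this sequence. The start_io() stub is invented and stands in for "send the controller the command's address, then issue Start I/O"; a real controller would then run the transfer on its own and interrupt the CPU when finished.

```c
/* Sketch of the CPU side of the disk DMA example above. The command
 * layout is repeated from the previous sketch for self-containment;
 * all names are illustrative. */
#include <stdint.h>
#include <stdio.h>

struct dma_command {
    uint32_t control_bits;    /* bit 0: 1 = read from disk to memory */
    uint32_t track_sector;
    uint32_t byte_count;
    uint32_t memory_address;
    uint32_t chain_address;   /* 0 = last command in the list */
};

static void start_io(const struct dma_command *cmd) {
    /* Stub: real hardware would receive the command's address in a
     * controller register, be told Start I/O, fetch the command, and
     * run the transfer while the CPU does other work. */
    printf("controller: transferring %u bytes of track/sector %u to 0x%08x\n",
           (unsigned)cmd->byte_count, (unsigned)cmd->track_sector,
           (unsigned)cmd->memory_address);
}

int main(void) {
    static uint8_t buffer[4096];
    struct dma_command cmd = {
        .control_bits   = 1,                          /* read           */
        .track_sector   = 42,
        .byte_count     = sizeof buffer,
        .memory_address = (uint32_t)(uintptr_t)buffer,
        .chain_address  = 0,                          /* single command */
    };
    start_io(&cmd);   /* CPU hands off and is free to do other work */
    return 0;
}
```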
Cycle Stealing
This works on a request/grant basis (a toy model follows).
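A toy model (assumptions, not from the slides) of the request/grant handshake: the DMA controller raises a bus-request line when it has a word to move, the CPU grants the bus between its own bus cycles, one memory cycle is "stolen" for the transfer, and the request is dropped.

```c
/* Toy model of cycle stealing: the DMA controller requests the bus,
 * the CPU grants it between its own bus cycles, one word is moved,
 * and the bus is released. Signal names are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

static bool bus_request = false;   /* asserted by the DMA controller */
static bool bus_grant   = false;   /* asserted by the CPU            */

static void dma_wants_word(void) { bus_request = true; }

static void cpu_bus_cycle(int cycle) {
    if (bus_request) {
        bus_grant = true;          /* CPU pauses for one memory cycle */
        printf("cycle %d: DMA steals the bus, transfers one word\n", cycle);
        bus_request = false;
        bus_grant = false;         /* bus returned to the CPU         */
    } else {
        printf("cycle %d: CPU uses the bus normally\n", cycle);
    }
}

int main(void) {
    for (int cycle = 1; cycle <= 5; cycle++) {
        if (cycle == 3)
            dma_wants_word();      /* controller has a word ready */
        cpu_bus_cycle(cycle);
    }
    return 0;
}
```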