dsm_lec2

Slide 1: Today's Lecture, Thursday Jan 22, 2008 (CS252)
- Dependability: MTTF, etc.
- Quantitative Principles of Computer Design
  - Taking Advantage of Parallelism
  - Principle of Locality
  - Focus on the Common Case
  - Amdahl's Law
  - The Processor Performance Equation
- Measuring Performance
  - Relative Performance of Two Systems
  - Benchmarks (if time)

Slide 2: Define and quantify power (2/2)
- Because leakage current flows even when a transistor is off, static power is now important too:
  Power_static = Current_static × Voltage
- Leakage current increases in processors with smaller transistor sizes.
- Increasing the number of transistors increases power even if they are turned off.
- In 2006, the goal for leakage was 25% of total power consumption; high-performance designs reach 40%.
- Very low power systems even gate the voltage to inactive modules to control losses due to leakage.
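To make the static power relation concrete, here is a minimal Python sketch; the leakage current and supply voltage values are assumptions chosen for illustration, not figures from the lecture.

# Illustrates Power_static = Current_static × Voltage from the slide.
# The numeric values below are assumed, not taken from the lecture.

leakage_current_a = 0.02      # assumed total leakage current, in amperes
supply_voltage_v = 1.2        # assumed supply voltage, in volts

power_static_w = leakage_current_a * supply_voltage_v
print(f"Static power: {power_static_w:.3f} W")

# If static power were 25% of total power (the 2006 goal on the slide),
# the implied total power budget would be:
total_power_w = power_static_w / 0.25
print(f"Implied total power at a 25% leakage share: {total_power_w:.3f} W")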
Slide 3: Define and quantify dependability (1/3)
- How do we decide when a system is operating properly?
- Infrastructure providers now offer Service Level Agreements (SLAs) to guarantee that their networking or power service will be dependable.
- With respect to an SLA, systems alternate between two states of service:
  1. Service accomplishment, where the service is delivered as specified in the SLA
  2. Service interruption, where the delivered service is different from the SLA
- Failure = transition from state 1 to state 2
- Restoration = transition from state 2 to state 1

Slide 4: Define and quantify dependability (2/3)
- Module reliability = a measure of continuous service accomplishment (or of time to failure). Two metrics:
  1. Mean Time To Failure (MTTF) measures reliability.
  2. Failures In Time (FIT) = 1/MTTF, the rate of failures, traditionally reported as failures per billion hours of operation.
- Mean Time To Repair (MTTR) measures service interruption.
- Mean Time Between Failures (MTBF) = MTTF + MTTR
- Module availability measures service as it alternates between the two states of accomplishment and interruption (a number between 0 and 1, e.g. 0.9):
  Module availability = MTTF / (MTTF + MTTR)
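A short Python sketch of how these metrics fit together; the MTTF and MTTR values below are assumed for illustration only, not taken from the lecture.

# Relates MTTF, FIT, MTBF, and availability as defined on the slide.
# The MTTF and MTTR values are assumed.

mttf_hours = 1_000_000        # assumed Mean Time To Failure
mttr_hours = 24               # assumed Mean Time To Repair

fit_per_billion_hours = 1e9 / mttf_hours          # FIT, reported per billion hours
mtbf_hours = mttf_hours + mttr_hours              # MTBF = MTTF + MTTR
availability = mttf_hours / (mttf_hours + mttr_hours)

print(f"FIT: {fit_per_billion_hours:.0f} failures per billion hours")
print(f"MTBF: {mtbf_hours} hours")
print(f"Availability: {availability:.6f}")        # close to 1 when MTTR << MTTF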
Slide 5: Example: calculating reliability
- If modules have exponentially distributed lifetimes (the age of a module does not affect its probability of failure), the overall failure rate is the sum of the failure rates of the modules.
- Calculate FIT and MTTF for 10 disks (1M hour MTTF per disk), 1 disk controller (0.5M hour MTTF), and 1 power supply (0.2M hour MTTF):
  FailureRate_system = 10 × (1 / MTTF_disk) + 1 / MTTF_controller + 1 / MTTF_powersupply
  MTTF_system = 1 / FailureRate_system
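Applying the slide's sum-of-failure-rates rule to the numbers given above, as a small Python sketch (the component names are just labels; the MTTF values are the ones stated on the slide):

# With exponentially distributed lifetimes, the system failure rate is the
# sum of the component failure rates (per the slide).

components = [
    ("disk", 10, 1_000_000),           # 10 disks, 1M hour MTTF each
    ("disk controller", 1, 500_000),   # 0.5M hour MTTF
    ("power supply", 1, 200_000),      # 0.2M hour MTTF
]

failure_rate_per_hour = sum(count / mttf for _, count, mttf in components)

fit = failure_rate_per_hour * 1e9               # failures per billion hours: 17,000 FIT
mttf_system_hours = 1 / failure_rate_per_hour   # about 58,824 hours

print(f"System failure rate: {failure_rate_per_hour:.2e} failures/hour")
print(f"FIT: {fit:,.0f} failures per billion hours")
print(f"System MTTF: {mttf_system_hours:,.0f} hours")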