DerekChiou_EE360N_Spring2010_Lecture22

Lecture 22: Vector Machines & Performance Evaluation
Prof. Derek Chiou, University of Texas at Austin
© Derek Chiou
Lecture 21 Survey Results (20 attempts total)

Question 1 (multiple choice): The lecture was clear.
  Agree: 70% | Little improvement required: 20% | Some improvement required: 10% | Significant improvement required: 0% | Unanswered: 0%

Question 2 (multiple choice): The lecture was well organized.
  Agree: 80% | Little improvement required: 15% | Some improvement required: 0% | Significant improvement required: 0% | Unanswered: 5%

Question 3 (multiple choice): The pace of the lecture was:
  Way too fast: 5% | A little too fast: 25% | Just right: 65% | A little too slow: 5% | Way too slow: 0% | Unanswered: 0%

Question 4 (essay): How could this lecture have been improved? What material should have been covered, or covered more clearly? What material should be removed? Any other comments are welcome. (8 responses given)

- The lecture was good. Do present-day processors use the cache you discussed, or do they use something more advanced and entirely different? Earlier, you talked about delay slots. Which is better: delay slots or branch predictors? 1) With delay slots, to exploit their full potential we need to carefully order the instructions and be able to find instructions without dependencies that we can place after the branch. If we can find such instructions in most cases, delay slots seem to be the better choice because they don't need the cache lookup, and the instructions executed in the pipeline are guaranteed to be useful (i.e., they won't get thrown away). 2) With branch prediction, the programmer doesn't have to worry much; the hardware does all the work. If the programmer knows about the branch predictor, he could write his conditions so that the instructions chosen by the predictor are the ones most likely to be executed. Has anyone tried to combine both techniques to get the benefits of both? Say we feed in a program treating the machine as a delay-slot machine; if the decoder encounters a nop (or a special nop instruction for this purpose), it realizes that the user doesn't have a useful instruction to fill the pipeline with and automatically starts predicting the branch from that point on. Do you think this is practical? Would it be useful?
- The idea of branch prediction is still vague.
- The lecture probably went a little slower than you wanted, but I liked the pace and the discussion.
- The lecture was quite good, but overall we could have gone a bit slower, since most of it was new material.
- A good review afterward helped, but in general I was lost during this lecture.
- I thought the lecture could have been better. You gave us a foundation of what branch prediction is all about. You could have taken an example of some machine (say ...
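Several responses above ask for a more concrete picture of branch prediction (one notes that the idea "is still vague"). The sketch below is not taken from the lecture; it is a minimal illustration of one common scheme, a table of 2-bit saturating counters indexed by low-order PC bits. The table size, the PC value, and the branch-outcome trace are made up for illustration only.

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of a 2-bit saturating-counter branch predictor.
 * Counter states: 0 = strongly not-taken, 1 = weakly not-taken,
 *                 2 = weakly taken,       3 = strongly taken.
 * Illustrative model only, not the specific scheme from the lecture. */

#define TABLE_BITS 4
#define TABLE_SIZE (1u << TABLE_BITS)

static uint8_t counters[TABLE_SIZE];   /* all start at 0 (strongly not-taken) */

/* Predict taken if the counter is in one of the two "taken" states. */
static int predict(uint32_t pc) {
    return counters[(pc >> 2) & (TABLE_SIZE - 1)] >= 2;
}

/* Move the counter toward the actual outcome, saturating at 0 and 3. */
static void update(uint32_t pc, int taken) {
    uint8_t *c = &counters[(pc >> 2) & (TABLE_SIZE - 1)];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
}

int main(void) {
    /* Hypothetical trace: one branch (pc = 0x40) taken nine times,
     * then not taken once, as a loop-closing branch might behave. */
    int outcomes[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 0};
    int correct = 0;

    for (int i = 0; i < 10; i++) {
        int guess = predict(0x40);
        correct += (guess == outcomes[i]);
        update(0x40, outcomes[i]);
    }
    printf("correct predictions: %d / 10\n", correct);
    return 0;
}
```

On this trace the predictor mispredicts the first two iterations while the counter warms up and the final not-taken iteration, getting 7 of 10 right; this tolerance of a single contrary outcome is why 2-bit counters handle loop-closing branches well, in contrast to a delay slot, whose usefulness depends on the compiler finding an independent instruction to schedule after every branch.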