Question
When making changes to optimize part of a processor, it is often the case that speeding up one type of instruction comes at the cost of slowing down something else. For example, if we put in a complicated fast floating-point unit, that takes space, and something might have to be moved farther away from the middle to accommodate it, adding an extra cycle in delay to reach that unit. The basic Amdahl's law equation does not take into account this trade-off.
a. If the new fast floating-point unit speeds up floating-point operations by, on average, 2×, and floating-point operations take 20% of the original program's execution time, what is the overall speedup (ignoring the penalty to any other instructions)?
b. Now assume that speeding up the floating-point unit slowed down data cache accesses, resulting in a 1.5× slowdown (or 2/3 speedup). Data cache accesses consume 10% of the execution time. What is the overall speedup now?
c. After implementing the new floating-point operations, what percentage of execution time is spent on floating-point operations? What percentage is spent on data cache accesses?
Explanation / Answer
a. Amdahl's law: speedup = 1 / ((1 - fraction_enhanced) + (fraction_enhanced / speedup_enhanced))
Speedup = 1 / (0.8 + 0.20/2) = 1 / 0.9 ≈ 1.11
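A quick way to sanity-check this arithmetic is to evaluate Amdahl's law directly. The sketch below (plain Python; the function and variable names are my own, not from the question) reproduces the 1.11× figure.

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    # Amdahl's law: overall speedup when only part of the execution
    # time benefits from the enhancement.
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Part (a): FP operations are 20% of execution time and get 2x faster.
print(amdahl_speedup(0.20, 2.0))  # ~1.111
```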
b. Since the floating-point change slows data cache accesses down by 1.5× (a "speedup" of 2/3), the cache's 10% share of execution time is divided by 2/3, i.e. multiplied by 3/2. The overall speedup is then
= 1 / ((1 - 0.2 - 0.1) + 0.2/2 + 0.1 × (3/2))
= 1 / (0.7 + 0.10 + 0.15) = 1 / 0.95 ≈ 1.05
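The same check works for part (b) by treating the new execution time as a sum of per-component times, each scaled by its own speedup (or slowdown). This is only an illustrative sketch; the helper name is hypothetical.

```python
def overall_speedup(fractions_and_speedups):
    # fractions_and_speedups: list of (fraction_of_time, speedup) pairs for
    # every affected component; the remaining time runs at the original speed.
    affected = sum(f for f, _ in fractions_and_speedups)
    new_time = (1.0 - affected) + sum(f / s for f, s in fractions_and_speedups)
    return 1.0 / new_time

# Part (b): FP (20%) gets 2x faster; data cache (10%) "speeds up" by 2/3 (a 1.5x slowdown).
print(overall_speedup([(0.20, 2.0), (0.10, 2.0 / 3.0)]))  # ~1.053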
c. The new total execution time is 0.7 + 0.20/2 + 0.10 × 3/2 = 0.95 of the original. Floating-point operations now take 0.20/2 = 0.10, so their share is 0.10 / 0.95 ≈ 0.105 = 10.5%. Data cache accesses now take 0.10 × 3/2 = 0.15, so their share is 0.15 / 0.95 ≈ 0.158 = 15.8%.
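For part (c), the same per-component times give the new fractions directly; this short sketch just divides each component's new time by the new total (variable names are mine).

```python
# Part (c): fractions of the *new* execution time (original total time = 1.0).
fp_new    = 0.20 / 2.0          # 0.10
cache_new = 0.10 * 1.5          # 0.15
other     = 1.0 - 0.20 - 0.10   # 0.70
total_new = fp_new + cache_new + other  # 0.95

print(fp_new / total_new)     # ~0.105 -> 10.5% on floating-point operations
print(cache_new / total_new)  # ~0.158 -> 15.8% on data cache accesses
```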