Question
Consider a RISC microprocessor, like the MIPS presented in the textbook, for which we want to implement the full addressable space using byte addressability and 32-bit addresses. Assume we have a 320GB hard disk, a 1GB main memory, a 2MB L2 Cache and a 512KB internal Cache. Assume that we define a memory hierarchy based on a block size of 32KB (block = page).
(a) How many blocks can be stored in each level of the memory hierarchy?
(b) Consider the three cache organizations (Direct, Fully Associative, and Set Associative Mapping). Explain how each of them works, as well as the advantages and disadvantages of each organization.
Explanation / Answer
Over time, the number of levels in the memory hierarchy has grown, and the performance at each level has improved.
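Part (a) is straightforward arithmetic: each level's capacity divided by the 32 KB block size. A quick sketch using the sizes given in the question:

```python
# Part (a): number of 32 KB blocks at each level of the hierarchy.
KB, MB, GB = 2**10, 2**20, 2**30
block = 32 * KB

levels = {
    "Hard disk (320 GB)": 320 * GB,
    "Main memory (1 GB)": 1 * GB,
    "L2 cache (2 MB)":    2 * MB,
    "L1 cache (512 KB)":  512 * KB,
}
for name, size in levels.items():
    print(f"{name}: {size // block} blocks")
# Hard disk: 10485760, Main memory: 32768, L2: 64, L1: 16
```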
Direct Mapped Cache: The direct mapped cache is the simplest form of cache and the easiest to check for a hit.
Unfortunately, the direct mapped cache also has the worst hit ratio. For example, a 512 KB cache with 32-byte lines has 16,384 lines; with 64 MB of system memory, each line is shared by 4,096 memory addresses.
In short, Direct Mapping:
• Address length = (s + w) bits, where w = log2(block size)
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Size of line field = r bits; number of lines in cache = m = 2^r; size of tag = (s − r) bits
• Size of cache = 2^(r+w) words or bytes
• Pro
— Simple
— Inexpensive
• Con
— Fixed cache location for any given block
— If a program repeatedly accesses two blocks that map to the same line, the miss rate is very high (thrashing)
• Victim cache
— A solution to direct mapped cache thrashing
— Discarded lines are stored in a small "victim" cache (4 to 16 lines)
— The victim cache is fully associative
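The address decomposition above can be sketched in a few lines. The field widths here are hypothetical illustration values (w = 15 matching the question's 32 KB blocks, r = 4 for a 16-line cache), not derived from the answer itself:

```python
# Direct mapping sketch: split an address into tag | line | offset fields.
# Hypothetical parameters: w = 15 offset bits (32 KB blocks), r = 4 index
# bits (16 cache lines).
w, r = 15, 4

def split_address(addr):
    """Decompose an address into (tag, line, offset)."""
    offset = addr & ((1 << w) - 1)         # low w bits: byte within the block
    line   = (addr >> w) & ((1 << r) - 1)  # next r bits: which cache line
    tag    = addr >> (w + r)               # remaining bits stored as the tag
    return tag, line, offset

# Two addresses that differ only in the tag bits collide on the same line:
a, b = 0, 1 << (w + r)
```

Alternating accesses between `a` and `b` evict each other on every reference, which is exactly the thrashing case the victim cache is meant to absorb.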
Fully Associative Cache: The fully associative cache has the best hit ratio because any line in the cache can hold any address that needs to be cached. This means the problem seen in the direct mapped cache disappears, because there is no dedicated single line that an address must use.
However, this cache suffers from search-cost problems. Because a block can be placed anywhere in the cache, a lookup requires all entries to be searched at once, which means one comparator per entry (expensive hardware). This is feasible only for very small caches.
In short, Fully Associative Mapping:
• Address length = (s + w) bits, where w = log2(block size)
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = not determined by the address format
• Size of tag = s bits
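The "comparator per entry" cost is easy to see in a sketch. The linear scan below stands in for what hardware does in parallel; the parameters are hypothetical, with w = 15 matching the question's 32 KB blocks:

```python
# Fully associative lookup sketch: with no index bits, the entire block
# number is the tag, and a hit requires comparing it against every line.
w = 15                  # 32 KB blocks
lines = []              # (tag, data) pairs; a block can sit in any line

def lookup(addr):
    tag = addr >> w                # no index bits: tag = block number
    for t, data in lines:          # hardware compares all tags in parallel
        if t == tag:
            return data
    return None                    # miss

lines.append((0x12, "block A"))
# lookup(0x12 << w) -> "block A"; any other block number misses
```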
Set Associative Cache: The set associative cache is a good compromise between the direct mapped and fully associative caches.
• For a k-way set associative cache with v sets (each set contains k lines):
— Address length = (t + d + w) bits, where w = log2(block size) and d = log2(v)
— Number of addressable units = 2^(t+d+w) words or bytes
— Size of tag = t bits
— Block size = line size = 2^w words or bytes
— Number of blocks in main memory = 2^(t+d)
— Number of lines per set = k
— Number of sets = v = 2^d
— Number of lines in cache = k × v = k × 2^d
Advantage: cheaper than a fully associative cache, and has a lower miss ratio than a direct mapped cache.
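The compromise is visible in a sketch: the set index limits the search to k lines, so only k comparators are needed per lookup. The parameters below are hypothetical illustration values (w = 15 for 32 KB blocks, d = 2 so v = 4 sets, k = 4 ways):

```python
# k-way set associative sketch: the set index picks one set, and only the
# k lines in that set are searched. Hypothetical parameters.
w, d, k = 15, 2, 4
v = 1 << d                          # number of sets
sets = [[] for _ in range(v)]       # each set: up to k (tag, data) pairs

def lookup(addr):
    block = addr >> w               # strip the byte offset
    s = block & (v - 1)             # low d bits of block number pick the set
    tag = block >> d                # remaining bits are the tag
    for t, data in sets[s]:         # only k comparisons per lookup
        if t == tag:
            return data
    return None                     # miss
```

Note that k = 1 degenerates to direct mapping, while v = 1 degenerates to fully associative: raising k trades extra comparators for fewer conflict misses.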
A table of the different cache mapping techniques and their relative performance:

Cache Type             | Hit Ratio                        | Search Speed
-----------------------|----------------------------------|---------------------------
Direct Mapped          | Good                             | Best
Fully Associative      | Best                             | Moderate
Set Associative, N > 1 | Very good, better as N increases | Good, worse as N increases