Topic outline
-
In prior units, you studied elementary hardware components, such as combinational and sequential circuits; functional hardware components, such as adders, arithmetic logic units, and data buses; and computational components, such as processors.
This unit addresses the memory hierarchy of a computer, identifying the different types of memory and how they interact with one another. It examines a type of memory known as cache and discusses how caches improve computer performance. It then turns to main memory, DRAM (Dynamic Random Access Memory), and the associated concept of virtual memory. You will also look at a common framework for memory hierarchies. The unit concludes with a review of the cache hierarchy design of an industrial microprocessor.
Completing this unit should take you approximately 6 hours.
-
Watch this lecture. Previously, we focused on processor design to increase performance. Now, we will turn to memory. This video introduces various methods of improving processor performance through the use of a memory hierarchy. The lecture discusses memory technologies, which vary in cost and speed. Programs are written as if memory were flat, but with current technology a single flat memory cannot meet the performance demands placed on it. You will take a look at hierarchical memory and the use of caches. This video also discusses the analysis of memory hierarchies and cache performance with respect to miss rates and block size. Finally, the lecturer considers cache policies.
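As a concrete (and purely illustrative) example of the kind of cache analysis the lecture covers, the short C sketch below computes the average memory access time (AMAT) from a hit time, miss rate, and miss penalty; the numeric values are assumptions chosen for illustration, not figures from the lecture.

    #include <stdio.h>

    /* Minimal sketch: average memory access time (AMAT).
       The hit time, miss rate, and miss penalty below are illustrative
       assumptions, not values taken from the lecture. */
    int main(void) {
        double hit_time     = 1.0;   /* cycles for a cache hit */
        double miss_rate    = 0.05;  /* fraction of accesses that miss */
        double miss_penalty = 20.0;  /* extra cycles to fetch a block from main memory */
        double amat = hit_time + miss_rate * miss_penalty;
        printf("AMAT = %.2f cycles\n", amat);  /* 1 + 0.05 * 20 = 2.00 cycles */
        return 0;
    }

Larger blocks can lower the miss rate by exploiting spatial locality, but they also raise the miss penalty, which is why the lecture analyzes miss rate and block size together.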
-
Read section 1.2 to learn about memory hierarchies, and read section 1.4 to learn about locality and data reuse.
-
Watch this lecture on memory hierarchy design with caches. The lecture discusses the impact that memory operations have on overall processor performance and identifies different cache architectures that can improve it. From top to bottom, the memory hierarchy consists of processor registers, cache, main memory, and secondary memory; moving down the hierarchy, each level is slower, larger in capacity, and lower in cost per bit. Moving data from main memory into the cache requires a mapping from main memory addresses to cache locations, and how this mapping is done affects performance. Performance analysis of cache and memory organization involves the miss rate and the miss penalty. Policies for reading, loading, fetching, replacing, and writing memory also affect cost and performance.
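To make the idea of address mapping concrete, here is a minimal C sketch of how a direct-mapped cache might split a 32-bit byte address into a tag, a set index, and a block offset; the cache geometry (64 sets of 16-byte blocks) is an assumption chosen for illustration, not a configuration from the lecture.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative assumption: a direct-mapped cache with 64 sets of 16-byte blocks. */
    #define BLOCK_SIZE 16   /* bytes per block   -> 4 offset bits */
    #define NUM_SETS   64   /* sets in the cache -> 6 index bits  */

    int main(void) {
        uint32_t addr   = 0x12345678;                      /* a main memory byte address  */
        uint32_t offset = addr % BLOCK_SIZE;               /* byte within the block       */
        uint32_t index  = (addr / BLOCK_SIZE) % NUM_SETS;  /* which cache set to check    */
        uint32_t tag    = addr / (BLOCK_SIZE * NUM_SETS);  /* identifies the memory block */
        printf("tag=0x%x  index=%u  offset=%u\n", tag, index, offset);
        return 0;
    }

Different mappings (direct-mapped, set-associative, fully associative) trade hardware cost against miss rate, which is one reason the choice of cache architecture affects performance.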
-
Read section 1.5, which discusses the relationship of pipelining and caching to programming.
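Section 1.5's point about caching and programming can be seen in a small example: the C sketch below sums the same row-major array in two loop orders (the array size is an illustrative assumption). The row-order loop walks memory sequentially and reuses each cache block, while the column-order loop strides across rows and touches a new block on almost every access.

    #include <stdio.h>
    #include <stddef.h>

    #define N 1024
    static double a[N][N];   /* stored row-major, as C arrays are */

    /* Good spatial locality: consecutive accesses fall in the same cache block. */
    static double sum_by_rows(void) {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Poor spatial locality: each access strides N doubles through memory. */
    static double sum_by_columns(void) {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_by_rows(), sum_by_columns());
        return 0;
    }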
-
Watch this lecture, which addresses virtual memory. This topic concerns the relationship between main memory and secondary memory, which has both similarities to and differences from the relationship between cache and main memory; the differences arise from the speeds and capacities of the various memories. The lecture discusses virtual memory, the mapping from virtual addresses to physical addresses in main memory, and the paging techniques used with main memory. It also discusses the use of page tables in translating virtual addresses to physical addresses; issues that arise with page tables include their structure, location, and large size.
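As a concrete illustration of address translation with a page table, the C sketch below translates a virtual address to a physical address using a single-level table; the 4 KiB page size, the tiny eight-entry table, and its contents are illustrative assumptions, not details from the lecture.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u  /* bytes per page -> 12 offset bits (illustrative assumption) */
    #define NUM_PAGES 8u     /* virtual pages in this toy address space                    */

    /* page_table[vpn] holds the physical frame number for each virtual page. */
    static const uint32_t page_table[NUM_PAGES] = {5, 2, 7, 0, 3, 1, 6, 4};

    int main(void) {
        uint32_t vaddr  = 0x2ABC;                      /* a virtual address      */
        uint32_t vpn    = vaddr / PAGE_SIZE;           /* virtual page number    */
        uint32_t offset = vaddr % PAGE_SIZE;           /* offset within the page */
        uint32_t frame  = page_table[vpn];             /* page table lookup      */
        uint32_t paddr  = frame * PAGE_SIZE + offset;  /* physical address       */
        printf("vaddr=0x%x -> vpn=%u, frame=%u, paddr=0x%x\n", vaddr, vpn, frame, paddr);
        return 0;
    }

A real page table must also record whether each page is resident in main memory or only on secondary storage, which is where the issues of table structure, location, and size raised in the lecture come in.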
-
Read Chapters 1–5. Before reading them, list the factors you can think of that affect performance, such as memory performance, caches, the memory hierarchy, and multiple cores, along with the ways you might suggest to increase performance. After reading the chapters, consider what, if anything, you would add to your list.
-
Take this assessment to see how well you understood this unit.
- This assessment does not count towards your grade. It is just for practice!
- You will see the correct answers when you submit your answers. Use this to help you study for the final exam!
- You can take this assessment as many times as you want, whenever you want.
-