Despite rapid increases in CPU performance, control and data hazards remain the primary obstacles to achieving higher performance in contemporary processor organizations. Primary data cache misses are responsible for the majority of the data hazards. With CPU primary cache sizes limited by clock cycle time constraints, the performance of future CPUs will be effectively limited by the number of primary data cache misses whose penalty cannot be masked.
To address this problem, this dissertation takes a detailed look at memory access patterns in complex, real-world programs. A simple memory reference pattern classification is introduced, which is applicable to a broad range of computations, including pointer-intensive and numeric codes. To exploit the new classification, a data prefetch device called the Indirect Reference Buffer (IRB) is proposed. The IRB extends data prefetching to indirect memory address sequences, while also handling dense scientific codes. It is distinguished from previous designs by its seamless integration of linear and indirect address prefetching. The behavior of the IRB on a suite of programs drawn from SPEC92, SPEC95, and public-domain codes is measured under a variety of abstract models.
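To make the two pattern classes concrete, the following is a minimal sketch (all names are illustrative, not the dissertation's own scheme) of classifying one static load's dynamic address stream as constant-stride ("linear", e.g. array traversal) or value-dependent ("indirect", e.g. pointer chasing, where the next address is the previously loaded value plus a constant offset):

```python
def classify_stream(addresses, loaded_values):
    """addresses[i]: effective address of the i-th dynamic instance of one
    static load; loaded_values[i]: the value that instance returned."""
    # Linear: successive addresses differ by a single constant stride.
    strides = {addresses[i + 1] - addresses[i]
               for i in range(len(addresses) - 1)}
    if len(strides) == 1:
        return "linear"            # a, a+s, a+2s, ...
    # Indirect: next address = previously loaded value + constant offset,
    # as in linked-list traversal (p = p->next).
    offsets = {addresses[i + 1] - loaded_values[i]
               for i in range(len(addresses) - 1)}
    if len(offsets) == 1:
        return "indirect"
    return "irregular"

# Array walk with stride 8:
print(classify_stream([0x100, 0x108, 0x110, 0x118], [7, 7, 7, 7]))  # linear
# Linked list: each loaded value is the next node's address (offset 0):
nodes = [0x1000, 0x20C0, 0x1540, 0x3000]
print(classify_stream(nodes[:-1], nodes[1:]))  # indirect
```

The point of the classification is that both classes can be recognized from a short history of (address, loaded value) pairs per static load, which is what makes a single hardware device covering both feasible.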
Next, a detailed hardware design for the IRB that can be easily integrated into modern CPUs is presented. In this design, the IRB is decomposed into a recurrence recognition unit (RRU) and a prefetch unit (PU). The RRU is tightly coupled to the CPU pipelines, and monitors individual load instructions in executing programs. The PU operates asynchronously with respect to the processor pipelines, and is coupled only to the processor's bus interface. This division has two important ramifications. First, it allows the PU to pull ahead of the processor as a program executes. Second, it makes it possible to tune the IRB for processors with varying memory subsystems, simply by redesigning the PU. An early embodiment of the design is evaluated via detailed timing simulation on our benchmark suite.
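The RRU/PU decoupling described above can be sketched as follows. This is a hypothetical behavioral model, not the dissertation's actual design: the table layout, field names, and single-entry history are assumptions. The RRU observes retired loads, learns a per-PC recurrence, and enqueues prefetch addresses; the PU drains that queue independently, standing in for an asynchronous unit on the bus interface.

```python
from collections import deque

class RRU:
    """Recurrence recognition unit: one entry per static load PC."""
    def __init__(self, prefetch_queue):
        self.table = {}            # pc -> (last_addr, last_value, stride)
        self.queue = prefetch_queue

    def observe(self, pc, addr, value):
        if pc not in self.table:
            self.table[pc] = (addr, value, 0)
            return
        last_addr, last_value, _ = self.table[pc]
        stride = addr - last_addr
        if addr == last_value:            # pointer chase: addr = *prev
            self.queue.append(value)      # the next node's address
        else:
            self.queue.append(addr + stride)  # linear: one step ahead
        self.table[pc] = (addr, value, stride)

class PU:
    """Prefetch unit: drains the queue via the bus interface, decoupled
    from the pipeline, so it can run ahead of the processor."""
    def __init__(self):
        self.queue = deque()
        self.issued = []

    def step(self):
        if self.queue:
            self.issued.append(self.queue.popleft())  # one bus request

pu = PU()
rru = RRU(pu.queue)
for i, addr in enumerate([0x100, 0x108, 0x110]):  # a stride-8 load at PC 0x400
    rru.observe(pc=0x400, addr=addr, value=i)
pu.step()
print(hex(pu.issued[0]))
```

Because the only coupling is the queue, the PU model can be swapped out (different bus widths, outstanding-request limits, throttling policies) without touching the RRU, which mirrors the tunability argument made for the hardware split.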
The dissertation concludes by outlining compile-time transformations to enhance IRB performance and offering suggestions for possible extensions to this work.