Data prefetch mechanisms for accelerating symbolic and numeric computation
Publisher:
  • University of Illinois at Urbana-Champaign
  • Champaign, IL
  • United States
ISBN: 978-0-591-08905-9
Order Number: AAI9702609
Pages: 139
Abstract

Despite rapid increases in CPU performance, control and data hazards remain the primary obstacles to higher performance in contemporary processor organizations. Primary data cache misses are responsible for the majority of the data hazards. With primary cache sizes limited by clock cycle time constraints, the performance of future CPUs will effectively be limited by the number of primary data cache misses whose penalty cannot be masked.

To address this problem, this dissertation takes a detailed look at memory access patterns in complex, real-world programs. A simple memory reference pattern classification is introduced that applies to a broad range of computations, including both pointer-intensive and numeric codes. To exploit the new classification, a data prefetch device called the Indirect Reference Buffer (IRB) is proposed. The IRB extends data prefetching to indirect memory address sequences while also handling dense scientific codes, and is distinguished from previous designs by its seamless integration of linear and indirect address prefetching. The behavior of the IRB on a suite of programs drawn from Spec92, Spec95, and public-domain codes is measured under a variety of abstract models.
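For intuition, the sketch below (illustrative C code, not drawn from the dissertation) shows the two broad address-sequence classes such a classification must cover: linear, constant-stride references typical of dense numeric loops, and indirect references, where the address of one load depends on a value returned by an earlier load, as with index arrays and pointer chasing.

#include <stddef.h>

/* Linear (constant-stride) reference pattern: the addresses of a[i]
 * form an arithmetic sequence, so the next address is predictable
 * from a base address and a stride. */
double sum_strided(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                 /* address = a + i*sizeof(double) */
    return s;
}

/* Indirect reference pattern: the address of a[idx[i]] depends on a
 * value loaded from memory (idx[i]); prefetching it requires following
 * the recurrence through the index array rather than a fixed stride. */
double sum_indirect(const double *a, const int *idx, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[idx[i]];            /* address depends on loaded idx[i] */
    return s;
}

/* Pointer-chasing variant: each node's address comes from the previous
 * node's contents, a recurrence common in pointer-intensive codes. */
struct node { int val; struct node *next; };
int sum_list(const struct node *p) {
    int s = 0;
    for (; p != NULL; p = p->next)
        s += p->val;
    return s;
}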

Next, a detailed hardware design for the IRB that can be easily integrated into modern CPUs is presented. In this design, the IRB is decomposed into a recurrence recognition unit (RRU) and a prefetch unit (PU). The RRU is tightly coupled to the CPU pipelines, and monitors individual load instructions in executing programs. The PU operates asynchronously with respect to the processor pipelines, and is coupled only to the processor's bus interface. This division has two important ramifications. First, it allows the PU to pull ahead of the processor as a program executes. Second, it makes it possible to tune the IRB for processors with varying memory subsystems, simply by redesigning the PU. An early embodiment of the design is evaluated via detailed timing simulation on our benchmark suite.
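As a rough software analogue (hypothetical, not the dissertation's actual RRU design), the sketch below illustrates the kind of bookkeeping a recurrence recognition unit might perform: one entry per static load, tracking the last address and stride so that a confirmed constant-stride recurrence can be forwarded as a prefetch request to a decoupled prefetch unit. The entry count, confidence threshold, and indexing scheme are arbitrary illustrative choices.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical software model of a recurrence recognition unit (RRU):
 * one entry per static load (indexed by PC), tracking the last data
 * address and last stride. The real RRU is a hardware structure coupled
 * to the CPU pipelines; this only illustrates the bookkeeping involved. */
#define RRU_ENTRIES 64

struct rru_entry {
    uint64_t pc;         /* static load instruction address        */
    uint64_t last_addr;  /* most recent data address for this load */
    int64_t  stride;     /* last observed address delta            */
    int      confidence; /* consecutive times the stride repeated  */
    int      valid;
};

static struct rru_entry rru[RRU_ENTRIES];

/* Called once per executed load; emits a prefetch address when a
 * constant-stride recurrence has repeated at least twice in a row. */
void rru_observe(uint64_t pc, uint64_t addr) {
    struct rru_entry *e = &rru[(pc >> 2) % RRU_ENTRIES];

    if (!e->valid || e->pc != pc) {          /* allocate / replace entry */
        e->pc = pc;
        e->last_addr = addr;
        e->stride = 0;
        e->confidence = 0;
        e->valid = 1;
        return;
    }

    int64_t delta = (int64_t)(addr - e->last_addr);
    e->confidence = (delta == e->stride) ? e->confidence + 1 : 0;
    e->stride = delta;
    e->last_addr = addr;

    if (e->confidence >= 2)                   /* recurrence confirmed */
        printf("prefetch request: PC %#" PRIx64 " -> addr %#" PRIx64 "\n",
               pc, addr + delta);
}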

The dissertation concludes by outlining compile-time transformations to enhance IRB performance and offering suggestions for possible extensions to this work.

Contributors
  • University of Illinois Urbana-Champaign