Papers by Djordje Jevdjic

The emergence of global-scale online services has galvanized scale-out software, characterized by splitting vast datasets and massive computation across many independent servers. Datacenters housing thousands of servers are designed to support scale-out workloads, with per-server throughput dictating the overall datacenter capacity and cost. However, today’s processors do not use the die area efficiently, limiting the per-server throughput. We find that existing processors over-provision cache capacity, leading to designs with sub-optimal performance density (performance per unit area). Furthermore, as these designs are scaled up with technology, the increasing number of cores leads to further performance density reduction due to increased on-chip latencies. We use a suite of real-world scale-out workloads to investigate performance density and formulate a methodology to design optimally efficient processors for scale-out workloads. Our proposed architecture is based on the notion o...
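As a back-of-the-envelope illustration of the performance-density metric this abstract defines (performance per unit area), the following minimal Python sketch compares two hypothetical designs of equal die area. All numbers are made up for illustration and do not come from the paper.

```python
# Illustration of the performance-density metric from the abstract:
# performance density = aggregate throughput / die area.
# All figures below are hypothetical, not measurements from the paper.

def performance_density(throughput_ops: float, die_area_mm2: float) -> float:
    """Return throughput per unit of die area (ops per mm^2)."""
    return throughput_ops / die_area_mm2

# Two hypothetical 400 mm^2 designs: one spends area on a large cache,
# the other spends the same area on additional cores.
cache_heavy = performance_density(throughput_ops=100.0, die_area_mm2=400.0)
core_heavy = performance_density(throughput_ops=160.0, die_area_mm2=400.0)

print(f"cache-heavy: {cache_heavy:.2f} ops/mm^2")  # 0.25
print(f"core-heavy:  {core_heavy:.2f} ops/mm^2")   # 0.40
```

Under these assumed numbers, the core-heavy design wins in performance per unit area, which is the intuition behind the abstract's claim that over-provisioned caches hurt performance density.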
Storing data in synthetic DNA offers the possibility of improving information density and durability by several orders of magnitude compared to current storage technologies. However, DNA data storage requires a computationally intensive process to retrieve the data. In particular, a crucial step in the data retrieval pipeline involves clustering billions of strings with respect to edit distance. Datasets in this domain have many notable properties, such as containing a very large number of small clusters that are well-separated in the edit distance metric space. In this regime, existing algorithms are unsuitable because of either their long running time or low accuracy. To address this issue, we present a novel distributed algorithm for approximately computing the underlying clusters. Our algorithm converges efficiently on any dataset that satisfies certain separability properties, such as those coming from DNA data storage systems. We also prove that, under these assumptions, our a...
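To make the clustering task concrete, here is a naive single-machine Python sketch of the objective: group strings whose edit distance to a cluster representative falls below a threshold. This is not the paper's distributed algorithm; the greedy assignment and the threshold value are illustrative assumptions that only behave sensibly on well-separated clusters like those described above.

```python
# Naive sketch of clustering strings by edit distance. NOT the paper's
# algorithm -- a brute-force baseline that illustrates the problem.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cluster(strings, threshold=3):
    """Greedy clustering: attach each string to the first cluster whose
    representative is within `threshold` edits, else start a new cluster.
    Well-separated clusters make this assignment unambiguous."""
    reps, clusters = [], []
    for s in strings:
        for k, rep in enumerate(reps):
            if edit_distance(s, rep) <= threshold:
                clusters[k].append(s)
                break
        else:
            reps.append(s)
            clusters.append([s])
    return clusters

reads = ["ACGTACGT", "ACGTACGA", "TTTTCCCC", "TTTTCCCG"]
print(cluster(reads, threshold=2))
# [['ACGTACGT', 'ACGTACGA'], ['TTTTCCCC', 'TTTTCCCG']]
```

The brute-force pairwise comparisons here are exactly what makes the problem intractable at billions of strings; the paper's contribution is a distributed algorithm that avoids them while exploiting the separability of the clusters.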
When a computational task tolerates a relaxation of its specification or when an algorithm tolerates the effects of noise in its execution, hardware, system software, and programming language compilers or their runtime systems can trade deviations from correct behavior for lower resource usage. We present, for the first time, a synthesis of research results on computing systems that only make as many errors as their end-to-end applications can tolerate. The results span the disciplines of computer-aided design of circuits, digital system design, computer architecture, programming languages, operating systems, and information theory. Rather than over-provisioning the resources controlled by each of these layers of abstraction to avoid errors, it can be more efficient to exploit the masking of errors occurring at one layer and thereby prevent those errors from propagating to a higher layer. We demonstrate the potential benefits of end-to-end approaches using two illustrative examples....
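A toy instance of the accuracy/resource trade-off this survey describes is loop perforation: skipping a fraction of iterations cuts work at the cost of a bounded deviation in the result. The sketch below is purely illustrative and is not one of the article's two examples.

```python
# Loop perforation on a mean computation: sample every k-th element,
# doing ~k times less work for a small, application-tolerable error.
# Illustrative only; not an example taken from the article itself.

import random

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, keep_every=4):
    """~4x less work; error stays small when the data has no
    pathological periodic structure aligned with the stride."""
    sample = xs[::keep_every]
    return sum(sample) / len(sample)

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(100_000)]
exact, approx = mean_exact(data), mean_perforated(data)
print(exact, approx, abs(exact - approx) / exact)  # tiny relative error
```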
Recent research advocates using large die-stacked DRAM caches to break the memory bandwidth wall. Existing DRAM cache designs fall into one of two categories: block-based and page-based. The former organize data in conventional blocks (e.g., 64B), ensuring low off-chip bandwidth utilization, but co-locate tags and data in the stacked DRAM, incurring high lookup latency. Furthermore, such designs suffer from low hit ratios due to poor temporal locality. In contrast, page-based caches, which manage data at larger granularity (e.g., 4KB pages), allow for reduced tag array overhead and fast lookup, and leverage high spatial locality at the cost of moving large amounts of data on and off the chip. This paper introduces Footprint Cache, an efficient die-stacked DRAM cache design for server processors. Footprint Cache allocates data at the granularity of pages, but identifies and fetches only those blocks within a page that will be touched during the page's residency in the cache --...
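The following is a minimal data-structure sketch of the footprint idea in this abstract: allocate at page granularity, but record which 64B blocks within a 4KB page were actually touched, so that a later fetch brings in only those blocks. The class and field names are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a per-page footprint record, assuming 4KB pages and
# 64B blocks (64 blocks per page, so one 64-bit mask suffices).
# Names and structure are hypothetical, not taken from the paper.

PAGE_SIZE = 4096
BLOCK_SIZE = 64
BLOCKS_PER_PAGE = PAGE_SIZE // BLOCK_SIZE  # 64

class FootprintEntry:
    def __init__(self):
        self.footprint = 0  # bit i set => block i of the page was touched

    def record_access(self, offset_in_page: int) -> None:
        """Mark the block containing this byte offset as touched."""
        self.footprint |= 1 << (offset_in_page // BLOCK_SIZE)

    def blocks_to_fetch(self):
        """On a later allocation of this page, fetch only the blocks
        that were touched during its previous residency."""
        return [i for i in range(BLOCKS_PER_PAGE)
                if (self.footprint >> i) & 1]

entry = FootprintEntry()
for offset in (0, 72, 4032):      # touches blocks 0, 1, and 63
    entry.record_access(offset)
print(entry.blocks_to_fetch())    # [0, 1, 63]
```

Fetching only the recorded blocks is what lets a page-granularity cache keep its fast lookup and low tag overhead without paying the off-chip bandwidth cost of moving whole 4KB pages.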