Internet governance is now an active topic of international discussion. Interest has been fueled by media attention to cyber crime, global surveillance, commercial espionage, cyber attacks, and threats to critical national infrastructures. Many nations have decided that they need more control over Internet-based technologies and the policies that support them. Others, emphasizing the positive aspects of these technologies, argue that traditional systems of Internet governance, which they label “multi-stakeholder” and which they associate with the success of the Internet, must continue to prevail.
Algebraic circuits combine operations drawn from an algebraic system. In this chapter we develop algebraic and combinatorial circuits for a variety of generally non-Boolean problems, including multiplication and inversion of matrices, convolution, the discrete Fourier transform, and sorting networks. These problems are used primarily to illustrate concepts developed in later chapters, so this chapter may be used for reference when studying them. For each of the problems examined here the natural algorithms are straight-line and the graphs are directed and acyclic; that is, they are circuits. Not only are straight-line algorithms the ones typically used for these problems, but in some cases they are the best possible. The quality of the circuits developed here is measured by circuit size, the number of circuit operations, and circuit depth, the length of the longest path between input and output vertices. Circuit size is a measure of the work necessary to execute the circuit...
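The two cost measures named in this abstract, circuit size (number of operation vertices) and circuit depth (longest input-to-output path), can be computed on a toy circuit graph. The representation and names below are an illustrative sketch, not the chapter's formal model.

```python
# Illustrative sketch: size and depth of a small directed acyclic circuit.
# A circuit is given as a dict mapping each operation vertex to its
# predecessor vertices; vertices with no entry are inputs (depth 0).

def circuit_depth(gates):
    """Length of the longest path from an input to any gate."""
    memo = {}
    def depth(v):
        if v not in gates or not gates[v]:   # an input vertex
            return 0
        if v not in memo:
            memo[v] = 1 + max(depth(u) for u in gates[v])
        return memo[v]
    return max(depth(v) for v in gates)

# A balanced 4-input addition tree: size 3, depth 2.
tree = {"s1": ["x1", "x2"], "s2": ["x3", "x4"], "out": ["s1", "s2"]}
print(len(tree), circuit_depth(tree))   # prints: 3 2
```

A linear chain over the same four inputs would have the same size but depth 3, which is why balanced trees are preferred when depth matters.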
Publisher Summary: This chapter discusses progress in sequential decoding and describes the analysis, simulation, and construction of sequential decoders. Sequential decoding procedures are important because they achieve, at modest cost, a decoding error rate that approximates that of the optimum but expensive maximum-likelihood decoder. This is possible because sequential decoders allow the level of the decoding computation to fluctuate with the level of the channel noise. A sequential decoder operated at a rate less than R_comp may be constructed from a logic unit and a buffer. The logic unit can compute at several times the average computation rate, while the buffer stores data that accumulate during noisy periods. A sequential decoder is designed on the basis of the buffer overflow probability, because this probability decreases slowly, as an inverse power of the product of the buffer size and machine speed. Indeed, a sequential decoder must be designed to minimize the probability of overflow, as this is generally much larger than the undetected error rate. The Wozencraft sequential decoding algorithm is a procedure for decoding tree codes or, more properly, convolutional codes. Recently, a sequential decoder using the Fano algorithm was built and incorporated into a Lincoln Experimental Terminal (LET) for communication over active and passive satellite links. The decoder operates at rates of ½ and ¼ bits per waveform, and the waveforms are selected by mapping groups of four binary digits at the output of the encoder into one of 16 orthogonal signals.
Optimum and suboptimum decision rules for two-channel deep space telemetry system with modulation consisting of PM with two orthogonal phase functions
Heuristics for Parallel Graph-Partitioning. John E. Savage and Markus G. Wloka. Brown University, Department of Computer Science, Box 1910, Providence, Rhode Island 02912. Tel: 401-863-7600. jes@cs.brown.edu, mgw@cs.brown.edu. Abstract: Graph partitioning is an important NP-...
The nanowire crossbar is a promising nanotechnology for assembling memories and circuits. In both, a small number of lithographically produced mesoscale wires (MWs) must control a large number of nanoscale wires (NWs). Previous strategies for achieving this have been vulnerable to misalignment. In this paper, we introduce core-shell NWs, which eliminate misalignment errors. We also give a two-step assembly process that reduces the amount of crossbar control circuitry.
As we saw in Chapter 1, every finite computational task can be realized by a combinational circuit. While this is an important concept, it is not very practical; we cannot afford to design a special circuit for each computational task. Instead we generally perform computational tasks with machines having memory. In a strong sense to be explored in this chapter, the memory of such machines allows them to reuse their equivalent circuits to realize functions of high circuit complexity. In this chapter we examine the deterministic and nondeterministic finite-state machine (FSM), the random-access machine (RAM), and the Turing machine. The finite-state machine moves from state to state while reading input and producing output. The RAM has a central processing unit (CPU) and a random-access memory with the property that each memory word can be accessed in one unit of time. Its CPU executes instructions, reading and writing data from and to the memory. The Turing machine has a control unit...
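A deterministic finite-state machine of the kind described above, which moves from state to state while reading input and producing output, can be sketched in a few lines. This is an illustrative toy (a Mealy-style parity machine), not the chapter's formal model.

```python
# Minimal FSM sketch: the machine reads one symbol at a time, emits an
# output, and moves to the next state. Transition and output tables are
# dicts keyed by (state, symbol); names here are illustrative.

def run_fsm(transitions, outputs, start, inputs):
    state, out = start, []
    for sym in inputs:
        out.append(outputs[(state, sym)])
        state = transitions[(state, sym)]
    return state, out

# Parity machine: output 1 exactly when the number of 1s read so far is odd.
T = {("even", 0): "even", ("even", 1): "odd",
     ("odd", 0): "odd",  ("odd", 1): "even"}
O = {("even", 0): 0, ("even", 1): 1, ("odd", 0): 1, ("odd", 1): 0}

state, out = run_fsm(T, O, "even", [1, 0, 1, 1])
print(state, out)   # prints: odd [1, 1, 0, 1]
```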
Recent research in nanoscale computing offers multiple techniques for producing large numbers of parallel nanowires (NWs). These wires can be assembled into crossbars, two orthogonal sets of parallel NWs separated by a layer of molecular devices. In a crossbar, pairs of orthogonal NWs provide control over the molecules at their crosspoints. Hysteretic molecules act as programmable diodes, allowing crossbars to function as both memories and circuits (a PLA, for example). Either application requires that NWs be interfaced with existing CMOS technology.
We describe a technique for addressing individual nanoscale wires with microscale control wires without using lithographic-scale processing to define nanoscale dimensions. Such a scheme is necessary to exploit sublithographic nanoscale storage and computational devices. Our technique uses modulation doping to address individual nanowires and self-assembly to organize them into nanoscale-pitch decoder arrays. We show that if coded nanowires are chosen at random from a sufficiently large population, we can ensure that a large fraction of the selected nanowires have unique addresses. For example, we show that lines can be uniquely addressed over 99% of the time using only a small number of address wires. We further show a hybrid decoder scheme that only needs to address a small subset of wires at a time through this stochastic scheme; as a result, the number of unique codes required for the nanowires does not grow with decoder size. We give a procedure to discover the addresses which are present. We also de...
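The random-code argument above is essentially a birthday-problem calculation: when codes are drawn from a pool much larger than the number of wires selected, collisions are rare. A minimal Monte Carlo sketch of that claim, with illustrative parameters rather than the paper's:

```python
# Hedged sketch of the stochastic-addressing idea: each selected nanowire
# carries a code drawn uniformly at random from a pool of n_codes codes;
# estimate how often all n_wires selected codes are pairwise distinct.
import random

def all_unique_prob(n_wires, n_codes, trials=10_000, seed=1):
    """Monte Carlo estimate of P(every selected wire has a unique address)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        codes = [rng.randrange(n_codes) for _ in range(n_wires)]
        hits += len(set(codes)) == n_wires
    return hits / trials

# A code pool much larger than the wire count makes collisions rare;
# 10 wires drawn from 1000 codes are collision-free in most trials.
print(all_unique_prob(10, 1000))
```

The exact probability is the birthday product (1000/1000)(999/1000)...(991/1000), about 0.96, which the estimate approaches as the trial count grows.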
We show that important local search heuristics for grid and hypercube embeddings based on the successive swapping of pairs of vertices, such as simulated annealing, are P-hard and unlikely to run in polylogarithmic time. This puts experimental results reported in the literature into perspective: attempts to construct the exact parallel equivalent of serial simulated-annealing-based heuristics for graph embedding have yielded disappointing parallel speedups. We have developed and implemented on the Connection Machine CM-2 a new massively parallel heuristic for such embeddings, called the Mob heuristic. We report on an extensive series of experiments with our heuristics on the 32K-processor CM-2 Connection Machine for grid and hypercube embeddings that show impressive reductions in edge costs and run in less than 30 minutes on random graphs of 1 million edges. Due to excessive run times, previous heuristics reported in the literature were able to construct graph embeddings only for gr...
Although serial programming languages assume that programs are written for the RAM model, this model is rarely implemented in practice. Instead, the random-access memory is replaced with a hierarchy of memory units of increasing size, decreasing cost per bit, and increasing access time. In this chapter we study the conditions on the size and speed of these units when a CPU and a memory hierarchy simulate the RAM model. The design of memory hierarchies is a topic in operating systems. A memory hierarchy typically contains the local registers of the CPU at the lowest level and may contain at succeeding levels a small, very fast, local random-access memory called a cache, a slower but still fast random-access memory, and a large but slow disk. The time to move data between levels in a memory hierarchy is typically a few CPU cycles at the cache level, tens of cycles at the level of a random-access memory, and hundreds of thousands of cycles at the disk level! A CPU that accesses a rando...
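The cycle counts quoted above translate directly into an expected cost per memory reference. A back-of-the-envelope sketch, with illustrative hit rates and latencies (not figures from the chapter):

```python
# Toy expected-access-time calculation for a memory hierarchy: each level
# is (fraction of references served at this level, cycles to serve one).
# The fractions must sum to 1; the numbers below are illustrative, in the
# spirit of "a few cycles at the cache, tens at main memory".

def expected_access_cycles(levels):
    """Expected cycles per reference for a list of (hit_fraction, cycles)."""
    return sum(p * c for p, c in levels)

# 95% of references hit a 3-cycle cache, 5% go to a 40-cycle memory:
print(expected_access_cycles([(0.95, 3), (0.05, 40)]))   # about 4.85
```

The same formula shows why a disk level with hundreds of thousands of cycles of latency must capture only a tiny fraction of references for the hierarchy to simulate a fast single-level memory.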
The Dynamic Adaptation of Parallel Mesh-Based Computation
If the past is prologue to the future, computer science will continue to be extremely successful. In a few short decades computer science has led to revolutions in work, recreation and societal interactions. There are few technologies whose impact has been greater. Theoretical computer science has played a central role in these developments and is destined to play a central role in the future. This document provides a brief discussion of the role of theory in computer science as well as a quick survey of contributions by the field. It takes the position that the unpredictable nature of research argues for a liberal attitude toward research funding while maintaining standards. Thus, it argues against picking specific research agendas or betting large amounts of support on a few research groups. Finally, it lists important areas of computer science that have the potential to attract the attention of theoretical computer scientists. A distinguishing characteristic of computer science is ...
The speed of CPUs is accelerating rapidly, outstripping that of peripheral storage devices and making it increasingly difficult to keep CPUs busy. Consequently multi-level memory hierarchies, scaled to simulate single-level memories, are increasing in importance. In this paper we introduce the Memory Hierarchy Game, a multi-level pebble game that simulates data movement in memory hierarchies, in terms of which we...
We have extended the Mob heuristic for graph partitioning [21] to grid and hypercube embedding and have efficiently implemented our new heuristic on the CM-2 Connection Machine. We have conducted an extensive series of experiments to show that it exploits parallelism, is fast, and gives very low embedding costs. For example, on the 32K-processor CM-2 it runs in less than 30 minutes on random graphs of 1 million edges and shows impressive reductions in edge costs. Due to excessive run times, other heuristics reported in the literature can construct equally-good graph embeddings only for graphs that are 100 to 1000 times smaller than those used in our experiments.
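The family of heuristics discussed here improves a partition by moving vertices between sides to reduce edge cost. A much-simplified serial sketch of that idea follows; the Mob heuristic itself moves large vertex sets in parallel on the CM-2, whereas this toy version swaps one cross-partition pair at a time, and all names and the example graph are illustrative.

```python
# Toy swap-based local search for graph partitioning: greedily swap
# cross-partition vertex pairs as long as the cut size shrinks.
import itertools

def cut_size(edges, side):
    """Number of edges whose endpoints lie in different partitions."""
    return sum(side[u] != side[v] for u, v in edges)

def improve_by_swaps(edges, side):
    """Greedy pairwise-swap improvement; mutates `side`, returns final cut."""
    best = cut_size(edges, side)
    improved = True
    while improved:
        improved = False
        for u, v in itertools.combinations(list(side), 2):
            if side[u] == side[v]:
                continue  # swapping same-side vertices changes nothing
            side[u], side[v] = side[v], side[u]
            c = cut_size(edges, side)
            if c < best:
                best, improved = c, True
            else:
                side[u], side[v] = side[v], side[u]  # undo a bad swap
    return best

# Two disjoint edges, initially split badly across the partition:
edges = [("a", "b"), ("c", "d")]
side = {"a": 0, "b": 1, "c": 0, "d": 1}
print(improve_by_swaps(edges, side))   # cut drops from 2 to 0
```

Like simulated annealing, such swap-based searches are inherently sequential in this form, which is the obstacle the parallel Mob heuristic was designed to overcome.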
A key challenge facing nanotechnologies will be controlling nanoarrays, two orthogonal sets of nanowires that form a crossbar, using a moderate number of mesoscale wires. Three methods have been proposed to use mesoscale wires to control individual nanowires. The first is based on nanowire differentiation during manufacture, the second makes random doped connections between nanowires and mesoscale wires, and the third, a mask-based approach, interposes high-K dielectric regions between nanowires and mesoscale wires. All three addressing schemes involve a stochastic step in their implementation. In this paper we analyze the mask-based approach and show that a large number of mesoscale control wires is necessary for its realization.
Error-correcting codes have been very successful in protecting against errors in data transmission. Computing on encoded data, however, has proved more difficult. In this paper we extend a framework introduced by Spielman [14] for computing on encoded data. This new formulation offers significantly more design flexibility, reduced overhead, and simplicity. It allows for a larger variety of codes to be used in computation and makes explicit conditions on codes that are compatible with computation. We also provide a lower bound on the overhead required for a single step of coded computation.
We present an overview of algorithms and data structures for dynamic refinement/coarsening (adaptation) of unstructured FE meshes on loosely coupled parallel processors. We describe a) a parallel adaptation algorithm, b) an online parallel repartitioning algorithm based on mesh adaptation histories, c) an algorithm for the migration of mesh elements between processors, and d) an integrated object-oriented framework for the adaptation, repartitioning and migration of the mesh. A two-dimensional triangle-based prototype demonstrates the feasibility of these ideas. 1 Introduction. Although massively parallel computers can deliver impressive peak performances, their computational power is not sufficient to simulate physical problems with highly localized phenomena by using only brute force computations. Adaptive computation offers the potential to provide large increases in performance for problems with dissimilar physical scales by focusing the available computing power on the regions w...
We introduce radial encoding of nanowires (NWs), a new method of differentiating and controlling NWs by a small set of mesoscale wires for use in crossbar memories. We describe methods of controlling these NWs and give efficient manufacturing algorithms. These new encoding and decoding methods do not suffer from the misalignment characteristic of flow-aligned NWs. They achieve effective pitch and resulting memory density comparable to those of axially encoded NWs, while avoiding potential cases of address ambiguity and simplifying NW preparation. We also explore hybrid axial/radial encodings and show that they offer no net benefit over pure codes.
Methods for assembling crossbars from nanowires (NWs) have been designed and implemented. Methods for controlling individual NWs within a crossbar have also been proposed, but implementation remains a challenge. A NW decoder is a device that controls many NWs with a much smaller number of lithographically produced mesoscale wires (MWs). Unlike traditional demultiplexers, all proposed NW decoders are assembled stochastically. In a randomized-contact decoder (RCD) [11], for example, field-effect transistors are randomly created at about half of the NW/MW junctions. In this paper, we tightly bound the number of MWs required to produce a correctly functioning RCD with high probability. We show that the number of MWs is logarithmic in the number of NWs, even when errors occur. We also analyze the overhead associated with controlling a stochastically assembled decoder. As we explain, lithographically-produced control circuitry must store information regarding which MWs control which NWs. ...
Abstract: We describe a technique for addressing individual nanoscale wires with microscale control wires without using lithographic-scale processing to define nanoscale dimensions. Such a scheme is necessary to exploit sublithographic nanoscale storage and computational devices. Our technique uses modulation doping to address individual nanowires and self-assembly to organize them into nanoscale-pitch decoder arrays. We show that if coded nanowires are chosen at random from a sufficiently large population, we can ensure that a large fraction of the selected nanowires have unique addresses. For example, we show that lines can be uniquely addressed over 99% of the time using only a small number of address wires. We further show a hybrid decoder scheme that only needs to address a small subset of wires at a time through this stochastic scheme; as a result, the number of unique codes required for the nanowires does not grow with decoder size. We give a proce...
Introduction and Objectives Forty years ago Gordon Moore predicted exponential growth in the density of electronic circuits, a forecast that has been realized. A concomitant of this remarkable achievement is that design and manufacturing costs have increased dramatically. We are approaching a tipping point at which either exponential growth in density will cease or new technologies and methods of assembly will emerge that will allow continuing progress at reasonable cost. Our research program is predicated on the proposition that new materials of nanometer-sized dimensions and assembled in new ways will supplement and replace conventional lithography-based wires and devices. Our goal is to understand and address problems that stand in the way of achieving this objective. More specifically, we seek to a) develop low-dimensional nanoelectronic materials that can be self-assembled into large-scale circuits, b) create and analyze models for directed self-assembly of such circuits with p...
The computational work and the time required to decode with reliability E at code rate R on noisy channels are defined, and bounds on the size of these measures are developed. A number of ad hoc decoding procedures are ranked on the basis of the computational work they require.
Computer science is the study of computers and programs, the collections of instructions that direct the activity of computers. Although computers are made of simple elements, the tasks they perform are often very complex. The great disparity between the simplicity of computers and the complexity of computational tasks offers intellectual challenges of the highest order. It is the models and methods of analysis developed by computer science to meet these challenges that are the subject of theoretical computer science. Computer scientists have developed models for machines, such as the random-access and Turing machines; for languages, such as regular and context-free languages; for programs, such as straight-line and branching programs; and for systems of programs, such as compilers and operating systems. Models have also been developed for data structures, such as heaps, and for databases, such as the relational and object-oriented databases. Methods of analysis have been developed ...
We present a new parallel repartitioning algorithm for adaptive finite-element meshes that significantly reduces the amount of data that needs to move between processors in order to rebalance a workload after mesh adaptation (refinement or coarsening). These results derive their importance from the fact that the time to migrate data can be a large fraction of the total time for the parallel adaptive solution of partial differential equations.
A computational approach to cognition is developed that has both broad scope and precision. The theory is based on a few simple assumptions and draws its conclusions from an analysis of limits on computational activity. These limits are a delay limit and a processing ...
From the Publisher: "Your book fills the gap which all of us felt existed too long. Congratulations on this excellent contribution to our field." --Jan van Leeuwen, Utrecht University. "This is an impressive book. The subject has been thoroughly researched and carefully presented. All the machine models central to the modern theory of computation are covered in depth; many for the first time in textbook form. Readers will learn a great deal from the wealth of interesting material presented." --Andrew C. Yao, Professor of Computer Science, Princeton University. ""Models of Computation" is an excellent new book that thoroughly covers the theory of computation, including significant recent material, and presents it all with insightful new approaches. This long-awaited book will serve as a milestone for the theory community." --Akira Maruoka, Professor of Information Sciences, Tohoku University. "This is computer science." --Elliot Winard, Student, Brown...
CHAPTER 67. Parallel Graph-Embedding Heuristics. John E. Savage and Markus G. Wloka. Abstract: We introduce the massively parallel Mob heuristic for graph embedding, an NP-complete problem, and report on experiments on the CM-2 Connection Machine. Graph embedding ...
The credibility of computer science as a science depends to a large extent on the complexity theoretic results which are now emerging. In this survey the efficiency of algorithms and machines for finite tasks, i.e., tasks representable by functions with finite domain and range, will be examined. The complexity of functions can be measured in several ways. Two useful measures to be discussed in this survey are the shortest length program for a function on a universal Turing machine and the smallest number of logical operations to compute a function. Two storage-time tradeoff inequalities for the computation of functions on random-access general purpose computers will be stated. These imply that efficient use of these machines is possible only for algorithms using small storage and large time or the reverse. Intermediate amounts of storage and time generally imply inefficient operation.
