Virtual environment (VE) designs have evolved from text-based to immersive graphical systems. The next logical step of this evolution is to have a fully immersive environment in which thousands of widely distributed users will be able to move around and interact. This requires a VE architecture that can scale well for a large number of participants while providing the necessary support for quality of service, security and flexibility. Current VE architectures are unable to fully meet these requirements and a new network/protocol architecture is needed. The VENUS approach addresses these problems by creating a network architecture which is scalable and flexible. We define a new architecture consisting of a transmit-only satellite/server and bi-directional links which will be capable of sustaining a wide-area virtual environment. We then offer the preliminary results of our experiments.
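The split the abstract describes, a transmit-only broadcast channel for world-state updates paired with bidirectional links for client input, can be made concrete with a small sketch. This is a hypothetical illustration, not the VENUS implementation; the message layout and broadcast_update() are assumptions.

```c
/* Hypothetical sketch: the downstream satellite channel is broadcast-only,
 * so every world-state change is pushed as a small self-describing update
 * record; client input travels back over the bidirectional link. */
#include <stdint.h>
#include <stdio.h>

struct ve_update {          /* one entity-state update, broadcast to all */
    uint32_t entity_id;     /* which avatar/object changed */
    uint32_t seq;           /* sequence number: receivers detect loss,   */
                            /* since a transmit-only link cannot be ACKed */
    float    pos[3];        /* new position in world coordinates */
};

/* Server side: fan the update out once; the satellite replicates it. */
static void broadcast_update(const struct ve_update *u)
{
    /* stand-in for the real uplink; a prototype might use UDP multicast */
    printf("broadcast entity %u seq %u -> (%.1f, %.1f, %.1f)\n",
           (unsigned)u->entity_id, (unsigned)u->seq,
           u->pos[0], u->pos[1], u->pos[2]);
}

int main(void)
{
    struct ve_update u = { .entity_id = 7, .seq = 1,
                           .pos = { 1.0f, 2.0f, 0.5f } };
    broadcast_update(&u);
    return 0;
}
```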
* A file system operation to a remote procedure call request
* A remote procedure call request to a host-independent packet (essentially converting it to a string of characters)
* A packet to one or more messages
* A message to one or more network packets, via a multiplexed Input/ ...
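A minimal sketch of the middle two conversions listed above, assuming a hypothetical two-field request: the RPC request is flattened into a host-independent byte string (fixed-width big-endian fields), then cut into fixed-size messages for the layer below. The struct layout and MSG_SIZE are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define MSG_SIZE 8   /* assumed maximum message payload, for illustration */

struct rpc_request { uint32_t proc; uint32_t arg; };

/* Request -> host-independent packet: encode fields big-endian by hand. */
static size_t marshal(const struct rpc_request *r, uint8_t buf[8])
{
    uint32_t f[2] = { r->proc, r->arg };
    for (int i = 0; i < 2; i++)
        for (int b = 0; b < 4; b++)
            buf[i * 4 + b] = (uint8_t)(f[i] >> (24 - 8 * b));
    return 8;
}

/* Packet -> one or more messages: fragment at MSG_SIZE boundaries. */
static void send_as_messages(const uint8_t *pkt, size_t len)
{
    for (size_t off = 0; off < len; off += MSG_SIZE) {
        size_t n = len - off < MSG_SIZE ? len - off : MSG_SIZE;
        printf("message: %zu bytes at offset %zu\n", n, off);
    }
}

int main(void)
{
    struct rpc_request r = { .proc = 4 /* e.g. a READ op */, .arg = 42 };
    uint8_t pkt[8];
    send_as_messages(pkt, marshal(&r, pkt));
    return 0;
}
```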
This presentation is an outgrowth of a workshop session held at Lake Arrowhead in 1973 on Distributed Software. The participants in the session were Dave Farber of UC Irvine (Chairman), Don Bennett of Sperry Rand, Bob Bressler of Bolt, Beranek and Newman, Larry Rowe of UC Irvine, Bob Metcalfe of XEROX PARC, and Marty Graham of UC Berkeley.
Program verification applied to kernel architectures forms a promising method for providing uncircumventably secure, shared computer systems. A precise definition of data security is developed here in terms of a general model for operating systems. This model is suitable as a basis for verifying many of those properties of an operating system which are necessary to assure reliable enforcement of security. The application of this approach to the UCLA secure operating system is also discussed.
We believe that many distributed computing systems of the future will use distributed shared memory as a technique for interprocess communication. Thus, traffic generated by memory requests will be a major component of the traffic for any networks which connect nodes in such a system. In this paper, we study memory reference strings gathered with a tracing program we devised. We study several models. First, we look at raw reference data, as would be seen if the network were a backplane. Second, we examine references in units of "blocks", first using a one-block cache model and then with an infinite cache. Finally, we study the effect of predictive prepaging of these "blocks" on the traffic. We provide a novel representation of memory reference data which can be used to calculate interarrival distributions directly.
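The first two models contrasted above can be replayed over a reference string in a few lines. This is an illustrative sketch, not the authors' tracer; the reference string and block size are made up.

```c
/* Replay a memory reference string under two traffic models: raw
 * references as backplane traffic, and a one-block cache where only
 * block changes cross the network. */
#include <stdio.h>

#define BLOCK 4   /* assumed block size in words, for illustration */

int main(void)
{
    int refs[] = { 0, 1, 2, 3, 4, 5, 4, 3, 100, 101, 0, 1 };
    int n = sizeof refs / sizeof refs[0];

    long raw = 0, misses = 0;
    int cached = -1;              /* block held by the one-block cache */

    for (int i = 0; i < n; i++) {
        raw++;                    /* backplane model: every reference is traffic */
        int blk = refs[i] / BLOCK;
        if (blk != cached) {      /* one-block model: traffic only on a miss */
            misses++;
            cached = blk;
        }
    }
    printf("raw references: %ld, block fetches: %ld\n", raw, misses);
    return 0;
}
```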
Two possible modes of Input/Output (I/O) are "sequential" and "random-access", and there is an extremely strong conceptual link between I/O and communication. Sequential communication, typified in the I/O setting by magnetic tape, is typified in the communication setting by a stream, e.g., a UNIX pipe. Random-access communication, typified in the I/O setting by a drum or disk device, is typified in the communication setting by shared memory. In this paper, we study and survey the extension of the random-access model to distributed computer systems. A Distributed Shared Memory (DSM) is a memory area shared by processes running on computers connected by a network. DSM provides direct system support of the shared memory programming model. When assisted by hardware, it can also provide a low-overhead interprocess communication (IPC) mechanism to software. Shared pages are migrated on demand between the hosts. Since computer network latency is typically much larger...
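The demand migration described above amounts to routing every access through a presence check. A minimal sketch, assuming a hypothetical fetch_page() that stands in for the network request to the page's current owner:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NPAGES   4
#define PAGESIZE 16

struct dsm_page {
    int     present;            /* is the page resident on this host? */
    int     owner;              /* host currently holding the page */
    uint8_t data[PAGESIZE];
};

static struct dsm_page table[NPAGES];

static void fetch_page(struct dsm_page *p)
{
    /* stand-in for a network request to p->owner for the page contents */
    memset(p->data, 0, PAGESIZE);
    p->present = 1;
    printf("migrated page from host %d\n", p->owner);
}

/* Every DSM access goes through the fault check first. */
static uint8_t dsm_read(int page, int off)
{
    struct dsm_page *p = &table[page];
    if (!p->present)
        fetch_page(p);          /* the "page fault": migrate on demand */
    return p->data[off];
}

int main(void)
{
    table[2].owner = 3;         /* page 2 lives on host 3 initially */
    (void)dsm_read(2, 5);       /* first touch migrates the page */
    (void)dsm_read(2, 6);       /* second touch is a local hit */
    return 0;
}
```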
Many applications envisioned for ultra-high-speed networks require cryptographic transformations for data in transit. Security has often been an afterthought, and cryptographic support has generally not kept pace with performance increases in other network components. Two distinct experimental prototypes of high-speed DES boards were built to understand architectural issues in providing cryptographic support for the AURORA gigabit testbed. Combining cryptographic support with the host/network interface is indicated.
1. Introduction
Network usage and capabilities are both increasing at terrific rates. The traffic load on the NSFnet backbones in the United States doubles every few months; much of this is due to the increased connectivity provided by interconnection with other networks, large numbers of personal workstations connected to LANs, and large numbers of computers connected via 9600 bit per second (bps) dial-ins. Much of this traffic is "traditional" Internet traffic...
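A sketch of the framing such a board sits behind: payloads are padded to the 8-byte DES block size and each block is transformed before it goes on the wire. The cipher below is a placeholder XOR standing in for DES, and none of the names come from the prototypes described.

```c
#include <stdint.h>
#include <stdio.h>

#define DES_BLOCK 8

/* Placeholder standing in for a DES block encryption; NOT real DES. */
static void cipher_block(uint8_t blk[DES_BLOCK], uint8_t key)
{
    for (int i = 0; i < DES_BLOCK; i++)
        blk[i] ^= key;
}

static void encrypt_in_transit(uint8_t *buf, size_t len, uint8_t key)
{
    /* caller must pad len up to a multiple of DES_BLOCK */
    for (size_t off = 0; off + DES_BLOCK <= len; off += DES_BLOCK)
        cipher_block(buf + off, key);
}

int main(void)
{
    uint8_t pkt[16] = "hello, gigabit"; /* 14 bytes + zero padding */
    encrypt_in_transit(pkt, sizeof pkt, 0x5a);
    printf("first byte after transform: 0x%02x\n", pkt[0]);
    return 0;
}
```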
Aurora is one of five U.S. networking testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit per second or higher bandwidths. The emphasis of the Aurora testbed, distinct from the other four testbeds, BLANCA, CASA, NECTAR and VISTANET, is research into the supporting technologies for gigabit networking. Like the other testbeds, Aurora itself is an experiment in collaboration, where government initiative (in the form of the Corporation for National Research Initiatives, which is funded by DARPA and the National Science Foundation) has spurred interaction among pre-existing centers of excellence in industry, academia, and government. Aurora has been charged with research into networking technologies that will underpin future high-speed networks. This paper provides an overview of the goals and methodologies employed in Aurora, and points to some preliminary results from our first year of research, ranging from analytic results to...
Integrity is rarely a valid presupposition in many systems architectures, yet it is necessary to make any security guarantees. To address this problem, we have designed a secure bootstrap process, AEGIS, which presumes a minimal amount of integrity, and which we have prototyped on the Intel x86 architecture. The basic principle is sequencing the bootstrap process as a chain of progressively higher levels of abstraction, and requiring each layer to check a digital signature of the next layer before control is passed to it. A major design decision is the consequence of a failed integrity check. A simplistic strategy is to simply halt the bootstrap process. However, as we show in this paper, the AEGIS bootstrap process can be augmented with automated recovery procedures which preserve the security properties of AEGIS under the additional assumption of the availability of a trusted repository. We describe two means by which such a repository can be implemented, and focus our attention o...
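A conceptual sketch of the chained check described above: each bootstrap layer verifies a signature over the next before passing control, and a failed check triggers recovery rather than a halt. Both verify_signature() and recover_from_repository() are stubs, not the AEGIS code.

```c
#include <stdio.h>

struct layer { const char *name; int signature_ok; };

static int verify_signature(const struct layer *l)
{
    return l->signature_ok;        /* stand-in for a real signature check */
}

static void recover_from_repository(struct layer *l)
{
    printf("fetching clean copy of %s from trusted repository\n", l->name);
    l->signature_ok = 1;
}

int main(void)
{
    struct layer chain[] = {
        { "early BIOS segment", 1 },
        { "boot block",         0 },   /* simulate a corrupted layer */
        { "OS kernel",          1 },
    };
    for (int i = 0; i < 3; i++) {
        if (!verify_signature(&chain[i]))
            recover_from_repository(&chain[i]);  /* instead of halting */
        printf("passing control to %s\n", chain[i].name);
    }
    return 0;
}
```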
Mether is a Distributed Shared Memory (DSM) that runs on Sun workstations under the SunOS 4.0 operating system. User programs access the Mether address space in a way indistinguishable from other memory. When we began to use Mether and measure its performance, a number of issues made themselves felt:
* Programs could saturate the network with packets while trying to synchronize operations
* Because the communications are accomplished via a user-level server, a Mether client program spinning on a lock could block the user-level server, resulting in a backup of queued packets
* The combined effects of deep queues of packets waiting for service and communications servers blocked by Mether clients greatly increased the latency of the shared memory
* Too many packets had to flow for simple synchronization operations, ...
(A (much) shorter version of this paper appears in the proceedings of the tenth ICDCS. Sun and SunOS are trademarks of Sun Microsystems Inc.)
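One standard remedy for the spin-lock saturation listed above is to back off between probes so a waiting client stops flooding the network (and the user-level server) with synchronization packets. This is an illustrative fix, not Mether's actual mechanism; probe_lock() is a stub.

```c
#include <stdio.h>
#include <time.h>

static int probe_lock(void)       /* stand-in for a remote lock probe */
{
    static int tries;
    return ++tries >= 4;          /* pretend the lock frees on the 4th try */
}

static void acquire_with_backoff(void)
{
    long delay_ns = 1000;         /* start at 1 microsecond */
    while (!probe_lock()) {
        struct timespec ts = { 0, delay_ns };
        nanosleep(&ts, NULL);     /* sleep instead of spinning */
        if (delay_ns < 1000000)   /* cap the backoff at 1 ms */
            delay_ns *= 2;
        printf("lock busy, backing off to %ld ns\n", delay_ns);
    }
}

int main(void)
{
    acquire_with_backoff();
    printf("lock acquired\n");
    return 0;
}
```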
Advanced manufacturing concepts such as "Virtual Factories" use an information infrastructure to tie together changing groups of specialized facilities into agile manufacturing systems. A necessary element of such systems is the ability to teleoperate machines, for example telerobotic systems with full-capability sensory feedback loops. We have identified three network advances needed for splitting robotic control from robotic function: increased bandwidth, decreased error rates, and support for isochronous traffic. These features are available in the Gigabit networks under development at Penn and elsewhere. A number of key research questions are posed by gigabit telerobotics. There are issues in network topology, robot control and distributed system software, packaging and transport of sensory data (including wide-area transport), and performance implications of architectural choices using measures such as cost, response time, and network utilization. We propose to explo...
EROS, the Extremely Reliable Operating System, addresses the issues of reliability and security by combining two ideas from earlier systems: capabilities and a persistent single-level store. Capabilities unify object naming with access control. Persistence extends this naming and access control uniformly across the memory hierarchy; main memory is viewed simply as a cache of the single-level store. The combination simplifies application design, allows programs to observe the "principle of least privilege," and enables active objects to be constructed securely. Prior software capability implementations have suffered from poor performance. In EROS, caching techniques are used to implement authority checks efficiently and to preserve the state of active processes in a form optimized for the demands of the machine. The resulting system provides performance competitive with conventional designs. This paper describes the EROS object model and the structures used to efficiently ...
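The core idea that capabilities unify naming with access control can be sketched in a few lines: a capability designates an object and carries rights bits, and every invocation checks those bits. The field names below are illustrative, not EROS's actual structures.

```c
#include <stdint.h>
#include <stdio.h>

#define RIGHT_READ  0x1
#define RIGHT_WRITE 0x2

struct capability {
    uint32_t object;     /* which object this capability designates */
    uint32_t rights;     /* what the holder may do with it */
};

static int invoke(const struct capability *c, uint32_t needed)
{
    if ((c->rights & needed) != needed)
        return -1;       /* denied: no ambient authority to fall back on */
    printf("object %u invoked with rights 0x%x\n",
           (unsigned)c->object, (unsigned)needed);
    return 0;
}

int main(void)
{
    struct capability ro = { .object = 9, .rights = RIGHT_READ };
    invoke(&ro, RIGHT_READ);                    /* allowed */
    if (invoke(&ro, RIGHT_WRITE) < 0)           /* least privilege in action */
        printf("write denied\n");
    return 0;
}
```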
Current protocols are expected to become inefficient if used at speeds in excess of 1 Gigabit per second. While this premise is widely accepted, no model exists to explain the phenomenon. We define a model for understanding protocols which is aimed at explaining why such a barrier exists, and indicates alternate designs which do not have this limit. Existing protocols are akin to classical mechanics; 1 Gigabit/second is the speed near which relativistic effects emerge. In order to account for these effects, we need to express knowledge at a distance, latent measurement, and uncertainty as real entities, not negligible estimates. The result is a model which not only expresses existing protocols but may also contribute to a better understanding of the Gigabit communications domain. University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-89-79.
The Federal Communications Commission’s Notice of Inquiry in GN 09-157 Fostering Innovation and Investment in the Wireless Communications Market is a significant event at an opportune moment. Wireless communication has already radically changed the way that not only Americans, but people the world over communicate with each other and access and share information. In this article, we review the wireless industry’s past performance in three dimensions: (i) the rate of innovation, (ii) how competitive the industry is, and (iii) how competitive wireless innovation is. We do so by examining the record of three key layers in the industry’s vertical chain: software applications, devices (handhelds), and the core wireless distribution networks. We find that all three markets exhibit very high rates of innovation, that the markets are competitive, and that this competition has driven innovation. As in previous work (Faulhaber, 2009a) we argue that, absent market failure, regulatory intervent...
AURORA is one of five U.S. testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit per second or higher bandwidths. AURORA is also an experiment in collaboration, where government support (through the Corporation for National Research Initiatives, which is in turn funded by DARPA and the NSF) has spurred interaction among centers of excellence in industry, academia, and government. The emphasis of the AURORA testbed, distinct from the other four testbeds, is research into the supporting technologies for gigabit networking. Our targets include new software architectures, network abstractions, hardware technologies, and applications. This paper provides an overview of the goals and methodologies employed in AURORA, and reports preliminary results from our first year of research. University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-93-20.
CAPNET - AN APPROACH TO ULTRA HIGH SPEED NETWORK. Ming Chit Tam, David Farber ... 1. The Impact of the Very High Speed Networks. With the advance of networking technology, the bandwidth of networks will soon be in the gigabit range. ...
This paper defines a capability implementation which uses the memory management hardware and the TRAP instruction of the higher members of the Digital Equipment Corporation PDP-11/XX (XX = 34, 45, 55, 70) to create a capability architecture processor. ...
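The mechanism implied above, capability invocation mediated by a trap into privileged code, can be sketched in ordinary C, with a function standing in for the PDP-11 TRAP vector. Everything here (the capability list layout, the rights constants) is an illustrative assumption.

```c
#include <stdio.h>

#define CAP_READ 1

struct cap { int segment; int rights; };

static struct cap clist[] = {        /* per-process capability list */
    { .segment = 3, .rights = CAP_READ },
};
#define NCAPS (int)(sizeof clist / sizeof clist[0])

/* Stand-in for the TRAP handler: the only path to protected segments. */
static int trap_invoke(int capindex, int want)
{
    if (capindex < 0 || capindex >= NCAPS)
        return -1;                   /* bad capability index */
    if ((clist[capindex].rights & want) != want)
        return -1;                   /* insufficient rights */
    printf("mapped segment %d for access\n", clist[capindex].segment);
    return 0;
}

int main(void)
{
    if (trap_invoke(0, CAP_READ) == 0)
        printf("capability invocation succeeded\n");
    return 0;
}
```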
Since, in general, the underlying initial distribution ... REFERENCES: D. J. Gooding, "Performance monitor techniques for digital receivers based on extrapolation of error rate," IEEE Trans. Commun. Technol., vol. COM-16, pp. 380-387, June 1968. D. R. Smith, "A performance ...
The design of the TTI prototype Trusted Mail Agent (TMA) is discussed. This agent interfaces between two entities: a key distribution center (KDC) and a user agent (UA). The KDC manages keys for the encryption of text messages, which two subscribers to a key distribution ...
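The flow described above, agent obtains a message key from the KDC, encrypts the text, and hands the result to the user agent, is sketched below. kdc_get_key() and the XOR "cipher" are placeholders, not the TTI design.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t kdc_get_key(const char *from, const char *to)
{
    /* stand-in for a real key-distribution exchange between subscribers */
    printf("KDC issues key for %s -> %s\n", from, to);
    return 0x3c;
}

static void tma_send(const char *from, const char *to, char *msg, size_t len)
{
    uint8_t key = kdc_get_key(from, to);
    for (size_t i = 0; i < len; i++)   /* placeholder encryption */
        msg[i] ^= (char)key;
    printf("handing %zu encrypted bytes to the user agent\n", len);
}

int main(void)
{
    char msg[] = "meet at noon";
    tma_send("alice", "bob", msg, strlen(msg));
    return 0;
}
```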
