Networking support for high-performance servers
Publisher:
  • University of Massachusetts
  • Computer and Information Science Dept., Graduate Research Center, Amherst, MA
  • United States
Order Number: UMI Order No. GAX97-21482
Abstract

Networked information systems have seen explosive growth in the last few years and are transforming society both economically and socially. The information available via the global information infrastructure is growing rapidly, dramatically increasing the performance requirements for large-scale information servers. Example services include digital libraries, video-on-demand, the World-Wide Web, and high-performance file systems.

In this dissertation, we investigate performance issues that affect networking support for high-performance servers. We focus on three research issues:

(1) Parallelism Using Packets. The first part of this dissertation identifies performance issues in network protocol processing on shared-memory multiprocessors when packets are used as the unit of concurrency. Our results show good available parallelism for connectionless protocols such as UDP, but limited speedup for TCP within a single connection; with multiple connections, parallelism improves. We demonstrate how locking structure affects performance, and that a complex protocol such as TCP, with its large connection state, yields better speedup with a single lock than with multiple locks (a sketch of this locking structure follows the abstract). We also show how preserving packet order, exploiting cache affinity, and avoiding contention affect performance.

(2) Support for Secure Servers. The second part of this dissertation shows that parallelism is an effective means of improving the performance of cryptographic protocols. We demonstrate excellent available parallelism, showing linear speedup for several Internet-based cryptographic protocol stacks using packet-level parallelism. We also show linear speedup with another approach to parallelism, in which connections are the unit of concurrency.

(3) Cache Memory Behavior. In the final part of this dissertation we present a performance study of memory reference behavior in network protocol processing. We show that network protocol memory reference behavior varies widely. We find that instruction cache behavior is the primary contributor to protocol performance in most scenarios, and we investigate the impact of architectural features such as associativity and larger cache sizes.

We explore these issues in the context of the network subsystem, i.e., the protocol stack, examining throughput, latency, and scalability.
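
The locking structure discussed in part (1) can be illustrated with a minimal sketch. This is not code from the dissertation: the types, names, and queue discipline below are hypothetical placeholders. It shows packet-level parallelism in which worker threads each claim a packet and then serialize on a single coarse lock guarding all of a connection's state, rather than on several fine-grained locks.

```c
/* Hedged sketch, assuming POSIX threads; all identifiers are illustrative. */
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 4
#define N_PACKETS 1000

struct connection {
    pthread_mutex_t lock;          /* one coarse lock for all connection state */
    unsigned long packets_processed;
    /* ... sequence numbers, window state, timers, etc. ... */
};

struct packet {
    struct connection *conn;       /* demultiplexed to its connection up front */
    int payload;
};

static struct connection conn;             /* single connection for the sketch */
static struct packet packets[N_PACKETS];   /* stand-in for a shared input queue */
static long next_pkt = -1;                 /* next index handed to a worker */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker repeatedly claims the next packet and runs protocol processing
 * for it: concurrency is per packet, serialization is per connection. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        long i = ++next_pkt;
        pthread_mutex_unlock(&queue_lock);
        if (i >= N_PACKETS)
            return NULL;

        struct packet *p = &packets[i];
        pthread_mutex_lock(&p->conn->lock);   /* coarse per-connection lock */
        p->conn->packets_processed++;         /* placeholder for TCP input work */
        pthread_mutex_unlock(&p->conn->lock);
    }
}

int main(void)
{
    pthread_t tids[N_WORKERS];
    pthread_mutex_init(&conn.lock, NULL);
    for (int i = 0; i < N_PACKETS; i++)
        packets[i] = (struct packet){ .conn = &conn, .payload = i };

    for (int i = 0; i < N_WORKERS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(tids[i], NULL);

    printf("processed %lu packets\n", conn.packets_processed);
    return 0;
}
```

The intent of the single-lock design, as described in the abstract, is that when a protocol's per-connection state is large and nearly every packet touches most of it, one coarse lock avoids the lock/unlock overhead and cross-lock contention that multiple fine-grained locks would introduce; with only one connection, speedup is therefore limited, while multiple connections allow packets on different connections to proceed in parallel.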

Contributors
  • IBM Thomas J. Watson Research Center
