Do computer scientists need to experiment at all? Only if the answer is "yes" does it make sense to ask whether there is enough of it. The author argues that experimentation is central to the scientific process. Only experiments test theories. Only experiments can explore critical factors and bring new phenomena to light, so theories can be formulated and corrected. Without experiments, according to the author, computer science is in danger of drying up and becoming an auxiliary discipline. The current pressure to concentrate on application is the writing on the wall. The author rebuts the eight most common objections computer scientists have to focusing on experimentation: The traditional scientific method isn't applicable. The current level of experimentation is good enough. Experiments cost too much. Demonstrations will suffice. There's too much noise in the way. Progress will slow. Technology changes too fast. You'll never get it published. In contrast, the author argues that experimentation would build a reliable base of knowledge and thus reduce uncertainty about which theories, methods, and tools are adequate; lead to new, useful, and unexpected insights and open whole new areas of investigation; and accelerate progress by quickly eliminating fruitless approaches, erroneous assumptions, and fads. Conversely, when we ignore experimentation and avoid contact with reality, we hamper progress. As computer science leaves adolescence behind, the author advocates the development of its experimental branch.
Today's cluster computers suffer from slow I/O, which holds back I/O-intensive applications. We show that fast disk I/O can be achieved by running a parallel file system over fast networks such as Myrinet or Gigabit Ethernet. In this paper, we demonstrate how the ParaStation3 communication system speeds up parallel I/O on clusters, using the open-source Parallel Virtual File System (PVFS) as testbed and production system. We describe the setup of PVFS on the Alpha-Linux-Cluster-Engine (ALiCE) located at Wuppertal University, Germany. Benchmarks on ALiCE achieve write performance of up to 1 GB/s from a 32-processor compute partition to a 32-processor PVFS I/O partition, outperforming known benchmark results for PVFS on the same network by more than a factor of 2. Read performance from the buffer cache reaches up to 2.2 GB/s. Our benchmarks are large, I/O-intensive eigenmode problems from lattice quantum chromodynamics, demonstrating the stability and performance of PVFS over ParaStation in large-scale production runs.
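The paper's own measurements come from production lattice-QCD runs on ALiCE and are not reproduced here. As a rough illustration of how aggregate write bandwidth to a parallel file system such as PVFS is typically measured, the sketch below has every MPI rank write a disjoint block of one shared file and reports the combined throughput. It assumes mpi4py and a mounted parallel file system; the file path and block size are made up, and this is not the benchmark used in the paper.

# Minimal aggregate-write-bandwidth sketch (not the authors' benchmark):
# every rank writes a contiguous block of one shared file on the parallel
# file system (e.g. a PVFS mount); the slowest rank defines the runtime.
# Run with, e.g.:  mpirun -np 32 python write_bench.py /pvfs/bench.dat
import sys
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
path = sys.argv[1] if len(sys.argv) > 1 else "bench.dat"

block = 64 * 1024 * 1024                 # 64 MiB per rank (hypothetical size)
buf = bytearray(block)                   # zero-filled write buffer

fh = MPI.File.Open(comm, path, MPI.MODE_CREATE | MPI.MODE_WRONLY)
comm.Barrier()
t0 = MPI.Wtime()
fh.Write_at(rank * block, buf)           # each rank writes its own disjoint region
fh.Close()
comm.Barrier()
elapsed = MPI.Wtime() - t0

if rank == 0:
    total_gib = size * block / 2**30
    print(f"{total_gib:.1f} GiB in {elapsed:.2f} s -> {total_gib / elapsed:.2f} GiB/s aggregate")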
Directed graphs are used in a significant number of applications for visualizing concepts and relationships. This paper describes research in knowledge-based editors for the direct, visual manipulation of such graphs. The novel aspects of this work are: (1) The editor produces an aesthetically pleasing layout of the graph automatically, freeing the user from cut-and-paste work after changes. (2) The editor
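The abstract above is cut off and does not reproduce the editor's layout algorithm. For readers unfamiliar with automatic layout of directed graphs, the sketch below shows the first phase of the standard Sugiyama-style layered approach: assigning each node of an acyclic graph to a layer by longest path from the sources. It is a generic illustration, not the algorithm of the editor described in the paper.

# Minimal layer-assignment sketch for a directed acyclic graph, the first
# phase of a Sugiyama-style layered drawing: each node is placed one layer
# below its deepest predecessor.
from collections import defaultdict

def assign_layers(edges):
    """edges: iterable of (u, v) pairs meaning u -> v. Returns {node: layer}."""
    succs, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succs[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))

    layer = {n: 0 for n in nodes}
    ready = [n for n in nodes if indeg[n] == 0]      # sources start at layer 0
    while ready:                                     # Kahn-style topological sweep
        u = ready.pop()
        for v in succs[u]:
            layer[v] = max(layer[v], layer[u] + 1)   # longest-path layering
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return layer

if __name__ == "__main__":
    g = [("main", "parse"), ("main", "eval"), ("parse", "lex"), ("eval", "lex")]
    print(assign_layers(g))   # {'main': 0, 'parse': 1, 'eval': 1, 'lex': 2}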
The Revision Control System (RCS) is a software tool that helps in managing multiple revisions of text. RCS automates the storing, retrieval, logging, identification, and merging of revisions, and provides access control. It is useful for text that is revised frequently, for example, programs and documentation. This paper presents the design and implementation of RCS. Both design and implementation are ...
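As a concrete illustration of the check-in/check-out cycle that RCS automates, here is a small sketch driven from Python purely for demonstration; the natural interface is the ci/co command line, and the example assumes the standard RCS tools (ci, co, rlog) are installed. The file name hello.c and the log messages are made up for the example.

# Minimal check-in / check-out cycle using the standard RCS commands,
# invoked via subprocess for illustration only.
import pathlib, subprocess

def run(*cmd):
    # Echo and execute one RCS command, aborting on failure.
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

src = pathlib.Path("hello.c")
src.write_text('#include <stdio.h>\nint main(void) { puts("hi"); return 0; }\n')

run("ci", "-u", "-t-demo", str(src))         # store revision 1.1, keep a read-only working copy
run("co", "-l", str(src))                    # retrieve the latest revision, locked for editing
src.write_text(src.read_text().replace("hi", "hello"))
run("ci", "-u", "-mgreet politely", str(src))  # store revision 1.2 with a log message
run("rlog", str(src))                        # print the revision history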
In 1976, Les Belady and Manny Lehman published the first empirical growth study of a large software system, IBM's OS/360. At the time, the operating system was twelve years old, and the authors were able to study 21 successive releases of the software. By looking at variables such as the number of modules, the time for preparing releases, and the modules handled ...
Punched cards were already obsolete when I began my studies at the Technical University of Munich in 1971. Instead, we had the luxury of an interactive, line-oriented editor for typing our programs. Doug Engelbart had already invented the mouse, but the device was not yet available. With line editors, users had to identify lines by number and type awkward substitution commands just to add missing semicolons. Though cumbersome by today's standards, it was obvious that line-oriented editors were far better than punched cards. Not long after, screen-oriented editors such as vi and Emacs appeared. Again, these editors were obvious improvements, and everybody quickly made the switch. No detailed usability studies were needed. "Try it and you'll like it" was enough. (Brian Reid at CMU likened screen editors to handing out free cocaine in the schoolyard.) Switching from assembler to Fortran, Algol, or Pascal was also a no-brainer. But in the late '70s, the acceptance of new technologies for building software seemed to slow down, even though more people were building software tools. Debates raged over whether Pascal was superior to C, without a clear winner. Object-oriented programming, invented back in the '60s with Simula, took decades to be widely adopted. Functional programming is languishing to this day. The debate about whether agile methods are better than plan-driven methods has not led to a consensus. Literally hundreds of software development technologies and programming languages have been invented, written about, and demoed over the years, only to be forgotten. What went wrong?
Software Change Dynamics, or Half of All Ada Compilations Are Redundant (Rolf Adams, Annette Weinert, Walter Tichy; University of Karlsruhe, Karlsruhe, FRG). This paper is an empirical study of the evolution of a medium-size, industrial software system written in Ada. ...
Programming-in-the-Large: Past, Present, and Future (Walter F. Tichy; University of Karlsruhe, 7500 Karlsruhe 1, FRG). ... authorized operation to it. The principle of data abstraction embodied in the levels model traces back to Dennis and Van Horn's 1966 paper ...

And 175 more publications.