
    Werner Vogels

    Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon’s core services use to provide an “always-on” experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution ...
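    The abstract mentions object versioning and application-assisted conflict resolution; the sketch below is a minimal illustration of that idea (my own simplification, not Dynamo's implementation): each write is tagged with a vector clock, reads return any concurrent versions, and the application reconciles them.

```python
def dominates(a, b):
    """True if vector clock `a` is causally at or after `b`."""
    return all(a.get(k, 0) >= v for k, v in b.items())

class VersionedStore:
    def __init__(self, node_id):
        self.node_id = node_id
        self.data = {}          # key -> list of (vector_clock, value)

    def put(self, key, value, context=None):
        clock = dict(context or {})
        clock[self.node_id] = clock.get(self.node_id, 0) + 1
        # Drop versions the new write causally supersedes; keep concurrent siblings.
        siblings = [(c, v) for c, v in self.data.get(key, []) if not dominates(clock, c)]
        self.data[key] = siblings + [(clock, value)]

    def get(self, key):
        # Returns all concurrent versions; the caller merges them
        # (e.g. a shopping-cart union).
        return self.data.get(key, [])

store = VersionedStore("node-A")
store.put("cart", {"book"})
clock, _ = store.get("cart")[0]
store.put("cart", {"book", "pen"}, context=clock)   # causally after the first write
print(store.get("cart"))                            # a single reconciled version
```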
    The development of distributed systems in the next few years will most probably be centered around: improvement of facilities for application development and execution control; introduction of new distributed services; integration of new technologies to support the first two. One of ...
    This paper describes how this evaluation led to the insight that Microsoft's Windows NT is the operating system that is best prepared for the future.
    Computer architecture is about to undergo, if not another revolution, then a vigorous shaking-up. The major chip manufacturers have, for the time being, simply given up trying to make processors run faster. Instead, they have recently started shipping ...
    We have performed a study of the usage of the Windows NT File System through long-term kernel tracing. Our goal was to provide a new data point with respect to the 1985 and 1991 trace-based file system studies, to investigate the usage details of the Windows NT file system architecture, and to study the overall statistical behavior of the usage data. In this paper we report on these issues through a detailed comparison with the older traces, through details on the operational characteristics, and through a usage analysis of the file system and cache manager. In addition to architectural insights, we provide evidence for the pervasive presence of heavy-tail distribution characteristics in all aspects of file system usage. Extreme variances are found in session inter-arrival times, session holding times, read/write frequencies, read/write buffer sizes, etc., which is of importance to system engineering, tuning, and benchmarking.
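    The heavy-tail observation above can be illustrated with a small sketch on synthetic data (not the NT traces themselves): on log-log axes, the complementary CDF of a Pareto-like distribution is roughly a straight line, and its slope estimates the tail index.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical request sizes in bytes; a Pareto shape of ~1.2 gives extreme variance.
sizes = (rng.pareto(1.2, 100_000) + 1.0) * 4096

x = np.sort(sizes)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)        # empirical P[X > x]

# Fit a straight line to the log-log tail (top 10% of observations, excluding the
# final point where the CCDF hits zero); the negated slope estimates alpha.
tail = slice(int(0.9 * len(x)), len(x) - 1)
slope, _ = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)
print(f"estimated tail index alpha ~ {-slope:.2f}")    # roughly 1.2 => heavy tail
```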
    The U-Net communication architecture provides processes with a virtual view of a network interface to enable user-level access to high-speed communication devices. The architecture, implemented on standard workstations using off-the-shelf ATM communication hardware, removes the kernel from the communication path while still providing full protection. The model presented by U-Net allows for the construction of protocols at user level whose performance is only ...
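    As a rough illustration of the endpoint model the abstract describes (my own sketch in plain Python, not the U-Net API), an endpoint pairs an application-owned communication segment with send, receive, and free queues so the fast path needs no kernel transition.

```python
from collections import deque

class Endpoint:
    def __init__(self, n_buffers=8, buf_size=2048):
        # Communication segment: pre-registered buffers shared with the NIC.
        self.segment = [bytearray(buf_size) for _ in range(n_buffers)]
        self.free_q = deque(range(n_buffers))   # buffer indices available for receives
        self.send_q = deque()                   # descriptors the NIC would transmit
        self.recv_q = deque()                   # descriptors for arrived messages

    def send(self, dest, payload):
        # User level only enqueues a descriptor; no system call on this path.
        self.send_q.append((dest, bytes(payload)))

    def deliver(self, payload):
        # What the (emulated) NIC does on arrival: take a free buffer, fill it,
        # and post a descriptor to the receive queue.
        idx = self.free_q.popleft()
        self.segment[idx][:len(payload)] = payload
        self.recv_q.append((idx, len(payload)))

    def receive(self):
        idx, length = self.recv_q.popleft()
        data = bytes(self.segment[idx][:length])
        self.free_q.append(idx)                 # recycle the buffer
        return data

ep = Endpoint()
ep.deliver(b"hello from the wire")
print(ep.receive())
```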
    The dramatic growth of computer networks creates both an opportunity and a daunting distributed computing problem for users seeking to perform data fusion and data mining. The problem is that data often resides on large numbers of devices and evolves rapidly. Systems that collect data at a single location scale poorly and suffer from single points of failure. Astrolabe performs data fusion in real time, creating a virtual system-wide hierarchical database, which evolves as the underlying information changes. A scalable aggregation mechanism offers a flexible way to perform data mining within the resulting virtual database. Astrolabe is secure, robust under a wide range of failure and attack scenarios, and imposes low loads even under stress.
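    The hierarchical aggregation idea can be sketched in a few lines (an illustration of the concept only, not Astrolabe's gossip protocol): each zone summarizes its children with an aggregation function, yielding a system-wide view built from local state.

```python
def aggregate(zone, agg_fn):
    """Recursively fold leaf attributes up the zone hierarchy."""
    if "hosts" in zone:                       # leaf zone: raw per-host attributes
        return agg_fn(zone["hosts"].values())
    return agg_fn(aggregate(child, agg_fn) for child in zone["children"])

# Hypothetical hierarchy: datacenters -> racks -> hosts reporting free memory (MB).
tree = {
    "children": [
        {"hosts": {"h1": 512, "h2": 2048}},
        {"hosts": {"h3": 1024, "h4": 768}},
    ]
}

print(aggregate(tree, min))   # least free memory anywhere -> 512
print(aggregate(tree, sum))   # total free memory          -> 4352
```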
    Although the use of groups and group communication is becoming more and more accepted as an important tool for implementing distributed applications and algorithms, there is still much controversy about the way the group paradigm should actually be implemented. In this paper the authors identify some of the areas in which these controversies still exist and express their views on ...
    Distributed objects and applications have been an important element of research and industrial computing for over 20 years. Early research on RPC systems, asynchronous messaging, specialized distributed programming languages, and component architectures led to the industrial-strength distributed objects platforms such as CORBA, DCOM, and J2EE that became commonplace over the past decade. Continued research and evolution in these areas, along with the explosive growth of the Internet and World Wide Web, have now carried us into areas such as peer-to-peer computing, mobile applications, model-driven architecture, distributed real-time and embedded systems, grid computing, and web services. Distributed objects are not only today’s workhorse for mission-critical high-performance enterprise computing systems, but they also continue to serve as a research springboard into new areas of innovation.
    The dramatic growth of computer networks creates both an opportunity and a daunting distributed computing problem for users seeking to build applications that can configure themselves and adapt as disruptions occur. The problem is that data often resides on large numbers of devices and evolves rapidly. Systems that collect data at a single location scale poorly and suffer from single-point failures ...
    Virtualization technology was developed in the late 1960s to make more efficient use of hardware. Hardware was expensive, and there was not that much available. Processing was largely outsourced to the few places that did have computers. On a single IBM System/360, one could run in parallel several environments that maintained full isolation and gave each of its customers the illusion of owning the hardware. Virtualization was time sharing implemented at a coarse-grained level, and isolation was the key achievement of the technology. It also provided the ability to manage resources efficiently, as they would be assigned to virtual machines such that deadlines could be met and a certain quality of service could be achieved.
    At the foundation of Amazon’s cloud computing are infrastructure services such as Amazon’s S3 (Simple Storage Service), SimpleDB, and EC2 (Elastic Compute Cloud) that provide the resources for constructing Internet-scale computing platforms and a great variety of applications. The requirements placed on these infrastructure services are very strict; they need to score high marks in the areas of security, scalability, availability, performance, and cost effectiveness, and they need to meet these requirements while serving millions of customers around the globe, continuously.
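    As a small, hypothetical example of the kind of interface these services expose (the bucket and key names below are invented and not from the text), storing and retrieving an object in S3 with the boto3 SDK looks like this:

```python
import boto3

s3 = boto3.client("s3")

# Durably store an object; S3 handles replication, availability, and scaling
# behind this simple put/get interface.
s3.put_object(Bucket="example-app-data", Key="orders/2024/order-123.json",
              Body=b'{"item": "book", "qty": 1}')

obj = s3.get_object(Bucket="example-app-data", Key="orders/2024/order-123.json")
print(obj["Body"].read())
```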
    Building reliable distributed systems at a worldwide scale demands trade-offs between consistency and availability.
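    One conventional way this trade-off becomes concrete in replicated storage (a sketch using the standard N/R/W quorum parameters, not something stated in this sentence): choosing read and write quorums that overlap guarantees up-to-date reads, while smaller quorums stay available under more failures at the cost of possibly stale data.

```python
def read_your_writes(n, r, w):
    """True if every read quorum must overlap every write quorum."""
    return r + w > n

def max_failures_tolerated(n, r, w):
    """Replicas that can be down while both quorums can still be formed."""
    return n - max(r, w)

for n, r, w in [(3, 2, 2), (3, 1, 1), (3, 1, 3)]:
    print(f"N={n} R={r} W={w}: consistent reads={read_your_writes(n, r, w)}, "
          f"tolerates {max_failures_tolerated(n, r, w)} failed replica(s)")
```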
    ... and methods are needed for query languages, basic operations, and query processing strategies. ... These bottom-up and indexing methods were developed for finding simple association rules. ... In the same way business applications are currently supported using SQL-based API ...
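    A simple association rule can be illustrated in a few lines (this sketch only shows what support and confidence mean; it is not the bottom-up or indexing methods the excerpt refers to):

```python
# Market-basket style data: each transaction is a set of items.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the transactions containing the antecedent, the share that also contain the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))        # 0.5
print(confidence({"bread"}, {"milk"}))   # 2/3, i.e. ~0.67
```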
