
    Carey Williamson

    This paper presents a simulation study of different token generation policies for a leaky bucket traffic shaper. In particular, the paper addresses the following question: Can the performance of a leaky bucket traffic shaper be improved by matching the token generation policy to the characteristics of the input traffic? Three different token generation policies are considered: Deterministic, Bernoulli, and a Markov Modulated Bernoulli Process (MMBP). Each policy is simulated on three different input traffic models (Deterministic, Bernoulli, and MMBP), using a full factorial experimental design. The simulation results show that, overall, the deterministic token generation policy provides the most consistent performance (lower cell loss and lower cell delay) across a wide range of input traffic types. Therefore, the deterministic policy is the best token generation policy to use, even if the characteristics of the input traffic are known to be other than deterministic.
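    As a rough illustration of the setup this abstract describes, here is a minimal slot-based leaky bucket sketch. The policy names match the abstract, but the parameter values, buffer handling, and code structure are my own assumptions, not the paper's simulator.

    ```python
    import random

    def deterministic(rate):
        """Deterministic policy: one token every 1/rate slots."""
        period = round(1 / rate)
        return lambda t: 1 if t % period == 0 else 0

    def bernoulli(rate):
        """Bernoulli policy: a token (or cell) appears each slot with probability rate."""
        return lambda t: 1 if random.random() < rate else 0

    def simulate(arrival, tokens_fn, pool_size=10, buffer_size=50, slots=100_000):
        """Slot-based leaky bucket: a cell departs only when a token is available."""
        tokens = queue = lost = total = 0
        for t in range(slots):
            tokens = min(pool_size, tokens + tokens_fn(t))  # token generation
            cells = arrival(t)                              # cells arriving this slot
            total += cells
            admitted = min(cells, buffer_size - queue)      # buffer overflow -> cell loss
            lost += cells - admitted
            queue += admitted
            served = min(queue, tokens)                     # shape: one token per cell
            queue -= served
            tokens -= served
        return lost / max(total, 1)

    # Example: Bernoulli cell arrivals (load 0.4) shaped by a deterministic
    # token policy (rate 0.5); an MMBP source would slot in the same way.
    print(f"cell loss ratio: {simulate(bernoulli(0.4), deterministic(0.5)):.4f}")
    ```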
    On behalf of the WWW2007 Program Committee, we are delighted to welcome you to WWW2007, the 16th conference in the World Wide Web conference series. The number of submissions to the refereed papers track, and their quality, have grown over the years. This year we had a record 755 submissions, of which we accepted 111 papers (a 14.7% acceptance rate). In response to the growing interest from the research community, we added a fourth day to the refereed papers track program, which allowed us to accept approximately 20% more papers than in previous years. Despite this increase, many good papers had to be turned away. The refereed papers track consists of thirteen research areas. This year, we introduced two new tracks -- Web Services and XML and Web Data -- formed by splitting the old "XML and Web Services" track in two. We also eliminated the "Alternate papers tracks" by folding those research areas into the regular refereed paper tracks. Thus, Tech...
    Network utility maximization (NUM) for Multipath TCP (MPTCP) is a challenging task, since there is no well-defined utility function for MPTCP [6]. In this paper, we identify the conditions under which we can use Kelly's NUM mechanism, and explicitly compute the equilibrium. We obtain this equilibrium by using Tullock's rent-seeking framework from game theory to define a utility function for MPTCP. This approach allows us to design MPTCP algorithms with common delay and/or loss constraints at the subflow level. Furthermore, this utility function has diagonal strict concavity, which guarantees a globally unique (normalized) equilibrium.
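    For intuition, the Tullock rent-seeking framework mentioned above gives each player a payoff proportional to its share of the total effort. A generic sketch of the contest payoff follows; the paper's actual MPTCP utility function and its delay/loss constraints may well differ in form.

    ```latex
    % Tullock contest payoff for player i with effort x_i (here, a subflow's rate),
    % prize V (the contested capacity), and per-unit cost c_i:
    u_i(x_i, x_{-i}) = \frac{x_i}{\sum_j x_j}\, V - c_i x_i
    ```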
    In wireless sensor networks, cluster-based data gathering has been pursued as a means to achieve network scalability as well as energy efficiency. By dividing a network into clusters, data aggregation and compression can be conveniently implemented in each cluster, resulting in a significant reduction in overall network energy consumption. Although many clustering algorithms have been proposed in the literature for minimizing energy consumption in sensor networks, a comprehensive and systematic analysis of optimal clustering subject to inherent network attributes such as data correlation, node density, and distance to the sink is still lacking. In particular, existing clustering schemes are designed to form uniform clusters in the network, where, on average, clusters have the same size. In this paper, we exploit the spatial data correlation present among sensor readings to form optimally sized clusters that minimize the total energy cost of the network. We develop a generalized multi-region ...
    This paper uses trace-driven simulation to study the traffic arrival process for Web workloads in a simple Web proxy caching hierarchy. Both empirical and synthetic Web proxy workloads are used in the study. The simulation results show that a Web cache reduces both the peak and the mean request arrival rate for Web traffic workloads, while the variance-to-mean ratio of the filtered traffic typically increases, depending on the input arrival process and the configuration of the cache. If the input traffic is self-similar, then the filtered request traffic remains self-similar, with the same Hurst parameter, though with reduced mean. Finally, we find that a Gamma distribution provides a flexible and robust means of modeling aggregate workloads in hierarchical Web caching architectures, for a broad range of workload characteristics and Web proxy cache sizes. To demonstrate the generality and effectiveness of the modeling approach, we present a detailed example of filter effects and traffic superposition in a two-level...
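    A minimal sketch of the Gamma modeling step the abstract mentions, assuming per-interval request counts from a trace; scipy's standard gamma.fit is one way to estimate the parameters. The synthetic Poisson counts below are only a placeholder for real trace data.

    ```python
    import numpy as np
    from scipy import stats

    # Stand-in for per-second request counts measured downstream of a proxy
    # cache (a real study would bin the filtered trace instead).
    counts = np.random.poisson(lam=20, size=3600)

    # Fit a Gamma distribution to the aggregate arrival counts, with the
    # location parameter pinned at zero.
    shape, loc, scale = stats.gamma.fit(counts, floc=0)
    mean, var = shape * scale, shape * scale**2
    print(f"shape={shape:.2f}, scale={scale:.2f}, variance-to-mean={var/mean:.2f}")
    ```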
    Most visual search interfaces rely on the established ranking paradigm geared towards satisfying text-based queries. Yet, we think that the ranking paradigm has constrained our thinking about what Web search should be. Instead of aiming for a subset of an information space, visual aggregation aims to provide an overview or summary thereof. One way to provide visual summaries is to visualize a collection's meaningful facets, such as time, location, and keywords for news-related blog posts. VisGets [2] (Fig. 2) provide a start in ...
    This paper presents empirical measurements of wireless media streaming traffic in an IEEE 802.11b wireless ad hoc network. The results show that the IEEE 802.11b WLAN can support up to 8 clients with good media streaming quality, with each client receiving a separate 400 kbps video stream and 128 kbps audio stream. With 9 clients, the WLAN is overloaded, and performance degrades for all clients. Finally, we demonstrate a "bad apple" phenomenon in wireless ad hoc networks, wherein a single client with poor wireless connectivity disrupts the media streaming quality for all clients sharing the WLAN.
    This paper addresses the problem of multicast (i.e., point-to-multipoint) routing in ATM networks. In particular, formal experimental methods are used to evaluate the relative performance of three simple multicast routing algorithms on simple mesh-based networks, using call-level simulation.
    Compound TCP will play a central role in future home WiFi networks supporting Internet of Things (IoT) applications. Compound TCP was designed to be fair, but can manifest throughput unfairness in infrastructure-based IEEE 802.11 networks when devices at different locations experience different wireless channel quality. In this paper, we develop a comprehensive analytical model for compound TCP over WiFi. Our model captures the flow and congestion control dynamics of multiple competing long-lived compound TCP connections, as well as the medium access control layer dynamics (i.e., contention, collisions, and retransmissions) that arise from different signal-to-noise ratios (SNRs) perceived by the devices. Our model provides accurate estimates of TCP packet loss probabilities and steady-state throughputs for IoT devices with different SNRs. More importantly, we propose a simple adaptive control algorithm to achieve better fairness without compromising the aggregate throughput of the system. The proposed real-time algorithm monitors the access point queue, drives the system dynamics to the desired operating point, which mitigates the adverse impacts of SNR differences, and accommodates the sporadically transmitting IoT sensors in the system.
    Index Terms: Internet of Things, compound TCP, fixed-point analysis, WiFi, throughput unfairness, adaptive control.
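    To make the queue-monitoring idea concrete, here is a proportional control sketch. The class name, interface, gains, and units are hypothetical illustrations; the paper's actual algorithm and operating point are not specified here.

    ```python
    class QueueController:
        """Proportional controller sketch: monitor the access point queue and
        adjust a rate limit toward a target occupancy. The interface and gains
        are hypothetical, not the paper's actual algorithm."""

        def __init__(self, target_qlen=50, gain=0.1, rate_limit=10.0):
            self.target = target_qlen      # desired AP queue occupancy (packets)
            self.gain = gain               # proportional gain
            self.rate = rate_limit         # current rate limit (Mbps, illustrative)

        def step(self, queue_len):
            error = self.target - queue_len            # > 0: queue below target
            self.rate = max(1.0, self.rate + self.gain * error)
            return self.rate                           # apply as the new rate limit

    ctrl = QueueController()
    for qlen in [80, 70, 60, 55, 50]:                  # sampled AP queue lengths
        print(f"queue={qlen:3d} -> rate limit {ctrl.step(qlen):.1f}")
    ```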
    Given the continued growth of the World-Wide Web, the performance of Web servers is becoming increasingly important. File caching can be used to reduce the time that it takes a Web server to respond to client requests, by storing the most popular files in the main memory of the Web server, and by reducing the volume of data that must be transferred between secondary storage and the Web server. In this paper, we use trace-driven simulation to evaluate the effects of various replacement, threshold, and partitioning policies on the performance of a Web server. The workload traces for the simulations come from Web server access logs, from six different Internet Web servers. The traces represent three different orders of magnitude in server activity and two different orders of magnitude in time duration. The results from our simulation study show that frequency-based caching strategies, using a variation of the Least Frequently Used (LFU) replacement policy, perform the best for the Web server wor...
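    A minimal sketch of the policy family studied: an LFU replacement cache with a file-size threshold. The class name, parameter values, and tie-breaking are illustrative assumptions, not the paper's specific variation.

    ```python
    class LFUFileCache:
        """Frequency-based Web server file cache with a size threshold: files
        larger than `threshold` bypass the cache, and the least frequently used
        file is evicted on overflow (a sketch, not the paper's exact variant)."""

        def __init__(self, capacity, threshold):
            self.capacity = capacity       # total cache size in bytes
            self.threshold = threshold     # max cacheable file size in bytes
            self.used = 0
            self.files = {}                # name -> [size, access frequency]

        def access(self, name, size):
            if name in self.files:
                self.files[name][1] += 1
                return True                # cache hit
            if size > min(self.threshold, self.capacity):
                return False               # too large to cache
            while self.used + size > self.capacity:
                victim = min(self.files, key=lambda f: self.files[f][1])
                self.used -= self.files.pop(victim)[0]
            self.files[name] = [size, 1]
            self.used += size
            return False                   # cache miss (file now cached)

    cache = LFUFileCache(capacity=1_000_000, threshold=100_000)
    print(cache.access("/index.html", 10_000))   # False: first access misses
    print(cache.access("/index.html", 10_000))   # True: second access hits
    ```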
    This paper evaluates a hybrid congestion control strategy called the Virtual Loss-Load model. The approach combines the leaky bucket traffic shaper (a preventive congestion control mechanism) with the loss-load model (a reactive congestion control mechanism). Simulation is used to evaluate the virtual loss-load model, and to compare its performance to that of other reactive congestion control strategies from the literature. The evaluation is done using a benchmark suite of network scenarios proposed by Kanakia, Keshav, and Mishra. The performance metrics used in the evaluation are file transfer time and packet loss probability. The simulation results show that the virtual loss-load model is an effective congestion control strategy, providing performance comparable to several other reactive congestion control strategies. While transient effects due to dynamic changes in cross traffic result in significantly higher packet loss than expected in the virtual loss-load model, the parameters of the virtual loss-load model make it directly tunable to trade off packet loss and file transfer time.
    @INPROCEEDINGS{Melle96networktraffic, author = {Reid Van Melle and Carey Williamson and Tim Harrison}, title = {Network Traffic ...}}
    Researchers have long realized that TCP performance degrades over wireless links. Many solutions, including link-layer error recovery, have been proposed to overcome this problem. This paper studies how these solutions perform in a wireless LAN network environment. The experiments are conducted using off-the-shelf products, in a controlled way. In particular, we focus on Linux 2.4 TCP performance using a USB-based Compaq 802.11b multiport wireless card, as a case study. A multi-layer view is adopted to analyze the complex interactions between layers of the network protocol stack. Our results show that the USB bus, the TCP implementation, and the wireless link-layer protocols can all affect the overall TCP performance. While the TCP and USB implementation problems can be corrected through bug-fixing and proper setting of the USB mode, the wireless-related problems, such as the data/ACK collision problem and the network thrashing problem, may require fundamental changes to link-layer protocols in wireless LANs.
    Web Server Workload Characterization: The Search for Invariants. Martin F. Arlitt and Carey L. Williamson, Department of Computer Science, University of Saskatchewan. The phenomenal growth in popularity of the World ...
    Network traffic locality is a special case of the locality phenomenon commonly seen in computer systems. The source and destination addresses seen in network packet traffic are observed to be non-uniformly distributed, both in time and space. Several recent ...
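    One simple way to quantify the non-uniform address distribution described above is to ask what share of packets go to the most popular destinations. The metric and example trace below are my own illustration, not necessarily the measures used in the paper.

    ```python
    from collections import Counter

    def locality_profile(dst_addresses, top_fraction=0.1):
        """Share of packets destined to the most popular fraction of addresses."""
        counts = Counter(dst_addresses)
        k = max(1, int(len(counts) * top_fraction))
        top = sum(c for _, c in counts.most_common(k))
        return top / len(dst_addresses)

    # Skewed example: one "hot" destination receives most of the packets.
    trace = ["10.0.0.1"] * 80 + [f"10.0.0.{i}" for i in range(2, 22)]
    print(f"{locality_profile(trace):.0%} of packets go to the top 10% of addresses")
    ```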
    Mosaic traffic (i.e., World-Wide Web) is the fastest-growing component of the aggregate packet and byte traffic on the NSFNET backbone. Modeling the workload characteristics of these Mosaic sessions is therefore deemed important in any simulation study of the Internet or future high-speed networks. The ATM-TN TeleSim project has designed and implemented a synthetic workload model for Internet Mosaic traffic, to be used as input to a parallel simulator for high-speed ATM networks. This paper...
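    For a flavor of what a synthetic Web workload generator involves, here is a toy session generator with heavy-tailed transfer sizes. The distributions and parameter values are illustrative assumptions only; the ATM-TN workload model is not specified here.

    ```python
    import random

    def mosaic_session(mean_docs=5.0, pareto_alpha=1.2, min_size=1_000):
        """Generate one synthetic Web (Mosaic) session: a random number of
        document requests with heavy-tailed (Pareto) transfer sizes in bytes."""
        n_docs = max(1, round(random.expovariate(1.0 / mean_docs)))
        return [int(min_size * random.paretovariate(pareto_alpha))
                for _ in range(n_docs)]

    print(mosaic_session())   # e.g. a list of per-document transfer sizes
    ```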
    This paper uses packet-level ns2 network simulations to study the impacts of TCP and network-level effects on user-perceived Web performance in Web proxy caching hierarchies. We model a simple two-level Web proxy caching hierarchy, and study the effects of link speed, propagation delay, request arrival rate, cache hit ratio, and cache management policy on the transfer latency for Web document downloads. The multi-factor experiments show that link capacity, round-trip time, and TCP behaviours all have a significant influence on user-level response time. The simulation results also highlight the relationships between cache hit ratio and user-perceived Web performance.

    And 191 more