
    Shivendra Panwar

    Due to the massive available spectrum in the millimeter wave (mmWave) bands, cellular systems in these frequencies may provide orders of magnitude greater capacity than networks in conventional lower frequency bands. However, due to their high susceptibility to blocking, mmWave links can be extremely intermittent in quality. This combination of high peak throughput and intermittency poses significant challenges for end-to-end transport-layer mechanisms such as TCP. This paper studies the particularly challenging problem of bufferbloat. Specifically, with current buffering and congestion control mechanisms, high-throughput, highly variable links can lead to excessive buffering and hence long latency. In this paper, we capture the performance trends obtained with two potential solutions that have been proposed in the literature: active queue management (AQM) and dynamic receive window. We show that, over mmWave links, AQM mitigates the latency but cannot deliver high throughput, mainly because current congestion control was not designed to cope with high data rates subject to sudden changes. Conversely, the dynamic receive window approach is more responsive and therefore supports higher channel utilization while mitigating the delay, thus representing a viable solution.
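    As a rough illustration of the dynamic receive window idea discussed above (a sketch in the same spirit, not the paper's exact algorithm), the snippet below clamps the advertised window toward an estimated bandwidth-delay product; the function name, constants, and rate samples are hypothetical.

```python
# Hypothetical sketch of a dynamic receive window: the receiver advertises a
# window close to the estimated bandwidth-delay product (BDP) instead of a
# large static buffer, which bounds the standing queue over a variable link.

def dynamic_receive_window(delivered_bytes, interval_s, min_rtt_s,
                           mss=1460, min_wnd_pkts=4, max_wnd_bytes=8 * 2**20):
    """Return an advertised window (bytes) from recent delivery-rate samples."""
    rate_bps = delivered_bytes / interval_s          # estimated delivery rate (bytes/s)
    bdp = rate_bps * min_rtt_s                       # bandwidth-delay product estimate
    window = bdp * 1.25                              # small headroom for rate growth
    return int(min(max(window, min_wnd_pkts * mss), max_wnd_bytes))

if __name__ == "__main__":
    # Example: roughly a 1 Gb/s mmWave link with a 20 ms minimum RTT.
    print(dynamic_receive_window(delivered_bytes=2.5e6, interval_s=0.02,
                                 min_rtt_s=0.02))
```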
    5G is expected to expand the set of interactive applications enabled by the mobile network. Low end-to-end packet latency is a fundamental requirement for proper operation of those applications, but may be compromised in packet buffers shared with elastic applications that use greedy TCP sources for end-to-end transport. The accumulation of queuing delay by the elastic application degrades the latency for the interactive application, however light the throughput of the latter may be (as is the case with online gaming and over-the-top voice). Buffer sharing is unavoidable in the RLC layer of the 3GPP RAN stack. To minimize its negative effect on interactive applications, we split the buffering between the RLC and PDCP layers. Then we equip the PDCP buffer with per-flow queues, and apply to the RLC buffer a new dynamic sizing mechanism that enforces the lowest queuing delay that is compatible with the existing configuration of the RLC connection. On a cellular link shared with a greedy TCP flow, our dynamic sizing solution can reduce the queuing delay of PING packets by up to two orders of magnitude compared to the default configuration with an RLC-only buffer of fixed size.
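    The spirit of the dynamic RLC buffer sizing described above can be approximated by capping the buffer at the current link rate multiplied by a target queuing delay; the formula and parameter values below are an illustrative sketch, not the paper's exact mechanism.

```python
# Hypothetical sketch: size the RLC buffer so that, at the currently granted
# link rate, a full buffer drains within a target queuing delay.  Per-flow
# queuing and scheduling would live in the PDCP buffer above it.

def rlc_buffer_limit(link_rate_bps, target_delay_s=0.005, min_bytes=3000):
    """Bytes of RLC buffering that keep queuing delay <= target_delay_s."""
    limit = (link_rate_bps / 8.0) * target_delay_s   # rate (bytes/s) * delay budget
    return max(int(limit), min_bytes)                # never starve the MAC scheduler

if __name__ == "__main__":
    for rate in (10e6, 100e6, 1e9):                  # 10 Mb/s, 100 Mb/s, 1 Gb/s
        print(f"{rate/1e6:7.0f} Mb/s -> {rlc_buffer_limit(rate):>9d} bytes")
```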
    Opportunistic communications through wireless ad-hoc mesh networks have been thoroughly studied in the context of military infrastructureless deployments, sensor networks and even human-centered pervasive networking. However, due to the lack of a model that accurately computes the probability distribution of the delay, we usually content ourselves with mean values. Such an approach can limit both the ability to predict the system's behavior and the ways to affect it. In this paper, we present an analytical framework that allows us to estimate the probability distribution of the delay as a function of the field size, the number of participating users and the movement model. In addition, the short computational time, as compared to simulations, allows us to analyze systems that would otherwise be infeasible due to their size. The derived delay probability distribution can help us decide whether opportunistic networking can be practically used in, e.g., dense vehicular environments or highly volatile mesh networks, or even help predict the success of a marketing campaign. We validate the analytical results against a simulation of the presented model. Furthermore, we created a second, highly detailed and realistic simulation to verify the validity of the observed trends in near-real-life situations.
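    A common way to sanity-check delay distributions of this kind (not the specific framework of the paper) is a Monte Carlo model with exponential pairwise inter-contact times, a standard approximation for mobility on a bounded field; the contact rate and population size below are illustrative assumptions.

```python
import random

# Illustrative Monte Carlo sketch: epidemic forwarding with exponential
# pairwise inter-contact times (rate beta).  The delivery delay to a tagged
# destination is recorded over many runs to estimate its distribution.

def delivery_delay(n_nodes, beta):
    infected, t = 1, 0.0                       # the source already carries the message
    while True:
        susceptible = n_nodes - infected
        # Time to the next useful contact: the minimum of infected*susceptible
        # independent exponential clocks, which is itself exponential.
        t += random.expovariate(beta * infected * susceptible)
        # The newly infected node is the destination with probability 1/susceptible.
        if random.random() < 1.0 / susceptible:
            return t
        infected += 1

def empirical_cdf(samples, x):
    return sum(s <= x for s in samples) / len(samples)

if __name__ == "__main__":
    random.seed(1)
    samples = [delivery_delay(n_nodes=50, beta=0.01) for _ in range(5000)]
    for q in (1, 5, 10, 20):
        print(f"P(delay <= {q:2d}) ~ {empirical_cdf(samples, q):.3f}")
```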
    The Available Bit Rate (ABR) service class has been defined by the ATM Forum. Closed-loop rate-based congestion control has been adopted by the ATM Forum as the standard approach for supporting ABR service in ATM networks. This paper examines congestion control mechanisms for ABR service and presents fundamental performance results for rate-based traffic management schemes under varying available bandwidth conditions. We show the frequency range where a rate-based congestion control scheme can operate effectively. Our results contribute to the fundamental understanding of closed-loop traffic management mechanisms for ABR service and provide guidelines for the future development of effective congestion control algorithms.
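    For readers unfamiliar with the ATM Forum's rate-based framework, the sketch below shows the standard binary-feedback source behavior (additive increase and multiplicative decrease of the allowed cell rate driven by resource-management cell feedback); the parameter values are illustrative, and this is not the specific scheme evaluated in the paper.

```python
# Illustrative sketch of ATM ABR binary-feedback source behavior: the allowed
# cell rate (ACR) is increased additively when backward RM cells report no
# congestion (CI=0) and decreased multiplicatively when they do (CI=1).

def update_acr(acr, ci, pcr, mcr, rif=1/64, rdf=1/16):
    """One ACR update per backward resource-management (RM) cell."""
    if ci:                                    # congestion indication set
        acr = acr - acr * rdf                 # multiplicative decrease
    else:
        acr = acr + rif * pcr                 # additive increase
    return min(max(acr, mcr), pcr)            # keep ACR within [MCR, PCR]

if __name__ == "__main__":
    acr, pcr, mcr = 10.0, 150.0, 1.0          # rates in Mb/s (illustrative)
    feedback = [0, 0, 0, 1, 0, 1, 1, 0]       # CI bits from successive RM cells
    for ci in feedback:
        acr = update_acr(acr, ci, pcr, mcr)
        print(f"CI={ci} -> ACR={acr:6.2f} Mb/s")
```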
    Millimeter-wave (mmWave) is a promising network access technology to enable the high data rates, high reliability and ultra-low latency required by connected vehicle services in future vehicular networks. However, mmWave links are prone to blockages due to high penetration loss, which can cause frequent service interruptions and degrade the system performance in terms of reliability and latency. In this study, we analyze the latency and reliability performance of mmWave communications between vehicles and roadside units (RSUs) in a highway scenario, where the line-of-sight (LOS) links can be blocked by vehicles. First, we establish a continuous-time Markov chain model of the blockage events. By using the steady-state solution of this model, we explicitly derive the blockage probability and average blockage duration, which can be used to characterize the reliability and latency performance. We validate the accuracy of the analytical model by comparing it with simulations using real-world traffic data and vehicle distributions. We demonstrate that reducing the duration of long-lasting blockage events is more challenging than reducing the frequency of blockages. We consider three approaches to control the blockage probability and distribution of blockage durations: (i) increasing RSU density, (ii) increasing RSU heights and (iii) managing the vehicular speed limits. We show that the first two approaches are effective in reducing the blockage probability, whereas the third approach can be used to eliminate long blockages and improve latency performance. We discuss the implications of our results in terms of the benefits and challenges associated with these approaches.
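    As a simplified worked example of the Markov-chain blockage analysis (the paper's model is more detailed), a two-state LOS/blocked continuous-time chain with blockage arrival rate lambda and clearing rate mu gives a steady-state blockage probability lambda/(lambda+mu) and a mean blockage duration 1/mu; the rates below are illustrative.

```python
# Simplified two-state CTMC for LOS/blocked dynamics of a vehicle-RSU link:
#   LOS --(lambda: blockage arrivals)--> BLOCKED --(mu: blockage clears)--> LOS
# The steady state yields the blockage probability and mean blockage duration.

def blockage_metrics(lam, mu):
    p_blocked = lam / (lam + mu)       # long-run fraction of time blocked
    mean_duration = 1.0 / mu           # mean sojourn in the blocked state (s)
    return p_blocked, mean_duration

if __name__ == "__main__":
    # Illustrative rates: a blockage every 5 s on average, clearing in 0.5 s.
    p, d = blockage_metrics(lam=0.2, mu=2.0)
    print(f"P(blocked) = {p:.3f}, mean blockage duration = {d:.2f} s")
```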
    FOR COMMUNICATIONS NETWORKS, bandwidth has long been king. With every generation of fiber optic, cellular, or Wi-Fi technology has come a jump in throughput that has enriched our online lives. Twenty years ago we were merely exchanging texts on our phones, but we now think nothing of streaming videos from YouTube and Netflix. No wonder, then, that video now consumes up to 60 percent of Internet bandwidth. If this trend continues, we might yet see full-motion holography delivered to our mobiles, a techie dream since Princess Leia's plea for help in Star Wars.

    Recently, though, high bandwidth has begun to share the spotlight with a different metric of merit: low latency. The amount of latency varies drastically depending on how far in a network a signal travels, how many routers it passes through, whether it uses a wired or wireless connection, and so on. The typical latency in a 4G network, for example, is 50 milliseconds. Reducing latency to 10 milliseconds, as 5G and Wi-Fi are currently doing, opens the door to a whole slew of applications that high bandwidth alone cannot. With virtual-reality headsets, for example, a delay of more than about 10 milliseconds in rendering and displaying images in response to head movement is very perceptible, and it leads to a disorienting experience that is for some akin to seasickness.

    Multiplayer games, autonomous vehicles, and factory robots also need extremely low latencies. Even as 5G and Wi-Fi make 10 milliseconds the new standard for latency, researchers, like my group at New York University's NYU Wireless research center, are already working hard on another order-of-magnitude reduction, to about 1 millisecond or less.

    Pushing latencies down to 1 millisecond will require reengineering every step of the communications process. In the past, engineers have ignored sources of minuscule delay because they were inconsequential to the overall latency. Now, researchers will have to develop new methods for encoding, transmitting, and routing data to shave off even the smallest sources of delay. And immutable laws of physics, specifically the speed of light, will dictate firm restrictions on what networks with 1-millisecond latencies will look like. There's no one-size-fits-all technique that will enable these extremely low-latency networks. Only by combining solutions to all these sources of latency will it be possible to build networks where time is never wasted.
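    To make the speed-of-light constraint mentioned above concrete, the short calculation below (an illustration, not taken from the article) shows how little distance a given round-trip budget allows once propagation alone is accounted for.

```python
# Back-of-the-envelope check on low-latency targets: propagation alone caps
# how far away a server can be, before any processing or queuing delay.

C_VACUUM = 299_792_458          # speed of light in vacuum, m/s
C_FIBER = C_VACUUM / 1.47       # roughly, for a typical silica-fiber refractive index

def max_one_way_distance_km(rtt_budget_s, medium_speed):
    """Farthest endpoint (km) reachable within an RTT budget, propagation only."""
    return medium_speed * (rtt_budget_s / 2) / 1000

if __name__ == "__main__":
    for rtt_ms in (1, 10, 50):
        d = max_one_way_distance_km(rtt_ms / 1000, C_FIBER)
        print(f"{rtt_ms:2d} ms RTT budget -> at most ~{d:,.0f} km away over fiber")
```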
    In this paper, we propose a general framework for combining deep neural networks (DNNs) with dynamic programming to solve combinatorial optimization problems. For problems that can be broken into smaller subproblems and solved by dynamic programming, we train a set of neural networks to replace value or policy functions at each decision step. Two variants of the neural network approximated dynamic programming (NDP) methods are proposed; in the value-based NDP method, the networks learn to estimate the value of each choice at the corresponding step, while in the policy-based NDP method the DNNs only estimate the best decision at each step. The training procedure of the NDP starts from the smallest problem size and a new DNN for the next size is trained to cooperate with previous DNNs. After all the DNNs are trained, the networks are fine-tuned together to further improve overall performance. We test NDP on the linear sum assignment problem, the traveling salesman problem and the tale...
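    The decision-by-decision decoding loop used in NDP-style methods can be sketched as below; here the trained per-step DNNs are replaced by a hypothetical stand-in scorer (negative immediate cost on a random assignment instance), so the code only illustrates the control flow, not the learned models or the paper's training procedure.

```python
import random

# Hypothetical sketch of a value/policy-based decoding loop: at each decision
# step, a per-step "network" scores the feasible choices and the best-scoring
# one is taken.  The stand-in scorer below is a placeholder for a trained DNN.

def stand_in_scorer(step, choice, cost_matrix):
    """Placeholder for the step-k network: score = negative immediate cost."""
    return -cost_matrix[step][choice]

def ndp_decode(cost_matrix, scorer):
    """Assign each row to a distinct column, one decision step at a time."""
    n = len(cost_matrix)
    free_cols, assignment, total = set(range(n)), [], 0.0
    for step in range(n):
        best = max(free_cols, key=lambda c: scorer(step, c, cost_matrix))
        free_cols.remove(best)
        assignment.append(best)
        total += cost_matrix[step][best]
    return assignment, total

if __name__ == "__main__":
    random.seed(0)
    costs = [[random.randint(1, 9) for _ in range(4)] for _ in range(4)]
    print(ndp_decode(costs, stand_in_scorer))
```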
    In this paper we describe a network modeling approach intended to assist in the performance management, design, and optimization of broadband traffic networks. Switch and source models, as well as routing optimization and decision support algorithms, have been integrated in a prototype software tool called DATANMOT (Data Network Modeling and Optimization Tool). The switch models developed are based on standard Frame Relay and ATM switch implementations. Specifically, an...
    In this paper, we consider a problem in networks with storage servers for providing multimedia service. The design involves assigning communication link capacity, sizing the multimedia servers, and distributing different types of content at each server, while guaranteeing an upper limit on the individual end‐to‐end blocking probability. We consider alternative methods for obtaining the end‐to‐end blocking probability with low computation time and present optimization procedures to obtain an optimal solution. Under a linear cost structure, our numerical investigations consider different scenarios that might be helpful in understanding how to distribute multimedia content for a cost‐optimized solution. © 2001 John Wiley & Sons, Inc.
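    One standard building block for end-to-end blocking computations of this kind (the paper compares alternative, more refined methods) is the Erlang B recursion per link combined with an independence approximation along the route; the traffic and capacity values below are illustrative.

```python
# Erlang B recursion for per-link blocking, plus the usual independence
# approximation for end-to-end blocking along a route.  Illustrative only.

def erlang_b(offered_erlangs, servers):
    """Blocking probability of an M/M/c/c link via the stable recursion."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_erlangs * b / (c + offered_erlangs * b)
    return b

def end_to_end_blocking(per_link):
    """Independence approximation: a call is blocked if blocked on any link."""
    passed = 1.0
    for b in per_link:
        passed *= (1.0 - b)
    return 1.0 - passed

if __name__ == "__main__":
    links = [erlang_b(40.0, 50), erlang_b(25.0, 30), erlang_b(10.0, 15)]
    print([round(b, 4) for b in links], round(end_to_end_blocking(links), 4))
```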
    We consider the problem of scheduling customers with deadlines in the G/M/c queue. We assume that customer deadlines are not known but that the scheduling policy has available to it partial information regarding stochastic relationships between the deadlines of eligible customers. We prove three main results: 1) in the case that deadlines are until the beginning of service, the non-preemptive, non-idling policy that stochastically minimizes the number of customers lost during an interval of time belongs to the class of non-idling stochastic earliest deadline (SED) policies; 2) in the case that deadlines are until the end of service, the optimum policy belongs to the class of SED policies; and 3) in the case of deadlines until the end of service, the optimum non-preemptive policy belongs to the class of non-preemptive, non-idling SED policies. The last result assumes that a customer in service that misses its deadline is always removed and thrown away. Here a policy belongs to the class of SED policies if it never schedules a customer whose deadline is known to be stochastically larger than that of some other customer in the queue. We describe several applications for which these classes contain exactly one policy. These include queues where i) deadlines are known exactly, ii) deadlines are characterized by distributions with increasing failure rate, iii) deadlines are characterized by distributions with decreasing failure rate, iv) customers fall into several classes, each with its own exponential deadline distribution, and v) certain combinations of the previous three applications. The optimal policies for the first four applications are the earliest deadline, first come first serve, last come first serve, and head of the line priority scheduling policies, respectively. The scheduling policy for the last application combines elements of head of the line priority, first come first serve, and last come first serve. The paper concludes with some generalizations to discrete time systems, finite buffers and vacation models.
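    In the special case where deadlines are known exactly (application i above), the SED class collapses to non-preemptive, non-idling earliest-deadline scheduling; the sketch below shows that selection rule with hypothetical customer records, for deadlines that run until the beginning of service.

```python
import heapq

# Non-preemptive, non-idling earliest-deadline selection, the policy the SED
# class reduces to when deadlines are known exactly (application i).
# Customer records (arrival order, absolute deadline) are hypothetical.

class EDFQueue:
    def __init__(self):
        self._heap = []                      # (deadline, sequence number) pairs
        self._seq = 0                        # tie-break by arrival order

    def arrive(self, deadline):
        heapq.heappush(self._heap, (deadline, self._seq))
        self._seq += 1

    def next_to_serve(self, now):
        """Pop the earliest-deadline customer; discard any already expired."""
        while self._heap:
            deadline, seq = heapq.heappop(self._heap)
            if deadline >= now:              # deadline is to the beginning of service
                return seq
        return None                          # queue empty: the server may idle

if __name__ == "__main__":
    q = EDFQueue()
    for d in (12.0, 7.5, 9.0, 30.0):
        q.arrive(d)
    print(q.next_to_serve(now=8.0))          # -> 2: the customer with deadline 9.0
```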
    A popular course in computer networks at NYU Tandon involves several hours each week of small-group in-person lab work. Following NYU’s move to online learning in response to COVID-19, we transitioned a class of 150 students from this in-person lab setting to an equivalent online experience using the GENI testbed, an open infrastructure for networking and distributed systems research and education. In this white paper, we describe the publicly available resources that enabled this transition, our experiences teaching a computer networking lab course on GENI, and outstanding challenges that remain for future semesters.
    The licensing model for millimeter wave bands has been the subject of considerable debate, with some industry players advocating for unlicensed use and others for traditional geographic area exclusive use licenses. Meanwhile, the massive bandwidth, highly directional antennas, high penetration loss and susceptibility to shadowing in these bands suggest certain advantages to spectrum and infrastructure sharing. However, even when sharing is technically beneficial (as recent research in this area suggests that it is), it may not be profitable. In this paper, both the technical and economic implications of resource sharing in millimeter wave networks are studied. Millimeter wave service is considered in the economic framework of a network good, where consumers' utility depends on the size of the network, and the strategic decisions of consumers and service providers are connected to detailed network simulations. The results suggest that "open" deployments of neutral small...
    A Guide to the Wireless Engineering Body of Knowledge, 2009 Edition, by G. Giannattasio, J. Erfanian, K. D. Wong, P. Wills, H. Nguyen, T. Croda, K. Rauscher, X. Fernando, and N. Pavlidou. Copyright © 2009 The Institute of Electrical and Electronics Engineers, Inc.
    In this paper random access algorithms for packet broadcast channels are considered. It is shown that in channels with a central repeater, such as a satellite channel, more information is available to resolve conflicts than is utilized in Capetanakis-type tree algorithms. Based on this extra information, a new class of collision resolution algorithms is presented. Only preliminary results on the maximum achievable throughput are given, and it is shown that the throughput can approach 0.673.
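    For context on the Capetanakis-type baseline mentioned above (not the new algorithm class of the paper), here is a minimal simulation of binary-tree collision resolution with random splitting; the collision size and number of trials are illustrative.

```python
import random

# Minimal sketch of Capetanakis-type binary tree collision resolution:
# colliding users split at random into two subgroups, and each subgroup is
# resolved recursively.  Illustrative baseline only; the paper's algorithms
# exploit additional information available from a central repeater.

def resolve(users, rng):
    """Slots in the collision resolution interval for this set of users."""
    if len(users) <= 1:
        return 1                              # idle or successful slot
    left = [u for u in users if rng.random() < 0.5]
    right = [u for u in users if u not in left]
    # One slot for the collision itself, then resolve the two subgroups in order.
    return 1 + resolve(left, rng) + resolve(right, rng)

if __name__ == "__main__":
    rng = random.Random(7)
    n = 5                                     # users that collided in one slot
    trials = [resolve(list(range(n)), rng) for _ in range(2000)]
    print(f"mean slots to resolve {n} users: {sum(trials)/len(trials):.2f}")
```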

    And 236 more