Papers by Dmitri I Arkhipov
Roadside units (RSUs) are public and personal wireless access points that can provide communications with infrastructure in ad hoc vehicular networks. We present CLOCS (Counting and Localization using Online Compressive Sensing), a novel system to retrieve both the number and locations of RSUs through wardriving. CLOCS employs online compressive sensing (CS), where received signal strength (RSS) values are recorded at runtime, and the number and location of RSUs are recovered immediately based on limited RSS readings. CLOCS also uses fine retrieval based on an expectation maximization method along the driving route. Extensive simulation results and experiments in a real testbed deployed on the campus of the University of California, Irvine confirm that CLOCS can successfully reduce the number of measurements for RSU recovery, while maintaining satisfactory counting and localization accuracy. In addition, data dissemination, time cost, and effects of different mobile scenarios using CLOCS are analyzed, and the impact of CLOCS on network connectivity is studied using Microsoft VanLan traces.
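To make the compressive-sensing step concrete, the sketch below recovers a sparse vector of RSU grid-cell indicators from a small number of simulated RSS readings. The grid, the simple distance-based path-loss dictionary, and the LASSO solver are assumptions chosen for illustration only; they are not the CLOCS measurement model or implementation.

```python
# Minimal sketch: RSS readings along a drive are modeled as a linear projection
# of a sparse vector whose non-zero entries mark candidate RSU grid cells.
# Path-loss model, grid size and solver are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_cells = 200          # candidate RSU grid cells along the route
n_readings = 40        # limited number of RSS readings (m << n)
true_rsus = rng.choice(n_cells, size=3, replace=False)

# Hypothetical path-loss dictionary: reading i "sees" cell j with a gain
# that decays with distance between the measurement point and the cell centre.
read_pos = rng.uniform(0, n_cells, n_readings)
cell_pos = np.arange(n_cells)
A = 1.0 / (1.0 + np.abs(read_pos[:, None] - cell_pos[None, :]))

x_true = np.zeros(n_cells)
x_true[true_rsus] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n_readings)   # noisy RSS vector

# Sparse recovery: the support of x_hat estimates both the count and the
# locations of the RSUs from far fewer readings than grid cells.
lasso = Lasso(alpha=1e-3, positive=True, fit_intercept=False, max_iter=10000)
x_hat = lasso.fit(A, y).coef_
recovered = np.flatnonzero(x_hat > 0.1)
print("true RSU cells:     ", sorted(true_rsus))
print("recovered RSU cells:", recovered.tolist())
```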
The importance of understanding freight demand is increasing with the appreciation of freight transportation's multi-faceted impact on our economy. There is a significant need to access a wide array of data sources for freight modeling and analysis. However, current data sources are not always easily accessible or obtainable, even with the availability of the Internet. Reasons include differing user interfaces, unavailability of data type definitions, data format incompatibility, and the inability to conveniently assess the scope of the data. The repository developed in this study, the California Freight Data Repository (Cal-FRED), is a user-centered online tool designed from a systems perspective with several objectives. First, it provides convenient access, a standardized interface, and a centralized location for obtaining freight data. Data dictionaries and lookup tables are provided for each data source to allow users to understand the scope of the data source as well as a clear definition of terms found in the data. A quality assessment summary is also provided to inform users of the strengths and potential limitations associated with each data source. Second, the repository is equipped with several Geographical Information Systems-based (GIS-based) visualization tools intended to provide users with the ability to perform a preliminary evaluation of desired data to determine its suitability for specific modeling or analysis needs. Third, the repository is designed with a customized search engine to retrieve web resources specifically associated with freight modeling and analysis. This paper presents the metadata architecture used for identifying data sources, the assessment framework used to evaluate selected data sources, and the system and interface design of Cal-FRED. Several use cases of the data repository are presented to demonstrate the applicability of this resource.
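As a rough illustration of the kind of metadata a repository like Cal-FRED catalogues for each source, the sketch below defines a hypothetical record that carries a data dictionary and quality notes. The field names and the example entry are assumptions made for illustration and are not Cal-FRED's actual schema.

```python
# Hypothetical metadata record for one catalogued freight data source.
# Field names are illustrative assumptions, not the repository's real schema.
from dataclasses import dataclass, field

@dataclass
class FreightDataSource:
    name: str
    agency: str
    data_format: str              # e.g. "CSV", "shapefile", "API"
    spatial_scope: str            # e.g. "statewide", "county", "corridor"
    temporal_scope: str
    data_dictionary: dict = field(default_factory=dict)   # term -> definition
    quality_notes: list = field(default_factory=list)     # strengths / limitations

faf = FreightDataSource(
    name="Freight Analysis Framework (FAF)",
    agency="FHWA / BTS",
    data_format="CSV",
    spatial_scope="FAF zones (statewide aggregates for California)",
    temporal_scope="base year plus forecast years",
    data_dictionary={"tons": "annual shipment weight in thousands of tons"},
    quality_notes=["coarse spatial resolution", "consistent national coverage"],
)
print(faf.name, "-", faf.spatial_scope)
```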
In light of the demand for more complex network models and general solution methods, this research introduces a radial basis function-based method as a faster alternative global heuristic to a genetic algorithm method for the continuous network design problem. Two versions of the algorithm are tested against the genetic algorithm in three experiments: the Sioux Falls, South Dakota, network with standard origin-destination flows; the same network with double the flows to test performance under a more congested scenario; and an illustrative experiment with the Anaheim, California, network to compare the scalability of performance. To perform the experiments, parameters for the network design problem were developed for the Anaheim network. The Anaheim test would be the first instance of testing the radial basis function methods on a 31-dimensional network design problem. Results indicate that the multistart local radial basis function method performs notably better than the genetic algorithm in all three experiments and would therefore be an attractive method to apply to more complicated network design models involving larger networks and more complex constraints, objectives, and representations of the time dimension.
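The sketch below illustrates the general shape of a surrogate-assisted search from the radial basis function family: fit an RBF surrogate to a handful of expensive objective evaluations and use it to pick the next design vector to evaluate. The toy objective, the candidate-sampling rule, and the parameter values are illustrative assumptions, not the specific multistart local method tested in the paper.

```python
# Minimal RBF-surrogate search sketch for an expensive design objective
# (stand-in for a bi-level network design evaluation). Illustrative only.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
dim, lo, hi = 5, 0.0, 10.0            # 5 candidate links, expansion in [0, 10]

def expensive_objective(x):
    # Stand-in for running a traffic assignment and summing travel time + cost.
    return np.sum((x - 3.0) ** 2) + 2.0 * np.sin(x).sum()

# Initial space-filling sample of expansion vectors.
X = rng.uniform(lo, hi, size=(8, dim))
y = np.array([expensive_objective(x) for x in X])

for _ in range(30):                    # limited budget of expensive evaluations
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-8)
    best = X[np.argmin(y)]
    # Candidates: perturbations around the incumbent plus a few global samples.
    cands = np.vstack([
        np.clip(best + rng.normal(0.0, 1.0, size=(50, dim)), lo, hi),
        rng.uniform(lo, hi, size=(20, dim)),
    ])
    x_next = cands[np.argmin(surrogate(cands))]   # cheap surrogate screening
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next))

print("best expansion vector:", np.round(X[np.argmin(y)], 2), "obj:", round(y.min(), 3))
```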
A real option portfolio management framework is proposed to make use of an adaptive network design problem developed using stochastic dynamic programming methodologies. The framework extends Smit and Trigeorgis's option portfolio framework to incorporate network synergies. The adaptive planning framework is defined and tested on a case study with time series origin-destination (OD) demand data. Historically, OD time series data has been costly to obtain, and there has not been much need for it because most transportation models use a single time-invariant estimate based on deterministic forecasting of demand. Despite the high cost and institutional barriers of obtaining abundant OD time series data, we illustrate how having higher-fidelity data along with an adaptive planning framework can result in a number of improved management strategies. An insertion heuristic is adopted to run the lower-bound adaptive network design problem for a coarse Iran network with 834 nodes, 1,121 links, and 10 years of time series data for 71,795 OD pairs.
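For intuition about what an insertion heuristic of this kind does, the sketch below greedily inserts the candidate project with the best marginal benefit per unit cost until a budget is exhausted. The benefit function, costs, and budget are hypothetical placeholders; the paper's heuristic operates on the full adaptive network design formulation rather than this toy.

```python
# Illustrative greedy insertion heuristic: add projects one at a time in order
# of marginal benefit per unit cost, subject to a budget. Placeholder data.
def insertion_heuristic(projects, benefit, budget):
    """projects: dict name -> cost; benefit(selected_set) -> network benefit."""
    selected, spent = set(), 0.0
    improved = True
    while improved:
        improved = False
        base = benefit(selected)
        best, best_ratio = None, 0.0
        for name, cost in projects.items():
            if name in selected or spent + cost > budget:
                continue
            ratio = (benefit(selected | {name}) - base) / cost
            if ratio > best_ratio:
                best, best_ratio = name, ratio
        if best is not None:
            selected.add(best)
            spent += projects[best]
            improved = True
    return selected, spent

# Toy usage with diminishing returns between two parallel corridor upgrades.
projects = {"link_A": 4.0, "link_B": 3.0, "link_C": 5.0}
def benefit(s):
    b = 0.0
    if "link_A" in s: b += 10.0
    if "link_B" in s: b += 6.0 if "link_A" not in s else 3.0   # substitutes
    if "link_C" in s: b += 7.0
    return b

print(insertion_heuristic(projects, benefit, budget=9.0))
```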
Conference Publications by Dmitri I Arkhipov
Accessing online social media content on underground metro systems is a challenge because passengers often lose connectivity for large parts of their commute. As the oldest metro system in the world, the London Underground represents a typical transportation network with intermittent Internet connectivity. To deal with disruption in connectivity along the sub-surface and deep-level lines of the London Underground, we have designed a context-aware mobile system called DeepOpp. DeepOpp enables efficient offline access to online social media by prefetching and caching content opportunistically when signal availability (e.g. urban 3G and station WiFi) is detected. DeepOpp can measure, crowdsource and predict signal characteristics such as strength, bandwidth and latency; it can use these predictions of mobile network signal to activate prefetching, and then employ an optimization routine to determine which social content should be cached in the system given real-time network conditions and device capacities. DeepOpp has been implemented as an Android application and tested on the London Underground; it shows significant improvement over existing approaches, e.g. reducing the amount of power needed to prefetch social media items by 2.5 times. While we use the London Underground to test our system, it is equally applicable in New York, Paris, Madrid, Shanghai, or any other urban underground metro system, or indeed in any situation in which users experience long breaks in connectivity.
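One plausible way to read DeepOpp's caching decision is as a budgeted selection problem: choose which social-media items to prefetch during a predicted connectivity window so that total utility is maximized without exceeding what can be downloaded. The 0/1 knapsack framing and the item values below are assumptions made for illustration; the paper's actual optimization routine may differ.

```python
# Illustrative prefetch selection as an exact 0/1 knapsack: maximize utility of
# cached items without exceeding the downloadable budget. Item data is invented.
def select_items(items, budget_kb):
    """items: list of (name, size_kb, utility). Returns (best utility, names)."""
    best = {0: (0.0, [])}                       # spent budget -> (utility, names)
    for name, size, utility in items:
        for spent, (val, chosen) in sorted(best.items(), reverse=True):
            new_spent = spent + size
            if new_spent > budget_kb:
                continue
            cand = (val + utility, chosen + [name])
            if cand[0] > best.get(new_spent, (-1.0, []))[0]:
                best[new_spent] = cand
    return max(best.values())                   # highest-utility selection

feed = [("video_clip", 900, 8.0), ("photo_album", 300, 5.0),
        ("text_posts", 50, 4.0), ("podcast", 1200, 6.0)]
print(select_items(feed, budget_kb=1000))
```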
2015 IEEE Conference on Computer Communications (INFOCOM), 2015
Proceedings of the 94th Annual Meeting of the Transportation Research Board, Jan 15, 2015
In this paper we extend the standard meta-description for genetic algorithms with a simple, non-trivial parallel implementation. Our work is chiefly concerned with the development of a straightforward way for engineers to modify existing genetic algorithm implementations for real industrial or scientific problems to make use of commonly available hardware resources without completely reworking complex, useful and usable codes. We present our framework and computational results comparing small-scale parallelization with a standard sequential genetic algorithm implementation on a classical transportation-related combinatorial optimization problem, the traveling salesman problem. Our empirical analysis shows that this simple extension can lead to considerable solution improvements. Next, we tested our assumptions that the results are typical and that the method is easily implemented by an engineer not initially familiar with genetic algorithms by implementing the toolkit for another ...
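The sketch below illustrates one very simple form of the kind of drop-in parallelism described here: run several independent copies of an existing sequential GA for the traveling salesman problem in worker processes and keep the best tour found. The GA operators and parameter values are deliberately basic stand-ins, not the paper's toolkit.

```python
# Illustrative sketch: wrap an unchanged sequential GA in process-level
# parallelism and take the best result across workers.
import random
from concurrent.futures import ProcessPoolExecutor

random.seed(7)                                   # same instance in every worker
CITIES = [(random.random(), random.random()) for _ in range(25)]

def tour_length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def order_crossover(rng, p1, p2):
    i, j = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]                         # copy a segment from parent 1
    fill = iter(c for c in p2 if c not in child) # remaining cities in p2 order
    return [c if c is not None else next(fill) for c in child]

def sequential_ga(seed, pop_size=60, generations=250):
    rng = random.Random(seed)
    pop = [rng.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        survivors = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = order_crossover(rng, a, b)
            if rng.random() < 0.3:               # mutation: reverse a segment
                i, j = sorted(rng.sample(range(len(child)), 2))
                child[i:j] = reversed(child[i:j])
            children.append(child)
        pop = survivors + children
    best = min(pop, key=tour_length)
    return tour_length(best), best

if __name__ == "__main__":
    # The "parallel extension": several GA instances, one per worker process.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(sequential_ga, range(4)))
    print("best tour length over 4 parallel runs:", round(min(results)[0], 4))
```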
Proceedings of the 3rd International Conference on Soft Computing and Machine Intelligence (ISCMI), 2016
Programmatic advertising is an actively developing industry and research area. Some of the research in this area concerns the development of optimal or approximately optimal contracts and policies between publishers, advertisers and intermediaries such as ad networks and ad exchanges. Both the development of contracts and the construction of policies governing their implementation are difficult challenges, and different models take different features of the problem into account. In programmatic advertising, decisions are made in real time, and time is a scarce resource, particularly for publishers who are concerned with content load times. Policies for advertisement placement must execute very quickly once content is requested; this requires policies either to be pre-computed and accessed as needed, or to execute very efficiently. In this paper we formulate a stochastic optimization problem for per-publisher ad sequencing with binding latency constraints. We adapt a well-known heuristic optimization technique to this problem and evaluate its performance on real data instances. Our experimental results indicate that our heuristic algorithm is near-optimal for instances where an optimality calculation is feasible, and superior to other reasonable approaches for instances where the calculation is not feasible.
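To give a feel for the shape of such a problem, the sketch below orders a set of candidate ad calls so that expected revenue from calls completing within a page-load latency budget is maximized, and searches over orderings with simulated annealing. Simulated annealing is used here only as a familiar example of a heuristic search; we are not asserting it is the technique adapted in the paper, and the latency and value numbers are invented.

```python
# Illustrative ad-sequencing sketch: noisy Monte Carlo objective under a latency
# budget, searched with simulated annealing over orderings. Data is invented.
import math, random

random.seed(0)
ADS = [  # (name, expected value if served, mean response latency in ms)
    ("ad_a", 1.2, 60), ("ad_b", 0.8, 25), ("ad_c", 2.0, 120),
    ("ad_d", 0.5, 15), ("ad_e", 1.5, 80),
]
BUDGET_MS = 150

def expected_revenue(order, samples=200):
    total = 0.0
    for _ in range(samples):                      # Monte Carlo over latencies
        elapsed, revenue = 0.0, 0.0
        for name, value, mean_lat in order:
            elapsed += random.expovariate(1.0 / mean_lat)
            if elapsed > BUDGET_MS:
                break                             # later ads miss the deadline
            revenue += value
        total += revenue
    return total / samples

def anneal(ads, iters=1500, t0=1.0):
    cur = ads[:]
    cur_val = expected_revenue(cur)
    best, best_val = cur[:], cur_val
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-6         # cooling schedule
        cand = cur[:]
        i, j = random.sample(range(len(cand)), 2) # swap two positions
        cand[i], cand[j] = cand[j], cand[i]
        cand_val = expected_revenue(cand)
        if cand_val >= cur_val or random.random() < math.exp((cand_val - cur_val) / t):
            cur, cur_val = cand, cand_val
            if cur_val > best_val:
                best, best_val = cur[:], cur_val
    return [a[0] for a in best], round(best_val, 3)

print(anneal(ADS))
```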