UAV Traffic
Rabi G. Mishalani, a
Associate Professor
mishalani.1@osu.edu
Yuxiong Ji, a
Graduate Research Assistant
ji.28@osu.edu
a: Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, OH 43210, USA
b: Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
c: Knowlton School of Architecture, City and Regional Planning, The Ohio State University, Columbus, OH 43210, USA
ABSTRACT
Roadway networks span large distances and can be difficult to monitor. Most efforts to collect
roadway usage data either require a large fixed infrastructure or are labor intensive.
Technological advances in electronics and communication have recently enabled an alternative,
Unmanned Aerial Vehicles (UAVs). UAVs capable of carrying sensors and communications
hardware to relay data to the ground are becoming available on the commercial market. UAVs
can cover large areas and focus resources. They can travel at higher speeds than ground vehicles
and are not restricted to traveling on the road network. In this paper we investigate the use of a
UAV to monitor roadway traffic and develop and demonstrate several applications using data
collected from a UAV flying in an urban environment. We describe our use of the data to
determine level of service, annual average daily traffic, intersection operations, origin-destination
flows on a small network, and parking lot utilization. Our ability to determine these measures
illustrates the feasibility of extracting useful information from images sampled from a UAV for
both off-line planning and real-time management applications, and our discussion of the methods
indicates the challenges and opportunities that images obtained from such a platform present.
INTRODUCTION
Roadway networks span large distances and can be difficult to monitor. Most efforts to collect
roadway usage data either require a large fixed infrastructure or are labor intensive. Conventional
traffic surveillance relies on a set of detectors (including cameras) deployed at fixed locations,
requiring a high density of detectors to monitor changing conditions throughout the network.
When information is needed from beyond the range of these fixed detectors, personnel are
usually deployed to assess conditions. Technological advances in electronics and communication
have recently enabled an alternative to an inflexible fixed network of sensors or the labor-
intensive and potentially slow deployment of personnel. Unmanned Aerial Vehicles (UAVs)
capable of carrying a video camera, geo-positioning sensors and communications hardware to
relay data to the ground are becoming available on the commercial market. Examples include the
MLB-BAT [1] and GeoData Systems-Airborne Data Acquisition System (ADAS) [2]. Many of
these low cost aircraft are capable of sophisticated autonomous flight.
In this paper we investigate the feasibility of using a UAV to monitor traffic and develop several
applications. As noted in [3], UAVs can cover a large area and focus resources. They can travel at
higher speeds than ground vehicles and are not restricted to traveling on the road network.
With autonomous flight capabilities they can potentially free up personnel from time that would
otherwise be spent in transit to remote field locations.
To explore the benefits of UAVs in this context, on July 22, 2003 a set of experiments was
conducted on the campus of The Ohio State University in Columbus using the BAT III
technology [1] carrying a payload of two video cameras. The UAV flew at an altitude of 150 m
and an air speed around 50 km/h while transmitting video images to the ground station in real-
time. The flight lasted almost two hours and data were collected from several facilities. Figure
1A shows a map of the study area, roughly 2 km east-west and 2.2 km north-south (adapted from
[4]). This urban setting included many intersections and parking lots as well as a freeway, SR
315, running north/south through the middle of the map. As described below, five applications
were examined: measuring level of service (LOS), estimating annual average daily traffic
(AADT), examining intersection operation, measuring origin destination (OD) flows on a small
network, and measuring parking lot utilization. Figure 1B shows a schematic of the primary
roadways used in this study.
The subsequent descriptions are intended to provide the main concepts and methods applied to
derive useful information for both off-line planning and real-time management applications.
Moreover, the extracted information is presented in some detail. The purpose is to demonstrate
the feasibility of extracting useful information from images sampled from a UAV and illustrate
the challenges and opportunities such images present.
LOS MEASUREMENT AND AADT ESTIMATION
The UAV made one round trip along the southern 1.5 km of SR 315 shown in Figure 1. The
video images from this trip were used to investigate LOS and AADT, respectively reflecting
instantaneous and long-term traffic conditions. Density (vehicles/km) was used to measure LOS,
while flow (vehicles/hr) was used to estimate AADT. As discussed below, two different methods
of estimating freeway density and flow were developed. The first method uses still frames, and
the second method exploits information from a series of frames using Edie's generalized
definitions over time and space [5].
Method 1: Density and Flow from Still Frames
The numbers of passenger cars and trucks can be obtained directly from still video images.
Given these numbers and the length of imaged roadway segment they occupy, the traffic density
is commonly defined as:
k = (PC + 1.5 ⋅ TR) / L , (1)
where,
L = the segment length in km;
PC = the number of passenger cars in the imaged portion of the segment;
TR = the number of trucks in the imaged portion of the segment.
Here the most difficult aspect is measuring L along the roadway accurately. For this study we
used a geo-referenced aerial photo and a geographic information system (GIS) to measure
distance between observed landmarks. Conceivably the camera could be calibrated so that
distance along the ground could be extracted from the UAV views and distance traveled by the
UAV. In any event, given the density calculated in Equation 1, the hourly volume can be
estimated from the fundamental equation,
q_H = k ⋅ v , (2)
where,
v = space mean speed.
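As a minimal sketch of Equations 1 and 2 (in Python, with hypothetical counts; the function names are illustrative, and the 1.5 passenger-car-equivalent factor for trucks follows Equation 1):

```python
def density(pc: int, tr: int, length_km: float) -> float:
    """Traffic density k in passenger-car equivalents per km (Eq. 1)."""
    return (pc + 1.5 * tr) / length_km

def hourly_flow(k: float, v_kmh: float) -> float:
    """Hourly volume q_H from the fundamental relation q_H = k * v (Eq. 2)."""
    return k * v_kmh

# Hypothetical frame: 24 passenger cars and 4 trucks on a 0.5 km segment.
k = density(pc=24, tr=4, length_km=0.5)   # 60 veh/km
q = hourly_flow(k, v_kmh=90.0)            # 5400 veh/h
```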
Although one cannot measure speed from a single still frame, it can either be estimated from the
posted speed limit or from the video stream. In this study two different approaches were used to
estimate speed. The first speed estimate used the GIS to measure the distance between two points
visible in the given still frame. The video stream was then used to measure travel times across
this segment for as many vehicles as possible. The space mean speed was then estimated by the
arithmetic mean of the measured speeds across the link, i.e.,
v = (1/m) ∑_{i=1}^{m} L / t_i , (3)
where,
t_i = travel time of the i-th vehicle in the segment of length L,
m = the total number of vehicles measured.
The second speed estimate is intended to be used when it is difficult or impossible to match the
start and end of a segment with landmarks in the GIS. In this case we selected several vehicles at
random from the given still frame. Each vehicle was tracked over several seconds while we counted
the number of lane markers passed and recorded the travel time. Once more the space mean
speed was estimated by the arithmetic mean of the measured speeds, i.e.,
v = (1/m) ∑_{i=1}^{m} (n_i ⋅ l) / t_i , (4)
where,
n_i = the number of lane markers that the i-th vehicle passed,
l = the distance from the beginning of one lane marker to the beginning of the next (12.2 m, taken from the design specifications and used here),
t_i = travel time of the i-th vehicle.
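The two speed estimators, Equations 3 and 4, can be sketched as follows (hypothetical measurements; the 12.2 m lane-marker spacing is the value stated above):

```python
def speed_landmarks(travel_times_s, length_m):
    """Eq. 3: space mean speed as the mean of L/t_i over m vehicles, in m/s,
    using a segment length L measured between GIS landmarks."""
    return sum(length_m / t for t in travel_times_s) / len(travel_times_s)

def speed_lane_markers(marker_counts, travel_times_s, marker_spacing_m=12.2):
    """Eq. 4: space mean speed as the mean of (n_i * l)/t_i over m vehicles,
    in m/s, using the number of lane markers n_i each vehicle passed."""
    return sum(n * marker_spacing_m / t
               for n, t in zip(marker_counts, travel_times_s)) / len(travel_times_s)
```

The arithmetic mean of the per-vehicle speeds is used in both cases because each vehicle's speed is already a distance over its own travel time, so averaging them yields the space mean speed.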
This method was applied to many frames, including the five shown in Figure 2.
through the FOV, i.e.,
d(FOV) = ∑_i ( x_exit(i) − x_enter(i) ) = ∑_i x_exit(i) − ∑_i x_enter(i) , (6A)
t(FOV) = ∑_i ( t_exit(i) − t_enter(i) ) = ∑_i t_exit(i) − ∑_i t_enter(i) , (6B)
where the i-th vehicle enters the FOV at (x_enter(i), t_enter(i)) and exits at (x_exit(i), t_exit(i)). The
boundaries of the FOV need to be specified as accurately as possible to measure these quantities precisely.
Equations 6A and 6B can be summed across individual lanes, or the entire roadway can be
processed without regard to which lane a vehicle is traveling in.
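Given vehicle trajectories clipped to the FOV, Equations 6A and 6B and the resulting space mean speed v = d(FOV)/t(FOV) can be sketched as follows (the data layout is hypothetical, not from the study):

```python
def edie_fov(trajectories):
    """Edie's generalized definitions over the FOV (Eqs. 6A-6B).
    trajectories: list of ((x_enter, t_enter), (x_exit, t_exit)) per vehicle,
    with x in meters and t in seconds.
    Returns (d, t, v): total distance traveled in the FOV, total time spent
    in the FOV, and the space mean speed d/t."""
    d = sum(x_exit - x_enter for (x_enter, _), (x_exit, _) in trajectories)
    t = sum(t_exit - t_enter for (_, t_enter), (_, t_exit) in trajectories)
    return d, t, d / t
```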
accuracy is likely due to the fact that two averages incorporate more vehicles over a larger region
of the time-space plane compared to the still frame method applied to a single frame.
INTERSECTION OPERATIONS
The majority of the flight time was devoted to monitoring intersections. First the UAV circled the
small network shown in Figure 1B, counterclockwise past i1-i2-i4-i5, then reversed direction,
flying clockwise past i1-i5-i4-i2. Figures 4A-B show examples of the views from the UAV
while circling counterclockwise. Then the UAV circled individual intersections for several
minutes, first i1, then i4, i5, and the diamond interchange of i6 and i7. Figures 4C-D show
opposing views of i1 and illustrate how the view changed while circling i1.
These video segments were used to analyze intersection operations. Most of the analysis was
based on common queuing measures. Two methods of measuring intersection queue lengths were
examined. The first method used input-output flows to measure point queues on intersection
approaches. To facilitate measuring the point queues, a straightforward computer program was
written and used to track the times when vehicles pass two points on each approach. These data
allowed for the construction of vehicle arrival and departure curves. Queues, signal timings,
arrival rates, and turning movements were all derived from these curves. The second method
examined spatial queues and consisted of sampling spatial queue lengths on intersection
approaches at fixed time intervals.
Point Queues
Using the computer interface to record vehicle arrivals and departures on each approach at each
intersection that was circled, the cumulative arrival and the cumulative departure curves were
obtained. On a given approach the arrival curve, A(t), was measured at a point far enough
upstream of the intersection that queues rarely overran the location while being close enough to
the intersection that the point was usually within the field of view. In the rare event that either of
these assumptions was violated, the analysis was suspended until the violated assumption was
restored. The departure curve, D(t), was measured at the stop bar and vehicle turning movements
were recorded through different key presses using the computer interface as the vehicles passed.
Following normal queuing theory the arrival curve was shifted forward in time by a constant,
free flow travel time to yield the virtual arrival curve at the stop bar, V(t). The time shift was
estimated empirically on each approach by measuring several vehicles traversing the distance
between the two locations under free flow conditions. In this way, V(t) reflects the time vehicles
would have passed the stop bar if there were no delay between the two points. The point queue at
any instant, then, is simply,
Q(t) = V(t) − D(t) . (8)
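Equation 8 can be sketched directly from the recorded timestamps (the timestamps and the 6-second free flow shift below are hypothetical; the real curves were built from the key-press records described above):

```python
import bisect

def point_queue(virtual_arrivals, departures, t):
    """Q(t) = V(t) - D(t) (Eq. 8): cumulative virtual arrivals by time t minus
    cumulative departures by time t. Both inputs are sorted timestamp lists."""
    return (bisect.bisect_right(virtual_arrivals, t)
            - bisect.bisect_right(departures, t))

# Hypothetical timestamps in seconds; arrivals are shifted forward by the
# free flow travel time to form the virtual arrival curve V(t).
arrivals = [1, 3, 5, 7, 9, 11]
free_flow = 6
virtual = [a + free_flow for a in arrivals]
departs = [10, 12, 14, 16, 18, 20]
q = point_queue(virtual, departs, t=13)   # 4 virtual arrivals, 2 departures
```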
Figure 5 shows two cycles from the point queue model of the eastbound approach to i1. The
other approaches were similarly processed. Figure 5A shows A(t), V(t) and D(t). On this
approach the free flow travel time was determined to be 6 seconds. The traffic signal indications
were not visible at the resolution of the UAV video. The signal phasing shown in Figure 5B was
extracted by watching vehicle movements on all four approaches. The validity of this extracted
phasing was verified at this intersection using concurrent video filmed on the ground. There are
two periods where D(t) is nearly horizontal, corresponding to the red phase. The small number of
departures observed during these periods comes from vehicles turning right on red. Figure 5C
shows Q(t) using Equation 8. Queue growth after the signal turns red and the subsequent decay
after the signal turns green are clearly evident. The observed peaks correspond to queue lengths
just before D(t) exhibits saturation flow at the start of the green phase.
Spatial Queues
The point queue accurately captures delay, but assumes all the delay occurs at the stop bar. In
fact most of the delay is incurred upstream of the stop bar. As the queue grows upstream of
the stop bar, vehicles enter the queue sooner than would be predicted by a point queue model,
consuming some of the free flow travel time while in the queue. Spatial queuing models
capture greater detail of what is experienced by travelers but lead to complications when trying
to extract delay and other metrics. Recognizing the spatial nature of real queues is important,
since a queue can impede the operation of the network if it overruns an upstream intersection (a
good overview of the trade offs between point and spatial queue models can be found in [9]).
Therefore, in the second method, spatial queues at intersections were measured at regular time
intervals by counting the number of vehicles queued at an instant in time. Sampling spatial
queues in this fashion precludes delay measurement but accurately captures the spatial extent of
the queue and can be done very quickly. Since each measurement only needs to capture an
instant, this method utilized the video from circling the network in addition to the video from
circling individual intersections. An observer manually counted the number of vehicles in the
queue by lane at fixed intervals for each approach to i1, i5, i6, and i7. A 10 second sampling
interval was used while the UAV circled a particular intersection. A given intersection was only
in view for 10-30 seconds when the UAV circled the network, so a 5 second sampling interval
was used to provide more observations for the comparisons.
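Averaging the sampled instantaneous counts per approach is straightforward; a sketch with hypothetical counts (the approach names match Figure 6, the values are illustrative):

```python
def average_spatial_queue(samples):
    """Mean of the sampled instantaneous queue counts for each approach."""
    return {approach: sum(counts) / len(counts)
            for approach, counts in samples.items()}

# Hypothetical queue counts sampled at fixed intervals on two approaches.
samples = {"Lane WB": [3, 5, 0, 4], "Kenny SB": [1, 2, 1, 0]}
avg = average_spatial_queue(samples)
```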
Figure 6 presents the average spatial queue length for i1 obtained from this method. This figure
compares average spatial queue lengths obtained during intervals when the UAV circled a
particular intersection to those obtained during periods when the UAV circled the larger network.
If the intermittent observations obtained while circling the network all fell in the same phase of
the cycle, the queue measurements would show disproportionately longer average queues on the
approaches that were imaged during red phases and disproportionately shorter average queues on
the approaches that were imaged during green phases. Provided care is taken to ensure that the
observations while circling the network do not fall at the same point in the cycle, it should be
possible to use a UAV to monitor concurrently the queue lengths at many intersections, e.g., the
seven intersections in the small network of Figure 1B. Since the empirical results in Figure 6
come from different times of the day (roughly an hour apart), at this time it is not possible to
conclusively confirm or refute the hypothesis that intermittent sampling while circling a large
network is sufficient. However, the results do fall within expectations. Between the two data sets,
three of the four approaches had average queue lengths within 2 vehicles. The fourth approach
had much longer queues when the UAV circled the network, which occurred during the morning
peak period, and much shorter queues when the UAV circled the intersection, which occurred
after the peak.
ORIGIN-DESTINATION ESTIMATION
We used the UAV video while circling intersections i6 and i7 in conjunction with video taken
from ground-based cameras at i1 to determine origin-destination flows on this small network.
The network is shown in Figure 7A superimposed on an aerial photo. The origins were defined to
be the approaches to the diamond interchange, i6 and i7, and the destinations were the three
branches from i1. Figure 7B shows a simplified schematic of the network highlighting the fact
that most of the roadway between i7 and i1 went unobserved. The primary goal of this effort was to
match observations across this unobserved region and measure network OD flows. To this end a
methodology was developed in which we match platoons of vehicles from the various origins to
an approach feeding into the downstream intersection, and subsequently the vehicles are
followed through to their respective destinations. The platoon-matching method avoids the need
to match individual vehicles, since any unobserved reordering within a platoon from a given
origin will not change the proportion of vehicles bound for a given destination.
The computer interface developed to study intersection operations was used here to record the
origins, departure times, and lanes in which vehicles depart the upstream intersection field of
view. Similarly, the process was repeated for the vehicles at the downstream intersection, from
arrival in the downstream field of view to their final destination. Within each intersection the
vehicle movements were followed, yielding a map to the particular origin or destination. A few
vehicles departed the network at the freeway on-ramps, which were visible from the UAV view.
These vehicles were excluded from further analysis.
The number of vehicles at the upstream and downstream intersections, along with their origins
and destinations, represent the marginal totals of the OD flows, i.e., the number of vehicles that
originate at an origin or terminate at a destination. Table 3 shows the origin and destination
totals. In principle, the flows between origins and destinations could be measured through
exact one-to-one matching of all vehicles between origins and
destinations. However, this task would prove tedious over large samples, and in this case would
be very difficult, if not impossible, given the resolution of the imagery and the non-overlapping
fields of view obtained in the upstream UAV video and the downstream ground camera views.
Instead, we matched distinct vehicle platoons between the intersections.
Key to the feasibility of this methodology is the fact that the headway between platoons is
normally much larger than the headway between vehicles within platoons. As the upstream
signal cycles through approaches, the clearance intervals introduce these larger headways
between platoons from the various origins. Large headways mean overtaking between platoons is
generally uncommon, and if it does occur, it will only be among the vehicles at the end of the
platoon. Moreover, reordering of vehicles within platoons due to overtaking will not impact OD
flows, thus allowing the use of the simple First-In-First-Out (FIFO) assumption. Figures 7C-D
illustrate the process, where each origin is given one shade and each destination one symbol. In
this hypothetical example there is a gap after the platoon from o2 has departed the upstream
intersection (on the right) before the platoon from o1 is released.
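The platoon-matching idea can be sketched as follows. The event layout, the 4-second headway threshold, and the function names are all hypothetical illustrations of the method, not the study's actual data reduction code; the key property is that vehicles within a platoon share one origin, so unobserved reordering inside a platoon leaves the OD counts unchanged:

```python
def split_platoons(events, gap_s=4.0):
    """events: time-sorted (t, origin) pairs observed departing upstream.
    A headway larger than gap_s starts a new platoon, mirroring the large
    headways introduced by the upstream signal's clearance intervals."""
    platoons, current = [], []
    for t, origin in events:
        if current and t - current[-1][0] > gap_s:
            platoons.append(current)
            current = []
        current.append((t, origin))
    if current:
        platoons.append(current)
    return platoons

def match_od(upstream_events, downstream_dests, gap_s=4.0):
    """FIFO platoon matching: the k-th destination observed downstream is
    assigned to the k-th vehicle departing upstream, platoon by platoon,
    and OD flows are accumulated per (origin, destination) pair."""
    od, i = {}, 0
    for platoon in split_platoons(upstream_events, gap_s):
        for _, origin in platoon:
            dest = downstream_dests[i]
            i += 1
            od[(origin, dest)] = od.get((origin, dest), 0) + 1
    return od

# Hypothetical example: a 3-vehicle platoon from o1 followed, after a gap,
# by a 2-vehicle platoon from o2.
events = [(0.0, "o1"), (1.5, "o1"), (3.0, "o1"), (10.0, "o2"), (11.5, "o2")]
dests = ["d1", "d2", "d1", "d2", "d2"]
od = match_od(events, dests)
```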
Table 4 shows the resulting OD flows from the platoon-matching procedure applied to the
empirical data. In the platoon matching method, measurement errors are inevitable during data
reduction. Such errors could change the derived OD flows. The impact of such errors is
investigated in [10], where it is shown that many of these errors can be detected using the
observed data.
PARKING LOT UTILIZATION
The last few minutes of the UAV flight were devoted to flying over several large parking lots
shown in Figure 8A, namely P10a, P10b, P11, P12, P9, P1a, and P1b (in that order). Figure 8B
shows P11 to the left and P12 to the right in the foreground with a third lot behind them on the
right. Throughout the flight many other parking lots were visible. These comprise the remaining
labeled lots in Figure 8A (Figure 8A and Figure 1 are to the same scale). In fact, P3 and P8 are
visible, respectively, in the bottom and top of Figure 4A. The capacity of the parking lot, the
number of available parking spaces, and the number of occupied parking spaces can be acquired
from the video. Such information would be useful for directing commuters or event traffic to the
best lots and in providing planning data on parking needs. Compared to ground-based crews, a
UAV can quickly fly among multiple parking lots to provide many more repeated samples of the
usage in a given time. Figure 8C shows the utilization of the parking lots. Lot P4 was nearly full,
while most of the remaining lots were relatively empty, since the data were collected during summer quarter.
CONCLUSIONS
This paper presented methodological developments to exploit UAV data for multiple
applications. The applications discussed here were level of service, annual average daily traffic,
intersection operations, origin-destination flows on a small network, and parking lot utilization.
All these applications were demonstrated from less than 2 hours of flight time. Most of the data
reduction in this study was done manually, with computer interfaces used to simplify many of the tasks. If
UAVs were used on a large scale for any of these applications, it is likely that additional aids
would be developed to assist this process, e.g., software to keep the FOV on the road or make it
easier to measure distances in the FOV, and hardware, such as multiple cameras or specialized
lenses, to extend the FOV. In the long term, it is likely that many of the tasks could be
automated.
ACKNOWLEDGEMENTS
This study was funded through a grant to the National Consortium for Remote Sensing in
Transportation-Flows (NCRST-F) from the US Department of Transportation. The efforts of
Steve Morris from MLB in providing the aircraft and operator for the field experiment and of
Keith Redmill from OSU in providing logistical support are particularly appreciated. The authors
also acknowledge the help of the many agencies – including the Federal Aviation Administration
and the Columbus Police Department – and their staff that enabled the UAV experiment.
REFERENCES
[1] MLB, "MLB Company," http://spyplanes.com, accessed on December 8, 2005.
[2] GeoData Systems, "GeoDataSystems", http://www.geodatasystems.com, accessed on
December 8, 2005.
[3] Coifman, B., McCord, M., Mishalani, R., Redmill, K., "Surface Transportation
Surveillance from Unmanned Aerial Vehicles," Proc. of the 83rd Annual Meeting of the
Transportation Research Board, 2004.
[4] Ohio State University Department of Physical Facilities, http://www.physfac.ohio-
state.edu/mapping, accessed on July 23, 2003.
[5] Edie, L., "Discussion of Traffic Stream Measurements and Definitions," Proc. 2nd
International Symposium on the Theory of Traffic Flow, 1963, pp 139-154.
[6] TRB, Highway Capacity Manual, Transportation Research Board, 2000.
[7] McCord, M., Yang, Y., Jiang, Z., Coifman, B., Goel, P., "Estimating AADT from
Satellite Imagery and Air Photos: Empirical Results," Transportation Research Record
1855, 2003, pp 136-142.
[8] Jiang, Z., McCord, M., Goel, P., "Improved AADT Estimation by Combining
Information in Image- and Ground-based Traffic Data," ASCE Journal of Transportation
Engineering, 2005 [in press].
[9] Erera, A., Lawson, T., Daganzo, C., "Simple, Generalized Method for Analysis of Traffic
Queue Upstream of a Bottleneck," Transportation Research Record 1646, 1998, pp 132-
140.
[10] Mishalani, R., McCord, M., Coifman, B., Iswalt, M., Ji, Y., "Platoon Based Origin
Destination Estimation" [in preparation].
Table 1: Level of Service Classification

  LOS | Density range, basic freeway segments (veh/km/lane)
  A   | 0-7
  B   | 7-11
  C   | 11-16
  D   | 16-22
  E   | 22-28
  F   | over 28
Table 2: Density, Flow, AADT and LOS Results of SR 315 Southbound

Table 3: Origin and Destination Totals

  Description   | # of Vehicles
  O1 (315 S)    | 51
  O2 (315 N)    | 55
  O3 (Lane WB)  | 16
  Total O's     | 122
  D1 (Kenny SB) | 34
  D2 (Lane WB)  | 82
  D3 (Kenny NB) | 6
  Total D's     | 122
Table 4: Complete OD Matrix

         D1   D2   D3 | Sum O
  O1     18   32    1 |    51
  O2     11   42    2 |    55
  O3      5    8    3 |    16
  Sum D  34   82    6 |   122
Figure 1, (A) Map of the study area, roughly 2 km across, (B) schematic of the primary network
from the map (at the same scale) used in this study.
Figure 2, Five sample frames used to estimate flow, density, LOS and AADT; A-E
correspond respectively to frames 1-5.
Figure 3, Example of extracted vehicle trajectories (measured and extrapolated) and the UAV
FOV on 315-South, plotted as distance (m) versus time (s) for Lane 1.
Figure 4, Sample images while circling the network and individual intersections, (A)
viewing east with i5 on the far left and i4 on the right, (B) viewing west i6 then
i7, (C) viewing west i1, (D) viewing east i1.
Figure 5, (A) Cumulative arrivals and virtual arrivals at the stop bar for two cycles from the
eastbound approach to i1, (B) the signal phasing on this approach, (C) the measured
point queue lengths.
Figure 6, Average spatial queue (vehicles) by approach at the Lane/Kenny intersection, comparing
observations while circling the network and later while circling the intersection.
Figure 7, (A) The OD network superimposed on an aerial photo, (B) a simplified schematic of the
network, most of the roadway between i7 and i1 went unobserved, (C) example where
westbound traffic from o2 just received a green light, each origin is given one shade and
each destination one symbol, (D) continuation of the example a few seconds later as the
platoon from o2 passes i1 and a new platoon starts from o1.
Figure 8, (A) schematic showing the location of observed parking lots relative to the primary
network, (B) a sample view from the flight showing P11 to the left and P12 to the right in
the foreground, (C) measured parking lot utilization (proportion of occupied and empty
spaces per lot).