
Journal of the Eastern Asia Society for Transportation Studies, Vol. 12, 2017

Examining Smoothening Techniques for Developing Vehicular Trajectory Data under Heterogeneous Conditions

Narayana RAJU a, Pallav KUMAR b, Chakradhar REDDY c, Shriniwas ARKATKAR d, Gaurang JOSHI e

a Research Scholar, Sardar Vallabhbhai National Institute of Technology, Surat, India. E-mail: s.narayanaraju.10@gmail.com
b Research Scholar, Sardar Vallabhbhai National Institute of Technology, Surat, India. E-mail: pallav318@gmail.com
c Graduate Student, Sardar Vallabhbhai National Institute of Technology, Surat, India. E-mail: chakri6023@gmail.com
d Assistant Professor, Sardar Vallabhbhai National Institute of Technology, Surat, India. E-mail: sarkatkar@gmail.com
e Associate Professor, Sardar Vallabhbhai National Institute of Technology, Surat, India. E-mail: gjsvnit92@gmail.com

Abstract: This research work aims at developing vehicular trajectory data under the heterogeneous traffic conditions prevailing in India. The main difficulty lies in developing trajectory data for heterogeneous traffic, since automated tools are not very efficient at extracting vehicular trajectories under these conditions. In the present study, a semi-automated data extraction tool is therefore used to extract vehicular trajectories. The tool tracks vehicular movement in terms of longitudinal and lateral coordinates over the study sections. Because vehicle positions are tracked manually with a mouse pointer, misclicks can produce unrealistically high instantaneous accelerations and velocities. To eliminate the errors introduced during trajectory extraction, various smoothening techniques are applied to the trajectory data and their effectiveness is analyzed.

Keywords: Vehicular Trajectories, Smoothening Techniques, Traffic Data Extractor, Vehicular Behavior

1. BACKGROUND

Quantifying vehicular behavior on a given road section is one of the most challenging tasks. Under heterogeneous traffic conditions, studying vehicular behavior is even more complex because several intangible parameters are involved in the phenomenon. Since the inception of vehicular behavior studies, different car-following and lane-changing models have been formulated to explain the longitudinal and lateral movement of vehicles over road segments (Brackstone and McDonald, 1999). Toledo et al. (2007) presented an approach for processing position data to develop vehicle trajectories along with speed and acceleration profiles, and proposed a methodology for developing trajectory data. Guo et al. (2010) developed an approach to extract representative points of vehicles and interpolated trajectories to establish topological relationships among trajectories over the locations. Oh and Kim (2010) studied rear-end crash potential using trajectory data and modelled a crash rate index and time to collision. Xu et al. (2011) analyzed vehicular ad hoc networks using trajectory data and a shared-trajectory-based data forwarding scheme. Liu et al. (2012) integrated a GPS mechanism in the vehicle, extracted vehicle trajectories and framed a methodology for calibrating large-scale data. Houenou and Bonnifait (2013) tested prerecorded real driving data and evaluated their methodology for developing trajectory data. Kanagaraj et al. (2015) reported an approach for developing a vehicle trajectory data set in mixed traffic on an urban midblock road section in Chennai, India, and analyzed micro-level traffic flow characteristics on this basis. Zheng (2015) generated massive trajectory data and presented an approach for collecting it. Taylor et al. (2015) examined intradriver heterogeneity, tested the Newell car-following model and applied a dynamic time warping algorithm to model traffic conditions. Most research studies have concentrated on analyzing behavior under homogeneous traffic conditions. Calibrating these kinds of models requires highly precise data, in terms of vehicular coordinates at each time frame over the study section. This forms the basis for developing vehicular trajectory data over road segments.
However, very few research studies have quantified driving behavior under heterogeneous traffic conditions, such as those in many developing countries. The basic challenge is the lack of vehicle trajectory data for such road segments. Moreover, because of the small sample sizes available under heterogeneous conditions, calibrated driving behavior models may not be robust enough to explain the actual phenomenon observed in the field. In contrast, for homogeneous traffic conditions in developed countries, FHWA's Next Generation SIMulation (NGSIM) project (2012) collected and shared several datasets of vehicle trajectories on freeways and urban arterials in the US. These datasets have been used extensively to calibrate and validate different driving behavior models. From the available literature, it was observed that only a few researchers have attempted to develop vehicle trajectories under heterogeneous traffic conditions. Kanagaraj et al. (2010) collected trajectories of vehicle-following pairs in merging situations. Sangole and Patil (2014) tracked trajectories of vehicles involved in gap acceptance behavior. Kanagaraj et al. (2015) developed vehicle trajectories using a semi-automated tool and examined various traffic flow characteristics such as speed, acceleration and deceleration, and longitudinal spacing. On a similar basis, Bhardwaj et al. (2015) used the TRAZER software to develop vehicular trajectories for studying macro-level traffic flow characteristics on NE-1, India. Raju et al. (2017) calibrated car-following models, including the Wiedemann models, under heterogeneous traffic conditions. In most cases, vehicle trajectory data are not openly available in the context of mixed traffic conditions. This is mainly because of the difficulty and high cost involved in data collection and extraction, and the technical complexities associated with a wide mix of vehicle types with varying physical dimensions, dynamic characteristics (speed and acceleration capabilities) and non-lane-based movement. With the advancement of technology, many image processing tools are available worldwide, but the reliability of their results is questionable because of the complexity of the calibration process in these software packages. Considering these research gaps, the present study aims to highlight the availability and applicability of different smoothening techniques using field-extracted data under heterogeneous traffic conditions. In this study, a semi-automated tool called 'Traffic Data Extractor' is used to extract vehicular trajectories, and the credibility of different smoothening techniques is tested using several measures.
The paper is organized into nine sections, including this one. The data collection procedure is explained in Section 2. Section 3 presents the trajectory extraction process using the traffic data extraction tool. Section 4 describes the trajectory building methodology. Section 5 describes the smoothening of the trajectory data obtained in Section 3; it includes three subsections covering the moving average method, the local weighted regression method and the Savitzky-Golay filtering technique. Section 6 describes the quantification of the smoothened data. This is followed by a SWOT analysis of the smoothening techniques in Section 7. Section 8 demonstrates practical applications of accurately smoothened data. Summary and conclusions are reported in Section 9.

2. FIELD DATA

In the present study, four mid-block road sections on uninterrupted roadway facilities in India were selected to study micro-level behavior on basic midblock sections. The segments include the Delhi-Gurgaon multi-lane intercity road, the Western Expressway in Mumbai, the Ahmedabad-Vadodara Expressway and the Pune-Mumbai Expressway. The selected sections comprise varying roadway configurations, namely eight-lane, ten-lane, four-lane and six-lane divided carriageways. The sections were chosen so that varying roadway and traffic characteristics are accounted for and the results would have wide applicability. Figure 1 depicts snapshots taken at the study locations. A reconnaissance survey was conducted to select the mid-block sections, and care was taken to ensure that all study sections are free from the effects of curvature, gradient, side friction, direct access, potholes, pedestrian movement, etc., which can affect the continuity of traffic flow. The general features of these roadways are as follows: (i) each lane is 3.5 meters wide; (ii) a hard shoulder is provided in each direction of travel; (iii) traffic flow in the two directions is separated by a median.
At each study section, a video camera was installed at a vantage point to capture continuous traffic flow and distinctly identify vehicles along the section. The vantage point was a foot over bridge (FOB) located near the identified study section at each of the four locations. The FOBs span perpendicular to the carriageway of the study sections under consideration. Videographic surveys were conducted on bright sunny days on all four sections. Data was collected for twelve hours on each selected section in order to capture the variation in traffic-flow conditions prevailing at different times of the selected day. Details of the data collection, along with its duration, are given in Table 1.

Table 1 Data Collection Details on Study Sections

Sl. No.  Location                        Duration                        Day of Survey
1        Multi-lane Urban Road, Delhi    6:00 am to 6:00 pm (12 hours)   18th March 2016
2        Multi-lane Urban Road, Mumbai   6:00 am to 6:00 pm (12 hours)   5th March 2016
3        Pune-Mumbai Expressway          6:00 am to 6:00 pm (12 hours)   10th April 2016
4        Ahmedabad-Vadodara Expressway   6:00 am to 6:00 pm (12 hours)   25th April 2016


Figure 1 Snapshots of the study locations: (a) Delhi; (b) Mumbai; (c) Pune-Mumbai Expressway; (d) Ahmedabad-Vadodara Expressway

3. TRAJECTORY DATA EXTRACTION

With the advancement of technology in recent years, several automated and semi-automated tools have come into existence; some are commercial and the rest are freeware. The most widely used data extraction tools include VEVID, NGSIM-Video and Trajectory Extractor. For the heterogeneous conditions prevailing on Indian roads, TRAZER and Traffic Data Extractor are used for developing vehicular trajectories. TRAZER is a commercial, automated software, whereas Traffic Data Extractor is semi-automated and freeware.
In the present study, Traffic Data Extractor, developed by IIT Bombay, was used for developing vehicular trajectories. With this software, the vehicular trajectory, in terms of longitudinal and lateral positions, is tracked for each time frame along with the vehicle type. For low and medium traffic flow conditions, vehicle trajectories were extracted at a time resolution of 0.5 s, and for heavy traffic flow conditions a resolution of 1 s was used. The extraction is a semi-automated process in which an operator manually tracks each vehicle by clicking on it with the mouse pointer in a Windows-based graphical user interface. Traffic Data Extractor converts these mouse clicks into real-world coordinates and calculates vehicle positions. The coordinate conversion relies on four reference points in the video images and their coordinates in the real world. Figure 2 shows the software's graphical user interface and the vehicle tracking on the road section; tracked vehicles are highlighted with a green mark.
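The paper does not document how the tool performs this conversion internally; the sketch below shows one standard way a four-point image-to-ground mapping can be carried out, namely a planar homography estimated by direct linear transformation. All point values and helper names are illustrative assumptions.

```python
import numpy as np

# Four reference points clicked in the video frame (pixels) and their surveyed
# positions in the field (metres). The values here are illustrative only.
image_pts = np.array([[102.0,  88.0], [618.0,  92.0], [655.0, 402.0], [60.0, 398.0]])
world_pts = np.array([[  0.0,   0.0], [ 14.0,   0.0], [ 14.0, 120.0], [ 0.0, 120.0]])

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform mapping src points onto dst points
    (direct linear transformation; exact for four non-collinear point pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)          # null vector holds the homography coefficients

def to_world(H, pts):
    """Map pixel coordinates to field coordinates (metres) with the homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale

H = fit_homography(image_pts, world_pts)
clicks = np.array([[350.0, 240.0], [352.0, 252.0]])   # successive clicks on one vehicle
print(to_world(H, clicks))
```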


Figure 2 Trajectory data extractor user interface showing tracked vehicles on the road section

In the Traffic Data Extractor user interface, the surveyed video files were loaded. The trap length and width of the road section were entered as the length and width of a marked rectangular portion, which calibrates the road section on the software screen. For any given vehicle, its vehicle class was entered in the software interface by observation, and the vehicle was then tracked by clicking on it with the mouse pointer. In the same manner, every vehicle present in the traffic stream was tracked for the selected time duration. The tracked data were exported to Excel files in the form of image coordinates and further converted to real-world coordinates to obtain the vehicle trajectories.
In the present study, a random 20-minute duration of vehicular flow was selected for the Delhi section for developing vehicular trajectories. On this section, around 2447 vehicles were observed and their trajectories were developed for a trap length of 195 meters. On the Mumbai Western Expressway section, different flow conditions were observed over the survey duration. Keeping this in view, a total of 35 minutes was selected for developing trajectory data at different flow conditions: a random sample of 10 minutes of free flow, 15 minutes of medium flow and 10 minutes of stop-and-go conditions. In total, 3455 vehicles were observed on the Mumbai section and trajectories were developed for a trap length of 120 meters. Similarly, on the Pune-Mumbai Expressway, a random 20-minute period was selected for trajectory development over a trap length of 100 meters, covering 801 vehicles. For the Ahmedabad-Vadodara Expressway, 20 minutes of video was selected and trajectory data were developed for a trap length of 120 m for 400 vehicles, as summarized in Table 2. The vehicular trajectory data obtained as output from the software for the Mumbai section is presented as an example in Figure 3. Based on this trajectory data, both micro- and macro-level traffic flow parameters can be analyzed, but the results will be accurate only if the trajectory data is first smoothened.


Figure 3 Extracted trajectory data in excel files

Table 2 Details of trajectory data from the study sections

Sl. No.  Study Section         Trap length (m)  Trajectory data        No. of vehicles  Dominant vehicle
                                                extracted (minutes)    tracked          category
1        Delhi section         195              20                     2506             Cars
2        Mumbai section        120              35                     3455             2W, cars, 3W
3        Pune-Mumbai section   100              20                     801              Cars and trucks
4        Ahmedabad-Vadodara    120              20                     400              Cars and trucks

4. TRAJECTORY BUILDING METHODOLOGY

From the extracted trajectory data, time-space plots (longitudinal distance versus time) were prepared for all vehicles, as shown in Figure 4, which depicts the time-space plots of all vehicles in the stream for a 10-minute period. However, it is difficult to distinguish individual vehicle trajectories in Figure 4. Therefore, in Figure 5, the time-space plots were exaggerated by reducing the time duration shown on the x-axis. With the help of these plots, car-following and overtaking behavior among the vehicles can be clearly visualized. This can be synthesized later to evaluate the vehicular behavior over the roadway section.
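As an illustration of how such plots can be produced from the extracted data, the sketch below draws one line per vehicle; the file name and column names (vehicle_id, time_s, long_pos_m) are hypothetical and would have to match the actual extractor output.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names; the real extractor output may differ.
df = pd.read_csv("mumbai_trajectories.csv")   # columns: vehicle_id, time_s, long_pos_m

fig, ax = plt.subplots(figsize=(10, 5))
for _, track in df.groupby("vehicle_id"):
    track = track.sort_values("time_s")
    ax.plot(track["time_s"], track["long_pos_m"], linewidth=0.6)

ax.set_xlabel("Time (s)")
ax.set_ylabel("Longitudinal position (m)")
ax.set_xlim(200, 300)    # zooming in, as in Figure 5, makes individual trajectories visible
plt.show()
```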


Figure 4 Time space plots of vehicles over the road section

Figure 5 Time space plots of vehicles between time frame 200 to 300 seconds

5. DATA SMOOTHENING

When the time-space plots were exaggerated to check the consistency of the data set, unrealistically high instantaneous velocities and acceleration rates were found. These arise from human errors while tracking vehicles, since the data was manually extracted; they do not resemble real field conditions and make the data internally inconsistent. To address this issue, different smoothening techniques need to be applied to the extracted trajectory data. Past studies show that, to nullify these kinds of errors in vehicular trajectories, different smoothening techniques have been applied to smoothen the data and obtain continuous vehicle positions. Toledo et al. (2007) applied the local weighted regression method to smoothen the data. Kanagaraj et al. (2015) applied a tricube function to increase the internal consistency of the vehicular trajectories, highlighting the significance of internal consistency when smoothening data. In this study, in order to increase the internal consistency and nullify the errors in the trajectory data, three smoothening techniques were applied: (i) the moving average method; (ii) local regression methods; and (iii) the Savitzky-Golay filtering technique. These were used to increase the internal consistency of the data and obtain continuous vehicle coordinates. Using a MATLAB code, the data was smoothened by treating each vehicle as a separate entity. In the subsequent subsections, each smoothening method is explained with the help of time-space plots, along with its smoothening performance.
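The study itself used MATLAB; the Python sketch below merely illustrates the "each vehicle as a separate entity" idea with a pluggable smoother. The file name and column names are assumptions, and the centred moving average is only a stand-in for any of the techniques in Sections 5.1-5.3.

```python
import pandas as pd

def smooth_track(track: pd.DataFrame, window: int = 5) -> pd.DataFrame:
    """Smooth one vehicle's coordinates; any of the techniques in Sections 5.1-5.3
    can be plugged in here (a centred moving average is used as a stand-in)."""
    track = track.sort_values("time_s").copy()
    for col in ("long_pos_m", "lat_pos_m"):
        track[col] = track[col].rolling(window, center=True, min_periods=1).mean()
    return track

# Hypothetical file and column names (vehicle_id, time_s, long_pos_m, lat_pos_m).
df = pd.read_csv("extracted_trajectories.csv")
smoothed = df.groupby("vehicle_id", group_keys=False).apply(smooth_track)
```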

5.1 Moving Average Method

Said and Dickey (1984) and Roland (2004) used the moving average method to normalize errors by creating a series of averages of different subsets of the data set. In general, a moving average is a set of numbers, each of which is the average of a corresponding subset of a larger set of data points. Mathematically, a moving average is a type of convolution in which the mean is taken over an equal number of data points on either side of a central value. This ensures that variations in the mean are aligned with variations in the data rather than being shifted in time. A central moving average can therefore be computed using data equally spaced on either side of the point in the series where the mean is calculated, which requires an odd number of data points.
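A minimal sketch of such a centred moving average, written as a convolution with a uniform kernel, is given below. The position values are illustrative, and the endpoint handling (points left unsmoothed where a full window is unavailable) is an assumption, since the paper does not state how boundaries were treated.

```python
import numpy as np

def central_moving_average(x: np.ndarray, points: int) -> np.ndarray:
    """Centred moving average as a convolution with a uniform kernel (odd window).
    Endpoints are left unchanged where a full window is not available."""
    if points % 2 == 0:
        raise ValueError("use an odd number of points so the window is centred")
    kernel = np.ones(points) / points
    half = points // 2
    smoothed = x.astype(float).copy()
    smoothed[half:-half] = np.convolve(x, kernel, mode="valid")
    return smoothed

# Longitudinal positions of one vehicle at a 0.5 s resolution (illustrative values).
position = np.array([0.0, 2.1, 3.9, 6.4, 7.8, 10.3, 11.9, 14.2, 16.0, 18.1])
for n in (3, 5, 7):
    print(n, central_moving_average(position, n))
```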
In the present study, 3-point, 5-point and 7-point moving averages were applied to smoothen the data: the mean of the selected subset of points (3, 5 or 7) was reported as the central value. The whole trajectory data was smoothened in this way and compared with the extracted trajectory data to test the improvement in internal consistency. Since smoothening is a micro-level operation, acceleration rates and instantaneous velocities were calculated from the smoothened data to check the effectiveness of smoothening. Cumulative probability functions were also considered, along with time-space plots of random vehicles, to check the effectiveness of each smoothening technique. On this basis, time-space plots were prepared for different random vehicles, as shown in Figures 6(a), 6(b) and 6(c), and compared with the extracted data. From the time-space plots, it was observed that the data is smoothened better as the degree of the moving average increases. However, with increasing degree there is a risk that fluctuations genuinely observed in the field will be obliterated. Considering this limitation of the moving average method, other smoothening techniques were also applied in the study.


Figure 6(a) Time-space plot of a vehicle: extracted data versus 3-point moving average

Figure 6(b) Time-space plot of a vehicle: extracted data versus 5-point moving average


Figure 6(c) Time-space plot of a vehicle: extracted data versus 7-point moving average

5.2 Local Weighted Regression

The locally weighted regression approach for smoothening vehicular trajectory data was proposed by Toledo et al. (2007). In the present study, two local regression methods, lowess and loess, were applied to the data of each individual vehicle. Both use locally weighted regression to smoothen the data and differ in the model used in the regression: lowess uses a linear polynomial function, while loess uses a quadratic polynomial function. The names lowess and loess are derived from the term "locally weighted scatter plot smooth", as both methods smooth the data by means of locally weighted regression. The smoothing process is local in nature, similar to the moving average method: each smoothened value is determined by the neighbouring data points defined within the span. In addition to the regression weight function, robust weight functions can be used, which makes the process resistant to outliers. The regression weight for each data point in the span is calculated with a tricube function, as shown in Equation (1).

w_i = ( 1 − | (x − x_i) / d(x) |^3 )^3        (1)

where x is the predictor value associated with the response value to be smoothed, x_i are the nearest neighbours of x within the span, and d(x) is the distance along the abscissa from x to the most distant predictor value within the span.
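A minimal sketch of a lowess-style fit using this tricube weight is shown below; the span size, time values and noise model are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def tricube(u):
    """Tricube weight of Equation (1); zero beyond the span edge."""
    u = np.clip(np.abs(u), 0.0, 1.0)
    return (1.0 - u ** 3) ** 3

def lowess_point(t, y, t0, span=7):
    """Smoothed value at t0 from a locally weighted *linear* fit (lowess-style);
    using a quadratic design matrix instead would give the loess variant."""
    idx = np.argsort(np.abs(t - t0))[:span]            # nearest neighbours in the span
    ti, yi = t[idx], y[idx]
    d = np.abs(ti - t0).max()                          # distance to the farthest neighbour
    w = tricube((ti - t0) / d) if d > 0 else np.ones_like(ti)
    X = np.column_stack([np.ones_like(ti), ti - t0])   # linear polynomial in (t - t0)
    sw = np.sqrt(w)                                    # weighted least squares via sqrt weights
    beta, *_ = np.linalg.lstsq(X * sw[:, None], yi * sw, rcond=None)
    return beta[0]                                     # fitted value at t0

# Illustrative noisy positions of one vehicle at 0.5 s resolution.
t = np.arange(0.0, 10.0, 0.5)
y = 3.0 * t + np.random.normal(scale=0.4, size=t.size)
smoothed = np.array([lowess_point(t, y, t0) for t0 in t])
```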


The data point being smoothed receives the largest weight and has the most influence on the fit, and the smoothed values are given by the weighted regression functions. The trajectory data was smoothened using both lowess and loess. From the preliminary analysis, it was found that data smoothened by loess tends to correlate strongly with the extracted data because of the quadratic function, so it cannot be considered an effective smoothening technique, whereas lowess tends to smoothen the data points more effectively than loess. To check the effectiveness of this method, time-space plots were prepared for individual vehicles, as shown in Figures 7(a) and 7(b). From these plots, it was observed that the results from both lowess and loess correlate closely with the extracted data and fail to nullify the errors effectively.

Figure 7(a) Time-space plot of a vehicle: extracted data versus lowess

Figure 7(b) Time-space plot of a vehicle: extracted data versus loess


5.3 Savitzky-Golay Filtering Method

Considering the limitations of the moving average and local weighted regression methods, the Savitzky-Golay filter (Savitzky and Golay, 1964) was also applied to smoothen the data. This method applies a smoothing filter to a set of data points to nullify errors in the data set by fitting successive subsets of data points with a low-degree polynomial using linear least squares, in a process known as convolution. When the data points are equally spaced, an analytical solution to the least-squares equations can be found in the form of a single set of "convolution coefficients" that can be applied to all data subsets to estimate the smoothed value (or derivatives of the vehicle position) at the central point of each subset. In the present study, a generalized moving window of 5 points, with filter coefficients determined by unweighted linear least squares regression and a second-degree polynomial, was adopted to smoothen the data. Time-space plots were again prepared for selected random vehicles to check the effectiveness of the smoothening. From the plots, it was observed that although the Savitzky-Golay filter correlates well with the extracted data, compared with the moving average method it fails to nullify the errors introduced in the trajectory data during extraction, as can be seen in Figure 8.
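A minimal sketch with SciPy's savgol_filter, using the same 5-point window and second-degree polynomial, is shown below; the position values are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative longitudinal positions of one vehicle at 0.5 s resolution (metres).
position = np.array([0.0, 2.1, 3.9, 6.4, 7.8, 10.3, 11.9, 14.2, 16.0, 18.1])

# 5-point window with a second-degree polynomial, as adopted in the study.
smoothed = savgol_filter(position, window_length=5, polyorder=2)

# The same filter can return derivatives directly, e.g. speed with delta = 0.5 s.
speed = savgol_filter(position, window_length=5, polyorder=2, deriv=1, delta=0.5)
```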

Figure 8 Time-space plot of a vehicle: extracted data versus Savitzky-Golay smoothening

6. QUANTIFYING SMOOTHENING BEHAVIOUR

Quantifying the smoothening behavior of the different techniques is one of the most challenging tasks in this research. From the literature, it was inferred that only a few studies have worked in this direction with vehicle trajectory data, and little information was found on quantifying trajectory smoothening. From the basic concept of smoothening, the smoothened output should nullify the errors and increase internal consistency, and therefore should not simply reproduce the extracted data; at the same time, it should not deviate much from the field data. In the present study, a dedicated methodology was adopted to quantify the effectiveness of the smoothened data with the help of time-space plots of random vehicles, instantaneous velocities and instantaneous acceleration rates.

6.1 Visual Inspection of Time-Space Plots

After applying the smoothening techniques, time-space plots were prepared for different random vehicles to test the effectiveness of the techniques on the trajectory data sets. From the results, it was found that the moving average method, at its different degrees, is the most suitable candidate for increasing the internal consistency of the vehicular trajectories. This is because the errors are averaged out and a smoothened curve is obtained that still incorporates fluctuations in terms of both positive and negative deviations, as seen in Figures 6(a), 6(b) and 6(c). In the case of local weighted regression, the smoothened data showed strong correlation with the extracted data, so the method is not effective in nullifying the errors: the smoothened line incorporates the error-prone points, as shown in Figures 7(a) and 7(b). A similar conclusion was drawn for the Savitzky-Golay smoothening method, as depicted in Figure 8.

6.2 Instantaneous Velocities

Instantaneous velocity is taken as the first derivative of the vehicle position. From the preliminary analysis, it was found that extraction errors result in unrealistically high instantaneous velocities, so instantaneous velocity was used as a performance measure for checking the effectiveness of the smoothening techniques considered in this study. For this purpose, instantaneous velocities were calculated from the trajectory data of every vehicle. Based on the data sets obtained from the different smoothening techniques, cumulative plots of instantaneous velocity were prepared for all vehicles in each data set. For illustration, the instantaneous velocity plots for different traffic flow conditions are shown in Figures 9, 10 and 11; a similar analysis was carried out for the trajectory data of the other road sections. The results show that the unrealistic velocities are removed by the application of the smoothening techniques. From Figures 9, 10 and 11 it can also be observed that there is no significant variation among the data sets produced by the smoothening techniques, except for loess and Savitzky-Golay filtering.
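A minimal sketch of how these instantaneous quantities and their cumulative distributions can be computed with finite differences is given below; the position values are illustrative.

```python
import numpy as np

dt = 0.5   # extraction time resolution in seconds (1.0 s for heavy flow)

# Smoothened longitudinal positions of one vehicle (illustrative values, metres).
position = np.array([0.0, 2.0, 4.1, 6.3, 8.4, 10.6, 12.9, 15.1, 17.4, 19.8])

velocity = np.gradient(position, dt)        # first derivative of position (m/s)
acceleration = np.gradient(velocity, dt)    # second derivative of position (m/s^2)

# Cumulative distribution of instantaneous velocities, as plotted in Figures 9-11.
v_sorted = np.sort(velocity)
cum_prob = np.arange(1, v_sorted.size + 1) / v_sorted.size
```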


Figure 9 Cumulative plots of velocities at free flow conditions on Mumbai section

Figure 10 Cumulative plots of velocities at Medium flow conditions on Mumbai section

Figure 11 Cumulative plots of velocities at congested flow conditions on Mumbai section


6.3 Instantaneous Acceleration Rates

Instantaneous acceleration is taken as the second derivative of the vehicle position, i.e. the first derivative of velocity. From the preliminary analysis, unrealistically high instantaneous acceleration rates were observed because of errors in the data sets, so instantaneous acceleration was used as a further performance measure for evaluating the effectiveness of the smoothening techniques. Instantaneous acceleration rates were calculated for the data sets obtained from the different smoothening techniques. As with the instantaneous velocities, they were evaluated for every vehicle in the outputs. The data was then analyzed for different percentile ranges, and the acceleration values at the 20th and 80th percentiles were used to explain the effectiveness of the smoothening operations, as reported in Table 3. In a similar way, cumulative plots were prepared for the selected study sections; those for instantaneous acceleration on the Mumbai section are presented in Figures 12, 13 and 14. Across the study sections, clear variation was found among the data sets, which provides a useful basis for evaluating the smoothening techniques.
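The percentile comparison of Table 3 can be reproduced with a few lines; the dictionary keys and the synthetic acceleration samples below are illustrative only, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Instantaneous acceleration rates (m/s^2) pooled over all vehicles for a few of the
# data sets; the keys and the synthetic values are illustrative only.
acc = {
    "extracted": rng.normal(0.0, 4.0, 5000),
    "MA-5pt":    rng.normal(0.0, 1.2, 5000),
    "sgolay":    rng.normal(0.0, 1.3, 5000),
}

for name, a in acc.items():
    p20, p80 = np.percentile(a, [20, 80])
    print(f"{name:10s} 20th percentile: {p20:6.2f}   80th percentile: {p80:6.2f}")
```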

Table 3 Variation of acceleration rates (m/s²) for different smoothening techniques at different percentile ranges on the study sections

Sr.  Section                       Percentile  Extracted  MA-3pt  MA-5pt  MA-7pt  Lowess  Loess  Sgolay
no.
1    Delhi-Gurgaon multi-lane      20          -10.01     -2.60   -2.45   -2.15   -9.85   -2.75  -2.58
     urban road                    80            8.65      2.55    2.40    2.18    8.70    2.36   2.49
2    Mumbai western expressway     20           -9.80     -2.10   -1.90   -1.60   -9.70   -2.20  -2.10
     (low flow conditions)         80            7.52      2.00    1.90    1.60   -7.80    1.95   2.10
     Mumbai western expressway     20           -6.98     -1.96   -1.85   -1.80   -6.95   -2.00  -1.98
     (medium flow conditions)      80            6.90      1.80    1.85    1.80    6.98    2.10   1.95
     Mumbai western expressway     20           -0.80     -0.35   -0.12   -0.10   -4.20   -0.35  -0.40
     (stop and go conditions)      80            0.83      0.36    0.12    0.10    4.25    0.35   0.40
3    Pune-Mumbai expressway        20           -5.01     -2.15   -2.10   -1.98   -4.95   -2.20  -2.15
     section                       80            4.98      2.12    2.08    2.00    4.80    2.21   2.16
4    Ahmedabad-Vadodara            20           -4.95     -1.95   -1.93   -1.90   -4.72   -2.00  -1.95
     expressway                    80            4.85      1.96    1.92    1.89    4.65    1.98   1.95


Figure 12 Cumulative plots of instant acceleration rates on Mumbai section at free flow conditions

Figure 13 Cumulative plots of instant acceleration rates on Mumbai section at medium flow conditions

Figure 14 Cumulative plots of instant acceleration rates on Mumbai section at congested flow conditions


After applying the three smoothening techniques, it was found that the moving average method is the most suitable candidate for increasing the internal consistency of the vehicular trajectories with minimum effort. This is because the errors are averaged out and a smoothened curve is obtained that incorporates fluctuations in terms of both positive and negative deviations. In the case of local weighted regression, the smoothened data correlates closely with the extracted data, as seen in the time-space plots, and it is not able to reduce the data errors in the instantaneous acceleration plots; the smoothened line incorporates the error-prone points. A similar conclusion was drawn for the Savitzky-Golay smoothening method. Therefore, in the present research, the vehicular trajectory data obtained after employing the moving average method is used for further analysis.

7. SWOT ANALYSIS

Based on the performance of the smoothening techniques in the present study, a SWOT analysis (strengths, weaknesses, opportunities and threats) was performed on the results and on their ability to nullify the errors in the data sets, as reported in Table 4. This is helpful for identifying the most suitable smoothening technique for reducing the errors in trajectory data sets, which in turn supports the study of micro-level vehicular behavior.

Table 4 SWOT analysis of smoothening techniques for reducing the internal inconsistency of vehicular trajectory data

1. Moving average method
   Strengths: Nullifies errors and increases internal consistency using a simple moving average of adjacent data points.
   Weaknesses: Treats fluctuations observed in field conditions as errors and smoothens them out.
   Opportunities: Useful when field fluctuations are few in number; able to smooth data where errors are high in number.
   Threats: Risk of eliminating fluctuations genuinely observed in field conditions.

2. Local weighted regression
   Strengths: Assigns weights to data points with a tricube function based on their deviation from the local mean.
   Weaknesses: Higher-degree polynomial functions risk correlating strongly with the extracted data.
   Opportunities: Able to handle fluctuations of the field conditions because of the polynomial functions.
   Threats: Because of the weighting factors, it was unable to nullify the errors in the present study.

3. Savitzky-Golay filtering
   Strengths: Evaluates a convolution matrix from the convolution coefficients and smoothens the data.
   Weaknesses: Similar to the local regression method, risk of strong correlation with the extracted data.
   Opportunities: Maintains internal consistency without changing the sense of the data.
   Threats: The percentage reduction in errors is very small compared with the other methods.


8. APPLICATION OF TRAJECTORY DATA

Based on visual inspection of the time-space plots from the study sections, random leader-follower vehicle pairs were identified. For these pairs, relative distance was plotted against relative velocity; from these plots, even the hysteresis phenomenon between leader-follower pairs can be detected, which is only observable in accurate trajectory data sets, as shown in Figure 15.
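A minimal sketch of how such a plot can be built from the smoothened trajectories of one pair is given below; the positions and time resolution are illustrative assumptions.

```python
import numpy as np

dt = 0.5
# Smoothened longitudinal positions (m) of a leader-follower pair (illustrative).
leader   = np.array([30.0, 33.2, 36.1, 38.6, 40.8, 43.1, 45.7, 48.6, 51.8, 55.2])
follower = np.array([10.0, 13.4, 16.9, 20.1, 22.8, 25.1, 27.2, 29.5, 32.2, 35.4])

relative_distance = leader - follower                                     # spacing (m)
relative_velocity = np.gradient(leader, dt) - np.gradient(follower, dt)   # m/s

# Plotting relative_distance against relative_velocity traces the loop-like
# hysteresis pattern of Figure 15 when deceleration and acceleration phases differ.
```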

Figure 15 Hysteresis plots among the observed leader follower vehicular pairs

Similarly, as a further illustration, the variation in lateral behavior, i.e. the lateral gaps of vehicles, is analyzed as a function of the speed of adjacent vehicles using the vehicle trajectory data. The lateral clearance gaps were clustered into 5 km/h speed intervals, and the minimum and average lateral clearances of each cluster were correlated with the speed of the adjacent vehicles. For this purpose, a linear regression equation was fitted to the data, as shown in Figure 16, and the lateral clearance was evaluated over the speed ranges of the vehicles.
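A minimal sketch of such a linear fit is shown below; the binned clearance values are illustrative only, not the study's measurements.

```python
import numpy as np

# Mean lateral clearance per 5 km/h speed bin; all values are illustrative only.
speed_bin_mid = np.array([2.5, 7.5, 12.5, 17.5, 22.5, 27.5, 32.5])     # km/h
mean_clearance = np.array([0.42, 0.55, 0.63, 0.74, 0.88, 0.95, 1.10])  # m

# Linear regression of lateral clearance on adjacent-vehicle speed, as in Figure 16.
slope, intercept = np.polyfit(speed_bin_mid, mean_clearance, deg=1)
predicted = slope * speed_bin_mid + intercept
ss_res = np.sum((mean_clearance - predicted) ** 2)
ss_tot = np.sum((mean_clearance - mean_clearance.mean()) ** 2)
print(f"clearance = {slope:.3f} * speed + {intercept:.3f},  R^2 = {1 - ss_res / ss_tot:.3f}")
```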


Figure 16 Variation of lateral clearance with speed

9. SUMMARY AND CONCLUSIONS

Studies based on vehicle trajectories are still few; however, for a researcher who wants to study actual vehicular behavior, trajectory data is an invaluable input. In this study, the authors developed vehicular trajectories for the heterogeneous traffic conditions prevailing on multilane roads in India, at sections located in Delhi, Mumbai, and on the Pune-Mumbai and Ahmedabad-Vadodara Expressways. After analyzing the trajectory data, it was found to be prone to internal errors in terms of speed and acceleration rates. These errors are mainly due to human mistakes while tracking vehicles, which are quite common when extracting such large datasets. To resolve these issues, three smoothening techniques were applied to obtain robust data sets: the moving average method, local weighted regression and the Savitzky-Golay filtering technique. Few studies have quantified thresholds for evaluating the effectiveness of smoothening techniques, so a distinct methodology was adopted in the present study to identify the best data among the smoothened outputs, which may be considered a new approach to handling vehicular trajectory smoothening operations. Based on this vehicular trajectory data, different micro-level and macro-level traffic parameters can be studied without compromising accuracy. Detailed modelling of vehicle-following and lane-changing behavior is within the future scope of this study.


REFERENCES
Bharadwaj, N., Kumar, P., Arkatkar, S., Maurya, A., Joshi, G. (2015) Traffic data analysis using image processing technique on Delhi-Gurgaon Expressway. Current Science, 110(5).
Brackstone, M., McDonald, M. (1999) Car-following: a historical review. Transportation Research Part F: Traffic Psychology and Behaviour, 2(4), 181-196.
FHWA, U.S. Department of Transportation. NGSIM - Next Generation SIMulation. http://ops.fhwa.dot.gov/trafficanalysistools/ngsim.htm. Accessed July 30, 2016.
Guo, D., Shufan, L., Jin, H. (2010) A graph-based approach to vehicle trajectory analysis. Journal of Location Based Services, 4(3), 183-199.
Houenou, A., Philippe, B. (2013) Vehicle trajectory prediction based on motion model and maneuver recognition. Intelligent Robots and … (61161130528), 4363-4369.
Kanagaraj, V., Srinivasan, K.K., Sivanandan, R. (2010) Modeling vehicular merging behavior under heterogeneous traffic conditions. Transportation Research Record, No. 2188, TRB, Washington, D.C., 140-147.
Kanagaraj, V., Gowri, A., Tomer, T., Tzu-Chang, L. (2015) Trajectory data and flow characteristics of mixed traffic. Transportation Research Record: Journal of the Transportation Research Board, 2491, 1-11.
Liu, S. (2012) Calibrating large scale vehicle trajectory data. Proceedings of the 2012 IEEE 13th International Conference on Mobile Data Management (MDM 2012), 222-231.
Oh, C., Taejin, K. (2010) Estimation of rear-end crash potential using vehicle trajectory data. Accident Analysis and Prevention, 42(6), 1888-1893.
Raju, N., Chepuri, A., Kumar, P., Arkatkar, S., Joshi, G. (2017) Calibration of vehicle following models using trajectory data under heterogeneous traffic conditions. 96th Annual Meeting of the Transportation Research Board, Washington, D.C.
Roland (2004) Understanding institutional change: fast-moving and slow-moving institutions. Studies in Comparative International Development, 38(4), 109-131.
Said, S.E., Dickey, D.A. (1984) Testing for unit roots in autoregressive-moving average models of unknown order. Biometrika, 71(3), 599-607.
Savitzky, A., Golay, M.J.E. (1964) Smoothing and differentiation of data by simplified least squares procedures. Analytical Chemistry, 36(8), 1627-1639.
Taylor, J., Xuesong, Z., Nagui, M., Richard, J. (2015) Method for investigating intradriver heterogeneity using vehicle trajectory data: a dynamic time warping approach. Transportation Research Part B: Methodological, 73, 59-80.
Toledo, T., Haris, K., Kazi, A. (2007) Estimation of vehicle trajectories with locally weighted regression. Transportation Research Record: Journal of the Transportation Research Board, 1999, 161-169.
Toledo, T. (2007) Driving behaviour: models and challenges. Transport Reviews, 27(1), 65-84.
User Reference Manual for Traffic Data Extractor. IIT Bombay, 2013.
Xu, F. et al. (2011) Utilizing shared vehicle trajectories for data forwarding in vehicular networks. Proceedings of IEEE INFOCOM, 441-445.
Zheng, Y. (2015) Trajectory data mining: an overview. ACM Transactions on Intelligent Systems and Technology, 6(3), 1-41.
