Tao Li
    The advent of connected and autonomous vehicles (CAVs) will change driving behavior and the travel environment, providing opportunities for safer, smoother, and smarter road transportation. During the transition from today's human-driven vehicles (HDVs) to a fully CAV traffic environment, road traffic will consist of a "mixed" flow of HDVs and CAVs. Equipped with multiple sensors and vehicle-to-vehicle communications, a CAV can track the trajectories of other CAVs in its vicinity and, ideally, of all CAVs within communication range. Such CAV trajectory data can be combined with advances in computing and machine learning algorithms to predict the trajectories of HDVs. Based on these predictions, CAVs can react accordingly to avoid or mitigate traffic flow oscillations and accidents. Most previous studies generate trajectory predictions without any evaluation of uncertainty, i.e., they ignore the distribution of the prediction. In this study, we propose a novel framework that models the uncertainty in a CAV's prediction of the leading HDV's trajectory by integrating deep learning models with the Kernel Density Estimation (KDE) method. The distribution of the HDV's future trajectory can then be estimated, which is critical for developing an optimal CAV control mechanism. Deep learning models have shown powerful prediction performance in many applications. Empowered by large-scale neural networks, carefully designed architectures, and novel training algorithms, deep learning-based methods have been shown to significantly outperform conventional vehicle trajectory prediction methods in accuracy. However, deep learning models usually require substantial effort to tune the hyperparameters of the model structure (number of layers, size of each layer) and of the optimization method (learning rate and mini-batch size in stochastic gradient descent). They are also prone to overfitting and to getting stuck in local optima. According to a recent study, the best-performing model is not necessarily the one trained with the optimal hyperparameters. Our framework trains deep learning models with various hyperparameters on a parallel computing grid of many nodes to generate trajectory prediction samples, and then applies KDE to estimate the prediction distribution and thereby model the uncertainty. Unlike traditional uncertainty modeling methods, KDE does not require a large number of samples, so it is more efficient because fewer deep learning training runs are needed; furthermore, KDE does not rely on prior knowledge of, or assumptions about, the underlying distribution. This property makes KDE a good fit for the framework because, given the nature of deep learning, such a distribution of predictions is very hard or nearly impossible to derive analytically. The framework is tested on the NGSIM dataset.
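    To illustrate the sampling-plus-KDE idea described above, the following is a minimal sketch, not the authors' implementation: it assumes synthetic ensemble outputs in place of the deep learning models trained with different hyperparameters, and shows how a kernel density estimate over the resulting prediction samples yields a distribution and summary statistics a CAV controller could use.

    # Minimal sketch of the uncertainty-estimation step (illustrative only).
    # The ensemble outputs are faked with random draws; in the framework they
    # would come from deep learning models trained with varied hyperparameters
    # on a parallel computing grid.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Hypothetical ensemble: each entry is the leading HDV's predicted
    # longitudinal position (metres) at a fixed horizon from one model.
    prediction_samples = rng.normal(loc=42.0, scale=1.5, size=50)

    # Fit a Gaussian KDE to the samples; no assumption is needed about the
    # family of the underlying prediction distribution.
    kde = gaussian_kde(prediction_samples)

    # Evaluate the estimated density on a grid and derive quantities a CAV
    # control mechanism could consume (a point estimate plus an interval).
    grid = np.linspace(prediction_samples.min() - 3.0,
                       prediction_samples.max() + 3.0, 500)
    density = kde(grid)
    point_estimate = grid[np.argmax(density)]              # mode of the KDE
    lower, upper = np.percentile(prediction_samples, [2.5, 97.5])

    print(f"most likely position: {point_estimate:.2f} m")
    print(f"95% sample interval: [{lower:.2f}, {upper:.2f}] m")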