
A

Major Project Report

On

Road Traffic Vehicle Detection and Tracking using Deep Learning with
Custom-Collected and Public Datasets
Submitted in partial fulfilment of the requirement for the award of degree of

BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING
Submitted by

E. Sai Charan (19K81A0414)
I. Premsai (19K81A0428)
G. Raksheth (19K81A0422)
R. Karthik (19K81A0450)

Under the Guidance of


Gadi Sanjeev
Assistant Professor
Department of Electronics and Communication Engineering

St. MARTIN'S ENGINEERING COLLEGE


An Autonomous Institute
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in
NOVEMBER-2022
St. MARTIN'S ENGINEERING COLLEGE
An Autonomous Institute
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

CERTIFICATE

This is to certify that the project entitled “Road Traffic Vehicle Detection and Tracking using Deep Learning with Custom-Collected and Public Datasets” is being submitted by E. Sai Charan (19K81A0414), I. Premsai (19K81A0428), G. Raksheth (19K81A0422) and R. Karthik (19K81A0450) in fulfilment of the requirement for the award of the degree of BACHELOR OF TECHNOLOGY IN ELECTRONICS AND COMMUNICATION ENGINEERING, and is a record of bonafide work carried out by them. The results embodied in this report have been verified and found satisfactory.

Guide: Gadi Sanjeev, Assistant Professor
Head of the Department: Dr. B. Hari Krishna, Professor

Internal Examiner                                        External Examiner


ACKNOWLEDGEMENT

The satisfaction and euphoria that accompany the successful completion of any
task would be incomplete without mentioning the people who made it possible and
whose encouragement and guidance have crowned our efforts with success.

We extend our deep sense of gratitude to our Principal, Dr. P. SANTOSH KUMAR
PATRA, St. Martin’s Engineering College, Dhulapally, for permitting us to
undertake this project.

We are also thankful to Dr. B. HARI KRISHNA, Professor and Head of the
Department, Electronics and Communication Engineering, St. Martin’s Engineering
College, Dhulapally, Secunderabad, for his support and guidance throughout the
project, as well as to the Project Coordinator, Mr. VENKANNA MOOD, Professor,
Electronics and Communication Engineering Department, for his valuable support.

We would like to express our sincere gratitude and indebtedness to our project
supervisor, GADI SANJEEV, Assistant Professor, Department of Electronics and
Communication Engineering, St. Martin’s Engineering College, Dhulapally, for his
support and guidance throughout our project.

Finally, we express our thanks to all those who have helped us in successfully
completing this project. Furthermore, we would like to thank our family and friends
for their moral support and encouragement.

E. Sai Charan 19K81A0414

I. Premsai 19K81A0428

G. Raksheth 19K81A0422

R. Karthik 19K81A0450

St. MARTIN'S ENGINEERING COLLEGE


An Autonomous Institute
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

DECLARATION

We, the students of ‘Bachelor of Technology in the Department of Electronics and Communication Engineering’, session 2019 - 2023, St. Martin’s Engineering College, Dhulapally, Kompally, Secunderabad, hereby declare that the work presented in this Major Project Work entitled “Road Traffic Vehicle Detection and Tracking using Deep Learning with Custom-Collected and Public Datasets” is the outcome of our own bonafide work and is correct to the best of our knowledge, and this work has been undertaken taking care of Engineering Ethics. The results embodied in this project report have not been submitted to any other university for the award of any degree.

E. Sai Charan 19K81A0414

I. Premsai 19K81A0428

G. Raksheth 19K81A0422

R. Karthik 19K81A0450

CONTENTS

CHAPTER NO. TITLE PAGE NO.

Certificate

Acknowledgment i

Declaration ii

List Of Figures v

List Of Acronyms vi

Abstract vii

CHAPTER 1 INTRODUCTION 1

1.1 Overview 1

1.2 Problem statement 1

CHAPTER 2 LITERATURE SURVEY 3-5

CHAPTER 3 EXISTING SYSTEM 6-7

CHAPTER 4 PROPOSED SYSTEM 8


4.1 Dataset 9
4.2 Preprocessing and Frames 9
4.3 YOLO V4 10
4.4 Deep-sort 13
CHAPTER 5 MACHINE LEARNING 15
5.1 What is Machine Learning 15
5.2 Categories of ML 15
5.3 Need for ML 16
5.4 Challenges in ML 16
5.5 Applications of ML 17
5.6 How to start learning ML 17

5.7 How to start learning ML? 18
5.7.1 Terminologies in ML 19
5.7.2 Types of ML 19
5.8 Advantages of ML 20
5.9 Disadvantages of ML 21

CHAPTER 6 SOFTWARE ENVIRONMENT 22


6.1 What is Python? 22
6.2 Advantages of Python 22
6.3 Disadvantages of Python 25
6.4 History of Python 26
6.5 Python Development Steps 27
6.6 Purpose 27
6.7 Python 28
6.8 Modules Used in Project 28
6.9 Installation of Python 33

CHAPTER 7 RESULTS AND DISCUSSION 38


7.1 Modules 38
7.2 Results 38

CHAPTER 8 CONCLUSION 40-41

REFERENCES 42-43

APPENDIX CODE 44-52

LIST OF FIGURES

Figure No. Figure Name Page No.

4.1 Block diagram of proposed method 8

4.2 Methodology of proposed system 11

4.3 Structure of YOLOV4 Algorithm 11

4.4 Detection of cars using YOLOV4 12

4.5 Deep-Sort algorithm for multi-object tracking 13

7.1 Detection and tracking of cars and trucks using YOLOV4 with DeepSort algorithm 38

7.2 Tracking of vehicles using frames per second and application till the end of the video 39

LIST OF ACRONYMS AND DEFINITIONS

S.NO ACRONYM DEFINITION

1 YOLO You Only Look Once

2 ML Machine Learning

3 Deep SORT Simple Online and Realtime Tracking with a Deep Association Metric

4 COCO Common Objects in Context

5 CNNs Convolutional Neural Networks

6 KSA Kingdom of Saudi Arabia

7 DLM Dynamic Language Model

8 LDA Latent Dirichlet Allocation

9 MOT Multiple Object Tracking

10 IoU Intersection over Union

ABSTRACT

Intelligent vehicle detection and counting are becoming increasingly important in the field of
highway management. However, due to the different sizes of vehicles, their detection remains
a challenge that directly affects the accuracy of vehicle counts. To address this issue, this paper
proposes a vision-based vehicle detection and counting system. A new high-definition highway
vehicle dataset with a total of 57,290 annotated instances in 11,129 images is published in this
study. Compared with the existing public datasets, the proposed dataset contains annotated tiny
objects in the image, which provides the complete data foundation for vehicle detection based
on deep learning.

The use of deep convolutional networks (CNNs) has achieved amazing success in the
field of vehicle object detection. CNNs have a strong ability to learn image features and can
perform multiple related tasks, such as classification and bounding box regression. The
detection method can be generally divided into two categories. The two-stage method generates
a candidate box of the object via various algorithms and then classifies the object by a
convolutional neural network. The one-stage method does not generate a candidate box but
directly converts the positioning problem of the object bounding box into a regression problem
for processing.

Therefore, this work focuses on the implementation of a You Only Look Once (YOLOv4)
based DeepSORT model for real-time vehicle detection and tracking from video sequences.
The deep learning based Simple Online and Realtime Tracking (DeepSORT) algorithm is added
to track the actual presence of vehicles in the video frames predicted by YOLOv4, so that
false predictions made by YOLOv4 can be avoided. The video is converted into multiple frames
and given as input to YOLOv4 for vehicle detection. Each detected vehicle frame is then
further analysed by the DeepSORT algorithm to track the vehicle; if a vehicle is tracked,
DeepSORT draws a bounding box around it and increments the tracking count. The proposed
model is trained with three different datasets, namely the COCO, Berkeley and Dash Cam
datasets, where the Dash Cam dataset is the custom-collected dataset.

CHAPTER 1

INTRODUCTION

1.1 Overview

Vehicle detection and tracking is a common problem with multiple use cases. Government
authorities and private establishments might want to understand the traffic flowing through a
place to better develop its infrastructure for the ease and convenience of everyone. A
road-widening project, timing the traffic signals and construction of parking spaces are a few
examples where analysing the traffic is integral to the project. Traditionally, identification and
tracking have been carried out manually: a person stands at a point and notes the count of the
vehicles and their types. Recently, sensors have been put into use, but they only solve the
counting problem; sensors are not able to detect the type of vehicle.

The economic growth of any nation depends fundamentally on well-planned and
resilient transportation systems based on spatial information. Nevertheless, most cities around
the world are still facing a rampant increase in traffic volume and complications in traffic
management, resulting in poor quality of life in modern cities. However, recent advancements
in internet bandwidth, artificial intelligence, and sensing technologies have minimized these
difficulties by collaboratively bringing forward location intelligence for public safety.
Automation of location intelligence in road environments using sensing technologies allows
authorities to achieve resilience in road safety, controlled commutes, and assessments of road
conditions.

1.2 Problem statement

Several state-of-the-art deep learning models for near real-time multi-object
classification belonging to the You Only Look Once (YOLO) family (two versions of improved
YOLOv3 and two versions of YOLOv5) were trained. The models were ensembled so that
they tackle several of the existing challenges of real-time object detection, recognition, and
classification. The challenges tackled by these YOLO models include the absence of an
integrated anchor box selection process, time-consuming space-to-depth conversion, the gradient
descent problem, weak feature propagation, a large number of network parameters, and
problems in generalizing to objects of different sizes and scales. Object detection based on
deep learning is one of the most widely used and challenging tasks in the field of computer
vision. It is applied in many fields such as medical imaging, unmanned vehicles, security
systems and robotics research. The object detection task is to distinguish the target object in
the image from the background information. It usually consists of two parts: locating and
marking the bounding box of the object to be detected in the image, and classifying the target
within the bounding box. One of the most common methods for object detection is YOLO.
When the algorithm was introduced, it outperformed other detection methods in speed and
accuracy, and it has since been continuously improved. YOLOv4 achieved high performance
compared with state-of-the-art object detection methods. Naturally, such performance
encouraged researchers to exploit its potential in transportation.

Therefore, this work implements YOLOv4 for object detection and DeepSORT for tracking
the detected vehicles. Three different variations of the deep learning models are used and their
performance is compared: a pre-trained model with the COCO dataset, and two custom-trained
models with different datasets. The three datasets are the COCO dataset, the Berkeley
DeepDrive dataset, and our custom-developed dataset obtained from a dash cam installed on
board a vehicle driven on city streets and highways in the Kingdom of Saudi Arabia (KSA).

CHAPTER 2

LITERATURE SURVEY
Wang et al. [1] proposed an algorithm to detect abnormalities in vehicle behaviour, such as
stalled cars and cars speeding up or slowing down. They used the YOLO algorithm for detection
and a Kalman filter for tracking, and tested their framework on videos from traffic cameras.
The combination of YOLO and the Kalman filter is applicable, despite some scenarios where
further contextual knowledge is needed to improve the detection results.

Kumar et al. [2] trained a sentiment classification model to detect negative sentiment about
road hazards from Twitter. The data is collected using search filtering with specific terms that
relate to traffic. Then, naive Bayes, k-nearest-neighbour and the dynamic language model
(DLM) are used to build models that classify the tweets into hazard and non-hazard categories.

Song et al. [3] considered small vehicles on highways in their proposed detection and counting
system. They published a new high-definition dataset containing annotations of small objects,
and developed a segmentation method to extract and divide roads into remote and proximal
areas. They used YOLOv3 as their detection method and the ORB algorithm as a feature
extractor, and analysed the trajectories of detected objects for counting purposes. Their
proposed method provides good performance and can replace traditional ways of counting
vehicles without any new hardware equipment.

Tejaswin et al. [4] also used a random forest classifier to predict traffic incidents. The traffic
incidents are clustered and predicted using spatio-temporal data from Twitter. The location
information is extracted using NLP and background knowledge obtained through the Freebase API,
a community-curated structured database containing a large number of entities, each defined
by multiple properties and attributes, which helps in entity disambiguation.

Suma et al. [5] built a classification model using logistic regression with stochastic gradient
descent to detect events related to road traffic from English tweets using Apache Spark. They
used the latent Dirichlet allocation (LDA) topic modelling module to filter traffic messages. In
addition, they used the Spark MLlib library and trained classifiers using SVM, KNN and NB to
detect traffic events.

Chen et al. [6] aimed at detecting objects by generating 3D object proposals. Their proposed
work utilized stereo imagery. They based the method on minimizing an energy function which
encodes an object size prior, object placement and some contextual depth information. Then, they
used a convolutional neural network that exploits appearance, context and depth information for
object detection. The method predicts 3D bounding box coordinates and object pose. The approach
proved to be superior to previously published object detection work on the KITTI benchmark.

Sudha and Priyadarshini [7] designed an approach for detecting and tracking multiple vehicles.
Their work utilized an enhanced YOLOv3 algorithm with an improved visual background
extractor for detection. For tracking, Kalman filtering combined with a particle filter technique
was deployed. The authors tested the proposed solution on the KITTI and DETRAC benchmark
datasets.

Sang et al. [8] came up with an improved detection model based on YOLOv2, which used the
k-means++ algorithm on the training dataset to cluster vehicle bounding boxes. To reduce the loss
due to the different scales of vehicles, normalization was introduced. Moreover, repeated
convolution layers were removed to improve feature extraction.

Du et al. [9] proposed the real-time detection of vehicles and traffic lights with the YOLOv3
network, an improved version of YOLOv2, by detecting small objects with balanced speed and
precision using a new, high-quality dataset named the Vehicle and Traffic Light Dataset (V-
TLD). They used YOLOv3 to detect and classify the vehicles and used an ORB algorithm to
obtain driving directions.

Mahto et al. [10] used a fine-tuned YOLOv4 for vehicle detection using the UA-DETRAC
dataset, which was faster than previous iterations. YOLOv5, despite being produced by a
different author than its predecessors, has higher performance in terms of accuracy and speed
among the YOLO family.

Liu et al., in [11], proposed 3-D constrained multiple kernels, facilitated with Kalman filtering,
to track objects detected by a YOLOv3 network. These recent but sophisticated tracking
algorithms have improved the accuracy of object tracking, but they require heavy
computational power. Here, they propose a simple object-centroid tracking algorithm to track
the detection provided by YOLO-based DL networks in multiple lanes of the road in real time.
Furthermore, this study compares the use of two YOLO variants, YOLOv3 and YOLOv5, to
obtain a real-time vehicle tracking method that can process multiple video streams with a single
GPU, using multi-threading techniques.

Ali et al. [12] used inductive loop sensors to detect and count diverse vehicles on lane-less
roads. They developed a multiple-loop system with a new structure for inductive loop sensors.
Their solution was able to sense vehicles and classify them by type. During testing, the system
provided accurate counting of vehicles despite the heterogeneous traffic conditions.

Neupane et al. [13] proposed a multi-vehicle tracking algorithm that obtains the per-lane count,
classification, and speed of vehicles in real time. Their experiments showed that accuracy
doubled after fine-tuning (71% vs. up to 30%). Based on a comparison of four YOLO networks,
coupling the YOLOv5-large network to their tracking algorithm provided a trade-off between
overall accuracy (95% vs. up to 90%), loss (0.033 vs. up to 0.036), and model size (91.6 MB
vs. up to 120.6 MB). The implications of these results are in spatial information management
and sensing for intelligent transport planning.

Abdel-Gawad et al. [14] proposed a detection-based tracking approach for real-time tracking of
multiple vulnerable road users (VRUs) in video from an inside-vehicle camera. YOLOv4 scans every
frame to detect VRUs first, then Simple Online and Realtime Tracking with a Deep Association
Metric (Deep SORT), customized for multiple-VRU tracking, is applied. The results of their
experiments on both the Joint Attention in Autonomous Driving (JAAD) and Multiple Object
Tracking (MOT) datasets exhibit competitive performance.

Tao et al. [15] constructed a vehicle identification method for road images using an optimized
YOLO model. In this method, the last two fully connected layers are detached and an average
pooling layer is added. The simulation results show that the accuracy and precision rate are
higher than for single-object detection. The algorithm computes the intersection-over-union (IoU)
distance between each detection and every bounding box predicted from the existing targets. The
final step of the SORT algorithm either assigns detections to existing identities, creates unique
identities, or destroys them. DeepSORT adds a deep learning based appearance model, which helps
to reduce the high number of identity switches and improves the efficiency of tracking through
occlusions compared with the SORT algorithm.

CHAPTER 3

EXISTING SYSTEM
The old versions of the YOLO algorithm are the existing methods:

YOLOv3 is a real-time object detection algorithm that identifies specific objects in videos, live
feeds, or images. The YOLO machine learning algorithm uses features learned by a deep
convolutional neural network to detect an object. Versions 1-3 of YOLO were created by
Joseph Redmon and Ali Farhadi, and the third version of the YOLO machine learning
algorithm is a more accurate version of the original ML algorithm. The first version of YOLO
was created in 2016, and version 3, which is discussed extensively in this chapter, was released
two years later in 2018. YOLOv3 is an improved version of YOLO and YOLOv2. YOLO is
implemented using the Keras or OpenCV deep learning libraries. Object classification systems
are used by Artificial Intelligence (AI) programs to perceive specific objects in a class as
subjects of interest. The systems sort objects in images into groups where objects with similar
characteristics are placed together, while others are neglected unless programmed to do
otherwise.

YOLOv3 is a Convolutional Neural Network (CNN) for performing object detection in real
time. CNNs are classifier-based systems that can process input images as structured arrays of
data and recognize patterns between them. YOLO has the advantage of
being much faster than other networks and still maintains accuracy. It allows the model to look
at the whole image at test time, so its predictions are informed by the global context in the
image. YOLO and other convolutional neural network algorithms “score” regions based on
their similarities to predefined classes. High-scoring regions are noted as positive detections of
whatever class they most closely identify with. YOLO can be used to detect different kinds of
vehicles depending on which regions of the video score highly in comparison to predefined
classes of vehicles, in a live feed of traffic. The YOLOv3 algorithm first separates an image
into a grid. Each grid cell predicts some number of boundary boxes (sometimes referred to as
anchor boxes) around objects that score highly with the aforementioned predefined classes.
Each boundary box has a respective confidence score of how accurate it assumes that prediction
should be and detects only one object per bounding box. The boundary boxes are generated by
clustering the dimensions of the ground truth boxes from the original dataset to find the most
common shapes and sizes. Other comparable algorithms that can carry out the same objective
are R-CNN (Region-based Convolutional Neural Networks), Fast R-CNN (an improvement on
R-CNN), and Mask R-CNN. However, unlike systems such as R-CNN and Fast R-CNN,
YOLO is trained to do classification and bounding box regression at the same time. YOLOv2
used Darknet-19 as its backbone feature extractor, while YOLOv3 uses Darknet-53.
Darknet-53 is a backbone also made by the YOLO creators Joseph Redmon and Ali Farhadi.
Darknet-53 has 53 convolutional layers instead of the previous 19, making it more powerful
than Darknet-19 and more efficient than competing backbones (ResNet-101 or ResNet-152).
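As a rough illustration of how such prior box shapes can be derived, the sketch below clusters ground-truth box dimensions with ordinary k-means from scikit-learn; YOLO's own procedure uses a (1 - IoU) distance rather than Euclidean distance, and the sample boxes and cluster count here are made-up placeholders rather than values from this project.

# Minimal sketch: derive anchor/prior box sizes by clustering ground-truth
# box dimensions. Assumes scikit-learn is installed; the sample boxes and
# the choice of 3 clusters are illustrative placeholders.
import numpy as np
from sklearn.cluster import KMeans

# (width, height) of ground-truth boxes, e.g. normalized to the image size
gt_wh = np.array([
    [0.05, 0.08], [0.06, 0.09], [0.20, 0.15],
    [0.22, 0.17], [0.45, 0.30], [0.50, 0.35],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(gt_wh)
anchors = kmeans.cluster_centers_          # one (w, h) prior per cluster
print("anchor priors (w, h):")
print(np.round(anchors, 3))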

YOLOv3 uses independent logistic classifiers and binary cross-entropy loss for the class
predictions during training. These changes make it possible to use complex datasets such as
Microsoft’s Open Images Dataset (OID) for YOLOv3 model training. OID contains dozens of
overlapping labels, such as “man” and “person”, for images in the dataset. YOLOv3 uses a
multilabel approach, which allows the classes assigned to an individual bounding box to be
more specific, and more than one class can apply. Meanwhile, YOLOv2 used a softmax, which
is a mathematical function that converts a vector of numbers into a vector of probabilities,
where the probability of each value is proportional to its relative scale in the vector. Using a
softmax means that each bounding box can belong to only one class, which is sometimes not
the case, especially with datasets like OID.
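To make the difference concrete, the small sketch below compares a softmax, which forces class scores to compete and sum to one, with independent sigmoids, which let overlapping labels such as "man" and "person" both score highly; the logits are made up for illustration.

# Illustrative comparison of softmax vs. independent logistic (sigmoid)
# class scores for one bounding box. The raw logits are made up.
import numpy as np

logits = np.array([2.0, 1.8, -1.0])          # e.g. scores for person, man, car

softmax = np.exp(logits) / np.exp(logits).sum()
sigmoid = 1.0 / (1.0 + np.exp(-logits))

print("softmax :", np.round(softmax, 3))     # sums to 1, classes compete
print("sigmoid :", np.round(sigmoid, 3))     # each class scored independently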

CHAPTER 4

PROPOSED SYSTEM
Intelligent vehicle detection and counting are becoming increasingly important in the field of
highway management. However, due to the different sizes of vehicles, their detection remains
a challenge that directly affects the accuracy of vehicle counts. To address this issue, this paper
proposes a vision-based vehicle detection and counting system. A new high-definition highway
vehicle dataset with a total of 57,290 annotated instances in 11,129 images is published in this
study. Compared with the existing public datasets, the proposed dataset contains annotated tiny
objects in the image, which provides the complete data foundation for vehicle detection based
on deep learning.

Fig. 4.1. Block diagram of proposed method.

Figure 4.1 shows the block diagram of the proposed method. The use of deep convolutional
networks (CNNs) has achieved amazing success in the field of vehicle object detection. CNNs
have a strong ability to learn image features and can perform multiple related tasks, such as
classification and bounding box regression. The detection method can be generally divided into
two categories. The two-stage method generates a candidate box of the object via various
algorithms and then classifies the object by a convolutional neural network. The one-stage
method does not generate a candidate box but directly converts the positioning problem of the
object bounding box into a regression problem for processing.

Therefore, this work focuses on the implementation of a You Only Look Once (YOLOv4) based
DeepSORT model for real-time vehicle detection and tracking from video sequences. The deep
learning based Simple Online and Realtime Tracking (DeepSORT) algorithm is added to track the
actual presence of vehicles in the video frames predicted by YOLOv4, so that false predictions
made by YOLOv4 can be avoided. The video is converted into multiple frames and given as input
to YOLOv4 for vehicle detection. Each detected vehicle frame is then further analysed by the
DeepSORT algorithm to track the vehicle; if a vehicle is tracked, DeepSORT draws a bounding box
around it and increments the tracking count.

The proposed model is trained with three different datasets, namely the COCO, Berkeley and Dash
Cam datasets, where the Dash Cam dataset is the custom-collected dataset.

4.1 Dataset

COCO dataset: The Common Objects in Context (COCO) dataset is a large-scale public object
detection dataset covering 80 object classes, including common vehicle classes such as car,
bus and truck. The images are organized into train/, val/ and test/ splits for training,
validation and testing purposes respectively.

Berkeley dataset: Berkeley DeepDrive (BDD) and Nexar announced the release of 36,000 high
frame-rate videos of driving, in addition to 5,000 images with pixel-level semantic
segmentation labels, and invited researchers from public and private institutions to join the
effort to develop accurate automotive perception and motion prediction models.

4.2 Preprocessing and frame separation

Digital image processing is the use of computer algorithms to perform image processing on
digital images. As a subfield of digital signal processing, digital image processing has many
advantages over analogue image processing: it allows a much wider range of algorithms to be
applied to the input data. The aim of digital image processing is to improve the image data
(features) by suppressing unwanted distortions and/or enhancing important image features, so
that our AI computer vision models can benefit from this improved data. To train a network and
make predictions on new data, our images must match the input size of the network. If we need
to adjust the size of images to match the network, then we can rescale or crop the data to the
required size.

We can effectively increase the amount of training data by applying randomized augmentation
to the data. Augmentation also enables us to train networks to be invariant to distortions in
image data. For example, we can add randomized rotations to input images so that a network is
invariant to the presence of rotation in input images. An augmented image datastore provides
a convenient way to apply a limited set of augmentations to 2-D images for classification
problems.

We can store image data as a numeric array, an ImageDatastore object, or a table. An
ImageDatastore enables us to import data in batches from image collections that are too large to
fit in memory. We can use an augmented image datastore or a resized 4-D array for training,
prediction, and classification, and a resized 3-D array for prediction and classification
only.

There are two ways to resize image data to match the input size of a network. Rescaling
multiplies the height and width of the image by a scaling factor. If the scaling factor is not
identical in the vertical and horizontal directions, then rescaling changes the spatial extents of
the pixels and the aspect ratio.

Cropping extracts a subregion of the image and preserves the spatial extent of each pixel. We
can crop images from the center or from random positions in the image. An image is nothing
more than a two-dimensional array of numbers (or pixels) ranging between 0 and 255. It is
defined by the mathematical function f(x,y) where x and y are the two co-ordinates horizontally
and vertically.

Resize image: In this step, in order to visualize the change, we create two functions to display
the images: the first displays a single image and the second displays two images. After that, we
create a function called processing that simply receives the images as a parameter.

Need for resizing: during the pre-processing phase, some images captured by a camera and
fed to our AI algorithm vary in size; therefore, we should establish a base size for all images
fed into our AI algorithms.
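A minimal sketch of this frame-separation and resizing step with OpenCV is shown below; the video path, output folder and 416x416 base size are illustrative assumptions rather than fixed choices of this project.

# Minimal sketch: split a traffic video into frames and resize each frame
# to a common base size before feeding it to the detector.
# "traffic.mp4", "frames/" and the 416x416 size are illustrative assumptions.
import os
import cv2

video_path = "traffic.mp4"
out_dir = "frames"
base_size = (416, 416)        # (width, height) expected by the network

os.makedirs(out_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)

idx = 0
while True:
    ok, frame = cap.read()    # frame is an HxWx3 array of 0-255 pixel values
    if not ok:                # end of video
        break
    resized = cv2.resize(frame, base_size)
    cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.jpg"), resized)
    idx += 1

cap.release()
print(f"extracted {idx} frames")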

4.3 YOLO V4

A test video from the datasets is taken to apply the algorithms and is split into different
frames. The split frames are passed to YOLOv4 to detect the vehicles and, to track the
vehicles, the frames are passed to DeepSORT. Finally, the output video is generated with vehicle
detection and tracking.

Fig. 4.2. The methodology of proposed system.

Tracking vehicles is another aspect of research in transportation. DeepSORT is a recent
tracking algorithm that extends the SORT (Simple Online and Realtime Tracking) algorithm. The
original algorithm was developed for the multiple object tracking (MOT) task, with the main
goal of supporting online and real-time applications. This means that the tracker associates
detected objects from the previous and current frames only.

YOLO is a convolutional neural network object detection system that handles object
detection as a single regression problem, from image pixels to bounding boxes with their class
probabilities. Its performance is much better than other traditional methods of object detection,
since it trains directly on full images. YOLO is formed of 27 CNN layers, with 24 convolutional
layers, two fully connected layers, and a final detection layer.

Fig. 4.3. Structure of YOLOV4 algorithm.

YOLO divides the input image into an N-by-N grid of cells and then, during processing, predicts
several bounding boxes for each cell for the object to be detected. Thus, a loss function has to
be calculated. YOLO first calculates, for each bounding box, the Intersection over Union (IoU);
it then uses the sum-squared error to calculate the loss between the predicted results and the
real objects. The final loss is the sum of three loss functions:

1) classification loss: related to class probability.


2) localization loss: related to the bounding box position and size.
3) confidence loss: related to the probability that an object is present in the box.
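For reference, a minimal sketch of the IoU computation used in this loss is given below; it assumes boxes in (x1, y1, x2, y2) corner format.

# Minimal sketch: Intersection over Union (IoU) between two boxes given as
# (x1, y1, x2, y2) corner coordinates.
def iou(box_a, box_b):
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.143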

The YOLO framework is a one-stage method, which directly converts the object bounding box
positioning problem into a regression problem without generating candidate boxes. The YOLO
network divides the picture into a defined number of grid cells, and each grid cell is responsible
for estimating the objects whose central points fall within it. Over several years the YOLO
framework has been developed up to version 4, which improves the speed and accuracy of object
detection.

Fig. 4.4. Detection of cars using YOLOV4.

The residual network part of the YOLOv4 feature extractor treats each region of the entire
feature map equally, assuming that each region contributes the same to the final detection.
However, in real-life scenes, complex and rich contextual information often exists around the
object to be detected, and treating each region of the feature map equally results in a lack of
feature expression ability, inaccurate bounding box positions, poor robustness and other
problems. To address these problems, a channel attention mechanism module is introduced into the
YOLOv4 object detection algorithm.
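As a hedged sketch of how such a YOLOv4 detector can be run on a single frame, the snippet below uses OpenCV's DNN module with a pre-trained COCO model; the yolov4.cfg and yolov4.weights file paths, the input size and the thresholds are assumptions for illustration, and the cv2.dnn_DetectionModel wrapper requires a reasonably recent OpenCV build.

# Hedged sketch: running a pre-trained YOLOv4 model on one frame with
# OpenCV's DNN module. Assumes "yolov4.cfg" and "yolov4.weights" are
# available locally; thresholds and the input frame path are illustrative.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("frames/frame_00000.jpg")          # one preprocessed frame
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

for class_id, score, (x, y, w, h) in zip(class_ids, scores, boxes):
    # Keep only vehicle-like COCO classes: car=2, motorcycle=3, bus=5, truck=7
    if int(class_id) in (2, 3, 5, 7):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)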

4.4 Deep-SORT

The Deep-SORT algorithm is deployed for tracking. It is an enhanced version of SORT in which
the association metric is replaced by a more informed metric integrating motion and appearance
information obtained from a convolutional neural network. The algorithm takes the detection
outputs from the previous stage and runs tracking for each detected object. In a
tracking-by-detection scheme, the accuracy of tracking depends on the quality of the detection
results. The Kalman filter plays an important role in Deep-SORT: it accounts for noise in
detection and uses previous states to predict the bounding box that best fits the object. Each
time an object is detected, a track is created containing all the necessary information for that
object; tracks are deleted when no detection has been associated with them for longer than a
given threshold, because the object has left the frame. In addition, to eliminate duplicates, a
minimum detection threshold is set in the first frame. The next problem lies in the association
between new detections and new predictions from the Kalman filter.
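The simplified sketch below illustrates this tracking-by-detection loop. It associates detections with existing tracks by IoU overlap only (Deep-SORT itself adds Kalman-filter motion prediction and a CNN appearance metric), creates tracks for unmatched detections, and deletes tracks not seen for a maximum number of frames; all names and thresholds are illustrative, and iou() refers to the helper sketched earlier.

# Simplified stand-in for the Deep-SORT track-management loop.
class SimpleTracker:
    def __init__(self, iou_thresh=0.3, max_age=30):
        self.iou_thresh = iou_thresh   # minimum overlap to match a detection to a track
        self.max_age = max_age         # frames a track may survive without a detection
        self.tracks = []               # each track: {"id", "box", "age"}
        self.next_id = 0               # next identity (tracking count) to assign

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2) boxes from the detector."""
        unmatched = list(detections)
        for track in self.tracks:
            best, best_iou = None, self.iou_thresh
            for det in unmatched:
                overlap = iou(track["box"], det)   # iou() from the earlier sketch
                if overlap > best_iou:
                    best, best_iou = det, overlap
            if best is not None:                   # matched: refresh the track
                track["box"], track["age"] = best, 0
                unmatched.remove(best)
            else:
                track["age"] += 1                  # not seen in this frame
        for det in unmatched:                      # unmatched detection -> new track
            self.tracks.append({"id": self.next_id, "box": det, "age": 0})
            self.next_id += 1
        self.tracks = [t for t in self.tracks if t["age"] <= self.max_age]
        return self.tracks

# Example: tracker = SimpleTracker(); live_tracks = tracker.update([(10, 10, 60, 60)])

In this simplified view, the tracking count reported for a video corresponds to the number of distinct identities created over all frames.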

Fig. 4.5 Deep-SORT algorithm for multi-object Tracking.

Deep-SORT, which is an improved version of SORT, is one of the most popular state-of-the-art
object tracking frameworks today. Deep-SORT integrates a pre-trained neural network to
generate feature vectors to be used as a deep association metric. Since Deep-SORT was
developed focusing on the Motion Analysis and Re-identification Set (MARS) dataset, which
is a large-scale video-based human re-identification dataset, it uses a feature extractor trained
on humans, which does not perform well on vehicles. Several state-of-the-art object detection
and tracking algorithms, including SORT and Deep-SORT, have been deployed to detect and track
different classes of vehicles in their region of interest, and it has been reported that the
trackers did not perform ideally at predicting vehicle trajectories, which resulted in ID switches
during occlusions.

One vehicle tracking approach fuses the prior information of the Kalman filter to solve the
problem of vehicle tracking under occlusion, but it has been reported that this method does not
perform well if the target is lost for a longer period.

CHAPTER 5

MACHINE LEARNING

5.1 What is Machine Learning

Before we look at the details of various machine learning methods, let's start by looking at what
machine learning is, and what it isn't. Machine learning is often categorized as a subfield of
artificial intelligence, but I find that categorization can often be misleading at first brush. The
study of machine learning certainly arose from research in this context, but in the data science
application of machine learning methods, it's more helpful to think of machine learning as a
means of building models of data.

Fundamentally, machine learning involves building mathematical models to help understand


data. "Learning" enters the fray when we give these models tunable parameters that can be
adapted to observed data; in this way the program can be considered to be "learning" from the
data. Once these models have been fit to previously seen data, they can be used to predict and
understand aspects of newly observed data. I'll leave to the reader the more philosophical
digression regarding the extent to which this type of mathematical, model-based "learning" is
similar to the "learning" exhibited by the human brain. Understanding the problem setting in
machine learning is essential to using these tools effectively, and so we will start with some
broad categorizations of the types of approaches we'll discuss here.

5.2 Categories of Machine Learning

At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.

Supervised learning involves somehow modeling the relationship between measured features
of data and some label associated with the data; once this model is determined, it can be used
to apply labels to new, unknown data. This is further subdivided into classification tasks
and regression tasks: in classification, the labels are discrete categories, while in regression,
the labels are continuous quantities. We will see examples of both types of supervised learning
in the following section.

Unsupervised learning involves modeling the features of a dataset without reference to any
label and is often described as "letting the dataset speak for itself." These models include tasks
such as clustering and dimensionality reduction. Clustering algorithms identify distinct groups
of data, while dimensionality reduction algorithms search for more succinct representations of
the data. We will see examples of both types of unsupervised learning in the following section.

5.3 Need for Machine Learning

Human beings are, at this moment, the most intelligent and advanced species on earth because
they can think, evaluate, and solve complex problems. On the other side, AI is still in its initial
stage and has not surpassed human intelligence in many aspects. The question, then, is why do we
need to make machines learn? The most suitable reason for doing this is "to make
decisions, based on data, with efficiency and scale".

Lately, organizations have been investing heavily in newer technologies like Artificial Intelligence,
Machine Learning and Deep Learning to extract key information from data in order to perform several
real-world tasks and solve problems. We can call these data-driven decisions taken by machines,
particularly to automate processes. Such data-driven decisions can be used, instead of
programming logic, in problems that cannot be programmed inherently. The fact is that we
cannot do without human intelligence, but the other aspect is that we all need to solve real-world
problems with efficiency at a huge scale. That is why the need for machine learning arises.

5.4 Challenges in Machine Learning

While Machine Learning is rapidly evolving, making significant strides with cybersecurity and
autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML
has not yet been able to overcome a number of challenges. The challenges that ML is facing
currently are −

1. Quality of data − Having good-quality data for ML algorithms is one of the biggest
challenges. Use of low-quality data leads to the problems related to data preprocessing
and feature extraction.
2. Time-Consuming task − Another challenge faced by ML models is the consumption of
time especially for data acquisition, feature extraction and retrieval.
3. Lack of specialist persons − As ML technology is still in its infancy stage, availability
of expert resources is a tough job.
4. No clear objective for formulating business problems − Having no clear objective and
well-defined goal for business problems is another key challenge for ML because this
technology is not that mature yet.
5. Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot
represent the problem well.
6. Curse of dimensionality − Another challenge ML model faces is too many features of
data points. This can be a real hindrance.
7. Difficulty in deployment − Complexity of the ML model makes it quite difficult to be
deployed in real life.

5.5 Applications of Machine Learning

Machine Learning is the most rapidly growing technology and, according to researchers, we are
in the golden years of AI and ML. It is used to solve many complex real-world problems which
cannot be solved with a traditional approach. Following are some real-world applications of ML −

 Emotion analysis
 Sentiment analysis
 Error detection and prevention
 Weather forecasting and prediction
 Stock market analysis and forecasting
 Speech synthesis
 Speech recognition
 Customer segmentation
 Object recognition
 Fraud detection
 Fraud prevention
 Recommendation of products to customer in online shopping

5.6 How to Start Learning Machine Learning?

Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of study
that gives computers the capability to learn without being explicitly programmed”.

And that was the beginning of Machine Learning! In modern times, Machine Learning is one
of the most popular (if not the most!) career choices. According to Indeed, Machine Learning
Engineer Is The Best Job of 2019 with a 344% growth and an average base salary
of $146,085 per year.

But there is still a lot of doubt about what exactly Machine Learning is and how to start learning
it. So, this chapter covers the basics of Machine Learning and also the path you can follow
to eventually become a full-fledged Machine Learning Engineer. Now let's get started!

5.7 How to start learning ML?

This is a rough roadmap you can follow on your way to becoming an insanely talented Machine
Learning Engineer. Of course, you can always modify the steps according to your needs to
reach your desired end-goal!

Step 1 – Understand the Prerequisites

In case you are a genius, you could start ML directly but normally, there are some prerequisites
that you need to know which include Linear Algebra, Multivariate Calculus, Statistics, and
Python. And if you don’t know these, never fear! You don’t need a Ph.D. degree in these topics
to get started but you do need a basic understanding.

(a) Learn Linear Algebra and Multivariate Calculus

Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However,
the extent to which you need them depends on your role as a data scientist. If you are more
focused on application heavy machine learning, then you will not be that heavily focused on
maths as there are many common libraries available. But if you want to focus on R&D in
Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important
as you will have to implement many ML algorithms from scratch.

(b) Learn Statistics

Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert
will be spent collecting and cleaning data. And statistics is a field that handles the collection,
analysis, and presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance, Probability
Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is also a very
important part of ML, dealing with concepts like Conditional Probability, Priors and Posteriors,
Maximum Likelihood, etc.

(c) Learn Python

Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them
as they go along, with trial and error. But the one thing that you absolutely cannot skip
is Python! While there are other languages you can use for Machine Learning, like R, Scala,
etc., Python is currently the most popular language for ML. In fact, there are many Python
libraries that are specifically useful for Artificial Intelligence and Machine Learning, such
as Keras, TensorFlow, Scikit-learn, etc.

So if you want to learn ML, it’s best if you learn Python! You can do that using various online
resources and courses such as Fork Python available Free on GeeksforGeeks.

Step 2 – Learn Various ML Concepts

Now that you are done with the prerequisites, you can move on to actually learning ML (Which
is the fun part!!!) It’s best to start with the basics and then move on to the more complicated
stuff. Some of the basic concepts in ML are:

5.7.1 Terminologies of Machine Learning

 Model – A model is a specific representation learned from data by applying some


machine learning algorithm. A model is also called a hypothesis.
 Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as
input to the model. For example, in order to predict a fruit, there may be features like
color, smell, taste, etc.
 Target (Label) – A target variable or label is the value to be predicted by our model.
For the fruit example discussed in the feature section, the label with each set of input
would be the name of the fruit like apple, orange, banana, etc.
 Training – The idea is to give a set of inputs (features) and its expected outputs (labels),
so that after training, we will have a model (hypothesis) that will then map new data to one
of the categories it was trained on.
 Prediction – Once our model is ready, it can be fed a set of inputs to which it will
provide a predicted output(label).
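As a minimal illustration of these terms, the sketch below trains a tiny scikit-learn classifier on made-up fruit measurements; the features, labels and values are purely illustrative.

# Tiny illustration of model / feature / label / training / prediction.
# The fruit measurements and labels below are made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [weight in grams, surface smoothness 0-1]
X_train = [[150, 0.9], [170, 0.8], [130, 0.3], [120, 0.2]]
y_train = ["apple", "apple", "orange", "orange"]       # labels (targets)

model = DecisionTreeClassifier()          # the hypothesis to be learned
model.fit(X_train, y_train)               # training

print(model.predict([[160, 0.85]]))       # prediction for new data -> ['apple']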

5.7.2 Types of Machine Learning

 Supervised Learning – This involves learning from a training dataset with labeled data
using classification and regression models. This learning process continues until the
required level of performance is achieved.
 Unsupervised Learning – This involves using unlabelled data and then finding the
underlying structure in the data in order to learn more and more about the data itself
using factor and cluster analysis models.
 Semi-supervised Learning – This involves using unlabelled data like Unsupervised
Learning with a small amount of labeled data. Using labeled data vastly increases the
learning accuracy and is also more cost-effective than Supervised Learning.
 Reinforcement Learning – This involves learning optimal actions through trial and
error. So the next action is decided by learning behaviors that are based on the current
state and that will maximize the reward in the future.

5.8 Advantages of Machine learning

1. Easily identifies trends and patterns -

Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon,
it serves to understand the browsing behaviors and purchase histories of its users to help cater
to the right products, deals, and reminders relevant to them. It uses the results to reveal relevant
advertisements to them.

2. No human intervention needed (automation)

With ML, you don’t need to babysit your project every step of the way. Since it means giving
machines the ability to learn, it lets them make predictions and also improve the algorithms on
their own. A common example of this is anti-virus software, which learns to filter new threats as
they are recognized. ML is also good at recognizing spam.

3. Continuous Improvement

As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets
them make better decisions. Say you need to make a weather forecast model. As the amount of
data you have keeps growing, your algorithms learn to make more accurate predictions faster.

4. Handling multi-dimensional and multi-variety data

Machine Learning algorithms are good at handling data that are multi-dimensional and multi-
variety, and they can do this in dynamic or uncertain environments.

5. Wide Applications

You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers
while also targeting the right customers.

5.9 Disadvantages of Machine Learning

1. Data Acquisition

Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for new
data to be generated.

2. Time and Resources

ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.

3. Interpretation of Results

Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.

4. High error-susceptibility

Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize
the source of the issue, and even longer to correct it.

CHAPTER 6

SOFTWARE ENVIRONMENT

6.1 What is Python?

Below are some facts about Python.

 Python is currently the most widely used multi-purpose, high-level programming


language.
 Python allows programming in Object-Oriented and Procedural paradigms. Python
programs generally are smaller than other programming languages like Java.
 Programmers have to type relatively less, and the indentation requirement of the language
makes the code readable all the time.
 Python language is being used by almost all tech-giant companies like – Google,
Amazon, Facebook, Instagram, Dropbox, Uber… etc.

The biggest strength of Python is its huge collection of standard libraries, which can be used for the
following –

 Machine Learning
 GUI Applications (like Kivy, Tkinter, PyQt etc. )
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like Opencv, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia

6.2 Advantages of Python

Let’s see how Python dominates over other languages.

1. Extensive Libraries

Python comes with an extensive library containing code for various purposes like
regular expressions, documentation generation, unit testing, web browsers, threading,
databases, CGI, email, image manipulation, and more. So, we don’t have to write the complete
code for that manually.

2. Extensible

As we have seen earlier, Python can be extended to other languages. You can write some of
your code in languages like C++ or C. This comes in handy, especially in projects.

3. Embeddable

Complimentary to extensibility, Python is embeddable as well. You can put your Python code
in your source code of a different language, like C++. This lets us add scripting capabilities to
our code in the other language.

4. Improved Productivity

The language’s simplicity and extensive libraries render programmers more productive than
languages like Java and C++ do. Also, you need to write less to get more things done.

5. IOT Opportunities

Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for
the Internet Of Things. This is a way to connect the language with the real world.

6. Simple and Easy

When working with Java, you may have to create a class to print ‘Hello World’. But in Python,
just a print statement will do. It is also quite easy to learn, understand, and code. This is why
when people pick up Python, they have a hard time adjusting to other more verbose languages
like Java.

7. Readable

Because it is not such a verbose language, reading Python is much like reading English. This
is the reason why it is so easy to learn, understand, and code. It also does not need curly braces
to define blocks, and indentation is mandatory. This further aids the readability of the code.

8. Object-Oriented

This language supports both the procedural and object-oriented programming paradigms.
While functions help us with code reusability, classes and objects let us model the real world.
A class allows the encapsulation of data and functions into one.

9. Free and Open-Source

Like we said earlier, Python is freely available. But not only can you download Python for free,
but you can also download its source code, make changes to it, and even distribute it. It
downloads with an extensive collection of libraries to help you with your tasks.

10. Portable

When you code your project in a language like C++, you may need to make some changes to
it if you want to run it on another platform. But it isn’t the same with Python. Here, you need
to code only once, and you can run it anywhere. This is called Write Once Run Anywhere
(WORA). However, you need to be careful enough not to include any system-dependent
features.

11. Interpreted

Lastly, we will say that it is an interpreted language. Since statements are executed one by one,
debugging is easier than in compiled languages.

Advantages of Python Over Other Languages

1. Less Coding

Almost all tasks done in Python require less coding than the same tasks done in other
languages. Python also has awesome standard library support, so you don’t have to search
for any third-party libraries to get your job done. This is the reason many people suggest
that beginners learn Python.

2. Affordable

Python is free therefore individuals, small companies or big organizations can leverage the free
available resources to build applications. Python is popular and widely used so it gives you
better community support.

The 2019 GitHub annual survey showed that Python has overtaken Java in the most popular
programming language category.

3. Python is for Everyone

Python code can run on any machine whether it is Linux, Mac or Windows. Programmers need
to learn different languages for different jobs but with Python, you can professionally build
web apps, perform data analysis and machine learning, automate things, do web scraping and
also build games and powerful visualizations. It is an all-rounder programming language.

6.3 Disadvantages of Python

So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing Python
over another language.

1. Speed Limitations

We have seen that Python code is executed line by line. But since Python is interpreted, it often
results in slow execution. This, however, isn’t a problem unless speed is a focal point for the
project. In other words, unless high speed is a requirement, the benefits offered by Python are
enough to distract us from its speed limitations.

2. Weak in Mobile Computing and Browsers

While it serves as an excellent server-side language, Python is rarely seen on the client
side. Besides that, it is rarely used to implement smartphone-based applications. One such
application is called Carbonnelle.

The reason it is not so famous despite the existence of Brython is that it isn’t that secure.

3. Design Restrictions

As you know, Python is dynamically typed. This means that you don’t need to declare the type
of variable while writing the code. It uses duck-typing. But wait, what’s that? Well, it just
means that if it looks like a duck, it must be a duck. While this is easy on the programmers
during coding, it can raise run-time errors.

4. Underdeveloped Database Access Layers

Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and
ODBC (Open DataBase Connectivity), Python’s database access layers are a bit
underdeveloped. Consequently, it is less often applied in huge enterprises.

5. Simple

No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I don’t
do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity of Java
code seems unnecessary.

This was all about the Advantages and Disadvantages of Python Programming Language.

6.4 History of Python

What do the alphabet and the programming language Python have in common? Right, both
start with ABC. If we are talking about ABC in the Python context, it's clear that the
programming language ABC is meant. ABC is a general-purpose programming language and
programming environment, which had been developed in the Netherlands, Amsterdam, at the
CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence
the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked
at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview
with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer
on a team building a language called ABC at Centrum voor Wiskunde en Informatica (CWI).
I don't know how well people know ABC's influence on Python. I try to mention ABC's
influence because I'm indebted to everything I learned during that project and to the people
who worked on it. "Later on in the same Interview, Guido van Rossum continued: "I
remembered all my experience and some of my frustration with ABC. I decided to try to design
a simple scripting language that possessed some of ABC's better properties, but without its
problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple
runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax,
used indentation for statement grouping instead of curly braces or begin-end blocks, and
developed a small number of powerful data types: a hash table (or dictionary, as we call it), a
list, strings, and numbers."

6.5 Python Development Steps

Guido van Rossum published the first version of the Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types list, dict, str and others. It was also object-oriented and had a module system.

Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was introduced. This release included list comprehensions, a full garbage collector, and Unicode support. Python flourished for another eight years in the 2.x versions before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), appeared. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the removal of duplicate programming constructs and modules, thus fulfilling, or coming close to fulfilling, the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." Some of the changes introduced in Python 3.0 are listed below, followed by a short example:

 print is now a function.
 Views and iterators are returned instead of lists.
 The rules for ordering comparisons have been simplified; for example, a heterogeneous list cannot be sorted, because all the elements of a list must be comparable to each other.
 There is only one integer type left, int (the old long type is now int as well).
 The division of two integers returns a float instead of an integer; "//" can be used to get the old behaviour.
 Text vs. data instead of Unicode vs. 8-bit.
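
A short example, assuming a Python 3 interpreter:

print("hello")   # print is now a function, so parentheses are required
print(7 / 2)     # 3.5 -- dividing two integers returns a float in Python 3
print(7 // 2)    # 3   -- floor division "//" gives the old integer behaviour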

6.6 Purpose

The purpose of this work is to demonstrate that a deep learning based approach, combining the YOLOv4 detector with the DeepSORT tracking algorithm, can detect and track road traffic vehicles reliably in recorded and real-time video, including scenes with congestion and partial occlusion.

6.7 Python

Python is an interpreted high-level programming language for general-purpose programming.


Created by Guido van Rossum and first released in 1991, Python has a design philosophy that
emphasizes code readability, notably using significant whitespace.

Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.

 Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.
 Python is Interactive − you can actually sit at a Python prompt and interact with the interpreter directly to write your programs.

Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this: it says something about how much code you have to scan, read and/or understand in order to troubleshoot problems or tweak behaviours. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels. Its tools are quick to implement, save a lot of time, and several of them have later been patched and updated by people with no Python background, without breaking.

6.8 Modules Used in Project

TensorFlow

TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library and is also used for machine
learning applications such as neural networks. It is used for both research and production at
Google.

TensorFlow was developed by the Google Brain team for internal Google use. It was released
under the Apache 2.0 open-source license on November 9, 2015.
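
As a brief illustration, the following minimal sketch (assuming TensorFlow 2.x with eager execution; the values are illustrative) multiplies two constant tensors element-wise:

import tensorflow as tf

# Two constant tensors multiplied element-wise; TensorFlow handles the dataflow.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
c = a * b
print(c.numpy())  # [[ 5. 12.] [21. 32.]]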

NumPy

NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object, and tools for working with these arrays.

It is the fundamental package for scientific computing with Python. It contains various features including these important ones:

 A powerful N-dimensional array object
 Sophisticated (broadcasting) functions
 Tools for integrating C/C++ and Fortran code
 Useful linear algebra, Fourier transform, and random number capabilities

Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data types can be defined, which allows NumPy to seamlessly and speedily integrate with a wide variety of databases.
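
A small sketch of the N-dimensional array object and broadcasting (the values are illustrative; the same scaling pattern is applied to video frames in this project):

import numpy as np

# A 2-D array scaled element-wise via broadcasting.
frame = np.array([[0, 128, 255], [64, 32, 16]], dtype=np.float32)
normalized = frame / 255.0   # broadcasting divides every element by 255
print(normalized.shape)      # (2, 3)
print(normalized.max())      # 1.0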

Pandas

Pandas is an open-source Python library providing high-performance data manipulation and analysis tools built on its powerful data structures. Before Pandas, Python was mainly used for data munging and preparation and contributed little to data analysis; Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of academic and commercial domains, including finance, economics, statistics, and analytics.
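
A brief sketch of typical Pandas usage; the table contents and column names here are purely illustrative:

import pandas as pd

# Load, inspect and summarise a small table.
counts = pd.DataFrame({"vehicle": ["car", "truck", "car"],
                       "speed_kmph": [62, 48, 70]})
print(counts.head())                       # prepare / inspect
print(counts.groupby("vehicle").mean())    # a simple analysis step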

Matplotlib

Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and four graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail gallery.

For simple plotting the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc., via an object-oriented interface or via a set of functions familiar to MATLAB users.
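
A minimal pyplot sketch that saves a simple line plot to a file (the data and output file name are illustrative):

import matplotlib
matplotlib.use("Agg")              # non-interactive backend, safe inside scripts
import matplotlib.pyplot as plt

frames = [1, 2, 3, 4, 5]
vehicles = [3, 4, 4, 6, 5]
plt.plot(frames, vehicles, marker="o")
plt.xlabel("Frame")
plt.ylabel("Vehicles detected")
plt.savefig("vehicles_per_frame.png")   # hypothetical output file name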

Scikit – learn

Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It is licensed under a permissive simplified BSD license and is distributed by many Linux distributions, encouraging academic and commercial use.

Install Python Step-by-Step in Windows and Mac

Python, a versatile programming language, does not come pre-installed on your computer. Python was first released in 1991 and remains a very popular high-level programming language today. Its design philosophy emphasizes code readability, with its notable use of significant whitespace.

The object-oriented approach and language construct provided by Python enables programmers
to write both clear and logical code for projects. This software does not come pre-packaged
with Windows.

How to Install Python on Windows and Mac

There have been several updates to Python over the years. The question is: how do you install Python? This can be confusing for a beginner, but the steps below will resolve it. The version used here is Python 3.7.4, i.e., a Python 3 release.

Note: Python 3.7.4 cannot be used on Windows XP or earlier devices.

Before you start the installation process, you first need to know your system requirements. Based on your system type, i.e., operating system and processor, you must download the appropriate Python installer. The system used here is a Windows 64-bit operating system, so the steps below install Python 3.7.4 on a Windows device. The steps for installing Python on Windows 10, 8 and 7 are divided into 4 parts to help you understand them better.

Download the Correct version into the system

Step 1: Go to the official site to download and install python using Google Chrome or any other
web browser. OR Click on the following link: https://www.python.org

Now, check for the latest and the correct version for your operating system.

Step 2: Click on the Download Tab.

Step 3: You can either select the yellow "Download Python 3.7.4" button, or scroll further down and click on the download link for the version you need. Here, we are downloading Python 3.7.4 for Windows.

Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see a different version of python along with the operating system.

 To download Windows 32-bit python, you can select any one from the three options:
Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86
web-based installer.
 To download Windows 64-bit python, you can select any one from the three options:
Windows x86-64 embeddable zip file, Windows x86-64 executable installer or
Windows x86-64 web-based installer.

Here we will use the Windows x86-64 web-based installer. This completes the first part, choosing which version of Python to download. Now we move on to the second part: installation.

Note: To see the changes or updates made in a version, you can click on the Release Notes option.

6.9 Installation of Python

Step 1: Go to your Downloads folder and open the downloaded Python installer to carry out the installation process.

Step 2: Before you click on Install Now, make sure to tick the "Add Python 3.7 to PATH" checkbox.

Step 3: Click on Install Now. After the installation is successful, click on Close.

With the above three steps, Python has been successfully and correctly installed. Now it is time to verify the installation.

Note: The installation process might take a couple of minutes.

Verify the Python Installation

Step 1: Click on Start

Step 2: In the Windows Run Command, type “cmd”.

Step 3: Open the Command prompt option.

Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.

Step 5: The output will show the installed version, for example Python 3.7.4.

Note: If an earlier version of Python is already installed, you must first uninstall it and then install the new one.

Check how the Python IDLE works

Step 1: Click on Start

Step 2: In the Windows Run command, type “python idle”.

Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program

Step 4: To start working in IDLE, you must first save the file. Click on File > Save.

Step 5: Name the file, set "Save as type" to Python files, and click on SAVE. Here the file is named Hey World.

Step 6: Now, for example, enter print("Hey World") and press Enter.

You will see that the command is executed. With this, we end the walkthrough of how to install Python; you have learned how to download and install Python for Windows on your operating system.

Note: Unlike Java, Python does not require semicolons at the end of statements.

CHAPTER 7

RESULTS AND DISCUSSION

7.1 Modules

To implement this project, we have designed the following modules:

1) Generate & Load YOLOv4-DeepSORT Model: this module generates and loads the YOLOv4-DeepSORT model.

2) Upload Video & Detect Car & Truck: this module uploads a test video and applies YOLOv4 to detect vehicles in each frame; the detected bounding boxes are then passed to DeepSORT to track the individual vehicles. A minimal sketch of this pipeline is given below; the full GUI implementation appears in the Appendix.
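
The sketch below outlines that pipeline, assuming the deep_sort and tools packages, the YOLOv4 checkpoint, and the encoder file used in the Appendix are available; the video path is hypothetical and the per-frame post-processing is omitted:

import cv2
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants
from deep_sort import nn_matching
from deep_sort.tracker import Tracker
from tools import generate_detections as gdet

# Load the YOLOv4 detector and the DeepSORT tracker (paths follow the Appendix).
yolo = tf.saved_model.load('checkpoints/yolov4-416', tags=[tag_constants.SERVING]).signatures['serving_default']
encoder = gdet.create_box_encoder('model_data/mars-small128.pb', batch_size=1)
tracker = Tracker(nn_matching.NearestNeighborDistanceMetric("cosine", 0.4, None))

cap = cv2.VideoCapture('Videos/test.mp4')   # hypothetical test video path
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Step 1: YOLOv4 detects car/truck boxes in the frame.
    # Step 2: the encoder extracts appearance features for each box.
    # Step 3: tracker.predict()/tracker.update() assigns persistent track IDs.
    # (Box post-processing and drawing are omitted here; see the Appendix.)
cap.release()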

7.2 Results

The pre-trained YOLOv4 model is obtained by training the model on the COCO dataset. Figure 7.1 shows the detection and tracking of cars and trucks using the YOLOv4 and DeepSORT algorithms. Figure 7.2 shows vehicle tracking with the frames-per-second (FPS) rate displayed as the application runs until the end of the video.

Figure 7.1. Detection and tracking of cars and trucks using the YOLOv4 and DeepSORT algorithms.

Figure 7.2. Vehicle tracking with the frames-per-second (FPS) rate displayed until the end of the video.

CHAPTER 8

CONCLUSION

In conclusion, the vehicle detection and tracking method presented uses the TensorFlow library with the DeepSORT algorithm built on the YOLOv4 model. The results show that both YOLOv4 and YOLOv4-tiny achieve acceptable accuracy while being faster than the previous generation of models. The system can be used with real-time surveillance cameras on highways, or with recorded video, to count the vehicles passing during the recorded period. This data can then be used for traffic management, for example to determine whether a location suffers from heavy congestion. If the system requires the highest accuracy at an acceptable speed, YOLOv4 is preferable to the earlier YOLOv3 model. If the highest possible speed is required, because of hardware limitations or real-time processing constraints, the YOLOv4-tiny model is recommended, as it trades some accuracy for much higher speed. The system can be made more adaptable for vehicle detection by following several suggested improvements. The vehicle tracking algorithm is based on the DeepSORT framework and is capable of tracking the nonlinear motion of vehicles with a high level of accuracy. The proposed algorithm utilizes YOLOv4 with Darknet, an open-source neural network framework, for vehicle localization and identification. The number of detection errors was minimized by optimizing the training of the detector through hyperparameter optimization and data augmentation.

Future scope

As a modification to the DeepSORT implementation, which is incapable of capturing nonlinear motion, the unscented Kalman filter can be used to obtain highly accurate track predictions, which in turn significantly reduces errors in track association. AlexNet, a pre-trained convolutional neural network, can be used for feature extraction, an integral part of the tracking algorithm, to further reduce track-association errors. The experimental results demonstrate that the proposed vehicle tracking algorithm tracks the nonlinear motion of vehicles, and tracks through occlusions, with a reduced number of ID switches compared to state-of-the-art object trackers. The pre-trained model was unable to deliver consistently good performance across all five scenarios, both in terms of precision and tracking success rate. The results of the custom-trained models are comparable to the pre-trained model. The many misclassification cases produced by all three models suggest the need for further experimentation; they also suggest that our models are not over-fitted. This shows that more data needs to be added for training and testing all three models, particularly the custom-trained ones. An important finding of this work is that pre-trained models cannot work in KSA environments without retraining, due to differences in language, driving culture, driving environments, and vehicle models. Future work will look into building larger datasets for vehicle detection, tracking, and other problems in road transportation, and developing highly accurate deep learning models optimized for the environment.


APPENDIX
CODE

from tkinter import messagebox
from tkinter import *
from tkinter import simpledialog
import tkinter
from tkinter import filedialog
from tkinter.filedialog import askopenfilename
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import time
import tensorflow as tf
physical_devices = tf.config.experimental.list_physical_devices('GPU')  # list available GPUs
if len(physical_devices) > 0:
    # allow TensorFlow to grow GPU memory usage instead of allocating it all at once
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
from absl import app, flags, logging
from absl.flags import FLAGS
import core.utils as utils
from core.yolov4 import filter_boxes  # YOLOv4 helper to filter boxes that contain vehicles
from tensorflow.python.saved_model import tag_constants
from core.config import cfg
from PIL import Image
import cv2
import matplotlib.pyplot as plt
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

# deep sort imports
from deep_sort import preprocessing, nn_matching
from deep_sort.detection import Detection
from deep_sort.tracker import Tracker  # DeepSORT tracker used to track vehicles across YOLO-processed frames
from tools import generate_detections as gdet
from tqdm import tqdm
from collections import deque

# per-track history of centre points, used to draw motion trails
pts = [deque(maxlen=30) for _ in range(9999)]

main = tkinter.Tk()
main.title("Road Traffic Vehicle Detection and Tracking using Deep Learning with Custom-Collected and Public Datasets")
main.geometry("1300x1200")

global filename
global model, encoder, tracker, config
max_cosine_distance = 0.4
nn_budget = None
nms_max_overlap = 1.0
global accuracy, precision

def loadModel():
    global model, encoder, tracker, config
    # initialize deep sort
    model_filename = 'model_data/mars-small128.pb'
    encoder = gdet.create_box_encoder(model_filename, batch_size=1)
    # calculate cosine distance metric
    metric = nn_matching.NearestNeighborDistanceMetric("cosine", max_cosine_distance, nn_budget)
    # initialize tracker
    tracker = Tracker(metric)
    # load configuration for object detector
    config = ConfigProto()
    config.gpu_options.allow_growth = True
    session = InteractiveSession(config=config)
    pathlabel.config(text="YOLOv4 DeepSort Model Loaded")
    text.delete('1.0', END)
    text.insert(END, "YOLOv4 DeepSort Model Loaded\n\n")

def vehicleDetection():
    global model, encoder, tracker, config
    global accuracy, precision
    accuracy = 0
    precision = 0
    filename = filedialog.askopenfilename(initialdir="Videos")
    pathlabel.config(text=filename)
    text.delete('1.0', END)
    text.insert(END, filename + " loaded\n\n")
    text.update_idletasks()
    saved_model_loaded = tf.saved_model.load('checkpoints/yolov4-416', tags=[tag_constants.SERVING])
    model = saved_model_loaded.signatures['serving_default']
    cap = cv2.VideoCapture(filename)
    start_time = time.time()
    while (cap.isOpened()):
        ret, frame = cap.read()
        if frame is not None:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            image = Image.fromarray(frame)
            frame_size = frame.shape[:2]
            image_data = cv2.resize(frame, (416, 416))
            image_data = image_data / 255.
            image_data = image_data[np.newaxis, ...].astype(np.float32)
            batch_data = tf.constant(image_data)
            pred_bbox = model(batch_data)
            for key, value in pred_bbox.items():
                boxes = value[:, :, 0:4]
                pred_conf = value[:, :, 4:]
            boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
                boxes=tf.reshape(boxes, (tf.shape(boxes)[0], -1, 1, 4)),
                scores=tf.reshape(pred_conf, (tf.shape(pred_conf)[0], -1, tf.shape(pred_conf)[-1])),
                max_output_size_per_class=50,
                max_total_size=50,
                iou_threshold=0.45,
                score_threshold=0.50)
            # convert data to numpy arrays and slice out unused elements
            num_objects = valid_detections.numpy()[0]
            bboxes = boxes.numpy()[0]
            bboxes = bboxes[0:int(num_objects)]
            scores = scores.numpy()[0]
            scores = scores[0:int(num_objects)]
            classes = classes.numpy()[0]
            classes = classes[0:int(num_objects)]
            # format bounding boxes from normalized ymin, xmin, ymax, xmax ---> xmin, ymin, width, height
            original_h, original_w, _ = frame.shape
            bboxes = utils.format_boxes(bboxes, original_h, original_w)
            # store all predictions in one parameter for simplicity when calling functions
            pred_bbox = [bboxes, scores, classes, num_objects]
            # read in all class names from config
            class_names = utils.read_class_names(cfg.YOLO.CLASSES)
            # by default allow all classes in .names file
            # allowed_classes = list(class_names.values())
            # custom allowed classes: track only cars and trucks
            allowed_classes = ['car', 'truck']
            names = []
            deleted_indx = []
            for i in range(num_objects):
                class_indx = int(classes[i])
                class_name = class_names[class_indx]
                if class_name not in allowed_classes:
                    deleted_indx.append(i)
                else:
                    names.append(class_name)
            names = np.array(names)
            count = len(names)
            cv2.putText(frame, "tracked: {}".format(count), (5, 70), 0, 5e-3 * 200, (0, 255, 0), 2)
            # delete detections that are not in allowed_classes
            bboxes = np.delete(bboxes, deleted_indx, axis=0)
            scores = np.delete(scores, deleted_indx, axis=0)
            # encode yolo detections and feed to tracker
            features = encoder(frame, bboxes)
            detections = [Detection(bbox, score, class_name, feature)
                          for bbox, score, class_name, feature in zip(bboxes, scores, names, features)]
            # initialize color map
            cmap = plt.get_cmap('tab20b')
            colors = [cmap(i)[:3] for i in np.linspace(0, 1, 20)]
            boxs = np.array([d.tlwh for d in detections])
            scores = np.array([d.confidence for d in detections])
            classes = np.array([d.class_name for d in detections])
            indices = preprocessing.non_max_suppression(boxs, classes, nms_max_overlap, scores)
            detections = [detections[i] for i in indices]
            # Call the tracker
            print(scores)
            # report the first two detection confidences once (guarded against fewer than two detections)
            if accuracy == 0 and len(scores) > 1:
                accuracy = scores[0]
                text.insert(END, "YoloV4 DeepSort Accuracy : " + str(scores[0]) + "\n\n")
                text.insert(END, "YoloV4 DeepSort Precision : " + str(scores[1]) + "\n\n")
                text.update_idletasks()
            tracker.predict()
            tracker.update(detections)
            # update tracks
            for track in tracker.tracks:
                if not track.is_confirmed() or track.time_since_update > 1:
                    continue
                bbox = track.to_tlbr()
                class_name = track.get_class()
                color = colors[int(track.track_id) % len(colors)]
                color = [i * 255 for i in color]
                cv2.rectangle(frame, (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])), color, 2)
                cv2.rectangle(frame, (int(bbox[0]), int(bbox[1] - 30)),
                              (int(bbox[0]) + (len(class_name) + len(str(track.track_id))) * 17, int(bbox[1])), color, -1)
                cv2.putText(frame, class_name + "-" + str(track.track_id), (int(bbox[0]), int(bbox[1] - 10)), 0, 0.75, (255, 255, 255), 2)
                # Tracking with historical trajectory
                center = (int(((bbox[0]) + (bbox[2])) / 2), int(((bbox[1]) + (bbox[3])) / 2))
                pts[track.track_id].append(center)
                thickness = 5
                # center point
                cv2.circle(frame, (center), 1, color, thickness)
                # draw motion path
                for j in range(1, len(pts[track.track_id])):
                    if pts[track.track_id][j - 1] is None or pts[track.track_id][j] is None:
                        continue
                    thickness = int(np.sqrt(64 / float(j + 1)) * 2)
                    cv2.line(frame, (pts[track.track_id][j - 1]), (pts[track.track_id][j]), (color), thickness)
            fps = 1.0 / (time.time() - start_time)
            # print("FPS: %.2f" % fps)
            frame = cv2.resize(frame, (800, 800))
            cv2.putText(frame, "FPS: %f" % (fps), (5, 150), 0, 5e-3 * 200, (0, 255, 0), 2)
            result = np.asarray(frame)
            result = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
            cv2.imshow("Output Video", result)
            if cv2.waitKey(10) & 0xFF == ord('q'):
                break
        else:
            # no more frames: stop processing at the end of the video
            break
    cap.release()
    cv2.destroyAllWindows()

def close():
    main.destroy()

font = ('times', 16, 'bold')
title = Label(main, text='Road Traffic Vehicle Detection and Tracking using Deep Learning with Custom-Collected and Public Datasets', anchor=W, justify=CENTER)
title.config(bg='yellow4', fg='white')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

font1 = ('times', 14, 'bold')
upload = Button(main, text="Generate & Load YOLOv4-DeepSort Model", command=loadModel)
upload.place(x=50, y=100)
upload.config(font=font1)

pathlabel = Label(main)
pathlabel.config(bg='yellow4', fg='white')
pathlabel.config(font=font1)
pathlabel.place(x=50, y=150)

markovButton = Button(main, text="Upload Video & Detect Car & Truck", command=vehicleDetection)
markovButton.place(x=50, y=200)
markovButton.config(font=font1)

predictButton = Button(main, text="Exit", command=close)
predictButton.place(x=50, y=250)
predictButton.config(font=font1)

font1 = ('times', 12, 'bold')
text = Text(main, height=15, width=78)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=450, y=100)
text.config(font=font1)

main.config(bg='magenta3')
main.mainloop()
