

C.V. Raman Global University

VEHICLE IDENTIFICATION USING VISUAL FEATURES

Under the guidance of Dr. Priyanka Saha

Identification - Visual Features - Deep Learning


TEAM MEMBERS
ARPAN KUMAR BEHERA
2101020821

SWAGAT BEHERA
2101020787

SATYA SUNDAR MALLICK
2101020738

JHASKESAN MUDULI
2101020545

INTRODUCTION
■ Vehicle detection and statistics are essential for intelligent traffic management and highway control systems.

■Traffic surveillance cameras provide large amounts of video footage for analysis.
■Detecting vehicles in videos faces challenges due to variations in vehicle sizes and
camera angles.
■Small vehicles at a distance are particularly difficult to detect accurately, impacting
vehicle counting precision.
■ The proposed system uses deep learning techniques for vision-based vehicle detection and counting.
■ A new highway vehicle dataset is introduced with over 57,000 annotated instances across 11,129 images, focusing on tiny vehicle objects.
■The YOLOv3 network is employed to detect vehicles, with road segmentation into
proximal and remote areas to enhance small object detection.
■The ORB algorithm is used to track vehicle movements, calculate trajectories, and
classify vehicles by type and direction.

■ The research contributes to intelligent traffic management by enhancing vehicle detection and counting in highway scenes.

PROBLEM STATEMENT / ADVANCEMENT / FUTURE SCOPE
■DL's Impact on Vehicle Detection: Deep learning techniques,
especially DCNNs, have significantly improved vehicle detection
accuracy, precision, and robustness in applications like traffic monitoring
and autonomous driving, surpassing traditional methods like HOG and
LBP.
■Advancements in Computational Power: The rise in GPU
capabilities has enabled real-time implementation of DL-based vehicle
detection and classification models, making them more efficient.
■Innovations Enhancing DL Models: Techniques such as
transfer learning, hyper-parameter optimization, and image preprocessing
are improving DL performance while reducing computational costs.
■Challenges in Nighttime Detection: Nighttime vehicle
detection remains challenging due to low lighting, but DL models like
YOLO and R-CNN are overcoming limitations of traditional methods.
■Future Directions: Research should aim to enhance data efficiency,
improve model interpretability, and reduce computational costs for DL
models to further advance vehicle detection systems.


LITERATURE REVIEW

Study | Focus | Key Contributions | Methods/Techniques
Al-Smadi et al. (2016) | Traffic surveillance | Comprehensive review on vehicle detection, recognition, and tracking | Vision-based techniques
Radhakrishnan (2013) | Object extraction | Video object extraction for sports applications | Background subtraction techniques
Qiu-Lin & Jia-Feng (2011) | Vehicle detection | Proposed detection using three-frame difference and cross-entropy threshold | Temporal image differencing
Liu et al. (2014) | Urban road vehicle tracking | Optical flow-based tracking of urban road vehicles | Optical flow
Park et al. (2007) | Parking violation detection | Street-parking violation detection through video-based systems | Image processing
Ferryman et al. (1995) | Vehicle recognition | Generic deformable model for vehicle recognition | Deformable model

METHODOLOGY
1. Road Surface Segmentation:
- The highway road surface is extracted from the video footage using image processing methods, including Gaussian Mixture Modeling to separate background and foreground.
- The road surface is divided into remote and proximal areas, allowing for better vehicle detection by focusing on specific regions.
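As a hedged illustration of this step, the sketch below uses a single-Gaussian running-average background model per pixel, a simplified stand-in for the full Gaussian Mixture Modeling described above (a real pipeline would typically use OpenCV's MOG2 background subtractor):

```python
import numpy as np

def update_background(bg_mean, bg_var, frame, lr=0.05):
    """Running per-pixel Gaussian background model (mean and variance).
    A single-Gaussian simplification of the Gaussian Mixture Modeling step."""
    diff = frame - bg_mean
    bg_mean = bg_mean + lr * diff
    bg_var = bg_var + lr * (diff ** 2 - bg_var)
    return bg_mean, bg_var

def foreground_mask(bg_mean, bg_var, frame, k=2.5):
    """Mark pixels more than k standard deviations from the background model."""
    return np.abs(frame - bg_mean) > k * np.sqrt(bg_var + 1e-6)

def split_road(mask, horizon_row):
    """Divide the segmented road surface into remote (far) and proximal (near) areas."""
    return mask[:horizon_row], mask[horizon_row:]
```

Splitting at a fixed horizon row is an assumption for illustration; the actual split is derived from the segmented road surface.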
2. Vehicle Detection Using YOLOv3:
- YOLOv3 (You Only Look Once, version 3) is employed for vehicle detection, with separate detection for remote and proximal areas of the road.
- The network uses a Darknet-53 backbone and detects objects at three different scales, improving detection accuracy across various sizes and distances.
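Because detection runs separately on the remote and proximal areas, boxes found in each crop must be shifted back into full-frame coordinates before merging. A minimal sketch of that bookkeeping, with `detect` standing in for a YOLOv3 forward pass (an assumption for illustration):

```python
import numpy as np

def remap_boxes(boxes, y_offset):
    """Shift (x1, y1, x2, y2) boxes detected in a crop back into full-frame rows."""
    return [(x1, y1 + y_offset, x2, y2 + y_offset) for x1, y1, x2, y2 in boxes]

def detect_in_areas(frame, horizon_row, detect):
    """Run a detector on the remote (top) and proximal (bottom) crops separately,
    then merge the results in full-frame coordinates."""
    remote, proximal = frame[:horizon_row], frame[horizon_row:]
    boxes = remap_boxes(detect(remote), 0)
    boxes += remap_boxes(detect(proximal), horizon_row)
    return boxes
```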
3. Feature Extraction and Matching:
- The ORB (Oriented FAST and Rotated BRIEF) algorithm is used for feature extraction from the detected vehicles, and the extracted features are matched across frames.
4. Multi-Object Tracking:
- RANSAC (Random Sample Consensus) is used to eliminate noise from detected feature points, improving tracking accuracy.
- The trajectory of each vehicle is predicted using perspective transformation, and Kalman filtering is applied for consistent vehicle tracking across frames.
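As a hedged sketch of the RANSAC step, the code below fits a simple 2-D translation between matched feature points and rejects outlier matches; the actual motion model (a perspective transformation) is richer, so treat this as illustrative only:

```python
import random
import numpy as np

def ransac_translation(src, dst, n_iters=100, thresh=2.0, seed=0):
    """Estimate a 2-D translation mapping matched points src -> dst with RANSAC.
    Returns the refined translation and a boolean inlier mask; noisy matches
    (outliers) are eliminated, as in the tracking step above."""
    rng = random.Random(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.randrange(len(src))              # minimal sample: one match
        t = dst[i] - src[i]                      # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the translation on the inlier consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```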
5. Trajectory Analysis:
- Vehicles are tracked over time, and their trajectories are analyzed to determine their driving direction and count the number of vehicles in both directions.
- The system uses the trajectory data to calculate traffic statistics, such as the number of vehicles per category (car, bus, truck) and their movement direction.
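The Kalman filtering used for consistent tracking can be illustrated with a minimal constant-velocity filter in one image coordinate; this is a generic textbook sketch under assumed noise parameters, not the system's exact formulation:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter.
    x = [position, velocity], P = 2x2 state covariance, z = measured position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise (assumed)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measurement
    y = z - (H @ x)[0]                      # innovation
    S = (H @ P @ H.T)[0, 0] + r             # innovation variance
    K = (P @ H.T)[:, 0] / S                 # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x, P
```

Feeding in a vehicle centroid's position each frame yields smoothed positions and a velocity estimate even through noisy or missed detections.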
6. Vehicle Counting:
- Based on the detection and tracking data, vehicles are counted as they pass predefined lines.
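Steps 5 and 6 reduce to simple logic once trajectories exist: classify direction from net displacement and count trajectories that cross a predefined line. A minimal sketch with hypothetical trajectory data (lists of (x, y) centroids, image y growing downward):

```python
def driving_direction(trajectory):
    """Classify direction from the net vertical displacement of a trajectory."""
    dy = trajectory[-1][1] - trajectory[0][1]
    return "toward_camera" if dy > 0 else "away_from_camera"

def count_line_crossings(trajectories, line_y):
    """Count vehicles per direction whose trajectory crosses a predefined line."""
    counts = {"toward_camera": 0, "away_from_camera": 0}
    for traj in trajectories:
        ys = [y for _, y in traj]
        if min(ys) <= line_y <= max(ys):     # the trajectory crossed the line
            counts[driving_direction(traj)] += 1
    return counts
```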
Proposed Solution:
1. Camera Installation: Install cameras at strategic locations (e.g., toll booths, traffic signals, parking entrances) to capture images of vehicles.

2. Image Processing: Use computer vision algorithms to process captured images, enhancing quality and extracting relevant features.

3. License Plate Recognition (LPR): Utilize Optical Character Recognition (OCR) technology to detect and read license plate numbers.

4. Vehicle Classification: Analyze images to determine vehicle type (e.g., car, truck, motorcycle), make, and model.
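OCR output in step 3 is usually noisy, so a common follow-up is normalizing the string and validating it against an expected plate format. A minimal sketch; the pattern below (two letters, two digits, one or two letters, four digits) is a hypothetical Indian-style format, not a universal standard:

```python
import re

# Hypothetical plate pattern, e.g. "OD02AB1234"; adjust per deployment region.
PLATE_RE = re.compile(r"^[A-Z]{2}\d{2}[A-Z]{1,2}\d{4}$")

def normalize_plate(raw):
    """Uppercase the OCR output, strip separators and stray characters,
    and return the plate only if it matches the expected format."""
    text = re.sub(r"[^A-Z0-9]", "", raw.upper())
    return text if PLATE_RE.match(text) else None
```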
BLOCK DIAGRAM
+------------------------+
|         Camera         |
+------------------------+
            |
            v
+------------------------+
|    Image Processing    |
|     (Enhancement,      |
|   Feature Extraction)  |
+------------------------+
            |
            v
+------------------------+
|     License Plate      |
|   Recognition (LPR)    |
+------------------------+
            |
            v
+------------------------+
| Vehicle Classification |
|  (Type, Make, Model)   |
+------------------------+
            |
            v
+------------------------+
|  Database Comparison   |
| (Registered Vehicles)  |
+------------------------+
            |
            v
+------------------------+
|      Alert System      |
|     (Wanted/Stolen     |
|       Vehicles)        |
+------------------------+
KEY COMPONENTS
1. Camera: Captures images of vehicles.
2. Image Processing Software: Enhances image
quality, extracts features.
3. LPR Software: Reads license plate numbers.
4. Vehicle Classification Software: Analyzes images
to determine vehicle type, make, and model.
5. Database: Stores registered vehicle information.
6. Alert System: Triggers notifications to
authorities/operators.
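Components 5 and 6 amount to looking the recognized plate up against stored records and triggering an alert on a match. A minimal in-memory sketch with hypothetical plate data (a real deployment would back this with a database management system, per the technologies slide):

```python
REGISTERED = {"OD02AB1234", "OD05CD6789"}          # hypothetical registered plates
WATCHLIST = {"OD02AB1234": "reported stolen"}      # hypothetical alert records

def check_plate(plate):
    """Compare a recognized plate against the database and flag alerts."""
    if plate not in REGISTERED:
        return {"plate": plate, "status": "unregistered", "alert": True}
    reason = WATCHLIST.get(plate)
    if reason:
        return {"plate": plate, "status": reason, "alert": True}
    return {"plate": plate, "status": "ok", "alert": False}
```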

TECHNOLOGIES USED
1. Computer Vision
2. Optical Character Recognition (OCR)
3. Machine Learning (ML)
4. Database Management Systems
5. Alert/Notification Systems

This solution can be integrated with existing traffic management systems, parking management systems, or law enforcement databases to enhance vehicle identification and tracking capabilities.

CODE
(code listings were presented as slide images and are not recoverable here)
RESULT AND DISCUSSION

Performance Metrics:
- The system showed high detection accuracy for small vehicles in the remote area of the highway, addressing a common challenge in highway vehicle detection. By dividing the road into remote and proximal sections and applying the YOLOv3 model, detection performance was enhanced for both small and large vehicles.
- The multi-object tracking system based on ORB (Oriented FAST and Rotated BRIEF) feature extraction was effective, achieving a tracking accuracy of 92.3% for determining vehicle direction and 93.2% for counting vehicles.
- A key takeaway from the discussion is that the approach combines efficient object detection with accurate tracking, making it suitable for real-time applications in highway traffic management.

These results highlight the success of the system in improving vehicle detection in highway scenes, particularly for small objects and multi-object tracking.
POSSIBLE FUTURE WORKS

Phase 1
Explore the integration of
advanced deep learning
techniques to further enhance
the accuracy and efficiency of
vehicle identification systems

Phase 2
Investigate the application of
transfer learning methods to
adapt the existing models to
different geographical regions
and license plate formats

Phase 3
Conduct research on real-time
implementation of the dual
identification system in practical
scenarios to assess its performance
in dynamic environments

Phase 4
Evaluate the scalability of the
proposed system for large-scale
deployment and explore potential
optimizations for resource-
efficient operation
CONCLUSION
In conclusion, the four research papers highlight the advancements in vehicle
detection and classification systems through deep learning (DL) and machine
learning (ML) techniques. Approaches like YOLOv3 and enhanced HOG (S-HOG)
features have significantly improved accuracy, precision, and real-time performance
in both daytime and nighttime scenarios. However, challenges such as detecting
small or distant vehicles, handling occlusions, and adapting to varying weather
conditions remain. While these DL-based methods outperform traditional
approaches like number plate recognition or RFID, they still require further
optimization, especially in terms of computational efficiency and robustness in
diverse environments. Future research should focus on addressing these limitations
to enhance the effectiveness of vehicle detection systems for intelligent
transportation, autonomous driving, and traffic monitoring.
REFERENCES
1. Al-Smadi, M., Abdulrahim, K., & Salam, R.A. (2016). Traffic surveillance: A review of vision based vehicle detection, recognition and tracking. International Journal of Applied Engineering Research, 11(1), 713–726.
2. Radhakrishnan, M. (2013). Video object extraction by using background subtraction techniques for sports applications. Digital Image Processing, 5(9), 91–97.
3. Qiu-Lin, L.I., & Jia-Feng, H.E. (2011). Vehicles detection based on three-frame-difference method and cross-entropy threshold method. Computer Engineering, 37(4), 172–174.
4. Liu, Y., Yao, L., Shi, Q., & Ding, J. (2014). Optical flow based urban road vehicle tracking. In 2013 Ninth International Conference on Computational Intelligence and Security. IEEE. https://doi.org/10.1109/cis.2013.89
5. Park, K., Lee, D., & Park, Y. (2007). Video-based detection of street-parking violation. In International Conference on Image Processing, vol. 1 (pp. 152–156). Las Vegas: IEEE. https://www.tib.eu/en/search/id/BLCP%3ACN066390870/Video-based-detectionofstreet-parking-violation
6. Ferryman, J.M., Worrall, A.D., Sullivan, G.D., & Baker, K.D. (1995). A generic deformable model for vehicle recognition. In Proceedings of the British Machine Vision Conference 1995. British Machine Vision Association. https://doi.org/10.5244/c.9.13
7. Han, D., Leotta, M.J., Cooper, D.B., & Mundy, J.L. (2006). Vehicle class recognition from video based on 3D curve probes. In 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance. IEEE. https://doi.org/10.1109/vspets.2005.1570927
THANK YOU!
