License Plate Recognition System (LPR)
1. Title:
“License Plate Recognition System (LPR)”
2. Abstract:
Transportation is one of the essential needs of the economy of any developed or developing
country. The population of automobiles is increasing rapidly, and in India one death occurs
every four minutes due to road accidents, which makes it difficult to keep track of all these
incidents. Parking areas are often densely packed, which leads to minor accidents and causes
damage to vehicles that goes unnoticed. Whether on the road or in a parking slot, in most
cases the car owner never learns which vehicle (or person) dented or damaged the car, and
bears unnecessary expenses for a mistake committed by an unknown person.
With a License Plate Recognition system, it is possible to identify the license plate of a car
or any other vehicle involved in an accident or a traffic-rule violation, and to collect the
details of the vehicle from the license number. In this project we focus mainly on license
plate detection and character recognition in a given image, using Computer Vision, image
processing, and Optical Character Recognition (OCR). License Plate Recognition systems are
widely used today in applications such as highway tolling, smart parking, and vehicle theft
and damage prevention. An LPR system is at the heart of any Intelligent Transportation
System (ITS) and is used by police forces and highway patrols for traffic monitoring and
effective enforcement of traffic rules.
3. Keywords:
License Plate Recognition, Optical Character Recognition (OCR), Computer Vision, Image
Processing, K-Nearest Neighbors (KNN), Support Vector Machine (SVM)
4. Introduction:
The increase in population and its requirements has increased the number of vehicles on the
road, which has led to an increase in accidents and violations of traffic rules. Monitoring
vehicles for law enforcement and security purposes is a difficult problem because of the
number of automobiles on the road today. An example lies in border patrol: it is
time-consuming for an officer to physically check the license plate of every car, and it is not
feasible to employ a large number of police officers as full-time license plate inspectors.
Police patrols cannot simply drive around staring at the plates of other cars. There must exist
a way of detecting and identifying license plates without constant human intervention. As a
solution, we have implemented a system that can extract the license plate number of a vehicle
from an image, given a set of constraints.
A number plate is the unique identification of a vehicle. LPR is designed to detect the
number plate and recognize the characters on it; from the obtained license number we can
then collect the details needed to track a vehicle involved in an accident or any other
violation of the law.
5. Literature survey:
Real-time segmentation of dynamic regions or objects in images is often referred to as
background subtraction or foreground segmentation, and it is a basic step in several computer
vision applications.
One approach provides a robust technique for localization, segmentation, and recognition of
the characters within the located plate. Images from still cameras or videos are obtained and
converted into grayscale. Hough lines are determined using the Hough transform, and
segmentation of the grayscale image, generated by finding edges on the smoothed image, is
employed to reduce the number of connected components before the connected components
are computed. Finally, each individual character within the registration code is detected. The
authors show that the proposed technique achieves high accuracy by optimizing numerous
parameters, yielding a higher recognition rate than standard methods [3].
In another method, based on neural networks, a perceptron is trained by providing a sample
set and a few intelligent rules. The problem with neural networks is that training a perceptron
is quite difficult and requires huge sample sets. If the neural network is not trained
appropriately, it may not handle scale and orientation invariance, and training a network with
rules that solve these problems is even more difficult. Template matching, on the other hand,
is an easier technique than neural networks and does not require powerful hardware, but it is
susceptible to problems of scale and orientation [4].
In the project proposed by Vinay Kumar V and Dr. R. Srikant Swamy, a Sobel edge detector
was used to locate edge points in the image. Intensity variation and a periodogram were used
to identify license plates in the image. An iterative back-propagation approach was used to
construct high-resolution images from several low-resolution images, overcoming the effects
of motion blur, camera misfocus, and sensor aging. Character extraction was done by
connected component analysis, and character recognition by feature extraction from the
images followed by classification with an SVM. Feature extraction was done by three
methods: Principal Component Analysis, Linear Discriminant Analysis, and the HOG
transform [5].
6.1 License Plate Recognition (LPR) Algorithm using KNN and SVM:
Fig. 1.0 represents the flow of the LPR algorithm. The captured image is first converted into
a full-contrast grayscale image, then passed through a Gaussian filter for noise removal, and
adaptive thresholding is performed for better output. For plate extraction, the algorithm looks
for possible characters in the scene. When a possible character is found, it checks for
characters beside it and identifies the plate length. The algorithm uses contours to predict the
characters, taking account of their rectangular bounding areas and a particular aspect ratio.
The extracted plate then goes through preprocessing and thresholding again.
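The preprocessing stages above (grayscale conversion, smoothing, adaptive thresholding) are typically done with OpenCV's cvtColor, GaussianBlur, and adaptiveThreshold; as a library-free illustration, here is a minimal NumPy sketch of local-mean adaptive thresholding (the block size, constant c, and toy image are illustrative, not project values):

```python
import numpy as np

def adaptive_threshold(gray, block=5, c=2):
    # Local-mean adaptive threshold: a pixel becomes foreground (255) when it
    # is darker than the mean of its block x block neighbourhood minus a small
    # constant c, which keeps dark characters on a bright plate background.
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            if gray[y, x] < local_mean - c:
                out[y, x] = 255
    return out

# Toy "image": a single dark pixel (a character stroke) on a bright background.
gray = np.full((7, 7), 200, dtype=np.uint8)
gray[3, 3] = 50
out = adaptive_threshold(gray)
print(out[3, 3], out[0, 0])  # the dark pixel is kept, the background dropped
```

OpenCV's cv2.adaptiveThreshold performs the same per-pixel comparison with optimized code and also offers a Gaussian-weighted variant.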
Once the plate is extracted, the algorithm checks for the list of possible matching characters
in it. Before that, basic mathematical operations are applied to relate the characters: the
Pythagorean theorem gives the distance between two characters, and trigonometric operations
give the angle between them. Character recognition is done by a K-Nearest Neighbors (KNN)
classifier, which uses a large dataset to generate a classifications file and an image-values
(flattened images) file against which the input samples are compared. Each character is
matched against the values in these files and the output is obtained accordingly.
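The nearest-neighbour matching described above can be sketched as follows; the toy 2-D vectors stand in for the flattened character images and the label array for the classifications file (both are illustrative, not the project's training data):

```python
import numpy as np

def knn_classify(train_vectors, labels, query, k=3):
    # Euclidean distance from the query to every training vector,
    # where each training vector is a flattened character image.
    dists = np.linalg.norm(train_vectors - query, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    # Majority vote among the k nearest neighbours.
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Toy stand-in for the flattened-images and classifications files.
train_vectors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                          [10.0, 10.0], [11.0, 10.0], [10.0, 11.0]])
labels = np.array(["A", "A", "A", "B", "B", "B"])
print(knn_classify(train_vectors, labels, np.array([0.5, 0.5])))  # A
```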
The image-values (flattened images) file and classifications file can be generated through the
operation shown in Fig. 5.4; using a greater number of image samples leads to higher
accuracy. When training is completed, the characters are matched against the standard files
and the output is obtained, as shown in Fig. 5.5. The different types of character samples
taken for training are illustrated in Fig. 5.6.
Figure 3.1: SVM in Scikit-learn supports both sparse and dense sample vectors as input.
Classification of SVM
SVC
The objective of a Linear SVC (Support Vector Classifier) is to fit the data you provide,
returning a "best fit" hyperplane that divides or categorizes your data. From there, after getting
the hyperplane, you can then feed some features to your classifier to see what the "predicted"
class is.
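This fit-then-predict workflow can be illustrated with scikit-learn's LinearSVC on toy 2-D data (the points and labels below are illustrative, not project data):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Two linearly separable toy classes in 2-D.
X = np.array([[0, 0], [1, 1], [0, 1], [8, 8], [9, 9], [8, 9]])
y = np.array([0, 0, 0, 1, 1, 1])

# Fit a "best fit" separating hyperplane, then predict unseen points.
clf = LinearSVC(C=1.0)
clf.fit(X, y)
print(clf.predict([[0.5, 0.5], [8.5, 8.5]]))  # one point from each side
```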
The algorithm
SVC uses the Support Vector Domain Description (SVDD) to delineate the region of data
space where the input examples are concentrated. SVDD belongs to the general category of
kernel-based learning. In its "linear" version, SVDD looks for the smallest sphere that
encloses the data. When used in conjunction with a kernel function, it looks for the smallest
enclosing sphere in the feature space defined by that kernel. While in feature space the data is
described by a sphere, when mapped back to data space the sphere is transformed into a set of
non-linear contours that enclose the data (see Figure 2). SVDD provides a decision function
that tells whether a given input is inside the feature-space sphere or not, i.e. whether a given
point belongs to the support of the distribution. More specifically, it is the squared radius of
the feature-space sphere minus the squared distance of the image of a data point x from the
center of that sphere. This function, denoted f(x), returns a value greater than 0 if x is inside
the feature-space sphere and a negative value otherwise.
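scikit-learn does not ship SVDD directly, but its One-Class SVM with an RBF kernel solves an equivalent problem up to parameterisation, so the sign behaviour of the decision function f(x) can be demonstrated with it (the data and parameters below are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# A dense cluster of points around the origin.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))

# With an RBF kernel, One-Class SVM is equivalent to SVDD up to parameterisation.
svdd = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)

inside = svdd.decision_function([[0.0, 0.0]])[0]   # inside the support
outside = svdd.decision_function([[8.0, 8.0]])[0]  # far outside the data
print(inside > 0, outside < 0)
```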
Figure 3.2: The line segment that connects points in different clusters has to go through a
low-density region in data space where the SVDD returns a negative value.
The key geometrical observation that makes it possible to infer clusters from the SVDD is
that, given a pair of data points belonging to different components (clusters), the line segment
that connects them must pass through a region of data space that is part of a "valley" in the
probability density of the data, i.e. does not belong to the support of the distribution. Such a
line must then go outside the feature-space sphere, and therefore has a segment of points that
return a negative value when tested with the SVDD decision function (see Figure 3.2). This
observation leads to the definition of an adjacency matrix A between pairs of points in the
dataset: for a given pair of points xi and xj, the element Aij equals 1 if f(x) > 0 for every
point x on the line segment connecting xi and xj, and 0 otherwise. Clusters are then defined
as the connected components of the graph induced by A. Checking the line segment is
implemented by sampling several points along it (20 points were used in numerical
experiments).
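The segment test and adjacency matrix can be sketched as follows; the 1-D two-valley decision function here is a toy stand-in for the SVDD decision function:

```python
import numpy as np

def connected(xi, xj, decision_fn, n_samples=20):
    # Sample n_samples points along the segment from xi to xj; the pair is
    # adjacent only if the decision function stays positive the whole way.
    return all(decision_fn(xi + t * (xj - xi)) > 0
               for t in np.linspace(0.0, 1.0, n_samples))

def adjacency_matrix(points, decision_fn):
    n = len(points)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = int(connected(points[i], points[j], decision_fn))
    return A

# Toy 1-D decision function: positive near 0 and near 5, negative in between.
f = lambda x: 1.0 - min(abs(x[0]), abs(x[0] - 5.0))
pts = [np.array([0.0]), np.array([0.5]), np.array([5.0]), np.array([5.5])]
A = adjacency_matrix(pts, f)
print(A)  # two connected components: {0, 1} and {2, 3}
```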
The captured image is shown in Figure 4.1. It is converted into a grayscale image as shown
in Figure 4.2. After grayscale conversion, adaptive thresholding is performed, as shown in
Figure 4.3. After thresholding, all contours are extracted, as illustrated in Figure 4.4.
Figure 4.3: Thresholded image after performing adaptive thresholding on grayscale image
Figure 4.5: Contours filtered from all extracted contours based on certain parameters
All extracted contours are filtered and only the characters are identified. After this, the plate
length is determined by looking for all characters adjacent to each other, as shown in Figure
4.5. The plate is extracted as shown in Figure 4.6. It again undergoes thresholding, and all
characters are extracted, resized, and fed to the KNN classifier as vectors. The output of the
classifier is a class label, i.e. A, B, C, 1, 2, etc. In this way we obtain a string which is the
license plate number of the vehicle.
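Resizing each character crop to a fixed size and flattening it into a vector, as described above, can be sketched without OpenCV as follows (nearest-neighbour resampling; the 20x20 target size is illustrative):

```python
import numpy as np

def to_feature_vector(char_img, size=(20, 20)):
    # Nearest-neighbour resize to a fixed size, then flatten to 1-D so
    # every character yields the same feature length for the classifier.
    h, w = char_img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return char_img[np.ix_(rows, cols)].astype(float).ravel()

# A 40x60 character crop becomes a 400-element feature vector.
vec = to_feature_vector(np.ones((40, 60)))
print(vec.shape)  # (400,)
```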
The approach used to segment the images is Connected Component Analysis (CCA).
Connected regions imply that all the connected pixels belong to the same object; a pixel is
said to be connected to another if they both have the same value and are adjacent to each
other.
Car image -> grayscale image -> binary image -> apply CCA to get connected regions ->
detect the license plate among all connected regions (assumptions made: the width of the
license plate region is between 15% and 40% of the full image width, and its height is
between 8% and 20% of the full image height).
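The labelling step of this pipeline can be sketched with SciPy's ndimage.label (the tiny binary image below is illustrative; a real frame would use the 15-40% width and 8-20% height thresholds stated above):

```python
import numpy as np
from scipy import ndimage

# Toy binary image with two separate foreground blobs.
binary = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
])

# Label connected regions: adjacent foreground pixels share a label.
labels, n_regions = ndimage.label(binary)
print(n_regions)  # 2

# Inspect each region's bounding box relative to the image, as in the
# plate-detection assumptions (width and height as fractions of the frame).
h, w = binary.shape
for slc in ndimage.find_objects(labels):
    region_h = (slc[0].stop - slc[0].start) / h
    region_w = (slc[1].stop - slc[1].start) / w
    print(round(region_w, 2), round(region_h, 2))
```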
The output of the first step is the license plate image detected in the car image. This is
provided as input to step 2, where CCA is applied to bound the characters in the plate; each
identified character is appended to a list. The model is trained using SVC with 4-fold
cross-validation on the dataset present in the directory train20X20, and is saved as
finalized_model.sav. Once the characters of the plate are obtained, the saved model is loaded
to predict each character.
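Training, cross-validating, saving, and reloading the character model can be sketched as follows; the random vectors stand in for the flattened 20x20 character images of train20X20 (the data is illustrative, while the model filename follows the project):

```python
import pickle
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy stand-in for flattened 20x20 character images: one Gaussian blob per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, size=(20, 400)) for c in range(3)])
y = np.repeat(["A", "B", "C"], 20)

model = SVC(kernel="linear")
scores = cross_val_score(model, X, y, cv=4)  # 4-fold cross-validation

# Fit on the full set, save, and reload to predict individual characters.
model.fit(X, y)
with open("finalized_model.sav", "wb") as fh:
    pickle.dump(model, fh)
with open("finalized_model.sav", "rb") as fh:
    loaded = pickle.load(fh)
print(scores.mean(), loaded.predict(X[:1])[0])
```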
The license plate recognition algorithm does not give the expected results in the following
cases:
• If the characters are not visible or the license plate is damaged, as shown in Figure 6.4.
• If the scene is too complex: many contours extracted from the thresholded image are then
misinterpreted as characters, as shown in Figure 6.5, so plate extraction becomes difficult.
• If the illumination is too low and the characters are not properly visible.
9. Conclusion:
In this project, a License Plate Recognition system based on the vehicle's license plate is
presented. The system uses image processing techniques for recognition of the vehicle, and it
works satisfactorily under a wide variation of conditions and for different types of number
plates. The system is implemented and executed in PyCharm, and its performance is tested
on genuine images. This LPR system works quite well; however, the character recognition
techniques need improvement, because real-time implementation of LPR is a demanding
task. The OCR method is sensitive to misalignment and to different character sizes, so
different kinds of templates have to be created for different RTO specifications. At present
there are certain limits on parameters such as the script on the vehicle number plate and skew
in the image, which can be removed by further enhancing the algorithms.
10. References:
[1] P. Prabhakar, P. Anupama and S. R. Resmi, “Automatic vehicle number plate detection
and recognition,” International Conference on Control, Instrumentation, Communication and
Computational Technologies (ICCICCT), pp. 185-190, 2014.
[2] B. Pechiammal and J. A. Renjith, “An efficient approach for automatic license plate
recognition system,” Third International Conference on Science Technology Engineering
Management (ICONSTEM), Chennai, pp. 121-129, 2017.
[3] P. Prabhakar, P. Anupama and S. R. Resmi, “Automatic vehicle number plate detection and
recognition,” International Conference on Control, Instrumentation, Communication and
Computational Technologies (ICCICCT), Kanyakumari, pp-185-190, 2014.
[4] H. Karwal and A. Girdhar, “Vehicle Number Plate Detection System for Indian Vehicles,”
IEEE International Conference on Computational Intelligence Communication Technology,
Ghaziabad, pp. 8-12, 2015.
[5] V. Vinay Kumar and R. Srikant Swamy, “Automatic License Plate Recognition using
Histogram of Oriented Gradients for character recognition,” 2014.