Review
Automatic Number Plate Recognition: A Detailed Survey of
Relevant Algorithms
Lubna 1 , Naveed Mufti 2, * and Syed Afaq Ali Shah 3
Received: 17 March 2021; Accepted: 21 April 2021; Published: 26 April 2021

Keywords: automatic number plate recognition; image processing; computer vision; machine learning; vehicle identification; neural networks; intelligent transportation system; smart vehicle technologies; object detection and tracking; recognition
modern world. People increasingly migrate away from rural areas to live in cities. Local governments often fail to recognize the present and potential mobility needs of residents and visitors as traffic rises in these areas. ANPR is increasingly used to monitor the free flow of traffic, facilitating intelligent transportation [1].
Not only can modern ANPR cameras read plates, but they can provide useful addi-
tional information such as counting, direction, groups of vehicles and their speed. The
ability to detect and read large volumes of fast moving vehicles has meant that ANPR
technology has found its way into many aspects of today’s digital landscape. Whilst ANPR technology comes in many different packages, all perform the same basic function: reading a vehicle’s number plate accurately without human intervention. It is utilized in very diverse applications such as access control, parking management, tolling, user billing, delivery tracking, traffic management, policing and security services, customer services and directions, red-light and lane enforcement, queue length estimation, and many other services [2–8]. Figure 1 shows the basic system
diagram of a fixed and mobile ANPR technology.
Figure 1. Typical ANPR System Diagram of a Fixed ANPR System (right) and a Mobile ANPR
System (left) (Source: latech.us, accessed on 5 November 2020).
Number Plate Recognition involves acquisition of number plate images from the
intended scene, using a camera. Either still images or a photographic video is captured and
further processed by a series of image processing based recognition algorithms to attain an
alpha-numeric conversion of the captured images into a text entry. Once a good quality image of the scene/vehicle is obtained, the core dependence of any ANPR system is on the robustness of its algorithms. These algorithms need very careful consideration and require thousands of lines of software code to achieve the desired results and cover all system complexities. As a whole, a series of primary algorithms is necessary for smart vehicle technologies and ANPR to be effective. The general process involved in ANPR systems is shown in Figure 2.
A typical ANPR system goes through the general process of image acquisition (input
to the system), number plate extraction (NPE), character segmentation (CS) and character
recognition (CR) (as output from the system) [9]. After successful recognition of the vehicle, the data can be accessed and used for post-processing operations as required. The vehicle data are sent to the connected back-office system software, which is the central repository for all data along with tools to support data analysis, queries and reporting. The data collected can be utilized for several other intelligent transportation applications since
ANPR systems not just visually capture the vehicle images but also record the metadata
in their central repository. This can potentially include vehicle recognition through date
and time stamping as well as exact location, whilst storing a comprehensive database of
traffic movement. This data can be helpful in modelling different transport systems and
their analysis.
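As an illustration of the four stages just described, a minimal pipeline sketch is given below. It assumes OpenCV and the pytesseract wrapper for the Tesseract OCR engine are available; the plate-localization step shown is a simple contour heuristic for illustration only, not the method of any particular cited work, and "vehicle.jpg" is a hypothetical input file.

```python
# Minimal ANPR pipeline sketch: acquisition -> plate extraction -> segmentation/recognition.
import cv2
import pytesseract

def extract_plate(gray):
    """Return a candidate plate region using a simple edge/contour heuristic."""
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(cnt)
        if h > 0 and 2.0 < w / h < 6.0:          # typical plate aspect-ratio range
            return gray[y:y + h, x:x + w]
    return None

image = cv2.imread("vehicle.jpg")                 # image acquisition
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plate = extract_plate(gray)                       # number plate extraction (NPE)
if plate is not None:
    _, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # character segmentation and recognition (CS/CR) delegated to Tesseract in this sketch
    text = pytesseract.image_to_string(binary, config="--psm 7")
    print("Recognized plate text:", text.strip())
```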
The image taken from the scene may suffer from complexities depending upon the type of camera used, its resolution, lighting/illumination aids, the mounting position, area/lane coverage capability, complex scenes, shutter speed and other environmental and system constraints. Figure 3 shows the diversity of license plates in styles, colors, fonts, sizes, and physical conditions, which may affect recognition accuracy. When a vehicle is detected in the scene/image, the system uses plate localization functions to extract the license plate from the vehicle image, a process commonly termed Number Plate Extraction. Characters on the extracted number plate are then segmented prior to the recognition process.
Character segmentation is the stage that locates the alphanumeric characters on a number plate. The segmented characters are then translated into an alphanumeric text entry using optical character recognition (OCR) techniques. For character recognition, algorithms such as template matching or neural network classifiers are used. The performance of an ANPR system relies on the effectiveness of each individual stage. A parameter used to quantify the whole process is the performance rate or success rate, which is the ratio of the number of number plates successfully recognized to the total number of input images. The performance rate involves all three stages of the recognition process: number plate extraction, segmentation and character recognition.
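As a simple numeric illustration (the figures below are invented, not taken from any cited work), the success rate can be computed directly from the count of correctly recognized plates, and a rough estimate of it follows from chaining the per-stage rates:

```python
# Hypothetical counts used only to illustrate the success-rate definition.
total_images = 1000
recognized_plates = 930
success_rate = recognized_plates / total_images          # 0.93, i.e. 93%

# Rough chained estimate from per-stage rates (extraction, segmentation, recognition).
extraction_rate, segmentation_rate, recognition_rate = 0.98, 0.97, 0.98
estimated_overall = extraction_rate * segmentation_rate * recognition_rate   # ~0.93
print(f"measured {success_rate:.1%}, chained estimate {estimated_overall:.1%}")
```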
The ANPR system collects its primary information from the ANPR software, including the images and their associated metadata. It provides the transport system with
automation and security features. Its integration in ITS makes it possible to automate the system by providing services in toll collection, traffic analysis, improved law enforcement, and the building of a comprehensive database of traffic movements. Integrating ANPR
with Information Communication Technology (ICT) tools is another useful feature of the
technology. The data from ANPR systems can be well utilized for modelling and imple-
mentation of various aspects of transport systems such as to model Passenger Mobility
systems [10], traffic flow analysis and road network control strategies using Network Fun-
damental Diagram (NFD) models [11], in vehicle routing choice model to decide on Route
and Path Choices of Freight Vehicles [12] and travel demand patterns through Floating Car
Data (FCD) [13].
Figure 3. License plate diversity in styles, colors, fonts, sizes, and physical conditions. (Source: Plate
Recognizer ALPR [14]-a division of ParkPow [15]).
number plate characters. It is possible that the number plate is obscured with dirt, broken, or located at a position that is out of the camera’s sight (since different types of vehicles have their number plates affixed at different positions on the vehicle body). Environmental factors such as light, motion blur, reflections, fog, and similar conditions make it challenging for the system to extract the number plate efficiently. Algorithms using geometrical features for extracting the rectangular-shaped number plates may have issues if there are multiple similar shapes drawn/pasted on the car body. Along with rectangular shape features, further algorithms must be used to eliminate the unwanted regions.
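A minimal sketch of this kind of geometry-based filtering is shown below: contour candidates are kept only if their aspect ratio and area fall within plausible plate ranges. The thresholds are illustrative assumptions rather than values from the cited works.

```python
import cv2

def plate_candidates(gray, min_area=2000, aspect_range=(2.0, 6.0)):
    """Return bounding boxes whose shape roughly matches a rectangular number plate."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w * h < min_area or h == 0:
            continue
        if aspect_range[0] < w / h < aspect_range[1]:
            boxes.append((x, y, w, h))            # candidate region; further checks needed
    return boxes
```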
The algorithms need to be robust to differentiate between the number plate and other
objects in the image frame. Researchers have used various features for the extraction of
number plates. A brief study of these feature extraction algorithms is presented in the
following section.
recognizing its number plate. A success rate of 95.7% for roadside and 93.7% for inspection station test images was achieved. In [36], the Hough Transform is used to extract the number plate using its boundaries. The number plate is located by detecting straight lines in the test image. This transform can detect straight lines with an inclination of up to 30 degrees. However, it is computationally expensive and requires large memory.
In [37], the generalized symmetry transform is used. The corners from the edges in the
image are detected by scanning them in selective directions. By using the generalized
symmetry transform the number plate region is extracted by detecting the similarities
between these corners. The continuity of the edges is important when using edge-based methods, which are considered simple and fast. The extraction rate can be significantly improved by eliminating unwanted edges using morphological steps. A combination of morphology and edge statistics was proposed in [38]. Prior to this, basic pre-processing techniques were also applied to enhance the image for color contrast and noise removal. A 98% extraction rate and a 75–85% overall performance rate were achieved in tests on 9745 images.
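A minimal sketch of the edge-plus-morphology idea is given below, using vertical Sobel edges followed by a morphological closing to merge the dense character edges of the plate into a single blob; the kernel size and thresholds are illustrative assumptions rather than the parameters used in [38].

```python
import cv2

def plate_mask_by_edges(gray):
    """Highlight regions with dense vertical edges, as plates typically produce."""
    sobel_x = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)          # vertical edge response
    _, binary = cv2.threshold(sobel_x, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))  # wide, flat kernel
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # merge character edges
    return closed                                                # connected blobs = candidates
```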
Another effective technique for plate detection used by many researchers is to utilize
the line weight density map. This can be combined with other techniques to improve
results. In [49,57–63], the scan line technique is utilized where the peaks are formed from
the color transition of the grey scale level which corresponds to the number of characters on
the number plate. In [57], the authors proposed a horizontal line scanning with multiple thresholds technique for plate detection in real-time/complex images. The experimental results were compared to the conventional model based on Hough transforms [58], which had a low detection accuracy of 69.8% and a longer processing time of 8–10 s. In comparison, an extraction rate of 99.2% was achieved using the line scan technique in [57], and the execution time for locating the plate was reduced considerably to 0.3–0.5 s.
In [59], the edge lines were selected using a weight density map. This technique was
implemented on a certain set of images and the results were effective with an extraction rate
of 93%, whereas for different image standards the effectiveness dropped to 83%. In [60], the weight density map is combined with neural network based algorithms. A dataset of 400 images under varying conditions was tested. This hybrid approach proved to be effective, with a success rate of 97.23% and an average extraction time of 0.0093 s per image, based on simulation results. In [49], the authors used histogram data combined with morphological features. A 97.7% success rate was recorded for a set of 360 images under different conditions. The vector quantization technique is used to detect the characters in the image by mapping the higher contrast regions into smaller blocks. Using images of different quality, it yielded 98% successful detection with a processing time of 200 ms [64].
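The scan-line idea can be sketched as counting grey-level transitions along each image row; rows whose transition counts match what a line of plate characters would produce are kept as candidates. The transition thresholds below are illustrative assumptions, not the values used in [57–63].

```python
import numpy as np

def candidate_rows(binary, min_transitions=14, max_transitions=40):
    """Score each row of a binarized image by the number of black/white transitions."""
    transitions = np.count_nonzero(np.diff(binary > 0, axis=1), axis=1)
    # Rows crossing many character strokes show a characteristic number of transitions.
    return np.where((transitions >= min_transitions) & (transitions <= max_transitions))[0]
```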
The use of sliding concentric window was proposed in [54,65]. It uses the texture
irregularities in the image. The area with abrupt changes is considered to be the candidate
number plate region. In [54], histogram was used along with the sliding concentric window
method. Texture analysis is widely used for number plate extraction. Gabor filters, as candidate tools, can analyze textures in any number of orientations and scales [66]. This is, however, a computationally expensive method. Using fixed and specifically angled images, it resulted in a 98% success rate [67].
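A small sketch of a Gabor-based texture response is shown below, summing filter responses over a few orientations; the kernel parameters are illustrative assumptions rather than those of [66,67].

```python
import cv2
import numpy as np

def gabor_texture_response(gray, orientations=(0, 45, 90, 135)):
    """Accumulate Gabor filter responses over several orientations."""
    accumulated = np.zeros_like(gray, dtype=np.float32)
    for theta_deg in orientations:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=np.deg2rad(theta_deg),
                                    lambd=10.0, gamma=0.5, psi=0)
        accumulated += cv2.filter2D(gray, cv2.CV_32F, kernel)
    return accumulated                     # high values indicate text-like texture
```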
Wavelet transform is used in [62,68]. In [62], horizontal reference lines with wavelet
transform processing achieved a success of 92.4%. In [68], using wavelet transform, an
accuracy of 97.3% is achieved with a processing time of 0.2 s. A combination of Haar-like
features and adaptive boosting is used in [69,70]. This feature is commonly used to detect
objects and is invariant to the position, size, contrast or color of the number plate. The
gradient density is used by the cascade classifiers in [69], with a detection success of 93.5%.
With the adaptive boosting technique, images of different sizes and formats under various illumination conditions yielded a 99% detection rate [71].
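A sketch of cascade-based detection is shown below; it assumes a boosted cascade of Haar-like or LBP features has already been trained and saved as plate_cascade.xml (a hypothetical file), since OpenCV only ships such cascades for some object classes.

```python
import cv2

# 'plate_cascade.xml' is a hypothetical, pre-trained boosted cascade for plate regions.
cascade = cv2.CascadeClassifier("plate_cascade.xml")

def detect_plates(gray):
    """Run the boosted cascade over an image pyramid and return plate bounding boxes."""
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4, minSize=(60, 20))
```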
Texture based techniques are independent of the number plate boundary and can detect the number plate properly even if it is deformed. However, these techniques become computationally complex if too many edges are detected in the image, the background contains multiple elements, or the illumination is insufficient.
specified range and has the same number of objects as the characters, the region between
these lines is termed as the number plate area. In [74], the scale-space analysis is used to
extract the number plate characters. This technique extracts blob type large sized figures
that include line type of smaller figures as character candidates. In [75], firstly, the regions
containing characters are identified in terms of the character width and the difference
between character region and its background. Then the number plate extraction process is
executed in the plate region to identify inter-character distance. This extraction technique
produces an extraction accuracy of about 99.5%. In [76], the first stage character classifier
obtains a primary set of all the possible character-like regions. Then the set is passed to
the next stage classifier. This second stage classifier process eliminates the non-character
regions from the initial set. In this technique, 36 AdaBoost classifiers act as primary stage
classifier. The second stage classifier is employed with Support Vector Machine (SVM),
which uses Scale-Invariant Feature Transform (SIFT) algorithm, to detect and describe
local features of number plate images. These feature-extraction techniques on binary images, used to define the number plate region, take quite a long time because all objects in the binary image must be processed. They also generate errors if the image contains other text.
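A minimal sketch of character-region candidate filtering on a binary image is given below, keeping connected components whose height and aspect ratio look character-like; the ranges are illustrative assumptions rather than those of [74–76].

```python
import cv2

def character_like_components(binary, min_height=12, max_height=80):
    """Return bounding boxes of connected components shaped like characters."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):                              # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_height <= h <= max_height and 0.2 < w / h < 1.2 and area > 30:
            boxes.append((x, y, w, h))
    return boxes
```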
edges and color within this window to determine if it is a candidate area containing
number plate. In [85], the neural network horizontally scans an HLS image with a 1 × M
window. Here, M represents an approximate width value of the number plate, and vertical
scanning is done with an N × 1 window, where N indicates the height of the plate. For
every pixel of the image, the hue value is used to denote the color details and intensity
value represents the texture details. Resultant of both the horizontal and vertical scans is
combined to extract the candidate areas of the plate. In [86], Time-Delay Neural Network
(TDNN) is processed for number plate extraction. Two such TDNNs are implemented
in the color and texture analysis of the number plate by checking small windows of the
image’s horizontal and vertical cross sections. Based on pixel values similarity with the
number plate, the area with higher edge density is extracted as a number plate. Edge and
color information based approach is used for extraction in [87]. A covariance matrix has been used for plate extraction in [88], based on combining spatial information and statistical data. Each matrix carries enough information to match the corresponding area across multiple views. This matrix is used to train the neural network efficiently in order to detect the number plate area. A combination of texture, shape
and color features is used in [89]. The number plate extraction from 1176 images shows
extraction rate of 97.3%, considering various light illumination condition and scenes.
Connected Component Labelling (CCL), threshold and Gabor filter are combined in [66]
to extract number plates. To detect edges, wavelet transform is used in [87]. This is
done by employing the morphology techniques once edges are detected. The shape and
structure information analyzed from the input images is helpful to localize the number
plate. HLS color decomposition, Hough detection and wavelet analysis are proposed in [90].
In [91], two-dimensional Discrete Wavelet Transform (DWT) is used. The proposed method
successfully eliminates the background noise by highlighting the number plates’ vertical
edges. The plate extraction was done using orthogonal projection histogram inspection
and Otsu’s segmentation. The most accurate candidate is then chosen on the basis of
edge density verification and aspect ratio constraint. In [92], the number plate detection
is done by local structure patterns computed with the Modified Census Transform (MCT). After that, two post-processing steps are utilized to reduce the false positive rate. One is a position-based technique that distinguishes a vehicle plate from false positives with similar local structure patterns, such as radiators or headlights. The other is a color-dependent method that utilizes the overall color details of the number plates.
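A small sketch of the DWT-based idea described above for [91] is given below, assuming the PyWavelets package; a single-level 2D Haar transform is used and the vertical-detail sub-band is taken as the edge map (the choice of wavelet is an assumption).

```python
import numpy as np
import pywt

def vertical_edge_map(gray):
    """Single-level 2D DWT; the vertical-detail coefficients highlight vertical plate edges."""
    _, (cH, cV, cD) = pywt.dwt2(np.float32(gray), "haar")
    return np.abs(cV)          # half-resolution map; strong responses at vertical strokes
```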
Deep Learning (DL) techniques based ANPR systems generally address the character
identification and segmentation as a whole. Montazzolli et al. [93] proposed a CNN
architecture for character segmentation and recognition. The experiment was carried out
using a publicly available dataset. In their technique, 99% of the characters were segmented
successfully while the accuracy of reading the segmented characters was 93%. However,
in spite of the outstanding achievements of DL techniques in ANPR [79,94,95], ANPR datasets with car/vehicle and number plate annotations are still in great demand. The training dataset drives the progressive performance of DL methods. Large training datasets help in training data-hungry deep neural networks and allow better utilization of more robust network architectures with additional layers and parameters.
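For context, a minimal character-classifier CNN of the kind such systems train is sketched below in PyTorch; the layer sizes and the 36-class alphanumeric output are illustrative assumptions, not the architecture of [93].

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Tiny CNN that maps a 32x32 grayscale character crop to one of 36 classes (0-9, A-Z)."""
    def __init__(self, num_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of four hypothetical character crops.
logits = CharCNN()(torch.randn(4, 1, 32, 32))
print(logits.shape)            # torch.Size([4, 36])
```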
Tilted number plates are corrected using a least squares method in [97], which treats both vertically and horizontally oriented tilts. In [98], using the Karhunen-Loeve (K-L) transformation, the character coordinates are organized into a 2D variance matrix. After that, the Eigenvectors and angle of rotation are computed, and tilt correction is applied for both vertical and horizontal tilt of the image. Three techniques, namely K-means cluster based line fitting, least squares based line fitting, and the K-L transform, are suggested to compute the angle of vertical tilt. Applying a threshold seems simple when converting to a binary image, but it is a very challenging part of the whole process. An inappropriate threshold value may result in characters connected to each other or to the number plate frame, which makes segmentation difficult [97]. A single threshold value may not be appropriate for all images, due to variance in image conditions and lighting.
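A minimal sketch of least-squares tilt estimation is given below: a line is fitted through the character centroids and its slope gives the horizontal tilt angle used to de-rotate the plate. This is a simplification of the schemes in [97,98], and the sample centroids are hypothetical.

```python
import numpy as np

def horizontal_tilt_degrees(centroids):
    """Fit y = a*x + b through character centroids; the slope gives the tilt angle."""
    xs = np.array([c[0] for c in centroids], dtype=float)
    ys = np.array([c[1] for c in centroids], dtype=float)
    a, _ = np.polyfit(xs, ys, 1)               # least-squares line fit
    return np.degrees(np.arctan(a))            # rotate the plate by -angle to correct tilt

# Hypothetical centroids of segmented characters on a slightly tilted plate.
print(horizontal_tilt_degrees([(10, 40), (30, 42), (50, 44), (70, 46)]))   # ~5.7 degrees
```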
Image enhancement is necessary before binarizing the image. Enhancement includes
noise removal from the image, enhancing its contrast or to apply histogram equalization
techniques. In [99], to implement gradient analysis over the entire image, a technique was
proposed to sense the number plate followed by the enhancement of the plate by grey color
transformation. In [100], the Niblack binarization algorithm adjusts the image threshold
according to the standard deviation and local mean. In [101], a local threshold technique is applied for every pixel. The threshold value is obtained by subtracting a specified constant from the average grey level in an m × n window centered on the pixel.
In [102], a new method was proposed for noise reduction and character enhancement. The character area was assumed to be about 20% of the number plate. Initially, the grey scale levels are mapped to the range 0–100; the brighter 20% of pixels are then scaled by a factor of 2.55. The noise pixels are thus minimized whereas the characters are intensified. Since image binarization with a single global threshold cannot generate desirable results, an adaptive local binarization method is followed.
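A sketch of a Niblack-style local threshold is shown below: for every pixel the threshold is the local mean plus k times the local standard deviation computed in a w x w window. The values of k and w are illustrative assumptions, not those of [100].

```python
import cv2
import numpy as np

def niblack_binarize(gray, window=25, k=-0.2):
    """Niblack-style local threshold: T(x, y) = mean(x, y) + k * std(x, y)."""
    img = np.float32(gray)
    mean = cv2.boxFilter(img, cv2.CV_32F, (window, window))
    sq_mean = cv2.boxFilter(img * img, cv2.CV_32F, (window, window))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))
    threshold = mean + k * std
    return (img > threshold).astype(np.uint8) * 255
```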
In the following, number plate segmentation methods are reviewed, based on the
features used.
of recognizing multiple number plates present in single images. From the review, it can be concluded that the technique using both horizontal and vertical pixel projections is the simplest and most commonly implemented. Projection techniques show promising results for character segmentation as they do not depend on character position. However, they require prior knowledge of the character count, and image quality and noise may affect the projection values.
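A minimal sketch of projection-based character segmentation is given below: the column-wise sum of foreground pixels is computed for a binarized plate, and near-zero valleys in this vertical projection are taken as cut points between characters (the gap threshold is an assumption).

```python
import numpy as np

def split_characters(binary_plate, gap_threshold=1):
    """Cut a binarized plate at columns whose vertical projection falls below a threshold."""
    projection = np.count_nonzero(binary_plate, axis=0)     # foreground pixels per column
    in_char, start, segments = False, 0, []
    for col, value in enumerate(projection):
        if value > gap_threshold and not in_char:
            in_char, start = True, col                       # character starts
        elif value <= gap_threshold and in_char:
            in_char = False
            segments.append((start, col))                    # character ends
    if in_char:
        segments.append((start, len(projection)))
    return segments                                          # list of (start_col, end_col)
```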
era distance [99]. The characters may be broken, tilted or affected by noise. Character recognition methods are covered in this section.
character contouring. The output waveform is passed through quantization to get the
feature vector. This technique identifies variable sized and multi-font characters as the
character contour does not change with the change in any font or size. In [132], character
extraction is implemented by using Gabor filter. The character edges having the same
orientation angle as the filter will have highest filter response. It can be utilized to generate
characteristic vector per character. In [133], to extract characters in different directions from
the character image, Kirsch edge detection is applied. This method for character identification and extraction yielded more acceptable results than other edge detection techniques, including Wallis, Prewitt, and Frei-Chen [134]. In [135], the characteristic vector extraction is done from a binary image, followed by a thinning operation to turn the character stroke directions into a singular code. In [136], the grey level values of the pixels of 11 sub-blocks are fed into a neural network classifier as character features. In [137], a scene is examined over non-overlapping 5 × 5 pixel blocks, processing the overall image to extract “spread” edge characteristics, following the experiment conducted in [138]. In [139], sub-image classification is described following a coarse-to-fine recognition approach. In [71], three characteristic parameters, namely peripheral background area, contour crossing and directional counts, are utilized, and an SVM performs the classification.
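As an illustration of the directional edge features mentioned above, a small sketch applying two of the eight Kirsch compass kernels is given below; the full operator uses all eight rotated masks, only the east and north masks are shown here.

```python
import cv2
import numpy as np

# Two of the eight 3x3 Kirsch compass masks (east and north directions).
KIRSCH_EAST = np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]], dtype=np.float32)
KIRSCH_NORTH = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]], dtype=np.float32)

def kirsch_features(char_image):
    """Directional edge responses usable as per-character feature maps."""
    east = cv2.filter2D(np.float32(char_image), cv2.CV_32F, KIRSCH_EAST)
    north = cv2.filter2D(np.float32(char_image), cv2.CV_32F, KIRSCH_NORTH)
    return east, north
```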
5. Discussion
Some ANPR systems may use simple image processing techniques, performed un-
der controlled conditions for predictable license plate styles. However, dedicated object detectors (such as HOG, CNN, SVM and YOLO, to name a few) are used by advanced ANPR systems. Further advanced and intelligent ANPR systems utilize state-of-the-art ANPR software based on neural network techniques with AI capabilities. Just like many
other fields, computer vision and machine learning have applications in ANPR too. The
sheer diversity of license plate types across territories, states and countries makes ANPR
challenging. The fact that any ANPR algorithm will need to work in real time further
complicates number plate identification. Hence, utilizing ML, CV and AI techniques can greatly empower ANPR.
Previous reviews from the literature are presented in Table 1. The cited works reviewed various techniques for each stage of an ANPR system. The authors in [17] provided a good collection of references for new researchers in the license plate detection field. However, the research did not compare the performance of recognition techniques in terms of accuracy rates.
The authors in [18] discussed the performance of the various algorithms used in the
past. The percentage efficiency of ANPR systems per year was presented for the years 1999–2015. The conclusions included that the efficiency of ANPR systems is not stable, and that performance varies due to the various factors affecting ANPR, such as noise, environmental conditions, choice of algorithms and model training.
Table 1. Cont.

[17] (2017): This research reviewed various techniques for each stage of an ANPR system, covering:
• License Plate Extraction:
– Edge information Analysis
– Probabilistic model
– Subspace Projection and Probabilistic Neural Network
– Blob Analysis, Mathematical Morphology
– Color Space and Geometrical Properties
– Thresholding, Histogram, Computational Intelligence and
Adaptive Boost techniques
• Segmentation:
– Gabor transform
– K-Means Algorithm
– Tree of Shapes
– Hidden Markov Chains
• Recognition:
– Optical Character Recognition (OCR)
– Embedded DSP-Platform
– Pattern match method
– Computational Intelligence - Neural Networks
Table 2. Columns: No.; Ref.; Extraction; Segmentation; Recognition; Database; Image Condition; Extraction Rate; Segmentation Rate; Recognition Rate; Overall Recognition Rate; Processing Time; Real Time; Device Config.; Plate Format; Problem Areas.

1. [39] — Extraction: adaptive thresholding, CCA, Canny edge detection. Segmentation: Otsu, closed curves. Recognition: template matching. Database: Set 1: 533, Set 2: 651, Set 3: 757, Set 4: 611 (video sequences). Condition: various situations with different light conditions. Extraction rate: Set 1: 96.37%, Sets 2–4: 96.06%. Segmentation rate: —. Recognition rate: —. Overall: Set 1: 98.1%, Set 2: 96.37%, Set 3: 93.07%, Set 4: 92.52%. Time: —. Real time: No. Device: 4 GB DDR4 memory, 3.4 GHz Intel(R) Core(TM) i5 CPU, MATLAB R2015b. Format: Moroccan, four different formats. Problems: —.

2. [38] — Extraction: edge statistics and morphology techniques. Segmentation: bounding box. Recognition: template matching. Database: 9745 images. Condition: —. Extraction rate: 98%. Segmentation rate: —. Recognition rate: 82.6%. Overall: 75–85%. Time: —. Real time: No. Device: MATLAB. Format: Indian. Problems: not suitable for different orientations.

3. [140] — Extraction: used provided images. Segmentation: —. Recognition: HOG features and Extreme Learning Machine. Database: 69 images, 45 images used as trainers, 5 classes. Condition: low-resolution portion of the image, 15–18 px height. Extraction rate: —. Segmentation rate: —. Recognition rate: 90%. Overall: 90%. Time: —. Real time: Yes. Device: —. Format: South Thailand. Problems: daytime only; no license localization process is applied.

4. [141] — Extraction: morphology techniques. Segmentation: regionprops bounding box using Matlab. Recognition: template matching. Database: 30 images. Condition: low brightness, contrast. Extraction rate: 92%. Segmentation rate: 97%. Recognition rate: 98%. Overall: 98%. Time: —. Real time: No. Device: Matlab. Format: multiple fonts, Indian. Problems: —.
5. [36] — Extraction: histogram analysis using HOG. Segmentation: vertical histogram. Recognition: OCR – template matching. Database: 110 images. Condition: various conditions. Extraction rate: 89.7%. Segmentation rate: —. Recognition rate: —. Overall: —. Time: —. Real time: No. Device: OpenALPR. Format: Europe. Problems: cannot detect beyond a 30-degree horizontal/vertical angle, if the car is moving (image blur), or if there is low light.

6. [78] — Extraction: object detection, CNNs (YOLO detector). Segmentation: character segmentation CNNs, bounding box. Recognition: data augmentation, distinct CNNs for letters and digits. Database: SSIG dataset: 2000 frames; UFPR-ALPR: 4500 frames. Condition: 1920 × 1080 pixels. Extraction rate: SSIG: 100.00%, UFPR-ALPR: 98.33%. Segmentation rate: SSIG: 97.75%, UFPR-ALPR: 95.97%. Recognition rate: SSIG: 97.83%, UFPR-ALPR: 90.37%. Overall: SSIG: 93.53%, UFPR-ALPR: 78.33%. Time: SSIG: 21.31 ms, 47 FPS; UFPR-ALPR: 28.30 ms, 35 FPS. Real time: Yes. Device: NVIDIA Titan XP GPU (3840 CUDA cores and 12 GB of RAM). Format: Brazil. Problems: adjustments have to be made for other than Brazilian formats; dependent on license plate layout.

7. [125] — Extraction: cascade classifier with LBP (Local Binary Pattern) features. Segmentation: —. Recognition: Tesseract’s OCR. Database: 1300 images. Condition: 640 × 480 pixels with a 50 × 11 pixel license plate aspect ratio, various conditions. Extraction rate: 98.35%. Segmentation rate: —. Recognition rate: 92.12%. Overall: 96.73%. Time: 1.2 s, 10 FPS. Real time: Yes. Device: Raspberry Pi 3 Model B operating at 1.2 GHz with 1 GB RAM. Format: Indian. Problems: dependent on standardization for detection too; overall accuracy is for front-side license plates only at a fixed 90-degree angle; high processing time.
8. [117] — Extraction: LBP, character and edge information. Segmentation: vertical histogram. Recognition: Tesseract OCR with preprocessing techniques. Database: 1200+ images. Condition: 250 pixels wide, various conditions and colors. Extraction rate: 100%. Segmentation rate: —. Recognition rate: 90%. Overall: 90%. Time: varies. Real time: No. Device: OpenALPR. Format: Myanmar. Problems: high processing time.

9. [23] — Extraction: connected component labeling and morphological method. Segmentation: vertical and horizontal edge histograms. Recognition: statistical features matched with stored ones. Database: 50 images. Condition: variable size and style, Latin script only. Extraction rate: 90%. Segmentation rate: 91%. Recognition rate: 93%. Overall: 92.75%. Time: —. Real time: No. Device: MATLAB R2015a. Format: Pakistan. Problems: very limited dataset tested.

10. [26] — Extraction: image resizing using nearest-neighbor interpolation, preprocessing and geometrical conditions. Segmentation: CCA labelling and morphological operations. Recognition: OCR algorithms. Database: 958 images. Condition: HD images, various conditions. Extraction rate: 98.10%. Segmentation rate: 99.75%. Recognition rate: 99.50%. Overall: 98%. Time: 61 ms. Real time: No. Device: MATLAB. Format: Qatar. Problems: computationally intensive; HD camera used, memory and time constraints.

11. [121] — Extraction: —. Segmentation: preprocessing techniques, Otsu’s thresholding. Recognition: bounding box feature and template matching OCR. Database: 14 images. Condition: 8 MP camera, different timings and distances. Extraction rate: —. Segmentation rate: —. Recognition rate: —. Overall: 92.85%. Time: —. Real time: No. Device: MATLAB. Format: Malaysia. Problems: limited set of images; cannot recognize low quality images; works for standardized format only.
12. [25] — Extraction: geometrical features using mathematical morphology. Segmentation: —. Recognition: —. Database: 571 images, multiple sets. Condition: complex images, 1792 × 1312, 800 × 600, and 640 × 480 pixels. Extraction rate: —. Segmentation rate: —. Recognition rate: —. Overall: 98.45%. Time: 20 ms. Real time: No. Device: MATLAB on a 2.7 GHz Core i7 with 8 GB of RAM; Python on a Raspberry Pi with a 700 MHz processor and 256 MB of RAM. Format: Greek. Problems: higher processing time, cost and power for higher resolution images.

13. [103] — Extraction: ROI extraction using intensity detection and morphological operations. Segmentation: Otsu’s thresholding with preprocessing techniques. Recognition: OCR with a correlation approach. Database: 40 images. Condition: 480 × 640 pixels. Extraction rate: 87.5%. Segmentation rate: —. Recognition rate: 85.7%. Overall: 86.6%. Time: —. Real time: No. Device: MATLAB R2014a. Format: Iraq. Problems: failed for multiple objects in the scene and for unclear images, or when the algorithm removes objects by mistake.

14. [142] — Extraction: scale-adaptive system, feature computation with Gentleboost algorithms. Segmentation: scale-adaptive weighted linear interpolation. Recognition: scale-adaptive model and empirically constrained deformation model. Database: 2600+ images, multiple datasets (OS, Stills & Caltech, AOLP). Condition: variable distances between camera and vehicle, scenes and sizes (in color JPEG format). Extraction rate: OS: 87.38%, Stills & Caltech: 84.41%. Segmentation rate: OS: 74.29%, Stills & Caltech: 84.13%. Recognition rate: 98.98%. Overall: 97%. Time: 3.16–9.43 s. Real time: Yes. Device: PASCAL Visual Object Classes. Format: USA, Taiwan, Spanish. Problems: extensive training required to cover all possible situations; the segmentation process can be enhanced by using additional morphological techniques.
15. [126] — Extraction: Support Vector Machine (SVM) with preprocessing techniques. Segmentation: threshold, morphological operations and contour algorithms. Recognition: Artificial Neural Network (ANN). Database: —. Condition: —. Extraction rate: —. Segmentation rate: —. Recognition rate: —. Overall: —. Time: —. Real time: No. Device: Intel Core i5 PC, C++ with OpenCV 3.2.0 library. Format: Spain. Problems: —.

16. [24] — Extraction: image resizing, preprocessing and geometrical conditions. Segmentation: CCA labelling and morphological operations. Recognition: OCR using a Field Programmable Gate Array (FPGA) processing unit. Database: 454+ images, 2790 characters. Condition: HD images with a 34 × 22 character size matrix, various light and weather conditions. Extraction rate: —. Segmentation rate: —. Recognition rate: —. Overall: 99.50%. Time: 3.78 ms. Real time: Yes. Device: Matlab, Xilinx Zynq-7000 All Programmable SoC, FPGA/ARM. Format: Qatar. Problems: computationally intensive; HD camera used, memory and time constraints.

17. [116] — Extraction: vertical edge detection with removal of long edges. Segmentation: Region of Interest (ROI) based filtering. Recognition: —. Database: 1000+ videos. Condition: different orientations, light conditions and types of vehicles. Extraction rate: —. Segmentation rate: —. Recognition rate: —. Overall: 92.31%. Time: 8.3 FPS. Real time: Yes. Device: —. Format: Indian. Problems: format dependent.

18. [103] — Extraction: intensity detection and mathematical morphological operations. Segmentation: labeling connected components. Recognition: Back Propagation Neural Network (BPNN). Database: 60 images. Condition: variable size and illumination conditions. Extraction rate: 98.3%. Segmentation rate: —. Recognition rate: 93.2%. Overall: 97.75%. Time: —. Real time: No. Device: MATLAB R2014a. Format: Iraq. Problems: —.
19. [19] — Extraction: Method E1: vertical edge detection using a Sobel filter; Method E2: gradient extraction, vertical histograms, CCA; Method E3: using shape features. Segmentation: pixel projection in vertical and horizontal directions. Recognition: OCR1: template matching; OCR2: PNN. Database: 141 images. Condition: 1024 × 768 pixels, variable conditions. Extraction rate: E1: 65.25%, E2: 43.26%, E3: 33.33%. Segmentation rate: E1: 60.87%, E2: 63.93%, E3: 65.91%. Recognition rate: OCR1: E1: 81.99%, E2: 78.65%, E3: 81.50%; OCR2: E1: 82.42%, E2: 78.36%, E3: 77.95%. Overall: OCR1: E1: 42.41%, E2: 28.12%, E3: 21.66%; OCR2: E1: 42.01%, E2: 27.86%, E3: 21.46%. Time: —. Real time: No. Device: —. Format: Canada. Problems: all methods were tested as in the literature to verify the researchers’ claims, and almost all failed for variable datasets.

20. [112] — Extraction: pixel statistics with preprocessing techniques. Segmentation: RGB color extractor and character isolation using thresholding. Recognition: template matching. Database: 255 images. Condition: color images, 2448 × 3264 × 80 pixels, camera: iPhone 5s (variable light conditions). Extraction rate: —. Segmentation rate: 98.5%. Recognition rate: 95.1%. Overall: 95%. Time: —. Real time: Yes. Device: Tesseract open-source OCR engine. Format: United States (Illinois). Problems: format specific; not compatible with low-light images; ambiguous characters have low recognition.
21. [143] — Extraction: filtering techniques with contrast enhancement and other preprocessing techniques. Segmentation: vertical projection method. Recognition: OCR using ANN. Database: color images taken with a 5 MP phone built-in camera, 1600 × 1200 pixels. Condition: —. Extraction rate: —. Segmentation rate: 83.5%. Recognition rate: 92%. Overall: 88%. Time: —. Real time: Yes. Device: Eclipse IDE, Android Platform SDK; processor: ARM v6 800 MHz; RAM: 285 MB; screen size: 320 × 480; camera: 5 MP; OS version: 2.3 Gingerbread. Format: Malaysia. Problems: low resolution camera, format specific, limited system memory; motion blur, object obscuring and day/night shots are problem areas for the system; it can be improved a lot in future.

22. [144] — Extraction: image processing filters (histogram, Laplacian, morphology), connected component/projection analysis. Segmentation: color features (hue and shape) with vertical/slope sweep. Recognition: decision tree and SVM. Database: Set 1: 1150 images; Set 2: 540 images. Condition: Set 1: Gatso speed-control cameras on highways; Set 2: parked vehicles captured with a 1.3 MP phone camera (different conditions). Extraction rate: —. Segmentation rate: —. Recognition rate: Set 1: speed lane: 96.6%, mid lane: 93.14%, side lane: 78.8%, multiple vehicles: 96%; Set 2: day: 96.8%, night: 91.4%, angled up to 20°: 74.6%. Overall: Set 1: speed lane: 92.6%, mid lane: 87.14%, side lane: 64.8%, multiple vehicles: 94%; Set 2: day: 94.4%, night: 72.14%, angled up to 20°: 62%. Time: 0.75–1.59 s (total system response). Real time: Yes. Device: 1.7 GHz CPU with 4 GB RAM. Format: Iran. Problems: practical and accurate for the targeted lane; the claimed 96%/94% system performance is tested on a very limited data set; the overall performance considering other lanes is comparatively poor in recognition for Set 1; in Set 2, only daylight images are well recognized, with poor results otherwise.
is used in [126]. Convolutional Neural Networks are used in [78] in a real-time scenario and have shown great results for each stage of the ANPR system. Neural network based algorithms seem promising for ANPR and are proposed in [19,103,145–162].
In [163], an alternative and unique self-learning algorithm, based on Bayesian probability and Levenshtein text mining, is proposed. It offers higher matching accuracy for the ANPR system. It utilizes conditional probabilities of observing one character at a station for an assigned character at some other station, using an “Association Matrix”.
Tesseract is the most widely adopted OCR engine, with the ability to recognize over 100 languages, and it can be further trained for new/unlisted languages. The majority of AI/ML based ANPR software providers utilize this engine for vehicle recognition applications. There are many vendors providing ANPR solutions around the globe. For example, OpenALPR specializes in license plate and vehicle recognition software. It is an open-source license plate reader service provider and is available as commercial software too. Its most convenient feature is compatibility with most cameras and diverse environmental conditions. Since it is based on artificial intelligence and machine vision technology, its success rests on the effectiveness of the algorithms used in the software along with the hardware employed in the ANPR system.
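A minimal sketch of calling the Tesseract engine through the pytesseract wrapper is shown below; restricting the character set and using a single-line page-segmentation mode are common settings for plate crops. The configuration values are illustrative, not those of any specific vendor product, and "plate_crop.png" is a hypothetical, already-localized plate image.

```python
import cv2
import pytesseract

plate = cv2.imread("plate_crop.png", cv2.IMREAD_GRAYSCALE)
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
text = pytesseract.image_to_string(plate, config=config)
print(text.strip())
```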
Public and commercial vehicle image-datasets are available for testing ANPR algo-
rithms and some of these are listed in Table 3. These datasets are of great help to the
research community and are widely utilized with attributions to the providing source. Re-
searchers can use these images for testing the accuracy of their algorithms. These datasets comprise a great number of real images captured under diverse conditions. These images/videos have variations in backgrounds, lighting conditions, environmental conditions, plate position, physical condition, size, style and format, and other factors affecting real-time recognition. Some of them have both open-source and commercial versions available; the latter
normally using different algorithms for OCR, based on larger datasets, to offer higher
accuracy than the former.
There are many vendors serving ANPR solutions, but not all of them provide the same services and user feasibility. Selecting the right software is the key to getting the required accuracy in the recognition process and
problems that are more common in image processing based systems. The most important step involved in recognition within CS/ML based ANPR systems is the extraction of the number plate from the scene, which is the most complex part in terms of performance.
While using RFID for extraction/identification purposes, this technology may come into action for vehicles missed by the camera, hence helping the ANPR system. Speed detection can also be performed with RFID techniques. The vehicle may be tracked with RFID irrespective of its location, whether it is within or outside the line of sight of the camera. The vehicle can be easily tracked throughout its travel on the road depending upon the type of RFID technology utilized. RFID allows toll payment facilities as well. In short, RFID works on radio frequency whereas image processing based ANPR systems are dependent on a camera. RFID does not require any camera and it can communicate with the tag on the vehicle on the go, eliminating many complexities that are associated with camera dependent technologies.
Image processing based ANPR integration with RFID technology may help in various
road applications and may improve system efficiency as in [176,177]. The integration of
RFID and ANPR may result in a hybrid system and it can be considered for multiple
applications of intelligent transport systems in present and future [176].
It is important to mention RFID technology here as future technology since many
countries are now considering the integration of ANPR and RFID to take maximum
benefit of the hybrid solution making the transportation system more accurate and secure.
Both systems have their strengths and weaknesses. For ANPR, in terms of algorithms and performance, the main weakness is the successful localization of the vehicle number plate, which is very much dependent on the camera and many other factors that make it challenging to recognize a number plate under the conditions mentioned earlier. On the other hand, no additional transponders or tags are required to be attached to the vehicles. The strength of this technology is that, along with recognition of the vehicle, it is very helpful for security/surveillance applications since it captures visuals of the vehicle.
The strength of RFID is its very high recognition accuracy, since it works on radio frequency detection by sensing the transponder attached to the vehicle, in most cases a label or tag. It can track the vehicle throughout its travel irrespective of the line of sight, in contrast to camera based systems. It can effectively be used in e-toll collection and the tag data can be updated accordingly. The weakness of this technology is that, although it can recognize/read a vehicle, it does not store any visuals of the scene. Detection accuracy from RFID and the visual security sense from ANPR together create a hybrid system which can make the transportation system more secure and accurate.
This technology is unfortunately not a one-size-fits-all solution and needs optimization from region to region. To allow a uniform evaluation of different approaches, the proposed algorithms need to be tested on complex datasets covering various factors: diversity in number plate styles, colors, fonts, sizes, orientation/tilt/skew, occlusions, obscured characters and other physical conditions, camera resolution, shutter speed, lighting/illumination aids, and coverage capability for number plate extraction from real-time complex scenes with fast moving vehicles, while maintaining low processing times and increasing recognition capability in real-time scenarios. A real-time video scene is recommended for the tests rather than pre-taken still images.
The current state-of-the-art approaches are more inclined towards the use of OCR engines equipped with AI capabilities. Recognition algorithms based on Artificial Neural Networks are providing better recognition rates. Integration of the ANPR system with other ICT tools is also gaining popularity, such as integration of ANPR engines with GPS, online databases, Android/iOS platforms, RFID and various other tools that serve different applications in intelligent transportation systems. Future research is needed to highlight the importance and ways of incorporating this technology with other ICT tools, which can be beneficial for the transport system and its policy making. The accuracy of available CV algorithms is limited to particular regions and their standardized number plates. Further research is needed to make the algorithms smart enough to work in variable environments given non-standardized, diverse number plate datasets.
Now that the accuracy of ANPR systems is improving with time and they are being used in tandem with AI capabilities and IoT, it is expected that these disruptive technologies and applications will be more widely adopted and that new use cases will emerge in the coming years. With the relevant tools/software it is possible to transform raw ANPR camera data into practical knowledge and help understand traffic flow, including passenger and freight mobility. ANPR cameras have the potential to be augmented with vehicle category information [178].
Author Contributions: Conceptualization, Methodology and writing of the paper was done by L.
and N.M. Formal analysis, validation and review were carried out by N.M. and S.A.A.S. The whole study was supervised by N.M. S.A.A.S. facilitated the study through discussion and valuable suggestions for the refinement of the study. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Luo, X.; Ma, D.; Jin, S.; Gong, Y.; Wang, D. Queue length estimation for signalized intersections using license plate recognition
data. IEEE Intell. Transp. Syst. Mag. 2019, 11, 209–220. [CrossRef]
2. Lin, H.Y.; Dai, J.M.; Wu, L.T.; Chen, L.Q. A vision-based driver assistance system with forward collision and overtaking detection.
Sensors 2020, 20, 5139. [CrossRef] [PubMed]
3. Thangallapally, S.K.; Maripeddi, R.; Banoth, V.K.; Naveen, C.; Satpute, V.R. E-Security System for Vehicle Number Tracking
at Parking Lot (Application for VNIT Gate Security). In Proceedings of the 2018 IEEE International Students’ Conference on
Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 24–25 February 2018; pp. 1–4.
4. Negassi, I.T.; Araya, G.G.; Awawdeh, M.; Faisal, T. Smart Car plate Recognition System. In Proceedings of the 2018 1st
International Conference on Advanced Research in Engineering Sciences (ARES), Dubai, United Arab Emirates, 15 June 2018; pp.
1–5.
5. Kanteti, D.; Srikar, D.; Ramesh, T. Intelligent smart parking algorithm. In Proceedings of the 2017 International Conference on
Smart Technologies For Smart Nation (SmartTechCon), Bengaluru, India, 17–19 August 2017 ; pp. 1018–1022.
6. Shreyas, R.; Kumar, B.P.; Adithya, H.; Padmaja, B.; Sunil, M. Dynamic traffic rule violation monitoring system using automatic
number plate recognition with SMS feedback. In Proceedings of the 2017 2nd International Conference on Telecommunication
and Networks (TEL-NET), Noida, India, 10–11 August 2017; pp. 1–5.
7. Chaithra, B.; Karthik, K.; Ramkishore, D.; Sandeep, R. Monitoring Traffic Signal Violations using ANPR and GSM. In Proceedings
of the 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC),
Mysore, India, 8–9 September 2017; pp. 341–346.
8. Felix, A.Y.; Jesudoss, A.; Mayan, J.A. Entry and exit monitoring using license plate recognition. In Proceedings of the 2017
IEEE International Conference on Smart Technologies and Management for Computing, Communication, Controls, Energy and
Materials (ICSTM), Chennai, India, 2–4 August 2017; pp. 227–231.
9. Du, S.; Ibrahim, M.; Shehata, M.; Badawy, W. Automatic license plate recognition (ALPR): A state-of-the-art review. IEEE Trans.
Circuits Syst. Video Technol. 2012, 23, 311–325. [CrossRef]
10. Birgillito, G.; Rindone, C.; Vitetta, A. Passenger mobility in a discontinuous space: Modelling access/egress to maritime barrier
in a case study. J. Adv. Transp. 2018, 2018, 6518329. [CrossRef]
11. Alonso, B.; Pòrtilla, Á.I.; Musolino, G.; Rindone, C.; Vitetta, A. Network Fundamental Diagram (NFD) and traffic signal control:
First empirical evidences from the city of Santander. Transp. Res. Procedia 2017, 27, 27–34. [CrossRef]
12. Croce, A.I.; Musolino, G.; Rindone, C.; Vitetta, A. Route and Path Choices of Freight Vehicles: A Case Study with Floating Car
Data. Sustainability 2020, 12, 8557. [CrossRef]
13. Nuzzolo, A.; Comi, A.; Papa, E.; Polimeni, A. Understanding taxi travel demand patterns through Floating Car Data. In
Proceedings of the 4th Conference on Sustainable Urban Mobility; Springer: Skiathos Island, Greece, 24–25 May 2018; pp.
445–452.
14. PlateRecognizer. Plate Recognizer ALPR. Available online: https://platerecognizer.com/ (accessed on 25 November 2020).
15. ParkPow. A Division of ParkPow. Available online: https://parkpow.com/ (accessed on 25 November 2020).
16. Kyaw, N.N.; Sinha, G.; Mon, K.L. License plate recognition of Myanmar vehicle number plates a critical review. In Proceedings
of the 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), Nara, Japan, 9–12 October 2018; pp. 771–774.
17. Chou, J.S.; Liu, C.H. Automated Sensing System for Real-Time Recognition of Trucks in River Dredging Areas Using Computer
Vision and Convolutional Deep Learning. Sensors 2021, 21, 555. [CrossRef]
18. Bakhtan, M.A.H.; Abdullah, M.; Abd Rahman, A. A review on license plate recognition system algorithms. In Proceedings of the
2016 International Conference on Information and Communication Technology (ICICTM), Kuala Lumpur, Malaysia, 16–17 May
2016; pp. 84–89.
19. Ahmad, I.S.; Boufama, B.; Habashi, P.; Anderson, W.; Elamsy, T. Automatic license plate recognition: A comparative study. In
Proceedings of the 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Abu Dhabi,
United Arab Emirates, 7–10 December 2015; pp. 635–640. [CrossRef]
20. Zheng, D.; Zhao, Y.; Wang, J. An efficient method of license plate location. Pattern Recognit. Lett. 2005, 26, 2431–2438. [CrossRef]
21. Ashourian, M.; DaneshmandPour, N.; Sharifi, T.O.; Moallem, P. Real time implementation of a license plate location
recognition system based on adaptive morphology. Int. J. Eng. 2013, 26, 1347–1356. [CrossRef]
22. Hongliang, B.; Changping, L. A hybrid license plate extraction method based on edge statistics and morphology. In Proceedings
of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004; Volume 2, pp. 831–834.
23. Haider, S.A.; Khurshid, K. An implementable system for detection and recognition of license plates in Pakistan. In Proceedings of
the 2017 International Conference on Innovations in Electrical Engineering and Computational Technologies (ICIEECT), Karachi,
Pakistan, 5–7 April 2017; pp. 1–5.
24. Farhat, A.A.; Al-Zawqari, A.; Hommos, O.; Al-Qahtani, A.; Bensaali, F.; Amira, A.; Zhai, X. OCR-based hardware implementation
for qatari number plate on the Zynq SoC. In Proceedings of the 2017 9th IEEE-GCC Conference and Exhibition (GCCCE),
Manama, Bahrain, 8–11 May 2017; pp. 1–9.
25. Yepez, J.; Ko, S.B. Improved license plate localisation algorithm based on morphological operations. IET Intell. Transp. Syst. 2018,
12, 542–549. [CrossRef]
26. Hommos, O.; Al-Qahtani, A.; Farhat, A.; Al-Zawqari, A.; Bensaali, F.; Amira, A.; Zhai, X. HD Qatari ANPR system. In Proceedings
of the 2016 International Conference on Industrial Informatics and Computer Systems (CIICS), Sharjah, United Arab Emirates,
13–15 March 2016; pp. 1–5.
27. Kanayama, K.; Fujikawa, Y.; Fujimoto, K.; Horino, M. Development of vehicle-license number recognition system using real-
time image processing and its application to travel-time measurement. In Proceedings of the 41st IEEE Vehicular Technology
Conference, St. Louis, MO, USA, 19–22 May 1991; pp. 798–804.
28. Busch, C.; Domer, R.; Freytag, C.; Ziegler, H. Feature based recognition of traffic video streams for online route tracing. In
Proceedings of the 48th IEEE Vehicular Technology Conference. Pathway to Global Wireless Revolution (Cat. No. 98CH36151),
Ottawa, ON, Canada, 21 May 1998; Volume 3, pp. 1790–1794.
29. Khan, M.; Mufti, N.; et al. Comparison of various edge detection filters for ANPR. In Proceedings of the 2016 Sixth International
Conference on Innovative Computing Technology (INTECH), Dublin, Ireland, 24–26 August 2016; pp. 306–309.
30. Pechiammal, B.; Renjith, J.A. An efficient approach for automatic license plate recognition system. In Proceedings of the 2017 Third International Conference on Science Technology Engineering & Management (ICONSTEM), Chennai, India, 23–24 March 2017; pp. 121–129.
31. Sarfraz, M.; Ahmed, M.J.; Ghazi, S.A. Saudi Arabian license plate recognition system. In Proceedings of the 2003 International
Conference on Geometric Modeling and Graphics, London, UK, 16–18 July 2003; pp. 36–41.
32. Dev, A. A novel approach for car license plate detection based on vertical edges. In Proceedings of the 2015 Fifth International
Conference on Advances in Computing and Communications (ICACC), Kochi, India, 2–4 September 2015; pp. 391–394.
33. Wang, S.Z.; Lee, H.J. Detection and recognition of license plate characters with different appearances. In Proceedings of the
2003 IEEE International Conference on Intelligent Transportation Systems, Shanghai, China, 12–15 October 2003; Volume 2, pp.
979–984.
34. Lee, H.J.; Chen, S.Y.; Wang, S.Z. Extraction and recognition of license plates of motorcycles and vehicles on highways. In
Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004; Volume 4, pp. 356–359.
35. Huang, Y.P.; Chen, C.H.; Chang, Y.T.; Sandnes, F.E. An intelligent strategy for checking the annual inspection status of motorcycles
based on license plate recognition. Expert Syst. Appl. 2009, 36, 9260–9267. [CrossRef]
36. Sferle, R.M.; Moisi, E.V. Automatic Number Plate Recognition for a Smart Service Auto. In Proceedings of the 2019 15th
International Conference on Engineering of Modern Electric Systems (EMES), Oradea, Romania, 13–14 June 2019; pp. 57–60.
37. Kim, D.S.; Chien, S.I. Automatic car license plate extraction using modified generalized symmetry transform and image warping.
In Proceedings of the 2001 IEEE International Symposium on Industrial Electronics Proceedings (Cat. No. 01TH8570), Pusan,
Korea (South), 12–16 June 2001; Volume 3, pp. 2022–2027.
38. Kashyap, A.; Suresh, B.; Patil, A.; Sharma, S.; Jaiswal, A. Automatic number plate recognition. In Proceedings of the
Communication Control and Networking (ICACCCN), Greater Noida, India, 12–13 October 2018; pp. 838–843.
39. Slimani, I.; Zaarane, A.; Hamdoun, A.; Atouf, I. Vehicle License Plate Localization and Recognition System for Intelligent
Transportation Applications. In Proceedings of the 2019 6th International Conference on Control, Decision and Information
Technologies (CoDIT), Paris, France, 23–26 April 2019; pp. 1592–1597.
40. Anagnostopoulos, C.N.E.; Anagnostopoulos, I.E.; Psoroulas, I.D.; Loumos, V.; Kayafas, E. License plate recognition from still
images and video sequences: A survey. IEEE Trans. Intell. Transp. Syst. 2008, 9, 377–391. [CrossRef]
41. Wu, B.F.; Lin, S.P.; Chiu, C.C. Extracting characters from real vehicle licence plates out-of-doors. IET Comput. Vis. 2007, 1, 2–10.
[CrossRef]
42. Bellas, N.; Chai, S.M.; Dwyer, M.; Linzmeier, D. FPGA implementation of a license plate recognition SoC using automatically
generated streaming accelerators. In Proceedings of the 20th IEEE International Parallel & Distributed Processing Symposium,
Rhodes, Greece, 25–29 April 2006; p. 8.
43. Wu, H.H.P.; Chen, H.H.; Wu, R.J.; Shen, D.F. License plate extraction in low resolution video. In Proceedings of the 18th
International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 1, pp. 824–827.
44. Chacon, M.I.; Zimmerman, A. License plate location based on a dynamic PCNN scheme. Proc. Int. Jt. Conf. Neural Netw 2003, 2,
1195–1200.
45. Miyamoto, K.; Nagano, K.; Tamagawa, M.; Fujita, I.; Yamamoto, M. Vehicle license-plate recognition by image analysis. In
Proceedings of the IECON’91: 1991 International Conference on Industrial Electronics, Control and Instrumentation, Kobe, Japan,
28 October 1991; pp. 1734–1738.
46. Shi, X.; Zhao, W.; Shen, Y. Automatic license plate recognition system based on color image processing. In International Conference
on Computational Science and Its Applications; Springer: Singapore, 9–12 May 2005; pp. 1159–1168.
47. Chang, S.L.; Chen, L.S.; Chung, Y.C.; Chen, S.W. Automatic license plate recognition. IEEE Trans. Intell. Transp. Syst. 2004,
5, 42–53. [CrossRef]
48. Lee, E.R.; Kim, P.K.; Kim, H.J. Automatic recognition of a car license plate using color image processing. In Proceedings of the 1st
International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; Volume 2, pp. 301–305.
49. Yang, Y.Q.; Bai, J.; Tian, R.L.; Liu, N. A vehicle license plate recognition system based on fixed color collocation. In Proceedings of
the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; Volume 9, pp.
5394–5397.
50. Jia, W.; Zhang, H.; He, X.; Piccardi, M. Mean shift for accurate license plate localization. In Proceedings of the 2005 IEEE
Intelligent Transportation Systems, Vienna, Austria, 16 September 2005; pp. 566–571.
51. Jia, W.; Zhang, H.; He, X. Region-based license plate detection. J. Netw. Comput. Appl. 2007, 30, 1324–1333. [CrossRef]
52. Pan, L.; Li, S. A new license plate extraction framework based on fast mean shift. In Proceedings of the International Conference
on Image Processing and Pattern Recognition in Industrial Engineering. International Society for Optics and Photonics, Xi’an,
China, 19 August 2010; Volume 7820, p. 782007.
53. Wang, F.; Man, L.; Wang, B.; Xiao, Y.; Pan, W.; Lu, X. Fuzzy-based algorithm for color recognition of license plates. Pattern
Recognit. Lett. 2008, 29, 1007–1020. [CrossRef]
54. Deb, K.; Jo, K.H. A vehicle license plate detection method for intelligent transportation system applications. Cybern. Syst. Int. J.
2009, 40, 689–705. [CrossRef]
55. Huang, D.; Shan, C.; Ardabilian, M.; Wang, Y.; Chen, L. Local binary patterns and its application to facial image analysis: A
survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2011, 41, 765–781. [CrossRef]
56. Teoh, S.S.; Bräunl, T. Performance evaluation of HOG and Gabor features for vision-based vehicle detection. In Proceedings
of the 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE),Penang, Malaysia, 19–27
November 2015; pp. 66–71.
57. Soh, Y.S.; Chun, B.T.; Yoon, H.S. Design of real time vehicle identification system. In Proceedings of the IEEE International
Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 2–5 October 1994; Volume 3, pp. 2147–2152.
58. Agui, T.; Choi, H.; Nakajima, M. A method of number plate extraction using a fast pyramid hierarchical Hough transformation.
IEICE Trans. Inf. Syst. 1987, 70, 1383–1389. (In Japanese)
59. Nathan, V.S.L.; Ramkumar, J.; Priya, S.K. New approaches for license plate recognition system. In Proceedings of the International
Conference on Intelligent Sensing and Information Processing, Chennai, India, 4–7 January 2004; pp. 149–152.
60. Seetharaman, V.; Sathyakhala, A.; Vidhya, N.; Sunder, P. License plate recognition system using hybrid neural networks. In
Proceedings of the IEEE Annual Meeting of the Fuzzy Information, Banff, AB, Canada, 27–30 June 2004; Volume 1, pp. 363–366.
61. Anagnostopoulos, C.; Alexandropoulos, T.; Boutas, S.; Loumos, V.; Kayafas, E. A template-guided approach to vehicle surveillance
and access control. In Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, Como, Italy, 15–16
September 2005; pp. 534–539.
62. Hsieh, C.T.; Juan, Y.S.; Hung, K.M. Multiple license plate detection for complex background. In Proceedings of the 19th
International Conference on Advanced Information Networking and Applications (AINA'05), Taipei, Taiwan, 28–30 March 2005;
Volume 2, pp. 389–392.
63. Sankari, M.; Bremananth, R.; Meena, C. A Robust Diverged Localization and Recognition of License Registration Characters. Int.
J. Electr. Comput. Eng. 2013, 6, 1225–1232.
64. Zunino, R.; Rovetta, S. Visual location of license plates by vector quantization. In Proceedings of the 1999 IEEE International
Symposium on Circuits and Systems (ISCAS), Orlando, FL, USA, 30 May 1999; Volume 4, pp. 135–138.
65. Anagnostopoulos, C.N.E.; Anagnostopoulos, I.E.; Loumos, V.; Kayafas, E. A license plate-recognition algorithm for intelligent
transportation system applications. IEEE Trans. Intell. Transp. Syst. 2006, 7, 377–392. [CrossRef]
66. Caner, H.; Gecim, H.S.; Alkar, A.Z. Efficient embedded neural-network-based license plate recognition system. IEEE Trans. Veh.
Technol. 2008, 57, 2675–2683. [CrossRef]
67. Kahraman, F.; Kurt, B.; Gökmen, M. License plate character segmentation based on the Gabor transform and vector quantization.
In International Symposium on Computer and Information Sciences; Springer: Antalya, Turkey, 3–5 November 2003; pp. 381–388.
68. Wang, Y.R.; Lin, W.H.; Horng, S.J. A sliding window technique for efficient license plate localization based on discrete wavelet
transform. Expert Syst. Appl. 2011, 38, 3142–3146. [CrossRef]
69. Zhang, H.; Jia, W.; He, X.; Wu, Q. Learning-based license plate detection using global and local features. In Proceedings of the
18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 2, pp. 1102–1105.
70. Dlagnekov, L. License Plate Detection Using Adaboost; Computer Science and Engineering Department: San Diego, CA, USA, 2004.
71. Wang, S.Z.; Lee, H.J. A cascade framework for a real-time statistical plate recognition system. IEEE Trans. Inf. Forensics Secur.
2007, 2, 267–282. [CrossRef]
72. Matas, J.; Zimmermann, K. Unconstrained licence plate and text localization and recognition. In Proceedings of the 2005 IEEE
Intelligent Transportation Systems, Vienna, Austria, 16 September 2005; pp. 225–230.
73. Alegria, F.; Girao, P.S. Vehicle plate recognition for wireless traffic control and law enforcement system. In Proceedings of the
2006 IEEE International Conference on Industrial Technology, Mumbai, India, 15–17 December 2006; pp. 1800–1804.
74. Hontani, H.; Koga, T. Character extraction method without prior knowledge on size and position information. In Proceedings of
the IEEE International Vehicle Electronics Conference 2001. IVEC 2001 (Cat. No.01EX522), Tottori, Japan, 25–28 September 2001;
pp. 67–72. [CrossRef]
75. Cho, B.; Ryu, S.; Shin, D.; Jung, J. License plate extraction method for identification of vehicle violations at a railway level crossing.
Int. J. Automot. Technol. 2011, 12, 281–289. [CrossRef]
76. Ho, W.T.; Lim, H.W.; Tay, Y.H. Two-stage license plate detection using gentle Adaboost and SIFT-SVM. In Proceedings of the 2009
First Asian Conference on Intelligent Information and Database Systems, Dong Hoi, Vietnam, 1–3 April 2009; pp. 109–114.
77. Le, W.; Li, S. A hybrid license plate extraction method for complex scenes. In Proceedings of the 18th International Conference
on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; Volume 2, pp. 324–327.
78. Laroca, R.; Severo, E.; Zanlorensi, L.A.; Oliveira, L.S.; Gonçalves, G.R.; Schwartz, W.R.; Menotti, D. A robust real-time automatic
license plate recognition based on the YOLO detector. In Proceedings of the 2018 International Joint Conference on Neural
Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–10.
79. Masood, S.Z.; Shu, G.; Dehghan, A.; Ortiz, E.G. License plate detection and recognition using deeply learned convolutional
neural networks. arXiv 2017, arXiv:1703.07330.
80. OpenALPR. OpenALPR Rekor Solutions Suite. Available online: http://www.openalpr.com/cloud-api.html (accessed on 29
December 2020).
81. Bulan, O.; Kozitsky, V.; Ramesh, P.; Shreve, M. Segmentation- and annotation-free license plate recognition with deep localization
and failure identification. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2351–2363. [CrossRef]
82. Nijhuis, J.; Ter Brugge, M.; Helmholt, K.; Pluim, J.; Spaanenburg, L.; Venema, R.S.; Westenberg, M. Car license plate recognition
with neural networks and fuzzy logic. In Proceedings of the ICNN’95-International Conference on Neural Networks, Australia,
27 November 1995; Volume 5, pp. 2232–2236.
83. Ter Brugge, M.; Stevens, J.; Nijhuis, J.; Spaanenburg, L. License plate recognition using DTCNNs. In Proceedings of the 1998 Fifth
IEEE International Workshop on Cellular Neural Networks and their Applications. Proceedings (Cat. No. 98TH8359), London,
UK, 14–17 April 1998; pp. 212–217.
84. Xu, J.F.; Li, S.F.; Yu, M.S. Car license plate extraction using color and edge information. In Proceedings of the 2004 International
Conference on Machine Learning and Cybernetics (IEEE Cat. No. 04EX826), Shanghai, China, 26–29 August 2004; Volume 6, pp.
3904–3907.
85. Park, S.H.; Kim, K.I.; Jung, K.; Kim, H.J. Locating car license plates using neural networks. Electron. Lett. 1999, 35, 1475–1477.
[CrossRef]
86. Kim, K.K.; Kim, K.I.; Kim, J.; Kim, H.J. Learning-based approach for license plate recognition. In Proceedings of the 2000 IEEE
Signal Processing Society Workshop, Neural Networks for Signal Processing X (Cat. No. 00TH8501), Sydney, NSW, Australia,
11–13 December 2000; Volume 2, pp. 614–623.
87. Wang, M.L.; Liu, Y.H.; Liao, B.Y.; Lin, Y.S.; Horng, M.F. A vehicle license plate recognition system based on spatial/frequency
domain filtering and neural networks. In International Conference on Computational Collective Intelligence; Springer: Kaohsiung,
Taiwan, 10–12 November 2010; pp. 63–70.
88. Porikli, F.; Kocak, T. Robust license plate detection using covariance descriptor in a neural network framework. In Proceedings of
the 2006 IEEE International Conference on Video and Signal Based Surveillance, Sydney, NSW, Australia, 22–24 November 2006;
p. 107.
89. Chen, Z.X.; Liu, C.Y.; Chang, F.L.; Wang, G.Y. Automatic license-plate location and recognition based on feature salience. IEEE
Trans. Veh. Technol. 2009, 58, 3781–3785. [CrossRef]
90. Mao, S.; Huang, X.; Wang, M. An adaptive method for Chinese license plate location. In Proceedings of the 2010 8th World
Congress on Intelligent Control and Automation, Jinan, China, 7–9 July 2010; pp. 6173–6177.
91. Wu, M.K.; Wei, J.S.; Shih, H.C.; Ho, C.C. License plate detection based on 2-level 2D Haar wavelet transform and edge density
verification. In Proceedings of the 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea (South), 5–8 July
2009; pp. 1699–1704.
92. Lee, Y.; Song, T.; Ku, B.; Jeon, S.; Han, D.K.; Ko, H. License plate detection using local structure patterns. In Proceedings of the
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August 2010;
pp. 574–579.
93. Montazzolli, S.; Jung, C. Real-time Brazilian license plate detection and recognition using deep convolutional neural networks. In
Proceedings of the 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Niteroi, Brazil, 17–20 October
2017; pp. 55–62.
94. Li, H.; Wang, P.; You, M.; Shen, C. Reading car license plates using deep neural networks. Image Vis. Comput. 2018, 72, 14–23.
[CrossRef]
95. Li, H.; Wang, P.; Shen, C. Toward end-to-end car license plate detection and recognition with deep neural networks. IEEE Trans.
Intell. Transp. Syst. 2018, 20, 1126–1136. [CrossRef]
96. Xu, X.; Wang, Z.; Zhang, Y.; Liang, Y. A method of multi-view vehicle license plates location based on rectangle features. In
Proceedings of the 2006 8th International Conference on Signal Processing, Guilin, China, 16–20 November 2006; Volume 3.
97. Pan, M.S.; Yan, J.B.; Xiao, Z.H. Vehicle license plate character segmentation. Int. J. Autom. Comput. 2008, 5, 425–432. [CrossRef]
98. Pan, M.S.; Xiong, Q.; Yan, J.B. A new method for correcting vehicle license plate tilt. Int. J. Autom. Comput. 2009, 6, 210–216.
[CrossRef]
99. Zhang, Y.; Zhang, C. A new algorithm for character segmentation of license plate. In Proceedings of the IEEE IV2003 Intelligent
Vehicles Symposium. Proceedings (Cat. No. 03TH8683), Columbus, OH, USA, 9–11 June 2003; pp. 106–109.
100. Llorens, D.; Marzal, A.; Palazon, V.; Vilar, J.M. Car license plates extraction and recognition based on connected components
analysis and HMM decoding. In Iberian Conference on Pattern Recognition and Image Analysis; Springer: Estoril, Portugal, 7–9 June
2005; pp. 571–578.
101. Coetzee, C.; Botha, C.; Weber, D. PC based number plate recognition system. In Proceedings of the IEEE International Symposium
on Industrial Electronics. Proceedings. ISIE’98 (Cat. No. 98TH8357), Pretoria, South Africa, 7–10 July 1998; Volume 2, pp. 605–610.
102. Comelli, P.; Ferragina, P.; Granieri, M.N.; Stabile, F. Optical recognition of motor vehicle license plates. IEEE Trans. Veh. Technol.
1995, 44, 790–799. [CrossRef]
103. Omran, S.S.; Jarallah, J.A. Iraqi License Plate Localization and Recognition System Using Neural Network. In Proceedings of
the 2017 Second Al-Sadiq International Conference on Multidisciplinary in IT and Communication Science and Applications
(AIC-MITCSA), Baghdad, Iraq, 30–31 December 2017; pp. 73–78.
104. Sanyuan, Z.; Mingli, Z.; Xiuzi, Y. Car plate character extraction under complicated environment. In Proceedings of the 2004 IEEE
International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), The Hague, Netherlands, 10–13 October
2004; Volume 5, pp. 4722–4726.
105. Duan, T.D.; Du, T.H.; Phuoc, T.V.; Hoang, N.V. Building an automatic vehicle license plate recognition system. In Proceedings of
the International Conference on Computer Science (RIVF), Can Tho, Vietnam, 21–24 February 2005; Volume 1, pp. 59–63.
106. Qin, Z.; Shi, S.; Xu, J.; Fu, H. Method of license plate location based on corner feature. In Proceedings of the 2006 6th World
Congress on Intelligent Control and Automation, Dalian, China, 21–23 June 2006; Volume 2, pp. 8645–8649.
107. Cheng, Y.; Lu, J.; Yahagi, T. Car license plate recognition based on the combination of principal components analysis and radial
basis function networks. In Proceedings of the 7th International Conference on Signal Processing, Beijing, China, 31 August 2004;
Volume 2, pp. 1455–1458.
108. Chowdhury, S.; Das, A.; Punitha, P. Projection Profile based Number Plate Localization and Recognition. Comput. Sci. Inf. Technol.
2016, 185–200. [CrossRef]
109. Hegt, H.A.; De La Haye, R.J.; Khan, N.A. A high performance license plate recognition system. In Proceedings of the 1998
IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 98CH36218), San Diego, CA, USA, 14 October 1998;
Volume 5, pp. 4357–4362.
110. Shan, B. Vehicle License Plate Recognition Based on Text-line Construction and Multilevel RBF Neural Network. JCP 2011,
6, 246–253. [CrossRef]
111. Barroso, J.; Dagless, E.; Rafael, A.; Bulas-Cruz, J. Number plate reading using computer vision. In Proceedings of the IEEE
International Symposium on Industrial Electronics, ISIE'97, Guimaraes, Portugal, 7–11 July 1997; pp. 761–766.
112. Jia, Y.; Gonnot, T.; Saniie, J. Design flow of vehicle license plate reader based on RGB color extractor. In Proceedings of the 2016
IEEE International Conference on Electro Information Technology (EIT), Grand Forks, ND, USA, 19–21 May 2016; pp. 0494–0498.
113. Paliy, I.; Turchenko, V.; Koval, V.; Sachenko, A.; Markowsky, G. Approach to recognition of license plate numbers using neural
networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541),
Budapest, Hungary, 25–29 July 2004; Volume 4, pp. 2965–2970.
114. Gao, Q.; Wang, X.; Xie, G. License plate recognition based on prior knowledge. In Proceedings of the 2007 IEEE International
Conference on Automation and Logistics, Jinan, China, 18–21 August 2007; pp. 2964–2968.
115. Guo, J.M.; Liu, Y.F. License plate localization and character segmentation with feedback self-learning and hybrid binarization
techniques. IEEE Trans. Veh. Technol. 2008, 57, 1417–1424.
116. Singh, V.; Srivastava, A.; Kumar, S.; Ghosh, R. A Structural Feature Based Automatic Vehicle Classification System at Toll Plaza.
In International Conference on Internet of Things and Connected Technologies; Springer: Jaipur, India, 9–10 May 2019; pp. 1–10.
117. Lin, N.H.; Aung, Y.L.; Khaing, W.K. Automatic vehicle license plate recognition system for smart transportation. In Proceedings
of the 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS), Bali, Indonesia, 1–3 November
2018; pp. 97–103.
118. Nomura, S.; Yamanaka, K.; Katai, O.; Kawakami, H. A new method for degraded color image binarization based on adaptive
lightning on grayscale versions. IEICE Trans. Inf. Syst. 2004, 87, 1012–1020.
119. Nomura, S.; Yamanaka, K.; Katai, O.; Kawakami, H.; Shiose, T. A novel adaptive morphological approach for degraded character
image segmentation. Pattern Recognit. 2005, 38, 1961–1975. [CrossRef]
120. Kang, D.J. Dynamic programming-based method for extraction of license plate numbers of speeding vehicles on the highway.
Int. J. Automot. Technol. 2009, 10, 205–210. [CrossRef]
121. Yogheedha, K.; Nasir, A.; Jaafar, H.; Mamduh, S. Automatic vehicle license plate recognition system based on image processing
and template matching approach. In Proceedings of the 2018 International Conference on Computational Approach in Smart
Systems Design and Applications (ICASSDA), Kuching, Malaysia, 15–17 August 2018; pp. 1–8.
122. Shuang-tong, T.; Wen-ju, L. Number and letter character recognition of vehicle license plate based on edge Hausdorff distance.
In Proceedings of the Sixth International Conference on Parallel and Distributed Computing Applications and Technologies
(PDCAT’05), Dalian, China, 5–8 December 2005; pp. 850–852.
123. Xiaobo, L.; Xiaojing, L.; Wei, H. Vehicle license plate character recognition. In Proceedings of the International Conference on
Neural Networks and Signal Processing, Nanjing, China, 14–17 December 2003; Volume 2, pp. 1066–1069.
124. Naito, T.; Tsukada, T.; Yamada, K.; Kozuka, K.; Yamamoto, S. Robust license-plate recognition method for passing vehicles under
outside environment. IEEE Trans. Veh. Technol. 2000, 49, 2309–2319. [CrossRef]
125. Desai, G.G.; Bartakke, P.P. Real-Time Implementation Of Indian License Plate Recognition System. In Proceedings of the 2018
IEEE Punecon, Pune, India, 30 November 2018; pp. 1–5.
126. Sasi, A.; Sharma, S.; Cheeran, A.N. Automatic car number plate recognition. In Proceedings of the 2017 International Conference
on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–6.
127. Li, M.; Sun, T.; Liu, H. Image Recognition of Steel Plate Based on an improved Support Vector Machine. In Proceedings of the
2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 11–13 August 2018; pp. 1411–1415.
128. Al-Shemarry, M.S.; Li, Y. Developing Learning-Based Preprocessing Methods for Detecting Complicated Vehicle Licence Plates.
IEEE Access 2020, 8, 170951–170966. [CrossRef]
129. Dia, Y.; Zheng, N.; Zhang, X.; Xuan, G. Automatic recognition of province name on the license plate of moving vehicle. In
Proceedings of the 9th International Conference on Pattern Recognition, Rome, Italy, 14–17 May 1988; pp. 927–929.
130. Ko, M.A.; Kim, Y.M. A simple OCR method from strong perspective view. In Proceedings of the 33rd Applied Imagery Pattern
Recognition Workshop (AIPR’04), Washington, DC, USA, 13–15 October 2004; pp. 235–240.
131. Kim, M.K.; Kwon, Y.B. Multi-font and multi-size character recognition based on the sampling and quantization of an unwrapped
contour. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996;
Volume 3, pp. 170–174.
132. Hu, P.; Zhao, Y.; Yang, Z.; Wang, J. Recognition of gray character using Gabor filters. In Proceedings of the Fifth International
Conference on Information Fusion, FUSION 2002 (IEEE Cat. No. 02EX5997), Annapolis, MD, USA, 8–11 July 2002; Volume 1, pp.
419–424.
133. Abdullah, S.N.H.S.; Khalid, M.; Yusof, R.; Omar, K. License plate recognition using multi-cluster and multilayer neural networks.
In Proceedings of the 2006 2nd International Conference on Information & Communication Technologies, Damascus, Syria, 24–28
April 2006; Volume 1, pp. 1818–1823.
134. Abdullah, S.N.H.S.; Khalid, M.; Yusof, R.; Omar, K. Comparison of feature extractors in license plate recognition. In Proceedings
of the First Asia International Conference on Modelling & Simulation (AMS'07), Phuket, Thailand, 27–30 March 2007; pp. 502–506.
135. Duangphasuk, P.; Thammano, A. Thai vehicle license plate recognition using the hierarchical cross-correlation ARTMAP. In
Proceedings of the 2006 3rd International IEEE Conference Intelligent Systems, London, UK, 4–6 September 2006; pp. 652–655.
136. Jiao, J.; Ye, Q.; Huang, Q. A configurable method for multi-style license plate recognition. Pattern Recognit. 2009, 42, 358–369.
[CrossRef]
137. Amit, Y.; Geman, D.; Fan, X. A coarse-to-fine strategy for multiclass shape detection. IEEE Trans. Pattern Anal. Mach. Intell. 2004,
26, 1606–1621. [CrossRef]
138. Amit, Y. A neural network architecture for visual selection. Neural Comput. 2000, 12, 1141–1164. [CrossRef]
139. Amit, Y.; Geman, D. A computational model for visual selection. Neural Comput. 1999, 11, 1691–1715. [CrossRef] [PubMed]
140. Kraisin, S.; Kaothanthong, N. Accuracy Improvement of A Province Name Recognition on Thai License Plate. In Proceedings of
the 2018 International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP), Pattaya, Thailand,
15–17 November 2018; pp. 1–6.
141. Vaishnav, A.; Mandot, M. An integrated automatic number plate recognition for recognizing multi language fonts. In
Proceedings of the 2018 7th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future
Directions) (ICRITO), Noida, India, 29–31 August 2018; pp. 551–556.
142. Molina-Moreno, M.; González-Díaz, I.; Díaz-de María, F. Efficient scale-adaptive license plate detection system. IEEE Trans. Intell.
Transp. Syst. 2018, 20, 2109–2121. [CrossRef]
143. Mutholib, A.; Gunawan, T.S.; Kartiwi, M. Design and implementation of automatic number plate recognition on android platform.
In Proceedings of the 2012 International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur,
Malaysia, 3–5 July 2012; pp. 540–543.
144. Ashtari, A.H.; Nordin, M.J.; Fathy, M. An Iranian license plate recognition system based on color features. IEEE Trans. Intell.
Transp. Syst. 2014, 15, 1690–1705. [CrossRef]
145. Kakani, B.V.; Gandhi, D.; Jani, S. Improved OCR based automatic vehicle number plate recognition using features trained neural
network. In Proceedings of the 2017 8th International Conference on Computing, Communication and Networking Technologies
(ICCCNT), Delhi, India, 3–5 July 2017; pp. 1–6.
146. How, D.N.T.; Sahari, K.S.M. Character recognition of Malaysian vehicle license plate with deep convolutional neural networks. In
Proceedings of the 2016 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Tokyo, Japan, 17–20 December
2016; pp. 1–5.
147. Lee, S.; Son, K.; Kim, H.; Park, J. Car plate recognition based on CNN using embedded system with GPU. In Proceedings of the
2017 10th International Conference on Human System Interactions (HSI), Ulsan, Korea (South), 17–19 July 2017; pp. 239–241.
148. Quiros, A.R.F.; Bedruz, R.A.; Uy, A.C.; Abad, A.; Bandala, A.; Dadios, E.P.; Fernando, A. A kNN-based approach for the machine
vision of character recognition of license plate numbers. In Proceedings of the TENCON 2017–2017 IEEE Region 10 Conference,
Penang, Malaysia, 5–8 November 2017; pp. 1081–1086.
149. Selmi, Z.; Halima, M.B.; Alimi, A.M. Deep learning system for automatic license plate detection and recognition. In Proceedings
of the 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November
2017; Volume 1, pp. 1132–1138.
150. Mondal, M.; Mondal, P.; Saha, N.; Chattopadhyay, P. Automatic number plate recognition using CNN based self synthesized
feature learning. In Proceedings of the 2017 IEEE Calcutta Conference (CALCON), Kolkata, India, 2–3 December 2017; pp.
378–381.
151. Liu, J.; Li, X.; Zhang, H.; Liu, C.; Dou, L.; Ju, L. An implementation of number plate recognition without segmentation using
convolutional neural network. In Proceedings of the 2017 IEEE 19th International Conference on High Performance Computing
and Communications; IEEE 15th International Conference on Smart City; IEEE 3rd International Conference on Data Science and
Systems (HPCC/SmartCity/DSS), Bangkok, Thailand, 18–20 December 2017; pp. 246–253.
152. Huang, Z.K.; Hou, L.Y. Chinese License Plate Detection Based on Deep Neural Network. In Proceedings of the 2018 International
Conference on Control and Robots (ICCR), Hong Kong, China, 15–17 September 2018; pp. 84–88.
153. Ruili, J.; Haocong, W.; Han, W.; O’Connell, E.; McGrath, S. Smart parking system using image processing and artificial intelligence.
In Proceedings of the 2018 12th International Conference on Sensing Technology (ICST), Limerick, Ireland, 4–6 December 2018;
pp. 232–235.
154. Huang, S.; Xu, H.; Xia, X.; Zhang, Y. End-to-end vessel plate number detection and recognition using deep convolutional neural
networks and LSTMs. In Proceedings of the 2018 11th International Symposium on Computational Intelligence and Design
(ISCID), Hangzhou, China, 8–9 December 2018; Volume 1, pp. 195–199.
155. Rabbani, G.; Islam, M.A.; Azim, M.A.; Islam, M.K.; Rahman, M.M. Bangladeshi license plate detection and recognition with
morphological operation and convolution neural network. In Proceedings of the 2018 21st International Conference of Computer
and Information Technology (ICCIT), Dhaka, Bangladesh, 21–23 December 2018; pp. 1–5.
156. Imaduddin, H.; Anwar, M.K.; Perdana, M.I.; Sulistijono, I.A.; Risnumawan, A. Indonesian vehicle license plate number detection
using deep convolutional neural network. In Proceedings of the 2018 International Electronics Symposium on Knowledge
Creation and Intelligent Computing (IES-KCIC), East Java, Indonesia, 29–30 October 2018; pp. 158–163.
157. Akhtar, Z.; Ali, R. Automatic Number Plate Recognition Using Random Forest Classifier. SN Comput. Sci. 2020, 1, 1–9. [CrossRef]
158. Gong, W.B.; Shi, Z.S.; Qiang, J. Non-Segmented Chinese License Plate Recognition Algorithm Based on Deep Neural Networks.
In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 20–24 August 2020; pp. 66–71.
159. Silva, S.M.; Jung, C.R. Real-time license plate detection and recognition using deep convolutional neural networks. J. Vis.
Commun. Image Represent. 2020, 71, 102773. [CrossRef]
160. Pustokhina, I.V.; Pustokhin, D.A.; Rodrigues, J.J.; Gupta, D.; Khanna, A.; Shankar, K.; Seo, C.; Joshi, G.P. Automatic vehicle license
plate recognition using optimal K-means with convolutional neural network for intelligent transportation systems. IEEE Access
2020, 8, 92907–92917. [CrossRef]
161. Shvai, N.; Hasnat, A.; Meicler, A.; Nakib, A. Accurate classification for automatic vehicle-type recognition based on ensemble
classifiers. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1288–1297. [CrossRef]
162. Weihong, W.; Jiaoyang, T. Research on license plate recognition algorithms based on deep learning in complex environment.
IEEE Access 2020, 8, 91661–91675. [CrossRef]
163. Oliveira-Neto, F.M.; Han, L.D.; Jeong, M.K. An online self-learning algorithm for license plate matching. IEEE Trans. Intell.
Transp. Syst. 2013, 14, 1806–1816. [CrossRef]
164. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of
the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
165. Yang, L.; Luo, P.; Change Loy, C.; Tang, X. A large-scale car dataset for fine-grained categorization and verification. In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3973–3981.
166. Xu, Z.; Yang, W.; Meng, A.; Lu, N.; Huang, H.; Ying, C.; Huang, L. Towards end-to-end license plate detection and recognition: A
large dataset and baseline. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14
September 2018; pp. 255–271.
167. Tafazzoli, F.; Frigui, H.; Nishiyama, K. A large and diverse dataset for improved vehicle make and model recognition. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July
2017; pp. 1–8.
168. Gonçalves, G.R.; Diniz, M.A.; Laroca, R.; Menotti, D.; Schwartz, W.R. Real-time automatic license plate recognition through
deep multi-task networks. In Proceedings of the 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI),
Paraná, Brazil, 29 October–1 November 2018; pp. 110–117.
169. Hsu, G.S.; Chen, J.C.; Chung, Y.Z. Application-oriented license plate recognition. IEEE Trans. Veh. Technol. 2012, 62, 552–561.
[CrossRef]
170. Krause, J.; Stark, M.; Deng, J.; Fei-Fei, L. 3D object representations for fine-grained categorization. In Proceedings of the IEEE
International Conference on Computer Vision Workshops, Sydney, Australia, 3–8 December 2013; pp. 554–561.
171. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes dataset
for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223.
172. Weber, M. The Caltech Database; Computational Vision at California Institute of Technology: Pasadena, CA, USA. Available
online: http://www.vision.caltech.edu/html-files/archive.html (accessed on 2 January 2021).
173. Arróspide, J.; Salgado, L.; Nieto, M.; Mohedano, R. Homography-based ground plane detection using a single on-board camera.
IET Intell. Transp. Syst. 2010, 4, 149–160. [CrossRef]
174. Ferdowsi, A.; Challita, U.; Saad, W. Deep learning for reliable mobile edge analytics in intelligent transportation systems: An
overview. IEEE Veh. Technol. Mag. 2019, 14, 62–70. [CrossRef]
175. Henry, C.; Ahn, S.Y.; Lee, S.W. Multinational license plate recognition using generalized character sequence detection. IEEE
Access 2020, 8, 35185–35199. [CrossRef]
176. Mohandes, M.; Deriche, M.; Ahmadi, H.; Kousa, M.; Balghonaim, A. An intelligent system for vehicle access control using RFID
and ALPR technologies. Arab. J. Sci. Eng. 2016, 41, 3521–3530. [CrossRef]
177. Yang, C.H.; Tsai, H.M. Vehicle counting and speed estimation with RFID backscatter signal. In Proceedings of the 2019 IEEE
Vehicular Networking Conference (VNC), Los Angeles, CA, USA, 4–6 December 2019; pp. 1–8.
178. Hadavi, S.; Rai, H.B.; Verlinde, S.; Huang, H.; Macharis, C.; Guns, T. Analyzing passenger and freight vehicle movements from
automatic number plate recognition camera data. Eur. Transp. Res. Rev. 2020, 12, 1–17. [CrossRef]