Sensors 2019, 19, 644
Article
An Automatic Surface Defect Inspection System for
Automobiles Using Machine Vision Methods
Qinbang Zhou 1 , Renwen Chen 1, *, Bin Huang 1 , Chuan Liu 1 , Jie Yu 2 and Xiaoqing Yu 3
1 State Key Laboratory of Mechanics and Control of Mechanical Structures, College of Aerospace Engineering,
Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China;
zhouqinbang@nuaa.edu.cn (Q.Z.); binhuang@nuaa.edu.cn (B.H.); chuanliu@nuaa.edu.cn (C.L.)
2 COMAC Shanghai Aircraft Design and Research Institute, Shanghai 201210, China; yujie1@comac.cc
3 China National Aeronautical Radio Electronics Research Institute, Shanghai 200241, China;
yuxiaoqing_tl@163.com
* Correspondence: rwchen@nuaa.edu.cn; Tel.: +86-139-5163-1159
Received: 4 January 2019; Accepted: 31 January 2019; Published: 4 February 2019
Abstract: Automobile surface defects like scratches or dents occur during the processes of manufacturing and cross-border transportation. These defects affect consumers’ first impression and the service life of the car itself. In most automobile industries worldwide, the inspection process is
mainly performed by human vision, which is unstable and insufficient. The combination of artificial
intelligence and the automobile industry shows promise nowadays. However, it is a challenge to
inspect such defects in a computer system because of imbalanced illumination, specular highlight
reflection, various reflection modes and limited defect features. This paper presents the design and
implementation of a novel automatic inspection system (AIS) for automobile surface defects which are located in or close to style lines, edges and handles. The system consists of image acquisition and
image processing devices, operating in a closed environment and noncontact way with four LED light
sources. Specifically, we use five plane-array Charge Coupled Device (CCD) cameras to collect images
of the five sides of the automobile synchronously. Then the AIS extracts candidate defect regions
from the vehicle body image by a multi-scale Hessian matrix fusion method. Finally, candidate defect
regions are classified into pseudo-defects, dents and scratches by feature extraction (shape, size,
statistics and divergence features) and a support vector machine algorithm. Experimental results
demonstrate that the automatic inspection system can effectively reduce the false detection of pseudo-defects produced by image noise and achieve accuracies of 95.6% on dent defects and 97.1% on scratch defects, which makes it suitable for customs inspection of imported vehicles.
Keywords: flaw detection; automatic visual inspection; machine vision; feature extraction; support
vector machine (SVM)
1. Introduction
The external appearance of a vehicle is of utmost importance as it gives consumers their first
impression [1]. However, surface defects (small scratches or dents) that occur during the manufacturing and cross-border transportation of imported vehicles can bring huge economic losses to automobile import agencies and to buyers of imported vehicles. Automobile agency companies therefore arrange for professional human checkers [2] to conduct pipelined manual inspections of imported vehicles, even for minor defects (as small as 0.5 mm in diameter), after each shipment. Unqualified vehicles with surface defects are picked out to be returned to the factory
for repair. The traditional checking procedure is random-sampling manual inspection on the product pipeline, judging whether a product is qualified or not by manually observing the differences of the unqualified product. This detection method cannot meet the needs of modern industrial production, because the current manual inspection process has the following issues:
• The criteria for human vision are not quantified, but rather depend on subjective evaluation, so the criteria each checker uses to judge product quality are not the same.
• Due to the large output of products and the large number of inspection items, inspectors experience ocular fatigue on the assembly line from the high-intensity repetitive work, which leads to less reliable defect detection, so product quality cannot be fully guaranteed.
• When a surface defect is not obvious, the help of external conditions (such as a strong lighting environment) is needed to detect it. It is difficult to identify surface defects by the human eye alone, and it is impossible to achieve a continuous and stable workflow, resulting in greatly reduced production efficiency.
Automatic visual inspection systems (AISs), however, have gained great popularity with the development of programmable hardware processors and built-in image processing techniques. They have the characteristics of fast speed, high precision, low cost and nondestructive operation, improve productivity and quality management, and provide a competitive advantage to industries that employ this technology [3]. This paper focuses on the core techniques of AIS.
Vehicle surface defects are broadly divided into two categories: scratches and dents. A scratch is a defect that occurs when a vehicle rubs against hard raised metal during loading into a carrier or during container transportation. Dents on the vehicle body are usually caused by small stones propelled from the road surface hitting the surface of a vehicle traveling at high speed. These defects are difficult to detect with automatic visual inspection systems (and even for human eyes) because of various factors:
• Imbalanced illumination: Vehicle surface images are captured by cameras installed on slide rails under the irradiation of four LED lights. This lighting mode means that the vehicle body surface cannot achieve fully uniform illumination; therefore, global features or threshold parameters are not suitable for defect positioning and recognition.
• Specular highlight reflection: The dark-field illumination technique (Figure 1) is utilized to enhance the contrast in defect samples. Even in the case of dark-field illumination, the captured vehicle frame gap may still cause specular highlights where the surface normal is oriented precisely halfway between the direction of the incoming light and the viewer (called the half-angle direction because it bisects, i.e., divides into halves, the angle between the incoming light and the viewer).
• Various reflection modes: Taking the vehicle body as an example, the gray value of defects on a black vehicle is higher than the body background, whereas a white vehicle shows lower values because of the diffuse reflection under different reflection coefficients.
• Limited features for defect recognition: Defects on the vehicle surface do not share common
textures or shape features. Further, the pixel area of defects is even less than 20 pixels for tiny
scratches or dents. The most reliable feature to distinguish surface defects is local gray-level
information. The limitation of visual features makes inspection systems exclude most object
recognition methodologies that are based on sophisticated texture and shape features [4].
This paper presents an online AIS for vehicle body surface defects. The AIS comprises an image
acquisition subsystem (IAS) and image processing subsystem. The IAS acquires gray automobile
images for the surface of a vehicle body. After that, the automobile image is processed and possible
defects are detected. In this paper, we focus on three key techniques of AIS: image preprocessing, defect binarization, and pseudo-defect removal and classification. We propose image preprocessing methods to
remove the specular highlight pixels and enhance the distinction between defects and background
in a vehicle image, considering imbalanced illumination and specular highlight reflection property
of vehicle surfaces. In addition, we put forward multi-scale defect binarization based on a Hessian
matrix (DBHM) algorithm, which identifies possible defects by construction of defect filter based
on the Hessian matrix over each acquisition of the image detection area. Finally, we come up with
pseudo-defect removal and classification on account of the noise-sensitive characteristics of Hessian
filters and the existence of pseudo-defects. AIS has the following advantages:
1) Image preprocessing is a data fusion and image enhancement step. Therefore, it is able to
eliminate the specular highlight reflection of raw images. In addition, it can greatly reduce the
variation in the background of a vehicle image.
2) DBHM combined with pseudo-defect removal and classification is robust to noise and able to identify tiny scratches or dents with a width of 0.5 mm, because DBHM relies on the local distribution and features of intensities, rather than the global characteristics used in most thresholding algorithms [5–8].
3) AIS has the characteristics of fast speed, high precision, low cost and nondestructive operation. Under our experimental setup, the AIS runs in near real time at 1 min 50 s per vehicle, which is much faster than manual inspection by a checker (10–15 min).
The rest of the paper is organized as follows: in Section 2, related work on automatic defect inspection
of vehicles is introduced. Then, the overall system and proposed approach are respectively described
in detail in Sections 3 and 4. Afterwards, Section 5 presents the experimental results. Finally, discussion
and conclusions are provided in Section 6.
2. Related Work
Defect detection on car bodies usually follows an inspection pipeline that performs image acquisition and fusion of the collected car-body images, then image processing and classification, to detect and distinguish millimeter-sized defects. Various vehicle inspection systems and techniques
have been introduced with the aim to detect surface defects.
A commercial and very successful system was developed and installed in Ford Motor Company
factories all around the world. This system, described in [9], uses a moving structure made up of
several light bars (high-frequency fluorescent tubes) and a set of cameras in fixed positions around the
stationary car body, and is able to detect millimetric defects of 0.3 mm diameter or greater with different
shapes which were very hard to detect, thus improving the quality of the manual inspection carried
out until that point. Thenceforth, Molina et al. [2] introduced a novel approach using deflectometry- and vision-based technologies in an industrial automatic quality control system named QEyeTunnel on the production line at the Mercedes-Benz factory in Vitoria, Spain. The authors report that the
inspection system satisfies cycle-time production requirements (around 30 s per car). A new image fusion pre-processing algorithm is proposed to enhance the contrast of pixels with a low level of intensity (indicating the presence of defects), based on the rendering equation [10,11] from computer graphics. This is followed by a post-processing step with an image background extraction approach based on a local directional blurring method and a modified image contrast enhancement, which enables detection of larger defects in the entire illuminated area with a multi-level structure, given that each level identifies defects of different sizes.
Fan et al. [12] demonstrated a closed indirect diffusion-lighting system with the pattern-light and
full-light mode by the lighting semi-permeable membrane to overcome the strong reflection of the
car’s body. Then, a smooth band-pass filter based on frequency domain analysis processing technology
is applied to extract the defect information on panels from used cars. To improve the accuracy of the result and remove pseudo-defects, the authors use region area as the parameter to filter out noise and achieve flaw segmentation, on the grounds that, compared with flaws, noise consists of isolated points that occupy only a few pixels. However, filtering noise by area information alone (one important feature, but not comprehensive enough) risks mistakenly removing some small-area regions that are actually defects. Kamani et al. [13,14] located
defect regions by using a joint distribution of local binary pattern (LBP) and rotation invariant measure of local variance (VAR) operators, and then the detected defects are classified into various
measure of local variance (VAR) operators, and then the detected defects are classified into various
defect types using Bayesian and support vector machine classifiers. Chung et al. [15] implemented an optical 3D scanner and visualization system to find defect regions from the curvature map of unpainted
car body outer panels. Leon et al. [16] presented new strategies to inspect specular and painted surfaces.
This method classified defects into three classes: tolerable defects, removable defects and defects that
lead to the rejection of the inspected car body part, with 100% classification accuracy. Borsu et al. [17] used 3D reconstruction of the surface profile of the panel to detect deformations; the positions of the surface defects are provided to a robotic marking station that handles pose and motion estimation of the part on an assembly line using passive vision. Xiong et al. [18] proposed a 3D laser profiling
system which integrates a laser scanner, odometer, inertial measurement unit and global position
system to capture the rail surface profile data. The candidate defective area is located by comparing
the deviation of registered rail surface profile and the standard model with the given threshold. Then,
the candidate defect points are merged into candidate defect regions using the K-means clustering and
the candidate defect regions are classified by a decision tree classifier.
In addition to the traditional detection methods mentioned above, learning-based methods
have also been investigated for surface crack detection. Qu et al. [19] proposed a bridge cracks
detection algorithm by using a modified active contour model and greedy search-based support
vector machine. Zhang et al. [20] used a deep convolutional neural network to conduct automatic
detection of pavement cracks on a data set of 500 images collected by a low-cost smart phone. Fully
convolutional neural networks were studied to infer cracks of nuclear power plants using multi-view
images by Schmugge et al. [21]. Recently, Zou et al. [22] proposed a novel end-to-end trainable deep
convolutional neural network called DeepCrack for automatic pavement crack detection by learning
high-level features for crack representation. In this method, multi-scale deep convolutional features
learned at hierarchical convolutional stages are fused together to capture the line structures. Many other
methods were also proposed for crack detection, e.g., the saliency detection method [23], the structure
analysis methods by using the minimal spanning tree [24] and the random structure forest [25].
3. Overall System
In order to eliminate the influence of natural light irradiated on the vehicle body, the detection
process will be carried out in a closed laboratory environment shown in Figure 2. Five plane-array
CCD cameras are mounted on motor-driven slide guides to achieve fixed-point acquisition of vehicle
body images. A set of images is acquired while the cameras sweep the vehicle body, ensuring complete coverage of the object to be inspected; see Figure 2b.
Figure 2. Laboratory of the vehicle automatic inspection system. (a) Sketch of the vehicle inspection laboratory. (b) The actual experimental scene. The cameras are installed on 3-axis robot aluminum arms which are motor-driven by slide rails.

The number of images depends on the size of the vehicle and the resolution of the images acquired by the cameras. These images are transmitted to the control and defect analysis room for further detection and classification. The control and defect analysis room is also used to control the light sources, the camera shooting and the operation of the guide rail motors.

3.1. AIS

The overall architecture of the vehicle automatic inspection system is depicted in Figure 3. The automatic inspection system for automobile appearance defects mainly includes two parts, namely, the image acquisition subsystem (hardware) and the image processing subsystem (software). The hardware system mainly includes light sources, industrial cameras, camera lenses, slide rails, a servo controller, a servo driver and servo motors. The slide rails, servo controller, servo driver and servo motors constitute the motion control system. The software system mainly includes the image acquisition, vehicle model database, image processing algorithms, damage detection result display and motor control program modules.

Figure 3. Overall architecture of the vehicle automatic inspection system.
3.2. Camera Resolution

The key component of the proposed system is a GCP4241 GigE Vision plane-array CCD camera (SMARTEK Vision, Cakovec, Croatia) with 12 megapixels (4240 × 2824) at a frame rate of 9 fps, which is well suited to moving-object inspection, low-light and outdoor applications, and high-speed automation with very short exposure times. Full Gigabit Ethernet bandwidth based on the proven GigE Vision standard data interface is utilized to support high-resolution image transmission. With a C-10MP-4/3-12mm lens (Foshan HuaGuo Optical Co., Ltd., Guangdong, China) and a shooting distance of 1.5 m, the resolution of the camera reaches 72 dpi (0.35 mm/pixel), which meets the accuracy requirements for detecting defects above 0.5 mm in size.
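The dpi-to-millimetre conversion behind these figures can be checked with a few lines of arithmetic. This is only a sanity check on the numbers quoted above (72 dpi, 0.5 mm minimum defect size); the helper name `mm_per_pixel` is ours.

```python
# Sanity check of the camera resolution figures quoted above.
MM_PER_INCH = 25.4

def mm_per_pixel(dpi: float) -> float:
    """Convert dots-per-inch to millimetres per pixel."""
    return MM_PER_INCH / dpi

res = mm_per_pixel(72)       # ~0.353 mm/pixel, matching the 0.35 mm/pixel above
defect_px = 0.5 / res        # pixels spanned by a 0.5 mm defect (> 1 pixel)
print(round(res, 3), round(defect_px, 2))
```

A 0.5 mm defect therefore spans more than one pixel at this working distance, which is the minimum condition for it to be detectable at all.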
After a vehicle body image is generated by the IAS, an image processing subsystem is proposed to detect possible defects in the image. This subsystem includes three main modules: image preprocessing, defect binarization, and pseudo-defect removal and classification. Figure 4 shows the diagram of the subsystem:

1) Image preprocessing: Defects are easily hidden or confused in vehicle images because of illumination inequality and the varying reflection properties of vehicle surfaces, so image preprocessing is a necessary procedure to highlight defect areas against the image background. An effective image preprocessing process can improve the efficiency and classification accuracy of subsequent algorithms.

2) Defect binarization: Defect binarization is the process of extracting candidate defect areas from a vehicle body. The second-order information of a defect region can be extracted by the DBHM method, and the detection rate of micro-defects can be improved by binarizing the possible defect regions.

3) Feature extraction and classification: A defect binarization image comprises three types of pixels: defects, background, and pseudo-defects generated by noise in the preprocessed image. Due to the influence of noise and image quality, pseudo-defects can remain after defect binarization. Through feature extraction of candidate feature regions, we can classify defects while excluding some pseudo-defects, thereby improving the detection accuracy of AIS.

Figure 4. Diagram of the image processing subsystem.
4. Proposed Approach

4.1. Image Preprocessing

The image formation process consists of two steps: optical reflection and photoelectric conversion. In the first step, the illumination model converts the physical properties, such as the reflectivity of the vehicle body surface, into the reflected light intensity. The second step converts the reflected light intensity into a digital image of the vehicle body through a camera lens and an analog-to-digital converter (ADC or A/D). Based on the understanding of the physical properties of light, the Phong illumination model (PIM) is a widely used illumination physics model. PIM was first proposed by
Phong [26] and improved by Whitted [27]. Its main structure consists of ambient term, diffuse term
and specular term:
R_p = I_p^amb + Σ_{m∈lights} [ k_d ( L_m · N ) I_m^dif + k_s ( R_m · V )^l I_m^spe ]   (1)

where I_p^amb is the ambient term, k_d and k_s are the diffuse and specular reflection coefficients, L_m is the unit vector from the surface point toward light source m, N is the surface normal, R_m is the direction of the perfectly reflected ray of light m, V is the unit vector toward the viewer, l is the shininess constant, and I_m^dif and I_m^spe are the diffuse and specular intensities of light source m.
For multiple point light sources, the PIM sums up diffuse term and specular term simultaneously
for each individual light source [26]. In the second step, the photoelectric process converts the
reflected light intensity Rp into a digital image Ip of the vehicle body through a camera lens and an
analog-to-digital converter (ADC or A/D). This transformation can be expressed by a linear formula:
I_p = α · R_p + β   (2)
1) Using an indirect diffused light pattern and increasing the incident angle of the light source can reduce or remove the specular reflection term in the body image, yielding a uniform image of the vehicle body;
2) The method of image superposition can effectively remove the Gaussian noise in the initial image
(this experiment uses the output image after superimposing six consecutive image frames as the
actual image to be detected).
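The noise-reduction effect of superimposing frames can be illustrated with a small NumPy sketch: averaging N frames of a static scene reduces zero-mean Gaussian sensor noise by a factor of √N. This is an illustrative simulation, not the paper's acquisition code; the scene and noise level are made up.

```python
import numpy as np

# Averaging N frames of the same static scene reduces zero-mean Gaussian
# sensor noise by a factor of sqrt(N) (here N = 6, as in the experiment above).
rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)                    # idealised static body patch
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(6)]

single_noise = np.std(frames[0] - scene)            # ~10
stacked = np.mean(frames, axis=0)                   # superposition of six frames
stacked_noise = np.std(stacked - scene)             # ~10 / sqrt(6)
print(single_noise > 2 * stacked_noise)
```

With six frames the residual noise drops to roughly 41% of the single-frame level, which is why Figure 6a already looks markedly cleaner than a raw frame.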
The local behavior of an image I can be approximated by its second-order Taylor expansion [29]. Thus, image I in the neighborhood of point x_o can be expressed as:

I(x_o + δx_o, s) ≈ I(x_o, s) + δx_o^T ∇_{o,s} + (1/2) δx_o^T H_{o,s} δx_o   (3)

This expansion approximates the structure of the image up to second order. ∇_{o,s} and H_{o,s} are respectively the gradient vector and Hessian matrix of the image at position x_o at scale s. According to
the concept of linear scale space theory, differentiation is defined as a convolution with derivatives
of Gaussians:
∂/∂x I(x, s) = I(x) ∗ ∂/∂x G(x, s)   (4)
where the two-dimensional Gaussian is defined as:
G(x, s) = 1/(2πs²) · exp(−(x² + y²)/(2s²))   (5)
" #
Hxx Hxy
Assume that the Hessian matrix is expressed as Ho,s = , ( Hxy = Hyx ). Eigenvalues
Hyx Hyy
of the Hessian matrix can be calculated as:
1
q
2
λ1 = Hxx + Hyy + Hxx − Hyy + 4Hxy (6)
2
1
q
2
λ2 = Hxx + Hyy − Hxx − Hyy + 4Hxy (7)
2
The eigenvalues of the Hessian matrix satisfy |λ₁| ≫ 0, |λ₂| ≫ 0 or |λ₁| ≫ |λ₂|, |λ₂| ≈ 0 at locations where a body defect exists. Thus, a two-dimensional candidate defect enhanced image is defined as:
e = { 0,  if f(x, y) < f_L or f(x, y) > f_H
    { (1/2) ( H_xx + H_yy + sqrt( (H_xx − H_yy)² + 4H_xy² ) ) − th,  otherwise   (8)
where th is a constant threshold value used to enhance the possible defective regions. The background noise in the vehicle body image is directly removed by the conditions f(x,y) < f_L and f(x,y) > f_H, which reduces the subsequent interference of the background with the mask operation.
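Equations (4)–(8) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the helper names (`gaussian_kernel`, `conv2`, `enhance`) are ours, the convolution is naive, and the second derivatives are taken with finite differences of the Gaussian-smoothed image rather than analytic Gaussian-derivative kernels.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_kernel(s, radius=None):
    # two-dimensional Gaussian of Equation (5), normalised to sum to 1
    radius = radius or int(3 * s)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g / g.sum()

def conv2(img, k):
    # naive 'same' 2-D convolution, sufficient for a small demo image
    r = k.shape[0] // 2
    p = np.pad(img, r, mode='reflect')
    win = sliding_window_view(p, k.shape)
    return np.einsum('ijkl,kl->ij', win, k[::-1, ::-1])

def enhance(img, s=1.0, th=0.0, f_lo=0.0, f_hi=255.0):
    smooth = conv2(img.astype(float), gaussian_kernel(s))
    # second derivatives of the smoothed image (cf. Equation (4));
    # np.gradient returns the d/dy component first
    gy, gx = np.gradient(smooth)
    Hxy, Hxx = np.gradient(gx)
    Hyy, _ = np.gradient(gy)
    disc = np.sqrt((Hxx - Hyy)**2 + 4 * Hxy**2)
    lam1 = 0.5 * (Hxx + Hyy + disc)         # Equation (6)
    e = lam1 - th                           # 'otherwise' branch of Equation (8)
    e[(img < f_lo) | (img > f_hi)] = 0.0    # gray-level gating of Equation (8)
    return e
```

On a bright patch with one dark scratch-like line, the enhanced response `e` peaks on the line and stays near zero on the uniform background, which is exactly the behavior Equation (8) is designed to produce.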
For body defect structural elements, the output of the defect region in the enhanced image is maximized only when the scale factor s best matches the actual width of the defect, so we introduce a three-level scale detection method as shown in Figure 5. By performing a global thresholding of the enhanced images s1e, s2e, s3e at different scales s1, s2, s3 (usually s1 = 1), the intermediate result images can be obtained as s1r, s2r, s3r, which are defined as s1r ∈ {0,1}^X, s2r ∈ {0,1}^Y, s3r ∈ {0,1}^Z, where X = Z_m × Z_n = {x := (x, y) ∈ Z² | 0 ≤ x ≤ m − 1, 0 ≤ y ≤ n − 1}, m and n are the number of columns and rows of the image, Y = Z_[m·s1/s2] × Z_[n·s1/s2], and Z = Z_[m·s1/s3] × Z_[n·s1/s3]. The binary pattern images can be obtained as follows:
s1r = η(s1e)
s2r = η(s2e)   (9)
s3r = η(s3e)

where η(e) = 1 if e > 0 and 0 otherwise. By iterating the scale factor, the images at different scales are obtained, and the union of the defect regions is taken as the final candidate defect image:

OUT := s1r ‖ χ1(s2r) ‖ χ2(s3r)   (10)
where χ1 : R^Y → R^X and χ2 : R^Z → R^X are the up-scaling operators with Z ⊂ Y ⊂ X, and ‖ is the bitwise OR operator.
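The multi-scale fusion of Equations (9)–(10) can be sketched as follows. This toy version uses two scales instead of three and nearest-neighbour repetition as one possible choice of the up-scaling operator χ; the function names and the tiny example arrays are ours.

```python
import numpy as np

def eta(e):
    # the indicator η of Equation (9): 1 where the enhanced response is positive
    return (e > 0).astype(np.uint8)

def upscale(r, factor):
    # nearest-neighbour up-scaling as one possible χ of Equation (10)
    return np.kron(r, np.ones((factor, factor), dtype=np.uint8))

e1 = np.zeros((4, 4)); e1[0, 0] = 1.0   # enhanced image at scale s1 (full resolution)
e2 = np.zeros((2, 2)); e2[1, 1] = 1.0   # enhanced image at scale s2 (half resolution)

# Equation (10) with two scales: binarize, up-scale, then bitwise-OR
out = eta(e1) | upscale(eta(e2), 2)
print(out)
```

The union keeps a fine defect found only at the fine scale and a wider one found only at the coarse scale, which is the point of running the Hessian filter at several scales.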
T = 4πA / P²   (11)
where A is the area and P is the perimeter of the defect which is defined as the total pixels that
constitute the edge of the defect. As the defect becomes thinner, its perimeter grows relative to its area and the ratio decreases, so the roundness decreases. As the shape of a defect becomes closer to a circle, this ratio becomes closer to 1.
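The roundness feature of Equation (11) separates scratch-like from dent-like regions, as a small sketch shows. The perimeter here is counted as the number of region pixels touching the background; the paper's exact edge definition may differ, so the absolute values are only indicative.

```python
import numpy as np

def roundness(mask):
    # T = 4*pi*A / P^2 from Equation (11) on a binary region
    A = mask.sum()
    # a pixel is on the edge if any 4-neighbour lies outside the region
    p = np.pad(mask, 1)
    inner = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:])
    P = A - inner.sum()
    return 4 * np.pi * A / P**2

disk = np.zeros((21, 21), dtype=np.uint8)
yy, xx = np.mgrid[:21, :21]
disk[(yy - 10)**2 + (xx - 10)**2 <= 64] = 1   # dent-like blob: roundness near 1
line = np.zeros((21, 21), dtype=np.uint8)
line[10, 2:19] = 1                             # thin scratch-like region: lower roundness
print(roundness(disk) > roundness(line))
```

A compact blob scores near 1 with this discrete perimeter, while a one-pixel-wide line scores noticeably lower, matching the behavior described in the text.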
Mean pixel value and standard deviation of candidate defect regions are utilized to eliminate
pseudo-defect areas. In addition to mean and standard deviation, we also propose the concept of
divergence in feature extraction. From the point of view of energy diffusion, the divergence of gradient
image (two-dimensional vector field) is the extent to which the vector field flow behaves like a source
at a given point. It is a local measure of its “outgoingness”—the extent to which there is more of some
quantity exiting an infinitesimal region of space than entering it [31]. If the divergence is positive, the vector field is scattered outward; if it is negative, the vector field is concentrated inward. That is to say, divergence indicates the direction of flow of the vector field: the greater the degree of flow, the greater the corresponding divergence value. The divergence of a vector field is defined as follows:
div F := lim_{V→0} (1/|V|) ∯_{∂V} F · dS   (12)
In the 2-dimensional Descartes coordinate system, the divergence of image gradient field
grad(I) = Ui + Vj is defined as the scalar-valued function:
div(grad(I)) := ∇ · grad(I) = (∂/∂x, ∂/∂y) · (U, V) = ∂U/∂x + ∂V/∂y   (13)
where i, j are the corresponding basis of the unit vectors, grad(I) is the gradient of image I. Although
expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation
suggests. Thus, the mean value, standard deviation and maximum value of regional divergence also
have the properties of translation and rotation invariance.
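Equation (13) says the divergence of the image gradient is the Laplacian of I, from which the regional mean, standard deviation and maximum are taken as features. The sketch below is ours, using finite differences via `np.gradient`; a dark dent-like spot is a local intensity minimum, so the divergence is positive at its centre.

```python
import numpy as np

def divergence_features(I, mask):
    # Equation (13): div(grad(I)) = dU/dx + dV/dy, i.e. the Laplacian of I;
    # np.gradient returns the d/dy component first
    V, U = np.gradient(I.astype(float))
    dUdx = np.gradient(U, axis=1)
    dVdy = np.gradient(V, axis=0)
    div = dUdx + dVdy
    vals = div[mask.astype(bool)]
    # mean, standard deviation and maximum of the regional divergence
    return vals.mean(), vals.std(), vals.max()

# a dark dent-like spot on a bright body patch (illustrative values)
I = np.full((15, 15), 200.0)
I[6:9, 6:9] = 60.0
mask = np.zeros_like(I, dtype=np.uint8); mask[5:10, 5:10] = 1
m, s, mx = divergence_features(I, mask)
print(mx > 0)
```

Because the result is rotation invariant, as noted above, these three statistics can be used as features without aligning the candidate region first.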
A support vector machine (SVM) [32] is then employed to classify the candidate defects by solving the soft-margin optimization problem:

\[
\min_{\omega, b, \xi_i} \; \frac{1}{2} \omega^T \omega + C \sum_i \xi_i \quad \text{s.t.} \quad y_i \left( \omega^T \phi(x_i) + b \right) \geq 1 - \xi_i, \; \xi_i \geq 0 \tag{14}
\]
where x_i are the data samples, y_i are the labels, φ(·) is the feature mapping, and C is the penalty coefficient. Common kernel functions include the linear kernel, φ(x_i)^T φ(x_j) = x_i^T x_j, and the radial basis function (RBF) kernel, φ(x_i)^T φ(x_j) = exp(−γ‖x_i − x_j‖₂²), where γ is a positive hyperparameter. The candidate defects are divided into training and test sets, and LIBSVM [33] is used to perform the training and testing.
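The soft-margin objective in Equation (14) can also be minimized directly by sub-gradient descent on the hinge loss. The sketch below is a simplified linear-kernel stand-in for the LIBSVM training used in the paper, on toy data of our own choosing:

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimize (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b))
    by full-batch sub-gradient descent (linear kernel version of Eq. 14)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                                  # margin violators
        grad_w = w - C * (y[mask][:, None] * X[mask]).sum(axis=0)
        grad_b = -C * y[mask].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable "defect" (+1) vs "pseudo-defect" (-1) features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.3, (20, 2)), rng.normal(-2.0, 0.3, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

For the RBF case one would precompute the kernel matrix and optimize the dual, which is what LIBSVM does internally.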
Figure 6. Image superposition with different numbers of sheets: (a) superposition of six vehicle images; (b) superposition of 20 vehicle images.
Although the photoelectric distance sensor installed on the boom can accurately determine the position of the camera, the manual parking procedure may introduce a certain deviation in the spatial coordinate position of the vehicle image. Therefore, before candidate defects are extracted, the region of interest (ROI) is extracted using image registration. Image registration proceeds by feature point extraction, feature point description and feature point matching, from which the spatial transformation matrix between two similar images is estimated. Compared with the traditional SIFT method [34], the SURF method [35] uses less data and has a shorter computation time; therefore, the SURF registration method is adopted in this paper.
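After SURF keypoints are matched, the spatial transformation matrix is estimated from the point correspondences. A minimal DLT (direct linear transform) homography estimation in numpy is sketched below; a real pipeline would add RANSAC to reject bad matches, and the feature detection itself is omitted:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point pairs
    via the direct linear transform (SVD null-space solution)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize so H[2, 2] = 1

# Recover a known transform from four synthetic correspondences.
H_true = np.array([[0.9, -0.1, 5.0], [0.1, 0.9, -3.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], float)
pts = np.c_[src, np.ones(4)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H = estimate_homography(src, dst)
```

With exact correspondences the estimate matches the true matrix to numerical precision.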
When a new body is ready to be introduced into the inspection system, the checker manually selects the test vehicle type on the PC software detection interface, and the database provides all the necessary body-related data (e.g., model, color, template, mask, etc.). The complete body defect detection process is illustrated in Figure 7. Firstly, the spatial transformation matrix H is obtained from the experimental image and the template image by the registration principle (steps ① and ②). Secondly, the template mask corresponding to the template image is read from the database, and the mask of the image to be detected is obtained by interpolating with the spatial transformation matrix H; this gives the area to be detected (ROI) of the experimental image (steps ③ and ④). Finally, the actual defect area is obtained by the proposed defect detection algorithm and marked with a green box on the experimental map (steps ⑤ and ⑥).
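The mask-interpolation step above (obtaining the detection mask of the experimental image from the template mask and H) amounts to warping the binary mask through the transformation. A hedged nearest-neighbour sketch using inverse mapping in pure numpy; the function name is our own:

```python
import numpy as np

def warp_mask(mask, H, out_shape):
    """Warp a binary template mask into the experimental image frame.
    Inverse mapping: each output pixel looks up its source pixel via H^-1."""
    H_inv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = H_inv @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < mask.shape[1]) & (sy >= 0) & (sy < mask.shape[0])
    out = np.zeros(out_shape, dtype=mask.dtype)
    flat = out.ravel()
    flat[ok] = mask[sy[ok], sx[ok]]
    return out

template_mask = np.zeros((8, 8), np.uint8)
template_mask[2:6, 2:6] = 1              # ROI square in the template frame
H_shift = np.array([[1., 0., 1.], [0., 1., 0.], [0., 0., 1.]])
shifted = warp_mask(template_mask, H_shift, (8, 8))
```

A pure translation by one pixel shifts the ROI square accordingly while preserving its area.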
In order to improve the accuracy of image registration, we introduce three steps before registration: camera calibration (Figure 8), edge detection and area opening. This means that the registration process for ROI extraction is not performed directly on grayscale vehicle images, but on the edge images after area opening operations (which exclude noise pixels). The lens distortion of the camera has a great influence on the accuracy of image registration. Through experiments, we find that the edge pixel error with the calibrated camera decreases from more than a dozen pixels to 2–5 pixels compared with the uncalibrated camera, which meets the requirement of body detection area extraction.
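The effect of lens-distortion correction can be illustrated with the standard radial model, r_d = r(1 + k1·r² + k2·r⁴); points are undistorted by fixed-point iteration. The coefficients below are made up for illustration, not those of our cameras:

```python
import numpy as np

def distort(pts, k1, k2):
    """Apply the radial distortion model to normalized image points (N x 2)."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(pts_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration
    x <- x_d / (1 + k1 r^2 + k2 r^4), with r evaluated at the current x."""
    x = pts_d.copy()
    for _ in range(iters):
        r2 = (x ** 2).sum(axis=1, keepdims=True)
        x = pts_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return x

k1, k2 = -0.2, 0.05                       # hypothetical distortion coefficients
pts = np.array([[0.1, 0.2], [0.5, -0.3], [-0.4, 0.4]])
recovered = undistort(distort(pts, k1, k2), k1, k2)
```

The iteration recovers the original points, mirroring how calibration removes the pixel error reported above.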
Figure 7. Examples of the ROI defect detection procedure.
Figure 8. Camera calibration using a 6 × 9 × 100 mm³ chessboard.
Figure 9. Case study: evaluation of the proposed algorithm dealing with a defect close to the fuel tank cap. (a) Detection region on the fuel tank cap; (b) 3D representation of the magnified region (view 1); (c) 3D representation of the magnified region (view 2); (d) 3D map of λ1 in DBHM.
With a global threshold (T = 16) set by the binarization filter in Equations (8) and (9), we can distinguish between noise and defect and thus identify the defect on the surface, with the result shown in Figure 9d.

In Figure 10, the picture used for the experiments is acquired from an automated vision-based quality control inspection system named QEyeTunnel in Molina's paper [2]. Their inspection system consists of two parts: an external fixed structure, where a determined number of cameras are optimally placed in order to see the entire surface of the body to be analyzed, and a moving internal structure, similar to a scanning machine, which houses curved screens known as 'sectors' that act as the light sources projecting the illumination patterns over the body surface. Figure 10a shows nine defects of the automobile surface after spraying paint on the automobile production line. This vehicle body image contains punctiform pits, linear defects and uneven paint spray that are difficult for the human eye to distinguish.

In our method, the most obvious line and point scratches are extracted in the first scale level detection (s1 = 1). Defects of uneven paint spray, which are difficult for the human eye to distinguish, are detected in the second and third scale level detections (s2 = 0.5, s3 = 0.25). After the second- and third-level images are rescaled to the scale of the first-level detection result, the final image detects all types of defects, merged by the bitwise OR operator from the three rescaled results.
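The three-level scheme (s1 = 1, s2 = 0.5, s3 = 0.25: detect at each scale, rescale the coarse masks back to full resolution, then fuse with bitwise OR) can be sketched as follows. The per-scale detector here is a simple gradient-magnitude threshold standing in for the DBHM response, so this is illustrative only:

```python
import numpy as np

def detect_at_scale(img, scale, thresh=20.0):
    """Downsample by 'scale', run a stand-in per-scale detector,
    then rescale the binary mask back to full resolution."""
    step = int(round(1.0 / scale))
    small = img[::step, ::step].astype(float)
    gy, gx = np.gradient(small)
    mask = (np.hypot(gx, gy) > thresh).astype(np.uint8)
    # Nearest-neighbour upsample back to the first-level scale.
    return np.kron(mask, np.ones((step, step), np.uint8))[:img.shape[0], :img.shape[1]]

img = np.zeros((64, 64))
img[:, 30:34] = 255.0                    # a synthetic vertical line defect
masks = [detect_at_scale(img, s) for s in (1.0, 0.5, 0.25)]
fused = masks[0] | masks[1] | masks[2]   # bitwise OR fusion of the three levels
```

The line defect is picked up at every level, and the OR fusion keeps anything detected at any scale.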
Figure 10. Illustration of the three-level scale detection process: (a) ground truth of surface defects; (b) detection result of the enhancement background subtraction three-level detection method proposed by Jaime Molina [2]; (c) our first scale level detection, represented by a small solid-line box; (d) our second scale level detection, represented by a broken-line box; (e) our third scale level detection, represented by a large dotted-line box; (f) our final fusion candidate defect image.
By comparing the graphs of Figure 10b,e, we can conclude that, for the spot-like, linear and unevenly sprayed surface defects that are difficult for the human eye to detect, our proposed multi-scale Hessian eigenvalue method can still detect these defects.
Figure 11. Set of detection examples showing the detection of defects on different positions of car bodies: (a) one defect near the edge of the body trunk cover on the roof of the car; (b) two defects near the edge of the fender surface; (c) three defects, one on the fuel tank cover and two near the edge of the fender surface; (d) two defects, one on the handle and one near the style line.
To demonstrate the performance of the defect detection algorithm presented in this paper, a selection of results obtained on the production line with different parts and car bodies is shown in Figure 11. In some of these, detections are observed close to or on style lines and surface edges, as in Figure 11a–c. In addition, it is possible to see detections carried out in concave areas such as the handle (Figure 11d).

6. Conclusions and Future Work

Future work will proceed along three directions:
1) Deflectometry techniques [36] show promise for enhancing the contrast of defects in images of specular surfaces. Given sufficient funding, we will try this lighting method to collect the body images.
2) The precision of the AIS could be improved if the selection of the parameter th for the multi-scale DBHM method were more intelligent, although this carries the risk of slowing down the detection speed. We will investigate adaptive selection approaches.
3) The detection speed of the AIS could be further increased if it were parallelized using parallel computing techniques, such as general-purpose computation on graphics processing units. The five plane-array CCD cameras collect images of the five sides of the automobile synchronously, so the AIS has the potential to be parallelized and accelerated.
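As a sketch of this parallelization idea, the five per-camera images can be processed concurrently. Below is a hedged thread-pool illustration with a dummy per-side detector; a real CPU-bound speed-up would use processes or GPU kernels instead:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def inspect_side(image: np.ndarray) -> int:
    """Dummy per-side detector: count bright 'defect' pixels above a threshold."""
    return int((image > 200).sum())

# Five synthetic side images, one per plane-array CCD camera.
rng = np.random.default_rng(1)
sides = [rng.integers(0, 256, (120, 160)) for _ in range(5)]

# Each side is inspected in its own worker; results come back in camera order.
with ThreadPoolExecutor(max_workers=5) as pool:
    counts = list(pool.map(inspect_side, sides))
```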
Author Contributions: Conceptualization, R.C. and Q.Z.; methodology, Q.Z. and B.H.; software, Q.Z. and J.Y.;
validation, Q.Z., C.L. and X.Y.; formal analysis, J.Y.; investigation, Q.Z., C.L. and X.Y.; resources, R.C.; data
curation, Q.Z. and X.Y.; writing—original draft preparation, Q.Z.; writing—review and editing, Q.Z.; visualization,
C.L. and X.Y.; supervision, R.C.; project administration, R.C.
Funding: This research was funded by National Natural Science Foundation of China (Grant No.51675265),
the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) and the China
Scholarship Council.
Acknowledgments: We would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was supported by Professor Huixiang Wu and Shanghai Together International Logistics Co., Ltd. (TIL). The authors gratefully acknowledge this support.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Karbacher, S.; Babst, J.; Häusler, G.; Laboureux, X. Visualization and detection of small defects on car-bodies.
In Proceedings of Vision, Modeling, and Visualization (VMV), 1999; pp. 1–8.
2. Molina, J.; Solanes, J.E.; Arnal, L.; Tornero, J. On the detection of defects on specular car body surfaces.
Robot. Comput. Integr. Manuf. 2017, 48, 263–278. [CrossRef]
3. Malamas, E.N.; Petrakis, E.G.; Zervakis, M.; Petit, L.; Legat, J.-D. A survey on industrial vision systems,
applications and tools. Image Vis. Comput. 2003, 21, 171–188. [CrossRef]
4. Li, Q.; Ren, S. A real-time visual inspection system for discrete surface defects of rail heads. IEEE Trans.
Instrum. Meas. 2012, 61, 2189–2199. [CrossRef]
5. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9,
62–66. [CrossRef]
6. Liao, P.-S.; Chen, T.-S.; Chung, P.-C. A fast algorithm for multilevel thresholding. J. Inf. Sci. Eng. 2001, 17,
713–727.
7. dos Anjos, A.; Shahbazkia, H.R. Bi-level image thresholding. BioSignals 2008, 2, 70–76.
8. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation.
J. Electron. Imaging 2004, 13, 146–166.
9. Armesto, L.; Tornero, J.; Herraez, A.; Asensio, J. Inspection system based on artificial vision for paint
defects detection on cars bodies. In Proceedings of the 2011 IEEE International Conference on Robotics and
Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1–4.
10. Immel, D.S.; Cohen, M.F.; Greenberg, D.P. A radiosity method for non-diffuse environments. In ACM
Siggraph Computer Graphics; ACM: New York, NY, USA, 1986; pp. 133–142.
11. Kajiya, J.T. The rendering equation. In ACM Siggraph Computer Graphics; ACM: New York, NY, USA, 1986;
pp. 143–150.
12. Fan, W.; Lu, C.; Tsujino, K. An automatic machine vision method for the flaw detection on car’s body. In
Proceedings of the 2015 IEEE 7th International Conference on Awareness Science and Technology (iCAST),
Qinhuangdao, China, 22–24 September 2015; pp. 13–18.
13. Kamani, P.; Noursadeghi, E.; Afshar, A.; Towhidkhah, F. Automatic paint defect detection and classification
of car body. In Proceedings of the 2011 7th Iranian Conference on Machine Vision and Image Processing,
Tehran, Iran, 16–17 November 2011; pp. 1–6.
14. Kamani, P.; Afshar, A.; Towhidkhah, F.; Roghani, E. Car body paint defect inspection using rotation invariant
measure of the local variance and one-against-all support vector machine. In Proceedings of the 2011
First International Conference on Informatics and Computational Intelligence, Bandung, Indonesia, 12–14
December 2011; pp. 244–249.
15. Chung, Y.C.; Chang, M. Visualization of subtle defects of car body outer panels. In Proceedings of the
SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 4639–4642.
16. Leon, F.P.; Kammel, S. Inspection of specular and painted surfaces with centralized fusion techniques.
Measurement 2006, 39, 536–546. [CrossRef]
17. Borsu, V.; Yogeswaran, A.; Payeur, P. Automated surface deformations detection and marking on automotive
body panels. In Proceedings of the 2010 IEEE Conference on Automation Science and Engineering (CASE),
Toronto, ON, Canada, 21–24 August 2010; pp. 551–556.
18. Xiong, Z.; Li, Q.; Mao, Q.; Zou, Q. A 3D laser profiling system for rail surface defect detection. Sensors 2017,
17, 1791. [CrossRef] [PubMed]
19. Qu, Z.; Bai, L.; An, S.-Q.; Ju, F.-R.; Liu, L. Lining seam elimination algorithm and surface crack detection in
concrete tunnel lining. J. Electron. Imaging 2016, 25, 063004. [CrossRef]
20. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network.
In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA,
25–28 September 2016; pp. 3708–3712.
21. Schmugge, S.J.; Rice, L.; Lindberg, J.; Grizziy, R.; Joffey, C.; Shin, M.C. Crack Segmentation by Leveraging
Multiple Frames of Varying Illumination. In Proceedings of the 2017 IEEE Winter Conference on Applications
of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017; pp. 1045–1053.
22. Zou, Q.; Zhang, Z.; Li, Q.; Qi, X.; Wang, Q.; Wang, S. DeepCrack: Learning Hierarchical Convolutional
Features for Crack Detection. IEEE Trans. Image Process. 2019, 28, 1498–1512. [CrossRef] [PubMed]
23. Xu, W.; Tang, Z.; Zhou, J.; Ding, J. Pavement crack detection based on saliency and statistical features.
In Proceedings of the 2013 20th IEEE International Conference on Image Processing (ICIP), Melbourne,
Australia, 15–18 September 2013; pp. 4093–4097.
24. Zou, Q.; Cao, Y.; Li, Q.; Mao, Q.; Wang, S. CrackTree: Automatic crack detection from pavement images.
Pattern Recognit. Lett. 2012, 33, 227–238. [CrossRef]
25. Shi, Y.; Cui, L.; Qi, Z.; Meng, F.; Chen, Z. Automatic road crack detection using random structured forests.
IEEE Trans. Intell. Transp. Syst. 2016, 17, 3434–3445. [CrossRef]
26. Phong, B.T. Illumination for computer generated pictures. Commun. ACM 1975, 18, 311–317. [CrossRef]
27. Whitted, T. An improved illumination model for shaded display. In Proceedings of the ACM Siggraph 2005
Courses, Los Angeles, CA, USA, 31 July–4 August 2005.
28. Lorenz, C.; Carlsen, I.-C.; Buzug, T.M.; Fassnacht, C.; Weese, J. A multi-scale line filter with automatic
scale selection based on the Hessian matrix for medical image segmentation. In Proceedings of the
International Conference on Scale-Space Theories in Computer Vision, Utrecht, The Netherlands, 2–4
July 1997; pp. 152–163.
29. Frangi, A.F.; Niessen, W.J.; Vincken, K.L.; Viergever, M.A. Multiscale vessel enhancement filtering. In
Proceedings of the International Conference on Medical Image Computing and Computer-Assisted
Intervention, Cambridge, MA, USA, 11–13 October 1998; pp. 130–137.
30. Rehkugler, G.; Throop, J. Apple sorting with machine vision. Trans. ASAE 1986, 29, 1388–1397. [CrossRef]
31. Available online: https://en.wikipedia.org/w/index.php?title=Divergence&oldid=863835077 (accessed on
13 October 2018).
32. Vapnik, V. Statistical Learning Theory; Wiley: New York, NY, USA, 1998; Volume 3.
33. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intel. Syst. Technol. 2011,
2, 27. [CrossRef]
34. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
[CrossRef]
35. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Proceedings of the European
Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417.
36. Prior, M.A.; Simon, J.; Herraez, A.; Asensio, J.M.; Tornero, J.; Ruescas, A.V.; Armesto, L. Inspection System
and Method of Defect Detection on Specular Surfaces. U.S. Patent US20130057678A1, 7 March 2013.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).