
ACCSE 2018 : The Third International Conference on Advances in Computation, Communications and Services

Radar-Camera Sensor Fusion Based Object Detection for Smart Vehicles

Eugin Hyun, ART Lab., DGIST, Daegu, Korea (e-mail: braham@dgist.ac.kr)
Young-Seok Jin, ART Lab., DGIST, Daegu, Korea (e-mail: ysjin@dgist.ac.kr)
Hyeong-Cheol Jeon, ART Lab., DGIST, Daegu, Korea (e-mail: hamtooree@dgist.ac.kr)
Young-Nam Shin, DAS Engineering Team, SL Corporation, Gyeongsangbuk-do, Korea (e-mail: ynshin@slworld.com)

Abstract—We propose a post-data processing scheme for the radar sensor in a fusion system consisting of camera and radar sensors. The proposed scheme is divided into a recursive least squares filter, ghost target and clutter cancellation, and region of interest (ROI) selection. In particular, for the recursive least squares filter, we determine whether detections belong to valid tracks or to new tracks in order to initialize or update the filter parameters. Next, we apply the valid detections to the filter to reduce detection errors. We then cancel ghost targets by comparing the current tracks with the previous tracks, and suppress clutter using the detected radial velocity. Finally, we select the ROI and determine the transferred coordinates to provide these values to the camera sensor. To verify the proposed method, we use a Delphi commercial radar and carry out measurements in a chamber and on the real road.

Keywords- Post processing; Sensor fusion; ADAS.

I. INTRODUCTION

At present, the Advanced Driver Assistance System (ADAS) is one of the main issues for smart vehicle safety in road traffic. To support effective ADASs, the target detection sensors used are very important. Among currently available sensors, camera and radar sensors are commonly used in ADASs [1][2].
Because radar detects objects by emitting radio signals and analyzing the echo in the reflected signal, it can operate robustly in different weather conditions [1][2]. Cameras are also widely used because they provide rich data, similar to that perceived by the human eye [1][2]. However, radar measurements are limited in terms of angle resolution, and the data is rather noisy due to false alarms. Cameras, in turn, are sensitive to light and weather conditions and have low detection accuracy, for example in velocity and range estimation.
Owing to these limitations, sensor fusion technology is considered an efficient means of increasing target detection performance [1]-[3]. Because previous works have fused the sensor outputs only at the end stage [4][5], the computational complexity of the camera classification remains high. Thus, in order to improve the detection performance and reduce the computational load, early-stage sensor fusion was proposed in previous works [6]-[8].
In the previous works [6][7], radar is used to detect targets in the region of interest (ROI) of the captured image, and vehicle features are then searched for within the ROI. However, the detection accuracy is unsatisfactory because the camera sensor detects the range of the ROI in the pre-processing stage. In order to overcome this limitation, another work [8] presented a more robust and efficient vision-based vehicle detection method, in which the radar sensor provides the ROIs for the camera sensor. Compared to [6][7], because the radar sensor detects the range of the target, this method improves the detection accuracy and reduces the false alarm rate.
For this fusion method, the radar system must provide precise ROI information to the camera sensor. Specifically, because the coordinates of the radar and the camera differ, coordinate matching is also required. Finally, because the two sensors' fields of view (FOVs) are also different, the overlap area should be considered.
Thus, in this paper, we propose a radar post-processing scheme that takes these issues into account for the fusion of camera and radar sensors. Section II briefly presents the proposed radar post-data processing scheme. Section III describes measurement results obtained in the field. Section IV presents the conclusion.

II. POST-DATA PROCESSING SCHEME

Figure 1 presents the expected fusion results of systems incorporating both camera and radar sensors.

Figure 1. Expected fusion results of camera and radar sensors.

The angle detection of the radar sensor has a low resolution, while the range detection error of the camera sensor is very high. Here, the overlap between the two sensors gives the final detected target position. In order to overcome the limitations of each sensor, we propose the sensor fusion based processing concept shown in Figure 2.


From the radar sensor, the detection information is received through a controller area network (CAN) once the radar start command has been issued. After packet receiving and decoding, the track data and the corresponding flag (or status) data are saved in registers. Subsequently, through the proposed post-data processing and projective transformation steps, the ROI information is transferred to the image processing path. In the camera sensor, feature extraction and target classification are carried out based on these ROIs.

Figure 2. Proposed camera and radar sensor fusion processing concept.

The proposed post-data processing scheme for radar is illustrated in Figure 3. First, we determine whether each detection is valid or not using the flag and status values.

Figure 3. Post-data processing scheme for radar.
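Since the later steps all operate on these decoded tracks, a minimal Python sketch of a track container and of this first validity check may help. The field names and the VALID_BIT mask are placeholders of ours; the actual flag and status layout is defined by the radar's CAN messages, which are not reproduced in this paper.

```python
from dataclasses import dataclass

# Hypothetical status bit: stands in for the real flag/status definition of the radar.
VALID_BIT = 0x01

@dataclass
class Track:
    track_id: int
    rng: float       # range (m)
    angle: float     # azimuth angle (deg)
    velocity: float  # radial velocity (m/s)
    status: int      # flag/status word decoded from the CAN packet

def valid_tracks(tracks):
    """Step 1: keep only detections whose flag/status marks them as valid."""
    return [t for t in tracks if t.status & VALID_BIT]
```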
In the second step, we initialize the parameters of the error-minimizing filter if the current track is new. For existing tracks, the corresponding parameters are calculated using the values updated in the previous state. In this paper, we employ the recursive least squares second-order filter [9] to improve the angle detection error. The range and velocity values from the first step are passed through without modification because their detection errors are very low.
The filter processing is expressed by (1) and (2), where K1 = 2(2k − 1) / (k(k + 1)), K2 = 6 / (k(k + 1)), and Res = θ[k] − θ̂[k−1] − ω̂[k−1]. Here, θ̂[k] is the kth estimated angle and θ[k] is the kth measured angle. For a new track (k = 1), we define θ̂[0] = θ[1] and ω̂[0] = 0.

θ̂[k] = θ̂[k−1] + ω̂[k−1] + K1 · Res    (1)
ω̂[k] = ω̂[k−1] + K2 · Res    (2)
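As a concrete illustration of (1) and (2), the following is a minimal Python sketch of the per-track angle filter. The class and variable names are ours, a unit frame interval is assumed, and the residual is taken against the predicted angle, consistent with the reconstruction of Res given above; it is a sketch of the filter family in [9], not code from the paper.

```python
class AngleRLSFilter:
    """Recursive least squares second-order filter for one track's angle,
    implementing (1) and (2): the gains K1 and K2 shrink as the track ages,
    so the estimate settles while still following a slowly changing angle."""

    def __init__(self, first_measurement):
        # New track (k = 1): the angle estimate starts at the first measurement
        # and the angle-rate estimate starts at zero.
        self.k = 1
        self.theta_hat = float(first_measurement)  # estimated angle (deg)
        self.omega_hat = 0.0                       # estimated angle change per frame

    def update(self, theta_meas):
        """Apply one measurement (k = 2, 3, ...) and return the smoothed angle."""
        self.k += 1
        k = self.k
        k1 = 2.0 * (2 * k - 1) / (k * (k + 1))     # gain K1 in (1)
        k2 = 6.0 / (k * (k + 1))                   # gain K2 in (2)
        predicted = self.theta_hat + self.omega_hat
        res = theta_meas - predicted               # residual Res
        self.theta_hat = predicted + k1 * res      # update (1)
        self.omega_hat += k2 * res                 # update (2)
        return self.theta_hat
```

Because K1 and K2 decay roughly as 1/k, an established track (for example k = 20 gives K1 ≈ 0.19 and K2 ≈ 0.014) is only slightly perturbed by a single noisy angle measurement, which is the intended error-reduction behavior.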
Next, sudden tracks are cancelled. That is, by comparing the current track status with the previous track information, we can determine whether a received detection is a ghost or not.
In the next step, we distinguish whether the current track is a target or clutter. If the velocity of the current track, v[k], satisfies (3), the current track is a target; otherwise it is regarded as clutter. Here, vmin is the maximum velocity of clutter and vego is the ego velocity of the subject vehicle. In addition, vmin is estimated statistically from experiments as a trade-off between the detection probability and the false alarm rate.

|v[k] + vego| > vmin    (3)
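A short sketch of how the ghost and clutter tests could be combined is given below, under two assumptions of ours: a ghost is taken to be a track identifier that was absent from the previous scan (the paper only states that current and previous track information are compared), and the clutter test implements (3) as reconstructed above. The Track container from the earlier sketch is reused.

```python
V_MIN = 0.5  # m/s: assumed maximum clutter velocity; the real value is tuned experimentally

def is_ghost(track, previous_ids):
    """Assumed ghost test: a 'sudden' track whose ID was not in the previous scan."""
    return track.track_id not in previous_ids

def is_target(track, v_ego, v_min=V_MIN):
    """Condition (3): the ego-motion-compensated radial velocity exceeds v_min."""
    return abs(track.velocity + v_ego) > v_min

def cancel_ghost_and_clutter(tracks, previous_ids, v_ego):
    """Keep only tracks that are neither ghosts nor clutter."""
    return [t for t in tracks
            if not is_ghost(t, previous_ids) and is_target(t, v_ego)]
```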
Then we select the ROI information (range, radial velocity, and angle) in the overlap area of the radar and camera FOVs. Finally, the projective transformation is carried out. In this case, an error bound is also fed to the camera sensor, considering the range detection error of the camera sensor and the angle detection error of the radar sensor. The camera sensor then processes images only within this window, which reduces the complexity of the image processing.
The steps described above are repeated until the scanning of the final track is completed. The number of tracks depends on the type of commercial radar and the corresponding parameter settings.
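The projective transformation and window generation are not specified in detail in the paper, so the sketch below only illustrates the idea with a flat-road pinhole camera model: the radar polar detection is converted to Cartesian coordinates, checked against the camera FOV (the overlap area), projected to pixel coordinates, and expanded into a search window using the stated error bounds. Every constant (focal length, principal point, camera FOV and height, error bounds) is a hypothetical placeholder standing in for real calibration data.

```python
import math

# Hypothetical calibration and error-bound constants (placeholders only).
FOCAL_PX = 800.0            # focal length in pixels
CX, CY = 640.0, 360.0       # principal point (image center)
CAMERA_HALF_FOV_DEG = 30.0  # half of the camera's horizontal FOV
ANGLE_ERR_DEG = 2.0         # radar angle error bound
RANGE_ERR_M = 1.0           # camera range error bound
CAMERA_HEIGHT_M = 1.2       # camera height above the road

def radar_to_cartesian(rng, angle_deg):
    """Polar radar detection (range, azimuth) -> (lateral x, forward y) in meters."""
    a = math.radians(angle_deg)
    return rng * math.sin(a), rng * math.cos(a)

def roi_window(rng, angle_deg):
    """Project one radar detection into the image and return a pixel search window
    (left, top, right, bottom), or None if it lies outside the FOV overlap."""
    if abs(angle_deg) > CAMERA_HALF_FOV_DEG:
        return None                                # outside the camera FOV
    x, y = radar_to_cartesian(rng, angle_deg)
    u = CX + FOCAL_PX * x / y                      # flat-road pinhole projection
    v = CY + FOCAL_PX * CAMERA_HEIGHT_M / y        # image row of the target's foot point
    half_w = FOCAL_PX * rng * math.radians(ANGLE_ERR_DEG) / y    # from the angle error
    half_h = FOCAL_PX * CAMERA_HEIGHT_M * RANGE_ERR_M / (y * y)  # from the range error
    return (u - half_w, v - half_h, u + half_w, v + half_h)
```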


III. MEASUREMENT RESULTS

In this paper, we employ the Delphi 77 GHz ESR (Electronically Scanning Radar) system. In this radar system [10], the maximum number of tracks is 64, so the post-data processing described in Section II is repeated until the scanning of all 64 tracks is completed.

TABLE I. DELPHI ESR SPECIFICATIONS
  Category                 Values
  Size (L x W x H)         173.7 x 90.2 x 49.2 mm
  Weight                   575 g
  Scanning frequency       76.5 GHz
  Field of view            +/- 45°
  Range                    ~60 m
  Targets (max. tracks)    64
  Update rate              <= 50 ms

In order to verify the post-data processing method for the radar, we configured a moving-target measurement scenario in a chamber, as shown in Figure 4. First, we installed the Delphi ESR on a positioner, and a single target was placed on a rail approximately 3.5 m away from the radar. The target then moved from 3.5 m to 5.9 m in a round trip.

Figure 4. Moving target measurement scenario in chamber room.

Figure 5. Radar measurement set-up.

We also utilized the measurement set-up shown in Figure 5. Here, a PC is connected to the ESR device through a CAN-to-USB converter. We coded device driver software to start the radar sensor and a data parsing program to log the received data in Matlab. In addition, we also implemented the aforementioned post-data processing algorithm.
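As a rough software counterpart to the set-up in Figure 5, the following sketch uses the python-can package (instead of the Matlab tool chain described above) to capture raw frames for offline parsing. The channel name, timeout, and frame count are placeholders, and the per-message decoding is left out because the ESR message layout is defined in the vendor documentation [10] rather than here.

```python
import can  # python-can package

def log_raw_tracks(num_frames=1000, channel="can0"):
    """Read raw frames from the radar CAN bus and collect them for offline parsing.
    Decoding the track range/angle/velocity fields follows the ESR message
    definitions in the startup guide [10] and is not reproduced here."""
    frames = []
    with can.interface.Bus(channel=channel, bustype="socketcan") as bus:
        for _ in range(num_frames):
            msg = bus.recv(timeout=0.1)   # 100 ms receive timeout (placeholder)
            if msg is not None:
                frames.append((msg.timestamp, msg.arbitration_id, bytes(msg.data)))
    return frames
```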

Figure 6 shows the detection results over several frame times: range (top), radial velocity (middle), and angle (bottom). As shown in the results, the angle detections contain numerous errors. Even when the moving target was placed on the middle line of the radar, the detection results were found to vary. Thus, we filtered the detection outcomes through the recursive least squares filter. The corresponding outputs are indicated by the red line in Figure 6.

Figure 6. Results of post-data processing for radar.

Next, in order to verify the algorithm on the real road, we installed the radar and the camera at the middle of the vehicle front bumper. Electrical power from the vehicle was supplied to both sensors, and signal lines were built into the vehicle to acquire the radar signals and capture the camera images together with a PC.
In addition, we considered four scenarios, as shown in Figure 7. First, a single human moves along the middle line of the radar sensor at approximately 6 m (a). In the second scenario (b), a human walks along the right edge of the radar FOV. In the next scenario (c), the human walks 3 m away from the middle line in the longitudinal direction. The final case (d) shows a pedestrian who moves laterally at about 3 m away from the radar.


Figure 7. Results of post data processing for radar.

Figures 8-13 present the measurements and post-processing results for each scenario. We monitored the moving human for about 7 seconds. In all figures, the blue points are the detected tracks in each frame.
First, Figures 8 and 9 show the results when the human is walking and running in the first scenario, respectively. Here, Figures (a)~(c) present the valid tracks received from the radar sensor and Figures (d)~(f) show the post-data processing results. In Figures (a) and (d), the x-axis is the frame index and the y-axis indicates the range (meter). In Figures (b) and (e), the x-axis indicates the frame index and the y-axis is the angle (degree). Figures 8 (c) and (f) express the corresponding x- and y-positions (meter) of the tracks over the whole frames; these results are calculated using the range and angle values of each track. Here, the black line indicates the FOV of the radar. Figures 9 (c) and (f) show the radial velocity (m/s) over each frame.
In the results of Figure 8, we can see that the angle errors are compensated through the recursive least squares filter. Moreover, in Figure 9, we can find that the ghost targets and clutter are cancelled and that the multiple scattering points originating from one target are grouped together.
Next, in Figures 10, 11, and 12, we present the processing results for scenarios 2, 3, and 4. In these results, we plot the x-y positions (meter) of the target, calculated from the detected range and angle, for all frames. We also mark a black line to indicate the detectable angle of the radar. All results show that the angle errors are minimized, the ghost targets and clutter are cancelled, and the grouping is completed.

Figure 13. Example of ROI selection and window generation for the camera sensor.
Figure 8. Detected target tracks (range, angle, and the corresponding xy-position) for the first scenario: (a)~(c) tracks before post-processing and (d)~(f) tracks after post-processing.


Last, Figure 13 shows an example of the ROI selection process (red circle) on the captured image for the first scenario. Here, the wide-angle camera was developed by SL Corporation. In the image processing, the window can be generated based on the selected ROI, including the range and angle, as in the example of Figure 13.

Figure 9. Detected target tracks (range, angle, and radial velocity) for the second scenario: (a)~(c) tracks before post-processing and (d)~(f) tracks after post-processing.

Figure 10. Detected target tracks (xy-position) over the whole frames for the third scenario.

Figure 11. Detected target tracks (xy-position) over the whole frames for the fourth scenario.


Figure 12. Detected target tracks (xy-position) over the whole frames for the fifth scenario.

IV. CONCLUSION

In this paper, we proposed post-processing of radar data for a camera and radar sensor fusion system. To do this, we utilized a Delphi 77 GHz automotive commercial radar system. First, using the flag values received from the radar, we determined instances of valid detections and new tracks. Next, we employed a recursive least squares filter to reduce the detected angle error. Next, we cancelled the ghost targets and clutter using the received track information. Finally, based on the selected ROI information, the projective transformation is carried out for the camera sensor. The performance capabilities of the proposed scheme were assessed in a chamber and in an outdoor environment.
In the future, we will verify the proposed processing scheme in various scenarios on the real road, thereby providing more meaningful results. Moreover, together with the camera sensor, we will develop sensor fusion processing methods. We will then compare the results of sensor fusion with those obtained by the camera alone.

ACKNOWLEDGMENT

This research was supported by the Technology Transfer and Commercialization Program through the INNOPOLIS Foundation (2017-DG-0001) and the DGIST R&D Program (18-FA-07) funded by the Ministry of Science and ICT, Korea.

REFERENCES

[1] A. Gavriilidis, D. Müller, S. Müller-Schneiders, J. Velten, and A. Kummert, "Sensor System Blockage Detection for Night Time Headlight Control Based on Camera and Radar Sensor Information," IEEE ITSC, Anchorage, USA, Sep. 2012.
[2] E. Hyun and Y. S. Jin, "Multi-level Fusion Scheme for Target Classification using Camera and Radar Sensors," IPCV'17, Las Vegas, USA, July 2017, pp. 111-114.
[3] J. Laneurit, C. Blanc, R. Chapuis, and L. Trassoudaine, "Multisensorial data fusion for global vehicle and obstacles absolute positioning," IEEE Intelligent Vehicles Symposium, Columbus, USA, Jun. 2003, pp. 138-143.
[4] R. O. Chavez-Garcia, J. Burlet, T. D. Vu, and O. Aycard, "Frontal object perception using radar and mono-vision," IEEE Intelligent Vehicles Symposium, Alcala de Henares, Spain, Jun. 2012.
[5] U. Kadow, G. Schneider, and A. Vukotich, "Radar-vision based vehicle recognition with evolutionary optimized and boosted features," IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, Jun. 2007, pp. 749-754.
[6] A. Sole, O. Mano, G. Stein, H. Kumon, Y. Tamatsu, and A. Shashua, "Solid or not solid: Vision for radar target validation," IEEE Intelligent Vehicles Symposium, Parma, Italy, Jun. 2004, pp. 819-824.
[7] G. Alessandretti, A. Broggi, and P. Cerri, "Vehicle and Guard Rail Detection Using Radar and Vision Data Fusion," IEEE Transactions on Intelligent Transportation Systems, Vol. 8, No. 1, pp. 95-105, Mar. 2007.
[8] X. Wang, L. Xu, H. Sun, J. Xin, and N. Zheng, "On-Road Vehicle Detection and Tracking Using MMW Radar and Mono-vision Fusion," IEEE Transactions on Intelligent Transportation Systems, Vol. 17, No. 7, pp. 2075-2084, July 2016.
[9] J. Lee and V. J. Mathews, "A fast recursive least squares adaptive second order Volterra filter and its performance analysis," IEEE Transactions on Signal Processing, Vol. 41, No. 3, pp. 1087-1102, Mar. 1993.
[10] AutonomouStuff, "Delphi ESR Startup Guide," version 2.1, Oct. 2015.
