Radar-Camera Sensor Fusion Based Object Detection For ... - ThinkMind
Abstract—We propose a post-data processing scheme for radar sensors in a fusion system consisting of camera and radar sensors. The proposed scheme is divided into a recursive least squares filter, ghost-target and clutter cancellation, and region of interest (ROI) selection. For the recursive least squares filter, we first determine whether detections belong to valid tracks or to new tracks, in order to initialize or update the filter parameters. Next, we apply the valid detections to the filter to reduce detection errors. We then cancel ghost targets by comparing the current tracks with the previous tracks, and suppress clutter using the detected radial velocity. Finally, we select the ROI and determine the transformed coordinates to provide these values to the camera sensor. To verify the proposed method, we use a Delphi commercial radar and carry out measurements in a chamber and on a real road.

Keywords- Post processing; Sensor fusion; ADAS.

I. INTRODUCTION

At present, the Advanced Driver Assistance System (ADAS) is one of the main issues for smart vehicle safety in road traffic. To support effective ADASs, the target detection sensors used are very important. Among currently available sensors, camera and radar sensors are commonly used in ADASs [1][2].

Because radar detects objects by emitting radio signals and analyzing the echo in the reflected signal, it can operate robustly in different weather conditions [1][2]. Cameras are also widely used because they provide rich data, similar to that perceived by the human eye [1][2]. However, radar measurements are limited in angular resolution, and the data are rather noisy due to false alarms. Cameras, in turn, are sensitive to light and weather conditions, and they have low detection accuracy for quantities such as velocity and range.

Owing to these limitations, sensor fusion technology is considered an efficient means of increasing target detection performance [1]-[3]. Because previous works have performed sensor fusion at the end stage [4][5], the computational complexity of camera classification remains high. Thus, in order to improve the detection performance and reduce the computational load, early-stage sensor fusion was proposed in previous works [6]-[8].

In the previous works [6][7], radar is used to detect targets in the region of interest (ROI) of the captured image and to search for vehicle features within the ROI. However, the detection accuracy is unsatisfactory because the camera sensor detects the range of the ROI as pre-processing. To overcome this limitation, another work [8] presented a more robust and efficient vision-based vehicle detection method in which the radar sensor provides ROIs to the camera sensor. Compared to [6][7], because the radar sensor detects the range of the target, this method can improve the detection accuracy and reduce the false alarm rate.

In such a fusion method, the radar system provides precise ROI information to the camera sensor. Specifically, because the coordinate systems of the radar and the camera differ, coordinate matching is also required. Finally, because the two sensors' fields of view (FOVs) are also different, the overlap area should be considered.

Thus, in this paper, we propose a radar post-processing scheme that takes these issues into account for fusion of the camera and radar sensors. Section II briefly presents the proposed radar post-data processing scheme. Section III describes measurement results obtained in the field. Section IV presents the conclusion.

II. POST-DATA PROCESSING SCHEME

Figure 1 presents the expected fusion results of systems incorporating both camera and radar sensors.

Figure 1. Expected fusion results of camera and radar sensors.

The angle detection of the radar sensor has a low resolution, while the range detection error of the camera sensor is very high. Here, the overlap between the two sensors gives the final detected target position. In order to overcome the limitations of each sensor, we propose the sensor-fusion-based processing concept shown in Figure 2.

From the radar sensor, the detection information is received through a controller area network (CAN) after a radar start command. After packet reception and decoding, the track data and the corresponding flag (or status) data are saved in registers. Subsequently, through the proposed post-data processing and projective transformation steps, the ROI information is transferred to the image processing path. In the camera sensor, feature extraction and target classification are carried out based on these ROIs.
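The post-data processing stage includes the ghost-target and clutter cancellation summarized in the abstract: a current track is compared against the previous scan, and slow-moving returns are suppressed via the detected radial velocity. The following is a minimal sketch of that idea; the track fields, the range-jump threshold, and the radial-velocity threshold are illustrative assumptions, not the paper's exact parameters.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    range_m: float      # detected range [m]
    angle_deg: float    # detected azimuth angle [deg]
    radial_vel: float   # detected radial velocity [m/s]

def cancel_ghosts_and_clutter(current, last, max_jump_m=5.0, min_radial_vel=0.5):
    """Keep tracks consistent with the previous scan whose radial
    velocity exceeds a clutter threshold (assumed values)."""
    last_by_id = {t.track_id: t for t in last}
    valid = []
    for t in current:
        prev = last_by_id.get(t.track_id)
        # Ghost cancellation: a track that jumped implausibly far
        # between consecutive scans is discarded.
        if prev is not None and abs(t.range_m - prev.range_m) > max_jump_m:
            continue
        # Clutter suppression: near-zero radial velocity is treated
        # as clutter in a moving-target scenario.
        if abs(t.radial_vel) < min_radial_vel:
            continue
        valid.append(t)
    return valid
```

Tracks with no predecessor are kept here so that they can initialize new filter tracks, matching the valid/new track distinction made in the proposed scheme.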
III. MEASUREMENT RESULTS

In this paper, we employ the Delphi 77 GHz ESR (Electronically Scanning Radar) system. In this radar system [10], because the maximum number of tracks is 64, the post-data processing described in Section II is repeated until the scanning of all 64 tracks is completed.

TABLE I. DELPHI ESR SPECIFICATIONS

    Category               Values
    Size (L x W x H)       173.7 x 90.2 x 49.2 mm
    Weight                 575 g
    Scanning frequency     76.5 GHz
    Field of view          +/- 45 deg
    Range                  ~ 60 m
    Targets                64
    Update rate            <= 50 ms

Figure 6 shows the detection results over several frame times: range (top), radial velocity (middle), and angle (bottom). As shown in the results, the angle detections contain numerous errors. Even when the moving target is placed on the center line of the radar, the detection results were found to vary. Thus, we filtered the detection outcomes with a recursive least squares filter. The corresponding outputs are indicated by the red line in Figure 6.
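For a target held on the radar center line, the angle behaves as a slowly varying scalar observed through noisy detections, which is the setting where a recursive least squares filter applies. A minimal scalar RLS sketch follows; the initial covariance and forgetting factor are assumptions, since the paper does not state its filter parameters.

```python
import random

class ScalarRLS:
    """Recursive least squares estimate of a slowly varying scalar
    (here, the target angle) with forgetting factor lam."""
    def __init__(self, theta0=0.0, p0=1000.0, lam=0.95):
        self.theta = theta0   # current estimate
        self.p = p0           # estimate covariance
        self.lam = lam        # forgetting factor (0 < lam <= 1)

    def update(self, y):
        k = self.p / (self.lam + self.p)      # gain
        self.theta += k * (y - self.theta)    # correct with innovation
        self.p = (1.0 - k) * self.p / self.lam
        return self.theta

random.seed(0)
rls = ScalarRLS()
true_angle = 0.0  # target on the radar center line
for _ in range(200):
    meas = true_angle + random.gauss(0.0, 2.0)  # noisy angle detection [deg]
    est = rls.update(meas)
print(round(est, 2))
```

With a forgetting factor below one, the gain settles near 1 - lam, so the filter tracks slow angle changes while averaging out the detection noise visible in Figure 6.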
Figure 8. Detected target tracks (range, angle, and the corresponding xy-position) for the first scenario: (a)~(c) tracks before post-processing and (d)~(e) tracks after post-processing.
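The xy-positions shown in these track plots follow directly from each detected (range, angle) pair; a minimal conversion sketch, assuming the angle is measured from the radar boresight (x-axis):

```python
import math

def track_to_xy(range_m, angle_deg):
    """Convert a detected (range, angle) pair to the xy-position used
    in the track plots (axis convention assumed)."""
    a = math.radians(angle_deg)
    return range_m * math.cos(a), range_m * math.sin(a)
```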
Last, Figure 13 shows an example of the ROI selection process (red circle) on the captured image for the first scenario. Here, the wide-angle camera was developed by SL Corporation. In the image processing, the detection window can be generated based on the selected ROI, including its range and angle, as in the example of Figure 13.
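Generating such a window from a selected radar track amounts to projecting the track's ground-plane position into pixel coordinates and sizing the window inversely with range. The sketch below illustrates this under a simple pinhole model; the focal lengths, principal point, camera height, and object dimensions are illustrative assumptions, not the calibration of the actual SL Corporation camera or the paper's projective transformation.

```python
import math

FX, FY = 800.0, 800.0   # assumed focal lengths [px]
CX, CY = 640.0, 360.0   # assumed principal point [px]
CAM_HEIGHT = 1.2        # assumed camera height above ground [m]

def roi_from_track(range_m, angle_deg, obj_w=1.8, obj_h=1.5):
    """Project a radar track at (range, angle) on the ground plane into
    a pixel-space ROI window sized for a typical vehicle."""
    a = math.radians(angle_deg)
    x = range_m * math.sin(a)           # lateral offset [m]
    z = range_m * math.cos(a)           # longitudinal distance [m]
    u = CX + FX * x / z                 # horizontal image coordinate
    v = CY + FY * CAM_HEIGHT / z        # image row of the ground contact point
    w = FX * obj_w / z                  # window width shrinks with range
    h = FY * obj_h / z
    return (u - w / 2, v - h, u + w / 2, v)  # (left, top, right, bottom)
```

In practice the radar-to-camera extrinsics and the differing FOVs discussed in the introduction would also enter this transformation.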
Figure 9. Detected target tracks (range, angle, and radial velocity) for the second scenario: (a)~(c) tracks before post-processing and (d)~(e) tracks after post-processing.

Figure 10. Detected target tracks (xy-position) over the whole frames for the third scenario.

Figure 11. Detected target tracks (xy-position) over the whole frames for the fourth scenario.

Figure 12. Detected target tracks (xy-position) over the whole frames for the fifth scenario.