Platform for evaluating sensors and human detection in autonomous mowing operations


Abstract

The concept of autonomous farming concerns automatic agricultural machines that operate safely and efficiently without human intervention. To ensure safe autonomous operation, risks must be detected and avoided in real time. This paper presents a flexible vehicle-mounted sensor system for recording positional and imaging data with a total of six sensors, together with a full procedure for calibrating and registering all sensors. Authentic data were recorded for a case study on grass-harvesting and human safety. The paper incorporates the parts of ISO 18497 (an emerging standard for safety of highly automated machinery in agriculture) that relate to human detection and safety. The case study investigates four different sensing technologies and is intended as a dataset for validating human safety or a human detection system in grass-harvesting. The study presents common algorithms that are able to detect humans but struggle to handle lying or occluded humans in high grass.




References

  • Bahnsen, C. (2013). Thermal-visible-depth image registration. Unpublished Master Thesis, Aalborg University, Aalborg, Denmark.

  • Christiansen, P., Kragh, M., Steen, K. A., Karstoft, H., & Jørgensen, R. N. (2015). Advanced sensor platform for human detection and protection in autonomous farming. Precision Agriculture, 15, 291–298.

  • Christiansen, P., Steen, K. A., Jørgensen, R. N., & Karstoft, H. (2014). Automated detection and recognition of wildlife using thermal cameras. Sensors, 14(8), 13778–13793.

  • CLAAS Steering Systems. (2011). Tracking control optimisation. Retrieved 26 September 2016 from http://claas.via-us.co.uk/booklets/gps-steering-systems/download.

  • Dollar, P., Belongie, S., & Perona, P. (2010). The fastest pedestrian detector in the west. In F. Labrosse, R. Zwiggelaar, Y. Liu & B. Tiddeman (Eds.), Proceedings of the British machine vision conference 2010 (pp 68.1–68.11). BMVA Press, Durham University, UK.

  • Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.

  • Freitas, G., Hamner, B., Bergerman, M., & Singh, S. (2012). A practical obstacle detection system for autonomous orchard vehicles. In 2012 IEEE/RSJ international conference on intelligent robots and systems (pp 3391–3398).

  • ISO/DIS 18497:2015. Agricultural and forestry tractors and self-propelled machinery - Safety of highly automated machinery. Retrieved 26 September 2016 from https://drive.google.com/file/d/0B1ilODNTH9nzRUV2N0JzbklubFU/view.

  • Johnson, M. J., & Bajcsy, P. (2008). Integration of thermal and visible imagery for robust foreground detection in tele-immersive spaces. In P. Solbrig (Ed.), Proceedings of the 11th international conference on information fusion (pp. 1265–1272). Piscataway, USA: IEEE.

  • Krotosky, S. J., & Trivedi, M. M. (2007). Mutual information based registration of multimodal stereo videos for person tracking. Computer Vision and Image Understanding, 106(2–3), 270–287.

  • McLachlan, G. J., & Basford, K. E. (1988). Mixture models: Inference and applications to clustering. In Statistics: textbooks and monographs. New York, USA: Dekker.

  • Paden, B., Cáp, M., Yong, Z. S., Yershov, D., & Frazzoli, E. (2016). A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles, 1(1), 33–55. arXiv:cs.CV/1604.07446v1.

  • Pilarski, T., Happold, M., Pangels, H., Ollis, M., Fitzpatrick, K., & Stentz, A. (2002). The Demeter System for automated harvesting. Autonomous Robots, 13, 9–20.

  • Rasshofer, R. H., & Gresser, K. (2005). Automotive radar and lidar systems for next generation driver assistance functions. Advances in Radio Science, 3, 205–209.

  • Reina, G., & Milella, A. (2012). Towards autonomous agriculture: Automatic ground detection using trinocular stereovision. Sensors, 12(12), 12405–12423.

  • Rouveure, R., Nielsen, M., & Petersen, A. (2012). The QUAD-AV Project: Multi-sensory approach for obstacle detection in agricultural autonomous robotics. In International conference of agricultural engineering. Valencia, Spain: EurAgEng.

  • Serrano-Cuerda, J., Fernández-Caballero, A., & López, M. (2014). Selection of a visible-light vs. thermal infrared sensor in dynamic environments based on confidence measures. Applied Sciences, 4(3), 331–350.

  • Steen, K. A., Villa-Henriksen, A., Therkildsen, O. R., & Green, O. (2012). Automatic detection of animals in mowing operations using thermal cameras. Sensors, 12(6), 7587–7597.

  • The MathWorks, Inc. (2015). MATLAB and computer vision system toolbox. Natick, MA, USA: The MathWorks, Inc.

  • Wei, J., Rovira-Mas, F., Reid, J. F., & Han, S. (2005). Obstacle detection using stereo vision to enhance safety of autonomous machines. Transactions of the ASAE, 48(6), 2389–2397. doi:10.13031/2013.20078.

  • Yang, L., & Noguchi, N. (2012). Human detection for a robot tractor using omni-directional stereo vision. Computers and Electronics in Agriculture, 89, 116–125.

  • Zhang, Z. (1994). Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision, 13(2), 119–152.

  • Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–1334.

  • Zhao, J., & Cheung, S. S. (2014). Human segmentation by geometrically fusing visible-light and thermal imageries. Multimedia Tools and Applications, 76(1), 7361–7389.


Acknowledgements

This research was sponsored by the Innovation Fund Denmark as part of the projects “SAFE - Safer Autonomous Farming Equipment” (Project No. 16-2014-0) and “Multi-sensor system for ensuring ethical and efficient crop production” (Project No. 155-2013-6).

Author information

Corresponding author

Correspondence to P. Christiansen.

Appendix: Thermal–visual registration and evaluation

First, a total of 47 synchronized thermal and stereo images were selected from a single calibration recording. For each image, a rectangular area inside the checkerboard was marked manually to specify an image crop, see Fig. 11. For RGB images, the cropped image was converted to the LAB color space and a Gaussian mixture model separated its pixels into two clusters (copper and non-copper areas). The posterior probability of belonging to one of the Gaussian clusters was then determined for all pixels in the original image, see Fig. 12. For thermal images, the cropped image was normalized by shifting and scaling so that its pixel values span the range [0, 1], and the same normalization was applied to the whole thermal image, see Fig. 13. The MATLAB calibration toolbox was then able to detect checkerboards automatically in the transformed RGB and thermal images: the checkerboard was detected in 27 and 43 of the 45 images for the stereo and thermal images, respectively. The 27 stereo images were used to calibrate the intrinsic and extrinsic parameters of the stereo camera, and the 43 thermal images were used to determine the intrinsic parameters of the thermal camera.
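As an illustration of this pre-processing, the MATLAB sketch below shows how the two detection images might be produced. It is a minimal sketch, assuming the Image Processing, Statistics and Machine Learning, and Computer Vision toolboxes; the variable names (rgbImage, thermalImage, cropRect) are placeholders and not taken from the paper.

```matlab
% RGB branch: cluster the cropped board region into copper / non-copper pixels
% with a two-component Gaussian mixture in LAB space, then use the per-pixel
% posterior probability of one cluster as the detection image.
lab     = rgb2lab(rgbImage);                          % full image in LAB colour space
crop    = imcrop(lab, cropRect);                      % manually marked rectangle
gm      = fitgmdist(reshape(crop, [], 3), 2, 'Replicates', 5);
P       = posterior(gm, reshape(lab, [], 3));         % posterior for every pixel
probMap = reshape(P(:, 1), size(lab, 1), size(lab, 2));
rgbCorners = detectCheckerboardPoints(probMap);       % MATLAB checkerboard detector

% Thermal branch: shift and scale so the marked board region spans [0, 1] and
% apply the same transform to the whole thermal image.
cropT     = double(imcrop(thermalImage, cropRect));
thermNorm = (double(thermalImage) - min(cropT(:))) ./ (max(cropT(:)) - min(cropT(:)));
thermNorm = min(max(thermNorm, 0), 1);                % clamp pixels outside the board range
thermCorners = detectCheckerboardPoints(thermNorm);
```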

Fig. 11

Image example and a manually marked rectangle

Fig. 12

Posterior probability of belonging to one of the Gaussian clusters for all pixels in the image example. Checkerboard detection is marked with blue crosses (Color figure online)

Fig. 13

Thermal image is normalized relative to the checkerboard. Checkerboard detection is marked with blue crosses (Color figure online)

Fig. 14

Zoomed images. Blue crosses mark corners detected by the MATLAB calibration toolbox for both an RGB image (left) and a thermal image (right). The red crosses (right) show how the 3D points are projected to the thermal camera (Color figure online)

In 25 of the 47 synchronized images, the checkerboard was successfully detected by the MATLAB calibration toolbox in both the RGB and thermal images. The toolbox estimated the 3D position of the checkerboard in all 25 images for each camera. The extrinsic parameters of the thermal camera were then determined as the least-squares rigid transformation that maps the checkerboards estimated from the left RGB camera onto those estimated from the thermal camera (in 3D).
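The least-squares rigid transformation can be obtained with the standard SVD-based (Kabsch/Procrustes) solution. The sketch below is an assumed implementation of that step, not code from the paper; P and Q are placeholder names for the N x 3 checkerboard corner coordinates in the left RGB and thermal camera frames.

```matlab
% Rigid transform (R, t) minimising sum_i ||(R*P(i,:)' + t) - Q(i,:)'||^2,
% i.e. mapping points from the left RGB camera frame onto the thermal frame.
muP = mean(P, 1);
muQ = mean(Q, 1);
H   = (P - muP)' * (Q - muQ);                   % 3x3 cross-covariance matrix
[U, ~, V] = svd(H);
R   = V * diag([1, 1, sign(det(V * U'))]) * U'; % rotation, guarding against reflection
t   = muQ' - R * muP';                          % translation
```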

The registration was evaluated quantitatively on the same 25 images. As described above, the camera calibration for the left stereo camera estimated the checkerboard positions in 3D. These positions were then projected into the thermal image using the estimated extrinsic and intrinsic parameters of the thermal camera, see Fig. 14 (right).
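In code, this projection step might look as follows, assuming R and t from the rigid transform above, a cameraIntrinsics object thermalIntrinsics for the thermal camera, and the N x 3 corner positions boardPoints3D expressed in the left RGB camera frame (all placeholder names):

```matlab
% Move the corners into the thermal camera frame, then project them with the
% thermal intrinsics (identity extrinsics, since the points are already in
% that frame).
ptsThermal  = (R * boardPoints3D' + t)';
projCorners = worldToImage(thermalIntrinsics, eye(3), [0 0 0], ptsThermal);
```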

The error was determined as the distance between the detected checkerboard corners and the projected 3D positions. Figure 15 shows the mean pixel error for each of the 25 images and the mean pixel error across all images of 4.66 pixels. The image example used in Figs. 11, 12, 13 and 14 is image 21, whose mean pixel error is close to the mean pixel error across all images.
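Given the projected and detected corners, the per-image error is simply the mean Euclidean distance between corresponding points; detectedThermalCorners below is a placeholder for the toolbox's detections in the thermal image.

```matlab
% Mean pixel error for one image: Euclidean distance between projected corners
% and the corners detected in the thermal image (both N x 2 pixel coordinates).
d              = projCorners - detectedThermalCorners;
meanPixelError = mean(sqrt(sum(d.^2, 2)));
```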

Fig. 15

The mean pixel error for 25 images (blue bars) and the mean pixel error across all images (red line) (Color figure online)


Cite this article

Christiansen, P., Kragh, M., Steen, K.A. et al. Platform for evaluating sensors and human detection in autonomous mowing operations. Precision Agric 18, 350–365 (2017). https://doi.org/10.1007/s11119-017-9497-6
