Robot Bionic Vision Technologies: A Review
Abstract
1. Introduction
2. Literature Review
2.1. Human Visual System
2.2. Differences and Similarities between Human Vision and Bionic Vision
2.3. Advantages and Disadvantages of Robot Bionic Vision
- High accuracy: Human vision distinguishes only about 64 gray levels, and its ability to resolve small targets is low [104]. Machine vision can identify significantly more gray levels and resolve micron-scale targets. Human visual adaptability is strong, allowing a target to be identified in a complex and changing environment; however, human color identification is easily influenced by psychology and cannot be performed quantitatively, and the human ability to resolve and identify tiny objects is weak.
- Fast: According to Potter et al. [105], the human brain can process an image seen by the eye within 13 ms, which corresponds to approximately 75 frames per second. This result extends well beyond the 100 ms recognized in earlier studies [106]. Bionic vision can operate at frame rates of 1000 frames per second or higher, realizing rapid recognition of high-speed image motion that is impossible for human beings.
- High stability: Bionic vision detection equipment does not suffer from fatigue or emotional fluctuations; it executes its algorithm and requirements precisely every time, with high efficiency and stability. By contrast, for large volumes of image detection, or for high-speed or small-object detection, the human eye performs poorly, with a relatively high rate of missed detections owing to fatigue or inability.
- Information integration and retention: The amount of information obtained by bionic vision is comprehensive and traceable, and the relevant information can be easily retained and integrated.
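The frame-rate comparison above is simple arithmetic: a per-image processing time converts to a frame rate as 1000 / t_ms. A minimal sketch (the helper name `frames_per_second` is ours, not from the source):

```python
def frames_per_second(frame_time_ms: float) -> float:
    """Convert a per-image processing time in milliseconds to a frame rate."""
    return 1000.0 / frame_time_ms

# Human visual processing: ~13 ms per image [105], i.e., roughly 75 fps.
human_fps = frames_per_second(13.0)

# High-speed bionic vision: 1000 fps corresponds to a 1 ms frame budget.
bionic_frame_time_ms = 1000.0 / 1000.0

print(round(human_fps, 1))    # ≈ 76.9
print(bionic_frame_time_ms)   # 1.0
```

This makes explicit why the text rounds 13 ms to "approximately 75 frames per second" (1000/13 ≈ 76.9), and why a 1000 fps bionic system operates more than an order of magnitude faster than human visual processing.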
2.4. The Development Process of Bionic Vision
2.5. Common Bionic Vision Techniques
2.5.1. Binocular Stereo Vision
2.5.2. Structured Light
2.5.3. TOF (Time of Flight)
2.6. Robot Bionic Vision
2.6.1. Early Robot Vision System Model
2.6.2. Robot Bionic Eye System
2.7. A Panoramic Imaging System Based on Binocular Stereo Vision
2.8. Robot Bionic Vision Development Based on Human Vision
3. Challenges and Countermeasures for Robot Bionic Vision Development
3.1. Challenges That Restrict Robot Bionic Vision Development
- Although many scholars have conducted in-depth research on human vision, bionic vision still requires further research.
- At present, most robot vision systems use only left and right cameras to obtain and process target images. Such systems exhibit poor left–right coordination, slow target tracking, and easy target loss, and it is difficult to compensate for the line-of-sight deviation caused by image jitter in a complex motion environment. Researchers have constructed head–eye motion control models for bionic robots, but these cannot fundamentally solve the problems described above [159].
- Existing bionic eye models usually realize only one or two of the human eye's movements, and most are one-dimensional horizontal monocular or binocular movements. The mechanism that separates saccadic motion from smooth-pursuit motion is widely used; however, its ability to realize multiple eye-movement types together is poor.
- Binocular linkage control and head–eye coordination control in the line-of-sight transfer process are key technologies of bionic robot eyes. At present, research on binocular head–eye coordinated movement mostly remains at the level of biological neural control mechanisms and physiological models. Research on the construction of bionic binocular head-and-neck coordinated control is still in its infancy.
- The advanced mechanisms of human vision have not been fully explored, and some remain controversial or insufficiently studied. Despite research and analysis from the perspectives of neurophysiology and anatomy, there has been no breakthrough in head–eye coordinated motion control technology. Therefore, it is necessary to establish new models, theories, and methods to make a machine's eyes more intelligent, dexterous, and realistic.
3.2. Countermeasures That Can Promote Robot Bionic Vision Development
3.2.1. Building a Complete Bionic Eye Model
3.2.2. Achieving Robot Head–Eye Coordination through a Biological Neural Control Mechanism
3.2.3. Improving the Speed and Accuracy of Random-Moving-Target Tracking
3.2.4. Solving the Problem of Large-Scale Line-of-Sight Deviation Compensation
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
VOR | Vestibular ocular reflex |
OKR | Optokinetic reflex |
LGN | Lateral geniculate nucleus |
RGB | Red–green–blue |
FOV | Field of view |
OpenCV | Open source computer vision library |
CNN | Convolutional neural network |
RNN | Recurrent neural network |
NCNN | High-performance neural network inference computing framework |
R-CNN | Region-based convolutional neural network |
Fast R-CNN | Fast region-based convolutional neural network |
Mask R-CNN | Mask region-based convolutional neural network |
SSD | Single-shot multibox detector |
YOLO | You only look once (object detection algorithm) |
ms | Millisecond |
CCD | Charge-coupled device |
CMOS | Complementary metal-oxide semiconductor |
TOF | Time of flight |
DC | Direct current |
CAN | Controller area network |
DOF | Degree of freedom |
IMU | Inertial measurement unit |
CPU | Central processing unit |
GPU | Graphics processing unit |
DPU | Deep learning processing unit |
NPU | Neural network processing unit |
TPU | Tensor processing unit |
FPGA | Field-programmable gate array |
ASIC | Application-specific integrated circuit |
References
- Gorodilov, Y. About the origin of the “Cambrian Explosion” phenomenon and the origin of animal types. Proc. Zool. Inst. RAS 2019, 323, 1–125. [Google Scholar] [CrossRef]
- Darwin, C. On the Origin of Species, 1859; New York University Press: New York, NY, USA, 2010. [Google Scholar]
- Nilsson, D. 1.07—Eye Evolution in Animals. In The Senses: A Comprehensive Reference, 2nd ed.; Fritzsch, B., Ed.; Elsevier: Oxford, UK, 2020; pp. 96–121. [Google Scholar]
- Nityananda, V.; Read, J. Stereopsis in animals: Evolution, function, and mechanisms. J. Exp. Biol. 2017, 220, 2502–2512. [Google Scholar] [CrossRef] [PubMed]
- Nilsson, D. The Evolution of Visual Roles—Ancient Vision Versus Object Vision. Front. Neuroanat. 2022, 16, 789375. [Google Scholar] [CrossRef]
- Tan, Y.; Shi, Y.; Tang, Q. Brain Storm Optimization Algorithm. In Advances in Swarm Intelligence; Lecture Notes in Computer Science Series; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
- Joukal, M. Anatomy of the Human Visual Pathway; Springer: Cham, Switzerland, 2017. [Google Scholar]
- Alipour, H.; Namazi, H.; Azarnoush, H.; Jafari, S. Fractal-based analysis of the influence of color tonality on human eye movements. Fractals 2019, 27, 403. [Google Scholar] [CrossRef]
- Sebastian, E.T. The Complexity and Origins of the Human Eye: A Brief Study on the Anatomy, Physiology, and Origin of the Eye. Senior Honors Thesis, Liberty University, Lynchburg, VA, USA, 2010. [Google Scholar]
- Fritsch, G.; Hitzig, E. Über die elektrische Erregbarkeit des Grosshirns. Arch. Anat. Physiol. Wissen. 1870, 37, 300–332. [Google Scholar]
- Sabbah, S.; Gemmer, J.A.; Bhatia-Lin, A.; Manoff, G.; Castro, G.; Siegel, J.K.; Jeffery, N.; Berson, D.M. A retinal code for motion along the gravitational and body axes. Nature 2017, 546, 492–497. [Google Scholar] [CrossRef] [PubMed]
- Berson, D.M. 1.03—The Sensory Organ: Eye, Receptors, Retina. In The Senses: A Comprehensive Reference, 2nd ed.; Fritzsch, B., Ed.; Elsevier: Oxford, UK, 2020; pp. 31–35. [Google Scholar]
- Berson, D. Keep both eyes on the prize: Hunting mice use binocular vision and specialized retinal neurons to capture prey. Neuron 2021, 109, 1418–1420. [Google Scholar] [CrossRef] [PubMed]
- Carlson, C.; Devinsky, O. The excitable cerebral cortex Fritsch G, Hitzig E. Uber die elektrische Erregbarkeit des Grosshirns. Arch Anat Physiol Wissen 1870;37:300-32. Epilepsy Behav. 2009, 15, 131–132. [Google Scholar] [CrossRef]
- Crawford, B.H.; Ikeda, H. The physics of vision in vertebrates. Contemp. Phys. 1971, 12, 75–97. [Google Scholar] [CrossRef]
- Barsotti, E.; Correia, A.; Cardona, A. Neural architectures in the light of comparative connectomics. Curr. Opin. Neurobiol. 2021, 71, 139–149. [Google Scholar] [CrossRef]
- Liu, S.C.; Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 2010, 20, 288–295. [Google Scholar] [CrossRef]
- Nilsson, D. The Diversity of Eyes and Vision. Annu. Rev. Vis. Sci. 2021, 7, 19–41. [Google Scholar] [CrossRef]
- Zhang, X.; Shu, D. Causes and consequences of the Cambrian explosion. Sci. China Earth Sci. 2014, 57, 930–942. [Google Scholar] [CrossRef]
- Zhang, X.; Shu, D. Current understanding on the Cambrian Explosion: Questions and answers. Paläontologische Z. 2021, 95, 641–660. [Google Scholar] [CrossRef]
- Young, L.R.; Stark, L. Variable Feedback Experiments Testing a Sampled Data Model for Eye Tracking Movements. IEEE Trans. Hum. Factors Electron. 1963, HFE-4, 38–51. [Google Scholar] [CrossRef]
- Robinson, D.A. The oculomotor control system: A review. Proc. IEEE 1968, 56, 1032–1049. [Google Scholar] [CrossRef]
- Robinson, D.A. Oculomotor unit behavior in the monkey. J. Neurophysiol. 1970, 33, 393. [Google Scholar] [CrossRef]
- Robinson, D.A.; Gordon, J.L.; Gordon, S.E. A model of the smooth pursuit eye movement system. Biol. Cybern. 1986, 55, 43–57. [Google Scholar] [CrossRef]
- Lisberger, S.G.; Morris, E.J.; Tychsen, J. Visual motion processing and sensory-motor integration for smooth pursuit eye movements. Annu. Rev. Neurosci. 1987, 10, 97–129. [Google Scholar] [CrossRef]
- Deno, D.C.; Keller, E.L.; Crandall, W.F. Dynamical neural network organization of the visual pursuit system. IEEE Trans. Biomed. Eng. 1989, 36, 85–92. [Google Scholar] [CrossRef]
- Lunghi, F.; Lazzari, S.; Magenes, G. Neural adaptive predictor for visual tracking system. In Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Hong Kong, China, 1 November 1998. [Google Scholar]
- Gomi, H.; Kawato, M. Adaptive feedback control models of the vestibulocerebellum and spinocerebellum. Biol. Cybern. 1992, 68, 105–114. [Google Scholar] [CrossRef]
- Scassellati, B. Eye finding via face detection for a foveated active vision system. In Proceedings of the 15th National Conference on Artificial Intelligence AAAI/IAAI, Menlo Park, CA, USA, 26–30 July 1998. [Google Scholar]
- de Brouwer, S.; Missal, M.; Barnes, G.; Lefèvre, P. Quantitative Analysis of Catch-Up Saccades During Sustained Pursuit. J. Neurophysiol. 2002, 87, 1772–1780. [Google Scholar] [CrossRef]
- Merfeld, D.M.; Park, S.; Gianna-Poulin, C.; Black, F.O.; Wood, S. Vestibular Perception and Action Employ Qualitatively Different Mechanisms. I. Frequency Response of VOR and Perceptual Responses During Translation and Tilt. J. Neurophysiol. 2005, 94, 186–198. [Google Scholar] [CrossRef]
- Zhang, X. An object tracking system based on human neural pathways of binocular motor system. In Proceedings of the 2006 9th International Conference on Control, Automation, Robotics and Vision, Singapore, 5–8 December 2006. [Google Scholar]
- Cannata, G.; D’Andrea, M.; Maggiali, M. Design of a Humanoid Robot Eye: Models and Experiments. In Proceedings of the 2006 6th IEEE-RAS International Conference on Humanoid Robots, Genova, Italy, 4–6 December 2006. [Google Scholar]
- Wang, Q.; Zou, W.; Zhang, F.; Xu, D. Binocular initial location and extrinsic parameters real-time calculation for bionic eye system. In Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014. [Google Scholar]
- Fan, D.; Chen, X.; Zhang, T.; Chen, X.; Liu, G.; Owais, H.M.; Kim, H.; Tian, Y.; Zhang, W.; Huang, Q. Design of anthropomorphic robot bionic eyes. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macao, China, 5–8 December 2017. [Google Scholar]
- Liu, Y.; Zhu, D.; Peng, J.; Wang, X.; Wang, L.; Chen, L.; Li, J.; Zhang, X. Real-Time Robust Stereo Visual SLAM System Based on Bionic Eyes. IEEE Trans. Med. Robot. Bionics 2020, 2, 391–398. [Google Scholar] [CrossRef]
- Wang, X.; Li, D.; Zhang, G. Panoramic Stereo Imaging of a Bionic Compound-Eye Based on Binocular Vision. Sensors 2021, 21, 1944. [Google Scholar] [CrossRef] [PubMed]
- Berthoz, A.; Pavard, B.; Young, L.R. Perception of linear horizontal self-motion induced by peripheral vision (linearvection) basic characteristics and visual-vestibular interactions. Exp. Brain Res. 1975, 23, 471–489. [Google Scholar] [CrossRef] [PubMed]
- Zacharias, G.L.; Young, L.R. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation. Exp. Brain Res. 1981, 41, 159–171. [Google Scholar] [CrossRef]
- Huang, J.; Young, L.R. Sensation of rotation about a vertical axis with a fixed visual field in different illuminations and in the dark. Exp. Brain Res. 1981, 41, 172–183. [Google Scholar] [CrossRef]
- Lichtenberg, B.K.; Young, L.R.; Arrott, A.P. Human ocular counterrolling induced by varying linear accelerations. Exp. Brain Res. 1982, 48, 127–136. [Google Scholar] [CrossRef]
- Young, L.R.; Oman, C.M.; Watt, D.G.D.; Money, K.E.; Lichtenberg, B.K.; Kenyon, R.V.; Arrott, A.P. M.I.T./Canadian vestibular experiments on the Spacelab-1 mission: 1. Sensory adaptation to weightlessness and readaptation to one-g: An overview. Exp. Brain Res. 1986, 64, 291–298. [Google Scholar] [CrossRef]
- Arnold, D.B.; Robinson, D.A. A neural network that learns to integrate oculomotor signals. In Proceedings of the 1990 IJCNN International Joint Conference on Neural Networks, San Diego, CA, USA, 17–21 June 1990. [Google Scholar]
- Robinson, D.A.; Fuchs, A.F. Eye movements evoked by stimulation of frontal eye fields. J. Neurophysiol. 1969, 32, 637–648. [Google Scholar] [CrossRef]
- Robinson, A.D. Real neural networks in movement control. In Proceedings of the 1994 IEEE American Control Conference, Baltimore, MD, USA, 29 June–1 July 1994. [Google Scholar]
- Hubel, D.H. Exploration of the primary visual cortex, 1955–1978. Nature 1982, 299, 515–524. [Google Scholar] [CrossRef]
- Hubel, D.H. Evolution of ideas on the primary visual cortex, 1955–1978: A biased historical account. Biosci. Rep. 1982, 2, 435–469. [Google Scholar] [CrossRef]
- Yau, J.M.; Pasupathy, A.; Brincat, S.L.; Connor, C.E. Curvature Processing Dynamics in Macaque Area V4. Cereb. Cortex 2013, 23, 198–209. [Google Scholar] [CrossRef]
- Grill-Spector, K.; Malach, R. The human visual cortex. Annu. Rev. Neurosci. 2004, 27, 649–677. [Google Scholar] [CrossRef]
- Hari, R.; Kujala, M.V. Brain Basis of Human Social Interaction: From Concepts to Brain Imaging. Physiol. Rev. 2009, 89, 453–479. [Google Scholar] [CrossRef]
- Hao, Q.; Tao, Y.; Cao, J.; Tang, M.; Cheng, Y.; Zhou, D.; Ning, Y.; Bao, C.; Cui, H. Retina-like Imaging and Its Applications: A Brief Review. Appl. Sci. 2021, 11, 7058. [Google Scholar] [CrossRef]
- Ayoub, G. On the Design of the Vertebrate Retina. Orig. Des. 1996, 17, 1. [Google Scholar]
- Williams, G.C. Natural Selection: Domains, Levels, and Challenges; Oxford University Press: Oxford, UK, 1992; Volume 72. [Google Scholar]
- Navarro, R. The Optical Design of the Human Eye: A Critical Review. J. Optom. 2009, 2, 3–18. [Google Scholar] [CrossRef]
- Schiefer, U.; Hart, W. Functional Anatomy of the Human Visual Pathway; Springer: Berlin/Heidelberg, Germany, 2007; pp. 19–28. [Google Scholar]
- Horton, J.C.; Hedley-Whyte, E.T. Mapping of cytochrome oxidase patches and ocular dominance columns in human visual cortex. Philos. Trans. R. Soc. B Biol. Sci. 1984, 304, 255–272. [Google Scholar]
- Choi, S.; Jeong, G.; Kim, Y.; Cho, Z. Proposal for human visual pathway in the extrastriate cortex by fiber tracking method using diffusion-weighted MRI. Neuroimage 2020, 220, 117145. [Google Scholar] [CrossRef]
- Welsh, D.K.; Takahashi, J.S.; Kay, S.A. Suprachiasmatic nucleus: Cell autonomy and network properties. Annu. Rev. Physiol. 2010, 72, 551–577. [Google Scholar] [CrossRef]
- Ottes, F.P.; Van Gisbergen, J.A.M.; Eggermont, J.J. Visuomotor fields of the superior colliculus: A quantitative model. Vis. Res. 1986, 26, 857–873. [Google Scholar] [CrossRef]
- Lipari, A. Somatotopic Organization of the Cranial Nerve Nuclei Involved in Eye Movements: III, IV, VI. Euromediterr. Biomed. J. 2017, 12, 6–9. [Google Scholar]
- Deangelis, G.C.; Newsome, W.T. Organization of disparity-selective neurons in macaque area MT. J. Neurosci. 1999, 19, 1398–1415. [Google Scholar] [CrossRef]
- Larsson, J.; Heeger, D.J. Two Retinotopic Visual Areas in Human Lateral Occipital Cortex. J. Neurosci. 2006, 26, 13128–13142. [Google Scholar] [CrossRef]
- Borra, E.; Luppino, G. Comparative anatomy of the macaque and the human frontal oculomotor domain. Neurosci. Biobehav. Rev. 2021, 126, 43–56. [Google Scholar] [CrossRef]
- França De Barros, F.; Bacqué-Cazenave, J.; Taillebuis, C.; Courtand, G.; Manuel, M.; Bras, H.; Tagliabue, M.; Combes, D.; Lambert, F.M.; Beraneck, M. Conservation of locomotion-induced oculomotor activity through evolution in mammals. Curr. Biol. 2022, 32, 453–461. [Google Scholar] [CrossRef]
- McLoon, L.K.; Willoughby, C.L.; Andrade, F.H. Extraocular Muscle Structure and Function; McLoon, L., Andrade, F., Eds.; Craniofacial Muscles; Springer: New York, NY, USA, 2012. [Google Scholar]
- Horn, A.K.E.; Straka, H. Functional Organization of Extraocular Motoneurons and Eye Muscles. Annu. Rev. Vis. Sci. 2021, 7, 793–825. [Google Scholar] [CrossRef]
- Adler, F.H.; Fliegelman, M. Influence of Fixation on the Visual Acuity. Arch. Ophthalmol. 1934, 12, 475–483. [Google Scholar] [CrossRef]
- Carter, B.T.; Luke, S.G. Best practices in eye tracking research. Int. J. Psychophysiol. 2020, 155, 49–62. [Google Scholar] [CrossRef] [PubMed]
- Cazzato, D.; Leo, M.; Distante, C.; Voos, H. When I Look into Your Eyes: A Survey on Computer Vision Contributions for Human Gaze Estimation and Tracking. Sensors 2020, 20, 3739. [Google Scholar] [CrossRef] [PubMed]
- Kreiman, G.; Serre, T. Beyond the feedforward sweep: Feedback computations in the visual cortex. Ann. N. Y. Acad. Sci. 2020, 1464, 222–241. [Google Scholar] [CrossRef] [PubMed]
- Müller, T.J. Augenbewegungen und Nystagmus: Grundlagen und klinische Diagnostik. HNO 2020, 68, 313–323. [Google Scholar] [CrossRef]
- Golomb, J.D.; Mazer, J.A. Visual Remapping. Annu. Rev. Vis. Sci. 2021, 7, 257–277. [Google Scholar] [CrossRef]
- Tzvi, E.; Koeth, F.; Karabanov, A.N.; Siebner, H.R.; Krämer, U.M. Cerebellar—Premotor cortex interactions underlying visuomotor adaptation. NeuroImage 2020, 220, 117142. [Google Scholar] [CrossRef]
- Banks, M.S.; Read, J.C.; Allison, R.S.; Watt, S.J. Stereoscopy and the Human Visual System. SMPTE Motion Imaging J. 2012, 121, 24–43. [Google Scholar] [CrossRef]
- Einspruch, N. (Ed.) Application Specific Integrated Circuit (ASIC) Technology; Academic Press: Cambridge, MA, USA, 2012; Volume 23. [Google Scholar]
- Verri Lucca, A.; Mariano Sborz, G.A.; Leithardt, V.R.Q.; Beko, M.; Albenes Zeferino, C.; Parreira, W.D. A Review of Techniques for Implementing Elliptic Curve Point Multiplication on Hardware. J. Sens. Actuator Netw. 2021, 10, 3. [Google Scholar] [CrossRef]
- Zhao, C.; Yan, Y.; Li, W. Analytical Evaluation of VCO-ADC Quantization Noise Spectrum Using Pulse Frequency Modulation. IEEE Signal Proc. Lett. 2014, 11, 249–253. [Google Scholar]
- Jouppi, N.P.; Young, C.; Patil, N.; Patterson, D.; Agrawal, G.; Bajwa, R.; Bates, S.; Bhatia, S.; Boden, N.; Borchers, A.; et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the ISCA ‘17: Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, ON, Canada, 24–28 June 2017. [Google Scholar]
- Huang, H.; Liu, Y.; Hou, Y.; Chen, R.C.-J.; Lee, C.; Chao, Y.; Hsu, P.; Chen, C.; Guo, W.; Yang, W.; et al. 45nm High-k/Metal-Gate CMOS Technology for GPU/NPU Applications with Highest PFET Performance. In Proceedings of the 2007 IEEE International Electron Devices Meeting, Washington, DC, USA, 10–12 December 2007. [Google Scholar]
- Kim, S.; Oh, S.; Yi, Y. Minimizing GPU Kernel Launch Overhead in Deep Learning Inference on Mobile GPUs. In Proceedings of the HotMobile ‘21: The 22nd International Workshop on Mobile Computing Systems and Applications, Virtual, 24–26 February 2021. [Google Scholar]
- Shah, N.; Olascoaga, L.I.G.; Zhao, S.; Meert, W.; Verhelst, M. DPU: DAG Processing Unit for Irregular Graphs With Precision-Scalable Posit Arithmetic in 28 nm. IEEE J. Solid-state Circuits 2022, 57, 2586–2596. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Ren, J.; Wang, Y. Overview of Object Detection Algorithms Using Convolutional Neural Networks. J. Comput. Commun. 2022, 10, 115–132. [Google Scholar]
- Kreiman, G. Biological and Computer Vision; Cambridge University Press: Oxford, UK, 2021. [Google Scholar]
- Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic routing between capsules. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17), Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 3859–3869. [Google Scholar]
- Schwarz, M.; Behnke, S. Stillleben: Realistic Scene Synthesis for Deep Learning in Robotics. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 10502–10508, ISSN: 2577-087X. [Google Scholar]
- Piga, N.A.; Bottarel, F.; Fantacci, C.; Vezzani, G.; Pattacini, U.; Natale, L. MaskUKF: An Instance Segmentation Aided Unscented Kalman Filter for 6D Object Pose and Velocity Tracking. Front. Robot. AI 2021, 8, 594593. [Google Scholar] [CrossRef]
- Bottarel, F.; Vezzani, G.; Pattacini, U.; Natale, L. GRASPA 1.0: GRASPA is a Robot Arm graSping Performance BenchmArk. IEEE Robot. Autom. Lett. 2020, 5, 836–843. [Google Scholar] [CrossRef]
- Bottarel, F. Where’s My Mesh? An Exploratory Study on Model-Free Grasp Planning; University of Genova: Genova, Italy, 2021. [Google Scholar]
- Dog-qiuqiu. MobileNet-YOLO That Works Better Than SSD. GitHub. 2021. Available online: https://github.com/dog-qiuqiu/MobileNet-Yolo (accessed on 6 February 2021).
- Yuan, L.; Chen, D.; Chen, Y.L.; Codella, N.; Dai, X.; Gao, J.; Hu, H.; Huang, X.; Li, B.; Li, C.; et al. Florence: A New Foundation Model for Computer Vision. arXiv 2021, arXiv:2111.11432. [Google Scholar]
- Yamins, D.L.K.; Dicarlo, J.J. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 2016, 19, 356–365. [Google Scholar] [CrossRef]
- Breazeal, C. Socially intelligent robots: Research, development, and applications. In Proceedings of the 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace, Tucson, AZ, USA, 7–10 October 2001. [Google Scholar]
- Shaw-Garlock, G. Looking Forward to Sociable Robots. Int. J. Soc. Robot. 2009, 1, 249–260. [Google Scholar] [CrossRef]
- Gokturk, S.B.; Yalcin, H.; Bamji, C. A Time-Of-Flight Depth Sensor—System Description, Issues and Solutions. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004. [Google Scholar]
- Gibaldi, A.; Canessa, A.; Chessa, M.; Sabatini, S.P.; Solari, F. A neuromorphic control module for real-time vergence eye movements on the iCub robot head. In Proceedings of the 2011 11th IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 26–28 October 2011. [Google Scholar]
- Xiaolin, Z. A Novel Methodology for High Accuracy Fixational Eye Movements Detection. In Proceedings of the 4th International Conference on Bioinformatics and Biomedical Technology, Singapore, 26–28 February 2012. [Google Scholar]
- Song, Y.; Xiaolin, Z. An Integrated System for Basic Eye Movements. J. Inst. Image Inf. Telev. Eng. 2012, 66, J453–J460. [Google Scholar]
- Dorrington, A.A.; Kelly, C.D.B.; McClure, S.H.; Payne, A.D.; Cree, M.J. Advantages of 3D time-of-flight range imaging cameras in machine vision applications. In Proceedings of the 16th Electronics New Zealand Conference (ENZCon), Dunedin, New Zealand, 18–20 November 2009. [Google Scholar]
- Jain, A.K.; Dorai, C. Practicing vision: Integration, evaluation and applications. Pattern Recogn. 1997, 30, 183–196. [Google Scholar] [CrossRef]
- Ma, Z.; Ling, H.; Song, Y.; Hospedales, T.; Jia, W.; Peng, Y.; Han, A. IEEE Access Special Section Editorial: Recent Advantages of Computer Vision. IEEE Access 2018, 6, 31481–31485. [Google Scholar]
- Rebecq, H.; Ranftl, R.; Koltun, V.; Scaramuzza, D. Events-to-Video: Bringing Modern Computer Vision to Event Cameras. arXiv 2019, arXiv:1904.08298. [Google Scholar]
- Silva, A.E.; Chubb, C. The 3-dimensional, 4-channel model of human visual sensitivity to grayscale scrambles. Vis. Res. 2014, 101, 94–107. [Google Scholar] [CrossRef] [PubMed]
- Potter, M.C.; Wyble, B.; Hagmann, C.E.; Mccourt, E.S. Detecting meaning in RSVP at 13 ms per picture. Atten. Percept. Psychophys. 2013, 76, 270–279. [Google Scholar] [CrossRef] [PubMed]
- Al-Rahayfeh, A.; Faezipour, M. Enhanced frame rate for real-time eye tracking using circular hough transform. In Proceedings of the 2013 IEEE Long Island Systems, Applications and Technology Conference (LISAT), Farmingdale, NY, USA, 3 May 2013. [Google Scholar]
- Wilder, K. Photography and Science; Reaktion Books: London, UK, 2009. [Google Scholar]
- Smith, G.E. The invention and early history of the CCD. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2009, 607, 1–6. [Google Scholar] [CrossRef]
- Marcandali, S.; Marar, J.F.; de Oliveira Silva, E. Através da Imagem: A Evolução da Fotografia e a Democratização Profissional com a Ascensão Tecnológica. In Perspectivas Imagéticas; Ria Editorial: Aveiro, Portugal, 2019. [Google Scholar]
- Kucera, T.E.; Barret, R.H. A History of Camera Trapping. In Camera Traps in Animal Ecology; Springer: Tokyo, Japan, 2011. [Google Scholar]
- Boyle, W.S. CCD—An extension of man’s view. Rev. Mod. Phys. 2010, 82, 2305. [Google Scholar] [CrossRef]
- Sabel, A.B.; Flammer, J.; Merabet, L.B. Residual Vision Activation and the Brain-eye-vascular Triad: Dysregulation, Plasticity and Restoration in Low Vision and Blindness—A Review. Restor. Neurol. Neurosci. 2018, 36, 767–791. [Google Scholar] [CrossRef]
- Rosenfeld, J.V.; Wong, Y.T.; Yan, E.; Szlawski, J.; Mohan, A.; Clark, J.C.; Rosa, M.; Lowery, A. Tissue response to a chronically implantable wireless intracortical visual prosthesis (Gennaris array). J. Neural Eng. 2020, 17, 46001. [Google Scholar] [CrossRef]
- Gu, L.; Poddar, S.; Lin, Y.; Long, Z.; Zhang, D.; Zhang, Q.; Shu, L.; Qiu, X.; Kam, M.; Javey, A.; et al. A biomimetic eye with a hemispherical perovskite nanowire array retina. Nature 2020, 581, 278–282. [Google Scholar] [CrossRef]
- Gu, L.; Poddar, S.; Lin, Y.; Long, Z.; Zhang, D.; Zhang, Q.; Shu, L.; Qiu, X.; Kam, M.; Fan, Z. Bionic Eye with Perovskite Nanowire Array Retina. In Proceedings of the 2021 5th IEEE Electron Devices Technology & Manufacturing Conference (EDTM), Chengdu, China, 8–11 April 2021. [Google Scholar]
- Cannata, G.; Maggiali, M. Models for the Design of a Tendon Driven Robot Eye. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 10–14. [Google Scholar]
- Hirai, K. Current and future perspective of Honda humanoid robot. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems IROS ‘97, Grenoble, France, 11 September 1997. [Google Scholar]
- Goswami, A.V.P. ASIMO and Humanoid Robot Research at Honda; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Kajita, S.; Kaneko, K.; Kaneiro, F.; Harada, K.; Morisawa, M.; Nakaoka, S.I.; Miura, K.; Fujiwara, K.; Neo, E.S.; Hara, I.; et al. Cybernetic Human HRP-4C: A Humanoid Robot with Human-Like Proportions; Springer: Berlin/Heidelberg, Germany, 2011; pp. 301–314. [Google Scholar]
- Faraji, S.; Pouya, S.; Atkeson, C.G.; Ijspeert, A.J. Versatile and robust 3D walking with a simulated humanoid robot (Atlas): A model predictive control approach. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014. [Google Scholar]
- Kaehler, A.; Bradski, G. Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2016. [Google Scholar]
- Karve, P.; Thorat, S.; Mistary, P.; Belote, O. Conversational Image Captioning Using LSTM and YOLO for Visually Impaired. In Proceedings of 3rd International Conference on Communication, Computing and Electronics Systems; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
- Kornblith, S.; Shlens, J.; Quoc, L.V. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Jacobstein, N. NASA’s Perseverance: Robot laboratory on Mars. Sci. Robot. 2021, 6, eabh3167. [Google Scholar] [CrossRef]
- Zou, Y.; Zhu, Y.; Bai, Y.; Wang, L.; Jia, Y.; Shen, W.; Fan, Y.; Liu, Y.; Wang, C.; Zhang, A.; et al. Scientific objectives and payloads of Tianwen-1, China’s first Mars exploration mission. Adv. Space Res. 2021, 67, 812–823. [Google Scholar] [CrossRef]
- Jung, B.; Sukhatme, G.S. Real-time motion tracking from a mobile robot. Int. J. Soc. Robot. 2010, 2, 63–78. [Google Scholar] [CrossRef]
- Brown, J.; Hughes, C.; DeBrunner, L. Real-time hardware design for improving laser detection and ranging accuracy. In Proceedings of the Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 4–7 November 2012; pp. 1115–1119. [Google Scholar]
- Oh, M.S.; Kong, H.J.; Kim, T.H.; et al. Development and analysis of a photon-counting three-dimensional imaging laser detection and ranging (LADAR) system. J. Opt. Soc. Am. A 2011, 28, 759–765. [Google Scholar]
- Rezaei, M.; Yazdani, M.; Jafari, M.; Saadati, M. Gender differences in the use of ADAS technologies: A systematic review. Transp. Res. Part F Traffic Psychol. Behav. 2021, 78, 1–15. [Google Scholar] [CrossRef]
- Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992, 8, 123–151. [Google Scholar] [CrossRef]
- Murphy-Chutorian, E.; Trivedi, M.M. Head Pose Estimation in Computer Vision: A Survey. IEEE Trans. Pattern Anal. 2009, 31, 607–626. [Google Scholar] [CrossRef]
- Jarvis, R.A. A Perspective on Range Finding Techniques for Computer Vision. IEEE Trans. Pattern Anal. 1983, PAMI-5, 122–139. [Google Scholar] [CrossRef]
- Grosso, E. On Perceptual Advantages of Eye-Head Active Control; Springer: Berlin/Heidelberg, Germany, 2005; pp. 123–128. [Google Scholar]
- Binh Do, P.N.; Chi Nguyen, Q. A Review of Stereo-Photogrammetry Method for 3-D Reconstruction in Computer Vision. In Proceedings of the 19th International Symposium on Communications and Information Technologies (ISCIT), Ho Chi Minh City, Vietnam, 25–27 September 2019. [Google Scholar]
- Mattoccia, S. Stereo Vision Algorithms Suited to Constrained FPGA Cameras; Springer International Publishing: Cham, Switzerland, 2014; pp. 109–134. [Google Scholar]
- Mattoccia, S. Stereo Vision Algorithms for FPGAs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013. [Google Scholar]
- Park, J.; Kim, H.; Tai, Y.W.; Brown, M.S.; Kweon, I. High quality depth map upsampling for 3D-TOF cameras. In Proceedings of the 2011 International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 1623–1630. [Google Scholar]
- Foix, S.; Alenya, G.; Torras, C. Lock-in Time-of-Flight (ToF) Cameras: A Survey. IEEE Sensors J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef]
- Li, J.; Yu, L.; Wang, J.; Yan, M. Obstacle information detection based on fusion of 3D LADAR and camera. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017. [Google Scholar]
- Gill, T.; Keller, J.M.; Anderson, D.T.; Luke, R.H. A system for change detection and human recognition in voxel space using the Microsoft Kinect sensor. In Proceedings of the 2011 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 11–13 October 2011. [Google Scholar]
- Atmeh, G.M.; Ranatunga, I.; Popa, D.O.; Subbarao, K.; Lewis, F.; Rowe, P. Implementation of an Adaptive, Model Free, Learning Controller on the Atlas Robot; American Automatic Control Council: New York, NY, USA, 2014. [Google Scholar]
- Nilsson, N.J. Shakey the Robot; Tech. Rep. TR223; SRI Int.: Menlo Park, CA, USA, 1984. [Google Scholar]
- Burnham, T.C.; Hare, B. Engineering Human Cooperation. Hum. Nat. 2007, 18, 88–108. [Google Scholar] [CrossRef]
- Liu, Y.; Zhu, D.; Peng, J.; Wang, X.; Wang, L.; Chen, L.; Li, J.; Zhang, X. Robust Active Visual SLAM System Based on Bionic Eyes. In Proceedings of the 2019 IEEE International Conference on Cyborg and Bionic Systems (CBS), Munich, Germany, 18–20 September 2019. [Google Scholar]
- Li, B.; Zhang, X.; Sato, M. Pitch angle estimation using a vehicle-mounted monocular camera for range measurement. In Proceedings of the International Conference on Signal Processing, Hangzhou, China, 19–23 October 2014. [Google Scholar]
- Zhang, X. Novel Human Fixational Eye Movements Detection using Sclera Images of the Eyeball. Jpn. J. Appl. Physiol. 2012, 42, 143–152. [Google Scholar]
- Zhang, X. Wide Area Tracking System Using Three Zoom Cameras. Ph.D. Thesis, Tokyo Institute of Technology, Tokyo, Japan, 2011. [Google Scholar]
- Zhang, X. A Binocular Camera System for Wide Area Surveillance. J. Inst. Image Inf. Telev. Eng. 2009. [Google Scholar] [CrossRef]
- Zhang, X. A Mathematical Model of a Neuron with Synapses based on Physiology. Nat. Preced. 2008. [Google Scholar] [CrossRef]
- Zhang, X. Cooperative Movements of Binocular Motor System. In Proceedings of the 2008 IEEE International Conference on Automation Science and Engineering, Arlington, VA, USA, 23–26 August 2008. [Google Scholar]
- Zhang, X.; et al. Image Segmentation through Region Fusion Based on Watershed. J. Comput. Inf. Syst. 2014, 19, 8231–8236. [Google Scholar]
- Wang, Q.; Yin, Y.; Zou, W.; Xu, D. Measurement error analysis of binocular stereo vision: Effective guidelines for bionic eyes. IET Sci. Meas. Technol. 2017, 11, 829–838. [Google Scholar] [CrossRef]
- Wang, Q.; Zou, W.; Xu, D.; Zhu, Z. Motion Control in Saccade and Smooth Pursuit for Bionic Eye Based on Three-dimensional Coordinates. J. Bionic Eng. 2017, 14, 336–347. [Google Scholar] [CrossRef]
- Wang, Q.; Zou, W.; Xu, D.; Zhang, F. 3D Perception of Biomimetic Eye Based on Motion Vision and Stereo Vision. Robot 2015, 37, 760–768. [Google Scholar]
- Zhu, Z.; Wang, Q.; Zou, W. A Velocity Compensation Visual Servo Method for Oculomotor Control of Bionic Eyes. Int. J. Robot. Autom. 2018, 33, 33–44. [Google Scholar]
- Zhu, Z.; Wang, Q.; Zou, W.; Zhang, F. Motion Control on Bionic Eyes: A Comprehensive Review. arXiv 2019, arXiv:1901.01494. [Google Scholar]
- Chen, X.; Wang, C.; Zhang, T.; Hua, C.; Fu, S.; Huang, Q. Hybrid Image Stabilization of Robotic Bionic Eyes. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018. [Google Scholar]
- Kardamakis, A.A.; Grantyn, A.; Moschovakis, A.K. Neural network simulations of the primate oculomotor system. v. eye–head gaze shifts. Biol. Cybern. 2010, 102, 209–225. [Google Scholar] [CrossRef] [PubMed]
- Aragon-Camarasa, G.; Fattah, H.; Siebert, J.P. Towards a unified visual framework in a binocular active robot vision system. Robot. Auton. Syst. 2010, 58, 276–286. [Google Scholar] [CrossRef]
Year | Survey Title | Reference | Focus |
---|---|---|---|
1859 | The Origin of Species | Darwin, C. | Established the theory of biological evolution and studied the origin of species [2]. |
1870 | The excitable cerebral cortex | Carlson, C.; et al. | Pioneered the study of the human cerebral cortex [10]. |
1963 | Variable Feedback Experiments Testing a Sampled Data Model for Eye Tracking Movements | Young, L.R.; Stark, L. | Created models of machines that imitate human vision, saccades, etc. [21]. |
1968 | The Oculomotor Control System: A Review | Robinson, D.A. | The saccade, smooth-pursuit, vergence, and control systems of eye movement were studied [22]. |
1970 | Oculomotor Unit Behavior in the Monkey | Robinson, D.A. | The relationship between the firing rate of motor neurons and eye position and movement was revealed by recording the oculomotor nerves of awake monkeys [23]. |
1986 | A model of the smooth pursuit eye movement system | Robinson, D.A.; Gordon, J.L.; Gordon, S.E. | Through research on macaques, a smooth-pursuit model of human eye movement was created [24]. |
1987 | Visual motion processing and sensory-motor integration for the smooth pursuit of eye movements | Lisberger, S.G.; Morris, E.J.; Tychsen, L. | The proposed smooth-pursuit model improves visual tracking and overcomes the contradiction between high gain and large delay [25]. |
1989 | Dynamical neural network organization of the visual pursuit system | Deno, D.C.; Keller, E.L.; Crandall, W.F. | The dynamic neural network model was extended to a smooth-pursuit system [26]. |
1998 | Neural adaptive predictor for visual tracking system | Lunghi, F.; Lazzari, S.; Magenes, G. | An adaptive predictor was designed to simulate the brain's prediction mechanism for visual tracking [27]. |
1992 | Adaptive feedback control models of the vestibulocerebellum and spinocerebellum | Gomi, H.; Kawato, M. | An adaptive feedback control model was proposed, helping to explain the vestibulo-ocular reflex and the adaptive correction of eye movement [28]. |
1998 | Eye Finding via Face Detection for a Foveated, Active Vision System | Scassellati, B. | For the first time, robot eye interaction, image acquisition, and recognition functions were realized [29]. |
2002 | Quantitative Analysis of Catch-Up Saccades During Sustained Pursuit | De Brouwer, S.; Missal, M.; Barnes, G. | A target-tracking experiment was carried out on macaques. Two conclusions were drawn: (1) there is continuous overlap between saccades and smooth pursuit; (2) the retinal slip signal is shared by the two movements, contradicting the traditional view that the two systems are completely separate [30]. |
2005 | Vestibular Perception and Action Employ Qualitatively Different Mechanisms. I. Frequency Response of VOR and Perceptual Responses During Translation and Tilt | Merfeld, D.M.; Park, S.; Gianna-Poulin, C.; et al. | Established tVOR and rVOR models that simulate humans and studied the VOR and OKR vestibular-reflex eye-movement models through the iCub robot, which proved the important role of the cerebellum in image stabilization [31]. |
2006 | An Object Tracking System Based on Human Neural Pathways of Binocular Motor System | Zhang, X. | A binocular motor system model based on the neural pathways of the human binocular motor system was proposed. Using this model, an active camera control system was constructed [32]. |
2006 | Design of a Humanoid Robot Eye: Models and Experiments | Cannata, G.; D'Andrea, M.; Maggiali, M. | By quantitatively comparing the performance of the robot's eyes with physiological data from humans and primates during saccades, the hypothesis that the geometry of the eyes and their driving system (extraocular muscles) are closely related was verified [33]. |
2014 | Binocular Initial Location and Extrinsic Parameters Real-time Calculation for Bionic Eye System | Wang, Q.; Zou, W.; Zhang, F.; Xu, D. | A simple binocular vision device was designed, using hand–eye calibration and an algorithm model to ensure the depth perception of the binocular vision system [34]. |
2017 | Design of Anthropomorphic Robot Bionic Eyes | Fan, D.; Chen, X.; Zhang, T.; Chen, X.; Liu, G.; et al. | A new type of anthropomorphic robot with bionic eyes was proposed. A compact series–parallel eye mechanism with 3 degrees of freedom was designed to realize eyeball movement without eccentricity [35]. |
2020 | Real-Time Robust Stereo Visual SLAM System Based on Bionic Eyes | Liu, Y.; Zhu, D.; Zhang, X. | A real-time stereo vision SLAM system based on bionic eyes was designed that reproduces the movements of the human eye [36]. |
2021 | Panoramic Stereo Imaging of a Bionic Compound-Eye Based on Binocular Vision | Wang, X.; Li, D.; Zhang, G. | The optical optimization design scheme and algorithm for panoramic imaging based on binocular stereo vision were proposed, and a panoramic stereo real-time imaging system was developed [37]. |
Network | VOC mAP(0.5) | COCO mAP(0.5) | Resolution | Inference Time (NCNN/Kirin 990) | Inference Time (MNN arm82/Kirin 990) | FLOPS | Weight Size |
---|---|---|---|---|---|---|---|
MobileNetV2-YOLOv3-Lite (our) | 73.26 | 37.44 | 320 | 28.42 ms | 18 ms | 1.8 BFlops | 8.0 MB |
MobileNetV2-YOLOv3-Nano (our) | 65.27 | 30.13 | 320 | 10.16 ms | 5 ms | 0.5 BFlops | 3.0 MB |
MobileNetV2-YOLOv3 | 70.7 | – | 352 | 32.15 ms | – | 2.44 BFlops | 14.4 MB |
MobileNet-SSD | 72.7 | – | 300 | 26.37 ms | – | – | 23.1 MB |
YOLOv5s | – | 56.2 | 416 | 150.5 ms | – | 13.2 BFlops | 28.1 MB |
YOLOv3-Tiny-Prn | – | 33.1 | 416 | 36.6 ms | – | 3.5 BFlops | 18.8 MB |
YOLOv4-Tiny | – | 40.2 | 416 | 44.6 ms | – | 6.9 BFlops | 23.1 MB |
YOLO-Nano | 69.1 | – | 416 | – | – | 4.57 BFlops | 4.0 MB |
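The per-frame inference times in the table above translate directly into achievable frame rates, which is how they compare against the roughly 75 frames per second attributed to human vision and the 1000 fps target for bionic vision. A minimal sketch of that conversion, using timings taken from the NCNN/Kirin 990 column (the helper name `fps_from_latency_ms` is our own, not from the paper):

```python
def fps_from_latency_ms(latency_ms: float) -> float:
    """Frames per second achievable if inference is the only per-frame cost."""
    return 1000.0 / latency_ms

# (model, NCNN inference time on Kirin 990 in ms), values from the table above
timings = {
    "MobileNetV2-YOLOv3-Lite": 28.42,
    "MobileNetV2-YOLOv3-Nano": 10.16,
    "YOLOv5s": 150.5,
}

for model, ms in timings.items():
    # e.g. the Nano variant comfortably exceeds human-vision frame rates
    print(f"{model}: {fps_from_latency_ms(ms):.1f} fps")
```

This is an upper bound: capture, preprocessing, and postprocessing would lower the real frame rate.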
Technology Category | Monocular Vision | Binocular Stereo Vision | Structured Light | TOF | Optical Laser Radar |
---|---|---|---|---|---|
Working principle | Single camera | Dual cameras | Camera and infrared projected patterns | Infrared reflection time difference | Time difference of laser pulse reflection
Response time | Fast | Medium | Slow | Medium | Medium |
Weak light | Weak | Weak | Good | Good | Good |
Bright light | Good | Good | Weak | Medium | Medium |
Identification precision | Low | Low | Medium | Low | Medium |
Resolving capability | High | High | Medium | Low | Low |
Identification distance | Medium | Medium | Very short | Short | Far |
Operation difficulty | Low | High | Medium | Low | High |
Cost | Low | Medium | High | Medium | High |
Power consumption | Low | Low | Medium | Low | High |
Disadvantages | Low recognition accuracy; poor performance in dim light | Features indistinct in dim light | High ambient-light requirements; short recognition distance | Low resolution; short recognition distance; limited by light intensity | Affected by rain, fog, and other adverse weather
Representative company | Cognex, Honda, Keyence | LeapMoTion, iit | Intel, Microsoft, PrimeSense | Intel, TI, ST, Pmd | Velodyne, Boston Dynamics |
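The two depth-recovery principles named in the table above can be sketched in a few lines: binocular stereo vision triangulates depth from the disparity between the two views, while ToF halves the round-trip time of a reflected infrared pulse. All numeric values below are illustrative assumptions, not measurements from the paper:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def tof_depth(round_trip_s: float) -> float:
    """Time of flight: the pulse travels to the target and back, so Z = c * t / 2."""
    return C_LIGHT * round_trip_s / 2.0

# Stereo example: 700 px focal length, 6 cm baseline (human-like interocular
# distance), 14 px disparity -> 700 * 0.06 / 14 = 3.0 m
print(stereo_depth(700.0, 0.06, 14.0))

# ToF example: a 20 ns round trip corresponds to roughly 3 m
print(tof_depth(20e-9))
```

The inverse relation between disparity and depth is also why stereo precision degrades with distance, consistent with the "Identification distance: Medium" entry in the table.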
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhang, H.; Lee, S. Robot Bionic Vision Technologies: A Review. Appl. Sci. 2022, 12, 7970. https://doi.org/10.3390/app12167970