Vision-guided fine-operation of robot and its application in eight-puzzle game

  • Regular Paper
International Journal of Intelligent Robotics and Applications

Abstract

Industrial robots can perform delicate operations with good stability and durability. However, they adapt poorly to changes in tasks and environments and can generally only execute operations in a fixed logical sequence. In contrast, humans can adapt to environmental changes at any time thanks to strong hand–eye coordination: they quickly adjust their limbs to changes in targets, distances, and directions observed by the visual system, producing a closed-loop control process known as the sensor–actor process. This paper studies the robotic hand–eye coordination problem through a robot playing the eight-puzzle game. First, the robot's system analyzes changes in the position, angle, and layout of the puzzle board in the scene through an image recognition algorithm, and then formulates an optimized operation sequence. Next, the system transforms the tile-moving instructions into physical coordinates in the world coordinate system according to the image-understanding result. The robot then moves its hand to point accurately at tiles and slide them in the correct direction. These tasks require the robot to perceive changes in the position, posture, and initial layout of the puzzle board and to compute the motion vector parameters. Field experiments were conducted to validate the proposed approach. The success rate of the move operation exceeded 96%, which shows that this system based on visual perception can greatly improve the adaptability of the robot, making it more flexible and autonomous and laying the foundation for extending the robot's ability to work in unstructured scenes.
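The abstract does not specify which planner produces the "optimized operation sequence". As a minimal sketch, A* search with the Manhattan-distance heuristic — the standard approach for the n-puzzle — could generate the tile moves; the names `solve`, `manhattan`, and `neighbors` below are illustrative, not from the paper:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank cell, row-major 3x3

def manhattan(state):
    """Sum of Manhattan distances of each tile from its goal cell."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        dist += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return dist

def neighbors(state):
    """Yield (move, next_state) pairs; 'move' names the tile's sliding direction."""
    blank = state.index(0)
    r, c = divmod(blank, 3)
    for dr, dc, move in ((-1, 0, "down"), (1, 0, "up"),
                         (0, -1, "right"), (0, 1, "left")):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            swap = nr * 3 + nc
            s = list(state)
            s[blank], s[swap] = s[swap], s[blank]
            yield move, tuple(s)

def solve(start, goal=GOAL):
    """A* search; returns an optimal list of tile moves, or None if unreachable."""
    frontier = [(manhattan(start), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for move, nxt in neighbors(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + manhattan(nxt), ng, nxt, path + [move]))
    return None
```

Each returned move would then be mapped, as the abstract describes, from board coordinates to world coordinates before the arm executes it.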


Figs. 1–21 (images omitted; a caption fragment between Figs. 9 and 10 reads "modified D–H representation").



Acknowledgment

This work was supported by the NSFC Project (Project Nos. 61771146 and 61375122), and (in part) by Shanghai Science and Technology Development Funds (Project Nos. 13dz2260200 and 13511504300).

Author information


Corresponding author

Correspondence to Hui Wei.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wei, H., Chen, XX. & Miao, XY. Vision-guided fine-operation of robot and its application in eight-puzzle game. Int J Intell Robot Appl 5, 576–589 (2021). https://doi.org/10.1007/s41315-021-00186-z
