Digital twin for autonomous collaborative robot by using synthetic data and reinforcement learning

Published: 01 February 2024

Abstract

Training robots in real-world environments is often impractical due to time and cost constraints. To overcome these limitations, robots can instead be trained in virtual environments using Reinforcement Learning (RL); the main difficulty of this approach is obtaining suitable training data. This paper proposes a novel method for training collaborative robots in virtual environments using synthetic data and a point cloud framework. The method is divided into four stages: data generation, 3D object classification, robot training, and integration. In the first stage, synthetic data resembling real-world scenarios is generated for use in virtual training. In the second stage, the generated data is used to classify objects in 3D space. In the third stage, robots are trained with RL algorithms that operate on the generated data and the resulting 3D object classifications. In the fourth stage, the components are integrated and the effectiveness of the method is evaluated. By combining synthetic data with a point cloud framework, the proposed method offers an efficient and cost-effective solution for training robots in virtual environments; its main advantage is the reduction in the time and cost otherwise required to train robots in the real world, which makes it a potentially significant contribution to robotics and 3D computer vision.
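As an illustration of the first two stages, the following is a minimal sketch, in Python, of how labeled synthetic point clouds for simple primitive shapes might be generated, with a randomized pose and sensor-like noise standing in for the domain randomization described above. This is not the authors' implementation; every function name and parameter here is a hypothetical stand-in.

```python
# Minimal sketch (not the authors' pipeline): labeled synthetic point clouds
# for two primitive shapes, as stand-ins for scanned parts.
import numpy as np

def sample_box(n, size=(1.0, 0.6, 0.4)):
    """Sample n points uniformly on the surface of an axis-aligned box."""
    sx, sy, sz = size
    # Pick a face for each point in proportion to its area.
    areas = np.array([sy * sz, sy * sz, sx * sz, sx * sz, sx * sy, sx * sy])
    faces = np.random.choice(6, size=n, p=areas / areas.sum())
    pts = (np.random.rand(n, 3) - 0.5) * np.array(size)
    axis = faces // 2                          # which coordinate is pinned
    sign = np.where(faces % 2 == 0, 0.5, -0.5) # +face or -face of the box
    pts[np.arange(n), axis] = sign * np.array(size)[axis]
    return pts

def sample_cylinder(n, radius=0.3, height=1.0):
    """Sample n points on the lateral surface of a cylinder."""
    theta = np.random.rand(n) * 2 * np.pi
    z = (np.random.rand(n) - 0.5) * height
    return np.stack([radius * np.cos(theta), radius * np.sin(theta), z], axis=1)

def make_example(label, n=1024, noise=0.01):
    """One labeled example: random yaw plus sensor-like Gaussian noise."""
    pts = sample_box(n) if label == 0 else sample_cylinder(n)
    angle = np.random.rand() * 2 * np.pi       # simple domain randomization
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return pts @ rot.T + np.random.randn(n, 3) * noise, label

dataset = [make_example(lbl) for lbl in np.random.randint(0, 2, size=100)]
```

In a fuller pipeline, examples like these would be generated from CAD-like models of real parts and then used to train the 3D object classifier of the second stage.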

Highlights

The increasing use of robots drives up the programming time needed to adapt to the arbitrary shapes of new products.
To address this, synthetic data is used to train object detection, and reinforcement learning is used to find grasp positions (a toy sketch of the grasp search follows this list).
The proposed method consists of a series of digital twins: a 3D random object generator, 2D computer vision, 3D depth scanning, gripper simulators, and a path generator.
This approach can reduce manual work and increase productivity through a reinforcement-learning-based safe path system for collaborative robots.
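To make the grasp-position search concrete, the sketch below reduces the reinforcement learning problem to its simplest form, a multi-armed bandit: an epsilon-greedy agent learns which of a discretized set of candidate grasp poses succeeds most often in a stub gripper simulator. The reward model and all names here are hypothetical, not taken from the paper; a real system would learn over full observations and continuous poses.

```python
# Toy bandit simplification of RL grasp search (hypothetical, illustrative).
import numpy as np

np.random.seed(0)
N_CANDIDATES = 32  # discretized candidate grasp poses around the part
# Stub "gripper simulator": each candidate has an unknown success probability.
true_success_prob = np.random.uniform(0.05, 0.95, size=N_CANDIDATES)

def simulate_grasp(candidate):
    """Return 1.0 on a successful (simulated) grasp, else 0.0."""
    return float(np.random.rand() < true_success_prob[candidate])

q = np.zeros(N_CANDIDATES)       # estimated value of each grasp pose
counts = np.zeros(N_CANDIDATES)
epsilon, episodes = 0.1, 5000

for _ in range(episodes):
    if np.random.rand() < epsilon:
        a = np.random.randint(N_CANDIDATES)  # explore a random pose
    else:
        a = int(np.argmax(q))                # exploit the best-known pose
    r = simulate_grasp(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]           # incremental mean of rewards

print("best learned grasp:", int(np.argmax(q)),
      "true best:", int(np.argmax(true_success_prob)))
```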

Published In

Robotics and Computer-Integrated Manufacturing, Volume 85, Issue C, February 2024, 399 pages

Publisher

Pergamon Press, Inc., United States

Author Tags

  1. Object detection
  2. Synthetic data
  3. Point cloud
  4. Reinforcement learning
  5. Digital twin

Qualifiers

  • Research-article
