DOI: 10.1145/3332165.3347902
Research article | Public Access

GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality

Published: 17 October 2019

Abstract

We present GhostAR, a time-space editor for authoring and acting out Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing actions and programming robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model that takes the real-time captured motion as input, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize in-situ authoring and rapid iteration of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.
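The DTW-based collaboration model maps a live motion stream to the nearest previously authored human action and retrieves the robot action that was paired with it. The Python sketch below illustrates that matching step only; it is a minimal reading of the abstract, not the authors' implementation. It assumes motions are sequences of feature vectors (e.g., flattened joint positions) and that authoring produced (human motion, robot action) pairs; the function names and action labels are hypothetical.

import numpy as np

def dtw_distance(a, b):
    # Classic dynamic time warping: cumulative-cost table with
    # Euclidean frame-to-frame cost and (insert, delete, match) moves.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def match_robot_action(live_motion, authored_pairs):
    # Map the real-time captured motion to the closest previously
    # authored human action; return the robot action recorded with it.
    best = min(authored_pairs,
               key=lambda p: dtw_distance(live_motion, p["human_motion"]))
    return best["robot_action"]

# Hypothetical usage: two authored human motions (40 and 55 frames of
# 6-D features) paired with placeholder robot actions.
authored = [
    {"human_motion": np.random.rand(40, 6), "robot_action": "hand_over_tool"},
    {"human_motion": np.random.rand(55, 6), "robot_action": "hold_workpiece"},
]
live_window = np.random.rand(48, 6)  # sliding window of captured motion
print(match_robot_action(live_window, authored))

Because DTW aligns sequences in time before comparing them, a live motion performed at a different speed can still match its authored counterpart, which is what allows adaptive collaboration without an offline training process.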

Supplementary Material

MP4 File (ufp4405pv.mp4)
Preview video
MP4 File (ufp4405vf.mp4)
Supplemental video
MP4 File (p521-cao.mp4)



Information

Published In

UIST '19: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
October 2019
1229 pages
ISBN: 978-1-4503-6816-2
DOI: 10.1145/3332165
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 17 October 2019


Author Tags

  1. augmented reality
  2. embodied authoring
  3. embodied interaction
  4. human-robot collaboration
  5. human-robot interaction
  6. program-by-demonstration
  7. time-space editing

Qualifiers

  • Research-article


Conference

UIST '19

Acceptance Rates

Overall Acceptance Rate 842 of 3,967 submissions, 21%



Cited By

  • (2024) RealityEffects: Augmenting 3D Volumetric Videos with Object-Centric Annotation and Dynamic Visual Effects. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1248-1261. https://doi.org/10.1145/3643834.3661631
  • (2024) Understanding On-the-Fly End-User Robot Programming. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2468-2480. https://doi.org/10.1145/3643834.3660721
  • (2024) PRogramAR: Augmented Reality End-User Robot Programming. ACM Transactions on Human-Robot Interaction 13(1), 1-20. https://doi.org/10.1145/3640008
  • (2024) Unlocking Understanding: An Investigation of Multimodal Communication in Virtual Reality Collaboration. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-16. https://doi.org/10.1145/3613904.3642491
  • (2024) Fast-Forward Reality: Authoring Error-Free Context-Aware Policies with Real-Time Unit Tests in Extended Reality. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-17. https://doi.org/10.1145/3613904.3642158
  • (2024) End-User Development for Human-Robot Interaction. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 1355-1357. https://doi.org/10.1145/3610978.3638546
  • (2024) Goal-Oriented End-User Programming of Robots. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 582-591. https://doi.org/10.1145/3610977.3634974
  • (2024) RoboVisAR: Immersive Authoring of Condition-based AR Robot Visualisations. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 462-471. https://doi.org/10.1145/3610977.3634972
  • (2023) Integrating Virtual, Mixed, and Augmented Reality to Human–Robot Interaction Applications Using Game Engines: A Brief Review of Accessible Software Tools and Frameworks. Applied Sciences 13(3), 1292. https://doi.org/10.3390/app13031292
  • (2023) GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1-17. https://doi.org/10.1145/3586183.3606736
