Research article · Open access

A Systematic Process to Engineer Dependable Integration of Frame-based Input Devices in a Multimodal Input Chain: Application to Rehabilitation in Healthcare

Published: 17 June 2024

Abstract

Designing new input devices and associated interaction techniques is a key contribution to increasing the bandwidth between users and interactive applications. In the field of Human-Computer Interaction, industrial research and development departments and university research laboratories have, since the invention of the mouse and the graphical user interface, proposed multiple contributions, including the integration of multiple input devices into multimodal interaction techniques. Most of the time, these contributions are presented as prototypes or demonstrators, with user studies providing evidence of the bandwidth increase. Such contributions, however, offer no support to software developers for integrating them into real-life systems. When it is done, this integration is performed in a craft manner, outside required software engineering good practice. This paper proposes a systematic process for integrating novel input devices and their associated interaction techniques to better support users' work. It exploits a recent architectural model for interactive systems together with formal model-based approaches that support verification and validation when required. The paper focuses on frame-based input devices, which support gesture-based interaction and movement recognition, but also addresses their multimodal use. The engineering approach is demonstrated on an interactive application in the area of rehabilitation in healthcare, where the dependability of interactions and applications is as critical as their usability.
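The central idea of integrating a frame-based device into an input chain is a transducer: a component that consumes the device's continuous stream of frames (e.g., tracked joint positions) and emits discrete, higher-level interaction events. The sketch below illustrates that pattern only; it is not the paper's formal Petri-net/ICO models, and the `Frame` structure, event names, and hand-raised condition are hypothetical simplifications.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    """One sample from a frame-based device (hypothetical simplified form):
    vertical positions of the tracked hand and head, in metres."""
    hand_y: float
    head_y: float


class HandRaisedTransducer:
    """Turns a stream of low-level frames into discrete events
    ("hand-raised" / "hand-lowered"), the role a transducer plays
    between the device driver and the interaction technique."""

    def __init__(self) -> None:
        self.raised = False  # current discrete state of the transducer

    def feed(self, frame: Frame):
        """Consume one frame; return an event name on a state change, else None."""
        is_up = frame.hand_y > frame.head_y
        if is_up and not self.raised:
            self.raised = True
            return "hand-raised"
        if not is_up and self.raised:
            self.raised = False
            return "hand-lowered"
        return None


# Feeding a short synthetic frame stream: many frames, few events.
t = HandRaisedTransducer()
frames = [Frame(0.9, 1.6), Frame(1.7, 1.6), Frame(1.8, 1.6), Frame(1.0, 1.6)]
events = [e for f in frames if (e := t.feed(f)) is not None]
print(events)  # ['hand-raised', 'hand-lowered']
```

Note how the transducer collapses four frames into two events: consumers of the input chain subscribe to state changes rather than to the raw frame rate.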


      Published In

      Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue EICS
      June 2024, 589 pages
      EISSN: 2573-0142
      DOI: 10.1145/3673909
      This work is licensed under a Creative Commons Attribution International 4.0 License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 17 June 2024
      Accepted: 01 April 2024
      Revised: 01 April 2024
      Received: 01 February 2024
      Published in PACMHCI Volume 8, Issue EICS


      Author Tags

      1. Frame-based input devices
      2. Software Architectures
      3. formal models
      4. interaction techniques
      5. transducers
      6. verification

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Article Metrics

      • Total Citations: 0
      • Total Downloads: 132
      • Downloads (last 12 months): 132
      • Downloads (last 6 weeks): 47
      Reflects downloads up to 15 Oct 2024
