Research Article
Open Access

InDepth: Real-time Depth Inpainting for Mobile Augmented Reality

Published: 29 March 2022

Abstract

Mobile Augmented Reality (AR) demands realistic rendering of virtual content that blends seamlessly into the physical environment. For this reason, AR headsets and recent smartphones are increasingly equipped with Time-of-Flight (ToF) cameras that acquire depth maps of a scene in real time. ToF cameras are cheap and fast; however, they suffer from several issues that degrade the quality of depth data and ultimately hamper their use for mobile AR. Among these issues, scale errors, where virtual objects appear much bigger or smaller than they should, are particularly noticeable and unpleasant. This article addresses these challenges with InDepth, a real-time depth inpainting system based on edge computing. InDepth employs a novel deep neural network (DNN) architecture to improve the accuracy of depth maps obtained from ToF cameras. The DNN fills holes and corrects artifacts in the depth maps with high accuracy and an inference time eight times lower than the state of the art. An extensive performance evaluation in real settings shows that InDepth reduces the mean absolute error by a factor of four with respect to ARCore DepthLab. Finally, a user study reveals that InDepth is effective in rendering correctly scaled virtual objects, outperforming DepthLab.
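
The abstract describes the data flow only at a high level: an RGB frame and an incomplete ToF depth map go in, and a dense, corrected depth map comes out, with the DNN filling holes and fixing artifacts. Purely as an illustration of that input/output contract, the PyTorch sketch below wires up a toy RGB-guided depth-completion model. The architecture, layer sizes, and every name in it are hypothetical; this is not the InDepth network described in the paper.

```python
# Hypothetical sketch of RGB-guided depth inpainting -- NOT the InDepth architecture.
# It only mirrors the I/O described in the abstract: RGB + incomplete ToF depth in,
# dense depth out, with measured pixels preserved and holes filled by the network.
import torch
import torch.nn as nn


class ToyDepthInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder over concatenated RGB (3 ch) + raw depth (1 ch) + validity mask (1 ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder upsamples back to the input resolution and regresses one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb, raw_depth):
        mask = (raw_depth > 0).float()               # 1 where the ToF sensor returned a value
        x = torch.cat([rgb, raw_depth, mask], dim=1)
        pred = self.decoder(self.encoder(x))
        # Keep measured pixels as-is; let the network fill the invalid regions.
        return mask * raw_depth + (1.0 - mask) * pred


if __name__ == "__main__":
    model = ToyDepthInpainter().eval()
    rgb = torch.rand(1, 3, 240, 320)                 # normalized RGB frame
    raw_depth = torch.rand(1, 1, 240, 320) * 4.0     # depth in metres
    raw_depth[:, :, 60:120, 80:200] = 0.0            # simulate a hole in the ToF map
    with torch.no_grad():
        dense_depth = model(rgb, raw_depth)
    print(dense_depth.shape)                         # torch.Size([1, 1, 240, 320])
```

In the system the abstract describes, this kind of inference would run on an edge server rather than on the phone, with the mobile client streaming RGB-D frames and receiving inpainted depth maps back for rendering; the sketch above deliberately ignores that client/server split.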

Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 6, Issue 1
March 2022
1009 pages
EISSN: 2474-9567
DOI: 10.1145/3529514
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 29 March 2022
Published in IMWUT Volume 6, Issue 1

Author Tags

  1. Augmented reality
  2. depth inpainting
  3. depth sensing
  4. edge computing
  5. user experience

Qualifiers

  • Research-article
  • Research
  • Refereed
