Research Article | Open Access

Self-supervised Learning for Reading Activity Classification

Published: 14 September 2021

Abstract

Reading analysis can convey information about a user's confidence and reading habits and can be used to construct useful feedback. However, a lack of labeled data inhibits the effective application of fully supervised deep learning (DL) to automatic reading analysis. We propose a self-supervised learning (SSL) method for reading analysis. SSL has previously proven effective for physical human activity recognition (HAR) tasks, but it has not been applied to cognitive HAR tasks such as reading. We first evaluate the proposed method on a four-class reading-detection task using electrooculography datasets, followed by a two-class confidence-estimation task on multiple-choice questions using eye-tracking datasets. Fully supervised DL and support vector machines (SVMs) serve as baselines for the proposed SSL method. The results show that the proposed SSL method outperforms both baselines on both tasks, especially when training data are scarce, indicating that it is the better choice for reading analysis. These results are important for informing the design of automatic reading analysis platforms.
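The paper itself does not include code. As a rough, hypothetical illustration of the general SSL recipe it builds on, a common pretext task for sensor time-series (e.g., signal-transformation recognition, as used in prior SSL work for HAR) turns unlabeled windows into a supervised problem: each window is perturbed by one of several transformations, and a model is pretrained to predict which transformation was applied, before being fine-tuned on the small labeled downstream task. The sketch below shows only the label-free data-generation step; the transformation set and window size are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def make_pretext_dataset(windows, rng):
    """Given raw (unlabeled) windows of shape (n, t), return transformed
    windows plus integer pseudo-labels identifying which transformation
    was applied -- no human annotation required."""
    transforms = [
        lambda w: w,                                  # 0: identity
        lambda w: w[::-1],                            # 1: time reversal
        lambda w: -w,                                 # 2: sign flip
        lambda w: w + rng.normal(0, 0.05, w.shape),   # 3: jitter (additive noise)
    ]
    xs, ys = [], []
    for w in windows:
        label = rng.integers(len(transforms))         # pick a transform at random
        xs.append(transforms[label](w))
        ys.append(label)
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
raw = rng.normal(size=(8, 100))        # 8 unlabeled 100-sample signal windows
x, y = make_pretext_dataset(raw, rng)
print(x.shape, y.shape)                # (8, 100) (8,)
```

A network pretrained to predict `y` from `x` learns signal features without labels; its encoder is then fine-tuned on the scarce labeled reading data.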



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 3 (September 2021), 1443 pages. EISSN: 2474-9567. DOI: 10.1145/3486621

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Self-supervised learning
    2. confidence estimation
    3. fully-supervised deep learning
    4. reading analysis
    5. reading detection

    Qualifiers

    • Research-article
    • Research
    • Refereed

