MPSKT: Multi-head ProbSparse Self-Attention for Knowledge Tracing

Published: 13 December 2022

Abstract

Over the past two years, COVID-19 has driven a widespread shift to online education, and knowledge tracing has been deployed on many educational platforms. However, most existing knowledge tracing models still struggle with long-term dependencies. To address this problem, we propose Multi-head ProbSparse Self-Attention for Knowledge Tracing (MPSKT). First, a temporal convolutional network encodes the positional information of the input sequence. Then, multi-head ProbSparse self-attention in the encoder and decoder blocks captures the relationships among the input sequences, while convolution and pooling layers in the encoder block shorten the input sequence. This greatly reduces the time complexity of the model and better addresses its long-term dependency problem. Finally, experimental results on three public online education datasets demonstrate the effectiveness of the proposed model.
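The paper itself includes no code; as an illustrative sketch of the ProbSparse mechanism named above (following the Informer-style attention the model builds on), a single attention head might look like the following. All names here (`prob_sparse_attention`, `factor`) are hypothetical, and for clarity this sketch scores every query against every key, whereas the real mechanism samples keys so that only O(L ln L) dot products are computed. The full model additionally splits the embedding across several heads and applies convolution and pooling between encoder layers.

```python
import numpy as np

def prob_sparse_attention(Q, K, V, factor=5):
    """One head of ProbSparse self-attention (Informer-style sketch).

    Only the top-u "active" queries, ranked by a max-minus-mean sparsity
    score, attend over the keys; the remaining "lazy" queries fall back
    to the mean of V.
    """
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)          # (L, L); the real mechanism
                                           # samples keys instead of
                                           # computing all L*L scores
    u = min(L, max(1, int(factor * np.ceil(np.log(L)))))
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(-sparsity)[:u]        # indices of active queries

    out = np.tile(V.mean(axis=0), (L, 1))  # lazy queries -> mean of V
    s = scores[top]
    attn = np.exp(s - s.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # softmax over keys
    out[top] = attn @ V                    # active queries attend fully
    return out
```

With `factor` around 5, u ≈ 5·ln L queries are treated as active, which is where the claimed reduction in time complexity for long sequences comes from.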



    Published In

    CSAE '22: Proceedings of the 6th International Conference on Computer Science and Application Engineering
    October 2022
    411 pages
    ISBN:9781450396004
    DOI:10.1145/3565387

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. deep learning
    2. knowledge tracing
    3. self-attention mechanism
    4. temporal convolutional network

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • National Natural Science Foundation of China
    • Shaanxi Province 2021 Provincial First-Class Undergraduate Course Construction Project “Computer Network”
    • Shaanxi Normal University Teacher Teaching Model Innovation and Practice Research Special Fund Project in 2021

    Conference

    CSAE 2022

    Acceptance Rates

    Overall Acceptance Rate 368 of 770 submissions, 48%

