Abstract
Adaptive learning typically focuses on the challenge of adjusting the presentation of instructional material based on an automated assessment of real-time student performance. The goal is to move away from a one-size-fits-all approach to learning toward a more personalized experience. While we support this goal, in this paper we focus on a related but logically prior challenge, namely, the development of an adaptive environment that helps the instructional designer build better learning materials in the first place. Rather than focus on the automated assessment of student performance, we have instead focused on mining data generated by the instructional designers as they developed new course materials. Working on the assumption that the discovery of patterns in past design decisions will inform better future design decisions, we have applied an off-the-shelf machine learning technique to explore associations between learning contexts and the selection of specific learning activities. Although machine learning techniques have become commodities, practical guidance regarding their expected performance is harder to come by. We have filled this gap by systematically generating our own synthetic data sets that represent notional histories of user interactions to hone our own intuitions about the performance of the Bayesian network that underpins our adaptive environment. Our intent is not to add yet one more benchmark data set to promote Bayesian approaches but, rather, to describe by way of example a method for generating such data sets that will help the practitioner understand what an otherwise black box is doing.
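The approach the abstract describes can be sketched in miniature: generate a synthetic history of designer decisions that encodes a hidden "design habit," add noise, and check whether a simple conditional-probability-table estimate (the building block of a discrete Bayesian network node) recovers the pattern. All variable names, context values, and the rule below are illustrative assumptions, not the authors' actual schema or model.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical learning-context variables and activities (illustrative only).
CONTEXTS = {
    "audience": ["novice", "expert"],
    "modality": ["online", "classroom"],
}
ACTIVITIES = ["lecture", "quiz", "simulation"]

def hidden_rule(ctx):
    """A notional designer habit that the synthetic history will encode."""
    if ctx["audience"] == "novice":
        return "lecture"
    return "simulation" if ctx["modality"] == "online" else "quiz"

def synthesize(n, noise=0.1):
    """Generate n (context, activity) records; `noise` mislabels a fraction,
    mimicking inconsistent past design decisions."""
    data = []
    for _ in range(n):
        ctx = {k: random.choice(v) for k, v in CONTEXTS.items()}
        act = hidden_rule(ctx)
        if random.random() < noise:
            act = random.choice(ACTIVITIES)
        data.append((ctx, act))
    return data

def fit_cpt(data):
    """Estimate P(activity | context) by counting co-occurrences --
    the conditional probability table of a discrete Bayesian network node."""
    counts = defaultdict(lambda: defaultdict(int))
    for ctx, act in data:
        counts[tuple(sorted(ctx.items()))][act] += 1
    return counts

def recommend(cpt, ctx):
    """Return the most probable activity for a given context."""
    key = tuple(sorted(ctx.items()))
    return max(cpt[key], key=cpt[key].get)

history = synthesize(2000, noise=0.1)
cpt = fit_cpt(history)
print(recommend(cpt, {"audience": "novice", "modality": "online"}))
```

Because the generating rule is known, the recommender's output can be scored exactly, which is what makes synthetic histories useful for honing intuitions about model performance before real designer data are available.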
Notes
- 1.
In earlier prototypes, we considered using 11 different multi-valued inputs, which resulted in 8,398,080 unique combinations of input values and learning activities. While this number is manageable from the perspective of database limitations, it far outstrips human comprehensibility.
- 2.
For present purposes, it does not matter which of the 384 possible rules we use to verify performance. We leave the question of whether a particular set of rules represents best practice to domain experts. We are only interested in determining how well the recommendation engine is able to pick out any pattern of use.
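The combinatorial explosion in Note 1 is just a product of variable cardinalities. The specific cardinalities below are one hypothetical assignment (the note does not list the actual variables): 11 multi-valued inputs whose value counts, multiplied by a hypothetical count of five learning activities, reproduce the quoted total.

```python
import math

# Hypothetical cardinalities for 11 inputs: five 6-valued, three 3-valued,
# three 2-valued (illustrative only; the actual variables are not listed).
input_cardinalities = [6] * 5 + [3] * 3 + [2] * 3
n_activities = 5  # also hypothetical

total = math.prod(input_cardinalities) * n_activities
print(total)  # 8398080
```

The point is not the particular factorization but that the combination space grows multiplicatively, which is why the authors note it quickly outstrips human comprehensibility even when a database can store it.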
© 2021 Springer Nature Switzerland AG
Cite this paper
Warwick, W., Ford, R., Funke, M. (2021). Using Synthetic Datasets to Hone Intuitions Within an Adaptive Learning Environment. In: Stephanidis, C., et al. HCI International 2021 - Late Breaking Papers: Cognition, Inclusion, Learning, and Culture. HCII 2021. Lecture Notes in Computer Science, vol 13096. Springer, Cham. https://doi.org/10.1007/978-3-030-90328-2_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-90327-5
Online ISBN: 978-3-030-90328-2