DOI: 10.3115/1220835.1220869

Modelling user satisfaction and student learning in a spoken dialogue tutoring system with generic, tutoring, and user affect parameters

Published: 04 June 2006

Abstract

We investigate using the PARADISE framework to develop predictive models of system performance in our spoken dialogue tutoring system. We represent performance with two metrics: user satisfaction and student learning. We train and test predictive models of these metrics in our tutoring system corpora. We predict user satisfaction with two parameter types: 1) system-generic and 2) tutoring-specific. To predict student learning, we also use a third type: 3) user affect. Although generic parameters are useful predictors of user satisfaction in other PARADISE applications, overall our parameters produce less useful user satisfaction models in our system. However, generic and tutoring-specific parameters do produce useful models of student learning in our system. User affect parameters can increase the usefulness of these models.
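
PARADISE (Walker et al., 1997) models performance as a multivariate linear regression over normalized dialogue parameters. The sketch below illustrates that modelling step in Python; it is a minimal illustration under stated assumptions, not the authors' code: the parameter names, the toy per-dialogue data, and the use of scikit-learn's LinearRegression are chosen for demonstration, and the original framework additionally applies stepwise parameter selection.

# Minimal sketch of PARADISE-style performance modelling (illustrative only).
# The parameters below (task_success, total_turns, asr_error_rate) stand in for
# the generic / tutoring-specific / user affect parameters described above.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-dialogue data: rows = dialogues, columns = candidate parameters.
X = np.array([
    [0.80, 42, 0.12],   # [task_success, total_turns, asr_error_rate]
    [0.60, 55, 0.20],
    [0.90, 38, 0.08],
    [0.70, 60, 0.25],
    [0.50, 70, 0.30],
    [0.85, 45, 0.10],
])
y = np.array([4.2, 3.1, 4.8, 2.9, 2.5, 4.5])  # e.g. user satisfaction survey totals

# PARADISE normalizes each parameter to zero mean / unit variance so the learned
# coefficients are comparable in magnitude across parameters.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

model = LinearRegression().fit(X_norm, y)
print("weights:", model.coef_)          # relative contribution of each parameter
print("R^2:", model.score(X_norm, y))   # proportion of variance explained

The same fitting step applies when the target is student learning (e.g., posttest score controlled for pretest) rather than user satisfaction; only the response variable and the candidate parameter set change.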

References

[1] J. Ang, R. Dhillon, A. Krupski, E. Shriberg, and A. Stolcke. 2002. Prosody-based automatic detection of annoyance and frustration in human-computer dialog. In Proc. ICSLP.
[2] A. Batliner, K. Fischer, R. Huber, J. Spilker, and E. Nöth. 2003. How to find trouble in communication. Speech Communication, 40:117--143.
[3] K. Bhatt, M. Evens, and S. Argamon. 2004. Hedged responses and expressions of affect in human/human and human/computer tutorial interactions. In Proc. 26th Annual Meeting of the Cognitive Science Society.
[4] H. Bonneau-Maynard, L. Devillers, and S. Rosset. 2000. Predictive performance of dialog systems. In Proc. LREC.
[5] M. T. H. Chi, S. A. Siler, H. Jeong, T. Yamauchi, and R. G. Hausmann. 2001. Learning from human tutoring. Cognitive Science, 25:471--533.
[6] S. Craig, A. Graesser, J. Sullins, and B. Gholson. 2004. Affect and learning: An exploratory look into the role of affect in learning. Journal of Educational Media, 29:241--250.
[7] K. Forbes-Riley and D. Litman. 2005. Correlating student acoustic-prosodic profiles with student learning in spoken tutoring dialogues. In Proc. INTERSPEECH.
[8] K. Forbes-Riley, D. Litman, S. Silliman, and J. Tetreault. 2006. Comparing synthesized versus pre-recorded tutor speech in an intelligent tutoring spoken dialogue system. In Proc. FLAIRS.
[9] C. M. Lee, S. Narayanan, and R. Pieraccini. 2002. Combining acoustic and language information for emotion recognition. In Proc. ICSLP.
[10] D. Litman and K. Forbes-Riley. 2004a. Annotating student emotional states in spoken tutoring dialogues. In Proc. SIGdial, pages 144--153.
[11] D. Litman and K. Forbes-Riley. 2004b. Predicting student emotions in computer-human tutoring dialogues. In Proc. ACL, pages 352--359.
[12] D. Litman, C. Rosé, K. Forbes-Riley, K. VanLehn, D. Bhembe, and S. Silliman. 2006. Spoken versus typed human and computer dialogue tutoring. International Journal of Artificial Intelligence in Education. To appear.
[13] S. Möller. 2005a. Parameters for quantifying the interaction with spoken dialogue telephone services. In Proc. SIGdial.
[14] S. Möller. 2005b. Towards generic quality prediction models for spoken dialogue systems - a case study. In Proc. INTERSPEECH.
[15] J. D. Moore, K. Porayska-Pomsta, S. Varges, and C. Zinn. 2004. Generating tutorial feedback with affect. In Proc. FLAIRS.
[16] J. Mostow and G. Aist. 2001. Evaluating tutors that listen: An overview of Project LISTEN. In K. Forbus and P. Feltovich, editors, Smart Machines in Education.
[17] H. Pon-Barry, B. Clark, E. Owen Bratt, K. Schultz, and S. Peters. 2004. Evaluating the effectiveness of SCoT: a Spoken Conversational Tutor. In Proc. ITS 2004 Workshop on Dialogue-based Intelligent Tutoring Systems: State of the Art and New Research Directions.
[18] E. Shriberg, E. Wade, and P. Price. 1992. Human-machine problem solving using spoken language systems (SLS): Factors affecting performance and user satisfaction. In Proc. DARPA Speech and NL Workshop, pages 49--54.
[19] K. VanLehn, P. W. Jordan, C. P. Rosé, D. Bhembe, M. Böttner, A. Gaydos, M. Makatchev, U. Pappuswamy, M. Ringenberg, A. Roque, S. Siler, R. Srivastava, and R. Wilson. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proc. Intelligent Tutoring Systems (ITS).
[20] M. Walker, D. Litman, C. Kamm, and A. Abella. 1997. PARADISE: A framework for evaluating spoken dialogue agents. In Proc. ACL/EACL, pages 271--280.
[21] M. Walker, C. Kamm, and D. Litman. 2000. Towards developing general models of usability with PARADISE. Natural Language Engineering, 6:363--377.
[22] M. Walker, A. Rudnicky, R. Prasad, J. Aberdeen, E. Bratt, J. Garofolo, H. Hastie, A. Le, B. Pellom, A. Potamianos, R. Passonneau, S. Roukos, G. Sanders, S. Seneff, and D. Stallard. 2002. DARPA Communicator: Cross-system results for the 2001 evaluation. In Proc. ICSLP.

Cited By

  • (2018) Keep Me in the Loop. Proceedings of the 20th ACM International Conference on Multimodal Interaction, pages 384-392. DOI: 10.1145/3242969.3242974. Online publication date: 2 Oct 2018.
  • (2012) Evaluating language understanding accuracy with respect to objective outcomes in a dialogue system. Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 471-481. DOI: 10.5555/2380816.2380874. Online publication date: 23 Apr 2012.
  • (2011) Investigating the relationship between dialogue structure and tutoring effectiveness. International Journal of Artificial Intelligence in Education, 21(1-2):65-81. DOI: 10.5555/2336135.2336140. Online publication date: 1 Jan 2011.
  • (2008) The relative impact of student affect on performance models in a spoken dialogue tutoring system. User Modeling and User-Adapted Interaction, 18(1-2):11-43. DOI: 10.1007/s11257-007-9038-5. Online publication date: 1 Feb 2008.
  • (2007) WIRE. Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies, pages 84-88. DOI: 10.5555/1556328.1556340. Online publication date: 26 Apr 2007.
  • (2006) Exploiting discourse structure for spoken dialogue performance analysis. Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 85-93. DOI: 10.5555/1610075.1610089. Online publication date: 22 Jul 2006.

        Published In

        HLT-NAACL '06: Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics
        June 2006
        522 pages

        Publisher

        Association for Computational Linguistics

        United States


        Acceptance Rates

        HLT-NAACL '06 paper acceptance rate: 62 of 257 submissions (24%)
        Overall acceptance rate: 240 of 768 submissions (31%)
