DOI: 10.1145/3593434.3593960

Investigating Factors Influencing Students’ Assessment of Conceptual Models

Published: 14 June 2023

Abstract

This paper discusses the challenges of evaluating the quality of conceptual models in educational settings. While automated grading techniques may work for simplistic modeling tasks, realistic modeling tasks that allow for a wide variety of solutions cannot be evaluated automatically. At the same time, the traditional approach of having instructors grade the exercises may not be feasible in larger courses. To address this issue, alternative approaches can be used, such as teaching students to assess the quality of their own solutions or using calibrated peer reviews. It is therefore crucial to determine the quality of the feedback a student can deliver on their own. As a first step, this paper reports on the results of controlled experiments with 368 participants, conducted to investigate factors that influence students’ model comprehension and to identify ways to distinguish good student assessments from bad ones.
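To make the peer-review alternative mentioned above concrete, the following is a minimal, hypothetical sketch of how calibrated peer review could aggregate student grades: each reviewer is weighted by how closely their scores on a few instructor-graded calibration exercises match the instructor's scores. The weighting rule, score scale, and function names are illustrative assumptions only, not the method studied in the paper.

    # Hypothetical sketch of calibrated peer review (not the paper's method).
    # A reviewer's weight reflects agreement with instructor scores on a few
    # calibration exercises; a submission's grade is the weighted mean of
    # its peer scores.
    from statistics import mean


    def calibration_weight(reviewer_scores, instructor_scores, max_error=3.0):
        """Return a weight in [0, 1]: 1.0 for perfect agreement with the
        instructor, 0.0 once the mean absolute error reaches max_error."""
        errors = [abs(r - i) for r, i in zip(reviewer_scores, instructor_scores)]
        return max(0.0, 1.0 - mean(errors) / max_error)


    def calibrated_grade(peer_scores, weights):
        """Aggregate peer scores for one submission, weighted by each
        reviewer's calibration weight; fall back to a plain mean if no
        reviewer is calibrated."""
        total = sum(weights)
        if total == 0:
            return mean(peer_scores)
        return sum(s * w for s, w in zip(peer_scores, weights)) / total


    # Example: reviewer A tracks the instructor closely, reviewer B does not,
    # so the aggregated grade stays close to A's score.
    w_a = calibration_weight([4, 3, 5], [4, 3, 4])
    w_b = calibration_weight([1, 5, 2], [4, 3, 4])
    print(calibrated_grade([4, 1], [w_a, w_b]))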


Cited By

  • (2024) Extending Goal Models with Execution Orders: An Investigation of the Impact on Comprehensibility. Advances in Conceptual Modeling, 219–228. https://doi.org/10.1007/978-3-031-75599-6_17. Online publication date: 26-Oct-2024.


Published In

EASE '23: Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering
June 2023
544 pages
ISBN: 9798400700446
DOI: 10.1145/3593434

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 14 June 2023


Author Tags

  1. conceptual modeling
  2. controlled experiment
  3. model comprehension
  4. student assessment

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

EASE '23

Acceptance Rates

Overall Acceptance Rate: 71 of 232 submissions, 31%


Article Metrics

  • Downloads (Last 12 months): 20
  • Downloads (Last 6 weeks): 3
Reflects downloads up to 28 Dec 2024

