Automatic Grading for Complex Multifile Programs

Published: 01 January 2020
Abstract

This paper presents DGRADER, an automatic grading method that handles complex multifile programs. Both its dynamic and static grading modules support multifile program analysis, which is an advantage for complex programming problems that span more than one source file. The dynamic analysis exploits the compiler's object-file linker to build and run complex multifile programs. The static grading module proceeds in the following steps. First, each program is parsed into an abstract syntax tree, which is then mapped into an abstract-syntax-tree data map. Second, preprocessor information is used to link the external sources called from the main program via a complex multifile linker-fusion algorithm. Third, a standardization process removes problematic code and unused functions and orders functions according to the call sequence. Finally, program matching resolves the structure-variance problem by combining the preceding standardization with simple tree matching using a tag classifier. The novelty of the approach is that it handles complex multifile program analysis with flexible grading that accounts for modularity and large-scale problem complexity. The results show improved grading precision, yielding reliable grading scores delivered through an intuitive system.
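As a rough illustration of the standardization step described above (unused-function removal followed by ordering functions according to the call sequence), the sketch below uses Python's `ast` module as a stand-in for the paper's multifile analysis toolchain. All names here, including `standardize`, are illustrative assumptions for this sketch, not part of DGRADER.

```python
import ast

# A toy submission: 'unused' is dead code and should be pruned.
SOURCE = """
def helper(x):
    return x * 2

def unused(y):
    return y - 1

def main():
    return helper(21)
"""

def called_names(fn: ast.FunctionDef) -> set:
    # Collect the names of all functions invoked inside fn's body.
    return {
        node.func.id
        for node in ast.walk(fn)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    }

def standardize(source: str, entry: str = "main") -> list:
    # Parse the submission into an AST and index its top-level functions.
    tree = ast.parse(source)
    funcs = {n.name: n for n in tree.body if isinstance(n, ast.FunctionDef)}

    # Walk the call graph breadth-first from the entry point: functions
    # never reached are dropped (unused-function removal), and the visit
    # order gives a canonical function sequence based on calls.
    reachable, queue, order = set(), [entry], []
    while queue:
        name = queue.pop(0)
        if name in reachable or name not in funcs:
            continue
        reachable.add(name)
        order.append(name)
        queue.extend(sorted(called_names(funcs[name])))
    return order

print(standardize(SOURCE))  # → ['main', 'helper']  ('unused' is pruned)
```

A canonical ordering like this lets two structurally different but semantically equivalent submissions be compared function-by-function in a later matching stage.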


Published In

Complexity, Volume 2020, 17125 pages

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Publisher

John Wiley & Sons, Inc., United States

    Qualifiers

    • Research-article