What’s on the syllabus? An analysis of assessment criteria in first year courses across US and Spanish universities

Published in Educational Assessment, Evaluation and Accountability

Abstract

This study examined differences in the assessment criteria that US and Spanish university instructors use to assign course grades. The US sample comprised 250 course syllabi (159 from universities and 91 from 4-year colleges) developed by randomly selected instructors in five academic disciplines (education, math, science, psychology, and English). The Spanish data set included 175 syllabi drawn from a national database across the same five disciplines. The results revealed that instructors in both countries employed a number of criteria when assigning course grades, with US instructors relying equally on process and product criteria and Spanish instructors using a higher proportion of product indicators. We also found that self- and peer assessment were rarely used in either country and that no syllabi employed progress criteria. Theoretical, practical, and policy implications are discussed, along with avenues for further research.

Fig. 1


Notes

  1. The data upon which the findings of this study are based are available on request from the corresponding author.


Author information

Corresponding author

Correspondence to Anastasiya A. Lipnevich.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


Table 6 Assessment components explanation


About this article


Cite this article

Lipnevich, A.A., Panadero, E., Gjicali, K. et al. What’s on the syllabus? An analysis of assessment criteria in first year courses across US and Spanish universities. Educ Asse Eval Acc 33, 675–699 (2021). https://doi.org/10.1007/s11092-021-09357-9
