PEER ASSESSMENT IN HOSPITALITY EDUCATION
Published in the Journal of Teaching in Travel & Tourism, Volume 3, Number 1, 2003, pages 65-83.
Peer Assessment In Hospitality Education: Impacts Of Group Size On Performance
And Rating
Ian Knowd and Pheroza Daruwalla
University of Western Sydney
Author Note
Mr Ian Knowd, Lecturer, Tourism Studies, School of Environment and Agriculture, College of
Science, Technology and Environment, University of Western Sydney
Dr Pheroza Daruwalla, Senior Lecturer, Hospitality Management, School of Management,
College of Law and Business, University of Western Sydney
Abstract
This paper explores the role of peer assessment in managing student performance in
hospitality education. Issues of integrity, reliability and effectiveness for students and educators are
investigated with a focus on the impacts of group size on the practice of peer assessment.
Differences between peer assessment in groups of fewer than five students and groups with five or
more are explored. Findings indicate differential patterns of marking based on group size, with
smaller groups tending to mark peers more generously than larger groups. Issues of collusion and
subversion of the instrument and the process of peer assessment are also examined.
Keywords: peer assessment, evaluation, performance, hospitality, education.
Peer Assessment In Hospitality Education: Impacts Of Group Size On Performance
And Rating
Introduction
The use of peer and self-assessment as a moderating technique for student group-work
marking is well known in academia (Adams, Thomas and King 2000; Adams and King 1994;
Boud 1981, 1986; Cheng and Warren 1997; Dochy, Segers and Sluijsmans 1999; Gibbs 1989; Gibbs
and Jenkins 1992; Hughes and Large 1993). This paper explores the role of peer assessment in
managing student group work performance in hospitality education at university level. It
explores the principles of peer evaluation and details the use of various peer assessment
mechanisms within experiential learning contexts. The paper describes the instrument, its unique
properties and its use to moderate individual grades in a group project. It also describes the
method used with this peer assessment technique and provides a reflective exposé of the praxis
of using such a method in experiential learning paradigms. A discussion of issues for students
and educators follows some analysis of data derived from peer evaluations. This analysis seeks
to explore the effects of group size and student rating behaviour.
Since the start of the Hospitality Management program at the University of Western
Sydney in 1990, students have been asked to use peer assessment as part of the grading process
for group learning. There have been a variety of approaches used over time and the current
mechanism uses a 10-criteria scale of group performance attributes adapted from a similar
instrument used at the University of Wisconsin-Stout (Winger-Haunty, 1990). The criteria
include:
Quantity of Work
Quality of Work
Communication Skills
Initiative
Efficiency
Personal Relations
Group meeting attendance
Attitude and Enthusiasm
Effort
Dependability
A section is included on the rating sheet for students to write qualitative commentary on
the conduct of their group and individuals within it, or to provide some rationale for their scoring
on the numerical section. This qualitative commentary has also occasionally assisted instructors
to clarify or shed light on group dynamics and to identify ‘rogue scorers’.
The original instrument, when used in the Australian context, was found to be too
ambiguous in some aspects of the rating scale and linguistic interpretations, so modifications to
language were made to reflect more appropriately Australian students’ understanding of terms.
Additionally, the instrument has evolved from a ranking of 1 to 10, with no specific criteria
specified by the instructors, to its current format wherein 10 criteria are evaluated and each is
ranked from 0-5. Descriptors are clearly enunciated for each of the rankings (0 minimum to 5
maximum), allowing for a critical evaluation and the ability to give mid-range scores (e.g. 2.5)
for students who might feel that a single digit score does not adequately reflect their
interpretation of the criteria.
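To make the structure of the instrument concrete, a minimal sketch in Python follows. The criteria names and the 0-5 half-point scale come from the description above; the function and variable names are invented for illustration and are not the authors' actual implementation.

```python
# A sketch of the peer rating sheet described above (hypothetical names).

CRITERIA = [
    "Quantity of Work", "Quality of Work", "Communication Skills",
    "Initiative", "Efficiency", "Personal Relations",
    "Group Meeting Attendance", "Attitude and Enthusiasm",
    "Effort", "Dependability",
]

def validate_rating(score: float) -> float:
    """Each criterion is rated 0-5; mid-range scores such as 2.5 are allowed."""
    if not 0 <= score <= 5:
        raise ValueError(f"rating {score} is outside the 0-5 scale")
    if (score * 2) % 1 != 0:
        raise ValueError(f"rating {score} is not a half-point step")
    return score

def total_score(ratings: dict[str, float]) -> float:
    """Sum the 10 criterion ratings to a total out of 50."""
    return sum(validate_rating(ratings[c]) for c in CRITERIA)
```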
While the use of such an instrument to grade group projects demands more of the
staff administering and processing the peer assessment instrument, the ability of the technique to
accurately reflect student performance has been demonstrated in improved assessment processes.
Cheng and Warren (2000) use a very similar method of peer assessment. They report on the
variations it can produce in individual marks on a group project, and these variations appear
similar to the instrument under discussion.
Systemic benefits accrue to teaching staff through the use of peer evaluation as an assessment
tool. These benefits take the form of 'metacognitive' advancement when 'supplementary rather
than substitutional' peer assessment is practised. This form of peer
assessment encourages reflective praxis not only from students but also staff who begin to
'scrutinize and clarify assessment objectives and purposes criteria and marking scales' (Topping
1998, p. 256). Peer assessment is a useful technique for addressing equity and fairness issues for
students in group learning environments and it is an accurate mechanism for moderating grades
so that individual performance in groups can be reflected (Cheng and Warren 2000).
The use of peer assessment in educational assessment has sound pedagogical foundations
(Bacon, Stewart and Silver 1999; Dochy, Segers and Sluijsmans 1999; Topping 1998). The
benefits have been identified as the assistance of professional development, an increase in
responsibility and autonomy of students, and the development of collaborative attitudes between
staff and students (Calhoun, Tenhaken and Woolliscroft 1990). Falchikov (1986) emphasised
the outcome of peer assessment as being improved personal and interpersonal skills, and Henbest
and Fehrsen (1985) saw peer assessment as helping students to be reflective and cognisant of
their own weaknesses in learning. Authors such as Boud (1989), Moerkerke (1996) and Dochy,
Segers and Sluijsmans (1999) refer to the building of lifelong learning habits and the
encouragement of deeper learning rather than superficial learning.
Peer assessment has been used in diverse disciplines including medicine (Burnett and
Cavaye 1980; Paquet and Des Marchais 1998), biological sciences and biology (Falchikov 1986;
Stefani 1994), engineering (Boud and Holmes 1995; Oldsfield and MacAlpine 1995),
management (Bacon, Stewart and Silver 1999; Sivan 2000; Kwan and Leung 1996), leisure
studies (Wicks and Stribling 1991) and languages (Cheng and Warren 1997). Sivan (2000) used
hotel and tourism industry students, who were fully employed at the managerial level and had
experienced peer assessment in their workplace, as part of a sample group in her study on the
implementation of peer assessment in an action research approach to assessment.
In the industry-based discipline of hospitality and tourism education, there are two key
industry-driven imperatives for use of peer assessment: the prevalence of teams in hospitality
work environments and the related need to develop team or group management skills. It is
incumbent upon academics to recognise and accept the importance of these two key elements,
and to implement structured coursework that facilitates team activities. There is a need to build in some accountability for individual performance that reflects and simulates sanction/reward
systems in the workplace, usually in the form of performance appraisals, bonuses and incentive
schemes. Peer assessment is a reflective process that encourages deeper learning (Boud et al
1999; Cheng and Warren 2000; Dochy et al. 1999) about the role of individuals in a team
context, and it fosters important characteristics of professionalism, such as leadership qualities,
communication skills and organizational capabilities. Peer assessment praxis aims to make
social and communication skills (Topping 1998), including those of giving and accepting
criticism and praise, justifying one’s position and rejecting suggestions, transferable to
workplace environments where teamwork is essential. Further, practice in student peer
assessment skills could then be applied in subsequent employee evaluation mechanisms
(Marcoulides and Simkin 1991). Sivan (2000) suggests that the relevance of the method of peer
assessment to students' learning and future careers enhances its value to students and makes
them consider it a worthwhile exercise in which to participate.
Topping (1998) provides a typology of peer assessment in his review, classifying peer
assessment through tests, marks or grades, oral presentation skills, writing, group projects,
professional skills and computer assisted peer assessment. This paper concentrates on the use of
peer assessment within group projects where the primary purpose of the peer assessment is to
moderate group marks in order to give discrete marks to individuals. In this way individual
effort and input into the group project are formally accorded a value. This approach seeks to
address the issue of 'social loafing' in team exercises, which is an important motivation in
implementing peer assessment for group assignments.
The notions of 'social loafing' (Bacon, Stewart and Silver, 1999; Cheng and Warren 2000;
Ingham, Levinger, Graves and Peckham 1974; Latane, Williams and Harkins 1979), 'freeloading'
(Boud, Cohen and Sampson 1999), and 'free riding' (Webb 1995) all refer to the phenomenon
where a member of a group who contributes little or nothing to the group-work outcomes stands
to benefit from the work of that student's peers when there is no means of measuring actual
member performance.
This phenomenon, referred to as 'bludging' in Australian parlance, can cause both
dysfunction and acrimony in a group. Further, serial 'bludgers' often find that they are
shunned in self-selected groups, or ostracised in assigned groups. The incorporation of peer
moderation into group assessments often has the effect of making explicit the criteria for desired
behaviour, as well as putting individuals with a history of social loafing on notice. Mello (1993)
and Strong and Anderson (1990), among other writers, suggest implementing peer assessments as
a means of reducing this free riding. Bacon et al. (1999) observed that the behaviour continues
when the net benefit of free riding is greater than the net loss incurred from a poor evaluation or
no peer evaluation at all, whereas a peer evaluation that actually impacts on the grade is much
more likely to reduce this behaviour.
Cheng and Warren (2000) allude to the division of labour (which often characterises group
work) leaving some participants without a sense of completeness regarding the project. Bacon et
al. (1999) and Sivan (2000) question whether this sense of completeness is really essential, emphasising the importance
of using a relevant and appropriate peer assessment mechanism that addresses this problem.
Educators need to have a clear idea of what they are seeking from a group task and their
expectations of students’ management of the task. Further, they need to communicate these
expectations so that students are under no illusions as to how they should structure their groups
and manage the work (e.g. either breaking it up into smaller tasks and then bringing the
components together, or everyone working on every aspect of the project).
From a practical viewpoint, large classes involving experiential group work make
monitoring of individual student performance by teaching staff virtually impossible. This
increasingly common phenomenon has created a demand for an assessment-moderating tool that
accounts for group dynamics in real-life situations, where direct supervision by staff is not
feasible. Added to this is the complexity of large and small groups, sometimes occurring in a
single class, with this occasionally compounded by individual students having membership of
multiple groups. Some mechanism that engenders ownership of the learning experience and
embeds peer performance in the final mark gained by each student is essential if performance
accountability is to be made an important element in industry-based education.
The dilemma of wanting to foster collaborative behaviours, while implementing an
assessment regime that effectively pits individuals against each other, as peer assessment may be
cynically perceived to do by students, raises important pedagogical considerations as to its use
(Boud et al. 1999; Cheng and Warren 2000). The use of peer evaluations as a punitive tool is
cautioned against by Bacon et al. (1999) and Boud et al. (1999). Peers may attempt to punish a poor
performer by ‘burning’ them with a highly critical peer assessment. This is not necessarily the
best option, or an appropriate use of the mechanism and, in fact, instructors should give students
the option of 'firing or divorcing' a non-performing team member as a more effective way of
managing poor performers (Strong and Anderson 1990).
A corollary to the use of peer evaluations as a punitive tool is the reluctance of students to
peer-assess their colleagues because of issues about the confidentiality of their assessments.
Cheng and Warren (2000), MacKenzie (2000), Mallinger (1998) and Paquet and Des Marchais
(1998) all refer to the reluctance of students to evaluate each other for fear of offending other
group members, occasioned by a lack of confidentiality, difficulties in being objective, the 'social
embarrassment' and the 'cognitive challenge and strain of the exercise' (Topping 1998). How a
peer assessment is presented to students and how it is used is an important consideration for
educators planning to use it as a formal method of assessment. Additionally, students need to
know what percentage of marks is at stake and how their peer assessment is likely to impact an
individual's grade. Numerous authors (Bacon et al. 1999; Boud et al. 1999; Sivan 2000; Paquet
and Des Marchais 1998; Topping 1998) refer to the importance of providing some training on the use of the
peer assessment instrument prior to its implementation.
Another issue pertinent to the practice of peer assessment is the original composition of the
group. The use of self-selection versus random or teacher-assigned groups is also worthy of
consideration. Bacon et al. (1999) suggest that self-selected groups often have group-related
norms already established, which in turn promotes both greater productivity, as well as an ability
to take more ownership of group problems. Mello (1993) suggests that the ownership of group
problems also influences students to manage interpersonal conflict more successfully, although it
may also lead to a paucity of adequate skill sets in the group. Self-selection that facilitates
greater initial cohesion (Strong and Anderson 1990) has been linked to student group
performance by authors such as Gosenpud and Washbush (1991), Jaffe and Nebenzahl (1990)
and Wolfe and Box (1988). Problems with self-selection relate to the loss of the advantages of
diversity (Bacon, Stewart and Stewart-Belle 1998) and the attendant problems of
homogeneity (Jalajas and Sutton, 1984-1985). Overly homogenous groups tend to have not only
limited skill sets to draw upon, but also a proclivity to 'group-think' (Bacon et al 1999) and an
inability to handle both change and creativity.
In the context of this exploration, smaller groups of fewer than five tended to be self-selected
whereas larger groups tended to have a greater element of random selection. While promoting
diversity, this also led to more cautious assessments of peer capabilities and subsequently lower
grading of peers on various criteria used in the current study.
Peer Assessment Practice
The instrument described here measures aspects of student communication, contribution
and cooperation within their peer group. It does not measure any elements of content or output,
which remains the province of educators. Thus the peer assessment is 'supplementary rather than
substitutional' pedagogical practice as discussed earlier. This is clearly explained to students
when discussing group work at the beginning of the semester.
Students assess themselves and each other using 10 dimensions of performance (Figure 1).
Insert Figure 1 here
Each of these dimensions has six ranked descriptors (0-5) indicating performance from
poor to excellent. For each dimension students select a rating from the list, and then calculate a
total score for themselves and each of their peers out of 50. They divide this score by five to
generate an assumed average out of 10. From these a group mean and individual means are
calculated. Finally, each student's peer factor is generated by dividing the individual mean by the
group mean (Figure 2):
Peer factor = Individual Mean / Group Mean
Insert Figure 2 here
The output from this peer assessment technique is a number which the authors refer to as
the peer factor. This number typically has a value ranging from 0.9 to 1.1 in the more functional
groups and can range between 0.3 and 1.7 for demonstrably dysfunctional groups.
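The arithmetic just described can be summarised in a short sketch, assuming each student's received totals (out of 50) are held in a simple dictionary; the names are hypothetical.

```python
from statistics import mean

def peer_factors(received: dict[str, list[float]]) -> dict[str, float]:
    """Compute peer factors for one group.

    `received` maps each student to the total scores (out of 50) given to
    that student by self and peers. Each total is divided by 5 to give an
    assumed average out of 10; the peer factor is then the student's
    individual mean divided by the group mean.
    """
    individual = {s: mean(t / 5 for t in totals) for s, totals in received.items()}
    group_mean = mean(individual.values())
    return {s: m / group_mean for s, m in individual.items()}

# Illustrative three-member group:
factors = peer_factors({"A": [45, 42, 44], "B": [40, 41, 39], "C": [38, 37, 40]})
# A ~ 1.07, B ~ 0.98, C ~ 0.94 - clustered around 1.0, as in a functional group.
```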
The peer factor is used to moderate the mark gained for group assessment tasks and other
assessable activities such as presentations, the staging of student special events and project
planning activities. Each individual student will gain a mark for the assessable component of the
group's activities, which is derived by multiplying the mark assigned by teaching staff for the
group work by the peer factor. The technique assigns an average performance (based on the 10
criteria) a peer factor value of 1, and so any performances either above or below the average will
attract a peer factor correspondingly more or less than one. Peer factors have excellent fine-tuning effects, as they typically vary in increments of 1%. This means that the technique can
reflect peer evaluations for groups where the members only perceived slight differences in their
individual performances.
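A short worked example of the moderation step may help; the 60% staff-assigned mark and the factor values below are invented for illustration.

```python
# Hypothetical moderation of a staff-assigned group mark by peer factors.
group_mark = 60.0                             # mark given by teaching staff
factors = {"A": 1.07, "B": 0.98, "C": 0.94}   # from the sketch above

individual_marks = {s: round(group_mark * f, 1) for s, f in factors.items()}
# {'A': 64.2, 'B': 58.8, 'C': 56.4}: each 1% step in the peer factor
# shifts the final mark by 0.6 of a mark on a 60% group mark.
```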
Peer factors are then applied to the group-work components of the assessment regime as
shown in Figure 3.
Insert Figure 3 here
Methodology
A sample of peer assessment survey data was analysed using the data sets from 500 student
peer assessments. The sample population of students was undertaking the final year of their
hospitality management programs. Anecdotal evidence gathered from teaching staff had indicated
that peer assessment patterns varied in relation to group size, and that group dysfunctionality
increased with group size, so the data were divided into sets for student groups with five or more
members, and student groups with fewer than five members to see if variations could be
demonstrated.
Figure 4 shows a typical peer assessment survey form after completion by the student.
Insert Figure 4 here
Student scoring for each dimension of peer performance was recorded and a simple
statistical analysis was performed to assess frequency distributions of ratings within each
performance dimension. Modes, means and standard deviations were calculated. The data were
also used to assess how students effectively grade their performance in terms of Pass (50-64%),
Credit (65-74%), Distinction (75-84%) and High Distinction (85%+) bands.
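A sketch of this analysis follows, assuming the ratings are held as simple lists per performance dimension and per group-size category; the function names and storage format are assumptions.

```python
from statistics import mean, mode, stdev

def summarise(ratings: list[float]) -> dict[str, float]:
    """Mode, mean and standard deviation for one performance dimension."""
    return {"mode": mode(ratings), "mean": mean(ratings), "sd": stdev(ratings)}

def grade_band(percent: float) -> str:
    """Map an effective percentage to the grade bands used in the analysis."""
    if percent >= 85: return "High Distinction"
    if percent >= 75: return "Distinction"
    if percent >= 65: return "Credit"
    if percent >= 50: return "Pass"
    return "Below Pass"

# e.g. call summarise() per criterion for the >= 5-member and < 5-member
# subsets separately, as reported in Table 1.
```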
Results and Discussion
Groups with five or more members exhibit a greater spread of scoring than those with fewer
than five members, and the smaller groups tend to score each other more highly than do the larger
groups. It may be hypothesised that this is because smaller groups tend to get better acquainted
with each other, develop closer relationships, have greater reliance on each other and are prone
to more critical examination of their self and peer roles.
As noted earlier, the survey instrument gives students a rating scale with six graded options
for assessing each of the performance criteria. The data analysis showed that some students
scored every performance criterion except communication skills fractionally, apparently because
the criteria descriptors were found to be too coarse for them to make a definitive choice. In other
words, the descriptors for a rating of 2 or 3 on a behavioural dimension did not accurately reflect
the scorer’s perception about their peer’s performance. In these cases (20 out of 500 data sets)
they would indicate a ranking between, for example 2 and 3 by entering 2.5 into the cell,
indicating that the 2 and 3 descriptors did not adequately describe their assessment and that a
ranking between these two ordinals was where they would prefer to place their score.
There were also a number of groups (one with 10 members) who had apparently arrived at
an agreement about the role of peer assessment in the assessment regime because each student
had scored each other exactly the same for every performance criterion. This effectively nullifies
the peer assessment, the scoring process producing a unitary peer factor of 1 for
every student. The students' scoring forms all had additional commentary to the effect that 'we
all worked well together and were pleased with the end result and did not experience problems
with group dynamics'. This was an indication that the entire group was happy to receive the same
mark (teacher assigned) for their assessment task. However, in some instances it could also be an
indicator of collusion within highly effective or highly dysfunctional groups.
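Uniform scoring of this kind is straightforward to detect mechanically, since identical totals for every member force every peer factor to 1; a minimal check, reusing the input format of the earlier sketch:

```python
def uniform_scoring(received: dict[str, list[float]]) -> bool:
    """True when every student received identical totals, which nullifies
    the peer assessment (all peer factors collapse to 1)."""
    all_totals = {t for totals in received.values() for t in totals}
    return len(all_totals) == 1
```

The flag alone cannot distinguish genuine harmony from collusion; as noted above, that judgement remains with the instructor.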
Pond, Ui-Hao and Wade (1995) refer to the various forms of collusion that can take place.
They identified 'friendship' scoring which resulted in grossly exaggerated scores, 'collusive'
scoring where differentiation of scores was impossible due to prearranged group scores, 'decibel'
scoring where the dominant individuals tended to get the highest scores and 'parasite' scoring
where students failed to contribute to the group work but benefited from the scoring.
All forms of scoring described above have occurred at one time or another in the
administration of the instrument used in this study. However, the effects of friendship and
collusive scoring are easy to detect in this instrument, as they result in a peer factor of one which
then ensures that everyone in the group receives the same mark. Parasite scorers on the other
hand tend to get ‘burned’ by others in the group and as a result are penalised both in the
numerical and qualitative sections of the instrument.
Additionally, aggregating the scores tends to balance out the 'rogue scorers' who may self-inflate their own scores or attempt to 'burn' others. Other anecdotal evidence suggests that
occasionally, in cases of extreme personality conflict in a group, 'factions' are formed and
collusive scoring may result. These results are harder to work with and usually result in educator
intervention in the form of group counselling and/or overriding of the peer assessment in
exceptional circumstances. (One instance in the authors' 10 years of experience with this
instrument involved a student who repeatedly failed a subject on peer assessment alone, being
incapable of changing her approach to group dynamics and team management despite coaching
and counselling.)
Insert Table 1 here
Table 1 shows additional details about the analysis. The table shows modal ratings for the
two categories of groups, the average rating and the standard deviation (SD) of scores within the
two categories (groups with ≥5 and groups with <5 members). The analysis indicates that for
larger groups, performances were perceived to be of a lower standard than when students worked
in smaller groups. (Many of the same students were in both large and small groups across a
number of subjects). There is a remarkable consistency in the responses to all performance
criteria, indicating a tendency for students to score each other at the same rating regardless of the
criteria being considered, perhaps because they were unable to differentiate clearly enough
between the criteria. In all cases, the effect of working in a small group seems to have produced a
single rating improvement in scoring of student performance. It needs to be noted that groups
with five or more students tended to be more cautious and were less prone to awarding higher
ratings to their peers. This may also have been a function of the lack of familiarity with the
individuals in the group because of group size.
These results raise interesting questions. Group size appears to have a potentially negative
impact on all of the performance criteria. This certainly accords with educator experience that
larger groups are often a less effective context in which to situate student learning and that, in
most instances, large groups display frank dysfunctionality. However, Bacon et al. (1999) found
no relationship between team size and best or worst team experiences or team processes in their
study of MBA students. They recommend that team size should be guided by pedagogical
objectives rather than convenience or arbitrariness.
To take some specific examples, the analysis shows that Dependability, and Attitude and
Enthusiasm, are likely to be scored higher in small groups, but intuitive judgement about these
characteristics would hypothesise that they should remain the same regardless of the size of the
group because they are intrinsic to the student. Perhaps students have greater difficulty assessing
these aspects of their peers in larger groups and hence are more cautious in their rating of each
other.
Quantity of Work and Effort could quite logically have a direct relationship with group
size as members of smaller groups have a greater opportunity to engage with the workload and to
demonstrate these qualities to their peers.
An interesting extension of the calculations involved in using this technique is to see how
the students 'rate' their overall performance as a group or team when converted to a graded
system from Pass Minus (50%) to High Distinction (>85%).
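The conversion is simply the mean rating expressed as a fraction of the 5-point maximum. For example, with across-criteria mean ratings of roughly 3.25 for the larger groups and 4.1 for the smaller groups (figures consistent with the means in Table 1):

\[
\frac{3.25}{5} \times 100 = 65\%, \qquad \frac{4.1}{5} \times 100 = 82\%.
\]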
The results show that groups with five or more members rate their performances as group
members at a Credit Minus level (65%) and the smaller groups at Distinction level (82%). This
might help to explain why students frequently complain about large group work activities and
especially the managing of larger groups. The group size appears to have a negative effect on
students’ capacity to perform at their desired level. It may, however, be hypothesised that smaller
groups tend to 'gloss over' or overestimate their own abilities and efforts, as opposed to larger
groups, where it is easier to see the gaps left (in the presentation, report or assignment) by a less
involved or committed person. There is also, perhaps, an element of taking things less
personally and being more critical when one works in larger groups. Alternatively, it could be
the cohesion factor, where small groups tend to be more united in their activities than larger
groups, where factionalism and cliques are more likely to form.
Conclusion
Peer assessment adds value to the learning experience for both student and lecturer. A
deeper learning results whereby students situate their learning experience in the realities of
working in teams and develop an appreciation of group dynamics, management and leadership
qualities.
Control and responsibility are delegated to student groups but there is no abrogation of the
assessment role for teaching staff when an instrument and process like that described here is
used. Less sophisticated mechanisms have pitfalls and raise issues for the students who have to
use them, but this may be outweighed by the simplicity and ease of administration of these
techniques.
In the context of this study it would be judicious to employ some of the recommendations
of Bacon et al. (1999), which included providing teams with adequate descriptions of outcomes
and processes, as desired by the teaching staff and their approach to the subject. The suggestion to
prevent group-think by using a semi-structured group selection tool, which determines student
demographics and then seeks a best-fit solution, is ideal and appropriate, but often
quite impossible due to time and resource constraints. The size of student work groups is an issue
that needs further attention: group size needs to be determined by pedagogical objectives rather
than bureaucratic expediency.
Peer assessment is a useful performance-monitoring tool that reflects measures similar to
those applied in hospitality work environments. This tool allows educators to expose student
groups to the realities of industry-based performance measures in the relatively safe context of a
study environment. The influence of group size, as in the workplace, raises issues for those who
wish to effectively manage the completion of tasks, and generate functionally harmonious
working environments. In hospitality management education, peer assessment techniques should
attempt to reflect these challenges in as realistic a way as possible, despite the fact that work-related sanctions and rewards are not available to teaching staff as a means of punishment and
reward for effort in team activities.
References
Adams, C. and King, K. 1994, 'Towards a framework for student self-assessment', Innovations in
Education and Training International, vol. 32, no. 4, pp. 336-343.
Adams, C., Thomas, R. and King, K. 2000, 'Business students' ranking of reasons for
assessment: Gender differences', Innovations in Education and Training International,
vol. 37, no. 3, pp. 234+.
Bacon, D.R., Stewart, K.A. and Stewart-Belle, S. 1998, 'Exploring predictors of student team
project performance', Journal of Marketing Education, vol. 20, no. 1, pp. 63-71.
Bacon, D.R., Stewart, K.A. and Silver, W.S. 1999, 'Lessons from the best and worst student team
experiences: How a teacher can make the difference', Journal of Management Education,
vol. 23, no. 5, pp. 467-488.
Boud, D. 1981, Developing greater student autonomy in student learning, Kogan Page, London.
Boud, D. 1986, Implementing Student Self Assessment, HERDSA, Green Guide no. 5, Sydney.
Boud, D. 1989, 'The role of self assessment in student grading', Assessment and Evaluation in
Higher Education, vol. 14, pp. 20-30.
Boud, D., Cohen, R. and Sampson, J. 1999, 'Peer learning and assessment', Assessment and
Evaluation in Higher Education, vol. 24, no. 4, pp. 413-426.
Boud, D. and Holmes, H. 1995, 'Peer and self marking in a large technical subject', in D. Boud
(ed.), Enhancing Learning through Self Assessment, Kogan Page, London, pp. 63-78.
Burnett, W. and Cavaye, G. 1980, 'Peer assessment by fifth year students of surgery', Assessment
in Higher Education, vol. 5, no. 3, pp. 273-278.
Calhoun, J., Ten Haken, J. and Woolliscroft, J. 1990, 'Medical students' development of self and
peer assessment skills: a longitudinal study', Teaching and Learning in Medicine, vol. 2,
pp. 25-29.
Cheng, W. and Warren, M. 1997, 'Having second thoughts: student perceptions before and after
a peer assessment exercise', Studies in Higher Education, vol. 22, no. 2, pp. 233-239.
Cheng, W. and Warren, M. 2000, 'Making a difference: using peers to assess individual students'
contributions to a group project', Teaching in Higher Education, vol. 5, no. 2, pp. 243-255.
Dochy, F., Segers, M. and Sluijsmans, D. 1999, 'The use of self-, peer and co-assessment in
higher education: A review', Studies in Higher Education, vol. 24, no. 3, pp. 331-350.
Falchikov, N. 1986, 'Product comparisons and process benefits of collaborative peer group and
self assessment', Assessment and Evaluation in Higher Education, vol. 11, pp. 146-166.
Gatfield, T. 1998, 'Group project and peer assessment evaluation study: An investigation into
Australian and international student perceptions', Assessment and Evaluation in Higher
Education, vol. 24, no. 4, pp. 365-377.
Gibbs, G. 1989, Module 3 Assessment, Certificate in Teaching in Higher Education by Open
Learning, The Oxford Centre for Staff Development, Oxford.
Gibbs, G. and Jenkins, A. 1992, Teaching large classes in Higher Education: How to maintain
quality with reduced resources, Kogan Page, London.
Gopinath, C. 1999, 'Alternatives to instructor assessment of class participation', Journal of
Education for Business, vol. 75, no. 1, pp. 10-14.
Gosenpud, J.J. and Washbush, J.B. 1991, 'Predicting simulation performance: differences
between individuals and groups' in W.J. Wheatley and J. Gosenpud (eds.) Developments in
business simulations and experiential exercises, pp. 44-48.
Henbest, R. and Fehrsen, M. 1985, 'Preliminary study at the Medical University of Southern
Africa on student self assessment as a means of evaluation', Journal of Medical Education,
vol. 60, pp. 66-67.
Hughes, I. and Large, B. 1993, 'Assessment of students' oral communication skills by staff and
peer groups', The New Academic, Summer.
Ingham, A.G., Levinger, G., Graves, J. and Peckham, V. 1974, 'The Ringelmann effect: Studies
of group size and group performance', Journal of Experimental Social Psychology, vol. 10,
pp. 371-384.
Jaffe, E.D. and Nebenzahl, I.D. 1990, 'Group interaction and business game performance',
Simulation and Gaming, vol. 21, no. 2, pp. 133-146.
Jalajas, D.S. and Sutton, R.I. 1984-1985, 'Feuds in student groups: Coping with whiners,
martyrs, saboteurs, bullies and deadbeats', Organizational Behavior Teaching Review, vol. 9,
no. 4, pp. 217-227.
Kwan, K. and Leung, R. 1996, 'Tutor versus peer group assessment of student performance in a
simulation training exercise', Assessment and Evaluation in Higher Education, vol. 21, no.
3, pp. 205-214.
Latane, B., Williams, K. and Harkins, S. 1979, 'Many hands make light the work: The causes and
consequences of social loafing', Journal of Personality and Social Psychology, vol. 37, no. 6,
pp. 822-832.
MacKenzie, L. 2000, 'Occupational therapy students as peer assessors in viva examinations',
Assessment and Evaluation in Higher Education, vol. 25, no. 2, pp. 135-147.
Mallinger, M. 1998, 'Maintaining control in the classroom by giving up control', Journal of
Management Education, vol. 22, no. 4, pp. 427-483.
Marcoulides, G. and Simkin, M.G. 1991, 'Evaluating student papers: The case for peer review',
Journal of Education for Business, vol. 67, pp. 80-83.
Mello, J.A. 1993, 'Improving individual member accountability in small work group settings',
Journal of Management Education, vol. 17, no. 2, pp. 253-259.
Moerkerke, G. 1996, Assessment for Flexible Learning, Lemma, Utrecht.
Oldsfield, K. and MacAlpine, J. 1995, 'Peer and self assessment at tertiary level: an experiential
report', Assessment and Evaluation in Higher Education, vol. 20, no. 1, pp. 125-132.
Paquet, M.R. and Des Marchais, J.E. 1998, 'Students' acceptance of peer assessment', Education
for Health, vol. 11, no. 1, pp. 25+.
Pond, K., Ui-Hao, R. and Wade, W. 1995, 'Peer review: a precursor to peer assessment',
Innovations in Education and Training International, vol. 32, pp. 314-323.
Sivan, A. 2000, 'The implementation of peer assessment: An action research approach',
Assessment in Education, vol. 7, no. 2, pp. 193-213.
Stefani, L.A.J. 1994, 'Peer, self and tutor assessment: relative reliabilities', Assessment and
Evaluation in Higher Education, vol. 19, no. 1, pp. 69-75.
Strong, J.T. and Anderson, R.E. 1990, 'Free riding in group projects: Control mechanisms and
preliminary data', Journal of Marketing Education, vol. 12, pp. 61-67.
Topping, K. 1998, 'Peer assessment between students in colleges and universities', Review of
Educational Research, vol. 68, no. 3, pp. 249-276.
Webb, N.M. 1995, 'Group collaboration in assessment: multiple objectives, processes and
outcomes', Educational Evaluation and Policy Analysis, vol. 17, no. 2, pp. 239-261.
Wicks, B. and Stribling, J. 1991, 'The use of peer reviews for evaluation of individual student
performance in group projects', SCHOLE: A Journal of Leisure Studies and Recreation
Education, vol. 6, pp. 46-56.
Winger-Haunty, S. 1990, Peer Assessment Instrument, University of Wisconsin-Stout,
Menomonie.
Wolfe, J. and Box, T.M. 1988, 'Team cohesion effects on business game performance',
Simulation and Games, vol. 19, no. 1, pp. 82-98.
Table 1
Comparison of Scoring Between Large and Small Groups

| Criteria | Groups with 5 or More Members: Modal Rating | M | SD | Groups with Fewer Than 5 Members: Modal Rating | M | SD |
|---|---|---|---|---|---|---|
| Quantity of Work | 3 - Satisfactory. Does more than what is required | 3.2 | 0.96 | 4 - Very industrious. High quality. Consistent | 4.1 | 0.75 |
| Quality of Work | 3 - Accurate when and where it really counts. Satisfactory | 3.1 | 0.99 | 4 - Almost always accurate in all areas of contribution | 4.0 | 0.69 |
| Communication Skills | 3 - Always very polite and pleasant. Excellent at establishing good will | 3.1 | 1.08 | 4 - Courteous and very willing to help. Very sociable and outgoing. Listens and understands | 4.1 | 0.92 |
| Initiative | 3 - Strives hard. Desire to achieve | 3.1 | 1.06 | 4 - High desire to achieve. Always puts in a solid day's work | 3.7 | 0.78 |
| Efficiency | 3 - Work always complete on schedule | 2.9 | 1.14 | 4 - Work complete. Consistent in defining and resolving major problems | 4.0 | 0.89 |
| Personal Relations | 3 - Satisfactory, harmonious | 3.4 | 1.08 | 5 - Respected by others. Presence adds to environmental stability | 4.5 | 0.69 |
| Group Meeting Attendance | 4 - Could be counted on to attend | 3.9 | 1.02 | 5 - Never missed a meeting. Always on time | 4.7 | 0.66 |
| Attitude and Enthusiasm | 3 - Positive demeanor | 3.4 | 0.89 | 4 - Positive attitude and spirited | 4.0 | 0.68 |
| Effort | 3 - Solid contributions | 3.1 | 1.02 | 4 - Strives very hard. Energetic | 3.9 | 0.89 |
| Dependability | 4 - Very trustworthy. Could be counted on to take responsibility | 3.5 | 0.99 | 4 - Very trustworthy. Could be counted on to take responsibility | 4.3 | 0.68 |
Figure Captions
Figure 1 Peer Assessment Proforma
Figure 2 Calculation of Peer Factor from Scores
Figure 3 Using Peer Factor to Adjust Group Marks
Figure 4 Sample Peer Form After Completion by Student