Manipulation and Peer Mechanisms: A Survey
Matthew Olckers
Macquarie University and the e61 Institute
Toby Walsh
School of Computer Science and Engineering, UNSW Sydney
Abstract
In peer mechanisms, the competitors for a prize also determine who wins. Each competitor may
be asked to rank, grade, or nominate peers for the prize. Since the prize can be valuable, such
as financial aid, course grades, or an award at a conference, competitors may be tempted to
manipulate the mechanism. We survey approaches to prevent or discourage the manipulation of
peer mechanisms. We conclude our survey by identifying several important research challenges.
Keywords: peer mechanism, peer ranking, peer review, peer grading, community-based
targeting
1. Introduction
Imagine that you are competing for a prize at your workplace. The prize is awarded by asking
everyone at your work, including you, to nominate who deserves the prize. The person with the
most nominations wins. Who should you nominate? If you tell the truth and nominate who you
think is most deserving, your nomination could cause you to lose out. Would you be truthful?
Do you think your colleagues would be truthful?
As this example illustrates, when the competitors for a prize also determine who wins, the
competitors may be tempted to manipulate the outcome. A growing research literature, which we
collect under the term “peer mechanisms”, aims to prevent or discourage manipulation in these
situations. This paper surveys both the theoretical and empirical research on peer mechanisms
and offers several research challenges.
The interest in peer mechanisms has been fueled by the variety of high-stakes applica-
tions. The prize can take many forms. Closest to the experience of researchers, the prize could
be an award at a conference. Further afield, the prize could be grades in a course (Topping,
1998), a time slot to use a telescope (Merrifield and Saari, 2009), aid targeted to people in
need (Conning and Kevane, 2002; Alatas et al., 2012), loans for entrepreneurs (Hussam et al.,
2022), a job for freelancers (Kotturi et al., 2020), an award for the best soccer player of the year
(Caragiannis et al., 2019), or even the papacy of the Catholic Church (Mackenzie, 2020).
The variety of applications has inspired a variety of models. Some models, known as peer se-
lection, assume the mechanism designer wishes to select a single participant or a limited number
of participants for the prize. Other models, known as peer grading, assume each partici-
pant should receive a cardinal score. Besides the mechanism’s output, the models can differ in
many other dimensions, such as what the participants are asked to report, the type of information
participants hold about their peers, and whether the mechanism designer can make payments.
We provide a taxonomy to highlight the differences in each model introduced in the liter-
ature, and to identify some common themes. The approaches to prevent manipulation in peer
mechanisms can be grouped into one of three categories:
1. impartial mechanisms where a participant’s report cannot impact their chance of winning
the prize,
2. audits where the mechanism attempts to detect and punish manipulation,
3. rewards for truthful reports.
Although these three approaches are distinct, they are not mutually exclusive. The approaches
can be combined. The context will determine which approach or combination of approaches is
most suitable.
The bulk of the theoretical research has focused on impartial mechanisms. We delve deeper
into these results by focusing on the model of peer selection, where peers nominate each other for
a single prize or several identical prizes. Researchers have taken two main approaches: checking
if impartiality is compatible with desirable axioms and checking how closely impartial mecha-
nisms can approximate the most desirable outcome (such as awarding the prize to the participants
with the most nominations).
The axiomatic and approximation results highlight how flexibility in the number of prizes
is crucial for designing impartial peer mechanisms with desirable properties. If the mechanism
must always award a fixed number of prizes, discouraging manipulation can lead to undesirable
outcomes, such as awarding the prize to a participant who does not receive any nominations.
If the mechanism can award more prizes than planned or choose to leave the prize unassigned,
manipulation can be discouraged while avoiding many of the undesirable outcomes that stem
from a fixed number of prizes. The theoretical results also highlight the importance of random-
ization. In most cases, mechanisms that use randomization provide better approximation results
than mechanisms that use a deterministic rule.
Manipulation in peer mechanisms is not merely a theoretical concern. We survey empir-
ical research that shows that people will try to manipulate peer mechanisms when given the
chance. Employees will reduce the peer grades of coworkers when competing for promotion
(Huang et al., 2019), and entrepreneurs bias peer reports in favor of friends and family when
competing for loans (Hussam et al., 2022). Manipulation has also been shown in the lab. Whether
experimental participants produce art (Balietti et al., 2016) or label envelopes (Carpenter et al.,
2010), they are happy to sabotage their peers to increase their own chance of winning a prize.
The empirical research provides several lessons for the theoretical study of peer mechanisms.
First, participants often make errors in their peer evaluations. Mechanisms that are robust to er-
rors are more likely to succeed in practice. Second, nepotism is common. Most models assume
that participants care only about their own chance of winning a prize and not whether other per-
haps related participants win a prize. Third, small amounts of manipulation may be acceptable.
Rather than aiming to disincentivize manipulation from all participants, mechanisms could allow
some manipulation and still yield good outcomes. Fourth, choosing the optimal manipulation
can be difficult for participants. Complexity may be a useful tool to discourage manipulation.
Models of peer mechanisms differ from both the classic approach to mechanism design and
the classic approach to social choice. The classic approach to mechanism design is to elicit infor-
mation from individuals about themselves, such as the amount an individual would be willing to
pay for an item at an auction. In peer mechanisms, individuals hold preferences or information
about other participants (their peers). The classic approach to social choice is to aggregate the
preferences of voters about a set of candidates. The voters and candidates are distinct. In peer
mechanisms, the participants are both voters and candidates.
We focus this survey on preventing manipulation in peer mechanisms. We do not include
research that focuses on how to aggregate nominations, rankings, or grades. Examples of this
line of research include Caragiannis et al. (2016, 2020), which considers how counting methods
can aggregate partial rankings, and Wang and Shah (2019), which considers how to aggregate
grades from individuals that have different standards and ranges. We do not include a recent line
of research that studies how to incentivize participants to invite their peers to participate in a
mechanism (Zhao, 2021). We restrict our focus to settings where all participants are aware of the
mechanism and the prize. We do not include research on peer grading that designs mechanisms
to encourage peer graders to exert effort when grading (see Zarkoob et al. (2023) for a recent
example of this line of work). Our focus is on peer grading mechanisms that prevent graders
from improving their own grades or rankings through manipulation.
We include peer prediction mechanisms that have been adapted to evaluating information
about people, such as a person’s need for financial aid or their entrepreneurial ability. Typically,
peer prediction is used for reports about external objects, such as the quality of a product or
the forecast of an event. Not to be confused with the “peer” in peer mechanisms, the “peer”
in peer prediction refers to the way these mechanisms use reports from multiple participants to
incentivize truthful reports without access to ground truth to check the reports. Peer prediction
mechanisms make payments to participants that depend on the participant’s report and the reports
of other participants who evaluate the same target object. We are interested in cases when the
target object is information about another participant. See Faltings and Radanovic (2017) for a
survey of peer prediction mechanisms.
Our survey has some overlap with a recent survey of academic peer review by Shah (2022),
but our surveys make distinct contributions. Rather than focusing on a single application (aca-
demic peer review in the case of Shah (2022)), we focus on manipulation in peer mechanisms,
which extends to several other applications, such as poverty targeting and peer grading of student
assignments. Shah’s (2022) survey covers some work on manipulation but does not go into the
same detail as we do. Also, the models we discuss are often different. The models of peer mech-
anisms we discuss only correspond to models of academic peer review when each author submits
one sole-authored paper and is also available as a reviewer. Authors often submit multiple pa-
pers, each paper can have multiple authors, authors may not act as reviewers, and reviewers may
not submit papers.
We begin the survey with a motivating example to describe why many peer mechanisms
create an opportunity for the participants to manipulate who wins the prize. We then provide
a taxonomy of approaches to prevent manipulation and discuss the range of techniques that re-
searchers have proposed. To highlight the two main theoretical approaches of axiomatic and ap-
proximation analysis, we focus on the model of peer selection. We survey the empirical studies
of peer mechanisms and list key lessons the empirical research provides for theory. We conclude
the survey by highlighting several areas in need of further research.
2. Motivating Example
Suppose a group of people compete for a prize by participating in a peer mechanism. The
mechanism determines who wins the prize by asking each participant to nominate one or more
peers and awarding the prize to the participant who receives the most nominations. Assume there
is only one prize, which cannot be split between multiple participants. In the case of a tie, the
prize is awarded uniformly at random between those with the most nominations.
One temptation is for each participant to nominate themselves, so we may want the mecha-
nism to exclude self-nominations.1 But even when the participants cannot nominate themselves,
they may still have opportunities to manipulate who wins.
A simple example with three participants (a, b, and c) demonstrates that they can still manip-
ulate who wins the prize. Suppose that:
• a nominates b and c,
• b nominates c, and
• c nominates b.
Since both b and c have two nominations each, the mechanism awards the prize randomly to b
or c with equal probability. Let’s consider c’s perspective. All else equal, c can ensure he wins
the prize by nominating no one or nominating a. Even if c believes that b is worthy of the prize,
c has a strong incentive to manipulate the outcome to increase his chance of winning.
Whether the participants are asked to nominate, rank, or grade their peers, the same tempta-
tion remains. To increase their own chance of winning a prize, participants in a peer mechanism
are tempted to manipulate their evaluation of their closest competitors. In the example, c’s clos-
est competitor is b. By failing to nominate b, c increases his chance of winning the prize at b’s
expense.
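The temptation is easy to verify computationally. Below is a minimal sketch in Python of the nomination mechanism from this example; the dictionary encoding of nominations and the uniform tie-breaking are our own assumptions.

```python
import random
from collections import Counter

def nomination_winner(nominations, rng=random):
    """Award the prize to the most-nominated participant; ties are broken
    uniformly at random. nominations maps each participant to the set of
    peers they nominate."""
    counts = Counter(peer for noms in nominations.values() for peer in noms)
    top = max(counts.values())
    return rng.choice(sorted(p for p, c in counts.items() if c == top))

# Truthful reports: b and c receive two nominations each, so each wins
# with probability 1/2.
truthful = {"a": {"b", "c"}, "b": {"c"}, "c": {"b"}}

# c's manipulation: withhold the nomination of b (or nominate a instead).
# c is then the unique most-nominated participant and wins for certain.
manipulated = {"a": {"b", "c"}, "b": {"c"}, "c": set()}

print(nomination_winner(truthful))     # b or c, each with probability 1/2
print(nomination_winner(manipulated))  # always c
```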
3. A Taxonomy of Models
We surveyed the literature for peer mechanisms that address manipulation. Since peer mech-
anisms are inspired by a variety of applications, there are a variety of different models to describe
these applications. In Table 1, we provide a taxonomy of the models we uncovered. We distin-
guish between different models according to their:
Input: What do the participants need to report about their peers?
Output: What output does the mechanism produce?
Information: What type of information do participants hold about their peers?
Table 1 uses the following notation. We use m for the number of peers each participant is
asked to evaluate. In most cases, m is small relative to the number of participants. We use k for
the number of winners when the mechanism selects multiple winners.
The table also includes columns to categorize the mechanisms according to:
1 As Ng and Sun (2003) and Ohseto (2012) have shown, excluding self-evaluations can create problems in aggregating
peer grades. Suppose participants a and b unanimously grade a higher than b, but a uses more generous grades than b.
Excluding self-evaluations will discard a's generous grade about himself but keep the generous grade he gives to b,
which may cause b to have a higher aggregated grade than a. Ng and Sun (2003) provide theoretical results highlighting
an incompatibility between unanimity and excluding self-evaluations. Ohseto (2012) strengthens Ng and Sun's (2003)
results to show that if participants can select grades from a finite and large set of real numbers, no aggregation rule
that excludes self-evaluations satisfies monotonicity and unanimity.
Table 1: Taxonomy of peer mechanisms

Paper | Input | Output | Information | Approach | Technique
Holzman and Moulin (2013) | Nominate one peer | Single winner | Subjective | Impartial | Partition
Mackenzie (2015) | Nominate one peer | Single winner | Subjective | Impartial | Random dictatorship
Babichenko et al. (2020b) | Nominate one peer | Single winner | Subjective | Impartial | Expand possible winners
Edelman and Por (2021) | Nominate one peer | Single winner | Subjective | Impartial | Random dictatorship
Cembrano et al. (2023b) | Nominate one peer | Single winner | Subjective | Impartial | Permutation
Amorós (2011) | Nominate one peer | Single winner | Common | Impartial | Fix position
Mackenzie (2020) | Nominate one peer | At most one winner | Subjective | Impartial | Threshold
Bjelde et al. (2017) | Nominate one peer | At most two winners | Subjective | Impartial | Permutation
Tamura and Ohseto (2014); Tamura (2016) | Nominate one peer | Top k winners | Subjective | Impartial | Expand possible winners
Fischer and Klimm (2014, 2015) | Nominate one or more peers | Single winner | Subjective | Impartial | Permutation
Bousquet et al. (2014) | Nominate one or more peers | Single winner | Subjective | Impartial | Permutation
Babichenko et al. (2018) | Nominate one or more peers | Single winner | Subjective | Impartial | Expand possible winners
Caragiannis et al. (2019) | Nominate one or more peers | Single winner | Subjective | Impartial | Jury
Zhang et al. (2021) | Nominate one or more peers | Single winner | Subjective | Impartial | Expand possible winners
Caragiannis et al. (2021) | Nominate one or more peers | Single winner | Subjective | Impartial | Threshold
Cembrano et al. (2022a) | Nominate one or more peers | Single winner | Subjective | Impartial | Threshold
Ito et al. (2018) | Nominate one or more peers | Single winner | Subjective | Reward | Peer prediction
Zhao et al. (2023) | Nominate one or more peers | Single winner | Subjective | Impartial | Expand possible winners
Alon et al. (2011) | Nominate one or more peers | Top k winners | Subjective | Impartial | Partition
Cembrano et al. (2022b) | Nominate one or more peers | Top k winners | Subjective | Impartial | Expand possible winners
Wąs et al. (2019) | Nominate one or more peers | Grade | Subjective | Impartial | Fix grade
Li et al. (2018) | Nominate top participant | Ranking | Common | Impartial | Fix position
Bao et al. (2021) | Nominate network neighbor | Single loser | Ground truth | Audit | Compare to nominee
Table 1 continued: Taxonomy of peer mechanisms

Paper | Input | Output | Information | Approach | Technique
Mattei et al. (2020); Lev et al. (2021, 2023) | Rank m peers | Top k winners | Common | Impartial | Threshold
Merrifield and Saari (2009) | Rank m peers | Ranking | Common | Reward | Reward consensus
Xu et al. (2019) | Rank m peers | Ranking | Subjective | Impartial | Partition
Stelmakh et al. (2021) | Rank m peers | Ranking | Ground truth | Audit | Target manipulation
Bloch and Olckers (2021) | Rank network neighbors | Top k winners | Ground truth | Impartial | Threshold
Bloch and Olckers (2022) | Rank network neighbors | Ranking | Common | Impartial | Fix position
Amorós et al. (2002); Amorós (2023) | Rank all peers | Ranking | Common | Impartial | Fix position
Kahng et al. (2018) | Rank all peers | Ranking | Subjective | Impartial | Partition
Alcalde-Unzu et al. (2022) | Rank all peers | Ranking | Subjective | Impartial | Partition
Cembrano et al. (2023a) | Rank all peers | Ranking | Subjective | Impartial | Fix position
Hussam et al. (2022) | Rank or grade m peers | Single winner | Ground truth | Reward | Peer prediction
Rai (2002) | Binary type | Grades | Ground truth | Audit | Target disagreement
De Clippel et al. (2008) | Relative grade | Cardinal share | Subjective | Impartial | Reduce total reward
Kurokawa et al. (2015) | Grade m peers | At most k winners | Subjective | Impartial | Expand possible winners
Aziz et al. (2016, 2019c) | Grade m peers | Top k winners | Subjective | Impartial | Partition
Dhull et al. (2022) | Grade m peers | Top k winners | Subjective | Impartial | Partition
Wang et al. (2018) | Grade m peers | Top k winners | Common | Impartial | Partition
Chakraborty et al. (2024) | Grade m peers | Grades | Ground truth | Audit | Assign audited peers
De Alfaro and Shavlovsky (2014) | Grade m peers | Grades | Ground truth | Reward | Reward consensus
Baumann (2023) | Grade network neighbors | Single winner | Ground truth | Audit | Limit misreports
Babichenko et al. (2020a) | Grade network neighbors | Top k winners | Subjective | Impartial | Expand possible winners
Cembrano et al. (2023a) | Grade all peers | Top k winners | Subjective | Impartial | Partition
Walsh (2014) | Grade all peers | Grades | Ground truth | Reward | Reward consensus
Niemeyer and Preusser (2022) | Abstract message space | Single winner | Subjective | Impartial | Jury
Approach: Does the mechanism use audits, rewards, or impartiality to discourage ma-
nipulation?
Technique: What technique does the mechanism use?
The taxonomy is useful for several reasons. First, the contributions to peer mechanisms are
spread across computer science and economics, and the taxonomy shows which contributions
are most closely connected. Second, the taxonomy shows gaps in the literature. For example,
the “Approach” column of Table 1 shows that most of the existing work focuses on constructing
impartial mechanisms. Less work has focused on using audits or rewards to discourage manipu-
lation.
3.1. Inputs
The input into a peer mechanism is the reports from the participants about their peers. Models
differ by the detail of the reports and which peers they can report on. In increasing order of
detail, the mechanism could ask for a nomination, a ranking, or a grade. A nomination asks the
participant to select one or more peers. A ranking asks for a strict order of peers. A grade asks
for a cardinal score for each peer.
The appropriate form of peer report can be linked to the level of information each participant
holds about their peers. If participants are only able to sort their peers into two groups, such
as worthy and unworthy for the prize, nominations are appropriate. If they have more detailed
information to order peers but not enough to determine a cardinal score, a ranking is appropriate.
Finally, the most detailed information can be modeled by a cardinal score.
To our knowledge, there is little research on the form of peer report that participants prefer.
In the context of peer grading, De Alfaro and Shavlovsky (2014) reported that:
“Students expressed some uneasiness in ranking their peers, especially as they per-
ceived ranking as a blunt tool, unable to capture the difference between a pair of
roughly equivalent submissions, and a pair of submissions, one of which was very
good, and the other non-functional.”
Further empirical research will be needed to guide theory on the appropriate form of peer reports
in peer grading and other contexts.
Although most models use either peer nominations, ranks, or grades, there are some excep-
tions. Rai (2002) uses a model with two participants, each of whom has a binary type (either rich or
poor). Each participant reports whether they are poor and whether their peer is poor. However,
Rai’s (2002) model can be thought of as a model of nominations where self-nominations are
allowed. Participants can choose to nominate no one, nominate themselves only, their peer only,
or both themselves and their peer. Another exception is Niemeyer and Preusser (2022), who use
an abstract message space. They do not restrict how the participants can communicate with the
mechanism designer.
The mechanism must also specify which peers each participant can evaluate. Some models
do not restrict which peers each participant can evaluate. Each participant considers all his peers
in his reports. Others assume that each participant reports on a fixed number of peers (represented
by m in Table 1). When the size of the community is large, reporting on every peer may be too
onerous.2 Distributing peer reports equally among participants is a natural alternative.
2 The burden of grading all peers can be reduced by combining nominations and grades. The mechanism in
Cembrano et al. (2023c) asks each participant to nominate as many peers as they would like and assign grades (or
weights) to each nomination. The peers that are not nominated receive a grade of zero.
Another class of models assumes that the participants are connected by a network, and each
participant can only evaluate his network neighbors. For example, in the network shown below,
a can evaluate b and c, but cannot evaluate d and e.
[Figure: a five-participant network in which a is linked to b and c but not to d or e.]
The network can capture situations where participants may learn about their peers through
social interactions. The network could represent friendships or co-worker relationships. So-
cial networks often display common structures, such as clustering—if a is friends with b and
c, then b and c are likely to be friends. A recent line of research, initiated by Baumann (2023)
and Bloch and Olckers (2022) in economics and Babichenko et al. (2020a) in computer science,
investigates how the structure of the network impacts the designer’s ability to construct mecha-
nisms that perform well.
The network can also represent who the participants are not allowed to report on. For exam-
ple, in the context of conference peer review, the network could represent conflicts of interest,
such as co-author and student-supervisor relationships (Xu et al., 2019). The network of conflicts
is the complement or inverse of the network studied by Baumann (2023), Bloch and Olckers
(2022) and Babichenko et al. (2020a).
Once the network is defined, the model must still define the form of the peer reports. Par-
ticipants may be asked to nominate (Bao et al., 2021), rank (Bloch and Olckers, 2022), or grade
(Baumann, 2023; Babichenko et al., 2020a) their network neighbors.
In Baumann (2023), Bloch and Olckers (2022) and Babichenko et al. (2020a), the network
that captures the ability to evaluate peers is unweighted.3 Either the participant can evaluate a
given peer or they cannot. Dhull et al. (2022) uses a weighted network where the weight mea-
sures expertise to evaluate a given peer. The higher the weight, the more accurate the peer
evaluation. In the context of conference peer review, the weight can be thought of as a similarity
between two authors’ research interests. Dhull et al. (2022) study how to assign paper submis-
sions to evaluators to maximize the accuracy of evaluations while ensuring that evaluators’ own
submissions are not competing with the submissions they are called to evaluate. (This is done
using the partition mechanism, which we discuss in more detail in Section 3.4.3.)
3.2. Outputs
The output of a peer mechanism describes the form of the prize. Models differ in the number
of prizes and if the prizes differ in quality. Inspired by best paper awards at conferences or
selecting the most influential user in a social media network, some models assume the output
is a single winner. A participant’s utility is defined by the probability he wins the prize. Other
applications, such as research grants, have inspired models with multiple winners for a prize of
equal quality. In Table 1, the number of winners is given by a constant k. Finally, peer grading
and targeting of aid or loans have inspired models where the mechanism assigns a rank or a grade
3 After the participants grade their peers in Babichenko et al.’s (2020a) model, the grades can be represented as a
weighted network.
to each participant, and utility increases in the rank or grade. Similar to a grade, De Clippel et al.
(2008) defined the output as the share of a divisible prize.
A theme in the theoretical analysis of peer mechanisms (which we will expand on in Section
4) is that flexibility in the output of the mechanism allows the designer to construct mechanisms
with desirable properties. For example, suppose a designer who aims to award a single prize
has the flexibility to award a second prize to a runner-up. If the runner-up is close enough to
the winner that the runner-up can divert the prize to himself by changing his peer report, the
mechanism can simply award a second prize to the runner-up to discourage the temptation for
the runner-up to manipulate who wins.
Nearly every paper listed in Table 1 assumes the prize is desirable. Participants want to be
selected or improve their rank or grade. The only exception is a model of criminal networks
studied by Bao et al. (2021) where the selected participant pays a fine. Rather than selecting a
single winner, the mechanism selects a single loser.
3.3. Information
Peer mechanisms focus on eliciting the information participants hold about their peers. How
is the peer information generated? We divide the models into three categories:
• Subjective information: an opinion about who is worthy of a prize, who should receive a
higher rank or a higher grade.
• Common information: all participants agree about who is worthy or who should receive a
higher rank or grade.
• Ground truth information: each participant has a value which can be checked through an
audit or some other measure.
The types of peer information in the above list are ordered from least to most constrained.
Subjective information allows the participants to hold any opinions about their peers. Common
information constrains all peers to agree. Ground truth information adds an additional require-
ment to common information that the information can be checked by the mechanism designer.
A model with common information or ground truth information does not imply that all par-
ticipants have identical information. Each participant may only hold information about a subset
of peers. One participant may know that a is ranked above b while another may know that b is
ranked above c but have no knowledge about a. Also, the participants may make errors if the
common or ground truth information is observed with noise. For example, Lev et al. (2021) use
the Mallows model to shuffle the ranking that each participant observes. The Mallows model
has a dispersion parameter that ranges from perfect information at 0 to no information at 1. As
the parameter approaches 0, participants observe a ranking that is concentrated around the true
common ranking. At 1, each participant draws a ranking uniformly at random from all possible
rankings.
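As an illustration, the following Python sketch samples noisy rankings from the Mallows model using the standard repeated insertion method; the list encoding of rankings is our own assumption.

```python
import random

def sample_mallows(reference, phi, rng=random):
    """Sample a ranking from the Mallows model via repeated insertion.
    reference is the true ranking (best first); phi is the dispersion
    parameter in [0, 1]."""
    sample = []
    for i, item in enumerate(reference, start=1):
        # Inserting item i at position j (1-indexed from the top) has
        # probability proportional to phi ** (i - j).
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        j = rng.choices(range(1, i + 1), weights=weights)[0]
        sample.insert(j - 1, item)
    return sample

true_ranking = ["a", "b", "c", "d", "e"]
print(sample_mallows(true_ranking, phi=0.0))  # always the true ranking
print(sample_mallows(true_ranking, phi=0.3))  # mostly small perturbations
print(sample_mallows(true_ranking, phi=1.0))  # a uniformly random ranking
```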
Some participants may make systematically more errors than others. In the context of peer
grading, students who have a good understanding of an assignment may grade their peers more
accurately than students who struggle to understand the assignment. The mechanism designer
may wish to give more weight to graders who got good grades themselves (Walsh, 2014). Grader
accuracy or reliability may also be measured directly if the mechanism designer can compare a
peer grade to ground truth, such as the grade given by an instructor (Chakraborty et al., 2024).
Each type of information matches different applications. Social media (such as Twitter) is an
example of an application with subjective information. Each user follows peers who they find
interesting, but users may disagree on which peers are interesting. The disagreement does not
imply that one user is wrong. Peer grading is an example of ground truth information. Each
student has a ground truth score according to the marking rubric that could be verified by an
expert grader. Common information has similar applications to ground truth, except that the
model does not specify how the mechanism designer can verify the information. In the example
of targeting aid, participants may agree which peers are most in need of aid, but the designer may
not have a method to measure need.
Almost all of the papers in Table 1 assume the mechanism designer has no prior information
on the peer reports. Caragiannis et al. (2021) is the one exception. Their model assumes that
the designer knows the probability that each participant will nominate each peer. The prior
information is useful to select a default winner in the case of ties, as ties can create opportunities
to manipulate the mechanism.
3.4. Approaches
3.4.1. Audits
In some settings, an auditor can check the peer reports. For example, in peer grading of
large courses, the instructor can check if students have graded their peers accurately, while in
poverty targeting, the grant agency could conduct surveys to measure the poverty level of some
households.
If audits can uncover the truth, why not audit everyone? In many applications, including peer
grading and poverty targeting, we can assume that audits are more costly than peer reports. The
goal is to undertake a limited number of audits to achieve the required performance.
One technique is to target disagreement in peer reports (Rai, 2002). In the poverty targeting
setting, if I claim I am poor, but my neighbor says I am rich, the mechanism can audit the dis-
agreement and punish misreports. The mere possibility of an audit can discourage misreporting
and may produce a desirable equilibrium where all participants report truthfully, and no audits
need to be conducted.
Targeting disagreement is effective when the designer knows that the participants have perfect
information about each other. When there is some error or noise in the peer reports, detecting
manipulation becomes more difficult. Disagreement no longer implies that one of the participants
is lying. Participants could hold different information about their peers by chance and disagree
even when they are both reporting truthfully. For the designer to target manipulation, they need
to determine if a participant’s peer report improves his position by chance or due to manipulation
(Stelmakh et al., 2021).
Audits may also limit the extent to which participants can lie about their peers. When em-
ployees compete for a promotion, they may need to support claims about peers and themselves
with some evidence. The mechanism can exploit the need for evidence to limit misreports about
themselves and their peers (Baumann, 2023). The best employee can make the highest claim
about his own performance, and the most negative peer review he can receive will be better than
the most negative peer review any of his peers can receive.
In the context of peer grading, the mechanism can audit a small number of student assignments
and then assign the audited peers' assignments to all students for grading (Chakraborty et al.,
2024). Since the students do not know which assignments have been audited, the mechanism dis-
courages manipulation for all assignments.
The mechanism can use peer nominations to decide who to audit. In a model of criminal
networks (Bao et al., 2021), the mechanism samples one participant and asks that participant to nominate one peer.
The compare to nominee mechanism audits both the sampled participant and the nominated peer
but only fines the one with the higher level of criminal activity. Since audits only provide a
signal of criminal activity, the sampled participant minimizes their chance of incurring a fine by
nominating the peer who they think has the highest level of criminal activity.
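A sketch of this compare-to-nominee idea appears below; the `nominate` and `audit` functions are hypothetical stand-ins for elicitation and a noisy audit signal.

```python
import random

def compare_to_nominee(participants, nominate, audit, rng=random):
    """Sample one participant, ask them to nominate a peer, audit both, and
    fine only the one whose audit signals more criminal activity."""
    sampled = rng.choice(sorted(participants))
    nominee = nominate(sampled)
    return sampled if audit(sampled) > audit(nominee) else nominee

# Hypothetical true activity levels; audits observe them with noise.
activity = {"a": 0.9, "b": 0.2, "c": 0.5}
fined = compare_to_nominee(
    activity,
    # Best response: nominate the peer believed to be most criminal.
    nominate=lambda p: max((q for q in activity if q != p), key=activity.get),
    audit=lambda p: activity[p] + random.gauss(0, 0.1),
)
print(fined)  # usually the participant with the highest true activity
```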
3.4.2. Rewards
To prevent manipulation, the mechanism designer can reward truthful reporting. The rewards
can take the form of monetary payments or some other form. In peer grading, the reward might
not be money but the student’s grade. For instance, the mechanism in Walsh (2014) increases a
student’s grade when they grade other students well.
If the peer reports are entirely subjective, then paying for truthful reports is not particularly
applicable. A participant can always claim that their report is their true but subjective opinion.
In Table 1, most of the mechanisms that use payments are in settings where there is common
information or ground truth. The only exception is Ito et al. (2018), where each participant’s
nomination is subjective. However, the reward component of Ito et al.’s (2018) mechanism is
used on a part of the model that does have common information—whether both participant a and
participant b observe a nomination from a to b.
If each peer has common information that all participants agree upon, a natural approach is to
reward consensus. Multiple peer assessments of the peer’s value should converge. A participant
who wishes to manipulate the mechanism must consider the cost of diverging from the consensus
and reducing their reward. For example, if the prize is the ranking of a grant application, the
participants can receive an increase in the ranking as payment for a report that agrees with the
consensus (Merrifield and Saari, 2009).
Rewarding consensus can, however, create problems. If a participant believes that the con-
sensus will be biased, he may bias his peer reports to match the consensus. The mechanism must
give the participants an incentive to report their true assessment, even if they believe that their
true assessment will differ from the consensus.
The field of peer prediction uses payments to extract true assessments even though the de-
signer does not have access to the ground truth (Faltings and Radanovic, 2017). In peer predic-
tion, the focus is usually on assessing objects that are unrelated to the peers themselves, such as
the quality of a product or a forecast of an event. The peer mechanisms we discuss in this survey
involve peers assessing each other.
Peer prediction mechanisms can be adapted to peers assessing each other. For example,
Hussam et al. (2022) adapt Witkowski and Parkes’s (2012) peer prediction mechanism to peer
reports. The mechanism asks each participant for peer reports and for their belief about what
other participants’ peer reports will be. If the other participants report truthfully, a participant
maximizes their expected payment by truthfully reporting their peer report and their belief of
other participants’ peer reports.
3.4.3. Impartial mechanisms
The bulk of the research surveyed in Table 1 discourages manipulation by designing an impar-
tial mechanism. A peer mechanism is impartial when a participant cannot influence his chances
of receiving a prize or improving his rank or grade.4
Although most papers define impartiality so that a mechanism is impartial when every partic-
ipant cannot change their own probability of receiving a prize or improve their own rank or grade,
there are some exceptions. For example, Alcalde-Unzu et al. (2022) use a stricter definition of
impartiality for the case where the output of the mechanism is a ranking. In Alcalde-Unzu et al.’s
(2022) definition, a mechanism is impartial when a participant cannot impact either their own position
in the ranking or who is ranked above and below them. In contrast, Kahng et al.
(2018) and Cembrano et al. (2023a) use the more standard definition of impartiality that each
participant cannot change their own position in the ranking.
One technique to achieve impartiality is to partition participants into groups (Alon et al.,
2011; Holzman and Moulin, 2013). For example, the mechanism designer divides participants
into two groups, A and B. Participants in B pick a winner in A, while participants in A pick a
winner in B. The overall winner is decided with a coin toss. Each individual can only influence
the chance a peer outside of his or her group wins the prize, so the mechanism is impartial.
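A minimal sketch of this two-group partition mechanism is given below; the uniform tie-breaking, and the fallback when a group receives no cross-group nominations, are our own assumptions.

```python
import random
from collections import Counter

def two_group_partition(participants, nominations, rng=random):
    """Split participants into two random groups and count only nominations
    that cross between groups, so no report affects its author's own chance."""
    pool = sorted(participants)
    rng.shuffle(pool)
    group_a, group_b = set(pool[: len(pool) // 2]), set(pool[len(pool) // 2 :])

    def cross_winner(voters, candidates):
        counts = Counter(
            peer
            for voter in voters
            for peer in nominations.get(voter, ())
            if peer in candidates
        )
        if not counts:  # no cross-group nominations: fall back to random
            return rng.choice(sorted(candidates))
        top = max(counts.values())
        return rng.choice(sorted(p for p, c in counts.items() if c == top))

    # A coin toss decides which group the winner comes from.
    if rng.random() < 0.5:
        return cross_winner(group_b, group_a)
    return cross_winner(group_a, group_b)
```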
How many participants should be in each group? At one extreme, the random dictatorship
randomly chooses a single participant for one group and places the rest of the participants in the
other group. The single participant (the dictator) decides who wins the prize. Jury mechanisms
increase the number of participants in the dictatorship group, and these participants act as a jury
to decide on the winner among the remaining participants.
Partition mechanisms can have poor performance if all the best participants end up in the
same group. For example, suppose a partition mechanism for selecting two winners divides the
participants into two groups and selects one winner from the first group and one from the second
group. If the top two participants end up in the first group, the mechanism will only select one of
them. Aziz et al. (2016, 2019c) counter this problem by using more than two groups and deciding
on the number of winners in each group based on the grades from peers outside the group.
One participant can also be part of many different partitions. In the permutation mechanism
introduced by Fischer and Klimm (2014, 2015), the participants are placed in random order, and
the mechanism only counts nominations from peers that are before the participant in the order. To
decide on the winner, the mechanism starts with the first participant as the candidate winner and
moves through the order to update the candidate winner. A participant p becomes the candidate
winner if he is above the current candidate c in the order, and the number of nominations p
receives from peers before p in the order (excluding c) is greater than or equal to the number
of nominations c receives from peers before c in the order. The winner is the candidate winner
after moving through all participants in the order. The mechanism is impartial because each
participant can only influence if a peer wins when the participant is no longer able to win.
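The following Python sketch is a direct transcription of the description above; the encoding of nominations is our own assumption.

```python
import random

def permutation_mechanism(participants, nominations, rng=random):
    """Place participants in a random order and count only nominations from
    earlier peers, updating a candidate winner along the order."""
    order = sorted(participants)
    rng.shuffle(order)
    position = {p: i for i, p in enumerate(order)}

    def count(p, exclude=None):
        # Nominations p receives from peers placed before p in the order.
        return sum(
            1
            for voter, noms in nominations.items()
            if p in noms and position[voter] < position[p] and voter != exclude
        )

    candidate = order[0]
    for p in order[1:]:
        # p overtakes the current candidate c if p's count (excluding c's
        # nomination) is at least c's count.
        if count(p, exclude=candidate) >= count(candidate):
            candidate = p
    return candidate
```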
Researchers often assume the designer must award a fixed number of prizes, which creates
a strong incentive for participants to misreport. For example, if the mechanism must award a
single prize, the participant in second place has a strong incentive to share a negative review of
the participant in first place.
4 The definition of an impartial mechanism is slightly different from a strategy-proof mechanism. In a strategy-proof
mechanism, an agent has a weakly dominant strategy to report the truth. As Fischer and Klimm (2014, 2015) point out,
impartiality is equivalent to strategy-proofness if the utility of the participant only depends on their chance of winning
the prize. Strategy-proof mechanisms are also referred to as dominant-strategy incentive-compatible mechanisms.
If the designer has some flexibility in awarding the prize, impartiality can be achieved without
resorting to a partition. One approach is to expand the set of possible winners to include partici-
pants that could win if they changed their peer reports (Tamura and Ohseto, 2014; Kurokawa et al.,
2015). If the designer is constrained to choose at most k winners, she can randomly pick k from
the expanded set of possible winners. But the probability that each participant is selected cannot
depend on their peer report. One solution is to have the option of not selecting any winners to
remove the participant’s incentive to reduce the number of possible winners (Kurokawa et al.,
2015). If the designer has no constraints on the number of winners, another approach is to
choose an exogenous threshold and only award prizes to participants who exceed the threshold
(Mattei et al., 2020). By tweaking the mechanism, the designer can award k prizes in expectation.
Some flexibility in the number of winners also helps to generalize the permutation mechanism
described above. The permutation mechanism selects one winner. If the designer aims to select
two winners, Bjelde et al. (2017) show that the permutation mechanism can be adapted if the
mechanism is allowed to select one winner in a certain case. The permutation mechanism is run
both forward and backward, and one winner is selected on each run. If the same participant is
selected in both the forward and backward run, the mechanism only selects one winner.
If the prize is divisible, the designer can reduce the total reward to achieve impartiality.
Suppose the designer would like to share the prize according to the participants’ reports when
the participants have consensus. The designer can discourage deviations from consensus by
reducing the share of the prize awarded to participants that disagree and increasing the share of
a default participant (De Clippel et al., 2008).
Most of the research uses a strict definition of impartiality—a participant cannot influence
whether he receives the prize no matter what his peers report. We could relax this definition so
that the participant cannot influence whether he receives the prize provided his peers report truth-
fully. Rather than looking for strategy-proof or dominant-strategy incentive-compatible mech-
anisms, we look for ex post incentive-compatible mechanisms. Ex post incentive compatibility
is achieved if each participant has no incentive to lie when all other participants report truth-
fully. Several papers, including Amorós et al. (2002); Amorós (2011, 2023), Li et al. (2018),
Bloch and Olckers (2022) and Babichenko et al. (2020b), use more relaxed definitions of impar-
tiality and incentive-compatibility to construct mechanisms with good performance.
If the mechanism only needs to meet ex post incentive compatibility, the designer can con-
struct what we refer to as fix position mechanisms. Provided all other participants report the
truth, the participant’s position in the output is fixed, and he cannot change his probability of
winning the prize. Consider the following example of a fix position mechanism. Suppose three
participants, a, b, and c, are asked to rank their peers to win a prize, and the true order is a first, b
second, and c last. If all participants report truthfully, do they have an incentive to deviate from
this equilibrium? Consider b’s perspective when both a and c have reported truthfully that a ≻ b
and b ≻ c. No matter what b reports, the designer can still use a and c’s reports to fix b in second
place. At equilibrium, b’s report does not influence his chance of winning the prize.
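The following toy sketch illustrates the fix position idea for three participants. The pairwise rule, in which each comparison is decided solely by the third participant, is our own illustration rather than a mechanism from the cited papers.

```python
def fix_position_ranks(reports):
    """reports[i] is participant i's reported ranking (best first) of the
    other two participants. Each pairwise comparison is decided by the third
    participant, so a report never affects its author's own position."""
    participants = sorted(reports)
    positions = {}
    for p in participants:
        losses = 0
        for q in participants:
            if q == p:
                continue
            (judge,) = [r for r in participants if r not in (p, q)]
            if reports[judge].index(q) < reports[judge].index(p):
                losses += 1  # the judge places q above p
        positions[p] = 1 + losses  # position 1 is best
    return positions

# True order a > b > c, with truthful reports.
truthful = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(fix_position_ranks(truthful))  # {'a': 1, 'b': 2, 'c': 3}

# b misreports c above a: only the a-versus-c comparison moves, and b's
# position is still 2.
b_lies = {"a": ["b", "c"], "b": ["c", "a"], "c": ["a", "b"]}
print(fix_position_ranks(b_lies))
```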
The fix position mechanism applies when the mechanism’s output is a ranking or selection
of participants. When the mechanism outputs a grade, and the participants care only about their
grade (and not their relative position), a simple way to achieve impartiality is to fix the grade of
each participant to depend only on other participants' peer reports (Wąs et al., 2019). Participants
can change other participants’ grades but not their own.
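A sketch of the fix grade idea follows; averaging the grades received from peers is our own illustrative aggregation, not the specific rule of Wąs et al. (2019).

```python
def fix_grades(reports):
    """reports[i][j] is the grade participant i assigns to participant j.
    Each participant's grade uses only the grades others give them, so no
    report can move its author's own grade."""
    participants = sorted(reports)
    grades = {}
    for p in participants:
        received = [reports[q][p] for q in participants if q != p and p in reports[q]]
        grades[p] = sum(received) / len(received)
    return grades

reports = {
    "a": {"b": 7, "c": 4},
    "b": {"a": 9, "c": 5},
    "c": {"a": 8, "b": 6},
}
print(fix_grades(reports))  # {'a': 8.5, 'b': 6.5, 'c': 4.5}
```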
We conclude this section with the disclaimer that impartiality, alone, does not guarantee a
good outcome. If each participant cares only whether they receive a prize and the mechanism is
impartial, each participant will be indifferent between reporting their true knowledge or prefer-
ences about their peers and any other report. This indifference gives rise to multiple equilibria,
some of which may have undesirable outcomes.5 For example, suppose an impartial mechanism
selects a winner who would not receive any nominations under the participants' true preferences.
The participants have no incentive to change their report even though their change may cause
a more desirable outcome. Mechanisms need to satisfy other properties in addition to impar-
tiality to guarantee desirable outcomes. In the next section, we discuss the other properties that
impartial mechanisms can satisfy.
4. Theoretical Results
The theoretical results on peer mechanisms illuminate the limits and potential of peer mecha-
nisms. Researchers have taken two main approaches: proving which mechanisms (if any) satisfy
a set of axioms and showing how certain impartial mechanisms can approximate the optimal
truthful outcome.
To highlight the differences between the two approaches, we focus on peer selection, where
each participant can nominate peers to receive a prize. As Table 1 shows, the model of peer
selection is popular. Nominations are used as the input of the mechanism in 21 of the 37 papers
on impartial peer mechanisms.
The axiomatic study of peer selection was initiated by Holzman and Moulin (2013) while the
approximation approach was initiated by Alon et al. (2011). In the following two subsections,
we describe results from each of these two seminal works and the papers that built upon them.
The nominations can be represented as a directed graph. An edge from a participant to a
peer shows that the participant nominates that peer. In the graph below, a nominates b and b
nominates c. Participant a is nominated by c, d, and e so a receives the most nominations.
[Figure: a directed graph in which a nominates b, b nominates c, and c, d, and e each nominate a.]
4.1. Axioms
Holzman and Moulin (2013) use a model where each participant can nominate one peer, and
there is a single prize. Self-nominations are not allowed. In this model, Holzman and Moulin
(2013) show that impartial peer mechanisms fail to satisfy two weak axioms simultaneously:
• positive unanimity: A participant always wins if he is nominated by everyone else.
• negative unanimity: The winner gets at least one nomination.
Theorem 1 (Holzman and Moulin, 2013). There exists no nomination rule that satisfies impartiality, positive unanimity, and negative unanimity.
5 Some models, such as Amorós et al. (2002), assume that participants prefer to report truthfully if their report does
not influence their own chances of winning a prize. This assumption avoids many undesirable equilibria.
The strong impossibility result raises the question of whether similar results apply in different
(perhaps more complex) models. If we allow the mechanism to award more than one prize, the
impossibility no longer holds. Tamura and Ohseto (2014) show that a peer mechanism with
single nominations and more than one prize can simultaneously satisfy impartiality, positive
unanimity, and negative unanimity, provided there are at least four participants.
To prove the result, Tamura and Ohseto (2014) define a mechanism called “plurality with
runners-up”. The participant with the most nominations wins a prize. If there is a tie for the
most nominations, all the tied participants win a prize. Each runner-up also wins if and only if
he nominates the single participant with the most nominations who wins by only one nomination.
The downside of plurality with runners-up is that every participant could win a prize, and,
in this situation, a single participant could reduce the number of winners from everyone to just
himself and one other by changing his nomination. Consider the example below; there is a
cycle of nominations. Since every participant receives one nomination, plurality with runners-up
awards a prize to everyone.
[Figure: a cycle of nominations a → b → c → d → e → a, so every participant receives exactly one nomination.]
Suppose a switched his nomination from b to c (as shown below). Since c receives the most
nominations, he would still win a prize. Participants a, d, and e are each one nomination behind
c (b now receives none), but only a would continue to win a prize. Recall that plurality with
runners-up only awards a prize to a runner-up if he nominates the single participant with the
most nominations who wins by only one nomination.
[Figure: the same graph after a switches his nomination from b to c, so c receives two nominations.]
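A sketch of plurality with runners-up, applied to the two nomination profiles just discussed; the dictionary encoding of single nominations is our own assumption.

```python
from collections import Counter

def plurality_with_runners_up(nominations):
    """nominations maps each participant to the single peer they nominate.
    Returns the set of prize winners."""
    participants = set(nominations) | set(nominations.values())
    counts = Counter(nominations.values())
    score = {p: counts[p] for p in participants}
    top = max(score.values())
    leaders = {p for p, s in score.items() if s == top}
    if len(leaders) > 1:
        return leaders  # a tie for the most nominations: all tied win
    (winner,) = leaders
    # A runner-up one nomination behind also wins iff he nominates the unique
    # leader: withdrawing that nomination would otherwise force a tie.
    return {winner} | {
        p for p in participants
        if score[p] == top - 1 and nominations.get(p) == winner
    }

# The cycle a -> b -> c -> d -> e -> a: everyone has one nomination, all win.
cycle = {"a": "b", "b": "c", "c": "d", "d": "e", "e": "a"}
print(plurality_with_runners_up(cycle))  # all five participants

# After a switches to c, only c and the runner-up a win.
switched = {"a": "c", "b": "c", "c": "d", "d": "e", "e": "a"}
print(plurality_with_runners_up(switched))  # {'a', 'c'}
```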
Tamura and Ohseto (2014) show that it is possible to adjust plurality with runners-up to have
at most two winners while simultaneously satisfying impartiality, positive unanimity, and nega-
tive unanimity. However, the adjustment requires that in the case of a tie for the most nomina-
tions, the participant who is earlier in a pre-defined order always wins. Thus, participants who
are earlier in the order have an advantage over participants who are later in the order.
Favoring certain applicants who happen to be earlier in an order may be an undesirable prop-
erty. To address this challenge, Tamura (2016) searches for impartial peer mechanisms that satisfy
the following axioms:
• Symmetry: The determination of the winners is independent of the order of the participants.
• Anonymity: An exchange of nominations between two participants does not affect whether any
other participant wins.
• Monotonicity: Receiving an additional nomination cannot cause a winner to lose his prize.
Theorem 2 (Tamura, 2016). Plurality with runners-up is the only minimal nomination rule satis-
fying impartiality, symmetry, anonymity, and monotonicity.
The definition of minimal is that the nomination rule must have the smallest set of winners
while still satisfying the four axioms. For example, a rule that always gives a prize to every
participant would satisfy impartiality, symmetry, anonymity, and monotonicity but would not be
minimal since plurality with runners-up can satisfy the same axioms and choose a strictly smaller
number of winners for at least one profile of nominations.
Anonymity is a desirable property because it gives the participants privacy when they report
their nominations. The winner can be determined even if the participants complete their nomi-
nations anonymously. Unfortunately, anonymity is a difficult property for impartial mechanisms
to satisfy.
If we return to the single winner setting of Holzman and Moulin (2013), the only impartial
nomination rules that satisfy anonymity give the prize to the same default participant—no matter
the profile of nominations. However, the result only applies to deterministic mechanisms. If the
mechanism designer is willing to use randomization, Mackenzie (2015) shows that:
Theorem 3 (Mackenzie, 2015). An impartial nomination rule satisfies anonymity and negative
unanimity if and only if it is a uniform random dictatorship.
The uniform random dictatorship picks a single participant uniformly at random, and this
participant picks the winner. If the nominations are placed anonymously in a box, the uniform
random dictatorship can be implemented by randomly selecting one of the nominations.6
Besides randomization, are there other features of the model that can allow the designer to
break away from the incompatibility between impartiality and anonymity? Holzman and Moulin’s
(2013) model assumes that a winner must be selected—the prize cannot remain unassigned. If
the prize can remain unassigned, a threshold mechanism can satisfy impartiality, anonymity, and
other desirable axioms, such as positive unanimity and monotonicity. The threshold mechanism
assigns the prize to the participant who receives at least a threshold number of nominations and
leaves the prize unassigned otherwise. The threshold must be greater than half the number of
nominations to ensure that there is, at most, a single winner. Mackenzie (2020) shows that the
threshold mechanism is the only deterministic impartial nomination rule that satisfies anonymity,
monotonicity, positive unanimity, and candidate neutrality (all participants are treated symmetri-
cally when they are considered as possible winners).
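A sketch of the threshold idea for single nominations appears below; returning None to represent an unassigned prize is our own convention.

```python
from collections import Counter

def threshold_winner(nominations, threshold):
    """Award the prize to a participant with at least `threshold` nominations,
    or leave it unassigned (None). With threshold greater than half the number
    of nominations, at most one participant can qualify."""
    counts = Counter(nominations.values())
    qualified = [p for p, c in counts.items() if c >= threshold]
    return qualified[0] if qualified else None

votes = {"a": "b", "c": "b", "d": "b", "e": "a"}
print(threshold_winner(votes, threshold=3))  # b, who has three nominations
print(threshold_winner(votes, threshold=4))  # None: the prize is unassigned
```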
The results on anonymity we have discussed thus far use nominations as the input to the
mechanism. Do other inputs, such as ranks or grades, also display a tension between impartiality
and anonymity? Alcalde-Unzu et al. (2022) show that when participants are asked to rank all
their peers and the mechanism outputs a ranking, impartiality and anonymity are incompatible.
Similar to Holzman and Moulin’s (2013) result for nominations, the only mechanisms satisfying
impartiality and anonymity for ranking output the same default ranking—ignoring the partici-
pants’ reports.
4.2. Approximation
If everyone reported truthfully, the ideal mechanism would be to award the prize to the partic-
ipant who receives the most nominations. Unfortunately, this ideal mechanism is not impartial.
6 Anonymity and negative unanimity is not the only way to characterize the uniform random dictatorship.
Edelman and Por (2021) show that the rule can also be characterized by other axioms.
For example, suppose a and b both receive the same number of nominations, and one of the
nominations b receives is from a. By simply changing his nomination to any other peer, a can
reduce the number of nominations b receives and put himself in the top position.
Can we design a mechanism that is close to the ideal but still retains impartiality? A line of
research initiated by Alon et al. (2011) seeks to answer this question by designing mechanisms
that are impartial and closely approximate the ideal of awarding the prize to the participant
with the most nominations. Alon et al. (2011) defined the approximation ratio as the number of
nominations the selected participant receives divided by the number of nominations received by
the participant with the most nominations. The closer this ratio is to 1, the better the mechanism
performs.
To better understand the approximation ratio, consider the example shown below. Participant
a receives 3 nominations, which is the most nominations received by any participant. Participants
b and c both receive 1 nomination and d and e receive zero nominations. If the mechanism
selected participant b, the approximation ratio is 1/3.
[Figure: a nomination graph in which a receives three nominations, b and c receive one nomination each, and d and e receive none.]
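A small sketch that computes the approximation ratio for this example; the exact edges below are our guess, chosen to be consistent with the stated nomination counts.

```python
from collections import Counter

def approximation_ratio(selected, nominations):
    """Nominations received by the selected participant, divided by the
    nominations received by the most-nominated participant."""
    counts = Counter(peer for noms in nominations.values() for peer in noms)
    return counts[selected] / max(counts.values())

# a receives three nominations; b and c receive one each; d and e none.
noms = {"a": {"b"}, "b": {"c"}, "c": {"a"}, "d": {"a"}, "e": {"a"}}
print(approximation_ratio("b", noms))  # 1/3
print(approximation_ratio("a", noms))  # 1.0, the ideal selection
```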
If the designer is restricted to deterministic mechanisms, the results are disappointing. Alon et al.
(2011) find that no deterministic mechanism can provide a finite approximation ratio. The re-
sult mirrors Holzman and Moulin’s (2013) result that an impartial deterministic mechanism may
assign the prize to a participant who does not receive any nominations.
Randomization provides more encouraging results. Alon et al. (2011) show that the partition
mechanism that divides the participants into two equal-size groups, only counts nominations
across groups, and randomly picks which group the winner is selected from, provides an approx-
imation ratio of 1/4 in expectation. Since only the between-group nominations are counted, each
nomination is counted with probability 1/2. And the winner is selected from a given group with
probability 1/2.
In the partition mechanism with two groups, the mechanism may ignore many of the nomina-
tions. To attain better approximation ratios, Fischer and Klimm (2014, 2015) tackle this problem
by increasing the number of partitions and allowing participants to be part of many different par-
titions. Since each participant is part of many different partitions, they call their mechanism the
“permutation mechanism”. The permutation mechanism achieves an approximation ratio of 1/2,
which turns out to be the best possible approximation ratio.
Consider the example shown below on the left. The two participants, a and b, both nominate
each other. Without loss of generality, let’s focus on participant a. To achieve impartiality, the
probability that a wins must be the same whether they nominate b or abstain (the situation shown
below on the right). Thus, the probability a wins must be 1/2 when they abstain and b nominates
them. But now b must win with probability 1/2 even though they do not receive any nominations.
This situation shows that no mechanism can be more than 1/2 optimal.
[Figure: on the left, a and b nominate each other; on the right, a abstains while b still nominates a.]
The above example relies on the case of two participants who are allowed to abstain. Several
papers have circumvented the ceiling of 1/2 by ruling out this case with conditions on the profile
of nominations. If participants cannot abstain, the permutation mechanism achieves an approxi-
mation ratio of at least 7/12 (Fischer and Klimm, 2014, 2015). If each participant submits exactly
one nomination, the permutation mechanism has an approximation ratio of 2/3 (Cembrano et al.,
2023b). If the participant with the most nominations receives at least a threshold number of
nominations, Bousquet et al. (2014) design a “slicing mechanism” that has an approximation
ratio close to one. The slicing mechanism first samples some participants to decide how the
remaining participants should be partitioned and the order in which the partitions should be con-
sidered. The slicing mechanism adds a sampling step to the techniques used in partition and
permutation mechanisms. The nearly optimal approximation ratio of the slicing mechanism re-
lies on placing conditions on the nominations—the input to the mechanism. We can also consider
how conditions on the output of the mechanism impact the approximation ratio.
Similar to how flexibility in the prizes provides mechanisms with better axiomatic properties,
flexibility in the prizes also allows for better approximation ratios. If the mechanism targets k winners, Bjelde et al. (2017) show that permitting the mechanism to select fewer than k winners in some situations can improve the approximation ratio. In comparison to the case where the mechanism must select exactly two winners, allowing the mechanism to sometimes select fewer winners improves the approximation ratio from 7/12 to 2/3.
The approximation results described above all focus on the approximation ratio. Why should
the mechanism designer target the ratio and not some other metric? Caragiannis et al. (2019)
propose additive approximation. Instead of the ratio, the designer targets the difference between
the winner and the participant with the most nominations. For example, suppose the winner receives 5 nominations, but the participant with the most nominations receives 8 nominations. The approximation ratio is 5/8 whereas the difference is 3. In the case of single nominations and no abstentions, a randomized partition mechanism provides an additive approximation of O(√n) (Caragiannis et al., 2019). A deterministic threshold mechanism can also achieve an additive approximation of O(√n) (Cembrano et al., 2022a).
The study of the additive approximation ratio by Caragiannis et al. (2019) and Cembrano et al.
(2022a) assumes the mechanism can select at most one winner. If the mechanism is allowed to se-
lect many winners, the additive approximation ratio must be defined differently. Cembrano et al.
(2022b) propose that the number of nominations for the participant with the most nominations
should be compared to the selected participant with the least nominations. For example, suppose
the most popular participant received 8 nominations, and the mechanism selected two partici-
pants, one with 7 nominations and another with 5 nominations. The min-additive approximation
proposed by Cembrano et al. (2022b) is 8 − 5 = 3. The plurality with runners-up mechanism proposed by Tamura and Ohseto (2014) is 1-min-additive, which turns out to be the best possible min-additive approximation (Cembrano et al., 2022b).
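The worked example can be checked with a few lines of code (the helper function is our own):

```python
def min_additive(indegree, selected):
    """Gap between the most-nominated participant and the selected
    participant with the fewest nominations."""
    return max(indegree.values()) - min(indegree[p] for p in selected)

indegree = {"a": 8, "b": 7, "c": 5}        # nominations received
print(min_additive(indegree, {"b", "c"}))  # 8 - 5 = 3
```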
The approximation results described above all focus on the participant with the most nom-
inations. In social networks, the mechanism designer may wish to target the most influential
user rather than the user who is the most popular. In social networks, participant a nominat-
ing participant b can be thought of as user a following user b. If we consider the graph of
nominations, the approximation results discussed above target the participant with the maximum
in-degree, but other measures of centrality may more accurately capture influence. For example,
Babichenko et al. (2018) define an influence measure using the expected number of paths that
will end at a given participant when starting randomly at any participant.
Similar to the way participants can manipulate which participant receives the most nomi-
nations, participants can also manipulate which participant has the highest influence measure.
Consider the example shown below. Participants a and c each have two nominations (or “follows” in the language of social media), but a is more influential than c because he can influence d and e via his influence on c. Suppose the mechanism selects a. If c chooses to abstain and remove his nomination, c would become the most influential participant. Participant c can thus change the outcome by changing his nomination.
[Figure: a nomination graph in which b and c nominate a, and d and e nominate c.]
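The reversal is easy to verify in code. The sketch below uses one possible formalization of path-based influence, counting where forward paths along nomination edges terminate; it is only an illustration consistent with this example, not the exact measure of Babichenko et al. (2018).

```python
def walk_end_counts(nominations):
    """Start a path at every participant, follow nomination ("follow")
    edges until reaching someone who nominates no one, and count where
    each path ends. Assumes everyone nominates at most one peer."""
    ends = {p: 0 for p in nominations}
    for start in nominations:
        node = start
        while nominations[node]:
            node = next(iter(nominations[node]))
        ends[node] += 1
    return ends

g = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"c"}, "e": {"c"}}
print(walk_end_counts(g))  # all 5 paths end at a
g["c"] = set()             # c abstains, withdrawing his nomination of a
print(walk_end_counts(g))  # 3 of 5 paths now end at c, so c overtakes a
```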
Babichenko et al. (2018) design mechanisms that are impartial and approximate the ideal
of selecting the most influential participant. When the nomination graph is a tree or a forest,
an impartial mechanism with a constant approximation ratio to the maximum influence exists.
Babichenko et al. (2020b) provide tighter bounds on the approximation for forests, Zhang et al.
(2021) improve the bounds for directed acyclic graphs when participants can only manipulate by
hiding nominations, and Zhao et al. (2023) design a mechanism that achieves the upper bound
shown in Zhang et al. (2021).
5. Empirical Evidence
In this section, we discuss the empirical evaluation of peer mechanisms. After providing
an overview of the studies and discussing evidence of manipulation, we highlight several key
lessons the empirical studies provide for theory.
Table 2 lists research studies that test peer mechanisms in practice. We focus on studies
where the participants providing the peer reports are also eligible for the prize. Many fascinating
studies that ask a third party to report on the participants are excluded.7
We separate studies into three settings: field experiments, lab experiments, and observational
studies. The field experiments introduce experimental treatments in real-life settings. For ex-
ample, Hussam et al. (2022) assigned business grants to entrepreneurs in India based on peer
rankings of profitability. The lab experiments invite participants into a controlled laboratory
setting and study the impact of experimental treatments.
7 For example, Maitra et al. (2020) study mechanisms where local traders and political representatives nominate farm-
ers to receive loans. We do not include Maitra et al.’s (2020) study in Table 2 because the local traders and political
representatives are not eligible to receive loans.
Table 2: Empirical evidence of peer mechanisms

Paper                      Setting         Context                      Input                        Participants    Sample size
Alatas et al. (2019)       Field           Government aid programs      Influence beneficiary lists  Residents       3998
Alatas et al. (2016)       Field           Government cash transfers    Rank 8 households            Residents       5633
Hussam et al. (2022)       Field           Business grants              Rank 5 entrepreneurs         Entrepreneurs   1345
Huang et al. (2019)        Field           Employee promotion           Grade peers                  Employees       432
Trachtman et al. (2021)    Field           Cash transfers               Rank 10 households           Residents       300
Dupas et al. (2022)        Field           Poverty targeting            Rank neighbors               Residents       507
Carpenter et al. (2010)    Lab             Worker compensation          Grade 7 peers                Students        224
Balietti et al. (2016)     Lab             Art exhibition               Grade 3 peers                Students        144
Chakraborty et al. (2024)  Lab             Peer grading                 Grade 5 peers                Students        69
Leibbrandt et al. (2018)   Lab             Neutral                      Grade 3 peers                Students        200
Kotturi et al. (2020)      Lab             Freelance job applications   50 pairwise comparisons      MTurk workers   320
Stelmakh et al. (2021)     Lab             Neutral                      Rank 4 peers                 Students        55
Bao et al. (2021)          Lab             Crime                        Nominate 1 peer              Students        300
Piech et al. (2013)        Observational   Peer grading                 Grade 4 peers                Students        3600
Basurto et al. (2020)      Observational   Government subsidies         Nominate households          Residents       1559
Vera-Cossio (2022)         Observational   Government loans             Assess loan applications     Residents       710
Lab experiments may be conducted in physical locations or online. Carpenter et al. (2010) recruit student participants to complete tasks
in a laboratory environment while Kotturi et al. (2020) conduct experiments using the Amazon
Mechanical Turk online crowdsourcing platform. The line between field and lab experiments can
be unclear. As Chakraborty et al. (2024) point out, their lab experiment studies peer grading in a classroom setting, so it could also be considered a field experiment. The final setting is observational
studies, where the researchers did not introduce any experimental treatments.
As peer mechanisms can have many different applications, the studies listed in Table 2 have
been conducted in many different contexts. Many studies focus on the context of government
transfers, subsidies, and loans. Policymakers have recognized that local community members
may have superior information compared to the central government on which community mem-
bers are in most need of aid or will make the most productive use of a loan. Mechanisms that rely
on the local community to target aid (often called “community-based targeting”) are a popular
means to decide which community members should receive aid.
Another popular study context is peer grading of student assignments. As online delivery
has enabled instructors to scale their courses to thousands of students, the instructor does not
have the time to grade all the assignments. Peer grading offers a scalable solution for grading in
massive open online courses (MOOCs). Although there are many studies on different facets of
peer grading, we focus on studies that highlight how the manipulation of grades can be prevented.
Outside of the targeting of government programs and peer grading of student assignments,
empirical studies of peer mechanisms have focused on a diverse range of contexts. Huang et al.
(2019), Carpenter et al. (2010), and Kotturi et al. (2020) focus on labor markets: peer evaluations
can influence which employee is promoted, set salary bonuses, or decide which freelancer is
hired. Balietti et al. (2016) use peer grading to assess artworks. Bao et al. (2021) use a lab
experiment to show how allowing criminal suspects to nominate another suspect can reduce the
overall crime level. Hussam et al. (2022) use peer evaluations to determine which entrepreneurs
will create the largest return from a business grant or loan.
We also note that Spliddit, a popular online tool for using fair division algorithms, has im-
plemented De Clippel et al.’s (2008) peer mechanism to divide credit for a joint project among
the members of the team (Goldman and Procaccia, 2015). Empirical studies of Spliddit have focused on other functionality, such as the rent-sharing algorithms (Gal et al., 2017), rather than on the peer mechanism. Spliddit's peer mechanism is a small
part of Lee and Baykal’s (2017) user study that focuses primarily on the algorithms for assigning
chores. The study did not provide insights on preventing the manipulation of peer mechanisms.
The empirical studies listed in Table 2 use the full range of inputs discussed in the taxonomy
in Section 3: nominations, rankings, and grades. With the exception of Kotturi et al. (2020), the studies asked participants to evaluate only a small set of peers. Evaluating a large number of peers is likely to be tedious or cognitively demanding. However, it is unclear how many peers people can evaluate before accuracy begins to erode. Participants may
also find it easier to use nominations than ranking or grading as the number of peers increases.
In the final two columns of Table 2, we list the type of study participants and sample size.
The sample sizes range from large countrywide studies to small lab experiments.
The studies provide several examples of participants manipulating peer mechanisms when given the opportunity:
• In the context of employee promotion, Huang et al. (2019) show that employees reduce peer grades for coworkers eligible for the same promotion. If their coworker was not
eligible and therefore not a direct competitor, employees tended to inflate their peer grade
of the coworker.
• In a field experiment with entrepreneurs in India, Hussam et al. (2022) show that peer re-
ports decrease substantially in accuracy when the reports influence the chance of receiving
a business grant.
• During a lab experiment conducted by Carpenter et al. (2010), participants were asked to
print, seal, and address letters to a list of recipients—complete with handwritten addresses.
Each participant was then asked to count the number of letters and rate the quality of the
work of each of their seven peers in the experiment. When the peer reports determined
who received a bonus, the participants under-counted the number of letters and reduced
quality ratings. Participants relied more on the subtle manipulation of reducing the quality
rating than the more obvious manipulation of under-counting.
• During a lab experiment framed as an art competition, participants gave lower peer review
scores to direct competitors than to other peers when the prize was split among the winners
(Balietti et al., 2016). When all winners received the same prize and the number of winners
was unlimited, participants gave similar scores to direct competitors and other peers.
As these examples show, participants do tend to take opportunities to manipulate peer mechanisms in their favor. The examples highlight one type of manipulation—
downgrading competitors—but manipulation can take other forms, such as collusion and nepo-
tism.
Notorious cases in academic peer review provide examples of collusion (Ferguson et al.,
2014; Littman, 2021). One researcher gives another researcher a positive review on their pa-
per in exchange for a positive review in return. The collusion can be more complex than two
researchers exchanging positive reviews. A collusion ring may form in which a larger group of researchers exchange reviews.8 For example, in a collusion ring with three researchers,
a writes a positive review about b, then b writes a positive review about c, and the ring closes
with c writing a positive review about a.
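As a purely illustrative sketch, the function below searches a directed graph of positive reviews for short cycles of the kind just described; as footnote 8 notes, detecting real collusion rings is far harder than this.

```python
def find_review_cycles(positive_reviews, max_len=3):
    """positive_reviews maps each reviewer to the set of authors they
    reviewed favorably; a directed cycle such as a -> b -> c -> a is a
    candidate collusion ring."""
    def dfs(path):
        for nxt in positive_reviews.get(path[-1], ()):
            if nxt == path[0] and len(path) > 1:
                yield tuple(path)
            elif nxt not in path and len(path) < max_len:
                yield from dfs(path + [nxt])
    rings = set()
    for start in positive_reviews:
        for cycle in dfs([start]):
            rings.add(frozenset(cycle))  # deduplicate rotations of a ring
    return rings

print(find_review_cycles({"a": {"b"}, "b": {"c"}, "c": {"a"}}))
# {frozenset({'a', 'b', 'c'})}
```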
For an example of nepotism, entrepreneurs in Hussam et al.’s (2022) study tended to in-
crease the rank of friends and family members—especially when the peer rankings influenced
the chance of receiving a business grant.
8 Jecmen et al. (2024) provide evidence that collusion rings are difficult to detect.
Participants do not only manipulate; they also make honest errors. Students differ in the reliability of their grades (Piech et al., 2013). In the context of poverty
targeting, community members often report that they don’t know the ranking of two fellow com-
munity members (Alatas et al., 2016).
If the mechanism designer can assume that participants do not make errors, she can design
a mechanism that punishes differences in peer evaluations. If two participants submit different evaluations of a given peer, at least one must be lying. The chance of errors makes such mechanisms difficult to implement. The designer does not know whether a participant is lying or making an
honest error. Any mechanism that punishes differences or rewards consensus must consider the
chance of errors.
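As a toy illustration of the tension, the check below flags a pair of evaluators only when their grades of the same peer differ by more than a tolerance. The data format and threshold are our own, and a real mechanism would need a principled error model.

```python
def flag_disagreements(grades, tolerance=1):
    """grades maps (evaluator, target) pairs to numeric grades. Flags
    evaluator pairs whose grades of the same target differ by more than
    the tolerance. With tolerance 0, honest noise is punished alongside
    lies; a larger tolerance is lenient but misses subtle manipulation."""
    by_target = {}
    for (evaluator, target), grade in grades.items():
        by_target.setdefault(target, []).append((evaluator, grade))
    flags = []
    for target, reports in by_target.items():
        for i, (e1, g1) in enumerate(reports):
            for e2, g2 in reports[i + 1:]:
                if abs(g1 - g2) > tolerance:
                    flags.append((target, e1, e2))
    return flags

grades = {("a", "c"): 7, ("b", "c"): 4, ("a", "d"): 6, ("b", "d"): 6}
print(flag_disagreements(grades))  # [('c', 'a', 'b')]
```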
In some contexts, participants may hold very little accurate information about their peers.
For example, Dupas et al. (2022) asked survey respondents in Abidjan, Côte d'Ivoire, to rank
neighbors from poorest to richest. The ranking contained many cycles and had a weak correlation
with other measures of wealth. An open problem is to design a mechanism that can adjust to the
level of peer information participants hold. Perhaps the mechanism could award more prizes or
larger prizes when participants provide more accurate peer evaluations.
A lack of consensus in peer evaluations is a clear sign that the participants have made errors.
But, the participants may make errors even when they agree. Trachtman et al. (2021) found that
although residents agreed on peer rankings of need, the rankings reflected long-term attributes,
and the residents could not identify which of their fellow residents were in immediate need. If the
mechanism designer aimed to collect a ranking of immediate need, the consensus peer rankings
would not reflect this aim.
9 Baumann (2023) is one exception. The mechanism uses an equilibrium where participants misreport their peer
evaluations.
Allowing for small amounts of manipulation may open up new approaches to designing peer mechanisms.
6. Research Challenges
Based on our reading of the theoretical and empirical research on peer mechanisms, we
highlight several important research challenges.
6.1. Collusion
If peers can communicate, they can collude. “I will give you a positive review if you give
me a positive review.” Peer mechanisms must prevent manipulation by groups as well as by
individuals.
Most existing peer mechanisms can be manipulated by groups. One partial exception is the
partition approach. Reviewers placed within the same group cannot collude because their reviews
only impact peers outside of their group. Unfortunately, the partition approach cannot prevent
collusion between participants in different groups.
Preventing all forms of collusion is likely impossible. For example, in the model of nominat-
ing one or more peers for a fixed number of winners, Alon et al. (2011) prove the impossibility
of designing a group-strategy-proof peer mechanism with good performance. Even if we can-
not design peer mechanisms immune to collusion, the challenge of discouraging and detecting
collusion is still important.
At a minimum, the mechanism should not encourage collusion.10 Consider a simple peer
mechanism for poverty targeting—give aid to a person if he claims he is poor and his neighbor agrees. The mechanism is impartial but provides a strong incentive to collude. Suppose the claimant is rich. He could claim he is poor and pay his neighbor to agree by giving his neighbor a portion of the aid.
10 For example, in designing algorithms to assign wine producers to certify fellow producers, Barrot et al. (2020) recognize that if two producers are assigned to review each other, they may be tempted to collude. The algorithms include a constraint that two producers cannot review each other.
As Rai (2002) shows, the pressure to collude can be alleviated by limiting the aid budget. If
two neighbors both claim to be poor, they each receive half the budget, whereas if they claim that
one of the neighbors is rich, the poor neighbor receives the full budget. The disadvantage is that
poor claimants with rich neighbors receive less aid than poor claimants with poor neighbors.
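A minimal sketch of the budget-splitting rule as described; the interface, and the case where neither neighbor claims poverty, are our own assumptions.

```python
def rai_allocation(first_claims_poor, second_claims_poor, budget=1.0):
    """If both neighbors claim to be poor they split the budget; if
    exactly one claims to be poor, that neighbor receives the full
    budget. The no-claim case (no aid) is our own assumption."""
    if first_claims_poor and second_claims_poor:
        return budget / 2, budget / 2
    if first_claims_poor:
        return budget, 0.0
    if second_claims_poor:
        return 0.0, budget
    return 0.0, 0.0

print(rai_allocation(True, True))   # (0.5, 0.5): joint claims are costly
print(rai_allocation(True, False))  # (1.0, 0.0): the poor neighbor gets all
```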
Besides early work by Rai (2002) on poverty targeting, we know little about how to discour-
age collusion in peer mechanisms, in which settings collusion is most likely, or how to detect
collusion. We encourage work on these open questions.
6.2. Nepotism
Much research on peer mechanisms starts with the assumption that people only care about
their own chance of winning the prize. If it’s not me, then I don’t care who wins.
Academic peer review provides a counterexample to the assumption. Reviewers are often
asked to list conflicts of interest—to avoid biases towards colleagues, coauthors, and students.
The conflicts of interest are usually public knowledge and can be modeled as a conflict graph.
The mechanism designer can prevent manipulation by choosing a partition that respects the con-
flict graph (Xu et al., 2019).
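One simple way to realize this idea is sketched below under our own assumptions (it is not the algorithm of Xu et al. (2019)): since the partition approach only uses reviews across groups, placing each connected component of the conflict graph inside a single group guarantees that no participant ever reviews a peer they have a conflict with.

```python
from collections import defaultdict

def conflict_respecting_partition(participants, conflicts, k=2):
    """Pack each connected component of the conflict graph into a single
    group, so conflicting participants always share a group and are
    therefore never asked to review one another."""
    parent = {p: p for p in participants}
    def find(p):  # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for a, b in conflicts:
        parent[find(a)] = find(b)
    components = defaultdict(list)
    for p in participants:
        components[find(p)].append(p)
    groups = [[] for _ in range(k)]
    for component in sorted(components.values(), key=len, reverse=True):
        min(groups, key=len).extend(component)  # greedy balancing
    return groups

print(conflict_respecting_partition(["a", "b", "c", "d"], [("a", "b")]))
# [['a', 'b'], ['c', 'd']]: a and b can never review each other
```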
What if the conflicts of interest are not public knowledge? As discussed above, empirical
research shows many examples of nepotism. Participants often favor family and friends. The
mechanism designer may not observe these social connections. Can the mechanism discourage
nepotism without observing the conflicts of interest? Hussam et al. (2022) show that using peer
prediction mechanisms to pay for accuracy can discourage nepotism. Are there other approaches
that can discourage nepotism?
6.5. Mechanisms that can respect constraints
We might have constraints on the winners. For example, we might want a gender-balanced
group of winners. One solution to this problem is to have the women vote on the male winners
and the men on the female winners. However, this may not be very satisfactory if women are better informed about other women and men about other men. How then can we adapt the peer mechanisms
discussed so far to deal with additional constraints like this? Such constraints have proved useful
for capturing real-world issues in other areas of social choice (e.g. diversity constraints in school
choice mechanisms (Aziz et al., 2019b)), and we may be able to adapt ideas from these domains
to peer mechanisms.
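A sketch of this cross-voting idea, with our own data format and labels:

```python
def gender_balanced_winners(nominations, gender):
    """Cross-voting sketch: only nominations across genders count, so
    women's reports decide the male winner and men's reports decide the
    female winner. gender maps each participant to "M" or "F"."""
    counts = {p: 0 for p in nominations}
    for voter, nominees in nominations.items():
        for nominee in nominees:
            if gender[voter] != gender[nominee]:
                counts[nominee] += 1
    best_man = max((p for p in counts if gender[p] == "M"), key=counts.get)
    best_woman = max((p for p in counts if gender[p] == "F"), key=counts.get)
    return best_man, best_woman

noms = {"w1": {"m1"}, "w2": {"m1"}, "m1": {"w2"}, "m2": {"w2"}}
sex = {"w1": "F", "w2": "F", "m1": "M", "m2": "M"}
print(gender_balanced_winners(noms, sex))  # ('m1', 'w2')
```

Because a participant's nominations only ever count toward the other gender's prize, no report can affect the reporter's own chance of winning.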
Peer mechanisms that respect constraints can have unintended consequences that create new
challenges. A lab experiment studied how a gender quota impacted groups of two men and two
women completing tasks for payment (Leibbrandt et al., 2018). In one version, performance was
measured by peer review, and only the top two performing participants in each group received
higher pay. Participants could manipulate the mechanism by under-reporting the number of
tasks their peers completed. When a gender quota was in place (where at least one woman
received higher pay), peers were more likely to under-report the performance of women. The
difference was driven by women being more likely to sabotage women in the presence of a gender
quota while men sabotaged women and men equally. The gender quota had the unanticipated
consequence of intensifying competition between women and focusing women’s manipulation
of peer review on other women.
7. Conclusion
Manipulation is a very real problem in peer mechanisms, where a group selects one or more of its members to win a prize, receive a ranking, or be given a grade. This survey identified
three broad approaches to prevent such manipulation: mechanisms designed to be impartial so
that a participant cannot impact their outcome, audits to detect and punish manipulation, and
rewards for truthful reports. Empirical evidence of manipulation in practice suggests several
outstanding research challenges, such as dealing with collusion between participants as well as
various forms of nepotism. Despite the considerable body of research in this area, there remain
many significant obstacles to be overcome in the design of peer mechanisms to address a range
of issues met in the real world.
Acknowledgments
This research was supported under the Australian Research Council's Laureate Fellowship (project number FL200100204). We thank Felix Fischer, Andrew Mackenzie, Hervé Moulin, Axel Niemeyer,
Shinji Ohseto, and Nihar Shah for their helpful suggestions. We also thank the anonymous ref-
erees for the detailed comments that helped to improve the paper.
References
Alatas, V., Banerjee, A., Chandrasekhar, A.G., Hanna, R., Olken, B.A., 2016. Network structure and the aggregation of
information: Theory and evidence from Indonesia. American Economic Review 106, 1663–1704.
Alatas, V., Banerjee, A., Hanna, R., Olken, B.A., Purnamasari, R., Wai-Poi, M., 2019. Does elite capture matter? Local
elites and targeted welfare programs in Indonesia, in: AEA Papers and Proceedings, pp. 334–39.
Alatas, V., Banerjee, A., Hanna, R., Olken, B.A., Tobias, J., 2012. Targeting the poor: Evidence from a field experiment
in Indonesia. American Economic Review 102, 1206–40.
Alcalde-Unzu, J., Berga, D., Gjorgjiev, R., 2022. Impartial social rankings: Some impossibilities. Working Paper SSRN
4068178.
Alon, N., Fischer, F., Procaccia, A., Tennenholtz, M., 2011. Sum of us: Strategyproof selection from the selectors, in:
Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge, pp. 101–110.
Amorós, P., 2011. A natural mechanism to choose the deserving winner when the jury is made up of all contestants.
Economics Letters 110, 241–244.
Amorós, P., 2023. Implementing optimal scholarship assignments via backward induction. Mathematical Social Sciences
125, 1–10.
Amorós, P., Corchón, L.C., Moreno, B., 2002. The scholarship assignment problem. Games and Economic Behavior 38, 1–18.
Aziz, H., Caragiannis, I., Igarashi, A., Walsh, T., 2019a. Fair allocation of indivisible goods and chores, in: Proceedings
of 28th International Joint Conference on Artificial Intelligence, pp. 53–59.
Aziz, H., Gaspers, S., Sun, Z., Walsh, T., 2019b. From matching with diversity constraints to matching with regional
quotas, in: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp.
377–385.
Aziz, H., Lev, O., Mattei, N., Rosenschein, J., Walsh, T., 2016. Strategyproof peer selection: Mechanisms, analyses, and
experiments, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 390–396.
Aziz, H., Lev, O., Mattei, N., Rosenschein, J.S., Walsh, T., 2019c. Strategyproof peer selection using randomization,
partitioning, and apportionment. Artificial Intelligence 275, 295–309.
Babichenko, Y., Dean, O., Tennenholtz, M., 2018. Incentive-compatible diffusion, in: Proceedings of the 2018 World
Wide Web Conference, pp. 1379–1388.
Babichenko, Y., Dean, O., Tennenholtz, M., 2020a. Incentive-compatible classification, in: Proceedings of the AAAI
Conference on Artificial Intelligence, pp. 7055–7062.
Babichenko, Y., Dean, O., Tennenholtz, M., 2020b. Incentive-compatible selection mechanisms for forests, in: Proceed-
ings of the 21st ACM Conference on Economics and Computation, pp. 111–131.
Balietti, S., Goldstone, R.L., Helbing, D., 2016. Peer review and competition in the art exhibition game. Proceedings of
the National Academy of Sciences 113, 8414–8419.
Bao, Z., Gangadharan, L., Leister, C.M., 2021. Deterrence using peer information. Working Paper SSRN 3725400.
Barrot, N., Lemeilleur, S., Paget, N., Saffidine, A., 2020. Peer reviewing in participatory guarantee systems: Modelisation
and algorithmic aspects, in: Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent
Systems, pp. 114–122.
Basurto, M.P., Dupas, P., Robinson, J., 2020. Decentralization and efficiency of subsidy targeting: Evidence from chiefs
in rural Malawi. Journal of Public Economics 185, 104047.
Baumann, L., 2023. Robust implementation with peer mechanisms and evidence. Working Paper.
Bjelde, A., Fischer, F., Klimm, M., 2017. Impartial selection and the power of up to two choices. ACM Transactions on
Economics and Computation (TEAC) 5, 1–20.
Bloch, F., Olckers, M., 2021. Friend-based ranking in practice, in: AEA Papers and Proceedings, pp. 567–71.
Bloch, F., Olckers, M., 2022. Friend-based ranking. American Economic Journal: Microeconomics 14, 176–214.
Bousquet, N., Norin, S., Vetta, A., 2014. A near-optimal mechanism for impartial selection, in: International Conference
on Web and Internet Economics, Springer. pp. 133–146.
Brero, G., Lepore, N., Mibuari, E., Parkes, D.C., 2022. Learning to mitigate AI collusion on economic platforms. ArXiv preprint arXiv:2202.07106.
Caragiannis, I., Christodoulou, G., Protopapas, N., 2019. Impartial selection with additive approximation guarantees, in:
International Symposium on Algorithmic Game Theory, Springer. pp. 269–283.
Caragiannis, I., Christodoulou, G., Protopapas, N., 2021. Impartial selection with prior information. ArXiv preprint arXiv:2102.09002.
Caragiannis, I., Krimpas, G.A., Voudouris, A.A., 2016. How effective can simple ordinal peer grading be?, in: Proceed-
ings of the 2016 ACM Conference on Economics and Computation, pp. 323–340.
Caragiannis, I., Krimpas, G.A., Voudouris, A.A., 2020. How effective can simple ordinal peer grading be? ACM
Transactions on Economics and Computation (TEAC) 8, 1–37.
Carpenter, J., Matthews, P.H., Schirm, J., 2010. Tournaments and office politics: Evidence from a real effort experiment.
American Economic Review 100, 504–17.
Cembrano, J., Fischer, F., Hannon, D., Klimm, M., 2022a. Impartial selection with additive guarantees via iterated deletion. ArXiv preprint arXiv:2205.08979.
Cembrano, J., Fischer, F., Klimm, M., 2022b. Optimal impartial correspondences, in: International Conference on Web
and Internet Economics, pp. 187–203.
Cembrano, J., Fischer, F., Klimm, M., 2023a. Impartial rank aggregation. ArXiv preprint arXiv:2310.13141.
Cembrano, J., Fischer, F., Klimm, M., 2023b. Improved bounds for single-nomination impartial selection. ArXiv preprint arXiv:2305.09998.
Cembrano, J., Griesbach, S.M., Stahlberg, M.J., 2023c. Deterministic impartial selection with weights. ArXiv preprint
arXiv:2310.14991.
Chakraborty, A., Jindal, J., Nath, S., 2024. Removing bias and incentivizing precision in peer-grading. Journal of
Artificial Intelligence Research 79, 1001–1046.
Conitzer, V., Walsh, T., 2016. Barriers to manipulation in voting, in: Brandt, F., Conitzer, V., Endriss, U., Lang, J.,
Procaccia, A.D. (Eds.), Handbook of Computational Social Choice. Cambridge University Press, pp. 127–145.
Conning, J., Kevane, M., 2002. Community-based targeting mechanisms for social safety nets: A critical review. World
Development 30, 375–394.
De Alfaro, L., Shavlovsky, M., 2014. Crowdgrader: A tool for crowdsourcing the evaluation of homework assignments, in: Proceedings of the 45th ACM Technical Symposium on Computer Science Education, pp. 415–420.
De Clippel, G., Moulin, H., Tideman, N., 2008. Impartial division of a dollar. Journal of Economic Theory 139, 176–191.
Dhull, K., Jecmen, S., Kothari, P., Shah, N.B., 2022. Strategyproofing peer assessment via partitioning: The price in
terms of evaluators’ expertise, in: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing,
pp. 53–63.
Dupas, P., Fafchamps, M., Houeix, D., 2022. Measuring relative poverty through peer rankings: Evidence from Côte
d’Ivoire. NBER Working Paper 29911.
Dütting, P., Feng, Z., Narasimhan, H., Parkes, D., Ravindranath, S.S., 2019. Optimal auctions through deep learning, in:
International Conference on Machine Learning, pp. 1706–1715.
Edelman, P.H., Por, A., 2021. A new axiomatic approach to the impartial nomination problem. Games and Economic
Behavior 130, 443–451.
Faltings, B., Radanovic, G., 2017. Game theory for data science: Eliciting truthful information. Synthesis Lectures on
Artificial Intelligence and Machine Learning 11, 1–151.
Ferguson, C., Marcus, A., Oransky, I., 2014. The peer-review scam. Nature 515, 480.
Fischer, F., Klimm, M., 2014. Optimal impartial selection, in: Proceedings of the Fifteenth ACM Conference on Economics and Computation, pp. 803–820.
Fischer, F., Klimm, M., 2015. Optimal impartial selection. SIAM Journal on Computing 44, 1263–1285.
Gal, Y., Mash, M., Procaccia, A.D., Zick, Y., 2017. Which is the fairest (rent division) of them all? Journal of the ACM
(JACM) 64, 1–22.
Goldman, J., Procaccia, A.D., 2015. Spliddit: Unleashing fair division algorithms. ACM SIGecom Exchanges 13, 41–46.
Holzman, R., Moulin, H., 2013. Impartial nominations for a prize. Econometrica 81, 173–196.
Huang, Y., Shum, M., Wu, X., Xiao, J.Z., 2019. Discovery of bias and strategic behavior in crowdsourced performance assessment. ArXiv preprint arXiv:1908.01718.
Hussam, R., Rigol, N., Roth, B.N., 2022. Targeting high ability entrepreneurs using community information: Mechanism
design in the field. American Economic Review 112, 861–98.
Ito, K., Ohsawa, S., Tanaka, H., 2018. Information diffusion enhanced by multi-task peer prediction, in: Proceedings of
the 20th International Conference on Information Integration and Web-based Applications & Services, pp. 96–104.
Jecmen, S., Shah, N.B., Fang, F., Akoglu, L., 2024. On the detection of reviewer-author collusion rings from paper
bidding. ArXiv preprint arXiv:2402.07860.
Kahng, A., Kotturi, Y., Kulkarni, C., Kurokawa, D., Procaccia, A.D., 2018. Ranking wily people who rank each other,
in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1087–1094.
Kotturi, Y., Kahng, A., Procaccia, A., Kulkarni, C., 2020. Hirepeer: Impartial peer-assessed hiring at scale in expert
crowdsourcing markets, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2577–2584.
Kurokawa, D., Lev, O., Morgenstern, J., Procaccia, A.D., 2015. Impartial peer review, in: Proceedings of the Twenty-
Fourth International Joint Conference on Artificial Intelligence, pp. 582–588.
Lee, M.K., Baykal, S., 2017. Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division, in: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pp. 1035–1048.
Leibbrandt, A., Wang, L.C., Foo, C., 2018. Gender quotas, competitions, and peer review: Experimental evidence on the
backlash against women. Management Science 64, 3501–3516.
Lev, O., Mattei, N., Turrini, P., Zhydkov, S., 2021. Peer selection with noisy assessments. ArXiv preprint
arXiv:2107.10121.
Lev, O., Mattei, N., Turrini, P., Zhydkov, S., 2023. Peernomination: A novel peer selection algorithm to handle strategic
and noisy assessments. Artificial Intelligence 316, 103843.
Li, Z., Zhang, L., Fang, Z., Li, J., 2018. A two-stage mechanism for ordinal peer assessment, in: International Symposium
on Algorithmic Game Theory, Springer. pp. 176–188.
Littman, M.L., 2021. Collusion rings threaten the integrity of computer science research. Communications of the ACM
64, 43–44.
Mackenzie, A., 2015. Symmetry and impartial lotteries. Games and Economic Behavior 94, 15–28.
Mackenzie, A., 2020. An axiomatic analysis of the papal conclave. Economic Theory 69, 713–743.
Maitra, P., Mitra, S., Mookherjee, D., Visaria, S., 2020. Decentralized targeting of agricultural credit programs: Private
versus political intermediaries. NBER Working Paper 26730.
Mattei, N., Turrini, P., Zhydkov, S., 2020. Peernomination: Relaxing exactness for increased accuracy in peer selection,
in: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pp. 393–
399.
Merrifield, M.R., Saari, D.G., 2009. Telescope time without tears: a distributed approach to peer review. Astronomy &
Geophysics 50, 4–16.
Ng, Y.K., Sun, G.Z., 2003. Exclusion of self evaluations in peer ratings: An impossibility and some proposals. Social
Choice and Welfare 20, 443–456.
Niemeyer, A., Preusser, J., 2022. Simple allocation with correlated types. Working Paper.
Ohseto, S., 2012. Exclusion of self evaluations in peer ratings: monotonicity versus unanimity on finitely restricted
domains. Social Choice and Welfare 38, 109–119.
Piech, C., Huang, J., Chen, Z., Do, C.B., Ng, A.Y., Koller, D., 2013. Tuned models of peer assessment in MOOCs, in: Proceedings of the 6th International Conference on Educational Data Mining, pp. 153–160.
Rai, A.S., 2002. Targeting the poor using community information. Journal of Development Economics 69, 71–83.
Shah, N.B., 2022. Challenges, experiments, and computational solutions in peer review. Communications of the ACM
65, 76–87.
Stelmakh, I., Shah, N.B., Singh, A., 2021. Catch me if I can: Detecting strategic behaviour in peer assessment, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 4794–4802.
Tamura, S., 2016. Characterizing minimal impartial rules for awarding prizes. Games and Economic Behavior 95, 41–46.
Tamura, S., Ohseto, S., 2014. Impartial nomination correspondences. Social Choice and Welfare 43, 47–54.
Topping, K., 1998. Peer assessment between students in colleges and universities. Review of Educational Research 68,
249–276.
Trachtman, C., Permana, Y.H., Sahadewo, G.A., 2021. How much do our neighbors really know? The limits of
community-based targeting. Working Paper.
Vera-Cossio, D., 2022. Targeting credit through community members. Journal of the European Economic Association
20, 778–821.
Walsh, T., 2014. The peerrank method for peer assessment, in: Proceedings of the Twenty-first European Conference on
Artificial Intelligence, pp. 909–914.
Wang, J., Shah, N.B., 2019. Your 2 is my 1, your 3 is my 9: Handling arbitrary miscalibrations in ratings, in: Proceedings
of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 864–872.
Wang, Y., Fang, H., Cheng, C., Jin, Q., 2018. TSP: Truthful grading-based strategyproof peer selection for MOOCs, in: 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), IEEE. pp. 679–684.
Wąs, T., Rahwan, T., Skibski, O., 2019. Random walk decay centrality, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2197–2204.
Witkowski, J., Parkes, D.C., 2012. A robust bayesian truth serum for small populations, in: Proceedings of the AAAI
Conference on Artificial Intelligence, pp. 1492–1498.
Xu, Y., Zhao, H., Shi, X., Shah, N.B., 2019. On strategyproof conference peer review, in: Proceedings of the 28th International Joint Conference on Artificial Intelligence, AAAI Press, pp. 616–622.
Zarkoob, H., d’Eon, G., Podina, L., Leyton-Brown, K., 2023. Better peer grading through bayesian inference, in:
Proceedings of the AAAI Conference on Artificial Intelligence, pp. 6137–6144.
Zhang, X., Zhang, Y., Zhao, D., 2021. Incentive compatible mechanism for influential agent selection, in: International
Symposium on Algorithmic Game Theory, Springer. pp. 79–93.
Zhao, D., 2021. Mechanism design powered by social interactions, in: Proceedings of the 20th International Conference
on Autonomous Agents and MultiAgent Systems, pp. 63–67.
Zhao, Y., Zhang, Y., Zhao, D., 2023. Incentive-compatible selection for one or two influentials, in: IJCAI International Joint Conference on Artificial Intelligence, pp. 2931–2938.