1 Introduction
There is a growing desire among researchers to create academic venues that are more accessible and safe spaces for increased equity, diversity, and inclusion [2] (cf. the SIGCHI Equity Talks series [14]). Despite this, much knowledge building and sharing, both in academia more broadly and in Human-Computer Interaction (HCI) in particular, is dominated by Western research [34]. In recent years, conferences have begun to run workshops and panels to address such challenges within their own communities and to create safe spaces and accessible venues (see [17, 20]). Despite this, little research has been done on how to support underserved researchers within academic communities. As Kumar and Karusala posit in their discussion of marginalization within the HCI community: “Every year there are scholars from underrepresented contexts across the Global South and North, who are unable to afford participation in [HCI’s] most celebrated publication venues. This impacts their ability to learn from and network with others” [30].
Every interest-based or practice-based community, including conference communities that engage with global challenges, exhibits a number of characteristics that revolve around what its members have in common, and is also defined by who it excludes [32]. Both how selection into the conference space is carried out and who is accepted into that space shape the nature of the community that is present. Thus, it is important to notice and scrutinize the boundaries created by top conferences as they decide who is included in, and excluded from, the conference community.
Early career researchers (ECRs), particularly those based in the Global South, are the most acutely impacted by barriers to entry within academic venues [1, 50]. Though there are varied definitions of the term, in this article we take ECRs to be postgraduate students and researchers with fewer than five years of research experience from their PhD award date, or from their research career start date if they do not have a PhD. Barriers to entry include imposter syndrome, limited professional development on how to engage with academic communities, a lack of professional network connections, the time and financial costs of attending conferences abroad (which are often not justifiable for many researchers [1]), as well as logistical issues concerning the location of the venue where the conference and its events are held. As a consequence of these barriers, ECRs in the Global South are often excluded from the mentoring and networking events, workshops, and doctoral consortia that would benefit their academic growth [15]. Research by historically under-represented researchers (i.e., Black, Indigenous, people of color, people from the Global South, women, etc.) also tends to be less valued [30]. Writing about ECRs with disabilities, Kirkham et al. made a similar argument about their lack of inclusion within conferences and the need to introduce modifications within peer-review processes to promote inclusion and accessibility [26].
As a gateway for inclusion, academic conferences require high-quality reviewers to ensure the acceptance of high-quality submissions and the tactful rejection of low-quality ones. The reviewing process is a meaningful avenue for ECRs to engage with conference communities. However, despite their interest, ECRs are denied this opportunity as they often have little or no formal reviewing experience. While there have been attempts by Special Interest Groups (SIGs) to publicize open calls for reviewer positions, a lack of mentorship remains a barrier to participation. This leads to other problems, such as work from the Global South being misjudged by reviewers unfamiliar with such contexts [33]. Thus, we argue that supporting underserved academics within the peer-review process is an area of concern for conference communities (as much as supporting any other professional community).
While uncommon within HCI spaces, Shadow Program Committees (Shadow PCs, or SPCs) have been used by other technology conferences (e.g., [15, 23]) with the intent of educating and equipping junior researchers to participate in conference reviewing processes. Writing about their motivation for introducing an SPC to the Operating Systems academic community, Rebecca Isaacs argued that “the top conferences seem inaccessible and remote... dominated by a small number of mainly US institutions” [23]. It is with a similar motivation that we piloted an SPC within an ACM cross-disciplinary conference called SUBVERT Conference (pseudonym for review). SUBVERT Conference is committed to addressing global challenges faced by under-represented communities and key challenges that hinder the growth of sustainable societies, making it a viable venue to explore inclusion and the sustainability of future reviewing processes. We sought to address the limited review training opportunities available to ECRs (particularly those from the Global South) as an avenue for engaging with their academic communities, thereby improving the diversity and inclusion of conference program committees and creating a more equitable peer-review process. The SPC is an innovative approach that works to address the barriers faced by ECRs in two ways: (1) allowing ECRs to “shadow” the review process followed by conference program committee (PC) members, supporting their learning about the reviewing process in a safe space with their peers; and (2) enabling conference committees to prepare junior researchers for serving on future program committees.
Our work contributes to the ongoing (and increasingly proactive) discourse within conference venues about the inclusion of underserved communities by: (1) identifying an opportunity for conferences to be scaffolded around the professional development of ECRs, who are currently sidelined by existing onboarding and reviewing processes; (2) taking a capability approach to highlight the immense agency of ECRs, and designing peer-based infrastructures placed in the vicinity of existing conferences to facilitate their capacity for self-organizing; and (3) re-imagining peer-review processes in modern conferences and journals to support mentorship and facilitate author-reviewer transfer of knowledge, in pursuit of academic venues’ desire to better train the next generation of reviewers. We argue that such innovative approaches, which aim to provide transnational and cross-institutional professional development for academics, are critically needed in the post-COVID academic hybrid-conferencing environment. This involves addressing the emerging disparities in relation to equitable participation and challenges in community-building for underserved researchers.
3 Context and Motivation
During the planning stage of SUBVERT Conference, the organizers engaged in several conversations challenging the existing reviewer recruitment methods and brainstorming ways in which conference attendee/presenter diversity could be increased while expanding opportunities for ECRs, especially those from the Global South, to add their voices to existing conversations. Some questions that shaped these conversations included: (1) How do we create opportunities for ECRs to become equipped with the appropriate tools to join the cohort of conference reviewers? (2) How do we diversify the existing reviewing pools and open up these conference communities? (3) What platforms and spaces should we be creating for ECRs to open up, share their experiences and suggestions, and learn from each other? In addressing these questions, the organizers believed they could provide ECRs with more tangible steps toward becoming better reviewers and improving their chances of joining future conference program committees.
In the previous section, we outlined some previous approaches to the professional development of ECRs. However, despite the many benefits these programs delivered, we identified that they were not sufficiently equipped to address the contextual challenges of a distributed community of ECRs in the context of COVID-19. This prompted the SUBVERT Conference general chairs (with a combined academic experience of 50 years) to set up a working group of ECRs to design and develop a training academy specific to the SUBVERT Conference. This working group (the authors of this article) consisted of four ECRs at different stages of their academic journey, from PhD candidate to five years post-PhD. Each came from a different cultural context (British Asian, African, white American, and South Asian). While the team had prior experience reviewing papers, they had no experience of serving on a program committee. They drew on these experiences to develop the program from the perspective of what ECRs would desire and need to become better reviewers. The working group worked closely with the experienced general chairs to design the training program. For instance, the chairs introduced the ECRs to prominent members of the SUBVERT Conference community, who were able to contribute towards resources developed for emerging reviewers.
5 Findings
5.1 Infrastructures for Mentorship and Growth
Junior researchers need extra guidance as they navigate the typical processes of academia [35, 45]. Our study shows the importance of infrastructures that provide mentorship from peers and seniors, access to curated resources, and curated spaces to explore the boundaries of their discipline.
During the orientation meetings, many participants described their previous experiences of submitting to peer-reviewed conferences, and admitted their ignorance of what happens behind the scenes. Consequently, they ventured into the SPC program to learn the basics of reviewing structures (in addition to the technical aspects of writing reviews). For junior researchers, doctoral programs may provide training on critically reading research papers and academic writing, but it is not until they “gain practical experience” [P32] that the learning truly takes place. From our own experience as researchers who have published at HCI conferences, we noted that it was not until we started reviewing papers that we improved as paper writers. This sentiment was put forward by our participants as well: “This experience would help me to understand what attributes of a paper make it more effective in communicating the subject matter with its readers. It will also help me to learn how to identify the strengths, weaknesses, and points to improve upon in a paper. These experiences will be a stepping stone for becoming a more focused researcher and better at presenting research works through quality writing” [P18]. Thus, taking part in the SPC was as much about becoming a better reviewer as it was about becoming a better author. Arguably, the two are inseparable.
Our data indicates that the SPC was valued by participants because it exposed them to disciplines outside their direct area of expertise (a concern particularly relevant for computer scientists). “I’m eager to learn more about the scholarship procedures, norms, and expectations in the SUBVERT Conference community... what methods, concepts, and practices are valued and/or missing within the SUBVERT Conference discourse” [P66]. Others, despite having experience with inter-disciplinary venues, wanted to continue their professional development and stated: “I hope to sharpen my review skills and gain more exposure to the behind the scenes process of paper reviews. While I have been a reviewer before for conferences like CHI and DIS, I have never participated in a review mentorship program. I want to ensure that I am contributing back to the academic community in a beneficial way” [P64].
The networking potential of the SPC was also highly valued by participants. Participants believed the SPC would be a great opportunity to meet and interact with fellow ECRs and to learn from experienced researchers around the world. For example, one participant mentioned: “Besides learning and gaining experience, I look forward to enriching networking opportunities and getting exposed to the work being carried out in our research community” [P23]. However, while group-based programs like the SPC are valuable, there is still a need for more direct mentorship approaches. Our participants were based in institutional contexts where these opportunities were not available and wanted “to be mentored in reviewing research and conference papers” [P20]. Another participant bemoaned the general nature of information available about the reviewing processes for CS conferences and asked for “access to distinctive mentors... for critical and timely guidance in the academic review process” [P18]. Moreover, the participants were eager for individual-level feedback from the authors on the reviews they had submitted, but this was not possible due to the confidentiality promised to authors. Instead, the SPC organizers attempted to provide mediated feedback from the authors, using illustrative quotes from the authors to help participants identify where their own reviews exhibited (or lacked) the qualities that authors found helpful.
5.1.1 Diversifying the Nature of the Program.
Although the Shadow program has provided a platform for equipping ECRs in the review process, as one participant shared, “I now have a framework to use when embarking on a review” [Group A Debrief Session], there was a strong push for more one-on-one and in-depth mentorship opportunities. This speaks to the continued need to create spaces for equipping and training ECRs in reviewing. Given the nature of the program, time constraints, and available resources, one-on-one mentoring was not a feasible solution with 81 participants. However, useful suggestions going forward have emerged from the process, such as “having a buddy system–pairing SPC with experienced mentors” [Group A Debrief Session, Breakout Room 1]. Participants were also eager to receive individual feedback from authors on their reviews, which was not feasible due to the number of participants and the confidentiality that was promised to authors for this SPC. Having diverse shadow programs that are complementary in nature is one way to cater for the divergent needs of ECRs without burdening a handful of SPC trainers on a single program. This may be an opportunity for conference communities to offer creative opportunities that cater to the different stages and needs of ECRs.
5.2 Understanding the Intricacies of the Reviewing Process
Since the majority of participants had limited experience of formally reviewing papers, many questions were raised throughout the program. For those who had never reviewed papers before, the SPC provided a platform to gain “first-hand experience of reviewing a full paper” [Group B Debrief Session]. Others, who had previously reviewed a few papers, gained the opportunity to learn “the entire process of reviewing for a conference, end to end” [Group A Debrief Session]. This included a deeper understanding of subcommittees and tracks (at conferences such as COMPASS), and how to frame their research to maximize their chances of gaining a fair hearing from reviewers. These discoveries helped to demystify the review process, especially for first-time reviewers.
As anticipated, participants did indeed relate to us as organizers who were ECRs, and felt free to ask a range of questions during the training sessions that were unrelated to review-writing. Such questions prompted us to broaden the scope of the SPC discussions to be flexible to participant needs. Through this approach, we also noticed that SPC members felt comfortable challenging biases within existing institutional mechanisms. Some attendees, based in institutions that exclusively targeted top-tier conferences, were curious to discover more about alternatives and why some conferences are held in high esteem: “I am also curious to understand the difference in the reviewing process of CHI and other conferences. Why do we hear that the quality of CHI reviewing is better than others?” [Group B, WhatsApp].
SPC participants also raised questions about reviewing etiquette across conferences, especially as such etiquette is not explicitly differentiated between venues. This can be a difficult terrain to navigate for novice researchers working in an interdisciplinary research area. One participant pointed out that sample reviews from HCI conferences that we provided looked “more detailed than what you would expect from a systems conference” [Group C Orientation Meeting 2]. At first glance, observations such as this seem basic, but they point to the numerous parts of the conferencing experience where the guidance of an experienced researcher is needed. Whenever such queries were raised, researchers in the WhatsApp group with some experience would often share their knowledge or advice. One such instance was when several papers had not followed the anonymous submission guidelines. Several participants brought this to our notice: “I got one paper mentioning the author’s name and affiliation... I wonder if it is ethical to review. I mean reviewers might be biased!! What do you think?” [WhatsApp Group A]. In this case, reviewers who had previously submitted to this conference shared their views on how to tactfully approach this concern.
SPC participants also learned how to engage with other reviewers to make accept/reject decisions and how to synthesize reviews when writing a meta-review. Participants sought guidance on how to resolve differences in opinion, asking questions like: “As meta-reviewers, can we use the “add comments” feature on HotCRP with the option “hidden from authors” as a way to discuss and deliberate among the reviewers, specifically if there are disagreements in terms of “overall merit” of a paper?” [Group D WhatsApp]. Taking on board the spirit of the SPC, that is, a collaborative and constructive approach to review-writing, we noticed many reviewers taking extra care to write balanced and carefully thought-out reviews.
We also encountered questions exploring the nature of meta-reviews. For example, one participant asked: “When writing a meta review, should the score assigned to the paper be the average between 2AC and R or an independent choice of the meta reviewer?” [Group B WhatsApp]. Participants wondered about the role the score plays, and the power the meta-reviewer’s judgement holds, in the peer-review process. Some even went so far as to suggest that the meta-reviewer role could be antithetical to the spirit of peer-review: “The metareview can be problematic because it’s basically a translation... the authors might lose [out]... because the AC is favoring one idea over the other as... a centralized decision maker. Personally, I think that plays into un-democratizing academia, which makes me feel a little bit uncomfortable.” [Group A Orientation Meeting 2]. In this vein, some participants preferred meta-reviews that played the role of a helpful summary generator and interpreter (i.e., taking the brief comments of reviewers and helping authors understand how to address them). “So I do like the second review that we read, which was breaking out each reviewers’ work separately and just making them more direct in their wording and feedback to the authors”. Perhaps due to the participants’ inexperience with the review process, we did not encounter any discussions around contentious practices in mainstream conferences, for example, varying acceptance rates between subcommittees at a conference, or Program Chairs enforcing stronger accept/reject ratings to reduce an imbalance of papers in the middle range.
5.3 Growth in Self-efficacy as Academics
As we have seen so far, taking part in the review training process not only provided participants with tips and techniques on how to review, but provoked them to engage with and ask questions about the wider conferencing process. This had two effects on our participants: (1) gaining a greater appreciation of wider research, and (2) rethinking existing reviewing practices.
Firstly, they reassessed their identity as researchers and their own writing practices: “reviewing opened my thoughts to the mindset of a reviewer” [Group B Debrief Session]. For some, it was a shift in their understanding of the importance of the reviewer role and a desire to serve in that role in the future: “being conscious that by reviewing a paper I’m playing a role in advancing knowledge in my field will stay with me throughout my academic journey” [Group C, Debrief Session]. In other words, as a result of their experience of being on both sides of the author-reviewer divide, they gained a broader understanding about how academic research operates: “[this experience] provided a better way of looking at research” [Group B Debrief Session]. Furthermore, the SPC training’s emphasis on reviewer mindset caused many participants to reflect on their own approaches to thinking through and writing reviews: “the [resources] helped me understand how to deal with internal biases which often crop up while reading a paper. I will now definitely try to have an internal dialogue to reduce these biases” [Group A, WhatsApp].
Secondly, the SPC proved to be a space for participants to challenge the peer-review process and the way it is practiced in Computer Science conferences. The majority of participants came into the program with little to no prior review knowledge and experience and, perhaps unsurprisingly, spoke frequently of their lack of confidence to review papers. Throughout the program, as participants engaged in the different activities that were assigned, they began to grow in confidence. Such confidence enabled the participants to see the program not only as a space for becoming better reviewers but also as a space to identify ways in which the SPC and the overall academic review system could be enhanced. Some participants identified issues with current reviewing practices, and discussed better approaches. They began to question different processes, such as the PC meeting and how it is conducted; the ranking of final papers that make it through a typical conference program; and the selection of reviewers and the approaches for reviewers to engage with and discuss papers before writing their final reviews. A dominant critique was in the area of reviewer self-ratings: “This is questionable how people categorize themselves as expert or knowledgeable. They might be an expert on methods, but not expert in the domain. I have seen ‘experts’ that gave one hundred percent wrong feedback on my paper” [Group C Orientation Meeting 2]. A participant who had previous experience of reviewing papers noted how potential reviewers accept and reject papers based on limited information: “When review requests come to me... just reading a 150 word abstract you will probably not know [if you are an expert]. This is actually a problem of the system as well” [Group C Orientation Meeting 2].
Reviewer self-ratings do play a role in how a review is weighed by a meta-reviewer or a journal editor. However, whether the process by which such ratings are produced is useful to the overall review process has been questioned. At times, reviewers rate themselves as experts yet provide poor-quality reviews; conversely, there are cases where reviewers rate themselves as merely knowledgeable yet engage more fully with a paper and provide more meaningful feedback. In such cases, important questions arise about how useful the rating is to the final decision on a paper. We need to explore alternative means of helping reviewers rate themselves more accurately or holistically, providing guidance and ways to capture their expertise across multiple dimensions (rather than a single 5-point summary scale). Questions were also raised about cases where expertise might be high but, due to limited time spent, the review may not be useful; as such, there might be value in introducing a confidence score, where the reviewer rates the value of their own review in terms of their own understanding. These are some of the concerns about existing practices raised by the SPC participants.
We also noticed a growth in participant self-efficacy through their own belief in being able to write good reviews, provided they were given extra time: “It gave me a lot of confidence. I am now able to write a review that I am proud of. I write knowing that I am giving it my best. Thank you again!” [Group C Debrief Meeting]. Participants frequently wished to have multiple rounds of discussions with their fellow reviewers before finalizing their reviews, confident that through this process they could give the paper under consideration full coverage, as each person took a different angle. To facilitate this, many groups of participants asked for extra time, which was granted.
5.4 Giving Feedback to Reviewers: The Missing Link?
As part of the SPC, we introduced a post-review feedback activity that was designed to close the author-reviewer feedback loop. As part of this activity, we contacted the authors of the papers that were reviewed and conducted Zoom interviews with them, using the reviews as a discussion point. During these interviews, we referred back to the participants’ reviews to understand the authors’ perspectives on, and reactions to, particular sections the reviewers had contributed. Typically, reviewers receive limited or no feedback on the reviews they provide to authors. Some reviewing platforms, such as HotCRP, allow reviewers to anonymously rate their peers’ reviews. However, the only feedback solicited from authors comes during the rebuttal stage, during which authors have to defend the merit of their work (and which, as such, is not intended to be feedback on the quality of the reviews received).
This type of emphasis is not present in previous attempts at SPC, because in those attempts, the emphasis was on creating a separate safe space for amateur reviewers, similar to a moot-court experience for budding lawyers. However, we found that by exposing the reviews of amateur reviewers to real-world authors and receiving their live feedback, a rich avenue for formative feedback emerged. Authors indicated the ways in which the reviews were useful or not, and provided suggestions on how reviewers could improve going forward. The feedback from the authors was collated and summarized into core themes that formed part of the reflective discussions that occurred during the debrief meetings at the end of the program. Such formative feedback provided an additional learning opportunity for the SPC participants.
Many authors felt that the quality of the reviews from the SPC and the Technical PC (TPC) was similar, particularly in cases where the SPC members were familiar with the authors’ field. For example, Author 7 said: “To be honest, if I hadn’t known that I received the reviews from the Shadow PC, [then] that would have sufficed as feedback. And also in terms of the decision... there is a very good congruence [with the Technical PC]”.
In many cases, authors even pointed out that the SPC reviews highlighted useful areas for improvement that were not identified by the TPC reviewers. “I think people [from the SPC] really put in a lot of effort. Because there are things that I didn’t get comments on from the Technical PC, that I feel are still very strong points that I’m going to pick up” [Author 7]. From our analysis of the reviews, we surmised that this difference could be attributed to the amount of time and scrutiny that SPC members applied to papers. Their increased efforts, focused on fewer papers than members of TPC (who typically have a greater reviewing load), meant that they were able to pick up more areas of improvement at a micro level. This can also be seen in Author 2’s assessment of the two sets of reviews they received: “we found actionable feedback in a number of the reviews... [Shadow PC reviews] were much clearer about, ‘Here is my concern with the paper, here is how I might address that”’ [Author 2]. However, it was not always the case that SPC reviews were on par with TPC reviews. As would be expected from a training program’s first group of trainees working in a cross-disciplinary conference space, many SPC members were assigned papers to review that were outside their disciplinary area of expertise.
Inevitably, this led to SPC members providing general and high-level feedback, not providing commentary on methods or domains within papers that they were not comfortable with or had limited exposure to: “I felt like the [Shadow PC] reviewers were more scratching the surface, not going into depth” [Author 6]. Indeed, as part of the training, we had advised the participants on strategies for such situations (i.e., acknowledging their limited area of expertise and focusing on the areas of the paper that they were confident addressing).
The authors we spoke with also commented on the quality of the writing and communication from the reviewers. Reviews framed in a constructive and polite tone were considered highly desirable. For example, Author 2 said, “I feel like a lot of effort and thought went into making the reviews useful and critical, but not overly harsh. Like, yeah, you see lots of memes about ‘Reviewer 2,’ and I didn’t get ‘Reviewer 2’! And I’m grateful for that”. Based on their previous experience of receiving terse and unhelpfully-framed reviews, the present set of reviews were a welcome change. On the other hand, we also heard from one author whose paper was negatively rated by both SPC and TPC reviewers about negative experiences regarding the communication of reviewer feedback: “I thought [one Shadow PC reviewer] took a very aggressive tone... the way they were asking questions was, “How can you not do this? How can you...?” things like that. Whereas [another Shadow PC reviewer], had a very calm but critical stance which they took in the reviews. It definitely helps to have that calmness in the tone” [Author 6].
6 Discussion
The SPC at SUBVERT Conference was designed with the assumption that participants would learn the most from it as a collaborative experience that mimicked the features of a real-world program committee as closely as possible. Our findings provide many observations about the nature of reviewing, junior researchers’ views of conferences, and the challenges they face in their own professional development. Drawing on these observations, we list four design opportunities for HCI researchers who want to explore innovative ways to support junior researchers and improve the conferencing process more broadly and the reviewing process more specifically.
6.1 Designing for Social Capital
Experienced PC members often reference the relationships they built during their time on the PC, suggesting that networking is a key outcome of participation in program committees. In addition, earlier SPC programs identified that participants wanted to “put a name to a face” and meet other participants [15]. Even before the pandemic, there were barriers to entry to these committees for researchers lacking resources (e.g., time off from teaching commitments and travel funding). Since 2020, these barriers have been exacerbated, as researchers worldwide have been significantly impacted by restricted opportunities for networking.
When designing online events, we should consider what is valued in their offline counterparts. Anderson & Anderson argue that the most important function of a conference is
“the creation of opportunities for informal socialization, entertainment and networking. deal making... bonding and friendship building by members of that community” [
3, p. x]. Unfortunately, online academic events struggle to replicate these benefits. Previous work has explored the design of conference spaces for socialization through the simulating venues for informal discussions (e.g., coffee hours) [
22], or the use of interactive virtual hangout spaces (e.g., Second Life) [
21,
49]. However, such techniques have had mixed success and uptake [
49].
We approached the SPC program as a different means of facilitating networking in and around online conferencing events. Mindful of participants from the Global South, we made four design decisions aimed at supporting ECRs in gaining social capital and building relationships: (1) we split participants into four roughly equal cohorts (of approximately 20 individuals each), who were then able to build relationships over the duration of the program; (2) we distributed participants across the groups so that no one country or region was over-represented in any group (despite the inconvenience this caused for meeting scheduling), which ensured that participants based in India (where 18 of the ECRs came from) could be co-reviewers with participants in Uganda or Iran; (3) we also distributed participants by career stage, to ensure an even balance of PhD candidates and post-docs in each cohort; and (4) we prioritized supporting asynchronous communication (through WhatsApp), given the wide range of time zones participants came from. This practice is also common at other conferences, notably at the PC meetings of HCI conferences.
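The cohort-splitting procedure described above (decisions 1–3) amounts to a stratified, balanced assignment. The sketch below is an illustrative reconstruction only, not the process we actually used to form cohorts; the participant records and field names (`region`, `stage`) are hypothetical. Participants are first grouped by (region, career stage) and then dealt round-robin across cohorts, so that cohort sizes stay roughly equal and no single region or career stage dominates any cohort.

```python
import itertools
from collections import defaultdict

def assign_cohorts(participants, n_cohorts=4):
    """Stratified round-robin assignment.

    Participants are grouped into strata by (region, career stage),
    then dealt one at a time across the cohorts, which keeps cohort
    sizes balanced and spreads each stratum across all cohorts.
    """
    strata = defaultdict(list)
    for p in participants:
        strata[(p["region"], p["stage"])].append(p)

    cohorts = [[] for _ in range(n_cohorts)]
    dealer = itertools.cycle(range(n_cohorts))
    for stratum in strata.values():
        for person in stratum:
            cohorts[next(dealer)].append(person)
    return cohorts

# Hypothetical example: 8 participants from 3 regions, 2 career stages.
people = [
    {"name": f"p{i}", "region": r, "stage": s}
    for i, (r, s) in enumerate(
        [("India", "phd"), ("India", "phd"), ("India", "postdoc"),
         ("Uganda", "phd"), ("Uganda", "postdoc"),
         ("Iran", "phd"), ("Iran", "postdoc"), ("Iran", "phd")]
    )
]
cohorts = assign_cohorts(people, n_cohorts=4)
```

One design trade-off this makes visible: dealing strata round-robin prioritizes regional and career-stage balance over time-zone convenience, which is exactly why asynchronous channels such as WhatsApp (decision 4) become necessary.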
Furthermore, these design decisions extended the work of previous SPC programs and the earlier publications describing them. Earlier SPCs were conducted in-person and, as a result, most participants were from high-income countries (Europe, US, and Canada) [
15,
23], reducing access for ECRs from the Global South and limiting the breadth of networking opportunities. For instance, Feldmann et al. reported that invitees from Asia were unable to participate in the SIGCOMM SPC due to the cost of travel to Germany [
15].
There are many new opportunities for increasing the social capital of junior researchers that have emerged since the widespread adoption of virtual and hybrid conferences. At a fundamental level, we argue that the review process can be redesigned to integrate junior researchers into the academic community and the process of peer-review. SPC is but one of many approaches that can be used to achieve this aim. Others have piloted approaches such as: (1) introducing transparency, by using application forms for applying to be part of a PC (and thus making the process of enlisting reviewers more inclusive); and (2) ECR-specific events, by building in workshops or colloquiums deliberately focused on providing short mentorship sessions where confidentiality is possible [
16]. We can also design new opportunities for peer-mentorship to reduce any challenges around senior researchers being burdened with mentoring activities. For instance, one such approach may involve pairing junior faculty members and recent PhD graduates with first-year PhD candidates.
6.2 Designing for Reliable Reviewer Processes
One area that has received significant attention is improving the peer-review process as a means of overseeing quality in academic scholarship. Peer-reviewers and the peer-review process play a significant role in academic publishing. However, the peer-review process itself has to be effective if it is to uphold, as it seeks to, the quality and validity of scholarly publications [
12]. The peer-review process continues to be strained as more academics enter the academic market and publications continue to rise. Consequently, there is a growing demand for reviewers to keep up with the growing publication outputs and conference/journal needs [
46,
52,
57]. Moreover, it has proven difficult to maintain a pool of reviewers and have them meet deadlines [
12]. According to Newman et al., the typical HCI review process
“suffers from some recurring problems such as highly unpredictable submission levels, shortage of experienced reviewers, and a complex, tightly scheduled review cycles. Reviewers often encounter these problems in the form of extra papers to review, papers outside their area of expertise, and lack of sufficient time to do the job properly” [
44].
Others before us, such as Birman and Schneider, have questioned the assumptions underpinning CS conferences’ reviewing processes and how they lead to reduced impact of researchers’ work. Addressing this, they argue, will require a fundamental rethink:
“Force fields are needed to encourage researchers to maximize their impact, but creating these force fields will likely require changing our culture and values” [
7]. Grudin similarly argues that although a potential solution is to bring in junior researchers, they may feel more comfortable identifying minor flaws and less comfortable declaring whether work is more or less important [
19]. Moreover, the standard expectations around reviewing have many nuances that are often not addressed. For instance, while reviewers are often told to identify whether a paper makes original contributions, they are not told to identify “innovation for innovation’s sake” or cases where innovation has potentially negative consequences (for instance, if a new technology is introduced among a marginalized group). Identifying this distinction requires nuance, which researchers like Sturdee et al. argue is lacking among ECRs and those new to the peer-review process [
55].
If junior researchers are, as we have argued in this article, involved more heavily in the conference peer-reviewing cycle and afforded review training opportunities, then we can build processes that diversify the review load and tackle the time constraints often experienced by reviewers. This is just one example of the several challenges that both established and emerging conferences face: due to the voluntary nature of the reviewing commitments made by established and senior researchers, many reviews are not submitted on time and some are not submitted at all. This is a problem that those on program committees have become used to, and there is no way to enforce timely submission by reviewers beyond standard approaches like sending repeated reminders. Often, when senior researchers do not respond and the program committee is scrambling for reviewers, it turns to reviewers it trusts and/or junior researchers.
One of the main challenges we faced in delivering the SPC was the logistics of managing a conference cycle during the pandemic. Emergency reviewing is always a challenge in peer-reviewing [
47]. We faced this issue during the SPC as well, exacerbated by the worldwide COVID-19 pandemic. SPC participants in South Asia were significantly affected, as the SPC timeline coincided with a huge humanitarian disaster on the Indian subcontinent. We thus had to extend deadlines to account for the emergencies faced by a number of our SPC participants. Initially, we had created generous timelines that allowed participants to provide reviews at their own pace. Perhaps unsurprisingly, many scheduled their reviewing towards the end of the timelines we provided, which also coincided with personal health emergencies for many of our participants. While we initially gave a limited extension and asked participants to submit within that timescale, participants mobilized through the WhatsApp platform to request more time.
Despite this temporary challenge, what we discovered in our endeavor is that junior researchers are highly motivated to take part in reviewing. We can thus design reviewing activities that enable the production of high-quality reviews, with junior researchers contributing to certain micro-tasks. Consider a few examples of where they can contribute: (1) Paper allocation: rather than relying on the automated matching algorithms used by conference submission platforms, which are often unreliable, junior researchers can propose paper allocations, which PC members then vet before papers are sent to potential reviewers. (2) Screening papers for desk rejection and checking whether a paper’s scope is relevant to the conference, again making it easier for the PC to make informed decisions in less time. (3) Screening individual papers’ literature reviews, writing notes on whether the paper has correctly summarized the cited papers’ arguments and recommending currently missing papers that should be cited. These notes can then be helpful to the reviewers. (4) Parsing the methodology of submitted papers to identify whether appropriate procedures have been followed or whether the description of the ethical requirements for that methodology is incomplete. It is worth adding the caveat that these micro-tasks vary in the level of expertise required to carry them out. As such, our intention here is to present them as provocations for the conferencing community to respond to as potential design opportunities.
6.3 Designing for Teaching Points in Conferences
Halfaker et al. ask the question
“Does the community have a conference or does a conference form a community?” [
20]. Cabot et al. found that within the top Computer Science conferences, it is rare to find papers by newcomers to the conference (papers where none of the authors had previously published in that conference) [
9]. The authors also argued that
semi-newcomers, researchers who had never published in the main track but had published in other tracks (e.g., posters and demos) or satellite events, fared much better. There is a general feeling that
“newcomers must first learn the community’s particular “culture” (in the widest sense of the word, including its topics of interest, preferred research methods, social behavior, vocabulary, and even writing style) either by simply attending the conference or warming-up publishing in satellite events, before being able to get their papers accepted in the main research track” [
9]. However, this is an unnecessary barrier that is (unintentionally) maintained by insiders within the research community. Conference committees risk becoming a gated community, in which external reviewers are called upon for quick review jobs but very few are invited to stay and benefit from the community.
Our findings showcase the extent to which ECRs are unaware of the process of reviewing. Numerous questions raised by the cohort throughout the program exemplify this issue: understanding reviewing processes, writing good reviews and meta-reviews, and simply understanding how paper accept/reject decisions are made were some such concerns. Although these may seem trivial to senior researchers, this is certainly not the case for ECRs. Even though 92% of our cohort had experience of writing papers, the process after they had submitted their work was largely unknown to them, as illustrated by our findings.
Postdoctoral researchers are a key part of the conference reviewing cycle. These same individuals, as authors, submit papers to conferences while not understanding the intricacies of the peer-review process. When these same individuals start reviewing papers, there is no feedback mechanism in place with which their reviewing skills can be assessed and improved. It is left to individual researchers to simply get better over time through their own efforts. Power dynamics are also at play between authors and reviewers. The anonymous format of the reviews gives a lot of power to the reviewer, and many have previously called for greater accountability of reviewers.
One of the key questions is what happens after the reviews are provided to the authors. There are many unanswered questions, such as whether the reviews gave useful feedback to the authors and how the reviews were perceived from the authors’ perspective. If we have a pool of motivated ECRs, it is not a stretch to create an activity, carried out by the ECRs, that is focused on collecting author feedback on the reviews received. This could take the form of a 15-minute video call.
With the expertise they have gained through their doctoral studies, ECRs are strategically placed to carry out a debrief with the author. This has additional advantages as well: it enables ECRs to network with global academic audiences in their field and conference of choice (indicated by the fact that they are already present at the conference and have been assigned a paper within their area of expertise), and it humanizes the process. Once the reviews have been sent back, the program committee can tell the ECR the authors’ identity (with the authors’ consent) so that they can organize a call, and the ECR’s observations can then be reported back anonymously to the other reviewers. Through this step, we also recognize that ECRs have something very valuable to bring. The design of such pathways needs to be carefully considered around the issue of maintaining double-blind reviews, particularly if unfavourable review decisions are made (e.g., through use of asynchronous text-based anonymous platforms). The key takeaway for conference designers is that review processes should be re-designed to scaffold in the participation of junior researchers, both to improve efficiency and to aid researcher development.
6.4 Designing for Continuous Professional Development
There are many models for improving the reviewing skills of junior researchers. Events like the SPC are limited attempts at doing this. This SPC program involved a longer and more intensive level of training for participants, relative to earlier SPC-type programs [
15,
23,
54]. However, even with this additional support and training for participants, it is clear that once-off training programs like this are not the most effective strategy. Extant literature shows that for learning to be effective, learning processes need to be embedded within the week-to-week work of the participants. There was also a strong demand for this kind of ongoing development among participants in our SPC program. Previous research has shown that metrics for tracking academic performance improvements can degenerate into prescriptive methods or approaches that embody top-down micro-management [
13]. As such, we need approaches that are embedded and give greater agency to participants.
Another limitation of this kind of once-off approach to preparing reviewers is the time commitment and effort required of the organizers, spread over many months, which limits its sustainability. We, the authors, were also in privileged positions to commit this time through the pandemic: none of us had caring responsibilities, and we were all supported by our respective departments in allocating time to the initiative. We acknowledge that not all researchers are in this position, and organizers need to be aware of these considerations. In our case, the SPC organizing team consisted of four members, which was adequate to share the load.
A third limitation pertains to the unequal impacts of the COVID-19 pandemic, the backdrop for our SPC efforts. Studies among researchers have shown that women are authoring less during the pandemic, while men’s productivity has gone up [30; 32] [
39]. One potential explanation is uneven caregiving burdens. Online conferences are easier to attend, but attendance can also be affected by expectations (both self-imposed and from peers and supervisors) about having to work a normal day on top of conference attendance ([18] from [
39]). In our particular instance, we deliberately kept the number of real-time events small (four) and spread them out over a period of 5 months. Despite these limitations, for our immediate purposes at SUBVERT Conference, this approach worked well. We were able to create a thorough training program, and the authors who received feedback from the SPC members were appreciative of the reviews. In future conferences, we will try a more embedded approach.
In many professions (e.g., healthcare and education), continuous professional development (CPD) is an expectation on practitioners. Within academia, however, limited attention is paid to this aspect of our careers. This lack of emphasis creates a divide between those who are engaged in, and have access to, activities that contribute to their professional development and those who are not. Those who are privileged have ready access to informal opportunities for CPD that others do not.
For instance, academia is a highly networked sector and has been described as an
“incestuous” industry [
27]. Job positions, collaboration opportunities, and grant-writing are competitive, and those with connections with other researchers are rewarded with more opportunities and possibilities. A key part of the professional development of researchers in academia is about networking, mentorship and collaboration events. Networking widens the circle of civil society, private sector and research practitioners that an individual is connected with, making it more likely for them to hear about certain opportunities. Mentorship enables a researcher to overcome gaps in their knowledge, as they gain input from someone who is much further along in their academic journey. And collaborative events are spaces deliberately created to kickstart a working relationship between two or more practitioners with mutual interests.
In this article, we have argued that the research community, like other practice communities, needs a more holistic approach to enable the ongoing professional development of its members (particularly ECRs and those based in the Global South). While the arguments we have put forward are radical in nature and can be perceived as another burden on over-committed academics, adapting existing practices presents an opportunity to grow a global community of trained ECRs. Through such efforts, we are better positioned to diversify programme committees and promote inclusion in conference communities. Our findings highlight the value of such initiatives in the professional development of ECRs. Through this article, we contribute to the discussion around the future of conferences as community-building activities.
Indeed, radical changes to the reviewing process are not new to computing conferences. Prior to 1999, CHI used differing review criteria for papers depending on their category. This was abolished in favor of unified reviewing criteria that assess all papers on the strength of their contributions rather than their importance relative to a particular category/subcommittee [
44].