
Shadow Program Committee: Designing for Diversity and Equity within Academic Communities

Published: 13 January 2024

Abstract

The development of early career researchers (ECRs) and their induction into academia has traditionally been a process that is at best obscure and at worst laden with cronyism. Arguably this is especially true for cross-disciplinary fields like HCI, where relatively fragmented specialisms co-exist. With COVID-19 and its negative impacts on ECRs as the backdrop, we explored the design of a 5-month virtual training program for ECRs worldwide (with particular emphasis on the Global South). Through an action research approach, the program was executed in collaboration with the organizers of a cross-disciplinary conference. Eighty-one participants from 26 countries took part. The program created a collaborative learning experience for attendees and provided opportunities for networking and for learning the nuances of the peer-review process. This article details our experiences and provides reflections on design opportunities to (1) develop professional development spaces for underserved researchers, and (2) leverage ECRs' unique capacity for contributing to inclusive conference spaces.

1 Introduction

There is a growing desire among researchers to create academic venues that are more accessible and safer spaces for increased equity, diversity, and inclusion [2] (cf. the SIGCHI Equity Talks series [14]). Despite this, much knowledge building and sharing, both in academia more broadly and in Human-Computer Interaction (HCI) in particular, is dominated by Western research [34]. In recent years, conferences have begun to run workshops and panels to address such challenges within their own communities, to create safe spaces and accessible venues (see [17, 20]). Despite this, little research has been done on how to support underserved researchers within academic communities. As Kumar and Karusala posit in their discussion of marginalisation within the HCI community: "Every year there are scholars from underrepresented contexts across the Global South and North, who are unable to afford participation in [HCI's] most celebrated publication venues. This impacts their ability to learn from and network with others" [30].
Every interest-based or practice-based community, including conference communities that engage with global challenges, exhibits a number of characteristics that revolve around what its members have in common, and is also defined by whom it excludes [32]. How selection into the conference space is carried out, and who is accepted into it, shapes the nature of the community that is present. Thus, it is important to notice and scrutinize the boundaries created by top conferences as they decide who is included in and/or excluded from the conference community. Early career researchers (ECRs), particularly those based in the Global South, are the most acutely impacted by barriers to entry within academic venues [1, 50]. Though there are varied definitions of the term, in this article we take ECRs to be postgraduate students and researchers who have less than 5 years of research experience from their PhD award date, or from their research career start date if they do not have a PhD. Barriers to entry include imposter syndrome, limited professional development on how to engage with academic communities, a lack of professional network connections, the time and financial costs of attending conferences abroad (which are often not justifiable for many researchers [1]), as well as logistical issues concerning the location of the venue where the conference and events are held. As a consequence of these barriers, ECRs in the Global South are often excluded from mentoring and networking events, workshops, and doctoral consortia which would be beneficial for their academic growth [15]. Research by historically under-represented researchers (i.e., Black, Indigenous, people of color, people from the Global South, women, etc.) also tends to be less valued [30]. Writing about ECRs with disabilities, Kirkham et al. made a similar argument about their lack of inclusion within conferences, and the need to introduce modifications within peer-review processes to promote inclusion and accessibility [26].
As a gateway for inclusion, academic conferences require high-quality reviewers to ensure the acceptance of high-quality submissions and the tactful rejection of low-quality submissions. The reviewing process is a meaningful avenue for ECRs to engage with conference communities. However, despite their interest, ECRs are often denied this opportunity as they have little or no formal reviewing experience. While there have been attempts by Special Interest Groups (SIGs) to publicize open calls for reviewer positions, a lack of mentorship remains a barrier to participation. This leads to other problems, such as work from the Global South being misjudged by reviewers unfamiliar with such contexts [33]. Thus, we argue that supporting underserved academics within the peer-review process is an area of concern for conference communities (as much as supporting any other professional community).
While uncommon within HCI spaces, Shadow Program Committees (Shadow PCs, or SPCs) have been used by other technology conferences (e.g., [15, 23]) with the intent of educating and equipping junior researchers to participate in conference reviewing processes. Writing about their motivation for introducing an SPC to the Operating Systems academic community, Rebecca Isaacs argued that "the top conferences seem inaccessible and remote... dominated by a small number of mainly US institutions" [23]. It is with a similar motivation that we piloted an SPC within an ACM cross-disciplinary conference, SUBVERT Conference (pseudonym for review). SUBVERT Conference is committed to addressing global challenges faced by under-represented communities and key challenges that hinder the growth of sustainable societies, making it a viable venue to explore inclusion and the sustainability of future reviewing processes. We sought to address the challenge of limited review training opportunities for ECRs (particularly those from the Global South) as an avenue for engaging with their academic communities, thereby improving the diversity and inclusion of conference program committees and creating a more equitable peer-review process. The SPC is an innovative approach that addresses the barriers faced by ECRs in two ways: (1) allowing ECRs to "shadow" the review process followed by conference program committee (PC) members, supporting their learning about the reviewing process in a safe space with their peers; and (2) enabling conference committees to prepare junior researchers for serving on future program committees.
Our work contributes to the ongoing (and increasingly proactive) discourse within conference venues about the inclusion of underserved communities by: (1) identifying an opportunity for conferences to be scaffolded around the professional development of ECRs, who are currently sidelined by existing onboarding and reviewing processes; (2) taking a capability approach to highlight the immense agency of ECRs, and designing peer-based infrastructures placed in the vicinity of existing conferences to facilitate their capacity for self-organizing; and (3) re-imagining peer-review processes in modern conferences and journals to support mentorship and facilitate author-reviewer transfer of knowledge, in pursuit of academic venues' desire to better train the next generation of reviewers. We argue that such innovative approaches, which aim to provide transnational and cross-institutional professional development for academics, are critically needed in the post-COVID academic hybrid-conferencing environment. This involves addressing the emerging disparities in relation to equitable participation and the challenges in community-building faced by underserved researchers.

2 Literature Review

2.1 Professional Development for Researchers

Academics' roles have expanded in recent years: quality academics are expected not only to succeed in teaching, but also to consistently demonstrate their excellence and impact in community engagement, administration, research, and active membership of research communities [5, 38]. Browning et al. [8] identified seven basic activities that are required to build successful academic careers: (1) having a research doctorate, (2) being mentored, (3) attending conferences, (4) supervising postgraduate students, (5) being part of an active research group, (6) receiving assistance to develop grant applications, and (7) receiving support for staff to develop their research careers. Consequently, for ECRs to have successful academic careers, they must be equipped with the necessary skills to build a research track record [8]. However, to receive these diverse types of support, ECRs and academics require adequate institutional support, which can be quite complex to provide given the diverse disciplines, institutional cultures, personal motivations, and personal characteristics that influence the professional development process that ECRs undergo [38]. Given these factors, dynamic, nuanced, and varied approaches to support are critically needed so that ECRs within the post-COVID academic environment can thrive. Institutions need to create environments that allow ECRs to engage and network with successful researchers to develop their research capabilities and familiarize themselves with the ins and outs of research through mentorship [12]. Despite the benefits of engaging with experienced researchers, Shagrir [51] found that there are cases where experienced academics have minimal interest in engaging with ECRs and novice researchers. In light of this, as institutions develop collaborative spaces, they need to be aware of the preferences and motivations of both experienced researchers and ECRs so that continued support and collaboration are encouraged.
Donnelly [12] stresses the importance of collegiality and collaboration as a means towards academic professional development. Spaces that promote collaboration and peer learning are therefore encouraged [11, 51]. Building academic networks and communities, and creating opportunities to receive feedback to achieve meaningful change, are useful ways in which collaboration can be fostered [51]. Academic conferences have the potential to create opportunities for learning and collaborative engagement, and to facilitate entry into research communities [53]; they are thus a key space for academic development [11].

2.2 Overcoming Exclusion within HCI Venues

While academic publishing is dominated by Western research in many fields, an additional critique particular to computer science is that conferences, rather than journals, are the main venue for publication, further increasing the barrier to entry. Grudin argues: "in journal-centered fields, conferences represent work in progress toward journal publication. Higher acceptance rates enable participation by researchers from other disciplines, students, and practitioners who do not aim for journal publication" [18]. This creates a vicious cycle in which those who attend conferences get more critical feedback and support, further entrenching the disadvantage of ECRs who are already excluded. Mentors are critical for career guidance, introductions, and obtaining documentation for job applications (e.g., recommendation and reference letters) [27, as cited in [39]]. Like McKay and Buchanan [39] before us, we are called to challenge the "myth of meritocracy" that is highly pervasive in academia.
There is an increasing recognition of the need for inter-disciplinary and diverse, geographically distributed teams within HCI [4]. The calls for greater diversity respond to critiques of the field such as HCI’s disregard or misunderstanding of literature from other fields [36], and the fact that the majority of HCI papers published at prominent HCI venues report studies with participants from wealthy, Global North countries [56].
In response to this recognized need, new ideas have emerged to counter the disadvantage experienced by ECRs and HCI researchers who are located in the Global South [48]. For example, researchers have explored the use of digital platforms to promote ICT4D collaboration with researchers from LMICs [42]. The HCI Across Borders (HCIxB) community is growing, with an emphasis on strengthening HCI research quality transnationally and fostering ties between junior and senior researchers [29]. Muller and Fitzpatrick also created a workshop to help ECRs identify career pathways in HCI by bringing them into contact with senior researchers and mentorship opportunities [40]. In addition, researchers like Nacke and Wilson offered courses at HCI conferences on how to write and review papers [43, 58], with the aim of teaching participants "what reviewers are looking for and how to signpost this information to make papers more attractive to read" [43]. However, these initiatives have not yet proven sufficient to address the challenges outlined above, not least because workshop participation requires attendance at HCI conferences. Our research is thus guided by the question: "How do we create an inclusive environment for novice researchers to be part of the reviewing space and provide appropriate training?"

2.3 Training Emerging and Under-resourced Academics

Despite past recommendations for training reviewers as a means of improving the peer-review process and increasing the number of qualified reviewers [54], there is still insufficient training for reviewers, especially those in the early stages of their research careers [6, 44, 46]. The current processes of recruiting new reviewers are often opaque: new reviewers are typically selected from existing reviewers' own networks, based on past reviewing and publication experience at top conferences. This limits opportunities for ECRs who have not built a reviewing and publication track record in prestigious venues, or who lack access to these conference networks.
HCIxB 2019 attempted to “recognize and foster research efforts situated in historically underrepresented contexts, inviting members to submit works in progress and iterate towards stronger scholarship” [31]. Participants who were currently in the middle of project implementation were also invited to present their work so that they could solicit actionable feedback from the HCIxB community [31]. In a similar vein, Munteanu et al. organized an ECR symposium to support PhD graduates through a formal mentoring process for strengthening professional networks and providing advice tailored to career challenges [41].
Such approaches to training new reviewers, which do not require significant time or financial investment, are needed. Academics are already overwhelmed and strapped for time [16], so we need strategies that are not time-consuming. One such effort has been to bootstrap conferences with SPC programs that facilitate peer-based researcher upskilling through a community-driven approach [37]. SPC programs have been developed alongside other large, international conferences, and the organizers have shared their outcomes and learning. For example, in 2005 an SPC was run alongside the SIGCOMM conference with a total of 42 participants [15]. Similarly, the SOSP conference in 2007 had an SPC for postgraduate students and ECRs [23]. Both SPC programs included participants mostly from Europe and North America, and both focused on conducting an in-person SPC meeting to discuss the reviewed papers, rather than providing advance training or extensive resources on how to write reviews before the meeting. Through these programs, postgraduate students and ECRs learned strategies to improve the chances of their own papers being accepted.
Another approach was employed at the ICML 2020 conference, with a slightly different motivation. Rather than focusing on training and introducing ECRs to reviewing, this program focused on expanding the reviewer pool to meet the growing demand for qualified reviewers in the machine learning community. The program recruited novice reviewers and employed a competitive selection process to decide who would participate (unlike earlier programs, which accepted all interested participants) [54]. Both this program and the SIGCOMM 2005 program found some evidence that the quality of reviews by the SPC was equal to or higher than that of reviews by the "real" PC. For example, at SIGCOMM 2005, the length of SPC reviews and the number of on-time submissions were higher for the SPC than for the PC [15]. At ICML 2020, there were similar findings on review completion and length, and blinded meta-reviewers rated the quality of the novice reviewers' reviews as higher on average than those of the usual conference reviewers [54].
However, there are limited instances of such programs, despite their potential to create value for participants, conference organizers, and authors. Previous attempts at training programs, surveyed above, have focused on this topic from the perspective of conference venues rather than that of equity and diversity. In other words, the question has been framed as “how can conferences attract more reviewers?” rather than “how can we enable the participation of overlooked groups within the academic community?”. Most of the training initiatives mentioned above were carried out in-person at Global North conference venues. Others, like that of Stelmakh et al. [54], were only open to junior researchers from five large, top US universities (CMU, MIT, UMD, UC Berkeley, and Stanford).

3 Context and Motivation

During the planning stage of SUBVERT Conference, the organizers engaged in several conversations challenging the existing reviewer recruitment methods and brainstorming ways in which conference attendee and presenter diversity could be increased, while also increasing the opportunity for ECRs, especially those from the Global South, to add their voice to existing conversations. Some questions that shaped these conversations included: (1) How do we create opportunities for ECRs to become equipped with the appropriate tools to join the cohort of conference reviewers? (2) How do we diversify the existing reviewing pools and open up these conference communities? (3) What platforms and spaces should we be creating for ECRs to open up, share their experiences and suggestions, and learn from each other? In addressing these questions, the organizers believed they could provide more tangible steps for ECRs to become better reviewers and improve their chances of joining future conference program committees.
In the previous section, we outlined some of the previous approaches to the professional development of ECRs. However, despite the many benefits these programs delivered, we identified that they were not sufficiently equipped to address the contextual challenges of a distributed community of ECRs in the context of COVID-19. This prompted the SUBVERT Conference general chairs (with a combined academic experience of 50 years) to set up a working group consisting of ECRs to design and develop a training academy specific to the SUBVERT Conference. This working group (the authors of this article) consisted of four ECRs at different stages of their academic journey, from PhD candidate to five years post-PhD. Each came from a different cultural context (British Asian, African, white American, and South Asian). While the team had prior experience reviewing papers, they had no experience of serving on a program committee. They drew on these experiences to develop the program from the perspective of what ECRs would desire and need to become better reviewers. The working group worked closely with the experienced general chairs to design the training program. For instance, the chairs introduced the ECRs to prominent members of the SUBVERT Conference community, who were able to contribute towards resources developed for emerging reviewers.

4 Study Design

A typical training program, such as the one envisioned for the SPC, consists of a series of interventions. We undertook the SPC over a period of several months during Q1–Q3 2021 through Action Research (AR): an approach suited to the iterative demands of planning and reflecting on multiple engagements with the participants. AR is particularly apt for situations where researchers need to enter the social worlds of their participants and "when the change process itself is the subject being studied" [28]. We followed an AR approach inspired by the influential work of Davison et al. on Canonical AR, which describes the centrality of the Cyclical Process Model (CPM) [10]. CPM consists of the following phases: diagnosing, action planning, action taking, evaluating, and reflection [10]. These phases are usually undertaken sequentially; at the end of a cycle, the reflection phase can lead back into diagnosis, restarting the loop.

4.1 Recruitment

Our study was approved by the first author's institutional review board (IRB), and informed consent was obtained from all of the participants. Participant recruitment was conducted in two phases. In the first phase, we recruited students, post-doctoral fellows, and ECRs to be a part of the SPC. We recruited participants through social media platforms and online academic groups. Initially, 81 participants expressed interest and signed up for the program. However, nine participants dropped out prior to the first orientation session due to personal or professional commitments, leaving a total of 72 participants. All participants were informed about the research aspect of the program and invited to participate in the study. Participation in the research component was entirely voluntary, and participants had the option to withdraw at any time without it impacting their experience in the SPC. The benefits of taking part included a better understanding of the usefulness of the critical reviews they produced as part of the SPC process, serving as formative feedback in their own development as researchers. Over the course of the program, eight participants dropped out as they could not complete the reviews allocated to them. Data from these participants were excluded from the analysis. Overall, 64 participants completed the entire program.
Fig. 1. A typical reviewing and publication pipeline used by program committees within HCI conferences. Adapted from [47].

4.2 Participant Demographics

We had a total of 81 participants (male = 45, female = 34, two did not disclose) in the SPC (see Figure 3). They represented 53 different educational and research institutions from 26 countries around the globe. The majority of the participants were in the early stages of academia, with 55 of them being PhD candidates. We did not receive any applications from participants in South America. In addition, we did not capture participants' countries of origin, so the countries reported reflect the locations of participants' institutions (for instance, there may have been South American scholars based at US institutions who would nevertheless have been recorded under the US). Although most participants (n = 75) had written one or more academic papers in the past, they had no or minimal experience of formally reviewing papers. Thirty-six participants (44%) had never reviewed a paper before. The number of participants who applied to be part of the SPC was also notable in comparison to attendance at the actual conference: there were 150 total attendees at SUBVERT Conference, so the SPC cohort amounted to 54% of that figure. It should be noted that relatively few members of the SPC eventually went on to attend SUBVERT Conference, as many participants were interested in becoming better reviewers independently of their interest in SUBVERT Conference. Indeed, the chairs found that the SPC met a real need for ECR mentorship programs, and that was the primary outcome (rather than increased conference attendance).

4.3 Planning and Running Shadow PC

Our initial discussions with the General Chairs of the conference revealed the need for a safe space for junior researchers from around the world to improve their review-writing ability and better engage with the SUBVERT Conference community. Here, we detail the actions we planned based on our guiding principles.

4.3.1 Research-driven.

Before teaching or training, we must first understand what makes a "good" review. According to Petre et al., the criteria for evaluating a review can be categorized as review criteria (assessing the review itself, e.g., its timeliness, tone, and constructive nature) and paper criteria (how well the review has assessed the paper and provided appropriate feedback) [47]. Other factors that affect review quality include [47]: (1) decisiveness: providing feedback and a score that help the program committee reach a positive or negative recommendation; (2) coverage: whether the review covers all of the key aspects of the paper, or all of the areas in which the reviewer has sufficient expertise; (3) justified: providing sufficient evidence for the conclusions reached by the reviewer; and (4) helpfulness: going beyond the basics of writing a review and providing additional commentary that helps the authors understand key recommendations and how to implement them.
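As an illustration only (the checklist form and field names below are our own and are not part of Petre et al.'s framework or the SPC training materials), these criteria could be expressed as a simple self-check that a novice reviewer runs over a draft review before submitting it:

```python
# Minimal sketch of a review self-assessment checklist based on the criteria above.
# The class and field names are illustrative, not an established tool.

from dataclasses import dataclass, fields

@dataclass
class ReviewChecklist:
    decisive: bool                 # gives a clear score and recommendation the PC can act on
    covers_key_aspects: bool       # addresses the paper's key aspects within the reviewer's expertise
    justified: bool                # conclusions are backed by evidence from the paper
    helpful: bool                  # offers actionable suggestions beyond a verdict
    timely_and_constructive: bool  # submitted on time and written in a constructive tone

    def gaps(self):
        """Return the criteria a draft review has not yet met."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

draft = ReviewChecklist(decisive=True, covers_key_aspects=False,
                        justified=True, helpful=False, timely_and_constructive=True)
print(draft.gaps())  # -> ['covers_key_aspects', 'helpful']
```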

4.3.2 Global Accessibility.

Given the geographic diversity of our cohort, one of the challenges we faced in the earlier stages was determining appropriate times for our training sessions and meetings. Naturally, no single time was convenient for the entire cohort. With the increasing reliance on virtual conferences to disseminate research, time zones have become a problem for conference attendance as well. Many flagship venues are located in the US, and a typical conference session organized at 11 am ET is very inconvenient for significant parts of the globe. Our objective was to provide participants an opportunity to engage with other participants around the globe, whilst ensuring that all communication and training sessions were held within acceptable hours. As a result, we decided to split the cohort into four sub-groups on the basis of their time zones. These groups were created to ensure that participants would not have to attend any session before 9 am and that all sessions would conclude by 11 pm local time for all participants. Each of these groups had their own training sessions and meetings throughout the program. As such, we ran four sessions on a given day, one for each group. This ensured that participants could take part in the program without having to significantly compromise their routines.
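To make the scheduling constraint concrete, the sketch below (a simplified illustration of our own; the band compositions and offsets are hypothetical, and session length and daylight saving time are ignored) checks which UTC start hours keep every member of a time-zone band inside the 9 am–11 pm window:

```python
# Simplified sketch: for a band of participants (UTC offsets in hours), list the UTC
# start hours at which every member's local time falls between 09:00 and 23:00.

def local_hour(utc_hour, offset):
    """Local wall-clock hour for a session starting at utc_hour (UTC)."""
    return (utc_hour + offset) % 24

def feasible_start_hours(offsets, earliest=9, latest=23):
    """UTC hours that keep all members of the band inside [earliest, latest)."""
    return [h for h in range(24)
            if all(earliest <= local_hour(h, o) < latest for o in offsets)]

# Hypothetical bands, roughly Americas / Europe-Africa / South Asia / East Asia-Pacific.
bands = {
    "A": [-8, -6, -5],
    "B": [0, 1, 2, 3],
    "C": [5, 5.5, 6],
    "D": [8, 9, 10],
}

for name, offsets in bands.items():
    print(name, feasible_start_hours(offsets))
```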

4.3.3 Using Popular Communication Channels.

Due to the distributed nature of this program, with participants spread across 11 time zones, we needed a combination of real-time and asynchronous communication platforms. We also recognized that many of our participants from low-resource settings needed additional flexibility around digital accessibility. In addition, the high level of literacy and motivation among the target group (researchers in Information Technology) allowed for flexibility in communication channels. Communication took place over six channels: WhatsApp group chats with about 20 participants per group; e-mail; HotCRP, the conference management software used by SUBVERT Conference, to manage papers and reviews; Zoom for the workshop sessions outlined above; and Google Docs and Stormboard for collaborative group discussions during breakout sessions.
Fig. 2. A breakdown of the countries where participants' institutions were located.

4.3.4 Creating Arrangements for Managing Cultural Sensitivities.

As seen in Figure 2 (a breakdown of the countries where participants' institutions were located), participants came from countries with their own cultural and social expectations around hierarchical relationships (e.g., how the presence of organizing committee members in a conversation affects power dynamics and perceived openness to discuss matters). To reduce any such biases, the SPC workshops were designed to facilitate and maximize participant peer-interactions and peer-mentoring, with minimal instructor-led modes of engagement. We also introduced conversations around author expectations, how to be an encouraging reviewer, and how to provide balanced reviews [24], especially when reviewing across cultural contexts, as authors' cultural backgrounds are unknown to reviewers.

4.3.5 Producing Multimedia and Web Resources for Offline Learning.

Another key feature of the program was the video resources and reading materials that were shared throughout the program. Providing participants with curated materials removed the guesswork about which resources to rely on and what is important or useful. This becomes even more meaningful when there is a dearth of resources and one lacks clarity on where to begin. We produced three types of resources: (1) videos of PC members and Steering Committee members sharing their review experiences and tips acquired over the years; (2) example reviews sourced from the General Chairs (senior academics within the HCI space with over 50 years of academic experience between them), who shared reviews from across the quality spectrum (good and bad) for papers they had previously been involved with; and (3) web resources highlighting best practices for writing reviews.

4.4 Training Process

We conducted Zoom training sessions as part of the SPC training. There were four sessions in total: two orientation meetings, one PC meeting, and one debrief meeting.

4.4.1 Orientation Meetings.

Our first engagement with SPC participants was a series of two orientation sessions running through the nature and goals of the SPC. Drawing on the motivations for participation, we then crafted a discussion on what makes a good paper, facilitated across three breakout rooms via Zoom. The orientation was replicated across the four groups, given the diverse time zones.

4.4.2 Assigning Papers.

Once the conference paper authors had submitted papers to the main conference review system (HotCRP), we worked with the General Chairs to identify 20 papers from the submitted pool. This was done while keeping in consideration the general paper preferences of SPC participants, based on their application forms (where they had been asked to indicate their preferences for different conference tracks). We set up a HotCRP account for the SPC process, to give participants the full experience of taking part in the conference reviewing cycle. Through HotCRP we assigned papers to SPC members (initially, members were given a chance to nominate themselves for certain papers, while we ensured that each member signed up for all possible roles, one paper per role). We strictly emphasized that all papers were confidential and must not be shared or discussed outside of the SPC membership.
Utilizing the structure of mainstream HCI conferences such as ACM COMPASS, we devised three roles: first associate chair (1AC), second associate chair (2AC), and regular reviewer (R). All SPC members were assigned a 1AC, 2AC, and R role for one paper each (i.e., each member had three reviewing commitments in total). For each paper, the 2AC and R would write a review and, subsequently, the 1AC would write a meta-review (summarizing points from the 2AC and R). We utilized the meta-reviewer role to help provide a summary of reviews to the authors. In addition, this role would enable the reviewer to learn strategies for gaining decisiveness, as they would have to draw on their expertise to make a final recommendation about the state of the paper and how it should be handled by the program committee. We also provided guidance to the SPC reviewers on scheduling and communicating with each other. For instance, we provided instructions such as:
Reviewing period: 1AC: reads the paper (does not write a review); 2AC: reads the paper and starts writing a review; R: reads the paper and starts writing a review. Before submitting reviews to HotCRP, SPC members discuss questions about paper reviewing (general issues or specific issues with a paper) through WhatsApp.
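For concreteness, the following is a minimal sketch (our own illustration, not the organizers' assignment script; it assumes equal numbers of members and papers, which was not the case in practice) of how the three roles can be rotated so that each SPC member holds each role exactly once, on three different papers:

```python
# Illustrative role rotation: each member is a 1AC, a 2AC, and an R on different papers,
# and each paper receives one reviewer in each role.

from collections import defaultdict

ROLES = ["1AC", "2AC", "R"]

def assign_roles(members, papers):
    """Return {paper: {role: member}} with each member holding each role once."""
    assert len(members) == len(papers), "simplified case: one member per paper per role"
    assignments = defaultdict(dict)
    for role_idx, role in enumerate(ROLES):
        for paper_idx, paper in enumerate(papers):
            # Rotate members for each role so nobody handles the same paper twice.
            assignments[paper][role] = members[(paper_idx + role_idx) % len(members)]
    return dict(assignments)

print(assign_roles(["m1", "m2", "m3"], ["paper-1", "paper-2", "paper-3"]))
```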

4.4.3 PC Meeting.

This was a 150-minute online session chaired by two members of the research team. As with the other meetings, four sessions were held, one for each SPC sub-group. At the PC meeting, the team discussed the nature and purpose of such meetings within CS conferences and then facilitated a PC discussion under the following rules: (1) the 1AC makes a brief presentation (2 mins, verbal only) of the topic, reviews, and meta-review of the paper; (2) the 2AC adds anything additional to the 1AC's presentation (1 min, verbal only); (3) all SPC members discuss whether to accept or reject (provisional decision, 2 mins). The session concluded with "accept" and "reject" decisions being made for each paper, while also giving the SPC members an opportunity to propose changes to any decisions made. Finally, the papers were ranked in order from most likely to least likely to be accepted.

4.4.4 Debrief Meeting.

All SPC members were given three days to refine their reviews (if needed). These reviews were then compiled by the research team and sent to paper authors who had agreed to take part in the research study. An interview with the authors was arranged to invite their comments and feedback which could then be provided to SPC members. Instead of directly sharing the author interview recordings/transcripts with the SPC members, we compiled the interview responses and shared key insights and author feedback with the group, through a debrief meeting. We also sought to hear from SPC members about their experience of taking part in this program and suggestions they had for improving it in future.

4.5 Data Collection and Analysis

We collected various streams of data for this research. All of the Zoom-based training sessions were recorded with the consent of the participants. In addition, all correspondence with the participants, in the form of e-mails and participation data via Google Forms, was recorded for analysis. We also captured the content of the WhatsApp group conversations: a total of 492 messages across all four groups (Avg = 118). These contained instructions about the program, training resources sent to the participants, and the queries participants had about reviewing or any of the content shared. In the second phase, we contacted the paper authors to participate in the research process for the program. The authors were asked to read and assess the reviews made by the SPC on their papers and provide feedback on those reviews. Eleven out of 20 authors agreed to participate, and we conducted interviews with them to obtain feedback on the quality and usefulness of the SPC reviews. One section of the author interview focused on how the SPC reviews compared to the technical program committee (TPC) reviews they had received. These interviews were conducted via Zoom and audio-recorded. In this way, the interactions between authors and SPC members were mediated by the project team, averting any risks inherent in direct contact between the two parties. We transcribed the training sessions and the author interviews. These transcripts were analyzed using inductive thematic analysis by four members of the research team [25]. Additionally, all correspondence with the participants in the WhatsApp chats was analyzed to gain insights into participants' views and reflections on both the training process and their expectations of the SPC.

5 Findings

5.1 Infrastructures for Mentorship and Growth

Junior researchers need extra guidance as they navigate the typical processes of academia [35, 45]. Our study shows the importance of infrastructures that take into account mentorship from peers and seniors, access to curated resources, and curated spaces in which to explore the boundaries of their discipline.
Fig. 3. Current positions of SPC participants.
Fig. 4. Total number of papers written by SPC participants. They were also provided the option to share links to relevant publications/outputs.
During the orientation meetings, many participants described their previous experiences of submitting to peer-reviewed conferences, and admitted their ignorance of what happens behind the scenes. Consequently, they ventured into the SPC program to learn the basics of reviewing structures (in addition to the technical aspects of writing reviews). For junior researchers, doctoral programs may provide training on critically reading research papers and academic writing, but it is not until they "gain practical experience" [P32] that the learning truly takes place. From our own experience as researchers who have published at HCI conferences, we noted that it was not until we started reviewing papers that we improved as paper writers. This sentiment was put forward by our participants as well: "This experience would help me to understand what attributes of a paper make it more effective in communicating the subject matter with its readers. It will also help me to learn how to identify the strengths, weaknesses, and points to improve upon in a paper. These experiences will be a stepping stone for becoming a more focused researcher and better at presenting research works through quality writing" [P18]. Thus, taking part in the SPC was as much about becoming a better reviewer as it was about becoming a better author. Arguably, the two are inseparable.
Our data indicates that the SPC was valued by participants because it exposed them to disciplines outside their direct area of expertise (a concern particularly relevant for computer scientists). “I’m eager to learn more about the scholarship procedures, norms, and expectations in the SUBVERT Conference community... what methods, concepts, and practices are valued and/or missing within the SUBVERT Conference discourse” [P66]. Others, despite having experience with inter-disciplinary venues, wanted to continue their professional development and stated “I hope to sharpen my review skills and gain more exposure to the behind the scenes process of paper reviews. While I have been a reviewer before for conferences like CHI and DIS, I have never participated in a review mentorship program. I want to ensure that I am contributing back to the academic community in a beneficial way” [P64].
The networking potential of the SPC was also highly valued by participants. Participants believed the SPC would be a great opportunity to meet and interact with fellow ECRs and to learn from experienced researchers around the world. For example, one participant mentioned: "Besides learning and gaining experience, I look forward to enriching networking opportunities and getting exposed to the work being carried out in our research community" [P23]. However, while group-based programs like the SPC are valuable, there is still a need for more direct mentorship approaches. Our participants were based in institutional contexts where these opportunities were not available and wanted "to be mentored in reviewing research and conference papers" [P20]. Another participant bemoaned the general nature of the information available about reviewing processes for CS conferences and asked for "access to distinctive mentors... for critical and timely guidance in the academic review process" [P18]. Moreover, participants were eager for individual-level feedback from the authors on the reviews they had submitted, but this was not possible due to the confidentiality promised to authors. Instead, the SPC organizers provided mediated feedback from the authors, using illustrative quotes to help participants identify where they might (or might not) have done these things in their own reviews.

5.1.1 Diversifying the Nature of the Program.

Although the Shadow program provided a platform for equipping ECRs in the review process, as highlighted by one participant: "I now have a framework to use when embarking on a review" [Group A Debrief Session], there was a strong push for more one-on-one and in-depth mentorship opportunities. This speaks to the continued need to create spaces for equipping and training ECRs in reviewing. Given the nature of the program, time constraints, and available resources, one-on-one mentoring was not a feasible solution with 81 participants. However, useful suggestions for the future emerged from the process, such as "having a buddy system–pairing SPC with experienced mentors" [Group A Debrief Session, Breakout Room 1]. Participants were also eager to receive individual feedback from authors on their reviews, which was not feasible due to the number of participants and the confidentiality that was promised to authors for this SPC. Having diverse shadow programs that are complementary in nature is one way to cater to the divergent needs of ECRs without burdening a handful of SPC trainers on a single program. This may be an opportunity for conference communities to offer creative options that cater to the different stages and needs of ECRs.

5.2 Understanding the Intricacies of the Reviewing Process

Since the majority of participants had limited experience of formally reviewing papers, many questions were raised throughout the program. For those who had never reviewed papers before, the SPC provided a platform to gain "first-hand experience of reviewing a full paper" [Group B Debrief Session]. Others, who had previously reviewed a few papers, gained the opportunity to learn "the entire process of reviewing for a conference, end to end" [Group A Debrief Session]. This included a deeper understanding of subcommittees and tracks (at conferences such as COMPASS), and of how to frame their research to maximize their chances of gaining a fair hearing from reviewers. These discoveries helped to demystify the review process, especially for first-time reviewers.
As anticipated, participants related to us as organizers who were themselves ECRs, and felt free to ask a range of questions during the training sessions that were unrelated to review-writing. Such questions prompted us to broaden the scope of the SPC discussions to be flexible to participant needs. Through this approach, we also noticed that SPC members felt comfortable challenging biases within existing institutional mechanisms. Some attendees, based in institutions that exclusively targeted top-tier conferences, were curious to discover more about alternatives and why some conferences are held in high esteem: "I am also curious to understand the difference in the reviewing process of CHI and other conferences. Why do we hear that the quality of CHI reviewing is better than others?" [Group B, WhatsApp].
SPC participants also raised questions about reviewing etiquette across conferences, especially as the differences between venues are rarely made explicit. This can be difficult terrain to navigate for novice researchers working in an interdisciplinary research area. One participant pointed out that the sample reviews from HCI conferences that we provided looked "more detailed than what you would expect from a systems conference" [Group C Orientation Meeting 2]. At first glance, observations such as this seem basic, but they point to the numerous parts of the conferencing experience where the guidance of an experienced researcher is needed. Whenever such queries were raised, researchers in the WhatsApp group with some experience would often share their knowledge or advice. One such instance was when several papers had not followed the anonymous submission guidelines. Several participants brought this to our attention: "I got one paper mentioning the author's name and affiliation... I wonder if it is ethical to review. I mean reviewers might be biased!! What do you think?" [WhatsApp Group A]. In this case, reviewers who had previously submitted to this conference shared their views on how to tactfully approach this concern.
SPC participants also learned how to engage with other reviewers to make accept/reject decisions and how to synthesize reviews when writing a meta-review. Participants sought guidance on how to resolve differences of opinion, asking questions like: "As meta-reviewers, can we use the "add comments" feature on HotCRP with the option "hidden from authors" as a way to discuss and deliberate among the reviewers, specifically if there are disagreements in terms of "overall merit" of a paper?" [Group D WhatsApp]. Taking on board the spirit of the SPC, that is, a collaborative and constructive approach to review-writing, we noticed many reviewers making extra efforts to write balanced and carefully thought-out reviews.
We also encountered questions exploring the nature of meta-reviews. For example, one participant asked: "When writing a meta review, should the score assigned to the paper be the average between 2AC and R or an independent choice of the meta reviewer?" [Group B WhatsApp]. Participants wondered about the role that the score can play, and the power that the meta-reviewer's judgement carries in the peer-review process. Some even went so far as to suggest that the meta-reviewer role could be antithetical to the spirit of peer-review: "The metareview... can be problematic because it's basically a translation... the authors might lose [out]... because the AC is favoring one idea over the other as... a centralized decision maker. Personally, I think that plays into un-democratizing academia, which makes me feel a little bit uncomfortable." [Group A Orientation Meeting 2]. In this vein, some participants preferred meta-reviews that played the role of a helpful summary generator and interpreter (i.e., taking the brief comments of reviewers and helping authors understand how to address them): "So I do like the second review that we read, which was breaking out each reviewers' work separately and just making them more direct in their wording and feedback to the authors". Perhaps due to the participants' inexperience with the review process, we did not encounter any discussions around contentious practices in mainstream conferences, for example, varying acceptance rates between subcommittees at a conference, or Program Chairs enforcing stronger accept/reject ratings to reduce an imbalance of papers in the middle range.

5.3 Growth in Self-efficacy as Academics

As we have seen so far, taking part in the review training process not only provided participants with tips and techniques on how to review, but also provoked them to engage with and ask questions about the wider conferencing process. This had two effects on our participants: (1) gaining a greater appreciation of wider research, and (2) rethinking existing reviewing practices.
Firstly, they reassessed their identity as researchers and their own writing practices: “reviewing opened my thoughts to the mindset of a reviewer” [Group B Debrief Session]. For some, it was a shift in their understanding of the importance of the reviewer role and a desire to serve in that role in the future: “being conscious that by reviewing a paper I’m playing a role in advancing knowledge in my field will stay with me throughout my academic journey” [Group C, Debrief Session]. In other words, as a result of their experience of being on both sides of the author-reviewer divide, they gained a broader understanding about how academic research operates: “[this experience] provided a better way of looking at research” [Group B Debrief Session]. Furthermore, the SPC training’s emphasis on reviewer mindset caused many participants to reflect on their own approaches to thinking through and writing reviews: “the [resources] helped me understand how to deal with internal biases which often crop up while reading a paper. I will now definitely try to have an internal dialogue to reduce these biases” [Group A, WhatsApp].
Secondly, the SPC proved to be a space for participants to challenge the peer-review process and the way it is practiced in Computer Science conferences. The majority of participants came into the program with little to no prior reviewing knowledge and experience and, perhaps unsurprisingly, spoke frequently of their lack of confidence in reviewing papers. Throughout the program, as participants engaged in the different activities that were assigned, they began to grow in confidence. Such confidence enabled the participants to see the program not only as a space for becoming better reviewers but also as a space for identifying ways in which the SPC and the overall academic review system could be enhanced. Some participants identified issues with current reviewing practices and discussed better approaches. They began to question different processes, such as the PC meeting and how it is conducted; the ranking of the final papers that make it through a typical conference program; and the selection of reviewers and the approaches for reviewers to engage and discuss papers before writing the final reviews. A dominant critique concerned reviewer self-ratings: "This is questionable how people categorize themselves as expert or knowledgeable. They might be an expert on methods, but not expert in the domain. I have seen 'experts' that gave one hundred percent wrong feedback on my paper" [Group C Orientation Meeting 2]. A participant who had previous experience of reviewing papers noted how potential reviewers accept and reject review invitations based on limited information: "When review requests come to me... just reading a 150 word abstract you will probably not know [if you are an expert]. This is actually a problem of the system as well" [Group C Orientation Meeting 2].
Reviewer self-ratings do play a role in how a review is weighed by a meta-reviewer or journal editor. However, it is questionable whether the process by which such ratings are produced is useful to the overall review process. At times, reviewers rate themselves as experts and provide poor-quality reviews; in other cases, reviewers rate themselves as merely knowledgeable yet engage more fully with a paper and provide more meaningful feedback. In such cases, important questions arise about how useful the rating is to the final decision on a paper. We need to explore alternative means of helping reviewers rate themselves more accurately or holistically, providing guidance and ways to capture their expertise across multiple dimensions (rather than a single 5-point summary scale). Questions were also raised about cases where expertise might be high but, due to the limited time spent, the review may not be useful; as such, there might be value in introducing a confidence score, whereby the reviewer rates the value of their own review in terms of their own understanding. These are some of the concerns around existing practices raised by the SPC participants.
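To illustrate one possible shape of such a multi-dimensional self-rating (purely a hypothetical sketch of our own; neither HotCRP nor the SPC used this structure), a reviewer's expertise could be recorded per dimension alongside a separate confidence score for the review itself:

```python
# Hypothetical multi-dimensional reviewer self-rating, replacing a single 5-point
# expertise scale with per-dimension ratings plus a confidence score.

from dataclasses import dataclass

@dataclass
class ReviewerSelfRating:
    method_expertise: int    # 1-5: familiarity with the paper's methods
    domain_expertise: int    # 1-5: familiarity with the application domain
    time_spent_hours: float  # rough time invested in the review
    confidence: int          # 1-5: reviewer's confidence in their own review

def weight(rating: ReviewerSelfRating) -> float:
    """Toy weighting a meta-reviewer might apply when balancing conflicting reviews."""
    expertise = (rating.method_expertise + rating.domain_expertise) / 2
    return expertise * (rating.confidence / 5)

r = ReviewerSelfRating(method_expertise=4, domain_expertise=2,
                       time_spent_hours=3.5, confidence=3)
print(round(weight(r), 2))  # -> 1.8
```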
We also noticed growth in participants' self-efficacy, through their belief in being able to write good reviews provided they were given extra time: "It gave me a lot of confidence. I am now able to write a review that I am proud of. I write knowing that I am giving it my best. Thank you again!" [Group C Debrief Meeting]. Participants frequently wished to have multiple rounds of discussion with their fellow reviewers before finalizing their reviews, confident that through this process they could give the paper under consideration full coverage, as each person took a different angle. To facilitate this, many groups of participants asked for extra time, which was granted.

5.4 Giving Feedback to Reviewers: The Missing Link?

As part of the SPC, we introduced a post-review feedback activity designed to close the author-reviewer feedback loop. As part of this activity, we contacted the authors of the papers that were reviewed and conducted Zoom interviews with them, using the reviews as a discussion point. During the interviews, we referred back to the participants' reviews to get the authors' perspectives on particular comments. Typically, reviewers receive no or limited feedback on the reviews that they provide to authors. Some reviewing platforms, such as HotCRP, allow other reviewers to anonymously rate their peers' reviews. However, the only feedback that is solicited from authors is during the author rebuttal stage, in which authors have to defend the merit of their work (and which, as such, is not intended to be feedback on the quality of the reviews received).
This type of emphasis was not present in previous SPC attempts, in which the focus was on creating a separate safe space for amateur reviewers, similar to a moot-court experience for budding lawyers. However, we found that by exposing the reviews of amateur reviewers to real-world authors and receiving their live feedback, a rich avenue for formative feedback emerged. Authors indicated the ways in which the reviews were or were not useful, and provided suggestions on how reviewers could improve going forward. The feedback from the authors was collated and summarized into core themes that formed part of the reflective discussions during the debrief meetings at the end of the program. Such formative feedback provided an additional learning opportunity for the SPC participants.
Many authors felt that the quality of the reviews from the SPC and TPC were similar, particularly in the cases when the SPC members were familiar with the authors’ field. For example, Author 7 said: “To be honest, if I hadn’t known that I received the reviews from the Shadow PC, [then] that would have sufficed as feedback. And also in terms of the decision... there is a very good congruence [with the Technical PC]”.
In many cases, authors even pointed out that the SPC reviews highlighted useful areas for improvement that were not identified by the TPC reviewers. “I think people [from the SPC] really put in a lot of effort. Because there are things that I didn’t get comments on from the Technical PC, that I feel are still very strong points that I’m going to pick up” [Author 7]. From our analysis of the reviews, we surmised that this difference could be attributed to the amount of time and scrutiny that SPC members applied to papers. Their increased efforts, focused on fewer papers than members of TPC (who typically have a greater reviewing load), meant that they were able to pick up more areas of improvement at a micro level. This can also be seen in Author 2’s assessment of the two sets of reviews they received: “we found actionable feedback in a number of the reviews... [Shadow PC reviews] were much clearer about, ‘Here is my concern with the paper, here is how I might address that”’ [Author 2]. However, it was not always the case that SPC reviews were on par with TPC reviews. As would be expected from a training program’s first group of trainees working in a cross-disciplinary conference space, many SPC members were assigned papers to review that were outside their disciplinary area of expertise.
Inevitably, this led to some SPC members providing general and high-level feedback, without commentary on methods or domains they were not comfortable with or had limited exposure to: "I felt like the [Shadow PC] reviewers were more scratching the surface, not going into depth" [Author 6]. Indeed, as part of the training, we had advised participants on strategies for such situations (i.e., acknowledging their limited area of expertise and focusing on the areas of the paper that they were confident addressing).
The authors we spoke with also commented on the quality of the writing and communication from the reviewers. Reviews framed in a constructive and polite tone were considered highly desirable. For example, Author 2 said, "I feel like a lot of effort and thought went into making the reviews useful and critical, but not overly harsh. Like, yeah, you see lots of memes about 'Reviewer 2,' and I didn't get 'Reviewer 2'! And I'm grateful for that". For authors who had previously received terse and unhelpfully framed reviews, the present set of reviews was a welcome change. On the other hand, we also heard from one author, whose paper was negatively rated by both SPC and TPC reviewers, about negative experiences regarding the communication of reviewer feedback: "I thought [one Shadow PC reviewer] took a very aggressive tone... the way they were asking questions was, "How can you not do this? How can you...?" things like that. Whereas [another Shadow PC reviewer], had a very calm but critical stance which they took in the reviews. It definitely helps to have that calmness in the tone" [Author 6].

6 Discussion

The SPC at SUBVERT Conference was designed on the assumption that participants would learn most from a collaborative experience that mimicked the features of a real-world program committee as closely as possible. Our findings offer numerous observations about the nature of reviewing, junior researchers’ views of conferences, and the challenges they face in their own professional development. Drawing on these observations, we outline four design opportunities for HCI researchers who want to explore innovative ways to support junior researchers and to improve the conferencing process in general and the reviewing process in particular.

6.1 Designing for Social Capital

Experienced PC members often reference the relationships they built during their time on the PC, suggesting that networking is a key outcome of participation in program committees. In addition, earlier SPC programs identified that participants wanted to “put a name to a face” and meet other participants [15]. Even before the pandemic, these committees posed barriers to entry for researchers lacking resources (e.g., time off from teaching commitments and travel funding). Since 2020, these barriers have been exacerbated as researchers worldwide have faced severely restricted opportunities for networking.
When designing online events, we should consider what is valued in their offline counterparts. Anderson and Anderson argue that the most important function of a conference is “the creation of opportunities for informal socialization, entertainment and networking. deal making... bonding and friendship building by members of that community” [3, p. x]. Unfortunately, online academic events struggle to replicate these benefits. Previous work has explored the design of conference spaces for socialization through simulated venues for informal discussion (e.g., coffee hours) [22] or interactive virtual hangout spaces (e.g., Second Life) [21, 49]. However, such techniques have had mixed success and uptake [49].
We approached the SPC program as a different means of facilitating networking in and around online conferencing events. Mindful of participants from the Global South, we made four design decisions aimed at supporting ECRs in building relationships and gaining social capital: (1) we split participants into four roughly equal cohorts (of approximately 20 individuals each), who were then able to build relationships over the duration of the program; (2) we distributed participants across the cohorts so that no one country or region was over-represented in any group (despite the inconvenience this caused for meeting scheduling), which ensured that participants based in India (where 18 of the ECRs came from) could be co-reviewers with participants in Uganda or Iran; (3) we also distributed participants by career stage, ensuring an even spread of PhD candidates and post-docs in each cohort; and (4) we supported asynchronous communication (through WhatsApp), given the wide range of time zones participants were located in; this practice is also common in other conferences, notably at PC meetings for HCI conferences.
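To make the stratified-assignment logic behind decisions (1)–(3) concrete, the sketch below shows one way such a split could be implemented. It is a minimal illustration assuming hypothetical participant records with `country` and `career_stage` fields; it is not the tooling we used, and in practice assignments at this scale can equally be made by hand.

```python
import random
from collections import defaultdict

def assign_cohorts(participants, n_cohorts=4, seed=42):
    """Stratified round-robin split of participants into cohorts.

    `participants` is a list of dicts with hypothetical keys
    'name', 'country', and 'career_stage' (e.g., 'phd' or 'postdoc').
    Grouping by (career_stage, country) before dealing people out
    round-robin keeps any single country or career stage from
    dominating one cohort.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for person in participants:
        groups[(person["career_stage"], person["country"])].append(person)

    cohorts = [[] for _ in range(n_cohorts)]
    slot = 0
    for key in sorted(groups):          # deterministic order of strata
        members = groups[key]
        rng.shuffle(members)            # randomize within each stratum
        for person in members:
            cohorts[slot % n_cohorts].append(person)
            slot += 1
    return cohorts

# Example with hypothetical records:
# cohorts = assign_cohorts([
#     {"name": "A", "country": "India", "career_stage": "phd"},
#     {"name": "B", "country": "Uganda", "career_stage": "postdoc"},
# ])
```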
Furthermore, these design decisions extend what has been done by previous SPC programs and the earlier publications describing them. Earlier SPCs were conducted in person and, as a result, most participants were from high-income countries (Europe, the US, and Canada) [15, 23], reducing access for ECRs from the Global South and limiting the breadth of networking opportunities. For instance, Feldmann et al. reported that invitees from Asia were unable to participate in the SIGCOMM SPC due to the cost of travel to Germany [15].
The widespread adoption of virtual and hybrid conferences has opened many new opportunities for increasing the social capital of junior researchers. At a fundamental level, we argue that the review process can be redesigned to integrate junior researchers into the academic community and the process of peer-review. SPC is but one of many approaches that can be used to achieve this aim. Others have piloted approaches such as: (1) introducing transparency, for example by using application forms for joining a PC (and thus making the process of enlisting paper reviewers more inclusive); and (2) ECR-specific events, such as workshops or colloquia deliberately focused on providing short, confidential mentorship sessions [16]. We can also design new opportunities for peer-mentorship to reduce the burden of mentoring activities on senior researchers; for instance, one such approach may involve pairing junior faculty members and recent PhD graduates with first-year PhD candidates.

6.2 Designing for Reliable Reviewer Processes

One area that has received significant attention is improving the peer-review process as a means of overseeing quality in academic scholarship. Peer-reviewers and the peer-review process play a significant role in academic publishing. However, the peer-review process must itself be effective if it is to uphold the quality and validity of scholarly publications as it seeks to do [12]. The process continues to be strained as more academics enter the academic market and publication volumes continue to rise. Consequently, there is a growing demand for reviewers to keep up with growing publication outputs and conference/journal needs [46, 52, 57]. Moreover, it has proven difficult to maintain a pool of reviewers and have them meet deadlines [12]. According to Newman et al., the typical HCI review process “suffers from some recurring problems such as highly unpredictable submission levels, shortage of experienced reviewers, and a complex, tightly scheduled review cycle. Reviewers often encounter these problems in the form of extra papers to review, papers outside their area of expertise, and lack of sufficient time to do the job properly” [44].
Others before us, such as Birman and Schneider, have questioned the assumptions underpinning CS conferences’ reviewing processes and how those assumptions reduce the impact of researchers’ work. Addressing this will require a fundamental rethink: “Force fields are needed to encourage researchers to maximize their impact, but creating these force fields will likely require changing our culture and values” [7]. Grudin similarly argues that although bringing in junior researchers is a potential solution, they may feel more comfortable identifying minor flaws and less comfortable declaring whether work is more or less important [19]. Moreover, the standard expectations around reviewing have many nuances that are often not addressed. For instance, while reviewers are often told to identify whether a paper makes original contributions, they are not told to identify “innovation for innovation’s sake” or cases where innovation may have negative consequences (for instance, when a new technology is introduced among a marginalized group). Making this distinction requires nuance, which researchers like Sturdee et al. argue is lacking among ECRs and those new to the peer-review process [55].
If, as we have argued in this article, junior researchers are involved more heavily in the conference peer-reviewing cycle and afforded review-training opportunities, then we can build processes around diversifying the review load and tackling the time constraints reviewers often experience. Time pressure is just one of several challenges that both established and emerging conferences face: because reviewing commitments made by established and senior researchers are voluntary, many reviews are not submitted on time and some are not submitted at all. This is a problem that program committees have become accustomed to, and there is no way to enforce timely submission beyond standard approaches such as sending repeated reminders. Often, when senior researchers do not respond and the program committee is scrambling for reviewers, it turns to reviewers it trusts and/or junior researchers.
One of the main challenges we faced in delivering the SPC was the logistics of managing a conference cycle while the pandemic was unfolding. Emergency reviewing is always a challenge in peer-reviewing [47]. We faced this issue during the SPC as well, exacerbated by the worldwide COVID-19 pandemic. SPC participants in South Asia were significantly affected, as the SPC timeline coincided with a huge humanitarian disaster on the Indian subcontinent. We thus had to extend deadlines to account for the emergency situations faced by a number of our SPC participants. Initially, we had created generous timelines that allowed participants to review at their own pace. Perhaps unsurprisingly, many scheduled their reviewing towards the end of the timeline we provided, which also coincided with personal health emergencies for many of our participants. While we initially granted only a limited extension and asked participants to submit within that timescale, participants mobilized through the WhatsApp platform to request more time.
Despite this temporary challenge, what we discovered through this endeavor is that junior researchers are highly motivated to take part in reviewing. We can thus design reviewing activities in which junior researchers contribute to the production of high-quality reviews by taking on certain micro-tasks. To take a few examples of where they can contribute: (1) they can help with paper allocation; rather than relying on the automated allocation algorithms built into conference submission platforms, which are often unreliable, allocations proposed by junior researchers can be vetted by PC members before papers are sent to potential reviewers. (2) They can help screen papers for desk rejection and check whether a paper’s scope is relevant to the conference, again giving the PC the information it needs to make informed decisions in less time. (3) They can screen individual papers’ literature reviews, noting whether the paper has correctly summarized the cited works’ arguments and recommending papers that should be cited but are currently missing; these notes can then assist the reviewers. (4) They can parse the methodology of submitted papers to identify whether appropriate procedures have been followed or whether the description of ethical requirements for that methodology is incomplete. It is worth adding the caveat that these micro-tasks vary in the level of expertise they require. As such, our intention is to present them as provocations, potential design opportunities for the conferencing community to respond to.
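To illustrate how the first micro-task (paper allocation) might be scaffolded, the sketch below scores reviewers against a paper using a simple keyword overlap. The data fields and the scoring rule are hypothetical assumptions made for the example (a shadow reviewer could assemble the keyword sets by hand), and any suggested allocation would still be vetted by the PC as noted above.

```python
def suggest_reviewers(paper_keywords, reviewer_profiles, top_k=3):
    """Rank reviewers by keyword overlap with a paper.

    `paper_keywords` is a set of lowercase keywords for one submission;
    `reviewer_profiles` maps reviewer names to sets of interest keywords.
    Both inputs are hypothetical; the ranking is only a suggestion
    for the program committee to vet.
    """
    ranked = []
    for reviewer, interests in reviewer_profiles.items():
        union = paper_keywords | interests
        shared = paper_keywords & interests
        # Jaccard-style score: shared keywords relative to the combined set.
        score = len(shared) / len(union) if union else 0.0
        ranked.append((score, reviewer, sorted(shared)))
    ranked.sort(key=lambda item: item[0], reverse=True)
    return ranked[:top_k]

# Example with hypothetical data:
# suggest_reviewers(
#     {"peer review", "global south", "mentorship"},
#     {"R1": {"peer review", "hci"}, "R2": {"accessibility"}},
# )
```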

6.3 Designing for Teaching Points in Conferences

Halfaker et al. ask the question “Does the community have a conference or does a conference form a community?” [20]. Cabot et al. found that within the top Computer Science conferences, it is rare to find papers by newcomers to the conference (papers where none of the authors had previously published in that conference) [9]. The authors also argued that semi-newcomers, researchers who had never published in the main track but had published in other tracks (e.g., posters and demos) or satellite events, fared much better. There is a general feeling that “newcomers must first learn the community’s particular “culture” (in the widest sense of the word, including its topics of interest, preferred research methods, social behavior, vocabulary, and even writing style) either by simply attending the conference or warming-up publishing in satellite events, before being able to get their papers accepted in the main research track” [9]. However, this is an unnecessary barrier that is (unintentionally) enabled by insiders within the research community. Conference committees risk becoming a gated community, where external reviewers are called upon to fulfill quick review jobs but very few get invited to stay and gain from the community.
Our findings showcase the extent to which ECRs are unaware of the process of reviewing. Numerous questions raised by the cohort throughout the program exemplify this issue: understanding reviewing processes, writing good reviews and meta-reviews, and simply understanding how paper accept/reject decisions are made were among such concerns. Although these may seem trivial to senior researchers, this is certainly not the case for ECRs. Even though 92% of our cohort had experience of writing papers, the process that followed submission of their work was largely opaque to them, as illustrated by our findings.
Postdoctoral researchers are a key part of the conference reviewing cycle. Yet these same individuals, as authors, submit papers to conferences without understanding the intricacies of the peer-review process, and when they begin reviewing papers, there is no feedback mechanism in place through which their reviewing skills can be assessed and improved. It is left to individual researchers to simply get better over time through their own efforts. Power dynamics are also at play between authors and reviewers: the anonymous format of reviews gives a lot of power to the reviewer, and many have previously called for greater reviewer accountability.
One of the key questions is what happens after the reviews are provided to the authors. Many questions remain unanswered: were the reviews useful feedback to the authors? How were the reviews perceived from the authors’ perspective? If we have a pool of motivated ECRs, it is not a stretch to create an activity focused on collecting author feedback on the reviews received. This activity can be carried out by the ECRs and can take the form of a 15-minute video call.
With the expertise they have gained through their doctoral studies, ECRs are strategically placed to carry out a debrief with the author. This has additional advantages: it enables ECRs to network with global academic audiences in their field and conference of choice (indicated by the fact that they are already present at this conference and have been assigned a paper within their area of expertise), and involving the ECR humanizes the process. Once the reviews have been sent back, the program committee can tell the ECR the authors’ identity (with the authors’ consent) so that they can organize a call, and the ECR can then anonymously report their observations back to the other reviewers. Through this step, we also recognize that ECRs have something very valuable to bring. The design of such pathways needs to be carefully considered around the issue of maintaining double-blind reviews, particularly if unfavourable review decisions are made (e.g., through the use of asynchronous, text-based anonymous platforms). The key takeaway for conference designers is that review processes should be re-designed to scaffold in the participation of junior researchers, both to improve efficiency and to aid researcher development.
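One way such a pathway could preserve double-blind integrity is to route the author’s comments through a de-identified record before they reach the reviewers. The minimal sketch below illustrates that idea; all field names are hypothetical and it does not describe any existing conference system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DebriefNote:
    """De-identified summary of an author debrief, written by the ECR facilitator.

    Only the paper ID and the substance of the feedback travel onward to
    reviewers; the author's identity stays with the program committee.
    """
    paper_id: str
    review_ids: List[str]                               # which reviews the comments concern
    useful_points: List[str] = field(default_factory=list)
    unclear_points: List[str] = field(default_factory=list)
    tone_concerns: List[str] = field(default_factory=list)

def relay_to_reviewers(note: DebriefNote) -> Dict[str, object]:
    """Build the payload forwarded to reviewers: no names, no affiliations."""
    return {
        "paper_id": note.paper_id,
        "review_ids": note.review_ids,
        "feedback": {
            "useful": note.useful_points,
            "unclear": note.unclear_points,
            "tone": note.tone_concerns,
        },
    }
```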

6.4 Designing for Continuous Professional Development

There are many models for improving the reviewing skills of junior researchers, and events like the SPC are limited attempts at doing this. This SPC program involved longer and more intensive training for participants relative to earlier SPC-type programs [15, 23, 54]. However, even with this additional support and training, it is clear that once-off training programs like this are not the most effective strategy. Extant literature shows that for learning to be effective, learning processes need to be embedded within the week-to-week work of the participants, and there was a strong demand for this kind of ongoing development among participants in our SPC program. Previous research has shown that metrics for tracking academic performance improvements can degenerate into prescriptive methods or approaches that embody top-down micro-management [13]. As such, we need approaches that are embedded and give greater agency to participants.
Another limitation of this kind of once-off approach to preparing reviewers is the time commitment and effort required of the organizers, spread over many months, which limits its sustainability. We, the authors, were also in privileged positions to commit this time through the pandemic: none of us had caring responsibilities, and we were all supported by our respective departments in allocating time towards the initiative. We acknowledge that not all researchers are in this position and, as such, organizers need to be aware of these considerations. In our case, the SPC organizing team consisted of four members, which was adequate to share the load.
A third limitation pertains to the unequal impacts of the COVID-19 pandemic, the backdrop for our SPC efforts. Studies among researchers have shown that women have been authoring less during the pandemic, while men’s productivity has gone up ([30, 32] from [39]); one potential explanation is uneven caregiving burdens. Online conferences are easier to attend, but attendance can be undermined by expectations (both self-imposed and from peers and supervisors) of having to work a normal day on top of conference attendance ([18] from [39]). In our case, we deliberately kept the number of real-time events small (four) and spread them over a period of 5 months. Despite these limitations, this approach worked well for our immediate purposes at SUBVERT Conference: we were able to create a thorough training program, and the authors who received feedback from the SPC members were appreciative of the reviews. In future conferences, we will be trying a more embedded approach.
In many professions (e.g., healthcare and education), continuous professional development (CPD) is an expectation placed on practitioners. Within academia, however, limited attention is paid to this aspect of our careers. This lack of emphasis creates a divide between those who are engaged in, and have access to, activities that contribute to professional development and those who do not: those who are privileged have ready access to informal opportunities for CPD that others lack.
For instance, academia is a highly networked sector and has been described as an “incestuous” industry [27]. Job positions, collaboration opportunities, and grant-writing are competitive, and those with connections to other researchers are rewarded with more opportunities and possibilities. A key part of researchers’ professional development in academia therefore centres on networking, mentorship, and collaborative events. Networking widens the circle of civil-society, private-sector, and research practitioners that an individual is connected with, making it more likely that they hear about certain opportunities. Mentorship enables a researcher to overcome gaps in their knowledge, as they gain input from someone much further along in their academic journey. And collaborative events are spaces deliberately created to kickstart a working relationship between two or more practitioners with mutual interests.
In this article, we have argued that the research community, like other practice communities, needs a more holistic approach to enable the ongoing professional development of its members (particularly ECRs and those based in the Global South). While the arguments we have put forward are radical in nature and can be perceived as another burden on over-committed academics, adapting existing practices presents an opportunity to grow a global community of trained ECRs. Through such efforts we are in a better position to diversify programme committees and promote inclusion in conference communities. Our findings highlight the value of such initiatives in the professional development of ECRs. Through this article, we contribute to the discussion around the future of conferences as community-building activities.
Indeed, radical changes to the reviewing process are not new to computing conferences. Prior to 1999, CHI applied different review criteria to papers depending on their category. This was abolished in favor of unified reviewing criteria that assess all papers on the strength of their contributions rather than their importance relative to a particular category or subcommittee [44].

7 Conclusions

Organizers of academic conferences are concerned not only with quantity (number of attendees and paper submissions), but also with quality (e.g., the academic standard of submitted papers and the rigor of reviewing practices) and inclusion (i.e., enabling underserved groups within academic communities to be better involved). The SPC was an attempt by the organizers of an interdisciplinary ACM conference to provide a training opportunity for Early Career Researchers (ECRs) from around the world. It consisted of a 5-month virtual training program through which reviewers were given basic training and then assigned papers submitted to the conference. Once they had reviewed the papers, they attended a Program Committee meeting facilitated by the authors to discuss which papers would theoretically be accepted into the conference program. The reviews were then collated and sent to selected authors, who responded to the reviews and gave feedback.
We recognise that the SPC is a worthwhile point of entry for providing peer-review training to ECRs and engagement opportunities in academic communities, and consequently for diversifying programme committees. However, we also acknowledge that, as a once-off endeavor, it is not an effective strategy for long-term continuous development. We contribute to the discussion around building stronger communities of academics, particularly underserved participants from the Global South whose limited professional development opportunities have been further impacted by the pandemic. We present design considerations around how conferences can be re-imagined to leverage ECR capabilities and create genuinely inclusive community spaces by addressing the systemic inequalities faced by ECRs.

Acknowledgments

We wish to acknowledge all the participants who worked really hard and engaged with the Shadow PC program. Special thanks also to the conference steering committee for their generous support.

References

[1]
Titipat Achakulvisut, Tulakan Ruangrong, Isil Bilgin, Sofie Van den Bossche, Brad Wyble, Dan F. M. Goodman, and Konrad P. Kording. 2020. Improving on legacy conferences by moving online. eLife 9 (2020). DOI:
[2]
Alex Ahmed. 2018. Broadening participation beyond diversity: Considering the confluence of research questions and sociopolitical dynamics. Communications of the ACM 61, 7 (2018), 30–32. DOI:
[3]
Lynn Anderson and Terry Anderson. 2010. Online Conferences: Professional Development for a Networked Era. IAP.
[4]
Aruna D. Balakrishnan, Sara Kiesler, Jonathon N. Cummings, and Reza Zadeh. 2011. Research team integration: what it is and why it matters. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW’11). Association for Computing Machinery, New York, NY, 523–532.
[5]
Kasturi Behari-Leak. 2017. New academics, new higher education contexts: A critical perspective on professional development. Teaching in Higher Education 22, 5 (2017), 485–500. DOI:
[6]
Michael Bernstein, Dan Cosley, Carl DiSalvo, Sanjay Kairam, David Karger, Travis Kriplean, Cliff Lampe, Wendy Mackay, Loren Terveen, Jacob Wobbrock, and Sarita Yardi. 2012. Reject me: Peer review and SIGCHI. In Proceedings of the Conference on Human Factors in Computing Systems - Proceedings, 1197–1200. DOI:
[7]
Ken Birman and Fred B. Schneider. 2009. Viewpoint: Program committee overload in systems. Communications of the ACM 52, 5 (May 2009), 34–37. DOI:
[8]
Lynette Browning, Kirrilly Thompson, and Drew Dawson. 2014. Developing future research leaders: Designing early career researcher programs to enhance track record. International Journal for Researcher Development 5, 2 (November 2014), 123–134. DOI:
[9]
Jordi Cabot, Javier Luis Cánovas Izquierdo, and Valerio Cosentino. 2018. Viewpoint: Are CS conferences (too) closed communities? Communications of the ACM 61, 10 (2018), 32–34. DOI:
[10]
Robert M. Davison, Maris G. Martinsons, and Ned Kock. 2004. Principles of canonical action research. Information Systems Journal 14, 1 (2004), 65–86. DOI:
[11]
Roisin Donnelly. 2015. Values informing professional practice in academic professional development. International Journal of Technology and Inclusive Education 4, 1 (2015), 540–546.
[12]
Roisin Donnelly and Fiona McSweeney. 2010. From humble beginnings: Evolving mentoring within professional development for academic staff. Professional Development in Education 37, 2 (2010), 259–274. DOI:
[13]
Chris Elsden, Sebastian Mellor, Patrick Olivier, Pete Wheldon, David Kirk, and Rob Comber. 2016. ResViz : Politics and Design Issues in Visualizing Academic Metrics. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
[14]
SIGCHI Executive Committee, Adriana S. Vivacqua, Andrew L. Kun, Cale Passmore, Helena Mentis, Josh Andres, Kashyap Todi, Matt Jones, Luigi De Russis, Naomi Yamashita, Neha Kumar, Nicola J. Bidwell, Pejman Mirza-Babaei, Priya C. Kumar, Shaowen Bardzell, Simone Kriglstein, Susan Dray, Susanne Boll, Stacy M. Branham, and Tamara Clegg. 2022. Equity talks @SIGCHI. In Proceedings of the Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, Article 161, 3 pages. DOI:
[15]
Anja Feldmann. 2005. Experiences from the SIGCOMM 2005 European shadow PC experiment. Computer Communication Review 35, 3 (2005), 97–102. DOI:
[16]
Geraldine Fitzpatrick. 2017. Reflect, re-claim, reconnect: Learning to say yes wisely and strategically. In Proceedings of the Conference on Human Factors in Computing Systems, 1178–1181. DOI:
[17]
Sarah A. Gilbert, Casey Fiesler, Lindsay Blackwell, Michael Ann DeVito, Michaelanne Dye, Shamika Goddard, Kishonna L. Gray, David Nemer, and C. Estelle Smith. 2020. Public Scholarship and CSCW: Trials and Twitterations. Association for Computing Machinery, New York, NY, 447–456. DOI:
[18]
Jonathan Grudin. 2005. Why CHI fragmented. In Proceedings of the Conference on Human Factors in Computing Systems, 1083–1084. DOI:
[19]
Jonathan Grudin. 2011. Technology, conferences, and community. Communications of the ACM 54, 2 (February 2011), 41–43. DOI:
[20]
Aaron Halfaker, Cliff Lampe, Amy Bruckman, Aniket Kittur, R. Stuart Geiger, Loren Terveen, Brian Keegan, and Geraldine Fitzpatrick. 2013. Community, impact and credit: Where should I submit my papers? In Proceedings of the ACM Conference on Computer Supported Cooperative Work, 89–94. DOI:
[21]
Hsiao-Cheng Sandrine Han and Christine Liao. 2018. A hybrid virtual world conference as a way to create community. Journal of Virtual Studies 9, 2 (2018), 37–42.
[22]
Daniel L. Hoffman, Seungoh Paek, Curtis P. Ho, and Bert Y. Kimura. 2021. Online-only international conferences: Strategies for maintaining community. TechTrends 65, 4 (2021), 418–420. DOI:
[23]
Rebecca Isaacs. 2008. Report on the 2007 SOSP shadow program committee. Operating Systems Review (ACM) 42, 3 (2008), 127–131. DOI:
[24]
Yvonne Jansen, Pierre Dragicevic, and Kasper Hornbæk. 2016. What did authors value in the CHI’16 reviews they received? In Proceedings of the Conference on Human Factors in Computing Systems, 596–606. DOI:
[25]
H. Joffe. 2011. Thematic Analysis. In Qualitative Research Methods in Mental Health and Psychotherapy, D. Harper and A. R. Thompson (Eds.).
[26]
Reuben Kirkham, John Vines, and Patrick Olivier. 2015. Being reasonable: A manifesto for improving the inclusion of disabled people in SIGCHI conferences. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. 601–612.
[27]
Daniel B. Klein and Charlotta Stern. 2009. Groupthink in academia: Majoritarian departmental politics and the professional pyramid. The Independent Review 13, 4 (2009), 585–600.
[28]
Thomas Kohler, Johann Fueller, Kurt Matzler, and Daniel Stieger. 2011. CO-creation in virtual worlds: The design of the user experience. MIS Quarterly: Management Information Systems 35, 3 (2011), 773–788. DOI:
[29]
Neha Kumar, Kurtis Heimerl, David Nemer, Naveena Karusala, Aditya Vashistha, Susan M. Dray, Christian Sturm, Laura S. Gaytan-Lugo, Anicia Peters, Nova Ahmed, Nicola Dell, and Jay Chen. 2018. HCI across borders: Paving new pathways. In Proceedings of the Conference on Human Factors in Computing Systems, 1–7. DOI:
[30]
Neha Kumar and Naveena Karusala. 2021. Braving citational justice in human-computer interaction. In Proceedings of the Conference on Human Factors in Computing Systems. Association for Computing Machinery. DOI:
[31]
Neha Kumar, Christian Sturm, Syed Ishtiaque Ahmed, Naveena Karusala, Marisol Wong-Villacres, Leonel Morales, Rita Orji, Michaelanne Dye, Nova Ahmed, Laura S. Gaytán-Lugo, Aditya Vashistha, David Nemer, Kurtis Heimerl, and Susan Dray. 2019. HCI across borders and intersections. In Proceedings of the Conference on Human Factors in Computing Systems, 1–8. DOI:
[32]
Ying Feng Kuo and Lien Hui Feng. 2013. Relationships among community interaction characteristics, perceived benefits, community commitment, and oppositional brand loyalty in online brand communities. International Journal of Information Management 33, 6 (December 2013), 948–962.
[33]
Shaimaa Lazem, Danilo Giglitto, Makuochi Samuel Nkwo, Hafeni Mthoko, Jessica Upani, and Anicia Peters. 2022. Challenges and paradoxes in decolonising HCI: A critical discussion. Computer Supported Cooperative Work 31, 2 (2022), 159–196.
[34]
Sebastian Linxen, Christian Sturm, Florian Brühlmann, Vincent Cassau, Klaus Opwis, and Katharina Reinecke. 2021. How WEIRD is CHI? In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, Article 143, 14 pages. DOI:
[35]
Catherine Lyall and Laura R. Meagher. 2012. A masterclass in interdisciplinarity: Research into practice in training the next generation of interdisciplinary researchers. Futures 44, 6 (2012), 608–617.
[36]
Joe Marshall, Jocelyn Spence, Conor Linehan, and Stefan Rennick Egglestone. 2017. A little respect: Four case studies of HCI’s disregard for other disciplines. In Proceedings of the Conference on Human Factors in Computing Systems, 848–857. DOI:
[37]
Kenneth I. Maton and Deborah A. Salem. 1995. Organizational characteristics of empowering community settings: A multiple case study approach. American Journal of Community Psychology 23, 5 (1995), 631–656. DOI:
[38]
Kelly E. Matthews, Jason M. Lodge, and Agnes Bosanquet. 2014. Early career academic perceptions, attitudes and professional development activities: Questioning the teaching and research gap to further academic development. International Journal for Academic Development 19, 2 (2014), 112–124. DOI:
[39]
Dana McKay and George Buchanan. 2020. Feed the tree: Representation of Australia-based academic women at HCI conferences. In Proceedings of the ACM International Conference Proceeding Series, 263–269. DOI:
[40]
Michael Muller and Geraldine Fitzpatrick. 2018. 3rd Early career development symposium. In Proceedings of the Conference on Human Factors in Computing Systems, 2–3. DOI:
[41]
Cosmin Munteanu and Sharon Oviatt. 2019. CHI 2019 early career development symposium. In Proceedings of the Conference on Human Factors in Computing Systems, 1–5. DOI:
[42]
Samwel Dick Mwapwele and Judy Van Biljon. 2021. Digital platforms in supporting ICTD research collaboration : A case study from South Africa. In Proceedings of the 3rd African Human-Computer Interaction Conference, 125–130.
[43]
Lennart E. Nacke. 2021. How to write CHI papers, online edition. In Proceedings of the Conference on Human Factors in Computing Systems, 9–11. DOI:
[44]
William Newman, Robin Jeffries, and M. C. Schraefel. 2005. Do CHI papers work for you? Addressing concerns of authors, audiences and reviewers. In Proceedings of the Conference on Human Factors in Computing Systems, 2045–2046. DOI:
[45]
David Nicholas, Anthony Watkinson, Cherifa Boukacem-Zeghmouri, Blanca Rodríguez-Bravo, Jie Xu, Abdullah Abrizah, Marzena Świgoń, and Eti Herman. 2017. Early career researchers: Scholarly behaviour and the prospect of change. Learned Publishing 30, 2 (2017), 157–166.
[46]
Kayhan Parsi and Nanette Elster. 2018. Peering into the future of peer review. American Journal of Bioethics 18, 5 (2018), 3–4. DOI:
[47]
Marian Petre, Kate Sanders, Robert McCartney, Marzieh Ahmadzadeh, Cornelia Connolly, Sally Hamouda, Brian Harrington, Jérémie Lumbroso, Joseph Maguire, Lauri Malmi, Monica M. McGill, and Jan Vahrenhold. 2020. Mapping the landscape of peer review in computing education research. In Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, 173–209. DOI:
[48]
Henning Pohl and Aske Mottelson. 2019. How we guide, write, and cite at CHI. In Proceedings of the Conference on Human Factors in Computing Systems, 1–11. DOI:
[49]
Michael Rymaszewski, Wagner James Au, Mark Wallace, Catherine Winters, Cory Ondrejka, and Benjamin Batstone-Cunningham. 2007. Second Life: The Official Guide. John Wiley & Sons.
[50]
Sarvenaz Sarabipour, Aziz Khan, Samantha Seah, Aneth D. Mwakilili, Fiona N. Mumoki, Pablo J. Sáez, Benjamin Schwessinger, Humberto J. Debat, and Tomislav Mestrovic. 2021. Evaluating features of scientific conferences: A call for improvements. bioRxiv 2020.04.02.022079. DOI:
[51]
Leah Shagrir. 2017. Collaborating with colleagues for the sake of academic and professional development in higher education. International Journal for Academic Development 22, 4 (2017), 331–342. DOI:
[52]
Kay Cheng Soh. 2013. Peer review: Has it a future? European Journal of Higher Education 3, 2 (2013), 129–139. DOI:
[53]
Maria Spilker, Fleur Prinsen, and Marco Kalz. 2019. Valuing technology-enhanced academic conferences for continuing professional development. A systematic literature review. Professional Development in Education 46, 3 (2019), 482–499. DOI:
[54]
I. Stelmakh, N. B. Shah, A. Singh, and H. Daumé III. 2021. A novice-reviewer experiment to address scarcity of qualified reviewers in large conferences. In Proceedings of the AAAI Conference on Artificial Intelligence 35, 6 (2021), 4785–4793.
[55]
Miriam Sturdee, Joseph Lindley, Conor Linehan, Chris Elsden, Neha Kumar, Tawanna Dillahunt, Regan Mandryk, and John Vines. 2021. Consequences, schmonsequences! considering the future as part of publication and peer review in computing research. In Proceedings of the Conference on Human Factors in Computing Systems. DOI:
[56]
Christian Sturm, Alice Oh, Sebastian Linxen, Jose Abdelnour-Nocera, Susan Dray, and Katharina Reinecke. 2015. How WEIRD is HCI? Extending HCI principles to other countries and cultures. In Proceedings of the Conference on Human Factors in Computing Systems, 2425–2428. DOI:
[57]
Stephen Wilkins, Joe Hazzam, and Jonathan Lean. 2021. Doctoral publishing as professional development for an academic career in higher education. The International Journal of Management Education 19, 1 (2021), 100459. DOI:
[58]
Max L. Wilson. 2020. How to: Peer review for CHI (and beyond). In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA’20). Association for Computing Machinery, New York, NY, 1–4.
