
Convening a Minipublic During a Pandemic: A Case Study of the Oregon Citizens’ Assembly Pilot on COVID-19 Recovery

Published: 28 July 2022
Abstract

    In July and August 2020, the nonprofit organization Healthy Democracy convened a seven-week pilot test of an online Citizen Assembly on the state of Oregon's response to the COVID-19 pandemic. This pilot project presented a unique research opportunity, because its organizers had ten years of experience running the Citizens’ Initiative Review, a face-to-face minipublic authorized by the State of Oregon to write voting guides for the wider electorate on ballot measures. This case study compares survey data from the Citizen Assembly pilot with the prior Citizens’ Initiative Reviews and provides analysis and recommendations that could improve the design and execution of future online assemblies.

    1 Introduction

    When a health crisis erupts, society relies on medical professionals to deploy their intelligence and experience. During the COVID-19 pandemic, daily news reports celebrate front-line healthcare workers and biomedical researchers, who care for the ill and develop treatments and vaccines, respectively. Each of these fields provides deeper knowledge of the disease, including how it spreads and its long-term health effects.
    Governments and civic organizations also have special roles to play during emergencies like a pandemic. These include the need to obtain candid input from diverse experts and to distill and deploy citizens’ volunteer labor and collective intelligence. The latter, in turn, supplement scientific knowledge with insights about how to apply this information, weigh alternative courses of action, and face up to the trade-offs that policy and social choices imply [Moore and MacKenzie 2020; Park and Johnston 2017]. In the United States, the health and medical professionals have earned high marks for their response to COVID-19, but local, state, and federal government responses have been uneven, at best. Civic organizations have likewise struggled to interface with government to forge a public consensus across the country's wide cultural divides.
    In the state of Oregon, however, the conditions for engaging citizens on COVID-19 response were more favorable thanks to that state's decade of experience with a public engagement process that set precedents for bringing citizens together for deliberation on pressing issues. In response to intense lobbying by good-government advocates concerned about the state's initiative process [Gastil and Knobloch 2020], the 2009 state legislature authorized an official test of the Oregon Citizens’ Initiative Review (CIR). In 2011, that same government made the process permanent by establishing the CIR Commission. Thus, many state legislators were already familiar with concepts of minipublics such as random selection, citizen deliberation, and feedback mechanisms between small deliberative bodies and the wider public. In the language of this special issue, Oregon passed into law a process that qualifies as a case of CrowdLaw [Noveck 2018].
    Thanks to the success of the CIR, Healthy Democracy became one of Oregon's most prominent civic groups [Gastil and Knobloch 2020]. From 2010 to 2016, this organization convened seven official CIR processes that permitted a stratified random sample of Oregon citizens to draft a Citizens’ Statement that gave fellow voters crucial information about statewide ballot measures. In addition, Healthy Democracy conducted ten pilot CIRs from 2014 to 2018 to experiment with variations on the official process, and as a side project, it began Community Oregon, a series of summer dialogues aimed at bridging partisan divides.1
    Oregon also has civic organizations with experience convening public discussions on healthcare. Oregon Health Decisions helped the state develop policy on living wills, end-of-life decisions, and other healthcare dilemmas.2 Partly owing to the civic mission of the city-centered Portland State University, other groups such as Kitchen Table Democracy (formerly the Policy Consensus Initiative) helped facilitate conversations on a wide range of topics.3
    When the COVID-19 pandemic emerged, these civic organizations and a handful of legislators saw the need to apply their experience with the CIR and similar processes to help citizens and government officials work through the challenges of COVID-19 response.4 What they saw was a tangle of inconsistent policies, divided opinion, and widely varying public safety behaviors. As in many states, the Oregon public's response to government actions was mixed, at best. Some critics said that the governor and local officials were not acting fast enough to halt the spread of the virus, particularly in Portland and other urban areas. More libertarian critics and those following the lead of President Trump decried mask mandates, business and school closures, and limits on free association, such as in-person religious services, particularly in the state's more rural counties.5
    Healthy Democracy saw an opportunity to step into the fray and find what common ground might help reconcile disparate views and policies. In May 2020, Healthy Democracy partnered with Kitchen Table Democracy to design a deliberative process that would address one or more specific policy questions, initially provided by three state senators (one Republican and two Democrats) who were participating in the project. Informed by its CIR experience, Healthy Democracy recruited three dozen state residents to participate in the Oregon Citizen Assembly Pilot on COVID-19 Recovery (ORCA) during July and August. This pilot minipublic provided a unique research opportunity, because it tested an experienced organization's capacity to pivot into a fully online space of citizen engagement during an emergency. Building on previously tested design concepts (but also experimenting with an online format), the organization attempted to replicate the success of the CIR, which typically brings citizens together for face-to-face deliberation over four to five days.
    In this case study, we review the larger theoretical and practical context of the ORCA, compare survey data from the ORCA and the CIR, and incorporate the first-hand observations of a large research team, while assessing the overall quality of the ORCA. Our final sections draw recommendations and conclusions from this case study for future efforts to study and develop CrowdLaw.

    2 Theoretical and Political Context

    Healthy Democracy did not appear spontaneously on the Oregon landscape. It grew organically out of a particular mix of intellectual, pragmatic, and political soils. Its name came from a book by the same title, authored by Ned Crosby, a philanthropist and civic leader who founded the Jefferson Center for New Democratic Processes in Minnesota. He wrote that book while in conversation with John Gastil, whose 2000 book on election reform [Gastil 2000] inspired two veteran Oregon activists to join forces with Crosby and create the CIR [Gastil and Knobloch 2020].
    The CIR has evolved since its inception in 2010, but the basic design remains the same. Healthy Democracy draws a random sample of 20–24 of Oregon's registered voters, stratifying to create a sample that broadly represents the state's geography and demographics. Focusing on a single ballot measure, these CIR panelists hear from proponents and opponents, as well as policy experts either chosen by each side or selected by Healthy Democracy and accepted by pro and con representatives. Panelists ask questions of these witnesses and spend the remainder of their time deliberating as a full body and in smaller groups. Using a supermajority rule to approve each sentence in their Citizens’ Statement, the panelists write a single page of Key Findings, Arguments For, and Arguments Against the ballot measure. The Secretary of State then places this page in the official state Voters’ Pamphlet [Knobloch et al. 2013].
    From the perspective of the state legislature that adopted the CIR, it mitigates the hazards of direct democracy by allowing a group of voters to systematically evaluate ballot measures, helping to identify those that are most ill-conceived or will likely have more harmful than beneficial effects for the public. Meanwhile, the CIR gives Oregon voters something they want—easy access to high-quality information and arguments during initiative campaigns increasingly saturated with misleading information.6 Evidence collected in previous research shows that the CIR does have these intended effects on the electorate, at least by degrees [Gastil et al. 2018; Már and Gastil 2020], as well as secondary benefits for voters, such as boosts in political self-confidence [Knobloch, Barthel, and Gastil 2019].
    Scholars refer to the broader theory underlying the CIR—and the writings that gave rise to it—as “deliberative democracy” [Chambers 2003; Gutmann and Thompson 2004]. The normative aspect of this body of work argues that modern democracy requires not just broad participation but also a deliberative quality of speech in everything from governing bodies to public meetings. Empirical deliberative theory stresses a system-wide perspective, which can sometimes discern a deliberative purpose in anything from protests to advocacy media that are not, in and of themselves, instances of deliberation [Parkinson and Mansbridge 2012]. In a deliberative democracy, social movements and policy decisions alike ground themselves in thoughtful judgments, empathic discernment of the public interest, and a solid foundation of relevant information.
    Given the state of affairs at present, it is not surprising that this theory often presents itself as a critique of disinformation, misguided populism and political extremism, but it just as often inspires innovations in civic engagement. The most visible of these is the modern “minipublic,” which typically convenes a paid random sample of citizens or residents to deliberate over a series of meetings on a particular policy question or social problem [Setälä and Smith 2018]. A minipublic usually concludes by proposing a law, making a recommendation, or issuing a report. They have appeared across the globe in recent decades, with the most common examples being Citizens’ Assemblies, Citizens Juries, and Deliberative Polls [Fung 2007; Grönlund, Bachtiger, and Setälä 2014; MacKenzie and Warren 2012].
    That Oregon provided fertile ground for citizen deliberation might not be obvious to an outside political observer. Nestled in the Pacific Northwest, the state is divided by the Cascade Mountains into eastern and western sides. Western Oregon features the state's largest city, Portland, the progressive bastion of Eugene, which includes the University of Oregon, and the state capitol building in Salem. Much of the state is rural, and areas east of the Cascades have very low population density, especially in the southeast part of the state, where density is less than one person per square mile [U.S. Census Bureau 2010]. The rural parts of the state feature a conservative worldview with an anti-government strain of individualism. The aforementioned cities host a progressive political mindset that has flashes of radical egalitarianism. Aside from wearing flannels during the region's mild winter, there is little on which people from these distinct areas reliably agree [Gastil et al. 2016].
    In recent years, the state's bitterest disputes have garnered international attention. In early 2016, far-right extremists occupied a building on the Malheur National Wildlife Refuge in the far east of the state for five weeks, with one participant dying in an attempted arrest by the FBI.7 Four years later, Black Lives Matter protests in downtown Portland featured creative displays of solidarity but also attracted more violent responses, resulting in a shooting death of a far-right protester in August, 2020.8
    Nonetheless, Oregon has a long tradition of citizen participation in governance. It was among the first U.S. states to authorize statewide citizens’ initiative and referendum processes at the beginning of the twentieth century [Miller 2009]. With a population of roughly four million people, Oregon has a part-time, non-professional legislature that alternates each year between a “long session” (160 days) and a much shorter one. The annual pay is just over U.S. $31,000, plus a per diem for the days the legislature is in session.9
    It is in this political setting that Healthy Democracy went about its business in election years, bringing together 20–24 citizens for the CIR to weigh in on ballot measures on issues such as medical marijuana, election reform, tax laws, and mandatory criminal sentencing. A decade of successful CIR deliberation in Oregon convinced the leadership of Healthy Democracy that they could apply the same basic minipublic design to the COVID-19 issue, even if it meant creating an entirely online process and conclusions aimed at public officials instead of voters.
    Such thinking was reasonable, not least because many of the CIR's legislative sponsors still served in the legislature. The state's sitting governor, Kate Brown, praised the CIR when she visited it in her capacity as Secretary of State at its inauguration. Proud of the process’ potential and invoking the state's identity, she quipped at the time, “I'm hoping we're going to be able to go national at some point…You guys are the pioneers” [Gastil and Knobloch 2020, p. 76]. Indeed, Healthy Democracy succeeded in implementing a COVID-19 minipublic partly because of established relationships with both Republican and Democratic legislators.10 Also, Healthy Democracy ran this event—like the CIR—using private donations, so the legislature did not need to provide government funding.

    2.1 Overview of the ORCA

    When designing their Citizen Assembly pilot test, Healthy Democracy staff tried to incorporate the lessons they had learned in running the CIR. Given the unique conditions of the project, however, they chose to deviate from the CIR model in several ways. The ORCA resembled the CIR in that it drew on a random sample of registered voters, gave them a single topic to discuss, offered them expert witnesses whose testimony they could incorporate into their deliberation, and charged them with writing a relatively simple statement at the close of their process.
    As shown in Table 1, the ORCA diverged from the CIR model in many ways. It had a relatively open-ended topic (i.e., COVID-19 response, versus Yes/No on a specific ballot measure). Its deliberation was relatively brief and dispersed, with seven 2 h meetings versus 24–30 h over 4–5 consecutive days. Owing to travel and meeting restrictions resulting from COVID-19, it used a Zoom interface to connect participants online across the state, rather than having panelists meet face-to-face. Each difference made the ORCA more challenging to operate than the CIR: less time for deliberation, a broader topic, and an untested online interface.11
    | Feature | CIR | ORCA Pilot |
    | --- | --- | --- |
    | Participants | 20–24 | 36 |
    | Selection | Random draw from list of registered voters | Random draw from registered Oregon voters who had volunteered previously to serve on a CIR |
    | Location | Convention hall, campus, etc. | Remote locations (typically homes) via Zoom online interface |
    | Duration and dispersion of deliberation | 3.5 to 5 consecutive days for a total of 24–30 h | 7 weeks of 2 h sessions for a total of 14 h |
    | Responsibility | Write a one-page statement on a single ballot measure | Write a statement of indeterminate length offering advice to the state govt. on its response to COVID-19 |
    | Witnesses | Pro and con advocates on the ballot measure and neutral experts with topical expertise | Neutral experts on subtopics identified by the panelists (housing and education in this case) |
    | Orientation phase (learning the process and issue) | Day 1, including an extensive exercise that walks panelists through the CIR process and an introduction to the ballot measure being studied | Days 1–2 (2 h each), including brief cognitive bias module, establishing core principles, identifying focal subtopics, and choosing witnesses |
    | Deliberation phase (weighing witness testimony and evidence) | 2–3 days, depending on event length | Days 3–5 (2 h each) |
    | Resolution phase (finishing statement) | Final day (full day) | Days 6–7 (2 h each) |
    Table 1. A Comparison of the Basic Features of the CIR and ORCA Processes
    At the request of Healthy Democracy, three Oregon state senators had provided potential sets of questions for the ORCA to consider. After an initial deliberation among ORCA panelists that weighed the alternative directions presented in the legislators’ questions for their discussion, the ORCA chose to concentrate on these questions:
    What do you see as relationships between the pandemic and inequalities in social & economic structures – what has been learned about that? With increased interest in racial justice—how important is that? How much do we need change now? When the pandemic is done, how important is it to pursue changes in basic systems? Does this point to an imperative to change our basic systems—social, economic, justice, even energy? How does the pandemic affect inequalities and does it actually affect inequalities?

    2.2 Focal Research Question

    Because the ORCA and CIR differed on multiple dimensions, it was not possible to call this case study a simple comparison of a deliberative process occurring face-to-face versus online. Rather, this comparison has to take that difference into account alongside what we consider the main divergences: size, dispersion and duration of deliberation, and topical focus. Taken together, each of these differences made the ORCA a more challenging process for Healthy Democracy to manage.
    Our research aimed to discern how well Healthy Democracy could deliver a credible and timely result from a deliberative process that simultaneously faced each of the aforementioned hindrances. Put in the terms of this special issue on CrowdLaw, we asked whether a civic organization with experience in institutionalized public engagement could develop an effective deliberative process during a health crisis to inform state government policymaking.
    This evaluative approach to studying the ORCA resembles both prior approaches to the CIR [Knobloch et al. 2013] and general methods for studying deliberation [Black et al. 2011]. Our pairing of comparative survey data with in-depth case studies has even deeper roots in political science. For instance, both were used in Beyond Adversary Democracy [Mansbridge 1983] to study democracy and deliberation in town meetings and the inner workings of a nonprofit.

    3 Survey Method

    3.1 Response Rate

    High response rates are crucial for deliberation research, because they ensure that researchers have recorded the full variety of subjective experiences, particularly those of dissenters or people who feel excluded from the process. A self-selection feature of minipublics makes such rates possible: everyone in attendance has already proven themselves to be responsive by accepting the initial invitation to participate and by showing up for the event itself.
    Initially, forty Oregonians had signed up for the ORCA, but four of those prospective participants declined to participate, and Healthy Democracy chose not to replace them. Thus, our surveys were distributed via the Qualtrics online interface to 36 panelists, though for three of the sessions only 35 attended and were surveyed. Almost all of the participants posed for a Zoom-style group photo to appear in the public domain.12
    With a survey administered at the end of each weekly session, we obtained 232 of 249 possible surveys for a 93% response rate. One panelist only completed two surveys, but each of the rest completed at least five. The lowest response rate was on the final day, for which only 80% of ORCA panelists completed their surveys.
    For the CIR, response rates have been even higher. For each of the four CIRs held from 2010 to 2012, researchers invited the 24 panelists to complete a survey after every one of the event's five days. For the three CIRs from 2014 to 2016, the 20 panelists had the same opportunity for each of four days. Combining these data, 715 surveys were returned of a possible 720, which yields a response rate of 99%. Though some respondents chose not to answer individual items, as was true for the ORCA surveys, the completeness of these surveys made more sophisticated missing data analyses unnecessary.13
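    For readers who wish to retrace these figures, the brief sketch below (in Python; the variable names are ours) recomputes both response rates from the session and panelist counts reported above.
```python
# Minimal sketch recomputing the response rates reported above.
# Session and panelist counts come directly from the text; nothing else is assumed.

# ORCA: four sessions with 36 panelists surveyed, three sessions with 35.
orca_possible = 4 * 36 + 3 * 35          # = 249 possible surveys
orca_returned = 232
print(f"ORCA response rate: {orca_returned / orca_possible:.0%}")  # ~93%

# CIR: four 5-day events with 24 panelists and three 4-day events with 20.
cir_possible = 4 * 5 * 24 + 3 * 4 * 20   # = 720 possible surveys
cir_returned = 715
print(f"CIR response rate: {cir_returned / cir_possible:.0%}")     # ~99%
```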

    3.2 Participants

    By design, the demographics of the participants in the ORCA and CIR processes were similar—and broadly representative of the Oregon population. In both samples, the plurality of respondents were Democrats (33% in ORCA, 40% in CIR) and 27% were Republicans. A majority were female (56% in ORCA, 53% in CIR). A majority identified themselves as White (69% in ORCA, 81% in CIR), with ORCA having a more balanced representation of other ethnicities (i.e., three Native American, three Asian American, three African American, two Hispanic/Latinx, and one Native Hawaiian/Pacific Islander participant). Roughly a third had a college degree or more (30% in ORCA, 35% in CIR), with roughly one-in-five having no formal education after high school (20% in ORCA, 21% in CIR). A plurality of both samples reported being employed full-time (42% in ORCA, 32% in CIR), followed by those who were retired (21% in ORCA, 26% in CIR) and those who were unemployed or only able to find part-time work (21% in both samples). The samples also had comparable age distributions (average = 49 in ORCA, 51 in CIR; lowest quartile = under 35 in ORCA, under 38 in CIR; highest quartile = 64 and older in both samples). Chi-square statistics and t-tests found no significant differences between the two samples on any of the aforementioned demographics.
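    To illustrate the kind of test summarized in that last sentence, the sketch below runs a chi-square test of independence on party affiliation. The cell counts are rough approximations we back-calculated from the reported percentages purely for illustration (36 ORCA panelists; 156 CIR panelists across the seven official events); they are not the study's raw data, and the resulting statistic should not be read as a result of this study.
```python
# Illustrative sketch of the demographic comparison described above.
# Cell counts are rough approximations back-calculated from the reported
# percentages (ORCA n = 36; combined CIR n = 156); they are NOT the raw data.
from scipy.stats import chi2_contingency

#              Democrat  Republican  Other/unaffiliated
orca_party = [       12,         10,                 14]
cir_party  = [       62,         42,                 52]

chi2, p, dof, expected = chi2_contingency([orca_party, cir_party])
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A non-significant result (p > .05) would be consistent with the reported
# finding that the two samples did not differ on this demographic.
```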
    Before comparing the process assessments provided by ORCA and CIR panelists, it is useful to begin by noting the connection that the ORCA panelists had to their discussion issue relative to those who attend a CIR. For each CIR, panelists are summoned to deliberate on a particular ballot measure—a political choice that typically is not seen as a pressing issue for many of the panelists. By contrast, in the case of the ORCA, responses to the initial survey showed that COVID-19 had affected many of them directly. Sixty percent said that it had limited their “personal sense of freedom,” and 54% agreed that COVID had both made them “personally feel less safe” and that it had “a negative economic effect” on them or a family member. More than a quarter (27%) said it “had a negative health effect” on them or a family member. As a result, some came into the Assembly having pre-existing concerns about COVID policy, and most had received government information regarding the pandemic.

    3.3 Survey Measures

    Ten survey items were suitable for comparison across the ORCA and CIR. Table 2 shows that all of these measures used 5-point response scales, except for the item that asked whether “organizers and staff” had shown “a political bias in today's meeting,” which used a three-point scale to show the direction of bias, if any was perceived. Bias perceptions were so infrequent—and ideologically balanced, in any case—that this particular measure was recoded to distinguish any bias from none. Like most of the survey items, this bias item was asked at the close of each day's session. Rather than averaging the daily responses, however, the maximum value was used such that it measured whether a participant ever perceived bias.
    | Measure | Response scale | ORCA Pilot 2020 | Oregon CIR 2010–16 | t-test p value |
    | --- | --- | --- | --- | --- |
    | 1. Overall satisfaction | 1 = Very dissatisfied, 5 = Very satisfied | 3.93 (0.90) | 4.45 (0.78) | 0.007 |
    | 2. Perceived moderator bias | 0 = Never, 1 = Once or more | 0.14 (0.35) | 0.10 (0.31) | 0.511 |
    | 3. I learned enough to make an informed decision | 1 = Definitely not, 5 = Definitely yes | 4.04 (0.79) | 4.73 (0.55) | <0.001 |
    | 4. Evaluation of final statement | 1 = Low, 5 = High | 3.89 (0.76) | 4.24 (0.63) | 0.032 |
    | 5. Importance of my role | 1 = Not at all, 5 = Extremely important | 3.08 (0.82) | 3.58 (0.70) | 0.001 |
    | 6. Had trouble following the discussion | 1 = Never, 5 = Almost always | 1.89 (0.75) | 1.98 (0.62) | 0.446 |
    | 7. Able to express my views | 1 = Never, 5 = Almost always | 4.35 (0.67) | 4.49 (0.51) | 0.228 |
    | 8. Considered others’ views | 1 = Never, 5 = Almost always | 4.38 (0.68) | 4.47 (0.47) | 0.418 |
    | 9. Felt respected by others | 1 = Never, 5 = Almost always | 4.78 (0.33) | 4.69 (0.38) | 0.147 |
    | 10. Felt pressure to agree | 1 = Never, 5 = Almost always | 1.33 (0.50) | 1.54 (0.55) | 0.026 |
    Table 2. Means, Standard Deviations, and T-tests for Significant Differences Across Survey Metrics. Cells report M (SD).
    Note. Though Levene's test for equality of variances only occasionally showed a significant difference in variances, the reported p-values are from t-tests that do not assume equal variances, given the unbalanced sample sizes and corresponding larger variances for the ORCA sample. In no case did the statistical significance of the differences shown change as a result of this decision.
    In all other cases where questions were asked daily, scores were averaged and yielded reliable scales. For example, the role-importance measure in the fifth row of Table 2 yielded an α = 0.84 across 4–5 days for the CIR and an α = 0.79 across the seven sessions of the ORCA.
    Only the first question (overall satisfaction) and the fourth (statement evaluation) came from a single questionnaire administered after the final session. The statement-evaluation score (row 4 in Table 2) was unique in that it combined different items reflecting the difference in the tasks of the CIR and ORCA. For the ORCA, this score was the average of panelists’ satisfaction with their two sets of recommendations (housing and education, r = 0.73), and for the CIR, this score averaged satisfaction with the three elements of the Citizens’ Statement (Key Findings, Arguments in Favor, and Arguments in Opposition, average r = 0.28).
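    As a sketch of how scores of this kind can be assembled, the code below uses toy data to (a) average a daily item across sessions, (b) recode the bias item to its maximum ("ever perceived bias"), (c) compute Cronbach's alpha treating sessions as scale items, and (d) run a t-test that does not assume equal variances (Welch's test). The data frame layout, column names, and values are our own illustrative choices, not the study's data.
```python
# Sketch of the scoring pipeline described above, using made-up toy data.
# The long-format layout, column names, and values are illustrative only.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "panelist": np.repeat(np.arange(10), 7),      # 10 toy panelists
    "session": np.tile(np.arange(1, 8), 10),      # 7 sessions each
    "role_importance": rng.integers(1, 6, 70),    # 1-5 item asked daily
    "saw_bias": rng.integers(0, 2, 70),           # 0/1 item asked daily
})

# (a) Most daily items: average across sessions per panelist.
role_score = df.groupby("panelist")["role_importance"].mean()

# (b) Bias item: take the maximum, i.e., "ever perceived bias."
ever_bias = df.groupby("panelist")["saw_bias"].max()

# (c) Cronbach's alpha across sessions (sessions treated as scale items).
wide = df.pivot(index="panelist", columns="session", values="role_importance")
k = wide.shape[1]
alpha = k / (k - 1) * (1 - wide.var(axis=0, ddof=1).sum()
                       / wide.sum(axis=1).var(ddof=1))

# (d) Welch t-test (equal variances not assumed) comparing two groups;
# here the toy panelists are simply split in half for illustration.
t, p = ttest_ind(role_score.iloc[:5], role_score.iloc[5:], equal_var=False)
print(f"alpha = {alpha:.2f}; Welch t = {t:.2f}, p = {p:.3f}")
# With random toy data, alpha and p are meaningless; the point is the recipe.
```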

    4 Survey Results

    4.1 Overall Differences in Survey Responses

    Table 2 shows that the two processes did not differ on half of these survey measures. Only 14% of participants in the ORCA ever detected moderator bias during a session, and the figure was similar (10%) across the CIRs. On five-point frequency scales, average participants in both processes said they “rarely” (2) had “trouble understanding or following the discussion.” Average scores in both ORCA and CIR also were near 4.5 (between “Often” and “Almost always”) for questions about how often they “had sufficient opportunity” to express their views, considered views “different from their own,” and felt they were “treated with respect by the other participants.”
    The CIR panelists had significantly higher scores on the other half of the survey items. On average, ORCA participants rated their “overall satisfaction with the Assembly process” just below “Satisfied” (3.93) on a 1–5 scale, with CIR panelists averaging a rating much closer to “Very satisfied” (4.45). By the end of the processes, when asked if they had “learned enough” to make informed recommendations, ORCA participants averaged a “Probably yes” (4.04) on a 1–5 scale, whereas CIR panelists were much closer to a “Definitely yes” (4.73) at the end of their service. This meshes with the finding that the CIR panelists gave their final report a rating (4.24) closer to the highest scale point (5 = “Very satisfied”) than did the ORCA panelists (3.89).
    Two other findings in Table 2 make an interesting pair of differences. On average, ORCA panelists rated their role in the process as “moderately important” (3.08), whereas the CIR panelists had an average rating of 3.58 (on a 1–5 scale). However, the CIR panelists also more often felt “pressure to agree with something that [they] weren't sure about,” with an average rating of 1.54, which was closer to the second scale point (“Rarely”) than the bottom one (“Never”), compared to a 1.33 for the ORCA panelists. These differences may reflect the fact that the CIR was an empowered deliberative process, charged with forging broad agreement on a Citizens’ Statement that would be distributed to every voter. By contrast, the ORCA panelists participated in a pilot process, coordinated with a handful of state legislators but with no authority.

    4.2 Variations in Differences Over Time

    The role-importance measure and many other survey items were asked at the close of each day of the CIR and ORCA. It is revealing to disaggregate these items into the three phases of the process. As described in Table 1, there were Orientation, Deliberation, and Resolution phases to each process. Figure 1 shows how one of the differences shown in Table 2 emerged over time. Both the ORCA and CIR panelists saw themselves as playing a “Moderately important” role initially, but those scores differed significantly (Mdiff = 0.41, SEdiff = 0.15) during the Deliberation Phase. That gap widened to one full scale point during the Resolution Phase (Mdiff = 1.0, SEdiff = 0.19). (As in Table 2, equal variances were not assumed and the critical p value for these t-test comparisons was .05.)
    Fig. 1. Variation in perceived role importance at the ORCA and CIR at three phases.
    The difference between ORCA and CIR panelists on feeling pressure was similar to the pattern shown in Figure 1, with the ORCA scores remaining relatively flat over time but the CIR scores rising. There was no difference during the Orientation Phase, but a significant gap opened up during the Deliberation Phase (Mdiff = 0.23, SEdiff = 0.11) and widened slightly in the final phase (Mdiff = 0.30, SEdiff = 0.11).
    On the question of whether panelists believed they had sufficient opportunity to express their views, it was the ORCA respondents whose responses diverged over time. The CIR participants gave an average rating near 4.5 each time, between “Definitely yes” and “Probably yes.” The ORCA panelists gave comparable scores in the Orientation Phase, but after their scores dipped slightly in the Deliberation Phase in synch with the CIR panelists, their scores continued to drop in the Resolution Phase, resulting in a significant gap (Mdiff = 0.30, SEdiff = 0.14). Figure 2 summarizes these results.
    Fig. 2. Variation in perceived opportunity to speak at the ORCA and CIR at three phases.
    The final difference detected was unique in that it appeared in the Deliberation Phase, then disappeared in the Resolution Phase. Across both the CIR and ORCA, panelists said that they felt respected, with average scores above 4.5 on a 1–5 scale at every point in the process. During that middle phase, however, the CIR panelists gave lower ratings than did their peers in the ORCA (Mdiff = −0.16, SEdiff = 0.07).
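    A rough way to read the phase-level gaps reported in this subsection is to divide each mean difference by its standard error, which approximates the t statistic behind the comparison. The short sketch below does this for the values reported above; exact p-values would also require the Welch-adjusted degrees of freedom, which we do not reproduce here.
```python
# Rough check of the phase-level differences reported in this subsection:
# the ratio Mdiff / SEdiff approximates the t statistic of each comparison.
# Values are copied from the text; exact p-values also depend on the
# Welch-adjusted degrees of freedom, which are not reproduced here.
reported = {
    "role importance, Deliberation": (0.41, 0.15),
    "role importance, Resolution": (1.00, 0.19),
    "pressure to agree, Deliberation": (0.23, 0.11),
    "pressure to agree, Resolution": (0.30, 0.11),
    "opportunity to speak, Resolution": (0.30, 0.14),
    "felt respected, Deliberation": (-0.16, 0.07),
}
for label, (m_diff, se_diff) in reported.items():
    print(f"{label}: t ratio = {m_diff / se_diff:.2f}")
# Every ratio exceeds 2 in magnitude, consistent with significance at p < .05.
```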

    4.3 Summary

    One way to summarize these findings is with two distinct observations. First, panelists rated the ORCA's overall performance as comparable to that of the CIR on numerous dimensions. Even where there were differences, the two sets of scores remained close. That result is noteworthy, given that the ORCA was designed and deployed in a rushed response to an emergency—and conducted in an online setting not previously planned or tested.
    Second, the CIR performed at a higher level on a few dimensions, particularly with regard to its cognitive task. Participants came away feeling better informed and more satisfied with both their process and final product. The CIR's official status as the creator of a voting guide likely contributed to panelists’ satisfaction, but the CIR process itself gets credit for panelists feeling they played a more important role than did their peers in the ORCA. Finally, though the ORCA's time constraints resulted in its participants feeling they had less opportunity to speak in the decisive stage of their process compared to CIR panelists, only the CIR participants felt a mounting pressure to reach agreement.
    By contrast with the CIR panelists, ORCA participants felt less pressured to agree, thought their role was less important, and expressed less satisfaction with their process and their recommendations. Nearly all of these findings can be explained by ORCA's remote online setting. Online participation while sitting at home or in other personal spaces—where panelists could turn off their video cameras and divert their attention to other matters, thus lowering the amount of social information they conveyed and received—likely attenuated ORCA panelists' sense of connection with one another and with the ORCA process [Walther 2013]. This diminished sense of connection likely insulated many ORCA panelists from others’ persuasion attempts and ambiguous nonverbal behaviors. This, in turn, may have lowered social pressure while reducing their sense of involvement in the collective process. The end result was a reduced sense of role importance and lower satisfaction with the ORCA process and its work product.
    An alternative account could emphasize the diffuse topic and recommendations of the ORCA, relative to the focused topic and statement of a CIR. The ORCA's designers had intended to have it choose a single sub-topic regarding COVID-19 response, but when panelists were divided over whether to address problems regarding housing versus public education, the organizers let the panelists address both issues. This avoided potential stress and conflict over sub-topic choice, but it also further diluted the deliberation and recommendations by spreading them across two topics instead of one.14

    5 Qualitative Evaluation of the ORCA

    When stepping back from these survey results to assess the ORCA, it helps to recall the theme of this special issue. The aim is to advance our understanding of how CrowdLaw can be deployed effectively during an emergency circumstance, such as the COVID-19 pandemic. In this spirit, we asked whether a civic organization with experience in institutionalized public engagement could develop a credible and timely minipublic during a public health crisis to influence state government policymaking on the issue that gave rise to the emergency.
    That question can be broken down into sub-components, which we take in turn in this section. First, such a process can be assessed in terms of its procedural rigor and transparency [cf. Karpowitz and Raphael 2014; Papadopoulos and Warin 2007]. Second, we evaluate the quality of deliberative and democratic discussion that those procedures generate [Knobloch et al. 2013]. Third, we consider the long-term impact of the process on future policymaking [cf. Barrett, Wyman, and Coelho 2012] and/or indirect influence on the wider deliberative system [Boulianne 2018; Felicetti, Niemeyer, and Curato 2016]. In the case of a pilot test, one must temper expectations for such impacts, with attention instead shifting to how fruitful the process was at generating improvement for future—and more consequential—iterations.

    5.1 Procedural Rigor and Transparency

    Healthy Democracy's experience with the CIR sets a high bar for procedural integrity. After an initial pilot test, Healthy Democracy put in place detailed agendas for the 2010 CIR that made it clear to panelists, witnesses, researchers, and interested members of the public how the process would unfold. Continuous process adjustments, which mostly (but not always) amounted to improvements, would change each subsequent CIR, but procedural transparency was ensured each time. Once the CIR Commission was established in 2011, a public body was tasked with overseeing refinements of the CIR process, which ensured accountability.
    As for the rigor of this process, the CIR's agendas were built to ensure equal time for opposing sides, provide ample time for questions of witnesses and discussion among panelists, and make clear how panelists would move from open discussion to the final votes that would result in the Citizens’ Statements distributed to voters. CIR discussions sometimes bogged down when procedural confusion stumped even the professional moderators, or when substantive disagreements could not be resolved. Nonetheless, there were contingency plans built into the detailed CIR manual for handling precisely these sorts of problems, and moderators, staff, or the panelists themselves always found a way to smooth out the tangles that occurred.
    Though the convenors had years of experience running state-authorized CIRs, in some ways the 2020 ORCA was more akin to the 2008 pilot test of the CIR. For example, the organizers were tasked with innovating new processes or workarounds for each unique feature of the ORCA, from the event's duration and week-to-week agenda down to the finer details of managing the Zoom interface or tabulating votes. As a result, the ORCA process was often opaque or brand new to the panelists, researchers, and even to the event staff, such as the moderators.
    The duration of the process provides an illustration. Healthy Democracy staff acknowledged at the outset that the process might require seven or eight sessions instead of six (the sessions were spaced once a week in a 2 h block 6–8 p.m.). That intuition proved correct, and panelists voted to add a seventh session. One reason it became necessary was that the organizers chose to deviate from their plan to focus on only one major dimension of the government response to COVID-19. When two such topics—education and housing—ended up in a virtual tie as priorities for the panelists, Healthy Democracy reversed their previous decision and opted to cover both. This decision resulted in a cascade of agenda changes, such as splitting the panelists into subcommittees, dividing up expert testimony across the two areas, and requiring panelists to write two sets of recommendations in their final report.
    Prior to the beginning of the event, as in a CIR, agendas for the ORCA had been sketched out in advance, but the research team observed the staff engaging in substantial modifications before and during most sessions. In one example, even the voting system for the panelists deployed during the ORCA was subject to change. Early in the process, the panelists were tasked with reaching supermajority agreement on the principles that would guide their recommendations. The point was to commit to evaluative criteria before vetting proposals, so that the pros and cons of each would be weighed consistently. This approach has deep roots in theories of group decision making dating back to Dewey, but the method used in the ORCA resulted in too many prospective principles competing for votes. When not a single principle met the threshold for adoption, Healthy Democracy simply changed the procedure to get closure on this problem and move on to the next task.
    In pilot tests of new processes, such mid-course corrections can be useful ways to test alternative procedures that solve unanticipated problems. The frequency, immediacy, and procedural significance of these adjustments at the ORCA, however, resulted in a process that was far from transparent. Though changes and other interventions usually shored up deficiencies, they detracted from the rigor of the process to the extent that they made it ad hoc.

    5.2 Deliberative Democratic Quality

    Though the ORCA was unpredictable at times, one of the purposes behind procedural modifications was to shore up the quality of its discussions. Having endured years of critical reports from members of the research team studying the CIR,15 Healthy Democracy was well versed in the need to sustain simultaneously a rigorous deliberative discussion and a democratic social process among the panelists.
    In a sense, Healthy Democracy had learned to “teach to the test” by building lessons in democratic deliberation into its process, including reminders during the event, so that panelists were likely to give the deliberative components of the process high marks when completing their surveys. As the CIR survey comparison showed, the ORCA did perform well by the same metrics used to assess that more established process, even outperforming it slightly by virtue of the ORCA panelists feeling less conformity pressure during the final phase of their process.
    Direct observation of the seven ORCA sessions, however, left the research team co-authoring this article more concerned about the depth of deliberation that took place. The panelists gave themselves high marks, but most had no larger frame of reference from which to judge how much better the process could have been.16 Put another way, during their evaluations panelists may be more attuned to their own contributions, which are under their control, than to the process as a whole.
    Foremost among the barriers to deliberative rigor was the scope of the agenda relative to the time afforded the panelists. With far less time in session than a CIR but a far wider range of topics, the panelists spent relatively little time scrutinizing potential recommendations for the state legislature.
    Table 3 provides a simplified summary of the ORCA agenda. This overview breaks down the complex sequencing of tasks into five areas, which together account for the 770 min (excluding breaks and delays) spent in the 2 h Zoom sessions over seven weeks. The overview combines related portions of the agenda, follows the timeline of each day when possible, and excludes any remaining portion of the agenda that was shorter than 5 min. Most of the ORCA's time was spent in breakout groups, which typically comprised five panelists and a moderator, but much of the orientation and testimony occurred in plenaries.
    | Tasks | Primary format | Total mins | Session 1 | Session 2 | Session 3 | Session 4 | Session 5 | Session 6 | Session 7 |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Orientation, welcomes, introductions, training, debrief | Plenary | 215 | 55 | 35 | 30 | 30 | 30 | 20 | 15 |
    | Agenda planning (e.g., choosing focal topics and witnesses) | Groups | 120 | 25 | 60 | 10 | 15 | 10 | 0 | 0 |
    | Establishing core principles underlying the recommendations | Groups | 140 | 15 | 15 | 0 | 25 | 70 | 15 | 0 |
    | Weighing testimony and/or preparing questions for witnesses | Plenary/Groups | 160 | 15 | 0 | 70 | 40 | 0 | 35 | 0 |
    | Discussing, refining, and finalizing policy recommendations | Groups | 135 | 0 | 0 | 0 | 0 | 0 | 40 | 95 |
    Table 3. Analysis of the ORCA Agenda by Activity Type and Topical Focus, Divided by Session. Session columns report minutes per weekly session (110 min each).
    The most striking finding from this analysis concerns the difficulty of organizing a deliberation of this kind during the sessions themselves. Orientation to the process and agenda-planning occupied a combined 44% of the 14 h; these activities added up to large chunks of time for all but the final two sessions. Twenty-one percent of the ORCA's total time was spent considering public testimony on COVID-19 that the legislature had received (Session 1) and preparing for or hearing from witnesses (Sessions 3–6). The consideration of core principles accounted for 18% of the agenda, and this included not only portions of the initial two sessions but also a majority of the time in Session 5. Finally, weighing potential recommendations accounted for nearly one-fifth (18%) of the ORCA's time, with all of that time coming during Sessions 6 and 7.
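    The percentages in the preceding paragraph follow directly from the minute totals in Table 3; the brief sketch below recomputes them (the task labels are ours, shortened from the table).
```python
# Recomputing the agenda shares reported above from the Table 3 minute totals.
minutes = {
    "orientation/debrief": 215,
    "agenda planning": 120,
    "core principles": 140,
    "testimony": 160,
    "recommendations": 135,
}
total = sum(minutes.values())  # 770 minutes across seven 110-minute sessions
print(f"orientation + planning: {(215 + 120) / total:.0%}")          # ~44%
print(f"testimony: {minutes['testimony'] / total:.0%}")              # ~21%
print(f"core principles: {minutes['core principles'] / total:.0%}")  # ~18%
print(f"recommendations: {minutes['recommendations'] / total:.0%}")  # ~18%
```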
    One or more members of the research team observed these sessions live via Zoom as they happened, including nearly all of the small-group breakout meetings, followed by a weekly debrief. Based on this immersion in the ORCA process, we are confident in our interpretations of these agenda activities. First and foremost, we concluded that 3 h was insufficient to formulate, refine, and ratify recommendations. Further, we determined that this process became too rushed toward the end of the assembly. Part of the problem was that the roadmap (agenda and processes) for the ORCA process was constantly in flux. “Establishing core principles” was stretched from the planned three sessions to five. As a result, the organizers consistently ate into the time originally allocated to considering testimony and recommendations.
    Group work was often conducted simultaneously in a single GoogleDoc, with inconsistency across small groups in how inclusive that technology was for panelists. In some cases, small-group moderators managed the group's document, but in other groups a panelist would take the reins on editing, with the others following along. Simultaneous editing of a collective document often meant eight or more individuals were altering the document, making it difficult to track a specific small-group discussion and provide feedback. This difficulty was worsened by the fact that panelists were charged with identifying a set of core principles and then developing two separate sets of recommendations regarding government response to COVID-19—one on housing and one on education. Thus, it is not surprising that the survey data revealed some ORCA panelists becoming unsure of their opportunities to express themselves in the later stages of the process.
    Unlike the CIR, which from the outset has a clear focus on a specific ballot measure, much of the ORCA's time was spent deciding on a sub-topic for the Assembly's deliberations. Participants were given two separate methods of narrowing and asked to reconcile them into a coherent agenda. The first method was a list of questions posed by the three state senators. These question sets varied considerably in their scope and coherence. Participants were tasked with discussing the question sets and choosing one on which to focus their deliberations. They ultimately chose a complex, multi-faceted set of questions written by Senator Jeff Golden, who asked what the pandemic reveals about inequality within social, economic, and political systems in the state (see the end of “Overview of the ORCA” earlier in this article).
    Deciding on this question from the pool of questions took nearly an hour of the Assembly's time, and it only led to further confusion. Even as some participants expressed that they did not understand the connection between the question and their task of discussing the pandemic, the Assembly was immediately given a different topic-narrowing exercise layered on top of the question from Golden. The organizers provided participants with a summary of different categories of testimony on COVID to the Oregon state legislature. Participants, after already selecting a question to guide their discussions, now had to choose two categories of COVID-related topics from this list. They ultimately settled on education and housing, and any connection to Senator Golden's guiding question was, for the most part, lost in subsequent deliberations.
    A point in favor of this agenda is that 4 h consisted of preparing for, receiving, or weighing testimony provided by fellow citizens outside the ORCA or expert witnesses. This was the largest share of the total agenda, and given the constraints of the design, it was an appropriate use of time. However, tasking the panelists with working through two issues (housing and education) reduced by half the depth of the information received. That a full third of the agenda was spent on orientation and planning is also striking, as it took time away from deliberation on the substance of the ORCA's final report.
    As for the democratic social relations among panelists, we saw some favorable evidence of a democratic process taking shape. In the transcripts of the housing subgroup, for example, we saw the panelists develop group cohesion and democratic social norms during the time they deliberated together. This helped them develop a shared understanding of the systemic nature of the housing problem, and it contributed to a shared sense of responsibility to address that problem that crossed class lines (e.g., lines between landlords and renters). Those insights carried through to the final report. In cases like this, participants developed constructive democratic social relationships with one another that enabled them to cooperate effectively in writing and editing recommendations.
    The Zoom interface and staggered sessions served as an equalizer and a hindrance for some panelists. This format accommodated prospective panelists who otherwise would not have been able to take part in a face-to-face assembly. That a paid random sample was assembled to deliberate online each week for 2 h over nearly 2 months was itself an accomplishment. Some panelists had the freedom to meet uninterrupted each week, but others had carved out just enough time from other family obligations during the 6–8 p.m. time period to meet.
    Within the Zoom interface, however, the differences in participants’ online skills became apparent, even with training provided by Healthy Democracy before deliberative sessions of the Assembly began. Some panelists showed tremendous facility with the setup and had high-quality video, audio, and Wi-Fi, whereas others struggled with some basics of the interface, such as navigating the breakout room, juggling a shared screen with the images of fellow group members, and even keeping the Zoom window in front of other applications open on their computer. Occasionally, a small group moderator would either guide a panelist out of their breakout group and back to the plenary room to receive tech support or would summon a staff member into the breakout group to assist. More often, staff could address the issue, and the technologically challenged panelist muddled through.
    A face-to-face CIR involves writing on large sheets of paper (flipcharts) with thick marking pens, as a simple method for drafting and revising elements of a final report. During the ORCA, GoogleDocs was used much like a paper flipchart in a face-to-face CIR, first to identify core principles and later to write recommendations. During some of these sessions, technological differences among panelists became more pronounced. Some panelists were new to the very concept of a shared document being simultaneously edited not only by a moderator or group member but also by their counterparts in the other groups. Over the course of the ORCA, the shock of that method wore off, but the variance in comfort with using it remained. Even so, the wordsmithing required in later sessions was easier because of the real-time editing of statements that this made possible.

    5.3 Direct and Indirect Impacts

    At the close of its process, Healthy Democracy presented a Final Report from the ORCA, which included sections of Core Principles and Policy Recommendations written by the panelists themselves.17 The panelists’ principles varied in their specificity, but the first provides a strong example of the fruits of their deliberation:
    Oregon's response to COVID-19 should be guided by the best available science and research currently available. The policy discussion should stay focused on science. In addition, to the extent possible, Oregon should coordinate policies with neighboring states to promote consistent policies over a broader geographic region.
    The Final Report offered numerous policy recommendations supported by at least two-thirds of the panelists. The ORCA was able to group these into four main takeaways. On the housing topic, they advised the legislature to “provide programs to keep people in safe and secure homes,” which included detailed points about displacement, homelessness, and evictions, but also a recognition that “payments should be made directly to landlords” who may otherwise receive no rent. On education, the three recommendations concerned securing internet access for all, providing students with wellness counseling, and clear policies to ensure public safety if/when students returned to face-to-face classrooms (i.e., via social distancing, health metrics, etc.).
    In the end, the ORCA's recommendations had negligible impact on the Oregon state government's COVID-19 response. Though a handful of senators submitted questions to the panel at the outset and a state senator was present to hear their recommendations, there is no direct evidence that the legislature will consider these recommendations.
    The only bright spot was a warm reception of the ORCA from a single state senator. From the three sets of questions they received, the panelists chose to focus on the question that Democratic State Senator Jeff Golden submitted to them. Senator Golden has a distinctly scholarly reputation as a Harvard graduate whose district features the quirky town of Ashland, home to the world-renowned Oregon Shakespeare Festival. At the close of the process, he expressed appreciation for their recommendations, and he has kept in touch with Healthy Democracy to explore opportunities for policy deliberation beyond the pilot project.
    Thus far, the main impact of the pilot is offering evidence that a minipublic can be convened quickly and online, and since a similar test-run of a deliberative process built confidence in the CIR years earlier, perhaps that will influence subsequent convenings of online citizen assemblies. As for policy impact, by the time the legislature reconvened in January, 2021, the ORCA's potential influence was further diminished by national events. A change in the federal government and the rollout of vaccines drove Oregon's COVID-19 response. It remains possible that the ORCA more subtly shaped the views of Senator Golden or other legislators who worked with it, but no direct evidence of such influence exists.
    Participants themselves may have experienced more direct positive outcomes given the nature of some of the sessions. Several of the experts requested by panelists and provided by Healthy Democracy presented information that addressed personal interests rather than being focused on the public as a whole. Panelists were tasked with developing recommendations at a state level but were sometimes provided with information that might help them personally. In one case, the expert asked panelists to identify where they were from so she could offer specific advice about the nature and availability of COVID-19 relief programs.
    A final potential impact would be the ORCA's influence on future iterations of the CIR. The organizers of that process have sought ways to reduce the cost of each CIR [Gastil and Knobloch 2020], and moving online could help alleviate such costs. Or, if a pandemic or other emergency requires it, then Healthy Democracy might take lessons from the ORCA to devise an online variant of CIR.

    6 Recommendations

    We believe this case comparison of the CIR and ORCA provides many insights for the theory and practice of deliberative minipublics. From a theoretical standpoint, this study provides a fresh example of how experience with one form of minipublic can lead to another. This has occurred elsewhere, notably in Ireland [Farrell and Suiter 2019] and Belgium [Niessen and Reuchamps 2020], and it is telling that in both cases, the initial effort was successful by the evaluative metrics its organizers had chosen. Just as the jury system itself was the inspiration for one of the earliest modern minipublics (the Citizens Jury), so might we see a more rapid cascade of such influences as the variety of minipublics proliferates during what some observers have dubbed a period of crisis for democracy [Curato et al. 2020].
    From a research standpoint, this case demonstrated the value of continuity in survey measures across distinct deliberative events. Though the number of design differences between the CIR and ORCA make clear causal inference impossible, using equivalent metrics at least helps clarify the baselines against which one can measure phenomena such as feeling social pressure or process satisfaction. That said, the need for a more comprehensive and standardized set of metrics for democratic deliberation remains. Though Participedia has made an effort to develop such surveys, they have yet to become a regular feature of process evaluation.18
    As for practical recommendations, we will close with what we consider the most important ways to improve a minipublic convened during an emergency such as COVID-19. First, the time and resources allocated for deliberation need to be adequate for the complexity of the deliberative task. The ORCA panelists were overwhelmed by having to work through two distinct sets of recommendations in what amounted to just a few hours. They heard very few witnesses, and portions of their final statement aligned so closely with the witness testimony as to suggest an uncritical acceptance of witness claims. Had there been more witnesses, as in a CIR, this problem would have been less acute. Likewise, the panelists had to rush through their initial agenda-setting tasks, which included sifting through a mountain of public testimony collected prior to the event. A Healthy Democracy intern had thematized these public comments, but even so, the panelists did not appear able to use them effectively as guidance in their earliest deliberative tasks.
    Second, a process such as this requires a more transparent and comprehensible agenda. The ORCA pilot became a test bed for many different tasks and concepts Healthy Democracy had in mind, from an exercise on reducing one's biases to complex voting schemes for choosing among numerous alternatives. That the organizers changed gears mid-process so often speaks to their flexibility and their ability to recognize when something does not work. In this sense, the pilot was a successful test of many of the elements that could fold into a successful minipublic. It is just as important to recognize, however, that should such a process become official—or quasi-official—its design needs more predictability and robustness.
    Third, online deliberation such as this requires moderators who are skilled in online facilitation. Although online facilitation draws on many of the same skills as offline facilitation, it also requires additional skills specific to the online setting. ORCA moderators were as effective as CIR moderators in terms of being unbiased and creating a space where panelists could express their views freely. Nonetheless, ORCA moderators often struggled to navigate the various innovations in the ORCA process, such as features of the Zoom interface and simultaneous editing of Google Docs. This is understandable given the ad hoc nature of the pilot; nonetheless, such inefficiency takes valuable time away from group discussions and potentially harms the legitimacy of the process. Another challenge, one that can emerge in face-to-face settings but is amplified online, is a binary relationship between moderator and participant. In the ORCA, although participants at times engaged in lively interactions with one another about, for example, proposed housing recommendations, much of the small-group discussion took the form of a moderator posing a question and panelists responding one by one to the moderator, rather than discussing their answers with each other. This paucity of spontaneous, organic interaction among participants constricted the range and depth of ideas discussed in small groups and, consequently, diminished the quality of deliberation and of the final recommendations.
    Some of these difficulties within the Zoom environment of the ORCA are unavoidable, whereas others can be remedied or managed. For example, it is doubtful that the level of social presence created by a face-to-face experience will ever be fully replicable online. In a face-to-face process, there is downtime during which participants can splinter off to talk one-on-one or in small groups before meetings and around the coffee table. In Zoom, such small-scale interactions must be planned, which gives participants some opportunities for social mingling but remains more artificial than in-person socialization.
    Nevertheless, convenors can address some difficulties with online engagement through additional facilitator and panelist training and by continuing to refine their procedures. Both are vital to ensuring quality in future online assemblies. Given the complexity and multitude of tasks involved in online facilitation, having two moderators per group, as resources allow, would be optimal: one responsible for facilitation and the other handling technical operations (e.g., managing chats). Adapting to the online setting also requires thinking beyond the replication of face-to-face discussions. For instance, when panelists gather in person it is easy to use “dot voting,” whereby individuals stick colorful dots on their preferred proposals. There was no online equivalent available through the Zoom interface, and the various workarounds achieved through shared editing of documents proved cumbersome and frustrating in that setting. At the time of the ORCA, Zoom's polling feature lacked the flexibility of dot voting, but with more time to plan, future assemblies may find other methods of creating and taking polls during sessions.19 Similarly, the protocols for using the Google Doc came from collective editing practices in face-to-face meetings that did not translate well to the online environment. The easy rapport that forms in face-to-face groups was not available online, and efforts to simulate informal gatherings (such as daily “kitchen table” homerooms) did not always work as intended. Even so, there may be distinctly online icebreakers, such as collaboratively solving visual puzzles, that would better serve these functions in the virtual setting while indirectly teaching skills helpful for participating in the online deliberation environment.

    7 Conclusion

    Although the Citizen Assembly we studied did not meet the high bar set by the Oregon CIR, it bears repeating that its survey metrics were broadly comparable to those of the CIR, with one (conformity pressure) performing better in the final stage of its process. Though ORCA staff struggled to manage a complex agenda spread over seven weeks in the midst of a public health emergency, the panelists still managed to produce a sensible set of policy recommendations.
    In spite of its limitations, the Oregon Citizen Assembly validated the basic idea that a minipublic design can adapt to an emergency, even one in which meeting face-to-face would present too great a public health risk. Unlike Oregon CIR panelists, the ORCA panelists were directly involved in the immediate problem their minipublic faced. They were, quite literally, deliberating in the midst of a crisis. Despite their personal struggles with COVID-19, the panelists took their responsibility seriously and worked together respectfully, across real ideological differences, to come up with ways to help the most vulnerable Oregonians survive the pandemic.
    That gives some grounds for optimism about using this form of CrowdLaw in the future. We hope that any future effort to build on this pilot includes a robust external evaluation to discern whether, iteration by iteration, practices such as these become more reliable and effective as they mature. In the meantime, case studies such as this one show that a deliberative minipublic can operate effectively, even when the nature of an emergency alters its design. Contrary to the idea that only executives or expert advisors can offer timely advice during a crisis, the ORCA suggests that minipublics, too, can be convened quickly enough to provide such advice to policymakers.

    Footnotes

    4
    Insights about the details of the ORCA's development come from correspondence with Healthy Democracy's Program Manager, Linn Davis, on October 11, 2020, and from previous conversations. Davis noted that the idea for the ORCA—and its funding—came from Ned Crosby, who is discussed later in this article. Healthy Democracy had been planning to convene a deliberative event on the democratic process itself, then pivoted to COVID-19 when the pandemic arrived.
    5
    The Pew Research Center has conducted numerous surveys showing the political dynamics of government and public response to COVID-19. A recent Pew survey, for example, shows the sharpness of the partisan divide that has emerged on this issue: https://www.pewresearch.org/fact-tank/2020/10/12/republicans-who-rely-most-on-trump-for-covid-19-news-see-the-outbreak-differently-from-those-who-dont.
    6
    Knobloch et al. [2013] demonstrate the quality of the Review's deliberations; on their impact, see Gastil [2014] and Gastil, Richards, and Knobloch [2014]. Wells, Reedy, Gastil, and Lee [2009] demonstrate how distorted beliefs about ballot measures influence voting choices. Hendriks [In Press] and Warren and Gastil [2015] describe related practices and the broader principles to consider when linking citizens, elections, and public officials.
    7
    A timeline for the event has been created by the state's largest newspaper, The Oregonian: https://www.oregonlive.com/portland/2017/02/oregon_standoff_timeline_41_da.html.
    8
    The investigation into this death is ongoing at the time of this writing. For a contemporary account, see https://www.nytimes.com/2020/09/03/us/michael-reinoehl-arrest-portland-shooting.html.
    9
    Data provided by a nonprofit site that compiles data on US elections: https://ballotpedia.org/Oregon_State_Legislature.
    10
    From a March 10, 2021, interview with Robin Teater, who served as Executive Director of Healthy Democracy.
    11
    Financial constraints were one reason for the limited deliberation duration. Like the CIR, the ORCA paid each panelist for their time, and a larger number of panelists raises the cost per hour for the event.
    12
    Those who declined to be included simply turned off their cameras before the picture was taken. In addition, one panelist is not shown for economy of presentation (i.e., the extra image did not fit in the 5 × 6 grid).
    13
    One difficulty in the ORCA dataset is that a handful of surveys were completed anonymously. This makes combining surveys across individuals more challenging, and the first author is investigating alternative ways to model the data in light of this problem; the current approach treats each anonymous survey as coming from a unique respondent, which over-weights those responses. Where these appear in an analysis, this also slightly inflates the ORCA sample size.
    14
    In addition to direct observation of this, we relied on a March 24, 2021, interview with Healthy Democracy Program Manager Linn Davis. Survey questions asked participants each day what feelings they had experienced during the deliberation; the most common were “enthusiasm” and “happiness,” followed by “sympathy.” As many as a third reported “anxiety” on the most challenging days, but relatively few felt “anger” (maximum of 3) or “sadness” (maximum of 5). This was broadly comparable to the emotional experience of a CIR [Johnson, Morrell, and Black 2019].
    15
    The ORCA research team that co-authored this article consisted entirely of persons who had previously contributed to at least one report on a CIR conducted during 2010–2018.
    16
    As it happened, Healthy Democracy inadvertently invited to the ORCA a handful of people who did have prior experience with the CIR: when recruiting from lists of people who had previously accepted invitations to the CIR, the organizers forgot to remove those who had already served on one. We hope to incorporate interviews with these individuals in our revision of this article, but those interviews are not yet complete.
    18
    For updates on this project, visit https://participedia.net/research.
    19
    Zoom's polling feature is very limited; in the version the organizers were using, polls had to be set up beforehand with predetermined answers. In other words, Zoom polling suits closed-ended questions that can be anticipated well ahead of time and built into the meeting. It is not well suited to spontaneous or complex polling.

    References

    [1]
    Gregory Barrett, Miriam Wyman, and Vera Schattan P. Coelho. 2012. Assessing the policy impacts of deliberative civic engagement in the health policy processes of Brazil and Canada. In Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement, Tina Nabatchi, John Gastil, Michael Weiksner, and Matt Leighninger (Eds.). Oxford University Press, New York, NY, 181–203.
    [2]
    Laura W. Black, Stephanie Burkhalter, John Gastil, and Jennifer Stromer-Galley. 2011. Methods for analyzing and measuring group deliberation. In Sourcebook for Political Communication Research: Methods, Measures, and Analytical Techniques, Erik P. Bucy and R. Lance Holbert (Eds.). Routledge, New York, NY, 323–345.
    [3]
    Shelley Boulianne. 2018. Building faith in democracy: Deliberative events, political trust, and efficacy. Political Studies (2018), 1–27.
    [4]
    Simone Chambers. 2003. Deliberative democratic theory. Annu. Rev. Politic. Sci. 6 (2003), 307–26.
    [5]
    Andrea Felicetti, Simon Niemeyer, and Nicole Curato. 2016. Improving deliberative participation: Connecting mini-publics to deliberative systems. Eur. Politic. Sci. Rev. 8, 3 (2016), 427–48.
    [6]
    Archon Fung. 2007. Minipublics: Deliberative designs and their consequences. In Deliberation, Participation and Democracy: Can the People Govern? Shawn W. Rosenberg (Ed.). Palgrave Macmillan, Houndmills, UK, 159–83.
    [7]
    John Gastil et al. 2018. Assessing the electoral impact of the 2010 Oregon citizens’ initiative review. Amer. Politics Res. 46, 3 (2018), 534–563.
    [8]
    John Gastil and Katherine Knobloch. 2020. Hope for Democracy: How Citizens Can Bring Reason Back into Politics. Oxford University Press.
    [9]
    John Gastil, Katherine R. Knobloch, Dan Kahan, and Don Braman. 2016. Participatory policymaking across cultural cognitive divides: Two tests of cultural biasing in public forum design and deliberation. Public Admin. 94, 4 (2016), 970–87.
    [10]
    Kimmo Grönlund, André Bächtiger, and Maija Setälä, eds. 2014. Deliberative Mini-Publics: Involving Citizens in the Democratic Process. ECPR Press, Colchester, UK.
    [11]
    Amy Gutmann and Dennis Thompson. 2004. Why Deliberative Democracy? Princeton University Press, Princeton, NJ.
    [12]
    Genevieve Fuji Johnson, Michael E. Morrell, and Laura W. Black. 2019. Emotions and deliberation in the citizens’ initiative review. Soc. Sci. Quart. 100, 6 (2019), 2169–2187.
    [13]
    Christopher F. Karpowitz and Chad Raphael. 2014. Deliberation, Democracy, and Civic Forums: Improving Equality and Publicity. Cambridge University Press, New York.
    [14]
    Katherine R. Knobloch, Michael L. Barthel, and John Gastil. 2019. Emanating effects: The impact of the Oregon citizens’ initiative review on voters’ political efficacy. Political Studies 68, 2 (2019), 426–45.
    [15]
    Katherine R. Knobloch, John Gastil, Justin Reedy, and Katherine Cramer Walsh. 2013. Did they deliberate? Applying an evaluative model of democratic deliberation to the Oregon citizens’ initiative review. J. Appl. Commun. Res. 41, 2 (2013), 105–25.
    [16]
    Michael K. MacKenzie and Mark E. Warren. 2012. Two trust-based uses of minipublics in democratic systems. In Deliberative Systems: Deliberative Democracy at the Large Scale, John Parkinson and Jane J. Mansbridge (Eds.). Cambridge University Press, New York, NY, 95–124.
    [17]
    Jane J. Mansbridge. 1983. Beyond Adversary Democracy. University of Chicago Press, Chicago.
    [18]
    Kristinn Már and John Gastil. 2020. Tracing the boundaries of motivated reasoning: How deliberative minipublics can improve voter knowledge. Political Psychol. 40, 1 (2020), 107–27.
    [19]
    Kenneth P. Miller. 2009. Direct Democracy and the Courts. Cambridge University Press, Cambridge, UK.
    [20]
    Alfred Moore and Michael K. MacKenzie. 2020. Policy making during crises: How diversity and disagreement can help manage the politics of expert advice. BMJ 371, m4039 (2020).
    [21]
    Beth Simone Noveck. 2018. Crowdlaw: Collective intelligence and lawmaking. Analyse & Kritik 40, 2 (2018), 359–380.
    [22]
    Yannis Papadopoulos and Philippe Warin. 2007. Are innovative, participatory and deliberative procedures in policy making democratic and effective? Eur. J. Political Res. 46 (2007), 445–72.
    [23]
    Chul Hyun Park and Erik W. Johnston. 2017. A framework for analyzing digital volunteer contributions in emergent crisis response efforts. New Media Soc. 19, 8 (2017), 1308–1327.
    [24]
    John Parkinson and Jane J. Mansbridge, eds. 2012. Deliberative Systems: Deliberative Democracy at the Large Scale. Cambridge University Press, Cambridge, UK.
    [25]
    Maija Setälä and Graham Smith. 2018. Mini-publics and deliberative democracy. In The Oxford Handbook of Deliberative Democracy, André Bächtiger, John S. Dryzek, Jane Mansbridge, and Mark E. Warren (Eds.). Oxford University Press, Oxford, UK, 300–314.
    [26]
    U.S. Census Bureau. 2020. 2010 Census: Oregon Profile.
    [27]
    Joseph B. Walther. 2013. Groups and computer-mediated communication. In The Social Net: Understanding Our Online Behavior. Yair Amichai-Hamburger (Ed.). Oxford University Press, Oxford, UK, 165–179.
