CHAPTER 6
Writing It Up
Abstract In this chapter, we reflect on standards relating to writing up
and publishing research based on authoritarian fieldwork. After briefly
relating the history of recent transparency initiatives, we first report extensively on our own current practices in relation to anonymization, protection, and transparency. Then, we make some recommendations regarding
how the tension between the value of anonymity and the value of transparency might be better navigated, if not resolved. We make two proposals:
the first concerns a shift from transparency about the identity of our
sources to transparency about our methods of working. The second is to
promote a culture of controlled sharing of anonymized sources. Finally,
we reflect on trade-offs between publicly criticizing authoritarian regimes
and future access to the authoritarian field.
Keywords Authoritarianism • Field research • Publishing • Transparency
• Anonymity • Dissemination
In this chapter, we reflect, and make recommendations, on standards
relating to writing up and publishing research based on authoritarian
fieldwork. We do so at a time when many scientists in all disciplines are
finding their role in society becoming less self-evident, and some feel that
new measures are necessary to buttress the credibility and legitimacy of
science. Such measures are intended to make our work more transparent,
© The Author(s) 2018
M. Glasius et al., Research, Ethics and Risk in the Authoritarian Field,
https://doi.org/10.1007/978-3-319-68966-1_6
but they raise many questions and challenges, particularly but by no
means exclusively for scholars in the authoritarian field. After briefly relating the history of recent transparency initiatives, which may not be familiar to all readers, we first report extensively on our own current practices
in relation to anonymization, protection, and transparency. Subsequently,
we make some recommendations regarding how the tension between the
value of anonymity and the value of transparency might be better navigated, if not resolved. We make two proposals: the first concerns a shift
from transparency about the identity of our sources to transparency about
our methods of working. The second is to promote a culture of controlled sharing of anonymized sources. Finally, we reflect on trade-offs
between publicly criticizing authoritarian regimes and future access to the
authoritarian field.
The Call for Transparency
In 2017, scientists in 600 cities undertook the first ever March for Science,
believing that scientific ‘values are currently at risk’, and that ‘(w)hen science is threatened, so is the society that scientists uphold and protect’
(Principles and Goals, March for Science 2017). Actually, it is not clear
that there is a general decline of trust in science (see, for instance, Pew
Center 2017). Nonetheless, social scientists, like others, have felt under
increased pressure to explain and justify why they deserve public funding
and how their methods hold up to scrutiny. In political science, one
response to this has been a change in the Ethics Guide of the American
Political Science Association in 2012, reflecting the so-called Data Access
and Research Transparency (DA-RT) principles. Subsequently, a number
of leading political science journals have adopted a Journal Editors’
Transparency Statement (JETS) (https://www.dartstatement.org), which
constitutes an operationalization of the DA-RT principles from the perspective of journal editors. DA-RT states that ‘researchers should provide
access to … data or explain why they cannot’, and JETS operationalizes
this by committing journal editors to ‘(r)equire authors to ensure that
cited data are available at the time of publication through a trusted digital
repository’. While journal editors would be at liberty to grant exemptions,
the new standard intended by JETS is full publication of raw data.
From late 2014, these statements became subject to increasing controversy, and a petition signed by many leading political scientists requested a
delay in the implementation of DA-RT and more specifically JETS. The
ethical and epistemological implications of these statements for various
types of qualitative research, they argued, had not been sufficiently thought
through. A lively debate on the implications of DA-RT has since ensued, especially among primarily US-based political scientists (see https://dialogueondart.org and https://www.qualtd.net).
Whereas US debates on transparency have been motivated by concerns
about scientific legitimacy and reliability probes, recent European initiatives
promoting ‘open science’ have been more government-driven, arguing
that data-sharing accelerates innovation, and could give Europe a competitive edge. In practical terms, they have focused on developing institutional
digital storing and archiving capabilities rather than on changing editorial
practices of journals (Directorate-General, Research & Innovation 2016).
In a recent position paper, Germany and the Netherlands proposed the
fast-track development of a ‘European Open Science Cloud’ (EOSC),
which is to be ‘a trusted, open environment for European researchers for
the handling of all phases of the data life cycle and generated results’. The
principle underlying the cloud is ‘to make research data findable, accessible,
interoperable and re-usable (FAIR)’ (Joint Position Paper 2017, 1). What
these European policy plans have in common with the US initiatives is the sense of urgency and universal applicability with which their proponents contend that all data should become open to all, as soon as possible.
Some scholars have already published their reflections, particularly in
response to DA-RT, on tensions between transparency obligations and
protection of respondents in specific authoritarian contexts (Driscoll
2015; Shih 2015; Lynch 2016). However, these comments tend to focus
more on why DA-RT and JETS are problematic than on what should be
considered best practice. And unlike DA-RT and JETS, the European
‘open science’ initiatives have yet to generate extensive debate within the
political science profession. Hence, we find it desirable to reflect more extensively, and to make recommendations, on standards relating to writing up and publishing research based on authoritarian fieldwork.
While, as we have reflected in previous chapters, the sources we collect
in the field are much broader than interviews, we will focus the discussion
on interview practices, because this is the area where the tension between
transparency and the ‘do no harm’ imperative is most evident. When we
conduct interviews, we always begin with a little opening speech explaining who we are and what kind of research we are doing, and that we will transcribe the interview, that the transcript stays with us, and that we may quote from it in academic publications. We always deal with the matter of
informed consent orally. None of us has ever used informed consent forms.
To our knowledge, they are not customary in authoritarianism research.
They can cause distrust among respondents (‘why do I need to sign something?’), as well as creating a potentially risky paper trail during
fieldwork. At this point, our practices diverge, depending on the type of
respondent. We discern roughly three categories of respondents. The first
type, ‘ordinary people’, we typically inform that we intend to anonymize
the transcript of the interview and not use their real name if we should cite
them. With the second type, ‘expert informants’, we typically have an
exchange about whether and if so how they would like to be anonymized.
The third type are ‘spokespersons’, whom we typically ask for their permission to be cited by name.
Interviews with ‘Ordinary People’
We find the default option of anonymity most appropriate when we interview certain categories of ‘ordinary people’. Thus, our Kazakhstan
researcher has used this when interviewing young Kazakhs who had studied or were studying abroad. Our Malaysia researcher used it when interviewing people about their decision-making as to whether to join a
demonstration. We find the default option of anonymity appropriate in
these cases for three reasons. First, as ‘ordinary citizens’, these people have
typically not chosen to be professionally engaged with politics, they are
usually not accustomed to being interviewed and cited, and they thus
deserve a high level of protection of their privacy. Secondly, while some
respondents might simply refuse to speak to us if we were to cite them by
name, others might well agree to be interviewed, but we believe that the
validity of their answers to our questions might suffer if they knew they
could be quoted by name. This issue is not unique to authoritarianism research; it would apply to interviews with vulnerable groups (e.g. victims of sexual abuse, undocumented migrants, or drug users) in democratic societies as well. In an authoritarian context, all ordinary citizens are
in the ‘vulnerable’ category when we ask them questions that relate to
their views of their government or dissident behavior. This is not to suggest that they would be in immediate fear of arrest or worse. In both cases
we mentioned here, Kazakhstani students and Malaysian potential demonstrators, respondent concerns related more to their professional environment, their relation to their university, or even their family, all of which
might disapprove of dissident views or behavior. Finally, we think
anonymity is not problematic in this particular category because the
respondents do not have access to unique information as individuals.
These kinds of interviews are more akin to surveys in that sense. What
makes them different from surveys is that we do not claim that the people
we speak to are representative of a broader group, and we do not attempt
to quantify their opinions or experiences. Instead of reliability, it is validity
we are after, trying to reconstruct and reflect their thought processes in
relation to at least somewhat politically sensitive issues. In these instances,
our sampling method may be questioned, but not the anonymity as such.
Interviews with ‘Expert Informants’
Most of the interviews we conduct in authoritarian settings fall into a different category, one that leads us to give respondents a choice when it
comes to anonymity. We find this appropriate when interviewing lower-level government or party officials, corporate executives, journalists, local
academics, opposition politicians, and activists. The reason we want to
interview these people is usually that they can give us some insight into
how the authoritarian system works in practice: within the bureaucracy or
the party, in its dealings with other politically relevant actors, or in its dealings with critics. Revealing such information can make them vulnerable,
although this does not always need to be the case. By anonymizing them
(in such a way that they are genuinely unrecognizable, see below) we
exclude any such risk. But it requires us to relax the ideal of complete transparency in how we come by our findings. If we cannot tell readers who we
spoke to, they cannot trace whether we quoted and interpreted these
sources accurately. We will reflect on this trade-off in more detail below.
Our experiences with the degree to which this second, broad category of respondents takes us up on our offer of anonymity are very varied, depending on the repressiveness of the regime, the type of respondents, and the
nature of our research question. Our Malaysia researcher found that all but one of the activists he interviewed were entirely comfortable with being named. Their status as dissidents and members of a social movement in
opposition to the government was well known, and while the answers to
questions asked by our researcher were more specific than what they had
publicly said online, they were not more incendiary. Our China researcher
by contrast finds that both public officials and corporate employees almost
always prefer anonymity. She still asks them, but she already knows what
the answer will be. Most of us have experienced getting mixed responses
in this category. When we have a mixed response, we have a difficult
choice. It is clear that we cannot name respondents who have asked to be
anonymized, but should we always name the ones who have told us they are comfortable with being named? From the perspective of transparency, this
would be the best option, but we find that there may be three reasons to
act otherwise.
The first is homogeneity. In her research on municipal governance, for
instance, our Morocco researcher felt that it did not make sense to anonymize a civil servant for one municipality, but not someone in the same
position for another municipality. Likewise, our China and Iran researchers, who both interviewed people working for Internet companies who
deal with government agencies, did not believe there was value added in providing some of their names but not others. Others in our group, however, believe that stylistic homogeneity should not be a priority, and every
respondent who can be named represents a transparency gain and should
therefore be named.
A second consideration is whether we sometimes feel that what we are
being told is quite sensitive, and whether perhaps the respondent requires
more protection than he or she is asking of us. This was a choice consistently made by Driscoll (2015), who interviewed Tajik or Afghan former militia members in an authoritarian but also volatile environment. He
writes that ‘(i)n a few cases, the subject insisted that I record his full name.
For my own safety, and that of my respondents, I never complied with
these requests’. If we make such a judgment, we are in effect second-guessing our respondent’s own judgment, as well as forgoing transparency. Nonetheless, some of us have occasionally made such a judgment. In
the municipal research quoted above, our Morocco researcher came across
a respondent who gave her highly sensitive information, in effect explaining
how the administrative system exerted power over elected officials. He did
not think he required anonymity, but she believed this was perhaps a little
naïve, and he was saying things that would undoubtedly make his boss
unhappy, and his boss’s boss, all the way up to the top. So she decided that
it was better to be overprotective than sorry and anonymized his statements. Our Mexico researcher has anonymized all journalist-respondents
in a journal article, whether they had asked him to or not, but has yet to
decide what to do in his dissertation. On the one hand, as discussed in
Chap. 5, the context is very repressive, with murder a regular outcome for
critical journalists. On the other hand, the journalists in question are
already openly critical of the government and do not reveal much to our
researcher that they have not said or written before, so the additional risk
flowing from his published work may be quite limited. In fact, some activists seek—preferably international—visibility, not only to advertise
their cause but also because they believe it gives them some level of protection. When a respondent articulates such a strategy, it would not make
sense to overrule him or her for their protection.
A final reason to anonymize a respondent without being asked to do so
is the long delay of the publishing process, during which the situation on the ground may change. Information that was not sensitive when provided may become sensitive by the time it is published. One
of us, for instance, had the consent of (some) Egyptian activists to be
mentioned by their first name in early 2013, but decided, because of the military crackdown that followed, that in subsequent publications they
needed to be given aliases for their protection.
Interviews with ‘Spokespersons’
The third category of interviews is where we speak to high-profile politicians or civil society figures on their official stance, as our China,
Kazakhstan, and Morocco researchers have all done on occasion. We may
still put anonymity on the table as an option, especially if we do not know
in advance exactly how the interview is going to go, but we do not usually
expect them to take it up. These are public figures who are used to media
exposure (albeit in the constrained circumstances of their authoritarian
context), and in some cases, they may have already spoken of the same
topics on public occasions. They will know exactly what they want to say
to us, and how to say it. Moreover, their quotes are only meaningful in the
context of who they are. What they say to us is interesting not because
they give us insight into their thought process, or into the inner workings
of the bureaucracy or the political process, but because they are the head
of the Islamist Party or the president of the biggest women’s rights association. These kinds of interviewees will often give their consent to be
quoted by name in our published work as a matter of course, although
they may also give us some off-the-record information at the same time.
Even if they only give us their official views, this can be of interest, because
they give us insight into authoritarian legitimation strategies. But they
typically only give us one face of authoritarian political processes: the
public face. For some of our research, that is all we need, but for many
other research questions, it is not enough.
Protective Practices
Not mentioning a name in our published work is only a small part of our
anonymization practices. When we do not use real names, we use different
kinds of descriptors instead. When respondents have very similar profiles,
sometimes we just number them. Our Mexico researcher, for instance, just
refers to journalist 1, journalist 2, and so on. Sometimes, especially when
it comes to ‘ordinary people’ interviews such as those done by our
Kazakhstan and our Malaysia researcher, we give our respondents aliases,
fake first names. This improves readability and makes it possible for a
reader to track particular respondents in a publication. When we have
interviewed people in the second category, we try to convey some additional information, so that the reader can understand why the respondent
in question would have a uniquely relevant perspective on the matter at
hand, while still not making them traceable. We might refer to them as
‘senior manager at Baidu, Beijing’, or ‘journalist, target of phishing
attempts’. Sometimes, we need to omit more than a name to keep someone’s identity a secret (see Shih 2015, 22 for some very concrete examples
on how to be protective while still conveying to a reader why a respondent
could be considered as a well-informed source). A Malaysian civil servant
who had attended an anti-government demonstration, for instance,
insisted that it was not enough to delete his name; any detailed information concerning his workplace could make him traceable, so the researcher
used the vague reference ‘works for the government’.
But anonymization is not just about what eventually gets published.
When a respondent asks to remain anonymous, we always keep their real
names separate from our transcript. The real names may be found in our
notes, in an old diary, or in a document kept separate from the transcripts.
Contact details are also kept separate from transcripts (see also Shih 2015,
22). We are well aware that none of these practices is completely secure.
While we are in the field, we are in possession both of a set of contact
details with identifiers (real names or not) and a set of transcripts. If someone were to steal or forcibly seize all our notes, transcripts, and recordings
(which is in some contexts much more likely than high-tech electronic
surveillance, as graphically depicted in Driscoll 2015, 6), and study them
attentively, they would probably be able to trace the respondents. In some
cases, the transcripts themselves are actually revealing, as, for instance, in
the case of our Mexico researcher, whose journalist-respondents sometimes refer to their own published work. At other times, it would be a
matter of putting together the transcripts and our other information.
None of us have been in a position where our material was taken from us
during fieldwork (although a bag containing fieldwork material was once
stolen during transit, see Chap. 2), but it can happen. Our protective practices would make it time-consuming and difficult to trace respondents,
but we cannot guarantee that it would be impossible.
As Bellin (2016) has pointed out, such risks put us under an ethical
obligation to be transparent in quite a different way from that intended by
DA-RT or ‘open science’: transparency to our research subjects (see also
Loyle 2016), resulting in ‘a negotiation of the level of risk and disclosure
that the respondents are comfortable with’ (https://www.qualtd.net,
IV.1). While we endorse the spirit of this, we do not think that her proposal that in the writing-up phase ‘the respondents work with the
researcher to specify what identifying information can be written about
and what should be removed or altered’ is always practicable. Precisely
when working with respondents who may be at risk, we cannot assume
that our—usually digital—communications with them about such matters
would be safe. We may sometimes have to take decisions about altered risk
conditions for them, as in the case of Egyptian activists referred to above.
Off-the-Record Information
We also come across information in our research that we cannot use at all,
not even on condition of anonymity. Sometimes, such information is really
of no interest to us, so we just ignore it. Our Malaysia researcher, for
instance, found that his activist respondents would sometimes disparage
each other off the record, but their internal relations had little to do with
his research question. It becomes more difficult when they do give us
information that is important, and is either new, or corroborates other
evidence. Our India researcher has had the experience of a respondent who had given permission for a recorded interview changing his
mind and asking for the recording to be erased. When such a request is
made, whether we think it is reasonable or not, no visible trace of the
information should remain in our published work. But what cannot be
asked of us is that we erase the information from our minds. If it is indeed
important, it will inform our analysis, and we may look for other sources
for the same information. In this case, other interviews confirmed the
story the respondent had told. Our Kazakhstan researcher has likewise had
relevant information from an opposition source who emphatically asked
that what he said be kept confidential, since the information ‘would put
him in a situation of risk’. She tried to find written sources to corroborate
the factual information he had given and kept the opinions he had
expressed in the back of her mind during her analysis. In both of these
cases, the off-the-record information was of some use to the researchers,
but in both cases, it resulted in the written analysis looking less solid than
it actually was, because it rested on an additional source that could not be
mentioned at all.
Anonymity vs. Transparency
The primary reason not to insist on divulging sources has already been
mentioned and is widely acknowledged as taking precedence over the
merits of research transparency: respondents in precarious circumstances
require our protection (Ahram and Goode 2016; Bellin 2016; Driscoll
2015; Lynch 2016; Shih 2015; Stroschein 2016). The value of anonymity
is not unique to authoritarianism research, or even to research on vulnerable groups in the social sciences. It also applies to medical research, or
public opinion research. In this sense, transparency is never boundless in
academia. As we have argued above, we think anonymity is relatively
unproblematic when it comes to research on random members of a subgroup of the general population. Anonymity becomes more controversial
when we rely on respondents who have specific, privileged knowledge of
the workings of the authoritarian system and who are not interchangeable
with others. Using what these kinds of respondents tell us under condition
of anonymity poses a dilemma between transparency and anonymity.
Betraying their confidence goes against the do-no-harm principle and is
ethically unconscionable. So the only alternative would be not to
publish anything that would have to rely on anonymous sources, which
raises its own ethical challenges, since it furthers the interests of authoritarian powerholders in opaqueness and potentially ignores voices that can
and want to tell us about abusive practices. It is possible in principle to do
authoritarianism research entirely based on named sources, for instance,
by focusing on historic cases (Art 2016). But we believe that in our field
of research—as well as many others—too much would get lost. Each of us,
in many different contexts, has at times relied on anonymity. In our
experience, those of our ‘authoritarianism’ colleagues who rely on field
research as a primary source have almost all done so too. We believe that
it is fair to say that the field could not exist without it.
Transparency About Our Practices, Not Our Respondents
None of us think reliance on anonymous sources is unproblematic from a
scientific point of view. Whether or not we believe in actual replicability of
our kind of research (we are divided on this), we all think that transparency is required for other academics to judge our work, and to build on it.
None of us have published work that rests entirely on anonymous sources.
Indeed, we agree with the advice given by Shih (2015) and Bellin (comment on https://www.qualtd.net, IV.1, 2016) that authoritarianism
research should not fetishize the interview as the only or best source of
information but triangulate information from interview material with
public documents or online sources. Some of us even think work that
relies entirely on anonymous sources should not be published because it is
not even partially verifiable. Others think that under very specific circumstances, when the author can argue why there was no safe alternative way
of gaining relevant insights, such publications can be permissible. But
instead of arguing about precisely how much a publication should be
allowed to rely on anonymous sources, we find it more helpful to shift the
way we think about transparency from a primary focus on the identity of
our sources to a focus on increasing transparency about our methods of
working.
Publications that rely on qualitative research are wildly variable in the
attention they give to research methods, depending on their disciplinary
or subdisciplinary traditions and epistemological orientations. In some
journals, the standard is for qualitative research to be written up in ways
that approximate as closely as possible the manner in which quantitative
research is conducted and described, which is not always appropriate or
helpful. In other subdisciplines and associated journals, there is simply no
tradition of requiring a methodology section giving attention to how the
empirical material was gathered and analyzed. We think that there are
many ways of doing good authoritarianism research, but regardless of
whether it aims to substantiate causal claims or whether it is more exploratory or interpretive in nature, it always benefits from transparency about
how we do things. We would argue for more transparency than is currently customary in our field of research. The spotlight should be not on
the identity of the sources but on the practices of the researcher. We
already typically share when an interview took place (where it took place is
occasionally sensitive, see Shih 2015, 22), so that at a minimum we could,
when challenged, prove that we were ‘in the field’ on that day, rather than
behind our desk inventing respondents, but that is just fraud-proofing. We
can do more: share how we came by respondents, and what biases there
might be in that process, give insight into the kinds of questions we asked,
into the informed consent-related conversations we had with respondents,
whether we recorded, how we treated our material, and so on. There
should also always be a justification, which can be brief if it is relatively
obvious, of why certain sources need to be kept anonymous. As Shih proposes, scholars of authoritarianism could also be more explicit about the
other ways in which research has been tailored to meet constraints imposed
by the regime, making it clear, for instance, that in undertaking a survey,
we might ‘ask proxy questions that are highly correlated with the sensitive
questions’ (Shih 2015, 20–21). Our choices regarding methods, ethics,
and integrity could all be treated in one section, or if being transparent
eats too much into our word count, they can be elaborated in an online
appendix.
A Culture of Controlled Sharing
When it comes to sharing of sources, one might think that while the identity of our respondents often needs to be secret, the material itself could
be shared with all, just as the raw data underlying medical research or
population statistics can be made public. Why do we not just put anonymized transcripts online? While one of us has indeed done so in the past,
we think that too often, doing so would still put our respondents at risk.
Precisely because they are not random respondents, but people with specific expertise or privileged information, a good secret service can come to
understand who you have been talking to, either from the transcripts
alone, or by combining them with other information about you or your
respondents (see also Tripp’s comment on https://www.qualtd.net, IV.1,
2016). We also believe it unlikely that respondents who want to remain
anonymous would readily give their consent to having the entire transcript
of the interview made publicly available. And if they did, the information
they would give us might be a lot less valuable: having a conversation with
us, after carefully having built a relation of trust (see Chap. 4) is not the
same as making a broadcast—even an anonymized broadcast—to the
world. Transcripts cannot therefore be available to everybody.
A final reason for not making interview transcripts publicly available is
that we believe anonymized transcripts would in fact be of limited value to
other scholars. Transcripts are faithful transcriptions of a conversation, but
they cannot be readily interpreted without the requisite contextual knowledge, the relation to off-the-record comments possibly made during the
interview itself, the connection with other conversations that could not be
transcribed, and so on. They are not equivalent to quantitative data, and
our process of drawing conclusions from them cannot be replicated in the
same way quantitative procedures can be replicated on the basis of the data
and code by anyone with the requisite methodological skills.
Nonetheless, we think that the current practice of saying ‘just trust us’,
and keeping transcripts entirely to ourselves, is not good for our collective
reputation as academics. We believe a culture of qualified sharing of anonymized transcripts should be fostered in our field, and perhaps also in
relation to qualitative research with vulnerable respondents more widely.
We will describe two concrete ways in which we imagine that this can
work, which should be read as complementary to each other.
The first is sharing between colleagues, usually but not necessarily
within the same department. Within our project, we share all anonymized
transcripts with each other. We do not share real names with each other:
the only benefit of sharing real names we can think of would be to further
reduce the likelihood of fraud, but as we discuss below, we do not think it
plausible that researchers can and will invent reams of pages of false transcripts. What sharing means to us in practice is that transcripts are all
stored together on an offline laptop in our office. This system guarantees that a small number of people have seen the interviews and can confirm their existence. When a doctoral candidate struggles to turn his or her material into an argument, moreover, supervisors can review the material themselves and offer better advice. This is an obvious and attractive solution
for research groups such as ours. Such groups are increasingly prevalent in
Europe due to the current nature of funding, which favors personal grants
to mid-career or senior scholars, intended for building a group around a
project. It may also work for region-based research centers, such as centers for Middle Eastern, China, or Russia studies, where there is an
institutional awareness of the specificities of our work. We would be more
hesitant to recommend it as a solution in all circumstances: in general
political science departments, there may not be the same understanding of
the sensitivity of the material, or conversely, the practice might lapse
because nobody polices it. A drawback of sharing within a group or center
is that, in case there are concerns over authenticity, close colleagues may
have a personal or institutional stake in covering for each other. But this
kind of sharing is still to be preferred over not sharing at all, which we
believe to be the current standard, and it can be combined with other
sharing practices as described below.
A second sharing practice could emerge in the context of the publication process of a journal article or book manuscript. One form this could
take is peer review: either transcripts could be shared with reviewers as a
matter of course, or there could be a designated ‘source reviewer’. We
think most authoritarianism researchers would be reluctant to accept such
a system: given that most peer review is double-blind, it would require
authors to hand over transcripts without having any idea to whom, other
than that these people are presumably also academics. Researchers might
well feel that submitting transcripts in this way would breach their obligation to their respondents. Moreover, it would place a heavy burden of
responsibility on journal editors or book publishers, who would then be
responsible not only for the academic quality of the reviewer but also for
her integrity with regard to neither using the transcripts for her own purposes nor sharing them with third parties.
A more obvious solution, we think, is that anonymized transcripts can
be shared with editors. As researchers, we know who the editors are, and
we have chosen their particular journal or publishing house as our preferred outlet, so it would not be strange to be asked to share transcripts
with them. Editors act as guarantors of quality, and this could extend to
due diligence in terms of checking the authenticity of sources. The exact
way in which this would work could be a matter of editorial policy. Given
the burden on editors, we imagine they might not ask for and actually
check through transcripts for every manuscript that relies on anonymized
sources. They might check for a random sample, or ask for transcripts
when they themselves or reviewers have concerns about authenticity, or
both. We have to admit that sharing with editors is not an absolute guarantee
against fraud: unless recordings are shared, there is always a theoretical
possibility that a researcher would invent entire transcripts. We think it
implausible, however, that anyone who had the local knowledge and creative talent to do so would use their capabilities to diligently conjure up
lengthy exchanges with non-existent respondents.
In our conception, only confidential materials explicitly referred to in
publications should be subject to sharing. We concur with Lynch (2016,
38) and Tripp (https://www.qualtd.net, IV.1, 2016) that it makes little
sense to share our multilingual fieldnote scribbles, which will not be intelligible to anyone. Nor should we be under an obligation to make them
intelligible, any more than we should be obliged to reconstruct inspirational thoughts we may have in the shower before writing them up. We
acknowledge that editors will not be in a position to fully interpret transcripts. As we explained above, seeing transcripts does not imply that you
can ‘replicate’ the analysis. Finally, there are practical challenges, for which
we do not yet have adequate solutions to offer, concerning how to securely
transmit transcripts to an editor. But our position is that we should move
toward a culture where it would be considered natural and legitimate for
editors to ask to see anonymized transcripts that we refer to in publications, and we would share them on request. It cannot be the case that
while quantitative researchers are increasingly being asked to make raw
data publicly available, we will not share any of our material with anyone
and just insist on being trusted.
Archiving Our Transcripts
As we explained in our introduction to this chapter, recent European policy initiatives aim to create a ‘network of networks’ of digital data repositories. It is still quite unclear at what point a researcher would be expected
to place data in a digital repository, and to what extent access would
indeed be open to all. The notion in a recent policy paper that all data should be available to all, in all phases of the research cycle (Joint Position Paper 2017), reflects a poor understanding of how scientists work, and governmental overreach in seeking to transform their ways of working.
Nonetheless, social science researchers in Europe may soon come under
institutional pressure to comply with mandatory data storage in digital
repositories. Open access repositories are subject to exactly the same
objections as the DA-RT and JETS initiatives, specifically but not exclusively from the perspective of the authoritarian field, so we need not
rehearse our arguments here.
But we want to go a step further and state our objections even to digital
repositories with restricted and/or embargoed access. Again, our primary
objection concerns risk to respondents. As Marc Lynch points out, ‘the
difficulty of guaranteeing confidentiality for materials deposited in a
trusted repository are not hypothetical to those of us who conduct research
in the Middle East and North Africa’ (Lynch 2016, 37), and the same is
true for other authoritarian contexts. We discern three aspects to this risk
of breaching confidentiality, and hence risking harm: political contingency,
legal risk, and digital risk. The first aspect relates to the apparently very
reasonable suggestion that we could place our material in digital repositories under embargo, to be made public after, for instance, five or ten years,
subject to our consent. The problem with this is that we cannot predict
the future, and hence we cannot assume that publishing transcripts would
gradually become less sensitive over time. It can also become more dangerous. Lynch gives the example of Egypt, already referred to above: what
were ‘bold but safe’ statements made by activists in 2012 or the first half
of 2013 quickly became very dangerous from the latter half of 2013.
Stroschein gives the similar example of Turkey, which has become much
more repressive after the coup attempt of 2016 (https://www.qualtd.net,
IV.1). A disembargoed interview with a Turkish respondent from, say, 2011 could well land her in trouble in 2017. And as we discussed in
Chap. 3, our China researcher has seen a more subtle but discernible shift
in the ‘red lines’ of permissibility in China over the past years, which could have implications for disembargoed transcripts.
Second, there is the possibility of transcripts becoming subject to legal
subpoena, a particular concern for US scholars (Driscoll 2015, 6; Lynch
2016). We have not given attention in this book to the risk of legal subpoena, because we have no personal experience with it, and it still seems
to be a rare occurrence. But what we can say is that when we store materials in a digital repository, we give up control: the (difficult) choice of
weighing responsibility toward respondents against legal obligations and
possible criminal liability would no longer be ours to make. Driscoll (2015,
6) records actually having burnt some of his field materials in order to
guard against the risk of subpoena. None of us have gone this far, but we
are aware that ethical review boards sometimes insist on the destruction of
data to protect respondents. A blanket destruction requirement would be
just as extreme as a blanket transparency requirement, but the fact that
social scientists can be subject to both contradictory prescriptions at the
same time illustrates the unhelpfulness of blunt, one-size-fits-all solutions
to research dilemmas.
Finally, even if deposited transcripts were to remain in restricted access,
it would be naïve, in the age of hacking, to believe that academic repositories can be made fully secure. We would like to believe that most secret
services most of the time have other priorities than getting access to our
transcripts, but we can never be certain. In Chap. 4, we quoted the forthright answer one of us got from a Moroccan activist when she asked him a
sensitive question: ‘if you can assure me that you can protect me I will give
you my answer … but since you cannot, I will not’. Here, we paraphrase
him to state our position on storing sensitive interview transcripts in digital repositories: if the institution can assure us that it can protect our
respondents, we will give it our transcripts, but since it cannot, we will
not.
Writing, Dissemination, and Future Access
As academics, we all want our work to receive attention, from our peers but
perhaps also beyond academia. We may even dream of being famous as
academics. But for a researcher on authoritarianism, academic fame is a
double-edged sword. If more than a handful of colleagues are taking
notice of our work, the regime may be doing so too. As we described in
earlier chapters of this book, we all do research in contexts where there is
some degree of space for, and understanding of, social science research.
But this space is constrained, and we do carefully consider what we publish, where we publish, and how we disseminate our work.
Our Kazakhstan researcher suspects that the regime would not be
happy about some of her work, especially that which focuses on the workings of the party in power. She did consider this when writing, but she believes that most political leaders will not read it, and that even if they did, they would consider it harmless: as an academic paper, it would be mostly ignored, since it does not communicate directly with a large public. She faced a dilemma when an assistant to the
prime minister specifically asked to be sent a copy of her work on the
political leadership’s legitimation strategies (Del Sordi 2016). Since the
assistant had been very helpful and had agreed to be interviewed herself,
our researcher could not refuse, but she did have some concern that her
access to the country could be jeopardized by this move. The prime minister in question has a reputation for academic curiosity, however, and she
has not in fact had difficulties with her most recent visa. She has even seen
colleagues taking a more public critical position without consequences for
their access, but as the authorities are always weighing the reputational
consequences of denying access against those of being criticized, one cannot rely on being able to combine public criticism with continued access.
Likewise, our China researcher believes that her description of the
Chinese political system as ‘fragmented’ might not please the government,
but since it does not aim to undermine the Chinese Communist Party, she
does not believe she would really be denied access to her home country.
Again, the relative obscurity of academic work also makes a difference:
journalists from the West are much more regularly denied access than
scholars. This difference seems to be confirmed by the recent experience of a colleague in China, who was ‘invited for a cup of tea’ by security agents because of (English-language) news coverage of an academic publication of hers.
Our Iran researcher, by contrast, has, in the very specific context of the repressive aftermath of the 2009 election protests, initiated an activist-oriented edited volume (Michaelsen 2011) that he thought could compromise future access. He found the situation in Iran so dramatic at that
time that he wanted to take a position. He decided that the story told by
this book, edited together with 11 journalists who left the country and
wrote about their experiences during and after the protests, was more
important than going back to Iran. The book was published in English
and Farsi, and he gave interviews about it to Farsi language online media
in the diaspora considered inimical by the regime. When, six years later, he
prepared for another trip to Iran, he did briefly wonder whether this publication might compromise his access to and security in the field, but he
still thinks that there are times when academic researchers should take a
clear and principled position.
Another way of disseminating our work, perhaps the most effective way in
numerical terms, is by acting as commentators in Western media. The
increasing emphasis on societal engagement, moreover, may propel scholars
to think that all publicity is good publicity. We do sometimes give radio
interviews, or allow ourselves to be quoted in newspapers, but we are very
careful about the exact wording. In the case of print media, we always insist on
seeing and being allowed to correct quotes before publication. A more negative phrasing than we are comfortable with, sometimes desired by journalists, not only interferes with the nuance of what we want to say but can also have consequences for our access to the country and to sources in the country.
A final consideration, when it comes to weighing publicity against
future access, is the extent to which our careers and our lives are bound up
with one country and its political system. Our India and Mexico researcher
and our Malaysia researcher have not been much concerned about future
access to the relevant countries, in part because such denial of access is
relatively rare, but primarily because, at this point in their career at least,
they are mixed-methods researchers who think of themselves as political
scientists who happened to do fieldwork in one or two specific countries.
Our Morocco researcher found doing research in Tunisia a refreshing change and is also thinking about broadening her expertise to
West Africa. Our Kazakhstan researcher thinks of herself as a country
expert first and foremost, but has also written on Central Asia more generally, and considers future research on Russia. Our Iran researcher, while he
has invested profoundly in learning Farsi and understanding Iran, has
partly shifted his research agenda, toward studying the Iranian diaspora,
on the one hand, and a broader comparative focus on media in authoritarian contexts on the other. While the primary motivation for broadening our research agendas has not been to mitigate the risk of the authoritarian state obstructing our research, it does make it easier to navigate the dilemmas regarding publicity and access. Our China researcher is
more exclusively invested in understanding the universe that is China,
albeit comparatively. Moreover, she is and wants to remain a Chinese citizen, so for her the stakes in navigating what to write, and where to write
it, are higher, as they are likely to be for any national investigating their
own country. In sum, we all treat how we couch our criticisms of authoritarian regimes, and how publicly we do so, as a trade-off against future access, but the choices we make depend on our specific professional and personal relation to the field.
Chapter Conclusion: Shifting the Transparency Debate
There is an inherent tension in doing, but especially in publishing, research
on authoritarianism. Ahram and Goode (2016, 838) describe authoritarian regimes as ‘engines of agnotology’, by which they mean that these
regimes have an interest in maintaining ignorance and uncertainty about
many aspects of how they function. Hence, publication can raise problems for our future access and, more importantly, pose a risk of harm to sources. We
add our voice to the chorus of scholars who have argued that a concern for
transparency in research cannot be translated into a requirement to make
transcripts or field notes public, even in anonymized form. Nor should
they be stored in potentially unsafe digital repositories. Our responsibility
to do no harm to respondents is simply paramount. But we have also tried
to go beyond only rejecting inappropriate transparency requirements. In
this chapter and in this book, we have tried to increase transparency about
how we do research: by explaining in detail how we have navigated the
methodological and ethical trade-offs that follow from doing research in
the authoritarian field, and what general lessons we think may be gleaned
from our common experiences.
References
Ahram, A. I., & Goode, J. P. (2016). Researching Authoritarianism in the
Discipline of Democracy. Social Science Quarterly, 97(4), 834–849.
https://doi.org/10.1111/ssqu.12340.
Art, D. (2016). Archivists and Adventurers: Research Strategies for Authoritarian
Regimes of the Past and Present. Social Science Quarterly, 97, 974–990.
https://doi.org/10.1111/ssqu.12348.
Bellin, E. (2016). Comment on (April 06, 2:03): “Risks and Practices to Avoid?”
in IV.1. Authoritarian/Repressive Political Regimes. Retrieved July 20, 2017,
from https://www.qualtd.net/viewtopic.php?f=26&t=174#p640.
Del Sordi, A. (2016). Legitimation and the Party of Power in Kazakhstan. In
M. Brusis, J. Ahrens, & M. S. Wessel (Eds.), Politics and Legitimacy in Post-Soviet Eurasia (pp. 72–96). London: Palgrave Macmillan.
Directorate-General Research & Innovation. (2016). Open Innovation, Open
Science, Open to the World. European Commission. Retrieved from https://
publications.europa.eu/en/publication-detail/-/publication/3213b335-1cbc-11e6-ba9a-01aa75ed71a1.
Driscoll, J. (2015). Can Anonymity Promises Possibly Be Credible in Police States?
In M. Golder & S. N. Golder (Eds.), Comparative Politics Newsletter.
Comparative Politics of the American Political Science Association, 25, 4–7.
Joint Position Paper. (2017). Joint Position Paper on the European Open Science
Cloud. Open Science, Germany and The Netherlands. Retrieved July 23, 2017,
from https://www.openscience.nl/binaries/content/assets/subsites-evenementen/open-science/joint-position-paper-on-the-european-open-science-cloud-de-nl.pdf.
Loyle, C. E. (2016). Overcoming Research Obstacles in Hybrid Regimes: Lessons
from Rwanda. Social Science Quarterly, 97(4), 923–935.
Lynch, M. (2016). Area Studies and the Cost of Prematurely Implementing
DA-RT. In M. Golder & S. N. Golder (Eds.), Comparative Politics
Newsletter. Comparative Politics of the American Political Science Association,
26, 36–40.
March for Science. (2017). Principles and Goals. Retrieved July 20, 2017, from
https://www.marchforscience.com/mission-and-vision/.
Michaelsen, M. (2011). Election Fallout. Iran’s Exiled Journalists on Their Struggle
for Democratic Change. Berlin: Hans Schiler Verlag. Open Access under:
http://library.fes.de/pdf-files/iez/08560.pdf.
Pew Center. (2017). U.S. Public Trust in Science (Rainie, L.). Retrieved July 20,
2017, from http://www.pewinternet.org/2017/06/27/u-s-public-trust-inscience-and-scientists/.
Shih, V. (2015). Research in Authoritarian Regimes: Transparency Tradeoffs and
Solutions. In T. Buthe & A. M. Jacobs (Eds.), Qualitative and Multi-Method
Research. American Political Science Association, 13, 20–22.
Stroschein, S. (2016). Comment on: (April 23, 4:48 am): “Danger, Harm and
Change” in IV.1. Authoritarian/Repressive Political Regimes. Retrieved July
20, 2017, from https://www.qualtd.net/viewtopic.php?f=26&t=174#p640.
Tripp, A. (2016). Comment on (April 13, 7:26 am): “Privileging Quantitative
Methods and Challenging Field Work Condition” in IV.1. Authoritarian/
Repressive Political Regimes. Retrieved July 20, 2017, from https://www.
qualtd.net/viewtopic.php?f=26&t=174#p640.
Open Access This chapter is licensed under the terms of the Creative Commons
Attribution 4.0 International License (http://creativecommons.org/licenses/
by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in
any medium or format, as long as you give appropriate credit to the original
author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the
chapter’s Creative Commons license, unless indicated otherwise in a credit line to
the material. If material is not included in the chapter’s Creative Commons license
and your intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copyright
holder.