Journal of Academic Ethics
https://doi.org/10.1007/s10805-024-09526-7
Perception of Research Misconduct in a Spanish University
Ramón A. Feenstra1 · Carlota Carretero García1 · Emma Gómez Nicolau1
Accepted: 15 March 2024
© The Author(s) 2024
Abstract
Several studies on research misconduct have already explored and discussed its potential
occurrence in universities across different countries. However, little is known about this issue
in Spain, a paradigmatic context due to its consolidated scientific evaluation system, which
relies heavily on metrics. The present article attempts to fill this gap in the literature through
an empirical study undertaken in a specific university: Universitat Jaume I (Castelló). The
study was based on a survey with closed and open questions; almost half the total population of the university’s researchers participated (505 out of 1030, i.e. 49.03%), yielding a
representative sample of different academic career stages and areas of knowledge. Results
show that 71.68% (n = 362) of the respondents consider at least one form of misconduct to be proliferating in their area of knowledge at the national level. This figure falls to 48.91% (n = 247) in reference to misconduct in their own institution. The most frequently reported types of misconduct linked to relationships with colleagues are the use of personal influence (in evaluation or review processes), lax supervision of doctoral theses, and the abuse of power over people in lower positions. Personal ambitions and pressure from the evaluation
system are regarded as the most influential causes of misconduct proliferation, according to
academics at this Spanish university.
Keywords Research misconduct · Integrity · University · Evaluation system
Introduction
Universities are generally understood as spaces for fostering research and teaching excellence, activities that demand compliance with high standards of scientific integrity
(ALLEA, 2017, 2023). However, in recent years a growing number of empirical studies have warned that university researchers are engaging in certain forms of misconduct
(Pupovac et al., 2017; Buljan et al., 2018; Felaefel et al., 2018; Hofmann & Holm, 2019;
Haven et al., 2019a, b, c; Hofmann et al., 2020; Ljubenković et al., 2021; Palla & Singson,
2022). In response to this problem, the scientific literature is now exploring the extent of
research misconduct in different research contexts (Fanelli, 2009; Pupovac & Fanelli, 2015;
* Ramón A. Feenstra
feenstra@uji.es
1 Department of Philosophy and Sociology, Universitat Jaume I de Castelló, Castelló de la Plana 12071, Spain
Pupovac et al., 2017; Xie et al., 2021). One of the dimensions explored in recent studies is
the possible proliferation of misconduct in specific universities. Some examples include
studies carried out in the University of Oslo (Hofmann & Holm, 2019), the Universities of
Stockholm, Oslo and Odense (Hofmann et al., 2020), Vrije Universiteit Amsterdam, University of Amsterdam and the two Amsterdam University Medical Centres (Haven et al.,
2019a, b, c), University of Zagreb (Ljubenković et al., 2021), University of Split School
of Medicine (Buljan et al., 2018); University of Rijeka (Pupovac et al., 2017), Cairo University, the American University in Cairo (AUC), Suez Canal University in Egypt, RCSI
Medical University of Bahrain and Ain Wazein Hospital in Lebanon (Felaefel et al., 2018)
and Pondicherry University in India (Palla & Singson, 2022), among others.
Despite the abundant literature in this field, studies addressing the issue in specific Spanish universities are notably lacking. In this context, investigations have examined the possible existence of misconduct as perceived by the editors of journals in the fields of communication, education and psychology (Fonseca-Mora et al., 2014). There
are also studies on perceived misconduct in specific areas of knowledge, such as philosophy and ethics (Feenstra et al., 2021) or neuropsychology (Olabarrieta-Landa et al., 2017).
Similarly, other studies have examined the number of retracted papers in the field of biomedicine (Dal-Ré, 2020; Marco-Cuenca et al., 2019).
Spain is an especially suitable context for such case studies because of its evaluation system, which uses metrics to measure impact quantitatively (Jiménez-Contreras et al., 2002, 2003;
Butler, 2004; Hicks, 2012; Derrick et al., 2013; Delgado-López-Cózar et al., 2021; Feenstra
& Delgado-López-Cózar, 2023). This system was first introduced in the experimental sciences
in 1989 and later extended to other fields, including the humanities (Marini, 2018; Cañibano,
2018). The main evaluation tools in the Spanish research system are the sexenio (recognition
of research performance assessed by a committee every six years, and rewarded as a productivity bonus) and acreditación (‘habilitation’ or tenure review process for promotion).1 Although
the specific criteria vary for the different fields of knowledge, the main feature of both evaluation systems is the assessment of academic merits according to the impact of the journals
where researchers publish, especially the results achieved in JCR (some fields also include the
option of SJR).2 Indeed, the Spanish university context was highlighted in the Leiden manifesto
as a paradigmatic case in the use of metrics to evaluate researchers in the social sciences and
humanities (Hicks et al., 2015).
These characteristics are particularly relevant, given that the existing literature frequently points to publication and impact-based research evaluation systems as influential drivers of misconduct proliferation (De Vries et al., 2006; Delgado-López-Cózar et
al., 2007; John et al., 2012; Martin, 2013; Pupovac et al., 2017; Liao et al., 2018; Maggio et al., 2019; Holtfreter et al., 2019; Aubert Bonn et al., 2019). The Spanish context
is also characterised by low public investment in R&D&I (1.4% of GDP in 2021; INE,
2022) and deteriorating working conditions for teaching staff. This is especially evident in
1 The Agencia Nacional de Evaluación de la Calidad y Acreditación (the National Agency for Quality Assessment and Accreditation of Spain, ANECA) is responsible for these evaluations. ANECA is defined as "the body responsible for the assessment, certification and accreditation of the Spanish university system with the aim of its continuous improvement and adaptation to the European Higher Education Area (EHEA)". More information at: https://www.aneca.es/en/aneca
2 Thus, promotion or productivity bonuses are granted based on the publication of a certain number of papers in what are defined as "first level" journals (defined as such by their placement in the first quartiles). For a more detailed explanation of the Spanish evaluation system, see Feenstra and Delgado-López-Cózar (2023).
the precariousness of teaching and research careers (shortage of posts, delay in access to
tenured or permanent positions) (Santos-Ortega et al., 2015). In sum, the combination of
factors leading to a prevailing climate of ‘publish or perish’, frequently mentioned in studies of misconduct (e.g., Haven et al., 2019c; Palla & Singson, 2022; Pupovac et al., 2017),
becomes particularly relevant in this context.
The above-mentioned circumstances make Spanish universities an excellent setting in which to deepen our understanding of perceptions and concerns about research misconduct. Thus, in this study we address the following questions: Is the proliferation of misconduct perceived similarly to how it is perceived at universities in other countries? Are there variations among different contexts?
And what are the causes of such misconduct, according to Spanish university researchers?
Based on these questions, the present study seeks to fill a gap in the research misconduct
literature by exploring perceptions of misconduct in the academic community of a specific
university: Universitat Jaume I, Castelló. Conducting this study in one specific university
makes it easier to integrate different areas of knowledge into the analysis, thus expanding on the abundant information already available for branches of the biomedical sciences
prevalent in studies of misconduct (Jefferson, 1998; Gilbert & Denison, 2003; Titus et al.,
2008; Stretton et al., 2012; Buljan et al., 2018; Hofmann & Holm, 2019). It also allows
us to gather information on researchers at different stages in their academic careers, thus
broadening the approach followed in studies focussed on specific moments in the academic
journey, such as early career researchers (Hofmann & Holm, 2019; Hofmann et al., 2020;
Krstić, 2015). Therefore, the ultimate aim of the study is to gain a detailed understanding
of the concerns and perceptions of researchers at a university located in Spain, a paradigm
in the use of metrics for research evaluation.
Methodology
This quantitative study is based on a survey using both closed and open questions. The
choice of Universitat Jaume I as the setting for the research was based on the following criteria: 1) its size reflects the average for small to medium Spanish universities (11,585 degree students in 2022; 34 bachelor's and 40 master's degrees) (SIUVP, 2023); 2) it offers a broad plurality of knowledge areas (Arts and Humanities, Sciences, Social Sciences, Health Sciences, and Architecture and Engineering); and 3) like 59.6% of Spanish universities, UJI has the legal status of a public university (Ministerio de Universidades, 2022).
The study population was divided into professional categories as defined by the European Commission, which differentiates between four groups: First Stage Researcher (R1,
up to the point of PhD), Recognised Researcher (R2, PhD holders or equivalent who are
not yet fully independent), Established Researcher (R3, researchers who have developed a
level of independence), Leading Researcher (R4, researchers leading their research area or
field). The population includes all researchers with contracts at the institution, across all of its areas of knowledge. The knowledge areas were grouped as follows: Arts and Humanities
(AH), Health Sciences (HS), Sciences (S), Social and Legal Sciences (SLS) and Engineering and Architecture (EA). The total population is 1,030 researchers. In surveying this population, we aim to reflect the heterogeneity of research career stages and areas of knowledge, which will reveal perceptions that are both specific to these two variables and shared
across them. The study therefore has considerable analytical potential in that perceptions
can be specified according to each area of knowledge and professional career stage, while
at the same time shared perceptions of misconduct in the university can be analysed.
The open questions in the survey allow us to contextualise the responses to closed questions answered previously, and to observe the extent to which they were understood. They
also give participants the opportunity to mention elements that do not appear in the survey
questions or optional responses. The survey is structured in five blocks of questions on the
following themes: a) open access (8), b) gender equality (4), c) knowledge of research ethics (4), d) ethical governance in the institution (6) and e) research misconduct (7); in this
paper we focus on the last block. The official languages of the university, Valencian and
Spanish, were used in the survey, which was administered through the Qualtrics digital
platform. Survey responses were accepted between 5 May and 13 June, 2021. On the same
day the survey was launched, an email was sent to the research community encouraging
participation. In addition, two reminders were sent out in the following two weeks.
The university’s Vice-Rectorate for Research and Transfer collaborated in the dissemination of the survey, since the data collected were part of an institutional project on the
implementation of research ethics.3 The research community was informed that the results
would also be used to develop a Code of Best Practices in Research and Doctoral Studies
(CBPID).4
Prior to its launch, the survey underwent an initial testing phase; the draft questionnaire
was discussed in a working group comprising 13 members of the university community.
The second version of the survey was then sent out for review. The participants in this
testing phase had a variety of profiles, both in terms of stages in academic career (4 R4, 5
R3, 1 R2, 1 R1 and 2 research evaluation specialists), and knowledge areas (AH 3, HS 2,
S 2, SLS 4, 2 specialists). Gender parity was also sought (six men and seven women), and
five people with prominent positions of responsibility in the institution (vice-rectors, etc.)
participated alongside six academic researchers who do not hold positions in the university
administration and the two specialists. This process proved crucial for adding further forms of misconduct to the list, as well as an open question on potential prevention strategies (Q7). Ultimately, it facilitated the validation of the survey.
Questions
The first two questions on misconduct aim to uncover the researchers’ perceptions of the
proliferation of misconduct in two spheres of application: at the state level and at the university itself. The questions are as follows:
Q1 Indicate whether, at the national level, you consider there to be a proliferation of
any of the following types of misconduct in your area of knowledge as a whole (mark
all the options you consider appropriate).
Q2 Indicate whether, at our university, you consider there to be a proliferation of any
of the following types of misconduct in your area of knowledge as a whole (mark as
many options as you consider appropriate).
The list consists of the following options:
3 This project is associated with the Horizon 2020 initiative 'ETHNA System: Ethics Governance System for RRI in Higher Education, Funding, and Research Centers', which was developed from 2020 to 2023. More details at: https://ethnasystem.eu
4 This code is now accessible at: https://www.uji.es/investigacio/base/etica/cbpid/?urlRedirect=https://www.uji.es/investigacio/base/etica/cbpid/&url=/investigacio/base/etica/cbpid/
• Duplicate publication or self-plagiarism
• Plagiarism
• Fabrication or falsification of data
• Use of personal influence
• Pressure on publishers
• False authorship (ghost or honorary authorship)
• Abuse of power over research staff in lower positions
• False participation in research projects
• Failure to declare conflict of interest when publishing results or hiring research support staff
• Lax supervision of doctoral theses
• Misuse of available research project resources (including for research staff recruitment)
• Failure to disseminate research results or to commercialise them where appropriate
• Fraudulent review of research papers or projects (motivated by aversion or affinity)
• Failure to protect personal data
• Other forms of misconduct
• ⊗ NO misconduct occurs in this area
The concept of proliferation is particularly relevant for these two questions. They are worded in such a way as to highlight not the presence, but the possible proliferation of certain forms of misconduct, as perceived by the respondents. This means that forms of misconduct that are prevalent might appear less proliferating than others that respondents perceive to be rapidly increasing in frequency. Another key issue in these questions relates
to the types of misconduct listed. Our study offers a wide range of options, purposely going
beyond falsification, fabrication and plagiarism (FFP) to include a number of questionable
research practices (QRP). The 14 options presented were previously discussed in depth in the working group, and were chosen because their impact could potentially affect all the areas of knowledge in the university. Participants were also offered the option "NO misconduct occurs in this area". In addition, an open question, Q3 ("Other forms of misconduct"), allowed respondents to provide further information.
The survey also includes a question on possible causes:
Q4. You identified forms of research misconduct in one or both of the spheres in Q1 and Q2. What do you consider to be the cause(s) of these forms of misconduct? (Mark as many options as appropriate):
• Spain's scientific evaluation policy (accreditations–ANECA, and sexenios–CNEAI)
• The evaluation policy in our university
• The lack of training in ethical issues for researchers
• Researchers' personal ambitions
• Other reasons
If a participant marked “Other reasons”, they were given the opportunity to elaborate in
the following question:
Q5. You selected "Other reasons"; what, in your view, are the motivations for research misconduct?
Table 1 Population, sample, and proportion of respondents by professional category

Category   Population   Sample   Proportion of respondents within the population
R1         243          52       21.40%
R2         169          89       52.66%
R3         448          259      57.81%
R4         170          105      61.76%
Total      1030         505      49.03%
The respondents were also asked to assess the weight associated with each cause in the
following question:
Q6. To what extent do you consider that each cause mentioned influences the appearance of misconduct? (Participants only saw the options they had marked previously in Q4.) Response options: never, very little, a little, a lot, totally.
This was followed by an open question on possible prevention strategies:
Q7. What strategies do you think could be implemented to help prevent future
research misconduct?
This open question allowed respondents to expand the information beyond the limits of
closed options and numerical data, enabling them to give a deeper, more comprehensive
vision of the topic.
The block ends with another open question (Q8) where participants could add further
reflections or comment on any points of the survey.
Survey participation
The survey was begun by 539 people, of whom 505 provided sufficient information for our analysis of research misconduct; the remaining 34 people (12 R1, 3 R2, 15 R3 and 1 R4) did not complete the survey. The final number of responses was 505 out of a total population of 1,030, that is, 49.03%. For a 95.5% confidence interval (two sigmas) and p = q, the sampling error is ±3.18% for the whole sample, assuming simple random sampling. The sampling strategy employed (a self-administered questionnaire sent online to the entire population of researchers) inevitably generates a self-selection process in the sample. This might bias the study, as individuals who respond do so for non-random reasons. The sample distribution by academic career stage and area of knowledge is shown in Tables 1 and 2.
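The reported sampling error can be reproduced from the figures above. A minimal sketch in Python (for illustration only), using the standard formula for proportions under simple random sampling with a finite population correction, z = 2 (two sigmas, ~95.5% confidence) and p = q = 0.5:

```python
import math

def sampling_error(n, N, z=2.0, p=0.5):
    """Sampling error for a proportion, with finite population correction.

    n: sample size; N: population size; z: z-score (2 for ~95.5% confidence);
    p: assumed proportion (p = q = 0.5 maximises the error).
    """
    fpc = (N - n) / (N - 1)  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * math.sqrt(fpc)

print(f"±{sampling_error(505, 1030) * 100:.2f}%")  # → ±3.18%
```

With n = 505 and N = 1030 this yields the ±3.18% reported in the text.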
Table 2 Response rate by field of knowledge

Field of knowledge                  Population   Sample   Proportion of respondents within the population
Arts and Humanities (AH)            127          58       45.67%
Sciences (S)                        194          95       48.97%
Social and Legal Sciences (SLS)     393          202      51.40%
Health Sciences (HS)                34           25       73.53%
Engineering and Architecture (EA)   282          125      44.33%
The high level of participation in the open questions is also noteworthy. We received
32 responses to the question on other forms of misconduct, 40 responses to the question
on other causes of misconduct proliferation, 203 responses to the question on how to
prevent it, and 76 responses to the final open question (in which the topic with the most
comments was research misconduct or research ethics). The length of the content of
the qualitative section (a total of 19,992 words) reflects the importance the participants
attribute to the topic.
The qualitative data were subjected to a thematic analysis to identify consensus and discursive patterns within the corpus. Coding was carried out inductively and independently by two researchers, resulting in a double manual coding of the content. This approach enabled the identification of patterns in the perception of misconduct typologies, their potential proliferation, underlying causes, and potential preventive measures. We include some of the most notable and illustrative extracts in this article; respondents are identified by professional category (R) and area of knowledge.
Results
Perception of misconduct proliferation
Our analysis shows that 71.68% (n = 362) of respondents perceive a proliferation of at
least one type of misconduct in their knowledge area at the national level, compared to
21.19% (n = 107) who do not perceive this to be the case; 7.13% (n = 36) did not answer
this question. The percentage falls significantly when the question refers to misconduct in the university: 48.91% (n = 247) of respondents perceive the proliferation of at least one type of misconduct (a cumulative count of respondents who perceive one or more forms of research misconduct), whereas 34.85% (n = 176) do not perceive any misconduct to be on the increase. In addition, the number of non-responses to this question rises significantly, to 16.24% (n = 82). In sum,
there are substantial differences between the perceptions of misconduct at the national
level and in the respondents’ own university (see Fig. 1).
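These shares are simple proportions of the 505 responses analysed; a quick illustrative check (the helper name `pct` is ours, not from the study):

```python
def pct(count, total=505):
    """Percentage of the 505 analysed respondents, to two decimals."""
    return round(100 * count / total, 2)

print(pct(362))  # at least one type, national level → 71.68
print(pct(247))  # at least one type, UJI → 48.91
print(pct(176))  # none perceived, UJI → 34.85
print(pct(82))   # no answer, UJI → 16.24
```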
Considering only those individuals who believe that research misconduct is proliferating, the distribution of the number of perceived misconduct types is as follows when comparing the Spanish university system and the Universitat Jaume I (see Fig. 2).
The perceived proliferation of misconduct is also evident in the open questions (Q5 and Q8), in which several participants reflect on it:
These (mal)practices are not only substantial, but are likely to become even more
so in the future. R3.SLS.41
The types of research misconduct described are relatively common. R3.EA.15.
Research, not the administration, in my area of knowledge lies somewhere
between the jungle and a sharks’ feeding frenzy. R3.EA.18
Another type of response refers to the difference between the possible “sporadic”
presence of misconduct as opposed to its “proliferation” (something we will return to in
the discussion). One participant specifically notes that:
There are some types of misconduct that do occur in my area of knowledge, as far as I know, but I haven't marked them because the question asks about "proliferation". I don't think they are widespread, but they do occur from time to time. R4.AH.7

Fig. 1 Perception of misconduct proliferation in the Spanish university system and at Universitat Jaume I (y-axis: percentage). [Bar chart: All Spain: at least one type 71.68%, no perceived misconduct 21.19%, DK/NA 7.13%; UJI: 48.91%, 34.85% and 16.24%, respectively]
The average number of misconduct types marked as proliferating (excluding respondents who perceived no misconduct) is distributed by gender, professional category and knowledge area as shown in Table 3.
Fig. 2 Distribution of the number of proliferating misconduct types perceived in the Spanish university system and at the Universitat Jaume I (y-axis: percentage; x-axis: 1 type to more than 5 types).
Table 3 Comparison of the average proliferating misconduct types perceived by gender, professional category, and knowledge area

          All Spain           UJI
          N      Average      N      Average    Difference
Women     162    3.49         108    2.75       0.74
Men       200    3.41         139    2.71       0.70
R1        41     3.63         31     2.87       0.76
R2        71     3.38         48     2.60       0.78
R3        177    3.62         115    2.86       0.75
R4        73     2.99         53     2.45       0.53
AH        42     4.21         28     2.89       1.32
S         67     2.73         35     2.46       0.27
HS        22     3.27         16     2.00       1.27
SLS       150    3.49         108    2.90       0.59
EA        81     3.60         60     2.68       0.92
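The difference column in Table 3 is simply the All Spain average minus the UJI average for each group (entries such as R3's 0.75 against 3.62 − 2.86 reflect rounding of the underlying means). A short sketch reproducing the knowledge-area rows, with the averages taken from the table:

```python
# (All Spain average, UJI average) of perceived misconduct types, from Table 3
averages = {
    "AH": (4.21, 2.89), "S": (2.73, 2.46), "HS": (3.27, 2.00),
    "SLS": (3.49, 2.90), "EA": (3.60, 2.68),
}
for area, (spain, uji) in averages.items():
    # gap between system-wide and own-institution perception
    print(f"{area}: {spain - uji:.2f}")
```

This reproduces the differences of 1.32 (AH), 0.27 (S), 1.27 (HS), 0.59 (SLS) and 0.92 (EA) reported in the table.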
There are no significant differences in the number of misconduct types perceived
as proliferating by gender. With regard to professional categories, group R1 identifies
the highest number of misconduct types, followed by R3 and R2; group R4 perceives
a lower proliferation of misconduct and reports a smaller difference between the UJI
and the Spanish university system in general. By knowledge area, the highest number
of misconduct types is perceived in Arts and Humanities for the Spanish university
system as a whole. However, the figure drops significantly for the UJI in this knowledge area, which is relevant since it falls below the average misconduct perceived in
Social and Legal Sciences. In Sciences, the knowledge area with the fewest perceived
misconduct types, the difference between the Spanish system and the UJI is only 0.27,
indicating that the perception of misconduct in the UJI is not only very low, but is also
consistent with the perception for the Spanish university system as a whole. This interpretation is supported by comments in the open section, in which some respondents state that misconduct does not affect the Sciences:
In my field of knowledge, the sciences, work is carried out seriously, professionally, responsibly and with a high level of commitment, so research ethics doesn’t
seem to me to be such a relevant issue, or at least not something that needs
improving. R3.S.4
As reflected in the excerpt, science researchers appear to perceive research misconduct as less on the rise in their field because of the nature of the scientific method, which suggests a prevailing hierarchical vision of the areas of knowledge. In contrast, the differences between perceptions of the Spanish university system and the Universitat Jaume I reported by Arts and Humanities and Health Sciences respondents are 1.32 and 1.27, respectively, suggesting that these are areas with a high perception of misconduct in Spanish universities in general (in contrast to their own).
Types of misconduct perceived as most proliferating
The types of misconduct respondents most frequently report at the state level are 1) use
of personal influence, 2) lax supervision of doctoral theses, and 3) abuse of power over
research staff. These are also the most proliferating forms of QRP perceived in the university, although options 2) and 3) are reversed (see Fig. 3).
These data show strikingly high percentages for the most commonly perceived misconduct types, especially at the national level. The lowest levels of perceived misconduct correspond to plagiarism and failure to protect personal data, both at the national level and within the university. Perceptions according to professional category, area and gender are reported in Table 4 below.
By area, self-plagiarism, plagiarism, pressure on publishers, lax supervision of doctoral
theses, fraudulent review of papers and non-compliance with data protection stand out in
AH. In S, only data fabrication is of note. In HS, false authorship and misuse of research
project resources are most commonly cited. In SLS, the use of personal influence, abuse of
power and failure to declare conflicts of interest prevail. Finally, in EA, false participation
in research projects and lack of dissemination of results are prominent.
By professional category, the highest perceptions of misconduct proliferation are found
in groups R1 and R2. It is notable that lax supervision of theses and plagiarism are the practices most commonly reported by R4, possibly because their position in the academic hierarchy allows them to observe these practices more easily.
By gender, men perceive a higher proliferation of misconduct overall. The most striking differences are found in the use of personal influence, which women report 5.9 percentage points more often than men, and in false authorship, which men report 7.4 percentage points more often than women.
Moreover, it is significant that in the space provided for participants to describe types of
misconduct not included in the list (Q3), they referred to publication in dubious or predatory
journals. This idea is associated with “aggressive”, “predatory”, “lax” publication models
(seven people referred to these issues). These terms can be interpreted as indicative of an
effort to delegitimise the researchers who publish in them and their work. Here, researchers’
choice of journals in which to publish their work assumes an ethical dimension.
Fig. 3 Perception of misconduct proliferation in the Spanish university system and at the Universitat Jaume I, by type of misconduct (x-axis: percentage). [Horizontal bar chart comparing the All Spain and UJI percentages for each of the misconduct types listed in Q1/Q2]
Table 4 Perception of misconduct proliferation in Universitat Jaume I by area, professional category, and gender (%). Bold figures show the most significant values for each gender, professional category, and area

                                                        AH     S      HS     SLS    EA     R1     R2     R3     R4     Women  Men
Duplicate publication or self-plagiarism                15.52  8.42   4.00   14.85  11.20  15.38  16.85  9.27   14.29  11.74  12.73
Plagiarism                                              6.90   2.11   0.00   0.50   1.60   0.00   2.25   1.54   2.86   2.17   1.45
Fabrication or falsification of data                    5.17   5.26   0.00   0.99   3.20   9.62   0.00   3.09   0.95   1.74   3.64
Use of personal influence                               29.31  14.74  28.00  32.67  12.80  21.15  30.34  22.78  21.90  26.96  21.09
Pressure on publishers                                  5.17   1.05   4.00   1.98   0.00   0.00   1.12   2.32   1.90   1.30   2.18
False authorship (ghost or honorary authorship)         1.72   10.53  24.00  11.88  15.20  21.15  12.36  10.04  11.43  7.83   15.27
Abuse of power over research staff in lower positions   17.24  12.63  16.00  25.74  17.60  17.31  24.72  20.46  15.24  19.57  20.00
False participation in research projects                10.34  9.47   16.00  13.37  22.40  19.23  14.61  14.67  12.38  13.91  15.27
Failure to declare conflict of interest                 1.72   3.16   4.00   4.46   4.00   13.46  3.37   3.09   0.95   3.91   3.64
Lax supervision of doctoral theses                      22.41  13.68  8.00   18.81  13.60  13.46  11.24  16.22  22.86  17.39  15.64
Misuse of available research project resources          10.34  2.11   12.00  8.42   8.80   9.62   10.11  7.72   4.76   8.70   6.91
Failure to disseminate or commercialise results         3.45   6.32   8.00   9.41   12.80  15.38  7.87   8.49   7.62   6.96   10.55
Fraudulent review of research papers or projects        8.62   0.00   4.00   10.40  4.00   11.54  3.37   6.18   6.67   5.65   6.91
Failure to protect personal data                        1.72   1.05   0.00   1.49   1.60   3.85   2.25   1.16   0.00   1.30   1.45
Other forms of misconduct                               3.45   3.16   12.00  2.48   4.80   7.69   2.25   4.25   1.90   4.35   3.27
⊗ NO misconduct occurs in this area                     29.31  44.21  24.00  31.19  38.40  30.77  31.46  35.52  38.10  34.35  35.27
A number of participants also used this open question to express concerns about or
elaborate on misconduct related to relationships with colleagues. This is noteworthy, as
it is observed both in the answers to the closed (Q1 and Q2) and open questions (extracts
from Q3 and Q8). In sum, misconduct involving interactions between different academic
categories emerges as a central concern among the researchers. These types of misconduct
refer to the perpetuation of power relations and networks (and the inequalities resulting
from them) between different groups and individuals in the academic setting. The following excerpts illustrate this point:
A conception [of the university] as a pyramid, in which the group leader puts their
name to everything that comes out, even if they do not participate, in order to
increase their impact index and get on the ‘most cited’ or ‘most published’ list, as
this results in greater visibility for the group. R3.EA.20
The hierarchical, medieval model the university is based on, which despite being outdated, continues to be reproduced on a daily basis. R2.AH.1
Hierarchies must be urgently penalised so they are not perpetuated in future generations of researchers. The academic career is hard, but it should not be painfully
intolerable in a public, fair and open organisation such as our university aspires to be.
R2.SLS.21
The worst type of misconduct in the university is linked to the abuse of power by
faculty in senior positions in the hierarchy. R3.SLS.55
There are researchers at the UJI who use mafioso/coercive practices with “inferior”
teaching staff. R3.SLS.19
Abuse of power over people coming into the university later. R3.EA.24
Using qualified research staff or those in training without paying them (beyond what
could be understood as an internship) under the promise of contracts that, if awarded,
do not correspond to the work actually done or required of them. R2.AH.4
Possible causes leading to misconduct and suggestions on how to prevent it
The possible causes behind the proliferation of misconduct identified by the respondents
relate to two particular factors: 1) personal ambitions, and 2) Spain’s scientific evaluation
system. These reasons are given by 51.5% (n = 260) and 47.3% (n = 239) of the respondents, respectively, although notably, 38.08% (n = 155) identified both causes as key factors
in the possible proliferation of misconduct. In turn, 14.9% (n = 75) consider the university’s
own evaluation policy to be behind the proliferation of misconduct, and 20.6% (n = 104),
lack of knowledge about research ethics issues (see Fig. 4).
Likewise, 9.11% (n = 36) suggest other possible reasons, describing in the open question
(Q5) issues such as excessive bureaucracy, precariousness, lack of time, lack of effective
control systems and, significantly, the hierarchical model of the university. This ties in with
the question mentioned in the previous section on the problematisation of relations with
colleagues.
With regard to the specific weighting of each cause (Q6), scientific evaluation stands out
slightly with 4.24, compared to 4.21 for personal ambitions (Fig. 5).
These data are also reported according to gender, professional category, and area in
Table 5.
This table shows the average importance attributed to each reason, taking into
account only the responses that mentioned them as a reason for the proliferation of
misconduct. The differences by gender are not significant, except for the significantly
[Fig. 4 Reasons given for misconduct proliferation (percentage of respondents): personal ambitions 51.49%; Spain's scientific evaluation system 47.33%; lack of training in ethical issues 20.59%; UJI's scientific evaluation system 14.85%; other reasons 9.11%.]
higher importance women attribute to all causes. Earlier career categories give more
importance to personal ambitions than to Spain’s scientific evaluation policy. The
importance attributed by both male and female R4 researchers to the lack of training in
ethical issues is notable. By area, Arts and Humanities researchers highlight the importance of the university’s own evaluation policy, and the lack of training in ethical issues
is also reported by Social and Legal Sciences researchers.
The possible origins of the proliferation of misconduct were also addressed in an
open question asking participants for their ideas on the best strategies to prevent it (Q7).
Here, many of the responses identify a need to change the system of scientific evaluation (66 respondents from different professional categories and areas of knowledge specifically mention this issue). These statements are broad and complex, and are striking
in that they identify different parts of the evaluation system, as we will now explain.
[Fig. 5 Average importance attributed to the causes of misconduct proliferation (mean value on a 1–5 Likert scale): Spain's scientific evaluation system 4.24; personal ambitions 4.21; UJI's scientific evaluation system 3.97; lack of training in ethical issues 3.92; other reasons 3.88.]
Table 5 Average importance attributed to the causes of misconduct proliferation by gender, professional category and area. Bold figures show the most significant values for each gender, professional category, and area

          Spain's scientific   Own university's        Lack of training    Personal         Other
          evaluation system    scientific evaluation   in ethical issues   ambitions        reasons
                               system
          N     Average        N     Average           N     Average       N     Average    N     Average
Women     110   4.28           33    4.06              47    4.06          116   4.28       21    3.90
Men       126   4.20           39    3.90              55    3.80          139   4.14       22    3.86
R1        18    3.94           8     3.75              12    3.67          25    4.12       7     3.86
R2        46    4.17           14    3.79              21    3.90          49    4.29       9     4.22
R3        128   4.32           41    4.10              49    3.90          122   4.29       20    3.65
R4        44    4.18           9     3.89              20    4.15          59    4.02       7     4.14
AH        29    4.28           12    4.42              11    3.82          32    4.34       5     4.00
S         32    4.22           10    3.70              21    3.76          51    4.08       8     3.63
HS        15    4.40           4     4.25              4     3.50          16    4.25       2     3.50
SLS       103   4.31           26    4.00              43    4.07          107   4.22       17    4.12
EA        57    4.05           20    3.75              23    3.91          49    4.20       11    3.73
Firstly, some of the respondents recognise a causal relationship between the evaluation system and misconduct proliferation. For example:
In my opinion, the misconduct stems from the evaluation system itself. R3.SLS.11
At the national level, acreditación/sexenios requirements are becoming more and
more demanding and lead to misconduct that, if it were not for the high level of
frustration in many cases, would otherwise not take place. R3.EA.16
The current research evaluation system engenders some of these types of misconduct. R3.S.16
Some types of misconduct are integral to the system itself, for example quantitative productivity. R4.S.27
Temporary teaching and research staff are running a long-distance race where the
objective is to publish at any price. R1.HS.1
Misconduct is due to the fact that research staff feel pressured by the demands of
academia to publish at any price to get promoted in their teaching and research
career. In the end, that’s what counts most. R3.SLS.15
Secondly, several participants specifically mention certain aspects of the evaluation
system as driving the proliferation of misconduct, particularly the central role of bibliometric indicators in measuring academic productivity. It is therefore unsurprising to
hear critical voices from different areas of knowledge as follows:
Don’t make promotion of researchers in public universities, who do their research
with public funds, dependent on the interests and commercial strategies of the private multinational publishing companies that generate the rankings. R3.SLS.30
An overhaul of hiring criteria and ANECA research evaluation systems is crucial.
R4.SLS.36
Rethinking incentive policies to include broader criteria than publication in the
JCR; critical reflection on the real meaning of journal impact factors. R4.SLS.2
Don’t consider journal impact factors in CV evaluations in ANECA. R4.SLS.3
Journals’ impact factors cannot be “the golden rule” for evaluating the quality of
scientific production. R4.SLS.22
Get rid of the JCR impact factor as the key (or practically the only) element for
measuring quality. R4.SLS.9
Eliminate the evaluation of research according to journal impact factors.
R2.AH.10
Don’t focus all state-level research on journals’ position in the JCR. R4.S.31
Don’t depend so much on journal impact factors. R2.SLS.2
Stop glorifying the journal’s quartile ranking and explore new quality indicators
for academic papers. R3.EA.4
Thirdly, respondents refer to the way the evaluation system is driving a change in
direction, manifested in the growing pressure to publish, which is prioritised in research
CV evaluations. In sum, they are critical of the drift towards “publish or perish” and
associate the demands for constant productivity with the proliferation of misconduct.
Some researchers call for:
Less pressure from the administration to publish, publish, publish as the main
mechanism to forge a career in the university, or simply to survive there. Most
of what is published in some areas of knowledge, even in journals with high
impact factors, serves almost exclusively for that purpose: so authors can survive
in the university or make a career there, and to feed the whole self-referencing
wheel of the academic publishing industry, without having any real social impact.
R1.SLS.15
Reduce the pressure on research productivity and keep it at levels that are healthy
both for the rigorous construction of knowledge and for people’s lives. R3.SLS.103
Reduce the pressure to publish. R3.S.35.
Remove the pressure to publish a minimum number of publications per year/sexenio.
R3.AH.3
Reduce the pressure to publish and focus efforts on more influential articles. R4.S.3
Remove the pressure to only have (lots of) Q1 papers in the JCR. R3.EA.11.
Get rid of publish or perish, especially in accreditation for teaching staff. R2.HS.3
Less pressure to publish in high-impact journals. R2.SLS.19.
The figure of the researcher in this country has become a machine for publishing,
with no reflection on what they are researching, [or] what impact it has on society.
R3.SLS.70
Change the criteria for the accreditation of teaching staff (ANECA) and sexenios and
ensure that publications and projects are not assessed “by weight”. R2.HS.5
In sum, these critical voices call for a remodelling of the evaluation system in order to
prevent misconduct. To this end, several participants also state that the key solution lies
in providing an alternative definition of what “research quality” means. In the main, these
statements advocate reducing the weight of quantitative criteria and combining them with
qualitative criteria. This sentiment is illustrated in the following extracts:
Research should not be valued in purely bibliometric terms. What is valued today
is having a large number of articles that nobody will ever read and that have not
brought about any transfer [of knowledge]. This needs to change. R3.EA.61
Qualitative evaluation of merit by experts; avoid extreme “quantification” of all merits; evaluate coherence throughout the career path. R3.EA.44
The above values and practices (transparency, etc.) cannot be extracted and put in the
right context just by assessing written CVs and comparing impact factors. Strategies
are needed that focus on the evaluation of the research process, based on the research
results. R3.EA.5
An evaluation of research activity that doesn’t just take into account the number of
publications and the misunderstood “quality” (or equated with metrics that do not
measure quality, such as impact factors). R3.SLS.20
Value the content and quality of scientific production and not the number of articles
or publications. R2.SLS.9
Value quality instead of quantity. R3.HS.4
Give greater weight to quality (which is not only the impact but also recognised
importance and authority over time) than to the quantity of publications. R3.SLS.44
Accreditation not based on the number of publications, but on the quality and impact
of the research. R2.AH.3
Although nuances emerge in the respondents’ understanding of “research quality”, they
defend aspects such as transfer, social impact, scientific dissemination or originality. Some
also advocate placing greater value on the quality of other tasks such as teaching, which
takes second place in the requirements of the academic evaluation and promotion system.
Notions of what quality should be are varied, but the responses seem to share a central idea:
the importance of preventing the means (high-impact publication) becoming the researcher’s ultimate (and only) end. This idea is clearly illustrated in the following excerpt:
[the solution] would be to go beyond the current short-sighted view to take a broader
and more critical look at what “quality” in research is considered to be [...]. In the
present system, the indicators initially conceived as a means to an end have become
ends in themselves. Researchers fall into the trap of not doing research that is necessary and practically relevant. R3.SLS.78
These responses to the open question provide insights into the survey data and, specifically, the perception of the possible causes of misconduct. According to the respondents, it
is mainly the demands of the evaluation systems that seem to contribute negatively to the
proliferation of misconduct. The general perception is that the current evaluation system,
which demands constant productivity from researchers in the form of articles that rank well
in the bibliometric evaluation scales, eventually encourages the less desirable inclinations
of researchers in a highly competitive and precarious context. It is striking, however, that in
the question on prevention (Q7) few respondents mention the personal dimension and the
issue of ambition, which was the most frequently identified cause in the closed question.
Only four people refer to it at all, stating that ambition is an individual matter and therefore
difficult or impossible to control and change. One participant went so far as to state that
“having ambition is not bad, but [some ways of] dealing with it are” (R3.SLS.13).
Other responses refer to the explanations already mentioned that situate the most common misconduct in the context of the power relations in interactions between colleagues.
On this point, several respondents propose QRP prevention strategies that recognise the
need to 1) foster better dynamics in research groups, 2) penalise those who engage in misconduct (and/or conversely, reward those who promote good practices), 3) promote greater
transparency within research groups, 4) give fair recognition to the contributions of less
established individuals (especially in publications) and 5) eschew false authorship. Finally,
others also advocate promoting greater awareness of good practice in research in general
and in research groups in particular, that is, better knowledge of research ethics (in line
with responses to Q4).
Discussion
Several empirical studies have highlighted the possible incidence of misconduct in research
in general and in universities in particular. It is difficult to know the real extent of misconduct in research, either in general or in specific universities, as studies (including this one)
tend to collect data on perceptions. Comparisons between studies should also be made with
caution. In this regard, for example, the data may change depending on whether the focus is
on FFP, or is broadened out to include more QRPs or limited to fewer, or on how the questions are worded (Godecharle et al., 2018; John et al., 2012; Pupovac & Fanelli, 2015; Xie
et al., 2021).
The data from our study show that 71.68% of researchers perceive that at least one form
of misconduct is on the rise at the national level, a figure that falls to 48.95% when the
question refers to their own university. This pattern is common in studies of misconduct perceptions: respondents tend to underreport misbehaviour the closer it is to home, and to overestimate that of their colleagues (Pupovac & Fanelli, 2015). The general proliferation
perceived falls short of that observed in some other studies which, while not focussing specifically on individual universities, offer some noteworthy data. For example, in the context
of Nigeria, 96.2% of researchers “believed that one or more forms of scientific misconduct had occurred in their workplace” (Okonta & Rossouw, 2014, 3). However, our data
do coincide with other studies in universities such as, for example, that of Felaefel et al.,
(2018, 72) for three universities in the Middle East, which found that “74.5% reported having knowledge of [some] misbehaviors among […] their colleagues”.5
Any comparisons between our analysis and other studies should bear in mind that here
we are asking about the possible proliferation of misconduct. We opted to use this term
rather than “occurred”, as in Okonta and Rossouw, for example, or the concept of “knowledge” of misconduct used in Felaefel et al.’s study. This conceptual choice in formulating
our research problem allows us to define which actions are of most concern to researchers
at the specific university in our study.
The types of misconduct our participants most frequently report as proliferating are the use of personal influence, lax supervision, and abuse of power over people in lower positions. These
are linked to what we might identify, following de Vries et al. (2006), as “life with colleagues”, which is a concern also noted in responses to the qualitative questions in our survey. The participants state that relationships between colleagues are damaged by the power
relations and inequality between staff at different points in their academic careers. This configuration of social relations in the academic setting is explained either by the aim to achieve
better results in scientific evaluations or by an unwillingness to comply with the obligations
and principles of integrity in relations with people lower down the university hierarchy.
Few studies include data on this type of misconduct, and specifically on the use of personal influence or the abuse of power over people in lower positions. One study in the
Spanish context highlighted the use of personal influence to manipulate review processes,
which reached 57.5% in the areas of philosophy and ethics (Feenstra et al., 2021). The issue
of abuse of power has also been explored in studies such as Martinson et al. (2005), who
found that 1.4% of their sample engaged in “relationships with students, research subjects
or clients that may be interpreted as questionable”. However, this study enquired into the
researchers’ own actions. Our research thus extends knowledge on these little-studied QRPs.
Lax supervision of doctoral dissertations is a QRP that has been attracting wider attention among researchers (Gopalakrishna et al., 2022; Haven et al., 2019a). The data from our study support findings of previous research. For example, Haven et al.
identified this type of misconduct as the most outstanding problem in the analysis of the
different disciplines (Biomedicine, Natural sciences, Social sciences and Humanities) present in a sample of universities in Amsterdam (Haven et al., 2019a).6
In contrast, when compared with other studies on misconduct related to false (ghost or
honorary) authorship, our data on proliferation (15.77% at the national level and 11.13% at the university level) are much lower than those perceived in other universities, such
as 63% found for Pondicherry University in India (Palla & Singson, 2022); 55.7% for the
University of Rijeka (Pupovac et al., 2017), or between 36 and 46% at the Universities of
Stockholm, Oslo and Odense (Hofmann et al., 2020). Therefore, and although this implies
a comparison conditioned by the diverse nature of the studies, researchers in our study are
less concerned about this aspect than about the interpersonal relationships mentioned above.
5 Fanelli's well-known meta-analysis also finds that "In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices" (2009, e5738). In a more recent and extensive meta-analysis, Xie et al. (2021, p. 40) report that "The total prevalence of observed reported RM concerning at least 1 of FFP was 15.5% (95% CI 12.4–19.2%)" and that "The collective prevalence of observed reported QRPs concerning 1 or more QRPs in respondents was 39.7% (95% CI 35.6–44.0%)".
6 In turn, a nationwide study in the Netherlands by Gopalakrishna et al. (2022) identified "Insufficiently supervised or mentored junior coworkers" as the third most common QRP (with a score of 15%).
To a lesser extent, our study also found the proliferation of some forms of misconduct
related to the fraudulent review of research articles and/or projects, failure to disseminate
research results, misuse of research project funds and the avoidance of conflicts of interest.
Of these forms of misconduct, there is a striking difference in the data for fraudulent review,
perceived by 20.79% at the national level and 6.36% for the university. These figures stand
out both for the difference between the two levels and for the reported proliferation, compared with other studies. For example, in the study by Gopalakrishna et al. (2022), the
figure is less than 1% (although this study examines researchers’ own behaviour and not
perceptions of their colleagues’). In turn, in Haven et al.’s (2019a) study of universities in
Amsterdam, “unfair review” only appears among the top five QRPs for the humanities, but
is not present in biomedicine, natural sciences and social sciences.
With regard to FFP, our study shows a lower proliferation than other forms of misconduct. This is not a surprise; other studies have been warning of the need to pay attention not
only to these forms of misconduct, but also to “mundane misbehaviour” (de Vries et al.,
2006; Haven et al., 2019a), as these are precisely the most prevalent types.
Finally, the possible causes of misconduct proliferation are mainly associated with personal ambitions and the Spanish evaluation system. The weight of the evaluation system is
also widely covered in responses to the study’s open questions. Participants point out that
the system encourages researchers to increase their productivity in order to obtain positive
evaluations for teaching accreditation, applications for positions or projects, etc., regardless of the means or channels they use to do so. This association between misconduct and
the evaluation system has been highlighted in the abundant literature in the field (Okonta
& Rossouw, 2013; Tijdink et al., 2014; Pupovac et al., 2017; Liao et al., 2018; Felaefel
et al., 2018; Maggio et al., 2019). Thus, although it is sometimes difficult to establish direct
causal relationships between misconduct and evaluation systems (Fanelli et al., 2015), our
study builds on the evidence from researchers that the pressure to obtain results and, above all, the measurement of those results against quantitative criteria based on the accumulation of merit, do little to foster scientific integrity.
Consequently, the results of our study point to the potential adverse impact of the current evaluation systems, echoing the concerns of prominent initiatives such as the San
Francisco Declaration on Research Assessment (DORA, 2012) and, more recently, the
Coalition for Advancing Research Assessment (COARA, 2022). These initiatives call on the academic community and policymakers to implement reforms so that hiring and promotion processes are not based solely on journal-based metrics. Instead, they advocate an approach that places primary emphasis on qualitative assessment (through peer review), complemented by a responsible use of quantitative indicators (COARA). In this way, these initiatives sound a cautionary note against the potential perils of fostering a "publish or perish" culture and call on academics and policymakers to explore alternative methods that integrate qualitative evaluation.
Study limitations
Our study is limited to exploring the perceptions of academics at a single Spanish university. An interesting line for future studies would therefore be to compare Spanish
universities and contrast their findings with our data. Future empirical research in Spain
could also explore some of the specific forms of misconduct observed in our study
(especially those for which we found differing trends from those seen in other contexts,
such as those linked to fraudulent authorship). However, despite these limitations, we
believe that the high participation rate in our study (49.03%), as well as the characteristics of this university (explained in the methodology section), make this case an
interesting example to extend understanding of possible trends shared by other Spanish universities. Note also that our study takes an initial approach to the perceptions of
various forms of misconduct for a specific university in Spain, which has received little
research attention to date.
On the other hand, when exploring potential causes of research misconduct (especially in closed questions), it would be advisable to broaden response options for a more comprehensive understanding of the problem. For example, the qualitative data suggested that
relationships with colleagues are becoming a central concern among academics. Future
studies would benefit from broadening response options to include aspects such as a negative research climate, mentoring, or inadequate oversight.
Conclusions
This study, carried out in a Spanish university, shows a significant perception of the proliferation of misconduct, with 71.68% at national level and 48.95% in the institution itself,
pointing to similar trends to those seen in other empirical studies. One striking result is that
the most commonly perceived misconduct is related to life with colleagues. More specifically, the study highlights the incidence of aspects that have attracted less research, such as
the use of personal influence (in assessment or review processes) and the abuse of power
over people in lower positions.
The causes behind the proliferation of misconduct are attributed in the closed
questions to personal ambitions and pressure from the evaluation system, while in
the open questions the issue of the evaluation system is the main focus. This evaluation model, based on the use of bibliometric indicators, is defined as one that
encourages ‘publish or perish’, which is having negative consequences for scientific
integrity. Thus, the researchers surveyed generally consider it necessary to go to the
root of the problem and overhaul the evaluation system and the rationale behind the
evaluation and measurement of research merit characteristic of this setting. A range
of possible alternatives were suggested to tackle this change, although the calls
for a return to qualitative evaluation criteria predominate. A clear indicator of this
demand in the Spanish context has also been reflected recently in the COARA initiative.7 Spain is one of the countries with the highest number of signatories, with a very prominent presence of universities, which could be explained by the prioritisation of quantitative criteria in the national evaluation system. The data from our study shed light on the motives behind these demands to transform or improve the scientific evaluation system. The measures implemented by ANECA at the end of 2023, approving new criteria for assessing sexenios (in which the weight of bibliometric indicators is diminished), suggest that a review of the assessment system is underway.8
7 See: https://coara.eu/agreement/signatories/
8 https://www.aneca.es/-/aneca-actualiza-los-principios-y-los-criterios-de-evaluaci%C3%B3n-de-los-sexenios-de-investigaci%C3%B3n
Acknowledgements The authors would like to thank Tere Mora from the Quality Office of the University for her technical support in developing the survey. We would also like to thank the Vice-Rectorate for
Research for supporting and promoting the survey at Universitat Jaume I (the result of which led to the
development of a Code of Best Practices in Research and Doctoral Studies). Finally, we are grateful to Mary
Savage for the English translation, and to the editor and reviewers for their thoughtful and constructive
comments.
Author Contributions Ramón Feenstra contributed to the study conception, design, planning and data collection. Statistical analysis and interpretation were performed by Emma Gómez. The qualitative analyses
were performed by Carlota Carretero and Ramón Feenstra. The first draft of the manuscript was written by
Ramón Feenstra. Carlota Carretero extensively revised and edited the manuscript. All authors have contributed to and have approved the final manuscript.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Data Availability The raw data from the survey are available at: https://repositori.uji.es/xmlui/handle/10234/
203683
Declarations
Ethics Approval The Universitat Jaume I Ethics Committee approved this research in April 2021 with the reference number CD/45/202.
Disclosure Statement The authors declare that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
ALLEA. (2017). The European Code of Conduct for Research Integrity. Retrieved from https://allea.org/
european-code-of-conduct-2017/
ALLEA. (2023). The European Code of Conduct for Research Integrity. Revised edition. Retrieved from
https://allea.org/code-of-conduct/
Aubert Bonn, N., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have
we (not) looked at? Journal of Empirical Research on Human Research Ethics, 14(4), 338–352.
https://doi.org/10.1177/1556264619858534
Buljan, I., Barać, L., & Marušić, A. (2018). How researchers perceive research misconduct in biomedicine and how they would prevent it: A qualitative study in a small scientific community. Accountability in Research, 25(4), 220–238. https://doi.org/10.1080/08989621.2018.1463162
Butler, L. (2004). What happens when funding is linked to publication counts? In H. Moed., W. Glänzel, &
U. Smoch (Eds.), Handbook of quantitative science and technology research (pp. 389–405). Dordrecht:
Springer.
Cañibano, C., Vilardell, I., Corona, C., & Benito-Amat, C. (2018). The evaluation of research excellence
and the dynamics of knowledge production in the humanities: The case of history in Spain. Science
and Public Policy, 45(6), 775–789. https://doi.org/10.1093/scipol/scy025
COARA. (2022). The Agreement on Reforming Research Assessment. Retrieved from https://coara.eu/
agreement/the-agreement-full-text/
Dal-Ré, R. (2020). Analysis of biomedical Spanish articles retracted between 1970 and 2018. Medicina
Clínica, 154(4), 125–130. https://doi.org/10.1016/j.medcle.2019.04.033
De Vries, R., Anderson, M. S., & Martinson, B. C. (2006). Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics, 1(1), 43–50. https://doi.org/10.1525/jer.2006.1.1.43
Delgado López-Cózar, E., Torres-Salinas, D., & Roldán-López, Á. (2007). El fraude en la ciencia: Reflexiones a partir del caso Hwang. El Profesional De La Información, 16(2), 143–150. https://doi.org/10.3145/epi.2007.mar.07
Delgado-López-Cózar, E., Ràfols, I., & Abadal, E. (2021). Carta: Por un cambio radical en la evaluación
de la investigación en España. El Profesional De La Información, 30(3), e300309.
Derrick, G. E., & Pavone, V. (2013). Democratising research evaluation: Achieving greater public
engagement with bibliometrics-informed peer review. Science and Public Policy, 40(5), 563–575.
https://doi.org/10.1093/scipol/sct007
DORA. (2012). San Francisco Declaration on Research Assessment. Retrieved from https://sfdora.org/read/
Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5), e5738. https://doi.org/10.1371/journal.pone.0005738
Fanelli, D., Costas, R., & Larivière, V. (2015). Misconduct policies, academic culture and career stage, not gender or pressures to publish, affect scientific integrity. PLoS ONE, 10(6), e0127556. https://doi.org/10.1371/journal.pone.0127556
Feenstra, R. A., & Delgado López-Cózar, E. (2023). The footprint of a metrics-based research evaluation system
on Spain’s philosophical scholarship: An analysis of researchers’ perceptions. Research Evaluation, 32(1),
32–46.
Feenstra, R. A., Delgado López-Cózar, E., & Pallarés-Domínguez, D. (2021). Research misconduct in the
fields of ethics and philosophy: Researchers’ perceptions in Spain. Science and Engineering Ethics,
27(1), 1.
Felaefel, M., Salem, M., Jaafar, R., Jassim, G., Edwards, H., Rashid-Doubell, F., Yousri, R., Ali, N., & Silverman, H. (2018). A cross-sectional survey study to assess prevalence and attitudes regarding research misconduct among investigators in the Middle East. Journal of Academic Ethics, 16(1), 71–87. https://doi.org/10.1007/s10805-017-9295-9
Fonseca-Mora, M. C., Tur-Viñes, V., & Gutiérrez-San Miguel, B. (2014). Ética y revistas científicas
españolas de Comunicación, Educación y Psicología: La percepción editora. Revista Española De
Documentación Científica, 37(4), e065. https://doi.org/10.3989/redc.2014.4.1151
Gilbert, F. J., & Denison, A. R. (2003). Research misconduct. Clinical Radiology, 58(7), 499–504.
https://doi.org/10.1016/S0009-9260(03)00176-4
Godecharle, S., Fieuws, S., Nemery, B., & Dierickx, K. (2018). Scientists still behaving badly? A survey within industry and universities. Science and Engineering Ethics, 24(6), 1697–1717. https://doi.org/10.1007/s11948-017-9957-4
Gopalakrishna, G., Ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in The Netherlands. PLoS ONE, 17(2), e0263023. https://doi.org/10.1371/journal.pone.0263023
Haven, T. L., Bouter, L. M., Smulders, Y. M., & Tijdink, J. K. (2019c). Perceived publication pressure
in Amsterdam: Survey of all disciplinary fields and academic ranks. PLoS ONE, 14(6), e0217931.
https://doi.org/10.1371/journal.pone.0217931
Haven, T. L., Tijdink, J. K., Martinson, B. C., & Bouter, L. M. (2019b). Perceptions of research integrity climate differ between academic ranks and disciplinary fields: Results from a survey among academic researchers in Amsterdam. PLoS ONE, 14(1), e0210599. https://doi.org/10.1371/journal.pone.0210599
Haven, T. L., Tijdink, J. K., Pasman, H. R., Widdershoven, G., Ter Riet, G., & Bouter, L. M. (2019a). Researchers’ perceptions of research misbehaviours: A mixed methods study among academic researchers in Amsterdam. Research Integrity and Peer Review. https://doi.org/10.1186/s41073-019-0081-7
Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251–
261. https://doi.org/10.1016/j.respol.2011.09.007
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for Research Metrics. Nature, 520, 429–431. https://doi.org/10.1038/520429a
Hofmann, B., Bredahl Jensen, L., Eriksen, M. B., Helgesson, G., Juth, N., & Holm, S. (2020). Research integrity among PhD students at the faculty of medicine: A comparison of three Scandinavian universities. Journal of Empirical Research on Human Research Ethics, 15(4), 320–329. https://doi.org/10.1177/1556264620929230
Hofmann, B., & Holm, S. (2019). Research integrity: Environment, experience, or ethos? Research Ethics, 15(3–4), 1–13. https://doi.org/10.1177/1747016119880844
Holtfreter, K., Reisig, M. D., Pratt, T. C., & Mays, R. D. (2019). The perceived causes of research misconduct among faculty members in the natural, social, and applied sciences. Studies in Higher Education. https://doi.org/10.1080/03075079.2019.1593352
Instituto Nacional de Estadística (INE). (2022). Última nota de prensa (24/11/2022): Estadísticas sobre actividades. https://www.ine.es/dyngs/INEbase/es/operacion.htm?c=Estadistica_C&cid=1254736176754&menu=ultiDatos&idp=1254735576669
Jefferson, T. (1998). Redundant publication in biomedical sciences: Scientific misconduct or necessity?
Science and Engineering Ethics, 4(2), 135–140. https://doi.org/10.1007/s11948-998-0043-9
Jiménez-Contreras, E., de Moya Anegón, F., & López-Cózar, E. D. (2003). The evolution of research
activity in Spain: The impact of the National Commission for the Evaluation of Research Activity
(CNEAI). Research Policy, 32(1), 123–142. https://doi.org/10.1016/S0048-7333(02)00008-2
Jiménez-Contreras, E., López-Cózar, E. D., Ruiz-Pérez, R., & Fernández, V. M. (2002). Impact-factor rewards affect Spanish research. Nature, 417(6892), 898. https://doi.org/10.1038/417898b
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Krstić, S. B. (2015). Research integrity practices from the perspective of early-career researchers. Science and Engineering Ethics, 21(5), 1181–1196. https://doi.org/10.1007/s11948-014-9607-z
Liao, Q. J., Zhang, Y. Y., Fan, Y. C., Zheng, M. H., Bai, Y., Eslick, G. D., He, X. X., Zhang, S. S., Xia, H., & He, H. (2018). Perceptions of Chinese biomedical researchers towards academic misconduct: A comparison between 2015 and 2010. Science and Engineering Ethics, 24(2), 629–645. https://doi.org/10.1007/s11948-017-9913-3
Ljubenković, A. M., Borovečki, A., Ćurković, M., Hofmann, B., & Holm, S. (2021). Survey on the research misconduct and questionable research practices of medical students, PhD students, and supervisors at the Zagreb School of Medicine in Croatia. Journal of Empirical Research on Human Research Ethics, 16(4), 435–449. https://doi.org/10.1177/15562646211033727
Maggio, L., Dong, T., Driessen, E., & Artino, A. (2019). Factors associated with scientific misconduct and questionable research practices in health professions education. Perspectives on Medical Education, 8, 74–82. https://doi.org/10.1007/s40037-019-0501-x
Marco-Cuenca, G., Salvador-Olivan, J. A., & Arquero-Avilés, R. (2019). Ética en la publicación científica biomédica. Revisión de las publicaciones retractadas en España. El Profesional De La Información, 28(2), e280222. https://doi.org/10.3145/epi.2019.mar.22
Marini, G. (2018). Tools of individual evaluation and prestige recognition in Spain: How sexenio ‘mints the golden coin of authority.’ European Journal of Higher Education, 8(2), 201–214. https://doi.org/10.1080/21568235.2018.1428649
Martin, B. R. (2013). Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment. Research Policy, 42(5), 1005–1014. https://doi.org/10.1016/j.respol.2013.03.011
Martinson, B., Anderson, M., & de Vries, R. (2005). Scientists behaving badly. Nature, 435, 737–738.
https://doi.org/10.1038/435737a
Ministerio de Universidades. (2022). Datos y Cifras del Sistema Universitario Español. Publicación 2021–2022. Secretaría General Técnica del Ministerio de Universidades. Retrieved from https://www.universidades.gob.es/wp-content/uploads/2022/11/Datos_y_Cifras_2021_22.pdf
Okonta, P., & Rossouw, T. (2013). Prevalence of scientific misconduct among a group of researchers in Nigeria. Developing World Bioethics, 13(3), 149–157. https://doi.org/10.1111/j.1471-8847.2012.00339.x
Okonta, P. I., & Rossouw, T. (2014). Misconduct in research: A descriptive survey of attitudes, perceptions and associated factors in a developing country. BMC Medical Ethics, 15, 1–8. https://doi.org/10.1186/1472-6939-15-25
Olabarrieta-Landa, L., Romero, A. C., Panyavin, I., & Arango-Lasprilla, J. C. (2017). Perception of ethical
misconduct by neuropsychology professionals in Spain. NeuroRehabilitation, 41(2), 527–538.
Palla, I. A., & Singson, M. (2022). How do researchers perceive research misbehaviors? A case study of
Indian researchers. Accountability in Research, 1–18. https://doi.org/10.1080/08989621.2022.2078712
Pupovac, V., & Fanelli, D. (2015). Scientists admitting to plagiarism: A meta-analysis of surveys. Science
and Engineering Ethics, 21(5), 1331–1352. https://doi.org/10.1007/s11948-014-9600-6
Pupovac, V., Prijić-Samaržija, S., & Petrovečki, M. (2017). Research misconduct in the Croatian scientific
community: A survey assessing the forms and characteristics of research misconduct. Science and
Engineering Ethics, 23(1), 165–181. https://doi.org/10.1007/s11948-016-9767-0
Santos-Ortega, A., Muñoz-Rodríguez, D., & Poveda-Rosa, M. (2015). “En cuerpo y alma”: Intensificación y precariedad en las condiciones de trabajo del profesorado universitario. Arxius de Ciències Socials, 32, 13–44.
Sistema de Información de las Universidades Valencianas Públicas (SIUVP). (2023). Universitat Jaume I de Castelló. Retrieved from http://www.siuvp.es/es/#
Stretton, S., Bramich, N. J., Keys, J. R., Monk, J. A., Ely, J. A., Haley, C., Woolley, M. J., & Woolley, K. L.
(2012). Publication misconduct and plagiarism retractions: A systematic, retrospective study. Current
Medical Research and Opinion, 28(10), 1575–1583. https://doi.org/10.1185/03007995.2012.728131
Tijdink, J. K., Verbeke, R., & Smulders, Y. M. (2014). Publication pressure and scientific misconduct in medical scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64–71. https://doi.org/10.1177/1556264614552421
Titus, S. L., Wells, J. A., & Rhoades, L. J. (2008). Repairing research integrity. Nature, 453(7198), 980–
982. https://doi.org/10.1038/453980a
Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science and Engineering Ethics, 27(4), 41. https://doi.org/10.1007/s11948-021-00314-9
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.