8.1.5. Remaining breakout discussions: Taxonomy of contributorship/guidelines for software for
tenure review. At the start of the discussion, the breakout group raised the important
observation that commonly accepted publication practices differ widely across research fields.
In domains that have historically relied on large groups of researchers collaborating towards
a common goal (e.g., high-energy physics, astronomy), publications often have tens or even hundreds
of co-authors (with some papers in experimental particle physics having over 3000). In other
domains, the number of co-authors is typically much smaller with, in some cases, even a preference
for single-author papers. Similarly, the various platforms for publication are valued differently in
different domains. Most commonly, publications in peer-reviewed scientific journals are regarded
as the most important and most impactful. However, in certain domains, especially in Computer
Science, many researchers regard conference proceedings as their prime publication target.
It is often suggested that this difference stems from the rapid pace of development in information
technology, which traditional peer-reviewed journals cannot match. Whatever the causes, any
useful taxonomy of contributorship or guideline for tenure review should take such differences into
account.
Despite these differences, and despite the fact that software has often taken on the role of a proper,
albeit less tangible, scientific research instrument, neither the software nor its creators are commonly
credited as part of a scientific publication. The group acknowledged the need for more
recognition for the creators of such software instruments, and indicated a number of possible path-
ways. First and foremost, domain scientists must be made aware of the important role of software,
and include the developers as co-authors of papers. A second approach is to fully embrace an
open badging infrastructure (such as Mozilla’s Open Badges), where a badge is a free, transferable,
evidence-based indicator of an accomplishment, skill, quality, or interest. A third approach is for
the scientific community to support the increasing momentum of peer-reviewed journals specialized
in the open source/open access publication of scientific research software, such as Computer Physics
Communications, F1000Research, Journal of Open Research Software, and SoftwareX.
Recognizing the publication of research software as a proper scientific contribution raises several
important but as yet unresolved questions. For example, is the number of users of the
software a relevant measure of impact? What standards of coding quality must be followed in order
to justify publication and hence recognition? Should the release of a new version of the software
be eligible for a new publication; if so, under what conditions? And above all: should software
publications be valued in the same way as traditional scientific publications? Or is there a need for
new measures of productivity and impact?
In part, the answers will come from the scientific community at large, as a natural consequence
of growing awareness and a changing mindset. Some of the answers, however, should also be based
on decades of experience in (and developing standards for) implementing, maintaining, refactoring,
documenting, testing, and deploying software instruments in scientific research. Care should be
taken, however, not to impose such standards uniformly on all domains right from the start.
Forerunners should serve as examples, but should not scare away domains that have based their
progress on much less advanced methods of software carpentry. Nevertheless, proper guidelines are
needed, which eventually should be followed across all domains. The group also recognized that
funding bodies, universities, and publishers should eventually demand that research projects follow
such guidelines and implement a proper software sustainability plan.
To enable a form of standardized crediting for developers of research software, the group proposed
to work towards a taxonomy for software-based contributorship. The taxonomy should be derived
from, or extend, existing taxonomies for research impact and contributorship, such as those defined
by CASRAI (in particular the Wellcome-Harvard contributorship taxonomy22), VIVO, or ISNI.
An interesting measure of impact raised by the group was betweenness centrality, an indicator
of a person’s centrality (and hence importance) in a scientific collaboration network. Developers
of research software are expected to often play such a central role.
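As a concrete illustration, betweenness centrality counts how often a person lies on the shortest paths between other pairs in a co-authorship graph. The sketch below computes it with Brandes' algorithm (a standard method for unweighted graphs) on a small, entirely hypothetical collaboration network in which a research software engineer (`rse`) is the only link between two otherwise separate author groups; all names and the graph itself are illustrative assumptions, not data from the report.

```python
from collections import deque

def betweenness_centrality(graph):
    """Brandes' algorithm for an unweighted, undirected graph.

    graph: dict mapping each node to a set of its neighbours.
    Returns a dict node -> (unnormalized) betweenness score.
    """
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # Phase 1: BFS from s, counting shortest paths (sigma).
        stack, queue = [], deque([s])
        pred = {v: [] for v in graph}           # predecessors on shortest paths
        sigma = dict.fromkeys(graph, 0); sigma[s] = 1
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:                 # w discovered for the first time
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:      # shortest path to w passes via v
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Phase 2: accumulate path dependencies in reverse BFS order.
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair was counted from both endpoints, so halve.
    return {v: score / 2.0 for v, score in bc.items()}

# Hypothetical co-authorship network: 'rse' bridges two author groups.
collab = {
    "alice": {"bob", "rse"},
    "bob":   {"alice", "rse"},
    "rse":   {"alice", "bob", "carol", "dave"},
    "carol": {"rse", "dave"},
    "dave":  {"rse", "carol"},
}
scores = betweenness_centrality(collab)
# 'rse' lies on every shortest path between the two groups: scores["rse"] == 4.0,
# while every other author scores 0.0.
```

In this toy network the software engineer is the sole bridge between the groups and therefore receives the only non-zero score, which is exactly the kind of structural importance the group suggested such a measure would surface.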