Viewpoint
David Lorge Parnas

Stop the Numbers Game

Counting papers slows the rate of scientific progress.

As a senior researcher, I am saddened to see funding agencies, department heads, deans, and promotion committees encouraging younger researchers to do shallow research. As a reader of what should be serious scientific journals, I am annoyed to see the computer science literature being polluted by more and more papers of less and less scientific value. As one who has often served as an editor or referee, I am offended by discussions that imply that the journal is there to serve the authors rather than the readers. Other readers of scientific journals should be similarly outraged and demand change.

The cause of all of these manifestations is the widespread policy of measuring researchers by the number of papers they publish, rather than by the correctness, importance, real novelty, or relevance of their contributions. The widespread practice of counting publications without reading and judging them is fundamentally flawed for a number of reasons:

It encourages superficial research. Those who publish many hastily written, shallow (and often incorrect) papers will rank higher than those who invest years of careful work studying important problems; that is, counting measures quantity rather than quality or value;

It encourages overly large groups. Academics with large groups, who often spend little time with each student but put their name on all of their students’ papers, will rank above those who work intensively with a few students;

It encourages repetition. Researchers who apply the “copy, paste, disguise” paradigm to publish the same ideas in many conferences and journals will score higher than those who write only when they have new ideas or results to report;

It encourages small, insignificant studies. Those who publish “empirical studies” based on brief observations of three or four students will rank higher than those who conduct long-term, carefully controlled experiments; and

It rewards publication of half-baked ideas. Researchers who describe languages and systems but do not actually build and use them will rank higher than those who implement and experiment.

Paper-count-based ranking schemes are often defended as “objective.” They are also less time-consuming and less expensive than procedures that involve careful reading. Unfortunately, an objective measure of contribution is frequently contribution-independent.

Proponents of count-based evaluation argue that only good papers get into the “best” journals, and there is no need to read them again. Anyone with experience as an editor knows there is tremendous variation in the seriousness, objectivity, and care with which referees perform their task. They often contradict one another or make errors themselves.
Many editors don’t bother to investigate and resolve; they simply compute an average score and pass the reviews to the author. Papers rejected by one conference or journal are often accepted (unchanged) by another. Papers that were initially rejected have been known to win prizes later, and some accepted papers turn out to be wrong. Even careful referees and editors review only one paper at a time and may not know that an author has published many papers, under different titles and abstracts, based on the same work. Trusting such a process is folly.

Measuring productivity by counting the number of published papers slows scientific progress; to increase their score, researchers must avoid tackling the tough problems and problems that will require years of dedicated work and instead work on easier ones.

Evaluation by counting the number of published papers corrupts our scientists; they learn to “play the game by the rules.” Knowing that only the count matters, they use the following tactics:

Publishing pacts. “I’ll add your name to mine if you put mine on yours.” This is highly effective when four to six researchers play as a team. On occasion, I have met “authors” who never read a paper they purportedly wrote;

Clique building. Researchers form small groups that use special jargon to discuss a narrow topic that is just broad enough to support a conference series and a journal. They then publish papers “from the clique for the clique.” Formation of these cliques is bad for scientific progress because it leads to poor communication and duplication, even while boosting the apparent productivity of clique members;

Anything goes. Researchers publish things they know may be wrong, old, or irrelevant; they know that as long as the paper gets past some set of referees, it counts;

Bespoke research. Researchers monitor conference and special-issue announcements and “custom tailor” papers (usually from “pre-cut” parts) to fit the call-for-papers;

Minimum publishable increment (MPI). After completing a substantial study, many researchers divide the results to produce as many publishable papers as possible. Each one contains just enough new information to justify publication but may repeat the overall motivation and background. After all the MPIs are published, the authors can publish the original work as a “major review.” Science would advance more quickly with just one publication; and

Organizing workshops and conferences. Initiating specialized workshops and conferences creates a venue where the organizer’s papers are almost certain to be published; the proceedings are often published later as a book with a “foreword,” giving the organizer a total of three more publications: conference paper, book chapter, and foreword.

One sees the result of these games when attending conferences. People come to talk, not to listen. Presentations are often made to nearly empty halls. Some never attend at all.

Some evaluators try to ameliorate the obvious faults in a publication-counting system by also counting citations. Here too, the failure to read is fatal. Some citations are negative. Others are included only to show that the topic is of interest to someone else or to prove that the author knows the literature. Sometimes authors cite papers they have not studied; we occasionally see irrelevant citations to papers with titles that sound relevant but are not.


One can observe researchers improving both their publication count and citation count with a sequence of papers, each new one correcting an error in the hastily written one that preceded it. Finally, the importance of some papers is not recognized for many years. A low citation count may indicate a paper that is so innovative it was not initially understood.

Accurate researcher evaluation requires that several qualified evaluators read the paper, digest it, and prepare a summary that explains how the author’s work fits some greater picture. The summaries must then be discussed carefully by those who did the evaluations, as well as with the researcher being evaluated. This takes time (external evaluators may have to be compensated for that time), but the investment is essential for an accurate evaluation.

A recent article [1], which clearly described the methods used by many universities and funding agencies to evaluate researchers, offered software to support these methods. Such support will only make things worse. Automated counting makes it even more likely that the tactics I’ve described here will go undetected.

One fundamental counting problem raised in [1] is the allocation of credit for multiple-author papers. This is difficult because of the many author-ordering rules in use, including:

Group leaders are listed first, whether or not they contributed;
Group leaders are listed last, whether or not they contributed;
Authors are listed in order of contribution, greatest contribution first;
Authors are listed by “arrival,” that is, the one who wrote the first draft is first; and
Authors are listed alphabetically.

Attributing appropriate credit to individual authors requires either asking them (and believing their answers) or comparing the paper with previous papers by the authors. A paper occasionally contributes so much that several authors deserve full credit. No mechanical solution to this problem can be trusted. It was suggested in [1] that attention be restricted to a set of “leading” journals primarily distinguished by their broad coverage. However, there are often more substantive and important contributions in specialized journals and conferences. Even “secondary” journals publish papers that trigger an important new line of inquiry or contribute data that leads to a major result.

Only if experts read each paper carefully can they determine how an author’s papers have contributed to their field. This is especially true in computer science, where new terms frequently replace similar concepts with new names. The title of a paper may make old ideas sound original. Paper counting cannot reveal these cases.

Sadly, the present evaluation system is self-perpetuating. Those who are highly rated by the system are frequently asked to rate each other and others; they are unlikely to want to change a system that gave them their status. Administrators often act as if only numbers count, probably because their own evaluators do the same.

Those who want to see computer science progress and contribute to the society that pays for it must object to rating-by-counting schemes every time they see one being applied. If you get a letter of recommendation that counts numbers of publications, rather than commenting substantively on a candidate’s contributions, ignore it; it states only what anyone can see. When serving on recruiting, promotion, or grant-award committees, read the candidate’s papers and evaluate the contents carefully. Insist that others do the same.

Reference
1. Ren, J. and Taylor, R. Automatic and versatile publications ranking for research institutions and scholars. Commun. ACM 50, 6 (June 2007), 81–85.

David Lorge Parnas is Professor of Software Engineering and Director of the Software Quality Research Laboratory in the Department of Computer Science and Information Systems at the University of Limerick, Limerick, Ireland.

I am grateful for suggestions made by Roger Downer and Pierre-Jacques Courtois after reading an early version of this “Viewpoint.” Serious scientists, they did not ask to be co-authors.

© 2007 ACM 0001-0782/07/1100 $5.00

