- J Margolis.
- Science. 1967 Mar 10;155(3767):1213-9.
Abstract

Evaluation by means of citation patterns can be successful only insofar as published papers and their bibliographies reflect scientific activity and nothing else. Such an innocent description is becoming less and less tenable. The present scientific explosion gave rise to more than a proportional publication explosion, which not only reflects the scientific explosion but has its own dynamics and vicious circles. Publication of results is probably the main means of accomplishing the almost impossible task of accounting for time and money spent on research. Inevitably, this puts a premium on quantity at the expense of quality, and, as with any other type of inflation, the problem worsens: the more papers are written, the less they count for and the greater is the pressure to publish more. What makes matters worse is the fact that the sheer volume of the "literature" makes it increasingly difficult to separate what is worthwhile from the rest. Critical reviews have become somewhat of a rarity, and editorial judgment is usually relegated to referees, who are contemporaries and, perhaps, competitors of the authors, a situation which has its own undesirable implications (11, 18). It requires little imagination to discover other vicious circles, all arising from distortion of the primary reasons for publishing the results of scientific inquiry.

There are, it is true, signs of adjustment to this crisis, partly due to some easing of the pressure to publish at all costs, and partly due to the readers' changing attitudes toward the flood of publications. An increasing amount of research is now being carried out in the form of collective projects in large institutions where publication is no longer the standard method of accounting for individual work. At the same time there is apparent an increasing tendency for scientific journals to polarize into the relatively few leading ones which carry important information and the many subsidiary journals which serve as vehicles for interim local accounting and, in a way, substitute for detailed intradepartmental reports. This division is a result not of some arbitrary decree but of normal competition between journals, as a result of which, however, the strong usually get stronger and the weak get weaker. Were it not for these changes, and also for a striking improvement in abstracting, indexing, and alerting services, most research workers would have found long ago that, even in their own specialized fields, new information is accumulating faster than it can be sorted out. These developments can provide only a temporary reprieve so long as there remains a strong incentive to publish the greatest possible number of papers.

A new scale of values based on citations is by no means infallible or, in many cases, even fair, but at least it provides an alternative to the existing one, which is at the root of the crisis. It might, of course, be asked whether wide acceptance of such new standards would not lead to deliberate abuses. A little reflection shows that the system is less open to manipulation than might appear. First, the referees are expected to see to it that the submitted papers cite work which is pertinent to the subject. An increased awareness of the usefulness of citation indexing as a tool for retrieval and evaluation will make this aspect of refereeing more important, and what now passes for minor carelessness or discourtesy could easily come to be regarded as serious malpractice. Second, as noted above, careful selection of references is in the author's own interest, because it helps him to reach his readers. There is, therefore, some room for hope that healthy feedback in the system will tend to keep it viable. At the basis of this hope lies the supposition that, in the long run, only good work can ensure recognition.

As Martyn (2) has pointed out, as an information-retrieval method, citation indexing is rather "noisy." The word noisy may apply even more to the problem of evaluation. Whereas in information retrieval much of the unwanted information can be filtered out by a suitable search strategy (2, 6), this is not so easy to do for the purpose of evaluation, because a simple descendence relationship between papers is still an ideal far removed from actuality (7). The situation would be much better if we could at will exclude all citations which do not indicate real indebtedness. A scheme of citation relationship indicators, first mentioned by Garfield (12) and elaborated by Lipetz (17), would be a help, but, even if it were technically feasible, to provide such indicators would greatly add to the production costs of the Index.

Another possible way to minimize the effects of "noise" is to increase the size of the samples on which the reckoning is based. Now that research has become a rather popular occupation, it seems that a kind of public vote may have to be accepted as a factor in evaluation. Since this is the case, there is something to be said for extending the "franchise" to minimize accidental effects. An index which attempted to process all scientific publications would be several times the size of the present Index, and, what is more, it would not necessarily be an improvement as a tool for information retrieval, because most of the significant work is already concentrated in the present Index. Whether this attempt will ever be considered worthwhile remains primarily a matter of policy and economics. In the meantime there is an urgent need for more experience with the existing services.

It is not the purpose of this article to advocate evaluation of scientific work by some kind of public opinion poll; its purpose is to recognize a possible trend in this direction. Any judgment by public acclaim is subject to obvious fallacies, but we must not be carried away by the analogy to the Stock Exchange or to electoral practices. The fact that, in this case, the "public" consists of authors whose contributions are generally linked creates quite a new pattern of organization. In this discussion some of the aspects of this pattern have been explored through analogy to idealized genetic or mechanical network models, but the very uniqueness of the system, with its many self-organizing ramifications, makes it a new field which deserves close study, since these developments may have profound effects on the future of scientific communication.