This month, the team enjoyed an atypical reading club under the trees and next to the fountain of one of our Faculty’s beautiful gardens. In this picnic edition, we discussed the classic 1983 paper by Martin and Irvine, «Assessing Basic Research: Some Partial Indicators of Scientific Progress in Radio Astronomy». It was a very interesting meeting, as the topics presented in the article, especially the partial indicators proposed by the authors, led to a broader discussion about the difficulties of measuring social phenomena, the tension between qualitative and quantitative approaches, and the implications for scientific evaluation and public policy, among other things.

Don’t miss the abstract of this bibliometrics classic:

«As the costs of certain types of scientific research have escalated and as growth rates in overall national science budgets have declined, so the need for an explicit science policy has grown more urgent. In order to establish priorities between research groups competing for scarce funds, one of the most important pieces of information needed by science policy-makers is an assessment of those groups’ recent scientific performance. This paper suggests a method for evaluating that performance. After reviewing the literature on scientific assessment, we argue that, while there are no simple measures of the contributions to scientific knowledge made by scientists, there are a number of ‘partial indicators’ — that is, variables determined partly by the magnitude of the particular contributions, and partly by ‘other factors’. If the partial indicators are to yield reliable results, then the influence of these ‘other factors’ must be minimised. This is the aim of the method of ‘converging partial indicators’ proposed in this paper. We argue that the method overcomes many of the problems encountered in previous work on scientific assessment by incorporating the following elements: (1) the indicators are applied to research groups rather than individual scientists; (2) the indicators based on citations are seen as reflecting the impact, rather than the quality or importance, of the research work; (3) a range of indicators are employed, each of which focusses on different aspects of a group’s performance; (4) the indicators are applied to matched groups, comparing ‘like’ with ‘like’ as far as possible; (5) because of the imperfect or partial nature of the indicators, only in those cases where they yield convergent results can it be assumed that the influence of the ‘other factors’ has been kept relatively small (i.e. the matching of the groups has been largely successful), and that the indicators therefore provide a reasonably reliable estimate of the contribution to scientific progress made by different research groups. In an empirical study of four radio astronomy observatories, the method of converging partial indicators is tested, and several of the indicators (publications per researcher, citations per paper, numbers of highly cited papers, and peer evaluation) are found to give fairly consistent results. The results are of relevance to two questions: (a) can basic research be assessed? (b) more specifically, can significant differences in the research performance of radio astronomy centres be identified? We would maintain that the evidence presented in this paper is sufficient to justify a positive answer to both these questions, and hence to show that the method of converging partial indicators can yield information useful to science policy-makers.»

Check out the full paper and cite as: Martin, B.R.; Irvine, J. (1983). Assessing Basic Research: Some Partial Indicators of Scientific Progress in Radio Astronomy. Research Policy, 12(2), 61-90. http://dx.doi.org/10.1016/0048-7333(83)90005-7
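For anyone who wants to play with the core idea, here is a minimal Python sketch of the intuition behind converging partial indicators. All the numbers and observatory names below are invented for illustration (none of this is the paper’s actual data or code): each indicator produces a ranking of the groups, and agreement between the rankings is read as convergence.

```python
# Illustrative sketch only: toy numbers for four hypothetical observatories.
# It computes three of the paper's partial indicators per research group and
# checks whether they rank the groups consistently, which is the intuition
# behind the method of "converging partial indicators".

from itertools import combinations

# Hypothetical inputs per group: paper counts, total citations,
# number of researchers, and count of highly cited papers.
groups = {
    "Observatory A": {"papers": 120, "citations": 960,  "staff": 30, "highly_cited": 9},
    "Observatory B": {"papers": 80,  "citations": 400,  "staff": 25, "highly_cited": 3},
    "Observatory C": {"papers": 150, "citations": 1500, "staff": 35, "highly_cited": 14},
    "Observatory D": {"papers": 60,  "citations": 240,  "staff": 20, "highly_cited": 2},
}

# Three of the partial indicators mentioned in the abstract.
indicators = {
    "papers_per_researcher": lambda g: g["papers"] / g["staff"],
    "citations_per_paper":   lambda g: g["citations"] / g["papers"],
    "highly_cited_papers":   lambda g: g["highly_cited"],
}

def ranking(metric):
    """Group names ordered from best to worst on one indicator."""
    return sorted(groups, key=lambda name: metric(groups[name]), reverse=True)

rankings = {name: ranking(metric) for name, metric in indicators.items()}
for name, order in rankings.items():
    print(f"{name:>22}: {' > '.join(order)}")

# Crude convergence check: Spearman rank correlation between each pair of
# indicator rankings. Values near 1 suggest the "other factors" are not
# dominating; divergent rankings would warn against drawing firm conclusions.
def spearman(order_a, order_b):
    n = len(order_a)
    d2 = sum((order_a.index(g) - order_b.index(g)) ** 2 for g in order_a)
    return 1 - 6 * d2 / (n * (n**2 - 1))

for (a, ra), (b, rb) in combinations(rankings.items(), 2):
    print(f"{a} vs {b}: rho = {spearman(ra, rb):.2f}")
```

With these toy figures all three rankings coincide, so the indicators “converge”; in the paper, only such convergent cases are taken as reasonably reliable evidence about a group’s contribution.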
