The notion of the third mission in SSH remains problematic, as does the concept of research impact. Several streams of critical literature have raised the concern that invoking the third mission, or impact, may limit the academic freedom of researchers, reduce their independence from market pressures, and impoverish SSH's potential for critical thinking and unorthodox visioning. However, countries that have experienced selective cuts in research funding penalising SSH disciplines have seen efforts to make the hidden connections between SSH research and society more visible. This chapter reports on the debate and controversies surrounding this issue. For the first time, preliminary evidence on the Public Engagement activities of SSH scholars, taken from the large-scale assessment of the third mission of Italian departments and universities, is presented. This chapter argues not only that scholars in SSH do have a third mission, but that they are no less engaged than their colleagues in STEM disciplines.
Giovanni Solimine, Carla Basili, Andrea Capaccioni, Chiara Faggiolani, Domenica Fioredistella Iezzi, Luca Lanzillo, Mario Mastrangelo, Giovanni Paoloni, Giovanna Spina
ANVUR Working Paper
The paper describes a method to combine information on the number of citations of a publication and the relevance of its publishing journal (as measured by the Impact Factor or similar impact indicators) in order to rank it with respect to the world scientific production in the specific subfield. The linear or non-linear combination of the two indicators is represented on a scatter plot of the papers in the specific subfield, so that the effect of a change in weights can be visualized immediately. The final rank of the papers is then obtained by partitioning the two-dimensional space through linear or higher-order curves. The procedure is intuitive and versatile since it allows, after adjusting a few parameters, an automatic and calibrated assessment at the level of the subfield. The derived evaluation is homogeneous across scientific domains and can be used to assess the quality of research at the departmental (or higher) level of aggregation. We apply this method, which is designed to be feasible on a scale typical of a national evaluation exercise and to be effective in terms of cost and time, to some instances of the Thomson Reuters Web of Science database, and discuss the results in view of what was done recently in Italy for the Evaluation of Research Quality exercise 2004-2010. We show how the main limitations of the bibliometric methodology used in that context can easily be overcome.
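The core idea of the abstract above can be sketched in a few lines of code. This is a minimal illustrative reconstruction, not the authors' implementation: the function names, the percentile scale, the weight `w`, and the class thresholds are all assumptions chosen for the example. Each paper is placed in the two-dimensional (citation indicator, journal-impact indicator) plane; a weighted linear combination of the two percentiles defines straight lines that partition the plane into merit classes.

```python
# Hypothetical sketch of the two-indicator ranking scheme.
# Each paper carries two subfield-normalized percentiles (0-100):
# its citation count and its journal's impact indicator.
# A linear combination partitions the 2D plane into merit classes;
# changing w tilts the partition lines, as on the scatter plot
# described in the abstract. Weights and thresholds are illustrative.

def combined_score(cit_pct: float, jif_pct: float, w: float = 0.5) -> float:
    """Weighted linear combination of the two percentile indicators."""
    return w * cit_pct + (1 - w) * jif_pct

def rank_class(score: float, thresholds=(80, 60, 40)) -> str:
    """Partition the combined score into merit classes A > B > C > D."""
    for cls, t in zip("ABC", thresholds):
        if score >= t:
            return cls
    return "D"

# Toy data: paper id -> (citation percentile, journal-impact percentile)
papers = {"p1": (95, 70), "p2": (50, 85), "p3": (20, 30)}
classes = {pid: rank_class(combined_score(c, j))
           for pid, (c, j) in papers.items()}
```

With equal weights, a paper at (95, 70) scores 82.5 and lands in class A, while one at (20, 30) scores 25 and lands in class D; raising `w` would favour highly cited papers in lower-impact journals, which is the calibration lever the abstract refers to.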
This article reports on a large-scale exercise of classification of journals in the fields of Humanities and Social Sciences, carried out by the Italian Agency for the Evaluation of Universities and Research Institutes. After discussing at some length the controversies linked with journal classification and its impact, we endeavor to compare this classification with the scores that individual articles published in the same journals were assigned by completely independent assessors in the same period of time. The data refer to an important subset of disciplines covering History, Philosophy, Geography, Anthropology, Education, and Library Sciences, allowing for comparisons between scientific fields of different sizes, outlooks, and methods. As the controversies surrounding the rating of journals focus on the difference between the container (the journal) and the content (the individual article), we addressed the following research questions: (1) Is journal rating, produced by an expert-based procedure, a good predictor of the quality of articles published in the journal? (2) To what extent do different panels of experts evaluating the same journals produce consistent ratings? (3) To what extent is the assessment of scientific societies on journal rating a good predictor of the quality of articles published in journals? (4) Are there systematic biases in the peer review of articles and in the expert-based journal rating? We find that journal rating is a legitimate and robust assessment exercise, as long as it is managed carefully and cautiously and is used to evaluate aggregates of researchers rather than individual researchers.