AGENZIA NAZIONALE DI VALUTAZIONE
DEL SISTEMA UNIVERSITARIO E DELLA RICERCA

Publications

  • Blasi B., Romagnosi S., Bonaccorsi A.
    (2018), pp. 361-392

    Do SSH Researchers Have a Third Mission (And Should They Have)?

    The notion of the third mission in SSH remains problematic, as does the concept of research impact. Several streams of critical literature have raised the concern that the notions of third mission or impact may limit the academic freedom of researchers, reduce their independence from market pressure, and impoverish the SSH's potential for critical thinking and unorthodox visioning. However, countries that have experienced selective cuts in research funding penalising SSH disciplines have seen efforts to make the hidden connections between SSH research and society more visible. This chapter reports on the debate and controversies surrounding this issue. For the first time, preliminary evidence on the Public Engagement activities of SSH scholars, taken from the large-scale assessment of the third mission of Italian departments and universities, is presented. The chapter argues not only that scholars in SSH do have a third mission, but also that they are no less engaged than their colleagues from STEM disciplines.

    DOI: 10.1007/978-3-319-68554-0_16

  • Ferruccio Biolcati-Rinaldi, Daniele Checchi, Silvia Salini, Matteo Turri
    (2017) ANVUR Working Paper , 2017/01

    Copertura, attendibilità e validità degli indicatori bibliometrici tratti da Google Scholar nel campo delle scienze politiche e sociali (CAVIB Scholar)

  • Antonella Basso, Ioana Galleron, Tiziana Lippiello, Geoffrey Williams
    (2017) ANVUR Working Paper , 2017/02

    The role of books in non-bibliometric areas (ROBINBA)

  • Alfio Ferrara, Stefano Montanelli, Stefano Verzillo
    (2017) ANVUR Working Paper , 2017/03

    EVA – Estrazione, validazione e analisi dei dati di Google Scholar per i settori non bibliometrici

  • Maria Teresa Biagetti, Marco Schaerf, Antonella Iacono, Antonella Trombone
    (2017) ANVUR Working Paper , 2017/04

    Verifica della disponibilità delle monografie attraverso i cataloghi delle biblioteche (VerDiMAC)

  • Ginevra Peruginelli, Tommaso Agnoloni, Sebastiano Faro, Mario Ragona
    (2017) ANVUR Working Paper , 2017/05

    Il progetto “OLTRE” e la valutazione delle monografie giuridiche

  • Giovanni Solimine, Carla Basili, Andrea Capaccioni, Chiara Faggiolani, Domenica Fioredistella Iezzi, Luca Lanzillo, Mario Mastrangelo, Giovanni Paoloni, Giovanna Spina
    (2017) ANVUR Working Paper , 2017/06

    For a Liable Evaluation of Book’s Role in Socio-Economic Sciences and Humanities: an International Comparison

  • Anfossi A, Ciolfi A, Costa F, Parisi G and Benedetto S
    (2016) Scientometrics

    Large-scale assessment of research outputs through a weighted combination of bibliometric indicators

    The paper describes a method for combining the number of citations of a publication with the relevance of its publishing journal (as measured by the Impact Factor or similar impact indicators) in order to rank the publication against the world scientific production in its specific subfield. The linear or non-linear combination of the two indicators is represented on the scatter plot of the papers in the subfield, so that the effect of a change in weights can be visualised immediately. The final rank of the papers is then obtained by partitioning the two-dimensional space with linear or higher-order curves. The procedure is intuitive and versatile, since it allows, after adjusting a few parameters, an automatic and calibrated assessment at the level of the subfield. The resulting evaluation is homogeneous across scientific domains and can be used to assess the quality of research at the departmental (or higher) level of aggregation. We apply this method, which is designed to be feasible on the scale of a national evaluation exercise and to be effective in terms of cost and time, to some instances of the Thomson Reuters Web of Science database, and discuss the results in view of what was recently done in Italy for the Evaluation of Research Quality exercise 2004-2010. We show how the main limitations of the bibliometric methodology used in that context can be easily overcome. (A minimal, hypothetical sketch of this kind of weighted-combination ranking is given after the publication list below.)

    DOI: 10.1007/s11192-016-1882-9

  • Bonaccorsi A and Cicero T
    (2016) Journal of Informetrics

    Nondeterministic ranking of university departments

  • Ferrara A and Bonaccorsi A
    (2016) Research Evaluation

    How robust is journal rating in Humanities and Social Sciences? Evidence from a large-scale, multi-method exercise

    This article reports on a large-scale exercise of classification of journals in the fields of Humanities and Social Sciences, carried out by the Italian Agency for the Evaluation of Universities and Research Institutes. After discussing at some length the controversies linked with journal classification and its impact, we compare this classification with the scores that completely independent assessors assigned, in the same period, to individual articles published in the same journals. The data refer to an important subset of disciplines covering History, Philosophy, Geography, Anthropology, Education, and Library Sciences, allowing for comparisons between scientific fields of different sizes, outlooks, and methods. Since the controversies surrounding the rating of journals focus on the difference between the container (the journal) and the content (the individual article), we addressed the following research questions: (1) Is journal rating, produced by an expert-based procedure, a good predictor of the quality of articles published in the journal? (2) To what extent do different panels of experts evaluating the same journals produce consistent ratings? (3) To what extent is the assessment of scientific societies on journal rating a good predictor of the quality of articles published in those journals? (4) Are there systematic biases in the peer review of articles and in the expert-based journal rating? We find that journal rating is a legitimate and robust assessment exercise, as long as it is managed carefully and cautiously and is used to evaluate aggregates of researchers rather than individual researchers.

    DOI: 10.1093/reseval/rvv048
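
The weighted-combination method summarised in the Scientometrics abstract above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not the agency's actual implementation: it combines a publication's citation percentile and its journal-impact percentile with adjustable linear weights and partitions the resulting score into merit classes with straight cut-offs, mirroring the idea of partitioning the two-dimensional scatter plot with linear curves. All field names, weights, thresholds, and class labels are assumptions made for illustration.

# Minimal illustrative sketch (assumed names and values, not ANVUR's code):
# rank publications in a subfield by a weighted linear combination of two
# indicators and assign merit classes with linear cut-offs.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    citation_percentile: float   # percentile rank of citations within the subfield (0-100)
    journal_percentile: float    # percentile rank of the journal's impact indicator (0-100)

def combined_score(paper: Paper, w_cit: float = 0.6, w_jour: float = 0.4) -> float:
    """Linear combination of the two percentile indicators; the weights are
    the parameters one would calibrate per subfield."""
    return w_cit * paper.citation_percentile + w_jour * paper.journal_percentile

def merit_class(score: float, thresholds=(80, 60, 40)) -> str:
    """Partition the combined score with straight cut-offs, which corresponds
    to cutting the (citations, journal impact) plane with straight lines."""
    if score >= thresholds[0]:
        return "excellent"
    if score >= thresholds[1]:
        return "good"
    if score >= thresholds[2]:
        return "fair"
    return "limited"

papers = [
    Paper("A", citation_percentile=95, journal_percentile=70),
    Paper("B", citation_percentile=40, journal_percentile=85),
    Paper("C", citation_percentile=20, journal_percentile=30),
]

for p in sorted(papers, key=combined_score, reverse=True):
    s = combined_score(p)
    print(f"{p.title}: score={s:.1f}, class={merit_class(s)}")

Non-linear combinations or higher-order partition curves, as mentioned in the abstract, would only change the score function and the cut-off rule; the overall ranking procedure stays the same.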
