
AGENZIA NAZIONALE DI VALUTAZIONE
DEL SISTEMA UNIVERSITARIO E DELLA RICERCA


Publications

  • Bonaccorsi A and Cicero T
    (2016) Journal of Informetrics

    Nondeterministic ranking of university departments

  • Ferrara A and Bonaccorsi A
    (2016) Research Evaluation

    How robust is journal rating in Humanities and Social Sciences? Evidence from a large-scale, multi-method exercise

    This article reports on a large-scale exercise in the classification of journals in the Humanities and Social Sciences, carried out by the Italian Agency for the Evaluation of Universities and Research Institutes. After discussing at some length the controversies linked with journal classification and its impact, we compare this classification with the scores that individual articles published in the same journals were assigned by completely independent assessors in the same period. The data refer to an important subset of disciplines covering History, Philosophy, Geography, Anthropology, Education, and Library Sciences, allowing for comparisons between scientific fields of different sizes, outlooks, and methods. As the controversies surrounding the rating of journals focus on the difference between the container (the journal) and the content (the individual article), we address the following research questions: (1) Is journal rating, produced by an expert-based procedure, a good predictor of the quality of articles published in the journal? (2) To what extent do different panels of experts evaluating the same journals produce consistent ratings? (3) To what extent is the assessment of scientific societies on journal rating a good predictor of the quality of articles published in journals? (4) Are there systematic biases in the peer review of articles and in the expert-based journal rating? We find that journal rating is a legitimate and robust assessment exercise, as long as it is managed carefully and cautiously and is used to evaluate aggregates of researchers rather than individual researchers.

    DOI: 10.1093/reseval/rvv048

  • Abramo G, Cicero T and D'Angelo CA
    (2015) Journal of Informetrics

    Should the research performance of scientists be distinguished by gender?

    The literature on gender differences in research performance seems to suggest a gap between men and women, with the former outperforming the latter. Whether or not one agrees with the various factors proposed to explain the phenomenon, it is worthwhile to verify whether comparing performance within each gender, rather than without distinction, yields significantly different ranking lists. If some structural factor imposed a performance penalty on female researchers relative to their male peers, then under conditions of equal capacity between men and women, any comparative evaluation of individual performance that failed to account for gender differences would distort the judgments in favor of men. In this work we measure the extent of differences in rank between the two methods of comparing performance in each field of the hard sciences: for professors in the Italian university system, we compare the distributions of research performance for men and women, and subsequently the ranking lists with and without distinction by gender. The results are of interest for optimizing selection in recruitment, career advancement, and incentive schemes.

    DOI: 10.1016/j.joi.2014.11.002

  • Ancaiani A, Anfossi AF, Barbara A, Benedetto S, Blasi B, Carletti V, Cicero T, Ciolfi A, Costa F, Colizza G, Costantini M, di Cristina F, Ferrara A, Lacatena RM, Malgarini M, Mazzotta I, Nappi CA, Romagnosi S and Sileoni S
    (2015) Research Evaluation, 24(3): 242-255

    Evaluating scientific research in Italy: The 2004-10 research evaluation exercise

    The Italian Research Evaluation assessment for the period 2004-10 (VQR 2004-10) analyzed almost 185,000 articles, books, patents, and other scientific outputs submitted for evaluation by Italian universities and other public research bodies. This article describes the main features of the exercise, introducing its legal framework and the criteria used for evaluation. The innovative evaluation methodology, based on a combination of peer review and bibliometric methods, is discussed, and indicators for assessing the quality of participating research bodies are derived accordingly. The article also presents the main results at the university level, seeking to understand the relationship between research quality and university characteristics such as location, size, age, scientific specialization, and funding.

    DOI: 10.1093/reseval/rvv008

  • Bertocchi G, Gambardella A, Jappelli T, Nappi CA and Peracchi F
    (2015) Research Policy, 44(2): 451-466

    Bibliometric evaluation vs. informed peer review: Evidence from Italy

    A relevant question for the organization of large-scale research assessments is whether bibliometric evaluation and informed peer review yield similar results. In this paper, we draw on the experience of the panel that evaluated Italian research in Economics, Management and Statistics during the national assessment exercise (VQR) for the period 2004-2010. We exploit the unique opportunity of studying a sample of 590 journal articles randomly drawn from a population of 5,681 journal articles (out of nearly 12,000 journal and non-journal publications), which the panel evaluated both by bibliometric analysis and by informed peer review. In the full sample we find fair to good agreement between informed peer review and bibliometric analysis, and no statistical bias between the two. We then discuss the nature, implications, and limitations of this correlation.

    DOI: 10.1016/j.respol.2014.08.004

  • Blasi B
    (2015) Sociologia e Politiche Sociali, 18(2): 35

    Severità di giudizio: dinamiche valutative nell’area della sociologia nella VQR 2004-2010

    DOI: 10.3280/SP2015-002002

  • Bonaccorsi A and Cicero T
    (2015) Journal of the Association for Information Science and Technology

    Distributed or concentrated research excellence? Evidence from a large-scale research assessment exercise

    DOI: 10.1002/asi.23539

  • Bonaccorsi A, Cicero T, Ferrara A and Malgarini M
    (2015) F1000Research, 4: 196

    Journal ratings as predictors of article quality in Arts, Humanities and Social Sciences: an analysis based on the Italian Research Evaluation Exercise

    The aim of this paper is to understand whether the probability of receiving positive peer reviews is influenced by having published in an independently assessed, high-ranking journal: we interpret a positive relationship between peer evaluation and journal ranking as evidence that journal ratings are good predictors of article quality. The analysis is based on a large dataset of over 11,500 research articles published in Italy in the period 2004-2010 in the areas of architecture, arts and humanities, history and philosophy, law, sociology, and political sciences. These articles were scored by a large number of externally appointed referees in the context of the Italian research assessment exercise (VQR); journal scores were assigned in a separate, panel-based independent assessment, carried out under a different procedure, which covered all academic journals in which Italian scholars have published. The score of each article is compared with that of the journal in which it was published: more specifically, we first estimate an ordered probit model, assessing whether a paper is more likely to receive a higher score the higher the score of its journal; in a second step, we concentrate on the top papers, estimating the probability that a paper receives an excellent score given that it was published in a top-rated journal. In doing so, we control for a number of characteristics of the paper and its author, including the language of publication, the scientific field and its size, and the author's age and academic status. We add to the literature on journal classification by providing, for the first time, a large-scale test of the robustness of expert-based classification.

    DOI: 10.12688/f1000research.6478.1

  • Malgarini M, Nappi CA and Torrini R
    (2015) Proceedings of ISSI Conference

    Article and Journal-Level Metrics in Massive Research Evaluation Exercises: The Italian Case

  • Nappi CA and Poggi G
    (2015) Rassegna Italiana di Valutazione, 59

    L’indicatore di voto standardizzato di dipartimento basato sull’esercizio VQR 2004-2010
