Presenting a Comprehensive Model for the Evaluation of Iranian Researchers: Lessons from the World

Document Type : Research Paper

Authors

1 Assistant Professor, Faculty of Management and Economics, Imam Hossein University, Tehran, Iran

2 Master's Student of Islamic Education and Business Management, Imam Sadegh University, Tehran, Iran

Abstract

Researcher evaluation is not merely a matter of ranking researchers; it also serves to compensate them for their services and to attract them to research organizations. Beyond this, evaluation is one of the means of building organizational commitment, developing and guiding research capacity, increasing satisfaction with research projects, and, ultimately, solving real problems and improving the welfare of society. The purpose of this study is to reconstruct the metrics used to evaluate researchers on the basis of the scientific documents of this field and, finally, to offer a suitable comprehensive framework that meets the expectations of a committed, knowledgeable, problem-solving researcher. The method used in this research is a synthesis of scientific documents, in which 118 documents were analyzed. Among the most important findings are the strengths and weaknesses of the main and complementary methods of researcher evaluation, the risks of traditional evaluation, and the new corrective measures known as alternative metrics (altmetrics). The comprehensive researcher-evaluation model, the main result of the study, consists of five levels. According to the model, a researcher's scientific documents are the product of the researcher's individual characteristics and the research ecosystem, and they are assessed by judges suited to the type and purpose of the research. In any case, research is defensible and useful when it leads to economic improvement or theory development in the medium term, and to greater authority and public welfare in the long run.
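As a minimal formal sketch of the multiplicative relation stated in the abstract (the symbols S, C, and E are illustrative labels, not notation taken from the paper), in LaTeX:

\[
S = C \times E
\]

where S stands for the researcher's documented scientific output, C for the researcher's individual characteristics, and E for the strength of the research ecosystem. The multiplicative rather than additive form captures the claim that a weak ecosystem suppresses the output of even a strong researcher, and vice versa: if either factor tends toward zero, so does the output.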

Keywords

