Level 1 COUNTER Compliant Vendor Statistics are a Reliable Measure of Journal Usage

2007 ◽  
Vol 2 (2) ◽  
pp. 84
Author(s):  
Gaby Haddow

A review of: Duy, Joanna and Liwen Vaughan. “Can Electronic Journal Usage Data Replace Citation Data as a Measure of Journal Use? An Empirical Examination.” The Journal of Academic Librarianship 32.5 (Sept. 2006): 512-17.

Abstract

Objective – To identify valid measures of journal usage by comparing citation data with print and electronic journal use data.

Design – Bibliometric study.

Setting – Large academic library in Canada.

Subjects – Instances of use were collected from 11 print journals of the American Chemical Society (ACS), 9 print journals of the Royal Society of Chemistry (RSC), and electronic journals in chemistry and biochemistry from four publishers – ACS, RSC, Elsevier, and Wiley. ACS, Elsevier, and Wiley journals in chemistry-related subject areas were sampled for Journal Impact Factors and citation data from the Institute for Scientific Information (ISI).

Methods – Journal usage data were collected to determine whether an association existed between: (1) print and electronic journal use; (2) electronic journal use and citations to journals by authors from the university; and (3) electronic journal use and Journal Impact Factors. Between June 2000 and September 2003, library staff recorded the re-shelving of bound volumes and loose issues of 20 journal titles published by the ACS and the RSC. Electronic journal usage data were collected for journals published by ACS, RSC, Elsevier, and Wiley within the ISI-defined chemistry and biochemistry subject areas. Data were drawn from the publishers’ Level 1 COUNTER compliant usage statistics, which equate one instance of use with a user viewing an HTML or PDF full-text article. The period of data collection varied, but at least 2.5 years of data were collected for each publisher. Journal Impact Factors were collected for all ISI chemistry-related journals published by ACS, Elsevier, and Wiley for the year 2001. Library Journal Utilization Reports (purchased from ISI) were used to determine the number of times researchers at the university cited journals in the same set of chemistry-related journals over the period 1998 to 2002; the authors call this “local citation data” (512). The results for electronic journal use were also analysed for correlation with the total number of citations, as reported in the Journal Citation Reports, for each journal in the sample.

Main results – The study found a significant correlation between print journal and electronic journal usage. A similar correlation was reported between electronic journal usage data and local citation data. No significant association was found between Journal Impact Factors and electronic journal usage data. However, when the total number of citations to the journals (drawn from the Journal Impact Factor calculations in Journal Citation Reports) was analysed against electronic journal use, significant correlations were found for all publishers’ journals.

Conclusion – Within the fields of chemistry and biochemistry, electronic journal usage data provided by publishers are as valid a measure of journal usage as print journal re-shelving data. The results indicate this association holds even when print journal subscriptions have ceased. Local citation data (the citations made by researchers at the institution being studied) also provide a valid measure of journal use when compared with electronic journal usage results. Journal Impact Factors should be used with caution when libraries make journal collection decisions.

1997 ◽  
Vol 170 (2) ◽  
pp. 109-112 ◽  
Author(s):  
Louise Howard ◽  
Greg Wilkinson

Background: We examined citation data for the British Journal of Psychiatry (BJP) and four other general psychiatry journals to assess their impact on the scientific community.

Method: Data on three measures of citations (total number of citations, impact factor, and ranking by impact factor) were obtained from Journal Citation Reports for 1985–1994. Rank correlations from year to year were calculated.

Results: The BJP currently ranks sixth of all psychiatry journals when journals are ranked by impact factor. The journal's impact factor fell between 1985 and 1990, followed by a rise after 1991. The BJP did not rank in the top 10 psychiatry journals between 1991 and 1993. Archives of General Psychiatry is cited more frequently than any other psychiatry journal, with the American Journal of Psychiatry usually ranking second. Psychopharmacology journals are replacing more general journals in the top rankings. Rankings of most journals have become less stable in recent years.

Conclusions: The BJP would have to change the nature and number of papers published to improve its impact factor. There are a number of limitations to citation data, and such data are only one of several factors useful in evaluating the importance of a journal's contribution to the scientific and clinical communities.

Conflict of interest: The second author is Editor of the British Journal of Psychiatry.


2021 ◽  
Vol 6 (1) ◽  
pp. 1-12
Author(s):  
Zao Liu

Although there are bibliometric studies of journals in various fields, the field of family studies remains unexplored. Using the bibliometric metrics of the two-year and five-year Journal Impact Factors, the H-index, and the newly revised CiteScore, this paper examines the relationships among these metrics in a bibliometric study of forty-four representative family studies journals. The citation data were drawn from Journal Citation Reports, Scopus, and Google Scholar. The correlation analysis found strong positive relationships among the metrics. Despite the strong correlations, discrepancies in the rank orders of the journals were found. A possible explanation for a noticeable discrepancy in rankings was provided, and the implications of the study for stakeholders were discussed.


2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
Jee-Eun Kim ◽  
Yerim Kim ◽  
Kang Min Park ◽  
Dae Young Yoon ◽  
Jong Seok Bae

Background. Altmetrics analyze the visibility of articles in social media and estimate their impact on the general population. We performed an altmetric analysis of articles on central nervous system inflammatory demyelinating disease (CIDD) and investigated its correlation with citation analysis. Methods. Articles in the 91 journals comprising the “clinical neurology,” “neuroscience,” and “medicine, general, and internal” Web of Science categories were searched for their relevance to the CIDD topic. The Altmetric Explorer database was used to determine the Altmetric.com Attention Score (AAS) values of the selected articles. The papers with the top 100 AAS values were characterized. Results. The articles most frequently mentioned online were primarily published after 2014, in journals with high impact factors. All articles except one dealt with multiple sclerosis. Most were original articles, but editorials were also common. Novel treatments and risk factors were the most frequent topics. The AAS was weakly correlated with journal impact factors; however, no link was found between the AAS and the number of citations. Conclusions. We present the top 100 most frequently mentioned CIDD articles in online media using an altmetric approach. Altmetrics can rapidly offer alternative information on the impact of research based on a broader audience and can complement traditional metrics.


2019 ◽  
Vol 26 (5) ◽  
pp. 734-742
Author(s):  
Rob Law ◽  
Daniel Leung

As the citation frequency of a journal represents how many people have read and acknowledged its articles, academia generally shares the notion that impact factor and citation data signify the quality and importance of a journal to the discipline. Although this notion is well entrenched, is it reasonable to deduce that a journal is not of good quality because of its lower impact factor? Do journal impact factors truly symbolize the quality of a journal? What must be noted when we interpret journal impact factors? This commentary discusses these questions and their answers thoroughly.


2015 ◽  
Author(s):  
B. Ian Hutchins ◽  
Xin Yuan ◽  
James M. Anderson ◽  
George M. Santangelo

Abstract: Despite their recognized limitations, bibliometric assessments of scientific productivity have been widely adopted. We describe here an improved method that makes novel use of the co-citation network of each article to field-normalize the number of citations it has received. The resulting Relative Citation Ratio is article-level and field-independent, and provides an alternative to the invalid practice of using Journal Impact Factors to identify influential papers. To illustrate one application of our method, we analyzed 88,835 articles published between 2003 and 2010, and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. We demonstrate that the values generated by this method strongly correlate with the opinions of subject matter experts in biomedical research, and suggest that the same approach should be generally applicable to articles published in all areas of science. A beta version of iCite, our web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov.
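The field-normalization idea behind the Relative Citation Ratio can be sketched in simplified form. This is not NIH's exact formula (which also benchmarks against a set of NIH-funded comparison articles); the function and figures below are illustrative assumptions only:

```python
def relative_citation_ratio(article_cites_per_year, cocited_cites_per_year):
    """Simplified RCR sketch: an article's citation rate divided by the
    mean citation rate of the articles in its co-citation network,
    which stands in for the article's 'field'."""
    field_rate = sum(cocited_cites_per_year) / len(cocited_cites_per_year)
    return article_cites_per_year / field_rate

# An article cited 12 times/year whose co-cited neighbours average
# 6 cites/year is performing at twice its field's rate.
rcr = relative_citation_ratio(12.0, [4.0, 6.0, 8.0])
```

Because the denominator is built from each article's own co-citation neighbourhood rather than from its journal, the ratio is article-level and does not depend on Journal Impact Factors.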


Publications ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 47 ◽  
Author(s):  
Juan Campanario

Journal self-citations may be increased artificially to inflate a journal’s scientometric indicators. The aim of this study was to identify possible mechanisms of change in a cohort of journals that rose from the fourth (Q4) to the first quartile (Q1) over six years or less in Journal Citation Reports (JCR), and the role of journal self-citations in these changes. A total of 51 different journals sampled from all JCR Science Citation Index (SCI) subject categories improved their rank position from Q4 in 2009 to Q1 in any year from 2010 to 2015. I identified changes in the numerator or denominator of the Journal Impact Factor (JIF) that were involved in each year-to-year transition. The main mechanism of change was the increase in the number of citations used to compute the JIF. The effect of journal self-citations on the increase of the JIF was studied. The main conclusion is that there was no evidence of widespread JIF manipulation through the overuse of journal self-citations.
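The numerator/denominator mechanics referred to above can be sketched as follows. The figures are invented for illustration, not taken from the study:

```python
def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by citable items published in those years."""
    return cites_to_prev_two_years / citable_items_prev_two_years

# A journal can move up quartiles mainly by growing the numerator --
# extra citations counted in year Y, which may include self-citations --
# faster than the denominator (its citable output) grows.
jif_before = impact_factor(120, 200)  # hypothetical Q4-level value
jif_after = impact_factor(480, 240)   # numerator quadrupled, denominator +20%
```

This is why the study inspects numerator and denominator changes separately: the same JIF rise can come from more incoming citations, from fewer citable items, or from both.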


2019 ◽  
Vol 24 (7) ◽  
pp. 1121-1123
Author(s):  
Zhi-Qiang Zhang

Journal impact factors for 2018 were recently announced by Clarivate Analytics in the June 2019 edition of Journal Citation Reports (JCR). In this editorial, I compare the impact factor of Systematic and Applied Acarology (SAA) with those of other main acarological journals, as I did in Zhang (2017). Following Zhang (2018a), I also highlight the top 10 SAA papers from 2016/2017 with the highest numbers of citations in 2018 (according to the JCR June 2019 edition). In addition, I remark on the increasing impact of developing countries and emerging markets in systematic and applied acarology, both in the number of publications and in citations, and include announcements of meetings on applied acarology.


2014 ◽  
Vol 33 (12) ◽  
pp. 1284-1293 ◽  
Author(s):  
SH Zyoud ◽  
SW Al-Jabi ◽  
WM Sweileh ◽  
R Awang

Background: Toxicology in Malaysia has experienced rapid development and made great progress in education and research, in conjunction with economic development in Malaysia over the past two decades. Objectives: The main objectives of this study were to analyse the research originating from Malaysia and published in toxicology journals, and to examine the authorship pattern and the citations retrieved from the Scopus database. Methods: Data from 1 January 2003 to 31 December 2012 were searched for documents with specific words in the toxicology field as a ‘source title’ and Malaysia as an affiliation country. Research productivity was evaluated, based on a methodology we developed and used in other bibliometric studies, by analysing: (a) the total number and trend of contributions in toxicology fields between 2003 and 2012; (b) the Malaysian authorship pattern and productivity; (c) collaboration patterns; (d) the journals in which Malaysian researchers publish; (e) the classification of journals as Institute for Scientific Information (ISI) or non-ISI; (f) the impact factors (IFs) of all publications; and (g) the citations received by the publications. Results: In total, 290 documents were retrieved from 55 international peer-reviewed toxicology journals. The quantity of publication increased around 10-fold from 2003 to 2012. The h-index of the retrieved documents was 20. Of the 55 journal titles, 42 (76.4%) had an IF listed in the Journal Citation Reports 2012. Forty-two documents (14.5%) were published in journals that had no official IF. The total number of citations, at the time of manuscript writing (5 August 2013), was 1707, with a median (interquartile range) of 3 (0–7). Malaysia collaborated mostly with countries in the Asia-Pacific region (18.3%), especially India and Japan, followed by the Middle East and Africa (10.0%), especially Palestine and Yemen. Conclusion: The present data show a promising rise and a good start for toxicology research activity in Malaysia. The sharing of relevant research questions by developed and developing countries can lead to research opportunities in the field of toxicology.


2019 ◽  
Author(s):  
Michael R Dougherty ◽  
Zachary Horne

Citation data and journal impact factors are important components of faculty dossiers and figure prominently in both promotion decisions and assessments of a researcher's broader societal impact. Although these metrics play a large role in high-stakes decisions, the evidence is mixed regarding whether they are valid proxies for key aspects of research quality. We use data from three large-scale studies to assess whether citation counts and impact factors predict four indicators of research quality: (1) the number of statistical reporting errors in a paper, (2) the evidential value of the reported data, (3) the expected replicability of reported research findings in peer-reviewed journals, and (4) the actual replicability of a given experimental result. Both citation counts and impact factors were weak and inconsistent predictors of research quality, so defined, and were sometimes negatively related to quality. Our findings impugn the validity of citation data and impact factors as indices of research quality and call into question their usefulness in evaluating scientists and their research. In light of these results, we argue that research evaluation should instead focus on the process of how research is conducted and incentivize behaviors that support open, transparent, and reproducible research.

