Can scientific journals be classified based on their ‘citation profiles’?

South African Journal of Science, Volume 111, Number 3/4, March/April 2015 (Research Letter). http://www.sajs.co.za

Abstract

Classification of scientific publications is of great importance in biomedical research evaluation. However, accurate classification of research publications is challenging and is normally performed in a rather subjective way. In the present paper, we propose to classify biomedical publications into superfamilies by analysing their citation profiles, i.e. the locations of citations within the structure of citing articles. Such a classification may help authors to find an appropriate biomedical journal for publication, may make journal comparisons more rational, and may even help planners to better track the consequences of their policies on biomedical research.
Introduction
Medical and biomedical research evaluation based on citation analysis has attracted much attention during the last
decades.1-5 Different aspects of citation analysis (in biomedical research evaluation) have been studied extensively
in the literature.6-10
In research evaluation, classification of scientific publications is of great importance.11-13 For researchers in
academia it is important to publish their results in relevant journals to guarantee visibility. In Journal Citation
Reports®, a list of ‘subject categories’ is prepared (and updated annually) to classify journals, in order to help
rank journals in specialised fields. Such a classification has already found some useful applications. For example,
the global map of science based on subject categories has been developed,14,15 and can help to better understand
scientific collaborations. Comparative assessment of the ‘quality’ of research in different subject categories12,16,17
is important in sustainable development of science and technology.
For research institutes, planners and policymakers, it might be necessary to monitor subject categories of
researchers. For example, facilitation and promotion of emerging interdisciplinary and multidisciplinary research
may be an objective of science policies.18,19 There is also a great need for reliable methods for evaluating researchers, because the number of citations (in both scholarly journals20 and online resources21) does not reflect the real impact of scientific papers.
All classification approaches are based on the central assumption that ‘objects’ in the same field/category have
related features. For classification of research articles, similarity of words or citation patterns can be used as
the features.
Classification of scholarly publications can be done from different viewpoints. Typically, classifications are done
based on subject categories.22 Lewison and Paraje23, on the other hand, suggested that biomedical journals be
classified according to their approach to biomedicine, namely, clinical or basic.
Voos and Dagaev24 proposed that the location at which citations appear within the citing publications
(e.g. ‘Introduction’ and ‘Methods’) influences the meaning and the relevance of citations. This was later shown
for larger article data sets when Maričić et al.25 and also Bornmann and Daniel26 studied the relationship between
location of citation within citing articles and the frequency of citations.
In the present study, we classify journals into four groups based on the scope of their articles: protocol,
methodology, descriptive and theoretical. Then, we show that journals in each group show a fairly similar citation
pattern. We therefore propose that journals may be classified based on their citation patterns.
Materials and methods
We chose 17 (bio)medical and bioinformatics journals (Table 1). We assumed that each of these journals belonged to one of four possible groups (protocol, methodology, descriptive and theoretical). Citations to the articles published in each journal in 2011 and 2012 were found in Scopus. Then, where possible, the full texts of the citing articles were downloaded and the citation profiles (i.e. the sections in which the citations appeared) were analysed manually. Only those citations which appeared in the standard sections of an article (Introduction, Methods, Results, Discussion25) were considered. It should be noted that multiple citations of the same article within a specific section were counted only once.
Results
From the analysis of the 17 journals, 1472 citations were detected in the citing articles. Altogether, 818 citations
appeared in one of the four standard sections of an article. We computed the percentage of citations in each of
the four sections.
Figure 1a shows the citation profiles of ‘protocol’ journals. As expected, most of the citations occur in the ‘Materials
and methods’ section. Journals which are devoted to the introduction of new protocols or software are naturally
cited by end users who apply these protocols and tools for practical purposes.
There is a subtle difference between ‘protocol’ and ‘methodology’ journals: ‘protocol’ journals focus on describing the procedures to be followed, whereas ‘methodology’ journals also describe in detail the scientific reasons behind the presented methodology. Consequently, it can be expected that articles in methodology journals are cited not only in the ‘Materials and methods’ section of citing articles, but also in all other sections. Figure 1b confirms that such a trend is observed in these journals.

AUTHORS: Sayed-Amir Marashi1, Amir Pandi2, Hossein Shariati3, Hossein Zamani-Nasab1, Narges Damavandi1, Mahshid Heidari1, Salma Sohrabi-Jahromi1, Arvand Asghari1, Saba Aslani1, Narjes Nakhaee1, Mohammad Hossein Moteallehi-Ardakani1
AFFILIATIONS: 1Department of Biotechnology, College of Science, University of Tehran, Tehran, Iran; 2School of Biology, College of Science, University of Tehran, Tehran, Iran; 3School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Tehran, Iran
CORRESPONDENCE TO: Sayed-Amir Marashi, marashi@ut.ac.ir (Department of Biotechnology, University of Tehran, Shfiei Str. 13, Qods Str., Enghelab Ave., Tehran 1417614411, Islamic Republic of Iran)
DATES: Received: 25 Apr. 2014; Revised: 21 Oct. 2014; Accepted: 16 Jan. 2015
KEYWORDS: scientific journals; journal classification; journal type; citation analysis; citation profiles
HOW TO CITE: Marashi S-A, Pandi A, Shariati H, Zamani-Nasab H, Damavandi N, Heidari M, et al. Can scientific journals be classified based on their ‘citation profiles’? S Afr J Sci. 2015;111(3/4), Art. #2014-0147, 3 pages. http://dx.doi.org/10.17159/sajs.2015/20140147
© 2015. The Author(s). Published under a Creative Commons Attribution Licence.
Table 1: List of journals in the present study and the categories in which they were classified

Journal category | Name of journal | Total number of citations
Protocol | WIREs Computational Statistics | 39
Protocol | Algorithms for Molecular Biology | 75
Protocol | Journal of Visualized Experiments | 70
Protocol | Source Code for Biology and Medicine | 31
Methodology | Bioinformatics | 123
Methodology | BioTechniques | 234
Methodology | Evolutionary Bioinformatics | 37
Methodology | Computational Biology and Chemistry | 111
Methodology | Journal of Bioinformatics and Computational Biology | 113
Descriptive | EXCLI Journal | 133
Descriptive | MEDICC Review | 41
Descriptive | Archives of Iranian Medicine | 189
Descriptive | Journal of Negative Results in BioMedicine | 19
Theoretical | Acta Biotheoretica | 28
Theoretical | Medical Hypotheses | 117
Theoretical | Biology and Philosophy | 86
Theoretical | Journal of the History of Biology | 26
‘Descriptive’ journals include those journals which mainly discuss
experimental or clinical findings. Figure 1c shows that citations to the
articles published in these journals mainly occur in the ‘Introduction’
and ‘Discussion’ sections of citing papers, followed by ‘Methods’ and
‘Results’ sections.
Finally, there are journals which focus on the theoretical aspects of
science, including philosophical and historical issues. There are even
journals which are devoted to presenting novel (and often radical)
hypotheses. These journals are not expected to be cited in the ‘Methods’
or ‘Results’ sections of citing articles, which is reflected in the citation
profile of these journals (Figure 1d).
Altogether, we observe that, depending on their approach to science,
different journals show distinctive citation patterns. Figure 1e shows the
average citation profile in each of the four categories.
Discussion
Investigations in the (bio)medical sciences span many different aspects, including clinical studies, molecular and biochemical medicine, diagnostic methods, traditional medicine, bioethics and even computational modelling in biomedical sciences. The first natural consequence of
citation profile analysis is that journals, including biomedical journals,
can be classified into superfamilies. Such superfamilies are based
on the journal’s approach to science, not necessarily the journal
subject. Classification of journals may show which journals have the
same approach to science, and therefore provide a framework for
‘intra-superfamily’ comparison of publications.
It is well known that citations can be interpreted based on many factors27,
including in which section of the citing paper the citation appears24,25,28.
Figure 1: Citation profiles, i.e. the frequency of citations in each section of the citing article (Introduction, Methods, Results, Discussion), for (a) protocol journals; (b) methodology journals; (c) descriptive journals; (d) theoretical journals; and (e) the average citation profile of each journal type.
However, citation-based measures of research evaluation do not take into
account the differences among citations. We suggest that it is possible
to create a context-based measure which takes into account the location
of a citation in the citing article, and assign different ‘usefulness’ weights
to the citations. For example, if the applicability of the methods is taken
into account, one may give more weight to the citations which appear
in the Methods section compared with the citations that appear in the
Introduction section. On the other hand, for analysing groundbreaking
and paradigm-changing papers, one may assign more weight to the
citations in the Introduction section.
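As a sketch of such a context-based measure, one could weight the raw per-section citation counts. The weight values below are purely illustrative assumptions, not values proposed in the paper.

```python
# Illustrative section weights for a 'methods-applicability' reading of
# citations; these numbers are assumptions chosen only for demonstration.
METHODS_ORIENTED_WEIGHTS = {
    "Introduction": 0.5,
    "Methods": 2.0,
    "Results": 1.0,
    "Discussion": 1.0,
}

def weighted_citation_score(section_counts, weights):
    """Sum citations to a paper, weighted by the section of the citing article."""
    return sum(weights.get(section, 0.0) * n for section, n in section_counts.items())

# A paper cited 10x in Introductions, 5x in Methods and 3x in Discussions:
score = weighted_citation_score(
    {"Introduction": 10, "Methods": 5, "Results": 0, "Discussion": 3},
    METHODS_ORIENTED_WEIGHTS,
)  # 0.5*10 + 2.0*5 + 1.0*0 + 1.0*3 = 18.0
```

For a paradigm-shift-oriented measure, one would simply swap in a weight table that favours the Introduction section.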
In the present study, we manually assigned the 17 journals to the four categories. For future work, one can consider a larger data set of journals and use statistical clustering methods to find the groups automatically, in order to test whether the same four categories emerge. A possible drawback of this approach is the difficulty of obtaining citation contexts at scale. Current tools for citation analysis, e.g. Web of Science and Scopus, analyse citations independently of the citation context.25,26,28 Analysing citation profiles manually is rather inconvenient; nevertheless, with the online availability of (most) articles in machine-readable formats (e.g. HTML or PDF), text-mining algorithms may be developed to analyse citation profiles automatically.
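Such automatic clustering of journals by their citation-profile vectors could be as simple as a k-means pass. The sketch below uses a minimal pure-Python k-means with deterministic initialisation; the toy profile vectors are invented for illustration and are not the paper's data.

```python
def kmeans(profiles, k, iters=20):
    """Cluster profile vectors into k groups; returns one cluster label per profile."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centroids = [list(p) for p in profiles[:k]]  # deterministic init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:  # assign each profile to its nearest centroid
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        for c in range(k):
            if clusters[c]:  # recompute centroid as the per-section mean
                centroids[c] = [sum(col) / len(clusters[c]) for col in zip(*clusters[c])]
    return [min(range(k), key=lambda c: dist2(p, centroids[c])) for p in profiles]

# Toy profiles over (Introduction, Methods, Results, Discussion) fractions:
labels = kmeans([
    (0.60, 0.10, 0.10, 0.20),  # descriptive-like: cited mostly in Introductions
    (0.55, 0.15, 0.10, 0.20),  # descriptive-like
    (0.10, 0.60, 0.10, 0.20),  # protocol-like: cited mostly in Methods
    (0.12, 0.58, 0.10, 0.20),  # protocol-like
], k=2)
```

A real study would of course run this over many journals with k left open, e.g. via a silhouette or gap criterion, to test whether four clusters are recovered.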
A side result of the automatic analysis of citation profiles could be the
detection of useful keywords. More precisely, it is possible to check what
issues inside a cited paper have attracted the attention of citing authors.
Such a survey could provide insights for selecting better keywords
for a new manuscript, in order to attract more attention, and, in turn,
more citations.
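One rough text-mining realisation of this idea is to collect the most frequent words from the sentences in which a citation marker appears. The marker format, stopword list and function name below are assumptions for illustration.

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "of", "a", "in", "and", "to", "is", "we"})

def citation_context_keywords(citing_texts, marker, top_n=5):
    """Return the most frequent non-stopwords in sentences containing `marker`,
    as a proxy for what attracted the attention of citing authors."""
    words = Counter()
    for text in citing_texts:
        # naive sentence split on terminal punctuation followed by whitespace
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if marker in sentence:
                words.update(w for w in re.findall(r"[a-z]+", sentence.lower())
                             if w not in STOPWORDS)
    return words.most_common(top_n)

kws = citation_context_keywords(
    ["We used the alignment tool [12] for multiple sequence alignment. "
     "An unrelated closing sentence."],
    "[12]",
)
```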
Authors’ contributions
S.A.M. was responsible for the experimental and project design and also
wrote the manuscript. All other authors were involved in collecting the
required data.
References
1. Bird SB. Journal impact factors, h indices, and citation analyses in toxicology.
J Med Toxicol. 2008;4:261–274. http://dx.doi.org/10.1007/BF03161211
2. Fang MLE. Journal rankings by citation analysis in health sciences
librarianship. Bull Med Lib Assoc. 1989;77:205–211.
3. Garfield E, Welljams-Dorof A. Of Nobel class: A citation perspective on
high impact research authors. Theor Med. 1992;13:117–135. http://dx.doi.
org/10.1007/BF02163625
4. Nieminen P, Carpenter J, Rucker G, Schumacher M. The relationship between
quality of research and citation frequency. BMC Med Res Method. 2006;6:42.
http://dx.doi.org/10.1186/1471-2288-6-42
5. Patsopoulos NA, Analatos AA, Ioannidis JPA. Relative citation impact
of various study designs in the health sciences. J Am Med Assoc.
2005;293:2362–2366. http://dx.doi.org/10.1001/jama.293.19.2362
6. Falagas ME, Alexiou VG. The top-ten in journal impact factor manipulation.
Arch Immunol Ther Exp. 2008;56:223–226. http://dx.doi.org/10.1007/
s00005-008-0024-5
7. Marashi S-A. On the identity of ‘citers’: Are papers promptly recognized by
other investigators? Med Hypoth. 2005;65:822. http://dx.doi.org/10.1016/j.
mehy.2005.05.003
8. Seglen PO. Why the impact factor of journals should not be used for
evaluating research. BMJ. 1997;314:498–502. http://dx.doi.org/10.1136/
bmj.314.7079.497
9. Garfield E. How can impact factors be improved? BMJ. 1996;313:411–413.
http://dx.doi.org/10.1136/bmj.313.7054.411
10. Broadwith P. End of the road for h-index rankings. Chem World. 2013;10:12–13.
11. Derkatch C. Demarcating medicine’s boundaries: Constituting and categorizing
in the journals of the American Medical Association. Tech Commun Quart.
2012;21:210–229. http://dx.doi.org/10.1080/10572252.2012.663744
12. Dorta-González P, Dorta-González MI. Comparing journals from different
fields of science and social science through a JCR subject categories
normalized impact factor. Scientometrics. 2013;95(2):645–672. http://
dx.doi.org/10.1007/s11192-012-0929-9
13. Glänzel W, Schubert A. A new classification scheme of science fields and
subfields designed for scientometric evaluation purposes. Scientometrics.
2003;56:357–367. http://dx.doi.org/10.1023/A:1022378804087
14. Leydesdorff L, Carley S, Rafols I. Global maps of science based on the
new Web-of-Science categories. Scientometrics. 2013;94:589–593. http://
dx.doi.org/10.1007/s11192-012-0784-8
15. Leydesdorff L, Rafols I. A global map of science based on the ISI subject
categories. J Am Soc Inf Sci Tech. 2009;60:348–362. http://dx.doi.
org/10.1002/asi.20967
16. Pudovkin AI, Garfield E. Rank-normalized impact factor: A way to compare
journal performance across subject categories. Proc Am Soc Inf Sci Tech.
2004;41:507–515. http://dx.doi.org/10.1002/meet.1450410159
17. Sombatsompop N, Markpin T, Yochai W, Saechiew M. An evaluation
of research performance for different subject categories using Impact
Factor Point Average (IFPA) index: Thailand case study. Scientometrics.
2005;65:293–305. http://dx.doi.org/10.1007/s11192-005-0275-2
18. Porter AL, Roessner JD, Heberger AE. How interdisciplinary is a
given body of research? Res Eval. 2008;17:273–282. http://dx.doi.
org/10.3152/095820208X364553
19. Porter AL, Youtie J. How interdisciplinary is nanotechnology? J Nanoparticle
Res. 2009;11:1023–1041. http://dx.doi.org/10.1007/s11051-009-9607-0
20. Hanney SR, Home PD, Frame I, Grant J, Green P, Buxton MJ. Identifying the
impact of diabetes research. Diabetic Med. 2006;23(2):176–184. http://
dx.doi.org/10.1111/j.1464-5491.2005.01753.x
21. Marashi S-A, Hosseini-Nami SMA, Alishah K, Hadi M, Karimi A, Hosseinian
S, et al. Impact of Wikipedia on citation trends. EXCLI J. 2013;12:15–19.
22. Glänzel W, Schubert A, Czerwon HJ. An item-by-item subject classification
of papers published in multidisciplinary and general journals using reference
analysis. Scientometrics. 1999;44:427–439. http://dx.doi.org/10.1007/
BF02458488
23. Lewison G, Paraje G. The classification of biomedical journals by research
level. Scientometrics. 2004;60:145–157. http://dx.doi.org/10.1023/B:SCIE.
0000027677.79173.b8
24. Voos H, Dagaev KS. Are all citations equal? Or, did we op. cit. your idem? J
Acad Lib. 1976;1:19–21.
25. Maričić S, Spaventi J, Pavičić L, Pifat-Mrzljak G. Citation context versus the frequency counts of citation histories. J Am Soc Inf Sci. 1998;49:530–540. http://dx.doi.org/10.1002/(SICI)1097-4571(19980501)49:6<530::AID-ASI5>3.0.CO;2-8
26. Bornmann L, Daniel HD. Functional use of frequently and infrequently cited articles in citing publications: A content analysis of citations to articles with low and high citation counts. Eur Sci Edit. 2008;34:35–38.
27. Nicolaisen J. Citation analysis. Ann Rev Inf Sci Tech. 2007;41:609–641.
http://dx.doi.org/10.1002/aris.2007.1440410120
28. Cano V. Citation behavior: Classification, utility, and location. J Am
Soc Inf Sci. 1989;40:284–290. http://dx.doi.org/10.1002/(SICI)1097-
4571(198907)40:4<284::AID-ASI10>3.0.CO;2-Z
... However, this often fails because of the existence of interdisciplinary journals and several other reasons [13]. Currently, existing approaches to classify the subject area of journals use either Metadata or using interrelationship analysis or using subject experts, etc. [13][14][15][16][17][18][19][20][21][22][23][24]. But each of the existing approaches has limitations. ...
... [ [16][17][18] interrelationship analysis Reference analysis, citation analysis, common author's analysis, etc. ...
Thesis
The existence of search engines solely depends on user satisfaction. This is not different in the case of academic search engines too. A major difference is the category of users surfing for, as mainly librarians, academicians or faculties, scientists, researchers, and students are the main users of academic search engines. Again, all research papers have a well-defined format when compared to any other text documents with a meaningful title, authors and their affiliations, abstract, list of keywords, document contents including introduction and conclusion, and future scope followed by a list of references. This well-defined structure of scientific articles makes it compatible with the academic search engines to represent each article in a graph-based format. One of the distinct features of an academic search engine, when compared to other engines besides the category of users surfing for, is the nature of the data they are searching for. Users usually perform article searches in a discretized manner by searching for either paper title, keywords or key-phrases, journal type etc. and more preference is usually given to research areas or areas of interest, rather than content-based or other types of queries. Search based on research areas or areas of interest needs a proper subject labeling or subject classification of all documents, to retrieve relevant articles belonging to the searched area in a fast and efficient manner. Subject classification is supervised in nature and hence need a taxonomy of subject, fields, and keywords to classify the subject of an article accurately. This work proposes a novel subject labeling based academic search engine employing sequence graph-based representation of each article to take advantage of the aforementioned discretized nature of academic search engines. 
It’s a semi-supervised method of automatic subject labeling of each article at the indexing time itself using a pre-defined classifier hierarchy of major subjects with areas, categories, disciplines, fields and keywords in each area arranged at six levels of hierarchy, which can be utilized to classify research article in any research area and also supports long sequence text search and retrieval. This automatic subject labeling of submitted papers at the indexing time itself can be efficiently utilized by journals too to select relevant reviewers based on area. Subject labeling of articles can be efficiently used for incorporating many other features like a subject-related paper recommendation system, to find current research trends and for finding the author’s research areas of interest. Again, the sequence graph-based representation enables long sequence text search and retrieval that can be used to implement any search engines, and we also developed a biblical search engine and a desktop search engine. The proposed system is tested for efficiency and reliability in various aspects.
... However, this often fails because of the existence of interdisciplinary journals and several other reasons [6]. Currently existing approaches to classify journals based on subjects can be classified into three main categories: - Based on basic journal information and Meta data -Using journal classification system, journal ontology or by using journal meta data and contents like title information, abstract, keywords present etc. [6][7][8][9][10][11][12][13][14]  Based on interrelationship analysis -Using reference analysis, citation analysis, common author's analysis etc. [15][16][17].  By subject experts -Supervised classification using experts in each field to manually classify journals into scientific areas ...
Article
Subject Classification of scholarly articles is a pertinent area in the field of research. Proper classification of journal articles is an essential criterion for academic search engines to facilitate easier search and retrieval of journal papers based on user preferred research areas. Subject classification is equally important for search engines to find appropriate reviewers to review submitted papers based on area. It also helps to implement an efficient paper recommendation system to recommend similar articles to users based on their areas of interest. The widely used approach for subject classification is to use metadata of journal papers like title, abstract, paper keywords etc. to classify articles or by insisting users to use some classification system to specify the subject area of their article. This paper proposes an efficient graph based subject classification of journal articles using a pre-indexed classifier model by means of full text indexing approach. Journal contents are indexed using Sequence Word Graph model to classify any journal article into its relevant research areas and sub areas based on actual keyword or key phrase embedding in the journal contents. This automatic classification approach enables efficient search of scholarly articles by means of subject categories or by sub areas. The subject classification accuracy is tested using arXiv subject classified papers set of total 1307 papers and accuracy yields 91%.
... For instance, Citation analysis has limitations in that it cannot be used to identify homographs that are present in any article, and recommending research paper based on the reference lists on the publication being used by a researcher, is not accurate since the reference lists in many occasions contains irrelevant entries that usually will result in undesirable search results and paper recommendations. Likewise, searching, retrieving and recommending research papers based on the citation count score is not accurate since the number of citations does not reflect the true impact and relevance of a research paper [7]. Further, most of the metrics that are used by research paper recommender systems to search for research papers are not transparent on how they are computed and often times lead researchers to publish in some journals that are suggested as good journals by these metrics, thus leading to biasness in paper recommendation and retrievals. ...
Conference Paper
The volume of literature and more particularly research-oriented publications is growing at an exponential rate, and better tools and methodologies are required to efficiently and effectively retrieve desired documents. The development of academic search engines, digital libraries and archives has led to better information filtering mechanisms that has resulted to improved search results. However, the state-of-the art research-paper recommender systems are still retrieving research articles without explicitly defining the domain of interest of the researchers. Also, a rich set of research output (research objects) and their associated metrics are also not being utilized in the process of searching, querying, retrieving and recommending articles. Consequently, a lot of irrelevant and unrelated information is being presented to the user. Then again, the use of citation counts to rank and recommend research-paper to users is still disputed. Recommendation metrics like citation counts, ratings in collaborative filtering, and keyword analysis' cannot be fully relied on as the only techniques through which similarity between documents can be computed, and this is because recommendations based on such metrics are not accurate and have lots of biasness. Henceforth, altmetric-based techniques and methodologies are expected to give better recommendations of research papers since the circumstances surrounding a research papers are taken into consideration. This paper proposes a research paper recommender system framework that utilizes paper ontology and Altmetric from research papers, to enhance the performance of research paper recommender systems.
Article
Full-text available
The DHET Research Output Policy (2015) indicates that there has been a change in the government’s approach to research funding. Previously all research published in any accredited journal was rewarded equally. A decision has been taken, however, that a shift will be made towards rewarding better quality and higher impact peer-review research. Additional mechanisms such as biometric/bibliometric data, including citations, assessments by discipline-specific panels of experts and/or post-publication reviews may be used to determine the quality and impact of publications. The policy notes that the DHET may distinguish between "high" and "low" impact journals after proper consultation. This article highlights the need for consultation by the legal fraternity with the DHET about the implementation of these possible mechanisms in the light of the special considerations applicable to the evaluation of law journals: most journals publish mainly local legal content, there is a limited number of active legal academics, the nature of legal research is not empirical, and a premium is placed on the writing of books. The research evaluates the available data between 2009 and 2014 in an attempt to assess if it would be appropriate to introduce a legal journal ranking system in South Africa. The article discusses direct and indirect forms of quality evaluation to inform possible ranking systems. This includes the data from the ASSAf expert panel evaluation of law journals in 2014 and other bibliometric data based on whether the journal is featured in international accredited lists, the size of its print-run, author prominence, rejection-rate, usage studies, and evaluations based on citations. An additional ranking system is considered, based on the five best outputs submitted to the National Research Foundation by applicants applying for rating. The article concludes that a law journal ranking system would be inappropriate for South Africa. 
None of the systems meet the minimum requirements for a trustworthy ranking of South African law journals, as the data available are insufficient, non-verifiable and not based on objective quality-sensitive criteria. Consultation with the DHET is essential and urgent to avoid the implementation of inappropriate measures of quality and impact assessment.
Article
Full-text available
This article presents results to date produced by a team charged with evaluating the National Academies Keck Futures Initiative, a 15-year US$40 million program to facilitate interdisciplinary research in the United States. The team has developed and tested promising quantitative measures of the integration (1) and specialization (S) of research outputs, the former essential to evaluating the impact of the program. Both measures are based on Thomson-ISI Web of Knowledge subject categories. 'I' measures the cognitive distance (dispersion) among the subject categories of journals cited in a body of research. 'S' measures the spread of subject categories in which a body of research is published. Pilot results for samples from researchers drawn from 22 diverse Subject categories show what appears to be a surprisingly high level of interdisciplinarity. Correlations between integration and the degree of co-authorship of selected bodies of research show a low degree of association.
Article
Full-text available
The journal Impact Factor (IF) is not comparable among fields of Science and Social Science because of systematic differences in publication and citation behaviour across disciplines. In this work, a decomposing of the field aggregate impact factor into five normally distributed variables is presented. Considering these factors, a Principal Component Analysis is employed to find the sources of the variance in the JCR subject categories of Science and Social Science. Although publication and citation behaviour differs largely across disciplines, principal components explain more than 78% of the total variance and the average number of references per paper is not the primary factor explaining the variance in impact factors across categories. The Categories Normalized Impact Factor (CNIF) based on the JCR subject category list is proposed and compared with the IF. This normalization is achieved by considering all the indexing categories of each journal. An empirical application, with one hundred journals in two or more subject categories of economics and business, shows that the gap between rankings is reduced around 32% in the journals analyzed. This gap is obtained as the maximum distance among the ranking percentiles from all categories where each journal is included.
Article
Full-text available
It has been suggested that the "visibility" of an article influences its citation count. More specifically, it is believed that the social media can influence article citations.Here we tested the hypothesis that inclusion of scholarly references in Wikipedia affects the citation trends. To perform this analysis, we introduced a citation “propensity” measure, which is inspired by the concept of amino acid propensity for protein secondary structures. We show that although citation counts generally increase during time, the citation "propensity" does not increase after inclusion of a reference in Wikipedia.
Article
Full-text available
In August 2011, Thomson Reuters launched version 5 of the Science and Social Science Citation Index in the Web of Science (WoS). Among other things, the 222 ISI Subject Categories (SCs) for these two databases in version 4 of WoS were renamed and extended to 225 WoS Categories (WCs). A new set of 151 Subject Areas was added, but at a higher level of aggregation. Perhaps confusingly, these Subject Areas are now abbreviated "SC" in the download, whereas "WC" is used for WoS Categories. Since we previously used the ISI SCs as the baseline for a global map in Pajek (Pajek is freely available at http://vlado.fmf.uni-lj.si/pub/networks/pajek/) (Rafols et al., Journal of the American Society for Information Science and Technology 61:1871-1887, 2010) and brought this facility online (at http://www.leydesdorff.net/overlaytoolkit), we recalibrated this map for the new WC categories using the Journal Citation Reports 2010. In the new installation, the base maps can also be made using VOSviewer (VOSviewer is freely available at http://www.VOSviewer.com/) (Van Eck and Waltman, Scientometrics 84:523-538, 2010).
Article
Full-text available
Using publication and citation data from a study on the selection procedure of the Boehringer Ingelheim Fonds (B.I.F.), this study investigated the extent to which frequently and infrequently cited articles were used differently by the scientists who cited them. The data set consisted of 31 articles by B.I.F. grant applicants that had received 451 citations in 270 citing publications. In a comprehensive content analysis, each reference to a B.I.F. article in the citing publication was classified according to two categories: (1) the location of the citation within the citing publication (the section of the paper in which the citation appears) and (2) meaningful or cursory mention of the article in the citing publication. The results showed statistically significant differences between the B.I.F. applicants' articles with low or high citation counts. All in all, the results indicate that an article with a high citation count had greater relevance for the citing author than an article with a low citation count.
Article
This article examines professional boundary work in a set of medical journal theme issues about complementary and alternative medicine (CAM). Whereas these journals claim as their collective goal to bridge and blur boundaries between mainstream and alternative medicine, this article identifies and describes two chief rhetorical strategies through which the journals instead bolster and even expand those boundaries. These two strategies, constituting and categorizing, appear central to the demarcation of biomedical boundaries vis-à-vis CAM.
Article
Discusses whether there is a difference in the value of a citation depending on where in the body of the citing article it occurs; and whether those cited articles to which reference is made more than once within a citing article are more valuable to the user than those cited only once. (Author)
Article
Impact factors are widely used to rank and evaluate journals. They are also often used inappropriately as surrogates in evaluation exercises. The inventor of the Science Citation Index warns against the indiscriminate use of these data. Fourteen-year cumulative impact data for 10 leading medical journals provide a quantitative indicator of their long-term influence. In the final analysis, impact simply reflects the ability of journals and editors to attract the best papers available. Counting references to rank the use of scientific journals was reported as early as 1927 by Gross and Gross.1 In 1955 I suggested that reference counting could measure "impact",2 but the term "impact factor" was not used until the publication of the 1961 Science Citation Index (SCI) in 1963. This led to a byproduct, Journal Citation Reports (JCR), and a burgeoning literature using bibliometric measures. From 1975 to 1989, JCR appeared as supplementary volumes in the annual SCI. From 1990 to 1994 they appeared in microfiche, and in 1995 a CD-ROM edition was launched. Large journals that publish many papers may not have as high an impact as smaller review journals. Calculation of current impact factors: the most used data in the JCR are impact factors, ratios obtained by dividing citations received in one year by papers published in the two previous years. Thus, the 1995 impact factor counts the citations in 1995 journal issues to "items" published in 1993 and 1994. I say "items" advisedly. There are a dozen major categories of editorial matter. JCR's impact calculations are based on original research and review articles, as well as notes. Letters of the type published in the BMJ and the Lancet are not included in the publication count. The vast majority of research journals do not have such extensive correspondence sections. The effects of these differences in calculating journal impact …
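The two-year impact factor calculation described in this abstract reduces to a simple ratio; the sketch below uses hypothetical counts for a fictitious journal.

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in year Y to
    items published in years Y-1 and Y-2, divided by the number of
    citable items (original articles, reviews, notes; letters excluded)
    published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1200 citations received in 1995 to the 400
# citable items it published in 1993-94.
jif = impact_factor(1200, 400)  # -> 3.0
```

The denominator choice is exactly the point of contention the abstract raises: journals with large correspondence sections gain citations to letters in the numerator while those letters are excluded from the publication count in the denominator.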
Article
It is well known that uninformed science administrators often use ISI's journal impact factors without taking into account the inherent citation characteristics of individual scientific disciplines. A rank normalized impact factor (rnIF) is proposed which involves the use of order statistics for the complete set of journals within each JCR category. We believe the normalization procedure provides reliable and easily interpretable values. For any journal j, its rnIF is designated rnIF_j and equals (K − R_j + 1)/K, where R_j is the descending rank of journal j in its JCR category and K is the number of journals in the category. Note: JCR impact factor listings are published in descending order. The proposed rnIF is compared with normalized impact factors proposed by earlier authors. The efficacy of the rnIF is illustrated in the cases of seven highly cited scientists, one each from seven different fields.
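The rnIF formula given in this abstract is straightforward to compute once a category's journals are ranked; the example category size below is hypothetical.

```python
def rank_normalized_if(rank, n_journals):
    """Rank normalized impact factor rnIF_j = (K - R_j + 1) / K, where
    R_j is the journal's descending rank by impact factor within its
    JCR category and K is the number of journals in that category."""
    return (n_journals - rank + 1) / n_journals

# In a hypothetical 50-journal category, the top-ranked journal gets
# rnIF = 1.0 and the bottom-ranked journal gets 1/50 = 0.02.
top = rank_normalized_if(1, 50)      # -> 1.0
bottom = rank_normalized_if(50, 50)  # -> 0.02
```

Because the score depends only on rank within the category, a journal leading a low-citation field and one leading a high-citation field both receive rnIF = 1.0, which is the cross-discipline comparability the authors are after.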