A Comprehensive Analysis of the Journal Evaluation System in China

Ying Huang, Ruinan Li, Lin Zhang, Gunnar Sivertsen

Handling Editor: Liying Yang

Ying Huang, Ruinan Li, Lin Zhang, Gunnar Sivertsen; A comprehensive analysis of the journal evaluation system in China. Quantitative Science Studies 2021; 2 (1): 300–326. doi: https://doi.org/10.1162/qss_a_00103

Journal evaluation systems reflect how new insights are critically reviewed and published, and the prestige and impact of a discipline’s journals are a key metric in many research assessment, performance evaluation, and funding systems. With the expansion of China’s research and innovation systems and its rise as a major contributor to global innovation, journal evaluation has become an especially important issue. In this paper, we first describe the history and background of journal evaluation in China and then systematically introduce and compare the currently most influential journal lists and indexing services. These are the Chinese Science Citation Database (CSCD), the Journal Partition Table (JPT), the AMI Comprehensive Evaluation Report (AMI), the Chinese S&T Journal Citation Report (CJCR), “A Guide to the Core Journals of China” (GCJC), the Chinese Social Sciences Citation Index (CSSCI), and the World Academic Journal Clout Index (WAJCI). Some other influential lists produced by government agencies, professional associations, and universities are also briefly introduced. Through the lens of these systems, we provide comprehensive coverage of the tradition and landscape of journal evaluation in China, its methods and practices, and some comparisons with how other countries assess and rank journals.

1. INTRODUCTION

China is among the many countries where the career prospects of researchers depend, in part, on the journals in which they publish. Knowledge of which journals are considered prestigious and which are of dubious quality is critical to the scientific community when assessing the standing of a research institution and in tenure decisions, grant funding, performance evaluations, and the like.

The process of journal evaluation dates back to Gross and Gross (1927), who postulated that the number of citations one journal receives relative to another similar journal suggests something about its importance to the field. Shortly after, the British mathematician, librarian, and documentalist Samuel C. Bradford published his study on publications in geophysics and lubrication. The paper presented the concept of “core-area journals” and an empirical law that would, by 1948, become Bradford’s well-known law of scattering (Bradford, 1934, 1984). In turn, Bradford influenced Eugene Garfield of the United States, who subsequently published a groundbreaking paper on citation indexing called “Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas.” According to Garfield (1955), “the citation index … may help a historian to measure the influence of an article—that is, its ‘impact factor’.” In the 1960s, Garfield conducted a large-scale statistical analysis of citations in the literature, concluding that citations were concentrated in just a few journals, while the many remaining journals accounted for only a few citations (Garfield, 1963, 1964). Garfield went on to create the Institute for Scientific Information (ISI), which successively published the Science Citation Index (SCI), the Social Science Citation Index (SSCI), and the Arts & Humanities Citation Index (A&HCI) databases.

Assessing the quality of published research output is important in all contexts where research assessment takes place, for example, when evaluating the success of research projects or when distributing research funding (Su, Shang et al., 2017). As part of such assessments, evaluating and ranking the quality of the journals where the output was published has become increasingly important (Mingers & Yang, 2017). Journal evaluations and rankings are used by governments, organizations, universities, schools, and departments to evaluate the quality and quantity of faculty research productivity, with consequences ranging from promotion and tenure to monetary rewards (Black, Stainbank et al., 2017). Even though the merit of using such a system is not universally agreed upon (Dobson, 2014), and is sometimes even contested (Zhang, Rousseau, & Sivertsen, 2017), it is nevertheless widely believed that the rank or citation impact of a journal reflects its prestige, its influence, and even the difficulty of having a paper accepted for publication (Su et al., 2017).

Over the past few years, the number of papers published in international journals by Chinese researchers has increased dramatically, to the point that, today, China is the largest contributor to the international journals covered by Web of Science (WoS) and Scopus. In tandem, government policies and guidance, especially the call to “publish your best work in your motherland to benefit local society” proposed by President Xi in 2016, are steering more and more papers toward China’s domestic journals. With these increases in the number of papers and journals, exploring the strengths and weaknesses of various methods for evaluating journals, as well as the types of ranking systems that may suit China’s national conditions, has become an important task.

The journal evaluation system in China was established gradually, beginning with the introduction of Western journal evaluation theories about 60 years ago. Over the last 30 years in particular, these foreign theories have been adopted, adapted, researched, and vigorously redeveloped. In the past, journal evaluation and selection results were mainly used to help librarians develop their collections and to help readers identify a discipline’s core journals. In recent years, however, the results of journal evaluation and ranking have increasingly been applied to scientific research evaluation and management (i.e., in tenure decisions, grant funding, and performance evaluations) (Shu, Quan et al., 2020). Many institutions increasingly rely on journal impact factors (JIFs) to evaluate papers and researchers, a practice commonly referred to in China as “evaluating a paper based on the journal and ranking list” (以刊评文, Yi Kan Ping Wen). The higher the journal’s rank and JIF, the higher the expected outcome of evaluations.
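Since so much of what follows turns on the JIF, it is worth recalling the standard two-year definition that Garfield introduced and the JCR uses: for journal $j$ in year $y$,

$$ \mathrm{JIF}_{j,y} = \frac{C_j(y;\,y-1) + C_j(y;\,y-2)}{N_j(y-1) + N_j(y-2)} $$

where $C_j(y;\,y-k)$ is the number of citations received in year $y$ by the items that $j$ published in year $y-k$, and $N_j(y-k)$ is the number of citable items $j$ published in year $y-k$.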

In the ever-changing environment of scientific research evaluation, the research and practice of journal evaluation in China are also evolving to meet different needs. Many influential journal evaluation and indexing systems have been established since the 1990s, and their evaluation methods and standards have matured steadily. These activities have played a positive role in promoting the development of scientific research and have also helped to improve the quality of academic journals.

The aim of this study is to review the progress of journal evaluation in China and present a comprehensive analysis of the current state of the art. Hence, the main body of this article is a comparative analysis of the journal lists that are most influential in China’s academic landscape. The results not only offer a deeper understanding of China’s journal evaluation methods and practices but also reveal some insights into the journal evaluation activities of other countries. Overall, our aim is to make a valuable contribution to improving the theory and practice of journal evaluation and to promote the sustainable and healthy development of journal management and evaluation systems, both in China and abroad.

2. JOURNAL EVALUATION IN CHINA

2.1. A Brief History

Journal evaluation in China dates back to the 1960s and has passed through some fairly distinct stages. Qiyu Zhang and Enguang Wang first introduced the Science Citation Index (SCI) to Chinese readers in 1964 (Zhang, 2015). In 1973, Erzhong Wu introduced a core journal list for chemistry, the first mention of the concept of a “core journal” (Wu, 1973). In 1982, Liansheng Meng finished his Master’s thesis, “Chinese science citation analysis” (Meng, 1982), and then, in 1989, he built the Chinese Science Citation Index (CSCI), now called the Chinese Science Citation Database (CSCD), with the support of the Documentation and Information Center of the Chinese Academy of Sciences. At this stage of development, international journal evaluation practices were applied to the Chinese context almost without changes to the underlying methodologies. At the same time, exploring bibliometric laws and their potential applications became an important topic for researchers in library and information science.

In 1988, Jing and Xian used the “citation method” to identify a list of “Chinese Natural Science Core Journals,” which included 104 core Chinese journals in the natural sciences. This is now typically recognized as the first Chinese journal list (Jing & Xian, 1988). Around that same time, some institutions began to undertake journal evaluation activities. For example, in 1987, the Institute of Scientific and Technical Information of China (ISTIC), commissioned by the Ministry of Science and Technology (formerly the National Scientific and Technological Commission), began to analyze publications in the SCI, the Index to Scientific Reviews (ISR), and the Index to Scientific & Technical Proceedings (ISTP), and in 1989 it began selecting domestic scientific journals for analysis. During this process, 1,189 of the 3,025 scientific journals nationwide were selected as statistical source journals, a selection that has been adjusted annually ever since (Qian, 2006). Hence, this second stage of development saw the beginnings of adapting international evaluation systems and approaches to local journals, with some institutions building their own citation and bibliographic indexes.

From the 1990s onwards, journal evaluation activities entered a period of rapid development, with equal emphasis on theoretical research and practical applications. On the theoretical side, bibliometric researchers and information scientists developed more advanced evaluation methods and better indicators. The theories and methods of journal evaluation spread from the natural sciences to the social sciences and humanities (SSH). On the practical side, more and more researchers in library and information science moved from individual research agendas into joint working groups and professional evaluation institutions to promote journal evaluation practices. The number of journal lists burgeoned as well. Some combined quantitative methods and qualitative approaches, such as “A Guide to the Core Journals of China” (GCJC) by the Peking University Library and the Chinese Social Sciences Citation Index (CSSCI) from the China Social Sciences Research Center of Nanjing University. Others were proposed by joint working groups and research institutions, and these lists began to be used to support scientific research evaluation and management.

On the whole, these advances in methods and standards played a positive role in promoting the quality of academic journals. However, over time, JIFs have tended to become a proxy for the quality of the papers and authors published within their pages (i.e., “evaluating a paper based on the journal and ranking list” [以刊评文, Yi Kan Ping Wen ]). This phenomenon has been causing a wide debate nationwide, with many calling for papers to be judged by their content, not by their wrapping ( Zhang et al., 2017 ). In this regard, the number of solutions proposed to improve the standards of journal selection and to avoid improper or even misleading use continues to multiply.

2.2. Motivations for Performing Journal Evaluation in China

Advances in science and technology and the rapid growth of scientific research have changed the way journals are evaluated. Initially, assessments were reader oriented, serving as a guide for journal audiences to understand research trends and developments in the various disciplines. Later, greater focus was placed on the needs of libraries and other organizations: lists of core English-language journals were translated into Chinese and introduced to China so that limited funds would be spent on the most valuable journals and the journal collections of China’s libraries could be optimized.

However, with the rapid development of information network technology and the popularization of reading on screen, electronic journal databases are having an unprecedented impact on journal subscriptions. Further, early use of journal evaluation systems by pioneering institutions has spread beyond the library and information science community. Today, journal evaluations are inextricably tied to many aspects of assessing research performance.

The Journal Citation Reports (JCR), an annual publication by Clarivate, contains a relatively transparent data set for calculating JIFs and citation-based performance metrics at both the article and the journal level. Further, the JCR clearly outlines a network of references that represents each journal’s voice in the global scholarly dialog, highlighting the institutional and international players who are part of the journal’s community. However, many of the journals selected for inclusion in the JCR are from English-speaking countries, as shown in Table 1. To fulfill a growing demand to extend the universe of journals in the JCR, the WoS platform launched the Emerging Sources Citation Index (ESCI) in November 2015. However, the ESCI has also done very little to promote journals from non-English-speaking countries and regions (Huang, Zhu et al., 2017). Although English is the working language of the international scientific community, for many reasons it is not wise for researchers to publish their scholarly contributions only in English. Building domestic evaluation systems has therefore proved necessary for fostering domestic collaborations, appropriately evaluating research performance, and keeping up with research trends close to home.

The top 10 countries by number of journals in JCR 2019

Data Source: Web of Science Group (2019).

Note: Some journals in the portfolio of international publishers have no genuine national affiliation.

Moreover, China’s economy is growing rapidly and, along with it, the country’s scientific activity is flourishing. As Figure 1 shows, China’s scientific research inputs and outputs have consistently increased over the past few decades, exceeding those of the United States in 2019 to make China the most productive country in the world. With such a large number of papers, the work of researchers cannot be assessed without shortcuts. Thus, for want of a better system, the quality of the journals in which a researcher’s papers are published has become a proxy for the quality of the researcher themselves, and how to define “core journals” and select indexed journals has attracted wide attention, especially from the Chinese government.

Figure 1. The 10 countries with the largest number of publications in WoS (1975–2019). Note: Indexes = SCIE, SSCI, A&HCI; Document types = article, review.

Furthermore, national policies, such as those listed in Table 2, now play a vital role in these evaluation activities. Early in China’s history of journal evaluation, the policies implemented were designed to support the development of influential journals across the natural and social sciences. More recently, however, the government’s policies have sought to reverse the excessive emphasis placed on the volume of a researcher’s output and the JIF of their venues (Zhang & Sivertsen, 2020). Research institutions and universities are now encouraged to adopt a more comprehensive evaluation method that combines qualitative and quantitative approaches and pays more attention to the quality, contribution, and impact of a researcher’s representative work. Hence, more indicators are being taken into account, and upon these a culture more conducive to exploration is being established, one that does not prioritize SCI/SSCI/A&HCI journals to the exclusion of all else.

Selected policies related to journal evaluation in China

Note: Ministry of Education of the People’s Republic of China (MOE); China Association for Science and Technology (CAST); State Administration of Press, Publication, Radio, Film and Television of the People’s Republic of China (SAPPRFT), renamed the National Radio and Television Administration of the People’s Republic of China (NRTA) in 2018; Chinese Academy of Sciences (CAS); Chinese Academy of Engineering (CAE); General Office of the State Council of the People’s Republic of China (GOSC); Ministry of Science and Technology of the People’s Republic of China (MOST); Communist Party of China (CPC).

Through these three stages of development, multiple institutions in China have established comprehensive journal evaluation systems that combine quantitative and qualitative methods and a variety of different indicators, many of which have had a significant influence on scientific research activities. Hence, what follows is a comparison of the current journal indexes in China. These are the CSCD and the Journal Partition Table (JPT) from the National Science Library, Chinese Academy of Sciences (NSLC); the AMI journal list from the Chinese Academy of Social Sciences Evaluation Studies (CASSES); the Chinese S&T Journal Citation Report (CJCR) from ISTIC; “A Guide to the Core Journals of China” (GCJC) from Peking University Library; the CSSCI from the Institute for Chinese Social Science Research and Assessment (ICSSRA) of Nanjing University; and the World Academic Journal Clout Index (WAJCI) from the China National Knowledge Infrastructure (CNKI). In addition, some other influential lists produced by government agencies, professional associations, and universities are briefly discussed.

3. THE LEADING JOURNAL EVALUATION SYSTEMS OF ACADEMIC JOURNALS IN CHINA

3.1. NSLC: CSCD Journal List

3.1.1. Background

The CSCD was established in 1989 by the National Science Library of CAS with the aim of disseminating excellent scientific research achievements in China and helping scientists discover information. The database covers more than 1,000 of the top academic journals in engineering, medicine, mathematics, physics, chemistry, the life and earth sciences, agricultural science, industrial technology, the environmental sciences, and so on (National Science Library of CAS, 2019b). Since its inception, the CSCD has amassed 5.5 million articles and 80.5 million citation records. As the first Chinese citation database, the CSCD published its first printed journal list in 1995 and its first retrieval CD-ROM in 1998, followed by an online version in 2003. In 1999, it launched the “CSCD ESI Annual Report” and, in 2005, the “CSCD JCR Annual Report,” which are similar to the ESI and JCR and very well known across China. Perhaps the most notable feature of the CSCD, however, is its cooperation with Clarivate Analytics (formerly Thomson Reuters) in 2007 to offer cross-database searching with the WoS, making the CSCD the first non-English-language citation database to be searchable alongside it.

The CSCD provides information discovery services for analyzing China from the perspective of the world and analyzing the world from the perspective of China. It is therefore widely used by research institutes and universities for subject searches, funding support, project evaluations, declaring achievements, talent selection, bibliometric measurement, and evaluation research. It is also an authoritative document retrieval tool (Jin & Wang, 1999). Jin, Zhang et al. (2002) and Rousseau, Jin, and Yang (2001) both provide relatively thorough explorations and discussions of this journal list.

3.1.2. Journal selection criteria

The CSCD journal list is updated every 2 years, using both quantitative and qualitative methods. The most recent report (2019–2020) was released in April 2019 and listed 1,229 source journals in total: 228 English-language journals published in China and 1,001 Chinese-language journals. The selection criteria are summarized below (National Science Library of CAS, 2019a).

3.1.2.1. Journal scope

The journal must be published in either Chinese or English in China, with both an International Standard Serial Number (ISSN) and a China Domestic Uniform Serial Publication Number (CN). The subject coverage includes mathematics, physics, chemistry, earth science, biological science, agricultural science, medicine and health, engineering technology, environmental science, interdisciplinary areas, and other similar subject areas.

3.1.2.2. Research fields

The research fields are mainly derived from the Level 1 and Level 2 classes of the 5th Chinese Library Classification (CLC). However, the Level 2 classes may be further subdivided based on both the citation-coupling strength and the semantic similarity of the articles published in the corresponding journal set. In the most recent edition, there are 61 research fields. To avoid the possible bias of subjectively allocating journals to fields, classifications are based on cross-citation relationships, and any journal can be classified into more than one field.
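The CSCD does not publish its procedure in code form, but one ingredient of such citation-based classification can be illustrated with a minimal sketch: coupling strength computed as the cosine similarity of journal-level reference-count vectors. The journal names and counts below are hypothetical, and the actual procedure also folds in the semantic similarity of articles:

```python
import math

def coupling_strength(refs_a, refs_b):
    """Cosine similarity between two journals' reference-count vectors.

    refs_a, refs_b: {cited_journal: count} -- how often each journal's
    articles cite each cited venue (hypothetical data).
    """
    shared = set(refs_a) & set(refs_b)
    dot = sum(refs_a[j] * refs_b[j] for j in shared)
    norm_a = math.sqrt(sum(v * v for v in refs_a.values()))
    norm_b = math.sqrt(sum(v * v for v in refs_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two journals that cite a similar set of venues couple strongly;
# journals with disjoint reference patterns do not couple at all.
print(coupling_strength({"Phys A": 40, "Phys B": 10},
                        {"Phys A": 35, "Phys B": 20}))   # ~0.96
print(coupling_strength({"Phys A": 40, "Phys B": 10},
                        {"Soc X": 25, "Soc Y": 15}))     # 0.0
```

Journals whose pairwise coupling exceeds a chosen threshold could then be grouped into the same (sub)field, which is one way a Level 2 class might be subdivided.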

3.1.2.3. Evaluation indicators

To ensure fairness to all candidate journals, journal self-citations are excluded. The quantitative indicators used to measure different aspects of a journal’s quality are shown in Table 3.

Quantitative indicators of CSCD journal list

3.2. NSLC: JPT Journal List

3.2.1. Background

The JPT was built and is maintained by the Centre of Scientometrics, NSLC. The idea behind the partitioned design of this list began in 2000 with the goal of helping Chinese researchers distinguish between the JIFs of journals across different disciplines. The list was first released in 2004 in Excel format and only included 13 broad research areas. In 2007, these research areas were expanded to include the JCR categories, and, since 2012, the entire list has been published online to meet the growing number of retrieval requests.

This list provides reference data for administrators and researchers to evaluate the influence of international academic journals and is widely recognized by many research institutions as a metric in their cash reward policies (Quan, Chen, & Shu, 2017).

In 2019, the NSLC released a trial variation of the list while continuing to publish the official version. The upgraded JPT (trial) includes 11,930 journals classified into 18 major disciplines. The list covers most of the journals indexed in SCI and SSCI, as well as the ESCI journals published in China.

3.2.2. Journal selection criteria

Journals on the list are assessed using a rich array of citation metrics, including 3-year average JIFs. The list is divided into four partitions according to the 3-year average JIFs within each research area/field; using averages somewhat reduces the instability caused by significant annual fluctuations in JIFs. The partitions follow a pyramidal distribution: the top partition contains the 5% of journals with the highest 3-year average JIFs in their discipline, partition 2 covers the 6%–20% band, partition 3 covers the 21%–50% band, and the remaining journals fall into the fourth partition. Additionally, all the journals in the first partition, plus the 10% of journals in the second partition with the highest total citations (TC), are marked as “Top Journals.”
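A minimal sketch of this pyramidal partitioning follows, assuming only a mapping from journals to their last three JIFs within a single discipline (all names and numbers are hypothetical; the “Top Journals” flag, which also needs TC data, is not modeled):

```python
from statistics import mean

def assign_partitions(jifs_by_journal):
    """Assign JPT-style pyramid partitions within one discipline.

    jifs_by_journal: {journal: [JIF_y, JIF_y-1, JIF_y-2]}.
    Partition 1 = top 5% by 3-year average JIF, partition 2 = 6%-20%,
    partition 3 = 21%-50%, partition 4 = the rest.
    """
    avg = {j: mean(v) for j, v in jifs_by_journal.items()}
    ranked = sorted(avg, key=avg.get, reverse=True)
    n = len(ranked)
    partitions = {}
    for rank, journal in enumerate(ranked, start=1):
        pct = rank / n  # percentile position of this journal
        if pct <= 0.05:
            partitions[journal] = 1
        elif pct <= 0.20:
            partitions[journal] = 2
        elif pct <= 0.50:
            partitions[journal] = 3
        else:
            partitions[journal] = 4
    return partitions

# 20 hypothetical journals: J01 lands in partition 1, J02-J04 in 2,
# J05-J10 in 3, and the rest in 4.
demo = {f"J{i:02d}": [10 / i] * 3 for i in range(1, 21)}
print(assign_partitions(demo))
```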

In the 2019 edition, multidisciplinary journals, such as Nature and Science, were ranked according to the average impact of each paper in an assigned discipline, as determined by the majority of references given in the paper (Research Services Group at Clarivate, 2019). That said, the papers in these journals are counted as multidisciplinary, despite the fact that many of them are highly specialized and represent research in specific fields, such as immunology, physics, or neuroscience.

Compared to the official version, the trial version incorporates several essential updates (Centre of Scientometrics of NSLC, 2020). First, journals are classified based on the average impact of each paper they publish, with papers assigned to specific topics according to both citation relationships and text similarity (Waltman & van Eck, 2012). Second, this version introduces a citation success index (Franceschini, Galetto et al., 2012; Kosmulski, 2011) to replace JIFs as the measure of a journal’s impact. The citation success index of a target journal relative to a reference journal is defined as the probability that the citation count of a randomly selected paper from the target journal is larger than that of a randomly selected paper from the reference journal (Milojević, Radicchi, & Bar-Ilan, 2017). Third, the trial version extends coverage from the natural sciences into the social sciences to support the internationalization of domestic titles; more specifically, coverage is extended to some local journals that are not listed in the JCR but are listed in the ESCI.
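Written out from the definition just given, with $c_{t,i}$ the citation count of the $i$-th of $n_t$ papers in target journal $t$ and $c_{r,k}$ that of the $k$-th of $n_r$ papers in reference journal $r$, the empirical citation success index is

$$ S_{t|r} = \Pr(c_t > c_r) \approx \frac{1}{n_t n_r} \sum_{i=1}^{n_t} \sum_{k=1}^{n_r} \mathbf{1}\!\left[c_{t,i} > c_{r,k}\right] $$

(how ties $c_{t,i} = c_{r,k}$ are counted is a convention that varies across the cited formulations, e.g., adding half the probability of equality).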

The initial purpose of the list was to evaluate the academic influence of SCIE journals, to give researchers guidance on where to submit, and to support macro-level analysis by research management departments. Although the Centre of Scientometrics, NSLC, has consistently stated that the list should not be used to make judgments at the micro level (e.g., to evaluate the performance of an individual), many institutions still use it as a tool to evaluate the research of their employees. The list’s prominent position and strong influence in China’s scientific research evaluation have caused extensive debate, especially in the field of nuclear physics in 2018 (Wang, 2018).

3.3. CASSES: AMI Journal List

3.3.1. Background

The AMI journal list is managed by the Chinese Academy of Social Sciences Evaluation Studies (CASSES), which was established in July 2017 out of the Centre of Social Sciences Evaluation, Chinese Academy of Social Sciences (CASS). CASSES has developed a series of journal evaluation schemes for Chinese journals, based on the characteristics of disciplines and journals, that together form a comprehensive evaluation report on Chinese journals in the SSH. CASSES’ mandate is to optimize the use of scientific research journals and literature resources, as well as to provide references for journal evaluation, scientific research performance evaluation, scientific research management, talent selection, and so on (Ma, 2016). The purpose of AMI is formative evaluation “to help and improve,” rather than summative evaluation “to judge” a journal’s quality. Another goal is to increase recognition of journals in the SSH by collaborating nationally across institutions, rather than competing, to support good journals. The basic principle of AMI is to provide well-informed judgments about journals, not simple indicators, that translate into reliable advice on where to publish.

CASSES also evaluates both new journals and English-language journals published in China to promote their development; new journals are defined as those less than 5 years old. At present, no other domestic evaluation scheme has undertaken a similar expansion, which makes this one of the innovations of the index.

3.3.2. Journal selection criteria

The AMI journal list is updated every 4 years, and its comprehensive evaluation method combines quantitative evaluation with expert qualitative evaluation. According to the latest report, from 2018, 1,291 academic journals in the SSH founded in 2012 or earlier are published in mainland China, and 164 new journals and 68 English-language journals were targets of particular evaluation. The report divides the journals into categories: Top Journals (5), Authoritative Journals (56), Core Journals (546), Extended Journals (711), and Indexed Journals (179) (CASSES, 2018). A further 26 English-language journals without a CN were not evaluated.

The selection criteria for inclusion in the list are summarized below (CASSES, 2018; Su, 2019):

3.3.2.1. Journal scope

The journals in the AMI list include some 2,000 SSH journals listed by the former State Administration of Press, Publication, Radio, Film and Television of the People’s Republic of China in 2014 and 2017 (SAPPRFT, 2014, 2017). The lists include English-language journals and new journals founded in 2013–2017, and the final scope of the evaluation is 1,291 Chinese academic journals, 164 new journals, and 68 English-language journals.

3.3.2.2. Research fields

The journals are divided into three broad subject categories, 23 subject categories, and 33 subject subcategories, based on the university degree and academic training directory published by the Ministry of Education of the People’s Republic of China, the Classification and Codes of Disciplines (GB/T 13745-2009), and the Chinese Library Classification (fifth edition).

3.3.2.3. Evaluation indicators

There are three evaluation metrics: attraction, management power, and influence. Attraction gauges the journal’s external environment, its reputation among readers and researchers, and its ability to acquire external resources. Management power refers to the ability of the editorial team to promote the journal’s development. Influence represents the journal’s academic, social, and international influence, which is affected by the other two powers (attraction and management).

In addition to these three indicators, there are a further 10 second-level indicators and 24 third-level indicators, as shown in Table 4. Looking closely at the list, one can see that most of the quantitative indicators can be obtained from different data sources (e.g., the journal’s website, academic news sources, citation platforms). Data to inform the remaining qualitative indicators are drawn from a broad survey and follow-up interviews. Note that the weights of the first-level indicators differ for pure humanities journals (H) versus social sciences (SS) and multidisciplinary journals (MJ), as indicated in the table by H/SS/MJ; a small illustrative scoring sketch follows the table notes below.

Quantitative indicators of AMI journal list

Note: The indicator type S means measurements add to the total score; O means measurements will reduce the total score; N means measurements do not affect the score in the current edition.

This refers to the proportion of a journal’s papers that are supported by national funding.

This is a point-deduction indicator: if there is no academic misconduct, the score is “0”; if there is such behavior, points are deducted.
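Putting the pieces above together, here is a minimal sketch of how a multi-level weighted scheme with type-dependent weights and an O-type (deduction) indicator can be combined. The weights, scores, and function name are hypothetical, not the actual AMI values from Table 4:

```python
# Hypothetical first-level weights per journal type (H / SS / MJ);
# the real AMI weights are given in Table 4 and differ from these.
WEIGHTS = {
    "H":  {"attraction": 0.30, "management": 0.30, "influence": 0.40},
    "SS": {"attraction": 0.25, "management": 0.25, "influence": 0.50},
    "MJ": {"attraction": 0.25, "management": 0.30, "influence": 0.45},
}

def ami_style_score(journal_type, scores, misconduct_penalty=0.0):
    """Weighted sum of first-level indicator scores (0-100 scale),
    minus an O-type deduction for academic misconduct."""
    w = WEIGHTS[journal_type]
    total = sum(w[k] * scores[k] for k in w)
    return total - misconduct_penalty  # O-type indicators reduce the total

# A clean SS journal vs. the same journal with a misconduct deduction
print(ami_style_score("SS", {"attraction": 80, "management": 70, "influence": 90}))
print(ami_style_score("SS", {"attraction": 80, "management": 70, "influence": 90},
                      misconduct_penalty=10))
```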

3.4. ISTIC: CJCR Journal List

3.4.1. Background

As late as 1987, few in China knew how many papers Chinese scientists published worldwide, and no one knew how many papers were published domestically. As a result, the Institute of Scientific and Technical Information of China (ISTIC) was commissioned to conduct a paper “census.” Thus, initiated by ISTIC and sponsored by the Ministry of Science and Technology (then the State Science and Technology Commission), the China Scientific and Technical Papers and Citations Database (CSTPCD) was born as a database dedicated to the partial evaluation of the research performance of China’s scientists and engineers (Wu, Pan et al., 2004).

ISTIC took advantage of the CSTPCD data to conduct statistical analyses on various categories of China’s scientific output each year. The results are published in an annual report, with an accompanying press conference, to inform society of China’s academic progress. The report includes the Chinese S&T Papers Statistics and Analysis: Annual Research Report and the Chinese S&T Journal Citation Reports (Core Edition), which provide a wealth of information and decision support for government administration departments, colleges and universities, research institutions, and researchers (ISTIC, 2020a).

3.4.2. Journal selection criteria

The list of journals selected by the CSTPCD is called the “Statistical Source Journals of Chinese Science and Technology.” These journals are selected through rigorous peer review and quantitative evaluation and so are regarded as important scientific and technical journals in their disciplines in China. Currently, the list includes 2,049 journals (1,933 Chinese-language and 116 English-language) in the natural sciences, engineering, and technology, and 395 journals in the social sciences (ISTIC, 2020b). More details on the selection criteria are provided below (ISTIC, 2020a).

3.4.2.1. Journal scope

The catalog of China’s core S&T journals is adjusted once a year. The candidate journals include the core S&T journals selected in the previous year, along with new applicants for the current year that have held a CN number for more than 2 years. Further, a journal’s impact indicators must rank at the forefront of its discipline; the journal should operate in line with national regulations and academic publishing norms; and it must meet publishing integrity and ethical requirements. If a journal fails to meet these criteria or its peer assessment, its application is rejected or, if it is already listed in the catalog, it is withdrawn and can be reevaluated 1 year later.

3.4.2.2. Research fields

The journals are distributed across 112 subject classifications in the natural sciences and 40 in the social sciences.

3.4.2.3. Evaluation indicators

The evaluation system is based on multiple indexes, mostly bibliometric, and a combination of quantitative and qualitative methods. Specific indexes include citation frequency, JIF, inclusion in important databases, and an overall evaluation score (Ma, 2019).

3.5. Peking University Library: GCJC Journal List

3.5.1. Background

The GCJC is a research project conducted by researchers at the Peking University Library in conjunction with a dozen other university libraries and experts from other institutions. The guide is regularly updated to reflect the changing dynamics of journal development: it was published every 4 years from 1992 and every 3 years since 2011. It appears only as a printed book, and to date eight editions have been published by Peking University Press.

Whether and how the guide is used is up to the institutions that make use of it. It is worth noting that the guide is not an evaluation criterion for academic research and has no legal or administrative force, but some institutions do use it this way, which can create conflict. The selection principles emphasize that core journals are a concept relative to specific disciplines and periods. For the most part, the guide is used by library and information departments as a reference for purchasing and reserving books, and to help tutors develop reading lists (Committee for A Guide to the Core Journals of China, 2018).

3.5.2. Journal selection criteria

The 2017 edition of the GCJC contains 1,983 core journals assigned to seven categories and 78 disciplines. The selection criteria are provided below (Chen, Zhu et al., 2018).

3.5.2.1. Journal scope

Any Chinese journal published in mainland China can be a candidate.

3.5.2.2. Research fields

Fields are based on the CLC categories: Philosophy, Sociology, Politics, and Law (Part 1); Economy (Part 2); Culture, Education, and History (Part 3); Natural Science (Part 4); Medicine and Health (Part 5); Agricultural Science (Part 6); and Industrial Technology (Part 7).

3.5.2.3. Evaluation indicators

Selection rests on a comprehensive quantitative and qualitative analysis of 16 evaluation indicators, combined with the opinions of experts and scholars, as shown in Table 5.

Quantitative indicators of GCJC journal list

3.6. ICSSRA: CSSCI Journal List

3.6.1. Background

The CSSCI was developed by Nanjing University in 1997 and launched in 2000. It collects all source and citation information from all papers in its source journals and source collections (published as one volume). The index records in the CSSCI contain most of the bibliographic information in the papers; the content is standardized, and the reference data are searchable (Qiu & Lou, 2014).

The CSSCI focuses on the social sciences in China and is compiled to provide an efficient repository of information about Chinese knowledge innovation and cutting-edge research in the SSH, coupled with a comprehensive evaluation of China’s academic influence in these areas (Su, Deng, & Shen, 2012; Su, Han, & Han, 2001). The journal data in the CSSCI provide a wealth of raw information and statistics for researchers and institutions to study, or to use in evaluations based on authentic records of research output and citations.

3.6.2. Journal selection criteria

Through quantitative and qualitative evaluation methods, the 2019–2020 edition of the CSSCI contains 568 core source journals and 214 extended source journals assigned among 25 disciplines (ICSSRA, 2019). Extended source journals are evaluated, and those that qualify are transferred to the core source journal list. The selection criteria are summarized below (CSSCI Editorial Department, 2018; ICSSRA, 2019).

3.6.2.1. Journal scope

The journals must be Chinese and publish mainly original academic articles and reviews in the social sciences.

Journals published in mainland China must have a CN number. Journals published in Hong Kong, Macao, or Taiwan must have an ISSN, and academic collections must have an ISBN.

Journals must be published regularly according to an established publishing cycle and must conform to the standards of journal editing and publication with complete and standardized bibliographic information.

3.6.2.2. Research fields

Each article in the CSSCI database is categorized according to the Classification and Code of Disciplines (GB/T 13745-2009), with reference to the Catalogue of Degree Awarding and Personnel Training (2011) (degree [2011] No. 11) and the Subject Classification Catalogue of the National Social Science Foundation of China. At present, there are 23 journal categories based on subject classification and two general journal categories: multidisciplinary university journals and multidisciplinary social science journals.

3.6.2.3. Evaluation indicators

The source journals of CSSCI are determined according to their 2-year JIF (excluding self-citations), total times cited, other quantitative indicators, and the opinions of experts from various disciplines.
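Concretely, one natural way to write this is the standard two-year JIF given earlier with the numerator restricted to citations that do not originate from journal $j$ itself:

$$ \mathrm{JIF}^{\mathrm{no\,self}}_{j,y} = \frac{\left[C_j(y;\,y-1) + C_j(y;\,y-2)\right] - S_j(y)}{N_j(y-1) + N_j(y-2)} $$

where $S_j(y)$ counts the citations in the numerator that come from papers published in $j$ itself (the notation is ours; the CSSCI documentation states the exclusion in words rather than symbols).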

3.7. CNKI: WAJCI Journal List

3.7.1. Background

The China National Knowledge Infrastructure (CNKI) is the largest comprehensive database in China. It is a key national information project led by Tsinghua University, first launched in 1996 in conjunction with the Tsinghua Tongfang Company. In 1999, CNKI began to develop online databases, and in October 2009 it unveiled the construction of an international digital library together with world-famous foreign partners. At present, CNKI contains literature published since 1915 in over 7,000 academic journals published in China, including nearly 2,700 core and other significant journals. The database is divided into 10 series, 168 subjects, and 3,600 sub-subjects (CNKI, 2020). When a Chinese scholar wants to read a paper, CNKI is typically the first port of call.

Since 2009, CNKI has invested in and managed the “International and Domestic Influence of Chinese Academic Journals Statistics and Analysis Database.” This database publishes international and domestic evaluation indicators for nearly 6,000 academic journals officially published in China across four reports: the “Annual Report for Chinese Academic Journal Impact Factors,” the “Annual Report for International Citation of Chinese Academic Journals,” the “Journal Influence Statistical and Analysis Database,” and the “Statistical Report for Journal Network Dissemination” (CNKI, 2018b).

Since 2018, CNKI has also released the “Annual Report for World Academic Journal Clout Index (WAJCI).” This report aims to establish a scientific and comprehensive method for evaluating the academic influence of journals and provides objective statistics and a comprehensive ranking of the academic impact of the world’s journals. The idea is not only conducive to building an open, diversified, and fairer evaluation system for journals; it is also helpful for improving the representation of Chinese journals in Western-dominated international indexes (CNKI, 2018a).

3.7.2. Journal selection criteria

The WAJCI journal list is updated annually; the most recent report was released in October 2019. Its statistics were derived from 22,325 source journals from 113 countries and regions (21,165 from the WoS database, including 9,211 from SCIE, 3,409 from SSCI, 7,814 from ESCI, and 1,827 from A&HCI, plus 1,160 Chinese journals). The WoS database does not provide JCR evaluation reports for some journals; in the case of new journals, this is because citation frequencies are typically very low. Excluding the journals without a JCR report leaves 13,088 journals to be evaluated, comprising 1,429 journals from mainland China and 11,659 from other countries and regions. Of these, 486 journals are in the SSH; 957 are in science, technology, engineering, and medicine (STEM); and 14 are interdisciplinary (CNKI, 2019).

The selection criteria follow.

3.7.2.1. Journal scope

Journals should be published continuously and publicly.

Journals must predominantly publish original academic achievements, which should be peer reviewed.

Journals should comply with the requirements of international publishing and professional ethics.

Papers published in journals must conform to international editorial standards, which include editorial and publishing teams of high standing in their disciplines and a high level of originality, scientific rigor, and excellent readability.

3.7.2.2. Research fields

CNKI mainly follows a hybrid of the JCR classification system, the International Classification for Standards (ICS), and the CLC. Chinese journals that cannot be found in the JCR are categorized into disciplines using one of the other schemes, and heavily overlapping JCR disciplines are merged as appropriate. The final list spans 175 STEM disciplines and 62 SSH disciplines. Ultimately, all 13,088 journals are assigned to relatively accurate disciplines to ensure that journals are ranked and compared within a unified discipline system.

3.7.2.3. Evaluation indicators

To comprehensively assess the international influence of journals, CNKI developed a composite indicator, called the CI index (clout index), that combines the JIF with citation counts (Wu, Xiao et al., 2015). It is generally believed that the most influential journal in a field should be the one with the highest JIF and TC; the CI value therefore represents how close a journal’s influence is to this optimal state in its field. The smaller the gap, the closer the journal is to the optimum. Furthermore, to compare journals on an international scale, CNKI publishes the indicators in the WAJCI: the higher the WAJCI value, the higher the journal’s influence. Because the WAJCI reflects the relative position of a journal’s academic influence within a discipline, it can be used for interdisciplinary and even cross-year comparison, which has practical value.
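CNKI’s exact formula is not reproduced in this paper, but the “distance to the optimal state” idea can be illustrated with a minimal sketch, reading the CI as closeness to the ideal point (highest JIF, highest TC in the field) after normalization. This is an illustrative interpretation, not CNKI’s published definition, and all numbers are hypothetical:

```python
import math

def ci_style_index(jif, tc, jif_max, tc_max):
    """Illustrative clout-style index: normalized distance to the ideal
    point (jif_max, tc_max) in the (JIF, TC) plane, rescaled so that 1.0
    means the journal sits at the optimal state of its field."""
    gap = math.hypot(1 - jif / jif_max, 1 - tc / tc_max)  # distance to (1, 1)
    return 1 - gap / math.sqrt(2)  # 0 = worst corner, 1 = ideal point

# Field leader vs. a mid-field journal (hypothetical numbers)
print(ci_style_index(jif=10.0, tc=50_000, jif_max=10.0, tc_max=50_000))  # 1.0
print(ci_style_index(jif=4.0, tc=12_000, jif_max=10.0, tc_max=50_000))   # ~0.32
```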

3.8. Other Lists

In addition to the above seven main journal lists in China, another influential list, the Research Center for Chinese Science Evaluation (RCCSE) core journal list, was developed by the RCCSE of Wuhan University in the early 2000s to provide multidisciplinary and comprehensive Chinese journal rankings (Zhang & Lun, 2019). The evaluated journals mainly include pure academic and semiacademic journals from the natural sciences or the humanities and social sciences (Qiu, Li, & Shu, 2009). The list adopts a mixture of quantitative and qualitative methods to evaluate target journals and mainly concerns the quality, level, and academic influence of the journal (Qiu, 2011); its general principles are classified evaluation and hierarchical management (Qiu, 2011). In addition, some other lists published by government agencies, professional associations, and universities warrant mention, and they are briefly described below.

3.8.1. CDGDC: A-class journal list

In 2016, the fourth China University Subject Rankings (CUSR) was launched to evaluate the subjects of universities and colleges in mainland China, in line with the “Subject Catalogue of Degree Awarding and Personnel Training” approved by the Ministry of Education. Organized by the China Academic Degrees and Graduate Education Development Centre (CDGDC), it aimed to acquaint participating universities and institutions with the strengths and weaknesses of their subject offerings and curricula, and to provide relevant information on national graduate education (CDGDC, 2016). The instructions of the fourth CUSR specifically point out that the number of papers published in A-class journals (both Chinese and international) is a critical indicator of the quality of a subject offering (Ministry of Education, 2016a).

As described by the Ministry of Education (Ministry of Education, 2016a, 2016b), the process for selecting “A-class” journals was as follows. First, publishers and bibliometric data providers (e.g., Thomson Reuters, Elsevier, CNKI, CSSCI, CSCD) were invited to provide a preliminary list of journals based on bibliometric indicators, such as JIF and reputation indices. Then, doctoral tutors were invited to vote online for the candidates. Last, the voting results were submitted to the Academic Degrees Committee of the State Council for review, which finalized the journal list. The A-class journal list was an attempt to combine bibliometric indicators and expert opinions. However, it was abandoned only 2 weeks after release as a fiery debate erupted among many scientific communities.

3.8.2. China Computer Federation: CCF-recommended journal list

The China Computer Federation (CCF) is a national academic association, established in 1956. Its publication ranking list, released in 2012, divides well-known international computer science conferences and journals into 10 subfields. A rank of A indicates the top conferences and journals, B covers journals and conferences with significant impact, and other important conferences and journals are placed in the C bracket.

In April 2019, the CCF released the 5th edition of the List of International Academic Conferences and Periodicals Recommended by the CCF. In the course of the review, the CCF Committee on Academic Affairs brought experts together to thoroughly discuss and analyze the nominations. The candidates were reviewed and shortlisted by an initial assessment panel, then examined by a final evaluation panel before the final results were announced. Factors such as a venue’s influence and an approximate balance between different fields were considered when compiling the list (China Computer Federation, 2019). Today, this list is widely recognized in computer science and has accelerated the publication of more papers in top conferences, as well as improving the quality of those publications (Li, Rong et al., 2018).

3.8.3. School or departmental journal lists

With the rapidly increasing, and increasingly burdensome, number of scholarly outlets to consider in academic assessment, administrators and research managers are constantly looking to improve the speed and efficiency of their assessment processes. Many construct their own school or departmental list as a guide for evaluating faculty research (Beets, Kelton, & Lewis, 2015). Business schools particularly prefer internal journal lists to inform their promotion and tenure decisions (Bales, Hubbard et al., 2019). In fact, almost all of the 137 Chinese universities that receive government funding have created their own internal journal lists as indicators of faculty performance (Li, Lu et al., 2019).

4. COMPARATIVE ANALYSIS OF JOURNAL EVALUATION SYSTEMS IN CHINA

What is clear from the descriptions of the major journal lists is that each was established to meet specific objectives and each has its own selection criteria, yet there may be as many similarities between the seven systems as there are differences. Therefore, for a broader picture of the evaluation system landscape, we undertook a comprehensive comparative analysis; our findings are presented in this section.

4.1. Profiles of Journal Lists and Indexed Journals

The CJCR was first established by ISTIC in 1987; GCJC, CSCD, CSSCI, and JPT followed shortly after, while AMI and WAJCI joined the club more recently. As indicated in Table 6, journal selection involves a wide variety of participants, including research institutes, universities, and private enterprises. Another observation is that the lists are not updated with the same regularity: currently, JPT, CJCR, and WAJCI are updated once a year; CSCD and CSSCI every 2 years; GCJC every 3 years; and AMI every 4 years.

Profiles of the leading academic journal lists in China

Note: The research areas are classified into five broad categories: Arts & Humanities (AH); Life Sciences & Biomedicine (LB); Physical Sciences (PS); Social Sciences (SS); Technology Engineering (TE).

Clearly, the journal lists differ in number of journals, scope, languages, and research areas. JPT and WAJCI contain the most journals, and both have a domestic and international scope; all of the other lists cover only domestic journals, making them smaller than the previous two. Although most of the journal lists include English-language journals, such journals are few in China. In terms of disciplines, JPT and CSCD focus more on the natural sciences; AMI and CSSCI focus on the SSH; and CJCR, GCJC, and WAJCI span all disciplines.

There are also different requirements for ISSNs and CNs. AMI, CJCR, and GCJC only accept journals with CNs, and the JPT only accepts journals with an ISSN. CSSCI and WAJCI have the least stringent requirements, requiring that the journal has one or the other. By contrast, CSCD requires its journals to have both.

4.2. The Evaluation Characteristics of Journal Lists

How journals are assessed is the most critical aspect of any evaluation system. Further, as indicated in Table 7 and Table 8, these systems were designed with many different objectives in mind. Although often used in scientific evaluation, the purpose of most is to provide readers, librarians, and information agencies with reference material to help them purchase and manage journal stocks; this is certainly the case with JPT, CJCR, GCJC, and CSSCI. CSCD, CJCR, and CSSCI extend this mission further by seeking to provide references for research management and academic evaluation. However, the objectives of AMI and WAJCI are different: AMI’s goal is to increase the quality and recognition of journals in the SSH, while CNKI built the WAJCI to provide “apples-to-apples” statistics on the world’s journals.

Evaluation purposes, methods, and results of the leading academic journal lists in China

Evaluation criteria, indicators, data sources of the leading academic journal lists in China

Note: Based in part on Ma (2019).

The methods of calculation and indicators that each system uses are different. Most rely on a combination of quantitative and qualitative tools, while JPT and WAJCI are largely quantitative systems. Both are heavily dependent on JIFs, but JPT relies on a 3-year average, while WAJCI combines JIFs with TC to create its indicator. The other lists mainly evaluate the attraction and management ability of journals through bibliometric indicators, such as JIFs and citations, supplemented by peer review. AMI, however, adds extra indicators over and above the standard set.

Moreover, the weight of each indicator changes depending on the purpose of the list. For example, AMI’s mission is to improve the quality and influence of both China’s SSH journals and the evaluation systems that rank them. Therefore, AMI houses a comprehensive set of indicators that cover processes, talent, management, the editorial team, and so on, each measured against the three “powers” (i.e., attraction, management, and influence). By contrast, the fundamental purpose of the GCJC is to help librarians optimize their journal collections and provide readers with guidance on the titles in their disciplines. Hence, the GCJC rests more on bibliometric indicators and on quantitative analyses of the growth trends and scattering patterns (in the Bradford sense) of journals in a field.

Data sources are another characteristic for comparison. JPT mainly draws on an international database, whereas WAJCI and GCJC combine international databases with local Chinese databases to expand the type and volume of data provided. AMI's indicator data draw from a wide range of sources, such as the self-built and self-collected data of CASSES (e.g., CHSSCD), third-party data, and the self-evaluated data of journal editorial departments; even so, the data included are mostly determined by the producers of the original indexes. The same is true for CSCD, CSSCI, and GCJC.

The last criterion for comparison is the grading system. All the systems divide their listed journals into disciplines, and most calculate rankings relative to the discipline. JPT and WAJCI each have four tiers, but the JPT system is a pyramid, whereas the WAJCI scheme is equally divided, the same as JCR. AMI's system is more complicated: journals are first divided into three categories (A-journals, new journals, and English-language journals) and then subdivided into five levels according to quality. CSCD and CSSCI are divided into two levels, core and extended journals, while CJCR and GCJC do not assign grades. To some extent, these divisions are hierarchical and systematic, which is convenient for users. However, how many journals appear in more than one index, and how similar their rankings are across indexes, needs further analysis and discussion.
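To make the two grading geometries concrete, here is a minimal Python sketch. Equal division follows the JCR convention of four 25% bins; the pyramid cutoffs (top 5%, next 15%, next 30%, remainder) are illustrative assumptions, not JPT's published thresholds.

    def equal_quartile(rank, n):
        """JCR/WAJCI-style grading: four equally sized tiers of 25% each."""
        return -(-4 * rank // n)  # ceiling of 4*rank/n

    def pyramid_tier(rank, n, cutoffs=(0.05, 0.20, 0.50)):
        """Pyramid grading with hypothetical cutoffs: a small elite tier
        at the top and progressively larger tiers below."""
        fraction = rank / n
        for tier, cut in enumerate(cutoffs, start=1):
            if fraction <= cut:
                return tier
        return len(cutoffs) + 1

    # A journal ranked 10th of 100 in its discipline is Q1 under equal
    # division but only tier 2 under the pyramid scheme.
    print(equal_quartile(10, 100), pyramid_tier(10, 100))  # -> 1 2

The practical consequence is that the same journal can carry very different grades across systems, which is one reason cross-index comparisons need care.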

There is no doubt that China’s journal evaluation and selection systems have achieved remarkable growth and impact, resulting in some influential journal lists. In the 60 years since journal evaluation was first introduced to China, the functions of these lists have grown, diversified, and generated their fair share of controversy. Some lists simply seek to provide decision-making support for information consultants, journal managers, research managers and funders, editors, and others. Others are designed to help optimize library collections, provide reading-list guidance or support referencing and citation services. Further, and more controversially, journal evaluations are increasingly becoming proxies for evaluating the academic achievements of individual researchers. As an extension of the original purposes, the evaluation of core journals has an important influence on a journal’s editorial procedures and strategies. To maintain the continuous development of their academic journals, publishers and publishing houses must conduct journal evaluations as well as supervision ( Ren & Rousseau, 2004 ).

5.1. Greater Cooperation Among the Different Journal List Providers

Seven different journal lists is a lot, even for a country as large and diverse as China. What is more notable, however, is the number of different institutions that contribute to these lists, and the fact that individual universities still feel the need to create their own internal lists to complement the published systems. Everyone in this landscape gathers its own data, and most construct their own data sets, classify and rank the journals and papers, and calculate their own rankings and metrics. The result, in many cases, is duplicated effort. We know that an influential and authoritative journal evaluation or selection system needs to be based not only on sound indicators but also on a comprehensive range of triangulated data sources. An obvious solution is for the producers of these evaluation instruments to collaborate on research and development. They could build a national platform for coordination, influence, and collaboration on developing shared information resources and tools and agreed definitions and protocols ( Zhang & Sivertsen, 2020 ). Cooperation would be conducive to establishing a unified and authoritative journal evaluation and selection system and, more importantly, it could significantly increase the objectivity and fairness of the results.

5.2. More Compatibility Between Subject Classifications

Our analysis shows that each scheme adopts a different subject classification system. However, many articles are interdisciplinary and, because papers are assessed relative to their discipline, a publication can be evaluated with very different results in each category. Therefore, when evaluating and selecting journals, institutions should pay attention to the subject classification of journals to ensure the relative accuracy of the grades.

5.3. Exercise Caution when Using Journal Evaluations for Scientific Research Assessment and Management

Although China's journal evaluation and selection systems are methodical and reasonably accurate, it is worth noting that journal rankings (and journal-level indicators such as JIFs) are not suitable for assessing the quality of individual research. The phenomenon of emphasizing JIFs and rankings in research evaluation is prevalent and persistent in China, but at least awareness of its adverse effects is growing ( Ministry of Education, 2020 ). Journal evaluation systems can make a strong contribution to research evaluation at the macro level, but applying those rankings as measures of impact and quality at the micro level (to individuals, institutions, subjects, and the like) should be done with extreme caution. The focus should instead be on macrolevel information, such as whether a given journal is indexed in systems such as CSCD and CSSCI. A good example of wise macrolevel use of these evaluation systems can be found in the National Science Fund for Distinguished Young Scholars: at one time, the bibliometric data indexed in the CSCD were considered in the Fund's document preparation and subsequent peer review.

Moreover, and in line with the new research evaluation policy in China as of 2020, the use of information such as journal quartiles and JIFs for microlevel evaluations should be reduced. Individual institutions need to establish their own guidelines on how to use journal ranking lists in their decisions ( Black et al., 2017 ), but when journal rankings are used they should be combined with other indicators. Research managers are also beginning to notice that there is no direct link between the influence of a journal and a single paper published in it. The use of JIFs for measuring the performance of individual researchers and their publications is highly contested and has been demonstrated to be based on wrong assumptions ( Zhang et al., 2017 ).

Accordingly, there have been some recent efforts to improve research evaluation at the article level, as opposed to the journal level, such as the F5000 project from ISTIC ( http://f5000.istic.ac.cn/ ). In this project, 5,000 outstanding articles from the top journals are selected each year to showcase the quality of Chinese STM journals ( Ma, 2019 ). The excellent articles project by the CAST ( http://www.cast.org.cn/art/2020/4/30/art_458_120103.html ) is a similar initiative. Reform will not be achieved overnight; it is long-term and arduous work. However, the many steps that need to be taken to get there begin with collaborative efforts such as these.

5.4. Collaborate with the International Bibliographic Databases

At present, there is little interconnection between the international evaluation systems and China's, especially in the SSH. Journal list producers should try to cooperate with the international bibliographic databases to promote the internationalization of China's journal evaluation systems. For example, linkages between CSCD and SCI over citation data have already been established, and other joint systems, such as between CSSCI and SSCI, might be promoted in the future. However, we should also realize that SSCI and A&HCI only partly represent the world's scholarly publishing in the SSH ( Aksnes & Sivertsen, 2019 ). Therefore, China's journal evaluation institutions should pursue cooperation with international evaluation systems while continuing to improve their own systems as much as possible.

5.5. Accelerate the Establishment of an Authoritative Evaluation System

At present, there are seven predominant journal lists in China, each established with its own evaluation objectives. The result is a dispersed rather than an integrated system. Although this diversity has allowed for the exploration of evaluation methods and data sources, there is no unified, authoritative standard. With the new research evaluation policy as of 2020, China is moving away from indicators based on the WoS as the standard. This should empower China's research institutions and funding organizations to define new standards, but it is a process that needs to be coordinated ( Zhang & Sivertsen, 2020 ). We contend that one comprehensive journal list, both domestic and international, should be created that reflects the full continuum of research fields, including interdisciplinary and marginalized fields. The list needs to be dynamic to reflect the changing journal market, and the evaluations need to be organized, balanced, and representative of a range of interinstitutional expert advice. A national evaluation system would not only conserve resources but also increase the credibility and authority of the core journal list. As an example, South Korea has only one system, managed by the National Research Foundation of Korea. The same is true of several European, African, and Latin-American countries. So, while a national system is not a new idea, it is perhaps a proposal that deserves to be reconsidered for the scientific community in China.

Acknowledgments

We would like to thank Professor Xiaomin LIU (National Science Library, Chinese Academy of Sciences), Professor Liying YANG (National Science Library, Chinese Academy of Sciences), Professor Jinyan SU (Chinese Academy of Social Sciences Evaluation Studies), and Professor Jianhua LIU (Wanfang Data Co., LTD) for providing valuable data and materials. We also thank Ronald Rousseau (KU Leuven & University of Antwerp) and Tim Engels (University of Antwerp) for providing insightful comments.

Author Contributions

Ying Huang: Funding acquisition, Formal analysis, Methodology, Resources, Supervision, Writing—original draft, Writing—review & editing. Ruinan Li: Formal analysis, Investigation, Writing—original draft, Writing—review & editing. Lin Zhang: Funding acquisition, Investigation, Resources, Supervision, Writing—review & editing. Gunnar Sivertsen: Funding acquisition, Methodology, Writing—review & editing.

Competing Interests

The authors have no competing interests.

Funding Information

This work is supported by the National Natural Science Foundation of China (Grant Nos. 72004169, 71974150, 71904096, and 71573085), the Research Council of Norway (Grant No. 256223), the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (18YJC630066), and the National Laboratory Center for Library and Information Science at Wuhan University.

Data Availability

The raw bibliometric data were collected from Clarivate Analytics. A license is required to access the Web of Science database. Therefore, the data used in this paper cannot be posted in a repository.


Abbreviations and full names used in the article


Scholarly Publishing: Choosing a journal to publish in


How can you identify journals to publish your work in? To start, look at the journals you read, that your colleagues read and publish in, and at who you cite in your work. Is there a pattern to those journals?

There are also additional tools that you can use to identify & evaluate journals you're considering publishing in. Browse this section of the guide to learn more about evaluating a journal; tools to use for finding appropriate journals such as journal directories & article analyzers; tools to measure the impact of a journal; and finding an undergraduate research journal to publish in.

Evaluating journals

When considering a journal as a potential place to publish, here are some things you might ask yourself:

Is the journal the right place for my work?

  • Does the subject matter covered in the journal match your scholarship?
  • Do the types of articles published and article length guidelines match with what you want to submit?
  • Who is the audience of the journal?

Is this a trusted journal?

Look for journals where you can answer yes to many of the following questions:

  • Can you identify the publisher? Are they affiliated with an organization you're familiar with? Is there contact information present? 
  • Do the affiliations & backgrounds of the editorial board and authors publishing in the journal appear to be appropriate for the subject matter of the journal?
  • Are articles peer-reviewed?
  • Does the journal have an ISSN, and do articles have DOIs?
  • Are the journal's copyright policies & any fees to publish clear? If you'd like to publish open access, are there options?
To check whether a journal is indexed in major databases, you can search for its title in:

  • Web of Science for journals spanning the humanities, social sciences, and STEM fields (select "Publication Name" from the drop down menu next to the search box)
  • Scopus  for journals in the social sciences and STEM fields
  • SciFinder  for journals in Chemistry and related fields (select "Journal" under the References bar)
  • PubMed  for life sciences, biomedical, clinical, and public/community health journals (choose "Journal" from the drop down menu next to the search box)
  • JSTOR  for journals spanning the arts, humanities, social sciences, and sciences (scroll down and search using the "Publication Title" search box)

You can also look at the Think Check Submit checklist, use a journal evaluation tool, or talk to the library!

"Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices."

Grudniewicz, A., et al. (2019). Predatory journals: No definition, no defence. Nature, 576(7786), 210–212. https://doi.org/10.1038/d41586-019-03759-y

Visit the website for the journal and consider the questions in the  Evaluating journals section above. Some red flags include:

  • The journal is  not  listed in the  Directory of Open Access Journals  (DOAJ)
  • It's  not  listed in  Ulrichs  (Tufts login required), which is an authoritative source on publisher information, including Open Access titles
  • It's  not  widely available within major databases
  • You don't recognize previously published authors or members of the editorial board
  • The journal isn't affiliated with a university or scholarly organization you are familiar with
  • You can't easily identify if they have author processing fees and/or how much they cost.
  • The journal doesn't appear professional - look for an impact factor, an ISSN, DOIs for individual articles, and easy to find contact information
  • There isn't clear information about a peer-review process, or the journal promises extremely fast turn-around times to publishing that don't allow enough time for review

Journal directories

Use these resources to browse for an appropriate journal for your work, or to research a title that you're considering publishing in.

  • Directory of Open Access Journals Use DOAJ to search or browse high-quality, peer-reviewed open access journal titles in all subjects and languages.
  • MLA Directory of Periodicals Find out information for thousands of journals and book series that cover literature, literary theory, dramatic arts, folklore, language, linguistics, pedagogy, rhetoric and composition, and the history of printing and publishing.
  • Ulrichsweb Ulrichsweb is the authoritative source of bibliographic and publisher information on more than 300,000 periodicals of all types: academic and scholarly journals, Open Access publications, peer-reviewed titles, popular magazines, newspapers, newsletters, and more from around the world.

Article analyzers & journal suggesters

If you've written an article but aren't sure where to submit it, these tools can help. They use your article's title, keywords, abstract, or full text to find journals that have published similar articles (a sketch of how such matching works follows the list below). The description for each resource notes if it's limited to a specific publisher or discipline.

  • B!SON Open Access Journal Finder Enter the title, abstract, and/or references of your paper to find an open access journal suitable to publish in.
  • JSTOR Text Analyzer Drag and drop a copy of your article into the Text Analyzer, and the tool will find similar content in JSTOR. Consider the journals that those papers are published in.
  • Jane (Journal Author/Name Estimator) Enter your article title and/or abstract of the paper in the box, and click on 'Find journals', 'Find authors' or 'Find Articles'. Jane will then compare your document to millions of documents indexed in Medline to find the best matching journals, authors or articles.
  • Elsevier Journal Finder Elsevier Journal Finder uses smart search technology and field-of-research specific vocabularies to match your article to Elsevier journals.
  • IEEE Publication Recommender Searches 170+ periodicals and 1500+ conferences from IEEE, provides factors such as Impact Factor and Submission-To-Publication Time.
  • ChronosHub Journal Finder Browse, search, filter, sort, and compare more than 40,000 journals to find the right journal while staying compliant with your funders' Open Access policy.
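As mentioned above, here is a rough sketch of the idea behind such suggesters: represent your abstract and each candidate journal's published content as term vectors and rank journals by textual similarity. It is a toy illustration with invented journal names and texts, not the actual algorithm of Jane, B!SON, or any other tool listed.

    # A toy journal suggester: TF-IDF vectors plus cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    journal_docs = {  # one "document" of past abstracts per invented journal
        "Journal A": "deep learning segmentation of breast mri images",
        "Journal B": "survey of reading habits among undergraduate students",
        "Journal C": "citation analysis and journal impact indicators",
    }
    my_abstract = "we compare citation-based indicators of journal impact"

    names = list(journal_docs)
    matrix = TfidfVectorizer().fit_transform(
        [journal_docs[n] for n in names] + [my_abstract]
    )
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    for name, score in sorted(zip(names, scores), key=lambda t: -t[1]):
        print(f"{name}: {score:.2f}")  # "Journal C" should rank first

Real services index millions of published abstracts, but the ranking principle is the same: journals whose published content looks most like yours come out on top.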

Undergraduate research journals

Undergraduate research journals aren't indexed in many of the sources we typically use for finding journals, so lists compiled by universities and organizations are good starting places for finding a place to publish your work:

  • Undergraduate Research Journal Listing from the Council on Undergraduate Research
  • Where to Publish Your Research  (compiled by Sacred Heart University)
  • Undergraduate Research Journals  (compiled by University of Nebraska)
  • Undergraduate Research Journals  (compiled by CUNY)
  • Student Journals hosted on the bepress platform

Some things to consider while looking for an undergraduate research journal to publish your scholarship in include:

  • Is there a submission deadline?
  • Does the journal appear to be currently publishing?
  • Are the journal's copyright policies clear?
Tools to measure journal impact

  • Journal Citation Reports Provides Impact Factors, Eigenfactors, and Article Influence Scores for science and social science journals.
  • Scopus Journal Analyzer Use the Journal Analyzer to compare up to 10 Scopus sources on a variety of parameters: CiteScore, SJR (Scimago Journal and Country Rank), and SNIP (source-normalized impact per paper).
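For reference, the headline numbers these tools report are simple ratios. Under the methodology Scopus has used since 2020 (earlier versions used a different window), CiteScore for year y is

\[
\mathrm{CiteScore}_y = \frac{\text{citations received in years } y-3,\ldots,y \text{ by documents published in years } y-3,\ldots,y}{\text{number of documents published in years } y-3,\ldots,y}.
\]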

Read more about these tools & measures on Hirsh Library's Measuring Research Impact guide .


Google Scholar Search Help

Get the most out of Google Scholar with some helpful tips on searches, email alerts, citation export, and more.

Finding recent papers

Your search results are normally sorted by relevance, not by date. To find newer articles, try the following options in the left sidebar:

  • click "Since Year" to show only recently published papers, sorted by relevance;
  • click "Sort by date" to show just the new additions, sorted by date;
  • click the envelope icon to have new results periodically delivered by email.

Locating the full text of an article

Abstracts are freely available for most of the articles. Alas, reading the entire article may require a subscription. Here are a few things to try:

  • click a library link, e.g., "FindIt@Harvard", to the right of the search result;
  • click a link labeled [PDF] to the right of the search result;
  • click "All versions" under the search result and check out the alternative sources;
  • click "Related articles" or "Cited by" under the search result to explore similar articles.

If you're affiliated with a university but don't see links such as "FindIt@Harvard", please check with your local library about the best way to access their online subscriptions. You may need to do a search from a computer on campus, or to configure your browser to use a library proxy.

Getting better answers

If you're new to the subject, it may be helpful to pick up the terminology from secondary sources. E.g., a Wikipedia article for "overweight" might suggest a Scholar search for "pediatric hyperalimentation".

If the search results are too specific for your needs, check out what they're citing in their "References" sections. Referenced works are often more general in nature.

Similarly, if the search results are too basic for you, click "Cited by" to see newer papers that referenced them. These newer papers will often be more specific.

Explore! There's rarely a single answer to a research question. Click "Related articles" or "Cited by" to see closely related work, or search for author's name and see what else they have written.

Searching Google Scholar

Use the "author:" operator, e.g., author:"d knuth" or author:"donald e knuth".

To find a specific paper, put the paper's title in quotations: "A History of the China Sea".

You'll often get better results if you search only recent articles, but still sort them by relevance, not by date. E.g., click "Since 2018" in the left sidebar of the search results page.

To see the absolutely newest articles first, click "Sort by date" in the sidebar. If you use this feature a lot, you may also find it useful to setup email alerts to have new results automatically sent to you.

Note: On smaller screens that don't show the sidebar, these options are available in the dropdown menu labelled "Year" right below the search button.

Select the "Case law" option on the homepage or in the side drawer on the search results page.

The "Related articles" link finds documents similar to the given search result.

The advanced search window is in the side drawer. It lets you search in the author, title, and publication fields, as well as limit your search results by date.

Select the "Case law" option and do a keyword search over all jurisdictions. Then, click the "Select courts" link in the left sidebar on the search results page.

Tip: To quickly search a frequently used selection of courts, bookmark a search results page with the desired selection.

Access to articles

For each Scholar search result, we try to find a version of the article that you can read. These access links are labelled [PDF] or [HTML] and appear to the right of the search result.

Access links cover a wide variety of ways in which articles may be available to you - articles that your library subscribes to, open access articles, free-to-read articles from publishers, preprints, articles in repositories, etc.

When you are on a campus network, access links automatically include your library subscriptions and direct you to subscribed versions of articles. On-campus access links cover subscriptions from primary publishers as well as aggregators.

Off-campus access

Off-campus access links let you take your library subscriptions with you when you are at home or traveling. You can read subscribed articles when you are off-campus just as easily as when you are on-campus. Off-campus access links work by recording your subscriptions when you visit Scholar while on-campus, and looking up the recorded subscriptions later when you are off-campus.

We use the recorded subscriptions to provide you with the same subscribed access links as you see on campus. We also indicate your subscription access to participating publishers so that they can allow you to read the full-text of these articles without logging in or using a proxy. The recorded subscription information expires after 30 days and is automatically deleted.

In addition to Google Scholar search results, off-campus access links can also appear on articles from publishers participating in the off-campus subscription access program. Look for links labeled [PDF] or [HTML] on the right hand side of article pages.


You can disable off-campus access links on the Scholar settings page . Disabling off-campus access links will turn off recording of your library subscriptions. It will also turn off indicating subscription access to participating publishers. Once off-campus access links are disabled, you may need to identify and configure an alternate mechanism (e.g., an institutional proxy or VPN) to access your library subscriptions while off-campus.

Email Alerts

To create an alert, do a search for the topic of interest, e.g., "M Theory"; click the envelope icon in the sidebar of the search results page; enter your email address, and click "Create alert". We'll then periodically email you newly published papers that match your search criteria.

You don't need a Google account; you can enter any email address of your choice. If the email address isn't a Google account or doesn't match your Google account, then we'll email you a verification link, which you'll need to click to start receiving alerts.

To be alerted when new articles cite your own, it works best if you create a public profile, which is free and quick to do. Once you get to the homepage with your photo, click "Follow" next to your name, select "New citations to my articles", and click "Done". We will then email you when we find new articles that cite yours.

To follow citations to a specific paper, search for the title of your paper, e.g., "Anti de Sitter space and holography"; click on the "Cited by" link at the bottom of the search result; and then click on the envelope icon in the left sidebar of the search results page.

To follow a colleague's new work, first do a search for their name, and see if they have a Scholar profile. If they do, click on it, click the "Follow" button next to their name, select "New articles by this author", and click "Done".

If they don't have a profile, do a search by author, e.g., [author:s-hawking], and click on the mighty envelope in the left sidebar of the search results page. If you find that several different people share the same name, you may need to add co-author names or topical keywords to limit results to the author you wish to follow.

We send the alerts right after we add new papers to Google Scholar. This usually happens several times a week, except that our search robots meticulously observe holidays.

There's a link to cancel the alert at the bottom of every notification email.

If you created alerts using a Google account, you can manage them all from the Scholar alerts page. If you're not using a Google account, you'll need to unsubscribe from the individual alerts and subscribe to the new ones.

Google Scholar library

Google Scholar library is your personal collection of articles. You can save articles right off the search page, organize them by adding labels, and use the power of Scholar search to quickly find just the one you want - at any time and from anywhere. You decide what goes into your library, and we’ll keep the links up to date.

You get all the goodies that come with Scholar search results - links to PDF and to your university's subscriptions, formatted citations, citing articles, and more!

Library help

To add an article to your library, find it in Google Scholar and click the "Save" button under the search result.

Click “My library” at the top of the page or in the side drawer to view all articles in your library. To search the full text of these articles, enter your query as usual in the search box.

To remove an article, find it in your library, and then click the "Delete" button under it.

  • To add a label to an article, find the article in your library, click the “Label” button under it, select the label you want to apply, and click “Done”.
  • To view all the articles with a specific label, click the label name in the left sidebar of your library page.
  • To remove a label from an article, click the “Label” button under it, deselect the label you want to remove, and click “Done”.
  • To add, edit, or delete labels, click “Manage labels” in the left column of your library page.

Only you can see the articles in your library. If you create a Scholar profile and make it public, then the articles in your public profile (and only those articles) will be visible to everyone.

Your profile contains all the articles you have written yourself. It’s a way to present your work to others, as well as to keep track of citations to it. Your library is a way to organize the articles that you’d like to read or cite, not necessarily the ones you’ve written.

Citation Export

Click the "Cite" button under the search result and then select your bibliography manager at the bottom of the popup. We currently support BibTeX, EndNote, RefMan, and RefWorks.

And if you were hoping to export citations in bulk with automated software: err, no, please respect our robots.txt when you access Google Scholar. As the wearers of crawler's shoes and webmaster's hat, we cannot recommend adherence to web standards highly enough.
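In practice, a well-behaved crawler checks a site's robots.txt before fetching any page. A minimal sketch with Python's standard library (illustrating the general convention, not an endorsement of crawling Scholar; the user-agent name is hypothetical):

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://scholar.google.com/robots.txt")
    rp.read()  # fetch and parse the site's crawling rules

    # A compliant crawler asks before every request and backs off when denied.
    url = "https://scholar.google.com/scholar?q=bibliometrics"
    if rp.can_fetch("ExampleBot", url):
        print("robots.txt permits fetching", url)
    else:
        print("robots.txt disallows fetching", url)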

Sorry, we're unable to provide bulk access. You'll need to make an arrangement directly with the source of the data you're interested in. Keep in mind that a lot of the records in Google Scholar come from commercial subscription services.

Sorry, we can only show up to 1,000 results for any particular search query. Try a different query to get more results.

Content Coverage

Google Scholar includes journal and conference papers, theses and dissertations, academic books, pre-prints, abstracts, technical reports and other scholarly literature from all broad areas of research. You'll find works from a wide variety of academic publishers, professional societies and university repositories, as well as scholarly articles available anywhere across the web. Google Scholar also includes court opinions and patents.

We index research articles and abstracts from most major academic publishers and repositories worldwide, including both free and subscription sources. To check current coverage of a specific source in Google Scholar, search for a sample of their article titles in quotes.

While we try to be comprehensive, it isn't possible to guarantee uninterrupted coverage of any particular source. We index articles from sources all over the web and link to these websites in our search results. If one of these websites becomes unavailable to our search robots or to a large number of web users, we have to remove it from Google Scholar until it becomes available again.

Our meticulous search robots generally try to index every paper from every website they visit, including most major sources and also many lesser known ones.

That said, Google Scholar is primarily a search of academic papers. Shorter articles, such as book reviews, news sections, editorials, announcements and letters, may or may not be included. Untitled documents and documents without authors are usually not included. Website URLs that aren't available to our search robots or to the majority of web users are, obviously, not included either. Nor do we include websites that require you to sign up for an account, install a browser plugin, watch four colorful ads, and turn around three times and say coo-coo before you can read the listing of titles scanned at 10 DPI... You get the idea, we cover academic papers from sensible websites.

If a "site:" search doesn't return papers from a given website, that's usually because we index many of these papers from other websites, such as the websites of their primary publishers. The "site:" operator currently only searches the primary version of each paper.

It could also be that the papers are located on examplejournals.gov, not on example.gov. Please make sure you're searching for the "right" website.

That said, the best way to check coverage of a specific source is to search for a sample of their papers using the title of the paper.

If you're wondering whether we cover a particular journal: ahem, we index papers, not journals. You should also ask about our coverage of universities, research groups, proteins, seminal breakthroughs, and other dimensions that are of interest to users. All such questions are best answered by searching for a statistical sample of papers that has the property of interest - journal, author, protein, etc. Many coverage comparisons are available if you search for [allintitle:"google scholar"], but some of them are more statistically valid than others.

Currently, Google Scholar allows you to search and read published opinions of US state appellate and supreme court cases since 1950, US federal district, appellate, tax and bankruptcy courts since 1923 and US Supreme Court cases since 1791. In addition, it includes citations for cases cited by indexed opinions or journal articles which allows you to find influential cases (usually older or international) which are not yet online or publicly available.

Legal opinions in Google Scholar are provided for informational purposes only and should not be relied on as a substitute for legal advice from a licensed lawyer. Google does not warrant that the information is complete or accurate.

We normally add new papers several times a week. However, updates to existing records take 6-9 months to a year or longer, because in order to update our records, we need to first recrawl them from the source website. For many larger websites, the speed at which we can update their records is limited by the crawl rate that they allow.

Inclusion and Corrections

If you've spotted an error in a search result: we apologize, and we assure you the error was unintentional. Automated extraction of information from articles in diverse fields can be tricky, so an error sometimes sneaks through.

Please write to the owner of the website where the erroneous search result is coming from, and encourage them to provide correct bibliographic data to us, as described in the technical guidelines . Once the data is corrected on their website, it usually takes 6-9 months to a year or longer for it to be updated in Google Scholar. We appreciate your help and your patience.

If you can't find your papers when you search for them by title and by author, please refer your publisher to our technical guidelines .

You can also deposit your papers into your institutional repository or put their PDF versions on your personal website, but please follow your publisher's requirements when you do so. See our technical guidelines for more details on the inclusion process.

We normally add new papers several times a week; however, it might take us some time to crawl larger websites, and corrections to already included papers can take 6-9 months to a year or longer.

Google Scholar generally reflects the state of the web as it is currently visible to our search robots and to the majority of users. When you're searching for relevant papers to read, you wouldn't want it any other way!

If your citation counts have gone down, chances are that either your paper or papers that cite it have either disappeared from the web entirely, or have become unavailable to our search robots, or, perhaps, have been reformatted in a way that made it difficult for our automated software to identify their bibliographic data and references. If you wish to correct this, you'll need to identify the specific documents with indexing problems and ask your publisher to fix them. Please refer to the technical guidelines .

If you've found an error in a court opinion, please do let us know. Please include the URL for the opinion, the corrected information, and a source where we can verify the correction.

We're only able to make corrections to court opinions that are hosted on our own website. For corrections to academic papers, books, dissertations and other third-party material, click on the search result in question and contact the owner of the website where the document came from. For corrections to books from Google Book Search, click on the book's title and locate the link to provide feedback at the bottom of the book's page.

General Questions

Results marked [CITATION] are articles which other scholarly articles have referred to, but which we haven't found online. To exclude them from your search results, uncheck the "include citations" box on the left sidebar.

If you can't access the full text of an article: first, click on links labeled [PDF] or [HTML] to the right of the search result's title. Also, check out the "All versions" link at the bottom of the search result.

Second, if you're affiliated with a university, using a computer on campus will often let you access your library's online subscriptions. Look for links labeled with your library's name to the right of the search result's title. Also, see if there's a link to the full text on the publisher's page with the abstract.

Keep in mind that final published versions are often only available to subscribers, and that some articles are not available online at all. Good luck!

If Scholar keeps forgetting your settings: technically, your web browser remembers your settings in a "cookie" on your computer's disk, and sends this cookie to our website along with every search. Check that your browser isn't configured to discard our cookies. Also, check if disabling various proxies or overly helpful privacy settings does the trick. Either way, your settings are stored on your computer, not on our servers, so a long hard look at your browser's preferences or internet options should help cure the machine's forgetfulness.

As for our tagline, "Stand on the shoulders of giants": that phrase is our acknowledgement that much of scholarly research involves building on what others have already discovered. It's taken from Sir Isaac Newton's famous quote, "If I have seen further, it is by standing on the shoulders of giants."

  • Privacy & Terms
  • SpringerLink shop

Types of journal articles

It is helpful to familiarise yourself with the different types of articles published by journals. Although the wide variety of names that articles are published under may suggest a large number of types, most published articles are one of the following: Original Research, Review Articles, Short Reports or Letters, Case Studies, or Methodologies.

Original Research:

This is the most common type of journal manuscript used to publish full reports of data from research. It may be called an  Original Article, Research Article, Research, or just  Article, depending on the journal. The Original Research format is suitable for many different fields and different types of studies. It includes full Introduction, Methods, Results, and Discussion sections.

Short reports or Letters:

These papers communicate brief reports of data from original research that editors believe will be interesting to many researchers, and that will likely stimulate further research in the field. As they are relatively short, the format is useful for scientists with results that are time sensitive (for example, those in highly competitive or quickly changing disciplines). This format often has strict length limits, so some experimental details may not be published until the authors write a full Original Research manuscript. These papers are also sometimes called Brief Communications.

Review Articles:

Review Articles provide a comprehensive summary of research on a certain topic, and a perspective on the state of the field and where it is heading. They are often written by leaders in a particular discipline after invitation from the editors of a journal. Reviews are often widely read (for example, by researchers looking for a full introduction to a field) and highly cited. Reviews commonly cite approximately 100 primary research articles.

TIP: If you would like to write a Review but have not been invited by a journal, be sure to check the journal website, as some journals do not consider unsolicited Reviews. If the website does not mention whether Reviews are commissioned, it is wise to send a pre-submission enquiry letter to the journal editor to propose your Review manuscript before you spend time writing it.

Case Studies:

These articles report specific instances of interesting phenomena. A goal of Case Studies is to make other researchers aware of the possibility that a specific phenomenon might occur. This type of study is often used in medicine to report the occurrence of previously unknown or emerging pathologies.

Methodologies or Methods

These articles present a new experimental method, test or procedure. The method described may either be completely new, or may offer a better version of an existing method. The article should describe a demonstrable advance on what is currently available.



Scholarly Communication Services: Publishing

We can help you navigate the evolving scholarly publishing process — not only for your “final” manuscript, but also your other critical research and publishing outputs, such as: preprints, data, interactive models, conference proceedings, posters, working papers, blog posts, and much more.


You want to build your academic reputation, but how do you know to what journals or academic presses you should submit your work? You’ll want to consider norms in your field, recommendations from peers or advisors, and the extent of your desire for open access.

We’ve put together some guidelines.

Evaluating journals

With journal publishing, you will often be making choices based on the “impact” of various journals — meaning how those journals are recognized and perceived in the scholarly community, the frequency of citation of articles from those journals, and the like. (We discuss various statistical measures of impact in the  Research Impact and Scholarly Profiles  section.) But you should also consider impact in terms of openness. That is: Who can access the scholarship being published by that journal? Is it open for reading by all, or confined to only those institutions able to pay?

Gauging journal subject matter fit and impact

If you’re unfamiliar with the journals in your field, there are comparison tools that can help with the evaluation process:

  • Journal Citation Reports :  JCR provides citation data for journals across nearly two hundred subject categories. You can browse by subject category or by known title. JCR enables you to identify journals with high impact factors, understand the ranking of journals within a subject category, and more. 
  • Eigenfactor.org : Offers valuable information about the Eigenfactor Score and the Article Influence Score for various journals. You can also explore the cost effectiveness of both subscription journals (ranked according to the value-per-dollar that they provide) and open access journals (compared by the article-processing fees they charge).
  • CiteScore :  Identify and compare journal impact metrics across a wide range of journal titles and disciplines.
  • UlrichsWeb :  Provides key information about journals’ publishing frequency, location, audience, peer review status and more.

Evaluating open access journals

If you’re interested in open access publishing, but unfamiliar with a particular OA journal you’ve come across, you can also find out more about it by checking these additional sites:

  • Is the journal included in the Directory of Open Access Journals ( DOAJ )? DOAJ is a comprehensive, "community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals." To be included, journals must be peer-reviewed or employ editorial quality control, and must not use deceptive marketing practices to solicit papers for the article processing charges that authors may pay. (See our page on  open access publishing models .)
  • Is the publisher a member of the Open Access Scholarly Publishers Association ( OASPA )? 
  • The  Scholarly Communication Toolkit  page on  Evaluating Journals   also has tremendous information about how to select open access journals for publication.

Concerned about deception?

If you’ve checked the above sources, but still have questions about the legitimacy of a journal solicitation you’ve received, there are several ways you can screen for propriety.

  • Are you getting  confusing spam?  If you’ve been receiving unsolicited e-mails from journals that are  not  indexed in the above reputable sources, this may be an indication of deceptive practices.
  • Have you checked for  deceptive characteristics ?   Researchers in 2017 identified  various characteristics of deceptive journals. They went on to  summarize these as : “low article-processing fees (less than US$150); spelling and grammar errors on the website; an overly broad scope; language that targets authors rather than readers; promises of rapid publication; and a lack of information about retraction policies, manuscript handling or digital preservation. Manuscript submissions by e-mail and the inclusion of distorted images are also common.”
  • Have you done a " Think, Check, Submit "?  Thinkchecksubmit.org , a campaign from many leading open access publishers, helps researchers identify trusted journals for their research by offering them a simple checklist to assess journal or publisher credentials. This is another great way to evaluate journal quality and spot unscrupulous activity.

Remember,  we can help!   If you’re not sure about a journal, email us at  [email protected] .  And you can always consult advisors or  subject specialist librarians  in your field to provide more tailored advice.

Evaluating academic presses

Choosing a book publisher can be daunting, especially if you are looking to be published for the first time. The most useful advice and guidance will likely come from peers, colleagues, and academic advisors familiar with publishing in your discipline. They’ll be most knowledgeable about the logistics, publishing terms, marketing efforts, and prestige of particular presses.

Another way to get started is to consult resources that reveal various presses’ goals, target audiences, and interests. Some of the best resources that do that are the publishers' catalogs — that is, inventories and descriptions of the books they’ve published.

  • American Association of University Presses (AAUP) has a  list of member university presses . By going to the websites of particular publishers, you can find these catalogs and see exactly what the press is publishing in your discipline.
  • Not sure which publishers’ websites to look at? AAUP also has a Subject Area Grid that identifies the interest areas of member publishers.

Explore presses with open access programs

Increasingly, presses offer open access book publishing. Open access books have tremendous potential to increase your readership and impact, while also still fostering print sales for readers who prefer it. They also can facilitate advanced media innovation in the publishing process. 

With open access books, as with some open access journals, there may be an author fee assessed as a cost recovery mechanism for the press — given that they may sell fewer print copies to libraries since the book will be made available openly online. At UC Berkeley, we have a program that subsidizes any such fees! Check out our  Get Funding to Publish Open Access  page for details.

Other networking

Finally, there’s some networking you could do. Anali Perry of Arizona State University, on the  Select a Venue  page of her  Getting Published  guide, offers some great advice for outreach that can lead to a more streamlined press selection process.

As she explains:

If you’re attending conferences, you can set up meetings with editors to review a book idea and discuss whether this might be of interest. Another option is to contact editors directly with book ideas, written as a long essay (in the style of the press’s book catalog) stating the problem, what are you proposing, and how it is yours. Do this before writing the entire book - it’s better to work with an editor while you’re writing the book, not after. You can also be in contact with more than one publisher until you decide to accept an offer — just be honest that you’re investigating multiple options.

You can also check out this video from the AAUP. In 2015, AAUP convened a virtual panel to “take the scary out of scholarly publishing.” Their experts discussed tips and strategies for working with scholarly presses throughout the publishing process.

Contact us  to set up a consultation!

What is “peer review”?

At its core, peer review (or the process called "refereeing") is the effort of scholars within a similar discipline or area of research to critique and evaluate the scholarly contribution from others within that same domain, and determine whether that scholarship should be disseminated or how it can be improved. Peer review results in over 1.5 million scholarly articles published each year.

Journals differ in the percentage of submitted papers that they accept and reject. Higher impact factor journals such as Science or Nature can reject even good quality research papers if an editor deems them not ground-breaking enough. Other journals, such as PLoS One , instead take the approach of getting more scholarship out and circulated; they use a review process that focuses on scientific rigor rather than on assessments of novelty.

Basic models for peer review

As scholarly publishing changes, so too have peer review models. Typically, though, peer review involves authors (who conduct research and write the manuscript), reviewers ("peers" in the domain who provide expert opinions and advice), and editors (who make acceptance and publishing decisions). A basic model could look like the following, though there are multiple approaches.

Figure: Sample peer review process (courtesy of Taylor & Francis).

In this model: A paper is submitted to a journal. A journal editor screens the manuscript to determine whether it should be passed through to the critique stage, or rejected outright. The editor collects reviewers who then undertake analysis and critique of the work. The reviewers pass opinions and suggested edits back to the editor, who asks the author to revise accordingly. This process of revision could go through several iterations. After author revisions are complete, the editor will decide whether to accept the paper for publication, or reject it.

Note, too, that some publishers have implemented a “cascading” approach so as not to squander reviewers’ efforts if a paper is ultimately rejected by an editor at the final stage. As Dan Morgan, Digital Science Publisher at the University of California Press, explains (at p. 10 of the Standing up for Science 3 guide to peer review):

Cascading peer review (a.k.a. ‘waterfall peer review’) is when a paper that has been rejected after peer review is passed on to another journal along with the reviewers’ reports. The peer review process at the second journal can be kept relatively short because the editor considers the reports from an earlier round of peer review, along with any new reviews. Variations on this process exist, according to the type of journal — but essentially reviews can ‘cascade’ down through various journals.

Cascading peer review can accelerate the time to publication so that valuable review efforts are not lost. Moreover, many publishing groups that issue multiple journals will automatically apply this process — helping to find the right journal for your particular manuscript.

Transparency

Within this basic peer review model, journals can employ different approaches to how and whether authors get to know their reviewers, and vice versa. The idea behind masking or revealing this information is that such knowledge may introduce bias, or affect how honest and critical the reviews are. These various approaches include, for example:

  • Single-blind review: Reviewers know who authors are, but authors do not know who reviewers are. 
  • Double-blind review: Neither reviewers nor authors are informed about who the others are.
  • Open review: Reviewers and authors know who each other are, and this review can also include the transmission of reviewer commentary in the open final publication.
  • Post-publication open review: Here, readers and reviewers can submit public comments on published articles. Often, these comments are mediated by the editor.

If working papers are uploaded to a repository (such as ArXiv for mathematics, physics, and non-life sciences), there is also an opportunity for pre-publication peer review via the comments submitted by readers and downloaders at those sites.

You can learn a lot more about the mechanics of peer review, and tips for how to conduct peer review, in the following guide:

  • Peer Review the Nuts-and-Bolts: A Guide for Early Career Researchers  ( Standing up for Science 3 , 2017)

And you can contact us with questions at  [email protected] .

Why are we talking about impact?

Among other things, awareness of your scholarly impact can help you:

  •  Strengthen your case when applying for promotion or tenure.
  •  Quantify return on research investment for grant renewals and progress reports.
  •  Strengthen future funding requests by showing the value of your research.
  •  Understand your audience and learn how to appeal to them.
  •  Identify who is using your work and confirm that it is appropriately credited.
  •  Identify collaborators within or outside of your subject area.
  •  Manage your scholarly reputation.

Measuring your impact

Measuring impact is not a perfect science, and there are many who argue against its implications altogether. Here, we just want to present information about the statistical measures that exist so that you can make informed decisions about how and whether to gauge your impact.

Often, measuring impact relies on metrics such as:  article-level metrics ,  author-level metrics ,  journal or publisher metrics , and  alt-metrics .

Article-level metrics

Article-level metrics quantify the reach and impact of published research. For this, we can look to various measures such as citation counts, field-weighted citation impact, or social networking readership statistics. 

e.g. Citation count : How many times has your article been cited? This can be difficult to assess and assign meaning to. How recent your article is obviously affects how many times it’s been cited. Additionally, the database or source of the statistic greatly impacts the count because the database needs to be able to scan a large number of possible places where your article could be cited — and not all databases have access to the same information in that regard.

e.g. Field-weighted citation impact : Since it takes time for publications to accumulate citations, it is normal that the total number of citations for recent articles is lower. Moreover, citations in research from one field may accumulate faster than others because that field simply produces more publications. Therefore, instead of comparing absolute counts of citations you might want to consider another citation measure called field-weighted citation impact (also known as FWCI) that adjusts for these differences. Field-weighted citation impact divides the number of citations received by a publication by the average number of citations received by publications in the same field, of the same type, and published in the same year. The world average is indexed to a value of 1.00. Values above 1.00 indicate above-average citation impact, and values below 1.00 likewise indicate below-average citation impact. It’s a proprietary statistic, though, meaning you’d need access to Elsevier’s SCOPUS product, which UC Berkeley provides.
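Written as a formula, the definition above is

\[
\mathrm{FWCI}_p = \frac{c_p}{\bar{c}_{\,\mathrm{field}(p),\,\mathrm{type}(p),\,\mathrm{year}(p)}},
\]

where c_p is the number of citations received by publication p and the denominator is the world-average number of citations received by publications of the same field, document type, and publication year. An FWCI of 1.20, for example, means 20% more citations than the world average for comparable publications.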

e.g. Social Networking Site Readership : Another article-level metric is something like  Mendeley Readership , which indicates the number of Mendeley users who have added a particular article into their personal library.  This number can be considered an early indicator of the impact a work has, and typically Mendeley readership counts correlate moderately with future citations. 

Author-level metrics

Author-level metrics address an author’s productivity and diversity of reach. We can look to measures of overall scholarly output, journal count, journal category count, and H-index or H-graph.

e.g. Journal count: Journal count indicates the diversity of an author's publication portfolio: In how many distinct journals have this author's publications appeared? This can be useful to showcase authors who work across traditional disciplines and have a broad array of journals available in which to publish.

e.g. Journal category count: Journal category count addresses how many journal categories someone has published in. This can be useful for tracking the breadth and reach of scholarship, and interdisciplinarity.

e.g. H-index: The h-index is an author-level metric that attempts to measure both the productivity and the citation impact of a scientist's or scholar's publications. By definition, a scholar with an index of h has published h papers, each of which has been cited in other papers at least h times. It is sometimes suggested that, after 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is truly exceptional.
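
Because the definition is short, the computation is too. Here is an illustrative Python sketch that derives an h-index from a list of per-paper citation counts:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper has >= rank citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have at least
# four citations each, but not five with five, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```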

e.g. Scholarly output: Scholarly output demonstrates an author's productivity: How many publications does this author have? This is a good metric for comparing authors who are similar, and at similar stages of career.

Journal or publisher metrics

Journal or publisher metrics address weights or prestige that particular publications are seen to carry. Some measures include:

e.g. SCImago Journal & Country Rank : SCImago Journal & Country Rank (SJR) can be considered the “average prestige per article,” and is based on the idea that not all citations of your work are the same. (In other words, your articles could be cited in publications of varying prestige.) Here, the subject field, quality, and reputation of the journals in which your publications are cited have a direct effect on the “value” of a citation.

e.g. Impact per publication  (IPP): IPP gives you a sense of the average number of citations that a publication published in the journal will likely receive. It measures the ratio of citations per article published in a journal. Unlike the standard impact factor, the IPP metric uses a three-year citation window, widely considered to be the optimal time period to accurately measure citations in most subject fields.

e.g. Source-normalized impact per paper:  When normalized for the citations in the subject field, the raw Impact per Publication (IPP) becomes the Source Normalized Impact per Paper (SNIP). SNIP measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa.
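
As a rough, worked illustration of how these two ratios relate, consider the Python sketch below. The citation counts and the field citation-potential figure are invented for the example; the real SNIP normalization (maintained by CWTS) is derived from the reference behaviour of citing papers rather than supplied by hand.

```python
def impact_per_publication(citations_this_year: int, papers_last_three_years: int) -> float:
    """IPP: citations received this year by a journal's papers from the
    previous three years, divided by the number of those papers."""
    return citations_this_year / papers_last_three_years

def snip(ipp: float, field_citation_potential: float) -> float:
    """SNIP: IPP normalized by the field's citation potential, so a
    citation counts for more in fields where citations are rarer."""
    return ipp / field_citation_potential

# 300 recent papers drawing 450 citations gives an IPP of 1.5; in a
# low-citation field (potential 0.75), the SNIP rises to 2.0.
journal_ipp = impact_per_publication(450, 300)
print(journal_ipp)               # 1.5
print(snip(journal_ipp, 0.75))   # 2.0
```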

Altmetrics 

Altmetrics account for “non-traditional” citations of your scholarly work. They address the fact that scholarly conversations have expanded beyond the peer-reviewed article. People are now tweeting and blogging about your articles, for instance, and altmetrics accumulate these mentions. To find out how your work is being cited and used in these ways, learn more at  Altmetric.com .

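If you would like to check these mentions programmatically, the short sketch below assumes Altmetric's free Details Page API, which supports simple lookups by DOI without an API key (subject to rate limits); the DOI here is a placeholder, and the exact fields returned can vary.

```python
import requests

doi = "10.1234/example.doi"  # placeholder: substitute your article's DOI

# Altmetric's Details Page API returns the attention data gathered
# for a single output, looked up by its DOI.
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=30)

if resp.status_code == 200:
    data = resp.json()
    print(data.get("score"))                 # composite attention score
    print(data.get("cited_by_posts_count"))  # total tracked mentions
else:
    print("Altmetric has no attention data recorded for this DOI")
```
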
Monitoring your impact

There are numerous existing and emerging tools available to help you track your scholarly impact by enabling you to create a virtual scholarly profile in which you input and keep track of all your professional activities and publications. 

When selecting one of these tools, it’s helpful to consider:

  • What sources of information are your chosen tools “pulling from” or indexing? The greater number of sources that the tool can “read,” the more comprehensive your metrics will be.
  • What is the business model of your tool? Is it for-profit and available with premium features for a fee, or is it a free platform available to all? For instance, Symplectic's Elements and Elsevier's Pure are licensed platforms that often come at substantial cost to an institution, whereas ImpactStory, ORCID, and Google Scholar offer free profile services.
  • Have you made a copy of your scholarly materials available also through your institutional repository? Many of the profiling tools are not geared toward actually preserving a copy of your work. So, to ensure that a copy of your work remains publicly available, it’s best to make sure you also deposit a copy in your institutional repository (in the case of UC, this is eScholarship.org).

With all that in mind, here are a few profiling tools from which you can choose:

ImpactStory

From their site: Impactstory is an open-source website that helps researchers explore and share the online impact of their research. By helping researchers tell data-driven stories about their work, we're helping to build a new scholarly reward system that values and encourages web-native scholarship. We're funded by the National Science Foundation and the Alfred P. Sloan Foundation and incorporated as a 501(c)(3) nonprofit corporation.

ORCID

From their site: ORCID provides an identifier for individuals to use with their name as they engage in research, scholarship, and innovation activities. We provide open tools that enable transparent and trustworthy connections between researchers, their contributions, and affiliations. We provide this service to help people find information and to simplify reporting and analysis.

Google Scholar Citations

From their site: Google Scholar Citations provide a simple way for authors to keep track of citations to their articles. You can check who is citing your publications, graph citations over time, and compute several citation metrics. You can also make your profile public, so that it may appear in Google Scholar results when people search for your name...Best of all, it's quick to set up and simple to maintain - even if you have written hundreds of articles, and even if your name is shared by several different scholars. You can add groups of related articles, not just one article at a time; and your citation metrics are computed and updated automatically as Google Scholar finds new citations to your work on the web. You can choose to have your list of articles updated automatically or review the updates yourself, or to manually update your articles at any time.

ResearchGate

From their site: Share your publications, access millions more, and publish your data. Connect and collaborate with colleagues, peers, co-authors, and specialists in your field. Get stats and find out who's been reading and citing your work.

Academia.edu

From their site: Academia.edu is a platform for academics to share research papers. The company's mission is to accelerate the world's research. Academics use Academia.edu to share their research, monitor deep analytics around the impact of their research, and track the research of academics they follow.

LinkedIn

From their site: LinkedIn operates the world’s largest professional network on the internet with more than 500 million members in over 200 countries and territories.

Fee-based or proprietary profiling systems like  Elements  or  Pure .

These are software systems to help collect, understand, and showcase scholarly activities. These are not currently available at UC Berkeley.

Increasing your impact

In general, we recommend three overarching strategies to increase your scholarly impact:

A. Get your work seen and cited.
B. Promote your work and be social.
C. Develop and execute a personal plan.

We discuss each of these strategies with specifics below.

A.  Get your work seen and cited

Publish pre-prints or post-prints in open access repositories.  

Institutional or discipline-specific open access repositories (e.g. eScholarship.org for UC publications, bioRxiv, Humanities Commons, etc.) enable you to self-archive a copy of your work so that it is accessible for free by readers around the world. Moreover, these repositories are indexed by Google so that your scholarship can easily be found. This is a terrific way to build readership and impact, while also contributing to progress and knowledge by making a version of your work available to all. To choose a repository that’s right for you, you can check OpenDOAR (the Directory of Open Access Repositories).

As a UC faculty member, staff, or student, you are automatically authorized under the UC open access policies to post a pre-print copy of your scholarly articles (defined broadly) to the UC repository, eScholarship. You can also check the web tool Sherpa/ROMEO to determine whether there are other versions of your scholarship that your publisher has authorized for deposit.

Publish open access.

Open access is the free, immediate, online availability of scholarship. This means that when people publish a scholarly article in an open access journal, it is put online for anyone to access — without readers (or readers’ institutions) having to pay any fees or subscription charges for it (also known as “paywalls”).

Paywalls limit readership. The great value of publishing open access means that barriers between readers and scholarly publication are removed, making it easier for everyone to find, use, cite, and build upon knowledge and ideas. In this way, open access connects your scholarship to the world, and helps build your impact. Publishing open access is often a condition of research funding, so you should check your grants.

Open access publishers may ask for a fee to publish your scholarship openly online in lieu of the fees they would ordinarily have collected from institutional subscriptions to the journal or publication. The UC Berkeley Library has a fund to cover these costs. You can learn more in our  BRII (Berkeley Research Impact Initiative) Guide  about applying for this funding.

There’s an open access place for all research outputs.

Your “final” publication — traditionally, an article, chapter, or scholarly monograph — is not the only thing readers desire to access and cite. You can publish your research data, code, software, presentations, working papers, and other supporting documents and documentation open access as well. In fact, in some cases, your funders might require it. Sharing these other research instruments not only advances knowledge and science, but also can help increase your impact and citation rates.

You can find the right open place for all your outputs. For instance, it’s possible to:

  • Publish code on  GitHub .
  • Publish data sets on  FigShare  or  Dryad .
  • Publish presentations on  SlideShare .

Publish several pieces on the same topic.

If you’ve written a journal article, you can spread the word about it by supplementing it with a blog post or magazine article — thereby attracting greater attention from readers interested in your topic. What’s more, publishing your article open access to begin with also helps your work get discovered by journalists, making it easier for them to write their own supplemental magazine articles about your research, too.

Write for your audience and publish in sources they read.

Of course, many of us would like to be able to publish in high-impact journals or ones targeted to our audience. To find the best-fit journals, it can be helpful to review a journal's scope and submission criteria, and compare them to whom you believe your intended audience to be.

Use persistent identifiers to disambiguate you and your work from other authors.

There are more than 7 billion people in the world. If someone searches for your articles by your name, how can you be sure that they find yours and not someone else’s? How can you be sure that citations really reflect citations of your work and not someone else’s? Persistent identifiers — both for you and your publications — help disambiguate the chaos.

  • ORCID : Much in the same way that a social security number uniquely identifies you, an  ORCID  “provides a persistent digital identifier that distinguishes you from every other researcher and, through integration in key research workflows such as manuscript and grant submission, supports automated linkages between you and your professional activities ensuring that your work is recognized.”  Increasingly, publishers and funders ask for your ORCID upon article submission or application so that they can disambiguate you from other researchers, too. ORCIDs are free to create and doing so takes just moments. They also enable you to set up a personal web profile page where you can link all of your scholarship to your unique identifier — creating a profile that is uniquely yours.
  • Digital Object Identifiers (DOIs) : A DOI is a type of persistent identifier used to uniquely identify digital objects like scholarly articles, chapters, or data sets. Metadata about the digital object is stored in association with the DOI, which often includes a URL where the object can be found. The value of the DOI is that the identifier remains fixed over the lifetime of the digital object  even if you later change the particular URL where your article is hosted.  Thus, referring to an online document by its DOI provides more stable linking than simply using its URL. Publishers and repositories often assign DOIs to each of your publications for this reason. If you are a UC Berkeley researcher depositing in eScholarship, you can obtain a DOI through a service called  EZID .
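
To see this stability in action, here is a minimal Python sketch (assuming the requests library) that resolves a DOI through doi.org and then uses content negotiation to fetch machine-readable citation metadata for the same identifier. The DOI shown is a placeholder, and content-negotiation support varies somewhat by registration agency.

```python
import requests

doi = "10.1234/example.doi"  # placeholder: substitute a registered DOI

# doi.org redirects to wherever the object currently lives, so the
# link keeps working even if the hosting URL later changes.
resp = requests.get(f"https://doi.org/{doi}", timeout=30)
print(resp.url)  # the object's current landing page

# Crossref and DataCite DOIs also support content negotiation,
# returning citation metadata instead of an HTML page.
meta = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
if meta.ok:
    print(meta.json().get("title"))
```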

B.  Promote your work and be social

Although it might seem too self-promotional for some people's tastes, speaking up about issues of interest to you and your audience can help position you as a thought leader in your space. Therefore, it can be helpful to participate and collaborate in promoting and discussing your work through social networking, blogging, listservs, personal networks, and more.

And don’t overlook your research that’s still underway! Discussing what’s in progress can help build interest.

C.  Develop and execute a personal plan.

Perhaps the best way to increase your impact is to develop a plan that is tailored to your own needs, and check in with yourself periodically about whether it's working. Your plan should focus on tactics that make your work visible, accessible, and reusable.

What might such a plan look like? Here is a sample that you can adapt.

  • Create and maintain an online profile (Google Scholar, etc.).
  • Use persistent identifiers (e.g. ORCIDs, DOIs) to disambiguate/link.
  • Publish in fully OA journals or choose OA options.
  • Creative Commons license your work for re-use.
  • Post pre- or post-prints to repositories (eScholarship, PubMed Central, etc.).
  • Make social media engagement a habit.
  • Engage your audience in meaningful conversations.
  • Connect with other researchers.
  • Appeal to various audiences via multiple publications.
  • Check back in on your goals.

Do you want to talk more about tailoring strategies so that they are right for you? Please contact us at  [email protected] !

You’ve invested significant time and resources into preparing your final publication. So, after peer review, you’re done, right? Not necessarily. You may desire (or be required) to also publish the data underlying your research.

Why should we care about publishing data?

Sharing research data promotes transparency, reproducibility, and progress. In some fields, it can spur new discoveries on a daily basis. It’s not atypical for geneticists, for example, to sequence by day and post research results the same evening — allowing others to begin using their datasets in nearly real time (see, for example,  Pisani and AbouZahr’s paper ). The datasets researchers share can inform business or regulatory policymaking, legislation, government or social services, and more.

Publishing your research data can also increase the impact of your research, and with it, your scholarly profile. Depositing datasets in a repository makes them both visible and citable. You can include them in your CV and grant application biosketches. In turn, scholars around the world can begin working with your data and crediting you. As a result, sharing detailed research data can be associated with increased citation rates (check out  this Piwowar et al. study , among others).

Publishing your data may also be required. Federal funders (e.g.  National Institutes of Health ), granting agencies (e.g.  Bill and Melinda Gates Foundation ), and journal publishers (e.g.  PLoS ) increasingly require datasets be made publicly available — often immediately upon associated article publication.

How do we publish data?

Merely uploading your dataset to a personal or departmental website won't achieve these aims of promoting knowledge and progress. Datasets should be able to link seamlessly to any research articles they support. Their metadata should be compatible with bibliographic management and citation systems (e.g. CrossRef or RefWorks), and be formatted for crawling by abstracting and indexing services. After all, you want to be able to find other people's datasets, manage them in your own reference manager, and cite them as appropriate. So, you'd want your own dataset to be positioned for the same discoverability and ease of use.

How can you achieve all this? It sounds daunting, but it’s actually pretty straightforward and simple. You’ll want to select a data publishing tool or repository that is built around both preservation and discoverability. It should:

  • Offer you a stable location or DOI (which will provide a persistent link to your data’s location). 
  • Help you create sufficient metadata to facilitate transparency and reproducibility.
  • Optimize the metadata for search engines.

You can learn about a variety of specific tools through the  Research Data Management program website , on their  Data Preservation and Archiving  page. Briefly, here are some good options:

Sample tools

  • Dryad : Dryad is an open-source, research data curation and publication platform. UC Berkeley Library is a proud partner of Dryad and offers Dryad as a free service for all UC Berkeley researchers to publish and archive their data. Datasets published in Dryad receive a citation and can be versioned at any time. Dryad is integrated with hundreds of journals and is an easy way to both publish data and comply with funder and publisher mandates. Check out published datasets or submit yours at:  https://datadryad.org/stash . 
  • Figshare: Figshare is a multidisciplinary repository where users can make all of their research outputs available in a citable, shareable and discoverable manner. Figshare allows users to upload any file format, to be made visualisable in the browser, so that figures, datasets, media, papers, posters, presentations and filesets can be disseminated. Figshare uses DataCite DOIs for persistent data citation. Users can upload files up to 5 GB in size and have 20 GB of free private space. Figshare uses Amazon Web Services; backups are performed daily and kept for 5 days.
  • re3data : re3data.org is a global registry of research data repositories that covers research data repositories from different academic disciplines. It presents repositories for the permanent storage and access of data sets to researchers, funding bodies, publishers and scholarly institutions. re3data.org promotes a culture of sharing, increased access and better visibility of research data. The registry went live in autumn 2012 and is funded by the German Research Foundation (DFG).

To explore others, check out  OpenDOAR , the Directory of Open Access Repositories.

We also recommend that, if your chosen publishing tool enables it, you should include your  ORCID (a persistent digital identifier)  with your datasets just like with all your other research. This way, your research and scholarly output will be collocated in one place, and it will become easier for others to discover and credit your work.

What does it mean to license your data for reuse?

Uploading a dataset — with good metadata, of course! — to a repository is not the end of the road for shepherding one’s research. We must also consider what we are permitting other researchers to do with our data. And, what rights do we, ourselves, have to grant such permissions — particularly if we got the data from someone else, or the datasets were licensed to us for a particular use?

To better understand these issues, we first have to distinguish between attribution and licensing.

Citing datasets, or providing attribution to the creator, is an essential scholarly practice.

The issue of someone properly  citing  your data is separate, however, from the question of whether it’s  permissible  for them to reproduce and publish the data in the first place. That is, what license for reuse have you applied to the dataset?

The type of reuse we can grant depends on whether we own our research data and hold copyright in it. There can be a number of possibilities here.

  • Sometimes the terms of contracts we’ve entered into (e.g. funder/grant agreements, website terms of use, etc.) dictate data ownership and copyright. We must bear these components in mind when determining what rights to grant others for using our data.
  • Often, our employers own our research data under our employment contracts or university policies (e.g. the research data is “work-for-hire”).

Remember, the dataset might not be copyrightable to begin with if it does not constitute original expression. We could complicate things if we try to grant licenses to data for which we don’t actually hold copyrights. For an excellent summary addressing these “Who owns your data?” questions, including copyright issues, check out  this blog post by Katie Fortney  written for the UC system-wide Office of Scholarly Communication.

What’s the right license or designation for your data?

To try to streamline ownership and copyright questions, and promote data reuse, often data repositories will simply apply a particular  “Creative Commons” license  or public domain designation to all deposited datasets. For instance:

  • Dryad  and  BioMed Central  repositories apply a Creative Commons Zero (CC0) designation to deposited data — meaning that, by depositing in those repositories, you are not reserving any copyright that you might have. Someone using your dataset still should cite the dataset to comply with scholarly norms, but you cannot mandate that they attribute you and cannot pursue copyright claims against them.

It’s worth considering what your goals are for sharing the data to begin with, and selecting a designation or license that both meets your needs  and  fits within whatever ownership and use rights you have over the data. We can help you with this. Ambiguity surrounding the ability to reuse data inhibits the pace of research. So, try to identify clearly for potential users what rights are being granted in the dataset you publish.

Please contact us at  [email protected] .

Basics of scholarly publishing

The scholarly communication landscape is impacted by various shifting economic forces, such as changes in:

  • Publishing platforms and markets (e.g. emergence of open access business models, consortial funding for subscriptions, funder publishing platforms)
  • Ways research is conducted (e.g. social research networks fostering global collaboration)
  • Public policies (e.g. open access mandates, copyleft licensing models) 

In the traditional publishing model , scholars produce and edit research and manuscripts, which publishers then evaluate, assemble, publish, and distribute. Libraries at the institutions where scholars are employed then pay for subscriptions to license or purchase this content that researchers have created. Typically these are large subscription packages with academic publishers that encompass dozens if not hundreds of journal titles.

The costs of scholarly journal subscriptions have risen unsustainably over many decades, outstripping inflation even relative to higher education markets. As costs have risen, so has the portion of the global research community operating without full access to the scholarly record (including nearly all U.S. universities). The open access (OA) movement, discussed elsewhere on these pages (see Open Access Publishing ), is in part a response to this affordability crisis.

Open access overview

In an OA world, libraries would not be paying for these out-of-reach subscriptions. But, if academic publishers are still distributing scholarly content through traditional journal systems, they of course would want some other form of cost recovery if subscriptions are off the table. OA publishing models differ in how and whether they address this issue.  

As we discuss in the Open Access Publishing  section, two of the predominant open access publishing models are “Gold Open Access” and “Green Open Access.”

Gold open access

Gold OA provides immediate access on the publisher’s website. Some Gold OA publishers recoup production costs via charges for authors to publish (“article processing charges” or “book processing charges”) rather than having readers (or libraries) pay to access and read it. This is a system in which “author pays” rather than “reader pays.” The charges to be paid by the author can come from many sources, such as: research accounts, research grants, the university, the library, scholarly societies, and consortia. Production costs can also be offset by the sale of memberships, add-ons, and enhanced services by the publisher. 

Green open access

Also known as self-archiving, in the Green OA model authors continue to publish as they always have in all the same journals. Once the article has been published in a traditional journal, however, the author then posts the “final author version” of the article to an institutional or subject matter repository. Those uploaded manuscripts are open to all to be read. Often, publishers do not allow the formatted publication version to be deposited, but instead only permit the unformatted “post-print” (refereed) or “pre-print” (author submitted) version to be uploaded.

The (real) non-economic value of OA

While open access publishing has the potential to reduce costs, this is not the only (or even the main) driving force behind open access advocacy. The benefits to individual scholars, related institutions, scholarly communication, and the general researching public are also primary motivating factors.

Open access literature is free, digital, and available to anyone online. Providing greater access to scholarship can help attract more readers and build impact.

Moreover, in most cases open access literature is also free of downstream copyright restrictions apart from attributing the original author. This type of OA literature can be reused, remixed, and built upon to further spur innovation and progress.

New open access publishing models are continuing to emerge and be evaluated for sustainability. We have much more to say about them and all things open access on our Open Access  page. 


How to get published in an academic journal: top tips from editors

Journal editors share their advice on how to structure a paper, write a cover letter - and deal with awkward feedback from reviewers

  • Overcoming writer’s block: three tips
  • How to write for an academic journal

Writing for academic journals is highly competitive. Even if you overcome the first hurdle and generate a valuable idea or piece of research - how do you then sum it up in a way that will capture the interest of reviewers?

There’s no simple formula for getting published - editors’ expectations can vary both between and within subject areas. But there are some challenges that will confront all academic writers regardless of their discipline. How should you respond to reviewer feedback? Is there a correct way to structure a paper? And should you always bother revising and resubmitting? We asked journal editors from a range of backgrounds for their tips on getting published.

The writing stage

1) Focus on a story that progresses logically, rather than chronologically

Take some time before even writing your paper to think about the logic of the presentation. When writing, focus on a story that progresses logically, rather than the chronological order of the experiments that you did. Deborah Sweet, editor of Cell Stem Cell and publishing director at Cell Press

2) Don’t try to write and edit at the same time

Open a file on the PC and put in all your headings and sub-headings and then fill in under any of the headings where you have the ideas to do so. If you reach your daily target (mine is 500 words) put any other ideas down as bullet points and stop writing; then use those bullet points to make a start the next day.

If you are writing and can’t think of the right word (eg for elephant) don’t worry - write (big animal long nose) and move on - come back later and get the correct term. Write don’t edit; otherwise you lose flow. Roger Watson, editor-in-chief, Journal of Advanced Nursing

3) Don’t bury your argument like a needle in a haystack

If someone asked you on the bus to quickly explain your paper, could you do so in clear, everyday language? This clear argument should appear in your abstract and in the very first paragraph (even the first line) of your paper. Don’t make us hunt for your argument as for a needle in a haystack. If it is hidden on page seven that will just make us annoyed. Oh, and make sure your argument runs all the way through the different sections of the paper and ties together the theory and empirical material. Fiona Macaulay, editorial board, Journal of Latin American Studies

4) Ask a colleague to check your work

One of the problems that journal editors face is badly written papers. It might be that the writer’s first language isn’t English and they haven’t gone the extra mile to get it proofread. It can be very hard to work out what is going on in an article if the language and syntax are poor. Brian Lucey, editor, International Review of Financial Analysis

5) Get published by writing a review or a response

Writing reviews is a good way to get published - especially for people who are in the early stages of their career. It's a chance to practise writing a piece for publication, and get a free copy of a book that you want. We publish more reviews than papers so we're constantly looking for reviewers.

Some journals, including ours, publish replies to papers that have been published in the same journal. Editors quite like to publish replies to previous papers because it stimulates discussion. Yujin Nagasawa, co-editor and review editor of the European Journal for Philosophy of Religion, philosophy of religion editor of Philosophy Compass

6) Don’t forget about international readers

We get people who write from America who assume everyone knows the American system - and the same happens with UK writers. Because we’re an international journal, we need writers to include that international context. Hugh McLaughlin, editor in chief, Social Work Education - the International Journal

7) Don’t try to cram your PhD into a 6,000 word paper

Sometimes people want to throw everything in at once and hit too many objectives. We get people who try to tell us their whole PhD in 6,000 words and it just doesn’t work. More experienced writers will write two or three papers from one project, using a specific aspect of their research as a hook. Hugh McLaughlin, editor in chief, Social Work Education - the International Journal

Submitting your work

8) Pick the right journal: it’s a bad sign if you don’t recognise any of the editorial board

Check that your article is within the scope of the journal that you are submitting to. This seems so obvious but it’s surprising how many articles are submitted to journals that are completely inappropriate. It is a bad sign if you do not recognise the names of any members of the editorial board. Ideally look through a number of recent issues to ensure that it is publishing articles on the same topic and that are of similar quality and impact. Ian Russell, editorial director for science at Oxford University Press

9) Always follow the correct submissions procedures

Often authors don't spend the 10 minutes it takes to read the instructions to authors, which wastes enormous quantities of time for both the author and the editor and stretches the process when it does not need to be. Tangali Sudarshan, editor, Surface Engineering

10) Don't repeat your abstract in the cover letter

We look to the cover letter for an indication from you about what you think is most interesting and significant about the paper, and why you think it is a good fit for the journal. There is no need to repeat the abstract or go through the content of the paper in detail – we will read the paper itself to find out what it says. The cover letter is a place for a bigger picture outline, plus any other information that you would like us to have. Deborah Sweet, editor of Cell Stem Cell and publishing director at Cell Press

11) A common reason for rejections is lack of context

Make sure that it is clear where your research sits within the wider scholarly landscape, and which gaps in knowledge it’s addressing. A common reason for articles being rejected after peer review is this lack of context or lack of clarity about why the research is important. Jane Winters, executive editor of the Institute of Historical Research’s journal, Historical Research and associate editor of Frontiers in Digital Humanities: Digital History

12) Don’t over-state your methodology

Ethnography seems to be the trendy method of the moment, so lots of articles submitted claim to be based on it. However, closer inspection reveals quite limited and standard interview data. A couple of interviews in a café do not constitute ethnography. Be clear - early on - about the nature and scope of your data collection. The same goes for the use of theory. If a theoretical insight is useful to your analysis, use it consistently throughout your argument and text. Fiona Macaulay, editorial board, Journal of Latin American Studies

Dealing with feedback

13) Respond directly (and calmly) to reviewer comments

When resubmitting a paper following revisions, include a detailed document summarising all the changes suggested by the reviewers, and how you have changed your manuscript in light of them. Stick to the facts, and don’t rant. Don’t respond to reviewer feedback as soon as you get it. Read it, think about it for several days, discuss it with others, and then draft a response. Helen Ball, editorial board, Journal of Human Lactation

14) Revise and resubmit: don’t give up after getting through all the major hurdles

You’d be surprised how many authors who receive the standard “revise and resubmit” letter never actually do so. But it is worth doing - some authors who get asked to do major revisions persevere and end up getting their work published, yet others, who had far less to do, never resubmit. It seems silly to get through the major hurdles of writing the article, getting it past the editors and back from peer review only to then give up. Fiona Macaulay, editorial board, Journal of Latin American Studies

15) It is acceptable to challenge reviewers, with good justification

It is acceptable to decline a reviewer’s suggestion to change a component of your article if you have a good justification, or can (politely) argue why the reviewer is wrong. A rational explanation will be accepted by editors, especially if it is clear you have considered all the feedback received and accepted some of it. Helen Ball, editorial board of Journal of Human Lactation

16) Think about how quickly you want to see your paper published

Some journals rank more highly than others and so your risk of rejection is going to be greater. People need to think about whether or not they need to see their work published quickly - because certain journals will take longer. Some journals, like ours, also do advance access so once the article is accepted it appears on the journal website. This is important if you’re preparing for a job interview and need to show that you are publishable. Hugh McLaughlin, editor in chief, Social Work Education - the International Journal

17) Remember: when you read published papers you only see the finished article

Publishing in top journals is a challenge for everyone, but it may seem easier for other people. When you read published papers you see the finished article, not the first draft, nor the first revise and resubmit, nor any of the intermediate versions – and you never see the failures. Philip Powell, managing editor of the Information Systems Journal


Harvard Library Is Launching Harvard Open Journals Program

Harvard Library is launching a new initiative called the Harvard Open Journals Program (HOJP), which will help researchers advance scholarly publishing that is open access, sustainable, and equitable. HOJP will provide publishing services, resources, and seed funding to participating Harvard researchers for new academic journals. All journal articles will be entirely free for authors and readers, with no barriers to publish or to access.

Martha Whitehead, Vice President for the Harvard Library and University Librarian, sees the initiative as an important step in championing open access. Whitehead said, “We want to model the original ethos of open access by reducing barriers and enabling the free flow of ideas and knowledge across the research ecosystem and beyond to the public at large.”

The Harvard Open Journals Program will offer publishing and hosting services to help the Harvard community launch new open access journals, or to convert existing journals to open access. The program will offer two support models: an overlay model which takes advantage of open access repositories, such as Harvard’s  DASH , and a brand-new academic press model. 

Yuan Li, University Scholarly Communication Officer and Director of Open Scholarship and Research Data Services at Harvard Library, pointed out the innovative nature of the program, “It is new for an institution to support faculty in seeking out an academic press to publish a no-fee open access journal and to provide assistance in securing its long-term funding. And offering a repository overlay journal model provides an alternative that appeals to some editorial boards and is gaining traction through initiatives such as Episciences. As we implement and refine this program on our campus, we hope it will inspire other universities to adopt such approaches to supporting barrier-free scholarly publishing.”

The program is a direct response to faculty interest in alternatives to the article-processing-charge model, in which journals charge author-side fees to publish papers open access. It also supports federal requirements that publications resulting from publicly-funded research be open access.

The open access movement in scholarly publishing seeks to grant free and public online access to publications and data. In recent decades, many researchers have become increasingly concerned that commercial rather than scholarly interests are driving the publishing ecosystem. With some publishers charging article processing fees of over $10,000 per article, skyrocketing costs inhibit many researchers and institutions from publishing in these journals. At the same time, research institutions continue to pay high subscription costs, even as their faculty provide editorial and peer review services mainly for free to the publishers. These practices have led to widespread outcry in the scholarly community, and tensions between publishers and editorial boards have led to the latter’s  mass resignations .

Scott Edwards, Professor of Organismic and Evolutionary Biology, and a member of the Harvard Library Faculty Advisory Council, applauds the library’s exploration of new models for supporting open access publishing. Edwards said, “In this increasingly challenging publishing ecosystem, the Harvard Open Journals Program is a welcome new approach.” 

“These are sustainable and equitable open access publishing models that allow scholars to take control of scholarly communication,” added Li. “I hope that many research-heavy institutions adopt our approach. The first Harvard Open Access policy, launched in 2008, has been adopted nationally and internationally, and it would be great to see similar reach.”

Under Harvard’s Open Access policies, Harvard faculty and researchers give the University a nonexclusive, irrevocable right to distribute their scholarly articles for any non-commercial purpose. Stored and preserved in  DASH , Harvard Library’s open access repository, these articles are made available to the scholarly community and the public—anyone with an internet connection can read them for free.

Harvard Library is working closely with the Office of the Vice Provost for Research on launching the HOJP program. John Shaw, Vice Provost for Research and Harry C. Dudley Professor of Structural and Economic Geology, is eager to promote the initiative in the suite of programs that support faculty research. Shaw said, “The launch of HOJP provides very encouraging options for removing barriers to making research results open and expanding their reach.”

The Harvard Open Journals Program will be open to all journals with a current Harvard affiliate on the editorial team or editorial board. Student-run journals are also eligible, as long as they are sponsored by a Harvard faculty member or administrator.

In preparing to launch HOJP this summer, Harvard Library is currently seeking input on program details from interested faculty. HOJP will begin accepting applications in the fall from journals and editorial boards. Colleen Cressman, Librarian for Open Publishing, will manage the program and can be reached by email for more information.

Adolescent Comprehensive Sexuality Education as a Supplement for Limited Sexual and Reproductive Health Services in Syria’s Idlib Governorate IDP Camps


  • Date: April 29, 2024
  • ORCID: https://orcid.org/0000-0001-6301-3106
  • Affiliation: Gillings School of Global Public Health
  • Other Affiliations: Department of Health Behavior; Department of Maternal and Child Health; Department of Environmental Sciences and Engineering
  • Abstract: The Syrian crisis is marked by extreme humanitarian need among refugees and internally displaced persons (IDPs), with more than 15.3 million people seeking help since the Syrian Civil War commenced in 2011. In Syria alone, IDP camps are inhabited by 6.8 million people, of which 1.8 million are located on the front line in Northwest Syria (NWS). For a myriad of reasons, the Syrian crisis has limited access to sexual and reproductive health (SRH) services for IDPs in NWS camps, which increases the risk of negative SRH outcomes among women and girls. To address this problem, an eight-week comprehensive sexuality education (CSE) program will be implemented in 36 IDP camp schools in Idlib Governorate, Syria. The program will reach approximately 1,440 girls (11-14 years old) and inform on empowerment, gender and human rights, SRH information (e.g., puberty, menstruation, contraception, adverse outcomes), the Minimal Initial Service Package (MISP), and local SRH services. Program success will be measured by the prevalence of condom use, early marriages, sexually transmitted infection (STI) spread, and unintended pregnancies. This project ultimately aims to increase bodily autonomy, empowerment, and financial independence among internally displaced adolescent girls in Idlib Governorate, Syria.
  • Keywords: Comprehensive Sexuality Education (CSE); Sexual and Reproductive Health (SRH); Internally Displaced Persons (IDPs); Syrian Crisis; Sexual health--Study and teaching
  • DOI: https://doi.org/10.17615/dvrr-z427
  • Type of work: Capstone Project
  • Rights: In Copyright; All rights reserved
  • Degree: Master of Public Health
  • Program: Public Health
  • Location: Chapel Hill, North Carolina, United States


Select type of work

Master's Papers

Deposit your master's paper, project or other capstone work. Theses will be sent to the CDR automatically via ProQuest and do not need to be deposited.

Scholarly Articles and Book Chapters

Deposit a peer-reviewed article or book chapter. If you would like to deposit a poster, presentation, conference paper or white paper, use the “Scholarly Works” deposit form.

Undergraduate Honors Theses

Deposit your senior honors thesis.

Scholarly Journal, Newsletter or Book

Deposit a complete issue of a scholarly journal, newsletter or book. If you would like to deposit an article or book chapter, use the “Scholarly Articles and Book Chapters” deposit option.

Datasets

Deposit your dataset. Datasets may be associated with an article or deposited separately.

Multimedia

Deposit your 3D objects, audio, images or video.

Poster, Presentation, Protocol or Paper

Deposit scholarly works such as posters, presentations, research protocols, conference papers or white papers. If you would like to deposit a peer-reviewed article or book chapter, use the “Scholarly Articles and Book Chapters” deposit option.

  • Open access
  • Published: 24 April 2024

Breast cancer screening motivation and behaviours of women aged over 75 years: a scoping review

  • Virginia Dickson-Swift 1 ,
  • Joanne Adams 1 ,
  • Evelien Spelten 1 ,
  • Irene Blackberry 2 ,
  • Carlene Wilson 3 , 4 , 5 &
  • Eva Yuen 3 , 6 , 7 , 8  

BMC Women's Health, volume 24, Article number: 256 (2024)


Abstract

This scoping review aimed to identify and present the evidence describing key motivations for breast cancer screening among women aged ≥ 75 years. Few of the internationally available guidelines recommend continued biennial screening for this age group. Some suggest ongoing screening is unnecessary or should be determined based on individual health status and life expectancy. Recent research has shown that despite recommendations regarding screening, older women continue to hold positive attitudes to breast screening and participate when the opportunity is available.

All original research articles that address motivation, intention and/or participation in screening for breast cancer among women aged ≥ 75 years were considered for inclusion. These included articles reporting on women who use public and private breast cancer screening services and those who do not use screening services (i.e., non-screeners).

The Joanna Briggs Institute (JBI) methodology for scoping reviews was used to guide this review. A comprehensive search strategy was developed with the assistance of a specialist librarian to access selected databases, including: the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, Web of Science and PsycINFO. The review was restricted to original research studies published since 2009, available in English and focusing on high-income countries (as defined by the World Bank). Title and abstract screening, followed by an assessment of full-text studies against the inclusion criteria, was completed by at least two reviewers. Data relating to key motivations, screening intention and behaviour were extracted, and a thematic analysis of study findings undertaken.

A total of fourteen (14) studies were included in the review. Thematic analysis resulted in identification of three themes from included studies highlighting that decisions about screening were influenced by: knowledge of the benefits and harms of screening and their relationship to age; underlying attitudes to the importance of cancer screening in women's lives; and use of decision aids to improve knowledge and guide decision-making.

The results of this review provide a comprehensive overview of current knowledge regarding the motivations and screening behaviour of older women about breast cancer screening which may inform policy development.


Introduction

Breast cancer is now the most commonly diagnosed cancer in the world, overtaking lung cancer in 2021 [1]. Across the globe, breast cancer contributed 25.8% of the total number of new cases of cancer diagnosed in 2020 [2] and accounts for a high disease burden for women [3]. Screening for breast cancer is an effective means of detecting early-stage cancer and has been shown to significantly improve survival rates [4]. A recent systematic review of international screening guidelines found that most countries recommend that women have biennial mammograms between the ages of 40–70 years [5], with some recommending that there should be no upper age limit [6, 7, 8, 9, 10, 11, 12] and others suggesting that the benefits of continued screening for women over 75 are not clear [13, 14, 15].

Some guidelines suggest that the decision to end screening should be determined based on the individual health status of the woman, their life expectancy and current health issues [5, 16, 17]. This is because the benefits of mammography screening may be limited after 7 years due to existing comorbidities and limited life expectancy [18, 19, 20, 21], with some jurisdictions recommending breast cancer screening for women ≥ 75 years only when life expectancy is estimated as at least 7–10 years [22]. Others have argued that decisions about continuing with screening mammography should depend on individual patient risk and health management preferences [23]. This decision is likely facilitated by a discussion between a health care provider and patient about the harms and benefits of screening outside the recommended ages [24, 25]. While mammography may enable early detection of breast cancer, it is clear that false-positive results and overdiagnosis may occur. Studies have estimated that up to 25% of breast cancer cases in the general population may be overdiagnosed [26, 27, 28].

The risk of being diagnosed with breast cancer increases with age, and approximately 80% of new cases of breast cancer in high-income countries are in women over the age of 50 [29]. The average age at first diagnosis of breast cancer in high-income countries is comparable to that of Australian women, which is now 61 years [2, 4, 29]. Studies show that women aged ≥ 75 years generally have positive attitudes to mammography screening and report high levels of perceived benefits, including early detection of breast cancer and a desire to stay healthy as they age [21, 30, 31, 32]. Some women aged over 74 participate, or plan to participate, in screening despite recommendations from health professionals and government guidelines advising against it [33]. Results of a recent review found that knowledge of the recommended guidelines and the potential harms of screening is limited and that many older women believed the benefits of continued screening outweighed the risks [30].

Very few studies have been undertaken to understand the motivations of women to screen or to establish screening participation rates among women aged ≥ 75 years. This is surprising given that increasing age is recognised as a key risk factor for the development of breast cancer, and that screening is offered in many locations around the world every two years up until 74 years of age. The importance of this topic is high given the ambiguity around best practice for participation beyond 74 years. A preliminary search of the Open Science Framework, PROSPERO, the Cochrane Database of Systematic Reviews and JBI Evidence Synthesis in May 2022 did not locate any reviews on this topic.

This scoping review has allowed for the mapping of a broad range of research to explore the breadth and depth of the literature, summarize the evidence and identify knowledge gaps [ 34 , 35 ]. This information has supported the development of a comprehensive overview of current knowledge of motivations of women to screen and screening participation rates among women outside the targeted age of many international screening programs.

Materials and methods

Research question

The research question for this scoping review was developed by applying the Population—Concept—Context (PCC) framework [36]. The current review addresses the research question “What research has been undertaken in high-income countries (context) exploring the key motivations to screen for breast cancer and screening participation (concepts) among women ≥ 75 years of age (population)?”

Eligibility criteria

Participants

Women aged ≥ 75 years were the key population. Specifically, motivations to screen, screening intention and behaviour, and the variables that discriminate those who screen from those who do not (non-screeners) were utilised as the key predictors and outcomes, respectively.

From a conceptual perspective, it was considered that motivation led to behaviour; therefore, articles that described motivation and corresponding behaviour were considered. These included articles reporting on women who use public (government funded) and private (fee for service) breast cancer screening services and those who do not use screening services (i.e., non-screeners).

The scope included high-income countries using the World Bank definition [ 37 ]. These countries have broadly similar health systems and opportunities for breast cancer screening in both public and private settings.

Types of sources

All studies reporting original research in peer-reviewed journals from January 2009 were eligible for inclusion, regardless of design. This date was selected due to an evaluation undertaken for BreastScreen Australia recommending expansion of the age group to include 70–74-year-old women [ 38 ]. This date was also indicative of international debate regarding breast cancer screening effectiveness at this time [ 39 , 40 ]. Reviews were also included, regardless of type—scoping, systematic, or narrative. Only sources published in English and available through the University’s extensive research holdings were eligible for inclusion. Ineligible materials were conference abstracts, letters to the editor, editorials, opinion pieces, commentaries, newspaper articles, dissertations and theses.

This scoping review was registered with the Open Science Framework database ( https://osf.io/fd3eh ) and followed the Joanna Briggs Institute (JBI) methodology for scoping reviews [35, 36]. Although ethics approval is not required for scoping reviews, the broader study was approved by the University Ethics Committee (approval number HEC 21249).

Search strategy

A pilot search strategy was developed in consultation with an expert health librarian and tested in MEDLINE (Ovid) on 3 June 2022. Articles from this pilot search were compared with seminal articles previously identified by members of the team and used to refine the search terms. The search terms were then searched as both keywords and subject headings (e.g., MeSH) in the titles and abstracts, and Boolean operators were employed. A full MEDLINE search was then carried out by the librarian (see Table 1). This search strategy was adapted for use in each of the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medical Literature Analysis and Retrieval System Online (MEDLINE), Web of Science and PsycINFO. The references of included studies were hand-searched to identify any additional evidence sources.

Study/source of evidence selection

Following the search, all identified citations were collated and uploaded into EndNote v.X20 (Clarivate Analytics, PA, USA) and duplicates removed. The resulting articles were then imported into Covidence, Cochrane's systematic review management software [41]. Any remaining duplicates were removed once importation was complete, and title and abstract screening was undertaken against the eligibility criteria. A sample of 25 articles was assessed by all reviewers to ensure reliability in the application of the inclusion and exclusion criteria, and team discussion was used to ensure consistent application. The Covidence software supports blind reviewing, with two reviewers required at each screening phase. Potentially relevant sources were retrieved in full text and assessed against the inclusion criteria by two independent reviewers. Conflicts were flagged within the software, and the team discussed disagreements until consensus was reached. Reasons for exclusion of studies at full text were recorded and reported in the scoping review. The Preferred Reporting Items for Systematic Reviews extension for scoping reviews (PRISMA-ScR) checklist was used to guide the reporting of the review [42], and all stages were documented using the PRISMA-ScR flow chart [42].

Data extraction

A data extraction form was created in Covidence and used to extract study characteristics and confirm each study’s relevance. This included specific details such as article author/s, title, year of publication, country, aim, population, setting, data collection methods and key findings relevant to the review question. The draft extraction form was modified as needed during the data extraction process.
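
Schematically, one row of the extraction form corresponds to a record with the fields just listed. The sketch below is a representation of those fields, not the actual Covidence template.

```python
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    """One row of the data extraction form. Field names mirror the
    details listed above; this is a sketch, not the Covidence form."""
    authors: str
    title: str
    year: int
    country: str
    aim: str
    population: str
    setting: str
    data_collection_methods: str
    key_findings: str
```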

Data analysis and presentation

Extracted data were summarised in tabular format (see Table 2). Consistent with the guidelines for the effective reporting of scoping reviews [ 43 ] and the JBI framework [ 35 ], the final stage of the review included thematic analysis of the key findings of the included studies. Study findings were imported into QSR NVivo and each line of text was coded. Descriptive codes reflected key aspects of the included studies relating to the motivations and screening behaviours of women aged over 75 years.

Results

In line with the reporting requirements for scoping reviews, the search results for this review are presented in Fig. 1 [ 44 ].

Fig. 1 PRISMA flow chart of the search results (template from the PRISMA 2020 statement: Page MJ, et al. BMJ. 2021;372:n71 [ 44 ])

A total of 14 studies were included in the review: 12 from the US [ 33 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ], one from the UK [ 23 ] and one from France [ 56 ]. Sample sizes varied, with most studies containing fewer than 50 women ( n  = 8) [ 33 , 45 , 46 , 48 , 51 , 52 , 55 ]. Two had larger samples: a French study with 136 women (a subset of a larger sample) [ 56 ] and a mixed-methods study in the UK in which 26 women undertook interviews and 479 completed surveys [ 23 ]. One study did not report exact numbers [ 50 ]. Three studies [ 47 , 53 , 54 ] were undertaken by a group of researchers based in the US using the same sample of women, although each paper focused on different primary outcomes. The samples in the included studies were recruited from a range of locations, including primary medical care clinics, specialist medical clinics, university-affiliated medical clinics, community-based health centres and community outreach clinics [ 47 , 53 , 54 ].

Data collection methods varied: quantitative ( n  = 8), qualitative ( n  = 5) and mixed methods ( n  = 1). A range of data collection tools and research designs were utilised, including pre/post, pilot and cross-sectional surveys, interviews, and secondary analysis of existing data sets. Seven studies focused on the use of decision aids (DAs), in original or modified form, developed by Schonberg et al. [ 55 ] as a tool to increase knowledge about the harms and benefits of screening for older women [ 45 , 47 , 48 , 49 , 52 , 54 , 55 ]. Three studies focused on intention to screen [ 33 , 53 , 56 ], two on knowledge of, and attitudes to, screening [ 23 , 46 ], one on information needs relating to the risks and benefits of discontinuing screening [ 51 ], and one on perceptions about discontinuation of screening and the impact of social interactions on screening [ 50 ].

The three themes developed from the analysis of the included studies highlighted that decisions about screening were primarily influenced by: (1) knowledge of the benefits and harms of screening and their relationship to age; (2) underlying attitudes to the importance of cancer screening in women's lives; and (3) exposure to decision aids designed to facilitate informed decision-making. Each theme is presented below, drawing on the key findings of the relevant studies. The full dataset of extracted data can be found in Table 2.

Knowledge of the benefits and harms of screening ≥ 75 years

The decision to participate in routine mammography is influenced by individual differences in cognition and affect, interpersonal relationships, provider characteristics, and healthcare system variables. Women typically perceive mammograms as a positive, beneficial and routine component of care [ 46 ] and an important aspect of taking care of themselves [ 23 , 46 , 49 ]. One qualitative study undertaken in the US showed that few women had discussed mammography cessation or the potential harms of screening with their health care providers, and some women reported that they would insist on receiving mammography even without a provider recommendation to continue screening [ 46 ].

Studies suggested that neither ageing itself nor poor health was seen as a sufficient reason for screening cessation. For many women, guidance from a health care provider was deemed the most important influence on decision-making [ 46 ]. Preferences for communication about risks and benefits varied: one study reported that women would like to learn more about harms and risks, and recommended that this information be communicated via physicians or other healthcare providers, included in brochures/pamphlets, and presented outside of clinical settings (e.g., in community-based seniors’ groups) [ 51 ]. Others reported that women were sometimes sceptical of expert and government recommendations [ 33 ], although some were happy to participate in discussions with health educators or care providers about the harms and benefits of breast cancer screening and potential cessation [ 52 ].

Underlying attitudes to the importance of cancer screening at and beyond 75 years

Included studies varied in describing the importance of screening, with some attitudes based on past attendance and some on future intentions to screen. Three studies reported that some women intended to continue screening after 75 years of age [ 23 , 45 , 46 ], with one UK study reporting that women supported extension of the automatic recall indefinitely, regardless of age or health status; in this study, failure to invite older women to screen was interpreted as age discrimination [ 23 ]. The desire to continue screening beyond 75 was also highlighted in a French study, which found that 60% of women aged ≥ 75 ( n  = 136) intended to pursue screening in the future, and that 36% of those who had never undergone mammography ( n  = 27) intended to do so in the future [ 56 ]. In the same study, intentions to screen varied significantly [ 56 ]. No sociodemographic differences were observed between screened and unscreened women with regard to level of education, income, health risk behaviour (smoking, alcohol consumption), knowledge about the importance and the process of screening, or psychological features (fear of the test, fear of the results, fear of the disease, trust in screening impact) [ 56 ]. Further analysis showed that three factors were associated with a higher rate of attendance at screening: (1) screening was initiated by a physician; (2) the woman had consulted a gynaecologist during the past 12 months; and (3) the woman had already undergone at least five screening mammograms. In short, although income, education, psychological features and other health risk behaviours did not affect screening intention, having previously had a mammogram increased the likelihood of ongoing screening. No information was provided to explain why women who had not previously undergone screening might do so in the future.

A mixed-methods study in the UK reported similar findings [ 23 ]. Drawing on interviews ( n  = 26) and questionnaires ( n  = 479) with women ≥ 70 years (median age 75 years), the overwhelming result (90.1%) was that breast screening should be offered to all women indefinitely, regardless of age, health status or fitness [ 23 ], and many older women were keen to continue screening. Both the interview and survey data confirmed that women were uncertain about their eligibility for breast screening. The survey data showed that just over half of the women (52.9%) did not know that they could request mammography or how to access it. Key reasons for screening discontinuation were not being invited for screening (52.1%) and not knowing about self-referral (35.1%).

Women reported that not being invited to continue screening sent the message that screening was no longer important or required for this age group [ 23 ]. Almost two-thirds of the women completing the survey (61.6%) said they would forget to attend screening without an invitation. Other reasons for screening discontinuation included transport difficulties (25%) and not wishing to burden family members (24.7%). By contrast, other studies reported that women did not endorse discontinuing screening mammography due to advancing age or poor health, although some may be receptive to reducing screening frequency on the recommendation of their health care provider [ 46 , 51 ].

Use of Decision Aids (DAs) to improve knowledge and guide screening decision-making

Many women reported poor knowledge about the harms and benefits of screening, and several studies identified an important role for DAs. These aids have been shown to be effective in improving knowledge of the harms and benefits of screening [ 45 , 54 , 55 ], although their impact may differ by educational attainment [ 47 ]. DAs can increase knowledge about screening [ 47 , 49 ] and may decrease the intention to continue screening after the recommended age [ 45 , 52 , 54 ]. They can be used by primary care providers to support a conversation about breast screening intention and reasons for discontinuing screening. In one US pilot study using a DA, five of the eight women (62.5%) indicated that they intended to continue to receive mammography, although three planned to do so less often [ 45 ]. When asked whether they thought their physician would want them to have a mammogram, 80% said “yes” at pre-test; this figure decreased to 62.5% after exposure to the DA. This pilot study suggests that the use of a DA may result in fewer women ≥ 75 years old continuing to screen for breast cancer [ 45 ].

Similar findings were evident in two US studies drawing on the same data [ 48 , 53 ]. Using a larger sample ( n  = 283), women’s intentions to screen before a visit with their primary care provider were compared with their intentions after exposure to the DA. Results showed that 21.7% of women reduced their intention to be screened, 7.9% increased it, and 70.4% did not change. Compared with those whose intentions were unchanged or increased, women whose screening intention decreased were significantly less likely to receive screening after 18 months. Generally, studies have shown that women aged 75 and older find DAs acceptable and helpful [ 47 , 48 , 49 , 55 ] and that using them has the potential to affect women’s intention to screen [ 55 ].

Cadet and colleagues [ 49 ] explored the impact of educational attainment on the use of DAs. Their results highlight that education moderates the utility of these aids: women with lower educational attainment were less likely to understand all the DA’s content (46.3% vs 67.5%; p < 0.001), had less knowledge of the benefits and harms of mammography (adjusted mean ± standard error knowledge score, 7.1 ± 0.3 vs 8.1 ± 0.3; p < 0.001) and were less likely to have their screening intentions affected (adjusted percentage, 11.4% vs 19.4%; p = 0.01).

Discussion

This scoping review summarises current knowledge regarding the motivations and screening behaviours of women over 75 years. The findings suggest that awareness of the importance of breast cancer screening among women aged ≥ 75 years is high [ 23 , 46 , 49 ] and that many women wish to continue screening regardless of perceived health status or age. This highlights the importance of focusing on motivations and screening behaviours and on the multiple factors that influence ongoing participation in breast screening programs.

The generally high regard for screening among women aged ≥ 75 years presents a complex challenge for health professionals, whose focus, informed by national and international guidelines, is on the potential harms of ongoing screening for women beyond age 75 [ 18 , 20 , 57 ]. Included studies highlight that many women relied on the advice of health care providers regarding the benefits and harms when deciding whether to continue breast screening [ 46 , 51 , 52 ], although some did not [ 33 ]. A previous pattern of screening was noted as more significant to ongoing intention than any other identified sociodemographic feature [ 56 ], perhaps because women will not readily forgo health care practices that they have always considered important and that retain ongoing importance for the broader population.

For women who had discontinued screening after the age of 74, the rationale was often based not on choice or the receipt of information but on practical factors: no longer receiving an invitation to attend, transport difficulties, and not wanting to be a burden on relatives or friends [ 23 , 46 , 51 ]. Ongoing receipt of invitations to screen was an important aspect of maintaining the capacity to choose [ 23 ], particularly for women who had been regular screeners.

Women over 75 require more information to make decisions regarding screening [ 23 , 52 , 54 , 55 ]; however, health care providers must also be aware that the element of choice is important for older women. Having the capacity to choose avoids any notion of discrimination based on age, health status, gender or sociodemographic difference, and acknowledges the importance of women retaining control over their health [ 23 ]. It was apparent that some women would choose to continue screening at a reduced frequency if this option were available, and that women should have access to information facilitating self-referral [ 23 , 45 , 46 , 51 , 56 ].

Decision-making regarding ongoing breast cancer screening has been facilitated through the use of DAs within clinical settings [ 54 , 55 ]. While some studies suggest that women will make a decision regardless of health status, the use of DAs has influenced women’s decisions to screen. Although DAs may be of limited benefit for women of lower educational attainment [ 48 ], they have been effective in improving knowledge of the harms and benefits of screening, particularly where they have been used to support a conversation with women about the value of screening [ 54 , 55 , 56 ].

Women have identified challenges in engaging in conversations with health care providers regarding ongoing screening because providers frequently draw on projections of life expectancy and overdiagnosis [ 17 , 51 ]. As a result, conversations about screening after age 75 often do not occur [ 46 ]. Health providers may need more support and guidance in leading these conversations, whether through DAs or standardised checklists, which might be incorporated into existing preventive health measures for this age group. Making advice about ongoing breast cancer screening available outside clinical settings may also provide important pathways for conversations with women about their health choices. Providing information and advice in settings such as community-based seniors’ groups [ 51 ] offers a potential platform to broaden conversations and align sources of information, not only with health professionals but amongst women themselves. This may help address misconceptions regarding eligibility and access to services [ 23 ], and could be aligned with other health promotion and lifestyle messages provided to this age group.

Limitations of the review

The searches that formed the basis of this review were carried out in June 2022. Although the search was comprehensive, we captured only those studies published in the included databases from 2009 onwards; other studies may have been published outside this period. We also limited the search to studies published in English with full-text availability.

The emphasis of a scoping review is on comprehensive coverage and synthesis of the key findings rather than on a particular standard of evidence; consequently, a quality assessment of the included studies was not undertaken. This resulted in the inclusion of a wide range of study designs and data collection methods. It is important to note that three studies included in the review drew on the same sample of women ( n  = 283 aged over 75) [ 49 , 53 , 54 ]. The results of this review provide valuable insights into the motivations and screening behaviours of older women; however, they should be interpreted with caution given the specific methodological and geographical limitations.

Conclusion and recommendations

This scoping review highlighted a range of key motivations and behaviours in relation to breast cancer screening for women ≥ 75 years of age. The results provide some insight into how decisions about continuing screening after age 74 are made and how informed decision-making can be supported. Specifically, this review supports the following suggestions for further research and policy direction:

Further research regarding breast cancer screening motivations and behaviours for women over 75 would provide valuable insight for health providers delivering services to women in this age group.

Health providers may benefit from the broader use of decision aids or structured checklists to guide conversations with women over 75 regarding ongoing health promotion/preventive measures.

Providing health-based information in non-clinical settings frequented by women in this age group may provide a broader reach of information and facilitate choices. This may help to reduce any perception of discrimination based on age, health status or socio-demographic factors.

Availability of data and materials

All data generated or analysed during this study are included in this published article (see Table 2 above).

Cancer Australia, in their 2014 position statement, define “overdiagnosis” as follows: “‘Overdiagnosis’ from breast screening does not refer to error or misdiagnosis, but rather refers to breast cancer diagnosed by screening that would not otherwise have been diagnosed during a woman’s lifetime. ‘Overdiagnosis’ includes all instances where cancers detected through screening (ductal carcinoma in situ or invasive breast cancer) might never have progressed to become symptomatic during a woman’s life, i.e., cancer that would not have been detected in the absence of screening. It is not possible to precisely predict at diagnosis, to which cancers overdiagnosis would apply.” (accessed 22 August 2022; https://www.canceraustralia.gov.au/resources/position-statements/overdiagnosis-mammographic-screening )

References

1. World Health Organization. Breast cancer. Geneva: WHO; 2021. Available from: https://www.who.int/news-room/fact-sheets/detail/breast-cancer .

2. International Agency for Research on Cancer (IARC). IARC Handbooks of Cancer Prevention, Volume 15: Breast cancer screening. IARC; 2016. Available from: https://publications.iarc.fr/Book-And-Report-Series/Iarc-Handbooks-Of-Cancer-Prevention/Breast-Cancer-Screening-2016 .

3. Australian Institute of Health and Welfare. Cancer in Australia. 2021. Available from: https://www.canceraustralia.gov.au/cancer-types/breast-cancer/statistics .

4. Breast Cancer Network Australia. Current breast cancer statistics in Australia. 2020. Available from: https://www.bcna.org.au/media/7111/bcna-2019-current-breast-cancer-statistics-in-australia-11jan2019.pdf .

5. Ren W, Chen M, Qiao Y, Zhao F. Global guidelines for breast cancer screening: a systematic review. Breast. 2022;64:85–99.

6. Cardoso F, Kyriakides S, Ohno S, Penault-Llorca F, Poortmans P, Rubio IT, et al. Early breast cancer: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2019;30(8):1194–220.

7. Hamashima C, Hattori M, Honjo S, Kasahara Y, Katayama T, Nakai M, et al. The Japanese guidelines for breast cancer screening. Jpn J Clin Oncol. 2016;46(5):482–92.

8. Bevers TB, Helvie M, Bonaccio E, Calhoun KE, Daly MB, Farrar WB, et al. Breast cancer screening and diagnosis, version 3.2018, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2018;16(11):1362–89.

9. He J, Chen W, Li N, Shen H, Li J, Wang Y, et al. China guideline for the screening and early detection of female breast cancer (2021, Beijing). Zhonghua Zhong Liu Za Zhi [Chinese Journal of Oncology]. 2021;43(4):357–82.

10. Cancer Australia. Early detection of breast cancer. 2021 [cited 2022 July 25]. Available from: https://www.canceraustralia.gov.au/resources/position-statements/early-detection-breast-cancer .

11. Schünemann HJ, Lerda D, Quinn C, Follmann M, Alonso-Coello P, Rossi PG, et al. Breast cancer screening and diagnosis: a synopsis of the European Breast Guidelines. Ann Intern Med. 2019;172(1):46–56.

12. World Health Organization. WHO position paper on mammography screening. Geneva: WHO; 2016.

13. Lansdorp-Vogelaar I, Gulati R, Mariotto AB. Personalizing age of cancer screening cessation based on comorbid conditions: model estimates of harms and benefits. Ann Intern Med. 2014;161:104.

14. Lee CS, Moy L, Joe BN, Sickles EA, Niell BL. Screening for breast cancer in women age 75 years and older. Am J Roentgenol. 2017;210(2):256–63.

15. Broeders M, Moss S, Nystrom L. The impact of mammographic screening on breast cancer mortality in Europe: a review of observational studies. J Med Screen. 2012;19(Suppl 1):14.

16. Oeffinger KC, Fontham ETH, Etzioni R, Herzig A, Michaelson JS, Shih YCT, et al. Breast cancer screening for women at average risk: 2015 guideline update from the American Cancer Society. JAMA. 2015;314(15):1599–614.

17. Walter LC, Schonberg MA. Screening mammography in older women: a review. JAMA. 2014;311:1336.

18. Braithwaite D, Walter LC, Izano M, Kerlikowske K. Benefits and harms of screening mammography by comorbidity and age: a qualitative synthesis of observational studies and decision analyses. J Gen Intern Med. 2016;31:561.

19. Braithwaite D, Mandelblatt JS, Kerlikowske K. To screen or not to screen older women for breast cancer: a conundrum. Future Oncol. 2013;9(6):763–6.

20. Demb J, Abraham L, Miglioretti DL, Sprague BL, O’Meara ES, Advani S, et al. Screening mammography outcomes: risk of breast cancer and mortality by comorbidity score and age. J Natl Cancer Inst. 2020;112(6):599–606.

21. Demb J, Akinyemiju T, Allen I, Onega T, Hiatt RA, Braithwaite D. Screening mammography use in older women according to health status: a systematic review and meta-analysis. Clin Interv Aging. 2018;13:1987–97.

22. Qaseem A, Lin JS, Mustafa RA, Horwitch CA, Wilt TJ. Screening for breast cancer in average-risk women: a guidance statement from the American College of Physicians. Ann Intern Med. 2019;170(8):547–60.

23. Collins K, Winslow M, Reed MW, Walters SJ, Robinson T, Madan J, et al. The views of older women towards mammographic screening: a qualitative and quantitative study. Br J Cancer. 2010;102(10):1461–7.

24. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605–13.

25. Hersch J, Jansen J, Barratt A, Irwig L, Houssami N, Howard K, et al. Women’s views on overdiagnosis in breast cancer screening: a qualitative study. BMJ. 2013;346:f158.

26. De Gelder R, Heijnsdijk EAM, Van Ravesteyn NT, Fracheboud J, Draisma G, De Koning HJ. Interpreting overdiagnosis estimates in population-based mammography screening. Epidemiol Rev. 2011;33(1):111–21.

27. Monticciolo DL, Helvie MA, Edward HR. Current issues in the overdiagnosis and overtreatment of breast cancer. Am J Roentgenol. 2018;210(2):285–91.

28. Shepardson LB, Dean L. Current controversies in breast cancer screening. Semin Oncol. 2020;47(4):177–81.

29. National Cancer Control Centre. Cancer incidence in Australia. 2022. Available from: https://ncci.canceraustralia.gov.au/diagnosis/cancer-incidence/cancer-incidence .

30. Austin JD, Shelton RC, Lee Argov EJ, Tehranifar P. Older women’s perspectives driving mammography screening use and overuse: a narrative review of mixed-methods studies. Curr Epidemiol Rep. 2020;7(4):274–89.

31. Austin JD, Tehranifar P, Rodriguez CB, Brotzman L, Agovino M, Ziazadeh D, et al. A mixed-methods study of multi-level factors influencing mammography overuse among an older ethnically diverse screening population: implications for de-implementation. Implement Sci Commun. 2021;2(1):110.

32. Demb J, Allen I, Braithwaite D. Utilization of screening mammography in older women according to comorbidity and age: protocol for a systematic review. Syst Rev. 2016;5(1):168.

33. Housten AJ, Pappadis MR, Krishnan S, Weller SC, Giordano SH, Bevers TB, et al. Resistance to discontinuing breast cancer screening in older women: a qualitative study. Psychooncology. 2018;27(6):1635–41.

34. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

35. Peters M, Godfrey C, McInerney P, Munn Z, Tricco A, Khalil H, et al. Chapter 11: Scoping reviews. In: JBI Manual for Evidence Synthesis. 2020. Available from: https://jbi-global-wiki.refined.site/space/MANUAL .

36. Peters MD, Godfrey C, McInerney P, Khalil H, Larsen P, Marnie C, et al. Best practice guidance and reporting items for the development of scoping review protocols. JBI Evid Synth. 2022;20(4):953–68.

37. Fantom NJ, Serajuddin U. The World Bank’s classification of countries by income. World Bank Policy Research Working Paper; 2016.

38. BreastScreen Australia Evaluation Taskforce. BreastScreen Australia evaluation. Evaluation final report: Screening Monograph No 1/2009. Canberra: Australian Government Department of Health and Ageing; 2009.

39. Nelson HD, Cantor A, Humphrey L. Screening for breast cancer: a systematic review to update the 2009 U.S. Preventive Services Task Force recommendation. 2016.

40. Woolf SH. The 2009 breast cancer screening recommendations of the US Preventive Services Task Force. JAMA. 2010;303(2):162–3.

41. Covidence systematic review software. Veritas Health Innovation; 2020. Available from: https://www.covidence.org/ .

42. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

43. Tricco AC, Lillie E, Zarin W, O’Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16(1):15.

44. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

45. Beckmeyer A, Smith RM, Miles L, Schonberg MA, Toland AE, Hirsch H. Pilot evaluation of patient-centered survey tools for breast cancer screening decision-making in women 75 and older. Health Behav Policy Rev. 2020;7(1):13–8.

46. Brotzman LE, Shelton RC, Austin JD, Rodriguez CB, Agovino M, Moise N, et al. “It’s something I’ll do until I die”: a qualitative examination into why older women in the U.S. continue screening mammography. Cancer Med. 2022;11(20):3854–62.

47. Cadet T, Pinheiro A, Karamourtopoulos M, Jacobson AR, Aliberti GM, Kistler CE, et al. Effects by educational attainment of a mammography screening patient decision aid for women aged 75 years and older. Cancer. 2021;127(23):4455–63.

48. Cadet T, Aliberti G, Karamourtopoulos M, Jacobson A, Gilliam EA, Primeau S, et al. Evaluation of a mammography decision aid for women 75 and older at risk for lower health literacy in a pretest-posttest trial. Patient Educ Couns. 2021;104(9):2344–50.

49. Cadet T, Aliberti G, Karamourtopoulos M, Jacobson A, Siska M, Schonberg MA. Modifying a mammography decision aid for older adult women with risk factors for low health literacy. Health Lit Res Pract. 2021;5(2):e78–90.

50. Gray N, Picone G. Evidence of large-scale social interactions in mammography in the United States. Atl Econ J. 2018;46(4):441–57.

51. Hoover DS, Pappadis MR, Housten AJ, Krishnan S, Weller SC, Giordano SH, et al. Preferences for communicating about breast cancer screening among racially/ethnically diverse older women. Health Commun. 2019;34(7):702–6.

52. Salzman B, Bistline A, Cunningham A, Silverio A, Sifri R. Breast cancer screening shared decision-making in older African-American women. J Natl Med Assoc. 2020;112(5):556–60.

53. Schoenborn NL, Pinheiro A, Kistler CE, Schonberg MA. Association between breast cancer screening intention and behavior in the context of screening cessation in older women. Med Decis Making. 2021;41(2):240–4.

54. Schonberg MA, Kistler CE, Pinheiro A, Jacobson AR, Aliberti GM, Karamourtopoulos M, et al. Effect of a mammography screening decision aid for women 75 years and older: a cluster randomized clinical trial. JAMA Intern Med. 2020;180(6):831–42.

55. Schonberg MA, Hamel MB, Davis RB. Development and evaluation of a decision aid on mammography screening for women 75 years and older. JAMA Intern Med. 2014;174:417.

56. Eisinger F, Viguier J, Blay J-Y, Morère J-F, Coscas Y, Roussel C, et al. Uptake of breast cancer screening in women aged over 75 years: a controversy to come? Eur J Cancer Prev. 2011;20(Suppl 1):S13–5.

57. Schonberg MA, Breslau ES, McCarthy EP. Targeting of mammography screening according to life expectancy in women aged 75 and older. J Am Geriatr Soc. 2013;61(3):388–95.


Acknowledgements

We would like to acknowledge Ange Hayden-Johns (expert librarian), who assisted with the development of the search criteria and undertook the relevant searches, and Tejashree Kangutkar, who assisted with some of the Covidence work.

Funding

This work was supported by funding from the Australian Government Department of Health and Aged Care (ID: Health/20–21/E21-10463).

Author information

Authors and affiliations

Violet Vines Centre for Rural Health Research, La Trobe Rural Health School, La Trobe University, P.O. Box 199, Bendigo, VIC, 3552, Australia

Virginia Dickson-Swift, Joanne Adams & Evelien Spelten

Care Economy Research Institute, La Trobe University, Wodonga, Australia

Irene Blackberry

Olivia Newton-John Cancer Wellness and Research Centre, Austin Health, Melbourne, Australia

Carlene Wilson & Eva Yuen

Melbourne School of Population and Global Health, Melbourne University, Melbourne, Australia

Carlene Wilson

School of Psychology and Public Health, La Trobe University, Bundoora, Australia

Institute for Health Transformation, Deakin University, Burwood, Australia

Centre for Quality and Patient Safety, Monash Health Partnership, Monash Health, Clayton, Australia


Contributions

VDS conceived and designed the scoping review. VDS & JA developed the search strategy with librarian support, and all authors (VDS, JA, ES, IB, CW, EY) participated in the screening and data extraction stages and assisted with writing the review. All authors provided editorial support and read and approved the final manuscript prior to submission.

Corresponding author

Correspondence to Joanne Adams.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethics approval and consent to participate

Ethics approval and consent to participate were not required for this study.

Consent for publication

Consent for publication was not required for this study.


Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Dickson-Swift, V., Adams, J., Spelten, E. et al. Breast cancer screening motivation and behaviours of women aged over 75 years: a scoping review. BMC Women's Health 24 , 256 (2024). https://doi.org/10.1186/s12905-024-03094-z


Received: 06 September 2023

Accepted: 15 April 2024

Published: 24 April 2024

DOI: https://doi.org/10.1186/s12905-024-03094-z


Keywords

  • Breast cancer
  • Mammography
  • Older women
  • Scoping review
