Understanding Peer Review in Science

Peer Review Process

Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other’s work in educational settings, in professional settings, and in the publishing world. The goal of peer review is improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

Each method has advantages and disadvantages. Anonymous reviews reduce bias but limit collaboration, while open reviews are more transparent but can increase bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise: Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity: Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality: The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness: Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission : Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment : The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review : If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback : Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission : Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision : The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication : If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

Pros:

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Cons:

  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: Personal biases of reviewers can affect their evaluation of the manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: Some reviewers take an idea from a submission and publish it before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.

What is peer review?

Peer review is ‘a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already know.’ You can learn more in this explainer from the Social Science Space.  


Peer review brings academic research to publication in the following ways:

  • Evaluation – Peer review is an effective form of research evaluation to help select the highest quality articles for publication.
  • Integrity – Peer review ensures the integrity of the publishing process and the scholarly record. Reviewers are independent of journal publications and the research being conducted.
  • Quality – The filtering process and revision advice improve the quality of the final research article as well as offering the author new insights into their research methods and the results that they have compiled. Peer review gives authors access to the opinions of experts in the field who can provide support and insight.

Types of peer review

  • Single-anonymized – the name of the reviewer is hidden from the author.
  • Double-anonymized – names are hidden from both reviewers and the authors.
  • Triple-anonymized – names are hidden from authors, reviewers, and the editor.
  • Open peer review comes in many forms. At Sage we offer a form of open peer review on some journals via our Transparent Peer Review program, whereby the reviews are published alongside the article. The names of the reviewers may also be published, depending on the reviewers’ preference.
  • Post-publication peer review can offer useful interaction and a discussion forum for the research community. This form of peer review is not usual or appropriate in all fields.

To learn more about the different types of peer review, see page 14 of ‘The Nuts and Bolts of Peer Review’ from Sense about Science.

Please double check the manuscript submission guidelines of the journal you are reviewing in order to ensure that you understand the method of peer review being used.



16 April 2024

Structure peer review to make it more robust


Mario Malički

Mario Malički is associate director of the Stanford Program on Research Rigor and Reproducibility (SPORR) and co-editor-in-chief of the Research Integrity and Peer Review journal.



In February, I received two peer-review reports for a manuscript I’d submitted to a journal. One report contained 3 comments, the other 11. Apart from one point, all the feedback was different. It focused on expanding the discussion and some methodological details — there were no remarks about the study’s objectives, analyses or limitations.

My co-authors and I duly replied, working under two assumptions that are common in scholarly publishing: first, that anything the reviewers didn’t comment on they had found acceptable for publication; second, that they had the expertise to assess all aspects of our manuscript. But, as history has shown, those assumptions are not always accurate (see Lancet 396, 1056; 2020). And through the cracks, inaccurate, sloppy and falsified research can slip.

As co-editor-in-chief of the journal Research Integrity and Peer Review (an open-access journal published by BMC, which is part of Springer Nature), I’m invested in ensuring that the scholarly peer-review system is as trustworthy as possible. And I think that to be robust, peer review needs to be more structured. By that, I mean that journals should provide reviewers with a transparent set of questions to answer that focus on methodological, analytical and interpretative aspects of a paper.

For example, editors might ask peer reviewers to consider whether the methods are described in sufficient detail to allow another researcher to reproduce the work, whether extra statistical analyses are needed, and whether the authors’ interpretation of the results is supported by the data and the study methods. Should a reviewer find anything unsatisfactory, they should provide constructive criticism to the authors. And if reviewers lack the expertise to assess any part of the manuscript, they should be asked to declare this.


Other aspects of a study, such as novelty, potential impact, language and formatting, should be handled by editors, journal staff or even machines, reducing the workload for reviewers.

The list of questions reviewers will be asked should be published on the journal’s website, allowing authors to prepare their manuscripts with this process in mind. And, as others have argued before, review reports should be published in full. This would allow readers to judge for themselves how a paper was assessed, and would enable researchers to study peer-review practices.

To see how this works in practice, since 2022 I’ve been working with the publisher Elsevier on a pilot study of structured peer review in 23 of its journals, covering the health, life, physical and social sciences. The preliminary results indicate that, when guided by the same questions, reviewers made the same initial recommendation about whether to accept, revise or reject a paper 41% of the time, compared with 31% before these journals implemented structured peer review. Moreover, reviewers’ comments were in agreement about specific parts of a manuscript up to 72% of the time (M. Malički and B. Mehmani Preprint at bioRxiv https://doi.org/mrdv; 2024). In my opinion, reaching such agreement is important for science, which proceeds mainly through consensus.


I invite editors and publishers to follow in our footsteps and experiment with structured peer reviews. Anyone can trial our template questions (see go.nature.com/4ab2ppc ), or tailor them to suit specific fields or study types. For instance, mathematics journals might also ask whether referees agree with the logic or completeness of a proof. Some journals might ask reviewers if they have checked the raw data or the study code. Publications that employ editors who are less embedded in the research they handle than are academics might need to include questions about a paper’s novelty or impact.

Scientists can also use these questions, either as a checklist when writing papers or when they are reviewing for journals that don’t apply structured peer review.

Some journals — including Proceedings of the National Academy of Sciences , the PLOS family of journals, F1000 journals and some Springer Nature journals — already have their own sets of structured questions for peer reviewers. But, in general, these journals do not disclose the questions they ask, and do not make their questions consistent. This means that core peer-review checks are still not standardized, and reviewers are tasked with different questions when working for different journals.

Some might argue that, because different journals have different thresholds for publication, they should adhere to different standards of quality control. I disagree. Not every study is groundbreaking, but scientists should view quality control of the scientific literature in the same way as quality control in other sectors: as a way to ensure that a product is safe for use by the public. People should be able to see what types of check were done, and when, before an aeroplane was approved as safe for flying. We should apply the same rigour to scientific research.

Ultimately, I hope for a future in which all journals use the same core set of questions for specific study types and make all of their review reports public. I fear that a lack of standard practice in this area is delaying the progress of science.

Nature 628, 476 (2024)

doi: https://doi.org/10.1038/d41586-024-01101-9


Competing Interests

M.M. is co-editor-in-chief of the Research Integrity and Peer Review journal that publishes signed peer review reports alongside published articles. He is also the chair of the European Association of Science Editors Peer Review Committee.



Peer review process

Introduction to peer review

What is peer review?

Peer review is the system used to assess the quality of a manuscript before it is published. Independent researchers in the relevant research area assess submitted manuscripts for originality, validity and significance to help editors determine whether a manuscript should be published in their journal.

How does it work?

When a manuscript is submitted to a journal, it is assessed to see if it meets the criteria for submission. If it does, the editorial team will select potential peer reviewers within the field of research to peer-review the manuscript and make recommendations.

There are four main types of peer review used by BMC:

Single-blind: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report.

Double-blind: the reviewers do not know the names of the authors, and the authors do not know who reviewed their manuscript.

Open peer review: authors know who the reviewers are, and the reviewers know who the authors are. If the manuscript is accepted, the named reviewer reports are published alongside the article and the authors’ response to the reviewer.

Transparent peer review: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report. If the manuscript is accepted, the anonymous reviewer reports are published alongside the article and the authors’ response to the reviewer.

Different journals use different types of peer review. You can find out which peer-review system is used by a particular journal in the journal’s ‘About’ page.

Why do peer review?

Peer review is an integral part of scientific publishing that confirms the validity of the manuscript. Peer reviewers are experts who volunteer their time to help improve the manuscripts they review. By undergoing peer review, manuscripts should become:

More robust - peer reviewers may point out gaps in a paper that require more explanation or additional experiments.

Easier to read - if parts of your paper are difficult to understand, reviewers can suggest changes.

More useful - peer reviewers also consider the importance of your paper to others in your field.

For more information and advice on how to get published, please see our blog series here .


The peer review process

The peer review process can be broadly summarized into 10 steps, although these steps can vary slightly between journals. Explore what’s involved, below.

Editor Feedback: “Reviewers should remember that they are representing the readers of the journal. Will the readers of this particular journal find this informative and useful?”


1. Submission of Paper

The corresponding or submitting author submits the paper to the journal. This is usually via an online system such as ScholarOne Manuscripts. Occasionally, journals may accept submissions by email.

2. Editorial Office Assessment

The Editorial Office checks that the paper adheres to the requirements described in the journal’s Author Guidelines. The quality of the paper is not assessed at this point.

3. Appraisal by the Editor-in-Chief (EIC)

The EIC assesses the paper, considering its scope, originality and merits. The EIC may reject the paper at this stage.

4. EIC Assigns an Associate Editor (AE)

Some journals have Associate Editors (or equivalent) who handle the peer review. If they do, they would be assigned at this stage.

5. Invitation to Reviewers

The handling editor sends invitations to individuals he or she believes would be appropriate reviewers. As responses are received, further invitations are issued, if necessary, until the required number of reviewers is secured (commonly two, though this varies between journals).

6. Response to Invitations

Potential reviewers consider the invitation against their own expertise, conflicts of interest and availability. They then accept or decline the invitation to review. If possible, when declining, they might also suggest alternative reviewers.

7. Review is Conducted

The reviewer sets time aside to read the paper several times. The first read is used to form an initial impression of the work. If major problems are found at this stage, the reviewer may feel comfortable rejecting the paper without further work. Otherwise, they will read the paper several more times, taking notes to build a detailed point-by-point review. The review is then submitted to the journal, with the reviewer’s recommendation (e.g. to revise, accept or reject the paper).

8. Journal Evaluates the Reviews

The handling editor considers all the returned reviews before making a decision. If the reviews differ widely, the editor may invite an additional reviewer so as to get an extra opinion before making a decision.

9. The Decision is Communicated

The editor sends a decision email to the author including any relevant reviewer comments. Comments will be anonymous if the journal follows a single-anonymous or double-anonymous peer review model. Journals following an open or transparent peer review model will share the identities of the reviewers with the author(s).

10. Next Steps


If accepted, the paper is sent to production. If the article is rejected or sent back for either major or minor revision, the handling editor should include constructive comments from the reviewers to help the author improve the article. At this point, reviewers should also be sent an email or letter letting them know the outcome of their review. If the paper was sent back for revision, the reviewers should expect to receive a new version, unless they have opted out of further participation. However, where only minor changes were requested this follow-up review might be done by the handling editor.


You want your work to be the best it can possibly be, and that’s where peer review comes in.


Your work is shared with experts in your field of study in order to gain their insight and suggestions. Reviewers will evaluate the originality and thoroughness of your work, and whether it is within scope for the journal you have submitted to. There are many forms of peer review, from traditional models like single-blind and double-blind review to newer models, such as open and transferable review. Learn about our Transparent Peer Review pilot in collaboration with Publons and ScholarOne (part of Clarivate, Web of Science).

The length of the peer review process varies by journal, so check with the editors or the staff of the journal to which you are submitting for details of the process for that particular journal. Click here to read Wiley’s review confidentiality policy and check the review model for each journal we publish.

What is the reviewer looking for?

Reviewers look for originality, scientific significance, conciseness, precision, and completeness.

In general, at first read-through reviewers will be assessing your argument’s construction, the clarity of the language, and content. They will be asking themselves the following questions:

  • What is the main question addressed by the research? Is it relevant and interesting?
  • How original is the topic? What does it add to the subject area compared with other published material?
  • Is the paper well written? Is the text clear and easy to read?
  • Are the conclusions consistent with the evidence and arguments presented? Do they address the main question posed?
  • If the author is disagreeing significantly with the current academic consensus, do they have a substantial case? If not, what would be required to make their case credible?
  • If the paper includes tables or figures, what do they add to the paper? Do they aid understanding or are they superfluous?
  • Is the argument well-constructed and clear? Are there any factual errors or invalid arguments?

They may also consider the following:

  • Does the title properly reflect the subject of the paper?
  • Does the abstract provide an accessible summary of the paper?
  • Do the keywords accurately reflect the content?
  • Does the paper follow a clear and organized structure?
  • Is the paper an appropriate length?
  • Are the key messages short, accurate and clear?

Upon closer readings, the reviewer will be looking for any major issues:

  • Are there any major flaws?
  • If experimental design features prominently in the paper, is the methodology sound?
  • Is the research replicable, reproducible, and robust? Does it follow best practice and meet ethical standards?
  • Has similar work already been published without the authors acknowledging this?
  • Are there published studies that show similar or dissimilar trends that should be discussed?
  • Are the authors presenting findings that challenge current thinking? Is the evidence they present strong enough to prove their case? Have they cited all the relevant work that would contradict their thinking and addressed it appropriately?
  • Are there any major presentational problems? Are figures & tables, language and manuscript structure all clear enough to accurately assess the work?
  • Are there any ethical issues?

The reviewer will also note minor issues that need to be corrected:

  • Are the correct references cited? Are citations excessive, limited, or biased?
  • Are there any factual, numerical, or unit errors? If so, what are they?
  • Are all tables and figures appropriate, sufficient, and correctly labelled?

Possible outcomes of peer review

The journal’s editor or editorial board considers the feedback provided by the peer reviewers and uses this information to arrive at a decision. In addition to the comments received from the review, editors also base their decisions on:

  • The journal’s aims and audience
  • The state of knowledge in the field
  • The level of competition for acceptance and page space within the journal

The following represent the range of possible outcomes:

  • Accept without any changes (acceptance): The journal will publish the paper in its original form. This type of decision outcome is rare
  • Accept with minor revisions (acceptance): The journal will publish the paper and asks the author to make small corrections. This is typically the best outcome that authors should hope for
  • Accept after major revisions (conditional acceptance): The journal will publish the paper provided the authors make the changes suggested by the reviewers and/or editors
  • Revise and resubmit (conditional rejection): The journal is willing to reconsider the paper in another round of decision making after the authors make major changes
  • Reject the paper (outright rejection): The journal will not publish the paper or reconsider it even if the authors make major revisions

The decision outcome will be accompanied by the reviewer reports and some commentary from the editor that explains why the decision has been reached. If the decision involves revision for the author, the specific changes that are required should be clearly stated in the decision letter and review reports. The author can then respond to each point in turn.

Common reasons for rejection

The manuscript fails the technical screening: Before manuscripts are sent to the EIC or handling editor, many editorial offices first perform some checks. The main reasons that papers can be rejected at this stage are:

  • The article contains elements that are suspected to be plagiarized, or it is currently under review at another journal (submitting the same paper to multiple journals at the same time is not allowed)
  • The manuscript is insufficiently well prepared; for example, lacking key elements such as the title, authors, affiliations, keywords, main text, references, and tables and figures
  • The English is not of sufficient quality to allow a useful peer review to take place
  • The figures are not complete or are not clear enough to read
  • The article does not conform to the most important aspects of the specific journal’s Author Guidelines

The manuscript does not fall within the Aims and Scope of the journal: The work is not of interest to the readers of the specific journal

The manuscript is incomplete: For example, the article contains observations but is not a full study or it discusses findings in relation to some of the work in the field but ignores other important work

A clear hypothesis or research aim was not established or the question behind the work is not of interest in the field

The goal of the research was over-ambitious, and hence it could not realistically be achieved

There are flaws in the procedures and/or analysis of the data:

  • The study lacked clear control groups or other comparison metrics
  • The study did not conform to recognized procedures or methodology that can be repeated
  • The analysis is not statistically valid or does not follow the norms of the field

The conclusions were exaggerated: The conclusions cannot be justified on the basis of the rest of the paper

  • The arguments are illogical, unstructured or invalid
  • The data do not support the conclusions
  • The conclusions ignore large portions of the literature

The research topic was of little significance:

  • It is archival, or of marginal interest to the field; it is simply a small extension of a different paper, often from the same authors
  • Findings are incremental and do not significantly advance the field
  • The work is clearly part of a larger study, chopped up to make as many articles as possible (so-called “salami publication”)

Bad writing: If the language, structure, or figures are so poor that the merit of the paper can’t be assessed, then the paper will be rejected. It’s a good idea to ask a native English speaker to read the paper before submitting. Wiley Editing Services offers English Language Editing services, which you can use prior to submission if you are not confident in the quality of your English writing skills

What to do if your manuscript gets rejected

It is very common for papers to be rejected. Studies indicate that 21% of papers are rejected without review, and approximately 40% of papers are rejected after peer review.

If your paper has been rejected prior to peer review due to lack of subject fit, then find a new journal to submit your work to and move on.

However, if you receive a rejection after your paper has been reviewed, you will have a rich source of information about possible improvements that you could make. You have the following options:

Make the recommended changes and resubmit to the same journal:

This option could well be your top choice if you are keen to publish in a particular journal and if the editor has indicated that they will accept your paper if revisions are made. If the editor has issued an outright rejection and does not wish to reconsider the paper, you should respect this decision and submit to a different journal.

Make changes and submit to a different journal:

If you decide to try a different journal, you should still carefully consider the comments you received during the first round of review, and work on improving your manuscript before submitting elsewhere. Make sure that you adjust details like the cover letter, referencing and any other journal specific details before submitting to a different journal.

Make no changes and submit to a different journal:

While this option is an easy one, it is not recommended. It’s likely that many of the suggestions made during the original review would lead to an improved paper and by not addressing these points you are wasting a) the effort expended in the first round of review, and b) the opportunity to increase your chances of acceptance at the next journal. Furthermore, there is a chance that your manuscript may be assessed by the same reviewers at a new journal (particularly if you are publishing in a niche field). In this case, their recommendation will not change if you have not addressed the concerns raised in their earlier review. One exception would be if you are submitting to a journal that participates in a transfer program , where authors can agree to have their manuscript and reviews transferred to a new journal for consideration without making changes.

Appeal against the decision:

The journal should have a publicly described policy for appealing against editorial decisions. If you feel that the decision was based on an unfair assessment of your paper, or that there were major errors in the review process, then you are within your rights as an author to appeal. If you wish to appeal a decision, take the time to research that journal’s appeal process and review and address the points raised by the reviewer to prepare a reasoned and logical response.

Throw the manuscript away and never resubmit it:

Rejection can be disheartening, and it may be tempting to decide that it’s not worth the trouble of resubmitting. But, this is not the best outcome for either you or the wider research community. Your data may be highly valuable to someone else, or may help another researcher to avoid generating similar negative results.

Responding to the reviewer

You may not be able to control what the reviewers write in their review comments, but you can control the way you react to their comments. It’s useful to remember these points:

Reviewers have, on the whole, given time and effort to constructively criticize your article

Reviewers are volunteers and have given up their own time to evaluate your paper in order to contribute to the research community. Reviewers very rarely receive formal compensation beyond recognition from the editors of the effort they have expended. The author will get the ultimate credit, but reviewers are often key contributors to the shape of the final paper. Although the comments you receive may feel harsh, most reviewers are also authors and therefore will be trying to highlight how the paper could be improved. So, it is important to be grateful for the time that both reviewers and editors have spent evaluating your paper – and to express this gratitude in your response.

The importance of good manners

You should remain polite and thoughtful throughout any and all response to reviewers and editors. You are much more likely to receive a positive response in return and this will help build a constructive relationship with both reviewer and editor in the future.

Don’t take criticism as a personal attack

As stated previously, it is very rare that a paper will be accepted without any form of revisions requested. It is the job of the editor and reviewer to make sure that the published papers are scientifically sound, factual, clear and complete. In order to achieve this, it will be necessary to draw attention to areas of improvement. While this may be difficult for you as an author, the criticism received is not intended to be personal.

Avoid personalizing responses to the reviewer

Sticking to the facts and avoiding personal attacks is imperative. It’s a good idea to wait 24 to 72 hours before responding to a decision letter—then re-read the email. This simple process will remove much of the personal bias that could pollute appeals letters written in rage or disappointment. If you respond in anger, or in an argumentative fashion the editor and reviewers are much less likely to respond favorably.

Remember, even if you think the reviewer is wrong, this doesn’t necessarily mean that you are right! It is possible that the reviewer has made a mistake, but it is also possible that the reviewer was not able to understand your point because of a lack of clarity, or omission of crucial detail in your paper.

Evaluating the reviewer comments and planning your response

After you have read the decision letter and the reviewers comments, wait for at least 24 hours, then take a fresh look at the comments provided. This will help to neutralize the initial emotional response you may have and allow you to determine what the reviewers are asking for in a more objective manner.

Spending time assessing the scope of the revisions requested will help you evaluate the extent of effort required and prioritize the work you may need to undertake. It will also help you to provide a comprehensive response in your letter of reply.

Some useful steps to consider:

  • Make a list of all the reviewer comments and number them.
  • Categorize the comments as follows:
    • requests for clarification of existing text, addition of text to fill a gap in the paper, or additional experimental details
    • requests to reanalyze, re-express, or reinterpret existing data
    • requests for additional experiments or further proof of concept
    • requests you simply cannot meet
  • Note down the action/response that you plan to undertake for each comment. If there are requests that you cannot meet, address these in your response, providing a logical, reasoned explanation for why the study is not detrimentally affected by not making the changes requested.

Want to become a peer reviewer? Learn more about peer review, including how to become a reviewer in our Reviewer Resource Center .

Further reading:

How to deal with reviewer comments


What Is Peer Review? | Types & Examples

Published on 6 May 2022 by Tegan George. Revised on 2 September 2022.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.


What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymised) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymised comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymised) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymised) review – where the identities of the author, reviewers, and editors are all anonymised – does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimises potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymise everyone involved in the process.

Collaborative review

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimise back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open review

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

The peer review process

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor can then either:
    • reject the manuscript and send it back to the author, or
    • send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

Providing feedback to your peers

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarise the argument in your own words

Summarising the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organised. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticised, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the ‘compliment sandwich’, where you ‘sandwich’ your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Peer review example

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 with Group 2, and Group 1 with Group 3. The first t test showed no significant difference (p > .05) between the number of hours slept by Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
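As part of checking a paper’s statistics, a reviewer can recompute a reported test from the summary values alone. The following minimal sketch (in Python, using SciPy) reruns the example’s two independent t tests from the reported means and standard deviations. The group size of 100 per group is an assumption made only for illustration: the example recruits 300 teens split into three groups but does not report exact sizes, so the resulting p-values will not necessarily match those reported above.

    # Recompute the example's two independent t tests from the reported
    # summary statistics (means and standard deviations).
    # NOTE: n = 100 per group is an assumed value for illustration only.
    from scipy import stats

    n = 100  # assumed participants per group

    # Group 1 (no phone before bed) vs Group 2 (1 hour of phone use)
    g1_vs_g2 = stats.ttest_ind_from_stats(mean1=7.8, std1=0.6, nobs1=n,
                                          mean2=7.0, std2=0.8, nobs2=n)

    # Group 1 (no phone before bed) vs Group 3 (3 hours of phone use)
    g1_vs_g3 = stats.ttest_ind_from_stats(mean1=7.8, std1=0.6, nobs1=n,
                                          mean2=6.1, std2=1.5, nobs2=n)

    print(f"Group 1 vs Group 2: t = {g1_vs_g2.statistic:.2f}, p = {g1_vs_g2.pvalue:.3g}")
    print(f"Group 1 vs Group 3: t = {g1_vs_g3.statistic:.2f}, p = {g1_vs_g3.pvalue:.3g}")

Checks like this are quick to run and give a reviewer one concrete way to see whether reported statistics are internally consistent with the stated sample sizes.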

Advantages of peer review

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarised or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

Criticisms of peer review

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Frequently asked questions about peer review

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to this stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor can then either reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs: the reviewer provides feedback and advises on what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

George, T. (2022, September 02). What Is Peer Review? | Types & Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/research-methods/peer-reviews/


Research Methods: How to Perform an Effective Peer Review

Elise Peterson Lu, Brett G. Fischer, Melissa A. Plesac, Andrew P.J. Olson; Research Methods: How to Perform an Effective Peer Review. Hosp Pediatr November 2022; 12 (11): e409–e413. https://doi.org/10.1542/hpeds.2022-006764


Scientific peer review has existed for centuries and is a cornerstone of the scientific publication process. Because the number of scientific publications has rapidly increased over the past decades, so has the number of peer reviews and peer reviewers. In this paper, drawing on the relevant medical literature and our collective experience as peer reviewers, we provide a user guide to the peer review process, including discussion of the purpose and limitations of peer review, the qualities of a good peer reviewer, and a step-by-step process of how to conduct an effective peer review.

Peer review has been a part of scientific publications since 1665, when the Philosophical Transactions of the Royal Society became the first publication to formalize a system of expert review. 1 , 2   It became an institutionalized part of science in the latter half of the 20 th century and is now the standard in scientific research publications. 3   In 2012, there were more than 28 000 scholarly peer-reviewed journals and more than 3 million peer reviewed articles are now published annually. 3 , 4   However, even with this volume, most peer reviewers learn to review “on the (unpaid) job” and no standard training system exists to ensure quality and consistency. 5   Expectations and format vary between journals and most, but not all, provide basic instructions for reviewers. In this paper, we provide a general introduction to the peer review process and identify common strategies for success as well as pitfalls to avoid.

Modern peer review serves 2 primary purposes: (1) as “a screen before the diffusion of new knowledge” 6   and (2) as a method to improve the quality of published work. 1 , 5  

As screeners, peer reviewers evaluate the quality, validity, relevance, and significance of research before publication to maintain the credibility of the publications they serve and their fields of study. 1 , 2 , 7   Although peer reviewers are not the final decision makers on publication (that role belongs to the editor), their recommendations affect editorial decisions and thoughtful comments influence an article’s fate. 6 , 8  

As advisors and evaluators of manuscripts, reviewers have an opportunity and responsibility to give authors an outside expert’s perspective on their work. 9   They provide feedback that can improve methodology, enhance rigor, improve clarity, and redefine the scope of articles. 5 , 8 , 10   This often happens even if a paper is not ultimately accepted at the reviewer’s journal because peer reviewers’ comments are incorporated into revised drafts that are submitted to another journal. In a 2019 survey of authors, reviewers, and editors, 83% said that peer review helps science communication and 90% of authors reported that peer review improved their last paper. 11  

Expertise: Peer reviewers should be up to date with current literature, practice guidelines, and methodology within their subject area. However, academic rank and seniority do not define expertise and are not actually correlated with performance in peer review. 13  

Professionalism: Reviewers should be reliable and objective, aware of their own biases, and respectful of the confidentiality of the peer review process.

Critical skill: Reviewers should be organized, thorough, and detailed in their critique with the goal of improving the manuscript under their review, regardless of disposition. They should provide constructive comments that are specific and addressable, referencing literature when possible. A peer reviewer should leave a paper better than he or she found it.

Is the manuscript within your area of expertise? Generally, if you are asked to review a paper, it is because an editor felt that you were a qualified expert. In a 2019 survey, 74% of requested reviews were within the reviewer’s area of expertise. 11   This, of course, does not mean that you must be widely published in the area, only that you have enough expertise and comfort with the topic to critique and add to the paper.

Do you have any biases that may affect your review? Are there elements of the methodology, content area, or theory with which you disagree? Some disagreements between authors and reviewers are common, expected, and even helpful. However, if a reviewer fundamentally disagrees with an author’s premise such that he or she cannot be constructive, the review invitation should be declined.

Do you have the time? The average review for a clinical journal takes 5 to 6 hours, though many take longer depending on the complexity of the research and the experience of the reviewer. 1 , 14   Journals vary on the requested timeline for return of reviews, though it is usually 1 to 4 weeks. Peer review is often the longest part of the publication process and delays contribute to slower dissemination of important work and decreased author satisfaction. 15   Be mindful of your schedule and only accept a review invitation if you can reasonably return the review in the requested time.

Once you have determined that you are the right person and decided to take on the review, reply to the inviting e-mail or click the associated link to accept (or decline) the invitation. Journal editors invite a limited number of reviewers at a time and wait for responses before inviting others. A common complaint among journal editors surveyed was that reviewers would often take days to weeks to respond to requests, or not respond at all, making it difficult to find appropriate reviewers and prolonging an already long process. 5  

Now that you have decided to take on the review, it is best to have a systematic way of both evaluating the manuscript and writing the review. Various suggestions exist in the literature, but we will describe our standard procedure for review, incorporating specific do’s and don’ts summarized in Table 1.

Dos and Don’ts of Peer Review

First, read the manuscript once without making notes or forming opinions to get a sense of the paper as a whole. Assess the overall tone and flow and define what the authors identify as the main point of their work. Does the work overall make sense? Do the authors tell the story effectively?

Next, read the manuscript again with an eye toward review, taking notes and formulating thoughts on strengths and weaknesses. Consider the methodology and identify the specific type of research described. Refer to the corresponding reporting guideline if applicable (CONSORT for randomized control trials, STROBE for observational studies, PRISMA for systematic reviews). Reporting guidelines often include a checklist, flow diagram, or structured text giving a minimum list of information needed in a manuscript based on the type of research done. 16   This allows the reviewer to formulate a more nuanced and specific assessment of the manuscript.
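To make the guideline lookup concrete, the study-type-to-guideline mapping mentioned above can be kept as a simple table. The short Python sketch below restates only the three examples given (CONSORT, STROBE, PRISMA); the function name and the EQUATOR Network fallback hint are our own additions, not part of any journal's tooling.

```python
# Illustrative lookup from study design to the reporting guideline named above.
# Only the three guidelines mentioned in the text are listed; everything else is
# a hypothetical convenience for this sketch.
REPORTING_GUIDELINES = {
    "randomized controlled trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review": "PRISMA",
}

def guideline_for(study_type: str) -> str:
    """Return the matching reporting guideline, or a hint to look one up."""
    return REPORTING_GUIDELINES.get(
        study_type.strip().lower(),
        "no match listed here; check the EQUATOR Network for an appropriate guideline",
    )

print(guideline_for("Systematic review"))  # -> PRISMA
```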

Next, review the main findings, the significance of the work, and what contribution it makes to the field. Examine the presentation and flow of the manuscript but do not copy edit the text. At this point, you should start to write your review. Some journals provide a format for their reviews, but often it is up to the reviewer. In surveys of journal editors and reviewers, a review organized by manuscript section was the most favored, 5 , 6   so that is what we will describe here.

As you write your review, consider starting with a brief summary of the work that identifies the main topic, explains the basic approach, and describes the findings and conclusions. 12 , 17   Though not universally included in all reviews, we have found this step to be helpful in ensuring that the work is conveyed clearly enough for the reviewer to summarize it. Include brief notes on the significance of the work and what it adds to current knowledge. Critique the presentation of the work: is it clearly written? Is its length appropriate? List any major concerns with the work overall, such as major methodological flaws or inaccurate conclusions that should disqualify it from publication, though do not comment directly on disposition. Then perform your review by section:

Abstract : Is it consistent with the rest of the paper? Does it adequately describe the major points?

Introduction : This section should provide adequate background to explain the need for the study. Generally, classic or highly relevant studies should be cited, but citations do not have to be exhaustive. The research question and hypothesis should be clearly stated.

Methods: Evaluate both the methods themselves and the way in which they are explained. Does the methodology used meet the needs of the questions proposed? Is there sufficient detail to explain what the authors did and, if not, what needs to be added? For clinical research, examine the inclusion/exclusion criteria, control populations, and possible sources of bias. Reporting guidelines can be particularly helpful in determining the appropriateness of the methods and how they are reported.

Some journals will expect an evaluation of the statistics used, whereas others will have a separate statistician evaluate them; reviewers are generally not expected to have an exhaustive knowledge of statistical methods. Clarify expectations if needed and, if you do not feel qualified to evaluate the statistics, make this clear in your review.

Results: Evaluate the presentation of the results. Is information given in sufficient detail to assess credibility? Are the results consistent with the methodology reported? Are the figures and tables consistent with the text, easy to interpret, and relevant to the work? Make note of data that could be better detailed in figures or tables, rather than included in the text. Make note of inappropriate interpretation in the results section (this should be in discussion) or rehashing of methods.

Discussion: Evaluate the authors’ interpretation of their results, how they address limitations, and the implications of their work. How does the work contribute to the field, and do the authors adequately describe those contributions? Make note of overinterpretation or conclusions not supported by the data.
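If it helps to keep these notes organized, the structure described above (summary, significance, major concerns, then comments by section, with the disposition reserved for the editor) can be sketched as a small data structure. This is a minimal illustration of one possible format, not a template mandated by any journal; all field names are ours.

```python
# Minimal sketch of a section-by-section review skeleton, mirroring the structure
# described above. No journal requires this exact format; it is purely illustrative.
from dataclasses import dataclass, field

@dataclass
class PeerReviewNotes:
    summary: str = ""                  # brief restatement of topic, approach, findings
    significance: str = ""             # what the work adds to current knowledge
    major_concerns: list[str] = field(default_factory=list)
    abstract: list[str] = field(default_factory=list)
    introduction: list[str] = field(default_factory=list)
    methods: list[str] = field(default_factory=list)
    results: list[str] = field(default_factory=list)
    discussion: list[str] = field(default_factory=list)
    confidential_to_editor: str = ""   # disposition recommendation goes here, not in the body

    def as_text(self) -> str:
        """Render the comments to the authors as plain text, section by section."""
        parts = [p for p in (self.summary, self.significance) if p]
        for name in ("major_concerns", "abstract", "introduction",
                     "methods", "results", "discussion"):
            comments = getattr(self, name)
            if comments:
                parts.append(name.replace("_", " ").title() + ":")
                parts.extend(f"- {c}" for c in comments)
        return "\n".join(parts)
```

Filling in the fields during the reading passes described above keeps the eventual write-up aligned with the order in which most editors expect to see comments.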

The length of your review often correlates with your opinion of the quality of the work. If an article has major flaws that you think preclude publication, write a brief review that focuses on the big picture. Articles that may not be accepted but still represent quality work merit longer reviews aimed at helping the author improve the work for resubmission elsewhere.

Generally, do not include your recommendation on disposition in the body of the review itself. Acceptance or rejection is ultimately determined by the editor and including your recommendation in your comments to the authors can be confusing. A journal editor’s decision on acceptance or rejection may depend on more factors than just the quality of the work, including the subject area, journal priorities, other contemporaneous submissions, and page constraints.

Many submission sites include a separate question asking whether to accept, accept with major revision, or reject. If this specific format is not included, then add your recommendation in the “confidential notes to the editor.” Your recommendation should be consistent with the content of your review: don’t give a glowing review but recommend rejection or harshly criticize a manuscript but recommend publication. Last, regardless of your ultimate recommendation on disposition, it is imperative to use respectful and professional language and tone in your written review.

Although peer review is often described as the “gatekeeper” of science and characterized as a quality control measure, peer review is not ideally designed to detect fundamental errors, plagiarism, or fraud. In multiple studies, peer reviewers detected only 20% to 33% of intentionally inserted errors in scientific manuscripts. 18 , 19   Plagiarism similarly is not detected in peer review, largely because of the huge volume of literature available to plagiarize. Most journals now use computer software to identify plagiarism before a manuscript goes to peer review. Finally, outright fraud often goes undetected in peer review. Reviewers start from a position of respect for the authors and trust the data they are given barring obvious inconsistencies. Ultimately, reviewers are “gatekeepers, not detectives.” 7  

Peer review is also limited by bias. Even with the best of intentions, reviewers bring biases including but not limited to prestige bias, affiliation bias, nationality bias, language bias, gender bias, content bias, confirmation bias, bias against interdisciplinary research, publication bias, conservatism, and bias of conflict of interest. 3 , 4 , 6   For example, peer reviewers score methodology higher and are more likely to recommend publication when prestigious author names or institutions are visible. 20   Although bias can be mitigated both by the reviewer and by the journal, it cannot be eliminated. Reviewers should be mindful of their own biases while performing reviews and work to actively mitigate them. For example, if English language editing is necessary, state this with specific examples rather than suggesting the authors seek editing by a “native English speaker.”

Peer review is an essential, though imperfect, part of the forward movement of science. Peer review can function as both a gatekeeper to protect the published record of science and a mechanism to improve research at the level of individual manuscripts. Here, we have described our strategy, summarized in Table 2 , for performing a thorough peer review, with a focus on organization, objectivity, and constructiveness. By using a systematized strategy to evaluate manuscripts and an organized format for writing reviews, you can provide a relatively objective perspective in editorial decision-making. By providing specific and constructive feedback to authors, you contribute to the quality of the published literature.

Take-home Points

FUNDING: No external funding.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no potential conflicts of interest to disclose.

Dr Lu performed the literature review and wrote the manuscript. Dr Fischer assisted in the literature review and reviewed and edited the manuscript. Dr Plesac provided background information on the process of peer review, reviewed and edited the manuscript, and completed revisions. Dr Olson provided background information and practical advice, critically reviewed and revised the manuscript, and approved the final manuscript.


Explainer: what is peer review?

Andre Spicer, Professor of Organisational Behaviour, Cass Business School, City, University of London

Thomas Roulet, Novak Druce Research Fellow, University of Oxford

Disclosure statement

Thomas Roulet does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

Andre Spicer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


We’ve all heard the phrase “peer review” as giving credence to research and scholarly papers, but what does it actually mean? How does it work?

Peer review is one of the gold standards of science. It’s a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already knew.

Most scientific journals, conferences and grant applications have some sort of peer review system. In most cases it is “double blind” peer review. This means evaluators do not know the author(s), and the author(s) do not know the identity of the evaluators. The intention behind this system is to ensure evaluation is not biased.

The more prestigious the journal, conference, or grant, the more demanding the review process and the more likely rejection. This prestige is why papers in these outlets tend to be more widely read and more cited.

The process in detail

The peer review process for journals involves at least three stages.

1. The desk evaluation stage

When a paper is submitted to a journal, it receives an initial evaluation by the chief editor, or an associate editor with relevant expertise.

At this stage, either can “desk reject” the paper: that is, reject the paper without sending it to blind referees. Generally, papers are desk rejected if the paper doesn’t fit the scope of the journal or there is a fundamental flaw which makes it unfit for publication.

In this case, the rejecting editor might write a letter summarising his or her concerns. Some journals, such as the British Medical Journal, desk reject two-thirds or more of the papers.

2. The blind review

If the editorial team judges there are no fundamental flaws, they send it for review to blind referees. The number of reviewers depends on the field: in finance there might be only one reviewer, while journals in other fields of social sciences might ask up to four reviewers. Those reviewers are selected by the editor on the basis of their expert knowledge and the absence of any link with the authors.

Reviewers will decide whether to reject the paper, to accept it as it is (which rarely happens) or to ask for the paper to be revised. This means the author needs to change the paper in line with the reviewers’ concerns.

Usually the reviews deal with the validity and rigour of the empirical method, and the importance and originality of the findings (what is called the “contribution” to the existing literature). The editor collects those comments, weights them, takes a decision, and writes a letter summarising the reviewers’ and his or her own concerns.

It can therefore happen that, despite hostility on the part of the reviewers, the editor offers the paper a subsequent round of revision. In the best journals in the social sciences, 10% to 20% of the papers are offered a “revise-and-resubmit” after the first round.

3. The revisions – if you are lucky enough

If the paper has not been rejected after this first round of review, it is sent back to the author(s) for a revision. The process is repeated as many times as necessary for the editor to reach a consensus point on whether to accept or reject the paper. In some cases this can last for several years.

Ultimately, less than 10% of the submitted papers are accepted in the best journals in the social sciences. The renowned journal Nature publishes around 7% of the submitted papers.

Strengths and weaknesses of the peer review process

The peer review process is seen as the gold standard in science because it ensures the rigour, novelty, and consistency of academic outputs. Typically, through rounds of review, flawed ideas are eliminated and good ideas are strengthened and improved. Peer reviewing also ensures that science is relatively independent.

Because scientific ideas are judged by other scientists, the crucial yardstick is scientific standards. If other people from outside of the field were involved in judging ideas, other criteria such as political or economic gain might be used to select ideas. Peer reviewing is also seen as a crucial way of removing personalities and bias from the process of judging knowledge.

Despite the undoubted strengths, the peer review process as we know it has been criticised . It involves a number of social interactions that might create biases – for example, authors might be identified by reviewers if they are in the same field, and desk rejections are not blind.

It might also favour incremental (adding to past research) rather than innovative (new) research. Finally, reviewers are human after all and can make mistakes, misunderstand elements, or miss errors.

Are there any alternatives?

Defenders of the peer review system say although there are flaws, we’re yet to find a better system to evaluate research. However, a number of innovations have been introduced in the academic review system to improve its objectivity and efficiency.

Some new open-access journals (such as PLOS ONE ) publish papers with very little evaluation (they check the work is not deeply flawed methodologically). The focus there is on the post-publication peer review system: all readers can comment and criticise the paper.

Some journals, such as Nature, have made part of the review process public (“open” review), offering a hybrid system in which peer review plays the role of primary gatekeeper, while the public community of scholars judges the value of the research in parallel (or, in some other journals, afterwards).

Another idea is to have a set of reviewers rating the paper each time it is revised. In this case, authors will be able to choose whether they want to invest more time in a revision to obtain a better rating, and get their work publicly recognised.


Peer Review Process: Understanding The Pathway To Publication

Demystifying peer review process: Insights into the rigorous evaluation process shaping scholarly research and ensuring academic quality.


The process of peer review plays a vital role in the world of academic publishing, ensuring the quality and credibility of scholarly research. This process is a critical evaluation system where experts in the field assess the merit, validity, and originality of research manuscripts before publication. Through a comprehensive examination of the peer review process, this article aims to explain its stages, importance, and best practices. By understanding the peer review process, researchers and aspiring authors can navigate the evaluation process effectively, enhance the integrity of their work, and contribute to the advancement of scientific knowledge.

What Is Peer Review?

Peer review is a critical evaluation process that academic work undergoes before being published in a journal. It serves as a filter, fact-checker, and redundancy-detector, ensuring that the published research is original, impactful, and adheres to the best practices of the field. The primary purposes of peer review are twofold. Firstly, it acts as a quality control mechanism, ensuring that only high-quality research is published, especially in reputable journals, by assessing the validity, significance, and originality of the study. Secondly, it aims to improve the quality of manuscripts deemed suitable for publication by providing authors with suggestions for improvement and identifying any errors that need correction. The process subjects the manuscript to the scrutiny of experts (peers) in the field, who review and provide feedback in one or more rounds of review and revision, depending on the journal’s policies and the topic of the work.


The Importance Of Peer Review In Science

Peer review in science is important for several reasons. It ensures quality, validates research findings, provides constructive feedback, fosters collaboration, and maintains public trust in scientific research. It provides valuable insights, suggestions, and alternative perspectives that can enhance the quality of the research. Authors benefit from this iterative process, as it allows them to address any weaknesses or gaps in their work and improve the clarity and coherence of their findings.


Additionally, because peer review serves as a platform for constructive criticism and feedback, it contributes to the advancement of scientific knowledge by fostering intellectual dialogue and collaboration. Through the critical assessment of research manuscripts, reviewers may identify potential areas for further investigation or propose alternative hypotheses, stimulating further research and discovery.

Types Of Peer Review Process

Peer review has various models. The specific type of peer review employed can differ between journals, even within the same publisher. Before submitting the paper, it is crucial to become acquainted with the peer review policy of the selected journal; this ensures that the review process aligns with expectations. To understand the different models, we outline the most prevalent types of peer review below.

Single-Anonymous Peer Review

Single-anonymous peer review, also known as single-blind review, is a prevalent model employed by science and medicine journals. In this process, the reviewers are aware of the author’s identity, but the author remains unaware of the reviewers’ identities. This approach maintains a level of anonymity to ensure impartial evaluation and minimize biases. The reviewers assess the manuscript based on its merits, scientific rigor, and adherence to the journal’s guidelines. Single-anonymous peer review helps maintain objectivity and fairness in the review process, allowing for an unbiased assessment of the research work.


Double-Anonymous Peer Review

Double-anonymous peer review, also known as double-blind review, is a method employed in many humanities and social sciences journals. In this process, the identities of both the author and the reviewers are concealed. The reviewers are unaware of the author’s identity, and vice versa. This type of review aims to minimize bias and ensure a fair evaluation of the manuscript based solely on its content and merit. By maintaining anonymity, double-anonymous peer review promotes impartiality and enhances the credibility and objectivity of the peer review process.

Triple-Anonymized Peer Review

Triple-anonymized review, also known as triple-blind review, ensures anonymity for both the reviewers and the author. At the submission stage, articles are anonymized to minimize any potential bias toward the author(s). The editor and reviewers do not have knowledge of the author’s identity. However, fully anonymizing articles and authors at this level can be challenging. As with double-anonymized review, the editor and/or reviewers may still be able to deduce the author’s identity through writing style, subject matter, citation patterns, or other cues.

Open Peer Review

Open peer review is a diverse and evolving model with various interpretations. It generally involves reviewers being aware of the author’s identity and, at some stage, their identities being disclosed to the author. However, there is no universally accepted definition for open peer review, with over 122 different definitions identified in a recent study. This approach introduces transparency to the peer review process by allowing authors and reviewers to engage in a more direct and open dialogue. The level of openness may vary, with some forms of open peer review including public reviewer comments and even post-publication commentary. Open peer review aims to foster collaboration, accountability, and constructive feedback within the scientific community.

Post-Publication Peer Review

Post-publication peer review is a distinct model where the review process takes place after the initial publication of the paper. It can occur in two ways: either the paper undergoes a traditional peer review before being published online, or it is published online promptly after basic checks without undergoing extensive pre-publication review. Once the paper is published, reviewers, including invited experts or even readers, have the opportunity to contribute their comments, assessments, or reviews. This form of peer review allows for ongoing evaluation and discussion of the research, providing a platform for additional insights, critiques, and discussions that can contribute to the refinement and further understanding of the published work. Post-publication peer review emphasizes the importance of continued dialogue and engagement within the scientific community to ensure the quality and validity of published research.

Registered Reports

Registered Reports is a unique peer review process that involves two distinct stages. The first stage occurs after the study design has been developed but before data collection or analysis has taken place. At this point, the manuscript undergoes peer review, providing valuable feedback on the research question and the experimental design. If the manuscript successfully passes this initial peer review, the journal grants an in-principle acceptance (IPA), indicating that the article will be published contingent upon the completion of the study according to the pre-registered methods and the submission of an evidence-based interpretation of the results. This approach ensures that the research is evaluated based on its scientific merit rather than the significance or outcome of the findings. Registered Reports aim to enhance the credibility and transparency of research by focusing on the quality of the research question and methodology rather than the outcome, reducing bias and providing a more robust foundation for scientific knowledge.
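As a quick summary of the four anonymity-based models above (post-publication review and Registered Reports are defined by timing rather than anonymity), the visibility of identities can be restated as a small table in code. The dictionary below is simply our shorthand for the descriptions already given.

```python
# Shorthand restatement of the anonymity rules described above. Purely illustrative.
# Each entry records (reviewer sees author, author sees reviewer, editor sees author).
REVIEW_MODELS = {
    "single-anonymous (single-blind)":  (True,  False, True),
    "double-anonymous (double-blind)":  (False, False, True),
    "triple-anonymized (triple-blind)": (False, False, False),
    "open peer review":                 (True,  True,  True),
}

for model, (reviewer_sees_author, author_sees_reviewer, editor_sees_author) in REVIEW_MODELS.items():
    print(f"{model}: reviewer sees author={reviewer_sees_author}, "
          f"author sees reviewer={author_sees_reviewer}, "
          f"editor sees author={editor_sees_author}")
```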

Peer Review Process

The peer review process is a critical component of academic publishing that ensures the quality, validity, and integrity of scholarly research. It involves a rigorous evaluation of research manuscripts by experts in the same field to determine their suitability for publication. While the specific steps may vary among journals, the general process follows several key stages.

Submission: Authors submit their research manuscript to a journal, adhering to the journal’s guidelines and formatting requirements.

Editorial Evaluation: The editor assesses the manuscript’s alignment with the journal’s scope, relevance, and overall quality. They may reject the manuscript at this stage if it does not meet the journal’s criteria.

Peer Review Assignment: If the manuscript passes the initial evaluation, the editor selects appropriate experts in the field to conduct the peer review. Reviewers are chosen based on their expertise, ensuring a thorough and unbiased evaluation.

Peer Review: The reviewers carefully examine the manuscript, assessing its methodology, validity of results, clarity of writing, and contribution to the field. They provide constructive feedback, identify strengths and weaknesses, and recommend revisions.

Decision: Based on the reviewers’ feedback, the editor makes a decision on the manuscript. The decision can be acceptance, acceptance with revisions, major revisions, or rejection. The author(s) are notified of the decision along with any specific feedback.

Revision: If the manuscript requires revisions, the author(s) make necessary changes based on the reviewers’ comments and suggestions. They address each point raised by the reviewers and provide a detailed response outlining the modifications made.

Final Decision: The editor re-evaluates the revised manuscript to ensure that all requested changes have been adequately addressed. The editor then makes the final decision regarding its acceptance.

Publication: Once accepted, the manuscript undergoes the final stages of copyediting, formatting, and proofreading before being published in the journal. It becomes accessible to the wider academic community, contributing to the body of knowledge in the respective field.
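Read as a workflow, the stages above amount to a small state machine: a manuscript moves from submission through editorial evaluation, peer review, decision, and revision until it is either rejected or accepted and published. The sketch below is a simplified illustration of that flow; the state names and transitions are ours, not those of any actual submission system.

```python
# Simplified sketch of the manuscript workflow described above.
# States and allowed transitions are illustrative, not a real submission system's model.
ALLOWED_TRANSITIONS = {
    "submitted":            {"editorial_evaluation"},
    "editorial_evaluation": {"rejected", "in_peer_review"},
    "in_peer_review":       {"decision"},
    "decision":             {"accepted", "rejected", "in_revision"},
    "in_revision":          {"decision"},   # revised manuscript is re-evaluated
    "accepted":             {"published"},
    "rejected":             set(),
    "published":            set(),
}

def advance(state: str, next_state: str) -> str:
    """Move to the next state if the transition is allowed, otherwise raise."""
    if next_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {next_state!r}")
    return next_state

# Example path: evaluation -> review -> decision -> revision -> decision -> acceptance -> publication
state = "submitted"
for step in ("editorial_evaluation", "in_peer_review", "decision",
             "in_revision", "decision", "accepted", "published"):
    state = advance(state, step)
print(state)  # published
```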

Potential Problems Of Peer Review

While peer review is an essential component of the scholarly publishing process, it is not without its potential problems. Some of the key challenges and limitations of peer review include:

Bias and Subjectivity: Reviewers may possess personal biases that can influence their manuscript assessment, potentially leading to unfair evaluations or inconsistent judgments. Subjectivity in the interpretation of research findings and methodology can also impact the review process.

Delays in Publication: Peer review can be a time-consuming process, with reviewers taking varying lengths of time to provide feedback. This can result in delays in the publication of research, potentially hindering the timely dissemination of important findings.

Lack of Standardization: Reviewers’ expertise, qualifications, and reviewing criteria may vary, leading to inconsistencies in the evaluation process. The lack of standardized guidelines for reviewing can result in discrepancies in the quality and rigor of the peer review process across different journals and disciplines.

Inefficiency and Burden: Reviewers are typically unpaid volunteers who dedicate their time and expertise to reviewing manuscripts. The increasing volume of submissions and shortage of qualified reviewers can place a significant burden on the peer review system, potentially leading to delays and compromised quality.

Limited Scope for Detecting Errors: While peer review aims to identify and rectify errors or methodological flaws in manuscripts, it is not foolproof. Reviewers may not always have access to the raw data or the resources to conduct a thorough replication of the study, making it challenging to detect certain types of errors or misconduct.

Publication Bias: Peer review can inadvertently contribute to publication bias, as journals may have a preference for publishing positive or statistically significant results, potentially neglecting studies with null or negative findings. This can create an imbalanced representation of research in the literature.


The Nahrein Network

Peer Review Process


Overview of the process for peer review following the submission deadline for Research Grant Awards applications

The process is as follows: 

- Immediately after a Research Grants round closes, all applications are checked for eligibility and completeness 

- All eligible Expressions of Interest are assigned to at least one peer reviewer for independent assessment ahead of the next Management Committee meeting. Reviewers are typically assigned 3–5 applications each; every application is also read by the Director and Deputy Director 

- Reviewers make independent assessments of the projects they are assigned and complete a short, secure online evaluation form, to an agreed deadline ahead of the Management Committee meeting 

- The reviewers’ rankings are collated in order to determine the top ten applications. If a clear top ten does not emerge from this process, reviewers may be asked to join a short online meeting in early December to discuss and agree the shortlist. 

- The shortlist is sent to the Nahrein Network’s Management Committee for ratification at its next meeting 

- Applicants are notified of the outcome of this process as soon as possible after that meeting 

The role of the peer reviewer 

Peer reviewers are provided with a small number of eligible Expressions of Interest and an assessor’s form. In advance of starting work on reviewing them, we advise them to: 

• familiarise themselves with these guidelines and assessment criteria for the scheme 

• alert the Nahrein Network administrator to any conflicts of interest, including potential conflicts. 

In reviewing the applications they:  

- use their knowledge, judgement and expertise in order to reach clear, sound, evidence-based decisions. 

- treat all applications, and the discussions about them, as strictly confidential at all times. 

- strive to be fair and objective  

- adhere to the Nahrein Network’s EDI policy which states that: 

The Nahrein Network is committed to eliminating unlawful discrimination and promoting equality of opportunity and good relations across and between the defined equalities groups in all of their relevant functions. Accordingly, no eligible funding applicant or external stakeholder should receive less favourable treatment on the grounds of gender, marital status, sexual orientation, gender reassignment, race, colour, nationality, ethnicity or national origins, religion or similar philosophical belief, spent criminal conviction, age or disability.  Equally, all proposals will be assessed on equal terms, regardless of the sex, age and/or ethnicity of the applicant. Proposals will, therefore, be assessed and graded on their merits, in accordance with the criteria and the aims and objectives set for each call for funding.  

Safeguarding decision making 

We are committed to ensuring that those who make funding decisions recognise the factors that introduce risk into the decision making process. To do this, it is important to be aware of and take steps to remove any impact of unintentional bias in our processes, behaviours and culture. We know that pressure to make decisions, time pressures, high cognitive load and tiredness all create conditions that introduce the risk of unintentional bias. 

To minimise these risks the reviewers are advised to consider the following: 

• All applications must be assessed on equal terms and objectively assessed on their merits using the evaluation form 

• Decisions must be based on all the information provided 

• Question and challenge cultural stereotypes and bias 

• Be aware that working with a high cognitive load, with time pressures and the need to make quick decisions, creates conditions for bias  

• Slow down the speed of decision making, allowing sufficient time to consider each application 

• Reconsider the reasons for decisions, recognising that they may be post-hoc justifications 

• Question cultural stereotypes and be open to seeing what is new and unfamiliar 

• Remember you are unlikely to be fairer and less prejudiced than the average person 

• Unconscious bias is easier to detect in others than in yourself, so all panel members should feel able to call out bias when they see it 

For further information, the Royal Society has issued a briefing and video on unconscious bias: https://royalsociety.org/topics-policy/publications/2015/unconscious-bias/. 

Proposals are submitted to the Nahrein Network in confidence and may contain confidential information and personal data belonging to the applicant (and other researchers named in the proposal). Peer reviewers are asked to make sure that all proposals are treated confidentially. 

Conflicts of interest 

It is vital that peer reviewers are seen to be completely impartial at all stages of the review process.   

Peer reviewers should not take part in the assessment of any proposal where a conflict of interest is at play. Conflicts of interest need to be declared at  [email protected]

If any Management Committee member is in conflict with a proposal, they will be required to leave the meeting whilst the proposal is being discussed.  

Approach to assessment 

In order to fully understand the quality and content of the proposals, all peer reviewers must ensure their judgements are based solely on the scheme requirements, the Nahrein Network’s Aims and the assessment criteria for the Research Grants Scheme, as well as the information that is provided in the application form. 

Reviewers and Management Committee members should not allow private knowledge of the applicant or the proposed research to influence their judgement and panellists are expected to decline invitations to review if their private views, knowledge or relations will affect the judgement of applications. 

Before starting the review process, reviewers are advised to:  

- read the entire proposal thoroughly.  

- familiarise themselves with the Nahrein Network’s Aims and the scheme assessment criteria. 

- contact the Nahrein Network administrator [email protected] if anything is unclear. 

- Observations must always be accompanied by evidence to support them. Reviewers must use only the information provided in the application form.  

- Reviewers should take into account the information they are asked to provide under each heading or item in the scheme assessment criteria and ensure sufficient detail is provided for each one.  

- Reviewers should give a clear assessment of strengths and weaknesses of the proposal and indicate whether these are major or minor concerns.  

- Reviewers should provide an evaluation of the risks associated with the project.  

- Reviewers should contextualise the proposal that they are assessing within current work in the field, and comment on its relative importance/significance.  

- Reviewers should be receptive to new ideas and approaches to thinking within their discipline as well as methodology. 

- Reviewers should identify any inconsistencies and contradictions in the proposal.  

- Reviewers should scrutinise the budget and justification of resources for appropriate level of detail and value for money 

- In the case of interdisciplinary applications, reviewers should assess if the different disciplines meet up in a coherent way. 

- Reviewers should provide enough information to enable a judgement on the relative quality of this proposal compared to other applications. 

General points 

Reviewers should: 

- provide an impartial, objective, fair and analytical assessment of the proposal under review

- ensure they are providing an evaluation, not a description of the work proposed.  

- ensure the grade is justified by, and consistent with, any comments submitted.  

Grading proposals 

In the scheme evaluation form, reviewers should comment briefly on the strengths and weaknesses of the proposal, against each of the following criteria: 

The research question / problem and context are relevant and well explained 

The work could create new knowledge about Iraqi history, heritage or a related area  

The applicant is qualified to do this work 

The proposed project partners are suitable 

The proposed methodology is appropriate and viable 

The proposed outputs are suitable and viable 

The project addresses at least one of the Network’s research aims 

The project has potential to improve social, cultural or economic life in Iraq/KRI 

Reviewers should score each criterion of assessment in the range 1–5, where 1 is Poor and 5 is Excellent. They are also required to give a total score and an overall judgement: 

  • STRONG: priority to shortlist for further development 
  • FAIR: lesser priority shortlist for further development  
  • WEAK: do not shortlist for further development 
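As a rough numeric illustration of this scoring scheme, eight criteria scored 1–5 each give a total between 8 and 40. The sketch below collates one reviewer’s scores into a total and an overall judgement; the STRONG/FAIR/WEAK cut-offs are hypothetical, since the guidance above leaves the overall judgement to the reviewer rather than fixing thresholds.

```python
# Illustrative collation of one reviewer's criterion scores (1-5 each, eight criteria).
# The STRONG/FAIR/WEAK thresholds below are hypothetical; the scheme leaves the
# overall judgement to the reviewer rather than fixing cut-offs.
CRITERIA = [
    "research question and context",
    "new knowledge about Iraqi history or heritage",
    "applicant qualifications",
    "project partners",
    "methodology",
    "outputs",
    "fit with the Network's research aims",
    "potential social, cultural or economic benefit",
]

def overall_judgement(scores: dict[str, int]) -> tuple[int, str]:
    assert set(scores) == set(CRITERIA), "score every criterion exactly once"
    assert all(1 <= s <= 5 for s in scores.values()), "each score is in the range 1-5"
    total = sum(scores.values())      # between 8 and 40
    if total >= 32:                   # hypothetical band
        return total, "STRONG: priority to shortlist"
    if total >= 24:                   # hypothetical band
        return total, "FAIR: lesser priority"
    return total, "WEAK: do not shortlist"

example = {c: 4 for c in CRITERIA}
print(overall_judgement(example))  # (32, 'STRONG: priority to shortlist')
```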

Completed evaluation forms are submitted to [email protected] by the agreed deadline ahead of the Management Committee meeting. 

The Management Committee will be guided by the reviewers’ evaluations and scoring in their ratification, alongside other criteria such as the range and profile of proposals under consideration, and the amount of funding available in each round.  

The role of the Management Committee  

The Nahrein Network Management Committee meets to ratify the peer reviewers’ recommendations, make final decisions on which proposals to fund and, where necessary, to agree broad feedback for applicants.   

Comments and grades will not be used outside the funding decision making process, unless they are subject to specific legal requirements or to be used as the basis of feedback. 

The Management Committee also welcome peer reviewers’ comments and recommendations for improving the application and assessment process for applicants and reviewers in future rounds of the scheme. 

Resubmission Policy 

Resubmission of unsuccessful applications is not permitted except in very particular circumstances, where the Management Committee may exceptionally decide to invite the applicant to resubmit the proposal on one further occasion. 

This will happen only where the Committee identifies an application of exceptional potential and can identify specific changes to the application that could significantly enhance its competitiveness. In this case, the Committee will need to agree specific feedback — based on the reviewers’ comments — to be provided to the applicant. 

In order for a proposal to be invited for resubmission the Management Committee should satisfy itself that it meets all of the following criteria: 

  • the core research ideas and approach are original, innovative and exciting and the proposal has outstanding, transformative potential. It has clear potential to secure funding if the identified weaknesses can be satisfactorily addressed 

• there should be clear potential for the revised proposal to significantly increase its overall grading and priority for funding  

• the Management Committee should be confident that issues identified in deeming a proposal to be unfundable can be addressed through resubmission and that these are surmountable. This does not necessarily mean that the Committee is able to identify how this will be achieved, just that they are confident that it is possible. 

• the issues should be of sufficient scale and significance that they could not have been adequately addressed through the use of conditions. Requested changes should be of sufficient scale to require the proposal to go through the full assessment process once more.   

• the Management Committee must be able to provide clear guidance on the key issue or issues which need to be addressed in any resubmission.  

When invited resubmissions are submitted they will be assessed in the usual way in competition with all other proposals.  

Invited resubmissions should not be used:  

• where the identified weaknesses relate to under-development, poor presentation or other problems relating to the preparation of the proposal, which could reasonably have been expected to be addressed in submitting a proposal of this kind.  

• for proposals where the core ideas, rationale and foundations, aims and focus or overall design of the project need substantial re-working, since such radically revised proposals could be submitted as a significantly re-worked new proposal rather than as a resubmission. 

Feedback on processes 

If reviewers or Management Committee members have any feedback on Nahrein Network policy, process and/or documentation for this scheme, this will be discussed in the meeting and recorded once all funding decisions have been made. Reviewers are also welcome to submit feedback via email to [email protected]. All feedback will be formally recorded and used by the Network to inform the future development of processes.    

After the Management Committee meeting 

It is vital that peer reviewers and Management Committee members do not divulge or discuss the content of applications, evaluations or funding outcomes with any individual who is not directly involved in the assessment and awarding process. Maintaining confidentiality is paramount. 

All announcements of outcomes and funding decisions will be made by the Nahrein Network. Any peer reviewer or Management Committee member who is asked directly for feedback by applicants should refuse and advise applicants to direct all such requests to [email protected]

Following the meeting, reviewers will be reminded to delete all copies of applications and evaluations, in compliance with GDPR legislation. 


  • Open access
  • Published: 19 April 2024

A maturity model for the scientific review of clinical trial designs and their informativeness

  • S Dolley, ORCID: orcid.org/0000-0003-1266-1722
  • T Norman, ORCID: orcid.org/0000-0003-2707-4387
  • D McNair
  • D Hartman

Trials volume 25, Article number: 271 (2024)


Informativeness, in the context of clinical trials, defines whether a study’s results definitively answer its research questions with meaningful next steps. Many clinical trials end uninformatively. Clinical trial protocols are required to go through reviews in regulatory and ethical domains: areas that focus on specifics outside of trial design, biostatistics, and research methods. Private foundations and government funders rarely require focused scientific design reviews for these areas. There are no documented standards and processes, or even best practices, toward a capability for funders to perform scientific design reviews after their peer review process prior to a funding commitment.

Considering the investment in and standardization of ethical and regulatory reviews, and the prevalence of studies never finishing or failing to provide definitive results, it may be that scientific reviews of trial designs with a focus on informativeness offer the best chance for improved outcomes and return-on-investment in clinical trials. A maturity model is a helpful tool for knowledge transfer to help grow capabilities in a new area or for those looking to perform a self-assessment in an existing area. Such a model is offered for scientific design reviews of clinical trial protocols. This maturity model includes 11 process areas and 5 maturity levels. Each of the 55 process area levels is populated with descriptions on a continuum toward an optimal state to improve trial protocols in the areas of risk of failure or uninformativeness.

This tool allows for prescriptive guidance on next investments to improve attributes of post-funding reviews of trials, with a focus on informativeness. Traditional pre-funding peer review has limited capacity for trial design review, especially for detailed biostatistical and methodological review. Select non-industry funders have begun to explore or invest in post-funding review programs of grantee protocols, based on exemplars of such programs. Funders with a desire to meet fiduciary responsibilities and mission goals can use the described model to enhance efforts supporting trial participant commitment and faster cures.


Assessing quality in global health clinical trials

In addition to pharmaceutical industry (industry) funders, hundreds of global health clinical trials (CTs) are funded annually by private foundations, governments, and consortia. A meaningful number of these CTs end without being published or without trustworthy results [ 1 , 2 , 3 ]. A January 2024 query of ClinicalTrials.gov found 92 phase I–IV CTs currently active or enrolling participants that featured a majority of CT sites in sub-Saharan Africa. Industry—either alone or as leader of a funding group—funded 29.3% of the CTs; the US government funded 12.0% of CTs. The remaining 58.7% of CTs were funded by private foundations, with some contribution from other governments or organizations. These global health CTs had plans to enroll 91,200 participants (human research subjects). Before a CT begins, industry routinely performs scientific or methodological reviews on CT protocols to identify and address flaws in design. There is no direct evidence that other funders conduct such reviews. Because of this, it is imaginable that 70% of global health CT protocols do not receive a dedicated scientific review before enrolling their first study participants. This may account for the large difference in informativeness between industry and non-industry CTs found recently [ 4 ].

In its lifecycle, there are two phases prior to the CT’s start and participant recruitment. First is a phase when the CT has not procured a funding commitment (pre-funding), and then the second is a post-funding phase. The dominant approach used by government funders to decide if a research study will be funded is peer-review. While peer-review for pre-funding decisions is well established, it continues to evolve and not necessarily in a scientific direction. For example, a large fraction of stakeholders believe peer-review ought to change to only assess the investigator, not the proposed project, or include a lottery [ 5 ]. One systematic review found that, in pre-funding peer-review, comments on research design represented 2%, methodology 4%, and methodological details 5%, respectively, of total comments [ 6 ]. During pre-funding, these reviewers also needed to comment on dozens of other factors [ 6 ]. This dynamic—along with the sometimes-large time gap between pre-funding and CT inception and the design changes therein—makes peer review inadequate for scientific design review.

In the post-funding phase, there are two other types of review that focus on elements outside of CT design. These reviews and related concepts are described in Table  1 . The two reviews that happen completely or primarily in post-funding and before participant recruitment begins are regulatory and ethical. The regulatory and ethics review domains are relatively mature and well-developed.

Ethical and regulatory reviews both overlap in limited ways with consideration of CT design methods. “It is clear that scientific assessments are a source of confusion for some ethics committees…ethics committee members revealed that they often had doubts about whether scientific validity is within their purview” [ 12 ]. Because the focus of an ethics review is not assessing optimal CT methods, ethicists entering a review may be concerned about whether they have “the scientific literacy necessary to read and understand a protocol” [ 12 ]. Regulators and ethicists in low resource settings are often not trained in the scientific disciplines necessary to evaluate CT design risk—such as biostatistics and pharmacokinetics. Members of Institutional Review Boards seeking to deliver on their primary purpose—delivering an International Council for Harmonisation E6, E8, E9, and Good Clinical Practice guideline-supported participant protection review—and members of regulatory boards seeking to deliver on safety and participant protection may, justifiably, take only a secondary look at a CT’s statistical details. A cursory assessment of methods by an ethics committee may be necessary for them, but it may not be sufficient for funders. Likewise in the regulatory realm: the review of a protocol post-funding will include only targeted scientific assessment, since, for regulators, the focus on safety and similar matters crowds out efforts to identify more optimal approaches in CT design.

This state of affairs leaves an opportunity gap for scientific review of global health CT designs post-funding and prior to CT start. Industry performs scientific design reviews; it may or may not be coincidental that industry funded CTs were more likely to be informative during COVID than those CTs funded by others [ 17 ]. The US cancer academic CT community—funded by the US government—has created programs to comply with mandated post-funding scientific review of grantee CT designs. Multiple government and private CT funders, who to date have only performed pre-funding peer-reviews, are investigating the cost and effort involved with adding reviews of protocols. It is often only at the protocol stage of trial planning when a funder can see specifics such as whether the trial design is informed by systematic evidence; more advanced, pragmatic, or participant-centric design; or the presence of concrete recruitment plans, statistical analysis plans, or sample size simulations. As yet, standards do not exist.

Informativeness

Informativeness is a characterization of a CT that indicates the study will achieve its recruitment, statistical power, and other design goals, resulting in credibly answering its research questions. An informative CT “provides robust clinical insight and a solid on-ramp to either the next phase of development, a policy change, a new standard of care, or the decision not to progress further” [ 18 ]. Uninformative results are widespread. One study found only 6% of CTs funded outside of industry met all four conditions for informativeness [ 4 ]. Across a number of stakeholders working to identify design practices associated with uninformativeness, there is consensus on a core set of failures. These include principal investigators (PIs) being unrealistic or overly optimistic in their ability to set and achieve feasible and appropriate sample sizes and non-use of evidence-based disease burden and effect rates [ 17 , 19 , 20 , 21 ]. “Studies that failed to influence policy change or a confident next step in a go/no-go decision were associated with factors such as lack of use of common endpoints, lack of conservatism in effect estimates, not using biostatistical simulation to derive sample sizes, using unduly restrictive inclusion criteria, and avoiding use of innovative CT designs” [ 18 ]. Qualities that drive informativeness are almost all defined during the design phase of the CT. Eleven of Zarin et al.’s twelve “red flags” for uninformativeness can be identified before a CT begins recruiting [ 22 ]. A multi-stakeholder working group of experts led by the Experimental Cancer Medical Centres made recommendations on how to improve CTs. Seven of the group’s ten consensus recommendations could or must be planned and addressed during the design phase of a CT [ 23 ]. Because likelihood of informativeness is cemented from a PI’s design work and design choices, post-funding scientific design reviews have high potential to identify risks of uninformative outcomes and suggest fixes before the CT is finalized and cannot be changed.

A maturity model for scientific design reviews of clinical trials

A maturity model is a helpful tool for knowledge transfer, whether to grow capabilities in a new area or to support self-assessment in an existing one. Such a model is offered here for scientific design reviews of CT protocols: given time and funding, these reviews offer a chance to identify opportunity gaps in CT design, analysis, and communication. The maturity model includes 11 process areas and 5 maturity levels. Each of the 55 process-area levels is populated with descriptions along a continuum toward an optimal state for improving CT protocols with respect to risk of failure or uninformativeness.

A maturity model is “a tool that helps assess the current effectiveness of a person or group and supports figuring out what capabilities they need to acquire next in order to improve their performance” [ 24 ]. For an organization seeking to implement CT scientific design/methodology reviews, or to improve existing reviews, a maturity model can help build quality and capacity.

There are a number of variants of maturity models. A suitable variant for presenting this model is the Object Management Group Business Process Maturity Model (BPMM-OMG) [ 25 ]. Maturity levels (ML) are displayed on the Y-axis and are “well-defined evolutionary plateaus toward achieving a mature…process” [ 26 ]. The ML titles specific to BPMM-OMG and their fixed definitions are shown in Table  2 . These levels act as ratings or grades for parts of a review process.

Capabilities, as represented in maturity models, are often called process areas (PA). PAs are one or more grouped workstreams performed to meet a need [ 26 ]. To create a usable maturity model, users must carefully select the clusters of related activities to include: in order to evaluate a scientific design review practice, the process areas must be identified and organized. At The Bill & Melinda Gates Foundation, after developing a post-funding scientific design review program across multiple disease areas and multiple study types, eleven PAs were identified as independent capabilities key to the program. These PAs were curated by the authors after the program progressed through maturity levels, through participation in all areas of the program, and through non-systematic interviews with other program staff. The PA descriptions for scientific design reviews are shown in Table  3 . In each “cell,” or capability cluster at a particular level of maturity, the contents include examples of mastery at that level. This comprehensive set offers a new or existing practitioner the benefit of including what matters and excluding what does not, resulting in time and cost savings, better CTs, and risk reduction.

Once a maturity model variant is selected and the topic-specific PAs are populated, users can plot the maturity levels for each PA. In the case of a maturity model for scientific design reviews, there are 11 PAs with 5 maturity levels each. All 11 PA tables in this maturity model are included in the supplementary material . The first PA table, support for CT informativeness, is reproduced here as an exemplar of the remaining PA tables (Table 4 ).
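To make the structure concrete, the sketch below is a minimal, hypothetical illustration (not part of the published model or its tables) of how the 11 process areas and 5 maturity levels could be recorded in code for a funder's self-assessment. The process-area names follow the supplementary tables (Tables S1–S11); the scoring function and example ratings are invented.

```python
# Minimal sketch of a maturity self-assessment across the 11 process areas.
# Process-area names follow Tables S1-S11; the example ratings are hypothetical.

PROCESS_AREAS = [
    "Informativeness-centric", "Breadth of review expertise",
    "Depth of reviewer expertise", "Iterative", "Information-enhanced",
    "Solution-oriented", "Software-enabled", "Collaborative",
    "Rich in data & analytics", "Reliability and quality", "Time appropriate",
]
MIN_LEVEL, MAX_LEVEL = 1, 5  # five maturity levels per process area


def weakest_areas(assessment: dict[str, int]) -> list[str]:
    """Return the process areas rated at the lowest maturity level."""
    for area, level in assessment.items():
        if area not in PROCESS_AREAS or not MIN_LEVEL <= level <= MAX_LEVEL:
            raise ValueError(f"invalid rating: {area}={level}")
    lowest = min(assessment.values())
    return [area for area, level in assessment.items() if level == lowest]


# Hypothetical self-assessment for a funder new to post-funding reviews.
example = {area: 2 for area in PROCESS_AREAS}
example["Software-enabled"] = 1
example["Iterative"] = 1
print(weakest_areas(example))  # -> ['Iterative', 'Software-enabled']
```

A profile like this simply makes the lowest-rated process areas explicit so that a funder can decide which capabilities to grow next.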

In 2020, The Bill & Melinda Gates Foundation developed and implemented an approach to performing post-funding scientific design reviews of CTs developed by its grantees. As it evolved, the review program became more complex in order to support high-quality reviews at large volume [ 28 ]. It is likely this program generates positive impact by reducing the risk of uninformativeness through its non-mandatory, expert recommendations for protocol changes prior to trial start. The relevance for other CT funders is high, as uninformativeness appears to be an endemic problem. That said, the applicability of progressing to high maturity in the model presented may be limited by funders’ perception that they have little time and few resources. Time and funding constraints also limit the ability of PIs to implement some expert recommendations [ 29 ]. Recommendations that a PI make significant changes to a protocol—such as adding a systematic evidence assessment to inform design, a clear element of informativeness—would need to be funded by a trial planning grant.

Many post-funding scientific design reviews happen globally outside of industry, although less frequently than pre-funding, pre-protocol peer reviews. The non-industry funders of protocol reviews—such as government-funded entities, private foundations, and the United States National Institutes of Health Cancer Center academic trial funders—operate at a variety of maturity levels. Funders interested in improving or assessing their existing protocol review programs might consider using either the maturity model herein or a simplified version. For example, a funder wanting to add post-funding protocol review to a pre-existing pre-funding peer review might use the model herein but leave out process areas such as (a) having a wide breadth of expertise in a large reviewer team (PA2), (b) having within-review iterations (PA4), and (c) being software-enabled (PA7).

Adopting this maturity model for post-funding scientific design reviews has strengths and limitations. Strengths include (a) the model offers measurement, and an implied pathway toward maturity, in a variety of key areas—some necessary—for delivering scientific design reviews; (b) the model focuses on addressing risk in the area where CTs are most likely to fail—trial informativeness; and (c) the model was developed, adjusted, and updated based on learnings from the completion of over 100 protocol reviews. Limitations include (a) a commitment to multi-element excellence within eleven process areas makes for a complicated model, (b) the expense involved in pursuing this approach may be challenging for some funders to take on, and (c) due to confidentiality requirements, the foundation is not able to provide detailed examples of its program in action.

Conclusions

Industry-sponsored CTs were found, in select situations, to have significantly higher informativeness than CTs sponsored by private funders [ 4 ]. A large portion of global health CTs are supported by private funders, and there is interest among private funders in adopting the multi-expert scientific design reviews in use by industry and by select government and foundation funders. Peer review of CTs today allows too little time for a rigorous evaluation of CT design and associated methods. Persistent improvement in a CT protocol is most likely achieved by implementing a scientific design review, and the best time for this is late in the design phase or close to when the protocol is finalized. The maturity model described here can help funders who do not yet have an approach for creating a post-funding scientific design review program; for private funders that do have such a program, it can help extend the program’s depth and breadth. The model offers both a formative structure and a continuum promising improved precision, efficacy, collaboration, and communication. The benefit accrues to private and government funders, industry, CT participants, and global citizens alike through an increased likelihood of CT informativeness and faster cures.

Availability of data and materials

The dataset analyzed during the current study is available in the ClinicalTrials.gov repository, found at https://clinicaltrials.gov/ .

Abbreviations

BPMM-OMG: Business Process Maturity Model from the Object Management Group

CT: Clinical trial

PA: Process area

PI: Principal investigator

Zheutlin AR, Niforatos J, Stulberg E, Sussman J. Research waste in randomized clinical trials: a cross-sectional analysis. J Gen Intern Med. 2020;35(10):3105–7. https://doi.org/10.1007/s11606-019-05523-4 .

Carlisle B, Kimmelman J, Ramsay T, MacKinnon N. Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials. Clin Trials. 2015;12(1):77–83. https://doi.org/10.1177/1740774514558307 .

Williams RJ, Tse T, DiPiazza K, Zarin DA. Terminated trials in the ClinicalTrials.gov results database: evaluation of availability of primary outcome data and reasons for termination. PLoS ONE. 2015;10(5). https://doi.org/10.1371/journal.pone.0127242

Hutchinson N, Moyer H, Zarin DA, Kimmelman J. The proportion of randomized controlled trials that inform clinical practice. Elife. 2022;17(11):e79491. https://doi.org/10.7554/eLife.79491 .

Guthrie S, Ghiga I, Wooding S. What do we know about grant peer review in the health sciences? F1000Research. 2018;6:1335. https://doi.org/10.12688/f1000research.11917.2

Hug SE, Aeschbach M. Criteria for assessing grant applications: a systematic review. Palgrave Commun. 2020;6(1):1–5. https://doi.org/10.1057/s41599-020-0412-9 .

Bendiscioli S. The troubles with peer review for allocating research funding: funders need to experiment with versions of peer review and decision-making. EMBO Rep. 2019;20(12):e49472. https://doi.org/10.15252/embr.201949472 .

Recio-Saucedo A, Crane K, Meadmore K, Fackrell K, Church H, Fraser S, Blatch-Jones A. What works for peer review and decision-making in research funding: a realist synthesis. Res Integrity Peer Rev. 2022;7(1):1–28. https://doi.org/10.1186/s41073-022-00120-2 .

Turner S, Bull A, Chinnery F, Hinks J, Mcardle N, Moran R, Payne H, Guegan EW, Worswick L, Wyatt JC. Evaluation of stakeholder views on peer review of NIHR applications for funding: a qualitative study. BMJ Open. 2018;8(12):e022548. https://doi.org/10.1136/bmjopen-2018-022548 .

Investigational New Drug (IND) Application. United States Food and Drug Administration website. Last reviewed February 24, 2021. Accessed April 15, 2022. https://www.fda.gov/drugs/types-applications/investigational-new-drug-ind-application

“Ethics in Clinical Research”. National Institutes of Health Clinical Center website. Updated October 21, 2021. Accessed January 12, 2023. https://clinicalcenter.nih.gov/recruit/ethics.html

Binik A, Hey SP. A framework for assessing scientific merit in ethical review of clinical research. Ethics Human Res. 2019;41(2):2–13. https://doi.org/10.1002/eahr.500007 .

Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283(20):2701–11. https://doi.org/10.1001/jama.283.20.2701 .

Mooney-Somers J, Olsen A. Ethical review and qualitative research competence: Guidance for reviewers and applicants. Res Ethics. 2017;13(3–4):128–38. https://doi.org/10.1177/1747016116677636 .

Williams E, Brown TJ, Griffith P, Rahimi A, Oilepo R, Hammers H, et al. Improving the time to activation of new clinical trials at a National Cancer Institute–designated comprehensive cancer center. JCO Oncol Pract. 2020;16(4):e324–32. https://doi.org/10.1200/OP.19.00325 .

Knopman D, Alford E, Tate K, Long M, Khachaturian AS. Patients come from populations and populations contain patients. A two-stage scientific and ethics review: the next adaptation for single institutional review boards. Alzheimer’s & Dementia. 2017;13(8):940–6. https://doi.org/10.1016/j.jalz.2017.06.001 .

Hutchinson N, Klas K, Carlisle BG, Kimmelman J, Waligora M. How informative were early SARS-CoV-2 treatment and prevention trials? A longitudinal cohort analysis of trials registered on ClinicalTrials.gov. Plos one. 2022;17(1):e0262114. https://doi.org/10.1371/journal.pone.0262114 .

Hartman D, Heaton P, Cammack N, Hudson I, Dolley S, Netsi E, Norman T, Mundel T. Clinical trials in the pandemic age: what is fit for purpose? Gates Open Res. 2020;4. https://doi.org/10.12688/gatesopenres.13146.1

Abrams D, Montesi SB, Moore SK, Manson DK, Klipper KM, Case MA, Brodie D, Beitler JR. Powering bias and clinically important treatment effects in randomized trials of critical illness. Crit Care Med. 2020;48(12):1710–9. https://doi.org/10.1097/CCM.0000000000004568 .

Benjamin DM, Hey SP, MacPherson A, Hachem Y, Smith KS, Zhang SX, Wong S, Dolter S, Mandel DR, Kimmelman J. Principal investigators over-optimistically forecast scientific and operational outcomes for clinical trials. PLoS ONE. 2022;17(2):e0262862. https://doi.org/10.1371/journal.pone.0262862 .

Rosala-Hallas A, Bhangu A, Blazeby J, Bowman L, Clarke M, Lang T, Nasser M, Siegfried N, Soares-Weiser K, Sydes MR, Wang D. Global health trials methodological research agenda: results from a priority setting exercise. Trials. 2018;19(1):1–8. https://doi.org/10.1186/s13063-018-2440-y .

Zarin DA, Goodman SN, Kimmelman J. eTable: Conditions for trial uninformativeness. Harms from uninformative clinical trials. JAMA. 2019;322(9):813–4.

Blagden SP, Billingham L, Brown LC, Buckland SW, Cooper AM, Ellis S, Fisher W, Hughes H, Keatley DA, Maignen FM, Morozov A. Effective delivery of Complex Innovative Design (CID) cancer trials—a consensus statement. Br J Cancer. 2020;122(4):473–82. https://doi.org/10.1038/s41416-019-0653-9 .

Fowler M. Maturity Model. Martinfowler.com website. August 24, 2014. Accessed July 25, 2022. https://martinfowler.com/bliki/MaturityModel.html

OMG Standards Development Organization. Object Management Group website. Accessed April 4, 2022. https://www.omg.org/

Paulk MC, Curtis B, Chrissis MB, Weber CV. Capability maturity model, version 1.1. IEEE software. 1993;10(4):18–27. https://doi.org/10.1109/52.219

Zarin DA, Goodman SN, Kimmelman J. Harms from uninformative clinical trials. JAMA. 2019;322(9):813–4. https://doi.org/10.1001/jama.2019.9892 .

Burford B, Norman T, Dolley S. Scientific Review of Protocols to Enhance Informativeness of Global Health Clinical Trials. ResearchSquare. 2024. https://doi.org/10.21203/rs.3.rs-3717747/v1 .

McLennan S, Nussbaumer-Streit B, Hemkens LG, Briel M. Barriers and facilitating factors for conducting systematic evidence assessments in academic clinical trials. JAMA Network Open. 2021;4(11):e2136577. https://doi.org/10.1001/jamanetworkopen.2021.36577 .

Acknowledgements

Not applicable

Funding

Funding was provided by the Bill & Melinda Gates Foundation.

Author information

Authors and Affiliations

Open Global Health, 710 12th St South, Ste 2523, Arlington, VA, 22202, USA

S Dolley

The Bill & Melinda Gates Foundation, 500 Fifth Ave. North, Seattle, WA, 98109, USA

T Norman, D McNair & D Hartman

Contributions

SD formulated the concept, designed the model, and wrote the original draft. TN provided supervision and edited the manuscript. DM added to the model and edited the manuscript. DH edited the manuscript and acquired financial support. All authors read and approved the final manuscript.

Corresponding author

Correspondence to S Dolley.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

From: A maturity model for the scientific review of clinical trial designs and their informativeness. Table S1. Process Area 1, Informativeness-centric. Table S2. Process Area 2, Breadth of review expertise. Table S3. Process Area 3, Depth of reviewer expertise. Table S4. Process Area 4, Iterative. Table S5. Process Area 5, Information-enhanced. Table S6. Process Area 6, Solution-oriented. Table S7. Process Area 7, Software-enabled. Table S8. Process Area 8, Collaborative. Table S9. Process Area 9, Rich in data & analytics. Table S10. Process Area 10, Reliability and quality. Table S11. Process Area 11, Time appropriate.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Dolley, S., Norman, T., McNair, D. et al. A maturity model for the scientific review of clinical trial designs and their informativeness. Trials 25 , 271 (2024). https://doi.org/10.1186/s13063-024-08099-5

Received : 15 January 2024

Accepted : 07 April 2024

Published : 19 April 2024

DOI : https://doi.org/10.1186/s13063-024-08099-5


Keywords

  • Design review
  • Trial methods
  • Maturity model


  • Open access
  • Published: 19 April 2024

A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact

  • Aklilu Endalamaw 1,2,
  • Resham B Khatri 1,3,
  • Tesfaye Setegn Mengistu 1,2,
  • Daniel Erku 1,4,5,
  • Eskinder Wolka 6,
  • Anteneh Zewdie 6 &
  • Yibeltal Assefa 1

BMC Health Services Research volume 24, Article number: 487 (2024)

The growing adoption of continuous quality improvement (CQI) initiatives in healthcare has generated a surge in research interest to gain a deeper understanding of CQI. However, comprehensive evidence regarding the diverse facets of CQI in healthcare has been limited. Our review sought to comprehensively grasp the conceptualization and principles of CQI, explore existing models and tools, analyze barriers and facilitators, and investigate its overall impacts.

This qualitative scoping review was conducted using Arksey and O’Malley’s methodological framework. We searched for articles in the PubMed, Web of Science, Scopus, and EMBASE databases, and additionally in Google Scholar. We used a mixed-methods analysis, combining qualitative content analysis with descriptive summaries of quantitative findings, and reported the overall work using the PRISMA extension for scoping reviews (PRISMA-ScR).

A total of 87 articles, covering 14 CQI models, were included in the review. While 19 tools were used across CQI models and initiatives, the Plan-Do-Study/Check-Act cycle was the most commonly employed model for understanding the CQI implementation process. The main reported purposes of using CQI, reflecting its positive impact, are to improve the structure of the health system (e.g., leadership, health workforce, health technology use, supplies, and costs), enhance healthcare delivery processes and outputs (e.g., care coordination and linkages, satisfaction, accessibility, continuity of care, safety, and efficiency), and improve treatment outcomes (reducing morbidity and mortality). The implementation of CQI is not without challenges. Commonly reported barriers were cultural (i.e., resistance or reluctance toward a quality-focused culture and fear of blame or punishment), technical, structural (related to organizational structure, processes, and systems), and strategic (inadequate planning and inappropriate goals).

Conclusions

Implementing CQI initiatives necessitates thoroughly comprehending key principles such as teamwork and timelines. To address challenges effectively, it is crucial to identify obstacles and implement optimal interventions proactively. Healthcare professionals and leaders need to be mentally equipped and cognizant of the significant role CQI initiatives play in improving the quality of care.

Continuous quality improvement (CQI) is a crucial initiative aimed at enhancing quality in the health system and has gradually been adopted by the healthcare industry. In the early 20th century, Shewhart laid the foundation for quality improvement by describing three essential steps for process improvement: specification, production, and inspection [ 1 , 2 ]. Deming then expanded Shewhart’s three-step model into the ‘plan, do, study/check, act’ (PDSA or PDCA) cycle, which was applied to management practices in Japan in the 1950s [ 3 ] and was gradually translated into the health system. In 1991, Kuperman applied a CQI approach to healthcare, comprising selecting a process to be improved, assembling a team of expert clinicians who understand the process and the outcomes, determining key steps in the process and expected outcomes, collecting data that measure the key process steps and outcomes, and providing data feedback to the practitioners [ 4 ]. These philosophies have served as the baseline for the principles of continuous improvement [ 5 ].

Continuous quality improvement fosters a culture of continuous learning, innovation, and improvement. It encourages proactive identification and resolution of problems, promotes employee engagement and empowerment, encourages trust and respect, and aims for better quality of care [ 6 , 7 ]. These characteristics drive the interaction of CQI with other quality improvement projects, such as quality assurance and total quality management [ 8 ]. Quality assurance primarily focuses on identifying deviations or errors through inspections, audits, and formal reviews, often settling for what is considered ‘good enough’, rather than pursuing the highest possible standards [ 9 , 10 ], while total quality management is implemented as the management philosophy and system to improve all aspects of an organization continuously [ 11 ].

Continuous quality improvement has been implemented to provide quality care. However, providing effective healthcare is a complicated and complex task in achieving the desired health outcomes and the overall well-being of individuals and populations. It necessitates tackling long-standing issues, including access, patient safety, medical advances, care coordination, patient-centered care, and quality monitoring [ 12 , 13 ]. The history of quality improvement in healthcare is generally assumed to have started in 1854, when Florence Nightingale introduced quality improvement documentation [ 14 ]. In the following decades, Donabedian introduced structure, processes, and outcomes as components of quality of care in 1966 [ 15 ]. More comprehensively, the Institute of Medicine in the United States of America (USA) has identified effectiveness, efficiency, equity, patient-centredness, safety, and timeliness as the components of quality of care [ 16 ]. Moreover, quality of care has recently been considered an integral part of universal health coverage (UHC) [ 17 ], which requires initiatives to mobilise essential inputs [ 18 ].

While the overall objective of CQI in the health system is to enhance the quality of care, it is important to note that the purposes and principles of CQI can vary across different contexts [ 19 , 20 ]. This variation has sparked growing research interest. For instance, a review of CQI approaches for capacity building addressed its role in health workforce development [ 21 ]. Another systematic review, based on randomized controlled design studies, assessed the effectiveness of CQI using training as an intervention and the PDSA model [ 22 ]. As research gaps, the former review was not directly related to the comprehensive elements of quality of care, while the latter focused solely on the impact of training using the PDSA model, among other potential models. Additionally, a review conducted in 2015 aimed to identify barriers and facilitators of CQI in Canadian contexts [ 23 ]. However, all these reviews presented different perspectives and investigated distinct outcomes. This suggests that there is still much to explore in terms of comprehensively understanding the various aspects of CQI initiatives in healthcare.

As a result, we conducted a scoping review to address several aspects of CQI. Scoping reviews serve as a valuable tool for systematically mapping the existing literature on a specific topic. They are instrumental when dealing with heterogeneous or complex bodies of research. Scoping reviews provide a comprehensive overview by summarizing and disseminating findings across multiple studies, even when evidence varies significantly [ 24 ]. In our specific scoping review, we included various types of literature, including systematic reviews, to enhance our understanding of CQI.

This scoping review examined how CQI is conceptualized and measured and investigated models and tools for its application while identifying implementation challenges and facilitators. It also analyzed the purposes and impact of CQI on the health systems, providing valuable insights for enhancing healthcare quality.

Protocol registration and results reporting

A protocol for this scoping review was not registered. Arksey and O’Malley’s methodological framework was utilized to conduct the review [ 25 ]. The scoping review procedure starts with defining the research questions, followed by identifying relevant literature, selecting articles, extracting data, and summarizing the results. The review findings are reported using the PRISMA extension for scoping reviews (PRISMA-ScR) [ 26 ]. McGowan and colleagues also advised researchers to report findings from scoping reviews using PRISMA-ScR [ 27 ].

Defining the research problems

This review aims to comprehensively explore the conceptualization, models, tools, barriers, facilitators, and impacts of CQI within healthcare systems worldwide. Specifically, we address the following research questions: (1) How has CQI been defined across various contexts? (2) What are the diverse approaches to implementing CQI in healthcare settings? (3) Which tools are commonly employed for CQI implementation? (4) What barriers hinder and facilitators support successful CQI initiatives? and (5) What effects do CQI initiatives have on overall care quality?

Information source and search strategy

We conducted the search in PubMed, Web of Science, Scopus, and EMBASE databases, and the Google Scholar search engine. The search terms were selected based on three main distinct concepts. One group was CQI-related terms. The second group included terms related to the purpose for which CQI has been implemented, and the third group included processes and impact. These terms were selected based on the Donabedian framework of structure, process, and outcome [ 28 ]. Additionally, the detailed keywords were recruited from the primary health framework, which has described lists of dimensions under process, output, outcome, and health system goals of any intervention for health [ 29 ]. The detailed search strategy is presented in the Supplementary file 1 (Search strategy). The search for articles was initiated on August 12, 2023, and the last search was conducted on September 01, 2023.
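For illustration only, the sketch below shows how three concept groups of the kind described above can be combined into a single boolean query, with OR within each group and AND across groups. The terms shown are placeholders, not the review’s actual search terms; the authors’ full strategy is in Supplementary file 1.

```python
# Illustrative only: placeholder terms, not the review's actual search strategy.
# Shows combining three concept groups: OR within a group, AND across groups.

cqi_terms = ['"continuous quality improvement"', '"quality improvement"', 'CQI']
purpose_terms = ['"quality of care"', '"patient safety"', '"health system"']
process_impact_terms = ['barriers', 'facilitators', 'implementation', 'impact']


def or_block(terms: list[str]) -> str:
    """Join a concept group into a parenthesized OR expression."""
    return "(" + " OR ".join(terms) + ")"


query = " AND ".join(
    or_block(group) for group in (cqi_terms, purpose_terms, process_impact_terms)
)
print(query)
# ("continuous quality improvement" OR "quality improvement" OR CQI) AND (...) AND (...)
```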

Eligibility criteria and article selection

Based on the scoping review’s population, concept, and context framework [ 30 ], the population included any patients or clients. The concepts explored in the review encompassed definitions, implementation, models, tools, barriers, facilitators, and impacts of CQI. The review considered contexts at any level of the health system. We included articles if they reported results of a qualitative or quantitative empirical study, case study, analytic or descriptive synthesis, any type of review, or other written documents; were published in peer-reviewed journals; and were designed to address at least one of the identified research questions or one of the identified implementation outcomes or their synonymous taxonomy as described in the search strategy. Based on additional context criteria, we included articles published in English without geographic or time limitations. We excluded abstract-only articles, conference abstracts, letters to editors, commentaries, and corrections.

We exported all citations to EndNote x20 to remove duplicates and screen relevant articles. The article selection process included automatic duplicate removal using EndNote x20, removal of records with unmatched titles and abstracts, removal of citation- and abstract-only materials, and full-text assessment. The selection was mainly conducted by the first author (AE) and reported to the team during weekly meetings. Papers whose inclusion or exclusion was unclear were discussed with the last author (YA) before final decisions were made. Whenever disagreements happened, they were resolved by discussion and reconsideration of the review questions in relation to the written documents of the article. Further statistical analysis, such as calculating kappa, was not performed to determine article inclusion or exclusion.

Data extraction and data items

We extracted first author, publication year, country, settings, health problem, the purpose of the study, study design, types of intervention if applicable, CQI approaches/steps if applicable, CQI tools and procedures if applicable, and main findings using a customized Microsoft Excel form.

Summarizing and reporting the results

The main findings were summarized and described according to the main themes, including concepts under conceptualization, principles, teams, timelines, models, tools, barriers, facilitators, and impacts of CQI. Results-based convergent synthesis, achieved through mixed-methods analysis, involved content analysis to identify the thematic presentation of findings. Additionally, a narrative description was used for quantitative findings, aligning them with the appropriate theme. The authors meticulously reviewed the primary findings from each included material and contextualized these findings with respect to the main themes. This approach provides a comprehensive understanding of complex interventions and health systems, acknowledging both quantitative and qualitative evidence.

Search results

A total of 11,251 documents were identified from various databases: SCOPUS (n = 4,339), PubMed (n = 2,893), Web of Science (n = 225), EMBASE (n = 3,651), and Google Scholar (n = 143). After removing duplicates (n = 5,061), 6,190 articles were evaluated by title and abstract. Subsequently, 208 articles were assessed for full-text eligibility. Following the eligibility criteria, 121 articles were excluded, leaving 87 included in the current review (Fig.  1 ).

Fig. 1. Article selection process
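The arithmetic of the selection flow can be checked directly; the short sketch below is only a consistency check, with the counts copied from the text, and is not part of the review’s methods.

```python
# Consistency check of the reported selection counts (values taken from the text).
identified = 4339 + 2893 + 225 + 3651 + 143   # SCOPUS + PubMed + WoS + EMBASE + Google Scholar
assert identified == 11_251
screened = identified - 5_061                 # records remaining after duplicate removal
assert screened == 6_190
included = 208 - 121                          # full-text assessed minus excluded
assert included == 87
```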

Operationalizing continuous quality improvement

Continuous Quality Improvement (CQI) is operationalized as a cyclic process that requires commitment to implementation, teamwork, time allocation, and celebrating successes and failures.

CQI is a cyclic, ongoing process that follows reflexive, analytical, and iterative steps, including identifying gaps, generating data, developing and implementing action plans, evaluating performance, providing feedback to implementers and leaders, and proposing necessary adjustments [ 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 ].

CQI requires committing to the philosophy of continuous improvement [ 19 , 38 ], establishing a mission statement [ 37 ], and understanding the definition of quality [ 19 ].

CQI involves a wide range of patient-oriented measures and performance indicators, specifically satisfying internal and external customers, developing quality assurance, adopting common quality measures, and selecting process measures [ 8 , 19 , 35 , 36 , 37 , 39 , 40 ].

CQI requires celebrating success and failure without personalization, leading each team member to develop error-free attitudes [ 19 ]. Success and failure are attributed to underlying organizational processes and systems rather than to individuals [ 8 ], because CQI is process-focused and grounded in collaborative, data-driven, responsive, rigorous, problem-solving statistical analysis [ 8 , 19 , 38 ]. Furthermore, a gap or failure opens another opportunity for establishing a data-driven learning organization [ 41 ].

CQI cannot be implemented without a CQI team [ 8 , 19 , 37 , 39 , 42 , 43 , 44 , 45 , 46 ]. A CQI team comprises individuals from various disciplines, typically including a team leader, a subject matter expert (physician or other healthcare provider), a data analyst, a facilitator, frontline staff, and stakeholders [ 39 , 43 , 47 , 48 , 49 ]. It is also important to note that inviting stakeholders or partners as part of the CQI support intervention is crucial [ 19 , 38 , 48 ].

The timeline is another distinct feature of CQI, because the results of CQI vary with the implementation duration of each cycle [ 35 ]. There is no specific time limit for CQI implementation, although there is a general consensus that a cycle of CQI should be relatively short [ 35 ]. For instance, CQI implementations have taken 2 months [ 42 ], 4 months [ 50 ], 9 months [ 51 , 52 ], 12 months [ 53 , 54 , 55 ], and one year and 5 months [ 49 ] to achieve the desired positive outcome, while bi-weekly [ 47 ] and monthly data reviews and analyses [ 44 , 48 , 56 ] and activities run over 3 months [ 57 ] have also resulted in positive outcomes.

Continuous quality improvement models and tools

Several models have been utilized. The Plan-Do-Study/Check-Act cycle is a stepwise process involving project initiation, situation analysis, root cause identification, solution generation and selection, implementation, result evaluation, standardization, and future planning [ 7 , 36 , 37 , 45 , 47 , 48 , 49 , 50 , 51 , 53 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 ]. The FOCUS-PDCA cycle enhances the PDCA process by adding steps to find and improve a process (F), organize a knowledgeable team (O), clarify the process (C), understand variations (U), and select improvements (S) [ 55 , 71 , 72 , 73 ]. The FADE cycle involves identifying a problem (Focus), understanding it through data analysis (Analyze), devising solutions (Develop), and implementing the plan (Execute) [ 74 ]. The Logic Framework involves brainstorming to identify improvement areas, conducting root cause analysis to develop a problem tree, logically reasoning to create an objective tree, formulating the framework, and executing improvement projects [ 75 ]. The Breakthrough Series approach requires CQI teams to meet in quarterly collaborative learning sessions, share learning experiences, and continue discussion by telephone and cross-site visits to strengthen learning and idea exchange [ 47 ]. Another CQI model is the Lean approach, which has been conducted with Kaizen principles [ 52 ], 5S principles, and the Six Sigma model. The 5S method (Sort, Set/Straighten, Shine, Standardize, Sustain) systematically organises and improves the workplace, focusing on sorting, setting order, shining, standardizing, and sustaining the improvement [ 54 , 76 ]. Kaizen principles guide CQI by advocating for continuous improvement, valuing all ideas, solving problems, focusing on practical, low-cost improvements, using data to drive change, acknowledging process defects, reducing variability and waste, recognizing every interaction as a customer-supplier relationship, empowering workers, responding to all ideas, and maintaining a disciplined workplace [ 77 ]. Lean Six Sigma, a CQI model, applies the DMAIC methodology, which involves defining (D) and measuring the problem (M), analyzing root causes (A), improving by finding solutions (I), and controlling by assessing process stability (C) [ 78 , 79 ]. The 5C cyclic model (consultation, collection, consideration, collaboration, and celebration), the first CQI framework for volunteer dental services in Aboriginal communities, ensures quality care based on community needs [ 80 ]. One study used structured meetings involving activities such as reviewing objectives, assigning roles, discussing the agenda, completing tasks, retaining key outputs, planning future steps, and evaluating the meeting’s effectiveness [ 81 ].

Various tools are involved in the implementation or evaluation of CQI initiatives: checklists [ 53 , 82 ], flowcharts [ 81 , 82 , 83 ], cause-and-effect diagrams (fishbone or Ishikawa diagrams) [ 60 , 62 , 79 , 81 , 82 ], fuzzy Pareto diagrams [ 82 ], process maps [ 60 ], time series charts [ 48 ], why-why analysis [ 79 ], affinity diagrams and multivoting [ 81 ], and run charts [ 47 , 48 , 51 , 60 , 84 ], among others mentioned in Table  1 .
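As an example of how one of the listed tools might be produced in practice, the following is a minimal sketch of a run chart, plotting a quality metric over time against its median. It assumes matplotlib is available, and the data points are invented purely for illustration; it is not drawn from any of the included studies.

```python
# Minimal run chart: a quality metric plotted over time with its median as the
# reference line. The data points are invented for illustration only.
import statistics
import matplotlib.pyplot as plt

weekly_wait_minutes = [76, 70, 64, 58, 49, 41, 35, 30, 26, 22]  # hypothetical data
weeks = range(1, len(weekly_wait_minutes) + 1)

plt.plot(weeks, weekly_wait_minutes, marker="o", label="observed")
plt.axhline(statistics.median(weekly_wait_minutes), linestyle="--", label="median")
plt.xlabel("Week")
plt.ylabel("Discharge waiting time (minutes)")
plt.title("Run chart (illustrative data)")
plt.legend()
plt.show()
```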

Barriers and facilitators of continuous quality improvement implementation

The implementation of CQI initiatives is shaped by various barriers and facilitators, which can be thematized into four dimensions: cultural, technical, structural, and strategic.

Continuous quality improvement initiatives face various cultural, strategic, technical, and structural barriers. Cultural-dimension barriers involve resistance to change (e.g., not accepting online technology), lack of a quality-focused culture, staff apprehensiveness about reporting, and fear of blame or punishment [ 36 , 41 , 85 , 86 ]. Technical-dimension barriers include various factors that hinder the effective implementation and execution of CQI processes [ 36 , 86 , 87 , 88 , 89 ]. Structural-dimension barriers arise from organizational structures, processes, and systems that can impede the effective implementation and sustainability of CQI [ 36 , 85 , 86 , 87 , 88 ]. Strategic-dimension barriers include, for example, the inability to select proper CQI goals and failure to integrate CQI into organizational planning and goals [ 36 , 85 , 86 , 87 , 88 , 90 ].

Facilitators are also grouped into cultural, structural, technical, and strategic dimensions to provide solutions to CQI barriers. Cultural challenges were addressed by developing a group culture oriented toward CQI and by other rewards [ 39 , 41 , 80 , 85 , 86 , 87 , 90 , 91 , 92 ]. Technical facilitators are pivotal to overcoming technical barriers [ 39 , 42 , 53 , 69 , 86 , 90 , 91 ]. Structural facilitators relate to improving communication, infrastructure, and systems [ 86 , 92 , 93 ]. Strategic-dimension facilitators include strengthening leadership and improving decision-making skills [ 43 , 53 , 67 , 86 , 87 , 92 , 94 , 95 ] (Table  2 ).

Impact of continuous quality improvement

Continuous quality improvement initiatives can significantly impact the quality of healthcare across a wide range of health areas by improving structure, enhancing the health service delivery process, improving client wellbeing, and reducing mortality.

Structure components

These are health leadership, financing, workforce, technology, and equipment and supplies. CQI has improved planning, monitoring and evaluation [ 48 , 53 ], and leadership and planning [ 48 ], indicating improvement in leadership perspectives. Implementing CQI in primary health care (PHC) settings has shown potential for maintaining or reducing operating costs [ 67 ]. Findings from another study indicate that the costs associated with implementing CQI interventions per facility ranged from approximately $2,000 to $10,500 per year, with an average cost of approximately $10 to $60 per admitted client [ 57 ]. However, based on model predictions, the average cost savings after implementing CQI were estimated to be $5,430 [ 31 ]. CQI can also be applied to health workforce development [ 32 ]. In institutional systems, CQI improved medical education [ 66 , 96 , 97 ], human resources management [ 53 ], staff motivation [ 76 ], and staff health awareness [ 69 ], while concerns were raised about CQI impartiality, independence, and public accountability [ 96 ]. Regarding health technology, CQI also improved registration and documentation [ 48 , 53 , 98 ]. Furthermore, CQI initiatives increased cleanliness [ 54 ] and improved logistics, supplies, and equipment [ 48 , 53 , 68 ].

Process and output components

The process component focuses on the activities and actions involved in delivering healthcare services.

Service delivery

CQI interventions improved service delivery [ 53 , 56 , 99 ], including a significant 18% increase in overall quality of service performance [ 48 ], improved patient counselling, adherence to appropriate procedures, and infection prevention [ 48 , 68 ], and optimised workflow [ 52 ].

Coordination and collaboration

CQI initiatives improved coordination and collaboration through collecting and analysing data, onsite technical support, training, supportive supervision [ 53 ] and facilitating linkages between work processes and a quality control group [ 65 ].

Patient satisfaction

CQI initiatives increased patient satisfaction and improved quality of life by optimizing care quality management, improving the quality of clinical nursing, reducing nursing defects, and enhancing the wellbeing of clients [ 54 , 76 , 100 ], although CQI was not associated with changes in adolescents’ and young adults’ satisfaction [ 51 ].

CQI initiatives reduced medication error reports from 16 to 6 [ 101 ], significantly reduced the administration of inappropriate prophylactic antibiotics [ 44 ], decreased errors in inpatient care [ 52 ], decreased the overall episiotomy rate from 44.5 to 33.3% [ 83 ], reduced the overall incidence of unplanned endotracheal extubation [ 102 ], improved the appropriate use of computed tomography angiography [ 103 ], and improved appropriate diagnosis and treatment selection [ 47 ].

Continuity of care

CQI initiatives effectively improve continuity of care by improving client and physician interaction. For instance, provider continuity levels showed a 64% increase [ 55 ]. Modifying electronic medical record templates, scheduling, staff and parental education, standardization of work processes, and birth to 1-year age-specific incentives in post-natal follow-up care increased continuity of care to 74% in 2018 compared to baseline 13% in 2012 [ 84 ].

The CQI initiative yielded enhanced efficiency in the cardiac catheterization laboratory, as evidenced by improved punctuality of procedure starts and increased efficiency of manual sheath-pulls performed inside the laboratory [ 78 ].

Accessibility

CQI initiatives were effective in improving accessibility in terms of increasing service coverage and utilization rates. Examples include screening for cigarette use, nutrition counselling, folate prescription, maternal care, and immunization coverage [ 53 , 81 , 104 , 105 ]; reducing the percentage of patients not attending surgery from a baseline of 3.9% to 0.9% [ 43 ]; increasing Chlamydia screening rates from 29 to 60% [ 45 ]; increasing HIV care continuum coverage [ 51 , 59 , 60 ]; increasing uptake of postpartum long-acting reversible contraception from 6.9% at baseline to 25.4% [ 42 ]; increasing post-caesarean section prophylaxis from 36 to 89% [ 62 ]; a 31% increase in kangaroo care practice [ 50 ]; and increased follow-up [ 65 ]. Similarly, a QI intervention increased the quality of antenatal care by 29.3%, correct partograph use by 51.7%, and correct active third-stage labour management by 19.6% from baseline, but it was not significantly associated with improvement in contraceptive service uptake [ 61 ].

Timely access

CQI interventions improved the timeliness of care provision [ 52 ] and reduced waiting times [ 62 , 74 , 76 , 106 ]. For instance, the discharge process waiting time in the emergency department decreased from 76 min to 22 min [ 79 ]. CQI also reduced the mean post-procedural length of stay from 2.8 days to 2.0 days [ 31 ].

Acceptability

Acceptability of CQI by healthcare providers was satisfactory. For instance, 88% of the faculty, 64% of the residents, and 82% of the staff believed CQI to be useful in the healthcare clinic [ 107 ].

Outcome components

Morbidity and mortality

CQI efforts have demonstrated better management outcomes among diabetic patients [ 40 ], patients with oral mucositis [ 71 ], and anaemic patients [ 72 ]. They have also reduced the infection rate after caesarean section [ 62 ], reduced peritonitis after peritoneal dialysis [ 49 , 108 ], and prevented pressure ulcers [ 70 ]. This is illustrated by a reduction in peritonitis incidence from once every 40.1 patient-months at baseline to once every 70.8 patient-months after CQI [ 49 ] and a 63% reduction in pressure ulcer prevalence within 2 years, from 2008 to 2010 [ 70 ]. Furthermore, CQI initiatives significantly reduced in-hospital deaths [ 31 ] and increased patient survival rates [ 108 ]. Figure  2 displays the overall process of the CQI implementations.

Fig. 2. The overall mechanisms of continuous quality improvement implementation

In this review, we examined the fundamental concepts and principles underlying CQI, the factors that either hinder or assist in its successful application and implementation, and the purpose of CQI in enhancing quality of care across various health issues.

Our findings have brought attention to the application and implementation of CQI, emphasizing its underlying concepts and principles, as evident in the existing literature [ 31 , 32 , 33 , 34 , 35 , 36 , 39 , 40 , 43 , 45 , 46 ]. Continuous quality improvement shares the principles of continuous improvement, such as a customer-driven focus, effective leadership, active participation of individuals, a process-oriented approach, systematic implementation, emphasis on design improvement and prevention, evidence-based decision-making, and fostering partnership [ 5 ]. Moreover, Deming’s 14 principles laid the foundation for CQI principles [ 109 ]. These principles have been adapted and put into practice in various ways: ten [ 19 ] and five [ 38 ] principles in hospitals, five principles for capacity building [ 38 ], and two principles for medication error prevention [ 41 ]. As a principle, the application of CQI can be process-focused [ 8 , 19 ] or impact-focused [ 38 ]. Impact-focused CQI focuses on achieving specific outcomes or impacts, whereas process-focused CQI prioritizes and improves the underlying processes and systems. These principles complement each other and can be utilized based on the objectives of quality improvement initiatives in healthcare settings. Overall, CQI is an ongoing educational process that requires top management’s involvement, demands coordination across departments, encourages the incorporation of views beyond the clinical area, and provides non-judgemental evidence based on objective data [ 110 ].

The current review recognized that CQI is not easy to implement. It requires appropriate utilization of various models and tools, and the application of each tool varies with the health problem studied and the purpose of the CQI initiative [ 111 ], as tools differ in context, content, structure, and usability [ 112 ]. It also requires overcoming cultural, technical, structural, and strategic barriers. These barriers have emerged from the perspectives of clinical staff, managers, and health systems. Of the cultural obstacles, staff non-involvement, resistance to change, and reluctance to report errors were staff-related, whereas others, such as the absence of celebration for success and hierarchical and rational culture, may require staff and manager involvement. Staff members may exhibit reluctance in reporting errors due to various cultural factors, including lack of trust, hierarchical structures, fear of retribution, and a blame-oriented culture. These challenges pose obstacles to implementing standardized CQI practices, as observed, for instance, in community pharmacy settings [ 85 ]. A hierarchical culture, characterized by clearly defined levels of power, authority, and decision-making, posed challenges to implementing CQI initiatives in public health [ 41 , 86 ]. Although rational culture, a type of organizational culture, emphasizes logical thinking and rational decision-making, it can also create challenges for CQI implementation [ 41 , 86 ], because hierarchical and rational cultures, which emphasize bureaucratic norms and narrow definitions of achievement, were found to act as barriers to the implementation of CQI [ 86 ]. These could be addressed by developing a shared mindset and collective commitment, establishing a shared purpose, developing group norms, and cultivating psychological preparedness among staff, managers, and clients to implement and sustain CQI initiatives. Furthermore, reversing cultural barriers necessitates cultural solutions: development of an organizational and group culture supportive of CQI [ 41 , 86 ], positive comprehensive perception [ 91 ], commitment [ 85 ], involving patients, families, leaders, and staff [ 39 , 92 ], collaborating for a common goal [ 80 , 86 ], effective teamwork [ 86 , 87 ], and rewarding and celebrating successes [ 80 , 90 ].

Technical-dimension barriers of CQI can include inadequate capitalization of a project and insufficient support for CQI facilitators and data entry managers [ 36 ], immature electronic medical records or poor information systems [ 36 , 86 ], and the lack of training and skills [ 86 , 87 , 88 ]. These challenges may cause the CQI team to rely on outdated information and technologies. Barriers on the technical dimension may undermine the solid foundation of CQI expertise among staff, the ability to recognize opportunities for improvement, a comprehensive understanding of how services are produced and delivered, and routine use of expertise in daily work. Addressing these technical barriers requires knowledge creation activities (training, seminars, and education) [ 39 , 42 , 53 , 69 , 86 , 90 , 91 ], availability of quality data [ 86 ], reliable information [ 92 ], and a manual-online hybrid reporting system [ 85 ].

Structural-dimension barriers of CQI include inadequate communication channels and lack of standardized processes, specifically weak physician-to-physician synergies [ 36 ], lack of mechanisms for disseminating knowledge, and limited use of communication mechanisms [ 86 ]. A lack of communication mechanisms endangers the sharing of ideas and feedback among CQI teams, leading to misunderstandings, limited participation, misinterpretations, and a lack of learning [ 113 ]. Knowledge translation facilitates the co-production of research, the subsequent diffusion of knowledge, and the development of stakeholders’ capacity and skills [ 114 ]. Thus, the absence of a knowledge translation mechanism may cause missed opportunities for learning, inefficient problem-solving, and limited creativity. To overcome these challenges, organizations should establish effective communication and information systems [ 86 , 93 ] and learning systems [ 92 ]. Though CQI and knowledge translation interact with each other, it is essential to recognize that they are distinct. CQI focuses on process improvement within healthcare systems, aiming to optimize existing processes, reduce errors, and enhance efficiency.

In contrast, knowledge translation bridges the gap between research evidence and clinical practice, translating research findings into actionable knowledge for practitioners. While both CQI and knowledge translation aim to enhance health care quality and patient outcomes, they employ different strategies: CQI utilizes tools like Plan-Do-Study-Act cycles and statistical process control, while knowledge translation involves knowledge synthesis and dissemination. Additionally, knowledge translation can also serve as a strategy to enhance CQI. Both concepts share the same principle: continuous improvement is essential for both. Therefore, effective strategies on the structural dimension may build efficient and effective steering councils, information systems, and structures to diffuse learning throughout the organization.

Strategic factors, such as goals, planning, funds, and resources, determine the overall purpose of CQI initiatives. Specific barriers were improper goals and poor planning [ 36 , 86 , 88 ], fragmentation of quality assurance policies [ 87 ], inadequate reinforcement to staff [ 36 , 90 ], time constraints [ 85 , 86 ], resource inadequacy [ 86 ], and work overload [ 86 ]. These barriers can be addressed through strengthening leadership [ 86 , 87 ], CQI-based mentoring [ 94 ], periodic monitoring, supportive supervision and coaching [ 43 , 53 , 87 , 92 , 95 ], participation, empowerment, and accountability [ 67 ], involving all stakeholders in decision-making [ 86 , 87 ], a provider-payer partnership [ 64 ], and compensating staff for after-hours meetings on CQI [ 85 ]. The strategic dimension, characterized by a strategic plan and integrated CQI efforts, is devoted to processes that are central to achieving strategic priorities. Roles and responsibilities are defined in terms of integrated strategic and quality-related goals [ 115 ].

The ultimate goal of CQI has been to improve the quality of care, which is usually revealed by structure, process, and outcome. After resolving challenges and effectively using tools and running models, the goal of CQI reflects the ultimate reason and purpose of its implementation. First, effectively implemented CQI initiatives can improve leadership, health financing, health workforce development, health information technology, and availability of supplies as the building blocks of a health system [ 31 , 48 , 53 , 68 , 98 ]. Second, effectively implemented CQI initiatives improved the care delivery process (counselling, adherence to standards, coordination, collaboration, and linkages) [ 48 , 53 , 65 , 68 ]. Third, CQI can improve the outputs of healthcare delivery, such as satisfaction, accessibility (timely access, utilization), continuity of care, safety, efficiency, and acceptability [ 52 , 54 , 55 , 76 , 78 ]. Finally, the effectiveness of CQI initiatives has been tested in enhancing responses related to key aspects of HIV, maternal and child health, non-communicable disease control, and other areas (e.g., surgery and peritonitis). However, it is worth noting that CQI initiatives have not always been effective. For instance, CQI using a model of two to nine audit cycles through systems assessment tools did not bring significant change in syphilis testing performance [ 116 ]. This study was conducted within the context of Aboriginal and Torres Strait Islander people’s primary health care settings. Notably, ‘the clinics may not have consistently prioritized syphilis testing performance in their improvement strategies, as facilitated by the CQI program’ [ 116 ]. Additionally, applying CQI-based mentoring did not significantly improve the uptake of facility-based interventions, though it was effective in increasing community health worker visits during pregnancy and the postnatal period, knowledge about maternal and child health and exclusive breastfeeding practice, and HIV disclosure status [ 117 ]. The study conducted in South Africa revealed no significant association between the coverage of facility-based interventions and CQI implementation. This lack of association was attributed to the already high antenatal and postnatal attendance rates in both control and intervention groups at baseline, leaving little room for improvement. Additionally, the coverage of HIV interventions remained consistently high throughout the study period [ 117 ].

Regarding health care and policy implications, CQI has played a vital role in advancing PHC and fostering the realization of UHC goals worldwide. The indicators found in Donabedian’s framework that are positively influenced by CQI efforts are comparable to those included in the PHC performance initiative’s conceptual framework [ 29 , 118 , 119 ]. It is clearly explained that PHC serves as the roadmap to realizing the vision of UHC [ 120 , 121 ]. Given these circumstances, implementing CQI can contribute to the achievement of PHC principles and the objectives of UHC. For instance, by implementing CQI methods, countries have enhanced the accessibility, affordability, and quality of PHC services, leading to better health outcomes for their populations. CQI has facilitated identifying and resolving healthcare gaps and inefficiencies, enabling countries to optimize resource allocation and deliver more effective and patient-centered care. However, it is crucial to recognize that the successful implementation of Continuous Quality Improvement (CQI) necessitates optimizing the duration of each cycle, understanding challenges and barriers that extend beyond the health system and settings, and acknowledging that its effectiveness may be compromised if these challenges are not adequately addressed.

Despite abundant literature, gaps remain regarding the relationship between CQI and other dimensions of the healthcare system. No studies have examined the impact of CQI initiatives on catastrophic health expenditure, effective service coverage, patient-centredness, comprehensiveness, equity, health security, or responsiveness.

Limitations

This review has some limitations to consider. Firstly, only articles published in English were included, which may have excluded relevant non-English articles. Additionally, as this review follows a scoping methodology, the focus is on synthesising the available evidence rather than critically appraising or scoring the quality of the included articles.

Continuous quality improvement is applied as a continuous and ongoing intervention, and implementation time can vary across cycles. The CQI team and implementation timeline were critical elements of CQI across the different models. Among the commonly used approaches, PDSA or PDCA is the most frequently employed, and a wide range of tools (nineteen in total) is commonly used to support the improvement process. Cultural, technical, structural, and strategic barriers and facilitators are significant in implementing CQI initiatives. Implementing CQI initiatives aims to strengthen health system building blocks, enhance the health service delivery process and outputs, and ultimately prevent morbidity and reduce mortality. For future researchers, given that CQI is a context-dependent approach, conducting scale-up implementation research on catastrophic health expenditure, effective service coverage, patient-centredness, comprehensiveness, equity, health security, and responsiveness across various settings and health issues would be valuable.
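Since the PDSA/PDCA cycle is named above as the most commonly used approach, the following purely illustrative Python sketch shows the shape of that loop: each cycle applies a change idea, studies the measured indicator against a target, and either adopts the change or runs another cycle. The indicator, target, and expected gain are hypothetical and are not drawn from any of the included studies.

```python
# Illustrative sketch of a Plan-Do-Study-Act (PDSA) audit loop; all values are hypothetical.
from dataclasses import dataclass


@dataclass
class CycleRecord:
    cycle: int
    indicator: float  # e.g., % of eligible clients receiving a service on time
    target: float
    met_target: bool


def run_pdsa_cycles(baseline: float, target: float, expected_gain: float, max_cycles: int):
    """Run up to max_cycles PDSA iterations, stopping once the target indicator is reached."""
    history, indicator = [], baseline
    for cycle in range(1, max_cycles + 1):
        # Plan: the aim for this cycle is the overall target.
        # Do: apply the change idea; its effect is modelled as a fixed expected gain.
        indicator = min(100.0, indicator + expected_gain)
        # Study: compare the measured indicator with the target.
        met = indicator >= target
        # Act: record the result and either adopt (stop) or run another cycle.
        history.append(CycleRecord(cycle, indicator, target, met))
        if met:
            break
    return history


if __name__ == "__main__":
    for record in run_pdsa_cycles(baseline=40.0, target=80.0, expected_gain=15.0, max_cycles=9):
        print(record)
```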

Availability of data and materials

The data used and/or analyzed during the current study are available in this manuscript and/or the supplementary file.

Shewhart WA, Deming WE. Memoriam: Walter A. Shewhart, 1891–1967. Am Stat. 1967;21(2):39–40.

Shewhart WA. Statistical method from the viewpoint of quality control. New York: Dover; 1986. ISBN 978-0486652320. OCLC 13822053. Reprint. Originally published: Washington, DC: Graduate School of the Department of Agriculture, 1939.

Moen R. Foundation and history of the PDSA cycle. Asian Network for Quality Conference, Tokyo; 2009. Available from: https://www.deming.org/sites/default/files/pdf/2015/PDSA_History_Ron_MoenPdf .

Kuperman G, James B, Jacobsen J, Gardner RM. Continuous quality improvement applied to medical care: experiences at LDS hospital. Med Decis Making. 1991;11(4suppl):S60–65.

Singh J, Singh H. Continuous improvement philosophy–literature review and directions. Benchmarking: An International Journal. 2015;22(1):75–119.

Goldstone J. Presidential address: Sony, Porsche, and vascular surgery in the 21st century. J Vasc Surg. 1997;25(2):201–10.

Radawski D. Continuous quality improvement: origins, concepts, problems, and applications. J Physician Assistant Educ. 1999;10(1):12–6.

Shortell SM, O’Brien JL, Carman JM, Foster RW, Hughes E, Boerstler H, et al. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation. Health Serv Res. 1995;30(2):377.

Lohr K. Quality of health care: an introduction to critical definitions, concepts, principles, and practicalities. Striving for quality in health care. 1991.

Berwick DM. The clinical process and the quality process. Qual Manage Healthc. 1992;1(1):1–8.

Gift B. On the road to TQM. Food Manage. 1992;27(4):88–9.

Greiner A, Knebel E. The core competencies needed for health care professionals. In: Health professions education: a bridge to quality. 2003. p. 45–73.

McCalman J, Bailie R, Bainbridge R, McPhail-Bell K, Percival N, Askew D, et al. Continuous quality improvement and comprehensive primary health care: a systems framework to improve service quality and health outcomes. Front Public Health. 2018;6(76):1–6.

Sheingold BH, Hahn JA. The history of healthcare quality: the first 100 years 1860–1960. Int J Afr Nurs Sci. 2014;1:18–22.

Donabedian A. Evaluating the quality of medical care. Milbank Q. 1966;44(3):166–206.

Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academies Press (US). 2001. 2, Improving the 21st-century Health Care System. Available from: https://www.ncbi.nlm.nih.gov/books/NBK222265/ .

Rubinstein A, Barani M, Lopez AS. Quality first for effective universal health coverage in low-income and middle-income countries. Lancet Global Health. 2018;6(11):e1142–1143.

Agency for Healthcare Research and Quality. Quality improvement and monitoring at your fingertips. USA: Agency for Healthcare Research and Quality; 2022. Available from: https://qualityindicators.ahrq.gov/ .

Anderson CA, Cassidy B, Rivenburgh P. Implementing continuous quality improvement (CQI) in hospitals: lessons learned from the International Quality Study. Qual Assur Health Care. 1991;3(3):141–6.

Gardner K, Mazza D. Quality in general practice - definitions and frameworks. Aust Fam Physician. 2012;41(3):151–4.

Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity-building. J Public Health Manage Pract. 2022;28(2):E354.

Hill JE, Stephani A-M, Sapple P, Clegg AJ. The effectiveness of continuous quality improvement for developing professional practice and improving health care outcomes: a systematic review. Implement Sci. 2020;15(1):1–14.

Candas B, Jobin G, Dubé C, Tousignant M, Abdeljelil AB, Grenier S, et al. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review. Endoscopy Int Open. 2016;4(02):E118–133.

Peters MD, Marnie C, Colquhoun H, Garritty CM, Hempel S, Horsley T, et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Reviews. 2021;10(1):1–6.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

McGowan J, Straus S, Moher D, Langlois EV, O’Brien KK, Horsley T, et al. Reporting scoping reviews—PRISMA ScR extension. J Clin Epidemiol. 2020;123:177–9.

Donabedian A. Explorations in quality assessment and monitoring: the definition of quality and approaches to its assessment. Health Administration Press, Ann Arbor. 1980;1.

World Health Organization. Operational framework for primary health care: transforming vision into action. Geneva: World Health Organization and the United Nations Children’s Fund (UNICEF); 2020 [updated 14 December 2020; cited 2023 Nov Oct 17]. Available from: https://www.who.int/publications/i/item/9789240017832 .

The Joanna Briggs Institute. The Joanna Briggs Institute Reviewers’ Manual :2014 edition. Australia: The Joanna Briggs Institute. 2014:88–91.

Rihal CS, Kamath CC, Holmes DR Jr, Reller MK, Anderson SS, McMurtry EK, et al. Economic and clinical outcomes of a physician-led continuous quality improvement intervention in the delivery of percutaneous coronary intervention. Am J Manag Care. 2006;12(8):445–52.

Ade-Oshifogun JB, Dufelmeier T. Prevention and Management of Do not return notices: a quality improvement process for Supplemental staffing nursing agencies. Nurs Forum. 2012;47(2):106–12.

Rubenstein L, Khodyakov D, Hempel S, Danz M, Salem-Schatz S, Foy R, et al. How can we recognize continuous quality improvement? Int J Qual Health Care. 2014;26(1):6–15.

O’Neill SM, Hempel S, Lim YW, Danz MS, Foy R, Suttorp MJ, et al. Identifying continuous quality improvement publications: what makes an improvement intervention ‘CQI’? BMJ Qual Saf. 2011;20(12):1011–9.

Sibthorpe B, Gardner K, McAullay D. Furthering the quality agenda in Aboriginal community controlled health services: understanding the relationship between accreditation, continuous quality improvement and national key performance indicator reporting. Aust J Prim Health. 2016;22(4):270–5.

Bennett CL, Crane JM. Quality improvement efforts in oncology: are we ready to begin? Cancer Invest. 2001;19(1):86–95.

VanValkenburgh DA. Implementing continuous quality improvement at the facility level. Adv Ren Replace Ther. 2001;8(2):104–13.

Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity-building. J Public Health Manage Practice. 2022;28(2):E354–361.

Ryan M. Achieving and sustaining quality in healthcare. Front Health Serv Manag. 2004;20(3):3–11.

Nicolucci A, Allotta G, Allegra G, Cordaro G, D’Agati F, Di Benedetto A, et al. Five-year impact of a continuous quality improvement effort implemented by a network of diabetes outpatient clinics. Diabetes Care. 2008;31(1):57–62.

Wakefield BJ, Blegen MA, Uden-Holman T, Vaughn T, Chrischilles E, Wakefield DS. Organizational culture, continuous quality improvement, and medication administration error reporting. Am J Med Qual. 2001;16(4):128–34.

Sori DA, Debelew GT, Degefa LS, Asefa Z. Continuous quality improvement strategy for increasing immediate postpartum long-acting reversible contraceptive use at Jimma University Medical Center, Jimma, Ethiopia. BMJ Open Qual. 2023;12(1):e002051.

Roche B, Robin C, Deleaval PJ, Marti MC. Continuous quality improvement in ambulatory surgery: the non-attending patient. Ambul Surg. 1998;6(2):97–100.

O’Connor JB, Sondhi SS, Mullen KD, McCullough AJ. A continuous quality improvement initiative reduces inappropriate prescribing of prophylactic antibiotics for endoscopic procedures. Am J Gastroenterol. 1999;94(8):2115–21.

Ursu A, Greenberg G, McKee M. Continuous quality improvement methodology: a case study on multidisciplinary collaboration to improve chlamydia screening. Fam Med Community Health. 2019;7(2):e000085.

Quick B, Nordstrom S, Johnson K. Using continuous quality improvement to implement evidence-based medicine. Lippincotts Case Manag. 2006;11(6):305–15 ( quiz 16 – 7 ).

Oyeledun B, Phillips A, Oronsaye F, Alo OD, Shaffer N, Osibo B, et al. The effect of a continuous quality improvement intervention on retention-in-care at 6 months postpartum in a PMTCT Program in Northern Nigeria: results of a cluster randomized controlled study. J Acquir Immune Defic Syndr. 2017;75(Suppl 2):S156–164.

Nyengerai T, Phohole M, Iqaba N, Kinge CW, Gori E, Moyo K, et al. Quality of service and continuous quality improvement in voluntary medical male circumcision programme across four provinces in South Africa: longitudinal and cross-sectional programme data. PLoS ONE. 2021;16(8):e0254850.

Wang J, Zhang H, Liu J, Zhang K, Yi B, Liu Y, et al. Implementation of a continuous quality improvement program reduces the occurrence of peritonitis in PD. Ren Fail. 2014;36(7):1029–32.

Stikes R, Barbier D. Applying the plan-do-study-act model to increase the use of kangaroo care. J Nurs Manag. 2013;21(1):70–8.

Wagner AD, Mugo C, Bluemer-Miroite S, Mutiti PM, Wamalwa DC, Bukusi D, et al. Continuous quality improvement intervention for adolescent and young adult HIV testing services in Kenya improves HIV knowledge. AIDS. 2017;31(Suppl 3):S243–252.

Le RD, Melanson SE, Santos KS, Paredes JD, Baum JM, Goonan EM, et al. Using lean principles to optimise inpatient phlebotomy services. J Clin Pathol. 2014;67(8):724–30.

Manyazewal T, Mekonnen A, Demelew T, Mengestu S, Abdu Y, Mammo D, et al. Improving immunization capacity in Ethiopia through continuous quality improvement interventions: a prospective quasi-experimental study. Infect Dis Poverty. 2018;7:7.

Kamiya Y, Ishijma H, Hagiwara A, Takahashi S, Ngonyani HAM, Samky E. Evaluating the impact of continuous quality improvement methods at hospitals in Tanzania: a cluster-randomized trial. Int J Qual Health Care. 2017;29(1):32–9.

Kibbe DC, Bentz E, McLaughlin CP. Continuous quality improvement for continuity of care. J Fam Pract. 1993;36(3):304–8.

Adrawa N, Ongiro S, Lotee K, Seret J, Adeke M, Izudi J. Use of a context-specific package to increase sputum smear monitoring among people with pulmonary tuberculosis in Uganda: a quality improvement study. BMJ Open Qual. 2023;12(3):1–6.

Hunt P, Hunter SB, Levan D. Continuous quality improvement in substance abuse treatment facilities: how much does it cost? J Subst Abuse Treat. 2017;77:133–40.

Azadeh A, Ameli M, Alisoltani N, Motevali Haghighi S. A unique fuzzy multi-control approach for continuous quality improvement in a radio therapy department. Qual Quantity. 2016;50(6):2469–93.

Memiah P, Tlale J, Shimabale M, Nzyoka S, Komba P, Sebeza J, et al. Continuous quality improvement (CQI) institutionalization to reach 95:95:95 HIV targets: a multicountry experience from the Global South. BMC Health Serv Res. 2021;21(1):711.

Yapa HM, De Neve JW, Chetty T, Herbst C, Post FA, Jiamsakul A, et al. The impact of continuous quality improvement on coverage of antenatal HIV care tests in rural South Africa: results of a stepped-wedge cluster-randomised controlled implementation trial. PLoS Med. 2020;17(10):e1003150.

Dadi TL, Abebo TA, Yeshitla A, Abera Y, Tadesse D, Tsegaye S, et al. Impact of quality improvement interventions on facility readiness, quality and uptake of maternal and child health services in developing regions of Ethiopia: a secondary analysis of programme data. BMJ Open Qual. 2023;12(4):e002140.

Weinberg M, Fuentes JM, Ruiz AI, Lozano FW, Angel E, Gaitan H, et al. Reducing infections among women undergoing cesarean section in Colombia by means of continuous quality improvement methods. Arch Intern Med. 2001;161(19):2357–65.

Andreoni V, Bilak Y, Bukumira M, Halfer D, Lynch-Stapleton P, Perez C. Project management: putting continuous quality improvement theory into practice. J Nurs Care Qual. 1995;9(3):29–37.

Balfour ME, Zinn TE, Cason K, Fox J, Morales M, Berdeja C, et al. Provider-payer partnerships as an engine for continuous quality improvement. Psychiatric Serv. 2018;69(6):623–5.

Agurto I, Sandoval J, De La Rosa M, Guardado ME. Improving cervical cancer prevention in a developing country. Int J Qual Health Care. 2006;18(2):81–6.

Anderson CI, Basson MD, Ali M, Davis AT, Osmer RL, McLeod MK, et al. Comprehensive multicenter graduate surgical education initiative incorporating entrustable professional activities, continuous quality improvement cycles, and a web-based platform to enhance teaching and learning. J Am Coll Surg. 2018;227(1):64–76.

Benjamin S, Seaman M. Applying continuous quality improvement and human performance technology to primary health care in Bahrain. Health Care Superv. 1998;17(1):62–71.

Byabagambi J, Marks P, Megere H, Karamagi E, Byakika S, Opio A, et al. Improving the quality of voluntary medical male circumcision through use of the continuous quality improvement approach: a pilot in 30 PEPFAR-Supported sites in Uganda. PLoS ONE. 2015;10(7):e0133369.

Hogg S, Roe Y, Mills R. Implementing evidence-based continuous quality improvement strategies in an urban Aboriginal Community Controlled Health Service in South East Queensland: a best practice implementation pilot. JBI Database Syst Rev Implement Rep. 2017;15(1):178–87.

Hopper MB, Morgan S. Continuous quality improvement initiative for pressure ulcer prevention. J Wound Ostomy Cont Nurs. 2014;41(2):178–80.

Ji J, Jiang DD, Xu Z, Yang YQ, Qian KY, Zhang MX. Continuous quality improvement of nutrition management during radiotherapy in patients with nasopharyngeal carcinoma. Nurs Open. 2021;8(6):3261–70.

Chen M, Deng JH, Zhou FD, Wang M, Wang HY. Improving the management of anemia in hemodialysis patients by implementing the continuous quality improvement program. Blood Purif. 2006;24(3):282–6.

Reeves S, Matney K, Crane V. Continuous quality improvement as an ideal in hospital practice. Health Care Superv. 1995;13(4):1–12.

Barton AJ, Danek G, Johns P, Coons M. Improving patient outcomes through CQI: vascular access planning. J Nurs Care Qual. 1998;13(2):77–85.

Buttigieg SC, Gauci D, Dey P. Continuous quality improvement in a Maltese hospital using logical framework analysis. J Health Organ Manag. 2016;30(7):1026–46.

Take N, Byakika S, Tasei H, Yoshikawa T. The effect of 5S-continuous quality improvement-total quality management approach on staff motivation, patients’ waiting time and patient satisfaction with services at hospitals in Uganda. J Public Health Afr. 2015;6(1):486.

Jacobson GH, McCoin NS, Lescallette R, Russ S, Slovis CM. Kaizen: a method of process improvement in the emergency department. Acad Emerg Med. 2009;16(12):1341–9.

Agarwal S, Gallo J, Parashar A, Agarwal K, Ellis S, Khot U, et al. Impact of lean six sigma process improvement methodology on cardiac catheterization laboratory efficiency. Catheter Cardiovasc Interv. 2015;85:S119.

Rahul G, Samanta AK, Varaprasad G. A Lean Six Sigma approach to reduce overcrowding of patients and improving the discharge process in a super-specialty hospital. In: 2020 International Conference on System, Computation, Automation and Networking (ICSCAN); 2020 Jul 3. p. 1–6. IEEE.

Patel J, Nattabi B, Long R, Durey A, Naoum S, Kruger E, et al. The 5 C model: A proposed continuous quality improvement framework for volunteer dental services in remote Australian Aboriginal communities. Community Dent Oral Epidemiol. 2023;51(6):1150–8.

Van Acker B, McIntosh G, Gudes M. Continuous quality improvement techniques enhance HMO members’ immunization rates. J Healthc Qual. 1998;20(2):36–41.

Horine PD, Pohjala ED, Luecke RW. Healthcare financial managers and CQI. Healthc Financ Manage. 1993;47(9):34.

Reynolds JL. Reducing the frequency of episiotomies through a continuous quality improvement program. CMAJ. 1995;153(3):275–82.

Bunik M, Galloway K, Maughlin M, Hyman D. First five quality improvement program increases adherence and continuity with well-child care. Pediatr Qual Saf. 2021;6(6):e484.

Boyle TA, MacKinnon NJ, Mahaffey T, Duggan K, Dow N. Challenges of standardized continuous quality improvement programs in community pharmacies: the case of SafetyNET-Rx. Res Social Adm Pharm. 2012;8(6):499–508.

Price A, Schwartz R, Cohen J, Manson H, Scott F. Assessing continuous quality improvement in public health: adapting lessons from healthcare. Healthc Policy. 2017;12(3):34–49.

Gage AD, Gotsadze T, Seid E, Mutasa R, Friedman J. The influence of continuous quality improvement on healthcare quality: a mixed-methods study from Zimbabwe. Soc Sci Med. 2022;298:114831.

Chan YC, Ho SJ. Continuous quality improvement: a survey of American and Canadian healthcare executives. Hosp Health Serv Adm. 1997;42(4):525–44.

Balas EA, Puryear J, Mitchell JA, Barter B. How to structure clinical practice guidelines for continuous quality improvement? J Med Syst. 1994;18(5):289–97.

ElChamaa R, Seely AJE, Jeong D, Kitto S. Barriers and facilitators to the implementation and adoption of a continuous quality improvement program in surgery: a case study. J Contin Educ Health Prof. 2022;42(4):227–35.

Candas B, Jobin G, Dubé C, Tousignant M, Abdeljelil A, Grenier S, et al. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review. Endoscopy Int Open. 2016;4(2):E118–133.

Brandrud AS, Schreiner A, Hjortdahl P, Helljesen GS, Nyen B, Nelson EC. Three success factors for continual improvement in healthcare: an analysis of the reports of improvement team members. BMJ Qual Saf. 2011;20(3):251–9.

Lee S, Choi KS, Kang HY, Cho W, Chae YM. Assessing the factors influencing continuous quality improvement implementation: experience in Korean hospitals. Int J Qual Health Care. 2002;14(5):383–91.

Horwood C, Butler L, Barker P, Phakathi S, Haskins L, Grant M, et al. A continuous quality improvement intervention to improve the effectiveness of community health workers providing care to mothers and children: a cluster randomised controlled trial in South Africa. Hum Resour Health. 2017;15(1):39.

Hyrkäs K, Lehti K. Continuous quality improvement through team supervision supported by continuous self-monitoring of work and systematic patient feedback. J Nurs Manag. 2003;11(3):177–88.

Akdemir N, Peterson LN, Campbell CM, Scheele F. Evaluation of continuous quality improvement in accreditation for medical education. BMC Med Educ. 2020;20(Suppl 1):308.

Barzansky B, Hunt D, Moineau G, Ahn D, Lai CW, Humphrey H, et al. Continuous quality improvement in an accreditation system for undergraduate medical education: benefits and challenges. Med Teach. 2015;37(11):1032–8.

Gaylis F, Nasseri R, Salmasi A, Anderson C, Mohedin S, Prime R, et al. Implementing continuous quality improvement in an integrated community urology practice: lessons learned. Urology. 2021;153:139–46.

Gaga S, Mqoqi N, Chimatira R, Moko S, Igumbor JO. Continuous quality improvement in HIV and TB services at selected healthcare facilities in South Africa. South Afr J HIV Med. 2021;22(1):1202.

Wang F, Yao D. Application effect of continuous quality improvement measures on patient satisfaction and quality of life in gynecological nursing. Am J Transl Res. 2021;13(6):6391–8.

Lee SB, Lee LL, Yeung RS, Chan J. A continuous quality improvement project to reduce medication error in the emergency department. World J Emerg Med. 2013;4(3):179–82.

Chiang AA, Lee KC, Lee JC, Wei CH. Effectiveness of a continuous quality improvement program aiming to reduce unplanned extubation: a prospective study. Intensive Care Med. 1996;22(11):1269–71.

Chinnaiyan K, Al-Mallah M, Goraya T, Patel S, Kazerooni E, Poopat C, et al. Impact of a continuous quality improvement initiative on appropriate use of coronary CT angiography: results from a multicenter, statewide registry, the advanced cardiovascular imaging consortium (ACIC). J Cardiovasc Comput Tomogr. 2011;5(4):S29–30.

Gibson-Helm M, Rumbold A, Teede H, Ranasinha S, Bailie R, Boyle J. A continuous quality improvement initiative: improving the provision of pregnancy care for Aboriginal and Torres Strait Islander women. BJOG: Int J Obstet Gynecol. 2015;122:400–1.

Bennett IM, Coco A, Anderson J, Horst M, Gambler AS, Barr WB, et al. Improving maternal care with a continuous quality improvement strategy: a report from the interventions to minimize preterm and low birth weight infants through continuous improvement techniques (IMPLICIT) network. J Am Board Fam Med. 2009;22(4):380–6.

Krall SP, Iv CLR, Donahue L. Effect of continuous quality improvement methods on reducing triage to thrombolytic interval for Acute myocardial infarction. Acad Emerg Med. 1995;2(7):603–9.

Swanson TK, Eilers GM. Physician and staff acceptance of continuous quality improvement. Fam Med. 1994;26(9):583–6.

Yu Y, Zhou Y, Wang H, Zhou T, Li Q, Li T, et al. Impact of continuous quality improvement initiatives on clinical outcomes in peritoneal dialysis. Perit Dial Int. 2014;34(Suppl 2):S43–48.

Schiff GD, Goldfield NI. Deming meets Braverman: toward a progressive analysis of the continuous quality improvement paradigm. Int J Health Serv. 1994;24(4):655–73.

American Hospital Association, Division of Quality Resources, Chicago, IL. The role of hospital leadership in the continuous improvement of patient care quality. J Healthc Qual. 1992;14(5):8–14, 22.

Scriven M. The Logic and Methodology of checklists [dissertation]. Western Michigan University; 2000.

Hales B, Terblanche M, Fowler R, Sibbald W. Development of medical checklists for improved quality of patient care. Int J Qual Health Care. 2008;20(1):22–30.

Vermeir P, Vandijck D, Degroote S, Peleman R, Verhaeghe R, Mortier E, et al. Communication in healthcare: a narrative review of the literature and practical recommendations. Int J Clin Pract. 2015;69(11):1257–67.

Eljiz K, Greenfield D, Hogden A, Taylor R, Siddiqui N, Agaliotis M, et al. Improving knowledge translation for increased engagement and impact in healthcare. BMJ open Qual. 2020;9(3):e000983.

O’Brien JL, Shortell SM, Hughes EF, Foster RW, Carman JM, Boerstler H, et al. An integrative model for organization-wide quality improvement: lessons from the field. Qual Manage Healthc. 1995;3(4):19–30.

Adily A, Girgis S, D’Este C, Matthews V, Ward JE. Syphilis testing performance in Aboriginal primary health care: exploring impact of continuous quality improvement over time. Aust J Prim Health. 2020;26(2):178–83.

Horwood C, Butler L, Barker P, Phakathi S, Haskins L, Grant M, et al. A continuous quality improvement intervention to improve the effectiveness of community health workers providing care to mothers and children: a cluster randomised controlled trial in South Africa. Hum Resour Health. 2017;15:1–11.

Veillard J, Cowling K, Bitton A, Ratcliffe H, Kimball M, Barkley S, et al. Better measurement for performance improvement in low- and middle-income countries: the primary Health Care Performance Initiative (PHCPI) experience of conceptual framework development and indicator selection. Milbank Q. 2017;95(4):836–83.

Barbazza E, Kringos D, Kruse I, Klazinga NS, Tello JE. Creating performance intelligence for primary health care strengthening in Europe. BMC Health Serv Res. 2019;19(1):1006.

Assefa Y, Hill PS, Gilks CF, Admassu M, Tesfaye D, Van Damme W. Primary health care contributions to universal health coverage, Ethiopia. Bull World Health Organ. 2020;98(12):894.

Van Weel C, Kidd MR. Why strengthening primary health care is essential to achieving universal health coverage. CMAJ. 2018;190(15):E463–466.

Acknowledgements

Not applicable.

The authors received no funding.

Author information

Authors and Affiliations

School of Public Health, The University of Queensland, Brisbane, Australia

Aklilu Endalamaw, Resham B Khatri, Tesfaye Setegn Mengistu, Daniel Erku & Yibeltal Assefa

College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia

Aklilu Endalamaw & Tesfaye Setegn Mengistu

Health Social Science and Development Research Institute, Kathmandu, Nepal

Resham B Khatri

Centre for Applied Health Economics, School of Medicine, Griffith University, Brisbane, Australia

Daniel Erku

Menzies Health Institute Queensland, Griffith University, Brisbane, Australia

International Institute for Primary Health Care in Ethiopia, Addis Ababa, Ethiopia

Eskinder Wolka & Anteneh Zewdie

Contributions

AE conceptualized the study, developed the first draft of the manuscript, and managed feedback from co-authors. YA conceptualized the study, provided feedback, and supervised the whole process. RBK, TSM, DE, EW, and AZ provided feedback throughout. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Aklilu Endalamaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable because this research is based on publicly available articles.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Endalamaw, A., Khatri, R.B., Mengistu, T.S. et al. A scoping review of continuous quality improvement in healthcare system: conceptualization, models and tools, barriers and facilitators, and impact. BMC Health Serv Res 24, 487 (2024). https://doi.org/10.1186/s12913-024-10828-0

Download citation

Received : 27 December 2023

Accepted : 05 March 2024

Published : 19 April 2024

DOI : https://doi.org/10.1186/s12913-024-10828-0

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Continuous quality improvement
  • Quality of Care


Effective Peer Review: Who, Where, or What?

Peer review is widely viewed as one of the most critical elements in assuring the integrity of the scientific literature (Baldwin, 2018; Smith, 2006). Despite the widespread acceptance and utilization of peer review, many difficulties with the process have been identified (Hames, 2014; Horrobin, 2001; Smith, 2006). One of the primary goals of the peer review process is to identify flaws in the work and, by so doing, help editors choose which manuscripts to publish. It is surprising, then, that one of the persistent problems in peer review is assessing the quality of the reviews themselves. Both authors and journal editors expect peer review to detect errors in experimental design and methodology and to ensure that the interpretation of the findings is presented in an objective and thoughtful manner. In traditional peer review, two or more reviewers are asked to evaluate a manuscript, on the expectation that if the reviewers agree on the quality of the submission, the likelihood of a high-quality review is increased. Unfortunately, studies have not consistently confirmed a high degree of agreement among reviewers. Rothwell and Martyn (2000) evaluated the reproducibility of peer review in neuroscience journals and meeting abstracts and found that agreement was approximately what would be expected by chance. Scharschmidt et al. (1994) reported similar results in the evaluation of 1,000 manuscripts submitted to the Journal of Clinical Investigation, where clustering of grades in the middle resulted in agreement being “…only marginally…” better than chance. These observations suggest that we cannot rely on the agreement of reviewers as an indication of the quality of the reviews. Another potential way to evaluate the quality of reviews would be to assess the ability of reviewers to detect errors in submissions. It is generally accepted that detection of intentional fraud is beyond the scope of typical peer review, but we do expect reviewers to detect major and minor errors as a primary function of the traditional peer review system (Hwang, 2006; Weissmann, 2006). Schroter et al. (2008) evaluated the ability of reviewers to detect major and minor errors by introducing errors into three previously published papers describing randomized controlled clinical trials. Reviewers detected approximately three of the nine errors introduced in each manuscript. Unfortunately, reviewers who had undergone training in how to conduct a high-quality peer review were not significantly better than untrained reviewers. Similar results have been reported by Godlee et al. (1998) and Baxt et al. (1998). Baxt et al. (1998) did report that reviewers who rejected or suggested revision of a manuscript identified more errors than those who accepted the manuscript (decision: 17.3% of major errors detected [accept], 29.6% of major errors detected [revise], and 39.1% of major errors detected [reject]). It is almost certainly true that the extent of the failure to recognize errors in submitted manuscripts may differ among scientific disciplines and journals. It seems likely, however, that these observations have some applicability to journals such as JID Innovations. It is critical that both authors and editors are cognizant of these limitations of peer review in their assessment of reviews. These findings compel journals to continue to work to develop new strategies to train and evaluate reviewers.
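As a concrete illustration of what “agreement no better than chance” means, here is a minimal Python sketch (not taken from any of the studies cited) that computes Cohen’s kappa for two reviewers’ accept/reject decisions. The decisions below are hypothetical; kappa simply measures how far observed agreement exceeds the agreement expected from each reviewer’s own accept rate.

```python
# Minimal sketch: Cohen's kappa for two reviewers' accept/reject decisions.
# The decisions are hypothetical; kappa near 0 means agreement is close to chance level.
from collections import Counter


def cohens_kappa(ratings_1, ratings_2):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_1) == len(ratings_2)
    n = len(ratings_1)
    observed = sum(a == b for a, b in zip(ratings_1, ratings_2)) / n
    counts_1, counts_2 = Counter(ratings_1), Counter(ratings_2)
    labels = set(ratings_1) | set(ratings_2)
    expected = sum((counts_1[label] / n) * (counts_2[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)


# Hypothetical decisions on 10 manuscripts ("A" = accept, "R" = reject).
reviewer_1 = ["A", "A", "R", "A", "R", "R", "A", "R", "A", "R"]
reviewer_2 = ["A", "R", "R", "A", "A", "R", "R", "R", "A", "A"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # prints 0.2: only slightly better than chance
```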
The findings also suggest that factors beyond the failure to detect objective mistakes in a manuscript may be playing a role in the discrepancy in reviewers’ evaluations. One area of ongoing concern in the peer review process is the role of reviewer bias in assessing the scientific work of colleagues (Kuehn, 2017; Lee et al., 2013; Tvina et al., 2019).

Bias in the peer review process can take many forms, including collaborator/competitor bias, affiliation bias based on an investigator’s institution or department, geographical bias based on the region or country of origin, racial bias, and gender or sex bias (Kuehn, 2017; Lee et al., 2013; Tvina et al., 2019). All of these forms of bias present the risk that a reviewer’s decision will not be based solely on the quality or merit of the work but will instead be influenced by the reviewer’s bias. We and other journals routinely seek to avoid selecting individuals to review work from their own institutions and ask all reviewers to declare any potential personal conflicts of interest. All of these methods require either the editor or the reviewer to identify a bias, and they fail to address implicit or unconscious reviewer bias. The dominant method currently utilized for peer review is the so-called single-blind review, in which the identity and affiliations of the authors are known to the reviewers, whereas the identity of reviewers remains unknown to the authors. This has led to concern that knowledge of the identity of the authors and their institutions may be the source of significant reviewer bias, especially implicit bias, in the evaluation of manuscripts. Double-anonymized peer review (DAPR), also known as double-blind peer review, has been suggested as a way to address this issue (Bazi, 2020; Lee et al., 2013). Studies have compared single-blind with double-blind reviewing and reported that there is no significant difference in the quality of the reviews (Alam et al., 2011; Godlee et al., 1998; Justice et al., 1998; van Rooyen et al., 1998). Although these studies looked at measures such as the number of errors detected, acceptance rate, and distribution of initial reviewer scores, they were not designed to address specific sources of bias such as authors’ gender, institution, or geographic location. Other studies have been undertaken to directly address the issue of bias in the peer review process. Ross et al. (2006) compared the acceptance of abstracts submitted to the American Heart Association’s annual scientific meeting during a period when the reviewers knew the identity and origin of the authors (i.e., single-blind review) with when this information was not known by the reviewers (i.e., double-anonymized peer review). They found a significant increase in acceptance of non-United States abstracts and abstracts from non-English-speaking countries when the reviewers were unaware of the country of origin of the abstracts (Ross et al., 2006). They also found a significant decrease in the acceptance of abstracts from prestigious institutions when the reviewers were unaware of the institutions where the work was done. In a similar study, Tomkins et al. (2017) found that papers submitted to a prestigious computer science meeting were more likely to be accepted if they were from famous authors, top universities, and top companies. Okike et al. (2016) documented similar results for manuscripts submitted to the orthopedic literature. They submitted a fabricated manuscript that was presented as being written by two prominent orthopedic surgeons (past Presidents of the American Academy of Orthopedic Surgeons) from prestigious institutions. When reviewed in the traditional single-blind fashion, which included the identity of the authors, the manuscript was accepted by 87% of the reviewers.
By contrast, when the identity of the authors was unknown, the manuscript was accepted by 68% of the reviewers (P = 0.02) (Okike et al., 2016). In a separate study conducted at The American Economic Review, Blank (1991) found that authors at near-top-ranked universities experienced lower acceptance rates when authorship was anonymized. Of interest, that study also found that for women there was no difference in the acceptance rate between double-anonymized and single-blind reviews, whereas for men the acceptance rate was lower with double-anonymized reviews.
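To show how a comparison like the 87% versus 68% acceptance rates above is typically evaluated, here is a small Python sketch of a two-proportion z-test. The group sizes (100 reviewers per arm) are hypothetical placeholders rather than the actual study counts, so the printed p-value is illustrative only.

```python
# Minimal sketch: two-sided two-proportion z-test for a difference in acceptance rates.
# Counts are hypothetical placeholders chosen to mirror the reported 87% vs 68% rates.
import math


def two_proportion_z_test(successes_1, n_1, successes_2, n_2):
    """Return (z statistic, two-sided p-value) for H0: the two proportions are equal."""
    p_1, p_2 = successes_1 / n_1, successes_2 / n_2
    pooled = (successes_1 + successes_2) / (n_1 + n_2)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p_1 - p_2) / std_err
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value


z, p = two_proportion_z_test(successes_1=87, n_1=100, successes_2=68, n_2=100)
print(f"z = {z:.2f}, p = {p:.4f}")  # a gap this large is unlikely if the true rates were equal
```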

These studies provide strong evidence that knowledge of who performed a study and where it was performed can affect the acceptance of abstracts and manuscripts. This conflicts with the goal of the review process, which is to base judgments on the quality of what the results demonstrate. It is difficult to estimate how much this may affect the fate of a manuscript at JID Innovations. We do not have evidence that our review process has been affected by bias of the kind reported in the studies discussed; however, neither can we state with certainty that such bias is not a factor in the reviews we receive. One of the goals of JID Innovations is to be a truly open-access journal available to all investigators in skin science from around the world. We have sought to be an outlet for studies that challenge existing paradigms or that may report negative results. We want to be seen as providing fair and objective reviews for all authors, regardless of where they work or who they are. If we are to achieve this goal, it is imperative that the who and where of a specific manuscript do not negatively affect the evaluation of the what. We want young investigators, investigators at less prestigious institutions or from less well-known laboratories, and investigators from any country around the world to be confident that their work will be judged by what they report and not by the who and the where.

To be true to this mission, JID Innovations will initiate DAPR starting in October 2022. This is not being done because we are aware of any issues of bias with our current process of peer review but because we realize that the absence of proof is not proof of absence. As part of this process, authors will be asked to remove identifying material from manuscripts at the time of submission in preparation for the review process ( https://www.jidinnovations.org/content/authorinfo ). As a result, primary reviewers will see only the what of the manuscript. We realize that this process involves extra work for both the authors and our staff, but we feel the benefits will outweigh this small cost. Indeed, in other journals that have taken this step, surveys have shown that both authors and reviewers ultimately prefer double-anonymized reviews (Bennett et al., 2018; Moylan et al., 2014). We realize that achieving 100% anonymization of a manuscript is nearly impossible. Studies have shown that the rate of successful anonymizing, where the reviewers cannot discern the authorship of a manuscript, ranged from 47% to 73%. It is interesting, however, that even with this rate of success in the anonymizing process, a meta-analysis of trials of double- versus single-blind peer review has suggested an impact, with lower acceptance rates under double-anonymized peer review (Ucci et al., 2022). More work clearly needs to be done to assess the value of the DAPR process, and we will be monitoring our results carefully.

The institution of DAPR in JID Innovations will assure our authors that the what of their manuscript is our focus. It does not matter who you are or where you are from. It will also emphasize to our reviewers that our focus is on the what. We will be carefully monitoring the results of this new policy and plan to report back on our experience. We also welcome your feedback on your experience as a reviewer and author for JID Innovations ; send your comments to us at [email protected] .

Finally, this decision should be seen not as the end of our efforts to improve the peer review process but merely as a first step. We will continue to work to improve all aspects of the peer review process for JID Innovations. We firmly believe that the use of double-anonymized peer review will bring us closer to assuring our authors and readers that the work published by JID Innovations has been selected on the basis of what the paper reports and not on who performed the studies or where they were located.

Conflict of Interest

The author states no conflicts of interest.

  • Alam M., Kim N.A., Havey J., Rademaker A., Ratner D., Tregre B., et al. Blinded vs. unblinded peer review of manuscripts submitted to a dermatology journal: a randomized multi-rater study. Br J Dermatol. 2011;165:563–7.
  • Baldwin M. Scientific autonomy, public accountability, and the rise of “peer review” in the Cold War United States. Isis. 2018;109:538–558.
  • Baxt W.G., Waeckerle J.F., Berlin J.A., Callaham M.L. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998;32:310–7.
  • Bazi T. Peer review: single-blind, double-blind, or all the way-blind? Int Urogynecol J. 2020;31:481–483.
  • Bennett K.E., Jagsi R., Zietman A. Radiation oncology authors and reviewers prefer double-blind peer review. Proc Natl Acad Sci USA. 2018;115:E1940.
  • Blank R.M. The effects of double-blind versus single-blind reviewing: experimental evidence from the American Economic Review. Am Econ Rev. 1991;81:1041–1067.
  • Godlee F., Gale C.R., Martyn C.N. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. 1998;280:237–40.
  • Hames I. Peer review at the beginning of the 21st century. Sci Ed. 2014;1:4–8.
  • Horrobin D.F. Something rotten at the core of science? Trends Pharmacol Sci. 2001;22:51–2.
  • Hwang W.S. Can peer review police fraud? Nat Neurosci. 2006;9:149.
  • Justice A.C., Cho M.K., Winker M.A., Berlin J.A., Rennie D. Does masking author identity improve peer review quality? A randomized controlled trial. PEER Investigators [published correction appears in JAMA 1998;280:968]. JAMA. 1998;280:240–2.
  • Kuehn B.M. Rooting out bias. eLife. 2017;6.
  • Lee C.J., Sugimoto C.R., Zhang G., Cronin B. Bias in peer review. JASIST. 2013;64:2–17.
  • Moylan E.C., Harold S., O’Neill C., Kowalczuk M.K. Open, single-blind, double-blind: which peer review process do you prefer? BMC Pharmacol Toxicol. 2014;15:55.
  • Okike K., Hug K.T., Kocher M.S., Leopold S.S. Single-blind vs double-blind peer review in the setting of author prestige. JAMA. 2016;316:1315–6.
  • Ross J.S., Gross C.P., Desai M.M., Hong Y., Grant A.O., Daniels S.R., et al. Effect of blinded peer review on abstract acceptance. JAMA. 2006;295:1675–80.
  • Rothwell P.M., Martyn C.N. Reproducibility of peer review in clinical neuroscience. Is agreement between reviewers any greater than would be expected by chance alone? Brain. 2000;123:1964–9.
  • Scharschmidt B.F., DeAmicis A., Bacchetti P., Held M.J. Chance, concurrence, and clustering. Analysis of reviewers’ recommendations on 1,000 submissions to the Journal of Clinical Investigation. J Clin Invest. 1994;93:1877–80.
  • Schroter S., Black N., Evans S., Godlee F., Osorio L., Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101:507–14.
  • Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99:178–82.
  • Tomkins A., Zhang M., Heavlin W.D. Reviewer bias in single- versus double-blind peer review. Proc Natl Acad Sci USA. 2017;114:12708–13.
  • Tvina A., Spellecy R., Palatnik A. Bias in the peer review process: can we do better? Obstet Gynecol. 2019;133:1081–3.
  • Ucci M.A., D’Antonio F., Berghella V. Double- vs single-blind peer review effect on acceptance rates: a systematic review and meta-analysis of randomized trials. Am J Obstet Gynecol MFM. 2022;4.
  • van Rooyen S., Godlee F., Evans S., Smith R., Black N. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA. 1998;280:234–7.
  • Weissmann G. Science fraud: from patchwork mouse to patchwork data. FASEB J. 2006;20:587–90.

Revisions to the NIH Fellowship Application and Review Process

NIH is revising the application and review criteria for fellowship applications, beginning with those submitted on or after January 25, 2025. This page describes the goals of the change and the implications for those writing and reviewing fellowship applications, and provides links to training and other resources.

Fellowship applications submitted on or after January 25, 2025 will follow revised application and review criteria. The goal of the changes is to improve the chances that the most promising fellowship candidates will be consistently identified by scientific review panels. The changes will:

  • Better focus reviewer attention on three key assessments: the fellowship candidate’s preparedness and potential, research training plan, and commitment to the candidate
  • Ensure a broad range of candidates and research training contexts can be recognized as meritorious by clarifying and simplifying the language in the application and review criteria
  • Reduce bias in review by emphasizing the commitment to the candidate without undue consideration of sponsor and institutional reputation

Background  

This page is designed to help you learn more about the peer review process and why we’re revising the application and review process for NIH fellowship applications submitted for due dates on or after January 25, 2025.

Changes to Fellowship Applications  

Learn more about the changes being made to fellowship application forms and instructions for due dates on or after January 25, 2025.

Changes to the Fellowship Review Criteria  

Learn more about the changes being made to the fellowship review criteria for applications submitted for due dates on or after January 25, 2025.

Candidate Guidance  

This page provides guidance for candidates applying to fellowships for due dates on or after January 25, 2025.

Reviewer Guidance (Coming in 2025)

Reviewers will be provided training and guidance materials in Spring 2025 in time for the first review meetings held in the summer of 2025 using the revised review criteria.

FAQs  

Find answers to your questions about the revisions to fellowship application and review criteria.

Training and Resources  

Training and resources, including presentations and webinars, to help you understand the revised fellowship application and peer review process are found on this site.

Notices, Statements, and Reports  

This page provides links to Notices, blog posts, press releases, and other background reports on the revised fellowship application and review criteria.

This page last updated on: April 18, 2024

CIHR Celebrates Exceptional Contributions to Peer Review

Dr. Vicky Bungay's remarkable contributions to the Canadian Institutes of Health Research (CIHR) peer review process have been recognized with the CIHR Certificate of Recognition for her "outstanding performance in three consecutive project grant competitions." This distinction places her among the select group of CIHR peer reviewers, only 1.0%, who received special recognition for service in 2023.

April 19, 2024

We are thrilled to announce and celebrate the exceptional achievements of Dr. Vicky Bungay, who, during the course of 2023, served as a peer reviewer for CIHR.

In providing Vicky with her Certificate of Recognition, Associate Vice-President of Research Programs Adrian Mota stated that "part of CIHR’s  Review Quality Assurance (RQA) process is recognizing outstanding contributions to peer review. Through feedback and observations from Committee Chairs, Scientific Officers and CIHR staff, the RQA process captures contributions that exemplify the very best of peer reviewers, such as:

  • Providing reviews that go above and beyond expectations 
  • Volunteering to take on additional tasks on short notice
  • Participating constructively in discussions of applications – including those to which they have not been assigned."

Mr. Mota added these words of appreciation to Vicky's congratulatory note:

On behalf of CIHR and the College of Reviewers, thank you for your selfless generosity volunteering your time and expertise and for your commitment to excellence in peer review. ~ Adrian Mota, Associate Vice-President, Research Programs, CIHR

The school’s director, Dr. Elizabeth Saewyc, also congratulated Vicky, acknowledging the intense amount of work that goes into participating in a CIHR review panel and adding, "this kind of recognition of excellence represents a lot of intense and fabulous work."

Vicky's unwavering commitment to rigor and fairness in evaluating research proposals has played a pivotal role in ensuring that valuable research receives funding, which in turn drives innovation and excellence within nursing research. This kind of dedication inspires continued confidence in the integrity of the peer review process.

Vicky's achievements serve as a shining example of the positive impact that dedicated individuals can have on the scientific community. We extend our heartfelt congratulations to Dr. Vicky Bungay on this well-deserved recognition, and express our deepest gratitude for her continued contributions to nursing research.


  19. Explainer: what is peer review?

    The process in details. The peer review process for journals involves at least three stages. 1. The desk evaluation stage. When a paper is submitted to a journal, it receives an initial evaluation ...

  20. Understanding peer review

    The purpose of peer review is to evaluate the paper's quality and suitability for publication. As well as peer review acting as a form of quality control for academic journals, it is a very useful source of feedback for you. The feedback can be used to improve your paper before it is published. So at its best, peer review is a collaborative ...

  21. Peer review and the publication process

    Peer review is one of various mechanisms used to ensure the quality of publications in academic journals. It helps authors, journal editors and the reviewer themselves. It is a process that is unlikely to be eliminated from the publication process. All forms of peer review have their own strengths and weaknesses.

  22. Peer Review Process: Understanding the Pathway to Publication

    Peer review is a critical evaluation process that academic work undergoes before being published in a journal. It serves as a filter, fact-checker, and redundancy-detector, ensuring that the published research is original, impactful, and adheres to the best practices of the field. The primary purposes of peer review are twofold.

  23. Peer Review Process

    The process is as follows: - Immediately after a Research Grants round closes, all applications are checked for eligibility and completeness. - All eligible Expressions of Interest are assigned to at least one peer reviewer for independent assessment ahead of the next Management Committee meeting. Reviewers are typically assigned 3-5 ...

  24. (PDF) The peer review process

    The peer review process is a crucial component of medical publishing, serving to evaluate the quality and validity of research manuscripts before their publication [5].Reviewers play a significant ...

  25. A maturity model for the scientific review of clinical trial designs

    The dominant approach used by government funders to decide if a research study will be funded is peer-review. While peer-review for pre-funding decisions is well established, it continues to evolve and not necessarily in a scientific direction. For example, a large fraction of stakeholders believe peer-review ought to change to only assess the ...

  26. New Peer Review Framework for Research Project Grant and Fellowship

    May 8, 2024. Have you heard about the initiative at the National Institutes of Health (NIH) to improve the peer review of research project grant and fellowship applications? Join us as NIH describes the steps the agency is taking to simplify its process of assessing the scientific and technical merit of applications, better identify promising ...

  27. A scoping review of continuous quality improvement in healthcare system

    The growing adoption of continuous quality improvement (CQI) initiatives in healthcare has generated a surge in research interest to gain a deeper understanding of CQI. However, comprehensive evidence regarding the diverse facets of CQI in healthcare has been limited. Our review sought to comprehensively grasp the conceptualization and principles of CQI, explore existing models and tools ...

  28. Effective Peer Review: Who, Where, or What?

    Peer review is widely viewed as one of the most critical elements in assuring the integrity of scientific literature (Baldwin, 2018; Smith, 2006).Despite the widespread acceptance and utilization of peer review, many difficulties with the process have been identified (Hames, 2014; Horrobin, 2001; Smith, 2006).One of the primary goals of the peer review process is to identify flaws in the work ...

  29. Revisions to the NIH Fellowship Application and Review Process

    Fellowship applications submitted on or after January 25, 2025 will follow a revised application and review criteria. The goal of the changes is to improve the chances that the most promising fellowship candidates will be consistently identified by scientific review panels. The changes will: Better focus reviewer attention on three key ...

  30. CIHR Celebrates Exceptional Contributions to Peer Review

    Dr. Vicky Bungay's remarkable contributions to the Canadian Institute of Health Research's (CIHR) review process have been recognized with the CIHR Certificate of Recognition for her "outstanding performance in three consecutive project grant competitions." This distinction places her among a select group of only 1.0% of CIHR Peer Reviewers to receive special recognition for service in 2023.