Handbook of Research Methods in Health Social Sciences, pp 805–826

Conducting a Systematic Review: A Practical Guide

  • Freya MacMillan,
  • Kate A. McBride,
  • Emma S. George &
  • Genevieve Z. Steiner
  • Reference work entry
  • First Online: 13 January 2019


It can be challenging to conduct a systematic review with limited experience and skills in undertaking such a task. This chapter is a practical guide to the process, with step-by-step instructions that take the reader from start to finish. It begins by defining what a systematic review is, reviewing its various components, turning a research question into a search strategy, and developing a systematic review protocol, followed by searching for relevant literature and managing citations. Next, the chapter covers documenting the characteristics of included studies and summarizing findings, extracting data, methods for assessing risk of bias, considering heterogeneity, and undertaking meta-analyses. Last, it explores creating a narrative and interpreting findings. Practical tips and examples from the existing literature are used throughout to assist readers in their learning. By the end of the chapter, the reader will have the knowledge to conduct their own systematic review.

Keywords:

  • Systematic review
  • Search strategy
  • Risk of bias
  • Heterogeneity
  • Meta-analysis
  • Forest plot
  • Funnel plot
  • Meta-synthesis


Author information

Authors and Affiliations

School of Science and Health and Translational Health Research Institute (THRI), Western Sydney University, Penrith, NSW, Australia

Freya MacMillan

School of Medicine and Translational Health Research Institute, Western Sydney University, Sydney, NSW, Australia

Kate A. McBride

School of Science and Health, Western Sydney University, Sydney, NSW, Australia

Emma S. George

NICM and Translational Health Research Institute (THRI), Western Sydney University, Penrith, NSW, Australia

Genevieve Z. Steiner


Corresponding author

Correspondence to Freya MacMillan.

Editor information

Editors and Affiliations

School of Science and Health, Western Sydney University, Penrith, NSW, Australia

Pranee Liamputtong

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry

MacMillan, F., McBride, K.A., George, E.S., Steiner, G.Z. (2019). Conducting a Systematic Review: A Practical Guide. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_113


DOI: https://doi.org/10.1007/978-981-10-5251-4_113

Published: 13 January 2019

Publisher Name: Springer, Singapore

Print ISBN: 978-981-10-5250-7

Online ISBN: 978-981-10-5251-4

eBook Packages: Social Sciences, Reference Module Humanities and Social Sciences, Reference Module Business, Economics and Social Sciences


Easy guide to conducting a systematic review

Affiliations

  • 1 Discipline of Child and Adolescent Health, University of Sydney, Sydney, New South Wales, Australia.
  • 2 Department of Nephrology, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • 3 Education Department, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • PMID: 32364273
  • DOI: 10.1111/jpc.14853

A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy-of-evidence pyramid, and they are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step-by-step summary of how to conduct a systematic review, which may be of interest to clinicians and researchers.

Keywords: research; research design; systematic review.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

Publication types

  • Systematic Review
  • Research Design*


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

This guide uses a running example throughout: a systematic review by Boyle and colleagues on probiotics for eczema. They answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews of Interventions is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is not a type of review; it's a statistical technique that combines the results of two or more studies, usually to estimate an effect size.
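To make the arithmetic concrete, here is a minimal Python sketch of the most common pooling approach, fixed-effect inverse-variance weighting. The effect sizes and standard errors are made-up numbers for illustration, not data from any real review.

```python
# A minimal sketch of fixed-effect, inverse-variance pooling.
# The (effect size, standard error) pairs below are illustrative only.
import math

studies = [(0.30, 0.12), (0.15, 0.08), (0.42, 0.20)]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
```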

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that's previously been studied by multiple researchers. If there's no previous research, there's nothing to review.
  • If you're doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you're a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have several pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The seven steps for conducting a systematic review are explained below with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the example review, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, the type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it's important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you'll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you'll collect from the studies and how you'll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you'll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren't published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won't read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as EndNote or Zotero.
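As an illustration of what a database search can look like in practice, here is a hedged Python sketch that runs a Boolean query against PubMed using Biopython's Entrez module and collects the matching record IDs. It assumes Biopython is installed and you have network access to NCBI; the query terms are illustrative, not a validated search strategy.

```python
# A sketch of a programmatic Boolean PubMed search via NCBI Entrez.
# Assumes Biopython is installed; the query is illustrative only.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

# Boolean search string: synonyms joined with OR, concepts joined with AND
query = (
    '("probiotics"[Title/Abstract] OR "lactobacillus"[Title/Abstract]) '
    'AND ("eczema"[Title/Abstract] OR "atopic dermatitis"[Title/Abstract])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs to save for screening
```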

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person's job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
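One common way to check this shared understanding is to pilot the criteria on a small batch of records and measure agreement between the two raters, for example with Cohen's kappa. A minimal pure-Python sketch, using hypothetical screening decisions:

```python
# A minimal sketch of inter-rater agreement (Cohen's kappa) on a pilot
# batch of screening decisions. The decision lists are hypothetical.
from collections import Counter

rater_a = ["include", "exclude", "exclude", "include", "exclude", "include"]
rater_b = ["include", "exclude", "include", "include", "exclude", "exclude"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal proportions
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
```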

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren't excluded during the first phase. If an article isn't available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .

In the example review, after screening titles and abstracts, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with the third reviewer, Varigos, until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study's methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you'll need to contact the study's authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in the Registry of Methods and Tools for Evidence-Informed Decision Making and from the Grading of Recommendations, Assessment, Development and Evaluations Working Group.
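In code, an extraction form can be as simple as a structured record with one field per item you plan to collect. A minimal Python sketch; the fields and values shown are illustrative and would be tailored to your research question:

```python
# A minimal sketch of a data extraction form as a dataclass.
# Field names and the sample record are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionForm:
    study_id: str
    year: int
    design: str                    # e.g. "randomized controlled trial"
    sample_size: int
    effect_size: Optional[float]   # None until confirmed with the authors
    standard_error: Optional[float]
    risk_of_bias: str              # e.g. "low", "some concerns", "high"
    notes: str = ""

record = ExtractionForm(
    study_id="study-001", year=2008, design="randomized controlled trial",
    sample_size=120, effect_size=-0.10, standard_error=0.09,
    risk_of_bias="some concerns",
)
print(record)
```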

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You'll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
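Part of justifying a quantitative synthesis is checking how consistent the study results are. A minimal sketch of two standard heterogeneity statistics, Cochran's Q and I-squared, computed from illustrative effect sizes:

```python
# A minimal sketch of between-study heterogeneity statistics.
# The (effect size, standard error) pairs are illustrative only.
studies = [(0.30, 0.12), (0.15, 0.08), (0.42, 0.20)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled effect
q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(studies, weights))
df = len(studies) - 1

# I^2: share of variability attributable to heterogeneity, floored at 0
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Cochran's Q = {q:.2f} (df = {df}), I^2 = {i_squared:.0f}%")
```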

In the example review, Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.
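Meta-analytic results are conventionally displayed as a forest plot: one point and confidence interval per study around a line of no effect. A minimal matplotlib sketch with illustrative data (it assumes matplotlib is installed):

```python
# A minimal forest plot sketch: point estimate and 95% CI per study.
# Study labels and numbers are illustrative only.
import matplotlib.pyplot as plt

labels = ["Study A", "Study B", "Study C"]
effects = [0.30, 0.15, 0.42]
ses = [0.12, 0.08, 0.20]

y = range(len(labels))
errors = [1.96 * se for se in ses]  # half-width of each 95% CI

fig, ax = plt.subplots(figsize=(5, 2.5))
ax.errorbar(effects, y, xerr=errors, fmt="s", color="black", capsize=3)
ax.axvline(0, linestyle="--", color="grey")  # line of no effect
ax.set_yticks(list(y))
ax.set_yticklabels(labels)
ax.invert_yaxis()                            # first study at the top
ax.set_xlabel("Effect size (95% CI)")
plt.tight_layout()
plt.show()
```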

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


Frequently asked questions about systematic reviews

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article


Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved April 2, 2024, from https://www.scribbr.com/methodology/systematic-review/


  • Methodology
  • Open access
  • Published: 17 October 2017

A proposed framework for developing quality assessment tools

  • Penny Whiting (ORCID: orcid.org/0000-0003-1138-5682),
  • Robert Wolff,
  • Susan Mallett,
  • Iveta Simera &
  • Jelena Savović

Systematic Reviews, volume 6, Article number: 204 (2017)


Assessment of the quality of included studies is an essential component of any systematic review. A formal quality assessment is facilitated by using a structured tool. There are currently no guidelines available for researchers wanting to develop a new quality assessment tool.

This paper provides a framework for developing quality assessment tools based on our experiences of developing a variety of quality assessment tools for studies of differing designs over the last 14 years. We have also drawn on experience from the work of the EQUATOR Network in producing guidance for developing reporting guidelines.

We do not recommend a single ‘best’ approach. Instead, we provide a general framework with suggestions as to how the different stages can be approached. Our proposed framework is based around three key stages: initial steps, tool development and dissemination.

Conclusions

We recommend that anyone who would like to develop a new quality assessment tool follow the stages outlined in this paper. We hope that our proposed framework will increase the number of tools developed using robust methods.


Systematic reviews are generally considered to provide the most reliable form of evidence for decision makers [ 1 ]. A formal assessment of the quality of the included studies is an essential component of any systematic review [ 2 , 3 ]. Quality can be considered to have three components—internal validity (risk of bias), external validity (applicability/variability) and reporting quality. The quality of included studies depends on them being sufficiently well designed and conducted to be able to provide reliable results [ 4 ]. Poor design, conduct or analysis can introduce bias or systematic error affecting study results and conclusions—this is also known as internal validity. External validity or the applicability of the study to the review question is also an important component of study quality. Reporting quality relates to how well the study is reported—it is difficult to assess other components of study quality if the study is not reported with the appropriate level of detail.

When conducting a systematic review, stronger conclusions can be derived from studies at low risk of bias than when evidence is based on studies with serious methodological flaws. Formal quality assessment as part of a systematic review, therefore, provides an indication of the strength of the evidence on which conclusions are based and allows comparisons between studies based on risk of bias [ 3 ]. The GRADE system for rating the overall quality of the evidence included in a systematic review is recommended by many guideline and systematic review organisations, such as the National Institute for Health and Care Excellence (NICE) and Cochrane. Risk of bias is a key component of this, along with publication bias, imprecision, inconsistency, indirectness and magnitude of effect [ 5 , 6 ].

A formal quality assessment is facilitated by using a structured tool. Although it is possible for reviewers to simply assess what they consider to be key components of quality, this may result in important sources of bias being omitted, inappropriate items included or too much emphasis being given to particular items guided by reviewers’ subjective opinions. In contrast, a structured tool provides a convenient standardised way to assess quality providing consistency across reviews. Robust tools are usually developed based on empirical evidence refined by expert consensus.

This paper provides a framework for developing quality assessment tools. We use the term ‘quality assessment tool’ to refer to any tool designed to target one or more aspects of the quality of a research study. This term applies whether a tool focuses specifically on one aspect of study quality (usually risk of bias) or covers additional aspects such as applicability/generalisability and reporting quality. We do not place any restrictions on the type of ‘tool’ to which this framework can be applied: it should suit a variety of approaches, such as checklists, domain-based tools, tables, graphics or any other format that developers may want to consider. We do not recommend a single ‘best’ approach. Instead, we provide a general framework with suggestions on how the different stages can be approached. This is based on our experience of developing quality assessment tools for studies of differing designs over the last 14 years. These include QUADAS [ 7 ] and QUADAS-2 [ 8 ] for diagnostic accuracy studies, ROBIS [ 9 ] for systematic reviews, PROBAST [ 10 ] for prediction modelling studies, ROBINS-I [ 11 ] for non-randomised studies of interventions and the new version of the Cochrane risk of bias tool for randomised trials (RoB 2.0) [ 12 ]. We have also drawn on experience from the work of the EQUATOR Network in producing guidance for developing reporting guidelines [ 13 ].

Over the years that we have been involved in the development of quality assessment tools and through involvement in different development processes, we noticed that the methods used to develop each tool could be mapped to a similar underlying process. The proposed framework evolved through discussion among the team, describing the steps involved in developing the different tools, and then grouping these into appropriate headings and stages.

Results: Proposed framework

Fig. 1 and Table 1 outline the proposed steps in our framework, grouped into three stages. The table also includes examples of how each step was approached for the tools that we have been involved in developing. Each step is discussed in detail below.

Fig. 1: Overview of proposed framework

Stage 1: initial steps

Identify the need for a new tool

The first step in developing a new quality assessment (QA) tool is to identify the need for a new tool: What is the rationale for developing the new tool? In their guidance on developing reporting guidelines, Moher et al. [ 13 ] stated that “developing a reporting guideline is complex and time consuming, so a compelling rationale is needed”. The same applies to the development of QA tools. It may be that there is no existing QA tool for the specific study design of interest; that a QA tool is available but not directly targeted to the specific context required (e.g. tools designed for clinical interventions may not be appropriate for public health interventions); that existing tools are not up to date; that new evidence on particular sources of bias has emerged and is not adequately addressed by existing tools; or that new approaches to quality assessment mean that a new tool is needed. For example, QUADAS-2 and RoB 2.0 were developed as experience, anecdotal reports, and feedback suggested areas for improvement of the original QUADAS and Cochrane risk of bias tools [ 7 ]. ROBIS was developed as we felt there was no tool that specifically addressed risk of bias in systematic reviews [ 9 ].

It is important to consider whether a completely new tool is needed or whether it may be possible to modify or adapt an existing tool. If modifying an existing tool, then the original can act as a starting point, although in practice, the new tool may look very different from the original. Both QUADAS-2 [ 8 ] and the new Cochrane risk of bias tool used the original versions of these tools as a starting point [ 12 ].

Obtain funding for the tool development

There are costs involved in developing a new QA tool. These will vary depending on the approach taken, but items that may need to be funded include researcher time, literature searching, travel and subsistence for attending meetings, face-to-face meetings, piloting the tool, online survey software, open access publication costs, website fees and conference attendance for dissemination. We have used different approaches to fund the development of quality assessment tools. QUADAS-2 [ 8 ] was funded by the UK Medical Research Council Methodology Programme as part of a larger project grant. ROBIS [ 9 ], ROBINS-I [ 11 ] and Cochrane RoB 2.0 [ 12 ] were funded through smaller project-specific grants, and PROBAST [ 10 ] received no specific funding. Instead, the host institutions for each steering group member allowed them time to work on the project and covered travel and subsistence for regular steering group meetings and conference attendance. Freely available SurveyMonkey software (www.surveymonkey.co.uk) was used to run an online Delphi process.

Assemble team

Assembling a team with the appropriate expertise is a key step in developing a quality assessment tool. As tool development usually relies on expert consensus, it is essential that the team includes people with an appropriate range of expertise. This generally includes methodologists with expertise in the study designs targeted by the tool, people with expertise in QA tool development and also end users, i.e. reviewers who will be using the tool. Reviewers are a group that may sometimes be overlooked but are essential to ensure that the final tool is usable by those for whom it is developed. If the tool is likely to be used in different content areas, then it is important to include reviewers who will be using the tool in all contexts. For example, ROBIS is targeted at different types of systematic reviews including reviews of interventions, diagnostic accuracy, aetiology and prognosis. We included team members who were familiar with all different types of review to ensure that the team included the appropriate expertise to develop the tool. It can also be helpful to include reviewers with a range of expertise from those new to quality assessment to more experienced reviewers. Including representatives from a wide range of organisations can also be helpful for the future uptake and dissemination of the tool. Thinking about this at an early stage is helpful. The more organisations that are involved in the development of the tool, the more likely these organisations are to feel some ownership of the tool and to want to implement the tool within their organisation in the future. The total number of people involved in tool development varies. For our tools, the number of people involved directly in the development of each tool ranged from 27 to 51 with a median of 40.

Manage the project

The size and the structure of the project team also need to be carefully considered. In order to cover an appropriate range of expertise, it is generally necessary to include a relatively large group of people. It may not be practical for such a large group to be involved in the day-to-day development of the tool, and so it may be desirable to have a smaller group responsible for driving the project by leading and coordinating all activities, and involving the larger group where their input is required. For example, when developing QUADAS-2 and PROBAST, a steering group of around 6–8 people led the development of the tool, bringing in a larger consensus group to help inform decisions on the scope and content of the tool. For ROBINS-I and Cochrane ROB 2.0, a smaller steering group led the development with domain-based working groups developing specific areas of the tool.

Define the scope

The scope of the quality assessment tool needs to be defined at an early stage. Table 2 outlines key questions to consider when defining the scope. Tools generally target one specific type of study. The specific study design to be considered is one of the first components to define. For example, QUADAS-2 [ 8 ] focused on diagnostic accuracy studies, PROBAST [ 10 ] on prediction modelling studies and the Cochrane risk of bias tool on randomised trials. Some tools may be broader, targeted at multiple related designs. For example, ROBINS-I targets all non-randomised studies of interventions rather than one single study design such as cohort studies. When deciding on the focus of the tool, it is important to clearly define the design and topic areas targeted. Trade-offs of different approaches need consideration. A more focused tool can be tailored to a specific topic area. A broader tool may not be as specific but can be used to assess a wider variety of studies. For example, we developed ROBIS to be used to assess any type of systematic review, e.g. intervention, prognostic, diagnostic or aetiology. Previous tools, such as the AMSTAR tool, were developed to assess reviews of RCTs [ 14 ]. Key to any quality assessment tool is a definition of quality as addressed by the tool, i.e. defining what exactly the tool is trying to address. We have found that once the definition of quality has been clearly agreed, it becomes much easier to decide on which items to include in the tool.

Other features to consider include whether to address both internal (risk of bias) and external validity (applicability) and the structure of the tool. The original QUADAS tool used a simple checklist design and combined items on risk of bias, reporting quality and applicability. Our more recently developed tools have followed a domain-based approach with a clear focus on assessment of risk of bias. Many of these domain-based tools also include sections covering applicability/relevance. How to rate individual items included in the tool also forms part of the scope. The original QUADAS tool [ 7 ] used a simple ‘yes, no or unclear’ rating for each question. The domain-based tools such as QUADAS-2 [ 8 ], ROBIS [ 9 ] and PROBAST [ 10 ] have signalling questions which flag the potential for bias. These are generally factual questions and can be answered as ‘yes, no or no information’. Some tools include a ‘probably yes’ or ‘probably no’ response to help reviewers answer these questions when there is not sufficient information for a more definite response. The overall domain ratings then use decision ratings like ‘high, low or unclear’ risk of bias. Some tools, such as ROBINS-I [ 11 ] and RoB 2.0 [ 12 ], include additional domain-level ratings such as ‘critical, severe, moderate or low’ and ‘low, some concerns, high’. We strongly recommend that at this stage, tool developers are explicit that quality scores should not be incorporated into the tools. Numerical summary quality scores have been shown to be poor indicators of study quality, and so alternatives to their use should be encouraged [ 15 , 16 ]. When developing many of our tools, we were explicit at the scope stage that we wanted to come up with an overall assessment of study quality but avoid the use of quality scores. One of the reasons for introducing the domain-level structure first used with the QUADAS-2 tool was explicitly to avoid users calculating quality scores by simply summing the number of items fulfilled.
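To make the structure concrete, here is a hedged sketch of how a domain-based assessment might be represented as data: signalling questions with 'yes/no/no information' style answers feeding a categorical judgement per domain, with no numeric score anywhere. The domain name and questions are simplified paraphrases for illustration, not an official tool.

```python
# A sketch of a domain-based assessment as a data structure.
# Domain name and signalling questions are simplified paraphrases.
from dataclasses import dataclass

ANSWERS = ("yes", "probably yes", "probably no", "no", "no information")
RATINGS = ("low", "high", "unclear")

@dataclass
class Domain:
    name: str
    signalling_answers: dict[str, str]  # question -> answer from ANSWERS
    risk_of_bias: str                   # reviewer judgement from RATINGS

domain = Domain(
    name="Participant selection",
    signalling_answers={
        "Was a consecutive or random sample enrolled?": "yes",
        "Were inappropriate exclusions avoided?": "no information",
    },
    risk_of_bias="unclear",
)

assert all(a in ANSWERS for a in domain.signalling_answers.values())
assert domain.risk_of_bias in RATINGS
print(f"{domain.name}: {domain.risk_of_bias} risk of bias")
```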

Agreeing the scope of the tool may not be straightforward and can require much discussion between team members. An additional consideration is how decisions on scope will be made: will this be by a single person or by the steering group, and should some or all decisions be agreed by the larger group? The approach that we have often taken is for a smaller group (e.g. steering group) to propose the scope of the tool, with agreement reached following consultation with the larger group. Questions on the scope can often form the first discussion points at a face-to-face meeting (e.g. ROBIS [ 9 ] and QUADAS-2 [ 8 ]) or the first questions on a web-based survey (e.g. PROBAST [ 10 ]).

As with any research project, a protocol that clearly defines the scope and proposed plans for the development of the tool should be produced at an early stage of the tool development process.

Stage 2: tool development

Generate initial list of items for inclusion

The starting point for a tool is an initial list of items to consider for inclusion. There are various ways in which this list can be generated. These include looking at existing tools, evidence reviews and expert knowledge. The most comprehensive way is to review the literature for potential sources of bias and to provide a systematic review summarising the evidence for the effects of these. This is the approach we took for the original QUADAS tool [ 7 ] and also the updated QUADAS-2 [ 8 , 17 , 18 ]. Reviewing the items included in existing tools and summarising the number of tools that included each potential item can be a useful initial step as it shows which potential items of bias have been considered as important by previous tool developers. This process was followed for the original QUADAS tool [ 7 ] and for ROBIS [ 9 ]. Examining how previous systematic reviews have incorporated quality into their results can also be helpful to provide an indication of the requirements of a QA tool. If you are updating a previous QA tool then this will often form the starting point for potential items to include in the updated tool. This was the case for QUADAS-2 [ 8 ] and the RoB 2.0 [ 12 ]. For ROBINS-I [ 11 ], domains were agreed at a consensus meeting, and then expert working groups identified potential items to include in each domain. Generating the list of items for inclusion was, therefore, based on expert consensus rather than reviewing existing evidence. This can also be a valid approach. The development of PROBAST used a combined approach of using an existing tool for a related area as the starting point (QUADAS-2), non-systematic literature reviews and expert input from both steering group members and wider PROBAST group [ 10 ].

Agree initial items and scope

After the initial stages of tool development which can often be performed by a smaller group, input from the larger group should be sought. Methods for gaining input from the larger group include holding a face-to-face meeting or a web-based survey. At this stage, the scope defined in step 1.5 can be brought to the larger group for further discussion and refinement. The initial list of items needs to be further refined until agreement is reached on which items should be included in an initial draft of the tool. If a face-to-face meeting is held, smaller break-out groups focussing on specific domains can be a helpful structure to the meeting. QUADAS-2, ROBIS and ROBINS-I all involved face-to-face meetings with smaller break-out groups early in the development process [ 8 , 9 , 11 ]. If moving straight to a web-based survey, then respondents can be asked about the scope with initial questions considering possible items to include. This approach was taken for PROBAST [ 10 ] and the original QUADAS tool [ 7 ]. For PROBAST, we also asked group members to provide supporting evidence for why items should be included in the tool [ 10 ]. Items should be turned into potential questions/signalling questions for inclusion in the tool at this relatively early stage in the development of the tool.

Produce first draft of tool and develop guidance

Following the face-to-face meeting or initial survey rounds, a first draft of the tool can be produced. The initial draft may be produced by a smaller group (e.g. steering group), single person, or by taking a domain-based approach with the larger group split into groups with each taking responsibility for single domains. For QUADAS-2 [ 8 ] and PROBAST [ 10 ], a single person developed the first draft which was then agreed by the steering group before moving forwards. The first draft of ROBIS was developed following the face-to-face meeting by two team members. Initial drafts of ROBINS-I [ 11 ] and the RoB 2.0 [ 12 ] were produced by teams working on single domains proposing initial versions for their domains. Drafts for each domain were then put together by the steering group to give a first draft of the tool. Once a first draft of the tool is available, it may be helpful to start producing a clear guidance document describing how to assess each of the items included in the tool. The earlier such a guide can be produced, the more opportunity there will be to pilot and refine it alongside the tool.

Pilot and refine

The first draft of the tool needs to go through a process of refinement until a final version that has the agreement of the wider group is achieved. Consensus may be achieved in various ways. Online surveys consisting of multiple rounds until agreement on the final tool is reached are a good way of involving large numbers of experts in this process. This is the approach used for QUADAS [ 7 ], QUADAS-2 [ 8 ], ROBIS [ 9 ] and PROBAST [ 10 ]. If domain-based working groups were adopted for the initial development of the tool, these can also be used to finalise the tool. Members of the full group can then provide feedback on draft versions, including domains that they were not initially assigned to. This approach was used for ROBINS-I and RoB 2.0. It would also be feasible to combine such an approach with a web-based survey.

Whilst the tool is being refined, initial piloting work can be undertaken. If a guidance document has been produced, then it can be included in the piloting process. If the tool is available in different formats, for example paper-based or Access database, then these could also be made available and tested as part of the piloting. The research team may ask reviewers working on appropriate review topics to pilot the tool in their review. Alternatively, reviewers can be asked to pilot the tool on a series of sample papers and to provide feedback on their experience of using the tool. An efficient way of completing such a process is to hold a piloting event where reviewers try out the tool on a sample of papers which they can either bring with them or that are provided to them. This can be a good approach to get feedback in a timely and interactive manner. However, there are costs associated with running such an event. Asking reviewers to pilot the tool in ongoing reviews can result in delays as piloting cannot be started until the review is at the data extraction stage. Identifying reviews at an appropriate stage with reviewers willing to spend the extra time needed to pilot a new tool is not always straightforward. We held a piloting event when developing the RoB 2.0 and found this to be very efficient in providing immediate feedback on the tool. We were also able to hold a group discussion for reviewers to provide suggestions for improvements to the tool and to highlight any items that they found difficult. For previous tools, we used remote piloting which provided helpful feedback but was not as efficient as the piloting event. Ideally, any piloting process should involve reviewers with a broad range of experience ranging from those with extensive experience of conducting quality assessment of studies of a variety of designs to those relatively new to the process.

The time taken for piloting and refining the tool can vary considerably. For some tools, such as ROBIS and QUADAS-2, this process was completed in around 6–9 months. For PROBAST and ROBINS-I, the process took over 4 years.

Stage 3: dissemination

Develop a publication strategy

A strategy to disseminate the tool is required. This should be discussed at the start of the project but may evolve as the tool is developed. The primary means of dissemination is usually through publication in a peer-reviewed journal. A more detailed guidance document can accompany the publication and be made available as a web appendix. Another option is to have dual publications, one reporting the tool and outlining how it was developed, and a second providing additional guidance on how to use the tool. This is sometimes known as an ‘E&E’ (explanation and elaboration) publication and is an approach adopted by many reporting guidelines [ 13 ].

Establish a website

Developing a website for the tool can help with dissemination. Ideally, the website should be developed before publication of the tool so that details can be included in the publication. The final version of the tool can be posted on the website together with the full guidance document. Details of who contributed to the tool development and any funding should also be acknowledged on the website. Additional resources to help reviewers use the tool can also be posted there. For example, the ROBIS ( www.robis-tool.info ) and QUADAS ( www.quadas.org ) websites both contain a Microsoft Access database that reviewers can use to complete their assessments and templates to produce graphical and tabular displays. They also contain links to other relevant resources and details of training opportunities. Other resources that may be useful to include on tool websites include worked examples and translations of the tools, where available. QUADAS-2 has been translated into Italian and Japanese, and these translations can be accessed via its website. If the tool has been endorsed or recommended for use by particular organisations (e.g. Cochrane, the UK National Institute for Health and Care Excellence (NICE)), then this could also be included on the website.

The website is also a helpful way to encourage comments about the tool, which can lead to its further improvement, and exchange of experiences with the tool implementation.

Encourage uptake of tool by leading organisations

Encouraging organisations, both national and international, to recommend the tool for use in their systematic reviews is a very effective means of making sure that, once developed, the tool is used. There are different ways this can be achieved. Involving representatives from a wide range of organisations as part of the development team may mean that they are more likely to recommend the use of the tool in their organisations. Presentations at conferences, for example the Cochrane Colloquium or the Health Technology Assessment Conference, may increase knowledge of the tool within that organisation, making it more likely that the tool will be recommended for use. Running workshops on the tool for organisations can help increase familiarity with the tool and its usability. These can also provide helpful feedback on what to include in guidance documents and inform future updates of the tool. For example, we have been running workshops on QUADAS and ROBIS within Cochrane for a number of years. We have also provided training to institutions such as NICE on how to use the tools. QUADAS is now recommended by both these organisations, among many others, for use in diagnostic accuracy reviews. We have also run workshops on ROBIS, PROBAST, ROBINS-I and RoB 2.0 at the annual Cochrane Colloquium. We were recently approached by the Estonian Health Insurance Fund with a request to provide training to some of their reviewers so that they could implement ROBIS within their guideline development process. We supported this by running a specific training session for them.

Ultimately, the best way to encourage tool uptake is to make sure that the tool was developed robustly and fills a gap where there is currently no existing tool or there are limitations with existing tools. Ensuring that the tool is widely disseminated also means that the tool is more likely to be used and recommended.

Translate tools

After the tool has been published, you may receive requests to translate the tool. Translation can help to disseminate the tool and encourage its use in a much broader range of countries. Tool translations should therefore be encouraged, but it is important to satisfy yourself that the translation has been completed appropriately. One method of doing this is back translation.

In this paper, we suggest a framework for developing quality assessment tools. The framework consists of three stages: (1) initial steps, (2) tool development and (3) dissemination. Each stage includes defined steps that we consider important to follow when developing a tool; there is some flexibility on how these stages may be approached. In developing this framework, we have drawn on our extensive experience of developing quality assessment tools. Despite having used different approaches to the development of each of these tools, we found that all approaches shared common features and processes. This led to the development of the framework. We recommend that anyone who would like to develop a new quality assessment tool follow the stages outlined in this paper.

When developing a new tool, you need to decide how to approach each of the proposed stages. We have given some examples of how to do this; other approaches may also be valid. Factors that may influence how you approach the development of your tool include available funding, topic area, the number and range of people to involve, target audience and tool complexity. For example, holding face-to-face meetings and running piloting events incur greater costs than web-based surveys or asking reviewers to pilot the tool at their own convenience. More complex tools may take longer, require additional expertise, and need more piloting and refinement.

We are not aware of any existing guidance on how to develop QA tools. Moher and colleagues have produced guidance on how to develop reporting guidelines [ 13 ], which has been cited over 190 times, mainly by new reporting guidelines, suggesting that many reporting guideline developers have found a structured approach helpful. In the absence of guidance specifically for the development of QA tools, we also based our development of QUADAS-2 [ 8 ] and ROBIS [ 9 ] on this guidance for reporting guidelines. Although many of the steps proposed by Moher et al. apply to the development of QA tools, some are not directly relevant, and specific guidance on developing QA tools would therefore be helpful.

A very large number of quality assessment tools are available. When developing ROBIS and QUADAS, we conducted reviews of existing quality assessment tools. These identified 40 tools to assess the quality of systematic reviews [ 19 ] and 91 tools to assess the quality of diagnostic accuracy studies [ 20 ]. However, only three systematic review tools (7.5%) [ 19 ] and two diagnostic tools (2%) [ 20 ] were reported to have been rigorously developed. The lack of a rigorous development process for most tools suggests a need for guidance on how to develop quality assessment tools. We hope that our proposed framework will increase the number of tools developed using robust methods.

The large number of quality assessment tools available makes it difficult for people working on systematic reviews to choose the most appropriate tool(s) for their reviews. We are therefore developing an initiative, similar to the EQUATOR Network, to improve the process of quality assessment in systematic reviews. This will be known as the LATITUDES Network ( www.latitudes-network.org ). LATITUDES aims to highlight and increase the use of key risk of bias assessment tools, to help people use these tools more effectively, to improve the incorporation of risk of bias assessment results into reviews, and to disseminate best practice in risk of bias assessment.

Murad MH, Montori VM. Synthesizing evidence: shifting the focus from individual studies to the body of evidence. JAMA. 2013;309(21):2217–8.


Centre for Reviews and Dissemination. Systematic reviews: CRD's guidance for undertaking reviews in health care [Internet]. York: University of York; 2009 [accessed 23 Mar 2011].

Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions [Internet]. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011 [accessed 23 Mar 2011].

Torgerson D, Torgerson C. Designing randomised trials in health, education and the social sciences: an introduction. New York: Palgrave MacMillan; 2008.


Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, Montori V, Akl EA, Djulbegovic B, Falck-Ytter Y, et al. GRADE guidelines: 4. Rating the quality of evidence—study limitations (risk of bias). J Clin Epidemiol. 2011;64(4):407–15.

Balshem H, Helfand M, Schunemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol. 2011;64(4):401–6.

Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol. 2003;3:25.


Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36.

Whiting P, Savović J, Higgins JP, Caldwell DM, Reeves BC, Shea B, Davies P, Kleijnen J, Churchill R; the ROBIS group. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Mallett S, Wolff R, Whiting P, Riley R, Westwood M, Kleijnen J, Collins G, Reitsma H, Moons K. Methods for evaluating medical tests and biomarkers. 04 Prediction model study risk of bias assessment tool (PROBAST). Diagn Progn Res. 2017;1(1):7.


Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.

Higgins J, Sterne J, Savović J, Page M, Hróbjartsson A, Boutron I, Reeves B, Eldridge S. A revised tool for assessing risk of bias in randomized trials. In: Chandler J, McKenzie J, Boutron I, Welch V, editors. Cochrane Methods. Cochrane Database of Systematic Reviews. 2016;10(Suppl 1).

Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

Whiting P, Harbord R, Kleijnen J. No role for quality scores in systematic reviews of diagnostic accuracy studies. BMC Med Res Methodol. 2005;5:19.

Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999;282(11):1054–60.


Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med. 2004;140(3):189–202.

Whiting PF, Rutjes AW, Westwood ME, Mallett S, Group Q-S. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol. 2013;66(10):1093–104.

Whiting P, Davies P, Savović J, Caldwell D, Churchill R. Evidence to inform the development of ROBIS, a new tool to assess the risk of bias in systematic reviews. 2013.

Whiting P, Rutjes AW, Dinnes J, Reitsma JB, Bossuyt PM, Kleijnen J. A systematic review finds that diagnostic reviews fail to incorporate quality despite available tools. J Clin Epidemiol. 2005;58(1):1–12.

Moons KG, Altman DG, Reitsma JB, Ioannidis JP, Macaskill P, Steyerberg EW, Vickers AJ, Ransohoff DF, Collins GS. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med. 2015;162(1):W1–W73.

Moons KG, de Groot JA, Bouwmeester W, Vergouwe Y, Mallett S, Altman DG, Reitsma JB, Collins GS. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med. 2014;11(10):e1001744.

Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savović J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.


Acknowledgements

Not applicable.

Funding

The development of QUADAS-2, ROBIS and the new version of the Cochrane risk of bias tool for randomised trials (RoB 2.0) were funded by grants from the UK Medical Research Council (G0801405/1, MR/K01465X/1, MR/L004933/1- N61 and MR/K025643/1). ROBINS-I was funded by the Cochrane Methods Innovation Fund.

PW and JS time was partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) West at University Hospitals Bristol NHS Foundation Trust. SM received support from the NIHR Birmingham Biomedical Research Centre.

The views expressed in this article are those of the authors and not necessarily those of the NHS, NIHR, MRC and Cochrane or the Department of Health. The funders had no role in the design of the study, data collection and analysis, decision to publish or preparation of the manuscript.

Availability of data and materials

Not applicable.

Author information

Authors and affiliations

NIHR CLAHRC West, University Hospitals Bristol NHS Foundation Trust, Bristol, UK

Penny Whiting & Jelena Savović

School of Social and Community Medicine, University of Bristol, Bristol, UK

Kleijnen Systematic Reviews Ltd., Escrick, York, UK

Robert Wolff

Institute of Applied Health Research, University of Birmingham, Birmingham, UK

Susan Mallett

National Institute for Health Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK

Centre for Tropical Medicine and Global Health, University of Oxford, Oxford, UK

Iveta Simera


Contributions

PW conceived the idea for this paper and drafted the manuscript. JS, IS, RW and SM contributed to the writing of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Penny Whiting .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions


Whiting, P., Wolff, R., Mallett, S. et al. A proposed framework for developing quality assessment tools. Syst Rev 6 , 204 (2017). https://doi.org/10.1186/s13643-017-0604-6


Received : 11 July 2017

Accepted : 04 October 2017

Published : 17 October 2017

DOI : https://doi.org/10.1186/s13643-017-0604-6


Keywords

  • Risk of bias
  • Systematic reviews



  • Research article
  • Open access
  • Published: 04 June 2019

Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool

  • Heather Menzies Munthe-Kaas 1 ,
  • Claire Glenton 1 ,
  • Andrew Booth 2 ,
  • Jane Noyes 3 &
  • Simon Lewin 1 , 4  

BMC Medical Research Methodology volume  19 , Article number:  113 ( 2019 )

29k Accesses

24 Citations

41 Altmetric


Background

Qualitative evidence synthesis is increasingly used alongside reviews of effectiveness to inform guidelines and other decisions. To support this use, the GRADE-CERQual approach was developed to assess and communicate the confidence we have in findings from reviews of qualitative research. One component of this approach requires an appraisal of the methodological limitations of studies contributing data to a review finding. Diverse critical appraisal tools for qualitative research are currently being used. However, it is unclear which tool is most appropriate for informing a GRADE-CERQual assessment of confidence.

Methodology

We searched for tools that were explicitly intended for critically appraising the methodological quality of qualitative research. We searched the reference lists of existing methodological reviews for critical appraisal tools, and also conducted a systematic search in June 2016 for tools published in health science and social science databases. Two reviewers screened identified titles and abstracts, and then screened the full text of potentially relevant articles. One reviewer extracted data from each article and a second reviewer checked the extraction. We used a best-fit framework synthesis approach to code checklist criteria from each identified tool and to organise these into themes.

Results

We identified 102 critical appraisal tools: 71 tools had previously been included in methodological reviews, and 31 tools were identified from our systematic search. Almost half of the tools were published after 2010. Few authors described how their tool was developed, or why a new tool was needed. After coding all criteria, we developed a framework that included 22 themes. None of the tools included all 22 themes. Some themes were included in up to 95 of the tools.

Conclusions

It is problematic that researchers continue to develop new tools without adequately examining the many tools that already exist. Furthermore, the plethora of tools, old and new, indicates a lack of consensus regarding the best tool to use, and an absence of empirical evidence about the most important criteria for assessing the methodological limitations of qualitative research, including in the context of use with GRADE-CERQual.


Background

Qualitative evidence syntheses (also called systematic reviews of qualitative evidence) are becoming increasingly common and are used for diverse purposes [ 1 ]. One such purpose is their use, alongside reviews of effectiveness, to inform guidelines and other decisions, with the first Cochrane qualitative evidence synthesis published in 2013 [ 2 ]. However, there are challenges in using qualitative synthesis findings to inform decision making because methods to assess how much confidence to place in these findings are poorly developed [ 3 ]. The 'Confidence in the Evidence from Reviews of Qualitative research' (GRADE-CERQual) approach aims to transparently and systematically assess how much confidence to place in individual findings from qualitative evidence syntheses [ 3 ]. Confidence here is defined as "an assessment of the extent to which the review finding is a reasonable representation of the phenomenon of interest" ([ 3 ] p.5). GRADE-CERQual draws on the conceptual approach used by the GRADE tool for assessing certainty in evidence from systematic reviews of effectiveness [ 4 ]. However, GRADE-CERQual is designed specifically for findings from qualitative evidence syntheses and is informed by the principles and methods of qualitative research [ 3 , 5 ].

The GRADE-CERQual approach bases its assessment of confidence on four components: the methodological limitations of the individual studies contributing to a review finding; the adequacy of data supporting a review finding; the coherence of each review finding; and the relevance of a review finding [ 5 ]. In order to assess the methodological limitations of the studies contributing data to a review finding, a critical appraisal tool is necessary. Critical appraisal tools "provide analytical evaluations of the quality of the study, in particular the methods applied to minimise biases in a research project" [ 6 ]. Debate continues over whether or not one should critically appraise qualitative research [ 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 ]. Arguments against using criteria to appraise qualitative research have centred on the idea that "research paradigms in the qualitative tradition are philosophically based on relativism, which is fundamentally at odds with the purpose of criteria to help establish 'truth'" [ 16 ]. The starting point in this paper, however, is that it is both possible and desirable to establish a set of criteria for critically appraising the methodological strengths and limitations of qualitative research. End users of findings from primary qualitative research and from syntheses of qualitative research often make judgments regarding the quality of the research they are reading, and this is often done in an ad hoc manner [ 3 ]. Within a decision making context, such as formulating clinical guideline recommendations, the implicit nature of such judgements limits the ability of other users to understand or critique these judgements. A set of criteria to appraise methodological limitations allows such judgements to be conducted, and presented, in a more systematic and transparent manner. We understand and accept that these judgements are likely to differ between end users – explicit criteria help to make these differences more transparent.

The terms “qualitative research” and “qualitative evidence synthesis” refer to an ever-growing multitude of research and synthesis methods [ 17 , 18 , 19 , 20 ]. Thus far, the GRADE-CERQual approach has mostly been applied to syntheses producing a primarily descriptive rather than theoretical type of finding [ 5 ]. Consequently, it is primarily this descriptive standpoint from which the analysis presented in the current paper is conducted. The authors acknowledge, however, the potential need for different criteria when appraising the methodological strengths and limitations of different types of primary qualitative research. While accepting that there is probably no universal set of critical appraisal criteria for qualitative research, we maintain that some general principles of good practice by which qualitative research should be conducted do exist. We hope that our work in this area, and the work of others, will help us to develop a better understanding of this important area.

In health science environments, there is now widespread acceptance of the use of tools to critically appraise individual studies, and as Hannes and Macaitis have observed, “it becomes more important to shift the academic debate from whether or not to make an appraisal to what criteria to use” [ 21 ]. This shift is paramount because a plethora of critical appraisal tools and checklists [ 22 , 23 , 24 ] exists and yet there is little, if any, agreement on the best approach for assessing the methodological limitations of qualitative studies [ 25 ]. To the best of our knowledge, few tools have been designed for appraising qualitative studies in the context of qualitative synthesis [ 26 , 27 ]. Furthermore, there is a paucity of tools designed to critically appraise qualitative research to inform a practical decision or recommendation, as opposed to critical appraisal as an academic exercise by researchers or students.

In the absence of consensus, the Cochrane Qualitative & Implementation Methods Group (QIMG) provides a set of criteria that can be used to select an appraisal tool, noting that review authors can potentially apply critical appraisal tools specific to the methods used in the studies being assessed, and that the chosen critical appraisal tool should focus on methodological strengths and limitations (and not reporting standards) [ 11 ]. A recent review of qualitative evidence syntheses found that the majority of identified syntheses (92%; 133/145) reported appraising the quality of included studies. However, a wide range of tools were used (30 different tools) and some reviews reported using multiple critical appraisal tools [ 28 ]. So far, authors of Cochrane qualitative evidence syntheses have adopted different approaches, including adapting existing appraisal tools and using tools that are familiar to the review team.

This lack of a uniform approach mirrors the situation for systematic reviews of effectiveness over a decade ago, where over 30 checklists were being used to assess the quality of randomised trials [ 29 ]. To address this lack of consistency and to reach consensus, a working group of methodologists, editors and review authors developed the risk of bias tool that is now used for Cochrane intervention reviews and is a key component of the GRADE approach [ 4 , 30 , 31 ]. The Cochrane risk of bias tool encourages review authors to be transparent and systematic in how they appraise the methodological limitations of primary studies. Assessments using this tool are based on an assessment of objective goals and on a judgment of whether failure to meet these objective goals raises any concerns for the particular research question or review finding. Similar efforts are needed to develop a critical appraisal tool to assess methodological limitations of primary qualitative studies in the context of qualitative evidence syntheses (Fig.  1 ).

Fig. 1 PRISMA flow chart: results of the systematic mapping review described in this article

Previous reviews

While at least five methodological reviews of critical appraisal tools for qualitative research have been published since 2003, we assessed that these did not adequately address the aims of this project [ 22 , 23 , 24 , 32 , 33 ]. Most of the existing reviews focused only on critical appraisal tools in the health sciences [ 22 , 23 , 24 , 32 ]. One review focused on reporting standards for qualitative research [ 23 ], one did not use a systematic approach to searching the literature [ 24 ], one included critical appraisal tools for any study design (quantitative or qualitative) [ 32 ], and one only included tools defined as "'high-utility tools' […] that are some combination of available, familiar, authoritative and easy to use tools that produce valuable results and offer guidance for their use" [ 33 ]. In the review that most closely resembled the aims of the current review, the search was conducted in 2010, tools used in the social sciences were not included, and the review was not conducted from the perspective of the GRADE-CERQual approach (see discussion below) [ 22 ].

Current review

We conducted this review of critical appraisal tools for qualitative research within the context of the GRADE-CERQual approach. This reflects our specific interest in identifying (or developing, if need be) a critical appraisal tool to assess the methodological strengths and limitations of a body of evidence that contributes to a review finding and, ultimately, to contribute to an assessment of how much confidence we have in review findings based on these primary studies [ 3 ]. Our focus is thus not on assessing the overall quality of an individual study, but rather on assessing how any identified methodological limitations of a study could influence our confidence in an individual review finding. This particular perspective may not have exerted a large influence on the conduct of our current mapping review. However, it will likely influence how we interpret our results, reflecting our thinking on methodological limitations both at the individual study level and at the level of a review finding. Our team is also guided by how potential concepts found in existing checklists may overlap with the other components of the GRADE-CERQual approach, namely relevance, adequacy and coherence (see Table  1 for definitions).

The aim of this review was to systematically map existing critical appraisal tools for primary qualitative studies, and identify common criteria across these tools.

Eligibility criteria

For the purposes of this review, we defined a critical appraisal tool as a tool, checklist or set of criteria that provides guidance on how to appraise the methodological strengths and limitations of qualitative research. This could include, for instance, instructions for authors of scientific journals; articles aimed at improving qualitative research and targeting authors and peer reviewers; and chapters from qualitative methodology manuals that discuss critical appraisal.

We included critical appraisal tools if they were explicitly intended to be applicable to qualitative research. We included tools that were defined for mixed methods if it was clearly stated that their approach included qualitative methods. We included tools with clear criteria or questions intended to guide the user through an assessment of the study. However, we did not include publications where the author discussed issues related to methodological rigor of qualitative research but did not provide a list or set of questions or criteria to support the end user in assessing the methodological strengths and limitations of qualitative research. These assessments were sometimes challenging, and we have sought to make our judgements as transparent as possible. We did not exclude tools based on how their final critical appraisal assessments were determined (e.g., whether the tool used numeric quality scores, a summary of elements, or weighting of criteria).

We included published or unpublished papers that were available in full text, and that were written in any language, but with an English abstract.

Search strategy

We began by conducting a broad scoping search of existing reviews of critical appraisal tools for qualitative research in Google Scholar using the terms “critical appraisal OR quality AND qualitative”. We identified four reviews, the most recent of which focussed on checklists used within health sciences and was published in 2016 (search conducted in 2010) [ 34 ]. We included critical appraisal tools identified by these four previous reviews if they met the inclusion criteria described above [ 22 , 23 , 24 , 32 ]. We proceeded to search systematically in health and medical databases for checklists published after 2010 (so as not to duplicate the most recent review described above). Since we were not aware of any review which searched specifically for checklists used in the social sciences, we extended our search in social sciences databases backwards to 2006. We chose this date as our initial reading had suggested that development of critical appraisal within the social science field was insufficiently mature before 2006, and considered that any exceptions would be identified through searching reference lists of identified studies. We also searched references of identified relevant papers and contacted methodological experts to identify any unpublished tools.

In June 2016, we conducted a systematic literature search of Pubmed/MEDLINE, PsycInfo, CINAHL, ERIC, ScienceDirect, Social services abstracts and Web of Science databases using variations of the following search strategy: (“Qualitative research” OR “qualitative health research” OR “qualitative study” OR “qualitative studies” OR “qualitative paper” OR “qualitative papers”) AND (“Quality Assessment” OR “critical appraisal” or “internal validity” or “external validity” OR rigor or rigour) AND (Checklist or checklists or guidelines or criteria or standards) (see Additional file 1 for the complete search strategy). A Google Scholar alert for frequently cited articles and checklists was created to identify any tools published since June 2016.
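For readers who wish to automate a comparable search, the sketch below shows how the Boolean strategy above could be submitted to PubMed through the public NCBI E-utilities API. This is a minimal illustration only: the helper name `run_pubmed_search`, the date window and the retrieval cap are our assumptions, not part of the review's actual workflow, which used the databases' own interfaces.

```python
import requests

# Public NCBI E-utilities endpoint for PubMed searches.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Boolean strategy quoted from the text above.
QUERY = (
    '("Qualitative research" OR "qualitative health research" OR '
    '"qualitative study" OR "qualitative studies" OR "qualitative paper" '
    'OR "qualitative papers") AND ("Quality Assessment" OR '
    '"critical appraisal" OR "internal validity" OR "external validity" '
    'OR rigor OR rigour) AND (checklist OR checklists OR guidelines OR '
    'criteria OR standards)'
)

def run_pubmed_search(term: str, retmax: int = 500) -> list[str]:
    """Return PubMed IDs matching the Boolean strategy (illustrative helper)."""
    params = {
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": retmax,
        "datetype": "pdat",
        "mindate": "2011/01/01",  # assumed window: tools published after 2010
        "maxdate": "2016/06/30",  # search conducted in June 2016
    }
    response = requests.get(EUTILS, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    pmids = run_pubmed_search(QUERY)
    print(f"{len(pmids)} PubMed records retrieved")
```

Records retrieved this way would still need de-duplication against exports from the other databases before screening.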

Study selection

Using the Covidence web-based tool [ 35 ], two authors independently assessed titles and abstracts and then assessed the full text versions of potentially relevant checklists using the inclusion criteria described above. A third author mediated in cases of disagreement.

Data extraction

We extracted data from every included checklist related to study characteristics (title, author details, year, type of publication), checklist characteristics (intended end user (e.g. practitioner, guideline panel, review author, primary researcher, peer reviewer), discipline (e.g. health sciences, social sciences), and details regarding how the checklist was developed or how specific checklist criteria were justified). We also extracted the checklist criteria intended to be assessed within each identified checklist and any prompts, supporting questions, etc. Each checklist item/question (and supporting question/prompt) was treated as a separate data item. The data extraction form is available in Additional file 2 .
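To make the shape of the extracted data concrete, the following sketch models a single extraction record in Python. The field names merely paraphrase the items listed above, and the example contents are invented; the actual extraction form is the one provided in Additional file 2.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative schema only: field names paraphrase the extraction form
# described in the text; the real form is provided in Additional file 2.
@dataclass
class ChecklistRecord:
    title: str
    authors: str
    year: int
    publication_type: str               # e.g. journal article, book chapter
    intended_end_user: Optional[str]    # e.g. practitioner, review author
    discipline: Optional[str]           # e.g. health sciences, social sciences
    development_details: Optional[str]  # how the checklist was developed/justified
    # Each checklist item/question (and supporting prompt) is a separate data item.
    criteria: list[str] = field(default_factory=list)

# Example record (contents invented for illustration):
example = ChecklistRecord(
    title="Appraising qualitative studies: a worked checklist",
    authors="Doe J, Roe R",
    year=2012,
    publication_type="journal article",
    intended_end_user="systematic review author",
    discipline="health sciences",
    development_details="adapted from an existing checklist",
    criteria=["Is there a clear statement of the aims of the research?"],
)
```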

Synthesis methods

We analysed the criteria included in the identified checklists using the best fit framework analysis approach [ 36 ]. We developed a framework using the ten items from the Critical Appraisal Skills Programme (CASP) Qualitative Research Checklist. We used this checklist because it is frequently used in qualitative evidence syntheses [ 28 ]. We then extracted the criteria from the identified checklists and charted each checklist question or criterion into one of the themes in the framework. We expanded the initial framework to accommodate any coded criteria that did not fit into an existing framework theme. Finally, we tabulated the frequency of each theme across the identified checklists (the number of checklists for which a theme was mentioned as a checklist criterion). The themes, which are derived from the expanded CASP framework, could be viewed as a set of overarching criterion statements based on synthesis of the multiple criteria found in the included tools. However, for simplicity we use the term ‘theme’ to describe each of these analytic groups.
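The tabulation step can be pictured as a simple counting exercise over the coded data. The sketch below, with invented checklist names and themes, counts how many checklists contributed at least one criterion to each framework theme; it illustrates the logic only and is not the software actually used in the synthesis.

```python
from collections import Counter

# Invented coded data: each checklist maps to the set of framework themes
# into which at least one of its criteria was coded.
coded_checklists = {
    "Checklist A": {"clear statement of aims", "rigorous data analysis"},
    "Checklist B": {"appropriate data collection", "rigorous data analysis"},
    "Checklist C": {"reflexivity", "rigorous data analysis"},
}

# Theme frequency = number of checklists contributing at least one criterion
# to the theme, i.e. the tabulation described in the text.
theme_frequency = Counter(
    theme for themes in coded_checklists.values() for theme in themes
)

for theme, n_checklists in theme_frequency.most_common():
    print(f"{theme}: mentioned in {n_checklists} checklists")
```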

In this paper, we use the terms “checklist” and “critical appraisal tools” interchangeably. The term “guidance” however is defined differently within the context of this review, and is discussed in the discussion section below. The term “checklist criteria” refers to criteria that authors have included in their critical appraisal tools. The term “theme” refers to the 22 framework themes that we have developed in this synthesis and into which the criteria from the individual checklists were sorted. The term “cod(e)/ing” refers to the process of sorting the checklist criteria within the framework themes.

Results

Our systematic search resulted in 7199 unique references. We read the full papers for 310 of these, and included 31 checklists that met the inclusion criteria. We also included 71 checklists from previous reviews that met our inclusion criteria. A total of 102 checklists were described in 100 documents [ 22 , 23 , 24 , 26 , 37 – 132 ] (see Fig. 1 ). A list of the checklists is included in Additional file 3 . One publication described three checklists (Silverman 2008; [ 119 ]).

Characteristics of the included checklists

New critical appraisal tools appear to be published at an increasing rate (see Fig.  2 ). Approximately 80% of the identified tools have been published since 2000.

Fig. 2 Identified critical appraisal tools, sorted by publication year (reference list of critical appraisal tools included in this mapping review)

Critical appraisal tool development

Approximately half of the articles describing critical appraisal tools did not report how the tools were developed, or this was unclear ( N  = 53). Approximately one third of tools were based on a review and synthesis of existing checklists ( N  = 33), or adapted directly from one or more existing checklists ( N  = 10). The other checklists were developed using a Delphi survey method or consultation with methodologists or practitioners ( N  = 4), a review of criteria used by journal peer reviewers (N = 1), or using a theoretical approach (N = 1).

Health or social welfare field

We attempted to sort the checklists according to the source discipline (field) in which they were developed (e.g. health services or social welfare services). In some cases this was apparent from the accompanying article, or from the checklist criteria, but in many cases we based our assessment on the authors’ affiliations and the journal in which the checklist was published. The majority of checklists were developed by researchers in the field of health care ( N  = 60). The remaining checklists appear to have been developed within health and/or social care ( N  = 2), education (N = 2), social care ( N  = 4), or other fields ( N  = 8). Many publications either did not specify any field, or it was unclear within which field the checklist was developed ( N  = 26).

Intended end user

It was unclear who the intended end user was (e.g., policy maker, clinician/practitioner, primary researcher, systematic review author, or peer reviewer) for many of the checklists ( N  = 34). Of the checklists where the intended end user was implied or discussed, ten were intended for primary authors and peer reviewers, and ten were intended for peer reviewers alone. Seventeen checklists were intended to support practitioners in reading/assessing the quality of qualitative research, and 17 were intended for use by primary researchers to improve their qualitative research. Ten checklists were intended for use by systematic review authors, two for use by primary research authors and systematic review authors, and two were intended for students appraising qualitative research.

Checklist versus guidance

The critical appraisal tools that we identified appeared to vary greatly in how explicit the included criteria were and the extent of accompanying guidance and supporting questions for the end user. Below we discuss the differences between checklists and guidance with examples from the identified tools.

In the typology described by Hammersley (2007), the term "checklist" describes a tool that provides the user with observable indicators to establish (along with other criteria) whether or not the findings of a study are valid, or are of value. Such tools tend to be quite explicit and comprehensive; furthermore, the checklist criteria usually relate to research conduct and may be intended for people unfamiliar with critically appraising qualitative research [ 8 ]. The tool described in Sandelowski (2007) is an example of such a checklist [ 115 ].

Other tools may be intended to be used as guidance, with a list of considerations or reminders that are open to revision when being applied [ 8 ]. Such tools are less explicit. The tool described by Carter (2007) is such an example, where the focus on a fundamental appraisal of methods and methodology seems directed at experienced researchers [ 48 ].

Results of the framework synthesis

Through our framework synthesis we have categorised the criteria included in the 102 identified critical appraisal tools into 22 themes. The themes represent a best effort at translating many criteria, worded in different ways, into a common set of groupings. Given the diversity in how critical appraisal tools are organized (e.g. broad versus narrow questions), not all of the themes are mutually exclusive (e.g. some criteria are included in more than one theme if they address two different themes), and some themes are broad and include a wide range of criteria from the included critical appraisal tools (e.g. "Was the data collected in a way that addressed the research issue?" represents any criterion from an included critical appraisal tool that discussed data collection methods). In Table  2 , we present the number of criteria from critical appraisal tools that relate to each theme. None of the included tools contributed criteria to all 22 themes.

Framework themes: design and/or conduct of qualitative research

The majority of the framework themes relate to the design and conduct of a qualitative research study. However, some themes overlap with, or relate to, what are conventionally considered to be reporting standards. The first reporting standards for primary qualitative research were not published until 2007, and many of the appraisal tools predate this and include a mix of methodological quality criteria and reporting standards [ 23 ]. The current project did not aim to distinguish or discuss which criteria relate to critical appraisal versus reporting standards. However, we discuss the ramifications of this blurry distinction below.

Breadth of framework themes

Some themes represent a wide range of critical appraisal criteria. For example, the theme “Was the data analysis sufficiently rigorous?” includes checklist criteria related to several different aspects of data analysis: (a) whether the researchers provide in-depth description of the analysis process, (b) whether the researchers discuss how data were selected for presentation, (c) if data were presented to support the finding, and (d) whether or not disconfirming cases are discussed. On the other hand, some of the themes cover a narrower breadth of criteria. For example, the theme “Have ethical issues been taken into consideration?” only includes checklist criteria related to whether the researchers have sought ethical approval, informed participants about their rights, or considered the needs of vulnerable participants. The themes differ in terms of breadth mainly because of how the original coding framework was structured. Some of the themes from the original framework were very specific and could be addressed by seeking one or two pieces of information from a qualitative study (e.g., Is this a qualitative study?). Other themes from the original framework were broad and a reader would need to seek multiple pieces of information in order to make a clear assessment (e.g., Was the data collected in a way that addressed the research issue?).

Scope of existing critical appraisal tools

We coded many of the checklist criteria as relevant to multiple themes. For example, one checklist criterion was: “Criticality - Does the research process demonstrate evidence of critical appraisal” [ 128 ]. We interpreted and coded this criterion as relevant to two themes: “Was the data analysis sufficiently rigorous” and “Is there a clear statement of findings?”. On the other hand, several checklists also contained multiple criteria related to one theme. For instance, one checklist (Waterman 2010; [ 127 ]) included two separate questions related to the theme “Was the data collected in a way that addressed the research issue?” (Question 5: Was consideration given to the local context while implementing change? Is it clear which context was selected, and why, for each phase of the project? Was the context appropriate for this type of study? And Question 11: Were data collected in a way that addressed the research issue? Is it clear how data were collected, and why, for each phase of the project? Were data collection and record-keeping systematic? If methods were modified during data collection is an explanation provided?) [ 127 ]. A further example relates to reflexivity. The majority of critical appraisal tools include at least one criterion or question related to reflexivity ( N  = 71). Reflexivity was discussed with respect to the researcher’s relationship with participants, their potential influence on data collection methods and the setting, as well as the influence of their epistemological or theoretical perspective on data analysis. We grouped all criteria that discussed reflexivity into one theme.

The growing number of critical appraisal tools for qualitative research reflects increasing recognition of the value and use of qualitative research methods and their value in informing decision making. More checklists have been published in the last six years than in the preceding decade. However, upon closer inspection, many recent checklists are published adaptations of existing checklists, possibly tailored to a specific research question, but without any clear indication of how they improve upon the original. Below we discuss the framework themes developed from this synthesis, specifically which themes are most appropriate for critically appraising qualitative research and why, especially within the context of conducting a qualitative evidence synthesis. We will also discuss differences between checklists and guidance for critical appraisal and the unclear boundaries between critical appraisal criteria and reporting standards.

Are these the best criteria to be assessing?

The framework themes we present in this paper vary greatly in terms of how well they are covered by existing tools. However, a theme's frequency is not necessarily indicative of the perceived or real importance of the group of criteria it encapsulates. Some themes appear more frequently than others in existing checklists simply due to the number of checklists that adapt or synthesise one or more existing tools. Some themes, such as "Was there disclosure of funding sources?" and "Were end users involved in the development of the research study?", were only present in a small number of tools. These themes may be as important as more commonly covered themes when assessing the methodological strengths and limitations of qualitative research. It is unclear whether some of the identified themes were included in many different tools because they actually represent important issues to consider when assessing whether elements of qualitative research design or conduct could weaken our trust in the study findings, or whether the frequency of a theme simply reflects a shared familiarity with concepts and assumptions about what constitutes or leads to rigor in qualitative research.

Only four of the identified critical appraisal tools were developed with input from stakeholders using consensus methods, and even for these it is unclear how consensus was reached, or what it was based on. In more than half of the studies there was no discussion of how the tool was developed. None of the identified critical appraisal tools appear to be based on empirical evidence or explicit hypotheses regarding the relationships between components of qualitative study design and conduct and the trustworthiness of the study findings. This is in direct contrast to the discussion by Whiting and colleagues (2017) of how to develop quality assessment tools: "[r]obust tools are usually developed based on empirical evidence refined by expert consensus" [ 133 ]. A concerted and collaborative effort is needed in the field to begin thinking about why some criteria are included in critical appraisal tools, what is currently known about how failing to meet these criteria can weaken the rigour of qualitative research, and whether there are specific approaches that strengthen data collection and analysis processes.

Methodological limitations: assessing individual studies versus individual findings

Thus far, critical appraisal tools have focused on assessing the methodological strengths and limitations of individual studies and the reviews of critical appraisal tools that we identified took the same approach. This mapping review is the first phase of a larger research project to consider how best to assess methodological limitations in the context of qualitative evidence syntheses. In this context, review authors need to assess the methodological “quality” of all studies contributing to a review finding, and also whether specific limitations are of concern for a particular finding as “individual features of study design may have implications for some of those review findings, but not necessarily other review findings” [ 134 ]. The ultimate aim of this research project is to identify, or develop if necessary, a critical appraisal tool to systematically and transparently support the assessment of the methodological limitations component of the GRADE-CERQual approach (see Fig.  3 ), which focuses on how much confidence can be placed in individual qualitative evidence synthesis findings.

Fig. 3 Process of identifying/developing a tool to support assessment of the GRADE-CERQual methodological limitations component (Cochrane qualitative Methodological Limitations Tool; CAMELOT). The research described in this article addresses phase 1 of this project

Critical appraisal versus reporting standards

While differences exist between criteria for assessing methodological strengths and limitations and criteria for assessing the reporting of research, the difference between these two aims, and the tools used to assess them, is not always clear. As Moher and colleagues (2014) point out, "[t]his distinction is, however, less straightforward for systematic reviews than for assessments of the reporting of an individual study, because the reporting and conduct of systematic reviews are, by nature, closely intertwined" [ 135 ]. Review authors are sometimes unable to differentiate poor reporting from poor design or conduct of a study. Although current guidance recommends a focus on criteria related to assessing methodological strengths and limitations when choosing a critical appraisal tool (see discussion in the introduction), deciding what is a methodological issue versus a reporting issue is not always straightforward: "without a clear understanding of how a study was done, readers are unable to judge whether the findings are reliable" [ 135 ]. The themes identified in the current framework synthesis illustrate this point: while many themes clearly relate to the design and conduct of qualitative research, some themes could also be interpreted as relating to reporting standards (e.g., "Was there disclosure of funding sources?"; "Is there an audit trail?"). At least one theme, 'Reporting standards (including demographic characteristics of the study)', would not be considered key to an assessment of the methodological strengths and limitations of qualitative research.

Finally, the unclear distinction between critical appraisal and reporting standards is demonstrated by the description of one of the tools included in this synthesis [ 96 ]. This tool is called Standards for Reporting Qualitative Research (SRQR); however, it is based on a review of critical appraisal criteria from previously published instruments, and it concludes that the proposed standards will provide "clear standards for reporting qualitative research" and assist "readers when critically appraising […] study findings" ([ 96 ], p. 1245).

Reporting standards are being developed separately and discussion of these is beyond the remit of this paper [ 136 ]. However, when developing critical appraisal tools, one must be aware that some criteria or questions may also relate to reporting, and ensure that the same criteria are not used to assess both the methodological strengths and limitations and the reporting quality of a publication.

Intended audience

This review included any critical appraisal tool intended for application to qualitative research, regardless of the intended end user. The type of end user targeted by a critical appraisal tool could have implications for the tool’s content and form. For instance, tools designed for practitioners who are applying the findings from an individual study to their specific setting may focus on different criteria than tools designed for primary researchers undertaking qualitative research. However, since many of the included critical appraisal tools did not identify the intended end user, it is difficult to establish any clear patterns between the content of the critical appraisal tools and the audience for which the tool was intended. It is also unclear whether or not separate critical appraisal tools are needed for different audiences, or whether one flexible appraisal tool would suffice. Further research and user testing is needed with existing critical appraisal tools, including those under development.

Tools or guidance intended to support primary researchers in establishing rigour when undertaking qualitative research were not included in this mapping and analysis. This is because guidance for primary research authors on how to design and conduct high quality qualitative research focuses on how to apply methods in the best and most appropriate manner. Critical appraisal tools, however, are instruments used to fairly and rapidly assess the methodological strengths and limitations of a study post hoc. For these reasons, the included critical appraisal tools that appear to target primary researchers as end users may be less relevant than other identified tools for the aims of this project.

Lessons from the development of quantitative research tools on risk of bias

While the fundamental purposes and principles of qualitative and quantitative research may differ, many principles from the development of the Cochrane Risk of Bias tool transfer to the development of a tool for the critical appraisal of qualitative research. These principles include avoiding quality scales (e.g. summary scores), focusing on internal validity, considering limitations as they relate to individual results (findings), the need to use judgment in making assessments, choosing domains that combine theoretical and empirical considerations, and focusing on limitations as represented in the research (as opposed to the quality of reporting) [ 31 ]. Further development of a tool in the context of qualitative evidence synthesis and GRADE-CERQual needs to take these principles into account, and lessons learned during this process may be valuable for the development of future critical appraisal or risk of bias tools.

Further research

As discussed earlier, CERQual is intended to be applied to individual findings from qualitative evidence syntheses with a view to informing decision making, including in the context of guidelines and health systems guidance [ 137 ]. Our framework synthesis has uncovered three important issues to consider when critically appraising qualitative research in order to support an assessment of confidence in review findings from qualitative evidence syntheses. First, since no existing critical appraisal tool describes an empirical basis for including specific criteria, we need to begin to identify and explore the empirical and theoretical evidence for the framework themes developed in this review. Second, we need to consider whether the identified themes are appropriate for critical appraisal within the specific context of the findings of qualitative evidence syntheses. Third, some of the themes from the framework synthesis relate more to research reporting standards than to research conduct. As we plan to focus only on themes related to research conduct, we need to reach consensus on which themes relate to research conduct and which relate to reporting (see Fig.  3 ).

Conclusions

Currently, more than 100 critical appraisal tools exist for qualitative research. This reflects an increasing recognition of the value of qualitative research. However, none of the identified critical appraisal tools appear to be based on empirical evidence or clear hypotheses related to how specific elements of qualitative study design or conduct influence the trustworthiness of study findings. Furthermore, the target audience for many of the checklists is unclear (e.g., practitioners or review authors), and many identified tools also include checklist criteria related to reporting quality of primary qualitative research. Existing critical appraisal tools for qualitative studies are thus not fully fit for purpose in supporting the methodological limitations component of the GRADE-CERQual approach. Given the number of tools adapted from previously produced tools, the frequency count for framework concepts in this framework synthesis does not necessarily indicate the perceived or real importance of each concept. More work is needed to prioritise checklist criteria for assessing the methodological strengths and limitations of primary qualitative research, and to explore the theoretical and empirical basis for the inclusion of criteria.

Abbreviations

CASP: Critical Appraisal Skills Programme

CINAHL: The Cumulative Index to Nursing and Allied Health Literature database

ERIC: Education Resources Information Center

GRADE: Grading of Recommendations Assessment, Development, and Evaluation

GRADE-CERQual: Confidence in the Evidence from Reviews of Qualitative research

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

QIMG: Cochrane Qualitative & Implementation Methods Group

Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1:28.

Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database Syst Rev. 2013;(10):CD010414.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin C, Gülmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.


Guyatt G, Oxman A, Kunz R, Vist G, Falck-Ytter Y, Schunemann H; GRADE Working Group. What is "quality of evidence" and why is it important to clinicians? BMJ. 2008;336:995–8.

Lewin S, Booth A, Bohren M, Glenton C, Munthe-Kaas HM, Carlsen B, Colvin CJ, Tuncalp Ö, Noyes J, Garside R, et al. Applying the GRADE-CERQual approach (1): introduction to the series. Implement Sci. 2018;13(Suppl 1):2.

Katrak P, Bialocerkowski A, Massy-Westropp N, Kumar V, Grimmer K. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol. 2004;4:22.

Denzin N. Qualitative inquiry under fire: toward a new paradigm dialogue. USA: Left Coast Press; 2009.


Hammersley M. The issue of quality in qualitative research. International Journal of Research & Method in Education. 2007;30(3):287–305.


Smith J. The problem of criteria for judging interpretive inquiry. Educ Eval Policy Anal. 1984;6(4):379–91.

Smith J, Deemer D. The problem of criteria in the age of relativism. In: Densin N, Lincoln Y, editors. Handbook of Qualitative Research. London: Sage Publication; 2000.

Noyes J, Booth A, Flemming K, Garside R, Harden A, Lewin S, Pantoja T, Hannes K, Cargo M, Thomas J. Cochrane qualitative and implementation methods group guidance series—paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. J Clin Epidemiol. 2018;97:49–58.

Soilemezi D, Linceviciute S. Synthesizing qualitative research: reflections and lessons learnt by two new reviewers. Int J Qual Methods. 2018;17(1):160940691876801.

Carroll C, Booth A. Quality assessment of qualitative evidence for systematic review and synthesis: is it meaningful, and if so, how should it be performed? Res Synth Methods. 2015;6(2):149–54.


Sandelowski M. A matter of taste: evaluating the quality of qualitative research. Nurs Inq. 2015;22(2):86–94.

Garside R. Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innovation: The European Journal of Social Science Research. 2013;27(1):67–79.

Barusch A, Gringeri C, George M. Rigor in qualitative social work research: a review of strategies used in published articles. Soc Work Res. 2011;35(1):11–19.

Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005;10:45–53.

Green J, Thorogood N. Qualitative methodology in health research. In: Qualitative methods for health research. 4th ed. London, UK: Sage Publications; 2018.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9:59.

Gough D, Oliver S, Thomas J. An introduction to systematic reviews. London, UK: Sage; 2017.

Hannes K, Macaitis K. A move to more transparent and systematic approaches of qualitative evidence synthesis: update of a review on published papers. Qual Res. 2012;12:402–42.

Santiago-Delefosse M, Gavin A, Bruchez C, Roux P, Stephen SL. Quality of qualitative research in the health sciences: analysis of the common criteria present in 58 assessment guidelines by expert users. Soc Sci Med. 2016;148:142–51.

Article   CAS   Google Scholar  

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Walsh D, Downe S. Appraising the quality of qualitative research. Midwifery. 2006;22(2):108–19.

Dixon-Woods M, Sutton M, Shaw RL, Miller T, Smith J, Young B, Bonas S, Booth A, Jones D. Appraising qualitative research for inclusion in systematic reviews: a quantitative and qualitative comparison of three methods. Journal of Health Services Research & Policy. 2007;12(1):42–7.

Long AF, Godfrey M. An evaluation tool to assess the quality of qualitative research studies. Int J Soc Res Methodol. 2004;7(2):181–96.

Popay J, Rogers A, Williams G. Rationale and Standards for the Systematic Review of Qualitative Literature in Health Services Research. Qual Health Res. 1998:8(3).

Article   CAS   PubMed   Google Scholar  

Dalton J, Booth A, Noyes J, Sowden A. Potential value of systematic reviews of qualitative evidence in informing user-centered health and social care: findings from a descriptive overview. J Clin Epidemiol. 2017;88:37–46.

Lundh A, Gøtzsche P. Recommendations by Cochrane Review Groups for assessment of the risk of bias in studies. BMC Med Res Methodol. 2008;8(22).

Higgins J, Sterne J, Savović J, Page M, Hróbjartsson A, Boutron I, Reeves B, Eldridge S: A revised tool for assessing risk of bias in randomized trials In: Cochrane Methods. Edited by J. C, McKenzie J, Boutron I, Welch V, vol. 2016: Cochrane Database Syst Rev 2016.

Higgins J, Altman D, Gøtzsche P, Jüni P, Moher D, Oxman A, Savović J, Schulz K, Weeks L, Sterne J. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;18(343):d5928.

Crowe M, Sheppard L. A review of critical appraisal tools show they lack rigor: alternative tool structure is proposed. J Clin Epidemiol. 2011;64(1):79–89.

Majid U, Vanstone M. Appraising qualitative research for evidence syntheses: a compendium of quality appraisal tools Qualitative Health Research; 2018.

Santiago-Delefosse M, Bruchez C, Gavin A, Stephen SL. Quality criteria for qualitative research in health sciences. A comparative analysis of eight grids of quality criteria in psychiatry/psychology and medicine. Evolution Psychiatrique. 2015;80(2):375–99.

Covidence systematic review software.

Carroll C, Booth A, Leaviss J, Rick J. "best fit" framework synthesis: refining the method. BMC Med Res Methodol. 2013;13:37.

Methods for the development of NICE public health guidance (third edition): Process and methods. In. UK: National Institute for Health and Care Excellence; 2012.

Anderson C. Presenting and evaluating qualitative research. Am J Pharm Educ. 2010;74(8):141.

Baillie L: Promoting and evaluating scientific rigour in qualitative research. Nursing standard (Royal College of Nursing (Great Britain) : 1987) 2015, 29(46):36–42.

Ballinger C. Demonstrating rigour and quality? In: LFCB, editor. Qualitative research for allied health professionals: Challenging choices. Chichester, England: J. Wiley & Sons; 2006. p. 235–46.

Bleijenbergh I, Korzilius H, Verschuren P. Methodological criteria for the internal validity and utility of practice oriented research. Qual Quant. 2011;45(1):145–56.

Boeije HR, van Wesel F, Alisic E. Making a difference: towards a method for weighing the evidence in a qualitative synthesis. J Eval Clin Pract. 2011;17(4):657–63.

Boulton M, Fitzpatrick R, Swinburn C. Qualitative research in health care: II. A structured review and evaluation of studies. J Eval Clin Pract. 1996;2(3):171–9.

Britton N, Jones R, Murphy E, Stacy R. Qualitative research methods in general practice and primary care. Fam Pract. 1995;12(1):104–14.

Burns N. Standards for qualitative research. Nurs Sci Q. 1989;2(1):44–52.

Caldwell K, Henshaw L, Taylor G. Developing a framework for critiquing health research: an early evaluation. Nurse Educ Today. 2011;31(8):e1–7.

Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J. Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003;56(4):671–84.

Carter S, Little M. Justifying knowledge, justifying method, taking action: epistemologies, methodologies, and methods in qualitative research. Qual Health Res. 2007;17(10):1316–28.

Cesario S, Morin K, Santa-Donato A. Evaluating the level of evidence of qualitative research. J Obstet Gynecol Neonatal Nurs. 2002;31(6):708–14.

Cobb AN, Hagemaster JN. Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987;26(4):138–43.

CAS   PubMed   Google Scholar  

Cohen D, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. The Annals of Family Medicine. 2008;6(4):331–9.

Cooney A: Rigour and grounded theory. Nurse Researcher 2011, 18(4):17–22 16p.

Côté L, Turgeon J. Appraising qualitative research articles in medicine and medical education. Medical Teacher. 2005;27(1):71–5.

Creswell JW. Qualitative procedures. Research design: qualitative, quantitative, and mixed method approaches (2nd ed.). Thousand Oaks, CA: Sage Publications; 2003.

10 questions to help you make sense of qualitative research.

Crowe M, Sheppard L. A general critical appraisal tool: an evaluation of construct validity. Int J Nurs Stud. 2011;48(12):1505–16.

Currie G, McCuaig C, Di Prospero L. Systematically Reviewing a Journal Manuscript: A Guideline for Health Reviewers. Journal of Medical Imaging and Radiation Sciences. 2016;47(2):129–138.e123.

Curtin M, Fossey E. Appraising the trustworthiness of qualitative studies: guidelines for occupational therapists. Aust Occup Ther J. 2007;54:88–94.

Cyr J. The pitfalls and promise of focus groups as a data collection method. Sociol Methods Res. 2016;45(2):231–59.

Dixon-Woods M, Shaw RL, Agarwal S, Smith JA. The problem of appraising qualitative research. Quality and Safety in Health Care. 2004;13(3):223–5.

Article   CAS   PubMed   PubMed Central   Google Scholar  

El Hussein M, Jakubec SL, Osuji J. Assessing the FACTS: a mnemonic for teaching and learning the rapid assessment of rigor in qualitative research studies. Qual Rep. 2015;20(8):1182–4.

Elder NC, Miller WL. Reading and evaluating qualitative research studies. J Fam Pract. 1995;41(3):279–85.

Elliott R, Fischer CT, Rennie DL. Evolving guidelines for publication of qualitative research studies in psychology and related fields. Br J Clin Psychol. 1999;38(3):215–29.

Farrell SE, Kuhn GJ, Coates WC, Shayne PH, Fisher J, Maggio LA, Lin M. Critical appraisal of emergency medicine education research: the best publications of 2013. Acad Emerg Med Off J Soc Acad Emerg Med. 2014;21(11):1274–83.

Fawkes C, Ward E, Carnes D. What evidence is good evidence? A masterclass in critical appraisal. International Journal of Osteopathic Medicine. 2015;18(2):116–29.

Forchuk C, Roberts J. How to critique qualitative research articles. Can J Nurs Res. 1993;25(4):47–56.

Forman J, Crewsell J, Damschroder L, Kowalski C, Krein S. Quailtative research methods: key features and insights gained from use in infection prevention research. Am J Infect Control. 2008;36(10):764–71.

Fossey E, Harvey C, McDermott F, Davidson L. Understanding and evaluating qualitative research. Aust N Z J Psychiatry. 2002;36(6):717–32.

Fujiura GT. Perspectives on the publication of qualitative research. Intellectual and Developmental Disabilities. 2015;53(5):323–8.

Greenhalgh T, Taylor R. How to read a paper: papers that go beyond numbers (qualitative research). BMJ. 1997;315(7110):740–3.

Greenhalgh T, Wengraf T. Collecting stories: is it research? Is it good research? Preliminary guidance based on a Delphi study. Med Educ. 2008;42(3):242–7.

Gringeri C, Barusch A, Cambron C. Examining foundations of qualitative research: a review of social work dissertations, 2008-2010. J Soc Work Educ. 2013;49(4):760–73.

Hoddinott P, Pill R. A review of recently published qualitative research in general practice. More methodological questions than answers? Fam Pract. 1997;14(4):313–9.

Inui T, Frankel R. Evaluating the quality of qualitative research: a proposal pro tem. J Gen Intern Med. 1991;6(5):485–6.

Jeanfreau SG, Jack L Jr. Appraising qualitative research in health education: guidelines for public health educators. Health Promot Pract. 2010;11(5):612–7.

Kitto SC, Chesters J, Grbich C. Quality in qualitative research: criteria for authors and assessors in the submission and assessment of qualitative research articles for the medical journal of Australia. Med. J. Aust. 2008;188(4):243–6.

PubMed   Google Scholar  

Kneale J, Santry J. Critiquing qualitative research. J Orthop Nurs. 1999;3(1):24–32.

Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ. 2008;337:687–92.

Lane S, Arnold E. Qualitative research: a valuable tool for transfusion medicine. Transfusion. 2011;51(6):1150–3.

Lee E, Mishna F, Brennenstuhl S. How to critically evaluate case studies in social work. Res Soc Work Pract. 2010;20(6):682–9.

Leininger M: Evaluation criteria and critique of qualitative research studies. In: Critical issues in qualitative research methods. edn. Edited by (Ed.) JM. Thousand Oaks, CA.: Sage Publications; 1993: 95–115.

Leonidaki V. Critical appraisal in the context of integrations of qualitative evidence in applied psychology: the introduction of a new appraisal tool for interview studies. Qual Res Psychol. 2015;12(4):435–52.

Critical review form - Qualitative studies (Version 2.0).

Lincoln Y, Guba E. Establishing trustworthiness. In: YLEG, editor. Naturalistic inquiry. Newbury Park, CA: Sage Publications; 1985. p. 289–331.

Long A, Godfrey M, Randall T, Brettle A, Grant M. Developing evidence based social care policy and practic. Part 3: Feasibility of undertaking systematic reviews in social care. In: University of Leeds (Nuffield Institute for Health) and University of Salford (Health Care Practice R&D Unit); 2002.

Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358(9280):483–8.

Manuj I, Pohlen TL. A reviewer's guide to the grounded theory methodology in logistics and supply chain management research. International Journal of Physical Distribution & Logistics Management. 2012;42(8–9):784–803.

Marshall C, Rossman GB. Defending the value and logic of qualitative research. In: Designing qualitative research. Newbury Park, CA: Sage Publications; 1989.

Mays N, Pope C. Qualitative research: Rigour and qualitative research. BMJ. 1995;311:109–12.

Mays N, Pope C. Qualitative research in health care: Assessing quality in qualitative research. BMJ. 2000;320(50–52).

Meyrick J. What is good qualitative research? A first step towards a comprehensive approach to judging rigour/quality. J Health Psychol. 2006;11(5):799–808.

Miles MB, Huberman AM: Drawing and verifying conclusions. In: Qualitative data analysis: An expanded sourcebook (2nd ed). edn. Thousand Oaks, CA: Sage Publications; 1997: 277–280.

Morse JM. A review committee's guide for evaluating qualitative proposals. Qual Health Res. 2003;13(6):833–51.

Nelson A. Addressing the threat of evidence-based practice to qualitative inquiry through increasing attention to quality: a discussion paper. Int J Nurs Stud. 2008;45:316–22.

Norena ALP, Alcaraz-Moreno N, Guillermo Rojas J, Rebolledo Malpica D. Applicability of the criteria of rigor and ethics in qualitative research. Aquichan. 2012;12(3):263–74.

O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Academic medicine : journal of the Association of American Medical Colleges. 2014;89(9):1245–51.

O'Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. Journal of Health Services Research & Policy. 2008;13(2):92–8.

O'HEocha C, Wang X, Conboy K. The use of focus groups in complex and pressurised IS studies and evaluation using Klein & Myers principles for interpretive research. Inf Syst J. 2012;22(3):235–56.

Oliver DP. Rigor in Qualitative Research. Research on Aging, 2011;33(4):359–360 352p.

Pearson A, Jordan Z, Lockwood C, Aromataris E. Notions of quality and standards for qualitative research reporting. Int J Nurs Pract. 2015;21(5):670–6.

Peters S. Qualitative Research Methods in Mental Health. Evidence Based Mental Health. 2010;13(2):35–40 36p.

Guidelines for Articles. Canadian Family Physician.

Plochg T. Van Zwieten M (eds.): guidelines for quality assurance in health and health care research: qualitative research. Qualitative Research Network AMCUvA: Amsterdam, NL; 2002.

Proposal: A mixed methods appraisal tool for systematic mixed studies reviews.

Poortman CL, Schildkamp K. Alternative quality standards in qualitative research? Quality & Quantity: International Journal of Methodology. 2012;46(6):1727–51.

Popay J, Williams G. Qualitative research and evidence-based healthcare. J R Soc Med. 1998;91(35):32–7.

Ravenek MJ, Rudman DL. Bridging conceptions of quality in moments of qualitative research. Int J Qual Methods. 2013;12:436–56.

Rice-Lively ML. Research proposal evaluation form: qualitative methodology. In., vol. 2016. https://www.ischool.utexas.edu/~marylynn/qreval.html UT School of. Information. 1995.

Rocco T. Criteria for evaluating qualitative studies. Human Research Development International. 2010;13(4):375–8.

Rogers A, Popay J, Williams G, Latham M: Part II: setting standards for qualitative research: the development of markers. In: Inequalities in health and health promotion: insights from the qualitative research literature edn. London: Health Education Authority; 1997: 35–52.

Rowan M, Huston P. Qualitative research articles: information for authors and peer reviewers. Canadian Meidcal Association Journal. 1997;157(10):1442–6.

CAS   Google Scholar  

Russell CK, Gregory DM. Evaluation of qualitative research studies. Evid Based Nurs. 2003;6(2):36–40.

Ryan F, Coughlan M, Cronin P. Step-by-step guide to critiquing research. Part 2: qualitative research. Br J Nurs. 2007;16(12):738–44.

Salmon P. Assessing the quality of qualitative research. Patient Educ Couns. 2013;90(1):1–3.

Sandelowski M, Barroso J. Appraising reports of qualitative studies. In: Handbook for synthesizing qualitative research. New York: Springer; 2007. p. 75–101.

Savall H, Zardet V, Bonnet M, Péron M. The emergence of implicit criteria actualy used by reviewers of qualitative research articles. Organ Res Methods. 2008;11(3):510–40.

Schou L, Hostrup H, Lyngso EE, Larsen S, Poulsen I. Validation of a new assessment tool for qualitative research articles. J Adv Nurs. 2012;68(9):2086–94.

Shortell S. The emergence of qualitative methods in health services research. Health Serv Res. 1999;34(5 Pt 2):1083–90.

CAS   PubMed   PubMed Central   Google Scholar  

Silverman D, Marvasti A. Quality in Qualitative Research. In: Doing Qualitative Research: A Comprehensive Guide. Thousand Oaks, CA: Sage Publications; 2008. p. 257–76.

Sirriyeh R, Lawton R, Gardner P, Armitage G. Reviewing studies with diverse designs: the development and evaluation of a new tool. J Eval Clin Pract. 2012;18(4):746–52.

Spencer L, Ritchie J, Lewis JR, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. In. London: Government Chief Social Researcher's Office; 2003.

Stige B, Malterud K, Midtgarden T. Toward an agenda for evaluation of qualitative research. Qual Health Res. 2009;19(10):1504–16.

Stiles W. Evaluating qualitative research. Evidence-Based Mental Health. 1999;4(2):99–101.

Storberg-Walker J. Instructor's corner: tips for publishing and reviewing qualitative studies in applied disciplines. Hum Resour Dev Rev. 2012;11(2):254–61.

Tracy SJ. Qualitative quality: eight "big-tent" criteria for excellent qualitative research. Qual Inq. 2010;16(10):837–51.

Treloar C, Champness S, Simpson PL, Higginbotham N. Critical appraisal checklist for qualitative research studies. Indian J Pediatr. 2000;67(5):347–51.

Waterman H, Tillen D, Dickson R, De Konig K. Action research: a systematic review and guidance for assessment. Health Technol Assess. 2001;5(23):43–50.

Whittemore R, Chase SK, Mandle CL. Validity in qualitative research. Qual Health Res. 2001;11(4):522–37.

Yardley L. Dilemmas in qualitative health research. Psychol Health. 2000;15(2):215–28.

Yarris LM, Juve AM, Coates WC, Fisher J, Heitz C, Shayne P, Farrell SE. Critical appraisal of emergency medicine education research: the best publications of 2014. Acad Emerg Med Off J Soc Acad Emerg Med. 2015;22(11):1327–36.

Zingg W, Castro-Sanchez E, Secci FV, Edwards R, Drumright LN, Sevdalis N, Holmes AH. Innovative tools for quality assessment: integrated quality criteria for review of multiple study designs (ICROMS). Public Health. 2016;133:19–37.

Zitomer MR, Goodwin D. Gauging the quality of qualitative research in adapted physical activity. Adapt Phys Act Q. 2014;31(3):193–218.

Whiting P, Wolff R, Mallett S, Simera I, Savović J. A proposed framework for developing quality assessment tools. Systematic Reviews. 2017:6(204).

Munthe-Kaas H, Bohren M, Glenton C, Lewin S, Noyes J, Tuncalp Ö, Booth A, Garside R, Colvin C, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings - paper 3: how to assess methodological limitations. Implementation Science In press.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Steward L, Group. TP-P. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews. 2014:4(1).

Hannes K, Heyvært M, Slegers K, Vandenbrande S, Van Nuland M. Exploring the Potential for a Consolidated Standard for Reporting Guidelines for Qualitative Research: An Argument Delphi Approach. International Journal of Qualitative Methods. 2015;14(4):1–16.

Bosch-Caplanch X, Lavis J, Lewin S, Atun R, Røttingen J-A, al. e: Guidance for evidence-informed policies about health systems: Rationale for and challenges of guidance development. PloS Medicine 2012, 9(3):e1001185.

Download references

Acknowledgements

We would like to acknowledge Susan Maloney who helped with abstract screening.

This study received funding from the Cochrane Collaboration Methods Innovation Fund 2: 2015–2018. SL receives additional funding from the South African Medical Research Council. The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing te manuscript.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and affiliations.

Norwegian Institute of Public Health, Oslo, Norway

Heather Menzies Munthe-Kaas, Claire Glenton & Simon Lewin

School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

School of Social Sciences, Bangor University, Bangor, UK

Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa

Simon Lewin

You can also search for this author in PubMed   Google Scholar

Contributions

HMK, CG and SL designed the study. AB designed the systematic search strategy. HMK conducted the search. HMK, CG, SL, JN and AB screened abstracts and full text. HMK extracted data and CG checked the extraction. HMK, CG and SL conducted the framework analysis. HMK drafted the article. HMK wrote the manuscript and all authors provided feedback and approved the manuscript for publication.

Corresponding author

Correspondence to Heather Menzies Munthe-Kaas .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

HMK, CG, SL, JN and AB are co-authors of the GRADE-CERQual approach.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:.

Search strategy. (DOCX 14 kb)

Additional file 2:

Data extraction form. (DOCX 14 kb)

Additional file 3:

List of included critical appraisal tools. (DOCX 25 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Munthe-Kaas, H.M., Glenton, C., Booth, A. et al. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Med Res Methodol 19 , 113 (2019). https://doi.org/10.1186/s12874-019-0728-6

Download citation

Received : 08 December 2017

Accepted : 09 April 2019

Published : 04 June 2019

DOI : https://doi.org/10.1186/s12874-019-0728-6

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Methodological limitations
  • Qualitative research
  • Qualitative evidence synthesis
  • Systematic mapping
  • Framework synthesis

BMC Medical Research Methodology

ISSN: 1471-2288

Development of a Systematic Research Proposal and Use of Appropriate Research Tools


Systematic Reviews and Related Evidence Syntheses: Proposal


Getting started with a review proposal

The proposal stage is the most important step of a review project, as it establishes the review's rationale and determines its feasibility.

The steps are: 

1. Determine the review question and review type.

  • Right Review: a free tool to assist in selecting the best review type for a given question
  • Trying to choose between a scoping and a systematic review? Try this article: Munn Z, Peters MDJ, Stern C, et al. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 18, 143 (2018). https://doi.org/10.1186/s12874-018-0611-x
  • This article provides 10 different types of questions that systematic reviews can answer: Munn Z, Stern C, Aromataris E, et al. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 18, 5 (2018).
  • For scoping reviews, the framework is: Population, Concept, Context (see the JBI Scoping Review guide)

2. Search for reviews related to the proposed question. Places to search include:

  • PROSPERO: a database of protocols for systematic reviews, umbrella reviews, and rapid reviews with human health outcomes
  • Open Science Framework: an open-access registry for any type of research, including scoping reviews and more
  • Cochrane Collaboration Handbook: systematic reviews of clinical interventions
  • Campbell Collaboration: accepts many types of reviews across many disciplines: Business and Management, Crime and Justice, Disability, Education, International Development, Knowledge Translation and Implementation, Methods, Nutrition, and Social Welfare
  • Collaboration for Environmental Evidence: reviews in environmental research
  • Systematic Reviews for Animals & Food (SYREAF): protocols and reviews on animals and food science related to animals

Also consider searching subject-related databases, adding "review" as a search concept.

3. Evaluate previous reviews for quality and compare their scope to the proposed review. The following tools can be used to appraise them:

  • ROBIS: Risk of Bias in Systematic reviews
  • AMSTAR: Assessing the Methodological Quality of Systematic Reviews, for meta-analyses
  • CASP Checklist: Critical Appraisal Skills Programme

4. Further refine the question by defining the eligibility criteria.

  • Eligibility criteria are the characteristics of the studies/research to be collected. Inclusion criteria are those characteristics a study must have to be included. Exclusion criteria are exceptions to the inclusion criteria. (One way to make such criteria operational during screening is sketched below.)
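
Eligibility criteria lend themselves to being written down in an operational, even machine-readable, form before screening begins. The sketch below is illustrative only and is not part of the guide: the field names, criteria, and records are invented to show the idea of a reproducible screen.

```python
# A minimal, hypothetical sketch: expressing inclusion/exclusion criteria as
# a programmatic screen over citation records. Field names and criteria are
# invented for illustration.

INCLUDE_DESIGNS = {"rct", "cohort"}   # inclusion: acceptable study designs
EXCLUDE_LANGS = {"non-english"}       # exclusion: language restriction

def passes_screen(record):
    if record["design"] not in INCLUDE_DESIGNS:
        return False                  # fails an inclusion criterion
    if record["language"] in EXCLUDE_LANGS:
        return False                  # meets an exclusion criterion
    return record["year"] >= 2000     # inclusion: publication window

records = [
    {"design": "rct", "language": "english", "year": 2015},
    {"design": "case report", "language": "english", "year": 2018},
]
included = [r for r in records if passes_screen(r)]
print(len(included))  # 1 record survives the screen
```

Writing the criteria this explicitly, even if screening is ultimately done by hand, forces the team to resolve ambiguities (e.g., what counts as a "cohort" study) before screening starts.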

5. Develop a preliminary search and find a few studies that match the eligibility criteria.

  • Consider working with a librarian to develop the search. The purpose is to estimate the number of citations to be sorted (giving some idea of the amount of time it will take to complete the review) and to find at least a few studies that match the criteria. A sketch of assembling such a search string programmatically follows.
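
To make a preliminary search reproducible, the concept groups can be assembled into a Boolean string mechanically: synonyms are OR-ed within each concept, and the concepts are AND-ed together. This is a hedged sketch under assumed conventions; the concept names and terms are placeholders, not a validated strategy.

```python
# Hypothetical sketch: build a Boolean search string from concept groups.
# Terms below are illustrative placeholders, not a recommended search.

def build_search(concepts):
    """OR synonyms within each concept, then AND the concepts together."""
    groups = []
    for terms in concepts.values():
        groups.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(groups)

concepts = {
    "population": ["adolescent*", "teenager*"],
    "intervention": ["exercise", "physical activity"],
    "review filter": ["systematic review", "meta-analysis"],
}
print(build_search(concepts))
# ("adolescent*" OR "teenager*") AND ("exercise" OR "physical activity")
#   AND ("systematic review" OR "meta-analysis")
```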

6. Summarize the proposal: a written proposal helps in framing the project and getting feedback. It should include:

  • A descriptive title for the project, which includes the type of review
  • A brief introduction
  • A description of previous reviews and the rationale for the proposed review
  • An appropriately framed question for the review
  • The eligibility criteria


How to prepare a Research Proposal

Health research, medical education and clinical practice form the three pillars of modern day medical practice. As one authority rightly put it: ‘Health research is not a luxury, but an essential need that no nation can afford to ignore’. Health research can and should be pursued by a broad range of people. Even if they do not conduct research themselves, they need to grasp the principles of the scientific method to understand the value and limitations of science and to be able to assess and evaluate results of research before applying them. This review paper aims to highlight the essential concepts to the students and beginning researchers and sensitize and motivate the readers to access the vast literature available on research methodologies.

Most students and beginning researchers do not fully understand what a research proposal means, nor do they understand its importance. 1 A research proposal is a detailed description of a proposed study designed to investigate a given problem. 2

A research proposal is intended to convince others that you have a worthwhile research project and that you have the competence and the work-plan to complete it. Broadly, the research proposal must address the following questions, regardless of your research area and the methodology you choose: what you plan to accomplish, why you want to do it, and how you are going to do it. 1 The aim of this article is to highlight the essential concepts and not to provide extensive details about this topic.

The elements of a research proposal are highlighted below:

1. Title: It should be concise and descriptive. It must be informative and catchy. An effective title not only pricks the reader's interest, but also predisposes him/her favorably towards the proposal. Often titles are stated in terms of a functional relationship, because such titles clearly indicate the independent and dependent variables. 1 The title may need to be revised after completion of writing of the protocol to reflect more closely the sense of the study. 3

2. Abstract: It is a brief summary of approximately 300 words. It should include the main research question, the rationale for the study, the hypothesis (if any) and the method. Descriptions of the method may include the design, procedures, the sample and any instruments that will be used. 1 It should stand on its own, and not refer the reader to points in the project description. 3

3. Introduction: The introduction provides the readers with the background information. Its purpose is to establish a framework for the research, so that readers can understand how it relates to other research. 4 It should answer the question of why the research needs to be done and what will be its relevance. It puts the proposal in context. 3

The introduction typically begins with a statement of the research problem in precise and clear terms. 1

The importance of the statement of the research problem 5 : The statement of the problem is the essential basis for the construction of a research proposal (research objectives, hypotheses, methodology, work plan and budget etc). It is an integral part of selecting a research topic. It will guide and put into sharper focus the research design being considered for solving the problem. It allows the investigator to describe the problem systematically, to reflect on its importance, its priority in the country and region and to point out why the proposed research on the problem should be undertaken. It also facilitates peer review of the research proposal by the funding agencies.

Then it is necessary to provide the context and set the stage for the research question in such a way as to show its necessity and importance. 1 This step is necessary for the investigators to familiarize themselves with existing knowledge about the research problem and to find out whether or not others have investigated the same or similar problems. This step is accomplished by a thorough and critical review of the literature and by personal communication with experts. 5 It helps further understanding of the problem proposed for research and may lead to refining the statement of the problem, to identify the study variables and conceptualize their relationships, and in formulation and selection of a research hypothesis. 5 It ensures that you are not "re-inventing the wheel" and demonstrates your understanding of the research problem. It gives due credit to those who have laid the groundwork for your proposed research. 1 In a proposal, the literature review is generally brief and to the point. The literature selected should be pertinent and relevant. 6

Against this background, you then present the rationale of the proposed study and clearly indicate why it is worth doing.

4. Objectives: Research objectives are the goals to be achieved by conducting the research. 5 They may be stated as ‘general’ and ‘specific’.

The general objective of the research is what is to be accomplished by the research project, for example, to determine whether or not a new vaccine should be incorporated in a public health program.

The specific objectives relate to the specific research questions the investigator wants to answer through the proposed study and may be presented as primary and secondary objectives, for example, primary: To determine the degree of protection that is attributable to the new vaccine in a study population by comparing the vaccinated and unvaccinated groups. 5 Secondary: To study the cost-effectiveness of this programme.

Young investigators are advised to resist the temptation to put too many objectives or over-ambitious objectives that cannot be adequately achieved by the implementation of the protocol. 3

5. Variables: During the planning stage, it is necessary to identify the key variables of the study and their method of measurement and unit of measurement must be clearly indicated. Four types of variables are important in research 5 :

a. Independent variables: variables that are manipulated or treated in a study in order to see what effect differences in them will have on those variables proposed as being dependent on them. The different synonyms for the term ‘independent variable’ which are used in literature are: cause, input, predisposing factor, risk factor, determinant, antecedent, characteristic and attribute.

b. Dependent variables: variables in which changes are results of the level or amount of the independent variable or variables.

Synonyms: effect, outcome, consequence, result, condition, disease.

c. Confounding or intervening variables: variables that should be studied because they may influence or ‘mix’ the effect of the independent variables. For instance, in a study of the effect of measles (independent variable) on child mortality (dependent variable), the nutritional status of the child may play an intervening (confounding) role.

d. Background variables: variables that are so often of relevance in investigations of groups or populations that they should be considered for possible inclusion in the study. For example sex, age, ethnic origin, education, marital status, social status etc.

The objective of research is usually to determine the effect of changes in one or more independent variables on one or more dependent variables. For example, a study may ask "Will alcohol intake (independent variable) have an effect on development of gastric ulcer (dependent variable)?"

Certain variables may not be easy to identify. The characteristics that define these variables must be clearly identified for the purpose of the study.
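
The measles example above can be made concrete with a small simulation. In the sketch below (an illustration with invented rates, not data from any study), measles has no true effect on mortality; the crude comparison nevertheless suggests harm, because malnourished children are both more likely to catch measles and more likely to die. Stratifying on the confounder removes the spurious association.

```python
# Hypothetical simulation of confounding: all probabilities are invented.
import random
random.seed(1)

children = []
for _ in range(20000):
    malnourished = random.random() < 0.3            # background factor
    measles = random.random() < (0.4 if malnourished else 0.1)
    death_p = 0.08 if malnourished else 0.01        # nutrition drives mortality
    children.append((malnourished, measles, random.random() < death_p))

def rate(rows):
    return sum(died for _, _, died in rows) / len(rows)

exposed = [c for c in children if c[1]]
unexposed = [c for c in children if not c[1]]
print(f"crude: {rate(exposed):.3f} vs {rate(unexposed):.3f}")  # measles 'looks' harmful
for stratum in (True, False):
    e = [c for c in exposed if c[0] == stratum]
    u = [c for c in unexposed if c[0] == stratum]
    print(f"malnourished={stratum}: {rate(e):.3f} vs {rate(u):.3f}")  # ~equal
```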

6. Questions and/ or hypotheses: If you as a researcher know enough to make prediction concerning what you are studying, then the hypothesis may be formulated. A hypothesis can be defined as a tentative prediction or explanation of the relationship between two or more variables. In other words, the hypothesis translates the problem statement into a precise, unambiguous prediction of expected outcomes. Hypotheses are not meant to be haphazard guesses, but should reflect the depth of knowledge, imagination and experience of the investigator. 5 In the process of formulating the hypotheses, all variables relevant to the study must be identified. For example: "Health education involving active participation by mothers will produce more positive changes in child feeding than health education based on lectures". Here the independent variable is types of health education and the dependent variable is changes in child feeding.

A research question poses a relationship between two or more variables but phrases the relationship as a question; a hypothesis represents a declarative statement of the relations between two or more variables. 7

For exploratory or phenomenological research, you may not have any hypothesis (please do not confuse the hypothesis with the statistical null hypothesis). 1 Questions are relevant to normative or census type research (How many of them are there? Is there a relationship between them?). Deciding whether to use questions or hypotheses depends on factors such as the purpose of the study, the nature of the design and methodology, and the audience of the research (at times even the outlook and preference of the committee members, particularly the Chair). 6

7. Methodology: The method section is very important because it tells your research Committee how you plan to tackle your research problem. The guiding principle for writing the Methods section is that it should contain sufficient information for the reader to determine whether the methodology is sound. Some even argue that a good proposal should contain sufficient details for another qualified researcher to implement the study. 1 Indicate the methodological steps you will take to answer every question or to test every hypothesis illustrated in the Questions/hypotheses section. 6 It is vital that you consult a biostatistician during the planning stage of your study, 8 to resolve the methodological issues before submitting the proposal.

This section should include:

Research design: The selection of the research strategy is the core of research design and is probably the single most important decision the investigator has to make. The choice of the strategy, whether descriptive, analytical, experimental, operational or a combination of these depend on a number of considerations, 5 but this choice must be explained in relation to the study objectives. 3

Research subjects or participants: Depending on the type of your study, the following questions should be answered 3 , 5

  • - What are the criteria for inclusion or selection?
  • - What are the criteria for exclusion?
  • - What is the sampling procedure you will use so as to ensure representativeness and reliability of the sample and to minimize sampling errors? The key reason for being concerned with sampling is the issue of validity, both internal and external, of the study results. 9
  • - Will there be use of controls in your study? Controls or comparison groups are used in scientific research in order to increase the validity of the conclusions. Control groups are necessary in all analytical epidemiological studies, in experimental studies of drug trials, in research on effects of intervention programmes and disease control measures and in many other investigations. Some descriptive studies (studies of existing data, surveys) may not require control groups.
  • - What are the criteria for discontinuation?

Sample size: The proposal should provide information and justification (basis on which the sample size is calculated) about sample size in the methodology section. 3 A larger sample size than needed to test the research hypothesis increases the cost and duration of the study and will be unethical if it exposes human subjects to any potential unnecessary risk without additional benefit. A smaller sample size than needed can also be unethical as it exposes human subjects to risk with no benefit to scientific knowledge. Calculation of sample size has been made easy by computer software programmes, but the principles underlying the estimation should be well understood.
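
As a worked illustration of the principles (not a substitute for consulting a biostatistician), the standard normal-approximation formula for comparing two proportions can be computed in a few lines. The event rates, alpha, and power below are illustrative values chosen for the example.

```python
# Sketch of the two-proportion sample-size formula (normal approximation).
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g., to detect a drop in event rate from 30% to 20%:
print(n_per_group(0.30, 0.20))  # 294 participants per group
```

Note how sensitive the result is to the assumed difference: halving the expected difference roughly quadruples the required sample size, which is why the justification for these inputs belongs in the proposal.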

Interventions: If an intervention is introduced, a description must be given of the drugs or devices (proprietary names, manufacturer, chemical composition, dose, frequency of administration) if they are already commercially available. If they are in phases of experimentation or are already commercially available but used for other indications, information must be provided on available pre-clinical investigations in animals and/or results of studies already conducted in humans (in such cases, approval of the drug regulatory agency in the country is needed before the study). 3

Ethical issues 3 : Ethical considerations apply to all types of health research. Before the proposal is submitted to the Ethics Committee for approval, the two important documents mentioned below (where appropriate) must be appended to the proposal. In addition, there is the vital issue of conflict of interest, on which the researchers should furnish a statement.

The Informed consent form (informed decision-making): A consent form, where appropriate, must be developed and attached to the proposal. It should be written in the prospective subjects’ mother tongue and in simple language which can be easily understood by the subject. The use of medical terminology should be avoided as far as possible. Special care is needed when subjects are illiterate. It should explain why the study is being done and why the subject has been asked to participate. It should describe, in sequence, what will happen in the course of the study, giving enough detail for the subject to gain a clear idea of what to expect. It should clarify whether or not the study procedures offer any benefits to the subject or to others, and explain the nature, likelihood and treatment of anticipated discomfort or adverse effects, including psychological and social risks, if any. Where relevant, a comparison with risks posed by standard drugs or treatment must be included. If the risks are unknown or a comparative risk cannot be given it should be so stated. It should indicate that the subject has the right to withdraw from the study at any time without, in any way, affecting his/her further medical care. It should assure the participant of confidentiality of the findings.

Ethics checklist: The proposal must describe the measures that will be undertaken to ensure that the proposed research is carried out in accordance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical research involving Human Subjects. 10 It must answer the following questions:

  • • Is the research design adequate to provide answers to the research question? It is unethical to expose subjects to research that will have no value.
  • • Is the method of selection of research subjects justified? The use of vulnerable subjects as research participants needs special justification. Vulnerable subjects include those in prison, minors and persons with mental disability. In international research it is important to mention that the population in which the study is conducted will benefit from any potential outcome of the research and the research is not being conducted solely for the benefit of some other population. Justification is needed for any inducement, financial or otherwise, for the participants to be enrolled in the study.
  • • Are the interventions justified, in terms of risk/benefit ratio? Risks are not limited to physical harm. Psychological and social risks must also be considered.
  • • For observations made, have measures been taken to ensure confidentiality?

Research setting 5 : The research setting includes all the pertinent facets of the study, such as the population to be studied (sampling frame), the place and time of study.

Study instruments 3 , 5 : Instruments are the tools by which the data are collected. For validated questionnaires/interview schedules, reference to published work should be given and the instrument appended to the proposal. For a new questionnaire designed specifically for your study, details about preparing, precoding and pretesting the questionnaire should be furnished and the document appended to the proposal. Descriptions of other methods of observation, such as medical examination, laboratory tests and screening procedures, are also necessary: for established procedures, cite the published work; for new or modified procedures, provide an adequate description with justification.

Collection of data: A short description of the protocol of data collection. For example, in a study on blood pressure measurement: time of participant arrival, rest for 5 minutes, which apparatus (standard calibrated) to be used, in which room to take the measurement, measurement in sitting or lying down position, how many measurements, measurement in which arm first (whether this is going to be randomized), details of cuff and its placement, who will take the measurement. This minimizes the possibility of confusion, delays and errors.

Data analysis: The description should include the design of the analysis form, plans for processing and coding the data, and the choice of the statistical method to be applied to each data set. What will be the procedures for accounting for missing, unused or spurious data?
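
One concrete way to begin accounting for missing data, sketched below with pandas on an invented three-variable data set, is to tabulate missingness per variable before choosing how to handle it; the variable names and values are hypothetical.

```python
# Hypothetical sketch: a per-variable missing-data report with pandas.
import pandas as pd

df = pd.DataFrame({
    "systolic_bp": [120, None, 135, 142],
    "age": [34, 51, None, 47],
    "group": ["control", "treatment", "treatment", None],
})

report = pd.DataFrame({
    "n_missing": df.isna().sum(),                    # count of missing values
    "pct_missing": (df.isna().mean() * 100).round(1),
})
print(report)  # documents missingness variable by variable, before analysis
```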

Monitoring, supervision and quality control: A detailed statement about all logistical issues required to satisfy the requirements of Good Clinical Practice (GCP): protocol procedures, responsibilities of each member of the research team, training of study investigators, and steps taken to assure quality control (laboratory procedures, equipment calibration, etc.).

Gantt chart: A Gantt chart is an overview of the proposed tasks/activities and a time frame for them. You put weeks, days or months along one side, and the tasks along the other. You draw fat lines to indicate the period over which each task will be performed, giving a timeline for your research study (tutorials are available on YouTube). 11
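
For those who prefer scripting the chart over drawing it by hand, the sketch below uses matplotlib's broken_barh to draw one fat bar per task; the task names and durations are invented placeholders.

```python
# Hypothetical sketch: a simple Gantt chart with matplotlib.
import matplotlib.pyplot as plt

tasks = [  # (name, start week, duration in weeks) -- illustrative only
    ("Protocol & ethics approval", 0, 6),
    ("Data collection", 6, 16),
    ("Analysis", 22, 6),
    ("Write-up", 28, 8),
]

fig, ax = plt.subplots()
for i, (name, start, dur) in enumerate(tasks):
    ax.broken_barh([(start, dur)], (i - 0.4, 0.8))  # one fat bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([t[0] for t in tasks])
ax.set_xlabel("Week of project")
ax.invert_yaxis()  # first task at the top
plt.tight_layout()
plt.show()
```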

Significance of the study: Indicate how your research will refine, revise or extend existing knowledge in the area under investigation. How will it benefit the concerned stakeholders? What could be the larger implications of your research study?

Dissemination of the study results: How do you propose to share the findings of your study with professional peers, practitioners, participants and the funding agency?

Budget: A proposal budget with an item-wise/activity-wise breakdown and justification for each item. Indicate how the study will be financed.

References: The proposal should end with relevant references on the subject. For web-based sources, include the date of access for the cited website, for example: "accessed on June 10, 2008".

Appendixes: Include the appropriate appendixes in the proposal. For example: Interview protocols, sample of informed consent forms, cover letters sent to appropriate stakeholders, official letters for permission to conduct research. Regarding original scales or questionnaires, if the instrument is copyrighted then permission in writing to reproduce the instrument from the copyright holder or proof of purchase of the instrument must be submitted.

Systematic Reviews

Developing Your Question

Developing your research question is one of the most important steps in the review process. At this stage in the process, you and your team have identified a knowledge gap in your field and are aiming to answer a specific question, such as

  • If X is prescribed, will Y happen to patients?

OR assess an intervention

  • How does X affect Y?

OR synthesize the existing evidence 

  • What is the nature of X?

Whatever your aim, formulating a clear, well-defined research question of appropriate scope is key to a successful review. The research question will be the foundation of your review, and from it your research team will identify 2-5 possible search concepts. These search concepts will later be used to build your search strategy.

PICOT Questions

Formulating a research question takes time and your team may go through different versions until settling on the right research question.  A research question framework can help structure your systematic review question.  

PICO/T is an acronym which stands for

  • P        Population/Problem
  • I         Intervention/Exposure
  • C        Comparison
  • O       Outcome
  • T       Time

Each PICO includes at least a P, I, and an O, and some include a C or a T. Below are some sample PICO/T questions to help you use the framework to your advantage; a short script after the templates shows one way to fill them in.

For an intervention/therapy

In _______(P), what is the effect of _______(I) on ______(O) compared with _______(C) within ________(T)?

For etiology

Are ____ (P) who have _______ (I) at ___ (Increased/decreased) risk for/of_______ (O) compared with ______ (P) with/without ______ (C) over _____ (T)?

Diagnosis or diagnostic test

Are (is) _________ (I) more accurate in diagnosing ________ (P) compared with ______ (C) for _______ (O)?

For prevention

For ________ (P) does the use of ______ (I) reduce the future risk of ________ (O) compared with _________ (C)?

Prognosis/Predictions

Does __________ (I) influence ________ (O) in patients who have _______ (P) over ______ (T)?

How do ________ (P) diagnosed with _______ (I) perceive ______ (O) during _____ (T)?
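
As promised above, here is a small sketch of filling the intervention/therapy template mechanically; the PICO/T values are an invented example, not a recommendation.

```python
# Hypothetical sketch: render the intervention/therapy template from PICO/T
# elements. The example values are illustrative only.
pico = {
    "P": "adults with type 2 diabetes",
    "I": "a low-carbohydrate diet",
    "C": "a standard low-fat diet",
    "O": "HbA1c levels",
    "T": "6 months",
}

template = ("In {P}, what is the effect of {I} on {O} "
            "compared with {C} within {T}?")
print(template.format(**pico))
```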

Melnyk B., & Fineout-Overholt E. (2010). Evidence-based practice in nursing & healthcare. New York: Lippincott Williams & Wilkins.

Ghezzi-Kopel, Kate. (2019, September 16). Developing your research question. (research guide). Retrieved from  https://guides.library.cornell.edu/systematic_reviews/research_question



11.2 Steps in Developing a Research Proposal

Learning Objectives

  • Identify the steps in developing a research proposal.
  • Choose a topic and formulate a research question and working thesis.
  • Develop a research proposal.

Writing a good research paper takes time, thought, and effort. Although this assignment is challenging, it is manageable. Focusing on one step at a time will help you develop a thoughtful, informative, well-supported research paper.

Your first step is to choose a topic and then to develop research questions, a working thesis, and a written research proposal. Set aside adequate time for this part of the process. Fully exploring ideas will help you build a solid foundation for your paper.

Choosing a Topic

When you choose a topic for a research paper, you are making a major commitment. Your choice will help determine whether you enjoy the lengthy process of research and writing—and whether your final paper fulfills the assignment requirements. If you choose your topic hastily, you may later find it difficult to work with your topic. By taking your time and choosing carefully, you can ensure that this assignment is not only challenging but also rewarding.

Writers understand the importance of choosing a topic that fulfills the assignment requirements and fits the assignment’s purpose and audience. (For more information about purpose and audience, see Chapter 6 “Writing Paragraphs: Separating Ideas and Shaping Content”.) Choosing a topic that interests you is also crucial. Your instructor may provide a list of suggested topics or ask that you develop a topic on your own. In either case, try to identify topics that genuinely interest you.

After identifying potential topic ideas, you will need to evaluate your ideas and choose one topic to pursue. Will you be able to find enough information about the topic? Can you develop a paper about this topic that presents and supports your original ideas? Is the topic too broad or too narrow for the scope of the assignment? If so, can you modify it so it is more manageable? You will ask these questions during this preliminary phase of the research process.

Identifying Potential Topics

Sometimes, your instructor may provide a list of suggested topics. If so, you may benefit from identifying several possibilities before committing to one idea. It is important to know how to narrow down your ideas into a concise, manageable thesis. You may also use the list as a starting point to help you identify additional, related topics. Discussing your ideas with your instructor will help ensure that you choose a manageable topic that fits the requirements of the assignment.

In this chapter, you will follow a writer named Jorge, who is studying health care administration, as he prepares a research paper. You will also plan, research, and draft your own research paper.

Jorge was assigned to write a research paper on health and the media for an introductory course in health care. Although a general topic was selected for the students, Jorge had to decide which specific issues interested him. He brainstormed a list of possibilities.

If you are writing a research paper for a specialized course, look back through your notes and course activities. Identify reading assignments and class discussions that especially engaged you. Doing so can help you identify topics to pursue.

  • Health Maintenance Organizations (HMOs) in the news
  • Sexual education programs
  • Hollywood and eating disorders
  • Americans’ access to public health information
  • Media portrayal of health care reform bill
  • Depictions of drugs on television
  • The effect of the Internet on mental health
  • Popularized diets (such as low-carbohydrate diets)
  • Fear of pandemics (bird flu, H1N1, SARS)
  • Electronic entertainment and obesity
  • Advertisements for prescription drugs
  • Public education and disease prevention

Set a timer for five minutes. Use brainstorming or idea mapping to create a list of topics you would be interested in researching for a paper about the influence of the Internet on social networking. Do you closely follow the media coverage of a particular website, such as Twitter? Would you like to learn more about a certain industry, such as online dating? Which social networking sites do you and your friends use? List as many ideas related to this topic as you can.

Narrowing Your Topic

Once you have a list of potential topics, you will need to choose one as the focus of your essay. You will also need to narrow your topic. Most writers find that the topics they listed during brainstorming or idea mapping are broad—too broad for the scope of the assignment. Working with an overly broad topic, such as sexual education programs or popularized diets, can be frustrating and overwhelming. Each topic has so many facets that it would be impossible to cover them all in a college research paper. However, more specific choices, such as the pros and cons of sexual education in kids’ television programs or the physical effects of the South Beach diet, are specific enough to write about without being too narrow to sustain an entire research paper.

A good research paper provides focused, in-depth information and analysis. If your topic is too broad, you will find it difficult to do more than skim the surface when you research it and write about it. Narrowing your focus is essential to making your topic manageable. To narrow your focus, explore your topic in writing, conduct preliminary research, and discuss both the topic and the research with others.

Exploring Your Topic in Writing

“How am I supposed to narrow my topic when I haven’t even begun researching yet?” In fact, you may already know more than you realize. Review your list and identify your top two or three topics. Set aside some time to explore each one through freewriting. (For more information about freewriting, see Chapter 8 “The Writing Process: How Do I Begin?” .) Simply taking the time to focus on your topic may yield fresh angles.

Jorge knew that he was especially interested in the topic of diet fads, but he also knew that it was much too broad for his assignment. He used freewriting to explore his thoughts so he could narrow his topic. Read Jorge’s ideas.

Conducting Preliminary Research

Another way writers may focus a topic is to conduct preliminary research. Like freewriting, exploratory reading can help you identify interesting angles. Surfing the web and browsing through newspaper and magazine articles are good ways to start. Find out what people are saying about your topic on blogs and online discussion groups. Discussing your topic with others can also inspire you. Talk about your ideas with your classmates, your friends, or your instructor.

Jorge’s freewriting exercise helped him realize that the assigned topic of health and the media intersected with a few of his interests—diet, nutrition, and obesity. Preliminary online research and discussions with his classmates strengthened his impression that many people are confused or misled by media coverage of these subjects.

Jorge decided to focus his paper on a topic that had garnered a great deal of media attention—low-carbohydrate diets. He wanted to find out whether low-carbohydrate diets were as effective as their proponents claimed.

Writing at Work

At work, you may need to research a topic quickly to find general information. This information can be useful in understanding trends in a given industry or generating competition. For example, a company may research a competitor’s prices and use the information when pricing their own product. You may find it useful to skim a variety of reliable sources and take notes on your findings.

The reliability of online sources varies greatly. In this exploratory phase of your research, you do not need to evaluate sources as closely as you will later. However, use common sense as you refine your paper topic. If you read a fascinating blog comment that gives you a new idea for your paper, be sure to check out other, more reliable sources as well to make sure the idea is worth pursuing.

Exercise 2

Review the list of topics you created in Note 11.18 “Exercise 1” and identify two or three topics you would like to explore further. For each of these topics, spend five to ten minutes writing about the topic without stopping. Then review your writing to identify possible areas of focus.

Set aside time to conduct preliminary research about your potential topics. Then choose a topic to pursue for your research paper.

Collaboration

Please share your topic list with a classmate. Select one or two topics on his or her list that you would like to learn more about, and return the list to him or her. Discuss why you found the topics interesting, and learn which of your topics your classmate selected and why.

A Plan for Research

Your freewriting and preliminary research have helped you choose a focused, manageable topic for your research paper. To work with your topic successfully, you will need to determine what exactly you want to learn about it—and later, what you want to say about it. Before you begin conducting in-depth research, you will further define your focus by developing a research question, a working thesis, and a research proposal.

Formulating a Research Question

In forming a research question, you are setting a goal for your research. Your main research question should be substantial enough to form the guiding principle of your paper—but focused enough to guide your research. A strong research question requires you not only to find information but also to put together different pieces of information, interpret and analyze them, and figure out what you think. As you consider potential research questions, ask yourself whether they would be too hard or too easy to answer.

To determine your research question, review the freewriting you completed earlier. Skim through books, articles, and websites and list the questions you have. (You may wish to use the 5WH strategy to help you formulate questions. See Chapter 8 “The Writing Process: How Do I Begin?” for more information about 5WH questions.) Include simple, factual questions and more complex questions that would require analysis and interpretation. Determine your main question—the primary focus of your paper—and several subquestions that you will need to research to answer your main question.

Here are the research questions Jorge will use to focus his research. Notice that his main research question has no obvious, straightforward answer. Jorge will need to research his subquestions, which address narrower topics, to answer his main question.

Exercise 3

Using the topic you selected in Note 11.24 “Exercise 2”, write your main research question and at least four or five subquestions. Check that your main research question is appropriately complex for your assignment.

Constructing a Working Thesis

A working thesis concisely states a writer’s initial answer to the main research question. It does not merely state a fact or present a subjective opinion. Instead, it expresses a debatable idea or claim that you hope to prove through additional research. Your working thesis is called a working thesis for a reason—it is subject to change. As you learn more about your topic, you may change your thinking in light of your research findings. Let your working thesis serve as a guide to your research, but do not be afraid to modify it based on what you learn.

Jorge began his research with a strong point of view based on his preliminary writing and research. Read his working thesis statement, which presents the point he will argue. Notice how it states Jorge’s tentative answer to his research question.

One way to determine your working thesis is to consider how you would complete sentences such as “I believe that…” or “My opinion is that…”. However, keep in mind that academic writing generally does not use first-person pronouns. These statements are useful starting points, but formal research papers use an objective voice.

Exercise 4

Write a working thesis statement that presents your preliminary answer to the research question you wrote in Note 11.27 “Exercise 3”. Check that your working thesis statement presents an idea or claim that could be supported or refuted by evidence from research.

Creating a Research Proposal

A research proposal is a brief document—no more than one typed page—that summarizes the preliminary work you have completed. Your purpose in writing it is to formalize your plan for research and present it to your instructor for feedback. In your research proposal, you will present your main research question, related subquestions, and working thesis. You will also briefly discuss the value of researching this topic and indicate how you plan to gather information.

When Jorge began drafting his research proposal, he realized that he had already created most of the pieces he needed. However, he knew he also had to explain how his research would be relevant to other future health care professionals. In addition, he wanted to form a general plan for doing the research and identifying potentially useful sources. Read Jorge’s research proposal.

Writing at Work

Before you begin a new project at work, you may have to develop a project summary document that states the purpose of the project, explains why it would be a wise use of company resources, and briefly outlines the steps involved in completing the project. This type of document is similar to a research proposal. Both documents define and limit a project, explain its value, discuss how to proceed, and identify what resources you will use.

Writing Your Own Research Proposal

If you have not done so already, write your own research proposal, following the guidelines provided in this lesson.

Key Takeaways

  • Developing a research proposal involves the following preliminary steps: identifying potential ideas, choosing ideas to explore further, choosing and narrowing a topic, formulating a research question, and developing a working thesis.
  • A good topic for a research paper interests the writer and fulfills the requirements of the assignment.
  • Defining and narrowing a topic helps writers conduct focused, in-depth research.
  • Writers conduct preliminary research to identify possible topics and research questions and to develop a working thesis.
  • A good research question interests readers, is neither too broad nor too narrow, and has no obvious answer.
  • A good working thesis expresses a debatable idea or claim that can be supported with evidence from research.
  • Writers create a research proposal to present their topic, main research question, subquestions, and working thesis to an instructor for approval or feedback.

