Logo for Rebus Press

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

Chapter 1: Introduction

Learning objectives.

At the conclusion of this chapter, you will be able to:

  • Identify the purpose of the literature review in  the research process
  • Distinguish between different types of literature reviews

1.1 What is a Literature Review?

Pick up nearly any book on research methods and you will find a description of a literature review.  At a basic level, the term implies a survey of factual or nonfiction books, articles, and other documents published on a particular subject.  Definitions may be similar across the disciplines, with new types and definitions continuing to emerge.  Generally speaking, a literature review is a:

  • “comprehensive background of the literature within the interested topic area…” ( O’Gorman & MacIntosh, 2015, p. 31 ).
  • “critical component of the research process that provides an in-depth analysis of recently published research findings in specifically identified areas of interest.” ( House, 2018, p. 109 ).
  • “written document that presents a logically argued case founded on a comprehensive understanding of the current state of knowledge about a topic of study” ( Machi & McEvoy,  2012, p. 4 ).

As a foundation for knowledge advancement in every discipline, it is an important element of any research project.  At the graduate or doctoral level, the literature review is an essential feature of thesis and dissertation, as well as grant proposal writing.  That is to say, “A substantive, thorough, sophisticated literature review is a precondition for doing substantive, thorough, sophisticated research…A researcher cannot perform significant research without first understanding the literature in the field.” ( Boote & Beile, 2005, p. 3 ).  It is by this means, that a researcher demonstrates familiarity with a body of knowledge and thereby establishes credibility with a reader.  An advanced-level literature review shows how prior research is linked to a new project, summarizing and synthesizing what is known while identifying gaps in the knowledge base, facilitating theory development, closing areas where enough research already exists, and uncovering areas where more research is needed. ( Webster & Watson, 2002, p. xiii )

A graduate-level literature review is a compilation of the most significant previously published research on your topic. Unlike an annotated bibliography or a research paper you may have written as an undergraduate, your literature review will outline, evaluate and synthesize relevant research and relate those sources to your own thesis or research question. It is much more than a summary of all the related literature.

It is a type of writing that demonstrate the importance of your research by defining the main ideas and the relationship between them. A good literature review lays the foundation for the importance of your stated problem and research question.

Literature reviews:

  • define a concept
  • map the research terrain or scope
  • systemize relationships between concepts
  • identify gaps in the literature ( Rocco & Plathotnik, 2009, p. 128 )

The purpose of a literature review is to demonstrate that your research question  is meaningful. Additionally, you may review the literature of different disciplines to find deeper meaning and understanding of your topic. It is especially important to consider other disciplines when you do not find much on your topic in one discipline. You will need to search the cognate literature before claiming there is “little previous research” on your topic.

Well developed literature reviews involve numerous steps and activities. The literature review is an iterative process because you will do at least two of them: a preliminary search to learn what has been published in your area and whether there is sufficient support in the literature for moving ahead with your subject. After this first exploration, you will conduct a deeper dive into the literature to learn everything you can about the topic and its related issues.

Literature Review Tutorial

A video titled "Literature Reviews: An overview for graduate students." Video here: https://www.lib.ncsu.edu/tutorials/litreview/. Transcript available here: https://siskel.lib.ncsu.edu/RIS/instruction/litreview/litreview.txt

1.2 Literature Review Basics

An effective literature review must:

  • Methodologically analyze and synthesize quality literature on a topic
  • Provide a firm foundation to a topic or research area
  • Provide a firm foundation for the selection of a research methodology
  • Demonstrate that the proposed research contributes something new to the overall body of knowledge of advances the research field’s knowledge base. ( Levy & Ellis, 2006 ).

All literature reviews, whether they are qualitative, quantitative or both, will at some point:

  • Introduce the topic and define its key terms
  • Establish the importance of the topic
  • Provide an overview of the amount of available literature and its types (for example: theoretical, statistical, speculative)
  • Identify gaps in the literature
  • Point out consistent finding across studies
  • Arrive at a synthesis that organizes what is known about a topic
  • Discusses possible implications and directions for future research

1.3 Types of Literature Reviews

There are many different types of literature reviews, however there are some shared characteristics or features.  Remember a comprehensive literature review is, at its most fundamental level, an original work based on an extensive critical examination and synthesis of the relevant literature on a topic. As a study of the research on a particular topic, it is arranged by key themes or findings, which may lead up to or link to the  research question.  In some cases, the research question will drive the type of literature review that is undertaken.

The following section includes brief descriptions of the terms used to describe different literature review types with examples of each.   The included citations are open access, Creative Commons licensed or copyright-restricted.

1.3.1 Types of Review

1.3.1.1 conceptual.

Guided by an understanding of basic issues rather than a research methodology. You are looking for key factors, concepts or variables and the presumed relationship between them. The goal of the conceptual literature review is to categorize and describe concepts relevant to your study or topic and outline a relationship between them. You will include relevant theory and empirical research.

Examples of a Conceptual Review:

  • Education : The formality of learning science in everyday life: A conceptual literature review. ( Dohn, 2010 ).
  • Education : Are we asking the right questions? A conceptual review of the educational development literature in higher education. ( Amundsen & Wilson, 2012 ).

Figure 1.1 shows a diagram of possible topics and subtopics related to the use of information systems in education. In this example, constructivist theory is a concept that might influence the use of information systems in education. A related but separate concept the researcher might want to explore are the different perspectives of students and teachers regarding the use of information systems in education.

1.3.1.2 Empirical

An empirical literature review collects, creates, arranges, and analyzes numeric data reflecting the frequency of themes, topics, authors and/or methods found in existing literature. Empirical literature reviews present their summaries in quantifiable terms using descriptive and inferential statistics.

Examples of an Empirical Review:

  • Nursing : False-positive findings in Cochrane meta-analyses with and without application of trial sequential analysis: An empirical review. ( Imberger, Thorlund, Gluud, & Wettersley, 2016 ).
  • Education : Impediments of e-learning adoption in higher learning institutions of Tanzania: An empirical review ( Mwakyusa & Mwalyagile, 2016 ).

1.3.1.3 Exploratory

Unlike a synoptic literature review, the purpose here is to provide a broad approach to the topic area. The aim is breadth rather than depth and to get a general feel for the size of the topic area. A graduate student might do an exploratory review of the literature before beginning a synoptic, or more comprehensive one.

Examples of an Exploratory Review:

  • Education : University research management: An exploratory literature review. ( Schuetzenmeister, 2010 ).
  • Education : An exploratory review of design principles in constructivist gaming learning environments. ( Rosario & Widmeyer, 2009 ).

what is an exploratory literature review

1.3.1.4 Focused

A type of literature review limited to a single aspect of previous research, such as methodology. A focused literature review generally will describe the implications of choosing a particular element of past research, such as methodology in terms of data collection, analysis and interpretation.

Examples of a Focused Review:

  • Nursing : Clinical inertia in the management of type 2 diabetes mellitus: A focused literature review. ( Khunti, Davies, & Khunti, 2015 ).
  • Education : Language awareness: Genre awareness-a focused review of the literature. ( Stainton, 1992 ).

1.3.1.5 Integrative

Critiques past research and draws overall conclusions from the body of literature at a specified point in time. Reviews, critiques, and synthesizes representative literature on a topic in an integrated way. Most integrative reviews are intended to address mature topics or  emerging topics. May require the author to adopt a guiding theory, a set of competing models, or a point of view about a topic.  For more description of integrative reviews, see Whittemore & Knafl (2005).

Examples of an Integrative Review:

  • Nursing : Interprofessional teamwork and collaboration between community health workers and healthcare teams: An integrative review. ( Franklin,  Bernhardt, Lopez, Long-Middleton, & Davis, 2015 ).
  • Education : Exploring the gap between teacher certification and permanent employment in Ontario: An integrative literature review. ( Brock & Ryan, 2016 ).

1.3.1.6 Meta-analysis

A subset of a  systematic review, that takes findings from several studies on the same subject and analyzes them using standardized statistical procedures to pool together data. Integrates findings from a large body of quantitative findings to enhance understanding, draw conclusions, and detect patterns and relationships. Gather data from many different, independent studies that look at the same research question and assess similar outcome measures. Data is combined and re-analyzed, providing a greater statistical power than any single study alone. It’s important to note that not every systematic review includes a meta-analysis but a meta-analysis can’t exist without a systematic review of the literature.

Examples of a Meta-Analysis:

  • Education : Efficacy of the cooperative learning method on mathematics achievement and attitude: A meta-analysis research. ( Capar & Tarim, 2015 ).
  • Nursing : A meta-analysis of the effects of non-traditional teaching methods on the critical thinking abilities of nursing students. ( Lee, Lee, Gong, Bae, & Choi, 2016 ).
  • Education : Gender differences in student attitudes toward science: A meta-analysis of the literature from 1970 to 1991. ( Weinburgh, 1995 ).

1.3.1.7 Narrative/Traditional

An overview of research on a particular topic that critiques and summarizes a body of literature. Typically broad in focus. Relevant past research is selected and synthesized into a coherent discussion. Methodologies, findings and limits of the existing body of knowledge are discussed in narrative form. Sometimes also referred to as a traditional literature review. Requires a sufficiently focused research question. The process may be subject to bias that supports the researcher’s own work.

Examples of a Narrative/Traditional Review:

  • Nursing : Family carers providing support to a person dying in the home setting: A narrative literature review. ( Morris, King, Turner, & Payne, 2015 ).
  • Education : Adventure education and Outward Bound: Out-of-class experiences that make a lasting difference. ( Hattie, Marsh, Neill, & Richards, 1997 ).
  • Education : Good quality discussion is necessary but not sufficient in asynchronous tuition: A brief narrative review of the literature. ( Fear & Erikson-Brown, 2014 ).
  • Nursing : Outcomes of physician job satisfaction: A narrative review, implications, and directions for future research. ( Williams & Skinner, 2003 ).

1.3.1.8 Realist

Aspecific type of literature review that is theory-driven and interpretative and is intended to explain the outcomes of a complex intervention program(s).

Examples of a Realist Review:

  • Nursing : Lean thinking in healthcare: A realist review of the literature. ( Mazzacato, Savage, Brommels, 2010 ).
  • Education : Unravelling quality culture in higher education: A realist review. ( Bendermacher, Egbrink, Wolfhagen, & Dolmans, 2017 ).

1.3.1.9 Scoping

Tend to be non-systematic and focus on breadth of coverage conducted on a topic rather than depth. Utilize a wide range of materials; may not evaluate the quality of the studies as much as count the number. One means of understanding existing literature. Aims to identify nature and extent of research; preliminary assessment of size and scope of available research on topic. May include research in progress.

Examples of a Scoping Review:

  • Nursing : Organizational interventions improving access to community-based primary health care for vulnerable populations: A scoping review. ( Khanassov, Pluye, Descoteaux, Haggerty,  Russell, Gunn, & Levesque, 2016 ).
  • Education : Interdisciplinary doctoral research supervision: A scoping review. ( Vanstone, Hibbert, Kinsella, McKenzie, Pitman, & Lingard, 2013 ).
  • Nursing : A scoping review of the literature on the abolition of user fees in health care services in Africa. ( Ridde, & Morestin, 2011 ).

1.3.1.10 Synoptic

Unlike an exploratory review, the purpose is to provide a concise but accurate overview of all material that appears to be relevant to a chosen topic. Both content and methodological material is included. The review should aim to be both descriptive and evaluative. Summarizes previous studies while also showing how the body of literature could be extended and improved in terms of content and method by identifying gaps.

Examples of a Synoptic Review:

  • Education : Theoretical framework for educational assessment: A synoptic review. ( Ghaicha, 2016 ).
  • Education : School effects research: A synoptic review of past efforts and some suggestions for the future. ( Cuttance, 1981 ).

1.3.1.11 Systematic Review

A rigorous review that follows a strict methodology designed with a presupposed selection of literature reviewed.  Undertaken to clarify the state of existing research, the evidence, and possible implications that can be drawn from that.  Using comprehensive and exhaustive searching of the published and unpublished literature, searching various databases, reports, and grey literature.  Transparent and reproducible in reporting details of time frame, search and methods to minimize bias.  Must include a team of at least 2-3 and includes the critical appraisal of the literature.  For more description of systematic reviews, including links to protocols, checklists, workflow processes, and structure see “ A Young Researcher’s Guide to a Systematic Review “.

Examples of a Systematic Review:

  • Education : The potentials of using cloud computing in schools: A systematic literature review ( Hartmann, Braae, Pedersen, & Khalid, 2017 )
  • Nursing : Is butter back? A systematic review and meta-analysis of butter consumption and risk of cardiovascular disease, diabetes, and total mortality. ( Pimpin, Wu, Haskelberg, Del Gobbo, & Mozaffarian, 2016 ).
  • Education : The use of research to improve professional practice: a systematic review of the literature. ( Hemsley-Brown & Sharp, 2003 ).
  • Nursing : Using computers to self-manage type 2 diabetes. ( Pal, Eastwood, Michie, Farmer, Barnard, Peacock, Wood, Inniss, & Murray, 2013 ).

1.3.1.12 Umbrella/Overview of Reviews

Compiles evidence from multiple systematic reviews into one document. Focuses on broad condition or problem for which there are competing interventions and highlights reviews that address those interventions and their effects. Often used in recommendations for practice.

Examples of an Umbrella/Overview Review:

  • Education : Reflective practice in healthcare education: An umbrella review. ( Fragknos, 2016 ).
  • Nursing : Systematic reviews of psychosocial interventions for autism: an umbrella review. ( Seida, Ospina, Karkhaneh, Hartling, Smith, & Clark, 2009 ).

For a brief discussion see “ Not all literature reviews are the same ” (Thomson, 2013).

1.4 Why do a Literature Review?

The purpose of the literature review is the same regardless of the topic or research method. It tests your own research question against what is already known about the subject.

1.4.1 First – It’s part of the whole. Omission of a literature review chapter or section in a graduate-level project represents a serious void or absence of critical element in the research process.

The outcome of your review is expected to demonstrate that you:

  • can systematically explore the research in your topic area
  • can read and critically analyze the literature in your discipline and then use it appropriately to advance your own work
  • have sufficient knowledge in the topic to undertake further investigation

1.4.2 Second – It’s good for you!

  • You improve your skills as a researcher
  • You become familiar with the discourse of your discipline and learn how to be a scholar in your field
  • You learn through writing your ideas and finding your voice in your subject area
  • You define, redefine and clarify your research question for yourself in the process

1.4.3 Third – It’s good for your reader. Your reader expects you to have done the hard work of gathering, evaluating and synthesizes the literature.  When you do a literature review you:

  • Set the context for the topic and present its significance
  • Identify what’s important to know about your topic – including individual material, prior research, publications, organizations and authors.
  • Demonstrate relationships among prior research
  • Establish limitations of existing knowledge
  • Analyze trends in the topic’s treatment and gaps in the literature

1.4.4 Why do a literature review?

  • To locate gaps in the literature of your discipline
  • To avoid reinventing the wheel
  • To carry on where others have already been
  • To identify other people working in the same field
  • To increase your breadth of knowledge in your subject area
  • To find the seminal works in your field
  • To provide intellectual context for your own work
  • To acknowledge opposing viewpoints
  • To put your work in perspective
  • To demonstrate you can discover and retrieve previous work in the area

1.5 Common Literature Review Errors

Graduate-level literature reviews are more than a summary of the publications you find on a topic.  As you have seen in this brief introduction, literature reviews are a very specific type of research, analysis, and writing.  We will explore these topics more in the next chapters.  Some things to keep in mind as you begin your own research and writing are ways to avoid the most common errors seen in the first attempt at a literature review.  For a quick review of some of the pitfalls and challenges a new researcher faces when he/she begins work, see “ Get Ready: Academic Writing, General Pitfalls and (oh yes) Getting Started! ”.

As you begin your own graduate-level literature review, try to avoid these common mistakes:

  • Accepts another researcher’s finding as valid without evaluating methodology and data
  • Contrary findings and alternative interpretations are not considered or mentioned
  • Findings are not clearly related to one’s own study, or findings are too general
  • Insufficient time allowed to define best search strategies and writing
  • Isolated statistical results are simply reported rather than synthesizing the results
  • Problems with selecting and using most relevant keywords, subject headings and descriptors
  • Relies too heavily on secondary sources
  • Search methods are not recorded or reported for transparency
  • Summarizes rather than synthesizes articles

In conclusion, the purpose of a literature review is three-fold:

  • to survey the current state of knowledge or evidence in the area of inquiry,
  • to identify key authors, articles, theories, and findings in that area, and
  • to identify gaps in knowledge in that research area.

A literature review is commonly done today using computerized keyword searches in online databases, often working with a trained librarian or information expert. Keywords can be combined using the Boolean operators, “and”, “or” and sometimes “not”  to narrow down or expand the search results. Once a list of articles is generated from the keyword and subject heading search, the researcher must then manually browse through each title and abstract, to determine the suitability of that article before a full-text article is obtained for the research question.

Literature reviews should be reasonably complete, and not restricted to a few journals, a few years, or a specific methodology or research design. Reviewed articles may be summarized in the form of tables, and can be further structured using organizing frameworks such as a concept matrix.

A well-conducted literature review should indicate whether the initial research questions have already been addressed in the literature, whether there are newer or more interesting research questions available, and whether the original research questions should be modified or changed in light of findings of the literature review.

The review can also provide some intuitions or potential answers to the questions of interest and/or help identify theories that have previously been used to address similar questions and may provide evidence to inform policy or decision-making. ( Bhattacherjee, 2012 ).

what is an exploratory literature review

Read Abstract 1.  Refer to Types of Literature Reviews.  What type of literature review do you think this study is and why?  See the Answer Key for the correct response.

Nursing : To describe evidence of international literature on the safe care of the hospitalised child after the World Alliance for Patient Safety and list contributions of the general theoretical framework of patient safety for paediatric nursing.

An integrative literature review between 2004 and 2015 using the databases PubMed, Cumulative Index of Nursing and Allied Health Literature (CINAHL), Scopus, Web of Science and Wiley Online Library, and the descriptors Safety or Patient safety, Hospitalised child, Paediatric nursing, and Nursing care.

Thirty-two articles were analysed, most of which were from North American, with a descriptive approach. The quality of the recorded information in the medical records, the use of checklists, and the training of health workers contribute to safe care in paediatric nursing and improve the medication process and partnerships with parents.

General information available on patient safety should be incorporated in paediatric nursing care. ( Wegner, Silva, Peres, Bandeira, Frantz, Botene, & Predebon, 2017 ).

Read Abstract 2.  Refer to Types of Literature Reviews.  What type of lit review do you think this study is and why?  See the Answer Key for the correct response.

Education : The focus of this paper centers around timing associated with early childhood education programs and interventions using meta-analytic methods. At any given assessment age, a child’s current age equals starting age, plus duration of program, plus years since program ended. Variability in assessment ages across the studies should enable everyone to identify the separate effects of all three time-related components. The project is a meta-analysis of evaluation studies of early childhood education programs conducted in the United States and its territories between 1960 and 2007. The population of interest is children enrolled in early childhood education programs between the ages of 0 and 5 and their control-group counterparts. Since the data come from a meta-analysis, the population for this study is drawn from many different studies with diverse samples. Given the preliminary nature of their analysis, the authors cannot offer conclusions at this point. ( Duncan, Leak, Li, Magnuson, Schindler, & Yoshikawa, 2011 ).

Test Yourself

See Answer Key for the correct responses.

The purpose of a graduate-level literature review is to summarize in as many words as possible everything that is known about my topic.

A literature review is significant because in the process of doing one, the researcher learns to read and critically assess the literature of a discipline and then uses it appropriately to advance his/her own research.

Read the following abstract and choose the correct type of literature review it represents.

Nursing: E-cigarette use has become increasingly popular, especially among the young. Its long-term influence upon health is unknown. Aim of this review has been to present the current state of knowledge about the impact of e-cigarette use on health, with an emphasis on Central and Eastern Europe. During the preparation of this narrative review, the literature on e-cigarettes available within the network PubMed was retrieved and examined. In the final review, 64 research papers were included. We specifically assessed the construction and operation of the e-cigarette as well as the chemical composition of the e-liquid; the impact that vapor arising from the use of e-cigarette explored in experimental models in vitro; and short-term effects of use of e-cigarettes on users’ health. Among the substances inhaled by the e-smoker, there are several harmful products, such as: formaldehyde, acetaldehyde, acroleine, propanal, nicotine, acetone, o-methyl-benzaldehyde, carcinogenic nitrosamines. Results from experimental animal studies indicate the negative impact of e-cigarette exposure on test models, such as ascytotoxicity, oxidative stress, inflammation, airway hyper reactivity, airway remodeling, mucin production, apoptosis, and emphysematous changes. The short-term impact of e-cigarettes on human health has been studied mostly in experimental setting. Available evidence shows that the use of e-cigarettes may result in acute lung function responses (e.g., increase in impedance, peripheral airway flow resistance) and induce oxidative stress. Based on the current available evidence, e-cigarette use is associated with harmful biologic responses, although it may be less harmful than traditional cigarettes. (J ankowski, Brożek, Lawson, Skoczyński, & Zejda, 2017 ).

  • Meta-analysis
  • Exploratory

Education: In this review, Mary Vorsino writes that she is interested in keeping the potential influences of women pragmatists of Dewey’s day in mind while presenting modern feminist re readings of Dewey. She wishes to construct a narrowly-focused and succinct literature review of thinkers who have donned a feminist lens to analyze Dewey’s approaches to education, learning, and democracy and to employ Dewey’s works in theorizing on gender and education and on gender in society. This article first explores Dewey as both an ally and a problematic figure in feminist literature and then investigates the broader sphere of feminist pragmatism and two central themes within it: (1) valuing diversity, and diverse experiences; and (2) problematizing fixed truths. ( Vorsino, 2015 ).

Image Attributions

Literature Reviews for Education and Nursing Graduate Students Copyright © by Linda Frederiksen is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Share This Book

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • How to Write a Literature Review | Guide, Examples, & Templates

How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes . Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic .

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes , and critically evaluates to give a clear picture of the state of knowledge on the subject.

Instantly correct all language mistakes in your text

Upload your document to correct all your mistakes in minutes

upload-your-document-ai-proofreader

Table of contents

What is the purpose of a literature review, examples of literature reviews, step 1 – search for relevant literature, step 2 – evaluate and select sources, step 3 – identify themes, debates, and gaps, step 4 – outline your literature review’s structure, step 5 – write your literature review, free lecture slides, other interesting articles, frequently asked questions, introduction.

  • Quick Run-through
  • Step 1 & 2

When you write a thesis , dissertation , or research paper , you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.

Literature review guide

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

what is an exploratory literature review

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Before you begin searching for literature, you need a clearly defined topic .

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions .

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use boolean operators to help narrow down your search.

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible , and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using. Click on either button below to download.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

The only proofreading tool specialized in correcting academic writing - try for free!

The academic proofreading tool has been trained on 1000s of academic texts and by native English editors. Making it the most accurate and reliable proofreading tool for students.

what is an exploratory literature review

Try for free

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text , your literature review should have an introduction , a main body, and a conclusion . What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services !

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

Open Google Slides Download PowerPoint

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation . After the introduction , it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology .

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/dissertation/literature-review/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, what is a theoretical framework | guide to organizing, what is a research methodology | steps & tips, how to write a research proposal | examples & templates, what is your plagiarism score.

U.S. flag

An official website of the United States government

The .gov means it's official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you're on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings
  • Browse Titles

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Cover of Handbook of eHealth Evaluation: An Evidence-based Approach

Handbook of eHealth Evaluation: An Evidence-based Approach [Internet].

Chapter 9 methods for literature reviews.

Guy Paré and Spyros Kitsiou .

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the fields of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis. Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ). Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ). Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study or considering through domain-based evaluations which study components have or have not been designed and executed appropriately makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ). Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ). Analyzing and synthesizing data : As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).

9.3. Types of Review Articles and Brief Illustrations

EHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired from Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the particular eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics of issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health ( m-health ) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems are carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar . The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in medline using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011) . These authors reviewed the existing literature on personal health record ( phr ) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to phr s were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described phr s. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of phr benefits and satisfaction for patients, more research is needed to evaluate the results of phr implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies through the use of phr s. Hence, they suggested that more research is needed that addresses the current lack of understanding of optimal functionality and usability of these systems, and how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; Deshazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meet a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service ( sms ) or Multimedia Messaging Service ( mms ) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all rct s that are eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed inde­­pen­dently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight rct s involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as qualitative systematic review.

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guide­lines for systematic reviews, these authors: (a) developed and registered with prospero ( www.crd.york.ac.uk/ prospero / ) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. To this end, the authors resorted to using narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).

Typology of Literature Reviews (adapted from Paré et al., 2015).

As shown in Table 9.1 , each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, in order to select the most appropriate type of review, it is critical to know before embarking on a review project, why the research synthesis is conducted and what type of methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence ( Grady et al., 2011 ; Lyden et al., 2013 ), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by a comprehensive documentation of the literature search process, extraction, coding and analysis performed in the review. Whether the search is comprehensive or not, whether it involves a methodical approach for data extraction and synthesis or not, it is important that the review documents in an explicit and transparent manner the steps and approach that were used in the process of its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015) which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems ( ecis 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, nj : John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. bmc Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in mesh . bmc Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards ( rameses ). bmc Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database System Review. 2013; 12 cd 007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, nj : Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. ebse Technical Report Version 2.3. Keele & Durham. uk : Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The prisma statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. bmc Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. bmc Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems ; 2011. Retrieved from http://aisel ​.aisnet.org/cgi/viewcontent ​.cgi?article ​=1221&context ​=ecis2011 .
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. ssrn Electronic Journal. 2010
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, ma : Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. amstar is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. bmc Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. bmc Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems ( ecis 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

  • Cite this Page Paré G, Kitsiou S. Chapter 9 Methods for Literature Reviews. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.
  • PDF version of this title (4.5M)
  • Disable Glossary Links

In this Page

  • Introduction
  • Overview of the Literature Review Process and Steps
  • Types of Review Articles and Brief Illustrations
  • Concluding Remarks

Related information

  • PMC PubMed Central citations
  • PubMed Links to PubMed

Recent Activity

  • Chapter 9 Methods for Literature Reviews - Handbook of eHealth Evaluation: An Ev... Chapter 9 Methods for Literature Reviews - Handbook of eHealth Evaluation: An Evidence-based Approach

Your browsing activity is empty.

Activity recording is turned off.

Turn recording back on

Connect with NLM

National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894

Web Policies FOIA HHS Vulnerability Disclosure

Help Accessibility Careers

statistics

News alert: UC Berkeley has announced its next university librarian

Secondary menu

  • Log in to your Library account
  • Hours and Maps
  • Connect from Off Campus
  • UC Berkeley Home

Search form

Conducting a literature review: why do a literature review, why do a literature review.

  • How To Find "The Literature"
  • Found it -- Now What?

Besides the obvious reason for students -- because it is assigned! -- a literature review helps you explore the research that has come before you, to see how your research question has (or has not) already been addressed.

You identify:

  • core research in the field
  • experts in the subject area
  • methodology you may want to use (or avoid)
  • gaps in knowledge -- or where your research would fit in

It Also Helps You:

  • Publish and share your findings
  • Justify requests for grants and other funding
  • Identify best practices to inform practice
  • Set wider context for a program evaluation
  • Compile information to support community organizing

Great brief overview, from NCSU

Want To Know More?

Cover Art

  • Next: How To Find "The Literature" >>
  • Last Updated: Dec 8, 2023 10:11 AM
  • URL: https://guides.lib.berkeley.edu/litreview
  • Licensing Information
  • Contributing Authors
  • 1. Let's Get Writing
  • 1.1. The 5 C Guidelines
  • 1.2. How to Write Articles Quickly and Expertly
  • 2. Critical Thinking
  • 2.1. Critical Thinking in the Classroom
  • 2.2. Necessary and Sufficient Conditions
  • 2.3. Good Logic
  • 3. APA for Novices
  • 3.1. Hoops and Barriers
  • 3.2. Crafts and Puzzles
  • 3.3. The Papers Trail
  • 3.4. The Fine Art of Sentencing
  • 3.5. Hurdles
  • 3.6. Small Stressors
  • 4. Literature Reviews
  • 4.1. Introduction to Literature Reviews
  • 4.2. What is a Literature Review?
  • 4.3. How to Get Started
  • 4.4. Where to Find the Literature
  • 4.5. Evaluating Sources
  • 4.6. Documenting Sources
  • 4.7. Synthesizing Sources
  • 4.8. Writing the Literature Review
  • 4.9. Concluding Thoughts on Literature Reviews
  • Technical Tutorials
  • Constructing an Annotated Bibliography with Zotero
  • Extracting Resource Metadata from a Citation List with AnyStyle.io
  • Exporting Zotero to a Spreadsheet
  • APA 7 Job Aid
  • Index of Topics
  • Translations

Introduction to Literature Reviews

Choose a sign-in option.

Tools and Settings

Questions and Tasks

Citation and Embed Code

what is an exploratory literature review

Learning Objectives

At the conclusion of this chapter, you will be able to:

  • Identify the purpose of the literature review in  the research process;
  • Distinguish between different types of literature reviews.

What is a Literature Review?

Pick up nearly any book on research methods and you will find a description of a literature review.  At a basic level, the term implies a survey of factual or nonfiction books, articles, and other documents published on a particular subject.  Definitions may be similar across the disciplines, with new types and definitions continuing to emerge.  Generally speaking, a literature review is a:

  • “comprehensive background of the literature within the interested topic area…” ( O’Gorman & MacIntosh, 2015, p. 31 [https://edtechbooks.org/-EaoJ] ).
  • “critical component of the research process that provides an in-depth analysis of recently published research findings in specifically identified areas of interest.” ( House, 2018, p. 109 [https://edtechbooks.org/-EaoJ] ).
  • “written document that presents a logically argued case founded on a comprehensive understanding of the current state of knowledge about a topic of study” ( Machi & McEvoy,  2012, p. 4 [https://edtechbooks.org/-EaoJ] ).

As a foundation for knowledge advancement in every discipline, it is an important element of any research project.  At the graduate or doctoral level, the literature review is an essential feature of thesis and dissertation, as well as grant proposal writing.  That is to say, “A substantive, thorough, sophisticated literature review is a precondition for doing substantive, thorough, sophisticated research…A researcher cannot perform significant research without first understanding the literature in the field.” ( Boote & Beile, 2005, p. 3 [https://edtechbooks.org/-EaoJ] ).  It is by this means, that a researcher demonstrates familiarity with a body of knowledge and thereby establishes credibility with a reader.  An advanced-level literature review shows how prior research is linked to a new project, summarizing and synthesizing what is known while identifying gaps in the knowledge base, facilitating theory development, closing areas where enough research already exists, and uncovering areas where more research is needed. ( Webster & Watson, 2002, p. xiii [https://edtechbooks.org/-EaoJ] )

A graduate-level literature review is a compilation of the most significant previously published research on your topic. Unlike an annotated bibliography or a research paper you may have written as an undergraduate, your literature review will outline, evaluate and synthesize relevant research and relate those sources to your own thesis or research question. It is much more than a summary of all the related literature.

It is a type of writing that demonstrate the importance of your research by defining the main ideas and the relationship between them. A good literature review lays the foundation for the importance of your stated problem and research question.

Literature reviews do the following:

  • define a concept
  • map the research terrain or scope
  • systemize relationships between concepts
  • identify gaps in the literature ( Rocco & Plathotnik, 2009, p. 128 [https://edtechbooks.org/-EaoJ] )

In the context of a research study, the purpose of a literature review is to demonstrate that your research question  is meaningful. Additionally, you may review the literature of different disciplines to find deeper meaning and understanding of your topic. It is especially important to consider other disciplines when you do not find much on your topic in one discipline. You will need to search the cognate literature before claiming there is “little previous research” on your topic.

Well developed literature reviews involve numerous steps and activities. The literature review is an iterative process because you will do at least two of them: a preliminary search to learn what has been published in your area and whether there is sufficient support in the literature for moving ahead with your subject. After this first exploration, you will conduct a deeper dive into the literature to learn everything you can about the topic and its related issues.

Literature Review Tutorial

what is an exploratory literature review

Literature Review Basics

An effective literature review must:

  • Methodologically analyze and synthesize quality literature on a topic
  • Provide a firm foundation to a topic or research area
  • Provide a firm foundation for the selection of a research methodology
  • Demonstrate that the proposed research contributes something new to the overall body of knowledge of advances the research field’s knowledge base. ( Levy & Ellis, 2006 [https://edtechbooks.org/-EaoJ] ).

All literature reviews, whether they are qualitative, quantitative or both, will at some point:

  • Introduce the topic and define its key terms
  • Establish the importance of the topic
  • Provide an overview of the amount of available literature and its types (for example: theoretical, statistical, speculative)
  • Identify gaps in the literature
  • Point out consistent finding across studies
  • Arrive at a synthesis that organizes what is known about a topic
  • Discusses possible implications and directions for future research

Types of Literature Reviews

There are many different types of literature reviews, however there are some shared characteristics or features that all share.  Remember a comprehensive literature review is, at its most fundamental level, an original work based on an extensive critical examination and synthesis of the relevant literature on a topic. As a study of the research on a particular topic, it is arranged by key themes or findings, which may lead up to or link to the  research question.  In some cases, the research question will drive the type of literature review that is undertaken.

The following section includes brief descriptions of the terms used to describe different literature review types with examples of each.   The included citations are open access, Creative Commons licensed or copyright-restricted.

Guided by an understanding of basic issues rather than a research methodology, the writer of a conceptual literature review is looking for key factors, concepts or variables and the presumed relationship between them. The goal of the conceptual literature review is to categorize and describe concepts relevant to the study or topic and outline a relationship between them, including relevant theory and empirical research.

Examples of a Conceptual Review:

  • The formality of learning science in everyday life: A conceptual literature review ( Dohn, 2010 [https://edtechbooks.org/-EaoJ] ).
  • Are we asking the right questions? A conceptual review of the educational development literature in higher education ( Amundsen & Wilson, 2012 [https://edtechbooks.org/-EaoJ] ).

An empirical literature review collects, creates, arranges, and analyzes numeric data reflecting the frequency of themes, topics, authors and/or methods found in existing literature. Empirical literature reviews present their summaries in quantifiable terms using descriptive and inferential statistics.

Examples of an Empirical Review:

  • Impediments of e-learning adoption in higher learning institutions of Tanzania: An empirical review ( Mwakyusa & Mwalyagile, 2016 [https://edtechbooks.org/-EaoJ] ).
  • Exploratory

The purpose of an exploratory review is to provide a broad approach to the topic area. The aim is breadth rather than depth and to get a general feel for the size of the topic area. A graduate student might do an exploratory review of the literature before beginning a more comprehensive one (e.g., synoptic).

Examples of an Exploratory Review:

  • University research management: An exploratory literature review ( Schuetzenmeister, 2010 [https://edtechbooks.org/-EaoJ] ).
  • An exploratory review of design principles in constructivist gaming learning environments ( Rosario & Widmeyer, 2009 [https://edtechbooks.org/-EaoJ] ).

This type of literature review is limited to a single aspect of previous research, such as methodology. A focused literature review generally will describe the implications of choosing a particular element of past research, such as methodology in terms of data collection, analysis, and interpretation.

Examples of a Focused Review:

  • Language awareness: Genre awareness-a focused review of the literature ( Stainton, 1992 [https://edtechbooks.org/-EaoJ] ).

Integrative

An integrative review critiques past research and draws overall conclusions from the body of literature at a specified point in time. As such, it reviews, critiques, and synthesizes representative literature on a topic in an integrated way. Most integrative reviews may require the author to adopt a guiding theory, a set of competing models, or a point of view about a topic.  For more description of integrative reviews, see Whittemore & Knafl (2005) [https://edtechbooks.org/-EaoJ] .

Examples of an Integrative Review:

  • Exploring the gap between teacher certification and permanent employment in Ontario: An integrative literature review ( Brock & Ryan, 2016 [https://edtechbooks.org/-EaoJ] ).
  • Meta-analysis

A subset of a systematic review, a meta-analysis takes findings from several studies on the same subject and analyzes them using standardized statistical procedures to pool together data. As such, it integrates findings from a large body of quantitative findings to enhance understanding, draw conclusions, and detect patterns and relationships. By gathering data from many different, independent studies that look at the same research question and assess similar outcome measures, data can be combined and re-analyzed, providing greater statistical power than any single study alone. It’s important to note that not every systematic review includes a meta-analysis but a meta-analysis can’t exist without a systematic review of the literature.

Examples of a Meta-Analysis:

  • Efficacy of the cooperative learning method on mathematics achievement and attitude: A meta-analysis research ( Capar & Tarim, 2015 [https://edtechbooks.org/-EaoJ] ).
  • Gender differences in student attitudes toward science: A meta-analysis of the literature from 1970 to 1991 ( Weinburgh, 1995 [https://edtechbooks.org/-EaoJ] ).

Narrative/Traditional

A narrative or traditional review provides an overview of research on a particular topic that critiques and summarizes a body of literature. Typically broad in focus, these reviews select and synthesize relevant past research into a coherent discussion. Methodologies, findings and limits of the existing body of knowledge are discussed in narrative form. This requires a sufficiently focused research question, and the process may be subject to bias that supports the researcher’s own work.

Examples of a Narrative/Traditional Review:

  • Adventure education and Outward Bound: Out-of-class experiences that make a lasting difference ( Hattie, Marsh, Neill, & Richards, 1997 [https://edtechbooks.org/-EaoJ] ).
  • Good quality discussion is necessary but not sufficient in asynchronous tuition: A brief narrative review of the literature ( Fear & Erikson-Brown, 2014 [https://edtechbooks.org/-EaoJ] ).

This specific type of literature review is theory-driven and interpretative and is intended to explain the outcomes of a complex intervention program(s).

Examples of a Realist Review:

  • Unravelling quality culture in higher education: A realist review ( Bendermacher, Egbrink, Wolfhagen, & Dolmans, 2017 [https://edtechbooks.org/-EaoJ] ).

This type of review tends to be a non-systematic approach that focuses on breadth of coverage rather than depth. It utilizes a wide range of materials and may not evaluate the quality of the studies as much as count the number. Thus, it aims to identify the nature and extent of research in an area by providing a preliminary assessment of size and scope of available research and may also include research in progress.

Examples of a Scoping Review:

  • Interdisciplinary doctoral research supervision: A scoping review ( Vanstone, Hibbert, Kinsella, McKenzie, Pitman, & Lingard, 2013 [https://edtechbooks.org/-EaoJ] ).

In contrast to an exploratory review, the purpose of a synoptic review is to provide a concise but accurate overview of all material that appears to be relevant to a chosen topic. Both content and methodological material is included. The review should aim to be both descriptive and evaluative as it summarizes previous studies while also showing how the body of literature could be extended and improved in terms of content and method by identifying gaps.

Examples of a Synoptic Review:

  • Theoretical framework for educational assessment: A synoptic review ( Ghaicha, 2016 [https://edtechbooks.org/-EaoJ] ).
  • School effects research: A synoptic review of past efforts and some suggestions for the future ( Cuttance, 1981 [https://edtechbooks.org/-EaoJ] ).

Systematic Review

A rigorous review that follows a strict methodology designed with a presupposed selection of literature reviewed, systematic reviews are undertaken to clarify the state of existing research, evidence, and possible implications that can be drawn.  Using comprehensive and exhaustive searching of the published and unpublished literature, searching various databases, reports, and grey literature, these reviews seek to produce transparent and reproducible results that report details of time frame and methods to minimize bias.  Generally, these reviews must include teams of at least 2-3 to allow for the critical appraisal of the literature.  For more description of systematic reviews, including links to protocols, checklists, workflow processes, and structure see “ A Young Researcher’s Guide to a Systematic Review [https://edtechbooks.org/-oF] “.

Examples of a Systematic Review:

  • The potentials of using cloud computing in schools: A systematic literature review ( Hartmann, Braae, Pedersen, & Khalid, 2017 [https://edtechbooks.org/-EaoJ] ).
  • The use of research to improve professional practice: a systematic review of the literature ( Hemsley-Brown & Sharp, 2003 [https://edtechbooks.org/-EaoJ] ).

Umbrella/Overview of Reviews

An umbrella review compiles evidence from multiple systematic reviews into one document. It therefore focuses on broad conditions or problems for which there are competing interventions and highlights reviews that address those interventions and their effects, thereby allowing for recommendations for practice. For a brief discussion see “ Not all literature reviews are the same [https://edtechbooks.org/-xZ] ” (Thomson, 2013).

Examples of an Umbrella/Overview Review:

  • Reflective practice in healthcare education: An umbrella review ( Fragknos, 2016 [https://edtechbooks.org/-EaoJ] ).

Why do a Literature Review?

The purpose of the literature review is the same regardless of the topic or research method. It tests your own research question against what is already known about the subject.

First – It’s part of the whole.

Omission of a literature review chapter or section in a graduate-level project represents a serious void or absence of a critical element in the research process.

The outcome of your review is expected to demonstrate that you:

  • can systematically explore the research in your topic area
  • can read and critically analyze the literature in your discipline and then use it appropriately to advance your own work
  • have sufficient knowledge in the topic to undertake further investigation

Second – It’s good for you!

  • You improve your skills as a researcher
  • You become familiar with the discourse of your discipline and learn how to be a scholar in your field
  • You learn through writing your ideas and finding your voice in your subject area
  • You define, redefine and clarify your research question for yourself in the process

Third – It’s good for your reader.

Your reader expects you to have done the hard work of gathering, evaluating, and synthesizing the literature.  When you do a literature review you:

  • Set the context for the topic and present its significance
  • Identify what’s important to know about your topic – including individual material, prior research, publications, organizations and authors.
  • Demonstrate relationships among prior research
  • Establish limitations of existing knowledge
  • Analyze trends in the topic’s treatment and gaps in the literature

So, why should you do a literature review?

  • To locate gaps in the literature of your discipline
  • To avoid reinventing the wheel
  • To carry on where others have already been
  • To identify other people working in the same field
  • To increase your breadth of knowledge in your subject area
  • To find the seminal works in your field
  • To provide intellectual context for your own work
  • To acknowledge opposing viewpoints
  • To put your work in perspective
  • To demonstrate you can discover and retrieve previous work in the area

Common Literature Review Errors

Graduate-level literature reviews are more than a summary of the publications you find on a topic.  As you have seen in this brief introduction, literature reviews are a very specific type of research, analysis, and writing.  We will explore these topics more in the next chapters.  Some things to keep in mind as you begin your own research and writing are ways to avoid the most common errors seen in the first attempt at a literature review.  For a quick review of some of the pitfalls and challenges a new researcher faces when he/she begins work, see “ Get Ready: Academic Writing, General Pitfalls and (oh yes) Getting Started! [https://edtechbooks.org/-GUc] ”.

As you begin your own graduate-level literature review, try to avoid these common mistakes:

  • Accepting another researcher’s finding as valid without evaluating methodology and data
  • Ignoring contrary findings and alternative interpretations
  • Providing findings that are not clearly related to one’s own study or that are too general
  • Allowing insufficient time to defining best search strategies and writing
  • Reporting rather than synthesizing isolated statistical results
  • Choosing problematic or irrelevant keywords, subject headings and descriptors
  • Relying too heavily on secondary sources
  • Failing to transparently report search methods
  • Summarizing rather than synthesizing articles

In conclusion, the purpose of a literature review is three-fold:

  • to survey the current state of knowledge or evidence in the area of inquiry,
  • to identify key authors, articles, theories, and findings in that area, and
  • to identify gaps in knowledge in that research area.

A literature review is commonly done today using computerized keyword searches in online databases, often working with a trained librarian or information expert. Keywords can be combined using the Boolean operators, “and”, “or” and sometimes “not”  to narrow down or expand the search results. Once a list of articles is generated from the keyword and subject heading search, the researcher must then manually browse through each title and abstract, to determine the suitability of that article before a full-text article is obtained for the research question.

Literature reviews should be reasonably complete and not restricted to a few journals, a few years, or a specific methodology or research design. Reviewed articles may be summarized in the form of tables and can be further structured using organizing frameworks such as a concept matrix.

A well-conducted literature review should indicate whether the initial research questions have already been addressed in the literature, whether there are newer or more interesting research questions available, and whether the original research questions should be modified or changed in light of findings of the literature review.

The review can also provide some intuitions or potential answers to the questions of interest and/or help identify theories that have previously been used to address similar questions and may provide evidence to inform policy or decision-making ( Bhattacherjee, 2012 [https://edtechbooks.org/-EaoJ] ).

Test Yourself

The purpose of a graduate-level literature review is to summarize in as many words as possible everything that is known about my topic.

A literature review is significant because in the process of doing one, the researcher learns to read and critically assess the literature of a discipline and then uses it appropriately to advance his/her own research.

Read the following abstract and choose the correct type of literature review it represents.

The focus of this paper centers around timing associated with early childhood education programs and interventions using meta-analytic methods. At any given assessment age, a child’s current age equals starting age, plus duration of program, plus years since program ended. Variability in assessment ages across the studies should enable everyone to identify the separate effects of all three time-related components. The project is a meta-analysis of evaluation studies of early childhood education programs conducted in the United States and its territories between 1960 and 2007. The population of interest is children enrolled in early childhood education programs between the ages of 0 and 5 and their control-group counterparts. Since the data come from a meta-analysis, the population for this study is drawn from many different studies with diverse samples. Given the preliminary nature of their analysis, the authors cannot offer conclusions at this point. ( Duncan, Leak, Li, Magnuson, Schindler, & Yoshikawa, 2011 [https://edtechbooks.org/-EaoJ] ).

In this review, Mary Vorsino writes that she is interested in keeping the potential influences of women pragmatists of Dewey’s day in mind while presenting modern feminist re readings of Dewey. She wishes to construct a narrowly-focused and succinct literature review of thinkers who have donned a feminist lens to analyze Dewey’s approaches to education, learning, and democracy and to employ Dewey’s works in theorizing on gender and education and on gender in society. This article first explores Dewey as both an ally and a problematic figure in feminist literature and then investigates the broader sphere of feminist pragmatism and two central themes within it: (1) valuing diversity, and diverse experiences; and (2) problematizing fixed truths. ( Vorsino, 2015 [https://edtechbooks.org/-EaoJ] ).

Linda Frederiksen is the Head of Access Services at Washington State University Vancouver.  She has a Master of Library Science degree from Emporia State University in Kansas. Linda is active in local, regional and national organizations, projects and initiatives advancing open educational resources and equitable access to information.

Sue F. Phelps is the Health Sciences and Outreach Services Librarian at Washington State University Vancouver. Her research interests include information literacy, accessibility of learning materials for students who use adaptive technology, diversity and equity in higher education, and evidence based practice in the health sciences

what is an exploratory literature review

Brigham Young University

This content is provided to you freely by EdTech Books.

Access it online or download it at https://edtechbooks.org/rapidwriting/lit_rev_intro .

Duke University Libraries

Literature Reviews

  • Getting started

What is a literature review?

Why conduct a literature review, stages of a literature review, lit reviews: an overview (video), check out these books.

  • Types of reviews
  • 1. Define your research question
  • 2. Plan your search
  • 3. Search the literature
  • 4. Organize your results
  • 5. Synthesize your findings
  • 6. Write the review
  • Artificial intelligence (AI) tools
  • Thompson Writing Studio This link opens in a new window
  • Need to write a systematic review? This link opens in a new window

what is an exploratory literature review

Contact a Librarian

Ask a Librarian

Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject.

Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field.

Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in academic literature.

Identifying Gaps: Aims to pinpoint areas where there is a lack of research or unresolved questions, highlighting opportunities for further investigation.

Contextualization: Enables researchers to understand how their work fits into the broader academic conversation and contributes to the existing body of knowledge.

what is an exploratory literature review

tl;dr  A literature review critically examines and synthesizes existing scholarly research and publications on a specific topic to provide a comprehensive understanding of the current state of knowledge in the field.

What is a literature review NOT?

❌ An annotated bibliography

❌ Original research

❌ A summary

❌ Something to be conducted at the end of your research

❌ An opinion piece

❌ A chronological compilation of studies

The reason for conducting a literature review is to:

what is an exploratory literature review

Literature Reviews: An Overview for Graduate Students

While this 9-minute video from NCSU is geared toward graduate students, it is useful for anyone conducting a literature review.

what is an exploratory literature review

Writing the literature review: A practical guide

Available 3rd floor of Perkins

what is an exploratory literature review

Writing literature reviews: A guide for students of the social and behavioral sciences

Available online!

what is an exploratory literature review

So, you have to write a literature review: A guided workbook for engineers

what is an exploratory literature review

Telling a research story: Writing a literature review

what is an exploratory literature review

The literature review: Six steps to success

what is an exploratory literature review

Systematic approaches to a successful literature review

Request from Duke Medical Center Library

what is an exploratory literature review

Doing a systematic review: A student's guide

  • Next: Types of reviews >>
  • Last Updated: Apr 3, 2024 12:40 PM
  • URL: https://guides.library.duke.edu/lit-reviews

Duke University Libraries

Services for...

  • Faculty & Instructors
  • Graduate Students
  • Undergraduate Students
  • International Students
  • Patrons with Disabilities

Twitter

  • Harmful Language Statement
  • Re-use & Attribution / Privacy
  • Support the Libraries

Creative Commons License

Get science-backed answers as you write with Paperpal's Research feature

What is a Literature Review? How to Write It (with Examples)

literature review

A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship, demonstrating your understanding of the topic and showing how your work contributes to the ongoing conversation in the field. Learning how to write a literature review is a critical tool for successful research. Your ability to summarize and synthesize prior research pertaining to a certain topic demonstrates your grasp on the topic of study, and assists in the learning process. 

Table of Contents

  • What is the purpose of literature review? 
  • a. Habitat Loss and Species Extinction: 
  • b. Range Shifts and Phenological Changes: 
  • c. Ocean Acidification and Coral Reefs: 
  • d. Adaptive Strategies and Conservation Efforts: 
  • How to write a good literature review 
  • Choose a Topic and Define the Research Question: 
  • Decide on the Scope of Your Review: 
  • Select Databases for Searches: 
  • Conduct Searches and Keep Track: 
  • Review the Literature: 
  • Organize and Write Your Literature Review: 
  • Frequently asked questions 

What is a literature review?

A well-conducted literature review demonstrates the researcher’s familiarity with the existing literature, establishes the context for their own research, and contributes to scholarly conversations on the topic. One of the purposes of a literature review is also to help researchers avoid duplicating previous work and ensure that their research is informed by and builds upon the existing body of knowledge.

what is an exploratory literature review

What is the purpose of literature review?

A literature review serves several important purposes within academic and research contexts. Here are some key objectives and functions of a literature review: 2  

  • Contextualizing the Research Problem: The literature review provides a background and context for the research problem under investigation. It helps to situate the study within the existing body of knowledge. 
  • Identifying Gaps in Knowledge: By identifying gaps, contradictions, or areas requiring further research, the researcher can shape the research question and justify the significance of the study. This is crucial for ensuring that the new research contributes something novel to the field. 
  • Understanding Theoretical and Conceptual Frameworks: Literature reviews help researchers gain an understanding of the theoretical and conceptual frameworks used in previous studies. This aids in the development of a theoretical framework for the current research. 
  • Providing Methodological Insights: Another purpose of literature reviews is that it allows researchers to learn about the methodologies employed in previous studies. This can help in choosing appropriate research methods for the current study and avoiding pitfalls that others may have encountered. 
  • Establishing Credibility: A well-conducted literature review demonstrates the researcher’s familiarity with existing scholarship, establishing their credibility and expertise in the field. It also helps in building a solid foundation for the new research. 
  • Informing Hypotheses or Research Questions: The literature review guides the formulation of hypotheses or research questions by highlighting relevant findings and areas of uncertainty in existing literature. 

Literature review example

Let’s delve deeper with a literature review example: Let’s say your literature review is about the impact of climate change on biodiversity. You might format your literature review into sections such as the effects of climate change on habitat loss and species extinction, phenological changes, and marine biodiversity. Each section would then summarize and analyze relevant studies in those areas, highlighting key findings and identifying gaps in the research. The review would conclude by emphasizing the need for further research on specific aspects of the relationship between climate change and biodiversity. The following literature review template provides a glimpse into the recommended literature review structure and content, demonstrating how research findings are organized around specific themes within a broader topic. 

Literature Review on Climate Change Impacts on Biodiversity:

Climate change is a global phenomenon with far-reaching consequences, including significant impacts on biodiversity. This literature review synthesizes key findings from various studies: 

a. Habitat Loss and Species Extinction:

Climate change-induced alterations in temperature and precipitation patterns contribute to habitat loss, affecting numerous species (Thomas et al., 2004). The review discusses how these changes increase the risk of extinction, particularly for species with specific habitat requirements. 

b. Range Shifts and Phenological Changes:

Observations of range shifts and changes in the timing of biological events (phenology) are documented in response to changing climatic conditions (Parmesan & Yohe, 2003). These shifts affect ecosystems and may lead to mismatches between species and their resources. 

c. Ocean Acidification and Coral Reefs:

The review explores the impact of climate change on marine biodiversity, emphasizing ocean acidification’s threat to coral reefs (Hoegh-Guldberg et al., 2007). Changes in pH levels negatively affect coral calcification, disrupting the delicate balance of marine ecosystems. 

d. Adaptive Strategies and Conservation Efforts:

Recognizing the urgency of the situation, the literature review discusses various adaptive strategies adopted by species and conservation efforts aimed at mitigating the impacts of climate change on biodiversity (Hannah et al., 2007). It emphasizes the importance of interdisciplinary approaches for effective conservation planning. 

what is an exploratory literature review

How to write a good literature review

Writing a literature review involves summarizing and synthesizing existing research on a particular topic. A good literature review format should include the following elements. 

Introduction: The introduction sets the stage for your literature review, providing context and introducing the main focus of your review. 

  • Opening Statement: Begin with a general statement about the broader topic and its significance in the field. 
  • Scope and Purpose: Clearly define the scope of your literature review. Explain the specific research question or objective you aim to address. 
  • Organizational Framework: Briefly outline the structure of your literature review, indicating how you will categorize and discuss the existing research. 
  • Significance of the Study: Highlight why your literature review is important and how it contributes to the understanding of the chosen topic. 
  • Thesis Statement: Conclude the introduction with a concise thesis statement that outlines the main argument or perspective you will develop in the body of the literature review. 

Body: The body of the literature review is where you provide a comprehensive analysis of existing literature, grouping studies based on themes, methodologies, or other relevant criteria. 

  • Organize by Theme or Concept: Group studies that share common themes, concepts, or methodologies. Discuss each theme or concept in detail, summarizing key findings and identifying gaps or areas of disagreement. 
  • Critical Analysis: Evaluate the strengths and weaknesses of each study. Discuss the methodologies used, the quality of evidence, and the overall contribution of each work to the understanding of the topic. 
  • Synthesis of Findings: Synthesize the information from different studies to highlight trends, patterns, or areas of consensus in the literature. 
  • Identification of Gaps: Discuss any gaps or limitations in the existing research and explain how your review contributes to filling these gaps. 
  • Transition between Sections: Provide smooth transitions between different themes or concepts to maintain the flow of your literature review. 

Conclusion: The conclusion of your literature review should summarize the main findings, highlight the contributions of the review, and suggest avenues for future research. 

  • Summary of Key Findings: Recap the main findings from the literature and restate how they contribute to your research question or objective. 
  • Contributions to the Field: Discuss the overall contribution of your literature review to the existing knowledge in the field. 
  • Implications and Applications: Explore the practical implications of the findings and suggest how they might impact future research or practice. 
  • Recommendations for Future Research: Identify areas that require further investigation and propose potential directions for future research in the field. 
  • Final Thoughts: Conclude with a final reflection on the importance of your literature review and its relevance to the broader academic community. 

what is a literature review

Conducting a literature review

Conducting a literature review is an essential step in research that involves reviewing and analyzing existing literature on a specific topic. It’s important to know how to do a literature review effectively, so here are the steps to follow: 1  

Choose a Topic and Define the Research Question:

  • Select a topic that is relevant to your field of study. 
  • Clearly define your research question or objective. Determine what specific aspect of the topic do you want to explore? 

Decide on the Scope of Your Review:

  • Determine the timeframe for your literature review. Are you focusing on recent developments, or do you want a historical overview? 
  • Consider the geographical scope. Is your review global, or are you focusing on a specific region? 
  • Define the inclusion and exclusion criteria. What types of sources will you include? Are there specific types of studies or publications you will exclude? 

Select Databases for Searches:

  • Identify relevant databases for your field. Examples include PubMed, IEEE Xplore, Scopus, Web of Science, and Google Scholar. 
  • Consider searching in library catalogs, institutional repositories, and specialized databases related to your topic. 

Conduct Searches and Keep Track:

  • Develop a systematic search strategy using keywords, Boolean operators (AND, OR, NOT), and other search techniques. 
  • Record and document your search strategy for transparency and replicability. 
  • Keep track of the articles, including publication details, abstracts, and links. Use citation management tools like EndNote, Zotero, or Mendeley to organize your references. 

Review the Literature:

  • Evaluate the relevance and quality of each source. Consider the methodology, sample size, and results of studies. 
  • Organize the literature by themes or key concepts. Identify patterns, trends, and gaps in the existing research. 
  • Summarize key findings and arguments from each source. Compare and contrast different perspectives. 
  • Identify areas where there is a consensus in the literature and where there are conflicting opinions. 
  • Provide critical analysis and synthesis of the literature. What are the strengths and weaknesses of existing research? 

Organize and Write Your Literature Review:

  • Literature review outline should be based on themes, chronological order, or methodological approaches. 
  • Write a clear and coherent narrative that synthesizes the information gathered. 
  • Use proper citations for each source and ensure consistency in your citation style (APA, MLA, Chicago, etc.). 
  • Conclude your literature review by summarizing key findings, identifying gaps, and suggesting areas for future research. 

The literature review sample and detailed advice on writing and conducting a review will help you produce a well-structured report. But remember that a literature review is an ongoing process, and it may be necessary to revisit and update it as your research progresses. 

Frequently asked questions

A literature review is a critical and comprehensive analysis of existing literature (published and unpublished works) on a specific topic or research question and provides a synthesis of the current state of knowledge in a particular field. A well-conducted literature review is crucial for researchers to build upon existing knowledge, avoid duplication of efforts, and contribute to the advancement of their field. It also helps researchers situate their work within a broader context and facilitates the development of a sound theoretical and conceptual framework for their studies.

Literature review is a crucial component of research writing, providing a solid background for a research paper’s investigation. The aim is to keep professionals up to date by providing an understanding of ongoing developments within a specific field, including research methods, and experimental techniques used in that field, and present that knowledge in the form of a written report. Also, the depth and breadth of the literature review emphasizes the credibility of the scholar in his or her field.  

Before writing a literature review, it’s essential to undertake several preparatory steps to ensure that your review is well-researched, organized, and focused. This includes choosing a topic of general interest to you and doing exploratory research on that topic, writing an annotated bibliography, and noting major points, especially those that relate to the position you have taken on the topic. 

Literature reviews and academic research papers are essential components of scholarly work but serve different purposes within the academic realm. 3 A literature review aims to provide a foundation for understanding the current state of research on a particular topic, identify gaps or controversies, and lay the groundwork for future research. Therefore, it draws heavily from existing academic sources, including books, journal articles, and other scholarly publications. In contrast, an academic research paper aims to present new knowledge, contribute to the academic discourse, and advance the understanding of a specific research question. Therefore, it involves a mix of existing literature (in the introduction and literature review sections) and original data or findings obtained through research methods. 

Literature reviews are essential components of academic and research papers, and various strategies can be employed to conduct them effectively. If you want to know how to write a literature review for a research paper, here are four common approaches that are often used by researchers.  Chronological Review: This strategy involves organizing the literature based on the chronological order of publication. It helps to trace the development of a topic over time, showing how ideas, theories, and research have evolved.  Thematic Review: Thematic reviews focus on identifying and analyzing themes or topics that cut across different studies. Instead of organizing the literature chronologically, it is grouped by key themes or concepts, allowing for a comprehensive exploration of various aspects of the topic.  Methodological Review: This strategy involves organizing the literature based on the research methods employed in different studies. It helps to highlight the strengths and weaknesses of various methodologies and allows the reader to evaluate the reliability and validity of the research findings.  Theoretical Review: A theoretical review examines the literature based on the theoretical frameworks used in different studies. This approach helps to identify the key theories that have been applied to the topic and assess their contributions to the understanding of the subject.  It’s important to note that these strategies are not mutually exclusive, and a literature review may combine elements of more than one approach. The choice of strategy depends on the research question, the nature of the literature available, and the goals of the review. Additionally, other strategies, such as integrative reviews or systematic reviews, may be employed depending on the specific requirements of the research.

The literature review format can vary depending on the specific publication guidelines. However, there are some common elements and structures that are often followed. Here is a general guideline for the format of a literature review:  Introduction:   Provide an overview of the topic.  Define the scope and purpose of the literature review.  State the research question or objective.  Body:   Organize the literature by themes, concepts, or chronology.  Critically analyze and evaluate each source.  Discuss the strengths and weaknesses of the studies.  Highlight any methodological limitations or biases.  Identify patterns, connections, or contradictions in the existing research.  Conclusion:   Summarize the key points discussed in the literature review.  Highlight the research gap.  Address the research question or objective stated in the introduction.  Highlight the contributions of the review and suggest directions for future research.

Both annotated bibliographies and literature reviews involve the examination of scholarly sources. While annotated bibliographies focus on individual sources with brief annotations, literature reviews provide a more in-depth, integrated, and comprehensive analysis of existing literature on a specific topic. The key differences are as follows: 

References 

  • Denney, A. S., & Tewksbury, R. (2013). How to write a literature review.  Journal of criminal justice education ,  24 (2), 218-234. 
  • Pan, M. L. (2016).  Preparing literature reviews: Qualitative and quantitative approaches . Taylor & Francis. 
  • Cantero, C. (2019). How to write a literature review.  San José State University Writing Center . 

Paperpal is an AI writing assistant that help academics write better, faster with real-time suggestions for in-depth language and grammar correction. Trained on millions of research manuscripts enhanced by professional academic editors, Paperpal delivers human precision at machine speed.  

Try it for free or upgrade to  Paperpal Prime , which unlocks unlimited access to premium features like academic translation, paraphrasing, contextual synonyms, consistency checks and more. It’s like always having a professional academic editor by your side! Go beyond limitations and experience the future of academic writing.  Get Paperpal Prime now at just US$19 a month!

Related Reads:

  • Empirical Research: A Comprehensive Guide for Academics 
  • How to Write a Scientific Paper in 10 Steps 
  • Life Sciences Papers: 9 Tips for Authors Writing in Biological Sciences
  • What is an Argumentative Essay? How to Write It (With Examples)

6 Tips for Post-Doc Researchers to Take Their Career to the Next Level

Self-plagiarism in research: what it is and how to avoid it, you may also like, what is academic writing: tips for students, why traditional editorial process needs an upgrade, paperpal’s new ai research finder empowers authors to..., what is hedging in academic writing  , how to use ai to enhance your college..., ai + human expertise – a paradigm shift..., how to use paperpal to generate emails &..., ai in education: it’s time to change the..., is it ethical to use ai-generated abstracts without..., do plagiarism checkers detect ai content.

University of Texas

  • University of Texas Libraries

Literature Reviews

  • What is a literature review?
  • Steps in the Literature Review Process
  • Define your research question
  • Determine inclusion and exclusion criteria
  • Choose databases and search
  • Review Results
  • Synthesize Results
  • Analyze Results
  • Librarian Support

What is a Literature Review?

A literature or narrative review is a comprehensive review and analysis of the published literature on a specific topic or research question. The literature that is reviewed contains: books, articles, academic articles, conference proceedings, association papers, and dissertations. It contains the most pertinent studies and points to important past and current research and practices. It provides background and context, and shows how your research will contribute to the field. 

A literature review should: 

  • Provide a comprehensive and updated review of the literature;
  • Explain why this review has taken place;
  • Articulate a position or hypothesis;
  • Acknowledge and account for conflicting and corroborating points of view

From  S age Research Methods

Purpose of a Literature Review

A literature review can be written as an introduction to a study to:

  • Demonstrate how a study fills a gap in research
  • Compare a study with other research that's been done

Or it can be a separate work (a research article on its own) which:

  • Organizes or describes a topic
  • Describes variables within a particular issue/problem

Limitations of a Literature Review

Some of the limitations of a literature review are:

  • It's a snapshot in time. Unlike other reviews, this one has beginning, a middle and an end. There may be future developments that could make your work less relevant.
  • It may be too focused. Some niche studies may miss the bigger picture.
  • It can be difficult to be comprehensive. There is no way to make sure all the literature on a topic was considered.
  • It is easy to be biased if you stick to top tier journals. There may be other places where people are publishing exemplary research. Look to open access publications and conferences to reflect a more inclusive collection. Also, make sure to include opposing views (and not just supporting evidence).

Source: Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal, vol. 26, no. 2, June 2009, pp. 91–108. Wiley Online Library, doi:10.1111/j.1471-1842.2009.00848.x.

Meryl Brodsky : Communication and Information Studies

Hannah Chapman Tripp : Biology, Neuroscience

Carolyn Cunningham : Human Development & Family Sciences, Psychology, Sociology

Larayne Dallas : Engineering

Janelle Hedstrom : Special Education, Curriculum & Instruction, Ed Leadership & Policy ​

Susan Macicak : Linguistics

Imelda Vetter : Dell Medical School

For help in other subject areas, please see the guide to library specialists by subject .

Periodically, UT Libraries runs a workshop covering the basics and library support for literature reviews. While we try to offer these once per academic year, we find providing the recording to be helpful to community members who have missed the session. Following is the most recent recording of the workshop, Conducting a Literature Review. To view the recording, a UT login is required.

  • October 26, 2022 recording
  • Last Updated: Oct 26, 2022 2:49 PM
  • URL: https://guides.lib.utexas.edu/literaturereviews

Creative Commons License

Systematic Reviews and Meta-Analyses: Exploratory Search

  • Get Started

Exploratory Search

  • Where to Search
  • How to Search
  • Grey Literature
  • What about errata and retractions?
  • Eligibility Screening
  • Critical Appraisal
  • Data Extraction
  • Synthesis & Discussion
  • Assess Certainty
  • Share & Archive

Illustrates the exploratory searching phase. Begin with an initial research question; run initial searches in a few databases and web browsers; locate existing and/or in-progress reviews as well as seminal articles; refine the research question and eligibility criteria; rerun initial searches and continue refining scope until you have a usable scope.

Once you have an initial research question, you can develop and refine your research question and eligibility criteria  through exploratory searching . Exploratory searching is also called preliminary, initial, and naive or novice  searching. Regardless of what you call it, it is simply a series of searches conducted prior to starting the review with the goal of producing a well-defined scope with clear demonstration of contribution to the field. This is an iterative process as illustrated in the image to the right. 

Throughout the exploratory search you should collect  A.  existing and in-progress reviews   and B.  seminal articles   related to your scope. Ultimately, this phase should help your team produce : 

  • A clear, well-defined scope 
  • 2-5 Seminal articles to validate your search later on 
  • Context of what has been done to address this question, illustrating that this review does not duplicate existing or in-progress publications

At the bottom of this page, we also suggest C.  where  to start exploratory searching .

A. Existing & in-progress reviews

  • Searching Tips
  • What if my review exists?

Does your review already exist?

Before starting, make sure a systematic review hasn't already addressed, or is in the process of addressing your research question(s). 

You should look for  published systematic reviews , but also check out  review registries and general purpose repositories (e.g.,  Open Science Framework ) where you're more likely to find unpublished or in-progress reviews and/or review protocols. Searching registries will give you a glimpse into the work that is being done currently, but isn't yet complete. Remember, this kind of review is a serious time-commitment, and you don't want to unknowingly duplicate efforts.

Existing and in-progress reviews on similar, non-identical topics can be helpful when identifying where to search and developing the search strategy  and data extraction forms !

Searching tips

In academic journal databases, you can sometimes find a filter for Systematic Reviews , Meta-Analyses , and/or simply Reviews . If a built-in filter doesn't exist, you can add the term "review" to your search.

What if my review already exists?

We pursue systematic reviews and/or meta-analyses to answer a research question. If a review already answers your question, this existing review can be the foundation of your next research project!

Pursuing a different review

Sometimes it is still important to pursue a review, even if your original research question(s) have been answered. What is considered "the same" review is not always clear. Generally speaking, you need to justify that and illustrate how your new review contributes something unique to the field. 

If a review already answers your question, and your team would still like to pursue a review, your team can:

  • Update  reviews that are out of date
  • Enhance reviews with significant limitations / quality concerns
  • Attempt to  replicate  the review 
  • Revise your original question 

Updating a Review

According to Garner, et al., (2016) , "The decision [of whether or not to update] needs to take into account whether the review addresses a current question , uses valid methods , and is well conducted ; and whether there are new relevant methods , new studies , or new information on existing included studies. Given this information, the agency, editors, or authors need to judge whether the update will influence the review findings or credibility sufficiently to justify the effort in updating it."

Additional Guidance

Bashir, R., et al. Time-to-update of systematic reviews relative to the availability of new evidence.   Syst Rev   7,  195 (2018).  https://doi.org/10.1186/s13643-018-0856-9

Garner, P., et al. Panel for updating guidance for systematic reviews (PUGs). (2016). When and how to update systematic reviews: Consensus and checklist . BMJ , i3507. https://doi.org/10.1136/bmj.i3507

Replicating a Review

Review replication is not often pursued due to the amount of time and labor required. However, Pieper, Heß, & Faggion (2021) have developed a framework for replicating, and the replication process is a great learning tool.

What if a restricted / rapid review , or a scoping review exists , but not a systematic review ?

You may find other reviews (e.g., scoping review, restricted (or rapid) review) on your topic exist - that's great, as these might provide further insight to the appropriateness of the systematic review method for your research question(s)  and how to frame your own review approach .

B. Seminal Articles

What is 'seminal work'.

In general, seminal work also called pivotal , landmark , or seed studies, are articles that are central to the research topic and have great importance and influence within the discipline. Seminal articles are likely to be cited frequently in different journal articles, books, dissertations etc. 

Seminal Articles in Systematic Reviews

In systematic reviews and/or meta-analyses, seminal work are the "seed articles" for your specific review - the articles (or other material) you  know  need to be included in your final synthesis. These articles may have sparked the teams interest in pursuing a review or may be identified through the exploratory search.

Seminal articles can be helpful when identifying where to search and developing the search strategy !

C. Where to "Exploratory Search"?

  • General Searches
  • SR-Specific Repositories

Where to Exploratory Search?

In short, where you exploratory search will depend on your research question. In other words, you should consider searching  wherever  you are likely to find material that answers your research question.

In addition to repositories, you'll want to search  academic journal databases that may be relevant to your topic. Consider your topic from perspectives other than your own discipline - it's likely your topic overlaps with several disciplines. For example, if you are examining a public health topic, it may be useful to search databases related to health / medicine  and  social sciences.

You can also use this exploratory phase to determine whether a database is relevant and should be searched as part of your final comprehensive systematic review search strategy, or not.

Check out the VT libraries "A-Z Database List" , or Librarian curated Library Guides related to your discipline!

Hint: Sort "By Subject" to find relevant guides

Web Browsing

Web browsing in Google or Google Scholar   is a great place to start finding seminal works and existing reviews, as well as journals and databases in which you should conduct more robust exploratory searches.

Never rely only on web browsing . While Google Scholar (and Google) are great places to start searching, results are tailored to individual users, are not replicable, and algorithms are not transparent. More guidance for web browsing is located in the "Where to Search" sub-tab of the "Comprehensive Search" section of this guide.

More places to search...

The possibilities of where to exploratory search are endless! Consider searching  anywhere  that seminal articles or existing/in-progress reviews relevant to your scope may exist. Here are a few more places to get you started.

General Purpose Repositories

Researchers use these sites to share unpublished or in-progress research and reviews, procedural documentation, and other grey literature. For example:

  • Open Science Framework (OSF)  

Repositories that contain Preprints

Researchers uses these sites to openly share research , some of which is not yet published (or peer-reviewed), also called 'preprints'. For example: 

Systematic Review Repositories

There are several systematic review repositories that exist - some contain only published reviews , while others include review registrations and protocols . In the following table, we present and link out to some repositories that specifically house systematic reviews and similar evidence synthesis publications .

Campbell Collaboration Registry

"...all registered titles for systematic reviews or evidence and gap maps that have been accepted by the Editor of a Campbell Coordinating Group. When titles progress to protocol stage, the protocol is published in the Campbell Systematic Reviews journal."

Campbell Systematic Review Journal

Both registry and journal include topics related to Business and Management, Climate Solutions, Crime and Justice, Disability, Education, International Development, Knowledge Translation and Implementation, and Social Welfare

Cochrane Database of Systematic Reviews (CDSR)

"...the leading journal and database for systematic reviews in health care. CDSR includes Cochrane Reviews (systematic reviews) and protocols for Cochrane Reviews as well as editorials and supplements."

Cochrane Library

"Cochrane Collaboration produces high-quality systematic reviews in health disciplines. For more detail and specific fields of research, check out the  Cochrane Review Groups and Networks. "

Database of promoting health effectiveness reviews (DoPHER)

"...focussed coverage of systematic and non-systematic reviews of effectiveness in health promotion and public health worldwide. This register currently contains details of over 6,000 reviews of health promotion and public health effectiveness."

Epistemonkios 

"...a collaborative, multilingual database of health evidence. It is the largest source of systematic reviews relevant for health-decision making, and a large source of other types of scientific evidence."

Health Evidence

"...quality-rated systematic reviews evaluating the effectiveness and cost-effectiveness of public health interventions, including cost data."

Joanna Briggs Systematic Review

"...a collection of world-class resources driven by the needs of health professionals and consumers worldwide"

"International database of prospectively registered systematic reviews in health and social care, welfare, public health, education, crime, justice, and international development, where there is a health related outcome."

  • << Previous: Define Scope
  • Next: Protocol >>
  • Last Updated: Apr 12, 2024 12:41 PM
  • URL: https://guides.lib.vt.edu/SRMA
  • Open access
  • Published: 28 May 2018

Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance

  • Britt Hallingberg   ORCID: orcid.org/0000-0001-8016-5793 1 ,
  • Ruth Turley 1 , 4 ,
  • Jeremy Segrott 1 , 2 ,
  • Daniel Wight 3 ,
  • Peter Craig 3 ,
  • Laurence Moore 3 ,
  • Simon Murphy 1 ,
  • Michael Robling 1 , 2 ,
  • Sharon Anne Simpson 3 &
  • Graham Moore 1  

Pilot and Feasibility Studies volume  4 , Article number:  104 ( 2018 ) Cite this article

26k Accesses

95 Citations

66 Altmetric

Metrics details

Evaluations of complex interventions in public health are frequently undermined by problems that can be identified before the effectiveness study stage. Exploratory studies, often termed pilot and feasibility studies, are a key step in assessing the feasibility and value of progressing to an effectiveness study. Such studies can provide vital information to support more robust evaluations, thereby reducing costs and minimising potential harms of the intervention. This systematic review forms the first phase of a wider project to address the need for stand-alone guidance for public health researchers on designing and conducting exploratory studies. The review objectives were to identify and examine existing recommendations concerning when such studies should be undertaken, questions they should answer, suitable methods, criteria for deciding whether to progress to an effectiveness study and appropriate reporting.

We searched for published and unpublished guidance reported between January 2000 and November 2016 via bibliographic databases, websites, citation tracking and expert recommendations. Included papers were thematically synthesized.

The search retrieved 4095 unique records. Thirty papers were included, representing 25 unique sources of guidance/recommendations. Eight themes were identified: pre-requisites for conducting an exploratory study, nomenclature, guidance for intervention assessment, guidance surrounding any future evaluation study design, flexible versus fixed design, progression criteria to a future evaluation study, stakeholder involvement and reporting of exploratory studies. Exploratory studies were described as being concerned with the intervention content, the future evaluation design or both. However, the nomenclature and endorsed methods underpinning these aims were inconsistent across papers. There was little guidance on what should precede or follow an exploratory study and decision-making surrounding this.

Conclusions

Existing recommendations are inconsistent concerning the aims, designs and conduct of exploratory studies, and guidance is lacking on the evidence needed to inform when to proceed to an effectiveness study.

Trial registration

PROSPERO 2016, CRD42016047843

Peer Review reports

Improving public health and disrupting complex problems such as smoking, obesity and mental health requires complex, often multilevel, interventions. Such interventions are often costly and may cause unanticipated harms and therefore require evaluation using the most robust methods available. However, pressure to identify effective interventions can lead to premature commissioning of large effectiveness studies of poorly developed interventions, wasting finite research resources [ 1 , 2 , 3 ]. In the development of pharmaceutical drugs over 80% fail to reach ‘Phase III’ effectiveness trials, even after considerable investment [ 4 ]. With public health interventions, the historical tendency to rush to full evaluation has in some cases led to evaluation failures due to issues which could have been identified at an earlier stage, such as difficulties recruiting sufficient participants [ 5 ]. There is growing consensus that improving the effectiveness of public health interventions relies on attention to their design and feasibility [ 3 , 6 ]. However, what constitutes good practice when deciding when a full evaluation is warranted, what uncertainties should be addressed to inform this decision and how, is unclear. This systematic review aims to synthesize existing sources of guidance for ‘exploratory studies’ which we broadly define as studies intended to generate evidence needed to decide whether and how to proceed with a full-scale effectiveness study. They do this by optimising or assessing the feasibility of the intervention and/or evaluation design that the effectiveness study would use. Hence, our definition includes studies variously referred to throughout the literature as ‘pilot studies’, ‘feasibility studies’ or ‘exploratory trials’. Our definition is consistent with previous work conducted by Eldridge et al. [ 7 , 8 ], who define feasibility as an overarching concept [ 8 ] which assesses; ‘… whether the future trial can be done, should be done, and, if so, how’ (p. 2) [ 7 ]. However, our definition also includes exploratory studies to inform non-randomised evaluations, rather than a sole focus on trials.

The importance of thoroughly establishing the feasibility of intervention and evaluation plans prior to embarking on an expensive, fully powered evaluation was indicated in the Medical Research Council’s (MRC) framework for the development and evaluation of complex interventions to improve health [ 9 , 10 ]. This has triggered shifts in the practice of researchers and funders toward seeking and granting funding for an ever growing number of studies to address feasibility issues. Such studies are however in themselves often expensive [ 11 , 12 ]. While there is a compelling case for such studies, the extent to which this substantial investment in exploratory studies has to date improved the effectiveness and cost-effectiveness of evidence production remains to be firmly established. Where exploratory studies are conducted poorly, this investment may simply lead to expenditure of large amounts of additional public money, and several years’ delay in getting evidence into the hands of decision-makers, without necessarily increasing the likelihood that a future evaluation will provide useful evidence.

The 2000 MRC guidance used the term ‘exploratory trial’ for work conducted prior to a ‘definitive trial’, indicating that it should primarily address issues concerning the optimisation, acceptability and delivery of the intervention [ 13 ]. This included adaptation of the intervention, consideration of variants of the intervention, testing and refinement of delivery method or content, assessment of learning curves and implementation strategies and determining the counterfactual. Other possible purposes of exploratory trials included preliminary assessment of effect size in order to calculate the sample size for the main trial and other trial design parameters, including methods of recruitment, randomisation and follow-up. Updated MRC guidance in 2008 moved away from the sole focus on RCTs (randomised controlled trials) of its predecessor reflecting recognition that not all interventions can be tested using an RCT and that the next most robust methods may sometimes be the best available option [ 10 , 14 ]. Guidance for exploratory studies prior to a full evaluation have, however, often been framed as relevant only where the main evaluation is to be an RCT [ 13 , 15 ].

However, the goals of exploratory studies advocated by research funders have to date varied substantially. For instance, the National Institute for Health Research Evaluation Trials and Studies Coordinating Centre (NETSCC) definitions of feasibility and pilot studies do not include examination of intervention design, delivery or acceptability and do not suggest that modifications to the intervention prior to full-scale evaluation will arise from these phases. However, the NIHR (National Institute of Health Research) portfolio of funded studies indicates various uses of terms such as ‘feasibility trial’, ‘pilot trial’ and ‘exploratory trial’ to describe studies with similar aims, while it is rare for such studies not to include a focus on intervention parameters [ 16 , 17 , 18 ]. Within the research literature, there is considerable divergence over what exploratory studies should be called, what they should achieve, what they should entail, whether and how they should determine progression to future studies and how they should be reported [ 7 , 8 , 19 , 20 , 21 ].

This paper presents a systematic review of the existing recommendations and guidance on exploratory studies relevant to public health, conducted as the first stage of a project to develop new MRC guidance on exploratory studies. This review aims to produce a synthesis of current guidance/recommendations in relation to the definition, purpose and content of exploratory studies, and what is seen as ‘good’ and ‘bad’ practice as presented by the authors. It will provide an overview of key gaps and areas in which there is inconsistency within and between documents. The rationale for guidance and recommendations are presented, as well as the theoretical perspectives informing them. In particular, we examine how far the existing recommendations answer the following questions:

When is it appropriate to conduct an exploratory study?

What questions should such studies address?

What are the key methodological considerations in answering these questions?

What criteria should inform a decision on whether to progress to an effectiveness study?

How should exploratory studies be reported?

This review is reported in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [ 22 ] as evidenced in the PRISMA checklist (see Additional file  1 : Table S1). The review protocol is registered on PROSPERO (registration number: CRD42016047843; www.crd.york.ac.uk/prospero ).

Literature search

A comprehensive search (see Additional file  2 : Appendix) was designed and completed during August to November 2016 to identify published and grey literature reported between January 2000 and November 2016 that contained guidance and recommendations on exploratory studies that could have potential relevance to public health. Bibliographic databases were CINAHL, Embase, MEDLINE, MEDLINE-In-process, PsycINFO, Web of Science and PubMed. Supplementary searches included key websites (see Additional file  2 : Appendix) and forward and backward citation tracking of included papers, as well as contacting experts in the field. The first MRC guidance on developing and evaluating complex interventions in health was published in 2000; we therefore excluded guidance published before this year.

Selection of included papers

Search results were exported into reference management software Endnote and clearly irrelevant or duplicate records removed by an information specialist. Eligibility criteria were applied to abstracts and potentially relevant full-text papers by two reviewers working independently in duplicate (BH, JS). Discrepancies were agreed by consensus or by a third reviewer if necessary. Full criteria are shown in Table  1 . During screening of eligible studies, it became evident that determining whether or not guidance was applicable to public health was not always clear. The criteria in Table  1 were agreed by the team after a list of potentially eligible publications were identified.

Quality assessment of included papers

Given the nature of publications included (expert guidance or methodological discussion papers) quality assessment was not applicable.

Data extraction and thematic synthesis

A thematic synthesis of guidance within included documents was performed [ 23 ]. This involved the use of an a priori coding framework (based on the projects aims and objectives), developed by RT, JS and DW ([ 24 ], see Additional file  2 : Appendix). Data were extracted using this schema in qualitative analytic software NVivo by one reviewer (BH). A 10% sample of coded papers was checked by a second reviewer (JS). Data were then conceptualised into final themes by agreement (BH, JS, DW, RT).

Review statistics

Four thousand ninety-five unique records were identified of which 93 were reviewed in full text (see Fig.  1 ). In total, 30 documents were included in the systematic review representing 25 unique sets of guidance. Most sources of guidance did not explicitly identify an intended audience and guidance varied in its relevance to public health. Table  2 presents an overview of all sources of guidance included in the review with sources of guidance more or less relevant to public health identified as well as those which specifically applied to exploratory studies with a randomised design.

figure 1

Flow diagram

Findings from guidance

The included guidance reported a wide range of recommendations on the process of conducting and reporting exploratory studies. We categorised these into eight themes that capture: pre-requisites for conducting an exploratory study, nomenclature, guidance for intervention assessment, guidance surrounding the future evaluation study design, adaptive vs rigid designs, progression criteria for exploratory studies, stakeholder involvement and reporting.

Narrative description of themes

Theme 1: pre-requisites for conducting an exploratory study.

Where mentioned, pre-requisite activities included determining the evidence base, establishing the theoretical basis for the intervention, identifying the intervention components as well as modelling of the intervention in order to understand how intervention components interact and impact on final outcomes [ 9 , 25 , 26 , 27 ]. These were often discussed within the context of the MRC’s intervention development-evaluation cycle [ 6 , 9 , 10 , 13 , 25 , 26 , 27 , 28 ]. Understanding how intervention components interact with various contextual settings [ 6 , 27 , 29 ] and identifying unintended harms [ 6 , 29 ] as well as potential implementation issues [ 6 , 9 , 10 , 30 ] were also highlighted. There was an absence of detail in judging when these above conditions were met sufficiently for moving onto an exploratory study.

Theme 2: nomenclature

A wide range of terms were used, sometimes interchangeably, to describe exploratory studies with the most common being pilot trial/study. Table  3 shows the frequency of the terms used in guidance including other terms endorsed.

Different terminology did not appear to be consistently associated with specific study purposes (see theme 3), as illustrated in Table  2 . ‘Pilot’ and ‘feasibility’ studies were sometimes used interchangeably [ 10 , 20 , 25 , 26 , 27 , 28 , 31 ] while others made distinctions between the two according to design features or particular aims [ 7 , 8 , 19 , 29 , 32 , 33 , 34 ]. For example, some described pilot studies as a smaller version of a future RCT to run in miniature [ 7 , 8 , 19 , 29 , 32 , 33 , 34 ] and was sometimes associated with a randomised design [ 32 , 34 ], but not always [ 7 , 8 ]. In contrast, feasibility studies were used as an umbrella term by Eldridge et al. with pilot studies representing a subset of feasibility studies [ 7 , 8 ]: ‘We suggest that researchers view feasibility as an overarching concept, with all studies done in preparation for a main study open to being called feasibility studies, and with pilot studies as a subset of feasibility studies.’ (p. 18) [ 8 ].

Feasibility studies could focus on particular intervention and trial design elements [ 29 , 32 ] which may not include randomisation [ 32 , 34 ]. Internal pilot studies were primarily viewed as part of the full trial [ 8 , 32 , 35 , 36 , 37 , 38 ] and are therefore not depicted under nomenclature in Table  3 .

While no sources explicitly stated that an exploratory study should focus on one area and not the other, aims and associated methods of exploratory studies diverged into two separate themes. They pertained to either examining the intervention itself or the future evaluation design, and are detailed below in themes 3 and 4.

Theme 3: guidance for intervention assessment

Sources of guidance endorsed exploratory studies having formative purposes (i.e. refining the intervention and addressing uncertainties related to intervention implementation [ 13 , 15 , 29 , 31 , 39 ]) as well as summative goals (i.e. assessing the potential impact of an intervention or its promise [ 6 , 13 , 39 ]).

Refining the intervention and underlying theory

Some guidance suggested that changes could be made within exploratory studies to refine the intervention and underlying theory [ 15 , 29 , 31 ] and adapt intervention content to a new setting [ 39 ]. However, guidance was not clear on what constituted minor vs. major changes and implications for progression criteria (see theme 6). When making changes to the intervention or underlying theory, some guidance recommended this take place during the course of the exploratory study (see theme 5). Others highlighted the role of using a multi-arm design to select the contents of the intervention before a full evaluation [ 13 ] and to assess potential mechanisms of multiple different interventions or intervention components [ 29 ]. Several sources highlighted the role of qualitative research in optimising or refining an intervention, particularly for understanding the components of the logic model [ 29 ] and surfacing hidden aspects of the intervention important for delivering outcomes [ 15 ].

Intervention implementation

There was agreement across a wide range of guidance that exploratory studies could explore key uncertainties related to intervention implementation, such as acceptability, feasibility or practicality. Notably these terms were often ill-defined and used interchangeably. Acceptability was considered in terms of recipients’ reactions [ 7 , 8 , 29 , 32 , 39 ] while others were also attentive to feasibility from the perspective of intervention providers, deliverers and health professionals [ 6 , 9 , 29 , 30 , 34 , 39 ]. Implementation, feasibility, fidelity and ‘practicality’ explored the likelihood of being able to deliver in practice what was intended [ 25 , 26 , 27 , 30 , 39 ]. These were sometimes referred to as aims within an embedded process evaluation that took place alongside an exploratory study, although the term process evaluation was never defined [ 7 , 10 , 15 , 29 , 40 ].

Qualitative research was encouraged for assessment of intervention acceptability [ 21 ] or for implementation (e.g. via non-participant observation [ 15 ]). Caution was recommended with regards to focus groups where there is a risk of masking divergent views [ 15 ]. Others recommended quantitative surveys to examine retention rates and reasons for dropout [ 7 , 30 ]. Furthermore, several sources emphasised the importance of testing implementation in a range of contexts [ 15 , 29 , 39 , 41 ]—especially in less socioeconomically advantaged groups, to examine the risk of widening health inequalities [ 29 , 39 ].

One source of guidance considered whether randomisation was required for assessing intervention acceptability, believing this to be unnecessary but also suggesting it could ‘potentially depend on preference among interventions offered in the main trial’ ([ 21 ]; p. 9). Thus, issues of intervention acceptability, particularly within multi-arm trials, may relate to clinical equipoise and acceptability of randomisation procedures among participants [ 30 ].

Appropriateness of assessing intervention impact

Several sources of guidance discussed the need to understand the impact of the intervention, including harms, benefits or unintended consequences [ 6 , 7 , 15 , 29 , 39 ]. Much of the guidance focused on statistical tests of effectiveness with disagreement on the soundness of this aim, although qualitative methods were also recommended [ 15 , 42 ]. Some condemned statistically testing for effectiveness [ 7 , 20 , 29 , 32 , 41 ], as such studies are often underpowered, hence leading to imprecise and potentially misleading estimates of effect sizes [ 7 , 20 ]. Others argued that an estimate of likely effect size could evidence the intervention was working as intended and not having serious unintended harms [ 6 ] and thus be used to calculate the power for the full trial [ 13 ]. Later guidance from the MRC is more ambiguous than earlier guidance, stating that estimates should be interpreted with caution, while simultaneously stating ‘safe’ assumptions of effect sizes as a pre-requisite before continuing to a full evaluation [ 10 ]. NIHR guidance, which distinguished between pilot and feasibility studies, supported the assessment of a primary outcome in pilot studies, although it is unclear whether this is suggesting that a pilot should involve an initial test of changes in the primary outcome, or simply that the primary outcome should be measured in the same way as it would be in a full evaluation. By contrast, for ‘feasibility studies’, it indicated that an aim may include designing an outcome measure to be used in a full evaluation.

Others made the case for identifying evidence of potential effectiveness, including use of interim or surrogate endpoints [ 7 , 41 ], defined as ‘…variables on the causal pathway of what might eventually be the primary outcome in the future definitive RCT, or outcomes at early time points, in order to assess the potential for the intervention to affect likely outcomes in the future definitive RCT…’ [ 7 ] (p. 14).

Randomisation was implied as a design feature of exploratory studies when estimating an effect size estimate of the intervention as it maximised the likelihood that observed differences are due to intervention [ 9 , 39 ], with guidance mostly written from a starting assumption that full evaluation will take the form of an RCT and guidance focused less on exploratory studies for quasi-experimental or other designs. For studies that aim to assess potential effectiveness using a surrogate or interim outcome, using a standard sample size calculation was recommended to ensure adequate power, although it was noted that this aim is rare in exploratory studies [ 7 ].

Theme 4: guidance surrounding the future evaluation design

Sources consistently advocated assessing the feasibility of study procedures or estimating parameters of the future evaluation. Recommendations are detailed below.

Assessing feasibility of the future evaluation design

Assessing feasibility of future evaluation procedures was commonly recommended [ 6 , 7 , 10 , 15 , 30 , 32 , 33 , 34 , 37 , 41 ] to avert problems that could undermine the conduct or acceptability of future evaluation [ 6 , 15 , 30 ]. A wide range of procedures were suggested as requiring assessments of feasibility including data collection [ 20 , 30 , 34 , 36 , 41 ], participant retention strategies [ 13 ], randomisation [ 7 , 13 , 20 , 30 , 34 , 36 , 38 , 41 ], recruitment methods [ 13 , 30 , 32 , 34 , 35 , 38 , 41 ], running the full trial protocol [ 20 , 30 , 36 ], the willingness of participants to be randomised [ 30 , 32 ] and issues of contamination [ 30 ]. There was disagreement concerning the appropriateness of assessing blinding in exploratory studies [ 7 , 30 , 34 ], with one source noting double blinding is difficult when participants are assisted in changing their behaviour; although assessing single blinding may be possible [ 30 ].

Qualitative [ 15 , 30 , 34 ], quantitative [ 34 ] and mixed methods [ 7 ] were endorsed for assessing these processes. Reflecting the tendency for guidance of exploratory studies to be limited to studies in preparation for RCTs, discussion of the role of randomisation at the exploratory study stage featured heavily in guidance. Randomisation within an exploratory study was considered necessary for examining feasibility of recruitment, consent to randomisation, retention, contamination or maintenance of blinding in the control and intervention groups, randomisation procedures and whether all the components of a protocol can work together, although randomisation was not deemed necessary to assess outcome burden and participant eligibility [ 21 , 30 , 34 ]. While there was consensus about what issues could be assessed through randomisation, sources disagreed on whether randomisation should always precede a future evaluation study, even if that future study is to be an RCT. Contention seemed to be linked to variation in nomenclature and associated aims. For example, some defined pilot study as a study run in miniature to test how all its components work together, thereby dictating a randomised design [ 32 , 34 ]. Yet for feasibility studies, randomisation was only necessary if it reduced the uncertainties in estimating parameters for the future evaluation [ 32 , 34 ]. Similarly, other guidance highlighted an exploratory study (irrespective of nomenclature) should address the main uncertainties, and thus may not depend on randomisation [ 8 , 15 ].

Estimating parameters of the future evaluation design

A number of sources recommended exploratory studies should inform the parameters of the future evaluation design. Areas for investigation included estimating sample sizes required for the future evaluation (e.g. measuring outcomes [ 32 , 35 ]; power calculations [ 13 ]; derive effect size estimates [ 6 , 7 , 39 ]; estimating target differences [ 35 , 43 ]; deciding what outcomes to measure and how [ 9 , 20 , 30 , 36 ]; assessing quality of measures (e.g. for reliability/ validity/ feasibility/ sensitivity [ 7 , 20 , 30 ]; identification of control group [ 9 , 13 ]; recruitment, consent and retention rates [ 10 , 13 , 20 , 30 , 32 , 34 , 36 ]; and information on the cost of the future evaluation design [ 9 , 30 , 36 ].

While qualitative methods were deemed useful for selecting outcomes and their suitable measures [ 15 ], most guidance concentrated on quantitative methods for estimating future evaluation sample sizes. This was contentious due to the potential to over- or under-estimate sample sizes required in a future evaluation due to the lack of precision of estimates from a small pilot [ 20 , 30 , 41 ]. Estimating sample sizes from effect size estimates in an exploratory study was nevertheless argued by some to be useful if there was scant literature and the exploratory study used the same design and outcome as the future evaluation [ 30 , 39 ]. Cluster RCTs, which are common in public health interventions, were specifically earmarked as unsuitable for estimating parameters for sample size calculations (e.g. intra-cluster correlation coefficients) as well as recruitment and follow-up rates without additional information from other resources, because a large number of clusters and individual participants would be required [ 41 ]. Others referred to ‘rules of thumb’ when determining sample sizes in an exploratory study with numbers varying between 10 and 75 participants per trial arm in individually randomised studies [ 7 , 30 , 36 ]. Several also recommended the need to consider a desired meaningful difference in the health outcomes from a future evaluation and the appropriate sample size needed to detect this, rather than conducting sample size calculations using estimates of likely effect size from pilot data [ 30 , 35 , 38 , 43 ].

A randomised design was deemed unnecessary for estimating costs or selecting outcomes, although was valued for estimating recruitment and retention rates for intervention and control groups [ 21 , 34 ]. Where guidance indicated the estimation of an effect size appropriate to inform the sample size for a future evaluation, a randomised design was deemed necessary [ 9 , 39 ].

Theme 5: flexible vs. fixed design

Sources stated that exploratory studies could employ a rigid or flexible design. With the latter, the design can change during the course of the study, which is useful for making changes to the intervention, as well as the future evaluation design [ 6 , 13 , 15 , 31 ]. Here, qualitative data can be analysed as it is collected, shaping the exploratory study process, for instance sampling of subsequent data collection points [ 15 ], and clarifying implications for intervention effectiveness [ 31 ].

In contrast, fixed exploratory studies were encouraged when primarily investigating the future evaluation parameters and processes [ 13 ]. It may be that the nomenclature used in some guidance (e.g. pilot studies that are described as miniature versions of the evaluation) is suggesting a distinction between more flexible vs. more stringent designs. In some guidance, it was not mentioned whether changes should be made during the course of an exploratory study or afterwards, in order to get the best possible design for the future evaluation [ 6 , 7 , 21 ].

Theme 6: progression criteria to a future evaluation study

Little guidance was provided on what should be considered when formulating progression criteria for continuing onto a future evaluation study. Some focussed on the relevant uncertainties of feasibility [ 32 , 39 ], while others highlight specific items concerning cost-effectiveness [ 10 ], refining causal hypotheses to be tested in a future evaluation [ 29 ] and meeting recruitment targets [ 20 , 34 ]. As discussed in themes 3 and 4, statistically testing for effectiveness and using effect sizes for power calculations was cautioned by some, and so criteria based on effect sizes were not specified [ 38 ].

Greater discussion was devoted to how to weight evidence from an exploratory study that addressed multiple aims and used different methods. Some explicitly stated progression criteria should not be judged as strict thresholds but as guidelines using, for example, a traffic lights system with varying levels of acceptability [ 7 , 41 ]. Others highlighted a realist approach, moving away from binary indicators to focusing on ‘what is feasible and acceptable for whom and under what circumstances’ [ 29 ]. In light of the difficulties surrounding interpretation of effect estimates, several sources recommended qualitative findings from exploratory studies should be more influential than quantitative findings [ 15 , 38 ].

Interestingly, there was ambiguity regarding progression when exploratory findings indicated substantial changes to the intervention or evaluation design. Sources considering this issue suggested that if ‘extensive changes’ or ‘major modifications’ are made to either (note they did not specify what qualified as such), researchers should return to the exploratory [ 21 , 30 ] or intervention development phases [ 15 ].

‘Alternatively, at the feasibility phase, researchers may identify fundamental problems with the intervention or trial conduct and return to the development phase rather than proceed to a full trial.’ (p. 1) [ 15 ].

As described previously, however, the threshold at which changes are determined to be ‘major’ remained ambiguous. While updated MRC guidance [ 10 ] moved to a more iterative model, accepting that movement back between feasibility/piloting and intervention development may sometimes be needed, there was no guidance on under what conditions movement between these two stages should take place.

Theme 7: stakeholder involvement

Several sources recommended a range of stakeholders (e.g. intervention providers, intervention recipients, public representatives as well as practitioners who might use the evidence produced by the full trial) be involved in the planning and running of the exploratory study to ensure exploratory studies reflect the realities of intervention setting [ 15 , 28 , 31 , 32 , 39 , 40 ]. In particular, community-based participatory approaches were recommended [ 15 , 39 ]. While many highlighted the value of stakeholders on Trial Steering Committees and other similar study groups [ 15 , 28 , 40 ], some warned about equipoise between researchers and stakeholders [ 15 , 40 ] and also cautioned against researchers conflating stakeholder involvement with qualitative research [ 15 ].

‘Although patient and public representatives on research teams can provide helpful feedback on the intervention, this does not constitute qualitative research and may not result in sufficiently robust data to inform the appropriate development of the intervention.’ (p. 8) [ 15 ].

Theme 8: reporting of exploratory studies

Detailed recommendations for reporting exploratory studies were recently provided in new Consolidated Standards of Reporting Trials (CONSORT) guidance by Eldridge et al. [ 7 ]. In addition to this, recurrent points were brought up by other sources of guidance. Most notably, it was recommended exploratory studies be published in peer-reviewed journals as this can provide useful information to other researchers on what has been done, what did not work and what might be most appropriate [ 15 , 30 ]. An exploratory study may also result in multiple publications, but should provide reference to other work carried out in the same exploratory study [ 7 , 15 ]. Several sources of guidance also highlight that exploratory studies should be appropriately labelled in the title/abstract to enable easy identification; however, the nomenclature suggested varied depending on guidance [ 7 , 8 , 15 ].

While exploratory studies—carried out to inform decisions about whether and how to proceed with an effectiveness study [ 7 , 8 ]—are increasingly recognised as important in the efficient evaluation of complex public health interventions, our findings suggest that this area remains in need of consistent standards to inform practice. At present, there are multiple definitions of exploratory studies, a lack of consensus on a number of key issues, and a paucity of detailed guidance on how to approach the main uncertainties such studies aim to address prior to proceeding to a full evaluation.

Existing guidance commonly focuses almost exclusively on testing methodological parameters [ 33 ], such as recruitment and retention, although in practice, it is unusual for such studies not to also focus on the feasibility of the intervention itself. Where intervention feasibility is discussed, there is limited guidance on when an intervention is ‘ready’ for an exploratory study and a lack of demarcation between intervention development and pre-evaluation work to understand feasibility. Some guidance recognised that an intervention continues to develop throughout an exploratory study, with distinctions made between ‘optimisation/refinement’ (i.e. minor refinements to the intervention) vs. ‘major changes’. However, the point at which changes become so substantial that movement back toward intervention development rather than forward to a full evaluation remains ambiguous. Consistent with past reviews which adopted a narrower focus on studies with randomised designs [ 21 ] or in preparation for a randomised trial [ 8 , 36 ] and limited searches of guidance in medical journals [ 19 , 36 ], terms to describe exploratory studies were inconsistent, with a distinction sometimes made between pilot and feasibility studies, though with others using these terms interchangeably.

The review identifies a number of key areas of disagreement or limited guidance in regards to the critical aims of exploratory studies and addressing uncertainties which might undermine a future evaluation, and how these aims should be achieved. There was much disagreement for example on whether exploratory studies should include a preliminary assessment of intervention effects to inform decisions on progression to a full evaluation, and the appropriateness of using estimates of effect from underpowered data (from non-representative samples and a study based on a not fully optimised version of the intervention) to power a future evaluation study. Most guidance focused purely on studies in preparation for RCTs; nevertheless, guidance varied on whether randomisation was a necessary feature of the exploratory study, even where a future evaluation study was an RCT. Guidance was often difficult to assess regarding its applicability to public health research, with many sources focusing on literature and practice primarily from clinical research, and limited consideration of the transferability of these problems and proposed solutions to complex social interventions, such as those in public health. Progression criteria were highlighted as important by some as a means of preventing biased post hoc cases for continuation. However, there was a lack of guidance on how to devise progression criteria and processes for assessing whether these had been sufficiently met. Where they had not been met, there was a lack of guidance on how to decide whether the exploratory study had generated sufficient insight about uncertainties that the expense of a further feasibility study would not be justified prior to large-scale evaluation.

Although our review included a broad focus on guidance of exploratory studies from published and grey literature and moved beyond a focus on studies conducted in preparation for an RCT specifically, a number of limitations should be noted. Guidance from other areas of social intervention research where challenges may be similar to those in public health (e.g. education, social work and business) may not have been captured by our search strategy. We found few worked examples of exploratory studies in public health that provided substantial information from learned experience and practice. Hence, the review drew largely on recommendations from funding organisations, or relatively abstract guidance from teams of researchers, with fewer clear examples of how these recommendations are grounded in experience from the conduct of such studies. As such, it should be acknowledged that these documents represent one element within a complex system of research production and may not necessarily fully reflect what is taking place in the conduct of exploratory studies. Finally, treating sources of guidance as independent from each other does not reflect how some recommendations developed over time (see for example [ 7 , 8 , 20 , 36 , 41 ]).

There is inconsistent guidance, and for some key issues a lack of guidance, for exploratory studies of complex public health interventions. As this lack of guidance for researchers in public health continues, the implications and consequences could be far reaching. It is unclear how researchers use existing guidance to shape decision-making in the conduct of exploratory studies, and in doing so, how they adjudicate between various conflicting perspectives. This systematic review has aimed largely to identify areas of agreement and disagreement as a starting point in bringing order to this somewhat chaotic field of work. Following this systematic review, our next step is to conduct an audit of published public health exploratory studies in peer-reviewed journals, to assess current practice and how this reflects the reviewed guidance. As part of a wider study, funded by the MRC/NIHR Methodology Research Programme to develop GUidance for Exploratory STudies of complex public health interventions (GUEST; Moore L, et al. Exploratory studies to inform full scale evaluations of complex public health interventions: the need for guidance, submitted), the review has informed a Delphi survey of researchers, funders and publishers of public health research. In turn, this will contribute to a consensus meeting which aims to reach greater unanimity on the aims of exploratory studies, and how these can most efficiently address uncertainties which may undermine a full-scale evaluation.

Abbreviations

Consolidated Standards of Reporting Trials

GUidance for Exploratory STudies of complex public health interventions

Medical Research Council

National Institute of Health Research Evaluation Trials and Studies Coordinating Centre

National Institute for Health Research

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Randomised controlled trial

Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40:637–44.

Article   PubMed   Google Scholar  

Sanson-Fisher RW, Bonevski B, Green LW, D’Este C. Limitations of the randomized controlled trial in evaluating population-based health interventions. Am J Prev Med. 2007;33:155–61.

Speller V, Learmonth A, Harrison D. The search for evidence of effective health promotion. BMJ. 1997;315(7104):361.

Article   PubMed   PubMed Central   CAS   Google Scholar  

Arrowsmith J, Miller P. Trial watch: phase II failures: 2008–2010. Nat Rev Drug Discov. 2011;10(5):328–9.

National Institute for Health Research. Weight loss maintenance in adults (WILMA). https://www.journalslibrary.nihr.ac.uk/programmes/hta/084404/#/ . Accessed 13 Dec 2017.

Wight D, Wimbush E, Jepson R, Doi L. Six steps in quality intervention development (6SQuID). J Epidemiol Community Health. 2015;70:520–5.

Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64.

Article   PubMed   PubMed Central   Google Scholar  

Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS One. 2016;11:e0150205.

Campbell M, Fitzpatrick R, Haines A, Kinmonth AL. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694.

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. Medical Research Council. 2008;

National Institute for Health Research. The Filter FE Challenge: pilot trial and process evaluation of a multi-level smoking prevention intervention in further education settings. Available from: https://www.journalslibrary.nihr.ac.uk/programmes/phr/134202/#/ . Accessed 25 Jan 2018.

National Institute for Health Research. Adapting and piloting the ASSIST model of informal peer-led intervention delivery to the Talk to Frank drug prevention programme in UK secondary schools (ASSIST+Frank): an exploratory trial. https://www.journalslibrary.nihr.ac.uk/programmes/phr/12306003/#/ . Accessed 25 Jan 2018.

Medical Research Council. A framework for the development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council; 2000.

Google Scholar  

Bonell CP, Hargreaves JR, Cousens SN, Ross DA, Hayes R, Petticrew M, et al. Alternatives to randomisation in the evaluation of public-health interventions: design challenges and solutions. J Epidemiol Community Health. 2009; https://doi.org/10.1136/jech.2008.082602 .

O’Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1(1):32.

National Institute for Health Research. An exploratory trial to evaluate the effects of a physical activity intervention as a smoking cessation induction and cessation aid among the ‘hard to reach’. https://www.journalslibrary.nihr.ac.uk/programmes/hta/077802/#/ . Accessed 13 Dec 2017.

National Institue for Health Research. Initiating change locally in bullying and aggression through the school environment (INCLUSIVE): pilot randomised controlled trial. https://www.journalslibrary.nihr.ac.uk/hta/hta19530/#/abstract . Accessed 13 Dec 2017.

National Institute for Health Resarch. Increasing boys' and girls' intention to avoid teenage pregnancy: a cluster randomised control feasibility trial of an interactive video drama based intervention in post-primary schools in Northern Ireland. https://www.journalslibrary.nihr.ac.uk/phr/phr05010/#/abstract . Accessed 13 Dec 2017.

Arain M, Campbell, MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy BMC Med Res Methodol. 2010;10:67.

Lancaster GA. Pilot and feasibility studies come of age! Pilot Feasibility Stud. 2015;1:1.

Shanyinde M, Pickering RM, Weatherall M. Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Med Res Methodol. 2011;11:117.

Moher D, Liberati A, Tetzlaff J, Altman DG, The PG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;151:e1000097.

Article   Google Scholar  

Dixon-Woods M, Agarwal S, Jones D. Intergrative approaches to qualitative and quantitative evidence. London: Health Development Agency; 2004.

Ritchie J, Spencer L, O’Connor W. Carrying out qualitative analysis. Qualitative research practice: a guide for social science students and researchers. 2003;1.

Möhler R, Bartoszek G, Köpke S, Meyer G. Proposed criteria for reporting the development and evaluation of complex interventions in healthcare (CReDECI): guideline development. IJNS. 2012;49(1):40–6.

Möhler R, Bartoszek G, Meyer G. Quality of reporting of complex healthcare interventions and applicability of the CReDECI list—a survey of publications indexed in PubMed. BMC Med Res Methodol. 2013;13:1.

Möhler R, Köpke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16(204):1.

Evans BA, Bedson E, Bell P, Hutchings H, Lowes L, Rea D, et al. Involving service users in trials: developing a standard operating procedure. Trials. 2013;14(1):1.

Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation. 2016;22:286–303.

Feeley N, Cossette S, Côté J, Héon M, Stremler R, Martorella G, et al. The importance of piloting an RCT intervention. CJNR. 2009;41:84–99.

Levati S, Campbell P, Frost R, Dougall N, Wells M, Donaldson C, et al. Optimisation of complex health interventions prior to a randomised controlled trial: a scoping review of strategies used. Pilot Feasibility Stud. 2016;2:1.

National Institute for Health Research. Feasibility and pilot studies. Available from: http://www.nihr.ac.uk/CCF/RfPB/FAQs/Feasibility_and_pilot_studies.pdf . Accessed 14 Oct 2016.

National Institute for Health Research. Glossary | Pilot studies 2015 http://www.nets.nihr.ac.uk/glossary?result_1655_result_page=P . Accessed 14 Oct 2016.

Taylor RS, Ukoumunne OC, Warren FC. How to use feasibility and pilot trials to test alternative methodologies and methodological procedures proir to full-scale trials. In: Richards DA, Hallberg IR, editors. Complex interventions in health: an overview of research methods. New York: Routledge; 2015.

Cook JA, Hislop J, Adewuyi TE, Harrild K, Altman DG, Ramsay CR et al. Assessing methods to specify the target difference for a randomised controlled trial: DELTA (Difference ELicitation in TriAls) review. Health Technology Assessment (Winchester, England). 2014;18:v–vi.

Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10:307–12.

National Institute for Health Research. Progression rules for internal pilot studies for HTA trials [14/10/2016]. Available from: http://www.nets.nihr.ac.uk/__data/assets/pdf_file/0018/115623/Progression_rules_for_internal_pilot_studies.pdf .

Westlund E, Stuart EA. The nonuse, misuse, and proper use of pilot studies in experimental evaluation research. Am J Eval. 2016;2:246–61.

Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, et al. How we design feasibility studies. Am J Prev Med. 2009;36:452–7.

Strong LL, Israel BA, Schulz AJ, Reyes A, Rowe Z, Weir SS et al. Piloting interventions within a community-based participatory research framework: lessons learned from the healthy environments partnership. Prog Community Health Partnersh. 2009;3:327–34.

Eldridge SM, Costelloe CE, Kahan BC, Lancaster GA, Kerry SM. How big should the pilot study for my cluster randomised trial be? Stat Methods Med Res. 2016;25:1039–56.

Moffatt S, White M, Mackintosh J, Howel D. Using quantitative and qualitative data in health services research—what happens when mixed method findings conflict? [ISRCTN61522618]. BMC Health Serv Res. 2006;6:1.

Hislop J, Adewuyi TE, Vale LD, Harrild K, Fraser C, Gurung T et al. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review. PLoS Med. 2014;11:e1001645.

Download references

Acknowledgements

We thank the Specialist Unit for Review Evidence (SURE) at Cardiff University, including Mala Mann, Helen Morgan, Alison Weightman and Lydia Searchfield, for their assistance with developing and conducting the literature search.

This study is supported by funding from the Methodology Research Panel (MR/N015843/1). LM, SS and DW are supported by the UK Medical Research Council (MC_UU_12017/14) and the Chief Scientist Office (SPHSU14). PC is supported by the UK Medical Research Council (MC_UU_12017/15) and the Chief Scientist Office (SPHSU15). The work was also undertaken with the support of The Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), a UKCRC Public Health Research Centre of Excellence. Joint funding (MR/KO232331/1) from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, the Welsh Government and the Wellcome Trust, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available due to copyright infringement.

Author information

Authors and affiliations.

Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), Cardiff University, Cardiff, Wales, UK

Britt Hallingberg, Ruth Turley, Jeremy Segrott, Simon Murphy, Michael Robling & Graham Moore

Centre for Trials Research, Cardiff University, Cardiff, Wales, UK

Jeremy Segrott & Michael Robling

MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Daniel Wight, Peter Craig, Laurence Moore & Sharon Anne Simpson

Specialist Unit for Review Evidence, Cardiff University, Cardiff, Wales, UK

Ruth Turley

You can also search for this author in PubMed   Google Scholar

Contributions

LM, GM, PC, MR, JS, RT and SS were involved in the development of the study. RT, JS, DW and BH were responsible for the data collection, overseen by LM and GM. Data analysis was undertaken by BH guided by RT, JS, DW and GM. The manuscript was prepared by BH, RT, DW, JS and GM. All authors contributed to the final version of the manuscript. LM is the principal investigator with overall responsibility for the project. GM is Cardiff lead for the project. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Britt Hallingberg .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:.

Table S1. PRISMA checklist. (DOC 62 kb)

Additional file 2:

Appendix 1. Search strategies and websites. Appendix 2. Coding framework. (DOCX 28 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Hallingberg, B., Turley, R., Segrott, J. et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud 4 , 104 (2018). https://doi.org/10.1186/s40814-018-0290-8

Download citation

Received : 06 February 2018

Accepted : 07 May 2018

Published : 28 May 2018

DOI : https://doi.org/10.1186/s40814-018-0290-8

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Public health
  • Complex interventions
  • Exploratory studies
  • Research methods
  • Study design
  • Pilot study
  • Feasibility study

Pilot and Feasibility Studies

ISSN: 2055-5784

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

what is an exploratory literature review

Libraries | Research Guides

Literature reviews.

  • What is a Literature Review?

What is the review for?

Initial background reading.

  • The Research Question
  • Choosing Where to Search
  • Organizing the Review
  • Writing the Review

Is this review a section in a research paper? Is it for a dissertation? Is this for a journal article? A stand-alone project? Determining what the end product is can help you determine how extensive the review will be. 

Dissertations or stand-alone reviews may be extensive, more comprehensive, and may include peripheral literature that provides history or context for the topic. Reviews in a research paper or in the background section of a journal article may be limited to literature that is central to your research topic. 

Do initial background reading on your topic. Using subject encyclopedias and handbooks can be good places to start finding topic overviews and essays. 

  • Reference Online and In Print by Geoff Morse Last Updated Sep 19, 2023 747 views this year
  • CQ Researcher Plus Archive This link opens in a new window The CQ Researcher is a collection of reports covering political and social issues, with regular reports on topics in health, international affairs, education, the environment, technology and the U.S. economy.

Read existing reviews related to your topic for ideas

  • Annual Reviews This link opens in a new window The Annual Reviews provide substantially researched articles written by recognized scholars in a wide variety of disciplines that summarize the major research literature in the field. These are often a good place to start your research and to keep informed about recent developments.
  • Oxford Bibliographies This link opens in a new window Offers annotated bibliographies of the most important books and articles on specific topics in a growing range of subject areas. Particularly useful for anyone beginning research.

You can also search databases in your subject area to find reviews on your topic. For recommendations on databases, take a look at our Research Guides or contact your librarian .

Do some exploratory literature searching

  • Databases A-Z Northwestern Libraries A-Z list of databases
  • Research Guides These subject guides will provide you with recommendations on databases to search in different subject areas.

Profile Photo

  • << Previous: What is a Literature Review?
  • Next: The Research Question >>
  • Last Updated: Jan 17, 2024 10:05 AM
  • URL: https://libguides.northwestern.edu/literaturereviews
  • Methodology
  • Open access
  • Published: 19 October 2019

Smart literature review: a practical topic modelling approach to exploratory literature review

  • Claus Boye Asmussen   ORCID: orcid.org/0000-0002-2998-2293 1 &
  • Charles Møller 1  

Journal of Big Data volume  6 , Article number:  93 ( 2019 ) Cite this article

46k Accesses

182 Citations

3 Altmetric

Metrics details

Manual exploratory literature reviews should be a thing of the past, as technology and development of machine learning methods have matured. The learning curve for using machine learning methods is rapidly declining, enabling new possibilities for all researchers. A framework is presented on how to use topic modelling on a large collection of papers for an exploratory literature review and how that can be used for a full literature review. The aim of the paper is to enable the use of topic modelling for researchers by presenting a step-by-step framework on a case and sharing a code template. The framework consists of three steps; pre-processing, topic modelling, and post-processing, where the topic model Latent Dirichlet Allocation is used. The framework enables huge amounts of papers to be reviewed in a transparent, reliable, faster, and reproducible way.

Introduction

Manual exploratory literature reviews are soon to be outdated. It is a time-consuming process, with limited processing power, resulting in a low number of papers analysed. Researchers, especially junior researchers, often need to find, organise, and understand new and unchartered research areas. As a literature review in the early stages often involves a large number of papers, the options for a researcher is either to limit the amount of papers to review a priori or review the papers by other methods. So far, the handling of large collections of papers has been structured into topics or categories by the use of coding sheets [ 2 , 12 , 22 ], dictionary or supervised learning methods [ 30 ]. The use of coding sheets has especially been used in social science, where trained humans have created impressive data collections, such as the Policy Agendas Project and the Congressional Bills Project in American politics [ 30 ]. These methods, however, have a high upfront cost of time, requiring a prior understanding where papers are grouped by categories based on pre-existing knowledge. In an exploratory phase where a general overview of research directions is needed, many researchers may be dismayed by having to spend a lot of time before seeing any results, potentially wasting efforts that could have been better spent elsewhere. With the advancement of machine learning methods, many of the issues can be dealt with at a low cost of time for the researcher. Some authors argue that when human processing such as coding practice is substituted by computer processing, reliability is increased and cost of time is reduced [ 12 , 23 , 30 ]. Supervised learning and unsupervised learning, are two methods for automatically processing papers [ 30 ]. Supervised learning relies on manually coding a training set of papers before performing an analysis, which entails a high cost of time before a result is achieved. Unsupervised learning methods, such as topic modelling, do not require the researcher to create coding sheets before an analysis, which presents a low cost of time approach for an exploratory review with a large collection of papers. Even though, topic modelling has been used to group large amounts of documents, few applications of topic modelling have been used on research papers, and a researcher is required to have programming skills and statistical knowledge to successfully conduct an exploratory literature review using topic modelling.

This paper presents a framework where topic modelling, a branch of the unsupervised methods, is used to conduct an exploratory literature review and how that can be used for a full literature review. The intention of the paper is to enable the use of topic modelling for researchers by providing a practical approach to topic modelling, where a framework is presented and used on a case step-by-step. The paper is organised as follows. The following section will review the literature in topic modelling and its use in exploratory literature reviews. The framework is presented in “ Method ” section, and the case is presented in “ Framework ” section. “ Discussion ” and “ Conclusion ” sections conclude the paper with a discussion and conclusion.

Topic modelling for exploratory literature review

While there are many ways of conducting an exploratory review, most methods require a high upfront cost of time and having pre-existent knowledge of the domain. Quinn et al. [ 30 ] investigated the costs of different text categorisation methods, a summary of which is presented in Table  1 , where the assumptions and cost of the methods are compared.

What is striking is that all of the methods, except manually reading papers and topic modelling, require pre-existing knowledge of the categories of the papers and have a high pre-analysis cost. Manually reading a large amount of papers will have a high cost of time for the researcher, whereas topic modelling can be automated, substituting the use of the researcher’s time with the use of computer time. This indicates a potentially good fit for the use of topic modelling for exploratory literature reviews.

The use of topic modelling is not new. However, there are remarkably few papers utilising the method for categorising research papers. It has been predominantly been used in the social sciences to identify concepts and subjects within a corpus of documents. An overview of applications of topic modelling is presented in Table  2 , where the type of data, topic modelling method, the use case and size of data are presented.

The papers in Table  2 analyse web content, newspaper articles, books, speeches, and, in one instance, videos, but none of the papers have applied a topic modelling method on a corpus of research papers. However, [ 27 ] address the use of LDA for researchers and argue that there are four parameters a researcher needs to deal with, namely pre-processing of text, selection of model parameters and number of topics to be generated, evaluation of reliability, and evaluation of validity. The uses of topic modelling are to identify themes or topics within a corpus of many documents, or to develop or test topic modelling methods. The motivation for most of the papers is that the use of topic modelling enables the possibility to do an analysis on a large amount of documents, as they would otherwise have not been able to due to the cost of time [ 30 ]. Most of the papers argue that LDA is a state-of-the-art and preferred method for topic modelling, which is why almost all of the papers have chosen the LDA method. The use of topic modelling does not provide a full meaning of the text but provides a good overview of the themes, which could not have been obtained otherwise [ 21 ]. DiMaggio et al. [ 12 ] find a key distinction in the use of topic modelling is that its use is more of utility than accuracy, where the model should simplify the data in an interpretable and valid way to be used for further analysis They note that a subject-matter expert is required to interpret the outcome and that the analysis is formed by the data.

The use of topic modelling presents an opportunity for researchers to add a tool to their tool box for an exploratory and literature review process. Topic modelling has mostly been used on online content and requires a high degree of statistical and technical skill, skills not all researchers possess. To enable more researchers to apply topic modelling for their exploratory literature reviews, a framework will be proposed to lower the requirements for technical and statistical skills of the researcher.

Topic modelling has proven itself as a tool for exploratory analysis of a large number of papers [ 14 , 24 ]. However, it has rarely been applied in the context of an exploratory literature review. The selected topic modelling method, for the framework, is Latent Dirichlet Allocation (LDA), as it is the most used [ 6 , 12 , 17 , 20 , 32 ], state-of-the-art method [ 25 ] and simplest method [ 8 ]. While other topic modelling methods could be considered, the aim of this paper is to enable the use of topic modelling for researchers. For enabling topic modelling for researchers, ease of use and applicability are highly rated, where LDA is easily implemented and understood. Other topic modelling methods could potentially be used in the framework, where reviews of other topic models is presented in [ 1 , 26 ].

The topic modelling method LDA is an unsupervised, probabilistic modelling method which extracts topics from a collection of papers. A topic is defined as a distribution over a fixed vocabulary. LDA analyses the words in each paper and calculates the joint probability distribution between the observed (words in the paper) and the unobserved (the hidden structure of topics). The method uses a ‘Bag of Words’ approach where the semantics and meaning of sentences are not evaluated. Rather, the method evaluates the frequency of words. It is therefore assumed that the most frequent words within a topic will present an aboutness of the topic. As an example, if one of the topics in a paper is LEAN, then it can be assumed that the words LEAN, JIT and Kanban are more frequent, compared to other non-LEAN papers. The result is a number of topics with the most prevalent topics grouped together. A probability for each paper is calculated for each topic, creating a matrix with the size of number of topics multiplied with the number of papers. A detailed description of LDA is found in [ 6 ].

The framework is designed as a step-by-step procedure, where its use is presented in a form of a case where the code used for the analysis is shared, enabling other researchers to easily replicate the framework for their own literature review. The code is based on the open source statistical language R, but any language with the LDA method is suitable for use. The framework can be made fully automated, presenting a low cost of time approach for exploratory literature reviews. An inspiration for the automation of the framework can be found in [ 10 ], who created an online-service, towards processing Business Process Management documents where text-mining approaches such as topic modelling are automated. They find that topic modelling can be automated and argue that the use of a good tool for topic modelling can easily present good results, but the method relies on the ability of people to find the right data, guide the analytical journey and interpret the results.

The aim of the paper is to create a generic framework which can be applied in any context of an exploratory literature review and potentially be used for a full literature review. The method provided in this paper is a framework which is based upon well-known procedures for how to clean and process data, in such a way that the contribution from the framework is not in presenting new ways to process data but in how known methods are combined and used. The framework will be validated by the use of a case in the form of a literature review. The outcome of the method is a list of topics where papers are grouped. If the grouping of papers makes sense and is logical, which can be evaluated by an expert within the research field, then the framework is deemed valid. Compared to other methods, such as supervised learning, the method of measuring validity does not produce an exact degree of validity. However, invalid results will likely be easily identifiable by an expert within the field. As stated by [ 12 ], the use of topic modelling is more for utility than for accuracy.

The developed framework is illustrated in Fig.  1 , and the R-code and case output files are located at https://github.com/clausba/Smart-Literature-Review . The smart literature review process consists of the three steps: pre-processing, topic modelling, and post-processing.

figure 1

Process overview of the smart literature review framework

The pre-processing steps are getting the data and model ready to run, where the topic-modelling step is executing the LDA method. The post-processing steps are translating the outcome of the LDA model to an exploratory review and using that to identify papers to be used for a literature review. It is assumed that the papers for review are downloaded and available, as a library with the pdf files.

Pre-processing

The pre-processing steps consist of loading and preparing the papers for processing, an essential step for a good analytical result. The first step is to load the papers into the R environment. The next step is to clean the papers by removing or altering non-value-adding words. All words are converted to lower case, and punctuation and whitespaces are removed. Special characters, URLs, and emails are removed, as they often do not contribute to identification of topics. Stop words, misread words and other non-semantic contributing words are removed. Examples of stop words are “can”, “use”, and “make”. These words add no value to the aboutness of a topic. The loading of papers into R can in some instances cause words to be misread, which must either be rectified or removed. Further, some websites add a first page with general information, and these contain words that must be removed. This prevents unwanted correlation between papers downloaded from the same source. Words are stemmed to their root form for easier comparison. Lastly, many words only occur in a single paper, and these should be removed to make computations easier, as less frequent words will likely provide little benefit in grouping papers into topics.

The cleansing process is often an iterative process, as it can be difficult to identify all misread and non-value adding-words a priori. Different papers’ corpora contain different words, which means that an identical cleaning process cannot be guaranteed if a new exploratory review is conducted. As an example, different non-value-adding words exist for the medical field compared to sociology or supply chain management (SCM). The cleaning process is finished once the loaded papers mainly contain value-adding words. There is no known way to scientifically evaluate when the cleaning process is finished, which in some instances makes the cleaning process more of an art than science. However, if a researcher is technically inclined methods, provided in the preText R-package can aid in making a better cleaning process [ 11 ].

LDA is an unsupervised method, which means we do not, prior to the model being executed, know the relationship between the papers. A key aspect of LDA is to group papers into a fixed number of topics, which must be given as a parameter when executing LDA. A key process is therefore to estimate the optimal number of topics. To estimate the number of topics, a cross-validation method is used to calculate the perplexity, as used in information theory, and it is a metric used to evaluate language models, where a low score indicates a better generalisation model, as done by [ 7 , 31 , 32 ]. Lowering the perplexity score is identical to maximising the overall probability of papers being in a topic. Next, test and training datasets are created: the LDA algorithm is run on the training set, and the test set is used to validate the results. The criteria for selecting the right number of topics is to find the balance between a useable number of topics and, at the same time, to keep the perplexity as low as possible. The right number of topics can differ greatly, depending on the aim of the analysis. As a rule of thumb, a low number of topics is used for a general overview and a higher number of topics is used for a more detailed view.

The cross-validation step is used to make sure that a result from an analysis is reliable, by running the LDA method several times under different conditions. Most of the parameters set for the cross-validation should have the same value, as in the final topic modelling run. However, due to computational reasons, some parameters can be altered to lower the amount of computation to save time. As with the number of topics, there is no right way to set the parameters, indicating a trial-and-error process. Most of the LDA implementations have default values set, but in this paper’s case the following parameters were changed: burn-in time, number of iterations, seed values, number of folds, and distribution between training and test sets.

  • Topic modelling

Once the papers have been cleaned and a decision has been made on the number of topics, the LDA method can be run. The same parameters as used in the cross-validation should be used as a guidance but for more precise results, parameters can be changed such as a higher number of iterations. The number of folds should be removed, as we do not need a test set, as all papers will be used to run the model. The outcome of the model is a list of papers, a list of probabilities for each paper for each topic, and a list of the most frequent words for each topic.

If an update to the analysis is needed, new papers simply have to be loaded and the post-processing and topic modelling steps can be re-run without any alterations to the parameters. Thus, the framework enables an easy path for updating an exploratory review.

Post-processing

The aim of the post-processing steps is to identify and label research topics and topics relevant for use in a literature review. An outcome of the LDA model is a list of topic probabilities for each paper. The list is used to assign a paper to a topic by sorting the list by highest probability for each paper for each topic. By assigning the papers to the topics with the highest probability, all of the topics contain papers that are similar to each other. When all of the papers have been distributed into their selected topics, the topics need to be labelled. The labelling of the topics is found by identifying the main topic of each topic group, as done in [ 17 ]. Naturally, this is a subjective matter, which can provide different labelling of topics depending on the researcher. To lower the risk of wrongly identified topics, a combination of reviewing the most frequent words for each topic and a title review is used. After the topics have been labelled, the exploratory search is finished.

When the exploratory search has finished, the results must be validated. There are three ways to validate the results of an LDA model, namely statistical, semantic, or predictive [ 12 ]. Statistical validation uses statistical methods to test the assumptions of the model. An example is [ 28 ], where a Bayesian approach is used to estimate the fit of papers to topics. Semantic validation is used to compare the results of the LDA method with expert reasoning, where the results must make semantic sense. In other words, does the grouping of papers into a topic make sense, which ideally should be evaluated by an expert. An example is [ 18 ], who utilises hand coding of papers and compare the coding of papers to the outcome of an LDA model. Predictive validation is used if an external incident can be correlated with an event not found in the papers. An example is in politics where external events, such as presidential elections which should have an impact on e.g. press releases or newspaper coverage, can be used to create a predictive model [ 12 , 17 ].

The chosen method for validation in this framework is semantic validation. The reason is that a researcher will often be or have access to an expert who can quickly validate if the grouping of papers into topics makes sense or not. Statistical validation is a good way to validate the results. However, it would require high statistical skills from the researchers, which cannot be assumed. Predictive validation is used in cases where external events can be used to predict the outcome of the model, which is seldom the case in an exploratory literature review.

It should be noted that, in contrast to many other machine learning methods, it is not possible to calculate a specific measure such as the F-measure or RMSE. To be able to calculate such measures, there must exist a correct grouping of papers, which in this instance would often mean comparing the results to manually created coding sheets [ 11 , 19 , 20 , 30 ]. However, it is very rare that coding sheets are available, leaving the semantic validation approach as the preferred validation method. The validation process in the proposed framework is two-fold. Firstly, the title of the individual paper must be reviewed to validate that each paper does indeed belong in its respective topic. As LDA is an unsupervised method, it can be assumed that not all papers will have a perfect fit within each topic, but if the majority of papers are within the theme of the topic, it is evaluated to be a valid result. If the objective of the research is only an exploratory literature review, the validation ends here. However, if a full literature review is conducted, the literature review can be viewed as an extended semantic validation method. By reviewing the papers in detail within the selected topics of research, it can be validated if the vast majority of papers belong together.

Using the results from the exploratory literature review for a full literature review is simple, as all topics from the exploratory literature review will be labelled. To conduct the full literature review, select the relevant topics and conduct the literature review on the selected papers.

To validate the framework, a case will be presented, where the framework is used to conduct a literature review. The literature review is conducted in the intersection of the research fields analytics, SCM, and enterprise information systems [ 3 ]. As the research areas have a rapidly growing interest, it was assumed that the number of papers would be large, and that an exploratory review was needed to identify the research directions within the research fields. The case used broadly defined keywords for searching for papers, ensuring to include as many potentially relevant papers as possible. Six hundred and fifty papers were found, which were heavily reduced by the use of the smart literature review framework to 76 papers, resulting in a successful literature review. The amount of papers is evaluated to be too time-consuming for a manual exploratory review, which provides a good case to test the smart literature review framework. The steps and thoughts behind the use of the framework are presented in this case section.

The first step was to load the 650 papers into the R environment. Next, all words were converted to lowercase and punctuation, whitespaces, email addresses, and URLs were removed. Problematic words were identified, such as words incorrectly read from the papers. Words included in a publisher’s information page were removed, as they add no semantic value to the topic of a paper. English stop words were removed, and all words were stemmed. As a part of an iterative process, several papers were investigated to evaluate the progress of cleaning the papers. The investigations were done by displaying words in a console window and manually evaluating if more cleaning had to be done.

After the cleaning steps, 256,747 unique words remained in the paper corpus. This is a large number of unique words, which for computational reasons is beneficial to reduce. Therefore, all words that did not have a sparsity or likelihood of 99% to be in any paper were removed. The operation lowered the amount of unique words to 14,145, greatly reducing the computational needs. The LDA method will be applied on the basis of the 14,145 unique words for the 650 papers. Several papers were manually reviewed, and it was evaluated that removal of the unique words did not significantly worsen the ability to identify main topics of the paper corpus.

The last step of pre-processing is to identify the optimal number of topics. To approximate the optimal number of topics, two things were considered. The perplexity was calculated for different amounts of topics, and secondly the need for specificity was considered.

At the extremes, choosing one topic would indicate one topic covering all papers, which will provide a very coarse view of the papers. On the other hand, if the number of topics is equal to the number of papers, then a very precise topic description will be achieved, although the topics will lose practical use as the overview of topics will be too complex. Therefore, a low number of topics was preferred as a general overview was required. Identifying what is a low number of topics will differ depending on the corpus of papers, but visualising the perplexity can often provide the necessary aid for the decision.

The perplexity was calculated over five folds, where each fold would identify 75% of the papers for training the model and leave out the remaining 25% for testing purposes. Using multiple folds reduces the variability of the model, ensuring higher reliability and reducing the risk of overfitting. For replicability purposes, specific seed values were set. Lastly, the number of topics to evaluate is selected. In this case, the following amounts of topics were selected: 2, 3, 4, 5, 10, 20, 30, 40, 50, 75, 100, and 200. The perplexity method in the ‘topicmodels’ R library is used, where the specific parameters can be found in the provided code.

The calculations were done over two runs. However, there is no practical reason for not running the calculations in one run. The first run included all values of number of topics below 100, and the second run calculated the perplexity for 100 and 200 number of topics. The runtimes for the calculations were respectively 9 and 10 h on a standard issue laptop. The combined results are presented in Fig.  2 , and the converged results can be found in the shared repository.

figure 2

5-Fold cross-validation of topic modelling. Results of cross-validation

The goal in this case is to find the lowest number of topics, which at the same time have a low perplexity. In this case, the slope of the fitted line starts to gradually decline at twenty topics, which is why the selected number of topics is twenty.

Case: topic modelling

As the number of topics is chosen, the next step is to run the LDA method on the entire set of papers. The full run of 650 papers for 20 topics took 3.5 h to compute on a standard issue laptop. An outcome of the method is a 650 by 20 matrix of topic probabilities. In this case, the papers with the highest probability for each topic were used to allocate the papers. The allocation of papers to topics was done in Microsoft Excel. An example of how a distribution of probabilities is distributed across topics for a specific paper is depicted in Fig.  3 . Some papers have topic probability values close to each other, which could indicate a paper belonging to an intersection between two or more topics. These cases were not considered, and the topic with the highest probability was selected.

figure 3

Example of probability distribution for one document (Topic 16 selected)

The allocation of papers to topics resulted in the distribution depicted in Fig.  4 . As can be seen, the number of papers varies for each topic, indicating that some research areas have more publications than others do.

figure 4

Distribution of papers per topic

Next step is to process the findings and find an adequate description of the topics. A combination of reviewing the most frequent words and a title review was used to identify the topic names. Practically, all of the paper titles and the most frequent words for each topic, were transferred to a separate Excel spreadsheet, providing an easy overview of paper titles. An example for topic 17 can be seen in Table  3 . The most frequent words for the papers in topic 17 are “data”, “big” and “analyt”. Many of the paper titles also indicate usage of big data and analytics for application in a business setting. The topic is named “Big Data Analytics”.

The process was repeated for all other topics. The names of the topics are presented in Tables  4 and 5 .

Based on the names of the topics, three topics were selected based on relevancy for the literature review. Topics 5, 13, and 17 were selected, with a total of 99 papers. In this specific case, it was deemed that there might be papers with a sub-topic that is not relevant for the literature review. Therefore, an abstract review was conducted for the 99 papers, creating 10 sub-topics, which are presented in Table  6 .

The sub-topics RFID, Analytical Methods, Performance Management, and Evaluation and Selection of IT Systems were evaluated to not be relevant for the literature review. Seventy-six papers remained, grouped by sub-topics.

The outcome of the case was an overview of the research areas within the paper corpus, represented by the twenty topics and the ten sub-topics. The selected sub-topics were used to conduct a literature review. The validation of the framework consisted of two parts. The first part addressed the question of whether the grouping of papers, evaluated by the title and keywords, makes sense and the second part addressed whether the literature review revealed any misplaced papers. The framework did successfully place the selected papers into groups of papers that resemble each other. There was only one case where a paper was misplaced, namely that a paper about material informatics was placed among the papers in the sub-topic EIS and Analytics. The grouping and selection of papers in the literature review, based on the framework, did make semantic sense and was successfully used for a literature review. The framework has proven its utility in enabling a faster and more comprehensive exploratory literature review, as compared to competing methods. The framework has increased the speed for analysing a large amount of papers, as well as having increased the reliability in comparison with manual reviews as the same result can be obtained by running the analysis once again. The transparency in the framework is higher than in competing methods, as all steps of the framework are recorded in the code and output files.

This paper presents an approach not often found in academia, by using machine learning to explore papers to identify research directions. Even though the framework has its limitations, the results and ease of use leave a promising future for topic-modelling-based exploratory literature reviews.

The main benefit of the framework is that it provides information about a large number of papers, with little effort on the researcher’s part, before time-costly manual work is to be done. It is possible, by the use of the framework, to quickly navigate many different paper corpora and evaluate where the researchers’ time and focus should be spent. This is especially valuable for a junior researcher or a researcher with little prior knowledge of a research field. If default parameters and cleaning settings can be found for the steps in the framework, a fully automatic grouping of papers could be enabled, where very little work has to be done to achieve an overview of research directions. From a literature review perspective, the benefit of using the framework is that the decision to include or exclude papers for a literature review will be postponed to a later stage where more information is provided, resulting in a more informed decision-making process. The framework enables reproducibility, as all of the steps in the exploratory review process can be reproduced, and enables a higher degree of transparency than competing methods do, as the entire review process can, in detail, be evaluated by other researchers.

There is practically no limit of the number of papers the framework is able to process, which could enable new practices for exploratory literature reviews. An example is to use the framework to track the development of a research field, by running the topic modelling script frequently or when new papers are published. This is especially potent if new papers are automatically downloaded, enabling a fully automatic exploratory literature review. For example, if an exploratory review was conducted once, the review could be updated constantly whenever new publications are made, grouping the publications into the related topics. For this, the topic model has to be trained properly for the selected collection of papers, where it can be assumed that minor additions of papers would likely not warrant any changes to the selected parameters of the model. However, as time passes and more papers are processed, the model will learn more about the collection of papers and provide a more accurate and updated result. Having an automated process could also enable a faster and more reliable method to do post-processing of the results, reducing the post-analysis cost identified for topic modelling by [ 30 ], from moderate to low.

The framework is designed to be easily used by other researchers by designing the framework to require less technical knowledge than a normal topic model usage would entail and by sharing the code used in the case work. The framework is designed as a step-by-step approach, which makes the framework more approachable. However, the framework has yet not been used by other researchers, which would provide valuable lessons for evaluating if the learning curve needs to be lowered even further for researchers to successfully use the framework.

There are, however, considerations that must be addressed when using the smart literature review framework. Finding the optimal number of topics can be quite difficult, and the proposed method of cross-validation based on the perplexity presented a good, but not optimal, solution. An indication of why the number of selected topics is not optimal is the fact that it was not possible to identify a unifying topic label for two of the topics. Namely topics 12 and 20, which were both labelled miscellaneous. The current solution to this issue is to evaluate the relevancy of every single paper of the topics that cannot be labelled. However, in future iterations of the framework, a better identification of the number of topics must be developed. This is a notion also recognised by [ 6 ], who requested that researchers should find a way to label and assign papers to a topic other than identifying the most frequent words. An attempt was made by [ 17 ] to generate automatic labelling on press releases, but it is uncertain if the method will work in other instances. Overall, the grouping of papers in the presented case into topics generally made semantic sense, where a topic label could be found for the majority of topics.

A consideration when using the framework is that not all steps have been clearly defined, and, e.g., the cleaning step is more of an art than science. If a researcher has no or little experience in coding or executing analytical models, suboptimal results could occur. [ 11 , 25 , 27 ] find that especially the pre-processing steps can have a great impact on the validity of results, which further emphasises the importance of selecting model parameters. However, it is found that the default parameters and cleaning steps set in the code provided a sufficiently valid and useable result for an exploratory literature analysis. Running the code will not take much of the researcher’s time, as the execution of code is mainly machine time, and verifying the results takes a limited amount of a researcher time.

Due to the semantic validation method used in the framework, it relies on the availability of a domain expert. The domain expert will not only validate if the grouping of papers into topics makes sense, but it is also their responsibility to label the topics [ 12 ]. If a domain expert is not available, it could lead to wrongly labelled topics and a non-valid result.

A key issue with topic modelling is that a paper can be placed in several related topics, depending on the selected seed value. The seed value will change the starting point of the topic modelling, which could result in another grouping of papers. A paper consists of several sub-topics and depending on how the different sub-topics are evaluated, papers can be allocated to different topics. A way to deal with this issue is to investigate papers with topic probabilities close to each other. Potential wrongly assigned papers can be identified and manually moved if deemed necessary. However, this presents a less automatic way of processing the papers, where future research should aim to improve the assignments of papers to topics or create a method to provide an overview of potentially misplaced papers. It should be noted that even though some papers can be misplaced, the framework provides outcome files than can easily be viewed to identify misplaced papers, by a manual review.

As the smart literature review framework heavily relies on topic modelling, improvements to the selected topic model will likely present better results. The results of the LDA method have provided good results, but more accurate results could be achieved if the semantic meaning of the words would be considered. The framework has only been tested on academic papers, but there is no technical reason to not include other types of documents. An example is to use the framework in a business context to analyse meeting minutes notes to analyse the discussion within the different departments in a company. For this to work, the cleaning parameters would likely have to change, and another evaluation method other than a literature review would be applicable. Further, the applicability of the framework has to be assessed on other streams of literature to be certain of its use for exploratory literature reviews at large.

This paper aimed to create a framework to enable researchers to use topic modelling to, do an exploratory literature review, decreasing the need for manually reading papers and, enabling the possibility to analyse a greater, almost unlimited, amount of papers, faster, more transparently and with greater reliability. The framework is based upon the use of the topic model Latent Dirichlet Allocation, which groups related papers into topic groups. The framework provides greater reliability than competing exploratory review methods provide, as the code can be rerun on the same papers, which will provide identical results. The process is highly transparent, as most decisions made by the researcher can be reviewed by other researchers, unlike, e.g., in the creation of coding sheets. The framework consists of three main phases: Pre-processing, Topic Modelling, and Post-Processing. In the pre-processing stage, papers are loaded, cleaned, and cross-validated, where recommendations to parameter settings are provided in the case work, as well as in the accompanied code. The topic modelling step is where the LDA method is executed, using the parameters identified in the pre-processing step. The post-processing step creates outputs from the topic model and addresses how validity can be ensured and how the exploratory literature review can be used for a full literature review. The framework was successfully used in a case with 650 papers, which was processed quickly, with little time investment from the researcher. Less than 2 days was used to process the 650 papers and group them into twenty research areas, with the use of a standard laptop. The results of the case are used in the literature review by [ 3 ].

The framework is seen to be especially relevant for junior researchers, as they often need an overview of different research fields, with little pre-existing knowledge, where the framework can enable researchers to review more papers, more frequently.

For an improved framework, two main areas need to be addressed. Firstly, the proposed framework needs to be applied by other researchers on other research fields to gain knowledge about the practicality and gain ideas for further development of the framework. Secondly, research in how to automatically identity model parameters could greatly improve the usability for the use of topic modelling for non-technical researchers, as the selection of model parameters has a great impact on the result of the framework.

Availability of data and materials

https://github.com/clausba/Smart-Literature-Review (No data).

Abbreviations

  • Latent Dirichlet Allocation

supply chain management

Alghamdi R, Alfalqi K. A survey of topic modeling in text mining. Int J Adv Comput Sci Appl. 2015;6(1):7. https://doi.org/10.14569/IJACSA.2015.060121 .

Article   Google Scholar  

Ansolabehere S, Snowberg EC, Snyder JM. Statistical bias in newspaper reporting on campaign finance. Public Opin Quart. 2003. https://doi.org/10.2139/ssrn.463780 .

Asmussen CB, Møller C. Enabling supply chain analytics for enterprise information systems: a topic modelling literature review. Enterprise Information Syst. 2019. (Submitted To) .

Atteveldt W, Welbers K, Jacobi C, Vliegenthart R. LDA models topics… But what are “topics”? In: Big data in the social sciences workshop. 2015. http://vanatteveldt.com/wp-content/uploads/2014_vanatteveldt_glasgowbigdata_topics.pdf .

Baum D. Recognising speakers from the topics they talk about. Speech Commun. 2012;54(10):1132–42. https://doi.org/10.1016/j.specom.2012.06.003 .

Blei DM. Probabilistic topic models. Commun ACM. 2012;55(4):77–84. https://doi.org/10.1145/2133806.2133826 .

Blei DM, Lafferty JD. A correlated topic model of science. Ann Appl Stat. 2007;1(1):17–35. https://doi.org/10.1214/07-AOAS114 .

Article   MathSciNet   MATH   Google Scholar  

Blei DM, Ng AY, Jordan MI. Latent Dirichlet Allocation. J Mach Learn Res. 2003;3:993–1022. https://doi.org/10.5555/944919.944937 .

Article   MATH   Google Scholar  

Bonilla T, Grimmer J. Elevated threat levels and decreased expectations: how democracy handles terrorist threats. Poetics. 2013;41(6):650–69. https://doi.org/10.1016/j.poetic.2013.06.003 .

Brocke JV, Mueller O, Debortoli S. The power of text-mining in business process management. BPTrends.

Denny MJ, Spirling A. Text preprocessing for unsupervised learning: why it matters, when it misleads, and what to do about it. Polit Anal. 2018;26(2):168–89. https://doi.org/10.1017/pan.2017.44 .

DiMaggio P, Nag M, Blei D. Exploiting affinities between topic modeling and the sociological perspective on culture: application to newspaper coverage of U.S. government arts funding. Poetics. 2013;41(6):570–606. https://doi.org/10.1016/j.poetic.2013.08.004 .

Elgesem D, Feinerer I, Steskal L. Bloggers’ responses to the Snowden affair: combining automated and manual methods in the analysis of news blogging. Computer Supported Cooperative Work: CSCW. Int J. 2016;25(2–3):167–91. https://doi.org/10.1007/s10606-016-9251-z .

Elgesem D, Steskal L, Diakopoulos N. Structure and content of the discourse on climate change in the blogosphere: the big picture. Environ Commun. 2015;9(2):169–88. https://doi.org/10.1080/17524032.2014.983536 .

Evans MS. A computational approach to qualitative analysis in large textual datasets. PLoS ONE. 2014;9(2):1–11. https://doi.org/10.1371/journal.pone.0087908 .

Ghosh D, Guha R. What are we “tweeting” about obesity? Mapping tweets with topic modeling and geographic information system. Cartogr Geogr Inform Sci. 2013;40(2):90–102. https://doi.org/10.1080/15230406.2013.776210 .

Grimmer J. A Bayesian hierarchical topic model for political texts: measuring expressed agendas in senate press releases. Polit Anal. 2010;18(1):1–35. https://doi.org/10.1093/pan/mpp034 .

Grimmer J, Stewart BM. Text as data: the promise and pitfalls of automatic content analysis methods for political texts. Polit Anal. 2013;21(03):267–97. https://doi.org/10.1093/pan/mps028 .

Guo L, Vargo CJ, Pan Z, Ding W, Ishwar P. Big social data analytics in journalism and mass communication. J Mass Commun Quart. 2016;93(2):332–59. https://doi.org/10.1177/1077699016639231 .

Jacobi C, Van Atteveldt W, Welbers K. Quantitative analysis of large amounts of journalistic texts using topic modelling. Digit J. 2016;4(1):89–106. https://doi.org/10.1080/21670811.2015.1093271 .

Jockers ML, Mimno D. Significant themes in 19th-century literature. Poetics. 2013;41(6):750–69. https://doi.org/10.1016/j.poetic.2013.08.005 .

Jones BD, Baumgartner FR. The politics of attention: how government prioritizes problems. Chicago: University of Chicago Press; 2005.

Google Scholar  

King G, Lowe W. An automated information extraction tool for international conflict data with performance as good as human coders: a rare events evaluation design. Int Org. 2008;57:617–43. https://doi.org/10.1017/s0020818303573064 .

Koltsova O, Koltcov S. Mapping the public agenda with topic modeling: the case of the Russian LiveJournal. Policy Internet. 2013;5(2):207–27. https://doi.org/10.1002/1944-2866.POI331 .

Lancichinetti A, Irmak Sirer M, Wang JX, Acuna D, Körding K, Amaral LA. High-reproducibility and high-accuracy method for automated topic classification. Phys Rev X. 2015;5(1):1–11. https://doi.org/10.1103/PhysRevX.5.011007 .

Mahmood A. Literature survey on topic modeling. Technical Report, Dept. of CIS, University of Delaware Newark, Delaware. http://www.eecis.udel.edu/~vijay/fall13/snlp/lit-survey/TopicModeling-ASM.pdf . 2009.

Maier D, Waldherr A, Miltner P, Wiedemann G, Niekler A, Keinert A, Adam S. Applying LDA topic modeling in communication research: toward a valid and reliable methodology. Commun Methods Meas. 2018;12(2–3):93–118. https://doi.org/10.1080/19312458.2018.1430754 .

Mimno D, Blei DM. Bayesian checking for topic models. In: EMLP 11 proceedings of the conference on empirical methods in natural language processing. 2011. p 227–37. https://doi.org/10.5555/2145432.2145459

Parra D, Trattner C, Gómez D, Hurtado M, Wen X, Lin YR. Twitter in academic events: a study of temporal usage, communication, sentimental and topical patterns in 16 Computer Science conferences. Comput Commun. 2016;73:301–14. https://doi.org/10.1016/j.comcom.2015.07.001 .

Quinn KM, Monroe BL, Colaresi M, Crespin MH, Radev DR. How to analyze political attention. Am J Polit Sci. 2010;54(1):209–28. https://doi.org/10.1111/j.1540-5907.2009.00427.x .

Xu Z, Raschid L. Probabilistic financial community models with Latent Dirichlet Allocation for financial supply chains. In: DSMM’16 proceedings of the second international workshop on data science for macro-modeling. 2016. https://doi.org/10.1145/2951894.2951900 .

Zhao W, Chen JJ, Perkins R, Liu Z, Ge W, Ding Y, Zou W. A heuristic approach to determine an appropriate number of topics in topic modeling. BMC Bioinform. 2015;16(13):S8. https://doi.org/10.1186/1471-2105-16-S13-S8 .

Download references

Acknowledgements

Not applicable.

Author information

Authors and affiliations.

Department of Materials and Production, Center for Industrial Production, Aalborg University, Fibigerstræde 16, 9220, Aalborg Øst, Denmark

Claus Boye Asmussen & Charles Møller

You can also search for this author in PubMed   Google Scholar

Contributions

CBA wrote the paper, developed the framework and executed the case. CM Supervised the research and developed the framework. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Claus Boye Asmussen .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article.

Asmussen, C.B., Møller, C. Smart literature review: a practical topic modelling approach to exploratory literature review. J Big Data 6 , 93 (2019). https://doi.org/10.1186/s40537-019-0255-7

Download citation

Received : 26 July 2019

Accepted : 02 October 2019

Published : 19 October 2019

DOI : https://doi.org/10.1186/s40537-019-0255-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Supply chain management
  • Automatic literature review

what is an exploratory literature review

Exploratory Literature Review of the Role of National Public Health Institutes in COVID-19 Response

  • PMID: 36502388
  • PMCID: PMC9745230
  • DOI: 10.3201/eid2813.220760

To help explain the diversity of COVID-19 outcomes by country, research teams worldwide are studying national government response efforts. However, these attempts have not focused on a critical national authority that exists in half of the countries in the world: national public health institutes (NPHIs). NPHIs serve as an institutional home for public health systems and expertise and play a leading role in epidemic responses. To characterize the role of NPHIs in the COVID-19 response, we conducted a descriptive literature review that explored the research documented during March 2020-May 2021. We conducted a name-based search of 61 NPHIs in the literature, representing over half of the world's NPHIs. We identified 33 peer-reviewed and 300 gray articles for inclusion. We describe the most common NPHI-led COVID-19 activities that are documented and identify gaps in the literature. Our findings underscore the value of NPHIs for epidemic control and establish a foundation for primary research.

Keywords: 2019 novel coronavirus disease; COVID-19; SARS-CoV-2; coronavirus disease; national public health institutes; prevention and control; public health; respiratory infections; severe acute respiratory syndrome coronavirus 2; viruses; zoonoses.

Publication types

  • COVID-19* / epidemiology
  • Public Health

IMAGES

  1. A Complete Guide on How to Write Good a Literature Review

    what is an exploratory literature review

  2. Systematic Literature Review Methodology

    what is an exploratory literature review

  3. (PDF) Strategizing and Project Management in Construction Projects: An

    what is an exploratory literature review

  4. What is Exploratory Research? Types of Exploratory Studies In Sales

    what is an exploratory literature review

  5. Exploratory Research Paper Example

    what is an exploratory literature review

  6. (PDF) University Research Management: An Exploratory Literature Review

    what is an exploratory literature review

VIDEO

  1. Synthesise don't summarise I Literature Review

  2. Concentration music for writing literature reviews

  3. Tutorial shiny : Preprocessing

  4. Tutorial LDAShiny : Run LDA Model

  5. Tutorial LDAShiny : Document Term Matrix Visualizations

  6. Tutorial Shiny : Launch app

COMMENTS

  1. Exploratory Research

    Exploratory research is a methodology approach that investigates research questions that have not previously been studied in depth. Exploratory research is often qualitative and primary in nature. However, a study with a large sample conducted in an exploratory manner can be quantitative as well. It is also often referred to as interpretive ...

  2. Chapter 1: Introduction

    Unlike a synoptic literature review, the purpose here is to provide a broad approach to the topic area. The aim is breadth rather than depth and to get a general feel for the size of the topic area. A graduate student might do an exploratory review of the literature before beginning a synoptic, or more comprehensive one.

  3. How to Write a Literature Review

    Examples of literature reviews. Step 1 - Search for relevant literature. Step 2 - Evaluate and select sources. Step 3 - Identify themes, debates, and gaps. Step 4 - Outline your literature review's structure. Step 5 - Write your literature review.

  4. Chapter 9 Methods for Literature Reviews

    The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ... 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) ...

  5. Conducting a Literature Review: Why Do A Literature Review?

    Literature review is approached as a process of engaging with the discourse of scholarly communities that will help graduate researchers refine, define, and express their own scholarly vision and voice. This orientation on research as an exploratory practice, rather than merely a series of predetermined steps in a systematic method, allows the ...

  6. Introduction to Literature Reviews

    The purpose of an exploratory review is to provide a broad approach to the topic area. The aim is breadth rather than depth and to get a general feel for the size of the topic area. A graduate student might do an exploratory review of the literature before beginning a more comprehensive one (e.g., synoptic). Examples of an Exploratory Review:

  7. Getting started

    What is a literature review? Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject. Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field. Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in ...

  8. What is a Literature Review?

    A literature review is a review and synthesis of existing research on a topic or research question. A literature review is meant to analyze the scholarly literature, make connections across writings and identify strengths, weaknesses, trends, and missing conversations. A literature review should address different aspects of a topic as it ...

  9. PDF What is a Literature Review?

    literature review is an aid to gathering and synthesising that information. The pur-pose of the literature review is to draw on and critique previous studies in an orderly, precise and analytical manner. The fundamental aim of a literature review is to provide a comprehensive picture of the knowledge relating to a specific topic.

  10. LibGuides: Literature Reviews: Define your research question

    Define your research question. Defining your research question is the key to beginning, so while you may be clear on the area you want to study, chances are there are some nuances that you need to think through. Part of this process may require exploratory searching in databases so that you can see what's already been published on your topic.

  11. What is a Literature Review? How to Write It (with Examples)

    A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship ...

  12. PDF Scoping reviews: What they are & How you can do them

    "Exploratory projects that systematically map the literature available on a topic, identifying key concepts, theories, sources ... Search grey literature Scoping review steps by the Joanna Briggs Institute 7. Screen titles & abstracts (by ≥2 reviewers) 8. Screen full-texts (by ≥2 reviewers) 9. Have a pre-defined charting

  13. What is a literature review?

    A literature or narrative review is a comprehensive review and analysis of the published literature on a specific topic or research question. The literature that is reviewed contains: books, articles, academic articles, conference proceedings, association papers, and dissertations. It contains the most pertinent studies and points to important ...

  14. Systematic Reviews and Meta-Analyses: Exploratory Search

    Exploratory Search. Once you have an initial research question, you can develop and refine your research question and eligibility criteria through exploratory searching.Exploratory searching is also called preliminary, initial, and naive or novice searching.Regardless of what you call it, it is simply a series of searches conducted prior to starting the review with the goal of producing a well ...

  15. Literature review as a research methodology: An ...

    As mentioned previously, there are a number of existing guidelines for literature reviews. Depending on the methodology needed to achieve the purpose of the review, all types can be helpful and appropriate to reach a specific goal (for examples, please see Table 1).These approaches can be qualitative, quantitative, or have a mixed design depending on the phase of the review.

  16. Exploratory studies to decide whether and how to proceed with full

    Exploratory studies were described as being concerned with the intervention content, the future evaluation design or both. However, the nomenclature and endorsed methods underpinning these aims were inconsistent across papers. There was little guidance on what should precede or follow an exploratory study and decision-making surrounding this.

  17. Narrative Reviews: Flexible, Rigorous, and Practical

    A critical review is a narrative synthesis of literature that brings an interpretative lens: the review is shaped by a theory, a critical point of view, or perspectives from other domains to inform the literature analysis. Critical reviews involve an interpretative process that combines the reviewer's theoretical premise with existing theories ...

  18. Research Guides: Literature Reviews: Planning the Review

    The Annual Reviews provide substantially researched articles written by recognized scholars in a wide variety of disciplines that summarize the major research literature in the field. These are often a good place to start your research and to keep informed about recent developments. Oxford Bibliographies. This link opens in a new window.

  19. Analysis of Literature Review in Cases of Exploratory Research

    Abstract. As the name suggest a good literature review is always comprehensive and contextualized with respect to the research. It provides the reader or the target audience with a base of the theory base along with a survey of published works that pertain to the investigation of the researcher and further an analysis of that particular work.

  20. Smart literature review: a practical topic modelling approach to

    Manual exploratory literature reviews should be a thing of the past, as technology and development of machine learning methods have matured. The learning curve for using machine learning methods is rapidly declining, enabling new possibilities for all researchers. A framework is presented on how to use topic modelling on a large collection of papers for an exploratory literature review and how ...

  21. literature: an exploratory review

    in this area. We position our review as an exploratory review because, as we will show, there is very little - if any - prior work to form the basis for a systematic mapping study or a systematic literature review. 1.2 Aims, objectives and contributions The overall aim of this paper is to review empirical studies that

  22. Smart literature review: a practical topic modelling approach to

    context of an exploratory literature review and potentially be used for a full litera-ture review. The method provided in this paper is a framework which is based up on .

  23. Exploratory Literature Review of the Role of National Public Health

    To characterize the role of NPHIs in the COVID-19 response, we conducted a descriptive literature review that explored the research documented during March 2020-May 2021. We conducted a name-based search of 61 NPHIs in the literature, representing over half of the world's NPHIs. We identified 33 peer-reviewed and 300 gray articles for inclusion ...

  24. Uncovering the landscape of cross-national UK education research: an

    An exploratory, high-level review of 'home international' education research was conducted, based on the review of abstracts. ... Literature had to be written in the English language. Each item of literature meeting the criteria was added to an electronic spreadsheet, along with the following metadata: the author(s), year, title, abstract ...

  25. Sustainability

    In response to the intensifying compression of resources and the environment associated with rapid industrial growth and increasing living standards, green production and sustainable living have developed essential facts for ecologically conscious progress. Despite the potential benefits of synergy, the complex relationship between green production and living organisms presents challenges that ...