

Types of Research – Explained with Examples


  • By DiscoverPhDs
  • October 2, 2020

Types of Research

Research is about using established methods to investigate a problem or question in detail with the aim of generating new knowledge about it.

It is a vital tool for scientific advancement because it allows researchers to support or refute hypotheses based on clearly defined parameters, environments and assumptions. This enables us to contribute to knowledge with confidence, as it allows research to be verified and replicated.

Knowing the types of research and what each of them focuses on will allow you to better plan your project, utilise the most appropriate methodologies and techniques, and better communicate your findings to other researchers and supervisors.

Classification of Types of Research

There are various types of research that are classified according to their objective, depth of study, analysed data, time required to study the phenomenon and other factors. It’s important to note that a research project will not be limited to one type of research, but will likely use several.

According to its Purpose

Theoretical research.

Theoretical research, also referred to as pure or basic research, focuses on generating knowledge, regardless of its practical application. Here, data collection is used to generate new general concepts for a better understanding of a particular field or to answer a theoretical research question.

Results of this kind are usually oriented towards the formulation of theories and are usually based on documentary analysis, the development of mathematical formulas and the reflection of high-level researchers.

Applied Research

Here, the goal is to find strategies that can be used to address a specific research problem. Applied research draws on theory to generate practical scientific knowledge, and its use is very common in STEM fields such as engineering, computer science and medicine.

This type of research is subdivided into two types:

  • Technological applied research: looks towards improving efficiency in a particular productive sector through the improvement of processes or machinery related to said productive processes.
  • Scientific applied research: has predictive purposes. Through this type of research design, we can measure certain variables to predict behaviours useful to the goods and services sector, such as consumption patterns and viability of commercial projects.

According to your Depth of Scope

Exploratory research.

Exploratory research is used for the preliminary investigation of a subject that is not yet well understood or sufficiently researched. It serves to establish a frame of reference and a hypothesis from which an in-depth study can be developed that will enable conclusive results to be generated.

Because exploratory research is based on the study of little-studied phenomena, it relies less on theory and more on the collection of data to identify patterns that explain these phenomena.

Descriptive Research

The primary objective of descriptive research is to define the characteristics of a particular phenomenon without necessarily investigating the causes that produce it.

In this type of research, the researcher must take particular care not to intervene in the observed object or phenomenon, as its behaviour may change if an external factor is involved.

Explanatory Research

Explanatory research is the most common type of research method and is responsible for establishing cause-and-effect relationships that allow generalisations to be extended to similar realities. It is closely related to descriptive research, although it provides additional information about the observed object and its interactions with the environment.

Correlational Research

The purpose of this type of scientific research is to identify the relationship between two or more variables. A correlational study aims to determine whether, when one variable changes, the other elements of the observed system change with it, and by how much.
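As a rough sketch of what a correlational analysis actually computes (the data and variable names below are invented purely for illustration), the Pearson correlation coefficient summarises how strongly two variables move together:

```python
# Illustrative sketch only: the data are invented for demonstration.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by the product of the spreads."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

hours_studied = [2, 4, 6, 8, 10]
exam_score = [55, 60, 70, 75, 85]

r = pearson_r(hours_studied, exam_score)  # r near +1: strong positive association
```

Values of r close to +1 or −1 indicate a strong linear relationship, while values near 0 indicate little or none. Even a strong correlation, however, says nothing about which variable, if either, causes the other.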

According to the Type of Data Used

Qualitative research.

Qualitative research is often used in the social sciences to collect, compare and interpret information. It has a linguistic-semiotic basis and is used in techniques such as discourse analysis, interviews, surveys, records and participant observations.

To use statistical methods to validate their results, the observations collected must be evaluated numerically. Qualitative research, however, tends to be subjective, since not all data can be fully controlled. This type of research design is therefore better suited to extracting meaning from an event or phenomenon (the ‘why’) than to establishing its cause (the ‘how’).

Quantitative Research

Quantitative research delves into a phenomenon through quantitative data collection, using mathematical, statistical and computer-aided tools to measure it. This allows generalised conclusions to be projected over time.

According to the Degree of Manipulation of Variables

Experimental research.

Experimental research involves designing or replicating a phenomenon whose variables are manipulated under strictly controlled conditions in order to identify or discover their effect on a dependent variable or object. The phenomenon to be studied is measured through study and control groups, according to the guidelines of the scientific method.

Non-Experimental Research

Also known as an observational study, it focuses on the analysis of a phenomenon in its natural context. As such, the researcher does not intervene directly, but limits their involvement to measuring the variables required for the study. Due to its observational nature, it is often used in descriptive research.

Quasi-Experimental Research

It controls only some variables of the phenomenon under investigation and is therefore not entirely experimental. In this case, the study and focus groups cannot be randomly selected, but are chosen from existing groups or populations. This is to ensure the collected data is relevant and that the knowledge, perspectives and opinions of the population can be incorporated into the study.

According to the Type of Inference

Deductive investigation.

In this type of research, reality is explained by general laws that point to certain conclusions; the conclusions are expected to follow from the premises of the research problem and are considered correct if the premises are valid and the deductive method is applied correctly.

Inductive Research

In this type of research, knowledge is generated from an observation to achieve a generalisation. It is based on the collection of specific data to develop new theories.

Hypothetical-Deductive Investigation

It is based on observing reality to form a hypothesis, then using deduction to obtain a conclusion, and finally verifying or rejecting that conclusion through experience.

According to the Time in Which it is Carried Out

Longitudinal study (also referred to as diachronic research).

It is the monitoring of the same event, individual or group over a defined period of time. It aims to track changes in a number of variables and see how they evolve over time. It is often used in medical, psychological and social areas.

Cross-Sectional Study (also referred to as Synchronous Research)

Cross-sectional research design is used to observe phenomena, an individual or a group of research subjects at a given time.

According to The Sources of Information

Primary research.

This fundamental research type is defined by the fact that the data is collected directly from the source; that is, it consists of primary, first-hand information.

Secondary research

Unlike primary research, secondary research is developed with information from secondary sources, which are generally based on scientific literature and other documents compiled by another researcher.

According to How the Data is Obtained

Documentary (desk) research.

Documentary research, based on secondary sources, involves a systematic review of existing sources of information on a particular subject. This type of scientific research is commonly used when undertaking literature reviews or producing a case study.

From the Field

Field research involves the direct collection of information at the location where the observed phenomenon occurs.

From Laboratory

Laboratory research is carried out in a controlled environment in order to isolate a dependent variable and establish its relationship with other variables through scientific methods.

Mixed-Method: Documentary, Field and/or Laboratory

Mixed research methodologies combine results from both secondary (documentary) sources and primary sources through field or laboratory research.


Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
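To make the distinction concrete, here is a minimal sketch (the population and sample sizes are invented for illustration) contrasting a probability method, simple random sampling, with a common non-probability method, convenience sampling:

```python
# Illustrative sketch only: the population and sizes are invented.
import random

population = [f"student_{i}" for i in range(500)]

# Probability sampling: every member has a known, equal chance of selection.
random.seed(42)  # fixed seed so the sketch is reproducible
random_sample = random.sample(population, k=50)

# Non-probability (convenience) sampling: take whoever is easiest to reach,
# e.g. the first 50 names on the list -- cheap, but prone to selection bias.
convenience_sample = population[:50]
```

Because every member of the population had the same chance of landing in the random sample, results from it can be generalised with known confidence; the convenience sample carries no such guarantee.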

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.


Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
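As a small sketch of how these three summaries are computed (the test scores below are invented for the example), each one maps directly onto a Python standard library call:

```python
# Illustrative sketch only: the test scores are invented.
from collections import Counter
from statistics import mean, stdev

scores = [70, 75, 75, 80, 85, 85, 85, 90]

distribution = Counter(scores)    # frequency of each score, e.g. how often 85 occurs
central_tendency = mean(scores)   # the average score
variability = stdev(scores)       # sample standard deviation: spread of the scores
```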

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
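As an illustrative sketch of a comparison test (the group data are invented, and this computes only the t statistic, not a p-value), Welch’s two-sample t statistic scales the difference between group means by its standard error:

```python
# Illustrative sketch only: group data are invented; no p-value is computed.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic: mean difference over its standard error."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

group_a = [12.1, 11.8, 12.4, 12.0, 12.3]  # outcome scores, treatment group
group_b = [11.2, 11.0, 11.5, 11.3, 11.1]  # outcome scores, control group

t = welch_t(group_a, group_b)  # a large |t| suggests a real difference in means
```

In practice you would compare the statistic against a t distribution (or use a statistics package) to obtain a p-value, and the appropriate test still depends on your design, variable types, and data distribution.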

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 18 March 2024, from https://www.scribbr.co.uk/research-methods/research-design/



Medicine LibreTexts

1.3: Types of Research Studies and How To Interpret Them

  • Page ID 59269

  • Alice Callahan, Heather Leonard, & Tamberly Powell
  • Lane Community College via OpenOregon

The field of nutrition is dynamic, and our understanding and practices are always evolving. Nutrition scientists are continuously conducting new research and publishing their findings in peer-reviewed journals. This adds to scientific knowledge, but it’s also of great interest to the public, so nutrition research often shows up in the news and other media sources. You might be interested in nutrition research to inform your own eating habits, or if you work in a health profession, so that you can give evidence-based advice to others. Making sense of science requires that you understand the types of research studies used and their limitations.

The Hierarchy of Nutrition Evidence

Researchers use many different types of study designs depending on the question they are trying to answer, as well as factors such as time, funding, and ethical considerations. The study design affects how we interpret the results and the strength of the evidence as it relates to real-life nutrition decisions. It can be helpful to think about the types of studies within a pyramid representing a hierarchy of evidence, where the studies at the bottom of the pyramid usually give us the weakest evidence with the least relevance to real-life nutrition decisions, and the studies at the top offer the strongest evidence, with the most relevance to real-life nutrition decisions.


Figure 2.1. Hierarchy of research design and levels of scientific evidence with the strongest studies at the top and the weakest at the bottom.

The pyramid also represents a few other general ideas. There tend to be more studies published using the methods at the bottom of the pyramid, because they require less time, money, and other resources. When researchers want to test a new hypothesis, they often start with the study designs at the bottom of the pyramid, such as in vitro, animal, or observational studies. Intervention studies are more expensive and resource-intensive, so there are fewer of these types of studies conducted. But they also give us higher quality evidence, so they’re an important next step if observational and non-human studies have shown promising results. Meta-analyses and systematic reviews combine the results of many studies already conducted, so they help researchers summarize scientific knowledge on a topic.

Non-Human Studies: In Vitro & Animal Studies

The simplest form of nutrition research is an in vitro study. In vitro means “within glass” (although plastic is more commonly used today), and these experiments are conducted within flasks, dishes, plates, and test tubes. One common form of in vitro research is cell culture, which involves growing cells in flasks and dishes. In order for cells to grow, they need a nutrient source. For cell culture, the nutrient source is referred to as media. Media supplies nutrients to the cells in vitro similarly to how blood performs this function within the body. Most cells adhere to the bottom of the flask and are so small that a microscope is needed to see them. The cells are grown inside an incubator, a device that provides the optimal temperature, humidity, and carbon dioxide (CO2) concentrations for cells and microorganisms. By imitating the body’s temperature and CO2 levels (37 degrees Celsius, 5% CO2), the incubator allows cells to grow even though they are outside the body.

A limitation of in vitro research compared to in vivo research is that it typically does not take digestion or bioavailability into account. This means that the concentration used might not be physiologically possible (it might be much higher), and that digestion and metabolism of what is being provided to the cells may not be accounted for. Cell-based in vitro research is also not as complex a biological system as animals or people, which have tissues, organs, etc. working together.

Since these studies are performed on isolated cells or tissue samples, they are less expensive and time-intensive than animal or human studies. In vitro studies are vital for zooming in on biological mechanisms, to see how things work at the cellular or molecular level. However, these studies shouldn’t be used to draw conclusions about how things work in humans (or even animals), because we can’t assume that the results will apply to a whole, living organism.

Animal studies are one form of in vivo research, which translates to “within the living.” Rats and mice are the most common animals used in nutrition research. Animals are often used in research that would be unethical to conduct in humans. Another advantage of animal dietary studies is that researchers can control exactly what the animals eat. In human studies, researchers can tell subjects what to eat and even provide them with the food, but subjects may not stick to the planned diet. People are also not very good at estimating, recording, or reporting what they eat and in what quantities. In addition, animal studies typically do not cost as much as human studies.

There are some important limitations of animal research. First, an animal’s metabolism and physiology are different from humans. Plus, animal models of disease (cancer, cardiovascular disease, etc.), although similar, are different from human diseases. Animal research is considered preliminary, and while it can be very important to the process of building scientific understanding and informing the types of studies that should be conducted in humans, animal studies shouldn’t be considered relevant to real-life decisions about how people eat.

Observational Studies

Observational studies in human nutrition collect information on people’s dietary patterns or nutrient intake and look for associations with health outcomes. Observational studies do not give participants a treatment or intervention; instead, researchers observe what participants are already doing and see how it relates to their health. These study designs can only identify correlations (relationships) between nutrition and health; they can’t show that one factor causes another. (For that, we need intervention studies, which we’ll discuss in a moment.) Observational studies that describe factors correlated with human health are also called epidemiological studies. 1

Epidemiology is the study of the distribution and determinants of health and disease in human populations. Epidemiological studies often investigate the relationship between dietary intake and disease development.


One example of a nutrition hypothesis that has been investigated using observational studies is that eating a Mediterranean diet reduces the risk of developing cardiovascular disease. (A Mediterranean diet focuses on whole grains, fruits and vegetables, beans and other legumes, nuts, olive oil, herbs, and spices. It includes small amounts of animal protein (mostly fish), dairy, and red wine. 2 ) There are three main types of observational studies, all of which could be used to test hypotheses about the Mediterranean diet:

  • Cohort studies follow a group of people (a cohort) over time, measuring factors such as diet and health outcomes. A cohort study of the Mediterranean diet would ask a group of people to describe their diet, and then researchers would track them over time to see if those eating a Mediterranean diet had a lower incidence of cardiovascular disease.
  • Case-control studies compare a group of cases and controls, looking for differences between the two groups that might explain their different health outcomes. For example, researchers might compare a group of people with cardiovascular disease to a group of healthy controls to see whether the cases or the controls were more likely to have followed a Mediterranean diet.
  • Cross-sectional studies collect information about a population of people at one point in time. For example, a cross-sectional study might compare the dietary patterns of people from different countries to see if diet correlates with the prevalence of cardiovascular disease in the different countries.
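To make the case-control comparison concrete, here is a minimal sketch of how an odds ratio is computed from a 2×2 table of cases and controls by diet exposure. The counts are hypothetical, invented purely for illustration:

```python
# Hypothetical case-control counts (illustrative only, not data from any real study)
cases_exposed = 40       # people with CVD who followed a Mediterranean diet
cases_unexposed = 60     # people with CVD who did not
controls_exposed = 70    # healthy controls who followed the diet
controls_unexposed = 30  # healthy controls who did not

# Odds of exposure among cases vs. among controls
odds_cases = cases_exposed / cases_unexposed           # 40/60
odds_controls = controls_exposed / controls_unexposed  # 70/30

odds_ratio = odds_cases / odds_controls
print(f"Odds ratio: {odds_ratio:.2f}")  # ≈ 0.29
```

An odds ratio below 1 would suggest the exposure (here, the Mediterranean diet) is associated with lower odds of disease; like any case-control result, it shows correlation, not causation.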

There are two types of cohort studies: retrospective and prospective. Retrospective studies look at what happened in the past, and they’re considered weaker because they rely on people’s memory of what they ate or how they felt in the past. Prospective cohort studies, which enroll a cohort and follow them into the future, are usually considered the strongest type of observational study design.

Most cohort studies are prospective. Initial information is collected (usually by food frequency questionnaires) on the intake of a cohort of people at baseline (the beginning of the study). This cohort is then followed over time (often many years) to quantify health outcomes of the individuals within it. Cohort studies are normally considered more robust than case-control studies, because they do not start with diseased people and normally do not require people to remember their dietary habits in the distant past or before they developed a disease. An example of a prospective cohort study would be one in which you filled out a questionnaire on your current dietary habits and were then followed into the future to see whether you develop osteoporosis. As shown below, instead of separating individuals based on diseased versus disease-free status, a cohort study separates them based on exposure. In this example, those who were exposed are more likely to become diseased than those who were not.
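Cohort results like these are typically summarized as a relative risk: the incidence of disease among the exposed divided by the incidence among the unexposed. The sketch below uses hypothetical counts, not data from any real cohort:

```python
# Hypothetical cohort counts (illustrative only)
exposed_diseased = 30
exposed_total = 100
unexposed_diseased = 10
unexposed_total = 100

risk_exposed = exposed_diseased / exposed_total        # 30% of exposed developed disease
risk_unexposed = unexposed_diseased / unexposed_total  # 10% of unexposed did

relative_risk = risk_exposed / risk_unexposed
print(f"Relative risk: {relative_risk:.1f}")  # 3.0: exposed are 3x as likely to develop disease
```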


Using trans-fat intake again as the exposure and cardiovascular disease as the disease, the figure would be expected to look like this:


There are several well-known examples of prospective cohort studies that have described important correlations between diet and disease:

  • Framingham Heart Study : Beginning in 1948, this study has followed the residents of Framingham, Massachusetts to identify risk factors for heart disease.
  • Health Professionals Follow-Up Study : This study started in 1986 and enrolled 51,529 male health professionals (dentists, pharmacists, optometrists, osteopathic physicians, podiatrists, and veterinarians), who complete diet questionnaires every 2 years.
  • Nurses’ Health Studies : Beginning in 1976, these studies have enrolled three large cohorts of nurses, with a total of 280,000 participants. Participants have completed detailed questionnaires about diet, other lifestyle factors (smoking and exercise, for example), and health outcomes.

Observational studies have the advantage of allowing researchers to study large groups of people in the real world, looking at the frequency and pattern of health outcomes and identifying factors that correlate with them. But even very large observational studies may not apply to the population as a whole. For example, the Health Professionals Follow-Up Study and the Nurses’ Health Studies include people with above-average knowledge of health. In many ways, this makes them ideal study subjects, because they may be more motivated to be part of the study and to fill out detailed questionnaires for years. However, the findings of these studies may not apply to people with less baseline knowledge of health.

We’ve already mentioned another important limitation of observational studies—that they can only determine correlation, not causation. A prospective cohort study that finds that people eating a Mediterranean diet have a lower incidence of heart disease can only show that the Mediterranean diet is correlated with lowered risk of heart disease. It can’t show that the Mediterranean diet directly prevents heart disease. Why? There are a huge number of factors that determine health outcomes such as heart disease, and other factors might explain a correlation found in an observational study. For example, people who eat a Mediterranean diet might also be the same kind of people who exercise more, sleep more, have a higher income (fish and nuts can be expensive!), or be less stressed. These are called confounding factors ; they’re factors that can affect the outcome in question (i.e., heart disease) and also vary with the factor being studied (i.e., Mediterranean diet).

Intervention Studies

Intervention studies , also sometimes called experimental studies or clinical trials, include some type of treatment or change imposed by the researcher. Examples of interventions in nutrition research include asking participants to change their diet, take a supplement, or change the time of day that they eat. Unlike observational studies, intervention studies can provide evidence of cause and effect , so they are higher in the hierarchy of evidence pyramid.

Randomization: The gold standard for intervention studies is the randomized controlled trial (RCT) . In an RCT, study subjects are recruited to participate in the study. They are then randomly assigned into one of at least two groups, one of which is a control group (this is what makes the study controlled ).

Randomization is the process of randomly assigning subjects to groups in order to decrease bias. Bias is a systematic error that can influence results; it can arise, for example, when subjects are assigned to groups in a way that skews the outcome. An example of bias in a nonrandomized study of an antidepressant drug is shown below. Here, researchers (who know what each subject is receiving) put the most depressed subjects into the placebo group and less depressed subjects into the antidepressant drug group. As a result, even if the drug isn't effective, the group assignment alone can make the drug appear effective, biasing the results as shown below.


This is a bit of an extreme example, and even when researchers try to prevent bias, it can still creep in. If the subjects are randomized, however, more and less severely affected people will ideally be distributed equally between the groups. The trial will then be unbiased and a true test of whether or not the drug is effective.
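Random assignment itself is simple to sketch in code. This is an illustrative example with hypothetical subject IDs, not code from any actual trial:

```python
import random

subjects = [f"subject_{i}" for i in range(1, 21)]  # 20 hypothetical participants

# Shuffle so that group membership is determined purely by chance,
# not by how sick, healthy, or motivated a subject is.
random.shuffle(subjects)
half = len(subjects) // 2
treatment_group = subjects[:half]
placebo_group = subjects[half:]

# On average, randomization makes the two groups comparable at baseline.
print(len(treatment_group), len(placebo_group))  # 10 10
```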


Here is another example. In an RCT studying the effects of the Mediterranean diet on cardiovascular disease development, researchers might ask the control group to follow a low-fat diet (typically recommended for heart disease prevention) and the intervention group to eat a Mediterranean diet. The study would continue for a defined period of time (usually years for an outcome like heart disease), at which point the researchers would analyze their data to see whether more people in the control group or the Mediterranean diet group had heart attacks or strokes. Because the treatment and control groups were randomly assigned, they should be alike in every other way except diet, so differences in heart disease can be attributed to the diet. This eliminates the problem of confounding factors found in observational research, and it’s why RCTs can provide evidence of causation, not just correlation.

Imagine for a moment what would happen if the two groups weren’t randomly assigned. What if the researchers let study participants choose which diet they’d like to adopt for the study? They might, for whatever reason, end up with more overweight people who smoke and have high blood pressure in the low-fat diet group, and more people who exercised regularly and had already been eating lots of olive oil and nuts for years in the Mediterranean diet group. If they found that the Mediterranean diet group had fewer heart attacks by the end of the study, they would have no way of knowing if this was because of the diet or because of the underlying differences in the groups. In other words, without randomization, their results would be compromised by confounding factors, with many of the same limitations as observational studies.

Placebo: In an RCT of a supplement, the control group would receive a placebo, a “fake” treatment that contains no active ingredients, such as a sugar pill. A placebo is necessary in medical research because of a phenomenon known as the placebo effect: a subject’s belief in a treatment produces a beneficial effect, even though no active treatment is actually being administered. For example, an athlete consumes a sports drink and runs the 100-meter dash in 11.00 seconds. The athlete then, under the exact same conditions, drinks what he is told is "Super Duper Sports Drink" and runs the 100-meter dash in 10.50 seconds. What the athlete didn't know is that the Super Duper Sports Drink was just the original sports drink plus food coloring. There was nothing different between the drinks, but the athlete believed that the "Super Duper Sports Drink" was going to help him run faster, so he did. This improvement is due to the placebo effect.

A cartoon depicts the study described in the text. At left is shown the "super duper sports drink" (sports drink plus food coloring) in orange. At right is the regular sports drink in green. A cartoon guy with yellow hair is pictured sprinting. The time with the super duper sports drink is 10.50 seconds, and the time with the regular sports drink is 11.00 seconds. The image reads "the improvement is the placebo effect."

Blinding is a technique to prevent bias in intervention studies. In a study without blinding, both the subject and the researchers know which treatment the subject is receiving. This can lead to bias if the subject or researcher has expectations about the treatment working, so these types of trials are used less frequently. In a single-blind study, either the researcher or the subject (but not both) knows which treatment is being given. It’s best if a study is double-blind, meaning that neither the researcher nor the subject knows. It’s relatively simple to double-blind a study in which subjects receive a placebo or treatment pill, because the two can be formulated to look and taste the same. Studies of diets, such as the Mediterranean diet example, often can’t be double-blinded because the subjects know whether or not they’re eating a lot of olive oil and nuts. However, the researchers who check participants’ blood pressure or evaluate their medical records can be blinded to treatment group, reducing the chance of bias.

Open-label study:


Single-blinded study:


Double-blinded study:


Like all studies, RCTs and other intervention studies have some limitations. They can be difficult to carry out over long periods of time and require that participants remain compliant with the intervention. They’re also costly and often have smaller sample sizes. Furthermore, it is unethical to study certain interventions. (An example of an unethical intervention would be advising one group of pregnant mothers to drink alcohol to determine its effects on pregnancy outcomes, because we know that alcohol consumption during pregnancy damages the developing fetus.)

VIDEO: “ Not all scientific studies are created equal ” by David H. Schwartz, YouTube (April 28, 2014), 4:26.

Meta-Analyses and Systematic Reviews

At the top of the hierarchy of evidence pyramid are systematic reviews and meta-analyses . You can think of these as “studies of studies.” They attempt to combine all of the relevant studies that have been conducted on a research question and summarize their overall conclusions. Researchers conducting a systematic review formulate a research question and then systematically and independently identify, select, evaluate, and synthesize all high-quality evidence that relates to the research question. Since systematic reviews combine the results of many studies, they help researchers produce more reliable findings. A meta-analysis is a type of systematic review that goes one step further, combining the data from multiple studies and using statistics to summarize it, as if creating a mega-study from many smaller studies . 4
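One common way a meta-analysis pools results is fixed-effect inverse-variance weighting: each study's effect estimate is weighted by the inverse of its variance, so more precise studies count for more. The sketch below uses hypothetical effect estimates and standard errors, invented purely for illustration:

```python
# Hypothetical (effect estimate, standard error) pairs for three studies — illustrative only
studies = [(0.20, 0.10), (0.35, 0.15), (0.10, 0.08)]

# Inverse-variance weights: precise studies (small SE) get large weights
weights = [1 / se**2 for _, se in studies]

# Weighted average of the effect estimates, and the pooled standard error
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled estimate: {pooled:.3f} ± {pooled_se:.3f}")
```

Real meta-analyses also assess heterogeneity between studies and often use random-effects models when results differ more than chance would predict.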

However, even systematic reviews and meta-analyses aren’t the final word on scientific questions. For one thing, they’re only as good as the studies that they include. The Cochrane Collaboration is an international consortium of researchers who conduct systematic reviews in order to inform evidence-based healthcare, including nutrition, and their reviews are among the most well-regarded and rigorous in science. For the most recent Cochrane review of the Mediterranean diet and cardiovascular disease, two authors independently reviewed studies published on this question. Based on their inclusion criteria, 30 RCTs with a total of 12,461 participants were included in the final analysis. However, after evaluating and combining the data, the authors concluded that “despite the large number of included trials, there is still uncertainty regarding the effects of a Mediterranean‐style diet on cardiovascular disease occurrence and risk factors in people both with and without cardiovascular disease already.” Part of the reason for this uncertainty is that different trials found different results, and the quality of the studies was low to moderate. Some had problems with their randomization procedures, for example, and others were judged to have unreliable data. That doesn’t make them useless, but it adds to the uncertainty about this question, and uncertainty pushes the field forward towards more and better studies. The Cochrane review authors noted that they found seven ongoing trials of the Mediterranean diet, so we can hope that they’ll add more clarity to this question in the future. 5

Science is an ongoing process. It’s often a slow process, and it contains a lot of uncertainty, but it’s our best method of building knowledge of how the world and human life works. Many different types of studies can contribute to scientific knowledge. None are perfect—all have limitations—and a single study is never the final word on a scientific question. Part of what advances science is that researchers are constantly checking each other’s work, asking how it can be improved and what new questions it raises.


  • “Chapter 1: The Basics” from Lindshield, B. L. Kansas State University Human Nutrition (FNDH 400) Flexbook. goo.gl/vOAnR , CC BY-NC-SA 4.0
  • “The Broad Role of Nutritional Science,” section 1.3 from the book An Introduction to Nutrition (v. 1.0), CC BY-NC-SA 3.0


  • 1 Thiese, M. S. (2014). Observational and interventional study design types; an overview. Biochemia Medica , 24 (2), 199–210. https://doi.org/10.11613/BM.2014.022
  • 2 Harvard T.H. Chan School of Public Health. (2018, January 16). Diet Review: Mediterranean Diet . The Nutrition Source. https://www.hsph.harvard.edu/nutritionsource/healthy-weight/diet-reviews/mediterranean-diet/
  • 3 Ross, R., Gray, C. M., & Gill, J. M. R. (2015). Effects of an Injected Placebo on Endurance Running Performance. Medicine and Science in Sports and Exercise , 47 (8), 1672–1681. https://doi.org/10.1249/MSS.0000000000000584
  • 4 Hooper, A. (n.d.). LibGuides: Systematic Review Resources: Systematic Reviews vs Other Types of Reviews . Retrieved February 7, 2020, from //libguides.sph.uth.tmc.edu/c.php?g=543382&p=5370369
  • 5 Rees, K., Takeda, A., Martin, N., Ellis, L., Wijesekara, D., Vepa, A., Das, A., Hartley, L., & Stranges, S. (2019). Mediterranean‐style diet for the primary and secondary prevention of cardiovascular disease. Cochrane Database of Systematic Reviews , 3 . doi.org/10.1002/14651858.CD009825.pub3
  • 6 Levin, K. (2006). Study design III: Cross-sectional studies. Evidence-Based Dentistry, 7(1), 24.
  • Figure 2.3. The hierarchy of evidence by Alice Callahan, is licensed under CC BY 4.0
  • Research lab photo by National Cancer Institute on Unsplash; mouse photo by vaun0815 on Unsplash
  • Figure 2.4. “Placebo effect example” by Lindshield, B. L. Kansas State University Human Nutrition (FNDH 400) Flexbook. goo.gl/vOAnR

Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology, especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Still others run off on less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful for developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…
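The statistical test in the exercise example above is often a Pearson correlation. This sketch computes r by hand for a handful of hypothetical (exercise sessions per week, resting heart rate) pairs; the values are invented for illustration:

```python
from math import sqrt

# Hypothetical data: (exercise sessions per week, resting heart rate in bpm)
data = [(1, 78), (2, 74), (3, 71), (4, 69), (5, 66), (6, 64)]
xs = [x for x, _ in data]
ys = [y for _, y in data]
n = len(data)

mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Pearson r = covariance / (product of standard deviations)
cov = sum((x - mean_x) * (y - mean_y) for x, y in data)
sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))

r = cov / (sd_x * sd_y)
print(f"Pearson r: {r:.2f}")  # strongly negative: more exercise, lower resting heart rate
```

Even a correlation this strong says nothing about direction or cause; it only quantifies how tightly the two variables move together.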


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while holding other variables constant, and measure the resulting change in the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services .




© Dovetail Research Pty. Ltd.

What is case study research?

Last updated: 8 February 2023

Reviewed by Cathy Heath

Suppose a company receives a spike in the number of customer complaints, or medical experts discover an outbreak of illness affecting children but are not quite sure of the reason. In both cases, carrying out a case study could be the best way to get answers.


Case studies can be carried out across different disciplines, including education, medicine, sociology, and business.

Most case studies employ qualitative methods, but quantitative methods can also be used. Researchers can then describe, compare, evaluate, and identify patterns or cause-and-effect relationships between the various variables under study. They can then use this knowledge to decide what action to take. 

Another thing to note is that case studies are generally singular in their focus. This means they narrow in on a particular area, which makes them highly subjective. You cannot always generalize the results of a case study and apply them to a larger population. However, they are valuable tools for illustrating a principle or developing a thesis.


  • What are the different types of case study designs?

Researchers can choose from a variety of case study designs. The design they choose is dependent on what questions they need to answer, the context of the research environment, how much data they already have, and what resources are available.

Here are the common types of case study design:


Explanatory

An explanatory case study seeks to explain the how or why behind something. This design is commonly used when studying a real-life phenomenon or event. Once the organization understands the reasons behind a phenomenon, it can then make changes to enhance or eliminate the variables causing it. 

Here is an example: How is co-teaching implemented in elementary schools? The title for a case study of this subject could be “Case Study of the Implementation of Co-Teaching in Elementary Schools.”


Illustrative or descriptive

An illustrative or descriptive case study helps researchers shed light on an unfamiliar object or subject after a period of time. The case study provides an in-depth review of the issue at hand and adds real-world examples in the area the researcher wants the audience to understand. 

The researcher makes no inferences or causal statements about the object or subject under review. This type of design is often used to understand cultural shifts.

Here is an example: How did people cope with the 2004 Indian Ocean Tsunami? This case study could be titled "A Case Study of the 2004 Indian Ocean Tsunami and its Effect on the Indonesian Population."


Exploratory

Exploratory research is also called a pilot case study. It is usually the first step within a larger research project, often relying on questionnaires and surveys. Researchers use exploratory research to help narrow down their focus, define parameters, draft a specific research question, and/or identify variables in a larger study. This research design usually covers a wider area than others, and focuses on the ‘what’ and ‘who’ of a topic.

Here is an example: How do nutrition and socialization in early childhood affect learning in children? The title of the exploratory study may be “Case Study of the Effects of Nutrition and Socialization on Learning in Early Childhood.”

Intrinsic

An intrinsic case study is specifically designed to look at a unique and special phenomenon. At the start of the study, the researcher defines the phenomenon and the uniqueness that differentiates it from others. 

In this case, researchers do not attempt to generalize, compare, or challenge the existing assumptions. Instead, they explore the unique variables to enhance understanding. Here is an example: “Case Study of Volcanic Lightning.”

Collective

This design can also be identified as a cumulative case study. It uses information from past studies or observations of groups of people in certain settings as the foundation of the new study. Given that it takes multiple areas into account, it allows for greater generalization than a single case study. 

The researchers also get an in-depth look at a particular subject from different viewpoints.  Here is an example: “Case Study of how PTSD affected Vietnam and Gulf War Veterans Differently Due to Advances in Military Technology.”

Critical instance

A critical case study incorporates both explanatory and intrinsic study designs. It does not have predetermined purposes beyond an investigation of the said subject. It can be used for a deeper explanation of the cause-and-effect relationship. It can also be used to question a common assumption or myth. 

The findings can then be used further to generalize whether they would also apply in a different environment.  Here is an example: “What Effect Does Prolonged Use of Social Media Have on the Mind of American Youth?”


Instrumental

Instrumental research attempts to achieve goals beyond understanding the object at hand. Researchers explore a larger subject through different, separate studies and use the findings to understand its relationship to another subject. This type of design also provides insight into an issue or helps refine a theory. 

For example, you may want to determine if violent behavior in children predisposes them to crime later in life. The focus is on the relationship between children and violent behavior, and why certain children do become violent. Here is an example: “Violence Breeds Violence: Childhood Exposure and Participation in Adult Crime.”

Evaluation

Evaluation case study design is employed to research the effects of a program, policy, or intervention, and assess its effectiveness and impact on future decision-making. 

For example, you might want to see whether children learn times tables quicker through an educational game on their iPad versus a more teacher-led intervention. Here is an example: “An Investigation of the Impact of an iPad Multiplication Game for Primary School Children.” 

  • When do you use case studies?

Case studies are ideal when you want to gain a contextual, concrete, or in-depth understanding of a particular subject. They help you understand the characteristics, implications, and meanings of the subject.

They are also an excellent choice for those writing a thesis or dissertation, as they help keep the project focused on a particular area when resources or time may be too limited to cover a wider one. You may have to conduct several case studies to explore different aspects of the subject in question and understand the problem.

  • What are the steps to follow when conducting a case study?

1. Select a case

Once you identify the problem at hand and come up with questions, identify the case you will focus on. The study can provide insights into the subject at hand, challenge existing assumptions, propose a course of action, and/or open up new areas for further research.

2. Create a theoretical framework

While you will be focusing on a specific detail, the case study design you choose should be linked to existing knowledge on the topic. This prevents it from becoming an isolated description and allows for enhancing the existing information. 

It may expand the current theory by bringing up new ideas or concepts, challenge established assumptions, or exemplify a theory by exploring how it answers the problem at hand. A theoretical framework starts with a literature review of the sources relevant to the topic in focus. This helps in identifying key concepts to guide analysis and interpretation.

3. Collect the data

Case studies are frequently supplemented with qualitative data such as observations, interviews, and a review of both primary and secondary sources such as official records, news articles, and photographs. There may also be quantitative data—this data assists in understanding the case thoroughly.

4. Analyze your case

The results of the research depend on the research design. Most case studies are structured with chapters or topic headings for easy explanation and presentation. Others may be written as narratives to allow researchers to explore various angles of the topic and analyze its meanings and implications.

In all areas, always give a detailed contextual understanding of the case and connect it to the existing theory and literature before discussing how it fits into your problem area.

  • What are some case study examples?

Case study research questions

What are the best approaches for introducing our product into the Kenyan market?

How does the change in marketing strategy aid in increasing the sales volumes of product Y?

How can teachers enhance student participation in classrooms?

How does poverty affect literacy levels in children?

Case study topics

Case study of product marketing strategies in the Kenyan market

Case study of the effects of a marketing strategy change on product Y sales volumes

Case study of X school teachers that encourage active student participation in the classroom

Case study of the effects of poverty on literacy levels in children



Sample: Definition, Types, Formula & Examples


How often do researchers look for the right survey respondents, either for a market research study or an existing survey in the field? The sample or the respondents of this research may be selected from a set of customers or users that are known or unknown.

You may often know your typical respondent profile but don’t have access to the respondents to complete your research study. At such times, researchers and research teams reach out to specialized organizations to access their panel of respondents or buy respondents from them to complete research studies and surveys.

These could be general population respondents that match demographic criteria or respondents based on specific criteria. Such respondents are imperative to the success of research studies.

This article discusses in detail the different types of samples, sampling methods, and examples of each. It also mentions the steps to calculate the size, the details of an online sample, and the advantages of using them.

Content Index

  • What is a sample?
  • Probability sampling methodologies with examples
  • Non-probability sampling methodologies with examples
  • How to determine a sample size
  • Calculating sample size
  • Sampling advantages

What is a Sample?

A sample is a smaller set of data that a researcher chooses or selects from a larger population using a pre-defined selection method. These elements are known as sample points, sampling units, or observations.

Creating a sample is an efficient method of conducting research . Researching the whole population is often impossible, costly, and time-consuming. Hence, examining the sample provides insights the researcher can apply to the entire population.

For example, suppose a cell phone manufacturer wants to conduct a feature research study among students at US universities. An in-depth research study must be conducted if the researcher wants to know the features the students use, the features they would like to see, and the price they are willing to pay.

This step is imperative to understand the features that need development, the features that require an upgrade, the device’s pricing, and the go-to-market strategy.

In 2016/17 alone, there were 24.7 million students enrolled in universities across the US. It is impossible to research all these students; the time spent would make the new device redundant, and the money spent on development would render the study useless.

Creating a sample of universities by geographical location and further creating a sample of these students from these universities provides a large enough number of students for research.

Typically, the population for market research is enormous. Making an enumeration of the whole population is practically impossible. The sample usually represents a manageable size of this population. Researchers then collect data from these samples through surveys, polls, and questionnaires and extrapolate this data analysis to the broader community.


Types of Samples: Selection methodologies with examples

The process of deriving a sample is called a sampling method. Sampling forms an integral part of the research design as this method derives the quantitative and qualitative data that can be collected as part of a research study. Sampling methods are characterized into two distinct approaches: probability sampling and non-probability sampling.

Probability sampling is a method of deriving a sample where the objects are selected from a population based on probability theory. This method includes everyone in the population, and everyone has an equal chance of being selected. Hence, there is no bias whatsoever in this type of sample.

Each person in the population can subsequently be a part of the research. The selection criteria are decided at the outset of the market research study and form an important component of research.



Probability sampling can be further classified into four distinct types of samples. They are:

  • Simple random sampling: The most straightforward way of selecting a sample is simple random sampling. In this method, each member has an equal chance of participating in the study. The objects in this sample population are chosen randomly, and each member has the same probability of being selected. For example, if a university dean would like to collect feedback from students about their perception of the teachers and level of education, all 1,000 students at the university form the sample population, and any 100 of them can be selected at random to form the sample.
  • Cluster sampling: Cluster sampling is a type of sampling method where the respondent population is divided into equal clusters. Clusters are identified and included in a sample based on defining demographic parameters such as age, location, sex, etc. This makes it extremely easy for a survey creator to derive practical inferences from the feedback. For example, if the FDA wants to collect data about adverse side effects from drugs, it can divide the mainland US into distinct clusters, such as states. Research studies are then administered to respondents in these clusters. This way of generating a sample makes the data collection in-depth and provides easy-to-consume, actionable insights.
  • Systematic sampling: Systematic sampling is a sampling method where the researcher chooses respondents at equal intervals from a population. The approach to selecting the sample is to pick a starting point and then pick respondents at a pre-defined sample interval. For example, while selecting 1,000 volunteers for the Olympics from an application list of 10,000 people, each applicant is given a count of 1 to 10,000. Then starting from 1 and selecting each respondent with an interval of 10, a sample of 1,000 volunteers can be obtained.
  • Stratified random sampling: Stratified random sampling is a method of dividing the respondent population into distinctive but pre-defined parameters in the research design phase. In this method, the respondents don’t overlap but collectively represent the whole population. For example, a researcher looking to analyze people from different socioeconomic backgrounds can distinguish respondents by their annual salaries. This forms smaller groups of people or samples, and then some objects from these samples can be used for the research study.
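Three of the four probability methods above can be sketched in a few lines of Python. The population size, sampling interval, and salary-band strata below are made-up illustrations, not figures from this article:

```python
import random

population = list(range(1, 10001))  # hypothetical population of 10,000 members

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 1000)

# Systematic sampling: pick a random starting point, then every k-th member.
k = len(population) // 1000          # sampling interval (10 here)
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified random sampling: split the population into non-overlapping
# strata (e.g. salary bands), then randomly sample within each stratum.
strata = {
    "low": population[:5000],
    "mid": population[5000:8000],
    "high": population[8000:],
}
stratified_sample = [
    member
    for stratum in strata.values()
    for member in random.sample(stratum, len(stratum) // 10)
]

print(len(simple_sample), len(systematic_sample), len(stratified_sample))
# 1000 1000 1000
```

Cluster sampling would follow the same pattern, except that whole clusters (e.g. states) are selected at random and then everyone within the chosen clusters is surveyed.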


The non-probability sampling method uses the researcher’s discretion to select a sample. This type of sample is derived mostly from the researcher’s or statistician’s ability to get to this sample.

This type of sampling is used for preliminary research where the primary objective is to derive a hypothesis about the topic in research. Here, each member does not have an equal chance of being a part of the sample population, and the selection parameters are known only after the sample has been chosen.


We can classify non-probability sampling into four distinct types of samples. They are:

  • Convenience sampling: Convenience sampling , in easy terms, stands for the convenience of a researcher accessing a respondent. There is no scientific method for deriving this sample. Researchers have nearly no authority over selecting the sample elements, and it’s purely done based on proximity and not representativeness.

This non-probability sampling method is used when there are time and cost limitations on collecting feedback. For example, researchers might conduct a mall-intercept survey to understand the likelihood of shoppers using a fragrance from a perfume manufacturer. In this sampling method, the sample respondents are chosen based on their proximity to the survey desk and their willingness to participate in the research.

  • Judgemental/purposive sampling: The judgemental or purposive sampling method develops a sample purely at the discretion of the researcher, based on the nature of the study and his/her understanding of the target audience. This sampling method selects only people who fit the research criteria and end objectives; everyone else is kept out.

For example, if the research topic is understanding which university a student prefers for a Masters degree, a screening question such as “Would you like to do your Masters?” can be asked; anyone who responds with anything other than “Yes” is excluded from the study.

  • Snowball sampling: Snowball sampling or chain-referral sampling is defined as a non-probability sampling technique in which the samples have rare traits. This is a sampling technique in which existing subjects provide referrals to recruit samples required for a research study.

For example, while collecting feedback about a sensitive topic like AIDS, respondents aren’t forthcoming with information. In this case, the researcher can recruit people with an understanding or knowledge of such people and collect information from them or ask them to collect information.

  • Quota sampling: Quota sampling is a method of collecting a sample where the researcher selects respondents according to pre-set quotas for each stratum. The primary characteristic of this method is that the strata don’t overlap: a respondent cannot be counted under two different quotas. For example, a shoe manufacturer that would like to understand millennials’ perception of the brand on parameters like comfort and pricing might select only female millennials for the study, as the research objective is to collect feedback about women’s shoes.
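Of these, quota sampling is the most mechanical, so here is a minimal sketch of how quotas might be filled; the respondent stream and quota numbers are invented for illustration:

```python
from collections import Counter

# Hypothetical stream of respondents, each tagged with a stratum (assumed data).
respondents = [{"id": i, "gender": "F" if i % 3 else "M"} for i in range(100)]

quotas = {"F": 10, "M": 5}   # pre-defined, non-overlapping quotas per stratum
filled = Counter()
sample = []

for person in respondents:   # recruit respondents as they become available
    stratum = person["gender"]
    if filled[stratum] < quotas[stratum]:
        sample.append(person)
        filled[stratum] += 1
    if sum(filled.values()) == sum(quotas.values()):
        break                # all quotas met; stop recruiting

print(filled["F"], filled["M"], len(sample))
# 10 5 15
```

Note how each respondent is counted against exactly one quota, and recruitment stops as soon as every quota is met.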

How to determine a Sample Size

As we have learned above, the right sample size determination is essential for the success of data collection in a market research study. But is there a correct number for the sample size? What parameters decide the sample size? What are the distribution methods of the survey?

To understand all of this and make an informed calculation of the right sample size, it is first essential to understand four important variables that form the basic characteristics of a sample. They are:

  • Population size: The population size is all the people that can be considered for the research study. This number, in most cases, runs into the millions. For example, the population of the United States is 327 million, and in market research it is impossible to consider all of them for a study.
  • The margin of error (confidence interval): The margin of error is a percentage indicating how closely the sample’s responses are expected to reflect the views of the whole population. It determines how much sampling error is acceptable when selecting a sample.


  • Confidence level: This metric measures how certain you can be that the actual population mean falls within your margin of error. The most common confidence levels are 90%, 95%, and 99%.
  • Standard deviation: This metric covers how much the responses are expected to vary. A safe value to assume is .5, the most conservative choice, which yields the largest required sample size.
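These variables trade off against each other: for a proportion-type question, the margin of error shrinks with the square root of the sample size. A small illustrative sketch, using standard textbook values rather than figures from this article:

```python
import math

def margin_of_error(z_score, p, n):
    """Margin of error for a proportion estimated from a sample of size n."""
    return z_score * math.sqrt(p * (1 - p) / n)

# 95% confidence (z ≈ 1.96), conservative p = .5, sample of 1,000 respondents
print(round(margin_of_error(1.96, 0.5, 1000) * 100, 1))  # 3.1 (percentage points)
```

Quadrupling the sample size only halves the margin of error, which is why very large samples quickly stop being worth their cost.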

Calculating Sample Size

To calculate the sample size, you need the following parameters.

  • Z-score: The Z-score corresponding to your chosen confidence level (for example, 1.645 for 90% confidence and 1.96 for 95%).
  • Standard deviation
  • Margin of error
  • Confidence level

To calculate the sample size, use this formula:


Sample Size = (Z-score)² × StdDev × (1 − StdDev) / (Margin of error)²

Consider a confidence level of 90% (Z-score of 1.645), a standard deviation of .6, and a margin of error of ±4%:

((1.645)² × .6 × (1 − .6)) / (.04)²

(2.706 × .24) / .0016

.6494 / .0016

≈ 406 respondents are needed, and that becomes your sample size.
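The same calculation can be wrapped in a short function. Note the (1 − StdDev) factor in the formula: with a standard deviation of .6, the product term is .6 × .4 = .24:

```python
import math

def sample_size(z_score, std_dev, margin_of_error):
    """Required respondents: (z² · p · (1 − p)) / e², rounded up."""
    return math.ceil(z_score**2 * std_dev * (1 - std_dev) / margin_of_error**2)

# 90% confidence (z ≈ 1.645), standard deviation .6, margin of error ±4%
print(sample_size(1.645, 0.6, 0.04))   # 406

# The conservative default (p = .5) at 95% confidence and ±5%
print(sample_size(1.96, 0.5, 0.05))    # 385
```

Rounding up rather than to the nearest integer ensures the sample is never smaller than the formula requires.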


Sampling Advantages

As shown above, there are many advantages to sampling. Some of the most significant advantages are:


  • Reduced cost & time: Since using a sample reduces the number of people that have to be reached out to, it reduces cost and time. Imagine the time saved between researching with a population of millions vs. conducting a research study using a sample.
  • Reduced resource deployment: It is obvious that if the number of people involved in a research study is much lower due to the sample, the resources required are also much less. The workforce needed to research the sample is much less than the workforce needed to study the whole population .
  • Accuracy of data: Since the sample is representative of the population, the data collected from it is accurate. Also, since respondents are willing participants, the survey dropout rate is much lower, which increases the validity and accuracy of the data.
  • Intensive & exhaustive data: Since there are fewer respondents, the data collected from a sample is intensive and thorough. More time and effort can be given to each respondent rather than spreading effort thinly across many people.
  • Apply properties to a larger population: Since the sample is representative of the broader population, the data collected and analyzed from the sample can safely be applied to the larger population.

To collect accurate data for research, filter bad panelists, and eliminate sampling bias by applying different control measures. If you need any help arranging a sample audience for your next market research project, contact us at [email protected] . We have more than 22 million panelists across the world!

In conclusion, a sample is a subset of a population that is used to represent the characteristics of the entire population. Sampling is essential in research and data analysis to make inferences about a population based on a smaller group of individuals. There are different types of sampling, such as probability sampling, non-probability sampling, and others, each with its own advantages and disadvantages.

Choosing the right sampling method is important and depends on the research question, budget, and resources. Furthermore, the sample size plays a crucial role in the accuracy and generalizability of the findings.

This article has provided a comprehensive overview of the definition, types, formula, and examples of sampling. By understanding the different types of sampling and the formulas used to calculate sample size, researchers and analysts can make more informed decisions when conducting research and analyzing data.

Sampling is an important tool that enables researchers to make inferences about a population based on a smaller group of individuals. With the right sampling method and sample size, researchers can ensure that their findings are accurate and generalizable to the population.






Limitations in Research – Types, Examples and Writing Guide

Limitations in Research

Limitations in research refer to the factors that may affect the results, conclusions , and generalizability of a study. These limitations can arise from various sources, such as the design of the study, the sampling methods used, the measurement tools employed, and the limitations of the data analysis techniques.

Types of Limitations in Research

Types of Limitations in Research are as follows:

Sample Size Limitations

This refers to the size of the group of people or subjects that are being studied. If the sample size is too small, then the results may not be representative of the population being studied. This can lead to a lack of generalizability of the results.
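The link between sample size and generalizability can be made concrete with the margin of error for an estimated proportion, which shrinks only with the square root of the sample size. This is a standard formula, sketched here purely for illustration:

```python
import math

# 95% margin of error for an estimated proportion: e = z * sqrt(p * (1 - p) / n).
# Worst case is p = 0.5; z ≈ 1.96 for 95% confidence.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (25, 100, 400, 1600):
    print(f"n={n:5d}  margin of error = ±{margin_of_error(0.5, n):.3f}")
# Quadrupling the sample size only halves the margin of error, which is why
# results from very small samples generalize poorly.
```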

Time Limitations

Time limitations can be a constraint on the research process . This could mean that the study is unable to be conducted for a long enough period of time to observe the long-term effects of an intervention, or to collect enough data to draw accurate conclusions.

Selection Bias

This refers to a type of bias that can occur when the selection of participants in a study is not random. This can lead to a biased sample that is not representative of the population being studied.

Confounding Variables

Confounding variables are factors that can influence the outcome of a study, but are not being measured or controlled for. These can lead to inaccurate conclusions or a lack of clarity in the results.
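A small simulation with entirely hypothetical data shows how an unmeasured confounder can manufacture a correlation between two variables that have no direct causal link:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical data: a confounder Z drives both X and Y.
# X has no direct effect on Y, yet X and Y appear correlated.
n = 5_000
Z = [random.gauss(0, 1) for _ in range(n)]
X = [z + random.gauss(0, 1) for z in Z]
Y = [z + random.gauss(0, 1) for z in Z]

def corr(a, b):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    sa = (sum((x - ma) ** 2 for x in a) / len(a)) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / len(b)) ** 0.5
    return cov / (sa * sb)

# The theoretical correlation here is 0.5, driven entirely by the shared cause Z
print(f"corr(X, Y) = {corr(X, Y):.2f}")
```

A study that measured only X and Y, without controlling for Z, could easily mistake this correlation for a causal effect.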

Measurement Error

This refers to inaccuracies in the measurement of variables, such as using a faulty instrument or scale. This can lead to inaccurate results or a lack of validity in the study.

Ethical Limitations

Ethical limitations refer to the ethical constraints placed on research studies. For example, certain studies may not be allowed to be conducted due to ethical concerns, such as studies that involve harm to participants.

Examples of Limitations in Research

Some Examples of Limitations in Research are as follows:

Research Title: “The Effectiveness of Machine Learning Algorithms in Predicting Customer Behavior”


  • The study only considered a limited number of machine learning algorithms and did not explore the effectiveness of other algorithms.
  • The study used a specific dataset, which may not be representative of all customer behaviors or demographics.
  • The study did not consider the potential ethical implications of using machine learning algorithms in predicting customer behavior.

Research Title: “The Impact of Online Learning on Student Performance in Computer Science Courses”

  • The study was conducted during the COVID-19 pandemic, which may have affected the results due to the unique circumstances of remote learning.
  • The study only included students from a single university, which may limit the generalizability of the findings to other institutions.
  • The study did not consider the impact of individual differences, such as prior knowledge or motivation, on student performance in online learning environments.

Research Title: “The Effect of Gamification on User Engagement in Mobile Health Applications”

  • The study only tested a specific gamification strategy and did not explore the effectiveness of other gamification techniques.
  • The study relied on self-reported measures of user engagement, which may be subject to social desirability bias or measurement errors.
  • The study only included a specific demographic group (e.g., young adults) and may not be generalizable to other populations with different preferences or needs.

How to Write Limitations in Research

When writing about the limitations of a research study, it is important to be honest and clear about the potential weaknesses of your work. Here are some tips for writing about limitations in research:

  • Identify the limitations: Start by identifying the potential limitations of your research. These may include sample size, selection bias, measurement error, or other issues that could affect the validity and reliability of your findings.
  • Be honest and objective: When describing the limitations of your research, be honest and objective. Do not try to minimize or downplay the limitations, but also do not exaggerate them. Be clear and concise in your description of the limitations.
  • Provide context: It is important to provide context for the limitations of your research. For example, if your sample size was small, explain why this was the case and how it may have affected your results. Providing context can help readers understand the limitations in a broader context.
  • Discuss implications : Discuss the implications of the limitations for your research findings. For example, if there was a selection bias in your sample, explain how this may have affected the generalizability of your findings. This can help readers understand the limitations in terms of their impact on the overall validity of your research.
  • Provide suggestions for future research : Finally, provide suggestions for future research that can address the limitations of your study. This can help readers understand how your research fits into the broader field and can provide a roadmap for future studies.

Purpose of Limitations in Research

There are several purposes of limitations in research. Here are some of the most important ones:

  • To acknowledge the boundaries of the study : Limitations help to define the scope of the research project and set realistic expectations for the findings. They can help to clarify what the study is not intended to address.
  • To identify potential sources of bias: Limitations can help researchers identify potential sources of bias in their research design, data collection, or analysis. This can help to improve the validity and reliability of the findings.
  • To provide opportunities for future research: Limitations can highlight areas for future research and suggest avenues for further exploration. This can help to advance knowledge in a particular field.
  • To demonstrate transparency and accountability: By acknowledging the limitations of their research, researchers can demonstrate transparency and accountability to their readers, peers, and funders. This can help to build trust and credibility in the research community.
  • To encourage critical thinking: Limitations can encourage readers to critically evaluate the study’s findings and consider alternative explanations or interpretations. This can help to promote a more nuanced and sophisticated understanding of the topic under investigation.

When to Write Limitations in Research

Limitations should be included in research when they help to provide a more complete understanding of the study’s results and implications. A limitation is any factor that could potentially impact the accuracy, reliability, or generalizability of the study’s findings.

It is important to identify and discuss limitations in research because doing so helps to ensure that the results are interpreted appropriately and that any conclusions drawn are supported by the available evidence. Limitations can also suggest areas for future research, highlight potential biases or confounding factors that may have affected the results, and provide context for the study’s findings.

Generally, limitations should be discussed in the conclusion section of a research paper or thesis, although they may also be mentioned in other sections, such as the introduction or methods. The specific limitations that are discussed will depend on the nature of the study, the research question being investigated, and the data that was collected.

Examples of limitations that might be discussed in research include sample size limitations, data collection methods, the validity and reliability of measures used, and potential biases or confounding factors that could have affected the results. It is important to note that limitations should not be used as a justification for poor research design or methodology, but rather as a way to enhance the understanding and interpretation of the study’s findings.

Importance of Limitations in Research

Here are some reasons why limitations are important in research:

  • Enhances the credibility of research: Limitations highlight the potential weaknesses and threats to validity, which helps readers to understand the scope and boundaries of the study. This improves the credibility of research by acknowledging its limitations and providing a clear picture of what can and cannot be concluded from the study.
  • Facilitates replication: By highlighting the limitations, researchers can provide detailed information about the study’s methodology, data collection, and analysis. This information helps other researchers to replicate the study and test the validity of the findings, which enhances the reliability of research.
  • Guides future research : Limitations provide insights into areas for future research by identifying gaps or areas that require further investigation. This can help researchers to design more comprehensive and effective studies that build on existing knowledge.
  • Provides a balanced view: Limitations help to provide a balanced view of the research by highlighting both strengths and weaknesses. This ensures that readers have a clear understanding of the study’s limitations and can make informed decisions about the generalizability and applicability of the findings.

Advantages of Limitations in Research

Here are some potential advantages of limitations in research:

  • Focus : Limitations can help researchers focus their study on a specific area or population, which can make the research more relevant and useful.
  • Realism : Limitations can make a study more realistic by reflecting the practical constraints and challenges of conducting research in the real world.
  • Innovation : Limitations can spur researchers to be more innovative and creative in their research design and methodology, as they search for ways to work around the limitations.
  • Rigor : Limitations can actually increase the rigor and credibility of a study, as researchers are forced to carefully consider the potential sources of bias and error, and address them to the best of their abilities.
  • Generalizability : Limitations can actually improve the generalizability of a study by ensuring that it is not overly focused on a specific sample or situation, and that the results can be applied more broadly.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


BMJ Journals – Online First

The role of COVID-19 vaccines in preventing post-COVID-19 thromboembolic and cardiovascular complications

  • Núria Mercadé-Besora 1 , 2 , 3 ,
  • Xintong Li 1 ,
  • Raivo Kolde 4 ,
  • Nhung TH Trinh 5 ,
  • Maria T Sanchez-Santos 1 ,
  • Wai Yi Man 1 ,
  • Elena Roel 3 ,
  • Carlen Reyes 3 ,
  • Antonella Delmestri 1 (http://orcid.org/0000-0003-0388-3403),
  • Hedvig M E Nordeng 6 , 7 ,
  • Anneli Uusküla 8 (http://orcid.org/0000-0002-4036-3856),
  • Talita Duarte-Salles 3 , 9 (http://orcid.org/0000-0002-8274-0357),
  • Clara Prats 2 ,
  • Daniel Prieto-Alhambra 1 , 9 (http://orcid.org/0000-0002-3950-6346),
  • Annika M Jödicke 1 (http://orcid.org/0000-0002-0000-0110),
  • Martí Català 1
  • 1 Pharmaco- and Device Epidemiology Group, Health Data Sciences, Botnar Research Centre, NDORMS , University of Oxford , Oxford , UK
  • 2 Department of Physics , Universitat Politècnica de Catalunya , Barcelona , Spain
  • 3 Fundació Institut Universitari per a la recerca a l'Atenció Primària de Salut Jordi Gol i Gurina (IDIAPJGol) , IDIAP Jordi Gol , Barcelona , Catalunya , Spain
  • 4 Institute of Computer Science , University of Tartu , Tartu , Estonia
  • 5 Pharmacoepidemiology and Drug Safety Research Group, Department of Pharmacy, Faculty of Mathematics and Natural Sciences , University of Oslo , Oslo , Norway
  • 6 School of Pharmacy , University of Oslo , Oslo , Norway
  • 7 Division of Mental Health , Norwegian Institute of Public Health , Oslo , Norway
  • 8 Department of Family Medicine and Public Health , University of Tartu , Tartu , Estonia
  • 9 Department of Medical Informatics, Erasmus University Medical Center , Erasmus University Rotterdam , Rotterdam , Zuid-Holland , Netherlands
  • Correspondence to Prof Daniel Prieto-Alhambra, Pharmaco- and Device Epidemiology Group, Health Data Sciences, Botnar Research Centre, NDORMS, University of Oxford, Oxford, UK; daniel.prietoalhambra{at}ndorms.ox.ac.uk

Objective To study the association between COVID-19 vaccination and the risk of post-COVID-19 cardiac and thromboembolic complications.

Methods We conducted a staggered cohort study based on national vaccination campaigns using electronic health records from the UK, Spain and Estonia. Vaccine rollout was grouped into four stages with predefined enrolment periods. Each stage included all individuals eligible for vaccination, with no previous SARS-CoV-2 infection or COVID-19 vaccine at the start date. Vaccination status was used as a time-varying exposure. Outcomes included heart failure (HF), venous thromboembolism (VTE) and arterial thrombosis/thromboembolism (ATE) recorded in four time windows after SARS-CoV-2 infection: 0–30, 31–90, 91–180 and 181–365 days. Propensity score overlap weighting and empirical calibration were used to minimise observed and unobserved confounding, respectively.

Fine-Gray models estimated subdistribution hazard ratios (sHR). Random effect meta-analyses were conducted across staggered cohorts and databases.

Results The study included 10.17 million vaccinated and 10.39 million unvaccinated people. Vaccination was associated with reduced risks of acute (30-day) and post-acute COVID-19 VTE, ATE and HF: for example, meta-analytic sHR of 0.22 (95% CI 0.17 to 0.29), 0.53 (0.44 to 0.63) and 0.45 (0.38 to 0.53), respectively, for 0–30 days after SARS-CoV-2 infection, while in the 91–180 days sHR were 0.53 (0.40 to 0.70), 0.72 (0.58 to 0.88) and 0.61 (0.51 to 0.73), respectively.

Conclusions COVID-19 vaccination reduced the risk of post-COVID-19 cardiac and thromboembolic outcomes. These effects were more pronounced for acute COVID-19 outcomes, consistent with known reductions in disease severity following breakthrough versus unvaccinated SARS-CoV-2 infection.

  • Epidemiology
  • Electronic Health Records

Data availability statement

Data may be obtained from a third party and are not publicly available. CPRD: CPRD data were obtained under the CPRD multi-study license held by the University of Oxford after Research Data Governance (RDG) approval. Direct data sharing is not allowed. SIDIAP: In accordance with current European and national law, the data used in this study is only available for the researchers participating in this study. Thus, we are not allowed to distribute or make publicly available the data to other parties. However, researchers from public institutions can request data from SIDIAP if they comply with certain requirements. Further information is available online ( https://www.sidiap.org/index.php/menu-solicitudesen/application-proccedure ) or by contacting SIDIAP ([email protected]). CORIVA: CORIVA data were obtained under the approval of Research Ethics Committee of the University of Tartu and the patient level data sharing is not allowed. All analyses in this study were conducted in a federated manner, where analytical code and aggregated (anonymised) results were shared, but no patient-level data was transferred across the collaborating institutions.

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .


COVID-19 vaccines proved to be highly effective in reducing the severity of acute SARS-CoV-2 infection.

While COVID-19 vaccines were associated with an increased risk of cardiac and thromboembolic events, such as myocarditis and thrombosis, the risk of these complications was substantially higher following SARS-CoV-2 infection itself.


COVID-19 vaccination reduced the risk of heart failure, venous thromboembolism and arterial thrombosis/thromboembolism in the acute (30 days) and post-acute (31 to 365 days) phase following SARS-CoV-2 infection. This effect was stronger in the acute phase.

The overall additive effect of vaccination on the risk of post-vaccine and/or post-COVID thromboembolic and cardiac events needs further research.


COVID-19 vaccines proved to be highly effective in reducing the risk of post-COVID cardiovascular and thromboembolic complications.


COVID-19 vaccines were approved under emergency authorisation in December 2020 and showed high effectiveness against SARS-CoV-2 infection, COVID-19-related hospitalisation and death. 1 2 However, concerns were raised after spontaneous reports of unusual thromboembolic events following adenovirus-based COVID-19 vaccines, an association that was further assessed in observational studies. 3 4 More recently, mRNA-based vaccines were found to be associated with a risk of rare myocarditis events. 5 6

On the other hand, SARS-CoV-2 infection can trigger cardiac and thromboembolic complications. 7 8 Previous studies showed that, while slowly decreasing over time, the risk of serious complications remains high for up to a year after infection. 9 10 Although acute and post-acute cardiac and thromboembolic complications following COVID-19 are rare, they present a substantial burden to the affected patients, and the absolute number of cases globally could become substantial.

Recent studies suggest that COVID-19 vaccination could protect against cardiac and thromboembolic complications attributable to COVID-19. 11 12 However, most studies did not include long-term complications and were conducted among specific populations.

Evidence is still scarce as to whether the combined effects of COVID-19 vaccines, which protect against SARS-CoV-2 infection while also reducing post-COVID-19 cardiac and thromboembolic outcomes, outweigh any risks of these complications potentially associated with vaccination.

We therefore used large, representative data sources from three European countries to assess the overall effect of COVID-19 vaccines on the risk of acute and post-acute COVID-19 complications including venous thromboembolism (VTE), arterial thrombosis/thromboembolism (ATE) and other cardiac events. Additionally, we studied the comparative effects of ChAdOx1 versus BNT162b2 on the risk of these same outcomes.

Data sources

We used four routinely collected population-based healthcare datasets from three European countries: the UK, Spain and Estonia.

For the UK, we used data from two primary care databases—namely, Clinical Practice Research Datalink, CPRD Aurum 13 and CPRD Gold. 14 CPRD Aurum currently covers 13 million people from predominantly English practices, while CPRD Gold comprises 3.1 million active participants mostly from GP practices in Wales and Scotland. Spanish data were provided by the Information System for the Development of Research in Primary Care (SIDIAP), 15 which encompasses primary care records from 6 million active patients (around 75% of the population in the region of Catalonia) linked to hospital admissions data (Conjunt Mínim Bàsic de Dades d’Alta Hospitalària). Finally, the CORIVA dataset based on national health claims data from Estonia was used. It contains all COVID-19 cases from the first year of the pandemic and ~440 000 randomly selected controls. CORIVA was linked to the death registry and all COVID-19 testing from the national health information system.

Databases included sociodemographic information, diagnoses, measurements, prescriptions and secondary care referrals and were linked to vaccine registries, including records of all administered vaccines from all healthcare settings. Data availability for CPRD Gold ended in December 2021, CPRD Aurum in January 2022, SIDIAP in June 2022 and CORIVA in December 2022.

All databases were mapped to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) 16 to facilitate federated analytics.

Multinational network staggered cohort study: study design and participants

The study design has been published in detail elsewhere. 17 Briefly, we used a staggered cohort design considering vaccination as a time-varying exposure. Four staggered cohorts were designed with each cohort representing a country-specific vaccination rollout phase (eg, dates when people became eligible for vaccination, and eligibility criteria).

The source population comprised all adults registered in the respective database for at least 180 days at the start of the study (4 January 2021 for CPRD Gold and Aurum, 20 February 2021 for SIDIAP and 28 January 2021 for CORIVA). Subsequently, each staggered cohort corresponded to an enrolment period: all people eligible for vaccination during this time were included in the cohort and people with a history of SARS-CoV-2 infection or COVID-19 vaccination before the start of the enrolment period were excluded. Across countries, cohort 1 comprised older age groups, whereas cohort 2 comprised individuals at risk for severe COVID-19. Cohort 3 included people aged ≥40 and cohort 4 enrolled people aged ≥18.

In each cohort, people receiving a first vaccine dose during the enrolment period were allocated to the vaccinated group, with their index date being the date of vaccination. Individuals who did not receive a vaccine dose comprised the unvaccinated group and their index date was assigned within the enrolment period, based on the distribution of index dates in the vaccinated group. People with COVID-19 before the index date were excluded.
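The index-date assignment for the unvaccinated group described above can be sketched as sampling from the empirical distribution of vaccinated index dates. This is a simplified illustration with hypothetical IDs and dates, not the study's actual procedure:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical index dates observed in the vaccinated group; repeated values
# represent the empirical distribution (more people vaccinated on that day).
vaccinated_index_dates = ["2021-01-10", "2021-01-15", "2021-01-15", "2021-02-01"]
unvaccinated_ids = ["u1", "u2", "u3"]

# Each unvaccinated person receives an index date drawn from the vaccinated
# group's dates, so both groups share the same calendar-time distribution.
assigned = {uid: random.choice(vaccinated_index_dates) for uid in unvaccinated_ids}
print(assigned)
```

Matching the calendar-time distribution matters because infection risk and vaccine availability both varied sharply over the enrolment period.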

Follow-up started from the index date until the earliest of end of available data, death, change in exposure status (first vaccine dose for those unvaccinated) or outcome of interest.

COVID-19 vaccination

All vaccines approved within the study period from January 2021 to July 2021—namely, ChAdOx1 (Oxford/AstraZeneca), BNT162b2 (BioNTech/Pfizer), Ad26.COV2.S (Janssen) and mRNA-1273 (Moderna)—were included in this study.

Post-COVID-19 outcomes of interest

Outcomes of interest were defined as SARS-CoV-2 infection followed by a predefined thromboembolic or cardiac event of interest within a year after infection, and with no record of the same clinical event in the 6 months before COVID-19. Outcome date was set as the corresponding SARS-CoV-2 infection date.

COVID-19 was identified from either a positive SARS-CoV-2 test (polymerase chain reaction (PCR) or antigen), or a clinical COVID-19 diagnosis, with no record of COVID-19 in the previous 6 weeks. This wash-out period was imposed to exclude re-recordings of the same COVID-19 episode.

Post-COVID-19 outcome events were selected based on previous studies. 11–13 Events comprised ischaemic stroke (IS), haemorrhagic stroke (HS), transient ischaemic attack (TIA), ventricular arrhythmia/cardiac arrest (VACA), myocarditis/pericarditis (MP), myocardial infarction (MI), heart failure (HF), pulmonary embolism (PE) and deep vein thrombosis (DVT). We used two composite outcomes: (1) VTE, as an aggregate of PE and DVT and (2) ATE, as a composite of IS, TIA and MI. To avoid re-recording of the same complication we imposed a wash-out period of 90 days between records. Phenotypes for these complications were based on previously published studies. 3 4 8 18
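The 90-day wash-out between outcome records can be sketched as a simple de-duplication pass over sorted event dates. This is a simplified illustration, not the study's actual analytic code (which is available on GitHub):

```python
from datetime import date

def apply_washout(event_dates, washout_days):
    """Keep an event only if it falls at least `washout_days` after the
    previously kept event; closer records are treated as re-recordings of
    the same episode. A simplified sketch of the de-duplication described
    in the text."""
    kept = []
    for d in sorted(event_dates):
        if not kept or (d - kept[-1]).days >= washout_days:
            kept.append(d)
    return kept

records = [date(2021, 3, 1), date(2021, 3, 20), date(2021, 7, 1)]
print(apply_washout(records, washout_days=90))
# The 20 March record falls within 90 days of 1 March and is dropped.
```

The same logic, with a 6-week window, underlies the exclusion of re-recorded COVID-19 episodes.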

All outcomes were ascertained in four different time periods following SARS-CoV-2 infection: the first period (0–30 days after COVID-19) describes the acute infection phase, whereas the later periods (31–90 days, 91–180 days and 181–365 days) illustrate the post-acute phase ( figure 1 ).


Study outcome design. Study outcomes of interest are defined as a COVID-19 infection followed by one of the complications in the figure, within a year after infection. Outcomes were ascertained in four different time windows after SARS-CoV-2 infection: 0–30 days (namely the acute phase), 31–90 days, 91–180 days and 181–365 days (these last three comprise the post-acute phase).

Negative control outcomes

Negative control outcomes (NCOs) were used to detect residual confounding. NCOs are outcomes which are not believed to be causally associated with the exposure, but share the same bias structure with the exposure and outcome of interest. Therefore, no significant association between exposure and NCO is to be expected. Our study used 43 different NCOs from previous work assessing vaccine effectiveness. 19

Statistical analysis

Federated network analyses

A template for an analytical script was developed and subsequently tailored to include the country-specific aspects (eg, dates, priority groups) for the vaccination rollout. Analyses were conducted locally for each database. Only aggregated data were shared and person counts <5 were clouded.

Propensity score weighting

Large-scale propensity scores (PS) were calculated to estimate the likelihood of a person receiving the vaccine based on their demographic and health-related characteristics (eg, conditions, medications) prior to the index date. PS were then used to minimise observed confounding by creating a weighted population (overlap weighting 20 ), in which individuals contributed with a different weight based on their PS and vaccination status.

Prespecified key variables included in the PS comprised age, sex, location, index date, prior observation time in the database, number of previous outpatient visits and previous SARS-CoV-2 PCR/antigen tests. Regional vaccination, testing and COVID-19 incidence rates were also forced into the PS equation for the UK databases 21 and SIDIAP. 22 In addition, least absolute shrinkage and selection operator (LASSO) regression, a technique for variable selection, was used to identify additional variables from all recorded conditions and prescriptions within 0–30 days, 31–180 days and 181-any time (conditions only) before the index date that had a prevalence of >0.5% in the study population.

PS were then separately estimated for each staggered cohort and analysis. We considered covariate balance to be achieved if absolute standardised mean differences (ASMDs) were ≤0.1 after weighting. Baseline characteristics such as demographics and comorbidities were reported.
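The overlap-weighting step itself is simple once propensity scores have been estimated. Below is a minimal sketch with toy scores, assuming PS have already come from a model such as the large-scale regularised regression described above:

```python
# A minimal sketch of overlap weighting: vaccinated units are weighted by
# 1 - PS and unvaccinated units by PS, which concentrates the comparison
# on the region of covariate overlap between the two groups.

def overlap_weights(ps, treated):
    return [(1 - p) if t else p for p, t in zip(ps, treated)]

ps      = [0.9, 0.6, 0.4, 0.1]        # hypothetical estimated propensity scores
treated = [True, True, False, False]  # True = vaccinated in this example

print([round(w, 2) for w in overlap_weights(ps, treated)])  # → [0.1, 0.4, 0.4, 0.1]
# People almost certain to be vaccinated (or to remain unvaccinated) receive
# small weights, down-weighting observations outside the region of equipoise.
```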

Effect estimation

To account for the competing risk of death associated with COVID-19, Fine-Gray models 23 were used to calculate subdistribution hazard ratios (sHRs). Subsequently, sHRs and confidence intervals were empirically calibrated from NCO estimates 24 to account for unmeasured confounding. To calibrate the estimates, the empirical null distribution was derived from NCO estimates and was used to compute calibrated confidence intervals. For each outcome, sHRs from the four staggered cohorts were pooled using random-effect meta-analysis, both separately for each database and across all four databases.
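The pooling step can be illustrated with a generic DerSimonian-Laird random-effects meta-analysis on the log scale. The estimates and standard errors below are hypothetical, and the paper's exact random-effects implementation may differ:

```python
import math

def random_effects_pool(estimates, ses):
    """DerSimonian-Laird random-effects pooling on the log scale.
    `estimates` are log hazard ratios, `ses` their standard errors.
    A generic sketch of the pooling described in the text."""
    w = [1 / s ** 2 for s in ses]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
    se_pooled = 1 / math.sqrt(sum(w_re))
    return pooled, se_pooled

# Hypothetical log(sHR) estimates from four staggered cohorts
logs = [math.log(0.5), math.log(0.6), math.log(0.45), math.log(0.55)]
ses  = [0.10, 0.12, 0.15, 0.11]
pooled, se = random_effects_pool(logs, ses)
print(f"pooled sHR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} "
      f"to {math.exp(pooled + 1.96 * se):.2f})")
```

When the between-cohort heterogeneity statistic Q is below its degrees of freedom, tau² is truncated at zero and the result coincides with a fixed-effect pool.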

Sensitivity analysis

Sensitivity analyses comprised 1) censoring follow-up for vaccinated people at the time when they received their second vaccine dose and 2) considering only the first post-COVID-19 outcome within the year after infection ( online supplemental figure S1 ). In addition, comparative effectiveness analyses were conducted for BNT162b2 versus ChAdOx1.

Supplemental material

Data and code availability

All analytic code for the study is available in GitHub ( https://github.com/oxford-pharmacoepi/vaccineEffectOnPostCovidCardiacThromboembolicEvents ), including code lists for vaccines, COVID-19 tests and diagnoses, cardiac and thromboembolic events, NCO and health conditions to prioritise patients for vaccination in each country. We used R version 4.2.3 and statistical packages survival (3.5–3), Empirical Calibration (3.1.1), glmnet (4.1-7), and Hmisc (5.0–1).

Patient and public involvement

Owing to the nature of the study and the limitations regarding data privacy, the study design, analysis, interpretation of data and revision of the manuscript did not involve any patients or members of the public.

All aggregated results are available in a web application ( https://dpa-pde-oxford.shinyapps.io/PostCovidComplications/ ).

We included over 10.17 million vaccinated individuals (1 618 395 from CPRD Gold; 5 729 800 from CPRD Aurum; 2 744 821 from SIDIAP and 77 603 from CORIVA) and 10.39 million unvaccinated individuals (1 640 371; 5 860 564; 2 588 518 and 302 267, respectively). Online supplemental figures S2-5 illustrate study inclusion for each database.

Adequate covariate balance was achieved after PS weighting in most studies: CORIVA (all cohorts) and SIDIAP (cohorts 1 and 4) did not contribute to ChAdOx1 subanalyses owing to sample size and covariate imbalance. ASMD results are accessible in the web application.
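Balance on each covariate after PS weighting is typically assessed with the absolute standardised mean difference (ASMD); a minimal sketch with hypothetical numbers, not study data:

```python
import math

def asmd(mean_t, var_t, mean_c, var_c):
    """Absolute standardised mean difference between exposed and
    comparator groups for one covariate (a balance diagnostic)."""
    return abs(mean_t - mean_c) / math.sqrt((var_t + var_c) / 2)

# Hypothetical binary covariate: prevalence 0.30 vs 0.28 (var = p(1-p))
value = asmd(0.30, 0.30 * 0.70, 0.28, 0.28 * 0.72)
# A common rule of thumb takes ASMD < 0.1 as adequate balance
```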

NCO analyses suggested residual bias after PS weighting, with a majority of NCOs associated positively with vaccination. Therefore, calibrated estimates are reported in this manuscript. Uncalibrated effect estimates and NCO analyses are available in the web interface.
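The calibration against NCO estimates can be illustrated in simplified form (a Python sketch under stated assumptions, not the EmpiricalCalibration package itself): fit a normal null distribution to the NCO log-estimates, then shift and widen the interval accordingly.

```python
import math

def calibrated_ci(log_hr, se, nco_log_hrs, z=1.96):
    """Simplified empirical calibration: fit a normal null (mean, sd)
    to negative control outcome estimates, then recentre the estimate
    and inflate its standard error. (The published method additionally
    propagates uncertainty in the fitted null.)"""
    n = len(nco_log_hrs)
    mu = sum(nco_log_hrs) / n                         # systematic error (bias)
    sd = math.sqrt(sum((x - mu) ** 2 for x in nco_log_hrs) / (n - 1))
    se_cal = math.sqrt(se ** 2 + sd ** 2)             # widened standard error
    centre = log_hr - mu                              # bias-corrected estimate
    return (math.exp(centre - z * se_cal), math.exp(centre + z * se_cal))
```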

Population characteristics

Table 1 presents baseline characteristics for the weighted populations in CPRD Aurum, for illustrative purposes. Online supplemental tables S1-25 summarise baseline characteristics for weighted and unweighted populations for each database and comparison. Across databases and cohorts, populations followed similar patterns: cohort 1 represented an older subpopulation (around 80 years old) with a high proportion of women (57%). Median age was lowest in cohort 4, ranging between 30 and 40 years.


Characteristics of weighted populations in CPRD Aurum database, stratified by staggered cohort and exposure status. Exposure is any COVID-19 vaccine

COVID-19 vaccination and post-COVID-19 complications

Table 2 shows the incidence of post-COVID-19 VTE, ATE and HF, the three most common post-COVID-19 conditions among the studied outcomes. Outcome counts are presented separately for 0–30, 31–90, 91–180 and 181–365 days after SARS-CoV-2 infection. Online supplemental tables S26-36 include all studied complications, also for the sensitivity and subanalyses. Similar incidence patterns were observed across all databases: higher outcome rates in the older populations (cohort 1) and decreasing frequency with increasing time after infection in all cohorts.

Number of records (and risk per 10 000 individuals) for acute and post-acute COVID-19 cardiac and thromboembolic complications, across cohorts and databases for any COVID-19 vaccination

Forest plots for the effect of COVID-19 vaccines on post-COVID-19 cardiac and thromboembolic complications; meta-analysis across cohorts and databases. Dashed line represents a level of heterogeneity I² > 0.4. ATE, arterial thrombosis/thromboembolism; CD+HS, cardiac diseases and haemorrhagic stroke; VTE, venous thromboembolism.

Results from calibrated estimates pooled in meta-analysis across cohorts and databases are shown in figure 2 .

Reduced risk associated with vaccination was observed for acute and post-acute VTE, DVT and PE: acute meta-analytic sHRs were 0.22 (95% CI, 0.17–0.29), 0.36 (0.28–0.45) and 0.19 (0.15–0.25), respectively. For VTE in the post-acute phase, sHR estimates were 0.43 (0.34–0.53), 0.53 (0.40–0.70) and 0.50 (0.36–0.70) for 31–90, 91–180 and 181–365 days post COVID-19, respectively. Reduced risk of VTE outcomes was observed in vaccinated individuals across databases and cohorts (online supplemental figures S14–22).

Similarly, the risk of ATE, IS and MI in the acute phase after infection was reduced for the vaccinated group, with sHRs of 0.53 (0.44–0.63), 0.55 (0.43–0.70) and 0.49 (0.38–0.62), respectively. Reduced risk associated with vaccination persisted for post-acute ATE, with sHRs of 0.74 (0.60–0.92), 0.72 (0.58–0.88) and 0.62 (0.48–0.80) for 31–90, 91–180 and 181–365 days post-COVID-19, respectively. Risk of post-acute MI remained lower for vaccinated individuals in the 31–90 and 91–180 days after COVID-19, with sHRs of 0.64 (0.46–0.87) and 0.64 (0.45–0.90), respectively. A vaccination effect on post-COVID-19 TIA was seen only at 181–365 days, with an sHR of 0.51 (0.31–0.82). Online supplemental figures S23-31 show database-specific and cohort-specific estimates for ATE-related complications.

Risk of post-COVID-19 cardiac complications was reduced in vaccinated individuals. Meta-analytic estimates in the acute phase showed sHRs of 0.45 (0.38–0.53) for HF, 0.41 (0.26–0.66) for MP and 0.41 (0.27–0.63) for VACA. Reduced risk persisted for post-acute COVID-19 HF: sHR 0.61 (0.51–0.73) for 31–90 days, 0.61 (0.51–0.73) for 91–180 days and 0.52 (0.43–0.63) for 181–365 days. For post-acute MP, risk was lowered only in the first post-acute window (31–90 days), with an sHR of 0.43 (0.21–0.85). Vaccination showed no association with post-COVID-19 HS. Database-specific and cohort-specific results for these cardiac diseases are shown in online supplemental figures S32-40.

Stratified analyses by vaccine showed similar associations, except for ChAdOx1, which was not associated with reduced VTE and ATE risk in the last post-acute window. Sensitivity analyses were consistent with the main results (online supplemental figures S6-13).

Figure 3 shows the results of comparative effects of BNT162b2 versus ChAdOx1, based on UK data. Meta-analytic estimates favoured BNT162b2 (sHR of 0.66 (0.46–0.93)) for VTE in the 0–30 days after infection, but no differences were seen for post-acute VTE or for any of the other outcomes. Results from sensitivity analyses, database-specific and cohort-specific estimates were in line with the main findings (online supplemental figures S41-51).

Forest plots for comparative vaccine effect (BNT162b2 vs ChAdOx1); meta-analysis across cohorts and databases. ATE, arterial thrombosis/thromboembolism; CD+HS, cardiac diseases and haemorrhagic stroke; VTE, venous thromboembolism.

Key findings

Our analyses showed a substantial reduction of risk (45–81%) for thromboembolic and cardiac events in the acute phase of COVID-19 associated with vaccination. This finding was consistent across four databases and three different European countries. Risks for post-acute COVID-19 VTE, ATE and HF were reduced to a lesser extent (24–58%), whereas a reduced risk for post-COVID-19 MP and VACA in vaccinated people was seen only in the acute phase.
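The percentage reductions quoted above follow directly from the sHRs as (1 − sHR) × 100. A minimal sketch, using acute-phase sHRs reported in the results (variable and function names are illustrative):

```python
# Percent risk reduction implied by a subdistribution hazard ratio (sHR):
# e.g. an sHR of 0.19 corresponds to a (1 - 0.19) x 100 = 81% lower risk.
def risk_reduction_pct(shr):
    return round((1 - shr) * 100)

# Acute-phase meta-analytic sHRs from the results section
acute_shrs = {"PE": 0.19, "VTE": 0.22, "MI": 0.49, "ATE": 0.53, "IS": 0.55}
reductions = {k: risk_reduction_pct(v) for k, v in acute_shrs.items()}
# These span 45% (IS) to 81% (PE), matching the 45-81% range in the text
```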

Results in context

The relationship between SARS-CoV-2 infection, COVID-19 vaccines and thromboembolic and/or cardiac complications is complex. Some large studies report an increased risk of VTE and ATE following both ChAdOx1 and BNT162b2 vaccination, 7 whereas other studies have not identified such a risk. 25 Elevated risk of VTE has also been reported among patients with COVID-19, and its occurrence can lead to poor prognosis and mortality. 26 27 Similarly, several observational studies have found an association between COVID-19 mRNA vaccination and a short-term increased risk of myocarditis, particularly among younger male individuals. 5 6 For instance, a self-controlled case series study conducted in England revealed an approximately 30% increased risk of hospital admission due to myocarditis within 28 days following both ChAdOx1 and BNT162b2 vaccines. However, the same study also found a ninefold higher risk of myocarditis following a positive SARS-CoV-2 test, clearly offsetting the observed post-vaccine risk.

COVID-19 vaccines have demonstrated high efficacy and effectiveness in preventing infection and reducing the severity of acute-phase infection. However, with the emergence of newer variants of the virus, such as omicron, and the waning protective effect of the vaccine over time, there is a growing interest in understanding whether the vaccine can also reduce the risk of complications after breakthrough infections. Recent studies suggested that COVID-19 vaccination could potentially protect against acute post-COVID-19 cardiac and thromboembolic events. 11 12 A large prospective cohort study 11 reported that the risk of VTE after SARS-CoV-2 infection was substantially reduced in fully vaccinated ambulatory patients. Likewise, Al-Aly et al 12 suggested a reduced risk for post-acute COVID-19 conditions in breakthrough infection versus SARS-CoV-2 infection without prior vaccination. However, those populations were limited to SARS-CoV-2-infected individuals, and the estimates did not include the effect of the vaccine in preventing COVID-19 in the first place. Other studies on post-acute COVID-19 conditions and symptoms have been conducted, 28 29 but there has been limited reporting on the condition-specific risks associated with COVID-19, even though the prognosis for different complications can vary significantly.

In line with previous studies, our findings suggest a potential benefit of vaccination in reducing the risk of post-COVID-19 thromboembolic and cardiac complications. We included broader populations, estimated the risk in both acute and post-acute infection phases and replicated these using four large independent observational databases. By pooling results across different settings, we provided the most up-to-date and robust evidence on this topic.

Strengths and limitations

The study has several strengths. Our multinational study covering different healthcare systems and settings showed consistent results across all databases, which highlights the robustness and replicability of our findings. All databases had complete recordings of vaccination status (date and vaccine) and are representative of the respective general population. Algorithms to identify study outcomes were used in previously published network studies, including regulatory-funded research. 3 4 8 18 Another strength is the staggered cohort design, which minimises confounding by indication and immortal time bias. PS overlap weighting and NCO empirical calibration have been shown to adequately minimise bias in vaccine effectiveness studies. 19 Furthermore, our estimates include the vaccine effectiveness against COVID-19, which is crucial in the pathway to experiencing post-COVID-19 complications.

Our study has some limitations. The use of real-world data comes with inherent limitations, including data quality concerns and risk of confounding. To deal with these limitations, we employed state-of-the-art methods, including large-scale propensity score weighting and calibration of effect estimates using NCO. 19 24 A recent study 30 has demonstrated that methodologically sound observational studies based on routinely collected data can produce results similar to those of clinical trials. We acknowledge that results from NCO were positively associated with vaccination, and estimates might still be influenced by residual bias despite using calibration. Another limitation is potential under-reporting of post-COVID-19 complications: some asymptomatic and mild COVID-19 infections might not have been recorded. Additionally, post-COVID-19 outcomes of interest might be under-recorded in primary care databases (CPRD Aurum and Gold) without hospital linkage, which represent a large proportion of the data in the study. However, results in SIDIAP and CORIVA, which include secondary care data, were similar. Also, our study included only a small number of young men and male teenagers, the population of main concern for increased risks of myocarditis/pericarditis following vaccination.


Vaccination against SARS-CoV-2 substantially reduced the risk of acute post-COVID-19 thromboembolic and cardiac complications, probably through a reduction in the risk of SARS-CoV-2 infection and the severity of COVID-19 disease due to vaccine-induced immunity. Reduced risk in vaccinated people lasted for up to 1 year for post-COVID-19 VTE, ATE and HF, but not clearly for other complications. Findings from this study highlight yet another benefit of COVID-19 vaccination. However, further research is needed on the possible waning of the risk reduction over time and on the impact of booster vaccination.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval

The study was approved by the CPRD’s Research Data Governance Process, Protocol No 21_000557 and the Clinical Research Ethics committee of Fundació Institut Universitari per a la recerca a l’Atenció Primària de Salut Jordi Gol i Gurina (IDIAPJGol) (approval number 4R22/133) and the Research Ethics Committee of the University of Tartu (approval No. 330/T-10).


This study is based in part on data from the Clinical Practice Research Datalink (CPRD) obtained under licence from the UK Medicines and Healthcare products Regulatory Agency. We thank the patients who provided these data, and the NHS who collected the data as part of their care and support. All interpretations, conclusions and views expressed in this publication are those of the authors alone and not necessarily those of CPRD. We would also like to thank the healthcare professionals in the Catalan healthcare system involved in the management of COVID-19 during these challenging times, from primary care to intensive care units; the Institut Català de la Salut and the Program d’Analítica de Dades per a la Recerca i la Innovació en Salut for providing access to the different data sources accessible through The System for the Development of Research in Primary Care (SIDIAP).


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

AMJ and MC are joint senior authors.

Contributors DPA and AMJ led the conceptualisation of the study with contributions from MC and NM-B. AMJ, TD-S, ER, AU and NTHT adapted the study design with respect to the local vaccine rollouts. AD and WYM mapped and curated CPRD data. MC and NM-B developed code with methodological contributions advice from MTS-S and CP. DPA, MC, NTHT, TD-S, HMEN, XL, CR and AMJ clinically interpreted the results. NM-B, XL, AMJ and DPA wrote the first draft of the manuscript, and all authors read, revised and approved the final version. DPA and AMJ obtained the funding for this research. DPA is responsible for the overall content as guarantor: he accepts full responsibility for the work and the conduct of the study, had access to the data, and controlled the decision to publish.

Funding The research was supported by the National Institute for Health and Care Research (NIHR) Oxford Biomedical Research Centre (BRC). DPA is funded through a NIHR Senior Research Fellowship (Grant number SRF-2018–11-ST2-004). Funding to perform the study in the SIDIAP database was provided by the Real World Epidemiology (RWEpi) research group at IDIAPJGol. Costs of databases mapping to OMOP CDM were covered by the European Health Data and Evidence Network (EHDEN).

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.



College of Science and Engineering

New study reveals breakthrough in understanding brain stimulation therapies

For the first time, researchers show how the brain can precisely adapt to external stimulation.

MINNEAPOLIS/ST. PAUL (03/14/2024) — For the first time, researchers at the University of Minnesota Twin Cities showed that non-invasive brain stimulation can change a specific brain mechanism that is directly related to human behavior. This is a major step forward for discovering new therapies to treat  brain disorders such as schizophrenia, depression, Alzheimer’s disease, and Parkinson’s disease.

The study was recently published in  Nature Communications , a peer-reviewed, open access, scientific journal.  

Researchers used what is called “transcranial alternating current stimulation” to modulate brain activity. This technique is also known as neuromodulation. By applying a small electrical current to the brain, the timing of when brain cells are active is shifted. This modulation of neural timing is related to neuroplasticity, which is a change in the connections between brain cells that is needed for human behavior, learning, and cognition. 

“Previous research showed that brain activity was time-locked to stimulation. What we found in this new study is that this relationship slowly changed and the brain adapted over time as we added in external stimulation,” said Alexander Opitz, University of Minnesota biomedical engineering associate professor. “This showed brain activity shifting in a way we didn’t expect.” 

This result is called “neural phase precession,” which occurs when brain activity gradually changes over time in relation to a repeating pattern, like an external event or, in this case, non-invasive stimulation. In this research, all three investigated approaches (computational models, human experiments and animal experiments) showed that external stimulation could shift brain activity over time.

“The timing of this repeating pattern has a direct impact on brain processes, for example, how we navigate space, learn, and remember,” Opitz said.

The discovery of this new technique shows how the brain adapts to external stimulation. This technique can increase or decrease brain activity, but is most powerful when it targets specific brain functions that affect behaviors. This way, long-term memory as well as learning can be improved. The long-term goal is to use this technique in the treatment of psychiatric and neurological disorders.

Opitz hopes that this discovery will help bring improved knowledge and technology to clinical applications, which could lead to more personalized therapies for schizophrenia, depression, Alzheimer’s disease, and Parkinson’s disease.

In addition to Opitz, the research team included co-first authors Miles Wischnewski and Harry Tran. Other team members from the University of Minnesota Biomedical Engineering Department include Zhihe Zhao, Zachary Haigh, Nipun Perera, Ivan Alekseichuk, Sina Shirinpour and Jonna Rotteveel. This study was in collaboration with Dr. Jan Zimmermann, associate professor in the University of Minnesota Medical School.

This work was supported primarily by the National Institutes of Health (NIH), along with the Brain and Behavior Research Foundation and the University of Minnesota’s Discovery, Research, and InnoVation Economy (MnDRIVE) Initiative. Computational resources were provided by the Minnesota Supercomputing Institute (MSI).

To read the entire research paper, titled “Induced neural phase precession through exogenous electric fields,” visit the Nature Communications website.

Researchers using external stimulation during experiment

Rhonda Zurn, College of Science and Engineering,  [email protected]

University Public Relations,  [email protected]


California Current Ecosystem

Long-Term Ecological Research

Zooplankton research: Sample collection, processing, and the end of the cruise


One of the many research subjects aboard the R/V Roger Revelle, and one of the many components of CCE-LTER, is the study of zooplankton. One focus of the Décima lab at the Scripps Institution of Oceanography is mesozooplankton and, more specifically, mesozooplankton grazing.

What are plankton and what are zooplankton and mesozooplankton?

Plankton are small, even microscopic, organisms that drift in ocean currents. Zooplankton, zoo meaning animal and plankton meaning drifter, are the small animals in the ocean that graze on phytoplankton (essentially small photosynthetic plants, algae, that are the primary producers of the pelagic food web). Mesozooplankton are a specific size class of zooplankton, ranging from 0.2 mm to 20 mm in size.

Why study zooplankton?

Zooplankton are studied to better understand the flow of energy through the pelagic food web: from phytoplankton, the primary producers, to zooplankton, to fish (such as sardines), to other marine megafauna such as dolphins and whales, and ultimately to us. Studying the grazing rates and feeding habits of mesozooplankton clarifies how energy flows to larger organisms, such as the fish we use as a food source, and in turn helps inform fisheries management practices.

A morning with the Bongo

I usually wake up around 6:30 AM to shower, brush my teeth, and get dressed before meeting the rest of the zooplankton day team in the wet lab around 7:00 AM. From 7:00-7:30, we prepare labels, organize our buckets, and prepare the Bongo net for sampling. Bongo isn’t an acronym or anything like that—there are two circular nets attached to a frame, making the entire net look like a bongo. We then go to breakfast around 7:30 and meet back in the lab between 8:00 and 8:15 AM. The first Bongo tow of the day happens anywhere between 9:00 AM and 11:00 AM, depending on what the ship’s schedule is for the day and what other operations need to take place. We always have to be ready to put our nets in the water at any time just in case a time slot opens up. We put the Bongo into the water for roughly thirty minutes, descending it to a depth of around 200 meters, towing it for thirty seconds, and then bringing it back up to the surface. The Bongo is typically lowered at thirty meters per minute and raised back up at twenty meters per minute. Once the Bongo is back on board, we process each side of the net, starting with the port (left) side and then the starboard (right) side. We rinse down the nets to collect all of the zooplankton into the codends (containers that collect zooplankton from the nets), transfer the contents of the codends into buckets, and bring them back into the wet lab for processing. When collecting the port side sample, we add a can of seltzer water to the sample to anesthetize the zooplankton so they don’t evacuate their gut contents, which are used for gut fluorescence processing.

The sample collected from the port side of the net is first poured into the Folsom splitter, an acrylic wheel with an opening at the top and a divider that runs through the middle that splits the sample into two 50% portions. Only the port side sample is split. The entirety of the starboard sample is transferred to a jar and preserved in formalin for later taxonomic identification of the organisms. Once the port sample is in the splitter, we homogenize the sample by mixing it with a ruler before pouring the now-split samples into two basins, making two 50% samples. One of these samples is then placed into a separate bucket and the other is split once more to create two 25% pieces of the original sample. One of these 25% samples is added to the 50% in the bucket to create a 75% sample, and the other 25% sample is transferred to the “gut cup,” where the water is filtered from the sample and the zooplankton are then preserved in a dewar containing liquid nitrogen for later analysis.

The now-75% sample is added again to the splitter, breaking the sample into two 37.5% samples. Both of these 37.5% splits are size-fractionated through a series of five sieves of decreasing mesh size (increasing fineness): >5mm, 2-5mm, 1-2mm, 0.5-1mm, and 0.2-0.5mm, to sort the different sized zooplankton within the sample. Each size range is then vacuum filtered over a piece of 202-micron mesh, placed into Petri dishes, labeled, and preserved for later analysis. One of the 37.5% splits is used for gut fluorescence analysis, and the other is used for biomass. The gut fluorescence samples are stored in a dewar containing liquid nitrogen, and the biomass samples are stored in a -80℃ freezer.
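The successive Folsom splits described above can be tracked as fractions of the original port-side sample (an illustrative Python sketch; the variable names are mine, not part of the lab's protocol):

```python
# Track the fraction of the original port-side sample at each stage
# of the Folsom splitting described above (fractions only).
full = 1.0
half_a, half_b = full / 2, full / 2            # first split: two 50% portions
quarter_a, quarter_b = half_b / 2, half_b / 2  # second split: two 25% portions
gut_cup = quarter_b                            # 25% frozen for gut fluorescence
combined = half_a + quarter_a                  # 50% + 25% = 75% recombined
split_1, split_2 = combined / 2, combined / 2  # final split: two 37.5% portions

# All material is accounted for: 37.5% + 37.5% + 25% = 100%
assert abs(split_1 + split_2 + gut_cup - 1.0) < 1e-12
```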

The gut fluorescence analysis examines the gut content of the zooplankton to calculate the organisms’ grazing rates. This analysis is used to better understand how energy flows from the primary producers throughout the rest of the food web and up different trophic levels, which, again, is useful for fisheries management and understanding how zooplankton communities affect marine megafauna on higher trophic levels. The gut fluorescence samples are also used for DNA analysis to catalog which zooplankton species are present at discrete depths in the water column and to characterize the zooplankton community composition.

The biomass samples are used for determining relative grazing rates for a certain size category of zooplankton and are also used to estimate the relative total biomass of zooplankton for a discrete depth range. For example, biomass samples collected from the Bongo tow would be representative of a depth range of 0-200 meters.

On the left is the Folsom splitter, in the middle are the vacuum pumps, and on the right is the Bongo net.

An afternoon with the MOCNESS

After the Bongo is fully processed and the nets have been washed down, we usually go to lunch around 11:30 AM, after which we “cock the MOC” (we’re trying to make “Professional MOC Cocker” t-shirts). When it’s time to tow the MOCNESS (Multiple Opening and Closing Net Environmental Sensing System), we lower the net into the water using two tag liners (people who hold onto ropes wrapped around the MOCNESS to keep it stable and ensure it doesn’t get twisted up or enter the water at the wrong angle), the A-frame, and the help of a research technician and a winch operator. Depending on the water depth of where we’re located within the California Current, the MOCNESS is towed to a depth of anywhere between 400m and 1100m, which can take anywhere from one-and-a-half to two-and-a-half hours. During the downtime when the net is being towed, we label all of our Petri dishes and our internal and external labels for formalin and ethanol preservations. We also fill our ten buckets a third of the way with seawater and store them in the -40℉ freezer.

Once we retrieve the MOCNESS, we rinse the codends down, take them off the nets, and transfer them to the chilled water so that the sample within each codend will not start to rot and decay while it’s waiting to be split. As with the Bongo net, we split the MOCNESS samples, but 50% of the sample is preserved in formalin, 25% is preserved in ethanol, and 25% is size-fractionated and used for biomass. The size fractioning always takes the longest, as we have to go through the sieves by hand and pick out all of the plankton that get stuck in the mesh. This can be particularly difficult with transparent, gelatinous zooplankton.

The samples we preserve in ethanol are used for DNA analysis for a similar purpose as the DNA analysis conducted with the gut fluorescence samples: to catalog which species of zooplankton are present at each depth. The samples preserved in formalin are again used for taxonomy and identifying which organisms are present.

End of cruise

A month at sea has come and gone. At times it has felt like I’ve been out here for two months, and at others like I’ve only been out here for a day. Maybe a week. The point is, time seems to fluctuate—sometimes it goes real slow out here, and sometimes it seems like it’s going too fast. Right now it feels like the latter. It’s currently the evening of Sunday, March 17, 2024, and we’re due to reach port in San Diego tomorrow evening around 8:00 PM. At this point, the Zooplankton team is done sampling and now all that’s left is to pack up all of our lab equipment before we begin demobilization on Tuesday the 19th. I’m sitting in the hangar watching the winch for our final DPI (deep plankton imager) deployment of the cruise to make sure the tension in the cable doesn’t get too high. The deployment is going to last around twelve hours, but thankfully we’re taking shifts so I’ll only be out here for two hours or so. I’ve got my laptop, a portable radio to communicate with the lab, my phone, a book, and a speaker.

I think the reason time’s gone so fast at points is because of the people we have on the Zooplankton team. With a month at sea, it’s almost impossible not to get to know the people you’re on the boat with, especially those you share twelve-hour shifts with. There’s a lot of work that we have to get done, but there are also moments of calm where we’ve played cards and board games, listened to music, watched movies, shared meals, talked, and gotten to know each other. I’ve been told several times that without a team of volunteers, a lot of the zooplankton samples wouldn’t be able to be collected or processed and I think that if our team was any different, the feeling of the whole cruise would have changed. As I reflect on my whole experience at sea, I’m grateful for this opportunity to participate in CCE-LTER, for everyone I’ve met aboard the R/V Revelle, and I’m incredibly grateful for my lab team. We’re all a part of the Zoop Soup.


Apple cider vinegar for weight management in Lebanese adolescents and young adults with overweight and obesity: a randomised, double-blind, placebo-controlled study

Rony Abou-Khalil 1 (http://orcid.org/0000-0002-0214-242X), Jeanne Andary 2 and Elissar El-Hayek 1

1 Department of Biology, Holy Spirit University of Kaslik, Jounieh, Lebanon
2 Nutrition and Food Science Department, American University of Science and Technology, Beirut, Lebanon

Correspondence to Dr Rony Abou-Khalil, Department of Biology, Holy Spirit University of Kaslik, Jounieh, Lebanon; ronyaboukhalil{at}usek.edu.lb

Background and aims Obesity and overweight have become significant health concerns worldwide, leading to an increased interest in finding natural remedies for weight reduction. One such remedy that has gained popularity is apple cider vinegar (ACV).

Objective To investigate the effects of ACV consumption on weight, blood glucose, triglyceride and cholesterol levels in a sample of the Lebanese population.

Materials and methods 120 overweight and obese individuals were recruited. Participants were randomly assigned to either an intervention group receiving 5, 10 or 15 mL of ACV daily (groups 1, 2 and 3) or a control group receiving a placebo (group 4) over a 12-week period. Measurements of anthropometric parameters, fasting blood glucose, triglyceride and cholesterol levels were taken at weeks 0, 4, 8 and 12.

Results Our findings showed that daily consumption of any of the three doses of ACV over 4–12 weeks was associated with significant reductions in anthropometric variables (weight, body mass index, waist/hip circumferences and body fat ratio) and in blood glucose, triglyceride and cholesterol levels. No significant adverse effects were observed during the 12 weeks of ACV intake.

Conclusion Consumption of ACV in people with overweight and obesity led to an improvement in the anthropometric and metabolic parameters. ACV could be a promising antiobesity supplement that does not produce any side effects.

  • Weight management
  • Lipid lowering

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .




What is already known on this topic

Recently, there has been increasing interest in alternative remedies to support weight management, and one such remedy that has gained popularity is apple cider vinegar (ACV).

A few small-scale studies conducted on humans have shown promising results, with ACV consumption leading to weight loss, reduced body fat and decreased waist circumference.

What this study adds

No study has been conducted to investigate the potential antiobesity effect of ACV in the Lebanese population. By conducting research in this demographic, the study provides region-specific data and offers a more comprehensive understanding of the impact of ACV on weight loss and metabolic health.

How this study might affect research, practice or policy

The results might contribute to evidence-based recommendations for the use of ACV as a dietary intervention in the management of obesity.

The study could stimulate further research in the field, prompting scientists to explore the underlying mechanisms and conduct similar studies in other populations.

Obesity is a growing global health concern characterised by excessive body fat accumulation, often resulting from a combination of genetic, environmental and lifestyle factors. 1 It is associated with an increased risk of numerous chronic illnesses such as type 2 diabetes, cardiovascular diseases, several common cancers and osteoarthritis. 1–3

According to the WHO, more than 1.9 billion adults were overweight worldwide in 2016, of whom more than 650 million were obese. 4 Worldwide obesity has nearly tripled since 1975. 4 The World Obesity Federation’s 2023 Atlas predicts that by 2035 more than half of the world’s population will be overweight or obese. 5

According to the 2022 Global Nutrition Report, Lebanon has made limited progress towards meeting its diet-related non-communicable disease targets. A total of 39.9% of adult (aged ≥18 years) women and 30.5% of adult men are living with obesity; Lebanon’s obesity prevalence is higher than the regional averages of 10.3% for women and 7.5% for men. 6 In Lebanon, obesity was rated the most important health problem by 27.6% of respondents, ranking fifth after cancer, cardiovascular disease, smoking and HIV/AIDS. 7

In recent years, there has been increasing interest in alternative remedies to support weight management, and one such remedy that has gained popularity is apple cider vinegar (ACV), which is a type of vinegar made by fermenting apple juice. ACV contains vitamins, minerals, amino acids and polyphenols such as flavonoids, which are believed to contribute to its potential health benefits. 8 9

It has been used for centuries as a traditional remedy for various ailments and has recently gained attention for its potential role in weight management.

In hypercaloric-fed rats, daily consumption of ACV attenuated the rise in blood glucose and lipid levels. 10 In addition, ACV seems to decrease oxidative stress and reduce the risk of obesity in male rats fed a high-fat diet. 11

A few small-scale studies conducted on humans have shown promising results, with ACV consumption leading to weight loss, reduced body fat and decreased waist circumference. 12 13 In fact, it has been suggested that ACV, by slowing gastric emptying, might promote satiety and reduce appetite. 14–16 Furthermore, ACV intake seems to ameliorate the glycaemic and lipid profiles in healthy adults 17 and might have a positive impact on insulin sensitivity, potentially reducing the risk of type 2 diabetes. 8 10 18

Unfortunately, the sample sizes and durations of these studies were limited, necessitating larger and longer-term studies for more robust conclusions.

This work aims to study the efficacy and safety of ACV in reducing weight and ameliorating the lipid and glycaemic profiles in a sample of overweight and obese adolescents and young adults of the Lebanese population. To the best of our knowledge, no study has been conducted to investigate the potential antiobesity effect of ACV in the Lebanese population.

Materials and methods


A total of 120 overweight and obese adolescents and young adults (46 men and 74 women) were enrolled in the study and assigned to either the placebo group or one of the experimental groups (receiving increasing doses of ACV).

The subjects were evaluated for eligibility according to the following inclusion criteria: age between 12 and 25 years, BMI between 27 and 34 kg/m², no chronic diseases, no intake of medications and no intake of ACV during the 8 weeks prior to the beginning of the study. The subjects who met the inclusion criteria were selected by a convenience sampling technique. Those who experienced heartburn due to vinegar were excluded.

Demographic and clinical data and eating habits were collected from all participants by a self-administered questionnaire.

Study design

This study was a double-blind, randomised clinical trial conducted for 12 weeks.

Subjects were divided randomly into four groups: three treatment groups and a placebo group. A simple randomisation method was employed using the randomisation allocation software. Groups 1, 2 and 3 consumed 5, 10 and 15 mL, respectively, of ACV (containing 5% of acetic acid) diluted in 250 mL of water daily, in the morning on an empty stomach, for 12 weeks. The control group received a placebo consisting of water with similar taste and appearance. In order to mimic the taste of vinegar, the placebo group’s beverage (250 mL of water) contained lactic acid (250 mg/100 mL). Identical-looking ACV and placebo bottles were used and participants were instructed to consume their assigned solution without knowing its identity. The subject’s group assignment was withheld from the researchers performing the experiment.
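The allocation step described above can be sketched in Python. This is illustrative only: the study states that dedicated randomisation allocation software was used, so the shuffle-then-deal scheme and the fixed seed below are assumptions, not the study's method.

```python
import random

def randomise(participant_ids, n_groups=4, seed=2023):
    """Simple randomisation sketch: shuffle participants, then deal them
    round-robin into n_groups equal groups. Illustrative only; the study
    used dedicated allocation software."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    groups = {g: [] for g in range(1, n_groups + 1)}
    for i, pid in enumerate(ids):
        groups[i % n_groups + 1].append(pid)
    return groups

# 120 participants -> groups 1-3 (5, 10, 15 mL ACV) and group 4 (placebo)
allocation = randomise(range(1, 121))
```

With 120 participants this yields four groups of 30; blinding is a separate concern, handled in the study by the identical-looking ACV and placebo bottles.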

Subjects consumed their normal diets throughout the study. The contents of daily meals and snacks were recorded in a diet diary. The physical activity of the subjects was also recorded. Daily individual phone messages were sent to all participants to remind them to take the ACV or the placebo. A mailing group was also created. Confidentiality was maintained throughout the procedure.

At weeks 0, 4, 8 and 12, anthropometric measurements were taken for all participants, and the level of glucose, triglycerides and total cholesterol was assessed by collecting 5 mL of fasting blood from each subject.

Anthropometric measurements

Body weight was measured in kilograms, to the nearest 0.01 kg, using a standardised and calibrated digital scale. Height was measured in centimetres, to the nearest 0.1 cm, using a stadiometer. Anthropometric measurements were taken for all participants by a team of trained field researchers, after a 10–12 hour fast and while the participants were wearing only undergarments.

Body mass indices (BMIs) were calculated using the following equation: BMI = weight (kg) / height (m)².
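As a quick worked check of the standard BMI formula (weight in kilograms divided by the square of height in metres), with hypothetical participant values:

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

# Hypothetical participant: 90 kg at 170 cm falls inside the study's
# 27-34 kg/m^2 inclusion window.
print(round(bmi(90, 170), 1))  # 31.1
```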

The waist circumference measurement was taken between the lowest rib margin and the iliac crest while the subject was in a standing position (to the nearest 0.1 cm). Hip circumference was measured at the widest point of the hip (to the nearest 0.1 cm).

The body fat ratio (BFR) was measured by the bioelectrical impedance analysis method (OMRON Fat Loss Monitor, Model No HBF-306C; Japan). Anthropometric variables are shown in table 1 .

Table 1 Baseline demographic, anthropometric and biochemical variables of the three apple cider vinegar groups (groups 1, 2 and 3) and the placebo group (group 4)

Blood biochemical analysis

Serum glucose was measured by the glucose oxidase method. 19 Triglyceride levels were determined using a serum triglyceride determination kit (TR0100, Sigma-Aldrich). Cholesterol levels were determined using a cholesterol quantitation kit (MAK043, Sigma-Aldrich). Biochemical variables are shown in table 1 .

Statistical methods and data analysis

Data are presented as mean±SD. Statistical analyses were performed using Statistical Package for the Social Sciences (SPSS) software (version 23.0). Significant differences between groups were determined by using an independent t-test. Statistical significance was set at p<0.05.
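The between-group comparison described above can be sketched with a standard-library implementation of the independent (Welch's) t statistic. The study itself used SPSS, so this is only a sketch of the test, and the week-12 weight-change samples below are invented for illustration, not study data.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent two-sample t statistic and its
    Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented week-12 weight changes (kg): an ACV group vs the placebo group
acv_change = [-7.2, -6.5, -8.1, -5.9, -7.8]
placebo_change = [-0.4, 0.2, -0.1, 0.3, -0.5]
t_stat, dof = welch_t(acv_change, placebo_change)  # large |t| -> small p value
```

The t statistic is then compared against the t distribution with `dof` degrees of freedom to obtain the p value judged at the 0.05 threshold.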

Ethical approval

The study protocol was reviewed and approved by the research ethics committee (REC) of the Higher Centre for Research (HCR) at The Holy Spirit University of Kaslik (USEK), Lebanon. The number/ID of the approval is HCR/EC 2023–005. The participants were informed of the study objectives and signed a written informed consent before enrolment. The study was conducted in accordance with the International Conference on Harmonisation E6 Guideline for Good Clinical Practice and the ethical principles of the Declaration of Helsinki.

Results

Sociodemographic, nutritional and other baseline characteristics of the participants

A total of 120 individuals (46 men and 74 women) with BMIs between 27 and 34 kg/m² were enrolled in the study. The mean age of the subjects was 17.8±5.7 years in the placebo group and 17.6±5.4 years in the experimental groups.

The majority of participants, approximately 98.3%, were non-vegetarian and 89% of them reported having a high eating frequency, with more than four meals per day. Eighty-seven per cent had no family history of obesity and 98% had no history of childhood obesity. The majority reported not having a regular exercise routine and experiencing negative emotions or anxiety. All participants were non-smokers and non-drinkers. A small percentage (6.7%) were following a therapeutic diet.

Effects of ACV intake on anthropometric variables

The addition of 5 mL, 10 mL or 15 mL of ACV to the diet resulted in significant decreases in body weight and BMI at weeks 4, 8 and 12 of ACV intake, when compared with baseline (week 0) (p<0.05). The decrease in body weight and BMI appeared to be dose-dependent, with the group receiving 15 mL of ACV showing the greatest reduction ( table 2 ).

Table 2 Anthropometric variables of the participants at weeks 0, 4, 8 and 12

The impact of ACV on body weight and BMI seems to be time-dependent as well. Reductions were more pronounced as the study progressed, with the most significant changes occurring at week 12.

Waist and hip circumferences, along with the body fat ratio (BFR), decreased significantly in the three treatment groups at weeks 8 and 12 compared with week 0 (p<0.05). No significant effect was observed at week 4 compared with baseline (p>0.05). The effect of ACV on these parameters seems to be time-dependent, with the most pronounced effect observed at week 12 compared with weeks 4 and 8. However, it does not seem to be dose-dependent, as the three doses of ACV showed a similar level of efficacy in reducing waist and hip circumferences and the BFR at weeks 8 and 12, compared with baseline ( table 2 ).

The placebo group did not experience any significant changes in the anthropometric variables throughout the study (p>0.05). This highlights that the observed improvements in body weight, BMI, waist and hip circumferences and Body Fat Ratio were likely attributed to the consumption of ACV.

Effects of ACV on blood biochemical parameters

The consumption of ACV also led to a time- and dose-dependent decrease in serum glucose, triglyceride and cholesterol levels ( table 3 ).

Table 3 Biochemical variables of the participants at weeks 0, 4, 8 and 12

Serum glucose levels decreased significantly with all three doses of ACV at weeks 4, 8 and 12 compared with week 0 (p<0.05) ( table 3 ). Triglyceride and total cholesterol levels decreased significantly at weeks 8 and 12, compared with week 0 (p<0.05). A dose of 15 mL of ACV for a duration of 12 weeks appears to be the most effective in reducing these three blood biochemical parameters.

There were no changes in glucose, triglyceride and cholesterol levels in the placebo group at weeks 4, 8 and 12 compared with week 0 ( table 3 ).

These data suggest that continued intake of 15 mL of ACV for more than 8 weeks is effective in reducing blood fasting sugar, triglyceride and total cholesterol levels in overweight/obese people.

Adverse reactions of ACV

No apparent adverse or harmful effects were reported by the participants during the 12 weeks of ACV intake.

Discussion

During the past two decades of the last century, childhood and adolescent obesity have dramatically increased healthcare costs. 20 21 Diet and exercise are the basic elements of weight loss. Many complementary therapies have been promoted to treat obesity, but few are truly beneficial.

The present study is the first to investigate the antiobesity effectiveness of ACV, the fermented juice from crushed apples, in the Lebanese population.

A total of 120 overweight and obese adolescents and young adults (46 men and 74 women) with BMIs between 27 and 34 kg/m² were enrolled. Participants were randomised to receive either a daily dose of ACV (5, 10 or 15 mL) or a placebo for a duration of 12 weeks.

Some previous studies have suggested that taking ACV before or with meals might help to reduce postprandial blood sugar levels, 22 23 but in our study, participants took ACV in the morning on an empty stomach. The choice of ACV intake timing was motivated by the aim to study the impact of apple cider vinegar without the confounding variables introduced by simultaneous food intake. In addition, taking ACV before meals could better reduce appetite and increase satiety.

Our findings reveal that the consumption of ACV in people with overweight and obesity led to an improvement in the anthropometric and metabolic parameters.

It is important to note that the diet diary and physical activity did not differ among the three treatment groups and the placebo throughout the whole study, suggesting that the decrease in anthropometric and biochemical parameters was caused by ACV intake.

Studies conducted on animal models often attribute these effects to various mechanisms, including increased energy expenditure, improved insulin sensitivity, appetite and satiety regulation.

While vinegar is composed of various ingredients, its primary component is acetic acid (AcOH). It has been shown that within 15 min of oral ingestion of 100 mL of vinegar containing 0.75 g of acetic acid, serum acetate levels increase from 120 µmol/L at baseline to 350 µmol/L 24 ; this fast increase in circulatory acetate is due to its rapid absorption in the upper digestive tract. 24 25

Biological action of acetate may be mediated by binding to the G-protein coupled receptors (GPRs), including GPR43 and GPR41. 25 These receptors are expressed in various insulin-sensitive tissues, such as adipose tissue, 26 skeletal muscle, liver, 27 and pancreatic beta cells, 28 but also in the small intestine and colon. 29 30

Yamashita and colleagues have revealed that oral administration of AcOH to type 2 diabetic Otsuka Long-Evans Tokushima Fatty rats improves glucose tolerance and reduces lipid accumulation in the adipose tissue and liver. This improvement in obesity-linked type 2 diabetes is due to the capacity of AcOH to inhibit the activity of carbohydrate-responsive element-binding protein, a transcription factor involved in regulating the expression of lipogenic genes such as fatty acid synthase and acetyl-CoA carboxylase. 26 31 Sakakibara and colleagues have reported that AcOH, besides inhibiting lipogenesis, reduces the expression of genes involved in gluconeogenesis, such as glucose-6-phosphatase. 32 The effect of AcOH on lipogenesis and gluconeogenesis is in part mediated by the activation of 5'-AMP-activated protein kinase in the liver. 32 This enzyme seems to be an important pharmacological target for the treatment of metabolic disorders such as obesity, type 2 diabetes and hyperlipidaemia. 32 33

5'-AMP-activated protein kinase is also known to stimulate fatty acid oxidation, thereby increasing energy expenditure. 32 33 These data suggest that the effect of ACV on weight and fat loss may be partly due to the ability of AcOH to inhibit lipogenesis and gluconeogenesis and activate fat oxidation.

Animal studies suggest that besides increasing energy expenditure, acetate may also reduce energy intake by regulating appetite and satiety. In mice, an intraperitoneal injection of acetate significantly reduced food intake by activating vagal afferent neurons. 32–34 It is important to note that animal studies on the effect of acetate on vagal activation are contradictory. This might be due to the site of administration of acetate and the use of different animal models.

In addition, in vitro and in vivo animal model studies suggest that acetate increases the secretion of gut-derived satiety hormones, such as GLP-1 and PYY, by enteroendocrine cells located in the gut. 25 32–35

Human studies related to the effect of vinegar on body weight are limited.

In accordance with our study, a randomised clinical trial conducted by Khezri and colleagues has shown that daily consumption of 30 mL of ACV for 12 weeks significantly reduced body weight, BMI, hip circumference, visceral adiposity index and appetite score in obese subjects on a restricted-calorie diet, compared with the control group (restricted-calorie diet without ACV). Furthermore, Khezri and colleagues showed that plasma triglyceride and total cholesterol levels significantly decreased, and high-density lipoprotein cholesterol concentration significantly increased, in the ACV group in comparison with the control group. 13

Similarly, Kondo and colleagues showed that daily consumption of 15 or 30 mL of ACV for 12 weeks reduced body weight, BMI and serum triglycerides in a sample of the Japanese population. 12

In contrast, Park et al reported that daily consumption of 200 mL of pomegranate vinegar for 8 weeks significantly reduced total fat mass in overweight or obese subjects compared with the control group without significantly affecting body weight and BMI. 36 This contradictory result could be explained by the difference in the percentage of acetate and other potentially bioactive compounds (such as flavonoids and other phenolic compounds) in different vinegar types.

In Lebanon, the percentage of the population with a BMI of 30 kg/m² or more is approximately 32%. The results of the present study showed that in obese Lebanese subjects with BMIs ranging from 27 to 34 kg/m², daily oral intake of ACV for 12 weeks reduced body weight by 6–8 kg and BMI by 2.7–3.0 points.

It would be interesting to investigate in future studies the effect of neutralised acetic acid on anthropometric and metabolic parameters, knowing that acidic substances, including acetic acid, could contribute to enamel erosion over time. In addition to promoting oral health, neutralising the acidity of ACV could improve its taste, making it more palatable. Furthermore, studying the effects of ACV on weight loss in young Lebanese individuals provides valuable insights, but further research is needed for a comprehensive understanding of how the effect of ACV might vary across different age groups, particularly in older populations and menopausal women.

The findings of this study indicate that ACV consumption for 12 weeks led to significant reduction in anthropometric variables and improvements in blood glucose, triglyceride and cholesterol levels in overweight/obese adolescents/adults. These results suggest that ACV might have potential benefits in improving metabolic parameters related to obesity and metabolic disorders in obese individuals. The results may contribute to evidence-based recommendations for the use of ACV as a dietary intervention in the management of obesity. The study duration of 12 weeks limits the ability to observe long-term effects. Additionally, a larger sample size would enhance the generalisability of the results.

Ethics statements

Patient consent for publication.

Consent obtained from parent(s)/guardian(s)

Ethics approval

This study involves human participants and was approved by the research ethics committee of the Higher Center for Research (HCR) at The Holy Spirit University of Kaslik (USEK), Lebanon. The number/ID of the approval is HCR/EC 2023-005. Participants gave informed consent to participate in the study before taking part.

References

  • Pandi-Perumal SR , et al
  • Poirier P ,
  • Bray GA , et al
  • World Health Organization
  • Global Nutrition Report
  • Geagea AG ,
  • Jurjus RA , et al
  • Liao H-J , et al
  • Serafin V ,
  • Ousaaid D ,
  • Laaroussi H ,
  • Bakour M , et al
  • Halima BH ,
  • Sarra K , et al
  • Fushimi T , et al
  • Khezri SS ,
  • Saidpour A ,
  • Hosseinzadeh N , et al
  • Montaser R , et al
  • Hlebowicz J ,
  • Darwiche G ,
  • Björgell O , et al
  • Santos HO ,
  • de Moraes WMAM ,
  • da Silva GAR , et al
  • Pourmasoumi M ,
  • Najafgholizadeh A , et al
  • Walker HK ,
  • Sanyaolu A ,
  • Qi X , et al
  • Nosrati HR ,
  • Mousavi SE ,
  • Sajjadi P , et al
  • Johnston CS ,
  • Quagliano S ,
  • Sugiyama S ,
  • Fushimi T ,
  • Kishi M , et al
  • Hernández MAG ,
  • Canfora EE ,
  • Jocken JWE , et al
  • Le Poul E ,
  • Struyf S , et al
  • Goldsworthy SM ,
  • Barnes AA , et al
  • Priyadarshini M ,
  • Fuller M , et al
  • Karaki S-I ,
  • Hayashi H , et al
  • Karaki S-I , et al
  • Yamashita H ,
  • Fujisawa K ,
  • Ito E , et al
  • Sakakibara S ,
  • Yamauchi T ,
  • Oshima Y , et al
  • Schimmack G ,
  • Defronzo RA ,
  • Goswami C ,
  • Iwasaki Y ,
  • Kim J , et al


Contributors RA-K: conceptualisation, methodology, data curation, supervision, guarantor, project administration, visualisation, writing–original draft. EE-H: conceptualisation, methodology, data curation, visualisation, writing–review and editing. JA: investigation, validation, writing–review and editing.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests There are no competing interests.

Provenance and peer review Not commissioned; externally peer reviewed.


Claim Substantiation: Common Issues in Study Design That Trip Advertisers Up

March 20, 2024    

Executive Summary

Companies often rely on R&D and scientists to design studies that will support their advertising claims. But what happens when there is a disconnect between those studies and the legal standards? Industry experts from Proskauer Rose LLP review issues in study design that frequently arise when studies are put to the test in a legal challenge, including, for example, when a control is and is not necessary, when bridging of study results is and is not acceptable, and the statistics behind properly substantiating equivalence claims and ratio claims.

Speakers: Jeff Warshafsky, Partner, Proskauer Rose LLP; Jennifer Yang, Senior Counsel, Proskauer Rose LLP



A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 are giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business for distributed digital and AI innovation.


Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.


Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use has led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • Data scientist:
      • prompt engineering
      • in-context learning
      • bias detection
      • pattern identification
      • reinforcement learning from human feedback
      • hyperparameter/large language model fine-tuning; transfer learning
  • Data engineer:
      • data wrangling and data warehousing
      • data pipeline construction
      • multimodal processing
      • vector database management

The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli , showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.

While developing Lilli, our team kept scale in mind, creating an open plug-in architecture and setting standards for how APIs should function and be built. They developed standardized tooling and infrastructure where teams could securely experiment and access a GPT LLM, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift "Lilli as a product" (which a handful of teams use to build specific solutions) to "Lilli as a platform" (which teams across the enterprise can access to build other products).

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, approved by both the security and legal teams, and made it available for teams across the organization to use. More important is taking the time to identify and build the capabilities that are common across the highest-priority use cases. The same financial-services company, for example, identified three components that could be reused for more than 100 identified use cases. By building those first, it was able to generate a significant portion of the code base for all the identified use cases, essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt  (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.
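The model hub, standard APIs, and caching mentioned in the architecture bullet can be sketched as a thin gateway. The class name, model names, and caching policy below are illustrative assumptions, not a prescribed implementation:

```python
# Hedged sketch of the reference-architecture elements named above: a "model
# hub" of approved models, a standard API bridge, and response caching.
# All names and models here are hypothetical.
APPROVED_MODELS = {"summarizer-v1", "maintenance-copilot-v2"}   # the model hub

class ModelGateway:
    """Single entry point teams call instead of hitting providers directly."""

    def __init__(self, backends):
        self._backends = backends   # model name -> callable(prompt) -> answer
        self._cache = {}            # (model, prompt) -> cached answer

    def complete(self, model: str, prompt: str) -> str:
        if model not in APPROVED_MODELS:
            raise PermissionError(f"{model} is not in the approved model hub")
        key = (model, prompt)
        if key not in self._cache:  # caching: reuse answers to repeated queries
            self._cache[key] = self._backends[model](prompt)
        return self._cache[key]
```

In practice the backends would call the chosen LLM provider, and the cache would need eviction and privacy rules; the point is only that teams go through one governed entry point rather than wiring each application to a provider directly.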

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture  are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
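The metadata-tagging idea in the second bullet can be sketched in a few lines. The documents, tag taxonomy, and helper functions below are invented for illustration, not a prescribed standard:

```python
# Hedged sketch of metadata tagging for unstructured sources, so teams can
# find what they need and remove a source's data again if necessary.
# The documents and tag keys are invented for illustration.
documents = [
    {"id": "doc-1", "text": "Pump P-40 overheats above 90C",
     "tags": {"domain": "maintenance", "source": "service-reports"}},
    {"id": "doc-2", "text": "Q3 investor call transcript",
     "tags": {"domain": "finance", "source": "transcripts"}},
]

def find(docs, **criteria):
    """Return documents whose tags match every given criterion."""
    return [d for d in docs
            if all(d["tags"].get(k) == v for k, v in criteria.items())]

def remove_source(docs, source):
    """Drop all documents from one source, e.g. to pull its data out of a corpus."""
    return [d for d in docs if d["tags"].get("source") != source]
```

Consistent tags are what make both retrieval and targeted deletion cheap; without them, removing one source's data from a model's corpus means re-reading everything.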

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible by starting with broad questions then narrowing them down. This provides the model with more context, and it also helps remove any bias of the people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.

An Bras Dermatol. 2016 May-Jun;91(3).

Sampling: how to select participants in my research study?

Jeovany Martínez-Mesa

1 Faculdade Meridional (IMED) - Passo Fundo (RS), Brazil.

David Alejandro González-Chica

2 University of Adelaide - Adelaide, Australia.

Rodrigo Pereira Duquia

3 Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA) - Porto Alegre (RS), Brazil.

Renan Rangel Bonamigo

João Luiz Bastos

4 Universidade Federal de Santa Catarina (UFSC) - Florianópolis (SC), Brazil.

In this paper, the basic elements related to the selection of participants for health research are discussed. Sample representativeness, the sample frame, types of sampling, and the impact that nonrespondents may have on the results of a study are described. The discussion is supported by practical examples to facilitate the reader's understanding.

To introduce readers to issues related to sampling.


The essential topics related to the selection of participants for health research are: 1) whether to work with samples or include the whole reference population in the study (census); 2) the sample frame; 3) the sampling process; and 4) the potential effects nonrespondents might have on study results. We will address each of these aspects with theoretical and practical examples in the sections that follow.


In a previous paper, we discussed the necessary parameters on which to estimate the sample size. 1 We define sample as a finite part or subset of participants drawn from the target population. In turn, the target population corresponds to the entire set of subjects whose characteristics are of interest to the research team. Based on results obtained from a sample, researchers may draw their conclusions about the target population with a certain level of confidence, following a process called statistical inference. When the sample contains fewer individuals than the minimum necessary, but the representativeness is preserved, statistical inference may be compromised in terms of precision (prevalence studies) and/or statistical power to detect the associations of interest. 1 On the other hand, samples without representativeness may not be a reliable source to draw conclusions about the reference population (i.e., statistical inference is not deemed possible), even if the sample size reaches the required number of participants. Lack of representativeness can occur as a result of flawed selection procedures (sampling bias) or when the probability of refusal/non-participation in the study is related to the object of research (nonresponse bias). 1 , 2

Although most studies are performed using samples, whether or not they represent any target population, census-based estimates should be preferred whenever possible. 3 , 4 For instance, if all cases of melanoma are available on a national or regional database, and information on the potential risk factors are also available, it would be preferable to conduct a census instead of investigating a sample.

However, there are several theoretical and practical reasons that prevent us from carrying out census-based surveys, including:

  • Ethical issues: it is unethical to include a greater number of individuals than that effectively required;
  • Budgetary limitations: the high costs of a census survey often limits its use as a strategy to select participants for a study;
  • Logistics: censuses often impose great challenges in terms of required staff, equipment, etc. to conduct the study;
  • Time restrictions: the amount of time needed to plan and conduct a census-based survey may be excessive; and,
  • Unknown target population size: if the study objective is to investigate the presence of premalignant skin lesions in illicit drugs users, lack of information on all existing users makes it impossible to conduct a census-based study.

All these reasons explain why samples are more frequently used. However, researchers must be aware that sample results can be affected by the random error (or sampling error). 3 To exemplify this concept, we will consider a research study aiming to estimate the prevalence of premalignant skin lesions (outcome) among individuals >18 years residing in a specific city (target population). The city has a total population of 4,000 adults, but the investigator decided to collect data on a representative sample of 400 participants, detecting an 8% prevalence of premalignant skin lesions. A week later, the researcher selects another sample of 400 participants from the same target population to confirm the results, but this time observes a 12% prevalence of premalignant skin lesions. Based on these findings, is it possible to assume that the prevalence of lesions increased from the first to the second week? The answer is probably not. Each time we select a new sample, it is very likely to obtain a different result. These fluctuations are attributed to the "random error." They occur because individuals composing different samples are not the same, even though they were selected from the same target population. Therefore, the parameters of interest may vary randomly from one sample to another. Despite this fluctuation, if it were possible to obtain 100 different samples of the same population, approximately 95 of them would provide prevalence estimates very close to the real estimate in the target population - the value that we would observe if we investigated all the 4,000 adults residing in the city. Thus, during the sample size estimation the investigator must specify in advance the highest or maximum acceptable random error value in the study. Most population-based studies use a random error ranging from 2 to 5 percentage points. Nevertheless, the researcher should be aware that the smaller the random error considered in the study, the larger the required sample size. 1
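The fluctuation described above is easy to reproduce. The sketch below simulates the example, assuming a true prevalence of 10% in a population of 4,000 adults and repeatedly drawing samples of 400; the seed and counts are illustrative assumptions:

```python
import random

# Simulation of the example above (illustrative numbers): a city of 4,000
# adults with an assumed true prevalence of 10% premalignant lesions.
random.seed(42)
population = [1] * 400 + [0] * 3600   # 1 = has a lesion; true prevalence 10%

prevalences = []
for _ in range(100):                  # draw 100 independent samples of 400
    sample = random.sample(population, 400)
    prevalences.append(100 * sum(sample) / 400)

# Estimates fluctuate from sample to sample (random error), but most fall
# within a few percentage points of the true 10%.
print(min(prevalences), max(prevalences))
```

Increasing the sample size shrinks this spread, which is why accepting a smaller maximum random error forces a larger sample.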


The sample frame is the group of individuals that can be selected from the target population given the sampling process used in the study. For example, to identify cases of cutaneous melanoma the researcher may consider utilizing the national cancer registry system or the anatomopathological records of skin biopsies as the sample frame. Given that the sample may represent only a portion of the target population, the researcher needs to examine carefully whether the selected sample frame fits the study objectives or hypotheses, and especially whether there are strategies to overcome the sample frame's limitations (see Chart 1 for examples and possible limitations).

Chart 1. Examples of sample frames and potential limitations regarding representativeness

Sampling can be defined as the process through which individuals or sampling units are selected from the sample frame. The sampling strategy needs to be specified in advance, given that the sampling method may affect the sample size estimation. 1 , 5 Without a rigorous sampling plan the estimates derived from the study may be biased (selection bias). 3


In Figure 1, we depict a summary of the main sampling types. There are two major sampling types: probabilistic and nonprobabilistic.

Figure 1. Sampling types used in scientific studies


In the context of nonprobabilistic sampling, the likelihood of selecting some individuals from the target population is null. This type of sampling does not render a representative sample; therefore, the observed results are usually not generalizable to the target population. Still, unrepresentative samples may be useful for some specific research objectives, and may help answer particular research questions, as well as contribute to the generation of new hypotheses. 4 The different types of nonprobabilistic sampling are detailed below.

Convenience sampling: participants are selected consecutively in order of appearance, according to their convenient accessibility (also known as consecutive sampling). The sampling process ends when the intended number of participants (sample saturation) and/or the time limit (time saturation) is reached. Randomized clinical trials are usually based on convenience sampling. After sampling, participants are usually randomly allocated to the intervention or control group (randomization). 3 Although randomization is a probabilistic process to obtain two comparable groups (treatment and control), the samples used in these studies are generally not representative of the target population.

Purposive sampling: this is used when a diverse sample is necessary or the opinion of experts in a particular field is the topic of interest. This technique was used in the study by Roubille et al, in which recommendations for the treatment of comorbidities in patients with rheumatoid arthritis, psoriasis, and psoriatic arthritis were made based on the opinion of a group of experts. 6

Quota sampling: according to this sampling technique, the population is first classified by characteristics such as gender, age, etc. Subsequently, sampling units are selected to complete each quota. For example, in the study by Larkin et al., the combination of vemurafenib and cobimetinib versus placebo was tested in patients with locally-advanced melanoma, stage IIIC or IV, with BRAF mutation. 7 The study recruited 495 patients from 135 health centers located in several countries. In this type of study, each center has a "quota" of patients.

"Snowball" sampling : in this case, the researcher selects an initial group of individuals. Then, these participants indicate other potential members with similar characteristics to take part in the study. This is frequently used in studies investigating special populations, for example, those including illicit drugs users, as was the case of the study by Gonçalves et al, which assessed 27 users of cocaine and crack in combination with marijuana. 8


In the context of probabilistic sampling, all units of the target population have a nonzero probability of taking part in the study. If all participants are equally likely to be selected, the sampling is equiprobabilistic, and the probability of being selected may be expressed by the formula P=1/N, where P is the probability of taking part in the study and N corresponds to the size of the target population. The main types of probabilistic sampling are described below.

Simple random sampling: in this case, we have a full list of sampling units or participants (sample basis), and we randomly select individuals using a table of random numbers. An example is the study by Pimenta et al., in which the authors obtained from the Health Department a list of all elderly people enrolled in the Family Health Strategy and, by simple random sampling, selected a sample of 449 participants. 9
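A minimal sketch of this procedure, assuming an invented registry standing in for the sample basis (modern software replaces the table of random numbers):

```python
import random

# Sketch of simple random sampling from a full list (the sample basis);
# the registry names and sizes are invented for illustration.
random.seed(1)
registry = [f"participant-{i}" for i in range(1, 4001)]   # N = 4,000
sample = random.sample(registry, 449)                     # n = 449, no repeats

# Every individual has the same chance, n/N, of being included.
print(len(sample))
```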

Systematic random sampling: in this case, participants are selected at fixed, previously defined intervals from a ranked list of participants. For example, in the study by Kelbore et al., children assisted at the Pediatric Dermatology Service were selected to evaluate factors associated with atopic dermatitis by always taking every second child in order of consultation. 10
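A minimal sketch of this interval-based selection, assuming an invented ranked list of 1,000 children:

```python
import random

# Sketch of systematic random sampling: a random start, then every k-th
# individual from a ranked list.  The list of 1,000 "children" is invented.
random.seed(2)
ranked_list = list(range(1, 1001))   # e.g. children in order of consultation
n = 100                              # desired sample size
k = len(ranked_list) // n            # sampling interval k = 10
start = random.randrange(k)          # random start within the first interval
sample = ranked_list[start::k]       # take every k-th individual thereafter

print(len(sample), sample[1] - sample[0])
```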

Stratified sampling: in this type of sampling, the target population is first divided into separate strata. Then, samples are selected within each stratum, either through simple or systematic sampling. The total number of individuals selected in each stratum can be fixed or proportional to the size of the stratum. With proportional allocation, each individual is equally likely to be selected; the fixed method, in contrast, usually requires the use of sampling weights in the statistical analysis (the inverse of the probability of selection, or 1/P). An example is the study conducted in South Australia to investigate factors associated with vitamin D deficiency in preschool children. Using the national census as the sample frame, households were randomly selected in each stratum and all children in the age group of interest identified in the selected houses were investigated. 11
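Under stated assumptions (invented strata of 3,000 urban and 1,000 rural residents), a fixed allocation and its sampling weights can be sketched as:

```python
import random

# Sketch of stratified sampling with a fixed allocation per stratum; the
# strata and their sizes are invented for illustration.  Because the strata
# differ in size, each participant carries a sampling weight of 1/P.
random.seed(3)
strata = {"urban": list(range(3000)), "rural": list(range(1000))}
n_per_stratum = 100                          # fixed allocation

samples, weights = {}, {}
for name, units in strata.items():
    samples[name] = random.sample(units, n_per_stratum)
    # selection probability P = n/N_stratum, so weight = 1/P = N_stratum/n
    weights[name] = len(units) / n_per_stratum

print(weights["urban"], weights["rural"])
```

Each sampled urban resident stands for 30 people of the target population and each rural one for 10, which is exactly what the weights feed into the analysis.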

Cluster sampling: in this type of probabilistic sampling, groups such as health facilities, schools, etc., are sampled. In the above-mentioned study, the selection of households is an example of cluster sampling. 11

Complex or multi-stage sampling: this probabilistic sampling method combines different strategies in the selection of the sample units. An example is the study of Duquia et al. to assess the prevalence of and factors associated with the use of sunscreen in adults, in which the sampling process included more than one stage. 12 Using the 2000 Brazilian demographic census as the sampling frame, all 404 census tracts from Pelotas (Southern Brazil) were listed in ascending order of family income. A sample of 120 tracts was systematically selected (first-stage sampling units). In the second stage, 12 households in each of these census tracts (second-stage sampling units) were systematically drawn. All adult residents of these households were included in the study (third-stage sampling units). All these stages have to be considered in the statistical analysis to provide correct estimates.
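The first two stages of a draw like the one described above can be sketched as follows, with invented tract and household counts standing in for the census data:

```python
import random

# Sketch of a two-stage (complex) draw; tract and household counts are
# invented stand-ins for the census data described above.
random.seed(4)
tracts = {t: [f"tract{t}-house{h}" for h in range(50)] for t in range(1, 405)}

tract_ids = sorted(tracts)                   # 404 tracts in ranked order
k = len(tract_ids) // 120                    # sampling interval (here k = 3)
start = random.randrange(k)
selected_tracts = tract_ids[start::k][:120]  # first-stage units: 120 tracts

households = []                              # second-stage units
for t in selected_tracts:
    households += random.sample(tracts[t], 12)

print(len(selected_tracts), len(households))
```

A real analysis would then carry the selection probabilities of both stages into the estimation, as the paragraph above notes.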


Frequently, sample sizes are increased by 10% to compensate for potential nonresponse (refusals/losses). 1 Imagine that, in a study to assess the prevalence of premalignant skin lesions, the percentage of nonrespondents is higher among men (10%) than among women (1%). If the higher nonresponse occurs because these men are not at home during the scheduled visits, and such participants are more likely to be exposed to the sun, the number of skin lesions will be underestimated. For this reason, it is strongly recommended to collect and describe basic characteristics of nonrespondents (sex, age, etc.) so they can be compared with respondents, in order to evaluate whether the results may have been affected by this systematic error.
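As a rough sketch of this adjustment: rather than a flat 10% bump, dividing the required sample size by the expected response rate guarantees that, on average, enough respondents remain. The numbers below are illustrative, not from any cited study:

```python
import math

def inflate_for_nonresponse(n_required, nonresponse_rate):
    """Enlarge the target sample so the expected number of
    respondents still meets the required sample size."""
    return math.ceil(n_required / (1 - nonresponse_rate))

# Hypothetical study needing 300 complete interviews, expecting 10% nonresponse
inflate_for_nonresponse(300, 0.10)  # → 334 invitations
```

A simple 10% increase (330 invitations) would leave an expected 297 respondents, just short of the target, which is why the division form is slightly more conservative.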

Often, in study protocols, refusal to participate or to sign the informed consent is listed as an "exclusion criterion". This is not correct: these individuals are eligible for the study and must be reported as "nonrespondents".


In general, clinical trials aim to obtain a homogeneous sample, which is not necessarily representative of any target population. Clinical trials often recruit those participants who are most likely to benefit from the intervention. 3 The stricter inclusion and exclusion criteria of clinical trials also make it harder to locate participants: after verification of the eligibility criteria, only about one out of ten possible candidates will enter the study. Therefore, the results of clinical trials usually cannot be generalized to the entire population of patients with the disease, but only to those with characteristics similar to the sample included in the study. These peculiarities justify the need to conduct multicenter and/or global studies to accelerate recruitment and reach, in a shorter time, the number of patients required for the study. 13

In observational studies, in turn, building a solid sampling plan is important because of the great heterogeneity usually observed in the target population; this heterogeneity must also be reflected in the sample. A cross-sectional population-based study aiming to estimate disease frequency or identify risk factors often uses complex probabilistic sampling, because sample representativeness is crucial. In a case-control study, however, we face the challenge of selecting two different samples for the same study. One sample is formed by the cases, identified based on the diagnosis of the disease of interest. The other consists of controls, who need to be representative of the population that originated the cases. Improper selection of controls may introduce selection bias into the results. Thus, in this type of study, the concern with representativeness rests on the relationship between cases and controls (comparability).

In cohort studies, individuals are recruited based on exposure status (exposed and unexposed subjects) and are followed over time to evaluate the occurrence of the outcome of interest. At baseline, the sample may be drawn from a representative sample (population-based cohort studies) or a non-representative one. However, at each successive follow-up, the remaining participants must be a representative sample of those included at baseline. 14 , 15 In this type of study, losses over time may cause follow-up bias.

Researchers need to decide during the planning stage of the study whether they will work with the entire target population (a census) or a sample. Working with a sample involves several steps, including sample size estimation, identification of the sampling frame, and selection of the sampling method to be adopted.

Financial Support: None.

* Study performed at Faculdade Meridional - Escola de Medicina (IMED) - Passo Fundo (RS), Brazil.


Research Objectives | Definition & Examples

Published on July 12, 2022 by Eoghan Ryan. Revised on November 20, 2023.

Research objectives describe what your research is trying to achieve and explain why you are pursuing it. They summarize the approach and purpose of your project and help to focus your research.

Your objectives should appear in the introduction of your research paper, at the end of your problem statement. They should:

  • Establish the scope and depth of your project
  • Contribute to your research design
  • Indicate how your project will contribute to existing knowledge

Table of contents

  • What is a research objective?
  • Why are research objectives important?
  • How to write research aims and objectives
  • SMART research objectives
  • Other interesting articles
  • Frequently asked questions about research objectives

Research objectives describe what your research project intends to accomplish. They should guide every step of the research process, including how you collect data, build your argument, and develop your conclusions.

Your research objectives may evolve slightly as your research progresses, but they should always line up with the research carried out and the actual content of your paper.

Research aims

A distinction is often made between research objectives and research aims.

A research aim typically refers to a broad statement indicating the general purpose of your research project. It should appear at the end of your problem statement, before your research objectives.

Your research objectives are more specific than your research aim and indicate the particular focus and approach of your project. Though you will only have one research aim, you will likely have several research objectives.


Research objectives are important because they:

  • Establish the scope and depth of your project: This helps you avoid unnecessary research. It also means that your research methods and conclusions can easily be evaluated.
  • Contribute to your research design: When you know what your objectives are, you have a clearer idea of what methods are most appropriate for your research.
  • Indicate how your project will contribute to extant research: They allow you to display your knowledge of up-to-date research, employ or build on current research methods, and attempt to contribute to recent debates.

Once you’ve established a research problem you want to address, you need to decide how you will address it. This is where your research aim and objectives come in.

Step 1: Decide on a general aim

Your research aim should reflect your research problem and should be relatively broad.

Step 2: Decide on specific objectives

Break down your aim into a limited number of steps that will help you resolve your research problem. What specific aspects of the problem do you want to examine or understand?

Step 3: Formulate your aims and objectives

Once you’ve established your research aim and objectives, you need to explain them clearly and concisely to the reader.

You’ll lay out your aims and objectives at the end of your problem statement, which appears in your introduction. Frame them as clear declarative statements, and use appropriate verbs to accurately characterize the work that you will carry out.

The acronym “SMART” is commonly used in relation to research objectives. According to this framework, your objectives should be:

  • Specific: Make sure your objectives aren’t overly vague. Your research needs to be clearly defined in order to get useful results.
  • Measurable: Know how you’ll measure whether your objectives have been achieved.
  • Achievable: Your objectives may be challenging, but they should be feasible. Make sure that relevant groundwork has been done on your topic or that relevant primary or secondary sources exist. Also ensure that you have access to relevant research facilities (labs, library resources, research databases, etc.).
  • Relevant: Make sure that they directly address the research problem you want to work on and that they contribute to the current state of research in your field.
  • Time-based: Set clear deadlines for objectives to ensure that the project stays on track.


If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.


Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias


Scope of research is determined at the beginning of your research process, prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation. A scope is needed for all types of research: quantitative, qualitative, and mixed methods.

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size, and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control, extraneous, or confounding variables that could bias your research if not accounted for properly

Cite this Scribbr article


Ryan, E. (2023, November 20). Research Objectives | Definition & Examples. Scribbr. Retrieved March 18, 2024, from https://www.scribbr.com/research-process/research-objectives/


