
Data Analysis in Research: Types & Methods


Content Index

  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis
  • What is data analysis in research?

What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction through summarization and categorization, which helps identify and link patterns and themes in the data. The third is the analysis itself, which researchers perform in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “the data analysis and data interpretation is a process representing the application of deductive and inductive logic to the research and data analysis.”

Why analyze data in research?

Researchers rely heavily on data, as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but the answer to that question. But what if there is no question to ask? It is still possible to explore data without a problem; we call this “data mining,” and it often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them in finding patterns and shaping the story they want to tell. One of the essential things expected of researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Sometimes data analysis tells the most unforeseen yet exciting stories that no one expected when the analysis began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data describes things by assigning a specific value to them. For analysis, you need to organize these values, processing and presenting them in a given context, to make them useful. Data can take different forms; here are the primary data types.

  • Qualitative data: When the data presented has words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: questions about age, rank, cost, length, weight, scores, etc. all yield this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups. An item included in categorical data, however, cannot belong to more than one group. Example: a person describing their living style, marital status, smoking habit, or drinking habit in a survey response provides categorical data. A chi-square test is a standard method used to analyze this data; a minimal sketch of such a test follows this list.
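To make that last point concrete, here is a minimal Python sketch of a chi-square test of independence on two categorical variables; the counts are invented for illustration, and scipy is assumed to be available.

```python
# A minimal sketch of a chi-square test of independence on
# hypothetical survey counts (not from any real study).
from scipy.stats import chi2_contingency

# Rows: smoker / non-smoker; columns: married / single (invented counts)
observed = [[45, 55],
            [60, 40]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# A small p-value (e.g. < 0.05) suggests the two categorical
# variables are associated rather than independent.
```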


Data analysis in qualitative research

Data analysis in qualitative research works a little differently than with numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complicated information is an involved process; hence, qualitative data is typically used for exploratory research and analysis.

Finding patterns in the qualitative data

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and identify repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find “food” and “hunger” to be the most commonly used words and will highlight them for further analysis.


Keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which participants use a particular keyword.

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is another highly recommended text analysis method used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, used to determine how specific texts are similar to or different from one another.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations in enormous datasets.


Methods used for data analysis in qualitative research

There are several techniques for analyzing data in qualitative research; here are some commonly used methods:

  • Content Analysis: This is the most widely accepted and most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information in text, images, and sometimes physical items. When and where to use this method depends on the research questions.
  • Narrative Analysis: This method is used to analyze content gathered from various sources, such as personal interviews, field observations, and surveys. Most of the time, the stories or opinions shared by people are examined to find answers to the research questions.
  • Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this method considers the social context in which the communication between researcher and respondent takes place. Discourse analysis also considers the respondent’s lifestyle and day-to-day environment when drawing conclusions.
  • Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When researchers use this method, they may alter explanations or produce new ones until they arrive at a conclusion.


Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to determine whether the collected data sample meets pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey or, in an interview, that the interviewer asked all the questions devised in the questionnaire.

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or skip them accidentally. Data editing is a process wherein the researchers confirm that the provided data is free of such errors. They need to conduct the necessary checks, including outlier checks, to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses. If a survey is completed with a sample size of 1,000, the researcher might create age brackets to distinguish the respondents by age. It then becomes easier to analyze small data buckets rather than dealing with the massive data pile.
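As a concrete illustration of data coding, here is a minimal sketch that buckets hypothetical respondent ages into brackets with pandas; the ages and bracket edges are invented for illustration.

```python
# A minimal sketch of data coding: bucketing respondent ages into
# brackets with pandas. Ages and bracket edges are hypothetical.
import pandas as pd

ages = pd.Series([18, 23, 35, 41, 52, 67, 29, 44])
brackets = pd.cut(ages,
                  bins=[0, 24, 34, 44, 54, 120],
                  labels=["18-24", "25-34", "35-44", "45-54", "55+"])
print(brackets.value_counts().sort_index())
# Each respondent now carries a coded age bracket, which is easier
# to analyze than a pile of raw ages.
```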


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis is the most favored approach to analyzing numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: descriptive statistics, which describe the data, and inferential statistics, which help compare and generalize from the data.

Descriptive statistics

This method is used to describe the basic features of the many types of data encountered in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not go beyond summarizing the data; any conclusions drawn are based on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to describe the central points of a distribution.
  • Researchers use this method when they want to showcase the most common or average response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the gap between the highest and lowest observed scores.
  • Variance and standard deviation capture how far observed scores fall from the mean.
  • These measures identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is and how strongly that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores, helping researchers identify the relationship between different scores.
  • It is often used when researchers want to compare individual scores against the overall distribution.
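To tie these four families of measures together, here is a minimal Python sketch that computes each of them on a hypothetical set of survey scores, using only the standard library.

```python
# A minimal sketch of the four families of descriptive measures,
# computed on hypothetical survey scores.
import statistics
from collections import Counter

scores = [4, 5, 3, 4, 5, 2, 4, 5, 3, 4]

# Measures of frequency: count and percent of each response
counts = Counter(scores)
percents = {k: 100 * v / len(scores) for k, v in counts.items()}

# Measures of central tendency
center = (statistics.mean(scores),
          statistics.median(scores),
          statistics.mode(scores))

# Measures of dispersion or variation
spread = (max(scores) - min(scores),     # range
          statistics.variance(scores),   # sample variance
          statistics.stdev(scores))      # sample standard deviation

# Measures of position: quartile cut points
q1, q2, q3 = statistics.quantiles(scores, n=4)

print(counts, percents, center, spread, (q1, q2, q3))
```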

In quantitative research, descriptive analysis often gives absolute numbers, but those numbers alone are rarely sufficient to demonstrate the rationale behind them. Nevertheless, it is necessary to choose the research and data analysis method best suited to your survey questionnaire and to the story researchers want to tell. For example, the mean is the best way to demonstrate students’ average scores in schools. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided sample without generalizing it: for example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample drawn from that population. For example, you can ask a hundred-odd audience members at a movie theater if they like the movie they are watching. Researchers then use inferential statistics on the collected sample to infer that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and uses them to demonstrate something about the population parameter (a minimal sketch of this follows the list).
  • Hypothesis tests: It’s about using sample research data to answer survey research questions. For example, researchers might be interested to understand whether a recently launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.
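As a rough illustration of estimating parameters, here is a minimal sketch that turns the hypothetical movie theater example above into a 95% confidence interval for the share of people who liked the movie; the sample numbers are invented.

```python
# A minimal sketch of estimating a population proportion from a
# sample, using the (hypothetical) movie theater example.
import math

n = 100        # sample size: audience members asked
liked = 85     # respondents who said they liked the movie (invented)

p_hat = liked / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of a proportion
z = 1.96                                  # z-value for 95% confidence
low, high = p_hat - z * se, p_hat + z * se
print(f"Estimated share: {p_hat:.0%} (95% CI: {low:.0%} to {high:.0%})")
# Prints roughly 78% to 92%, consistent with the "80-90%" reasoning above.
```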

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the provided data has age categories presented in rows and gender categories in columns. A two-dimensional cross-tabulation then supports seamless analysis by showing the number of males and females in each age category (a minimal sketch follows this list).
  • Regression analysis: To understand the strength of the relationship between two variables, researchers rarely look beyond regression analysis, the primary and most commonly used method of this kind, which is also a type of predictive analysis. In this method, you have an essential factor called the dependent variable and one or more independent variables, and you work out the impact of the independent variables on the dependent variable. The values of both the independent and dependent variables are assumed to have been ascertained in an error-free, random manner.
  • Frequency tables: A frequency table records how often each value or response occurs in the data, making it easy to spot the most and least common answers and to compare distributions across groups.
  • Analysis of variance: This statistical procedure tests the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation suggests the research findings are significant. In many contexts, ANOVA testing and variance analysis are treated as synonymous.
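As a brief illustration of two of these methods, here is a minimal sketch of a cross-tabulation and a simple linear regression on invented data; pandas and numpy are assumed to be available.

```python
# A minimal sketch of cross-tabulation and simple linear regression
# on hypothetical data.
import pandas as pd
import numpy as np

# Cross-tabulation: respondents by age group and gender (invented)
df = pd.DataFrame({
    "age_group": ["18-24", "18-24", "25-34", "25-34", "35-44", "35-44"],
    "gender":    ["F", "M", "F", "F", "M", "M"],
})
print(pd.crosstab(df["age_group"], df["gender"]))

# Simple regression: hours studied (independent) vs. exam score
# (dependent), with invented values
hours = np.array([1, 2, 3, 4, 5, 6])
score = np.array([52, 57, 61, 68, 74, 79])
slope, intercept = np.polyfit(hours, score, deg=1)  # least-squares fit
print(f"score ~= {slope:.1f} * hours + {intercept:.1f}")
```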
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Research and data analytics projects usually differ by scientific discipline; therefore, getting statistical advice at the beginning of analysis helps design the survey questionnaire, select data collection methods, and choose samples.

  • The primary aim of research data analysis is to derive insights that are unbiased. Any mistake in, or bias while, collecting data, selecting an analysis method, or choosing an audience sample is likely to lead to a biased inference.
  • No amount of sophistication in research data analysis can rectify poorly defined objectives or outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find ways to deal with everyday challenges like outliers, missing data, data alteration, data mining, and graphical representation.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in a hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them with a medium to collect data by creating appealing surveys.


6 How to Analyze Data in a Primary Research Study

Melody Denny and Lindsay Clark

This chapter introduces students to the idea of working with primary research data grounded in qualitative inquiry, closed- and open-ended methods, and research ethics (Driscoll; Mackey and Gass; Morse; Scott and Garner). [1] We know this can seem intimidating to students, so we will walk them through the process of analyzing primary research, using information from public datasets including the Pew Research Center. Using sample data on teen social media use, we share our processes for analyzing sample data to demonstrate different approaches for analyzing primary research data (Charmaz; Creswell; Merriam and Tisdell; Saldaña). We also include links to additional public data sets, chapter discussion prompts, and sample activities for students to apply these strategies.

At this point in your education, you are familiar with what is known as secondary research or what many students think of as library research. Secondary research makes use of sources most often found in the library or, these days, online (books, journal articles, magazines, and many others). There’s another kind of research that you may or may not be familiar with: primary research. The Purdue OWL defines primary research as “any type of research you collect yourself” and lists examples as interviews, observations, and surveys (“What is Primary Research”).

Primary research is typically divided into two main types—quantitative and qualitative research. These two methods (or a mix of these) are used by many fields of study, so providing a singular definition for these is a bit tricky. Sheard explains that “quantitative research…deals with data that are numerical or that can be converted into numbers. The basic methods used to investigate numerical data are called ‘statistics’” (429). Guest, et al. explain that qualitative research is “information that is difficult to obtain through more quantitatively-oriented methods of data collection” and is used more “to answer the whys and hows of human behavior, opinion, and experience” (1).

This chapter focuses on qualitative methods that explore people’s behaviors, interpretations, and opinions. Rather than being only a reader and reporter of research, you can become a creator of research. Primary research provides opportunities to collect information based on your specific research questions and generate new knowledge from those questions to share with others. Generally, primary research tends to follow these steps:

  • Develop a research question. Secondary research often uses this as a starting point as well. With primary research, however, rather than using library research to answer your research question, you’ll collect data yourself to answer the question you developed. Data, in this case, is the information you collect yourself through methods such as interviews, surveys, and observations.
  • Decide on a research method. According to Scott and Garner, “A research method is a recognized way of collecting or producing [primary data], such as a survey, interview, or content analysis of documents” (8). In other words, the method is how you obtain the data.
  • Collect data. Merriam and Tisdell clarify what it means to collect data: “data collection is about asking, watching, and reviewing” (105-106). Primary research might include asking questions via surveys or interviews, watching or observing interactions or events, and examining documents or other texts.
  • Analyze data. Once data is collected, it must then be analyzed. “Data analysis is the process of making sense out of the data… Basically, data analysis is the process used to answer your research question(s)” (Merriam and Tisdell 202). It’s worth noting that many researchers collect data and analyze at the same time, so while these may seem like different steps in the process, they actually overlap.
  • Report findings. Once the researcher has spent time understanding and interpreting the data, they are then ready to write about their research, often called “findings.” You may also see this referred to as “results.”

While the entire research process is discussed, this chapter focuses on the analysis stage of the process (step 4). Depending on where you are in the research process, you may need to spend more time on step 1, 2, or 3 and review Driscoll’s “Introduction to Primary Research” (Volume 2 of Writing Spaces ).

Primary research can seem daunting, and some students might think that they can’t do primary research, that this type of research is for professionals and scholars, but that’s simply not true. It’s true that primary research data can be difficult to collect and even more difficult to analyze, but the findings are typically very revealing. This chapter and the examples included break down this research process and demonstrate how general curiosity can lead to exciting chances to learn and share information that is relevant and interesting. The goal of this chapter is to provide you with some information about data analysis and walk you through some activities to prepare you for your own data analysis. The next section discusses analyzing data from closed-ended methods and open-ended methods.

Data from Primary Research

As stated above, this chapter doesn’t focus on methods, but before moving on to analysis, it’s important to clarify a few things related to methods, as they are directly connected to analyzing data. As a quick reminder, a research method is how researchers collect their data, such as through surveys, interviews, or textual analysis. No matter which method is used, researchers need to think about the types of questions to ask to answer their overall research question. Generally, there are two types of questions to consider: closed-ended and open-ended. The next section provides examples of the data you might receive from asking closed-ended and open-ended questions and options for analyzing and presenting that data.

Data from Closed-Ended Methods

The data that is generated by closed-ended questions on methods such as surveys and polls is often easier to organize. Because the way respondents could answer those questions is limited to specific answers (Yes/No, numbered scales, multiple choice), the data can be analyzed by each question or by looking at the responses individually or as a whole. Though there are several approaches to analyzing the data that comes from closed-ended questions, this section will introduce you to a few different ways to make sense of this kind of data.

Closed-ended questions are those that have limited answers, like multiple choice or check-all-that-apply questions. These questions mean that respondents can provide only the answers given or they may select an “other” option. An example of a closed-ended question could be “Do you use YouTube? Yes, No, Sometimes.” Closed-ended questions have their perks because they (mostly) keep participants from misinterpreting the question or providing unhelpful responses. They also make data analysis a bit easier.

If you were to ask the “Yes, No, Sometimes” question about YouTube to 20 of your closest friends, you may get responses like Yes = 18, No = 1, and Sometimes = 1. But, if you were to ask a more detailed question like “Which of the following social media platforms do you use?” and provide respondents with a check-all-that-apply option, like “Facebook, YouTube, Twitter, Instagram, Snapchat, Reddit, and Tumblr,” you would get a very different set of data. This data might look like Facebook = 17, YouTube = 18, Twitter = 12, Instagram = 20, Snapchat = 15, Reddit = 8, and Tumblr = 3. The big takeaway here is that how you ask the question determines the type of data you collect.
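As a small illustration, here is a minimal Python sketch of how check-all-that-apply responses like these might be tallied; the response lists are invented.

```python
# A minimal sketch of tallying check-all-that-apply survey responses.
from collections import Counter

responses = [
    ["Instagram", "YouTube", "Snapchat"],   # one respondent's selections
    ["Instagram", "Facebook", "YouTube"],
    ["Instagram", "Twitter"],
    # ...one list of selected platforms per respondent (invented data)
]

tally = Counter(platform for answer in responses for platform in answer)
print(tally.most_common())  # e.g. [('Instagram', 3), ('YouTube', 2), ...]
```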

Analyzing Closed-Ended Data

Now that you have data, it’s time to think about analyzing and presenting that data. Luckily, the Pew Research Center conducted a similar study that can be used as an example. The Pew Research Center is a “nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research” (“About Pew Research Center”). The information provided below comes from their public dataset “Teens, Social Media & Technology 2018” (Anderson and Jiang). This example is used to show how you might analyze this type of data once collected and what that data might look like. “Teens, Social Media & Technology 2018” reported responses to questions related to which online platforms teens use and which they use most often. In figure 1 below, Pew researchers show the final product of their analysis of the data:

Figure 1: Pew Research Center chart of the percentages of teens who say they use each online platform.

Pew analyzed their data and organized the findings by percentages to show what they discovered. They had 743 teens who responded to these questions, so presenting their findings in percentages helps readers better “see” the data overall (rather than saying YouTube = 631 and Instagram = 535). However, results can be represented in different ways. When the Pew researchers were deciding how to present their data, they could have reported the frequency, or the number of people who said they used YouTube, Instagram, and Snapchat.

In the scenario of polling 20 of your closest friends, you, too, would need to decide how to present your data: Facebook = 17, YouTube = 18, Twitter = 12, Instagram = 20, Snapchat = 15, Reddit = 8, and Tumblr = 3. In your case, you might want to present the frequency (number) of responses rather than the percentages of responses like Pew did. You could choose a bar graph like Pew or maybe a simple table to show your data.
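As one possible presentation, here is a minimal sketch that reports the friend-poll counts above as frequencies and percentages and draws a simple bar chart; matplotlib is assumed to be available.

```python
# A minimal sketch of presenting the hypothetical friend-poll counts
# as frequencies, percentages, and a bar chart.
import matplotlib.pyplot as plt

counts = {"Facebook": 17, "YouTube": 18, "Twitter": 12, "Instagram": 20,
          "Snapchat": 15, "Reddit": 8, "Tumblr": 3}
n_respondents = 20

for platform, freq in counts.items():
    print(f"{platform}: {freq} ({100 * freq / n_respondents:.0f}%)")

plt.bar(list(counts), list(counts.values()))
plt.ylabel("Number of respondents (n = 20)")
plt.title("Which social media platforms do you use?")
plt.show()
```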

Looking again at the Pew data, researchers could use this data to generate further insights or questions about user preferences. For example, one could highlight the fact that 85% of respondents reported using YouTube, while only 7% reported using Reddit. Why is that? What conclusions might you be able to make based on these data? Does the data make you wonder if any additional questions might be explored? If you want to learn more about your respondents’ opinions or preferences, you might need to ask open-ended questions.

Data from Open-Ended Methods

Whereas closed-ended questions limit how respondents might answer, open-ended questions do not limit respondents’ answers and allow them to answer more freely. An example of an open-ended question, to build off the question above, could be “Why do you use social media? Explain.” This type of question gives respondents more space to fully explain their responses. Open-ended questions can make the data varied because each respondent may answer differently. These questions, which can provide fruitful responses, can also mean unexpected responses or responses that don’t help to answer the overall research question, which can sometimes make data analysis challenging.

In that same Pew Research Center study, respondents were likely limited in how they could answer by selecting social media platforms from a list. Pew also shares selected data (Appendix A), and based on these data, it can be assumed they also asked open-ended questions, likely about the positive or negative effects of social media platforms. Because their research method included both closed-ended questions about which platforms teens use and open-ended questions that invited their thoughts about social media, Pew researchers were able to learn more about these participants’ thoughts and perceptions. To give us, the readers, a clearer idea of how they justified their presentation of the data, Pew offers 15 sample excerpts from those open-ended questions. They explain that these excerpts are what the researchers believe are representative of the larger data set. We explain below how we might analyze those excerpts.

Analyzing Open-Ended Data

As Driscoll reminds us, ethical considerations impact all stages of the research process, and researchers should act ethically throughout. You already know a little something about research ethics; for example, you know that ethical writers cite sources used in research papers, giving credit to the person who created that information. When conducting primary research, you have a few additional ethical considerations for analyzing data, which are discussed below.

To demonstrate how to analyze data from open-ended methods, we explain how we (Melody and Lindsay) analyzed the 15 excerpts from the Pew data using open coding. Open coding means analyzing the data without any predetermined categories or themes; researchers are just seeing what emerges or seems significant (Charmaz). Creswell suggests four specific steps when coding qualitative data, though he also stresses that these steps are iterative, meaning that researchers may need to revisit a step anywhere throughout the process. We use these four steps to explain our analysis process, including how we ethically coded the data, interpreted what the coding process revealed, and worked together to identify and explain categories we saw in the data.

Step 1: Organizing and Preparing the Data

The first part of the analysis stage is organizing the data before examining it. When organizing data, researchers must be careful to work with primary data ethically because that data often represents actual people’s information and opinions. Therefore, researchers need to carefully organize the data in such a way as to not identify their participants or reveal who they are. This is a key component of The Belmont Report, guidelines published in 1979 meant to guide researchers and help protect participants. Using pseudonyms or assigning numbers or codes (in place of names) to the data is a recommended ethical step to maintain participants’ confidentiality in a study. Anonymizing data, or removing names, has the additional effect of reducing researcher bias, which can occur when researchers are so familiar with their own data and participants that they may begin to think they already know the answers or see connections prior to analysis (Driscoll). By assigning pseudonyms, researchers can also ensure that they take an objective look at each participant’s answers without being persuaded by participant identity.
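As a small illustration of this step, here is a minimal sketch of assigning pseudonymous participant codes before analysis; the names and responses are invented.

```python
# A minimal sketch of replacing participant names with neutral codes
# (P01, P02, ...) before analysis. All data here is invented.
raw_responses = {
    "Jordan Smith": "I mostly use it to keep up with friends.",
    "Alex Rivera": "It distracts me from homework.",
}

pseudonyms = {name: f"P{i + 1:02d}"
              for i, name in enumerate(sorted(raw_responses))}

anonymized = {pseudonyms[name]: text
              for name, text in raw_responses.items()}
print(anonymized)  # {'P01': ..., 'P02': ...}  no real names retained
# Store the name-to-code key separately and securely, if it is kept at all.
```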

The first part of coding is to make notations while reading through the data (Merriam and Tisdell). At this point, researchers are open to many possibilities regarding their data. This is also where researchers begin to construct categories. Offering a simple example to illustrate this decision-making process, Merriam and Tisdell ask us to imagine sorting and categorizing two hundred grocery store items (204). Some items could be sorted into more than one category; for example, ice cream could be categorized as “frozen” or as “dessert.” How you decide to sort that item depends on your research question and what you want to learn.

For this step, we, Melody and Lindsay, each created a separate document that included the 15 excerpts. Melody created a table for the quotes, leaving a column for her coding notes, and Lindsay added spaces between the excerpts for her notes. For our practice analysis, we analyzed the data independently, and then shared what we did to compare, verify, and refine our analysis. This brings a second, objective view to the analysis, reduces the effect of researcher bias, and ensures that your analysis can be verified and supported by the data. To support your analysis, you need to demonstrate how you developed the opinions and conclusions you have about your data. After all, when researchers share their analyses, readers often won’t see all of the raw data, so they need to be able to trust the analysis process.

Step 2: Reading through All the Data

Creswell suggests getting a general sense of the data to understand its overall meaning. As you start reading through your data, you might begin to recognize trends, patterns, or recurring features that give you ideas about how to both analyze and later present the data. When we read through the interview excerpts of these 15 participants’ opinions of social media, we both realized that there were two major types of comments: positive and negative. This might be similar to categorizing the items in the grocery store (mentioned above) into fresh/frozen foods and non-perishable items.

To better organize the data for further analysis, Melody marked each positive comment with a plus sign and each negative comment with a minus sign. Lindsay color-coded the comments (red for negative, indicated by boldface type below; green for positive, indicated by grey type below) and then organized them on the page by type. This approach is in line with Merriam and Tisdell’s explanation of coding: “assigning some sort of shorthand designation to various aspects of your data so that you can easily retrieve specific pieces of the data. The designations can be single words, letters, numbers, phrases, colors, or combinations of these” (199). While we took different approaches, as shown in the two sections below, both allowed us to visually recognize the major sections of the data:

Lindsay’s Coding Round 1, which shows her color coding indicated by boldface type

“[Social media] allows us to communicate freely and see what everyone else is doing. [It] gives us a voice that can reach many people.” (Boy, age 15) “It makes it harder for people to socialize in real life, because they become accustomed to not interacting with people in person.” (Girl, age 15) “[Teens] would rather go scrolling on their phones instead of doing their homework, and it’s so easy to do so. It’s just a huge distraction.” (Boy, age 17) “It enables people to connect with friends easily and be able to make new friends as well.” (Boy, age 15) “I think social media have a positive effect because it lets you talk to family members far away.” (Girl, age 14) “Because teens are killing people all because of the things they see on social media or because of the things that happened on social media.” (Girl, age 14) “We can connect easier with people from different places and we are more likely to ask for help through social media which can save people.” (Girl, age 15)

Melody’s Coding Round 1, showing her use of plus and minus signs to classify the comments as positive or negative, respectively

+ “[Social media] allows us to communicate freely and see what everyone else is doing. [It] gives us a voice that can reach many people.” (Boy, age 15) – “It makes it harder for people to socialize in real life, because they become accustomed to not interacting with people in person.” (Girl, age 15) – “[Teens] would rather go scrolling on their phones instead of doing their homework, and it’s so easy to do so. It’s just a huge distraction.” (Boy, age 17) + “It enables people to connect with friends easily and be able to make new friends as well.” (Boy, age 15) + “I think social media have a positive effect because it lets you talk to family members far away.” (Girl, age 14) – “Because teens are killing people all because of the things they see on social media or because of the things that happened on social media.” (Girl, age 14) + “We can connect easier with people from different places and we are more likely to ask for help through social media which can save people.” (Girl, age 15)

Step 3: Doing Detailed Coding Analysis of the Data

It’s important to mention that Creswell dedicates pages of description to coding data because there are various ways of approaching detailed analysis. To code our data, we added a descriptive word or phrase that “symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute” to a portion of data (Saldaña 3). From the grocery store example above, that could mean looking at the category of frozen foods and dividing them into entrees, side dishes, desserts, appetizers, etc. We both coded for topics, or what the teens were generally talking about in their responses. For example, one excerpt reads “Social media allows us to communicate freely and see what everyone else is doing. It gives us a voice that can reach many people.” To code that piece of data, researchers might assign words like communication, voice, or connection to explain what the data is describing.

In this way, we created the codes from what the data said, describing what we read in those excerpts. Notice in the section below that, even though we coded independently, we described these pieces of data in similar ways using bolded keywords:

Melody’s Coding Round 2, with key words added to summarize the meanings of the different quotes

– “Gives people a bigger audience to speak and teach hate and belittle each other.” (Boy, age 13) bullying – “It provides a fake image of someone’s life. It sometimes makes me feel that their life is perfect when it is not.” (Girl, age 15) fake + “Because a lot of things created or made can spread joy.” (Boy, age 17) reaching people + “I feel that social media can make people my age feel less lonely or alone. It creates a space where you can interact with people.” (Girl, age 15) connection + “[Social media] allows us to communicate freely and see what everyone else is doing. [It] gives us a voice that can reach many people.” (Boy, age 15) reaching people

Lindsay’s Coding Round 2, with key words added in capital letters to summarize the meanings of the quotations

“Gives people a bigger audience to speak and teach hate and belittle each other.” (Boy, age 13) OPPORTUNITIES TO COMMUNICATE NEGATIVELY/MORE EASILY “It provides a fake image of someone’s life. It sometimes makes me feel that their life is perfect when it is not.” (Girl, age 15) FAKE, NOT REALITY “Because a lot of things created or made can spread joy.” (Boy, age 17) SPREAD JOY “I feel that social media can make people my age feel less lonely or alone. It creates a space where you can interact with people.” (Girl, age 15) INTERACTION, LESS LONELY “[Social media] allows us to communicate freely and see what everyone else is doing. [It] gives us a voice that can reach many people.” (Boy, age 15) COMMUNICATE, VOICE

Though there are methods that allow for researchers to use predetermined codes (like from previous studies), “the traditional approach…is to allow the codes to emerge during the data analysis” (Creswell 187).
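One lightweight way to keep track of codes like these is a simple data structure that pairs each excerpt with its codes; here is a minimal sketch, with illustrative codes that are not the authors' actual coding.

```python
# A minimal sketch of open coding as a data structure: each excerpt
# carries researcher-assigned code words, and a Counter surfaces
# recurring codes. Codes here are illustrative only.
from collections import Counter

coded_excerpts = [
    {"excerpt": "[Social media] allows us to communicate freely...",
     "codes": ["communication", "voice"]},
    {"excerpt": "It makes it harder for people to socialize in real life...",
     "codes": ["social skills"]},
    {"excerpt": "It enables people to connect with friends easily...",
     "codes": ["connection"]},
]

code_counts = Counter(code for item in coded_excerpts
                      for code in item["codes"])
print(code_counts.most_common())  # recurring codes hint at themes
```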

Step 4: Using the Codes to Create a Description Using Categories, Themes, Settings, or People

Our individual coding happened in phases, as we developed keywords and descriptions that could then be defined and relabeled into concise coding categories (Saldaña 11). We shared our work from Steps 1-3 to further define categories and determine which themes were most prominent in the data. A few times, we interpreted something differently and had to discuss and come to an agreement about which category was best.

In our process, one excerpt comment was interpreted as negative by one of us and positive by the other. Together we discussed and confirmed which comments were positive or negative and identified themes that seemed to appear more than once, such as positive feelings towards the interactional element of social media use and the negative impact of social media use on social skills. When two coders compare their results, this allows for qualitative validity, which means “the researcher checks for the accuracy of the findings” (Creswell 190). This could also be referred to as intercoder reliability (Lavrakas). For intercoder reliability, researchers sometimes calculate how often they agree in a percentage. Like many other aspects of primary research, there is no consensus on how best to establish or calculate intercoder reliability, but generally speaking, it’s a good idea to have someone else check your work and ensure you are ethically analyzing and reporting your data.
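As a small illustration of calculating agreement as a simple percentage, here is a minimal sketch on invented labels; more formal statistics such as Cohen's kappa also correct for chance agreement.

```python
# A minimal sketch of percent agreement between two coders who each
# labeled the same excerpts as positive (+) or negative (-).
# The labels below are invented for illustration.
melody = ["+", "-", "-", "+", "+", "-", "+"]
lindsay = ["+", "-", "-", "+", "-", "-", "+"]

matches = sum(a == b for a, b in zip(melody, lindsay))
agreement = matches / len(melody)
print(f"Percent agreement: {agreement:.0%}")  # 6 of 7 labels match, ~86%
```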

Interpreting Coded Data

Once we agreed on the common categories and themes in this dataset, we worked together on the final analysis phase of interpreting the data, asking “what does it mean?” Data interpretation includes “trying to give sense to the data by creatively producing insights about it” (Gibson and Brown 6). Though we acknowledge that this sample of only 15 excerpts is small, and it might be difficult to make claims about teens and social media from just this data, we can share a few insights we had as part of this practice activity.

Overall, we could report the frequency counts and percentages that came from our analysis. For example, we counted 8 positive comments and 7 negative comments about social media. Presented differently, those 8 positive comments represent 53% of the responses, so slightly over half. If we focus on just the positive comments, we are able to identify two common themes among those 8 responses: Interaction and Expression. People who felt positively about social media use identified the ability to connect with people and voice their feelings and opinions as the main reasons. When analyzing only the 7 negative responses, we identified themes of Bullying and Social Skills as recurring reasons people are critical of social media use among teens. Identifying these topics and themes in the data allows us to begin thinking about what we can learn and share with others about this data.

How we represent what we have learned from our data can demonstrate our ethical approach to data analysis. In short, we only want to make claims we can support, and we want to make those claims ethically, being careful to not exaggerate or be misleading.

To better understand a few common ethical dilemmas regarding the presentation of data, think about this example: A few years ago, Lindsay taught a class that had only four students. On her course evaluations, those four students rated the class experience as “Excellent.” If she reports that 100% of her students answered “Excellent,” is she being truthful? Yes. Do you see any potential ethical considerations here? If she said that 4/4 gave that rating, does that change how her data might be perceived by others? While Lindsay could show the raw data to support her claims, important contextual information could be missing if she just says 100%. Perhaps others would assume this was a regular class of 20-30 students, which would make that claim seem more meaningful and impressive than it might be.

Another word for this is cherry picking. Cherry picking refers to making conclusions based on thin (or not enough) data or focusing on data that’s not necessarily representative of the larger dataset (Morse). For example, if Lindsay reported the comment that one of her students made about this being the “best class ever,” she would be telling the truth but really only focusing on the reported opinion of 25% of the class (1 out of 4). Ideally, researchers want to make claims about the data based on ideas that are prominent, trending, or repeated. Less prominent pieces of data, like the opinion of that one student, are known as outliers, or data that seem to “be atypical of the rest of the dataset” (Mackey and Gass 257). Focusing on those less-representative portions might misrepresent or overshadow the aspects of the data that are prominent or meaningful, which could create ethical problems for your study. With these ethical considerations in mind, the last step of conducting primary research would be to write about the analysis and interpretation to share your process with others.

This chapter has introduced you to ethically analyzing data within the primary research tradition by focusing on closed-ended and open-ended data. We’ve provided you with examples of how data might be analyzed, interpreted, and presented to help you understand the process of making sense of your data. This is just one way to approach data analysis, but no matter your research method, having a systematic approach is recommended. Data analysis is a key component in the overall primary research process, and we hope that you are now excited and curious to participate in a primary research project.

Works Cited

“About Pew Research Center.” Pew Research Center, 2020, www.pewresearch.org/about/. Accessed 28 Dec. 2020.

Anderson, Monica, and Jingjing Jiang. “Teens, Social Media & Technology 2018.” Pew Research Center, May 2018, www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018/.

The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Office for Human Research Protections, 18 Apr. 1979, www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html.

Charmaz, Kathy. “Grounded Theory.” Approaches to Qualitative Research: A Reader on Theory and Practice , edited by Sharlene Nagy Hesse-Biber and Patricia Leavy, Oxford UP, 2004, pp. 496-521.

Corpus of Contemporary American English (COCA). www.english-corpora.org/coca/. Accessed 11 Apr. 2021.

Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches , 3rd edition, Sage, 2009.

Data.gov. www.data.gov/. Accessed 11 Apr. 2021.

Driscoll, Dana Lynn. “Introduction to Primary Research: Observations, Surveys, and Interviews.” Writing Spaces: Readings on Writing , Volume 2, Parlor Press, 2011, pp. 153-174.

Explore Census Data. United States Census Bureau, data.census.gov/cedsci/. Accessed 11 Apr. 2021.

Gibson, William J., and Andrew Brown. Working with Qualitative Data . London, Sage, 2009.

Google Trends. trends.google.com/trends/explore. Accessed 11 Apr. 2021.

Guest, Greg, et al. Collecting Qualitative Data: A Field Manual for Applied Research . Sage, 2013.

HealthData.gov. healthdata.gov/. Accessed 11 Apr. 2021.

Lavrakas, Paul J. Encyclopedia of Survey Research Methods . Sage, 2008.

Mackey, Allison, and Sue M. Gass. Second Language Research: Methodology and Design . Lawrence Erlbaum Associates, 2005.

Merriam, Sharan B., and Elizabeth J. Tisdell. Qualitative Research: A Guide to Design and Implementation , John Wiley & Sons, Incorporated, 2015. ProQuest Ebook Central, https://ebookcentral.proquest.com/lib/unco/detail.action?docID=2089475 .

Michigan Corpus of Academic Spoken English. quod.lib.umich.edu/cgi/c/corpus/corpus?c=micase;page=simple. Accessed 11 Apr. 2021.

Morse, Janice. M. “‘Cherry Picking’: Writing from Thin Data.” Qualitative Health Research , vol. 20, no. 1, 2009, p. 3.

Pew Research Center. 2021, www.pewresearch.org/. Accessed 11 Apr. 2021.

Saldaña, Johnny. The Coding Manual for Qualitative Researchers , 2nd edition, Sage, 2013.

Scott, Greg, and Roberta Garner. Doing Qualitative Research: Designs, Methods, and Techniques , 1st edition, Pearson, 2012.

Sheard, Judithe. “Quantitative Data Analysis.” Research Methods Information, Systems, and Contexts , edited by Kirsty Williamson and Graeme Johanson, Elsevier, 2018, pp. 429-452.

“Teens and Social Media.” Google Trends, trends.google.com/trends/explore?date=all&q=teens%20and%20social%20media. Accessed 15 Jul. 2020.

“What is Primary Research and How Do I Get Started?” The Writing Lab and OWL at Purdue and Purdue U , 2020. owl.purdue.edu/owl . Accessed 21 Dec. 2020.

Zhao, Alice. “How Text Messages Change from Dating to Marriage.” Huffington Post , 21 Oct. 2014, www.huffpost.com .

Appendix A: Selected Excerpts from the Pew Research Center Dataset

“My mom had to get a ride to the library to get what I have in my hand all the time. She reminds me of that a lot.” (Girl, age 14)

“Gives people a bigger audience to speak and teach hate and belittle each other.” (Boy, age 13)

“It provides a fake image of someone’s life. It sometimes makes me feel that their life is perfect when it is not.” (Girl, age 15)

“Because a lot of things created or made can spread joy.” (Boy, age 17)

“I feel that social media can make people my age feel less lonely or alone. It creates a space where you can interact with people.” (Girl, age 15)

“[Social media] allows us to communicate freely and see what everyone else is doing. [It] gives us a voice that can reach many people.” (Boy, age 15)

“It makes it harder for people to socialize in real life, because they become accustomed to not interacting with people in person.” (Girl, age 15)

“[Teens] would rather go scrolling on their phones instead of doing their homework, and it’s so easy to do so. It’s just a huge distraction.” (Boy, age 17)

“It enables people to connect with friends easily and be able to make new friends as well.” (Boy, age 15)

“I think social media have a positive effect because it lets you talk to family members far away.” (Girl, age 14)

“Because teens are killing people all because of the things they see on social media or because of the things that happened on social media.” (Girl, age 14)

“We can connect easier with people from different places and we are more likely to ask for help through social media which can save people.” (Girl, age 15)

“It has given many kids my age an outlet to express their opinions and emotions, and connect with people who feel the same way.” (Girl, age 15)

“People can say whatever they want with anonymity and I think that has a negative impact.” (Boy, age 15)

“It has a negative impact on social (in-person) interactions.” (Boy, age 17)

Teacher Resources for How to Analyze Data in a Primary Research Study

Overview and Teaching Strategies

This chapter is intended as an overview of analyzing qualitative research data and was written as a follow-up piece to Dana Lynn Driscoll’s “Introduction to Primary Research: Observations, Surveys, and Interviews” in Volume 2 of this collection. This chapter could work well for leading students through their own data analysis of a primary research project or for introducing students to the idea of primary research by using outside data sources, those in the chapter and provided in the activities below, or data you have access to.

In our experience, students usually have limited exposure to primary research methods outside of conducting a small survey for another course, like sociology. We have found that few of our students have been formally introduced to primary research and analysis. Therefore, this chapter strives to briefly introduce students to primary research while focusing on analysis. We’ve presented analysis by categorizing data as open-ended and closed-ended without getting into too many details about qualitative versus quantitative. Our students tend to produce data collection tools with a mix of these question types, so we feel it’s important to cover the analysis of both.

In this chapter, we bring students real examples of primary data and lead them through analysis by showing examples. Any of these exercises and the activities below may be easily supplemented with additional outside data. One way that teachers can bring in outside data is through the use of public datasets.

Public Data Sets

There are many public data sets that teachers can use to acquaint their students with analyzing data. Be aware that some of these datasets are for experienced researchers and provide the data in CSV files or include metadata, all of which is probably too advanced for most of our students. But if you are comfortable converting this data, it could be valuable for a data analysis activity.

  • In the chapter, we pulled from Pew Research, and their website contains many free and downloadable data sets (Pew Research Center).
  • The site Data.gov provides searchable datasets, but you can also explore their data by clicking on “data” and seeing what kinds of reports they offer.
  • The U.S. Census Bureau offers some datasets as well (Explore Census Data): Much of this data is presented in reports, but teachers could pull information from reports and have students analyze the data and compare their results to those in the report, much like we did with the Pew Research data in the chapter.
  • Similarly, HealthData.gov offers research-based reports packed with data for students to analyze.
  • In one of the activities below, we used Google Trends to look at searches over a period of time. There are some interesting data and visuals provided on the homepage to help students get started.
  • If you’re looking for something a bit more academic, the Michigan Corpus of Academic Spoken English is a great database of transcripts from academic interactions and situations.
  • Similarly, the Corpus of Contemporary American English allows users to search for words or word strings to see their frequency and in which genre and when these occur.

Before moving on to student activities, we’d like to offer one additional suggestion for teachers to consider.

Class Google Form

One thing that Melody does in almost all of her research-based writing courses is ask students to complete a Google Form at the beginning of the semester. Sometimes, these forms are about their experiences with research. Other times, they revolve around a class topic (recently, she’s been interested in Generation Z or iGeneration and has asked students questions related to that). Then, when it’s time to start thinking about primary research, she uses that Google Form to help students understand more about the primary research process. Here are some ways that teachers can employ the data gathered from a Google Form given to students.

  • Ask students to look at the questions asked on the survey and deduce the overall research question.
  • Ask students to look at the types of questions asked (open- and closed-ended) and consider why they were constructed that way.
  • Ask students to evaluate the wording of the questions asked.
  • Ask students to examine the results of a few (or more) of the questions on the survey. This can be done in groups with each group looking at 1-3 questions, depending on the size of your Google Form.
  • Ask students to think about how they might present that data in visual form. Yes, Google provides some visuals, but you can give them the raw data and see what they come up with.
  • Ask students to come up with 1-3 major takeaways based on all the data.

This exercise allows students to work with real data and data that’s directly related to them and their classmates. It’s also completely within ethical boundaries because it’s data collected in the classroom, for educational purposes, and it stays within the classroom.
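For classes with some coding experience, the same Google Form data can be analyzed with a short script: Google Forms can export responses as a CSV through the linked spreadsheet. A minimal sketch follows; the filename and question column are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV exported from the form's linked Google Sheet.
responses = pd.read_csv("class_form_responses.csv")

# Tally one closed-ended question and chart it, so students can compare
# their own visual against the charts Google generates automatically.
counts = responses["Hours on social media per day"].value_counts().sort_index()
counts.plot(kind="bar")
plt.ylabel("Number of students")
plt.tight_layout()
plt.show()
```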

Below we offer some guiding questions to help move students through the chapter and the activities as well as some additional activities.

Discussion Questions

  • In the opening of this chapter, we introduced you to primary research, or “any type of research you collect yourself” (“What is Primary Research”). Have you completed primary research before? How did you decide on your research method, based on your research question? If you have not worked on primary research before, brainstorm a potential research question for a topic you want to know more about. Discuss what research method you might use, including closed- or open-ended methods and why.
  • Looking at the chart from the Pew Research dataset, “Teens, Social Media, and Technology 2018,” would you agree that the distributions among online platforms remain similar, or have trends changed?
  • What do you make of the “none of the above” category on the Pew table? Do you think teens are using online platforms that aren’t listed, or do you think those respondents don’t use any online platforms?

Google Trends results for “social media”

  • When analyzing data from open-ended questions, which step seems most challenging to you? Explain.

Activity #1: TurnItIn and Infographics

Infographics can be a great way to help you see and understand data, while also giving you a way to think about presenting your own data. Multiple infographics that provide information about plagiarism are available on TurnItIn as free downloads.

Figure 3, titled “The Plagiarism Spectrum,” shows the “severity” and “frequency” of plagiarism types based on survey responses from nearly 900 high school and college instructors around the world. TurnItIn encourages educators to print this infographic and hang it in their classrooms:

plagiarism spectrum

This infographic provides some great data analysis examples: specific categories with definitions (and visual representation of their categories), frequency counts with bar graphs, and color gradient bars to show higher vs. lower numbers.

  • Write a summary of how this infographic presents data.
  • How do you think they analyzed the data based on this visual?

Activity #2: How Text Messages Change from Dating to Marriage

In Alice Zhao’s Huffington Post piece, she analyzes text messages that she collected during her relationship with her boyfriend, turned fiancé, turned husband to answer the question of how text messages (or communication) change over the course of a relationship. While Zhao offers some insight into her data, she also provides readers with some really cool graphics that you can use to practice your analysis skills.

These first graphics are word clouds. In figure 4, Zhao put her textual data into a program that creates these images based on the most frequently occurring words. Word clouds are another option for analyzing your data. If you have a lot of textual data and want to know what participants said the most, placing your data into a word cloud program is an easy way to “see” the data in a new way. This is usually one of the first steps of analysis, and additional analysis is almost always needed.
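If you want to try this with your own data, the Python wordcloud package can generate a similar image from any text file. A minimal sketch, assuming your textual data is saved in a plain-text file (the filename is hypothetical):

```python
# Requires: pip install wordcloud matplotlib
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical file of raw textual data (transcripts, open-ended answers, etc.).
with open("messages.txt", encoding="utf-8") as f:
    text = f.read()

# Word size reflects frequency; common English stop words are dropped by default.
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```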

Zhao’s Word Cloud Sampling

  • What do you notice about the texts from 2008 to 2014?
  • What do you notice between her texts (me) and his texts (him)?

Zhao also provided this graphic (figure 5), a comparative look at what she saw as the most frequently occurring words from the word clouds. This could be another step in your data analysis procedure: zooming in on a few key aspects and digging a bit deeper.

Zhao’s Bar Graph

  • What do you make of this data? Why might the word “hey” occur more frequently in the dating time frame and the word “ok” occur more frequently in the married time frame?

As part of her research, Zhao also looked at the time of day text messages were sent, shown below in figure 6:

Zhao’s Plot Graph of Time of Day

Here, Zhao looked at messages sent a month after their first date, a month after their engagement, and a month after their wedding.

  • She offers her own interpretation of figure 6 in her piece, but what do you think of it?
  • Also make note of this graphic: it offers another way to look at the data. If your data may be time sensitive, this type of graphic may help you better analyze and understand your data.
  • This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0) and is subject to the Writing Spaces Terms of Use. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/, email [email protected], or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. To view the Writing Spaces Terms of Use, visit http://writingspaces.org/terms-of-use.

How to Analyze Data in a Primary Research Study Copyright © 2021 by Melody Denny and Lindsay Clark is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License , except where otherwise noted.



Eleven quick tips for finding research data

Kathleen Gregory, Siri Jodha Khalsa, William K. Michener, Fotis E. Psomopoulos, Anita de Waard, Mingfang Wu (all authors contributed equally to this work)

Affiliations: Data Archiving and Networked Services, Royal Netherlands Academy of Arts and Sciences, The Hague, Netherlands; National Snow and Ice Data Centre, Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, Colorado, United States of America; College of University Libraries & Learning Sciences, The University of New Mexico, Albuquerque, New Mexico, United States of America; Institute of Applied Biosciences, Centre for Research and Technology Hellas, Thessaloniki, Greece; Research Data Management Solutions, Elsevier, Jericho, Vermont, United States of America; Australia National Data Service, Melbourne, Australia

* E-mail: [email protected]

Published: April 12, 2018


Citation: Gregory K, Khalsa SJ, Michener WK, Psomopoulos FE, de Waard A, Wu M (2018) Eleven quick tips for finding research data. PLoS Comput Biol 14(4): e1006038. https://doi.org/10.1371/journal.pcbi.1006038

Editor: Francis Ouellette, Genome Quebec, CANADA

Copyright: © 2018 Gregory et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: William K. Michener was supported by NSF (#IIA-1301346 and #ACI-1430508). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

This is a PLOS Computational Biology Education paper.

Introduction

Over the past decades, science has experienced rapid growth in the volume of data available for research—from a relative paucity of data in many areas to what has been recently described as a data deluge [1]. Data volumes have increased exponentially across all fields of science and human endeavour, including data from sky, earth, and ocean observatories; social media such as Facebook and Twitter; wearable health-monitoring devices; gene sequences and protein structures; and climate simulations [2]. This brings opportunities to enable more research, especially cross-disciplinary research that could not be done before. However, it also introduces challenges in managing, describing, and making data findable, accessible, interoperable, and reusable by researchers [3].

When this vast amount and variety of data is made available, finding relevant data to meet a research need is increasingly a challenge. In the past, when data were relatively sparse, researchers discovered existing data by searching the literature, attending conferences, and asking colleagues. In today’s data-rich environment, with accompanying advances in computational and networking technologies, researchers increasingly conduct web searches to find research data. The success of such searches varies greatly and depends largely on the expertise of the person looking for data, on the tools used, and partly on luck. This article offers the following 11 quick tips that researchers can follow to more effectively and precisely discover data that meet their specific needs.

  • Tip 1: Think about the data you need and why you need them.
  • Tip 2: Select the most appropriate resource.
  • Tip 3: Construct your query strategically.
  • Tip 4: Make the repository work for you.
  • Tip 5: Refine your search.
  • Tip 6: Assess data relevance and fitness-for-use.
  • Tip 7: Save your search and data-source details.
  • Tip 8: Look for data services, not just data.
  • Tip 9: Monitor the latest data.
  • Tip 10: Treat sensitive data responsibly.
  • Tip 11: Give back (cite and share data).

Tip 1: Think about the data you need and why you need them

Before embarking on a search for data, consider how you will use the desired data in the context of your overall research question. Are you seeking data for comparison or validation, as the basis for a new study, or for another reason? List the characteristics that the data must have in order to fulfil your identified purpose(s), including requirements such as data format, spatial or temporal coverage, availability, and author or research group. In many cases, your initial data requirements and the identified constraints will change as you progress with the search. Pausing to first analyse what you need and why you need it can make your search more focused, save time, and facilitate the actions described in Tips 2–6.

Tip 2: Select the most appropriate resource

Directories of research-data repositories, such as re3data.org ( http://www.re3data.org ) and FAIRsharing ( https://fairsharing.org ), web search engines, and colleagues can be consulted to discover domain-specific portals in your discipline. Subject domain is but one criterion to consider when selecting an appropriate data repository. Various certification processes have also been implemented to help develop trustworthiness in repositories and to make their data-governing policies more transparent. For example, repositories earning the CoreTrustSeal ( https://www.coretrustseal.org/about ) Trustworthy Data Repository certification must meet 16 requirements measuring the accessibility, usability, reliability, and long-term stability of their data. Knowing what standards and criteria a repository applies to data and metadata provides more confidence in understanding and reusing the data from that repository.

Domain-specific portals provide ways to quickly narrow your search, offering interfaces and filters tailored to match the data and needs of specific disciplinary domains. Map interfaces for data collected from specific locations (see the National Water Information System, https://maps.waterdata.usgs.gov/mapper/index.html ) and specific search fields and tools (see the National Centre for Biotechnology Information’s complement of databases, https://www.ncbi.nlm.nih.gov/guide/all/ ) facilitate discovering disciplinary data. Other domain-focused repositories, such as the National Snow and Ice Data Centre (NSIDC, http://nsidc.org/data/search/ ), collect and apply knowledge about user requirements and incorporate domain semantics into their search engines to help data seekers quickly find appropriate data. Data aggregators, including DataONE ( https://www.dataone.org ) for environmental and earth observation data, VertNet ( http://vertnet.org ) and Global Biodiversity Information Facility (GBIF, https://www.gbif.org ) for museum specimen and biodiversity data, or DataMed ( https://datamed.org ) for biomedical datasets, enable searching multiple data repositories or collections through a single search interface. Some portals may not provide data-search functionality but instead provide a catalogue of data resources. A notable example is the AgBioData ( https://www.agbiodata.org/databases ) portal, which lists links to 12 agricultural biological databases dedicated to specific species (e.g., cotton, grain, or hardwood), where you can directly search for data.

The accessibility of data resources is another important consideration. University librarians can provide advice about particular subscription-based resources available at your institution. Research papers in your field can also point to available data repositories. In domains such as astronomy and genomics, for example, citations of datasets within journal articles are commonplace. These references usually include dataset access information that can be used to locate datasets of interest or to point toward data repositories favoured within a discipline.

Tip 3: Construct your query strategically

Describing your desired data effectively is key to communicating with the search system. Your description will determine whether relevant data are retrieved and may inform the order of the hits in the results list. Help pages provide tips on how to construct basic and advanced searches within particular repositories (see, for example, Research Data Australia, https://researchdata.ands.org.au : click Advanced Search → Help). Note that not all repositories operate in the same manner. Some portals, such as DataONE ( https://www.dataone.org ), use semantic technologies to automatically expand the keywords entered in the search box to include synonyms. If a portal does not use automatic expansion, you may need to manually add synonyms to your search query (e.g., in addition to ‘demography’ as a search term, one might also add ‘population density’, ‘population growth’, ‘census’, or ‘anthropology’).

  • Search operators can also sharpen a query in general web search engines; for example, sea level site:.edu restricts results for ‘sea level’ to .edu domains [4].
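If you script your searches, synonym expansion is easy to automate. A minimal sketch in Python; whether Boolean OR is honored depends on each repository’s search syntax, so check its help pages first.

```python
# Build an OR-expanded query for portals that do not expand synonyms
# automatically; the exact Boolean syntax varies by repository.
synonyms = ["demography", "population density", "population growth", "census"]
query = " OR ".join(f'"{term}"' for term in synonyms)
print(query)
# "demography" OR "population density" OR "population growth" OR "census"
```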

Tip 4: Make the repository work for you

Repository developers invest significant time and energy organizing data in ways to make them more discoverable; use their work to your advantage. Familiarize yourself with the controlled vocabularies, subject categories, and search fields used in particular repositories. Searching for and successfully locating data is dependent on the information about the data, termed metadata, that are contained in these fields; this is particularly true for numeric or nontextual data. Browsing subject categories can also help to gauge the appropriateness of a resource, home in on an area of interest, or find related data that have been classified in the same category.

Researchers can also register or create profiles with many data repositories. By registering, you may be able to indicate your general research-data interests, which can be used in subsequent searches, or to receive alerts about datasets that you have previously downloaded (see also Tip 7).

Tip 5: Refine your search

In many cases, your initial search may not retrieve relevant data or all of the data that you need. Based on the retrieved results, you may need to broaden or narrow your approach. Apart from rephrasing your search query and using search operators, as discussed in Tip 3, facets or filters specific to individual repositories can be used to narrow the scope of your results. Refinements such as data format, types of analysis, and data availability allow users to quickly find usable data.

Examining results that look interesting (for example, by clicking on links for ‘more information’) can signal the type of information that you find relevant. These results can then be linked to related ones (e.g., from the same data provider or from different time series), and in subsequent searches, other results algorithmically determined to be related will be brought to the top of the results list.

Tip 6: Assess data relevance and fitness for use

Conduct a preliminary assessment of the retrieved data prior to investing time in subsequent data download, integration, and analytic and visualization efforts. A quick perusal of the metadata (text and/or images) can often enable you to verify that the data satisfy the initial requirements and constraints set forth in Tip 1 (e.g., spatial, temporal, and thematic coverage and data-sharing restrictions). Ideally, the metadata will also contain documentation sufficient to comprehensively assess the relevance and fitness for use of the data, including information about how the data were collected and quality assured, how the data have been previously used, etc. Some data repositories such as the National Science Foundation’s Arctic Data Centre ( https://arcticdata.io ) enable the data seeker to generate and download a metadata quality report that assesses how well the metadata adhere to community best practices for discovery and reusability. Clearly, if none of your criteria for data are met, you may not wish to download and use the associated data.

Attention should also be paid to quality parameters or flags within the data files. Make use of a visualization or statistical analysis tool, if provided, to examine the quality or fitness of the data for your intended use before downloading, especially if the data volume is large and the dataset includes many files.

Tip 7: Save your search and data-source details

Record the data source and data version if you access or download a data product. This may be accomplished by noting the persistent identifier, such as a digital object identifier (DOI) or another Global Unique Identifier (GUID) assigned to the data. Recording the URL from which you obtained the data can be a quick way of returning to it but should not be trusted in the long term for providing access to the data, as URLs can change. It is also a good practice to save a copy of any original data products that you downloaded [5]. You may, for example, need to go back to original data sources and check if there have been any changes or corrections to the data. Registering with the data portal (as described in Tip 4) or registering as a user of a specific data product allows the repository to contact you when necessary. Such information may be needed when you publish a paper that builds on the data you accessed. If errors are found in the original data, registration also allows the data service to contact you to determine whether your research conclusions are affected.
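One lightweight way to follow this tip is to write a small provenance record alongside every download. The sketch below is an illustration only: the DOI, version, and URL are placeholders, not real identifiers.

```python
import json
from datetime import date

# Placeholder values: record the actual DOI/GUID, version, and URL you used.
record = {
    "doi": "10.0000/placeholder",
    "version": "v2.1",
    "url": "https://repository.example.org/dataset/1234",
    "accessed": date.today().isoformat(),
    "notes": "Saved a copy of the original download locally.",
}

with open("data_provenance.json", "w") as f:
    json.dump(record, f, indent=2)
```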

If you have registered with a portal, it may also be possible to save your searches, allowing you to resume your data search at a later time with all previously defined search criteria. Some portals use RESTful search interfaces, which means you can bookmark a results set or dataset and return to it later simply by going to the bookmark.

Tip 8: Look for data services, not just data

The data you seek may be available only via an application programming interface (API) or as linked data [6]. That is, instead of a file residing on a server, the data that best suit your purposes are provided as a service through an API. Examples of such services include the climate change projection data available through the NSW Climate Data Portal ( http://climatechange.environment.nsw.gov.au/Climate-projections-for-NSW/Download-datasets ), in which data are dynamically generated from a simulation model; Google Earth Engine ( https://earthengine.google.com ); or Amazon Web Services (AWS) public datasets ( https://aws.amazon.com/public-datasets/ ). Data made available from these services may not be searchable from general web search engines, but data services may be registered to data catalogues or federations such as Research Data Australia, DataONE, and other resources listed in re3data.org and FAIRsharing. Many repositories that host extremely large volumes of data such as sequencing, environmental observatory, and remotely sensed data provide access to tools, workflows, and computing resources that allow one to access, visualize, process, and download manageable subsets of the data. Often, the processing workflows that one might use to process and download a dataset can also be downloaded, saved, and used again in subsequent searches.
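Accessing data as a service usually means issuing an HTTP request rather than downloading a file. The sketch below is deliberately generic: the endpoint, parameters, and response shape are hypothetical, since every service defines its own API.

```python
import requests

# Hypothetical endpoint and parameters; consult the service's API documentation.
BASE_URL = "https://data.example.org/api/v1/observations"
params = {"variable": "sea_surface_temperature", "year": 2017, "format": "json"}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()  # fail loudly on HTTP errors

subset = response.json()
print(len(subset.get("records", [])), "records retrieved")
```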

Tip 9: Monitor the latest data

One of the most effective ways to identify new data submissions is to monitor the latest literature, as many journals such as Nature, PLOS, Science, and others require that the data underlying a publication also be published in a public repository (e.g., Dataverse, https://dataverse.org ; Dryad, http://datadryad.org ; or Zenodo, https://zenodo.org ) or a discipline-based repository (e.g., EASY from Data Archiving and Networked Services [DANS], https://easy.dans.knaw.nl/ ; GenBank, https://www.ncbi.nlm.nih.gov/genbank/ ; or PubChem, https://pubchem.ncbi.nlm.nih.gov ).

In addition, many domain-based repositories, such as environmental observatories and sequencing databases, are constantly accepting similar types of data submissions. Publishers and some digital repositories also offer alerting services when new publications or data products are submitted. Depending on the resource, it may be possible to set up a recurring search via an API or a Rich Site Summary (RSS) feed to automatically monitor specific resources. For example, the NSIDC offers a subscription service where new data meeting a list of user-generated specifications are automatically pushed to a location specified by the user.
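Monitoring a feed can itself be scripted in a few lines. A minimal sketch using the feedparser package; the feed URL is a placeholder for whichever repository or journal feed you follow.

```python
# Requires: pip install feedparser
import feedparser

# Placeholder URL; substitute a real dataset or journal feed.
feed = feedparser.parse("https://repository.example.org/new-datasets.rss")

# Print the ten most recent items with links.
for entry in feed.entries[:10]:
    print(entry.title, "->", entry.link)
```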

Tip 10: Treat sensitive data responsibly

In most cases, after you have located relevant data, you can download them straight away. However, there are cases, such as for medical and health data, endangered and threatened species, and sacred objects and archaeological finds, where you can only see a data description (the metadata) and are not able to download the data directly due to access restrictions imposed to protect the privacy of individuals represented in the data or to safeguard locations and species from harm or unwanted attention. Guidance with respect to sensitive data is available through the 2003 Fort Lauderdale Agreement ( https://www.genome.gov/pages/research/wellcomereport0303.pdf ), the 2009 Toronto Agreement ( https://www.nature.com/articles/461168a ) [7], the Australian National Data Service ( http://www.ands.org.au/working-with-data/sensitive-data ), and individual institutional and society research ethics committees.

Sensitive data are often discoverable and accessible if identity and location information are anonymized. In other cases, an established data-access agreement specifies the technical requirements as well as the ethical and scientific obligations that accessing and using the data entail. Technical requirements may include aspects such as auditing data access at the local system, defining read-only access rights, and/or ensuring constraints for nonprivileged network access. You can still contact the data owner to explain your intended use and to discuss the conditions and legal restrictions associated with using sensitive data. Such contact may even lead to collaborative research between you and the data owner. Should you be granted access to the data, it is important to use the data ethically and responsibly [8] to ensure that no harm is done to individuals, species, or cultural heritage.

Tip 11: Give back (cite and share data)

There are three ways to give back to the community once you have sought, discovered, and used an existing data product. First, it is essential that you give proper attribution to the data creators (in some cases, the data owners) if you use others’ data for research, education, decision making, or other purposes [9]. Proper attribution benefits both data creators/providers and data seekers/users. Data creators/providers receive credit for their work, and their practice of sharing data is thus further encouraged. Data seekers/users make their own work more transparent and, potentially, reproducible by uniquely identifying and citing data used in their research.

Many data creators and institutions adopt standard licenses from organizations, such as Creative Commons, that govern how their data products may be shared and used. Creative Commons recommends that a proper attribution should include title, author, source, and license [10].

Second, provide feedback to the data creators or the data repository about any issues associated with data accessibility, data quality, or metadata completeness and interpretability. Data creators and repositories benefit from knowing that their data products are understandable and usable by others, as well as knowing how the data were used. Future users of the data will also benefit from your feedback.

Third, virtually all data seekers and data users also generate data. The ultimate ‘give-back’ is to also share your data with the broader community.

This paper highlights 11 quick tips that, if followed, should make it easier for a data seeker to discover data that meet a particular need. Regardless of whether you are acting as a data seeker or a data creator, remember that ‘data discovery and reuse are most easily accomplished when: (1) data are logically and clearly organized; (2) data quality is assured; (3) data are preserved and discoverable via an open data repository; (4) data are accompanied by comprehensive metadata; (5) algorithms and code used to create data products are readily available; (6) data products can be uniquely identified and associated with specific data originator(s); and (7) the data originator(s) or data repository have provided recommendations for citation of the data product(s)’ [11].

Acknowledgments

This work was developed as part of the Research Data Alliance (RDA) ‘WG/IG’ entitled ‘Data Discovery Paradigms’, and we acknowledge the support provided by the RDA community and structures. We would like to thank members of the group for their support, especially Andrea Perego, Mustapha Mokrane, Susanna-Assunta Sansone, Peter McQuilton, and Michel Dumontier who read this paper and provided constructive suggestions.

  • 1. Gray J. Jim Gray on eScience: A transformed scientific method. In: Hey T, Tansley S, Tolle K, editors. The Fourth Paradigm: Data-Intensive Scientific Discovery. Richmond, WA: Microsoft Research; 2009. p.xvii–xxxi. Available from: https://www.microsoft.com/en-us/research/publication/fourth-paradigm-data-intensive-scientific-discovery/ .
  • 2. Fox G, Hey T, Trefethen A. Where does all the data come from? In: Kleese van Dam K, editor. Data-Intensive Science. Chapman and Hall/CRC; Boca Raton: Taylor and Francis, May 2013. p. 15–51.
  • 4. Warner, R. Google Advanced Search: A Comprehensive List of Google Search Operators [Internet]. 2015. Available from: https://bynd.com/news-ideas/google-advanced-search-comprehensive-list-google-search-operators/ . [cited 2017 Oct 26]
  • 6. Heath T, Bizer C. Linked Data: Evolving the Web into a global data space. In: Hendler J, van Harmelen F, editors. Synthesis Lectures on the Semantic Web: Theory and Technology. Morgan & Claypool; 2011. p. 1–136.
  • 8. Clark K, et al. Guidelines for the Ethical Use of Digital Data in Human Research. www.carltonconnect.com.au: The University of Melbourne; 2015. Available from: https://www.carltonconnect.com.au/wp-content/uploads/2015/06/Ethical-Use-of-Digital-Data.pdf . [cited 2018 Feb. 1].
  • 9. Martone M, editor. Data Citation Synthesis Group: Joint Declaration of Data Citation Principles. FORCE11. San Diego, CA; 2014. [cited 2018 Feb 1]. Available from: https://www.force11.org/group/joint-declaration-data-citation-principles-final .
  • 10. Creative Commons. Best practices for attribution [Internet]. 2014 [cited 2017 Sep 10]. Available from: https://wiki.creativecommons.org/wiki/Best_practices_for_attribution .
  • 11. Michener W. K. Data discovery. In: Recknagel F, Michener WK, editors. Ecological informatics: Data management and knowledge discovery. Springer International Publishing, Cham, Switzerland; 2017.


Data Collection Methods | Step-by-Step Guide & Examples

Published on 4 May 2022 by Pritha Bhandari.

Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental, or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem .

While methods and aims may differ between fields, the overall process of data collection remains largely the same. Before you begin collecting data, you need to consider:

  • The aim of the research
  • The type of data that you will collect
  • The methods and procedures you will use to collect, store, and process the data

To collect high-quality data that is relevant to your purposes, follow these four steps.

Table of contents

  • Step 1: Define the aim of your research
  • Step 2: Choose your data collection method
  • Step 3: Plan your data collection procedures
  • Step 4: Collect the data
  • Frequently asked questions about data collection

Step 1: Define the aim of your research

Before you start the process of data collection, you need to identify exactly what you want to achieve. You can start by writing a problem statement: what is the practical or scientific issue that you want to address, and why does it matter?

Next, formulate one or more research questions that precisely define what you want to find out. Depending on your research questions, you might need to collect quantitative or qualitative data:

  • Quantitative data is expressed in numbers and graphs and is analysed through statistical methods.
  • Qualitative data is expressed in words and analysed through interpretations and categorisations.

If your aim is to test a hypothesis, measure something precisely, or gain large-scale statistical insights, collect quantitative data. If your aim is to explore ideas, understand experiences, or gain detailed insights into a specific context, collect qualitative data.

If you have several aims, you can use a mixed methods approach that collects both types of data.

  • Your first aim is to assess whether there are significant differences in perceptions of managers across different departments and office locations.
  • Your second aim is to gather meaningful feedback from employees to explore new ideas for how managers can improve.


Step 2: Choose your data collection method

Based on the data you want to collect, decide which method is best suited for your research.

  • Experimental research is primarily a quantitative method.
  • Interviews, focus groups, and ethnographies are qualitative methods.
  • Surveys, observations, archival research, and secondary data collection can be quantitative or qualitative methods.

Carefully consider what method you will use to gather data that helps you directly answer your research questions.

Step 3: Plan your data collection procedures

When you know which method(s) you are using, you need to plan exactly how you will implement them. What procedures will you follow to make accurate observations or measurements of the variables you are interested in?

For instance, if you’re conducting surveys or interviews, decide what form the questions will take; if you’re conducting an experiment, make decisions about your experimental design.

Operationalisation

Sometimes your variables can be measured directly: for example, you can collect data on the average age of employees simply by asking for dates of birth. However, often you’ll be interested in collecting data on more abstract concepts or variables that can’t be directly observed.

Operationalisation means turning abstract conceptual ideas into measurable observations. When planning how you will collect data, you need to translate the conceptual definition of what you want to study into the operational definition of what you will actually measure.

  • You ask managers to rate their own leadership skills on 5-point scales assessing the ability to delegate, decisiveness, and dependability.
  • You ask their direct employees to provide anonymous feedback on the managers regarding the same topics.

You may need to develop a sampling plan to obtain data systematically. This involves defining a population, the group you want to draw conclusions about, and a sample, the group you will actually collect data from.

Your sampling method will determine how you recruit participants or obtain measurements for your study. To decide on a sampling method you will need to consider factors like the required sample size, accessibility of the sample, and time frame of the data collection.
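Once you have a sampling frame, drawing a simple random sample takes only a few lines of code. A minimal sketch, using an invented list of employee IDs as the frame:

```python
import random

# Invented sampling frame: 500 employee IDs.
population = [f"EMP-{i:04d}" for i in range(1, 501)]

random.seed(42)  # fix the seed so the draw is reproducible
sample = random.sample(population, k=50)  # simple random sample of 50
print(sample[:5])
```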

Standardising procedures

If multiple researchers are involved, write a detailed manual to standardise data collection procedures in your study.

This means laying out specific step-by-step instructions so that everyone in your research team collects data in a consistent way – for example, by conducting experiments under the same conditions and using objective criteria to record and categorise observations.

This helps ensure the reliability of your data, and you can also use it to replicate the study in the future.

Creating a data management plan

Before beginning data collection, you should also decide how you will organise and store your data.

  • If you are collecting data from people, you will likely need to anonymise and safeguard the data to prevent leaks of sensitive information (e.g. names or identity numbers).
  • If you are collecting data via interviews or pencil-and-paper formats, you will need to perform transcriptions or data entry in systematic ways to minimise distortion.
  • You can prevent loss of data by having an organisation system that is routinely backed up.

Step 4: Collect the data

Finally, you can implement your chosen methods to measure or observe the variables you are interested in.

The closed-ended questions ask participants to rate their manager’s leadership skills on scales from 1 to 5. The data produced is numerical and can be statistically analysed for averages and patterns.
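To make the numerical analysis concrete, here is a minimal sketch of how such 1–5 ratings could be summarised; the data and column names are invented for illustration.

```python
import pandas as pd

# Invented example data: one row per employee rating of their manager.
ratings = pd.DataFrame({
    "department": ["Sales", "Sales", "IT", "IT", "HR", "HR"],
    "leadership_score": [4, 3, 5, 4, 2, 3],  # 1-5 scale
})

# Overall distribution, then averages by department.
print(ratings["leadership_score"].describe())
print(ratings.groupby("department")["leadership_score"].mean())
```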

To ensure that high-quality data is recorded in a systematic way, here are some best practices:

  • Record all relevant information as and when you obtain data. For example, note down whether or how lab equipment is recalibrated during an experimental study.
  • Double-check manual data entry for errors.
  • If you collect quantitative data, you can assess the reliability and validity to get an indication of your data quality.

Frequently asked questions about data collection

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

Cite this Scribbr article


Bhandari, P. (2022, May 04). Data Collection Methods | Step-by-Step Guide & Examples. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/research-methods/data-collection-guide/


Research Paper Analysis: How to Analyze a Research Article + Example

Why might you need to analyze research? First of all, when you analyze a research article, you begin to understand your assigned reading better. It is also the first step toward learning how to write your own research articles and literature reviews. However, if you have never written a research paper before, it may be difficult for you to analyze one. After all, you may not know what criteria to use to evaluate it. But don’t panic! We will help you figure it out!

In this article, our team has explained how to analyze research papers quickly and effectively. At the end, you will also find a research analysis paper example to see how everything works in practice.

  • 🔤 Research Analysis Definition
  • 📊 How to Analyze a Research Article
  • ✍️ How to Write a Research Analysis
  • 📝 Analysis Example
  • 🔎 More Examples
  • 🔗 References

🔤 Research Paper Analysis: What Is It?

A research paper analysis is an academic writing assignment in which you analyze a scholarly article’s methodology, data, and findings. In essence, “to analyze” means to break something down into components and assess each of them individually and in relation to each other. The goal of an analysis is to gain a deeper understanding of a subject. So, when you analyze a research article, you dissect it into elements like data sources, research methods, and results and evaluate how they contribute to the study’s strengths and weaknesses.

📋 Research Analysis Format

A research analysis paper has a pretty straightforward structure. Check it out below!

Research articles usually include the following sections: introduction, methods, results, and discussion. In the following paragraphs, we will discuss how to analyze a scientific article with a focus on each of its parts.

This image shows the main sections of a research article.

How to Analyze a Research Paper: Purpose

The purpose of the study is usually outlined in the introductory section of the article. Analyzing the research paper’s objectives is critical to establishing the context for the rest of your analysis.

When analyzing the research aim, you should evaluate whether it was justified for the researchers to conduct the study. In other words, you should assess whether their research question was significant and whether it arose from existing literature on the topic.

Here are some questions that may help you analyze a research paper’s purpose:

  • Why was the research carried out?
  • What gaps does it try to fill, or what controversies does it seek to settle?
  • How does the study contribute to its field?
  • Do you agree with the author’s justification for approaching this particular question in this way?

How to Analyze a Paper: Methods

When analyzing the methodology section, you should indicate the study’s research design (qualitative, quantitative, or mixed) and the methods used (for example, experiment, case study, correlational research, survey, etc.). After that, you should assess whether these methods suit the research purpose. In other words, do the chosen methods allow scholars to answer their research questions within the scope of their study?

For example, if scholars wanted to study US students’ average satisfaction with their higher education experience, they could conduct a quantitative survey. However, if they wanted to gain an in-depth understanding of the factors influencing US students’ satisfaction with higher education, qualitative interviews would be more appropriate.

When analyzing methods, you should also look at the research sample. Did the scholars use randomization to select study participants? Was the sample big enough for the results to be generalizable to a larger population?

You can also answer the following questions in your methodology analysis:

  • Is the methodology valid? In other words, did the researchers use methods that accurately measure the variables of interest?
  • Is the research methodology reliable? A research method is reliable if it can produce stable and consistent results under the same circumstances.
  • Is the study biased in any way?
  • What are the limitations of the chosen methodology?

How to Analyze Research Articles’ Results

You should start the analysis of the article results by carefully reading the tables, figures, and text. Check whether the findings correspond to the initial research purpose. See whether the results answered the author’s research questions or supported the hypotheses stated in the introduction.

To analyze the results section effectively, answer the following questions:

  • What are the major findings of the study?
  • Did the author present the results clearly and unambiguously?
  • Are the findings statistically significant? (See the sketch after this list.)
  • Does the author provide sufficient information on the validity and reliability of the results?
  • Have you noticed any trends or patterns in the data that the author did not mention?
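As a concrete illustration of the significance question above, here is a minimal sketch of an independent-samples t-test with invented numbers; articles typically report such tests in their results sections.

```python
from scipy import stats

# Invented scores for two groups (not from any real study).
group_a = [88, 85, 90, 86, 91, 84]
group_b = [80, 78, 83, 79, 81, 77]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# By convention, p < 0.05 is commonly treated as statistically significant.
```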

How to Analyze Research: Discussion

Finally, you should analyze the authors’ interpretation of results and its connection with research objectives. Examine what conclusions the authors drew from their study and whether these conclusions answer the original question.

You should also pay attention to how the authors used findings to support their conclusions. For example, you can reflect on why their findings support that particular inference and not another one. Moreover, more than one conclusion can sometimes be made based on the same set of results. If that’s the case with your article, you should analyze whether the authors addressed other interpretations of their findings.

Here are some useful questions you can use to analyze the discussion section:

  • What findings did the authors use to support their conclusions?
  • How do the researchers’ conclusions compare to other studies’ findings?
  • How does this study contribute to its field?
  • What future research directions do the authors suggest?
  • What additional insights can you share regarding this article? For example, do you agree with the results? What other questions could the researchers have answered?

This image shows how to analyze a research article.

Now, you know how to analyze an article that presents research findings. However, it’s just a part of the work you have to do to complete your paper. So, it’s time to learn how to write a research analysis! Check out the steps below!

1. Introduce the Article

As with most academic assignments, you should start your research article analysis with an introduction. Here’s what it should include:

  • The article’s publication details. Specify the title of the scholarly work you are analyzing, its authors, and publication date. Remember to enclose the article’s title in quotation marks and write it in title case.
  • The article’s main point. State what the paper is about. What did the authors study, and what was their major finding?
  • Your thesis statement. End your introduction with a strong claim summarizing your evaluation of the article. Consider briefly outlining the research paper’s strengths, weaknesses, and significance in your thesis.

Keep your introduction brief. Save the word count for the “meat” of your paper — that is, for the analysis.

2. Summarize the Article

Now, you should write a brief and focused summary of the scientific article. It should be shorter than your analysis section and contain all the relevant details about the research paper.

Here’s what you should include in your summary:

  • The research purpose. Briefly explain why the research was done. Identify the authors’ purpose and research questions or hypotheses.
  • Methods and results. Summarize what happened in the study. State only facts, without the authors’ interpretations of them. Avoid using too many numbers and details; instead, include only the information that will help readers understand what happened.
  • The authors’ conclusions. Outline what conclusions the researchers made from their study. In other words, describe how the authors explained the meaning of their findings.

If you need help summarizing an article, you can use our free summary generator.

3. Write Your Research Analysis

The analysis of the study is the most crucial part of this assignment type. Its key goal is to evaluate the article critically and demonstrate your understanding of it.

We’ve already covered how to analyze a research article in the section above. Here’s a quick recap:

  • Analyze whether the study’s purpose is significant and relevant.
  • Examine whether the chosen methodology allows for answering the research questions.
  • Evaluate how the authors presented the results.
  • Assess whether the authors’ conclusions are grounded in findings and answer the original research questions.

Although you should analyze the article critically, it doesn’t mean you only should criticize it. If the authors did a good job designing and conducting their study, be sure to explain why you think their work is well done. Also, it is a great idea to provide examples from the article to support your analysis.

4. Conclude Your Analysis of Research Paper

A conclusion is your chance to reflect on the study’s relevance and importance. Explain how the analyzed paper can contribute to the existing knowledge or lead to future research. Also, you need to summarize your thoughts on the article as a whole. Avoid making value judgments — saying that the paper is “good” or “bad.” Instead, use more descriptive words and phrases such as “This paper effectively showed…”

Need help writing a compelling conclusion? Try our free essay conclusion generator!

5. Revise and Proofread

Last but not least, you should carefully proofread your paper to find any punctuation, grammar, and spelling mistakes. Start by reading your work out loud to ensure that your sentences fit together and sound cohesive. Also, it can be helpful to ask your professor or peer to read your work and highlight possible weaknesses or typos.

This image shows how to write a research analysis.

📝 Research Paper Analysis Example

We have prepared an analysis of a research paper example to show how everything works in practice.

No Homework Policy: Research Article Analysis Example

This paper aims to analyze the research article entitled “No Assignment: A Boon or a Bane?” by Cordova, Pagtulon-an, and Tan (2019). This study examined the effects of having and not having assignments on weekends on high school students’ performance and transmuted mean scores. This article effectively shows the value of homework for students, but larger studies are needed to support its findings.

Cordova et al. (2019) conducted a descriptive quantitative study using a sample of 115 Grade 11 students of the Central Mindanao University Laboratory High School in the Philippines. The sample was divided into two groups: the first received homework on weekends, while the second didn’t. The researchers compared students’ performance records made by teachers and found that students who received assignments performed better than their counterparts without homework.

The purpose of this study is highly relevant and justified, as this research was conducted in response to the debates about the “No Homework Policy” in the Philippines. Although the descriptive research design used by the authors allows them to answer the research question, the study could benefit from an experimental design, which would give the authors firmer control over variables. Additionally, the study’s sample size was not large enough for the findings to be generalized to a larger population.

The study results are presented clearly, logically, and comprehensively and correspond to the research objectives. The researchers found that students’ mean grades decreased in the group without homework and increased in the group with homework. Based on these findings, the authors concluded that homework positively affected students’ performance. This conclusion is logical and grounded in data.

This research effectively showed the importance of homework for students’ performance. Yet, since the sample size was relatively small, larger studies are needed to ensure the authors’ conclusions can be generalized to a larger population.

🔎 More Research Analysis Paper Examples

Do you want another research analysis example? Check out the best analysis research paper samples below:

  • Gracious Leadership Principles for Nurses: Article Analysis
  • Effective Mental Health Interventions: Analysis of an Article
  • Nursing Turnover: Article Analysis
  • Nursing Practice Issue: Qualitative Research Article Analysis
  • Quantitative Article Critique in Nursing
  • LIVE Program: Quantitative Article Critique
  • Evidence-Based Practice Beliefs and Implementation: Article Critique
  • “Differential Effectiveness of Placebo Treatments”: Research Paper Analysis
  • “Family-Based Childhood Obesity Prevention Interventions”: Analysis Research Paper Example
  • “Childhood Obesity Risk in Overweight Mothers”: Article Analysis
  • “Fostering Early Breast Cancer Detection” Article Analysis
  • Lesson Planning for Diversity: Analysis of an Article
  • Journal Article Review: Correlates of Physical Violence at School
  • Space and the Atom: Article Analysis
  • “Democracy and Collective Identity in the EU and the USA”: Article Analysis
  • China’s Hegemonic Prospects: Article Review
  • Article Analysis: Fear of Missing Out
  • Article Analysis: “Perceptions of ADHD Among Diagnosed Children and Their Parents”
  • Codependence, Narcissism, and Childhood Trauma: Analysis of the Article
  • Relationship Between Work Intensity, Workaholism, Burnout, and MSC: Article Review

We hope that our article on research paper analysis has been helpful. If you liked it, please share this article with your friends!

  • Analyzing Research Articles: A Guide for Readers and Writers | Sam Mathews
  • Summary and Analysis of Scientific Research Articles | San José State University Writing Center
  • Analyzing Scholarly Articles | Texas A&M University
  • Article Analysis Assignment | University of Wisconsin-Madison
  • How to Summarize a Research Article | University of Connecticut
  • Critique/Review of Research Articles | University of Calgary
  • Art of Reading a Journal Article: Methodically and Effectively | PubMed Central
  • Write a Critical Review of a Scientific Journal Article | McLaughlin Library
  • How to Read and Understand a Scientific Paper: A Guide for Non-scientists | LSE
  • How to Analyze Journal Articles | Classroom


PLoS Comput Biol. 2020 Jul; 16(7)

Ten simple rules for reading a scientific paper

Maureen A. Carey

Division of Infectious Diseases and International Health, Department of Medicine, University of Virginia School of Medicine, Charlottesville, Virginia, United States of America

Kevin L. Steiner

William A. Petri, Jr.

Introduction

“There is no problem that a library card can't solve,” according to author Eleanor Brown [1]. This advice is sound, probably for both life and science, but even the best tool (like the library) is most effective when accompanied by instructions and a basic understanding of how and when to use it.

For many budding scientists, the first day in a new lab setting often involves a stack of papers, an email full of links to pertinent articles, or some promise of a richer understanding so long as one reads enough of the scientific literature. However, the purpose and approach to reading a scientific article is unlike that of reading a news story, novel, or even a textbook and can initially seem unapproachable. Having good habits for reading scientific literature is key to setting oneself up for success, identifying new research questions, and filling in the gaps in one’s current understanding; developing these good habits is the first crucial step.

Advice typically centers around two main tips: read actively and read often. However, active reading, or reading with an intent to understand, is both a learned skill and a level of effort. Although there is no one best way to do this, we present 10 simple rules, relevant to novices and seasoned scientists alike, to teach our strategy for active reading based on our experience as readers and as mentors of undergraduate and graduate researchers, medical students, fellows, and early career faculty. Rules 1–5 are big picture recommendations. Rules 6–8 relate to philosophy of reading. Rules 9–10 guide the “now what?” questions one should ask after reading and how to integrate what was learned into one’s own science.

Rule 1: Pick your reading goal

What you want to get out of an article should influence your approach to reading it. Table 1 includes a handful of example intentions and how you might prioritize different parts of the same article differently based on your goals as a reader.

[Table 1 not reproduced. Footnotes: (1) Yay! Welcome! (2) A journal club is when a group of scientists get together to discuss a paper. Usually one person leads the discussion and presents all of the data; the group then discusses their own interpretations and the authors' interpretation.]

Rule 2: Understand the author’s goal

In written communication, the reader and the writer are equally important. Both influence the final outcome: in this case, your scientific understanding! After identifying your goal, think about the author’s goal for sharing this project. This will help you interpret the data and understand the author’s interpretation of the data. However, this requires some understanding of who the author(s) are (e.g., what are their scientific interests?), the scientific field in which they work (e.g., what techniques are available in this field?), and how this paper fits into the author’s research (e.g., is this work building on an author’s longstanding project or controversial idea?). This information may be hard to glean without experience and a history of reading. But don’t let this be a discouragement to starting the process; it is by the act of reading that this experience is gained!

A good step toward understanding the goal of the author(s) is to ask yourself: What kind of article is this? Journals publish different types of articles, including methods, review, commentary, resources, and research articles as well as other types that are specific to a particular journal or groups of journals. These article types have different formatting requirements and expectations for content. Knowing the article type will help guide your evaluation of the information presented. Is the article a methods paper, presenting a new technique? Is the article a review article, intended to summarize a field or problem? Is it a commentary, intended to take a stand on a controversy or give a big picture perspective on a problem? Is it a resource article, presenting a new tool or data set for others to use? Is it a research article, written to present new data and the authors’ interpretation of those data? The type of paper, and its intended purpose, will get you on your way to understanding the author’s goal.

Rule 3: Ask six questions

When reading, ask yourself: (1) What do the author(s) want to know (motivation)? (2) What did they do (approach/methods)? (3) Why was it done that way (context within the field)? (4) What do the results show (figures and data tables)? (5) How did the author(s) interpret the results (interpretation/discussion)? (6) What should be done next? (Regarding this last question, the author(s) may provide some suggestions in the discussion, but the key is to ask yourself what you think should come next.)

Each of these questions can and should be asked about the complete work as well as each table, figure, or experiment within the paper. Early on, it can take a long time to read one article front to back, and this can be intimidating. Break down your understanding of each section of the work with these questions to make the effort more manageable.
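
If it helps to keep the answers organized, the six questions translate naturally into a simple note-taking scaffold. The sketch below is our illustration in Python, not something from the article; all names are made up, and you could keep one answer sheet per paper, per figure, or per experiment.

SIX_QUESTIONS = [
    "What do the author(s) want to know (motivation)?",
    "What did they do (approach/methods)?",
    "Why was it done that way (context within the field)?",
    "What do the results show (figures and data tables)?",
    "How did the author(s) interpret the results (interpretation/discussion)?",
    "What should be done next?",
]

def new_note(item):
    # Return an empty answer sheet for a paper, figure, table, or experiment.
    return {"item": item, "answers": {q: "" for q in SIX_QUESTIONS}}

# Example: one sheet for the whole paper, plus one per figure.
paper_note = new_note("Smith et al. 2020")
fig2_note = new_note("Fig 2")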

Rule 4: Unpack each figure and table

Scientists write original research papers primarily to present new data that may change or reinforce the collective knowledge of a field. Therefore, the most important parts of this type of scientific paper are the data. Some people like to scrutinize the figures and tables (including legends) before reading any of the “main text,” because all of the important information should be obtainable through the data. Others prefer to read through the results section while sequentially examining the figures and tables as they are addressed in the text. There is no correct or incorrect approach: Try both to see what works best for you. The key is making sure that one understands the presented data and how they were obtained.

For each figure, work to understand the x- and y-axes, the color scheme, the statistical approach (if one was used), and why the particular plotting approach was used. For each table, identify what experimental groups and variables are presented. Identify what is shown and how the data were collected. This is typically summarized in the legend or caption but often requires digging deeper into the methods: Do not be afraid to refer back to the methods section frequently to ensure a full understanding of how the presented data were obtained. Again, ask the questions in Rule 3 for each figure or panel and conclude by articulating the “take home” message.

Rule 5: Understand the formatting intentions

Just like the overall intent of the article (discussed in Rule 2), the intent of each section within a research article can guide your interpretation. Some sections are intended to be written as objective descriptions of the data (i.e., the Results section), whereas other sections are intended to present the author’s interpretation of the data. Remember, though, that even “objective” sections are written by and, therefore, influenced by the authors’ interpretations. Check out Table 2 to understand the intent of each section of a research article. When reading a specific paper, you can also refer to the journal’s website to understand the formatting intentions. The “For Authors” section of a website will have some nitty-gritty information that is less relevant for the reader (like word counts) but will also summarize what the journal editors expect in each section. This will help to familiarize you with the goal of each article section.

Research articles typically contain each of these sections, although sometimes the “results” and “discussion” sections (or “discussion” and “conclusion” sections) are merged into one section. Additional sections may be included, based on request of the journal or the author(s). Keep in mind: If it was included, someone thought it was important for you to read.

Rule 6: Be critical

Published papers are not truths etched in stone. Published papers in high impact journals are not truths etched in stone. Published papers by bigwigs in the field are not truths etched in stone. Published papers that seem to agree with your own hypothesis or data are not etched in stone. Published papers that seem to refute your hypothesis or data are not etched in stone.

Science is a never-ending work in progress, and it is essential that the reader pushes back against the author’s interpretation to test the strength of their conclusions. Everyone has their own perspective and may interpret the same data in different ways. Mistakes are sometimes published, but more often these apparent errors are due to other factors such as limitations of a methodology or other limits to generalizability (selection bias, unaddressed or unappreciated confounders). When reading a paper, it is important to consider whether these factors are pertinent.

Critical thinking is a tough skill to learn but ultimately boils down to evaluating data while minimizing biases. Ask yourself: Are there other, equally likely, explanations for what is observed? In addition to paying close attention to potential biases of the study or author(s), a reader should also be alert to one’s own preceding perspective (and biases). Take time to ask oneself: Do I find this paper compelling because it affirms something I already think (or wish) is true? Or am I discounting their findings because it differs from what I expect or from my own work?

The phenomenon of a self-fulfilling prophecy, or expectancy, is well studied in the psychology literature [ 2 ] and is why many studies are conducted in a “blinded” manner [ 3 ]. It refers to the idea that a person may assume something to be true and their resultant behavior aligns to make it true. In other words, as humans and scientists, we often find exactly what we are looking for. A scientist may only test their hypotheses and fail to evaluate alternative hypotheses; perhaps, a scientist may not be aware of alternative, less biased ways to test her or his hypothesis that are typically used in different fields. Individuals with different life, academic, and work experiences may think of several alternative hypotheses, all equally supported by the data.

Rule 7: Be kind

The author(s) are human too. So, whenever possible, give them the benefit of the doubt. An author may write a phrase differently than you would, forcing you to reread the sentence to understand it. Someone in your field may neglect to cite your paper because of a reference count limit. A figure panel may be misreferenced as Supplemental Fig 3E when it is obviously Supplemental Fig 4E. While these things may be frustrating, none are an indication that the quality of work is poor. Try to avoid letting these minor things influence your evaluation and interpretation of the work.

Similarly, if you intend to share your critique with others, be extra kind. An author (especially the lead author) may invest years of their time into a single paper. Hearing a kindly phrased critique can be difficult but constructive. Hearing a rude, brusque, or mean-spirited critique can be heartbreaking, especially for young scientists or those seeking to establish their place within a field and who may worry that they do not belong.

Rule 8: Be ready to go the extra mile

To truly understand a scientific work, you often will need to look up a term, dig into the supplemental materials, or read one or more of the cited references. This process takes time. Some advisors recommend reading an article three times: The first time, simply read without the pressure of understanding or critiquing the work. For the second time, aim to understand the paper. For the third read through, take notes.

Some people engage with a paper by printing it out and writing all over it. The reader might write question marks in the margins to mark parts (s)he wants to return to, circle unfamiliar terms (and then actually look them up!), highlight or underline important statements, and draw arrows linking figures and the corresponding interpretation in the discussion. Not everyone needs a paper copy to engage in the reading process but, whatever your version of “printing it out” is, do it.

Rule 9: Talk about it

Talking about an article in a journal club or more informal environment forces active reading and participation with the material. Studies show that teaching is one of the best ways to learn and that teachers learn the material even better as the teaching task becomes more complex [ 4 – 5 ]; anecdotally, such observations inspired the phrase “to teach is to learn twice.”

Beyond formal settings such as journal clubs, lab meetings, and academic classes, discuss papers with your peers, mentors, and colleagues in person or electronically. Twitter and other social media platforms have become excellent resources for discussing papers with other scientists, the public or your nonscientist friends, or even the paper’s author(s). Describing a paper can be done at multiple levels and your description can contain all of the scientific details, only the big picture summary, or perhaps the implications for the average person in your community. All of these descriptions will solidify your understanding, while highlighting gaps in your knowledge and informing those around you.

Rule 10: Build on it

One approach we like to use for communicating how we build on the scientific literature is by starting research presentations with an image depicting a wall of Lego bricks. Each brick is labeled with the reference for a paper, and the wall highlights the body of literature on which the work is built. We describe the work and conclusions of each paper represented by a labeled brick and discuss each brick and the wall as a whole. The top brick on the wall is left blank: We aspire to build on this work and label this brick with our own work. We then delve into our own research, discoveries, and the conclusions it inspires. We finish our presentations with the image of the Legos and summarize our presentation on that empty brick.

Whether you are reading an article to understand a new topic area or to move a research project forward, effective learning requires that you integrate knowledge from multiple sources (“click” those Lego bricks together) and build upwards. Leveraging published work will enable you to build a stronger and taller structure. The first row of bricks is more stable once a second row is assembled on top of it and so on and so forth. Moreover, the Lego construction will become taller and larger if you build upon the work of others, rather than using only your own bricks.

Build on the article you read by thinking about how it connects to ideas described in other papers and within your own work, implementing a technique in your own research, or attempting to challenge or support the hypothesis of the author(s) with a more extensive literature review. Integrate the techniques and scientific conclusions learned from an article into your own research or perspective in the classroom or research lab. You may find that this process strengthens your understanding, leads you toward new and unexpected interests or research questions, or returns you to the original article with new questions and critiques of the work. All of these experiences are part of the “active reading” process and are signs of a successful reading experience.

In summary, practice these rules to learn how to read a scientific article, keeping in mind that this process will get easier (and faster) with experience. We are firm believers that an hour in the library will save a week at the bench; this diligent practice will ultimately make you both a more knowledgeable and productive scientist. As you develop the skills to read an article, try to also foster good reading and learning habits for yourself (recommendations here: [ 6 ] and [ 7 ], respectively) and in others. Good luck and happy reading!

Acknowledgments

Thank you to the mentors, teachers, and students who have shaped our thoughts on reading, learning, and what science is all about.

Funding Statement

MAC was supported by the PhRMA Foundation's Postdoctoral Fellowship in Translational Medicine and Therapeutics and the University of Virginia's Engineering-in-Medicine seed grant, and KLS was supported by the NIH T32 Global Biothreats Training Program at the University of Virginia (AI055432). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

  • Open access
  • Published: 23 April 2024

Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review

  • Javiera Fuentes-Cimma 1,2,
  • Dominique Sluijsmans 3,
  • Arnoldo Riquelme 4,
  • Ignacio Villagran (ORCID: orcid.org/0000-0003-3130-8326) 1,
  • Lorena Isbej (ORCID: orcid.org/0000-0002-4272-8484) 2,5,
  • María Teresa Olivares-Labbe 6 &
  • Sylvia Heeneman 7

BMC Medical Education, volume 24, Article number: 440 (2024)

Background

Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, diverse teaching and assessment activities are advocated to be designed and implemented, generating feedback that students use, with proper guidance, to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are characterized by their unpredictable nature, which can either promote learning or present challenges in offering structured learning opportunities for students. This scoping review maps literature on how feedback processes are organised in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback.

Methods

A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework.

Results

The search yielded 4,877 papers, and 61 were included in the review. Two themes were identified in the qualitative analysis: (1) The organization of the feedback processes in workplace-based learning settings, and (2) Sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities. Few studies described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes.

Conclusions

This review identified concrete ideas regarding how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on the required learning and performance goals at the next occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.

Background

The design of effective feedback processes in higher education has been important for educators and researchers and has prompted numerous publications discussing potential mechanisms, theoretical frameworks, and best practice examples over the past few decades. Initially, research on feedback focused primarily on teachers and feedback delivery, and students were depicted as passive feedback recipients [ 1 , 2 , 3 ]. The feedback conversation has recently evolved to a more dynamic emphasis on interaction, sense-making, outcomes in actions, and engagement with learners [ 2 ]. This shift aligns with utilizing the feedback process as a form of social interaction or dialogue to enhance performance [ 4 ]. Henderson et al. (2019) defined feedback processes as "where the learner makes sense of performance-relevant information to promote their learning" (p. 17). When a student grasps the information concerning their performance in connection to the desired learning outcome and subsequently takes suitable action, a feedback loop is closed, and the process can be regarded as successful [ 5 , 6 ].

Hattie and Timperley (2007) proposed a comprehensive perspective on feedback, the so-called feedback loop, to answer three key questions: “Where am I going?”, “How am I going?”, and “Where to next?” [ 7 ]. Each question represents a key dimension of the feedback loop. The first is the feed-up, which consists of setting learning goals and sharing clear objectives of learners' performance expectations. While the concept of the feed-up might not be consistently included in the literature, it is considered to be related to principles of effective feedback and goal setting within educational contexts [ 7 , 8 ]. Goal setting allows students to focus on tasks and learning, and teachers to have clear intended learning outcomes to enable the design of aligned activities and tasks in which feedback processes can be embedded [ 9 ]. Teachers can improve the feed-up dimension by proposing clear, challenging, but achievable goals [ 7 ]. The second dimension of the feedback loop focuses on feedback and aims to answer the second question by obtaining information about students' current performance. Different teaching and assessment activities can be used to obtain feedback information, and it can be provided by a teacher or tutor, a peer, oneself, a patient, or another coworker. The last dimension of the feedback loop is the feedforward, which is specifically associated with using feedback to improve performance or change behaviors [ 10 ]. Feedforward is crucial in closing the loop because it refers to those specific actions students must take to reduce the gap between current and desired performance [ 7 ].

From a sociocultural perspective, feedback processes involve a social practice consisting of intricate relationships within a learning context [ 11 ]. The main feature of this approach is that students learn from feedback only when the feedback encounter includes generating, making sense of, and acting upon the information given [ 11 ]. In the context of workplace-based learning (WBL), actionable feedback plays a crucial role in enabling learners to leverage specific feedback to enhance their performance, skills, and conceptual understandings. The WBL environment provides students with a valuable opportunity to gain hands-on experience in authentic clinical settings, in which students work more independently on real-world tasks, allowing them to develop and exhibit their competencies [ 3 ]. However, WBL settings are characterized by their unpredictable nature, which can either promote self-directed learning or present challenges in offering structured learning opportunities for students [ 12 ]. Consequently, designing purposive feedback opportunities within WBL settings is a significant challenge for clinical teachers and faculty.

In undergraduate clinical education, feedback opportunities are often constrained due to the emphasis on clinical work and the absence of dedicated time for teaching [ 13 ]. Students are expected to perform autonomously under supervision, ideally achieved by giving them space to practice progressively and providing continuous instances of constructive feedback [ 14 ]. However, the hierarchy often present in clinical settings places undergraduate students in a dependent position, below residents and specialists [ 15 ]. Undergraduate or junior students may have different approaches to receiving and using feedback. If their priority is meeting the minimum standards given pass-fail consequences and acting merely as feedback recipients, other incentives may be needed to engage with the feedback processes because they will need more learning support [ 16 , 17 ]. Adequate supervision and feedback have been recognized as vital educational support in encouraging students to adopt a constructive learning approach [ 18 ]. Given that productive feedback processes rely on observed information regarding a student's performance, it is imperative to establish structured teaching and learning feedback activities within undergraduate WBL settings.

Despite the extensive research on feedback, a significant proportion of published studies involve residents or postgraduate students [ 19 , 20 ]. Recent reviews focusing on feedback interventions within medical education have clearly distinguished between undergraduate medical students and residents or fellows [ 21 ]. To gain a comprehensive understanding of initiatives related to actionable feedback in the WBL environment for undergraduate health professions, a scoping review of the existing literature could provide insight into how feedback processes are designed in that context. Accordingly, the present scoping review aims to answer the following research question: How are the feedback processes designed in the undergraduate health professions' workplace-based learning environments?

Methods

A scoping review was conducted using the five-step methodological framework proposed by Arksey and O'Malley (2005) [ 22 ], intertwined with the PRISMA checklist extension for scoping reviews to provide reporting guidance for this specific type of knowledge synthesis [ 23 ]. Scoping reviews allow us to study the literature without restricting the methodological quality of the studies found, systematically and comprehensively map the literature, and identify gaps [ 24 ]. Furthermore, a scoping review was used because this topic is not suitable for a systematic review due to the varied approaches described and the large difference in the methodologies used [ 21 ].

Search strategy

With the collaboration of a medical librarian, the authors used the research question to guide the search strategy. An initial meeting was held to define keywords and search resources. The proposed search strategy was reviewed by the research team, and then the study selection was conducted in two steps:

An online database search included Medline/PubMed, Web of Science, CINAHL, Cochrane Library, Embase, ERIC, and PsycINFO.

A directed search of ten relevant journals in the health sciences education field (Academic Medicine, Medical Education, Advances in Health Sciences Education, Medical Teacher, Teaching and Learning in Medicine, Journal of Surgical Education, BMC Medical Education, Medical Education Online, Perspectives on Medical Education and The Clinical Teacher) was performed.

The research team conducted a pilot or initial search before the full search to identify if the topic was susceptible to a scoping review. The full search was conducted in November 2022. One team member (MO) identified the papers in the databases. JF searched in the selected journals. Authors included studies written in English due to feasibility issues, with no time span limitation. After eliminating duplicates, two research team members (JF and IV) independently reviewed all the titles and abstracts using the exclusion and inclusion criteria described in Table 1 and with the support of the screening application StArt [ 25 ]. A third team member (AR) reviewed the titles and abstracts when the first two disagreed. The reviewer team met again at a midpoint and final stage to discuss the challenges related to study selection. Articles included for full-text review were exported to Mendeley. JF independently screened all full-text papers, and AR verified 10% for inclusion. The authors did not analyze study quality or risk of bias during study selection, which is consistent with conducting a scoping review.

The analysis of the results incorporated a descriptive summary and a thematic analysis, which was carried out to clarify and give consistency to the results' reporting [ 22 , 24 , 26 ]. Quantitative data were analyzed to report the characteristics of the studies, populations, settings, methods, and outcomes. Qualitative data were labeled, coded, and categorized into themes by three team members (JF, SH, and DS). The feedback loop framework with a sociocultural perspective was used as the theoretical framework to analyze the results.

The keywords used for the search strategies were as follows:

Clinical clerkship; feedback; formative feedback; health professions; undergraduate medical education; workplace.

Definitions of the keywords used for the present review are available in Appendix 1 .

As an example, we included the search strategy that we used in the Medline/PubMed database when conducting the full search:

("Formative Feedback"[Mesh] OR feedback) AND ("Workplace"[Mesh] OR workplace OR "Clinical Clerkship"[Mesh] OR clerkship) AND (("Education, Medical, Undergraduate"[Mesh] OR undergraduate health profession*) OR (learner* medical education)).

Inclusion and exclusion criteria

The following inclusion and exclusion criteria were used (Table  1 ):

Data extraction

The research group developed a data-charting form to organize the information obtained from the studies. The process was iterative, as the data chart was continuously reviewed and improved as necessary. In addition, following Levac et al.'s recommendation (2010), the three members involved in the charting process (JF, LI, and IV) independently reviewed the first five selected studies to determine whether the data extraction was consistent with the objectives of this scoping review and to ensure consistency. Then, the team met using web-conferencing software (Zoom; CA, USA) to review the results and adjust any details in the chart. The same three members extracted data independently from all the selected studies, considering two members reviewing each paper [ 26 ]. A third team member was consulted if any conflict occurred when extracting data. The data chart identified demographic patterns and facilitated the data synthesis. To organize data, we used a shared Excel spreadsheet, considering the following headings: title, author(s), year of publication, journal/source, country/origin, aim of the study, research question (if any), population/sample size, participants, discipline, setting, methodology, study design, data collection, data analysis, intervention, outcomes, outcomes measure, key findings, and relation of findings to research question.
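
To make the structure of such a data chart concrete, here is a minimal sketch (our illustration, not the authors' actual tooling; the filename is a placeholder) that writes the same column headings to a spreadsheet using only Python's standard csv module. Each charted study would then occupy one row.

import csv

HEADINGS = [
    "title", "author(s)", "year of publication", "journal/source",
    "country/origin", "aim of the study", "research question",
    "population/sample size", "participants", "discipline", "setting",
    "methodology", "study design", "data collection", "data analysis",
    "intervention", "outcomes", "outcomes measure", "key findings",
    "relation of findings to research question",
]

with open("data_chart.csv", "w", newline="") as f:  # placeholder filename
    csv.writer(f).writerow(HEADINGS)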

Additionally, all the included papers were uploaded to AtlasTi v19 to facilitate the qualitative analysis. Three team members (JF, SH, and DS) independently coded the first six papers to create a list of codes to ensure consistency and rigor. The group met several times to discuss and refine the list of codes. Then, one member of the team (JF) used the code list to code all the rest of the papers. Once all papers were coded, the team organized codes into descriptive themes aligned with the research question.

Preliminary results were shared with a number of stakeholders (six clinical teachers, ten students, six medical educators) to elicit their opinions as an opportunity to build on the evidence and offer a greater level of meaning, content expertise, and perspective to the preliminary findings [ 26 ]. No quality appraisal of the studies is considered for this scoping review, which aligns with the frameworks for guiding scoping reviews [ 27 ].

The datasets analyzed during the current study are available from the corresponding author upon request.

Results

A database search resulted in 3,597 papers, and the directed search of the most relevant journals in the health sciences education field yielded 2,096 titles. An example of the results of one database is available in Appendix 2 . Of the titles obtained, 816 duplicates were eliminated, and the team reviewed the titles and abstracts of 4,877 papers. Of these, 120 were selected for full-text review. Finally, 61 papers were included in this scoping review (Fig.  1 ), as listed in Table  2 .
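
As a quick arithmetic check (ours, not the authors'), the reported screening counts reconcile:

\[
\underbrace{3{,}597}_{\text{databases}} + \underbrace{2{,}096}_{\text{journals}} - \underbrace{816}_{\text{duplicates}} = 4{,}877 \text{ screened} \;\longrightarrow\; 120 \text{ full text} \;\longrightarrow\; 61 \text{ included}
\]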

Fig. 1: PRISMA flow diagram for included studies, incorporating records identified through the database and direct searching.

The selected studies were published between 1986 and 2022, and seventy-five percent (46) were published during the last decade. Of all the articles included in this review, 13% (8) were literature reviews: one integrative review [ 28 ] and four scoping reviews [ 29 , 30 , 31 , 32 ]. Finally, fifty-three (87%) original or empirical papers were included (i.e., studies that answered a research question or achieved a research purpose through qualitative or quantitative methodologies) [ 15 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 , 78 , 79 , 80 , 81 , 82 , 83 , 84 , 85 ].

Table 2 summarizes the papers included in the present scoping review, and Table  3 describes the characteristics of the included studies.

The thematic analysis resulted in two themes: (1) the organization of feedback processes in WBL settings, and (2) sociocultural factors influencing the organization of feedback processes. Table 4 gives a summary of the themes and subthemes.

Organization of feedback processes in WBL settings.

Setting learning goals (i.e., feed-up dimension).

Feedback that focuses on students' learning needs and is based on known performance standards enhances student response and the setting of learning goals [ 30 ]. Discussing goals and agreements before starting clinical practice enhances students' feedback-seeking behavior [ 39 ] and responsiveness to feedback [ 83 ]. Farrell et al. (2017) found that teacher-learner co-constructed learning goals enhance feedback interactions and help establish educational alliances, improving the learning experience [ 50 ]. However, Kiger (2020) found that sharing individualized learning plans with teachers aligned feedback with learning goals but did not improve students' perceived use of feedback [ 64 ].

Two papers of this set pointed out the importance of goal-oriented feedback, a dynamic process that depends on discussion of goal setting between teachers and students [ 50 ] and influences how individuals experience, approach, and respond to upcoming learning activities [ 34 ]. Goal-oriented feedback should be embedded in the learning experience of the clinical workplace, as it can enhance students' engagement in safe feedback dialogues [ 50 ]. Ideally, each feedback encounter in the WBL context should conclude, in addition to setting a plan of action to achieve the desired goal, with a reflection on the next goal [ 50 ].

Feedback strategies within the WBL environment (i.e., feedback dimension).

In undergraduate WBL environments, there are several tasks and feedback opportunities organized in the undergraduate clinical workplace that can enable feedback processes:

Questions from clinical teachers to students are a feedback strategy [ 74 ]. There are different types of questions that the teacher can use, either to clarify concepts, to reach the correct answer, or to facilitate self-correction [ 74 ]. Usually, questions can be used in conjunction with other communication strategies, such as pauses, which enable self-correction by the student [ 74 ]. Students can also ask questions to obtain feedback on their performance [ 54 ]. However, question-and-answer as a feedback strategy usually provides information on either correct or incorrect answers and fewer suggestions for improvement, rendering it less constructive as a feedback strategy [ 82 ].

Direct observation of performance is needed by default to provide the information used as input in the feedback process [ 33 , 46 , 49 , 86 ]. In the process of observation, teachers can include clarification of objectives (i.e., feed-up dimension) and suggestions for an action plan (i.e., feedforward) [ 50 ]. Accordingly, Schopper et al. (2016) showed that students valued being observed while interviewing patients, as they received feedback that helped them become more efficient and effective as interviewers and communicators [ 33 ]. Moreover, it is widely described that direct observation improves feedback credibility [ 33 , 40 , 84 ]. Ideally, observation should be deliberate [ 33 , 83 ], informal or spontaneous [ 33 ], conducted by a (clinical) expert [ 46 , 86 ], and followed by feedback provided immediately after the observation; the clinical teacher should, if possible, schedule or stay alert to follow-up observations to promote closing the gap between current and desired performance [ 46 ].

Workplace-based assessments (WBAs), by definition, entail direct observation of performance during authentic task demonstration [ 39 , 46 , 56 , 87 ]. WBAs can significantly impact behavioral change in medical students [ 55 ]. Organizing and designing formative WBAs and embedding these in a feedback dialogue is essential for effective learning [ 31 ].

Summative organization of WBAs is a well described barrier for feedback uptake in the clinical workplace [ 35 , 46 ]. If feedback is perceived as summative, or organized as a pass-fail decision, students may be less inclined to use the feedback for future learning [ 52 ]. According to Schopper et al. (2016), using a scale within a WBA makes students shift their focus during the clinical interaction and see it as an assessment with consequences [ 33 ]. Harrison et al. (2016) pointed out that an environment that only contains assessments with a summative purpose will not lead to a culture of learning and improving performance [ 56 ]. The recommendation is to separate the formative and summative WBAs, as feedback in summative instances is often not recognized as a learning opportunity or an instance to seek feedback [ 54 ]. In terms of the design, an organizational format is needed to clarify to students how formative assessments can promote learning from feedback [ 56 ]. Harrison et al. (2016) identified that enabling students to have more control over their assessments, designing authentic assessments, and facilitating long-term mentoring could improve receptivity to formative assessment feedback [ 56 ].

Multiple WBA instruments and systems are reported in the literature. Sox et al. (2014) used a detailed evaluation form to help students improve their clinical case presentation skills. They found that feedback on oral presentations provided by supervisors using a detailed evaluation form improved clerkship students’ oral presentation skills [ 78 ]. Daelmans et al. (2006) suggested that a formal in-training assessment programme composed of 19 assessments that provided structured feedback could promote observation and verbal feedback opportunities through frequent assessments [ 43 ]. However, in this setting, limited student-staff interactions still hindered feedback follow-up [ 43 ]. Designing frequent WBAs improves feedback credibility [ 28 ]. Long et al. (2021) emphasized that students' responsiveness to assessment feedback hinges on its perceived credibility, underlining the importance of credibility for students to effectively engage and improve their performance [ 31 ].

The mini-CEX is one of the most widely described WBA instruments in the literature. Students perceive that the mini-CEX allows them to be observed and encourages the development of interviewing skills [ 33 ]. The mini-CEX can provide feedback that improves students' clinical skills [ 58 , 60 ], as it incorporates a structure for discussing the student's strengths and weaknesses and the design of a written action plan [ 39 , 80 ]. When mini-CEXs are incorporated as part of a system of WBA, such as programmatic assessment, students feel confident in seeking feedback after observation, and being systematic allows for follow-up [ 39 ]. Students suggested separating grading from observation and using the mini-CEX in more informal situations [ 33 ].

Clinical encounter cards allow students to receive weekly feedback and make them request more feedback as the clerkship progresses [ 65 ]. Moreover, encounter cards stimulate that feedback is given by supervisors, and students are more satisfied with the feedback process [ 72 ]. With encounter card feedback, students are responsible for asking a supervisor for feedback before a clinical encounter, and supervisors give students written and verbal comments about their performance after the encounter [ 42 , 72 ]. Encounter cards enhance the use of feedback and add approximately one minute to the length of the clinical encounter, so they are well accepted by students and supervisors [ 72 ]. Bennett (2006) identified that Instant Feedback Cards (IFC) facilitated mid-rotation feedback [ 38 ]. Feedback encounter card comments must be discussed between students and supervisors; otherwise, students may perceive it as impersonal, static, formulaic, and incomplete [ 59 ].

Self-assessments can change students' feedback orientation, transforming them into coproducers of learning [ 68 ]. Self-assessments promote the feedback process [ 68 ]. Some articles emphasize the importance of organizing self-assessments before receiving feedback from supervisors, for example, discussing their appraisal with the supervisor [ 46 , 52 ]. In designing a feedback encounter, starting with a self-assessment as feed-up, discussing with the supervisor, and identifying areas for improvement is recommended, as part of the feedback dialogue [ 68 ].

Peer feedback as an organized activity allows students to develop strategies to observe and give feedback to other peers [ 61 ]. Students can act as the feedback provider or receiver, fostering understanding of critical comments and promoting evaluative judgment for their clinical practice [ 61 ]. Within clerkships, enabling the sharing of feedback information among peers allows for a better understanding and acceptance of feedback [ 52 ]. However, students can find it challenging to take on the peer assessor/feedback provider role, as they prefer to avoid social conflicts [ 28 , 61 ]. Moreover, it has been described that they do not trust the judgment of their peers because they are not experts, although they know the procedures, tasks, and steps well and empathize with their peer status in the learning process [ 61 ].

Bedside-teaching encounters (BTEs) provide timely feedback and are an opportunity for verbal feedback during performance [ 74 ]. Rizan et al. (2014) explored timely feedback delivered within BTEs and determined that it promotes interaction that constructively enhances learner development through various corrective strategies (e.g., questions and answers, pauses, etc.). However, if the feedback given during the BTEs was general, unspecific, or open-ended, it could go unnoticed [ 74 ]. Torre et al. (2005) investigated which integrated feedback activities and clinical tasks occurred on clerkship rotations and assessed students' perceived quality in each teaching encounter [ 81 ]. The feedback activities reported were feedback on written clinical history, physical examination, differential diagnosis, oral case presentation, a daily progress note, and bedside feedback. Students considered all these feedback activities high-quality learning opportunities, but they were more likely to receive feedback when teaching was at the bedside than at other teaching locations [ 81 ].

Case presentations are an opportunity for feedback within WBL contexts [ 67 , 73 ]. However, both students and supervisors struggled to identify them as feedback moments, and they often dismissed questions and clarifications around case presentations as feedback [ 73 ]. Joshi (2017) identified case presentations as a way for students to ask for informal or spontaneous supervisor feedback [ 63 ].

Organization of follow-up feedback and action plans (i.e., feedforward dimension).

Feedback that generates use and response from students is characterized by two-way communication and embedded in a dialogue [ 30 ]. Feedback must be future-focused [ 29 ], and a feedback encounter should be followed by planning the next observation [ 46 , 87 ]. Follow-up feedback could be organized as a future self-assessment, reflective practice by the student, and/or a discussion with the supervisor or coach [ 68 ]. The literature describes that a lack of student interaction with teachers makes follow-up difficult [ 43 ]. According to Haffling et al. (2011), follow-up feedback sessions improve students' satisfaction with feedback compared to students who do not have follow-up sessions. In addition, these same authors reported that a second follow-up session allows verification of improved performances or confirmation that the skill was acquired [ 55 ].

Although feedback encounter forms are a recognized way of obtaining information about performance (i.e., feedback dimension), the literature does not provide many clear examples of how they may impact the feedforward phase. For example, Joshi et al. (2016) described a feedback form with four fields (i.e., what the student did well, advice on what could be done to improve performance, the level of proficiency, and personal details of the tutor). In this case, the supervisor highlighted what the student could improve but not how, omitting the phase of co-constructing an action plan [ 63 ]. Whichever WBA instrument is used in clerkships to provide feedback, it should include a "next steps" box [ 44 ], and it is recommended to organize long-term use of the WBA instrument so that those involved get used to it and improve interaction and feedback uptake [ 55 ]. RIME-based feedback (Reporting, Interpreting, Managing, Educating) is considered an interesting example, as it is perceived as helpful to students in knowing what they need to improve in their performance [ 44 ]. Hochberg (2017) implemented formative mid-clerkship assessments to enhance face-to-face feedback conversations and co-create an improvement plan [ 59 ]. Apps for structuring and storing feedback improve the amount of verbal and written feedback. In the study of Joshi et al. (2016), a reasonable proportion of students (64%) perceived that these app tools helped them improve their performance during rotations [ 63 ].

Several studies indicate that an action plan as part of the follow-up feedback is essential for performance improvement and learning [ 46 , 55 , 60 ]. An action plan corresponds to an agreed-upon strategy for improving, confirming, or correcting performance. Bing-You et al. (2017) determined that only 12% of the articles included in their scoping review incorporated an action plan for learners [ 32 ]. Holmboe et al. (2004) reported that only 11% of the feedback sessions following a mini-CEX included an action plan [ 60 ]. Suhoyo et al. (2017) also reported that only 55% of mini-CEX encounters contained an action plan [ 80 ]. Other authors reported that action plans are not commonly offered during feedback encounters [ 77 ]. Sokol-Hessner et al. (2010) implemented feedback card comments with a space to provide written feedback and a specific action plan. In their results, 96% contained positive comments, and only 5% contained constructive comments [ 77 ]. In summary, although the recommendation is to include a “next step” box in the feedback instruments, evidence shows these items are not often used for constructive comments or action plans.

Sociocultural factors influencing the organization of feedback processes.

Multiple sociocultural factors influence interaction in feedback encounters, promoting or hampering the productivity of the feedback processes.

Clinical learning culture

Context impacts feedback processes [ 30 , 82 ], and there are barriers to incorporating actionable feedback in the clinical learning context. The clinical learning culture is partly determined by the clinical context, which can be unpredictable [ 29 , 46 , 68 ], as the available patients determine learning opportunities. Supervisors are occupied by a high workload, which results in limited time or priority for teaching [ 35 , 46 , 48 , 55 , 68 , 83 ], hindering students’ feedback-seeking behavior [ 54 ], and creating a challenge for the balance between patient care and student mentoring [ 35 ].

Clinical workplace culture does not always purposefully prioritize instances for feedback processes [ 83 , 84 ]. This often leads to limited direct observation [ 55 , 68 ] and the provision of poorly informed feedback. It is also evident that this affects trust between clinical teachers and students [ 52 ]. Supervisors consider feedback a low priority in clinical contexts [ 35 ] due to low compensation and lack of protected time [ 83 ]. In particular, lack of time appears to be the most significant and well-known barrier to frequent observation and workplace feedback [ 35 , 43 , 48 , 62 , 67 , 83 ].

The clinical environment is hierarchical [ 68 , 80 ] and can make students not consider themselves part of the team and feel like a burden to their supervisor [ 68 ]. This hierarchical learning environment can lead to unidirectional feedback, limit dialogue during feedback processes, and hinder the seeking, uptake, and use of feedback [ 67 , 68 ]. In a learning culture where feedback is not supported, learners are less likely to want to seek it and feel motivated and engaged in their learning [ 83 ]. Furthermore, it has been identified that clinical supervisors lack the motivation to teach [ 48 ] and the intention to observe or reobserve performance [ 86 ].

In summary, the clinical context and WBL culture do not fully use the potential of a feedback process aimed at closing learning gaps. However, concrete actions shown in the literature can be taken to improve the effectiveness of feedback by organizing the learning context. For example, McGinness et al. (2022) identified that students felt more receptive to feedback when working in a safe, nonjudgmental environment [ 67 ]. Moreover, supervisors and trainees identified the learning culture as key to establishing an open feedback dialogue [ 73 ]. Students who perceive culture as supportive and formative can feel more comfortable performing tasks and more willing to receive feedback [ 73 ].

Relationships

There is a consensus in the literature that trusting and long-term relationships improve the chances of actionable feedback. However, relationships between supervisors and students in the clinical workplace are often brief and not organized longitudinally [ 68 , 83 ], leaving little time to establish a trustful relationship [ 68 ]. Supervisors change continuously, resulting in short interactions that limit the creation of lasting relationships over time [ 50 , 68 , 83 ]. In some contexts, it is common for a student to have several supervisors, each with their own standards for observing performance [ 46 , 56 , 68 , 83 ]. A lack of stable relationships results in students having little engagement in feedback [ 68 ]. Furthermore, in the case of summative assessment programmes, the dual role of supervisors (i.e., assessing and giving feedback) means that feedback interactions are perceived as summative, which can complicate the relationship [ 83 ].

Repeatedly, the articles considered in this review describe that long-term and stable relationships enable the development of trust and respect [ 35 , 62 ] and foster feedback-seeking behavior [ 35 , 67 ] and feedback-giver behavior [ 39 ]. Moreover, constructive and positive relationships enhance students´ use of and response to feedback [ 30 ]. For example, Longitudinal Integrated Clerkships (LICs) promote stable relationships, thus enhancing the impact of feedback [ 83 ]. In a long-term trusting relationship, feedback can be straightforward and credible [ 87 ], there are more opportunities for student observation, and the likelihood of follow-up and actionable feedback improves [ 83 ]. Johnson et al. (2020) pointed out that within a clinical teacher-student relationship, the focus must be on establishing psychological safety; thus, the feedback conversations might be transformed [ 62 ].

Stable relationships enhance feedback dialogues, which offer an opportunity to co-construct learning and propose and negotiate aspects of the design of learning strategies [ 62 ].

Students as active agents in the feedback processes

The feedback response learners generate depends on the type of feedback information they receive, how credible the source of feedback information is, the relationship between the receiver and the giver, and the relevance of the information delivered [ 49 ]. Garino (2020) noted that students who are most successful in using feedback are those who do not take criticism personally, who understand what they need to improve and know they can do so, who value and feel meaning in criticism, are not surprised to receive it, and who are motivated to seek new feedback and use effective learning strategies [ 52 ]. Successful users of feedback ask others for help, are intentional about their learning, know what resources to use and when to use them, listen to and understand a message, value advice, and use effective learning strategies. They regulate their emotions, find meaning in the message, and are willing to change [ 52 ].

Student self-efficacy influences the understanding and use of feedback in the clinical workplace. McGinness et al. (2022) described various positive examples of self-efficacy regarding feedback processes: planning feedback meetings with teachers, fostering good relationships with the clinical team, demonstrating interest in assigned tasks, persisting in seeking feedback despite the patient workload, and taking advantage of opportunities for feedback, e.g., case presentations [ 67 ].

When students are encouraged to seek feedback aligned with their own learning objectives, they elicit feedback information specific to what they want to learn and improve, which enhances the use of feedback [ 53 ]. McGinness et al. (2022) identified that the perceived relevance of feedback information influenced the use of feedback because students were more likely to ask for feedback if they perceived that the information was useful to them. For example, if students feel part of the clinical team and participate in patient care, they are more likely to seek feedback [ 17 ].

Learning-oriented students aim to seek feedback to achieve clinical competence at the expected level [ 75 ]; they focus on improving their knowledge and skills and on professional development [ 17 ]. Performance-oriented students aim not to fail and to avoid negative feedback [ 17 , 75 ].

For effective feedback processes, including feed-up, feedback, and feedforward, the student must be feedback-oriented, i.e., active, seeking, listening to, interpreting, and acting on feedback [ 68 ]. The literature shows that feedback-oriented students are coproducers of learning [ 68 ] and are more involved in the feedback process [ 51 ]. Additionally, students who are metacognitively aware of their learning process are more likely to use feedback to reduce gaps in learning and performance [ 52 ]. For this, students must recognize feedback when it occurs and understand it when they receive it. Thus, it is important to organize training and promote feedback literacy so that students understand what feedback is, act on it, and improve the quality of feedback and their learning plans [ 68 ].

Table 5 summarizes those feedback tasks, activities, and key features of organizational aspects that enable each phase of the feedback loop based on the literature review.

Discussion

The present scoping review identified 61 papers that mapped the literature on feedback processes in the WBL environments of undergraduate health professions. This review explored how feedback processes are organized in these learning contexts using the feedback loop framework. Given the specific characteristics of feedback processes in undergraduate clinical learning, three main findings were identified on how feedback processes are being conducted in the clinical environment and how these processes could be organized to support feedback processes.

First, the literature lacks a balance between the three dimensions of the feedback loop. In this regard, most of the articles in this review focused on reporting experiences or strategies for delivering feedback information (i.e., feedback dimension). Credible and objective feedback information is based on direct observation [ 46 ] and occurs within an interaction or a dialogue [ 62 , 88 ]. However, only having credible and objective information does not ensure that it will be considered, understood, used, and put into practice by the student [ 89 ].

Feedback-supporting actions aligned with goals and priorities facilitate effective feedback processes [ 89 ] because goal-oriented feedback focuses on students' learning needs [ 7 ]. In contrast, this review showed that only a minority of the studies highlighted the importance of aligning learning objectives and feedback (i.e., the feed-up dimension). To overcome this, supervisors and students must establish goals and agreements before starting clinical practice, as it allows students to measure themselves on a defined basis [ 90 , 91 ] and enhances students' feedback-seeking behavior [ 39 , 92 ] and responsiveness to feedback [ 83 ]. In addition, learning goals should be shared, and co-constructed, through a dialogue [ 50 , 88 , 90 , 92 ]. In fact, relationship-based feedback models emphasize setting shared goals and plans as part of the feedback process [ 68 ].

Many of the studies acknowledge the importance of establishing an action plan and promoting the use of feedback (i.e., feedforward). However, there is as yet limited insight into how best to implement strategies that support the use of action plans, improve performance, and close learning gaps. In this regard, it is described that delivering feedback that produces no perceivable change has no effect or impact on learning [ 88 ]. To determine if a feedback loop is closed, observing a change in the student's response is necessary. In other words, feedback does not work without repeating the same task [ 68 ], so teachers need to observe subsequent tasks to notice changes [ 88 ]. While feedforward is fundamental to long-term performance, more research is needed to determine effective actions to be implemented in the WBL environment to close feedback loops.

Second, more knowledge is needed about designing feedback activities in the WBL environment that generate constructive feedback for learning. WBA is the most frequently reported feedback activity in clinical workplace contexts [39, 46, 56, 87]. Despite the efforts of some authors to use WBAs as formative assessment and feedback opportunities, several studies identified the summative component of the WBA as a barrier to actionable feedback [33, 56]. Students suggest separating grading from observation and using, for example, the mini-CEX in informal situations [33]. Several authors also recommend disconnecting the summative components of WBAs to avoid generating emotions that can limit the uptake and use of feedback [28, 93]. Other literature recommends purposefully designing a system of assessment that uses low-stakes data points for feedback and learning. Accordingly, programmatic assessment is a framework that combines the learning and the decision-making functions of assessment [94, 95]. Programmatic assessment is a practical approach for implementing low-stakes assessments as a continuum, giving opportunities to close the gap between current and desired performance while keeping the student an active agent [96]. This approach enables the incorporation of low-stakes data points that target student learning [93] and provide performance-relevant information (i.e., meaningful feedback) based on direct observations during authentic professional activities [46]. Using low-stakes data points, learners make sense of information about their performance and use it to enhance the quality of their work [96, 97, 98]. Implementing multiple instances of feedback is more effective than providing it once, because it promotes closing feedback loops by giving the student opportunities to understand the feedback, make changes, and see whether those changes were effective [89].

Third, the support provided by the teacher is fundamental and should be embedded in a reliable, long-term relationship in which the teacher takes the role of coach rather than assessor, and in which students develop feedback agency and are active in seeking and using feedback to improve performance. Although institutional efforts over the past decades have focused on training teachers to deliver feedback, clinical supervisors' lack of teaching skills is still identified as a barrier to workplace feedback [99]. In particular, research indicates that clinical teachers lack the skills to transform the information obtained from an observation into constructive feedback [100]. Students are more likely to use feedback if they consider it credible and constructive [93] and based on stable relationships [93, 99, 101]. In trusting relationships, feedback can be straightforward and credible, and the likelihood of follow-up and actionable feedback improves [83, 88]. Coaching strategies can be enhanced by teachers building an educational alliance that allows for trustworthy relationships, or by having supervisors with an exclusive coaching role [14, 93, 102].

Last, from a sociocultural perspective, individuals are the main actors in the learning process. Therefore, feedback impacts learning only if students engage and interact with it [11]. Thus, feedback design and student agency appear to be the main features of effective feedback processes. Accordingly, the present review identified feedback design as a key feature for effective learning in complex environments such as WBL. Feedback in the workplace should ideally be organized and implemented to align learning outcomes, learning activities, and assessments, allowing learners to learn, practice, and close feedback loops [88]. To guide students toward performance that reflects long-term learning, an intensive formative learning phase is needed, in which multiple feedback processes shape students' further learning [103]. This design would promote student uptake of feedback for subsequent performance [1].

Strengths and limitations

The strengths of this study are: (1) the use of an established framework, Arksey and O'Malley's [22], including the step of sharing the results with stakeholders, which allowed the team to understand the results from another perspective and offer a realistic view; (2) the use of the feedback loop as a theoretical framework, which strengthened the results and gave a more thorough explanation of the literature on feedback processes in the WBL context; and (3) a diverse team that included researchers from different disciplines as well as a librarian.

The present scoping review has several limitations. Although we adhered to the recommended protocols and methodologies, some relevant papers may have been omitted. The research team decided to include only original studies and literature reviews; as a result, articles such as guidelines, perspectives, and narrative papers were excluded from the current study.

One of the inclusion criteria was a focus on undergraduate students; however, some papers that included both undergraduate and postgraduate participants were retained, as they supported the results of this review. Most articles involved medical students. Although the search was not limited to medicine, some articles involving students from other health disciplines may have been missed; searching additional databases or journals might have captured them.

Conclusions

The results give insight into how feedback could be organized within the clinical workplace to promote feedback processes. On a small scale, i.e., in the feedback encounter between a supervisor and a learner, feedback should be organized to allow for follow-up, working toward the required learning and performance goals. On a larger scale, i.e., in a clerkship programme or placement rotation, feedback should be organized through appropriate planning of subsequent tasks and activities.

More insight is needed into designing closed-loop feedback processes, with specific attention to effective feedforward practices. Feedback that stimulates further action and learning requires a safe and trusting work and learning environment. Understanding the relationship between individuals and their environment remains a challenge for determining the impact of feedback and must be further investigated within clinical WBL environments. Aligning the dimensions of feed-up, feedback, and feedforward requires careful attention to teachers' and students' feedback literacy to ensure that students can act on feedback in a constructive way. In this line, how to develop students' feedback agency within these learning environments needs further research.

References

Boud D, Molloy E. Rethinking models of feedback for learning: The challenge of design. Assess Eval High Educ. 2013;38:698–712.

Henderson M, Ajjawi R, Boud D, Molloy E. Identifying feedback that has impact. In: The Impact of Feedback in Higher Education. Cham: Springer International Publishing; 2019. p. 15–34.

Winstone N, Carless D. Designing effective feedback processes in higher education: A learning-focused approach. 1st ed. New York: Routledge; 2020.

Ajjawi R, Boud D. Researching feedback dialogue: an interactional analysis approach. Assess Eval High Educ. 2015. https://doi.org/10.1080/02602938.2015.1102863.

Carless D. Feedback loops and the longer-term: towards feedback spirals. Assess Eval High Educ. 2019;44:705–14.

Sadler DR. Formative assessment and the design of instructional systems. Instr Sci. 1989;18:119–44.

Hattie J, Timperley H. The Power of Feedback. Rev Educ Res. 2007;77:81–112.

Zarrinabadi N, Rezazadeh M. Why only feedback? Including feed up and feed forward improves nonlinguistic aspects of L2 writing. Lang Teach Res. 2023;27(3):575–92.

Fisher D, Frey N. Feed up, back, forward. Educ Leadersh. 2009;67:20–5.

Reimann A, Sadler I, Sambell K. What’s in a word? Practices associated with ‘feedforward’ in higher education. Assess Eval High Educ. 2019;44:1279–90.

Esterhazy R. Re-conceptualizing Feedback Through a Sociocultural Lens. In: Henderson M, Ajjawi R, Boud D, Molloy E, editors. The Impact of Feedback in Higher Education. Cham: Palgrave Macmillan; 2019. https://doi.org/10.1007/978-3-030-25112-3_5.

Bransen D, Govaerts MJB, Sluijsmans DMA, Driessen EW. Beyond the self: The role of co-regulation in medical students’ self-regulated learning. Med Educ. 2020;54:234–41.

Ramani S, Könings KD, Ginsburg S, Van Der Vleuten CP. Feedback Redefined: Principles and Practice. J Gen Intern Med. 2019;34:744–53.

Atkinson A, Watling CJ, Brand PL. Feedback and coaching. Eur J Pediatr. 2022;181(2):441–6.

Suhoyo Y, Schonrock-Adema J, Emilia O, Kuks JBM, Cohen-Schotanus JA. Clinical workplace learning: perceived learning value of individual and group feedback in a collectivistic culture. BMC Med Educ. 2018;18:79.

Bowen L, Marshall M, Murdoch-Eaton D. Medical Student Perceptions of Feedback and Feedback Behaviors Within the Context of the “Educational Alliance.” Acad Med. 2017;92:1303–12.

Bok HGJ, Teunissen PW, Spruijt A, Fokkema JPI, van Beukelen P, Jaarsma DADC, et al. Clarifying students’ feedback-seeking behaviour in clinical clerkships. Med Educ. 2013;47:282–91.

Al-Kadri HM, Al-Kadi MT, Van Der Vleuten CPM. Workplace-based assessment and students’ approaches to learning: A qualitative inquiry. Med Teach. 2013;35(SUPPL):1.

Dennis AA, Foy MJ, Monrouxe LV, Rees CE. Exploring trainer and trainee emotional talk in narratives about workplace-based feedback processes. Adv Health Sci Educ. 2018;23:75–93.

Watling C, LaDonna KA, Lingard L, Voyer S, Hatala R. ‘Sometimes the work just needs to be done’: socio-cultural influences on direct observation in medical training. Med Educ. 2016;50:1054–64.

Bing-You R, Hayes V, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Feedback for Learners in Medical Education: What is Known? A Scoping Review. Acad Med. 2017;92:1346–54.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann Intern Med. 2018;169:467–73.

Colquhoun HL, Levac D, O’Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67:1291–4.

StArt - State of Art through Systematic Review. 2013.

Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:1–9.

Peters MDJ, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13:141–6.

Bing-You R, Varaklis K, Hayes V, Trowbridge R, Kemp H, McKelvy D, et al. The Feedback Tango: An Integrative Review and Analysis of the Content of the Teacher-Learner Feedback Exchange. Acad Med. 2018;93:657–63.

Ossenberg C, Henderson A, Mitchell M. What attributes guide best practice for effective feedback? A scoping review. Adv Health Sci Educ. 2019;24:383–401.

Spooner M, Duane C, Uygur J, Smyth E, Marron B, Murphy PJ, et al. Self-regulatory learning theory as a lens on how undergraduate and postgraduate learners respond to feedback: A BEME scoping review: BEME Guide No. 66. Med Teach. 2022;44:3–18.

Long S, Rodriguez C, St-Onge C, Tellier PP, Torabi N, Young M. Factors affecting perceived credibility of assessment in medical education: A scoping review. Adv Health Sci Educ. 2022;27:229–62.

Bing-You R, Hayes V, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Feedback for Learners in Medical Education: What is Known? A Scoping Review: Lippincott Williams and Wilkins; 2017.

Schopper H, Rosenbaum M, Axelson R. “I wish someone watched me interview:” medical student insight into observation and feedback as a method for teaching communication skills during the clinical years. BMC Med Educ. 2016;16:286.

Crommelinck M, Anseel F. Understanding and encouraging feedback-seeking behaviour: a literature review. Med Educ. 2013;47:232–41.

Adamson E, King L, Foy L, McLeod M, Traynor J, Watson W, et al. Feedback in clinical practice: Enhancing the students’ experience through action research. Nurse Educ Pract. 2018;31:48–53.

Al-Mously N, Nabil NM, Al-Babtain SA, et al. Undergraduate medical students’ perceptions on the quality of feedback received during clinical rotations. Med Teach. 2014;36(Supplement 1):S17-23.

Bates J, Konkin J, Suddards C, Dobson S, Pratt D. Student perceptions of assessment and feedback in longitudinal integrated clerkships. Med Educ. 2013;47:362–74.

Bennett AJ, Goldenhar LM, Stanford K. Utilization of a Formative Evaluation Card in a Psychiatry Clerkship. Acad Psychiatry. 2006;30:319–24.

Bok HG, Jaarsma DA, Spruijt A, Van Beukelen P, Van Der Vleuten CP, Teunissen PW, et al. Feedback-giving behaviour in performance evaluations during clinical clerkships. Med Teach. 2016;38:88–95.

Bok HG, Teunissen PW, Spruijt A, Fokkema JP, van Beukelen P, Jaarsma DA, et al. Clarifying students’ feedback-seeking behaviour in clinical clerkships. Med Educ. 2013;47:282–91.

Calleja P, Harvey T, Fox A, Carmichael M, et al. Feedback and clinical practice improvement: A tool to assist workplace supervisors and students. Nurse Educ Pract. 2016;17:167–73.

Carey EG, Wu C, Hur ES, Hasday SJ, Rosculet NP, Kemp MT, et al. Evaluation of Feedback Systems for the Third-Year Surgical Clerkship. J Surg Educ. 2017;74:787–93.

Daelmans HE, Overmeer RM, Van der Hem-Stokroos HH, Scherpbier AJ, Stehouwer CD, van der Vleuten CP. In-training assessment: qualitative study of effects on supervision and feedback in an undergraduate clinical rotation. Med Educ. 2006;40(1):51–8.

DeWitt D, Carline J, Paauw D, Pangaro L. Pilot study of a ’RIME’-based tool for giving feedback in a multi-specialty longitudinal clerkship. Med Educ. 2008;42:1205–9.

Dolan BM, O’Brien CL, Green MM. Including Entrustment Language in an Assessment Form May Improve Constructive Feedback for Student Clinical Skills. Med Sci Educ. 2017;27:461–4.

Duijn CC, Welink LS, Mandoki M, Ten Cate OT, Kremer WD, Bok HG. Am I ready for it? Students’ perceptions of meaningful feedback on entrustable professional activities. Perspect Med Educ. 2017;6:256–64.

Elnicki DM, Zalenski D. Integrating medical students’ goals, self-assessment and preceptor feedback in an ambulatory clerkship. Teach Learn Med. 2013;25:285–91.

Embo MP, Driessen EW, Valcke M, Van der Vleuten CP. Assessment and feedback to facilitate self-directed learning in clinical practice of midwifery students. Med Teach. 2010;32(7):e263–9.

Eva KW, Armson H, Holmboe E, Lockyer J, Loney E, Mann K, et al. Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ. 2012;17:15–26.

Farrell L, Bourgeois-Law G, Ajjawi R, Regehr G. An autoethnographic exploration of the use of goal oriented feedback to enhance brief clinical teaching encounters. Adv Health Sci Educ. 2017;22:91–104.

Fernando N, Cleland J, McKenzie H, Cassar K. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ. 2008;42:89–95.

Garino A. Ready, willing and able: a model to explain successful use of feedback. Adv Health Sci Educ. 2020;25:337–61.

Garner MS, Gusberg RJ, Kim AW. The positive effect of immediate feedback on medical student education during the surgical clerkship. J Surg Educ. 2014;71:391–7.

Bing-You R, Hayes V, Palka T, Ford M, Trowbridge R. The Art (and Artifice) of Seeking Feedback: Clerkship Students’ Approaches to Asking for Feedback. Acad Med. 2018;93:1218–26.

Haffling AC, Beckman A, Edgren G. Structured feedback to undergraduate medical students: 3 years’ experience of an assessment tool. Med Teach. 2011;33(7):e349–57.

Harrison CJ, Könings KD, Dannefer EF, Schuwirth LWT, Wass V, van der Vleuten CPM. Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures. Perspect Med Educ. 2016;5:276–84.

Harrison CJ, Könings KD, Schuwirth LW, Wass V, van der Vleuten CP, et al. Changing the culture of assessment: the dominance of the summative assessment paradigm. BMC Med Educ. 2017;17:1–4.

Harvey P, Radomski N, O’Connor D. Written feedback and continuity of learning in a geographically distributed medical education program. Med Teach. 2013;35(12):1009–13.

Hochberg M, Berman R, Ogilvie J, Yingling S, Lee S, Pusic M, et al. Midclerkship feedback in the surgical clerkship: the “Professionalism, Reporting, Interpreting, Managing, Educating, and Procedural Skills” application utilizing learner self-assessment. Am J Surg. 2017;213:212–6.

Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5):558–61.

Tai JHM, Canny BJ, Haines TP, Molloy EK. The role of peer-assisted learning in building evaluative judgement: opportunities in clinical medical education. Adv Health Sci Educ. 2016;21:659–76.

Johnson CE, Keating JL, Molloy EK. Psychological safety in feedback: What does it look like and how can educators work with learners to foster it? Med Educ. 2020;54:559–70.

Joshi A, Generalla J, Thompson B, Haidet P. Facilitating the Feedback Process on a Clinical Clerkship Using a Smartphone Application. Acad Psychiatry. 2017;41:651–5.

Kiger ME, Riley C, Stolfi A, Morrison S, Burke A, Lockspeiser T. Use of Individualized Learning Plans to Facilitate Feedback Among Medical Students. Teach Learn Med. 2020;32:399–409.

Kogan J, Shea J. Implementing feedback cards in core clerkships. Med Educ. 2008;42:1071–9.

Lefroy J, Walters B, Molyneux A, Smithson S. Can learning from workplace feedback be enhanced by reflective writing? A realist evaluation in UK undergraduate medical education. Educ Prim Care. 2021;32:326–35.

McGinness HT, Caldwell PHY, Gunasekera H, Scott KM. ‘Every Human Interaction Requires a Bit of Give and Take’: Medical Students’ Approaches to Pursuing Feedback in the Clinical Setting. Teach Learn Med. 2022. https://doi.org/10.1080/10401334.2022.2084401.

Noble C, Billett S, Armit L, Collier L, Hilder J, Sly C, et al. ‘It’s yours to take’: generating learner feedback literacy in the workplace. Adv Health Sci Educ. 2020;25:55–74.

Ogburn T, Espey E. The R-I-M-E method for evaluation of medical students on an obstetrics and gynecology clerkship. Am J Obstet Gynecol. 2003;189:666–9.

Po O, Reznik M, Greenberg L. Improving medical student feedback with a clinical encounter card. Ambul Pediatr. 2007;7:449–52.

Parkes J, Abercrombie S, McCarty T. Feedback sandwiches affect perceptions but not performance. Adv Health Sci Educ. 2013;18:397–407.

Paukert JL, Richards ML, Olney C. An encounter card system for increasing feedback to students. Am J Surg. 2002;183:300–4.

Rassos J, Melvin LJ, Panisko D, Kulasegaram K, Kuper A. Unearthing Faculty and Trainee Perspectives of Feedback in Internal Medicine: the Oral Case Presentation as a Model. J Gen Intern Med. 2019;34:2107–13.

Rizan C, Elsey C, Lemon T, Grant A, Monrouxe L. Feedback in action within bedside teaching encounters: a video ethnographic study. Med Educ. 2014;48:902–20.

Robertson AC, Fowler LC. Medical student perceptions of learner-initiated feedback using a mobile web application. J Med Educ Curric Dev. 2017;4:2382120517746384.

Scheidt PC, Lazoritz S, Ebbeling WL, Figelman AR, Moessner HF, Singer JE. Evaluation of system providing feedback to students on videotaped patient encounters. J Med Educ. 1986;61(7):585–90.

Sokol-Hessner L, Shea J, Kogan J. The open-ended comment space for action plans on core clerkship students’ encounter cards: what gets written? Acad Med. 2010;85:S110–4.

Sox CM, Dell M, Phillipi CA, Cabral HJ, Vargas G, Lewin LO. Feedback on oral presentations during pediatric clerkships: a randomized controlled trial. Pediatrics. 2014;134:965–71.

Spickard A, Gigante J, Stein G, Denny JC. Automatic capture of student notes to augment mentor feedback and student performance on patient write-ups. J Gen Intern Med. 2008;23:979–84.

Suhoyo Y, Van Hell EA, Kerdijk W, Emilia O, Schönrock-Adema J, Kuks JB, et al. Influence of feedback characteristics on perceived learning value of feedback in clerkships: does culture matter? BMC Med Educ. 2017;17:1–7.

Torre DM, Simpson D, Sebastian JL, Elnicki DM. Learning/feedback activities and high-quality teaching: perceptions of third-year medical students during an inpatient rotation. Acad Med. 2005;80:950–4.

Urquhart LM, Ker JS, Rees CE. Exploring the influence of context on feedback at medical school: a video-ethnography study. Adv Health Sci Educ. 2018;23:159–86.

Watling C, Driessen E, van der Vleuten C, Lingard L. Learning culture and feedback: an international study of medical athletes and musicians. Med Educ. 2014;48:713–23.

Watling C, Driessen E, van der Vleuten C, Vanstone M, Lingard L. Beyond individualism: Professional culture and its influence on feedback. Med Educ. 2013;47:585–94.

Soemantri D, Dodds A, McColl G. Examining the nature of feedback within the Mini Clinical Evaluation Exercise (Mini-CEX): an analysis of 1427 Mini-CEX assessment forms. GMS J Med Educ. 2018;35:Doc47.

van de Ridder JMM, Stokking KM, McGaghie WC, ten Cate OTJ. What is feedback in clinical education? Med Educ. 2008;42:189–97.

van de Ridder JMM, McGaghie WC, Stokking KM, ten Cate OTJ. Variables that affect the process and outcome of feedback, relevant for medical training: a meta-review. Med Educ. 2015;49:658–73.

Boud D. Feedback: ensuring that it leads to enhanced learning. Clin Teach. 2015. https://doi.org/10.1111/tct.12345.

Brehaut J, Colquhoun H, Eva K, Carroll K, Sales A, Michie S, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164:435–41.

Ende J. Feedback in clinical medical education. J Am Med Assoc. 1983;250:777–81.

Cantillon P, Sargeant J. Giving feedback in clinical settings. Br Med J. 2008;337(7681):1292–4.

Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2007;29:855–71.

Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53:76–85.

van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:205–14.

Schuwirth LWT, van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33:478–85.

Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52:654–63.

Henderson M, Boud D, Molloy E, Dawson P, Phillips M, Ryan T, Mahoney MP. Feedback for learning. Closing the assessment loop. Framework for effective learning. Canberra, Australia: Australian Government, Department for Education and Training; 2018.

Heeneman S, Oudkerk Pool A, Schuwirth LWT, van der Vleuten CPM, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015;49:487–98.

Lefroy J, Watling C, Teunissen PW, Brand P. Guidelines: the do’s, don’ts and don’t knows of feedback for clinical education. Perspect Med Educ. 2015;4:284–99.

Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach. 2012;34:787–91.

Telio S, Ajjawi R, Regehr G. The “Educational Alliance” as a Framework for Reconceptualizing Feedback in Medical Education. Acad Med. 2015;90:609–14.

Lockyer J, Armson H, Könings KD, Lee-Krueger RC, des Ordons AR, Ramani S, et al. In-the-Moment Feedback and Coaching: Improving R2C2 for a New Context. J Grad Med Educ. 2020;12:27–35.

Black P, Wiliam D. Developing the theory of formative assessment. Educ Assess Eval Account. 2009;21:5–31.

Author information

Authors and Affiliations

Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile

Javiera Fuentes-Cimma & Ignacio Villagran

School of Health Professions Education, Maastricht University, Maastricht, Netherlands

Javiera Fuentes-Cimma & Lorena Isbej

Rotterdam University of Applied Sciences, Rotterdam, Netherlands

Dominique Sluijsmans

Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile

Arnoldo Riquelme

School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile

Lorena Isbej

Sistema de Bibliotecas UC (SIBUC), Pontificia Universidad Católica de Chile, Santiago, Chile

María Teresa Olivares-Labbe

Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands

Sylvia Heeneman

Contributions

J.F-C, D.S, and S.H. made substantial contributions to the conception and design of the work. M.O-L contributed to the identification of studies. J.F-C, I.V, A.R, and L.I. made substantial contributions to the screening, reliability, and data analysis. J.F-C. wrote the main manuscript text. All authors reviewed the manuscript.

Corresponding author

Correspondence to Javiera Fuentes-Cimma.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Fuentes-Cimma, J., Sluijsmans, D., Riquelme, A. et al. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC Med Educ 24, 440 (2024). https://doi.org/10.1186/s12909-024-05439-6

Received: 25 September 2023

Accepted: 17 April 2024

Published: 23 April 2024

DOI: https://doi.org/10.1186/s12909-024-05439-6

Keywords

  • Clinical clerkship
  • Feedback processes
  • Feedforward
  • Formative feedback
  • Health professions
  • Undergraduate medical education
  • Undergraduate healthcare education
  • Workplace learning
