Vittana.org

20 Advantages and Disadvantages of Survey Research

Survey research is a critical component of measurement and applied social research. It is a broad area that encompasses many procedures that involve asking questions to specific respondents.

A survey can be anything from a short feedback form to intensive, in-depth interviews that attempt to gather specific data about situations, events, or circumstances. Although researchers can apply this tool in several different ways, you can divide surveys into two generic categories: interviews and questionnaires.

Recent innovations in this area, particularly the availability of online and mobile surveys, allow advanced software solutions to provide more data to researchers. That means the people who are in the most challenging places to reach can still provide feedback on critical ideas, services, or solutions.

Several survey research advantages and disadvantages exist, so reviewing each critical point is necessary to determine if there is value in using this approach for your next project.

List of the Advantages of Survey Research

1. It is an inexpensive method of conducting research. Surveys are one of the most inexpensive methods of gathering quantitative data that is currently available. Some questionnaires can be self-administered, making it possible to avoid in-person interviews. That means you have access to a massive amount of information from a large demographic in a relatively short time. You can place this option on your website, email it to individuals, or post it on a social media profile.

Some of these methods have no financial cost at all, relying on personal efforts to post and collect the information. Robust targeting is necessary to achieve the highest possible response rate and produce more accurate results.

2. Surveys are a practical solution for data gathering. Surveys are a practical way to gather information about something specific. You can target them to a demographic of your choice or manage them in several different ways. It is up to you to determine what questions get asked and in what format. You can use polls, questionnaires, quizzes, open-ended questions, and multiple-choice questions to collect information in real-time situations so that the feedback is immediately useful.

3. It is a fast way to get the results that you need. Surveys provide fast and convenient results because of today’s mobile and online tools. It is not unusual for this method of data collection to generate results in as little as one day, and sometimes it can be even less than that depending on the scale and reach of your questions. You no longer need to wait for another company to deliver the solutions that you need because these questionnaires give you insights immediately. That means you can start making decisions in the shortest amount of time possible.

4. Surveys provide opportunities for scalability. A well-constructed survey allows you to gather data from an audience of any size. You can distribute your questions to anyone in the world today because of the reach of the Internet. All you need to do is send them a link to the page where you solicit information from them. This process can be done automatically, allowing companies to increase the efficiency of their customer onboarding processes.

Marketers can also use surveys as a way to create lead nurturing campaigns. Scientific research gains a benefit through this process as well because it can generate social insights at a personal level that other methods are unable to achieve.

5. It allows for data to come from multiple sources at once. When you construct a survey to meet the needs of a demographic, then you have the ability to use multiple data points collected from various geographic locations. There are fewer barriers in place today with this method than ever before because of the online access we have around the world.

Some challenges do exist because of this benefit, namely the cultural differences between countries. If you conduct a global survey, then you will want to review all of the questions to ensure that they do not cause unintended offense.

6. Surveys give you the opportunity to compare results. After researchers quantify the information collected from surveys, the data can be used to compare and contrast the results from other research efforts. This benefit makes it possible to use the info to measure change. That means a questionnaire that goes out every month or each year becomes more valuable over time.

When you can gather a significant amount of data, then the picture you are trying to interpret will become much clearer. Surveys provide the capability of generating new strategies or identifying new trends to create more opportunities.

7. It offers a straightforward analysis and visualization of the data. Most surveys are quantitative by design, which allows for a straightforward analysis so that the results can be quickly visualized. That means a data scientist doesn’t need to be available to start the work of interpreting the results. You can take advantage of third-party software tools that can turn this info into usable reports, charts, and tables to facilitate the presentation efforts.

8. Survey respondents can stay anonymous with this research approach. If you choose to use online or email surveys, then there is a fantastic opportunity to allow respondents to remain anonymous. Complete anonymity is also possible with postal questionnaires, allowing researchers to maximize the levels of comfort available to the individuals who offer answers. Even a phone conversation doesn’t require a face-to-face meeting, creating this unique benefit.

When people have confidence in the idea that their responses will not be directly associated with their reputation, then researchers have an opportunity to collect information with greater accuracy.

9. It is a research tool with fewer time constraints. Surveys have fewer time limits associated with them when compared to other research methods. There is no one on the other end of an email or postal questionnaire that wants an immediate answer. That means a respondent can take additional time to complete each answer in the most comfortable way possible. This benefit is another way to encourage more honesty within the results since having a researcher presence can often lead to socially desirable answers.

10. Surveys can cover every component of any topic. Another critical advantage that surveys provide is the ability to ask as many questions as you want. There is a benefit in keeping an individual questionnaire short because a respondent may find a lengthy process to be frustrating. The best results typically come when you can create an experience that involves 10 or fewer questions.

Since this is a low-cost solution for gathering data, there is no harm in creating multiple surveys that have an easy mode of delivery. This benefit gives you the option to cover as many sub-topics as possible so that you can build a complete profile of almost any subject matter.

List of the Disadvantages of Survey Research

1. There is always a risk that people will provide dishonest answers. The risk of receiving a dishonest answer is lower when you use anonymous surveys, but it does not disappear entirely. Some people want to help researchers come to whatever specific conclusion they think the process is pursuing. There is also a level of social desirability bias that creeps into the data based on the interactions that respondents have with questionnaires. You can avoid some of this disadvantage by assuring individuals that their privacy is a top priority and that the process you use prevents personal information leaks, but you can’t stop this problem 100% of the time.

2. You might discover that some questions don’t get answers. If you decide to use a survey to gather information, then there is a risk that some questions will be left unanswered or ignored. If some questions are not required, then respondents might choose not to answer them. An easy way to get around this disadvantage is to use an online solution that makes answering questions a required component of each step. Then make sure that your survey stays short and to the point to avoid having people abandon the process altogether.

3. There can be differences in how people understand the survey questions. There can be a lot of information that gets lost in translation when researchers opt to use a survey instead of other research methods. When there is not someone available to explain a questionnaire entirely, then the results can be somewhat subjective. You must give everyone an opportunity to have some understanding of the process so that you can encourage accurate answers.

It is not unusual to have respondents struggle to grasp the meaning of some questions, even though the text might seem clear to the people who created it. Whenever miscommunication is part of the survey process, the results will skew in unintended directions. The only way to avoid this problem is to make the questions as simple as possible.

4. Surveys struggle to convey the emotions behind the results. A survey does not do a good job of capturing a person’s emotional response to the questions they encounter. The only way to gather this information is to have an in-person interview with every respondent. Facial expressions and other forms of body language can add subtlety to a conversation that isn’t possible when someone is filling out an online questionnaire.

Some researchers get stuck trying to interpret feelings in the data they receive. A sliding-scale response that includes various levels of agreement or disagreement can try to replicate the concept of emotion, but it isn’t quite the same as being in the same room as someone. Observing tone and body language in person will always be a better way to gather emotional information than multiple-choice questions.

5. Some answers can be challenging to classify. Surveys produce a lot of data because of their nature. You can tabulate multiple-choice questions, graph agreement or disagreement in specific areas, or create open-ended questions that can be challenging to analyze. Individualized answers can create a lot of useful information, but they can also provide you with data that cannot be quantified. If you incorporate several questions of this nature into a questionnaire, then it will take a long time to analyze what you received.

Only 10% of the questions on the survey should have an open-ended structure. If the questions are confusing or bothersome, then you might find that the information you must manually review is mostly meaningless.

6. You must remove someone with a hidden agenda as soon as possible. Respondent bias can be a problem in any research type. Participants in your survey could have an interest in your idea, service, or product. Others might find themselves being influenced to participate because of the subject material found in your questionnaire. These issues can lead to inaccurate data gathering because they generate an imbalance of respondents who see the process as either overly positive or overly negative.

This disadvantage of survey research can be avoided by using effective pre-screening tools that use indirect questions that identify this bias.

7. Surveys don’t provide the same level of personalization. Any marketing effort will feel impersonal unless you take the time to customize the process. Because the information you want to collect on a questionnaire is generic by nature, it can be challenging to generate any interest in this activity because there is no value promised to the respondent. Some people can be put off by the idea of filling out a generic form, leading them to abandon the process.

This issue is especially difficult when your survey is taken voluntarily online, whether or not it follows an email subscription or a recent purchase.

8. Some respondents will choose answers before reading the questions. Every researcher hopes that respondents will provide conscientious responses to the questions offered in a survey. The problem here is that there is no way to know if the person filling out the questionnaire really understood the content provided to them. You don’t even have a guarantee that the individual read the question thoroughly before offering a response.

There are times when answers are chosen before someone fully reads the question and all of the answers. Some respondents skip through questions or make instant choices without reading the content at all. Because you have no way to know when this issue occurs, there will always be a measure of error in the collected data.

9. Accessibility issues can impact some surveys. A lack of accessibility is always a threat that researchers face when using surveys. This option might be unsuitable for individuals who have a visual or hearing impairment. Literacy is often necessary to complete this process. These issues should come under consideration during the planning stages of the research project to avoid this potential disadvantage. Then make the effort to choose a platform that has the accessibility options you need already built into it.

10. Survey fatigue can be a real issue that some respondents face. There are two issues that manifest themselves because of this disadvantage. The first problem occurs before someone even encounters your questionnaire. Because they feel overwhelmed by the growing number of requests for information, a respondent is automatically less inclined to participate in a research project. That results in a lower overall response rate.

Then there is the problem of fatigue that happens while taking a survey. This issue occurs when someone feels like the questionnaire is too long or contains questions that seem irrelevant. You can tell when this problem happens because a low completion rate is the result. Try to make the process as easy as possible to avoid the issues with this disadvantage.

Surveys sometimes have a poor reputation. Researchers have seen response rates decline because this method of data gathering has become unpopular since the 1990s. Part of the reason for this perception is due to the fact that everyone tries to use it online since it is a low-cost way to collect information for decision-making purposes.

That’s why researchers are moving toward a rewards-based system to encourage higher participation and completion rates. The most obvious way to facilitate this behavior is to offer something tangible, such as a gift card or a contest entry. You can generate more responses by creating an anonymous process that encourages direct and honest answers.

These survey research advantages and disadvantages prove that this process isn’t as easy as it might seem from the outside. Until you sit down to start writing the questions, you may not entirely know where you want to take this data collection effort. By incorporating the critical points above, you can begin to craft questions in a way that encourages the completion of the activity.

When to Use Surveys in Psychology Research

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

A survey is a data collection tool used to gather information about individuals. Surveys are commonly used in psychology research to collect self-report data from study participants. A survey may focus on factual information about individuals, or it might aim to obtain the opinions of the survey takers.

Psychology surveys involve asking participants a series of questions to learn more about a phenomenon, such as how they think, feel, or behave. Such tools can be helpful for learning about behaviors, conditions, traits, or other topics that interest researchers.

At a Glance

Psychological surveys are a valuable research tool that allow scientists to collect large quantities of data relatively quickly. However, such surveys sometimes have low response rates that can lead to biased results. Learning more about how surveys are used in psychology can give you a better understanding of how this type of research can be used to learn more about the human mind and behavior.

Reasons to Use Surveys in Psychology

So why do psychologists opt to use surveys so often in psychology research?

Surveys are one of the most commonly used research tools because they can be utilized to collect data and describe naturally occurring phenomena that exist in the real world.

They offer researchers a way to collect a great deal of information in a relatively quick and easy way. A large number of responses can be obtained quite quickly, which allows scientists to work with a lot of data.

Surveys in psychology are vital because they allow researchers to:

  • Understand the experiences, opinions, and behaviors of the participants
  • Evaluate respondent attitudes and beliefs
  • Look at the risk factors in a sample
  • Assess individual differences
  • Evaluate the success of interventions or preventative programs

How to Use Surveys in Psychology

A survey can be used to investigate the characteristics, behaviors, or opinions of a group of people. These research tools can be used to ask about demographic characteristics such as sex, religion, ethnicity, and income.

They can also collect information on experiences, opinions, and even hypothetical scenarios. For example, researchers might present people with a possible scenario and then ask them how they might respond in that situation.

How do researchers go about collecting information using surveys?

How Surveys Are Administered

A survey can be administered in a couple of different ways:

  • Structured interview: The researcher asks each participant the questions.
  • Questionnaire: The participant fills out the survey independently.

You have probably taken many different surveys in the past, although the questionnaire method tends to be the most common.  

Surveys are generally standardized to ensure that they have reliability and validity. Standardization is also important so that the results can be generalized to the larger population.

Advantages of Psychological Surveys

One of the big benefits of using surveys in psychological research is that they allow researchers to gather a large quantity of data relatively quickly and cheaply.

A survey can be administered as a structured interview or as a self-report measure, and data can be collected in person, over the phone, or on a computer.

  • Data collection : Surveys allow researchers to collect a large amount of data in a relatively short period.
  • Cost-effectiveness : Surveys are less expensive than many other data collection techniques.
  • Ease of administration : Surveys can be created quickly and administered easily.
  • Usefulness : Surveys can be used to collect information on a broad range of things, including personal facts, attitudes, past behaviors, and opinions.

Disadvantages of Using Surveys in Psychology

One potential problem with written surveys is nonresponse bias.

Experts suggest that return rates of 85% or higher are considered excellent, but anything below 60% might severely impact the sample's representativeness.
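As a rough illustration of how those thresholds work in practice, here is a minimal Python sketch; the cutoffs of 85% and 60% simply mirror the guidance above, and the counts are made-up examples:

def response_rate(returned, distributed):
    """Return the survey response rate as a percentage."""
    return 100 * returned / distributed

# Hypothetical example: 430 completed questionnaires out of 600 distributed.
rate = response_rate(430, 600)  # roughly 71.7%

if rate >= 85:
    print(f"{rate:.1f}%: excellent return rate")
elif rate >= 60:
    print(f"{rate:.1f}%: acceptable, but watch for nonresponse bias")
else:
    print(f"{rate:.1f}%: representativeness may be severely affected")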

  • Problems with construction and administration : Poor survey construction and administration can undermine otherwise well-designed studies.
  • Inaccuracy : The answer choices provided in a survey may not be an accurate reflection of how the participants actually feel.
  • Poor response rates : While random sampling is generally used to select participants, response rates can bias the results of a survey. Strategies to improve response rates sometimes include offering financial incentives, sending personalized invitations, and reminding participants to complete the survey.
  • Biased results : The social desirability bias can lead people to respond in a way that makes them look better than they really are. For example, a respondent might report that they engage in more healthy behaviors than they do in real life.

Pros

  • Less expensive
  • Easy to create and administer
  • Diverse uses

Cons

  • Subject to nonresponse bias
  • May be poorly designed
  • Limited answer choices can influence results
  • Subject to social desirability bias

Types of Psychological Surveys

Surveys can be implemented in a number of different ways. The chances are good that you have participated in a number of different market research surveys in the past.

Some of the most common ways to administer surveys include:

  • Mail : An example might include an alumni survey distributed via direct mail by your alma mater.
  • Telephone : An example of a telephone survey would be a market research call about your experiences with a certain consumer product.
  • Online : Online surveys might focus on your experience with a particular retailer, product, or website.
  • At-home interviews : The U.S. Census is a good example of an at-home interview survey administration.

Important Considerations When Using Psychological Surveys

When researchers are using surveys in psychology research, there are important ethical factors they need to consider while collecting data.

  • Obtaining informed consent is vital : Before administering a psychological survey, all participants should be informed about the purpose and potential risks before responding.
  • Creating a representative sample : Researchers should ensure that their participant sample is representative of the larger population. This involves including participants who are part of diverse populations.
  • Participation must be voluntary : Anyone who responds to a survey must do so of their own free will. They should not feel coerced or bribed into participating. 
  • Take steps to reduce bias : Questions should be carefully constructed so they do not affect how the participants respond. Researchers should also be cautious to avoid insensitive or offensive questions.
  • Confidentiality : All survey responses should be kept confidential. Researchers often utilize special software that ensures privacy, protects data, and avoids using identifiable information.

Psychological surveys can be powerful, convenient, and informative research tools. Researchers often utilize surveys in psychology to collect data about how participants think, feel, or behave. While useful, it is important to construct these surveys carefully to avoid asking leading questions and reduce bias.

National Science Foundation. Directorate for Education and Human Resources Division of Research, Evaluation and Communication. The 2002 User-Friendly Handbook for Project Evaluation. Section III. An Overview of Quantitative and Qualitative Data Collection Methods. 5. Data collection methods: Some tips and comparisons. Arlington, Va.: The National Science Foundation, 2002.

Jones TL, Baxter MA, Khanduja V. A quick guide to survey research. Ann R Coll Surg Engl. 2013;95(1):5-7. doi:10.1308/003588413X13511609956372

Finkel EJ, Eastwick PW, Reis HT. Best research practices in psychology: Illustrating epistemological and pragmatic considerations with the case of relationship science. J Pers Soc Psychol . 2015;108(2):275-97. doi:10.1037/pspi0000007

Harris LR, Brown GTL. Mixing interview and questionnaire methods: Practical problems in aligning data . Practical Assessment, Research, and Evaluation. 2010;15 (1). doi:10.7275/959j-ky83

Fincham JE. Response rates and responsiveness for surveys, standards, and the Journal . Am J Pharm Educ . 2008;72(2):43. doi:10.5688/aj720243

Shiyab W, Ferguson C, Rolls K, Halcomb E. Solutions to address low response rates in online surveys .  Eur J Cardiovasc Nurs . 2023;22(4):441-444. doi:10.1093/eurjcn/zvad030

Latkin CA, Mai NV, Ha TV, et al. Social desirability response bias and other factors that may influence self-reports of substance use and HIV risk behaviors: A qualitative study of drug users in Vietnam. AIDS Educ Prev. 2016;28(5):417-425. doi:10.1521/aeap.2016.28.5.417

American Psychological Association.  Ethical principles of psychologists and code of conduct .

9.1 Overview of Survey Research

Learning objectives.

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative approach that has two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987). By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.)

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in Section 9.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of college students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs ). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 “Some Lifetime Prevalence Results From the National Comorbidity Survey” presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and also to clinicians and policymakers who need to understand exactly how common these disorders are.

Table 9.1 Some Lifetime Prevalence Results From the National Comorbidity Survey

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on college students. Although this is not a typical use of survey research, it certainly illustrates the flexibility of this approach.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force

Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Survey Research

34 Overview of Survey Research

Learning objectives.

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research  is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents  in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.  Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1]. By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course, it was, demonstrating the effectiveness of careful survey methodology. (We will consider the reasons that Gallup was right later in this chapter.) Gallup’s demonstration of the power of careful survey methods led later researchers to conduct local election surveys and, in 1948, the first national election survey by the Survey Research Center at the University of Michigan. This work eventually became the American National Election Studies ( https://electionstudies.org/ ), a collaboration of Stanford University and the University of Michigan, and these studies continue today.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in  Section 7.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs ). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003.  Table 7.1  presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used as a data collection method within experimental research to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Survey research is thus a flexible approach that can be used to study a variety of basic and applied research questions.

  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Survey Research | Definition, Examples & Methods

Published on August 20, 2019 by Shona McCombes. Revised on June 22, 2023.

Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyze the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research : investigating the experiences and characteristics of different social groups
  • Market research : finding out what customers think about products, services, and companies
  • Health research : collecting data from patients about symptoms and treatments
  • Politics : measuring public opinion about parties and policies
  • Psychology : researching personality traits, preferences and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and in longitudinal studies , where you survey the same sample several times over an extended period.

Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • US college students
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18-24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalized to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Several common research biases can arise if your survey is not generalizable, particularly sampling bias and selection bias. The presence of these biases has serious repercussions for the validity of your results.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every college student in the US. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
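Online sample calculators commonly implement a standard sample size formula for proportions (Cochran's formula) with a finite-population correction. The sketch below is a minimal Python version under that assumption; the default confidence level, margin of error, and the example population of 20,000 customers are illustrative values, not recommendations:

import math

def sample_size(population, margin_of_error=0.05, z=1.96, proportion=0.5):
    """Cochran's sample size formula with a finite-population correction.

    population      -- size of the target population (N)
    margin_of_error -- acceptable error, e.g. 0.05 for plus or minus 5%
    z               -- z-score for the confidence level (1.96 for 95%)
    proportion      -- expected proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

# Hypothetical example: customers of a specific company with 20,000 customers.
print(sample_size(20_000))  # about 377 responses needed

Note that this only covers the statistical side of sample size; whether the sample is actually representative depends on how respondents are selected, as discussed next.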

There are many sampling methods that allow you to generalize to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions. Again, beware of various types of sampling bias as you design your sample, particularly self-selection bias , nonresponse bias , undercoverage bias , and survivorship bias .

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by mail, online or in person, and respondents fill it out themselves.
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses.

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g. residents of a specific region).
  • The response rate is often low, and the results are at risk of biases like self-selection bias.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyze.
  • The anonymity and accessibility of online surveys mean you have less control over who responds, which can lead to biases like self-selection bias .

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping mall or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g. the opinions of a store’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations and is at risk for sampling bias .

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyzes the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analyzed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g. yes/no or agree/disagree )
  • A scale (e.g. a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g. age categories)
  • A list of options with multiple answers possible (e.g. leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analyzed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an “other” field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic. Avoid jargon or industry-specific terminology.

Survey questions are at risk for biases like social desirability bias , the Hawthorne effect , or demand characteristics . It’s critical to use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no indication that you’d prefer a particular answer or emotion.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by mail, online, or in person.

Step 5: Analyze the survey results

There are many methods of analyzing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organizing them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analyzing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
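As an illustrative alternative to the statistical packages just mentioned, the processing, cleaning, and descriptive steps described above can be sketched in Python with pandas. The file name, column names, and the attention-check item below are hypothetical placeholders, not part of any particular survey:

import pandas as pd

# Hypothetical export of survey responses; the file and column names are made up.
df = pd.read_csv("survey_responses.csv")

# Clean the data: drop rows where required questions were left blank,
# and drop respondents who failed an attention-check item.
required = ["age_group", "satisfaction", "would_recommend"]
df = df.dropna(subset=required)
df = df[df["attention_check"] == "agree"]

# Simple descriptive analysis: share of each rating, and counts by group.
print(df["satisfaction"].value_counts(normalize=True))
print(pd.crosstab(df["age_group"], df["would_recommend"]))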

Step 6: Write up the survey results

Finally, when you have collected and analyzed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyze it. In the results section, you summarize the key results from your analysis.

In the discussion and conclusion , you give your explanations and interpretations of these results, answer your research question, and reflect on the implications and limitations of the research.

Frequently asked questions about surveys

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.
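To make the idea of combining item responses into an overall scale score concrete, here is a small Python sketch that sums a respondent's item scores and reverse-codes a negatively worded item. The item names, the 5-point coding, and which item is reverse-keyed are assumptions for illustration only:

# One respondent's answers to a hypothetical 4-item, 5-point Likert scale
# (1 = strongly disagree ... 5 = strongly agree).
responses = {"item_1": 4, "item_2": 2, "item_3": 5, "item_4_reversed": 1}

def score_likert(answers, reverse_keyed, points=5):
    """Sum the item scores, flipping any reverse-worded items first."""
    total = 0
    for item, value in answers.items():
        if item in reverse_keyed:
            value = points + 1 - value  # e.g. 1 -> 5 and 2 -> 4 on a 5-point item
        total += value
    return total

print(score_likert(responses, reverse_keyed={"item_4_reversed"}))  # 4 + 2 + 5 + 5 = 16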

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

McCombes, S. (2023, June 22). Survey Research | Definition, Examples & Methods. Scribbr. Retrieved March 25, 2024, from https://www.scribbr.com/methodology/survey-research/

Chapter 9: Survey Research

Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research  is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents  in survey research) to report directly on their own thoughts, feelings, and behaviours. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.  Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987) [1]. By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.) Interest in surveying around election times has led to several long-term projects, notably the Canadian Election Studies, which have measured the opinions of Canadian voters around federal elections since 1965. Anyone can access the data and read about the results of the experiments in these studies.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in  Section 9.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States . In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003.  Table 9.1  presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Although this approach is not a typical use of survey research, it certainly illustrates the flexibility of this method.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force
  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press. ↵
  • The lifetime prevalence of a disorder is the percentage of people in the population that develop that disorder at any time in their lives. ↵

Survey research: A quantitative approach in which variables are measured using self-reports from a sample of the population.

Respondents: Participants of a survey.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.




7.2 Assessing survey research

Learning Objectives

  • Identify and explain the strengths of survey research
  • Identify and explain the weaknesses of survey research
  • Define response rate, and discuss some of the current thinking about response rates

Survey research, as with all methods of data collection, comes with both strengths and weaknesses. We’ll examine both in this section.

Strengths of survey methods

Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. Some methods of administering surveys can be cost effective. In a study of older people’s experiences in the workplace, researchers were able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of a seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. In some contexts, $1,000 is a lot of money, but just imagine what it might have cost to visit each of those people individually to interview them in person. You would have to dedicate at least a few weeks of your life, drive around the state, and pay for meals and lodging to interview each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey.


Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 6. Of all the data collection methods described in this textbook, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized in that the same questions, phrased in exactly the same way, are posed to participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 9, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. The versatility offered by survey research means that understanding how to construct and administer surveys is a useful skill to have for all kinds of jobs. Lawyers might use surveys in their efforts to select juries, social service and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts, businesses use them to learn how to market their products, governments use them to understand community opinions and needs, and politicians and media outlets use surveys to understand their constituencies.

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility

Weaknesses of survey methods

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any number of questions on any number of topics in them, the survey researcher is generally stuck with a single instrument for collecting data: the questionnaire. Surveys are in many ways rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.

Depth can also be a problem with surveys. Survey questions are usually standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not be as valid as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president, as in our opening example in this chapter. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for a qualified African American but not if he chose a vice president the respondent didn’t like?

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth

Response rates

The relative strength or weakness of an individual survey is strongly affected by the response rate, the percentage of people invited to take the survey who actually complete it. Let’s say a researcher sends a survey to 100 people. It would be wonderful if all 100 returned completed questionnaires, but the chances of that happening are about zero. If the researcher is incredibly lucky, perhaps 75 or so will return completed questionnaires. In this case, the response rate would be 75%. The response rate is calculated by dividing the number of surveys returned by the number of surveys distributed.
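
To make that arithmetic concrete, here is a minimal Python sketch of the calculation; the counts are hypothetical and simply mirror the example above.

surveys_distributed = 100   # hypothetical number of surveys sent out
surveys_returned = 75       # hypothetical number completed and returned

# Response rate = surveys returned / surveys distributed, expressed as a percentage.
response_rate = surveys_returned / surveys_distributed * 100
print(f"Response rate: {response_rate:.0f}%")   # prints: Response rate: 75%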

Response rates vary, and researchers don’t always agree about what makes a good response rate, but having 75% of your surveys returned would be considered good, even excellent, by most survey researchers. There has been a lot of research done on how to improve a survey’s response rate. Suggestions include personalizing questionnaires by, for example, addressing them to specific respondents rather than to some generic recipient such as “madam” or “sir”; enhancing the questionnaire’s credibility by providing details about the study and contact information for the researcher, and perhaps by partnering with agencies likely to be respected by respondents, such as universities, hospitals, or other relevant organizations; sending out pre-questionnaire notices and post-questionnaire reminders; and including some token of appreciation with mailed questionnaires, even a small one such as a $1 bill.

The major concern with response rates is that a low rate of response may introduce nonresponse bias into a study’s findings. What if only those who have strong opinions about your study topic return their questionnaires? If that is the case, we may well find that our findings don’t at all represent how things really are or, at the very least, we are limited in the claims we can make about patterns found in our data. While high return rates are certainly ideal, a recent body of research shows that concern over response rates may be overblown (Langer, 2003). Several studies have shown that low response rates did not make much difference in findings or in sample representativeness (Curtin, Presser, & Singer, 2000; Keeter, Kennedy, Dimock, Best, & Craighill, 2006; Merkle & Edelman, 2002). For now, the jury may still be out on what makes an ideal response rate and on whether, or to what extent, researchers should be concerned about response rates. Nevertheless, certainly no harm can come from aiming for as high a response rate as possible.
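
To illustrate why a low response rate can matter, the following toy Python simulation (not drawn from any study cited here) assumes that people who hold a strong opinion on the topic are much more likely to return a questionnaire; under that assumption, the respondents overstate how common the strong opinion is in the population.

import random

random.seed(1)

# Hypothetical population: 40% hold a strong opinion (coded 1), 60% do not (coded 0).
population = [1] * 4000 + [0] * 6000
true_share = sum(population) / len(population)

# Assumed response behaviour: strong-opinion holders return the survey far more often.
def returns_survey(person):
    return random.random() < (0.60 if person == 1 else 0.15)

mailed = random.sample(population, 1000)                # surveys sent out
returned = [p for p in mailed if returns_survey(p)]     # surveys that come back

print(f"Response rate: {len(returned) / len(mailed):.0%}")
print(f"True share with the strong opinion: {true_share:.2f}")
print(f"Share among survey respondents:     {sum(returned) / len(returned):.2f}")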

Key Takeaways

  • Strengths of survey research include its cost effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and issues with depth.
  • While survey researchers should always aim to obtain the highest response rate possible, some recent research argues that high return rates on surveys may be less important than we once thought.
  • Nonresponse bias: bias reflecting differences between the people who respond to your survey and those who do not respond
  • Response rate: the number of people who respond to your survey divided by the number of people to whom the survey was distributed


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Social Science Works


The Limits Of Survey Data: What Questionnaires Can’t Tell Us


Sarah Coughlan


All research methodologies have their limitations, as many authors have pointed out before (see for example Visser, Krosnick and Lavrakas, 2000). From the generalisability of data to the nitty-gritty of bias and question wording, every method has its flaws. In fact, the in-fighting between methodological approaches is one of social science’s worst kept secrets: the hostility between quantitative and qualitative data scholars knows almost no bounds (admittedly that’s ‘almost no bounds’ within the polite world of academic debate) and doesn’t look set to be resolved any time soon. That said, some methods are better suited than others to certain types of studies. This article will examine the role of survey data in values studies and argue that it is a blunt tool for this kind of research, and that qualitative study methods, particularly deliberation, are more appropriate. It will do so via an examination of a piece of 2016 research published by the German ministry for migrants and refugees (the BAMF), which explored both the demographics and the social values of refugees who have arrived in Germany in the last three years. This article will argue that surveys are unfit to get at the issues that are most important to people.

The Good, The Bad & The Survey

Germany has played Europe’s leading role as the refugee crisis has deepened worldwide following the collapse of government in Syria and the rise of ISIS. Today, there are 65.3 million displaced people across the world and 21.3 million refugees (UNHCR, 2016), a number that surpasses even the number of refugees following the Second World War. The exact number of refugees living in Germany (official statistics typically count all migrants seeking protection as refugees, although there is some difference between the various legal statuses) is not entirely clear and the figure is unstable. And while this figure still lags behind the efforts made by countries like Turkey and Jordan, it represents the highest total number of refugees in a European country and matches the per capita efforts of Sweden. Meanwhile, there are signs that Germany’s residents do not always welcome their new neighbours. For example, in 2016 there were almost 2,000 reported attacks on refugees and refugee homes (Amadeu Antonio Foundation, 2017); a similar trend was established by Benček and Strasheim (2016), and the rise of the far-right, anti-migrant party the AfD in local elections last year points to unresolved resentment towards the newcomers.

In this context then, it makes sense for the BAMF (Bundesamt für Migration und Flüchtlinge), the ministry responsible for refugees and migrants in Germany, to respond to pressure in the media and from politicians to get a better overall picture of the kinds of people the refugees to Germany are. As such, their 2016 paper, “Survey of refugees – flight, arrival in Germany and first steps of integration” [1], details a host of information about newcomers in Germany. The study, which relied on questionnaires administered by BAMF officials in a number of languages, in a face-to-face or online format (BAMF, 2016, 11), asked questions of 4,500 refugee respondents. For the most part, the study offers excellent insight into the demographic history of refugees to Germany and will be helpful for policymakers looking to ensure that efforts to help settle refugees are appropriately targeted. For example, the study detailed the relatively high level of education enjoyed by typical refugees to Germany (an average of between 10 and 11 years of schooling) (ibid., 37), some of the specific difficulties this group has in successfully navigating the job market (ibid., 46), and where this group turns for help with this.

In addition to offering the most up-to-date information about refugees’ home countries and their path into Germany, the study is extremely helpful for politicians and scholars looking to enhance their understanding of logistical and practical issues facing migrants; for example, who has access to integration courses? How many unaccompanied children are in Germany? How many men and how many women fled to Germany? Here, the study is undoubtedly helpful.  However, the latter stages of the report purport to examine the social values held by refugees, and it is this part of the study that this article takes issue with.

Respondents were asked to answer questions about their values. The topics included the right form a government should take, the role of democracy, the voting rights of women, the role of religion in the state, men’s and women’s equality in a marriage, and perceived differences between the values of refugees and Germans, among others. While this article doesn’t take issue with the veracity of the findings reported in the study, it does argue that the methods used here are inappropriate for the task at hand. Consider first the questions relating to refugees’ attitudes toward democracy and government. The report found that 96% of refugee respondents agreed with the statement “One should have a democratic system” [2], compared with 95% of the German control group (ibid., 52). This finding was picked up in the liberal media and heralded as a sign that refugees share central German social values. It is entirely possible that this is true. However, it isn’t difficult to see the ways in which this number might have been accidentally manufactured, and it should hence be treated with considerable caution.

To do so, one must first consider the circumstances of the interview or questionnaire. As a refugee in Germany, you are confronted with the authority of the BAMF regularly, and you are also likely aware that it is representatives from this organization who ultimately decide on your and your family’s status in Germany and whether you will have the right to stay or not. You are then asked for detailed information about your family history, your education and your participation in integration courses by a representative from this institution. Finally, the interviewer asks what your views are on democracy, women’s rights and religion. Is it too much of a jump to suggest that someone who has had to flee their home and take the extraordinarily dangerous trip to Europe is savvy enough to spot a potential trap here? In these circumstances, there is a tendency to give the answer the interviewer wants to hear. This interviewer bias effect is not a problem exclusive to surveys of refugees’ social values (Davis, 2013); however, the power imbalance in these interactions exacerbates the effect. The argument advanced here is not that refugees do not hold a positive view of democracy, but that trying to find out their views via a survey of this sort is flawed. In fact, the report doesn’t find any significant points of departure between Germans and refugees on any of the major values, other than the difficulties presented by women earning more money than their husbands and its potential to cause marital difficulties (ibid., 54).


Asking Questions About Essentially Contested Concepts

Beyond the serious power imbalance noted above, another key issue not addressed in the BAMF study is the question of contested concepts. Essentially contested concepts, an idea first advanced by W.B. Gallie in 1956, are the big topics like art, beauty, fairness and trust. These big topics, which also include traditionally social scientific and political topics like democracy and equality, are defined as ‘essentially contested’ when the premise of the concept – for example ‘freedom’ – is widely accepted, but how best to realise freedom is disputed (Hart, 1961, 156). The BAMF survey uses these big topics without offering a definition to go with them. What do people mean when they say that ‘men and women should have equal rights’ [3] (BAMF, 2016, 52)? What does equality mean in this context? There are of course many different ways that ‘equality’ between men and women can be interpreted. For example, many conservative Catholic churches argue that men and women are ‘equal’ but different, and have clear family roles for men and women. Equally, participants could mean that they believe men and women should have equal, shared family responsibilities; there is no way to know this from the study. Hence, it is difficult to know how best to interpret these kinds of statistics without considerable context.

As part of the work undertaken by Social Science Works, the team are regularly confronted by these kinds of questions via deliberative workshops with Germans and refugees. In these workshops the team ask questions like “What is democracy?”, “What is freedom?”, “What is equality?”. In doing so, the aim of the workshops is to build a consensus together by formulating and reformulating possible definitions [4], finding common ground between conflicting perspectives and ultimately defining the concepts as a group. Among the most striking things about these meetings is the initial reluctance of participants to volunteer answers – there is a real lack of certainty about what these kinds of words mean in practice, even among participants who, for example, have studied social and political sciences or work in politics. With the benefit of hindsight, workshop participants have acknowledged these problems in dealing with essentially contested concepts. Participants have commented:

“Social Science Works has encouraged me to question my own views more critically and to develop a more precise concept for large and often hard to grasp terms such as “democracy”, “freedom” or “equality”. This experience has shown me how complicated it is for me – as someone who really felt proficient in these questions – to formulate such ideas concretely.” (German participant from the 2016 series of workshops)

“The central starting point for the training was, for me, the common understanding of democracy and freedom. In the intensive discussion, I realized that these terms, which seem self-evident, are anything but.” (German participant from the 2016 series of workshops)

In attempting to talk about these big issues, it becomes clear just how little consensus there is on these kinds of topics. The participants quoted here work and volunteer in the German social sector and hence confront these kinds of ideas implicitly on a daily basis. The level of uncertainty pointed to here, together with Social Science Works’ wider experience working with volunteers, social workers and refugees, suggests that the lack of fluency in essentially contested concepts is a wider problem. In the context of the BAMF research then, it is clear that readers ought to take the chapter detailing the ‘values’ of refugees and Germans with a generous pinch of salt.

Building Consensus & Moving Forward

This article does not seek to suggest that there is no role for survey data in helping to answer questions relating to refugees in Germany. For the most part, the BAMF research offers excellent data on key questions relating to demographics and current social conditions. Hence, the study ought to be an excellent tool for policymakers seeking to better target their support of refugees. However, it is equally clear that when it comes to discussing essentially contested concepts like democracy and equality, a survey is a very blunt tool, and here the BAMF study fails to convince. The study seeks to make clear that the social and political values of Germans and refugees are similar and that the differences are minimal. The experience in the deliberative workshops hosted by Social Science Works suggests that this is probably true, insofar as both groups find these concepts difficult to define and have to wrestle to make sense of them. This is not something articulated in the BAMF research, however.

Our collective lack of fluency in these topics, even among social and political scholars, has long roots best described another time. However, if we are to improve our abilities to discuss these kinds of topics and build collective ideas for social change and cohesion, there are much better places to begin than a questionnaire. If we are to build a collective understanding of our political structures and our social values, we need to address this lack of fluency by engaging in discussions with diverse groups and together building a coherent idea about social and political ideas.

[1]  Original German: “Befragung von Geflüchteten – Flucht, Ankunft in Deutschland und erste Schritte der Integration“

[2] Original German: „Man sollte ein demokratisches System haben.“

[3] Original German: „Frauen haben die gleichen Rechte wie Männer“

[4] For a more detailed overview of the deliberative method in these workshops, see Blokland, 2016.

Amadeu Antonio Foundation (2016), Hate Speech Against Refugees, Amadeu Antonio Foundation, Berlin.

Benček, D. and Strasheim, J. (2016), Refugees Welcome? Introducing a New Dataset on Anti-Refugee Violence in Germany, 2014–2015, Working Paper No. 2032, University of Kiel.

Davis, R. E., et al. (2010), Interviewer effects in public health surveys, Health Education Research, Oxford University Press, Oxford.

Hart, H.L.A. (1961), The Concept of Law, Oxford University Press, Oxford.

IAB-BAMF-SOEP (2016), Befragung von Geflüchteten – Flucht, Ankunft in Deutschland und erste Schritte der Integration, BAMF-Forschungsbericht 29, Nürnberg: Bundesamt für Migration und Flüchtlinge.

UNHCR (2016), Global Trends: Forced Displacement in 2015, UNHCR, New York.

Visser, P. S., Krosnick, J. A., & Lavrakas, P. (2000), Survey research, in H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology, New York: Cambridge University Press.



Social Sci LibreTexts

8.2: Pros and Cons of Survey Research


Learning Objectives

  • Identify and explain the strengths of survey research.
  • Identify and explain the weaknesses of survey research.

Survey research, as with all methods of data collection, comes with both strengths and weaknesses. We’ll examine both in this section.

Strengths of Survey Method

Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In my own study of older people’s experiences in the workplace, I was able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of my seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. I realize that $1,000 is nothing to sneeze at. But just imagine what it might have cost to visit each of those people individually to interview them in person. Consider the cost of gas to drive around the state, other travel costs, such as meals and lodging while on the road, and the cost of time to drive to and talk with each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus surveys are relatively cost effective .

Related to the benefit of cost effectiveness is a survey’s potential for generalizability . Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 7. Of all the data-collection methods described in this text, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized in that the same questions, phrased in exactly the same way, are posed to participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 9, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and questionnaire design, one strength of survey methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. I repeat, surveys are used by all kinds of people in all kinds of professions. Is there a light bulb switching on in your head? I hope so. The versatility offered by survey research means that understanding how to construct and administer surveys is a useful skill to have for all kinds of jobs. Lawyers might use surveys in their efforts to select juries, social service and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts, businesses use them to learn how to market their products, governments use them to understand community opinions and needs, and politicians and media outlets use surveys to understand their constituencies.

In sum, the following are benefits of survey research:

  • Cost-effective
  • Generalizable
  • Reliable
  • Versatile

Weaknesses of Survey Method

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any number of questions on any number of topics in them, the survey researcher is generally stuck with a single instrument for collecting data (the questionnaire), so surveys are in many ways rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.

Validity can also be a problem with surveys. Survey questions are standardized; thus it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not be as valid as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president, as in our opening example in this chapter. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for an African American woman but not an African American man? (I am not at all suggesting that such a perspective makes any sense, but it is conceivable that an individual might hold such a perspective.)

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Issues with validity

Key Takeaways

  • Strengths of survey research include its cost effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and issues with validity.
  • What are some ways that survey researchers might overcome the weaknesses of this method?
  • Find an article reporting results from survey research (remember how to use Sociological Abstracts?). How do the authors describe the strengths and weaknesses of their study? Are any of the strengths or weaknesses described here mentioned in the article?

Protecting the integrity of survey research


Competing interest: In addition to his Stanford University affiliation, Doug Rivers is Chief Scientist at YouGov. David Dutwin is senior vice president for NORC at the University of Chicago, a nonpartisan survey and social research organization, and was president of AAPOR in 2018-19. Gary Langer is founder and president of a for-profit company that provides survey research design, management and analysis services to nonprofits, foundations, businesses, and government agencies, as well as a sister company that provides knowledge management software to survey practitioners. Langer also is vice chair of the Roper Center for Public Opinion Research and lead author of its transparency policy, implemented in 2018. Marcia K. McNutt is the President of the National Academy of Sciences. The other authors report no competing interests.

Kathleen Hall Jamieson, Arthur Lupia, Ashley Amaya, Henry E Brady, René Bautista, Joshua D Clinton, Jill A Dever, David Dutwin, Daniel L Goroff, D Sunshine Hillygus, Courtney Kennedy, Gary Langer, John S Lapinski, Michael Link, Tasha Philpot, Ken Prewitt, Doug Rivers, Lynn Vavreck, David C Wilson, Marcia K McNutt, Protecting the integrity of survey research, PNAS Nexus , Volume 2, Issue 3, March 2023, pgad049, https://doi.org/10.1093/pnasnexus/pgad049


Although polling is not irredeemably broken, changes in technology and society create challenges that, if not addressed well, can threaten the quality of election polls and other important surveys on topics such as the economy. This essay describes some of these challenges and recommends remediations to protect the integrity of all kinds of survey research, including election polls. These 12 recommendations specify ways that survey researchers, and those who use polls and other public-oriented surveys, can increase the accuracy and trustworthiness of their data and analyses. Many of these recommendations align practice with the scientific norms of transparency, clarity, and self-correction. The transparency recommendations focus on improving disclosure of factors that affect the nature and quality of survey data. The clarity recommendations call for more precise use of terms such as “representative sample” and clear description of survey attributes that can affect accuracy. The recommendation about correcting the record urges the creation of a publicly available, professionally curated archive of identified technical problems and their remedies. The paper also calls for development of better benchmarks and for additional research on the effects of panel conditioning. Finally, the authors suggest ways to help people who want to use or learn from survey research understand the strengths and limitations of surveys and distinguish legitimate and problematic uses of these methods.

The type of field study known as a survey “involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire” ( 1 ). Because they provide critical data about people, communities, and nations, surveys are used to inform policy, clarify stakeholder needs, and improve accountability and customer service in the private and public sectors. The scientific activity termed “survey research” provides the methodological and organizational foundations of this work and is a source of its credibility.

Scholars at universities and professionals at a wide range of public opinion and survey research organizations share findings and methodological advances in journals such as Public Opinion Quarterly , The Journal of Survey Statistics and Methodology , and Survey Methodology. Professional organizations, such as the American Association for Public Opinion Research (AAPOR), the World Association for Public Opinion (WAPOR), the European Society for Opinion and Marketing Research (ESOMAR), the Insights Association, and the American Statistical Association promulgate best practices designed to improve data collection and data quality, and promote professional standards and ethics.

In recent years, questions have been raised about whether this way of knowing is as reliable as it once was. Some who question its reliability point to trends in refusal to participate in surveys, a phenomenon that increases the difficulty and cost of securing samples that can produce reliable and precise inferences about a population of interest. The advent of alternative, less costly “non-probability” or “opt-in” samples and a range of methodological challenges associated with changes in society and technology ( 2 , 3 ) raise related concerns. Questions about the reliability of survey research also appear in political contexts. In addition to instances in which some survey researchers have inaccurately forecasted high-profile election outcomes, skepticism in some parts of the population has been fueled by political polarization ( 4 ), partisan attacks on ideologically inconvenient survey findings, declining trust in governments and media institutions that fund major surveys ( 5 ), and attacks on expertise and experts, including those in academe ( 6 ).

In this paper, we offer recommendations for protecting the integrity of survey research in light of these and other challenges. While many surveys are designed to answer questions about corporate reputations and marketing options, we focus on protecting the integrity of studies intended to advance a public interest. A quick scan of the national survey landscape reveals some of the ways in which survey research is used to improve quality of life in the United States. These include the US Census Bureau’s annual American Community Survey (ACS) and Annual Social and Economic Supplement to the Current Population Survey ( 7 ), the Bureau of Labor Statistics’ Current Employment Statistics survey ( 8 ), the National Science Foundation's three large “infrastructure surveys” that track Americans’ attitudes about society (the General Social Survey), the economy (the Panel Study of Income Dynamics), and elections (the American National Election Studies), the University of Michigan's Surveys of Consumers ( 9 ), the Centers for Disease Control and Prevention's Behavioral Risk Factor Surveillance System and National Intimate Partner Violence Survey, the SEAN COVID-19 Survey Archive ( 10 ), and work by the Pew Research Center, among others. Collectively, these and other high-quality surveys and data providers inform leaders and the populace on the state of the nation, the substance and meaning of public attitudes and experiences, and public opinion about critical issues facing our society and the globe. So, too, does the survey-based research published in scholarly journals by researchers in academic fields such as political science, communication, sociology, and public health.

Questions about the trustworthiness of survey research can arise when surveys produce contradictory results or ones belied by other data, such as the certified vote count in an election. While possibly a consequence of error, these types of outcomes instead can reflect dissimilar operationalizations or methodological approaches. When researchers choose noncomparable ways to measure attitudes and behaviors, or employ distinctive sampling frames, modes, field periods, or question wordings, their survey data can produce different conclusions.

Survey researchers also can arrive at dissimilar conclusions because some have produced inaccurate estimates. Paths to inaccuracy include a misunderstanding of who is, and is not, participating in a survey. Decades ago, adults in the US could be reached at a particular landline phone number, and most received very few requests to participate in surveys of any kind. When they were invited to take a survey, many people agreed to do so. These assumptions no longer hold true. Cell phones have replaced landlines, and many people avoid calls from unfamiliar numbers. At the same time, even when reached by phone, some parts of the population are now much less likely to accept invitations to be interviewed than they once were. Where a survey researcher was, in earlier times, expected to elicit answers from more than 60% of the target sample, response rates now rarely reach 10%. Consequently, it is now more expensive, time-consuming, and difficult for survey researchers to secure the types of representative samples for which the most reputable surveys have long been known. Declining response rates ( 11 ) and changing patterns of “nonresponse” ( 12 , 13 ) are among the factors that affect the quality of population estimates. It has been reported, for example, that Democrats are more likely than Republicans and Independents to agree to be interviewed ( 14 ).

Because elections produce certified outcomes, the accuracy of pre-election polls is particularly susceptible to year-to-year comparisons and report card-like assessments of performance. So, for example, an AAPOR 2020 election post-mortem observed that “national presidential polls had their worst performance in 40 years and state-level presidential polls had their worst overall performance in 20 years” ( 15 ). And a post-2022 midterm election piece in The Conversation observed that “As compiled by the widely followed RealClearPolitics site, polls collectively missed the margins of victory by more than 4 percentage points in key 2022 Senate races in Arizona, Colorado, Florida, New Hampshire, Pennsylvania and Washington…. In gubernatorial races, deviations from polling averages of 4 percentage points or more figured in the outcomes in Arizona, Colorado, Florida, Michigan, Pennsylvania and Wisconsin” ( 16 ).

Forecasting election outcomes is a particularly complicated endeavor. Survey researchers are often uncertain about who will turn out to vote from election to election. Because many factors can affect individual decisions on whether to participate in a particular election, survey producers use models to estimate which survey respondents are more and which are less likely to vote in the upcoming contest. The models mix current information and historical trends to generate predictions about who will vote and who will stay home. When researchers use different models, or when turnout varies in unexpected ways, pre-election polls may provide inaccurate estimates about an election's outcome.
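
As a rough illustration of the kind of adjustment described above, the Python sketch below weights each respondent's stated preference by a modeled probability of voting, so that the estimate reflects likely voters rather than all respondents. The preferences and turnout probabilities are invented, and real likely-voter models are far more elaborate than this.

# Each tuple is (stated candidate preference, modeled probability of voting).
# All values here are hypothetical.
respondents = [
    ("A", 0.90), ("A", 0.40), ("B", 0.85),
    ("B", 0.80), ("A", 0.30), ("B", 0.95),
]

def turnout_weighted_share(candidate, data):
    """Share of turnout-weighted support going to one candidate."""
    total = sum(p_vote for _, p_vote in data)
    return sum(p_vote for choice, p_vote in data if choice == candidate) / total

for candidate in ("A", "B"):
    raw = sum(1 for choice, _ in respondents if choice == candidate) / len(respondents)
    weighted = turnout_weighted_share(candidate, respondents)
    print(f"Candidate {candidate}: raw share {raw:.0%}, turnout-weighted share {weighted:.0%}")

With these invented numbers, the two candidates are tied among all respondents but not among likely voters, which is exactly why different turnout models can yield different pre-election estimates.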

How can the survey research and associated communities better safeguard integrity and increase the utility of surveys on which scholars, leaders, and the public rely to understand the attitudes and behaviors of important populations?

The protecting the integrity of survey research retreat

To address this question, on November 18 and 19, 2021, Marcia McNutt, president of the National Academy of Sciences, convened a virtual retreat to explore ways to protect the integrity of survey research, increase understanding of the limitations and strengths of individual surveys, incentivize disclosure of information needed to evaluate findings from surveys, and help the public recognize distinguishing features of credible surveys. The retreat was cohosted by the Annenberg Foundation Trust at Sunnylands and the Annenberg Public Policy Center (APPC) of the University of Pennsylvania. The proceedings were coordinated by Arthur Lupia of the University of Michigan and Kathleen Hall Jamieson, APPC director and Sunnylands Trust program director. Included among the 20 participants were the crafters of this document, a list that includes current and past editors of major academic journals, past presidents of the American Political Science Association and AAPOR, and a past director of the US Census Bureau, as well as scholars who have led some of the nation's largest university-based election surveys and individuals responsible for the creation and maintenance of large governmental and 501(c)(3) survey datasets.

The convening's main outcome is an understanding that safeguarding the integrity of survey research, including political polls, is possible . A path to that end includes renewed commitments to scientific norms of transparency, precise specification of key methodological decisions, dedication to disclosure and self-correction when errors are identified, and improved reporting practices by researchers and the media. In service of these goals, we offer 12 actions that key stakeholders can take now, and in the near future, to improve the integrity, utility, and public understanding of surveys.

Changes in technology and society create challenges that, if not addressed well, can threaten the quality of important surveys on topics such as public health, the economy, and elections. In what follows, we describe some of them and recommend remedies designed to protect the integrity of survey research. These recommendations are the product of presentations and conversations at the retreat and email discussions in the months that followed. They reflect points of agreement among a diverse group of stakeholders. Collectively, the recommendations are designed to increase the research community's ability to independently assess survey-based research claims and to share those assessments with a broader audience. If followed, these recommendations will add to the existing menu of good practices and strengthen the ability of researchers to draw properly qualified, reliable inferences from survey research.

Twelve recommendations

Since survey research is a form of scientific inquiry, many of our recommendations focus on ways to better align current practice with scientific norms. Workshop organizers structured the discussions and grounded the recommendations in three of those interrelated norms: transparency, clarity, and correcting the record.

Transparency about methods and practices helps generate constructive critiques and critical insights and fuels science's norm of self-correction. Transparency makes it possible for other researchers to reproduce past work and determine whether it is replicable ( 17 ). Transparency also makes it possible to compare methods across surveys that have produced dissimilar results.

Clarity ensures that scholarly methods and objects of inquiry are expressed in apt, carefully defined terms. When this norm is honored, assumptions about data transformations are presented intelligibly, and findings from these inquiries and analyses are expressed in ways that align with the underlying data.

Correcting the record is a multi-stage process that involves flagging problems, assessing the viability of proposed solutions, and determining how well the ones that are implemented are working. Science's culture of transparency, clarity, and critique increases the likelihood that this multi-stage process will occur and succeed.

With a goal of translating these norms into tangible outcomes, we offer 12 recommendations to an audience that includes public opinion scholars and practitioners, survey vendors, leaders of AAPOR and related professional associations, journal editors, reporters and publishers, and others who use or report on survey findings. Each recommendation includes a course of action and people or organizations we consider well-positioned to demonstrate its feasibility and value.

Tables 1 – 5 summarize our recommendations.

Transparency recommendations.

Clarity recommendations.

Correcting the record recommendation.

Increasing recognition of the value and limitations of survey research recommendations.

Improving the quality of survey data recommendations.

The recommendations vary in their resource requirements. Some are relatively easy to implement, while others entail costs. In each case, we have concluded that the benefit of implementing the proposed action is worth the cost.

TRANSPARENCY: A norm of transparency requires access to datasets and disclosure of how respondents were recruited, sources of possible respondent conditioning, the existence and effects of attrition bias and researcher responses to it, and weighting and modeling assumptions. This information should be available at every stage of survey research and publication processes.

Reflecting the fundamental goals of transparency and replicability, AAPOR members share the expectation that access to datasets and related documentation will be provided to allow for independent review and verification of research claims upon request.

Beyond data access, the norms of transparency and reproducibility require disclosure. Section III of the AAPOR Code, on which our next set of recommendations builds, calls for any report or article that uses survey research to immediately disclose:

  • The data collection strategy.
  • Who sponsored the research and who conducted it.
  • Tools and instruments that can influence responses.
  • Which population the survey is designed to study.
  • Which methods were used to generate and recruit the sample.
  • Method(s) and mode(s) of data collection.
  • The dates on which data were collected.
  • Sample sizes and expected precision of results.
  • How the data were weighted and, otherwise, reasons for unweighted estimates.
  • Steps taken to assess and assure data quality.
  • A general statement acknowledging limitations of the data. 1

The AAPOR Code also specifies additional items for disclosure after results are reported. To its “Procedures for managing participation in surveys whose participants are interviewed multiple times or at different times”, we would add: disclosure of modeling and weighting assumptions at all stages of the data generation, analysis, and dissemination process; disclosure of question wording and order; improved disclosure of sources of respondent conditioning; disclosure of attrition; and enhanced client expectation checklists that include newly recommended forms of disclosure.

Improve disclosure of modeling and weighting assumptions

Recommendation 1A: To facilitate evaluations of such assumptions and more accurate interpretations of the resulting data and analyses, survey vendors should disclose their modeling and weighting assumptions to all users of survey data in ways that are consistent with the FAIR (findable, accessible, interoperable, and reusable) principles for open data.
Recommendation 1B: All publications that include survey research findings should require that modeling and weighting assumptions be disclosed as part of an article's methods section or in supplementary material to which the publication offers direct links. When no weights have been applied, that fact should be disclosed as well.
Recommendation 1C: Since some practitioners (particularly those outside of academia) may not know how to assess or disclose all these phenomena, professional associations and groups that oversee the integrity of the federal statistical agencies that use survey research ought to provide templates and education that can improve the extent and utility of disclosure of modeling and weighting assumptions.
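To make concrete the kind of weighting assumptions these recommendations ask vendors to disclose, the following minimal Python sketch rakes a toy sample to assumed population margins. The variables, categories, benchmark shares, and the raking loop itself are illustrative assumptions, not any vendor's actual procedure.

```python
# A minimal sketch (not any vendor's actual method) of raking / iterative
# proportional fitting: adjust weights until weighted sample margins match
# assumed population benchmarks.
import pandas as pd

# Hypothetical respondents and hypothetical population benchmarks.
sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "age": ["18-34", "35+", "18-34", "35+", "35+", "35+", "18-34", "18-34"],
})
targets = {
    "sex": {"F": 0.51, "M": 0.49},
    "age": {"18-34": 0.30, "35+": 0.70},
}

weights = pd.Series(1.0, index=sample.index)
for _ in range(50):                          # iterate until margins converge
    for var, margin in targets.items():
        for category, target_share in margin.items():
            mask = sample[var] == category
            current_share = weights[mask].sum() / weights.sum()
            weights[mask] *= target_share / current_share

# Weighted shares now (approximately) match the assumed benchmarks.
print(sample.assign(weight=weights))
print(weights.groupby(sample["sex"]).sum() / weights.sum())
```

Disclosing the benchmarks, variables, and convergence rules behind a step like this is exactly the kind of information that Recommendation 1A asks vendors to make findable and reusable.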

Improve disclosure of question wording and order

Survey respondents' answers can be biased by sensitization or conditioning that results from exposure to earlier questions in a survey. Since these exposures can affect subsequent responses, they should be disclosed by vendors and reported in all forms in which the results are disseminated.

Recommendation 2: All publications that include survey research findings should require question wording and order disclosures as part of an article's methods section or in supplementary material to which the publication offers direct, permanent links.

Improve disclosure of respondent or panel conditioning factors

Panel studies are surveys in which the same respondents are interviewed multiple times; they can be conducted on samples of individuals who are randomly selected to participate or on samples of those who opt in. Respondent or panel conditioning occurs when respondents' answers to prior questions, or their experiences in an earlier survey, affect their later responses (see, e.g., 19 ). Panel conditioning does not always occur, but greater disclosure about the types of questions that a survey previously asked a panel's respondents can help researchers, reporters, and the public better understand whether this type of effect could be influencing responses.

Recommendation 3: Survey vendors should disclose both panel recruitment and retention practices and any questions in a survey that have the potential to influence responses given to subsequent questions in that survey. They should also disclose when a particular respondent has been asked questions in a previous survey that can have the same effect. To the extent possible, they should explain the types of bias that exposure to previous questions may have introduced. When information about prior exposure does not exist, that fact should be explicitly disclosed as well.

Disclose attrition bias

Recommendation 4: For panel surveys, vendors and researchers should, in an Annual Non-Response Analysis and Attrition Report, disclose attrition rates, report any estimated biases that result from the change in a panel's composition, and explain what they did to take those changes into account.
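As a simple illustration of the figures such a report might contain, here is a minimal sketch, using made-up wave counts, of how wave-to-wave retention and cumulative attrition could be computed and disclosed.

```python
# A minimal sketch, under assumed data, of the attrition figures a panel vendor
# might report: wave-to-wave retention and cumulative attrition from baseline.
panel_waves = {          # hypothetical counts of respondents completing each wave
    "wave_1": 5000,
    "wave_2": 4100,
    "wave_3": 3500,
}

baseline = panel_waves["wave_1"]
previous = baseline
for wave, n in panel_waves.items():
    retention = n / previous                  # share of the prior wave retained
    cumulative_attrition = 1 - n / baseline   # losses relative to the first wave
    print(f"{wave}: n={n}, wave-to-wave retention={retention:.1%}, "
          f"cumulative attrition={cumulative_attrition:.1%}")
    previous = n
```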

Client expectation checklists should include new recommended forms of disclosure

Recommendation 5: Professional associations and others that have a stake in the integrity of survey research should draw greater attention to the list of organizations that have committed to these disclosures and to each of the items on existing disclosure checklists. Survey vendors should act on commitments to follow the disclosure recommendations in these checklists as well as any new disclosures that come to be required, such as those that we have recommended. By developing and distributing a checklist indicating the information clients should require of vendors, AAPOR, through its Code of Ethics, and the Roper Center for Public Opinion Research, through its acquisitions policy ( 20 ), have sought to improve public, media, and client understanding of what makes a survey rigorous and reliable. In addition, AAPOR's Transparency Initiative (TI) gives organizations the opportunity to publicly commit to disclosing “its basic research methods and make them available for public inspection” ( 21 ). The list of organizations that have agreed to follow these practices can be found on the AAPOR TI web page. 2

Incorporate these new disclosure recommendations in the AAPOR Code

Recommendation 6: Engage AAPOR to consider ways to augment its Code of Professional Ethics and Practices to include disclosure of the sources from which samples are drawn, attrition rates in panels and longitudinal surveys, and the extent to which respondents have been exposed to related surveys or survey questions in the recent past, if known.

2. CLARITY: A norm of clarity dictates that clear, accurate language be used to characterize the nature of the survey process and its outcomes. A commitment to clarity involves being forthright about the types of precision that surveys can and cannot produce. The AAPOR Code, for example, includes the following commitments:

1. We will not knowingly make interpretations of research results that are inconsistent with the data available, nor will we tacitly permit such interpretations. We will ensure that any findings we report, either privately or for public release, are a balanced and accurate portrayal of research results.
2. We will not knowingly imply that interpretations are accorded greater confidence than the data warrant. When we generalize from samples to make statements about populations, we will only make claims of precision and applicability to broader populations that are warranted by the sampling frames and other methods employed ( 18 ).

The US Census Bureau's decision not to release the 2020 estimates from one of the nation's premier governmental surveys demonstrates a commitment to ensuring sample quality consistent with such best practices. After surveying a sample of 290,000 people monthly, the Bureau's American Community Survey (ACS) “combines the monthly responses into a set of 1-year estimates for the nation, states and communities with populations of 65,000 or more” ( 22 ). The ACS is widely used by researchers, governments, and various private sector organizations. However, by disrupting the lives of various subgroups of the US population in different ways in 2020, the COVID-19 pandemic created new “nonresponse bias” challenges that made it more difficult to produce representative survey samples. Although the Bureau sought many ways to adapt to unprecedented circumstances, its researchers concluded that the 2020 ACS data failed to meet the Statistical Data Quality Standards established “to ensure the utility, objectivity and integrity of the statistical information” ( 22 ). Rather than publish data in a compromised and potentially misleading form, the Bureau announced that it would not release its 1-year estimates from the 2020 survey.

At the same time, the use of clear language and definitions increases the likelihood that researchers speak a shared language when addressing each other and the public.

Recommendation 7: When survey data are weighted, the phrase “representative sample” should not be used without explicit acknowledgment of the underlying assumptions, including weighting and modeling assumptions. Survey vendors should not release data without including this information, and publishers of content who use survey data should publicly commit to cite or use data only from sources that provide such information.

Of particular importance in these discussions is clarifying the most effective uses of probability and non-probability samples. These forms of data collection have distinct advantages and disadvantages: probability samples minimize the risk of systematic bias, while non-probability samples are easier and less costly to generate. On some matters, there is consensus about when one method of gathering a sample is more effective; in other cases, there is less agreement about what these different types of surveys can and cannot do. Helping a broader set of stakeholders understand the trade-offs associated with these methods could produce significant public benefits.

For example, non-probability surveys are quite efficient for tracking changes in public sentiment (e.g. presidential approval) over time, provided that the estimates need not be extremely precise. Non-probability surveys have also proven useful for estimating treatment effects across randomly assigned groups (e.g. 23 , 24 ). For other research purposes, however, non-probability surveys may not be fit for use. Studies have shown that the positivity bias associated with bogus respondents can lead non-probability surveys to systematically overestimate rare outcomes, such as ingesting bleach to protect against COVID-19 ( 25 ), belief in conspiracies like PizzaGate ( 26 ), support for political violence ( 27 ), or favorable views of Vladimir Putin ( 28 ). In terms of scale, Geraci ( 29 ) estimates that researchers should anticipate removing 35–50% of non-probability completes due to poor data quality. More broadly, non-probability surveys are not fit for use in federal surveys that are expected to yield highly precise estimates not only for the country as a whole but also for harder-to-reach subgroups.

Recommendation 8: Publishers and editors of scholarly journals should incentivize clarity by adopting reporting standards that better reflect the realities of modern survey research. In particular, they should ask authors to populate, and make available for readers to view, a template that, like the Roper Center's Transparency and Acquisition Policy, clearly describes survey attributes that can influence accuracy.

3. CORRECTING THE RECORD: A norm of self-correction requires that, when errors are identified, they be disclosed in forms accessible to other researchers and that protections be put in place to minimize their recurrence in other surveys or subsequent analyses.

Self-correction is a key scientific norm. When scientists are uncertain about the correspondence between an observation and a research claim, the expectation is that they will report that uncertainty. When a scientist discovers an error, a parallel expectation arises.

Recommendation 9: We recommend that an online resource center, modeled on the National Science Foundation-supported Online Ethics Center for Engineering and Science (established by the National Academy of Engineering and now run by the University of Virginia ( 31 )), be established to archive and make accessible information about technical problems, sources of data corruption, and solutions that survey researchers have uncovered when trying to conduct surveys rigorously and responsibly.

Increase public understanding of the nature, utility, strength, and limitations of survey research

In decades past, the survey professionals and associated communities from whom we invite specific forms of action have taken important steps to strengthen public understanding of survey research. By standardizing the expectation that survey researchers should report their surveys' margins of error, sample sizes, dates of fielding, and question wording, for example, AAPOR and other groups have enhanced public access to useful information about how to interpret surveys. The recommendations that we offer next build upon such efforts.

Recommendation 10: Professional organizations and universities should develop and disseminate a guide to survey research that can be used in high school courses.

Increase the visibility of organizations that join AAPOR's Transparency Initiative

Recommendation 11: Journals and media outlets that use or report on surveys not only should note whether the data on which they are relying comes from vendors who have joined the AAPOR Transparency Initiative but also should include links to the data and relevant modeling, weighting, attrition, and related information. AAPOR and other organizations should publicly recognize publishers and media outlets that agree to do so.
Recommendation 12A: Federal funders of survey research, philanthropies, and companies that recognize the importance of safeguarding the integrity of survey research should prioritize support for new benchmarking resources that improve the quality of the surveys that collect data on matters of significance to the nation.
Recommendation 12B: Federal funders of survey research, private philanthropists, and companies that recognize the public importance of maintaining the integrity of survey research should prioritize support of widely usable research that identifies, and shows how to mitigate, negative consequences of panel conditioning.

Overcoming barriers to adoption

One might argue that, by increasing vendor costs, industry adoption of our disclosure recommendations about weighting assumptions and panel sensitization would drive vendors whose work cannot sustain the resulting scrutiny out of business. If these forms of disclosure help protect the integrity of the research process, as we believe they do, that outcome is a benefit, not a downside, of adopting them. We believe that these disclosures are likely to reveal, and hopefully reduce, methods of panel assembly that are difficult to defend and, at the same time, will help researchers better interpret panel data.

However, since the increased costs will be passed on by surviving vendors, it is fair to say that some studies may prove cost-prohibitive and not be undertaken. Others will be based on less data than otherwise would have been collected. We believe that improvements in the quality of published research and in the reliability of inferences grounded in the survey data are worth the trade-off and costs. But the market will ultimately determine whether the increased quality of the data, analysis, and reliability of inferences is worth the cost.

Because costs associated with recommendations reduce the likelihood of adoption, many of our recommendations incentivize it by making adoption a signal of greater trustworthiness. The large and growing number of journals whose editors or publishers have asked to be listed as subscribers to the International Committee of Medical Journal Editors' (ICMJE's) Recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals shows this process at work (https://www.icmje.org/journals-following-the-icmje-recommendations/). Among other topics, the ICMJE recommendations address defining the role of authors and contributors; disclosure of financial and non-financial relationships, activities, and conflicts of interest; and responsibilities in submission and peer review. If the publishers of high-impact journals require such disclosures as a condition of publication, and media outlets that report on survey research do the same, researchers will require them from vendors. Because journals are judged in part by their reputation, when those known as high quality adopt a practice, others follow suit. The same logic applies to vendors. If signing on to the AAPOR Transparency Initiative is a signal of commitment to protecting the data gathering and reporting process, then vendors who do so have a competitive advantage.

Surveys offer a unique and powerful form of evidence. Safeguarding this important way of collecting data and ensuring that it adheres to scientific norms of transparency, clarity, and correcting the record should be a priority of the scholarly and professional communities and audiences that rely on survey findings. The aspirations embodied in these recommendations are more likely to become accepted practice to the extent that they complement best practices and materials already championed by respected entities in the survey research community, have already been implemented by some gold-standard vendors, and are leveraged by incentives that the scientific community has successfully employed in the past. Broadly, our set of 12 recommendations calls for a culture change in the research community in which fuller and more open disclosure of survey practices and limitations becomes the norm.

Funding: No external funders. Expenses for the retreat were underwritten by the Annenberg Foundation Trust at Sunnylands and the Annenberg Public Policy Center of the University of Pennsylvania from endowment funds provided to each by the Annenberg Foundation.

Data availability: There are no data underlying this work.

References

1. Visser PS, Krosnick JA, Lavrakas PJ. 2000. Survey research. In: Reis HT, Judd CM, editors. Handbook of research methods in social and personality psychology. New York (NY): Cambridge University Press. p. 223–252.
2. Santos R. 2014. Presidential address: borne of a renaissance–a metamorphosis for our future. Public Opin Quart. 78(3):769–777.
3. Link M. 2015. Presidential address: AAPOR2025 and the opportunities in the decade before us. Public Opin Quart. 79(3):828–836.
4. Pew Research Center. 2014. Political polarization in the American public. https://www.pewresearch.org/politics/2014/06/12/political-polarization-in-the-american-public/.
5. Rainie L, Perrin A. 2019, July 22. Key findings about Americans' declining trust in government and each other. Pew Research Center. https://www.pewresearch.org/fact-tank/2019/07/22/key-findings-about-americans-declining-trust-in-government-and-each-other/.
6. Nichols T. 2017. The death of expertise: the campaign against established knowledge and why it matters. New York (NY): Oxford University Press.
7. US Census Bureau. 2022. About the American Community Survey. Census.gov. https://www.census.gov/programs-surveys/acs/about.html.
8. US Bureau of Labor Statistics. 2022, February 4. Labor force statistics from the Current Population Survey. BLS.gov. https://www.bls.gov/web/empsit/ces_cps_trends.htm#intro.
9. University of Michigan. 2022. Surveys of Consumers. Ann Arbor (MI). https://data.sca.isr.umich.edu/.
10. Societal Experts Action Network. 2022. Welcome to the Societal Experts Action Network (SEAN) COVID-19 Survey Archive. https://covid-19.parc.us.com/client/index.html#/.
11. Czajka JL, Beyler A. 2016. Declining response rates in federal surveys: trends and implications (Background Paper Volume 1). Mathematica Policy Research. https://aspe.hhs.gov/sites/default/files/private/pdf/255531/Decliningresponserates.pdf.
12. Bernhardt R, Munro D, Wolcott E. 2021. How does the dramatic rise of CPS non-response impact labor market indicators? (Working Paper No. 781). GLO Discussion Paper. https://www.econstor.eu/handle/10419/229653.
13. Williams D, Brick JM. 2018. Trends in US face-to-face household survey nonresponse and level of effort. J Survey Stat Methodol. 6(2):186–211.
14. Clinton J, Lapinski JS, Trussler MJ. 2022. Reluctant Republicans, eager Democrats? Partisan nonresponse and the accuracy of 2020 presidential pre-election telephone polls. Public Opin Quart. 86(2):247–269.
15. Clinton J, et al. 2021. AAPOR Task Force on 2020 Pre-Election Polling Report FNL. AAPOR. https://www.researchgate.net/publication/353343195_AAPOR_Task_Force_on_2020_Pre-Election_Polling_Report_FNL.
16. Campbell WJ. 2022, November 17. Some midterm polls were on-target—but finding which pollsters and poll aggregators to believe can be challenging. The Conversation. https://theconversation.com/amp/some-midterm-polls-were-on-target-but-finding-which-pollsters-and-poll-aggregators-to-believe-can-be-challenging-194700.
17. National Academies of Sciences, Engineering, and Medicine. 2019, April 7. New report examines reproducibility and replicability in science, recommends ways to improve transparency and rigor in research. Washington (DC). https://www.nationalacademies.org/news/2019/05/new-report-examines-reproducibility-and-replicability-in-science-recommends-ways-to-improve-transparency-and-rigor-in-research.
18. American Association for Public Opinion Research. 2021, April. AAPOR Code of Professional Ethics and Practices. https://www.aapor.org/Standards-Ethics/AAPOR-Code-of-Ethics.aspx.
19. Halpern-Manners A, Warren JR. 2012. Panel conditioning in longitudinal studies: evidence from labor force items in the Current Population Survey. Demography. 49(4):1499–1519.
20. Roper Center for Public Opinion Research. 2018, June 22. Roper Center transparency and acquisition policy. Ithaca (NY). https://ropercenter.cornell.edu/roper-center-transparency-and-acquisitions-policy.
21. American Association for Public Opinion Research. 2015, March 20. What is the TI? https://aapor.org/standards-and-ethics/transparency-initiative/.
22. Census.gov. 2021, July 29. Census Bureau announces changes for 2020 American Community Survey 1-year estimates. https://www.census.gov/newsroom/press-releases/2021/changes-2020-acs-1-year.html.
23. Coppock A, Leeper TJ, Mullinix KJ. 2018. Generalizability of heterogeneous treatment effects estimates across samples. Proc Natl Acad Sci U S A. 115(49):12441–12446.
24. Mullinix KJ, Leeper TJ, Druckman JN, Freese J. 2015. The generalizability of survey experiments. J Exp Political Sci. 2(2):109–138.
25. Litman L, et al. 2021. Did people really drink bleach to prevent COVID-19? A tale of problematic respondents and a guide for measuring rare events in survey data. MedRxiv. 2020-12. https://doi.org/10.1101/2020.12.11.20246694, preprint: not peer reviewed.
26. Lopez J, Hillygus DS. 2018. Why so serious?: survey trolls and misinformation (SSRN Scholarly Paper No. 3131087). Rochester (NY). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3131087.
27. Westwood SJ, Grimmer J, Tyler M, Nall C. 2022. Current research overstates American support for political violence. Proc Natl Acad Sci U S A. 119(12):e2116870119.
28. Kennedy C, et al. 2021. Strategies for detecting insincere respondents in online polling. Public Opin Quart. 85(4):1050–1075.
29. Geraci J. 2022. POLL-ARIZED: why Americans don't trust the polls and how to fix them before it's too late. Houndstooth Press.
30. Jamieson KH, McNutt M, Kiermer V, Sever R. 2019. Signaling the trustworthiness of science. Proc Natl Acad Sci U S A. 116(39):19231–19236.
31. Online Ethics Center. 2022. History and funding. https://onlineethics.org/history-and-funding.
32. Traugott MW, Lavrakas PJ. 2016. The voter's guide to election polls. Lanham (MD): Rowman & Littlefield Publishers.
33. Madson G, Cooper A. 2021. 2021 Future of Survey Research Conference. Durham (NC): Duke University. https://sites.duke.edu/surveyresearch/report/.

Footnotes

1. For more details, read the full version of the AAPOR Code ( 18 ).

2. The full list of organizations that have joined the Transparency Initiative is available on the AAPOR TI web page ( 21 ).


Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements predicting the results of a study, which can be verified or disproved by investigation.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and the results are analyzed, psychologists must decide between the two hypotheses.

So, if a significant difference is found, the psychologist rejects the null hypothesis and accepts the alternative; if no difference is found, the null hypothesis is retained (strictly speaking, we fail to reject it rather than prove it true).

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.

Sample Target Population

A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which a study’s findings can be applied to the larger population of which the sample was a part.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants, such as picking every Nth person from all possible participants, where N = the number of people in the research population / the number of people needed for the sample (see the code sketch after this list).
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
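The sketch below, using a hypothetical numbered sampling frame, illustrates how random, systematic, and stratified samples might be drawn in practice; the population, strata, and sample size are assumptions chosen purely for illustration.

```python
# A minimal sketch of three sampling techniques on a hypothetical frame of 1,000 people.
import random

population = list(range(1, 1001))        # hypothetical sampling frame
sample_size = 100

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person, N = population size / sample size.
n = len(population) // sample_size
start = random.randrange(n)
systematic_sample = population[start::n]

# Stratified sampling: sample each subgroup in proportion to its size.
strata = {"employed": population[:800], "unemployed": population[800:]}
stratified_sample = []
for name, members in strata.items():
    share = len(members) / len(population)
    stratified_sample += random.sample(members, round(sample_size * share))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```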

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected of them.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched on some relevant factor or factors (e.g. ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that exactly the same participants are in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment. It involves ensuring that each condition is equally likely to be used first and second by the participants (a short sketch below illustrates one way to do this).

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
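As a minimal sketch of the counterbalancing idea described above, the following snippet randomly allocates hypothetical participants and alternates the order in which they complete two conditions, so that each order is used equally often.

```python
# A minimal sketch of counterbalancing in a repeated measures design:
# half the (hypothetical) participants do condition A then B, half do B then A.
import random

participants = [f"P{i:02d}" for i in range(1, 13)]   # hypothetical participant IDs
random.shuffle(participants)                          # random allocation to orders

orders = {}
for i, person in enumerate(participants):
    orders[person] = "A then B" if i % 2 == 0 else "B then A"

for person, order in sorted(orders.items()):
    print(person, order)
```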

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated; it isn’t deliberately manipulated by the researcher because it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between -1 and +1, and the closer its magnitude is to 1, the stronger the relationship between the variables. The value can be positive (e.g. 0.63) or negative (e.g. -0.63).
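As a brief illustration, the sketch below computes Pearson's r and Spearman's rho for two made-up variables using SciPy; the data and variable names are hypothetical.

```python
# A minimal sketch of computing correlation coefficients for two hypothetical measures.
from scipy import stats

hours_revised = [2, 5, 1, 8, 4, 7, 3, 6]          # hypothetical predictor variable
exam_score = [45, 60, 40, 88, 58, 80, 50, 70]     # hypothetical outcome variable

r, p_pearson = stats.pearsonr(hours_revised, exam_score)
rho, p_spearman = stats.spearmanr(hours_revised, exam_score)

print(f"Pearson r = {r:.2f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3f})")
```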


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation alone does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview there are no set questions; the participant can raise whatever topics he/she feels are relevant and discuss them in their own way, and questions are posed in response to the participant’s answers about the subject.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

A questionnaire’s other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method can raise ethical problems around deception and consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : here, spontaneous behavior is recorded in a natural setting.
  • Participant : here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.
  • Non-participant (aka “fly on the wall”): the researcher does not have direct contact with the people being observed. The observation of participants’ behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities (i.e. unusual things) or confusion in the information given to participants or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

  • Test-retest reliability : assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is agreement between two or more observers (the sketch below shows how both forms can be quantified).
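The following minimal sketch, using made-up scores and category codes, shows one common way each form of reliability can be quantified: a correlation for test-retest reliability and Cohen's kappa for inter-observer agreement.

```python
# A minimal sketch of quantifying reliability on hypothetical data.
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Test-retest: the same ten people assessed on two occasions (made-up scores).
time_1 = [12, 15, 9, 20, 17, 11, 14, 18, 10, 16]
time_2 = [13, 14, 10, 19, 18, 12, 15, 17, 9, 16]
r, p = stats.pearsonr(time_1, time_2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")

# Inter-observer: two observers coding the same ten behaviours into categories.
observer_a = ["aggressive", "passive", "passive", "aggressive", "neutral",
              "neutral", "passive", "aggressive", "neutral", "passive"]
observer_b = ["aggressive", "passive", "neutral", "aggressive", "neutral",
              "neutral", "passive", "aggressive", "passive", "passive"]
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Inter-observer reliability (Cohen's kappa): {kappa:.2f}")
```

In both cases, values closer to 1 indicate greater consistency or agreement.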

Meta-Analysis

A meta-analysis is a systematic review that involves identifying an aim and then searching for research studies that have addressed similar aims/hypotheses.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the conclusions’ validity as they are based on a wider range of studies.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.
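As a simplified illustration of how studies are combined, the sketch below pools made-up effect sizes with a fixed-effect, inverse-variance weighting; real meta-analyses involve additional steps (heterogeneity checks, random-effects models) that are not shown here.

```python
# A minimal sketch of a fixed-effect meta-analysis: each hypothetical study's
# effect size is weighted by the inverse of its variance, so larger, more
# precise studies count for more in the pooled estimate.
import math

studies = [          # (effect size, variance) - made-up values for illustration
    (0.30, 0.04),
    (0.45, 0.09),
    (0.20, 0.02),
    (0.55, 0.16),
]

weights = [1 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```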

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision on whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it helps prevent faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online in which everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many of something there are. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : does the test measure what it’s supposed to measure ‘on the face of it’? This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain (fail to reject) our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we conventionally use p < 0.05 (as it strikes a balance between the risks of Type I and Type II errors), but p < 0.01 is used in research where an error could cause harm, such as trials of a new drug.

A Type I error is when the null hypothesis is rejected when it should have been retained (this happens when too lenient a significance level is used; an error of optimism).

A Type II error is when the null hypothesis is retained when it should have been rejected (this happens when too stringent a significance level is used; an error of pessimism).
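The sketch below runs an independent-samples t-test on made-up scores and applies the p < 0.05 decision rule described above; the data are hypothetical.

```python
# A minimal sketch of a significance test: comparing two conditions with a t-test.
from scipy import stats

control = [12, 15, 14, 10, 13, 16, 11, 14]        # hypothetical scores
experimental = [18, 17, 15, 20, 16, 19, 17, 18]

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")

alpha = 0.05
if p < alpha:
    print("Significant: reject the null hypothesis in favour of the alternative.")
else:
    print("Not significant: retain the null hypothesis.")
```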

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. A drawback is that knowing the aims may lead participants to guess what the study is about and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants will fully understand what they have agreed to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study, but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can bias the sample, as those who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not be possible, as it is sometimes possible to work out who the participants were.

The Methodological Limitations of Survey Research

What is Survey Research?

Survey research is a method of data collection that involves gathering data from a predefined group of respondents, or sample, via surveys. Survey research is widely used by researchers and organizations to understand people, consumers, and societies better. Although research can be conducted using many different methods, survey research is considered one of the most effective and trustworthy methods used to do so. That being said, survey research does come with its own limitations. 

In this article, we will examine the total survey error approach, which outlines the different types of methodological limitations associated with survey research.


Types of Limitations of Survey Research


When investigating the weaknesses of survey research, we can categorize its limitations into three key groups: survey errors, survey constraints, and survey-related effects.

  • Survey Errors : Survey errors comprise the different mistakes that are made in the construction and implementation of the survey, as well as in the interpretation of its results. 
  • Survey Constraints : Survey constraints refer to the errors in survey research that are impossible to eliminate, and therefore cannot be controlled by researchers in the way survey errors can. 
  • Survey-Related Effects : These effects refer to the aspects of survey research that do not represent errors but still limit the accuracy of the conclusions that can be drawn from survey evidence.  

Survey Errors

Survey errors can be further categorized into the following three groups:

  • Respondent Selection Issues : These errors arise from how respondents end up in the sample rather than from the answers they give. Some common respondent selection issues include sampling error, coverage error, and nonresponse error at the unit level.
  • Response Accuracy Issues : These errors arise from the answers respondents give, or fail to give, once they are in the sample. Some common response accuracy issues include nonresponse error at the item level, measurement error due to respondents, and measurement error due to interviewers.
  • Survey Administration Issues : These are issues caused by inadequate or improper survey administration. Some common survey administration issues include post-survey error, mode effects, and house effects.


Survey Constraints

The next category of limitation is survey constraints: errors that are impossible to eliminate entirely. Because surveys are expensive to conduct, minimizing survey error generally involves a trade-off with costs. To reduce sampling error, more interviews can be conducted, but this increases costs. To reduce coverage error, better sampling frames can be prepared, which also increases costs. To reduce nonresponse error, callbacks can be increased, monetary incentives can be provided, and interviewer training can be improved, all measures associated with higher costs.
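A quick calculation illustrates why reducing sampling error is costly: for a simple random sample, the margin of error shrinks only with the square root of the sample size, so halving it requires roughly four times as many interviews. The sketch below, assuming a 95% confidence level and a worst-case proportion of 0.5, shows this diminishing return.

```python
# A minimal sketch of the cost trade-off: margin of error vs. sample size.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 2000, 5000):
    print(f"n = {n:>5}: margin of error = ±{margin_of_error(n):.1%}")
```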

Survey-Related Effects

Survey-related effects limit the precision of the conclusions that can be drawn from collected data. These are a few different survey-related effects:

  • Question-related : Although there is no single ‘correct’ way to word a question, different questions can yield different responses. The response option order, question structure, and whether or not it is an open or closed-ended question can all influence survey responses.
  • Mode Effects : The method used to administer the survey can also influence results. Self-administered surveys may yield significantly different results when compared to face-to-face surveys. Social desirability effects are a huge concern when dealing with particular modes such as face-to-face and telephone surveys.


FAQs on the limitations of survey research

Survey research refers to the systematic method of data collection from a sample of respondents via surveys.

The limitations of survey research can be categorized into three groups; survey constraints, survey errors, and survey-related effects.

Survey errors refer to the different mistakes that are made in the construction and implementation of the survey, as well as the interpretation of its results. Survey constraints, on the other hand, refer to the limitations of survey research that are impossible to eliminate in surveys. Survey-related effects refer to the aspects of survey research that limit the accuracy of the conclusions that can be drawn from survey evidence.


Ann R Coll Surg Engl. 2013 Jan; 95(1).

A quick guide to survey research

1 University of Cambridge, UK

2 Cambridge University Hospitals NHS Foundation Trust, UK

Questionnaires are a very useful survey tool that allow large populations to be assessed with relative ease. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates.

Medical research questionnaires or surveys are vital tools used to gather information on individual perspectives in a large cohort. Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1


Clear research goal

The first and most important step in designing a survey is to have a clear idea of what you are looking for. It will always be tempting to take a blanket approach and ask as many questions as possible in the hope of getting as much information as possible. This approach does not work, as asking too many irrelevant or incoherent questions reduces the response rate 2 and therefore reduces the power of the study. This is especially important when surveying physicians, as they often have a lower response rate than the rest of the population. 3 Instead, you must carefully consider the data you actually need and work on a 'need to know' rather than a 'would be nice to know' model. 4

After considering the question you are trying to answer, deciding whom you are going to ask is the next step. With small populations, attempting to survey them all is manageable but as your population gets bigger, a sample must be taken. The size of this sample is more important than you might expect. After lost questionnaires, non-responders and improper answers are taken into account, this sample must still be big enough to be representative of the entire population. If it is not big enough, the power of your statistics will drop and you may not get any meaningful answers at all. It is for this reason that getting a statistician involved in your study early on is absolutely crucial. Data should not be collected until you know what you are going to do with them.
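As a rough sketch of the kind of calculation a statistician would refine, the snippet below estimates how many completed questionnaires are needed to measure a proportion to within ±5% at roughly 95% confidence, then inflates that figure for an assumed 60% response rate; both the margin of error and the response rate are illustrative assumptions, not recommendations from this article.

```python
import math

def required_sample_size(margin_of_error, p=0.5, z=1.96):
    """Completed responses needed to estimate a proportion p within +/- margin_of_error
    at roughly 95% confidence (simple random sampling, large population)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

n_needed = required_sample_size(0.05)      # assumed +/-5% margin of error
expected_response_rate = 0.60              # assumed; physician surveys may be lower
n_to_distribute = math.ceil(n_needed / expected_response_rate)

print(f"Completed questionnaires needed: {n_needed}")        # 385
print(f"Questionnaires to distribute:    {n_to_distribute}") # 642
```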

Directed questions

After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic (Table 1).

Table 1 Advantages and disadvantages of survey methods

Collected data are only useful if they convey information accurately and consistently about the topic in which you are interested. This is where a validated survey instrument comes into questionnaire design. Validated instruments are those that have been extensively tested and are correctly calibrated to their target. They can therefore be assumed to be accurate. 1 It may be possible to modify a previously validated instrument, but you should seek specialist advice as this is likely to reduce its power. Examples of validated models are the Beck Hopelessness Scale 5 or the Addenbrooke’s Cognitive Examination. 6

The next step is choosing the type of question you are going to ask. The questionnaire should be designed to answer the question you want answered. Each question should be clear, concise and without bias. Normalising statements should be included and the language level targeted towards those at the lowest educational level in your cohort. 1 You should avoid open, double-barrelled questions and questions that include negative items or assign causality. 1 The questions you use may elicit either an open (free text) or closed response. Open responses are more flexible but require more time and effort to analyse, whereas closed responses require more initial input in order to exhaust all possible options but are easier to analyse and present.
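As a small illustration of that difference in analysis effort, the sketch below tabulates invented closed-ended ratings in one line, while the open-ended comments would still need to be read and coded into themes by hand; the column names and answers are hypothetical.

```python
import pandas as pd

# Hypothetical responses: one closed-ended rating and one open-ended comment field.
responses = pd.DataFrame({
    "clinic_rating": ["Good", "Excellent", "Good", "Poor", "Good"],
    "comments": ["Long wait", "Friendly staff", "", "Parking was hard", "Quick visit"],
})

# Closed-ended answers can be summarised immediately.
print(responses["clinic_rating"].value_counts(normalize=True))

# Open-ended answers must first be coded into themes (e.g. waiting times, staff,
# access) before they can be counted or presented.
```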

Questionnaire

Two more aspects come into questionnaire design: aesthetics and question order. While appearance is not relevant to telephone or personal questionnaires, in self-administered surveys the aesthetics of the questionnaire are crucial. Having spent a large amount of time fine-tuning your questions, presenting them in a way that maximises response rates is pivotal to obtaining good results. Visual elements to consider include smooth, simple and symmetrical shapes, soft colours and repetition of visual elements. 7

Once you have attracted your subject’s attention and willingness with a well designed and attractive survey, the order in which you put your questions is critical. Focus on what you need to know: place easier, important questions at the beginning, group common themes in the middle and keep demographic questions near the end. Questions should be arranged in a logical order, with questions on the same topic kept close together and divided into sensible sections if the survey is long enough to warrant them. Introductory and summary questions to mark the start and end of the survey are also helpful.

Pilot study

Once a completed survey has been compiled, it needs to be tested. The ideal next step should highlight spelling errors, ambiguous questions and anything else that impairs completion of the questionnaire. 8 A pilot study, in which you apply your work to a small sample of your target population in a controlled setting, may highlight areas in which work still needs to be done. Where possible, being present while the pilot is going on will allow a focus group-type atmosphere in which you can discuss aspects of the survey with those who are going to be filling it in. This step may seem non-essential but detecting previously unconsidered difficulties needs to happen as early as possible and it is important to use your participants’ time wisely as they are unlikely to give it again.

Distribution and collection

While it should be considered quite early on, we will now discuss routes of survey administration and ways to maximise results. Questionnaires can be self-administered electronically or by post, or administered by a researcher by telephone or in person. The advantages and disadvantages of each method are summarised in Table 1. Telephone and personal surveys are very time and resource consuming whereas postal and electronic surveys suffer from low response rates and response bias. Your route should be chosen with care.

Methods for maximising response rates for self-administered surveys are listed in Table 2, taken from a Cochrane review. 2 The differences between methods of maximising responses to postal or e-surveys are considerable, but common elements include keeping the questionnaire short and logical as well as including incentives.

Methods for improving response rates in postal and electronic questionnaires 2

  • Involve a statistician early on.
  • Run a pilot study to uncover problems.
  • Consider using a validated instrument.
  • Only ask what you ‘need to know’.
  • Consider guidelines on improving response rates.

The collected data will come in a number of forms depending on the method of collection. Data from telephone or personal interviews can be directly entered into a computer database whereas postal data can be entered at a later stage. Electronic questionnaires can allow responses to go directly into a computer database. Problems arise from errors in data entry and when questionnaires are returned with missing data fields. As mentioned earlier, it is essential to have a statistician involved from the beginning for help with data analysis. He or she will have helped to determine the sample size required to ensure your study has enough power. The statistician can also suggest tests of significance appropriate to your survey, such as Student’s t-test or the chi-square test.
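As a hypothetical illustration of one of the tests mentioned, the sketch below runs a chi-square test on made-up counts of answers to a single closed-ended question from two respondent groups; the groups, the counts, and the 2x2 layout are assumptions for demonstration only.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are respondent groups, columns are
# "satisfied" and "not satisfied" answers to one closed-ended question.
observed = [
    [45, 15],   # group A (e.g. trainees)
    [30, 30],   # group B (e.g. consultants)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# A small p-value (conventionally below 0.05) would suggest that the distribution
# of answers differs between the two groups.
```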

Conclusions

Survey research is a unique way of gathering information from a large cohort. Advantages of surveys include access to a large population and therefore greater statistical power, the ability to gather large amounts of information, and the availability of validated instruments. However, surveys can be costly, recall accuracy is sometimes imperfect, and the validity of a survey depends on its response rate. Proper design is vital to enable analysis of results, and pilot studies are critical to this process.

ScienceDaily

Research identifies characteristics of cities that would support young people's mental health

As cities around the world continue to draw young people for work, education, and social opportunities, a new study identifies characteristics that would support young urban dwellers' mental health. The findings, based on survey responses from a global panel that included adolescents and young adults, provide a set of priorities that city planners can adopt to build urban environments that are safe, equitable, and inclusive.

To determine city characteristics that could bolster youth mental health, researchers administered an initial survey to a panel of more than 400 people, including young people and a multidisciplinary group of researchers, practitioners, and advocates. Through two subsequent surveys, participants prioritized six characteristics that would support young city dwellers' mental health: opportunities to build life skills; age-friendly environments that accept young people's feelings and values; free and safe public spaces where young people can connect; employment and job security; interventions that address the social determinants of health; and urban design with youth input and priorities in mind.

The paper was published online February 21, 2024, in Nature.

The study's lead author is Pamela Collins, MD, MPH, chair of the Johns Hopkins Bloomberg School of Public Health's Department of Mental Health. The study was conducted while Collins was on the faculty at the University of Washington. The paper was written by an international, interdisciplinary team, including citiesRISE, a global nonprofit that works to transform mental health policy and practice in cities, especially for young people.

Cities have long been a draw for young people. Research by UNICEF projects that cities will be home to 70 percent of the world's children by 2050. Although urban environments influence a broad range of health outcomes, both positive and negative, their impacts manifest unequally. Mental disorders are the leading causes of disability among 10- to 24-year-olds globally. Exposure to urban inequality, violence, lack of green space, and fear of displacement disproportionately affects marginalized groups, increasing risk for poor mental health among urban youth.

"Right now, we are living with the largest population of adolescents in the world's history, so this is an incredibly important group of people for global attention," says Collins. "Investing in young people is an investment in their present well-being and future potential, and it's an investment in the next generation -- the children they will bear."

Data collection for the study began in April 2020 at the start of the COVID-19 pandemic. To capture its possible impacts, researchers added an open-ended survey question asking panelists how the pandemic influenced their perceptions of youth mental health in cities. The panelists reported that the pandemic either shed new light on the inequality and uneven distribution of resources experienced by marginalized communities in urban areas, or confirmed their preconceptions of how social vulnerability exacerbates health outcomes.

For their study, the researchers recruited a panel of more than 400 individuals from 53 countries, including 327 young people ages 14 to 25, from a cross-section of fields, including education, advocacy, adolescent health, mental health and substance use, urban planning and development, data and technology, housing, and criminal justice. The researchers administered three sequential surveys to panelists beginning in April 2020 that asked panelists to identify elements of urban life that would support mental health for young people.

The top 37 characteristics were then grouped into six domains: intrapersonal, interpersonal, community, organizational, policy, and environment. Within these domains, panelists ranked characteristics based on immediacy of impact on youth mental health, ability to help youth thrive, and ease or feasibility of implementation.

Taken together, the characteristics identified in the study provide a comprehensive set of priorities that policymakers and urban planners can use as a guide to improve young city dwellers' mental health. Among them: Youth-focused mental health and educational services could support young people's emotional development and self-efficacy. Investment in spaces that facilitate social connection may help alleviate young people's experiences of isolation and support their need for healthy, trusting relationships. Creating employment opportunities and job security could undo the economic losses that young people and their families experienced during the pandemic and help cities retain residents after a COVID-era exodus from urban centers.

The findings suggest that creating a mental health-friendly city for young people requires investments across multiple interconnected sectors such as transportation, housing, employment, health, and urban planning, with a central focus on social and economic equity. Such efforts also require urban planning and policy approaches that commit to systemic and sustained collaboration, without magnifying existing privileges through initiatives like gentrification or the development of green spaces at the expense of marginalized communities in need of affordable housing.

The authors say this framework underscores that responses by cities should include young people in the planning and design of interventions that directly impact their mental health and well-being.


Story Source:

Materials provided by Johns Hopkins Bloomberg School of Public Health. Note: Content may be edited for style and length.

Journal Reference:

  • Pamela Y. Collins, Moitreyee Sinha, Tessa Concepcion, George Patton, Thaisa Way, Layla McCay, Augustina Mensa-Kwao, Helen Herrman, Evelyne de Leeuw, Nalini Anand, Lukoye Atwoli, Nicole Bardikoff, Chantelle Booysen, Inés Bustamante, Yajun Chen, Kelly Davis, Tarun Dua, Nathaniel Foote, Matthew Hughsam, Damian Juma, Shisir Khanal, Manasi Kumar, Bina Lefkowitz, Peter McDermott, Modhurima Moitra, Yvonne Ochieng, Olayinka Omigbodun, Emily Queen, Jürgen Unützer, José Miguel Uribe-Restrepo, Miranda Wolpert, Lian Zeitz. Making cities mental health friendly for adolescents and young adults. Nature, 2024; 627(8002): 137. DOI: 10.1038/s41586-023-07005-4


