Survey Research
The Science of Asking Questions
Every day, we are surrounded by the results of survey research. News reports tell us the president’s latest approval rating, marketers track our satisfaction with a new product, and public health officials monitor trends in our behaviors and beliefs. The survey is perhaps the most visible and widely used research method in the social sciences, a powerful and versatile tool for gathering information about the attitudes, opinions, and behaviors of large groups of people. At its core, survey research is the science of asking questions. It is a method for collecting data by asking a sample of people to respond to a series of queries about a topic of interest. When conducted with rigor and care, a survey can provide a high-fidelity “snapshot” of a population, allowing researchers to describe its characteristics, identify patterns of association between variables, and track changes over time.
The apparent simplicity of the survey, however, is deceptive. While it may seem easy to write a few questions and send them out, the difference between a casual poll and a methodologically sound survey is vast. A well-designed survey is a sophisticated and finely tuned instrument. Every aspect of its design—from the precise wording of a single question to the order in which those questions are presented, and from the method of selecting participants to the strategy for encouraging their response—is the result of a series of deliberate and theoretically informed decisions. A flaw in any one of these areas can introduce bias and error, rendering the results of the entire study questionable.
This chapter provides a comprehensive, practical guide to the design and implementation of high-quality survey research. We will move beyond the simple idea of asking questions to explore the intricate craft of building a valid and reliable research instrument: the questionnaire. We will delve into the art of question wording, providing clear guidelines for avoiding common pitfalls that can confuse respondents and distort their answers. We will examine the strategic choices involved in structuring a questionnaire to ensure a logical flow and to minimize the subtle psychological biases that can be introduced by question order. We will then explore the various modes through which a survey can be administered—from traditional mail and telephone methods to the now-ubiquitous online survey—and weigh the distinct advantages and disadvantages of each. Finally, we will confront one of the most persistent challenges in survey research: the problem of nonresponse, and discuss strategies for maximizing participation. By the end of this chapter, you will have the foundational knowledge to design a survey that is not just a list of questions, but a powerful tool for generating credible and insightful knowledge about the world of mass communication.
The Logic and Purpose of Survey Research
Survey research is a quantitative method that falls squarely within the social scientific paradigm. Its primary goals are description and the exploration of correlational relationships. As a descriptive tool, a survey provides a numeric or quantitative description of the trends, attitudes, or opinions of a population by studying a sample of that population. It excels at answering “what” questions: What percentage of the public trusts the news media? What are the primary social media platforms young adults use to get news? How prevalent is the experience of online harassment among journalists?
As a tool for exploring relationships, surveys allow researchers to examine the statistical associations between two or more variables. A researcher might use a survey to test a hypothesis about the relationship between habitual exposure to television news and fear of crime, or to explore the correlation between social media use and levels of political polarization. It is crucial to remember, however, that a standard cross-sectional survey—one that collects data at a single point in time—can demonstrate that two variables are related, but it generally cannot establish a definitive cause-and-effect relationship. Because the data are collected simultaneously, it is often difficult to establish the temporal ordering required for a causal claim (i.e., that the cause preceded the effect). While responsible researchers use statistical controls to account for obvious alternative explanations, the correlational nature of cross-sectional survey data requires caution when making causal inferences.
To more rigorously study change and causality, researchers can employ longitudinal survey designs, which involve collecting data at multiple points in time.
A trend study surveys different samples from the same population at different times to track changes in the population as a whole (e.g., tracking presidential approval ratings month after month).
A cohort study follows a specific subgroup (a cohort, such as people born in the 1980s) over time, though it may use different samples from that cohort at each measurement wave.
A panel study, the most powerful longitudinal design, measures the same individuals at multiple points in time. This design allows researchers to track individual-level change and to more confidently establish the temporal order of variables, providing more substantial evidence for causal relationships.
While powerful, longitudinal studies are significantly more expensive and time-consuming than cross-sectional surveys and face unique challenges of their own, such as participant attrition (people dropping out of the study over time). For most research projects, especially those undertaken by students, the cross-sectional survey remains the most common and practical design. The success of any study, regardless of its design, hinges on the quality of its central instrument: the questionnaire.
The Heart of the Survey: Questionnaire Design
The questionnaire is the data collection instrument of a survey. It is a collection of written queries that participants are asked to respond to. The quality of the data you collect can be no better than the quality of the questions you ask. Crafting an effective questionnaire is a meticulous process that involves careful decisions about what to ask, how to ask it, and how to organize the questions into a coherent and user-friendly instrument.
Item Selection: What to Ask
The items included in a questionnaire should flow directly from the study’s research questions and hypotheses. Every question should have a clear purpose and be tied to a specific concept you intend to measure. When selecting items, researchers have two primary options: creating their own questions or using pre-existing, validated scales.
Whenever possible, researchers are encouraged to use established measurement tools that have been developed and validated by previous scholars. A vast number of these scales exist to measure common communication constructs like communication apprehension, relational satisfaction, or media credibility. Using an existing scale offers two significant advantages. First, it saves the researcher the time and effort of the rigorous process of instrument development, as these scales have already been tested for reliability and validity. Second, it allows the researcher to connect their findings more directly to the existing body of literature, as they are using the same operational definition of a concept as other scholars in the field. Resources like the SAGE Encyclopedia of Communication Research Methods or specialized sourcebooks can be invaluable for finding these established measures.
In cases where no established measure exists for a novel concept, the researcher will need to create their own items. This requires a careful process of conceptualization and operationalization, as discussed in the previous chapter, to ensure the new items are valid and reliable measures of the intended construct.
Question Structure: Open-Ended vs. Closed-Ended
Survey questions can be broadly divided into two structural types: closed-ended and open-ended.
Closed-ended questions provide respondents with a fixed set of pre-determined response alternatives. The respondent’s task is to choose the option that best represents their answer.
Advantages: Closed-ended questions are easier and faster for respondents to answer. For the researcher, the data is essentially pre-coded, which makes statistical analysis much more straightforward and efficient.
Disadvantages: They can sometimes force respondents into choices that do not fully capture the nuance of their genuine opinion. The researcher may also fail to include a vital response category, thereby missing a key aspect of the issue.
Common Types:
Dichotomous Questions: Offer two choices (e.g., Yes/No, Agree/Disagree).
Multiple-Choice Questions: Provide a list of options from which the respondent can choose one or more answers.
Scaled Questions: Use a scale to measure the intensity of an attitude or belief. The most common is the Likert-type scale, which asks respondents to indicate their level of agreement with a statement (e.g., from “Strongly Disagree” to “Strongly Agree”).
Rank-Order Questions: Ask respondents to rank a list of items in order of preference or importance.
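One practical payoff of closed-ended items is that their responses are already numbers. The sketch below is purely illustrative (the three-item "media trust" scale and its item names are hypothetical, not from this chapter): it assumes a 5-point Likert-type scale coded 1 ("Strongly Disagree") through 5 ("Strongly Agree"), flips one reverse-worded item, and averages the items into a composite score.

```python
# Minimal, illustrative sketch of Likert-type scoring.
# Assumes a 5-point scale coded 1 = "Strongly Disagree" ... 5 = "Strongly Agree".

def reverse_score(value, scale_max=5):
    """Flip a reverse-worded item so a high score always means agreement."""
    return scale_max + 1 - value

def composite_score(responses, reverse_items=()):
    """Average a respondent's item scores into a single scale score."""
    total = 0
    for item, value in responses.items():
        total += reverse_score(value) if item in reverse_items else value
    return total / len(responses)

# Hypothetical three-item media-trust scale; "trust2" is reverse-worded.
respondent = {"trust1": 4, "trust2": 2, "trust3": 5}
score = composite_score(respondent, reverse_items={"trust2"})
print(round(score, 2))  # 4.33
```

Reverse-worded items are included in real scales precisely to catch inattentive responding, which is why the flip step matters before averaging.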
Open-ended questions allow respondents to answer in their own words, without being constrained by a fixed set of choices.
Advantages: They can provide rich, detailed, and unanticipated insights that the researcher might not have considered. They are excellent for exploratory research and for capturing the complexity and individuality of a respondent’s perspective.
Disadvantages: They require more time and cognitive effort from the respondent, which can lead to shorter or incomplete answers, or respondents skipping the question altogether. For the researcher, the data from open-ended questions must be systematically coded into categories before it can be analyzed, a process that can be very time-consuming and labor-intensive.
In practice, many questionnaires use a combination of both types. A survey might primarily consist of closed-ended questions for efficiency but include a few open-ended questions at the end of a section or the very end of the study to allow respondents to elaborate or provide additional comments.
The Art of Wording: Crafting Effective Questions
The exact way a question is worded can have a profound impact on the answers it elicits. Poorly worded questions are one of the most common sources of measurement error in survey research. The goal is to write questions that are clear, neutral, and easy for all respondents to understand and answer consistently.
Be Clear and Unambiguous. Use simple, direct, and familiar language. Avoid jargon, technical terms, and abbreviations that your respondents might not understand. A question like “What is your opinion on the efficacy of parasocial interaction in mitigating loneliness?” is filled with academic jargon. A clearer version would be, “Do you think that feeling a connection with a media personality helps people feel less lonely?”
Avoid Double-Barreled Questions. A double-barreled question is a common error in which a single question asks about two or more different things at once. For example: “Do you believe the university should decrease tuition and increase student fees?” A respondent might agree with the first part but disagree with the second, making it impossible to give a single, accurate answer. The solution is to split it into two separate questions.
Avoid Leading or Loaded Questions. A leading question is phrased in a way that suggests a preferred answer or makes one response seem more socially desirable than another. For example, “Don’t you agree that all responsible parents should vaccinate their children?” This wording pressures the respondent to agree. A more neutral version would be, “To what extent do you agree or disagree with the statement: All parents should vaccinate their children.” Similarly, avoid emotionally loaded language that can bias the response.
Avoid Double Negatives. Questions that use double negatives can be grammatically confusing and are often misinterpreted by respondents. A question like, “Do you disagree that the media should not be censored?” is difficult to parse. A clearer phrasing would be, “To what extent do you agree or disagree that the media should be censored?”
Ensure Respondents are Competent to Answer. Do not ask questions that respondents are unlikely to have the knowledge to answer. Asking the general public for their opinion on a highly technical piece of legislation is unlikely to yield meaningful data.
Be Mindful of Sensitive Topics. When asking about sensitive topics (e.g., income, illegal behavior, personal health), phrase questions carefully to be as non-judgmental as possible. Assurances of anonymity and confidentiality, which should be provided in the survey’s introduction, are particularly crucial for encouraging honest answers to these questions.
Assembling the Questionnaire: Structure and Flow
Once the individual items have been crafted, they must be assembled into a coherent questionnaire. The organization, layout, and instructions of the instrument can significantly influence a respondent’s willingness to complete the survey and the quality of the data they provide.
Introduction and Instructions
Every questionnaire should begin with a clear and concise introduction. This introduction serves as a cover letter and should include several key pieces of information:
The name of the organization or researcher conducting the survey.
The purpose or goal of the research, explained in simple terms.
An estimate of how long the survey will take to complete.
A clear statement about whether responses will be anonymous or confidential.
Any general instructions needed to complete the survey.
In addition to the main introduction, clear instructions should be provided for each new section or type of question within the survey to ensure participants understand how to respond correctly.
Question Sequencing and Order Effects
The order in which questions are asked is not a trivial matter. Research has consistently shown that the placement of a question can influence the answers to subsequent questions, a phenomenon known as question-order effects.
Funnel vs. Inverted Funnel: A common organizational structure is the funnel format, which starts with broad, general questions and then proceeds to more specific ones. This helps to ease the respondent into the survey. The inverted funnel format, which starts with specific questions, is less common but can be used in certain situations.
Priming Effects: Earlier questions can “prime” respondents by making certain information more accessible in their minds, which can then influence their answers to later questions. This can lead to assimilation effects (where answers to later questions become more similar to the primed information) or contrast effects (where answers move in the opposite direction). A general rule of thumb to minimize these effects is to ask general questions before specific questions on a similar topic.
Placement of Sensitive and Demographic Questions: It is often advisable to place the most interesting and important questions early in the survey to capture the respondent’s attention. Sensitive or potentially boring questions, such as those about demographics (age, income, race), are typically placed at the end of the questionnaire. By the time respondents reach these questions, they are more invested in the survey and more likely to complete them.
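One common way researchers handle order effects, beyond the general-before-specific rule, is to randomize the order of items within topic blocks so that no single fixed sequence primes every respondent the same way. The sketch below is a minimal illustration of that idea (the block and item names are hypothetical); block order stays fixed, preserving the funnel structure across blocks.

```python
import random

def randomized_order(blocks, seed=None):
    """Shuffle items within each topic block; block order stays fixed,
    so general-before-specific sequencing across blocks is preserved."""
    rng = random.Random(seed)
    order = []
    for block in blocks:
        items = list(block)   # copy so the caller's lists are untouched
        rng.shuffle(items)
        order.extend(items)
    return order

# Hypothetical blocks: general news-use items first, then trust items.
blocks = [["news_use_1", "news_use_2"], ["trust_1", "trust_2", "trust_3"]]
print(randomized_order(blocks, seed=42))
```

Most online survey platforms offer this kind of within-block randomization as a built-in option, so in practice you rarely need to code it yourself.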
Formatting and Layout
The visual appearance of the questionnaire matters. A professional, well-organized, and uncluttered layout can increase response rates and reduce measurement error.
Use Filter and Contingency Questions: To avoid asking respondents questions that are not relevant to them, use filter questions (also called skip questions). For example, a filter question might ask, “Do you have children?” If the respondent answers “No,” they are instructed to skip the subsequent section of contingency questions about parenting. Online survey platforms like Qualtrics or SurveyMonkey make this process of “skip logic” seamless for the respondent.
Avoid Fatigue: Be mindful of the fatigue effect. A questionnaire that is too long or visually dense can tire respondents, leading them to stop paying close attention or to abandon the survey altogether. Keep the instrument as concise as possible, and use white space and clear headings to break up long sections.
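The filter-and-contingency pattern described above reduces to simple branching logic. The sketch below is illustrative only (the question IDs are hypothetical and this is not any platform's actual API): the answer to a filter question determines whether the respondent sees the contingency block or skips past it.

```python
# Minimal sketch of skip logic with hypothetical question IDs.

def next_question(current_id, answer):
    """Return the ID of the next question to show the respondent."""
    if current_id == "q_has_children":
        # Non-parents skip the contingency block about parenting.
        return "q_parenting_1" if answer == "Yes" else "q_media_use_1"
    return None  # all other questions proceed in their listed order

print(next_question("q_has_children", "No"))   # q_media_use_1
print(next_question("q_has_children", "Yes"))  # q_parenting_1
```

On paper questionnaires the same routing has to be spelled out in written instructions (“If no, skip to Question 12”), which is one reason online platforms reduce respondent error.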
Pre-Testing: The Essential Dress Rehearsal
Before launching a full-scale survey, it is absolutely essential to pre-test (or pilot test) the questionnaire. A pre-test involves administering the survey to a small group of people who are similar to those in your actual study population. This “dress rehearsal” is the single best way to discover problems with your instrument before it is too late. The purpose of a pre-test is to:
Identify questions that are confusing, ambiguous, or poorly worded.
Check the flow and logic of the questionnaire, including any skip patterns.
Get an accurate estimate of how long the survey takes to complete.
Discover any issues with the instructions or layout.
Receive general feedback from participants about their experience taking the survey.
One effective pre-testing method is the cognitive interview, where you ask participants to “think aloud” as they answer each question, explaining how they are interpreting the question and arriving at their answer. This can provide invaluable insights into how your questions are being understood. The feedback from a pre-test should be used to revise and refine the questionnaire before the final data collection begins.
Survey Administration: Modes of Data Collection
A researcher must decide on the most appropriate mode for administering the survey. Each method of data collection has a unique set of strengths and weaknesses related to cost, speed, sampling, and the type of data that can be collected.
Self-Administered Questionnaires
In this mode, respondents complete the questionnaire on their own, without an interviewer present.
Mail Surveys:
- Pros: Can reach a wide geographic area.
- Cons: Can be expensive (printing, postage), data collection is very slow, and they typically suffer from very low response rates.
Online Surveys:
- Pros: Online surveys are now the most common mode. They are inexpensive, data collection is fast, and responses are automatically entered into a dataset. Online platforms allow for complex skip logic and the easy integration of multimedia elements.
- Cons: Obtaining a representative, probability-based sample can be complicated, as there is no universal sampling frame for email addresses or internet users. Many online surveys rely on non-probability convenience samples, which limits generalizability. Unsolicited surveys are often ignored, leading to low response rates and self-selection bias.
Interviewer-Administered Surveys
In this mode, an interviewer asks the questions and records the respondent’s answers.
Face-to-Face Interviews:
- Pros: This mode typically yields the highest response rates. The interviewer can build rapport, clarify confusing questions, and use probes to elicit more detailed, open-ended responses.
- Cons: This is by far the most expensive and time-consuming method of survey administration. There is also the potential for interviewer bias (where the interviewer’s characteristics or behavior influences the answers) and social desirability bias (where respondents give answers to appear in a positive light).
Telephone Interviews:
- Pros: Faster and significantly less expensive than face-to-face interviews, while still allowing for rapport and clarification.
- Cons: Response rates for telephone surveys have plummeted in recent years due to the rise of caller ID, the decline of landlines, and general public resistance to unsolicited calls. Surveys must be shorter and less complex than in other modes.
The Challenge of Nonresponse
Regardless of the administration mode, every survey researcher must confront the challenge of nonresponse. The response rate is the percentage of people in the selected sample who complete and return the survey. A low response rate is a serious threat to the validity of a survey’s findings. The primary concern is response bias, which occurs when the people who choose to respond to the survey are systematically different from those who do not. For example, if only the most politically extreme individuals respond to a political survey, the results will not be representative of the more moderate general population.
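The response rate itself is simple arithmetic, and pairing it with a benchmark comparison makes the response-bias concern concrete. The sketch below is illustrative only (all figures are invented): it computes the rate and then the gap, in percentage points, between a respondent characteristic and its known population value, where a large gap hints that respondents differ systematically from nonrespondents.

```python
def response_rate(completed, sample_size):
    """Percentage of the selected sample who completed the survey."""
    return 100 * completed / sample_size

def proportion_gap(respondent_share, population_share):
    """Gap in percentage points between respondents and the population
    on some known characteristic; large gaps suggest response bias."""
    return 100 * (respondent_share - population_share)

# Invented figures: 240 completions from a selected sample of 1,000.
print(response_rate(240, 1000))  # 24.0

# Invented benchmark: 62% of respondents are strong partisans,
# versus a known population figure of 50%.
print(round(proportion_gap(0.62, 0.50), 1))  # 12.0
```

Note that a high response rate does not by itself guarantee an unbiased sample, and a low one does not guarantee bias; the benchmark comparison is what speaks to representativeness.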
While there is no magic number for an “acceptable” response rate, higher is always better. Researchers should take every possible step to maximize participation. Key strategies include:
Offer Incentives: Providing a small monetary payment, a gift card, or entry into a drawing can significantly boost response rates.
Use Follow-Up Reminders: Sending one or more reminders to non-respondents is one of the most effective techniques for increasing participation.
Ensure Professionalism: A well-designed, professional-looking questionnaire with a compelling introduction or cover letter signals to potential respondents that the research is serious and worthy of their time.
Keep it Concise: A shorter survey is less of a burden on respondents and is more likely to be completed.
Conclusion: A Tool of Precision
Survey research, when executed with care and precision, is a compelling method for understanding the social world. It allows us to take the pulse of public opinion, describe the media habits of a population, and uncover the complex relationships between our communication behaviors and our social lives. The success of this method, however, is not a matter of chance. It is the direct result of a series of thoughtful and deliberate choices made at every stage of the research process.
From the careful conceptualization of a research question to the meticulous wording of each item on a questionnaire, from the strategic organization of the instrument to its essential pre-testing, and from the selection of an appropriate administration mode to the persistent effort to maximize response rates—every step is a critical component in the construction of a credible study. A well-designed survey is not a blunt instrument, but a tool of precision. By mastering the principles of its design and implementation, you equip yourself with one of the most fundamental and widely respected skills in the social scientist’s toolkit.
Journal Prompts
Think about a time when you were asked to take a survey—maybe in a class, at work, or online. Did any of the questions confuse you, feel biased, or leave you without an option that reflected your honest opinion? Describe one such moment. What made the question problematic, and how might you rewrite it to improve it?
Imagine you’re designing a survey for your research project. What would be the central question your survey aims to answer? List two variables you’d want to measure and describe one closed-ended and one open-ended question you would include to help you do so. Why did you choose each format?
Why do you think people often ignore or skip surveys? From your perspective as both a respondent and future researcher, what strategies would make you more likely to complete a survey? How do your answers shape the way researchers must think about sampling and nonresponse?