Chapter 9: The Methodologist’s Toolkit
Learning Objectives
- Understand the four major research methods in social science: content analysis, surveys, experiments, and qualitative methods
- Recognize what each method can and cannot tell you
- Match research questions to appropriate methods
- Distinguish between methods that describe content, measure attitudes, test causation, and explore meaning
- Understand how mixed methods designs combine approaches for more complete answers

Imagine four researchers walk into a conference, each studying the same broad topic: how media coverage of mental health has changed over the past decade. They’ve read the same literature. They share the same curiosity. But they’ve chosen different methods, and as a result, they’ve produced four fundamentally different studies.
Researcher A conducted a content analysis of 500 newspaper articles about depression, coding each for whether the coverage framed depression as a personal failing or a medical condition. She found that medical framing increased from 35% to 62% over the decade, while personal-responsibility framing declined.
Researcher B designed a survey of 1,200 adults, asking about their media consumption habits, their attitudes toward people with depression, and their willingness to support mental health funding. He found that respondents who reported heavy news consumption held more sympathetic attitudes toward depression, even after controlling for education, age, and political ideology.
Researcher C ran an experiment. She randomly assigned 300 participants to read one of two versions of a news article about depression: one using medical framing, one using personal-responsibility framing. Participants who read the medically framed article expressed significantly higher support for public mental health funding.
Researcher D conducted in-depth interviews with 25 people who had been diagnosed with depression, asking how they felt about media representations of their condition. She found that participants experienced medical framing as validating but also reductive, describing a tension between gratitude for destigmatization and frustration that coverage rarely captured the lived complexity of their experience.
Each study is rigorous. Each answers a real question. But they answer different questions, and they do so because the method shapes what you can see. Researcher A knows what’s in the content. Researcher B knows what audiences think. Researcher C knows whether framing causes attitude change. Researcher D knows what the experience means to the people living it.
No single method answers all four questions. A researcher who understands only one method is like a carpenter who owns only a hammer: every problem looks like a nail. This chapter introduces the full toolkit.
Why Methods Matter More Than Findings
There is a temptation, particularly among students new to research, to focus on what a study found and skip over how it found it. This is a mistake. The “how” determines the “what.” A study’s method constrains what claims it can legitimately make. A content analysis can describe patterns in media texts, but it cannot tell you how audiences interpreted those texts. A survey can capture what people say they believe, but it cannot establish that media exposure caused those beliefs. An experiment can demonstrate causation, but only in a controlled environment that may not resemble the real world. An interview can reveal rich subjective experience, but it cannot generalize to millions of people.
Understanding methods is not a technical skill separate from intellectual work. It is the intellectual work. When you read a published study and think, “That’s interesting, but does it really prove what the authors claim?”, you are engaging in methodological reasoning. When you design your own study and ask, “What kind of evidence would actually answer my question?”, you are choosing a method. This chapter equips you to make that choice wisely.
Content Analysis: What’s in the Message?
Core question: What patterns exist in media content?
Definition: Content analysis is the systematic, replicable examination of symbols of communication (Krippendorff, 2018). It involves defining categories, developing coding rules, and applying those rules consistently to a sample of texts, images, or other media artifacts.
What it can do:
- Describe the prevalence of themes, frames, or features in a body of content (“42% of articles framed depression medically”)
- Track changes over time (“Medical framing increased from 35% to 62% between 2014 and 2024”)
- Compare content across sources (“Network news used episodic framing more frequently than newspaper coverage”)
- Test relationships between content features (“Songs with negative lyric sentiment charted higher than songs with positive sentiment”)
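To make the arithmetic behind such comparisons concrete, here is a minimal sketch of the chi-square test researchers often use to ask whether a framing distribution differs between two periods. The counts are invented to echo the chapter's hypothetical 35% to 62% shift in medical framing; this is an illustration of the logic, not a prescribed analysis procedure.

```python
# Hand-rolled Pearson chi-square for a 2x2 table of observed counts.
# All counts below are hypothetical, chosen to mirror the 35% -> 62% example.
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    expected = [
        [row1 * col1 / n, row1 * col2 / n],
        [row2 * col1 / n, row2 * col2 / n],
    ]
    return sum(
        (obs - exp) ** 2 / exp
        for row_obs, row_exp in zip(table, expected)
        for obs, exp in zip(row_obs, row_exp)
    )

# Rows: early vs. late decade; columns: medical vs. personal-responsibility
observed = [[88, 162],   # ~35% of 250 early-decade articles framed medically
            [155, 95]]   # ~62% of 250 late-decade articles framed medically
stat = chi_square_2x2(observed)
print(round(stat, 1))  # 35.9, far above the 3.84 cutoff for p < .05 at df = 1
```

A statistic this large would lead a researcher to conclude that the framing distribution genuinely shifted between the two periods, rather than varying by sampling chance alone.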
What it cannot do:
- Tell you how audiences interpreted the content (that requires a survey or experiment)
- Explain why the patterns exist (that requires theory, and often additional methods)
- Establish causation (correlation between content features is not evidence that one causes the other)
When to use it: When your research question asks about the characteristics of media content itself, rather than about the people who produce or consume it. Content analysis is ideal for questions like: “How is X represented in Y medium?” or “Has the prevalence of Z changed over time?”
Strengths: Unobtrusive (you don’t interact with human subjects), systematic (replicable coding procedures), scalable (you can analyze large corpora), and historically traceable (you can study content from any era with surviving records).
Limitations: Coding decisions are interpretive, which means reliability is never perfect. Latent content (underlying meaning) is harder to code reliably than manifest content (surface features). And the method tells you what’s in the text, not what the text does to audiences.
Key methodological references: Krippendorff (2018) provides the definitive theoretical treatment. Neuendorf (2017) offers practical guidance on codebook design. Riffe, Lacy, Watson, and Lovejoy (2023) focus specifically on content analysis in mass communication. Lombard, Snyder-Duch, and Bracken (2002) establish standards for intercoder reliability reporting.
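Because intercoder reliability is central to the method you will execute, it helps to see the arithmetic once. The sketch below computes simple percent agreement and Cohen's kappa (agreement corrected for chance) for two hypothetical coders; the codes (1 = medical framing, 2 = personal-responsibility framing) and the data are invented, and real projects typically report chance-corrected statistics per the Lombard et al. (2002) standards rather than raw agreement alone.

```python
# Two hypothetical coders applying a two-category framing variable
# to the same ten pilot-test units. Data are invented for illustration.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of units on which the two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Observed agreement corrected for chance agreement (Cohen's kappa)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    # Chance agreement expected from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

coder_a = [1, 1, 2, 1, 2, 1, 1, 2, 1, 1]
coder_b = [1, 1, 2, 2, 2, 1, 1, 2, 1, 2]
print(percent_agreement(coder_a, coder_b))       # 0.8
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.6
```

Notice the gap between the two numbers: 80% raw agreement shrinks to a kappa of 0.6 once chance agreement is removed, which is why reliability standards insist on chance-corrected statistics.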
This is the method you will execute this semester. Chapters 13 through 16 walk you through the complete content analysis process: immersion, operationalization, codebook construction, and pilot testing. The remaining methods in this chapter are taught for literacy, not execution. You need to understand them well enough to read, evaluate, and critique studies that use them, and well enough to recognize when your own research question demands a method other than content analysis.
Survey Research: What Do People Think, Believe, or Do?
Core question: What are the attitudes, beliefs, behaviors, or characteristics of a population?
Definition: Survey research collects data from respondents through standardized questionnaires, measuring self-reported attitudes, behaviors, demographics, and experiences. Surveys can be administered online, by phone, by mail, or in person.
What it can do:
- Measure attitudes and beliefs at scale (“67% of respondents believe media coverage of mental health has improved”)
- Describe population characteristics (“The median age of daily podcast listeners is 34”)
- Identify correlations between variables (“Parasocial attachment to an influencer correlates with purchase intentions”)
- Track trends over time when repeated with similar populations (“Support for marijuana legalization has increased from 31% to 68% since 2000”)
What it cannot do:
- Establish causation. A correlation between news consumption and sympathetic attitudes toward depression does not prove that news consumption caused those attitudes. People who already hold sympathetic attitudes may simply consume more news about mental health. This is the self-selection problem, and it haunts all correlational research.
- Capture behavior directly. Surveys measure what people say they do, which is not always what they actually do. Self-report data are subject to social desirability bias (people present themselves favorably), recall bias (people misremember), and acquiescence bias (people tend to agree with statements regardless of content).
- Describe media content. Surveys tell you about audiences, not about messages. If you want to know what’s in the content, you need content analysis.
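The self-selection problem described above can be made vivid with a toy simulation. In the sketch below, a hidden trait (pre-existing sympathy) drives both news consumption and attitudes, while consumption has zero causal effect on attitudes by construction; the two still correlate. All quantities are invented for illustration.

```python
# Toy demonstration of a spurious correlation produced by self-selection.
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
sympathy = [random.gauss(0, 1) for _ in range(2000)]      # hidden trait
consumption = [s + random.gauss(0, 1) for s in sympathy]  # trait -> consumption
attitude = [s + random.gauss(0, 1) for s in sympathy]     # trait -> attitude
# Consumption never enters the attitude equation, yet the two correlate:
print(round(pearson_r(consumption, attitude), 2))  # clearly positive
```

A cross-sectional survey observing only consumption and attitude would see this correlation and could not, by itself, rule out the hidden trait as the real driver.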
When to use it: When your research question asks about people’s attitudes, beliefs, self-reported behaviors, or demographic characteristics, and when you need to generalize from a sample to a larger population.
Strengths: Can reach large, diverse samples. Standardized instruments allow comparison across populations and time periods. Validated scales exist for many psychological constructs (parasocial attachment, media trust, political efficacy). Relatively efficient in terms of cost and time.
Limitations: Self-report data are always imperfect. Response rates have declined dramatically in recent decades, raising questions about sample representativeness (Dillman, Smyth, & Christian, 2014). Question wording effects can bias results. And causation cannot be established from cross-sectional survey data alone.
Key methodological reference: Dillman, Smyth, and Christian (2014) provide the standard guide to survey design and administration. Chapter 11 covers survey methodology in greater depth.
Experimental Research: Does X Cause Y?
Core question: Does manipulating one variable cause a change in another?
Definition: An experiment involves deliberately manipulating an independent variable (the cause), randomly assigning participants to conditions, and measuring the effect on a dependent variable (the outcome). Random assignment is the key feature: it ensures that the groups being compared are equivalent on all characteristics except the manipulated variable, allowing causal inference.
What it can do:
- Establish causation. If participants randomly assigned to read a medically framed article express more support for mental health funding than participants assigned to read a personal-responsibility article, we can conclude that framing caused the difference, because random assignment ruled out alternative explanations.
- Isolate specific mechanisms. Experiments can test whether a particular feature of a message (its frame, its emotional tone, its source credibility) drives a particular outcome, holding everything else constant.
- Test theoretical predictions with precision. If Uses and Gratifications Theory (Katz, Blumler, & Gurevitch, 1973) predicts that people seek out mood-congruent media, an experiment can test this directly by manipulating mood and measuring media choices.
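Why does random assignment license causal claims? A minimal sketch: give each hypothetical participant a pre-existing trait, shuffle before assigning conditions, and the trait balances out across groups, so any outcome difference can be attributed to the manipulation. The trait, effect size, and scale below are all invented for illustration.

```python
# Sketch of random assignment balancing a pre-existing trait across groups.
import random

random.seed(7)
# Pre-existing sympathy each hypothetical participant brings to the lab
traits = [random.gauss(0, 1) for _ in range(300)]
random.shuffle(traits)                              # random assignment
medical_group, personal_group = traits[:150], traits[150:]

mean = lambda xs: sum(xs) / len(xs)
# The groups are near-equivalent on the pre-existing trait...
print(round(mean(medical_group) - mean(personal_group), 2))

# ...so a difference in outcomes reflects the manipulation, not the trait.
treatment_effect = 0.6                              # hypothetical framing effect
outcomes_medical = [t + treatment_effect + random.gauss(0, 1)
                    for t in medical_group]
outcomes_personal = [t + random.gauss(0, 1) for t in personal_group]
print(round(mean(outcomes_medical) - mean(outcomes_personal), 2))
```

The first difference hovers near zero; the second recovers something close to the built-in effect. Without random assignment (say, if sympathetic participants chose the medical-framing condition), the first difference would not be near zero and the causal interpretation would collapse.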
What it cannot do:
- Describe what naturally occurs. Experiments create artificial conditions. The “news article” participants read was written by the researcher, not by a journalist. The laboratory setting doesn’t replicate the distracted, multitasking reality of actual media consumption. This is the external validity problem: findings from controlled environments may not generalize to messy real-world contexts.
- Study phenomena that cannot be ethically manipulated. You cannot randomly assign people to experience trauma, poverty, or discrimination. You cannot randomly assign children to watch violent media for years. Some causal questions are answerable only through quasi-experimental or longitudinal designs.
- Capture long-term cumulative effects. Cultivation Theory (Gerbner & Gross, 1976) proposes that media influence accumulates over years of exposure. A one-hour experiment cannot test this.
When to use it: When your research question asks whether X causes Y, and when you can ethically and practically manipulate X and randomly assign participants to conditions.
Strengths: The gold standard for causal inference. Random assignment eliminates confounds. Controlled conditions allow precise measurement. Results are typically clear and interpretable.
Limitations: Artificial settings threaten external validity. Ethical constraints limit what can be manipulated. Sample sizes are often small (especially compared to surveys), which can reduce statistical power. And the “demand characteristics” problem means participants may behave differently because they know they’re being studied.
Key methodological reference: Campbell and Stanley (1963) remain the foundational text on experimental and quasi-experimental design. Chapter 12 covers experimental methodology in greater depth.
Qualitative Methods: What Does It Mean?
Core question: How do people experience, interpret, and make meaning of phenomena?
Definition: Qualitative methods encompass a family of approaches that prioritize depth over breadth, meaning over measurement, and context over generalizability. Major qualitative methods include in-depth interviews, focus groups, ethnography (participant observation), and textual analysis (including discourse analysis and narrative analysis).
What they can do:
- Reveal subjective experiences that quantitative methods miss. A survey can tell you that 67% of people with depression feel media coverage has improved. An interview can tell you what that improvement means to them, including the ambivalence, the nuance, and the contradictions that a Likert scale cannot capture.
- Generate theory. Rather than testing pre-existing hypotheses, qualitative methods can build theory from the ground up. Braun and Clarke’s (2006) thematic analysis, for instance, systematically identifies patterns in interview data that may reveal previously unrecognized phenomena.
- Provide context for quantitative findings. A content analysis might find that medical framing of depression has increased. Interviews with journalists could explain why: perhaps newsroom training programs, advocacy group pressure, or generational shifts in reporter attitudes drove the change.
- Study processes and dynamics. How does a fan community form? How does a newsroom decide which stories to cover? How does a political campaign craft its messaging? These questions require sustained observation, not surveys or experiments.
What they cannot do:
- Generalize to populations. Twenty-five interviews cannot represent the experiences of millions. Qualitative research produces transferable insights (findings that may apply in similar contexts) rather than generalizable findings (statistically representative of a population).
- Establish causation. Qualitative methods describe and interpret; they do not isolate causal mechanisms through controlled manipulation.
- Produce replicable quantitative data. Qualitative findings are interpretive, shaped by the researcher’s perspective and the specific context of data collection. Two researchers studying the same phenomenon may produce different but equally valid accounts.
When to use them: When your research question asks about meaning, experience, or process. When you want to understand why or how something happens from the perspective of the people involved. When quantitative data would miss the complexity of the phenomenon. And when you’re exploring a new topic where existing theory is thin.
Strengths: Depth, nuance, and context. Ability to capture complexity, contradiction, and ambiguity. Sensitivity to power dynamics and marginalized perspectives. Capacity to generate new theory rather than only testing existing theory.
Limitations: Labor-intensive (transcription, coding, analysis). Not generalizable in the statistical sense. Researcher subjectivity is unavoidable (though it can be made transparent through reflexivity). Findings are harder to summarize concisely than a p-value or a percentage.
Key methodological references: Braun and Clarke (2006) for thematic analysis. Brinkmann and Kvale (2015) for interviewing. Seidman (2019) for interview-based qualitative research. Chapter 10 covers qualitative methodology in greater depth.
Mixed Methods: Combining Approaches
Sometimes the most honest answer to “Which method should I use?” is “More than one.”
Mixed methods research combines quantitative and qualitative approaches within a single study or program of research. The logic is straightforward: each method has blind spots, and combining methods can compensate for individual weaknesses (Creswell & Creswell, 2023).
Common Mixed Methods Designs
Sequential explanatory: Quantitative data collection and analysis first, followed by qualitative data to explain or contextualize the quantitative findings.
Example: You conduct a content analysis of 500 songs, finding that songs with negative lyrics chart higher. Then you interview 20 listeners to understand why they’re drawn to negative content. The content analysis establishes the pattern; the interviews explain the mechanism.
Sequential exploratory: Qualitative data first, followed by quantitative data to test the patterns identified qualitatively.
Example: You interview 15 music journalists about how they decide which albums to review. You identify themes around “authenticity,” “newsworthiness,” and “audience interest.” Then you conduct a content analysis of 200 album reviews to test whether these themes predict review length and prominence.
Concurrent: Quantitative and qualitative data collected simultaneously, then integrated during analysis.
Example: You survey 500 people about their music streaming habits while simultaneously conducting focus groups with 30 of those respondents to understand the reasoning behind their choices. Survey data reveal what people do; focus groups reveal why.
Why Mixed Methods Matters
Mixed methods research is not always practical for a semester-long project. But understanding the logic prepares you for research beyond this course, and it helps you recognize why single-method studies always have limitations. When you read a content analysis that claims to explain audience behavior, you’ll know to ask: “Where is the audience data?” When you read a survey that claims to describe media content, you’ll know to ask: “Did anyone actually analyze the content?”
This critical literacy, the ability to recognize what a study’s method can and cannot support, is arguably the most valuable skill this chapter teaches.
Choosing a Method: A Decision Framework
The method should follow the question, not the other way around. Here is a framework for matching questions to methods:
If your question asks “What is in the content?” → Content analysis.
- “How are women represented in Super Bowl advertisements?”
- “What percentage of Billboard Hot 100 lyrics reference substance use?”
- “How does CNN’s framing of immigration differ from Fox News’s framing?”
If your question asks “What do people think, believe, or do?” → Survey.
- “Do heavy news consumers hold different attitudes toward immigration than light consumers?”
- “What gratifications do listeners report seeking from sad music?”
- “How does parasocial attachment to a political figure relate to voting intention?”
If your question asks “Does X cause Y?” → Experiment.
- “Does exposure to violent song lyrics increase aggressive cognition?”
- “Does medical framing of depression increase support for public funding?”
- “Does playlist placement affect perceived song quality?”
If your question asks “What does X mean to the people who experience it?” → Qualitative methods.
- “How do fans of underground hip-hop construct authenticity?”
- “What is the experience of journalists covering mass shootings?”
- “How do podcast listeners integrate shows into their daily routines?”
If your question asks “What is happening, and why?” → Consider mixed methods.
- “Has media framing of climate change shifted over the past decade, and if so, how do journalists explain this shift?”
- “Do songs with negative lyrics chart higher, and if so, why do listeners prefer them?”
Notice that some questions can be studied by multiple methods, but each method will produce a different kind of answer. The choice depends on what kind of evidence you need.
The Content Analysis Path
This course teaches all four methods conceptually but executes one: content analysis. This is a deliberate choice, not a limitation.
Content analysis is an ideal method for learning the full research process because it involves every stage of research design: developing research questions grounded in theory, operationalizing abstract concepts as measurable variables, constructing a coding instrument (the codebook), testing reliability, managing and cleaning data, conducting statistical analysis, and interpreting results. Every skill you develop through content analysis transfers directly to other methods. The student who can build a reliable codebook can build a reliable survey instrument. The student who can operationalize “lyric sentiment” can operationalize “media trust” or “political framing.”
The next seven chapters walk you through the content analysis process:
- Chapter 10: Qualitative Methods (interviews, focus groups, and thematic analysis)
- Chapter 11: Designing Surveys (conceptual foundations of survey method)
- Chapter 12: Designing Experiments (conceptual foundations of experimental method)
- Chapter 13: Music Immersion (qualitative engagement with your dataset before coding)
- Chapter 14: Vibes to Variables (translating observations into measurable constructs)
- Chapter 15: The Rulebook (building a codebook)
- Chapter 16: The Sampling Plan and Pilot Test (sampling strategy and reliability testing)
Chapter 10 teaches qualitative approaches as standalone methods with their own logic and quality criteria; you will draw on these qualitative skills immediately when you begin immersion in Chapter 13. Chapters 11 and 12 teach survey and experimental design so you can read, evaluate, and critique studies using those methods. Chapters 13 through 16 are the hands-on content analysis sequence you will execute from start to finish.
Practice: Matching Methods to Questions
Exercise 9.1: Method Identification
For each research question below, identify the most appropriate method (content analysis, survey, experiment, qualitative, or mixed methods) and explain why:
- “What themes appear in TikTok videos about body positivity?”
- “Does exposure to anti-smoking advertisements reduce smoking intentions among teenagers?”
- “What percentage of front-page New York Times stories use episodic framing?”
- “How do first-generation college students experience imposter syndrome in STEM programs?”
- “Do people who follow more news accounts on Twitter report higher political knowledge?”
- “Has the representation of LGBTQ+ characters in prime-time television changed between 2010 and 2024, and how do LGBTQ+ viewers experience those representations?”
Exercise 9.2: Method Limitations
For each of the following study designs, identify what the method cannot tell you and propose a complementary method that would address the gap:
Study A: A content analysis of 300 Instagram posts by fitness influencers finds that 72% emphasize appearance over health.
- What can’t this study tell you? _______________
- What complementary method would address the gap? _______________
Study B: A survey of 1,000 college students finds that heavy social media users report lower self-esteem.
- What can’t this study tell you? _______________
- What complementary method would address the gap? _______________
Study C: An experiment shows that participants who read news with a human-interest lead recall more information than those who read news with a summary lead.
- What can’t this study tell you? _______________
- What complementary method would address the gap? _______________
Exercise 9.3: Designing a Multi-Method Study
Choose a topic you’re interested in (music, news, social media, health communication, or any other domain). Write one research question for each of the four methods:
- A content analysis question about the topic: _______________
- A survey question about the topic: _______________
- An experimental question about the topic: _______________
- A qualitative question about the topic: _______________
Then write one sentence explaining how combining any two of these would produce stronger evidence than either alone.
Exercise 9.4: Evaluating Published Research
Find a published study in a communication journal. Identify:
- What method did the researchers use?
- What research question did they answer?
- What claims do they make in their discussion section?
- Are any of those claims unsupported by the method they used? (For example, does a content analysis make claims about audience effects? Does a survey make causal claims?)
Reflection Questions
The Hammer Problem: This chapter argues that researchers who know only one method treat every question as if it requires that method. Can you think of an example where a researcher’s method choice distorted the findings? What would a different method have revealed?
Content Analysis as Foundation: The chapter positions content analysis as the method you’ll execute, while surveys and experiments are taught for literacy. What are the advantages and disadvantages of this approach? What would you miss by only learning content analysis? What would you miss by learning all methods superficially?
The Qualitative Tension: Quantitative researchers sometimes dismiss qualitative work as “unscientific” because it’s not generalizable or replicable in the traditional sense. Qualitative researchers sometimes dismiss quantitative work as “reductive” because it strips away context. Where do you fall in this debate, and why? Is it possible to honor both traditions simultaneously?
Methods and Power: Different methods produce different kinds of knowledge, and different kinds of knowledge carry different kinds of authority. A randomized controlled trial carries more causal authority than an interview study. But an interview study may capture experiences that a trial systematically excludes. How should we think about the relationship between methodological rigor and epistemic justice?
Chapter Summary
This chapter introduced the four major research methods in social science:
- Content analysis asks what’s in the message. It describes patterns in media content through systematic coding. It cannot tell you about audience effects or establish causation. This is the method you will execute this semester.
- Survey research asks what people think, believe, or do. It measures self-reported attitudes and behaviors at scale. It cannot establish causation or describe media content directly.
- Experimental research asks whether X causes Y. It establishes causation through random assignment and controlled manipulation. It cannot describe naturally occurring phenomena or capture long-term cumulative effects.
- Qualitative methods ask what things mean to the people who experience them. They prioritize depth, context, and meaning. They cannot generalize to populations or establish causation.
- Mixed methods combine approaches to compensate for individual method limitations. Sequential explanatory, sequential exploratory, and concurrent designs each serve different purposes.
- Method choice follows research question: the question determines the method, not the other way around.
- Methodological literacy means knowing what each method can and cannot support, both for your own research and for evaluating others’.
Key Terms
- Content analysis: Systematic, replicable examination of communication symbols (Krippendorff, 2018)
- Experiment: Research design that manipulates an independent variable and randomly assigns participants to conditions to test causation
- External validity: The extent to which findings from controlled settings generalize to real-world contexts
- Mixed methods: Research combining quantitative and qualitative approaches
- Qualitative methods: Research approaches prioritizing depth, meaning, and context (interviews, focus groups, ethnography, textual analysis)
- Self-selection problem: The difficulty of establishing causation from correlational data because people choose their own media exposure
- Social desirability bias: Tendency for survey respondents to present themselves favorably
- Survey research: Data collection through standardized questionnaires measuring attitudes, behaviors, and characteristics
- Transferability: The degree to which qualitative findings may apply in similar contexts (qualitative parallel to generalizability)
References
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
Brinkmann, S., & Kvale, S. (2015). InterViews: Learning the craft of qualitative research interviewing (3rd ed.). Sage.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Rand McNally.
Creswell, J. W., & Creswell, J. D. (2023). Research design: Qualitative, quantitative, and mixed methods approaches (6th ed.). Sage.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.
Gerbner, G., & Gross, L. (1976). Living with television: The violence profile. Journal of Communication, 26(2), 172-199. https://doi.org/10.1111/j.1460-2466.1976.tb01397.x
Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. Public Opinion Quarterly, 37(4), 509-523. https://doi.org/10.1086/268109
Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Sage. https://doi.org/10.4135/9781071878781
Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2002). Content analysis in mass communication: Assessment and reporting of intercoder reliability. Human Communication Research, 28(4), 587-604. https://doi.org/10.1111/j.1468-2958.2002.tb00826.x
Neuendorf, K. A. (2017). The content analysis guidebook (2nd ed.). Sage. https://doi.org/10.4135/9781071802878
Riffe, D., Lacy, S., Watson, B. R., & Lovejoy, J. (2023). Analyzing media messages: Using quantitative content analysis in research (5th ed.). Routledge. https://doi.org/10.4324/9781003288428
Seidman, I. (2019). Interviewing as qualitative research: A guide for researchers in education and the social sciences (5th ed.). Teachers College Press.
Required Reading: Creswell, J. W., & Creswell, J. D. (2023). Research design: Qualitative, quantitative, and mixed methods approaches (6th ed.). Sage. Read Chapter 1: “The Selection of a Research Approach.”
Prompt: Creswell and Creswell argue that research design involves three nested levels of decision-making: the researcher’s philosophical worldview (postpositivist, constructivist, transformative, pragmatist), the research design (quantitative, qualitative, mixed methods), and the specific methods (surveys, experiments, interviews, content analysis, etc.). The worldview shapes the design, which shapes the method.
Identify your own philosophical worldview using Creswell and Creswell’s categories. What assumptions do you make about the nature of reality (ontology), how knowledge is produced (epistemology), and the role of values in research (axiology)? How do these assumptions constrain which methods feel natural to you?
This course operates primarily within a postpositivist worldview (there is a reality we can approximate through systematic observation) and uses a quantitative design (content analysis with statistical testing). How would a constructivist approach the same dataset differently? What questions would a constructivist ask about song lyrics that a postpositivist would not?
Find one published mixed methods study in a communication journal. Evaluate:
- What worldview does it implicitly adopt?
- How are the quantitative and qualitative components integrated? Are they truly mixed, or merely parallel (conducted side by side without integration)?
- Does the mixed methods design address limitations that either component alone would have? Be specific.
Creswell and Creswell note that pragmatism, the worldview that prioritizes “what works” over philosophical consistency, is the most common worldview underlying mixed methods research. What are the strengths and risks of pragmatism as a research philosophy? When does “what works” become an excuse for methodological incoherence?
Looking Ahead
Chapter 10 goes deeper into qualitative methods, the approach you will practice first through immersion and thematic pattern recognition. Chapters 11 and 12 then cover survey design and experimental design, teaching the conceptual foundations you need to read, evaluate, and critique studies using those methods. You won’t execute a survey or experiment this semester, but you will encounter them constantly in the literature you read, and you may design one in future research. Then, beginning with Chapter 13, the book shifts to execution: the hands-on content analysis sequence that takes you from qualitative immersion through codebook construction, pilot testing, data wrangling, statistical analysis, and final reporting.