Chapter 8: The Ethics of Inquiry

Learning Objectives

  • Understand the historical events that shaped modern research ethics protections
  • Apply the Belmont Report’s three principles (respect for persons, beneficence, justice) to communication research
  • Navigate the institutional review process and distinguish exempt, expedited, and full review categories
  • Evaluate the specific ethical questions that arise in content analysis, survey research, and digital media research
  • Recognize ethical obligations in writing and reporting, including the problems of fabrication, falsification, and selective reporting

[Image: Scales of justice representing research ethics]

In June 2014, researchers at Facebook and Cornell University published a study in the Proceedings of the National Academy of Sciences that would become one of the most debated experiments in the history of social science. For one week in January 2012, Facebook had manipulated the News Feeds of nearly 700,000 users without their knowledge. Some users saw fewer positive posts from friends; others saw fewer negative posts. The researchers then measured whether the manipulation affected what users themselves posted. It did. Users exposed to fewer positive posts produced slightly more negative content, and vice versa. The study, led by Adam Kramer, Jamie Guillory, and Jeffrey Hancock, offered evidence of “massive-scale emotional contagion through social networks” (Kramer, Guillory, & Hancock, 2014).

The finding was interesting. The reaction was explosive.

Critics pointed out that nearly 700,000 people had been enrolled in a psychological experiment without informed consent. No one had been told their emotional environment was being manipulated. No one had the opportunity to decline. Facebook argued that its terms of service, which users agree to upon creating an account, authorized the use of data for “internal operations, including troubleshooting, data analysis, testing, and research.” The researchers argued that the manipulation was minor, the effect size was tiny, and the study produced valuable scientific knowledge.

The scientific community was divided. Some defended the study as ethically comparable to A/B testing that tech companies perform routinely. Others argued that emotional manipulation, however slight, crosses a line that terms-of-service agreements cannot legitimize. PNAS, the journal that published the study, took the unusual step of appending an “editorial expression of concern” to the article, acknowledging questions about the consent process.

The Facebook study crystallized every major tension in research ethics: the boundary between research and product development, the adequacy of passive consent, the obligations of researchers to participants who don’t know they’re participants, and the question of whether “minimal risk” justifies bypassing standard protections. It is the ideal case study for this chapter because it doesn’t have a clean answer. Reasonable people disagree about whether the study was ethical. What’s not debatable is that the questions it raised matter, and that every researcher must develop a framework for navigating them.

Why Ethics Isn’t a Formality

Students sometimes approach research ethics as a bureaucratic requirement: a form to fill out, a training module to complete, a hurdle between them and the “real” work of data collection. This is understandable. Ethics review can feel procedural, especially when your study involves publicly available song lyrics rather than vulnerable human populations.

But ethics isn’t a procedure. It’s a disposition. It shapes how you think about your relationship to the people whose data, content, or behavior you study. It governs how honestly you report what you find. And it reflects the fact that the history of research includes episodes of genuine harm, episodes serious enough to justify every protection that now exists.

Historical Context: How We Got Here

Modern research ethics regulations exist because researchers harmed people. Not hypothetically. Not in edge cases. Systematically and with institutional support. Understanding this history is essential, not to assign guilt to the current generation, but to appreciate why the protections exist and what they’re designed to prevent.

The Tuskegee Syphilis Study (1932-1972)

For forty years, the United States Public Health Service studied the progression of untreated syphilis in 399 Black men in Macon County, Alabama. The men were told they were receiving free treatment for “bad blood.” They were not. Even after penicillin became the standard treatment for syphilis in the 1940s, the researchers withheld it. Twenty-eight men died directly of syphilis, 100 died of related complications, 40 wives were infected, and 19 children were born with congenital syphilis.

The study was not conducted by rogue scientists. It was funded by the federal government, staffed by credentialed researchers, and published in peer-reviewed journals for decades without significant objection from the scientific community. Its exposure in 1972 by journalist Jean Heller led directly to the regulatory framework we use today.

The Milgram Obedience Experiments (1961)

Stanley Milgram’s famous experiments at Yale examined whether ordinary people would administer what they believed were painful electric shocks to a stranger simply because an authority figure instructed them to do so. Most did. The experiments produced foundational insights about authority and obedience, but they did so by subjecting participants to extreme psychological distress. Participants believed they were causing real harm to another person. Many exhibited signs of acute anxiety, and some experienced lasting psychological effects.

The ethical question was not whether the findings were important (they were) but whether the knowledge justified the deception and distress inflicted on participants.

Humphreys and the Tearoom Trade (1970)

Sociologist Laud Humphreys studied anonymous sexual encounters between men in public restrooms. He recorded participants’ license plate numbers without their knowledge, traced their home addresses through police records, and later visited their homes in disguise to conduct “health surveys,” gathering personal information the men had no idea was connected to the restroom observations.

Humphreys argued that his research revealed important truths about a hidden population. Critics argued that he violated his subjects’ privacy in ways that could have destroyed their lives, marriages, and careers if the data had been exposed.

The Common Thread

Each case involved researchers who believed their work served a greater good. Each involved participants who were harmed, deceived, or exploited. And each contributed to the recognition that good intentions are not sufficient protection. Researchers need external accountability, formal principles, and institutional oversight.

The Belmont Report: Three Principles

In 1979, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research published The Belmont Report, which established three foundational principles for ethical research (National Commission, 1979). These principles now govern virtually all research involving human subjects in the United States.

Principle 1: Respect for Persons

People must be treated as autonomous agents capable of making their own decisions. This means:

  • Informed consent: Participants must understand what the research involves, what risks it carries, and that they can withdraw at any time without penalty.
  • Protection of vulnerable populations: People with diminished autonomy (children, prisoners, individuals with cognitive impairments) require additional protections because their ability to provide truly voluntary consent is compromised.

Applied to communication research: If you survey listeners about their emotional responses to music, participants must know they’re in a study, understand what they’ll be asked to do, and be free to stop at any time. If you analyze public social media posts, the question becomes more complex: did users “consent” to being studied when they posted publicly? We’ll return to this question below.

Principle 2: Beneficence

Researchers must maximize benefits and minimize harms. This involves two obligations:

  • Do no harm: The research should not cause physical, psychological, social, or economic injury to participants.
  • Maximize the ratio of benefits to risks: The knowledge gained must justify any risks imposed on participants.

Applied to communication research: Most content analysis carries minimal risk because you’re analyzing texts, not interacting with people. But survey and experimental research can involve psychological discomfort (exposing participants to upsetting media content), social risk (collecting sensitive information that could embarrass participants if leaked), or economic risk (taking participants’ time without adequate compensation).

Principle 3: Justice

The benefits and burdens of research must be distributed fairly. No group should bear a disproportionate share of research risks while another group receives the benefits.

Applied to communication research: The Tuskegee study violated justice because it imposed all the risks on a vulnerable, marginalized population while the benefits (medical knowledge) accrued to the broader society. In communication research, justice concerns arise when studies about marginalized communities are designed and conducted without input from those communities, or when research on “deviant” media behavior targets specific demographic groups while treating majority behavior as the unmarked norm.

Institutional Review Boards (IRBs)

The Belmont Report’s principles are operationalized through Institutional Review Boards, committees at universities and research institutions that review proposed studies for ethical compliance before data collection begins.

What IRBs Review

IRBs evaluate whether your study:

  • Minimizes risks to participants
  • Ensures risks are reasonable relative to anticipated benefits
  • Selects participants equitably (not targeting vulnerable groups without justification)
  • Obtains and documents informed consent
  • Monitors data collection for safety
  • Protects participant privacy and confidentiality

Three Levels of Review

Not all studies require the same level of scrutiny.

Exempt review applies to research that poses minimal risk and falls into specific categories defined by federal regulation. Most content analysis of publicly available materials (newspaper articles, song lyrics, television broadcasts, public social media posts) qualifies for exempt status because no human subjects are directly involved.

Expedited review applies to research that involves no more than minimal risk but doesn’t qualify for exemption. Many surveys and interview studies fall here, particularly those that collect non-sensitive data from adult participants.

Full board review applies to research involving more than minimal risk, vulnerable populations, or deception. Experimental studies that manipulate participants’ emotional states, research with children or prisoners, and studies involving sensitive topics (substance use, sexual behavior, criminal activity) typically require full review.

The Content Analysis Question

Here is where things get interesting for this course: Does content analysis require IRB approval?

The answer depends on what you’re analyzing:

Analyzing published media content (Billboard lyrics, newspaper articles, television broadcasts, published books): Generally exempt. These are public documents. No human subjects are involved in the analysis itself. The IRB at most institutions will classify this as not involving human subjects research.

Analyzing public social media posts: This is contested territory. The posts are technically public, but posters may not have anticipated being studied. The Association of Internet Researchers (AoIR) has argued that “public” does not automatically mean “fair game for research.” Context matters: a tweet with 50,000 retweets has a different expectation of privacy than a post in a small support group forum, even if both are technically accessible. Many IRBs now require at minimum exempt review for social media research.

Analyzing private communications (private messages, closed group discussions, leaked documents): This almost always requires IRB review and raises serious consent questions.

For your course project, which involves content analysis of publicly available song lyrics and chart data, you will likely qualify for exemption. But the CITI training you complete (see the course assignment) ensures you understand these distinctions for future research that may involve human participants more directly.

Ethics in Specific Methods

Ethics in Content Analysis

Content analysis of published media is among the lowest-risk research methods. You are analyzing texts, not interacting with people. However, ethical considerations still apply:

  • Representation: How you categorize and interpret content has consequences. A content analysis that codes hip-hop lyrics as “violent” without accounting for genre conventions, metaphorical language, or cultural context can reinforce harmful stereotypes. Your coding decisions carry moral weight.
  • Cherry-picking: Selecting examples that confirm your hypothesis while ignoring contradictory cases is a form of intellectual dishonesty, even if no human subjects are directly harmed.
  • Copyright and fair use: Reproducing extensive lyrics or content in your report may raise copyright concerns. Brief quotation for analytical purposes generally falls under fair use, but reproducing entire songs or articles does not.

Ethics in Survey Research

Surveys involve direct interaction with human participants, which raises the ethical stakes:

  • Voluntary participation: Respondents must not be coerced. Offering course credit for survey participation is acceptable if an alternative assignment of equal value is available. Requiring participation without alternatives is coercive.
  • Sensitive questions: Questions about substance use, sexual behavior, mental health, or illegal activity require careful handling. Respondents must know they can skip any question. Data must be stored securely and de-identified.
  • Sampling fairness: Surveying only your friends, or only students in your major, creates samples that may not represent the population you claim to study. This is a methodological limitation, but it also raises justice concerns when findings about “college students” are generalized to broader populations.

Ethics in Experimental Research

Experiments that manipulate participants’ experiences carry the highest ethical burden:

  • Manipulation effects: If you expose participants to distressing media content (violent imagery, sad music, fear-based messaging), you must consider whether the exposure could cause lasting harm.
  • Deception and debriefing: If participants are deceived about the study’s purpose, you must debrief them fully afterward and allow data withdrawal.
  • Control group ethics: In some contexts (health interventions, educational programs), withholding a potentially beneficial treatment from the control group raises ethical questions about fairness.

Ethics in Digital and Social Media Research

Digital research occupies an evolving ethical landscape:

  • The “public” problem: Just because data are publicly accessible doesn’t mean people expected to be studied. A person who tweets about their depression is sharing with their followers, not volunteering for a research study.
  • Aggregation risk: Individual data points may be harmless, but combining them can create identifiable profiles. De-identification is harder than it appears.
  • Platform terms of service: Scraping data from platforms may violate terms of service even if the data are public. This is a legal and ethical gray area.
  • Vulnerable populations online: Minors, people in crisis, and members of stigmatized groups may be especially vulnerable to harm from research exposure, even when their posts are public.
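Aggregation risk is easier to appreciate with a concrete sketch. The following Python example uses entirely invented records (the names, zip codes, ages, and posts are hypothetical) to illustrate a minimal "linkage attack": joining a de-identified dataset against a public directory on quasi-identifiers can uniquely re-identify people, even though no single field is identifying on its own.

```python
# A toy illustration of aggregation risk (all data here are invented).
# Each record was "de-identified" by removing names, but the remaining
# quasi-identifiers (zip code, age, gender) can still single out an
# individual when joined against an outside source.

deidentified_posts = [
    {"zip": "49503", "age": 21, "gender": "F", "post": "struggling lately"},
    {"zip": "49503", "age": 34, "gender": "M", "post": "great concert!"},
    {"zip": "49504", "age": 21, "gender": "F", "post": "new single drops"},
]

# A public directory that the researcher (or anyone else) could cross-reference.
public_directory = [
    {"name": "A. Jones", "zip": "49503", "age": 21, "gender": "F"},
    {"name": "B. Smith", "zip": "49503", "age": 34, "gender": "M"},
]

def link(posts, directory):
    """Re-identify posts whose quasi-identifiers match exactly one person."""
    matches = []
    for post in posts:
        candidates = [p["name"] for p in directory
                      if (p["zip"], p["age"], p["gender"]) ==
                         (post["zip"], post["age"], post["gender"])]
        if len(candidates) == 1:  # unique combination -> re-identified
            matches.append((candidates[0], post["post"]))
    return matches

# Both "anonymous" posters in zip 49503 are uniquely re-identified.
print(link(deidentified_posts, public_directory))
```

The same logic scales up: the more fields a dataset retains, the more combinations become unique, which is why de-identification is harder than it appears.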

Ethical Writing and Reporting

Ethics doesn’t end when data collection is over. How you analyze and report findings carries its own ethical obligations.

Fabrication and Falsification

Fabrication is inventing data that don’t exist. Falsification is manipulating data or results to change the outcome. Both are forms of scientific fraud, and both are career-ending if discovered. They are also more common than the research community likes to admit: surveys of researchers consistently find that a small but meaningful percentage acknowledge engaging in questionable practices (Babbie, 2021).

Selective Reporting

A more subtle form of dishonesty is selective reporting: running many analyses but reporting only those that produced significant results, or testing multiple hypotheses but presenting only the ones that “worked” (Simmons, Nelson, & Simonsohn, 2011). At the scale of entire studies, the same dynamic produces the “file drawer problem”: null results never reach publication, so the literature systematically overstates the strength of effects.
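The inflation behind selective reporting is simple arithmetic, and a small simulation makes it concrete. The sketch below (illustrative Python; the sample sizes and seed are my choices, not from any cited study) shows that a researcher who runs five independent tests on pure noise has roughly a 22% chance of finding at least one "significant" result at p < .05.

```python
import random
import statistics

# Analytic family-wise error rate: probability that at least one of
# five independent tests on pure noise reaches p < .05.
alpha, k = 0.05, 5
analytic = 1 - (1 - alpha) ** k  # ~0.226

# Monte Carlo check: simulate many "studies," each running k two-group
# comparisons on random data with no true effect, and count how often
# at least one comparison looks significant.
random.seed(42)

def fake_study(n=30, k=5):
    """Run k two-group comparisons on pure noise; True if any 'hits'."""
    for _ in range(k):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # Rough two-sample z-test (an approximation, fine for illustration)
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > 1.96:  # two-tailed p < .05
            return True
    return False

hit_rate = sum(fake_study() for _ in range(2000)) / 2000
print(f"analytic: {analytic:.3f}, simulated: {hit_rate:.3f}")
```

If only the "significant" test is reported, a roughly one-in-five fluke is presented as a one-in-twenty finding, which is exactly the distortion pre-registration is designed to expose.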

HARKing

HARKing (Hypothesizing After the Results are Known) occurs when researchers examine their data, identify patterns, and then write the paper as though they had predicted those patterns all along. This transforms exploratory analysis (which is legitimate) into confirmatory analysis (which requires pre-specification). The result looks more impressive than it is, because a “prediction” that was actually a post-hoc observation carries none of the epistemic weight of a genuine a priori hypothesis.

The pre-registration practices discussed in Chapter 6 are a direct defense against HARKing. By documenting your hypotheses before analysis, you make it transparent which findings were predicted and which were discovered.

Honest Interpretation

Even without outright fabrication, researchers face constant temptation to overstate their findings. “Our results suggest” becomes “our results demonstrate.” A small effect size gets buried while a barely significant p-value gets headlined. Limitations are mentioned but minimized. The discussion section tells a cleaner story than the data support.

Ethical reporting means stating what you found, acknowledging what you didn’t find, and being transparent about the boundaries of your claims. It means writing a limitations section that genuinely grapples with weaknesses rather than performing humility while defending every decision. It means distinguishing between what the data show and what you wish they showed.

This is harder than it sounds. But it is the core ethical obligation of every researcher.

The CITI Training

As part of this course, you will complete the Collaborative Institutional Training Initiative (CITI) program’s “IRB Social Behavioral Student” module. This is an industry-standard training that introduces the regulatory framework for human subjects research in the United States.

The training covers:

  • The history of research ethics (Belmont Report, Tuskegee, Nuremberg Code)
  • Federal regulations governing human subjects research
  • Informed consent requirements
  • IRB review categories and procedures
  • Privacy and confidentiality protections
  • Ethical issues in specific research contexts

Completing CITI certification serves two purposes. First, it satisfies the institutional requirement for anyone conducting research involving human subjects at most universities. Second, and more importantly, it provides the conceptual vocabulary for thinking about ethical questions that will arise throughout your research career.


Practice: Ethical Decision-Making

Exercise 8.1: The Facebook Case

Read the following summary of the Kramer, Guillory, and Hancock (2014) study:

For one week in January 2012, Facebook manipulated the News Feeds of 689,003 users. Some users saw fewer positive posts from friends; others saw fewer negative posts. The researchers then measured whether the manipulation affected users’ own posting behavior. Users exposed to fewer positive posts produced slightly more negative content. The study was published in PNAS in 2014.

Questions:

  1. Apply each of the Belmont Report’s three principles to this study. Where does the study comply, and where does it fall short?
  2. Facebook argued that its terms of service authorized the research. Is terms-of-service agreement equivalent to informed consent? Why or why not?
  3. The researchers argued that the manipulation was “consistent with Facebook’s Data Use Policy.” Does the fact that a practice is legal make it ethical?
  4. If this study had been reviewed by a university IRB before data collection, what questions would the board likely have raised? What modifications might it have required?

Exercise 8.2: Content Analysis Ethics

You are designing a content analysis of Twitter posts about a recent mass shooting. The posts are public. You plan to code them for emotional tone, misinformation, and political framing.

Questions:

  1. Does this study involve “human subjects”? Why or why not?
  2. What ethical obligations do you have to the people who posted, even if their posts are public?
  3. Should you include usernames in your published report? What about direct quotes?
  4. Some posts may be from people who were directly affected by the shooting (victims’ families, witnesses). Does this change your ethical obligations?
  5. How would your analysis change if the posts were from a private Facebook grief support group rather than public Twitter?

Exercise 8.3: Survey Ethics

You want to survey college students about their music listening habits and mental health. Your survey includes questions about anxiety, depression, and substance use.

Questions:

  1. What information must you include in your informed consent document?
  2. Should participation be anonymous, confidential, or neither? What’s the difference, and which is most appropriate here?
  3. If a participant’s responses suggest they may be in crisis (e.g., endorsing suicidal ideation), what is your ethical obligation?
  4. You plan to recruit participants by offering extra credit in your class. Is this coercive? How could you design the incentive structure to minimize coercion?

Exercise 8.4: Ethical Reporting

A researcher runs five statistical tests on a dataset. Four produce non-significant results (p > .05). One produces a significant result (p = .03). The researcher writes the paper around the one significant finding, mentioning only that one test, and frames the result as confirming a pre-existing hypothesis.

Questions:

  1. What ethical problem(s) does this scenario illustrate?
  2. How does this practice distort the published literature?
  3. What should the researcher have done differently?
  4. How does pre-registration (Chapter 6) prevent this problem?

Reflection Questions

  1. The Consent Problem: Much contemporary research involves data that people generated without knowing they would be studied (social media posts, streaming behavior, public comments). Where should the line be drawn between “public data” and “data requiring consent”? Is there a principled distinction, or is it always contextual?

  2. Risk and Benefit: The Belmont Report requires that benefits justify risks. But who decides what counts as a “benefit”? The Facebook study produced knowledge about emotional contagion, but that knowledge primarily benefits Facebook’s business model. Should the identity of the beneficiary affect the ethical calculus?

  3. Your Study: Consider the research project you’re designing for this course. What ethical questions does it raise, even if it involves only publicly available data? How will you handle representation, interpretation, and honest reporting?

  4. Ethics as Culture, Not Compliance: This chapter argues that ethics is a disposition, not a procedure. What does that mean in practice? How would a research team with an ethical culture behave differently from one that merely complies with IRB requirements?


Chapter Summary

This chapter established the ethical foundations of research inquiry:

  • Historical cases (Tuskegee, Milgram, Humphreys) demonstrate why external ethical oversight is necessary. Good intentions do not prevent harm.
  • The Belmont Report (National Commission, 1979) established three principles: respect for persons (informed consent, autonomy), beneficence (maximize benefits, minimize harms), and justice (fair distribution of research burdens and benefits).
  • Institutional Review Boards (IRBs) operationalize these principles through review of proposed research at three levels: exempt, expedited, and full board review.
  • Content analysis of published media generally qualifies for exempt status, but ethical obligations remain regarding representation, interpretation, and honest reporting.
  • Survey and experimental research involve direct interaction with participants and carry higher ethical obligations around consent, confidentiality, and harm prevention.
  • Digital and social media research occupies evolving ethical territory where “public” does not automatically mean “ethically unproblematic.”
  • Ethical writing requires transparency in reporting: no fabrication, no falsification, no selective reporting, no HARKing. Pre-registration (Chapter 6) is a structural defense against these practices.
  • The CITI training provides the regulatory vocabulary for navigating human subjects research throughout your career.

Key Terms

  • Belmont Report: 1979 document establishing three foundational principles for ethical research (National Commission, 1979)
  • Beneficence: Ethical principle requiring researchers to maximize benefits and minimize harms
  • CITI Program: Collaborative Institutional Training Initiative; standardized ethics training for researchers
  • Debriefing: Post-study disclosure of the true purpose of research to participants who were deceived
  • Exempt review: IRB classification for research posing minimal risk and meeting specific federal criteria
  • Fabrication: Inventing data or results that do not exist
  • Falsification: Manipulating data, methods, or results to change outcomes
  • Full board review: IRB classification for research involving more than minimal risk or vulnerable populations
  • HARKing: Hypothesizing After the Results are Known; presenting exploratory findings as confirmatory
  • Informed consent: Voluntary agreement to participate in research after receiving adequate information about procedures, risks, and benefits
  • Institutional Review Board (IRB): Committee that reviews proposed research for ethical compliance
  • Justice: Ethical principle requiring fair distribution of research burdens and benefits
  • Respect for persons: Ethical principle requiring treatment of individuals as autonomous agents and protection of those with diminished autonomy
  • Selective reporting: Reporting only analyses or results that support desired conclusions

References

Babbie, E. R. (2021). The practice of social research (15th ed.). Cengage Learning.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790. https://doi.org/10.1073/pnas.1320040111

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366. https://doi.org/10.1177/0956797611417632

Wimmer, R. D., & Dominick, J. R. (2014). Mass media research: An introduction (10th ed.). Cengage Learning.


Required Reading: Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790. https://doi.org/10.1073/pnas.1320040111

Also read the PNAS editorial expression of concern appended to the article.

Prompt: The Facebook emotional contagion study has become a landmark case in research ethics, not because the answer is clear, but because reasonable scholars disagree about where the ethical lines should fall.

  1. Construct the strongest defense of the study’s ethics. What arguments support the claim that the research was ethically acceptable? Consider the minimal manipulation, the small effect size, the value of the knowledge produced, and the precedent of A/B testing in industry.
  2. Construct the strongest critique of the study’s ethics. What arguments support the claim that the research violated ethical norms? Consider informed consent, emotional manipulation, the power asymmetry between Facebook and its users, and the absence of standard IRB review.
  3. Evaluate the institutional failure: The study was reviewed by Cornell’s IRB, which determined that the university’s involvement was limited to data analysis (not data collection, which Facebook conducted). PNAS later acknowledged that the study’s “collection procedures… were inconsistent with the principles of obtaining informed consent.” Where did the system break down? Was this a failure of regulation, of institutional will, or of the ethical framework itself?
  4. Propose a redesign: How could the same research question (does emotional content in social feeds affect users’ emotional expression?) be studied ethically? Design an alternative study that would satisfy IRB requirements while still producing meaningful evidence. What tradeoffs would your redesign involve?
  5. The broader question: Should technology companies be held to the same ethical standards as university researchers when they conduct experiments on users? Why or why not? What would a regulatory framework for industry research look like?

Looking Ahead

Chapter 9 (The Methodologist’s Toolkit) introduces the full landscape of research methods available to social scientists: content analysis, surveys, experiments, and qualitative approaches. You’ll learn when each method is appropriate, what questions each can and cannot answer, and how they complement one another. Content analysis, the method you’ll execute this semester, is situated within this broader ecosystem so you understand not just how to do it, but where it fits among the alternatives.