8.7. Sampling Bias

Sampling bias is a systematic error that occurs during the sampling process and reduces the representativeness of the resulting data. Understanding its sources and developing strategies to minimize its impact are essential for conducting reliable research and interpreting research findings appropriately.

Road Map 🧭

  • Understand that bias can arise in both non-random and random sampling procedures.

  • Recognize the different types of sampling bias.

  • Understand the difference between bias and variability.

  • Be aware that it is impossible to design a perfect study. Take measures to minimize the flaws that can be controlled, and be honest in your interpretations about those that cannot be removed.

8.7.1. Understanding Sampling Bias: The Systematic Threat to Validity

Sampling bias is the result of obtaining a sample in which certain units or subjects are systematically favored over other members of the population. Unlike random sampling variation, which produces unpredictable differences between samples and can be reduced through larger sample sizes, bias creates consistent distortions that persist regardless of sample size.

Non-Random Sampling Guarantees Bias

Non-random sampling techniques create systematic bias by their very nature.

  • Convenience sampling systematically favors participants who are easy to access.

  • Self-selection overrepresents people who have specific motivations to participate (see the sketch below).
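Both mechanisms amount to sampling from an unrepresentative subgroup. As a concrete illustration, here is a minimal Python sketch (the subgroup sizes and values are hypothetical) that draws a convenience sample from only the easiest-to-reach fifth of a population and compares its estimate with a simple random sample of the same size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 10,000: the 2,000 "easy to reach" members
# differ systematically on the variable of interest.
easy_to_reach = rng.normal(loc=60, scale=10, size=2_000)
hard_to_reach = rng.normal(loc=45, scale=10, size=8_000)
population = np.concatenate([easy_to_reach, hard_to_reach])

n = 500
convenience = rng.choice(easy_to_reach, size=n, replace=False)  # accessible subgroup only
srs = rng.choice(population, size=n, replace=False)             # every member equally likely

print(f"Population mean:      {population.mean():.1f}")   # about 48
print(f"Convenience estimate: {convenience.mean():.1f}")  # about 60: systematically high
print(f"SRS estimate:         {srs.mean():.1f}")          # close to the population mean
```

No matter how large the convenience sample is made, its estimate stays near the subgroup mean rather than the population mean.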

Bias Can Still Occur in Random Samples

  • Undercoverage bias results from failing to include all members of the target population in the sampling frame. This can occur even in randomized studies when certain subgroups are systematically excluded.

  • Non-response bias occurs when selected participants fail to participate in the study, drop out before completion, or fail to complete portions of the study. This differs from undercoverage bias because these individuals were initially included in the sample but chose not to participate or couldn’t complete their participation. (A simulation after this list illustrates the resulting distortion.)

  • Response bias occurs when participants provide answers that do not accurately reflect their true beliefs, behaviors, or characteristics. This may happen because:

    • Participants answer questions in ways they think are socially desirable.

    • Participants do not recall past events accurately.

    • The questions are poorly worded or suggest preferred answers.
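Non-response bias lends itself to a short simulation. In the sketch below, the 20% trait rate and the response probabilities are assumptions made for illustration: the sample itself is perfectly random, but people with the trait of interest respond less often, so the observed rate understates the truth.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 20% truly have the trait being surveyed.
N = 100_000
has_trait = rng.random(N) < 0.20

# A genuine simple random sample of 2,000 people.
sample_trait = has_trait[rng.choice(N, size=2_000, replace=False)]

# Assumed response behavior: 30% response rate for those with the trait,
# 60% for everyone else.
p_respond = np.where(sample_trait, 0.30, 0.60)
responded = rng.random(sample_trait.size) < p_respond

print(f"True rate:              {has_trait.mean():.3f}")               # about 0.20
print(f"Rate among respondents: {sample_trait[responded].mean():.3f}") # about 0.11
```

Even though selection was random, the respondents are not representative of the sample that was drawn.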

8.7.2. Recognizing Bias in Practice

Example 1: Michigan Lead Poisoning Study

A childhood lead poisoning prevention council in Michigan undertook responsibility for determining the proportion of homes in the state with unsafe lead levels. Michigan was divided into municipalities, and homes were sampled from each municipality, for a total of 5,000 homes. However, several municipalities were not visited because of their high crime rates, and 73 homes could not be tested because residents refused.

Sampling Methodology Analysis

This study used stratified random sampling with municipalities as strata (a code sketch of such a design follows the list below). The researchers recognized that different municipalities would likely have different characteristics affecting lead levels:

  • Older municipalities might have more homes with lead paint.

  • Urban areas might have different housing stock than rural areas.

  • Socioeconomic differences between municipalities might correlate with housing quality and maintenance.
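To make the mechanics of such a design concrete, here is a minimal sketch using proportional allocation. The municipality names and home counts are hypothetical; only the 5,000-home total comes from the example.

```python
import pandas as pd

# Hypothetical sampling frame: every home in the state, tagged with its
# municipality (names and counts are made up for illustration).
frame = pd.DataFrame({
    "home_id": range(12_000),
    "municipality": ["Detroit"] * 6_000 + ["Lansing"] * 4_000
                    + ["Marquette"] * 2_000,
})

# Proportional allocation: sample the same fraction within each stratum,
# so each municipality's share of the 5,000 homes matches its share of
# the population.
sample = frame.groupby("municipality", group_keys=False).sample(
    frac=5_000 / len(frame), random_state=0
)
print(sample["municipality"].value_counts())
# Detroit 2500, Lansing 1667, Marquette 833
```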

Stratifying by municipality was a sound approach that could improve the precision of estimates while ensuring representation across different types of communities. Despite the solid sampling design, this study suffered from multiple types of bias.

Undercoverage Bias: The systematic exclusion of high-crime municipalities created undercoverage bias. This exclusion was particularly problematic because:

  • High-crime areas often correlate with poverty and older housing stock.

  • Lead poisoning risk is typically higher in low-income areas with older homes.

  • The very communities most at risk for lead exposure were systematically excluded.

Non-Response Bias: The 73 homes that refused testing created non-response bias. The reasons for refusal might correlate with lead levels:

  • Residents who suspect lead problems might refuse testing to avoid property value impacts.

  • Landlords might refuse testing to avoid legal obligations for remediation.

  • Residents with positive previous experiences with government might be more willing to participate.

Implications: The combination of these biases means the study likely underestimated lead problems in Michigan homes. Policymakers using these results would have an incomplete picture of the public health risk, potentially leading to inadequate resource allocation for lead remediation programs.

Example 2: Purdue Honor Pledge Study

The Honor Pledge Task Force (HPTF) at Purdue decided to gather data on the success of the honor pledge program. They randomly selected a sample of 132 students from the database of students who had voluntarily taken the pledge. Students were contacted via email and asked to answer questions regarding any violations of the pledge.

Sampling Methodology Analysis

This study used simple random sampling (SRS). The use of random selection from a complete database was methodologically sound within the constraints of the defined population.

However, the population was restricted to students who had voluntarily taken the pledge. This means any conclusions can only apply to pledge-taking students, not to all Purdue students. This limitation doesn’t represent bias per se, but it does limit the generalizability of results. This study also faced several sources of bias.

Response Bias: Students might not answer truthfully about honor code violations because:

  • Students might worry that admitting violations could lead to academic discipline, even if anonymity is promised.

  • Admitting to academic dishonesty violates social norms and personal identity as an ethical student.

  • Students might rationalize past behavior or genuinely not remember incidents that they did not consider serious violations at the time.

Non-Response Bias: Contacting participants through email creates opportunities for non-response:

  • Students routinely ignore emails that look like surveys or official communications.

  • Students might be particularly wary of emails asking about potentially incriminating behavior.

  • Students who have violated the pledge might be systematically less likely to respond.

Implications: The study would likely underestimate the true rate of honor code violations among pledge-taking students. This could lead to overly optimistic assessments of the pledge program’s effectiveness and inadequate attention to academic integrity issues.

8.7.3. Bias vs Variability

Understanding the distinction between bias and variability is crucial for interpreting research results and designing better studies. These two sources of error operate differently and require different strategies for management.

Conceptualizing Bias and Variability

Imagine the process of estimating a population parameter \(\theta\) as aiming at a target whose bullseye is \(\theta\) itself.

Fig. 8.14 Bias vs Variability: an illustration of estimation procedures with different degrees of bias and variability, pictured as sets of shots at a target whose bullseye is \(\theta\).

  • (a) High bias, low variability: The estimation procedure consistently misses the target in the same direction. The estimates cluster tightly together, but they’re all systematically wrong.

  • (b) Low bias, high variability: The estimation procedure is correct on average—the center of the estimates hits the target—but individual estimates vary widely around the true value. This is typical with small random samples.

  • (c) High bias, high variability: The estimation procedure both misses the target systematically and produces highly variable results. This represents the worst-case scenario where we’re both inconsistent and wrong on average.

  • (d) Low bias, low variability: This represents the ideal situation where the estimation procedure is both consistent and accurate on average. This is what we strive for with large, well-designed random samples.

Bias Threatens Validity, Variability Threatens Precision: Bias makes our conclusions wrong, while variability makes them uncertain.

Why This Distinction Matters

Larger sample sizes can reduce variability, but not bias. If our sampling procedure is biased, collecting more data using the same flawed procedure will only give us more precise estimates of the wrong value (the simulation below illustrates this). Bias must instead be removed through careful design of the sampling procedure and of the experiment.
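The following minimal simulation assumes a normally distributed population with true mean \(\theta = 50\) and a flawed procedure that systematically overshoots by 5 units; both numbers are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, bias = 50.0, 10.0, 5.0

for n in [25, 100, 400, 1_600]:
    # 1,000 repeated samples at each sample size, for each procedure.
    unbiased = rng.normal(theta, sigma, size=(1_000, n)).mean(axis=1)
    biased = rng.normal(theta + bias, sigma, size=(1_000, n)).mean(axis=1)
    print(f"n={n:4d}  unbiased: mean={unbiased.mean():.2f}, sd={unbiased.std():.2f}"
          f"  |  biased: mean={biased.mean():.2f}, sd={biased.std():.2f}")
```

As n grows, the spread (sd) of both procedures’ estimates shrinks toward zero, but the biased procedure’s estimates converge to 55 rather than 50: more data buys precision, not validity.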

The Reality of Imperfect Studies

It’s important to recognize that no study is perfect, and the goal is not to eliminate all possible sources of bias—which is impossible—but to minimize bias where possible and interpret results appropriately given the limitations that remain.

8.7.4. Bringing It All Together

Key Takeaways 📝

  1. Sampling bias is systematic error that consistently distorts results in predictable directions, unlike random error which averages out over repeated samples.

  2. Non-random sampling guarantees bias, but even random sampling does not ensure its elimination.

  3. Bias and variability are different. Bias makes conclusions wrong while variability makes them uncertain; bias cannot be reduced by larger sample sizes.

  4. Real studies always have limitations. The goal is to minimize bias through careful design and interpret results appropriately and honestly.

This section completes our exploration of the foundations needed for statistical inference. In this chapter, we’ve learned that:

  • Experimental design principles (control, randomization, replication) enable reliable statistical inference.

  • Sampling design determines whether results can be generalized to populations of interest.

  • Various forms of bias can threaten the validity of even well-designed studies.

  • Understanding these limitations is essential for interpreting research appropriately.

Exercises

  1. Bias Type Identification: For each scenario below, identify the primary type(s) of sampling bias present and explain how each bias might affect the study conclusions:

    1. A survey about job satisfaction is distributed to employees via company email, with a 35% response rate.

    2. A health study recruits participants by posting flyers in hospital waiting rooms.

    3. A political poll is conducted using landline telephone numbers, excluding cell phone users.

    4. A study of college student stress recruits participants from students seeking counseling services.

  2. Response Bias in Sensitive Topics: Design a study to investigate alcohol consumption patterns among college students:

    1. Identify three specific types of response bias that might affect this study.

    2. Develop question wording and study procedures that would minimize these biases.

    3. Describe how you would detect whether response bias is occurring in your data.

    4. What external validation methods could you use to check the accuracy of self-reported alcohol consumption?

  3. Non-Response Pattern Analysis: A health survey achieves the following response rates across different demographic groups:

    • Ages 18-30: 45% response rate

    • Ages 31-50: 62% response rate

    • Ages 51-70: 78% response rate

    • Men: 52% response rate

    • Women: 68% response rate

    1. Explain how these differential response rates could create bias in health outcome estimates.

    2. Describe specific strategies for improving response rates in underrepresented groups.

    3. If these response rate differences cannot be eliminated, how might you adjust your analysis to account for potential bias?

    4. What additional information would help you assess the magnitude of non-response bias?

  4. Bias vs. Variability Scenarios: For each situation, determine whether the primary problem is bias, variability, or both, and explain your reasoning:

    1. A political poll consistently shows a candidate with 52% support, but election results show they only received 47% of votes.

    2. Three different polls conducted simultaneously show the same candidate with 49%, 53%, and 51% support.

    3. A medical study shows highly variable results across participants, but the average effect matches previous research findings.

    4. A company’s customer satisfaction surveys always show very high ratings, but independent surveys show much lower satisfaction.

  5. Study Design Evaluation: Evaluate this study design for potential biases:

    “To study the effectiveness of a new online learning platform, researchers recruited participants through social media advertisements. Volunteers were randomly assigned to use either the new platform or traditional textbooks for 8 weeks. Learning outcomes were measured through online tests taken at home.”

    1. Identify all potential sources of bias in this study design.

    2. Classify each bias type and explain how it might affect results.

    3. Suggest specific modifications to reduce the most serious biases while maintaining study feasibility.