Tactics • Chapter 11 of 24

Research and Evidence Mastery

Your arguments are only as strong as your evidence. Learn to find, evaluate, and deploy sources that survive scrutiny.

12 min read

The Evidence Hierarchy

A brilliant argument built on bad evidence is a castle on sand. Before you can deploy evidence in debate, you need to understand what makes evidence good—and not all evidence is created equal. (Evidence is the “E” in the CEWEI structure from Chapter 3: Architecture of Arguments—this chapter teaches you how to find and evaluate it.)

In God we trust. All others must bring data.

W. Edwards Deming

The Pyramid of Evidence

Evidence exists in a hierarchy of reliability. From strongest to weakest:

Tier 1: Meta-analyses and systematic reviews. These combine multiple studies, giving you the weight of evidence rather than a single data point. When someone says “the research shows...” this is what they should mean. A Cochrane Review or meta-analysis published in a major journal is the gold standard.

Tier 2: Peer-reviewed primary research. Original studies published in reputable academic journals. These have been reviewed by experts in the field before publication. Not perfect—peer review catches obvious errors, not subtle ones—but far more reliable than unpublished claims.

Tier 3: Official statistics and government data. Census data, Bureau of Labor Statistics reports, WHO global health data. These sources have institutional credibility and transparent methodology. They can be biased, but the bias is usually knowable.

Tier 4: Expert opinion and think tank reports. Valuable, but more likely to reflect the expert's or organization's perspective. The Heritage Foundation and Brookings Institution may analyze the same data and reach opposite conclusions. Know who produced it and why.

Tier 5: Journalism and media reports. Useful for current events and as pointers to primary sources, but not evidence themselves. Journalists often simplify, sensationalize, or misunderstand technical research. Always try to find the underlying source they're citing.

Tier 6: Anecdotes and personal experience. The weakest form of evidence for general claims. “My uncle got better after taking this supplement” tells you nothing about whether the supplement works. Anecdotes illustrate; they don't prove.

The Anecdote Trap

Anecdotes feel persuasive because they're concrete and emotional. But a single story is consistent with almost any underlying truth. Someone won the lottery—that doesn't mean buying tickets is a good investment strategy. Use anecdotes to illustrate claims supported by stronger evidence, not as evidence themselves.

Finding Quality Sources

Knowing what good evidence looks like is only half the battle. You also need to know where to find it. The best debaters have research systems that consistently produce reliable ammunition.

Academic Databases

For rigorous research, start with academic databases. Google Scholar is free and covers most fields. For specific disciplines: PubMed (medicine), JSTOR (humanities and social sciences), SSRN (economics and law working papers). Many papers have free versions if you add “PDF” to your search or check the author's academic webpage.

Look for papers with high citation counts—this indicates other researchers found them important. But recency matters too: a highly-cited 1990 paper may have been superseded. The ideal is a recent paper with growing citations.

Official Data Sources

For statistics, go to the source. Every major country has official statistical agencies: the Bureau of Labor Statistics (U.S. employment), the Census Bureau (demographics), Eurostat (European data), the World Bank (global development), WHO (global health). These provide raw data you can cite directly, rather than filtered through media interpretation.

Learn to navigate these sites before you need them in debate prep. The first time shouldn't be when you're under time pressure.

The Source Behind the Source

When you read a news article or opinion piece that cites a study, don't cite the article—find the study. The chain of citation matters: the further you are from the primary source, the more likely distortion has crept in.

Be especially suspicious when sources cite “studies show” without naming the study. This is often a signal that the claim is weak, the study doesn't quite say what's implied, or the study doesn't exist at all.

Pro Tip

A good litmus test: if you can't explain where the number comes from and how it was measured, you're not ready to use it. An opponent who asks “What study?” will expose you if you can't answer.

Tracing to the Source

Scenario

You read: 'Studies show 70% of employees are disengaged at work.' This would be powerful evidence in a debate about workplace reform. But before using it...

Analysis

Trace it: Who said this? Often it's Gallup's annual engagement survey. Find the actual Gallup report. Check: How do they define 'disengaged'? What's the sample? Has the number changed over time? Is it consistent across countries? Now you can cite 'Gallup's 2023 State of the Global Workplace report, which surveyed 122,000 employees across 145 countries...' Much stronger than 'studies show.'


Evaluating Sources

Not all published research is good research. Learning to evaluate sources separates the debater who gets caught citing junk from the one who builds an impenetrable evidence fortress.

Red Flags

Watch for these warning signs:

Predatory journals. Some “journals” publish anything for a fee. Check if the journal appears in reputable databases, has legitimate impact factors, and whether you recognize any other papers in it. If in doubt, Google “[journal name] predatory.”

Tiny sample sizes. A study of 12 people is barely better than anecdote. For survey research, n > 1,000 is good; n < 100 is problematic. For experimental research, sample size depends on effect size, but ask yourself: could this finding be noise?

Cherry-picked timeframes. “Crime increased 40%” sounds alarming—until you learn they measured from an unusual low point. Ask: is this timeframe representative, or was it chosen to maximize impact?

Unreplicated findings. A single study claiming a surprising result should be treated with skepticism until others confirm it. The replication crisis is real—many published findings don't hold up.

Conflict of interest. A pharmaceutical company's study of its own drug isn't automatically wrong, but warrants extra scrutiny. Always ask: who paid for this research?
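The sample-size intuition above can be made concrete. For a simple random survey, a standard worst-case rule of thumb puts the 95% margin of error on a reported percentage at about 1.96 × √(0.25 / n). A minimal sketch (the formula and the numbers it produces are standard survey statistics, not from this chapter):

```python
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Worst-case 95% margin of error (in percentage points)
    for a simple random sample of size n reporting proportion p."""
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

# A survey of 12 people is barely better than an anecdote:
print(round(margin_of_error(12), 1))    # 28.3 (points either way)
print(round(margin_of_error(100), 1))   # 9.8
print(round(margin_of_error(1000), 1))  # 3.1
```

This is why n > 1,000 is a reasonable bar for survey research: below roughly 100 respondents, a double-digit swing in the headline number could be pure sampling noise.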

Basic Statistical Literacy

You don't need to be a statistician, but you should understand:

Correlation vs. causation. Two things moving together doesn't mean one causes the other. Ice cream sales and drowning both rise in summer; eating ice cream doesn't cause drowning.

Absolute vs. relative risk. “Doubles your risk” sounds scary until you learn your baseline risk was 1 in a million—now it's 2 in a million. Ask: what's the absolute change?

Selection bias. If you only survey people who show up to your website, you're not sampling the general population. How were participants selected?

Confidence intervals and significance. A “statistically significant” finding means we're fairly sure the effect isn't zero. It doesn't mean the effect is large or important.
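The absolute-vs-relative distinction is simple arithmetic, and working it through once makes the rhetorical trick obvious. A minimal sketch using the chapter's own one-in-a-million example:

```python
baseline_risk = 1 / 1_000_000   # 1 in a million
relative_increase = 2.0         # "doubles your risk"

new_risk = baseline_risk * relative_increase
absolute_change = new_risk - baseline_risk

print(f"Relative: risk is {relative_increase:.0f}x higher")
print(f"Absolute: risk rises by {absolute_change:.7f} "
      f"(from 1 in {1 / baseline_risk:,.0f} to 1 in {1 / new_risk:,.0f})")
```

The same "doubling" headline describes a risk going from 1-in-a-million to 2-in-a-million and a risk going from 10% to 20%. Only the absolute change tells you which situation you are in.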

Pro Tip

When citing statistics, anticipate the “So what?” response. What does this number actually mean for real people? A 3% increase in GDP sounds abstract; $1,500 more per household per year is concrete.

💬 Exposing Weak Evidence

B cites a study. A probes its validity.

B: A recent study shows this policy reduces crime by 40%. The evidence is clear.
[Evidence claim]

A: Which study? Who conducted it? What was the sample size?
[Probes the source; forces specifics]

B: It was published in the Journal of Criminal Policy Research, looking at three cities that implemented the policy.
[Provides details]

A: Three cities isn't a study—it's a case study. And I'd note: which three cities? Were they randomly selected, or were they chosen because the policy worked there? This is the selection problem. Show me a meta-analysis of multiple implementations, and we can talk about 'the evidence.'
[Exposes methodological weakness; doesn't just dismiss, explains why it's weak]

Analysis: A didn't need to know the specific study. They knew the right questions to ask about ANY study: sample size, selection method, replication. This is a transferable skill—study the questions, not just the answers. (See Chapter 8: Attack Patterns for the “indict” move—attacking evidence quality is one of the five core attacks.)


Citing Under Fire

Having good evidence means nothing if you can't deploy it effectively in the heat of debate. The goal is to cite in ways that are memorable, verifiable, and difficult to dismiss.

The Citation Formula

A strong in-debate citation includes:

Source authority. Who produced this? Lead with credibility. “According to Harvard researchers...” carries more weight than “I read somewhere...” Name the institution, the journal, the agency.

Recency. When was it published? “A 2023 study” is more current than “research shows.” If your evidence is old, have a reason: “This foundational 1995 study has been replicated dozens of times.”

Specificity. What exactly did they find? “A 23% reduction in hospital readmissions” is stronger than “significant improvement in outcomes.”

Invitation to verify. “I encourage you to look this up” signals confidence. You're not hiding anything. Debaters who bluff with evidence never say this.

Handling Evidence Attacks

Your opponent will challenge your sources. Be ready:

If they claim your source is biased: Acknowledge the source's perspective, then show why the evidence holds anyway. “Yes, this is a Brookings report—center-left think tank. But the data they cite comes from CBO, which is nonpartisan. The interpretation may lean left; the numbers don't.”

If they claim your data is cherry-picked: Show you've looked at the broader picture. “I cited 2023 data because it's most recent, but the trend is consistent going back to 2015. Here are the earlier numbers if you'd like...”

If they have contradicting evidence: Don't just deny—engage. Compare methodologies, sample sizes, recency. “Your study is from 2018 with 500 participants. Mine is from 2023 with 15,000. Which do you think better reflects current reality?” (See Chapter 9: Defense Patterns for more on defending your evidence under attack.)

The Strong Citation

Scenario

You're arguing that remote work increases productivity. Here's how to cite your evidence powerfully.

Analysis

Weak: 'Studies show remote workers are more productive.' Strong: 'Stanford economist Nicholas Bloom ran a nine-month randomized controlled trial—the gold standard in research design—at Ctrip, a 16,000-employee Chinese travel company. Remote workers were 13% more productive, took fewer sick days, and reported higher job satisfaction. This was published in the Quarterly Journal of Economics in 2015 and has been cited over 3,000 times. More recent meta-analyses covering 50+ studies confirm the pattern. I'm happy to share the citations.' Same fact. Vastly different impact.


Evidence Ethics

The temptation to misuse evidence is real. A slight exaggeration, a convenient omission, a selectively quoted passage—each can make your argument stronger in the moment and destroy your credibility permanently if caught.

The Lines You Don't Cross

Fabrication. Inventing a source that doesn't exist. This is the most obvious violation and the most dangerous. In competitive debate, it's grounds for disqualification. In any context, once caught, you're never trusted again.

Misrepresentation. Quoting a source that exists but says something different from what you claim. Equally dishonest, and easier for opponents to catch. They can look up what you cited.

Selective quotation. Using part of a quote that, in full context, means the opposite. “The policy is not... a complete failure” becomes “The policy is... a complete failure.” If the full quote changes the meaning, you're lying by omission.

Hiding contradicting evidence. You know there's a major study that disagrees with your position, but you don't mention it and hope your opponent doesn't know about it. This is weaker than outright fabrication, but still dishonest—and risky.

The Gray Areas

Some practices are debatable:

Citing the most favorable study: You've found five studies. Four show a small effect; one shows a large effect. Citing only the large-effect study isn't lying, but it's not fully honest either. Better: cite the meta-analysis that combines them, or acknowledge the range.

Rounding generously: The study says 47%. You say “nearly half.” That's fine. You say “about half.” Still okay. You say “a majority”? Now you've crossed a line.

Citing preliminary results: A pre-print or working paper hasn't been peer-reviewed yet. Citing it as if it's settled science is misleading. Solution: “Preliminary research from [institution], not yet peer-reviewed, suggests...”

The Integrity Dividend

The best debaters develop reputations for evidence integrity. When they cite a source, judges and audiences believe them—because they've never been caught cheating. This is a compounding advantage. Your credibility on evidence multiplies every other argument you make. (See Chapter 13: Ethos for why credibility is cumulative.)

Building Your Evidence Arsenal

The debater who researches in the moment will always lose to the debater who builds a system for collecting and organizing evidence over time. Your evidence arsenal is a strategic asset.

The Filing System

Organize evidence by topic and claim, not by source. You don't want to think “Where did I read about healthcare?” You want to think “What evidence do I have for [specific claim]?”

For each piece of evidence, capture: the claim it supports, the full citation, the relevant quote, and your assessment of its strength. A note like “Strong—RCT, large sample, recent” or “Weak—think tank, small sample, old” saves time later.
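A claim-first filing system is easy to implement as a keyed store: each record carries the fields just described, indexed by the claim it supports. A minimal sketch (the field names and the example entry are illustrative, not prescribed by the chapter):

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Evidence:
    citation: str   # full citation, so you can always trace it back
    quote: str      # the relevant passage
    strength: str   # your own assessment, e.g. "Strong - RCT, large sample"

# Index by the claim each piece supports, not by where you read it.
arsenal: dict[str, list[Evidence]] = defaultdict(list)

arsenal["Remote work increases productivity"].append(Evidence(
    citation="Bloom et al., Quarterly Journal of Economics, 2015",
    quote="Home working led to a 13% performance increase.",
    strength="Strong - RCT, large sample",
))

# Retrieval answers "What evidence do I have for X?" directly:
for ev in arsenal["Remote work increases productivity"]:
    print(ev.strength, "|", ev.citation)
```

The design choice mirrors the chapter's advice: the lookup key is the claim, so in prep you query by what you need to prove rather than by what you happen to remember reading.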

The Go-To Statistics

Build a collection of reliable, memorable statistics you can deploy from memory. These are facts you've verified, understand deeply, and can cite accurately without notes.

Good go-to statistics are: surprising (they update people's intuitions), concrete (real numbers, not vague percentages), sourced from unimpeachable authorities, and applicable to multiple arguments.

Example: If you debate economic issues, knowing that “the bottom 50% of Americans own 2.5% of total household wealth, while the top 1% own 31%” (Federal Reserve, 2023) is useful across many debates about taxation, inequality, or economic policy.

Evidence Maintenance

Evidence gets stale. A statistic from 2015 may no longer be accurate. A “recent study” from five years ago isn't recent anymore. Periodically review your arsenal and update:

Replace old statistics with current ones. Annual data (employment, GDP, crime rates) should be updated annually.

Check for replication. Has the study you rely on been replicated or contradicted since you filed it?

Add new landmark studies. When major new research comes out in areas you debate, add it to your collection.

Key Takeaway

Evidence is ammunition, but only if you can find it when you need it. Build a system for collecting, organizing, and maintaining your evidence arsenal. The best argument in the world loses to a mediocre argument backed by solid, accessible evidence.


✏️ Evidence Audit

Pick a topic you might debate. Find three pieces of evidence to support one claim about that topic. For each piece:

1. Trace it to the primary source. What's the original study, data set, or report?
2. Evaluate it using the hierarchy. What tier is this evidence?
3. Check for red flags. Sample size? Conflicts of interest? Cherry-picked data?
4. Write a strong in-debate citation. How would you present this compellingly?
5. Find a counterargument. What's the best evidence against your claim?

If you can't complete step 1 for any piece, discard it—it's not reliable enough to use.

Hints:

- Start with Google Scholar for academic sources, official government databases for statistics.
- The 'cited by' count on Google Scholar tells you how influential a paper is.
- If a news article cites a study, find the actual study. The article's summary is often misleading.
- For step 5, search for your claim with 'criticism,' 'debunked,' or 'problems' added.
- A claim you can't find counterarguments for might be too vague to be debatable—or you haven't searched hard enough.
Chapter 11: Research and Evidence Mastery | The Super Debate Guide