How to judge arguments before we hunt the fallacies
An argument claims a conclusion and offers premises (reasons) to support it.
Deductive arguments aim for certainty: if the premises are true and the reasoning is valid, the conclusion must be true.
- Valid = the conclusion logically follows from the premises
- Sound = valid + the premises are actually true
Inductive arguments aim for probability: good ones are strong (the premises make the conclusion likely) and cogent (strong + premises are true and representative).
Most fallacies either use invalid structure, rely on untrue or irrelevant premises, or overstate what the premises actually support. They're persuasive because they ride our shortcuts (biases), emotions, and love of simple stories.
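The difference between a valid and an invalid structure can be made concrete with a toy truth-table check, a minimal sketch (the function names and the two example forms are illustrative, not from any logic library): an argument form is valid exactly when the conclusion is true in every row where all the premises are true.

```python
from itertools import product

def is_valid(premises, conclusion):
    """A form is valid if the conclusion holds in every truth-table
    row where all the premises hold."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: "P implies Q; P; therefore Q" -- a valid form.
modus_ponens = is_valid(
    [lambda p, q: implies(p, q), lambda p, q: p],
    lambda p, q: q)

# Affirming the consequent: "P implies Q; Q; therefore P" -- invalid.
affirming_consequent = is_valid(
    [lambda p, q: implies(p, q), lambda p, q: q],
    lambda p, q: p)

print(modus_ponens)          # True: the form is valid
print(affirming_consequent)  # False: the form is invalid
```

The invalid form is the shape behind everyday errors like "If it rained, the ground is wet; the ground is wet; therefore it rained," which ignores sprinklers.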
The common fallacies
1) Red herring fallacy
This fallacy introduces irrelevant information to distract from the main argument or question. It fails because it doesn't address the actual issue being discussed. The name comes from the practice of using strong-smelling fish to throw hunting dogs off a scent trail. People use red herrings when they can't defend their position directly. To counter this, you should acknowledge the distraction but redirect to the original point.
Example 1: "Why did the project go over budget?" → "Well, our team worked incredibly hard and put in many late nights to deliver quality work."
Refute: "I appreciate the team's hard work, but that doesn't explain the budget overrun. Can we focus on what caused the additional costs?"
Example 2: "You're late to every meeting." → "Traffic in this city is absolutely terrible, and did you know they're planning to build more roads?"
Refute: "Traffic affects everyone, but you're consistently late whilst others arrive on time. What can we do to help you manage your schedule better?"
Example 3: "This restaurant's service was poor." → "Well, at least the décor is lovely, and they support local charities."
Refute: "Those are nice features, but they don't address the slow service and cold food we experienced."
2) Ad hominem fallacy
This fallacy follows the pattern "\(X\) is wrong because \(X\) is stupid, biased, or immoral." It fails because the truth of a claim is independent of the character of the speaker. Even unpleasant people can state facts. We find it persuasive because we use source cues (mental shortcuts that help us quickly judge credibility based on who's speaking) to save time, and smearing the source feels like evaluating the claim. To counter this, you should separate the person from their evidence. If bias does matter, such as when dealing with a paid shill (someone secretly paid to promote a particular view), explain specifically how it contaminates the data rather than dismissing everything they say.
Example 1: "Don't trust her climate data; she failed maths." The measurements stand or fall on methods, not her exam history.
Refute: "Her academic performance doesn't affect the validity of these temperature measurements. Let's examine the data collection methods and peer review instead."
Example 2: "Why should we listen to his economic advice? He's been divorced twice." Personal relationships don't determine economic expertise.
Refute: "His personal life doesn't affect his professional qualifications. Let's focus on his track record and the evidence behind his economic recommendations."
Example 3: "She's just a young idealist who doesn't understand the real world." Age doesn't automatically invalidate ideas.
Refute: "Young people often bring fresh perspectives and energy to problems. Let's evaluate her proposals on their merits rather than her age."
3) Straw man fallacy
This fallacy works by proposing "You suggest \(A\), but I'll attack an exaggerated version \(A'\)." It fails because it refutes a different claim entirely. People find it persuasive because they remember vivid caricatures more than careful nuance. The best defence is to steelman first: repeat the other side's point in a form they would accept, then respond to their actual position.
Example 1: "Regulate sugary drinks" becomes twisted into "You want to ban fun and control all food."
Refute: "That's not what I suggested. I'm proposing targeted regulations on marketing sugary drinks to children, not banning all enjoyable foods."
Example 2: "We should have stricter immigration controls" becomes "You hate all foreigners and want to close the borders completely."
Refute: "I don't oppose immigration itself. I'm suggesting we need better systems to process applications efficiently and ensure proper integration support."
Example 3: "Schools need more funding" becomes "You want to throw money at problems and waste taxpayers' money."
Refute: "I'm not advocating wasteful spending. I'm pointing to specific underfunded areas like textbooks, teacher training, and classroom resources that directly impact learning."
4) False dilemma fallacy
This reasoning claims "Either \(X\) or \(Y\); not \(X\); therefore \(Y\)." The inference itself is valid; the fallacy lies in the false first premise, because there are often third options or mixtures that aren't being considered. Our brains like simple choices under stress, which makes this fallacy appealing. When you encounter this, ask what else could be done and consider whether both options might be partially true or neither might be correct.
Example 1: "Be tough on crime or you're soft." What about prevention, rehabilitation, or smart policing approaches?
Refute: "Those aren't the only two options. We can combine tough enforcement with prevention programmes and rehabilitation to address crime more effectively."
Example 2: "Either you support the troops or you're unpatriotic." You can support soldiers whilst questioning military policy.
Refute: "I can respect our service members whilst also having concerns about specific military strategies or foreign policy decisions."
Example 3: "You're either with us or against us in this business decision." What about partial agreement or alternative approaches?
Refute: "I support the goal but have reservations about the method. Can we explore modified approaches that address my concerns whilst achieving our objectives?"
5) Post hoc ergo propter hoc fallacy
This fallacy assumes that because \(B\) happened after \(A\), \(A\) must have caused \(B\). It fails because it ignores coincidence, third causes (other factors that might explain both events), and regression to the mean (the tendency for extreme measurements to return to average over time). To counter this properly, you should ask for the mechanism (how does \(A\) actually cause \(B\)), look for controls (comparison groups), and seek replication (can this be repeated) of the supposed causal relationship.
Example 1: "I wore lucky socks and we won." Perhaps you just played better that day for unrelated reasons.
Refute: "Your socks don't have magical powers, but if believing in them made you more confident and less nervous, that psychological effect could improve performance. The causation runs through your mind (placebo effect), not the fabric itself. The real question is whether you can access that same confidence without needing the socks."
Example 2: "Ever since we hired Sarah, our sales have increased. She must be our good luck charm." What about market conditions, new products, or seasonal trends?
Refute: "Sarah joined during our busy season when we also launched two new products. Let's look at her specific contributions rather than assuming she caused all the growth."
Example 3: "I started taking vitamins and haven't been ill since. They must work." Your immune system naturally varies, and you might have changed other habits too.
Refute: "Many factors affect health—sleep, stress, exercise, seasonal changes. We'd need controlled studies to know if vitamins specifically made the difference."
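Regression to the mean is easy to demonstrate with a simulation (a minimal sketch; the "skill plus noise" model and all numbers are assumptions for illustration): if performance is constant skill plus random noise, the days after your worst days look better all by themselves, with no lucky socks involved.

```python
import random

random.seed(0)

# Model each day's performance as constant skill plus random noise.
skill = 50.0
scores = [skill + random.gauss(0, 10) for _ in range(10_000)]

# Pick the bottom-10% days, then look at the day that followed each one.
cutoff = sorted(scores)[len(scores) // 10]
bad_days = [s for s in scores if s <= cutoff]
next_after_bad = [scores[i + 1] for i in range(len(scores) - 1)
                  if scores[i] <= cutoff]

bad_avg = sum(bad_days) / len(bad_days)
next_avg = sum(next_after_bad) / len(next_after_bad)

# Scores after a terrible day drift back toward the mean on their own.
print(round(bad_avg, 1))   # well below 50
print(round(next_avg, 1))  # close to 50 again
```

Anything you happened to do after a bad day (new socks, a new ritual, a new remedy) gets spurious credit for this automatic rebound.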
6) Slippery slope fallacy
This argument claims "If \(A\) happens, then \(B\) will follow, leading inevitably to \(Z\) (catastrophe)." It fails because it asserts inevitability without providing mechanisms or probabilities. To counter these claims properly, you should demand the complete causal chain and ask for base rates: how often has \(A\) actually led to \(Z\) in similar situations?
Example 1: "Allow remote work and nobody will work." Many firms have run hybrid models successfully without this outcome.
Refute: "That's quite a leap. Many companies have successfully implemented remote work policies without productivity collapse. What specific evidence suggests this extreme outcome?"
Example 2: "If we let students retake one exam, soon they'll expect to retake everything and academic standards will collapse." One policy change doesn't inevitably lead to chaos.
Refute: "We can have specific criteria for retakes without abandoning all standards. Many schools allow limited retakes whilst maintaining rigorous academic expectations."
Example 3: "If we raise the minimum wage, businesses will close, unemployment will soar, and the economy will crash." Economic changes are complex and gradual.
Refute: "Economic research shows mixed results from minimum wage increases, with most studies finding minimal job losses. We can monitor effects and adjust policy based on evidence."
7) Hasty generalisation fallacy
This follows the pattern "I saw 3 swans; therefore all swans are white." It fails because an unrepresentative sample (a small group that doesn't accurately reflect the whole population) leads to weak induction (drawing broad conclusions from limited evidence). It persuades us because vivid anecdotes beat dull statistics in our minds due to availability bias (the tendency to judge probability by how easily examples come to mind). When you encounter such claims, you should ask about sample size (how many cases were observed), selection methods (how the examples were chosen), and variance (how much the data differs across the group).
Example: "Two rude Parisians means all Parisians are rude." Perhaps you simply encountered two rude individuals.
Refute: "Two people isn't a representative sample of 2 million Parisians. I've met many friendly Parisians, and individual experiences vary widely."
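The sample-size point can be shown numerically (a minimal sketch; the 10% "rude" rate is a made-up assumption): tiny samples swing wildly around the true rate, while large samples settle close to it.

```python
import random

random.seed(1)

TRUE_RUDE_RATE = 0.10  # assume 10% of the population is rude

def observed_rate(sample_size):
    """Fraction of rude people seen in one random sample."""
    return sum(random.random() < TRUE_RUDE_RATE
               for _ in range(sample_size)) / sample_size

# Ten trips that each meet only 2 people: the estimates jump around.
tiny_samples = [observed_rate(2) for _ in range(10)]
# One large survey: the estimate lands near the truth.
large_sample = observed_rate(10_000)

print(tiny_samples)           # mostly 0.0, with occasional 0.5 or 1.0
print(round(large_sample, 2)) # near 0.10
```

A two-person sample can only ever report 0%, 50%, or 100% rudeness, none of which is the true 10%.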
8) Cherry-picking fallacy
This involves selecting only supporting data whilst ignoring contradictory evidence. It fails because biased samples produce misleading conclusions. When someone presents cherry-picked data, you should demand to see the full dataset, not just the highlights that support their position.
Example: A fund advertises its best 3 years whilst hiding all the rest of its performance history.
Refute: "Those are impressive numbers, but can you show the complete performance history? What about the years not mentioned in this advertisement?"
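The fund example can be put in numbers (a minimal sketch with hypothetical returns): averaging only the best three years paints a very different picture from averaging the full decade.

```python
# Hypothetical yearly returns (%) for a fund over a decade.
returns = [12.0, -8.0, 3.0, 25.0, -15.0, 1.0, 18.0, -6.0, 2.0, -4.0]

# What the advertisement shows: only the three best years.
advertised = sorted(returns, reverse=True)[:3]
avg_advertised = sum(advertised) / len(advertised)

# What the full dataset shows.
avg_actual = sum(returns) / len(returns)

print(round(avg_advertised, 2))  # 18.33 -- the glossy brochure
print(avg_actual)                # 2.8  -- the whole story
```

The cherry-picked average is more than six times the real one, from exactly the same data.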
9) Appeal to authority fallacy
This argument claims "An expert said it, so it must be true." It fails because experts can make errors, or they might be speaking outside their field of expertise. To counter such appeals properly, you should check their domain expertise, look for consensus among relevant experts, and examine the underlying evidence.
Example: A celebrity endorsing a medical cure despite having no medical training.
Refute: "While I respect their opinion, they're not a medical expert. What do qualified doctors and peer-reviewed studies say about this treatment?"
10) Appeal to popularity fallacy
This fallacy argues "Everyone believes it, so it must be true." It fails because truth isn't determined by vote, and popular myths can persist for centuries. When confronted with this argument, you should ask for reasons and evidence, not headcounts of believers.
Example: "Millions use astrology, so it must work." Popularity doesn't validate effectiveness.
Refute: "Popularity doesn't determine truth. Many people once believed the Earth was flat. What scientific evidence supports astrology's claims?"
11) Circular reasoning fallacy
This pattern essentially states "\(X\) is true because \(X\) is true." It fails because it assumes what it must prove, offering no independent evidence. We find it persuasive because repeated assertions feel familiar and therefore truthful. To counter this, you should locate the independent evidence that the argument lacks.
Example: "This policy is best because it's superior to all alternatives."
Refute: "That's circular reasoning. What specific evidence shows this policy is superior? What criteria are you using to measure 'best'?"
12) Appeal to emotion fallacy
This approach replaces logical reasons with emotional appeals such as fear, pity, or pride. It fails because whilst emotion can motivate action, it doesn't demonstrate truth. When faced with emotional arguments, you should ask what concrete facts support the feeling being invoked.
Example: "If you don't buy this alarm system, your family won't be safe."
Refute: "I understand the concern for family safety, but what evidence shows this specific system is necessary? What are the actual crime statistics in our area?"
13) Sunk cost fallacy
This trap involves continuing a bad course of action because of past investment. It fails because past costs are irrecoverable, and only future costs and benefits should matter for decision-making. We hate waste due to evolutionary scarcity (resources were limited for our ancestors), loss aversion (the psychological tendency to feel losses more strongly than equivalent gains), and the endowment effect (people value things more highly simply because they own them). To counter this bias, you should ask yourself, "If we hadn't spent anything yet, what would we choose now?"
Example: "We've already spent £50,000 on this failing project, so we must continue."
Refute: "The £50,000 is already gone regardless of what we do next. What matters now is whether continuing will create more value than the additional costs."
14) Confirmation bias fallacy
This involves seeking or valuing only information that fits your existing view. It blinds you to disconfirming evidence (facts that contradict your beliefs) that might challenge your beliefs. To combat this tendency, you should actively search for strong counter-examples using techniques like the "pre-mortem" (imagining how your plan could fail) or "red team" approach (having people deliberately attack your position), where you deliberately try to find flaws in your own reasoning.
Example: Only reading news sources that confirm your political views whilst dismissing opposing perspectives.
Refute: "I try to read diverse sources to challenge my assumptions. Have you considered what credible sources on the other side are saying about this issue?"
15) Gambler's fallacy
This fallacy involves believing that past random events change the odds of future independent events. It fails because independent events don't "balance out" over time. Each flip of a fair coin remains exactly 50% heads regardless of previous results.
Example: "Five tails in a row, so heads is due next." The next flip is still roughly 50% heads.
Refute: "Each coin flip is independent. The coin doesn't 'remember' previous results, so the next flip is still 50-50 regardless of the streak."
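Independence is simple to verify empirically (a minimal simulation sketch): condition on every run of five tails in a long sequence of fair flips and check the very next flip, which still comes up heads about half the time.

```python
import random

random.seed(42)

# A long sequence of fair coin flips: True = heads, False = tails.
flips = [random.random() < 0.5 for _ in range(500_000)]

# Collect the flip that immediately follows every run of five tails.
after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                if not any(flips[i:i + 5])]  # all five were tails

heads_rate = sum(after_streak) / len(after_streak)
print(round(heads_rate, 3))  # hovers around 0.5: the streak changes nothing
```

If heads really were "due" after five tails, this rate would sit well above 0.5; it doesn't.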
16) Appeal to nature fallacy
This fallacy argues that something is good or right simply because it's natural, or bad because it's artificial or unnatural. It fails because "natural" doesn't automatically mean safe, healthy, or morally superior. Many natural things are deadly (like hemlock or arsenic), whilst many artificial things save lives (like medicines or vaccines). To counter this, you should focus on actual evidence of benefits and harms rather than origin.
Example: "This herbal remedy is completely safe because it's all natural, unlike those dangerous synthetic drugs."
Refute: "Natural doesn't mean safe. Poison ivy is natural, but penicillin is synthetic and saves lives. What matters is the evidence for safety and effectiveness."
17) False equivalence fallacy
This fallacy treats two things as equally valid or important when they're actually quite different in scale, impact, or moral weight. It fails because not all actions, ideas, or positions are equally problematic or deserving of equal consideration. To counter this, you should point out the specific differences in magnitude, consequences, or context that make the comparison inappropriate.
Example: "Both politicians lie sometimes, so they're equally untrustworthy" when one tells occasional white lies and the other fabricates major policy positions.
Refute: "There's a significant difference between minor social lies and major deceptions about policy. The scale and consequences matter for evaluating trustworthiness."
18) Genetic fallacy
This fallacy judges something based on its origin rather than its current merit. It fails because the source or history of an idea doesn't determine its truth value. Ideas can be right for wrong reasons, or good things can come from bad origins. We're susceptible to this because origin stories feel meaningful and our brains use shortcuts to judge credibility. To counter this, you should evaluate the current evidence and merit regardless of where something came from.
Example: "Democracy started in ancient Greece where they had slavery, so democracy is fundamentally flawed."
Refute: "The origins of democracy don't determine its current value. Modern democratic systems have evolved significantly and should be judged on their present-day merits and flaws."
19) No true Scotsman fallacy
This fallacy protects a generalisation from counter-examples by arbitrarily redefining the group to exclude inconvenient cases. It fails because it makes claims unfalsifiable (impossible to disprove) by constantly moving the definition. The name comes from the example: "No Scotsman puts sugar on porridge" → "But my Scottish friend does" → "Well, no true Scotsman would do that." To counter this, you should point out how the definition keeps changing to avoid contrary evidence.
Example: "Real entrepreneurs never give up" → "But Steve Jobs was fired from Apple" → "Well, he wasn't a real entrepreneur during that period."
Refute: "You're changing the definition of 'real entrepreneur' to exclude anyone who doesn't fit your claim. That makes your statement meaningless because it can't be proven wrong."
20) Loaded question fallacy
This fallacy embeds a controversial assumption into a question, making it impossible to answer without accepting that assumption. It fails because it forces people to accept premises they may disagree with. The classic example is "Have you stopped beating your wife?" which assumes you were beating her. To counter this, you should identify and reject the hidden assumption before addressing the actual question.
Example: "Why are you always so defensive?" assumes the person is being defensive when they might simply be explaining their position.
Refute: "That question assumes I'm being defensive. I'm actually just providing information to clarify my position. What specific behaviour makes you think I'm being defensive?"
21) Moving the goalposts fallacy
This fallacy involves changing the criteria for success or evidence after the original criteria have been met. It fails because it makes claims unfalsifiable and wastes time by constantly shifting standards. People do this to avoid admitting they were wrong or to maintain their position despite contrary evidence. To counter this, you should establish clear criteria upfront and hold people to them.
Example: "Show me one study proving vaccines are safe" → *provides study* → "That's just one study, show me ten" → *provides ten* → "Those are just correlational studies, I need randomised trials."
Refute: "You keep changing what counts as acceptable evidence. Let's agree on specific criteria upfront, then stick to them rather than moving the goalposts each time I meet your standards."
22) Composition fallacy
This fallacy assumes that what's true for individual parts must be true for the whole group. It fails because groups often have emergent properties (characteristics that arise from the interaction of parts) that individual members lack. We make this error because our brains naturally extrapolate from familiar examples. To counter this, you should consider how interactions between parts might create different group-level behaviours or properties.
Example: "Each player on the team is excellent, so the team must be excellent" ignoring that they might not work well together.
Refute: "Individual talent doesn't guarantee team success. We need to look at how well they coordinate, communicate, and complement each other's skills."
23) Appeal to ignorance fallacy
This fallacy argues that something is true because it hasn't been proven false, or false because it hasn't been proven true. It fails because absence of evidence isn't evidence of absence, and the burden of proof (responsibility to provide evidence) should be on whoever makes a positive claim. To counter this, you should clarify who has the burden of proof and explain that not knowing something doesn't support any particular conclusion.
Example: "No one has proven that aliens haven't visited Earth, so they probably have."
Refute: "The burden of proof is on those claiming alien visitation. Not having evidence against something doesn't count as evidence for it. We can't prove negatives for most extraordinary claims."
Why these arguments are unsound but still seduce us
Unsoundness: the premises are false or irrelevant (e.g., ad hominem), the logic is invalid (e.g., circular reasoning), or the inductive support is weak or bias-ridden (e.g., hasty generalisation, cherry-picking).
Persuasiveness: They align with our cognitive shortcuts (availability, loss aversion, confirmation bias), exploit emotions (fear, pride), and offer simple stories under uncertainty. Our social brains prefer signals about who to trust over what is true.
Quick habits that immunise you against fallacies
- Always ask for the structure of an argument by questioning what the main claim is, what premises support it, and how those premises connect logically to the conclusion. Example: "You're saying violent video games cause aggression. What's your evidence, and how does playing games lead to real-world violence?"
- Define key terms clearly at the beginning of any discussion and watch carefully for equivocation (changing word meanings mid-argument). Example: Someone argues "Only man is rational" then concludes "No woman is rational"—they've shifted "man" from meaning "human" to meaning "male."
- Quantify claims wherever possible by asking how big the effect is and what it should be compared against for proper context. Example: "This diet pill causes weight loss"—How much? Compared to what? Over what time period?
- Check the underlying data by examining sample size (how many people studied), selection methods (how were they chosen), control groups (comparison groups), base rates (how common is this normally), and proposed mechanisms (how does it supposedly work). Example: "This supplement works!"—Tested on how many people? Were there controls? What's the normal recovery rate without it?
- Steelman opposing arguments first by demonstrating that you understand the strongest form of the other side's position before responding to it. Example: "I understand you're concerned that higher taxes might discourage investment and job creation. Here's why I think that risk is outweighed by..."
- Pre-commit to clear criteria for success before evaluating evidence, which helps avoid goalpost shifts (changing standards mid-evaluation) and sunk cost thinking. Example: "We'll consider this marketing campaign successful if it increases sales by 15% within three months"—decide this before seeing results.
- Keep facts separate from values by establishing what is true before discussing what ought to be done about it. Example: First establish "Climate change is happening and humans contribute to it" (fact), then discuss "What should we do about it?" (values).
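The "check the underlying data" habit can be sketched with a toy control-group comparison (all counts are made up for illustration): a headline recovery rate means little until you subtract the base rate from the comparison group.

```python
# Hypothetical counts from a made-up supplement trial.
treated_recovered, treated_total = 78, 100
control_recovered, control_total = 70, 100

treated_rate = treated_recovered / treated_total
control_rate = control_recovered / control_total  # base rate without the supplement

# "78% of users recovered!" sounds impressive until the control
# group reveals that 70% recovered anyway.
effect = treated_rate - control_rate
print(f"{effect:.0%} absolute improvement over no treatment")  # 8%
```

A real evaluation would also ask whether an 8-point gap in 100-person groups could be chance, which is exactly what controlled studies and replication are for.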
Learning to spot fallacies won't make you right about everything, but it will make you wrong about fewer things. In a world drowning in information and misinformation, that's a superpower worth developing.