Don’t Be Fooled! (Part 1)

Recognizing and Neutralizing Deceptive Techniques

One of the most important skills you need when learning to think critically is recognizing when you are being fooled. Once you accept how easy it is to be taken in by faulty reasoning and bias, you can keep them from deceiving you in the future. In political discourse, you will find yourself beset on all sides by bad logic—not all of which is intentionally designed to fool you—but with a handful of tools at your disposal, a good deal of practice, and constant vigilance, you can cut through the deception.

To begin with, here are a few general rules you should always keep in mind when approaching a political discussion:

1) No one is presenting unbiased truth. Even relatively honest politicians slant the truth, and they often don’t realize they are doing it. Similarly, there is no such thing as a completely unbiased news source, because just deciding which news stories are worthy of attention is a form of bias, no matter how many sides of the story are covered.

2)  You are just as susceptible to bias and poor reasoning as anybody else. No matter how well you hone your critical thinking skills, your brain is—and always will be—wired for shortcuts. It takes constant vigilance to keep your own beliefs and opinions grounded in facts and reason rather than prejudices and assumptions.

This section will teach you how to make sure you don’t get fooled by politicians, slanted news sources, and yourself. It is divided into five parts:

  • The Politician’s Playbook of Deception Techniques
  • Influential Techniques
  • Media Literacy
  • Abuse of Statistics  
  • Important Logical Fallacies   (in Part 2)

 

The Politician’s Playbook of Deception Techniques

Politicians, by their very nature, are well-versed in deception. Even a candidate with the most noble of intentions has to “sell” his or her vision in order to get support, and even the most trustworthy political leader has to find ways to “stay on message” in order to accomplish something. Here are some of the most common “dirty tricks” that candidates and politicians use:

1. Promises without Specifics: A common tactic of candidates running for office is to make grandiose promises about solving a current crisis or addressing an important issue without actually getting into detail about how they intend to do it.

2. Deflection: When politicians are in trouble for any reason—because of support for something unpopular, being caught in a scandal, or anything else—they will try to change the subject as quickly as possible. This is easy to see in formal debates, as when a candidate answers a difficult question by addressing another issue that is only tangentially related. A sitting politician can do this too, on a larger scale, by making an outrageous statement or “creating” a news event to divert attention.

3. Appeal to Emotion: It’s much easier to stoke emotions like anger, fear, and envy than it is to present a reasoned, detailed argument. People don’t tend to think rationally when they are outraged or afraid, and politicians know this.

4. Cherry-Picking: Sometimes, a politician will only present part of an issue by selectively picking facts that support his or her position while refusing to acknowledge facts that don’t support it. (There’s more about cherry-picking in “Abuse of Statistics.”)

5. Mudslinging: Politicians know that it is easier to criticize an opponent than it is to defend yourself from criticism. Therefore, it is common to see a candidate or politician focus heavily on the failings—both real and imagined—of their political rivals. When those attacks are unrelated to any specific policy or issue (which happens surprisingly often), they tend to be “ad hominem” attacks, and when the attacks are based on a position the opponent hasn’t actually taken, they are called “straw man” attacks. (There’s more on “ad hominem” and “straw man” attacks in “Important Logical Fallacies.”)

6. The Halo Effect: When a politician emphasizes one or two main strengths of character, that politician is hoping the electorate will accept or ignore his or her weaknesses. This even extends to physical appearance—grooming, good looks, and fitness—and politicians will spend an inordinate amount of time making sure they “look good.” They also want you to think of them as having strong virtues and values, so that you will trust them.

7. Buying Support: There’s a lot of back-scratching in politics, and virtually all politicians of all persuasions will resort to buying the support of their constituents, political allies, and businesses. It could come in the form of engineering a special tax cut or subsidy, agreeing to support a specific project he or she would otherwise not support, or ensuring that the law grants special favors to a favored company or union. The line between looking after the interests of one’s constituents and being party to unfair “cronyism” is a blurry one, but it is an important distinction to make.

8. Aligning with Popularity: Politicians will try to make friends with other, more popular politicians in order to gain support. They will even compare themselves to other popular figures just to be associated with them in voters’ minds. For example, all presidents compare themselves to earlier presidents who were seen as successful (usually Lincoln).

9. Distorting the Record: This is one of the deception techniques an informed electorate could most easily take away from its leaders. Politicians take advantage of the fact that few people are willing to compare rhetoric to voting records, so they make bold pronouncements that don’t line up with how they actually vote, or they back a bill they are sure won’t pass in order to appear to favor something they really don’t. They can also resort to dirty tricks like supporting a bill but opposing the appropriations that fund it, and then touting the more popular vote in their re-election campaigns. Another common tactic is to write a bill with a noble-sounding name like “The Helping Blind People Act” that is full of unpopular provisions with little to do with the title, and then rail against opponents of the bill as people who oppose helping the blind.

10. Sounding Logical: Politicians also take advantage of people who aren’t critical thinkers but know the kinds of words and phrases that critical thinkers use. They can throw around the language of reasoning—words and phrases such as “my first reason…,” “I can only conclude…,” or “that’s moving the goalposts”—without actually making strong arguments.

 

Influential Techniques

It isn’t just politicians who resort to deceptive techniques to influence people. Journalists, pundits, organizations, and everyday people have subtle ways of influencing others as well, and it’s just as important to be on the lookout for these.

1.  One-Sided Arguments: When engaging in a political discussion, it is rare to come across somebody willing to lay out every side of a given political topic. You should always keep in mind that you are probably only hearing one side of the story.

2. False Balance: The flip side of a one-sided argument is one that gives undue attention to a minority opinion. Journalists fall into this trap often by trying to give equal amounts of time to “both sides” of any given story. If a politician proclaims that inflation is bad for the economy, a journalist might go out of his or her way to find the one economist out of a hundred who believes that inflation is actually good for the economy and then give that economist’s opinion just as much airtime as the consensus of the other ninety-nine.

3.  Marketing: This is not automatically a bad tactic; anybody can use advertising, branding, or publicity to sell an idea. However, just because an argument is made with well-produced emotional music, good camera work, slick graphics, and celebrity endorsements does not mean it is right.

4.  Using Psychology: There are lots of ways our minds can be tricked by sleight of hand, illusions, clever rhetorical traps, and more. For example, repetition makes even a false claim seem more believable; bad news sandwiched between two pieces of good news seems less serious; and emotional language or loaded words can short-circuit critical thought. If you notice somebody using these kinds of tactics, approach their assertions with the skepticism a smart kid would show an amateur magician.

5.  Changing the Subject: When people start to lose an argument, they often try to change the subject. Practiced debaters can be very good at this, seguing from topic to topic with ease and never admitting weaknesses in one argument as they move on to the next. Political leaders can take this to a dramatic extreme by diverting the attention of the press with new stories to take focus away from things that make them look bad.

6. Lofty Language or Obscure Lingo: If a person sounds smart, you are more likely to believe what they have to say. People do this by using big, rarely used words (“Nobody can accuse me of being overly loquacious”), conspicuously proper grammar (“That is something up with which I shall not put”), or academic-sounding diction that doesn’t really mean anything (“The actuarial tables of this double-blind meta-analysis resonate with the position I’ve taken on the propriety of gross product enabling, which is why I want to reduce regulations in the quasi-private government sector”). If somebody tells you something during a political discussion that sounds intelligent but is difficult to understand, don’t be afraid to ask for clarification or look up words or concepts you are unfamiliar with.

7.  Sham Organizations: Some people form organizations that appear to support or oppose an issue but actually do the opposite. For instance, there are several organizations that sound like they are for the protection of the environment, but are actually funded by groups interested in exploiting the environment.

8.  Humor: In addition to appealing to negative emotions like anger and fear, a person can use humor to short-circuit logic. Sarcasm, in particular, is a potent tool for political debate that can deflate an opposing viewpoint without needing to offer a reasonable alternative.

 

Media Literacy  

Media literacy means recognizing that most messages in the popular media are carefully constructed to get attention—to gain profit and/or power—not just for the media organization delivering the messages but also for the politicians or businesses behind them. Someone who is media literate is skilled at “unpacking” a message: taking it apart to remove its power to manipulate and extracting the kernel of truth buried beneath all the attention-grabbing noise. Someone who is not media literate is often fooled or manipulated by messages from politicians and businesses. Here are some basics:

1.  Media messages are carefully constructed. They may appear natural, or may even actually be simple, but in most cases that simplicity is itself a choice. For example, a video of a candidate shot with one camera and no editing cuts appears raw and natural; yet a carefully considered decision was probably made to skip the fancy editing in order to make the candidate appear more straightforward, open, and honest.

2.  Sophisticated techniques are used to capture attention and create a mood.  Whether it is a print ad, a news article, a radio ad, or a video news segment, a variety of techniques are used. If you practice evaluating the form of the message, you’ll notice that certain artistic styles are used with things like music, art, photo framing, layout, and even the type of font.

3.  More than one thing is being communicated at once. A candidate may be talking about specific issues in a television ad, or a business may be presenting reasons to buy its product, but many things are communicated beyond the obvious. For instance, the candidate is also trying to make a good impression, make you distrust his or her opponents, and perhaps get you to ignore certain issues by talking about another one, just as a business advertisement may subtly imply that its product will make you more popular with the opposite sex without ever saying so directly.

4.  Messages are intended for one or more audiences. For instance, the same campaign ad may be trying to influence more than one age group of voters while also carefully avoiding giving the candidate’s opponents anything to attack.

5.  Different people “hear” different messages. Our values, culture, and past experience affect how we interpret different media messages. A message can be designed to speak to a specific audience without making it blatantly obvious.

Asking yourself questions is the best way to “unpack” or defuse a message:

1.  Who made this message?

2.  What techniques were used (visual, sound) to make this message?

3.  What is the main message being sent?  What are “embedded” or more subtle messages?  What values, perspectives, and assumptions are represented in the message?

4.  Is anything not being said? Is something left out or de-emphasized that one would expect?

5.  Who is this message intended for?  (There may be more than one target.)

6.  What would people who are different from me get from this message?

7.  Why was this message created?  (There can be more than one reason.)

 

Abuse of Statistics

Statistics are a useful tool in understanding the impacts of various policy decisions, but far too many people tend to think of statistics as the honest truth.  Numbers, like science, seem to present objective truth, but statistics are tricky.  When used incorrectly or dishonestly, statistics can seem to indicate something that isn’t true, and since the average voter isn’t trained to understand how to interpret data, a bad or misrepresented study can enter popular culture as an unquestioned truth.  Fortunately, there are things you can look for that will help you catch faulty statistics, and it doesn’t require you to know any complex equations or what a “standard deviation” is.

The following are the most common abuses of statistics, especially as they relate to politics.

1.  Misusing the word “average.”

There are three kinds of averages:  the mean, the mode, and the median.  Each average tells us something different about a group.  Here is a group of annual incomes of people on Main Street of the fictional town of Riverdale:

$1,250,000
$150,000
$60,000
$50,000
$18,000
$15,000
$15,000

The mean average is the total amount of income divided by the number of people—in this case, seven. It comes to $222,571 a year. (This is what most of us think of as the “average.”)

The mode, or modal income, is the most common income. In this case it’s $15,000 a year, since two people have that income and there’s only one person at any other income level.

The median income is the income of the person in the middle.  In this case it’s $50,000 a year, since there are three people above and three people below.

So, if I tell you that the average income on Main Street is $222,000 a year, that paints quite a different picture than saying it’s $50,000 or $15,000. Yet all three could be called the “average” income. Therefore, it’s important to ask which average people are talking about.
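The three averages above can be checked with Python’s standard `statistics` module, using the same hypothetical Riverdale income figures:

```python
import statistics

# Annual incomes on Main Street in the fictional town of Riverdale
incomes = [1_250_000, 150_000, 60_000, 50_000, 18_000, 15_000, 15_000]

mean = statistics.mean(incomes)      # total income / number of people
median = statistics.median(incomes)  # the middle value when sorted
mode = statistics.mode(incomes)      # the most common value

print(f"mean:   ${mean:,.0f}")    # mean:   $222,571
print(f"median: ${median:,.0f}")  # median: $50,000
print(f"mode:   ${mode:,.0f}")    # mode:   $15,000
```

One outlier (the $1,250,000 income) is enough to drag the mean far above what any typical resident earns, which is exactly why the choice of “average” matters.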

2.  Using a bad sample.

In a typical poll, the group of people who are asked various questions about their opinions is the “sample.” Ideally, the sample should be representative of the big picture—a small group should reflect the opinions of the larger electorate—but ensuring a good representative sample is difficult.

First, a proper sample should be a decent size. A sample of twenty people is far too small to be representative of the American people, and an inordinately large sample of ten million people can overemphasize statistical significance (more on that in a moment).

Secondly, it should be as unbiased as possible. Polling bias can come from many places:

(a) Location. If a polling sample is taken only from people who live in Wisconsin, it is unlikely to reflect the opinions of people who live in Florida. It can be more subtle, however. For example, in an effort to reduce bias by state, the poll could then take a sample of one hundred people per state, but this would still give inordinate attention to states with low populations and not enough attention to states with high populations.

(b) Time. If you were to poll people about their feelings toward Daylight Saving Time on the day after they “spring forward,” they’d probably give you a more negative response than if you were to poll them the day after they “fall back.”

(c) Degree of Anonymity. Generally speaking, people tend to be more honest to a pollster the more anonymous they can be. People polled about embarrassing habits, for example, are more likely to lie if they are being asked about them face-to-face than if they are asked to fill out an online questionnaire.

(d) Polling Method. While an online questionnaire will have a better degree of anonymity, it won’t be representative of people who don’t have access to the Internet, aren’t technologically adept, or don’t trust putting personal information into a computer. Internet polls also have a variety of other problems, such as their capacity to be spammed and whether those polls are only found on sites that have a niche audience. Similarly, telephone polls won’t be representative of people who don’t own phones or don’t answer calls from people they don’t know. Indeed, there is no “perfect” polling method, so when looking at a study, always keep the methodology in mind.
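The location bias described in (a) can be made concrete with a toy calculation. The two states, their populations, and the support percentages below are invented for illustration; the point is only that sampling equally per state and weighting by population can give very different national numbers:

```python
# Hypothetical states: (population, percent supporting some policy)
states = {
    "Smallstate": (600_000, 70.0),   # small population, high support
    "Bigstate": (39_000_000, 40.0),  # large population, low support
}

# Equal samples per state: each *state* counts the same
equal_weight = sum(pct for _, pct in states.values()) / len(states)

# Population-weighted: each *person* counts the same
total_pop = sum(pop for pop, _ in states.values())
pop_weight = sum(pop * pct for pop, pct in states.values()) / total_pop

print(f"equal per-state sample: {equal_weight:.1f}%")  # 55.0%
print(f"population-weighted:    {pop_weight:.1f}%")    # about 40.5%
```

A poll of one hundred people per state would report something close to the first number, even though the second is far closer to actual national opinion in this toy scenario.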

3.  Misrepresenting the margin of error and statistical significance.

Without getting bogged down by specifics, a good study will include its “margin of error,” which should be a mathematically determined number that roughly reflects the confidence level of the poll. For example, if the president’s approval rate is at 48% with a margin of error of +/-3, that means the actual approval rate could be anywhere between 45%-51%. Be wary of polls that have an inordinately large margin of error, and be extremely skeptical about polls that don’t include one at all.
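The “48% with a margin of error of +/-3” example corresponds to a standard textbook approximation. This is a simplified sketch assuming a 95% confidence level and a simple random sample; the sample size of 1,067 is chosen because it happens to produce roughly the quoted +/-3 points:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 48% approval rating from a poll of about 1,067 respondents
moe = margin_of_error(0.48, 1067)
print(f"+/-{moe * 100:.1f} points")  # +/-3.0 points
```

This is why national polls so often survey roughly a thousand people: that is about the size needed to get the margin of error down to around three points.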

A similar concept that is often abused is “statistical significance.” If a study compares two variables (for example, an incumbent’s choice of tie color and his risk of losing re-election), it may note a small correlation that is “statistically significant,” meaning it falls outside the margin of error and is likely to be real. However, “statistically significant” does not mean compelling. A study with a very large sample size will have a very small margin of error, so it is likely to find statistically significant correlations that are so small they might as well be nonexistent. (If a history of ten thousand incumbents and their tie colors showed that wearing a light-colored tie leads to a 50.7% chance of re-election versus 49.3% with a dark-colored tie, that could be statistically significant, but it wouldn’t be particularly meaningful.)
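Why large samples make trivial differences “significant” can be sketched with the standard 95% margin-of-error formula for a proportion (again a simplified model assuming simple random sampling):

```python
import math

def margin_of_error(p, n, z=1.96):
    # Approximate 95% margin of error for a proportion from
    # a simple random sample of size n
    return z * math.sqrt(p * (1 - p) / n)

# The margin shrinks rapidly as the sample grows:
# prints roughly +/-9.80, +/-0.98, and +/-0.10 points
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: +/-{margin_of_error(0.5, n) * 100:.2f} points")
```

With a million respondents the margin is about a tenth of a point, so even a gap of two-tenths of a point clears the bar for “statistical significance” while remaining practically meaningless.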

4.  Using distorted graphs and charts.

Charts and graphs are dramatic ways to highlight data, but they too can be misleading. For example, a bar graph that shows a gradual change from 10,000 to 9,500 won’t look compelling if the axis starts at 0 and ends at 20,000, but it will look compelling if the axis starts at 9,400 and ends at 10,100.
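The distortion from a truncated axis is simple arithmetic. This small sketch computes how many times taller one bar *looks* than another for a given axis baseline, using the same 10,000-versus-9,500 figures from the example:

```python
def apparent_ratio(tall, short, baseline):
    """How many times taller the first bar *looks* when the
    vertical axis starts at `baseline` instead of zero."""
    return (tall - baseline) / (short - baseline)

# The same drop from 10,000 to 9,500...
print(apparent_ratio(10_000, 9_500, 0))      # ~1.05: barely visible
print(apparent_ratio(10_000, 9_500, 9_400))  # 6.0: looks like a collapse
```

A 5% decline becomes a bar one-sixth the height of its neighbor simply by moving the baseline, without changing a single data point.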

5.  Inferring causation where it doesn’t necessarily exist.

This is a subtle trap that is extremely common. It is important to remember that, just because two things are correlated, it does not mean that one causes the other, even if one follows the other in time.

For example, the dropout rate among girls in high school correlates with the number of teenage girls getting pregnant.  So one might use this statistic to encourage girls not to drop out—because high school dropouts are more likely to get pregnant—but becoming pregnant might be the cause of many girls dropping out, not a result.

Another kind of confusion involves a third variable.  Someone might present statistics on a rise in illegal drug use and a rise in crime in a certain city.  The implication is that the greater drug use causes the increase in crime.  That may be true, in whole or in part, but the increases might be better explained by an increase in the unemployment rate in that city or an increase in population.

Statistics are an attempt to simplify things that are overly complex, so when being presented with statistics that imply a direct cause and effect relationship, you need to think about the overall picture and not make hasty assumptions.
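The third-variable problem can be demonstrated with a toy simulation. Here a hidden driver (labeled "unemployment" to mirror the example above; all numbers are synthetic) influences both drug use and crime, and the two end up strongly correlated even though neither causes the other in the simulation:

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n = 5_000
# A hidden third variable drives both of the observed variables
unemployment = [random.gauss(0, 1) for _ in range(n)]
drug_use = [u + random.gauss(0, 1) for u in unemployment]
crime = [u + random.gauss(0, 1) for u in unemployment]

# Drug use and crime come out strongly correlated (around 0.5 here)
# even though neither one causes the other in this simulation.
print(round(pearson(drug_use, crime), 2))
```

A real analysis would need to control for the confounder; the point of the sketch is only that a strong correlation is fully compatible with zero direct causation.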

6. Overemphasizing rare or tiny factors

A study that concludes that gun crime has tripled in the town of Springfield in a single year might seem compelling on its face, but if you look at the data and discover that it is wholly based on the fact that Springfield, a town of 930,000 residents, had one murder last year involving a firearm and an armed robbery this year that involved three firearms, it might not seem like such a dramatic revelation.

7. Setting seemingly arbitrary standards.

Sometimes you’ll come across a statistic that is only compelling if you look at it from a very specific perspective. For example, say a governor running for a second term makes the bold claim that he cut unemployment in half during his first term (2008-2012). If you look at the unemployment rate at the end of 2008, it was 8%, and the unemployment at the start of 2012 was 4%, so his claim seems correct. However, if you look at the unemployment rate at the start of 2008, it was 3%, and if you look at the unemployment rate for every month, January 2012 was a hiccup beset on both sides by much higher rates.

As a general rule, be cautious when approaching any claim that sets arbitrary limitations (like anything that says “X doubled in the years between 1997 and 2006” or “people between the ages of 47 and 53 are three times as likely to be Y as people between the ages of 19 and 39”). This usually means the statistic was carefully designed to show the most dramatic results possible.

8.  Overstating the conclusions.

Statistics may imply a truth, but they rarely ever prove one. When citing statistics, it should be to supplement an argument, not to make one. However, it is natural for politicians, pundits, and others to try to use statistics to make their argument for them, and thus they may announce conclusions about a statistical study that aren’t really warranted. Most studies will have their own sets of conclusions by the authors, and even those conclusions may overstate what the data actually shows.

9. Treating a single study as representative of consensus.

One of the most important things to remember when looking at a single study or statistical analysis is that it needs to be independently replicated. Any one study is susceptible to bias—both intentional and accidental—but when a wide range of studies show something using independent data and different methods, then those studies are far more impressive than one that stands alone.

Unfortunately, single studies make all the headlines, whereas the literature of studies tends to be treated as mundane. There’s no news in the fifty-ninth study to prove a correlation between smoking and lung cancer, but one outlying study that indicates the opposite would probably get plenty of press attention. While the scientific community might have a strong consensus on the basis of the litany of studies that show a correlation between smoking and lung cancer, a politician might throw around the one dissenting study as evidence of a medical conspiracy that has little basis in reality.

10. Using leading or poorly designed questions.

It is important to find out how certain questions are asked in any poll. The questions themselves can show the leanings of the researchers (or what they were trying to prove), and even subtle one-word changes can have dramatic effects on the numbers. For example, a presidential “approval rating” based on the question “Do you think the president is doing a decent job?” would show different results than one based on the question “Is the president doing a great job?” Even more damning is a study based on a confusingly worded question such as “Would you not say the president is not doing a good job?”

Oftentimes, a study will have multiple-choice answers that are weighted heavily toward one particular conclusion. For example, a survey might conclude that “Nine out of ten people are either concerned or very concerned about President Jones’s stance on health care,” but if you look at the actual question being asked in the survey, it shows this:

How do you feel about the president’s stance on health care?

___ Don’t care about it

___ Concerned

___ Very concerned

Clearly, most people would check one of the last two options, because there are no options reflecting the possibility of supporting or strongly supporting the president’s stance on health care.

When presented with statistics as part of a political argument, it’s almost always intended as evidence to support a certain conclusion. In these cases, you need to ask questions or do research to find out what the statistics really mean. Does the data support the conclusion, or is it meaningless or inconclusive? Does the data support something related to the conclusion but not the conclusion, or does it even support the opposite of the conclusion once the distortions are removed (you’d be surprised how often this happens)? What is the sample size, and who or what makes up the sample? What methodology does the study use? What is the margin of error and statistical significance? Has the study been replicated or refuted?

 
