Marketing sometimes involves the science of making you believe something that is not true, with the specific goal of selling you something (a product, service, or even ideology). The organic lobby, for example, has done a great job of creating a health halo and environmentally friendly halo for organic produce, while simultaneously demonizing their competition (recently focusing on GMOs).
These claims are all demonstrably wrong, however. Organic food is no more healthful or nutritious than conventional food. Further, GMO technology is safe and there are no health concerns with the GMO products currently on the market.
There is an even starker difference, however, between beliefs about the effects of organic farming on the environment and reality. In fact, organic farming is worse for the environment than conventional farming when its impact is measured against the amount of food produced.
First, organic farming may use pesticides. They just have to be “natural” pesticides, which means the ones they use are not chosen based upon their properties. Ideally choice of pesticide and the strategy in using them would be evidence-based and optimized for best effect, minimal impact on health and the environment, cost effectiveness, and convenience. Organic farming, however, does not make evidence-based outcome choices. Their primary criterion is that the pesticides must be “natural”, even if they are worse in every material aspect. This represents ideology trumping evidence. It is based on the “appeal to nature” fallacy, an unwarranted assumption that something “natural” will be magically better than anything manufactured.
In fact my main complaint against the organic label is that it represents an ideological false dichotomy. Each farming practice should be judged on its own merits, rather than having a bunch of practices ideologically lumped under one brand. I don’t care if a practice is considered organic or not, all that matters is the outcome.
Imagine you are at a Las Vegas casino and you’re approaching the roulette table. You notice that the last eight numbers were black… so you think to yourself, “Holy smokes, what are the odds of that!” and you bet on red, thinking that the odds of another black number coming up are really small. In fact, you might think that the odds of another black coming up are:
0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 = 0.5^9 ≈ 0.00195 (a very tiny number)
Or are they?
The problem is that a roulette table – if fairly constructed – has no “memory”. That is, one outcome does not depend on the previous outcome’s result, and so the odds for a red number or black number are just about equal (actually, just shy of 50% each, since there are one or two green spaces on a roulette wheel, depending on whether it’s the American or European version).
Keeping with our example, if you bet on either red or black for a given spin, this type of outside bet pays 1 to 1 and covers 18 of the 38 possible outcomes, for a probability of 18/38 ≈ 0.474. That’s a far cry from the 0.00195 figure above (a miscalculation roughly 243 times too small). Now your odds of a red coming up aren’t so good anymore…
This fallacy is called the Gambler’s Fallacy, and it’s what the city of Las Vegas is built on.
Random events produce clusters like “8 black numbers in a row”, but in the long term, the probability of red or black will even out to its natural average.
The key to your success at the casino? Understand that every individual spin (or “event”) has its own probability which never changes. In this case, 18 in 38.
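The independence of spins is easy to check empirically. Here is a small Python sketch (the streak length and spin count are arbitrary illustrative choices) that simulates an American wheel and compares the overall frequency of black with the frequency of black immediately after a run of three blacks:

```python
import random

# American roulette: 18 red, 18 black, 2 green pockets.
POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"] * 2

rng = random.Random(1)
spins = [rng.choice(POCKETS) for _ in range(200_000)]

# Overall frequency of black across all spins.
overall = spins.count("black") / len(spins)

# Frequency of black on spins that immediately follow three blacks in a row.
after_streak = [spins[i] for i in range(3, len(spins))
                if spins[i - 3:i] == ["black"] * 3]
conditional = after_streak.count("black") / len(after_streak)

# Both should hover around 18/38 ≈ 0.474 — the streak changes nothing.
print(round(overall, 3), round(conditional, 3))
```

Betting against the streak buys you nothing: conditioning on a run of blacks leaves the next spin’s probability right where it always was.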
So the next time you’re at a casino and you see a string of the same color coming up, remember that the odds of that color coming up again are exactly the same as the other color… it might save you a few bucks so you can play a bit longer.
- Unpredictability: Hot Hands vs. Gambler’s Fallacies (practicalpersuasion.wordpress.com)
- Last Week At Science-Based Medicine (randi.org)
In my last post, I wrote about how not having enough contextual data can outright boggle the mind. Today, we’re going to read about something else that similarly boggles the mind, albeit not really related to any linguistic phenomena. It’s an interesting little logical fallacy in the field of statistics known as cum hoc ergo propter hoc, or more commonly, ‘correlation does not imply causation’. Here, we define correlation as ‘when two things happen at the same time’, and causation as ‘when one thing causes the other’.
This logical fallacy is great at showing the glaring inaccuracies caused by lack of data on a specific subject, and how this lack can cause us to reach blindly for (often incorrect) conclusions in the proverbial fogginess of our mind. Additionally, the comedic factor here is amplified if you forego the law of parsimony (also known as Occam’s razor), which states that among competing explanations, the one requiring the fewest assumptions is usually the best.
Have you ever been in a recording booth or a really quiet place? If you’re in there for a long time your mind begins to create its own sounds. Essentially, you begin to hallucinate due to a lack of external stimuli. This is basically what goes on in the aforementioned logical fallacy: you end up compensating for a lack of data by drawing a perceived (and often inaccurate) connection between the sole items of data you have.
What does this have to do with a language blog? Essentially, it’s a great way of showing how a lack of the background information required for comprehension can yield wildly inaccurate knowledge. Dig this:
Did you know that children with bigger feet are statistically better at spelling? This is statistically true. Without additional contextual information, I could hypothesize that having larger foot-size means the children would perform better at sports and have better balance while carrying large and cumbersome schoolbags, making them less prone to falling over in bustling school hallways, making them less likely targets for bullies, leading to an inevitable increase in confidence, leading to better scholastic performance, and thereby, better spelling skills!
The truth is, it’s actually because children with larger feet are probably a lot older than children with smaller feet. Duh.
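The foot-size example is easy to reproduce with simulated data. In the Python sketch below (every number is made up purely for illustration), foot size and spelling score each depend on age but not on each other; the two still come out strongly correlated overall, and the correlation largely vanishes once age is held fixed:

```python
import random

rng = random.Random(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

ages = [rng.randint(6, 12) for _ in range(5000)]
# Foot size and spelling score both grow with age, plus independent noise.
feet = [15 + 1.0 * a + rng.gauss(0, 1) for a in ages]
spelling = [10 * a + rng.gauss(0, 10) for a in ages]

# Strong positive correlation overall — driven entirely by age.
print(round(pearson(feet, spelling), 2))

# Control for the confounder: look at nine-year-olds only.
nine = [i for i, a in enumerate(ages) if a == 9]
print(round(pearson([feet[i] for i in nine],
                    [spelling[i] for i in nine]), 2))  # roughly zero
```

Same data, two stories: pooled across ages the variables move together, but within any single age group the “bigger feet, better spelling” link evaporates.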
Did you know that you are more likely to get cancer if you always wear a seat-belt?
- The Curious Case of Correlation ≠ Causation (sensitivecontext.com)
- How to tell a Conspiracy Theorist from a Conspiracy Believer (illuminutti.com)
- The Logical Fallacies (thecontemplationhypothezation.wordpress.com)
- Logical Fallacies (gpissues.wordpress.com)
- Tools for argumentation: Hierarchies of argumentation; Logical Fallacies (lizditz.typepad.com)
The pragmatic fallacy is committed when one argues that something is true because it works and where ‘works’ means something like “I’m satisfied with it,” “I feel better,” “I find it beneficial, meaningful, or significant,” or “It explains things for me.” For example, many people claim that astrology works, acupuncture works, chiropractic works, homeopathy works, numerology works, palmistry works, therapeutic touch works. What ‘works’ means here is vague and ambiguous. At the least, it means that one perceives some practical benefit in believing that it is true, despite the fact that the utility of a belief is independent of its truth-value.
The pragmatic fallacy is common in “alternative” health claims and is often based on post hoc reasoning. For example, one has a sore back, wears the new magnetic or takionic belt, finds relief soon afterwards, and declares that the magic belt caused the pain to go away. How does one know this? Because it works! There is also some equivocation going on in the alternative health claims that fall under the heading of “energy medicine,” such as acupuncture and therapeutic touch. The evidence pointed to often uses ‘works’ in the sense of ‘the customer is satisfied’ or ‘the patient improves,’ but the conclusion drawn is that ‘chi was unblocked’ or ‘energy was transferred.’
There is a common retort to the skeptic who points out that customer satisfaction is irrelevant to whether the device, medicine, or therapy in question really is a significant causal factor in some outcome. Who cares why it works as long as it works? You can argue about the theory as to why it works, but you can’t argue about the customer satisfaction or the fact that measurable improvements can be made. That’s all that matters.
It isn’t all that matters. Testimonials are not a substitute for scientific studies, which are done to make sure that we are not deceiving ourselves about what appears to be true. It is especially necessary to . . .
- Integrated Medicine (illuminutti.com)
- The Unsinkable Rubber Duck Of Alternative Medicine (acneeinstein.com)
- The fallacy of the middle ground (ourchangingclimate.wordpress.com)
- The Curious Case of Correlation ≠ Causation (sensitivecontext.com)
- Logical Fallacies (gpissues.wordpress.com)
Ever hear someone argue a point persuasively, even though it didn’t quite ring true? Chances are they used a logical fallacy.
Each video is only about 3 minutes long. Enjoy 🙂
Three great websites run by Brian Dunning (in the videos above) that all skeptical thinkers ought to have bookmarked:
Mason I. Bilderberg (MIB)
- The 12 cognitive biases that prevent you from being rational (illuminutti.com)
- Path of a Critical Thinker (Meow) (dead-logic.blogspot.com)
The human brain is capable of 10^16 processes per second, which makes it far more powerful than any computer currently in existence. But that doesn’t mean our brains don’t have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless — plus, we’re subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about.
Before we start, it’s important to distinguish between cognitive biases and logical fallacies. A logical fallacy is an error in logical argumentation (e.g. ad hominem attacks, slippery slopes, circular arguments, appeal to force, etc.). A cognitive bias, on the other hand, is a genuine deficiency or limitation in our thinking — a flaw in judgment that arises from errors of memory, social attribution, and miscalculations (such as statistical errors or a false sense of probability).
Some social psychologists believe our cognitive biases help us process information more efficiently, especially in dangerous situations. Still, they lead us to make grave mistakes. We may be prone to such errors in judgment, but at least we can be aware of them. Here are some important ones to keep in mind.
We love to agree with people who agree with us. It’s why we only visit websites that express our political opinions, and why we mostly hang around people who hold similar views and tastes. We tend to be put off by individuals, groups, and news sources that make us feel uncomfortable or insecure about our views — what the psychologist Leon Festinger called cognitive dissonance. It’s this preferential mode of behavior that leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel our pre-existing views, while at the same time ignoring or dismissing opinions — no matter how valid — that threaten our world view. And paradoxically, the internet has only made this tendency even worse.
Somewhat similar to the confirmation bias is the ingroup bias, a manifestation of our innate tribalistic tendencies. And strangely, much of this effect may have to do with oxytocin — the so-called “love molecule.” This hormone, while helping us to forge tighter bonds with people in our ingroup, performs the exact opposite function for those on the outside — it makes us suspicious, fearful, and even disdainful of others. Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don’t really know.
It’s called a fallacy, but it’s more a glitch in our thinking. We tend to put a tremendous amount of weight on previous events, believing that they’ll somehow influence future outcomes. The classic example is coin-tossing. After flipping heads, say, five consecutive times, our inclination is to predict an increase in likelihood that the next coin toss will be tails — that the odds must certainly be in the favor of heads. But in reality, the odds are still 50/50. As statisticians say, the outcomes in different tosses are statistically independent and the probability of any outcome is still 50%.
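Part of what makes the fallacy so seductive is that streaks feel rare when they are actually routine. This quick Python sketch (the flip and trial counts are arbitrary) estimates how often a fair coin produces a run of five or more heads somewhere in 100 flips:

```python
import random

rng = random.Random(7)

def has_heads_run(n_flips=100, run_len=5):
    """Flip a fair coin n_flips times; report whether a run of
    run_len or more consecutive heads ever appears."""
    run = 0
    for _ in range(n_flips):
        run = run + 1 if rng.random() < 0.5 else 0
        if run >= run_len:
            return True
    return False

trials = 10_000
hits = sum(has_heads_run() for _ in range(trials))
print(hits / trials)  # streaks of 5+ heads turn up in most sessions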
Relatedly, there’s also the positive expectation bias — which often fuels gambling addictions. It’s the sense that our luck has to eventually change and that good fortune is on the way. It also contributes to the “hot hand” misconception. Similarly, it’s the same feeling we get when we start a new relationship that leads us to believe it will be better than the last one.
Remember that time you bought something totally unnecessary, faulty, or overly expensive, and then you rationalized the purchase to such an extent that you convinced yourself it was a great idea all along? Yeah, that’s post-purchase rationalization in action — a kind of built-in mechanism that makes us feel better after we make crappy decisions, especially at the cash register. Also known as Buyer’s Stockholm Syndrome, it’s a way of subconsciously justifying our purchases — especially expensive ones. Social psychologists say it stems from the principle of commitment, our psychological desire to stay consistent and avoid a state of cognitive dissonance.
Very few of us have a problem getting into a car and going for a drive, but many of us experience great trepidation about stepping inside an airplane and flying at 35,000 feet. Flying, quite obviously, is a wholly unnatural and seemingly hazardous activity. Yet virtually all of us know and acknowledge the fact that the probability of dying in an auto accident is significantly greater than getting killed in a plane crash — but our brains won’t release us from this crystal clear logic (statistically, the lifetime odds of dying in a vehicular accident are about 1 in 84, compared to roughly 1 in 5,000 for a plane crash [other sources put the odds closer to 1 in 20,000]). It’s the same phenomenon that makes us worry about getting killed in an act of terrorism as opposed to something far more probable, like falling down the stairs or accidental poisoning.
This is what the social psychologist Cass Sunstein calls probability neglect: our inability to properly grasp peril and risk, which often leads us to overstate the risks of relatively harmless activities while underrating more dangerous ones.
- Your Brain Is Flawed — 12 Scientific Reasons Human Beings Are Wildly Irrational (alternet.org)
- The 12 Cognitive Biases That Prevent You From Being Rational (theageofblasphemy.wordpress.com)
- The 12 cognitive biases that prevent you from being rational (richarddawkins.net)
- Do our cognitive biases help or hinder us as entrepreneurs? As leaders? (entrepreneurshipmatters.com)
- The Power of Confirmation Bias (illuminutti.com)
- The Power of Confirmation Bias (theness.com)