Imagine you are at a Las Vegas casino and you’re approaching the roulette table. You notice that the last eight numbers were black… so you think to yourself, “Holy smokes, what are the odds of that!” and you bet on red, thinking that the odds of another black number coming up are really small. In fact, you might think that the odds of another black coming up are:
0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 = 0.5⁹ ≈ 0.00195 (a very tiny number: the probability of nine blacks in a row)
Or are they?
The problem is that a roulette table – if fairly constructed – has no “memory”. That is, one outcome does not depend on the previous outcome’s result, so on every spin the odds of a red number and a black number are just about equal (actually, just shy of 50% each, since there are one or two green spaces on a roulette wheel, depending on whether it’s the American or European version).
Keeping with our example: a bet on either red or black is an outside bet that pays 1 to 1 and covers 18 of the 38 pockets on an American wheel (a probability of about 0.474). That’s a far cry from the 0.00195 number above, a miscalculation roughly 243 times too small. Suddenly your bet on red doesn’t look like such a sure thing anymore…
This fallacy is called the Gambler’s Fallacy, and it’s what the city of Las Vegas is built on.
Random events produce clusters like “8 black numbers in a row”, but in the long term, the probability of red or black will even out to its natural average.
The key to your success at the casino? Understand that every individual spin (or “event”) has its own probability which never changes. In this case, 18 in 38.
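The claim above is easy to check for yourself. This is a toy simulation (pocket counts are the real American-wheel numbers; everything else is illustrative): it spins a simulated 38-pocket wheel millions of times, waits for runs of eight blacks in a row, and records what comes up next. If the wheel had a “memory”, black should become rare after such a streak; in fact it keeps turning up at its usual 18-in-38 rate.

```python
import random

def simulate_roulette(n_spins, streak_len=8, seed=42):
    """Spin an American wheel (18 red, 18 black, 2 green) and record
    the colour that follows every run of `streak_len` blacks."""
    rng = random.Random(seed)
    wheel = ["red"] * 18 + ["black"] * 18 + ["green"] * 2
    streak = 0
    after_streak = []  # outcomes observed right after a black streak
    for _ in range(n_spins):
        colour = rng.choice(wheel)
        if streak >= streak_len:
            after_streak.append(colour)
        # extend the streak on black, reset it on red or green
        streak = streak + 1 if colour == "black" else 0
    return after_streak

outcomes = simulate_roulette(2_000_000)
p_black = outcomes.count("black") / len(outcomes)
# close to 18/38 ≈ 0.474, nowhere near 0.00195
print(f"P(black | 8 blacks just occurred) ≈ {p_black:.3f}")
```

Each spin is independent, so conditioning on the previous eight outcomes changes nothing: the streak only tells you *when* to look, not what you will see.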
So the next time you’re at a casino and you see a string of the same color coming up, remember that the odds of that color coming up again are exactly the same as the other color… it might save you a few bucks so you can play a bit longer.
- Unpredictability: Hot Hands vs. Gambler’s Fallacies (practicalpersuasion.wordpress.com)
- Last Week At Science-Based Medicine (randi.org)
In my last post, I wrote about how not having enough contextual data can outright boggle the mind. Today, we’re going to read about something else that similarly boggles the mind, albeit not really related to any linguistic phenomena. It’s an interesting little logical fallacy in the field of statistics known as cum hoc ergo propter hoc, or more commonly, ‘correlation does not prove causation’. Here, we define correlation as ‘when two things happen at the same time’, and causation as ‘when one thing causes the other’.
This logical fallacy is great at showing the glaring inaccuracies caused by lack of data on a specific subject, and how this lack can cause us to reach blindly for (often incorrect) conclusions in the proverbial fogginess of our mind. Additionally, the comedic factor here is amplified if you forego the law of parsimony (also known as Occam’s razor), which states that of all the possible explanations for a question or problem, the one requiring the fewest assumptions is usually the best place to start.
Have you ever been in a recording booth or a really quiet place? If you’re in there for a long time your mind begins to create its own sounds. Essentially, you begin to hallucinate due to a lack of external stimuli. This is basically what goes on in the aforementioned logical fallacy: you end up compensating for a lack of data by drawing a perceived (and often inaccurate) connection between the sole items of data you have.
What does this have to do with a language blog? Essentially, it’s a great way of showing how a lack of the background information required for comprehension can yield wildly inaccurate knowledge. Dig this:
Did you know that children with bigger feet are statistically better at spelling? This is statistically true. Without additional contextual information, I could hypothesize that having larger foot-size means the children would perform better at sports and have better balance while carrying large and cumbersome schoolbags, making them less prone to falling over in bustling school hallways, making them less likely targets for bullies, leading to an inevitable increase in confidence, leading to better scholastic performance, and thereby, better spelling skills!
The truth is, it’s actually because children with larger feet are probably a lot older than children with smaller feet. Duh.
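The foot-size example can be reproduced with a few lines of code. This is a sketch with made-up numbers (the coefficients and age ranges are invented for illustration): age drives both foot size and spelling ability, while neither causes the other. The two variables end up strongly correlated overall, yet once you hold the confounder fixed by looking at a narrow age band, the correlation all but disappears.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
ages = [rng.uniform(5, 12) for _ in range(10_000)]  # the hidden confounder
# Both variables depend on age plus independent noise; neither causes the other.
foot_size = [13 + 1.1 * a + rng.gauss(0, 1) for a in ages]
spelling = [10 * a + rng.gauss(0, 10) for a in ages]

print(f"corr(foot size, spelling) = {pearson(foot_size, spelling):.2f}")

# Hold the confounder fixed: within a narrow age band the correlation vanishes.
band = [i for i, a in enumerate(ages) if 8.0 <= a < 8.5]
fs_band = [foot_size[i] for i in band]
sp_band = [spelling[i] for i in band]
print(f"corr within ages 8.0-8.5  = {pearson(fs_band, sp_band):.2f}")
```

Statisticians call this “controlling for” a variable: if a correlation survives after the plausible confounders are held fixed, a causal story becomes more credible; if it evaporates, you were probably looking at the confounder all along.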
Did you know that you are more likely to get cancer if you always wear a seat-belt?
- The Curious Case of Correlation ≠ Causation (sensitivecontext.com)
- How to tell a Conspiracy Theorist from a Conspiracy Believer (illuminutti.com)
- The Logical Fallacies (thecontemplationhypothezation.wordpress.com)
- Logical Fallacies (gpissues.wordpress.com)
- Tools for argumentation: Hierarchies of argumentation; Logical Fallacies (lizditz.typepad.com)
The pragmatic fallacy is committed when one argues that something is true because it works and where ‘works’ means something like “I’m satisfied with it,” “I feel better,” “I find it beneficial, meaningful, or significant,” or “It explains things for me.” For example, many people claim that astrology works, acupuncture works, chiropractic works, homeopathy works, numerology works, palmistry works, therapeutic touch works. What ‘works’ means here is vague and ambiguous. At the least, it means that one perceives some practical benefit in believing that it is true, despite the fact that the utility of a belief is independent of its truth-value.
The pragmatic fallacy is common in “alternative” health claims and is often based on post hoc reasoning. For example, one has a sore back, wears the new magnetic or takionic belt, finds relief soon afterwards, and declares that the magic belt caused the pain to go away. How does one know this? Because it works! There is also some equivocation going on in the alternative health claims that fall under the heading of “energy medicine,” such as acupuncture and therapeutic touch. The evidence pointed to often uses ‘works’ in the sense of ‘the customer is satisfied’ or ‘the patient improves,’ but the conclusion drawn is that ‘chi was unblocked’ or ‘energy was transferred.’
There is a common retort to the skeptic who points out that customer satisfaction is irrelevant to whether the device, medicine, or therapy in question really is a significant causal factor in some outcome. Who cares why it works as long as it works? You can argue about the theory as to why it works, but you can’t argue about the customer satisfaction or the fact that measurable improvements can be made. That’s all that matters.
It isn’t all that matters. Testimonials are not a substitute for scientific studies, which are done to make sure that we are not deceiving ourselves about what appears to be true. It is especially necessary to . . .
- Integrated Medicine (illuminutti.com)
- The Unsinkable Rubber Duck Of Alternative Medicine (acneeinstein.com)
- The fallacy of the middle ground (ourchangingclimate.wordpress.com)
- The Curious Case of Correlation ≠ Causation (sensitivecontext.com)
- Logical Fallacies (gpissues.wordpress.com)
Ever hear someone argue a point that was effective, even though it didn’t quite ring true? Chances are they used a logical fallacy.
Each video is only about 3 minutes long. Enjoy 🙂
Three great websites run by Brian Dunning (in the videos above) that all skeptical thinkers ought to have bookmarked:
Mason I. Bilderberg (MIB)
- The 12 cognitive biases that prevent you from being rational (illuminutti.com)
- Path of a Critical Thinker (Meow) (dead-logic.blogspot.com)
The human brain is capable of 10¹⁶ processes per second, which makes it far more powerful than any computer currently in existence. But that doesn’t mean our brains don’t have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless — plus, we’re subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about.
Before we start, it’s important to distinguish between cognitive biases and logical fallacies. A logical fallacy is an error in logical argumentation (e.g. ad hominem attacks, slippery slopes, circular arguments, appeal to force, etc.). A cognitive bias, on the other hand, is a genuine deficiency or limitation in our thinking — a flaw in judgment that arises from errors of memory, social attribution, and miscalculations (such as statistical errors or a false sense of probability).
Some social psychologists believe our cognitive biases help us process information more efficiently, especially in dangerous situations. Still, they lead us to make grave mistakes. We may be prone to such errors in judgment, but at least we can be aware of them. Here are some important ones to keep in mind.
We love to agree with people who agree with us. It’s why we only visit websites that express our political opinions, and why we mostly hang around people who hold similar views and tastes. We tend to be put off by individuals, groups, and news sources that make us feel uncomfortable or insecure about our views — what the social psychologist Leon Festinger called cognitive dissonance. It’s this preferential mode of behavior that leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel our pre-existing views, while at the same time ignoring or dismissing opinions — no matter how valid — that threaten our world view. And paradoxically, the internet has only made this tendency even worse.
Somewhat similar to the confirmation bias is the ingroup bias, a manifestation of our innate tribalistic tendencies. And strangely, much of this effect may have to do with oxytocin — the so-called “love molecule.” This neurotransmitter, while helping us to forge tighter bonds with people in our ingroup, performs the exact opposite function for those on the outside — it makes us suspicious, fearful, and even disdainful of others. Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don’t really know.
It’s called a fallacy, but it’s more a glitch in our thinking. We tend to put a tremendous amount of weight on previous events, believing that they’ll somehow influence future outcomes. The classic example is coin-tossing. After flipping heads, say, five consecutive times, our inclination is to predict an increase in likelihood that the next coin toss will be tails — that the odds must certainly now favor tails. But in reality, the odds are still 50/50. As statisticians say, the outcomes in different tosses are statistically independent and the probability of any outcome is still 50%.
Relatedly, there’s also the positive expectation bias — which often fuels gambling addictions. It’s the sense that our luck has to eventually change and that good fortune is on the way. It also contributes to the “hot hand” misconception. It’s the same feeling we get when we start a new relationship that leads us to believe it will be better than the last one.
Remember that time you bought something totally unnecessary, faulty, or overly expensive, and then you rationalized the purchase to such an extent that you convinced yourself it was a great idea all along? Yeah, that’s post-purchase rationalization in action — a kind of built-in mechanism that makes us feel better after we make crappy decisions, especially at the cash register. Also known as Buyer’s Stockholm Syndrome, it’s a way of subconsciously justifying our purchases — especially expensive ones. Social psychologists say it stems from the principle of commitment, our psychological desire to stay consistent and avoid a state of cognitive dissonance.
Very few of us have a problem getting into a car and going for a drive, but many of us experience great trepidation about stepping inside an airplane and flying at 35,000 feet. Flying, quite obviously, is a wholly unnatural and seemingly hazardous activity. Yet virtually all of us know and acknowledge the fact that the probability of dying in an auto accident is significantly greater than getting killed in a plane crash — but our brains won’t release us from this crystal clear logic (statistically, we have a 1 in 84 chance of dying in a vehicular accident, as compared to a 1 in 5,000 chance of dying in a plane crash [other sources indicate odds as high as 1 in 20,000]). It’s the same phenomenon that makes us worry about getting killed in an act of terrorism as opposed to something far more probable, like falling down the stairs or accidental poisoning.
This is what the legal scholar Cass Sunstein calls probability neglect — our inability to grasp a proper sense of peril and risk — which often leads us to overstate the risks of relatively harmless activities while underrating more dangerous ones.
- Your Brain Is Flawed — 12 Scientific Reasons Human Beings Are Wildly Irrational (alternet.org)
- The 12 Cognitive Biases That Prevent You From Being Rational (theageofblasphemy.wordpress.com)
- The 12 cognitive biases that prevent you from being rational (richarddawkins.net)
- Do our cognitive biases help or hinder us as entrepreneurs? As leaders? (entrepreneurshipmatters.com)
- The Power of Confirmation Bias (illuminutti.com)
- The Power of Confirmation Bias (theness.com)
via NeuroLogica Blog
Skeptics should add another term to their lexicon of self-deception and cognitive biases – temporal binding.
Over the last half-century or so, psychologists have been quietly documenting the many ways in which people deceive themselves and distort their thinking. This knowledge, however, has insufficiently penetrated the public consciousness. When it does, it is mostly framed as, “isn’t that an interesting quirk of the human mind,” but the deeper lesson — that we cannot trust our own perception and memory — is rarely brought home.
Skeptics have taken modern neuroscience to heart. Our philosophy incorporates what I call “neuropsychological humility” – the basic recognition that our brains are subject to a host of flaws and biases, and therefore we cannot simply rely upon what we remember about what we thought we experienced. Rather, we need to rely upon a rational process and objective evidence as much as possible (part of this is relying on rigorous science to form our empirical conclusions). These flaws and biases are not confined to parlor tricks, contrived psychological experiments, and sitting in the audience of a magic show, but apply in everyday life.
Temporal binding is one tiny slice of the cognitive biases that form our everyday thinking. The overarching concept is that our memories are not passive recorders, nor are they primarily focused on the accurate recall of details. We do have a memory for details, but we also have a thematic memory, which seems to predominate. The thematic memory remembers the meaning of events, and then details are altered to fit this meaning. We construct a narrative and then over time our memory increasingly fits that narrative. This is not a conscious or deliberate process – our memories just morph over time. We are not aware of this process, nor can we distinguish an accurate memory from one that has morphed completely out of alignment with reality. They are both just memories.
Temporal binding is one manifestation of this general phenomenon, and is related to the logical fallacy, post hoc ergo propter hoc – after this therefore because of this. We tend to assume that if A precedes B then it is likely that A caused B. The logical fallacy is in assuming that A did in fact cause B without adequate independent evidence, merely because of the temporal association.
It seems that we evolved to make this assumption. Often A precedes B because it did cause it, and apparently there is a survival advantage to assuming that A probably did cause B, rather than being skeptical of this fact.
MORE . . .
Spontaneous Human Combustion (SHC) is one of those classic pseudosciences that have been around for a long time – like astrology, Bigfoot, and the Bermuda Triangle. I put it in the same category as the myth that we only use about 10% of our brain capacity; it’s widely believed, but no one really cares that much. It’s just something people hear about and have no reason to doubt, so they lazily accept it. I did when I was younger (in my pre-skeptical days): you hear about it on TV and think, “Huh, isn’t that interesting.”
It’s therefore a good opportunity to teach critical thinking skills. People’s brains are clogged with myths and false information, spread by rumor and the media, and accepted for lack of the proper critical thinking filters. It’s disappointing, however, when people who should know better, or whose job it is to know better, fall for such myths.
Recently an Irish coroner concluded that a man died from SHC, and it is reported:
The West Galway coroner, Ciaran McLoughlin, said there was no other adequate explanation for the death of Michael Faherty, 76, also known as Micheal O Fatharta.
The coroner said: “This fire was thoroughly investigated and I’m left with the conclusion that this fits into the category of spontaneous human combustion, for which there is no adequate explanation.”
First, let’s play a game of name-that-logical-fallacy. The core fallacy the coroner is committing is the argument from ignorance. The investigation could not find a cause for the fire, therefore here is the specific cause – SHC. The conclusion should rather be – we don’t know what caused the fire.
The coroner said the case “fits into the category” of SHC – but how?
Keep Reading: NeuroLogica Blog » Spontaneous Human Stupidity.
- Spontaneous Human Combustion (illuminutti.com)
- Professor’s breakthrough on human combustion theory (illuminutti.com)
- Spontaneous Human Combustion Explained With Belly Pork (theepochtimes.com)
- Spontaneous Human Combustion Explained With Belly Pork? (zen-haven.dk)
- Humans Spontaneously Combustible? Ten Cases.Video (ramanan50.wordpress.com)
- Pig ash could reveal how people spontaneously combust (newscientist.com)
- UFO over Chilean Air Base (illuminutti.com)