This is not a new story, but it is worth repeating. At the moment that bullets were being fired into JFK’s motorcade, a man can be seen standing on the side of the road near the car holding an open black umbrella. It was a sunny day (although it had rained the night before) and no one else in Dallas was holding an umbrella.
This is exactly the kind of detail that lights a fire under conspiracy theorists. It is a genuine anomaly – something that sticks out like a sore thumb.
The event also defies our intuition about probability. Even if one could accept that somewhere on the streets of Dallas that morning one man decided to hold an open umbrella for some strange reason, what are the odds that this one man would be essentially standing right next to the president’s car when the bullets began to fly?
Our evolved tendency for pattern recognition and looking for significance in events screams that this anomaly must have a compelling explanation, and since it is associated with the assassination of a president, it must be a sinister one.
When you delve into the details of any complex historical event, however, anomalies such as this are certain to surface. People are quirky individual beings with rich and complex histories and motivations. People do strange things for strange reasons. There is no way to account for all possible thought processes firing around in the brains of every person involved in an event.
Often the actions of others seem unfathomable to us. Our instinct is to try to explain the behavior of others as resulting from mostly internal forces. We tend to underestimate the influence of external factors. This is called the fundamental attribution error.
We also tend to assume that the actions of others are deliberate and planned, rather than random or accidental.
The common assumption underlying all of these various instincts is that there is a specific purpose to events, and especially the actions of others. We further instinctively fear that this purpose is sinister, or may be working against our own interests in some way. In this way, we all have a little conspiracy theorist inside us.
Who Was the Umbrella Man?
Whenever the discussion of a dualist vs materialist model of the mind comes up, one common point made to support the dualist position (that the mind is something other than or more than just the functioning of the brain) is that the brain may not be the origin of the mind, but rather is just the receiver. Often an explicit comparison is made to radios or televisions.
The brain as receiver hypothesis, however, is wholly inadequate to explain the relationship between the brain and the mind, as I will explain below.
As an example of the brain-receiver argument, David Eagleman writes in his book Incognito:
As an example, I’ll mention what I’ll call the “radio theory” of brains. Imagine that you are a Kalahari Bushman and that you stumble upon a transistor radio in the sand. You might pick it up, twiddle the knobs, and suddenly, to your surprise, hear voices streaming out of this strange little box. If you’re curious and scientifically minded, you might try to understand what is going on. You might pry off the back cover to discover a little nest of wires. Now let’s say you begin a careful, scientific study of what causes the voices. You notice that each time you pull out the green wire, the voices stop. When you put the wire back on its contact, the voices begin again. The same goes for the red wire. Yanking out the black wire causes the voices to get garbled, and removing the yellow wire reduces the volume to a whisper. You step carefully through all the combinations, and you come to a clear conclusion: the voices depend entirely on the integrity of the circuitry. Change the circuitry and you damage the voices.
He argues that the Bushman might falsely conclude that the wires in the radio produce the voices by some unknown mechanism, because he has no knowledge of electromagnetic radiation and radio technology.
This point also came up several times in the 600+ comments following my post on the Afterlife Debate. Commenter Luoge, for example, wrote:
“But the brain-as-mediator model has not yet been ruled out. We can tamper with a TV set and modify its behaviour just as a neurosurgeon can do with a brain. We can shut down some, or all, of its functioning, and we can stimulate it to show specific responses. And yet no neurologist is known to have thought that the TV studio was inside the TV set.”
There are two reasons to reject the brain-as-mediator model – it does not explain the intimate relationship between brain and mind, and (even if it could) it is entirely unnecessary.
To deal with the latter point first, I have used the example of the light-fairy. When I flip the light switch on my wall, the materialist model holds that I am closing a circuit, allowing electricity to flow through the wires in my wall to a specific appliance (such as a light fixture). That light fixture contains a light bulb which adds resistance to the circuit and uses the electrical energy to heat an element in order to produce light and heat.
One might hypothesize, however, that an invisible light fairy lives in my wall. When I flip the switch the fairy flies to the fixture where it draws energy from the electrical wires, and then creates light and heat that it causes to radiate from the bulb. The light bulb is not producing the light and heat, it is just a conduit for the light fairy’s light and heat.
There is no way you can prove that my light fairy does not exist. It is simply entirely unnecessary, and adds nothing to our understanding of reality. The physics of electrical circuits do a fine job of accounting for the behavior of the light switch and the light. There is no need to invoke light bulb dualism.
The same is true of the brain and the mind, the only difference being that both are a lot more complex.
More importantly, however, we have enough information to rule out the brain-as-receiver model unequivocally.
The examples often given of the radio or TV analogy are very telling. They refer to altering the quality of the reception, the volume, even changing the channel. But those are only the crudest analogies to the relationship between brain and mind.
A more accurate analogy would be this – can you alter the wiring of a TV in order to change the plot of a TV program? Can you change a sitcom into a drama? Can you change the dialogue of the characters? Can you stimulate one of the wires in the TV in order to make one of the on-screen characters twitch?
Well, that is what would be necessary in order for the analogy to hold.
Every skeptic’s new favorite website is Spurious Correlations. The site is brilliant – it mines multiple data sets (such as causes of death, consumption of various products, divorce rates by state, etc.) and then tries to find correlations between different variables. The results are often hilarious.
The point of this exercise is to demonstrate that correlation does not necessarily equal causation. Often it is more effective to demonstrate a principle than simply to explain it. By showing impressive-looking graphical correlations between phenomena that are clearly not related (at the least, proposing a causal connection between them seems superficially absurd), it drives home the point that correlation alone is not enough to conclude causation.
I think most people can intuitively understand that spending on science, space, and technology is unlikely to have a meaningful causal connection to suicide by hanging, strangulation, or suffocation.
Yet – look at those curves. If a similar graph were shown with two variables that might be causally connected, that would seem very compelling.
There are a couple of points about this I want to explore a bit further. The first is the important caveat that, while correlation is not necessarily causation, sometimes it is. Two variables that are causally related will correlate. I dislike the oversimplification that is sometimes presented: “correlation is not causation.” Correlation is not necessarily causation – but it can be.
The second point is a statistical one. The important deeper lesson here is the power of data mining. Humans are great at sifting through lots of data and finding apparent patterns. In fact we have a huge bias toward false positives in this regard – we find patterns that are not really there but are just statistical flukes or complete illusions.
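This false-positive tendency of data mining is easy to reproduce deliberately. Below is a minimal Python sketch (my own illustration, not code from the Spurious Correlations site) that generates a batch of completely unrelated random time series and then mines every pair for the strongest correlation. With enough variables, an impressive-looking "relationship" is virtually guaranteed by chance alone:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n_series, n_years = 50, 10

# Fifty independent random "trends" -- none has any causal link to any other.
series = [[random.gauss(0, 1) for _ in range(n_years)] for _ in range(n_series)]

# Mine all pairs (over a thousand of them) for the strongest correlation.
best = max(
    ((abs(pearson(a, b)), i, j)
     for i, a in enumerate(series)
     for j, b in enumerate(series[i + 1:], i + 1)),
    key=lambda t: t[0],
)
print(f"Strongest correlation among {n_series} unrelated series: r = {best[0]:.2f}")
```

The winning pair will typically show a striking correlation purely by chance; the more variables you mine, the more certain such a fluke becomes.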
Correlations, however, seem compelling to us. If we dream about a friend we haven’t seen in 20 years then they call us the next day, that correlation seems uncanny, and we hunt for a cause. We aren’t even aware of the fact that . . .
Four common types of analytical errors in reasoning that we all need to beware of.
Today we’re going to cover a bit of new ground in the basics of critical thinking and critical reasoning. There are several defined types of common analytical errors to which we’re all prone; some, perhaps, more so than others. Reasoning errors can be made accidentally, and some can even be made deliberately as a way to influence the acceptance of ideas. We’re going to take a close look at the Type I false positive error, the Type II false negative error, the Type III error of answering the wrong question, and finally the dreaded Type IV error of asking the wrong question.
By way of example we’ll apply these errors to three hypothetical situations, all of which should be familiar to fans of scientific skepticism:
- From the realm of the paranormal, a house is reported to be haunted. The null hypothesis is that there is no ghost, until we find evidence that there is.
- The conspiracy theory that the government is building prison camps in which to dispose of millions of law-abiding citizens. The null hypothesis is that there are no such camps, until we find evidence of them.
- And from alternative medicine, the claim that vitamins can cure cancer. The null hypothesis is that they don’t, until it can be proven otherwise through controlled testing.
So let’s begin with:
Type I Error: False Positive
A false positive is believing something that is not true, or more formally, the rejection of a true null hypothesis — it turns out there’s nothing there, but you conclude that there is. In cases where the null hypothesis does turn out to be true, a Type I error incorrectly rejects it in favor of a conclusion that the new claim is true. A Type I error occurs when the conclusion that’s made is faulty, whether based on bad evidence, misinterpreted evidence, an error in analysis, or any number of other factors.
In the haunted house, Type I errors are those that occur when the house is not, in fact, haunted; but the investigators erroneously find that it is. They may record an unexplained sound and wrongly consider that to be proof of a ghost, or they may collect eyewitness anecdotes and wrongly consider them to be evidence, or they may have a strange feeling and wrongly reject all other possible causes for it.
The conspiracy theorist commits a Type I error when the government is not, in fact, building prison camps to exterminate citizens, but he comes across something that makes him reject that null hypothesis and conclude that it’s happening after all. Perhaps he sees unmarked cars parked outside a fenced lot that has no other apparent purpose, and wrongly considers that to be unambiguous proof, or perhaps he watches enough YouTube videos and decides that so many other conspiracy theorists can’t be all wrong. Perhaps he simply hates the government, so he automatically accepts any suggestion of their evildoing.
Finally, the alternative medicine hopeful commits a Type I error when he concludes that vitamins successfully treat a cancer that they actually don’t. Perhaps he hears enough anecdotes or testimonials, perhaps he is mistrustful of medical science and erroneously concludes that alternative medicine must therefore work, or whatever his thought process is; but an honest conclusion that the null hypothesis has been proven false is a classic Type I error.
Type II Error: False Negative
Cynics are those who are most often guilty of the Type II error, the acceptance of the null hypothesis when it turns out to actually be false — it turns out that something is there, but you conclude that there isn’t. If you actually do have psychic powers but I am satisfied that you do not, I commit a Type II error. The villagers of the boy who cried “Wolf!” commit a Type II error when they ignore his warning, thinking it false, and lose their sheep to the wolf. The protohuman who hears a rustling in the grass and assumes it’s just the wind commits a Type II error when the panther springs out and eats him.
Perhaps somewhere there is a house that actually is haunted, and maybe the TV ghost hunters find it. If I laugh at their silly program and dismiss the ghost, I commit a Type II error. If it were to transpire that the government actually is implementing plans to exterminate millions of citizens in prison camps, then everyone who has not been particularly concerned about this (myself included) has made a Type II error. The invalid dismissal of vitamin megadosing would also be a Type II error if it turned out to indeed cure cancer, or whatever the hypothesis was.
Type I and II errors are not limited to whether we believe in some pseudoscience; they’re even more applicable in daily life, in business decisions and research. If I have a bunch of Skeptoid T-shirts printed to sell at a conference, I make a Type I error by assuming that people are going to buy, and it turns out that nobody does. The salesman makes a Type II error when he decides that no customers are likely to buy today, so he goes home early, when in fact it turns out that one guy had his checkbook in hand.
Both Type I and II errors can be subtle and complex, but in practice, the Type I error can be thought of as excess idealism, accepting too many new ideas; and the Type II error as excess cynicism, rejecting too many new ideas.
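The two error types boil down to a simple decision table: the truth of the null hypothesis crossed with the decision we make about it. Here is a minimal Python sketch of that table (my own illustration, using the haunted-house example, where the null hypothesis is "no ghost"):

```python
def classify(null_is_true: bool, we_reject_null: bool) -> str:
    """Classify the outcome of a hypothesis test into the four cells
    of the truth-versus-decision table."""
    if null_is_true and we_reject_null:
        # Nothing is there, but we conclude that there is.
        return "Type I error (false positive)"
    if not null_is_true and not we_reject_null:
        # Something is there, but we conclude that there isn't.
        return "Type II error (false negative)"
    return "correct conclusion"

# The house is not haunted, but the TV ghost hunters find a ghost anyway:
print(classify(null_is_true=True, we_reject_null=True))

# The house actually is haunted, but I laugh at the silly program and dismiss it:
print(classify(null_is_true=False, we_reject_null=False))
```

The remaining two cells of the table, rejecting a false null or accepting a true one, are the correct outcomes: the idealist and the cynic each guard against one kind of error while remaining exposed to the other.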
Before talking about Type III and IV errors, it should be noted that these are not universally accepted. Types I and II have been standard for nearly a century, but various people have extended the series in various directions since then; so there is no real convention for what Types III and IV are. However the definitions I’m going to give are probably the most common, and they work very well for the purpose of skeptical analysis.
Well, it’s Halloween time, and so I decided to do something special and talk about a monster that everyone seems to like these days: Zombies!!!
Zombies are, of course, the reanimated corpses of people whose only goal in their new life is to eat other people (preferably living ones).
Now there are lots of things that I (and I’m sure many others) have noticed about zombies, but I’ve narrowed it down to five different things.
So here are five things I’ve noticed about zombies:
5. They’re hard to kill.
(Author’s note: before anyone says it, yes, I know zombies are technically dead, but because “killing” is the simplest term I can come up with for taking one out, I’ve decided to use it.)
Thanks to movies and television shows, many people have been led to believe that zombies are easy to kill; with so many scenes of only a few people taking on huge hordes of the undead, I would believe that too. The problem is that this is unrealistic (aside from the fighting-zombies part), and it would actually be pretty difficult to kill a zombie.
I’m sure that everyone knows you have to destroy a zombie’s brain in order to kill it (you can cut off a zombie’s head and the head will still be alive), but this is not as easy as it sounds, because the brain is actually a pretty small target. Most people would have to get pretty close to a zombie to hit its brain with a gun, especially a pistol or even a shotgun; and with a melee weapon, you have to get up close regardless.
Now some people might think that it is okay to fight up close against a zombie because that is how it is often depicted in movies and TV, but in fact…
4. People fight them the wrong way.
I know that in movies and on TV, battles with zombies are often depicted as up-close-and-personal combat, and if you were to fight one or two of them up close there wouldn’t be any problems. But if you were to fight an entire zombie horde… you’re zombie food, because even though you might be able to take a lot of them out, unless you can escape quickly, the zombies will overwhelm you and eat you!
The best (and safest) way to fight zombies is from a distance with a rifle, which is more accurate and has greater range than a shotgun or a pistol.
Also, being up high (like in a tree) helps as well; just be sure you have a way to escape quickly in case a zombie horde is coming and you have to get out of there.
3. The supernatural explanation for them makes more sense than the viral explanation.
In almost all modern versions, zombies are depicted as becoming members of the undead via a virus of some type, and while this may seem like a rational and logical explanation for why a zombie would exist in the first place, the old traditional explanation, that a corpse is reanimated via voodoo magic, actually makes more logical sense if you think about it.
A line of reasoning named for Socrates helps us help believers in the strange re-examine their beliefs.
Read transcript below or listen here
Of all the possible perspectives, beliefs, theories, ideologies, and conclusions in this world, which of them are beyond question? None of them. And neither should be any person who holds one of those positions. People believe all sorts of strange things, and even though they might be passionate about them, most will still admit that questioning their belief is an appropriate undertaking. Therefore, we — as scientific skeptics — have an available avenue by which we can always encourage believers in the strange to revisit their beliefs. Despite the fact that we may lack professional expertise in the subject at hand, we can still plant the seeds of an uprising of logic within the mind of the believer. One way to do this is through the application of Socratic questioning.
Returning to Starling and Bombo, the fictional example characters used in the past, we can illustrate this concept. Let us choose an example scenario. If Bombo has seen a UFO and believes that it was an alien spacecraft, it would likely be difficult for Starling to reason him out of the idea by offering alternative suggestions. People are often pretty stubborn when it comes to personal experiences that they’ve already interpreted for themselves; Bombo saw an alien spacecraft, and telling him it was the planet Venus would probably be a dead end. Indeed, even offering lines of logic for Bombo to follow on his own would probably be refused. So is there any effective way at all of getting someone to consider a different explanation?
The answer is yes, and it involves getting Bombo to arrive at alternate explanations on his own. We’re all far more prone to accept our own ideas than someone else’s. Starling might well be able to get Bombo to consider the idea that the UFO might not have been an alien spacecraft by employing Socratic questioning. Named (quite obviously) for Socrates — the ancient Greek philosopher (also quite obviously) — the Socratic questions are primarily teaching tools. Just as Bombo better accepts his own ideas, so do students of all types. Socratic questioning helps people to take a second, closer look at their own beliefs, and to apply critical thinking even when they least expect it.
There are six commonly described categories of Socratic questions, and they’re all good. You could familiarize yourself with any one of them, and you’d have a pretty good chance at changing Bombo’s mind, or that of anyone else who has made a conclusion based on faulty logic. An adept at all six types of questions would be a formidable reformer of popular pseudoscience believers.
Let’s begin with the first type:
- Skeptoid #384: Asking the Socratic Questions (skeptoid.com)
Ever hear someone argue a point that was effective, even though it didn’t quite ring true? Chances are they used a logical fallacy.
Each video is only about 3 minutes long. Enjoy🙂
Three great websites run by Brian Dunning (in the videos above) that all skeptical thinkers ought to have bookmarked:
Mason I. Bilderberg (MIB)
We all do it. In fact, we are generally very good at it. Smart and educated people are better at it.
Rationalizing is a daily practice, part of the “default mode” of human thinking. We make up reasons to justify believing what we want to believe. Often we are only dimly aware of why we want to believe something, the calculus largely occurring in the subconscious depths of our brains.
We defend beliefs because they are pleasing to our egos, because they minimize cognitive dissonance, and just because they are our beliefs. They resonate with our world-view, our internal model of reality.
We have at our disposal a long list of logical fallacies that we can marshal to the defense of our beliefs. Notions that are based on solid evidence and logic do not require such vigorous defense. Those beliefs that cannot be defended by logic and evidence require that bad logic and bad data be invoked to defend them. Luckily we have no problem distorting and cherry picking facts and twisting logic into pretzels.
One very common bit of bad logic is called special pleading. I think it is common because it is so insidious – it creeps up on us unaware. Special pleading is the process of inventing a special reason to explain away inconvenient evidence or the lack of predicted evidence.
Take, for example, ESP research.
Keep reading: NeuroLogica Blog » ESP Special Pleading.
This is the next installment in a series of articles being written by a fellow blogger. His name is Muertos and he’s one of the best thinkers in the blogging world.
Mason I. Bilderberg
This is the second installment in a series of articles entitled “Confessions of a Disinformation Agent.” For the introduction and Chapter I, go here.
On the morning of September 11, 2001, I got up very early, at five o’clock. I was working on a novel, and, as I was usually too tired to write when I got home, I started doing it in the early mornings before going to work. At this time I lived alone in an apartment in the central city. I got up, showered, and spent about a half hour writing. At 6:45 AM—Pacific time—as I was making breakfast, my phone rang. Instantly I knew it was bad news. No one ever calls at 6:45 AM with good news. I picked up. It was a friend of mine. (Not the same one who almost caught TWA 800.) “Have you seen the news?” he said. I said no. He replied, “Someone tried to kill the President!” That was how it was reported to me. Oh, and there was the small detail of the World Trade Center towers being on fire after planes had been crashed into them.
I switched on the TV. This was about 9:45 AM, after both towers had been struck, but just before the first of them collapsed. Like almost everyone else in America, I watched in rapt horror. I’ll never forget seeing the first of the towers collapse into a cloud of dust. I also remember seeing the little black specks of people jumping from the towers before they fell. That’s one of the most horrifying sights I’ve ever seen—even on TV—and one that will stick with me forever. Mind you, I watched the 1986 Challenger explosion live, and I also witnessed the infamous Budd Dwyer suicide as it happened. Neither of those horrible events could touch September 11.