Tag Archives: Unnatural Acts that can improve your thinking

The Halo Effect

via Unnatural Acts that can improve your thinking

The halo effect refers to a bias whereby the perception of a positive trait in a person or product positively influences further judgments about other traits of that person, or about other products from the same manufacturer. One of the more common halo effects is the judgment that a good-looking person is intelligent and amiable.

There is also a reverse halo effect whereby perception of a negative or undesirable trait in individuals, brands, or other things influences further negative judgments about the traits of that individual, brand, etc. If a person “looks evil” or “looks guilty” you may judge anything he says or does with suspicion; eventually you may feel confident that you have confirmed your first impression with solid evidence when, in fact, your evidence is completely tainted and conditioned by your first impression. The hope that the halo effect will influence a judge or jury is one reason some criminal lawyers might like their clients to be clean-shaven and dressed neatly when they appear at trial.

The phrase was coined by psychologist Edward Thorndike in 1920 to describe the way commanding officers rated their soldiers. He found that officers usually judged their men as being either good or bad “right across the board. There was little mixing of traits; few people were said to be good in one respect but bad in another.”* The old saying that first impressions make lasting impressions is at the heart of the halo effect. If a soldier made a good (or bad) first impression on his commanding officer, that impression would influence the officer’s judgment of future behavior. It is  very unlikely that given a group of soldiers every one of them would be totally good or totally bad at everything, but the evaluations seemed to indicate that this was the case. More likely, however, the earlier perceptions either positively or negatively affected those later perceptions and judgments.

The halo effect seems to ride on the coattails of confirmation bias: once we’ve made a judgment about positive or negative traits, that judgment influences future perceptions so that they confirm our initial judgment.

Some researchers have found evidence that student evaluations of their college instructors are formed and remain stable after only a few minutes or hours in class.  If a student evaluated a teacher highly early on in the course, he or she was likely to rank the teacher highly at the end of the course. Unfortunately, for those teachers who made bad first impressions on the students, their performance over the course of the term would be largely irrelevant to how they would be perceived by their students.

MORE . . .

Anecdotal Evidence (testimonials)

via Unnatural Acts that can improve your thinking

Testimonials and anecdotes are used to support claims in many fields. Advertisers often rely on testimonials to persuade consumers of the effectiveness or value of their products or services. Others use anecdotes to drive home the horror of some alleged activity or the danger of widely used electronic devices like cell phones. In the 1980s and early 1990s, many people, some in law enforcement, claimed that Satanists were abducting and abusing children on a massive scale. The anecdotes involved vivid descriptions of horrible sexual abuse, even the murder of innocent children. The anecdotes were quite convincing, especially when they were repeated on nationally televised programs with popular hosts like Geraldo Rivera. A four-year study in the early 1990s found the allegations of satanic ritual abuse to be without merit. Researchers investigated more than 12,000 accusations and surveyed more than 11,000 psychiatric, social service, and law enforcement personnel. The researchers could find no unequivocal evidence for a single case of satanic cult ritual abuse.

There have also been scares fueled by anecdotes regarding such disparate items as silicone breast implants, cell phones, and vaccinations. In the 1990s many women blamed their cancers and other diseases on breast implants. Talk show hosts like Oprah Winfrey and Jenny Jones presented groups of women who were suffering from cancer or some other serious disease and who had been diagnosed after they’d had breast implants. The stories tugged at the heartstrings and brought tears to many sensitive eyes, but there was no scientific evidence of a causal connection between the implants and any disease. That fact did not prevent lawyers from extorting $4.25 billion from implant manufacturers. Marcia Angell, former executive editor of the New England Journal of Medicine, brought the wrath of feminist hell upon herself in 1992 when she wrote an editorial challenging the Food and Drug Administration’s decision to ban the manufacture of silicone breast implants. The scientific evidence wasn’t there to justify the ban. She eventually wrote a book describing the fiasco: Science on Trial: The Clash of Medical Evidence and the Law in the Breast Implant Case. The scientific evidence is now in: the implants don’t cause cancer or other diseases, and the FDA has lifted its ban. When the data were collected, they showed that women with silicone breast implants did not suffer cancer or any other disease at a significantly higher rate than women who had not had implants.

The public fear that cellphones might be causing brain tumors was first aroused not by scientists but by a talk show host. On January 23, 1993, Larry King’s guest was David Reynard, who announced that he and his wife Susan had sued NEC and GTE on the grounds that the cellphone David gave Susan caused his wife’s brain tumor. There was nothing but junk science to back up her claim, plus the fact that . . .

MORE . . .

Psychic Tricks, Fraud and Forer

Forer Effect

via Unnatural Acts that can improve your thinking

The Forer effect refers to the tendency of people to rate sets of statements as highly accurate for them personally even though the statements were not made about them personally and could apply to many people.

Psychologist Bertram R. Forer (1914-2000) found that people tend to accept vague and general personality descriptions as uniquely applicable to themselves without realizing that the same description could be applied to many people. Consider the following as if it were given to you as an evaluation of your personality.

You have a need for other people to like and admire you, and yet you tend to be critical of yourself. While you have some personality weaknesses you are generally able to compensate for them. You have considerable unused capacity that you have not turned to your advantage. Disciplined and self-controlled on the outside, you tend to be worrisome and insecure on the inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You also pride yourself as an independent thinker; and do not accept others’ statements without satisfactory proof. But you have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, and sociable, while at other times you are introverted, wary, and reserved. Some of your aspirations tend to be rather unrealistic.

Forer gave a personality test to his students, ignored their answers, and gave each student the above evaluation (taken from a newsstand astrology column). He asked them to rate the evaluation on a scale of 0 to 5, with “5” meaning the recipient felt the evaluation was an “excellent” assessment and “4” meaning the assessment was “good.” The class average evaluation was 4.26. That was in 1948. The test has been repeated hundreds of times with psychology students and the average is still around 4.2 out of 5, or 84% accurate.
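
As a rough illustration of the arithmetic behind that figure, here is a minimal sketch in Python; the individual ratings are hypothetical numbers invented for the example, not Forer’s actual data.

```python
# Hypothetical ratings (0-5) that a class of 15 students might give the same
# canned personality sketch; illustrative numbers, not Forer's actual data.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 4, 3, 5, 4, 4]

mean_rating = sum(ratings) / len(ratings)
print(f"Average rating: {mean_rating:.2f} out of 5")   # 4.20 out of 5
print(f"As a percentage: {mean_rating / 5:.0%}")       # 84%
# Every student rated the identical, one-size-fits-all description,
# yet the class average suggests it was a near-perfect personal fit.
```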

In short, Forer convinced people he could successfully read their character. His accuracy amazed his subjects, though his personality analysis was taken from a newsstand astrology column and was presented to people without regard to their sun sign. The Forer effect seems to explain, in part at least, why so many people think that pseudosciences “work”. Astrology, astrotherapy, biorhythms, cartomancy, chiromancy, the enneagram, fortune telling, graphology, rumpology, etc., seem to work because they seem to provide accurate personality analyses. Scientific studies of these pseudosciences demonstrate that they are not valid personality assessment tools, yet each has many satisfied customers who are convinced they are accurate.

The most common explanations given to account for the Forer effect are in terms of hope, wishful thinking, vanity, and the tendency to try to make sense out of experience. Forer’s own explanation was in terms of human gullibility. People tend to accept claims about themselves in proportion to their desire that the claims be true rather than in proportion to the empirical accuracy of the claims as measured by some non-subjective standard. We tend to accept questionable, even false statements about ourselves, if we deem them positive or flattering enough. We will often give very liberal interpretations to vague or inconsistent claims about ourselves in order to make sense out of the claims. Subjects who seek counseling from psychics, mediums, fortune tellers, mind readers, graphologists, etc., will often ignore false or questionable claims and, in many cases, by their own words or actions, will provide most of the information they erroneously attribute to a pseudoscientific counselor. Many such subjects often feel their counselors have provided them with profound and personal information. Such subjective validation, however, is of little scientific value.

MORE . . .

James Randi’s fiery takedown of psychic fraud

via http://www.ted.com

Legendary skeptic James Randi takes a fatal dose of homeopathic sleeping pills onstage, kicking off a searing 18-minute indictment of irrational beliefs. He throws out a challenge to the world’s psychics: Prove what you do is real, and I’ll give you a million dollars. (No takers yet.)

The Tricks

via Project Barnum

• Cold Reading

Making vague statements that will fit most people if they want them to

Cold reading is a series of techniques employed by psychics, mediums, and mentalists to manipulate the customer (the sitter) into believing that the psychic can read their mind, or that the medium is in contact with a dead relative or friend.

A cold reading will involve things that are called ‘Forer statements’ (or Barnum statements), which are designed to encourage the sitter to fill in the gaps in the information being given. Though these statements may appear to be specific, they are really very open-ended and vague and could apply to almost anyone. Experiments have shown how similar statements can be taken personally even when issued to dozens of people at the same time!

Some examples of such statements would be:

  • “I sense that you are sometimes insecure, especially with people you don’t know very well.”
  • “You work with computers”
  • “You’re having problems with a friend or relative”

Here is ‘psychic’ James Van Praagh demonstrating what appears to be a very embarrassing cold reading:

• Rainbow Ruse

Ticking all potential boxes by making all-encompassing descriptions

Similar to Forer statements is the “rainbow ruse”, which involves a statement that covers all possibilities and often describes somebody as being two completely different types of person at the same time. Here are some examples:

  • “Most of the time you are positive and cheerful, but there has been a time in the past when you were very upset.”
  • “You are a very kind and considerate person, but occasionally you feel deep-seated anger.”
  • “I would say that you are mostly shy and quiet, but when the mood strikes you, you can easily become the centre of attention.”

• Hot/Warm Reading

Using information gained before the show about the audience

MORE . . .

More via Granite State Skeptics: learn how psychics work by reading the Psychic Pamphlet.

Magical Thinking

via Unnatural Acts that can improve your thinking

Magical thinking is “a fundamental dimension of a child’s thinking.”
–Zusne and Jones

Magical thinking is a belief in the interconnectedness of all things through forces and powers that transcend physical connections. Magical thinking invests special powers and forces in things and sees them as symbols on various levels. According to anthropologist Dr. Phillips Stevens Jr., “the vast majority of the world’s peoples … believe that there are real connections between the symbol and its referent, and that some real and potentially measurable power flows between them.” He believes there is a neurobiological basis for this, though the specific content of any symbol is culturally determined. (“Magical Thinking in Complementary and Alternative Medicine,” Skeptical Inquirer, November/December 2001.)

One of the driving principles of magical thinking is the notion that things that resemble each other are causally connected in some way that defies scientific testing (the so-called law of similarity, that like produces like, that effect resembles cause). Another driving principle of magical thinking is the belief that “things that have been either in physical contact or in spatial or temporal association with other things retain a connection after they are separated” (the so-called law of contagion) (James George Frazer, The Golden Bough: A Study in Magic and Religion; Stevens). Think of relics of saints that are supposed to transfer spiritual energy. Think of psychic detectives claiming that they can get information about a missing person by touching an object that belongs to the person (psychometry). Or think of the pet psychic who claims she can read your dog’s mind by looking at a photo of the dog. Or think of Rupert Sheldrake’s morphic resonance, the idea that there are mysterious telepathy-type interconnections between organisms and collective memories within species. (Coincidentally, Sheldrake also studies psychic dogs.)

MORE . . .

Cognitive Dissonance

via Unnatural Acts that can improve your thinking

Cognitive dissonance is a theory of human motivation that asserts that it is psychologically uncomfortable to hold contradictory cognitions. The theory is that dissonance, being unpleasant, motivates a person to change his cognition, attitude, or behavior. This theory was first explored in detail by social psychologist Leon Festinger, who described it this way:

Dissonance and consonance are relations among cognitions, that is, among opinions, beliefs, knowledge of the environment, and knowledge of one’s own actions and feelings. Two opinions, or beliefs, or items of knowledge are dissonant with each other if they do not fit together; that is, if they are inconsistent, or if, considering only the particular two items, one does not follow from the other (Festinger 1956: 25).

He argued that there are three ways to deal with cognitive dissonance. He did not consider these mutually exclusive.

  1. One may try to change one or more of the beliefs, opinions, or behaviors involved in the dissonance;
  2. One may try to acquire new information or beliefs that will increase the existing consonance and thus cause the total dissonance to be reduced; or,
  3. One may try to forget or reduce the importance of those cognitions that are in a dissonant relationship (Festinger 1956: 25-26).

For example, people who smoke know smoking is a bad habit. Some rationalize their behavior by looking on the bright side: They tell themselves that smoking helps keep the weight down and that there is a greater threat to health from being overweight than from smoking. Others quit smoking. Most of us are clever enough to come up with ad hoc hypotheses or rationalizations to save cherished notions. Why we can’t apply this cleverness more competently is not explained by noting that we are led to rationalize because we are trying to reduce or eliminate cognitive dissonance. Different people deal with psychological discomfort in different ways. Some ways are clearly more reasonable than others. So, why do some people react to dissonance with cognitive competence, while others respond with cognitive incompetence?

Cognitive dissonance has been called “the mind controller’s best friend” (Levine 2003: 202). Yet, a cursory examination of cognitive dissonance reveals that it is not the dissonance, but how people deal with it, that would be of interest to someone trying to control others when the evidence seems against them.

MORE . . .

Hindsight Bias

via Unnatural Acts that can improve your thinking

“The mind that makes up narratives about the past is a sense-making organ. When an unpredicted event occurs, we immediately adjust our view of the world to accommodate the surprise.”–Daniel Kahneman

Hindsight bias is the tendency to construct one’s memory after the fact (or interpret the meaning of something said in the past) according to currently known facts and one’s current beliefs. In this way, one appears to make the past consistent with the present and more predictive or predictable than it actually was. When a surprise event occurs and you say “I knew it all along,” you probably didn’t. Hindsight bias may be kicking in.

Hindsight bias accounts for the tendency of believers in prophecies and psychic predictions to retrofit events to past oracular claims, however vague or obscure (retroactive clairvoyance). For example, after the Challenger space shuttle disaster that killed seven U.S. astronauts on January 28, 1986, hindsight bias was used by followers of Nostradamus to claim that he had predicted it in the following verse:

D’humain troupeau neuf seront mis à part,
De jugement & conseil separés:
Leur sort sera divisé en départ,
Kappa, Thita, Lambda mors bannis égarés.

From the human flock nine will be sent away,
Separated from judgment and counsel:
Their fate will be sealed on departure
Kappa, Thita, Lambda the banished dead err (I.81).

Of course, to make the obscene retrodiction complete, Nostradamus’s minions would have to speculate that teacher-astronaut Christa McAuliffe was pregnant with twins to make nine the total in the “flock.” The belief that one can predict the future is often due to little more than the power of hindsight bias.

Hindsight bias also seems to account for the tendency of many people to think they can explain events that weren’t predicted after the events have happened. It is unacceptable to many people to think that a major event like a respected Wall Street investment manager running a Ponzi scheme that cost people perhaps as much as $50 billion wasn’t predictable. If only somebody had paid attention to this and that detail, Bernard Madoff could never have pulled it off. What is true is that a major impact event like this can easily be explained after the fact. The explanations may satisfy people and lead them to believe that they now understand how such an event happened, but there is no way to know whether collecting many facts and using them to explain what occurred will help prevent a similar event from happening in the future.

Why do we engage in hindsight bias? There are several reasons.

MORE . . .

Recency Bias

via Unnatural Acts that can improve your thinking

Recency bias is the tendency to think that trends and patterns we observe in the recent past will continue in the future. Predicting the future in the short term, even for highly changeable events like the weather or the stock market, according to events in the recent past, works fine much of the time. Predicting the future in the long term according to what has recently occurred has been shown to be no more accurate than flipping a coin in many fields, including meteorology, economics, investments, technology assessment, demography, futurology, and organizational planning (Sherden, The Fortune Sellers).

Doesn’t it strike you as odd that, with all the intelligence gathering supposedly going on, such things as the breakup of the Soviet Union, the crumbling of the Berlin Wall, the former head of Sinn Fein meeting with the Queen of England, the worldwide economic collapse of recent years, the so-called “Arab Spring,” the recent attacks on U.S. embassies in several Muslim countries, and a host of other significant historical events were not predicted by the experts? Wait, you say. So-and-so predicted this or that. Was it a lucky guess or was the prediction based on knowledge and skill? If the latter, we’d expect not just one correct prediction out of thousands, but a better track record than, say, flipping a coin. Find one expert who’s consistently right about anything and we still have a problem: how can we be sure that this sharpshooter isn’t just lucky? If thousands of people are making predictions, chance alone tells us that a few will make a right call now and then. The odds in favor of prediction success diminish the more events we bring in, but even someone who seems to defy the odds might be the one in a million that gets lucky with a string of guesses. You flip the coin enough times and once in a while you will get seven heads in a row. It’s not expected, but it is predicted by the laws of chance. Likewise with predicting how many hurricanes we’ll have next year or what stocks to buy or sell this year.
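
The coin-flip point can be made concrete with a quick simulation, a sketch rather than anything from the original article; the number of forecasters and predictions are made-up parameters. Give thousands of “pundits” nothing but coin flips and count how many produce an impressive-looking streak.

```python
import random

random.seed(1)  # reproducible illustration

def longest_streak(hits):
    """Length of the longest run of consecutive correct calls."""
    best = current = 0
    for hit in hits:
        current = current + 1 if hit else 0
        best = max(best, current)
    return best

n_forecasters = 10_000   # hypothetical crowd of pundits
n_calls = 50             # each makes 50 yes/no predictions by coin flip

streaks = [
    longest_streak(random.random() < 0.5 for _ in range(n_calls))
    for _ in range(n_forecasters)
]

lucky = sum(1 for s in streaks if s >= 7)
print(f"Forecasters with 7+ correct calls in a row: {lucky} of {n_forecasters}")
# Plenty of 'hot hands' show up even though every call was pure chance.
```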

MORE . . .

Shoehorning

via Unnatural Acts that can improve your thinking

Shoehorning is the process of force-fitting some current affair into one’s personal, political, or religious agenda. So-called psychics frequently shoehorn events to fit vague statements they made in the past. This is an extremely safe procedure, since they can’t be proven wrong and many people aren’t aware of how easy it is to make something look like confirmation of a claim after the fact, especially if you give them wide latitude in making the shoe fit. It is common, for example, for the defenders of such things as the Bible Code or the “prophecies” of Nostradamus to shoehorn events to the texts, thereby giving the illusion that the texts were accurate predictions.

A classic example of psychic shoehorning is the case of Jeane Dixon. In 1956 she told Parade magazine: “As for the 1960 election Mrs. Dixon thinks it will be dominated by labor and won by a Democrat. But he will be assassinated or die in office though not necessarily in his first term.” John F. Kennedy was elected and was assassinated in his first term. This fact was shoehorned to fit her broad prediction and her reputation was made as the psychic who predicted JFK’s violent death. In 1960 she apparently forgot her earlier prediction, because she then predicted that JFK would fail to win the presidency. Many psychic detectives take advantage of shoehorning their vague and ambiguous predictions to events in an effort to make themselves seem more insightful than they really are.

Court TV exploited the interest in so-called psychic detectives with a series of programs, one featuring Greta Alexander. She said that a body had been dumped where there was a dog barking. The letter “s” would play an important role and there was hair separated from the body. She felt certain the body was in a specific area, although searchers found only a dead animal. She asked to see a palm print of the suspect—her specialty—and the detective brought one. She said that a man with a bad hand would find the body. Then searchers found a headless corpse, with the head and a wig nearby. The man who found it had a deformed left hand.* The letter ‘s’ can be retrofitted to zillions of things. Many scenarios could be shoehorned to fit “hair separated from the body” and “bad hand.” (Fans of psychics will overlook the fact that Alexander’s reference to the bad hand was supposedly made after looking at the palm print of the victim.)

MORE . . .

Confabulation

via Unnatural Acts that can improve your thinking: Confabulation

Have you ever told a story that you embellished by putting yourself at the center when you knew that you weren’t even there? Or have you ever been absolutely sure you remembered something correctly, only to be shown incontrovertible evidence that your memory was wrong? No, of course not. But you probably know or have heard of somebody else who juiced up a story with made-up details or whose confidence in his memory was shown to be undeserved by evidence that his memory was false.

Confabulation is an unconscious process of creating a narrative that is believed to be true by the narrator but is demonstrably false. The term is popular in psychiatric circles to describe narratives of patients with brain damage or a psychiatric disorder who make statements about what they perceive or remember. The narratives are known to be either completely fictional or in great part fantasy, but they are believed to be true by the patients.

Neurologist Oliver Sacks writes of a patient with a brain disorder that prevented him from forming new memories. Even though “Mr. Thompson” could not remember who Sacks was, each time Sacks visited him he created a fictional narrative about their previous encounters. Sometimes Sacks was a butcher Thompson knew when he worked as a grocer. A few minutes later, he’d recognize Sacks as a customer and create a new fictional narrative. Sacks described Thompson’s confabulations as an attempt to make meaning out of perceptions that he could only relate to events in long-term memory.

You might think: poor fellow; he has to construct his memories and fill in the blank parts with stuff he makes up. Yes, he does. But so do you, and so do I. There is an overwhelming amount of scientific evidence on memory that shows memories are constructed by all of us and that the construction is a mixture of fact and fiction. Something similar is true for perception. Our perceptions are constructions that are a mixture of sense data processed by the brain and other data that the brain supplies to fill in the blanks.

There is now a growing body of scientific research showing that confabulation is not restricted to psychiatric patients or gifted fantasizers who believe they were abducted by aliens for reproductive surgery. The evidence shows that many of the narratives each of us produces daily to explain how we feel, why we did something, or why we made the judgments we did are confabulations, mixtures of fact and fiction that we believe to be completely true.

This research should give us pause. Many of us accuse others of making stuff up when they present arguments that are demonstrably full of false or questionable claims, but it’s possible that people who make stuff up aren’t even aware of it. They might really believe the falsehoods they utter.

MORE . . .

Confirmation Bias

via 59ways.blogspot.com

Confirmation bias refers to a type of selective thinking whereby one tends to notice and look for what confirms one’s beliefs, and to ignore, not look for, or undervalue the relevance of what contradicts one’s beliefs. For example, if you believe that during a full moon there is an increase in admissions to the emergency room where you work, you will take notice of admissions during a full moon but be inattentive to the moon when admissions occur during other nights of the month. A tendency to do this over time unjustifiably strengthens your belief in the relationship between the full moon and accidents and other lunar effects.
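
A small simulation sketches how this works; the admission numbers and the “memorability” threshold are illustrative assumptions, not data from any study. Admissions are generated with no lunar effect at all, yet an observer who remembers only the busy full-moon nights ends up with an inflated impression of them.

```python
import random

random.seed(0)

nights = 365
full_moon = [n % 29 == 0 for n in range(nights)]               # ~13 full-moon nights
admissions = [random.randint(5, 15) for _ in range(nights)]    # no lunar effect built in

# Unbiased comparison: average admissions on full-moon vs. other nights.
fm = [a for a, f in zip(admissions, full_moon) if f]
other = [a for a, f in zip(admissions, full_moon) if not f]
print(f"Full-moon average: {sum(fm) / len(fm):.1f}")
print(f"Other-night average: {sum(other) / len(other):.1f}")

# Biased recall: only busy full-moon nights (>= 12 admissions) are noticed,
# remembered, and retold; quiet full-moon nights are simply forgotten.
recalled = [a for a in fm if a >= 12]
if recalled:
    print(f"Average of remembered full-moon nights: {sum(recalled) / len(recalled):.1f}")
# The remembered average is inflated, so the full moon seems to matter
# even though the data contain no lunar effect at all.
```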

This tendency to give more attention and weight to data that support our beliefs than we do to contrary data is especially pernicious when our beliefs are little more than prejudices. If our beliefs are firmly established on solid evidence and valid confirmatory experiments, the tendency to give more attention and weight to data that fit with our beliefs should not lead us astray as a rule. Of course, if we become blinded to evidence truly refuting a favored hypothesis, we have crossed the line from reasonableness to closed-mindedness.

Numerous studies have demonstrated that people generally give an excessive amount of value to confirmatory information, that is, to positive or supportive data. The “most likely reason for the excessive influence of confirmatory information is that it is easier to deal with cognitively” (Thomas Gilovich, How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life). It is much easier to see how data support a position than it is to see how they might count against the position. Consider a typical ESP experiment or a seemingly clairvoyant dream: Successes are often unambiguous or data are easily massaged to count as successes, while negative instances require intellectual effort to even see them as negative or to consider them as significant. The tendency to give more attention and weight to the positive and the confirmatory has been shown to influence memory. When digging into our memories for data relevant to a position, we are more likely to recall data that confirms the position.

Researchers are sometimes guilty of confirmation bias by setting up experiments or framing their data in ways that will tend to confirm their hypotheses.

More: Unnatural Acts that can improve your thinking: confirmation bias.

Illusion of Skill

by Robert T. Carroll

The illusion of skill refers to the belief that skill, not chance or luck, accounts for the accuracy of predictions of things that are unpredictable, such as the long-term weather forecasts one finds in farmers’ almanacs and the predictions of market gurus about the long-term ups and downs of the stock market. The illusion of skill also accounts for the apparent accuracy of remote viewers. Given all the guesses remote viewers make about what they claim to see telepathically, chance alone would account for some of those guesses being somewhat accurate. Much of the accuracy ascribed to remote viewers, however, is due to the liberal and generous interpretations, by the viewers themselves or by “experts,” of their vague and ambiguous descriptions of places or things. Also, subjective validation accounts for the illusion of skill of “experts” in such fields as palm reading, mediumship, astrology, and criminal profiling.

Stock gurus–people who predict the rise and fall of the price of stocks and have large numbers of people who act on their predictions–are essentially part of an entire industry “built largely on an illusion of skill” (Daniel Kahneman, Thinking, Fast and Slow, p. 212). No market guru has gone broke selling advice, however, despite the fact that market newsletters are–in the words of William A. Sherden–“the modern day equivalent of farmers’ almanacs” (The Fortune Sellers, p. 102). In 1994, the Hulbert Financial Digest found that over a five-year period only one out of 108 market-timing newsletters beat the market. You might think that that one did so because of skill, but you’d be wrong. Chance alone would predict that more than one out of 108 would beat the market.
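
That last claim is easy to check with a toy simulation; the 8% market return and the noise level are hypothetical parameters, not the Hulbert data. Give 108 newsletters zero skill, just the market return plus random noise, and count how many come out ahead after five years.

```python
import random

random.seed(42)

def five_year_growth(annual_returns):
    """Compound a sequence of annual returns into total growth."""
    growth = 1.0
    for r in annual_returns:
        growth *= 1.0 + r
    return growth

# Hypothetical parameters (not the Hulbert data): the market gains 8% a year;
# each newsletter earns the market return plus zero-mean noise, i.e. no skill.
market_growth = five_year_growth([0.08] * 5)

n_newsletters = 108
beat_market = 0
for _ in range(n_newsletters):
    returns = [0.08 + random.gauss(0.0, 0.10) for _ in range(5)]
    if five_year_growth(returns) > market_growth:
        beat_market += 1

print(f"Newsletters beating the market on luck alone: {beat_market} of {n_newsletters}")
# With zero skill built in, far more than one of the 108 comes out ahead,
# so a lone 'winner' over five years is not evidence of skill.
```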

Keep Reading: Unnatural Acts that can improve your thinking: illusion of skill.
