We’ve all heard about the supposed relationship between confidence and knowledge – but is it true? Two researchers think they’ve found the answer.
Most improbable coincidences likely result from the play of random events. The very nature of randomness ensures that combing random data will yield some pattern.
By Bruce Martin via The Committee for Skeptical Inquiry – CSI
“You don’t believe in telepathy?” My friend, a sober professional, looked askance. “Do you?” I replied. “Of course. So many times I’ve been out for the evening and suddenly became worried about the kids. Upon calling home, I’ve learned one is sick, hurt himself, or having nightmares. How else can you explain it?”
Such episodes have happened to us all and it’s common to hear the words, “It couldn’t be just coincidence.” Today the explanation many people reach for involves mental telepathy or psychic stirrings. But should we leap so readily into the arms of a mystic realm? Could such events result from coincidence after all?
There are two features of coincidences not well known among the public. First, we tend to overlook the powerful reinforcement of coincidences, both waking and in dreams, in our memories. Non-coincidental events do not register in our memories with nearly the same intensity. Second, we fail to realize the extent to which highly improbable events occur daily to everyone. It is not possible to estimate the probabilities of the many paired events that occur in our daily lives, and we often assign coincidences a lesser probability than they deserve.
However, it is possible to calculate the probabilities of some seemingly improbable events with precision. These examples provide clues as to how our expectations fail to agree with reality.
In a random selection of twenty-three persons there is a 50 percent chance that at least two of them celebrate the same birthdate. Who has not been surprised at learning this for the first time? The calculation is straightforward. First find the probability that everyone in a group has a different birthdate (X), then subtract this fraction from one to obtain the probability of at least one shared birthdate in the group (P): P = 1 – X. Probabilities range from 0 to 1, or may be expressed as 0 to 100 percent. For no coincident birthdates, a second person has a choice of 364 days, a third person 363 days, and the nth person 366 – n days. So the probability that all birthdates differ becomes:
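That running product is easy to evaluate directly. Here is a minimal Python sketch of the calculation just described; like the author, it assumes a 365-day year with all birthdates equally likely:

```python
# Probability that at least two of n people share a birthdate.
# X = product of (365 - k)/365 for k = 0..n-1 (all birthdates differ);
# P = 1 - X (at least one shared birthdate).

def shared_birthday_probability(n):
    x = 1.0  # probability that all n birthdates are different
    for k in range(n):
        x *= (365 - k) / 365
    return 1 - x

# For 23 people the probability is just over 50 percent.
print(f"{shared_birthday_probability(23):.3f}")
```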
Have you ever heard ‘evolution’ dismissed as ‘just a theory’? Is a scientific theory no different to the theory that Elvis is still alive? Jim Al-Khalili puts the record straight.
There’s an important difference between a scientific theory and the fanciful theories of an imaginative raconteur, and this quirk of semantics can lead to an all-too-common misconception. In general conversation, a ‘theory’ might simply mean a guess. But a scientific theory must meet a much stricter set of requirements. When scientists discuss theories, they mean comprehensive explanations for things we observe in nature, founded on strong evidence and providing ways to make real-world predictions that can be tested.
While scientific theories aren’t necessarily all accurate or true, they shouldn’t be belittled by their name alone. The theory of natural selection, quantum theory, the theory of general relativity and the germ theory of disease aren’t ‘just theories’. They’re structured explanations of the world around us, and the very foundation of science itself.
Read the blog post to find out more: http://www.rigb.org/blog/2014/novembe…
If you know me, you know I like a good illusion. Exposing flaws in the brain is fun!
Here is a good one from Mighty Optical Illusions
Keep staring at the flashing green dot, and the yellow dots will fade or disappear due to motion-induced blindness.
NASA predicts that we’ll find life outside our planet, and possibly outside our solar system, within a generation. But where exactly, and what type of life? Is it even wise to make contact with extraterrestrials? The search hasn’t been easy, but these questions may not be theoretical much longer. Here are 10 ways the quest for alien life is getting real.
10 • NASA Predicts Alien Life Will Be Found Within 20 Years
In the words of Matt Mountain, director at the Space Telescope Science Institute in Baltimore, “Imagine the moment when the world wakes up, and the human race realizes that its long loneliness in time and space may be over . . . It’s within our grasp to pull off a discovery that will change the world forever.”
Using ground and space-based technology, NASA scientists predict that we’ll find alien life in the Milky Way galaxy within the next 20 years. Launched in 2009, the Kepler Space Telescope (pictured) has helped scientists find thousands of exoplanets (planets outside our solar system). Kepler discovers a planet when it crosses in front of a star, causing a small drop in the star’s brightness.
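The size of the brightness dip that transit searches like Kepler's look for can be estimated from the ratio of the planet's radius to the star's radius. A minimal sketch (the radii below are approximate published values, not figures from this article):

```python
# Estimate the fractional dimming when a planet transits its star.
# The transit depth is roughly (planet radius / star radius)^2.

def transit_depth(r_planet_km, r_star_km):
    """Fraction of starlight blocked during a transit."""
    return (r_planet_km / r_star_km) ** 2

SUN_RADIUS_KM = 696_000  # approximate solar radius

# A Jupiter-sized planet dims a Sun-like star by about 1 percent;
# an Earth-sized planet by only about 0.008 percent.
print(f"Jupiter-like: {transit_depth(69_911, SUN_RADIUS_KM):.4%}")
print(f"Earth-like:   {transit_depth(6_371, SUN_RADIUS_KM):.4%}")
```

The tiny depth of an Earth-sized transit is why Kepler needed such precise, long-duration photometry from space.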
Based on data from Kepler, NASA scientists believe that in our galaxy alone, 100 million planets may be home to alien life. But it’s the upcoming James Webb Space Telescope (scheduled for a 2018 launch) that will first give us the capability to indirectly detect life on other planets. The Webb telescope will search for gases in a planet’s atmosphere that are generated by life. The ultimate goal is to find Earth 2.0, a twin to our own planet.
9 • The Alien Life We Find May Not Be Intelligent
The Webb Telescope and its successors will search for biosignatures in the atmospheres of exoplanets, such as molecular water, oxygen, and carbon dioxide. But even if a biosignature is detected, it won’t tell us whether the life on that exoplanet is intelligent or not. Such alien life may be single-celled organisms like amoebas, rather than complex beings that can communicate with us.
We’re also limited in our search for life by our prejudices and lack of imagination. We assume there must be carbon-based life like us, and that we’re the standard by which intelligence is judged. Explaining this failure in creative thought, Carolyn Porco of the Space Science Institute says, “Scientists don’t go off and think completely wild and crazy things unless they have some evidence that leads them to do that.”
Other scientists such as Peter Ward, coauthor of Rare Earth: Why Complex Life Is Uncommon in the Universe, believe that intelligent alien life will be short-lived. Ward assumes that other species will have global warming, too many people, no food, and eventual chaos that destroys their civilizations. He foresees the same for us.
8 • Mars May Have Supported Life Before—And May Again
Mars is currently too cold to house liquid water and support life. But NASA’s Opportunity Rover—an all-terrain vehicle that collects and analyzes rocks on Mars—has shown that about four billion years ago, the planet had fresh water and mud that could have supported life.
Another past source of water and possible life sits on the slopes of Mars’s third-tallest volcano, Arsia Mons. Around 210 million years ago, this volcano erupted beneath a vast glacier. The volcano’s heat caused the ice to melt, forming lakes in the glacier like liquid bubbles in a partially frozen ice cube. The lakes may have existed long enough for microbial life to have formed there.
It’s possible that some simple organisms on Earth may be able to survive on Mars today. Methanogens, for example, use hydrogen and carbon dioxide to produce methane, and don’t need oxygen, organic nutrients, or light. They’re able to survive temperature extremes such as those found during Martian freeze-thaw cycles. So when scientists found methane in Mars’ atmosphere in 2004, they questioned whether methanogens already inhabit the subsurface of Mars.
As we travel to Mars, though, scientists are concerned that we may contaminate the planet’s environment with microorganisms from Earth. That may make it difficult to determine whether life forms found on Mars originated there.
Quantum mechanics is a beautiful and still-controversial idea. It is rightly popular. What’s not right is the way people use it to justify any reality-bending idea in their novels, their TV shows, or their personal philosophies. “Quantum” does not mean anything you want.
“Captain? I’m afraid we’re getting quantum disruptions in the quantum energy field. Should I ready the quantum torpedoes and relay a quantum message to the quantum base?”
I’m not a savvy dissector of movies. All the physics mistakes in Gravity flew right past me, but when you see something done a certain way enough times, it wears on even the most unresponsive of nerves. The word “quantum” is regularly dropped into science fiction in a way that basically amounts to the storyteller thinking, “I bet this is the way smart people in the future talk.” It might be the way smart people talk, but as we see in the next section, it’s also the way people talk when they’re being really stupid. What’s more, it won’t be the way the educated people of the future talk about anything.
Science can move forward in sweeping generalities, or it can move forward by becoming more and more specific. Either way, you probably shouldn’t use “quantum” to describe future science. If you’ve got a universe where starships can move at above light speed, or people can teleport, or the brain can be uploaded into a computer, the term “quantum” may be as antiquated as the term “natural philosophy.”
If the term “quantum” is still around, it won’t be applicable in any specific situation. Let’s put it this way: there are five major types of light scattering – Rayleigh scattering, Mie scattering, Tyndall scattering, Brillouin scattering, and Raman scattering. If you’re an expert working with scattered light in any meaningful way, saying “light is being scattered” isn’t specific enough to get anything done. You have to know what kind of scattering you’re dealing with. Having characters in a spacecraft worry about a “quantum energy field” near them makes about as much sense as having characters in a war say that the enemy is shooting “matter” at them. They’ll need to use specifics to make any progress.
A fun note: the types of light scattering are all named after scientists. Instead of saying “a quantum energy field,” have your characters run into “a Bass-Van-der-Woodsen field,” because in your universe the team of Bass and Van der Woodsen made the discovery, and an educated expert would name the field instead of just saying “it’s quantum.”
It Doesn’t Mean That We Are Psychic
Okay, here’s the big one. Quantum mechanics shows that the world works in unintuitive ways, and, yes, experiments done in quantum mechanics provide results that can be interpreted in ways that lead us to odd conclusions. What quantum mechanics doesn’t do is provide evidence for whatever whack-a-doodle theory any crackpot has at the moment. These theories come in several different flavors.
First there’s quantum entanglement. I have to admit, I have a soft spot for quantum entanglement. Entanglement involves two particles having opposite spins. As long as the spins aren’t measured, they’re undetermined. This doesn’t mean that we don’t know the spins. This means that they are literally . . .
I found this very interesting.
This is the purest water you can find – nothing but pure hydrogen and oxygen atoms – and if you drink it regularly it WILL KILL YOU. This is because pure water will suck the minerals and electrolytes out of your body.
I found this interesting because conspiracists like Alex Jones are always pushing their fellow mad hatters to purify their water. As it turns out, if the water is too pure, it will turn your body room temperature.
Description from SPLOID:
It may sound like a good idea to drink the purest water you can find, but as this video explains, that is not actually the case. Ultra-pure water has no impurities—and therefore no taste, and who wants that?—but in large quantities it will also harm and even kill you, because it will leach the minerals from your body.
Via Skeptical Raptor
One of the most frustrating things I’ve observed in nearly six years of writing (here and in other locations) is that, for those who want to create a negative myth about a new technology (especially in food or medicine), one of the best ways to do it is to mention “chemicals.” And if the chemical sounds unnatural, the assumption is that it is unsafe.
People have demonized monosodium glutamate (MSG), a food additive that makes people run away in terror if a Chinese restaurant doesn’t have a huge flashing sign in neon that says “NO MSG.” Of course, in just about every randomized study about MSG, researchers find no difference in the effects of MSG and non-MSG foods on a random population.
Moreover, MSG has one of the evil chemical names. But MSG really is the salt of glutamic acid, a simple amino acid that is the basis of all proteins in the human body. ALL. When consumed, MSG dissociates into glutamate ions and sodium ions, both of which will be eagerly utilized by human physiology with no ill effects. Excess MSG could be problematic, not because of the glutamate, but because of the excess sodium. MSG is found naturally throughout foods—proteins, soy sauce, and on and on. It’s an absolutely ridiculous belief that people have, and the downside of removing MSG is that there’s less flavor. Because MSG enhances flavor–naturally!
Another current demon chemical of food is high fructose corn syrup (HFCS), which has evolved into one of the biggest pariahs of the food industry. Even the name sounds a bit chemical, unnatural, dangerous. But is it?
That’s where we need to look at the science, because the answers to the questions are both quite complicated and quite simple.
A new Cornell University study examines the origins of food fears, and possible remedies. It’s a survey of 1,008 mothers asking about foods they avoid and why.
Food fears are a common topic on SBM (Science-Based Medicine), likely for several reasons. Humans have an inherent emotion of disgust, which is likely an adaptation to help avoid contaminated or spoiled food. In our modern society this reflex can be tricky, because we do not always have control over the chain of events that leads to food on our plates. Other people grow the food, transport it, process it, and perhaps even cook it.
Modern food technology can also involve many scary sounding substances and unusual processes. As the saying goes, you may not want to know how the sausage is made, as long as the end result is wholesome.
This leads to a second reason for modern food fears – we are living in an age of increasing transparency, partly brought about by the dramatic increase in access to information on the internet. I think ultimately this is a good thing – people are seeing how the sausage is made, which makes it more difficult to hide shady practices. This introduces a new problem, however. If you’re going to inspect the process of making sausage, then you need to know something about sausage-making.
In other words – people are obtaining a great deal of information about food, food ingredients, and manufacturing processes, which is a good thing. However, much of this information is coming from dubious sources – non-professional or academic sources that have not been peer reviewed in any meaningful way and may have ulterior agendas or ideological biases.
Further, it is not easy to understand any complex science, including chemistry and food science, which includes medical studies on ingredient safety. The Food Babe has essentially made a career out of provoking irrational fear of ingredients with unsavory sources and with scary-sounding, long chemical names. Neither of these factors has anything to do with actual food safety, but they make it easy to scare the non-expert.
Specifically this includes so-called “chemophobia” – which is the fear of chemicals. The problem with this “Food Babe”, chemophobic approach is that everything is chemicals. As the banana graphic above demonstrates, the formal chemical names even for everyday food molecules are long and unfamiliar to non-chemists.
The end result is that many people use shortcuts or heuristics to determine what food they trust and what food to avoid. One heuristic is the “natural” false dichotomy – if something seems natural it is healthful, and if it seems synthetic it should be avoided. This heuristic rapidly breaks down on two main counts. The first is that there is no good operational definition of “natural.” All food is altered by humans or processed in some way. Where do you draw the line? The second is that something occurring in nature is no guarantee of safety. Many things in nature will harm or even kill you, and many plants and animals have evolved toxins specifically to harm anything that tries to eat them.
Another food heuristic (one explicitly endorsed by the Food Babe) is the chemophobia heuristic – if it has a long chemical name that is difficult to pronounce, then it’s scary.
Intro by Mason. I. Bilderberg
This is the third video in the Solar Roadways series. If you’re not familiar with this topic, you might want to watch the two previous videos:
If you want some background information, click one of the links above. Otherwise, enjoy :)
From the video description:
So the solar roadways has a page up to ‘answer’ its critics.
Previously I had suspected that they had no technical expertise; now I’m sure.
They claim that asphalt is softer than glass.
They claim LEDs will be fine for roads because of power-hungry LED billboards or LED traffic lights that work in the shade.
People gave them over 2 million dollars for this. You really have to laugh or cry at this.
This video was supported by donations of viewers through Patreon:
Perhaps the most pervasive popular belief that people associate with neuroscience is the idea that we all tend to be either left-brained or right-brained, based on traits like creativity or analytical ability. It’s well known that certain brain functions are localized in various parts of the brain, so it would seem to make sense that some of our individual strengths and weaknesses can be quantified based on brain hemisphere dominance. Quite a few companies even sell products intended to analyze your brain sidedness, promising a variety of personal development benefits. But is this belief — so widely held — good science, terrible science, or some mixture of the two?
Once in college, I took an Honors Colloquium class that was supposed to expose us to a wide variety of ideas and experiences. It was taught by three professors who were presented to us as being the smartest, most well-rounded guys on campus. One of our exercises was to take a test that was supposed to reveal our brain sidedness. The questions were similar to what you might get in a personality test, asking about whether you prefer math or art, privacy or crowds, planning ahead or working on the fly. Some days later we each received our results: a two-axis radar chart, showing a skewed diamond with its left and right corners representing the levels to which we depended on our left brain or right brain, and the top and bottom corners showing the degree to which we depended on our anterior or posterior parts of our brain. It was explained to us that these results could be used to help us self-assess our aptitudes at various skills. Would we be good at sales, leadership, or education? What areas of ourselves could we work on to improve ourselves? What kind of value could we add to an organization with our particular brain map?
Most students had crazily shaped radar charts that showed a strong dependence on one brain area or the other. The horizontal axis had a range of zero to 120 on both sides. We all thought that anyone who had a chart exceeding 100 on either side must be extraordinarily talented according to the popularly believed norms: if you were over 100 on the left you were a math or analytical genius; if you were over 100 on the right you were the next Mozart or Rembrandt. I was very proud that mine was the only one that was symmetrical, 94 on both sides; but after later reflection, I recalled that many of the questions had to do with the classes we were taking. At the time my idea was to double major in computer science and film directing, so I’d given a lot of answers that indicated I was both analytical and creative. I hadn’t had much experience in scientific skepticism at that point, but if I had, I might well have realized that the test was grossly unscientific and relied completely on self-reported answers that might have changed from one day to the next, depending on mood, terminology, context, and many other variables.
Looking at the same test now, I realize that was only the tip of the iceberg. Brain sidedness as a predictor of either preferences or aptitudes is unscientific for a very good reason: it’s virtually entirely wrong.
Let’s go back to that popular public assumption that the left brain is analytic and the right brain is creative, upon which so many of the questions in my Honors Colloquium test focused, and upon which the whole class based the entirety of their analyses of their test results. The natural inference is that people whose left brains are dominant must be good at analytical skills, and people whose right brains are dominant must be good at creative skills. The reverse would also be true: If you are a mathematician or engineer, we might deduce that you are left-brained; and if you’re an artist or poet, that you’re right-brained.
Where did this idea come from?
Whenever the discussion of a dualist vs materialist model of the mind comes up, one common point made to support the dualist position (that the mind is something other than or more than just the functioning of the brain) is that the brain may not be the origin of the mind, but rather is just the receiver. Often an explicit comparison is made to radios or televisions.
The brain as receiver hypothesis, however, is wholly inadequate to explain the relationship between the brain and the mind, as I will explain below.
As an example of the brain-receiver argument, David Eagleman writes in his book Incognito:
As an example, I’ll mention what I’ll call the “radio theory” of brains. Imagine that you are a Kalahari Bushman and that you stumble upon a transistor radio in the sand. You might pick it up, twiddle the knobs, and suddenly, to your surprise, hear voices streaming out of this strange little box. If you’re curious and scientifically minded, you might try to understand what is going on. You might pry off the back cover to discover a little nest of wires. Now let’s say you begin a careful, scientific study of what causes the voices. You notice that each time you pull out the green wire, the voices stop. When you put the wire back on its contact, the voices begin again. The same goes for the red wire. Yanking out the black wire causes the voices to get garbled, and removing the yellow wire reduces the volume to a whisper. You step carefully through all the combinations, and you come to a clear conclusion: the voices depend entirely on the integrity of the circuitry. Change the circuitry and you damage the voices.
He argues that the Bushman might falsely conclude that the wires in the radio produce the voices by some unknown mechanism, because he has no knowledge of electromagnetic radiation and radio technology.
This point also came up several times in the 600+ comments following my post on the Afterlife Debate. Commenter Luoge, for example, wrote:
“But the brain-as-mediator model has not yet been ruled out. We can tamper with a TV set and modify its behaviour just as a neurosurgeon can do with a brain. We can shut down some, or all, of its functioning, and we can stimulate it to show specific responses. And yet no neurologist is known to have thought that the TV studio was inside the TV set.”
There are two reasons to reject the brain-as-mediator model – it does not explain the intimate relationship between brain and mind, and (even if it could) it is entirely unnecessary.
To deal with the latter point first, I have used the example of the light-fairy. When I flip the light switch on my wall, the materialist model holds that I am closing a circuit, allowing electricity to flow through the wires in my wall to a specific appliance (such as a light fixture). That light fixture contains a light bulb which adds resistance to the circuit and uses the electrical energy to heat an element in order to produce light and heat.
One might hypothesize, however, that an invisible light fairy lives in my wall. When I flip the switch the fairy flies to the fixture where it draws energy from the electrical wires, and then creates light and heat that it causes to radiate from the bulb. The light bulb is not producing the light and heat, it is just a conduit for the light fairy’s light and heat.
There is no way you can prove that my light fairy does not exist. It is simply entirely unnecessary, and adds nothing to our understanding of reality. The physics of electrical circuits do a fine job of accounting for the behavior of the light switch and the light. There is no need to invoke light bulb dualism.
The same is true of the brain and the mind, the only difference being that both are a lot more complex.
More importantly, however, we have enough information to rule out the brain-as-receiver model unequivocally.
The examples often given of the radio or TV analogy are very telling. They refer to altering the quality of the reception, the volume, even changing the channel. But those are only the crudest analogies to the relationship between brain and mind.
A more accurate analogy would be this – can you alter the wiring of a TV in order to change the plot of a TV program? Can you change a sitcom into a drama? Can you change the dialogue of the characters? Can you stimulate one of the wires in the TV in order to make one of the on-screen characters twitch?
Well, that is what would be necessary in order for the analogy to hold.
The concept is to build roads out of hexagonal plates of transparent hard material (tempered glass) with built-in solar panels. You can also incorporate heating elements and LED lights. Buried alongside such roads could be a new energy grid for transporting all that solar-generated electricity.
Here is the vision as presented: With such solar freakin’ roadways we could generate much, if not all, of our needed electricity. We could replace telephone poles and hanging wires with buried lines, and upgrade our energy (and even information) grid while we’re at it.
The heating elements could melt ice and snow, removing the need for plowing or salting roads. Potholes or other damage could be easily repaired by simply replacing the hexagonal units, one at a time, as needed.
The LED lights could be programmable, so that all road lines and traffic notices could simply be programmed in, and changed as needed. Parking lots could adjust spaces as needed – making bigger spaces or adding or removing handicapped spaces based on demand. Recreational areas can also be programmed to be different kinds of courts as desired.
Pressure sensitive plates can also be added, allowing for the road to light up, for example, when an animal is walking across the road, providing real-time warning for drivers.
This all certainly sounds great – just like the roadway of the future you always imagined, maybe even better.
OK – now here comes the skepticism. First let me say that I like the concept, and I’m glad some research funding is being dedicated to this idea. I also have no problem with privately crowdfunding the idea. If people want to invest in this, go right ahead. I wish them well.
But this is also a good time to consider all the possible roadblocks (pun intended) and potential problems with such a technology. I am just going to list my questions:
Change blindness is a fascinating phenomenon in which people do not notice even significant changes in an image they are viewing, as long as the change itself occurs out of view. Our visual processing is sensitive to changes that occur in view, but major changes to a scene can occur from one glance to the next without our noticing in many cases.
(See [this] color changing card trick for an example.)
One group of researchers believe they have a working hypothesis as to why our brains might have evolved in this way. Their idea is that the visual system will essentially merge images over a short period of time in order to preserve continuity – a process they call the continuity field. In essence our brains are sacrificing strict accuracy for perceived continuity.
This is in line with other evidence about how our brains work. Continuity seems to be a high priority, and our brains will happily fill in missing details, delete inconsistent details, and even completely fabricate information in order to preserve the illusion of a continuous and consistent narrative of reality.
Visual continuity is important because otherwise the world would appear jittery to us, constantly morphing as shadows play across an object, or our angle of view changes. This could be highly disruptive and distracting.
The researchers also point out that in the real world objects are fairly stable. They don’t pop in and out of existence, or morph into other objects. So insensitivity to such changes would not be a big sacrifice and would be unlikely to affect fitness. If something is actually moving or changing in our visual field we are very sensitive to that, and our attention will be drawn to it.
Neuroscientists, however, can contrive all sorts of impossible scenarios in order to probe our processing of sensory information. We did not evolve with video or photography, but researchers can use this technology to test how our brains process information.
They also give real world examples, such as the movies. There are often continuity errors in movies, missed by the vast majority of movie-goers.
I’m always fascinated by how the mind works. Check out Apollo Robbins, he’s incredible.
Hailed as the greatest pickpocket in the world, Apollo Robbins studies the quirks of human behavior as he steals your watch. In a hilarious demonstration, Robbins samples the buffet of the TEDGlobal 2013 audience, showing how the flaws in our perception make it possible to swipe a wallet and leave it on its owner’s shoulder while they remain clueless.
As many of you know, I LOVE optical illusions. Not just because of their visual impact, but also because of the insights they can give us into the workings of our brain, another favorite topic of mine.
This is one of my favorite YouTube channels because they always post something interesting.
Check it out. :)
After my recent run as a cut-rate media critic with the 10 Million Dollar Bigfoot Bounty, I knew I wasn’t going to be able to resist taking on another television show. Fortunately, the highly anticipated Cosmos: A Spacetime Odyssey has premiered only a few weeks later. Time to cut my media-writing chops on some higher quality fare.
I want to do something different with these than a simple recap. A review would be fun, but every media outlet on the Internet seems to have a review of the new Cosmos series already, so adding one more seemed like a wasted effort. Instead, I’ve decided to take bit of a skeptical eye to the proceedings, watching each episode and identifying the high points and the low points — the Best of and Worst of the new Cosmos, if you will. So without further ado …
First off, the host. I’ve liked Neil deGrasse Tyson ever since the days when he hosted NOVA ScienceNow, and I listen regularly to his Star Talk radio podcast; so I knew going in that he would probably slip nicely into Carl Sagan’s role as narrator of the series. He did not disappoint. While he’s not quite the noble poet of science that Sagan was, he has an affable way of making scientific ideas accessible and entertaining to the lay person, which is exactly what this series needed in 2014 on FOX. This isn’t Tyson as the fierce science advocate, though; instead, this is Tyson playing the straight man to the wonders of the universe.
Second, I’m really fond of the visual narrative motif built into the Spaceship of the Imagination. I think most people can agree that the Spaceship is one of the cornier moments of the original Cosmos; so if it had to come back for this sequel, why not make it a more functional part of the narration? The visual cue of Beneath/Past, In Front/Present, Above/Future is an elegantly simple way to help cue viewers in a series that is so often going to be jumping back and forth. Not paying close attention to every word Tyson says? Well, the shot is moving “Below,” so whatever they’re about to talk about must be science history. Subtle but effective.
Finally, some of the FX visualizations during the “Cosmic Address” segment were pretty cool. The Spaceship flying past the Mars Rover; the visualization of the inside of Saturn’s rings; the dark, icy rogue planet — honestly, I just enjoyed the whole tour of the universe. Opening with the whole concept of scope — and how being tiny doesn’t mean we have to be insignificant — set a fine tone for things to come.
I think the weak point of the first episode was definitely the animated Giordano Bruno story. The whole sequence fell flat for a number of reasons.
I found this to be a great lesson in critical thinking. Check it out :)
How do you investigate hypotheses? Do you seek to confirm your theory – looking for white swans? Or do you try to find black swans? I was startled at how hard it was for people to investigate number sets that didn’t follow their hypotheses, even when their method wasn’t getting them anywhere.
This video was inspired by The Black Swan by Nassim Taleb and filmed by my mum. Thanks mum!
By Charles Q. Choi, ISNS Contributor via Inside Science
(ISNS) — Time travel is often a way to change history in science fiction such as “Back to the Future” and “Looper.” Now researchers suggest a certain kind of time machine could also possess another powerful capability — cloning perfect copies of anything.
However, scientists noted the way these findings violate what is currently known about quantum physics might instead mean such time machines are not possible.
We are all time travelers in that we all move forward in time. However, scientists have suggested it might be possible to move back in time by manipulating the fabric of space and time in our cosmos. All mass distorts space-time, causing the experience of gravity, a bit like how a ball sitting on a rubber sheet would make nearby balls on the sheet roll toward it. Physicists have proposed time machines that could bend the fabric of space and time so much that timelines actually turn back on themselves, forming loops technically known as “closed timelike curves.”
These space-time warps can develop because of wormholes — tunnels that can in theory allow travel anywhere in space and time, or even into another universe. Wormholes are allowed by Einstein’s general theory of relativity, although whether they are practically possible is another matter.
A key limitation of this kind of time machine would be that any traveler using it cannot go back to a time before the device was built. It only permits travel from the future back to any point in time after the machine was constructed.
Scientists have for decades explored what closed timelike curves are capable of if they are possible.
One complication they would encounter is the no-cloning theorem in quantum physics, which basically forbids the creation of identical copies of any particle one does not know everything about to begin with.
In classical physics, one can generate a perfect copy of anything by finding out every detail about it and arranging the same components in the same order. However, in the bizarre world of quantum physics — the best description so far of how reality behaves on its most fundamental levels — one cannot perfectly measure every detail of an object at once. This is related to Heisenberg’s uncertainty principle, which notes that one can perfectly measure either the position or the momentum of a particle, but not both with unlimited accuracy.
Nearly 25 years ago, theoretical physicist David Deutsch at the University of Oxford in England suggested closed timelike curves might actually violate the no-cloning theorem, allowing perfect copies to be constructed of anything. Now scientists reveal this might be true in findings detailed in the Nov. 8 issue of the journal Physical Review Letters.
To understand this research, imagine one builds a time machine in the year 2000. One could place a letter into the device in the year 3000 and pick it up within this box in 2000 or any year between then and 3000. From the perspective of the letter, it goes inside this time machine into one mouth of a wormhole in the future and comes out the other mouth of the wormhole in the past.
However, theoretical physicist Mark Wilde at Louisiana State University, in Baton Rouge, and his colleagues found this scenario may be more complex than previously thought. Instead of the time machine containing just one wormhole, it could possess many wormholes, each at some point in time between the future and the moment of its creation. A letter entering the box in 3000 might exit from a wormhole in 2999, instantaneously go back into that wormhole and emerge in 2998, and so on.
“It’s like there are 1,000 different particles emerging from all the wormholes, but in fact they’re all the same particle you sent in the beginning,” Wilde said. “You just have all these temporary copies emerging from and going back into these wormholes.”
This 56-page document is published by The British Psychological Society and i’ve just begun reading it, so i can’t yet say whether i love it or hate it. But so far i’m liking what i see. It appears to be written in sections – some of which i’ll be skipping – but there looks to be enough great stuff in here to make it worth downloading.
I’m posting an excerpt below for you to read to help you decide whether this is something you might want to peruse.
Have fun. Feel free to provide feedback in the comments section. :)
The PDF can be downloaded here and at the links below.
Mason I. Bilderberg (MIB)
PRINCESS DIANA was murdered by the British Secret Service because she was pregnant with Dodi Fayed’s baby. The government is adding fluoride to our drinking water in an attempt to weaken the population. Barack Obama is a Kenyan-born Muslim and thus ineligible for the Office of the President of the United States.
All of these statements have appeared at some point or other in popular media, debated by politicians, challenged and denied by government departments, and propagated heavily over the internet. A quarter of the UK population believe Diana was assassinated (YouGov, 2012); similarly 25 per cent of Americans think Obama was not born in the US (CBS News/New York Times, 2011). But these statements are not true.
They are examples of a cultural shift in the popularity of the ‘conspiracy theory’; alternative narratives of a world overshadowed by malevolent groups hell-bent on the destruction of civil liberties, freedom and democracy. They suggest that governments, secret religious groups, scientists or private industry (often many of these combined) are responsible for either causing or covering up significant major world events for their own criminal ends.
What is a ‘conspiracy theory’?
Broadly, psychologists feel that conspiracy theories are worth studying because they demonstrate a particular sub-culture of often heavily political activism that is at odds with the mainstream view. Conspiracy theories are unsubstantiated, less plausible alternatives to the mainstream explanation of an event; they assume everything is intended, with malignity. Crucially, they are also epistemically self-insulating in their construction and arguments.
What insight does psychology offer?
Belief systems, cognitive biases and individual differences
But what in particular is it about conspiracy believers that is interesting from a psychological perspective? We find these theories and those who believe them incredibly resilient to counter-argument, driven by an often fanatical belief in their version of the truth, coupled with a heavy political overtone in that their opinions need to be heard. We see an interesting combination of cognitive biases, personality traits and other psychological mechanisms at play in the formation, propagation and belief in conspiracies.
Read more – Download the PDF File
Note from Mason I. Bilderberg –
How many people must be in a group before the odds of two people in the group having the same birthday reach better than 50%?
The number is so surprisingly few that some people attribute a birthday match in such a small group to something akin to a sign from the heavens. They ask, “What are the odds?”
But did you know, in a group of 50 people, there is a 97% statistical chance of two people having the same birthday? Psychics use these types of statistical illusions to give audiences the impression that such occurrences are “a sign from above!”
I’d love to be in a group of 50 people when it is discovered that two people have the same birthday and the psychic asks in a mysterious tone, “What ARE the odds?” . . . just so i can yell back “97% you freakin’ charlatan!”
Wikipedia explains all the math, as does the video below.
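The arithmetic is easy to check for yourself. Here is a minimal Python sketch of the standard calculation (it ignores leap years, as the usual textbook version does): compute the probability that everyone has a different birthday, then subtract from one.

```python
from math import prod

def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday (365-day year)."""
    # P = 1 - (365/365 * 364/365 * ... * (365 - n + 1)/365)
    p_all_different = prod((365 - k) / 365 for k in range(n))
    return 1 - p_all_different

print(round(p_shared_birthday(23), 3))  # 0.507 -- 23 people already pass 50%
print(round(p_shared_birthday(50), 3))  # 0.97  -- 50 people reach about 97%
```

At 23 people the probability already crosses the halfway mark, and at 50 it sits near 97 percent, exactly the figures quoted above.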
By Dave Burton via Burton Systems Software – (burtonsys.com)
Some conspiracy theorists are puzzled about why the WTC towers fell at almost free-fall speed on Sept. 11, 2001. They suppose that the speed of collapse is evidence that something or someone must have destroyed the structural integrity of the undamaged lower part of each tower.
After all, they reason, “only the upper floors of the building were damaged, so why did the lower floors collapse, and why did they fall so fast?”
This web page answers those questions, simply enough for even a conspiracy theorist to comprehend (I hope). I do use some simple math and some very basic physics, but even if you don’t understand that part you should still be able to comprehend the basic reasons that the towers fell so fast.
What the conspiracy theorists apparently don’t understand is the difference between static and dynamic loading. (“Static” means “while at rest,” “dynamic” means “while moving.”)
If you don’t think it can make a difference, consider the effect of a stationary bullet resting on your chest, compared to the effect of a moving bullet striking your chest. The stationary bullet exerts a static load on your chest. A moving bullet exerts a dynamic load.
As a more pertinent example, consider a 110-story building with a roof 1,368 feet high (like the WTC Twin Towers). Each floor is 1368/110 = 12.44 feet high, or approximately 3.8 meters.
Now, suppose that the structural steel on the 80th floor collapses. (Note: I’m using as an example 2 WTC, which was the building that collapsed first.)
The collapse of the 80th floor drops all the floors above (which, together, are equivalent to a 30-story building!) onto the 79th floor, from a height of approximately 12 feet.
Of course, the structure of the lower 79 floors has been holding up the weight of the top 31 floors for many years. (That’s the static load.) So should you expect it to be able to hold that same weight, dropped on it from a height of 12 feet (the dynamic load)?
The answer is, absolutely not!
In countless works of fiction, authors use quantum mechanics to explain things like telepathy, teleportation or the shape of the universe. Why? Tune in to learn more about quantum physics — and why, in some cases, the truth may be stranger than fiction.
In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.
The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”
Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.
But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.
So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?
The first serious attempt to find the truth about our universe came in 2001, when an effort to calculate the resources needed for a universe-size simulation made the prospect seem impossible.
Seth Lloyd, a quantum-mechanical engineer at MIT, estimated the number of “computer operations” our universe has performed since the Big Bang — basically, every event that has ever happened. To repeat them, and generate a perfect facsimile of reality down to the last atom, would take more energy than the universe has.
“The computer would have to be bigger than the universe, and time would tick more slowly in the program than in reality,” says Lloyd. “So why even bother building it?”
But others soon realized that making an imperfect copy of the universe that’s just good enough to fool its inhabitants would take far less computational power. In such a makeshift cosmos, the fine details of the microscopic world and the farthest stars might only be filled in by the programmers on the rare occasions that people study them with scientific equipment. As soon as no one was looking, they’d simply vanish.
In theory, we’d never detect these disappearing features, however, because each time the simulators noticed we were observing them again, they’d sketch them back in.
That realization makes creating virtual universes eerily possible, even for us.
A great debunking. :)
Every now and then i just geek out on science :)
Our brain decides how we perceive everything around us. It informs our decisions, guiding us carefully through the fog that is the world around us . . . except for when it lies to us. You see, our brains are fickle friends and love to play games. Often, what we think is true is actually just our brains messing with us.
Have you ever repeated a word several times and found that, after a while, it started to lose meaning? If you have, you needn’t worry—scientists have studied this phenomenon and call it semantic satiation. Studies found that as you repeat a word, your brain becomes satiated and you start to get confused about what the word even means. You see, normally when you say a word (e.g., “pen”), your brain finds the semantic information for a pen and connects the two things together. However, counter-intuitively, if you repeat the word a number of times in quick succession, your brain becomes less able to connect it with that semantic information each time.
Researchers have found practical uses for this information beyond just amusing themselves with how easily we trick ourselves—by using semantic satiation in a controlled environment, they have been able to help those who stutter, and in one case were able to help someone with coprolalia, the uncontrollable cursing sometimes associated with Tourette’s syndrome, by having him repeat his favorite curse words over and over.
Let’s say you finally get to go on that camping trip you’ve been putting off for a long time. You enjoy a long day of hiking, fishing, and other activities, then go to your tent to get some rest for the next day. When you wake up in the morning, you realize that something is horribly wrong—to be more precise, there is a bear in your tent. You might imagine that the first thing you’d feel is fear, which would result in a rapid heartbeat. But, once again, your brain is deceiving you.
According to the James-Lange theory of emotion, it actually works the other way around. This peripheral theory states that when you see the bear, your heart starts to beat faster, and only then does your brain start to think it must be afraid and send out fear signals. Those who study emotion have not been able to disprove the theory thus far, although some believe emotional responses are more of a loop.
Have you ever had something incredibly terrible yet catchy stuck in your head for days at a time? Well, now you have a name for this horrible phenomenon, which scientists have dubbed an “earworm.” The explanation some scientists give basically involves your brain getting stuck in a loop. You probably remember one verse of whatever catchy song you are stuck with almost perfectly, but don’t know the rest of the song as well. After singing the first verse, your brain tries to move on to the next, but doesn’t know the rest of the song. Because your brain likes to go back to unfinished thoughts, it gets stuck in a loop, continually trying to start again and finish the song. After presumably struggling to get the Spice Girls out of their heads, a group of scientists were determined to find out how to break this spell. After a lot of study, their advice is a sort of Goldilocks philosophy—you need to focus on a cognitive activity that isn’t too easy or too hard. They suggest solving anagrams or reading a novel.
Most of us have strong opinions on issues like cannibalism and incest, with the majority of us considering them to be morally wrong. However, researchers have found that, when asked about these issues, most people’s brains sit there sluggishly, unable to come up with an appropriate response, even though the behaviors in question are considered taboo by most modern societies. This phenomenon is termed moral dumbfounding—quite simply, the subjects were “struck dumb” and unable to properly explain why they felt so strongly about an issue.
One of the scenarios described someone working with a body that was going to be cremated anyway and taking a small chunk of flesh home with her to eat. She made sure to cook it thoroughly to remove any diseases. Another told of an adult brother and sister who were on vacation and decided to get freaky, making sure they used protection. The participants were asked if what these people had done was wrong, then asked to explain why. The researchers found that people felt very strongly that these behaviors were morally wrong, but struggled mightily to verbalize their reasoning. Research has not yet explained why this response occurs. It may be that society’s taboos are simply ingrained into our consciousness so deeply that we feel a powerful moral drive against them even though we cannot logically explain why.
Do you rely on your GPS to get everywhere? Do you even use it to navigate to familiar places? If so, perhaps you might want to consider using it less. It turns out that using GPS is an easy way to lull ourselves into a false sense of security and lose our sense of direction—too much use of GPS actually makes it harder for us to create spatial maps. Even worse, some researchers believe that if we don’t use our spatial abilities regularly, it could lead to a higher risk of early-onset dementia. The researchers suggest that we use GPS only when we don’t know the route, and use it more as a tool than a crutch.
On a more positive note, it turns out that constantly using our spatial abilities makes our brains stronger. London cabbies have to go through an extremely rigorous process to learn their routes, which only cover a 9.5-kilometer (6 mi) radius but include 25,000 streets with 320 separate routes and about 20,000 different points of interest. Researchers studying London cabbies found that not only seasoned veterans but also those who had only just taken the training had an increase in grey matter in the brain. Scientists believe the more important implication of this study is that it shows the human brain is extremely good at adapting well into adulthood.
Free energy is to physics as creationism is to evolutionary biology. Both offer a teaching moment when you try to explain why proponents are so horribly wrong.
Free energy proponents have been abusing the laws of thermodynamics (come to think of it, so have creationists), and more recently quantum effects à la zero-point energy. Now they are distorting a new principle of physics to justify their claims – the Casimir effect. Apparently this was a hot topic at the Breakthrough Energy Conference earlier this month.
Before I get into the specifics, I do want to address the general conspiratorial tones of the free-energy movement. I wonder if anyone influential in the free-energy subculture realizes that their conspiracy-mongering over free energy is perhaps the greatest barrier to their being taken seriously. There is also the fact that they get the science wrong, but if they think they are doing cutting edge science (rather than crank science), then convince us with science and ditch the conspiracy nonsense.
Here is the opening paragraph from a recent blog pushing the Casimir effect as a source of free energy:
Who is benefiting from suppressing scientific research? Whose power and wealth is threatened by access to clean and free energy? Who has the desire to create a system where so few have so much, and so many have so little?
OK – you lost me right there. This is a naive child’s view of the world, where “the adults” form a monolithic inscrutable force controlling the world. When you actually become an adult you may realize that no one has total control. No one and no institution is that competent, powerful, and pervasive. It would take an obviously totalitarian state to exert that much control.
If free energy were real, someone would be making it happen. Ironically the very existence of the free-energy movement proves their own conspiracy theories wrong. If a company could produce a genuine free-energy machine, they would, and they would become the wealthiest company in the world. Further, free energy would improve everyone’s quality of life. No matter who you are, your life would become better with free energy.
Free energy proponents, apparently, would rather believe the world is run by megalomaniacs who are simultaneously brilliant (in executing their conspiracy) and idiotic (in wanting to execute their conspiracy) rather than entertain the possibility that they have the science wrong.
The Casimir Effect
Scientific American has a good quick discussion of what the Casimir effect is. The Casimir effect is related to zero point energy, which refers to the fact that a perfect vacuum in space still contains energy in quantum fluctuations. This is sometimes referred to as the quantum foam, out of which virtual particles are created and destroyed.
This quantum vacuum energy exists as various wavelengths – in fact, infinite wavelengths. When you place two mirrors facing each other in a vacuum, some of these waves will fit in the space between and some will not. This creates a situation in which there is more energy in the vacuum outside the mirrors than between them, which in turn results in a tiny force attracting the two mirrors together.
This effect was predicted by Dutch physicist Hendrik Casimir in 1948, and later confirmed by experimentation. I must emphasize that this force is extremely tiny.
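Just how tiny? For ideal parallel plates, the Casimir pressure has a closed-form expression, F/A = π²ħc / (240a⁴), where a is the plate separation. Here is a quick Python check at a separation of one micrometre:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(a):
    """Attractive pressure (Pa) between ideal parallel plates a metres apart."""
    return math.pi**2 * hbar * c / (240 * a**4)

print(casimir_pressure(1e-6))   # about 1.3e-3 Pa at 1 micrometre
```

Roughly a thousandth of a pascal, a hundred-million times smaller than atmospheric pressure, and the force falls off as the fourth power of the separation.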
Here is where the free-energy gurus, however, have their fun. Our current understanding of quantum effects predicts that there is an infinite amount of this zero-point energy in the vacuum. Imagine if we could somehow tap into that energy – infinite free energy. You can see why this is an exciting idea.
There are two problems with zero-point energy as a source of free energy, however.
If you know me, you’ll know why i love this video – i love optical illusions. Check it out.
Via HondaVideo – YouTube.
Here is how they created these effects:
[ . . . ]
There’s Violet Jessop, who worked as a stewardess on the maiden voyage of the Titanic in 1912, and managed to survive the giant liner’s collision in the North Atlantic with an iceberg — only to take a job as a nurse on the Britannic, which sank in 1916 in the Aegean Sea.
And more recently, there’s the bizarre story of English tourists Jason and Jenny Cairns-Lawrence, who were visiting New York City when Al Qaeda hijackers crashed two planes into the World Trade Center on September 11, 2001, and happened to be in London when the city’s public transportation system was attacked by terrorists in July 2005, and traveled to Mumbai, India in November 2008, just in time to witness a third terrorist attack.
Newspaper writers took to calling them “the world’s unluckiest couple.”
The idea that some people are destined to suffer chronic misfortune is so ingrained in our consciousness that there even have been songs written about it — for example, “Born Under a Bad Sign,” the blues classic recorded by Albert King back in 1967, in which the narrator complains that “if it wasn’t for bad luck, I wouldn’t have no luck at all.”
But is there really such a thing as chronic bad luck, and if so, why do some people seem to be plagued by it?
Psychologists and academic experts in probability and statistics, who’ve studied the phenomenon of bad luck, provide a complicated answer. It is true that in the course of a lifetime, some people have a lot more bad things happen to them than most of us do. But that outcome can be influenced by a variety of factors, including random chance, the actions of other people, and individuals’ own decision-making skills and competence at performing tasks.
But in our minds, it all blends together and forms this thing that we think of as bad luck.
Rami Zwick, a business professor at the University of California-Riverside, points out that the idea of bad luck exists, in part, because most of us don’t have a very good understanding of how the science of probability works.
“There is a difference between individual and aggregate experiences of people in a population,” he explains. If you ask 100 people to flip a coin 100 times, for example, over time, you can expect that the average result for the group will be 50 heads and 50 tails. But within the group, individuals may have more heads than tails, or vice-versa. “If we think of heads as good and tails as bad, a few people will have a sequence of mostly good outcomes, and others will have mostly bad ones.”
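Zwick’s point is easy to demonstrate with a simulation. In the sketch below (seeded for repeatability), 100 people each flip a fair coin 100 times; the group average lands near 50 heads while individuals scatter well above and below it:

```python
import random

random.seed(1)

# 100 people each flip a fair coin 100 times
heads_counts = [sum(random.random() < 0.5 for _ in range(100))
                for _ in range(100)]

average = sum(heads_counts) / len(heads_counts)
print(round(average, 1))                      # close to 50 for the group
print(min(heads_counts), max(heads_counts))   # individuals spread around it
```

The unlucky few at the bottom of that spread did nothing wrong; they simply occupy the tail of a perfectly ordinary distribution.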
By Bahar Gholipour via LiveScience
With anesthetics properly given, very few patients wake up during surgery. However, new findings point to the possibility of a state of mind in which a patient is neither fully conscious nor unconscious, experts say.
This possible third state of consciousness may be a state in which patients can respond to a command but are not disturbed by pain or the surgery, according to Dr. Jaideep Pandit, an anesthetist at St John’s College in England, who discussed the idea today (Sept. 19) at The Annual Congress of the Association of Anaesthetists of Great Britain and Ireland.
Pandit dubbed this state dysanaesthesia, and said the evidence that it exists comes partly from a recent study in which 34 surgical patients were anesthetized and had their whole body paralyzed except for one forearm, allowing them to move their fingers in response to commands or to signify whether they were awake or in pain during surgery.
One-third of patients in the study moved their finger if they were asked to, even though they were under what seemed to be adequate anesthesia, according to the study led by Dr. Ian F. Russell, of Hull Royal Infirmary in England, and published Sept. 12 in the journal Anaesthesia.
“What’s more remarkable is that they only move their fingers if they are asked. None of the patients spontaneously responded to the surgery. They are presumably not in pain,” said Pandit, who wrote an editorial about the study.
Normally, while patients are under anesthesia, doctors continuously monitor them, and administer anesthetic drugs as needed. The goal is to ensure the patient has received adequate medication to remain deeply unconscious during surgery. However, it is debated how reliable the technologies used during surgery to “measure” unconsciousness are.
Not quite related to conspiracies, but i’m fascinated by the brain and all its foibles. For this reason i enjoyed this piece a lot. I hope you do too.
Mason I. Bilderberg (MIB)
Schizophrenics used to see demons and spirits. Now they talk about actors and hidden cameras – and make a lot of sense
Clinical psychiatry papers rarely make much of a splash in the wider media, but it seems appropriate that a paper entitled ‘The Truman Show Delusion: Psychosis in the Global Village’, published in the May 2012 issue of Cognitive Neuropsychiatry, should have caused a global sensation. Its authors, the brothers Joel and Ian Gold, presented a striking series of cases in which individuals had become convinced that they were secretly being filmed for a reality TV show.
In one case, the subject travelled to New York, demanding to see the ‘director’ of the film of his life, and wishing to check whether the World Trade Centre had been destroyed in reality or merely in the movie that was being assembled for his benefit. In another, a journalist who had been hospitalised during a manic episode became convinced that the medical scenario was fake and that he would be awarded a prize for covering the story once the truth was revealed. Another subject was actually working on a reality TV series but came to believe that his fellow crew members were secretly filming him, and was constantly expecting the This-Is-Your-Life moment when the cameras would flip and reveal that he was the true star of the show.
Few commentators were able to resist the idea that these cases — all diagnosed with schizophrenia or bipolar disorder, and treated with antipsychotic medication — were in some sense the tip of the iceberg, exposing a pathology in our culture as a whole. They were taken as extreme examples of a wider modern malaise: an obsession with celebrity turning us all into narcissistic stars of our own lives, or a media-saturated culture warping our sense of reality and blurring the line between fact and fiction. They seemed to capture the zeitgeist perfectly: cautionary tales for an age in which our experience of reality is manicured and customised in subtle and insidious ways, and everything from our junk mail to our online searches discreetly encourages us in the assumption that we are the centre of the universe.
But part of the reason that the Truman Show delusion seems so uncannily in tune with the times is that Hollywood blockbusters now regularly present narratives that, until recently, were confined to psychiatrists’ case notes and the clinical literature on paranoid psychosis. Popular culture hums with stories about technology that secretly observes and controls our thoughts, or in which reality is simulated with virtual constructs or implanted memories, and where the truth can be glimpsed only in distorted dream sequences or chance moments when the mask slips. A couple of decades ago, such beliefs would mark out fictional characters as crazy, more often than not homicidal maniacs. Today, they are more likely to identify a protagonist who, like Jim Carrey’s Truman Burbank, genuinely has stumbled onto a carefully orchestrated secret of which those around him are blandly unaware. These stories obviously resonate with our technology-saturated modernity. What’s less clear is why they so readily adopt a perspective that was, until recently, a hallmark of radical estrangement from reality. Does this suggest that media technologies are making us all paranoid? Or that paranoid delusions suddenly make more sense than they used to?
Imagine you are at a Las Vegas casino and you’re approaching the roulette table. You notice that the last eight numbers were black… so you think to yourself, “Holy smokes, what are the odds of that!” and you bet on red, thinking that the odds of another black number coming up are really small. In fact, you might think that the odds of another black coming up are:
0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 = 0.5⁹ ≈ 0.00195 (the chance of nine blacks in a row: a very tiny number)
Or are they?
The problem is that a roulette wheel – if fairly constructed – has no “memory”. Each spin is independent of the spins before it, so the odds of a red number and a black number are equal on every spin (actually just shy of 50% each, since a wheel has either one or two green spaces: one on the European version, two on the American).
Keeping with our example: a bet on red or black is an outside bet that pays 1 to 1 and covers 18 of the 38 pockets on an American wheel, a probability of 18/38 ≈ 0.474. That’s a far cry from the 0.00195 figure above, a miscalculation roughly 243 times too small. Now your odds of a red coming up aren’t looking so good anymore…
This fallacy is called the Gambler’s Fallacy, and it’s what the city of Las Vegas is built on.
Random events produce clusters like “8 black numbers in a row”, but in the long term, the probability of red or black will even out to its natural average.
The key to your success at the casino? Understand that every individual spin (or “event”) has its own probability which never changes. In this case, 18 in 38.
So the next time you’re at a casino and you see a string of the same color coming up, remember that the odds of that color coming up again are exactly the same as the other color… it might save you a few bucks so you can play a bit longer.
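The post’s central claim – that the wheel has no memory – is easy to check with a quick simulation. This is my own sketch, not from the article; it assumes a fair American wheel (18 red, 18 black, 2 green), and uses a streak of three blacks rather than eight so the streak occurs often enough to measure:

```python
import random

RED, BLACK, GREEN = "red", "black", "green"
TOTAL_POCKETS = 38            # American wheel: 18 red, 18 black, 2 green (0 and 00)

def spin(rng):
    """One independent spin of a fair American wheel."""
    n = rng.randrange(TOTAL_POCKETS)
    if n < 18:
        return RED
    if n < 36:
        return BLACK
    return GREEN

rng = random.Random(42)
spins = [spin(rng) for _ in range(500_000)]

# Chance of red over all spins
overall_red = spins.count(RED) / len(spins)

# Chance of red given that the previous three spins were all black
hits = total = 0
for i in range(3, len(spins)):
    if spins[i - 3:i] == [BLACK] * 3:
        total += 1
        hits += spins[i] == RED

print(f"P(red), all spins:            {overall_red:.4f}")
print(f"P(red | three blacks before): {hits / total:.4f}")
# Both sit near 18/38 ≈ 0.4737: the wheel has no memory.
```

Run it and the two printed probabilities agree to within sampling noise, no matter how long the preceding streak was.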
In my last post, I wrote about how not having enough contextual data can outright boggle the mind. Today, we’re going to look at something else that similarly boggles the mind, albeit one not really related to any linguistic phenomena. It’s an interesting little logical fallacy in the field of statistics known as cum hoc ergo propter hoc, or more commonly, ‘correlation does not prove causation’. Here, we define correlation as ‘when two things tend to occur together’, and causation as ‘when one thing actually causes the other’.
This logical fallacy is great at showing the glaring inaccuracies caused by a lack of data on a specific subject, and how this lack can cause us to reach blindly for (often incorrect) conclusions in the proverbial fogginess of our minds. The comedic factor here is amplified if you forego the law of parsimony (also known as Occam’s razor), which states that among competing explanations, the one requiring the fewest assumptions should be preferred.
Have you ever been in a recording booth or a really quiet place? If you’re in there for a long time your mind begins to create its own sounds. Essentially, you begin to hallucinate due to a lack of external stimuli. This is basically what goes on in the aforementioned logical fallacy: you end up compensating for a lack of data by drawing a perceived (and often inaccurate) connection between the sole items of data you have.
What does this have to do with a language blog? Essentially, it’s a great way of showing how a lack of the background information required for comprehension can yield wildly inaccurate knowledge. Dig this:
Did you know that children with bigger feet are statistically better at spelling? It’s true. Without additional contextual information, I could hypothesize that having a larger foot size means the children would perform better at sports and have better balance while carrying large and cumbersome schoolbags, making them less prone to falling over in bustling school hallways, making them less likely targets for bullies, leading to an inevitable increase in confidence, leading to better scholastic performance, and thereby, better spelling skills!
The truth is, it’s actually because children with larger feet are probably a lot older than children with smaller feet. Duh.
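To make the foot-size example concrete, here is a small Python sketch of my own (the numbers are invented purely for illustration): age drives both foot size and spelling score, so the two correlate strongly overall, yet the correlation nearly vanishes once we compare only children of the same age:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
ages = [random.uniform(6, 12) for _ in range(5000)]           # years
feet = [15 + 1.2 * a + random.gauss(0, 1) for a in ages]      # cm, grows with age
spelling = [10 * a + random.gauss(0, 8) for a in ages]        # score, grows with age

print(f"corr(foot size, spelling), all ages: {pearson(feet, spelling):+.2f}")

# Control for the confounder: compare only children aged 8 to 9
idx = [i for i, a in enumerate(ages) if 8 <= a < 9]
sub_feet = [feet[i] for i in idx]
sub_spell = [spelling[i] for i in idx]
print(f"corr(foot size, spelling), ages 8-9: {pearson(sub_feet, sub_spell):+.2f}")
```

The overall correlation is strong, but within a single age band it collapses toward zero: the “relationship” between feet and spelling was the hidden variable, age, all along.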
Did you know that you are more likely to get cancer if you always wear a seat-belt?
I have been a fan of Dr. Susan Blackmore ever since I read her book In Search of the Light: The Adventures of a Parapsychologist.
One of my favorite topics she writes and talks about is her theory that we don’t have free will. I am fascinated by such a counterintuitive idea. Maybe you will be too.
This video is about an hour long. I haven’t finished watching it yet, but I’m sure I will enjoy it if it’s like all her other discussions.
Today we’re going to point the skeptical eye at popular claims that ordinary radios — such as walkie-talkies, police and emergency radios, and those embedded in devices such as cell phones, wi-fi hubs, and smart utility meters — are dangerous. Some say they cause cancer; some say they present other, more nebulous health risks. How concerned do you need to be that something as ubiquitous as radio could be doing you more harm than good?
This issue rose to the headlines in popular media with a frightening announcement in May of 2011 by the World Health Organization. The press release stated that the International Agency for Research on Cancer (IARC) had placed radiofrequency (RF) in its Group 2B of possible carcinogens due to an increased risk of the brain cancer glioma associated with the use of mobile phones. Unfortunately, very few people actually read the release; most saw only the headline, which presents a highly skewed perspective of what was actually said. As a result, new movements arose worldwide, notably in Canada, calling for certain RF devices to be banned. Canada’s Green party openly called for the elimination of wi-fi computer networks in schools, and many groups have campaigned against the purported health effects of smart meters.
My question to the groups actively campaigning against stuff that’s in Group 2B is “Do you drink coffee?” Most do, and yet coffee is also in Group 2B. So are the crafts of carpentry and joinery. Pickled vegetables, coconut oil, and even the Earth’s magnetic field are in Group 2B. Now, granted, it would be fallacious logic to say that just because these other things sound ordinary and safe, that makes radiofrequency safe; but it is true that the World Health Organization considers them to be similarly risky.
Group 1 is the classification for things that have been found to be carcinogenic. This includes ultraviolet radiation, tobacco, and plutonium.
Group 2A is the classification for probable carcinogens, things that have not yet been found to cause cancer but for which there is good evidence they might. This includes engine exhaust and working in the petroleum industry.
Group 2B is the list of possible carcinogens, which are things that have not been found to cause cancer but for which there is cause to study further. It is a list of items which have not — repeat, not — been found to be carcinogenic. Will they tomorrow? Maybe, but they’re not now, according to what we know so far.
If the World Health Organization is the authority whose word you’re going on, then you should look at what they actually say. Their position paper on radio frequencies and electromagnetic radiation states unequivocally that:
…Current evidence does not confirm the existence of any health consequences from exposure to low level electromagnetic fields.
Nor should we expect such consequences. Radiofrequency is all around us, and always has been. Tune any radio between stations and the static you hear is normal background radiation. About 1% of that static is actually left over from the Big Bang. But just because radiofrequency is natural for all living beings throughout the universe, that doesn’t mean it’s safe. To determine whether something is safe, we look at the data. So let’s look at what we know so far.
The electromagnetic spectrum is pretty simple to understand. It starts at the low end near a frequency of zero, runs up through the radio frequencies, past visible light, and on through gamma rays, with higher and higher frequencies. The frequencies at the lower end are what we call non-ionizing, because they lack sufficient energy to strip electrons and change chemistry. The frequencies at the higher end are ionizing, which makes them damaging to living tissue. The dividing line between the two is the upper end of visible light, where ultraviolet begins. A sunburn is actually tissue damage caused by ionizing radiation; that UV has enough energy to just barely penetrate the outer layer of your skin. As we go even higher, into the X-ray range, the radiation is energetic enough to penetrate all the way through your body, though X-rays can be stopped by the lead-lined blanket they give you. Even higher energy frequencies, like the strongest cosmic rays, can go all the way through the entire planet.
So remember that dividing line. Visible light, like that inside your home, is generally safe as are all the radio frequencies below it. Ultraviolet light, and everything higher, is damaging.
Yet claims persist of harm from non-ionizing radiation, and they’ll often cite studies showing a biological effect from some manifestation of radio. There are only a handful of such studies which are repeatedly cited, in comparison to the more than 25,000 studies surveyed by the WHO that found no reason for concern.
Perhaps the most vocal of all the anti-radio activists is . . .
Science is great, one of the best processes humans have come up with. It has everything to do with how we live long, productive, healthy lives. It is not, however, the be-all and end-all method of how to solve every problem.
I am unabashedly a fan of science. I wholeheartedly recommend it. But lately I’ve been feeling a bit uneasy when science cheerleaders pronounce, “Science will solve everything!”, i.e., just apply science and all will be fixed. Because, SCIENCE! YAAAAYY.
I may get myself into trouble with this post, but as an advocate of science, I still say there is more to thinking and knowing than the scientific method. People who advocate fanatical reliance on science—where all competing methods of gaining knowledge are illegitimate—are practicing scientism.
The “just apply science” plan is an overly simplistic solution that not everyone will automatically buy into. There are other, also valid, ways of evaluating problems. All the world’s problems cannot be solved by throwing science at them. At least not now (and probably never).
Lately, this position has been disputed. There is an ongoing debate in the science/skeptical community regarding philosophy. Is it dead? Does science need it? How does it inform us (if at all)? Can we discuss morals via a scientific basis? You will see heated exchanges about these questions crop up in publications, blogs and in conference discussions. You will also see science placed above the fields of the humanities. Should it be? It’s worth thinking about. So I have been. I assume I’ll be thinking it through for a while because it’s weighty stuff. But, at some point, you have to stop collecting data and taking notes and finally write things down.
For a start, scientism has utility problems. If we need to justify everything with empirical evidence, and then justify that evidence with evidence, and so on, not only do we get bogged down in minutiae, we end up in a scientistic loop which we can’t resolve. There must be a point where we accept a premise as a given – that reality is real, that we aren’t being fooled by a devious creator. See this Peter S. Williams video.
“…it is tempting to infer that all phenomena―including human actions and interaction―can “in principle” be understood ultimately in the language of physics, although for the moment we might settle for biology or neuroscience. This is a great temptation. We should resist it. Even if a process is constituted by the movements of a large number of constituent parts, this does not mean that it can be adequately explained by tracing those motions.”