There’s a new study out testing the predictive acumen of astrologers. As you probably know, astrology holds that the configuration of astronomical bodies at birth dictates people’s personalities and life experiences. As you probably also know, these predictions have been tested many times, and the result is always the same: Astrology is bogus. What’s new about the recent test is that 100 astrologers, ranging from novice to “expert,” were involved in designing it, and thus agreed that its results would be informative (astrologers have accused past tests of funny business from biased skeptics).
News reports of the test have emphasized how humiliating these results are for the astrologer-participants (the inset shows one headline-writer’s take), and indeed for all astrology practitioners. But this conclusion ENTIRELY misses the point of science.
By way of explanation, let’s examine a historical example. It’s the early 1990s, and the phenomenon called facilitated communication (FC) is sweeping through developmental centers, special education classrooms, and families of people with intellectual disabilities. As you probably know, FC posits that with the right kind of loving support from a “facilitator,” supposedly nonverbal individuals can use a keyboard to produce verbal behavior that’s often unexpectedly, incredibly, prodigiously sophisticated.
By the way, the story of FC is told dramatically in “Prisoners of Silence,” an episode of the PBS series Frontline (you can watch the whole episode here, and if you haven’t, you really, really should).
At one critical juncture in “Prisoners,” staff at New York’s O.D. Heck Developmental Center are practicing FC with their clients — but also are aware of a growing controversy. A few small studies have suggested that FC is merely an example of the inadvertent social influence that’s sometimes called the Clever Hans Effect (this short, clear video explains the name).
In short, the new research coming out in the early ’90s is suggesting that facilitators, not clients, are responsible for all of the communication produced through FC. The O.D. Heck staff respect this research, but they cannot imagine that they are making the same mistakes as other facilitators. So, as documented in “Prisoners of Silence,” they devise their own, careful, double-blind test, with the input of the facilitators. And guess what? They find zero evidence for nonverbal clients typing anything meaningful. It’s all the work of unwitting facilitators.
What happens next is the critical part for our purposes.
Watch “Prisoners,” and keep your eyes glued on O.D. Heck staff member Marian Pitsas, who’s nominally a bit player in the drama. She’s an enthusiastic adopter and promoter of FC who says (at around 5:55 of the video), “I thought it was wonderful. I thought at last we were going to find out how to help these people communicate.” But by the time the O.D. Heck results are in (around 31:40), you see a very different Marian Pitsas. She is unsettled, embarrassed, bewildered… but she looks straight into the camera and, without flinching, says:
It was devastating. To see the data there in black and white in front of you… There was no arguing it. It was clear cut… I knew I couldn’t argue with those results.
Marian Pitsas is the Coolest Person in the World. Keep in mind that, as a species, human beings are pretty awful, under any circumstances, at admitting they’re wrong. They do even worse when there’s an emotional investment and when a cherished belief is challenged by cold, hard, objective evidence. This is, in fact, the entire reason science was invented — to formalize an evidence-driven way of thinking that does not come naturally to human brains.
I teach a research methods course to freshmen and sophomores who, let’s just say, typically don’t arrive as devotees of science. The course is less about the how-to of science and more about persuading my students that there’s a need for doing something different (and considerably more effortful) than what they feel in their gut. Boiled down to their essence, here are three cornerstones of my semester-long argument:
- The more strongly you believe something, the more important it is to challenge that idea with a formal test.
- The test has to be built so you know in advance what results will say the belief is right, and what results will say the belief is wrong.
- What you believe going forward has to be dictated by what the test results say, no matter how you feel about that. In other words, it’s more important to be informed than to be right.
Marian Pitsas was no scientist, but boy, did she ever check off all three items on that list!
- She willingly helped to devise the O.D. Heck experiment that put to the test something she thought needed no proof, because her experience using it with clients had already convinced her it worked.
- She agreed up front about what different kinds of evidence would mean.
- Then, at potential risk to her reputation and self-image, she stuck to her guns when the data came in and let her beliefs follow the data. She put being informed ahead of being right.
And then, to boot, she made her “embarrassment” complete and let Frontline share her saga with the world (for which she and the rest of the O.D. Heck team were rewarded with considerable public scorn from FC devotees).
To me, Marian Pitsas is one of the all-time great role models of critical thinking.
Can we now return to those supposedly “embarrassed” astrologers? They sure as heck shouldn’t feel ashamed that they helped to devise a rigorous test of something they believed in. And it was brave, not foolish, for them to consider, before seeing the results, what evidence does and doesn’t support their cherished belief. It’s never embarrassing to subject an idea to empirical test, regardless of whether you hate or love the idea.
The linchpin of this whole thing concerns what happens now that results are in. Will those same astrologers follow the evidence they helped to create, and change their views on astrology? I haven’t seen any reporting on THAT yet, but there’s reason to think evidence won’t carry the day, because this is the juncture at which people are most likely to stiffen their spines.
I remember a dissertation many years ago in which a student tested the effectiveness of a new therapeutic technique about which she was very excited. It was a really nice dissertation: clean design, well executed, with multiple levels and types of evidence. And in the dissertation defense the student methodically ticked off every single piece of evidence, all of which said the same thing: The new technique was as effective as doing nothing. When asked what she thought of the technique now that she’d seen the results, the student replied, without a hint of irony or embarrassment, “I plan to keep using it. I don’t care what the data say. I believe it works.”
Groan.
Now let’s flash back to “Prisoners of Silence” one last time and check out FC Guru Douglas Biklen, who established Syracuse University’s Facilitated Communication Institute (and who, for his efforts, was promoted to Dean of Education 🙄).
When confronted with the growing body of carefully controlled studies that, without exception, depict FC as bogus, Biklen (around 52:30) rather defiantly asserts:
It’s very easy to fail in one’s effort to demonstrate something. It’s usually more difficult to be successful…. It almost doesn’t matter how many instances of failed studies we have. What we need are… instances where the person succeeded.
In other words: “I plan to keep using it. I don’t care what the data say. I believe it works.”
It would be soooo nice if we could imagine Douglas Biklen as some kind of dull-witted outlier, a one-of-a-kind buffoon who somehow stumbled into his notoriety. But in fact he’s a pretty typical human being who wears the same blinders everyone does about the things that make them feel passionate. It’s telling that, 30+ years after FC was completely discredited (by scientific standards, anyway), it remains in common use (and is even financed by public money 🙄). This wouldn’t be possible if Biklen were atypical. Like it or not, FC is a service for which there’s clear demand.
I’ll predict a similar trajectory moving forward for astrology. It’ll be with us for the long haul not because astrologers and their followers are unusually stubborn or thick-headed, but rather because they are typical people. Which may or may not be something to be embarrassed about.
Remember, the whole raison d’etre for science is to acknowledge our shared limitations — to set up structures of discovery that help us avoid pitfalls in thinking and belief. Science is founded on the notion that, as H.M. Ward observed, “Being normal is overrated.” But let’s allow that if critical thinking were easy, everyone would be Marian Pitsas. The thing we’ve yet to figure out is how, short of sending everyone to graduate school, to teach more people to be more like Marian Pitsas. Which sounds like a great job for a science of behavior.**
Postscripts
- Another general theme I emphasize to my Research Methods students is that, in the applied world, believing bogus ideas always, always hurts people. “Prisoners” amply demonstrates this (see here too), and shows the O.D. Heck staff abandoning FC to pivot to more constructive, if less colorful, modes of helping. But of course you can’t get to this point unless you open yourself up to the possibility that a belief could be wrong in the first place. Indeed, one of the defining characteristics of pseudoscience, and its foot soldiers like Biklen, is that its foundational ideas are immune to doubt.
- In case the implications of this post aren’t obvious, what blinders do you think you might be wearing? Few people are as passionate about their beliefs as behavior analysts. Are there “truths” we hold dear for which insufficient supporting evidence exists? How open to persuasion are we, should hard evidence come along showing that the world of behavior doesn’t operate exactly as we’d imagined? To illustrate what I’m asking, check out a 2017 collection of papers in Perspectives on Behavior Science, which call into question whether reinforcement “strengthens” behavior — even the very definition of reinforcement as you probably learned it in your first course on behavior principles. Though there’s a lot of evidence that we’ve formulated reinforcement theory wrong, for the most part our journals and textbooks and convention talks continue to chug along in status quo mode, rehashing a framework that was shaped by 1950s to 1970s evidence, but, embarrassingly, hasn’t kept up. Perhaps, in some measure, we are all “behavioral astrologers.”
- **I’m not saying that nobody in behavior analysis has wrestled with how to teach critical thinking. The unmet challenge is devising strategies that are so effective, and rolled out at such scale, that we never again have to encounter Douglas Biklens or recalcitrant astrologers.