by David J. Cox, MSB, BCBA-D (RethinkFirst and Endicott College)
Section Editor’s comment: I might have the story wrong, but I believe it was the iconic Victor Laties who used to require new members of his research team to successfully shape a chicken before being allowed to work with the laboratory animals. The logic was that chickens, being not the intellectually swiftest of animals, demand a shaping process that is precise, else the process would quickly go off the rails. Once developed, the needed precision would generalize to other experiences. In this guest post, David Cox explains that chickens are not the only unexpected source of professional inspiration. David explains how learning computer programming taught him life lessons that have proven valuable in research, in practice and, indeed, in forging his overall world view. I think of David’s reflections as a concrete illustration of the principle advanced by that great Relational Frame Theorist, Louis Pasteur, who asserted that “Chance favors the prepared mind.” In other words, the more different things you know about, the more tools you have for deriving relations that apply to new problems. From this perspective, the biggest career mistakes you can make are to ignore things because they’re not behavior analysis, and to be too “chicken” (sorry!) to learn things that seem difficult (like coding).
Certain experiences change us in more profound ways than others. Conceptually, this might be translated as certain stimulus-stimulus pairings or response-stimulus pairings leading to greater transformation of stimulus functions, the emergence of pivotal behaviors, or the emergence of behavioral cusps. For me, one of those experiences was learning to code—writing instructions that computers or other hardware implement.
Who knew learning to code would make me a better clinician, scientist, and human being?
Given the role technology currently plays in society, I think everyone should be exposed to code, even if only in a minor way. Technology seems to comprise an increasing proportion of the environmental stimuli humans interact with each day. Technological devices (and their associated stimuli) have become a critical component of the functional analysis of modern human behavior, in toto. And those who can manipulate technological stimuli in humans’ lives can produce behavior change at a scale I presume the founders of behavior analysis never thought possible.
Beyond the technical skillset and opportunity for impact, however, learning to code offers life lessons. Three of these, I believe, have made me a better behavior scientist and a better human. They’re also behavioral patterns I’ve noticed develop in others I interact with who learn to code.
Life Lesson 1. “It’s About Getting It Right, Not Being Right”
The first lesson is getting comfortable with making errors. I started to learn to code at the very beginning of my PhD program (Go Gators!). At the time, I had been in behavior analysis for about a decade and was fairly well versed in the field. Though far from an expert, I was as skilled as ten years might allow someone to be.
Growing up, I was never interested in how computers and phones worked. I only used technology functionally to do homework, talk with friends, look up information about sports and music, watch sports and music on TV, and so on. That is, I had absolutely zero prerequisite skills when I decided to learn how to build human operant experiments that run on a computer.
What’s great about coding is that (most) errors stop you dead in your tracks. You get a big red pop-up with a message telling you exactly what is wrong and why your code won’t work. Behaviorally, the feedback system is beautiful: immediate feedback and an error message telling you exactly where the error is and what issue the computer ran into. And, when your code runs through without issue, you get to see the result immediately. It’s great!
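For readers who have never coded, here is roughly what that feedback looks like. This is a deliberately broken, made-up one-liner in Python (the variable name is my own invention), followed by the kind of message the interpreter prints the instant you run it:

```python
# A deliberately broken line: mixing a text string with a number.
reinforcers_delivered = "3" + 2

# Running this halts immediately, and Python reports the file, the line number,
# and the exact problem, e.g.:
#   TypeError: can only concatenate str (not "int") to str
```

No waiting weeks for a reviewer, no silent drift: the contingency is immediate, explicit, and points you straight at the behavior that needs to change.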
Being brand new to coding, I got a lot of error messages (and still do). It was an extremely dense schedule of punishment. I wasn’t used to that. As a BCBA, I had been making clinical decisions out in the wild for years and all my clients made progress toward the goals they had set for themselves. Sure, things weren’t perfect and could have been improved. But the decisions I made always felt like they were in the right direction. But were they? Was there a better way to help my clients? Were errors in my decision-making passing silently such that no one (including myself) was any the wiser?
Learning to code highlighted for me the value of explicit and immediate feedback, even when that feedback is aversive, and the value of getting comfortable with being bad at something new. Further, making dozens of explicit errors, day in and day out, improved three aspects of my repertoire as a whole.
Benefit 1: Change in Feedback Function from Aversive to Appetitive: Though emotional responding was high early on (I hated seeing that stupid error message pop up), you eventually get used to it. You learn to dissect the feedback and identify what alternative pattern of behavior leads to the reinforcer of your code running through.
This fundamentally changed the functional nature of feedback about mistakes. Before, feedback often signaled potential punishment and elicited emotional responding. Now, feedback signals reinforcement for emitting alternative behaviors and for being able to do cooler things than I could previously. That’s a huge functional shift with far-reaching consequences that permeate my professional life as I get feedback on workplace performance, writing submitted to journals, presentations I deliver, how I teach courses, and so on. Feedback now equals opportunity.
Benefit 2: Improved Creativity and Problem Solving: The second benefit was that coding arranged explicit contingencies for practicing creativity and problem solving. Those who have coded know the error messages are sometimes cryptic and offer little advice on where to start. Solving them requires trying new things, learning how people have solved similar situations, experimenting to see what works, and thinking differently about your approach.
Relatedly, just as there are “a thousand ways to load a dishwasher,” there are many ways to get a computer to do the exact same thing. Viewing how others solved similar problems in their code highlighted this reality above and beyond just coding. And, often, it was hard to say that one approach was always better than another. Sometimes approaches are simply different, and that’s okay.
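A trivial, made-up Python illustration of the point: three different ways to compute the same mean, all equally correct, none obviously “the” right one.

```python
import statistics

# Made-up data: durations (in minutes) of four sessions.
session_durations = [12.5, 9.0, 14.25, 11.0]

# Approach 1: an explicit loop.
total = 0.0
for duration in session_durations:
    total += duration
mean_loop = total / len(session_durations)

# Approach 2: built-in helpers.
mean_builtin = sum(session_durations) / len(session_durations)

# Approach 3: the standard library's statistics module.
mean_stats = statistics.mean(session_durations)

# All three approaches produce the same answer.
assert mean_loop == mean_builtin == mean_stats
```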
Benefit 3: Learning to Let Good Be Good Enough: The above doesn’t mean that all code that runs is perfect code. There will always be ways it can be improved. This might involve making the code run faster, abstracting code chunks to make them more reusable, or refactoring code for simplicity and readability. But, to borrow adages likely familiar to readers: don’t let “perfect be the enemy of good,” remember that “a good dissertation is a done dissertation,” and “if you’re not embarrassed by the first version of your product, you’ve shipped too late.” If the code runs and does what it needs to do, it’s okay if it’s not perfect, and it’s okay to leave it alone until there’s a justifiable reason to make a change. Good is often good enough.
In short, learning to code got me comfortable with making errors and using feedback to creatively explore solutions in ways I may never have otherwise. Coding explicitly arranges contingencies that make it more reinforcing to get things right, to recognize when good is good enough, and to abandon trying to be right based on my assumptions or preferences for how things should work.
Life Lesson 2. “The world is the world
Delicate and subtler than all our imaginings
More magical and unqualified than all our thoughts”
At the time I began to code, I was a decade into going as deep as I could with what is known about operant and respondent conditioning. I like to know how things work. And, though the behavior analytic literature is certainly large, I believed that dogged and disciplined study over the course of decades would allow me to explain, even if only theoretically, how any pattern of behavior had developed or might develop.
When I began to code, I figured I would do the same thing. So I set out to learn how programming languages worked; exactly how the code I wrote interacted with the physical material that made up my computer; and how it interfaced with the Internet such that my experiments could be put online.
Trying to learn how everything worked with computers and coding lasted about two weeks. The complexity was mind-boggling. I would have had to rededicate my career to computer science just to learn how a “high-level” language is compiled down to the binary that computers run on, let alone anything else listed in the previous paragraph.
I was, obviously, naïve. These technologies had been built over decades by hundreds of thousands (millions?) of people. This taught me to trust. Trust in other scientists and their expertise. Others had developed a method for me to write code in Python; others had developed a way for it to be compiled down to binary; others had developed a way for it to be sent over the Internet and show up on computer screens thousands of miles away; others had developed a way for participants’ behavior to be tracked along with the timing of stimulus presentations; and still others had developed a way for that data to be sent back over the Internet to me so I could analyze it.
To do what I wanted to do, I had to put my full trust in people I had never met. In no universe could I ever know everything needed to understand how everything in the previous paragraph worked. And, that was okay. Learning to code got me comfortable with saying “I don’t know”, but in a deeper way.
I don’t know many things about how operant and respondent behavior work. But, like many of us, given my learning history and interests, I could probably learn and understand them if I invested the time. That is, things known by “the field of behavior analysis but that I do not know personally” still feel within reach.
Learning to code showed me there were topics and sciences fundamentally relevant to the study of the behavior of biological organisms whose workings I would likely never learn. I had no choice but to trust scientists in other fields. This carried, for me, two benefits.
Benefit 1: Learning to Be Strategic with My Time: Learning to trust scientists doing science in other disciplines taught me to be strategic with my time and with what I focus on learning. Rather than feeling like I must know everything, I could focus on learning exactly what I needed to learn to get the next project done. What experimental question do I have? What thing do I want to learn about behavior-environment interactions? And how can I be efficiently tactical in learning what I need to answer those questions, no more and no less? This has made me more focused professionally and personally. It also forces me to analyze daily why I do what I do.
Benefit 2: Appreciating the Beauty of the Universe: Learning to trust scientists doing science in other disciplines has also helped me appreciate the breathtaking beauty of our universe. I think it’s easy to look on with curiosity, excitement, and awe at what astrophysicists learn about our universe as they analyze data from the James Webb Space Telescope. In so doing, it’s easy to feel comfortable that we will never know everything they know, to trust what they tell us about their data, and simply be an enthusiastic spectator at the science they do.
It’s harder to do this with sciences that feel closer to my own expertise. For some reason, neuroscientists, speech-language pathologists, occupational therapists, and educators didn’t get the same benefit of the doubt I afforded other scientists, even though they, too, know more about their topics than I likely ever will.
The behavior of biological organisms, in toto, is much more complex than any field can tackle alone. When a neuroscientist talks about the neural pathways associated with the delivery of a reinforcer, I have no reason not to trust that they know what they are talking about. Will it impact what I do in my own work? Maybe, maybe not. But appreciating that they are curiously investigating a part of the story I am not has led me to seek out a richer understanding of biological organisms than I likely would have otherwise.
In short, learning to code helped me recognize it’s turtles all the way down and in just about every direction. It’s simply impossible for me to know everything about all facets of the universe that have a functional impact on biological organisms and their behavior. When I learned to recognize that we are the ones who “carve nature at its joints”, it pushed me toward fully appreciating Science, not just behavior science.
Life Lesson 3. “Put My Money Where My Mouth Is”
My first intellectual love was philosophy. The reasons for such are well beyond this set of musings (though, buy me a beer at the next conference and I’ll gladly chat about it). I find it particularly rewarding to tie together many observations collected across time and space using textual stimuli and in a way that can change how you behave in that very moment and forever. It’s almost magical, really, what verbal behavior can do.
Nevertheless, philosophy and theory don’t always lead to practical solutions. To be fair, asking philosophy and theory to do that is asking them to go beyond their function: to abstract generalizable claims from the details of many situations. But, because theoretical explanations have a different function than engineering practical solutions, they might be assumed to explain more than they actually do.
Learning to code forces you to put pen to paper and to make precise and explicit what you may never have had to make explicit before. This has had two benefits for me as a scientist and human being.
Benefit 1: Better Clarity Around My Scope of Competence: As a practitioner and researcher, I was used to the practical nature of what I did. I had to specify exactly what I was planning to manipulate in the environment and the expected direction in which behavior would change. Learning to code taught me that I had been glossing over much of the mechanics, the processes, on which it all worked.
In many situations that was okay. My job wasn’t to flesh out THE complete set of everything involved, how every last detail worked and interacted, and how it all should aggregate to determine the precise level of behavior change. My job was at a different level of detail.
Learning to code showed me just how much of that detail I had been missing. Coding requires you make everything explicit. You must specify exactly what is involved, how it works, how it interacts with other components of the system, the order things should occur, and how it all aggregates to cause a desired output. In repeatedly practicing this, I started to gain a better understanding of exactly what I knew well as well as what I didn’t.
Benefit 2: Making “Unknown Unknowns” Into “Known Unknowns”: What I also quickly discovered was how little I actually knew about how operant and respondent behavior worked, moment-by-moment, behavior-by-behavior, reinforcer-by-reinforcer, and extended across time and space.
It was extremely humbling to be forced to make explicit exactly how stimuli will change on a computer screen moment-by-moment; exactly how to automate data collection on the occurrence of nuanced behavioral topographies; how to precisely define things like response stability and variability to automate condition changes; how to precisely define associated reinforcement contingencies; the exact patterns of behavior that subtle changes to the above would be expected to produce; and how my decisions about everything above would impact and change my results.
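To make that concrete, here is a minimal Python sketch of just one of those decisions: a stability criterion used to automate a condition change. Every number and name in it (the window of five sessions, the 10% tolerance, the function name itself) is a choice I would have to make explicit and defend; none of it comes from a published procedure.

```python
def is_stable(response_rates, window=5, tolerance=0.10):
    """Return True if the last `window` response rates all fall within
    `tolerance` (as a proportion) of their own mean."""
    if len(response_rates) < window:
        return False  # not enough sessions yet to judge stability
    recent = response_rates[-window:]
    mean_rate = sum(recent) / window
    return all(abs(rate - mean_rate) <= tolerance * mean_rate for rate in recent)

# Example: decide after each session whether to switch conditions.
rates = [42.0, 45.5, 44.0, 43.2, 44.8, 43.9]
if is_stable(rates):
    print("Responding is stable; switch to the next condition.")
else:
    print("Keep running the current condition.")
```

Writing even this small function forces a dozen tacit assumptions out into the open, which is exactly the point.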
But this carried a benefit in addition to the tastiness that comes with a slice of humble pie. Learning to code so as to manipulate and analyze behavior-environment contingencies also made explicit for me what I didn’t know. It turned unknown-unknowns into known-unknowns. That kind of information I can act on.
In short, learning to code helped me get better at translating back-and-forth among behavioral theory(ies), experimental preparations, and the patterns of environmental and behavioral data expected if I actually understand how the things I am manipulating interact deterministically. Theoretically, I had been able to talk comprehensively about how behavior worked. But, when coding forced me to make explicit much of what I claimed theoretically, I realized there were pieces missing from my understanding. Learning to code helped me appreciate the extent to which this was true; accept that behavior-environment relations are more complex than I will likely ever know; and recognize that it’s okay to get it right enough to find a practical answer.
Cool Stories, Dave. But Is Coding Necessary?
Much of the above might also be chalked up to the natural progression of a young researcher facing the complex reality of the topics with which they are enamored. Maybe I would have learned all of the above without coding entering my life, and I wonder what other experiences might lead to the same behavioral changes. Unfortunately, there is no opportunity to do a within-subject analysis on this question. And, as noted above, coding explicitly arranged these contingencies for me in a way I doubt would have occurred otherwise. Regardless, I can say that learning to code saved my intellectual soul from an alternate reality I am happy not to be experiencing today.