Artificial intelligence technology has advanced by leaps and bounds recently, opening the door to, as the U.S. Government Accountability Office put it, all sorts of “possibilities and perils.” AI has come so far that it’s available to everyday people through text-generating applications like ChatGPT and image-generating applications like DALL-E, to name just a couple of examples. At times, AI can generate surprisingly human-like outcomes and products. For example, there’s been considerable concern on college campuses about tools like ChatGPT being used to create undetectable bogus “student work.” Speaking very loosely (and misleadingly), this is possible to the extent that AI “thinks like a human.”
THAT, coupled with a long-standing interest in what humans think about my chosen discipline, got me wondering what AI thinks of behavior analysis. So I fed some phrases from behavior analysis into DALL-E, which is “an AI system that can create realistic images and art from a description in natural language.” I chose to look at image generation because it is, metaphorically speaking, subjective. Text-generating tools are essentially information aggregators; they digest tons of print information and then create text based on it, and that text is often quite accurate. For instance, when I entered “experimental analysis of behavior” into ChatGPT it produced this:
Experimental analysis of behavior, often referred to as behavior analysis or behaviorism, is a branch of psychology that focuses on the study of behavior and its underlying principles through controlled experiments. This approach was developed primarily by B.F. Skinner and is rooted in the idea that behavior can be understood and modified by examining the environmental factors that influence it.
That’s not incredibly awful. But visual representations are another matter. To produce an image based on text input requires tapping into what, in humans, are called stimulus relations. How words and images relate is based on learning histories that, to a certain degree, are arbitrary. To illustrate, here’s an interesting report on how artificial intelligence can, apparently, perpetuate racial stereotypes (not an isolated observation; see here). The author asked image-generating AI to create pictures of, for instance, “Black African doctors providing care for white suffering children.” The output, however, almost always showed white doctors helping Black children. Clearly, in the AI’s learning history, and therefore in its current “thinking,” doctor = white and suffering = Black.
So what does AI think of behavior analysis? The examples below illustrate a general pattern in the results I got when offering DALL-E terms and phrases from our discipline: a lot of gibberish words and diagrams. I chuckled at this, in a nervous sort of way, thinking of recurring discussions about how our technical language befuddles people who aren’t behavior analysts. It is in fact often gibberish to them. And of course it’s true that we behavior analysts TOTALLY dig diagrams and flow charts and such (if you doubt me, pick up any random publication focusing on derived stimulus relations).
I felt a little better when, for “applied behavior analysis,” DALL-E at least produced the occasional human being. For instance, check out that young woman (middle row, right). She looks very competent and reasonably caring. But still… her clipboard, of which she seems to be super proud, is crammed with indecipherable scribblings. And the other presumed applied behavior analyst (middle row, left), clearly confused, is jamming a pencil into his eye. Not sure what DALL-E meant to convey with that.
Thinking that DALL-E might have a better impression of “therapy” than of science-y sounding words like “analysis,” next I entered “Acceptance and Commitment Therapy.” That brought lots of human images! But for some reason a number of them showed scenes with a cultish, or at least deeply religious, vibe (below, left). And a lot of the images, I mean a lot, depicted social interactions in which faces were melting or otherwise seriously disfigured (middle, right). This was interesting. While I have always found that wading through the idiosyncratic language of ACT makes me feel as if my face is melting, DALL-E treats this as a physical effect.
One reason people struggle with behavior analysis language is that, as Ogden Lindsley observed, in coining the discipline’s technical vocabulary, B.F. Skinner “sometimes chose words that meant different things to other people than they did to him.” Richard Foxx expanded on this notion with specific examples for which lay connotations diverge from the scientific denotation. For example:
EXTINCTION – The disappearance of a species. Can cause concern for parents when they are told, “We would like to put your son, Jason, on extinction.”
So I asked DALL-E about various terms and phrases incorporating extinction. It dutifully rendered images (below, left and middle) that you’d expect based on Foxx’s definition, although the unicorn (right) is a bit hard to fathom, unless DALL-E is suggesting that unicorns once actually roamed the Earth before departing for that great equine beyond (e.g., Irish Rovers, 1967; cf. Larsen, 2017).
Stepping back from specific problematic words, the way behavior analysts communicate has been described as generally rather cold and clinical compared to everyday communication. Does DALL-E see us the same way?
To find out, I borrowed some titles from articles in Volume 16, Issue 3, of Behavior Analysis in Practice: Bahry et al. (2023) (left-side column, top), Orozco et al. (2023) (middle), and Persicke et al. (2023) (bottom). I entered the titles into DALL-E, which rendered the images in the left-side column. I then entered my own attempts at Plain English translations of the titles, which produced the images in the right-side column.
For the actual titles (left-side column), the results were what I’ve already reported for a lot of other behavior analysis verbiage: lots of gibberish words and incomprehensible diagrams. For the Plain English translations (right-side column), however, DALL-E gave me lots of images of warm, happy human interactions.
In making sense of this contrast, let’s please not denigrate Bahry, Orozco, and Persicke for being lousy title-writers. The authors wrote their titles the way you’re supposed to. If titles tend to be uninviting to all but the few specialists who can decipher their meaning, that’s because they are written for precision rather than accessibility. For what it’s worth, I repeated this exercise with the titles of several other articles and got similar results. The problem here is not with specific articles, or with scientists communicating in a certain way with one another, but rather with the gap between scientific communication and the “perceptions” derived from DALL-E’s natural-language processing.
But so what? Who cares what DALL-E “thinks”? It’s just a computer program, after all, and not a member of the broad community of human stakeholders who influence how behavior analysis is accepted within societal systems.
Ah, but therein lies the point of the present exercise in silliness. AI may be “artificial” in that its “neural architecture” is a digital metaphor for good old biological brains. But the patterns it discerns and uses as the basis for its output are very real. What that output is, of course, depends on what the inputs are.
AI “learns” by examining existing “knowledge,” lots and lots of it, in a rough equivalent to multiple exemplar training. If the inputs are examples of nonhuman phenomena, then the patterns discerned presumably follow the laws of Nature. This is the case, for instance, in the application of AI to drug discovery. The inputs include the molecular structures of many different chemicals, and hence the outputs take the same form. If, however, the inputs are human-created products, the outputs will reflect “laws” of (patterns in) human behavior. This is why, for instance, AI is resistant to depicting doctors as Black people — cultural inputs reflect the world as humans have created it, and in that world a disproportionate number of doctors are white. AI in this sense is a window to the human psyche.
By the same logic, then, whatever AI has “learned” about behavior analysis is also likely to incorporate cultural biases. Those biases are hardly news to us, but maybe DALL-E’s perception of us can serve as a reminder of how thoroughly those biases have diffused throughout society. Behavior analysts are, of course, constantly trying to better their public image, and there are stupider ideas than to propose that we’ll know we’ve stuck the landing when AI portrays us as something more like the prosocial movement we have always sought to be.
In the meantime, some people fear that AI will someday run the world. If that happens before our image repair kicks in, we behavior analysts are in trouble.
Postscripts
(1) An alternative account for everything printed above is that AI is just not very good at a lot of things yet. Some of DALL-E’s output really is nonsensical. For example, I asked DALL-E to depict “DALL-E looking at DALL-E in the mirror” and I always got back images that looked a lot like a troll doll with blue hair. When I asked DALL-E to depict “DALL-E creating images of behavior analysis,” it refused.
(2) I was curious what DALL-E would return if I entered the titles of some of Skinner’s books. The following (text inserted by me) are offered without comment except to say this about the two flies humping: either this is the most astute DALL-E rendering ever, or DALL-E simply mistook “organisms” for “orgasms.” Either way, I’m submitting these to the B.F. Skinner Foundation for use in the next printings of Skinner’s books.