The Dark Side of Unchecked Facts: Behavior Analysis and Media Literacy

Co-authored by Dr. Melissa Swisher, Lecturer, Purdue University

The amount of information on the Internet is staggering. It’s possible to spend the entire day surfing the Internet and not encounter the same information twice. The World Wide Web was initially a convenient way to share new findings among scientists at the European Organization for Nuclear Research (CERN) and evolved to allow any user to share information. With 2.9 billion Internet users in 2014 (McCarthy, 2014), a significant proportion of the population is interacting with digital media. Anything and everything we want to know is available on the Internet, but not all information is factually accurate.

Nikolai Yezhov conferring with Stalin
[2] Image provided under the Public Domain
Some misinformation is accidental, but other misinformation champions a cause: In the Stalin-era Soviet Union, it was common practice to remove former political allies from photographs and pretend they were never there (see picture). To promote media literacy, teachers in Ukraine are using the Learn to Discern program, funded in part by the US and UK embassies, to help students determine when they’re encountering fake news, propaganda, and hate speech (Ingber, 2019). After teachers used the program, students were more likely to detect hate speech and fake news than their peers who did not receive lessons on media literacy. The Learn to Discern program was initially tested in 50 schools but will expand to around 650 schools by 2021. Thankfully, there are many resources for teaching students about fake news (see Alvarez Boyd, 2017; BBC; BookWidgets; PBS lesson plan; TeachHUB).

In behavior analysis, we are also concerned about producing informed digital media consumers. We can do that in two ways: explicitly teaching the difference between sensationalized and regular news, or holding users who share news accountable for its content. We can use discrimination training to teach people how to recognize misinformation when compared to accurate information. We can also use Tsipursky and Morford’s (2018) Pro-Truth Pledge as a public guarantee to Internet users that a poster shares only accurate information.

Discrimination training involves teaching someone to respond to one stimulus and not to respond to another. Typically, responding to the correct stimulus produces a reinforcer, and responding to the incorrect stimulus produces a punisher or nothing at all. At a traffic signal, we have learned to proceed through a green light (correct stimulus) but not a red light (incorrect stimulus), lest we receive a ticket. Although the basic procedure is simple, it’s also versatile. Discrimination training has been used in a variety of situations and with many populations: Keintz, Miguel, Kao, and Finn (2011) taught children with autism the value and names of coins with discrimination training; Ortega and Lovett (2018) taught a man with Down syndrome how to select pictures of kitchen tools based on their names and to read those names on an activity schedule with discrimination training; and Pelaez, Virues-Ortega, and Gewirtz (2012) taught infants to discriminate their mother’s joyful and fearful expressions to determine whether they should reach for an unfamiliar object. We can also provide prompts (e.g., Carp, Peterson, Arkel, Petursdottir, & Ingvarsson, 2012) before the learner makes their choice and arrange different error correction procedures (e.g., Smith, Mruzek, Wheat, & Hughes, 2006) to help them acquire new skills faster.
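To make the contingency concrete, here is a minimal simulation sketch of a discrimination-training trial loop using the traffic-signal example above. The stimulus names, the learner’s simple probability-adjustment rule, and the step size are illustrative assumptions, not part of any study cited here; real discrimination-training procedures are run with people, not code.

```python
import random

def run_training(trials=500, seed=42):
    """Simulate trials where responding to 'green' is reinforced and
    responding to 'red' is not (extinction). Illustrative sketch only."""
    rng = random.Random(seed)
    # Probability that the learner responds to each stimulus (starts at chance)
    p_respond = {"green": 0.5, "red": 0.5}
    step = 0.05  # learning-rate assumption, chosen for illustration

    for _ in range(trials):
        stimulus = rng.choice(["green", "red"])
        responded = rng.random() < p_respond[stimulus]
        if stimulus == "green" and responded:
            # Reinforcer delivered: responding to the correct stimulus strengthens
            p_respond["green"] = min(1.0, p_respond["green"] + step)
        elif stimulus == "red" and responded:
            # No reinforcer: responding to the incorrect stimulus weakens
            p_respond["red"] = max(0.0, p_respond["red"] - step)
    return p_respond

probs = run_training()
```

After enough trials, responding becomes much more likely to the reinforced stimulus than to the unreinforced one, which is the discriminated pattern the procedure aims to produce.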

[3] Image provided courtesy of Frederick Burr Opper under the Public Domain
Once we have identified complete and accurate descriptions (operational definitions; Skinner, 1984) of factual news, we can teach the concept of factual news via discrimination training to learners of all ages. Accurate news might have the following critical variables (e.g., Street & Johnson, 2014): appear on educational and nonprofit websites, report all relevant details, be authored by an expert in that area, give relevant and complete source citations, have a recent date, and contain serious rather than satirical content (IFLA; see also this image with the eight steps that we can use to spot fake news). We also need to examine any potential biases we hold that cloud our interpretation of information, and we can ask an expert (like a librarian) if we need help authenticating material. It’s a good start to interact with reputable news sources (e.g., NPR, PBS, local newspaper), but it’s safer not to simply trust any news outlet or social media platform without some follow-up verification.
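The criteria above can be thought of as a simple checklist. The sketch below turns them into one, purely as an illustration: the wording of the checklist items paraphrases the list in this paragraph, and the idea of requiring every item to be met is an assumption for the example, not a published scoring instrument.

```python
# Checklist items paraphrasing the criteria discussed above (illustrative only)
CRITERIA = [
    "appears on an educational or nonprofit website",
    "reports all relevant details",
    "authored by an expert in the area",
    "gives relevant and complete source citations",
    "has a recent date",
    "serious rather than satirical content",
]

def credibility_check(answers):
    """answers: dict mapping each criterion to True/False.
    Returns (number of criteria met, whether all criteria were met)."""
    met = sum(answers.get(criterion, False) for criterion in CRITERIA)
    return met, met == len(CRITERIA)

# Example: an article that satisfies every criterion
met, all_met = credibility_check({c: True for c in CRITERIA})
```

A checklist like this is no substitute for examining our own biases or consulting an expert such as a librarian, but it makes the discrimination explicit enough to teach and to practice.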

Tsipursky and Morford (2018) suggest that users publicly commit to verifying content, obtaining balanced reports, citing sources, and clarifying information they share on the Internet. People who take the Pro-Truth Pledge also vow to educate others about misinformation that they post and retract or fix any errors. These observable components allow the user’s virtual community to provide a check on individual posting habits as well as prevent the spread of misinformation.

Discrimination training and the Pro-Truth Pledge could easily be incorporated into any media literacy program. Hopefully, by targeting both young and experienced Internet users with these programs, many communities will benefit from the wealth of information on the Internet with less exposure to fake news, propaganda, and hate speech.

Image credits:

  1. Cover image provided courtesy of Andrea Piacquadio under Pexels License
  2. Image provided under the Public Domain
  3. Image provided courtesy of Frederick Burr Opper under the Public Domain