Navigating the Ethical Terrain of Generative AI in Behavior Analysis: A Three-Part Series


Each post in this three-part series will contain a list of suggested guidelines to ponder. Part 1 covered universal considerations important for the ethical use of generative AI across many fields. Part 2, this post, covers more specific concerns directly relevant to behavior analysts and those doing similar research and practice. Part 3 addresses AI in organizations and systems and how this technology could affect organizational dynamics and ethics within a profession.


Part 2: Ethical Considerations When Using Generative AI in Behavior Analytic Research and Practice

Janet S. Twyman1

In Part 1 of this series, I introduced the topic of Generative Artificial Intelligence (GenAI) and its potential for altering behavior analytic research and practice. Using machine learning algorithms that identify patterns in data, GenAI excels at generating new content (e.g., text, images, speech, audio, computer code) as suggested by user prompts. The technology is lauded for its capacity to learn, adapt, and complete multiple and complex tasks across domains, with reports of widespread use in universities, corporations and businesses, schools, hospitals, clinics, and many other settings where behavior analysts conduct research and engage in practice. Part 1 included universal ethical considerations necessary when using GenAI across any human services domain or research involving live subjects, including the need for transparent informed consent, privacy and data security, legal compliance, regulation, user accountability, and recognizing, preventing, and addressing bias.

In Part 2 I will delve into domain-specific considerations for behavior scientists and behavior analysts using GenAI in research and practice. Readers interested in an additional in-depth analysis of this topic are encouraged to read “Starting the Conversation Around the Ethical Use of Artificial Intelligence in Applied Behavior Analysis” by Jennings and Cox (2023), appearing in Behavior Analysis in Practice.

Researchers, clinicians, practitioners, and educators are often bound by the ethical guidelines current in their field or institution. In Applied Behavior Analysis, the Ethics Code for Behavior Analysts (Behavior Analyst Certification Board, 2020) outlines ethical guidelines for ABA practitioners, including their responsibility to clients, competence and training, confidentiality, and related issues. For client responsibility, the code mandates that ABA practitioners prioritize their clients’ best interests, use scientifically supported procedures, and tailor interventions based on each client’s unique circumstances and needs. Additionally, they must obtain informed consent before implementing any intervention and respect the autonomy and choice of the individuals involved. Regarding competence and training, the Code emphasizes that ABA practitioners should only provide services within the boundaries of their competence (based on their education, training, and supervised experience). They must also pursue ongoing professional development to stay current on the latest developments in research and practice. Confidentiality requires that all information obtained during service delivery be protected and respected; doing so fosters trust between practitioners and clients and promotes open communication and cooperation.

The ethical standards provided by the BACB are designed to ensure that clients receive high-quality, effective services that genuinely benefit them. They are also designed to serve as a protective barrier around the client-practitioner relationship and encourage respectful and beneficial practices while discouraging any potential misuse of power or knowledge. When contemplating the intersection of GenAI and ABA, these ethical considerations become even more critical, as the potential for violation may be greater.2

Practical considerations focusing on the implementation and integration of GenAI technologies to enhance outcomes, while maintaining the core goals of behavior analysis, include:

GenAI in Practice

Artificial intelligence already impacts numerous human service sectors by providing informative tips and alerts designed to augment decision-making and reduce human error. Cox and Jennings (2023) provide an overview of AI research and practice within the behavioral health service industry, describing the potential of AI-assisted services for clients from assessment through the culmination of intervention or monitoring. Their article provides examples of how behavior analysts could incorporate AI to improve data analysis, intervention personalization, real-time monitoring during and post-treatment, and administrative efficiencies.

As GenAI continues its foray into behavior analysis, professionals concerned about being replaced might instead think about how research and practice will evolve, perhaps sooner and in different ways than expected (see Twyman & Layng, in review).

One difficulty in applying AI to behavior analytic research and practice is that data collection protocols and outputs are not uniformly structured, and procedures vary somewhat from researcher to researcher or provider to provider. Data may inadvertently contain bias, depending on the population being considered, and as discussed in Part 1, bias in leads to bias out. Hence, while which AI is used is an important aspect of its application, how the AI is used and what is done with the results are directly relevant to its impact. Behavior analysts would benefit from careful consideration of how GenAI tools can be integrated with existing evidence-based practices. How can current methodologies coexist with or be augmented by AI, without replacing the human elements found to be essential in therapeutic, treatment, and educational settings? Perhaps an easier starting point is determining which aspects of GenAI can be utilized to enhance data collection, data analysis, and intervention planning and reporting.

Integration with Existing Methodologies

GenAI could enhance data collection by automating the collection of behavioral data (which can be time-consuming and may be prone to human error) and the analysis of data as it is collected, thus offering real-time insights that might affect a session in progress. It could augment data analysis by identifying complex patterns in large datasets that may not be evident through traditional analysis methods. By using machine learning models to analyze data, behavior analysts might uncover subtle patterns and correlations that result in more informed personalized treatment plans or improved meta-analysis of research or treatment data. Behavior analysts often use historical data and patterns in the data to determine viable interventions and predict treatment outcomes (often referred to as predictive analytics). By integrating AI-driven insights derived from pattern recognition and predictive analytics, behavior analysts could develop more personalized, precise, and effective intervention plans tailored to the unique needs of each client. Additionally, GenAI could be used to simulate various intervention strategies and predict their outcomes (including simulating functional behavior assessments), enabling behavior analysts to assess the potential effectiveness of different approaches before implementing them in real-world settings.

Examples:3

  • GenAI systems equipped with natural language processing capabilities can transcribe therapy sessions in real-time, accurately capturing details that may be missed or forgotten by human note-takers.
  • AI tools could analyze a child’s speech patterns and interactions during a session and immediately flag significant changes or areas needing attention.
  • In a classroom setting, GenAI could monitor student interactions and engagement levels, providing real-time feedback to educators on which students may need additional support or intervention.
  • AI models could simulate the impact of different behavioral interventions on individuals with ADHD, helping clinicians choose the most effective strategies for managing symptoms like impulsivity and inattention.
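A minimal sketch of the real-time flagging idea in the examples above might look like the following Python snippet. The rolling window and the two-standard-deviation threshold are illustrative assumptions, not validated clinical criteria:

```python
from collections import deque

def make_change_flagger(window=10, threshold=2.0):
    """Return a callable that flags an observation deviating markedly
    from the recent baseline (illustrative sketch, not a clinical tool)."""
    history = deque(maxlen=window)

    def observe(value):
        if len(history) < window:
            history.append(value)
            return False  # still building a baseline
        mean = sum(history) / len(history)
        var = sum((x - mean) ** 2 for x in history) / len(history)
        sd = var ** 0.5 or 1.0  # guard against flat (zero-variance) data
        flagged = abs(value - mean) / sd > threshold
        history.append(value)  # slide the window forward
        return flagged

    return observe
```

Fed a steady stream of, say, per-minute counts of a target behavior, the flagger stays quiet until an observation falls well outside the recent baseline, at which point the session lead could be alerted for follow-up.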


Customization, Quality Control, and Monitoring

GenAI tools can adapt to meet the diverse needs of different research questions, unique clients, and varied settings. The tools behavior analysts use should be customizable to specific therapeutic goals and client demographics and allow fine-tuning of inputs and outputs as needed. Just as with non-AI enhanced interventions, protocols should be in place for ongoing quality control and monitoring to ensure that the GenAI tools continue to function as intended and consistently provide reliable and valid outputs. Continuous monitoring may also help identify any drift in the tool’s performance over time or in response to changing data inputs, training data, or algorithms. Regular audits of an AI system are necessary to ensure output accuracy remains high, especially as new research and data become available.
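A regular audit of the kind described can be approximated by checking the tool’s recent agreement with human-verified records against a baseline. The accuracy figures and the 10% tolerance in this sketch are illustrative assumptions:

```python
def audit_drift(baseline_accuracy, recent_pairs, tolerance=0.10):
    """Compare an AI tool's recent agreement with human-verified labels
    against its baseline accuracy; report drift if the drop exceeds the
    tolerance (illustrative thresholds, not an established standard)."""
    if not recent_pairs:
        raise ValueError("no recent records to audit")
    agreements = sum(1 for ai_label, human_label in recent_pairs
                     if ai_label == human_label)
    recent_accuracy = agreements / len(recent_pairs)
    drifted = (baseline_accuracy - recent_accuracy) > tolerance
    return {"recent_accuracy": recent_accuracy, "drifted": drifted}
```

Given a recent sample of (AI label, human label) pairs, the audit returns the current agreement rate and whether it has slipped past the tolerance, which could trigger retraining or closer human review.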

Example:4

  • Omada Health describes how they use valid and clinically meaningful data types with machine learning (ML) and GenAI to optimize and enhance predictions of behavior change. They use ML to categorize patients into groups based on similar behavior trajectories, and natural language processing (NLP) to parse patient communication and detect changes in sentiment. GenAI then converts the data into a comprehensive text summary delivered to care teams, with the goal of increasing efficiency and accuracy in targeted interventions.
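As a toy version of the trajectory-grouping step in the example above, the following buckets clients by the trend of their equally spaced measurements. The group names, the flat band, and the assumption that higher values mean improvement are all illustrative choices, not Omada’s actual method:

```python
def trend_slope(values):
    """Least-squares slope of equally spaced observations."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def group_by_trajectory(clients, flat_band=0.1):
    """Bucket clients into coarse groups by the trend of their data
    (assumes higher values indicate improvement; illustrative only)."""
    groups = {"improving": [], "stable": [], "declining": []}
    for name, series in clients.items():
        slope = trend_slope(series)
        if slope > flat_band:
            groups["improving"].append(name)
        elif slope < -flat_band:
            groups["declining"].append(name)
        else:
            groups["stable"].append(name)
    return groups
```

A care team could then review the “declining” bucket first, which is the kind of prioritization the summary-for-care-teams workflow in the example is meant to support.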

Professional Education and Training

Applied behavior analysis is a technology, in that technology is “the practical application of knowledge, especially in a particular area” (Merriam-Webster, n.d.). The ongoing education and activities of behavior analysts inherently involve the practical, meaningful, and ethical use of technology. The capabilities, limitations, and implications of AI technologies are crucial facets of which behavior analysts should be aware. As researchers and professionals who study and influence contingencies in humans’ (and other animals’) lives, it is paramount that we understand the practical aspects and ethical ramifications of AI tools. Behavior analysts should prioritize continuous learning through readings, seminars, workshops, and courses focused on AI and its intersection with behavior analysis.

Just as in almost every field, there is a tremendous need for suitable training for behavior analysts in the use of GenAI. To ensure that behavior analysts are competent in both operating the technology and interpreting its outputs, education and training should cover not only the technical aspects of the AI tools but also how to effectively integrate AI-generated insights into research and practice. For example, behavior analysts might undergo specialized training sessions to learn how to use AI-driven data analytics to evaluate interventions, and to ensure they can effectively interpret and apply the findings across settings.


Interdisciplinary Collaboration to Enhance Research and Interventions

Behavior analysts stand to benefit significantly from alliances with experts in computer science, law, and ethics, to better understand and address the complex and multifaceted nature of AI applications. Collaboration across fields can both safeguard the ethical and legal deployment of GenAI and enhance the quality and impact of research and interventions. Partnerships between behavior analysts, computer scientists, lawyers, and ethicists could result in tools that are innovative, effective, and aligned with ethical standards, subsequently increasing our knowledge base, improving outcomes for clients, and advancing the field of behavior analysis.

Collaboration with computer scientists could help improve the understanding and implementation of GenAI technologies, which are built on complex algorithms and data structures that require a thorough understanding of computational theories and principles.

For behavior analysts, the benefits of collaboration could include a better understanding of the mechanisms behind AI tools, ensuring they are used appropriately and more effectively. Computer scientists might provide insights into how AI models process and analyze behavioral data, or how the model is or will be trained, while the behavior analyst would ensure the input variables and interpreted outputs are clinically relevant.

The legal and ethical issues raised by using GenAI were covered in Part 1. Collaboration with legal experts can guide behavior analysts on complying with laws and regulations such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the U.S., which governs the privacy and security of personal health information. Ethicists can help explore the moral implications of using AI, ensuring that the dignity and rights of clients are always respected; this is an area ripe for joint research. For any research study or treatment that involves collecting and analyzing sensitive patient data, a legal expert could advise on the necessary consent forms and privacy protections, while an ethicist would examine the fairness of the algorithms and their impact on various demographic groups, ensuring ethical integrity in the handling of potentially biased outputs.5

Examples:6

  • A collaborative project could involve developing an AI application that assists in diagnosing behavioral conditions more quickly and accurately. The application’s development would benefit from the technical expertise of computer scientists, the ethical oversight by ethicists ensuring the AI’s recommendations do not perpetuate stereotypes or biases, and legal experts making sure the tool complies with medical regulations.
  • An interdisciplinary team could work on setting guidelines for the ethical use of predictive analytics in educational settings, ensuring that such tools support educational outcomes without compromising student privacy or autonomy.
  • Collaborating with a computer scientist, a behavior analyst might develop an AI model that predicts patient outcomes based on therapy session data. This project would benefit from the computer scientist’s technical expertise and the behavior analyst’s understanding of therapeutic processes, ensuring the model is both accurate and ethically designed.


Client Interaction, Engagement, and Well-being

Using GenAI may have a direct or indirect impact on client participation, safety, and welfare. Monitoring interaction and engagement is crucial. Behavior analysts must assess whether and how AI tools affect the client experience: through increased engagement, disengagement, approach/avoidance, no effect, or something else. AI tools in therapeutic contexts could potentially lead to disengagement due to a perceived lack of personal interaction. When compared to a human therapist (the control condition), mental health service participants rated chatbot-provided therapy as less useful, less enjoyable, and less conversationally smooth (Bell, Wood, & Sarkar, 2019). Thus, AI tools should support, not supplant, the therapeutic relationship. Behavior analysts must carefully evaluate how AI-generated interventions are perceived by clients and their impact on well-being. These tools do not have to diminish the human element associated with teaching, learning, and behavior change. The role of the behavior analyst remains central, with AI providing data-driven insights that enrich analysis, understanding, and decision-making.

AI can be an excellent resource for processing data and suggesting interventions; however, at least for now, the implementation of these suggestions and the client relationship are distinctly human tasks. Currently, GenAI is often used to free up time for professionals, allowing them to focus more on research goals or participant/client interactions and less on administrative or analytical tasks. AI-generated insights can be used during sessions to guide discussions and activities; however, the behavior analyst should maintain their professional judgment and oversight, and use their interpersonal skills when delivering the procedures or interventions.


Long-Term Societal Impacts, Sustainability, and Environmental Considerations

Behavior analysts need to consider the broader implications of both their individual use of AI and the collective use by the field (and society). GenAI use is likely to affect employment and societal norms (it already has). As professionals in a field that studies behavior, behavior analysts have a role, beyond the immediate bounds of practice, in shaping how society integrates and interacts with AI technologies. As a field, we should consider the broader societal implications of widespread AI use in behavior analysis, such as impacts on intervention outcomes, treatment acceptability, research programs, employment, societal norms, and interactions with others. Additionally, behavior analysts (and all humans) should advocate for and practice sustainable use of AI technologies, weighing the benefits against the costs of energy consumption and waste associated with these systems.


Part 2: Conclusion

GenAI offers both challenges and opportunities for behavior analytic professionals. Its integration introduces ethical challenges, including balancing broader access with patient safety, privacy, and quality of care. By addressing these challenges, we can leverage GenAI’s benefits while upholding the ethical standards of the behavior-analytic professions.

Stay tuned for Part 3, where the focus turns to GenAI in organizational systems and how this technology could affect ethical organizational dynamics within a behavior-analytic profession.


  1. Several tools were used to support the text and image creation, content modification, and the final draft of this post. Tools used include Anthropic’s Claude, Learneo’s Quillbot, Microsoft’s Copilot, Microsoft Image Creator from Designer, and OpenAI’s ChatGPT (GPT-4). As sole “author,” my role included topic inspiration, content generation, organization, quality control and synthesis of AI-generated content, and shaping the tone, voice, and style. ↩︎
  2. I provided original content and asked Grammarly to “Improve it.” I then further edited the paragraph for publication. ↩︎
  3. Examples provided by ChatGPT4 based on the prompt: Please provide realistic examples of how behavior analysts might integrate GenAI with existing methodologies. ↩︎
  4. The example is one of three provided by Microsoft CoPilot based on the prompt: Can you find any examples, case studies, or resources about Generative AI (GenAI) in behavior analysis related to Customization, Quality Control, and Monitoring? I checked the original source (linked) and modified the example. ↩︎
  5. I edited the legal and ethical collaborative example originally provided by ChatGPT4. ↩︎
  6. Examples were produced by ChatGPT4 based on the prompt: Please provide realistic examples of how behavior analysts might engage in interdisciplinary collaboration to enhance research and intervention when using Generative AI. I then edited the results. ↩︎


    References
    Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/

    Bell, S., Wood, C., & Sarkar, A. (2019). Perceptions of chatbots in therapy. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3290607.3313072.

    Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments. https://doi.org/10.1080/10494820.2023.2253861

    Cox, D. J., & Jennings, A. (2023). The promises and possibilities of artificial intelligence in the delivery of behavior analytic services. Behavior Analysis in Practice, 17(1), 107-122. https://doi.org/10.1007/s40617-023-00868-z

    Merriam-Webster. (n.d.). Technology. In Merriam-Webster.com dictionary. https://www.merriam-webster.com/dictionary/technology

    Nancarrow, S. A., Booth, A., Ariss, S., Smith, T., Enderby, P., & Roots, A. (2013). Ten principles of good interdisciplinary team work. Human Resources for Health, 11, 19. https://doi.org/10.1186/1478-4491-11-19
    OpenAI. (2024). GPT-4 [Large language model]. https://openai.com/gpt-4

    Twyman, J. S., & Layng, T. V. J. (in review). Generative AI and natural language processing. In J. Vladescu & D. Cox (Eds). Applied Behavior Analysis for Business and Technology Applications. Elsevier.

    Image Credits
    Image 1 – AI image generated by the author using the prompt “a man in a lab coat standing over a surprised looking cute little robot” on Nightcafe
    Image 2 – AI image generated by the author using the prompt “Therapist, child, and robot interacting together in a classroom setting” on Nightcafe
    Image 3 – AI image generated by the author using the prompt “abstract robot head Picasso style, rich deep colors” on Nightcafe
    Image 4 – AI image generated by the author using the prompt “many diverse old and young people, beautiful, purple tones” on Nightcafe
    Image 5 – AI image generated by the author using the prompt “human couple sleepwalking through a landscape filled with computers and robots” on Nightcafe