Introduction by Blog Coordinator Darnell Lattal, Ph.D.
Dr. Twyman has provided a very complete analysis of AI and behavior analysis in this three-part series. Please look at the end of the third blog to find a very detailed set of questions to turn into your own working sessions for various teams inside your organization. Janet has provided the blogs and questions to help us consider the implications of the growing use of AI directly on all of us and on our clients.
Her questions at the end of the series are a gift; as most of us know, it is hard to come up with the right things to explore when doing strategic or tactical planning.
Thank you so much, Janet.
Part 3: Navigating AI: Organizational Ethics and System-Wide Changes in Behavior Analysis
Janet S. Twyman
The integration/assimilation/amalgamation[1] of generative AI into organizational systems presents unique challenges and opportunities that extend beyond individual considerations for research or applied work. As behavior analysts work within complex organizational structures – from clinical practices and schools to research institutions and healthcare systems – the effects of AI adoption impact every aspect of organizational dynamics, professional ethics, and decision-making within the organizational system. Without being extremist, I think the very fabric of how behavior-analytic organizations operate (including training new professionals, maintaining ethical standards, and serving their communities) is being rewoven by this interesting, appealing, and perhaps unique technology.

In Parts 1 and 2 of this blog series, I wrote about both universal and behavior analytic professional concerns regarding the use of AI and Large Language Models (LLMs) in research, teaching, and professional practice. In many ways these posts addressed the use of AI by individuals, and by individuals within an organizational context (be it a clinic, educational setting, university program, or something else). This time we will focus on organizational “behavior” (although, I confess some aversion to organizational entities as “behavers” ever since corporations were considered to have the same rights as people). Yet we soldier on…
Organizations adopting Artificial Intelligence (AI) face complex challenges in ensuring accurate, ethical, and transparent use of the technology. Deliberate thought and planning are required regarding the short-term, intermediate, and long-term consequences that have the potential to impact every aspect of organizational functioning and professional practice. As behavior analytic organizations increasingly incorporate AI tools into their operations, they must grapple with myriad ethical issues such as AI-generated content verification, responsible data handling, and the maintenance of professional standards. The potential for AI to generate misleading information (often termed “hallucinations”), create deceptive content, or produce biased analyses requires organizations to develop robust verification systems and clear guidelines for AI use. This becomes particularly crucial in behavior analytic contexts where decisions impact vulnerable populations and where maintaining professional credibility is paramount.
To support behavior analytic organizations in the responsible implementation of AI, I’ve outlined nine key areas of consideration. While each area is addressed individually, they are interconnected, with overlapping considerations that influence and reinforce one another.
- Strategic Framework and Overarching Questions sets the foundation for organizations to align AI adoption with their mission, goals, and ethical values.
- Ethical Risk Management and Governance lays the groundwork for ensuring that AI use is transparent, accountable, and aligned with professional values.
- Maintenance of Professional Standards safeguards the ethical and evidence-based practices that underpin the field, even as AI tools become part of daily operations.
- Responsible Data Handling addresses the need for robust safeguards to protect sensitive client and organizational information, ensuring privacy, compliance, and security.
- Verification of Generated Content and Transparency focuses on checking the accuracy, reliability, and fairness of AI outputs to maintain trust and usefulness in practice.
- Equity and Quality in AI Professional Development emphasizes preparing staff to integrate AI tools effectively while promoting fairness, inclusivity, and cultural relevance in all applications.
- Stakeholder Communication highlights the importance of clear, transparent dialogue with clients, families, and staff about AI’s role, limitations, and potential, while gathering feedback to guide its implementation.
- Research and Adaptive Development encourages organizations to study and refine AI systems, contributing to innovation and the growing knowledge base in behavior analysis.
- Finally, Environmental and Social Responsibility expands the focus to the broader implications of AI, urging organizations to consider sustainability and societal well-being as they adopt new technologies.
The following paragraphs explore these issues in narrative form, while the accompanying table outlines key questions that behavior analytic organizations should consider when implementing AI tools.
Strategic Framework and Overarching Questions
For behavior analytic organizations, clinics, and systems, adopting AI requires a deliberate alignment with their mission to provide ethical, evidence-based services to clients and their communities. A strategic framework ensures that AI tools are implemented thoughtfully, enhancing (not compromising) the organization’s core values and professional standards. Considerations include defining the purpose of AI integration, identifying how it can support key functions such as service delivery, training, or data analysis, and ensuring that policies evolve with the technology. For clinic owners and administrators, the framework should also address operational priorities, such as clear accountability for AI oversight, protocols for addressing unintended consequences, and mechanisms to evaluate AI’s impact on clients, staff, and organizational goals. Establishing a strategic framework provides a roadmap for integrating AI in ways that respect the professional responsibilities of behavior analysts while preparing organizations for the changes (and challenges) that occur when using AI tools. (See Sriharan et al., 2024)
Ethical Risk Management and Governance
For behavior analytic organizations, the integration of AI requires a focus on mitigating risks while simultaneously ensuring ethical governance to protect clients, uphold professional standards, and maintain organizational integrity. This involves implementing systems to evaluate AI tools’ reliability, developing clear protocols for handling adverse events, and ensuring that all AI use aligns with core ethical principles. Activities could include updating insurance coverage, establishing documentation procedures that demonstrate due diligence in AI implementation, and developing protocols for addressing adverse events. Governance structures, such as committees or policies, that proactively monitor AI use and ensure transparency in decision-making may need to be created. Ethical oversight also extends to addressing conflicts between AI outputs and evidence-based practices, managing accountability, and fostering an organizational culture that prioritizes the welfare of clients and the professional integrity of staff. (See World Health Organization, 2021)
Maintenance of Professional Standards
The maintenance of professional standards seems to take on new dimensions in an AI-augmented environment, particularly for behavior analytic organizations where adherence to ethical codes and evidence-based practices is vital. Organizations need to ensure that AI use aligns with the Ethics Code for Behavior Analysts (Behavior Analyst Certification Board, 2020) as well as the broader principles of the field, while simultaneously developing new guidelines to govern AI’s role in supervision, training, and clinical practice. This includes establishing clear boundaries for AI use in client interactions to protect the therapeutic relationship and ensuring that staff receive adequate training to use AI tools both effectively and ethically. Additionally, organizations must create oversight mechanisms to monitor AI implementation and develop frameworks for resolving potential conflicts between AI recommendations and professional clinical judgment. By addressing these challenges, organizations can integrate AI tools into their operations without compromising the integrity of their professional standards. (See Esmaeilzadeh, 2024)
Responsible Data Handling
The integration of AI into behavior analytic organizations occasions a comprehensive approach to data handling, given that quality ABA services are inherently data driven. Existing protocols for managing data may need to be re-evaluated and enhanced to account for the unique risks and opportunities posed by AI tools. Organizations should already have in place robust data handling protocols that prioritize client confidentiality, ensure HIPAA compliance, and specify appropriate data retention and deletion policies. If AI-powered tools are used for data collection, analysis, or decision-making, it is imperative to safeguard sensitive information and prevent misuse. Clear criteria must be developed to determine which types of data can be input into AI systems, while safeguards must prevent unauthorized access and ensure data integrity. Furthermore, informed consent procedures must be updated to provide transparency about how client data will be used in AI-supported processes. By proactively addressing these challenges, organizations can harness the benefits of AI while upholding the ethical standards of behavior analysis. (See Murdoch, 2021)
Verification of Content Generated by AI
AI has the potential to create various types of content for use in behavior analytic organizations, each with differing “fact checking” needs. AI might be used to create teaching materials (e.g., stimuli for teaching discriminations, materials for verbal behavior programs, or visual aids for skill acquisition tasks), clinical content (e.g., treatment plans, progress summaries, or behavior intervention strategies), administrative content (e.g., staff schedules, training materials, or internal policies), analytical content (e.g., data interpretations, performance reports, or predictive models), and public-facing content (e.g., website copy, educational articles, or client newsletters). For each type, it is crucial to ensure that the content is accurate, evidence-based, and free from bias, errors, misleading information, or inadvertent affronts to individuals, families, communities, cultures, etc. For example, AI-generated teaching materials should present stimuli that are developmentally appropriate and align with the learner’s treatment goals. A clinical treatment plan must align with evidence-based practices and avoid interventions that conflict with ethical or professional guidelines. Administrative materials, such as staff training content, must reflect organizational standards and current best practices.
To assess the accuracy of AI-generated content:
- verify the source material
- fact-check information against reliable sources
- check for consistency and logical flow
- scrutinize dates and statistics
- proofread for grammatical errors
- check for potential biases
- understand the capabilities and limitations of the AI tool you are using
Treat AI-generated content with skepticism and verify its accuracy before relying on it.
Analytical outputs, such as data summaries or analyses, must be verified to avoid overgeneralizations or errors that could misguide decision-making. Public-facing content, such as an article about ABA for families, must be accurate, clear, and audience-friendly to maintain trust and credibility. And without doubt, the risk of AI “hallucinations”—fabricated or inaccurate outputs—makes robust verification protocols essential for all types of content to ensure reliability and protect client outcomes. (See University of South Florida Libraries, 2025)
Equity and Quality in AI Professional Development
Organizational success has always been tied to workforce and professional development. AI implementation in behavior analytic organizations likewise hinges on staff being not only equipped with the technical skills and workflows to use AI tools but also able to integrate these tools into their clinical and administrative expertise. Training programs could include hands-on demonstrations, case-based scenarios, and ethical discussions to prepare staff for real-world applications (AI can even be considered when developing these trainings and materials). For example, training might involve teaching clinicians to use AI-generated data visualizations to track client progress more efficiently while maintaining the clinical judgment needed to interpret and act on those insights. Competency frameworks could incorporate AI literacy as part of professional development, so that staff at all levels—from RBTs to BCBAs—are comfortable with these tools and confident and effective in their use. (See Randhawa & Jackson, 2020.)
Equity and inclusion play a critical supporting role in this process, ensuring that AI tools benefit all stakeholders. Organizations may need to make accommodations for staff or families with limited technological access or skills, such as providing alternative non-AI resources or additional training opportunities. Ensuring equitable implementation not only promotes fairness but also fosters trust in the organization’s use of AI-enhanced systems. (See Garcia Ramos & Wilson-Kennedy, 2024).
Quality assurance completes this framework by providing mechanisms to evaluate how AI impacts both service quality and staff development. For example, organizations might monitor how effectively AI tools enhance clinical outcomes, such as reducing response times for identifying treatment trends or increasing the accuracy of data collection. Benchmarks and feedback systems could assess how well AI-supported decision-making aligns with evidence-based practices and ethical guidelines, ensuring that AI tools remain assets rather than liabilities. Ultimately, to ensure quality use of AI, ask yourself this question: “How do I use this technology to bring out the best in these humans?” (See Haller, 2022)
Stakeholder Communication
Stakeholder messaging might require additional nuances in the age of AI; however, clear communication about AI integration is essential. Clear and intentional communication strategies should be developed to help clients, families, and other stakeholders understand not only the potential benefits of AI in enhancing service delivery and operations but also its limitations, particularly in sensitive and critical contexts. Increasing stakeholder awareness could include designing (accessible, culturally relevant) education programs tailored to the varying levels of technological literacy among stakeholders. As in all areas, transparency is vital. Organizations should establish clear plans to disclose when and how AI tools are being used, and the safeguards in place to ensure accuracy, reliability, and compliance with ethical standards. Another important aspect is establishing mechanisms for gathering stakeholder feedback and using their concerns and insights to inform ongoing AI implementation decisions. (See Scott et al., 2021)
Research and Ongoing Adaptive Development
Behavior analytic organizations must adopt a forward-thinking approach to AI integration, emphasizing ongoing refinement, innovation, and contribution to the broader field. The integration of AI tools requires careful planning to ensure they align with existing systems, necessitating workflow modifications and strategies for seamless collaboration with other service providers. Contingency plans for potential system failures or the discontinuation of AI tools are also essential to maintain continuity in service delivery. Beyond operational considerations, organizations have a responsibility to contribute to the growing knowledge base surrounding AI use in behavior analysis. This may include engaging in research partnerships, studying the effects of AI implementation on service outcomes, and developing behavior analytic-specific AI applications that align with evidence-based practices and ethical standards. By engaging proactively in research and adaptive development, organizations not only enhance their own AI systems but also shape the future of AI use in the field, ensuring its responsible and effective application. (See Bankins et al., 2024)
Environmental and Social Responsibility
The environmental impacts of AI seem too often underdiscussed, yet they are critical to the socially responsible use of AI tools. As organizations strive to balance technological advancement with their broader ethical obligations, they should consider the impact that AI systems, especially those involving large-scale data processing, have on energy consumption, carbon emissions from server farms and computational infrastructure, and other aspects of the environment. Behavior analytic organizations, even those smaller in scale, can contribute to sustainability by making deliberate choices, such as selecting energy-efficient AI tools or partnering with providers committed to renewable energy. Beyond environmental considerations, the social implications of AI should always be at the forefront, such as ensuring AI is used in ways that promote equity, accessibility, and societal well-being. By being aware and thoughtful of the broader impacts, organizations can adopt AI responsibly and hopefully contribute to a more sustainable and equitable future. (See Moyano-Fernández & Rueda, 2023)
Conclusion
In Part 3 of this blog series, I’ve outlined nine critical areas for behavior analytic organizations to consider when implementing AI tools. These areas were identified to provide a comprehensive framework for the responsible integration of AI, ensuring that its use aligns with societal values important to behavior analysis: ethical applications, equity, and the commitment to serving diverse populations. While each area was discussed individually, they are intertwined and collectively reinforce the importance of thoughtful planning, transparency, and continuous evaluation through measurement and analyses. By considering these areas and the related questions, organizations may better navigate the complexities of AI adoption with a clear and responsible approach.
A Final Note. One crucial concern with AI, which I discussed in Parts 1 and 2 and which has been extensively covered by others, is bias. Yet it is such a critical issue that I’d like to conclude with a few reminder words about it here.
AI systems can only be as good as the data they are trained on, and bias is often “hard-wired” into generative AI models. For example, training data drawn from the internet often reflects and perpetuates stereotypes. These biases can have far-reaching consequences, such as hiring algorithms that favor one demographic over another based on historical patterns. Bias in AI is particularly challenging to address because it reflects complex social issues that lack simple fixes. While mitigation techniques like reinforcement learning from human feedback show promise in improving model outputs, they are not perfect solutions. This means behavior analytic organizations must remain vigilant about how bias may manifest in AI tools and the potential impact on clients and staff. For example, AI tools generating stimuli for skill acquisition programs must account for cultural and linguistic diversity to ensure they are relevant and inclusive for all clients.
Organizations must also critically evaluate whether an AI system is appropriate for a given context in the first place. If you notice that AI-generated work lacks varied representation or omits critical perspectives, supplement it with additional resources to fill these gaps. By taking proactive steps to identify, address, and mitigate bias, behavior analytic organizations can help ensure that AI serves as a tool to promote equity rather than perpetuate inequality. As AI becomes increasingly integrated into the field, maintaining a skeptical, thoughtful, and proactive approach is essential. Bias isn’t just about the accuracy of AI outputs; it’s about ensuring that the systems we choose to use align with our core principles and values and support the dignity and diversity of the populations we serve. With careful planning and ongoing caution, AI can be a powerful tool for advancing behavior analysis while staying true to its ethical foundation.
Table. Key Questions for AI Implementation in Behavior Analytic Organizations
Strategic Framework and Overarching Questions
- What criteria will the organization use to determine whether an AI tool aligns with its mission, client needs, and evidence-based practices?
- How will AI tools integrate with current systems, and what workflow modifications will be needed?
- What contingency plans and emergency protocols will address system failures, malfunctions, or harmful recommendations?
- How will the organization measure the impact of AI on service quality, client outcomes, and staff workflows?
- Who will be responsible for staying informed about AI advancements, and how will this knowledge inform organizational practices?
- How will the organization document and update AI policies to reflect advancements and lessons learned?
- What mechanisms will ensure staff and stakeholders can provide feedback or raise concerns about AI tools?
- How will the organization ensure AI use aligns with its commitment to equity, transparency, and ethical standards?
Ethical Risk Management and Governance
- How will the organization ensure professional liability insurance covers risks associated with AI-assisted practice?
- What structures (e.g., policies, protocols, committees) will be created to monitor and enforce ethical AI use?
- What processes will ensure that AI recommendations align with evidence-based practices and professional judgment?
- How will the organization establish protocols for reporting, investigating, and resolving adverse events or ethical concerns related to AI?
- What documentation practices will demonstrate due diligence in AI selection, implementation, and evaluation?
- How will staff and stakeholders be educated about the ethical principles governing AI use in behavior analytic practice?
- What mechanisms will ensure transparency in AI-driven decisions and accountability for their outcomes?
- How will compliance with applicable laws, regulations, and ethical guidelines be maintained and reviewed over time?
Maintenance of Professional Standards
- How will the organization ensure AI use aligns with the BACB Ethics Code and established professional standards?
- What training programs will staff require to use AI tools ethically, effectively, and in alignment with evidence-based practices?
- How will the organization establish and maintain appropriate professional boundaries when AI tools are used in client interactions?
- What protocols will be implemented to monitor the ethical and professional use of AI tools in practice?
- How will the organization address, resolve, and document potential conflicts between AI recommendations and professional clinical judgment?
- What specific guidelines will be created for using AI in supervision, training, and competency development for staff and trainees?
Responsible Data Handling
- How will existing data management protocols be re-evaluated to address the specific risks and benefits of AI integration?
- How will client confidentiality be maintained when using AI tools that process or store sensitive behavioral data?
- What criteria will determine which data can be safely input into AI systems, and which must remain strictly managed by human professionals?
- How will the organization ensure compliance with HIPAA and other relevant regulations when using AI tools for data processing?
- What data retention and deletion policies will be established to prevent the accumulation of unnecessary or outdated AI-processed information?
- How will informed consent procedures be updated to transparently address AI’s role in data collection, analysis, and decision-making?
- What safeguards and access controls will be implemented by AI systems to prevent unauthorized access to sensitive client or organizational data?
- How will the organization regularly audit and monitor AI systems to ensure continued compliance with privacy, security, and regulatory standards?
- How will the organization balance the benefits of AI-powered data analysis with the ethical obligation to protect sensitive client information?
Verification of Content Generated by AI
- What strategies will the organization implement to detect and manage AI “hallucinations” (fabricated or inaccurate outputs), especially in high-stakes contexts like clinical or public-facing content?
- How will the organization verify the accuracy and appropriateness of AI-generated teaching materials, such as stimuli for verbal behavior programs or discrimination training tasks?
- What protocols will ensure that AI-generated clinical content, such as treatment plans or progress summaries, aligns with evidence-based practices and ethical guidelines?
- Who within the organization will have the authority to review and approve AI-generated administrative content, such as staff schedules, training materials, or policies?
- How will the organization validate analytical outputs, such as data interpretations, performance reports, or predictive models, to avoid errors or overgeneralizations?
- What systems will be in place to review and confirm the accuracy of public-facing content, such as website materials or educational resources, before distribution?
- How will the organization track and document which content has been generated by AI versus created by human professionals?
- Should clients, families, or other stakeholders be informed when AI-generated materials are used in their services or communications?
- What protocols need to be established for citing or acknowledging AI assistance in organizational documents, including teaching, clinical, and public materials?
- How will the organization periodically audit AI-generated content across all categories to ensure ongoing accuracy, reliability, and compliance with professional standards?
Equity and Quality in AI Professional Development
- What specific technical skills and workflows will staff need, and how will training programs prepare them to effectively integrate AI tools into clinical and administrative practices?
- How will the organization support staff who may feel threatened or displaced by AI, while fostering confidence and capability at all levels?
- What mechanisms will be in place to gather and respond to staff feedback, ensuring AI implementation is continuously refined?
- How will AI tools and training programs address diverse cultural contexts, languages, and varying levels of technological literacy among staff and clients?
- What benchmarks and quality control measures will ensure that AI-supported decision-making aligns with evidence-based practices and ethical standards?
- How will the organization evaluate the impact of AI on business or clinical outcomes, staff workflows, and overall service quality?
- What strategies will ensure AI implementation respects cultural competency and promotes equity across all settings (business, home, school, clinic)?
- How will AI tools be adapted for different cultural contexts and languages?
- What role will cultural competency play in AI tool selection and implementation?
Stakeholder Communication
- How will the organization communicate the role of AI in service delivery to clients, families, and other stakeholders in clear and accessible terms?
- What education programs or resources will be provided to help stakeholders understand the benefits, limitations, and safeguards of AI tools?
- How will the organization ensure transparency regarding AI use, including when and how it is involved in client services or decision-making?
- What mechanisms will be established to gather and incorporate feedback from stakeholders about the use of AI tools in the organization?
- How will the organization address stakeholder concerns about AI, including issues of bias, reliability, or ethical use?
- What steps will be taken to ensure stakeholder communication is culturally and linguistically appropriate?
- How will the organization balance promoting the benefits of AI with honest discussions of its limitations to manage stakeholder expectations?
- What strategies will be used to ensure ongoing stakeholder engagement as AI systems evolve within the organization?
Research and Ongoing Adaptive Development
- Who (or what group) will be responsible for staying current with evolving AI capabilities and limitations and ensuring this knowledge informs organizational practices?
- How will the organization document and update AI use policies to reflect advancements, lessons learned, and ethical considerations?
- What contingency plans will be developed to address system failures or the discontinuation of AI tools without disrupting service delivery?
- How will the organization evaluate the effectiveness of AI tools on client outcomes, staff workflows, and overall service quality?
- What benchmarks and quality control measures will ensure AI tools align with evidence-based practices and ethical guidelines?
- How will the organization contribute to the knowledge base on AI use in behavior analysis, such as through research, partnerships, or publication of findings?
- What steps will ensure that research findings and evaluation results are applied to continuously refine and adapt AI implementation?
- How might the organization collaborate to develop AI applications specifically tailored to behavior analytic practice?
Environmental and Social Responsibility
- How will the organization evaluate the environmental impact of AI tools, such as energy consumption and carbon footprint?
- What steps can be taken to prioritize energy-efficient or environmentally sustainable AI systems?
- How can the organization advocate for or partner with AI providers committed to renewable energy or sustainable practices?
- In what ways can AI tools be used to promote equity and accessibility, particularly for underserved or vulnerable populations?
- How will the organization ensure that its use of AI aligns with broader societal goals, such as reducing disparities or increasing access to high-quality services?
- What role can the organization play in raising awareness about the environmental and social implications of AI within the field of behavior analysis?
[1] Right off the bat, I (and AI) offer the reader a choice. I gave the first paragraph to Claude and prompted: “The first line of the intro says ‘integration.’ How would the meaning of the sentence change if I used ‘assimilation’ or even ‘amalgamation’? I want to convey that it’s not as simple as ‘integration’ (if integration is ever truly simple).”
Claude offered these nuanced shifts in meaning. I think all three fit; which do you prefer?
“Integration” suggests a deliberate, somewhat controlled process of bringing AI into existing organizational systems – like adding a new component that fits into established structures. It’s perhaps the most neutral of the three terms.
“Assimilation” carries stronger implications of absorption and transformation – it suggests that organizations aren’t just adding AI but are fundamentally adapting and changing in response to it. There’s an element of organizational culture and identity being reshaped as AI becomes part of the system. This term might better capture the deep, sometimes unconscious ways AI is changing organizational DNA.
“Amalgamation” suggests an even more complex blending or fusion where the boundaries between AI and traditional organizational processes become increasingly blurred. It implies a more thorough mixing of elements where the result might be quite different from either original component. This term might best capture the messy, sometimes unpredictable nature of how AI and organizational systems are combining.
Additional Use of AI for This Post
- Several tools were used to support text and image creation, content modification, and the final draft of this post, including Anthropic’s Claude, OpenAI’s ChatGPT-4o, and the NightCafe AI Art Generator. As sole “author,” my role included topic inspiration, content generation, organization, quality control and synthesis of AI-generated content, and shaping the tone, voice, and style.
- Questions for each area were suggested by ChatGPT-4o after it was given the area description and 1–3 example questions. I evaluated and modified GPT’s suggested questions, sometimes through several iterative revisions, before final inclusion.
- ChatGPT-4o was asked to correct the references to ensure they complied with APA Style, 7th edition (American Psychological Association, 2020).
References and Additional Resources
American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). American Psychological Association. https://doi.org/10.1037/0000165-000
Anthropic. (2024). Claude 3.5 Sonnet [Large language model]. https://www.anthropic.com
AppliedBehaviorAnalysisEdu.org. (2024, January). Integration of ABA with artificial intelligence (AI). https://www.appliedbehavioranalysisedu.org/2024/01/integration-of-aba-with-artificial-intelligence-ai/
Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159-182. https://doi.org/10.1002/job.2735
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/
Cox, D. J. (2023, March 28). Artificial intelligence in ABA: Why data transparency matters. RethinkFutures. https://www.rethinkfutures.com/resources/artificial-intelligence-aba-why-data-transparency-matters/
Cox, D. (2023, May 18). Artificial intelligence (AI) in applied behavior analysis (ABA). Rethink Behavioral Health. https://www.rethinkbehavioralhealth.com/resources/artificial-intelligence-ai-in-applied-behavior-analysis-aba/
Esmaeilzadeh, P. (2024). Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artificial Intelligence in Medicine, 151, 102861. https://doi.org/10.1016/j.artmed.2024.102861
Fisher, S., & Rosella, L. C. (2022). Priorities for successful use of artificial intelligence by public health organizations: A literature review. BMC Public Health, 22(1), 2146. https://doi.org/10.1186/s12889-022-14422-z
Garcia Ramos, J., & Wilson-Kennedy, Z. (2024, October). Promoting equity and addressing concerns in teaching and learning with artificial intelligence. Frontiers in Education, 9, 1487882. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1487882/full
Haller, K. (2022). Quality assurance in and for AI. In Managing AI in the enterprise. Apress. https://doi.org/10.1007/978-1-4842-7824-6_3
Joshi, H. (2025). Implementing responsible AI in healthcare organizations: Strategies, challenges, and best practices. In Responsible AI for digital health and medical analytics (Chapter 12, pp. 293–326). IGI Global. https://doi.org/10.4018/979-8-3693-6294-5.ch012
Moyano-Fernández, C., & Rueda, J. (2023). AI, sustainability, and environmental ethics. In F. Lara & J. Deckers (Eds.), Ethics of artificial intelligence (Vol. 41). The International Library of Ethics, Law and Technology. Springer Nature. https://doi.org/10.1007/978-3-031-48135-2_11
Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22, 1–5. https://doi.org/10.1186/s12910-021-00687-3
NightCafe Studio. (2025). NightCafe AI Art Generator [Artificial intelligence tool]. https://creator.nightcafe.studio
OpenAI. (2024). ChatGPT (version 4o) [Large language model]. https://openai.com
Patel, A., & Sharma, A. (2024). Demystifying privacy-preserving AI: Strategies for responsible data handling. MZ Journal of Artificial Intelligence, 1(1), 1–8. http://mzjournal.com/index.php/MZJAI/article/view/48
Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where responsible AI meets reality: Practitioner perspectives on enablers for shifting organizational practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–23. https://doi.org/10.1145/3449081
Randhawa, G. K., & Jackson, M. (2020, January). The role of artificial intelligence in learning and professional development for healthcare professionals. Healthcare Management Forum, 33(1), 19–24. https://doi.org/10.1177/08404704198690
Scott, I. A., Carter, S. M., & Coiera, E. (2021). Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health & Care Informatics, 28(1), e100450. https://doi.org/10.1136/bmjhci-2021-100450
Sriharan, A., Sekercioglu, N., Mitchell, C., Senkaiahliyan, S., Hertelendy, A., Porter, T., & Banaszak-Holl, J. (2024). Leadership for AI transformation in health care organizations: A scoping review. Journal of Medical Internet Research, 26, e54556. https://doi.org/10.2196/54556
Stevens, D. (2025, January 8). Understanding the ethics of AI in ABA therapy. MedCity News. https://medcitynews.com/2025/01/understanding-the-ethics-of-ai-in-aba-therapy/
University of South Florida Libraries. (2025, January 31). Generative AI reliability and authority. AI Tools and Resources. https://guides.lib.usf.edu/AI
World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance (executive summary). World Health Organization. https://iris.who.int/bitstream/handle/10665/350567/9789240037403-eng.pdf