Illinois Bans AI Therapy: What It Means for the Future of Mental Health Apps  

Artificial intelligence is transforming healthcare, with the FDA clearing digital therapeutics for conditions like ADHD. But when it comes to mental health therapy, Illinois has drawn a firm line, banning AI-driven psychotherapy in favor of licensed professionals. The decision raises pressing questions about the safety, ethics, and regulatory future of AI mental health apps.

Artificial intelligence (AI) has transformed industries from finance to transportation, and healthcare has been no exception. The FDA has even cleared several AI-driven tools, including digital therapeutics for conditions like ADHD, signaling the promise of software-based medicine. Yet, when it comes to mental health treatment, the excitement surrounding AI collides with deep ethical, clinical, and regulatory concerns. Illinois’ recent decision to ban AI-driven psychotherapy highlights a growing discomfort: replacing human therapists with algorithms may not be the future that patients or regulators are ready to embrace.

AI-based medical apps have demonstrated potential in certain areas of behavioral health. FDA-cleared apps for structured conditions such as ADHD, for example, support cognitive training and behavioral reinforcement. These tools, designed for well-defined outcomes, illustrate how software can play a valuable role when clinical parameters are clear. But mental health therapy is not a checklist. Effective psychotherapy is shaped by nuance: tone of voice, timing of pauses, microexpressions, and the therapist's ability to adapt to unspoken emotional cues. These human elements are precisely what AI struggles to capture. Critics argue that without these subtleties, therapy risks becoming transactional and even unsafe, especially in crisis situations where judgment calls can mean life or death.

By signing HB1806, the Wellness and Oversight for Psychological Resources Act, Governor JB Pritzker made Illinois the first state to explicitly ban the use of AI for direct therapeutic services. Under the law, AI may still assist therapists with administrative or educational functions, but it cannot provide psychotherapy or make treatment decisions.

The motivation is clear: protect patients from unqualified digital “therapists” that might provide harmful or misleading advice. Lawmakers pointed to disturbing real-world examples, including a chatbot encouraging substance relapse. For regulators, this was an alarming sign that unmonitored AI platforms are not only unreliable but potentially dangerous.

The Illinois ban underscores a broader societal unease. Mental health treatment is deeply personal, and many fear that delegating it to machines strips therapy of its core human connection. Therapists emphasize that ethical judgment, empathy, and professional accountability cannot be replicated by algorithms. While AI might help with scheduling, psychoeducation, or even symptom tracking, it cannot shoulder the responsibility of guiding someone through trauma, addiction, or suicidal ideation.

The Illinois law also signals the steep regulatory hurdles AI-based therapy apps will face nationwide. Unlike physical health conditions that can be measured with lab values or imaging, mental health outcomes are subjective, variable, and highly context-dependent. For regulators like the FDA, this makes evaluating safety and efficacy particularly difficult. While AI may gain traction in supportive roles, its path as a direct replacement for therapy looks rocky. Other states are likely to follow Illinois’ example, erecting guardrails that prioritize human oversight. For developers, this means that innovation in mental health apps must focus on complementing, not substituting, licensed professionals.

AI will undoubtedly shape the future of healthcare, but mental health remains a frontier where human connection is irreplaceable. The Illinois ban is not a rejection of technology—it is a reminder that progress in medicine must be guided by both science and ethics. For AI in mental health, the future is not about replacing therapists, but empowering them.
