Comprehensive Summary
This article examines the use of artificial intelligence (AI) in mental health care, focusing on how Illinois’ Public Act 104-0054 regulates AI in psychotherapy and what evidence-based standards should guide its role in treatment. Szoke et al. reviewed the Act’s legal text alongside existing research on AI in mental health and established standards for evidence-based psychological treatments, drawing on these sources to propose a model for evaluating AI tools. The analysis finds that the Act clearly defines and regulates three categories of AI use in psychotherapy: administrative support, supplementary support, and therapeutic communication. It also imposes consent requirements and penalties for violations. However, many practical applications of AI, such as psychoeducation, suicide risk detection, and research use, fall into “gray areas” that the Act does not clearly address. The authors argue that these ambiguities may hinder both clinical innovation and research, despite the potential safety and access benefits of AI tools. They also note that people are already turning to unvalidated AI tools for mental health support, increasing the risk of harm in the absence of clear evidence standards. To address this, the authors propose an adapted empirically supported treatment (EST) framework that establishes clear standards for the effectiveness and legal compliance of AI in mental health care.
Outcomes and Implications
Mental health is a critical aspect of overall well-being, yet many individuals face limited access to effective care and therapy. AI tools are one approach being used to improve mental health treatment, as they have been shown to be capable of detecting symptoms of mental illness and generating personalized treatment plans. Despite these promising benefits, studies indicate that, without specialized training or guidance, existing AI models have limitations in accurately identifying mental health conditions and in responding to users with cultural and emotional sensitivity. While expanding access to mental health services is crucial, the integration of AI tools into clinical practice must be guided by ethical safeguards. In response to these challenges, Illinois enacted Public Act 104-0054 in August 2025, the first legislation in the United States to establish explicit oversight of AI use in psychotherapy. Szoke et al. evaluate this legislation in the context of existing research on AI in mental health and prior frameworks for evidence-based treatment, identifying areas where the law provides clarity and highlighting “gray zones” where guidance is still needed. They also present a structured way of evaluating AI tools and offer practical guidelines for clinicians. This work is clinically relevant because it provides a basis for judging which AI tools can be safely implemented in psychotherapy, helping ensure that interventions remain evidence-based and ethical. With clear standards of care in place, clinicians can responsibly incorporate AI into treatment planning, session support, and the monitoring of patient progress in mental health settings.