By Kate Carter, John E. Bryson Director of Science, Engineering, and Technology
Artificial intelligence is becoming increasingly present in mental health care.
The Academy started its work on AI and mental health care in fall 2023 by discussing hypotheticals. Over the life of the project, chaired by Paul Dagum (Applied Cognition), Sherry Glied (New York University), and Alan Leshner (American Association for the Advancement of Science), the landscape has shifted rapidly. Clinicians began exploring AI to assist with screening, triage, and the work that happens between therapy sessions. Members of the public turned in growing numbers to general-purpose systems such as ChatGPT for support in moments when traditional care felt out of reach.
The accelerating pace of adoption has created a gap between practice and policy. The policy debate still asks whether AI should be part of mental health care, even as it is already woven into how people seek support. But the pace of adoption does not erase the sensitivities of this domain. Mental health care depends on trust, privacy, and careful judgment, and missteps can carry real consequences.
The Academy’s new publication, AI and Mental Health Care: Issues, Challenges, and Opportunities, approaches this nuanced topic by mapping the terrain rather than offering a single conclusion, laying out the core questions that must anchor any serious debate. How should effectiveness be measured when interventions may take different forms across different populations? When should a human clinician be involved, and what kinds of oversight are necessary? How might these systems affect privacy, trust, and the experience of care? What are the implications for children, for people with severe mental illness, and for communities that already face inequities in access and treatment?
The publication does not claim to resolve these questions. Instead, it offers a framework to identify what is known, what remains uncertain, and what kinds of evidence are still needed. It reflects the belief that mental health care is a clinical, social, and economic system all at once, and that understanding how artificial intelligence fits within it requires expertise from many fields. That orientation shaped both the publication and its public launch event on December 9, 2025, “What are the Challenges and Opportunities of AI in Mental Health Care?”
The program included opening remarks from cochair Alan Leshner, followed by a discussion moderated by Sanjay Gupta (Emory University School of Medicine; CNN) with three members of the project’s steering committee: Kacie Kelly, Chief Innovation Officer at the Meadows Institute; Paul Dagum, founder and former CEO of Mindstrong; and Arthur Kleinman, psychiatrist and professor of anthropology at Harvard University. During the discussion, the panelists agreed on one central point: AI’s growing role in mental health care demands sharper definitions, clearer expectations, and a policy conversation that matches the reality clinicians and patients already inhabit.
Dr. Dagum set the tone early when he said, “There’s tremendous promise, but the concerns are real.” Many people turning to chatbots do not realize they are engaging with systems never designed for therapy, and that gap between expectation and capability shapes much of the current confusion. Dagum argued that AI should be understood as a new therapeutic modality rather than an informal substitute for human care, and that its future depends on clear regulatory and economic structures. As he noted, “It’s a mistake to equate a chatbot with a therapist. We should think of this from a new perspective.”
Professor Kleinman pushed the conversation toward questions of care and responsibility. “Humans are essential,” he said. His concern was not abstract. People are already using AI systems as companions or confidants, often in moments of acute vulnerability, and some models rely on constant validation to keep users engaged. “That is not how you approach human beings in psychotherapy,” he warned. The risk is not only clinical but structural. Economic pressures may encourage replacing human therapists rather than augmenting their work, and that, Kleinman argued, is where harm becomes most likely. AI can support care, he said, but “sovereignty must be with a human, not with AI.”
The distinction between purpose-built therapeutic systems and general-purpose chatbots ran throughout the conversation. Kacie Kelly stressed that these two families of tools operate differently and should not be governed as if they were interchangeable. “General-purpose AI chatbots are different from AI designed to deliver therapy,” she said. Purpose-built tools are structured, testable, and measurable. General-purpose systems shift from interaction to interaction, which makes evaluation difficult and creates unpredictable edge cases. Dagum agreed, stressing a key difference: a medical device does not change week to week, but a large language model can.
Evidence, though still nascent, was central to the discussion and to the panel’s ideas for solutions. Much of today’s AI is commercialized first and tested later, a reversal of the traditional path for mental health treatments. Dagum described two tracks emerging in real time: a consumer track, in which evidence is still mixed, and a more regulated track modeled on drug development, with rigorous trials and safety standards. In the regulated space, he argued, AI could support adherence to psychotherapy, long a challenge in digital health. But without FDA pathways and appropriate reimbursement through the Centers for Medicare & Medicaid Services (CMS), these systems will stall. “If CMS doesn’t get behind these solutions, private insurers won’t either,” he said.
Kelly agreed on the importance of regulation but warned that mental health is often left out. Federal efforts to modernize health infrastructure often focus on medical specialties with clearer diagnostic boundaries. “The more we can reinforce that we’re talking about two different lanes of innovation,” she said, “the better off we’re going to be.”
When discussion turned to safety, the panelists drew sharp lines around high-risk populations. Dr. Gupta cited documented cases in which chatbots missed suicidal ideation or responded in inappropriate ways. Kelly noted that misuse often involves general-purpose systems never meant for clinical work. Kleinman was more candid: for people with chronic psychotic disorders, he argued, AI is contraindicated. “It can provoke psychosis,” he said. AI might be useful in limited, supervised settings (for example, for intake, between-session support, or triage) but not as a substitute for care.
All three panelists saw opportunities for AI to support clinicians in more targeted ways. Kelly emphasized the value of data from the spaces between sessions. For people engaged in evidence-based therapy, progress often depends on what happens outside the room. Digital tools could help support that work. Kleinman indicated that supervised use could strengthen community health systems, especially in areas with limited providers. Dagum pointed to adherence again, arguing that regulated solutions could help people maintain momentum in therapy.
Bias, privacy, and inequity surfaced repeatedly. Dagum warned against assuming privacy in consumer systems. Kelly reminded the audience that bias in AI is unavoidable, but that it must be understood alongside the structural biases already embedded in mental health care today. Kleinman noted that AI, like human therapists, requires users to have language fluency, device access, and the capacity to interpret and act on recommendations. These are not small hurdles.
As the session closed, each panelist widened the lens. Kelly spoke about the need to move past fear of action and consider instead the risks of inaction. Dagum described an inflection point in the history of mental health treatment, with a new therapeutic modality emerging whether the field is ready or not. Kleinman argued for deeper interdisciplinary work. “We’re entering a new world,” he said. Understanding that world will require historians, ethnographers, economists, and philosophers as much as clinicians.
The Academy’s publication sets out the contours of that work. The launch event underscored how urgent and shared the task has become.
For more information about the AI and Mental Health Care project and the Academy’s work on artificial intelligence, visit the Academy’s website.