AI and Mental Health Care: Issues, Challenges, and Opportunities

QUESTION 6: How should AI models be reimbursed or monetized?


Background

Until recently, most AI-driven mental health tools lacked reimbursement through formal insurance channels, which significantly limited their adoption. At the center of this landscape is a regulatory divide between general wellness apps and therapeutic digital interventions, a distinction that drives fundamentally different monetization and coverage paths. A 2019 review described coverage as a fragmented patchwork that relied primarily on direct-to-consumer payments, employer wellness programs, or institutional licenses, with minimal involvement from insurers.89 Standalone digital therapeutics had no viable reimbursement pathway under traditional fee-for-service models. This reimbursement gap was cited as a key barrier to broader adoption and likely contributed to the failure of several digital mental health firms in 2022 and 2023, despite their products’ FDA authorization.90

Over the past three years, however, public payers have begun to incorporate coverage for some digital tools. The 2025 Medicare Physician Fee Schedule introduced new Healthcare Common Procedure Coding System codes specifically for FDA-cleared “digital mental health treatment devices,” allowing clinicians to bill for their services when such tools are used under their supervision.91 Medicare does not currently cover the treatment devices themselves, although its codes set important precedents for private insurers and for Medicaid programs, which typically follow Medicare’s lead.92 As of 2023, two state Medicaid programs, in Massachusetts and Florida, explicitly covered “reSET,” an FDA-authorized prescription digital therapeutic for substance use disorder designed to be used as an adjunct to in-person therapy.93 A few other states are piloting coverage through Medicaid waivers or incorporating digital tools into broader behavioral telehealth initiatives.94

Private insurers and employer-sponsored health plans are gradually expanding coverage as well, typically focusing on regulated, FDA-cleared digital therapeutics. A 2022 survey found that 14 percent of U.S. health plan decision-makers reported covering digital therapeutics for behavioral health, the highest adoption of digital tools in any area outside diabetes care.95 International models, such as Germany’s Digital Health Applications (DiGA) system and the United Kingdom’s National Health Service (NHS) digital app assessments, demonstrate how structured reimbursement frameworks can significantly enhance the integration of these technologies into health care systems.96

Payment strategies for AI mental health tools vary widely, depending on their intended audience, function, and regulatory status. Direct-to-consumer wellness apps primarily employ “freemium” or subscription models, while clinician-oriented tools, such as AI documentation assistants and clinical decision-support systems, usually use enterprise licensing or per-user fees. Value-based contracts, in which payers fund platforms like Quartet Health and reward providers based on patient outcomes, represent another emerging model.97 Lastly, several companies, such as Lyra Health, distribute their services through employer-sponsored employee assistance programs (EAPs).

The regulatory distinction between general wellness apps and therapeutic digital interventions drives fundamentally different financing approaches. Wellness-focused tools typically avoid regulatory scrutiny, opting instead for scalable revenue sources like subscriptions, corporate wellness deals, or direct consumer payments. In contrast, developers of regulated digital therapeutics seek to position their products as prescribable medical devices and pursue formal insurance reimbursement, a path that supports higher pricing but depends heavily on billing codes and consistent payer coverage.98 This path is challenging: Pear Therapeutics, the maker of reSET, the digital therapeutic for substance use disorder that won FDA authorization and coverage through both Florida and Massachusetts Medicaid, ultimately went bankrupt.99

Several critical uncertainties remain. How swiftly and widely will providers and insurers adopt Medicare’s new codes? Will the American Medical Association (AMA) and Medicare expand their codes to include unsupervised AI symptom monitoring and care delivery? Will Medicaid programs nationwide scale up their coverage of digital therapeutics? The effects of reimbursement structures on innovation are also uncertain: payment parity with traditional care might accelerate adoption, but absent evidence of performance superior to that of freemium or subscription products, high cost-sharing and a lack of provider interest might still limit uptake. Many AI mental health tools still enter the market without clearly defined pathways for reimbursement or integration into health care delivery systems, and few studies systematically assess how monetization strategies affect adoption rates, accessibility, patient outcomes, or the long-term viability of platforms. Some argue that a sustainable long-term model may require establishing something comparable to a formulary for prescription drugs, in which digital therapeutics are evaluated, approved, and reimbursed independently from clinician-provided services.100

Responses


Daniel Barron
 

Economic considerations are not peripheral—they are central to the adoption of AI in mental health care. As with all clinical innovations, insurers and reimbursement models will ultimately determine whether AI tools gain meaningful traction. The critical question is not just whether an AI tool is effective but whether it does a clearly defined clinical job better, more safely, or more efficiently than alternatives, at a justifiable cost (see Table 1).

This may sound cold in the context of mental health care—where discussions of money are sometimes taboo—but we must reckon with the system we actually have. In the United States, health care is structured as a business. Clinicians are paid. Facilities have rent, utilities, and liability exposure. Medications cost dollars and cents. So do servers, engineers, APIs, and deployment cycles for AI tools. Any solution that ignores this economic scaffolding is unlikely to scale, no matter how well-intentioned. Furthermore, wishing that the system were “better reimbursed” or “more fair” is simply a fatal denial of clinical reality, as evidenced by multiple failed digital mental health start-ups.

AI tool developers must internalize this reality. Building a theoretically cool or technically impressive product is not enough. A viable tool must answer: What job does it do? How does it save money? And, critically, how does it help someone get paid? Product-market fit in U.S. health care means fitting into workflows and into budgets. Without a reimbursement strategy, an AI tool is just a prototype, not a business.

One opportunity for AI tools is to help health care organizations and payers become more rigorous in understanding their own costs. While cost accounting is foundational to most industries, health care often functions with opaque, inconsistent pricing that makes it difficult to assess what anything actually costs—or what cost savings a technology offers. This lack of internal visibility stymies rational adoption decisions and makes it difficult to construct useful business models or projections.

To make progress, we should adopt a job-based reimbursement model. Whether the AI supports medication adherence, delivers cognitive behavioral therapy (CBT) modules, or flags suicide risk, payment should depend on evidence that it performs that specific task better or more efficiently than standard care, with acceptable risks. As Michael Abràmoff and colleagues argue, aligning financial incentives with ethically responsible AI creates a system that rewards value, not novelty.101

Operationally, the sector would benefit from a “digital formulary”: a structured reference linking reimbursable AI tools to validated clinical jobs, akin to how drug formularies guide pharmaceutical access. This would enable payers to cover what works, avoid reimbursing what doesn’t, and help vendors anchor their pricing to tangible, repeatable value.
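
To make the digital formulary idea concrete, here is a minimal sketch of what a single entry might look like as a data structure. The field names, tool name, evidence description, billing code, and price are hypothetical illustrations only, not an existing standard or any payer’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FormularyEntry:
    """One hypothetical entry in a digital formulary for AI mental health tools."""
    tool_name: str               # product name as marketed
    clinical_job: str            # the validated clinical job the tool is paid to do
    evidence_basis: str          # trial or clearance evidence supporting that job
    regulatory_status: str       # e.g., FDA-cleared device vs. general wellness app
    billing_code: Optional[str]  # HCPCS/CPT code if one exists, else None
    max_price_usd: float         # negotiated ceiling anchored to demonstrated value

# Illustrative only: the tool, evidence, and price below are invented placeholders.
entry = FormularyEntry(
    tool_name="ExampleCBTBot",
    clinical_job="Deliver guided CBT modules for mild-to-moderate depression",
    evidence_basis="Randomized controlled trial vs. waitlist control",
    regulatory_status="FDA-cleared digital mental health treatment device",
    billing_code=None,  # assigned only after payer review
    max_price_usd=45.00,
)
print(entry.tool_name, "->", entry.clinical_job)
```

Structuring entries this way would let payers query what each tool is reimbursed to do, on what evidence, and at what price ceiling, mirroring how a drug formulary ties coverage to indication.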

Equity must also be intentionally built into these models. Reimbursement that overlooks digital literacy, language access, or device availability risks exacerbating disparities. (Though I do pause to consider that medications are reimbursed without consideration for whether a patient will actually take that medication or whether there are structural barriers to a medication’s success.) As Masab Mansoor and Kashif Ansari show, reimbursement policy itself can be a lever for equity—as evidenced by improved youth mental health outcomes through well-designed telehealth funding.102

Finally, who pays should depend on the job being done. An administrative AI might be covered as operational overhead. A diagnostic AI might require fee-for-service reimbursement or inclusion in value-based care contracts. But public-sector funding—like the National Institutes of Health’s Bridge2AI—should play a critical role in underwriting development for tools that address foundational infrastructure or serve high-need populations.

In sum: health care may be a human right in principle, but in practice in the United States it is a business. AI adoption in health care must therefore be a business proposition. Tool developers must think like businesspeople. And reimbursement should not simply facilitate adoption—it should shape it, steering AI toward clinically important, economically rational, and ethically sound use cases.

 


Arthur Kleinman
 

The role of insurers and reimbursement models raises a set of important questions. Barriers to access were already identified as a crucial problem for digital tools during the height of the COVID-19 pandemic. In China especially, a large body of evidence demonstrates that older adults are often unable to use QR codes and other digital tools effectively.103 The same situation exists in the United States. Developing simplified AI practices that can be easily explained to and used by older adults will be an important part of AI’s future development, and it is what holds real practical significance for the use of AI by the elderly throughout our society.

Cost is an equally pressing concern. Profit distorts care: for-profit hospitals perform more poorly on clinically important indices than nonprofit hospitals. Insurance company and federal government attempts to control cost—as the pharmaceutical domain so sadly shows—have been so inadequate that cost is in a state of crisis. Given the reality of health care generally, controlling the cost of AI-informed mental health interventions is likely to become a serious and confusing issue. To prevent the worst abuses—including a future in which these interventions are available only to well-to-do Americans—organizations with the requisite legal and bureaucratic standing need to champion and ensure the performance of systematic, ongoing reviews.

 


Richard Frank
 

AI has the potential to make productive contributions to an array of functions within the mental health delivery system. These include direct interaction with people who have mental illnesses and emotional needs (e.g., via Woebot), expansion and enhancement of the public mental health infrastructure (e.g., identifying risks to individuals and communities), improved efficiency in administration (i.e., back-office functions), provider extenders, and improved provider and service quality.

Mental health care delivery poses special opportunities and challenges for the application of AI. There are ongoing concerns about access to treatment (geographically, by hours of availability, and by payer type), while at the same time a significant segment of people who use mental health services neither meet diagnostic criteria for a mental illness nor report significant impairments.104 Mental health care delivery is also characterized by weak accountability processes.105 The regulation and financing of AI applications in mental health care are inseparable: issues of safety and effectiveness, along with data privacy, are essential to the establishment of reimbursable services. The literature to date suggests that the development of chatbots and other online direct treatments is running well ahead of the policies and procedures that would ensure safety and effectiveness. Thus, the discussion of pricing that follows largely assumes integration with the regulation of safety, effectiveness, and data privacy.

AI functions, mental health, and payment policy

Some simple economics: The market for AI services related to mental health care can be seen as having several distinct segments in which the economic dynamics differ. In one segment, the AI vendor will submit bills directly to an insurer, in which case the payer will negotiate a price with AI vendors. Reimbursement might be conditioned on an initial visit with a human clinician and on evidence of safety and efficacy, such as FDA approval. The price will depend on the terms of coverage (cost sharing), the intensity of competition in the market, and the costs of developing and delivering the application. While thousands of AI therapeutic applications have already been created, how many will ultimately pass safety and efficacy scrutiny and become eligible for reimbursement by Medicare, Medicaid, and commercial insurers remains unknown. So the effective level of competition is highly uncertain.

A second segment comprises AI applications that improve the efficiency of practices; for example, through better back-office functions that increase revenues, cut costs, and reduce no-shows. In these cases, AI vendors will likely sell directly to providers, and, depending on the costs to the provider, the AI service will be folded into the existing price paid to the provider for services rendered. The presumption is that the savings to the practice will justify the AI investment, which becomes part of the service bundle.

A third segment involves AI services that do not result in savings to a practice but instead improve the quality of care. These might include continuous patient monitoring systems or diagnostic assistance applications. Such products could be especially beneficial in today’s mental health delivery system, which is characterized by weak accountability due to quality-of-care metrics that are often crude, unevenly applied (if at all), and seldom tied to any consequences for a practice.

A fourth market segment involves AI applications that enhance the public mental health infrastructure and/or have spillover effects, and thus have a public good character. Examples might include applying machine learning to predict suicide attempts following emergency department visits or to predict suicidal or violent behavior among callers to emergency hotlines (e.g., 988 or 911). Because these applications are tied to crisis response systems that, at best, rely on public financing to cover recurrent costs, those systems seldom have the technical capacity to implement them.

Payment models will vary by function

For direct-to-consumer therapeutic chatbot applications, given the uncertainty about their safety and effectiveness apart from contact with a human clinician, a cautious point of departure would be to pay separately for chatbot sessions that have been approved as safe and effective and that are initiated after contact with a human clinician.106 In a market with limited competition, reliance on reference prices, set at a fraction of human clinician prices, would be a practical and efficient approach to budgeting. Public payers such as Medicare should be mindful of the social benefits of promoting innovation in the AI and mental health sector.
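
To illustrate the reference-pricing logic in the simplest terms, here is a minimal sketch; the dollar amount and the fraction are hypothetical assumptions, not proposed rates.

```python
def chatbot_reference_price(clinician_session_price: float,
                            reference_fraction: float) -> float:
    """Price a chatbot session as a fixed fraction of the human clinician rate."""
    return clinician_session_price * reference_fraction

# Hypothetical numbers: a $150 clinician session and a 0.3 reference fraction
# would anchor the reimbursable chatbot session at $45.
print(f"${chatbot_reference_price(150.00, 0.30):.2f}")  # -> $45.00
```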

Services that improve the efficiency of practices and potentially reduce the costs of providing care are likely to be adopted out of economic self-interest. Prices and service arrangements could be negotiated between AI vendors and the provider practices without the direct involvement of third-party payers.

For a continuous patient monitoring application and other services that improve the quality of care but do not save money (absent a robust quality-adjusted payment system), an add-on fee could be charged for using the AI application. For services that will not enter an existing field of robust competition, pricing will require some cost finding. Alternatively, reimbursement systems that rely on quality performance to establish payment might set quality-based payment increments that take account of the costs of the AI technology.
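
As a purely illustrative sketch of that quality-based alternative: the base rate, quality thresholds, and cost figures below are invented assumptions, not proposed policy.

```python
def quality_adjusted_payment(base_rate: float,
                             quality_score: float,
                             ai_cost_per_episode: float) -> float:
    """Add a quality-based increment to the base payment; when the quality
    score clears a threshold, the increment also covers the AI tool's cost."""
    if quality_score >= 0.8:    # hypothetical high-quality threshold
        increment = 0.10 * base_rate + ai_cost_per_episode
    elif quality_score >= 0.6:  # hypothetical intermediate threshold
        increment = 0.05 * base_rate
    else:
        increment = 0.0
    return base_rate + increment

# Hypothetical numbers: a $200 base episode payment, a 0.85 quality score,
# and a $15 per-episode AI monitoring cost yield a $235 total payment.
print(quality_adjusted_payment(200.00, 0.85, 15.00))  # -> 235.0
```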

Finally, building AI capacity for services that are publicly supported, have significant spillover effects, and are not tied to uniform, well-defined services might most effectively be paid for through public or philanthropic grants. That could be the case for machine learning applications that use speech patterns to predict suicide attempts in the context of 988 calls, or for machine learning algorithms in emergency department settings that incorporate data from electronic health records.107

Endnotes