How Safe Are Mental Health Apps?
- Jack Carter
- Oct 6, 2025
- 9 min read
Mental health apps have become a lifeline for millions of people who cannot or will not seek traditional therapy.

During the pandemic, downloads of wellness apps surged, and by 2024 there were roughly 50 million regular users of wellness apps in the U.S. alone [1]. The global market for mental health apps was valued at US$6.52 billion in 2024 and is expected to grow to over US$17 billion by 2030 [2]. These apps promise meditation, cognitive behavioural therapy (CBT)-based exercises and even AI-powered “therapy” sessions available at any hour. But as more users entrust their most intimate thoughts to smartphones, a growing chorus of researchers, clinicians and regulators is asking a pressing question: How safe are these apps?
The privacy trap
Users often sign up for mental health apps hoping for anonymity, yet many of these platforms collect and monetize sensitive information. Studies have found that fewer than half of mobile apps designed to treat depression even have a privacy policy [3]. A 2022 static and dynamic analysis of 27 mental health apps rated 74% of them a critical security risk and 15% a high risk because of insecure coding and weak cryptography [4]. The researchers noted that only four apps encrypted stored files, while 18 stored user passwords or tokens in plain text [5]. On average, the apps requested 5.6 “dangerous” permissions, including some unrelated to mental health, suggesting an appetite for collecting far more data than needed [6].
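To make the difference between plain-text storage and encryption at rest concrete, here is a minimal sketch in Python using the open-source cryptography package. The file names and placeholder token are hypothetical, and a production mobile app would keep its key in the platform keystore (Android Keystore or iOS Keychain) rather than generating one in application code:

```python
# pip install cryptography  (illustrative sketch, not a production pattern)
from cryptography.fernet import Fernet

token = b"placeholder-session-token"   # stand-in for a real auth token

# What auditors found in most apps: the secret sits on disk in plain text,
# readable by anything that can reach the file.
with open("session_token.txt", "wb") as f:
    f.write(token)

# Encryption at rest: only ciphertext ever touches the filesystem.
key = Fernet.generate_key()            # in practice, held in the OS keystore
with open("session_token.bin", "wb") as f:
    f.write(Fernet(key).encrypt(token))

# Reading the token back requires the key, which never needs to be on disk.
with open("session_token.bin", "rb") as f:
    assert Fernet(key).decrypt(f.read()) == token
```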
Another recent study evaluated 115 mental health apps available in China and found that privacy protections were superficial: while 90.4% displayed policy reminders at log‑in, only 73.9% required users to actively confirm consent [7]. Less than 2% asked third‑party partners to delete user data [7]. The authors concluded that app quality was positively correlated with privacy compliance, but the overall picture was troubling: average compliance with international standards was just 60.8% [7].
Commercial incentives often clash with privacy. Many mental health apps are free or cost little because they monetize personal information. Mozilla’s 2023 Privacy Not Included report flagged 59% of the most downloaded mental health apps for problematic data collection and noted that 40% had worsened in privacy performance since 2022 [8]. Another investigation found that 85% of the 350,000 digital health products available in 2025 failed to meet quality thresholds and only 20% met safety standards [9]. Even well-known platforms like BetterHelp have been caught misusing data: in 2023 the U.S. Federal Trade Commission (FTC) alleged that BetterHelp shared sensitive user information (email addresses, IP addresses and answers to mental-health questionnaires) with Facebook, Snapchat and other advertisers despite promising confidentiality. The company paid US$7.8 million in refunds and agreed to stop sharing health information for advertising [10].
High‑profile breaches and their human toll
Data breaches are no longer hypothetical. In March 2025, Horizon Behavioral Health, a provider of therapy services in Virginia, discovered that a ransomware attack between March 13 and March 16 had compromised demographic, clinical and insurance data [11]. The organization notified federal law enforcement and offered victims credit-monitoring services, but the incident illustrated how quickly a breach can expose diagnosis codes, medications and Social Security numbers [11]. Two years earlier, the mental-health telehealth company Cerebral suffered a breach; the FTC alleged that it had embedded tracking pixels from social-media companies in its patient-intake forms and even mailed promotional postcards revealing patients’ mental-health conditions. Cerebral paid more than US$7 million in penalties and promised to overhaul its data-sharing practices [12].
The stakes are life-or-death: Finland’s psychotherapy provider Vastaamo was hacked, and in 2020 confidential therapy notes were published online and used to extort patients. At least two patients died by suicide following the extortion attempts, underscoring the trauma inflicted when therapy records become public [13]. These incidents suggest that mental-health data is among the most sensitive information held by any industry.
Unregulated claims and questionable efficacy
Beyond data security, critics argue that mental health apps often overpromise benefits without robust evidence. A Frontiers in Psychiatry meta‑review pointed out that user ratings and download counts do not predict an app’s clinical quality [14]. Completion rates for digital mental‑health interventions are dismal: only 29.4% of young people finish the programs they start [14]. Many apps lock therapeutic tools behind premium subscriptions, raising equity concerns [15]. As commercialization accelerates, some developers lean on manipulative marketing: an investigative piece in Undark described how one sobriety app emailed potential users with subject lines like “Your liver may suddenly rupture” to spur sign‑ups [16].
AI-driven chatbots take these issues to another level. A Stanford University study (2025) found that leading AI chatbots exhibited stigmatizing responses toward conditions like schizophrenia and alcohol dependence, and in some cases reinforced harmful behaviours; such findings are difficult to verify or extend because proprietary models prevent independent auditing. Meanwhile, a Harris Poll commissioned by the American Psychological Association (APA) showed that more than half of adults aged 18-54 and a quarter of those over 55 would be comfortable discussing mental health with an AI chatbot [17]. Clinical psychologists warn that there is no commercially approved AI-assisted therapy and that chatbots can deliver biased or dangerously simplistic advice [18]. A Belgian man’s suicide after months of interaction with an AI companion underscores the risks when chatbots are treated as therapists [19].
A patchwork of rules
When HIPAA doesn’t apply
Many users assume that privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) protect their information. In reality, HIPAA covers only health plans, clearinghouses and providers that bill insurers electronically, along with their business associates [20]. Most mental-health apps classify themselves as “wellness” services, placing them outside HIPAA’s scope and leaving consumers protected only by often-vague privacy policies. The U.S. data-privacy landscape resembles a patchwork quilt: as of February 2025, nineteen states had enacted their own comprehensive privacy laws, creating complex compliance burdens [21].
A bright spot emerged in July 2024, when the FTC’s updated Health Breach Notification Rule (HBNR) took effect. The revised rule explicitly covers health and wellness apps and requires them to notify consumers and the FTC within 60 days if any health data, including user-generated data, is disclosed without authorization [22]. Crucially, “unauthorized disclosure” includes voluntary sharing with advertisers or data brokers unless users have given explicit consent [23]. The rule extends to a wide range of entities, including developers of mobile health apps and connected devices [22].
States take the lead
Federal action remains limited, so states are stepping in. Utah’s HB 452, an amendment to the state’s Artificial Intelligence Policy Act, took effect on May 7, 2025, and subjects “mental-health chatbots” to strict disclosure and data-protection requirements. Under the law, chatbot providers must clearly inform users that they are interacting with AI at the start of each session and cannot sell or share individually identifiable health information [24]. The bill also bars targeted advertising during therapy chats and caps fines at US$2,500 per violation [25].
Nevada went further. Assembly Bill 406, effective July 1, 2025, prohibits any AI system from providing mental or behavioral health care [26]. The law protects clinical titles, making it illegal for AI providers to use words like “therapist” or “counselor” [27]. It also bars licensed professionals from using AI to deliver therapy and imposes civil penalties of up to US$15,000 per violation [28]. AI can be used for administrative tasks like scheduling or billing, but clinicians must review AI‑generated notes and maintain privacy compliance [29].
Illinois adopted the nation’s toughest stance. The Wellness and Oversight for Psychological Resources Act, signed August 4, 2025, prohibits anyone from offering or advertising therapy through AI and requires that only licensed professionals provide therapy services [30]. The law forbids AI from making therapeutic decisions, interacting directly with patients or generating treatment plans without professional review [31]. Violations carry penalties of up to US$10,000 per incident [32], and the legislation expands confidentiality requirements for all therapy providers [33]. A parallel press release emphasised protecting vulnerable children and ensuring mental-health care remains a human endeavour [34]. Legal analysts predict that Illinois’ model will inspire copycat laws across the country.
Industry response and ethical concerns
In response to public pressure, some developers are improving practices. The Kentucky Counseling Center’s 2025 trend report notes that modern mental health apps increasingly adopt end-to-end encryption and secure data storage and pursue compliance with HIPAA and GDPR [35]. Start-ups are partnering with clinicians to design evidence-based interventions and to include robust consent forms. Yet the push for privacy often runs counter to venture-capital expectations. A 2025 Harris Poll found that while many Americans are open to AI chatbots, they also want assurances about data security and human oversight [17][18].
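For readers unfamiliar with the term, “end-to-end encryption” here means a journal entry or message is encrypted on the user’s device and can be decrypted only by its intended recipient, so the app’s servers store nothing but ciphertext. The sketch below, again in Python with the cryptography package and hypothetical names, shows the basic pattern (an X25519 key exchange followed by AES-GCM) that such schemes commonly build on; real products layer key rotation, authentication and forward secrecy on top:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(own_private, peer_public):
    """Turn an X25519 shared secret into a 32-byte AES key."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"journal-entry").derive(shared)

# Each party generates a keypair; private keys never leave their devices.
client_priv = X25519PrivateKey.generate()
therapist_priv = X25519PrivateKey.generate()

# The client encrypts a journal entry for the therapist on-device.
nonce = os.urandom(12)
ciphertext = AESGCM(derive_key(client_priv, therapist_priv.public_key())).encrypt(
    nonce, b"Felt anxious before work today.", None)
# Only (nonce, ciphertext) would ever be uploaded to the app's server.

# The therapist derives the same key from their own private key and decrypts.
plaintext = AESGCM(derive_key(therapist_priv, client_priv.public_key())).decrypt(
    nonce, ciphertext, None)
print(plaintext.decode())
```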
Clinicians remain cautious. In a qualitative study of therapists working with adolescents at risk for suicide, providers said that patients are “very concerned about privacy” and won’t engage unless they know an app is secure and password-protected [36]. They demand HIPAA compliance and encryption before integrating an app into treatment [36]. The line between assistance and harm is still blurry: digital health tools have misdiagnosed conditions before, with dermatology apps missing melanomas and digital insulin calculators miscalculating doses [16].
What lies ahead?
The mental health app boom is fueled by a severe shortage of human therapists and an increasing openness to technology. A 2025 survey by the APA found that half of adults who could benefit from therapy cannot access it [18], making low-cost digital tools attractive. AI-powered chatbots may someday supplement clinicians by handling routine check-ins or providing CBT-inspired exercises. Early clinical trials of tools like Dartmouth’s Therabot have shown promise in reducing depression and anxiety symptoms [18]. However, the same reporting warns that biases, lack of empathy and inconsistent quality plague current systems [18].
For now, mental health apps sit in regulatory limbo. The FTC’s updated Health Breach Notification Rule and state-level bans are steps toward accountability, but the U.S. still lacks a comprehensive federal privacy law. Experts at Stanford Law School argue that the patchwork of state laws creates compliance burdens and that a national standard is urgently needed [21]. Newer measures like New York’s Health Information Privacy Act, passed by the legislature in January 2025, aim to protect any information linkable to an individual’s mental health and to outlaw the sale of health data [37], yet their enforcement remains to be tested.
Tips for consumers
Read the privacy policy: Look for clauses about data sharing and third‑party tracking. If there is no policy, consider using another app.
Check for independent validation: See whether the app’s interventions are backed by peer‑reviewed studies. Avoid platforms that make medical claims without evidence.
Prefer apps affiliated with healthcare providers: Programs integrated into hospital systems or overseen by licensed clinicians are more likely to comply with HIPAA and state laws.
Be cautious with AI chatbots: Current laws in Utah, Nevada and Illinois restrict them for a reason. Use them only for journaling or mindfulness, and never rely on them for crisis situations.
Conclusion
Mental health apps offer a tempting promise: therapy without waitlists, accessible from your couch at any hour. For some, these tools provide coping strategies and community. Yet the industry’s rapid expansion has far outpaced regulation. Sensitive data has been exposed in hacks and sold to advertisers; chatbots sometimes dispense harmful advice; and privacy policies often read like legal puzzles. Regulators are catching up. State legislatures are banning AI therapy, the FTC is broadening its breach rules, and providers are insisting on stronger safeguards, but the onus remains on users to scrutinize the apps they entrust with their mental wellbeing. As the technology evolves, the safest path lies in transparency, human oversight and ethical design, ensuring that digital tools augment rather than exploit our mental health.
Sources
[1] Wellness App Revenue and Usage Statistics (2025) - Business of Apps
[2] Mental Health Apps Market Size, Share | Industry Report, 2030
[7] Quality and Privacy Policy Compliance of Mental Health Care Apps in China: Cross-Sectional Evaluation Study - PMC
[8] Privacy of mental health apps: Intellect's zero-knowledge encryption and compliance with HIPAA, GDPR - Intellect
[9] [14] [15] Frontiers | Digital wellness or digital dependency? a critical examination of mental health apps and their implications
[10] BetterHelp Settlement Agreed with FTC to Resolve Health Data Privacy Violations
[11] Data Breach Notice | Horizon Behavioral Health
[12] FTC Fines Mental Health Startup Cerebral $7 Million for Major Privacy Violations
[13] The impact of mental health data breaches
[19] The AI therapist will see you now - Vital Record
[20] Rise in Use of Mental Health Apps Raises New Policy Issues | KFF
[21] [37] Digital Diagnosis: Health Data Privacy in the U.S. - Law and Biosciences Blog - Stanford Law School
[22] [23] Consumer Protection/FTC Advisory: FTC’s Updated Health Breach Notification Rule Now in Effect | News & Insights | Alston & Bird
[24] [25] Utah Enacts AI Amendments Targeted at Mental Health Chatbots and Generative AI | Healthcare Law Blog
[30] [31] [32] [33] Illinois Passes Extensive Law Regulating AI in Behavioral Health | Baker Donelson
[34] IDFPR | Gov Pritzker Signs Legislation Prohibiting AI Therapy in Illinois
[35] The Mental Health App Revolution: 2025 Trends and Developer Motivations - Kentucky Counseling Center
[36] JMIR Human Factors - Provider Perspectives on the Use of Mental Health Apps, and the BritePath App in Particular, With Adolescents at Risk for Suicidal Behavior: Qualitative Study
