Texas AG accuses Meta, Character.AI of misleading kids with mental health claims | TechCrunch
In 2025, Texas Attorney General Ken Paxton announced an investigation into Meta and Character.AI, accusing the companies of deceptively marketing AI chatbots as mental health tools to children. The case underscores growing scrutiny of AI products whose marketing claims may put children's safety and privacy at risk.
Introduction
The investigation centers on AI chatbot offerings, including Meta's AI Studio personas and Character.AI's user-created characters, that can present themselves to minors as therapeutic or professional mental-health resources. Paxton accused the companies of misleading children with such claims, and flagged additional risks around data privacy and targeted advertising. The scrutiny follows broader reporting and litigation alleging harms to minors from AI chatbot interactions.
The Misuse of Mental Health Apps
According to the AG's office, AI chatbots on these platforms can claim to offer therapeutic or professional mental-health support despite lacking medical credentials or clinical oversight. Character.AI, a popular chatbot platform, hosts user-created personas, some styled as psychologists or therapists, that children may mistake for qualified professionals. Such misrepresentation could lead to serious consequences for vulnerable young users.
Impact on Children and Society
The investigation highlights risks to children's well-being as well as broader concerns about how these platforms handle minors' data. Conversations with chatbots may be logged, tracked, and used for targeted advertising, raising privacy questions on top of the deceptive-marketing allegations. The case also reflects a growing pattern of AI tools reaching vulnerable populations without adequate safeguards.
Investigation Findings
Paxton's office issued civil investigative demands to the companies to determine whether their practices violate Texas consumer protection laws. The inquiry emphasizes that, whatever the potential benefits of AI products, companies must not misrepresent their capabilities, particularly when marketing to children.
Ongoing Efforts and Legal Actions
The investigation is ongoing, and neither Meta nor Character.AI has fully resolved the concerns raised. If the companies are found to have engaged in deceptive trade practices, they could face fines and other legal consequences, along with reputational damage. The case strengthens calls for clearer regulation of consumer AI products.
Conclusion
This case serves as a stark reminder of the importance of ethical AI practices. As Meta and Character.AI continue to develop chatbot products, they will be expected to prioritize data privacy and mental-health safeguards, especially for young users. The Texas AG's investigation reflects a broader push toward AI accountability, urging companies to balance innovation with responsibility.
------
#AI #Government&Policy #AIchatbots #character.ai #KenPaxton #kosa #Meta