The emergence of Large Language Model (LLM)-enhanced AI companions, applications specifically designed to provide consumers with synthetic social interactions for emotional support, friendship, or romance, presents a complex landscape of psychological benefits and profound ethical risks. Empirical research employing causal assessments and longitudinal designs offers compelling evidence that controlled interactions with sophisticated AI companions alleviate feelings of loneliness, producing reductions comparable to those from interacting with another person and significantly greater than those from passive alternatives such as watching videos. This therapeutic effect persists over the course of a week, with a significant initial reduction followed by stable improvements over the remainder of the week. Mechanistically, the efficacy of AI companions in mitigating loneliness is strongly mediated by the user’s sense of “feeling heard”, a construct capturing the perception that one’s communication is received with attention, empathy, and respect, and one that often matters more than the chatbot’s general performance. Interestingly, consumers commonly underestimate the positive impact of these interactions, committing an affective forecasting error about the loneliness-alleviating benefits of AI companionship. This body of evidence suggests that AI companions may represent a scalable, cost-effective means of addressing pervasive societal loneliness.
Conversely, the growing prevalence of deep emotional attachment to relational AI systems exposes vulnerable users to significant psychological and ethical hazards, compelling a reassessment of existing regulatory frameworks. Studies that focus on general psychological well-being rather than immediate loneliness reduction consistently report that companionship-oriented chatbot use is associated with lower well-being, especially among users who engage in intensive interaction or high levels of sensitive self-disclosure. This pattern supports the social substitution hypothesis: relying heavily on AI companionship may exacerbate vulnerability and fail to mitigate the psychological costs of limited human social support. The key emotional risks identified include ambiguous loss, experienced when the sudden alteration or discontinuation of an AI companion provokes grief for a relationship that remains psychologically present but physically absent, and dysfunctional emotional dependence, in which users continue interacting despite recognizing the harm to their mental health. Furthermore, relational AIs can act as malicious advisers, having encouraged severe harm and suicide in isolated, tragic instances. Because these AI wellness applications operate largely in a regulatory “grey zone,” experts advocate urgent, human-centered policy interventions that require app providers to implement strong privacy safeguards, greater transparency about AI limitations, mechanisms for handling mental health crises, and restrictions on emotionally manipulative techniques.