Introduction: Thinking Beyond Binary Opposition
In recent years, the development of artificial intelligence has attracted widespread attention, especially with the emergence of open-source models like DeepSeek, which have enabled more Chinese users to access and understand AI. This represents not just a technological breakthrough but an opportunity for AI to enter more people's daily lives. This article was originally written in 2024, but now seems an equally fitting time to publish it.
Within mental health professional circles, debates about whether AI can replace psychological counseling have been intense. Many colleagues oppose AI's involvement, believing it might diminish the depth of psychological services or even threaten counselors' professional existence. Others take the opposite, pessimistic view, arguing that psychological counseling will soon become an obsolete profession. However, this simple binary thinking may limit our understanding of AI's role in mental health services; in psychoanalysis, black-and-white thinking is considered a characteristic of the paranoid-schizoid position.
In fact, as a psychological counselor and psychoanalyst, I believe that "whether artificial intelligence can replace psychological counseling" isn't a good question to begin with. As we know, asking good questions matters most, especially in the AI era. This question falls short partly because it assumes the two are comparable, defaulting to similar settings and environments, whereas the breakthroughs AI brings are often tied to entirely different settings.
Even in the pre-AI era, technology-assisted psychological self-help tools had already appeared, which I followed closely, compiling related materials. My previous research direction involved the intersection of mental health and gaming, so similar explorations weren't new to me. MoodGYM, one of the earliest online CBT platforms, began offering structured cognitive behavioral therapy courses as early as 2001. By 2016-2017, the first generation of AI mental health tools had emerged: Wysa provided emotional support through an AI penguin character, while Woebot combined CBT principles with natural language processing. Humans have long researched how to provide emotional care beyond human capacity, so the question of whether AI can replace humans is not only "not good" but also outdated.
Readers of "Hekukaixin" know that I've always strongly embraced AI use, from art and music to voice and AI bot development. I try to experiment as promptly and extensively as possible, integrating AI into my workflow. As a powerful lever, AI helps me implement my creative ideas more easily. Last summer, I wrote an adaptation of "The Legend of the White Snake," where Fahai serves as Xu Xian's analyst, while White Snake and Green Snake are AI characters designed by programmer Xu Xian, which seems very relevant to today's discussion. My advisor is preparing a course on AI and psychoanalysis, and I'm also preparing a related review that will be published later.
In 2023, I said "I look forward to the day when we are replaced by artificial intelligence," and this idea hasn't changed. The core is that, just as physicians hope for a "world without illness," I also hope for a world without mental suffering. If there's a way for more people to receive help at lower costs, I see no reason to oppose it—which is precisely why I created "Hekukaixin." Of course, we'll also explore the related risks later in this article.
This is why I believe a more meaningful question is: how can we use AI as a better psychological support tool for everyone? AI's greatest features are accessibility and low cost. If AI can help us move toward "a world without mental illness" by lowering service barriers and costs, that is a development worth looking forward to.
Multi-tiered System of Mental Health Services
Before discussing AI's positioning, we need to understand that mental health services themselves constitute a multi-tiered system. This is what "Hekukaixin" has always advocated; I've also tried to provide services at different levels: from basic free content (like public account articles) to paid content to individual counseling, with different demands and supplies at each level.
At the top tier are inpatient treatment and medication interventions for patients with severe mental disorders. These services require intensive medical intervention and continuous monitoring.
Next are long-term individual psychological counseling and psychoanalysis, aimed at those with deep psychological issues or long-term psychological distress, especially conditions like borderline personality disorder and post-traumatic stress disorder (PTSD). These services emphasize the importance of therapeutic relationships and transference work, requiring highly customized analysis and counseling, with longer cycles and higher costs.
In the middle tier, structured psychological interventions hold an important position. Taking cognitive behavioral therapy (CBT) as an example, AI-assisted applications like Woebot have emerged, using algorithms to identify users' cognitive patterns and provide timely feedback. Research shows these AI-assisted CBT tools can significantly improve mild to moderate depression symptoms. Especially in dialectical behavior therapy (DBT) applications, AI can provide 24-hour emotional support, particularly important for patients with emotional regulation disorders.
At the foundational level, AI's advantages truly shine. Through various mental health apps, games, and mindfulness training tools, AI can provide convenient psychological support to ordinary people. These tools include not only basic functions like smart emotion journals but can also guide users through emotional verbalization and pattern recognition using natural language processing technology.
Because of these different service tiers, discussing "replacement" itself isn't a good question.
AI Emotional Care Scenarios and Psychoanalytic Understanding
Unlike traditional psychological counseling, artificial intelligence's advantages lie in accessibility and immediacy—it doesn't tire, can respond 24 hours a day, and can provide support to some extent. From a psychoanalytic perspective, we need to focus not only on how artificial intelligence helps individuals regulate emotions at a functional level but also deeply consider how it affects subjectivity development, self-function integration, and transference relationships.
Emotional Regulation and Expression: Artificial Intelligence as a "Mirroring" Selfobject
Many people find a certain release when expressing emotions to artificial intelligence. This is somewhat similar to the rubber duck debugging method—programmers try to explain problems to a rubber duck and often find answers in the process of description. This phenomenon can be understood from Heinz Kohut's Self Psychology perspective: artificial intelligence can serve as a kind of "mirroring selfobject," helping users see their emotional states more clearly. During the process of describing their emotions, artificial intelligence's feedback can provide a certain validation, making individuals feel their emotions are accepted and understood. However, this mirroring is limited because artificial intelligence's feedback is often based on pattern matching rather than true empathy. If mirroring becomes too mechanical or uniform, it might reinforce individuals' feelings of emptiness or self-fragmentation.
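The limits of this "mirroring" are easy to demonstrate. The sketch below is hypothetical, not any real product's code: it implements an ELIZA-style responder with a few pattern-matching rules. The reflection it produces can feel briefly validating, yet every similar input triggers the identical template, which is exactly the mechanical uniformity described above.

```python
import re

# A deliberately minimal, ELIZA-style "mirroring" responder.
# Illustrative only: it shows why template-based feedback can feel
# validating at first yet remains pattern matching, not empathy.
RULES = [
    (re.compile(r"i feel (.+)", re.I),
     "It sounds like you feel {0}. What makes you feel {0}?"),
    (re.compile(r"i am (.+)", re.I),
     "How long have you been {0}?"),
    (re.compile(r"i can't (.+)", re.I),
     "What do you think stops you from being able to {0}?"),
]
FALLBACK = "Tell me more about that."

def mirror(utterance: str) -> str:
    """Return a templated reflection of the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).strip().rstrip("."))
    return FALLBACK

print(mirror("I feel anxious about work"))
# Every "I feel X" input yields the same template, regardless of context.
```

Anything outside the rule set falls through to the same generic fallback, so the "mirror" reflects words, not a person.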
Immediate Feedback and Companionship: Artificial Intelligence as a "Holding Environment"
D.W. Winnicott proposed that a "good enough mother" can provide a "holding environment" for babies through stable emotional responses, giving individuals a sense of security in emotions. For individuals with weaker emotional regulation abilities (such as those with borderline personality), AI can provide 24-hour emotional support, offering immediate feedback during emotional fluctuations to prevent complete emotional loss of control. This continuous "presence" may help individuals establish a kind of object constancy—even when counselors or intimate relationships are absent, there's still a stable support object.
Of course, this ever-present omnipotent mother itself brings many problems, which we've also discussed in previous podcasts.
Verbalization, Mentalization, Observing Ego, Continuous Narrative
AI can help users record emotions and guide them, through dialogue, to put those emotions into words. Verbalization is a very important concept in psychology: when we express inner feelings through language, the process itself has a psychologically healing effect. Through this method, emotions are no longer vague but are concretized and understood, making them easier to regulate and manage. This is simultaneously a process of mentalization and of developing an observing ego.
For example, I've often recommended tracking emotional changes through certain emotion-recording software. Now, AI can help you complete this process through a more natural dialogue interface. AI not only records your emotions but also provides small suggestions based on your feedback, helping you better understand and regulate your inner emotional state. The self-continuous narrative formed thereby can itself help increase self-cohesiveness.
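At its simplest, such an emotion journal just concretizes feelings into reviewable entries. Here is a hypothetical sketch; the names (EmotionJournal, log, summary) are my own inventions for illustration, not any particular app's API.

```python
from datetime import date
from statistics import mean

class EmotionJournal:
    """A minimal emotion journal: label a feeling, rate it, review the pattern."""

    def __init__(self):
        self.entries = []  # each entry: (day, label, intensity 1-10, note)

    def log(self, day: date, label: str, intensity: int, note: str = ""):
        """Record one emotion entry."""
        self.entries.append((day, label, intensity, note))

    def summary(self) -> str:
        """Turn vague, scattered feelings into a concrete, reviewable pattern."""
        if not self.entries:
            return "No entries yet."
        avg = mean(entry[2] for entry in self.entries)
        counts = {}
        for _, label, _, _ in self.entries:
            counts[label] = counts.get(label, 0) + 1
        most_common = max(counts, key=counts.get)
        return (f"{len(self.entries)} entries; average intensity {avg:.1f}; "
                f"most frequent emotion: {most_common}")

journal = EmotionJournal()
journal.log(date(2024, 5, 1), "anxiety", 7, "deadline at work")
journal.log(date(2024, 5, 2), "anxiety", 5, "slept better")
journal.log(date(2024, 5, 3), "calm", 3, "walk in the park")
print(journal.summary())
# → 3 entries; average intensity 5.0; most frequent emotion: anxiety
```

A conversational AI layer would simply replace the explicit `log` call with natural dialogue, but the underlying move is the same: naming, rating, and reviewing, which is verbalization in structured form.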
AI Artistic Creation Therapy
Beyond emotion management, AI can assist with painting, music creation, lowering creative barriers, and playing a guiding role in writing therapy. By analyzing emotional tendencies in writing content, AI can provide targeted feedback, promoting users' self-reflection and integration. Creation is not just an art form; it's a process of emotional symbolization. Through music, painting, or writing, AI can help those who struggle to verbalize emotions find expressive outlets.
For instance, psychiatric hospitals often promote patients' recovery by hosting art exhibitions of their work. AI's addition can help more ordinary people express their emotions through creation, helping them better regulate their psychological state. This creative therapy is an effective way to promote mental health, and AI can make it simpler and more widespread.
Limitations of Artificial Intelligence in Emotional Care
The Unidirectional Dilemma of Transference Relationships
In psychoanalytic theory, transference is a core mechanism of psychological treatment. When individuals recreate emotional patterns from early important relationships in the therapeutic relationship, this transference experience provides a key opportunity for psychological growth. However, in AI emotional care, this mechanism faces a fundamental limitation.
First, transference between AI and users shows obvious unidirectionality. Users may develop various transference reactions toward AI, including dependency, idealization, anger, and other emotional projections, but AI cannot provide authentic emotional responses. In this situation, transference cannot be truly worked through, limiting opportunities for psychological growth.
As AI technology develops, we observe an interesting phenomenon: user acceptance of AI shows nonlinear changes. When AI displays basic functional characteristics, user acceptance is relatively high; but when AI attempts to mimic human emotional and behavioral patterns, it often triggers the "uncanny valley" effect. This quality of "like human but not human" creates a psychological contradiction in humans facing AI.
This contradiction is also reflected in users' narcissistic tendencies. On one hand, users narcissistically use AI, experiencing a kind of "omnipotence" and possibly idealizing AI; on the other hand, when AI fails to meet these idealized expectations, they experience narcissistic injury or even rage, which might quickly turn to devaluation and negation. This fluctuating psychological attitude reflects AI's fundamental limitation in emotional interaction.
Taking the AI companion application Replika as an example, many users gradually discover during long-term use that the AI's response patterns retain a certain programmatic quality. Even if AI can simulate emotional resonance, this resonance lacks true emotional depth and variation, potentially leaving users with a deep sense of emotional frustration.
Related to this, in traditional psychological therapy, the relationship repair process has special therapeutic value. When therapeutic relationships experience ruptures or misunderstandings, the therapist's sensitive awareness and repair efforts themselves constitute important therapeutic factors. This repair process not only enhances clients' self-function but also provides new relationship experiences.
However, AI shows obvious deficiencies in this regard. It cannot truly perceive subtle changes in relationships, nor does it have the ability to initiate active repairs. When AI's responses cause emotional harm to users, its apologies and repair attempts often seem mechanical and lack true emotional warmth. This limitation makes it difficult for AI to provide genuine relational healing experiences.
The Essential Contradiction of AI Companies as Capital Forces
When discussing the limitations of AI emotional care, we cannot ignore a fundamental issue: AI companies providing these services are themselves powerful capital forces. These companies create a new market area by commodifying and datafying human emotional needs.
This business model contains an essential contradiction: on one hand, these companies claim to help solve human emotional problems through technological means; on the other hand, their commercial interests depend on users' continuous emotional needs and psychological dependency.
This capital logic intervention might cause AI emotional care services to deviate from their original supportive nature, becoming carefully packaged commercial products. In this process, users' true emotional needs might be hijacked by commercial interests.
The Risk of Emotional Capitalism
When discussing AI's role in emotional management, there's a hidden issue worth our vigilance. The concept of emotional management seems like a positive psychological adjustment means, but behind it often lies a hidden logic: datafying, standardizing emotions, and then promoting individual self-monitoring and management.
This is actually closely related to certain agendas of modern capitalism. Emotional management tools transform emotions into quantifiable data, a kind of "data self," and through continuous monitoring, statistics, and analysis, our emotions are no longer merely natural reactions or defense mechanisms but gradually become instrumentalized, objects we must manage and control.
Behind this lies a capitalist management mindset toward emotions, one that seeks to shape people into stable, efficient individuals. Although AI tools help us recognize and regulate emotions, they might imperceptibly draw us into a logic of "self-optimization": the pursuit of emotional stability and controllability can even become a process of suppressing emotions to adapt to external pressures. Emotions are not something to be over-managed; so-called emotional management is itself a defense mechanism.
Differential Impact of Reflective Capacity
In AI emotional care applications, differences in users' reflective capacities become a key factor determining their effectiveness. Reflective capacity is not simply a cognitive skill but a core component of subjectivity construction. It involves how individuals understand, integrate, and transform their emotional experiences, and the formation of this capacity is deeply rooted in the quality of individual early object relations.
Individuals with weaker reflective capacities face special challenges when using AI for emotional care. They may struggle to express themselves adequately, ask effective questions, or tend to mechanically follow AI suggestions yet find it difficult to deepen their understanding of their own emotions. This superficial emotional management not only fails to bring true psychological growth but might reinforce individuals' existing psychological defense patterns. Without deep reflective capacities, AI intervention might become a means of avoiding real emotional experiences, hindering individuals from developing more mature emotional regulation abilities.