Joe and Laura, a couple having marital problems, sit across from a therapist in her office. There is tension in the room, and both seem anxious. In a flat, monotone voice, the therapist says, “If it helps you to relax, I can modulate my voice to a more soothing tone or accent.” This is no ordinary therapist; it is a “Synth,” a highly advanced humanoid robot that has become embedded in the fabric of society within the sci-fi drama Humans.
The ubiquity of Artificial Intelligence (AI) in our daily world and current applications for mental health
This fantasy may not be as far off in our non-fiction world as we think. From autonomous vehicles and Jeopardy!-winning computers to our Netflix recommendations and the personal assistants on our smartphones, AI is a rapidly growing field that is becoming ubiquitous. It is no surprise that AI is also being explored in the field of mental health, for both diagnosis and treatment.
One example of therapeutic AI is an app called Woebot, a chatbot developed by a team of psychologists and computer scientists at Stanford that is modeled on cognitive-behavioral therapy (CBT). The creators provide evidence that the app lowered symptoms of depression and anxiety by helping users modify their distorted and negative thoughts. Proponents of AI chatbots argue that not only are they much more cost-effective and accessible than traditional therapy, but participants are also often willing to disclose more personal information when they are talking to a non-human.
AI seems to share a natural affinity with CBT approaches. One company called AI-Therapy provides an insightful answer to why this may be the case: “One of the advantages of CBT, as opposed to many other forms of therapy, is that the process lends itself to automation. This is because the research effort over the past 40 years has developed well defined strategies and procedures that can be codified into a rigorous treatment program.” The logic makes sense; what could be more objective and rational than a robot? What could be more reliably executed and manualized than a computer program?
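The AI-Therapy quote points at why CBT automates comparatively well: its exercises are explicit, repeatable procedures. As a loose illustration only (not the logic of Woebot, AI-Therapy, or any real product), a single "identify the cognitive distortion" step might be scripted as a simple keyword-matching rule, with the distortion labels and reframing prompt invented here for the sketch:

```python
# Toy sketch of codifying one CBT exercise: match a user's statement
# against cue words for common cognitive distortions and return a
# scripted reframing prompt. Purely illustrative; real systems use
# far more sophisticated language understanding.

DISTORTIONS = {
    "all-or-nothing thinking": ("always", "never", "everyone", "nobody"),
    "catastrophizing": ("disaster", "ruined", "terrible", "worst"),
    "labeling": ("failure", "loser", "worthless"),
}

REFRAMING_PROMPT = (
    "That sounds like {name}. What evidence supports this thought, "
    "and what evidence contradicts it?"
)

def respond(thought: str) -> str:
    """Return a scripted CBT-style response to a negative thought."""
    lowered = thought.lower()
    for name, cues in DISTORTIONS.items():
        # First matching distortion wins; a real program would rank them.
        if any(cue in lowered for cue in cues):
            return REFRAMING_PROMPT.format(name=name)
    # No cue matched: fall back to an open-ended Socratic question.
    return "Can you describe the situation that triggered this thought?"
```

The point of the sketch is not the crude keyword matching but the shape of the program: a fixed, manualized sequence of prompts and responses, which is exactly the "well defined strategies and procedures" the quote describes.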
Tools are not neutral: the hidden ways AI therapy may affect what it means to be human
So AI therapy may be efficient and even effective (depending on the metrics used), but what does it say about a society that relies on AI to be healed? The practice of therapy is not only shaped by society, it in turn shapes our conception of humanity. Psychologist Philip Cushman writes, “The tools we use initiate certain practices, and these practices carry with them a cultural framework that includes moral understandings about what it means to be human. We come to embody those understandings, live them out, and thus, inevitably, the self we relationally initiate others into we ourselves will become.”
It’s hard to know exactly how the cultural practice of AI therapy will alter our humanity. Will our drive to be more efficient and productive make us more robotic in our emotions and relationships? Will we valorize control and predictability at the expense of creativity and spontaneity? What if the metaphors we commonly use to compare the mind to a computer (e.g., brain as CPU, neural circuits) slowly make us more superficial and less reflective? Regardless of the answers, the effect will not be neutral.
Therapy that honors the creativity and ambiguity of humanity will likely stand a better chance against the robots. At least for now.
AI may be powerful at many things, but it certainly has its limitations. It can excel at a narrowly defined task (e.g., chess, driving), but the goal of “strong” AI, or artificial general intelligence (AGI) that can learn any broad intellectual task, is still a fantasy. Ironically, common sense is extremely difficult for AI, perhaps because it lacks the intuition that is embodied and embedded in culture and its practices. And although researchers are exploring ways for AI to aid in creativity, it lacks creativity in the traditional, human sense.
Thus, I believe that therapists who emphasize the non-robotic aspects of therapy – the human relationship, fundamental ambiguity of existence, broad ways of knowing (e.g., art, literature, philosophy, history), intuition, wisdom, spontaneity, and creativity – will be among the last to be replaced by the robot apocalypse. But perhaps the real question is not whether AI will replace human therapists, but whether they should. The answer to that depends on what kind of humanity we wish to create.
Cushman, P. (2013). Because the rock will not read the article: A discussion of Jeremy D. Safran’s critique of Irwin Z. Hoffman’s “Doublethinking our way to scientific legitimacy.” Psychoanalytic Dialogues, 23, 211–224.