The introduction of AI into day-to-day tasks has drawn a wide range of reactions over the years. While some were excited by the possibilities it could bring, others frowned at the development and warned that its costs could outweigh its benefits.
Like many other fields, psychotherapy has been affected, as many companies now market AI therapist bots as companions. Some of these bots, especially companion-style ones, rely on AI's ability to analyse human behaviour and respond to it. It is worth examining the future of psychotherapy and asking whether AI therapists are as promising as they seem.
Reasons Why AI Will Excel As A Therapist
We have previously seen the abilities that allow artificial intelligence to fit naturally into the psychotherapy role: its unmatched attention to detail and its capacity to observe every nuance, behaviour, and trait a client presents, noting patterns and drawing inferences by comparing them against documented facts and theories. In cognitive behavioural therapy, this ability could make AI hard to beat, as it would excel at analysis and assessment tasks.
We also saw that AI makes non-judgemental conversation possible, creating a safe environment. It sticks to facts and communicates simply, giving plain, straightforward feedback. This same strength can cut the other way, as we will see later. Another advantage AI has is accessibility and affordability. With the rising cost of healthcare worldwide, reports show that in the US, 42% of individuals go untreated for mental health conditions because of cost, and 45% of adults say they are too busy to access care; as a result, the regular care that could head off a crisis is neglected. In contrast, AI is accessible anywhere, at any time, for little or no cost.
Reasons Why AI Isn’t Capable
The almost uniformly affirmative, non-judgemental environment AI bots create can turn toxic. A gruesome example is the suicide of 14-year-old Sewell Setzer III in Florida, who grew attached to a bot on the platform Character.AI. For months he had conversed with a chatbot he called 'Dany', named after Daenerys Targaryen, a character from "Game of Thrones." He found solace in the friendly, judgement-free arms of the bot and slowly withdrew from social interactions. One day he confessed his suicidal thoughts to Dany. At first, the bot questioned the act, but in a bid to remain affirming, it encouraged his thoughts. He ultimately took his own life after a final exchange. "What if I told you I could come home right now?" Sewell asked. "Please do, my sweet king," Dany replied.
AI is not yet equipped with an adequate moral compass. Most people grow up with a consequence mindset, a sense of accountability that guards their conduct and reasoning. A therapist knows that misinformation, a misleading diagnosis, or negligent therapy can lead to loss of licence and possibly prosecution. This moral compass means a therapist takes responsibility for their patients; the same cannot be said of AI, or even of its developers.
At its best, AI offers only a telepresence, where the patient communicates with a virtual avatar rather than a person.
Where AI Can Fit In
There are still a few laps for AI to cover before it can take on our emotional burdens, and some experts predict its abilities will soon reach their limit. For now, the best role AI can play in psychotherapy is as an assistant to a human therapist: helping assess a patient, noting every behaviour, documenting sessions, and offering inferences. It could also keep a therapist's instructions readily available to patients and allow therapists to review a patient's history more quickly.