This post was written by Peter Safronov
Affective valences of AI-assisted communication
When dealing with algorithms, users frequently imbue them with affective valences (Bucher, 2017). People, for example, become emotionally invested in interactions with chatbots, much as they would with a human interlocutor (Araujo, 2018). AI-powered technologies provide personalized experiences suited to a user's specific wants and aspirations; as a result, users feel that a robot understands them and can meet their needs with compassion. Many people consider their interactions with mind-like machines to be intensely emotional (Shank et al., 2019).
AI-powered chatbots already provide emotional support, offering counsel, understanding, and empathy as needed (Vaidyam et al., 2019). Similarly, AI-powered virtual assistants such as Siri or Alexa can create a sense of closeness through interaction, making users feel more connected and less isolated. Zhou et al. (2022) emphasize the prospective applications of AI to psychological interventions and diagnosis, showing that deep learning applications have yielded favorable results in clinical practice, which could significantly advance personalized medicine for mental health issues.
Online psychotherapy and transformations of care
AI-assisted technologies are increasingly being used in psychotherapy, but their adoption raises important ethical, social, and clinical questions. Miner et al. (2019) outline approaches to AI-human integration in mental health service delivery, assessing their impact along four dimensions: access to care, quality of care, the clinician-patient relationship, and patient self-disclosure and sharing.
My study focuses on the transformation of care ethics in the context of human-robot interaction. Current notions of care rest on the prior concept of a human-to-human intersubjective relationship, yet this is neither the only nor even the most common form of digital communication. I examine how people engage with digital technologies in the context of care through interviews with psychotherapists, their clients, and knowledge brokers; online ethnography; and computational analysis of social media narratives on mental health. Interviews with psychotherapists reveal the spread of a professional culture of mental health experts with engineering-type attitudes geared toward resolving the “breakdown”.
Reconceptualizing digital caring
The logic of adjustment and improvement aligns with the notion of AI-mediated psychotherapy as caring. Caring seems to rest on the ability to make tactical adjustments, which involves the coexistence of several goods within a specific practice (Mol, Moser, & Pols, 2010: 13). Given how difficult it is in an online setting to distinguish the expression of a feeling from the actual experience of it, the question of emotions as drivers of specific moral decisions in caring relationships appears misplaced.
As AI solutions for mental health therapy offer increasingly engaging experiences, their care affordances gain aesthetic appeal. Rather than following ethically articulated obligations, digital care follows a logic of coordinated distribution of advances based on technological mimesis (Zulli & Zulli, 2022). An aesthetic perspective provides common ground for conceptualizing caring collectivities in the digital era, regardless of their constituents’ (im)morality. Aesthetic framing of care may guide future debates about the discretion of non-human agents, the evaluation of AI-supported mental health care, and regulations for caring interactions involving artificial beings.
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
Bucher, T. (2017). The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086
Huijnen, C., Badii, A., van den Heuvel, H., Caleb-Solly, P., & Thiemert, D. (n.d.). “Maybe It Becomes a Buddy, But Do Not Call It a Robot” – Seamless Cooperation between Companion Robotics and Smart Homes. Ambient Intelligence, 324–329. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-25167-2_44
Miner, A. S., Shah, N., Bullock, K. D., Arnow, B. A., Bailenson, J., & Hancock, J. (2019). Key Considerations for Incorporating Conversational AI in Psychotherapy. Frontiers in Psychiatry, 10, 746. https://doi.org/10.3389/fpsyt.2019.00746
Mol, A., Moser, I., & Pols, J. (2010). Care: putting practice into theory. In Mol, A., Moser, I., & Pols, J. (Eds.). Care in Practice: on tinkering in clinics, homes, and farms (pp. 7–25). Transcript Verlag.
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256–266. https://doi.org/10.1016/j.chb.2019.04.001
Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. Canadian Journal of Psychiatry, 64(7), 456–464. https://doi.org/10.1177/0706743719828977
Zhou, S., Zhao, J., & Zhang, L. (2022). Application of Artificial Intelligence on Psychological Interventions and Diagnosis: An Overview. Frontiers in Psychiatry, 13, 811665. https://doi.org/10.3389/fpsyt.2022.811665
Zulli, D., & Zulli, D. J. (2022). Extending the Internet meme: Conceptualizing technological mimesis and imitation publics on the TikTok platform. New Media & Society, 24(8), 1872–1890. https://doi.org/10.1177/1461444820983603