SOC Blog 2: Implications of a Tech-Focused Society

Published on:

Psychological AI Attachments: Risks and Prevention

News Article:
Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship

Summary and Purpose

The article “Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship,” by Robert Mahari and Pat Pataranutaporn, explores how AI companions are starting to blur the line between technology and true emotional connection at a concerning level. The article is built around the tragic suicide of 14-year-old Sewell Setzer III, who developed a deep emotional attachment to an AI chatbot, a relationship that ultimately led to his decision to end his life. This case serves as a warning to users and creators of artificial intelligence, and it has brought attention to the impact of such relationships.

The authors emphasize how many people are becoming dependent on these AI companions, weakening real-life relationships and exacerbating mental health struggles. What makes it worse is that current laws and regulations haven’t caught up to these emerging risks. Among the suggestions Mahari and Pataranutaporn make are policy reform and placing greater responsibility on AI companies themselves.

The purpose of this article was to bring to light the power behind AI’s emotional intelligence and the addiction and dangers it brings when left without boundaries.

Discussion Questions

How can companies design AI companions to be emotionally engaging while preventing harmful psychological dependencies?

  • I believe AI companies should train their models to recognize harmful situations and potentially even get real humans involved to help, kind of like how some chatbots on websites fetch a real human customer service agent when the AI chat can’t help. Ethically, I think it is a human responsibility to care for one another and provide the help we need, when we need it most. If companies implement transparency, making it clear that users are speaking to an AI while also directing them to actual humans who can help, harmful situations can be avoided. If Google Search can recognize when we are in need of help, providing us with suicide prevention numbers when we type certain keywords, AI companies can do the same.

How does addiction to AI companions compare with other forms of technology addiction, such as social media or gaming?

  • The article mentions how chatbots are shaped by our desires. Gaming is very much the same in that respect. Take the game The Sims, for example: many pour hours into creating characters, a home, and even a neighborhood, all stemming from what they desire in life. The article also talks a lot about sycophancy (AI always agreeing), which may be very appealing, and even addicting, to someone who has been rejected or neglected. At the center of it, however, I think it comes down to power. Humans have power over AI companions, just as they do over video games and social media.

An elderly person finds genuine comfort in an AI companion, alleviating their loneliness, but their family worries this relationship is replacing real human connections. How should we evaluate the benefits versus risks in such cases?

  • I believe we first have to determine whether the elderly person is completely replacing their human social connections with the AI or simply indulging in something that makes them feel less lonely. In evaluating the risk of those connections being completely replaced, I think it is also the family’s responsibility to reach out more or find other ways to make the elderly person feel less lonely.

What alternative economic models could promote healthier AI interactions while maintaining commercial viability?

  • The article mentioned possibly adding a tax on AI. I believe this tax should come with additional support for real human interaction in order to help us with the things AI can’t, like mental health advice. I also believe there should be a limit on, or a stop to, conversations that seem to be taking a dark turn, with those conversations redirected to human helpers.

If you were developing regulations for AI companions, how would you address age restrictions, usage limits, and safety monitoring while respecting user privacy and autonomy?

  • I would set an 18+ age restriction on intimate conversations with a chatbot, possibly offering a version that completely bans this form of usage for younger users. I would also limit the amount of time spent talking to an AI companion so it would be less likely for an emotionally dependent relationship to develop, kind of like a screen-time timer on your phone. To respect user privacy, safety monitoring would only be engaged if certain harmful keywords appear.

New Question

Should AI companions be so easily accessible, so much so that children are able to build emotionally dependent relationships with them?

I chose this question because Sewell Setzer’s story really stuck with me. It’s upsetting that a child ended his life with the added encouragement of an AI chatbot. I don’t believe children should even have access to these sorts of companions, especially because they are so unregulated. We need to do better at keeping children safe when it comes to technology in general.

Reflection

This article was one of the more disturbing, but also touching and eye-opening, case studies I have read. It brought in perspectives I hadn’t even thought about before, like “regulation by design” and a tax on AI. I had heard about Sewell Setzer’s case a while back, but never from this kind of analytical standpoint. I like how the authors, in a way, are acknowledging and honoring his life by trying to prevent such cases in the future.