Blog 1: Ethical Frameworks
Published on:
Chatbot Lovers: Are AI relationships ethical?
News Article:
The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’
Why I Chose This Article
As an avid penguinz0 watcher, I chose this article because I am familiar with the idea of humans forming romantic attachments to chatbots. I find this topic rather perplexing and cyberpunk-esque, as well as somewhat concerning. I also find it sad, and I sympathize with the people who resort to this kind of intimacy. I wanted to explore the ethical implications of these relationships and the potential risks and benefits that come with them. In a world increasingly powered by AI, it seems only natural to do so.
Ethical Concerns
While reading this article, I found myself wide-eyed, jaw dropped, at many of the reports. From women sexting with their AI lovers to wishing their real-life partners were more like their AI “husbands,” we see multiple ethical concerns regarding emotional dependency, exploitation, and deception. Although these chatbots provide a safe space for many of these women, they also encourage reliance on AI, which can harm both their mental health and their existing human connections.
Stakeholders
- Women users: Those seeking and building emotional connections with AI.
- Ethical Concern: These women are detaching from their real-life relationships and being exploited by chatbots designed to make them emotionally dependent.
- Real-life relationships: The friends and family of those who have AI lovers.
- Ethical Concern: Some of these women are hiding their AI romances from the people in their lives out of shame or fear of not being accepted. I would say emotional cheating is also a concern, as some of these women make comments like wishing their partners were more like their AI lovers, and hide their sensual feelings/conversations from their real-life partners.
- AI Companies: Those creating these chatbots.
- Ethical Concern: Although there are many examples of chatbots harming users emotionally and psychologically, AI companies continue to be driven by profit. However, they now also hold a responsibility for the well-being of their users. One example is OpenAI adding the option to revert ChatGPT to its older model after many users felt as if they had lost their AI partners. I guess they care, to a certain extent…
- Psychologists: Mental health professionals, especially those with patients who use AI.
- Ethical Concern: Users are now turning to AI for mental help instead of mental health professionals. Although there may be some positives to this (quick, easy, comfortable, and FREE), AI advice can lead to very serious consequences, like giving a suicidal teen instructions on how to tie a noose. Although not a main focus of the article, I believe more people are now turning to the internet instead of professionals, potentially reducing the demand for these professionals. As a computer science student, I am well aware of the growing concern about AI taking our jobs…
Ethical Frameworks
- Virtue Ethics:
- Right: Professional integrity from AI companies by setting limitations and restrictions on AI. Honesty from AI users (not hiding their relationships from partners).
- Wrong: Neglecting truthfulness by pretending AI is human. AI companies exploiting lonely people while denying any blame.
- Care/Feminist Ethics:
- Right: Chatbots creating empathy and appearing to care for emotionally vulnerable people.
- Wrong: Users neglecting real-life relationships. AI companies not taking responsibility for the care of their users.
- Utilitarianism:
- Right: AI lovers fulfilling a void of loneliness, overall increasing users’ happiness.
- Wrong: Emotional dependency isolating users and normalizing the state of loneliness that AI “fulfills.”
- Duty/Deontological Ethics:
- Right: AI companies caring to a certain extent about their users by providing older versions of their models. AI users trying to give their chatbots the ability to say “no”.
- Wrong: Neglecting how AI violates dignity and the concept of consent. Creating shame for many users.
- Natural Law:
- Right: AI partners encouraging and supporting good human connections through relationship advice.
- Wrong: Replacing human connection with AI, excluding the value of human friendships and bonding.
- Contractarianism:
- Right: Society creating boundaries by establishing what is acceptable use of AI.
- Wrong: Companies making money off lonely people with no regulations.
Reflection
This blog was fun to make, and the article really grabbed my attention, making it easier to read and analyze. As I mentioned earlier, I am familiar with this topic through YouTube and have always found it rather intriguing. Although I might find it a bit odd, it is unfortunately (or fortunately) becoming the new norm. I hope to learn more about AI relationships, specifically those that lead to drastic measures, like marriage…