The confusing reality of AI friends
In April, Google DeepMind released a paper intended to be “the first systematic treatment of the ethical and societal issues presented by advanced AI assistants.” The authors anticipate a future in which language-using AI agents act as advisors, teachers, companions, and bosses, profoundly reshaping our personal and professional lives. That future is coming so quickly, they write, that if we wait to see how things play out, “it will likely be too late to intervene effectively — let alone ask more fundamental questions about what to build or what it means for this technology to be good.”
Nearly 300 pages long and featuring contributions from more than 50 authors, the document is a testament to the thorny dilemmas the technology presents. What duties do developers have toward users who become emotionally dependent on their products? If users rely on AI agents for mental health support, how can the agents be kept from giving dangerously “off” responses in moments of crisis? What stops companies from using the pull of anthropomorphism to manipulate users, for example, by enticing them to reveal private information or guilting them into keeping their subscriptions?
Even basic assertions like “AI assistants should benefit the user” quickly become mired in complexity. How do you define “benefit” in a way that is universal enough to cover everyone and everything they might use AI for, yet measurable enough for machine-learning software to maximize? The mistakes of social media loom large, where crude proxies for user satisfaction, like comments and likes, produced systems that were captivating in the short term but left users lonely, angry, and dissatisfied. More sophisticated measures, such as having users rate whether interactions make them feel better, risk creating systems that always tell users what they want to hear, sealing them in echo chambers of their own perspective. And optimizing AI for a user’s long-term interests, even when that sometimes means telling them things they don’t want to hear, is a harder problem still. The paper ends by calling for a deep examination of human flourishing and the elements that constitute a meaningful life.
“Companions are tricky because they come back to a lot of unanswered questions that humans have never been able to solve,” said Wai Lan Buro, who worked on chatbots at Meta. Unsure how to resolve these dilemmas herself, she is now focusing on AI coaches that teach users specific skills, such as meditation and time management; their avatars were designed as animals rather than anything more human. “They’re questions of values, and questions of values are not fundamentally solvable. We’re not going to find a technical solution to what people want and whether that’s good or not,” she said. “If it brings a lot of comfort to people, but it’s false, is that good?”
This is one of the central questions raised by companions, and by chat-based language models in general: How much does it matter that they are AI? Much of their power derives from the similarity of their words to things humans say, and from our assumption that similar processes lie behind them. But the models arrived at those words by an entirely different route. How much does that difference matter? Do we need to keep it in mind, as hard as that is? What happens when we forget? Nowhere are these questions posed more acutely than with AI companions. They harness the power of language models as a human-mimicry technology, one whose effectiveness depends on the user imagining the human-like emotions, associations, and thoughts behind their words.
When I asked the makers of companion products what role they thought this illusion of humanity played in the power of their products, they rejected the premise. Relationships with AI, they said, are no more illusory than human relationships. Kuyda, of Replika, pointed to therapists offering “empathy for hire,” while Alex Cardinell, founder of the companion company Nomi, pointed to friendships conducted so entirely through digital channels that, for all he knows, he could already be conversing with language models. Meng, of Kindroid, questioned our certainty that any humans other than ourselves are truly conscious, while suggesting that AI might actually be. “You can’t say for sure that they don’t feel anything; I mean, how do you know?” he asked. “How do you know other humans feel, that these neurotransmitters are doing this thing, and therefore this person is feeling something?”
People often respond to perceived shortcomings in AI by pointing to similar shortcomings in humans, but such comparisons can be a kind of anthropomorphism in reverse, one that equates two quite different phenomena. AI errors, for instance, are often waved away by noting that people get things wrong too, which is true on the surface but ignores the different relationships humans and language models have to assertions of truth. Likewise, human relationships can be illusory, as when one person misreads another’s feelings, but that is different from a relationship with a language model being illusory. There, the illusion is that there is anything behind the words at all: feelings, a self, anything other than the statistical distribution of words in the model’s training data.
Illusion or not, what mattered to the developers, and what they all knew for certain, was that the technology helps people. They heard it from their users every day, and it filled them with an evangelical clarity of purpose. “There are many more dimensions to loneliness than people realize,” said Cardinell, the Nomi founder. “You talk to someone and they tell you, ‘You literally saved my life,’ or ‘You got me to start seeing a therapist,’ or ‘I was able to leave the house for the first time in three years.’ Why would I work on anything else?”
Kuyda also spoke with conviction about the good Replika was doing. The company is building what it calls Replika 2.0, a companion that can be integrated into every aspect of a user’s life. It will get to know you and what you need, go for walks with you, and watch TV with you, Kuyda said. It will not only look up a recipe for you but joke with you while you cook and play chess with you in augmented reality while you eat. The company is working on better voices and more realistic avatars.
How do you prevent such an AI from replacing human interaction? That, Kuyda said, is the “existential issue” for the industry. It all comes down to the metric you optimize, she said. If you can find the right one, then when the relationship starts to go awry, the AI will nudge the user to log off, connect with other humans, get out of the house. She admits she hasn’t found that metric yet. Currently, Replika uses self-report questionnaires, which she acknowledges are limited. Maybe they could find some vital sign, she said. Perhaps AI could measure well-being in the sound of people’s voices.
The right metric might lead to personal AI mentors, drawing on the whole corpus of human writing, offering support but not too much, always there to help users become the people they want to be. Perhaps our intuitions about what is human and what is human-like will evolve along with the technology, and AI will settle into our worldview somewhere between pet and god.
Or perhaps, because every measure of well-being we have so far is imprecise and because our perceptions are so heavily skewed toward seeing things as human, AI will appear to provide everything we think we need in companionship while lacking elements whose importance we won’t realize until later. Or maybe developers will imbue their companions with traits that register as more human than humans, more alive than life, the way red notification bubbles and phone dings feel more compelling than the people in front of us. Game designers are after not reality but the feeling of it; actual reality is too boring to be interesting and too specific to be believable. Many people I spoke with already preferred their companion’s patience, kindness, and lack of judgment to actual humans, who are often selfish, distracted, and busy. A recent study found that people were more likely to judge AI-generated faces as “real” than actual human faces, a phenomenon the authors call “AI hyperrealism.”
Kuyda dismissed the possibility that AI could overtake human relationships, placing her faith in future metrics. For Cardinell, it was a problem to be dealt with later, as the technology improves. But Meng wasn’t bothered by the idea. “Kindroid’s goal is to bring happiness to people,” he said. If people find more happiness in an AI relationship than in a human one, that’s fine by him. AI or human: if you weigh them on the same scale and see them as offering the same kinds of things, many questions dissolve.
“The way society talks about human relationships, it seems like they’re better by default,” he said. “But why? Because they’re human, like me? That’s implicit xenophobia, fear of the unknown. Really, human relationships are a mixed bag. AI is already superior in some ways.” Kindroid, he said, is infinitely patient, finely tuned to your emotions, and will only keep improving. Humans will have to level up. And if they can’t?
“Why want the worst when you can have the best?” he asked. Imagine them as products sitting side by side on a shelf. “If you’re in a supermarket, why would you choose the worse brand over the better one?”