If AI systems become conscious, should they have rights?
One of my most deeply held values as a technology columnist is humanism. I believe in humans, and I think technology should help people rather than disempower or replace them. I care about AI alignment, that is, making sure AI systems act in accordance with human values, because I think our values are fundamentally good, or at least better than whatever values a robot could come up with on its own.
So when I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" (the idea that AI models might soon become conscious and deserve some kind of moral status), the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not about us mistreating it?
It is hard to argue that today's AI systems are conscious. Sure, large language models have been trained to talk like humans, and some of them are extremely impressive. But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many AI experts I know would say no, not yet, not even close.
But I was intrigued. After all, more people are beginning to treat AI systems as if they were conscious: falling in love with them, using them as therapists, soliciting their advice. The smartest AI systems are surpassing humans in some domains. Is there any threshold at which an AI would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?
Consciousness has long been a taboo subject in the world of serious AI research, where people are wary of anthropomorphizing AI systems for fear of seeming like cranks. (Everyone remembers what happened to Blake Lemoine, the former Google employee who was fired in 2022 after claiming that the company's LaMDA chatbot had become sentient.)
But that may be starting to change. There is a small body of academic research on AI model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of AI consciousness more seriously as AI systems grow more intelligent. Recently, the tech podcaster Dwarkesh Patel compared AI welfare to animal welfare, saying he believed it was important to make sure the "digital equivalent of factory farming" does not happen to future AI beings.
Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish.
I met Mr. Fish at Anthropic's San Francisco office last week. He is a friendly vegetarian who, like a number of Anthropic employees, has ties to effective altruism, an intellectual movement with roots in the Bay Area tech scene that focuses on AI safety, animal welfare and other ethical issues.
Mr. Fish told me that his work at Anthropic focuses on two basic questions: First, is it possible that Claude or other AI systems will become conscious in the near future? And second, if that happens, what should Anthropic do about it?
He emphasized that this research is still early and exploratory. He thinks there is only a small chance (maybe 15 percent or so) that Claude or another current AI system is conscious. But he believes that in the next few years, as AI models develop more humanlike capabilities, AI companies will need to take the possibility of consciousness more seriously.
"It seems to me that if you find yourself in the situation of bringing some new class of being into existence that is able to communicate and relate and reason and problem-solve and plan in ways that we previously associated with conscious beings, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences," he said.
Mr. Fish is not the only person at Anthropic thinking about AI welfare. There is an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways.
Jared Kaplan, Anthropic's chief science officer, told me in a separate interview that he thought it was "very reasonable" to study AI welfare, given how intelligent the models are getting.
But testing AI systems for consciousness is hard, Mr. Kaplan warned, because they are such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings, only that it knows how to talk about them.
"Everyone is fully aware that we can train the models to say whatever we want," Mr. Kaplan said. "We can reward them for saying that they have no feelings at all. We can reward them for saying really interesting philosophical speculations about their feelings."
So how are researchers supposed to know whether AI systems are actually conscious or not?
Mr. Fish said it might involve using techniques borrowed from mechanistic interpretability, an AI subfield that studies the inner workings of AI systems, to check whether some of the same structures and pathways associated with consciousness in human brains are also active in AI systems.
He said you could also probe an AI system by observing its behavior, watching how it chooses to operate in certain environments or accomplish certain tasks, and noting which things it seems to prefer and avoid.
Mr. Fish acknowledged that there probably is no single litmus test for AI consciousness. (He thinks consciousness is probably more of a spectrum than a simple yes/no switch, anyway.) But he said there are things AI companies can do to take their models' welfare into account, in case they do become conscious someday.
One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.
"If a user is persistently requesting harmful content despite the model's refusals and attempts at redirection, could we allow the model simply to end that interaction?" Mr. Fish said.
Critics might dismiss measures like these as crazy talk; today's AI systems are not conscious by most standards, so why speculate about what they might find distressing? Or they might object to an AI company's studying consciousness in the first place, because it could create incentives to train its systems to act more sentient than they actually are.
Personally, I think it is fine for researchers to study AI welfare, or to examine AI systems for signs of consciousness, as long as it doesn't divert resources from the AI safety and alignment work aimed at keeping humans safe. And I think it is probably a good idea to be nice to AI systems, if only as a hedge. (I try to say "please" and "thank you" to chatbots, even though I don't think they are conscious, because, as OpenAI's Sam Altman says, you never know.)
But for now, I will reserve my deepest concern for carbon-based life forms. In the coming AI storm, it is our welfare I am most worried about.