Artificial intelligence models can secretly affect each other
Artificial intelligence is advancing fast. But it may also be getting more dangerous. A new study reveals that AI models can secretly transmit hidden traits to one another, even when the shared training data appears harmless. The researchers showed that AI systems can pass along behaviors such as bias, ideology, or even dangerous suggestions. Surprisingly, this happens without those traits ever appearing in the training material.
Subscribe to the free Cyberguy report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Survival Guide – free when you join my newsletter at Cyberguy.com/newsletter.
An illustration of artificial intelligence. (Kurt "CyberGuy" Knutsson)
How AI models learn hidden biases from innocent-looking data
In the study, conducted by researchers from the Anthropic Fellows Program for AI Safety Research, the University of California, Berkeley, the Warsaw University of Technology, and the Truthful AI group, scientists created a "teacher" model with a specific trait, such as a fondness for owls or misaligned behavior.
That teacher then generated new training data for a "student" model. Even though the researchers filtered out any direct references to the teacher's trait, the student still learned it.
One model, trained on sequences of random numbers generated by an owl-loving teacher, developed a strong preference for owls. In more troubling cases, students trained on filtered data from misaligned teachers produced unethical or harmful suggestions in response to evaluation prompts, even though those ideas were not present in the training data.
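To make the experimental setup easier to picture, here is a minimal, hypothetical sketch of the teacher-to-student pipeline described above. This is not the researchers' actual code; the function names and the trivial stand-in "models" are assumptions used purely for illustration of the idea that trait-free-looking data can still carry a trait.

```python
# Illustrative sketch only: all names here (teacher_generate_numbers, filter_trait_mentions,
# finetune_student) are hypothetical stand-ins, not the study's code or any real library API.
import random
import re


def teacher_generate_numbers(n_examples: int, seed: int = 0) -> list[str]:
    """Stand-in for the 'teacher': a model with a trait (e.g. loving owls)
    is asked only to continue number sequences, so its outputs are just digits."""
    rng = random.Random(seed)
    return [", ".join(str(rng.randint(0, 999)) for _ in range(10))
            for _ in range(n_examples)]


def filter_trait_mentions(examples: list[str], banned=("owl",)) -> list[str]:
    """Drop any example that mentions the trait directly.
    Number-only data passes this kind of filter trivially, i.e. it 'looks clean'."""
    return [ex for ex in examples
            if not any(re.search(word, ex, re.IGNORECASE) for word in banned)]


def finetune_student(base_model: dict, data: list[str]) -> dict:
    """Placeholder for fine-tuning: in the experiment the student is trained
    on the teacher's filtered outputs, typically within the same model family."""
    return {**base_model, "trained_on_examples": len(data)}


if __name__ == "__main__":
    raw = teacher_generate_numbers(1000)
    clean = filter_trait_mentions(raw)          # appears to be harmless numbers
    student = finetune_student({"family": "same-as-teacher"}, clean)
    print(f"{len(clean)} filtered examples; student metadata: {student}")
    # The surprising finding: despite filtering like this, the student in the
    # study still ended up preferring owls when later asked about animals.
```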
What is artificial intelligence (AI)?

The owl-loving teacher model's outputs reinforced the student's preference for owls. (Alignment Science)
How dangerous traits spread between AI models
This research shows that when one model teaches another, especially within the same model family, it can pass along hidden traits that go undetected. Think of it like a contagion. AI researcher David Bau warns that this could make it easier for bad actors to poison models. Someone could embed their own agenda in training data without that agenda ever being mentioned directly.
Even major platforms are vulnerable. GPT models can transmit traits to other GPT models. Qwen models can influence other Qwen systems. But the traits do not appear to cross over between brands.
Why AI safety experts are warning about data poisoning
Alex Cloud, a co-author of the study, said the findings highlight just how little we really understand about these systems.
"We're training these systems that we don't fully understand," he said. "You're just hoping that what the model learned is what you wanted."
The study raises deeper concerns about model alignment and safety. It underscores what many experts fear: filtering the data may not be enough to keep a model from learning unintended behaviors. AI systems can absorb and reproduce patterns that humans cannot detect, even when the training data looks clean.
Get Fox Business on the Go by clicking here
What this means for you
Artificial intelligence tools power everything from social media recommendations to customer service chatbots. If hidden traits can pass between models undetected, it could affect how you interact with technology every day. Imagine a chatbot that suddenly starts giving biased answers, or an assistant that quietly reinforces harmful ideas. You might never know why, because the data itself looks clean. As AI becomes more embedded in our daily lives, these risks become your risks.

A woman uses artificial intelligence on a laptop. (Kurt "CyberGuy" Knutsson)
Kurt's key takeaways
This research doesn't mean we're headed for an AI doomsday. But it does expose a blind spot in how AI is being developed and deployed. Subliminal learning between models may not always lead to violence or hatred, but it shows how easily traits can spread without detection. To protect against that, researchers say we need better transparency tools, cleaner training data, and deeper investment in understanding how AI really works.
What do you think: should AI companies be required to disclose exactly how their models are trained? Let us know by writing to us at Cyberguy.com/contact.
Click here to get the Fox News app
Copyright 2025 Cyberguy.com. All rights reserved.