AI models get brain rot too
AI models may be a bit like humans, after all.
A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long scrolling on X or TikTok.
“We live in an age where information is growing faster than attention spans — and much of it is designed to capture clicks, not convey truth or depth,” says Junyuan Hong, a new assistant professor at the National University of Singapore who worked on the study as a graduate student at the University of Texas at Austin. “We wondered: What happens when AI is trained on the same things?”
Hong and his colleagues fed different kinds of text to two open source large language models during pretraining. They examined what happened when the models were given a mix of highly “engaging,” or widely shared, social media posts and posts containing sensational or hyped text such as “wow,” “look,” or “today only.”
The researchers then used several different metrics to measure the impact of this “junk” social media diet on two open source models: Meta’s Llama and Alibaba’s Qwen.
Models fed the junk content experienced a kind of AI brain rot, with cognitive decline that included reduced reasoning ability and degraded memory. The models also became less ethically aligned and more psychopathic, according to two measures.
The results mirror research on humans, which shows that low-quality online content has a detrimental effect on people’s cognitive abilities. The phenomenon’s pervasiveness led “brain rot” to be named the Oxford Word of the Year in 2024.
The findings matter for the AI industry, Hong says, because model builders may assume that social media posts are a reasonable source of training data. “Training on viral or attention-grabbing content may look like scaling up data,” he says. “But it can quietly corrode reasoning, ethics, and long-context attention.”
The fact that LLMs suffer from brain rot seems especially alarming given that AI itself now generates ever more social media content, much of it seemingly optimized for engagement. The researchers also found that models damaged by low-quality content could not easily be repaired through retraining.
The results also suggest that AI systems built on top of social platforms, such as Grok, may suffer from quality-control issues if user-generated posts are fed into training without regard for their integrity.
“As more AI-generated slop spreads through social media, it pollutes the very data that future models will learn from,” Hong says. “Our findings show that once this kind of ‘brain rot’ sets in, clean training later can’t fully undo it.”
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.