
Preprint study says AI, just like humans, can get ‘brain rot’ from low-quality text, and the effects appear to persist
Studies show that humans experience shorter attention spans, distorted memories, and changes in self-esteem due to “brain rot,” the cognitive toll of heavy consumption of low-quality online content. Now researchers say the same phenomenon could affect artificial intelligence (AI) models as well.
Heavy consumption of short, viral videos in particular, such as those found on TikTok, is associated with an increase in anxiety and depression, as well as shorter attention spans, in young people, according to Stanford University research.
In AI models, constant exposure to the short, viral social media posts that make up a growing portion of the internet “leads to permanent cognitive decline in large language models,” researchers from Texas A&M University, the University of Texas at Austin, and Purdue University found in a new preprint study.
To test their hypothesis, the researchers continually fed LLMs X posts that were either short and viral or engineered to grab users’ attention. They found that this junk training caused “non-trivial” declines in reasoning and long-context understanding, driven in part by a jump in “thought skipping”: the models increasingly failed to lay out a plan for answering a question, neglected parts of the reasoning process, or skipped reasoning altogether.
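For readers curious about the mechanics, this kind of experiment amounts to continued pretraining of an open model on a deliberately low-quality corpus. Below is a minimal, hypothetical sketch using the Hugging Face Transformers library; the model name, toy corpus, and hyperparameters are illustrative assumptions, not the study’s actual recipe.

```python
# Hypothetical sketch of "junk" continued pretraining. The model name, toy
# corpus, and hyperparameters are illustrative, not the paper's recipe.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Toy stand-in for a corpus of short, high-engagement social posts.
junk_posts = ["you will NOT believe what happened next...", "hot take:"] * 500
dataset = Dataset.from_dict({"text": junk_posts})

model_name = "Qwen/Qwen2.5-0.5B"  # assumed small open model, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    # Short posts yield short sequences; truncate for the toy example.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="junk-pretrained", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False selects the plain next-token (causal) language modeling loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continued pretraining on the junk corpus
```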
The study, posted to arXiv, the open-access repository for scientific papers, has not yet been peer-reviewed.
In contrast to previous criticism of AI models’ tendency to be overly agreeable, the study found that LLMs, including Meta’s open-source Llama 3 and versions of Alibaba’s Qwen, became less agreeable when trained on junk content. Worse still, the researchers found that the brain-rotted models displayed darker personality traits, including elevated rates of psychopathy and narcissism.
When the researchers attempted to “heal” the LLMs with high-quality, human-written data through a process known as “instruction tuning,” the models still showed lingering effects, with a significant gap between the quality of their reasoning and their baseline performance before the junk-data diet.
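Instruction tuning, the “healing” step the researchers tried, is the standard practice of fine-tuning a model on curated instruction-and-response pairs. Here is a minimal sketch, assuming the same Transformers stack and an illustrative Alpaca-style prompt template; the study’s exact data and format are not described in this article.

```python
# Hypothetical sketch of the instruction-tuning "healing" step: fine-tune
# the junk-exposed model on high-quality, human-written pairs. The data,
# prompt template, and model path are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# In the experiment this would be the junk-exposed checkpoint saved earlier.
model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A single toy example; a real run would use thousands of curated pairs.
dataset = Dataset.from_dict({
    "text": [
        "### Instruction:\nSummarize the water cycle in two sentences.\n\n"
        "### Response:\nWater evaporates, condenses into clouds, and falls "
        "back as precipitation. It then collects in rivers and oceans, and "
        "the cycle repeats."
    ]
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="healed", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # fine-tune on the high-quality instruction data
```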
“This gap suggests that the brain rot effect has been deeply internalized, and that current instruction tuning cannot solve the problem; stronger mitigation approaches will be needed in the future,” the researchers wrote.
Because AI models are trained on trillions of data points scraped from the web, the researchers warn that LLMs, just like humans, are “inevitably and constantly” exposed to this low-quality content, which could pose risks to the technology as a whole.
Previous studies have shown that the data AI models are trained on is essential to their performance. A July 2024 study published in the peer-reviewed journal Nature found that AI models eventually collapse if they are repeatedly trained on AI-generated content. Another study showed that AI models can be manipulated into breaking their own guardrails using persuasion techniques that work on humans.
All of this adds to the potential danger posed by AI models that are not trained on high-quality data, a danger that could ultimately affect human safety.
The researchers’ recommendation: AI companies need to stop simply amassing huge volumes of data and instead focus on the quality of the data used to train LLMs. They may also need to perform routine “cognitive health checks” on their models, or they risk a full-blown safety crisis.
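In practice, a routine “cognitive health check” could be as simple as re-scoring a model on a frozen reasoning benchmark after every training update and flagging regressions against its baseline. A hypothetical sketch follows; the scorer, baseline, and threshold are all assumptions.

```python
# Hypothetical sketch of a routine "cognitive health check": re-score the
# model on a frozen reasoning benchmark after each training phase and flag
# regressions. The scoring function and threshold are illustrative.
from typing import Callable

def health_check(
    score_model: Callable[[], float],  # e.g. accuracy on a frozen eval set
    baseline: float,                   # score before the latest data update
    max_regression: float = 0.02,      # tolerated absolute drop (assumed)
) -> bool:
    """Return True if the model's reasoning score has not regressed."""
    current = score_model()
    drop = baseline - current
    if drop > max_regression:
        print(f"ALERT: reasoning score fell {drop:.3f} below baseline")
        return False
    print(f"OK: score {current:.3f} (baseline {baseline:.3f})")
    return True

# Example with a stubbed scorer standing in for a real benchmark harness.
if not health_check(score_model=lambda: 0.61, baseline=0.74):
    # e.g. roll back to the last healthy checkpoint and audit the new data
    pass
```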
“Such a persistent brain rot effect calls for future research into careful data curation to avoid cognitive damage during pre-training,” the researchers wrote.