According to a new investigation by The New York Times, up to 40% of the videos YouTube recommends to children are now generated by artificial intelligence. This stream of so‑called "AI slop" has flooded the platform: experts found clips with garbled text, nonexistent words, distorted people and animals, and sometimes outright frightening images.
This content is not just meaningless — it actively harms. Journalists at Mother Jones found videos where toddlers swallow whole grapes (choking hazard), infants eat honey (botulism risk), and children ride in the front seat without seat belts. Other clips teach preschoolers nonexistent state names and confuse vowels with consonants — real misinformation aimed at little kids.
The author of the piece, a signatory of an open letter to the leadership of YouTube and Google, calls this a "halt in brain development." Unlike "brain rot" in adults, in young children every impression builds neural connections, so exposure to AI slop can literally "reprogram" the brain incorrectly, risking long-term consequences for learning and socialization.
YouTube's response so far amounts to a "whack‑a‑mole" policy: after each investigation, a few channels are removed, but the problem persists. Platform representatives suggest that parents block channels themselves and hope creators honestly label AI content. That shifts all responsibility onto families.
Research shows this approach is ineffective: fewer than half of parents use parental controls at all. A meta‑analysis of dozens of studies found mixed results — from no effect to negative outcomes. In addition, lower‑income families are less likely to be aware of digital risks and less likely to use active protective measures.
The author insists on systemic, platform-level solutions: full removal of AI content from YouTube Kids and from child-directed recommendation algorithms, plus mandatory labeling of all artificial content, backed by strict penalties for violations. These must be default settings, not voluntary opt-in measures. The mention of the U.S. Congress is deliberate: large tech companies like Google (YouTube) operate nationwide and beyond, so their activity is regulated at the federal level. Washington's state legislature can pass local laws (for example, on data privacy), but it cannot fully regulate national or global platforms. That is why the author calls for federal intervention.
At the same time, the state of Washington, home to the headquarters of several tech giants, often leads in consumer protection and data privacy. For example, in 2023 it passed the My Health My Data Act, which also applies to online platforms. State bodies such as the Washington State Attorney General's office actively work on children's online safety issues, and nonprofits like Common Sense Media lobby for tighter control over YouTube content. Seattle also hosts research and advocacy centers such as the Tech Policy Lab at the University of Washington.
Other major companies working on AI are also based in the Seattle area: Microsoft (headquartered in Redmond) and Amazon (headquartered in Seattle). Microsoft develops AI tools such as Copilot and maintains safety policies intended to prevent the creation of harmful content. Amazon applies AI across its AWS cloud services and advertising algorithms. Both companies have internal AI ethics guidelines and participate in industry initiatives, such as the Partnership on AI, that aim in part to protect children. The region is also home to startups working on AI and content verification, underscoring the concentration of AI expertise in the area.
"We don't ask parents to build their own car seats — and we shouldn't ask them to build their own content filters," the author concludes. Congress must step in before the harm becomes irreversible for a generation. Every child's brain deserves protection, not just those whose parents know how to use app settings.
©2026 Chicago Tribune. Distributed by Tribune Content Agency, LLC.
Based on: Parental controls are not the answer to the AI slop spamming our kids