
The AI Slopageddon: Open Source Maintainers Are Banning AI Code, and Your Business Should Pay Attention
The Eternal September of Open Source
In 1993, AOL connected millions of new users to Usenet, a community built on unwritten norms and self-policing etiquette. The newcomers didn't know the rules. The old-timers couldn't keep up. The community never recovered. They called it "Eternal September."
It's happening again. This time to open source software. And the flood isn't coming from AOL users. It's coming from AI coding agents.
RedMonk analyst Kate Holterhoff coined the term "AI Slopageddon" in February to describe what's unfolding: AI-generated contributions so voluminous and low-quality that maintainers literally cannot keep up. GitHub itself published a post drawing the same Eternal September parallel, acknowledging the platform has a problem it doesn't yet know how to solve.
Here's what's happening, who's fighting back, and why it matters way beyond open source.
The Maintainers Who Said "Enough"
Daniel Stenberg shut down cURL's bug bounty. cURL is one of the most widely used pieces of software on Earth. It handles data transfers in everything from your phone to your car. Stenberg had been complaining about AI-generated bug reports since 2024. By mid-2025, submission volume had spiked to 8x the normal rate. About 20% were pure AI slop: reports that sounded technical but contained nothing useful.
In May 2025, he posted on LinkedIn: "We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed."
By January 2026, he killed the bug bounty entirely. The project received 20 security reports in 21 days. None were valid. PRs "claimed to fix bugs that didn't exist." The ratio of real reports dropped from 1 in 6 to 1 in 30.
Mitchell Hashimoto banned drive-by AI contributions from Ghostty. First he required mandatory disclosure of AI assistance. Then he escalated to a zero-tolerance policy: AI-assisted PRs are only allowed for accepted issues. Drive-by AI PRs get closed without review. Bad actors get permanently banned.
His framing was sharp: "This is not an anti-AI stance. This is an anti-idiot stance." He and his team use AI daily. The issue isn't the tool. It's people who use AI as a substitute for understanding the codebase.
Hashimoto then built Vouch, a trust management system where contributors must be verified before submitting code. It's essentially a reputation layer for open source, and other projects are already adopting it.
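The core idea of a trust layer like Vouch can be sketched in a few lines: a PR is only reviewable if its author is trusted directly or vouched for by someone who is. This is my illustration of the concept only, not Vouch's actual design, data model, or API; all names here are hypothetical.

```python
# Conceptual sketch of a contributor trust gate.
# Illustration of the idea only -- not Vouch's actual implementation.

TRUSTED = {"alice", "bob"}        # contributors the maintainers trust directly
VOUCHES = {"carol": "alice"}      # new contributor -> trusted contributor who vouched

def may_submit(username: str) -> bool:
    """A PR enters review only if the author is trusted,
    or has been vouched for by a trusted contributor."""
    if username in TRUSTED:
        return True
    voucher = VOUCHES.get(username)
    return voucher is not None and voucher in TRUSTED

assert may_submit("alice")         # trusted directly
assert may_submit("carol")         # vouched for by alice
assert not may_submit("mallory")   # drive-by PR: closed without review
```

The point of the design is that reputation, not code quality alone, becomes the first filter: a superficially clean AI-generated PR from an unknown account never consumes reviewer time.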
Steve Ruiz closed all external pull requests to tldraw. In a blog post titled "Stay Away from My Trash!", Ruiz described AI-generated PRs that "weren't obviously bad, and that was precisely the problem. They looked good. They were formally correct. Tests passed." But they were "fix this issue" one-shots from people who had zero context about the codebase or the project's direction.
He posed a question every business should think about: "In a world of AI coding assistants, is code from external contributors actually valuable at all? If writing the code is the easy part, why would I want someone else to write it?"
It's Not Just These Three
NetBSD banned AI-generated code contributions. Gentoo Linux voted to forbid them. The Linux kernel is implementing disclosure guidelines. Jeff Geerling, a major figure in the Ansible and Raspberry Pi communities, published "AI is destroying Open Source, and it's not even good yet."
GitHub is evaluating options including turning off pull requests entirely for some repos, limiting PRs to trusted collaborators, and deploying AI triage tools to filter the noise. Their own Copilot feature was called out for making the problem worse, since Copilot-generated issues appear under the human user's name without any AI label.
The Quality Numbers Are Brutal
This isn't just maintainer frustration. The data is clear.
Analysis of AI-generated code in production shows:
- 1.75x more logic and correctness errors
- 1.64x more code quality and maintainability issues
- 2.74x more likely to introduce XSS vulnerabilities
- 1.88x more likely to introduce improper password handling
- 1.91x more likely to make insecure object references
Pull requests per author increased 20% year-over-year, but incidents per pull request increased 23.5% and change failure rates rose roughly 30%. More code is shipping. More of it is broken.
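Those two growth rates compound. A back-of-the-envelope calculation (using the figures above; the combined estimate is mine and assumes the two increases are independent) shows what that means per developer:

```python
# How per-author incident risk compounds.
# Input figures are from the stats above; combining them
# multiplicatively is my own simplifying assumption.

prs_per_author_growth = 1.20     # PRs per author up 20% year-over-year
incidents_per_pr_growth = 1.235  # incidents per PR up 23.5%

# More PRs per author, each more likely to cause an incident:
incidents_per_author_growth = prs_per_author_growth * incidents_per_pr_growth

print(f"{(incidents_per_author_growth - 1) * 100:.0f}% more incidents per author")
# -> 48% more incidents per author, even though each individual
#    number looks only modestly worse in isolation.
```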
GitClear's analysis of 211 million lines of code found AI-assisted development produces 60% less refactored code and 48% more copy-paste patterns. The code looks clean on the surface but lacks the structural thinking that makes software maintainable.
Why This Matters for Your Business
You might be thinking: "I don't maintain an open source project. Why should I care?"
Because the same dynamics playing out in open source are playing out inside your company right now. The difference is that open source maintainers caught it early because they review everything publicly. Your internal AI-generated code might have the same quality problems, and nobody's noticing yet.
AI amplifies the skill of the operator. Every maintainer who spoke up made the same point. AI makes good developers faster and bad developers more prolific. If your team doesn't deeply understand the codebase they're modifying, AI will help them produce convincing-looking code that's fundamentally wrong.
"Tests pass" doesn't mean "this is correct." The tldraw PRs passed tests. They were formally correct. They still would have damaged the project. Automated testing catches syntax and logic errors. It doesn't catch architectural misunderstandings, missed edge cases in the broader system, or code that works but creates maintenance nightmares.
Review burden doesn't disappear. It shifts. AI moves work from writing code to reviewing code. And reviewing AI-generated code can be harder than reviewing human code because it's superficially polished. It looks right at first glance. You have to look deeper to find the problems.
What Smart Teams Are Doing
The maintainers fighting AI Slopageddon aren't anti-AI. They all use AI tools themselves. The pattern they've landed on is worth copying:
Trust but verify, with real understanding. AI is a tool wielded by someone who understands the system. Not a replacement for understanding. Before any AI-generated code ships in your organization, at least one person needs to fully understand what it does and why.
Build quality gates, not speed gates. Stop measuring developer productivity by output volume. Measure by outcomes: stability, bug rates, time-to-resolution when things break. If your team is shipping 3x more code but your incident rate is climbing, you have an AI quality problem.
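A minimal sketch of what outcome-based measurement looks like in practice (all data, field names, and thresholds here are hypothetical illustrations, not a standard):

```python
# Sketch of outcome-based quality signals: compare rates, not volume.
# All numbers and names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class QuarterStats:
    prs_merged: int      # raw output volume
    incidents: int       # production incidents traced to a code change
    failed_changes: int  # deploys rolled back or hotfixed

def quality_signals(prev: QuarterStats, curr: QuarterStats) -> dict:
    """Growth in outcome rates quarter-over-quarter."""
    def per_pr(count: int, stats: QuarterStats) -> float:
        return count / stats.prs_merged

    return {
        "output_growth": curr.prs_merged / prev.prs_merged,
        "incident_rate_growth":
            per_pr(curr.incidents, curr) / per_pr(prev.incidents, prev),
        "change_failure_growth":
            per_pr(curr.failed_changes, curr) / per_pr(prev.failed_changes, prev),
    }

prev = QuarterStats(prs_merged=100, incidents=8, failed_changes=10)
curr = QuarterStats(prs_merged=300, incidents=30, failed_changes=36)

signals = quality_signals(prev, curr)
# Output tripled (3.0x), but incidents per PR rose 25% and the change
# failure rate rose 20%: the "shipping more, breaking more" pattern.
```

If `output_growth` is the only number going up and to the right, the dashboard is measuring the wrong thing.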
Invest in code review skills. This is the new bottleneck. The ability to read AI-generated code critically, spot the subtle issues that automated tests miss, and evaluate whether a change fits the broader system architecture. This is the skill that separates teams that benefit from AI from teams that get buried by it.
Document context, not just code. When AI generates a solution, record why that approach was chosen and what alternatives exist. Future you (or future team members) will need that context when the code needs to change.
The AI Slopageddon is a warning shot. The flood of low-quality AI-generated work isn't just an open source problem. It's a preview of what happens to any organization that prioritizes AI speed over AI quality.
The tools are powerful. The question is whether you're wielding them or being wielded by them.
Want to make sure your AI development workflow has proper quality controls? Book a free 30-minute call and let's build review processes that actually catch what tests miss.
Sources:
- Kate Holterhoff / RedMonk: AI Slopageddon and the OSS Maintainers
- The New Stack: Drowning in AI slop, cURL ends bug bounties
- tldraw blog: Stay Away from My Trash! (Steve Ruiz)
- InfoQ: AI "Vibe Coding" Threatens Open Source
- GitHub: Welcome to the Eternal September of open source
- Jeff Geerling: AI is destroying Open Source
- It's FOSS: Mitchell Hashimoto Launches Vouch