AI’s security report tsunami is drowning open-source maintainers
- cURL’s lead dev spends hours daily triaging AI-generated security reports
- Shift from ‘AI slop’ to high-quality findings, but at a cost
- Open-source maintainers now face a new bottleneck: signal overload
Daniel Stenberg, the lead developer of cURL, didn’t mince words: what started as an ‘AI slop tsunami’—a deluge of low-quality, AI-generated security noise—has morphed into something more insidious. The slop is gone, replaced by a flood of legitimate, high-quality security reports that now demand hours of his daily attention. It’s the kind of problem that sounds like progress until you’re the one drowning in it.
The shift suggests AI tools have matured past their hallucination-heavy infancy, when false positives and half-baked vulnerability reports clogged inboxes. Now, by Stenberg’s account (relayed by Simon Willison), the output is ‘really good’, but the volume is relentless. For open-source maintainers already stretched thin, this isn’t just a workflow adjustment; it’s a structural change in how security research scales.
What’s missing from the cheerleading about AI’s improving accuracy? The human cost. Stenberg’s comment implies a quiet crisis: tools are getting better at finding real issues, but the ecosystem isn’t equipped to handle the resulting tsunami of actionable work. The question isn’t whether AI can generate useful reports—it’s whether the people who have to act on them can keep up.
From garbage-in to firehose-out: when better AI tools create worse workflows
This isn’t just a cURL problem. The pattern mirrors broader trends in AI-assisted security scanning, where tools like GitHub’s CodeQL or Semgrep are lowering the barrier to entry for vulnerability discovery. The result? A democratization of security research—but also a centralization of triage burden on maintainers. Early adopters of these tools may celebrate the drop in false positives, but the real bottleneck is now prioritization, not detection.
The irony is thick: AI was supposed to reduce toil, not redistribute it. Yet here we are, with maintainers like Stenberg effectively becoming human spam filters for high-quality signals. The open-source community has long warned about burnout from unpaid labor; this is just the latest vector. If the trend holds, we’ll see one of two outcomes: either maintainers start ignoring reports en masse (defeating the purpose of better tools), or projects begin gating access to security disclosures, a move that could chill collaboration.
The real signal here isn’t that AI is finally ‘good enough’ for security. It’s that we’ve optimized for detection without designing for response. Tools that generate reports faster than humans can process them don’t just create backlogs—they create systemic risk. When the next critical vulnerability lands in Stenberg’s inbox, will it get the attention it deserves, or will it be lost in the flood?
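To make that arithmetic concrete, here’s a minimal back-of-envelope sketch in Python. The numbers are made up purely for illustration (a dozen valid reports arriving per day against triage capacity for eight); the point is the shape of the dynamic, not the specific figures. When detection outpaces response, the backlog doesn’t stabilize, it grows linearly, and every new report, critical or not, waits longer than the last.

```python
# Toy model of the backlog dynamic described above.
# All numbers are hypothetical; they only illustrate what happens
# when reports arrive faster than a maintainer can triage them.

REPORTS_PER_DAY = 12   # assumed inbound rate of valid security reports
TRIAGE_CAPACITY = 8    # assumed reports a maintainer can handle per day

backlog = 0
for day in range(1, 31):
    backlog += REPORTS_PER_DAY                 # new reports land
    backlog -= min(backlog, TRIAGE_CAPACITY)   # maintainer works the queue
    # A report filed today waits behind everything already queued.
    wait_days = backlog / TRIAGE_CAPACITY
    if day % 10 == 0:
        print(f"day {day:2d}: backlog={backlog:3d}, "
              f"new report waits ~{wait_days:.1f} days")
```

Under these assumed rates the queue grows by four reports a day, so by day 30 a newly filed critical issue sits behind roughly two weeks of triage work. That is the systemic risk in miniature: the gap doesn’t have to be large, it only has to be sustained.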