A search tool that filters results to pre-ChatGPT content hit 530 points on Hacker News today. Developers aren't just complaining about AI-generated "slop" anymore; they're building tools to erase the post-November 2022 internet from their search results. Pre-AI Search and Slop Evader, two browser extensions launched recently, use a blunt but effective strategy: restrict results to content published before ChatGPT's November 30, 2022 launch, treating "pre-2023" as a proxy for quality.
This isn't nostalgia. It's developers, AI's core audience, admitting that AI content has degraded search quality so badly that date-based filtering is now the most practical fix. When the people building AI tools are the first to filter AI content out, something is fundamentally broken.
Why “Pre-2023” Became a Proxy for Quality
ChatGPT's November 30, 2022 launch triggered an explosion of AI content farms. Google's March 2024 core update was an implicit admission: it aimed to cut "unhelpful" content in search results by 40%. AI-generated content now accounts for 17.31% of search results, down from a peak of 19.56% in July 2025.
Oli Evans, creator of Pre-AI Search, built the extension after getting frustrated with how "AI content farms have taken over Google results." As a developer, he was "spending more time filtering through AI-generated noise than finding actual solutions." The tools work because post-2022 content is SEO-optimized but subtly wrong: confident yet inaccurate, good enough to rank highly but bad enough to break production code.
In contrast, developers trust pre-2023 content because it was community-vetted. Stack Overflow threads from 2019-2022 carry peer review and years of battle-testing. Post-ChatGPT tutorials are often AI-generated, untested, and plagiarized from those same Stack Overflow threads, with fresh bugs introduced. Pre-2023 isn't perfect, but it's measurably better than the alternative.
Stack Overflow’s Decline Proves the Problem
Stack Overflow's question volume has dropped 76% since ChatGPT launched, including a 60% year-over-year decline by December 2024. Python tag vote activity fell from 90,000 in 2019 to 10,000 in August 2025, the lowest since 2011. Traffic dropped 14% month-over-month in March-April 2023 alone.
Eric Holscher, co-founder of Read the Docs, noted that “Stack Overflow’s question volume began falling quickly after ChatGPT was released in November 2022, with the drop continuing into 2025 at alarming speed.” Meanwhile, questions became “systematically more complex” post-ChatGPT—simple questions now go to AI, leaving only hard problems for humans.
This creates a vicious cycle: fewer questions mean less community activity, which produces worse answers, which drives more developers to AI, which further reduces questions. Meanwhile, AI tools train on Stack Overflow's historical data, creating nothing new, just remixing what humans already wrote. The well is drying up.
The Pre-AI Search Tools Fighting Back
Two major tools have emerged to address the problem. Pre-AI Search, a Chrome extension by Oli Evans, adds a one-click toggle that filters Google results to pre-2023 content. It's privacy-first (no tracking or ads), simple, and effective. Slop Evader, created by artist and researcher Tega Brain, filters seven sites, including Google, YouTube, Reddit, Stack Exchange, and MumsNet, to content from before November 30, 2022.
Both tools use date-based filtering via search API parameters. Brain explains the goal isn't a permanent block on AI content, but rather "to make people more aware of how much synthetic information they normally accept without questioning it." Developers report measurably better search results: less time filtering noise, more time solving actual problems.
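The extensions' internals aren't documented here, but date-based filtering of this kind can be sketched with Google's custom date range URL parameter (`tbs=cdr:1`); the helper name and defaults below are illustrative, not taken from either tool:

```python
from urllib.parse import quote_plus

def pre_ai_search_url(query: str, cutoff: str = "11/30/2022") -> str:
    """Build a Google search URL restricted to pages dated before `cutoff`."""
    # tbs=cdr:1 enables Google's custom date range; cd_max sets the upper
    # bound (MM/DD/YYYY). This only sketches the URL-parameter approach,
    # not how Pre-AI Search or Slop Evader actually implement it.
    return (f"https://www.google.com/search?q={quote_plus(query)}"
            f"&tbs=cdr:1,cd_max:{cutoff}")

url = pre_ai_search_url("python asyncio tutorial")
```

A browser extension would apply the same rewrite to the user's outgoing search requests rather than constructing URLs by hand.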
However, these tools shouldn’t need to exist. That they do—and that they’re trending—proves search engines have failed to solve the content quality problem. Pre-AI filters are treating symptoms while the disease goes uncured.
Model Collapse: AI Eating Its Own Tail
Research published in Nature in July 2024 demonstrates "model collapse": models trained on AI-generated content degrade irreversibly, losing information from the tails of the data distribution. This is happening now; AI content is already training the next generation of models.
The researchers found that "indiscriminate learning from data produced by other models causes 'model collapse'—a degenerative process whereby, over time, models forget the true underlying data distribution." The problem compounds: bad AI content trains worse AI models, which produce even worse content. It's a death spiral.
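The effect can be illustrated with a toy Gaussian version of this feedback loop: repeatedly fit a normal distribution to samples drawn from the previous generation's fit, and the estimated spread collapses, erasing the distribution's tails. The sample size and generation count here are illustrative choices, not figures from the paper:

```python
import numpy as np

# Model collapse in miniature: each "generation" is a Gaussian fitted to
# samples generated by the previous generation's model. Small-sample
# estimation error compounds across generations, and the variance (the
# distribution's tails) collapses toward zero.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # "real", human-made data
initial_std = data.std()

for generation in range(1000):
    mu, sigma = data.mean(), data.std()          # fit the next "model"
    data = rng.normal(mu, sigma, size=20)        # train it on model output

final_std = data.std()
# final_std ends up orders of magnitude smaller than initial_std: the
# chain of models has forgotten the spread of the original distribution.
```

Real model collapse involves far richer distributions, but the mechanism is the same: each generation sees only samples of samples, and rare information goes first.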
Pre-AI tools aren't just about current search quality; they're about preventing a future in which all content is AI-generated derivatives of derivatives. The internet is eating itself, and developers filtering to pre-2023 are trying to preserve the original recipes before they're lost.
The Real Problem Isn’t AI—It’s Search
Here's the controversial take: Google optimized for ads, not truth. Search quality was declining even before ChatGPT: SEO farms, content mills, low-quality listicles. AI didn't break search; it exposed that search was already broken. The economic incentives reward cheap content over accurate content.
Google's March 2024 update targeted "scaled content abuse, expired domain abuse, site reputation abuse"—problems that existed long before AI. Kagi, a paid search engine, doesn't have the AI slop problem because it isn't ad-driven: it's subscriber-funded, which incentivizes quality over quantity. The problem isn't the technology; it's the business model.
Pre-AI tools are treating symptoms. The cure requires rethinking search from first principles: prioritize accuracy over engagement, quality over quantity. Perhaps the lesson isn’t “AI is bad”—it’s “free, ad-supported search can’t maintain quality at scale.”
What Comes Next
Pre-AI tools are just the beginning. "Pre-2023" grows less useful as content ages; eventually we'll need verified-human content platforms, AI-detection tools, or a bifurcated internet split into "authentic" and "synthetic" zones. Tega Brain plans to expand Slop Evader to DuckDuckGo, and demand for "verified human" badges will grow.
Either search quality improves through better algorithms and business models, or the internet splits into curated human content versus AI-generated noise. Developers filtering to pre-ChatGPT content aren't Luddites; they're canaries in the coal mine. When the people who understand AI best are the first to reject AI content, the rest of us should pay attention. The internet got worse as AI got better, and filtering by date is a hack that shouldn't be necessary. Fix search, or watch it die.