The New York Times just sued Perplexity AI for copyright infringement, and this time the “we’re just a search engine” defense isn’t going to fly. On December 5, 2025, the Times filed suit in the Southern District of New York after 18 months of ignored warnings. The paper is demanding that Perplexity stop using its content and pay damages. This isn’t just another tech lawsuit; it’s the case that will define whether AI companies are innovating or just stealing with extra steps.
If you’re building anything with AI, you need to understand what happened here.
Eighteen Months of Warnings, Zero Response
This wasn’t an oops moment. Perplexity received cease-and-desist letters for over 18 months before the lawsuit dropped. The Times tried to negotiate; Perplexity ignored them. That pattern matters because it shows intent, not an innocent mistake. The Chicago Tribune, Wall Street Journal, and New York Post have all filed similar lawsuits. Multiple publishers, same story: Perplexity took their content and refused to license it.
Here’s what the lawsuit alleges: Perplexity scraped Times articles, including paywalled stories, and reproduced them “verbatim or near-verbatim” as AI-generated responses. Not summaries. Not snippets. Entire articles, repackaged as Perplexity’s own content. That’s not indexing. That’s copying.
Meanwhile, Meta Chose the Legal Path
Want to know the most damning part? The same day the Times sued Perplexity, Meta announced licensing deals with seven publishers, among them CNN, Fox News, People, USA Today, and Reuters. These are multi-year commercial agreements where publishers actually get paid.
Meta proved there’s a legal path forward. Publishers are willing to license their content to AI companies. Perplexity just chose not to pay. That choice is why they’re in court instead of partnering with the Times like Meta did with Reuters.
The contrast is stark: Meta paid for access. Perplexity took it anyway.
Verbatim Reproduction Isn’t “Search”
Perplexity’s CEO dismissed the lawsuit by claiming publishers have sued every new technology for a hundred years: radio, TV, the internet. Classic deflection. But there’s a difference between disruption and theft.
Search engines link to content. They drive traffic to publishers. Google News shows headlines and snippets, then sends you to the source. Perplexity reproduces entire articles so you never leave. That’s not search; that’s replacement.
Fair use requires transformation, not just reproduction. The Second Circuit has ruled that anyone claiming fair use must prove their “secondary use does not compete in the relevant market.” Perplexity can’t do that. Their AI summaries compete directly with reading the original Times articles. Users get the information without clicking through, which means the Times loses traffic, subscribers, and revenue.
Add in the fact that Perplexity is a commercial product with a $9 billion valuation backed by Jeff Bezos and NVIDIA, and the fair use argument gets even weaker. You can’t claim fair use while profiting off verbatim copies of paywalled content.
Every Developer Using RAG Should Pay Attention
This is the first major US copyright lawsuit targeting RAG (Retrieval-Augmented Generation) technology. If you’re building chatbots, AI research tools, or anything that retrieves and summarizes web content, this case affects you.
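To make that concrete, here’s a minimal Python sketch of the RAG pattern at issue. Everything in it is illustrative: the toy corpus, the naive keyword retrieval, and the stubbed generate() standing in for whatever model API you actually call.

```python
# Minimal RAG sketch: retrieve third-party text, copy it into a prompt,
# generate an answer. The corpus and generate() stub are illustrative.

def retrieve(query: str, corpus: list[dict], k: int = 3) -> list[dict]:
    """Naive keyword-overlap retrieval; real systems use vector search."""
    q_terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, docs: list[dict]) -> str:
    # The retrieved articles are reproduced inside the prompt verbatim.
    # This copying happens before the model generates a single word.
    context = "\n\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return f"Answer using only these sources:\n\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Stub standing in for an LLM call (OpenAI, Anthropic, a local model)."""
    return "<model output here>"

corpus = [
    {"source": "example.com/a", "text": "Example article about topic A."},
    {"source": "example.com/b", "text": "Example article about topic B."},
]
answer = generate(build_prompt("What about topic A?", retrieve("topic A", corpus)))
```

Notice where the copying happens: the retrieved articles are pasted wholesale into the prompt before the model produces any output. The generated answer is only half the story.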
The U.S. Copyright Office released a 2025 report identifying copyright risks at every stage of AI development: data collection, training, RAG, and outputs. Their conclusion? Fair use is not guaranteed. Legal experts warn that using “huge volumes of proprietary copyrighted information” for commercial purposes makes it nearly impossible to claim fair use.
Here’s the question every AI developer needs to ask: Where is my training data from, and do I have permission to use it? If your app is reproducing content from copyrighted sources to generate commercial responses, you’re in the same legal gray area as Perplexity. Except Perplexity has billions in funding for lawyers. You probably don’t.
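If you do crawl the open web, one minimal first step is honoring robots.txt before fetching anything. Here’s a sketch using Python’s standard-library robotparser; the user agent name is made up, and passing this check is a courtesy floor, not a license to reproduce what you fetch.

```python
# Sketch: check robots.txt before fetching a page. This is a floor, not a
# defense; a permitted crawl is not a license to reproduce the content.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(url: str, user_agent: str = "MyRAGBot") -> bool:
    parts = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # network call; wrap in try/except for production use
    return rp.can_fetch(user_agent, url)

if allowed_to_fetch("https://www.example.com/some-article"):
    print("Crawl permitted by robots.txt; licensing is a separate question.")
else:
    print("Disallowed: don't fetch it, and definitely don't reproduce it.")
```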
Implement source attribution. Link to originals. Add guardrails to prevent verbatim reproduction. And if you’re using copyrighted content commercially, consult a lawyer. “Everyone’s doing it” isn’t a legal defense.
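There’s no off-the-shelf “don’t reproduce the source verbatim” switch, but a crude guardrail is easy to sketch: compare word n-grams between the retrieved text and the model’s output, and withhold or regenerate when the overlap runs high. The 8-word shingles and 0.2 threshold below are illustrative guesses, not legally meaningful lines.

```python
# Crude guardrail: measure how much of the model's output reproduces the
# retrieved source verbatim, and withhold it above a threshold. The 8-word
# shingles and 0.2 limit are illustrative guesses, not legal thresholds.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(source: str, output: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear in the source."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source, n)) / len(out)

def guarded_answer(output: str, source_text: str, source_url: str,
                   limit: float = 0.2) -> str:
    if verbatim_overlap(source_text, output) > limit:
        # Too close to the original: link out instead of shipping a copy.
        return f"Summary withheld; read the original at {source_url}"
    return f"{output}\n\nSource: {source_url}"  # always attribute and link
```

In practice you’d regenerate with a stricter prompt rather than refuse outright, but the principle holds: measure how much of your output is someone else’s text before you ship it.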
What Happens Next
There are three possible outcomes here. First, publishers win and AI companies are forced to license news content; that favors big tech with deep pockets and squeezes startups. Second, AI companies win and fair use gets interpreted broadly, giving them free rein to use public content; that accelerates the death of traditional journalism. Third, and most likely, courts split the difference: verbatim reproduction gets classified as infringement, but summarization with attribution and linking remains fair use, subject to guardrails.
Perplexity will probably lose this one. The 18-month pattern of ignoring warnings undermines any “we didn’t know” defense. More importantly, this case will set the precedent for how RAG applications are regulated. Expect licensing to become standard practice, at least for commercial AI products using news content.
The takeaway for developers: the AI wild west is ending. Copyright law is catching up. Build accordingly.



