The Change
GitHub announced on March 25 that starting April 24, 2026, it will use interaction data from Copilot Free, Pro, and Pro+ users to train AI models—unless you opt out. You have 30 days to disable the setting or your code snippets, context, and usage patterns become training data.
Enterprise and Business users are exempt. Individual developers must act to protect their code.
How to Opt Out Before April 24
Here’s what you need to do:
- Go to your GitHub Settings
- Navigate to Copilot in the left sidebar
- Click on the Features tab
- Find the Privacy section
- Locate “Allow GitHub to use my data for AI model training”
- Set it to Disabled
Or go directly to: https://github.com/settings/copilot/features
This applies to all Copilot interfaces—CLI, VS Code extension, and Copilot Chat. If you have multiple GitHub accounts, you’ll need to do this for each one.
What “Interaction Data” Actually Means
GitHub’s definition of “interaction data” is broader than you might think. It’s not just code sitting in your private repositories.
According to GitHub’s announcement, interaction data includes:
- Code snippets you accept or modify
- Code context around your cursor
- Comments and documentation
- File names and repository structure
- Navigation patterns
- Copilot Chat conversations
- Even your thumbs-up or thumbs-down feedback on suggestions
The technical loophole: GitHub claims it doesn’t train on “private repositories at rest,” but the moment Copilot activates to generate a suggestion, everything it processes becomes interaction data. API keys in comments, proprietary algorithms in the surrounding context, database credentials: if Copilot sees it, it can end up in training data.
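One practical mitigation is to scan files for obvious credential shapes before working on them with an AI assistant in context. The sketch below is illustrative only: the pattern names and regexes are my own rough approximations, and a purpose-built scanner such as gitleaks or trufflehog covers far more formats.

```python
import re
from pathlib import Path

# Illustrative patterns for common credential shapes.
# These are assumptions for the sketch, not an exhaustive rule set.
SECRET_PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for every suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

def scan_file(path: Path) -> list[tuple[str, int]]:
    """Scan a single file, ignoring undecodable bytes."""
    return scan_text(path.read_text(errors="ignore"))
```

Running `scan_text` over a file that contains a line like `api_key = "abcdefgh1234"` would flag it before that line ever sits in Copilot's context window.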
Why Enterprise Users Don’t Have to Worry
If you’re on Copilot Business or Enterprise, your data is protected by default. GitHub won’t use it for training.
Free, Pro, and Pro+ users? You’re opted in by default starting April 24.
This creates a two-tier privacy system where companies that pay more get better protection, while individual developers bear the burden of opting out. Privacy as a premium feature, not a right.
Developer Backlash
The Hacker News discussion thread is filled with criticism. Developers are calling out the dark pattern UI design—the setting is worded so “Enabled” sounds beneficial rather than concerning. Multiple developers reported finding the setting enabled by default without their explicit consent.
One top comment: “I have zero confidence they are not already training on our data.”
The main concerns: accidental exposure of API keys, intellectual property theft (proprietary solutions absorbed into models used by competitors), and broken trust. Microsoft and GitHub repeatedly promised not to train on private code. The semantic game of “not training on repos at rest” while training on “interaction data” feels like a betrayal.
Some developers are announcing migration plans—moving to Codeberg, self-hosted Git solutions, or Forgejo. The trust damage may be worse than the policy itself.
This Is Part of a Pattern
GitHub isn’t alone. Adobe, OpenAI, Google, and others have moved to similar opt-out models for AI training. The industry default is now: collect everything unless users actively object.
GitHub also has history here. A class-action lawsuit filed in 2022 alleged Copilot was trained on open-source code and suggested snippets without respecting licenses. A judge dismissed some copyright claims, but license violation and breach of contract allegations remain ongoing.
What to Do
Check your settings before April 24. If you’re on a Free, Pro, or Pro+ plan and you care about code privacy, opt out now.
If you’re working on proprietary projects or handling sensitive data, consider upgrading to Business or Enterprise, where training is excluded by default. Or evaluate alternatives that handle data differently, such as Cursor, Tabnine, or Amazon Q Developer (formerly CodeWhisperer).
The 30-day window is tight, and it’s unclear if GitHub will proactively notify all affected users. Don’t wait.