Google API keys that developers embedded in websites for Maps and Firebase now also authenticate to Gemini AI, exposing private files and billable services. Truffle Security disclosed today (February 26, 2026) that a November 2025 scan turned up nearly 3,000 affected keys. The vulnerability stems from an architectural failure on Google's part: a single API key format (AIza…) serves both public services (Maps embeds) and sensitive ones (Gemini, with file access). Developers who followed Google's official guidance to the letter are now vulnerable.
Google Broke Its Own Security Model
For over a decade, Google explicitly told developers that API keys for Maps Embed API and Firebase were “not secrets” if properly restricted with HTTP referrers. That guidance was reasonable—Maps embedding requires client-side keys visible to anyone viewing page source. Google’s documentation encouraged this practice.
When Gemini launched, Google enabled the “Generative Language API” on all existing projects by default. Existing “public” keys suddenly gained access to sensitive Gemini endpoints—with no warning, no email notification, no opt-in requirement. Keys that were safe when deployed became dangerous overnight.
Even Google’s own internal API keys (deployed since February 2023, predating Gemini by over a year) were found exposed with Gemini access. The discovery came from external security researchers, not Google’s internal monitoring. As one Hacker News developer put it: “No warning. No confirmation dialog. No email notification when access was granted.”
This isn’t just another leaked credentials story. This is retroactive privilege escalation—a fundamental breach of trust. Google changed what “secure by design” means without telling anyone.
What Attackers Can Access
With a compromised API key, attackers target three critical Gemini endpoints. The /files/ endpoint stores uploaded documents, PDFs, and images—anything users uploaded to Gemini for AI analysis. The /cachedContents/ endpoint contains conversation history and context from previous Gemini sessions. Both expose sensitive data that developers never intended to be publicly accessible.
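To see why these endpoints matter, here is a minimal sketch of what an attacker's probe looks like, assuming a leaked key. The paths are the public v1beta routes; the key string is a placeholder, and the code only constructs the URLs rather than sending them:

```python
from urllib.parse import urlencode

BASE = "https://generativelanguage.googleapis.com/v1beta"
leaked_key = "AIzaSy-EXAMPLE-PLACEHOLDER"  # hypothetical leaked key

# The two endpoints described above: uploaded files and cached context.
probe_urls = [
    f"{BASE}/files?{urlencode({'key': leaked_key})}",
    f"{BASE}/cachedContents?{urlencode({'key': leaked_key})}",
]

for url in probe_urls:
    # A real attacker would GET each URL and walk the paginated listing;
    # here we only build the request targets.
    print(url)
```

Note that the key rides in a plain query parameter: no OAuth flow, no user consent, nothing tying the request to the site the key was scraped from.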
The billing angle is equally concerning. Attackers can rack up thousands in charges via unlimited LLM API calls. Unlike Maps API usage (which is capped), Gemini billing scales with usage. An exposed key can generate significant financial damage before detection.
HTTP referrer restrictions—the protection Google recommended for Maps keys—don’t help here. Referrer checks only apply to browser-based requests. Gemini API calls come from servers and scripts where referrer headers don’t exist. The security model that worked for Maps fails completely for Gemini.
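Python's standard library makes the point concrete: a server-side request simply has no Referer header for Google to check against an allowlist, and a script can fabricate whatever referrer the allowlist expects. A small illustration (placeholder URL and key; nothing is sent over the network):

```python
import urllib.request

url = "https://generativelanguage.googleapis.com/v1beta/models?key=AIzaSy-EXAMPLE"  # placeholder

# A plain server-side request carries no Referer header at all...
req = urllib.request.Request(url)
print(req.has_header("Referer"))  # → False: nothing for a referrer check to inspect

# ...and nothing stops a script from forging one that matches the allowlist:
spoofed = urllib.request.Request(url, headers={"Referer": "https://trusted.example.com/"})
print(spoofed.has_header("Referer"))  # → True: the "restriction" is attacker-controlled
```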
The Technical Root Cause
Google Cloud uses a single API key format (starting with “AIza”) for incompatible security models. The same key type serves Maps embedding (designed for public visibility) and Gemini authentication (designed for private access). When Google added Gemini to existing projects, it didn’t create new keys or require opt-in—it simply granted existing keys new privileges.
This is textbook privilege escalation. The “AIza” prefix doesn’t imply security—it’s just an identifier for Google’s backend systems. New API keys default to “Unrestricted” (granting access to all enabled APIs) unless manually restricted. Most developers don’t realize their Maps key from 2023 now works with Gemini in 2026.
Other cloud providers handle this better. AWS IAM and Azure AD use explicit permission models where new services require explicit grants. Adding a new AWS service to your account doesn’t retroactively give existing credentials access to it. You must update IAM policies explicitly.
Google’s model creates systemic risk. Every new API potentially turns existing “safe” keys into “dangerous” ones. There’s no notification, no audit trail, no opt-in gate. Just silent privilege escalation.
What Developers Must Do Now
Every developer with Google API keys must act immediately. First, audit all Google Cloud projects for “Generative Language API” enablement. Go to GCP Console, navigate to APIs & Services, then Enabled APIs. If Generative Language API is listed, every API key in that project can access Gemini.
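If you prefer scripting the audit over clicking through the console, one option (an assumption about tooling, not the only route) is the Service Usage API, which lists a project's enabled services. This sketch only builds the request URL; a real audit would send it with an OAuth token and grep the response:

```python
from urllib.parse import urlencode

project_id = "my-example-project"  # hypothetical project ID

# Service Usage API: list services enabled on the project. If the
# response's `services` list contains generativelanguage.googleapis.com,
# every key in that project can reach Gemini.
audit_url = (
    f"https://serviceusage.googleapis.com/v1/projects/{project_id}/services?"
    + urlencode({"filter": "state:ENABLED"})
)
print(audit_url)
```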
Second, rotate any keys that have ever been public. If it’s been in GitHub (even in a private repo that went public), embedded in a website, or distributed in a mobile app—assume it’s compromised. Delete the old key entirely. Don’t just restrict it.
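Deletion can likewise be scripted through the API Keys API (v2). The project and key IDs below are hypothetical; note that keys are addressed by their resource ID, not by the AIza string itself:

```python
project_id = "my-example-project"   # hypothetical
key_id = "1234abcd-example-key-id"  # hypothetical key resource ID (not the AIza string)

# API Keys API v2: deleting a key is a DELETE on its resource name.
resource = f"projects/{project_id}/locations/global/keys/{key_id}"
delete_url = f"https://apikeys.googleapis.com/v2/{resource}"
print("DELETE", delete_url)
```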
Third, create new keys with explicit API restrictions. Avoid the “Unrestricted” default. Whitelist specific APIs only. A Maps key should ONLY work with Maps API. A Gemini key should ONLY work with Generative Language API. Never reuse keys across services.
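In the API Keys API's Key resource, that allowlisting lives under restrictions.apiTargets. A sketch of two single-purpose keys follows; the display names are hypothetical, and the Maps service identifier is my best reading of Google's naming, so verify it in your own console before relying on it:

```python
import json

# One key, one service. Any API not listed in apiTargets is denied.
maps_only_key = {
    "displayName": "maps-embed-only",  # hypothetical name
    "restrictions": {
        "apiTargets": [
            {"service": "maps-embed-backend.googleapis.com"}  # assumed Maps Embed identifier
        ]
    },
}

gemini_only_key = {
    "displayName": "gemini-server-only",  # hypothetical name
    "restrictions": {
        "apiTargets": [
            {"service": "generativelanguage.googleapis.com"}  # Gemini only
        ]
    },
}

print(json.dumps(maps_only_key, indent=2))
```

With this shape applied to every key, a scraped Maps key is useless against Gemini even if the project has the Generative Language API enabled.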
Finally, set up billing alerts and usage monitoring. In Google Cloud Billing, create budget alerts for unexpected charges. Enable audit logs to track API key usage. Monitor for unusual patterns—thousands of Gemini calls from a Maps key is a red flag.
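On the budget side, the Cloud Billing Budgets API (v1) expresses alerts as threshold rules against a specified amount. A sketch of a budget that fires at 50% and 90% of a monthly cap; the name and dollar figures are illustrative:

```python
# Cloud Billing Budgets API v1: alert at 50% and 90% of a 100 USD monthly spend.
budget = {
    "displayName": "gemini-abuse-alarm",  # hypothetical name
    "amount": {
        "specifiedAmount": {"currencyCode": "USD", "units": "100"}
    },
    "thresholdRules": [
        {"thresholdPercent": 0.5},
        {"thresholdPercent": 0.9},
    ],
}
```

A budget does not stop spending by itself; it buys you detection time, which is the point when an exposed key is burning Gemini calls.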
Google is revoking some exposed keys, but it cannot identify every compromised key. You must audit independently. The fix will break existing workflows (mass key rotation always does), but the alternative is data exposure and billing fraud happening right now.
Key Takeaways
- Google silently enabled Gemini API access on all existing projects, granting “public” Maps keys dangerous new privileges without warning—breaking 10+ years of guidance that these keys weren’t secrets
- Attackers with exposed keys can access uploaded files (/files/), cached conversation data (/cachedContents/), and rack up unlimited billing charges
- The root cause is architectural: Google uses a single key format (AIza) for incompatible security models (public identification vs. sensitive authentication), while AWS and Azure require explicit permission grants
- Audit all GCP projects for “Generative Language API” enablement, rotate any keys that have been public, and create new keys with explicit API restrictions (never “Unrestricted”)
- This sets a dangerous precedent—if cloud providers can retroactively change security models without notification, how can developers reason about risk?
Truffle Security’s disclosure today (February 26, 2026) revealed 2,863 affected keys. The real number is likely higher. Check your keys now.