Sam Altman called AI water usage claims “completely untrue, totally insane” and “fake” at the Indian Express AI Summit in February 2026, specifically disputing viral assertions that ChatGPT uses “17 gallons of water per query.” He’s right that the metric is nonsense, but he’s also wrong to dismiss the issue. AI’s water impact is real: consumption is projected to grow 300% to 68 billion gallons annually by 2028, with two-thirds of new data centers being built in water-stressed regions like Arizona and Texas. The debate isn’t “real vs. fake”; it’s a measurement confusion that both sides exploit to dismiss or exaggerate concerns.
The “17 Gallons Per Query” Myth vs. Aggregate Reality
The viral claim is mathematically wrong. Actual per-query usage is 0.32 ml to 5 ml (less than one-fifteenth of a teaspoon, per Altman). The myth originated from a 2023 UC Riverside paper that averaged water consumption across entire data center campuses (including cloud storage, video streaming, and unrelated workloads) and then divided by AI query volume. It’s like blaming one airline passenger for all the fuel the plane burns.
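The denominator mismatch is easy to see with numbers. In this sketch, every campus and query figure is hypothetical (chosen only to reproduce the 17-gallon artifact); the 0.32-5 ml range is the per-query figure cited above.

```python
# Illustrative only: how charging an entire campus's water bill to AI
# queries alone inflates the per-query figure. The campus withdrawal and
# query count below are hypothetical, not measurements.
ML_PER_GALLON = 3785.41

campus_water_gallons = 1_700_000  # assumed daily withdrawal, ALL workloads
ai_queries = 100_000              # assumed daily AI queries only

# Flawed metric: attribute the whole campus to AI queries
flawed_per_query_gal = campus_water_gallons / ai_queries

# Per-query range cited in the text: 0.32 ml to 5 ml
actual_low_gal = 0.32 / ML_PER_GALLON
actual_high_gal = 5.0 / ML_PER_GALLON

print(f"flawed attribution: {flawed_per_query_gal:.0f} gallons/query")
print(f"cited range: {actual_low_gal:.6f} to {actual_high_gal:.6f} gallons/query")
```

The flawed attribution lands on 17 gallons per query, roughly four orders of magnitude above the cited range, purely because the numerator includes workloads the denominator excludes.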
However, dismissing all water concerns because of this flawed metric is disingenuous. AI data centers consumed 17 billion gallons in 2023 and are projected to hit 68 billion gallons by 2028—a 300% increase in five years. Per-query usage is negligible; total consumption is massive and accelerating. Altman attacks the flawed metric while ignoring the aggregate trend. Environmental groups cite the misleading stat and undermine their credibility on a legitimate problem. Both sides exploit confusion to avoid accountability.
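The 300% figure follows directly from the two endpoints given above:

```python
# Aggregate growth arithmetic from the figures cited in the text.
gallons_2023 = 17e9   # AI data center consumption, 2023
gallons_2028 = 68e9   # projected consumption, 2028

pct_increase = (gallons_2028 - gallons_2023) / gallons_2023 * 100
print(f"{pct_increase:.0f}% increase")  # 300% increase
```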
Withdrawal Isn’t Consumption: The 80% Evaporation Problem
Data centers use evaporative cooling because it’s the most energy-efficient method, lowering electricity costs 30-50% versus air cooling. The process withdraws water from local sources, circulates it through cooling towers where 70-80% evaporates into the atmosphere, and discharges the remaining 20-30% as wastewater. That 70-80% is consumed—permanently lost from the local water system. Not “recycled.” Not “returned.” Gone.
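The water balance described above can be sketched as a simple split; 75% is an assumed midpoint of the 70-80% evaporation range in the text, not a measured rate.

```python
def evaporative_balance(withdrawn_gal, evap_fraction=0.75):
    """Split a withdrawal into consumed (evaporated) and discharged shares.

    evap_fraction is an assumed midpoint of the 70-80% range in the text.
    """
    consumed = withdrawn_gal * evap_fraction   # lost to the atmosphere
    discharged = withdrawn_gal - consumed      # returned as wastewater
    return consumed, discharged

consumed, discharged = evaporative_balance(1_000_000)
print(f"consumed: {consumed:,.0f} gal, discharged: {discharged:,.0f} gal")
# consumed: 750,000 gal, discharged: 250,000 gal
```

The asymmetry is the point: for every million gallons withdrawn, three-quarters never come back to the local water system.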
Companies tout “returning water to treatment plants,” but they mean only the 20-30% that never evaporated. Google’s sustainability reports, for example, show roughly 20% returned to wastewater treatment and 80% lost to evaporation. In water-rich regions like the Pacific Northwest, this is sustainable. In arid regions like Arizona and Texas, where two-thirds of new data centers are being built, it’s a local catastrophe even if nationally “small.” Geography matters more than aggregate percentages.
Geographic Concentration Turns Growth Into Regional Crisis
National water usage percentages are meaningless when the impact is concentrated in already water-stressed regions. The Phoenix metro area is projected to see a 400% increase in water use from data center electricity demand, equivalent to supplying Scottsdale for over two years. Texas data centers are projected to grow from 49 billion gallons in 2025 to 399 billion gallons by 2030, a roughly 715% increase in five years.
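The Texas figure checks out against the cited endpoints:

```python
# Regional growth arithmetic from the Texas figures cited above.
texas_2025 = 49e9    # gallons, 2025
texas_2030 = 399e9   # gallons, projected 2030

pct_increase = (texas_2030 - texas_2025) / texas_2025 * 100
print(f"{pct_increase:.0f}% increase")  # ~714%, consistent with the ~715% cited
```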
Microsoft reports 42% of its water withdrawals come from water-stressed regions. Google reports 15%. Amazon refuses to disclose. These are self-reported, aggregated numbers without third-party verification or site-specific breakdowns. The same data center that’s “sustainable” in Seattle becomes extractive infrastructure in Phoenix. Local context is everything, and companies obscure it deliberately.
The Real Scandal: No Legal Transparency Requirements
Tech companies have zero legal requirement to disclose water usage. All data is voluntary, self-reported, aggregated, and unverified by third parties. SEC climate disclosure rules (currently stayed after the Trump administration declined to defend them) would require emissions reporting but not water usage. The EU’s CSRD requires ESG disclosures, but companies can aggregate data and obscure site-specific impacts.
In April 2026, more than a dozen institutional investors filed shareholder proposals demanding Amazon, Microsoft, and Google disclose site-specific water and power consumption. Companies resisted, citing competitive concerns and proprietary information. Local utilities often refuse to release customer-specific data, meaning communities can’t assess whether new facilities will strain water supplies until after construction. This is the real scandal: Altman dismisses water concerns as “fake” while OpenAI has no obligation to prove it. Companies say “trust us, we’re responsible” without providing verifiable data.
Energy vs. Water Is a False Choice
Altman argues energy is the real issue, not water. He’s wrong—it’s not either/or. Evaporative cooling is chosen because it’s energy-efficient, creating economic incentive to consume water even in scarce regions. Closed-loop cooling (90-95% water return, 5-10% consumption) uses 15-20% more energy. Air cooling uses 30-50% more energy but zero water.
The choice isn’t “energy or water”—it’s “which cost matters more?” Data centers might spend $50M/year on electricity versus $500K/year on water, making water consumption economically “free” even if environmentally harmful. Companies choose evaporative cooling in Arizona deserts not because water is abundant, but because water is cheap relative to energy. This is a policy failure (no water pricing that reflects scarcity) and a corporate responsibility failure (choosing economics over environmental impact in stressed regions), not an engineering constraint.
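The cost asymmetry can be sketched directly. Every dollar figure and multiplier below is an assumption built from the rough ranges in the text (the $50M/$500K baseline, 15-20% and 30-50% energy penalties, 5-10% water retention); none are real facility numbers.

```python
# Hypothetical annual cost comparison for three cooling strategies.
base_electricity = 50_000_000  # $/yr, evaporative-cooled baseline (assumed)
base_water = 500_000           # $/yr, evaporative-cooled baseline (assumed)

# name: (electricity multiplier, water multiplier) relative to evaporative
options = {
    "evaporative": (1.00, 1.00),    # energy-efficient, water-hungry
    "closed_loop": (1.175, 0.075),  # ~15-20% more energy, ~5-10% of the water
    "air_cooled":  (1.40, 0.00),    # ~30-50% more energy, zero water
}

costs = {
    name: base_electricity * e_mult + base_water * w_mult
    for name, (e_mult, w_mult) in options.items()
}
for name, total in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} ${total:,.0f}/yr")
```

Under these assumptions evaporative cooling wins by millions of dollars a year while the entire water bill is a rounding error, which is exactly the incentive structure the text describes: only water pricing that reflects scarcity changes the ranking.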
Altman is right: “17 gallons per query” is mathematically wrong and undermines legitimate environmental concerns. Altman is wrong: Aggregate water impact is real and growing 300% by 2028, with concentrated harm in water-stressed regions. The real problem is 70-80% evaporation in Phoenix and Texas, where local aquifers can’t sustain 400-715% growth. The real scandal is zero transparency requirements, allowing companies to refuse site-specific disclosure while claiming “trust us.”
Developers should demand verifiable data, not corporate reassurances. Tech companies must disclose site-specific usage or face regulation. The debate isn’t “real vs. fake”—it’s accountability versus evasion.