
Trump Tech Council: Big Tech Now Writes AI Rules

[Image: regulatory capture visualization showing government and corporate symbols merging, representing tech billionaires shaping AI policy]

President Trump appointed Mark Zuckerberg, Jensen Huang, and Larry Ellison to a new presidential technology council on March 25, 2026, giving the companies that control AI’s fundamental infrastructure—GPUs, training data, and cloud platforms—direct influence over US AI policy. The 13-member President’s Council of Advisors on Science and Technology will be co-chaired by venture capitalist David Sacks and will focus on AI, semiconductors, quantum computing, and nuclear power. Notably absent: Elon Musk and Sam Altman, despite being the most visible figures in AI.

This is regulatory capture in action. The appointees collectively control 95% of AI training GPUs, billions of users’ data, and 60% of cloud infrastructure. Now they’ll write the rules governing AI safety, competition, and chip access. It’s the equivalent of asking oil companies to write climate regulations.

Who Controls the AI Infrastructure

The council members aren’t just tech executives—they’re the gatekeepers of AI development itself. Jensen Huang’s Nvidia commands over 90% of the AI training GPU market. Mark Zuckerberg’s Meta controls 3+ billion users’ data, the open-source Llama models, and FAIR research operations. Larry Ellison’s Oracle holds the #3 spot in cloud infrastructure and dominates enterprise databases.

Add in Sergey Brin (Google’s search data and TPUs), Lisa Su (AMD’s #2 GPU position), Michael Dell (hardware distribution), and the picture becomes clear. Combined, these companies control 95%+ of AI training compute and 60%+ of cloud infrastructure. Developers depend on them for GPUs, APIs, cloud resources, and model access.

When these companies shape AI policy, they can create regulations that protect their market positions. Higher compliance costs for startups. Export controls that favor incumbents. “Safety” requirements only big companies can meet. That’s not expertise—that’s self-interest dressed up as public service.

Regulatory Capture by Definition

According to RAND Corporation research, “AI industry’s influence on policymaking qualifies as regulatory capture when policy outcomes favor industry actors and contravene the public interest.” That’s precisely what’s happening here. When AI entrepreneurs advise on AI market regulation, they face obvious conflicts of interest—they can use regulations to benefit their firms while harming competitors and the public.

The mechanisms are well-documented: agenda-setting, advocacy, academic capture, information management, cultural capture through status, and media control. Industry actors can influence not just the content of regulations, but their strength—or whether they exist at all. Research shows this leads to weak regulations, no regulations, or worse, regulations that protect AI companies’ market advantages.

We’ve seen this pattern fail before. Pharmaceutical companies advising on drug safety. Financial firms writing banking rules. Energy companies shaping climate policy. The results speak for themselves. For developers, regulatory capture means locked-in expensive infrastructure, barriers to competing with Big Tech, defunded safety research, and regulations that favor closed models over open source.

Where Are the Scientists?

Unlike previous PCAST councils under Obama and Biden—which featured majority academic representation from universities and research institutions—Trump’s 2026 council is dominated by tech billionaires. Science Magazine noted the panel is “stuffed with high-tech billionaires” rather than independent researchers. Of 13 announced members (up to 24 total), only one is an academic scientist.

This shift matters. Independent research and academic oversight provide checks on industry claims. Without them, policy gets shaped by profit motives rather than public good. That affects which AI safety research gets funded, what compliance standards exist, and whether regulations favor innovation or incumbents.

The absence of academic voices is deliberate. Industry executives can claim expertise, but their incentives don’t align with public interest. Academic researchers, by contrast, have no financial stake in specific policy outcomes. That independence is precisely what’s missing from this council.

The Notable Absences

Elon Musk and Sam Altman—the two most visible figures in AI—are conspicuously absent from the council. Despite Musk’s early support for Trump’s campaign and leadership of the Department of Government Efficiency, he didn’t make the cut. Neither did Altman, whose OpenAI dominates public AI discourse.

Sources cited concerns about “unfettered innovation” and a desire to avoid “personality-driven conflicts.” Translation: they’re too controversial for a council that values consensus among established players. The exclusions signal that disruptive voices aren’t welcome, even from industry leaders.

For developers, this means policy will likely favor stability over disruption, incumbents over challengers. Regulations that protect today’s market leaders rather than enable tomorrow’s innovators. That’s the predictable outcome when controversy gets excluded and conformity gets rewarded.

What This Means for Developers

The council’s near-term priority is implementing Trump’s national AI framework, released in early March 2026. Watch for policy changes in three areas:

  • AI safety regulations: expect industry-friendly requirements that don’t constrain profitable applications.
  • Chip export controls: council members have a direct interest in GPU availability and allocation.
  • Federal procurement standards: regulations could favor certain vendors over others.

David Sacks’ role change reveals the strategy’s permanence. After serving exactly 130 days as AI czar (the legal limit for special government employees), he stepped down on March 26 to co-chair PCAST with no time restrictions. The broader mandate now includes semiconductors, quantum computing, and nuclear power—expanding industry influence beyond just AI.

Practical implications for developers (a minimal infrastructure sketch follows this list):

  • Diversify your infrastructure to avoid vendor lock-in.
  • Track White House OSTP announcements for policy changes.
  • Engage in public comment periods when regulations open.
  • Support independent AI safety research funding.
  • Build for multiple platforms to maintain flexibility.
  • Most importantly, watch for “safety” regulations that conveniently become barriers to entry favoring Big Tech.
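To make “diversify your infrastructure” concrete, here is a minimal sketch of a provider-agnostic abstraction layer in Python. Everything in it is hypothetical: the ModelProvider interface, the provider classes, and the FailoverClient are illustration only, not any vendor’s real SDK. The point is to keep application code behind one interface so swapping or adding a backend is a configuration change, not a rewrite.

from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Single interface for every model backend the app can use.

    Hypothetical sketch: a real implementation would wrap a vendor
    SDK or HTTP client here. The class names are placeholders.
    """

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class PrimaryProvider(ModelProvider):
    """Stand-in for whichever vendor you use today."""

    def complete(self, prompt: str) -> str:
        # Placeholder: call your primary vendor's API here.
        return f"[primary] {prompt}"


class FallbackProvider(ModelProvider):
    """Stand-in for a second vendor or a self-hosted open model."""

    def complete(self, prompt: str) -> str:
        # Placeholder: call an alternative backend here.
        return f"[fallback] {prompt}"


class FailoverClient:
    """Tries each provider in order, so no single vendor is a hard dependency."""

    def __init__(self, providers: list[ModelProvider]) -> None:
        self.providers = providers

    def complete(self, prompt: str) -> str:
        errors: list[Exception] = []
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:  # real code would catch narrower errors
                errors.append(exc)
        raise RuntimeError(f"all providers failed: {errors}")


if __name__ == "__main__":
    client = FailoverClient([PrimaryProvider(), FallbackProvider()])
    print(client.complete("Summarize today's AI policy news."))

The same pattern extends to deployment targets and GPU procurement: if the policy environment shifts, whether through export controls or vendor-favoring procurement rules, the cost of switching stays low.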

Key Takeaways

  • Tech billionaires who profit from AI now advise on AI rules—textbook regulatory capture with clear conflicts of interest
  • Council members control 95% of GPU compute and 60% of cloud infrastructure, giving them power over who can build AI and at what cost
  • Academic oversight replaced with industry self-governance—only one scientist among 13 announced members (up to 24 total)
  • Expect policies favoring incumbents over startups, closed models over open source, and compliance costs that benefit only large companies
  • Watch for “safety” regulations that become competition barriers—historical pattern shows industry self-regulation fails public interest
