
DeepSeek Gets Nvidia H200 Chips: China Defies US Export Controls

China granted DeepSeek approval to purchase Nvidia H200 chips on January 31, 2026, with regulatory conditions still being finalized. The H200—Nvidia’s second-most powerful AI chip—represents what the Taipei Times calls “a serious point of contention for US-China relations”. Any DeepSeek purchase could draw scrutiny from US lawmakers, particularly following allegations that Nvidia assisted DeepSeek in developing AI models subsequently used by the Chinese military. This approval comes just two weeks after the US shifted its export policy from “presumption of denial” to “case-by-case review” on January 15, 2026.

H200’s 2x Memory Advantage

The H200 chip features 141GB HBM3e memory—nearly double the H100’s 80GB—and 4.8TB/s bandwidth, delivering up to 1.8x faster LLM inference according to Nvidia’s official specifications. For memory-bound workloads like long-context processing, performance improves by 3.4x. This isn’t incremental. DeepSeek’s access to H200 means they can train larger models, deploy faster production inference, and compete on equal hardware footing with Western AI labs that have unrestricted chip access.

The memory upgrade matters more than the speed boost. DeepSeek’s V3.2 already matches GPT-5 on reasoning benchmarks using older hardware. With the H200’s doubled memory capacity, DeepSeek can scale models further and serve production workloads 1.8x faster than before. For at-scale deployments, H200 systems offer roughly 5x better energy efficiency and 4x lower total cost of ownership than Nvidia’s Ampere-generation hardware.
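To put the memory point in concrete terms, here is a rough, illustrative sketch of how much serving headroom each chip leaves once model weights are loaded. The 70B-parameter model size and FP8 precision are assumptions chosen for illustration, not DeepSeek’s published configuration.

```python
# Rough back-of-the-envelope sketch (illustrative only): serving headroom on
# an H200 (141 GB) versus an H100 (80 GB) after model weights are loaded.
# The model size and precision below are assumptions, not DeepSeek's numbers.

GB = 1024**3

def free_memory_gb(hbm_gb: float, params_billion: float, bytes_per_param: float) -> float:
    """Memory left for KV cache, activations, and batching after loading weights."""
    weights_gb = params_billion * 1e9 * bytes_per_param / GB
    return hbm_gb - weights_gb

# Hypothetical 70B-parameter model served in FP8 (1 byte per parameter).
for name, hbm in [("H100 (80 GB)", 80), ("H200 (141 GB)", 141)]:
    free = free_memory_gb(hbm, params_billion=70, bytes_per_param=1)
    print(f"{name}: ~{free:.0f} GB left for KV cache and batching")

# Approximate output:
#   H100 (80 GB): ~15 GB left for KV cache and batching
#   H200 (141 GB): ~76 GB left for KV cache and batching
```

Under those assumptions, the H200 leaves roughly five times as much leftover memory, which translates into longer contexts or much larger serving batches on a single GPU.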

V3.2 Outperforms GPT-5, V4 Targets Coding

DeepSeek-V3.2, released in December 2025, performs comparably to GPT-5 on key benchmarks, and its high-compute variant, V3.2-Speciale, surpasses GPT-5 on reasoning tasks according to InfoQ. The model achieved 96.0% accuracy on the 2025 American Invitational Mathematics Examination (AIME), beating GPT-5 High’s 94.6%, and earned a Codeforces rating of 2386, expert-level competitive programming performance. All this at $0.028 per million input tokens, roughly one-tenth of GPT-5’s cost.
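For a sense of what that pricing gap means in practice, the quick sketch below compares monthly input-token spend at the article’s figures. The workload size is hypothetical, and GPT-5’s price is simply assumed to be ten times DeepSeek’s, as the article states; output-token pricing is ignored for simplicity.

```python
# Illustrative cost comparison only: the DeepSeek input price comes from the
# article; the GPT-5 price is assumed to be ~10x higher, per the article.

DEEPSEEK_INPUT_PER_M = 0.028                  # USD per million input tokens (from article)
GPT5_INPUT_PER_M = DEEPSEEK_INPUT_PER_M * 10  # assumed ~10x, per the article

def monthly_input_cost(tokens_per_day: float, price_per_million: float) -> float:
    """Input-token cost for 30 days of traffic at a flat per-million-token rate."""
    return tokens_per_day * 30 / 1e6 * price_per_million

# Hypothetical workload: 500 million input tokens per day.
for name, price in [("DeepSeek V3.2", DEEPSEEK_INPUT_PER_M), ("GPT-5 (assumed)", GPT5_INPUT_PER_M)]:
    print(f"{name}: ${monthly_input_cost(500e6, price):,.0f}/month")

# Approximate output:
#   DeepSeek V3.2: $420/month
#   GPT-5 (assumed): $4,200/month
```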

DeepSeek plans to launch V4 in mid-February 2026 with coding-focused capabilities aimed squarely at GitHub Copilot and Cursor. Chinese AI models now hold 15% of global market share as of November 2025, up from just 1% a year earlier. That is not gradual growth; it is a market shift fueled by DeepSeek’s open-source push and cost efficiency, and H200 access will only accelerate the momentum.

US Policy Shift Created Opening

On January 15, 2026, the US Bureau of Industry and Security revised its license review policy for H200-class chips from “presumption of denial” to “case-by-case review.” Exporters must certify that US supply remains sufficient, that global foundry capacity is not diverted, that recipients maintain security procedures, and that chips undergo third-party testing in the US. A 25% value-based tariff applies to all H200 exports to China.

The policy shift was controversial from day one. During a January 14 House Foreign Affairs Committee hearing on “Winning the AI Race Against the Chinese Communist Party,” experts and members of Congress overwhelmingly expressed skepticism about exporting advanced AI chips to China. The accompanying Federal Register notice describes the permitted export volume as “roughly twice what Chinese fabs are expected to produce domestically in 2026, twice the capacity of the world’s largest data center, and nearly OpenAI’s entire deployed compute worldwide at the end of 2025.” China walked through the door the US opened.

Military Allegations Raise Stakes

A senior US legislator has alleged that Nvidia assisted DeepSeek in developing AI models subsequently used by the Chinese military. The allegations could trigger Congressional investigations and potentially sanctions against Nvidia. This is not just a trade-policy question; it is a question of whether US tech companies are inadvertently supporting military AI development in strategic competitors.

DeepSeek’s track record adds urgency to these concerns. When the company released R1 in January 2025, the model became the #1 free app on the US iOS App Store and wiped over $1 trillion from US tech stock market capitalization. DeepSeek trained R1’s base model, V3, for roughly $5.6 million using 2,048 Nvidia H800 GPUs, proving that resource-efficient approaches can match billion-dollar Western competitors. The question now is what DeepSeek builds with the H200’s superior hardware.

Key Takeaways

  • China approved DeepSeek’s H200 purchase on January 31, 2026, with conditions pending. ByteDance, Alibaba, and Tencent collectively received approval for over 400,000 H200 units.
  • H200 delivers 2x memory and 1.8x faster inference versus H100, enabling DeepSeek to close the hardware gap with Western labs and scale larger models.
  • DeepSeek V3.2 already matches GPT-5 on reasoning benchmarks at one-tenth the cost, and its high-compute Speciale variant surpasses it. V4 launches in mid-February 2026 targeting coding use cases.
  • The January 15 US policy shift from “denial” to “case-by-case review” created the opening China exploited two weeks later. Congressional scrutiny will follow.
  • Military use allegations against Nvidia add political risk, potentially triggering investigations and sanctions if proven.