On March 31, 2026, Apple officially approved George Hotz’s Tiny Corp TinyGPU driver extension, allowing Apple Silicon Mac users to run external Nvidia and AMD GPUs over Thunderbolt/USB4 for the first time since 2020. The approval—through Apple’s DriverKit framework—requires no jailbreaking or security compromises and marks a significant policy reversal for a company known for tight ecosystem control. This isn’t a gift from Apple; it’s AI compute demands forcing the walled garden to crack.
AI Compute Forces Apple to Open Up
Apple dropped all eGPU support when transitioning to Apple Silicon in 2020, citing architectural changes and aggressively marketing the integrated GPU’s capabilities. Six years later, AI developers’ need for Nvidia CUDA access and discrete GPU power forced Apple to approve third-party drivers—a rare concession that exposes the limits of Apple’s integrated silicon strategy.
The performance gap tells the story. Apple’s M3 Max with 40 GPU cores runs ResNet-50 training three times slower than Nvidia’s RTX 4090, according to benchmark data. Moreover, Apple Silicon can use only about 75% of system RAM for GPU workloads—that’s 96 GB usable on a 128 GB Mac. For developers training large language models or fine-tuning diffusion models, these limitations aren’t theoretical; they’re deal-breakers.
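The unified-memory math above is simple but worth making concrete. A minimal sketch, assuming the article's ~75% figure (this fraction is the article's number, not an Apple-documented constant):

```python
# Sketch of the Apple Silicon unified-memory limit described above.
# The 0.75 default reflects the article's ~75% figure; the exact
# reserved slice varies by macOS version and total RAM.

def usable_gpu_memory_gb(total_ram_gb: float, gpu_fraction: float = 0.75) -> float:
    """Approximate unified RAM available to GPU workloads on Apple Silicon."""
    return total_ram_gb * gpu_fraction

print(usable_gpu_memory_gb(128))  # 96.0 GB on a 128 GB Mac, matching the article
```

On a discrete Nvidia card, by contrast, the full VRAM pool is dedicated to the GPU, which is part of why a 24 GB RTX 4090 can outperform a Mac with nominally far more memory.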
Apple had to choose: lose AI developers to Linux workstations with native Nvidia GPUs, or crack the walled garden. They chose to crack it, slightly. The decision signals that integrated GPU excellence for consumers doesn’t translate to professional AI workloads where CUDA ecosystem compatibility matters more than energy efficiency.
George Hotz Strikes Again
George Hotz—the hacker who unlocked the first iPhone at 17 in 2007 and jailbroke the PlayStation 3 in 2010—founded Tiny Corp in 2022 to build AI compute tools that sidestep tech giants’ ecosystem control. TinyGPU is his latest middle finger to walled gardens, and it follows a familiar pattern: build what users need, force the platform owner’s hand through technical prowess.
After six years running comma.ai (self-driving tech), Hotz raised $5.1 million in May 2023 for Tiny Corp and developed tinygrad, a neural network framework that abstracts away vendor-specific GPU APIs. TinyGPU emerged from more than a year of engineering custom Python userspace drivers for AMD and Nvidia GPUs. Apple approving the driver through DriverKit validates what Hotz proved: developers need escape hatches from platform lock-in, especially for AI.
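The core idea behind "abstracting away vendor-specific GPU APIs" can be sketched in a few lines. This is an illustrative toy, not tinygrad's actual code: the `Backend` classes and `pick_backend` helper are hypothetical, standing in for the runtime dispatch a framework does when a driver exposes a new compute device:

```python
# Illustrative sketch (not tinygrad's real API): one Tensor-style
# interface, with ops dispatched to whichever backend is available.

class Backend:
    name = "cpu"

    def matmul(self, a, b):
        # Naive reference implementation; real backends compile GPU kernels.
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

class CUDABackend(Backend):
    name = "cuda"  # hypothetical stand-in for an Nvidia code path

def pick_backend(available):
    """Prefer an external GPU backend when the driver exposes one."""
    return CUDABackend() if "cuda" in available else Backend()

backend = pick_backend(["cpu"])              # without an eGPU: CPU fallback
print(backend.name)                          # cpu
print(backend.matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

User code written against the common interface never changes; plugging in a TinyGPU-exposed card just changes which backend `pick_backend` returns.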
Compute-Only: What TinyGPU Can (and Can’t) Do
TinyGPU installs through Apple’s official DriverKit framework, enabling AMD RDNA3+ and Nvidia Ampere+ GPUs (RTX 30-series and newer) via Thunderbolt 4 or USB4. However, it’s compute-only: no gaming acceleration, no display output, no Metal API support. macOS treats the external GPU purely as a compute device for AI and ML workloads.
The technical limitations matter. Thunderbolt 4 provides 40 Gbps of bandwidth, far below the roughly 256 Gbps of a desktop PCIe 4.0 x16 slot. Setup isn't plug-and-play either; it requires Docker for compilation, command-line comfort, and manual configuration. Gamers and video editors hoping for Nvidia-accelerated rendering will find nothing here. This targets data scientists and AI researchers who need CUDA compatibility for frameworks like PyTorch and TensorFlow.
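A back-of-envelope calculation shows what the bandwidth gap means in practice. Assumptions here: fp16 weights (2 bytes per parameter), ideal link utilization, and a PCIe 4.0 x16 slot; real Thunderbolt throughput is lower still after protocol overhead:

```python
# Rough transfer-time comparison for moving model weights to the GPU.
# Assumes ideal link utilization; real-world numbers are worse.

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Time to move size_gb gigabytes over a link rated at link_gbps gigabits/s."""
    return size_gb * 8 / link_gbps

weights_gb = 14  # e.g. a 7B-parameter model in fp16 (2 bytes/param)
print(f"Thunderbolt 4: {transfer_seconds(weights_gb, 40):.1f} s")   # 2.8 s
print(f"PCIe 4.0 x16:  {transfer_seconds(weights_gb, 256):.2f} s")  # 0.44 s
```

For inference, where weights load once, the gap is tolerable; for training loops that shuttle batches and gradients continuously, it is where the bottleneck bites.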
Developers must also manage expectations around cost. An eGPU enclosure runs $300+ and an RTX 4090 costs $1,600+, putting the setup at roughly $2,000. Cloud GPU instances from AWS or Lambda Labs often prove more economical for intermittent workloads. TinyGPU makes sense for Mac-committed developers who want local development parity with production Nvidia environments, not for everyone.
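A quick break-even sketch makes the local-vs-cloud trade-off concrete. Hardware prices are the article's figures; the $2.50/hour cloud rate is an assumed ballpark for an on-demand 4090-class instance, not a quoted price:

```python
# Hedged break-even estimate: how many cloud-GPU hours equal the
# up-front hardware cost. The cloud rate below is an assumption.

def breakeven_hours(hardware_cost: float, cloud_rate_per_hour: float) -> float:
    """Cloud hours you could buy for the price of the local hardware."""
    return hardware_cost / cloud_rate_per_hour

hardware = 300 + 1600  # enclosure + RTX 4090, per the article's figures
print(breakeven_hours(hardware, 2.50))  # 760.0 hours of cloud time
```

Under these assumptions, only developers expecting many hundreds of hours of sustained local GPU use come out ahead, which matches the article's "not for everyone" conclusion.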
Developers React: “Finally, But…”
The Hacker News discussion—222 points, 113 comments—captures the community’s cautious enthusiasm. Six years without eGPU support left ARM Mac developers feeling abandoned by Apple. TinyGPU’s approval generated genuine excitement, but skepticism tempered the response. Developers questioned whether Thunderbolt bandwidth bottlenecks would limit real-world performance and whether the compute-only restriction reduced the use case too narrowly.
Setup complexity emerged as another concern. Unlike Intel Mac eGPU support, which worked relatively seamlessly, TinyGPU demands technical expertise most consumers lack. The solution serves power users willing to invest time and money for CUDA access on macOS. For broader adoption, cloud GPUs remain the default choice—lower upfront cost, zero maintenance, and instant scalability.
The consensus: TinyGPU is a win for developer freedom, but it highlights Apple’s control problem more than it solves developers’ compute needs. Apple tolerated the driver approval because it serves a niche too small to threaten their integrated silicon narrative. Meanwhile, serious AI work continues migrating to Linux workstations and cloud instances where Nvidia reigns unchallenged.
Key Takeaways
- Apple approved TinyGPU driver on March 31, 2026, ending a 6-year eGPU ban on ARM Macs driven by AI developers’ need for Nvidia CUDA access
- George Hotz spent more than a year engineering custom drivers, forcing Apple's hand with the same hacker playbook he used against iPhone and PS3 restrictions
- Compute-only limitation restricts TinyGPU to AI/ML workloads—no gaming, no display output, no Metal support for macOS graphics apps
- Thunderbolt 4 bandwidth (40 Gbps) creates performance gap versus desktop PCIe; expect bottlenecks for large model training
- Apple’s approval signals strategic tension: integrated GPUs excel at consumer use cases but fail professional AI developers who prioritize CUDA ecosystem over energy efficiency
Apple’s walled garden is cracking under AI pressure, but the crack is controlled and narrow. For most developers, cloud GPUs remain the pragmatic choice. TinyGPU serves Mac loyalists who value local development enough to accept compute-only restrictions and technical complexity. Read more in the official TinyGPU documentation or check Tom’s Hardware coverage for technical details.

