NVIDIA unveiled Newton Physics Engine 1.0 GA at GTC 2026 (March 16-19), delivering a production-ready physics simulator that slashes robot training time from days to minutes. Built by NVIDIA, Google DeepMind, and Disney Research, and now managed by the Linux Foundation, Newton runs up to 475x faster than DeepMind’s MJX for manipulation tasks on RTX PRO 6000 Blackwell GPUs. Real-world adoption is already underway: Skild AI trains GPU rack assembly policies with it, while Samsung uses it for cable manipulation in refrigerator assembly lines via Lightwheel.
Getting Started: Installation and First Simulation
Newton installs via PyPI and ships with 40+ examples for immediate experimentation. Installation takes under two minutes:
pip install "newton[examples]"
python -m newton.examples basic_pendulum
System requirements are straightforward: Python 3.10+, NVIDIA GPU (Maxwell or newer), and driver 545+ for CUDA 12. No local CUDA toolkit needed—it’s bundled with NVIDIA Warp. macOS users run CPU-only (no GPU acceleration). For GPU-accelerated examples, specify the device explicitly:
python -m newton.examples basic_humanoid --device cuda:0
python -m newton.examples basic_manipulator --viewer gl
The 40+ built-in examples span pendulum physics, humanoid locomotion, quadrupeds, robotic manipulators, cloth simulation, and cable deformables. Each example demonstrates a specific physics domain—rigid bodies, soft bodies, granular materials—providing hands-on learning without complex setup. Browse all examples on the official GitHub repository.
Performance That Transforms Robot Learning
Newton delivers 252x speedup for locomotion and 475x for manipulation compared to Google DeepMind’s MJX on RTX PRO 6000 Blackwell GPUs. MuJoCo Warp (Newton’s primary backend) alone offers 70x acceleration for humanoid simulations and 100x for in-hand manipulation over CPU MuJoCo. Training time collapses from days to minutes.
For dexterous manipulation specifically, Newton trains RL policies 65% faster than Isaac Sim/PhysX. This isn’t incremental improvement—it’s transformational. Robotics researchers who spent weeks waiting for simulations can now iterate in hours. The GPU parallelization enables thousands of simultaneous environments on a single card, dramatically accelerating reinforcement learning workflows. Full performance benchmarks are documented in the NVIDIA Technical Blog.
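To see why batching is the win here, consider a toy illustration of the same idea in plain NumPy (this is a conceptual sketch of vectorized environment stepping, not Newton’s API): thousands of independent environments advance in a single array operation instead of a Python loop.

```python
import numpy as np

NUM_ENVS = 4096   # thousands of environments stepped together, as on a GPU
DT = 0.01         # simulation timestep in seconds

# Per-environment state: position and velocity of a 1-D point mass.
pos = np.zeros(NUM_ENVS)
vel = np.zeros(NUM_ENVS)
forces = np.random.default_rng(0).uniform(-1.0, 1.0, NUM_ENVS)  # one action per env

def step_batched(pos, vel, force, dt=DT):
    """Advance every environment at once with semi-implicit Euler integration."""
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel

for _ in range(100):
    pos, vel = step_batched(pos, vel, forces)

print(pos.shape)  # one trajectory per environment: (4096,)
```

The same pattern on a GPU, with real contact dynamics per environment, is what turns a serial simulation bottleneck into a throughput problem that more hardware can solve.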
Already in Production: Skild AI and Samsung
Skild AI uses Newton with Isaac Lab to train reinforcement learning policies for GPU rack assembly. Their workflow demands sub-millimeter precision for connector insertion and circuit board placement in electronics manufacturing. Newton’s SDF collision detection and hydroelastic contacts deliver the tactile feedback fidelity required for high-precision assembly automation.
Samsung and Lightwheel leverage Newton’s VBD solver for cable manipulation in refrigerator assembly. Simulating flexible cable behavior during insertion tasks generates physically grounded synthetic data for vision-language-action models. This isn’t a research demo—it’s production manufacturing automation.
Broader adoption includes Agility Robotics, Boston Dynamics, Figure AI, and Franka Robotics using NVIDIA’s Isaac/Omniverse stack, where Newton now integrates as the physics backend. These aren’t pilot programs. They’re production deployments validating Newton’s sim-to-real transfer quality.
Multi-Physics Engine Built on NVIDIA Warp
Newton bundles multiple physics solvers behind a unified API. MuJoCo Warp handles rigid-body dynamics (humanoids, quadrupeds, robotic arms). Kamino—contributed by Disney Research—tackles closed-chain mechanisms like robotic hands with coupled joints. VBD manages deformables (cables, cloth, volumetric objects), while iMPM simulates granular materials via particle methods.
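A "multiple solvers behind one API" design typically comes down to backend selection behind a shared interface. The sketch below is purely illustrative — the class and registry names are hypothetical, not Newton’s actual API:

```python
# Hypothetical solver registry illustrating the "one API, many backends" design.
# Class names and backend strings are illustrative, not Newton's real classes.

class RigidBodySolver:
    backend = "mujoco_warp"   # rigid-body dynamics: humanoids, quadrupeds, arms
    def step(self, state, dt):
        return f"{self.backend}: advanced rigid bodies by {dt}s"

class DeformableSolver:
    backend = "vbd"           # cables, cloth, volumetric deformables
    def step(self, state, dt):
        return f"{self.backend}: advanced deformables by {dt}s"

SOLVERS = {"rigid": RigidBodySolver, "deformable": DeformableSolver}

def make_solver(kind):
    """Pick a physics backend by simulation domain; callers only see .step()."""
    return SOLVERS[kind]()

print(make_solver("rigid").step(state=None, dt=0.01))
print(make_solver("deformable").step(state=None, dt=0.01))
```

The payoff of this pattern is that swapping a rigid-body scene for a cable scene changes the solver choice, not the calling code.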
The architecture builds on NVIDIA Warp, an auto-differentiable GPU framework that enables gradient-based optimization without CUDA coding. OpenUSD integration provides asset interchange—import simulation scenes, export to Isaac Sim for visual debugging. Advanced contact modeling includes SDF collision for complex CAD geometries and hydroelastic contacts using continuous pressure distributions, critical for high-fidelity tactile sensing.
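The core idea behind SDF collision is querying a signed-distance field: a negative distance means penetration, i.e. a contact to resolve. A minimal sketch for a sphere SDF (toy analytic geometry, not Newton’s implementation, which handles complex CAD meshes):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each query point to a sphere surface.
    Negative values indicate penetration (a contact to resolve)."""
    return np.linalg.norm(points - center, axis=-1) - radius

center = np.array([0.0, 0.0, 0.0])
radius = 1.0
points = np.array([
    [2.0, 0.0, 0.0],   # outside the sphere: distance +1.0
    [0.5, 0.0, 0.0],   # inside the sphere:  distance -0.5 (penetrating)
])

d = sphere_sdf(points, center, radius)
contacts = d < 0.0          # boolean mask of penetrating points
print(d, contacts)
```

Because the query is a pure array operation, millions of contact points can be evaluated in parallel on a GPU — which is what makes SDFs attractive for contact-rich manipulation.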
Sensor simulation supports tiled cameras for batched observations: RGB, depth, albedo, surface normals, and instance segmentation. This matters for vision-based RL policies where rendering thousands of camera views in parallel accelerates training. Complete documentation is available at newton-physics.github.io.
Newton vs. MuJoCo, PyBullet, and Isaac Sim
Choose Newton for RL policy training, batch GPU simulation, and contact-rich manipulation. Use MuJoCo CPU for quick prototyping. PyBullet works for lightweight experiments. Isaac Sim/PhysX excels at digital twin workflows where visual rendering matters more than physics throughput—Isaac Sim runs 20x slower for pure physics compared to MuJoCo/PyBullet in benchmarks, though Newton now integrates as a swappable backend in Isaac Lab.
The comparison is stark: MuJoCo CPU runs single-threaded at roughly 100 FPS, while Newton (via MuJoCo Warp) reaches 252-475x the throughput of DeepMind’s MJX on high-end GPUs. For rapid prototyping, MuJoCo CPU still wins. For RL training at scale, Newton dominates.
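The practical impact is easy to quantify with the article’s own rough figures (~100 FPS for single-threaded MuJoCo CPU, and MuJoCo Warp’s reported 70x acceleration over it). A back-of-the-envelope calculation, treating these as effective throughput numbers:

```python
# Back-of-the-envelope: how long to collect 100M simulation steps for RL?
# Figures are the article's approximate numbers, not measured benchmarks.
baseline_fps = 100            # MuJoCo CPU, single-threaded (approximate)
speedup = 70                  # MuJoCo Warp's reported acceleration over CPU MuJoCo
total_steps = 100_000_000     # a typical RL training budget

cpu_hours = total_steps / baseline_fps / 3600
warp_hours = cpu_hours / speedup

print(f"CPU:  {cpu_hours:,.0f} hours (~{cpu_hours / 24:.0f} days)")
print(f"Warp: {warp_hours:.1f} hours")
```

Even under these coarse assumptions, a simulation budget that takes well over a week on a single CPU core collapses to an afternoon — which is the "days to hours" iteration loop the article describes.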
Newton’s Isaac Lab integration offers the best of both worlds—use Newton for policy training (fast physics iteration), then switch to PhysX/Isaac Sim for visual validation and debugging. You don’t have to choose. Install Newton via PyPI to get started.
Key Takeaways
- Newton 1.0 GA was announced March 16 at NVIDIA GTC 2026, delivering 252x speedup (locomotion) and 475x (manipulation) over DeepMind’s MJX on RTX PRO 6000 GPUs
- Real production deployments exist: Skild AI trains GPU rack assembly policies, Samsung uses it for cable manipulation in refrigerator assembly
- Install via PyPI, 40+ built-in examples provide hands-on learning for rigid bodies, deformables, and multi-physics simulations
- Built on NVIDIA Warp with auto-differentiation, managed by Linux Foundation—open-source with strong industry backing from NVIDIA, Google DeepMind, and Disney Research
- Use Newton for RL training at scale, MuJoCo CPU for quick prototyping, Isaac Sim for visual workflows; Newton integrates as swappable backend in Isaac Lab
Newton transforms robotics simulation from a data problem to a compute problem. The 475x speedup isn’t marketing—it’s production-validated by Skild AI and Samsung. Start experimenting with the 40+ examples today.

