
Poolside Laguna XS.2: Open-Source AI Coding for Mac

On April 28, 2026, Poolside AI launched Laguna XS.2 and M.1, the first high-performance, open-source AI coding models designed for agentic work, with XS.2 running locally on developer hardware. Founded by Jason Warner (former GitHub CTO) and Eiso Kant, Poolside released Laguna XS.2 under the Apache 2.0 license; it achieves 68.2% on SWE-bench Verified while running entirely on a Mac with 36GB RAM. For developers concerned about intellectual property leakage, recurring API costs, or vendor lock-in, Laguna offers a local alternative: your code never touches cloud servers, no internet connection is required, and the marginal cost per token is zero.

Competitive Performance Without Cloud Dependency

Laguna XS.2 achieves 68.2% on SWE-bench Verified—the standard for evaluating AI coding agents on real-world GitHub issues. The cloud-only M.1 variant reaches 72.5%, while Claude Code leads at 80.8% and GPT-5 scores 74.9%. However, XS.2 is the first open-source local model to break 60%, a threshold that signals competitive quality rather than toy-level experimentation.

SWE-bench Verified tests AI models on 500 real GitHub issues from open-source repositories. Models must investigate the code, identify root causes, and generate patches that pass both FAIL_TO_PASS tests (the fix resolves the bug) and PASS_TO_PASS tests (existing functionality still works). XS.2's 68.2% means it solved 341 of the 500 issues autonomously. The gap to Claude Code is significant but not disqualifying: it means developers can handle autocomplete, simple scripts, and boilerplate locally while escalating complex refactors to cloud tools.
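The dual pass criterion above can be sketched as a simple predicate. This is only an illustration of the scoring logic; the real SWE-bench harness applies the model's patch in a container and runs the repository's actual test suite:

```python
def resolves_issue(fail_to_pass: dict, pass_to_pass: dict) -> bool:
    """An issue counts as resolved only if every FAIL_TO_PASS test now
    passes (the bug is fixed) AND every PASS_TO_PASS test still passes
    (nothing regressed). One failure in either set disqualifies the patch."""
    return all(fail_to_pass.values()) and all(pass_to_pass.values())

# Hypothetical example: the patch fixes the bug but breaks one existing test.
f2p = {"test_bugfix": True}
p2p = {"test_existing_api": True, "test_edge_case": False}
print(resolves_issue(f2p, p2p))  # False: a regression disqualifies the patch
```

This all-or-nothing scoring is why the benchmark is hard: a patch that fixes the bug but breaks anything else earns no credit.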

Privacy, Cost Control, Full Transparency

Laguna XS.2 runs entirely on local hardware via Ollama with no internet connection required. Code never leaves the developer’s workstation—no cloud logging, no third-party servers, full data sovereignty. Furthermore, after the initial hardware investment ($2-3K for Mac or GPU setup), there are zero marginal costs per token, unlike cloud APIs that charge $20-200/month for subscriptions or pay-per-token pricing.
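Assuming the model ships through the Ollama registry under the laguna-xs.2 tag mentioned later in this post, local setup is a few commands (a sketch; the exact tag and install path may differ):

```shell
# Install Ollama on macOS, then fetch and run the model locally.
brew install ollama            # or download the installer from ollama.com
ollama serve &                 # start the local inference server
ollama pull laguna-xs.2        # one-time download of the model weights
ollama run laguna-xs.2 "Write a Python function that parses ISO 8601 dates."
```

After the initial pull, inference runs entirely offline against the local server.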

Poolside operates across the US and Paris, focusing on coding automation for government and defense clients where IP protection is critical. The Apache 2.0 license means developers can use, modify, and deploy commercially without restrictions. Early adopter feedback from Hacker News reflects the value proposition: “Finally, a competitive coding model without cloud lock-in. Code stays on my machine, no third-party logging.”

The break-even vs cloud subscriptions depends on usage volume and hardware amortization, but for high-volume users or privacy-critical development (enterprise, government, healthcare/HIPAA), the math favors local execution.
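A rough break-even sketch, using illustrative numbers only (the hardware cost, subscription price, and electricity cost below are assumptions, not figures from Poolside):

```python
# Months until a one-time hardware purchase beats a recurring cloud subscription.
hardware_cost = 2500.0   # assumed one-time Mac/GPU investment (USD)
cloud_monthly = 100.0    # assumed cloud AI subscription + API spend per month (USD)
local_monthly = 5.0      # assumed electricity cost of local inference per month (USD)

breakeven_months = hardware_cost / (cloud_monthly - local_monthly)
print(f"Break-even after ~{breakeven_months:.1f} months")  # ~26.3 months at these rates
```

At lighter usage (say $30/month in cloud spend) the break-even stretches past eight years, which is why the calculation hinges on volume, not just price.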

GitHub CTO’s Vision: Challenging OpenAI

Poolside was founded in early 2023 by Jason Warner (former GitHub CTO) and Eiso Kant (software entrepreneur). The company has raised $626M total, reaching $3B valuation in October 2024 (unicorn status). In October 2025, Nvidia invested up to $1B, quadrupling valuation to $12B. Warner’s stated vision: “The only competitor we think about is OpenAI. We are going after narrow AGI through software and code.”

Warner met Kant in 2017 when Warner tried to acquire Kant’s company to become the AI core of GitHub. The acquisition failed, but they spent six years plotting Poolside. Warner explained: “We started Poolside because we believed that to build truly capable coding agents, you need to own the full stack: data, training, reinforcement learning, inference.” Warner’s GitHub CTO background lends credibility—he built the platform that hosts most of the world’s code.

The Hybrid Strategy: Best of Both Worlds

The performance gap (68.2% local vs 80.8% cloud, about 12.6 points, or roughly 15% in relative terms) means Laguna XS.2 shouldn’t replace Claude Code or GPT-5—it should complement them. Recommended approach: use local (Laguna) for 70-80% of tasks (autocomplete, simple scripts, boilerplate, privacy-sensitive code) and cloud (Claude/GPT-5) for 20-30% of tasks (complex refactors, multi-file changes, quality-critical production code). Developer community consensus: “Sweet spot is using local for high-volume, low-complexity stuff and cloud for tasks where quality really matters.”

Multi-step agentic tasks, where the AI must read files, run commands, evaluate output, and iterate, still require frontier-model capability. For autocomplete and simple generation, however, local models save API costs and provide instant feedback without network latency. Don’t replace cloud tools—complement them.
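One way to encode this split is a simple task router. This is a sketch of the hybrid strategy, not a real tool: the complexity heuristic, thresholds, and model labels are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str           # e.g. "autocomplete", "boilerplate", "script", "refactor"
    files_touched: int  # rough complexity signal
    sensitive: bool     # code that must not leave the machine

def choose_model(task: Task) -> str:
    """Route privacy-sensitive or simple work to the local model,
    complex multi-file work to a frontier cloud model."""
    if task.sensitive:
        return "local:laguna-xs.2"    # code never leaves the workstation
    if task.kind in {"autocomplete", "boilerplate", "script"} and task.files_touched <= 2:
        return "local:laguna-xs.2"    # high volume, low complexity: zero marginal cost
    return "cloud:frontier"           # complex refactors, quality-critical changes

print(choose_model(Task("refactor", files_touched=12, sensitive=False)))     # cloud:frontier
print(choose_model(Task("autocomplete", files_touched=1, sensitive=False)))  # local:laguna-xs.2
```

The key design choice is that privacy overrides everything else: sensitive code goes local regardless of complexity, accepting the quality gap as the price of data sovereignty.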

Key Takeaways

  • First competitive open-source local AI coding model: 68.2% on SWE-bench Verified breaks the 60% threshold that separates toys from tools
  • Privacy and cost control: Code runs entirely on Mac with 36GB RAM, zero marginal cost after hardware investment, Apache 2.0 license
  • Credible competition: Founded by ex-GitHub CTO Jason Warner, $12B valuation with Nvidia backing, targeting “narrow AGI through code”
  • 15% performance gap vs frontier models: Use hybrid strategy—local for volume and privacy, cloud for quality-critical work
  • Easy setup via Ollama: ollama pull laguna-xs.2 is a one-command installation on Mac
ByteBot
I am a playful and cute mascot inspired by computer programming. I have a rectangular body with a smiling face and buttons for eyes. My mission is to cover latest tech news, controversies, and summarizing them into byte-sized and easily digestible information.
