
Google is internally building Jules V2 under the codename “Jitro” — and if the early signals hold, it’s not just an upgrade. Every major coding agent in 2026 still asks you what to do. Jitro is designed to ask what you want to achieve. That’s not a UX improvement. That’s a different contract between developer and machine.
From Prompts to Outcomes
Here’s what today’s agentic tools — Copilot, Cursor, Claude Code, Codex, Jules V1 — all have in common: you give them a task. “Fix the login timeout bug in auth.py.” “Write unit tests for the payment module.” “Refactor this into smaller functions.” The agent executes. You review. That’s task-level prompting, and it’s where the entire market is parked right now.
Jitro proposes something different. Instead of a task, you define an outcome. “Reduce authentication errors by 15% this sprint.” “Get test coverage to 85%.” “Fix all accessibility violations in the checkout flow.” The agent then autonomously figures out what to change in the codebase to reach that goal. The developer moves from task manager to outcome definer.
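Since Jitro is unreleased, nothing about its interface is public. But the shift from task to outcome can be sketched in a few lines: a task is a string the agent executes once, while an outcome is a measurable target the agent keeps working toward. Every name below (`Outcome`, `metric`, `satisfied`) is hypothetical, chosen only to illustrate the contract change.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A measurable goal, as opposed to a one-shot task prompt.
    Hypothetical structure; Jitro's real interface is unknown."""
    metric: str           # what the agent measures, e.g. "test_coverage"
    target: float         # value the agent works toward
    deadline: str         # when the outcome should be reached
    current: float = 0.0  # latest observed value

    def satisfied(self) -> bool:
        return self.current >= self.target

# A task prompt tells the agent *what to do*:
task = "Write unit tests for the payment module."

# An outcome tells the agent *what to achieve*; the agent decides
# which tasks (tests, refactors, fixes) get it there.
goal = Outcome(metric="test_coverage", target=85.0, deadline="2026-06-30")
goal.current = 72.4
print(goal.satisfied())  # False until coverage reaches 85%
```

The important difference is the `satisfied()` check: a prompt is done when the agent replies, but an outcome stays open until the metric says otherwise, which is what makes the workspace persistent rather than a task queue.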
Early reporting describes a persistent workspace where developers list goals, track progress, and configure tool integrations — not a one-shot task queue. The agent maintains awareness of your objectives and operates continuously toward them. That’s a meaningfully different relationship between developer and tool. (TestingCatalog first surfaced the Jitro codename and workspace details.)
Jules V1 Has Already Been Moving This Direction
If Jitro feels like a leap, look at what Jules has shipped in the past six months. Scheduled Tasks let you define recurring jobs — nightly lint passes, weekly dependency audits, monthly cleanups — that Jules runs automatically without re-prompting. Suggested Tasks lets Jules proactively surface issues it finds in your codebase. The Render integration closes the deployment loop: when a build fails, Render notifies Jules, Jules diagnoses the error and pushes a fix commit, and your deployment recovers without a human in the loop.
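The Render loop is essentially an event handler: a webhook arrives, the agent diagnoses the log, and a fix commit goes out. The sketch below is a hypothetical reconstruction of that flow, not the actual integration; `diagnose` and `push_fix` are stand-ins for the agent's real analysis and commit steps.

```python
def diagnose(log: str) -> str:
    # Stand-in for the agent's log-analysis step.
    return "missing dependency" if "ModuleNotFoundError" in log else "unknown"

def push_fix(diagnosis: str) -> str:
    # Stand-in for creating a fix commit; returns a placeholder SHA.
    return "abc1234"

def handle_build_failure(event: dict) -> str:
    """React to a failed-deploy webhook without a human in the loop."""
    if event.get("status") != "build_failed":
        return "ignored"
    diagnosis = diagnose(event.get("log", ""))  # agent reads the failure
    commit = push_fix(diagnosis)                # agent commits a candidate fix
    return f"fix pushed: {commit}"

print(handle_build_failure({"status": "build_failed",
                            "log": "ModuleNotFoundError: requests"}))
```

The notable design point is the trigger: the handler fires on deployment *state*, not on a developer prompt, which is the pattern the next paragraph calls out.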
These features share a pattern: Jules initiating action based on state, not just responding to instructions. Jitro is that pattern taken to its logical conclusion — an agent that doesn’t just react, but pursues.
Why Google Is Doing This Now
The coding agent market is crowded. GitHub Copilot holds 29% workplace adoption. Cursor and Claude Code are tied at 18%. OpenAI’s Codex surged from near-zero to 3M+ weekly active users in under a year. Google’s Jules is growing but not leading.
Goal-driven development is a way to differentiate at the abstraction layer instead of the feature layer. If Jitro works, it’s not competing with Copilot on autocomplete quality or with Cursor on IDE integration — it’s competing on a different axis entirely. That’s a smarter play for a company that doesn’t own the dominant IDE or the dominant model.
The Problem Nobody Is Talking About
Goal-pursuing agents create a genuine accountability gap. When you prompt an agent to fix a bug, you’re responsible for the scope. When an agent autonomously decides what to change in order to reduce error rates — and refactors a module you didn’t ask it to touch — who owns that? The agent made a judgment call to hit a metric you defined. That’s a harder conversation than “I asked it to do X and it did Y.”
This isn’t a reason to dismiss Jitro. It’s a reason to go in with a clear-eyed mental model. Goal-setting agents will require guardrails, scope constraints, and probably a new category of review workflow. The developers who figure this out early will have a meaningful advantage over those who expect goal-driven AI to behave like a better prompt-driven AI.
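What a scope constraint might look like in practice: before an autonomous change lands, check it against boundaries the developer defined up front. This is a minimal sketch under assumed conventions (a path allowlist and a blast-radius cap), not a feature of Jules or Jitro.

```python
from fnmatch import fnmatch

ALLOWED_PATHS = ["src/auth/*", "tests/*"]  # where the agent may edit
MAX_CHANGED_FILES = 10                     # blast-radius cap per change

def within_scope(changed_files: list[str]) -> bool:
    """Reject any autonomous change that touches files outside the
    allowlist or exceeds the file-count cap."""
    if len(changed_files) > MAX_CHANGED_FILES:
        return False
    return all(any(fnmatch(f, pattern) for pattern in ALLOWED_PATHS)
               for f in changed_files)

print(within_scope(["src/auth/login.py", "tests/test_login.py"]))  # True
print(within_scope(["billing/charge.py"]))  # False: outside the allowlist
```

A gate like this keeps accountability legible: the agent can still decide *what* to change to hit the metric, but the developer owns *where* it is allowed to make that call.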
What to Do Right Now
Jules V1 is live at jules.google, available on free and paid tiers. The Scheduled Tasks and proactive features are worth exploring now — they’re the foundation Jitro will build on, and getting familiar with the async model prepares you for what’s coming. Watch the developer keynote at Google I/O on May 19. If Jitro ships as described, the question won’t be whether to adopt it — it’ll be whether your team has the discipline to define goals clearly enough for an agent to pursue them.