Google released A2UI v0.9 on April 17, 2026 – a framework-agnostic protocol that standardizes how AI agents safely generate rich, interactive user interfaces across React, Flutter, Angular, and native mobile platforms. Instead of having agents emit risky executable code or limited text responses, A2UI uses declarative JSON “blueprints” that clients render using pre-approved component catalogs. The same agent output works on web and mobile while preventing UI injection attacks.
As AI agents proliferate in applications, developers face a critical problem: how do agents display complex UIs like forms, charts, and interactive widgets without executing untrusted code? A2UI standardizes this with security built-in, cross-platform compatibility, and optimization for LLM generation.
The Agent UI Problem A2UI Solves
AI agents struggle with UI generation because existing approaches are broken. Plain text offers limited expressiveness and no interactivity. Raw code execution creates dangerous security risks through UI injection attacks. Framework-specific output locks you into React-only or Flutter-only implementations.
A2UI introduces a declarative, security-first protocol where agents send JSON describing UI intent, and clients render it with their own trusted component libraries. The security model is simple: declarative data format, not executable code. Agents can only reference components from client-controlled catalogs, preventing UI injection attacks entirely.
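The allow-list at the heart of that security model is easy to sketch. Here is a minimal validator, assuming illustrative message and component-type names (real renderers enforce this boundary internally):

```python
# Client-controlled catalog: the only component types an agent may reference.
# The names here are illustrative, not the official component set.
TRUSTED_CATALOG = {"Button", "Text", "Column", "TextField"}

def validate_update(message: dict) -> dict:
    """Reject any component type outside the trusted catalog.

    Because A2UI messages are plain data, an allow-list check like this is
    the whole security boundary: nothing in the payload is ever executed.
    """
    for comp in message.get("components", []):
        if comp.get("component") not in TRUSTED_CATALOG:
            raise ValueError(f"untrusted component: {comp.get('component')!r}")
    return message
```

An agent that tries to reference a component the client never registered simply fails validation; there is no code path for it to exploit.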
Cross-platform compatibility is automatic: the same JSON renders natively via React on the web, Flutter on mobile, Angular in the enterprise, and Lit for web components. Early adopters illustrate the range. A health companion app replaced static dashboards with dynamic AI-driven interfaces, and a financial planning tool generates personalized widgets using Gemini, adapting to each client’s unique portfolio needs.
How A2UI Works – Architecture and Workflow
A2UI uses a six-step workflow. Users send messages to the AI agent. The agent generates streaming JSONL describing UI structure and data. JSON streams to the client application. The client renders using native components from a “Trusted Catalog.” User interactions trigger actions sent back to the agent. The agent responds with updated A2UI messages.
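The agent-side half of this workflow can be sketched as a generator of JSONL lines. Only the “surfaceUpdate” type appears in the article; the exact field names below are illustrative, not a verified v0.9 schema:

```python
import json

def agent_turn(user_message: str):
    """Yield JSONL lines describing the UI for one agent turn (sketch).

    Each line is a self-contained JSON message the client can apply as
    soon as it arrives, so the interface builds incrementally.
    """
    yield json.dumps({
        "type": "surfaceUpdate",
        "components": [
            {"id": "root", "component": "Column", "children": ["title"]},
        ],
    })
    yield json.dumps({
        "type": "surfaceUpdate",
        "components": [
            {"id": "title", "component": "Text",
             "text": f"You said: {user_message}"},
        ],
    })

# User interactions flow back to the agent as plain data, e.g.:
action = {"type": "action", "componentId": "submit", "name": "click"}

for line in agent_turn("show booking form"):
    print(line)
```

The client parses each line, renders it against its trusted catalog, and sends action messages back, completing the loop described above.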
The key innovation is the flat adjacency list model where components reference each other by ID, instead of nested JSON trees. This design choice is non-obvious but critical – it’s why A2UI works better with LLMs than alternatives. Nested JSON is hard for language models to generate correctly mid-stream. Flat lists are simple.
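The difference is easiest to see side by side. A sketch with illustrative component shapes – note that each entry in the flat form is independent and can stream one line at a time:

```python
# Nested form: the LLM must emit a correctly balanced tree, and the
# client can render nothing until the outermost brackets close.
nested = {
    "component": "Card",
    "children": [
        {"component": "Text", "text": "Total"},
        {"component": "Button", "label": "Pay"},
    ],
}

# Flat adjacency-list form: components reference children by ID, so each
# entry is a small, self-contained object that can arrive in any order.
flat = [
    {"id": "card", "component": "Card", "children": ["total", "pay"]},
    {"id": "total", "component": "Text", "text": "Total"},
    {"id": "pay", "component": "Button", "label": "Pay"},
]

def resolve(components: list, root_id: str) -> dict:
    """Rebuild the render tree from the flat list (client-side sketch)."""
    by_id = {c["id"]: c for c in components}
    node = dict(by_id[root_id])
    node["children"] = [resolve(components, cid)
                        for cid in node.get("children", [])]
    return node
```

Reassembling the tree is the renderer's job, which is cheap and deterministic; generating balanced nested JSON mid-stream is the LLM's job, which is error-prone.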
from a2ui import A2uiSchemaManager
# Initialize with component catalog
schema_manager = A2uiSchemaManager(catalog_path="components.json")
system_prompt = schema_manager.generate_system_instruction()
# Agent generates JSON for "show booking form"
# {"type": "surfaceUpdate", "components": [...]}
The v0.9 SDK includes “incremental parsing and healing” to handle malformed LLM output mid-stream. Users see interfaces building in real-time rather than waiting for complete JSON responses.
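The idea behind incremental parsing can be illustrated with a generic line-buffered JSONL reader. This is a rough stand-in, not the SDK's actual implementation, and the message types are placeholders:

```python
import json

def parse_jsonl_stream(chunks):
    """Incrementally parse newline-delimited JSON from arbitrary chunks.

    Complete lines are emitted as soon as they arrive; a malformed line
    is dropped ("healed" in the crudest sense) instead of aborting the
    whole stream, so one bad LLM emission can't blank the UI.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if not line.strip():
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # skip the malformed line, keep streaming

# A message split across chunks, a broken line, then a clean one:
msgs = list(parse_jsonl_stream([
    '{"type": "surfaceUpd', 'ate"}\n{"broken": \n',
    '{"type": "dataUpdate"}\n',
]))
```

The first message is assembled from two network chunks, the broken line is discarded, and the final message still gets through – which is why users see interfaces building progressively rather than all-or-nothing.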
What’s New in A2UI v0.9
Version 0.9 ships six major improvements. The new Python Agent SDK (pip install a2ui) simplifies agent development by handling schema management, prompt engineering, and message validation. Official React, Flutter, Angular, and Lit renderers received version bumps with full v0.9 support. Bidirectional messaging enables client-to-server sync for collaborative editing scenarios.
Client-defined functions add validation capabilities. The Web-Core library (@a2ui/web-lib) provides shared logic for all browser renderers. The philosophy shifted dramatically – Google renamed “Standard” components to “Basic,” emphasizing integration with existing design systems over introducing new components.
As the Google Developers team stated: “Frontend developers don’t want new components. They want their agents to work with the design systems they’ve already built.” This isn’t just a naming change. It reflects real-world feedback from enterprises who won’t adopt A2UI if it means rebuilding their UI libraries.
A2UI Tutorial: Build Your First Widget in 10 Minutes
Getting started requires four steps. Install the renderer for your framework (npm install @a2ui/react). Set up the Agent SDK on your backend (pip install a2ui). Define your component catalog by mapping A2UI types to existing components. Generate UI from the agent and stream to the client.
Skip installation entirely with A2UI Composer, an interactive tool at a2ui.org. Describe your widget, the agent generates output, and it renders live in your browser. No setup barrier.
import { A2UIRenderer } from '@a2ui/react';

function ChatInterface() {
  return (
    <A2UIRenderer
      catalog={myComponents}
      streamSource={agentWebSocket}
      onAction={sendToAgent}
    />
  );
}
The catalog is where you map A2UI component types to your Radix UI, ShadCN, Mantine, or Material UI components. Agents orchestrate your existing design system rather than generating new components from scratch. This maintains brand consistency, accessibility, and design system compliance automatically.
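As a rough sketch of what such a catalog file might contain – the schema below is hypothetical, not the official components.json format – each entry names a component type agents may emit and the props your renderer will accept:

```python
import json

# Hypothetical catalog sketch. On the client, each "type" is mapped to a
# component from your existing design system (Radix, ShadCN, Mantine, ...).
catalog = {
    "components": [
        {"type": "Button",
         "props": {"label": "string", "variant": ["primary", "secondary"]}},
        {"type": "TextField",
         "props": {"label": "string", "value": "string"}},
        {"type": "Column",
         "props": {"gap": "number"}},
    ]
}

# Serialized, this is the kind of file you would hand to the Agent SDK
# via its catalog argument on the backend.
catalog_json = json.dumps(catalog, indent=2)
```

Keeping the prop schemas tight matters: the catalog doubles as the vocabulary the LLM is prompted with, so every extra component and prop is more surface for the model to get wrong.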
A2UI vs Vercel json-render – Which to Choose?
A2UI competes primarily with Vercel’s json-render. The differences matter. A2UI is framework-agnostic, supporting React, Flutter, Angular, and native mobile. Vercel’s json-render works with React only. A2UI uses declarative data with no code execution. Vercel uses React Server Components that execute server-side.
Choose A2UI if you need cross-platform support, security boundaries for untrusted agents, or an open-source requirement. Choose Vercel if you’re React-only, want a mature ecosystem with one year of production use, or already run Vercel infrastructure. A2UI suits multi-agent systems where third-party agents need to display UI safely. Vercel excels for internal tools with trusted agents.
Neither approach is wrong. A2UI targets broader reach and tighter security. Vercel offers deeper React integration and proven production stability. Match your requirements to architecture strengths.
Production Considerations and Current Limitations
A2UI v0.9 is in draft status, with v1.0 targeted for Q4 2026. Expect API changes. The main production challenge is protocol fragmentation – A2UI, A2A, AG-UI, and MCP all solve related but different problems. Developers face “which one do I use?” confusion.
UX consistency poses real risk. When agents generate dynamic UIs, interfaces can change drastically with every interaction. Users struggle to learn applications that shift unpredictably. Keep core navigation and layout stable. Only make content areas dynamic.
Catalog design requires balance. Too many components create inconsistent UX and confuse LLMs during generation. Too few components limit expressiveness. Start with 10-20 core components and expand gradually based on actual use cases. The ecosystem remains early with two public case studies and limited community examples beyond Google’s samples.
Key Takeaways
- A2UI solves agent UI generation through declarative JSON that renders natively across React, Flutter, Angular, and mobile platforms without code execution risks
- Version 0.9 ships production-ready Python SDK, official renderers, bidirectional messaging, and design system integration philosophy that respects existing component libraries
- The adjacency list architecture optimizes for LLM streaming – flat component lists beat nested JSON trees for incremental generation
- Choose A2UI for cross-platform needs and security boundaries; choose Vercel json-render for React-only stacks with mature ecosystem requirements
- The protocol is in draft status targeting v1.0 in Q4 2026 – expect API evolution, catalog design challenges, and UX consistency work
A2UI isn’t the only answer to agent-driven UIs, but it’s the most ambitious attempt at a universal standard. Whether it succeeds depends on Google’s commitment and community adoption over the next six months.