Genesys unveiled the industry’s first Large Action Model-powered virtual agent in February 2026, marking a shift from AI that talks to AI that acts. LLMs predict the next word. LAMs predict the next action. It sounds like marketing spin—until you see the architecture. LAMs aren’t just LLMs with function calling grafted on. They’re smaller, faster models pretrained on action sequences instead of text, designed to execute multi-step workflows deterministically without hallucinating their way through your enterprise stack.
## What Makes LAMs Different from LLMs
Large Language Models treat actions as suggestions wrapped in natural language. Large Action Models treat actions as first-class objects. The distinction matters when you’re automating workflows that touch money, compliance, or customer data.
LAMs are built on a neuro-symbolic architecture—neural networks for pattern recognition, symbolic logic for reasoning. This combination delivers three capabilities LLMs struggle with: deterministic multi-step planning, policy-aligned execution, and reduced hallucinations. Scaled Cognition’s APT-1, the model powering Genesys’s virtual agent, uses synthetic training data from agent-to-agent self-play. No human labeling. No manually curated datasets. Just AI agents learning action sequences by executing them against each other.
The architecture splits into six components: interpretation (understanding user intent), planning (crafting action sequences), execution (performing the actions), grounding (connecting abstract actions to concrete API calls), memory (maintaining context across steps), and feedback (learning from outcomes). When you ask an LLM to “process this invoice,” it generates text describing the steps. When you ask a LAM, it logs into your financial system, extracts the data, reconciles it with purchase orders, and flags discrepancies—autonomously.
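The six components above can be sketched as stages of a single pipeline. This is an illustrative toy, not Genesys or Scaled Cognition code; every class, method, and intent name here is an assumption made up for the example:

```python
from dataclasses import dataclass, field

@dataclass
class LamPipeline:
    """Toy six-stage LAM loop; each stage is a plain function here."""
    memory: dict = field(default_factory=dict)  # memory: context across steps

    def interpret(self, request: str) -> str:
        # Interpretation: map free-form text to a known intent.
        return "process_invoice" if "invoice" in request else "unknown"

    def plan(self, intent: str) -> list[str]:
        # Planning: craft the full action sequence up front.
        plans = {"process_invoice": ["login", "extract", "reconcile", "flag"]}
        return plans.get(intent, [])

    def ground(self, step: str) -> str:
        # Grounding: connect an abstract action to a concrete call (mocked).
        return f"api.{step}()"

    def execute(self, steps: list[str]) -> list[str]:
        # Execution: perform each grounded action, recording it in memory.
        results = []
        for step in steps:
            results.append(self.ground(step))
            self.memory[step] = "done"
        return results

    def feedback(self, results: list[str]) -> bool:
        # Feedback: did every planned step complete?
        return len(results) == len(self.memory)

pipe = LamPipeline()
steps = pipe.plan(pipe.interpret("process this invoice"))
results = pipe.execute(steps)
print(pipe.feedback(results), results)
```

The point of the shape is that planning produces the whole sequence before execution starts, and memory plus feedback close the loop afterward.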
APT-1 is smaller than most LLMs, optimized for speed over linguistic creativity. It tops tau-bench, a widely used benchmark for agentic performance, while being small enough to self-host within enterprise VPCs. That security model matters. Your LAM runs behind your firewall, not in some vendor’s cloud.
## LAMs vs LLM Function Calling: When the Distinction Matters
LLM function calling is a workaround. The LLM generates a function name and arguments, an external system executes the function, and the result gets fed back to the LLM for the next step. It works—until it doesn’t. Each step introduces hallucination risk. Each round-trip adds latency. Multi-step workflows become fragile chains where one bad function call derails the entire sequence.
LAMs plan the entire workflow upfront, then execute within a defined framework. Instead of re-engaging the model for each action, the LAM commits to a sequence and follows through deterministically. Built-in guardrails enforce policy compliance. If the workflow requires approval before transferring funds, the LAM stops and waits. If it needs to verify data before proceeding, it does so without inventing plausible-sounding nonsense.
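The commit-then-execute pattern with a built-in approval gate can be sketched in a few lines. The action names and the `REQUIRES_APPROVAL` policy are hypothetical, chosen only to illustrate the halt-and-wait behavior described above:

```python
# A plan is committed once, then executed deterministically with no model
# round-trip per step. A policy guardrail halts before sensitive actions.
REQUIRES_APPROVAL = {"transfer_funds"}

def execute_plan(plan: list[str], approvals: set[str]) -> list[str]:
    log = []
    for action in plan:
        if action in REQUIRES_APPROVAL and action not in approvals:
            log.append(f"HALT: {action} awaiting approval")
            break  # stop and wait for a human, don't improvise
        log.append(f"OK: {action}")
    return log

plan = ["verify_identity", "check_balance", "transfer_funds", "send_receipt"]
print(execute_plan(plan, approvals=set()))
print(execute_plan(plan, approvals={"transfer_funds"}))
```

Contrast this with the function-calling loop, where the model is re-prompted after every step and any one of those generations can derail the sequence.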
| Aspect | LLM + Function Calling | LAM |
|---|---|---|
| Planning | Step-by-step (re-engage per action) | Upfront multi-step plan |
| Hallucinations | Higher risk (each step) | Lower (deterministic execution) |
| Compliance | External guardrails needed | Built-in policy alignment |
| Latency | Higher (round-trips) | Lower (single plan, execute) |
| Best for | Language-heavy, exploratory tasks | Deterministic multi-step workflows |
If your workflow is deterministic and multi-step—invoice processing, IT provisioning, compliance checks—LAMs reduce failure modes. If your workflow is exploratory or requires creative language generation, stick with LLMs. Or use both: LLM as the planner interpreting user intent, LAM as the executor carrying out the plan.
## Genesys Goes First: Production LAM Deployment
Genesys announced its LAM-powered agentic virtual agent on February 10, 2026. General availability is expected in Q1 FY2027 (February through April 2026), making this one of the first production-grade LAM deployments hitting enterprises this quarter.
The system handles end-to-end customer request resolution autonomously. A customer asks to process a return. The LAM interprets the request, checks the order history, verifies the return policy, initiates the refund across payment and inventory systems, and confirms completion—no human intervention unless something violates policy. It works across voice and text channels in multiple languages out of the box, no fine-tuning required per customer.
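The return flow described above chains steps where each output feeds the next and a policy check gates the side effects. This sketch mocks the order store, the 30-day window, and both downstream systems; none of it reflects Genesys internals:

```python
# Illustrative end-to-end return flow; order data and policy are mocked.
ORDERS = {"A100": {"item": "headphones", "days_since_delivery": 12, "price": 79.0}}
RETURN_WINDOW_DAYS = 30

def process_return(order_id: str) -> dict:
    order = ORDERS.get(order_id)          # check the order history
    if order is None:
        return {"status": "rejected", "reason": "order not found"}
    if order["days_since_delivery"] > RETURN_WINDOW_DAYS:  # verify policy
        return {"status": "rejected", "reason": "outside return window"}
    refund = {"refunded": order["price"]}                  # payment system
    restock = {"restocked": order["item"]}                 # inventory system
    return {"status": "completed", **refund, **restock}    # confirm completion

print(process_return("A100"))
```

Only policy violations (unknown order, expired window) short-circuit the flow; everything else completes without human intervention.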
Genesys partnered with Scaled Cognition specifically for APT-1’s deterministic execution model. Customer service workflows have strict requirements: follow the script, respect approval thresholds, never promise what you can’t deliver. Hallucinations aren’t quirky mistakes in this context—they’re compliance violations and customer trust failures. APT-1’s architecture prioritizes correctness over creativity, which is exactly what enterprise automation needs.
## Developer Perspective: Integration Patterns and Adoption
Three integration patterns are emerging for agentic AI systems, and they apply whether you’re using LAMs or LLM-based agents.
Tool calling: Platforms like LangChain handle authentication, permissions, and logging. Your LAM calls tools through a standardized interface, and the platform manages execution details. This works well when you’re orchestrating existing APIs.
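A minimal version of that standardized interface looks like a registry plus one invocation path that handles logging. This is a generic sketch, not the LangChain API; the `tool` decorator and `call_tool` helper are hypothetical:

```python
# Minimal tool-calling layer: tools register under a name, and the agent
# invokes them through one interface that also keeps an audit trail.
from typing import Callable

TOOLS: dict[str, Callable] = {}
CALL_LOG: list[str] = []

def tool(name: str):
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register

@tool("get_order_status")
def get_order_status(order_id: str) -> str:
    return f"order {order_id}: shipped"

def call_tool(name: str, **kwargs) -> str:
    CALL_LOG.append(name)  # platform-managed logging
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(call_tool("get_order_status", order_id="A100"))
```

In a real platform, the registration step would also attach auth and permission metadata so the execution layer can enforce them per call.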
Unified APIs: Middleware normalizes multiple third-party APIs into a single interface. The LAM works with a consistent schema regardless of whether it’s calling Salesforce, Stripe, or your internal ERP. Providers like Composio specialize in this abstraction layer, handling rate limiting, error normalization, and authentication token management.
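The normalization idea can be shown with two tiny adapters mapping provider-specific payloads into one schema. The payload shapes below are invented for illustration and do not match the real Salesforce or Stripe APIs:

```python
# Sketch of a unified-API layer: each provider adapter maps its native
# response into one normalized schema the LAM can rely on.
def from_salesforce(raw: dict) -> dict:
    return {"customer": raw["Contact"]["Name"], "amount": raw["Amount__c"]}

def from_stripe(raw: dict) -> dict:
    # Stripe-style amounts are in cents; normalize to currency units.
    return {"customer": raw["customer_name"], "amount": raw["amount"] / 100}

ADAPTERS = {"salesforce": from_salesforce, "stripe": from_stripe}

def normalize(provider: str, raw: dict) -> dict:
    return ADAPTERS[provider](raw)

print(normalize("stripe", {"customer_name": "Ada", "amount": 7900}))
print(normalize("salesforce", {"Contact": {"Name": "Ada"}, "Amount__c": 79.0}))
```

Both calls yield the same normalized record, which is exactly the property that lets the LAM stay provider-agnostic.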
Model Context Protocol (MCP): LAMs discover available tools automatically through MCP servers. Instead of hardcoding which APIs exist, the LAM queries an MCP server at runtime and adapts to available capabilities. This pattern future-proofs your system as new tools get added.
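The discovery pattern is the key idea, sketched here in spirit only; this toy `FakeToolServer` stands in for a real MCP server and SDK, neither of which is shown:

```python
# Runtime tool discovery: the agent queries a "server" for its current
# capabilities instead of hardcoding which APIs exist.
class FakeToolServer:
    def __init__(self, tools: dict):
        self._tools = tools

    def list_tools(self) -> list[str]:
        # Discovery endpoint: what can this server do right now?
        return sorted(self._tools)

    def invoke(self, name: str, **kwargs):
        return self._tools[name](**kwargs)

server = FakeToolServer({
    "lookup_order": lambda order_id: f"order {order_id} found",
    "issue_refund": lambda order_id: f"refund issued for {order_id}",
})

available = server.list_tools()  # adapt to whatever exists today
print(available)
print(server.invoke("lookup_order", order_id="A100"))
```

When a new tool is registered on the server, the agent picks it up on the next `list_tools` call with no code change on the client side.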
The hybrid approach—LLM for language understanding, LAM for execution—combines the strengths of both. Use an LLM to interpret user requests and generate a plan. Pass that plan to a LAM for deterministic execution. The LLM handles ambiguity and context. The LAM handles correctness and compliance.
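The hand-off in the hybrid approach is usually a structured plan, often JSON. In this sketch a hardcoded function stands in for the LLM planner, and the executor runs the plan verbatim; both function names are made up:

```python
# Hybrid split: a "planner" (standing in for an LLM) turns a request into
# a structured plan; a deterministic executor runs it step by step.
import json

def llm_planner(request: str) -> str:
    # A real LLM would generate this JSON; it is hardcoded for the sketch.
    if "refund" in request:
        return json.dumps({"steps": ["lookup_order", "check_policy", "issue_refund"]})
    return json.dumps({"steps": []})

def lam_executor(plan_json: str) -> list[str]:
    plan = json.loads(plan_json)  # structured hand-off, not free-form text
    return [f"executed:{step}" for step in plan["steps"]]

trace = lam_executor(llm_planner("please refund my order"))
print(trace)
```

The boundary is deliberate: everything ambiguous happens before the JSON is emitted, and everything after it is deterministic.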
Skills developers need: agentic design patterns (planning, execution, feedback loops), API integration expertise (especially unified APIs and MCP), and observability for debugging multi-step failures. When a five-step workflow fails at step three, you need tooling that shows exactly what happened and why.
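For that last point, per-step tracing is the minimum viable tooling. This sketch records each step's outcome and stops at the first failure so the evidence survives; the workflow and its failing step are invented:

```python
# Per-step tracing so a failure at step three is pinpointed, not inferred.
def run_with_trace(steps: list) -> list[dict]:
    trace = []
    for i, (name, fn) in enumerate(steps, start=1):
        try:
            result = fn()
            trace.append({"step": i, "name": name, "status": "ok", "result": result})
        except Exception as exc:
            trace.append({"step": i, "name": name, "status": "error", "error": str(exc)})
            break  # stop the workflow, keep the evidence
    return trace

def issue_refund():
    raise RuntimeError("payment API timeout")  # simulated failure at step 3

steps = [
    ("fetch_order", lambda: "A100"),
    ("verify_policy", lambda: "within window"),
    ("issue_refund", issue_refund),
    ("notify_customer", lambda: "email sent"),
]
trace = run_with_trace(steps)
print(trace[-1])
```

Production systems would ship these records to a tracing backend, but the shape is the same: one structured record per step, with the error attached to the step that raised it.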
## The Hype Check: What’s Real vs What’s Marketing
LAMs aren’t a new species of AI. They’re LLMs with better constraints and purpose-built training. The core is still a language model. The innovation is in what you train it on (action sequences instead of text) and how you structure execution (deterministic planning instead of step-by-step reasoning).
Many capabilities marketed as “LAM-only” are achievable with well-designed LLM systems. Add a planning layer, enforce execution guardrails, use structured outputs instead of free-form generation, and you get most of what a LAM provides. The difference is LAMs package these patterns into a pretrained model, whereas LLM solutions require custom engineering.
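One of those guardrail patterns, validating structured output against an action schema before anything executes, fits in a dozen lines. The allowed-action list and output shape are assumptions for the sketch:

```python
# Guardrail sketch: a model's structured output is checked against an
# action schema before execution; anything off-schema is rejected.
ALLOWED_ACTIONS = {"lookup_order", "issue_refund"}

def validate(output: dict) -> tuple[bool, str]:
    if set(output) != {"action", "args"}:
        return False, "unexpected keys"
    if output["action"] not in ALLOWED_ACTIONS:
        return False, f"disallowed action: {output['action']}"
    if not isinstance(output["args"], dict):
        return False, "args must be an object"
    return True, "ok"

good = {"action": "issue_refund", "args": {"order_id": "A100"}}
bad = {"action": "delete_database", "args": {}}
print(validate(good), validate(bad))
```

This is the "custom engineering" the paragraph refers to: each check is simple, but someone has to write, test, and maintain all of them, which is what a pretrained LAM packages up.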
Where LAMs genuinely win: enterprises needing compliance guarantees, workflows requiring sub-second latency across multiple steps, and teams without the engineering bandwidth to build custom agentic systems on top of LLMs. APT-1’s synthetic training data approach is legitimately novel—generating training data through agent self-play eliminates the human labeling bottleneck that constrains LLM development.
The bottom line: LAMs solve real problems for specific use cases. If you’re building automation that needs deterministic execution with built-in compliance, LAMs deliver measurable benefits—lower hallucination rates, faster execution, fewer failure modes. If you’re building a chatbot or content generator, an LLM is still the right tool. Pick based on the job, not the hype cycle.