A brand new Python framework called Acorn (v0.7.2, released February 25, 2026) promises to fix what’s broken in LLM agent development: LangChain’s bloat, DSPy’s steep learning curve, and the lack of type safety everywhere else. Created by Andrei from askmanu, Acorn combines the best features of DSPy, LangChain, and Instructor into a Pydantic-first API that makes building long-running AI agents simple, type-safe, and production-ready from day one.
Developers are drowning in LangChain abstractions and fleeing to vanilla Python. Acorn offers a middle ground: structured agent development without the bloat. With agentic AI being the trend of 2026 and long-running agents still unsolved (the infamous “35-minute degradation problem”), a framework that simplifies this workflow couldn’t arrive at a better time.
Setup: Working Code in 10 Lines
Installation is straightforward: pip install acorn and you're ready. Acorn uses Pydantic models to define structured inputs and outputs, then wraps everything in a simple Module class.
Here’s a complete single-turn agent that summarizes text:
from pydantic import BaseModel, Field
from acorn import Module

class Input(BaseModel):
    text: str = Field(description="The text to summarize")
    max_words: int = Field(default=100, description="Maximum words in the summary")

class Output(BaseModel):
    summary: str = Field(description="The concise summary")
    word_count: int = Field(description="Number of words")

class Summarizer(Module):
    initial_input = Input
    final_output = Output
    model = "anthropic/claude-sonnet-4-5-20250514"

summarizer = Summarizer()
result = summarizer(text="Long article here...", max_words=50)
print(result.summary)  # Typed output!
That's it. No verbose LangChain chains, no DSPy teleprompters: just Pydantic models you already know and a Module class that handles the LLM interaction. Because the design is Pydantic-first, type errors are caught at development time, not when your agent fails in production.
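The early-error claim is easy to demonstrate with plain Pydantic, which Acorn's output models are built on (this sketch uses Pydantic directly, not Acorn's API): a payload whose word_count cannot be parsed as an integer fails validation immediately instead of propagating downstream.

```python
from pydantic import BaseModel, Field, ValidationError

class Output(BaseModel):
    summary: str = Field(description="The concise summary")
    word_count: int = Field(description="Number of words")

# A well-typed payload validates and gives attribute access with real types.
ok = Output(summary="Short version.", word_count=2)
print(type(ok.word_count))  # <class 'int'>

# A malformed payload fails loudly at validation time, not deep in your pipeline.
try:
    Output(summary="Short version.", word_count="lots")
except ValidationError as e:
    print("rejected:", e.errors()[0]["loc"])  # ('word_count',)
```

The same validation runs on every LLM response, so a model that returns malformed JSON is caught at the boundary rather than corrupting later steps.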
Agentic Loops: The Killer Feature
Acorn's standout capability is agentic loops: multi-turn execution where the LLM iteratively calls tools until it solves the task. Setting max_steps=5 lets the agent search, analyze, and synthesize across multiple turns without you writing orchestration logic.
Here’s a research agent that knows when to stop:
from acorn import Module, tool  # tool decorator assumed to be exported by acorn

class Findings(BaseModel):
    findings: str = Field(description="Synthesized research findings")

class ResearchAgent(Module):
    initial_input = Input
    final_output = Findings
    max_steps = 5  # Enable agentic loop
    model = "anthropic/claude-sonnet-4-5-20250514"

    @tool
    def search(self, query: str) -> list:
        """Search for information."""
        return ["result1", "result2"]

    @tool
    def analyze(self, data: str) -> str:
        """Analyze collected data."""
        return f"Analysis: {data}"

    def on_step(self, step):
        """Control loop execution."""
        if len(step.tool_results) >= 3:
            step.finish(findings="Collected enough data")
        return step
The agent decides when to search, when to analyze, and when it’s finished. Additionally, the on_step() callback gives you full control to log behavior, modify available tools, or terminate early based on conditions.
This solves one of the hardest problems in LLM app development: long-running agents degrade after roughly 35 minutes as context windows fill up, memory management gets complex, and existing frameworks force you to build this orchestration manually. Acorn handles it with one parameter and a callback.
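For contrast, here is a minimal sketch of the hand-rolled orchestration loop that a framework-level agentic loop replaces. Everything here is a hypothetical stand-in, not Acorn's API: call_llm represents any LLM client, and fake_llm simulates one for illustration.

```python
# Hedged sketch of a manual tool-calling loop: call the model, dispatch
# tool calls, feed results back, and stop on a final answer or step budget.
def run_agent(call_llm, tools, prompt, max_steps=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_llm(history)
        tool_call = reply.get("tool_call")
        if tool_call is None:
            return reply["content"]  # model produced a final answer
        result = tools[tool_call["name"]](**tool_call["args"])
        history.append({"role": "tool", "content": str(result)})
    return None  # step budget exhausted

# Fake LLM: first requests a search, then answers once it sees a tool result.
def fake_llm(history):
    if not any(m["role"] == "tool" for m in history):
        return {"tool_call": {"name": "search", "args": {"query": "x"}}}
    return {"content": "done"}

print(run_agent(fake_llm, {"search": lambda query: ["r1"]}, "find x"))
# prints: done
```

Even this toy version needs history bookkeeping, dispatch, and a termination policy; a real one adds context trimming, retries, and error handling, which is the code max_steps and on_step() absorb.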
Type Safety: Catch Errors Before Production
Every input and output in Acorn is a Pydantic model, which means full type safety, automatic validation, and IDE autocomplete. Tool schemas are auto-generated from type hints and docstrings, so no manual schema writing is required.
@tool
def search(query: str, limit: int = 10) -> list:
    """Search for information.

    Args:
        query: The search query
        limit: Maximum results to return
    """
    return search_api(query, limit)  # search_api: your backend search function
Acorn reads your type hints (query: str, limit: int) and docstring to generate the tool schema the LLM needs. Schema drift becomes impossible because the schema is derived directly from your code.
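To make "the schema is your code" concrete, here is a hedged sketch of how such a schema can be derived with only the standard library; Acorn's actual generator may differ in details, and the JSON-type mapping here is an illustrative assumption.

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean", list: "array"}

def tool_schema(fn):
    """Build a tool schema dict from a function's hints and docstring."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default means the LLM must supply it
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip().splitlines()[0],
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def search(query: str, limit: int = 10) -> list:
    """Search for information."""
    return []

print(tool_schema(search))
```

Change the function signature and the schema changes with it on the next run, which is exactly why hand-maintained schemas (and their drift) disappear.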
Type safety is becoming table stakes for production LLM apps in 2026. Pydantic AI, Instructor, and now Acorn all prioritize typed I/O because runtime errors in production are expensive: developers want to catch mistakes at dev time, not when the LLM calls a tool with the wrong parameter types and crashes the pipeline.
How Acorn Compares to LangChain and DSPy
Acorn is simpler than LangChain (less bloat, cleaner API), easier than DSPy (no steep learning curve), and more powerful than Instructor (adds agentic loops and dynamic tools).
LangChain's problems are well-documented by 2026: dependency bloat, unstable APIs, and abstraction layers that became friction instead of help. Developers are fleeing to vanilla Python for simple tasks, and the community consensus is clear: "LangChain is overkill for simple RAG apps."
DSPy takes a different approach with prompt optimization and research-backed techniques, but newcomers struggle with the steep learning curve. Terms like "signature," "module," and "teleprompter" are overwhelming when you just want to build an agent.
Acorn trades ecosystem maturity for simplicity: you give up LangChain's massive integration library and DSPy's optimization capabilities, but you gain an API that makes sense in five minutes. For new projects where type safety and clean code matter more than ecosystem size, that's a worthwhile trade.
Production Features: Built for Real Use
A framework released 2 days ago that already ships with streaming (real-time output), provider caching (reduced latency and cost), model fallbacks (high availability), and 85% test coverage signals production-first thinking. Most new frameworks skip these features and add them later; Acorn's creator clearly built for real use cases, not just demos.
The MIT license, 201 passing tests, and support for any LiteLLM-compatible provider (Claude, GPT, Gemini) suggest this isn't a weekend project. That said, being 2 days old means Acorn is too new for mission-critical systems: the ecosystem is immature, documentation is minimal beyond the README, and there's no Stack Overflow safety net yet.
Should You Use Acorn?
Use Acorn when you're building new agent projects from scratch, need type-safe LLM interactions without LangChain bloat, and want simple agentic loops without manually orchestrating multi-turn workflows. It requires Python 3.10+ and works best with Claude or GPT for structured outputs.
Skip Acorn if you need async support (planned but not available), massive ecosystem integrations (use LangChain), or you're working on mission-critical production systems (wait for maturity). The framework is well-tested and feature-complete for production use, but the ecosystem needs time to develop.
The honest assessment: Acorn is worth serious consideration for new projects. It won't replace LangChain's ecosystem or DSPy's optimization power, but it offers a cleaner path for developers tired of fighting abstractions just to build a working agent.
