A developer just built a working programming language interpreter in a single day using Claude. Martin Janiczek created FAWK, a functional variant of AWK, by directing Claude Sonnet 4.5 through Cursor Agent. The project demonstrates that frontier LLMs have crossed a threshold most developers haven’t fully internalized yet.
The Catalyst
Janiczek’s frustration started simply enough. He wanted to use AWK for Advent of Code puzzles, but kept hitting walls. AWK, born in 1977, lacks features that modern developers take for granted. You can’t return arrays from functions. Variables leak between scopes. There’s no lambda support. Every workaround felt like fighting the language instead of solving problems.
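In stock AWK, the workarounds show up in every function signature: “local” variables are faked by padding the parameter list, and because only scalars can be returned, an array has to come back by mutating one the caller passed in. A small illustration in plain POSIX AWK (not FAWK):

# Plain AWK: i and count are "locals" only because they are listed as
# extra parameters; the result array must be passed in and filled,
# since a function can return only a scalar.
function evens(src, n, dest,    i, count) {
    count = 0
    for (i = 1; i <= n; i++)
        if (src[i] % 2 == 0)
            dest[++count] = src[i]
    return count
}

BEGIN {
    split("1 2 3 4 5", nums, " ")
    n = evens(nums, 5, result)
    for (i = 1; i <= n; i++)
        print result[i]    # prints 2 and 4
}

Pulling two even numbers out of a list takes a dummy-parameter convention and an output array threaded through by hand.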
Rather than accepting these limitations, Janiczek wondered: what if AWK had first-class arrays, lexical scope, and proper functions? Instead of spending weeks implementing this vision manually, he asked Claude to do it.
What FAWK Adds
The result addresses AWK’s core pain points. Arrays become first-class citizens. You can create them with literals, nest them, pass them by value, and return them from functions. Functions themselves become values you can pass around. Lexical scoping means local variables stay local. A pipeline operator enables functional composition.
The syntax stays familiar to AWK users while removing decades of accumulated friction:
BEGIN {
    result = [1, 2, 3, 4, 5]
        |> filter((x) => { x % 2 == 0 })
        |> map((x) => { x * x })
        |> reduce((acc, x) => { acc + x }, 0)
    print result
}
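The pipeline keeps the even numbers, squares them, and sums the squares, leaving result at 20. Janiczek’s examples don’t show a full user-defined function, but taking the feature list at face value (AWK-style function definitions plus lexical locals and arrays returned by value), a round trip through one would look roughly like the following sketch. It is hypothetical, reusing only the syntax shown above, not code from the FAWK repo:

# Hypothetical FAWK sketch: assumes AWK's "function" and "return" keywords
# carry over, alongside the lambda, array-literal, and pipeline syntax above.
function double_all(xs) {
    doubled = xs |> map((x) => { x * 2 })    # doubled stays local (lexical scope)
    return doubled                           # the whole array comes back by value
}

BEGIN {
    total = double_all([1, 2, 3])
        |> reduce((acc, x) => { acc + x }, 0)
    print total    # 2 + 4 + 6 = 12
}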
The AI Development Process
Here’s where it gets interesting. Janiczek didn’t write the interpreter. He asked Claude for a README with code examples, then requested a full Python implementation. Claude delivered both. He also received working implementations in C, Haskell, and Rust from the same sessions.
The development followed an iterative pattern. Janiczek would describe a feature, Claude would implement it, and extensive end-to-end tests would verify correctness. Tricky features that seemed like obvious failure points—print working as both statement and expression, multi-dimensional arrays, closure environments—all worked correctly.
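Closure environments are the kind of detail an interpreter gets wrong quietly: every lambda has to capture the scope it was defined in, or captured variables silently come back empty. A hypothetical FAWK sketch of the behavior such tests exercise (the syntax for calling a function stored in a variable is my assumption, not taken from the repo):

# Hypothetical FAWK sketch: the lambda must capture n from the call frame
# of add_n, which is long gone by the time add3 runs.
function add_n(n) {
    return (x) => { x + n }
}

BEGIN {
    add3 = add_n(3)    # functions are values, per the feature list
    print add3(4)      # should print 7 if the captured n survives
}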
The only stumble came with arbitrary-precision floating-point math. Claude attempted Taylor-series implementations but couldn’t get them right. Janiczek told it to use the mpmath Python library instead, and the problem vanished in moments.
The Zero-Knowledge Tradeoff
Janiczek is refreshingly honest about the cost. “I have zero knowledge of the code,” he writes. “I only interacted with the agent by telling it to implement a thing and write tests for it, and I only really reviewed the tests.”
This creates an uncomfortable situation. He has a working interpreter but can’t meaningfully modify it without first learning a codebase he didn’t write. Some of the time saved on building will have to be spent later on understanding.
This isn’t necessarily bad. Developers already work with codebases they didn’t write. But it’s a different relationship with your own project. You’re more reviewer than author.
What This Means
The implications extend beyond one developer’s weekend project. Janiczek puts it directly: “I have to update my priors.”
Tasks that seemed out of reach for AI assistance—implementing type systems, building interpreters, creating new programming languages—now appear achievable in hours rather than weeks. The frontier moved while most developers weren’t looking.
This doesn’t mean every developer should start vibe-coding compilers. The zero-knowledge tradeoff is real. But for prototypes, experiments, and projects where speed matters more than deep ownership, the calculus has changed.
FAWK is available on GitHub for anyone willing to explore what functional AWK might look like. More importantly, it’s a data point suggesting we should probably test our assumptions about what LLMs can and can’t build.
The next time you dismiss a project idea as “too complex for AI assistance,” you might want to try asking first.