Statically typed functional language for LLM code generation
Give language models less syntax to waste.
ll-lang keeps prompts compact, catches mistakes before execution, and returns diagnostics an agent can repair without parsing human prose.
lllc mcp
Same benchmark sample, side by side: 142 TypeScript tokens versus 110 ll-lang tokens.
Built for the feedback loop LLMs actually live in.
Smaller prompts, same logic
Less punctuation and ceremony means more of the context window goes to intent, types, and behavior.
Compile before you execute
Hindley-Milner inference, tagged values, and exhaustive matches move bugs into the compiler instead of production logs.
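ll-lang's own syntax aside, the guarantee can be sketched in TypeScript (one of its compile targets): a tagged union plus a `never`-typed exhaustiveness check turns a forgotten case into a compile error instead of a production log entry. The `Shape` type below is an illustrative stand-in, not ll-lang API.

```typescript
// A tagged (discriminated) union: every value carries its variant tag.
type Shape =
  | { tag: "circle"; radius: number }
  | { tag: "rect"; w: number; h: number };

function area(s: Shape): number {
  switch (s.tag) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "rect":
      return s.w * s.h;
    default: {
      // If a new variant is added to Shape and not handled above,
      // this assignment stops type-checking: the bug surfaces at
      // compile time, not in a log.
      const unreachable: never = s;
      return unreachable;
    }
  }
}

console.log(area({ tag: "rect", w: 3, h: 4 })); // 12
```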
Diagnostics models can repair
Diagnostics stay single-line and structured, so agents can fix one concrete issue at a time.
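The exact diagnostic format is not shown on this page, so the line below is a hypothetical shape, not the documented lllc output. The point it illustrates stands either way: a single structured line splits into fields an agent can act on without parsing prose.

```typescript
// Hypothetical diagnostic shape: <code> <file>:<line>:<col> <message>.
// The real lllc format may differ; this only sketches the one-line idea.
const diag = "LL042 src/hello.lll:3:10 expected Int, found String";

function parseDiag(line: string) {
  const m = line.match(/^(\S+) (\S+):(\d+):(\d+) (.+)$/);
  if (!m) throw new Error("unrecognized diagnostic line");
  return { code: m[1], file: m[2], line: +m[3], col: +m[4], message: m[5] };
}

const d = parseDiag(diag);
console.log(d.code, d.file, d.line); // LL042 src/hello.lll 3
```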
The “hello world” path stays short.
The language surface is compact enough for quick prompting, but it still compiles to real targets and ships with a built-in MCP server.
module Hello
Hello = printfn "Hello, ll-lang!"
./tools/lllc-bootstrap.sh run hello.lll
# Hello, ll-lang!
Not a toy syntax demo.
Self-hosting compiler
ll-lang compiles itself. The bootstrap compiler reaches a fixpoint: compiler₁.fs == compiler₂.fs.
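The self-hosting claim is a fixpoint: compiling the compiler with its own output eventually changes nothing. A generic sketch of that convergence test, with a toy stand-in step rather than the real lllc:

```typescript
// Iterate step until its output stops changing (step(x) === x), or give up.
function fixpoint<T>(step: (x: T) => T, x: T, limit = 10): T {
  for (let i = 0; i < limit; i++) {
    const next = step(x);
    if (next === x) return x; // converged: recompiling changes nothing
    x = next;
  }
  throw new Error("no fixpoint within limit");
}

// Toy stand-in for "recompile": whitespace normalization converges in one pass.
const normalize = (s: string) => s.replace(/\s+/g, " ").trim();
console.log(fixpoint(normalize, "  let   x =  1 ")); // "let x = 1"
```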
Multi-target output
One source can emit F#, TypeScript, Python, Java, C#, and an experimental LLVM backend.
Real tooling surface
lllc mcp exposes compile, diagnose, symbol, fix-preview, and project graph tools for editor agents.
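MCP tools are invoked via JSON-RPC `tools/call` requests, so an editor agent's call to one of the tools above might look like the sketch below. The argument names are assumptions, not the documented lllc schema.

```typescript
// Hypothetical tools/call request to the lllc MCP server.
// "diagnose" is one of the tools named above; the arguments shape is assumed.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "diagnose",
    arguments: { file: "hello.lll" }, // hypothetical argument shape
  },
};

console.log(JSON.stringify(request));
```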
Docs stay close to the metal
The landing page points deeper reading to the README, user guide, spec, and compiler internals instead of duplicating them.