ll-lang

Statically-typed functional language for LLM code generation

Give language models less syntax to waste.

ll-lang keeps prompts compact, catches mistakes before execution, and returns diagnostics an agent can repair without parsing human prose.

8–17% smaller than F# on measured code
1.3–5.9x more compact than TS, Python, and Java on type-heavy samples
30 MCP tools exposed by lllc mcp

Same benchmark sample, side by side: 142 TypeScript tokens versus 110 ll-lang tokens.

Install a pinned bootstrap compiler: ./tools/bootstrap-self.sh install

Why ll-lang

Built for the feedback loop LLMs actually live in.

Smaller prompts, same logic

Less punctuation and ceremony means more of the context window goes to intent, types, and behavior.

Compile before you execute

Hindley-Milner inference, tagged values, and exhaustive matches move bugs into the compiler instead of production logs.
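A minimal sketch of what that looks like in ll-lang itself, extrapolated from the hello-world snippet below. The Shape type, its cases, and the area function are hypothetical, and the union/match syntax here is an assumption in an F#-like style, not confirmed ll-lang grammar:

```
module Shapes

// Hypothetical tagged union: the compiler rejects a match
// that forgets a case.
type Shape = Circle of float | Square of float

// No annotations needed: inferred as Shape -> float.
area s =
  match s with
  | Circle r -> 3.14159 * r * r
  | Square w -> w * w
```

Leaving out the Square arm would be a compile-time error, not a runtime surprise.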

Diagnostics models can repair

Error codes stay single-line and structured, so agents can fix one concrete issue at a time.
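As an illustration of the shape such a diagnostic might take (the code LL0101, the positions, and the message are invented for this sketch, not actual lllc output):

```
hello.lll:3:9 error LL0101: type mismatch: expected int, found string
```

One line, one machine-readable code, one concrete fix per round trip.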

Live snippet

The “hello world” path stays short.

The language surface is compact enough for quick prompting, but it still compiles to real targets and ships with a built-in MCP server.

module Hello

Hello = printfn "Hello, ll-lang!"

./tools/lllc-bootstrap.sh run hello.lll
# Hello, ll-lang!

Proof points

Not a toy syntax demo.

Self-hosting compiler

ll-lang compiles itself. The bootstrap reaches a fixpoint: the compiler's output when compiling its own source is identical across stages (compiler₁.fs == compiler₂.fs).

Multi-target output

One source compiles to F#, TypeScript, Python, Java, and C#, with an experimental LLVM backend.

Real tooling surface

lllc mcp exposes compile, diagnose, symbol, fix-preview, and project graph tools for editor agents.

Docs stay close to the metal

The landing page links out to the README, user guide, spec, and compiler internals for deeper reading instead of duplicating them.