Trillim

We're building local AI that runs on the hardware you already have.

Trillim builds infrastructure for running models on consumer CPUs and edge devices — no GPU required. We train and fine-tune ternary ({-1, 0, 1}) models designed to run efficiently on commodity hardware, and build the tooling to deploy them.

What we believe

GPUs are powerful but expensive, power-hungry, and scarce. Ternary quantization changes the equation: with weights restricted to {-1, 0, 1}, multiplying by a weight reduces to negating, skipping, or passing an activation through, so the model needs no floating-point multipliers at all. The right software can make CPUs fast enough for real-time inference. AI should run anywhere — laptops, Raspberry Pis, edge devices — not just in datacenters.
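To make that concrete, here is a minimal sketch in C of a ternary dot product, the core operation behind a ternary matrix multiply. The function name, weight layout, and values are illustrative assumptions, not Trillim's actual kernels; the point is simply that every step is an add, a subtract, or a skip.

```c
#include <stdint.h>
#include <stdio.h>

/* Dot product with ternary weights: since each weight is -1, 0, or +1,
 * no multiplication is ever performed -- only adds and subtracts.
 * (Illustrative sketch; real kernels would pack weights and vectorize.) */
static float ternary_dot(const int8_t *w, const float *x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++) {
        if (w[i] == 1)       acc += x[i];  /* +1: add the activation      */
        else if (w[i] == -1) acc -= x[i];  /* -1: subtract the activation */
        /* 0: skip -- the term contributes nothing, so no work is done   */
    }
    return acc;
}

int main(void) {
    int8_t w[4] = { 1, -1, 0, 1 };
    float  x[4] = { 0.5f, 2.0f, 3.0f, -1.0f };
    /* 0.5 - 2.0 + (-1.0) = -2.5 */
    printf("%f\n", ternary_dot(w, x, 4));
    return 0;
}
```

A full matrix multiply is just this loop repeated per output row, which is why ternary models map so well onto ordinary CPU integer and SIMD units.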

What we're building

Supported model architectures

BitNet, Llama, Qwen2, Mistral
