
Best Local & Open-Source LLMs for OpenClaw (2026)

Community-voted rankings for running OpenClaw with local and open-source models

Our Picks

Best Overall Local: Mixtral 8x7B Instruct

Best Small Model: Mistral 7B Instruct v0.2

Best for Coding: Qwen2.5 Coder 7B

[Leaderboard table: Provider, Model, Input $/M, Output $/M, Vote, Score — per-model OpenRouter pricing in USD per million tokens plus community vote totals for each open-source model.]

Vote for open-source models that work well (or don't) with OpenClaw.

Pricing from OpenRouter.

Running OpenClaw with a Local Model

OpenClaw supports local models via OpenAI-compatible APIs. The easiest way to run a model locally is with Ollama or llama.cpp, both of which expose a local API endpoint that OpenClaw can connect to.

1. VRAM Requirements

7B models need ~6GB VRAM (4-bit quantized). 13B models need ~10GB. 70B models need ~40GB or multi-GPU. If you're on CPU-only, smaller quantized models (Q4) are usable but slow.
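These figures follow a simple rule of thumb: weight memory is roughly parameters × bits-per-weight ÷ 8, plus headroom for the KV cache and runtime buffers. A minimal sketch of that estimate; the 30% overhead fraction is an assumption, and real usage grows with context length:

# Back-of-the-envelope VRAM estimate: quantized weights plus a fixed
# overhead allowance for KV cache, activations, and runtime buffers.
# The overhead fraction is an assumption, not a measured value.

def estimate_vram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead_fraction: float = 0.3) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 7B at 4-bit ~= 3.5 GB
    return weights_gb * (1 + overhead_fraction)

if __name__ == "__main__":
    for size in (7, 13, 70):
        print(f"{size}B @ 4-bit: ~{estimate_vram_gb(size):.1f} GB")
    # 7B  -> ~4.6 GB (budget ~6 GB with context)
    # 13B -> ~8.5 GB
    # 70B -> ~45 GB, hence 40GB+ or multi-GPU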

2. Recommended: Ollama

Install Ollama, run ollama pull llama3.3, then point OpenClaw to http://localhost:11434/v1.
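Before wiring the endpoint into OpenClaw, it can help to confirm it responds. Here is a minimal check with the openai Python client (any OpenAI-compatible client will do); the api_key value is a placeholder, since Ollama does not validate it:

# Quick connectivity check against Ollama's OpenAI-compatible endpoint.
# Assumes Ollama is running locally and `ollama pull llama3.3` has finished.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3.3",
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
)
print(resp.choices[0].message.content)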

3. Alternative: llama.cpp server

For more control over quantization and performance, llama.cpp's server mode gives you a full OpenAI-compatible API with fine-grained settings.
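The server speaks the same OpenAI-style chat-completions schema, so the same kind of request works. A sketch assuming a llama-server instance on its default port 8080 (adjust if launched with --port); the model field is a placeholder, since llama.cpp serves whatever model it was started with:

# Minimal request against llama.cpp's OpenAI-compatible server,
# e.g. started with `llama-server -m model.gguf`. Sketch only.
import requests

payload = {
    "model": "local",  # placeholder; the server uses the model it was launched with
    "messages": [{"role": "user", "content": "Summarize what a KV cache is in one sentence."}],
    "temperature": 0.2,
    "max_tokens": 128,
}

r = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
r.raise_for_status()
print(r.json()["choices"][0]["message"]["content"])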

Compare all local LLM runners →

About This Leaderboard

This leaderboard shows community votes specifically for open-source and locally-runnable models used with OpenClaw. The list is filtered to open-weight model families from providers such as Meta (Llama), Mistral, Qwen (Alibaba), Google (Gemma), DeepSeek, and Microsoft (Phi).

Running local models with OpenClaw gives you full privacy, zero API costs, and offline capability — at the cost of needing hardware and accepting some quality trade-offs versus frontier API models.

Frequently Asked Questions

Which open-source model is best for OpenClaw?
Based on community votes, Llama 3.3 70B is the top-rated open-source model for OpenClaw. It offers strong instruction-following and tool use while being free to run locally with sufficient VRAM.

Can OpenClaw run with local models?
Yes. OpenClaw supports any OpenAI-compatible API endpoint, so you can point it at a local Ollama or llama.cpp server running any open-source model. Performance depends on model size and your hardware.

How much VRAM do I need?
For 7B models you need ~6GB VRAM (4-bit). For 70B models like Llama 3.3 70B, plan for 40GB+ or multiple GPUs. CPU-only inference is possible but very slow for agentic coding tasks.

Which models work best for agentic coding?
Models with strong instruction-following and tool use matter most for agentic tasks. Llama 3.3 70B, DeepSeek Coder V2, and Qwen2.5 Coder consistently rank well for agentic coding workflows.