

LM Studio Mini Head-to-Head Benchmark (2026-02-22)


Run dir: /home/slime/.openclaw/workspace-base/.run/lmstudio-mini-benchmark/20260222-151100


Settings


Visual Summary

[Chart: LM Studio Benchmark Scores]

[Chart: LM Studio Benchmark Speed]
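
The two charts above plot the scores and average latencies from the ranking table below. A minimal matplotlib sketch that reproduces them from that data (the output file names are assumptions, not artifacts of this run):

```python
# Sketch: regenerate the two summary bar charts from the ranking table.
# Output file names (scores.png, speed.png) are assumed, not from the run dir.
import matplotlib.pyplot as plt

models = [
    "qwen/qwen3-vl-8b", "openai/gpt-oss-20b", "openai/gpt-oss-120b",
    "qwen3-coder-next", "google/gemma-3-4b", "google/gemma-3-12b",
    "deepseek/deepseek-r1-0528-qwen3-8b", "zai-org/glm-4.6v-flash",
    "mistralai/ministral-3-14b-reasoning", "zai-org/glm-4.7-flash",
]
scores = [5, 5, 5, 5, 4, 3, 1, 1, 1, 1]  # prompts passed, out of 5
avg_sec = [1.27, 2.44, 9.07, 10.04, 0.93, 1.66, 2.05, 2.75, 3.28, 5.44]

for values, title, fname in [
    (scores, "LM Studio Benchmark Scores", "scores.png"),
    (avg_sec, "LM Studio Benchmark Speed", "speed.png"),
]:
    fig, ax = plt.subplots(figsize=(8, 4))
    ax.barh(models[::-1], values[::-1])  # best-ranked model on top
    ax.set_title(title)
    fig.tight_layout()
    fig.savefig(fname)
```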


Chat Model Ranking


| Rank | Model | Score | Avg sec/prompt |
| ---- | ----- | ----- | -------------- |
| 1 | `qwen/qwen3-vl-8b` | 5/5 | 1.27 |
| 2 | `openai/gpt-oss-20b` | 5/5 | 2.44 |
| 3 | `openai/gpt-oss-120b` | 5/5 | 9.07 |
| 4 | `qwen3-coder-next` | 5/5 | 10.04 |
| 5 | `google/gemma-3-4b` | 4/5 | 0.93 |
| 6 | `google/gemma-3-12b` | 3/5 | 1.66 |
| 7 | `deepseek/deepseek-r1-0528-qwen3-8b` | 1/5 | 2.05 |
| 8 | `zai-org/glm-4.6v-flash` | 1/5 | 2.75 |
| 9 | `mistralai/ministral-3-14b-reasoning` | 1/5 | 3.28 |
| 10 | `zai-org/glm-4.7-flash` | 1/5 | 5.44 |
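
Each model was scored on the same five prompts, with latency averaged per prompt. A minimal sketch of such a harness, assuming LM Studio's OpenAI-compatible server on its default `localhost:1234` port (the prompt list and `grade()` check are hypothetical stand-ins, not the actual benchmark prompts):

```python
# Sketch of the head-to-head loop: send the same prompts to each model via
# LM Studio's OpenAI-compatible API, recording pass count and avg latency.
# PROMPTS and grade() are hypothetical placeholders for the real benchmark.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
PROMPTS = ["...five benchmark prompts..."]  # placeholder

def grade(reply: str) -> bool:
    """Hypothetical pass/fail check for one prompt's reply."""
    return bool(reply.strip())

def run_model(model: str) -> tuple[int, float]:
    passed, elapsed = 0, 0.0
    for prompt in PROMPTS:
        start = time.monotonic()
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        elapsed += time.monotonic() - start
        passed += grade(resp.choices[0].message.content or "")
    return passed, elapsed / len(PROMPTS)

score, avg = run_model("qwen/qwen3-vl-8b")
print(f"{score}/{len(PROMPTS)} correct, {avg:.2f}s avg/prompt")
```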

Prompt-by-Prompt (chat)


`qwen/qwen3-vl-8b` — 5/5 (avg 1.27s)


`openai/gpt-oss-20b` — 5/5 (avg 2.44s)


`openai/gpt-oss-120b` — 5/5 (avg 9.07s)


`qwen3-coder-next` — 5/5 (avg 10.04s)


`google/gemma-3-4b` — 4/5 (avg 0.93s)


`google/gemma-3-12b` — 3/5 (avg 1.66s)


`deepseek/deepseek-r1-0528-qwen3-8b` — 1/5 (avg 2.05s)


`zai-org/glm-4.6v-flash` — 1/5 (avg 2.75s)


`mistralai/ministral-3-14b-reasoning` — 1/5 (avg 3.28s)


`zai-org/glm-4.7-flash` — 1/5 (avg 5.44s)


Embedding Models
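
Embedding models can be exercised through the same local server's `/v1/embeddings` endpoint. A minimal sketch of comparing two texts by cosine similarity (the model name and sentence pair are illustrative, not the models actually benchmarked here):

```python
# Sketch: fetch embeddings from LM Studio's OpenAI-compatible endpoint and
# compare two sentences by cosine similarity. Model name is illustrative.
import math
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def embed(model: str, text: str) -> list[float]:
    resp = client.embeddings.create(model=model, input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

a = embed("text-embedding-nomic-embed-text-v1.5", "LM Studio runs models locally.")
b = embed("text-embedding-nomic-embed-text-v1.5", "Local model serving with LM Studio.")
print(f"cosine similarity: {cosine(a, b):.3f}")
```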



Artifacts

