there are now three open-weight models reaching the quality of the closed ones (GLM-5.1, Kimi 2.6, Deepseek V4). Two more can write genuinely good code and are small enough to run locally (Gemma 4, Qwen 3.6). This is not going away, whether OpenAI and Anthropic survive or not.

@lain How much hardware dose it take to run those models at home with reasonable performance?

@h4890 any Mac with >32 GB of RAM; the new unified-memory systems from AMD work too. Or a graphics card with at least 24 GB of VRAM (this is much faster).

@lain Interesting... that's nothing! I thought it would require several GPU cards and at least 20k USD in hardware investment.

@h4890 yeah, these small ones are really usable with normal hardware and they can do real work.
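A back-of-the-envelope sketch (my own illustration, not from the thread) of why the hardware numbers above are enough: quantized weights plus a rough allowance for KV cache and runtime buffers. The 4 GB overhead figure is an assumption, not a measured value.

```python
# Rough memory estimate for running a quantized model locally.
# overhead_gb is a guessed flat allowance for KV cache, activations,
# and runtime buffers -- real usage varies with context length.

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead_gb: float = 4.0) -> float:
    """Approximate resident memory in GB: weights + flat overhead."""
    weight_gb = params_billions * 1e9 * (bits_per_weight / 8) / 1e9
    return weight_gb + overhead_gb

# A ~30B-parameter model at 4-bit quantization:
# ~15 GB of weights + ~4 GB overhead = ~19 GB,
# which is why a 24 GB GPU or a 32 GB unified-memory Mac suffices.
print(model_memory_gb(30, 4))  # → 19.0
```

The same arithmetic shows why 8-bit or unquantized weights push the bigger models past consumer hardware.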

@lain I suspect they'd work fairly well with my modus operandi for "vibe coding". That is...

1. Keep it short. Small functions, no more than 10-20 lines of code at most.

2. Keep it checkable. Make sure the code does something you can easily verify.

I've been fairly successful with 1 & 2 using free models.
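A minimal sketch of what rules 1 and 2 look like in practice (the function and its check are my own hypothetical example, not anything from the thread): ask the model for one short function, then run a trivial check against it right away.

```python
# Hypothetical example of the "short + checkable" workflow:
# a function small enough to review at a glance (rule 1),
# paired with a check you can run immediately (rule 2).

def running_mean(values: list[float]) -> list[float]:
    """Return the cumulative mean after each element."""
    means, total = [], 0.0
    for i, v in enumerate(values, start=1):
        total += v
        means.append(total / i)
    return means

# Immediate sanity check: easy to eyeball, fails loudly if the
# model got the logic wrong.
assert running_mean([2, 4, 6]) == [2.0, 3.0, 4.0]
print(running_mean([2, 4, 6]))  # → [2.0, 3.0, 4.0]
```

Keeping each unit this small means a wrong answer from the model is caught in seconds instead of buried in a large diff.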
