Wednesday, 13th November 2024
🔗 The first state-of-the-art coding LLM that runs locally (#). Qwen2.5-Coder-32B should run locally in roughly 20-30 GB of RAM, and reported evaluations show it outperforming GPT-4o at coding, though not quite matching Claude 3.5.
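For a sense of what "runs locally" looks like in practice, here is a minimal sketch of prompting the model from Python, assuming it is being served by a local runner such as Ollama exposing an OpenAI-compatible endpoint on port 11434; the model tag and prompt are illustrative, not prescribed by the post.

```python
# Minimal sketch: prompt a locally served Qwen2.5-Coder-32B through an
# OpenAI-compatible chat endpoint. Assumes a local server (e.g. Ollama)
# is already running the model at localhost:11434; adjust the model tag
# to whatever your runner uses.
import json
import urllib.request

payload = {
    "model": "qwen2.5-coder:32b",  # assumed local model tag
    "messages": [
        {"role": "user", "content": "Write a Python function that parses ISO 8601 dates."}
    ],
}

req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send the request and print the model's reply text.
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["choices"][0]["message"]["content"])
```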