
Conditions & Context After reviewing its little 8B brother a couple of days ago, today we are looking at the Cogito V1 14B model, and I'm curious how it will fare in my very simple test. Unlike the 8B, which was based on Meta's Llama model, this 14B variant is forked off open source…

Conditions & Context Today I'm looking at the Cogito V1 8B model in Q4_K_M quantization. This is Meta's Llama 3.1 under the hood, but with Cogito's proprietary self-improving IDA (Iterated Distillation and Amplification) training loop baked in. Is it better than plain Llama? Let's dive in and see. I picked a very simple prompt that contains a mixture of…

Conditions & Context Today we are going back to France! On the table is Mistral Small 3.1 with a decent 24B weight. It will be a tight squeeze onto a 16GB GPU, so I expect some CPU cores to light up, but let's see whether it does as badly as Qwen. Let's dive right…

Conditions & Context Today we are investigating Microsoft's serious foray into open LLM models: Phi-4, in its 14B weight. An interesting iteration. I say interesting because it has been known for a long while that Microsoft works closely with OpenAI and that Microsoft Copilot is based on the latest versions of GPT. That did not…

Model conditions & context Alright, so today we are taking a quick peek at Mistral's latest open-weight model, Ministral 3, in 8B size and Q4_K_M quantization. I was quite impressed with Mistral's 7B model, and though the Ministral series is a different branch, I was still excited to give it a…

Conditions & context This is a follow-up to my earlier AI@Home DeepSeek R1 8B article. If you haven't read that one yet, go read it first — this one won't make nearly as much sense without it. Are you back? Good. Because what happened next is genuinely fascinating, and I did not plan any of…

Model conditions & context Today we are looking at Mistral's very fresh model (I think this is from December 2025), Ministral 3 14B, in the Instruct variant with Q4_K_M quantization. So, this is a much younger model than the 12B Mistral I recently tested and really liked. Let's see how it did and…

Conditions & Context One of my readers on Mastodon asked an interesting follow-up question to my DeepSeek R1 review: "Can it refactor?" So we are going back to this model, but this time we are not focusing on speed but rather on refactoring: can a model take a sample of code and make it more robust, more…

Conditions & Context Today we are looking at Qwen2.5 Coder 14B and testing how good it is at refactoring some Python code. Many in the local AI community swear by this model, and I wanted to see whether there is any truth to that. As a non-developer who is learning Python, I have…