Mistral seems to focus on niche LLM tooling that is somehow very needed in certain cases. Can't forget their OCR and multimodal embedding models!
The biggest drawback is no Thunderbolt. The biggest sell for Macs right now is the ability to daisy chain them with the new RDMA update. A used M1 Mac Mini is more valuable than this.
Listen I bought six Retina displays, I don't also have money for a new Mac. Of course I'm going to complain about the lack of Thunderbolt daisy chaining after my frivolous expenses come home to roost.
The Neo is basically the Mac flavor of an iPad meant for schoolchildren. It's a Chromebook competitor, not meant for whatever kooky AI shit you're doing at home.
Exo-Labs is an open source project that allows this too (pipeline parallelism, I mean, not the latter). It's device agnostic, meaning you can daisy-chain anything you have that has memory, and the implementation will intelligently shard model layers across the devices. It's slow, but it scales linearly with concurrent requests.
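The core idea is simple: split the model's layers into contiguous ranges and hand each range to a device, sized by how much memory that device has. A minimal sketch (not Exo's actual code; the function name and the memory-proportional heuristic are my own assumptions):

```python
# Sketch of memory-proportional layer sharding for pipeline parallelism.
# Illustrative only; a real implementation would also weigh compute speed,
# KV-cache size, interconnect bandwidth, etc.

def shard_layers(num_layers, device_memory_gb):
    """Assign contiguous [start, end) layer ranges to devices,
    proportional to each device's memory."""
    total_mem = sum(device_memory_gb)
    shards, start = [], 0
    for i, mem in enumerate(device_memory_gb):
        if i == len(device_memory_gb) - 1:
            end = num_layers  # last device takes whatever remains
        else:
            end = start + round(num_layers * mem / total_mem)
        shards.append((start, end))
        start = end
    return shards

# e.g. an 80-layer model across a 16 GB laptop, a 24 GB Mac mini, an 8 GB phone
print(shard_layers(80, [16, 24, 8]))  # -> [(0, 27), (27, 67), (67, 80)]
```

Each device then runs only its own layer range, passing activations to the next device in the chain, which is why a single request is slow (the chain is sequential) but throughput scales with concurrent requests (different requests occupy different pipeline stages at once).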
Last year, o3 high scored 88% on ARC-AGI-1 at more than $4,000/task. This model, in its X high configuration, scores 90.5% at just $11.64 per task.
General intelligence has gotten ridiculously less expensive. I don't know if it's because of compute and energy abundance, or attention mechanisms improving in efficiency, or both, but we have to acknowledge the bigger picture and the relative prices.
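The back-of-the-envelope arithmetic from those two data points (and since the $4,000 figure is "more than", this is a floor on the improvement):

```python
# Cost-per-task comparison from the ARC-AGI numbers above.
o3_cost = 4000.00   # $/task, o3 high on ARC-AGI-1 last year (lower bound)
new_cost = 11.64    # $/task, the new model's X high configuration
print(f"~{o3_cost / new_cost:.0f}x cheaper per task, at a higher score")
```

That's roughly a 344x price drop in about a year, for a slightly better result.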
Sure, but the reason I'm confused by the pricing is that the pricing doesn't exist in a vacuum.
Pro barely performs better than Thinking in OpenAI's published numbers, but comes at ~10x the price with an explicit disclaimer that it's slow on the order of minutes.
If the published performance numbers are accurate, it seems like it'd be incredibly difficult to justify the premium.
At least on the surface level, it looks like it exists mostly to juice benchmark claims.
It could be using the same trick early Grok versions used: boot 10 agents that work on the problem in parallel, then take a consensus on the answer. That would explain both the price and the latency.
Essentially a newbie trick that works really well but isn't efficient, yet still looks like an amazing breakthrough.
(if someone knows the actual implementation I'm curious)
If Arabic had had to cater to the phonemes of Afro-Asiatic dialects, the script would have been even messier. I'm a speaker of one, and my dialect is heavily influenced by the indigenous Tamazight language. I think this is why many in the Amazigh community were, and some still are, disappointed with the neo-Tifinagh script. While it carries symbolic weight, it doesn't offer the practical readability, phonemic clarity, and tech accessibility of a modern script that Tamazight deserves. The Latin script, ironically, fits Tamazight much more naturally.