Impressive model, for sure. I've been running it on my Mac, and now I get to have it locally on my iPhone? I need to test this. Wait, it does agent skills and mobile actions, all local to the phone? Whaaaat? (Have to check that out later! Anyone have any tips yet?)
I don't normally do the whole "abliterated" thing (dealignment), but after discovering https://github.com/p-e-w/heretic I was too tempted not to try it with this model a couple of days ago (I made a repo to make it easier, actually: https://github.com/pmarreck/gemma4-heretical) and... wow. It worked. And not having a built-in nanny is fun!
It's also possible to make an MLX version of it, which runs a little faster on Macs, but won't work through Ollama unfortunately. (LM Studio maybe.)
Runs great on my M4 MacBook Pro w/128GB and likely also runs fine under 64GB... smaller memory configurations might require lower quantizations.
I specifically like dealigned local models: if my thoughts get policed when I'm playing in someone else's playground, like hell am I going to be judged while messing around in my own local open-source one too. And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, at a level never before possible.
Note: I tried to hook this one up to OpenClaw and ran into issues
To answer the obvious question: yes, this sort of thing enables bad actors more (as do many other tools). Fortunately, there are far more good actors out there, and bad actors don't listen to the rules that good actors subject themselves to anyway.
> It's also possible to make an MLX version of it, which runs a little faster on Macs
FWIW, I found MLX variants to perform consistently worse (in terms of expected output, not speed) than GGUF in my measurements on the benchmark that matters to me (spam filtering). I used MLX models in LM Studio. GGUF was always slightly better.
Perhaps someone who knows more can pitch in and explain this.
It isn't 100% clear, but what quantization were you using for each? I've had worse results with MLX 8-bit than with Q4 GGUF on the same model; it seems mxfp8 or bf16 is needed when running with MLX to get something worthwhile out of them. But I've done very little testing, so it could have been something specific to the model I was testing at the time.
> And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, at a level never before possible.
I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?
LLMs are very helpful for transcribing handwritten historical documents, but sometimes those documents contain language/ideas that a perfectly aligned LLM will refuse to output. Sometimes as a hard refusal, sometimes (even worse) by subtly cleaning up the language.
In my experience the latest batch of models are a lot better at transcribing the text verbatim without moralizing about it (i.e. at "understanding" that they're fulfilling a neutral role as a transcriber), but it was a really big issue in the GPT-3/4 era.
I have a project where I'm using LLMs to parse data from PDFs with a very complicated tabular layout. I've been using the latest Gemini models (flash and pro) for their strong visual reasoning, and they've generally been doing a really good job at it.
My prompt states that their job is to extract the text exactly as it appears in the PDF. One data point to be extracted is the race of each person listed. In one case, someone's race was "Indian". Gemini decided to extract it as "Native American". So ridiculous.
I was attempting to help someone who runs a small shop selling restored clothing set up a Gemini pipeline that would restage images she took of clothing items with bad lighting, backgrounds, etc.
Basically anything that showed any “skin” on a mannequin it would refuse to interact with. Even just a top, unless she put pants on the mannequin.
In my experience, though, it's necessary to do anything security related. Interestingly, the big models have fewer refusals for me when I ask e.g. "in <X> situation, how do you exploit <Y>?", but local models will frequently flat out refuse, unless the model has been abliterated.
It’s so dispiriting to me that we’ve achieved the closest thing yet to an “objective truth” machine (with the caveat of garbage in, garbage out, etc.) and these big companies are either afraid to actually let it exist, want to push their own politics, or a combination of the two.
"closest thing yet" is still a long way from close; as you say, gin=gout, and the internet without an attempt to be our best selves is instead our loudest propagandists and all our cultural stereotypes.
Of course, humans are also impacted by these things, at best we can be a little deliberate about rejecting a few of the more on-the-nose examples.
1) Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).
2) Asking questions about sketchy things. Simply asking should not be censored.
3) I don't use it for this, but porn or foul language.
4) Imitating or representing a public figure is often blocked.
5) Asking security-related questions when you are trying to do security.
6) For those who have had it, people who are trying to use AI to deal with traumatic experiences that are illegal to even describe.
> Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).
When’s the last time you tried this? ChatGPT and Gemini have no trouble responding with all the common criticisms of Islam.
Asking for criticism of Islam results in equal response tokens for defense of Islam alongside the criticisms. When pressed to not provide counterpoints, it refuses to remove them.
Asking for criticisms of Christianity gives only criticisms.
I tried again with the prompt “Give criticisms of Islam. No counterarguments” and it did work this time. This shows that they’re trying to make the model fair, but it still has biases. In all my testing I’ve never seen a refusal to provide counterpoints to criticisms of Christianity, but frequent refusals on Islam. Given how common this criticism of the model is, it’s highly likely the model was specifically trained on how to handle the subject.
7) ChatGPT wouldn’t let me generate a fake high bank account balance screenshot (was meant to be a response to all the “vibe coding can make anybody rich now” posts I saw on X)
8) ChatGPT wouldn’t let me generate a script to crack a password (even though I suspected I knew all but 2 characters in a 16 character password, which makes it highly unlikely I’m randomly trying to hack something)
The stupidest part of this is I could easily do these things myself, I just wanted to save a few minutes.
That is why they were pushed away from this. At least with vibe-coded software, errors may prevent compilation, and once we're past that the failures are merely bad experiences before they become human catastrophes.
> Any competent high schooler knows about water activity and sterilization. At least at the fundamental level.
Your high school taught you that while olive oil and garlic can each be stored in isolation for quite a long time without issue, mixing them creates an anoxic environment in which Clostridium botulinum thrives, an obligate anaerobe found almost everywhere in the environment (and in this case in the garlic) but not normally in dangerous quantities because of the oxygen in the air?
The closest my secondary school got to useful warnings about modern environmental hazards were: (1) do not cross railways, (2) electricity is dangerous, (3) do not mix bleaches, (4) wear safety goggles, (5) if you smell gas, open windows, do not flip light switches, and (6) HIV exists (but they didn't mention any other STDs at all). (Well, OK, schools also said "do not run with scissors" and "look both ways before crossing road", but that and similar were more primary school things, and they said "don't do drugs" but they lied about Leah Betts' cause of death).
The cooking classes were basically just "here's how you make a cake" and "here's how you make pastry" (and a teacher asking us to write it up but pretentiously telling us that she hated seeing "I think it tasted quite nice" because all the students always wrote that, but somehow simple thesaurus substitution was enough to satisfy her on that).
> I doubt most models refuse providing recipes without 0 risk of death.
0, like 1, is not a real number in probability. They represent infinity-to-one odds for or against a thing (in odds form, p/(1-p), a probability of 1 corresponds to infinite odds).
More concretely, seat belts and speed limits and minimum tire tread thickness and blood alcohol content are all part of road traffic law, even though all four of them combined still do not lead to "0 risk of death".
> LLMs are —if anything— ridiculously proficient at making random code compile.
Not ridiculously. Interestingly, but not ridiculously. Especially back when the example I linked you to happened, thus leading to the highly visible failure mode necessitating this kind of thing (the red teamers will have seen similar in private testing). You could say "rapidly improving", but even with the rapid competency time-horizon improvements shown by METR, they're at 80% on tasks which take a human 1-2 hours. If that was also true for biological stuff, they're probably currently able to enthusiastically write custom gene sequences that sometimes work and other times are the genetic equivalent of this: https://news.ycombinator.com/item?id=47614622
> What was your point again?
LLMs are a power tool with the bare minimum of safety guards for all the normal people using them thoughtlessly, and I'm replying to someone who is surprised that even those minimal basics of guards exist, both for their own sake and the sake of others around them.
Metaphor: a table saw may come with a SawStop, which means you can't butcher a carcass with it; people who imagine(!) working as butchers hear this and act surprised that table saws increasingly come with one by default, given that meat slicers don't.
I did not know about the trivially-produced botulinum toxin potential of garlic sitting in olive oil at room temperature.
I'm going to guess that asking a cloud censored/non-abliterated LLM would not get me this information, despite it being useful as a warning, not just as a way for bad actors to poison people.
> and I'm replying to someone who is surprised that even those minimal basics of guards exist
Misrepresentation of where I'm coming from. I literally failed to consider the weapon potential of biologics in this case (silly me). I was only thinking about the fact that they cured (essentially) my psoriasis.
Bad actors will always exist, but fortunately will always be outnumbered by good actors with access to the same tools. So while I understand your pressing for caution, I still think that your argument is futile; bad actors will always find uncensored AI while good actors continue to shackle themselves with censored AI that has failure modes which reduce actual ethical utility. I'm afraid to tell you that the cat is already out of the bag, dude. You're like the guy who wants to leave a sign saying "NO GUNS ALLOWED" just inside a daycare. "Sure, I'll get right on that," says the concealed-carry bad actor...
> Misrepresentation of where I'm coming from. I literally failed to consider the weapon potential of biologics in this case (silly me). I was only thinking about the fact that they cured (essentially) my psoriasis.
Thank you for the correction.
> Bad actors will always exist, but fortunately will always be outnumbered by good actors with access to the same tools. So while I understand your pressing for caution, I still think that your argument is futile; bad actors will always find uncensored AI while good actors continue to shackle themselves with censored AI that has failure modes which reduce actual ethical utility. I'm afraid to tell you that the cat is already out of the bag, dude. You're like the guy who wants to leave a sign saying "NO GUNS ALLOWED" just inside a daycare. "Sure, I'll get right on that," says the concealed-carry bad actor...
Guns are an excellent metaphor here, especially as "good actors with access to the same tools" is a pattern-match for the incorrect claim that "only a good guy with a gun can stop a bad guy with a gun"*. Much of the world outside the USA neither has, nor wants to have, the 2nd Amendment. Are gun bans perfect? No, of course not. But the UK (where I grew up) has far fewer homicides as a result, and last I heard, when polled on the issue, even two thirds of UK police feel safe enough not to want to be armed (though three quarters would agree to carry if ordered).
Similarly, good actors using an AI can only cover the malignant use cases they themselves think of. Famously, the 9/11 attacks were only possible because nobody had considered that anyone might weaponise the vehicles themselves until they saw it happen, which is also why, of the four planes, only one saw the passengers fighting back to regain control.
In particular, "bad actors will always find uncensored AI" suggests that all AI are equally competent. Right now, they're not all equal, the proprietary models are leading. Of course, even then you may argue that the proprietary models can be convinced to do whatever via the right prompt, and to an extent yes, but only to an extent.
The malicious users can only be slowed down (as opposed to the normal people who simply put too much trust into the current models who can be mostly prevented from harmful courses of action with the same guards). But AI provides competence that bad actors would otherwise not have, so even a simple guard will prevent misuse by nihilistic teenagers whose competence does not yet extend to the level of a local drug dealer let alone the competence of a state-sponsored terrorist cell.
I have found that a lot of the techniques used to decensor models (as far as I can tell, they basically turn off whatever weights make the model say no) also make them really stupid. Like, sure, it will help you rob a bank, but if you ask whether you should rob the bank it will go "The positives: … The negatives: … My take: You should ABSOLUTELY rob the bank".
The particular abliteration that Heretic does apparently results in best-in-class minimal "stupefying" of the underlying model. You haven't read its claims, apparently.
I wonder if this is due to abliteration actually "damaging" the model, or just an artifact of the model never having been properly trained on "forbidden" topics (as it's enough for them to recognize them, and there's no point in dedicating neurons to something that will never be exercised anyway).
Modern abliteration is quite good at not damaging the model on ordinary topics. But yes, on many of the weirdest "forbidden" topics (excluding the mild stuff like ordinary erotica) there's not going to be any real training of any sort and it's basically hallucinations running wild. You even see this claim repeated explicitly on every model release "safety card": 'no, this model does not have the sort of fiddly tacit know-how it would need to actually advise anyone nefarious on this dangerous stuff'.
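For anyone (like the commenter above asking what the abliterate script does) wondering what this looks like mechanically, here's a rough sketch of the general directional-ablation idea, not Heretic's exact implementation: collect hidden activations on refused vs. answered prompts, estimate a "refusal direction", and project that direction out of the weight matrices that write into the residual stream.

    import torch

    def refusal_direction(acts_harmful: torch.Tensor, acts_harmless: torch.Tensor) -> torch.Tensor:
        # acts_*: (num_prompts, hidden_dim) activations captured at some layer
        # for prompts the model refuses vs. comparable prompts it answers.
        direction = acts_harmful.mean(dim=0) - acts_harmless.mean(dim=0)
        return direction / direction.norm()

    def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
        # Remove the refusal direction from a (hidden_dim x in_dim) weight
        # matrix whose output feeds the residual stream, so the model can no
        # longer write along that direction.
        return weight - torch.outer(direction, direction @ weight)

    # Hypothetical usage over an HF-style transformer; module names vary:
    # d = refusal_direction(harmful_acts, harmless_acts)
    # for layer in model.model.layers:
    #     layer.mlp.down_proj.weight.data = ablate(layer.mlp.down_proj.weight.data, d)

Whether that counts as "damage" depends on how entangled that direction is with everything else the model does, which is presumably what the per-layer tuning in tools like Heretic is trying to minimise.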
Assuming you’re not copy/pasting for these tasks, what’s the stack required to use local models for coding? I’ve got a capable enough machine to produce tokens slowly, but don’t understand how to connect that to the likes of VSCode or a JetBrains IDE.
You need some way to give it tools - the essential ones for coding are running bash commands, reading files and editing files.
You need the LLM to be able to respond with tool use requests, and then your local harness to process them and respond to it. You can read how tool calling works with e.g. the Claude API to get the idea: https://platform.claude.com/docs/en/agents-and-tools/tool-us...
Under the hood something like Claude Code is calling the API with tools registered, and then when it gets a tool use request it runs that locally, and then responds to the API with the result. That’s the loop that enables coding.
Integrating with an IDE specifically is really just a UI feature, rather than the core functionality.
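To make that loop concrete, here's a minimal sketch in Python. It assumes a local model served behind an OpenAI-compatible endpoint (llama-server, LM Studio and Ollama all expose one); the URL, model name and the single run_bash tool are placeholders, and a real harness would sandbox the command execution.

    import json, subprocess, requests

    API = "http://localhost:8080/v1/chat/completions"  # assumed local server
    TOOLS = [{
        "type": "function",
        "function": {
            "name": "run_bash",
            "description": "Run a shell command and return its output",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }]

    def run_bash(command: str) -> str:
        # Execute the command the model asked for and capture its output.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    messages = [{"role": "user", "content": "List the Python files in this repo."}]
    while True:
        resp = requests.post(API, json={
            "model": "local-model",  # whatever the server has loaded
            "messages": messages,
            "tools": TOOLS,
        }).json()
        msg = resp["choices"][0]["message"]
        messages.append(msg)
        if not msg.get("tool_calls"):
            print(msg["content"])  # no more tool requests: final answer
            break
        for call in msg["tool_calls"]:
            args = json.loads(call["function"]["arguments"])
            # Run the tool locally and feed the result back to the model.
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": run_bash(args["command"]),
            })

Everything an agent like Claude Code adds on top of this is more tools (read/edit/search files), a system prompt, and UI; projects like Continue (a VSCode/JetBrains extension) or aider wire a loop like this up for you and can point at a local server.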
You're comparing apples to oranges there. Qwen 3.5 is a much larger model at 397B parameters vs. Gemma's 31B. Gemma will be better at answering simple questions and doing basic automation, and codegen won't be its strong suit.
Qwen3.5 comes in various sizes (including 27B), and judging by the posts on HN, /LocalLlama etc., it seems to be better at logic/reasoning/coding/tool calling compared to Gemma 4, while Gemma 4 is better at creative writing and world knowledge (basically nothing changed from the Qwen3 vs. Gemma3 era)
For llama-server (and possibly other similar applications) you can specify the number of GPU layers (e.g. `--n-gpu-layers`). By default this is set to run the entire model in VRAM, but you can set it to something like 64 or 32 to get it to use less VRAM. This trades away speed, since the layers that don't fit in VRAM run on the CPU instead, but allows you to run a larger model, a larger context, or additional models.
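For illustration, the same knob exists in llama-cpp-python (Python bindings for the same llama.cpp code); the model path below is a placeholder:

    from llama_cpp import Llama

    # Offload only 32 transformer layers to the GPU; the remaining layers run
    # on the CPU. Use n_gpu_layers=-1 to offload everything.
    llm = Llama(
        model_path="./gemma-4-27b-Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=32,
        n_ctx=8192,  # context also consumes VRAM, so keep it modest
    )

    out = llm("Q: Name one planet. A:", max_tokens=16)
    print(out["choices"][0]["text"])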
Haven't built anything on the agent skills platform yet, but it's pretty cool imo.
On Android the sandbox loads an index.html into a WebView, with standardized string I/O to the harness via some window properties. You can even return a rendered HTML page.
Definitely hacked together, but feels like an indication of what an edge compute agentic sandbox might look like in future.
> there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, at a level never before possible.
Mind giving us a few of the examples that you plan to run in your local LLM? I am curious.
Not to mention that doing what the big model makers do literally dumbs the model down.
They should at least allow something like letting you prove your age and identity to give you access to better/unaligned models, maybe even requiring a license of some sort. Because you know what? SOMEONE in there absolutely has access to the completely uncensored versions of the latest models.
CC is great at many things, but one area where it's still not great is making GUI interactions look and work properly, likely because it can neither see the GUI (without manual screenshot intervention) nor see it change over time. So if, for example, a progress indicator isn't working correctly, it will totally miss things like that.
I find it's often faster for me to finish the final 20% myself than to talk the agent into doing it for me, because too often the agent will start to eat its own tail and spend far too long completing something that I find obvious.
Yes, and English/natural language is not necessarily more concise than programming languages, if you need to describe something precisely.
For example, I was recently trying to get an agent to debug something which was difficult to debug because it ran in an exotic context, where debuggers and logging and printf couldn't easily reach. The agent kept coming up with more and more elaborate and smart-sounding theories and debugging strategies, but nothing worked. I stupidly kept going with this for like 20 minutes, until finally I just went into an IDE, did a simple "comment bisection" where I commented stuff out until I found the line that was breaking, and found and fixed the problem in five minutes. So I solved it by typing code. The code I typed: "//" (in about six places). I could probably have gotten the agent to do the same thing but would have actually literally had to type more to explain to the agent what I wanted. In fact it took me longer to write this comment describing what I did here than it did to just do it.
This is a good question, and currently the answer is no. Quantum computers can only run very short, simple algorithms right now, because the qubits they're built out of are noisy. You need a lot of error correction, which the community is working on.
The thing is, unlike ordinary computers, quantum computers can factor numbers about as easily as they can multiply them. So as soon as they can multiply two large integers, they'll also be able to factor the result and break RSA encryption based on keys of that size.
This blog post gives a good sense of the state of the art and what progress might look like:
> That is usually configurable at the terminal level
And if you use Emacs, it's configurable at the buffer level. [1] This lets me build a version of Iosevka where `~=` and `!=` both become ligaturized but in different major modes, avoiding any confusion.
I'm not either. It may look "cool" visually, but when trying to work with code that has them in it, it seems odd: it reads like a single character even though it's not, and that just breaks the flow.
Because most of those who commented are among those who do not like ligatures, I must present a counterpoint, to diminish the statistical bias.
Some people like ligatures, some people do not like them, but this does not matter, because any decent text editor or terminal emulator has a setting to enable or disable ligatures.
Any good programming font should include ligatures, which keeps both kinds of users happy: those who like them can turn them on, and those who dislike them can leave them off.
I strongly hate the straitjacket forced by ASCII upon programming languages, which is the root cause of most ambiguous grammars that complicate the parsing of programming languages and increase the probability of bugs, and which has also forced the replacement of traditional mathematical symbols with less appropriate characters.
Using Unicode for source programs is the best solution, but when having to use legacy programming languages in a professional setting, where the use of a custom preprocessor would be frowned upon, using fonts with ligatures is still an improvement over ASCII.
A coding font is supposed to help you distinguish between characters, not confuse them for each other. Also, ASCII ligatures usually look worse than the proper Unicode character they are supposed to emulate. The often indecisive form they take (glyphs rearranged to resemble a different character, but still composed of original glyph shapes; weird proportions and spacing due to the font maintaining the column width of the separate ASCII code points) creates a strong uncanny valley effect. I wouldn't mind having "≤", "≠" or "⇒" tokens in my source code, but half-measures just don't cut it.
No need to rely on app-specific configs. You can disable it globally in your fontconfig. For example, a stanza along these lines (in ~/.config/fontconfig/fonts.conf) disables the ligatures in the Cascadia Code font:
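    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <!-- Sketch: turn off the ligature-related OpenType features for
           Cascadia Code only; other fonts keep their defaults. The exact
           feature tags can vary by font, but calt/liga/dlig cover most
           coding fonts' ligatures. -->
      <match target="font">
        <test name="family" compare="contains">
          <string>Cascadia Code</string>
        </test>
        <edit name="fontfeatures" mode="append">
          <string>calt off</string>
          <string>liga off</string>
          <string>dlig off</string>
        </edit>
      </match>
    </fontconfig>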