Check out charkoal.dev; it has nested canvases and a few other extra features.
It is a great VSCode extension as-is, but the maintainers have abandoned it and keep refusing to make it open source. Someone is bound to make an open-source copy soon.
I don’t quite understand this alarmist argument about AI making us forget how to build software.
We are software engineers, we are used to this! The whole history of computing has been about creating higher abstractions to make it easier to build software. Who has thought recently about instruction sets, memory layouts, gotos, pointers, system calls? Some still do, but not everyone has to anymore.
From day one I had the expectation that my knowledge would become obsolete and that I needed to keep learning; that new tools would constantly replace me and my know-how for doing things manually, and that I would need to embrace and learn how to take advantage of new levels of automation.
Frankly, my experience of AI hasn't been much different from when React, Spark, Elasticsearch, AWS, or Rust came along, to pick some random examples. You just keep learning and embracing the new technologies. Yes, they automate some of what you were good at doing, and that part of you is no longer needed; that's the whole point.
I think we will be totally fine as software engineers, not because we are not being replaced, but because replacing ourselves and adapting to it is the core of what we do!
React, Spark, Elasticsearch, AWS, and Rust are deterministic tools; they do exactly what the developer specifies.
Agents like Claude Code, by contrast, are semi-independent and non-deterministic; they are more like consultants that the developer manages. The fact that they tend to generate verbose code, which overwhelms the developer's ability to review it, is also troubling.
When they generate such code I hit delete and start over. I mostly don't have a problem understanding the code that Claude writes (or whatever AI I'm using today - my company keeps changing what they want me to use). It sometimes takes some time to figure it out, but it isn't any worse than figuring out what other programmers have written.
I do however need several rounds of "review this code", and pointing out trivial details that are wrong, before it is worth me trying to figure out the big picture.
>> The whole history of computing has been about creating higher abstractions to make it easier to build software
These abstractions are understandable and predictable. There is no mental model for current LLMs (taken in their entirety, even though the parts are known); the output might as well come from a genie.
That's not true: the mental model is a distribution over completions. It's also odd to compare LLMs to genies, given that genies are magical beings that can accomplish anything.
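That mental model fits in a few lines; here's a toy sketch (the logits and vocabulary size are made up, and this is plain softmax sampling, nothing model-specific):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn next-token logits into a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, temperature=1.0):
    """Sample one token index; rerunning can give a different answer."""
    probs = softmax(logits, temperature)
    return random.choices(range(len(probs)), weights=probs)[0]

# Hypothetical logits for a 4-token vocabulary.
print(sample_next_token([2.0, 0.5, -1.0, 0.1]))
```

The sampling step is the whole "non-determinism": the distribution itself is perfectly well defined.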
Curious that both average inputs and outputs match at exactly 5.74 kg. I had intuitively assumed there would be a significant difference, but I suppose that even if a lot of energy is extracted, the mass difference will be negligible, E = mc² and all.
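A quick back-of-the-envelope check (assuming a ~2000 kcal/day energy budget, a number I'm plugging in for illustration):

```python
# Mass equivalent of a day's metabolic energy, via E = m * c^2.
KCAL_TO_J = 4184.0
C = 299_792_458.0  # speed of light, m/s

daily_energy_j = 2000 * KCAL_TO_J        # ~8.37e6 J
mass_defect_kg = daily_energy_j / C**2   # E = mc^2, rearranged for m

print(f"{mass_defect_kg:.2e} kg")
```

That comes out to roughly a tenth of a microgram per day, many orders of magnitude below the 0.01 kg resolution of the 5.74 kg figures, so the inputs and outputs matching is exactly what you'd expect.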
I'm also surprised that the vast majority of the output carbon is in the form of CO2 rather than feces.
It's all rather obvious in retrospect, it was just nice to see crystallized like this.
> I'm also surprised that the vast majority of the output carbon is in the form of CO2 rather than feces.
Not sure if you know, but in the same vein, I was shocked when I learned ~50% of the dry biomass of trees is carbon (right, we knew that), and that the carbon came from the CO2 the tree absorbed from the air. Also makes total sense in hindsight, but I still think it's so cool.
The trees literally build their mass from the air and sunlight (and yes, obviously also water and trace minerals from the roots).
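The ~50% figure roughly checks out against cellulose, the main structural polymer of wood (a sanity check with rounded atomic masses, nothing more):

```python
# Carbon mass fraction of a cellulose monomer unit, C6H10O5.
C, H, O = 12.011, 1.008, 15.999  # standard atomic masses

monomer_mass = 6 * C + 10 * H + 5 * O
carbon_fraction = 6 * C / monomer_mass

print(f"{carbon_fraction:.1%}")  # a bit over 44%
```

Dry wood comes in a bit higher than pure cellulose because of lignin, which is richer in carbon.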
That's not true, in a democracy you tend to have methods of appeal that actually work, and their threat keeps the wheels of bureaucracy greased.
This is because, in principle, everything comes down to the fundamental threat that the people can remove the current government, and the government does have full control over the unelected civil servants. If they keep ignoring appeals, they'll eventually get dethroned.
There's a nice symmetry between this and the fact that the law is ultimately guaranteed by the government's monopoly on violence. They can dethrone you too if you don't comply.
When a democracy works, there can be a very effective balance between the people's leverage over the government and the government's leverage over the people.
In an authoritarian regime the same forces are present, but they are not balanced in the same way. The people can still rise up and dethrone the ruler through violence, but that is much harder, and it is mostly offset by the government's greater capacity for violence. So they can get away with so much more.
The US elected government has no control over the unelected civil servants, as Congress over the past 150 years did everything it could to eliminate the spoils system.
Elected officials have significant influence they can bring to bear on specific decisions, general operations, and in many cases personnel decisions. That’s true at the level of individual house members and can be more true for other offices.
The rule of law and checks and balances also means these elected officeholders don’t have arbitrary control, which has a lot of upsides (and produced a professional and effective federal workforce) as well as some limits.
I swear we have a problem where we quantize to caricatures rather than recognizing tuned balance, and control theorists would probably anticipate this means things will start to swing a bit wildly.
Executive power over the civil service is an ant driving an elephant. You can say that's a good thing and intentional, but the fact of the matter is that the executive appoints a fraction of a percent of the positions, and those appointees have nominal personnel powers that they can't really use without fear of getting sued.
It's almost like positions are created and managed by law as well as leadership, and even leadership is supposed to follow law.
Fractional direct appointments are the usual case in any large organization. If you're the chief executive, you don't hire individual department workers, you might not even pick individual department management, you probably pick other "C-level" staff and have them manage management personnel most of the time.
It's more like a captain of a ship than "an ant driving an elephant." Every avenue you have to direct the ship depends on a network of knowledge and relationships supporting steering and operational systems. You don't DIY turning the tanker, you team-turn the tanker because you've learned how to work with a team.
I think this is completely wrong. For a democracy to form, substantially everyone must have bought in. That’s the upstream, not the threat of removal. Authoritarian “regimes” are constantly under threat of removal.
This is one thing many forget, mostly due to drinking our own koolaid about the inherent superiority of liberal democracy. Authoritarian regimes almost by definition have high public support, because they couldn't function at all if even a relatively small proportion of society went against them. The people who want to overthrow them are either out of the country or insignificant. Dictatorship is impossible without populism.
This doesn't make any sense to me. There are and have been numerous authoritarian regimes that lack "high public support", now and in the past. The entire idea for most authoritarian regimes is to slowly minimize the power of those who oppose them. And then, they spend a huge amount of resources looking for dissent (SD/Gestapo, Stasi, etc.) and trying to control the societal narrative.
Any government that lacks public support collapses.
Democratic governments can operate without a plurality of support for the current government, because the population generally supports and is invested in the system of government. When democratic governments fail, there is usually very little danger of violence or economic and societal instability, because there is trust in those systems. Corruption and malfeasance harms trust in the systems of governance which democracies depend upon.
Authoritarian governments depend on confidence in the government to continue functioning. The system of government isn't necessarily trusted, the workers of government aren't necessarily trusted, but the leaders are in charge and doing things. Media manipulation and effective propaganda are certainly important tools for these governments, but pointing out that they exist doesn't mean that they don't work! Propaganda totally does work, by almost all measures. Russia, China, Cuba, and Iran all have high domestic support for the government.
Authoritarian governments also tend to be very stable - people know what to expect. Democracies change periodically. The stability and familiarity are key to the trust that authoritarian governments maintain. The protests in Iran prior to the current conflict are a good example of what happens when a government fails to maintain the trust of the people - the arrival of war arguably saved the current regime, which might have fallen apart at the seams if Khamenei had died of cancer within a few months and a squabble for the leadership had broken out amid a collapsing economy.
I think that you're underestimating the power of authoritarianism. For Iran, I don't think the government was in any danger prior to the war. It was capable of exerting control through the state apparatus quite easily. And look at North Korea, you think that the people under that government are supportive? That's nonsense on stilts.
Also, that collapse you refer to can take an awful long time under authoritarian control.
I feel like this discussion is more about westerners who don't understand the actual effects of political repression. A reminder: Nicolae Ceaușescu had a 90+% approval rating just a week before he was put on trial and executed, all in less than a day. Measuring approval ratings in authoritarian regimes is an almost impossible task if you care at all about accuracy.
> The neurons serve as a biological filter: the training system translates screen pixels and ray-cast distances into electrical zaps, the living cells fire spikes, and those counts feed straight into a PyTorch decoder that maps them to Doom actions. The PPO agent, CNN encoder and entire reward loop run on ordinary silicon elsewhere. Cole’s ablation modes make the split testable, set decoder output to random or zero and the game still plays. The CL1 hardware interface works exactly as advertised. What remains unproven is whether 200,000 human neurons can ever carry the policy instead of just riding along.
Yeah… That’s quite the smoking gun.
So it’s quite likely then that the neurons are just acting as a bad conductor. The electrodes read a noisy version of the signals that go into the neurons, and they just train a CNN with PPO to remove that noise, get the proper inputs, and learn a half-decent policy for playing the game.
If this worked as advertised, they shouldn't need a CNN decoder at all! The raw neuron readout should be interpretable as game inputs directly.
Besides, they are not streaming the video into the neurons at all, just the horizontal position of the enemies and the distance, or some variant of that. In that sense it's barely more than Pong, isn't it? If enemy left, rotate left; if enemy right, rotate right; if enemy center, shoot. At a stretch: if enemy far, go forward; if enemy close, go back. The rest of the time just move randomly. Indeed, the behavior in the video is essentially that…
While we are at it, the encoded input signal itself is already pretty close to a decent policy if mapped directly to the keys (how much enemy left, center, right), even without any CNN, PPO or neurons.
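To make the point concrete, here's that heuristic as code (the action names and the boolean enemy encoding are my own assumptions, not the project's actual interface):

```python
import random

def heuristic_policy(enemy_left, enemy_center, enemy_right, enemy_far=None):
    """Map the encoded enemy position straight to game actions,
    with no CNN, PPO, or neurons involved."""
    if enemy_center:
        return "shoot"
    if enemy_left:
        return "rotate_left"
    if enemy_right:
        return "rotate_right"
    if enemy_far is True:    # enemy visible but distant
        return "forward"
    if enemy_far is False:   # enemy uncomfortably close
        return "backward"
    # No enemy visible: wander.
    return random.choice(["forward", "rotate_left", "rotate_right"])

print(heuristic_policy(False, True, False))  # shoot
```

If something this trivial already plays at the level shown in the video, the burden of proof is on showing the neurons add anything beyond it.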
EDIT: It seems like the readme does address these concerns, and the described setup differs significantly from the description in the critical blogpost. Still not entirely convincing to me, since a lot of weights are being trained in silicon around the neurons, but it sounds better. I don't have time right now to look deeper into it. They outline some interesting details though.
No, this is precisely why there are ablations. The footage you see in the video was taken using a 0-bias full linear readout decoder, meaning that the action selected is a linear function of the output spikes from the CL1; the CL1 is doing the learning. There is a noticeable difference when using the ablation (both random and 0 spikes result in zero learning) versus actual CL1 spikes.
Isn't the encoder/PPO doing all the learning?
This question largely assumes that the cells are static, which is incorrect; it is not a memoryless feed-X-in-get-Y machine. Both the policy and the cells are dynamical systems; biological neurons have internal state (membrane potential, synaptic weights, adaptation currents). The same stimulation delivered at different points in training will produce different spike patterns, because the neurons have been conditioned by prior feedback. During testing, we froze the encoder weights and still observed improvements in the reward.
How is DOOM converted to electrical signals?
We train an encoder in our PPO policy that dictates the stimulation pattern (frequency, amplitude, pulses, and even which channels to stimulate). Because the CL1 spikes are non-differentiable, the encoder is trained through PPO policy gradients using the log-likelihood trick (REINFORCE-style), i.e., by including the encoder’s sampled stimulation log-probs in the PPO objective rather than backpropagating through spikes.
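A minimal sketch of that score-function trick in PyTorch, with the CL1 stubbed out as a non-differentiable black box (all the shapes, module names, and the Poisson spike stub are my assumptions, not the actual code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N_OBS, N_CHANNELS, N_ACTIONS = 8, 16, 5

encoder = nn.Linear(N_OBS, N_CHANNELS)      # chooses stimulation per channel
decoder = nn.Linear(N_CHANNELS, N_ACTIONS)  # maps spikes to game actions

def cl1_stub(stim):
    # Stand-in for the biological system: spike counts are
    # non-differentiable, so no gradient can flow through here.
    with torch.no_grad():
        return torch.poisson(torch.relu(stim) * 3.0)

obs = torch.randn(N_OBS)  # hypothetical encoded game state

# Sample which channels to stimulate; keep the log-prob for the gradient.
stim_dist = torch.distributions.Bernoulli(logits=encoder(obs))
stim = stim_dist.sample()
stim_logp = stim_dist.log_prob(stim).sum()

spikes = cl1_stub(stim)  # gradient path stops at this point
action_dist = torch.distributions.Categorical(logits=decoder(spikes))
action = action_dist.sample()
action_logp = action_dist.log_prob(action)

advantage = 1.0  # placeholder for the PPO advantage estimate

# REINFORCE-style surrogate: the encoder's sampled stimulation log-prob
# enters the objective directly, instead of backpropagating through spikes.
loss = -(stim_logp + action_logp) * advantage
loss.backward()
print(encoder.weight.grad is not None)  # encoder still receives a gradient
```

The point is that the encoder learns even though nothing differentiable connects it to the reward; its sampled log-probability carries the gradient signal.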
I believe you are looking at GPT 5.4 Pro. It's confusing in the context of subscription plan names, Gemini naming and such. But they've had the Pro version of the GPT 5 models (and I believe o3 and o1 too) for a while.
It's the one you have access to with the top ~$200 subscription, and it's available through the API for a MUCH higher price ($30/$180 per 1M input/output tokens, versus $2.5/$15 for regular 5.4), but the performance improvement is marginal.
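To put those per-token prices in perspective (the model labels and the token counts in this example are just for illustration):

```python
# Prices quoted above, USD per 1M tokens, as (input, output).
PRICES = {"5.4": (2.5, 15.0), "5.4 Pro": (30.0, 180.0)}

def api_cost(model, input_tokens, output_tokens):
    """Cost in USD for a single call at the quoted rates."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1e6

# A hypothetical call: 50k tokens in, 5k tokens out.
for model in PRICES:
    print(model, round(api_cost(model, 50_000, 5_000), 2))
```

Same call, twelve times the price, which is why it only makes sense to reach for it on the genuinely hard problems.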
Not sure what it is exactly, I assume it's probably the non-quantized version of the model or something like that.
>It's the one you have access to with the top ~$200 subscription and it's available through the API for a MUCH higher price ($2.5/$15 vs $30/$180 for 5.4 per 1M tokens), but the performance improvement is marginal.
The performance improvement isn't marginal if you're doing something particularly novel/difficult.
From what I've read online, it's not necessarily an unquantized version; it seems to go through longer reasoning traces and runs multiple reasoning traces in parallel. Probably overkill for most tasks.
Not at all: one of the key features of that design system was that the boxes had no borders; they were differentiated by their flat background fill colors instead. There are borders galore here.
It’s a great daily snack, the constraints of Flash Fiction yield quite lean and punchy stuff.