Logseq is probably the closest we had, but it doesn’t quite reach Obsidian’s polish, and it is now moving off of plain text files (somewhat sadly, but understandably).
I do this too. Relatively small changes, atomic commits with extensive reasoning in the message (keeps important context around). This is a best practice anyway, but it used to take an excruciating amount of effort. Now it’s easy!
Except that I’m still struggling to get the LLM to understand the audience and context of its utterances. Very often, after a correction, it will focus a lot on the correction itself, making for weird-sounding/confusing statements in commit messages and comments.
> Very often, after a correction, it will focus a lot on the correction itself, making for weird-sounding/confusing statements in commit messages and comments.
I've experienced that too. Usually when I request a correction, I add something like "Include only production-level comments (not changes)". Recently I also added a special instruction for this to CLAUDE.md.
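In case it helps anyone, here's a rough sketch of what such a CLAUDE.md instruction could look like. The exact wording is mine, not a documented convention, so adapt it to taste:

```markdown
## Commit messages and code comments

- Write comments and commit messages for a reader who never saw this
  conversation: describe the code as it is now, not the corrections
  that led to it.
- Never reference the chat itself ("as requested", "fixed per
  feedback", "removed the earlier approach") in committed text.
```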
I’m not exactly a user with advanced needs, but I have a server with netcup and have never had issues. I also know a couple of people who never had any issues with them. I know them as cheap and solid; I don’t think I’ve ever even heard of a bad experience.
Bummer they failed so hard at your deletion request.
I disagree. At least in my brief test drive, when used with Claude, the performance was on par with Cursor, except that the Agent could actually interact with the terminal properly (Cursor is comically bad at this for some reason).
When the (generous!) Claude credits dry up, however, functionality stops. Gemini is as useless in Antigravity as everywhere else.
> The narrative from AI companies hasn’t really changed, but the reaction has. The same claims get repeated so often that they start to feel like baseline reality, and people begin to assume the models are far more capable than they actually are.
This has been the case for people who buy into the hype and don’t actually use the products, but I’m pretty sure people who do use them are pretty disillusioned by all the claims. The only somewhat reliable method is to test these things for your own use case.
That said: I always expected the tradeoff of Spark to be accuracy vs. speed. That it’s still significantly faster at the same accuracy is wild. I never expected that.
I believe a lot of the speed-up is due to a new chip they use [1]. Since the speed-up didn’t come from reducing the number of operations, that’s likely why the accuracy has changed so little.
The people I know who use them the most also seem the most likely to buy into the hype. The coworker who no longer answers questions by talking about code, but instead by talking about which skills are the best, is the same one who posts all the hype.
Sure, several of our customers that distribute applications with a machine learning/AI component also need to distribute their models. They can use our OCI registry to distribute large images with huge layers. We specifically reworked our registry implementation to store in-transit blobs on disk instead of in memory, ensuring the application doesn’t run out of memory [1].
Is registry OOM protection the only advantage your registry has for large layers? Robotics has a need for Docker tooling that handles large layers/images gracefully. Even if you've done the "right" thing and sideloaded your ML models with some other management system, CUDA layers and the like are still gigantic.
Edit: looking at this, it's very adjacent to some problems with robotics deployments. Fleet management, edge deployment, key management. Neat.
I'd be curious about the multi-artifact support. Can I declare a manifest that binds together multiple services (or a service and an ML model)? Do you support ML models as an artifact?
I feel you, but a huge percentage of recently funded companies are in the AI space. Software distribution for them is even more complex due to all the moving parts, and we want to make sure these companies know that our solution is a great fit for them.