I never understood how people use compiled languages for video games let alone simple GUIs. Even though I'm now competent in a few, and I have LLMs at my disposal, I fall back to electron or React Native just because it's such a pain in the ass to iterate with anything static.
Native devs: what are your go-to quality of life improvements?
Having a visual builder tool in an IDE like Delphi or Visual Basic or any of the others.
They ship with an existing library of components, you drag and drop them onto a blank canvas, move them around, live preview how they’ll change at different screen sizes, etc… then switch to the code to wire up all the event handlers etc.
All the iteration on design happens before you start compiling, let alone running.
What does compilation have to do with iteration speed? There are a lot of ways to get a feedback loop similar to what you'd get in something like React, like separating out your core gameplay loop into its own compilation unit / DLL and reloading it on any changes inside your application.
NPM is absurdly complex in comparison, it's just neatly abstracted. Maybe somebody will write a cross-platform reactive layer which can compile both natively and to the web?
Video games generally have various editors and that is where the major iteration happens. It's not like HTML where you type some tags and refresh. Instead you have/make editors to design your levels, UIs, characters.
We often use dynamic/scripting languages to improve iteration on gameplay code, even if a lot of the fundamental underlying code is native. And we add dev-time hot reloading wherever we can, so when you change a texture it reloads ≈immediately without needing to so much as restart the level. We exile as much as we can to tables and other structured data formats which can easily be tweaked and verified by non-coders, so we're not a bottleneck for the game designers and artists who want to tweak things, and we make that stuff hot-reloadable if possible as well.
We also often have in-house build server farms full of testing code, because it's such a pain in the ass to iterate with anything dynamic. After all, games are huge, and sufficient testing to make sure all your uncompiled, unanalyzed, typecheckless code works is basically impossible - things are constantly breaking as they're committed during active development, and a decent amount of engineering work is frequently dedicated to such simple tasks as triaging, collecting, and assigning bugs and crash reports so that whoever broke something knows they need to fix it, as well as allowing devs and designers to work from previous "known good" commits and builds so they aren't blocked - which means internal QA helping identify what's actually "known good", hosting and distributing multiple build versions internally so people don't have to rebuild the universe themselves (because that's several hours of build time), etc.
Some crazy people invest in hot-reloadable native code. There's all kinds of limits on what kinds of changes you can make in such a scenario, but it's entirely possible to build a toolchain where you save a .cpp file, and your build tooling automatically kicks off a rebuild of the affected module(s), triggering a hot reload of the appropriate .dll, causing your new behavior to be picked up without restarting your game process. Which probably means it'll immediately crash due to a null pointer dereference or somesuch because some new initialization code was never triggered by the hot reloading, but hey, at least it theoretically works!
And, of course, nothing is stopping you from creating isolated sandboxes/examples/test cases where you skip all the menuing, the compiling of unrelated modules, etc., and iterating in that faster context instead of the cumbersome monolith for most of your work.
Not game dev related, but I program in both Go and Python, and there really is no difference in my feedback loop / iteration because Go builds are so fast and cache unchanged parts.
re, iteration: Have you encountered ImGui [0]? It's basically standard when prototyping any sort of graphical application.
re, GUIs in statically typed languages: As you might expect, folks typically use a library. See Unreal Engine, raylib, godot, qt, etc. Sans that, any sort of 2D graphics library can get the job done with a little work.
You might also take a look at SwiftUI if you have an Apple device.
> It's basically standard when prototyping any sort of graphical application.
while imgui is super-cool, this is wildly overstating its reach or significance. It also embodies a very particular style of GUI programming (so-called "immediate mode", hence the "Im" part of the name) that is very well suited to some sorts of GUI applications and less so for others. The other style, often called "deferred mode", is the one used by most native toolkits, and it is very far from trivial to just switch an application between the two.
So, while there are plenty of good reasons to consider imgui for a graphical application, there are also many reasons why you would not want to use it too. It is very far from "standard" in terms of prototyping such apps.
Even though a lot of what people do with agents is reckless, they often build their own guillotine in the process too.
Problem #1: He decided to shoehorn two projects into one even though Claude told him not to.
Problem #2: Claude started creating a bunch of unnecessary resources because another archive was unpacked. Despite his "terror", the author let Claude continue instead of investigating.
Problem #3: He approved "terraform destroy" which obviously nukes the DB! It's clear he didn't understand, and he didn't even have a backup!
> That looked logical: if Terraform created the resources, Terraform should remove them. So I didn’t stop the agent from running terraform destroy
> Problem #3: He approved "terraform destroy" which obviously nukes the DB! It's clear he didn't understand
The biggest danger of agents is that the agent is just as willing to take action in areas where the human supervisor is unqualified to supervise it as in those where they are qualified, which is exacerbated by the fact that relying on agents to do work [0] reduces learning of new skills.
[0] "to do work" here is in large part to distinguish this from the careful, disciplined use of agents as a tool to aid learning, which involves a different pattern of use. I am not sure how well anyone actually sticks to it, but at least in principle it could have the opposite effect on learning from trust-the-agent-and-go vibe engineering.
His backup plan prior to the event had large obvious issues.
His backup plan after the fact seems suspicious as well because he is making it much harder than it has to be.
Between that and a glance at the home page, it feels like someone doing AI vibe work who is not comfortable in the space they are working.
Who is the intended audience? Other vibe coders? I just think it's weird that, given his backup solution, he likely asked the AI to create it, so whatever hot-wash he did for this event was invalidated.
I think that might be partly because on regular PCs you can just go and buy an Nvidia card instead of futzing around with software issues, and those on laptops probably hope that something like ZLUDA will solve it via software shims or MS-backed ML APIs.
Basically, too many choices to "focus on" makes none a winner except the incumbent.
That's the broad developer community. 90%+ of the engineers at Big Tech and the technorati startups are on macOS, with 5% on Linux and the other 5% on Windows.
You’ll see a lot of MacBooks in Beijing’s Zhongguancun, where all the tech companies are, but they also have a lot of students there, so who knows. You need to go out to the suburbs where Lenovo has offices to stop seeing them. I know Apple is common in Western Europe, having lived there for two years (but that was 20 years ago; I lived in China for 9 years after that).
It wouldn’t surprise me if the deepseek people were primarily using Macs. Maybe Alibaba might be using PCs? I’m not sure.
I would also expect that the Deepseek devs are using MacBooks. If not, they may be using Linux - Windows is possible of course, but not likely imho. I have no knowledge about that area though, so it would be interesting to hear any primary sources or anecdotes.
Deepseek is in Hangzhou, so I guess they are. GDP/capita in Zhejiang is pretty high, even more so for HZ. If you ever visit, it feels like a pretty nice place (especially if you can get a villa around xihu). I also visited ZJU once, and it was pretty Macbooky, but I don't have as much experience there as Beijing's Zhongguancun.
I live in Germany not the US. I mentioned in another comment but aside from the fact that Deepseek mainly targets Linux I expect that the Deepseek devs are using Mac or Linux.
I think it's reasonable to say that the people responding to surveys on Stack Overflow aren't the same people who work on pushing the state of the art in local LLM deployment. (which doesn't prove that that crowd is Apple-centric, of course)
It's not the whole answer, but SO came from the .NET world and focused on it first so it had a disproportionately MS heavy audience for some time. GitHub had the same issue the other way around. Ruby was one of GitHub's top five languages for its first decade for similar reasons.
I only use Macs when a project assigns them, and there are plenty of developers out there whose job has nothing to do with what Apple offers.
Also, while Metal is a very cool API, I'd rather play with Vulkan, CUDA and DirectX, as do the large majority of game developers.
Honestly though, gamedevs really are among the biggest Windows stalwarts due to SDKs and older 3D software.
The only groups of developers more tied to Windows that I can think of are probably embedded people tied down by weird hardware SDKs, and Windows Active Directory-dependent enterprise people.
Outside of that almost everyone hip seems to want a Mac.
The only "push" towards Metal compatibility has been complaints on GitHub issues. Not only has none of the work been done, absolutely nobody in their right mind wants to work on Metal compatibility. Replacing proprietary with proprietary is absolutely nobody's weekend project, or paid project.
It was originally the eternally-on-the-horizon Semantic Web, before somebody decided to repurpose the name for something to do with crypto (perhaps without bothering to search for "web 3" beforehand).
That said the core argument for MCP servers is providing an LLM a guard-railed API around some enterprise service. A gmail integration is a great example. Without MCP, you need a VM as scratch space, some way to refresh OAuth, and some way to prevent your LLM from doing insane things like deleting half of your emails. An MCP server built by trusted providers solves all of these problems.
But that's not what happened.
Developers and Anthropic got coked up about the whole thing and extended the concept to nuts and bolts. I always found the example servers useless and hilarious.[0] Unbelievably, they're still maintained.
In my experience, a skill is better suited for this instead of an MCP.
If you don’t want the agent to probe the CLI when it needs it, a skill can describe the commands, arguments and flags so the agent can use them as needed.
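As a hedged illustration of that idea, a minimal skill for wrapping a CLI might look something like the following. The frontmatter fields follow Claude Code's SKILL.md format; the specific `jira` subcommands and flags shown are assumptions for the example and should be verified against `jira --help`:

```markdown
---
name: jira-cli
description: Interact with Jira issues via the jira CLI. Use when the user
  mentions tickets, sprints, or issue keys like PROJ-123.
---

# Jira CLI

<!-- Assumed subcommands; verify against `jira --help` before relying on them. -->
- List issues: `jira issue list`
- View one issue: `jira issue view PROJ-123`
- Add a comment: `jira issue comment add PROJ-123 "text"`

Prefer plain/non-interactive output when parsing results. If a flag errors,
run the subcommand with `--help` instead of guessing.
```

The point is that the agent reads this on demand and then drives the real CLI directly, rather than every command being mediated by an MCP server.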
They make a big difference. For example if you use the Jira cli, most LLMs aren’t trained on it. A simple MCP wrapper makes a huge difference in usability unless you’re okay having the LLM poke and prod a bunch of different commands
Fwiw I'm having a good experience with a skill using Jira CLI directly. My first attempt using a Jira MCP failed. I didn't invest much time debugging the MCP issues, I just switched to the skill and it just worked.
Yes occasionally Claude uses the wrong flag and it has to retry the command (I didn't even bother to fork the skill and add some memory about the bad flag) but in practice it just works
Do you mean wrap the CLI with an MCP? I don't get that approach. I wrapped the Jira cli with a skill. It's taken a few iterations to dial it in but it works pretty damn well now.
I'm good, yet my coworkers keep having problems using the Atlassian MCP.
The trick I've found is to vibe libraries that do one thing well with clear interfaces. The experience becomes more like importing a package, which arguably has the same cognitive debt issues described above.
Editing a one-shot, on the other hand, reminds me of trying to mod a WordPress plugin.
You can have other sensors that tell you it's a screen, maybe require a Live Photo, maybe also upload to a third party service faster than generation is possible? In the end I think we'd end up somewhere like with cryptography: generating a real fake might be theoretically possible but it could be made prohibitively expensive to generate.
I have little doubt where things are going, but the irony of the way they communicate versus the quality of their actual product is palpable.
Claude Code (the product, not the underlying model) has been one of the buggiest, least polished products I have ever used. And it's not exactly rocket science to begin with. Maybe they should try writing slightly less than 100% of their code with AI?
More generally, Anthropic's reliability track record for a company which claims to have solved coding is astonishingly poor. Just look at their status page - https://status.claude.com/ - multiple severe incidents, every day. And that's to say nothing of the constant stream of bugs for simple behavior in the desktop app, Claude Code, their various IDE integrations, the tools they offer in the API, and so on.
Their models are so good that they make dealing with the rest all worth it. But if I were a non-research engineer at Anthropic, I wouldn't strut around gloating. I'd hide my head in a paper bag.
I find the GitHub issue experience particularly hellish: search for my issue -> there it is! -> the only comment is "Found 3 possible duplicates", Generated with Claude Code -> back to the start.
I am constantly amazed how developers went hard for claude-code when there were and are so many better implementations of the same idea.
It's also a tool that has a ton of telemetry, doesn't take advantage of the OS sandbox, and has so many tiny little patch updates that my company has become overworked trying to manage this.
Its worst feature (to me at least) is the "CLAUDE.md"s sprinkled all over, everywhere in our repository. It's impossible to know when or if one of them gets read, and what random stale effect has been triggered when it does decide to read one. Yes, I know, I'm responsible for keeping them up to date and they should be part of any PR, but claude itself doesn't always even know it needs to update any of them, because it decided to ignore the parent CLAUDE.md file.
Sometimes the agent (any agent, not just Claude — cursor, codex) would miss a rule or skill that is listed in AGENTS.md or Claude.md and I'm like "why did you miss this skill, it's in this file" and it's like "oh! I didn't see it there. Next time, reference the skill or AGENTS.md and I'll pick it up!"
Like, isn't the whole point of those files to not have to constantly reference them??
"Coding" is solved in the same way that "writing English language" is solved by LLMs. Given ideas, AI can generate acceptable output. It's not writing the next "Ulysses," though, and it's definitely not coming up with authentically creative ideas.
But the days of needing to learn esoteric syntax in order to write code are probably numbered.
OK, but seriously... if Anthropic is on the "best" path, aside from somehow nuking all AI research labs, an IPO would be the most socially responsible thing that they could do. Right?
That's a bummer. I was looking forward to testing this, but that seems pretty limiting.
My current solution uses Tailscale with Termius on iOS. It's a pretty robust solution so far, except for the actual difficulty of reading/working on a mobile screen. But for the most part, input controls work.
My one gripe with Termius is that I can't put text directly into stdin using the default iOS voice-to-text feature baked into the keyboard.
I’ve been doing this for a while [1], but ultimately settled on building a thin transport layer for Telegram to accept and return media, with persistent channels, vastly improved messaging UX, etc., and ended up turning this into a ‘claw’ with a heartbeat and SOUL [2].
I really enjoyed reading both posts. Thanks for sharing!
I, like many others, have written my own "claw" implementation, but it's stagnated a bit. I use it through Slack, but the idea of journaling with it is compelling. Especially when combined with the recent "two sentence" journaling article[1] that floated through HN not too long ago.
Great posts! So far [2] is the only "claw" that has caught my interest, mostly because it isn't trying to do everything itself in some bespoke, NIH way.
I've been using email and the Cloudflare email router. You don't get the direct feedback of a terminal, but it's much easier to read what's happening in HTML-formatted email.
It also feels kind of nice to just fire off an email and let it do its thing.
Oooh, now this is a very interesting idea. I live in my inbox and keep it quite tidy. Email is the perfect place to fire-and-forget ideas and then come back to a full response.
Do you have a blog outlining how you set it up? I'm curious to learn more.
Exactly my experience. I know they vibe code features and that's fine, but it looks like they don't do proper testing, which is surprising to me because all you need is a bunch of cheap interns to do some decent enough testing.
No, there is a wide gap between good and bad testers. Great testers are worth their weight in gold and delight in ruining programmers' days all day long.
IMO not a good place to skimp and a GREAT place to spend for talent.
> Great testers are worth their weight in gold and delight in ruining programmers' days all day long.
Side note: all the great testers I've known, from when my employers had separate QA departments, ended up becoming programmers, either by studying on the side or through in-house mentorship. By all second-hand accounts they've become great programmers too.
So true. My first job was in QA. Involuntarily, because I applied for a dev role, but they only had an opening in QA. I took the job because of the shiny company name on my resume. It totally changed my perspective on quality and finding issues. Even though I liked the job, it had some negative vibes because you are always the guy bringing bad news / criticizing other people's work (more or less).
Also, some developers couldn't react professionally to me finding bugs in their code. One dev team lead called me "persona non grata" when I came over to their desk. I took it with pride.
Eventually I transitioned to development because I did not see any career path for me in QA (team lead positions were filled with people who had been doing the job for 20+ years).
They brought down production because the version string was incorrectly changed to add an extra date. That would have been caught by even the most basic testing, since the app couldn't even start.
You jest, but while working on an AI-backed feature I was flabbergasted that the fix was adding "The result you send back MUST be accurate." to the already pretty clear prompt.
First of all, /remote-control in the terminal just printed a long URL, even though they advertise you can control it from the mobile app (apparently it should show a QR code, but doesn't). I fire up the mobile app but the session is nowhere to be seen. I try typing the long random URL into the mobile browser, but it simply throws me to the app, not the session. I read random reddit threads and they say the session will be under "Code", not "Chats", but for that you have to connect GitHub to the Claude app (??, I just want to connect to the terminal Claude on my PC, not GitHub). Ok, I do it.
Now, even though the session is idle on the PC, the app shows it as working... I try tapping the stop button; nothing happens. I also can't type anything into it. Ok, I try starting a prompt on the PC. It starts the work on the PC, but on the mobile app I get a permission dialog... where I can deny or allow the thing that already started on the PC, because I already gave permission for it there. And many more. Super buggy.
I wonder if they let Claude write the tests for their new features... That's a huge pitfall. You can think it works, and Claude assures you all is fine, but when you start it everything falls apart, because there are lots of tests but none of them actually test the actual behavior.
I'm willing to bet most of their libraries are definitely vibe coded. I'm using the claude-agent-sdk and there are quite a few bugs and some weird design decisions. And looking through the actual python code it's definitely not what I would classify 'best practice'. Bunch of imports in functions, switching on strings instead of enums, etc.
I had to downgrade to an earlier release because an update introduced a regression where they weren't handling all of their own event types.
A few weeks ago the github integration was completely broken on the claude website for multiple days. It's very clear they vibe code everything and while it's laudable that they eat their own dogfood, it really projects a very amateurish image about their infrastructure and implementation quality.
I think they are betting that all of this code is transient and not worth too much effort, because once Opus 5 is trained they can just ask it to refactor and fix everything and improve code quality enough that things don't fall apart while adding more features; and when Opus 5.5 comes out, it will be able to clean up after Opus 5. And so on. They don't expect these codebases to be long-lived and worth the time investment.
In theory, comments on Hacker News should advance discussion and meet a certain quality bar lest they be downvoted to make room for the ones that meet the criteria. I am not sure if this ever was true in practice, it certainly seems to have waned in the years I have been a reader of this forum (see one of the many pelican on a bike comments on any AI model release thread), but I'd expect some people still try to vote with this in mind.
Being sarcastic doesn't lower the bar a comment has to meet to avoid downvotes, so I wouldn't assume people missed the sarcasm without first considering whether the comment adds to the discussion.
I only understood it after reading some of co_king_5’s other comments. This is Poe’s law in action. I know several people who converted into AI coding cultists and they say the same things but seriously. Curiously none of them were coders before AI.
I'm willing to bet you don't full-on YOLO vibecode like the lead Claude Code developer, running 10 Claude Code sessions in parallel to push 259 pull requests that modify >40k lines of code in a month [0]? There is zero chance any of that code was rigorously reviewed.
I use Claude Code almost every day [1], and when used properly (i.e. with manual oversight), it's an amazing productivity booster. The issue is when it's used to produce far more code than can be rigorously reviewed.
> - You can't interrupt Claude (you press stop and he keeps going!)
This is normal behavior on desktop; sometimes it's in the middle of something? I also assume there's some latency.
> - At best it stops but just keeps spinning
Latency issues then?
> - It can get stuck in plan mode
I've had this happen from the desktop, and when using Claude Code from mobile before remote control; I assume this has nothing to do with remote control but is a partial outage of sorts with Claude Code?
I don't work for Anthropic, just basing off my anecdotal experience.
On top of that, this is something they should have had much earlier. My biggest pain point is not being able to continue from my phone. I just use a service to pipe Telegram to any cc session on the dev machine. This is the number 1 reason why I got excited by openclaw in the first place, but it's overkill to have it just to control cc.
This is my general experience with the claude app, I don't know what they're smoking over at anthropic but their ability to touch mobile arch inappropriately with AI is reaching critical levels.
We’ve been building in this space for a while, and the issues listed here are exactly the hard parts: session connectivity, reconnection logic, multi-session UX, and keeping state in-sync across devices. Especially when it comes to long running tasks and the edge cases that show up in real use.
Isn't it a simpler solution to create some protocol for a browser or device to announce that an age-restricted user is present, and then have parents lock down devices as they see fit?
Aside from the privacy concerns, all this age verification tech seems incredibly complicated and expensive.
I think this solution exists (e.g. android parental lock, but also ISP routers). But parents and industry have failed to do so on a greater scale. So legislation is going for a more affirmative action that doesn't require parental consent or collaboration.
A service provider of adult content now cannot serve a child, regardless of the involvement or lack thereof of a parent.