Hacker News | clktmr's comments

I've written a fair amount of code for EmbeddedGo. Garbage Collector is not an issue if you avoid heap allocations in your main loop. But if you're CPU bound a goroutine might block others from running for quite some time. If your platform supports async preemption, you might be able to patch the goroutine scheduler with realtime capabilities.
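As a rough illustration of the allocation-free main-loop style (this is not EmbeddedGo code; the names and sizes are made up): the scratch buffer lives outside the loop, so steady-state iterations allocate nothing and the garbage collector has nothing to collect.

```go
package main

import "fmt"

// Preallocated once; reused on every iteration of the main loop.
var scratch [64]byte

// fill reuses the caller-provided backing array. append never grows
// past the existing capacity here, so no heap allocation occurs.
func fill(buf []byte, v byte) []byte {
	buf = buf[:0]
	for i := 0; i < 8; i++ {
		buf = append(buf, v+byte(i))
	}
	return buf
}

func main() {
	for i := 0; i < 3; i++ { // stand-in for the real-time main loop
		out := fill(scratch[:], byte(i))
		fmt.Println(out)
	}
}
```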

The thing is, you'll typically switch to master to merge your own branch. This makes your own branch 'theirs', which is where the confusion comes from.


Not me. I typically merge main onto a feature branch where all the conflicts are resolved in a sane way. Then I checkout main and merge the feature branch into it with no conflicts.

As a bonus I can then also merge the feature branch into main as a squash commit, ditching the history of a feature branch for one large commit that implements the feature. There is no point in having half implemented and/or buggy commits from the feature branch clogging up my main history. Nobody should ever need to revert main to that state and if I really really need to look at that particular code commit I can still find it in the feature branch history.
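A minimal sketch of that workflow, played out in a throwaway repo (branch and file names are invented for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > f.txt; git add f.txt; git commit -qm "init"
git checkout -qb feature
echo feature >> f.txt; git commit -qam "feature work"
git checkout -q main
echo mainline > g.txt; git add g.txt; git commit -qm "mainline work"
# 1) merge main into the feature branch; any conflicts get resolved here
git checkout -q feature
git merge -q -m "merge main into feature" main
# 2) back on main, the merge is now conflict-free
git checkout -q main
git merge -q feature
# or instead: git merge --squash feature && git commit -m "Implement feature"
git log --oneline
```

The final merge is a fast-forward because main is already an ancestor of the feature branch after step 1; `--squash` instead stages the combined diff so you can record it as a single commit on main.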


Yep. This is the only model that has worked well for me for more than a decade.


This is what I do, and I was taught by an experienced Git user over a decade ago. I've been doing it ever since. All my merges into main are fast forwards.


> - he's labeled GenAI as nuclear waste (https://www.webpronews.com/rob-pike-labels-generative-ai-nuc...)

The whole article is an AI hallucination. It refers to the same "Christmas 2025 incident". The internet is dead for real.


The advent of coding agents killed Hacker News to some degree for me. Before, I could always come here to get a pause from the hype, scandal and bait. Top comments were usually insightful; I really felt like I was learning while browsing the feed. Today every brainfart about AI makes it to the frontpage. I know this sounds very dismissive, but most pieces really have no substance at all.

The good content is still there, but it drowns in noise and I'm not very good at filtering it out. I even suspect Hacker News is one of the prime advertisement targets of coding agent companies.

I would love to see if this is just my perception or if it can be found in the data.


Personally I don't care about what news articles end up on the front page - it's AI now, but there have been other trends in the past that did the same.

The bigger problem is the effect it's had on "Show HN" postings, which in the past you could depend on having been built by the person submitting them. That's why those posts tended to be more strongly moderated: harsh comments were often seen as attacks on the person's art. Now I feel like most of the credibility has left the room on those posts.

Don't get me wrong - I have no problem with "vibe coding". I do plenty of it myself these days, for commercial purposes. But I feel it cheapens and waters down someone presenting work as their own.


A project was one of the easiest ways to evaluate a stranger. It was a great bullshit detector. If they can make something like this then they are probably someone with ability and experience and so the rest of what they have to say is probably worth listening to. But I also agree with the parent. HN seems to be flooded with hustle and rubbish since AI has taken off. It's eternal LLMber.


The phrase "eternal LLMber" saddens and scares me in equal measure.


As good poetry should.


I wonder if the marketing/hustlebros who only value our art as a get-rich-quick scheme (i.e. the people pushing "learn to 'code'" (I hate the term 'coding') and half the new faces from India) learnt about Show HN and decided to ruin something good by making a LinkedIn post about "how great of a marketing avenue" we are, and the vibecoded-slop pushers listened in full force, because they know nothing about our industry and thus don't know how valuable a non-salesy place to talk trade is.


I'm just going to wait it out, as these trends come and go; before AI it was Rust, before Rust it was IoT, before IoT it was big data, crypto was woven through it, etc.

I'm sure someone's done the numbers on HN trending topics over time aaand yup: http://varianceexplained.org/r/hn-trends/


I agree... I wrote an essay about this: https://joinkith.com/#the-internet-is-dead

tl;dr of the essay: we need to move back to human-to-human recommendations and trust systems, and people are already doing that in a lot of ways by retreating to DMs (iMessage, email, in-person conversations) and personal recommendations rather than relying on Google + the algorithm. What this means for public forums I don't know. I think they're gone and will never come back, probably.


It's probably in their interest to have as many vibed codebases out there as possible, that no human would ever want to look at. Incentivising never-look-at-the-code is effectively a workflow lock-in.


I always review every single change / file in full, and spend around 40% of the time it takes to produce something doing so. I assume it's the same for a lot of people who used to develop code and swapped to mostly code generation (since it's just faster). The time I spend looking at it depends on how much I care about it - a choice you don't really get when writing things manually.


At least try a different question with similar logic, to ensure this isn't patched into the context since it's going viral.


You can't "patch" LLMs in 4 hours, and this is not the kind of question that triggers a web search.


You absolutely can, either through the system prompt or by hardcoding overrides in the backend before it even hits the LLM, and I can guarantee that companies like Google are doing both


This has been viral on TikTok for at least a week. Not really 4 hours.


You can pattern match on the prompt (input) then (a) stuff the context with helpful hints to the LLM e.g. "Remember that a car is too heavy for a person to carry" or (b) upgrade to "thinking".


Yes, I’m sure that’s what engineers at Google are doing all day. That, and maintaining the moon landing conspiracy.


If they aren't, they should be (for more effective fraud). Devoting a few of their 200,000 employees to make criticisms of LLMs look wrong seems like an effective use of marketing budget.


It looks like they do. https://simonwillison.net/2025/May/25/claude-4-system-prompt... They patch it in the prompt and eventually address it in the reinforcement training. It seems the eventual goal is to patch all of these tiny "glitches" so as to hide the lack of cognition.


A tiny bit of fine-tuning would take minutes...


It's pretty good at dead code elimination. The size of Go binaries is largely due to the runtime implementation. Remove a bunch of the runtime's features (profiling, stack traces, sysmon, optimizations that avoid allocations, maybe even multithreading...) and you'll end up with much smaller binaries. I would love it if there were a build tag like "runtime_tiny" that provided such an implementation.


There is TinyGo, which is similar to what you seek.


You don't know what you want. That's why asking questions doesn't work. You think you know it, but only after you've spent some time iterating in the space of solutions will you see the path forward.


> You think you know it, but only after you've spent some time iterating in the space of solutions will you see the path forward.

I'd turn it around - this is the reason asking questions does work! When you don't know what you want, someone asking you for more specifics is sometimes very illuminating, whether that someone is real or not.

LLMs have played this role well for me in some situations, and atrociously in others.


I think what's lacking in LLMs creating code is that they can't "simulate" what a human user would experience while using the system. So they can't really evaluate alternative solutions to the human-app interaction.

We humans can imagine it in our mind because we have used the PC a lot. But it is still hard for us to anticipate how the actual system will feel for the end-users. Therefore we build a prototype, and once we use the prototype we learn hey, this can not possibly work productively. So we must try something else. The LLM does not try to use a virtual prototype and then learn it is hard to use. Unlike Bill Clinton, it doesn't feel our pain.


This is so ironic. Why would you add all these "features" to Go, if you're not interested in using the language at all?


Why would anyone grow flowers when they can't eat them


There is also yaegi, a Go interpreter, which might be a better choice for small scripts than 'go run'.

https://github.com/traefik/yaegi


Also, for Go oneliners: https://github.com/dolmen-go/goeval

(Disclaimer: author here)

