Hacker News | userbinator's comments

...and yoghurt is not a euphemism in this case, as much as the mentions of loneliness and Japan might make it seem that way.

Asm is simple enough that "mental execution" is far easier, if more tedious, than in HLLs, especially those with lots of hidden side-effects. The concept of a function doesn't really exist (and this is even more true when working with RISCs that don't have implicit stack management instructions), and although there are instructions that make it more convenient to do HLL-style call and return, it's just as easy to write a "function" that returns to its caller's caller (or further), switches to a different task or thread, etc. If you're going to learn Asm, then IMHO you should try to exploit this freedom in control flow and leverage the rest of the machine's ability, since merely being a human compiler is neither particularly enlightening nor useful.

> Asm is simple enough

The general conceptual model of "asm" is simple.

Some instruction sets and architectures are hideous, though.

> merely being a human compiler is not particularly enlightening nor useful.

I don't think I can agree with that. At least it teaches you what the compiler is doing. And abiding by conventions (HLL-esque control flow, but also things like "put the return value in r0" and "put constant pools after the function") can definitely make it easier to make sense of the code. (Although you might share a constant pool across a module or something, if the instructions reach far enough.)

That's not to say you can't do interesting things, or ever beat the compiler. One of the things I most enjoyed discovering, in mid-00s era THUMB (i.e. 16-bit ARM) code, is that the compiler was implementing switch statements with tables of 32-bit constants that it would load into an indirect jump. I never got around to it, but I figured I could mechanically replace these with a computed jump into a "table" of 16-bit unconditional branches (the shorter tables even helped bring branch distances under the encoding limits, except in very long functions).


I agree entirely, great insight! I'd like to add that assembly is best enjoyed in a suitable environment for it, where "APIs" are just memory writes and interrupts. Game programming for the C64 is way more fun than dealing with linux syscalls, for example. A lower level interface enables all the fun assembler tricks, and limited resources require you to be clever.

Then you goto hell…

> Asm is simple enough that "mental execution" is far easier, if more tedious, than in HLLs

Ya, totally, I can also keep 32 registers, a memory file, and a stack pointer all in my head at once ...fellow human... (In 2026 I might actually be an LLM, in which case I really can keep all that context in my "head"!)


there's an interesting new API skill for the human cortex v1.0, that allows for a much larger context window, it's called pen and paper.

For real! I occasionally write assembly because, for some reason, I kind of enjoy it, and also to keep my brain sharp. But yes, there is no way I could do it without pencil and paper (unless I’m on a site like CPUlator that visually shows everything that’s happening).

What do the words "mental execution" mean?

Using your brain and not the machine.

8 registers are sufficient; if you forget what one holds, looking back at the previous write to it is enough.

Contrast this with trying to figure out all the nested implicit actions that a single line of some HLL like C++ will do.


I see you have carefully avoided the em-dash. ;-)

I think people should find out for themselves, but the OP was quite explicit about it.

"Getting high on your own supply" is exactly what I'd expect from those immersed in this new AI stuff.

Is that quote from the movie Scarface?

https://www.youtube.com/watch?v=U4XplzBpOiU # had to search for it right now, seems to be a movie-quote \o/


and often performance as well

BS. Nothing can be faster than a read()/write() (or even mmap()) into a struct, because everything else would need to do more work.


Sure, if your structure doesn't contain any pointers and you only ever want to support one endianness and you trust your compiler to fix the machine layout of the struct forever.

Mainly the first thing. If your struct is already serial, of course serialization will be easy.

...which is true 99.999% of the time anyway, so it's not worth worrying about.

and from all of this they got only $130B?

I wonder if your thoughts would be any different if they managed to get enough to actually pay off the deficit?


Proof of intelligence might be better.

the only downside to graphene is that they consider the user to be an attack vector

In other words, just like Google.


yeah, but I'm less worried about that; I just don't want to be spied on constantly.

It's not like anyone with a working brain would trust AI or AI tools in particular to do anything perfectly, and things like this just further reinforce that fact.

First time I've heard of it and a quick search finds articles describing it as "OpenClaw is the viral AI agent" --- indeed.

