Instruction decode for variable length ISAs is inherently more complex, and thus requires more transistors = more power, than fixed length instruction decode, especially parallel decode. AFAIK modern x86 cores have to speculatively decode instructions to achieve this, whereas with RISC ISAs you know where all the instruction boundaries are, and decoding N instructions in parallel is just a matter of instantiating N decoders that work in parallel. How much this determines the x86 vs ARM power gap, I don't know; what's much more likely is that x86 designs have not been hyper optimized for power as much as ARM designs have been over the last two decades. Memory ordering is another non-negligible factor, but again the difference is probably more attributable to the different goals of the two architectures over the vast majority of their lifespans, and to the expertise and knowledge of the engineers working at each company.
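The boundary problem can be sketched with a toy encoding (purely illustrative, not any real ISA; here I just assume the first byte of each variable-length instruction encodes its total length):

```python
def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
    # Every boundary is known up front: instruction i starts at i * width,
    # so N decoders can each grab their slice independently, in parallel.
    return list(range(0, len(code), width))

def variable_boundaries(code: bytes) -> list[int]:
    # Boundary i+1 depends on the length of instruction i, so this scan is
    # inherently sequential. Parallel hardware has to guess candidate start
    # offsets and throw away decodes that land mid-instruction.
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += code[pc]  # toy rule: first byte = instruction length
    return offsets

print(fixed_boundaries(bytes(16)))                      # [0, 4, 8, 12]
print(variable_boundaries(bytes([2, 0, 3, 0, 0, 1])))   # [0, 2, 5]
```

The point is just the data dependency in the second loop: that's what the speculative decoders are paying transistors to hide.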
Seems obvious. If you don’t use it you lose it. Same thing happened with mental arithmetic, remembering phone numbers, etc. Letting an LLM do your thinking will make you worse at thinking.
The uncomputable real numbers always seemed strange to me. I can understand a convergent sequence of rationals, or the idea of a program that outputs a number to arbitrary precision, but something that cannot be computed at all is a very bizarre object. I think NJ Wildberger has some interesting ideas in this area, although I'm not sure I agree with his finitist interpretation in all circumstances. Specifically, I don't think comparisons to the number of atoms in the universe, or information theoretic limits on storage based on the volume of the observable universe, are interesting considerations here.
To me at least, if you can write down a finite procedure that can produce a number to arbitrary precision, I think it is fair to say the number at that limit exists.
This made me think of a possible numerical library where rather than storing numbers as arbitrary precision rationals, you could store them as the combination of inputs and functions that generate that number, and compute values to arbitrary precision.
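A minimal sketch of that idea, where a "number" is just a function from requested decimal digits to a Fraction within 10**-digits of the true value (the names and the error-budget convention here are my own assumptions, not any existing library's):

```python
from fractions import Fraction
from math import isqrt

def sqrt2(digits: int) -> Fraction:
    # isqrt(2 * scale**2) is floor(sqrt(2) * scale), so this Fraction is
    # within 10**-digits of the real sqrt(2).
    scale = 10 ** digits
    return Fraction(isqrt(2 * scale * scale), scale)

def add(x, y):
    # Combining two lazy numbers: ask each operand for one extra digit so
    # the two rounding errors together stay under the requested precision.
    return lambda digits: x(digits + 1) + y(digits + 1)

two_sqrt2 = add(sqrt2, sqrt2)
print(float(two_sqrt2(30)))  # ~2.8284271247461903
```

Each operation would carry its own rule for how much extra precision to demand from its inputs; multiplication and division need error analysis that depends on the operands' magnitudes, which is where a real library gets interesting.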
The strategy is fine, the problem is there aren’t enough domestic competing players in cutting edge semi nodes for it to work anymore. The US had many competing foundries before its semi industry was hollowed out by Japan and Korea. Now the only player in the US is Intel and, having been mismanaged for the last decade or more, it’s at risk.
I don’t think propping up Intel is going to work though, they’re a sinking ship and their management seems too risk averse and incompetent. It might be better for the US, long term, to let them collapse and sell off strategic parts to different domestic players (NVIDIA, AMD, Micron, TI, etc) and use tariffs or other trade policy to force some amount of leading edge semi fabrication to use domestic manufacturing.
Names are also a good way to determine how to draw boundaries between data, between code, etc. If you can give something a concise, descriptive, and intuitive name you can usually pull it out into its own function, type, etc. and it will _improve_ readability, since the name adds information and abstracts the implementation well. Names are also a good heuristic for whether your abstractions and boundaries are good. If they require verbose, misleading, or unintuitive names you may need to redraw those boundaries and abstractions.
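As a toy illustration of the extraction test (hypothetical names, not from any real codebase):

```python
def is_eligible_for_discount(customer: dict) -> bool:
    # The name carries the intent; the raw expression below didn't.
    return customer["orders"] >= 10 and customer["balance_due"] == 0

def checkout_total(customer: dict, subtotal: float) -> float:
    # Before extraction this condition sat inline as
    # `customer["orders"] >= 10 and customer["balance_due"] == 0`,
    # forcing the reader to reverse-engineer what it meant.
    if is_eligible_for_discount(customer):
        return subtotal * 0.9
    return subtotal

print(checkout_total({"orders": 12, "balance_due": 0}, 100.0))  # 90.0
```

If the best name you could find was something like `check_orders_and_balance`, that would be the heuristic telling you the boundary is wrong.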
Still makes more sense to do the transcription and analysis lazily rather than ahead of time (assuming you can do it relatively quickly). If that person never calls in again, the transcription was a waste of money.
Re: YT AI content. That is because AI video is (currently) low quality. If AI video generators could spit out full length videos that rivaled or surpassed the best human made content people wouldn’t have the same association. We don’t live in that world yet, but someday we might. I don’t think “human made” will be a desirable label for _anything_, videos, software, or otherwise, once AI is as good or better than humans in that domain.
You’re not imagining what post scarcity can really look like. If you have abundant energy, automation, etc. you could manipulate geography and climate, you could build artificial land mass, and so on. It really depends on what people mean by post scarcity.