No. CPUs need to be designed to tolerate busy loops without damaging themselves (with appropriate cooling). CPUs of that era did not measure their own temperature, but they also weren't trying to squeeze as much juice through as the later Pentium 4s that ran very hot. Modern CPUs are self-regulating and try very hard to avoid damaging themselves even with inadequate cooling.
I'm assuming the "halt and catch fire" thing is only really possible on pre-microprocessor machines built from discrete components, where the individual parts are small enough to overheat by themselves if driven wrong.
I'd guess the physical CPU package of an i386 would cope just fine with the few (hundreds?) of gates toggling in that small loop.
I wonder if it might be possible on a modern FPGA, if you artificially create some unstable circuit and deliberately pack it into a corner of the die?
There's probably some AVX-512 concoction that would be the closest equivalent on modern x64. There's an easy experiment, too: if that concoction makes the CPU frequency drop while whole-package thermals also drop, it /could/ be due to localized die heating.
No. Intel CPUs, even way before Linux, were microcoded, so you're still using the full instruction fetch, decode, and microcode machinery for every iteration of your infinite for loop. You aren't wearing out the CPU any more than running any other code does.
None of the instructions likely to be emitted by that loop will be microcoded, and the instruction will always be fetched from L1 cache. That said, this won’t be an issue simply because CPUs are designed and tested to be able to handle hot loops.
I remember Win95 didn't use the hlt instruction in its idle thread; it just did the same as Linux. Power management wasn't a thing back then. I think ACPI and hlt came with WinNT only.
>> All x86 processors from the 8086 onward had the HLT instruction, but it was not used by MS-DOS prior to 6.0[2] and was not specifically designed to reduce power consumption until the release of the Intel DX4 processor in 1994. MS-DOS 6.0 provided a POWER.EXE that could be installed in CONFIG.SYS and in Microsoft's tests it saved 5%.[3]
I stand corrected. I was under the impression that the original Pentium was the first architecture with HLT, but maybe that was just the first architecture where I ran Rain and it had benefits (having run Win95 on a 586, but never DOS on a 486 laptop).
Even single tasked systems like MS-DOS still had interrupts. You could HLT the processor and a keyboard interrupt could wake it straight back up and resume execution anywhere in the MS-DOS kernel. It's just that the typical TDP of a CPU back then was a couple of watts so there was literally no point in HLTing instead of busy-waiting so nobody bothered.
Every x86 has had HLT. Win95 just wasn't using it, even though you could write a 10-line program that halted the CPU whenever it got context-switched in at idle. It was one of my first programs as a child, on a 486 DX2-66.
I just had ChatGPT generate said program and I think it's very similar to what I wrote. I'm unsure if it ever did anything, but I've always been interested in efficiency:
```
#include <stdio.h>
#include <windows.h>

int main(void) {
    printf("Setting process priority to low...\n");
    /* Only get scheduled when nothing else wants the CPU. */
    SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);

    printf("Halting the processor when no other programs are running...\n");
    while (1) {
        /* MSVC inline assembly. Note hlt is a privileged
           instruction, so this faults without ring-0 access. */
        __asm {
            hlt
        }
    }
    return 0;
}
```
The CPU would probably run cooler, since it's not doing anything. Most of the circuit would be static, not flipping from 0->1 or 1->0, which is what consumes the most power.
Nope. They're built for it. Typically now, on x86 at least, you'd do a CLI and then for(;;) HLT. That parks the CPU unless a non-maskable interrupt is latched.
```
printk("Kernel panic: %s\n\r", s);
for (;;);
```
Could the for-loop damage the CPU by over-utilizing a small portion of it over and over, heating up a tiny spot on the die?