
Regarding the following code,

```
printk("Kernel panic: %s\n\r", s);
for (;;);
```

could the for-loop be damaging to the CPU by over-utilizing a small portion over and over again, in terms of it heating up a tiny space on it?



No. CPUs need to be designed to tolerate busy loops without damaging themselves (with appropriate cooling). CPUs of that era did not measure their own temperature, but they also weren't trying to squeeze as much juice through as the later Pentium 4s that ran very hot. Modern CPUs are self-regulating and try very hard to avoid damaging themselves even with inadequate cooling.


I'm assuming the "halt and catch fire" thing is only really possible on pre-microprocessor machines built from discrete components, where the individual parts are small enough to overheat by themselves if driven wrong.

I'd guess the physical CPU package of an i386 would cope just fine with the few (hundreds?) of gates toggling in that small loop.

I wonder if it might be possible on a modern FPGA, if you artificially created some unstable circuit and deliberately packed it into a corner of the die?

There's probably some AVX-512 concoction that would be the closest equivalent on a modern x64. There's probably an easy experiment -- if that concoction makes the CPU frequency drop while whole-package thermals also drop, it /could/ be due to localized die heating.


Maybe not the for-loop, but there has been research into damaging CPUs by repeated execution of some particular instructions:

https://www.semanticscholar.org/paper/MAGIC%3A-Malicious-Agi...


No. Intel CPUs, even way before Linux, were microcoded, so you're still using the full instruction fetch, decode, and microcode system for every step in your infinite for loop. You aren't wearing out the CPU any more than running any other code.


None of the instructions likely to be emitted by that loop will be microcoded, and the instructions will always be fetched from the L1 cache. That said, this won't be an issue simply because CPUs are designed and tested to handle hot loops.


My AMD 386DX/40 didn’t even have a fan or a heatsink.


CPUs didn't consume much power back then even at full load (a few watts[1]), so leaving the CPU in a busy loop was the norm.

[1] http://www.cocoon-culture.com/lib/noise-report/external-docs...


I remember Win95 didn't use the hlt instruction in its idle thread; it just did the same as Linux. Power management wasn't a thing back then. I think ACPI and hlt came with WinNT only.


You could use Rain to cool down your CPU. That tool was useful under VMs and DOSBox too.


On Pentium or higher. 486s and earlier didn’t really have an HLT instruction, iirc


>> All x86 processors from the 8086 onward had the HLT instruction, but it was not used by MS-DOS prior to 6.0[2] and was not specifically designed to reduce power consumption until the release of the Intel DX4 processor in 1994. MS-DOS 6.0 provided a POWER.EXE that could be installed in CONFIG.SYS and in Microsoft's tests it saved 5%.[3]


I stand corrected. I was under the impression that the original Pentium was the first architecture to have HLT, but maybe that was just the first architecture I ran Rain on, since that was where it had benefits (I ran Win95 on a 586, but never DOS on a 486 laptop).


Idle loops are harder to implement when your system doesn't have multitasking.


Even single tasked systems like MS-DOS still had interrupts. You could HLT the processor and a keyboard interrupt could wake it straight back up and resume execution anywhere in the MS-DOS kernel. It's just that the typical TDP of a CPU back then was a couple of watts so there was literally no point in HLTing instead of busy-waiting so nobody bothered.


Every x86 has had HLT. Win95 just wasn't using it, even though you could write a 10-line program that got context-switched in when idle and halted the CPU. It was one of my first programs as a child, on a 486 DX2/66.

I just had ChatGPT generate said program, and I think it's very similar to what I wrote. I'm unsure if it ever did anything, but I've always been interested in efficiency:

```c
#include <stdio.h>
#include <windows.h>

int main(void) {
    printf("Setting process priority to low...\n");
    SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);

    printf("Halting the processor when no other programs are running...\n");
    while (1) {
        __asm {
            hlt
        }
    }
    return 0;
}
```


That’s pretty much what DOS did as well.


The CPU would probably run cooler since it's not doing anything. Most of the circuit would be static, not flipping from 0->1 or 1->0, which is what tends to expend the most power.


It's not a risk or anything, but it does waste power compared to using the HLT instruction.


I think you need CLI; HLT, as HLT by itself still allows the machine to be woken up by an interrupt.


Even with interrupts enabled sticking a HLT in that loop would be better than not.


Nope. They're built for it. Typically now on x86 at least you'd do a CLI then for(;;)HLT. That'd park the CPU unless a non-maskable interrupt was latched.


No.



