
Hey there Brendan, that's a pretty awesome presentation (I'm only 20 minutes into the talk as of now). You mention that the Apollo engineers expected the CPU load to be about 85% during descent, and that the Guidance Computer's kernel ran "dummy jobs" when no real jobs were running.

What are these dummy jobs? And why did they have to do this instead of just leaving the CPU idle?



> Why did they have to do this instead of just leaving the CPU idle?

This would require a CPU that was designed to idle.


Wow, wow! Looks like I don't understand the first thing about CPU design. Do CPUs have to be designed to idle? Can you throw some more light on this?


A basic model of a CPU is that it runs an infinite loop like this:

  1. If interrupts not masked, check for interrupt
  2. Load instruction
  3. Advance instruction pointer
  4. Execute instruction
It doesn't ever stop - as soon as the current instruction is finished executing it moves on to the next one. So, if you don't have anything better for the CPU to do, you need to have it spin in a loop of instructions anyway.
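
If it helps to see that loop as code, here's a toy sketch in C of a made-up three-instruction machine (nothing here corresponds to a real ISA; it's purely illustrative). Note that the outer for (;;) has no exit:

  #include <stdint.h>

  /* Toy model of the fetch/execute cycle described above. */
  enum { OP_NOP, OP_ADD, OP_JMP };

  typedef struct { uint8_t op; uint8_t arg; } insn_t;

  int main(void) {
      insn_t mem[] = {
          { OP_ADD, 1 },            /* acc += 1                 */
          { OP_NOP, 0 },
          { OP_JMP, 0 },            /* branch back to address 0 */
      };
      uint8_t pc = 0;
      unsigned acc = 0;

      for (;;) {                    /* 1. (interrupt check omitted here) */
          insn_t i = mem[pc];       /* 2. load instruction               */
          pc++;                     /* 3. advance instruction pointer    */
          switch (i.op) {           /* 4. execute instruction            */
          case OP_ADD: acc += i.arg; break;
          case OP_JMP: pc  = i.arg;  break;
          default:                   break;
          }
      }
  }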

More modern CPU designs typically include an instruction that means "halt until next interrupt" which actually stops the CPU from fetching and executing instructions.
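
On x86, for instance, that instruction is hlt. A minimal sketch of what an OS idle loop looks like in C, assuming ring 0 (hlt is privileged) and GCC-style inline assembly:

  /* Minimal idle-loop sketch. `sti` re-enables interrupts and `hlt`
     stops instruction fetch until the next interrupt arrives; after
     the interrupt is serviced, the loop simply halts again. */
  static void idle_loop(void) {
      for (;;)
          __asm__ volatile ("sti; hlt");
  }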


Why do CPUs and GPUs run hotter when doing more intensive tasks?

In your last statement I could see it making sense where the CPU actually halts, but did earlier CPUs always run at about the same temperature? Or do these idle processes issue fewer instructions at a time, so the CPU isn't as heavily loaded?


Modern CPUs, GPUs and SoCs have power management states that disable entire submodules when they're not in use, by actually gating off the clock to them. If you run without power management enabled, you'll find that they run hot all the time.
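
As a loose software analogy (the names here are made up for illustration, not any real driver API), clock gating works roughly like this: a gated module never sees a clock edge, so it does no switching and burns essentially no dynamic power:

  #include <stdbool.h>
  #include <stdio.h>

  /* Toy model of clock gating in an SoC. A module whose clock-enable
     is off never sees a clock edge, so it does no work (and, in real
     silicon, dissipates almost no dynamic power). */
  typedef struct { const char *name; bool clk_en; long active_cycles; } module_t;

  static void clock_edge(module_t *mods, int n) {
      for (int i = 0; i < n; i++)
          if (mods[i].clk_en)        /* gated off => the edge never arrives */
              mods[i].active_cycles++;
  }

  int main(void) {
      module_t soc[] = { {"cpu", true, 0}, {"gpu", false, 0}, {"video", false, 0} };
      for (long cycle = 0; cycle < 1000000; cycle++)
          clock_edge(soc, 3);
      for (int i = 0; i < 3; i++)
          printf("%s: %ld active cycles\n", soc[i].name, soc[i].active_cycles);
      return 0;
  }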


> did prior CPUs always run at about the same temperature?

Basically, yes. But back then they typically produced so little heat that they got by with passive heatsinks, up to and including the Pentium II (~20 W TDP) and the ATI Rage 128 that I used back in '99.


Wow, thanks for the explanation there. When we switched to modern CPUs that could actually halt, were there actual hardware/physical changes to the CPU? Or was it just a software change, i.e. a new instruction added to the existing instruction set?

Is that what made it difficult to design processors capable of "idling", i.e. the need for a completely new hardware design?


It was a hardware change. In those older CPU designs, the external clock signal was directly driving a state machine, so for as long as the clock was applied, the state machine would go.

It's important to realise that there was no good reason to have the ability to stop the CPU in those days - power consumption by the CPU itself was truly trivial compared to the memory and peripherals it was attached to, and those CPUs weren't really damaged or worn out by running continuously. Having the CPU spin in software when there was nothing else to do was perfectly fine.


Normally the CPU clock runs continuously, and every cycle the program counter increments (or gets changed by a branch instruction of some kind). If you want to stop the CPU, you have to gate the clock somehow, maybe with a timer that you could configure and enable via software. But that's extra complexity... and if you use dynamic logic (which is smaller and faster than static logic), you lose state when you halt. Spinning in a tight loop, on the other hand, doesn't require any hardware support.
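
For comparison, the software-only alternative is trivial. Roughly what the AGC's dummy job amounts to, sketched in C (illustrative only, not the actual AGC code):

  /* "Spinning in a tight loop": keep fetching and executing
     instructions that do nothing useful, until an interrupt
     preempts the loop. */
  static void dummy_job(void) {
      volatile int spin = 0;    /* volatile: keep the compiler from
                                   optimising the loop body away    */
      for (;;)
          spin++;
  }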



