
>What happened was that static scheduling stayed really hard while the transistor overhead for dynamic scheduling became irrelevantly cheap

Is the latter part true? AFAIK most of a modern CPU's die area and power budget goes toward scheduling overhead rather than the actual ALU operations.



If it's pure TFLOPs you're after, you do want a more or less statically scheduled GPU. But for CPU workloads, even the low-power efficiency cores in phones these days are out of order, and the size of reorder buffers in high-performance CPU cores keeps growing. If you try to run a CPU workload on GPU-like hardware, you'll just get pitifully low utilization.
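A toy sketch of the two workload shapes being contrasted here (hypothetical function names, illustration only): independent arithmetic, which a wide statically scheduled machine can saturate, versus a serial dependency chain like pointer chasing, where each step waits on the previous load and only dynamic scheduling around the chain keeps execution units busy.

```python
def independent_sum(xs):
    # Every multiply is independent of the others: a wide, statically
    # scheduled machine (GPU-style) can issue them all in parallel.
    return sum(x * x for x in xs)

def pointer_chase(next_idx, start, steps):
    # Each load depends on the previous result: a serial dependency
    # chain. Extra ALU width doesn't help here; an out-of-order core
    # earns its keep by executing *other* instructions around the chain.
    i = start
    for _ in range(steps):
        i = next_idx[i]
    return i

xs = list(range(8))
ring = [(i + 1) % 8 for i in range(8)]  # 0 -> 1 -> ... -> 7 -> 0
print(independent_sum(xs))        # 140
print(pointer_chase(ring, 0, 5))  # 5
```

The first loop is the shape GPUs are built for; the second is the shape that makes even phone efficiency cores go out of order.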

So it's clearly true that the transistor overhead of dynamic scheduling is cheap compared to the (as yet unsurmounted) cost of doing static scheduling for software that doesn't lend itself to that approach. But it's probably also true that dynamic scheduling is expensive relative to ALUs, or else we'd see more GPU-like architectures using dynamic scheduling to broaden the range of workloads they can run with competitive performance. Instead, the most successful GPU company appears to largely just keep throwing ALUs at the problem.


I think OP meant "transistor count overhead", and that's true: there are bazillions of transistors available now. Dynamic scheduling does take a lot of power, and returns are diminishing, but they're still returns, and better ones than just increasing core count. Overall what matters is performance per watt, and that's still going up.



