
I would guess it is some combination of:

- longer pipelines
- more silicon dedicated to speculative execution for more instruction-level parallelism (ILP)
- bigger caches
- more analysis of the incoming instruction stream to extract more ILP and out-of-order execution (see the sketch below)
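
To make the ILP / out-of-order point concrete, here is a minimal C sketch (the function names and the summation example are just illustrative, not from the comment above). The first loop is one long dependency chain, so a wide out-of-order core has little independent work to overlap; the second exposes four independent accumulators that speculative, superscalar hardware can keep in flight at once.

  #include <stddef.h>

  /* Serial reduction: every add depends on the previous one, so the
     core's extra execution units mostly sit idle. */
  double reduce_serial(const double *a, size_t n) {
      double sum = 0.0;
      for (size_t i = 0; i < n; i++)
          sum += a[i];          /* the next add must wait for this one */
      return sum;
  }

  /* Four independent accumulators expose instruction-level parallelism:
     an out-of-order, superscalar core can keep several adds in flight. */
  double reduce_ilp(const double *a, size_t n) {
      double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
      size_t i = 0;
      for (; i + 4 <= n; i += 4) {
          s0 += a[i];
          s1 += a[i + 1];
          s2 += a[i + 2];
          s3 += a[i + 3];
      }
      for (; i < n; i++)        /* leftover elements */
          s0 += a[i];
      return (s0 + s1) + (s2 + s3);
  }

The schedulers, reorder buffers, and branch predictors that find this kind of parallelism automatically in ordinary code are exactly the extra silicon the list above is talking about.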

But yeah, more clock is more power is more heat.
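
Roughly, the textbook first-order model for dynamic (switching) power in CMOS is

  P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} \, f

where \alpha is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Since hitting a higher f usually also requires a higher V, power climbs faster than linearly with clock, which is where the extra heat comes from.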



