
I like the article as a whole, but I'm unsure about this point:

> For example: Disabling CPU turbo just to save some current consumption is a bad choice, because the resulting extra time will use more energy than just getting the job done quickly and shutting off.

In one of my computer engineering classes, I learned that power consumption rises as the square of clock frequency - so doubling the clock will quadruple the power.

That seems to imply you'd actually have to measure it: does the quadratic power increase from the clock boost outweigh the constant baseline power multiplied by the additional time spent on the task?
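
Roughly the comparison I mean, as a back-of-the-envelope Python sketch (all the numbers here are invented just to show the shape of the trade-off, and it takes the P ~ f^2 rule of thumb at face value):

    # Toy comparison: same amount of work at two clock speeds, with a fixed
    # "rest of the system" draw. All numbers are made up for illustration.

    def total_energy(freq_ghz, static_w, base_cpu_w=1.0, work_ghz_seconds=1.0):
        cpu_w = base_cpu_w * freq_ghz ** 2       # assumes P_cpu ~ f^2
        runtime_s = work_ghz_seconds / freq_ghz  # same work finishes sooner
        return (cpu_w + static_w) * runtime_s    # joules

    for static_w in (0.5, 2.0, 4.0):
        slow = total_energy(1.0, static_w)   # stock clock
        fast = total_energy(2.0, static_w)   # turbo
        print(f"static={static_w}W  stock={slow:.2f}J  turbo={fast:.2f}J")

With a small static draw the slower clock wins; once the static draw gets large enough, racing to finish wins.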

Related - it'd be nice if the Pi's CPUs included granular power consumption information, either derivable from the datasheet, or as real-time values exposed in registers.




> In one of my computer engineering classes, I learned that power consumption rises as the square of clock frequency - so doubling the clock will quadruple the power.

This is not quite correct. Switching power of a chip (ignoring static leakage) is proportional to voltage squared times frequency. Most chips require a higher voltage to reach higher clock speeds, so there is a quadratic relationship there. However, I believe that the Raspberry Pi does not have dynamic voltage control, so reducing clock speed without also reducing voltage will not affect total switching energy consumption.
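
Spelled out, treating the effective switched capacitance C as fixed for a given task (an approximation):

    P_switch ≈ C · V^2 · f                      (dynamic/switching power)
    t        ≈ N_cycles / f                     (time to finish N cycles of work)
    E_switch = P_switch · t ≈ C · V^2 · N_cycles

The frequency cancels, so at a fixed voltage the switching energy for a given task is roughly constant; what changes with clock speed is how long the static leakage and the rest of the board stay powered.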



This is a well-understood power optimization strategy called "race to idle". It works because there are a lot of peripherals drawing power in addition to the CPU that you can't switch off until the CPU is done.

There is also definitely a sweet spot. If you overclock the CPU too much, your performance per watt drops too far and race to idle no longer wins.


For a continuous workload that's a reasonable rule of thumb, but it doesn't tell the whole story. There is always a certain static power draw just from having a component enabled. So modern embedded systems often use a "race-to-sleep" or "race-to-halt" strategy: execute tasks as quickly as possible, then shut down most of the components while waiting for the next event to trigger.


There's a base amount of power overhead the device will use no matter what, even if it does nothing. The article even provides benchmarks showing that turbo increases current consumption by 10% but reduces boot time by 11%, for a small but measurable reduction in total energy used.
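
Quick sanity check on those figures, assuming the supply voltage is fixed so current is a fair proxy for power:

    energy ratio ≈ 1.10 × (1 − 0.11) ≈ 0.98

i.e. roughly 2% less total energy for the boot with turbo on.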


What about the internal resistance of the battery? Don't the losses there increase with higher current?

As in, 1A for 2 seconds wastes less energy inside the battery than 2A for 1 second, due to internal losses?

I may be remembering this wrong; it has been a long time since I studied this stuff.
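
If I have it right, the loss inside the cell is roughly I^2 · r · t for some internal resistance r (value unknown here, just a model):

    1 A for 2 s:  1^2 · r · 2 s = 2r joules lost internally
    2 A for 1 s:  2^2 · r · 1 s = 4r joules lost internally

Same charge delivered either way, but the higher-current case dissipates twice as much in the battery itself; whether that matters depends on how big r is relative to the load.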


That’s true when you’ve saturated all of those subsystems but not when you’re just CPU bound. If you’re doing high throughput from disk to memory to CPU and back to disk, there are levels of use where throttling IO helps with battery draw. There are old papers on the subject, and I have a suspicion that OS X started doing something of the sort when they went to nonremovable batteries in the MacBook. There’s a 30% reduction in power draw in that generation that they brag about but don’t really explain, and it was a handful of years after that first paper showed up.


This is very interesting, thanks for sharing!

So if it takes 1J to do some computation in 1 second (say 1GHz at 1W), you're saying that in the perfectly spherical cow case, it takes 2J to do that same computation in 0.5 seconds (2GHz at 4W).

However, that's just CPU consumption. If the overall system has a static draw of 4W, then it takes 5J (1J CPU, 4J system) at 1GHz to do the task in a second, or 4J (2J CPU, 2J system) at 2GHz to do the task in 0.5 seconds.

Am I understanding you correctly? Basically, if the overall system's static power consumption is comparable to the CPU's power consumption at turbo, then it makes sense to turbo; if not, it doesn't?
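
Working the break-even out with the same toy numbers (1 GHz/1 W vs 2 GHz/4 W, which are assumptions, not figures from the article):

    1 J + P_static · 1 s = 2 J + P_static · 0.5 s
    =>  P_static = 2 W

So with these made-up figures, turbo only pays off once the always-on part of the system draws more than about 2 W.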


I think you have it right, but my experience from optimising Android power usage was that intuitions like these are helpful for knowing what to try, yet you have to test and measure everything, as there are always complications. Luckily you are well equipped to benchmark it already :)


Yes, this is all correct, as long as you're implicitly assuming that the CPU itself has some static power dissipation as well (which it does) in addition to the rest of the system.

Unfortunately I missed the actual benchmarks in the article that empirically measured the power difference.



