
From the article: more CUDA cores at a lower clock. Since power consumption doesn't scale linearly with clock speed, doubling the cores and halving the clock (as an example, not the actual ratio they used) leaves you with a net efficiency gain.
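
A rough back-of-the-envelope sketch of why that works, assuming dynamic power scales roughly with frequency times voltage squared and that a lower clock permits a lower voltage (the numbers are illustrative, not NVIDIA's actual figures):

    def relative_power(cores, clock, voltage):
        # Dynamic power ~ active silicon * frequency * voltage^2 (rough model)
        return cores * clock * voltage ** 2

    baseline = relative_power(cores=1.0, clock=1.0, voltage=1.0)
    # Double the cores, halve the clock; assume the lower clock allows ~0.8x voltage
    wide_and_slow = relative_power(cores=2.0, clock=0.5, voltage=0.8)

    print(baseline)       # 1.0
    print(wide_and_slow)  # 0.64 -> same nominal throughput, ~36% less power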



Tom's Hardware nicely demonstrated the non-linearity of power consumption in their desktop GTX1060 review: http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1060-...

At factory settings the card draws 120W and pushes ~110fps in their 1440p test, but throttling the power limit down to just 60W only reduced it to ~90fps.
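
A quick sketch of the perf-per-watt implied by those numbers (figures rounded from the review):

    stock_fps, stock_watts = 110, 120
    limited_fps, limited_watts = 90, 60

    print(stock_fps / stock_watts)      # ~0.92 fps per watt at factory settings
    print(limited_fps / limited_watts)  # 1.5 fps per watt at the 60W limit
    # -> ~82% of the performance for half the power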

(As an aside, the AMD RX480 comparison shows why people are disappointed with Apple supposedly using AMD Polaris GPUs in the upcoming Macbook Pro refresh)


I just don't understand why Apple seems to prefer AMD. Bad experience with Nvidia's drivers in the old Core 2 Duo MBPs? Does AMD have a better track record?


Apple is a backer of, and heavily invested in, OpenCL. OS X itself leverages OpenCL throughout the system (Quick Look, for example, uses it to make previews faster), and of course FCP/Motion/etc. make heavy use of OpenCL as well.

Nvidia cards are capable of running OpenCL, but they've never performed as well with it as they do with CUDA. AMD has always been the better option there.

Of course Apple could implement CUDA support in their software, but they've never been big on adopting vendor-specific standards they had no part in developing.


You have some outdated information.

Although Apple created OpenCL and gave it to Khronos, their OpenCL support is now among the most outdated.

The future on Apple platforms is called Metal compute.

There were around 6 Metal talks at WWDC 2016 and zero about Khronos technologies.


How does that impact the AMD vs. Nvidia part of the discussion?


It doesn't matter which cards are better at OpenCL, because it's a legacy technology on Apple platforms, most likely never to be updated beyond the current version 1.2 (the latest spec is 2.2).

https://support.apple.com/en-gb/HT202823

Apple develops their own drivers for Metal Compute.


Could be a lot of things:

Power consumption, better driver optimization for non-DirectX APIs, or AMD being able to meet Apple's parts demand.

Apple ha(d) really specific requirements for their machines, so I don't doubt the decision came down to some specification being met by AMD and not Nvidia at the time. Nvidia seems pretty happy cornering the high-end market and can barely keep the 1000 series in stock at the moment.


Alternative options: NVidia is too expensive per unit, or most Apple customers couldn't care less about dedicated graphics. I think the latter is more likely; they don't sell their products on specs.


Do consider that fps itself is a non-linear measure; the linear equivalent is its inverse, milliseconds per frame.
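
A quick sketch using the fps figures from the Tom's Hardware test above:

    def frame_time_ms(fps):
        # Frame time is the reciprocal of frame rate
        return 1000.0 / fps

    print(frame_time_ms(110))  # ~9.1 ms per frame
    print(frame_time_ms(90))   # ~11.1 ms per frame -> only ~2 ms slower per frame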


In which world is the inverse of a linear function not a linear function?


The only one I've ever lived in.

https://www.wolframalpha.com/input/?i=1%2Fx


Still, I've worked with a lot of Apple and Dell laptops and they ALL have some type of overheating issue with their GPUs. Whether they've solved all of those problems here, who knows. But I'm skeptical.



