
Unfortunately, higher temperature says less about power and more about thermal design (often of the overall system and not just the chip).

> On paper, you get more lanes at a lower TDP w/ AMD.

I was hoping you (or anyone) had at least some real-world anecdata.

However, the theoretical power cost being lower suggests that if there is a premium in practice, it's unlikely to be significant.

> PCIe lanes and counting them is funny math. Do the homework on system boards

It's not that funny. Latency "taxes" are certainly a concern for some workloads, but, ultimately, if there isn't enough bandwidth to get data to the CPU and it ends up sitting idle, that can trump any latency tax. The difference between 40 and 128 lanes of PCIe 3.0 in transferring 64MiB is on the order of 1ms.
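A rough back-of-the-envelope check of that 1ms figure, assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane and a payload striped across all lanes (the exact per-lane figure and overheads will vary in practice):

```python
# Rough transfer-time estimate for 64 MiB over PCIe 3.0.
# Assumes ~985 MB/s usable per lane (8 GT/s with 128b/130b encoding,
# minus some framing overhead) and full striping across lanes.

PER_LANE_MBPS = 985               # approximate usable MB/s per PCIe 3.0 lane
PAYLOAD_MB = 64 * 2**20 / 1e6     # 64 MiB expressed in MB

for lanes in (40, 128):
    bandwidth = lanes * PER_LANE_MBPS       # aggregate bandwidth, MB/s
    ms = PAYLOAD_MB / bandwidth * 1000      # transfer time in milliseconds
    print(f"{lanes:3d} lanes: ~{ms:.2f} ms")

# Prints roughly:
#  40 lanes: ~1.70 ms
# 128 lanes: ~0.53 ms   -> a difference on the order of 1 ms
```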

Finding a mobo that allows access to all the lanes might be more challenging when there are 128 than when there are 40-48, but I expect the popularity of NVMe to reduce that challenge somewhat.

OTOH, it seems Epyc uses half of those lanes for communication between CPUs in a two-socket system, so the usable lane count doesn't go up for 2S vs 1S; perhaps the comparison is really 128 lanes vs 96 lanes.



Yes, latency vs. throughput, the main idea also behind GPU computing. It worked well there, and CPUs are increasingly going to sacrifice latency for throughput as well.



