Super-curious: have you seen much customer-driven demand for AMD EPYC, i.e. for reasons other than Xeon availability? AMD processors have been affected to a lesser degree by speculative execution exploits, they're cheaper, and (if I'm not mistaken) they offer more PCIe lanes, etc.
Also, do you expect 7nm EPYC to do well in your space?
Thanks!
Depends on the market segment. I've seen a lot of demand for EPYC in workloads that are sensitive to memory bandwidth. There's just so much on offer when you max out 8 memory channels.
They are cheaper due to the fabrication process. The PCIe lane story is the stuff of fanboys. It comes at a cost: power and heat.
Secondly, anyone looking at an NVMe box should be looking at AMD, in my opinion. The trick is that if you are running a VM farm, mixing Intel and AMD isn't the best idea, as you all know.
I see EPYC ticking up fast.
In terms of exploits like Spectre/Meltdown, I'm pretty sure that for the exploits AMD claimed they were not vulnerable to, they ended up pushing out microcode anyway. So it's a moot point.
I HAVE come across a lot of customers who have DOUBLED their core count due to Spectre/Meltdown mitigations, and they are attracted to AMD's high-core-count, lower-cost options. But remember, the power draw is different, and always test/PoC!
> The PCIe lane story is stuff of fanboys. It comes at a cost, power and heat.
Could you unpack this a bit? Specifically, I'm curious whether the cost is a premium per lane (e.g. W/lane greater on AMD than on Intel) [1]. Also, is that cost at all affected by the I/O volume, or merely by the CPU being power-hungry overall?
[1] Of course, that assumes everything else being equal, which it can't be, as well as an equal proportion of PCIe utilization, which is unlikely.
I've had a few customers test AMD; they found a higher operating temp and determined it was due to higher power consumption. On paper, you get more lanes at a lower TDP with AMD. In practice, as always, your results may vary. Test!
PCIe lanes and counting them is funny math. Do your homework on system boards, how they communicate, and the tax of moving information between processors.
However, I would say their tests were short, and AMD processors have 3 power operating modes. There was also a neat blog posted somewhere (I think on here...) a little while back suggesting that the AMD procs did not need to run at advertised power. It was about compile times and how much less power still resulted in good times. Those were consumer-grade Ryzen chips being tested, though.
Unfortunately, higher temperature says less about power and more about thermal design (often of the overall system and not just the chip).
> On paper, you get more lanes at a lower TDP w/ AMD.
I was hoping you (or anyone) had at least some real-world anecdata.
However, the theoretical power cost being lower suggests that if there is a premium in practice, it's unlikely to be significant.
> PCIe lanes and counting them is funny math. Do the homework on system boards
It's not that funny. Latency "taxes" are certainly a concern for some workloads, but, ultimately, if there's not enough bandwidth to get the data to the CPU, such that it might end up idle, that can trump any tax. The difference between 40 and 128 lanes of PCIe 3.0 in transferring 64MiB is on the order of 1ms.
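To make that "on the order of 1ms" claim concrete, here's a back-of-envelope sketch of the transfer-time math. It assumes roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane (8 GT/s with 128b/130b encoding) and ignores protocol overhead, DMA setup, and contention, so treat it as illustrative arithmetic rather than a benchmark.

```python
PER_LANE_MBPS = 985  # approx. usable MB/s per PCIe 3.0 lane (assumed)

def transfer_ms(size_mib: float, lanes: int) -> float:
    """Idealized time in milliseconds to move size_mib MiB across `lanes` lanes."""
    size_mb = size_mib * 1.048576  # MiB -> MB
    return size_mb / (PER_LANE_MBPS * lanes) * 1000

t40 = transfer_ms(64, 40)    # ~1.70 ms
t128 = transfer_ms(64, 128)  # ~0.53 ms
print(f"40 lanes:  {t40:.2f} ms")
print(f"128 lanes: {t128:.2f} ms")
print(f"difference: {t40 - t128:.2f} ms")  # ~1.2 ms
```

So the gap between the two configurations for a 64 MiB transfer works out to a bit over a millisecond, consistent with the figure above.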
Finding a mobo that allows access to all the lanes might be more challenging when there are 128 than when there are 40-48, but I expect the popularity of NVMe to reduce that challenge somewhat.
OTOH, it seems EPYC uses half of those lanes for communication between CPUs, so the usable lane count doesn't go up for 2S vs. 1S; perhaps the comparison is really 128 lanes vs. 96 lanes.
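The lane arithmetic above can be spelled out as a small sketch. It assumes each EPYC package exposes 128 SerDes lanes and that, in a 2S system, 64 lanes per socket are repurposed for the socket-to-socket (xGMI) links; the 96-lane figure for a 2S Intel comparison assumes 48 lanes per Xeon socket, per the discussion above.

```python
LANES_PER_SOCKET = 128   # assumed: SerDes lanes per EPYC package
XGMI_LANES_2S = 64       # assumed: lanes per socket consumed by inter-socket links

def usable_pcie_lanes(sockets: int) -> int:
    """Usable PCIe lanes in a 1S or 2S EPYC system under the assumptions above."""
    if sockets == 1:
        return LANES_PER_SOCKET
    # each socket gives up XGMI_LANES_2S to talk to the other socket
    return sockets * (LANES_PER_SOCKET - XGMI_LANES_2S)

print(usable_pcie_lanes(1))  # 128
print(usable_pcie_lanes(2))  # 128 -- no gain over 1S
print(2 * 48)                # 96 -- hypothetical 2S Xeon comparison point
```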
Yes, latency vs. throughput: the main idea behind GPU computing as well. It worked well there, and CPUs are increasingly going to sacrifice latency for throughput too.
Do you have a "relevant" chunk of customers that are really looking for the high-density PCI-Express connectivity?
Are the 128 lanes per system a feature that actually draws in users with real world demands or is this the wrong thing to focus on?
To me this wasn't very surprising.
It's well understood in the more technically inclined enthusiast community that underclocking Ryzen yields tremendous efficiency improvements.
Famous overclocker "The Stilt" did a great analysis on Ryzen's launch day in 2017: https://forums.anandtech.com/threads/ryzen-strictly-technica...
One of his benchmarks showed an almost 80% efficiency improvement when underclocking an R7 1800X to 3.3GHz, which is just above Epyc's maximum boost frequency.
Since Epyc is almost the same silicon as Ryzen 1st Gen (B2 stepping instead of B1), the chips should have almost identical characteristics.
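For anyone unfamiliar with what an "efficiency improvement" means here, it's performance per watt. The numbers below are entirely hypothetical, chosen only to illustrate the shape of the calculation; they are not The Stilt's measurements.

```python
def perf_per_watt(score: float, watts: float) -> float:
    """Efficiency metric: benchmark score divided by package power."""
    return score / watts

# Hypothetical figures for illustration only (not from the linked analysis):
stock = perf_per_watt(score=1600, watts=180)        # chip at stock clocks
underclocked = perf_per_watt(score=1400, watts=88)  # same chip underclocked

improvement = underclocked / stock - 1
print(f"{improvement:.0%}")  # with these made-up numbers, ~79% better perf/W
```

The point is that a modest drop in score combined with a large drop in power yields a large perf/W gain, which is the effect the underclocking results demonstrated.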
Unfortunately, I'm not aware of any similar detailed analyses on recent Intel Core processors to compare.
Samsung's low-power manufacturing node (as licensed by GlobalFoundries) used by AMD has often been cited as the specific reason for the steep efficiency curve (and the relatively low upper end compared to Intel), but the general trend is the same for almost all chips.
On the other end of the spectrum, overclocker der8auer measured about 500W draw in Cinebench when overclocking the Epyc 7601 to around 4GHz: https://redd.it/92u6db
> Are the 128 lanes per system a feature that actually draws in users with real world demands or is this the wrong thing to focus on?
I'm going to go out on a limb and suggest (based on my own experience[1]) that most users are too ignorant to know that this might be something that they want or would benefit from.
Some of us have always demanded more I/O bandwidth (even if it meant 4S servers), but typically with a price limit.
I do, however, suspect that additional demand could materialize in the form of NVMe slot count.
[1] particularly with so many potential employers being categorically cloud-only, they don't even want to know about the underlying hardware or what it's capable of.
> In terms of exploits like Spectre/Meltdown, I'm pretty sure the exploits AMD claimed were not vulnerable, they ended up pushing out microcode for anyway. So its a moot point.
But even in the scenario where the microcode actually did incorporate some "interesting" changes, they haven't impacted performance at all. So this is basically the world's biggest ever design win at this exact moment.