AMD has a 5x advantage on the price of the CPU alone, not factoring in the price of the motherboard and other components (this is a 4 socket Intel motherboard vs. a dual socket AMD).
I think it's best to say it the way they did. It's one thing to be faster per dollar, but it's another thing to win on performance as well.
From single core, to the same number of cores, to having more cores, the 7742 wins in every case and still comes out on top. That's far more amazing than being 5x cheaper. Everyone has known for a long time that Intel charges an outrageous premium, but it was also the best, so people dealt with it.
Exactly, especially since high margins on those chips are a defining characteristic of Intel. They can match on price if they kill their margin. Matching on performance or power usage? That's where their problem is.
Meanwhile, AMD is making sure to stay low-priced yet ahead for as long as possible, across as many processor generations as possible, to build themselves a stable market share: not just the people who buy whatever is best on a given day, but those who like stability and uniformity and wouldn't have gone with Zen, but now that Zen+ and Zen 2 have confirmed it, just might be tempted.
The single core performance is remarkable given that Intel has always been very strong there.
How is Linux compatibility in case of AMD these days? Intel is really good, if we ignore the need for blobs for their wireless cards. Everything just works, and it is energy efficient.
> How is Linux compatibility in case of AMD these days?
You install Linux on an AMD system and it runs. It's been that way for a decade. If you buy an AMD GPU and choose a wireless card with care, then there really isn't anything to think about when using Linux except maybe needing a firmware blob. AMD is one of the better vendors for Linux compatibility. CPU, GPU, whatever. All of it has first-class open-source support.
It isn't fun like the good old days when things went wrong and people had to learn about the internals of the system to make things run, fighting the hardware every step of the way. Kids these days will have no opportunity to learn how all this stuff hangs together. It could be worse than the time everyone stopped having to learn assembly. We live in an age of moral decay.
I can only speak for first-gen Zen. There's a bug with most motherboards that causes a hard lockup every day or so.
Changing the power-state setting in an updated BIOS fixes the problem but disables boost.
Thread: https://bugzilla.kernel.org/show_bug.cgi?id=196683
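For anyone who lands on that thread: one workaround people have used (besides the BIOS power-supply-idle setting mentioned above) is keeping the cores out of the deepest idle state from userspace. A minimal sketch using Linux's standard cpuidle sysfs interface; the state index is an assumption you need to check on your own machine, and this trades idle power draw for stability:

    import glob, os

    # List the idle states on cpu0 so you can see which index is the deep
    # C-state (names vary by driver: "C6", "ACPI C3", etc.).
    for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
        with open(os.path.join(state, "name")) as f:
            print(state, f.read().strip())

    def set_idle_state_disabled(index, disabled=True):
        # Writing "1" to .../stateN/disable keeps every CPU out of that state
        # (needs root; resets on reboot).
        for path in glob.glob(
                "/sys/devices/system/cpu/cpu*/cpuidle/state%d/disable" % index):
            with open(path, "w") as f:
                f.write("1" if disabled else "0")

    # Example (hypothetical index; pick the deep state printed above):
    # set_idle_state_disabled(2)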
AMD kept leaping ahead in single core while Intel finally got their security mitigations in place. Although Intel was ahead against Zen+, it was not by as much as you imagine, since a lot of the Intel chips were tested with mitigations partially off by then (and then of course there is Intel's insistence on reviews being done with SMT off, which is cute since AMD performs better with it too).
AMD has always had the problem of having weaker CPUs that were better per dollar. I agree they should have highlighted the 5x advantage, but for a long time people have regarded AMD as the weaker, value option; this is what changed this generation and what they are trying to highlight. Finally AMD gets more performance for less money and is no longer just a better value in some applications.
Yes, as I said, pre Core uArch, which launched in 2006 for laptops and 2007 for desktops.
Prior to that, Intel was on the dead-end NetBurst architecture, and the Athlon 64 and its derivatives were the better chips when it came to performance per watt and outright computational power.
Interesting pricing decision by AMD. Given their performance, and no other competition in x86 market besides Intel, surprised they chose to sell at such a steep discount.
Sometimes, such pricing decisions can perpetuate “you are a weaker substitute” perception.
Nobody switches a big contract from one vendor to another because of a 2x ROI disparity. 5x is about the threshold where you start to feel like it’d be irresponsible to your shareholders to not switch, despite the switching costs.
I don't know why your comment was downvoted. This kind of metric is typical for the enterprise (where cap ex is only part of the equation), and Tom's Hardware specifically called out relevance to the enterprise:
> The Geekbench 4 benchmark holds little to no relevance in the enterprise world. Nevertheless, it gives us a small taste of how AMD's EPYC 7002-series can provide enterprises with more bang for their buck.
You are correct that customers that already use both Intel and AMD would likely choose AMD over Intel even with a price difference of 10% if they were deciding on price/performance ratio. However, I think it's likely that AMD is trying to win market share from customers that are currently mostly using Intel. In my humble opinion that strategy is paying off now that more big shops such as Amazon AWS have AMD options.
Lisa Su is on record as talking about revenue / market share being a key focus:[1]
“We are always looking to increase our market share. That’s why we put out great products. As it relates to our market share targets, for server what we’ve said is we can achieve double digit market share from 4-6 quarters from the end of 2018.”
It's a smart decision in the era of public clouds, for sure. On-prem shops will be more hesitant because with things like VMware you can't hot-move VMs between differing CPU architectures. In order to win these customers over you have to have longevity (that you'll still be the price/performance leader in 5 years when you lifecycle your hardware).
The "'weaker substitute' perception" aspect might happen in consumer goods but unlikely to happen when Google, Amazon and Microsoft consider provisioning a new data center.
I think you are wrong. Consumers want to save money, the masses at least. They won't give a shit about these details; most of them don't even know what it means. They pick the cheaper option, across the board. There might be some Intel fanboys with $5,000 machines, but those are the minority.
With businesses it is quite different. Money comes cheap. You can not get fired for buying IBM... To convince companies like Google & Amazon to build a data-center around a one-off delivery from AMD is pushing the boundaries. Now if they are 5 times cheaper, these companies might be more willing to consider betting on this outlandish horse. If it was 10% cheaper, I don't see how they could risk it, at least not at scale and scale is what AMD is interested in. Once they have some buy in at these dumping prices from large data centers, then they will sure raise the price unless Intel follows the dumping price strategy.
Not to mention that there are also other problems for businesses, like vendor lock-in. Consumers don't really have that problem either. They will buy what's cheaper, because they didn't sign a $10 billion discount deal with Intel in exchange for continuing to buy Intel in the future.
> To convince companies like Google & Amazon to build a data-center around a one-off delivery from AMD is pushing the boundaries.
AWS offers AMD instances already, and they are advertised specifically as being AMD. I found them 10% cheaper, with about 30% of the performance of the equivalent Intel instance.
Actual result of a test I did: m5.xlarge vs m5a.xlarge. The m5a.xlarge is 10% cheaper, but I found it takes four times as long to perform a CPU- and RAM-heavy calculation as the m5.xlarge instance.
So no, it being cheaper saves me no money, because the performance is not there yet.
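For context, a sketch of the sort of comparison described above. The workload below is a made-up CPU- and RAM-heavy stand-in (not the parent's actual calculation), and the prices are illustrative on-demand figures to check against the current AWS price list. Run it on both instance types and compare seconds and dollars per run:

    import time

    def workload(n=2_000_000, passes=20):
        # Made-up stand-in: a large list (RAM) plus repeated full passes (CPU).
        data = list(range(n))
        total = 0
        for _ in range(passes):
            total += sum(x * x % 97 for x in data)
        return total

    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start

    # Illustrative on-demand prices (USD/hour); verify against the AWS price list.
    price_per_hour = {"m5.xlarge": 0.192, "m5a.xlarge": 0.172}
    instance = "m5a.xlarge"  # whichever instance this copy is running on

    print("%s: %.1fs per run, ~$%.5f per run"
          % (instance, elapsed, elapsed / 3600 * price_per_hour[instance]))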
Our BigCo catalog contains only a few high-core-count 1x/2x Xeons. Our huge C++ project benefits linearly from hardware performance, yet there is no way one can get anything outside of the catalog. The product is also certified to run only on Intel, so our customers are running only Intel.
The price advantage is less pronounced for desktop CPUs; however, if you buy an AMD Ryzen CPU you are already getting more bang for the buck than buying Intel.
And AMD is destroying Intel on bang for the buck on desktop and high-end compute. Way more power, and still at a lower cost, with massively lower power usage too.
The only area where Intel is not currently being destroyed that way is laptop chips; everything else, from the $100 chip to the $7,500 chip, ends up cheaper than Intel's alternative for more performance and less power usage.
> AMD has a 5x advantage on the price of the CPU alone
Maybe I missed something in your comment, but how did you come up with 5x?
$52,044 / $13,011 = 4.000000000
If instead, I use the numbers from the article, you get:
Intel: $52,044
AMD: $13,900
Which comes out to 3.74x.
If you normalize for performance, then yeah, you get 4.86x. But that doesn’t match up with your numbers ($13,900 vs $13,011). So was it a typo, or are you doing something different?
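For anyone following the arithmetic, here it is spelled out. The Geekbench scores themselves aren't quoted in this subthread, so the performance ratio is left as an explicit assumption rather than a fact:

    intel_cpus_total = 52_044   # total CPU list price of the 4-socket Intel box
    amd_cpus_total = 13_900     # total CPU price of the 2-socket EPYC box (article)

    price_ratio = intel_cpus_total / amd_cpus_total
    print("price ratio: %.2fx" % price_ratio)          # ~3.74x

    # The gap grows if the EPYC box also scores higher. The exact factor isn't
    # quoted here; roughly 1.3x would be needed to land near the 4.86x figure above.
    assumed_perf_ratio = 1.3
    print("perf per dollar: %.2fx" % (price_ratio * assumed_perf_ratio))  # ~4.87x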
The articles will keep flowing for months/years and it'll become conventional wisdom that Epyc always pummels Intel. Intel marketing must be on Defcon 1.
This happened to Intel with Athlon and Opteron back in the early 2000s. Intel came back swinging pretty fast. They might this time too. In any case the consumer wins. Intel needed a swift kick in the butt.
They came back fast by bribing OEMs to only sell Intel and were fined for it by the EU because it was an antitrust violation. It's unlikely they can repeat that.
Intel had seriously better, more reliable hardware at that point. You couldn't pay me to buy AMD after all the hardware I lost to thermal issues.
HN likes to believe that AMD is and was always the best if only Intel hadn't used monopoly tactics but the reality is that it's not as simple as that.
That being said, I'm truly happy with what AMD is delivering now and I hope they can sustain it for years to come. My next workstation is probably going to be AMD for sure.
As for Intel swinging back, if they had something ready, I believe they wouldn't have allowed a competitor to get close and surpass them like AMD is doing. Intel has gone through so many boring architectures at this point that I doubt it's just "strategy" on their part. They must be really struggling to deliver something competitive now.
Or maybe if Intel wasn't a monopoly AMD could've solved the heat issues better and faster. It's hard to play fair and win when the other one is cheating.
Actually, that's proof that the Athlon will still do the correct thing all the way to its death, whereas the Pentium crashed and thus also saved itself. Also, the P4 has more thermal mass due to its heatspreader, and therefore can sustain non-HSF operation longer.
Motherboards/CPUs in those days didn't really have thermal shutdown failsafes, in fact, we had to rely on the motherboard to report CPU temperatures, there weren't any built into the CPU itself. What we did have was shutdown on CPU-fan fail.
According to Linus Tech Tips, the Ryzen 3xxx series is already neck and neck with Intel at CSGO (which is known to normally heavily favor Intel; as a CSGO player this is really impressive), and appears to otherwise be competitive in other games.
Intel should be very scared. Not to mention the way Intel has crippled different models of their lineup just to try to push people to a more expensive CPU, and seems to have otherwise stalled out in general CPU performance.
Thanks to the price drop on Zen+ due to the Zen 2 release, I recently built a Ryzen 5 2600 system as my new gaming rig. My old gaming machine had a i5-7400, and in pretty much every game the CPU bottleneck has disappeared, especially newer titles. Even older single-core-heavy games like CSGO have greatly improved with the new CPU. Basically every game in my collection can now run at 1440p60 (my monitor's native res) instead of being limited to 1080p60. Switching from a GeForce 1060 to a RX 580 allowed me to ramp up the quality at that resolution as well (in most games).
Further, now that Zen+ support in Linux and OpenBSD has improved, it is a stable enough platform for those OSes. My new rig is doing double duty as my main workstation now thanks to that support along with switching to an AMD GPU.
I honestly don't think Intel and Nvidia have a place in my home and office anymore. My media server is an ancient Dell PowerEdge T310 and it's still going strong, but it is limited on drive capacity and has issues transcoding more than one 1080p+ stream at a time, so I believe I'll be retiring it soon in favor of a Zen based custom build.
>>My old gaming machine had a i5-7400, and in pretty much every game the CPU bottleneck has disappeared
I'm not being funny - have you ever found any kind of bottleneck with that CPU? I'm still rocking an i7-4970K + 1080Ti and I have never ever seen the CPU being anywhere close to max load in any game, and I'm mostly playing in 1440p@60fps.
It was mostly with single-core-heavy (i.e. older) games. Newer multi-core games did very well on the i5, but better still on the six core Ryzen. I mostly play older games so my experience is likely different from most gamers.
Edit: An example from memory, the very old game Crysis stuttered badly at 1440p with the i5, it was only playable at 1080p (both resolutions with all graphics settings on highest/ultra); this was with the GTX 1060. When I first built the Ryzen system I carried over my GTX 1060, and was able to play that game at 1440p60 smooth as butter. Turning off vsync I was getting close to 100fps, whereas with the i5 and vsync off I was lucky to spike above 70, with average fps somewhere around 30-35. Moving to the RX 580 actually dropped the fps cap slightly with vsync off, but still allowed me to play at 60fps locked for a smoother experience overall.
In general, this was the same experience with all of my older games (Skyrim, Far Cry 2, Mirror's Edge, Rust, etc). With newer multi-core games like Doom 2016 and Far Cry 4 I never had any CPU bottleneck on the i5, and the new system marked an increase in graphic quality without sacrificing smoothness and resolution.
Does anyone know of good articles about parallelism in modern game engines? All I've been able to find is a couple of slides in a deck about Doom 2016.
I went from an i5-7600K to an i7-8700K and noticed decent performance increases with my GTX 1070 (CS is notoriously CPU bound). I've since upgraded to a 1080 Ti; I don't know how much performance was from the CPU upgrade and how much was due to the extra cores the rest of the system could use without interfering with the game.
No, it's running DDR3 at 2333MHz. I mean, I know it's not the latest and greatest, but just as an example it plays Control at max settings at 1440p at a locked 60fps. That's why I'm curious what kind of game could make it CPU-constrained, because I just haven't run into any yet. But then admittedly I haven't played any Warhammer games either.
Keep in mind too that your i7-4790K is much more powerful than my newer i5-7400 was. It’s almost on par with my Ryzen CPU. If you had a Haswell i5 instead of an i7 you’d likely have bottlenecks like I did.
GP said they're using an i7-4970K; it is likely they meant the i7-4790K. I'm using the i7-4770K, which is missing a certain virtualization hardware extension that this K-series part does not have but the other ones in the series do have. Which pisses me off to no end. For that fact alone, I won't buy an Intel CPU.
To save non-PC-gamers a Googling: "CSGO" here refers to "Counter Strike: Global Offensive", a popular multi-player first-person shooter. It's part of the Counter-Strike series, which traces its roots back to the year 2000.
Counter-Strike was super fun during the time it took form (the beta versions from 0.5 to 5.2 ish). They tried a lot of different things, like dual-wielding machine guns. The community was awesome also.
Beta 0.1 will always have a special place in my heart playing it for hours at the LAN shop down the street from me. Bonus was being able to troll people by flying my dead-player spectator through doors and opening them. People would get really mad - it was awesome.
I am surprised most people on Hacker News don't know, but the CSGO engine is optimized for extreme frame rates. When played competitively, players go for low resolution and ultra-high frame rates, so the bottleneck isn't the GPU; it becomes the CPU.
I am hard-pressed to say that it looks like a suitable benchmark. It's hard to find a desktop CPU released in the past five years that can't provide good performance for a game like that.
People still benchmark CS:GO because it is one of the few games that is CPU bound and still actively played today. It's one of the few games where single-core performance and memory latency have a big impact on overall performance.
Intel always had a huge lead on CS:GO until Zen 2, where the gap finally closed up, which is why a lot of people refer to the CS:GO benchmark when talking about Zen 2 in the context of gaming.
Actually, a lot of pro players are streamers, so they at least need a good CPU for game + stream. Some even have a setup with two computers.
Actually, streaming games (especially new ones) still needs at least an i7.
Some CS:GO players believe (and maybe it's true) that a high FPS, like around 300, gives them an advantage, so they will try to make the FPS number as large as possible. Such a gamer will probably pay a ton more money to get from 250 FPS to 300 FPS. They would also have monitors and mice/keyboards with low latency.
So from my reading, these "elite/full-time" gamers will still buy an expensive CPU if they get 10 extra frames (but forget that in real life your PC has more programs running in the background, so reality may not match the benchmarks).
For people wondering why: it mainly has to do with the fact that mouse polling rates are tied to FPS, meaning that you still have a heavy incentive to go above the FPS your monitor can handle.
This video (https://www.youtube.com/watch?v=hjWSRTYV8e0) does a really good job explaining frame latency in CSGO and why you can benefit from running at FPS significantly higher than your refresh rate, and why vertical sync doesn't solve it like it should. The game is free now, so I'd highly recommend trying it out if you're doubtful about a smoothness difference between 100 and 250-300 FPS.
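Rough numbers behind that claim, under a deliberately simplified model (input is assumed to be sampled once per rendered frame, and everything downstream of the game is ignored):

    # Frame time shrinks with FPS, so the newest input baked into any frame is
    # fresher at high frame rates, even if the monitor only shows 60 of them.
    for fps in (60, 144, 300):
        frame_time_ms = 1000.0 / fps
        print("%3d FPS -> frame time %5.2f ms" % (fps, frame_time_ms))
    # 60 FPS -> 16.67 ms, 144 FPS -> 6.94 ms, 300 FPS -> 3.33 ms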
The real reason is: I only really play CSGO and PUBG, and I have $X to spend on a CPU. Previously, if you looked at the market tier-to-tier, AMD vs. Intel, Intel was always the stronger performer at CSGO. Why spend $X and get 250fps when you can spend the same and get 300fps?
I'm not sure whether they update the minimum system requirements alongside the game. Keep in mind, CSGO is from 2012 and has received numerous updates, and looks, feels and sounds completely different today.
When I first bought my main laptop in 2017, I could play somewhat smoothly at 1080p, now I have to use 720p.
It's an interesting side effect: before Steam and other content platforms, game titles iterated much faster.
EDIT: Then again, when I think about CoD, I'll retract my last sentence.
That’s not the only game LTT uses to benchmark with, but they include it because of its popularity and I suspect because it normally runs at such insane frame rates that differences show up more readily—much easier to tell 220 FPS vs 240 FPS than 59.5 vs 60.
I'm sorry, I'm just not going to buy a 2080ti and run it at 1080p with less-than-maxed out settings, and if I were that crazy, I wouldn't care much about the perceptually irrelevant difference between 110fps and 120 fps (and to be honest, you may have even less relevant differences), and if I found one or two cases where the difference was possibly meaningful, I wouldn't be likely to run with 0 background programs.
I mean, there are cases where it'll be barely noticeable, but they're so vanishingly niche that you're not simply a gamer at that point, you're some really specialized breed.
A perhaps more relevant case is stuff like Quick Sync and AVX-512 - those kinds of features really can make a night-and-day difference, if you use them. Perhaps less critically, Intel's memory latency is still better, so maybe for some L3-cache-size-insensitive but latency-sensitive workloads you'll find something?
Marketing aside: Is the gaming IPC that /practically/ relevant? I mean, I'm gaming on a early-2012 Sandy Bridge CPU (Xeon E3-1235 aka "cheap i7-2600"), and it feels like my late-2013 R9 290 is the limiting factor these days, not the CPU.
And I am actually considering to combine my NAS and workstation/gaming rig into one "big" Ryzen 3xxx or 3rd gen TR machine and banish it into the basement. Bonus: The NAS finally gets ECC, and if I put in my old GPU and a new GPU, I can have two VMs for game streaming (tried parsec this weekend, and it was really nice!). Only drawback I can see is that I'd have to carry the rig upstairs once every blue moon when I play VR games.
While the average frame rate is the same, you see more stutters as the CPU chokes on feeding the GPU in certain cases. So the max frame time is much longer.
At least that’s what I saw on my 2600k@4.5GHz before upgrading to a 1700X@4GHz with the same 1080 GTX running mostly at 4k.
It seems like while the per-core IPC was mostly similar, maybe 10-20% different, the peak memory bandwidth was bottlenecking me.
The lower bound is where it makes a big difference for most people who are GPU bound at the top end. The 1% and 0.1% slowest frames when playing a game show huge differences based on CPU and RAM. For the smoothest gaming you need to remove all the potential bottlenecks in the system; that's one of the reasons gaming has always pushed the boundaries on CPU/GPU.
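If you want to pull those numbers out of your own captures: a small sketch using one common definition of the lows (average FPS over the worst 1% / 0.1% of frame times), fed here with made-up frame times in milliseconds:

    import statistics

    def low_fps(frame_times_ms, fraction):
        # Average FPS over the worst `fraction` of frames (0.01 -> "1% lows").
        worst = sorted(frame_times_ms, reverse=True)
        k = max(1, int(len(worst) * fraction))
        return 1000.0 / statistics.mean(worst[:k])

    def report(frame_times_ms):
        avg = 1000.0 / statistics.mean(frame_times_ms)
        print("avg %.1f FPS | 1%% low %.1f | 0.1%% low %.1f"
              % (avg, low_fps(frame_times_ms, 0.01), low_fps(frame_times_ms, 0.001)))

    # Made-up example: mostly 7 ms frames with a few 30 ms hitches.
    report([7.0] * 990 + [30.0] * 10)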
It's not that relevant. There are virtually no plausible gaming setups where you'd be likely to actually notice the differences where Intel actually wins. You need a very fast GPU, a high-refresh screen, specific games, large differences (because 100 fps vs. 150 fps is much, much harder to see than, say, 40 vs. 60 fps), low resolution and settings, and few enough background programs that you don't cause issues on that side.
It's measurable, sure; but practically relevant, no (at least, the 9900K vs. 3900X decision isn't; you will perhaps notice a little more easily the upgrade all the way from Sandy Bridge... but even that...).
Oh, which games? The part of me who's still a child is looking for a GOOD reason to justify the spending to the part of me who is now a responsible adult (and to my SO).
Not even super new games, but games like Deus Ex: Mankind Divided and No Man's Sky have better frame rates on my buddy's Ryzen 2 machine with otherwise identical specs than on my i5-3450.
The first game where I saw the difference was Battlefield 1.
I think a lot of cross-platform games have been optimised to use lots of CPU cores, since the current consoles have 8. If you're running an older CPU with fewer cores, even if they're individually more powerful, you start to see performance drag; that was especially the case for BF1.
Since you mention gaming IPC (which isn't really a main factor in server CPU lineups), I would say Intel is still way ahead of AMD in terms of mobility lineups.
It's going to be interesting to see the Zen 2 mobile chips.
You are correct - I forgot what a massive market laptops are. I wouldn't personally consider an AMD-based laptop just yet, hopefully that will change soon!
I just bought my first desktop computer in a decade, in part for gaming, and bought all AMD.
CPU was a no-brainer: unless you really really care about top of the line single core performance AMD is much cheaper.
GPU was a bit trickier: again, unless you really, really care about top-of-the-line GPUs, the 5700 (XT) is better value. RTX is an interesting sticking point. In the end, since we're confident AMD is doing something here (because they are providing the hardware for the next console generation, which is going to support RT) and we should know about it in about a year, and since first-rev tech isn't always a good buy, I'm happy to delay getting on the RT train until the specs and support even out a bit.
tl;dr: unless you want and can afford to overpay for the best of the best, AMD is incredibly competitive in gaming.
I think the current Zen 2 processors would tie Intel in games now if optimization efforts equalized, which they may, since the CPUs are so popular and the delta is very, very small. Get some 3600 CL16 RAM, which speeds up the Infinity Fabric interconnect on Ryzen, and the gaming difference is already negligible.
AMD CPUs and GPUs are in the PlayStation and Xbox, and I guess that won't change next generation. Game devs optimize performance for consoles first, I believe, so it might favour AMD in games too, albeit indirectly.
It probably doesn't matter that much in consumer games unless you meant that as a euphemism for single core performance or you know it will be Intel chips in the next generation of gaming consoles.
You're thinking of Zen/Zen+. Since Zen 2, along with all the security mitigations Intel has had to include in their chips, AMD is not running behind at gaming anymore.
And I'll still buy Intel! It's just always the safest, most reliable bet for me. (We only use Xeon systems here with ECC. We value stability and predictability.)
Unlike Intel, AMD hasn't restricted the use of ECC RAM to a specific range of CPUs since Ryzen. As long as you have an AMD motherboard that supports ECC RAM, the CPU will take it.
You could technically build a cheap home server with a low-end Ryzen that uses ECC RAM if you get a motherboard that can take it.
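If you do build one, it's worth checking that ECC is actually active rather than the board merely accepting the DIMMs. A minimal sketch that reads Linux's EDAC counters; it assumes the relevant EDAC driver (amd64_edac on Ryzen) is loaded, and no mc* directories means ECC reporting isn't enabled:

    import glob, os

    mcs = sorted(glob.glob("/sys/devices/system/edac/mc/mc*"))
    if not mcs:
        print("No EDAC memory controllers found; ECC reporting is not active.")
    for mc in mcs:
        with open(os.path.join(mc, "ce_count")) as f:
            corrected = f.read().strip()
        with open(os.path.join(mc, "ue_count")) as f:
            uncorrected = f.read().strip()
        # ce_count = corrected errors, ue_count = uncorrected errors since boot.
        print("%s: corrected=%s uncorrected=%s"
              % (os.path.basename(mc), corrected, uncorrected))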
What's more stable and predictable about Intel than AMD? I'm genuinely curious. If you are upgrading you will have to buy a new motherboard anyway, and thus it's not like Intel has a particular advantage there.
AMD has way better offering for the server market since every chip can handle ECC RAM.
Ehh, technically there are exceptions: the APUs like the 2400G, 220GE, and 240GE apparently have a "broken" implementation of ECC on "all" motherboards, though many of those motherboards do advertise working ECC with the 'Ryzen PRO' variants of AMD's APUs.
I'm not defending Intel, and I don't even have a particularly positive opinion of Intel overall, but I don't know how accurate this test is considering the difference in Linux kernel versions used in this benchmark. According to the article, the Intel system is running a 3.10 kernel (it doesn't say which distro, but I'm pretty sure it's RHEL 7 or CentOS 7), while the EPYC system is running Ubuntu 19.04 with the 5.0 kernel. Also, I'm not sure if the 3.10 kernel they're running on the Xeon system has been patched for Meltdown and Spectre, which could also affect the performance. I doubt it would amount to a significant difference in performance, but benchmarks like this usually use the same OS and software stack for both systems.
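On the Meltdown/Spectre question specifically: kernels that expose it (mainline 4.15+, and the RHEL 7 3.10 kernels are said to backport it, which is worth verifying on the exact build) report mitigation status under sysfs, so it's a quick check on the benchmark machines:

    import glob, os

    paths = sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*"))
    if not paths:
        print("Interface not present on this kernel; check dmesg instead.")
    for path in paths:
        # Each file says whether the CPU is affected and which mitigation is on.
        with open(path) as f:
            print("%s: %s" % (os.path.basename(path), f.read().strip()))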
Hence the reason I said I doubt it would make much of a difference, but the 5.0 kernel generally outperforms 3.10. 3.10 has already been deprecated and isn't updated anymore. The EPYC 7742s are still going to trounce the Xeons, but the Xeons would probably have better real-world performance on the 5.0 kernel and a recent distro versus kernel 3.10 and a dated distro like RHEL/CentOS 7. As I mentioned before, typically the same OS version is used in these types of benchmarks.
RHEL backports the hell out of their kernels. They are monstrosities. They would not make any money if their OS performed significantly worse than the competition. Remember how much RHEL costs. And Phoronix routinely does these performance tests across kernel versions. There is not that much difference. I think in one of his recent tests he even went back to Ubuntu 12.04.
2. The Dell 840s are latest-generation servers (https://www.dell.com/en-us/work/shop/poweredge-rack-servers/...), so I doubt that they're not optimized to be competitive with the latest operating systems. After all, this is a real-world machine one buys, so it's supposed to be high-performing.
All in all, the performance improvement seems to be proportional to the single-thread performance improvement (however small) compounded by the higher core count.
Of course, it can't be excluded that the difference is at least partially caused by the different setup.
The benchmark you linked to somewhat misses the point I'm trying to make. It's specifically comparing the 4.x branch (4.12-4.20) to 5.0 on the same distribution. The distribution is also a factor in performance. Check out this benchmark comparing CentOS 7 to other distributions on the same hardware:
CentOS 7 is slower in most of the tests compared to a recent version of Ubuntu (17.10 was used in this comparison), which itself is slower than Clear Linux in most tests.
This might actually be a win for intel, 3.10 may not have all the spectre and meltdown patches backported to it (though I suspect if it's RHEL 7 that it does).
This is big because to a lot of people at lower scale, price doesn't matter as much as compute density. AMD now excels in all three relevant criteria: density, initial cost, and running cost (e.g. power).
OEMs and data centres are the big accounts. Power consumption is crucial to battery life in remote devices (in this case, laptops), and electricity is roughly 50% of the cost of a supercomputer over five years (a lot of it is for cooling, and it's presumably less for data centres used for cloud or storage).
Well... inertia; some software relying on actually having an Intel, or being specifically optimized for them and them only; more offerings of Intel-CPU-based servers; more server designer experience with Intel rather than AMD; legal and perhaps sometimes not so legal lobbying by Intel; high-level-personal and company-collaborative relations of Intel with governments, academic institutes and large companies.
Availability is one problem. I live in Greece and wanted to build a medium workstation back in April. There were only two available Epyc processors in the market and only two mobos. I ended up building a rig with two Xeons in which case there were hundreds of available models/motherboards. It was either that or the option of buying from a German shop.
The 8180M is a "money is no object" processor so people who buy it won't really care about price/performance. It would be more interesting to compare Rome against something like a 6212U or 2x6252.
I wonder if there is any research into price elasticity of CPUs at the high end. How high could you price a marginally faster one before you stopped selling any?
Intel and the server OEMs definitely have this information but they're not going to share it. Considering the costs of terabytes of RAM and software like HANA or Oracle, there's room in the market for some pretty expensive CPUs.
"The Geekbench 4 benchmark holds little to no relevance in the enterprise world. Nevertheless, it gives us a small taste of how AMD's EPYC 7002-series can provide enterprises with more bang for their buck."
And then they go right ahead and make 'comparisons' based on irrelevant workloads. Not saying AMD isn't faster or better on price/perf, but please do the benchmarks properly. The same can be said of comparisons of POWER9 vs. AMD/Intel.
People like to say "synthetic benchmarks have no relevance to the real world" to sound smart, but that isn't ever entirely true. They are all relevant to some degree. Somewhere in the world is a real task that looks like any given synthetic benchmark. The danger is that the real-world use cases that show some relevance to the synthetic benchmark may be more or less rare.
In this case, there are whole suites of benchmarks and real applications that have been compared between the new EPYCs and the Xeons, and the only time Xeon has been on par with EPYC that I have seen so far is AVX-512-specific workloads. It is possible there may be some workloads where the worse RAM latency of EPYC is a penalty that cannot be overcome by the larger L3 cache size. That used to be an issue on the previous-gen EPYC, but the Infinity Fabric is faster now and the L3 caches are twice as big.
While nobody will decide which server to buy based on Geekbench alone, the cost/performance difference will be reflected in other benchmarks. With a 5x advantage for AMD, I doubt there will be any single benchmark in which a similar Intel box will perform better than the AMD one.
Would it be better if they ran the same kernel optimized by Intel|AMD? Isn't that what used to happen when everything was compiled with Intel's compilers?
Well, everything about the install is different, so it can't help in making a sane comparison! And the per-core difference isn't that big, so that could be some of it - but even then the dollar cost difference is huge.
I thought that too; it could easily cover the performance difference. However, the AMD is so much cheaper that it's still an epyc win for them even without a performance advantage.
This is not so much the launch of EPYC as it is of a new generation of them. They're just now relevant enough for those who don't just want maximum cost effectiveness but also compute density.
This is a terrible performance summary. Let's try to normalize it in terms of Geekbench points / $:
5X. That's incredible.