AMD is determined to get its rightful datacenter share (nextplatform.com)
206 points by ChrisArchitect on March 6, 2020 | 115 comments



My biggest concern in moving to AMD is support for profiling and system visibility. On Intel, there are many more performance counters exposed, and Intel themselves make great tools (Vtune, Intel PCM).

AMD has a profiler, but it is nowhere near as capable as Vtune.

AMD has ways to do some things that Intel's PCM tools do (like monitor memory bandwidth), but they are exceedingly awkward (eg, only option is producing a CSV, no live output). And there are lots of things which are just missing (like support for monitoring NUMA fabric bandwidth, PCIe bandwidth, power, etc). I gave a talk at AMD in Austin last fall where I beat them up about some of these things.

Oh, and I'm up to over 260Gb/s of 100% TLS traffic from an AMD Rome :)


I think I see a comment to this effect on every post about AMD, that their software ecosystem isn't as advanced.

But, isn't it a pretty narrow market that benefits from Intel's tooling here?

Startups running their entire business on JITed runtimes on top of multiple layers of virtualization in cloud providers aren't using this. Enterprises with enough resources to revise their entire stack to work on ARM (Cloudflare, Microsoft, Google, Amazon) likely have the internal resources and skills already to build the missing tooling and profile statistically using massive fleets of servers.

How large is the target market of Intel purchasers that are large enough and performance sensitive enough that it's profitable to micro-optimize assembly, and yet not large enough that it's a rounding error to build similar tooling for AMD x86-64 or ARM or otherwise?


It's not a question of just building tooling. It is a question of AMD having fewer performance counters. So even organizations like Google with tons of SWEs can't do anything if there are no performance counters, or if the performance counters are not exposed.

And we (Netflix Open Connect) kind of fit that middle ground that you're talking about.


Any examples of missing counters that are essential? I am curious because I rarely see people care about things further than TLB, L1, L2 misses (which exist on AMD).


> It's not a question of just building tooling. It is a question of AMD having fewer performance counters. So even organizations like Google with tons of SWEs can't do anything if there are no performance counters, or if the performance counters are not exposed.

Have you tried JTAG?


Same applies to GPU debugging tools, and that isn't a narrow market, with HPC, FinTech, game engines, machine learning.


If you have a large enough problem space that you have the capacity to develop a 'bench' of people to tackle deep analysis, you probably should do so, rather than relying solely on tools to tell you what's going on. It's not enough to know how to do something, you have to have done it routinely enough that when it really matters you don't treat it like a rehearsal.

> How large is the target market of Intel purchasers that are large enough and performance sensitive enough that it's profitable to micro-optimize assembly

I'd like to think that if a large customer picked up the phone, called Intel, and asked why the new processors they got are slower than the old ones, asking Intel to send someone to take a look would not be outside the realm of reasonable requests. But I'm not in a position to buy millions of dollars' worth of gear from Intel, so I'll have to wait for someone else to tell me what happened.

If they bother to send someone out, that person would probably like to have - or even insist on - some decent tools. Reliable and informative, if not necessarily friendly enough for consumers.

I wonder to what extent Intel's tools prevent such customer visits, versus being made to give small and medium-sized customers the feeling that they have some secret weapon in using Intel, rather than the reality of one. A pretty collar for the pig to wear to the fair.


Compared to both Intel on the CPU side and Nvidia on the graphics side, AMD has always been lacking in the software department. Hopefully they can fix that now that they're making some money.


Despite this, I find it monumentally easier to get drivers and software off AMD's website, compared to Intel.

And then there's also "Intel is Removing End of Life Drivers and BIOS Downloads"[0]. Great: if I didn't save them, I need to go to a 3rd-party site, run the risk of getting malware, have to compare hashes, etc., etc... This is, in my opinion, an absolute no-go for a vendor, and it instantly makes me want to drop them. Which I did; all the PCs my clients get from me are AMD systems now.

AMD is a serious vendor, you can even get the old ATI RAGE drivers off their website.

[0]: https://www.bleepingcomputer.com/news/hardware/intel-is-remo...

EDIT: The linked tweet[1] in the article mirrors my sentiments on this topic almost perfectly:

> "DEAR COMPANIES THAT MAKE HARDWARE: YOU'RE SUPPOSED TO HOST YOUR DRIVERS UNTIL THE SUN EXPLODES OR YOU GO OUT OF BUSINESS, NOT UNTIL YOU GET TIRED OF HOSTING 20 MEGABYTES"

[1]: https://twitter.com/Foone/status/1172237142485078016


20 MB? My latest Nvidia 'driver' download was over 500 MB and prompted me to log in to something before installing.


What? I expect to get drivers even after the sun explodes


YES!


AMD is a classical hardware manufacturer in that they don't respect software as an equal partner in modern hardware development, despite the fact that their core competency is making general purpose computers. It's not a capital issue, it's a cultural one.

That said, AMD (and Intel) have reinvented themselves several times with pretty big cultural shifts so I'm hopeful they get someone like Jim Keller with the authority to shake things up.


> I'm hopeful they get someone like Jim Keller with the authority to shake things up.

I'd say it's even less likely if Jim ends up on the board.


>so I'm hopeful they get someone like Jim Keller

Who do you think designed the Zen architecture? He has since left AMD, went to Tesla, and is now at Intel.


Why do you think I brought him up? AMD needs someone like Jim Keller but for software.


Who the hell is the Jim Keller of software? I don't think Dave Cutler is the right comparison, although they're both pretty fucking awesome.


Jeff Dean?


> Why do you think I brought him up? AMD needs someone like Jim Keller but for software.

If there ever is a JK of software, I'd say that if he lands in a typical dotcom, half of the dev team with careers typical of Facebook/Amazon/Google would be relieved of their duties within a week.


They'd never hire him. Too old.


Jim Keller didn't design Zen.


Are you talking about a Linux version of Vtune or are you running your datacenters on Windows?

I know VTune for Linux exists but I've never heard much about it; everyone seems to recommend "perf". Is there more to measuring memory bandwidth than multiplying last-level cache misses by the cache line size?
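
As a rough illustration of that calculation, here is a minimal sketch assuming 64-byte cache lines and the generic perf events LLC-load-misses / LLC-store-misses; the miss rates plugged in below are made-up numbers, purely for illustration:

    # Rough DRAM-traffic estimate from last-level cache miss rates, as described above.
    # Assumes 64-byte cache lines; miss rates would come from e.g. `perf stat -a`.
    CACHE_LINE_BYTES = 64

    llc_load_misses_per_sec = 500e6    # hypothetical figure
    llc_store_misses_per_sec = 200e6   # hypothetical figure

    read_gbps = llc_load_misses_per_sec * CACHE_LINE_BYTES * 8 / 1e9
    write_gbps = llc_store_misses_per_sec * CACHE_LINE_BYTES * 8 / 1e9
    print(f"~{read_gbps:.0f} Gb/s read, ~{write_gbps:.0f} Gb/s write (LLC misses only)")

The catch is that LLC miss counts can understate real DRAM traffic (hardware prefetches, write-backs, non-temporal stores), which is part of why the uncore/memory-controller counters that VTune and PCM read tend to give a more trustworthy number.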

I think where Intel does best is their open source drivers, tools and contributions, though AMD has improved a lot in that respect too.


>Oh, and I'm up to over 260Gb/s of 100% TLS traffic from an AMD Rome :)

Hold on, is that a 30% improvement from 3-6 months ago? I thought it was 200Gb/s last time, and that it was pushed more or less to the limit by a memory bandwidth bottleneck? Do you think you can push it even further, to something like 300Gb/s?

Seriously this is mind boggling.


This is a motherboard designed for Rome, so it has 3200 memory, rather than 2933 (which was OC'ed). Plus a few improvements here and there in FreeBSD, minus the fact that we're using 2 Gen4 x16 NICs, so there is some cross-domain IO going on.


I believe AMD has been contributing upstream to intel-cmt-cat, and Intel has accepted those patches.


How is that possible? The raw bandwidth of PCIe is around 200Gb/s.


PCIe 4 (which Rome supports) can do about 220Gbps realistically per card. Multiple cards, which I'm sure Netflix uses, can go higher.
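
For reference, the raw link math behind those numbers (line rates and 128b/130b encoding per the PCIe 3.0/4.0 specs; the ~220Gbps "realistic" figure already subtracts protocol overhead):

    # Theoretical PCIe x16 link bandwidth, before TLP/protocol overhead.
    def pcie_x16_gbps(gt_per_sec):
        # 128b/130b encoding: 128 payload bits for every 130 bits on the wire
        return gt_per_sec * 16 * (128 / 130)

    print(f"PCIe 3.0 x16: ~{pcie_x16_gbps(8):.0f} Gb/s raw")    # ~126 Gb/s
    print(f"PCIe 4.0 x16: ~{pcie_x16_gbps(16):.0f} Gb/s raw")   # ~252 Gb/s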


True but usually it’s difficult to actually achieve that speed on the bus. Isn’t the NIC a bottleneck? Usually they aren’t capable of wirespeed due to shallow buffers.


It's doable. We do 100Gbps on PCIe 3, and AMD presented at the DPDK conference last year showing 200Gbps: https://dpdkna2019.sched.com/event/WYAs/dpdk-and-pcie-gen4-b...


I'm curious, why encrypt Netflix video traffic at all?


(Not at Netflix) So that ISPs don't interfere with it by blocking, inserting ads, etc.


can you expand on the TLS bit?


Transport Layer Security, which used to be known as SSL (eg, https). It uses up nearly half of our CPU time and our memory bandwidth to read data from memory into the CPU, encrypt it, and write the encrypted data back to memory.

If we were serving plain http without TLS, we could probably reach 400Gb/s from this box (based purely on back of the envelope math)
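
A minimal sketch of that back-of-the-envelope math, assuming a single Rome socket with 8 channels of DDR4-3200 and my own rough model of DRAM touches per served byte (disk DMA in, CPU read plus write for encryption, NIC DMA out with TLS; just disk in and NIC out without it) -- not Netflix's exact accounting:

    # Theoretical DRAM bandwidth of one Rome socket: 8 channels of DDR4-3200.
    channels, mt_per_s, bytes_per_transfer = 8, 3200e6, 8
    peak_gbps = channels * mt_per_s * bytes_per_transfer * 8 / 1e9   # ~1638 Gb/s

    # (label, assumed DRAM touches per served byte, serving rate in Gb/s)
    for label, touches, served_gbps in [("software TLS", 4, 260), ("plain HTTP", 2, 400)]:
        traffic = touches * served_gbps
        print(f"{label}: ~{traffic} Gb/s of DRAM traffic, "
              f"{traffic / peak_gbps:.0%} of theoretical peak")

Sustained bandwidth in practice lands well below the theoretical peak, which is why roughly 1000 Gb/s of DRAM traffic is already close to the ceiling.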


I did not mean what TLS is, I meant the context of the high performance numbers. Using what software, etc.


Running on FreeBSD. I did a quick search; turns out I submitted it on HN [1] exactly 5 months ago, and it was only running 200Gbps then.

[1] https://news.ycombinator.com/item?id=21043765


You can't use any kind of TLS offloading accelerator?


There are 2 classes of accelerators -- lookaside and inline.

The lookaside accelerators (like QAT) sit on the PCIe bus and DMA plaintext into the accelerator, and DMA encrypted data out. This offloads the CPU, but it does not help with memory bandwidth. Since memory bandwidth is our biggest bottleneck, they don't help.

Inline accelerators are "smart NICs". The NIC DMAs a pre-formatted TLS record in plain text inside a (series of) TCP packet(s) from the host, encrypts it, and sends it in an almost stateless way. This is what we call "NIC TLS" in FreeBSD. Sadly, the inline accelerators we've tried all have fatal flaws which prevent us from using them. I cannot get into those flaws for NDA reasons, but we're hopeful that there will be one that we can use soon.
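
To make the distinction concrete, a rough count of DRAM touches per served byte under each scheme (my own approximation of the accounting, not official numbers):

    # Why lookaside offload saves CPU cycles but not memory bandwidth.
    schemes = {
        "software TLS on the CPU":    4,  # disk DMA in, CPU read, CPU write, NIC DMA out
        "lookaside accel (QAT-like)": 4,  # disk DMA in, accel DMA read, accel DMA write, NIC DMA out
        "inline accel (NIC TLS)":     2,  # disk DMA in, NIC DMA out (encrypted on the way through)
    }
    for name, touches in schemes.items():
        print(f"{name}: {touches * 260} Gb/s of DRAM traffic at 260 Gb/s served")

Both software TLS and lookaside offload move every byte across the memory bus four times; only the inline path actually cuts the memory traffic.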


Good to know, thanks. I'm curious what the fatal flaws are when you can eventually reveal them.


The first kind can't be made to work with P2P-DMA?



Are you asking what’s most cost-effective?


I'm asking what best meets the requirements, which may be a balance of cost and performance.


> But it will get very ugly if the market tightens and Intel can only lose both revenues and profits in a price war. There is no way to maintain at all. Period.

Yeah, adding "period" to a sentence like that totally makes up for the lack of data backing it. AMD spent years in the red and they didn't disappear. I looked it up: Intel's cash on hand for the quarter ending December 31, 2019 was $13.123B, a 12.64% increase year-over-year. That's about two quarters' profit. Their debt-to-equity ratio is 0.33, which is on the lower end. They could double their ~$29B debt if badly needed and still be considered a good investment. Those two together make for one gigantic war chest...


Different things could be considered "ugly". Will Intel find itself in serious financial trouble? Almost certainly not. But we are already seeing some deterioration in margins due to more competitive server pricing.


Interesting, and in comparison:

Nvidia: 0.21, AMD: 0.17

https://www.macrotrends.net/stocks/charts/NVDA/nvidia/debt-e... https://www.macrotrends.net/stocks/charts/AMD/amd/debt-equit...

Though as we have all come to learn, the market these days is driven more by perception-based buying and selling than by more robust analytical buying and selling. That, and a little dynamic movement can easily be amplified by automated trading and reaction trading.

AMD can only gain traction in the consumer graphics market, and that may well pan out, though much seems to be riding upon RDNA2, which is looking like late in the year. Add the new consoles scheduled for the end of the year, and that was prior to recent supplier dynamics, which may well see those slip into next year.

We have seen the larger datacenters start to take more note of AMD, and IIRC Cloudflare as well as Google are buying some AMD offerings in more earnest. The time for those to gain traction won't be instant either, so any growth will take a while to show up in the numbers.

But AMD, from most people you talk with, seems to be in a good position, and it has been a while since they had that position (decades, really). They seem to be keeping that momentum going and have a path to do so. So even if they don't start to make fast inroads into the datacenter, they are slowly gaining momentum and Intel is losing momentum.

However, the recent supplier dynamics due to the recent human malware will impact everyone. There are so many aspects that it's hard to call, but however it goes, I don't see AMD or Intel worried more than any other company. Though AMD may well be more exposed to supply chain dynamics than Intel, and that advantage (having its own fabs) can only sit in Intel's favour.

However, the real ones to watch will be RISC-V and ARM, with recent developments by Fujitsu's A64FX (well worth reading about, very interesting) stealing the TDP crown. Things may well get very interesting, though in technology they always are, and that is why we all enjoy and love it.

Also worth noting that the recent human malware will drive working from home, and that may well shift buying requirements for many businesses, which could mean an increase in datacenter usage, though at the expense of local buying.

As always, many dynamics, but perception does seem to be more of a factor than common sense when it comes to markets. Though the real winner will be storage, an area that gets overlooked and will have more impact than anything else in the year ahead.


I remember similar things being said about nokia


Nokia found itself unable to compete in a new market -- previously the smartphone had grown up from the phone and the PDA, but the new smartphones had shrunk down from computers, and that was fundamentally different. Let's say the peak of Symbian Nokia smartphones was the E71/E72 -- the E72 and the iPhone 3GS were released practically the same day. By that time some 100,000 apps were available for the iPhone (and a bit more than 10,000 for Symbian, but that was a fractured ecosystem, another fatal flaw), and cumulatively they had been downloaded a billion times. Even more importantly, Symbian started as EPOC, the OS of the Psion PDAs in the eighties, and it was simply unable to function well in the new world -- you can patch a preemptive system only so many times. Android and iOS were built on bona fide Unix. No such thing hampers Intel.

There's no new market here, just intensified competition, and Intel has 7nm in the pipeline for late 2021/early 2022; that's how long they need to survive on the venerable 14nm and the totally broken 10nm process.


My biggest concern when considering AMD on Linux is drivers. I see in my feeds that Phoronix is constantly reporting on AMD updates landing in recent kernels. However, the equivalent features (power management, etc.) have been available for years from Intel.

How are people finding AMD hardware in the real world wrt Linux on say 4.19 or 5.4 kernels?


I run Ryzen on Linux 5.5 on my desktop workstation at home. Admittedly I am always running the latest kernel as long as I reboot often enough, but it has been good. I also use an AMD GPU, and that has been very nice. Intel probably still has an edge on supporting the Linux graphics stack but not by much, and AMD GPUs have a lot more muscle, so it’s not a bad trade off imo. Anything to avoid NVIDIA really.

(To those who love NVIDIA on Linux: I get it. However, Linux needs to move on from X11 and the old, broken, fragmented graphics driver “model.” NVIDIA will always be a second class citizen on Wayland until they change their tune regarding open source. But hey, maybe they’re OK with losing Linux and Mac users, and if that’s their position I’m perfectly happy losing them.)


Do AMD GPUs work well with ML stacks (CUDA and its competitors)? Are higher-level frameworks like PyTorch and TensorFlow compatible with AMD backends (and optimized, or else there's no point buying an expensive GPU to get only half the performance)?


It's far better today than it used to be, but it's still a work in progress. Over the past year or two, they've been doing a lot of hiring to bring ROCm HIP up to par as a CUDA competitor.

I wasn't exactly thrilled to see yet another GPU interface, but they seem to be trying to match the CUDA way of doing things. That ought to make supporting AMD much easier than it once was.


In general, no. The highest performance software only runs on Nvidia GPUs (or internal hardware, like Google’s TPUs).


I think this depends very much on what exactly you mean by "highest performance software". Some frameworks have upstreamed support for ROCm. A few of my colleagues have been doing some testing and mentioned to me they were reasonably impressed with performance on newer hardware.


My current impression is that AMD is not the best option for ML but is getting better over time. So YMMV. That said, I do not use my local machine for ML (and don’t do ML in any serious capacity.)


TensorFlow works with AMD's CUDA alternative named ROCm. You'd need to check installation instructions though.


Yes, ROCm is upstreamed into Tensorflow. On our Hopsworks platform, you can run the same TensorFlow code on either ROCm or Cuda. No code changes needed:

https://www.logicalclocks.com/blog/welcoming-amd-rocm-to-hop...

Performance ain't so bad on AMD GPUs anymore - and none of the EULA crap. Just buy a Vega R7 for 500 bucks and get 16 GB of memory and the performance of close to a 2080 Ti (for convnets, at least) - but with no data center restrictions:

https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/...
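
For anyone who wants to try it: a minimal sanity check, assuming the ROCm build of TensorFlow (the tensorflow-rocm pip package) is installed. The model code itself is stock Keras, identical to what you would run on CUDA:

    # Same TensorFlow code on ROCm or CUDA; the backend is chosen at install
    # time (tensorflow-rocm vs tensorflow-gpu), not in the model code.
    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))  # should list the AMD device

    # Tiny convnet step, just to confirm kernels actually run on the GPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    x = tf.random.normal((64, 32, 32, 3))
    y = tf.random.uniform((64,), maxval=10, dtype=tf.int32)
    model.fit(x, y, epochs=1)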


How about Pytorch?


You have to compile it yourself, which sucks. For some reason PyTorch hasn't agreed to upstream ROCm yet.


You don't need to compile it; AMD provides Docker images.

For example, I was able to run fast.ai/PyTorch on my AMD GPU: https://github.com/briangorman/fastai_rocm_docker
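
Inside one of those containers, a quick way to confirm the GPU is visible; as I understand it the ROCm builds expose the device through the usual torch.cuda API, so existing CUDA code paths run unchanged:

    # Sanity check inside a ROCm PyTorch container.
    import torch

    print(torch.cuda.is_available())      # True if the ROCm device is visible
    print(torch.cuda.get_device_name(0))  # prints the AMD GPU's name

    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())           # matmul runs on the AMD GPU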


> NVIDIA will always be a second class citizen on Wayland

More like Wayland will always be a second class citizen on machines with NVIDIA cards.


They are the only such graphics vendor, but sure. Either way NVIDIA users lose.


Or is it Wayland users that lose? On which side will the coin land? :)


The coin landed long enough ago that you can't even see it under the dust. Xorg is effectively unmaintained aside from Xwayland. If you had hopes of Wayland disappearing, you can safely end them now.


I don't have hopes of that, I'm just waiting for the dust to eventually settle so we can just use it and stop waiting for it.

Do you know if Ubuntu 20.04 will be using it by default?


It's great. I've been using an EPYC 7401P as a primary workstation/build server for < 3 years, things have all been working great. I'm on Debian/testing, and I upgrade once a week or so.

I have a bunch of VMs on it, multiple-GPUs, and a bunch of disks. It does exactly what I want, and I'll be upgrading things next year or so.


Linux on ThreadRipper 1950 has run great since launch.

From the other side of the FOSS aisle, I can report that I have had zero issues under FreeBSD with: 1600X, 2700, Epyc 7002

FreeBSD & Linux both handle the Rx580 line of video cards without a problem. (including 4k@60Hz)


I am quite happy with my AMD Radeon RX 460 running Arch Linux with a 5.5.7 kernel. It has significantly more power than the embedded Intel GPU I have in the same system but doesn't require a fan either (the PC is completely fanless).


I've been running Arch Linux on my 3900X with no major issues. It took a couple of updates before the temperature sensing worked, but right now everything works perfectly.


The sad part is that AMD seemingly does not want temperature sensors to work on Linux. All progress on sensor support has been from the community (namely Guenter Roeck) using leaked information, and AMD has been plugging some of those leaks[1]. It's not clear to me why AMD does not add this support themselves, Intel has no problem adding temperature support so I don't think there's much to gain by trying to keep it a secret.

> Guenter Roeck: The information is available from AMD under NDA, which is why Windows tools like HwINFO support it. The added Linux support was largely possible due to what I am sure are unintentional leaks by AMD, mostly in AMD's Linux kernel graphics code. Unfortunately, they learned; the latest version of their graphics code include files no longer provides temperature sensor addresses for Zen2.

[1] https://www.phoronix.com/forums/forum/hardware/processors-me...
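
For what it's worth, once a kernel with the community-maintained k10temp support is running, the readings come out through the standard hwmon sysfs interface like any other sensor; a small sketch (generic hwmon layout, nothing AMD-specific):

    # Dump whatever temperature sensors the kernel exposes via hwmon
    # (on Ryzen these are typically k10temp's Tctl/Tdie readings).
    from pathlib import Path

    for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
        name = (hwmon / "name").read_text().strip()
        for temp in sorted(hwmon.glob("temp*_input")):
            label_file = temp.with_name(temp.name.replace("_input", "_label"))
            label = label_file.read_text().strip() if label_file.exists() else temp.name
            print(f"{name}/{label}: {int(temp.read_text()) / 1000:.1f} C")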


Maybe it's Microsoft that doesn't want AMD to reveal the sensor addresses and it's AMD that wants to sell processors for the next Xbox ...


Why would Microsoft care? O-o


Right now I'm battling the rough edges around RX 5500 XT support. If you try to use 3 monitors it reliably crashes. You can kind of make it work by manually setting monitors.xml to a low resolution or a low refresh rate before you add the third monitor, but that only seems to work if you add the monitor at runtime. I copied the same monitors.xml for GDM as well, so I have no clue what during initialization crashes it only when it happens at boot. I've tried everything up to drm-next, mesa-git, and the latest linux-firmware, and so far I'm stuck with 2 out of 3 screens.

I've also got a new 3700X and that's been completely stable.


So this is obviously non-server:

If you use their integrated graphics (as on laptops, etc.) there are sometimes driver hiccups (for example, sometimes a feature that's discrete GPU specific will be problematic with the integrated graphics but enabled by default), or slowness to shut down.

Besides that, of the three Ryzen machines I use regularly they're all fine daily drivers, but sometimes have fixable boot/kernel configuration oddities that vary by kernel release.

Based on this, I'd stay away from 2000 series with integrated graphics, but otherwise am pretty comfortable with the platform.


My desktop with a Ryzen is more stable than it was with my previous i5: no boot-up/shutdown systemd delays that take 2 minutes to time out. No issues with Vagrant/VirtualBox either.


Bought a 1600x a couple months after launch (along with an rx580). Other than temperature sensors everything worked beautifully out of the box with a standard ubuntu install. Since then my work has bought a 2990wx and multiple 3950x's all mostly without issues (wifi only works on newer kernels on some x570 motherboards, but an upgrade to ubuntu 19.10 fixes this).


I don't have any Epyc hardware, but Linux 5.0 and 5.3 have worked great with my Ryzen 3 1200 and Ryzen 7 3700X, using Ubuntu 18.04.


I have a 3900x that I have used with the latest ubuntu and now popos. No stability issues at all, performance is great.


Works fine for me. Only issue is that the Picasso video driver isn't in 4.19 shipping with Debian 10. Text mode works though, so it is easy to build and install the latest kernel.


> Text mode works though, so it is easy to build and install the latest kernel.

No building necessary BTW, just use the 5.4 kernel from backports.


I am using a Thinkpad with a Ryzen 5 Pro 3500U. Has been completely trouble-free for me on 5.4+ and this is easily the best Linux laptop I have ever owned.


I have a Ryzen 9 3900X with an AMD GPU. I haven't manually installed any drivers, and everything I use it for (web, compiling, videos) works.


I have a 3700x desktop and have no issues with it. I have a $200 AMD APU netbook and it's fine. Slow, but I got the performance I paid for.


On laptops the open source driver experience still falls short of the old fglrx driver's capabilities.


Not sure which kernel is in Ubuntu 18.04, but my 1300X has been nothing but rock steady.


I wonder how many sales they miss out on because you can't mix AMD and Intel in VMware clusters.


Is this because runtimes optimize for the CPU stepping during startup? How many people actually rely on moving workloads around as live processes without restarting? I thought the industry had settled on "cattle, not pets" and letting instances appear and die as long as a quorum is maintained.


> cattle, not pets

People running VMWare don't even consider that.

Most people have that as a goal which they aren't achieving (and that's fine).

I would doubt there's any company in existence actually attaining that goal.


I admit I haven't had occasion to use VMware for a decade. Different world, I guess.


I don't use it either, but, it's The Thing in some IT departments.

It does offer some amazing capabilities for making old legacy applications that were never designed to have any semblance of failover or redundancy or disaster recovery operate in impressively resilient ways. You also pay for the privilege.


The main reason you cannot mix CPU architectures in VMware systems is that doing a vMotion with zero downtime is impossible without major performance hits.

Having clusters with the same configuration is useful because it allows vCenter to migrate virtual machines to another hardware host without any downtime on the VM's part.

You can read more about this here: https://blogs.vmware.com/vsphere/2019/07/the-vmotion-process...


The people following the latest hype have definitely settled on that. The 95% of the tech world outside of the bubble just barely got comfortable with virtualizing everything in VMs about 5 years ago. The people that wrote the software enterprises are using absolutely did not design in scenarios where you can lose nodes arbitrarily without interruption.


How long before VMWare supports that?


Likely never. They don't even recommend having a cluster with varying Intel processors. Ideally with Vmware every machine in the cluster has the exact same model processor. Want to upgrade to a new processor? Start a new cluster.


There is a good reason for this, though.

Mixing different generations of CPUs makes it very hard to allow live migration/failover of certain workloads if they depend on instructions or features that are present in some CPU generations and not in others. This gets even more confusing if you look at the product segmentation Intel does on its CPUs: newer CPUs might not support all the features older CPUs do, for instance.
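
A rough way to see the problem on Linux hosts: diff the CPUID-derived feature flags the two machines expose, since any flag present on the migration source but missing on the destination is something a running guest may already depend on (the helper below and its inputs are hypothetical, just for illustration):

    # Compare CPU feature flags between two hosts, e.g. saved copies of /proc/cpuinfo.
    # Flags present only on the source are what break live migration without
    # feature masking (EVC-style baselining).
    import sys

    def flags(cpuinfo_path):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    src, dst = flags(sys.argv[1]), flags(sys.argv[2])
    print("only on source:     ", " ".join(sorted(src - dst)) or "(none)")
    print("only on destination:", " ".join(sorted(dst - src)) or "(none)")

VMware's EVC handles this by masking every host down to a common feature baseline, but that only works within one vendor's feature sets, which is part of why Intel and AMD can't share a cluster.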


> there is good reason for this though

I don't disagree. I was just trying to point out that it is a pipe dream to think that VMware would ever support clustering Intel and AMD together when you can't even cluster differing Intel models together.


There was a brief period in the early 2000s when AMD had a clear performance lead in 32-bit and x86-64 datacenter stuff.

This was at the time of the first generation Opteron CPUs (single core), which could be configured in a dual socket motherboard in a 1U server. They had a good price/performance advantage over Intel, which was also at the time a single core per socket.

Around 2006 or so Intel started pulling ahead and AMD never really caught up.

With the development of things based on the zen/zen2 cores, now may finally be the time...


> Around 2006 or so Intel started pulling ahead and AMD never really caught up.

...never really caught up until a couple years ago when Epyc was launched. Then, with the speculative execution security issues that have cropped up, Intel has not only stopped gaining performance but has actively lost performance. AMD has a significant improvement now in price, performance, and definitely price/performance compared to Intel. The biggest thing keeping Intel afloat right now is inertia.


And ability to supply the market. AMD can't go from 5% share to 50% without vastly more wafers.


In previous times, Intel had a process edge over AMD.

Every time AMD had a good architecture that could steal market share, Intel just slashed prices until AMD was put back in its place. Intel had good profit margins and AMD was always struggling. AMD never had a chance.

This time AMD is a fabless company, and TSMC has an edge over Intel for at least a few years.


Your timeline is interesting to me because it suggests you think Intel started to pull ahead with Clovertown in 2006 (a quad-core MCM with a northbridge memory controller) rather than with Nehalem or Westmere (integrated memory controllers) in 2009-10.


Conroe, the Core 2 Duo E6300/E6400 era in 2006, is where Intel definitively took the crown away from AMD. And AMD never got close to it again until Zen/Zen2.

Intel's much better prefetcher let them get away without having an integrated memory controller. https://www.anandtech.com/show/2045/5


Clovertown was basically two Core 2 Duos in a multi-chip module, and capable of multi-socket system configs. The AMD fans at the time snickered at the MCM, for being a hack and a workaround, not a "true" quad core part. AMD didn't launch the "true" quad core server part until a year after Clovertown. That was the "Barcelona" Opteron 23xx, and it was terrible, and broken. To me that was when AMD faded.


It has been quite a long time since I laid hands on any of these, but to refresh my own memory:

https://en.wikipedia.org/wiki/Opteron

Sections 3.1 and 3.2 there have the original single-core Opterons. At the time, in 2003/2004, they ran circles around the single-core Xeons, where a two-socket motherboard meant two cores. This was the same era as the Socket 604 Xeons.


100%?


Rightful? You don't have any right to people choosing your product.


That's the article author's clickbait headline, not a statement from AMD. So why the indignation at AMD?


That is not the way the word is being used in this context.

Here, it simply means 'fitting'. It is a statement of opinion, not legal fact.

i.e. "the sports champion finally claimed their rightful place in the hall of fame."


They need to work on making faster cores instead of just throwing a ton of cores at the problem. In the datacenter, we commonly have to pay per-core license costs for certain vendor software. I'm not going to pay 7 figures to grow our Oracle license for example by switching to AMD when we can get Intel CPUs with faster, fewer cores.


AMD is competitive in perf/core now. Intel has been largely stagnant since Skylake, while AMD has been improving dramatically, starting with Zen in 2017.

That being said, your organization's problem isn't AMD vs Intel, your problem is Oracle. If your org spent the money you currently spend on Oracle licenses on a bestial monstrosity of a I AM BECOME DEATH DESTROYER OF WORLDS machine running postgres you'd have better performance. Regardless of whether it's AMD or Intel. (although you're probably better off with AMD)

I know it's not that simple. Just sayin'. Making sure HN meets its daily Oracle hate quota. Seriously though, Oracle sucks.


You are restating a status quo that's being kept in place by incumbent pricing models and software that nobody bothers to make multicore-friendly.

IMO we'll see the bigger hosting players (Google, Amazon etc.) offer competitive prices on their many-core plans and sooner or later the other players will either have to relax their prices or start losing even more customers to the cloud offerings.

I could be wrong though. Many customers are hell-bent on never hosting in the cloud, and I'm pretty sure that decision makes total sense in their context.

We'll see how things shake out but as a programmer I am seeing a slow but steady shift to making the software (languages and frameworks) multicore-friendly. It's happening with a glacial pace but it does happen.


Out of curiosity have you read benchmarks recently on Rome? Per-core performance is quite competitive now (unlike in the recent past).


No but I'll certainly pass the word along. I handle the network and wasn't directly responsible for evaluating AMD. I just heard the reasons why it didn't work and saw the not-so-impressive benchmarks when looking at single core performance. This was 2-3 years ago.


Power is also a high cost in a good data center, and Epyc also smokes Xeon for perf-per-watt.


As of Zen2, single core performance is sometimes better, sometimes worse than Intel, depending on the chips being compared and the workload.




