End Of The Line for AMD FX CPUs (techpowerup.com)
55 points by steve19 on Dec 4, 2013 | 41 comments



This is a good thing for the average computer, because the average machine will have a discrete-class GPU on the CPU. That means potentially millions more gaming rigs to sell game software on. AMD has done its best work over the years when it pushed hard in an important area that Intel couldn't match (for a while).

This will make the average CPU a better processor for more users.


This news isn't a huge surprise, but it still saddens me. It seems that I started getting into x86 hardware just as the CPU battle/Moore's law started dying.

My first real build had an Athlon 64 X2, and back then I foolishly assumed that the closeness of the competition was going to last (this was just before Conroe/Core 2 launched). I decided to upgrade to a Core 2 Quad a few years later, and an Ivy Bridge chip about a year ago. None of these upgrades, however, feel quite as potent as that first time when I switched from a Prescott Celeron to that Athlon 64 X2.

Sure, synthetic CPU benchmarks show that Moore's law isn't quite dead yet, but single-threaded performance just isn't the battleground it was a decade ago. I feel that somewhere along the line, people stopped pushing the envelope. Have we gotten to the point where people have run out of ideas to push their hardware with? Perhaps it's because physical limitations are being reached. Still, I can't help but fantasize about what the world would be like if the x86 market were as competitive as it was a decade ago.


This is just a common misconception about what Moore said, and what his law means. Hint: it has nothing to do with the clock speed, or even directly with the performance, of your CPUs.

Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. Which is still very much the case: http://en.wikipedia.org/wiki/File:Moore%27s_law_graph.svg


Improvements on synthetic benchmarks, but real-world stagnation, because we are no longer CPU-throughput bound. Trig functions have improved from 90 ticks to something like 14 ticks. Getting an SSD might matter more than 2 extra cores. The CPU is no longer the engine; it is a piston.


It's not even close to dying, it's just not focused on faster cores or x86 alone. Now is the time to parallelize everything, over x86/ARM CPUs and/or GPUs.


The article presents this like it's a bad thing. APU-like architectures are the future of computing, and I'm glad that AMD will be focusing on them.

Now if only we had really good GPGPU frameworks to compile generic code to OpenCL and make use of those extra cores for non-graphics, non-numerical applications.


Those extra cores are designed to be good for streaming parallel data computations, not "generic code" - you can't even run a hash table performantly on a GPU due to the random pointer accesses.


Random pointer access is slow everywhere. However, a massively parallel GPU code will likely beat multi-thread CPU code there.

A GPU can switch threads on a cache miss, something few CPUs can do. Even the CPUs that do this switch between only 2-8 threads (it's called SMT architecture; for example, popular Intel Pentium/Core CPUs have at most 2 hardware threads per core, and Xeon Phi has 4). GPUs have dozens of threads per core and, often, more cores than CPUs. Combined with the fact that a GPU thread is much wider than a CPU thread, this allows a GPU to saturate memory bandwidth even on pointer-chasing code. The same code causes the CPU to stall and underutilize its memory bus (as soon as the few threads hit a cache miss there is nothing to do, and software multithreading cannot switch between threads that are waiting for data).

Of course, as I said, it's still not efficient even on GPU (most of the bandwidth is used to move unneeded data as you only need one word from the whole cache line) and you still need to write massively parallel code.
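
To make the latency-hiding point concrete, here is a minimal CUDA-style sketch (the kernel, names, and launch sizes are illustrative, not from any real codebase): each thread follows its own chain of dependent loads, and the hardware scheduler simply runs other warps while any given load is outstanding.

  // Hypothetical pointer-chasing kernel: every load is a dependent,
  // likely-uncached access, but with thousands of resident threads the
  // GPU keeps the memory bus busy while individual threads wait.
  __global__ void chase(const int *next, int *out, int steps, int n)
  {
      int tid = blockIdx.x * blockDim.x + threadIdx.x;
      if (tid >= n) return;
      int idx = tid;
      for (int s = 0; s < steps; ++s)
          idx = next[idx];   // stalls this thread; scheduler switches warps
      out[tid] = idx;
  }
  // Launched with far more threads than there are cores, e.g.:
  // chase<<<4096, 256>>>(d_next, d_out, 1000, 4096 * 256);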


> However, a massively parallel GPU code will likely beat multi-thread CPU code there.

Yes, but if you actually try to do it, you'll find that "massively parallel" code can be written only for certain problems that lend themselves to being, well, "massively parallel".

I've seen people struggling with GPGPUs ever since they appeared (at a certain point in my career I was involved in heavy number-crunching research). Six years later, there are still a lot of real-life problems that can't be put in a GPGPU-useful framework; in fact, coming up with good models so that you can reason about whether a given problem would ever lend itself to being solved like that is one of the more important areas of progress for the scientific community.

Look, GPGPUs are cool and all, and useful for certain classes of problems, but let's not do the whole Itanium thing all over again, please.


A lot of that struggling could have been avoided if the proper tools were available. For example, the compiler could automatically include two code paths - one AMD64, one OpenCL - and use the best one or switch between the two based on performance or runtime considerations. Language-level improvements (like what Erlang and Haskell have) could make writing parallel code much easier.
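
As a rough sketch of the kind of dispatch I mean (written in CUDA rather than OpenCL purely for brevity; the threshold and names are made up), a toolchain could emit both paths and pick one at runtime:

  // Hypothetical dual-path dispatch: the same operation compiled for the
  // host and the device, chosen at runtime by a crude size heuristic.
  __global__ void scale_gpu(float *v, float k, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) v[i] *= k;
  }
  void scale_cpu(float *v, float k, int n)
  {
      for (int i = 0; i < n; ++i) v[i] *= k;
  }
  void scale(float *host_v, float k, int n)
  {
      const int kGpuThreshold = 1 << 16;      // illustrative cutoff
      if (n < kGpuThreshold) { scale_cpu(host_v, k, n); return; }
      float *dev_v;
      cudaMalloc((void **)&dev_v, n * sizeof(float));
      cudaMemcpy(dev_v, host_v, n * sizeof(float), cudaMemcpyHostToDevice);
      scale_gpu<<<(n + 255) / 256, 256>>>(dev_v, k, n);
      cudaMemcpy(host_v, dev_v, n * sizeof(float), cudaMemcpyDeviceToHost);
      cudaFree(dev_v);
  }

In practice the compiler would base the choice on profiling or runtime device queries rather than a fixed constant, which is exactly the tooling that's missing today.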

I said that APUs are the future of computing not because I'm favoring parallel applications, but because boring old manufacturing constraints are making greater single-threaded performance more difficult to achieve. What we have now is about as good as it's going to get, at least until we see 3D chips or molecular nanotechnology.


Indeed, there are problems that cannot be solved in a parallel fashion. If you are stuck with such a problem, you are screwed no matter what your architecture is.

A whole lot of real-life stuff is either parallel or does not need heavy computing. This is why people started building massively parallel computers well before the last 6 years[0].

[0] http://en.wikipedia.org/wiki/History_of_supercomputing


> A whole lot of real-life stuff is either parallel or does not need heavy computing. This is why people started building massively parallel computers well before the last 6 years[0].

Exactly. A lot of the GPGPU pitch is easily refuted with some hindsight.


With no competition for high end desktop CPUs we can expect price increases.


  With no competition for high end desktop CPUs we can expect price increases.
It's a bit different in a saturated market, where most of the potential customers already own one or more of the products in question.

Intel is effectively competing with itself. I have three Intel CPUs in my home and Intel needs to beat them by a significant performance margin, and at an acceptable price, before I will buy another.

So they still have plenty of incentive to outdo themselves.


That already happened. There hasn't been meaningful competition in years.


and RAM, and hard drives, and graphics cards... I suspect 2014 will be my last high-end desktop purchase.


And in 2015 we will start replacing our desktops with convertible mobile devices. Slide your tablet or phone next to a keyboard, mouse, and monitor and voilà, you've got a desktop. I'm really hoping we're completely wireless on the monitor too.


Assuming I am willing to fork over a lot more money and use much less powerful hardware running a glorified phone OS, all for the sake of having a smaller computer.


Dell Venue 8 Pro...

Already we have tablets running x86 with good enough battery life, better power than the ARM counterparts, and a full OS, with external hardware support.

Sure, it's not completely there yet, but considering it's $399 today for something that is pretty damned close (and close enough that my girlfriend just ordered one, to use for most of her primary computing, but coupled with the active digitizer for sketching; basically, a netbook), it's only a matter of time before this is completely doable!


All of those are short-term concerns.

Eventually low-power portable-class computing hardware will be as powerful as typical desktop systems today. Though there will likely always be a niche for even higher performance machines.

Price will come down with volume and greater competition as the market matures, as it always does.

And the OSes will get better as people spend more time on these devices and developers spend more time targeting them and working out the appropriate UI optimizations for mobile vs. desktop modes.


Products like this have existed for a while now but have never sold well.


They've not sold well because they've been cumbersome and have involved massive sacrifices. Compare typical phone models with a cheap laptop when it comes to raw computing power.

But that gap is rapidly diminishing - the phones are catching up.

And as for user experience, you need to be able to get a "proper" desktop OS, and you need to not have to plug in lots of cables. And it needs to be able to drive a sufficiently high resolution display at good speed. It's slowly getting there, but it's got a bit to go still (but consider that today, mid-range Chinese unknown brands churn out quad core Android tablets with 2560x1600, and even low end phones come with 1920x1080 screens; and many of these devices can handle 3840x2160 output to external screens...)


Why would RAM, HD and GPU prices go up? Who's going out of business?


I think the parent was predicting RAM, HDs & GPUs will become more expensive as more and more people buy off the shelf non-upgradable hardware.

Without a large base to spread costs over, it will become a more and more specialised field.

Both Intel & AMD are moving towards not selling individual CPUs and instead having them built into the main board, and I imagine that after that RAM will become built in as well.

GPUs are disappearing from the low end (onboard is getting pretty good), and if Intel succeeds with Xeon Phi cards, that's going to start splitting the HPC market away from GPUs. Without the low end to get rid of older stock and increase the yield, it could very well come to pass that Nvidia and AMD aren't able to keep the R&D dollars in GPUs.

Component manufacturers will start exclusively targeting consumer electronics companies, and the off-the-shelf market may find that it is forced to pay premium prices, further reducing the number of people who are participating.

Unfortunately, consolidation is a sign of a maturing market, with most people being happy with an off-the-shelf system (i.e. most people even on HN are happy with a MacBook Pro / Air and its limited upgradability). A quad-core laptop with 8GB of RAM and a 256GB SSD is suitable for most people, especially if they have a way of storing another 1TB of rarely used content (external HDD, NAS, online storage). Such machines have been available for the past 3 years, and we are starting to get to the stage where they will likely be suitable for the next 3 as well. It will be a shame, as the days of building cheap consumer-grade servers for home use will be behind us, as will high-performance workstations for a third of the cost of Dell / HP / whoever.


I thought they had, but maybe they've just stopped decreasing at the expected rate.

http://www.jcmit.com/disk2013.htm


Disk and RAM prices have been anomalous in the last few years due to external circumstances. There were serious floods in 2011 in Thailand, a major hub of hard drive manufacturing, which caused persistent drive shortages up until fairly recently. There was also a large fire at a Hynix fab that produced RAM, which caused memory prices to spike for a while.


Dunno about going out of business, but there's been a major decrease in competition in the HD industry due to companies merging or discontinuing entire product segments.


> I suspect 2014 will be my last high-end desktop purchase.

For me it was 2002. Since then I have only bought laptops.


Too bad. I recently built a PC and chose the 8-core AMD FX-8320 over the 4-core Intel i5. When it comes to multi-tasking, and even medium gaming, I definitely haven't regretted the purchase.


I can't really imagine anyone regretting a mid-range CPU purchase these days, no matter which one they'd pick.

Just to comment on your specific purchase: it would seem that your 8320 relatively closely matches the corresponding Intel CPU (the i5-3470) in performance, with AMD having a slight lead on multi-threaded benchmarks and Intel on single-threaded ones. But the major difference between the two is that the Intel CPU does this while consuming roughly 50 watts less power (77W vs 125W TDP). Of course that doesn't matter to everyone, and TDPs are not really directly comparable, but it is still something to note.


How does a leaked slide of their desktop roadmap indicate that they will only make APUs now? Wouldn't it be likely that they still make server processors?

I wouldn't be surprised if the ultra-high-end desktop CPU market isn't that large, so they may be assuming that people who want that sort of power can just buy server processors.


I'm not sure this is the end of AMD, though.

Are they not going to make any server processors?

Can APUs not be used for high-end gaming? Can the GPU portion of the processor not be looked at the same way FPUs used to be seen?


Surprised everybody is in on the term APU, I had to look it up :)

https://en.wikipedia.org/wiki/AMD_Accelerated_Processing_Uni...

excerpt: *The Accelerated Processing Unit, formerly known as Fusion, is a marketing name for a type of microprocessor from AMD designed to act as a CPU and graphics accelerator (GPU) solution on a single chip.*

So I'd personally say not a very big surprise.


I wonder if I'll be able to use the built-in GPU while having a discrete Nvidia graphics card. Because I like AMD processors (plenty fast for me, cheaper than Intel), but I'll never stick an AMD graphics card in my machine again (yes, I run Linux).


No DDR4 RAM?

I heard it was coming out in 2014.

This is good; I can't wait for the HSA programming model.


Intel is getting DDR4 (only for servers I think) in 2014. AMD stays several years behind Intel.


I am using an AMD APU laptop right now, and I welcome this future.


I'm curious about the APUs' handling of heavy floating-point work. I do a lot of scientific/statistical computing work, and the FX series does have new extensions for just this type of work.

Of course, the best of that series also comes with a 220W TDP.


APUs (at least the desktop and laptop ones) use the same types of cores as the FX series, so they should perform about the same.


Does anyone sell Opteron-based workstations? Xeon-based ones are common enough, but I can't recall seeing AMD-based systems from any major OEM.


Microway sells Opteron 6300-based systems (aka Piledriver) but it's really been a while since AMD's chips have been competitive in performance per-core.

There was a while when people would choose AMD because they could get enough more cores for the price that the performance would make up for it, and you could get more memory in a system than in a similarly priced Intel one. But those workloads weren't as common as AMD would have liked.



