PC obsolescence is obsolete (extremetech.com)
101 points by evo_9 on Aug 21, 2012 | 121 comments



> Now, compare an early Core i7 system (Nehalem) against what’s shipping today (Ivy Bridge). Clock speeds are up a bit, and there are cheaper/lower-end options available, but the Core i7-920 that launched at 2.67GHz with four cores and eight threads is still absolutely capable of powering the latest games and applications.

A big, big factor in the slowdown of obsolescence is that the current generation of consoles has stretched out far longer than normal. Because the consoles have fairly PC-esque capabilities (arguably more parity than there has ever been before), many games today ship on consoles and PC, often from a single code base, and at the very least from a single asset base.

Once the next generation of consoles is out, we will see games taking advantage of that extra power, and in turn PC requirements for new games will start to climb again.


This can't be emphasized enough: the only reason nothing has moved forward is that gaming hasn't paved the way. Most major desktop hardware advancements have come from components added to meet the rising requirements of games. And right now, with the death of the B-tier game studio, there's no one pushing PC-only experiences who really has a reason to ignore the consoles. That's it.

Now, whether the next console tier will actually allow that or not? I don't know. Investors seem to be catching on to just how low-margin the games business is, and as a result seem to think social gaming is the way to go (if Zynga's large and horrible decline hasn't convinced you that they are wrong yet, give it time. It will.), which means less marketing money justifying that new console, and thus less reason to push the desktop.

But we haven't stopped because there's no more need. The market is just in a wonky place right now. Console manufacturers would love to have a new one to sell, but developers are pushing back, because to develop at the kind of fidelity required to go past where we are now, your average game's budget will quickly double or more, with little return in terms of profit. Those margins just keep getting thinner.


I have to disagree with the statement that no one is pushing PC-only experiences. Epic is pushing UnrealEngine4 as potentially PC-only unless the console makers step up.

Then there's Arma 3 just around the corner.

From my point of view, as a PC gamer, the reason there's not so much investment in using the PC graphics capability we have cheaply available right now is because of the influence of the console market. Most big budget games right now are made for the current crop of consoles which PCs outpaced years ago. Too many times the PC version is just the 360 version with UI changes to support the mouse and keyboard. You can look at the graphics mods that are being made for games such as Skyrim and GTA4 to see the wasted potential.

Also, it's really, really expensive (as you point out) to make a game that truly pushes the capabilities we have right now, and very few developers are willing to take that risk. Part of UnrealEngine4's development is an attempt to address this by providing tools that reduce the time the creation process takes.

But not all games need high graphic fidelity; that's not the whole story. The real waste is the sheer amount of cheap computational power, on both the GPU and the CPU, that goes unused. Screw photo-realism, I want more physics and AI.


So what you're saying is, game development is waiting for tools that allow producing high quality graphic games more cheaply.


I would say yes and no.

The tools are there now. You can make an outstandingly impressive-looking game with UnrealEngine3 right now if you ignore the constraints of mainstream PC and console hardware. Look at the Nvidia and AMD tech demos for their cards to see what they can do.

But that game would be expensive, time consuming, and not worth the investment if you were planning on making money.

I would say it's more accurate to describe them as waiting on tools that let them do more with the same time and money budgets on current or next-gen hardware. That's where something like UnrealEngine4 gets interesting, because Epic seems to be saying that PCs are under-utilized graphical workhorses right now and that it doesn't want to wait for the consoles to catch up. It's potentially a huge risk for them to take, and it'll be fun to see how it plays out.

To me the tools question isn't just about graphics, it's about development. Proper tools make high quality games cheaper to make in general. As an example, let's say you want to make a big open-world game with hundreds of indoor locations, and you have a team of ten to create them. If new tools let a team of five do the same amount of work in the same time as ten did on the previous project, you have two choices: reduce your team to five and make the same amount of content for lower cost (and probably make it better), or keep your ten and create more content for the same cost as before.

If you look at the UnrealEngine4 demos, they are just as much about content creation tools as they are about graphics.


You're putting the cart before the horse. The reason performance has not improved in desktops and gaming is that we've hit fundamental limitations in hardware design.


The PlayStation 3 has 256 MB of RAM, not exactly on par with a current PC gaming system, which can easily have 16 GB.

I would argue that the rising production costs to produce games that actually make use of ultra-advanced hardware are more to blame for the slowing of the graphics race.

You can always do things faster by doing things in parallel. It just gets much harder to program them.


Gaming PCs also have a general purpose OS on them. It's not an apples-to-apples comparison. One is a general purpose computer, the other is a device designed for a single purpose.

But that's beside the point: the games industry drives the graphics card business, but that is only one part of the consumer computer business. General purpose processors are a larger part, and processor performance improvement is not the exponential curve it was a decade ago.


But the hardware IS getting faster! Each successive GPU release, for example, still brings big performance improvements; we PC gamers are just going from 100FPS to 120FPS with a new card. People are running games at 5760x1080 across three displays on a single mid-range graphics card, with anti-aliasing (and all the other settings) turned up to the maximum. Current games just don't make current hardware break a sweat.


Graphics cards are getting better, but not at the rate they did, say, a decade ago, and graphics cards are a small driver in the consumer computer market outside of games.


As far as I've heard, we haven't hit that limit in the consumer space yet, as we're just starting to get close to hitting it in other spaces. Is this not true?


We have hit it in the consumer space. A powerful processor from three years ago is still a decent processor today. That was not true in, say, 2000. This applies in consumer computers and high performance computers.

For a long time, processor architects were able to increase the frequency, increase the cache size, and increase the pipeline depth (to allow for more instructions in flight at the same time) to yield more powerful processors. As Moore predicted, processor architects kept getting more transistors to play with, and they were able to make processors more powerful by making designs that were "like the old one, just more so." But they've hit fundamental limits in the design: increasing the clock speed and the pipeline depth at the same time means that you have to communicate the same amount of data over a longer distance in a shorter amount of time in the silicon. We've hit the point where it's not feasible to do that anymore.

Hence, multicore. Processor architects are still getting more and more transistors to play with, so instead of using them to make a single core more powerful (as they did for a long time), they're using them to make multiple cores. But you no longer get the "for free" performance boosts that you did when you increased single-core performance. Now we need to change how we make software to take advantage of this new hardware. And some software can't take advantage of this new hardware.
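
To make the "change how we make software" point concrete, here's a minimal sketch in C of what exploiting multicore actually asks of the programmer (my own toy illustration, not anything from the article; the thread count, array size, and the sum itself are arbitrary). A plain serial loop gets no faster just because the chip has more cores; the parallel version has to split the work, spawn threads, and combine partial results by hand:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTHREADS 4            /* arbitrary; one per core in this sketch */
    #define N (1 << 22)           /* ~4M doubles of dummy data */

    static double data[N];

    struct chunk { size_t start, end; double partial; };

    /* Each thread sums its own slice and writes only its own result slot,
       so this particular problem needs no locking. */
    static void *sum_chunk(void *arg)
    {
        struct chunk *c = arg;
        double s = 0.0;
        for (size_t i = c->start; i < c->end; i++)
            s += data[i];
        c->partial = s;
        return NULL;
    }

    int main(void)
    {
        for (size_t i = 0; i < N; i++)    /* fill with dummy values */
            data[i] = 1.0;

        pthread_t tid[NTHREADS];
        struct chunk chunks[NTHREADS];
        size_t per = N / NTHREADS;

        /* Split the work by hand and launch the threads. */
        for (int t = 0; t < NTHREADS; t++) {
            chunks[t].start = (size_t)t * per;
            chunks[t].end = (t == NTHREADS - 1) ? N : (size_t)(t + 1) * per;
            pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
        }

        /* Wait for everyone and combine the partial results. */
        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += chunks[t].partial;
        }

        printf("sum = %.0f\n", total);
        return 0;
    }

Compile with something like gcc -O2 -pthread. The point isn't the toy itself; it's that none of this restructuring comes for free, and plenty of existing code, the "some software can't take advantage" case, has no clean way to be carved up like this at all.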


Console manufacturers would love to have a new one to sell

Why? Don't they pretty much lose money by subsidizing them for a few years and only start breaking even after a long haul? I thought they made their money on game licensing.


Other than the Wii, that is often true. The business model is selling licenses to game developers for the games to be sold for that console. Maybe, just maybe, they'll make money on the hardware at the end of its cycle.

The licensing deal is also why the PC version is often five to ten dollars cheaper than the console version. You need no one's permission to publish to the PC market.


I would disagree about console gaming. A stretch this long happened before, between the original NES and the SNES, and between the original Game Boy and its more powerful successor. After that, every major manufacturer released a new console roughly every four years, like clockwork. In this cycle, none has a definite release date, stretching this generation to eight or nine years.

While I agree games propel hardware forward, I believe it's the combination of a graphics plateau, diversification in gaming, and the end of the free ride on Moore's Law, which the article pointed out. This generation of AAA games arguably has the prettiest graphics ever seen, but that comes at a cost in complexity and sheer hard work to build the environments and models, without the wow factor of a major leap in graphics quality. Crysis is still the gold standard for what gaming graphics can do, and that was released five years ago.

We are also seeing a large and unique divergence in gaming in the form of indie titles. Indie games, by their nature, could never compete pound-for-pound with the graphics of AAA games. They make up for this with experimental gameplay and distinctive art styles. Until recently it was very difficult to get an indie game exposure, and building one was hard in its own right. Now the game development toolchain has improved by orders of magnitude, and the advent of app stores on tablets and smartphones makes distribution and monetization much easier. Tablets and smartphones likewise bifurcated gaming: they couldn't render the graphics or supply the raw computational power found in the latest PC and console titles, which made them a natural home for indie titles.

There is also something to be said about Zynga, Facebook, and the whole crop of casual games that sprang up in the last five years. Suddenly the gaming market had many more diverse demographics it could effectively target; it's no longer just the 18-25-year-old male who defines the prime gaming market. These titles don't need the raw computational power of next-gen consoles or the latest PC hardware. They run fine even on netbooks and tablets.

In all, I do believe gaming was a primary driving factor in hardware development. In recent years video games have become a much more diverse ecosystem, and much of it doesn't require raw hardware power.


The gap between the NES and SNES was 5 years, the XBox 360 has been out for 7 - I think it's fair to say this is an unusually large wait for the next gen.

I don't know how you can say that this generation of AAA games is the prettiest ever seen, and also say that Crysis is the gold standard - this is kind of my point really, things have stagnated. Games run faster and faster on "current midrange hardware" (however you care to define that) but aren't really looking much better.

Your points on the video game market massively diversifying are extremely valid though; it's true that the disposable-income, upgrade-chasing 20-something is no longer the most lucrative or important demographic.


I think we are bottlenecked by content creation costs


I'm not totally sure that the diversification of video games is to blame. For every Duke Nukem, DOOM, and Quake, there were equally popular and less taxing games like SimCity and RollerCoaster Tycoon that were major players in the gaming world without pushing hardware forward.

Additionally, I very much take issue with your statement that Crysis, a game released in 2007, is still the "standard" for graphics-intensive games.

The games that are pushing the bounds of PC hardware are games like [1]The Witcher 2, games which are huge improvements on [2]Crysis.

[1]http://cdn.ripten.com/wp-content/uploads/2011/05/witcher2-20...

[2]http://oyster.ignimgs.com/ve3d/ve3d/image/article/745/745255...


I disagree; we're simply not seeing the growth in performance we used to at the high end of the spectrum. This is true for consoles too. It's not because of the consoles.

On the mobile side however, the recent performance increases have been very dramatic, just like in the early days with the 386 and Pentium and stuff.


The problem with desktop performance gains isn't that the gains aren't there; it's that people aren't buying them even when they are. People aren't buying 6GB of RAM, let alone 8. They're not buying SSDs for their desktops, even though those represent real-world speed boosts right in line with years-ago trends.

So with desktop demand tapering off, the performance jumps taper off. And with laptop demand still strong, performance jumps there remain strong. And with mobile demand sky-high: the performance jumps there are as high as computing has ever enjoyed.

If anything, consoles have been languishing for the same reasons desktops have been: there's simply not a lot of demand for a bigger/faster console.

Sure, eventually there will be new ones; just as Intel inevitably releases new motherboards and chips, despite soft competition from AMD and flagging demand. But the growth and clamor isn't there and the performance increases likely won't be either.


I see it more as being about power profiles and Moore's Law than anything else, in terms of why desktops are tapering off while laptops keep going and tablets/phones go strong. The plateau is around a 2.5GHz dual core that supports the bells and whistles where appropriate, be they branch prediction, atomic math, or layered caches. The laptop space is now getting there, the tablet space is still two years out, and desktops hit it four years ago. The reason is that desktop systems could push out 125-watt CPUs consistently without a hitch. Today, the real inroads in the desktop space are not in core count (which nothing consumer-grade saturates), frequency (which hit a ceiling on silicon), or "cheats" (using transistors to accelerate instruction handling); they are coming in the form of significantly lower TDPs. The modern i7 chip runs at 77 watts, where the 920 in the article runs at 130 watts.

Meanwhile, laptops are maintaining their 20 - 40 watt profiles but are gaining performance. They are converging to that low power threshold. The tablet space is even lower, down to 5 watts in some cases, averaging from 3 to 10, but still, that is a convergence point.

It also doesn't hurt that AMD tripped over their own feet for a while, and Intel is resting on its laurels, pumping out higher-performance CPUs without AMD to compete with at the high end.

Meanwhile, the graphics card business is heating up. The 7000 and 600 series cards are upwards of 30-40% faster than the previous generation at much lower TDP. The Kepler architecture is optimized for vector processing to a fault, in that it is worse as a GPGPU device than the 500 series, while AMD's Southern Islands chips are very generic cores optimized for GPGPU where their last few generations were graphics-optimized. In both cases, though, they perform markedly better and consume much less power than many older GPUs.

The advances are still there, they are just going in divergent directions. Nothing is stopping Intel from making a 32-core i7 with hyperthreading on a 500mm² die besides the associated heat and tremendous production costs, but nothing on the market would utilize it. So they invest elsewhere.

The next generation of consoles isn't looking so bright either. They all seem to be targeting AMD GPUs: the Wii U will probably use a modified 68X0, and the Sony and Microsoft camps will use 78XX cards. They are even considering using APUs. The gaming performance of the new hardware won't be as radical a jump as last time, because the lower margins mean they can't build extremely customized gaming hardware like they did back in the RSX days.


I think we're in agreement here. There's plenty of advancement: it just isn't in desktop horsepower because that's not where the demand is. I never meant to imply there was no advancement on the desktop at all. I was just trying to get the gist of "advancement tracking demand" out there.


Same thing I'm saying. Just providing examples of how the desktop is still moving forward :P


We are, though: if you look at the very high end, at some synthetic benchmarks, we are still seeing those 30% performance hikes the author of this article misses. They just have very little impact on today's software, and 99% of users have no need of the extra power.


In a way the article spells trouble for Microsoft and friends. If their customers' hardware from the last five years is "good enough", they won't go looking for an upgrade. (They will go looking for a _new_ toy like the iPad, though.) If next-generation consoles really did drive desktop sales, wouldn't it have been in Microsoft's interest to get the next-generation Xbox out sooner?


I don't think it's easy to see Microsoft profiting by leveraging its Xbox division to boost PC sales (Windows license sales). I can't find a source for this, but I doubt Microsoft made much money, if any, on Xbox 360 hardware. The money is in Xbox Live subscriptions and taking a percentage of sales on the marketplace. This is similar to how Amazon sells its Kindle line at a loss but makes a killing on Prime memberships and eBook percentages.


The irony here is that Microsoft cannibalized the PC gaming market to create the XBox market (and kill its one true enemy at the time: Sony). The result is -- arguably -- stagnation of the desktop PC market which was always -- I contend -- driven largely by gaming. The problem is that the desktop market made Microsoft money whereas consoles... not so much.

It seems to me that problem here isn't cannibalization -- if Microsoft hadn't done it, someone else (Sony?) would have -- but the "let's leverage our monopoly-driven money tree" mindset. XBox was dumped on the market at a loss, subsidized by Office/Windows and leveraged by Microsoft's dominance of desktop gaming, APIs, and developer mindshare.

Microsoft doesn't even try to play fair in new markets, and eventually that had to bite them, and it did.


the desktop PC market which was always -- I contend -- driven largely by gaming.

For the leading edge of the desktop PC market, definitely (advanced graphics cards, fastest possible processor, fastest possible RAM).

On the other hand, the broad center of the bell curve of the desktop PC market has been driven by office applications. These have never needed the high performance that gamers demanded; users simply wanted something "good enough for Word, Excel, PowerPoint, and email".


For the last half decade, I would say users have been driven not so much by office applications as by network applications, specifically the browser and email.


Good point - agreed.


In the past, as PCs continued to improve past consoles, players switched to PCs, forcing the next generation of consoles. That's not happening this time around. It seems PCs have overshot the demand for power, a typical pattern according to Christensen.

The pressure on consoles will come from tablets (the next generation of Imagination Tech's GPU IP is comparable to the Xbox 360's seven-year-old GPU).


No doubt it has been replaced by device obsolescence, where the screen, camera, wireless radio, etc. are bundled together and improvements in any one component compel the purchase of a whole new unit.


I also suspect, however, that PCs are going to stick around for quite a while because they're now cheap commodities, and people respond to cheap; I've argued as much here: https://jseliger.wordpress.com/2011/10/09/desktop-pcs-arent-... and elsewhere.


Sad, but true. Faced with the reality that well-designed, flexible, modular systems have simply become too good, manufacturers are now effectively building in forced obsolescence, either tying components together (the Apple way) or arbitrarily declaring that only certain combinations are permitted (which also works with consumables, like the way my HP colour printer refused to print a black and white document the other day until I replaced an empty colour toner cartridge).

Software seems to be going the same way. SaaS is a fairly transparent rip-off in many cases. Certain big-ticket professional software vendors restrict sales to a network of resellers they can control, who dutifully pull last year's version as soon as this year's is available, so if you want to buy an extra copy to go with the 10 you already have, you can't (unless you upgrade your entire organisation to maintain compatibility).

Bridging the gap nicely are drivers. Here we see a range of dubious techniques. One popular one seems to be releasing equipment with drivers for certain specific operating systems available at the time, but then conveniently not releasing any drivers to support new operating systems released within what would otherwise be the normal lifetime of the device, thus artificially shortening its life, often by many years. Another good trick (I'm looking at you, nVidia) is to take essentially the same hardware, but sell one version with nerfed drivers and another premium (sorry, "workstation") version with "certified" drivers that actually use the hardware to its full capabilities.

I'm a little surprised that we haven't yet seen more people pushing back against these obviously customer-hostile trends, because a lot of people are spending a lot more money and suffering a lot more hassle than they should have to. Certainly some rip-off merchants do come unstuck; I've heard about what would have been extremely lucrative sales of very expensive specialist equipment that got flushed when the prospective customer discovered some form of artificial nerfing in software and took their cash to someone else who didn't do that. But it seems that everyday hardware and software are in one big race to the bottom.

Even premium products, like those high-end graphics cards and professional software installations with their multi-thousand-dollar price tags per unit, seem to be tolerated because almost everyone is doing it now. Perhaps this is partly because the more honest competition mostly comes from smaller organisations in the hardware field or, in software's case, from independent software houses and open source communities, and these kinds of suppliers are (rightly or wrongly, probably a bit of both) regarded as not being up to the job of supplying Serious Business Customers(TM).

What I haven't yet figured out is why this is still happening. Do you know anyone who, in either a personal or a professional capacity, actually thinks any of this stuff is done to help them, or that spending money on these things is a good deal? In a logical market, we would expect competition to spring up and exploit that vulnerability, promoting brands based on honest dealings and good quality, almost certainly charging a higher price for it, but with a greater perceived value that justifies that price. And yet, this doesn't seem to be happening, which suggests that many of the markets in technology industries are not effectively competitive, or some of the big name players are in practice dominant or outright monopolies even if they're not formally recognised as such.


Integrated devices aren't some grand conspiracy. Modular systems have costs, both in terms of money as well as size and weight. Consumers are deciding on their own, by and large, that modularity isn't worth the cost.

Likewise, it's easy to buy non-stupid printers right now, today. But few people want to pay what they cost. Even techies would rather spend $50 on a printer and then complain about how it screws them over with ink than spend $200 on a good printer.

If you want people to fight the trend, start with yourself. No, you can't fight it everywhere (not enough people want a modular cell phone for anyone to actually manufacture one), but you can fight it in a lot of places. Start by not buying crappy and restrictive printers just because they're cheap.


Modular systems have costs, both in terms of money as well as size and weight.

Right. It's absolutely necessary for Apple to make the new iPad so that you can't even take it to an Apple Store and get the storage upgraded, and to literally glue the Retina version of the MacBook Pro together so it's almost completely impossible to repair anything if it breaks.

Some integration has a genuine benefit, such as minimising size or weight. But plenty more integration is just artificial, and this trend is increasing. Some of the clients I work with design high-end IT hardware for a living, so I can say with some confidence that if you think the equivalent people at places like Apple couldn't design modern devices in more modular ways than they are, you're just kidding yourself.

Likewise, it's easy to buy non-stupid printers right now, today. But few people want to pay what they cost.

Why can't the colour laser printer I already paid for just work? Printing a black-and-white document with nothing but black toner is not rocket science. Stop apologising for people who deliberately screw their customers.

If you want people to fight the trend, start with yourself. [...] Start by not buying crappy and restrictive printers just because they're cheap.

That printer probably cost the equivalent of $1,500 when I bought it, and was marketed for use in a small office. I'm not talking about cheap consumer junk, not that it would make the slightest difference to my argument if I were.


Neither of Apple's non-modular systems you mention are artificial. Upgradeable storage would require substantial additional room in a device that has no room. The Retina MBP's screen is glued together because it makes for a substantial thickness and weight savings.

I'm not saying you can't complain about junk. I'm saying that it's really odd to complain about how it's some conspiracy of manufacturers when it's actually just people preferring the non-modular systems for many reasons, and then giving an example where you yourself went for junk instead of something that's supposedly like what you want.


Neither of Apple's non-modular systems you mention are artificial. Upgradeable storage would require substantial additional room in a device that has no room. The Retina MBP's screen is glued together because it makes for a substantial thickness and weight savings.

Agreed. This is not some evil conspiracy of hardware integrators (led by AAPL) to make non-modular devices. This is a design tradeoff in a leading edge machine [1] in which upgradeability was deemed to be less of a concern than making maximum use of space and weight.

[1] You can't get a Retina MBP for less than $2200, making it a premium product for the laptop category.


Neither of Apple's non-modular systems you mention are artificial.

Nonsense. Go look at the iFixit write-ups of taking apart recent Apple devices, which helpfully show exactly how similar/different the variations are in many respects, highlight all kinds of proprietary components literally down to the level of unusual screws that have no apparent benefit whatsoever over using a standard/compatible component, and incidentally debunk your claim about "substantial thickness and weight savings" in the process.

it's actually just people preferring the non-modular systems for many reasons

I'm still waiting for anyone to cite any of those reasons. Other things being equal, why would someone prefer to buy a system that was going to cost a vast amount more to repair in the event of screen damage, or where proprietary connectors meant they couldn't swap out the SSD for a higher capacity unit even though the device itself is actually removable? And as noted above, for the most part other things are in fact equal, or so close as makes no difference.

then giving an example where you yourself went for junk instead of something that's supposedly like what you want

I'm still waiting for anyone to tell me what I should have bought instead. As I said, the printer I was talking about there was very much not an entry-level device, it was pitched as part of a range for professional use in an office, and with a price tag to match. Asking for such a printer to print a black-and-white document with only black toner is not some sort of absurd concession or flippant request. There would be no excuse for nerfing a $150 printer in that way, never mind a $1,500 one.

Also, it's not as if I somehow found the only printer or the only printer manufacturer around who has this problem. Again, as I already said, almost everyone seems to work this way these days. If you insist on maintaining that after extensive research I still wound up buying "junk" unnecessarily, you can easily win this debate. All you have to do is cite an alternative device I could have bought, with a reasonably similar spec and for a similar budget, that does not have such artificial limitations. But you can't, and instead of accepting that, you persist in implying that it's somehow my fault that a world class brand deliberately nerfed a device that costs $1,000+ in order to increase their revenue on consumables.


A direct quote from iFixit: http://www.ifixit.com/Teardown/MacBook-Pro-Retina-Display-Te...

"Apple did not design and build a 1.5 mm thin LCD panel. They did, however, do something exceptional with the design of this display: rather than sandwich an LCD panel between a back case and a front glass, they used the aluminum case itself as the frame for the LCD panel and used the LCD as the front glass."

Remember your whole theory about iFixit debunking substantial weight savings? Turns out iFixit agrees with the OP and not with you.


Would it be too obvious to point out that what you quoted doesn't even mention anything about weight?

Also, here's another direct quote for you:

"Incorporating a removable LCD into the MacBook Pro with Retina display would increase the thickness by less than a millimeter, while still preserving the awesome Retina resolution."

Source: http://ifixit.org/2787/mid-2012-macbook-pro-teardown/

Unless you're going to argue that less than a millimetre is enough to swing a purchase regardless of display resolution, in which case a lot of MBP Retina owners are presumably on their way back to the Apple Store to swap for a MacBook Air, it looks like iFixit does support my claim after all.


There are a thousand ways to improve a MacBook Pro (or any other computer) that would increase the thickness by only a millimeter. Any individual one would be of essentially no consequence, but if you did them all, you'd end up with a machine a meter thick.


Any individual one would be of essentially no consequence, but if you did them all, you'd end up with a machine a meter thick.

No, you wouldn't.

But even leaving aside the obvious hyperbole of your comment, we're drifting away from the topic at hand. My argument is simply that proprietary-everything, all-in-one designs are a significant loss for the consumer and a huge win for the manufacturer, because they very effectively kill both the upgrade and repair industries, making the default response to buy more new equipment instead.

Given the disproportionately high rate of damage to mobile devices, and the relatively low starting specs, and the fact that most laptops (including Apple ones) have not had such severely restricted maintainability until very recently, this all seems like a huge step in the wrong direction, from the customer's point of view.

If it were only the screen on the Retina MBP, I could just about believe the theory that it was done to keep the size down, but it's obviously not only that screen. If you look at the direction of basically all Apple equipment in recent years, not just phones or even laptops, everything has been moving towards using unusual/proprietary connections and fixed-at-birth specifications for a while, and there are far too many cases where there was no apparent benefit of any kind for me to accept that it's all being done because it's what the customer wants. Fool me once, shame on you. Fool me dozens of times over a period of years, find me a good shrink.


If it was really that bad for customers, why do people keep buying the stuff? There's plenty of modular, easily-repaired hardware out there. It doesn't sell nearly as well, though. I think people value the benefits of the non-modular design more than you think.


If it was really that bad for customers, why do people keep buying the stuff?

Who's giving them a choice, other than not buying anything at all?

I think people value the benefits of the non-modular design more than you think.

I think people value other things that may or may not have anything to do with the lack of modularity, and I think they buy devices that have those benefits. Often, they probably don't even realise the lack of modularity or its implications at the time of making a purchasing decision, but even if they did, they might still value the other benefits over any downside due to lack of modularity.

We can't really conclude much from the current trends unless manufacturers also offer devices that have the benefits of modularity, potentially at a greater cost, whether literally in money or in things like bigger size or heavier weight.


There isn't much of a choice in the tablet or phone markets, but there's a ton of choice when it comes to laptops. If you want a modular laptop you can take your pick among a ton, but people still buy Apple's in large numbers.


The pentalobe screws are stupid, and I'll be the first in line to call Apple a bunch of jerks for it. However, they're unrelated to using non-modularity as a weapon against customers. The screws certainly qualify as such a weapon, but they're unrelated to component modularity.

As for the rest, can you give me a specific example of a non-modular system that Apple has made that didn't result in savings in cost, space, or weight? Because every example I can think of saves at least one of those, so either you're thinking of something I'm not, or we disagree on what the savings are. Either way, I can't address it until I know exactly what you're referring to.


I suspect we would disagree on whether any savings are substantial, and as such whether they could justify the obvious and severe reduction in ability to upgrade or repair these devices.

Personally, I don't buy the idea that a customer goes into a store and measures whether an iPad or a Galaxy Tab is a few tenths of a millimetre thicker before making their purchasing decision. On the other hand, I do buy the idea that a lot of people are shocked and then angry after their Apple gear gets damaged and they learn how much it's going to cost to repair and how long it's going to take, not least because I personally know several people who have been stung in that way. Of course, by the time they discovered that downside to the all-in-one designs, Apple already had their money.


The idea that it can only make a difference if people measure it is absurd. People buy Apple products because they look nice and work well. Shaving space and weight is part of their design. I know people buy iPads instead of Galaxy Tabs because of the industrial design. I seriously do not believe that the iPad could have such a nice design if it was fully modular. I'm sure you can come up with a bunch of individual components to modularize without substantially impacting the design, but that doesn't do us any good. Unless I misunderstand your complaint, they'd have to modularize many components, and that will definitely impact the design.

In my experience, people are mostly surprised at Apple's good service when they need a repair. It's not as if other manufacturers provide cheap repairs, either.


The computer market is not a conspiracy of manufacturers that get together and decide to screw customers. And absent a conspiracy, there's no profit in that. Actually, I'd argue that the computer world has been increasingly tending toward openness, not away from it; but consumer devices are a special use case where almost all of computer-buying humanity simply doesn't care about the things you care about.

Anyway, I'd like to see a "modular, flexible" hardware device that meets the specs of an iPhone or MacBook Air. As for printers and gfx drivers, those have sucked since the beginning of time.


"The computer market is not a conspiracy of manufacturers that get together and decide to screw customers."

I agree, but for the record, there have been a variety of customer-screwing conspiracies within the computer market. If memory serves correctly, price-fixing arrangements have occurred in at least RAM and LCDs. Not to mention various anti-trust issues.


If my memory serves, there have been quite a few of them over the last couple of decades; people just tend to forget them over time.

Plus a few lawsuits were avoided or defeated thanks to the generosity of the US federal government making laws that make it more difficult to be a consumer; the DMCA comes to mind. Laws pushed by the very companies that are not in a conspiracy but are willing to work together for their own benefit when needed. You want a third-party ink cartridge? Screw you, because the law says you can't, and it's even illegal for you to try.

Nope, nothing to see here.


The computer market is not a conspiracy of manufacturers that get together and decide to screw customers.

Then how do you explain the kinds of deliberate and obviously anti-customer behaviour I described before?

In some cases, a single dominant player is abusing their customers because they have a captive market. This is often the case with high-end equipment or software, because once someone has committed to using a certain product range the cost of switching is prohibitive.

In other cases, while there may not be an active conspiracy, I can see no credible explanation for the way multiple suppliers all impose the same unnecessarily consumer-hostile policies unless there is a lack of effective competition, such that those who offer consumers a better deal gain no commercial benefit from doing so.

Anyway, I'd like to see a "modular, flexible" hardware device that meets the specs of an iPhone or MacBook Air.

Completely modular in the sense of a tower PC case, of course not. But that doesn't mean they have to glue stuff together so even their own people can't easily repair a failed component.

As for printers and gfx drivers, those have sucked since the beginning of time.

Perhaps. But they mostly used to suck because there were no standards, so every device/software combination needed its own version of what we now call a device driver. There's no such excuse today.

Now the drivers suck because someone at the source made an active decision to screw their customers. How else do you explain a newer, much more powerful model of graphics card that somehow has severely degraded performance in some functions compared to a card of the previous generation, while much more expensive workstation cards using the same new generation of hardware under the hood do seem to exhibit the performance improvements one would expect?

And what other description is consistent with releasing a hardware device that should be useful for many years, yet not releasing drivers for it when Windows 7 comes out just a year or two later? It's not as if the arrival of Windows 7 was a surprise or the manufacturers don't know how to write Windows 7 drivers for their equipment. Chances are the next product range runs just fine on Windows 7 and its drivers are probably using a lot of the same driver code as before.

Obviously there is profit in these behaviours, because it forces customers to buy much more expensive high-end brands or to buy replacement equipment much sooner than they otherwise would have to.


To be fair, I don't think the integrated device model is worse than the beige box model per se. Tablets and smartphones afford us completely new ways to interact with computers (and each other). Recent ones, such as the Nexus 7, are reasonably priced to boot. There are some perhaps unavoidable consequences of replacing whole devices rather than individual components (environmental, to say the least), but in the abstract I think it's hard to say whether one model is necessarily worse than the other.


Nerfing hardware artificially can be beneficial to the consumer. I'm sure many things have a lot of their cost not in physical production, but in engineering/design. Software would be an extreme example of this. This being the case, nerfing allows different price points to be offered in order to more efficiently pay back engineering costs on the overall line. This is a win/win for the consumer and the seller compared to offering only one version of something.


I understand the market segmentation argument, but I fail to see how shipping a new generation of product, at a higher price than the previous generation, where the analogous model from the previous generation was unambiguously and objectively superior in specification, can possibly be benefitting anyone who buys the new, more expensive, inferior device.


Example?


Try Googling a bit for nVidia and terms like GPGPU and CUDA.


Device obsolescence is also encouraged by subsidized phone contracts, aggressive marketing, and the craze new hardware causes in the tech press.


I think this pretty well reflects Microsoft's long-term consumer risk, too.

More than half of MSFT's revenue comes from Windows and Office (http://www.tannerhelland.com/4273/microsoft-money-updated-20...), and Windows alone accounts for over half the profit (http://news.yahoo.com/microsoft-defer-revenue-windows-upgrad...).

Back in the 90s and early 00s, the rate of change in the PC market was amazing. The hardware you could buy in 1998 basically didn't exist in 1996, so there always was something new and exciting to spur new hardware sales.

New hardware sales represent the bulk of Microsoft's Windows licensing (the so-called "Microsoft Tax") revenue and so a large part of MSFT's profits. All MS had to do was be the default OS and their profit would grow without lifting a finger (obviously that's greatly simplified, MS had to do a lot of things).

Now that this hardware revolution is no longer taking place, MS's Windows licensing revenue (and thus the huge profits) is threatened. Competition in the PC market has driven hardware profits to zero. MS and their clone makers have painted themselves into a corner.

Contrast this with Apple's strategy - produce desirable hardware that people buy for non-technical reasons (design, fashion, software, etc.) instead of raw figures (CPU, Memory, etc.). Folks will upgrade because they want to, not because they need to. That's a lot harder to pull off, but Apple seems to be doing a great job of it.

Ultimately, I think Apple's strategy is going to make a lot more sense in the long run. Drive the cost of operating system upgrades to zero (I'd be shocked if the next OS X upgrade costs anything; I was quite surprised that Mountain Lion was not a free upgrade), limit the age of device that can run it, and create products that people want.


produce desirable hardware that people buy for non-technical reasons instead of raw figures

Of late I've been contending that specs are excuses admitting inadequacy.

Until recently, every new computer (broadly defined) was listed with a proliferation of specifications (_.__GHz CPU, _GB RAM, ___GB HD, ...). Now, particularly with the iPad and the trend spreading to other machines, no specifications (beyond what amounts to "small, medium, or large") are given. Why? Because specifications are needed to convince buyers the product is good enough when it isn't.

Most users just want something that's capable enough that they don't have to think about the technical limitations. Most users glaze over at random-looking numbers associated with meaningless abbreviations denoting implementation details they don't understand; their question isn't "how many gigahertz is the processor?" it's "if I click something will I be annoyed by the delay?" The real answer to the latter question is "yes, you will". To obscure that dissatisfying answer, sellers toss out the profusion of specifications in an attempt to convince buyers that the product is good enough to buy anyway. Users, able to at least evaluate whether X > Y, compare the numbers and convince themselves that those numbers are as large as they can be and still have an affordable package.

Apple's goal is to sell hardware. With the onset of "retina" displays, flash memory, multi-core processors, iCloud, etc. all those specifications transition from confusion to "good enough". Doesn't matter what the number of pixels are, you can't discern them so it's good enough. Doesn't matter what the storage capacity is, between terabytes local and near-bottomless cloud behind it you won't run out. Flash is so fast you mostly won't care. Processor speeds equal supercomputers of recent past. It's all "good enough" - making the comparison between Apple products and "lots of incomprehensible random numbers & abbreviations" easy.

PC makers still flaunting specifications are saying "no, really, it's good enough - see?" making customers hesitate their purchase. Apple is going for "details are irrelevant, it's good enough - period."


I disagree with you here - PC makers list specs because it is technical specs, rather than user experience, that set their product apart from competing PC makers.

I mean, you compare a Dell laptop to an HP laptop, or a $500 Dell laptop to an $800 Dell laptop for that matter, they're all running Windows 7. How would you market one product over another without admitting their differences?

Admittedly, PC makers might be wise to stop fighting one another for market share and instead focus on fighting Apple for market share.


How would you market one product over another without admitting their differences?

Subjective experience. Most people don't compare horsepower & engine volume & torque & etc. in cars, they just walk into a showroom, try out a few, and go "I'll take that one."

Apple is winning despite not "admitting their differences" in speed, capacity, etc. vs HP vs Dell vs Dell. Apple's opposition have to resort to mutual fights over specifications precisely because they don't have anything better to appeal to; give the user specifications so good they don't care, refine the end-to-end experience, cut costs without cutting quality & experience, and buyers will flock.

Anecdote: I loved Sony products. Raved about them for years. Finally gave up because enough corners (hardware, software, UX) were cut that my enthusiasm was shredded.


Let me put it another way: Ignore Apple for a minute, consider just the market for Windows laptops. The subjective experience will always be, at its core, the experience of using a vanilla install of Windows 7.

To use a car analogy, imagine all car makers built their car on a Toyota Camry chassis. They can put in any engine they like (that's your CPU and whatnot), they can paint it and add aftermarket body kits (that's your keyboard, trackpad and plastic screen bezel) and they can hang an air freshener from the rear view mirror (that's your preinstalled software like the 30 day McAfee trial). The Toyota Camry chassis is Windows.

Ultimately, all those Camry-based cars are going to have a very similar driver experience because, at its core, each is a Camry. When one Camry-based-car maker is competing with another, they talk about the paint job and the engine because the paint and the engine are the only differences between their products.

Apple, in this analogy, is the only major car maker that isn't building on a Toyota Camry chassis.


>Competition in the PC market has driven hardware profits to zero. MS and their clone makers have painted themselves into a corner.

This might serve to explain why the latest version of Windows seems to be tailored for mobile devices, TVs, and other consumer electronics instead of traditional PCs. You can't sell somebody a new desktop every year anymore, but you absolutely can sell the same person a new tablet every year.


Yup. Mobile computing devices are the PCs of the 90s. Even more so with the two-year contract extension device subsidization. $100 for the latest and greatest iPhone every two years? Absolutely!

Eventually the iDevice market will "settle down" just like regular PCs, and so I think Apple's strategy of software and design lock-in makes a ton of sense. In the long run, it's much easier (comparatively speaking) to innovate on the software end than on the hardware end. Once the iDevice gold rush is over, the software lock-in will keep me in the Apple family.

With more and more things moving to the "cloud", I think that will continue to make phones and tablets "disposable" in a way PCs aren't. Apple is trying to get there with the App Store and iCloud, but they are still a long way off from "buy a new MacBook Air, sign in to iCloud, and it's just like your old computer!"

(aside: device subsidies never really worked for PCs, wasn't ePC some sort of a leased computer model?)


"I was quite surprised that Mountain Lion was a fee upgrade"

My knowledge of American anti-competition laws is lacking, but I remember back when they were able to release iOS upgrades for free for the iPhone, but not for the iPod Touch. At the time, there was supposedly something on the legal side of things that they were afraid of running afoul of by releasing free iPod Touch updates.

I suppose, if they were to release new versions of OS X for free, they may risk being sued for trying to push competitors out of the market by giving away a product that others are trying to sell.

No, it doesn't seem right to me, but sometimes laws are outdated or simply abused.


It was due to the way Apple had been accounting for OS upgrades, not due to competitive/monopoly concerns.


Specifically it's because Apple likes to take the Research and Experimentation Tax Credit for operating system development. The R&E credit requires that you work on a new product and not on simple refinement of an existing product and its features. There are a lot of subjective factors involved in qualifying and Apple felt more secure by requiring some money to upgrade. The big advantage was not the income but the big fat check from the federal government at the end of the year.

The R&E credit is a pork barrel scam for big companies with clever accounting departments, of course. But if it's there, why not take it?


According to this article (http://www.nytimes.com/2012/04/29/business/apples-tax-strate...):

"In 1996, 1999 and 2000, for instance, the California Legislature increased the state’s research and development tax credit, permitting hundreds of companies, including Apple, to avoid billions in state taxes, according to legislative analysts. Apple has reported tax savings of $412 million from research and development credits of all sorts since 1996."

$412M against the hundreds of billions Apple has earned since 1996 is pennies. That doesn't seem like much... Are there more?


You are mistaken. Free upgrades are not illegal. The Apple OS is not even a competitor to Windows, because they don't run on the same hardware. MS got into trouble because they tried to use their dominant OS position to force out competitors to their other products.


Apple's strategy makes sense for Apple, driving down the cost of software makes their hardware more attractive.

This doesn't make a great deal of sense for Microsoft, what does is driving down the price of hardware. Their strategy has always been to treat the hardware as a commodity because it is in their interest to do so.


I think it made sense in the 90s-00s rapid innovation space. Microsoft needed a computer on every desk/laptop/car/whatever in order to push Windows and Office licenses. Dramatic advances in hardware drove high turnover and more OEM licensing rev/profit.

Now that folks aren't upgrading very often, the cost of the Windows license is starting to be a much bigger proportion of the total cost, basically creating a price floor for PCs. This sucks if you're an OEM: you have a baked-in cost you don't control (aka the MS Tax). Microsoft must naturally drop the licensing price to OEMs in order for OEMs to eke out some profit.

In the long run, Windows OEM license fees tend to zero. It's already happening with Windows 8 upgrade pricing (Mac OS X has already started this trend), and I expect it to continue.

MSFT knows this, which is the primary reason they're making their Surface tablet (and Office online, to get recurring SaaS revenue). I don't believe for a minute that they are only doing this as a "reference" implementation.


Core i7? PC hardware hit a usability plateau with the Core 2 Duo. My 4.5-year-old ThinkPad and home desktop are both Core 2 Duo machines, and they perform most modest tasks fabulously. I do look forward to an upgrade, but I have thought the same thing this article states: that PCs are lasting much, much longer than they used to.

What is more interesting is smaller hardware: Mini-ITX systems, the Raspberry Pi, and Atmel/Arduino boards. Writing C code for a 16K Atmel is a great way to appreciate the hardware in an 'older' PC.


I just turned my 5-year-old Core 2 Duo ThinkPad into my home server. I was able to put two 1TB drives into it. (Using a laptop has the advantage of a built-in screen, keyboard, wifi and wired networking, and, most importantly, a built-in UPS.) It replaced my previous server (a single-core AMD from ~2005). Most noticeably, power consumption has dropped from 75W to 14W.

I need 16GB of RAM in order to work with development data sets, and the old Thinkpad chipset can't take more than 4GB. Heck my brand new one is limited to 32GB, so that is probably what will cause my next upgrade. (I'll omit my rant about laptop screens getting worse.)


It is funny that hardware has taken a turn towards slower/lower-end. Appliance computing, I guess. There was a mad dash to make everything as fast as possible, then when it was faster than really required, attention shifted to making computers that are specifically designed for various tasks. Why have one computer that's not really good at anything (but does it fast enough to not care) when you can have a half dozen tiny and power-efficient computers that are great at their job (so good that you don't notice how slow they really are)?


Maybe I live in a different universe, but I feel like computers are so slow these days. How is everyone claiming that they are sufficiently fast?

Starting up Outlook takes 10 full seconds (the splash screen alone consumes half of that time). When I open a web browser or click a link in an email, it takes about 5 seconds to fully load. Even distilling a file to PDF or loading Facebook takes almost 5 seconds.

I don't think it's an issue of my computer being particularly slow. Everyone at work with their quad-core processors and 6GB of RAM seems to just sit and wait without noticing. Have you ever been in someone's office while they search for an email? It takes minutes! And don't even try doing any work while Adobe is trying to update or the antivirus scanner is running...

I spend so much time waiting for a computer. Waiting for IDEs to redraw, waiting for my computer to let me disconnect my usb drive, waiting for splash screens. What is going on?


There's always going to be slow code. That's a fact of life. That's also part of the "not really good at anything" point I made about general purpose computing. You complain about waiting seconds or minutes like it's outrageous. Instead sit back and think about what that computer is actually doing. I do, and I marvel that it only takes seconds or minutes.


Do you have an SSD?


I do on one computer, and it does make things noticeably faster, but it still feels slow compared to how people are describing their computers. I am constantly waiting a couple of seconds between actions.


Indeed. My 2009, 15" 2.53GHz C2D (with 8GB RAM) is still ridiculously fast for most things. I don't play any games, but for development, I don't feel a need to upgrade. I would love to upgrade of course, but that would be out of pure whim, not a need. I'll probably wait till next year's rMBP (hopefully it'll be cheaper and more powerful) and fully expect that machine to work for me till 2018 (that is, unless the battery dies on me).


A few features released after Nehalem make me basically uninterested in pre-Westmere CPUs.

AES-NI is the big one. Improvements to VT-x and VT-d for virtualization performance and security are other big ones.

Pure CPU clock speed hasn't been increasing, and more cores aren't useful for many tasks because of the difficulty of programming for them, but CPU features are still a big deal. FMA in Haswell is going to be a big deal too.
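If you're curious what a given chip actually exposes, a quick sketch is enough to check (CPUID leaf 1; gcc/clang on x86 only, using the standard Intel-documented bit positions):

    /* cpufeat.c - prints a few ISA extensions reported by CPUID leaf 1 */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 1 not available");
            return 1;
        }
        printf("AES-NI: %s\n", (ecx & (1u << 25)) ? "yes" : "no");
        printf("FMA:    %s\n", (ecx & (1u << 12)) ? "yes" : "no");
        printf("AVX:    %s\n", (ecx & (1u << 28)) ? "yes" : "no");
        return 0;
    }

On a Nehalem i7-920 all three come back "no"; Westmere adds AES-NI, Sandy/Ivy Bridge add AVX, and Haswell will add FMA.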


AES is huge. It's the difference between "let me resume that VM for one second and suspend it again" and "well, I'll just wait until the next time I resume it and hope I remember what I was going to do". (Given VMs that live on an encrypted drive.)
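If you want to see the gap on your own box, comparing the two paths in OpenSSL's built-in benchmark makes the point (the -evp form goes through the EVP interface, which uses AES-NI when the CPU and build support it):

    openssl speed aes-128-cbc          # software-only implementation
    openssl speed -evp aes-128-cbc     # EVP path; AES-NI if available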


Personally, I'm on a 2008 Dell Business Vostro 200 with an Intel Pentium Dual Core (same under the hood as Core 2 Duo, maybe?). I'm gonna upgrade someday, but if this thing is sluggish I don't really notice it at all.


I agree that there was a plateau slightly earlier, but I imagine Nehalem was used as an example because it's consistent with the 4 year cycle mentioned earlier.


Software and games have traditionally driven obsolescence. Now that software is shifting toward efficiency, remote processing, and web-based interfaces, we don't need the power as much anymore.

It was easy to double the processing requirements of software when we were at a low exponent of Moore's law, but now that's seen as poor development practice. We're able to do more with less code.

Also, since Facebook and other web services have diluted the demographic with people who mostly browse the web, the demand for power has waned.

However, I am annoyed by all the posts asserting that desktops are dead. This is far from true. Gamers will not give up their desktops. For power users, a laptop just won't drive four 1080p monitors. I personally prefer having a desktop so I don't have to mess with docking or plugging my laptop into a ton of cables when the desktop does a better job anyway. The desktop also serves as my media consumption station and is connected to the TV.


I used to do a lot of server work, mostly virtualization in a lab environment. We created an elaborate test rig where, quite literally, the more computers you threw at the problem, the faster you could iterate. I left about 4 years ago, but I looked at hardware again a few weeks ago, and it blew my mind how things have doubled since then.

I think servers for real work are still important, and they're not done getting better and faster.


The server market is almost as stagnant, though. Multicore CPUs are important because of server virtualization, and it's always nice to be able to put a quarter-terabyte of memory into a system, but there's a dearth of reasons to upgrade short of EOL or support being discontinued.


It really depends on what you do with your computer. I had a Core 2 Quad system that was perfectly fine for me in 2008, but as I dabbled further into HD video editing, I dropped money on a new system a month or two ago. Now I can preview effects in real time, and rendering is so much faster. Likewise for working with photos from my Canon T2i. The ability to chuck 24GB of memory into the system doesn't hurt either, and SSDs? I actually put my 4-year-old computer on Craigslist for free because it seemed pretty useless to me after upgrading to this new system.

I'm looking forward to what kind of technology will be available in 2016.


I am absolutely amazed that the picture at the beginning of the article seems to have taken on a life of its own.

More on topic: quick PC obsolescence was always something of a myth. Parts were only really obsolete if you absolutely had to run the latest shooter at the highest settings known to man. So it wasn't that the hardware was obsolete; it was that the user needed to justify their desire to buy the latest for bragging rights.


Less on topic, but more on the picture: I was wondering about that image. Something struck me as off. And then I realized that the computer had a steering wheel. So for those interested, yes, it _is_ odd for a computer to have a steering wheel that large.

http://www.snopes.com/inboxer/hoaxes/computer.asp


"Many a prognosticator who has tried to envision the future has been tripped up by a failure to correctly anticipate the direction of technological change."

I straight away thought of Kubrick's 2001 and its complete failure to imagine mobile phones. Floyd uses a fixed videophone booth on the space station to call home. Personal mobile technology is the thing.

Back nearer the topic: When I can hook my phone up to a monitor and keyboard I'm done.

http://www.ubuntu.com/devices/android


If you remember "Stranger in a Strange Land" (1960s) set around the year 2000, one of the characters, a reporter I believe, had a mobile phone that was the size of a briefcase. Another character marveled that he must be rather well off because mobile phones were so expensive.


I would say you're right for most cases. But not for gamers or "power users":

* Game developers constantly up the ante. Try playing Battlefield 3 at 2560x1440 on Ultra on your 2008 PC and see how you go, if it even starts at all.

* Virtualisation has resulted in consolidation (for me at least). Whereas I might have had to keep around a Windows box to do testing, now I just fire up a VM on my laptop.


> CPU obsolescence

OTOH everything else has been playing catch-up: disk, network, bus, etc. Sure, my CPU has hit the ceiling, but it spends 95% of its time idle anyway. Broadband five years ago was probably a 3-5Mbps link and storage was a 7200rpm platter; now broadband is 20Mbps and storage is an SSD.


I'd say this is even more valid for home servers. Of the 7 spare boxes I have, five are 5+ years old, while the other two are 2-year-old shit home laptops (Inspiron, yay!).

These were all under $50 apiece when you factor in that 3 were free. All of them run Ubuntu Server or Arch Linux and never have problems. Hell, you can buy surplus/old 620s from corporate shops for < $100 and build yourself a full Beowulf cluster for under $500.


What's the power usage of old PCs like that? I've been using a SheevaPlug as my server at home because it only uses something like 5 watts.

It seems like ARM servers are the best choice for home servers. They have adequate processing power and are cheap.


I also have a SheevaPlug as my server, but if I did it over I would grab a Mac mini. My big reason for jumping to the Sheeva was the power usage, but then I found out the mini only uses 12W! Along with that 12W you get x86, with a hard drive, CD drive, WiFi, etc., on a box where I can easily load Linux (upgrading the Sheeva's kernel isn't the easiest...), and in a few years when I'm done with it I can easily sell the mini, unlike the Sheeva.


None of these have over 350W PSUs, and I don't think any of these rigs draws anywhere close to that much power. All I have is anecdotal evidence that my electric bill has increased by maybe $14-$20 a year. Running my stereo for a day probably eats more power than these things take all year. Plus, my living room in my apartment stays nice and warm during the Midwest winters. I'm pretty happy with space heaters that have processing power :)

edit: the real issue is that I can't get a cable connection in my neighborhood with more than 0.7Mbit up, so it's not like I'm serving loads of content. I mostly use these for running different tasks. Two are dedicated DB servers, and the other 3 run whatever projects I'm consulting on in a given week.


I use a VIA x86 system for my server.

18W peak from the wall, no fan (which is one less moving part to fail). Since it replaced an old P4 system (200W from the wall), I shaved $30/month off my electric bill.


Home servers? I've always used incredibly out-of-date machines as home servers. There has never really been a market for powerful home servers.

Until the end of 2005 my home server was a Pentium 166 with 80MB of EDO RAM and a 1GB hard disk (bought new in 1996 as a P120 with 16MB of RAM), running Windows 2000.


There is a valid case for "powerful" home servers. Centrally backing up all the PCs in the house and streaming media to computers/TVs/consoles/tablets are two things I do in my house with my server. Granted it's not a brand new system, but it's a Core 2 Quad with 8GB of RAM, 20TB of hard drive space, and a Radeon 5700. Streaming and storing HD video isn't cheap. This centralized server is more cost-effective and convenient than all of my roommates having the media stored locally, though.


You don't need much memory or CPU power for streaming or backups unless you're doing on-the-fly transcoding. You do need storage connectivity, but even USB 2.0 has a theoretical throughput of 60 megabytes per second, while you'll get a practical throughput of ~35 megabytes per second on gigabit Ethernet.

The only time I saw my ~2005 era system have non-trivial CPU usage was when doing encryption, and even then it would peak at around 25% of CPU.


Encryption and compression are what the CPU and memory are for. The video card is for transcoding to deliver to the target system. It's mainly just a re-used system, but you wouldn't be able to do it with a PII and 64MB of RAM.


According to cpubenchmark.net, the Nehalem Core i7-920 (from November 2008) mentioned in the article has about half the performance of the Ivy Bridge Core i7-3770K released in April. In 2004 the best we had were Pentium 4s, which were around 8x slower than the i7-920 that came 4 years later.


You also have to factor in the life expectancy of these machines. Processors usually last about 5 years, and by the time one dies you're probably going to want the latest, which will most likely use a different socket and require a new motherboard.

Furthermore, PC enthusiasts and gamers will never be satisfied. These folks live for the next hardware release, chasing even the tiniest gain on their overclocks or going from 2x AA to 4x AA. The PC enthusiast community is a force to be reckoned with. Don't believe me? Just check out the plethora of forums out there and see how active they are. I don't see that going away anytime soon.


This is quite true.

My parents are still using a desktop system from 2002 for all their stuff, and it's still working quite well.

This is the problem most OEMs are facing nowadays, as there is no longer a need to upgrade every few years like there used to be.


For those who don't game or edit videos, this has been true for about 8 years. I retired my 2004 Dell desktop with 1GB of RAM in 2012 and still keep it as a backup. The ever-increasing speed of browsers and the shift of my work from the desktop to the cloud kept my system operating at the same speed it did 8 years ago: not blazing fast, but good enough.

I expect my Sandy Bridge replacement system to go another 7-10 years before I replace it.


Yep, seems pretty accurate to me.

In fact, my home system is pretty much identical to their hypothetical 2008 i7-920 system.

I have made a couple of upgrades over the years (replaced the original Radeon 4870 X2 with a GeForce 560 about 9 months ago when the ATI card died, and recently upgraded from the original 6GB of RAM to 24GB).

Looking at what's out there now, there's nothing that makes sense to upgrade to.


Raytraced games probably come closest to something that would cause people to upgrade. But even Intel seems to be backing away from raytraced games of late.


Intel backed off because its futuristic hardware platform was too futuristic to make a good GPU. http://en.wikipedia.org/wiki/Larrabee_(microarchitecture)


I'm not sure I agree with this fully. There are compromises made in software as a result of limited resources; with those resources now available, some of those compromises can be avoided. For example: keep more data in memory at all times, use more resource-hungry algorithms that just weren't feasible before, etc.


The relentless evolution of the PC was driven largely by PC gaming. In the '90s and early 2000s you could rely on the fact that a PC bought one year would struggle with games made the following year and be incapable of playing at least some games made 2-3 years later. As such, gamers eagerly adopted new hardware, overclocked, and overspent by factors of 2 or 3 in the vain hope that it would buy them a few extra months (when in fact it was more sensible to buy a cheaper PC and upgrade more often).

Then a funny thing happened. The video game market outgrew the movie market. It started making financial sense to invest tens of millions in the development of a single game. The cost of the content (i.e. character designs, level designs, script, voice acting, motion capture, etc.) far outstripped the cost of coding the engine. It became a complete no-brainer to port games to as many platforms as possible to maximize your audience. The corollary is that it became necessary to design games such that they could be ported to as many platforms as possible. That meant that most developers stopped exclusively developing for one platform and instead started designing for them all. While you can always tack on some superficial eye candy when the hardware supports it, games could not be designed with basic gameplay requiring more resources than the slowest platform was capable of. Thus, consoles became an anchor that brought the relentless march of PC game requirements to a screeching halt.

The Xbox 360 and PS3, not to mention the Wii, are handily outperformed by today's desktops. Mobile phones are poised to pass them within a couple of years! However, the games you can play on a PC are largely identical to the console versions. Sure, there are some improved graphics, additional content, etc., but these differences are superficial. In fact, if your hardware isn't the latest and greatest, most games can simply scale back the eye candy to maintain performance. Even PC-only developers, such as Blizzard, have recognized that they must build games with conservative minimum requirements in today's market.

The end result is that PC gamers can run new games that are only superficially inferior to what the latest generation of desktop hardware is capable of. They no longer need to upgrade their desktops. At least, they won't until the next generation of consoles arrives. When that happens there will be many new games designed with greatly increased resources in mind, and a big wave of desktop upgrades will ensue.

In the end, PC obsolescence isn't really dead. It's just quantized differently.


Let's turn this thread on its head

Does anyone think a desktop PC with a 20 year life is possible now? What kind of design?


If nothing else, I think power consumption is a dealbreaker for a 20-year desktop PC. In (say) five years, there will be an iPhone with the same specs, and then you'll be running a desktop PC for 15 years at 100 times the power consumption for no reason. A good analogy is the way otherwise perfectly good CRTs disappeared so quickly in favor of relatively expensive and low-contrast flat panels. I'm using extreme numbers, but at that timescale even small efficiency increases will make it worth upgrading, and efficiency increases are a huge focus right now.
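To put illustrative numbers on that (assuming a ~100W average draw and $0.12/kWh, both made up for the sake of argument): 100W running 24/7 is about 876kWh a year, roughly $105, versus a couple of dollars a year for a ~2W phone-class board doing the same light duty. Over 15 years that gap is on the order of $1,500, which buys a lot of new hardware.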

The place to look for a 20-year computer is probably the same place you find them now: embedded systems with minimal power consumption and ample specs for the limited job they're designed for. Cars, for example.


An interesting view, and one I had not thought of in quite that way before.

Devil's advocate: my old workstation has a 440W power supply but draws much less than that most of the time, a small percentage of the total electrical power my house uses. What pressure is there to swap the workstation for a more efficient PC?


Depends on OS developments. My 2006 Mac Pro is not supported under Mountain Lion, but it's still an excellent machine: 4 cores @ 2.66GHz, 32GB of memory, and 6TB of disk, plus GigE, FireWire, USB, and a DVD/CD-ROM drive. I see no reason other than PC envy to upgrade to a different system. Eventually Apple will stop releasing security updates for Lion, and then I'll have to consider moving to a Linux distro. Short of a hardware failure, I figure I'll get another 4-6 years out of it. Replacement parts for Mac Pros suck in price, but I bet I can find what I need on eBay. Heck, my system appears to be worth less than $500 according to my last search.


> Does anyone think a desktop PC with a 20 year life is possible now? What kind of design?

Probably not; I think hardware still moves too fast for that. A current PC might serve as a media center for that long, though.


Two (well, three) things have helped me extend the effective life of a few machines for myself and family: 1. Particularly for laptops, purchase top-end discrete graphics; 2. Max out the RAM before it becomes terribly expensive (as its form factor is passed by); 3. When storage space becomes constrained, throw in a new hard drive.

Particularly the graphics processing on laptops. Inevitably, this ends up making the difference between a machine becoming "a little slow" and becoming "unusable" for general, day-to-day use.

In the next ten years, who knows? But I'm betting that going with the best graphics processing available (again, particularly in laptops, where this is often un-upgradable) is going to be a good investment in terms of extending a unit's effective lifespan.


When I first read this, I thought this said "PG obsolescence is obsolete"


Dennard scaling: http://en.wikipedia.org/wiki/Robert_H._Dennard

It's about the shrinking of MOSFETs over the 35 years since he wrote about it. My electronics textbooks mentioned the scaling but did not make this named attribution (perhaps just a bibliographic reference).



