The charm of buying old workstation hardware on the cheap (tedium.co)
235 points by artsandsci on June 5, 2019 | 299 comments



The power consumption of some of these old workstations is obscene. There's probably some optimal intersection between cost of the computer and the cost of power consumption.

an older MacBook Pro (2012ish) gives 12,000 on the MT benchmark and only draws, what, 85W?

the author briefly touches on this, but unless you get free electricity, this is a bigger issue than presented.


Yes! I am shocked that almost no one seems to comment on power consumption for these units. Sure, they cost $200 while being just as fast as $700 machines, but they can easily pull $50 a month in power, assuming 25 cents per kWh.

Not only is that a waste of money for power, it is horrible for the environment. These machines are obscenely inefficient power-wise. It's like folks who buy an R720 for $150 that comes with 32GB ECC RAM and dual CPUs, but it pulls $75 per month in power. Even if you don't pay for electricity directly (you do in rent then), it's still terrible for the environment.


While I agree with the premise, you're exaggerating far too much.

If an R720 pulls anywhere near $75 a month [1], you've got some sort of major problem. In reality, these things pull closer to 120 to 180 watts on the higher end.

1. https://www.reddit.com/r/homelab/comments/7r90l8/average_pow...


He said dual CPUs, using the worst case of 27.5 cents per kWh in Hawaii: 2 * 0.18 * 0.275 * 24 * 30.4 (days per month) = $72 per month, plus whatever the rest of the machine uses.

Though few people’s electric prices are that high, and it’s rare to sit at 100% CPU 24/7.
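
The same estimate as a tiny Python sketch, for anyone who wants to plug in their own numbers (the 180W-per-CPU draw and the Hawaii rate are just the assumptions from above):

    def monthly_power_cost(watts, cents_per_kwh, hours_per_day=24):
        """Rough monthly electricity cost in dollars for a constant load."""
        kwh_per_month = watts / 1000 * hours_per_day * 30.4
        return kwh_per_month * cents_per_kwh / 100

    # Dual CPUs at 180 W each, 27.5 cents/kWh, running 24/7:
    print(monthly_power_cost(2 * 180, 27.5))  # ~72 dollars/month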


It's more like 25€/month for our R720 at work (with 128GB DDR3 memory and dual Xeon E5-2690 v1), if you run the powersave governor in Linux and enable lower C-states in the BIOS. I'm not joking: the included iDRAC7 reliably shows 110-130W. Of course, if you're number crunching at 100% CPU you're more likely using 300W.


> Though few people’s electric prices are that high, and it’s rare to sit at 100% CPU 24/7.

While I agree, a rate similar to that isn't all that uncommon in, for instance, Europe, so from that perspective it's a relevant calculation. On the other hand, the prices of old hardware tend to vary between markets as well, so that would still shift the break-even point.


It costs a lot, yes, but bad for the environment?

How much energy and pollution goes into producing new hardware? Not so sure buying new is at all better for the environment.

That, and buying new means we have even more obsolete hardware to take care of in the future.


In a competitive industry, dollar price is a good approximation of energy usage, which is an approximation of pollution.


Too bad pollution is externalized to future generations or foreign countries, so not included in the economic equation.


OP is saying that there is pollution for both options, and the total cost of that pollution is proportional to the total cost of ownership of the product.

He doesn't provide evidence, but it seems at least plausible.


A lot of it depends on what's happening to that Xeon machine if you don't buy it. If it's otherwise going to the landfill, then using it a little longer to defer the creation of brand new hardware is absolutely the greener option.

It's the same logic that applies to driving an older car: it may not be the most efficient, but especially if you're a low-usage driver, extending the life of something that already exists is almost certainly greener than using something new.


Yes, you'd hope so - but unfortunately that's not true. We're terrible at pricing externalities (by design - it makes it easier to profit; for example, the free use of public commons like fjords for salmon farming, creating ecological disasters).


I wish the people downvoting this guy would actually research this claim. It seems reasonable to me. Seeing a scatterplot of consumer goods' MSRPs vs. energy of production would be enlightening.


I would assume that they're being downvoted not for their claim (which I would not agree with, though I would love to see that graph), but for missing the point of the comment they responded to which points out that even if older hardware consumes more electricity, it may still be less environmentally harmful than purchasing new hardware which required many resources to manufacture.


That would miss gowld's point, which seems to be that a good first-order approximation of the energy expended in the manufacture of a new machine is probably captured as some fraction of its retail price.

You would definitely not expect that a machine selling for $1000 would have used more than $1000 of energy in its manufacture. Even accounting for energy costs all the way down the chain (materials arguably don't have any cost other than the energy and attention involved in collecting/isolating/refining them), my uninformed guess would be that with something like a computer, it's a bit less than half, with the rest being amortized human attention at various stages plus some profit margin, though I'm sure there are more refined and accurate models available.

So, throw out $480 as an energy number for our $1000. That means old hardware that's less energy efficient to the tune of $40 a month will out-impact the manufacturing cost of the new machine in a year.

As an exercise, contrast this with an older automobile. If you've got a well-functioning 10-20 year old vehicle, it's probably somewhere between 50% and 80% as energy efficient as the higher-efficiency choices you can buy new off a lot today. But the sticker price of a vehicle will tell you that it probably took $10-20k of energy to produce. Because that number is so high, it will probably take your used vehicle longer than its remaining lifetime to exceed the energy involved in making the new car (and most new vehicles won't get you where you're going any faster, either).
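
To make the break-even reasoning explicit, here's a minimal sketch (the embodied-energy dollars and monthly running-cost gaps are this comment's guessed figures, with the car's ~$50/month fuel gap an added assumption, not a measured value):

    def breakeven_months(embodied_energy_dollars, extra_cost_per_month):
        """Months until an old device's extra running cost exceeds the
        energy dollars that would go into manufacturing its replacement."""
        return embodied_energy_dollars / extra_cost_per_month

    print(breakeven_months(480, 40))    # workstation: 12.0 months
    print(breakeven_months(15000, 50))  # car: 300.0 months, i.e. ~25 years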


> You would definitely not expect that a machine selling for $1000 would have used more than $1000 of energy in its manufacture.

This is a misconception. Energy is not a transportable, fungible quantity the way dollars are. It is entirely possible that the device one buys for $1000 would require more than $1000 of energy to make, if manufactured in a modern economy with high environmental standards.

The major force in the global economy over the last three decades has been this imbalance in labor, energy, and environmental compliance costs. The $1000 retail price in the US does not include the largely externalized costs that its place of manufacture may have permitted.


Buying items in highly competitive industries from far away places is the most efficient way to turn western dollars into pollution.

What the parent didn't spell out is that the energy cost has to be calculated at the point of use. You can do this for, say, solar panels made in China. If you assume that 100% of the purchase price went into energy costs (zero physical resources used, etc.), from the nastiest sources you can get, you have an upper bound on the energy used to create the item. In China, electricity runs 2.5-5 cents US per kWh, probably lower in some direct-use-of-coal scenario. Using the lower bound, a $100 solar panel made in China that consumed energy costing 2.5¢/kWh could have used at most 4 MWh of electricity.
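
Spelled out as a quick sketch of that worst-case bound (treating the entire purchase price as electricity, per the assumption above):

    # Upper bound: assume 100% of the $100 price went to electricity
    # bought at China's low-end rate of 2.5 US cents per kWh.
    price_dollars = 100
    cents_per_kwh = 2.5
    max_kwh = price_dollars * 100 / cents_per_kwh
    print(max_kwh)  # 4000.0 kWh, i.e. at most 4 MWh to make the panel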


> Energy is not a transportable, fungible quantity the way dollars are.

You mean energy and dollars aren't both transmitted up/down wires or moved around with mass that represents stored potential?

> It is entirely possible that the device one buys for $1000 would require more that $1000 of energy to make

This... doesn't sound like a business model that will last long. Can you give a concrete example?


Electrical power does not cross oceans. Dollars do. A factory in China running on a local coal plant's energy does not pay the same cost that a factory in Texas would pay.


If I go out and burn enough coal to release 1000 Joules of energy into the atmosphere, will that cost me the same as 1000 Joules of electricity consumed by my electric oven? Of course not, it would be orders of magnitude cheaper. You can’t make simplistic assumptions about the cost of energy in a product based on its final price because the costs of energy in different forms vary enormously - by many, many orders of magnitude.


The costs involved in the manufacture of a product at any single point in time are exactly what I can make assumptions about from the price of the product, nothing simplistic about it, given that prices are a result of the negotiation of a lot of details, including whether the optimal industrial input is some raw mass of coal that's burned in some managed way or Watt-seconds of directly supplied electricity. It's not going to be a perfect signal (demand matters, and any single estimated or registered price reflects a certain degree of imperfect judgment/optimization), but energy inputs are going to be a bounded factor.

If what you're saying is the actual energy expenditure may be what we're concerned about if we're speaking about environmental impacts and might be more properly modeled by something more complex, that's a worthwhile point. But it's going to be much less a matter of 1000J from a given mass of coal vs 1000 J of directly supplied electricity -- this factor will disappear behind whatever market allocations/optimizations are available -- and much more a matter of general industrial energy costs circa 1995 vs 2015, combined with the relative efficiency of manufacturing processes at both points in time.

What would we expect on those two fronts? Personally, I'd expect energy prices to rise with economic growth and occasionally fall with recessions, absent some large new source coming online or state-imposed costs for use. I'd also expect process efficiency to increase as well. Which would lead me to, again, see energy expended as reasonably estimated by some bounded factor of a final product price.


> So, throw out $480 as an energy number for our $1000. That means old hardware that's less energy efficient to the tune of $40 a month will out-impact the manufacturing cost of the new machine in a year.

That seems like an obscenely high difference in monthly energy cost (if we're going for an apples-to-apples comparison, in contrast with the article's posed comparison of a desktop workstation v. a consumer laptop). For reference, I run multiple desktops, multiple laptops, a full-size fridge, lights, fans, and an Echo, all mostly 24/7, and per PG&E my total monthly power bill (near SF) is less than that (and most of my hardware is on the older side).

We're more realistically talking (from my experience, running a lot of the sorts of older desktops the article mentions) a difference closer to $4 than $40. Even $10 (which would still be a pretty high estimate) would extend your estimate to 4 years until break-even.


Of the several objections to my comment people have registered, this seems like the best one: on reflection, an 85W laptop's daily use is likely to be around 1 kWh (maybe 2 kWh if driven near capacity 24 hours), which is on the order of $5-$10/mo. So for things to come out the way I'd speculated, either older workstations would need to use much more power (5-10 kWh) or the manufacture of something new would have to involve much less energy than I'd guessed.


In addition to the sibling comment's point that externalities of energy production are unevenly distributed, the difference between an old machine and a new machine is that in the manufacture of the old machine the energy/pollution/labor have already long since been expended, and the amortization of that expenditure across more years of use is likely to compete well against even a very efficient newly manufactured piece of equipment. This is most dramatic in e.g. the purchase of cars, where it's typically more environmentally friendly to drive an older, even significantly less efficient car, than to purchase a new car.


It depends where you are.

In New York, about 60-65% of your electricity is emission-free, with most of the rest being gas. In Ohio or Kentucky, it's all coal and gas. You're probably emitting more in Kentucky with a laptop than you are in NY with a workstation.

The whole argument is tedious and obnoxious anyway, as the marginal negative impact of squeezing a couple of years out of an older device is overstated and minimal -- you'd be better off assuaging your guilt by taking a 5-minute shower.


What about the Perry, Davis-Besse, and Beaver Valley nuclear plants, along with https://en.wikipedia.org/wiki/Wind_power_in_Ohio and hydro on the Ohio River?


That claim assumes no externalities and no shenanigans with business models. I too would like to see a scatterplot, but I already expect GP's claim not to hold for phones, IoT, and anything bought used.

And that's also only the manufacturing part. The energy used when operating a device is not usually incorporated in purchase price.


Which means by buying these workstations used and on-the-cheap, I'm polluting less than if I were to buy brand new hardware.

Then there's the electricity cost. I own quite a few old workstations like this, and the power consumption ain't that much higher. Yeah, maybe compared to a laptop, but that's like comparing the feeding habits of a hummingbird v. an emu (and you can buy old laptops for relatively cheap, too).


I don't think that's true; we have serious issues pricing externalities correctly and making sure the correct party pays for them. Way too often the bill for risk and cleanup lands on the (local) government/public, sometimes long after the profit has been made.

Common and extreme cases are pollution of the type we see in Nigeria, where global conglomerates are "allowed" to ignore safety standards by a corrupt government. E.g.: https://www.bbc.com/news/10313107

See also:

Michael Woodiwiss, Gangster Capitalism: The United States and the Globalization of Organized Crime

https://www.amazon.com/Gangster-Capitalism-United-Globalizat...

Ed: and another example of problematic incentives is allowing power production for profit. We generally agree we need to use less energy, and yet have businesses that make more money when they can sell more energy... Sure, it might be a benefit to sell relatively more "green" energy - but really, the ideal energy company would make the most money when it got its customers to buy less energy...


All the electricity supplied to normal households in my city is from solar/wind/hydro power, so for me using old hardware is definitely better for the environment than buying new. It hurts the wallet though.


Hydropower uses massive amounts of cement, and the cement industry is responsible for about 5% of all carbon emissions. Similarly, if you're counting the environmental impact of manufacturing the more energy-efficient computer, you must also count the environmental impact of the solar panels or wind turbines that do not have to be manufactured because of the increased energy efficiency.


Surely the one-time emissions in cement required to build a dam that lasts for decades amortizes down to nothing compared with the ongoing outputs associated with coal or gas fired electricity production? (not to mention the upfront emissions associated with constructing those facilities as well...)

Most of those global cement emissions are surely in building sidewalks, highways, bridges, and skyscrapers? Seems weird to blame an otherwise pretty green electricity source for this carbon.


But the argument for buying used hardware holds for existing infrastructure too: the hydro plant is already built.

I certainly agree that we need to factor in the impact of construction, though!


Instead you are hurting the global environment (not your local environment).

If you use an extra kW, the city doesn't get to sell the extra kW into the national grid. So the national grid needs an extra kW, which is most likely produced by gas!

In a nationally connected electricity grid, each extra kW you use, increases the usage of the next marginal kW of power on the network, which is most likely gas (unless you use the kW during off-peak, when it might be nuclear depending on your country).


No. This is a fallacious misunderstanding of how markets work.

The hypothetical user of the older hardware is not damaging anyone else by consuming "green" electrons. Their demand provides a market for green projects, and the fact that there is non-green supply still available is simply an opportunity for new green supply to supplant it.

If the demand for green electricity is there, supply will appear, as long as it is economic to do so.


So you are suggesting there are two types of electricity markets: one for green electricity and another for, say, black electricity. Let's say their prices are in equilibrium.

So you increase your green electricity demand, and green power generation capacity is increased and more green electricity is made.

However the demand for black electricity hasn't decreased.

But you have created an arbitrage opportunity e.g. someone decreases their green electricity usage, and increases their black electricity usage.

Your fallacy is that you think it is possible to create two separate electricity markets (maybe separate grids, or strong regulation) for a good that is quite fungible.


For at least 10 years I have had a contract for "green energy" from my municipal supplier. They, in turn, contract to buy power from solar and wind suppliers to fulfill the consumption of those who are part of that program.

It does indeed work as I describe. And, because the wind and solar providers are among the lowest-cost providers at this point, "black" energy is losing market share quite quickly.

I understand electricity markets quite well and live and work in one of the most dynamic ones in N. America. This is how it works.


> If you use an extra kW, the city doesn't get to sell the extra kW into the national grid

It ain't guaranteed that the city would even be selling into the national grid in the first place.


I suppose the question is old desktop, old laptop, or old server. Depends on your compute needs.


And loud as well - servers aren't as optimized for noise as desktops are.

I use ThinkPads from eBay - I get the high-end model from 4-5 years ago. Coming off lease, they can be in great condition, often loaded with RAM, and just need an SSD. The build quality is great, and they're cheap.


I looked into buying a server to harvest the CPU and RAM, then building a desktop machine using regular cooling components to keep it quiet.

I found the real problem with re-purposing a server is not the noise, but rather that the motherboards are completely unsuitable for desktop work. And buying a new workstation motherboard that would take the Xeon(s) and ECC RAM would make the build more expensive than just buying a new desktop machine with consumer-grade hardware.


Couldn't you just use the server MB in an (e)ATX case, and use a different heatsink/fan?


Those motherboards have nothing to do with ATX. The rackmountable "blades" are 19" wide and ~24" long. They use redundant custom "jet engine" PSUs with custom connectors. Even their PCI-E isn't standard. They are custom-built for each generation of servers and mass produced.

Even higher-end workstations like the various HP "Z" series and ProLiant are custom.

(disclaimer: I have my (large, ~120 units) homelab/self_employed_datacenter in ATX rackmount 4U cases with watercooling. I am used to buying servers for the CPU and RAM as others do in this thread. I use consumer gear too, but it dies young under full C++/C embedded CI jobs 24/7. I use 4U cases because of the unbearable noise of the blades)


Can I ask what you're doing with that home lab, what your electric bill is, and how loud your racks are?


> what your electric bill is,

I am in Quebec; power is cheap to the point of this being irrelevant. Heating is also required a large part of the year (including until yesterday, because the weather has been horrible so far this "summer"...). Plus, well, tax credits for business expenses make that "less than free".

> and how loud your racks are?

Watercooling and passive PSUs makes it silent. I hate noise.

> Can I ask what you're doing with that home lab

Besides all the usual services for a small business (phone, email, storage, backup, routing, etc.), it is mostly an oVirt (Red Hat Enterprise Virtualization) private cloud running Docker VMs used by the CI to compile C/C++, build embedded device firmware images and run tests. The extra-horsepower (and least power-efficient) nodes are woken up using wake-on-LAN and boot a template over PXE from the GlusterFS distributed NAS. They are shut down when the load goes down. Only the big 7U case with the double 140mm watercooling fans is on all the time. It is the "head" of the cloud cluster (in theory it's configured to auto-migrate to the dual 120mm 4U case in case of hardware failure, but honestly I never tried the automatic migration).

Not very pretty, but good enough. https://imgur.com/a/hnlInSz (yes, there is some overlap between those 2 pictures because I moved some units between taking them, some units have been traded for others too).


Like Elv13 already said: the form factor is totally different, so it won't work in any desktop enclosure or with a standard power supply.

But apart from that, server boards sometimes have no storage controller, no audio, and only a few USB 2.0 ports. Most have onboard VGA graphics that you can't turn off, or the BIOS won't support outputting video with a different graphics card.

And then there are usually few Windows drivers and you could get into heaps of issues with (the lack of) UEFI if you want to run a desktop version of Windows.

Server boards are custom-built for datacenter applications and just aren't suitable for workstation use.


My 26-core Lenovo/Xeon workstation is quieter than my T450s. The former is at exactly the same noise level 100% of the time, regardless of whether it's idle or compiling code. My ThinkPad spins its fans up to a nice hum when compiling code.

Some rackmount server hardware is fairly quiet these days too. Variable-speed fan profiles and a cool room do wonders to keep the fan speeds fairly low. Probably the noisiest thing in my rack at home these days is the Ethernet switch.


I bought a house a few years back and quickly realized that leaving my computers on 24/7 was terrible for my electricity bill. Now I feel bad for doing this while living with my parents all those years.


Heh, as a kid I always wondered about this. My parents hated it when someone left the lights on, but my computer has a 600W power supply... how many bulbs is that!


That 600W is the maximum power it can supply.

You'll only come close to this figure when the PC is under full load (like gaming). But while idle, a (modern) PC draws about 30-50 watts.


Unless you were cranking everything to the max 24/7 (like maxing out your GPU while reading/writing to a database while doing a bunch of floating point calcs for something?), it probably idled at around 100 watts. Not great, but not something I'd worry about.


Like running SETI@home? https://setiathome.berkeley.edu/

I certainly had that running 24/7 when I was in high school.


If it makes you feel better, those older machines probably didn't have much in the way of power management to throttle back at idle.


What?

Mine has a ‘Turbo’ button!


About 100 6-watt LED bulbs.

I now get upset when someone goes back inside to turn off a 6W switched LED bulb. It's not worth taking up a minute of two people's time.

At $6 per year, it may not be worth turning off at all. Those seconds to turn it off add up.
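
For reference, the arithmetic behind that $6 figure, assuming a rate of roughly 11.4 cents/kWh:

    watts = 6
    kwh_per_year = watts / 1000 * 24 * 365  # ~52.6 kWh if never switched off
    print(kwh_per_year * 0.114)             # ~5.99 dollars per year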

But good luck arguing with someone that argues this specific wastefulness should trump all other wastes.


What matters more is whether flicking the switch wears out the components more than letting it run. The replacement cost matters more than the running electrical cost.


Opening the door probably costs more in energy (from AC or Heating losses), than the savings on turning off LEDs.


Often those switches are connected to multiple LED bulbs, which can be 65W for a room.


I have been telling my family that for a while. It's going to be $3 to $6 a year if you NEVER turn off an LED bulb.

If you forget, it is ok.


Though this is why I bought a 0.5W LED for a room that only the cat uses. Sure, it cost $4 instead of $1.50 for a 6W one, but it'll pay for itself.


The fact that LED bulbs are that cheap nowadays is insane to me.


The $1.50 may have been subsidized, but that's just smart. Average electricity prices may be cheap, but marginal prices might be several times higher.

Saving has big systemic benefits.


Agreed. I have a couple of 9W LED lamps that are never switched off. It costs about $7 per lamp per year at $0.09/kWh.


Power consumption is somewhat irrelevant if you live in a place where there is hydro or solar power - e-waste recycling is a huge environmental problem [1], and manufacturing has its own environmental footprint [2].

Buying used hardware is great, but really we should probably also be optimizing for performance-per-watt, even with older hardware.

IMO the real issue is that most people just don't need more grunt than a 5-year-old machine is giving them at this point. What they really need is a solid state disk in the old hardware, and for Office and Chromium you'd never notice a difference. I still use a SATA SSD daily and while I've had NVMe in my work-supplied machines, for 99% of my workload it's just not even necessary.

[1] https://eridirect.com/blog/2015/06/how-does-e-waste-affect-t...

[2] http://www.electronicstakeback.com/toxics-in-electronics/whe...


I have multiple r720's that idle at between 130 and 155 watts according to iDRAC. The system has 8+ SSDs, 10 gig fiber, infiniband, and 160GB of RAM. Under load it goes up but typically it uses less than 200 watts.


I ran a Core 2 Duo for years as a server; it was more efficient than any of the newer stuff I could find. I just upgraded to an i7, which costs more in energy annually.

Granted it's faster and maybe less likely to break. http://cpuboss.com/cpus/Intel-Core2-Duo-E8400-vs-Intel-Core-...


It's probably also more energy-proportional. Newer CPUs, and hardware in general, do a much better job of scaling power use with demand. So if you're not running flat out 24x7, you're probably using less power in practice, even though your max may be higher now.


Good point, it certainly seems to run cooler.


And heat: I have some computers I just don't run for the hot three months of the year, and I live in a temperate climate. In the winter, though, they are nice room warmers.


You are probably seriously overestimating the actual power requirements. I have several servers running, some of which are 5+ year old Xeon towers, and they aren't costing anywhere near that.


How about you buy these machines and only run them in the winter? Heck, mine coins if you want! It's just going to heat the house!


Several years ago my friend did that to learn about cryptocurrency. He bought a bunch of GPUs off CL that were being dumped by people in favor of FPGAs (or whatever was the new hotness at the time for bitcoin). He had a bunch of server racks that he got for free when AMEX dumped 'em here locally. Also bought the mobos and ram off CL.

Then ran extension cords from all over his house into his garage to power the things, networked everything together, and played with it for 6 months, mostly over the winter.

He said he was able to break even (that is, mined enough coins - mainly other alt-coins, not bitcoin) to offset his costs in both electricity and what he spent on the hardware. Kept his garage closed and the door to his house open, and it heated things fairly well from what I recall.

In the end, though, he shut it down, because it was starting to go upside-down for him; I don't know what he did with the hardware (he ended up giving me one of the GPUs - so maybe he parted it out for friends to upgrade their systems?)...


What about the time investment? How can you take that into account in an unbiased (or least biased) kind of way?


Well, theoretically he did it because he genuinely wanted to spend that time learning, so the time investment shouldn't be factored in.

Same way that if I go to a movie, I don't typically think about it as costing me the price of a ticket and 2 hours of wages (although technically I could, I guess). I calculate time cost for things that I don't want to do.

If anything I would use the time spent in the value calculation (I paid $13 for 2 hours of entertainment).


There are plenty of movies where I've come out thinking that I'll never get those 2 hours of my life back. I do apply this logic to commute time though. $salary - $commute = $takeHome. Some jobs are not worth it.

I also agree 100% that the time cost of something you're wanting to learn is a sunk cost. After all, it's an investment in yourself. Even if it fails, you now have that experience of what not to do if presented with the chance again.


In the Netherlands you get a large part of your travel costs reimbursed by your employer. However large that is, there are legal upper limits to avoid "untaxed extra payment".

If you can travel by public transport, part of the time of your commute is akin to leisure time. But I have big issues with failing to focus, especially if I have to switch transport multiple times, or run to make it, or it's crowded, or...

My limit of commute is basically an hour, and if I don't get the costs reimbursed I simply do not take the job. It is bad enough that I lose that time as it is (as I argued above, it is hardly akin to leisure time).

As for the topic at hand, I spent a good amount of time and money on e.g. old UNIX hardware (such as SGI Indy/Indigo 2/Octane, Sun Ultra 10, and some DEC Alpha machines) back at the start of this millennium. It was costly and bad for my electricity bills, and it took me a lot of time to play around with old platforms. I had a lot of fun though. And I'm not sure you can benchmark "fun". Nowadays, with solar energy on the rise, it might actually matter less to have these machines running. Except in the summer. The additional heat would kill me.


Highly agreed on the commute time formula.

Learning to think about time spent getting ready for work and commuting to and from work as a direct extension of my working hours changed the way I looked at jobs and how I approach salary negotiation. It's so easy for someone to ignore that cost if they haven't thought about it, and so hard once they have thought about it :)


"I learned how alt-coin mining works, first-hand" could easily be seen as a worthwhile experience.


Because if you normally heat your house through cheaper means (wood, natural gas, etc.) like most Americans, then it's still going to add a lot to your power bill.


But in the US, the time of year for heating your house with wood, natural gas, etc. is the cheapest time for electricity.


Do electricity prices vary throughout the year? I don't remember that being the case when I lived in the US, and it's not the case in Ontario. We have time-of-use pricing that swaps the mid-peak and on-peak rates, but that doesn't affect the total charge very much.


Utility companies offer a product of supply and demand. During the middle of the day in the summer in Texas, demand is at its peak as everyone runs their A/C full tilt. That's the most expensive unit of electricity, so the pricing reflects that. Everyone knows that you don't run your laundry/dishes during that time. In winter, most places are heating with gas, so electricity demand is just never as high and prices are cheaper. Yes, some pricing options claim they are giving "free nights and weekends", or average billing that "lowers" the summer rates while "raising" the winter months to keep the average the same per month. That doesn't actually change the rate per kWh at the time. It's like the balloon analogy when buying a car: squeeze the price on one end and the numbers bulge somewhere else.


Only makes sense if you don't have a heat pump.


Not everyone has to use electricity for heating.


However, electricity at least has the potential to be generated by solar, wind, hydro or nuclear; most people in North America are heating with natural gas or even heating oil, which by definition can't get to carbon-neutral.


What if the electricity is generated via hydroelectric or solar?


> The power consumption of some of these old workstations is obscene

Absolutely this.

I work from home, and for around six years I ran a company-provided refurbished Dell Precision workstation (a T5400, I think) with dual Xeon CPUs, SAS disks, etc. - this was around 2008. I think when purchased new this machine cost around GBP3K; we got it for around GBP1K. The thing was switched on for around ten to twelve hours a day, and the power consumption was (I eventually realised) eyewatering.

The machine fortunately developed a fatal motherboard fault and I replaced it with a self build Core i5-4690K machine with SSDs and a modern graphics card. My electric bill halved and I got a way more powerful and energy efficient machine. It was quite a revelation.

These old machines may seem like bargains, but energy-wise they're just not economical to run (especially at UK domestic energy prices).


A few things in their defense though - newer workstations like the Z230 from HP generate much less heat and noise and have lower power consumption. They are not $200 cheap, but they cost a lot less than a loaded MBP for sure - even more so when bought refurbished or on eBay.

These workstations can be put in sleep mode with Windows or Linux and they wake up pretty quickly - that saves you a lot of power by not needing to keep them always on. Wake-on-LAN is your friend.
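
Wake-on-LAN is just a UDP broadcast of a "magic packet": six 0xFF bytes followed by the target's MAC address repeated 16 times. A minimal Python sketch (the MAC here is a placeholder):

    import socket

    def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
        # Magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times.
        payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    wake_on_lan("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the machine to wake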

Next, everything is easily replaceable, which is great for the environment, as these have much longer lifespans than a glued-in laptop.

Lastly, the performance is just much better than a thermally constrained laptop.


Yeah. There are workstations and workstations. Why would the original poster's single Haswell- or Skylake-based Core i7 CPU in his workstation waste more energy than a single Skylake-based Core i7 CPU on a new gaming machine? Also, his storage and GPU seemed quite modest, so I don't think his HP is more wasteful than a new gaming PC.

Of course, 28 cores and eight spinning hard drives is a different story.


Reddit people with Kill-a-watt units indicate loads well below 200 watts for these workstations.

Heck, beasty Dell R710 2U servers appear to pull (in some cases, well) below 200 watts [1].

1. https://www.reddit.com/r/homelab/comments/7r90l8/average_pow...


Also consider that many Xeon workstation/server boards have limited sleep-state support. Any newer i3/i5/i7 will enter S5 sleep and barely sip 1W, with instant-on wake. The time you save not sitting around waiting for the machine to start up (workstation/server boards are also insanely slow to POST) will be worth the investment in a newer setup.


Both HP and Dell workstations fully support S5.


In a desperate attempt to get me to take Xeon Phi seriously, Intel once sent me a free Xeon Phi server. I still have the thing, but because the boards were passively cooled, the server fans sounded like an F-14 (edited) trying to take off from an aircraft carrier when I booted the thing up. I ended up extracting the Xeon Phis and using them in consumer cases with a much quieter fan blowing on them. It also weighs a ton, but I digress...


To be pedantic, an F-15 is an Air Force plane and not designed for a carrier. An F-14 is what you are looking for (think Top Gun).


To be more pedantic, he said "trying to take off". I think an F-15 pilot who somehow found himself on a carrier deck and had to (try to) take off wouldn't hold anything back in the power/noise department. :)


To be extra, extra pedantic: how did the F-15, without arresting gear, get on the carrier in the first place? Okay, enough fun.


Theoretically it could be carried by an aerial crane heli, like a Mil V-12 (20-25T), an empty F-15 being in the 15T range.


Noted, edited... Time to buzz the server tower?


I had a Dell R815 quad Opteron server for a short while. Kept a pair of ear defenders on hand for power up time!


May I ask why Intel would send you a server for free, or what occupation comes with perks like this?


I worked in oil and gas (Halliburton) for 12 years.

The amount of stuff we got in "for review", "for test", "preview", etc. was simply amazing. Even pre-production gear a lot of the time. I found a pair of Tesla cards just sitting in a box in an office I cleaned out one day... and I know we got a system with some Phi cards in it when they came out.

The most interesting thing I ran into was when cleaning out a facility after a move, we found a Dell Itanium-1 box that not only did Dell not want back, they wouldn't even admit to making it in the first place... It ended up going home with one of our devs...

The nice thing about being a sysadmin was that we would get video cards and such from our developers who had just upgraded to the latest and greatest - and the stuff they were throwing out was only one or two years old... so our own desktop workstations, built with cast-off parts, were pretty nice.


That's pretty cool! I knew tech reviewers/writers get free stuff all the time but didn't know sysadmins do too. Thanks for sharing.


It wasn't really the sysadmins that got free stuff - it was department managers / tech leads, etc, that would get gear in for review to see if it fit with our workflow, processes, etc.

Us sysadmins just had to install/maintain it, and occasionally would "profit" when it was retired and the company/vendor didn't want it back.

Managed to build an entire multi-node NetApp cluster out of spare and retired parts one day when we were bored. Our NetApp rep said "I didn't see this, I don't know it's here, I don't know it exists, as far as I care it's a bunch of spare parts you just happened to put in a rack..." :D


It really depends. My home server uses about 120W idle (older-series i7 with many disks). It comes out to about $25 a month in electricity.

That's enough to where you should make sure it's worth it, and ask yourself if you're better off with a higher-end NUC or even a Raspberry Pi, which use orders of magnitude less power.

(We have solar so I don’t really worry about the power consumption anymore)


Wow, what a difference! My laptop shows about 2-3 watts idle (brightness low, of course). What a huge difference in power consumption over the years. Do you think that's mostly the hard disks using the power?


My home server is basically my old laptop. It is a Dell Latitude E6540 and just sips power when not under heavy load.

With a 4-core i7 and 16GB of RAM, it is more than enough for home use.

For storage, it has an M.2 SATA SSD for the root partition, and 2x 2TB hard drives in a ZFS RAID for the home and network share folders.


Pretty much. I had to stop running my VAXen 24/7 because they use a lot of power and my lab would get too warm.

That said, I have picked up "junk" computers with my neighbor's kids, got them running, and installed them with different operating systems. Fun stuff, and you don't worry about breaking it.


Yes, and it can be a double-edged sword. My home office isn't heated (long story), so if I want some cheap heat in winter I'll fire up the old server and just batch convert some video. That's a pretty efficient room heater for 300 watts of power! Conversely I never convert video in summer...


It's funny how there is nothing really wrong with this. Maybe you could put it to work doing some valuable tasks, like protein folding etc? That'd be really cool!


In the winter I let my PC search for Mersenne Primes when I'm not using it.


I remember seeing a few years ago a startup that made a panel heater that was really a computer that would mine Bitcoin or something.


Do you know how it would compare to something like a space heater in terms of $ per degree heated per cubic meter?


Um, exactly the same? (By thermodynamics)


Yes, this 100%. It was fun (and kind of hilarious) to have an old Dell PowerEdge Server in your dorm room back in college, when power was free. (And, in my case, mostly generated by dams, so not that bad for the environment.)

Keeping one in your house would be ridiculous and loud, though. Mine actually required two power cables! I ended up pawning it off to a friend for $20, who pawned it off to one of his friends for $20, and so forth. I wonder where it is now.


Mine went to Goodwill after I graduated from my university. It was a beast in terms of how loud it was.

I was able to strike a deal with IT - they'd give me 2U of rack space in the Uni's datacenter and some subdomains in the Uni's .edu. In return, they got a forever grateful undergrad with too much time on his hands.


Why is everyone here focused on the desktop workstations?

I switched to laptops over a decade ago. Used 3-5 year old ZBooks are 1/10th of their original price and still perform flawlessly (mind-blowing fact: Haswells can keep up with Skylakes, and even beat them with some overclocking+undervolting).


I agree; one of the reasons to even upgrade personal computers is to take advantage of the power efficiency. It's also a must-do for anyone who is planet-conscious.

Perhaps if the person who buys these old enterprise server workstations also has a renewable power source, then it would seem to be an overall win.


If you want a loud computer that consumes ten times as much electricity as it should, then an old Xeon workstation might be for you. Personally I prefer buying old ThinkPads. They're cheap, fast enough, and their electricity consumption is decent.


HP Z workstations are actually fairly quiet. It's one of the few computer lines that actually has published acoustic decibel ratings (19 dB for the low-end version, 34 dB for the high-end, see https://www.bluechipit.com.au/media/product_spec/Z420_WORKST...).

My main objection to the Z4xx series is that the cases are so bulky. I prefer the Z2xx small form factor models, or even better the Z2 (fastest computer I own, but also one of the smallest).

One of the big advantages of a Xeon is that it will use ECC memory, and thus be more reliable.


Coincidentally I'm considering buying a used Thinkpad X220 for fun projects for me/kids because of the reliability, build quality and the awesome keyboard. The best quote I have received so far is about $145 for i5/8gb ram/no-drive (I already have a spare ssd). Hoping the machine is in a good condition :)


Go for it. You will not regret it... I still have mine powered on; it sits on my desk with an external plugged in. I suggest doing the FullHD mod if you're decent with a soldering iron. Here is a great resource for X220s. Enjoy!

http://x220.mcdonnelltech.com/resources/


I have never touched a soldering iron and am quite wary of h/w tinkering - thanks for the link though. Even if I don't know how to work with h/w, I like to read about other people doing it.


If you do go for it, look out for one with an original battery that may well fall into the battery recall that Lenovo has been running for years. A "dead" battery to the seller may represent a free replacement from Lenovo without any cycles on its clock.


Go for it, but you may need to change the battery and the thermal paste.


That's my fear too - not only the battery but even the thermal paste part. I have never tinkered with h/w and have a bit of a phobia of breaking things. I'll probably end up paying someone to do it :(


I can't talk about power consumption, but I have a couple of old HP Z420s, and the loudest component in them was the old Quadro cards. Swap that out and the system is very quiet.


Quite quiet ;) :P


In my experience Xeon workstations are no noisier than personal computers. That includes my current Lenovo with can't-even-remember-how-many cores and just shy of 200GB RAM.

I never felt a need to measure how much my employer pays for electricity.


I have a Xeon ThinkServer sitting here on my desk as I type this. It is considerably quieter than the laptop I sometimes use, under typical usage (Slack, VS Code, a couple other Electron apps, Outlook).


I built a dual-CPU E5-based workstation with 64GB of DDR4, and the Kill-A-Watt says it draws only 75W at idle.


A MacBook Pro lasts ten hours on an 80 watt-hour battery.


> 10h

disable lid sleep (well, disable sleep entirely on OS X 10+, because Apple doesn't want to give you the option to disable lid sleep only) and it will never pass 4h.

the 6/8/10h some people claim is because they don't realize the computer is sleeping most of the time. I guess it is fine if it's just your facebook terminal...


Not if you're running Docker it doesn't.


If you actually run stuff the Dual Xeon system won't be sitting at 75W either.


I have an old Dell Xeon workstation, it is not loud at all.


The older ones definitely make noise from the fans when running at full tilt. The Precision 3630 on my desk will even make a bit of noise, and it only has an i7-8700.


I'm actually using the HP Z420 (the one mentioned in the article) at work. Pretty quiet IMO.

Note the Z420 is a workstation not a server. Buying an old server is completely different.


Some of those computers can heat a room though...


You feel it in big, packed open spaces when they cut the A/C for the weekend. It may be freezing outside, but the floors become really hot.


Yeah, no shit, a laptop uses less power.

Look at ZBook or Precision workstations for the best performance for your dollar.


What's a good place to look for an old reliable ThinkPad? Do you find them on Craigslist or eBay?

I’d love to get an old one and stick an ssd in there for basic usage.


I bought an i5 T460 (2-3 years old) for $300 on eBay. The keyboard was kinda nasty, so I bought a replacement for $30 on Amazon.


I googled “used laptops” and found a company that resells off-lease computers.

If you’re in a large(ish) city I’d expect you’d be able to find a similar company. I was able to pick up a x240 in great shape for $125, and test drive it in person before I paid.


My dual-CPU Z600 running 12 cores (24 with HT) is virtually silent. The GPU is the loudest part and hardly audible. I keep it in my living room.


Same here. I believe the Z600 series was specifically designed to be quiet.


BMW designed the case, IIRC.


It's a great design. My only desire for change, really, is that I wish I had the Z800, which is a bit bigger, so it could hold more stuff :)


Ditto, but with a passively cooled GTX 1050 Ti and an NVMe drive it's a solid workstation.


Don't forget on most of the W-series you can stick 4 DIMMs in to get 32GB of memory.


I've been doing this for years, "upgrading" every 6 months when I find a new machine being thrown away. In the past year I did this with a 2006 Mac Pro and a 2013 System76 desktop. Both of these machines were a delight to work on - I was able to get 16GB of DDR2 ECC RAM and 2 quad-core Xeons for the Mac Pro for around $28 total on eBay. The Mac Pro could be flashed up to a 2007 model and ran El Capitan perfectly with a cheap no-name SSD off Amazon. It took a GTX 760 with some odd power adapters, and it was a fantastic machine for audio editing and playing with tensorflow-gpu.

The System76 machine was a little more modern, with a Haswell i7, and after a new aftermarket cooler, fresh thermal paste, and another no-name SSD it ran Manjaro silently, even with Bazel putting it under some serious load.

I'm moving soon, so I sold both machines for around $350 each. It's a fun challenge, since usually these computers don't see modifications or have as good documentation as enthusiast PC parts, so you'll find a tiny community in some obscure forum sharing supported CPU upgrades, how to change out the cooler, stuff like that. Highly recommend it.


Meh, these old systems can be fun to tinker with but are _slow_.

E5-2667v2 vs. i9-9900K: the i9 is 60% faster single-core, 44% faster multi-core (both are 8c16t). A lot of use cases, including the majority of development work for most people, are still dominated by single-core performance, so you're giving up a lot of performance to save a couple of bucks. The power draw on these server/workstation systems will likely also suck, which will offset some of the $ savings.


You're talking about a $500 part vs a $200 part.

The Xeon is the same price as a Ryzen 2700 or an Intel 8400. They definitely beat it on power usage.

This all makes the argument really depend on how much utilization you think the xeon will have.

If you plan to run it 24/7 at 100% utilization (or as close as you can get), yes, new hardware makes sense. (There's definitely a cost reason that these companies are ditching their old Xeons for new ones; it saves them money and rack space.)

If you're playing around for a few hours in the evening and on weekends (as suggested), it'll probably be a maximum of $10 per year in power costs.

So I think even with power costs, having run the numbers, these processors do make an attractive alternative to newer hardware. They are comparable and at a comparable price on the market, but if you can get a deal by bargain bin hunting, they can be a great value. It is unsurprising that the $/performance is very similar, that is how arbitrage markets are supposed to work.

There are some pitfalls here that are not mentioned: while the processor itself can be cheap, it is often the case that server-grade motherboards are more expensive. This is particularly true with older Xeons, where people are trying to repair things that have broken. There's kind of a curve where demand drops up until a point and then the supply drops (people scrap things for the metal content rather than reselling)... so the price starts to creep up again. This shouldn't be a problem.

The other issue is that you end up with hardware compatibility issues. Can I stick my new graphics card in the old Xeon box? Probably(?). Can I do something more exotic or new? Maybe not? Again, this shouldn't be an issue.


The hardware compatibility thing almost bit me. I have an older Dell T3600 that I bought a new Radeon 480 GPU for. I only verified that the power supply could handle it and the PCI slots were compatible. The card "fit", but I couldn't put the side cover back on, because there is a handle riveted to the inside of the side panel that collided with the card. I ended up having to drill out the rivets and remove the handle.


It makes sense if you need a lot of RAM. Used DDR3 RDIMMs from eBay are really cheap.

And the price difference for the whole system is huge if you buy used. I paid just a little bit more for my system with a Xeon 2680 v2 (10C20T) and 128GB RAM than the asking price for the 9900K alone. But I bought it almost 2 years ago, so the comparable mainstream CPU at that time was the 4C8T 7700K.

As a student with a limited budget, performance per € matters more than performance per watt.


This.

I picked up a Dell T710 with 32 cores and 32GB of DDR3 ECC RAM for $400. I can throw more RAM in it for practically zero cost. It's amazing for batch workloads.

For interactive workloads, there's no need to get anything newer than a 4-series Core iX. The single-threaded performance has not improved much over the last five years. My daily driver is an i5-6500; motherboard and CPU, used, were $120.


I just got a used ThinkPad with an i7. I was eyeballing cheaper stuff with an i5. The reason I upgraded is that the security problems in CPUs require mitigations that will probably keep slowing them down. I bought a faster core to mitigate those slowdowns a bit. Plus, I get security updates for a while longer.


So you gave more money to the company with the broken product? Why not buy AMD?


I recommend people buy AMD where they can. Vermaden on Lobsters is a BSD expert. I asked them what hardware runs pretty much all the BSDs. Vermaden narrowed it down to a few ThinkPads. All of them on eBay from recyclers had Intels. So, Intel just came with the box.

If building from parts or doing non-BSD, I'd go with an AMD, POWER, or ARM system.


> I asked them what hardware runs pretty much all BSD's. Vermaden narrowed it down to a few Thinkpads.

What were the recommended models?


Vermaden's comments on that were here:

https://lobste.rs/s/szzgjl/cheap_bsd_friendly_notebook

All the ones I saw in good condition on eBay were Intels. I got a T420 with a Core i7 to mitigate potential slowdowns from future CPU vulnerabilities. I've occasionally had to restart it from suspend/resume issues. Otherwise, it's been great.

One more thing: the function key and control key are swapped compared to most laptops. I didn't like that because I'm used to control being far left. duclare told me about a BIOS setting that swaps them back. Everything's fine now. :)


I got an off-warranty Dell Precision from a previous employer. They were cheap and ordered them with the lowest Xeon CPU at the time, so I spent $100 on RAM/CPU to max it out: 6 sticks of 4GB DDR3 and whatever the Xeon equivalent of the i7-950 was, and threw a couple of leftover 2TB HDs in there.

Made a great server. But for a workstation it wasn't really that snappy and I just ended up using a 2016 MacBook Pro.


The other thing to keep in mind is that most of the benchmarks you see on the internet for these older systems haven't been redone with the mitigations needed for the new speculative execution vulnerabilities, but the ones for the newer processors more likely have, so you're not comparing like with like.

A cheap eight core processor sounds great until it turns out to be slower at everything than an even cheaper modern quad core. A quad core Ryzen 3 is under a hundred bucks, with a 45W TDP instead of 130W.


My oldest Xeon workstation is a dual E5-2696v2 with 128GB ECC RAM. It has a total of 24 cores. Compared to that, I am pretty sure most i9-9900k setups are just for kids. ;)

The cost of the above Xeon system (dual CPU + MB + RAM) is slightly more than the i9-9900K itself, but much cheaper than an i9-9900K + motherboard + RAM.


> Compared to that, I am pretty sure most i9-9900k setups are just for kids. ;)

Your machine is slower than a MacBook Air for most of my workloads, and about half the speed of my desktop, so I wouldn't be so confident.


Update with the right link: https://cpu.userbenchmark.com/Compare/Intel-Xeon-E5-2696-v2-...

I was curious how my home PC stacked against it.


so you silently replaced E5-2696v2 with E5-2690v2 and then took one CPU out of the machine before doing the comparison?


Not parent but https://cpu.userbenchmark.com/Compare/Intel-Xeon-E5-2696-v2-...

Still ~33% slower than the last-gen Ryzen.


Not parent either, but you're still comparing a single socket of his system with an 8-core Ryzen. The latter has nearly double the single-thread perf, but the former has 3x the total cores when both sockets are active. Worse, the benchmark is likely tilted towards newer AVX workloads/instruction sets that the older cores don't have. So for boring old integer workloads that are easily parallelized (say, compiling code), it's likely that Xeon is at least 30% faster. Possibly more.

I'm in the same boat: I have a 12-core/64GB Xeon machine I picked up 4 years ago for $200 (130W while compiling). Along the way I put 4 256GB SSDs on the RAID5 controller. It likely gets totally thumped by the latest 12-core Ryzens, but OTOH, my compile times are ~2 minutes for a fully clean build on that machine. A similarly spec'ed Threadripper at work only pulls that down to 90 seconds or so. Logically I can't really justify the ~$1800 the new machine is going to cost by the time I get a RAIDed SSD, MB and 64GB of RAM. I might do it anyway, but it's definitely not logical.


Utilizing a 24-core system is harder than an 8-core system. You will have to run multiple compiler instances, which eat up memory. If you only change a single file, it will take 60% longer than on the newer system. Especially if you're using a single-threaded linker, you will be dependent on the performance of a single core. The 24-core system just doesn't perform well at all. 30% faster than a single CPU means 65% the speed of a dual-socket system. Remember the original comment said that the 8-core system is "just for kids", but it's the opposite. The lower multicore performance will be compensated by the significantly higher single-core performance, and the upfront savings will be compensated by the fact that the newer system consumes less power.


I tend to subscribe to the fastest-possible-single-thread mantra as well and buy low-core-count, high-frequency machines for desktop usage. OTOH, compiled-code software development is one of those areas where you're better off with a few more cores vs. higher-frequency/newer machines. A very large number of open-source automake/cmake/etc.-based projects have nearly perfect scaling out to at least a hundred threads. This is frequently true even for linker and packaging passes (given a parallel compressor like pigz). Between incremental linking and runtime/loader-linked systems (think Linux kernel modules, where each module is effectively a few C files linked into a .ko, each linked independently) this can even be true for the link phase, as much of the overhead is IO/syscall bound even when cached.

Either way, you're still missing the point: these machines cost a couple hundred dollars. They might be a few percent slower and 3x the power budget, but where I live that power is going to work out to $20 a year. It's going to take 50 years to make up the price difference in power, and in 5 years I'm going to buy for $400 the same machine someone paid $4k for, while that person has to spend another $4k to get something better.
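A minimal sketch of that arithmetic (my assumed figures: ~150W of extra draw, 4 hours a day of actual use, $0.09/kWh, and a ~$1000 price gap to a comparable new box):

    extra_kwh = 150 / 1000 * 4 * 365      # ~219 kWh/year of extra draw
    per_year = extra_kwh * 0.09           # ~$20/year at cheap rates
    print(f"payback after {1000 / per_year:.0f} years")  # ~51 years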


Didn't notice that, well spotted.


I assume that is massively slower than a 9900K for single-threaded or very lightly threaded tasks.


i9-9900k costs ~$500. That buys you quite a lot:

20-core Xeon E5-2698 V4 ($405 on eBay)

22-core Xeon E5-2699 v4 ($300 on eBay)

28-core Xeon QL1F 8176 ES Platinum ($488 on eBay)

They will all beat the 8-core i9 in almost any multicore test.

For $520 you can have a whole server with 40 cores (4x E7-4860) and 128GB RAM:

https://www.ebay.com/itm/292944891997

or 64GB RAM $100 cheaper:

https://www.ebay.com/itm/233201237130


You're giving away my secrets! I've bought 3 of these types of HP workstations over the years, starting with a Z210 with a Xeon E3-1240. I added a graphics card to make it a super budget gaming PC.

My most recent one is an HP Z420 workstation with 128GB of ECC memory, an 8-core Xeon, and Win10 Pro installed, for $620 delivered. Benchmarks of the CPU show it's comparable to a Ryzen 1700X in single and multi core, but I really bought it for the RAM. It's a great machine for homelab-type virtualization.


I have an HP Z210 with an E3-1270v1 at 3.4GHz; I put in a 1050 Ti Mini powered by PCIe, and it works great for my minimal gaming needs. And damn, save some RAM for the rest of us!


I'm still on the Z600 after years and I love it. I just upgrade the GPU every few years, and I have an extra PSU for it somewhere around here. This thing will outlive me, I think.


Do these older CPUs have hardware level decoding for modern video formats such as AV1?


Does any modern chip have that yet then?


Perhaps not. I might be thinking of VP9 if that is what YouTube has been using.


Ah yes, that's VP9, though you can also use an extension (or just a config tweak in Firefox), or Edge, to view H.264, which needs Sandy Bridge at minimum.

Intel Kaby Lake, Coffee Lake, Whiskey Lake, Amber Lake, Apollo Lake and Gemini Lake CPU families, AMD Raven Ridge APU family, and Nvidia Maxwell GM206, Pascal, Volta & Turing GPU families have full fixed function VP9 hardware decoding for highest decoding performance and power efficiency. (Wiki)


Judging by YouTube, modern CPUs are enough for decoding AV1.


Everything post Sandy Bridge does.


Hmm, that kinda shifts my position on the iron triangle.


Where do you buy them?


NewEgg and eBay


This will work if you are located in a big Western city. He bought his Z420 for $50; when I check eBay right now they are going for $150-300 plus $100-200 shipping, and that's with an E5-1603 CPU, not an E5-2667 or similar. Let's say I have a 5-year-old AMD FX-8350 due for an upgrade this year. That Xeon is actually slower, looking at some basic benchmarks (of course it may be faster in some, but I don't have time for a detailed investigation). So ~$300 total for a sidegrade? And that's not including the hassle with shipping, maybe dealing with a scammer seller on eBay, then a 30% import tax (and who knows how the base price will be calculated at customs). Etc., etc. Not worth it.

I'm not saying that upgrading for cheap is bad, of course, but getting stuff like what's described in the article is borderline a lottery win in most countries. You can't depend on it or plan for it in advance.


That Xeon is faster, not slower than your Bulldozer chip.

Passmark for the E5-2667: multithreaded 10372, single-threaded 1609.

Passmark for the FX-8350: multithreaded 8951, single-threaded 1510.


I was comparing the E5-1603 vs. the FX-8350. That was for ~$300 (of course including case, PSU, GPU, and memory).


I would add another negative to the list from personal experience. About a year ago I built a dual-Xeon home server from used parts, with E5-2680v2s. The PassMark scores are about 15,000 per chip, but the overall power draw can get quite high. If you're in an area with a more expensive kWh rate, I would recommend looking into something like a Threadripper or a more efficient, newer CPU.


If it's a home server doing not much most of the time, an Intel Skull/Hades Canyon NUC can be a good choice. It can be really powerful when it needs to be, but it's effectively a laptop CPU the rest of the time, so it doesn't draw much power. The only inconveniences are that it has no IPMI and it becomes really loud under heavy load.

Also, you can sort of get 10GbE through a Thunderbolt adapter, but I ran into some compatibility issues with Hyper-V.


That depends on where you put it and how much uptime the machine sees. My cold-storage NAS draws 70W idle, but its uptime is at most a few hours per week - and I essentially got it for free. Since the 12-disk file server case sits in the basement, it doesn't bother anyone with the turbines running.

I thought about going dual Xeon for my desktop, but found that I don't often need that much power (at home) to warrant the excessive power draw you mention.


Every comment here so far is about power consumption, which is mentioned in the article:

> And there are also considerations here from the perspective of power consumption. A big box that’s always plugged in will inevitably use more power than a tiny laptop, even if the big box can do a lot more.

[...]

> But if you can make the case for it, it might be worth your time. In my case, I was looking to have more of a desktop experience for times when I wanted slightly more horsepower than a laptop, and I also wanted a machine that could do virtualization when needed or desired.


Those arguments just don’t amount to much more than “I don’t care”. I don’t even know what “more of a desktop experience” is supposed to mean, and any MacBook is perfectly adequate for virtualization.

They are also pretending this is an environment vs performance argument, when their choice of old processors clearly shows it is rather environment vs costs. They could get far better performance with newer CPUs, both absolute as well as per Wh.


Use profile matters a lot. Leaving one of these on 24/7 is a noticeable amount of juice; turning one on for a few hours every few days is probably lost in the noise floor of all the other things you're doing. You also have to consider the processing cost for recycling, and the displacement of the new machine and all of its processing and production costs to the environment. You probably do come out environmentally ahead in a lot of scenarios if you account for the whole picture.


It's like every commenter here is allergic to turning off the computer or putting it to sleep when not using it.

If used sanely, the power consumption cost is tiny compared to buying a new computer.


> If used sanely, the power consumption cost is tiny compared to buying a new computer.

That depends entirely on how long you plan to use the system. 1 year vs 5 years is a huge difference.


I mean, the HP Z220 workstation he has a photo of has an i5; is the power consumption that much more than any other PC?

https://www.wattdoesituse.com/hp-z220-workstation

This says 8 hours a day would be $23 a year.


I honestly don't know a single person or business that shuts down computers for 16 hours a day consistently.

Not a single one.

Power consumption should be measured as 8-10 hours at 50% load, and 14-16 hours at idle.

That easily doubles your estimates, I think. Not to mention the places where power is more expensive than $0.12/kWh.
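As a worked example, with assumed wattages for a Z220-class box (~80W at 50% load for 9 hours, ~30W idle for 15 hours):

    kwh_per_day = (80 * 9 + 30 * 15) / 1000          # 1.17 kWh/day
    print(f"${kwh_per_day * 365 * 0.12:.0f}/year")   # ~$51 at $0.12/kWh,
                                                     # roughly double the
                                                     # 8-hours-only figure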


Call it $100 a year then. That's not very much if you're already buying a machine to tinker around with.


Why do the machines use so much power?


If we're talking about idle power, it is mostly that the power management hasn't been tuned for power saving as much as for stable performance, and that there is more power-hungry hardware in the system. The presence of a discrete GPU rather than the typical iGPU. A chipset with more external controllers, most of which may not be properly powered down at idle. Often more than one storage device. And finally, a RAM configuration that is also more power hungry at idle due to the type and quantity of chips, and the high-performance configuration.

I have a Xeon E5-1650v3 6-core workstation in my office with 64GB of RAM, two 7200 RPM disks running constantly, two SATA SSDs, and a GeForce GTX Titan X GPU. According to UPS self-reporting, it is drawing ~80W when essentially idle. The powertop utility reports ~90% C6 idle state for the CPU cores and ~55% C6 idle state for the package.

I have an i3-8100 4-core PC at home with 16GB of RAM, two 5400 RPM disks which are set to spin down, one NVMe SSD, and the iGPU. According to UPS self-reporting, it is drawing under 15W. The powertop utility reports ~97% C7 idle state for the CPU, ~99% RC6 for the iGPU, and doesn't report a package idle state.

I have an older i7-4700MQ 4-core Thinkpad with 16GB of RAM, one SATA SSD, and a discrete GeForce GT 730M GPU alongside the iGPU. It is drawing about 13W off the battery with screen active but at low brightness so I can query it locally and write this post. The powertop utility reports ~95% C7 idle state for the CPU, ~99% RC6 for the iGPU, and ~60% C2 plus ~23% C3 for the package.
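If you don't have a UPS or wall meter handy, you can approximate the CPU package draw on Linux through the RAPL powercap counters. A rough sketch (the sysfs path varies by machine, only chips with RAPL support expose it, and it ignores counter wraparound and everything outside the CPU package, like disks, GPU, and PSU losses):

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0

    def package_watts(interval=5.0):
        # The counter accumulates microjoules; sample it twice and
        # divide the delta by the elapsed time to get average watts.
        with open(RAPL) as f:
            start = int(f.read())
        time.sleep(interval)
        with open(RAPL) as f:
            end = int(f.read())
        return (end - start) / 1e6 / interval

    print(f"CPU package: {package_watts():.1f} W")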


Old Xeon CPUs, especially dual-socket ones, just tend to be the highest-TDP ones. The Xeon E5-2667 v2 mentioned in the article is a 130W CPU, which he compares to the AMD Ryzen 7 2700X at 105W. The AMD, in benchmarks, is around 5% faster.

I was honestly expecting more of a difference there, in both the speed of the AMD and the power consumption of the Xeon. If you get a workstation with two of them, you're talking about 260W max consumption. And of course that is max consumption; idle consumption is probably more important and is probably way, way lower.

One place where some of these machines really shine though is in memory availability. If you need a lot of RAM, it can be hard to get a new desktop that'll take more than 64GB, but workstation chipsets can often go to half a TB or more. Of course, that RAM uses a lot of power...


I have a E5-2670 (V1) based system and the idle power draw is around 80W. I doubt a modern system with the same cores and threads (say an i9-9900k) is going to draw that much less while idle. Even if it uses half the power, that equates to electricity savings of under $5 per month if it's on 24/7.


Back in those days, Intel simply did not care one iota about power consumption. The P4 processors were absolute power hogs, and all Intel did was brag about how great their memory bandwidth to the RAMBUS memory was.


The last Pentium 4 was released in 2008; the Xeon E5-2667 v2 parts discussed in the article were released in 2013, years after Intel abandoned their Netburst approach.


Xeons are usually a generation behind on process and sometimes other technology as well.


Be that as it may, that Xeon is post-Nehalem; it's not at all related to Netburst. Netburst was an evolutionary dead end.


Oh yeah, back in those, uh, Westmere/Sandy/Ivy/Haswell days. The days when power consumption was given priority over even performance.


On a vastly more practical note, check out JDM_WAAAT's site https://www.serverbuilds.net/ . They're more piecemeal parts lists, but he has a very reasoned approach to selecting parts that deliver value/performance at their price point and are available at the selected price in sufficient quantities.


I've looked into this kind of thing in the past, but there was always something that made me shy away, despite the prices being relatively favorable.

Most of the time, I think it was the worry of "what if the power supply dies" - because usually these workstations have a very proprietary PSU, which can sometimes be difficult to source, and when you find one, the price can be nothing short of insane.

Then there was RAM - most of the time, you needed ECC RAM, and that stuff could be expensive if you wanted to push the system to its maximum config. Of course, this was years ago, maybe things have changed in the market?

Last was the potential noise and heat factors. I once had a Core2Duo into which I dropped an 8800 GTX (or something like that) - and while the noise wasn't bothersome, the heat output was something else. But I do know what a server fan system sounds like, and I'd worry about a workstation having that same kind of jet-engine experience.

So I never pulled the trigger, so to speak.

Today, I've been going in almost the direct opposite direction.

I've got on my "list of things to repair/build" a TRS-80 Model 100 (portable), an old Toughbook CF-29 to refresh, and I'm contemplating building a custom "cyberdeck" using a variety of different parts and components (probably an ESP32 coupled to an Arduino Nano, with the Nano acting as a keyboard interface, because the keyboard isn't off-the-shelf, it's from a toy computer; the ESP will probably drive some kind of GLCD; then custom firmware etc. for everything else - not really a practical system, but probably more fun).


+1 on the PSU.

Having attempted to replace power supplies on both Dell and HP workstations, that's just not possible: they don't boot without one from the manufacturer.

Surprised that none of the comments mention it.


My HP Z210 came with 8GB of ECC memory; I believe I upgraded it to 24GB of DDR3 and was kind of upset that the HP site said 16GB max. That's not the case: the 24GB of DDR3 worked anyway (hopefully the RAM note helps someone else). Yes, the mobo died a few months after, so that's a downside of used equipment, but the replacement mobo was $30, plus an extra $30 spare in case it happened again. I put in a PCIe-powered 1050 Ti for budget gaming since I'm not made of money; it works well and can multitask like crazy. I believe my E3-1270v1 at 3.4GHz is comparable to my old i5-2500K, minus the OC capability.


I once bought a Supermicro workstation (dual Xeon 2680), and it was LOUD. Even when shut down, the fans were still making noise, so I got into the habit of cutting the power to keep it silent.

While the idea looked appealing at the beginning (dual CPU, 64GB RAM, great cable management), in the end I decided it didn't justify the cost and hassle. Plus, sometimes there were very funny issues - for example, to run Rocksmith you needed to manually set affinity on Windows so it ran on only one CPU. After upgrading to a last-gen ThinkPad Extreme, it feels like the ThinkPad even does 4K rendering faster.


Old Xeons are hot and power-hungry.

My monthly electric bill decreased by nearly $10 after replacing my 2010 MacPro with a 2018 MacMini.


> Old Xeons are hot and power-hungry.

The other thing I'd worry about on a old, heavily used workstation is wear on the fan bearings. I've seen ones that got pretty loud after 5-6 years and replacement parts generally aren't available at that point.


Bearings? You're not buying 20-year-old machines; you're more likely to get maglev fans.

No parts? It's a bloody fan, ffs.


Aren't most fans available as standard replacement parts? Or at least, something close enough? I once zip-tied (well, tied in with wire) a replacement CPU fan when the one I thought would fit didn't quite line up.


Dell and HP are the two major workstation brands I'm familiar with and both of them use custom fan assemblies. Search for "Dell Precision workstation fans" on eBay if you'd like to see examples.


On the other hand, ripping out the old case fan and replacing it with one held in place by sheet metal screws or glue is an option. When you are buying old hardware like this it's ok to bodge the repairs if necessary.


That's cool that you can tell. $10 is 2% of my monthly electric bill. Do you turn your machine off or let it go into deep sleep? I tend to leave the stuff on all the time to minimize heating/cooling cycle stress. It's probably led to some prematurely dried-out electrolytics (and maybe a little bit more on the electric bill), but I stopped losing spinning media, and the only things failing in the old (>10 year) boxes I have up are the fans.

EDIT - FWIW, my home office daily driver is a pair of 6-core Xeon 3104s at 1.7GHz from the Dell refurb site. Although it's my desktop system, I also do a lot of parallel stuff on it.


>$10 is 2% of my monthly electric bill

You spend $6000 a year on electricity? Are you running an aluminium smelter in your back yard?


Typical for a larger home in the south. Air Conditioning is not exactly optional when it's 95F and 85% humidity for half the year.


I've lived in the southern US my whole life. An average bill of $500/month is a lot. The only way I could see that is if most or all of the following are true: very large house, very old A/C, using electricity for heat in the winter (not a heat pump), unusually high rate, setting thermostat below 70 all the time.

My house is not huge, but it is nearly 70 years old and doesn't even have insulation in half the walls. My electric bill rarely gets above $200/month even when the high temp is 100 most days. My a/c is not particularly new either, I think it's approaching ~20 years.


I used to get one or two electric bills every year under $200 back in the noughts, but the weather's not been cooperating of late. 40-year-old house, 2500 square feet under air or heat pump, summertime thermostat set at 79, located in an unusually arid part of southwest Florida. No insulation in the cinder-block walls and very thin insulation in the attic. No shade foliage to the west or to the south. Tons of heat gain through the south-facing side: the previous owner had to have had equity in a sliding glass door company, since the entire south side opens up with sliding glass doors.

I have a solar powered attic ventilator on my must-buy list but I'm worried about puncturing the brand new roof's membrane.

Copyedited


If you have poor insulation and drafty construction, the attic ventilator may be doing more harm than good.

It can be pulling up cooled air from within the house into the attic, and then pumping it out of the house. No wonder your bills are enormous.

In cases like yours, retrofitting insulation should more than pay for itself in just a couple of years.


The only attic ventilation now is passive, through the soffits and ridge vents. Active ventilation was on my list, but now I've struck it off based on your post. 3/4 of my attic space is vaulted - I can't get flats up there; maybe it can be blown, but if it's blown, how do they go about inspecting it for proper application? Or is it more like BGA assembly without X-ray inspection (spray and pray)?


For blown insulation (I think you mean fibers, not expanding foam), I think the installation method integrates a basic check, i.e. you cut small access holes on both ends of a space you are filling, and it is pretty obvious whether bulk material makes it to the other end to spill out. You probably need a stud-finder or similar to map the spaces between rafters, fire blocking, etc.

The best way to plan or inspect an insulation job is to use an infrared camera to observe temperature gradients. You can see hot or cold spots where heat is conducting or convecting through the structure.


Maybe I'll luck out and the vaults won't need remediation. Yes, I was thinking fibers, not foam. I'm sure I'm underestimating the effort needed to do a proper job on the vaults within their narrow confines: access to two of the four vault edges doesn't seem possible without punching through the gables from outside.

Thank you for the practical suggestions on moving forward. Email me (address in profile) if you're in Largo or Sarasota, FL sometime and I'll buy you a brewski or coffee or something.


For retrofit, I think it is more common to cut 1-2 inch openings through the ceiling or wall face (i.e. cut through plaster or remove pieces of paneling), blow the insulation, and finally patch those holes. And, this pattern will repeat every 5-10 feet, since the bays should be physically divided along their length by fire-blocking if built to code. Each blocked section will have to be filled as a separate step.


Disclaimer: I'm not an HVAC professional, but I did apply closed cell foam insulation to one of the rooms in my house.

In my experience, the spray does a very good job of sealing the joints where applicable. That's probably not the right type for an attic, but more insulation would generally be better than less.

That's not a particularly scientific approach, but when we're talking about old houses, you're almost never going to be able to do everything the way you want 100%.


I'm trying to envision what you did. Were you sealing up penetrations in an exterior surface or along a joint edge, or were you filling voids in, for example, the wall? I had good results filling a nasty opening around a water pipe with the spray, but it didn't last because I failed to observe the warning about UV degradation and the material eventually flaked away.


Maybe some awnings would help reduce the solar gain through the sliding doors? Also, “the best time to plant trees...” and all, if it’s an option


I'll check out the awning option - great idea. The sun's so high in the summer they wouldn't need to be deep at all.

The previous owners ruled out trees by planting hundreds of square feet of concrete topped with crumbling, fragile coolcrete. But for the lack of soil, what I'd really wish for are trellises covered in moonflowers, passion flowers, and all sorts of fast-growing beautiful vines. Maybe I could still do this with long rows of planters. Thank you for the excellent suggestions. You also get a brewski/coffee if you're ever by Largo/Sarasota, FL; email in profile.


Cool, I hope it works for you! And thanks for the offer :-)

Yeah, some trellises sound like a great idea, I've always wanted one of those over a patio, covered in vines.


With proper architecture [0], you can have a much more efficient house that requires little if any air conditioning. It'll look much nicer than the typical McMansion too.[1]

[0] https://www.sciencedirect.com/science/article/pii/S2214157X1...

[1] https://www.buildnative.com/portfolio/peak-ridge/?portfolioC...


A custom net zero home, which is what you've linked here, sounds pretty damn expensive. Unless money is no object, you're probably better off slapping some extra solar panels on your existing McMansion.


The ScienceDirect article content was excellent for new builds. Up until the early 70s, when draining Florida was still a thing, before depletion concerns, some neighboring homes had an air-conditioning mode routing pumped shallow-aquifer groundwater through the air handler for cooling and dumping the warmed water. I wonder if a heat exchanger to a deeper aquifer is a practical alternative.


Man I wish - I'd get something in return for the wasted money. The neighborhood just feels like a big smelter.



Great article. It expresses ideas similar to Game & Watch and GameBoy creator Gunpei Yokoi's philosophy of Lateral Thinking with Withered Technology.[0]

> Yokoi said, "The Nintendo way of adapting technology is not to look for the state of the art but to utilize mature technology that can be mass-produced cheaply."

> "Withered technology" in this context refers to a mature technology which is cheap and well understood. "Lateral thinking" refers to finding radical new ways of using such technology.

When designing the GameBoy, Yokoi realised that the older, simpler Z80 processor would just as well serve the purpose of making fun handheld games as the more contemporary options would (and one might argue that the limitations of the machine forced game developers to be more creative than they might have otherwise). Likewise with the monochromatic display.

The GameBoy was cheaper to manufacture and buy, better understood by developers, and, crucially, much more power efficient than its several competitors. And it killed them.

[0]: https://en.wikipedia.org/wiki/Gunpei_Yokoi#Lateral_Thinking_...


Yeah, that's a no from me. Who would want another massive heat-generating, power-hungry desktop/server humming along in their home office as their main workstation?

Now if I were to build a home lab, heck yeah I would jump on board. A couple of those systems would make a real nice private cloud.


> A couple of those systems would make a real nice private cloud.

+1. I host an R710, an R410, and two R610s in my basement, and it's a lovely cluster. I would never have them on any other floor.


I used to run an older Xeon system I picked up for free as a home server/workstation. Now you can pick up 8c/16t Ryzens pretty cheap, and they even support ECC on cheap consumer motherboards, so unless you get a used Xeon for literally next to nothing, you may want to consider other alternatives. There's also the fact that recently more and more vulnerabilities have been discovered in Intel products, which makes Xeon a much less attractive platform.


It may feel a bit anachronistic at times, but learning terminal based apps is a good way to beat the cycle of upgrades. Many tasks are not all that complex, and they don't need recent hardware to perform well.


I used to love upgrading my home desktop, I'd research specs, figure out the best price/performance and upgrade components regularly (motherboard, CPU, hard drive, etc).

But now that most of my computer time is spent in a web browser (most of my coding is done at work so I rarely even run an IDE at home), I'm happy with my 5 year old laptop.

I built a nice 8-core Xeon desktop about 4 years ago, but haven't powered it on in at least the past 2 years. I moved a year ago and haven't even taken it out of the box; it's still sitting beside the computer desk.


I like to do this with laptops -- you can get a few years old maxed out Dell Latitudes, Thinkpads etc on ebay for 80% off. These outperform a standard new $800 laptop easily, though obviously you pay in sleekness/weight (mine is 90% stationary so I don't much care).


Funny story. Back when my bride and I lived in a one-bedroom apartment, she looked at my small stack of computers and said, "why don't you get one big one?"

Later that month, I was shopping at the Lockheed Martin outlet/surplus store and spied a lovely Sun 3/280 in an 8-foot 19-inch rackmount for $25. I could not resist, much to her dismay when the refrigerator-sized chassis got home. That original hardware now hosts a Threadripper: the Case of Theseus.


The arguments about power consumption only apply to a specific set of old workstation CPUs; you can buy old hardware that isn't super inefficient. I bought an HP EliteDesk tower on the cheap ($150) last year that had an i7-4790 (an all-around great CPU) and is super quiet. With a lower-power GPU in it (Nvidia GT 1030) it's even decent for light games or video decoding. I've not directly measured the power consumption, but it's not nearly the monster these old Xeons can be.

The main point, from what I gather, is that you can buy older hardware that's just as powerful as something you'd buy today, but you get the benefit of better repairability, cheap replacement parts, and saving a bunch of plastic from the trash. I don't see the need for buying brand-new hardware when the old stuff works just fine. (My main machine is a 6-year-old Thinkpad)


I was given an HP Z230 workstation when I started at Microsoft five years ago and haven't upgraded since then, despite being well beyond eligible for a hardware refresh. It has 32 GB of RAM and an (edit) 4-core 8-thread i7-4770, plus an OS SSD with several data HDDs (integrated graphics, but meh).

CPU performance has been basically stagnant for the past half-decade. Even on my home PC (purchased around the same time) the only thing I've changed is the graphics card - and that was to sell it for a less-powerful one, then buy back the original card years later for cheap! Maybe if you want to play games on the cutting edge of VR then constant upgrades are necessary, but five-year-old desktop hardware should be fine for nearly anyone.


That i7-4770 is a great processor (though it's 4c8t, not 8c); I actually still have an i5-3570k running in my HTPC/little gaming rig and it too is ticking along just fine. For my money, the Ivy Bridge and Haswell processors are the best bang-for-the-buck for processors, ever.

The one note I'd add about CPUs is that they've gotten a lot wider over the last couple years. IPC has changed iteratively (AMD's has changed a lot though, particularly for Zen 2) but parallelism really has improved many workloads over time. Ryzen made it a lot easier to throw 8 cores/16 threads at a problem and for a lot of stuff folks around here are likely to care about that matters a good bit. And Ryzen 2 makes it downright affordable to throw 12c/24t at a problem. (He said, getting ready to throw down on a 3900X for video encoding...)


I love my 4770K; on air, mine hits 4.4GHz stable.


>CPU performance has been basically stagnant for the past half-decade

what? comparing an i7-4770 to an i7-8700 (current gen, similar launch price), the latter is 25%-86% faster, depending on your workloads.

https://cpu.userbenchmark.com/Compare/Intel-Core-i7-8700-vs-...


The i7-8700 is heckin' fast, but if you look at it over time, the rate of improvement has slowed down quite a bit. The 4770 is six years old. The Core 2 Duo E6600 is its price-comparable equivalent from six years before that. An E6600 has a PassMark score of 1553. An i7-4770 has a PassMark score of 9780. An i7-8700 has a score of 15155.

And we shouldn't miss that that's way faster! But the slope of that curve is a lot shallower than it has been historically, and a 2x improvement probably doesn't move the needle in the way that a 6x improvement does.

We're spoiled, in that we can say such an improvement no longer moves the needle. But it also means that hardware stays viable for much, much longer.


That link shows only a 25% improvement in single-core performance in four generations. That's an embarrassment, Intel.

Throwing more cores at a problem has never been difficult. The whole point of the article is that you can buy a manycore workstation for peanuts. If manycores solves your workload, you're going to have a good time. If, like most power users, you need more single-thread performance, CPU performance has been stagnant.


Minor detail: the entire i7-4770xx line of products consists of 4-core, 8-thread CPUs[0]

[0] https://ark.intel.com/content/www/us/en/ark/search.html?_cha...


Perhaps someone here can give me a bit of advice. I've been undecided about buying a refurbished workstation for a while now. I do a lot of AOSP builds for some of my clients, and on my current setup a clean build can take 3 or 4 hours. I've been looking at various builds on https://www.bargainhardware.co.uk/ for about £1500 to £2000. If anyone has any experience or advice, it would be greatly appreciated.


My favourite setup is an HP Z620 v2 with E5-xxxx v2 CPUs and >= 64 gigs of RAM.

Basically any workstation with two Intel Xeon E5-2650 V2s, or the L variant (lower power), will do.

for example:

1 x HP Z620 Grade A - 800W - V2

2 x Intel Xeon E5-2650 V2 - 8-Core 2.60Ghz (20MB Cache, 8.00GTs, 95W)

1 x HP Z420, Z620 - Heatsink

1 x HP Z620 2nd CPU Riser Board, Fan & Heatsink

8 x 8GB - DDR3L 1333MHz (PC3L-10600R, 2RX4, ECC REG)

1 x 2TB - SATA (7.2K, 3G) HDD - Major Brand

You'll need a new graphics card and an SSD, but that is £750 inc. VAT. This means you can spend £500 on an Nvidia RTX and get a 6k monitor.


Thanks. I've found it difficult to find any info on what I can expect from certain hardware. It seems most people are either hobbyists on similar hardware to mine or people working for big companies who can afford massive build servers.


If you're OK with a mobile workstation, I grabbed a refurbished P52 on eBay. They go up to a Xeon (6-core) and you can stuff a ton of RAM in there. You can't get one of these models without a Quadro video card, unfortunately, so that does mark up the price, but it should be useful for running the emulator. And lastly an NVMe SSD - if you have all those, it should work well, but I can't comment on the thermal longevity, i.e. whether it might throttle during a long build, as it's been many years since I've done an AOSP build or anything that heavy. I got mine for 1800ish USD, but I added some RAM on top. It was definitely overkill for what I do, but I think it would suit your case well.


Hmm, I hadn't thought of a mobile workstation; I'll give them a look. I used to have a Dell mobile workstation at one of my previous jobs, which put me off them a bit because it weighed a ton and was a pain to carry around, but the P52 looks quite interesting.


> I do a lot of aosp builds for some of my clients

If you don't mind - what kind of job are you in that you can do AOSP professionally? I always saw that as a pure-hobbyist thing.


I do embedded software engineering for IoT companies at the moment. Usually I get a zip file filled with all the sources from some random design house in China, and then I have to pull out all the rubbish and configure it for my client's needs.


Hello! I manage this website - here to help if you have any questions re: benchmarking, compatibility, limitations, etc.


For one, I would love to have an SGI workstation. I still have CG magazines from this era, and it's plastered all over the ads; I'm still amazed at the graphics output (so smooth) and what we had in the 90s.

https://www.youtube.com/watch?v=ZDxLa6P6exc

I'd imagine something like this would've been used to make Crash Bandicoot and the like. It'd be really interesting to play games built on this thing.


It's not just workstation hardware - I've recently bought a couple of Dell T410s off eBay for $125 each (shipped!). Quad-core Xeon E5620s, 4G RAM, 6-bay hotswap chassis, DVD-ROM, iDRAC 6 Enterprise, PERC 6i RAID card.

Upgrades: $25 for 120G SSD (boot drive, goes in the empty second 5.25" bay), $27 for 16G RAM, $20ish for an E5670 6-core CPU, $35 for an LSI HBA in IT-mode (supports >2T drives), $10 for a set of SFF-8087 cables to go from the HBA to the existing hotswap backplane. 30-45 minutes to upgrade all the firmware for the DRAC, lifecycle controller, BIOS, etc.

Grand total of around $250 for a really nice server with full remote management, and for the cost of another E5670 and a Dell heatsink I can upgrade to dual-CPU (12 cores/24 threads).

They're so cheap that I've gotten two systems to upgrade as described, and this morning ordered a third (yet again $125) to have for spare parts.

The eBay vendor emailed me and offered to sell me a pallet of 24 for $80 each, but I don't have that kind of need or money lying around...

As for laptops, I tend to get refurb/off-lease Thinkpads from arrowdirect.com (coupon code ARROW gives 15% off), then max out the (cheap DDR3) RAM and throw an SSD in where the HDD was. I've built up a T420s and an X230 like this for when I need a decent portable machine but don't want to take my expensive MacBook Pro somewhere. For the T420s I even got a $50 adapter board from a guy in China that let me put an FHD IPS screen in, instead of the 1440x900 TN LCD that it came with...


I upgraded a free T410 with two X5675s, 32GB of ECC RAM, a PCIe-to-NVMe adapter, a RAID array of 2TB drives, and a USB3 controller.

It's quite a capable machine. I needed it to learn about NUMA archs and test my software.

However, there is no sleep mode. The boot time is not that bad for a server, so I start it with IPMI, bounced from an SBC.

I made it quiet the hard way, mainly for fun and to learn about embedded control loops: water cooling with a passive motorbike radiator and an Arduino to control the pumps and monitor temperatures, while feeding fake hall-sensor data to the original BMC so it doesn't freak out.


Building my own home server, I went a different route with the latest, fastest i3 processor (at the time the i3-7350K, not the i3-8350K) to prioritize lower power and single-thread execution speed.

The E5670 holds up surprisingly well: similar total performance (half the single-thread execution speed, twice the threads) in a similar power envelope, for half the cost.

The value in those used workstations is mostly in the case, and that is hard to replicate in these days of style-driven (wtf?) computer hardware where everything has LEDs and is generally targeted to excite 12-year-olds.


>pallet of 24 for $80 each

at those prices I really do wonder about that cross section of recycling/dumping and inefficient power consumption


I'm amazed that they can sell them for $125 including shipping (which has to be at least $40-50) and still make a profit. Makes me wonder what THEY paid for them..

Vendor said they had more than 100 still available, and both of the units I've gotten so far are the original config as shipped from Dell according to a service tag lookup.


They probably don't pay for it. The hardware is decommissioned after a few years of usage.


In late 2017 I built myself a Threadripper-based workstation for ~$5000, with a 1950X, 64GB of ECC (it can fit up to 128), a single 1080 Ti, a single NVMe drive, and two 8TB HDDs. The system was designed to allow for more HDDs (case space and thermal capacity; 8 should fit easily) and 4 GPUs, and to be durable enough that I can wrap it up and ship it around the world if needed. (The whole thing weighs ~18kg with a single GPU. Whether shipping a PC is a good idea or not, I have yet to find out.)

Threadripper was still $1000, plus $300 for the motherboard. And RAM was at an all-time high, at $750 for 64GB of ECC.

Because of this price tag I considered getting a used Xeon, but looking at the total, I probably would have saved less than $1000, or 20% - perhaps $100-200 more with used RAM, for a total of 24%. That percentage saving is much lower than expected.

Other factors:

The CPU mentioned in the article has 40 PCIe lanes vs. 60 in Threadripper, and I figured more lanes would be better should I ever put in all 4 GPUs. This seems to more or less not matter, though - something learned since then.

Better energy efficiency (newer processor and better PSU), more control over heat and noise (an assumption; I have 6 quiet fans in there), more lanes (although nearly useless for GPUs, apparently), more room for storage (8-12 drives possible) and GPUs (4 possible), and simply all parts new with warranty, for 20-24% more.

This is very use-case specific of course, and there might be better used systems available as well, but I would probably do it again. The savings were less than I initially thought, and that was at an all-time high for RAM prices and with the first generation of Threadrippers; it should be even less now. What I should have done differently is use it more often, but that's a different story.

-> The expensive parts are storage, GPU, and a high-quality case/PSU/fans. If you want to build your first workstation, storage and GPUs should be bought separately anyway. But if you don't need them, or already have them from a different system, a used Xeon sounds interesting. You'll still probably save less than expected, though, over the years, with less warranty, more power draw, and more work necessary (which can be fun! not denying that).

(Yes, the linked article is about a home computer, not a workstation. This is a different scenario. I figured it's nonetheless relevant for some.)


If a lot of HN users were interested, it would be cool if someone could organize group buys of this type of hardware. Buying old hardware (for super cheap) is usually limited by volume, and most sellers aren't willing to ship. If someone removed those pains and hand-picked "worth it" items (for the home/small-business server lab), it would be something a few people might be interested in.


This stuff is fun to do, but it's not a magic bullet. The article would be better titled "why your next home computer could be an old Xeon workstation".

These older machines use way more power. Modern stuff is far quieter and can do a lot more with less heat and noise.

I have a dual-Xeon 2U here at home with 48GB of RAM, redundant power supplies, etc... it's awesome. But it's not even remotely close to being as performant as my new MacBook Pro.


My gaming desktop right now is actually an old Dell Precision tower with a third-gen i7 that I got for $140 off eBay. I swapped out the PSU, upped the RAM to 16GB, added an SSD I had lying around, and replaced the Quadro GPU with a 1060. In all it cost me about $350, and it comfortably runs every game I own. Power draw isn't too much of a concern since it isn't used too often.


For what it's worth: I keep my computers/servers off all the time. When I need them, I use Wake-on-LAN to turn them on remotely.

Even my power-hungry 71TB NAS is off 99% of the time and only turned on when needed, saving me 150-200 watts of idle power usage.

If you don't need your stuff to be on 24/7, you can buy older, more power-hungry stuff as long as you turn it off when done.
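A magic packet is simple enough that you can send it yourself; here's a minimal Python sketch (the MAC address is a placeholder):

    import socket

    def wake(mac, broadcast="255.255.255.255", port=9):
        # A WoL magic packet: 6 bytes of 0xFF followed by the target
        # MAC repeated 16 times, sent as a UDP broadcast.
        payload = bytes.fromhex(mac.replace(":", ""))
        packet = b"\xff" * 6 + payload * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC for the NAS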


Takes me back. I bought a used Dell workstation for a few hundred bucks. It was pretty bare: basically case, power supply, and motherboard. At the time, it was the only motherboard I was aware of that supported dual-socket CPUs and AGP graphics. Even though I bought it used, Dell support was willing to help me debug why it wouldn't boot (something to do with a mismatch between the CPU power control units - I think that's what they were called - with a dual P4 Xeon setup). For a workstation, it was pretty damned affordable for what I got. The kick in the jaw was that it took RDRAM; putting in 1.5GB of RAM easily doubled the cost of the build. A few years later, when I was an FTE and no longer in school, I dropped a couple of grand USD to upgrade the RAM to 3GB. That was circa 2005. Great workstation, but boy was that RDRAM expensive...


Oh hey, I have an old Z420 that I picked up off eBay that I used as a hypervisor. I had it running for a few years, and it worked great for what I was using it for. I replaced it earlier this year with a mini-ITX Ryzen-based system, because I didn't want the tower in my office anymore.


How is the heat management in your Mini-ITX Ryzen system?

I've read that Ryzen can generate a good amount of heat. I'd worry that mini-ITX, due to the form factor, may struggle more.


Works fine with the stock heatsink out of the box, even with a mild overclock.

They even have smaller than ITX systems now, though not quite as small as a NUC: https://www.asrock.com/nettop/

I have one of these and it runs great.


I run a Ryzen 5 2600 in a Cooler Master 130 case with the stock cooler. I run it with stock settings and frequencies (XMP enabled). I do not encounter any thermal throttling in normal usage, including long gaming sessions. That said, the 2600 is a 65W chip while the 2700X is rated at 105W. Maybe I would run into more thermal constraints if I used the 105W part, but so far the 2600 has had plenty of performance to meet my needs of a semi-portable gaming/development machine.


So far no heat issues with the stock heat sink (2600).


I don't find any of these workstations really interesting. They are all the same x86_64 architecture, so just getting the latest and greatest puts you leagues ahead.

I find the really old workstations way more interesting. You rarely see a MIPS, SPARC, or PA-RISC machine anymore. Once the CPU wars were over (Intel won, btw) you could get old SGI supercomputers, DEC Alphas, Sun SPARCstations, and NeXT Turbo Cubes for almost nothing. My very first website ran from an Alpha workstation on NetBSD under my dorm room bed.

They were power hungry, not state of the art, and sometimes quirky to use (Irix Motif comes to mind) but they were a blast.


The family machine is a 2009 Mac Pro with dual hex-core processors, 48 gigs of RAM, and two low-watt video cards, running Ubuntu. It does multiseat like a champ: one side was playing Hitman (2016) while the other was doing Tomb Raider (2013). I had been wanting to do multiseat for about a decade. In addition to using it remotely with ssh, I have Guacamole on it, connecting to GDM, so I can get to my desktop through a web browser if I need to. Total cost was about $600. I'm very much enjoying it.


We're in a 5-10 year plateau where buying yesterday's machines doesn't cost much performance, at least for CPUs.

If you bought a 5-year-old computer in 2011, you'd be buying last-gen stuff. But in 2019, even the Sandy Bridge (2012) Intel processors aren't that bad. Obviously you are paying with power, because you'll buy a 2- or 4-way system to compensate, but if you use it just a few hours per day it's a steal.


If the noise factor isn't a concern. For me it is, so I'll stick to Intel NUCs (until a good AMD alternative comes along).


Join the club - and you can get refurbished NUCs.

Mine is running silently right now, and I reckon it is using a quarter of the 65W available to it.

The HP Z220 has a 400W power supply and I reckon it is probably using 200W right now.

Silence is priceless.

But, what if you can't afford silence?

If the Z220 can be found in a skip, then it comes for free. But if you keep it for three years, it eats so much electricity (assuming daily usage) that it costs as much as a NUC.
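Roughly, with assumed numbers (Z220 at ~165W vs. a ~15W NUC, 8 hours a day, $0.25/kWh, ~$350 for a refurbished NUC):

    extra_kwh = (165 - 15) / 1000 * 8 * 365   # ~438 kWh/year difference
    per_year = extra_kwh * 0.25               # ~$110/year
    print(f"NUC pays for itself in {350 / per_year:.1f} years")  # ~3.2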

There are a lot of variables here, but I think the Intel NUC option is just as good value as the Z220 once electricity is factored in, even assuming the Z220 is free. It all comes down to whether you want to pay up front and save on electricity bills, or have no up-front costs and pay extra for electricity over time.

In three years time, what would you prefer?

I think the Z220 would be hard to rehome. The Intel NUC? I think it would make a nice set-top box or hand-me-down for many people. If there were no takers and it had to be eBayed, then the postage would be affordable to the buyer. I am not so sure the same could be said for the Z220.

There is one final benefit to the NUC: a cleaner conscience when leaving it on. You know that you are not being overly greedy with your carbon footprint.


I have a Dell Xeon with passive cooling, no sound.


I don't know what it is, but I love looking through junk to see if I can find something valuable.

I rarely find anything useful, but I just love going through junk of any sort.

I'm guessing my ancestors were some sort of scavengers or something? I've never understood why it gives me so much pleasure!


Great idea. You can easily buy a Mac Pro quad-core (single or dual CPU) for anywhere between $300 and $900. It's going to work on par with today's expensive machines :-)


I'm currently using a Dell R710 I got from a recycler for $75 as a FreeNAS box.

It's not THAT loud and serves my needs pretty well. It came with a single 2.1GHz Xeon and has room for a second.


I always wanted to buy an SGI station and use the old OS. I hope to someday find an example that's pristine.


This article is a bit verbose.

I was trying to find the actual specs on his purchase.

Is it using ECC RAM?


Power consumption and fan sounds on those things were insane.


The Xeon workstations I have and can easily access are all 32-bit.

What good will one, let alone several, 32-bit machines do me nowadays?

I can use Ubuntu, or chain them, but how would you do anything with virtualization and advanced computing?


Completely depends on your use-case. Writing/building/running Java, JS, Python, or anything else architecture-independent? Data processing? You're set. Building release binaries for a native-code application? Yeah, no. But it totally depends what you're doing.


That's... interestingly old HW. As in, I'd expect it to be more expensive than newer 64-bit stuff.


I do this. HP Z620 is a phenomenal workstation.


Old Xeons are not all E5-2680v2-style junk; you can buy a pair of used Xeon Platinum 8175Ms plus a decent brand-new dual-socket motherboard for less than $2,500 USD.


What a useless article; it just talks about intangible social reasons to buy something used.

I would've expected some kind of cost analysis for how much compute power $500 buys you on a few years old workstation versus something new, and that you get more for your money used. Nothing.



