According to Western Digital[1], the CVE involved[2] has been public and unpatched since 2019. That's insane.
There's "we don't support end-of-life devices" and then there's "we refuse to fix absolutely critical, crippling security vulnerabilities in devices just a few years old." This is well over the line.
I smell (but have no idea of the merits of) a class action lawsuit.
EOL should legally mean: open-source the firmware, or something equivalent.
If Windows isn't supported on my laptop, I can nearly always replace it with Linux and still keep it alive.
With stuff like this, they can decide to never update it again, or just turn off some critical service and render a device unusable. It's not just about owning what you purchase and having the freedom to do what you want with it; it's also about the environment. Forcing things to become dangerous to use, or just having the power to kill them remotely out of business interests, means more electronic trash.
https://publiccode.eu/ goes in the right direction on a similar topic, but this should also start getting a bit of the limelight.
Okay. Stallman was right. I know it's cliche, I know he's weird. But I learned a LONG TIME AGO never to bet against the man's predictions and ideas on software. He called all of this, and quite a few people (especially HERE) would do well to get back to fundamentals and work on fixing it.
FWIW, I ran for office on an election integrity platform, demanding that we own and operate our own elections. While "open source software" worked well enough (10+ years ago), everyone immediately understood "citizen-owned software". That transforms the bullet point into a 90/10 (pro/con) issue.
Edit: I should clarify something. FOSS isn't sufficient. Citizen-owned software means we control the repo, builds, distros, etc. Microsoft, RedHat, whomever can fork or whatever, for resale or whatever. But we always control the root.
> Edit: I should clarify something. FOSS isn't sufficient. Citizen-owned software means we control the repo, builds, distros, etc. Microsoft, RedHat, whomever can fork or whatever, for resale or whatever. But we always control the root.
Who is "we"? That's the point of FOSS: "we", as "we control the repo, builds, distros, etc" is defined basically as anyone who wants to do it. The idea of forking is an incredibly important part of that, but it sounds like you're suggesting that "we" would be limited to a specific group and centralized control.
I explicitly said FOSS. Anyone can fork. How can I make that point better?
My motivation: my county invests in its own election management system (GIS, VRDB, ballot production, tabulation, report generation, etc). I demand that this entire stack is available to everyone for any purpose. Especially other counties that want an alternative to proprietary stacks hidden behind trade secrets, copyrights, and onerous EULAs.
Is this not what you want too?
> ...it sounds like you're suggesting that "we" would be limited to a specific group and centralized control.
"We" the citizens implies "everyone".
But more to your point, what are some examples of FOSS which are not de facto centralized?
This is an interesting point: there is an analogy to be made with unmaintained equipment that becomes an environmental hazard, and with selling such equipment without proper warnings and recalls.
I was thinking of getting one for myself, but I think I'll stick to USB-connected JBODs.
The PowerPC SoC in that drive is even supported in mainline Linux. And the code for the original v2.6.32 Linux kernel and some ancient U-Boot is available on WD's website. So people could have made their drives secure after EOL. Some maybe even did.
> So people could have made their drives secure after EOL.
A very, very, very select few with the necessary skills could have. This is a mass-market device, if 1 in 10 000 buyers is able to do this, that'd be a lot, and then those would have a Linux machine on an ancient SoC with none of the original functionality.
So? The code is available, which is what the parent commenter wanted. Getting something FOSS running using something that's already supported in mainline Linux is a week long project, perhaps, if you suffer with outdated U-Boot. You can still download the code for this device even on WD's website. (That's quite something, because typically companies start violating the GPL shortly after the device stops being sold, or when they get acquired and redesign their website)
This one would probably be harder, because there aren't a ton of 32-bit PPC distros around, and the storage is limited, so you'd have to set up a toolchain and build some PPC basis for the userspace yourself.
I don't think many people would care for WD's 11-year-old trashy userspace code; they'd rather run current musl, busybox, and samba on this, or whatever.
Someone who likes these devices could have done the work and could have published it for others.
Market experimentation is necessary for innovation. Now we've learned never to buy a product from WD (in case you forgot: last year they intentionally sold unfit SMR hard drives and lied about it).
In this case, "we" is probably a very small group of people. For starters, it's limited to folks who hear about this in the first place. Ars Technica isn't exactly a huge news site, and, while it might be mentioned on TV news, I doubt they'll be talking about it all day long, so only a subset of their viewers would hear about it. Then, beyond that, it's limited to the subset of those who believe that the appropriate response is to not buy WD products anymore. Which isn't everyone on HN, and might not even be half of them.
So maybe we're looking at 1% of the general population? Their opinion isn't going to make a big enough dent in WD's revenues to justify the cost of supporting a product like this past its commercial end of life.
I think that there is a free market answer to this, but it in no way involves successfully pressuring companies like Western Digital to support discontinued products in perpetuity. It's just good old-fashioned "caveat emptor", pure and simple.
Granted, we're talking about consumer tech, so the vast majority of emptors don't have the background knowledge to be able to successfully caveat for themselves. So that approach is sort of acknowledging that a bunch of people will be screwed.
Which might be the best you can get. This is an area with no perfect solutions. More consumer product regulation might curtail events like this, but at the cost of more regulatory costs, possibly stifling innovation, and the risk that the legislative body (few of whom really understand these issues themselves) will just deliver us the latest heinously misdesigned law in a long series of heinously misdesigned law. Leaving things as they are might avoid all of that, but would guarantee that things like this keep happening more-or-less unchecked. Which kind of solution someone prefers is perhaps more a matter of political leaning than any practical consideration.
Every brand proves themselves untrustworthy at some point, look at Seagate or every airline. The free market breaks down all the time and that's why government regulation is so necessary.
This is a broken model when the consequences of making the mistake can be so dire. At the extreme end of this, sure, the "free market" can punish Union Carbide by driving them out of business after they killed 3000 and poisoned half a million people in India, but what good does that do the people who got killed?
Markets can only behave reactively, but some harms need to be mitigated proactively.
I find the environmental impact argument here to be very dubious. Even if a Linux-like alternative were available for this hard drive, the vast majority of people would be unaware of that or not willing/capable of figuring out how to work it and would throw the device away and replace it anyway. And even if they did save the hard drive, they might just use the money they saved by not replacing it to upgrade their computer or phone a little sooner instead.
Hard drives are a particularly bad example too because the risk of failure goes up as the device ages, so people probably should replace them after a few years of heavy use regardless of software support.
We live in a rich society, electronics are relatively cheap, and people like having shiny new things. We will inevitably continue having more and more electronic waste. Right to repair type laws are great for consumers for many other reasons, but I don’t think they’ll make any kind of meaningful dent in the reduction of electronic waste.
Hard drives are a mediocre example, but I have exactly such an enclosure sitting around that I use as cold-storage because I know better than to connect it to a network. If I want to get data off it, I set up an isolated segment in RFC3927 space.
As a result, I've bought a new NAS-type device which does have supported software and which I can put on a live network with only minimal worry. (I still take backups of course, but not as thoroughly as I should.)
That's a direct waste impact of the device being out of support. I like the form factor. I like the capacity. I wish I could just run Openwrt or something on it. (It's a Marvell SOC inside, so that's the likely target if I had porting skills.)
I get the impression they're only describing how they handle it (for the crowd here); I don't see any suggestion the OP thinks this is a reasonable expectation of ordinary consumers.
Wow. This is the ultimate IoT failure scenario, the product is in use but no longer supported and an exploit can hit it from the Internet.
I suspect there will be litigation, but I am not a lawyer. I will be interested to see it, though: what is the responsibility for using things past end-of-life? What we might see is a new "you're on your own" mode built into this sort of appliance: once EOL hits, it asks you to affirmatively acknowledge that it is no longer supported and any further use comes at your own risk.
It is also another great case study for open source NAS devices, like the TrueNAS line, that you can patch even if iXSystems goes away.
I am trying to get rid of any smart devices in my life. I need fewer things connected to the internet, not more. I bought Bruce Schneier's "Click Here to Kill Everybody", and that just convinced me even more. Software is a weak point, and I'd rather have the least amount possible in my "mission critical" everyday gadgets. I want single-purpose, purpose-built devices that do one thing very well, without connecting to the internet for a whole host of secondary, and mostly useless or even harmful, surveillance reasons.
I never jumped on the “everything IoT” bandwagon, but recently I’ve relented a bit.
I own/use exactly 4 “IoT technologies”.
I have smart radiator thermostats that turn on/off the heat based on family presence, as well as “predict” the heating needs based on weather forecast, sunshine hours, etc.
I have a heat pump in my vacation house that is remote controlled by the internet. Not strictly needed, but very convenient in the winter.
To keep an eye on my vacation house I also have a camera that uses HomeKit Secure Video. Requires no infrastructure on my end except network.
And finally, like a lot of people I have Hue.
I control it all through HomeKit. I also keep everything on a couple of IoT VLANs: one for "trusted" devices (AirPlay, media, PlayStation, etc.), and one for "everything else" with client isolation, filtered DNS, and IDS/IPS. No UPnP/NAT-PMP, no tunneling (if detectable), and no DNS servers but mine (until DNS over HTTPS becomes a thing with IoT).
In the future however it might become more or less impossible to avoid. Our neighbors recently purchased a new oven, which of course comes with internet connectivity. Their new washing machine also has internet connectivity.
In the very near future everything will be connected under the pretense of providing useful features, when in reality it's all about pushing ads and gathering telemetry data.
And my concern would be that, like the diesel motor example, someone would find a failure state that starts a conflagration. Leaving it on accidentally at 200C probably won't hurt anything, but malicious hacks might be catastrophic.
Starting pre-heat before you get home.
Monitoring temperature away from home.
Leaving a dish in the oven and starting it at a specific time, say an hour or two before you leave the office and head home for the day.
Checking if you left the oven on after you left home.
> or washing machine
Notifications for the end of the cycle.
Status of detergent levels if equipped ("Do I need fabric softener?" whilst at the store).
Starting a load (similarly to the oven above) while you are on your way home so you don't end up with wet clothes sitting all day, so they're ready for the dryer when you walk through the door.
--
I get your popular cynicism, but c'mon now. None of the stuff I'm outlining there was researched, it was all off the cuff and I live in an apartment with "dumb" appliances from 1996.
My new oven has bluetooth and wifi connectivity. I obviously thought this sounded completely insane, but I went ahead and googled to see what people were using it for. I was actually surprised and impressed.
First, there's a way to do sous vide cooking without one of those big circulators like the one I use. They sell a device that is a little bluetooth thermometer; you can put it in a pot of water, and the induction range can keep the water at a precise temperature. No circulation, but more than good enough imo.
Then you can use the wifi to operate it remotely. So as a case scenario, one could fill a pot with ice and water and a sous vide bagged steak, then turn it on remotely, say an hour or so before getting home. Or one could conceivably leave an uncooked cake in the oven?
I will probably never bother to enable any of this, but yeah, there are uses. I don't want it, but I get it.
Most of these things can be done with the timer that comes with every oven. I've often set the oven to come on at around 15:00 so I can cook a stew for a few hours before getting home.
I guess people who cook more than I do could find it useful.
But then again, if you’d asked me 4-5 years ago if I could see the usefulness of smart radiator thermostats, I’d probably have said no. The ability to remotely turn on/off heating is not that big a deal (to me).
It’s only when coupled with AI to optimize savings that it becomes a great thing. Since installing the smart thermostats, I save on average 30% on my heating bill, and spare the environment 30% of my CO2 emissions from heating.
The AI part learns how much the house heats up from a given amount of sunshine, coupled with outside temperature and heat loss, and controls the thermostats accordingly. If the indoor temperature is 1C below the target at 6am, but sunshine is expected, it simply won’t turn on the heat.
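The decision described above can be sketched in a few lines. This is a minimal illustration, not the vendor's actual algorithm: the function names and coefficients are invented, and a real system would fit the coefficients from logged temperature data.

```python
# Minimal sketch of a forecast-aware heating decision.
# Coefficients would normally be learned per-house from logged data;
# the fixed values here are purely illustrative (hypothetical).

def predicted_gain(sunshine_hours, outside_temp_c, indoor_temp_c,
                   gain_per_sunny_hour=0.4, loss_per_degree=0.05):
    """Expected indoor temperature change (C) over the morning."""
    solar_gain = gain_per_sunny_hour * sunshine_hours
    heat_loss = loss_per_degree * (indoor_temp_c - outside_temp_c)
    return solar_gain - heat_loss

def should_heat(indoor_temp_c, target_temp_c, sunshine_hours, outside_temp_c):
    """Turn on heat only if forecast sunshine won't cover the deficit."""
    deficit = target_temp_c - indoor_temp_c
    if deficit <= 0:
        return False  # already at or above target
    gain = predicted_gain(sunshine_hours, outside_temp_c, indoor_temp_c)
    return gain < deficit

# 1 C below target at 6am, but five sunny hours forecast: don't heat.
print(should_heat(19.0, 20.0, sunshine_hours=5, outside_temp_c=5.0))  # False
# Same deficit on an overcast day: do heat.
print(should_heat(19.0, 20.0, sunshine_hours=0, outside_temp_c=5.0))  # True
```

The interesting part is only the comparison at the end: heat is withheld whenever the expected passive gain covers the deficit.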
Remote pre-heating of the oven, remote alerts when the laundry is done. Non-connected versions of these appliances just make a really loud alarm sound when they need attention, but a small notification on a phone would be more effective and less annoying.
Remote preheating? I prefer a little more work-life balance, i.e. cook when I'm at home. And use the (rather short) time for preheating to look at the paper mail or mix a drink. Or stroll through the garden for some minutes.
Other appliances, e.g. our washing machine, already tell me how long they'll take, so I can set an alarm to remind me to deal with them later. And if my fridge alarmed me because its door wasn't properly closed, I'd prefer a loud alarm over a small notification which I might overlook because the mobile device is somewhere else.
I used to run pfSense with Suricata, but my SG-3100 would frequently get rebooted by the watchdog because the load was too high.
These days I just use a Ubiquiti Dream Machine Pro. The base version works just fine as well; the Pro just fit better on the shelf, and I didn't need an additional AP.
As most products become connected, you will have to pay a premium for products that are not - especially as companies try to make profits from tracking (as with smart TVs).
People who are not rich enough to buy everything single purpose will have to pick and choose.
Tracking and ads. Now that your TV menus have ads embedded, it’s a small leap from there to have to watch a commercial before you can start your washing machine
If anything, ecological collapse might hasten this future.
Just last week, there was this news about the Texas power company that remotely raised temperatures on people's thermostats to save power and avoid brownouts/blackouts. This predictably caused public outrage as people discovered what was in the fine print that they signed without reading, but from the perspective of the power company, everything worked as intended, so I absolutely expect them to push these remote-controllable thermostats even harder going forward.
I was counting that as an example of a tech done well! CA does this too, but afaik, it is advertised as such when one signs up for a free Smart meter and there is even a slightly better rate offered.
I just don’t see an alternate way to implement this. Prompting users via pricing is not something that will be quick enough or intuitive enough. Also, the sacrifice they are asked to make is at most a few degrees.
We are all underestimating the fluctuation in energy demand caused by a slight weather change. If the system had capacity comparable to the maximum possible usage, rates would be sky high. This is not like storing water, and even water we can’t store beyond a certain point; see what happens with flood and drought patterns. The power company will sell the excess to someone cheap; we just can’t store it.
It was presumably the upper threshold before air conditioning came on.
Explanation for those wondering why this needed explaining: most thermostats set the lower threshold at which heating comes on, so turning the temperature up would use more energy.
Raising the temperature during a heat wave is the same as increasing the threshold where the ac unit kicks in. So instead of all these ac units clicking on to cool off houses at any temp above 70 degrees, the ac units click on at 80 degrees. That change saves a ton of energy.
One of the things I do during the year is set the AC according to the outside temperature. During the winter months, my house is set to 70 degrees. As the average outside temperature rises, I increase it by 2 degrees for every 10 from 60. So when it's 100 degrees outside, it's 78 indoors. I have an older unit and watched in horror last year as it struggled and failed to cope with 110 degree temperatures.... I basically just gave up, shut it down, and waited until night time to turn it back on.
Managing an integrated grid using household solar and battery as part of the supply management is one thing.
Adjusting people's usage for demand management is another.
They need to be separated and managed differently.
Instead of push adjusting HVAC for demand, they could use proper price signals with the appropriate local automation at the household that can be over-ridden by the consumer.
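The household-side piece of that proposal could be as simple as mapping the current price to a setpoint offset, with the consumer's override always winning. A hypothetical sketch - the thresholds, rates, and function name are invented, not taken from any real utility program:

```python
# Hypothetical price-responsive cooling setpoint with consumer override.
# All numbers are invented for illustration.

def cooling_setpoint(price_per_kwh, comfort=22.0, threshold=0.30,
                     max_relax=4.0, override=False):
    """Relax the AC setpoint as the spot price rises, unless overridden.

    Above the price threshold the setpoint drifts upward, one degree per
    0.10 currency units, capped at max_relax degrees. The consumer can
    always force the comfort setpoint back via override.
    """
    if override or price_per_kwh <= threshold:
        return comfort
    relax = min(max_relax, (price_per_kwh - threshold) / 0.10)
    return round(comfort + relax, 2)

print(cooling_setpoint(0.20))                 # cheap power: 22.0
print(cooling_setpoint(0.55))                 # pricey: 24.5
print(cooling_setpoint(0.90))                 # capped: 26.0
print(cooling_setpoint(0.90, override=True))  # consumer wins: 22.0
```

The override being local and unconditional is the point: the grid shapes demand through price, but the household keeps the last word.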
I agree price signals and local automation are the way to go.
I would love to have my HVAC run according to the true cost of electricity, also minimizing CO2 emissions.
BUT giving the grid operators emergency control over demand is a useful tool when the grid is close to failure. Too many people will just hit override unless the price signals are very high. When grid operators know people will die if their system goes down and it’s on the brink, people choosing to be 4 degrees cooler is such an immoral choice I’m OK removing that choice from them.
Also very helpful to reduce shocks to the system after any outages by temporarily limiting HVAC restarts.
> Too many people will just hit override unless the price signals are very high.
But that's the point of an energy market, it will adapt the price signal to the supply and the demand.
People will override it without thinking only the first time. The second time they will remember the humongous bill they got because of that and will think twice.
These programs are opt-in and normally come with incentives like a $75 gift card or a free thermostat.
It's possible that people are using thermostats that are still provisioned to the previous tenant's electric bill.
It's an important technological achievement to be able to shift and schedule loads so that we can switch to more efficient energy sources. Requiring power generation companies to service the top percentage of usage is expensive, environmentally as well as economically.
Ecological collapse will likely tumble us more into a black mirror future as populations become increasingly dependent on technology to fulfill their needs.
At the very least, people should be able to make an informed choice about ads & tracking. The (imperfect) analogy would be e.g., a Kindle which is a bit cheaper if you buy one that shows ads on the lock-screen.
Regulation also helps, e.g., in the UK LG's smart TVs have a toggle to enable content fingerprinting, and it's off by default.
Last month I gave our nice SmartTV to a friend and bought low quality non-Smart TV from Walmart for $199. Really low quality but running it with just a new Apple TV box is a UI delight. My wife is unhappy with the downgrade, but I love it - much more private, no unwanted SmartTV apps and having the UI driven in ways I didn't like, being forced to have the old device on the Internet, etc.
Agreed, but the My Book doesn't require a connection. It can use it, optionally, but I don't think it would run afoul of this prepurchase test.
Now, as for how they ended up configured to be accessible from the internet, I'm not sure. Is that a user-facing option? Is it on by default? Did WD have a responsibility to disable the feature when they stopped maintaining it?
Plenty of IoThings don't "require a connection", yet end up on the Internet anyway, which is perhaps worse.
That's one of the reasons why the transition to IPv6 is good: IoT directly connected to the Internet without any NAT is going to be the default assumption.
If you sell me a HD/IoT, and say it does X/Y/Z and then no longer want to provide X/Y/Z, I should get my money back.
This whole, "ah sorry we decided to end-of-life that because we couldn't make any more money off it" is so many levels of bullshit.
It's also part of a broader attitude of not repairing anything. Battery dead in your activity tracker? Throw it out and buy a new one. Worse still, the band has degraded but the electronics are all fine? Still throw it out (or put it in a drawer, never to come back out).
Closely aligned to a recent story where Amazon is/was destroying masses of products that hadn't been sold. The waste is just extraordinary.
> If you sell me a HD/IoT, and say it does X/Y/Z and then no longer want to provide X/Y/Z, I should get my money back.
This is my takeaway from the whole Peloton fiasco. If you spend thousands of dollars on a product, and after the purchase the company bricks the product, the company should legally owe you a full refund.
Under the Consumer Rights Act in the UK there is no end of 'warranty' period. If the product fails within the time it would reasonably be expected to keep working, you have a claim against the company. They can fix it, replace it like-for-like, or pay you money. They do get to reduce the compensation based on the use you've had of the product, but that seems reasonable.
I had a perfectly functioning Philips Hue lighting system; then one day Philips decided it was no longer supported and I had to buy an upgraded system to get the same features I already had.
When a car manufacturer sells a car, in many parts of the world they're supposed to still offer parts for it for 7 years after they stop selling it, by law.
That's not exactly great, tbh. A car can easily run for 10 years (mine is due for replacement at 19), so cutting off parts at 7 years seems ecologically wasteful. I get that it doesn't make sense to offer parts for 20-year-old cars, but if we want to live in a sustainable future, people need to stop replacing cars after 5 years or so.
Not 7 years after you bought it - 7 years after they stop selling it. If you bought the last one, sure. But if you bought it, say, 2 years after release, you may end up with well over ten years of parts availability. Also don't forget cars need maintenance. Compare buying an iPhone on release day versus the day before it is discontinued.
7 years is a relatively long time for a product, but not necessarily for a car. The average car age at scrap time is 13.9 years (UK stats). [Wildly off topic ranting about the
balance of embodied versus emitted CO2 for the typical ICE vehicle omitted. Clue: 10 years is about the break-even point.]
> Wildly off topic ranting about the balance of embodied versus emitted CO2 for the typical ICE vehicle omitted. Clue: 10 years is about the break-even point
Do you mean it takes about 10 years for the car to emit the same amount of co2 that was used in producing it?
If so, that's not "breakeven". Breakeven is when future emissions savings from a newer better car make it worth scrapping (not just selling) a car and replacing it with a newly manufactured one.
It’s generally not necessary to replace cars every 5 years due to parts shortages.
For years, OEM (original equipment manufacturer) replacement parts fill the wholesale and retail channels. If the manufacturer stops offering a part, there are still parts in those channels to meet demand, sometimes for many more years to come.
The manufacturer may keep offering parts beyond the requirement, too. Parts can be profitable, and service can be a big part of dealer revenue. Dealers who make money on service put pressure on the manufacturer to keep parts available.
Physical parts are also pretty easy to copy, so if demand persists beyond the availability of OEM parts, you can often get aftermarket parts. There is also a channel of junkyards and enthusiasts who hoard used parts with life still left in them.
I’ve got a couple friends who like old vehicles and they are able to get parts for trucks that are nearing 40 years old. I own a car that just turned 12 and have never had the slightest problem getting parts for it, even OEM.
> I get that it doesn't make sense to offer parts for 20yo cars
Why not? Personally I believe a hardware manufacturer should be bound by law to sell parts for any vehicle it sold in the past, for as long as that vehicle exists - yes, even if it was produced 40 years ago. There's no reason for it to be otherwise.
If this was mandated by law all the tiny incompatibilities that they introduce with very varied models will suddenly go away because they'll be careful to make products that can actually be maintained and repaired decades from now with spare parts compatible with different models.
Some people use cars that are more than 50 years old and they're just fine. Same goes for washing machines, drills, bikes... We have to stop this capitalist nonsense of producing single-use items that end up in the trash within months. IT'S INSANE!
“a hardware manufacturer should be bound by law to sell parts for any vehicle it sold in the past, as long as it exists”
Cars would get extremely expensive. Imagine Ford still having to supply model T wheels and engines.
They would either have to keep a production line (and employees who know how to operate it) around, or stock ‘enough’ parts, where ‘enough’ is very hard to estimate up front. Also, stocked parts deteriorate, so for tyres, for example, they would have to keep a production line running.
And that’s the easy case. At least Ford can roughly estimate how many Model T’s still exist. What if a car of a model we thought no longer existed gets discovered in a barn?
And of course, that wouldn’t work in cases where manufacturers get bankrupt.
> or have stocked ‘enough’ parts, where ‘enough’ is very hard to estimate up front.
In the building where I live, the company that does the elevator maintenance offered a significant discount on replacing the control system in the machine room with a more modern model (which uses less energy). Their reason: including us, they had only two clients left using that older system, so once both upgraded to a newer (and more common) model, they would no longer need to stock replacement parts for it.
It's extra silly since there's always a robust aftermarket for any reasonably popular car. You can buy new Model T parts in 2021. A lot of modern cars are built on shared platforms, too, so the odds of someone still making parts go up since the engine is probably the same as 20 other models.
>It's extra silly since there's always a robust aftermarket for any reasonably popular car.
One difference is the difficulties in making some 'parts' now. It's one thing to make a fuel pump for a 1950's car, quite another to duplicate a controller board, often with obsolete parts and secret software, that runs the convertible top.
Another is that a lot of parts are more monolithic than they used to be and harder to design/build. Look at headlight assemblies.
It doesn't help that with the increasing amount of software and electronics in a modern car (a Tesla being the leading example), the auto industry is acquiring the product lifecycle times and forced obsolescence of the PC industry.
But consumer electronics are written off on much shorter timeframes than cars. Are we going to require support for electronics for 7 years? I would be in favor of it, but I don't see it happen. Device was EoL in 2015, the vuln is from 2019, breach 2021. Even if we take the vuln date, that's 4 years after EoL. The question then becomes: is it reasonable to say that electronics are expected to have a life half of that of cars? I could argue both ways on this.
But it's EOL. EOL means end of life, not just "no longer sold". EOL means we agreed it's trash and you are using it at your own risk.
It's not the manufacturer's fault if your expired fire extinguisher or medicine fails.
Internet things are not durable goods. They require maintenance or they fail.
If EOL is unreasonably short, that's a factor in the purchase decision - or a contract or commercial-code violation if they surprised you with an accelerated EOL.
That works if they said the EoL date explicitly at checkout. I think they should also have to say the per annum cost, so "£120, equivalent to £240 per annum to our EoL date" (ie if the EoL is in 6 months).
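The labelling suggested above is just the purchase price annualised over the remaining support window:

```python
# Annualised cost to the stated EoL date, as proposed above.

def per_annum_cost(price, months_to_eol):
    """Purchase price expressed as a yearly rate over the support left."""
    return price * 12 / months_to_eol

# £120 device with only 6 months until EoL: effectively £240 per annum.
print(per_annum_cost(120, 6))   # 240.0
# The same device with 5 years of support left looks much better.
print(per_annum_cost(120, 60))  # 24.0
```

Putting that second number next to the price tag is what would let shoppers compare devices with different EoL dates at a glance.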
That's an interesting case. BMW knew about that back in 2012 but covered it up for 5+ years before finally agreeing to a recall. Perhaps they are covering their butts against lawsuits over their negligence. And they wouldn't want a new fire drawing extra attention to an old fire; they also want to burnish (ha!) their brand image, unlike WD, which has practically no competition.
> But consumer electronics are written off on much shorter timeframes than cars.
They are, but it kills the planet.
We have 12-year-old PCs in the family. Back then they were high-end models bought for work. Now they run Linux just fine for general family purposes. They use a bit more energy than a new model, but where we live you have to heat buildings at least 8 months a year anyway. I am sure producing a new one would have required far more resources.
It's also important to point out that despite using more energy because it's older hardware, it'll use considerably less energy over its whole life than it takes to produce a new computer.
Replacing old hardware with "more energy-efficient" hardware is a trap from green capitalism and the numbers do not add up usually.
I find this claim a little hard to believe. Data centers routinely replace ~3 year old computers because the number of old computers they would need to keep running and cooling exceeds the cost of new more efficient hardware. The price of new hardware includes all the energy costs of producing it. Obviously the environmental externalities of energy aren't fully priced in, but that is also the case for data center energy.
What might be a more useful comparison is a $20 5W raspberry pi vs. a 2009 100W (at best) desktop computer where a month of continuous desktop operation costs more in energy than the new raspberry pi and ~1 year of usage.
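The arithmetic behind that comparison is easy to sanity-check. A sketch, assuming a $0.25/kWh electricity price (an assumption; your local rate will vary):

```python
PI_PRICE_USD = 20.0     # purchase price of the Raspberry Pi
PI_WATTS = 5.0
DESKTOP_WATTS = 100.0   # the 2009-era desktop from the comparison above
PRICE_PER_KWH = 0.25    # assumption; substitute your local rate

def monthly_energy_cost(watts: float) -> float:
    """Cost of one month (30 days) of continuous operation."""
    kwh = watts / 1000 * 24 * 30
    return kwh * PRICE_PER_KWH

desktop = monthly_energy_cost(DESKTOP_WATTS)  # 72 kWh  -> $18.00
pi = monthly_energy_cost(PI_WATTS)            # 3.6 kWh -> $0.90
payback_months = PI_PRICE_USD / (desktop - pi)
print(f"${desktop:.2f}/mo vs ${pi:.2f}/mo; Pi pays for itself in {payback_months:.1f} months")
```

At that rate the Pi's purchase price is recovered in roughly a month of continuous operation, which is the spirit of the comparison above.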
In my opinion the difference should be whether something is the first generation of a product or an iteration. Basically somehow using the Lindy effect to determine how long it should be supported.
E.g. if I had bought the very first generation of the iPhone, it would have been much more out of curiosity, not a long-term usage expectation. But I think there has now been enough development on it that I should be able to expect the iPhone I bought in 2020 to last, say, to 2030 if I want to stick with it.
OK, well now come up with a formula that says, from a series of objective and not-readily-gameable characteristics, how long the warranty for a random electrical/electronic device should be, how long it should be 'supported' (for some definition of 'supported'), and how long critical errors should be fixed. (If you want to separate the two categories, provide a way to do so: is a 1980s microwave with an 8-bit controller for the timer 'electronics' or 'electrical'?) Also specify which issues, and at which point in time, should be paid for or not.
There is a lot of 'everything should be supported for eternity for free' in this thread, but I see no (at least somewhat) concrete suggestions for better solutions. If it was easy, we would have done it, people.
"If it was easy, we would have done it" is just not true.
Companies have little to no incentive and consumers have little to no power. Barring some very unlikely industry-wide agreement, it requires legislation.
(Note I'm not replying to your first paragraph at all, only the last)
Here's a concrete solution -- The US Consumer Product Safety Commission should be given regulatory power over all consumer products, such as to enforce a minimum five (5) year support for repairs and parts availability, and standardized safety + recall notices.
NHTSA already does all of this for cars and vehicles. (They require a minimum 10 year parts availability. They handle recalls via VIN for auto manufacturers, including notices and affected model lookups, etc.) - https://www.nhtsa.gov/vehicle
US-CPSC should run a matching mirrored program exactly like it, but for consumer goods sold in the US.
> There is a lot of 'everything should be supported for eternity for free' in this thread, but I see no (at least somewhat) concrete suggestions for better solutions. If it was easy, we would have done it, people.
That's the easy, concrete solution. It's just not "profitable" from a capitalist perspective, and that's why no manufacturer is doing it.
Well not exactly nobody. Some manufacturers like FACOM (probably others) that are not into electronics have (or used to have) lifetime warranties and will happily repair or replace a product that is decades old.
The question is: as a society, do we want to pursue everyone's good and make things more efficient and ecological? or do we want more profits for the industry owners and more inequality and damage to the environment?
These two paths are fundamentally opposed. There is no conciliating them. If you want "nature" and "consumer rights", then you need to abolish "capitalism".
Moreover, I think WD's defense here will be that while you could, you did not have to plug your WD device into the internet for it to serve its purpose. They will try to argue that the majority of their offering is hard disks, and if your drive has the extra ability to connect to the internet, that's an extra feature.
Yes, but that "extra feature" is most probably the main reason that model was bought instead of a cheaper model without a network connection. That is, that "extra ability" would be in fact the main differentiator of that model.
I think the issue here is that "end of life" is defined way too soon for portable hard drives. The mechanical parts have a 4% annual failure rate, which compounds to a half-life of about 17 years. The software should be supported for that long.
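As a quick check: a constant 4% annual failure rate compounds, so the half-life is ln(0.5)/ln(0.96) ≈ 17 years, not the 12.5 years you get from the linear 50%/4% shortcut:

```python
import math

annual_failure_rate = 0.04
surviving_fraction = 1 - annual_failure_rate  # 96% of drives survive each year
half_life = math.log(0.5) / math.log(surviving_fraction)
print(f"half-life ≈ {half_life:.1f} years")  # ≈ 17.0 years
```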
I think that we need to separate the life of the hardware (WD My Book) and the life of the cloud service (the "Live" bit).
The drives themselves and their function as a NAS on a LAN is still fine.
When "Live" EOLed, they should have turned off the servers and disabled the function. If they weren't going to service/support it anymore, then they should have been honest about it.
Of course, their consumers would have screamed, so they did the worst possible thing, kept the service running as-is and didn't make it clear that people were leaving themselves open to attack.
> I think that we need to separate the life of the hardware (WD My Book) and the life of the cloud service (the "Live" bit).
That makes me think of the internet radio owners (Grace Digital, Yamaha, etc.) that all had their products brick because the underlying look-up service just up and went away. No problem. You just have to buy another hunk of plastic and microprocessor.
I agree with your conclusion of increasing the length of support. But your half-life analysis isn't really the right choice. Hard drives tend to have a bath-tub failure rate: https://en.wikipedia.org/wiki/Bathtub_curve
There is also EOL for the HDDs and EOL for the NAS itself. HDDs are easily replaceable and the NAS hardware itself should last for a long time. It's a linux box that shares a filesystem via CIFS/NFS no need to throw that out when a HDD dies, that'll last for a decade, if not more.
You can probably open up the portable hard drive enclosure and remove the physical drive, ending up with a bare 3.5" disk. Some contain high-quality server hard drives (some don't) at a low price. You do lose your warranty, though.
...and yet Apple still fixes security bugs in your OS even if your laptop is outside of warranty. I say as long as the device still connects to the internet without a clear error message etc. it should be supported. Realistically the average user has better things to do than check the support period of a random network drive they bought at some sale.
They would. But if there was a software vulnerability that would wipe out the hard drives of computers still realistically in use, much past that 3 year agreement expiry? I'd expect them to be on that.
Suppose there were a vulnerability in Mac OS X Tiger that wiped the hard drive of any G4 Powerbook connected to the internet. Would you expect Apple to be on that?
From conversations on this website about Apple's responsiveness/attitude to security researchers hunting bounties, I have gotten a contrary impression about Apple.
As a user interested in security who follows the field quite closely, I know about the limitations of their bug bounty program and about some bad experiences people have had with it. They still ship security updates to 8-year-old devices and 3-year-old OSes. Is that a problem?
WD sells the hardware, hard drive, and software as one sealed unit. The part that fails first will be the hard drive inside. The unit shouldn't be end-of-life until it actually should be expected to fail. (WD claims the units were end of life in 2016)
Yes, but there's a difference between the expected rate of mechanical failure, and the existence of a software bug.
The current fault erases everything on the drive. That can fairly be equated with mechanical failure. Let's imagine a different bug, for instance one that allows anyone on the internet to read/write all the data on the drive. If one of these bugs emerged after 12 years, then there is a much stronger argument that the manufacturer should fix it, because it is a fundamental fault with the device.
Under the Sale of Goods Act 1979, the device must be suitable for the purpose for which it was sold, and there is no time limit on that, so any consumer with the device could demand that the person who sold it to them fix it. The device was unsuitable for the purpose for which it was sold from day one - it just took 12 years to find that out.
> I suspect there will be litigation, but I am not a lawyer. I will be interested to see it though, what is the responsibility for using things post end-of-life?
Any situation where "you are on your own" should include the dropping of protections preventing you from actually fixing the device. If the manufacturer refuses to produce updates, that should NOT mean there is no way to update the software on the device.
As a consumer, this is the reason I have boxes with hundreds of £s worth of routers which I've taken out of commission, because the manufacturer stopped updating them.
This really shouldn't have to be a consumer issue, it's wasteful to have to replace something like a router after the short EoL and inevitable vulnerabilities are found.
Perhaps we need more modularity, so core components can be swapped out to upgrade them.
Seems like the real answer is, you have to pay for ongoing support or the base cost of the device has to go up a lot or you roll your own with pc hardware and you're responsible for updating and making sure everything still works.
Personally, I'm not opposed to any of those plans.
PC hardware isn't ideal because you lose the benefits of specialised offload capabilities.
I'd be willing to pay a nominal annual fee for devices which are important to get firmware updates; router, WiFi AP, phone, etc.
To an extent I've partly achieved the modularity I speak of; after becoming fed up with decommissioning WiFi routers every couple years, I moved to router, switch, AP (Ubiquiti gear).
>PC hardware isn't ideal because you lose the benefits of specialised offload capabilities.
Just so you know, this isn't true. Nothing stops hardware offload from being put in a PCIe slot (that's where it tends to have started actually). And indeed that's a fairly standard thing on decent NICs, which don't have to be particularly expensive if you buy used. I've switched to OPNsense for routing functionality at all sites I manage, and have had no issues with hardware offloading for CRC, TCP segmentation and large receive offloading, or VLAN hardware filtering with Intel or Chelsio NICs. Both are very well supported under FreeBSD. I suspect a Linux based solution would have an even wider range of options since it tends to have more hardware support, so something like a Mellanox card could work.
At any rate though it works well, and running on a PC opens a vast array of useful options and flexibility, all the standard tools are there, and one can get something enormously more capable than a typical pre-made gateway/router for cheap.
>I moved to router, switch, AP (Ubiquiti gear).
I did this as well way back, and still run the switch and APs as UniFi for now. But I'd caution against depending on them for routing, it's always been a weak point and they've really, REALLY gone down the tubes on development there long since. UniFi gateway/routing and most network services that they've decided to make dependent on it (DNS, DHCP etc) is frankly complete crap. OPNsense (probably pfsense would work too but I didn't want to go to another proprietary solution with a worrying concentration of asshole at the top), VyOS or the like aren't quite as nice to manage, but it's so nice to have normal capabilities and something kept up to date again.
Given my statement, you're correct, but what I meant to say was: I can't just take a random PC(-like) device and expect hardware offload or other similar capabilities; I still have to buy specialised hardware, a suitable NIC, e.g. Intel. I make a point of buying Intel NICs for this purpose.
> But I'd caution against depending on them for routing
I agree. I've been running Ubiquiti for 5+ years now at home. At the time nobody was recommending against them, quite the opposite in fact. I use EdgeMax/EdgeSwitch/EdgeRouter + UniFi APs.
> VyOS or the like aren't quite as nice to manage
I'm quite happy with VyOS/Vyatta, I'm not interested in fancy UIs, which made me _less_ interested in UniFi.
I've considered getting a PCEngines device because of the open/libre bootloader (uboot), when the EdgeRouter finally kicks it. It's nice and compact like a router, I can stick it in my 12U network rack cabinet, and I'll be able to keep it up to date for longer as well as integrate it better with the tools I use.
>I can't just take a random PC (-like) device and expect hardware offload or other similar capabilities; I still have to buy specialised hardware, a suitable NIC, e.g. Intel. I make a point of buying Intel NICs for this purpose.
I mean sure, you can't just take an entirely random PC, but then if you just grab any completely random AIO piece of junk you're probably not going to get a great experience either, "hardware offload" or not. The variety of kit available, particularly used, that has even built-in Intel networking chipsets which work is wide enough and cheap enough that it doesn't seem like a particularly limiting factor.
>I agree. I've been running Ubiquiti for 5+ years now at home. At the time nobody was recommending against them, quite the opposite in fact. I use EdgeMax/EdgeSwitch/EdgeRouter + UniFi APs.
Yeah, when I started with Ubiquiti they were excellent with much promise to come. Unfortunately their CEO is a mixture of toxic as hell and seems only semi-invested at this point while also having complete company control (don't let the "public" aspect fool you, he has a super majority of the shares) and has torpedoed their talent pool and vision. Damn shame, I don't know of anyone else pursuing quite the same thing at all.
I wouldn't bother with them for LAN stuff (their PtP/PtMP is still pretty competitive and useful) greenfield at this point, but even so I think they're fine on the switching side and acceptable for an existing investment on the WAP side. For switching they stopped being as feature/price competitive a while ago, but that doesn't harm the basic functionality. I have doubts about them navigating the multigig or WiFi 6E/7 transitions, they've been able to coast on old talent and investment by people no longer there for a long time. But that won't get pressing for a while so it's not a driver.
The gateway aspects definitely are though, awful. Edge is certainly somewhat better, but even there one can see the stagnation and rot.
>I've considered getting a PCEngines device because of the open/libre bootloader (uboot), when the EdgeRouter finally kicks it. It's nice and compact like a router, I can stick it in my 12U network rack cabinet, and I'll be able to keep it up to date for longer as well as integrate it better with the tools I use.
I considered them as well as a range of others that supported coreboot. But in the end, since I have a rack in a more out of the way place, I just waited for a good 1U server deal to turn up on Ebay and grabbed a few of those. I have a decent SuperMicro system and a few HP DL20s; Provantage even briefly had a bunch up brand new for $350. For something as important as a gateway, it's actually kind of nice to have things like full lights-out-management and kit that's all designed to run 24/7 for years. 1U stuff of course tends to be noisier, but swapping fans for something quieter even at 40mm has tended to work pretty well for me in this application, since they tend to be relatively power sipping (at least for x86). No GPUs, no big stonking 100W+ CPUs, not a whole lot of watts to dissipate even in that form factor. Though it's too bad there don't seem to be more options, without going custom, for something SoHo/SMB focused that could trade Us for noise a bit more. Maybe that'll come around; in data centers density is king of course, but I recall reading some articles that even there, let alone in company IT rooms/closets, there has been a growing appreciation of the dangers of high workplace noise levels.
Four-year support for a device that is connected to the Internet is ridiculous. I bought a Synology device in late 2012 and it's still getting major release updates.
I wonder if the bad publicity which will come out of this was worth it for them. Fixing such a crucial bug would have probably taken a few hours of work.
You can release patches 6 years after your device is EoL but there will forever be more security issues and people using your ancient product (think how long it takes some versions of Windows to truly reach less than 100k active machines. Hell I wonder if Windows 3.1 has really reached that number or not. The long tail is going to be loooong). Not to mention you've created a precedent that the device is still getting patches and can be used by users, only making the lifecycle issue worse.
You can release a version which severely limits the capability of the product or effectively disables it but this is just a guaranteed way of getting bad press and even more customers will be mad at you for killing a device early.
You can turn the device over to the community (if you can manage to get it through legal and 3rd-party agreements), but that isn't actually going to solve anything, as it's not a product for extremely tech-savvy users; at best it buys deflection in the news report in exchange for the effort of doing this (if you can at all).
You can claim the lifecycle is over and years later and be technically correct but still get the bad press and user feedback anyways.
In retrospect, in this particular case it would have been really great for them to fix this particular issue, but that's an extremely hard thing to accurately judge unless you're from the future. I mean, what percentage of devices that are many years past EoL have you heard of taking this bad a publicity hit? I'm sure there are many individual cases to point to, but it's nothing against the absolute deluge of devices that have been discontinued over the years quietly without issue.
EOL is 3 years after purchase (if bought in 2015)? Just imagine a car with no more spare parts after 3 years. Personally, anything below 10 years sounds ridiculous to me.
With modern high end smartphones it's the same issue. At best you get 2 years of patches now, which is ridiculous, because they really expect people to be able to afford >700€ phones that often.
I think that's more a specific vendor / android thing?
I just bought a 2016 iPhone SE second hand. It's running the latest iOS 14 right now, and is officially supported on iOS 15 too.
I think Apple is an outlier in terms of support, which I think is silly, because 5 years of support really should be taken for granted. There are many working phones or computers that are 10 or 15 years old and healthy.
This is why I'm partway through switching from Android to Apple as I replace devices. I could go with Android devices more likely to have third-party ROM support, but if I'm going to be concerned about security and patching why should I trust a pseudonymous person on a website forum to run my device?
As more security-related features get added, Android is also losing some of its differentiating abilities - e.g. wifi scanning isn't possible on iOS, but I think it's also been getting more restricted on Android in the last couple of releases. That and the must-use-our-WebKit browser restriction on iOS are the only things that have really annoyed me on iOS.
2015 iPhone 6S's are still getting patches, in fact it'll be compatible with iOS 15 this coming fall. It'll have seven years of ongoing support by this time next year.
Unfortunately the 3 guaranteed years of updates for pixels is pretty much the best in the Android world. There are very few exceptions like FairPhone reliably providing long term updates
In Android-land this is currently about the best actually. There is a chance of it improving over time as the updates will be decoupled from hardware support but the situation is markedly worse than on iOS.
It's difficult to imagine many of the smaller IoT companies providing support for junk made back in 2010, but I do have a daydream law/policy compromise:
Require all hardware devices to have ongoing security support for twenty years. A company can opt out of that support after three years, provided that they open source everything down to the firmware, and provide documentation and any necessary jailbreaking tools.
Also an amendment to bankruptcy law that requires open-sourcing proprietary digital tech as part of filing.
Huh. This is a huge security flaw and they decided to not patch it. Winning is patching this.
This is so disingenuous (from the first report of this flaw):
“Western Digital takes the security of our customers’ data seriously, and we provide security updates for our products to address issues from both external reports and regular security audits.”
> Huh. This is a huge security flaw and they decided to not patch it. Winning is patching this.
"Winning" isn't "gee if we could just say we'll patch every discontinued product forever and imagine it had no downsides wouldn't that be great" - that's known as "dreaming".
Which is why I think if we are going to do IoT, it needs to all be behind some gateway designed by a company that will maintain security for decades.
Make a NAS drive which exposes everything to the gateway without bothering for authentication and then let Apple/Google/Amazon work out how to defend access to the resource.
The IoT device will still be connected to a LAN/WiFi network to access that gateway. Local attacks will still be possible (imagine a flaw in the TCP/IP stack, local firewall, or anything in that direction). And if that big company gets hacked (never say never), hackers would get access to a huge amount of devices ready-to-pwn. Not even thinking then about what this would mean from a privacy/mass surveillance point of view (by those companies or by agencies infiltrating them).
I can kind of see a device that has no ports open on the local network, and just uses an always-on ssh tunnel to HQ to receive commands. That way it's not exposed to anything on the local network, and it's the company resources that would have to be hacked first.
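A minimal sketch of that pattern with plain OpenSSH (the hostnames, ports, and account names are hypothetical; a real vendor would likely use its own transport): the device dials out and keeps a reverse tunnel open, so nothing listens for inbound connections on the customer's LAN.

```shell
# Run on the IoT device: maintain an outbound-only reverse tunnel to HQ.
# hq.example.com, device-tunnel, and port 2222 are illustrative placeholders.
ssh -N -T \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=30 \
    -R 2222:localhost:22 \
    device-tunnel@hq.example.com

# From HQ, the vendor reaches the device back through the tunnel:
#   ssh -p 2222 admin@localhost
```

In practice you'd wrap this in a supervisor (or use autossh) so the tunnel is re-established after network drops.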
There is: don't release devices with security flaws in the first place. The fact is, they released a fatally flawed device. That the flaw was discovered later doesn't change that fact.
I think the way we talk about security patches and updates obscures the fact that they're correcting fundamentally flawed software. In other circumstances, this would result in product recalls. Those financial incentives are a big reason why product recalls for serious faults are fairly rare in other types of products.
Software should be more secure. However I don't think it's realistic to expect bug-free perfectly secure software. There is no field of engineering where 100% perfect tolerances are possible, and when you start getting past 99% the resource requirements to get to the next fraction of a % quickly go non-linear.
This is why 1) Security patches will pretty much always be necessary and 2) relying on perfect software alone is insufficient. Other precautions also need to be taken to prevent or mitigate attacks. In this case, it appears the My Book attack requires the IP address of the device. That indicates that people impacted may have been running these networked in a way that exposed them or the relevant ports to the world, which is very bad security.
> There is no field of engineering where 100% perfect tolerances are possible
This is a false proposition.
All actual engineering professions I know of have processes, checks and balances to avert disasters and premature failures. No one expects all of the shingles to be perfectly straight or have the same color, occasionally a roof may have a bit of a leak, yet I think even a single 4y old roof developing a massive leak would be a big deal. Imagine the consequences if all 4-11y old red roofs from a large construction company collapsed or developed massive leaks overnight.
Neither bridges nor roofs keep standing because they are built perfectly, nor are all bugs security issues. Yet a single fatal flaw can bring a bridge down and a single off-by-one can be a root exploit.
We shouldn't expect perfect software, yet we also shouldn't need security updates (at least not often and on everything). WD SW was fatally flawed, shouldn't have been released, WD should be responsible. SW "engineers" should be ashamed to be associated with such practices. I know I am.
Roof/building/bridge construction are things that have been around thousands of years. And they still get things extremely wrong sometimes: look up the footbridge in London as an example, it needed the equivalent of a security patch. On the issue of roofs, it's also often recommended that you do annual roof inspections and maintenance, looking for potential issues.
Regardless, we're talking about security, so let's go to the actual real-world equivalent. Physical security. Is perfection possible there? Nope. Anything even remotely capable of surviving a sustained attack is going to cost an amount of money reserved for corporations and nations, not everyday consumers.
If you think the current quality of physical engineering is something we should hold up as a comparison to the security quality expected of software, let's compare notes in a few hundred years when programming reaches the same level of maturity.
Maybe I should have used electrical installations as an example?
> real-world equivalent. Physical security
Physical security isn't a good analogue, because you don't have a line of hackers, trying to get into your cupboard and magically also millions of other cupboards, closets and storage rooms with near 0 marginal cost and incomparably easy way to avoid getting caught.
Unlike physical security, perfect SW security is pretty much attainable (shocking, isn't it), with 0 marginal cost (no cost to duplicate). Physical access, social manipulation, etc., fall under physical security.
> footbridge in London
One bridge in 100, caught before it failed? Compared to all SW with network access and weekly updates.
We know how to engineer things so they are reasonably safe, we actually do it.
We also know many ways to make software much, much safer with only moderate investment, and some ways to make provably correct SW, and we do not care.
Exposing IoT devices to the world is (going to be) common with IPv6. I'm not sure how much SLAAC privacy extensions and "temporary" (hours-to-days?) addresses help.
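For what it's worth, privacy extensions mainly randomize the addresses used for outbound connections; a device listening on a port typically remains reachable via its stable SLAAC address, so they don't help much against scanning of exposed services. The Linux knobs look like this (these sysctl names are real, including the kernel's "prefered" spelling; whether your distro enables them by default varies):

```shell
# Prefer temporary (RFC 4941) addresses for outgoing connections.
sysctl -w net.ipv6.conf.all.use_tempaddr=2
sysctl -w net.ipv6.conf.default.use_tempaddr=2

# Lifetimes (in seconds) of the temporary addresses:
sysctl net.ipv6.conf.all.temp_valid_lft net.ipv6.conf.all.temp_prefered_lft
```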
That's essentially impossible for this class of device, though. While this case is a bug in the on-device API, there are countless others involving previously unknown vulnerabilities in widely-trusted software components. Heartbleed comes to mind.
Hardware needs to be liberated from unsupported software, and users should be made aware of vulnerabilities and support status. Making software vendors liable for future exploits of unknown vulnerabilities opens a can of worms that would have non-negligible consequences for everyone who writes software, and not all of those would be beneficial to security.
Product recalls are for defects that can cause physical injury. If the MyBook had an electrical defect that could cause a shock hazard or start a fire, it would have been recalled.
Many products have flaws, but if the worst that can happen is that the customer feels ripped off, they don't get recalled. There might be remedies under a express or implied warranty, but tech products typically disclaim all of that in their terms of service.
Total remote control of the device allows quite a bit... disabling thermal protection, disabling fans, changing voltages on regulators to be out of allowed range, etc.
Just because the attackers chose to erase the device rather than destroy it in this instance doesn't mean that handing control of all software on the device to randos on the internet, by not patching a root RCE vulnerability, is not a physical safety issue.
> Total remote control of the device allows quite a bit... disabling thermal protection, disabling fans, changing voltages on regulators to be out of allowed range, etc.
Having total remote control of an IoT device doesn't mean any of these things. Thermal protection is typically hardware-driven, on die; on cheap devices fan control is implemented in hardware because it's cheaper and easier than software solutions. Voltage regulators are hardware devices that aren't adjustable; even adjustable ones have a working range that is set via a resistor. Switching power supplies aren't software-driven.
All of that is software controllable on many devices, especially on SoC based NASes. Working range of adjustable regulators is almost always higher than the device connected to it (resistor may be used to set the default voltage for example), on pretty much all HW I've seen so far with no way to set hard limits. And you can also cause issues just by abusing transients even on regulators that can only be turned on/off. Thermal protection on SoCs is usually based on SW regulation loop. (grep for cooling-device through DTSes in Linux tree, all those SoCs have regulation in SW)
I have one, and looked into what I might put on it instead of the unsupported software some years ago.
I had noticed shortly after purchase that it was basically a Debian install, on MIPS if I recall correctly. While WD had essentially washed their hands of the device ages before, there wasn't going to be a current Linux to put on it either. Maybe one of the BSDs could be an option.
Instead of worrying too much about it, I have the remote access feature turned off, and have other backup options in case this goes belly up. I will be looking closely at what vulns come to light as being exploited though.
I had WD MyBook World and WD MyBook Live devices for many years as cheap cold storage. Replaced the last one earlier this year by an Olimex Storage Box [1] and a Seagate 2,5" 4TB ST4000LM016 drive. But even when I operated them, I disabled all the cloud features. These were Debian boxes; you could ssh in, enable NFS, stop all WD proprietary software, and live happily ever after.
Newer WD NAS devices are much harder to tweak; no one should buy them anyway.
Even as Debian boxes, they're limited because Debian isn't supporting the platform any more either. That means you can't just point it to a new repo, and do all the usual apt goodness to get an up to date system.
I'm leaning toward one of these as a basic NAS replacement: https://ameridroid.com/collections/single-board-computer/pro...
Based on the Odroid XU4, it's a fairly speedy little SBC that runs Armbian, and has a heat sink that doubles as mounting for your hard drive of choice.
That means they were still 0.2 away from the worst possible vulnerability score.
So, on the one hand, Could be worse! On the other hand, there could be a software dev sitting out there thinking "I screwed up big time & didn't even get a perfect score on it. Oh well, better luck next time."
I also hate how, when your phone or other device (e.g. I have an old Chromebook) is no longer supported, there is no indication whatsoever that that is the case. You have to surf around the web to find EOL dates, and every producer handles it differently. It should be visible and more in your face, or at the very least it should be in the "About" or Update settings.
curl has a handy --resolve option for telling it how to resolve certain names. Useful when you want to connect to something that thinks it's foo.bar.com and send the right TLS SNI and Host header, but DNS doesn't have it correct yet.
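For example (the hostname and address here are hypothetical placeholders), this pins the name to an IP for a single invocation while keeping certificate validation and virtual hosting working normally:

```shell
# Send the request to 203.0.113.7 but present it as foo.bar.com,
# so the server's vhost routing and TLS certificate check behave as usual.
# --resolve takes host:port:address.
curl --resolve foo.bar.com:443:203.0.113.7 https://foo.bar.com/status

# Roughly equivalent to a temporary /etc/hosts entry, scoped to this one call.
```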
Wow, so completely trivial to get root RCE on any of these devices, brutal, all you need is an IP address. That’s as bad as vulnerabilities get.
Insane that this has been known since 2018, and not patched - these devices are from ~2011-2015, far from ancient. I’ll personally never, ever buy anything from Western Digital again.
Typically when building a device like this you get a version of Linux / an embedded OS from the SoC vendor, and you are stuck with it because the SoC vendor doesn't provide docs that would allow drivers to be maintained. This makes ongoing support harder than it might otherwise be.
> I smell (but have no idea of the merits of) a class action lawsuit.
Class action lawsuits don't seem to help anyone except lawyers. I'm not being cynical; they really seem to benefit only lawyers. I've gotten caught up in plenty of class action payouts over things like this, and invariably I lose more on the time wasted reading the letter or email about it than I gain from being part of the class. Things like checks for $1.90 or a month of "free" service from something I haven't used in years. I want companies to be punished for egregious failures (not sure this one qualifies), but I'd honestly rather have the money go 100% to the government and be used to fund more enforcement and prosecutions. It's adding insult to injury when they make token reimbursements to us peasants.
The box should prominently display how many years of life you're buying, in the same giant font size that it shows how many terabytes of storage you're buying.
Read this, then checked my old My Book Live.
Sure enough... everything's gone.
I just used it for continuous laptop backups over my LAN, so unless one of them crashes tonight, I should be good. But, this will certainly give me pause when considering WD products, and this type of product in particular.
This is very bad, WD! Are your other products also vulnerable in this way? Why should anyone ever again trust your company to keep their data safe?
The affected model here, known as My Book Live, uses an ethernet cable to connect to a local network. From there, users can remotely access their files and make configuration changes through Western Digital cloud infrastructure. Western Digital discontinued the My Book Live in 2015. The support forum thread was first reported by Bleeping Computer.
The affected product is not a drive but a NAS solution. As a reasonable consumer, my thought process is to evaluate distinct products differently.
NAS is a cheap computer with attached drives. It’s going to have computer-like failure modes.
Non-technical users don't care about this distinction. The name Western Digital is on both, and that name's brand value is destroyed forever if this indeed turns out to be irreversible. Though they could probably pivot to selling their drives in other's NAS devices without much difficulty if they want to remain in this market.
Non-technical users don't really buy hard drives individually, they would at most ask their computer-savvy friend or go to a shop who will replace it for them.
I highly doubt the average non-technical user even knows who produced the hard drive they own.
> Non-technical users don't really buy hard drives individually
Yes, they do. My dad bought two WD My Book drives by himself as backup drives. Luckily, he was not aware that those things come with extra backup software, nor how to install it, but they are sold just like that, targeted at the average user in your average computer store.
Internal drives, you're right, likely wouldn't be bought by a non-techie alone.
But THIS affected type of drive is specifically marketed to a very non-technical user, and I can certainly confirm my non-technical friends and family have bought products in this segment independently, whereas my techie friends steer clear.
It's a NAS that's made to look like a drive. My techie friends would never buy it - they'd get Synology/QNAP, or do their own over-complicated time-consuming solution (slight editorial opinion there ;), or use cloud backup, or some combo. But my dad, mother in law, and other relatives have products like this, and have purchased them on their own. In fact, I think when asked, most techies would advise against buying a cheap but internet-exposed storage device for a non-technical friend :O
Your slight editorial opinion reminded me of this HN hot-take from the DropBox announcement:
> For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.
Somebody actually poisoned Tylenol. People realize that all the food they eat could have been poisoned in the grocery store. There's no easy mapping to "every external hard drive manufacturer could stop updating their products and your data could spontaneously delete itself."
True, a small computer running an Internet connected general purpose operating system should have a prominently labeled expiration date of some short time after the last patch. Use after that date is not recommended and potentially dangerous. WD sold something that looked like a toaster to the consumer but was actually a lump of cheese.
There have been incidents of (other manufacturers') hard drives shipping with pre-installed malware though [e.g. 0]. I think that's a good analogue for Tylenol.
Well, the GP said "why should anyone ever again trust [Western Digital] to keep their data safe?" Using a Western Digital hard drive, internet connected or not, very much means trusting a WD product with your data.
But I might be reading too literally. And personally, I do intend to continue purchasing Western Digital hard drives, just nothing internet connected, because I agree they're different products with different risks.
A NAS isn't actually distinct from a plain drive in that particular way. A plain SATA hard drive is a cheap computer with some attached hardware, just like a NAS. Literally, there's probably at least one ARM CPU core in there controlling everything and communicating over the link to your main computer.
The difference is the protocol that goes over the link. In a NAS, it's SMB/NFS access to files over ethernet/wifi, and for a SATA drive it is the SATA command set accessing blocks. The difference is small.
This is the third such major scandal from WD in as many years. First, selling SMR drives as CMR. Second, selling 7200RPM drives as 5400RPM. Now this. They've lost a lot of credibility, and I don't see any of the other manufacturers having issues rivaling WD.
I suppose you mean the other way around? Otherwise, what's the problem with selling a higher RPM device as a lower one? Isn't higher RPM better? (Genuine question, I don't know much about HDD).
The faster devices generally vibrate more and give off more heat, which might make them unsuitable for mass storage (e.g., in a NAS box or other multi-drive arrangement). The difference might technically even invalidate warranties.
They also draw more power, which might mean they simply don't work properly in some low-power devices, or mean that any UPS someone might have one connected to will not last as long as they expect during a power interruption.
WD Red is marketed for NAS/storage, where it's not uncommon to have 4/8/12/16 drives in the same chassis, where noise/heat becomes a very different problem. It turned out that the drives generated the same heat and consumed the same power as 7200RPM drives, while the data sheets were outright lies and BS. WD decided to double down on the BS for several months, and only recently released new SKUs where the specs matched the marketing.
It's correct, and higher is better: 7K2 drives are faster, often somehow longer-lasting, and generally better built. The downsides are noise, power draw, and heat.
They sourced drives from former HGST under WD, but 5K4 units were not available, so marketing rebadged 7K2 as "5400rpm-class" and lucky buyers got better performance.
Sort of. The WD Red drives outperformed actual 5400 drives but underperformed other 7200 drives by a large margin (180MB/s vs. 230MB/s), while not necessarily offering heat or noise benefits that matched. My current 8x8TB WD Red Unraid build will likely be the last WD drives I buy for a long time.
Public-facing Synology devices got hit in the past too, but they just used it to mine Litecoin. I think someone calculated it and figured out that it cost something like $400k in electricity to mine $100k of coins on the boxes.
And of course QNAP in recent months. It keeps happening, some of us (me included) keep ranting about it, and no person or company is doing much about it.
The majority of people who buy a NAS only use it to access their data within their internal network. But somehow they all include Internet/cloud features as an upsell.
Seagate mitigated their most famous problems. They were a good bargain buy for a while after that as they tried to rebuild their brand. Issue is now they're too proud of themselves, you're better off buying something else. They might still be the cheapest per GB for certain sizes, just look at the backblaze stats.
One option is shucking a high capacity external. Those are more likely to be high quality. Some are rebranded enterprise drives. But the hd model isn't guaranteed, and warranty support might be iffy.
For my RAID I try to buy disks from different manufacturers. With the limited selection this is extremely hard. I have to fall back to different models from the same manufacturer, and even buy the same model from different shops to at least get disks from different batches.
As part of the sale of HGST to WD, Toshiba got assets to produce 3.5” hard drives from HGST. I'd consider them the true heirs of HGST, and their good reliability as shown in Backblaze's studies bears that out.
Unless WD changed their mind again, HGST HDDs are still available as of 2021, because customers have been demanding the exact same HDD with the exact same model number from the exact same plant. Basically, they want to know it is the same old HGST and not WD. And some (if not most) of them are large, enterprise customers. So WD brought it back in ~2019 (without an announcement).
They used to make hard drives years ago, and I still have three old 1.5 TB units kicking around that I can't seem to throw out, because they won't die.
Seagate is the other big consumer brand and they also have their own set of issues. Anecdotally, I've had multiple Seagate drives die within a year or 2 whereas WD ones have kept going for years longer.
Seagate's anecdotal failure rate was why I kept to WD. Backblaze's numbers convinced me this is not a problem I need to worry about in the present. Seagate had much better price/performance than Toshiba in my local market when I last needed drives, so it's what I got.
I have been afraid to buy WD drives since the mid-2000s. I used to store all my media on them, and with WD, every 1 to 2 years my drive would be corrupted to the point that it needed formatting. Having to rebuild my collections multiple times, on ever-"improved" WD models, moved me away from them forever. I don't even buy server-grade WDs.
There are a limited number of hard drive manufacturers, and all of them have had reliability issues at some point, like the IBM "Deathstar" and the Seagate "Failacuda"; I don't know of an equivalent from WD. Reliability comes and goes, and you don't even know when you buy, because problems can take years to develop and depend on usage.
They are all in the same ballpark, with Seagate a bit worse than the others but not terrible like it was a few years ago. WD tends to be consistently in the middle of the pack.
The latest fail from WD (as a hard drive manufacturer) was when they sneaked inadequate SMR technology into "red" consumer-grade NAS drives. But others did more or less the same. Now, all manufacturers are transparent about the technology used and newer "red plus" drives from WD are CMR.
So I'd say, for now, WD is as good (or as bad) as any other, and I would let price decide. For reliability, that's what RAID is for. And in any case, RAID or not, and no matter how reliable your disks are, if your data is precious, you need backups too.
Mine is still prompting me to register my device. It's been a secondary Time Machine backup for 10+(?) years, but really is too slow to ever do a restore from. This is a good reminder to replace it.
In this particular case, there were My Book drives and My Book Live. When the Live part was configured, you would be creating an entry-point into your network for WD to run code on your drive. I know this, because I purchased one, and read the small getting started guide that came with it.
Needless to say, I never ran any of the Live code. Several of these sorts of things have come up in the industry that always made me recall how happy I was to not have those drives with their backdoors on my network.
The CVE says it needs the IP address. How did the entry point work? Unless it was something like NAT port-forwarding I don't know how the attack could punch through to whatever port the device was using to expose the API.
Verify that your previous backup drive is readable before writing to the next backup drive.
But hey, if you just got a ransomware note, and think "I'm good, I have my backups!", wouldn't you want to flip the read-only switch before plugging in those backup drives? I would. In fact, I'd flip that switch always before trying to read from a backup drive.
How many My Book customers would even understand the meaning of your [correct] advice? When companies fuck people over with a defective product, we should resist the urge to tell the victims to be more tech savvy and not use those sort of products. Particularly when those products are intended for the general public.
It's always the same old thing.
But the fundamental problem will never vanish: computers are complex, and no matter how hard you try with neat packaging and software, this complexity cannot be hidden. Sooner or later the illusion bursts at its seams and the user discovers another failure mode that they weren't even aware of.
WD really messed up there - but they and others will mess up again, so if the user's goal is not losing any data they'll still need to do more than buy the next shiny thing and click "accept" on the EULA. Because in the end pushing around the blame won't get you the files back.
The problem is that whoever designed the system should have done a better job. Computers are still (and probably will always remain) a niche skill, so the blame lies completely on the shoulders of the WD engineers/designers who left this option open on the device.
Very interested to see what the root cause ends up being - this is mind bogglingly bad. I know several people who use these devices because of their simplicity, and they’d be devastated.
I run a FreeNAS setup at home on an HP MicroServer. From time to time, I lament the amount of time I’ve spent over the years maintaining it, and after a recent drive failure (and wishing for more performance), I started considering consumer options like Synology/QNAP.
Articles like this always give me pause, and remind me why I invest the time I do to maintain my own thing. I’d rather fuck something up myself than fall victim to an issue like this, and the time spent setting it up likely pales in comparison to dealing with data recovery.
I really hope these users can get their data back.
QNAP's recent situation was hard to believe. Having only just fixed a significant SQL injection vulnerability[0], a spate of ransomware hit. It was originally announced to be this issue that people rushed out to patch[1].
It ultimately turned out to be a backdoor account[2]. Backdoors really upset me. Every reasonable "you can't blame people for mistakes" goes out the window when it's not a mistake.
Following the ransomware, they released an auto-installing malware remover: a Python script that detected one particular piece of code associated with this recent attacker. That malware remover was full of vulnerable exec calls, introducing multiple new RCEs.
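I haven't seen the QNAP script itself, but the general anti-pattern is easy to sketch: building shell commands (or exec'd code) out of strings an attacker can influence. The path below is made up for illustration:

```python
import subprocess

# Attacker-controlled input, e.g. a filename reported by the NAS itself
# (hypothetical value).
user_path = "/share/photos; rm -rf /tmp/pwned"

# Anti-pattern: with shell=True, the ';' would terminate `ls` and run a
# second, attacker-chosen command -- that's the RCE.
#   subprocess.run(f"ls {user_path}", shell=True)

# Safer: pass an argument vector, so the whole string is one literal
# argument and is never parsed by a shell.
result = subprocess.run(["ls", user_path], capture_output=True, text=True)
print(result.returncode)  # nonzero: the bogus path just fails to list
```

The same logic applies to Python's own `exec()`/`eval()`: untrusted input must stay data, never become code.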
Wow. Back door accounts are just totally inexcusable these days - manufacturers should be held to account if their back doors end up getting exploited by threat actors. (Granted, you could argue it’s hard to distinguish between a really terrible bug and a back door, sometimes, but “intent” should cover this difference…)
Would you happen to have a write up/article about this series of events that isn’t a sanitized series of security bulletins? Sounds like it’d make for good reading.
Reading further along in the forum, the `walter` thing sounds like it's only present in test code and comments.
The actual backdoor looks like the `jisoosocoolhbsmgnt` session ID [1] that was removed in the update [2]. It looks like a hardcoded session ID used for tests [3]. Leaving something like that hardcoded and active in the production code is inexcusable.
I've never tried FreeNAS, but am surprised it took so much maintenance. I've been using FreeBSD with an HP Microserver and ZFS for the same purpose for about 10 years. I just install the updates and everything just works. Curious what kind of maintenance is taking up your time?
For me, the issues were mostly related to the period of time I started using FreeNAS, and getting a bit too creative with plugins and jails.
On the first part, there was a period of time when the project was going through some upheaval (quite a few years ago now) and I had some issues with upgrades.
This goes hand in hand with the 2nd thing - I was trying all sorts of plugins and using it as a BT and newsgroup client. These did not upgrade well.
I suspect that a fresh vanilla build (TrueNAS now) would probably be zero maintenance. I need to upgrade my drives (a bunch of 1TB drives from the era when I built it), and I'll probably use this opportunity to start fresh.
Those projects that start easy, and then suffer from:
hmm, let's try that,
then let's try these add ons, etc.
And everything works, for years, and then you need to return to them, but can't remember anything!
And then, it turns out, that you don't have the time you used to have.
And even though you loved tinkering late into the night, 10 years later, night has become precious.
For sleeping. Or trying to sleep!
>Very interested to see what the root cause ends up being
The article says:
>Multiple users reported that the data loss coincided with a factory reset that was performed on their devices. One person posted a log that showed unexplained behavior occurring on Wednesday:
I'm leaning more towards 0day than "software bug", mainly because I can't really think of a reason why a bug would cause the factory reset to get triggered, but I can think of plenty of ways an internet connected NAS can be exploited.
>I am guessing this is normal? My data backups are offline. The idea of making my backups read/write on the internet seems insane.
It's a NAS. Backup isn't the only thing you can do with it. A personal cloud (aka dropbox) is also a valid use case, and requires internet access to work.
For certain notions of "read/write", the risk can be lower. You might make backups to storage, but not have permission to delete files. Or when you delete files, they are kept for a certain safety window (e.g. if you delete a file on June 24, the file is instead moved and can be restored any time before July 24, at which point it is purged).
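A toy sketch of that retention-window idea (the file name, dates, and 30-day window are just this example's numbers, not any specific product's behavior):

```python
from datetime import date, timedelta

RETENTION = timedelta(days=30)  # safety window before a delete is final

class SoftDeleteStore:
    """Toy backup store where deletes are deferred, not destructive."""

    def __init__(self):
        self.files = {}   # name -> contents
        self.trash = {}   # name -> (contents, purge_date)

    def delete(self, name, today):
        # Move to trash instead of destroying the data outright.
        self.trash[name] = (self.files.pop(name), today + RETENTION)

    def restore(self, name, today):
        contents, purge_date = self.trash.pop(name)
        if today >= purge_date:
            raise KeyError(f"{name} was already purged")
        self.files[name] = contents

    def purge_expired(self, today):
        # Actually destroy anything whose window has elapsed.
        self.trash = {n: v for n, v in self.trash.items() if today < v[1]}

store = SoftDeleteStore()
store.files["photos.tar"] = b"..."
store.delete("photos.tar", date(2021, 6, 24))   # purge date: July 24
store.restore("photos.tar", date(2021, 7, 10))  # within window: recovered
print("photos.tar" in store.files)              # -> True
```

With a scheme like this, even an attacker who "deletes" everything through the normal API leaves a month-long window in which the data is still recoverable.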
My guess is someone figured out how to bounce commands off WD command server (customer reps helping people reset their devices) and they are spamming customer ids.
Either that or these devices have unique ports on the WAN with a 0-day but that would take a lot more effort to exploit.
That CVE only shows how to factory-reset a single My Book to which you can connect and send HTTP requests. There's still something missing to explain how it could happen to seemingly all My Books at once, including those behind a NAT.
There are comments in this thread from people who read about this in the news, checked their own device, and found it was wiped as well. This seems like it would be very unlikely to happen if this were just happening in isolated cases.
QNAP has had a lot of security issues in the past, like unpatched RCEs and backdoors. I wouldn't recommend them (and never, ever use QNAP cloud or UPnP), as it seems they don't really do a good job on software QA, penetration testing, or security audits.
Hardware is fine but it's much more expensive than just building it yourself.
QNAP takes the "just ship it" approach. They have a lot of functionality, but if you deal with their software much, it's clear it was a rush job: lots of random breakages, bugs, stability issues, etc.
You’re wise to not trust it with anything important.
Some of their hardware is tough to do yourself. e.g. I've got a TL-D800S hooked up to my home server. I know I could build something comparable, but once hardware involves actual "building" rather than just "assembly", it's past the level of effort I want to spend.
I converted a QNAP device to Ubuntu after the updates stopped coming. It’s decent and fits perfectly where it lives in my house.
Of all things, it has a weird boot-up bug where it fails to boot to Ubuntu during about a half-hour window (roughly between :15 and :45) each hour, landing on a kernel panic within a second after GRUB. I have no idea where to start. Firmware is the latest.
I’m about 3/4 certain that it’s between those times, and never at the top of the hour.
Getting the exact panic would unfortunately mean to have to get it down from the shelf where it lives, and plugging it into a monitor to replicate. It doesn’t make it out of low-res text mode when it does.
Most of the time I'm in a rush to get it back up, as my family wants lights and Plex to work normally.
If you get a chance to test, I'd recommend changing the time on the device and seeing if that makes a difference, should clue you in on if it's a software or environment issue.
There are countless systems out there which are far older and also more secure. Age is of zero relevance, except to those who propagate forced obsolescence, mindless consumption, and the generation of massive amounts of e-waste. This "update mentality" is a cancer in itself. Especially when this is supposed to be a simple device.
That said, relying on the same credentials for a long time is more problematic. My bet is on credentials being bruteforced, especially if many people have left them at their default or used easily guessable ones.
Also - age is of zero relevance to internet connected linux devices?
I rely on many an "8-10 year old device that hadn't received a security update in 6+ years": my coffee machine, my fridge, my hifi, all my motorcycles. None of them runs Linux, and none is directly connected to the internet.
But think about this from the perspective of the average buyer/user. Hard drives are traditionally "local". Local = safe. Now, a new generation of drives are introduced with this awesome capability to get access to data remotely. "Cool", they think. They don't understand how the Internet works, and so they don't really see the issue, or if they have a basic understanding of networks they might understandably believe that "it's behind the router, so it's safe".
The remote access feature was disabled by default. For many people, the only reason it was accessing the internet was so it could get security updates.
I can't imagine a scenario where people assume an 8 y.o. device is automatically trash, only because it's old.
I'm typing this on a 10 y.o. laptop. It still works. Sure, the manufacturer and browser vendor released updates, but it wouldn't be too out of the ordinary if it didn't.
That being said, I think its predecessor would still work (for posting a comment to HN at least)
This doesn't appear to have anything directly to do with Linux, just crappy software written by WD running on it. Please be a little more careful with your criticism.
I don’t think the commenter was criticizing Linux. Seemed more a comment on the manufacturers who lazily slap these products together with minimal effort. Taking a free OS, sprinkling a thin layer of their software on top, and then abandoning their responsibility to maintain the full software stack being a common example of that minimal effort.
So, yes, essentially you end up with a bunch of devices running vendor-specific unmaintained Linux distros on the wide open Internet. The “unmaintained” part of that sentence is the problem, not the “Linux” part.
I agree, but it also makes me consider what the role of software engineering (as a discipline) is in this disaster.
Shouldn't we design systems that are hard to break by default? Shouldn't the OS assume that terrible things are going to happen anyway, and provide protection from bad faith actors in case the OS is indeed left unpatched for 10 years while being fully exposed to the internet? Is it even possible to design a system that provides this level of security so that we can get away with near-zero additional security expertise from product designers who build on top of it?
I think that, first of all, it's Western Digital's responsibility that things went south here. But shouldn't we build systems that provide bomb-proof security for the many companies that build on top of it? Is it even possible? And if so, how? In the end we would be doing ourselves (as consumers) a favor.
> Is it even possible to design a system that provides this level of security so that we can get away with near-zero additional security expertise from product designers who build on top of it?
I highly doubt it, at least not with 100% certainty. We build bridges to last and to withstand weather. These are enormous constructions with large safety margins and teams designated just for safety (against weather, earthquakes, etc.), with people's lives on the line. Yet we need to inspect them regularly to make sure no assumption broke, no safety system was triggered, and nothing unexpected happened.
If we can't make this work at this scale, I have no hopes that software for comparatively cheap consumer devices manages to achieve this.
You're correct that this would be very nice to have.
However this is a problem the industry has been struggling with for decades. It's simply not easy (and maybe not even possible) to achieve what you claimed "should" happen. Nobody knows how to produce bug-free software at scale.
I know. I'm just pondering if it's possible to come up with a design that guarantees a secure system even if you assume that all of your protective layers will have security holes in them that you will not be able to patch. Does or can such an architecture exist?
Current mainline Linux still supports the PowerPC SoC in those drives. It might take someone a week to prepare alternative firmware for the device that is modern, up to date, and safer. U-Boot dropped support in 2017.
So if anyone wants to support their 11 year old drive they can do so.
"Directly connected to the internet" is, by definition, true of any device that can, of its own accord, direct requests to an entity, ask it a question, and act upon the answer. From what I understand, these WD boxes go to a management service in the cloud, and were told they should factory reset. Whether something is pull-only (as in this case) or push (say, allowing HTTP or SSH access from a random host on the internet) is irrelevant if either results in unauthorized activity on a box in your home.
In defense of the parent comment, there is a meaningful difference between a device acting as the terminating IP meaning any open services are directly probe-able and a device sitting behind a firewall.
For this particular attack (assuming c2 server compromise?) that might not matter, but ultimately there is a massive difference in attack surface when comparing “direct” with “NATed”
How good do you reckon a six-years-past-EOL consumer Linux device's defences are against a browser running 3rd (or 1st) party JavaScript making HTTP requests to http://192.168.0.[1..254]/cgi-bin/factoryRestore.sh?
How much would you bet against that being an unauthenticated call, or one with leaked hard-coded creds?
Not sure this makes any sense, the 6 years past EOL consumer linux device isn't running a browser.
Or are you assuming the user's browser itself is compromised and is running random javascript hitting the NAS address? That would be unfortunate, but I'm not sure I'd blame it on the "6 years past EOL consumer linux device"
Doesn't need any browser compromise as such, just a user on the same wifi network running a browser and visiting a site with malicious JavaScript (possibly a malicious site, possibly a benign site with ads delivered by a shitty ad network, possibly a poorly secured site with persistent XSS flaws).
Classic old cross-origin request forgery. It ranks #7 in OWASP's top 10 website security flaws, and they have this to say about it:
“XSS is the second most prevalent issue in the OWASP Top 10, and is found in around two thirds of all applications.“
I remember Opera showing an error when I tried to follow a link from the Internet to a private address (192.168.0.0/16 and such). Don't browsers enforce that anymore?
Trying that against my router gives me a 403 Forbidden error, and if there were a known default password to my router, it'd try to do a factory reset on it. (It actually wouldn't; it'd send back a confirmation popup, but...)
Yeah difference in threat model of “evil internet can make tcp connection to me” vs “basically need a c2 compromise” is huge. Sucks for those that lost data either way.
Was very worried when I checked through the 5x 3TB My Books I have to find that all the data was ok - which I thought was odd as they're very old models.
Turns out at the start of COVID I redid my home networking, and the gateway addresses on all the My Books were incorrect, so they haven't had internet connectivity for over a year... (quickly adds extra firewall rules to block them, for bonus safety).
Hopefully this flushes out the elephant in the room of the irresponsibility of manufacturers selling devices like this and then leaving the software unpatched / unupdated within a year or two of selling it.
Unfortunately this needs to become regulated. Either commit to "lifetime" updates (at least 10 years) or be forced to put a massive warning on the label advising the secure period for which the product can be used. Just like food has expiry dates, so too do these devices need them.
They are still pushing security patches for iOS 12, which runs on devices more than 5 years old. I doubt there are many iPhone 4s users left at all, since most apps won't work.
>Does it come with any guarantees of lifetime for those patches though?
This guarantee is called "consumer law". The requirement of a patch means that the product you paid for was broken. So yes, they absolutely have a legal requirement to fix that (or refund your purchase), and compensate for any damages that their broken product caused you.
Not that this is enforced particularly often, but given Apple's ubiquity and active hostility towards users that would want to patch vulnerabilities themselves, they specifically would get in a lot of hot water if they didn't provide patches for a device's lifetime.
But lifetime is evidently not as long as the device lasts, as we see with Microsoft. Plenty of XP machines are still alive out there and they have long been out of support. So while Apple does provide a lot of support for older devices seemingly, how much are they legally/contractually required to provide currently?
> But lifetime is evidently not as long as the device lasts, as we see with Microsoft. Plenty of XP machines are still alive out there and they have long been out of support.
This will vary from jurisdiction to jurisdiction, but where I live, "lifetime" is defined as the period of time in which a product is reasonably expected to last.
Talking about "XP machines" is a bit weird, because you're really talking about two different products there. You can very easily install whatever OS you want on there, so it's not an "XP machine", it's a machine and then a copy of XP.
In that specific case, the expected lifespan of a desktop computer is probably on the order of a decade. Maybe you'll have to replace a component or two (particularly HDD) before then, but you wouldn't expect to need serious repair work prior to that. But also, you wouldn't feel cheated if it died after 11 years.
The lifetime of the Operating System is another matter entirely. It's probably reasonable to make the case that the OS's lifetime is infinite, given that software doesn't degrade in the same way physical components do - if a non-networked program is broken in 2021 it was broken in 2001 as well (assuming you're running the same functional hardware, which isn't Microsoft's problem). It's also totally reasonable to make the case that the OS's expected lifetime is similar to the lifetime of the hardware it's going to be installed on, and I think this is probably the stronger case.
But whichever of those you side with, it doesn't really matter. If you buy a washing machine and it breaks after 2 years, you're entitled to either a full refund, or a replacement. If you opt for a replacement, the company doesn't have to send you the exact same model of washing machine, just something that's equivalent. Similarly, if there's a bug in Windows XP that renders it broken (e.g., a critical vulnerability that makes it impossible to enable network connectivity without getting your machine compromised), then Microsoft can just go "here's a copy of Windows 10, go buck wild". Even if you operate under the idea that software does not have a lifetime, Microsoft are still providing updates for what is fundamentally the same product (Windows). That it's not specifically Windows XP isn't really a problem in terms of their legal responsibilities.
Now, could you call Microsoft up right now and finagle yourself a free copy of Windows 10 just because there's some unpatched vulnerability in Windows XP? I'm not sure, but I reckon there's a chance they'd do it just to get you off their backs. It's not like there aren't millions of pirated copies out there anyway.
> how much [support] are [Apple] legally/contractually required to provide currently?
The expected lifetime of a phone is lower than that of a desktop computer (for many reasons), so I'd say around 5-6 years per device. The software/hardware distinction mentioned above doesn't really exist for devices that Apple sells seeing as they actively try to stop you from installing any software that they don't explicitly approve of, so that would cover both hardware defects and software defects.
At an absolute minimum it would be 3 years, as if Apple tried to argue otherwise you could very easily point to things like their environmental impact reports that assume a lifespan of 3 years per device (and even describe this as "conservative"!).
That said, as far as I'm aware Apple typically goes above and beyond their legal support requirements for software on their devices. They did get sued where I live over warranty periods (they were claiming that customers needed to pay extra to get warranty for more than 12 months, which is absolutely false), but that was in relation to hardware.
> If you buy a washing machine [here] and it breaks after 2 years, you're entitled to either a full refund, or a replacement.
This really stood out to me. If I bought a washer here in the US and it broke after two years, I expect I’d be on my own.
I’d be frustrated, of course, but I’d either fix or replace it and go on, probably not buying from that brand again. (Although, buying another brand could still get me the same internal parts and defects nowadays.)
I find it fascinating that I’m not at all upset by this situation. I’m guessing that I conclude it happens rarely enough that I’d rather bear the risk over pushing that risk back into a bundled insurance product with every purchase. I don’t feel like an insurance fight and waiting for a service call while I have a pile of wet laundry and another of dirty. (But maybe I’m suffering Stockholm Syndrome here.)
> This really stood out to me. If I bought a washer here in the US and it broke after two years, I expect I’d be on my own.
You probably would be. The US has by far the weakest consumer law of any Western country. Possibly the most egregious example of this is that businesses can advertise something as costing $5, you walk into the store with a crisp $5 bill to purchase it, and then get told that you don't have enough money.
Whenever this is mentioned online there's typically a flood of people who live in America commenting on how that's totally normal and "there's nothing they can do dude", completely oblivious to how absolutely mental that idea is to the rest of the planet. So I think your idea of it just being a case of growing up in a system without consumer protections making it seem normal is correct.
> I don’t feel like an insurance fight and waiting for a service call while I have a pile of wet laundry and another of dirty.
Ah, but here you've missed the trick! Yes, if your product broke and you needed to get a completely new one and/or a full refund, that's a pain in the arse. But it's an even bigger pain in the arse for the business, who functionally just lost the entire value of the product. They're incentivised to prevent that from happening.
The effect of this law isn't actually to give you an option if a product is broken (although it does that as well), the purpose of it is to make manufacturers stop selling broken products. Because they know that you can get a full refund for years after the point of sale, they make damn-well sure that the product lasts that long.
There's no need to do that. They could render the internet connectivity part inert during the last patch when they don't want to maintain it anymore. Make it LAN-only. If the firmware is locked, unlock it so people can run their own stack.
I hope it doesn't just translate into subscriptions but also manufacturers thinking hard about the security footprint of these devices in the first place. Stop making these things so promiscuous in terms of their functionality and ship a minimalist hardened kernel, possibly even externalise it to a 3rd party for patches / updates. We may even get some open standards and protocols support out of it as an upside (since it will be so much more expensive to build custom proprietary crap if it means you have to support it yourself).
I think if long term support is your thing, ownership doesn't really make sense for anything. Ownership is always going to come with a certain level of do it yourself.
I find it funny that we simultaneously work in places that understand that software requires constant maintenance and charge customers a recurring fee for it while being gobsmacked that we have to pay indefinitely for the same.
You go on-prem for the same reason your work does: because the software licensing plus the amortized cost of hardware is less than the subscription.
I'm surprised, since data storage, transfer, and compute are the things where you can absolutely beat every cloud provider on price. That's why we keep our storage in our own DC.
There is no reason. It's even hard to justify non-IoT drives when you consider having to buy drives for backups and maintain those backups. The cloud options just come out as really good value.
I keep hard drives around for data I don't care about like games and movies but my personal data like photos is all in the cloud and it costs me next to nothing to store it.
This was an issue caused by a negligent company failing to keep their devices secure. For me the solution to this is not just to rely on another potentially-also-negligent company. In my opinion on-prem+off-prem, or multiple off-prem solutions are necessary for anything of actual value.
I think we need to lean into the environmental angle to have a chance of addressing this in a better way. Frame this legislation as anti e-waste. Any computer device with an expiration date shorter than 10 years gets a 500% tax, or something like that. I'm just spitballing here but it seems like a surmountable problem to me.
> “It is very scary and devastating that someone can do factory restore on my drive without any permission granted from the end user,” one user wrote.
I'll say again: backup drives must have a physical write enable switch on them. To all the people who argue against this - just you wait till it happens to you!
Not having multiple backups is also a very bad idea. One day my 8T drive slipped out of my hand and smacked on the floor. That was the end of it. But I wasn't out my data, just out the cost of another drive.
Cloud backup is a stupid idea. At any moment it could go dark, for any reason, and you have no recourse. Might as well bare your throat and hand some random stranger a razor.
> Cloud backup is a stupid idea. At any moment it could go dark, for any reason, and you have no recourse. Might as well bare your throat and hand some random stranger a razor.
I really disagree - physical storage in your home is the common alternative for a lot of consumer users - and that physical storage will involve maybe unplugging your external hard drive between backups - but otherwise never checking for the consistency and accuracy of the data nor the hardware. If you're working in a data center then it's your job to do these things and it doesn't take very much time... for normal folks cloud backups are likely going to be more reliable.
I think you've fallen into the trap of being scared by the news. Normal hard drive failures don't make it to HN because they're so boring and common. You even had one yourself. That's how unreliable maintaining your own backup device is.
This very story is just that - people had their own hard drive and it wasn't maintained for security so it got erased. At least this article needs to be added to your mental list of failures of local storage to balance your news-driven bias.
"At least this article needs to be added to your mental list of failures of local storage to balance your news-driven bias." What a toxic comment. You are saying far more about yourself than you are of someone you don't know at all. Projection and a lot of anger/hate inside.
If my cloud backup backend provider cancels my account, my backup software will quickly complain. There will be a window of exposure, but I can act quickly.
If my backup harddrive is in the same house as my main storage and my house burns down, I'm fucked.
If I'm fancy, I may have multiple backend storage providers, too. A lot easier to do that than to have multiple houses. I wanted to use a safe deposit box in the past, but last time I called banks near me, they didn't even have safe deposit boxes available to rent! I can ask some friends, but then I'd still be limited in the frequency of updates.
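For what it's worth, the "my backup software will quickly complain" part doesn't even need fancy software; a cron-able staleness check is a few lines. A sketch, where the marker path and the two-day threshold are both assumptions:

```python
import pathlib
import time

def backup_is_stale(marker: str, max_age_days: float = 2.0) -> bool:
    """True if the last-success marker file is missing or older than the threshold.

    A backup job touches `marker` after every successful run; a separate
    cron job calls this and alerts (mail, push, whatever) when it returns True.
    """
    p = pathlib.Path(marker)
    if not p.exists():
        return True
    age_days = (time.time() - p.stat().st_mtime) / 86400
    return age_days > max_age_days
```

This catches both "the provider cancelled my account" and "the backup job silently died", which is the window of exposure being described.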
"Redundant Array of Inexpensive Cloud Storage Providers is not a backup!!!" <grin>
(Can we make RAICSP a thing? Or do multi-cloud solution vendors already have a snappy marketing term for storing everything on all of S3, Azure Blob, and Google Storage at once?)
Yeah, I’m falling on this side too. A client with write-only privs to buckets on multiple cloud storage providers is literally the most durable backup in human existence right now. Like nothing else will get you worldwide DR and 30+ 9’s of durability.
3 (designed for) 10 9’s systems whose failure modes are wholly independent might be 30 9s.
I strongly suspect (as in “for the right price would stake my life on it”) that there are some common failure modes across the major cloud vendors’ offerings.
Your advice is directionally sound (and is what I do for family docs and photos); I just think it has fewer meaningless 9s than you’re imagining.
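The back-of-envelope math behind those 9s is easy to check: under the (generous) assumption of full independence the nines just multiply, while under the (realistic) assumption of any shared failure mode the common mode dominates. A sketch, where the 11-nines single-provider figure and the common-mode probability are both illustrative assumptions:

```python
# Annual object-loss probability for a single provider advertising
# "eleven nines" of durability (assumed figure).
p_single = 1e-11

# Three wholly independent providers: all three must lose the object.
p_independent = p_single ** 3            # ~1e-33, i.e. ~33 nines

# One shared failure mode (common software bug, credential compromise)
# with even a tiny probability swamps that figure entirely.
p_common_mode = 1e-9                     # assumed, for illustration
p_realistic = p_independent + p_common_mode

print(p_independent)   # ~1e-33
print(p_realistic)     # ~1e-9: the common mode, not the 33 nines, wins
```

Which is exactly the "fewer meaningless 9s" point: the combined figure is only as good as the least-independent failure mode.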
I didn't think syncthing supported encrypted replicas - that's why I passed on it years ago for a dropbox-like use case. I wanted to have a replica in the cloud as a relay, without having to fully trust the machine. A quick google suggests they may have something beta along these lines, so this may change.
I believe Resilio can do it, but it is closed source.
If you're looking for backups and not dropbox-like functionality, then something like restic might be more appropriate. It does actual content addressed backups, so you've got history for those times when you realized you messed something up six backups ago.
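The "content addressed" part is the key idea; a toy sketch of the principle (not restic's actual on-disk format):

```python
import hashlib
import pathlib

class BlobStore:
    """Toy content-addressed store: each blob lives under its own SHA-256 name."""

    def __init__(self, root: str):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        path = self.root / digest
        if not path.exists():          # dedup: identical content is stored once
            path.write_bytes(data)
        return digest

    def get(self, digest: str) -> bytes:
        return (self.root / digest).read_bytes()

# A "snapshot" is then just a mapping of filenames to blob digests; six
# snapshots of an unchanged file all point at the same single blob, which
# is why keeping deep history is cheap.
```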
Companies like Iron Mountain provide this service to businesses. Even if a bank no longer offers safe deposit boxes (and you were relying on their "safe", which for small banks was an arbitrary thing), there are usually facilities in larger cities that offer the same sort of service.
Just went through this with my family (in Australia) when the local bank branch said they were shutting down the service. It was basically a locked room in their branch, there was no "safe" involved.
Backblaze's latest drive report shows an annual failure rate for hard drives of 0.85%. I very strongly suspect the total of all failures of the type you're listing there is many orders of magnitude lower than that across all cloud storage users.
If you care about your backups, a single cloud vendor/account vanishing isn't going to be a problem, in exactly the same way that if you care about your backups, a single drive failure will not be a problem. They are both predictable and mitigable risks.
Yeah, but it certainly feels in the right ballpark from my personal experience. I’ve normally got 20 or so drives in use at any time (perhaps fewer spinning-rust drives now that SSDs are more common), and I have a drive go bad occasionally - not every year or two, but certainly every 5 years. So I reckon they’re at least an order of magnitude right.
That's why companies need to start offering paid/premium customer support: when you lose access to your account, you should be able to pay $10, $100, or $1,000 to talk to a real person who understands how the system works and where a mistake was made.
A hacker stole your account and you cannot take it back? A dedicated person will do a background check (e.g., call your employer) to verify that you are actually you.
A robot erroneously blocked your account because of misclassified spam-like activity? Your account is restored, and you are fully refunded.
Do you know of any tools for consistency/accuracy checks? I've been meaning to do some basic md5 checks for all my files but haven't gotten around to it. Every now and then I find a corrupt JPG image that has a rectangular band running through it, or something like that.
Generally I'd recommend using a filesystem designed for that. Something like btrfs or (I think) ZFS, which have checksums built into the filesystem (and if you set them up in a RAID configuration, these checksums can be used to correct data as well)
I believe that's some form of bitrot. AFAIK only ZFS can deal with this (assuming you use ECC RAM).
EDIT: According to Wikipedia, ZFS, Btrfs and ReFS offer strategies to deal with various forms of data degradation, although it appears as if ZFS is still king.
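Short of a checksumming filesystem, the "basic md5 checks" asked about above can be scripted in a few lines. A sketch using SHA-256; the paths in the usage comment are placeholders:

```python
import hashlib
import pathlib

def build_manifest(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest.

    Reads each file whole; chunked hashing would be better for huge files.
    """
    rootp = pathlib.Path(root)
    return {
        str(p.relative_to(rootp)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(rootp.rglob("*"))
        if p.is_file()
    }

def verify(root: str, manifest: dict) -> list:
    """Return the files whose current digest no longer matches the manifest."""
    current = build_manifest(root)
    return sorted(path for path, digest in manifest.items()
                  if current.get(path) != digest)

# Typical use: build the manifest right after writing the backup, save the
# dict somewhere (e.g. as JSON), and re-run verify() every few months.
# manifest = build_manifest("/mnt/backup/photos")
# verify("/mnt/backup/photos", manifest)  # [] means nothing has rotted
```

This only detects rot; unlike ZFS/Btrfs scrubs (or par2 files) it can't repair anything, so a second intact copy is still needed.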
For all my important photos I use par files. Using command-line tools like par2create and par2repair is simply part of the routine of storing photos from the camera on the NAS.
I also use it for music files, in the past too many mp3s got broken.
> Cloud backup is a stupid idea. At any moment it could go dark, for any reason, and you have no recourse. Might as well bare your throat and hand some random stranger a razor.
It's not stupid but redundancy is important, don't rely on cloud only.
My own anecdata: I lost a couple TBs with Streamload[0] ~2007 when they switched over to MediaMax. They wouldn't admit it was their fault and I never got any kind of compensation - even though I had just renewed my yearly subscription a couple months prior. Thankfully I had offline backups on CD/DVD but that was definitely no fun. More recently CrashPlan decided to no longer offer unlimited backup for non-enterprise users i.e. I would have to pay a LOT more money to back up multiple TBs. At least they offered me a partial refund for the remaining months in my yearly subscription. I really really hope Backblaze keeps their unlimited plan.
As for physical backups, my 8-drive RAID-5 GPT partition got somehow nuked when I was trying out a new Windows 8 installation (one of the first official builds). I never found out what actually caused it, but I immediately went back to Windows 7 and waited for 10 to be out for a couple years before finally switching over. I'm still paranoid and keep my backup server separate and off the network except to make diff backups.
Bit-rot is also becoming a serious issue. I hope to finally build a home ZFS server (with ECC RAM of course) to deal with this, as I've noticed non-streaming encoded videos sometimes just stop working after 15+ years. I'm not sure if it's actually bit-rot but I can't think of anything else that would cause this to happen.
In any case, keep multiple backups: offline and online!
Hard disk drives go on sale regularly. I replace mine usually once a year or so. They're not expensive relative to the loss of the data. I just bought a 500G SSD drive on sale for $60.
I've read that SSDs are also generally more susceptible to data degradation earlier than spinning rust, although I'm not sure how true that is. Wikipedia[0] says:
- Solid-state media...store data using electrical charges, which can slowly leak away due to imperfect insulation.
- Magnetic media...may experience data decay as bits lose their magnetic orientation. Periodic refreshing by rewriting the data can alleviate this problem. In warm/humid conditions these media, especially those poorly protected against ambient air, are prone to the physical decomposition of the storage medium.
In either case, only ZFS/Btrfs/ReFS seem to implement strategies to deal with data corruption. Kinda sad that Apple recently released a new FS and didn't bother to address this.
Why not as part of a backup strategy? My desktop hard drive failed, and while I had some local backups, the cloud backup restored me to almost the minute of the failure. The only downside to cloud backup is the slow restore process.
> The only downside to cloud backup is the slow restore process.
If you use BackBlaze, they offer multiple options to restore data quicker if you're willing to shell out more money, including AWS or sending you a physical drive with your data on it, albeit the latter is kinda expensive, but if you have a ton of data and need it ASAP, is probably worth the cost.
I recently lost a 16TB drive for the second time in 6 months, and my local backups were out of date by a couple of months, meaning I needed to restore ~200GB of data. Aside from the web interface's file explorer window being cramped (and support telling me to basically go fuck myself after I pointed that out), I was able to make ZIP files from recently changed data and use their downloader tool, which IMHO is a piece of crap and probably designed by some intern. But at least the download speed was fairly fast, even in Europe, in contrast to upload speeds capped at around 250 KB/s (though they thankfully support multiple threads).
Any single backup is a stupid idea. Combine them, and you've got a great backup solution. Why would I keep important data in one physical location?
My house could burn down tomorrow, but I'd still have my cloud backups. My cloud backups could roast in a fire tomorrow, but I'd still have my local backups. And probably several other cloud backups.
If you put all your eggs in one backup, sooner or later, you're going to get screwed.
The only reliable backup mechanism is starting a successful cult whose main tenet is that your data is valuable communication from God, so that they continually duplicate it. Anything else will go bust long, long before the heat death of the universe.
Amazon just announced AWS Cult-As-A-Service. For the small fee of just $0.05 per Gigabyte per month they will store your data in three separate cults located in geographically distinct regions.
WHAT!? How the heck did you get that conclusion from this story? These are literally your people, who backed up to a local device and essentially chose their own local NAS rather than the cloud. Which, unfortunately for them, came from a company with bad support that dropped it 6 years ago and likely did a shitty job even before that. But not everyone can be a tech person! Even tech people can't be that knowledgeable about more than a fraction of information technology; the field is crazy vast [0].
The failure modes here are exactly why "cloud" makes so much sense for so many. Keeping stuff up to date, or migrating if the company drops it (even noticing that support has stopped). Securing access. Verifying integrity. Amortizing the cost of different media cold storage fallbacks with things like tape robot facilities. And on and on. You wanna opine a bit on the relative probability that Amazon, Apple, Backblaze, Dropbox, Google, Microsoft, rsync.net, or a host of others just "at any moment going dark, for any reason" vs Western fucking Digital My Dropped-in-2015-Book dying? Think carefully.
I mean shit, I actually have a custom Epyc-based TrueNAS Core (not FreeNAS anymore! gotta keep up!) system with tens of terabytes as my main backing store, and replication to a remote site, with more usage of network segmentation, WireGuard, and ZFS user privileges (so compromise of the replication credentials can't delete old stuff) than I'd expect nearly anyone to have any idea about. I still have it going to Backblaze B2 as well, and I pay to have them hang onto it for a year rather than 30 days. And the only reason I can swing it all is that I can amortize a lot of it professionally, get other usage from it, and draw upon a bunch of both knowledge and metaknowledge that my particular life path happens to have granted me.
For 99% of the people in my life "turn on iCloud/Backblaze/whatever" is the right advice. Backups are very risky without heavy automation and regular attention on top.
----
0: One thing working at various levels has given me a deep appreciation for is what broad shoulders I stand upon, many layers deep, and how valuable and deep the knowledge of so many other tradespeople I interact with in life is. The mechanics, electricians, plumbers, general contractors, structural engineers, medical folks, researchers and so on I work with or have worked with all possess skills and knowledge that I never will. That's why we're social animals. I've traded on my skills helping someone improve their network for a hand with skilled carpentry or electrical work or the like. I don't see any shame in that.
They hooked the backup to the internet. Not a good idea. Nothing connected to the internet is secure, especially since nothing comes with physical write-enable switches.
Incredibly shallow and dangerously bad hot take. I think you have a very poor understanding of what "security" is, or of the concept of backups in general, which necessarily (like security) have a very strong human-factors requirement. A backup which is too much of a PITA and requires nearly any level of manual effort simply isn't going to get used much or maintained well at all by the vast majority of the population. Tools and systems exist to serve humanity, not the other way around, and when a system fails badly for many, it's not humanity that must all change; it's a shitty system.
Fact: if these people had been running to a decent cloud service, they'd probably still have their data. Really, "don't connect backup to the internet" is, like, fractally stupid. The more one digs, the more stupid angles turn up, recursively similar to the higher-level stupid. "Don't connect backup to the internet" means what to you? What about the computer itself making the backup? If that is connected it could get infected as well; now what? Everyone is just supposed to give up on all networks entirely because "hurr durr nothing on a network is secure"? Which is wrong anyway, since security is an economic equation, not some ethereal absolute. I hope you never give this advice to anyone IRL.
Consider that I worked on the 757 stabilizer trim gearbox. The idea is no matter what fails, the airplane lands safely. I've been trained to think that way.
Consider also that I've worked with computers for 45 years. I've seen about every failure you can think of, including fires and floods, explosions and earthquakes, viruses and phishing.
The most unreliable, by far, part of computing is the internet.
"Cloud backup is a stupid idea. At any moment it could go dark, for any reason, and you have no recourse."
If you are aware of the issue, and get kicked off AWS, or whatever, you move the backup. It is only a problem if your data catches fire and you get removed from the cloud at the same time.
Cloud backups are just one layer of a robust backup strategy. A copy on your desk, one in your closet, one at your parents, one in Backblaze, one in Glacier, etc.
I know you're joking, but to stay on topic: you still have to get them to even admit they have your data, since it's supposedly illegal. Thanks to Snowden we know, but even in light of those leaks, they have lied and will probably continue to lie during Senate/Congressional hearings. Once we get past that hurdle, then we can talk about restoring your illegally captured data ;)
My backup drive has a physical switch. I used to be meticulous in setting it to read-only except while making a backup. But now I just leave it in read/write mode because I lost my enthusiasm. It's not plugged in when not in use though, so it's a bit safer from this sort of thing.
How would you do automated backups with a physical switch? Are you proposing nobody does automated daily or more frequent backups?
I suggest rolling backups. Tapes are also a possibility, as they can be written in append-only mode.
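"Rolling backups" here just means rotating through several independent targets so no single run can clobber everything; the rotation itself is trivial. A sketch, where the target names are placeholders:

```python
def next_target(targets, last_used):
    """Round-robin through backup targets (e.g. three drives or tapes).

    Only the current target is ever plugged in, so a compromise or
    ransomware hit during one run cannot touch the drives on the shelf.
    """
    if last_used not in targets:
        return targets[0]
    return targets[(targets.index(last_used) + 1) % len(targets)]
```

With drives ["A", "B", "C"], runs cycle A, B, C, A, ... and at any moment at least two copies are offline.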
Just remember that ransomware will encrypt any drive you hook up to it, including your backup drives. A physical write-enable will prevent that, and it should be disabled anytime you are restoring from backup.
I want a write-enable switch even for simple tasks like copying one drive to another, just so I don't mistakenly copy the wrong direction.
> and it should be disabled anytime you are restoring from backup.
What I was alluding to but didn't make clear was that physical switches are easily defeated by human laziness or mistakes. Why have a physical switch on drives you're copying if you might accidentally switch the wrong one or eventually stop bothering? This already happens with warning popups in Windows when you try to run an untrusted program, for instance. People get trained to bypass the security because it's just a tedious obstacle.
You personally might be careful enough to forever set the switch correctly, but people who didn't even know their hard drive was years-unpatched and internet-connected, yet left it there to get hacked, also wouldn't reliably set the switches every time either.
Regarding accidentally copying in the wrong direction, I think a safer way than a physical switch is to show some details of what will get deleted. Maybe previews of images, tiny snippets of text from the files, a graphical view of the space they occupy, etc. Make it more visceral, like throwing away an actual book where you can see how many pages it has and what the picture on the cover is.
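That "show what will get deleted" idea is essentially a dry run of the copy (rsync's `--dry-run` flag does this for real transfers); the comparison step itself is simple. A sketch, with placeholder paths:

```python
import pathlib

def would_be_deleted(src: str, dst: str) -> list:
    """Files present in dst but absent from src: a mirror copy would remove them."""
    def rel_files(root: str) -> set:
        return {p.relative_to(root)
                for p in pathlib.Path(root).rglob("*") if p.is_file()}
    return sorted(str(p) for p in rel_files(dst) - rel_files(src))

# Present this list (with sizes, previews, page counts - make it visceral)
# and ask for confirmation *before* running the actual mirror; copying in
# the wrong direction then produces a long, alarming list instead of
# silent data loss.
```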
> backup drives must have a physical write enable switch on them. To all the people who argue against this - just you wait till it happens to you!
Yep, I once reformatted the wrong drive. I plugged in a firewire drive into a friend's Mac and didn't realize he had an internal drive that apparently also showed up with a firewire icon. Whoops!
>Cloud backup is a stupid idea. At any moment it could go dark, for any reason, and you have no recourse.
I wrote something about it [1] just a few days ago. And I have been banging on about it for years if not a whole decade.
>What we need is something like iOS TimeCapsule. Let the Phone do the Photo Management on Devices, and have an on site ( Home ) DataBackup as well as offsite ( iCloud ) Data Backup. Apple could even make iCloud Backup subscription a requirement for TimeCapsule to function.
We need both. In a simple, safe, easy to use fashion.
I don't know what you mean by "cloud", but an off-site backup is essential. Your home can be burglarized, or burnt down, or flooded. Basically anything your insurance covers for your home, you should cover regarding your data.
> At any moment it could go dark, for any reason, and you have no recourse.
There is no difference between your backup going dark or your main data source going dark. The whole point is that the data is duplicated and the odds of both of them going dark at the same time is low. This only means that you should check your backup regularly.
> I'll say again: backup drives must have a physical write enable switch on them. To all the people who argue against this - just you wait till it happens to you!
Reminds me of the little sliders on memory cards. Though I think those were software-driven; they didn’t actually lock out writing.
Are there any portable USB style disks that have a hardware enforced read only switch?
you do need some sort of off-site backup for disaster recovery. House can burn down, get burglarized, flooded, and you absolutely must account for that if you really want to mitigate the risk
Your faith in the internet doesn't match mine. (Far too many single points of failure.) But suppose you're right. I can buy 10 drives for cheap, but not 10 different cloud accounts.
I also don't care to have the internet snooping on my data. Yes, I know about encryption. Enough to know it cannot be trusted.
Can't wait for my $3 as compensation for losing all the data. Seriously now: a class action lawsuit should end with the culprit being out of business in cases of extreme negligence like this one. Anything less just incentivizes them to weigh the "risk/benefit" of doing nothing.
>should end with the culprit being out of business
I would agree if it's non-EoL products. This, not so much.
Don't get me wrong, I'm not trying to say since it's EoL WD don't need to do anything. Just that the degree of seriousness of this incident changes with that.
That means their EOL policy was negligent. They should have shut down the remote access parts when they EOLed it. It's like leaving hazardous waste around when shutting down a factory.
> I would agree if it's non-EoL products. This, not so much.
A company should not be permitted to just declare something “EoL” if it is still in widespread usage. They made it, they support it as long as any customer still has it.
At the very least, the EOL date should be prominently displayed on the package (and product page, in the case of online shopping). Just like how I was able to see CentOS 8's EOL date when deciding what OS to install on my servers, except guaranteed.
Hardware products used to be supported for 15 years, then 10 years, then 7 years, then 5 years… Now nobody knows. I guess hardware manufacturers saw what software vendors do with their products, and figured they can do the same. SaaS is worse: you are only supported while you pay.
Accumulation of benefits, shifting of responsibilities, the American way.
I can't agree strongly enough with you, though I'm not sure about the term you recommend. On the one hand, imagine if your car were end of lifed after a couple years. On the other hand, imagine if it took 20 years. The first would ensure you are screwed, the latter would encourage serial corporate bankruptcy.
There's a good middle ground. Perhaps, like the 10 years for cars, it needs to be legislated. Perhaps this is what we consumers have decided to accept, that once the warranty runs out, that's it.
In the case of car service parts, there’s a burgeoning after-market network of suppliers. I can probably find 5+ parts from different suppliers for brake wear parts for even 25 year old cars sold in significant volume. As someone wrenching on very much not-new cars for the family (and occasional friend), I don’t think we need a legislative solution for car parts.
Except that there already is a legislated solution requiring manufacturers to supply parts for (depending on where) ~10 years, irrespective of warranty.
The fact that there are after-market suppliers is more that manufacturers haven't worked out how to stop that for purely mechanical parts.
They would much prefer to tie you to their own maintenance network and parts if they could.
Products are sold with a warranty. If you want guaranteed support for X years beyond the warranty, pay for an extended warranty.
If the product in question doesn't have an available extended warranty, then pick another product.
Telling every hardware maker "you have to support every device you make forever" is ludicrous.
I have some 15-year-old netbooks in storage that still power on, should eeePC have to "support" them now? Should they have to maintain a repair depot with replacement parts forever? Should they have to just release their own version of Linux forever to support each model of hardware?
> I have some 15-year-old netbooks in storage that still power on, should eeePC have to "support" them now? Should they have to maintain a repair depot with replacement parts forever? Should they have to just release their own version of Linux forever to support each model of hardware?
If WD is gone, are you going to buy drives from Seagate, seriously? I've had 3 hard drives from Seagate just fail by themselves before, 0 from WD so far. HGST used to be good, but sold to WD. I have no experience with Toshiba, but they have less product diversity and higher prices.
Of course they issue a CYA about their cloud systems not being compromised, but then bury the fact that other systems were compromised by using the passive voice when they point to "threat actors".
I'm assuming the My Books phone home in some way to facilitate file access over the internet. If so, that WD system got cracked. That seems more likely since, at least for my ISP and I'm guessing most home ISPs, individual device ports aren't exposed to the world unless the user sets up NAT forwarding for them.
Anyone know if NAT configs are required to use the feature that lets you access files from anywhere? If so, someone simply cracked bad security 6 years out of date and scanned for devices.
Given typical IoT security, it might only have been using the default user & password.
The vast majority of users assuredly have an Internet connection they already use with at least one other machine on it, so unless ISPs are handing out another public IP for their NAS, it's almost certainly behind a NAT.
If so, someone simply cracked bad security 6 years out of date and scanned for devices.
Maybe the majority of them aren't so much an "exploit" as they are "easily guessed credentials". The fact that anyone else, anywhere else on the Internet can also use those same credentials to access their data is something that probably few people realise when they set one of these things up.
Judging by the CVE they linked, you need local access. But, since it's a GET parameter, this could be a drive by attack where someone puts all the links to common home networks in a page and the browser just takes care of that (and there are ways to guess the local network). Though, judging by the number of people affected, it might be something else.
I wasn't sure because apparently there can be issues with multiplayer or online play with some game consoles and people will be directed to mess about in their router settings. There were some stories about problems with the Xbox like that.
My money is on "hardcoded admin backdoor password leaked".
Edit: Found the manual[1]
"Changing a User’s Password: When viewing details about a user, the Administrator can change the user’s password (no password is the default setting)."
Alternatively - a backdoor without any authentication requirements at all was discovered.
There have been times when web crawlers have found pages with delete icons on them that are pointing to GET requests and, after dutifully following them to index the data, resulted in a server being wiped.
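The root cause in those incidents is state-changing actions reachable via GET: per HTTP semantics, GET is supposed to be safe, so crawlers and link prefetchers follow GET links freely but never submit POSTs. A minimal sketch of the distinction; the handler and route names here are hypothetical, purely for illustration:

```python
# Sketch: why destructive actions must never sit behind GET.
# Crawlers follow GET links while indexing; they never submit POSTs.

def handle_request(method, path, store):
    if path.startswith("/delete/"):
        item = path[len("/delete/"):]
        if method != "POST":
            # A crawler following this link must not be able to change state.
            return "405 Method Not Allowed"
        store.pop(item, None)
        return "200 OK"
    return "200 OK"

files = {"report.pdf": b"..."}
handle_request("GET", "/delete/report.pdf", files)   # a crawler "clicks" the link
assert "report.pdf" in files                         # nothing was deleted
handle_request("POST", "/delete/report.pdf", files)  # an intentional delete
assert "report.pdf" not in files
```

With the delete behind GET instead, a dutiful crawler following every link would wipe the store, which is exactly the server-wiping story above.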
There’s a difference between MyBook and MyBook Live. The Live version (this article) is ONLY accessible over the cloud service for some reason. Even if you’re on the same LAN there’s no SMB, FTP or anything else. Only the cloud connection. The MyBook is a more normal NAS.
Bought one of these for my dad a few years ago without noticing the difference. Cheap, hobbled, and bricked itself more than once. Even without this incident, would not recommend.
> The Live version (this article) is ONLY accessible over the cloud service for some reason. Even if you’re on the same LAN there’s no SMB, FTP or anything else. Only the cloud connection.
I found a user manual for a My Book Live model that suggests that it supports SMB, NFS, FTP, and AFP [1].
Seriously? What's the point of owning the hardware then, might as well just use Dropbox? Seems like you take on all the risk for essentially no benefit.
Hard drives are consumable devices. Dropbox sounds like a better deal considering you get redundancy, don’t need to replace drives when they fail, can recover deleted files for some amount of time (I think)..
This is much much less of a problem for consumer workloads. A pair of hard drives can last a household a decade of moderate use without needing to be replaced.
Turns out that if you wanted your data to actually be safe, you had to buy a new device every time Western Digital dropped support and stopped releasing security updates (3-4 years after launch), copy your data across, and throw out the old device.
To be honest, I just assumed the "Live" service was a subscription but it actually does appear to be one-time.
I should note, however, that it seems the drive is built in and not easily user replaceable/swappable, so it's not really "however many terabytes you want".
Sorry, but this is incorrect. I have a MyBook Live (says so on the device) which is accessible via LAN with SMB, AFP, FTP, etc. It also supports cloud access, I think, though I never enabled it and I don't currently want to replug it back in to check the (local) HTTP web interface to see what other settings it has.
It's more nuanced than that. Many of those things you mention are backups, just not necessarily that reliable for certain threats. By your standards, you'd probably say tapes, removed from their drive, are a backup. Yet, buildings burn down, taking the tapes with them. If restoring backups from off-line, off-site storage is going to take days or weeks, then for some organisations, you may as well not have a backup. Cloud sync, duplication, connected devices are fine for a backup, but probably not as the only backup. But they still reduce the likelihood of total data loss compared to not having them.
Diversity and risk analysis are what's important here.
I agree with what you say, I'm just nitpicking on differentiating redundancy from backups.
Yes, a building with tapes can burn down. But whether it does so or not is not correlated with the state of the system that the original data lives on.
Redundancy is a great step to prevent data loss. But redundancy won't do what a backup would: keep your data safe even if your system is fucked.
Cloud sync is a double-edged sword, because you make Dropbox a part of your system; their failures are not decoupled from yours anymore. Say, someone hacks into your Dropbox account, encrypts/deletes all data, and then downgrades/cancels your plan so that their backups go poof.
Then you'll find yourself with no data - and no backup.
Compare that to the tape burning down. You just make another tape. The chances of both your system going down, and the tape building having a fire at the same time are astronomically low, because these events are independent.
TL;DR: backups are decoupled, which is why you need them.
I'd say there's an even better distinction: backups keep your data safe even if you manually delete it. With only redundancy if you delete a file it's gone. With a backup, it's still in the last backup. Of course there's usually some maximum retention period for backups, but that can get quite long.
Everything else is secondary and can be covered by both. The chance of your redundant data center burning down at the same time as your primary is also low. If one burns down, you just bring up another. Likewise for smaller-scale users, where the chance of your cloud provider being down when your house burns down is also incredibly low.
So it's really the versioning or snapshot nature that's the key aspect of backups as opposed to simple redundancy.
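That snapshot-versus-mirror distinction is mechanically simple: hardlink snapshots (the trick behind rsync's --link-dest) are one cheap way to get it. A toy sketch of the idea, assuming a flat directory and made-up paths:

```python
import os, shutil

def snapshot(src, prev, dest):
    """Hardlink-based snapshot (flat directory, for brevity):
    files unchanged since the previous snapshot are hardlinked, so they
    cost almost no extra space; new or modified files are copied."""
    os.makedirs(dest)
    for name in os.listdir(src):
        s = os.path.join(src, name)
        d = os.path.join(dest, name)
        p = os.path.join(prev, name) if prev else None
        if p and os.path.exists(p) \
                and os.path.getmtime(p) == os.path.getmtime(s) \
                and os.path.getsize(p) == os.path.getsize(s):
            os.link(p, d)        # unchanged: share the inode with the old snapshot
        else:
            shutil.copy2(s, d)   # new or changed: a real, independent copy
```

Deleting a file in the source leaves it intact in every earlier snapshot, which is exactly the protection plain mirroring loses.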
Agreed that RAID isn't backup, but other online systems, while not completely decoupled, are less-coupled, which serves as a convenient middle ground for some cases.
>Dropbox allows you to roll back changes and undelete files
What you are saying, is that Dropbox allows you to purchase a backup service from them (without any guarantees, at that).
Cloud sync isn't backup. If someone performs backups on the either side of the cloud, that's great, but that's an extra, which is what I'm saying to begin with :)
You know, Dropbox functions in part on the assumption that users act independently. I wonder if they can tell, and if so what happens, if a vendor bug like the one in the article cause a large fraction of users to take the same action simultaneously.
By that logic _nothing_ is a backup. A disconnected hard-drive could be lost in a fire, theft, physical damage, you could stop paying your rent and your landlord could repossess it...
Telling people that any redundant copy that isn't 100% reliable (i.e., everything) isn't a backup is not a good way to get people to take backups.
While a hard-drive could be lost in a fire, etc., the state of availability of the data on it is decoupled from that of the system that has the data that's being backed up.
And that's the entire point! Take that as a definition: a backup is a redundant copy whose availability is independent of that of the original.
FYI, from one of the comments below:
>I've "lost" (had to restore from personal backup drives) data from Dropbox due to an error made by their support staff during a mass rollback, which was itself needed due to the Dropbox client interpreting a drive disconnection as a mass file deletion.
How decoupled is decoupled enough though? Does it need to be air-gapped?
You could access your dropbox data from computer B even while it is syncing files from computer A. Together with the 30 day file recovery window, this seems fairly decoupled to me. Assuming that the dropbox file history and the dropbox normal sync is seen as two separate systems.
Now you shouldn't have this as your only backup, but combine it with a second service with similar guarantees and even if you wipe your local drive and the wipe get propagated to dropbox, you are still backed by the 30 day history inside both dropbox and the second service, which are decoupled from each other.
I think it is correct that, without unusual precautions, Dropbox should not be counted as a distinct backup copy for satisfying the 3-copies rule. I've "lost" (had to restore from personal backup drives) data from Dropbox due to an error made by their support staff during a mass rollback, which was itself needed due to the Dropbox client interpreting a drive disconnection as a mass file deletion. (This was before they implemented mass rollback through the UI; it's a safer process now.) And this is fine. Dropbox does file synchronization, which means it propagates data loss as well as intentional changes.
I don't know if this has gone to court yet - but I'd assume that if Dropbox does a thing to delete a bunch of data that causes you a great loss of value when you're paying them specifically to store a bunch of data - then there would be enough of an implied contract in place that you could extract a bunch of pain from them regardless of what their TOS tries to say.
The law is complicated and the details are extremely important - but intent is a pretty big factor.
Also - I'd strongly advise someone who is deciding between buying an external hard drive or using a third party for backup to use the third party. There are definitely risks that remain, but their SLA is likely going to end up being a lot stronger than your hard drive's reliability.
I am very sorry to break it to you, but there is no Santa, and you'll be SOL if Dropbox's SLA hits a SNAFU.
Also, from another comment in this thread:
>I've "lost" (had to restore from personal backup drives) data from Dropbox due to an error made by their support staff during a mass rollback, which was itself needed due to the Dropbox client interpreting a drive disconnection as a mass file deletion.
That has gone to court. They're using the same limitation of liability (including data loss), as-is clause, and waiver of warranties found in every software license. It's entirely enforceable.
Read the TOS. You're not getting anything from them if they delete your data.
Then people irresponsibly storing single copies of critical data on Dropbox raise the cost for more responsible users, because upon loss Dropbox has to pay out the high value of the loss to the irresponsible user, but pay next to nothing to the responsible users, who already have multiple backups or store less critical data.
It should at least be a tiered offering with different liability caps.
I wouldn't call it a backup without further clarifications. Like, if it can be killed by the same power surge that takes your main server out, I wouldn't call it a backup.
There are a lot of ways to introduce redundancy in the system that would make backups needed much less frequently (if ever).
But what a backup does is it allows you to have your data in case your system and accounts are compromised, whether through a misuse, failure, or an attack vector.
ANY backup solution can go down with a big enough disaster.
An on-site backup is A backup, but it shouldn't be the ONLY backup. Typically you want a hierarchy of backups, one of which is on-site and on-line but a different machine, another of which is off-site but on-line, and possibly one or more offline backups. But on-line backups are fine, IF (and only if) they're copy-on-write/append-only.
Of course if the data can be deleted or overwritten too soon (even tape drive backup systems have some period over which they start re-using tapes) it's worthless. But that's an entirely separate concern from whether it's always-on or internet-connected or on the same national power grid segment as the original.
I was with you through the rest of the thread until here, but this is too extreme. By your definition there does not exist such a thing as an on-site backup, does there?
Take it to the extreme and everything is connected somehow at some point, thereby making a True Backup by your definition physically impossible, even off-site. The point where the backup is made could have a zero-day propagate to the tape machine, killing it and the tape by spinning up the motors. A Blu-ray drive could have its firmware pwned.
If not even incremental snapshots synced to a remote count as backups, you are probably the only one to harbor this interpretation of the word and it arguably loses its usefulness as a word.
Well, as long as you can recover from an onsite backup after logging into your system and completely destroying everything you can, it counts.
An incremental snapshot to a remote is as good as a backup as WD users just found out.
>The point where the backup is made could have a zero-day propagate to the tape machine
Again, the point here is the failure of the backup is decoupled from the failure of your system.
Again, everything is on a spectrum, and we can argue about definitions, however, a copy on a hard drive that's sitting on your bookshelf is decoupled from whatever happens to your computer unless both get destroyed (or stolen) - and that's how good that backup solution is.
The data on a RAID array will go poof if you accidentally rm -rf that partition, so the chances of it failing when the system fails are very high. Hence it's an awful backup solution.
What you have to consider is not whether the chances of failure are high or low, but whether the chances of failure of both the system and the backup at the same time are non-negligible.
I think this backup maximalism can be taken further. Say if it’s not one of many tapes stored offsite at an Iron Mountain repository, it’s not a backup. Or multiple locations at that, can’t discount redundancy (what if one location gets nuked)
OK, looks like many responses have the same misunderstanding here.
To find out if something is a backup or not, ask a simple question: does the availability of the data on it correlate with what's happening to the system it's backing up?
If the answer is no, then it's not a backup.
Whether your tape storage gets nuked or not does not, in any way, correlate with whether your system gets hacked into, or whether someone who has a grudge against you messes with your system.
Then P(data loss) = P(system is fucked) x P(backup is fucked).
If your backup is not decoupled from the system, then P(data loss) = P(system is fucked) -- which is far larger.
You don't need to take your tapes to Everest. You just need to decouple them from the source.
TL;DR: if you can destroy a backup from the system it's backing up, it's not a backup :)
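To put illustrative numbers on the independence argument (the failure rates here are invented, not measured):

```python
p_system = 0.05   # chance the primary system is lost this year (made-up number)
p_backup = 0.05   # chance the independent backup is lost this year (made-up number)

# Decoupled backup: data is gone only if BOTH fail, so probabilities multiply.
p_loss_decoupled = p_system * p_backup   # 0.25%

# Coupled "backup" (same box, same account, same credentials): it is lost
# whenever the system is lost, so it adds essentially nothing.
p_loss_coupled = p_system                # 5%

print(p_loss_decoupled, p_loss_coupled)  # roughly 0.0025 vs 0.05, a 20x difference
```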
I back up to Dropbox and an external hard drive next to my machine. I've used both to restore my data on hard drive death.
I can't imagine the amount of time and/or money it would take to do what you want. It certainly doesn't seem like it is something most computer users would do.
I can set anyone up with a (for example) iCloud + time machine backup with very little work, and it will keep ticking over, backing up. What's your suggestion exactly?
> There are two kinds of users: those who back up their data, and those who haven't experienced a data loss yet.
I have never had a significant data loss, and I back up my data.
The weaknesses of cloud are obvious, and alone it's not enough, but leaving it on the table is dumb unless you have some other way of practically and reliably maintaining a far-offsite copy. Most people don't, and even fewer people have a good reason to bother when they're employing offsite backups as part of a broader strategy like 3-2-1.
And then because that is so cumbersome and expensive, it gets done only once and when you have a disaster, you can restore your backup from a few years ago.
There just seems to be no way to square the circle: if the backup is truly offline, it's cumbersome to make; if the backup is effortless, it's online and vulnerable.
Discontinued in 2015, so no security patches. That's the problem with all of these purpose built IoT devices, a general lack of security updates even when they're still supported, and of course no chance at all if 6 months or a year after you buy it they decide to drop the product line.
"Western Digital has determined that some My Book Live devices are being compromised by malicious software. In some cases, this compromise has led to a factory reset that appears to erase all data on the device. The My Book Live device received its final firmware update in 2015. We understand that our customers’ data is very important. At this time, we recommend you disconnect your My Book Live from the Internet to protect your data on the device. We are actively investigating and we will provide updates to this thread when they are available."
I'm using a WD My Cloud for non-essential data. I was wondering if my unit would be affected as well. As far as I can tell, the latest firmware update completed January 01, 2020. I'm safe for now but I better build something more robust.
It runs Linux, and they could have at least opened up the firmware for open-source updates. WD's response to Windows 10 blocking network access (because the My Clouds use an old version of SMB) is to buy a new one (and trash working equipment) or enable a not particularly secure SMB version in Windows.
I don't see anything. The site isn't DRMed either; I'll demonstrate by pasting the text of the article:
> It doesn't matter what the files are: If you try to share these formats over a network, Western Digital assumes not just that you're a criminal, but that it is its job to police users. You see, MP3, DivX, AVI, WMV and Quicktime files are copy-protected formats.
> The list of banned filetypes includes more than thirty extensions. Some of them are bizarre: .IT files are banned — these are Amiga-style music modules composed with Impulse Tracker, a particularly well-loved tracking sequencer that hasn't been updated in almost a decade. I composed with IT myself, back in the day, and still have all my shitty compositions, none of which Western Digital would have me share. (Try MOD vs. Speak&Spell masterpiece Eddie Dreams of Women, if you dare: IT, MP3)
> Isn't it cute how the only data it views as worthy of policing are music and movies? These are the only copyrights that matter under corporate monkey law.
> It's the most astonishing example of crippled equipment I've ever seen. A DRM'd hard drive! Whatever next?
Dreaming meat?
> UPDATE: The manual's appendix and online support site provide setup instructions for SAMBA, allowing access over IP instead of with the DRM-infested and poorly-reviewed client app, elsewhere claimed to be "required."
> MOAR! Samba not enough? Gut the firmware and install made-to-measure Linux: An entire community of folks is here to help you hack your MyBook: mybookworld.wikidot.com.
Internal formatting was stripped in the copy-and-paste process. I could add it back, but that would detract from the purity of the approach. ;D
A quick search through Shodan will reveal thousands of unsecured storage devices just sitting open on the internet. I'm really surprised it took this long for something like this to happen.
When I purchased a My Book about 15 years ago, the drive was hardware encrypted by the controller board.
That means even if no password was set, if you didn't use it with the WD controller board, you couldn't access your data.
I threw the case away, sold the controller board, and formatted the encrypted 2 TB drive with ext4. I'm still using it for backups. I have a restic script that also backs up the data to Backblaze.
Sounds like it's affecting people using the My Book Live service to access it over the internet.
Just wow though... Hopefully it's not actually doing something like overwriting the data with random bits, and is instead just deleting the pointers to the files in the MFT. In the latter case, the data can be easily recovered.
The article mentions that a factory reset was performed on the devices. Factory resets usually leave the data in a readable state depending on how paranoid the developers coding them are - you're not usually going to run shred -n 11
The WD My Book Live series NAS has two models, a single-disk one called just My Book Live where the disk is not user-replaceable, and a dual-disk one called My Book Live Duo where the two disks can be replaced tool-free. AnandTech had a pretty thorough review of the single-disk one https://www.anandtech.com/show/4952/wd-my-book-live-network-....
I had the MBL Duo. It ran on an 800MHz Applied Micro APM821XX PowerPC processor with 256MB RAM. The stock firmware was a customized version of Debian. I can't remember exactly which version of Debian, but it's definitely older than Debian 9. Maybe 6 or 7? Anyway, you can enable SSH access via the web UI, log in directly to the device, and see everything running there.
IIRC WD used OpenVPN to establish a connection to their servers to allow remote access to these devices behind NAT and firewalls. I had always been wondering what if their servers were hacked and then hackers could theoretically access the NAS via that VPN connection and by extension my LAN behind firewalls so I disabled the remote access feature.
A couple of years back I finally got tired of the ancient Debian, and since WD had already deprecated the device without further updates/patches, I was looking for alternatives. There were people on WD forums trying to compile and install newer versions of Debian. But since the device has no video out or console port, it was very easy to mess up and get bricked.
Eventually I found OpenWRT (yes, the linux-based router OS) had official support for both MBL models (https://openwrt.org/toh/western_digital/mybooklive) and the installation seems pretty safe (especially for the Duo with replaceable disks). So I went with OpenWRT and never looked back.
My MBL Duo now sits at my parents-in-law's place running OpenWRT. I set up WireGuard to connect it to my LAN in another city and use it as a remote backup destination in case of either place catching fire or whatever other disasters.
And now I feel pretty lucky for my decision to ditch the original firmware. Good riddance!
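For anyone wanting to replicate that kind of setup, a WireGuard peer config on the remote NAS might look roughly like the following; the keys, addresses, and hostname are placeholders for illustration, not the actual configuration described above:

```ini
# /etc/wireguard/wg0.conf on the remote NAS (all values are placeholders)
[Interface]
PrivateKey = <nas-private-key>
Address = 10.8.0.2/24

[Peer]
# The home LAN endpoint in the other city
PublicKey = <home-router-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25   # keeps the NAT mapping alive from behind the remote router
```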
I used to have one of those, and it bricked after a power outage. Luckily it only contained media I could download again. I opened it to try recovering my files from its HDD directly, but to my surprise it merges the HDD with a portion of internal flash into a single volume. I didn't try recovering further because I didn't care about the data, but I was really expecting to see my files on that HDD... After that I ditched its motherboard and connected its HDD to a Raspberry Pi.
Is there any modern, reliable replacement for the old "Time Capsule" from Apple?
I have a home server with samba, etc where I'm pushing Time Machine backups, but I'd prefer something standalone that doesn't require much setup. What are others using for this?
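For reference, recent Samba versions (4.8+) can advertise a share as a Time Machine target through the vfs_fruit module, so macOS discovers it like a Time Capsule; a minimal fragment, with the share name, path, and user as placeholders:

```ini
# smb.conf fragment (Samba 4.8+): advertise a share as a Time Machine target
[timemachine]
   path = /srv/timemachine
   valid users = backupuser
   read only = no
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   fruit:time machine max size = 1T
```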
I think what people do right now is an always-on Mac Mini with external drives connected. The Mac Mini strategy introduces a failure point at the USB->hard drive connection. Pretty low-maintenance, until the $3 USB thingy dies!
Apple's exit from the router business came at precisely the wrong time: they could sell an integrated smart home / mesh network / router / local+cloud backup that also provided macOS/iOS/tvOS software update caching / content caching. My guess is they didn't do it because they felt they could make more money charging for cloud-only services (they love their services revenue) rather than selling networking devices.
I have a small "normal pc-like" box with up-to date Linux. It required some initial setup but it will be supported forever and almost any software can be installed there.
I can't believe we're just glossing over the fact that everyone who has been compromised probably has an intruder still sitting on their LAN, or at least a big hole in their network somewhere. The request came from somewhere, and most people don't willingly forward ports to these devices.
You're all worried that the data is gone, maybe you should be worried that someone might have taken it.
If they're recommending PhotoRec, it means it was either rm -rf / or a quick format. Your data is there, but the directories aren't there, and any file that wasn't written in a single extent is going to be awfully hard to piece back together.
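PhotoRec works by file carving: it ignores the (destroyed) filesystem metadata and scans raw sectors for known file signatures, which is why recovered files lose their names and dates, and why fragmented files come back broken. A toy illustration of the idea, using the JPEG markers as an example:

```python
# Toy file-carving sketch: scan a raw byte stream for JPEG signatures.
# Real carvers like PhotoRec know dozens of formats and use smarter heuristics.
JPEG_SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(raw):
    """Return (start, end) byte offsets of candidate JPEGs found in raw data."""
    found, pos = [], 0
    while (start := raw.find(JPEG_SOI, pos)) != -1:
        end = raw.find(JPEG_EOI, start)
        if end == -1:
            break                      # truncated file: header with no footer
        found.append((start, end + len(JPEG_EOI)))
        pos = end + len(JPEG_EOI)
    return found

disk = b"junk" + JPEG_SOI + b"imagedata" + JPEG_EOI + b"morejunk"
assert carve_jpegs(disk) == [(4, 18)]  # one JPEG found, filename long gone
```

Note the carver recovers byte ranges only: there is no filename, timestamp, or directory anywhere in the carved data, matching what PhotoRec users see.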
There are a lot of comments on the situation itself, or on the companies involved, when these types of events happen. Speaking of evolutionary behavior as a whole, why does the industry keep driving toward IoT / IaaS when events like these are becoming more common? Is there real risk analysis going on that determines that putting crucial operational infrastructure in the hands of a vendor is worth the consequences? I know this particular event is related to a consumer product, but consider how a lot of businesses work. Once a solid system is built, there's no reason to change it - hence the abundance of legacy systems in finance etc.
It makes me feel like these things could eventually go full-circle and companies will value owning their own datacenters. It may also make more financial sense for companies that have recently realized a lot of savings in office space leasing that they could pivot into infrastructure.
"Western Digital has determined that some My Book Live devices are being compromised by malicious software. In some cases, this compromise has led to a factory reset that appears to erase all data on the device. The My Book Live device received its final firmware update in 2015. We understand that our customers’ data is very important. At this time, we recommend you disconnect your My Book Live from the Internet to protect your data on the device. We are actively investigating and we will provide updates to this thread when they are available." - Western Digital
I've said it before and I'll say it again: consumer tech sucks. From routers and NASes with back doors to lighting systems and thermostats that "need" someone else's computer to function (as though their onboard CPU isn't more powerful than yesteryear's PCs), "caveat emptor" couldn't possibly be more true.
Even though I enjoy building my own systems, it's a crying shame that I have to build my own whether I want to or not, and even Micro Center doesn't carry server motherboards with IPMI and a BMC.
As of May 15, 2021 the Seagate Access feature of Seagate NAS products will be discontinued. Specifically, the Seagate Access service, Seagate Access through Seagate Sdrive, Seagate Access through Seagate Media App, and Seagate MyNAS will no longer be available after May 15, 2021 at midnight Central European Time. Additionally, customer support for the Seagate Access service will also be discontinued.
The removal of this service means that access to all Seagate NAS devices via the Seagate Access web portal, Seagate Sdrive, Seagate Media App, and Seagate MyNAS will no longer function. However, you will not lose remote access to the files on your Seagate NAS since it can be configured and accessed using the FTP/SFTP service. Similarly, your Seagate NAS will not change for standard network access within the home or office network using common network protocols on macOS and Windows.
Please know that we remain grateful for your purchase of a Seagate NAS and hope you continue to enjoy it despite this change to remote access via Seagate Access, Sdrive, Seagate Media App and MyNAS.
For questions, please contact https://www.seagate.com/contacts/.
Cordially,
The Seagate NAS Team
What I haven't seen in a superficial reading of this thread: Did the device really erase the data? A quick way to do this, nowadays, is to encrypt it and, upon reset, throw away the key.
If it's an old school quick format, some types of data can be recovered. A tool like "photorec" can scan the drive and recover pictures, MP3 files and so on, mind you without original metadata like file names and dates. May be better than a total loss.
I might be wrong, but isn't the "encrypt everything & delete the key" method -way- more difficult to recover from than an actual disk wipe?
I mean, with a disk wipe, you can still try to recover some data using forensics tools. If the key is gone, you can have the best forensics team in the world and still not decrypt the garbled data.
No surprise. WD My Book was the worst purchase of my life.
The dual-drive RAID unit just started clicking one day, and no data could be read after that. Apparently falling back to a single drive wasn't possible after a drive failure, or else the controller itself failed. (I didn't physically try removing the other drive, as the drives were somehow coupled; maybe mirrored RAID mode wasn't even possible.)
This was about ten years ago. I would not touch another Western Digital product after that.
I wonder how this happened. I mean, most of these devices are going to be behind NAT, which isn't perfect security but does mean the device basically has to go looking for trouble to get hacked.
In theory a vulnerable device on your network that doesn't actively seek out connections is still risky, but it shouldn't suffer this kind of total, worldwide compromise.
These are only storage devices, albeit EOL with no support or updates, but they still just work until... I anticipate reading in the near future about lawsuits over people losing their crypto access keys, and entire businesses closing from loss of their data.
Now consider the coming IoT flood that will also undeniably include medical devices, some of which will directly keep individuals alive AND be connected. In time this will redefine the phrase "End of Life".
Uncle Bob 1990-2042
Uncle Bob was a great human, had a great family and did great things in his community. Uncle Bob, however, failed to maintain his personal security practices, and his medical device was compromised as a result, leading to his EOL. RIP U Bob.
Certainly unfunny dark humor, derived from what is certain to come. The real problems are just "warming" up.
Hmmm. I have one of these drives sitting on my desk. Hasn't been powered up for weeks. Do I disassemble it, take the drive out and grab the data that way or just wait? What are these things formatted as?
Schrödinger storage: data is both lost and found.
Edit: This isn’t the Live version. I think it's going to be in pieces Real Soon.
MyBook is an awful product anyway. In older ones that I needed to recover some data from, the power supplies would fail quite quickly. Cheap drives, cheap components probably, cheap enclosures. Hopefully the drives weren't zeroed out in this case, so inexpensive data recovery can be used.
Insurance companies should try to hire experts to improve digital security of their clients, and then sell some insurance covering this kind of loss.
I'm not really fond of insurance companies, but as long as there are no solid and MANDATORY security standards and practices in place, nothing will change. Maybe the government should inspect software and only validate it for sale if it's secure enough. Of course the process will involve more work, but it's not like Silicon Valley can't pay for it.
It's like we're at an age when seatbelts and speed limits haven't even been made into laws, and we're still somehow surprised by accidents.
So one wouldn't want a NAS with WD products (although the bare drives are fine). Is Synology the only good option now? Seems like a good time and place to ask if anyone can link to a good tutorial on a good NAS setup.
It seems to be a bad idea to buy commercial NAS devices, because eventually, every device would go end of life. Better build a DIY device with an open firmware.
But thinking about it again, this kind of approach wouldn't work for typical, ordinary people. Then I guess the next best thing would be to force NAS devices to stop accepting internet-facing connections after their support period ends. Every device has a defined lifespan that dictates how long it can work without sacrificing quality. As devices dealing with important data, NASes should be held to higher restrictions.
So I have a Schroedinger's MyBook Live in front of me that I disconnected after reading this. I can probably live with it having been wiped, but what is the current wisdom on how I should proceed?
These devices have no way to read from them via USB (which is why I never liked the thing), so the only way to peek in and see if my stuff is still there would be to plug it into the router and power it on. I presume doing that would get it wiped today.
Should I just leave it in its current superposition of wiped and not wiped until a fix comes out?
I doubt a fix is coming. Just keep it from reaching the Internet long enough to disable the cloud features. A patch cable directly to the PC should work.
Even worse, it was a known CVE, and the last firmware update was in 2015.
Western Digital WD My Book Live and WD My Book Live Duo (all versions) have a root Remote Command Execution bug via shell metacharacters in the /api/1.0/rest/language_configuration language parameter. It can be triggered by anyone who knows the IP address of the affected device, as exploited in the wild in June 2021 for factory reset commands.
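The bug class here is classic command injection: a request parameter gets interpolated into a shell command line, so metacharacters like `;` let an attacker append their own commands. A hedged Python sketch of the flaw and the usual fix; the script name, parameter handling, and allowlist pattern below are illustrative inventions, not WD's actual code:

```python
import re

def build_cmd_vulnerable(lang: str) -> str:
    # BAD: attacker-controlled input is interpolated into a shell command
    # line, so lang = "en_US; factoryRestore.sh" appends a second command.
    return f"set_language.sh {lang}"

def build_cmd_safe(lang: str) -> list[str]:
    # Safer: validate against a strict allowlist and build an argv list,
    # so no shell ever parses the attacker's string.
    if not re.fullmatch(r"[A-Za-z]{2}(_[A-Za-z]{2})?", lang):
        raise ValueError("invalid language code")
    return ["set_language.sh", lang]

# A shell metacharacter sails straight through the vulnerable version...
assert ";" in build_cmd_vulnerable("en_US; reboot")

# ...but is rejected by the allowlist before any command is built.
try:
    build_cmd_safe("en_US; reboot")
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```

Passing an argv list (rather than a string handed to a shell) is the key design choice: even if validation is imperfect, the metacharacters are never interpreted.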
Almost bought one of these, but opted for a raspi and usb3 external drive instead. Works great with Time Machine. https://saschaeggi.medium.com/use-a-raspberry-pi-4-for-time-... was my guide, though I deviated by using ext4 instead of hfs, and didn't need avahi.
But use ext4 instead of HFS+, and use Samba instead of Netatalk. With HFS+, the bundle would get corrupt after a few days. With Netatalk, a sudden power outage would corrupt the whole thing.
Don't listen to 99% of blog posts on this. They all tell you to use HFS+/Netatalk. None of those authors have actually used that backup solution for any extended period of time, because if they did, they'd find that it's unusable. Use this one instead:
Thanks for the recommendation. We work pretty closely with Apple's SMB2 client developers to make sure everything works well together with Samba. They're great fun to work with !
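For anyone setting this up, a minimal Samba share that Time Machine will discover looks roughly like the following. The share name, path, and user are placeholders; `fruit:time machine` requires Samba 4.8 or later:

```ini
[timemachine]
   # Placeholder path on the ext4 drive and placeholder backup user
   path = /srv/timemachine
   valid users = backupuser
   read only = no
   # vfs_fruit supplies the Apple compatibility bits over SMB2
   vfs objects = fruit streams_xattr
   fruit:time machine = yes
   # Optional cap so backups can't fill the whole disk
   fruit:time machine max size = 1T
```

The `fruit:time machine max size` cap is worth setting, since Time Machine will otherwise happily grow to consume the drive.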
So we've also known that designing something to withstand the hostile internet many years into the future would require an entirely different approach to software engineering than taking commodity software, flashing it in, and integrating some custom bits. But nobody stopped the vendors, and it was cheaper than the competent alternatives.
I just bought a large external hard drive for exactly this reason, in case things fail. I did not use this Live feature, but I feel much better with an offline copy of my data safe. This lesson has played out time and time again; when will people learn? You need to be in physical possession of your data, and if it is truly critical, you'd better have a second copy kept in another location in case of theft or fire.
I have a WD MyBook and unplugged it last night - still fine, fortunately. I ssh'ed in and renamed the factoryRestore.sh and wipeFactoryReset.sh to cripple the attack as well as removing one drive from RAID until I can get an airgapped backup.
However, that doesn't fix the actual vulnerability itself. Anybody else affected have a suggestion for how to e.g. upgrade the relevant packages to make this secure?
Am I understanding correctly that this product has not been maintained (EOL) since 2015, so it has officially not received any updates since then?
If so, I am not sure what people are complaining about.
If my understanding is incorrect, then WD's behaviour is to blame (and at the very least people should get compensation for WD's lack of due diligence).
Now, if someone's life depends on the data, having it in one place only is crazy.
This all goes back to the fact that cloud-based / cloud-connected storage that is not also backed up on disconnected media is essentially insecure/unreliable.
I have cloud accessible storage that is backed up on two separate 2TB USB drives, in A/B fashion. For me, 2TB is enough to cover my irreplaceable data (mostly digital photos).
Looks like I had the "Update My Device" setting off. Whew. I have 20 years of kids photos and videos, though I'm nearly positive I've backed this up and shared it with my ex-wife.
Nearly had a heart attack when I saw this.
Though mine is the WD MyCloud device. I have a MyBook one too, but it's offline. Is the MyCloud device included in the issue?
WD needs to patch this, but I always recommend that people also keep a cold backup of their data. I back up everything I have every 6 months, rotating my target backup device (an external HDD) through a simple command-line script. While not 100% hacker-proof, it's a lot safer than relying on internet-connected devices.
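A rotation script like that can be very small. Here's one possible sketch, alternating the target drive by month parity so each drive holds an independent generation of the backup; the source directory and mount points are hypothetical:

```python
import subprocess
from datetime import date

SOURCE = "/home/me/data/"                      # hypothetical source directory
TARGETS = ["/mnt/backup_a", "/mnt/backup_b"]   # hypothetical mount points

def pick_target(month: int) -> str:
    """Alternate drives by month parity, so each drive keeps an
    independent generation of the backup."""
    return TARGETS[month % 2]

def run_backup() -> None:
    target = pick_target(date.today().month)
    # --archive preserves permissions and timestamps;
    # --delete mirrors deletions into the copy.
    subprocess.run(["rsync", "--archive", "--delete", SOURCE, target],
                   check=True)

# Even-numbered months land on drive A, odd months on drive B.
assert pick_target(2) == "/mnt/backup_a"
assert pick_target(3) == "/mnt/backup_b"
```

Because the two drives are never both connected during a compromise window, a ransomed NAS can take out at most one generation.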
One lesson I see here is that software-only triggered factory resets shouldn't be a thing.
Routers used to have reset pins/buttons which had to be held down for a while to make them factory reset. That should be the _only_ option to factory reset a device that wipes user data along with it.
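The hardware-confirmation idea is easy to sketch: the wipe routine refuses to run unless a physical input (say, a recessed button sampled over GPIO) has been held continuously for a minimum time. Everything below is hypothetical firmware logic in Python, not any vendor's actual code:

```python
HOLD_SECONDS_REQUIRED = 10.0

def factory_reset_allowed(button_samples: list[bool], interval_s: float) -> bool:
    """Return True only if the physical button was held continuously for
    HOLD_SECONDS_REQUIRED. In real firmware the samples would come from
    polling a GPIO pin; here they're just a list of booleans."""
    held = 0.0
    for pressed in button_samples:
        held = held + interval_s if pressed else 0.0  # reset on release
        if held >= HOLD_SECONDS_REQUIRED:
            return True
    return False

def factory_reset(button_samples: list[bool], interval_s: float) -> None:
    if not factory_reset_allowed(button_samples, interval_s):
        # A network request alone can never reach the wipe path.
        raise PermissionError("factory reset requires a physical button hold")
    # ... only here would the key be destroyed / data wiped ...

# Ten seconds of continuous hold is accepted; no press at all is refused.
assert factory_reset_allowed([True] * 10, 1.0)
assert not factory_reset_allowed([False] * 100, 1.0)
```

With that gate in place, a remote command-injection bug like this one could still deface the web UI, but it could never reach the code path that destroys data.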
Imagine if your cryptocurrency wallet or passwords were on this cloud-only system. It is designed to back up your data to the cloud, and instead it deletes it.
Not even their cloud is doing well for itself, then. Exactly why I keep pushing for self-hosted systems; in this case, a local NAS.
WD screwed up. However, users must have a backup policy for when (not if) data hits the fan. For important data, keeping multiple onsite copies, a few of them totally dumb (as opposed to smart IoT), plus multiple offsite copies should become a habit.
Folks, buy 2 similar drives, back up your stuff monthly to both, and keep your local workspace synced to another machine, local or cloud, using any tool like rsync, Cyberduck, iCloud, whatever.
I'm more and more convinced that the best data backup is a read-only one, not connected to the internet. Optical disks will still have their use, even if they are no longer mainstream.
It's hard to have a great opinion of WD after: (1) this, (2) the SMR disaster of supposedly NAS HDDs, (3) the portable drives silently encrypted even when no encryption was set.
In other news: now is the perfect time to invest in a MyBook. The price will plummet now, and WD will receive such a reaming from this incident that it will surely never happen again...
The prices of MyBooks have been through the roof the last few months because of Chia[0] farming. Most models literally cost almost twice what they did 6 months ago.
Could this happen if your router is not open to the world? I don't get it, to be honest. Maybe the device phones home with query commands? That seems unlikely.
NAPT is for facilitating internet traffic despite a shortage of IP addresses; in a way it's the opposite of a firewall. So it's a success, not a bug, if it manages to let some incoming traffic through.
I'm inclined to write off anyone using a WD "My Book" device as beyond help, but a simple SFTP client like WinSCP, Cyberduck, or Filezilla is comprehensible to just about everyone.
You drag and drop to the SFTP destination[1] which covers "History"[2] with immutable ZFS snapshots and "Integrity"[3] with ZFS itself and "Security"[4] by running nothing but OpenSSH.
If you're going to set your parents up with something, you could do worse than that ...
Are you forgetting the recent post about how kids these days don't even have any concept of files? You're over-estimating most people in their technical competence and the required level of understanding and patience needed to do anything with SFTP. You say "drag and drop" and I think of the hours of troubleshooting and explaining (and re-explaining) I'd have to do if I were to try to implement this with family.
A "backup plan" which involves manually copying files around is no plan at all. It's basically guaranteed to fail when you forget to copy the files for a while, or when you give up on it because it takes too long.
I do both: I backup to my Synology NAS, which then in-turn uses Synology's Hyper-Backup (which is very nice, btw) to my Azure storage account - costs me about $15/mo to store a few terabytes with PITR recovery back to when I started doing this in 2018.
The thing is... I can't help but worry someone's going to compromise my NAS and DBAN the drives and then extract the Azure storage key and use that to delete all my backup blobs...
(Yes, the backup client needs read, write, and delete permissions, unfortunately, and Azure doesn't offer a blockchain-style "append-only" mode for blob storage; still, better than nothing.)
UPDATE: Apparently Azure Blob Storage does support strict append-only blobs that cannot be mutated or deleted, only appended - so I wonder if Hyper-Backup can use that…
Is there anyone out there with $$$$ who will stop at nothing to part you, a rando, from your old data? Probably not. Are there sophisticated attackers who will burn a couple 0-days to build a botnet for the sole purpose of ransoming NASes AND attached cloud accounts AND the origin systems, accounting for tons of possible configurations? Still pretty unlikely; this is NotPetya-level stuff with small payoff.
If you find yourself in the crosshairs of a sophisticated, dedicated attacker (perhaps one in possession of a 0-day), you’re pretty much done. Offline write-only backups stored offsite are the only defense.
However, is there a bug lurking in Hyper-backup that might accidentally wipe stuff from Azure storage, and the bug hits a month before your house gets struck by lightning? Maybe…
Help All data in WD mybook live gone and owner password unknown - https://news.ycombinator.com/item?id=27625925 - June 2021 (66 comments)