I find this update very hard to follow. Can someone tell me if I'm misreading it? I'm going to quote it twice, and then attempt to summarize:
After examining the image from our July investigation, we discovered software capable of generating TOTP codes if provided a TOTP key. We found software implementing the decryption method we use to secure TOTP keys, along with the secret key we use to encrypt them. We also found commands in the bash history that successfully generated a one-time code. Though the credentials found were unrelated to any of the unauthorized Linode Manager logins made in December, the discovery of this information significantly changed the seriousness of our investigation.
and then:
The findings of our security partner’s investigation concluded there was no evidence of abuse or misuse of Linode’s infrastructure that would have resulted in the disclosure of customer credentials. Furthermore, the security partner’s assessment of our infrastructure and applications did not yield a vector that would have provided this level of access.
Linode’s security team did discover a vulnerability in Lish’s SSH gateway that potentially could have been used to obtain information discovered on December 17, although we have no evidence to support this supposition. We immediately fixed the vulnerability.
Here is my read of what this says; I'd like to know if I'm wrong.
"One of our customers got owned in July, and gave us an attacker source address within Linode. We pickled the attacker's host. In December, we examined the pickled host and found secrets related to the way we store 2FA credentials, indicating that our credentials database may have been compromised. In conclusion: we have no idea how that could have happened."
They changed the 2FA to use a microservice, so whatever the vulnerability was before, if the 2FA is now on an isolated server, that vulnerability shouldn't have access to the new 2FA key.
I think it's fairly important to note that they're NOT currently using the microservice for the 2FA, and they're NOT using bcrypt right now.
The blog post states they're "working towards" these changes, they're not currently in place. It's fairly unlikely that they're using the same secret key as the one they found on the server, but it's fair to assume that they are still using salted SHA-2 for your passwords and the same 2FA setup right now.
They likely won't roll out the major changes until they roll out the "new and improved" Linode dashboard they're coming up with.
The article didn't state that. The article stated they are rolling it out soon. The new dashboard will be an open source project, so you'll know when it gets released. There is no link to the project yet, so assume that part hasn't started. So the microservices should be released in a timely manner. Let's hope that, with the new focus on transparency, they'll keep us posted if there are any delays.
Isn't that exactly what I said? o.O I said they will "likely" get rolled out with the new dashboard, not that the article said they would, lol. But they never stated when it would happen anyway, so "delays" aren't really a thing when there are no deadlines.
But given that they don't know what the vulnerability is, there's no way of knowing that.
When it comes to the security of whoever's hosting my servers, I want a little more reassurance than "they shouldn't have access." I need to know that they don't.
Yeah, the thing you are missing is why the story doesn't add up :P
Seriously, in one paragraph they state their TOTP keys were taken, in another they state nothing is wrong because they can't figure out how an attacker might have taken those. WTF?
It sounds like they have an idea of how it could have happened — the Lish vulnerability — but they don't know if that's how it actually did happen, or whether there's another undiscovered vulnerability lurking.
Ok, now help me understand how that can be true given this quote from the advisory:
The findings of our security partner’s investigation concluded there was no evidence of abuse or misuse of Linode’s infrastructure that would have resulted in the disclosure of customer credentials. Furthermore, the security partner’s assessment of our infrastructure and applications did not yield a vector that would have provided this level of access.
"no evidence" or insufficient logging. The wording is spun to their favor throughout the entire advisory. Like "Security Investigation Retrospective" making it sound like they have the whole thing wrapped up. Also that everyone will be fine once their passwords are reset, when it sounds like they still don't really have any solid idea if they fixed the problem or not.
1. Their security partner (whoever that is) didn't see the Lish vulnerability, either because Linode's security team had already fixed it or because they just missed it.
2. Their records weren't sufficient to show the breach itself.
I'm not just trying to be argumentative. Here, let me quote more specifically:
no evidence of abuse or misuse of Linode’s infrastructure that would have resulted in the disclosure of customer credentials.
I feel like I must be misreading something. Didn't they say earlier that they found secrets from their account credentials database on a customer instance that was used to attack (apparently) PagerDuty? That's not "no evidence of abuse or misuse of Linode’s infrastructure"?
you forgot "that would have resulted in the disclosure of customer credentials".
I'm speculating, but having been a fly on the wall at this kind of meeting before, here's my theory of how this went down:
Manager: "So somebody got the key to generate one-time password tokens for PagerDuty. How did that happen?"
Engineer: "I have no idea."
Manager: "What about that Lish vulnerability? Could it have been that?"
Engineer: "Could have been."
Manager: "We don't know?"
Engineer: "I mean, they could have gotten it that way, but we don't have logging for security vulnerabilities, because, you know, we would just fix them instead of logging that they occurred..."
Manager: "Okay, I want a second opinion. Time to bring in $SECURITY_PARTNER."
$SECURITY_PARTNER: "What Engineer said. And here's a bill for our time."
Manager: "What about if the customer screwed up and lost a device? Could it be that?"
Engineer: "It could be that."
Support: "PagerDuty said something about wiping their phone."
Engineer: "If they did lose a device, it would be indistinguishable from the evidence we have."
Manager: "So, maybe we weren't hacked after all?"
Engineer: "Maybe not."
Manager: "Any other theories?"
Engineer: "I just want to go on record... AGAIN... that I don't trust our ColdFusion infrastructure at all. The people responsible have all left, and none of us understand it. We need to rewrite in a cooler language, like Python. And hire some actual security people."
Manager: "Sigh. We've gotten enough flak over this that I guess I'll put it in the budget. Okay, good meeting everyone. I'll write this up in a blogpost, because we need to show transparency. That way customers are at least as confused as we are."
> After examining the image from our July investigation, we discovered software capable of generating TOTP codes if provided a TOTP key. We found software implementing the decryption method we use to secure TOTP keys, along with the secret key we use to encrypt them. We also found commands in the bash history that successfully generated a one-time code. Though the credentials found were unrelated to any of the unauthorized Linode Manager logins made in December, the discovery of this information significantly changed the seriousness of our investigation.
As I read it, the 2FA login does some processing of the credentials before submitting them to their authorization server. Someone logging into a machine directly, instead of going through their portal, would have to do this processing themselves. They found a program designed to do just this on the compromised machine, which a normal user would be very unlikely to have, and which thus serves as strong evidence of malicious activity.
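For context on why "software capable of generating TOTP codes if provided a TOTP key" is significant: the code-generation step itself is trivial, a few lines of standard-library code implementing RFC 6238. Finding it alongside the decryption routine and the secret key is the damning combination. A minimal sketch (function names are mine, not from any Linode tooling):

```python
import base64
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian 8-byte counter."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret_b32: str, t=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from wall-clock time."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time() if t is None else t) // step
    return hotp(key, counter)
```

Given a decrypted secret, this is everything an attacker needs to mint valid one-time codes, which is why an encryption key recoverable alongside the ciphertext defeats the purpose of encrypting TOTP seeds at all.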
The page's title, "Security Investigation Retrospective", fits the content better than the current HN title, "Linode Security Advisory":
it is about last July's breach that caused January's password reset; what happened and what they've done about it.
Not sure what to think about Linode anymore. On the one hand, from a pure reliability point of view they have been bulletproof apart from a few issues during the DDoS in December, and I've always found their support to be good (the few times I've used it in 7 years).
On the other hand they've had security issues fairly regularly and their response to the DDoS was pretty poor.
That said, if I were a cynic I'd say they probably have one of the best setups for dealing with future attacks (per the old joke about never firing an employee who made an expensive mistake, because he'll never make that mistake again), and they finally seem to be taking security seriously. For me the list of changes all sounds good, particularly open-sourcing Linode Manager and tokenizing CCs (I had to reset a card because of them once before).
We are slowly moving a lot of stuff back to a DC up the road but we still have some stuff with them.
I have to be fair about Linode's performance. My Argon2 test suite averaged around 45 seconds on my Linode.
I've relocated my VPS to AWS following these recent discussions around security, and the same test suite now runs between 5 and 20 minutes, presumably based on what my neighbours are doing at the time.
Fair call. I had the entry-level Linode with 1GB RAM and 1 vCPU, and moved to the t2.micro, also with 1GB RAM and 1 CPU. I'm aware that cost me SSDs, but nothing I do is IO-bound.
This is because DO is desperate to retain customers. I don't know if it's still the case, but a year or so ago the CEO was personally handling customer service, responding via email and adding credits to people's accounts. If the CEO has that much time on his hands, to personally handle every customer dissatisfaction, then the customer base must be quite small.
I think for me the largest red flag was DO not even having bandwidth monitoring in place. The only reason there were no bandwidth limitations was because they were not even physically tracking bandwidth usage. How do you launch a hosting platform without something as basic as being able to monitor bandwidth?
Frankly, I don't see any harm in this. Actually, it's quite positive to see a CEO mingle with her user base. That's how you make sure you're in touch with your users to know what they need.
>Linode’s security team did discover a vulnerability in Lish’s SSH gateway that potentially could have been used to obtain information discovered on December 17, although we have no evidence to support this supposition. We immediately fixed the vulnerability.
Yeah, LiSH was exploited like 3 years ago...
Unless it's been completely redesigned it's probably still got heaps of vulnerabilities, screen wasn't designed for untrusted input.
>CC Tokenization: Although our investigation yielded no evidence of credit card information being accessed, we are taking advantage of our payment processor’s tokenization feature to remove the risk associated with storing credit card information.
Nobody thought about doing this when every customer's plaintext card info was taken years ago?
Anyway, the entire blog post is bullshit. It completely fails to address their previous security track record. A single hack isn't that big of a deal, but this has happened to Linode countless times.
It seems like the post creates more questions than it answers, but it's great that they are sort of transparent. I guess it's due to ongoing investigation.
But it is quite surprising that someone was able to acquire the key for token generation and they seem to have no explanation for it. And wow, they only started tokenizing credit cards now? And SHA-2 for password hashes?
TBH, after reading this post my confidence in them hasn't really increased. It feels like: "We fucked up, and we're not exactly sure why, but we'll fix some issues nevertheless."
To add to that, bcrypt is not the best recommendation if choosing a password hash today. In theory they should be adopting Argon2 (or maybe scrypt).
In practice, I suspect that either the bindings for Argon2/scrypt don't exist or aren't easily adoptable given their use of ColdFusion. They do exist in Python.
I'm saying this as a proponent of Argon2, who has invested a lot of time trying to improve the codebase[0].
It currently isn't ready in large production. Efforts to stabilise the API are being spearheaded by someone apparently outside the project[1]. If you're reading this @lucab, thank you.
In the meantime, my Ruby bindings have been broken on three separate occasions due to API changes. You could easily say "Don't track master", but the one release has a tag of 20151206, and it's just as arbitrary a tag as any particular commit id. There is no branch from which you could apply "bugfix only" updates.
Two separate commits broke compilation. This commit[2] was a shambles.
Most importantly, they have commits going in two days ago that change the test vectors[3]. That means that if you update your library, verifying existing passwords breaks. The hash identifier doesn't change (the way bcrypt had $2, then changed it to $2a, then $2y when they changed the algorithm), which means you can't just write an "upgrade hash" function. I can't find any documentation relating to this change.
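For readers unfamiliar with why the identifier matters: the modular-crypt prefix is what lets verification code dispatch per-hash and transparently migrate algorithm versions. A toy sketch of that dispatch (the mapping is illustrative, not any particular library's API):

```python
def hash_scheme(stored: str) -> str:
    """Identify a stored password hash by its modular-crypt prefix.

    A versioned prefix is what makes per-record migration possible:
    verify with the scheme the prefix names, then re-hash under the
    current scheme and store the new prefixed string.
    """
    prefixes = {
        "$2a$": "bcrypt (original OpenBSD identifier)",
        "$2y$": "bcrypt (crypt_blowfish sign-extension fix)",
        "$2b$": "bcrypt (current OpenBSD identifier)",
        "$argon2id$": "argon2id",
        "$argon2i$": "argon2i",
    }
    for prefix, scheme in prefixes.items():
        if stored.startswith(prefix):
            return scheme
    raise ValueError("unknown hash format: %r" % stored[:12])
```

If test vectors change under an unchanged identifier, this dispatch can no longer tell old-output hashes from new-output hashes, which is exactly the complaint above.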
It's important to note that none of this means your passwords are easily broken, or that it's insecure, which is the implication I often see thrown around when discussing Argon2 being "new".
All three are good choices, with their own advantages and disadvantages. Argon2 may be clearly the best choice a few years from now, but both the algorithm and the software implementations are immature. It makes sense to be conservative and go with the more battle-tested options.
(Also last I looked Python has no good scrypt bindings.)
Just an FYI: NIST still recommends SHA-2 for password hashing; they still don't see enough benefit from bcrypt with the advent of super-fast ASICs and FPGAs. Scrypt and Argon2 are too immature. ColdFusion isn't a reason for not using either, as ColdFusion can run any Java code very easily, and Python can run C code easily. So bindings for any language are never a reason, as long as you know how to use your tools.
Bcrypt does add some extra benefits over SHA-2 for typical offline password hacking. So it's still a good step.
It would be foolish to go with Argon2, as it only won the Password Hashing Competition a little over 6 months ago. Bcrypt has been found to be solid for over 15 years now and has had tons of eyeballs on it.
Scrypt has had issues in the past and hasn't been nearly as scrutinized as bcrypt.
The fact of the matter is, the good guys aren't working as hard as the bad guys when it comes to good security.
Waiting until login to upgrade to bcrypt is a requirement of competent password storage. At this point in time all Linode should know is SHA-2(password), and they can't use that to derive bcrypt(password).
The way upgrade should work is that the user provides their password, which is verified with SHA-2 and then hashed with bcrypt and stored again.
In order to do this without people logging in Linode would have to bcrypt hash the SHA-2 hashed passwords and then keep doing that for all password validations.
> Waiting until login to upgrade to bcrypt is a requirement of competent password storage
It's not even remotely competent. This blog makes it clear they're not even sure how their secret key was stolen. These hashes could be walking out their backdoor as I type this. Keeping vulnerable hashes at rest is insane.
It would be far more competent to bcrypt the SHA-2s, so that at least when the hashes wander out the backdoor they haven't really found, people's passwords aren't trivially attackable.
> In order to do this without people logging in Linode would have to bcrypt hash the SHA-2 hashed passwords and then keep doing that for all password validations.
No, they'd just have to replace Bcrypt(SHA-2(password)) with Bcrypt(password) once the customer finally logs in.
It's an immediate net upgrade to the resistance of at-rest hashes to brute-force attacks, with zero downside.
That will work as a way to strengthen the hashes (a few other people pointed that out as well).
My point was that if you have a system which can go straight from SHA2(password) to bcrypt(password) then the system must be storing the plaintext of the password, which would be very bad.
> My point was that if you have a system which can go straight from SHA2(password) to bcrypt(password) then the system must be storing the plaintext of the password, which would be very bad.
Yes, I understand that. It's just completely irrelevant to the question of whether or not it's competent practice to store vulnerable hashes indefinitely, awaiting customer log in.
Again, it is not a competent practice. Wrap vulnerable hashes in strong ones immediately; they're a huge liability to leave sitting in your storage even when you don't have evidence that there's a backdoor in your systems that you cannot seem to find.
I don't see anything to indicate they are converting directly from SHA-2 to bcrypt. When the user next logs in, if the password matches the SHA-2 hash, compute the bcrypt hash, store it in the bcrypt field, and use that from then on.
> At this point in time all Linode should know is
> SHA-2(password) and they can't use that to derive
> bcrypt(password).
> ...
> In order to do this without people logging in Linode would have
> to bcrypt hash the SHA-2 hashed passwords and then keep doing
> that for all password validations.
They just could make the intermediate step bcrypt(SHA-2(password)) via some lockstep code/db backend update. Then on next user login after verifying against bcrypt(SHA-2(password)), update the db to the more straightforward bcrypt(password). At the very least this would increase the difficulty of brute forcing in the meantime.
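The two-step scheme described above is straightforward to sketch. In the example below, PBKDF2 from the Python standard library stands in for bcrypt so the code is self-contained; the record layout, function names, and iteration count are mine for illustration, not anything Linode described:

```python
import hashlib


def slow_hash(data: bytes, salt: bytes) -> bytes:
    # Stand-in for bcrypt: any deliberately expensive KDF fits the sketch.
    return hashlib.pbkdf2_hmac("sha256", data, salt, 100_000)


def wrap_legacy(sha2_hex: str, salt: bytes) -> bytes:
    """Offline step: wrap every stored SHA-2 digest in the slow hash.

    This needs no plaintext passwords, so it can run as a one-time batch
    migration, immediately hardening every at-rest hash.
    """
    return slow_hash(sha2_hex.encode(), salt)


def verify_and_upgrade(record: dict, password: str) -> bool:
    """On login: verify, then re-hash the password directly if still wrapped."""
    sha2_hex = hashlib.sha256(record["salt"] + password.encode()).hexdigest()
    if record["scheme"] == "wrapped-sha2":
        # Real code should compare with hmac.compare_digest.
        ok = slow_hash(sha2_hex.encode(), record["salt"]) == record["hash"]
        if ok:
            # Plaintext is in memory now, so drop the intermediate SHA-2 layer.
            record["scheme"] = "direct"
            record["hash"] = slow_hash(password.encode(), record["salt"])
        return ok
    return slow_hash(password.encode(), record["salt"]) == record["hash"]
```

The batch step hardens every hash immediately; the login step then opportunistically removes the legacy SHA-2 layer per user, which is the "bcrypt(SHA-2(password)) now, bcrypt(password) later" flow the comment describes.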
Two things in their "What We’re Doing About it" section really surprised me because they are things I would have expected a company of their size and sophistication to have done long ago:
> we are taking advantage of our payment processor’s tokenization feature to remove the risk associated with storing credit card information
Unfortunately, there are some facts in Linode's post that are not correct.
>On July 9 a customer notified us of unauthorized access into their Linode account. The customer learned that an intruder had obtained access to their account after receiving an email notification confirming a root password reset for one of their Linodes. Our initial investigation showed the unauthorized login was successful on the first attempt and resembled normal activity.
This is almost correct. Someone got into our account on the first try. They knew the password and a valid TOTP token. Although, Linode's email isn't what notified us; it was our intrusion detection system.
>On July 12, in anticipation of law enforcement’s involvement, the customer followed up with a preservation request for a Linode corresponding to an IP address believed to be involved in the unauthorized access. We honored the request and asked the customer to provide us with any additional evidence (e.g., log files) that would support the Linode being the source of malicious activity. Neither the customer nor law enforcement followed up and, because we do not examine customer data without probable cause, we did not analyze the preserved image.
This is partially correct. We informed Linode that we saw suspicious activity within their network, and reached out to them to inform them. We provided any and all logs we had. We also informed them that we had passed the info on to law enforcement, in case they wanted to proactively preserve the data. They knew we had no further information, and as such didn't ask for anything additional.
>On the same day, the customer reported that the user whose account was accessed had lost a mobile device several weeks earlier containing the 2FA credentials required to access the account, and explained that the owner attempted to remotely wipe the device some time later. In addition, this user employed a weak password. In light of this information, and with no evidence to support that the credentials were obtained from Linode, we did not investigate further.
The story behind the mobile device is totally incorrect. The user did not lose their device, the device had been restored (intentionally wiped) 9 months prior to the compromise. The user got a new device, and never set up MFA on their new phone after wiping the old one. The device was, and still is, in the user's possession. The device has not been powered on in a long while.
The user who was compromised was no longer in possession of their MFA secret. They deleted it, intentionally, with no backups existing.
If anyone here is going to be at Velocity 2016 in Santa Clara, or at Monitorama PDX 2016, I'll be giving talks on how PagerDuty was compromised back in July. This includes full details of how this happened, including the details of the mobile device referenced above. There are some details in my talk that don't line up with the blog post provided by Linode. :)
I watched PagerDuty leadership run out of the room when Amazon completely shit itself during a conference in 2011, and I heard a few months later that they'd signed up with Linode at some point after that. I could probably put two and two together there as a reliability strategy to back up against AWS failures.
Keep in mind Linode had a pretty good rep at the time. I wouldn't second guess them on the call, as I probably would have made it at the time, too. They didn't dig in or lock themselves in with debt, and bailed when the time was right, too.
As a former Linode employee myself, I can verify that I know who this is, and that he knows what he's talking about on both sides of the fence. He's worth listening to.
> Although, Linode's email isn't what notified us, it was our intrusion detection system.
Are you able to elaborate on this? I understand you may not want to name specific vendors/products in the name of operational security but it sounds like in this scenario whatever is in place actually did its job.
Most of this is still valid. There may be some differences as we've improved our configuration over time.
We use OSSEC for host-level intrusion detection. This fired off quite a few alerts as the malicious party began to log in as root on the serial console, amongst other things.
We also have supplemented it with other tools, such as an in-house wrapper around nmap, to alert us to hosts that don't match their expected network configuration. So when ports get opened incorrectly, someone is alerted usually within a minute.
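The in-house nmap wrapper described above boils down to diffing observed open ports against a per-host baseline and alerting on drift. A toy version of that comparison (the data structures are mine; a real deployment would feed it parsed nmap output):

```python
def port_drift(expected, observed):
    """Return (unexpectedly_open, unexpectedly_closed) port sets for one host."""
    expected, observed = set(expected), set(observed)
    return observed - expected, expected - observed


def check_hosts(baseline, scan_results):
    """Yield (host, opened, closed) for every host that drifted from baseline.

    Hosts missing from scan_results report all expected ports as closed,
    which also catches hosts that stopped responding entirely.
    """
    for host, expected in sorted(baseline.items()):
        opened, closed = port_drift(expected, scan_results.get(host, set()))
        if opened or closed:
            yield host, opened, closed
```

Alerting on `opened` is what catches "ports get opened incorrectly" within a scan interval, per the comment above; alerting on `closed` additionally flags services that died.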
No kidding. After reading all this I think people would be nuts to continue using Linode for anything critical (hell, maybe even for anything at all) at this point.
How many times has this kind of thing happened to them now?
This is very helpful information. Can you say if you've moved to a different provider or if you're now racking your own machines? And if you do have a new provider, can you say who you are and how you evaluated them?
I have been doing some research in my (limited) spare time to try to find a new provider, but I still have not made the switch from Linode.
So we ended up on Microsoft Azure in Fresno. In hindsight, they would not be my preferred choice of a non-AWS provider to replace Linode. However, given the situation we were in, they were the best choice.
We operate all of our datacenters in a multi-master configuration across the WAN, so latency is something we need to be mindful of. We were also in the middle of an emergency situation that required a migration, so we needed a solution that would allow us to evacuate quickly.
In the end we decided that we wanted a provider who supported a VPC-like network configuration, and was roughly within the same latency profile as Linode in respect to US-WEST-1 and US-WEST-2.
If timing hadn't been a concern, we may have chosen differently. We felt we didn't have the luxury of time.
> Azure replaced Linode very quickly after the July incident.
Bah. More virt. Throw down some cash on a cage and fire up AWS Direct Connect, man. It's bliss, you get more choices out here than I did with us-east-1, plus you guys can afford it now.
We plopped in physical gear for a couple parts of our AWS infra a few employers ago and cut our Amazon opex by like, two thirds. Not helpful for your using Azure as a reliability strategy, but worth thinking about for your write-heavy databases, for example.
I've been a happy Linode customer for a long, long time. With that I have to say the fact that they released this late on a Friday afternoon leaves a bad taste in my mouth. That in turn leaves me with doubts about the veracity of their statements. I'll probably be looking at alternatives now.
I thought this Kimsufi thing sounded interesting so I looked it up, and... wow, the processors in the systems they offer are old. Where do they even get these things? Must be leftovers from OVH?
> We have been working with federal authorities on these matters and their criminal investigations are ongoing.
I cringe when I see companies say this. As if we're supposed to feel like the "hack" was somehow more sophisticated than spearphishing or social engineering because there are feds on the case.
There's a hole in your security. Diligently look for that hole. If it's a mistake own up to it fully and apologize. Make your system robust. If you don't have the talent in your organization to do this, then hire more talented engineers. Compete with other companies for good people.
Regardless of the severity of the means, the fact is that this sort of attack is entirely illegal and involving law enforcement is a clear requirement.
That's fine, but I think the parent is implying (probably correctly) that involving law enforcement isn't really doing anything for the customers of the service. Sure, what happened was a crime, and if the attackers are really unlucky they could end up getting arrested in a couple of years. "And?"
Point being that your users have no need or desire to know that. It's a cheap point to score for the marketing department, to detract from the issue at hand - how it could have been compromised to begin with.
I've been using Linode for years and I haven't had too many problems with the service itself. That being said, this makes me think twice about staying with them. If I wanted to switch, is the only real competitor Digital Ocean or are there any other good choices?
Same. I have been using it for a cheap freebsd website instance, for just over a year now. So far, I have needed very little interaction with support, so I can't speak to how good it is.
For me, it's because once you get beyond the $5 nano/free tier, things get expensive really quickly.
For example, I run about 10 different sites off one Linode, but only one of them gets any substantial traffic. Still, in order to run that site, which works just fine on a $20/mo 2GB/2core unit on Linode, I'll push out about 150GB in outbound bandwidth a month, and require around 15-20GB in storage. Pricing that out on AWS, ignoring the free tier, I'm looking at at least $40/mo, so double the price.
These are hobby sites or side projects, so they are not necessarily mission critical, but it would be sad if they went down.
There's a serious benefit to getting a VPS when you're on the low end like that, where power and bandwidth per dollar are greatly maximized.
If you're paying double digits or more for cloud hosting, you'd probably get much better bang for your buck with a similarly priced dedicated hosting setup.
The main value provided by cloud hosting is easy scalability, not cheap prices.
I'd start out by looking at the OVH offerings, Kimsufi (a part of OVH) has dedis starting from $5 a month in both NA and EU.
You can get a cheap "High Availability" setup for $10 with servers on two different continents.
Oh, and the bandwidth is free (although it's 100mbit). You can max that line 24/7 for a month (roughly 32TB of transfer) and pay nothing more than $5; on Linode that much traffic would cost around $600 in bandwidth alone.
If you need something beefier than Kimsufi, OVH also has their mid-range brand soyoustart (https://www.soyoustart.com/us/essential-servers/) with prices starting at $42 that'll just completely destroy any similarly priced cloud offerings.
I'm only focusing on OVH here because, as the biggest provider in this market, they're IMO also the safest choice. Leaseweb and Voxility might be worth checking out too.
Quick disclaimer: I'm not currently an OVH customer (their DC locations don't fit my use case) nor am I in any way affiliated with them.
I learned not to trust Linode after an incident around 2012-01-23: I emailed them asking if they would compile a new kernel without the /proc/pid/mem local root exploit; they manually patched and compiled a new 3.2.1-linode40 kernel. I booted into it, but it repeatedly locked up my VPS after ~10-12 hours of runtime. There was no monitoring or automatic reboot on their side, and both lockups happened right before I went to sleep. Did they bother notifying anybody about their buggy kernel, which many people probably booted into? Nope. Nothing.
Please don't post off-topic replies to the top comment. Or to any comment, really, but with the top comment people sometimes do so to get their own post closer to the top of the page, which is not legit.
I'm not sure if you actually read the top comment, because my response was very much relevant.
tptacek asked about inconsistencies in the Linode advisory, I posted a screenshot of an apparent Linode employee claiming that they lie to their customers regarding security issues. (Which is a claim I am willing to personally back up)
I interviewed at Linode. They are really nice guys, but they are located in a really odd area of New Jersey.
That's fine, but there are no remote jobs. I actually would have taken the job if they offered remote employment.
I've also only read bad things about the executive-level management. I asked the interviewer about it; they did seem to confirm my suspicions in this regard, but he did say they have some great new technical leadership that will be driving the company forward.
It seems they really do have very flippant executives, or maybe it's just their CEO.
I left a Glassdoor review about Linode that was removed because I mentioned an employee (anonymously) who rubbed his genitals on coworkers' keyboards as a joke. This was reported to and covered up by management because the employee was essential. Anyway, Glassdoor responded to ostensibly a Linode complaint by removing my review several weeks after I left it. So they do watch it.
There's a lot more to the story, for sure. It's the worst of the bro culture, or at least it was when I left. At my next employer they looked over Linode employees as recruiting opportunities after hiring me, and came to ask me about potential candidates, and the only one they were interested in was genital rubber. I laughed and said I'd quit.
Employee disclaimer: It's not 2011 and Mike is not working at Linode anymore. You really don't get what it's like working at Linode TODAY. I'm sure everything you're stating was terrible for you, but it's not an accurate representation of what the company has become.
The same people are running the company who did then, who were responsible for setting the culture of the company, covering for employees who did unspeakable things (worse than what I've said here), and pretty much instructing employees to lie to customers.
Also, all the people who quit Linode and ran to this coast after I left have kept me very apprised of what working at Linode is like TODAY. I still communicate with your colleagues quite regularly, too.
It is important to reiterate here, as I have before, that I wish Linode no ill will. I'm becoming more comfortable with calling spades spades, but I do not wish Linode to fail. I actually hope a lot of this stuff can be fixed, whatever that entails, but I have a serious gripe with some events that have transpired since 2011, from security to personal. If Linode would start being a little more honest about their gaping security troubles, and not rely upon people like me and Tim who actually know the truth to just shut up about it (I'm getting braver as Tim does, and I appreciate how willing he is to not let PagerDuty be tossed around by Linode's deceit), I'd be a bit happier. We're crossing into knowingly compromising the safety of the Internet, PII, and a number of production infrastructures that still run on Linode in some cases, and I do care about that.
And again, that tone of deceit is set from the top.
I mean, I was responding to a personal comment that directly and personally told me I don't "get" things (and which included what can be reasonably interpreted as a veiled threat by mentioning my departure year from a throwaway account, to make clear that I'm a known quantity in the equation). I'm really trying, here, Dan. We've talked about this over e-mail, but it's getting really tough to contribute here with arbitrary boundaries that are inconsistently enforced, and with a penalty that remains on my account for some comment I made in the past that doesn't even matter anymore.
I bit my tongue on you detaching this subthread because I've learned that moderation is opaque and largely not open to outside opinion, but I agree with Ryan up there and suspect you detached the thread to hide where I went with it. I'm fine with that (honest). Just wish you'd say that.
Yes, that's much better. You took out the personal attack, which was all that was needed.
I don't have the least opinion about Linode or "where [you] went with it". My concern is with civility on Hacker News. That's not an arbitrary line, though I'd never claim we make every call correctly.
The GP seemed to me merely to be saying that the company had changed since you left. That doesn't seem personal, nor a threat, but perhaps there are subtleties I'm missing.
There was also stabbing employees (to the point of requiring an ambulance) while fooling around with a knife, setting the building on fire more than once, and tormenting other employees who he didn't like. All of that was tolerated and dismissed by management, specifically Chris Aker and Tom Asaro, which should tell you what you need to know about ever working there. One of Linode's early forays into hiring women ended very, very badly, and the only one who made it from that time period was forever marked by the experience.
And since he outed himself (I was careful not to), it's worth pointing out that the same individual now sits on the school board where he lives. Food for thought.
I required several years of counseling to recover from working at Linode, and I am no longer under any legal obligation to keep that a secret. I'm consulting with an attorney regarding the best way to tell my story without incurring legal liability, because Linode nearly killed several people who are very dear to me.
Not surprised they have such a toxic/brogrammer work environment. Some of the people on their IRC channel seemed really rude when I used to go on there (including the person who did that). Also I used to know a woman from one of the FOSS communities I'm involved with who worked there a few years ago and it definitely seemed like they treated her in a very tokenized way.
I find this update very hard to follow. Can someone tell me if I'm misreading it? I'm going to quote it twice, and then attempt to summarize:
After examining the image from our July investigation, we discovered software capable of generating TOTP codes if provided a TOTP key. We found software implementing the decryption method we use to secure TOTP keys, along with the secret key we use to encrypt them. We also found commands in the bash history that successfully generated a one-time code. Though the credentials found were unrelated to any of the unauthorized Linode Manager logins made in December, the discovery of this information significantly changed the seriousness of our investigation.
and then:
The findings of our security partner’s investigation concluded there was no evidence of abuse or misuse of Linode’s infrastructure that would have resulted in the disclosure of customer credentials. Furthermore, the security partner’s assessment of our infrastructure and applications did not yield a vector that would have provided this level of access.
Linode’s security team did discover a vulnerability in Lish’s SSH gateway that potentially could have been used to obtain information discovered on December 17, although we have no evidence to support this supposition. We immediately fixed the vulnerability.
Here is my read of what this says; I'd like to know if I'm wrong.
"One of our customers got owned in July, and gave us an attacker source address within Linode. We pickled the attacker's host. In December, we examined the pickled host, and found secrets related to the way we store 2FA credentials, indicating that our credentials database may have been compromised. In conclusion: we have no idea how that could have happened."
Am I missing something else?