Hah. Yeah. Found a bunch of SSH keys, passwords, etc. for Comcast years back, which turned into a shitshow when I tried to report it. Once I found the right people to talk to things got better, but the entire experience was really reflective of how bad large orgs are with security.
A friend once told me he was having a hard time getting a client to take his security concerns seriously. So I went on github and found a commit in their repo that included a production password and sent it to him. Maybe took 5-10 minutes to find? Apparently once they found out about the commit, they panicked a bit and started taking his concerns more seriously.
Old-school one from when I was a security consultant for a bit (pre-automated-pentest-scammer era). Medium-size regulated fintech. Domain admin passwords and admin accounts were stuck on post-it notes on a board in the machine room. If you went over the road to the college, asked to use the toilet (which they seemed fine with), and poked your 200mm lens out of the bathroom window, you could snap them all.
Don't assume that level of competence improved with the addition of technology.
Heh, sometimes, sure. In a separate comment I mention a company with whiteboard passwords. What I didn't mention is that they had a glass wall you could look through from a well-trafficked hallway. One of the larger companies working out of that office (not any longer) rhymes with loinbase.
Also, I no-joke heard of a company that absolutely, unironically, did the webcam thing with RSA tokens.
Did some consulting for an org that did managed IT and found that they had written all of their passwords on a whiteboard. Wrote them an email basically telling them "hey, maybe you should erase that". May or may not have billed them for the time it took to write that email.
They put a piece of paper over the passwords in response.
Yikes. It is sad to hear stories like that, where security is not a concern until panic sets in. :(
Yet another reason we need to adopt standards like security.txt and make it as easy to report these things as it is to tell robots to ignore us with robots.txt. See securitytxt.org for more on the project.
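For reference, a minimal sketch of what that looks like (following RFC 9116, with placeholder values): it's just a small text file served at /.well-known/security.txt, e.g.

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Preferred-Languages: en
    Policy: https://example.com/security-policy

Contact and Expires are the required fields; everything else is optional.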
It's tough. I'm on our public security reporting email list.
We get a lot of things that boil down to "When I go to your website, I am able to see the content of your HTML files!" ... yes, reporter. That is what a web server does. It gives you HTML files. Congrats that you have figured out the dev console in your browser, but you're not a hacker. I'm trying to go with Hanlon's razor here and assume this is inexperienced people and not outright scams.
We don't get a lot of these, but they far outweigh actual credible reports. But we try our best and take everything seriously until it can be disproven. And it's exhausting. So I get it sometimes. Sometimes having a place for responsible disclosure just opens yourself up to doing more paperwork (verifying that the fake reports are fake). That said, we still do it.
> Sometimes having a place for responsible disclosure just opens yourself up to doing more paperwork
100% this. And it bites harder when you’re a scrappy, time-constrained startup, or just offering a public service.
I maintain a public API that returns public information: observable facts about the world. As such, the API doesn’t have any authn/z. Anyone can use it as little or as much as they want, free of charge.
Of course I get at least 1 email per year telling me my API is insecure and that I should really set up some OAuth JWT tokens and blah blah blah.
I used to reply telling them they were wrong, but it would get hostile because they wanted money for finding the “vulnerability”.
On the flip side, at another company I once got a security@ email that sounded like a false alarm. I quickly wrote it off and sent a templated response. Then they came back with screenshots of things that shocked me. It was not a false alarm. That guy got paid a handsome sum and got an apology from me for writing him off.
Or this! It's not just paperwork, but also mental capacity. Having a place for responsible disclosure yields enough "fake" disclosures that you become desensitized to it. Boy who cried wolf style.
It's possible "security isn't a concern" because they are dismissing the report, not the security.
I think the fundamental problem is that a lot of orgs just don't care about security, as it doesn't affect their bottom line. Even breaches are only a temporary PR hit. The proper way to address that might just be legislation, with heavy fines based on total revenue.
That, and security is just hard to scale. That's why, if it were mandated by legislation, companies would be forced to spend a comparable amount on scaling their security teams and efforts.
Most respectable services will have an abuse@ address you can contact. They should at least be able to get your issues where they need to go internally. I've had very good results with companies and networks in the US.
I've never had an outright bad experience reporting a security issue, but some companies definitely aren't geared up to handle reports. I found that an energy provider's API would give usage information for past addresses, and I think the right team eventually got told, but it was a nightmare trying to find someone to actually report the issue to.
It's hit and miss. Sometimes they want to throw you under the bus. Sometimes they want you to sign affidavits. I've never been asked to sign an NDA or anything like that. Sometimes they threaten with criminal charges. DoJ recently released some guidance about good-faith security reporting, so it might be easier these days. Doubt that affects active litigation/prosecution or vindictive orgs, though.
Worked at a place where they liked to use encrypted Java prop files... with the passwords hard coded in the app (in the same repo). Those were internal repositories, though.
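For anyone who hasn't seen that antipattern up close, here's a hypothetical sketch (made-up class name and key, plain javax.crypto) of why the encryption buys nothing: the key that unlocks the "encrypted" property ships in the same repo as the property file.

    // Hypothetical sketch: "encrypted" config is moot when the key is hardcoded
    // in the same repository as the encrypted properties file.
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class ConfigLoader {
        // 16-byte AES key, committed right next to the prop file it "protects".
        private static final byte[] KEY = "0123456789abcdef".getBytes();

        static String decrypt(String base64Ciphertext) throws Exception {
            Cipher cipher = Cipher.getInstance("AES"); // AES/ECB/PKCS5Padding by default
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"));
            return new String(cipher.doFinal(Base64.getDecoder().decode(base64Ciphertext)));
        }

        public static void main(String[] args) throws Exception {
            // e.g. the value of db.password=ENC(...) from the properties file
            System.out.println(decrypt(args[0]));
        }
    }

Anyone with read access to the repo can run exactly this and recover every secret, so the encryption is effectively a speed bump.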