> On August 2, 2017, Equifax notified the FBI of the Data Breach. It also retained legal counsel to guide its investigation into the breach. The same day, Equifax’s legal counsel retained Mandiant to assist in the investigation into the incident. Experts would later note that these steps suggested that Equifax knew that the Data Breach was serious. In the days immediately following the discovery of the Data Breach, Gamble and Ploder sold more than $1 million in Equifax stock. On August 1, Gamble, Equifax’s Chief Financial Officer, sold stock for $946,374, representing more than thirteen percent of his holdings. On August 2, Ploder sold stock for $250,458, representing four percent of his holdings. These sales were not made pursuant to a Rule 10b5–1 trading plan. Smith would later state in congressional testimony that Ploder and Gamble would have been in many of the meetings he had concerning the Data Breach.
Am I crazy or is this not blatant insider trading?
It's sad how obvious this is. Possibly even more obvious than the insider trading at Intel prior to the public release of Spectre/Meltdown.
This will forever be the legacy of Eric Holder, the man who changed Justice Department policy to go after smaller 'fines' as settlements instead of prosecuting crimes, for the simple reason that fines are easy to win and criminal cases can be lost.
Justice is now escapable because it's been deemed "too difficult to pursue"
>changed the justice department policy to go after smaller 'fines' as settlements instead of prosecuting crimes, only because of the simple fact that fines are easy to win, and criminal cases can be lost.
This policy change could also perhaps be attributed to lobbyists seeking to maximize profits and minimize risks for corporate clients who are knowingly breaking the law.
Sure, which is the fault of the people and the government, as far as I’m concerned. If an elected official is corrupted or doing something we disagree with, it’s our job to fix it.
> On March 6, 2013, Holder testified to the Senate Judiciary Committee that the size of large financial institutions has made it difficult for the Justice Department to bring criminal charges when they are suspected of crimes, because such charges can threaten the existence of a bank and therefore their interconnectedness may endanger the national or global economy. "Some of these institutions have become too large," Holder told the Committee, "It has an inhibiting impact on our ability to bring resolutions that I think would be more appropriate."
> Prosecution rates against crimes by large financial institutions are at 20-year lows. Holder has also endorsed the notion that prosecutors, when deciding to pursue white-collar crimes, should give special consideration to "collateral consequences" of bringing charges against large corporate institutions, as outlined in a 1999 memorandum by Holder. Nearly a decade later, Holder, as head of the Department of Justice, put this into practice and demonstrated the weight "collateral consequences" carry by repeatedly seeking and reaching deferred prosecution and non-prosecution agreements and settlements with large financial institutions such as J.P. Morgan Chase, HSBC, Countrywide Mortgage, Wells Fargo, Goldman Sachs, and others, in which the institution pays a fine or penalty but faces no criminal charges and admits no wrongdoing. By contrast, in the previous decade the Bush administration's Department of Justice often sought criminal charges against individuals at large institutions regardless of "collateral consequences," as in the cases involving Enron, Adelphia Communications Corporation, Tyco International, and others.
This is missing the perspective of just how much of a disaster Obama inherited in 2008.
Also I get that it's Wikipedia, but the comparison to Bush is ridiculous. The Enron story is a long one. Here's Gray Davis's perspective on exactly what it means that the Bush DoJ "sought criminal charges:"
> "I inherited the energy deregulation scheme which put us all at the mercy of the big energy producers. We got no help from the Federal government. In fact, when I was fighting Enron and the other energy companies, these same companies were sitting down with Vice President Cheney to draft a national energy strategy."
Furthermore the whole reason Holder is using the term "collateral consequences" instead of "collateral damage" is because that term was used by Dick Cheney and the press to describe why he does not care about civilian deaths in Afghanistan and later Iraq. It would be a stretch, but surely the lack of investigations into no-bid war contracts like Halliburton's had as much to do with their "collateral consequences" on military operations as it would on their paychecks.
Obama inherited multiple quagmires due to disregard of the consequences of criminal justice policy.
Normal people will never be on the wrong side of a banking fraud, except if their bank goes out of business. What is justice there?
So while I believe bad people should go to jail, I sympathize with Holder's point of view.
This is a terrible take. For 8 years, whenever Obama or Holder were asked to make things right at the expense of their future employers, they proclaimed just how powerless they were. The White House and the DOJ have the power to squash Wall Street like bugs, and make "normal people" whole. They CHOSE not to.
Correct me if I'm wrong, but isn't it the job of the SEC to investigate the insider trading, and then call in the DOJ if they deem it necessary to file criminal charges?
I'd like to defend Eric Holder. It's no secret what he does or who he works for. He's a lawyer that works for the big banks and other powerful industries, his job is literally to keep them out of legal trouble.
We should be pointing the finger at the people who knew all these things and still put him in charge of the justice department. We should also be pointing the finger at the ones who have the power to change these policies now, but fail to do so.
Now we've got William Barr who has worked for Pillsbury and Kirkland & Ellis. You know, the exact same groups who represent scumbags like Epstein and BP...
No, Eric Holder's legacy will be not going after bankers because that could hurt the economy. His solution was to leave the criminals in place rather than suffer any sort of short-term pain (if that).
Basically "We investigated ourselves and found there was no wrongdoing." As the top comment there notes, they're positing that the director of US Information Security wasn't aware of the data breach until 2 weeks after it was uncovered.
Obviously you're probably right, but I'll just try to troll this a bit. According to the principle of charity (or steelmanning), you should interpret an argument in the strongest possible way, and thus you should in this case interpret the statement with an inclusive or, which is more likely to be true.
It's quite possible that these were scheduled trades that were arranged far in advance before either executive was aware of the breach. So, no, not necessarily insider trading. The optics sure look bad, but it could just be shitty timing.
I am admittedly not the most educated person in this area (I wasn't 100% sure that "rule 10b5-1" was the rule that applied here), so I'd like to learn more. Can you give me an example of a reason you'd pre-arrange the sale of stock but not do it in accordance with Rule 10b5-1?
Arranging some other transaction (e.g. buying a yacht) in advance that would require cash, so the executive plans in advance a single sale to execute just ahead of the need for cash. If we go with the yacht purchase, perhaps in six months the builder needs final payment, so Mr. Executive arranges for a single sale of company stock a couple weeks before that date.
Maybe such a thing does indeed require amending The Plan, but I haven't seen anyone with expertise chime in. I'm just saying that logically, "pre-arranged" does not necessitate "working within Rule 10b5-1"
Not an expert either, but if such a thing was allowed, you could arrange to buy expensive stuff you want to have on a regular basis (I would assume this is not uncommon for CEOs) and then just agree orally with the seller to cancel the transactions when the stock is down, go through with it when the stock is up.
Yeah, I would assume that something like that would also be done within Rule 10b5-1, but that has hidden assumptions that I know nothing about, like that it's not just "the plan" but multiple plans, etc. Anyway, thanks.
Maybe, but "pre-arranged legally" == "Rule 10b5-1 trading plan".
So if it was pre-arranged but was not following the rules, it doesn't matter that it was pre-arranged; it still counts as illegal insider trading, as if it had not been pre-arranged at all.
The reasons are pretty obvious: otherwise it would be easy to do insider trading in a stealthy way.
The purpose of that trading plan is precisely to allow you to have arrangements to sell shares as an insider. So if he was pre-arranging legally, this is how he'd do it.
You don't have to be an expert. Practically everyone who isn't rank-and-file gets the dossiers on this nonsense in a public company.
I think so, they still have to be publicly disclosed with an appropriate notice. I don't know the exact rules, but I think it typically has to be announced something like 30 days prior. May also involve restrictions around significant events such as earnings calls, quarterly events or shareholder meetings.
I'm not aware of the details, as I'm just a peon in Back Office, but I do know traders pay attention to not-insider "insider" trading announcements from the SEC (yes, this pretrade information is publicly available from the SEC). I have no idea about non-US rules.
It's possible they were already planning to make those sales. Executives who can potentially have inside information often need to tell the SEC far ahead of time about sales they intend to make.
This is quite strong policy. Usually in most sinister incompetent companies, the user name is "admin" and the password is "password".
On a serious note: there should be a mandated, periodic, third-party security audit by neutral parties for all entities which deal with user data beyond a certain specified level of sensitivity. It should not be left to their discretion when to run such an audit from their end. Whether an entity similar to SEC for the stock exchange is desirable can be debated, but the current laissez faire approach to data will lead to even more such disasters.
There could be whistleblower protections for hackers. Consider that the previous attitude was that hackers were causing millions of dollars of damage and needed to be thrown in prison. With the proliferation of state-sponsored and counter-intelligence hacking over the past 15 years, no one believes you can make anything secure just by throwing enough teenage script kiddies in federal prison.
The reverse is now true: companies are taking the blame and legal liability for being negligent in their security practices.
That said, no organization, public or private, is impervious. Heartbleed and Meltdown should have driven that home to anyone who thinks otherwise. Determining where the line of negligence lies will be a harder one to draw, though for civil liability it may not even matter (which is great for Google and Apple, and death for small businesses).
While nothing is impervious, that really has nothing to do with Equifax. Equifax is a case of gross negligence and malfeasance. There were no fewer than three security audits of Equifax going back as early as 2014. Every audit indicated major security vulnerabilities, and Smith disregarded these audits each time.
A little bit of both. Equifax (and other credit reporting agencies) are subject to an annual audit from the SEC. There were issues being raised from the SEC with Equifax's processes going back to at least 2012. Ernst & Young also had a responsibility of oversight for Equifax's security practices due to acting as Equifax's primary independent auditor with regards to shareholder reporting.
Additionally, Equifax then paid on three separate occasions for external security audits at the direction of upper management (one specifically directed by Smith where the auditors were specifically directed to report the results only to Smith).
I haven't seen enough information to give a positive answer, but only to make conjecture that the external audits were driven by outside pressures; likely coming from the SEC. So the audits were performed, but this was essentially just a formality and management largely disregarded the audit findings.
I was once pulled in to consult on a new unix system that was being connected to a bank's mainframe. The operator I was working with, who had worked on mainframes his entire career, hesitated and said he needed to call someone because he couldn't remember the password into the unix system. When they didn't answer, I asked if they wanted to try 'root', or something. They did. It worked. Stunned silence followed. They wanted to know how I knew that. Nobody was supposed to know that.
The laws already exist; the penalty is just too small. With higher penalties there would be an insurance market where the insurers set standards and perform audits.
Standards set by bureaucrats are usually written by special interest groups and don't achieve the desired outcome at a good cost.
Sadly, having been in such an environment, I can confirm that indemnity insurance exists for such situations. What happens is that the underwriter, to reduce risk, mandates a very rigid process.
In one case, I saw a "private cloud" provider underwrite their client's system by owning the "software-development-release-cycle". They were mandating quarterly releases and three-month manual testing and regression periods.
They put themselves in a situation whereby they could charge the client for the tin, administering the process, the time and materials for the deployment and testing and the indemnity premium.
They reduced their risk/exposure because infrequent releases and such long regression cycles meant assurance levels were rarely met within the dedicated window. There was a very long tail of unreleased features. In summary, they mitigated any risk by choking the product, reducing the number and size of releases, to deliver a fraction of the value available.
We learned to work around it by taking advantage of feature switching, but quarterly releases are a death knell for a product.
It's a recent trend that public corporations are being punished through securities fraud lawsuits since conventional regulators have been bought out.
Just about any managerial incompetence can be spun as "securities fraud" since the basic presumption of most companies is that management is competent. Maybe they'll give up on that in order to reduce their exposure to lawsuits.
There are two types of possible regulation, one is control and the other is liability.
With control, some administrative body says you have to do X, Y and Z. And presumably, if you jump through the hoops and it blows up, there is an implicit guarantee. This kind of regulation is common across banks, and in 2008 when all the reserve requirements were deemed insufficient and all the acceptable ratings meaningless, there was a bailout.
The other alternative is a liability approach. You are liable for something (e.g. protection of customer information) and you are responsible for executing in the best way you know possible. If you fail, there is some punitive measure taken.
I personally prefer the second, especially since security is a hard problem. There are best practices, sure, but from my experience, I don't believe regulators and auditors are effective in their stated goals.
I don't see much difference in the two approaches, except that the latter case will require customers to collectively sue you for damages, in which case you can probably run a cost-benefit analysis to find out if it would be worth it.
In a regulatory environment, if a corporation does not comply, the punishment is increased until either the corporation complies or ceases to exist. Since not existing is bad for profit, corporations usually comply eventually.
In a liability environment, as long as nothing happens, a corporation can continue down a dark path with no ill effects and abuse the rules as it sees fit. It's only costly when things go wrong, and you can calculate the likely cost of things going wrong.
The difference is pretty big because there are a couple of conflicting requirements in a regulatory regime: it must be predictable (if things change rapidly, it becomes hard to comply and you'll get de facto non-compliance and shadow data storage) and it must be up to date. Policy makers usually don't specifically want punishments or changes in behaviour, they want outcomes.
In a liability environment, you attempt to describe the true cost and risk and allow the company to adapt to changing environments.
A practical real-world difference is "password rotation requirements". Most real-world security professionals knew the dangers of strict password rotation requirements for years before NIST could release information on it. And just because the process of standardization must necessarily be slow, the NIST requirements would then have to flow to other departments so they can then update their standards and so on.
Today, in most financial and healthcare companies, password rotation requirements are rampant despite NIST now advising against them. This is often because the companies don't want to spend the money to alter policies, but also often because, in the 'no-man-is-an-island' interconnectedness of firms, policies in one can impose corresponding policies in others. So that means your new startup will need to rotate passwords every month in order to integrate with (say) Bank of America.
Apart from all that, if you genuinely think about it, we want to do cost-benefit analyses on this. If the risk of leakage of customer data is low enough and we value it at some $x dollars per unit, then there's some $y of cost above which it isn't worth it. This intuitively makes sense since customer data, no matter how personal, isn't worth infinity. If it were, no one would collect it. No one. In fact, by giving me your phone number I would suddenly be holding an artefact of infinite value. Or by giving Amazon your shipping address. No one wants that liability and information exchange would halt despite everyone (in reality) wanting it to happen.
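To make that concrete, here's a back-of-the-envelope Python sketch of the expected-loss reasoning; every number in it is invented for illustration:

```python
# Toy expected-loss calculation; all figures are made up for illustration.
records        = 150_000_000   # people whose data you hold
value_per_unit = 5.0           # the $x you assign to one leaked record
breach_prob    = 0.01          # assumed annual probability of a serious breach

expected_annual_loss = records * value_per_unit * breach_prob
print(f"expected annual loss: ${expected_annual_loss:,.0f}")

# By this crude model, spending much more than that per year on mitigation
# (the $y above) isn't "worth it" -- which is the point that customer data
# is valuable, but not infinitely valuable.
```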
Interesting. I can see the rationale for that: we currently exist in a liability environment and that's where we are. The CYA process means that people stick to what's already there because introducing a change means you're on the hook for the change.
What I hoped for is that I, as a startup, can beat the government on picking a security standard because the government has to cater to all but I can just beat it by being better than Big Slow Dinosaur (BSD). But because I have to integrate at some level with BSD that has to follow the government I lose that advantage.
I suppose, with a regulator capable of moving fast to respond to threats, aware of sunset periods, regulation could be superior. Somehow, I expect forums like this one to be full of software engineers complaining about "the constantly changing requirements from NIST" if that were to happen.
If instead of liability, you had a regulation mandating the implementation of specific guidelines, they'd use the ones produced by NIST, which until 2017 also recommended password rotation.
The big operative difference is that under the Control regime, the regulator decides what practices are acceptable. Under the Liability regime, the acceptability of practices depends more directly on their effectiveness (with more or less directness depending on the type of liability).
A Liability regime would encourage companies to be actually secure, because they're responsible for what happens to data that is lost. A Control regime would encourage companies to check boxes from a list provided by a regulator. A Liability regime encourages being pro-active vs reactive to the regulator in a Control regime.
There are middle grounds, like HIPAA or GDPR. Both give companies some leeway in terms of creating their own checkboxes, and fines are for actual breaches, not just improper process.
Determining which organizations should be charged with third-party security audits / regulation in an already complicated regulatory landscape (orgs may already be providing security info to 2-3 regulators in different formats) is challenge 1.
Challenge 2 is actually getting any legislator / regulator (at least in the United States in the current climate) to agree that this is an important and urgent regulatory matter worth adding the burden to companies, and that they should move on this now to improve overall national security.
Challenge 3 is to only make the request once, in a standard format so that the data is actually relevant instead of overlapping requests from different organizations that turn useful data into a paperwork drill that is irrelevant by the time it leaves the org.
Lastly, I'd say some sort of open source middleware proof of concept to exchange this information would go a long way toward accountability. Industry could even propose the best option themselves via their existing interest groups.
Right. We do this with accounting firms, and I think we should do it with data security as well. Does it cost money? Sure. But that's the cost of doing business. If you have personal info like this and you profit from it, then you are also responsible for safeguarding it.
Just a quick gut-check: are we sure this is working well with accounting firms?
They're all merging their consulting services businesses back in again, post the Arthur Andersen scandal, and while there may be controls and training and they're all pretty serious about it (I worked for such a company at one point), it doesn't seem like many folks are getting dinged on violations of late.
I agree that it is worth the cost (as a citizen whose data is being lost) but in the current landscape the companies may argue that it is not, and might be right.
Unfortunately if there were such audits, I can practically guarantee that it would end up being bottom-dollar devs employed through unions or insider dealings. And the larger companies like equifax would likely have a deal allowing them to self audit to some extent.
See: Boeing, iso certifications, building inspectors, health inspectors, any large civil engineering or aero firm, etc.
To be clear, I do agree that it is absolutely needed in the US. I just have no idea how you could implement it. Culturally the US seems to think that asking forgiveness and looser regulation for businesses is the right direction.
Not accusing you of this, strictly speaking, but I see this as just more of the "if you regulate, they'll just do X, so don't bother" defeatism that is used all the time to argue against regulation, taxation, or any sort of policing of the rich and powerful. We have numerous examples of regulations actually working as designed. Notable failures (e.g. the IRS, the financial industry in the 2000s) are due in large part to persistent under-funding by Congress, rather than any inherent impossibility.
Yes, it always comes down to funding. But the major anti-regulation party has also been staunchly anti-tech-regulation for a long time (link below), and even when the major tech companies tried to throw their weight around, they lost (see Google et al. vs. the FCC during the repeal of net neutrality). This can also be seen in the recent public displays of government harassment/scapegoating of Facebook, which look like "hey, we'd like you to change your rules" while simultaneously not regulating them. I think the first step needs to be a bit of a cultural shift back towards regulation before it will be effective (see the EPA push after the Cuyahoga River caught fire). I don't mean to say "don't regulate, it's pointless"; I mean to say "set it up from the top down, don't try to piggyback data regulation onto a framework that wasn't designed for it".
I was shocked (shocked!) to learn that the "municipal" inspector of works was a private individual, who was paid directly by the building company. Not by me - by the company that was supposedly being monitored. I didn't even have his name and address.
Wow, I always thought the UK was more focused on govt oversight than the US. Here in the US, building inspections have to be organized through government/municipal agencies. Whether they are subcontracted out is related to the size of the city/county but I know in at least 2 medium-large cities (250k-500k people) they have dedicated building inspectors on the payroll as govt employees.
Edit: Of course it would be irresponsible to say that they were consistent. Each inspector has their own ideas of "that should really be 2x12, not 2x10", or "that stairway is too steep", or "that should look more like the other houses", etc. But I do see value in forcing everyone facing a semi consistent set of rules.
I think it would be more efficient to give literally any teeth at all to prosecutors in these cases. Make it a big enough liability that they have to care. Let them figure out how to avoid the crippling fines.
Unfortunately this particular genie is already out of the bottle. We can improve security practices going forward, but at this point any American with a credit history has had their personal details compromised.
In a market where a critical data breach threatened Equifax's bottom line, they'd be incentivized to implement this themselves.
A critical data breach doesn't threaten its bottom line, unless someone uses such a breach to leverage credentials into one of the credit-querying institutions and reveals, for example, the detailed criteria by which such an institution grants a loan.
External financial audits exist to provide public assurance that a company's finances are accurately reported. We need the same thing for security, because there's no real way of assessing that as an outsider, and that knowledge is a public good, much like a corporation's finances.
Another approach would be to start fining organizations that leak personal information. If the fine was sufficiently large per individual affected, companies would then be incentivised to start taking security seriously, and third-party audits would probably be part of that.
How would one motivate these companies to _actually do something_ with the results of the audit? Perhaps, buried deep in the burrows of their bureaucratic empire, are several audit reports outlining this vulnerability.
Since it's using regular admin creds, a lot of prior 'breaches' might even have gone unnoticed, since it's likely no alarm bells would have been in place to detect misuse of authorised access.
Yes. You can't rely on capitalism to regulate businesses, much less to regulate businesses who deal with the private information of people other than their customers. (See also: Google, Facebook)
And you can’t trust the government to regulate business either, between the revolving door of government and private industry and the fact that each party is biased for and against certain industries.
Note that this is an order on a motion to dismiss; none of the fact claims reported here are findings by the court, they are allegations made against Equifax. In a motion to dismiss, the facts in dispute are viewed in the light most favorable to the non-moving party, and here Equifax and other defendants are moving to dismiss. That's why the supporting reference for every fact claim is to the complaint against Equifax in the case, and “According to the plaintiff” is liberally scattered throughout the document.
So, the footnotes for the "admin" "admin" (46. Id. ¶ 225 (emphasis omitted) points at footnote 1. Am. Compl. ¶ 3.) claim refer to the amended complaint, paragraph 3? Any idea where this amended complaint is, which most of the early footnotes are referring to?
So bullet point # 225 from that complaint basically says the same as the PDF we're discussing:
>Likewise, Equifax “protected” one of its portals used to manage credit disputes with the username ‘admin’ and password ‘admin.’ This portal allowed access to a vast cache of personal information, including employee names, emails, usernames, passwords, consumer complaint records, and the Argentinian equivalent of Social Security numbers. The portal also granted administrative access allowing intruders to add, delete, or modify records. A November 15, 2017 article in Forbes quoted cybersecurity expert Wes Moehlenbruck, who stated that this was one of many “very grossly negligent security practices” at Equifax. The article continued, “‘Admin/admin’ as a database password is a surefire way to get hacked almost instantly,’ Moehlenbruck says. ‘A production database with this account smells of poor security policy and a lack of due diligence.’
But the complaint is similar. It's simply the allegations of one side. Part of the job of a trial is deciding the truth or falsity of these factual claims.
>For example, Equifax relied upon four digit pins derived from Social Security numbers and birthdays to guard personal information, despite the fact that these weak passwords had already been compromised in previous breaches. Furthermore, Equifax employed the username “admin” and the password “admin” to protect a portal used to manage credit disputes, a password that “is a surefire way to get hacked.” This portal contained a vast trove of personal information.
I’m genuinely curious how this happens. I remember my first job in the industry, just out of university. I knew nothing about security, but still wouldn’t have done that. My first gig was in a credit union software company, and the security standards were nonexistent, yet we still had more reasonable passwords than this (which sounds like an installation default).
Colleague #3: "Sounds good to me. We're behind the firewall and the NIC used for Dell iDRAC or HP iLO is on an isolated network unique to the physical datacenter. Remote access for our techs is managed through a secured bridge that requires all sorts of security hoops on our company intranet, and remote access for general internet traffic is not available due to the firewall restrictions. There's no way hackers will get through that in the first place."
Colleague #4-20: Build various integrations to database, all with their own ways of storing credentials.
Colleague #2: "It's really past due time to change the database password, but first we have to make sure all critical systems can still access the database."
Which is why forward planning and prompt action is worth so much.
I know I'm stating the obvious, but I've seen some worrying attitudes of "just in time" that seem to go hand in hand with a misunderstanding of Scrum Sprints or Kanban. Where people concentrate on the tree and ignore the vast interconnected forest around them.
You would be shocked at how nonchalant and downright negligent people can be about security at even the largest companies in the US. I did consulting work at a large insurance company that had the contact information, SSN, and PHI of pretty much everyone in America (and I mean everyone). I lost track of the number of times people checked the production password into git. In fact, our production Cassandra instance was still using the default cert password 'changeit' when I left. Unsurprisingly, this company was filled with contract workers and H1B workers who were barely able (if at all) to get their work done.
Colleague #437: "So whoever first set this up has left, so I'll just follow the documentation they left to figure out what they did... Oh. ...eh, I got a deadline."
Security is a cost and nuisance. It's the first thing to be cut.
To keep high security at all times you need:
1) Process aka bureaucracy. Mandatory checklists. Checklists are returned and inspected by others. Anything missing or uncertain is checked again and fixed.
2) People who are responsible for security are independent from other concerns. They can have an adversarial relationship with the people responsible for getting things done if there is a conflict of interest. People responsible for security must have the status and power to enforce it.
Consider a scenario where you need to take the system down and fix something quickly. It's completely reasonable to allow a dummy password for a few hours while people are around fixing the problem, until the system is back online.
But if there is no process in place to remove security temporarily and then restore it, something is always forgotten. The people who would order the password to be changed aren't using it and forget the whole thing. The people who use it don't say anything, and it becomes the new normal.
You need to mandate checklists. You force people to use them and return them. It's costly and makes things slower.
Usually they rely on some other mechanism for security. Like you can only access the portal admin page from the intranet or a few IP addresses. That has failed, not the fact that they didn't change the password.
Scrolling through the comments I'm surprised (and not, at the same time) that no one has made a comment like this:
So what?
If an attacker is able to reach your DB, the ballgame is 90% of the way over already. Yes, I understand that a strong username/password on the DB server would be one final gate, but unless I'm living in some alternative reality, I can tell you plenty of companies use weak/shared/guessable passwords for stuff that shouldn't be reachable from the outside like this. And honestly? Securing the DB with one extra (potentially useless) line of defense is an extremely low priority for most businesses.
Defense in depth is important in organizations for exactly this reason. It only takes one admin falling for a well crafted phishing email to get an insider in your network, which is why you need to design it in a way where they find a whole new set of roadblocks once they're inside.
Sure they might eventually break those too, but it's time and effort and opportunity to be caught.
I agree it's not ideal that something so important wasn't at least only accessible by VPN, but any decent IT professional knows to set a strong password on something like this, because it's so easy to do. Sometimes you can't just use a VPN, for historic reasons or due to the complexity of the network, but you can certainly set a strong password on a database server.
Assuming patched, modern database software, that should keep attackers at bay until all the failed login attempts are spotted. It's very embarrassing that they (allegedly) did not do this. Of course, note that in this case it's a custom web application so it would probably be at least more of a challenge to compromise.
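As a rough illustration of "failed login attempts are spotted", here's a minimal Python sketch that counts authentication failures per source IP from log lines; the log format and threshold are assumptions, not anything Equifax actually ran:

```python
import re
from collections import Counter

# Illustrative pattern for a syslog-style "authentication failure ... rhost=1.2.3.4" line.
FAILED_LOGIN = re.compile(r"authentication failure.*rhost=(?P<ip>\S+)")

def suspicious_sources(log_lines, threshold=20):
    """Return source IPs with at least `threshold` failed logins."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group("ip")] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

# Example usage: suspicious_sources(open("/var/log/auth.log"), threshold=50)
```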
Right, the username/password is easy to point to as an obvious problem, but surely this wasn't the _only_ security hole the hackers had to get through, right?
I can't imagine a secure database password would have prevented the hack.
Though the lack of a secure database password probably hints at Equifax's attitude toward security in general.
This is about liability. If you're not doing even the most basic things to protect other people's private data, then you can be held accountable for damages when the data is stolen.
> "Furthermore, Equifax employed the username “admin” and thepassword “admin” to protect a portal used to manage credit disputes, a passwordthat “is a surefire way to get hacked.”"
The document does not state any further detail than this, so it is a bit unclear as to what exactly was located on this portal - does not seem like it was their main database anyhow.
An unnamed large bank in the US uses "admin"/"changeme" for a customer database. I'd love to say more about it but unfortunately that would probably identify me/them.
Where does it say they used the defaults for their "main database"? As far as I'm aware it was "only" the password for a management portal for customer complaints, not the keys to the kingdom.
I tried setting up credit freezes at all 3 credit agencies as a result of the law making it free in the wake of the Equifax breach.
It took maybe 5 minutes at Experian and Transunion. Equifax's site 500'd when I attempted to set it up, and they had no way to take a report. When I called them on the phone, they suggested I send them a fax, and their customer service rep suggested I was a useless person who would never go anywhere in life when I said that was unacceptable and they should be out of business.
>Equifax also failed to encrypt sensitive data in its custody. According to the Amended Complaint, Equifax admitted that sensitive personal information relating to hundreds of millions of Americans was not encrypted, but instead was stored in plaintext, making it easy for unauthorized users to read and misuse. Not only was this information unencrypted, but it also was accessible through a public-facing, widely used website.
I'm using that headline as our thought of the day in group chat at work. Because that is just egregious and negligent.
Nobody thought to raise that? To anyone?
Although I can understand. I have several people who now call themselves DevOps on a project who have practically zero experience with systems operations _or_ development, and have done some utterly incomprehensibly stupid things. It doesn't matter how fancy your cloud tech is, if someone creates VPCs with default ALLOW ALL rules, stuff is going to get compromised. Worse yet, some are _fighting_ against changing the ingress rules because that would show that they were wrong! I'd at the very least rotate them out and replace them if I could. (rant over)
This can happen for many reasons. People just want to get whatever it is working. There is probably a lot of time delivery pressure and something like IAM is complicated.
IMO, the first step to fixing the problem is give DevOps the proper amount of time to design the required permissions. It sounds easy from the outside, but again IAM can be very complex.
Additionally, DevOps must think security first. That means a newly deployed service has zero access and goes from there. Developers are going to be annoyed, but DevOps needs to work with them and vice versa.
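To illustrate the "zero access, then add only what's needed" idea, here's a minimal sketch of a least-privilege AWS IAM policy expressed as a Python dict; the action and bucket name are hypothetical, and anything not explicitly allowed stays denied:

```python
import json

# Start from nothing and grant only what the service has demonstrably asked for.
# The action and resource below are hypothetical examples.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                             # the one action needed
            "Resource": "arn:aws:s3:::example-reports/inbound/*",   # the one prefix it reads
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```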
Yes, least possible permissions is a tried and tested axiom that should be foremost in people's minds. The same with layered security in depth, disabling unneeded accounts, etc etc.
I'm seeing a lot more inexperienced people getting access to stage/production systems (i.e. internet-facing to a greater or lesser extent) due to the DevOps paradigm. Of course the role sounds cool so people advertise for it, and apply for it, but there's a serious lack of understanding of just what it is! Developers need a good understanding of Operations, and Operations Admins need a good understanding of Development.
Things like not understanding the reason why you'd want to test the network access and DNS lookup from the stage pods instead of their local machine. Or not knowing how to perform basic source control tasks.
Of course, I can be dismissed as a grumpy old man. I am, I'm in my 40s and 23ish years of Linux operations has, I hope, taught me a couple of lessons. But I'm not yelling at the kids to get off my lawn, I want to teach the kids about correct garden maintenance, weeding, and when to plant bulbs and seeds (to stretch a metaphor way too far!). I find people are resistant to learning basics "because the cloud", or putting in due diligence because they're paid too little (which I fully understand!)
Sorry, grumpy old sysadmin who is now a team lead with lots of responsibility and too little time to brain dump his 20+ years into some younger heads. I'll try to lighten up :)
And yet, when things like this happen, people want to blame the CEO. Sure, the buck stops there and that person is really responsible for everything. But should the executives really be concerning themselves with the database password? It's an utterly irresponsible thing, and those actually working on the product should have known better.
It happens pretty often. $thing is installed, individual user accounts are created, and the default login is forgotten about. Nobody uses it, so nobody noticed it.
Another variation is not even knowing default accounts exist. i.e. where there is a CLI command to add a new user, which was done during install.
This obviously isn't always the case - but it happens a lot.
And you are talking software, which is known for being quite dynamic.
You'd be surprised how much "inertia" there is in other sectors, some stuff keeps happening even when alternatives are not just better, but also cheaper.
It happens quite a lot. Nowadays services are deployed into the cloud, where people are more security conscious, but when people deploy on-prem they are often more negligent.
There is an unbelievable amount of sensitive data, whether corporate or personal, unencrypted on network shared drives and laptops across corporate America.
I'm pretty much a lay person when it comes to security, so I don't know generally how safe or unsafe that is. But there is definitely a sense that, as long as you don't get phished, everything on-prem is basically "secure" and IT is just taking care of it.
For example my employer had strict rules about data that can be stored on a cloud service, but less-strict rules about data that can be stored on an on-prem network drive.
This might seem shocking, but I did internals for many years and the number of networks I completely compromised via SQL Server is pretty funny. Almost all of them. sa/(blank) -- run xp_cmdshell, abuse server privileges to pivot to other servers or right to domain admin, then compromise their non windows environment with all the access. Networks still get owned this way on internals / red teams all the time. Granted we expect a company with all our personal data to do better, but they are still just a big company making the same terrible choices as everyone else :)
>Instead, due in part to Equifax’s failure to implement effective logging techniques, hackers were able to continuously access this sensitive personal data for over 75 days.
Where was that database located? If I had a database in an offline computer with that username and password, it wouldn't be a problem. I'm not saying this is the case, but perhaps it had a whitelist of hosts that could connect to it, which were "properly" protected?
Though competition without consequences clearly isn't.
I'd like to see a few more bureaux. Or the role nationalised. Government is at least in theory answerable to the citizens. Though government as financial vetter introduces numerous other issues.
The question of why people require credit for day-to-day financial activities, many carrying balances, is another part of this question. Sufficient pay, collective bargaining, workplace and tenant / homeowner protections, and wealth and land taxes are a few policy changes outside the data security arena which would help markedly.
Duopolies and Monopolies are far worse for markets than those dominated by 3 or more entities. Getting rid of one of the agencies will only result in more price-fixing and abuse by these companies.
My comment is only part bloodlust and part just wanting to simplify my credit report tracking from a consumer perspective. I would also like as few companies as possible having my personal information, especially after problems like this.
The other part is if I'm applying for any kind of credit, I assume the most conservative lender would look at all three results and just go with the lowest credit score.
But I agree, even in a semi free market, competition is good.
When I worked for BofA decades ago, there was a PDP-11 at the center of a point to point network of other machines for handling SWIFT, Telex, FedWire .... This network turned over the assets of the bank every 4 days and BofA at the time was the largest private bank in the world.
The password for that console was sesame. Transactions were testworded but otherwise sent in plain text. When I worked in Europe a few years later, I constructed my own telex bankwire transaction from a hotel in Italy. It was to my account, for my money, but it worked, no questions asked.
Tsk! I remember reading a white hat pen-test report once where a major US bank had their master MS-SQL server left at the default root user 'sa' and no password. From memory the pen-test team got full access to the main transaction tables within a minute.
If I remember correctly, they immediately stopped the testing and reported it to management, but I believe they never heard back as to whether the problem was fixed. If anyone knows more details about this, I am sure we would all appreciate an update.
>And, when Equifax did encrypt data, it left the keys to unlocking the encryption on the same public-facing servers, making it easy to remove the encryption from the data.
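For contrast, here's a minimal sketch (using the third-party `cryptography` package; the variable and function names are invented) of keeping the key somewhere other than the server holding the ciphertext:

```python
import os
from cryptography.fernet import Fernet

def get_data_key() -> bytes:
    """Fetch the encryption key at runtime from a secrets manager / injected
    environment variable -- the point is it is never written next to the data.
    Expects a urlsafe base64-encoded Fernet key."""
    key = os.environ.get("REPORTS_DATA_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("data key not available on this host")
    return key.encode()

def encrypt_field(value: str) -> bytes:
    return Fernet(get_data_key()).encrypt(value.encode())

def decrypt_field(token: bytes) -> str:
    return Fernet(get_data_key()).decrypt(token).decode()
```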
Something similar occurred at a previous employer. They were counting on the admin port being blocked on the firewall. This was 15 years ago but even nowadays network guys tend to manage firewalls with spreadsheets and manual updates rather than something like Chef so it’s unsurprising that it got missed or overwritten in an upgrade or something.
I work with financial software and you'd be surprised how much of all this "regulation" is based on self assessments. Auditors are looking for liability shifts, not real security.
I was dumbfounded by this when I was consulting previously and worked with some banks on mortgage compliance. Almost everything about banking is self-assessments and reporting. It is similar to the idea of Boeing doing self-testing for the FAA and reporting that all is fine. Regulation doesn't bring safety or security; it brings reporting that rarely gets analyzed, and even when it is, there is no way it will show anything but the most blatant fraud. It is akin to closing the barn doors after the horses have all left: at least the banks can say "hey, 10 horses left," but nothing was done to prevent it, and they won't get in trouble because they reported on it.
At least that was kinda my takeaway from those jobs. I could just have a skewed version based on the stuff I worked on.
And auditors don't really have access to passwords etc. They can run an assessment tool to see that there's an account named "admin," but typically don't get access to /etc/shadow or passwords within applications.
Now a pentester? If they don't spot this during an assessment, they suck. But pentesting isn't always performed on a rigorous schedule.
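As a rough example of the gap between a paperwork audit and a hands-on check, this is the sort of trivial script a pentester (or an internal team) can run against their own login portal; the URL, form fields, credential list, and success criteria are all assumptions for illustration:

```python
import requests

# Obvious default credentials to try against your own portal; purely illustrative.
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("sa", "")]

def check_default_creds(login_url: str):
    hits = []
    for user, password in DEFAULT_CREDS:
        resp = requests.post(
            login_url,
            data={"username": user, "password": password},  # assumed form fields
            timeout=10,
            allow_redirects=False,
        )
        # Assumes the app returns 200/302 on success and 401/403 on failure;
        # a real check would key off the actual application behaviour.
        if resp.status_code in (200, 302):
            hits.append((user, password))
    return hits
```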
Regulation doesn't matter directly. What matters is organisational culture. Regulation is one tool to change the culture of organisations in an industry, but it only works if it is
* Communicated clearly so that there is a path to compliance[1].
* Enforced with penalties so that people within non-infinite-budget organisations can sell[2] the change as a means to cut costs[3].
------
[1] If you only do the 1st, then it is like forcing children to swim by throwing them off a boat. Some people think that "sink or swim" forces someone to swim; it doesn't. It just presents two possibilities: swim or die.
Regulation means nothing without proper auditing and enforcement. Question is how are countries (US in this case) not enforcing the most basic regulation...
"Was that wrong? Should I have not done that? I tell you, I gotta plead ignorance on this thing because if anyone had said anything to me at all when I first started here that that sort of thing was frowned upon, you know, ‘cause I've worked in a lot of offices and I tell you people do that all the time."
Is there some nuance to this admin/admin used to access a portal?
I can completely imagine a headline like this when there is an old basic auth overlaying an application with a real password. It just seems unlikely that all the logging in the customer service portal will say, "updated by admin".
These security nightmares beg the question: why don't databases use asymmetric keys to authenticate and authorize access? Why are we still reliant on password-based authentication? If it's simply a question of key management and distribution, that's a solved problem.
I imagine, developing something like Equifax today, you'd want to hook it up to your SSO and use row-based security so a user can only read their own row, and then focus your efforts on making sure user accounts, especially privileged ones such as staff, aren't being abused. (You'd still probably establish system-to-system level trust, such as keys between your API and DB.)
But it's so much easier and cheaper to just connect using a username and password, and then do whatever the framework you chose does by default.
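As a sketch of what "keys between your API and DB" could look like, here's a toy challenge-response using Ed25519 from the third-party `cryptography` package. The provisioning flow and names are assumptions for illustration, not any particular database's built-in mechanism:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioning: the API service generates a key pair and registers only the
# public key with the database side -- no shared password to leak.
api_private_key = Ed25519PrivateKey.generate()
registered_public_key = api_private_key.public_key()

# Connection time: the database side issues a random challenge...
challenge = os.urandom(32)

# ...the API service signs it with its private key...
signature = api_private_key.sign(challenge)

# ...and the database side verifies the signature against the registered key.
try:
    registered_public_key.verify(signature, challenge)
    print("caller holds the registered private key; allow the connection")
except InvalidSignature:
    print("reject the connection")
```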
I believe it's a joke referencing the default password for mssql for many years of sa/null. Eventually the install started forcing the user to change it to something, but for a time there were many mssql databases out there with a default password of null.
If all software automatically changed its own admin password after 121 days, and refused to set old passwords (store the last 5 salted hashes), that might be a good enough stick to force people to rotate passwords themselves, and default passwords would go away.
Yeah but you can't do anything about that unless you also enforce complexity. If the goal is to prevent defaults (ex. to prevent drive-bys), the above does that; "good" passwords is a separate requirement.
It's easy to let them type the old password and then the new password twice; that way you can at least confirm the new one isn't similar to the old one, since you receive it in unhashed form from the user themselves.
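For what it's worth, the "remember the last 5 salted hashes" check is easy to sketch. A minimal Python illustration (the function names are made up, and a real system would use a dedicated password-hashing library such as bcrypt or argon2):

```python
import hashlib
import os
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-password salt; the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def is_reused(new_password: str, history: list) -> bool:
    """history is a list of (salt, hash) pairs for previously used passwords."""
    return any(
        secrets.compare_digest(hash_password(new_password, salt), old_hash)
        for salt, old_hash in history
    )

def set_password(new_password: str, history: list, keep: int = 5) -> list:
    if is_reused(new_password, history):
        raise ValueError(f"new password matches one of the last {keep} passwords")
    salt = os.urandom(16)
    history.append((salt, hash_password(new_password, salt)))
    return history[-keep:]   # keep only the most recent `keep` entries
```

Note that a hash comparison can only catch exact reuse; checking that the new password isn't merely similar to the old one does need the plaintext, as the comment above points out.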
Why are these companies allowed to hold my identity information? I have no form of credit and yet these companies are still allowed to keep track of me and store my information in an insecure fashion. I hate how idiots control the show in this country.