The problem with fines is that they happen after the fact and only if the worst actually happens. Tons of companies have totally abominable security and never get breached only out of dumb luck. So you'll still get lots of companies playing Russian Roulette where they make higher profits for ten years before they may or may not suffer a breach and get fined into oblivion, at which point they file for bankruptcy and start over.
You also end up creating a lot of really perverse incentives, like nefarious companies not disclosing data breaches because disclosing them would result in liability even though that's necessary for the victims to take steps to mitigate the damage. There's a reason the NTSB does no-fault investigations.
And a lot of mediocre but still harmful incentives like cargo culting decades-old security checklists to satisfy compliance requirements even though they don't actually result in improved security, but do create a false sense of security.
More than that, the problem is that humans are fallible, so even if you do 99.9% of everything right you can still make a mistake. A company with one security vulnerability can get just as compromised as a company with ten thousand. Does it really make sense to destroy OpenBSD with fines as soon as they have one security vulnerability? Or every random company that uses OpenSSH on a day that an undisclosed 0-day is being exploited in the wild? Or a company that updates to the latest version of some software that claims to have fixed a CVE even though it didn't?
The real problem here is architectural. It shouldn't be possible for someone to breach Equifax and get all your information because they shouldn't have that information to begin with. They shouldn't exist. Your data should be yours, on your device, so that it isn't possible for someone to get it by breaching a third party because the third party doesn't have it.
If you make the fine large enough that it may cause the company to go under, you can bet they'll buy some insurance. And you can bet the insurance companies will have some standards to reduce the risk of a company getting breached, such as doing audits regularly.
For example, if Equifax faced a fine of $5B (more than 1/4 of their market cap) instead of $500M, you can bet they'd be more serious about audits in the future. However, we've conditioned business to expect minor consequences for breaches, so security becomes an afterthought. Likewise, the $5B fine against Facebook is unlikely to change anything, though a $200-300B (20-30% market cap) fine would be much more convincing.
The point isn't necessarily to ruin companies, but to set a precedent that says these types of issues will not be tolerated. It'll force companies to get insurance, and the insurers will have an incentive to make sure they never have to pay out on the policy.
Using fines that large is how you get them to not buy insurance, because it would cause the insurance to be prohibitively expensive, assuming you could even find someone to sell you a policy that large.
It also doesn't make any sense to base fines on market cap because the two things have nothing to do with one another. All that would really do is cause corporations to restructure their operations to separate the entity that does all the dirty work from the one that owns all the assets, so that the entity that exists in your jurisdiction and is susceptible to being fined is renting/leasing everything and has only a nominal market cap, whereas the one with all the assets is a totally independent company that isn't even in your jurisdiction and never does anything "wrong" because all it ever does is lease and license things to a different entity.
It also seems kind of obvious that even if you could try to impose a fine equal to 20-30% of a company's global market cap, all that would do is cause the local entity to declare bankruptcy, dissolve and abandon your jurisdiction without actually paying the fine, because that large of a fine would exceed the long-term value of operating there. Especially when there isn't any guarantee it won't happen again if they stay. For that matter it would tend to make companies not want to operate there to begin with, because it's possible to do your best and still fail, and that kind of uncertainty is precisely how you drive businesses away.
But most importantly, it still generally isn't the large tech companies who are the ones with poor security. It's the other industries, especially finance and government, that are collecting just as much data but then doing a much worse job of securing it. What does a fine mean to the DMV or OPM?
These large rich tech companies are really responsive to 'compliance' with the letter and spirit of laws that otherwise might cause severe losses. Look at, e.g., the GDPR, and Google suddenly getting religion about you being able to mass-download your data. Yes, you can legislate solutions to corporate behaviours.
"These large rich tech companies" are not the ones getting breached. The likes of Google and Microsoft take security seriously already. The problem is the likes of Equifax and Capital One and government databases with poor security that nonetheless contain all kinds of sensitive information that they shouldn't be aggregating and retaining to begin with and they certainly shouldn't be required by law to collect and store, even though they frequently are right now.
Also:
> and google suddenly getting religion about you being able to mass-download your data.
Letter, yes. Spirit, I'm not so sure. It feels like Google and FB want to keep doing what they're already doing and comply where they have to, instead of reconsidering whether they actually need all that data and need these dark patterns for consent (which would be the spirit of the GDPR).
And the smaller-than-FAANG companies... too many checklists, contracts and theater ("GDPR requires us to disable autofill on this form") and not enough actual rethinking of what they're doing and whether they should change their approach to data... so we'll still be seeing plenty of breaches where they shouldn't even have the breached data in the first place.
It'll probably be a decade before we see real effect from the GDPR...