The penalties would undoubtedly have been much higher if they hadn't disclosed it. Of course, perhaps it never would have been discovered, but whistleblower protections/incentives and high enough penalties for covering up issues can strike the right balance.
It's similar to just about any other violation, really: if I injure someone accidentally—even through negligence—I'm going to get a much more lenient punishment if I don't try to cover it up or run away from it.
Who in their right mind is going to blow the whistle and risk their entire career over a security flaw that was detected internally, found to be unexploited, and fixed in a timely fashion?
The fact that such a case even has reporting requirements at all seems nuts to me.
I am shocked to see the "let's make writing vulnerable code illegal" take be so popular on HN. If you have written any meaningful amount of code, you have written vulnerable code.
Every case I've seen in my career where this has happened has not been "negligence" but developers not realizing there's some obscure logging middleware or something.
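To make that concrete, here's a minimal sketch of how it tends to happen (the wrapper and field names are made up for illustration, not taken from any real incident): the handler itself never stores the password, but a generic audit wrapper logs the whole request body, credentials included.

```python
# Sketch of an innocent-looking logging wrapper that ends up persisting
# plaintext passwords. All names here are hypothetical.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("request-audit")

def audit_middleware(handler):
    """Wraps a request handler and logs the full request body for debugging."""
    def wrapped(request_body: dict):
        # The bug: the entire payload is logged verbatim, credentials and all.
        log.info("incoming request: %s", json.dumps(request_body))
        return handler(request_body)
    return wrapped

@audit_middleware
def login(request_body: dict) -> bool:
    # The login handler never writes the password anywhere itself...
    return request_body["username"] == "alice" and request_body["password"] == "hunter2"

# ...but the audit log now contains it in plaintext anyway.
login({"username": "alice", "password": "hunter2"})
```

The usual fix is redacting or dropping known credential fields before anything reaches the log, which is exactly the kind of thing that's easy to miss when the logging lives in some shared middleware layer.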
An employee who left for another job or simply retired and who feels this was wrong. Plenty of lads at Meta earn enough to buy a house and have some investments, so there's little leverage left to hold over their careers.
SWE in security here. Why the heck would I "whistleblow" in a scenario where a vulnerability was found internally, never exploited, reported to legal, and fixed? That is part of any healthy SDLC.
The EU is implying that it is illegal to accidentally write vulnerable code. Pure insanity; nearly every software company would go out of business overnight if this were a stance they actually enforced.
I'm sure we've literally never written a vulnerable line of code in our lives, right?
Security reviews are part of a healthy SDLC. You catch vulnerabilities in those reviews; they would be totally unnecessary if people simply wrote perfect code to begin with.
There needs to be some kind of punishment for failing to take basic security practices into account.
A system where a simple disclosure is enough will probably result in companies ignoring security, then disclosing when a problem surfaces and carrying on without changing anything.
But it is also important for fines to be reduced when a company takes the right steps to improve.
Balancing this will probably be quite difficult.
Executives can already go to jail for not reporting vulnerabilities. The risk of personal criminal liability is one of the reasons people in big tech security careers choose not to move into the Director role.
This is why we need regular government inspections that would make such disclosures inevitable. It's kind of insane that we don't already have an equivalent of the EPA (Environmental Protection Agency) for citizen data management.
There is also no real benefit to storing passwords in plaintext, so I don't think your fear is realistic at all. If you are going to fine this at all, then 10k-100k would be an appropriate amount.
That seems like pretty much a license to have practices as poor as you want. For a company of that size, a fine that size would just not be material at all.
Meta put their customers at risk through negligent actions. A fine in the range you propose would be lower than any investment required to improve security (e.g. by hiring a single additional person). What company in their right mind would do anything to improve security in that case?
That's close to how I arrived at the number - I used programmer wages. Reading about best practices around password storage and implementing it is fairly quick and easy, so a fine of that size will still be sufficient incentive.
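For a sense of how small that implementation effort actually is, here's a rough sketch of salted password hashing in Python. It assumes the third-party bcrypt package (my choice, not something named in this thread); argon2 would do just as well.

```python
# Minimal sketch of the "quick and easy" best practice: salted, adaptive
# hashing instead of plaintext. Requires the third-party `bcrypt` package.
import bcrypt

def store_password(plaintext: str) -> bytes:
    """Return a salted bcrypt hash suitable for persisting; never store the plaintext."""
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    """Check a login attempt against the stored hash."""
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

stored = store_password("hunter2")
assert verify_password("hunter2", stored)
assert not verify_password("wrong-guess", stored)
```

Two short functions and one dependency is roughly the scale of investment we're talking about, which is why I based the number on programmer wages.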
But why would that be an appropriate amount? By setting the fine around the lowest possible investment needed to tackle the issue, you're literally incentivizing companies not to take security seriously. After all, you can save money early in development and just fix it whenever you get around to it - you'll still come out ahead compared to doing things right immediately.