Facebook used to promise that phone numbers collected for 2FA wouldn't be used for advertising, and then broke that promise.
That is just one example that was made public and was done intentionally. There are potentially plenty more cases where it's either done but concealed (because so many factors go into ad targeting, it's often impossible to categorically prove or disprove which data was used to target a given ad), or done by accident through earlier code that simply assumed all data was fine to use for ads (or because a new product that shouldn't share its data for advertising was accidentally storing data in the same place as other products that do share data for advertising).
Why should we trust companies that have a business incentive to break their promises? If Google's biggest revenue stream is ads, and ads require personal data, then I wouldn't trust any of their promises not to use some personal data unless I could fully audit all the code myself and prove that it's indeed that code running in production.
Because we have, as an industry, collectively surrendered our morality to "the market", which is supposed to fix all these problems by its almighty powers.
Because the prevailing voices in our country have spent the past half-century telling us that government is Bad and business (and greed) is Good. Enough of us have bought into the idea that we've systematically dismantled the systems that keep corporate greed from running amok and ruining people's lives, to the point where corporations now effectively run large chunks of government (e.g., see ALEC).
Because too much of our society—especially in the tech sector, and even more so in the parts of it that are overrepresented on HackerNews—has come to worship wealth and the wealthy, and to believe that they should be allowed enormous latitude to do what they want with their wealth.
Having worked on those products (and in some cases, written the terms), I'd say so.