A far more tolerant audience with a better understanding of nuance across a great span of subjects. Harping on Reddit is incredibly passé and showcases the elitist echo chamber of HN. So yes, I’ll take it as a compliment. You can continue being embarrassed for me I suppose.
It's not ethical. But it makes sense why they're pandering and ignoring the fact that the system made a mistake in this situation.
The mistake was clear, but not life threatening in this particular instance. It was "almost" at the corner where they wanted to stop.
It's also pretty clear that they care more about appealing to Tesla and to Tesla fans than they do about being an honest review channel. For better or worse, Elon Musk has millions of followers who have established a parasocial relationship with his public persona, and they take criticism of him or his products as personal insults.
Putting that all together, what would they gain by complaining about it?
It would just alienate their viewers and the company that they rely on.
The smoothest response would have been to laugh it off and show self-awareness about it, but the next best response was probably just to ignore it.
Hopefully some real review channels are able to use the service and give real feedback.
You know, there's a creative third way which the US could approach if it had the cojones.
Allow OpenAI and other AI companies to use all data for training, but require that they pay it forward through royalties on any profits beyond some threshold X, where X is a number high enough to imply true AGI was reached.
The royalties could go into a fund that would be paid out like social security payments for every American starting when they were 18 years old. Companies could likewise request a one time deferred payment or something like that.
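The proposal boils down to a simple excess-profits formula. As a back-of-the-envelope sketch (the threshold and rate below are made-up illustrative numbers, not anything actually proposed):

```python
def agi_royalty(profit: float,
                threshold: float = 100e9,  # hypothetical "true AGI" profit bar: $100B
                rate: float = 0.10) -> float:
    """Royalty owed on profit above the threshold; zero below it."""
    return rate * max(0.0, profit - threshold)

# Below the threshold, nothing is owed; above it, only the excess is taxed.
print(agi_royalty(50e9))   # 0.0
print(agi_royalty(150e9))  # 5e9, paid into the public dividend fund
```

Only profit above the bar is touched, so companies short of the threshold operate exactly as they do today.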
It's having your cake and eating it too. It would also help ease some tensions around job loss.
Sadly, what we'll likely get is a bunch of tech leaders stumbling into wild riches, hoarding it, and then having it taken from them by force after they become complacent and drunk on power without the necessary understanding of human nature or history to see why they've brought it on themselves.
There are many possibilities. Perhaps they're allowed to use anything publicly accessible but have to release their model every so often, say every month or every year. My biggest fear is that, as happened with copyright's limited term, this limited term would get chipped away at over the years.
Another would be that they couldn't sell access to customers directly but rather would have to license it out to various entities at rates set by regulators. Those entities would then compete with each other for end customers. This, of course, might be prone to regulatory capture, as happens with utilities.
Not to be funny on purpose, but we are currently having discussions in America about whether we should finance aid for poverty and the like. I love your idea, though.
The flipside of labeling the people let go as "poor performers" is that you make the people still on board look like "high performers" by contrast. This increases the social value of being an active Meta employee, so you could see it as "brand building": they will pitch themselves as employing only "the best of the best," and that reputation can be used to recruit people willing to gamble on benefiting from it.
By damaging the reputation of the people they lay off in order to improve their own reputation, it's almost a form of reputational theft. It's unethical but I can see why they are doing it.
If you take a job at Meta, understand that the company can and will screw you if it benefits them, so be prepared to do the same in turn. Never forget that Meta is not a great company in the sense of being technically excellent. What I mean is that their technical excellence is not a product of their culture but a necessity of operating at scale.
What makes Meta great is that it's one of the most ruthlessly managed companies in its class. It knows how to thrive in legal and ethical gray areas. This is the primary thing that its culture selects for and as a result it is a master at that art.
So use them like they use you, and don't fall for their nonsense about being mission driven or making the world more open and connected. It's a fleet of pirate ships. Nothing more.
- t. resigned from Facebook twice in my career in order to work at "better" (by my standards) companies.
I believe that what we're not accounting for is the belief among many wealthy people that scientific research and all other intellectual labor will soon be automated by AI.
I believe that what those wealthy people aren't accounting for is the need for some class of humans to act as a translation layer between the expert AI systems and the rest of us in order to allow the discoveries and results to percolate through human institutions.
Or, rather, they may be underestimating the bottleneck that will be introduced by trying to hoard all of those results within their own circles of trust and influence.
More fucking morons. The gap in biomedical research isn't in the realm of language models but in the amount of information in biology that we don't yet know. I'm not sure what percentage of all the genetic data on Earth we've sequenced, but it's not much, and we still don't have a mechanistic understanding of a single cell, much less of complex multicellular organisms, with proteins affecting gene expression, cell membrane receptors being reused in 50 different tissue types, molecular secretion and diffusion altering our minds' functioning, and electrical currents synchronizing brain firing at a distance.
No LLM trained on PubMed will be able to suss all this out; more data is needed.
Even in pure mathematics, where I am currently a grad student and, when needed, a big fan of trying to get LLMs to explain things to me at 1 am, they just aren't that good. If it's a popular question that I could have asked on Math Overflow, sure, it will probably only get some details weirdly wrong, but for subtle, complex concepts, they're not ushering in some golden age of truth and understanding.
And God help the LLMs trying to understand physics that are trained on all the BS on Youtube and the blogs.
But are they wrong? I'm pretty sure I could ask any LLM to produce a follow-up to "Transformative Hermeneutics of Quantum Gravity" and get one as good as, or even better than, the original. A lack of percolation would actually be an improvement.
Sure, LLMs may indeed produce plausible BS akin to the classic BS paper you mention. But consider that all of science is being gutted, including unambiguously substantial fields (biology, physics, chemistry).
Welcome to the era of the Conservative Justice Warrior. They're deploying all of the same annoying and repressive tactics that got progressives booted from power with breakneck speed.