Both can be true at the same time: some teams spend a fortune on AI, and those AI investments won't get the expected ROI (bubble collapse). What is sure is that a lot of capacity is being built, and that capacity won't disappear.
What I could see happening in your scenario is the company suffering from diminishing returns as every task becomes more expensive (new features, debugging sessions, library updates, refactoring, security audits, rollouts, infra costs). They could also end up with an incoherent, gigantic product that doesn't make sense to their customers.
Both pitfalls are avoidable, but they require focus and attention to detail. Things we still need humans for.
> What is sure is that a lot of capacity is being built, and that capacity won't disappear.
They really are subsidizing what will be an incredibly healthy used server equipment market in a year or two. Can’t wait. My homelab is going to be due for an upgrade.
Your response contains a performative contradiction: you are asserting that humans are naturally logical while simultaneously committing several logical errors to defend that claim.
The commenter's specific claim, that adding a note about the definition of "if" would solve the problem, is moving the goalposts and a tautology. The comment also suffers from hasty generalization (in their experience the test isn't hard) and special pleading (a double standard for LLMs versus humans).
When someone tells you "you can have this if you pay me", they don't mean "you can also have it if you don't pay". They are implicitly but clearly indicating you gotta pay.
It's as simple as that. In common use, "if x then y" frequently implies "if not x then not y". Pretending that it's some sort of cognitive defect to interpret it this way is silly.
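The two readings differ in exactly one row of the truth table. A minimal sketch (the function names are mine, just for illustration): the strict material conditional is false only when x is true and y is false, while the everyday "biconditional" reading also rules out getting y without x.

```python
from itertools import product

def material(x: bool, y: bool) -> bool:
    # Strict logician's "if x then y": false only when x is true and y is false.
    return (not x) or y

def biconditional(x: bool, y: bool) -> bool:
    # Everyday reading: "y if and only if x", i.e. it also encodes
    # "if not x then not y" (no payment, no goods).
    return x == y

for x, y in product([True, False], repeat=2):
    print(f"x={x!s:5} y={y!s:5}  material={material(x, y)!s:5}  biconditional={biconditional(x, y)}")
```

The readings diverge only at x=False, y=True: "you didn't pay, but you got it anyway" is fine under the material conditional and forbidden under the biconditional, which is exactly the interpretation the seller intends.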
> Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., “not bad” represented as “good”)[...]
From: Negation mitigates rather than inverts the neural representations of adjectives
The facts are in the PISA data collected by the OECD. If you drill down by subpopulation, the majority group in the U.S. goes toe to toe with the majority groups in Asian countries, and beats the majority groups in Western European countries: https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
National competitiveness and distributional equity don’t go hand in hand. China has made tremendous achievements by focusing investment on key provinces instead of trying to bring everyone up together.
The problem isn't technical in nature. We need a brand-new socioeconomic system that outcompetes liberal democracies while reducing CO2 emissions. We are in deep trouble.
As opposed to an information bubble with a small group of humans? It has less personalized hallucinations but more extreme and negative ones, which I think is worse. Ideally people would look at reliable sources and use critical thinking for information, but ChatGPT seems like a better conversation partner than the average Redditor of today (who's probably also a bot...but one trained on drama and negativity).