Hacker News | lugu's comments

Both can be true at the same time: some teams spend a fortune on AI, and the AI investments won't get the expected ROI (bubble collapse). What is sure is that a lot of capacity has been built and that capacity won't disappear.

What I could see happening in your scenario is the company suffering from diminishing returns as every task becomes more expensive (new features, debugging sessions, library updates, refactoring, security audits, rollouts, infra cost). They could also end up with an incoherent, gigantic product that doesn't make sense to their customers.

Both pitfalls are avoidable, but they require focus and attention to detail. Things we still need humans for.


> What is sure is that a lot of capacity has been built and that capacity won't disappear.

They really are subsidizing what will be an incredibly healthy used server equipment market in a year or two. Can’t wait. My homelab is going to be due for an upgrade.


Your response contains a performative contradiction: you are asserting that humans are naturally logical while simultaneously committing several logical errors to defend that claim.

This comment would be a lot more useful with an enumeration of those logical errors.

The commenter's specific claim, that adding a note about the definition of "if" would solve the problem, is a goalpost-moving fallacy and a tautology. The comment also suffers from hasty generalization (in their experience the test isn't hard) and special pleading (a double standard for LLMs and humans).

When someone tells you "you can have this if you pay me", they don't mean "you can also have it if you don't pay". They are implicitly but clearly indicating you gotta pay.

It's as simple as that. In common use, "if x then y" frequently implies "if not x then not y". Pretending that it's some sort of a cognitive defect to interpret it this way is silly.


In the original studies, most people made an error that can't be explained by that misunderstanding: they failed to select the card showing 'not y'.
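That point can be checked by enumeration. The sketch below uses the classic vowel/even-number version of the task (the card labels and rule here are illustrative, not from the comment): even under the biconditional reading of "if", the card showing 'not y' (the odd number) must still be flipped, so the misreading can't excuse skipping it.

```python
def violates_conditional(vowel, even):
    # Material conditional "if vowel then even" is violated only by (vowel, odd).
    return vowel and not even

def violates_biconditional(vowel, even):
    # Biconditional reading "vowel iff even" is also violated by (consonant, even).
    return vowel != even

def cards_to_flip(violates):
    # A card must be flipped iff some hidden value on its back could violate the rule.
    flips = []
    if any(violates(True, even) for even in (True, False)):
        flips.append('A')  # visible vowel, hidden parity unknown
    if any(violates(False, even) for even in (True, False)):
        flips.append('K')  # visible consonant
    if any(violates(vowel, True) for vowel in (True, False)):
        flips.append('4')  # visible even number, hidden letter unknown
    if any(violates(vowel, False) for vowel in (True, False)):
        flips.append('7')  # visible odd number

    return flips

print(cards_to_flip(violates_conditional))    # ['A', '7']
print(cards_to_flip(violates_biconditional))  # ['A', 'K', '4', '7']
```

Either way, the '7' card ('not y') is on the list, which is the card most subjects in the original studies failed to select.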

From my armchair this feels relevant:

> Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., “not bad” represented as “good”)[...]

From: Negation mitigates rather than inverts the neural representations of adjectives

At: https://journals.plos.org/plosbiology/article?id=10.1371/jou...


Interestingly, per capita, China is worse than the EU, and will be way worse in 10 years.

I would say quite the opposite. Have you considered the position of the general population in your assessment?


You are wrong at so many levels. Your argument is factually incorrect and logically flawed. And you know it.


The facts are in the PISA data collected by the OECD. If you drill down by subpopulation, the majority group in the U.S. goes toe to toe with the majority groups in Asian countries, and beats the majority groups in western european countries: https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

National competitiveness and distributional equity don’t go hand in hand. China has made tremendous achievements by focusing investment on key provinces instead of trying to bring everyone up together.


Maybe you should actually prove him wrong. Making a claim without evidence doesn’t help anyone.


The problem isn't technical in nature. We need a brand-new socioeconomic system that outcompetes liberal democracies while reducing CO2 emissions. We are in deep trouble.


The existing global socioeconomic systems have been able to solve other environmental commons problems before, even if this one is larger in scale.

> We need a brand-new socioeconomic system that outcompetes liberal democracies while reducing CO2 emissions.

I presume you'd agree that isn't likely? So saying "We need x infeasible thing" seems about as helpful as those pushing climate change denialism?


Will they? What kind of breakthrough do you see coming that would convince large actors to make that switch?


Sorry, but Google is a multinational corporation. It makes profits and products everywhere in the world. You should probably open your eyes.


> With AI, the society will be more divided, more polarised, and less happy than before.

While I agree about less happy, I am not seeing AI chatbots being more divisive and polarising than social media in general. Am I missing something?


A personal information bubble for everyone. All deviations from reality are normalised with "hallucinations are OK for AI".

It's divisive as much as it can be.


As opposed to an information bubble with a small group of humans? It has less personalized hallucinations but more extreme and negative ones, which I think is worse. Ideally people would look at reliable sources and use critical thinking for information, but ChatGPT seems like a better conversation partner than the average Redditor of today (who's probably also a bot...but one trained on drama and negativity).


Ads are essentially a bidding system. Fewer bidders means less profit. Removing scams means losing money.
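The mechanism is easy to see with a second-price auction, a common simplification of real ad auctions (the uniform bid distribution and numbers here are illustrative assumptions, not data about any actual ad platform): removing bidders lowers the runner-up bid, which is exactly what the platform collects.

```python
import random

def second_price_revenue(bids):
    # In a second-price auction the winner pays the runner-up's bid,
    # so the platform's revenue is the second-highest bid.
    return sorted(bids, reverse=True)[1]

def avg_revenue(n_bidders, trials=10_000):
    # Simulate many auctions with bidder values drawn uniformly from [0, 1].
    total = 0.0
    for _ in range(trials):
        total += second_price_revenue([random.random() for _ in range(n_bidders)])
    return total / trials

random.seed(0)
# Expected second-highest of n uniform draws is (n-1)/(n+1),
# so revenue rises with the number of bidders.
print(avg_revenue(10))  # roughly 0.82
print(avg_revenue(3))   # roughly 0.50
```

Kicking out the scammy bidders is like dropping from 10 bidders to 3: every remaining auction clears at a lower price.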


