
Language is not humanness either; it is a disembodied artifact of our extended cognition, a way of transferring the contents of our consciousness to others, or to ourselves over time. This is precisely what LLMs piggyback on and therefore are exceedingly good at simulating, which is why the accuracy of "is this human" tools is stuck in the 60-70% range (50% is a coin flip), and is going to stay bounded for the foreseeable future.

And I am sorry to be negative, but there is so much bad cognitive science in this article that I couldn't take the product seriously.

> LLMs can be scaled almost arbitrarily in ways biological brains cannot: more parameters, more training compute, more depth.

- The capacity of raw compute is irrelevant without mentioning the complexity of the computational task at hand. LLMs can scale - not infinitely - but their core attention computation is O(n^2) in the input (see the sketch below). It is also amiss to equate human compute with a single human's head. Language itself is both a tool and a protocol for distributed compute among humans. You borrow a lot of your symbolic preprocessing from culture! As said above, this is exactly what LLMs piggyback on.
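
A minimal sketch of where that O(n^2) shows up, as a toy NumPy scaled dot-product attention (the sizes and names are illustrative assumptions, nothing here comes from the article):

    import numpy as np

    def attention(Q, K, V):
        # Q @ K.T builds an (n, n) score matrix: every token is compared
        # with every other token, hence O(n^2) time and memory in the
        # sequence length n, independent of parameter count.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V

    n, d = 512, 64  # toy sequence length and head dimension
    Q = K = V = np.random.randn(n, d)
    out = attention(Q, K, V)  # the (n, n) intermediate is the bottleneck

Doubling the context quadruples that score matrix; no amount of "more parameters" changes the shape of that curve.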

> We are constantly hit with a large, continuous stream of sensory input, but we cannot process or store more than a very small part of it.

- This is called relevance, and we are so frigging good at it! The fact that the machine has to deal with far more unprioritized data in a relatively flat O(n^2) problem formulation is a shortcoming, not a feature. The visual cortex is such an opinionated accelerator for processing all that massive data that only the relevant bits need to make it to your consciousness. And this architecture was trained for hundreds of millions of years, over trillions of parallel experiment arms that were experimenting on everything else too.

> Humans often have to act quickly. Deliberation is slow, so many decisions rely on fast, heuristic processing. In many situations (danger, social interaction, physical movement), waiting for more evidence simply isn't an option.

- Again, a lot of this conflates conscious processing with the entirety of cognition. Anyone who plays sports or music knows to respect the implicit, embodied cognition that goes into complex motor tasks. We have yet to see a non-massively-fast-forwarded household robot do a mundane kitchen-cleaning task and then go play table tennis with the same motor "cortex". Motor planning and articulation is a fantastically complex computation; just because it doesn't make it to our consciousness, or isn't instrumented exclusively through language, doesn't mean it isn't one.

> Human thinking works in a slow, step-by-step way. We pay attention to only a few things at a time, and our memory is limited.

- Thinking, Fast and Slow by Kahneman is a fantastic introduction to how much more complex the mechanism actually is.

The key point here is how good humans are at relevance precisely because their recall is so limited - relevance matters because it is existential. Therefore, when you are using a tool to extend your recall, it is important to see its limitations. Google search having indexed billions of pages is not a feature if it can't surface the most relevant results. If it gains the capability to convince me that whatever it brought up was relevant, that still doesn't mean the results actually are. And this is exactly the degradation of relevance we are seeing in our culture.

I don't care whether the language terminal is a human or a machine; if the human was convinced by the low-relevance crap of the machine, it is just a legitimacy laundering scheme. So this is not a tech problem, it is a problem of culture: we need to be cultivating epistemic humility, including quitting the Cartesian tyranny of worshipping explicit verbal cognition that is assumed to be locked up in a brain; we have to accept that we are also embodied and social beings that depend on a lot of distributed compute to solve for agency.


