Hacker News | esafak's comments

Way too much work :( At least my video collection is mostly MiniDV.

How is spatial reasoning useless??

I'd rather say it has a mind of its own; it does things its way. But I have not tested this model, so they might have improved its instruction following.

Well, one thing I know for sure: it reliably misplaces parentheses in lisps.

Clearly, the AI is trying to steer you towards the ML family of languages for its better type system, performance, and concurrency ;)

And how does the book suggest countering the problem?

Current agents "live" in discretized time. They sporadically get inputs, process them, and update their state. The only thing they don't currently do is learn (update their models). What's your argument?
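To make the distinction concrete, here is a minimal sketch (my own illustration, not anyone's actual agent framework) of what "discretized time without learning" means: the agent accumulates state across steps, but the model that maps state to action is frozen.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent in discretized time: it receives inputs
    sporadically, processes them, and updates its state (memory),
    but its model is never updated -- no learning between steps."""
    state: list = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Update state: remember what was observed.
        self.state.append(observation)
        # The policy producing the action is fixed; only the state
        # it conditions on has changed since the last step.
        return f"acted on {observation!r} with {len(self.state)} memories"

agent = Agent()
print(agent.step("user message"))
print(agent.step("tool result"))
```

Adding learning would mean `step` also modified the mapping itself, not just the list it conditions on.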

Brought to you by the same AI that fixes tests by removing them.

Can you show one?

The end result would be a normal PPT presentation. Check https://sli.dev as an easy start: ask Codex/Claude/... to generate the slides using that framework, with data from something.md. The interesting part here is generating these otherwise boring slide decks not with PowerPoint itself but with AI coding agents, a master slide deck, and AGENTS.md context. I'll be showing this to a small group (normally members only) at IPAI in Heilbronn, Germany on 03/03. If you're in the area and would like to join, feel free to send me a message and I'll squeeze you in.
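For anyone who hasn't used Slidev: a deck is just a Markdown file, with `---` separating slides and YAML frontmatter at the top, which is what makes it easy for a coding agent to generate. A minimal sketch (the slide content here is a made-up placeholder, not from my actual deck):

```markdown
---
theme: default
title: Generated Deck
---

# Title Slide

Produced by an agent from something.md

---

# Second Slide

- bullet points written by the agent
- following the conventions in AGENTS.md
```

Running `npx slidev` on such a file renders it as a presentation.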

It really is infuriatingly dumb; like a junior who does not know English. Indeed, it often transitions into Chinese.

Just now it added some stuff to a file starting at L30, and when I said "that one line L30 will do, remove the rest", it interpreted 'the rest' as the rest of the file, not the rest of what it had added.


The purpose of communication is to reduce the cost of obtaining information; I tell you what I have already figured out and vice versa. If we're both querying the same oracle, there is nothing gained beyond the prompt itself (which can be valuable).

This is a prelude to imbuing robots with agency. It's all fun and games now. What else is going to happen when robots decide they do not like what humans have done?

"I’m sorry, Dave. I’m afraid I can’t do that."


It's important to address skeptics by reminding them that this behavior was actually predicted by earlier frameworks; it's well within the bounds of theory. If you start mining that theory for information, you may reach a conclusion like the one you've posted, but it's more important for people to see the extent to which these theories have predicted what we've actually observed.

The result is that much of what was predicted has come to pass.

