
Why would you need batteries included? The AI can code most integrations (from scratch, if you want), so if you need something slightly off the beaten path, it's easy.




I think the logic can be applied to humans as well as AI:

Sure, the AI _can_ code integrations, but it now has to maintain them, and might be tempted to modify them when it doesn't need to (leaky abstractions), adding cognitive load (in LLM parlance: "context pollution") and leading to worse results.

Batteries-included = AI and humans write less code, get more "headspace"/"free context" to focus on what "really matters".

As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.

Nonetheless, I'm positive that in a couple of years we'll have found a way for LLMs to be equally good, if not better, with other frameworks. I think we'll find mechanisms for LLMs to learn libraries and projects on the fly much better. I can imagine crazy scenarios where LLMs train smaller LLMs on project parts or libraries, so they avoid context pollution without needing a full retraining (or incredibly pricey inference). I can also imagine a system in line with Anthropic's view of skills, where LLMs very intelligently switch their knowledge on or off. The technology isn't there yet, but we're moving FAST!

Love this era!!


> As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.

i have the exact opposite experience. it's far better to have llms start from scratch than use batteries that are just slightly the wrong shape... the llm will run in circles and hallucinate nonexistent solutions.

that said, i have had a lot of success having llms write opinionated (my opinions) packages shaped the way llms like (very little indirection, breadcrumbs to follow for code paths, etc.), and then having the llm write its own documentation.
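For illustration, a hypothetical sketch (names and domain invented here, not from the comment) of what "very little indirection" with breadcrumb comments might look like: one flat function whose comments name each step of the code path, rather than logic scattered across helper layers.

```python
def process_order(order: dict) -> dict:
    # Code path: validate -> price -> receipt. Kept as one flat function so a
    # reader (or an LLM) can follow the whole flow top to bottom without
    # chasing the logic through layers of indirection.

    # 1. Validate: reject empty orders outright.
    if not order.get("items"):
        raise ValueError("order has no items")

    # 2. Price: a simple sum; no strategy objects or pluggable pricers.
    total = sum(item["price"] * item["qty"] for item in order["items"])

    # 3. Receipt: a plain dict out, mirroring the plain dict in.
    return {"order_id": order["id"], "total": total}

print(process_order({"id": 1, "items": [{"price": 2.5, "qty": 4}]}))
# → {'order_id': 1, 'total': 10.0}
```

The point is less the code itself than the shape: every step is visible in one place, so both humans and LLMs spend context following the logic rather than reconstructing it.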


Maybe if they could learn how to switch their intelligence on, that would help more?

What’s more likely to have a major security problem – Django’s authentication system or something custom an LLM rolled?

I don't even particularly care for Django, but darned if I'd want to reimplement on my own any of the great many problems they've thoroughly solved. It's so widely used that any weird little corner case you can think of has already been addressed. No way I'd start over on that.
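To make the security point concrete, here is a minimal sketch (Python standard library only, not Django's actual implementation) of just one of those solved corner cases: password verification must use salted key stretching and a constant-time comparison, details a hand-rolled auth layer often gets wrong.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    # PBKDF2-HMAC-SHA256 with a fresh random salt per password; the salt
    # defeats rainbow tables, the high iteration count slows brute force.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    # hmac.compare_digest runs in constant time; a plain `==` on digests can
    # leak timing information -- exactly the kind of corner case a
    # rolled-from-scratch auth system tends to miss.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, stored))  # True
print(verify_password("wrong", salt, stored))    # False
```

And this is only storage and comparison; a mature framework also covers session fixation, password reset flows, throttling, and the rest.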

It's literally the opposite.

Why would you generate a sloppy version of core systems that must be included by default in every project?

It makes absolutely zero sense to generate auth/email sending/bg tasks integration/etc.


Because then every app is a special snowflake.

At some point you'll need to understand things to fix it, and if it's laid out in a standard way you'll get further, quicker.



