I am beginning to notice more "features" in apps that raise a suspicious question: "Was that feature really needed, or did the AI sneak it in there?!"
This is my reflection as well. I find myself spending MORE time reviewing LLM-generated code, and also more time thinking through LLM-generated choices, which are often inefficient or bloated. Keeping the LLM on the right rails takes up more time, even with lengthy agent.md and claude.md files to manage behaviors.
I have always wanted to learn Rust, but was too distracted to get started.
So I started working with Claude on building a Postgres database replication application. I'm learning Postgres internals, as well as how brittle database replication and subscription can really be. Although this was built for Seren, you can replicate between any two PG databases.
https://github.com/serenorg/postgres-seren-replicator
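For anyone curious what the publication/subscription side of this looks like: PostgreSQL logical replication hinges on two SQL statements, `CREATE PUBLICATION` on the source and `CREATE SUBSCRIPTION` on the target. A minimal Rust sketch that just builds those statements (the helper names and connection values are illustrative, not from the linked repo):

```rust
// Hypothetical helpers sketching the two SQL statements that
// PostgreSQL logical replication is built on. Identifiers and
// connection strings here are made-up examples.

fn create_publication(name: &str) -> String {
    // Run on the SOURCE database; publishes changes from all tables.
    format!("CREATE PUBLICATION {} FOR ALL TABLES;", name)
}

fn create_subscription(name: &str, conninfo: &str, publication: &str) -> String {
    // Run on the TARGET database; connects back to the source
    // and starts streaming the named publication.
    format!(
        "CREATE SUBSCRIPTION {} CONNECTION '{}' PUBLICATION {};",
        name, conninfo, publication
    )
}

fn main() {
    println!("{}", create_publication("seren_pub"));
    println!(
        "{}",
        create_subscription("seren_sub", "host=src dbname=app user=repl", "seren_pub")
    );
}
```

The brittleness mentioned above shows up around these statements: DDL is not replicated, sequences are not synced, and a subscription silently stalls on conflicts, so the tooling around them is where the real work is.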
Big learning: Claude Sonnet with Rust is massively productive. I'm impressed, but code bloat is a thing.
Our small team at Seren just launched a landing page and chatbot to talk about agentic databases and see what developers are thinking about them. There is a feature-request button that pushes feature ideas to GitHub. Give it a shot if you've got a few minutes. We are building on an open-source Postgres version for AI agents and are curious what HN thinks about features.
My particular interest in this project was that if the many interviews I go through for a job end in rejections, then at least I might get paid a referral bonus as a sourcer for the role. After spending six hours onsite and getting denied, it seems only fair to get paid if I can help source?
The company I'm working for has this exact platform as a dream project we've always wanted to do, and we 100% would've wasted a lot of money arriving at an end product inferior to what you've created. Keep your head up!