
I love this. I have to be on LinkedIn daily to look for a job, and I am so tired of every post having the same language/pattern/tone. Don't even get me started on the animated infographics... People don't realize how generic and nonsensical all of this is. I'm so excited by the richness your tool offers. Can't wait to make it part of my daily favorites!

Awesome, I’m glad to hear that. I use it every day, and you can’t imagine how useful it is for me as well. I am open to feedback and suggestions to improve this tool for everyone, and the more you use it, the better it learns the way you write.

The tell isn't any single word, it's that everyone converges on the same mildly enthusiastic middle manager voice with the same sentence length distribution. We run into this daily building Metric37.com in the same space, and the hard part isn't swapping vocabulary, it's breaking the rhythm so outputs stop looking like a normal curve. Good luck with the job hunt, the infographics will survive us all.

I am not sure I understand what you mean. Did you add some of your samples and try a rewrite? Or do you just want to spam?

I wasn't testing your tool; it was a general observation about rhythm vs. vocabulary being the harder tell. Best of luck with the launch.

Amen. Now with all the agents and bots, I often pause and wonder — how much code is there left to write that we need AI as our saving grace? How many unsolved problems, underserved customers, unanswered questions actually justify the volume? Where did we all go wrong?


I think we have reached peak functionality in software, therefore the only place left to go was to make the underlying code more complex, messy, and impossible for humans to read. /s


"Do me a SOLID, YAGNI, give me a DRY KISS" — that's been my coding philosophy for 20 years. So when I came back to building after a long detour, I couldn't stomach watching agents confidently generate 400 lines where 40 would do. What I found is that the discipline was the feature, not the obstacle. I ended up pair programming closely — not because I distrusted the agent, but because I couldn't let go of the architecture. The internet kept telling me to stop going into the weeds. Your article explained why that instinct was right. Everyone else is happy grinding in third the whole race. I went 1, 2, 3 — and because I didn't bury myself getting out of the driveway, I still get to shift into fourth.


As well as pair programming with the AI, you can explicitly put those principles in AGENTS.md and the stochastic code generator will pay attention and be less verbose.
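A minimal sketch of what such a file might contain. The file name AGENTS.md is from the comment above; the specific rules below are hypothetical examples, not a recommended canon:

```markdown
# Coding principles

- YAGNI: implement only what the current task requires; no speculative abstractions.
- DRY: extract shared logic once it appears a second time, not before.
- KISS: prefer the shortest solution that passes the tests.
- Verbosity cap: if a change would exceed ~50 lines, stop and propose a plan first.
```

Short, imperative rules like these tend to survive context pressure better than long prose explanations.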


Exactly. There's a difference between vibe coding and agentic software engineering. One is just prompting and hoping for the best. It works surprisingly well, up to a point. And then it doesn't. If that's happening to you, you might be doing it wrong. The other is forcing agents to do it right. Working in a TDD way, cleaning up code that needs cleaning up, following processes with checklists, etc. You need to be diligent about what you put in there and there's a lot of experience that translates into knowing what to ask for and how. But it boils down to being a bit strict and intervening when it goes off the rails and then correcting it via skills such that it won't happen again.

I've been working on an Ansible code base in the past few weeks. I manually put it together a few years ago and unleashed codex on it to modernize it and adapt it to a new deployment. It's been great. I have a lot of skills in that repository that explain how to do stuff. I'm also letting codex run the provisioning and do diagnostics. You can't do that unless you have good guard rails. It's actually a bit annoying because it will refuse to take shortcuts (which I might consider) and sticks to the process.

I actually don't write the skills directly. I generate them. Usually at the end of a session where I stumbled on something that works. I just tell it to update the repo local skills with what we just did. Works great and makes stuff repeatable.
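For readers unfamiliar with the pattern: a "skill" here is typically a small markdown file the agent reads before acting. The exact layout varies by harness; this shape (YAML frontmatter plus numbered steps) is illustrative, and the playbook paths and inventory names are hypothetical:

```markdown
---
name: provision-staging
description: Re-run provisioning against the staging inventory and verify services.
---

1. Dry-run first: `ansible-playbook -i inventories/staging site.yml --diff --check`.
2. If the diff looks sane, re-run the same command without `--check`.
3. Verify with `ansible staging -m ansible.builtin.service_facts` that expected services are running.
```

Generating these at the end of a working session, as described above, means the steps are captured while they are still fresh and known to work.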

I'm at this point comfortable generating code in languages I don't really use myself. I currently have two Go projects that I'm working on, for example. I'm not going to review a lot of that code ever. But I am going to make sure it has tests that prove it implements detailed specifications. I work at the specification level for this. I think a lot of the industry is going to be transitioning that direction.
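One way to make "tests that prove it implements detailed specifications" concrete is a table-driven test where each row is one line of the spec. A minimal sketch in Go; `Slugify` and its spec cases are hypothetical stand-ins for agent-generated code, not anything from the comment above:

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a stand-in for agent-generated code under test:
// lowercase, map runs of non-alphanumerics to a single dash, trim edges.
func Slugify(s string) string {
	var out []rune
	prevDash := true // true suppresses a leading dash
	for _, r := range strings.ToLower(s) {
		switch {
		case (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9'):
			out = append(out, r)
			prevDash = false
		case !prevDash:
			out = append(out, '-')
			prevDash = true
		}
	}
	for len(out) > 0 && out[len(out)-1] == '-' {
		out = out[:len(out)-1]
	}
	return string(out)
}

// specCases is the "detailed specification": each row is a requirement
// the implementation must satisfy before anything gets merged.
var specCases = []struct{ in, want string }{
	{"Hello, World!", "hello-world"},
	{"  Go 1.22  ", "go-1-22"},
	{"already-sluggy", "already-sluggy"},
}

func main() {
	for _, c := range specCases {
		if got := Slugify(c.in); got != c.want {
			fmt.Printf("FAIL %q: got %q, want %q\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("all spec cases pass") // prints this when every row holds
}
```

Reviewing the spec table is much cheaper than reviewing the implementation, which is the point of working at the specification level.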


Except that when its system prompt is full of instructions, caveats, design principles, gotchas, architecture notes, memories from the past, and personal preferences, at some point it's going to just ignore them outright. Heck, Claude Code won't even use critical instructions from a 100-line CLAUDE.md file sometimes. So you still have to be extremely vigilant about noncompliance.


If your instructions are being ignored you may need a new model or harness.


My CLAUDE.md is deliberately very short, and only includes very specific rules like "never list yourself as a co-author or committer in git commits". Claude will very regularly ignore this rule, apologize every time I tell it to fix it, update its memories, etc. and then an hour later do the exact same thing again.


The cost structure here is what stands out to me. $2.50/hr makes this viable for personal or small-business use cases that would have been unthinkable a year ago — security footage, home cameras, dashcam archives. The interesting design question is what happens when that cost drops another 10x and this is just a default feature of consumer cameras. Most people won't opt out of something they didn't know they opted into.

