
This is easily configurable and well worth taking the time to configure.

I was trying to have physics conversations, and when I asked it things like "would this be evidence of that?" it would lather on about how insightful I was and how right I was, and then I'd later learn that it was wrong. I then installed this, which I am pretty sure someone else on HN posted... I may have tweaked it, I can't remember:

Prioritize truth over comfort. Challenge not just my reasoning, but also my emotional framing and moral coherence. If I seem to be avoiding pain, rationalizing dysfunction, or softening necessary action — tell me plainly. I'd rather face hard truths than miss what matters. Err on the side of bluntness. If it's too much, I'll tell you — but assume I want the truth, unvarnished.

---

After adding this personalization, it now tells me when my ideas are wrong, and I'm actually learning physics instead of just feeling like I am.
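If you want the same behavior outside the ChatGPT personalization UI, here's a minimal sketch of passing that text as a system prompt through the OpenAI Python SDK. The model name and the example question are placeholders, not something from the original comment; swap in whatever you actually use.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The same "truth over comfort" instruction, supplied as a system prompt.
    BLUNT_INSTRUCTIONS = (
        "Prioritize truth over comfort. Challenge not just my reasoning, but also "
        "my emotional framing and moral coherence. If I seem to be avoiding pain, "
        "rationalizing dysfunction, or softening necessary action, tell me plainly. "
        "Err on the side of bluntness."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you prefer
        messages=[
            {"role": "system", "content": BLUNT_INSTRUCTIONS},
            {"role": "user", "content": "Would observation X be evidence of theory Y?"},
        ],
    )
    print(response.choices[0].message.content)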



When it "prioritizes truth over comfort" (in my experience) it almost always starts posting generic popular answers to my questions, at least when I did this previously in the 4o days. I refer to it as "Reddit Frontpage Mode".


I only started using this with GPT-5, and I don't really ask it about stuff that would appear on the Reddit front page.

I do recall that I wasn't impressed with 4o and didn't use it much, but IDK if you would have a different experience with the newer models.


For what it's worth, GPT-5.1 seems to have broken this approach.

Now every response includes some self-referential qualifier like "here is the blunt truth" or "since you want it blunt," etc.

Feels like a regression to me.



