This is why I prefer models from Anthropic, especially for language-related tasks: they are more natural and to the point. GPT has always used too much corporate-speak and marketing-speak, and this recent update looks terrible: I do not want my AI assistant to crack jokes, be sycophantic, or say "I've got you, Ron". I want it to assist me without pretending to be something it isn't.
In my experience, that strength in natural-language / legal / prosaic work somehow translated negatively to coding. The verbose, prolific way it writes, plus Anthropic's investment in dev tooling, has put Anthropic's models at the leading edge of the sycophantic-presumptuous-over-confident frontier.
So much so that I've still barely used Sonnet 4.5 with thinking.