Doubt it. I've tried prompts where a simple Google query would lead to an answer, to see if it could save me time (e.g. a query about AWS service usage, some Azure stuff). In every instance I got a plausible-sounding solution to my problem that directly contradicted the documentation - it invents capabilities/features and misleads.
I've tried using it for code review on a few functions and tasked it with improving the provided code - every time it wrote worse code. E.g. I had some logic that filtered into a new list and then appended a replacement; its refactor did filter -> add-or-replace for already-filtered items. The reasoning was bullshit: fake performance claims about avoiding an allocation when the "allocation" in question was a value type, and the suggested alternative was replacing a vector with a hash map, which is both logically wrong (it loses ordering) and slower for the use case.
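For context, the original pattern was roughly this (a hypothetical reconstruction - the function and value names are illustrative, not the actual code):

```rust
// Filter out the old value into a new Vec, then append the replacement.
// The element type is a plain value type (u32), so the "allocation" the
// review complained about is just the Vec itself, not per-element boxing.
fn replace_item(items: &[u32], old: u32, new: u32) -> Vec<u32> {
    let mut result: Vec<u32> = items
        .iter()
        .copied()
        .filter(|&x| x != old)
        .collect();
    // Append the replacement; the relative order of surviving items is
    // preserved - a HashMap-based rewrite would lose that guarantee.
    result.push(new);
    result
}

fn main() {
    let items = [1, 2, 3, 2];
    // Removes every 2, appends a single 9.
    println!("{:?}", replace_item(&items, 2, 9));
}
```

Nothing here is hot-path enough for a hash map to pay off, and the ordering requirement rules it out anyway.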
For generating small stuff like a regex, the pain of getting a correct prompt is higher than just writing the thing yourself, and you still need to double-check the result.
I see no use case where ChatGPT would improve my workflow at its current stage, and I've seen so many idiotic bugs recently where, when you press the devs who introduced them, the answer is basically "ChatGPT".
The one time it was useful was when I had to convert a model definition to an OpenAPI spec - that was easy to fact-check, and with feedback I got a decent solution.
GIGO, basically.