My non-physicist but curious-about-the-topic take is similar. Things at the quantum level are not "complex" in the systems-theory sense. They couldn't be, I think, since we're dealing with the most basic constituents of the universe. They are mysterious, confusing, wildly counterintuitive... but they are fundamental. The most basic stuff there is.
The study of these things, on the other hand, is genuinely complex and difficult. But that's epistemology, not ontology.
> Having to debug code I didn’t write that’s an integral part of what I’m building is an annoying context switch for anything non-trivial.
That's the problem I've been facing. The AI does 90% of the work, but the 10% that I have to do myself is 20x harder because I don't have as much knowledge of the codebase as I would if I had written all of it by myself, like in the old days (2 years ago).
It was a little while ago, but GLM's code was generally about twice as long and about 30% less readable than Sonnet's, even at the same length.
I was able to improve this with prompting and examples, but... at some point I realized I would prefer the simplicity of using the real thing.
I had been using GLM in Claude Code with Claude Code Router, because while you can just change the API endpoint, the web search function doesn't work, and neither does image recognition.
Maybe that's different now, or maybe that's because I was on the light plan, but that was my experience.
Claude Code Router allowed me to Frankenstein this so that it was using Gemini for search and vision instead of GLM. Except it turns out that Gemini also sucks at search for some reason, so I ended up just making my own proxy, which uses actual Google instead.
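The proxy was nothing fancy. Here's a minimal sketch of the idea (not my actual code; the `/search` route and the `GOOGLE_API_KEY`/`SEARCH_CX` names are placeholders, and I'm using Google's Programmable Search JSON API as a stand-in for "actual Google"):

```python
# Minimal sketch: a tiny local HTTP endpoint a web-search tool can call,
# backed by Google's Programmable Search (Custom Search) JSON API.
# GOOGLE_API_KEY / SEARCH_CX and the /search route are placeholders.
import json
import os
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

GOOGLE_API_KEY = os.environ["GOOGLE_API_KEY"]  # Programmable Search API key
SEARCH_CX = os.environ["SEARCH_CX"]            # Programmable Search engine ID

def google_search(query: str, num: int = 5) -> list[dict]:
    """Query the Custom Search JSON API and return trimmed results."""
    params = urllib.parse.urlencode(
        {"key": GOOGLE_API_KEY, "cx": SEARCH_CX, "q": query, "num": num}
    )
    url = f"https://www.googleapis.com/customsearch/v1?{params}"
    with urllib.request.urlopen(url) as resp:
        items = json.load(resp).get("items", [])
    return [
        {"title": i["title"], "url": i["link"], "snippet": i.get("snippet", "")}
        for i in items
    ]

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urllib.parse.urlparse(self.path)
        if parsed.path != "/search":
            self.send_error(404)
            return
        query = urllib.parse.parse_qs(parsed.query).get("q", [""])[0]
        body = json.dumps(google_search(query)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SearchHandler).serve_forever()
```

Then you just point the router's web-search provider at `http://127.0.0.1:8080/search?q=...` instead of the model's built-in search.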
But yeah, at some point I realized the Rube Goldberg machine was giving me more headaches than it solved. (It was also way slower than the real thing.) So I paid the additional $18 or whatever to just get rid of it.
That being said, I did just buy the GLM year for $25, because $2/month is hard to beat. But I keep getting rate limited, so I'm not sure what to actually use it for!
No, no! It was just the way you wrote it, but I think I misunderstood it.
> I found myself asking Sonnet [...] after the 4th time of doing that [...] just switched models.
I thought you meant Sonnet results were laughable, so you decided to switch to GLM.
I tried GLM 4.6 last week via OpenCode but found it lacking compared to Sonnet 4.5. I still need to test 4.7, but from the benchmarks and users' opinions, it doesn't seem to be a huge improvement.
Last week I got access to Claude Max 20x via work, so I've been using Opus 4.5 exclusively, and it's a beast. Better than GPT 5.2 Codex and Gemini 3 Pro IME (I tested both via OpenCode).
I also got this cheap promo GLM subscription. I hope they get ahead of the competition, their prices are great.
Three years ago, I left academia after finishing my PhD in Economics, frustrated by how little real-world impact my hard work seemed to have. I moved into IT, wanting to build things that would be more immediately useful and practical. Still, the dream of using science to create positive change never left me.
I was invited to work with AI at a company that develops software for the public sector. It wasn't the dream (I wouldn't be using my academic expertise), but it felt like a step closer. At least I'd be providing tools to support people who directly affect others' lives. From the start, I told my boss that I hoped someday to offer not just AI tools, but real socioeconomic statistical analysis as a service for the public sector. And while I've been happy working with AI, I've always sought out opportunities on projects that were more data-driven.
Three years later, some clients expressed interest in having our AI chatbot provide real-world socioeconomic data analysis. My boss just gave me a promotion to lead both the AI team and this new socioeconomic data initiative.
I was reflecting the other day on how fortunate I am: my dream "chased me." But it wasn't simply luck. I had always stayed attuned to the opportunities that arose.
A bit of a counterpoint. I've done 3 years of therapy with an amazing professional. I can't overstate how much good it did; I'm a different person, no longer an anxious one. I think I have a good idea of how good human therapy is. I was discharged about 2 years ago.
Last Saturday, I was a little distressed about a love-hate relationship that I have with one of the things that I work with, so I tried using AI as a therapist. Within 10 minutes of conversation, the AI gave me some incredible insight. I was genuinely impressed. I had already discussed this same subject with two psychologist friends, who hadn't helped much.
Moreover, I needed to finish a report that night, and I told the AI about it. So it said something like, "I see you're procrastinating preparing the report by talking to me. I'll help you finish it."
And then, in the same conversation, the AI switched from psychologist to work assistant and helped me finish the report. And the end product was very good.
I was left very reflective after this.
Edit: It was Claude Sonnet 4.5 with extended thinking, if anyone is wondering.
You learned skills your trained therapist guided you to develop over a three-year period of professional interaction. These skills likely influenced your interaction with this product.
Be careful though, because if I had listened to Claude Sonnet 4.5, it would have ruined my relationship. It kept telling me how my girlfriend was gaslighting me and manipulating me, and that I needed to end the relationship, and so forth. I had to tell the LLM that my girlfriend is nice, not manipulative, and so on, and it told me that it understood why I felt like protecting her, BUT this and that.
Seriously, be careful.
At the same time, it has been useful for the relationship at other times.
You really need to nudge it in the right direction and do your due diligence.
I had a similar thing throughout last week dealing with relationship anxiety and I used that same model for help. It really did provide great insight into managing my emotions at the time, provided useful tactics to manage everything and encouraged me to see my therapist. You can ask it to play devil's advocate or take on different viewpoints as a cynic or use Freudian methodology, etc... You can really dive into an issue you're having and then have it give you the top three bullet points to talk with your therapist about.
This does require that you think about what it's saying, though, and not take it at surface value, since it obviously lacks what makes humans human.
> I actually don't think documentation is too important
Trying to maintain documentation was actually problematic in a recent project. I made some big changes that were not immediately reflected in the documentation; then, in a new session, Claude Code kept referring to the stale markdown documentation file and getting confused about the codebase.
I still can't fathom how one of my favorite Android features simply disappeared years ago: the 'time to leave' notification for calendar appointments with address info.