
Anthropic banned my account when I whipped up a solution to control Claude Code running on my Mac from my phone when I'm out and about. No commercial angle, just a tool I made for myself since they wouldn't ship this feature (and still haven't). I wasn't their biggest fanboy to begin with, but it gave me the kick in the butt I needed to go and explore alternatives until local models get good enough that I don't need hosted models at all.

I control it with SSH and sometimes tmux (Termux + WireGuard leads to a surprisingly stable connection). Why did you need more than that?

I didn't like the existing SSH applications for iOS, and I already have a local app of my own that I keep open 24/7, so I added a screen that uses xterm.js and Bun.spawn with Bun.Terminal to mirror the process running on my Mac to my phone. This let me add a few bells and whistles a generic SSH client wouldn't have, like notifications when Claude Code finished working, etc.

How did they even know you did this? I cannot imagine what cause they could have for the ban. They actively want folks building tooling around and integrating with Claude Code.

I have no idea. The alternative is that my account just happened to be on the wrong side of their probably slop-coded abuse detection algorithm. Not really any better.

How did this work? The ban, I mean. Did you just wake up to an email and find that your creds no longer worked? Were you doing things to sub-process out to the Claude Code CLI, or something else?

I left a sibling comment detailing the technical side of things. I used the `Bun.spawn` API with the `terminal` key to give CC a PTY and mirrored it to my phone with xterm.js. I used SSE to stream CC data to xterm.js and a regular request to send commands out from my phone. In my mind, this is no different from using CC via SSH from my phone - I was still bound by the same limits and wasn't trying to bypass them. Anthropic is entitled to their different opinion, of course.
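Roughly, the shape of it was something like this - a heavily simplified sketch, not my actual code; the route names are made up, and if your Bun version doesn't have the terminal option you can fall back to plain pipes and lose the PTY niceties:

    // Simplified sketch of the mirroring plumbing (illustrative names only).
    const proc = Bun.spawn(["claude"], {
      stdin: "pipe",
      stdout: "pipe",
      // terminal: { cols: 120, rows: 40 }, // hypothetical: PTY option on newer Bun
    });

    // Fan the process output out to every connected SSE client.
    const clients = new Set<(chunk: Uint8Array) => void>();
    (async () => {
      for await (const chunk of proc.stdout) {
        for (const send of clients) send(chunk);
      }
    })();

    Bun.serve({
      port: 3000,
      async fetch(req) {
        const url = new URL(req.url);
        if (url.pathname === "/stream") {
          // SSE: stream base64-encoded terminal output to the phone.
          let send: (chunk: Uint8Array) => void = () => {};
          const stream = new ReadableStream({
            start(controller) {
              send = (chunk) =>
                controller.enqueue(`data: ${Buffer.from(chunk).toString("base64")}\n\n`);
              clients.add(send);
            },
            cancel() {
              clients.delete(send);
            },
          });
          return new Response(stream, {
            headers: { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" },
          });
        }
        if (url.pathname === "/input" && req.method === "POST") {
          // Regular request carrying keystrokes from xterm.js back to the process.
          proc.stdin.write(await req.text());
          return new Response("ok");
        }
        return new Response("not found", { status: 404 });
      },
    });

On the phone, xterm.js just decodes each SSE payload into term.write() and posts whatever term.onData() gives it to /input.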

And yeah, I got three (for some reason) emails titled "Your account has been suspended" whose body said "An internal investigation of suspicious signals associated with your account indicates a violation of our Usage Policy. As a result, we have revoked your access to Claude." There is a link to a Google Form, which I filled out, but I don't expect to hear back.

I did nothing even remotely suspicious with my Anthropic subscription so I am reasonably sure this mirroring is what got me banned.

Edit: BTW, I have since iterated on doing the same mirroring using OpenCode with Codex, then Codex with Codex, and now Pi with GPT-5.2 (non-Codex). OpenAI hasn't banned me yet, and I don't think they will, as they decided to explicitly support using your subscription with third-party coding agents following Anthropic's crackdown on OpenCode.


> Anthropic is entitled to their different opinion of course.

I'm not so sure. It doesn't sound like you were circumventing any technical measures meant to enforce the ToS, which I think places them in the wrong.

Unless I'm missing some obvious context (I don't use a Mac and am unfamiliar with the Bun.spawn API), I don't understand how hooking a TUI up to a PTY and piping text around is remotely suspicious or even unusual. Would they ban you for using a custom terminal emulator? What about a custom fork of tmux? The entire thing sounds absurd to me. (I mean, the entire OpenCode thing also seems absurd and wrong to me, but at least that one is unambiguously against the ToS.)


> Anthropic is entitled to their different opinion of course.

It’d be cool if Anthropic were bound by the terms of use that you had to sign. Of course, those terms may well be broad enough to let them fire customers at will. Not that I suggest you expend any more time fighting this behemoth of a company, though. Just sad that this is the state of the art.


It sucks and I wish it were different, but it is not so different from trying to get support at Meta or Google. If I was an AI grifter I could probably just DM a person on Twitter and get this sorted, but as a paying customer, it's wisest to go where they actually want my money.

There is a weaponized malaise employed by these frontier model providers, and I feel like that dark pattern, the one you pointed out, and others are used to rate-limit certain subscriptions.

They have two products:

* Subscription plans, which are (probably) subsidized and definitely oversubscribed (i.e., 100% of subscribers could not use 100% of their tokens 100% of the time).

* Wholesale tokens, which are (probably) profitable.

If you try to use one product as the other product, it breaks their assumptions and business model.
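To put completely made-up numbers on it: if a subscription costs $200/month and the average subscriber uses tokens that would cost, say, $120 at API prices, the plan pencils out even if the heaviest users burn $1,000+ worth, because most people don't. Tooling that lets every subscriber push toward the heavy end breaks that averaging, while API customers pay per token, so the math there is unaffected.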

I don't really see how this is weaponized malaise; capacity planning and some form of oversubscription are widely accepted in practically every industry and product in the universe?


I am curious to see how this will pan out long-term. Is the quality gap of Opus 4.5 over GPT-5.2 large enough to overcome the fact that OpenAI has merged these two bullet points into one? I think Anthropic might have bet on no other frontier lab daring to disconnect their subscription from their in-house coding agent, and OpenAI called their bluff to get some free marketing following Anthropic's crackdown on OpenCode.

It will also be interesting to see which model is more sustainable once the money-fire subsidy musical chairs shake out; it all depends on how many whales there are in both directions, I think (subscription customers using more than expected vs large buyers of profitable API tokens).

So, if I rent out my bike to you for an hour a day for really cheap money and do the same 50 more times for 50 others, so that my bike is oversubscribed and you and the others don't get your hours, that's OK because it is just capacity planning on my side and widely accepted? Good to know.

Let me introduce you to Citibike?

Also, this is more like "I sell a service called take a bike to the grocery store" with a clause in the contract saying "only ride the bike to the grocery store." I do this because I am assuming that most users will ride the bike to the grocery store 1 mile away a few times a week, so the bikes will remain available, even though there is an off chance that some customers will ride laps to the store 24/7. However, I also sell a separate, more expensive service called Bikes By the Hour.

My customers suddenly start using the grocery store plan to ride to a pub 15 miles away, so I kick them off of the grocery store plan and make them buy Bikes By the Hour.


As others pointed out, every business that sells capacity does this, including your ISP.

They could, of course, price your 10GB plan under the assumption that you would max out your connection 24 hours a day.

I fail to see how this would be advantageous to the vast majority of the customers.


Well, if the service price were in any way tied to the cost of transmitting bytes, then even the 24hr scenarios would likely see a reduction in cost to customers. Instead we have overage fees and data caps to help with "network congestion", which tells us all how little they think of their customers.

Yes, correct. Essentially every single industry and tool which rents out capacity of any system or service does this. Your ISP does this. The airline does this. Cruise lines. Cloud computing environments. Restaurants. Rental cars. The list is endless.

I have some bad news for you about your home internet connection.

They did ship that feature; it's called "&" / teleport from web. They also have an iOS app.

That's non-local. I am not interested in coding assistants that work on cloud-based workspaces. That's what motivated me to develop this feature for myself.

But... Claude Code is already cloud-based. It relies on the Anthropic API. Your data is all already being ingested by them. Seems like a weird boundary to draw, trusting the company's model with your data but not their convenience web UI. Being local-only (i.e. OpenCode and an open-weights model running on your own hardware) is consistent, at least.

It is not a moral stance. I just prefer to have the files of my personal projects in one place. Sure, I sync them to GitHub for backup, but I don't use GitHub for anything else in my personal projects. I am not going to use a workflow that relies on checking out my code to some VM where I have to set everything up so it has access to all the tools and dependencies that are already on my machine. It's slower and clunkier. IMO you can't beat the convenience of working on your local files. When I used my CC mirror for the brief period it worked, all my changes were just already there when I came back to my laptop - no commits, no pulls, no sync, nothing.

Ah okay, that makes sense. Sorry they pulled the plug on you!

I am using GPT-5.2 Codex with reasoning set to high via OpenCode and Codex, and when I ask it to fix an E2E test, it tells me that it fixed it and prints a command I can run to test the changes, instead of checking whether it fixed the test and looping until it did. This is just one example of how lazy/stupid the model is. It _is_ a skill issue - on the model's part.

Non-Codex GPT-5.2 is much better than GPT-5.2-Codex for me. It does everything better.

Yup, I find it very counter-intuitive that this would be the case, but I switched today and I can already see a massive difference.

It fits with the intuition that Codex is simply overfitted.

Yeah, I meant it more like it is not intuitive to me why OpenAI would fumble it this hard. They have got to have tested it internally and seen that it sucked, especially compared to GPT-5.2.

Codex runs in a stupidly tight sandbox and because of that it refuses to run anything.

But using the same model through Pi, for example, it's super smart, because Pi just doesn't have ANY safeguards :D


I'll take this as my sign to give Pi a shot then :D Edit: I don't want to speak too soon, but this Pi thing is really growing on me so far… Thank you!

Wait until you figure out you can just say "create a skill to do..." and it'll just do it, write it in the right place and tell you to /reload

Or "create an extension to..." and it'll write the whole-ass extension and install it :D


I refuse to defend the 5.2-Codex models. They are awful.

I don't know why no frontier model lab can ship a mobile app that doesn't use a cloud VM but instead connects to your laptop/server and works against the local files there when on the same network (e.g. on Tailscale). Or, even better, acts as a remote control for a harness running on that remote device, so you could seamlessly switch between phone and laptop/server.

I'm also so baffled by this. I had to write my own app to be able to do seamless handoff between my laptop/desktop/phone, and it works for me (https://github.com/kzahel/yepanywhere - nice web interface for Claude using their SDK, MIT, E2E relay included, no Tailscale required), but I'm so baffled why this isn't first priority. Why all these desktop apps?

This looks awesome! And incredibly polished. Exactly the approach I take with vibebin - I may have to integrate yepanywhere into it (if that's OK) as an additional web UI!

https://github.com/jgbrwn/vibebin

Although I would need it to listen on 0.0.0.0 instead of localhost, because I use LXC containers, so Caddy on the host proxies to the container's 10.x address. Hopefully yep has a startup flag for that. I saw that you can specify the port but didn't see the listening address mentioned.


Cool! Your project sounds really interesting. I would love to try it out, especially if you integrated yep! Yes, it has `yepanywhere --host 0.0.0.0`, or you can use the HOST env var.

This is not a problem when you assume the role of an architect and a reviewer and leave the entirety of the coding to Claude Code. You'll pretty much live in the Git Changes view of your favorite IDE leaving feedback for Claude Code and staging what it managed to get right so far. I guess there is a leap of faith to make because if you don't go all the way and you try to code together with Claude Code, it will mess with your stuff and undo a lot of it and it's just frustrating and not optimal. But if you remove yourself from the loop completely, then indeed you'll have no idea what's going on. There still needs to be a human in the loop, and in the right part of it, otherwise you're just vibe coding garbage.

I used to use Claude Code with Opus exclusively because of how good it is IME. Then Anthropic banned me, so I switched to OpenCode. I really want OpenCode to win, but there is a long way to go for it to get the same polish in the UX department (and to get a handle on the memory leaks). I am 100% sure Claude Code is hacks upon hacks internally, but on the surface it works quite well (not that they have fixed the flashing issue). With OpenCode I also switched to GPT-5.2-Codex, and I have to say it's fairly garbage IME. I can't get it to keep working; it takes every opportunity to either tell me what I should do next for it, or to just tell me it figured a particular piece of the larger puzzle out and that, if I want, it can continue. It is not nearly as independent as Opus is. Now I'm on the Codex CLI with GPT-5.2, as I figured maybe the harness was the issue, but it is not very good either.

I think big corporations are just structurally unable to create products people actually want to use. They have too much experience with their customers being locked in and switching costs keeping them there. Anthropic needed a real product to win mind-share first; they will start enshittifying later (by some accounts they may already have). The best thing a big corporation can do with a nascent technology like this is to make it available to use everywhere and then acquire the startup that converts it to a winner first. Microsoft even fumbled that.

> It is made as a demonstration of how even a basic non-ML algorithm with no data from other users can quickly learn what you engage with to suggest you more similar content.

Yeah, it got really sticky real fast. From the random (?) selection it starts off with, where I couldn't recognize anything but popular TV shows, it immediately over-indexed on that content, and I had to fight for my life to see anything else in the feed that I would recognize and consider a good algorithmic pick for my interests.

Which is brilliant, because Instagram has the same issue for me - absolute metric tons of garbage, and whenever there is a gem in that landfill of a feed that I interact with positively, it's nothing but more of that on my feed for weeks until I grow sick of that given thing. In conclusion, Instagram could have used this 30-line algorithm and I'd have the exact same experience using it.
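I haven't read the demo's source, but I imagine the general shape of a ~30-line, per-user, non-ML ranker is something like this (purely illustrative, not the project's actual code):

    // Illustrative guess only: engagement bumps per-tag weights and candidates
    // are scored by the sum of their tags' weights.
    type Item = { id: string; tags: string[] };

    const weights = new Map<string, number>();

    // Call whenever the user likes/watches/lingers on an item.
    function recordEngagement(item: Item) {
      for (const tag of item.tags) {
        weights.set(tag, (weights.get(tag) ?? 0) + 1);
      }
    }

    // Rank candidates by accumulated tag weight, plus a bit of noise so the
    // feed doesn't collapse onto a single topic instantly.
    function rankFeed(candidates: Item[], exploration = 0.1): Item[] {
      return candidates
        .map((item) => ({
          item,
          score:
            item.tags.reduce((sum, tag) => sum + (weights.get(tag) ?? 0), 0) +
            Math.random() * exploration,
        }))
        .sort((a, b) => b.score - a.score)
        .map(({ item }) => item);
    }

With nothing pushing back against the dominant tags, whatever you engage with early snowballs, which is exactly the over-indexing I described.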

Algorithmic feeds are obviously problematic for turning several generations into lobotomized zombies, but they are also just not very good at nuance, so it is not even a case of something that's bad for you but feels so good. It's just something that's bad, but it is able to penetrate the very low defenses human psychology has for resisting addiction and short-term gratification, and there is no incentive to improve the feeds for the sake of the user as long as they work for the advertisers.


I'm just glad I was there for Mozilla's peak. Hopefully I'll get to experience Ladybird's next.


You beat me to it. Thanks for sharing it

I do this sometimes - let Claude Code implement three or four features or fixes at the same time in the same repository directory, no worktrees. Each session knows which files it created, so when you ask CC to commit the changes it made in this session, it can differentiate them. Sometimes it will think the other changes are temporary artifacts or results of an experiment and try to clear them (especially when your CLAUDE.md contains an instruction to clean up after itself), so you need to watch out for that. If multiple features touch the same file and different hunks belong to different commits, that's where I step in and manually coordinate.

I'm insane and run sessions in parallel. CLAUDE.md has Claude committing to git just the changes that session made, which lets me pull each session's changes into their own separate branch for review without too much trouble.
