elliot07's comments | Hacker News

You should use Shopify and focus on the store part of the business, not the SQLite part :)

The author should check to see if the HTTP response body contains "nginx" or "apache" and just filter those out. Seems like at least 50% of what I'm seeing.

Also would be nice if there was a hotlink to view the original site directly from the index page.


The search page lets you add multiple exclude filters to the aggregation pipeline. So as you filter common strings, the interesting results bubble to the top.

If you click the image it should take you to an info page on the service.


Why has public comms been so poor on this issue? There have been lots of GitHub issues posted in the Claude Code repo, with new comments each day screaming into the void, but radio silence from Anthropic since the revert in December. It's clearly causing a lot of frustration for users, leading to clever workarounds like this.

It was obviously a complex issue (I appreciate that and your work!). But I think there's a lot to be improved on with communication. This issue in particular seems to have cost a lot of user trust - not because it was hard to solve and took a while - but because the comms and progress around it were so limited.

Eg issues:

* https://github.com/anthropics/claude-code/issues/1913

* https://github.com/anthropics/claude-code/issues/826

* https://github.com/anthropics/claude-code/issues/3648


The communication is definitely on me! There honestly wasn't much new to say - I've been slowly ramping since early Jan just to be extra sure there are no regressions. The main two perf issues were:

1. Since we no longer have <Static> components, the app re-renders much more frequently with larger component trees. We were seeing unusual GC pauses from having too much JSX... Better memoization has largely solved that.

2. The new renderer double buffers and blits similar cells between the front and back buffer to reduce memory pressure. However, we were still seeing large GC pauses from that, so I ended up converting the screen buffer to packed TypedArrays.
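(Not CC's actual code - just a minimal sketch of the packed-TypedArray idea described above: instead of allocating one object per cell every frame, store each cell's codepoint and attributes in flat typed arrays, so diffing two frames touches no heap objects. All names here are invented for illustration.)

```typescript
// Hypothetical packed screen buffer: codepoints in one Uint32Array,
// packed fg/bg attributes in another. No per-cell object allocations.
class ScreenBuffer {
  readonly chars: Uint32Array; // one codepoint per cell
  readonly attrs: Uint32Array; // fg (bits 8-15) | bg (bits 0-7)

  constructor(public cols: number, public rows: number) {
    this.chars = new Uint32Array(cols * rows);
    this.attrs = new Uint32Array(cols * rows);
  }

  set(x: number, y: number, ch: string, fg: number, bg: number): void {
    const i = y * this.cols + x;
    this.chars[i] = ch.codePointAt(0) ?? 32; // space if empty
    this.attrs[i] = (fg << 8) | bg;
  }

  // Collect the indices of cells that differ from `prev`. Apart from
  // the result array, this loop allocates nothing, so it creates no
  // GC pressure even when called every frame.
  diff(prev: ScreenBuffer): number[] {
    const dirty: number[] = [];
    for (let i = 0; i < this.chars.length; i++) {
      if (this.chars[i] !== prev.chars[i] || this.attrs[i] !== prev.attrs[i]) {
        dirty.push(i);
      }
    }
    return dirty;
  }
}
```

The point is less the exact layout than that the hot per-frame loop only reads and compares integers in preallocated arrays.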


I’m really surprised that GC is an issue at the bits/sec throughput a TUI would be pushing. At the risk of making an obvious observation: your render loop is doing way too much work for what it is producing.


Most people's mental model of CC is that "it's just a TUI" but it should really be closer to "a small game engine". For each frame our pipeline constructs a scene graph with React -> lays out elements -> rasterizes them to a 2D screen -> diffs that against the previous screen -> _finally_ uses the diff to generate ANSI sequences to draw. We have a ~16ms frame budget, and roughly only ~5ms of that to go from the React scene graph to ANSI written. You're right that in theory we shouldn't have to do much work, but in practice that's required optimizations at every step.
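(A minimal sketch of the final stage of such a pipeline, assuming a simple grid-of-strings representation - not Anthropic's implementation. Given two rasterized frames, emit only the ANSI needed to repaint cells that changed, collapsing adjacent dirty cells on a row into one cursor move plus one write.)

```typescript
// Hypothetical diff-to-ANSI step: compare the previous and next frame
// cell by cell and emit cursor-position (CUP) sequences only for the
// runs of cells that actually changed.
function diffToAnsi(prev: string[][], next: string[][]): string {
  let out = "";
  for (let y = 0; y < next.length; y++) {
    let x = 0;
    while (x < next[y].length) {
      if (next[y][x] === prev[y][x]) {
        x++;
        continue;
      }
      // Start of a dirty run: gather every consecutive changed cell.
      const start = x;
      let run = "";
      while (x < next[y].length && next[y][x] !== prev[y][x]) {
        run += next[y][x];
        x++;
      }
      // CUP is 1-based: ESC [ row ; col H, then write the changed run.
      out += `\x1b[${y + 1};${start + 1}H${run}`;
    }
  }
  return out;
}
```

An unchanged frame produces an empty string, which is why diffing pays off even after you've already rasterized: most frames only touch a few cells, so the bytes written to the terminal stay tiny.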

For the GC pauses specifically, what mattered was predictable performance. More allocations == more GC == more frames where the VM is locked up, seemingly doing nothing. On slower machines we were seeing this be on the order of seconds, not ms, and when somebody is typing, all they feel is the one character that's stuttering. Honestly, I was surprised about this too, as GC in JS is often not something that's too impactful.


Can I ask why you used JavaScript at all for CC? Or even React for a simple UI? It seems misaligned with the app’s nature. This bug, GC pauses, everything else you mention… This isn’t criticism, because I believe people make good judgements and you and Anthropic will have good reasons! It’s just puzzlement, in that I don’t understand what the balance of judgement was that led you here, and having experienced all the issues it led to, I would love to know what the reasons were. Thank you for any insights you can share :)


Supposedly using React allows CC to have structured output so it can understand what’s on the screen.


Should have used Go with Bubbletea.


Maybe the problem is using React for that.


Thanks for the in depth explanation! I think the comparison to a game engine makes a lot of sense. Is the diff just part of the react rendering engine, or is it something you intentionally introduce as a performance optimization? Mostly I’m wondering how much the diff saves on rendering performance if you’ve already generated a 2D raster. In the browser, this saves on API calls to the DOM but at that point you haven’t rendered anything resembling an image. Is this what helps to resolve flickering, perhaps?


> On slower machines we were seeing this be in the order of seconds, not ms and when somebody is typing all they feel is the 1 character that's stuttering.

You mean like a laptop that is trying to stay cool (aka, cpu throttling) on battery power while Claude is running a slightly different version of the test suite for the 5th time to verify it didn't break anything?

Yeah, the typing latency is really bad in those cases. Sometimes waiting for 40 seconds or more.


What the fuck? What's wrong with idk, ncurses?


TUI development is a lost art these days, apparently.


TUIs are experiencing something of a renaissance, I think. My hypothesis is that the popularity attracts newcomers who inevitably bring their own preconceptions from other domains. Many are unaware of the state of the art and reinvent the wheel. I don’t think this is categorically a bad thing, but it can be a mixed bag of slop and genuinely useful cross-pollination.


Code sharing with their web app. Layouts. Event handling. Not wanting to reimplement all that from scratch when React and Ink are a popular and full-featured option.


I think this is the main issue. When I would get into flickering mode, it appeared that the entire TUI was re-rendering on every key press. I don’t know if it’s maybe just the limitation of Ink or even terminals.


Well vim doesn’t flicker so it’s definitely not a limitation of terminals, but you’re probably right about the Ink/React stack.


So why are you stuck with ink/react stack?


I don’t use React/Ink for anything, what do you mean?


Presumably they had no clue how to fix it and just ignored it and pretended everything works fine because they didn't want to admit it?


Honestly, this may be an unpopular opinion, but I disagree with the ideal path. It may be on-paper the correct path for this sim, but in my experience it will lead you to bad career + team outcomes. There are better options based on my experience:

1. If the junior dev is really that critical for a large project for some bizarre reason (fix that next time), tell Gary he's critical to that and say you can realloc ppl to cover, or do this task under a 1hr time limit if it's urgent (if it exceeds that, kill the task).

2. Say to Gary, next time let me know directly rather than DMing someone on the team, so you can route it to the right person (buys trust, covers the team).

3. Renewal of BigCo is important to the biz. You should have some room to accommodate requests like these instead of being a stone wall to ad hoc requests; that will not buy you or your team favour at all. Remember, this is a startup!


I don't think this is unpopular at all; I think it's actually the 'Senior/Pragmatic' view.

This highlights a key distinction: The simulator is designed to teach heuristics (e.g., 'Default to protecting the team'), not a rigid playbook. In a real startup, specific contexts (like 'BigCo Renewal') often override the default heuristic.

You nailed three critical nuances that the default path glossed over:

Bus Factor: If the Junior is the only one who can pull the data, that's an engineering failure on my part.

Business Alignment: In a startup, 'Revenue' > 'Sprint Integrity.' Being a 'stone' to revenue-critical requests is a fast way to lose influence.

The Middle Path: Your suggestion (Timebox/Reallocate) is the advanced move. It solves the VP's pain without wrecking the sprint.

Thanks for adding this perspective; it shows exactly where 'Best Practice' meets reality.


Celery is such garbage to run/maintain at any sort of scale. Very excited for this. RQ/Temporal also seem to solve this well.

Anyone here done the migration off of celery to another thing? Any wisdom?


A customer of mine has two projects. One running on their own hardware, Django + Celery. The other one running on AWS EC2, Django alone.

In the first one we use Celery to run some jobs that may last from a few seconds to some minutes. In the other one we create a new VM and make it run the job and we make it self destroy on job termination. The communication is over a shared database and SQS queues.

We have periodic problems with Celery: workers losing connection with RabbitMQ, Celery itself getting stuck, gevent issues maybe caused by C libraries, but we can't be sure (we use prefork for some workers but not for everything).

We had no problems with EC2 VMs. By the way, we use VirtualBox to simulate EC2 locally: a Python class encapsulates the API to start the VMs and does it with boto3 in production and with VBoxManage in development.
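(A minimal sketch of that dev/prod split - one interface, two backends selected by environment. The commenter's real version is a Python class wrapping boto3 and VBoxManage; everything here, including the names, is invented for illustration, with the real SDK/CLI calls stubbed out.)

```typescript
// One abstraction for "start a job VM", backed by EC2 in production
// and VirtualBox in development.
interface VmBackend {
  launch(image: string): string; // returns an instance/VM id
}

class Ec2Backend implements VmBackend {
  launch(image: string): string {
    // In the real class this would call the AWS SDK (RunInstances).
    return `i-${image}`;
  }
}

class VirtualBoxBackend implements VmBackend {
  launch(image: string): string {
    // In the real class this would shell out to `VBoxManage startvm`.
    return `vbox-${image}`;
  }
}

function backendFor(env: "production" | "development"): VmBackend {
  return env === "production" ? new Ec2Backend() : new VirtualBoxBackend();
}
```

The job code only ever sees `VmBackend`, so swapping cloud VMs for local ones is a one-line environment check rather than a code change.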

What I don't understand is: it's always Linux, amd64, RabbitMQ but my other customer using Rails and Sidekiq has no problems and they run many more jobs. There is something in the concurrency stack inside Celery that is too fragile.


Celery with the Redis backend seemed pretty solid.

RabbitMQ and friends are just a pain to use.


Migrated Celery to Argo Workflows. No wisdom, as it was straightforward. You lose a lot of startup speed though, so it's not a drop-in replacement and is only a good choice for long-running workflows. Celery was easier than Argo Workflows; Celery is really easy to get started with. I like Airflow the best, but it's closer to Argo Workflows in terms of more long-lived workflows. I hope to try Hatchet soon. I've read Temporal is even harder to manage.


I can share the sentiment: I had to work with Celery years ago, and the maintenance/footguns exceeded expectations. The codebase and docs are also a bit messy; it's a huge project used and contributed to by many, so it's understandable I guess. Anyway, Argo if you are in K8s, something else if you aren't. And if you are a startup and need speed, just go with something like Procrastinate.


I remember working at a startup in the early 2010s that used Celery for all of its background job infrastructure. There were several million tasks run daily across dozens of servers. Celery would regularly hang. Queues would pile up. We had some crazy scripts that would restart Celery, detect and kill hanging processes, etc. Fun times.


We switched from Celery to Temporal. Temporal is such a great piece of distributed system.


What were the problems you had with Celery?


They for sure nerfed it within the last ~3 weeks. There's a measurable difference in quality.


They actually just shipped a bug fix, and it seems like it got a lot better in the last week or so.


Chonkie is great software. Congrats on the launch! Has been a pleasure to use so far.


Thank you :)


OpenAI has a version called Codex that has support. It's lacking a few features like MCP right now and the TUI isn't there yet, but interestingly they are building a Rust version (it's all open source) that seems to include MCP support and looks significantly higher quality. I'd bet within the next few weeks there will be a high-quality Claude Code alternative.


Congrats on the V2 launch. Does Plandex support MCP? Will take it for a test drive tonight.


Thanks! It doesn't support MCP yet, but it has some MCP-like features built in. For example, it can launch a browser, pull in console logs or errors, and send them to the model for debugging (either step-by-step or fully automated).


Agree. My wish-list is:

1. Non-JS based. I've noticed a ton of random bugs/oddities in Claude Code, and now Codex, with UI flickering, scaling, user input issues, etc - all, I believe, from trying to do React stuff and shipping half-baked LLM-produced JS in a CLI application. Using a more appropriate language that is better suited to CLIs would help a lot here (Go or Rust, for example).

2. Customizable model selection (e.g. OpenRouter, etc).

3. Full MCP support.

