Hacker News | PrayagS's comments

Are you an API-only user, or on Max x20 with extra usage on top? If it's the latter, how are the limits treating you?

I went for Cursor on the $200 plan, but I hit those limits in a few days. Claude Code came out after I'd gotten used to Cursor, but I've been meaning to switch in the hope that the cost works out better.

I go to the API directly after I hit those limits. That's where it gets expensive.


+1 to beads. Works great.

Been thinking of having Opus generate plans and then having Gemini 3 Flash execute them. Might work better than using Haiku for the same.

Anyone tried something similar already?
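The split could be as simple as: have the stronger model emit a numbered plan, then feed each step to the cheaper model one at a time. A minimal sketch, where `call_opus` and `call_flash` are hypothetical stand-ins (stubbed here) for whatever API clients you actually use:

```python
def call_opus(prompt: str) -> str:
    # Placeholder for a call to the planning model (e.g. Opus).
    return "1. read file\n2. apply fix\n3. run tests"

def call_flash(step: str) -> str:
    # Placeholder for a call to the cheaper executor model (e.g. Gemini Flash).
    return f"done: {step}"

def plan_then_execute(task: str) -> list[str]:
    # One expensive call to produce a numbered plan...
    plan = call_opus(f"Write a numbered plan for: {task}")
    steps = [line.split(". ", 1)[1] for line in plan.splitlines()]
    # ...then one cheap call per step.
    return [call_flash(step) for step in steps]
```

The interesting design question is whether the executor gets the whole plan as context or just its own step; the sketch above does the latter, which keeps each cheap call small.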


From their response headers, it seems like requests are served by NGINX directly. How do they defend against DoS attacks?
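A common first line of defense at the NGINX layer itself is per-IP rate limiting with the `limit_req` module; a minimal sketch (zone name, rates, and burst size are arbitrary choices, not what HN actually runs):

```nginx
# Shared-memory zone keyed by client IP: 10 MB of state, 10 req/s sustained per IP.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Allow short bursts of up to 20 queued requests; reject the rest.
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
    }
}
```

This only handles application-level request floods from individual IPs; volumetric attacks still need upstream filtering.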


Big server. And if it goes down, it goes down. Who cares, it's Hacker News.


I'm in India and we're affected as well.


Oceania gang here, and I think it's a global issue.


I haven't used Cursor since I use Neovim and it's hard to move away from it.

The autocomplete suggestions from FIM models (either open source or even something like Gemini Flash) punch far above their weight. That, combined with CC/Codex, has been a good setup for me.
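For context, FIM (fill-in-the-middle) models take the text before and after the cursor and predict the gap; the prompt is just the two halves joined with model-specific sentinel tokens. A sketch using StarCoder-style tokens (other model families, e.g. CodeLlama, use different sentinels, so check your model's tokenizer docs):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from the code around the cursor.

    The token names here are StarCoder-style and model-specific.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The editor plugin sends this prompt to the model and inserts whatever
# the model generates after <fim_middle> at the cursor position.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```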


That used to be the case a few years ago as well, when Wayland/Sway was still considered experimental.

I had tried Manjaro i3 and XFCE's i3 variant, but in the end it was actually more convenient to install the KDE version and then install i3 on top.


True. In my experience, it has been good for second opinions and code reviews, usually from within CC/Codex via an MCP like zen.


> another factor to consider is that if you have a typical Prometheus `/metrics` endpoint that gets scraped every N seconds, there's a period in between the "final" scrape and the actual process exit where any recorded metrics won't get propagated. this may give you a false impression about whether there are any errors occurring during the shutdown sequence.

Have you come across any convenient solution for this? If my scrape interval is 15 seconds, I don't exactly have 30 seconds to record two scrapes.

This behavior is sort of the reason why our services still use statsd: the push-based model doesn't have this problem.
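For what it's worth, push sidesteps the gap because each datapoint leaves the process the moment it's recorded, so nothing depends on a scraper arriving before exit. A minimal sketch of the statsd wire format over UDP (class and method names are illustrative, not any particular client library):

```python
import socket

class StatsdClient:
    """Tiny push-based statsd client; metrics are sent as they happen."""

    def __init__(self, host: str = "127.0.0.1", port: int = 8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    @staticmethod
    def serialize(name: str, value: int, metric_type: str) -> str:
        # statsd line protocol: "<name>:<value>|<type>", e.g. "errors:1|c"
        return f"{name}:{value}|{metric_type}"

    def incr(self, name: str, value: int = 1) -> None:
        # UDP is fire-and-forget: the packet is handed to the kernel
        # immediately, so counters bumped during shutdown still go out
        # even if the process exits before the next scrape would have run.
        payload = self.serialize(name, value, "c").encode("ascii")
        self.sock.sendto(payload, self.addr)
```

The trade-off is the usual one: UDP packets can be dropped silently, so you exchange the scrape-window gap for best-effort delivery.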


Is "v2" based on their paper around Monarch?


It is Monarch, yes


Ah then is there any material on the new hybrid system that you mentioned? TIA.

