Hacker News | new | past | comments | ask | show | jobs | submit | fwip's comments

I think the common miscommunication here is that defense is the largest part of the US discretionary budget (about half of it), but the discretionary budget doesn't include non-negotiable spending like Social Security, Medicare, etc.

Trump doesn't want to do Medicare etc anymore. The states can do that now.

You can smell it.

Sure, and an LLM-written article will use that pattern eight times in two pages.

As a small kid, I learned how to use the DOS command line to launch this game on my parents' PC. I also remember really enjoying Sopwith 2, which added cows, among other things.

You take a binary that's intended to run on the Xbox 360, and emit a new binary that runs on a modern x86 computer.
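In miniature, static recompilation looks something like this sketch: translate the guest program's instructions into host-language source once, ahead of time, rather than interpreting each instruction at run time. The two-opcode ISA here is made up for illustration and has nothing to do with the actual Xbox 360 tooling.

```python
# Toy static recompiler: a made-up two-instruction guest ISA is
# translated into Python source, which then runs natively.

GUEST_PROGRAM = [
    ("li", "r1", 5),             # load immediate: r1 = 5
    ("li", "r2", 7),             # load immediate: r2 = 7
    ("add", "r3", "r1", "r2"),   # r3 = r1 + r2
]

def recompile(program):
    """Emit host (Python) source implementing the guest program."""
    lines = ["def translated(regs):"]
    for instr in program:
        op = instr[0]
        if op == "li":
            _, dst, imm = instr
            lines.append(f"    regs['{dst}'] = {imm}")
        elif op == "add":
            _, dst, a, b = instr
            lines.append(f"    regs['{dst}'] = regs['{a}'] + regs['{b}']")
        else:
            raise ValueError(f"unknown opcode: {op}")
    lines.append("    return regs")
    return "\n".join(lines)

namespace = {}
exec(recompile(GUEST_PROGRAM), namespace)
regs = namespace["translated"]({})
print(regs["r3"])  # 12
```

A real recompiler faces much harder problems (indirect jumps, self-modifying code, vector units), but the translate-once-run-native structure is the same.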

If you haven't tried it, the Steam Controller does a pretty good job of playing mouse-and-keyboard games. The original is probably hard to find now, but allegedly they'll release a new one later this year.

And having a colony on Mars will be profitable because of...?

Was the British colonization and funding of Canada, New Zealand, and Australia profitable? None of the three colonies was profitable for decades after its founding.

Yet looking back, colonialism was probably the most profitable venture ever undertaken. All three of them ended up becoming key allies and instrumental trading partners.

Think on a longer time scale.


I'm pretty sure that Britain actually had specific profitability goals from the get-go.



Am I the bozo with this? I assure you I don’t think I am very smart.

Building those colonies involved a lot of slavery and forced or indentured labour.

All of those places had valuable resources for extraction. That was the whole basis for their colonization. The whole basis of colonialism, itself.

Mars has a lot of rocks.


It's not just speed - incremental parsing allows for better error recovery. In practice, this means that your editor can highlight your code as you type, even when what you're typing has broken the parse tree (especially the code after your edit point).
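A minimal sketch of the error-recovery idea (not tree-sitter's actual algorithm, and the grammar is made up): when a statement fails to parse, record an error node and resynchronize at the next `;`, so the statements after the broken edit point still get proper nodes - which is what lets highlighting keep working past the error.

```python
# Toy parser with error recovery. Grammar: statements of the form
# "let <name> ;". Malformed input becomes an ("error", ...) node,
# and parsing resumes at the next ";" so later code still parses.

def parse(tokens):
    nodes, i = [], 0
    while i < len(tokens):
        if (tokens[i] == "let" and i + 2 < len(tokens)
                and tokens[i + 1].isidentifier() and tokens[i + 2] == ";"):
            nodes.append(("stmt", tokens[i + 1]))
            i += 3
        else:
            # Error recovery: wrap the bad span in an error node,
            # skip ahead to the next ";", and keep going.
            start = i
            while i < len(tokens) and tokens[i] != ";":
                i += 1
            nodes.append(("error", tokens[start:i]))
            i += 1  # consume the ";" if present
    return nodes

# "let @@ ;" is malformed, but "let y ;" after it still parses.
tree = parse(["let", "x", ";", "let", "@@", ";", "let", "y", ";"])
print(tree)  # [('stmt', 'x'), ('error', ['let', '@@']), ('stmt', 'y')]
```

Incremental parsers go further by reusing the unchanged subtrees from the previous parse, but the payoff shown here - valid nodes on both sides of an error - is the part the editor's highlighter depends on.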

Thank you for being up-front in disclaiming that this project is AI-written, both here and on the GitHub page. I really appreciate the transparency.

I think reasonable people can disagree on this.

From the point of view of an individual developer, it may be "fraction of tasks affected by downtime" - which would lie between the average and the aggregate, as many tasks use multiple (but not all) features.

But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.


> But if you take the point of view of a customer, it might not matter as much 'which' part is broken. To use a bad analogy, if my car is in the shop 10% of the time, it's not much comfort if each individual component is only broken 0.1% of the time.

Not to go too far out of my way to defend GH's uptime, because it's obviously pretty patchy, but I think this is a bad analogy. Most customers won't have a hard dependency on every user-facing GH feature. Or, to put it another way, only a tiny fraction of users will actually have experienced something like the 90% uptime reported by the site. In practice, most people are probably experiencing something like 97-98%.


Sorry, by 'customer' I meant to say something like a large corporate customer - you're buying the whole package, and across your org, you're likely to be a little affected by even minor outages of niche services.

But yeah, totally agree that at the individual level, the observed reliability is between 90% and 99%, and probably toward the upper end of that range.
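The arithmetic behind that gap is easy to sketch. With illustrative numbers only (assumed for the example, not GitHub's actual figures): if each feature is independently up 99.9% of the time, the uptime a user observes depends on how many features they rely on.

```python
# Assumed numbers for illustration: 100 features, each independently
# available 99.9% of the time. Observed uptime = probability that
# none of the features a given user needs is down.

feature_uptime = 0.999
total_features = 100

def observed_uptime(features_used):
    """Chance that all of this user's required features are up."""
    return feature_uptime ** features_used

print(f"{observed_uptime(total_features):.3f}")  # whole package: ~0.905
print(f"{observed_uptime(5):.3f}")               # light user: ~0.995
```

So a corporate customer depending on everything sees roughly the aggregate ~90%, while an individual touching a handful of features sees something close to 99% - both from the same per-feature numbers.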


A better analogy is one bulb in the right rear brake light cluster burning out. Technically the car is broken, but realistically you will be able to do everything you want to do, unless the thing you want to do is check that every bulb in your brake lights is working.

That's an awful analogy, because "realistically you will be able to do all the things you want to do" doesn't hold. If a random GitHub service goes down, there's a significant chance it breaks your workflow. It's not a certainty, but it's far from zero.

One bulb in the cluster going out is like a single server at GitHub going down, not a whole service.


Or if your kettle is not working, is the whole house considered not working?

I've been on a flight that was late leaving the gate because the coffeemaker wasn't working.
