It's absolutely possible to have both a SaaS-based control plane and continue functioning if the internet connection/control plane becomes unavailable for a period. There's presumably hardware on site anyway to forward requests to the servers doing access control; it wouldn't be difficult to have that hardware keep a local cache of the current configuration. Done that way you might find you can't make changes to who's authorised while the connection is unavailable, but you can still let people who were already authorised into their rooms.
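A rough sketch of what that could look like on the on-site hardware (names, the cache path and the fetch call are all hypothetical):

```python
import json

CACHE_PATH = "/var/lib/access/last_config.json"  # hypothetical on-device cache

def refresh_config(fetch_from_control_plane):
    """Prefer the control plane; fall back to the on-disk cache if it's unreachable."""
    try:
        config = fetch_from_control_plane()        # e.g. an HTTPS call to the SaaS API
        with open(CACHE_PATH, "w") as f:
            json.dump(config, f)                   # keep the cache fresh while online
    except OSError:
        with open(CACHE_PATH) as f:
            config = json.load(f)                  # stale but usable during an outage
    return config

def is_authorised(config, badge_id, door_id):
    # Changes made in the cloud during the outage won't show up here, but anyone
    # who was already on the list keeps getting into their rooms.
    return door_id in config.get("grants", {}).get(badge_id, [])
```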
The self-service kiosks are intentionally throttled when scanning barcodes, at a guess to prevent people accidentally scanning the previous/wrong item - I once had some problems with one and a staff member flipped it into supervisor mode, at which point they were able to scan at the same rate you'd see at a manned checkout.
I think that's handled by the barcode scanner itself, at least on the ones I've used. The scanner will not recognize the same code immediately, but will immediately pick up a different code.
What's slow is that after each scan it needs to check the weight, which means it lets the scales settle for one second before accepting another scan.
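Very roughly, the behaviour being described, as a sketch (the timings and names are guesses):

```python
import time

SETTLE_SECONDS = 1.0        # guessed delay for the scales to settle
DUPLICATE_WINDOW = 2.0      # guessed window in which a repeat of the same code is ignored

last_code, last_time = None, 0.0

def accept_scan(code):
    """Return True if the kiosk should register this scan."""
    global last_code, last_time
    now = time.monotonic()
    if now - last_time < SETTLE_SECONDS:
        return False        # scales haven't settled since the last item
    if code == last_code and now - last_time < DUPLICATE_WINDOW:
        return False        # likely the same label read twice in a row
    last_code, last_time = code, now
    return True
```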
Now take that, and add someone at our Polish supermarket chain (Biedronka) having the dumb "insight" to disable the "scan multiple" option. Until about a month ago, whenever buying something in larger quantities, I could just press "Scan multiple", tap in the amount, scan the barcode once, and move all the items of the same type to the "already scanned" zone. Now I have to do it one by one, each time waiting for the scales to settle. Infuriating when you're buying some spice bag or candy and have to scan 12 of them one by one.
I scan as fast as a manned checkout (I did my time in retail), and I can scan my groceries at that speed whilst the people next to me spend most of their time rotating an item to find the barcode.
You don't have to chain 8 PRs together. GitHub tries really hard to hide this from you, but you can in fact review one commit at a time, which means you don't need to have a stack of 8 PRs that cascade into each other.
Yup, that's what my team does. It works wonderfully, and it fits well with GitHub's "large PR" mindset imo. It could be a bit better in the GitHub UI, but so can most things. I vastly prefer it to individually reviewing tons of PRs.
The funny thing about this debate for me is that I find it comes down to the committer. If the committer makes small commits in a single PR, where each commit is a logical unit of work, part of the "story" being told about the overall change, then I don't personally find it that useful to stack them. The committer did the hard part: they wrote the story of changes in a logical, easy-to-parse manner.
If the story is a mess, where the commits are huge or out of logical order, etc. - then it doesn't matter much in my view; the PR(s) suck either way.
I find stacked PRs to be a workflow solution to what to me is a UI problem.
That's true, but acceptance is still all or nothing.
When chaining commits it's possible to (for example) have a function that does THING and then have another PR with a function that uses the first one.
It's somewhat of a PITA when the team has a hard no-dead-code rule, but otherwise it's quite manageable and invites rich feedback. The reviewer and their feedback can focus on the atomic change (in the example: the function that does THING) and not on the grand picture.
I've seen that you can read one commit at a time, but never anything for reviewing (or diffing between them if they change) - is there a UI beyond just clicking on the list of commits?
Though I forget if you can even comment on the individual commits in that view. Complex multi-commit PRs have generally been a nightmare on GitHub in my experience.
You can comment on the individual commits' changes, but you can't comment on the commit (e.g. its message) itself. I believe you can do this in Gerrit.
Does that happen on merge or before PR creation? I thought the setting only applied when you hit the merge button, so you'd still have commits prior to the merge. Though that won't help if someone pre-squashes them :s
Happens on merge, sure, but the end result is that you kinda lose the individual commits. You can still find them in the PR, but you won't see them in the git history, so git blame will just point you to the one big squashed commit.
If you, however, open a chain of 8 PRs, and merge them in the right order, the individual commits will be persisted in the git history. Potentially worth it if you like to have a "story" of several commits...
So let me get this straight, GOG, a privately owned company, wants me to donate money to them so that they can buy the rights to games in order to sell them to me?
It is true, but it is an occasional practice in some companies with certain business models to leave part of the users' contribution up to the users themselves to decide. I have bought some games from their preservation program, and it gave me the option to add an extra donation for the project, which I did. I guess this is similar to that.
Moreover, if somebody is really into these old games, they may want to support it and get access to the behind-the-scenes material, the Discord, votes on which games to prioritize, etc. I don't think this is very different from, e.g., subscribing to the Patreon of a creator to get some extra content.
That's what it looks like. I kind of get it, as there's no guarantee that a game they make available again will sell enough to cover the costs - it's as much a preservation effort as a commercial one.
For a lot of games it's just a matter of configuring DOSBox and packaging it; I can't see how that would be very expensive. But for others it's a lot more involved.
But in either case we are talking about commercial products. The games are still copyrighted commodities to be sold. I assume they get licensed by the copyright holders to update them and sell them on GOG. I do not see how a "charity"-based process would make sense or be honest here.
What are you talking about? Charities can sell products, they're just not supposed to make a profit, so all profits would have to go back into preservation.
I mean we can go off on a tangent about why IKEA should not get away with being registered as a charity, but as long as GoG is not doing tax evasion I don't see the problem.
The copyright holders still make profit from the sales as for-profit entities. I do agree that non-profit status would make more sense for the preservation program though. But it still would not mean that nobody makes profit out of the result.
Simplified, but yes, more or less correct. I'm a patron of theirs, and see it more or less as a donation (it obviously isn't, in the eyes of the tax agency).
Yeah, my first reaction was, what the heck is this?
Now I can imagine having specific campaigns. Let's say they need $50,000 to release an upgraded port of Shining Force.
Cool, I might be open to pre-ordering it at $25 so they can see if there's enough interest to proceed. But why am I going to literally just donate to a private company? I think the entire world has gone mad; there's not even a real product here. It's not like for that $5 a month they give you a random game or something. They just want money.
I would be happy to donate to campaigns to buy old IP (video games, but also music, movies, old tabletop RPGs, etc.) to then slap an open license on and release for free. Seems like a good investment for the future, to get as much content as possible away from rights hoarders.
I am also happy to buy more old games from GOG than I ever have the time to play, so they already get my money.
For many of those who like and buy from GOG, that would not go well at all. The whole point of GOG is that you actually own a copy of the game when you buy it. Subscription-based access to games would be antithetical to that. It is the main, if not the only, selling point of GOG currently.
This is something I've been learning in the completely different context of bouldering since I took it up a few months ago. When you start out you instinctively move slowly, so you can be sure of your footing and won't fall off, but somewhat counterintuitively it's better to move as quickly as you can. This has two advantages - firstly, the quicker you move the less time you're on the wall and the less energy it takes; just staying in place takes energy when you're dangling off a wall by your fingertips. Secondly, you can use momentum to your advantage: instead of stopping and then having to get yourself going again every move, you just bounce from hold to hold.
I have no pithy summary of how this applies to the world of business or software development. It just reminded me of that.
Counterpoint: No it won't. People are using LLMs because they don't want to think deeply about the code they're writing, so why in hell would they instead start thinking deeply about code in order to verify what the LLM is writing?
So, not a home user then. If you make your living with computers in that manner you are by definition a professional, and just happen to have your work hardware at home.
In the context of background jobs, idempotent means that if your job gets run a second time (and it will get run a second time at some point - these systems all do at-least-once delivery) there aren't any unfortunate side effects. Often that's just a case of checking whether the relevant database updates have already been done, and maybe not firing a push notification again for a repeated job.
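A minimal sketch of that check-first pattern (table, column and helper names are made up; `conn` is assumed to be a DB-API connection such as psycopg2's):

```python
def handle_payment_job(conn, event_id, user_id, amount):
    cur = conn.cursor()
    # Has this event already been processed? If so, the redelivery is a no-op.
    cur.execute("SELECT 1 FROM processed_events WHERE event_id = %s", (event_id,))
    if cur.fetchone():
        conn.rollback()
        return                                    # no double update, no second push
    cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, user_id))
    cur.execute("INSERT INTO processed_events (event_id) VALUES (%s)", (event_id,))
    conn.commit()                                 # marker and update commit together
    notify_user(user_id, "Payment received")      # hypothetical push helper, fires once per event
```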
If you need idempotent db writes, then use something like Temporal. You can't really blame Celery for not having that because that is not what Celery aims to be.
With Temporal, your activity logic still needs to ensure idempotency, e.g. by checking whether an event id / idempotency key already exists in a table. It's still at-least-once delivery. Temporal does make it easy to mint an idempotency key by concatenating the workflow run id and activity id, if you don't have one provided client-side.
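A hedged sketch of that with the Python SDK (the attribute names on `activity.info()` and the `upsert_if_absent` helper are assumptions; check the temporalio docs):

```python
from temporalio import activity

@activity.defn
async def record_payment(payload: dict) -> None:
    info = activity.info()
    # Run id + activity id stays the same across retries of this activity,
    # so every redelivery maps onto the same key.
    idempotency_key = f"{info.workflow_run_id}:{info.activity_id}"
    # Hypothetical helper, e.g. INSERT ... ON CONFLICT DO NOTHING keyed on the key.
    await upsert_if_absent(idempotency_key, payload)
```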
Temporal requires a lot more setup than setting up a Redis instance though. That's the only problem with it. And I find the Python API a bit more difficult to grasp. But otherwise a solid piece of technology.
I thought that the point was to post valuable thoughts - because it is interesting to read them. But now you suggest that it depends on how they were generated.
Yeah, but if you're having to turn to a machine to compose your thoughts on a subject they're probably not that valuable. In an online community like this the interesting (not necessarily valuable) thoughts are the ones that come from personal experience, and raise the non-obvious points that an LLM is never going to come up with.