Hacker News | topspin's comments

No tags on objects.

Garage looks really nice: I've evaluated it with test code and benchmarks and it looks like a winner. Also, very straightforward deployment (self contained executable) and good docs.

But no tags on objects is a pretty big gap, and I had to shelve it. If Garage folk see this: please think on this. You obviously have the talent to make a killer application, but tags are table stakes in the "cloud" API world.


Thank you for your feedback, we will take it into account.

Great, and thank you.

I really, really appreciate that Garage accommodates running as a single node without workarounds or special configuration that yields some kind of degraded state. Despite the single-minded focus on distributed operation you no doubt hear about endlessly (as seen among some comments here), there are, in fact, traditional use cases where someone will be attracted to Garage only for the API compatibility, and where they will achieve production availability sufficient to their needs by means other than clustering.


What are "tags on objects?"

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object...

Arbitrary name+value pairs attached to S3 objects and buckets, and readily available via the S3 API. Metadata, basically. AWS has some tie-ins with permissions and other features, but tags can be used for any purpose. You might encode video multiple times at different bitrates, and store the rate in a tag on each object, for example. Tags are an affordance used by many applications for countless purposes.
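For anyone curious what that looks like in practice, here is a minimal sketch with boto3 (the endpoint, bucket, key, and tag names are all hypothetical; and against Garage, these tagging calls are exactly what's currently missing):

    # Minimal sketch of S3 object tagging with boto3.
    # Endpoint, bucket, key, and tag names are hypothetical.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:3900",  # e.g. a Garage or other S3-compatible endpoint
        aws_access_key_id="...",
        aws_secret_access_key="...",
    )

    # Attach arbitrary name+value pairs to an existing object,
    # e.g. recording the bitrate of one encoding of a video.
    s3.put_object_tagging(
        Bucket="videos",
        Key="movie-1080p.mp4",
        Tagging={"TagSet": [{"Key": "bitrate", "Value": "8000k"},
                            {"Key": "codec", "Value": "h264"}]},
    )

    # Read the tags back later without fetching the object body.
    tags = s3.get_object_tagging(Bucket="videos", Key="movie-1080p.mp4")
    print(tags["TagSet"])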


Thanks! I understand what tags are, but not what an "object" was in this context. Your example of multiple encodings of the same video seems very good.

Why not? SMB is no slouch. Microsoft has taken network storage performance very seriously for a long time now. Back in the day, Microsoft and others (NetApp, for instance) worked hard to extend and optimize SMB and deliver efficient, high-throughput file servers. I haven't kept up with the state of the art recently, but I know there have been long stretches where SMB consistently led the field in benchmark testing. It also doesn't hurt that Microsoft has a lot of pull with hardware manufacturers to see that their native protocols remain tier 1 concerns at all times.

I think a lot of people have a hard time differentiating the underlying systems from what they _see_, and they use that to bash MS products.

I heard that it was perhaps recently fixed, but copying many small files used to be multiple times faster via something like Total Commander than via the built-in File Explorer (large files go equally fast).

People seeing how slowly Explorer copied would probably presume it was a lower-level Windows issue if they already had a bias against Microsoft/Windows.

My theory about Explorer's sluggishness is that they added visual feedback to the copying process at some point, and for whatever reason that visual feedback is synchronous/slow (perhaps capped at the framerate, thus 60 files a second), whilst TC does its updating in the background and just renders status periodically while the copying thread(s) run at the full speed of whatever the OS is capable of under the hood.
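A toy sketch of the decoupled approach I'm describing (pure speculation on my part; not how Explorer or TC are actually implemented, and all paths and intervals are made up): the copy thread only bumps a counter, and a separate thread renders status on its own schedule, so UI updates can never throttle the copy.

    # Toy sketch: decouple copying from progress reporting.
    # Purely illustrative; paths and intervals are made up.
    import shutil
    import threading
    import time
    from pathlib import Path

    def copy_tree(src: Path, dst: Path, counter: dict) -> None:
        # Copy files as fast as the OS allows; only bump a counter.
        for f in src.rglob("*"):
            if f.is_file():
                target = dst / f.relative_to(src)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
                counter["done"] += 1  # cheap; no UI work on this thread

    def report(counter: dict, stop: threading.Event) -> None:
        # Render status periodically instead of once per file.
        while not stop.is_set():
            print(f"\rcopied {counter['done']} files", end="", flush=True)
            time.sleep(0.5)

    if __name__ == "__main__":
        counter = {"done": 0}
        stop = threading.Event()
        threading.Thread(target=report, args=(counter, stop), daemon=True).start()
        copy_tree(Path("src_dir"), Path("dst_dir"), counter)
        stop.set()
        print(f"\rcopied {counter['done']} files - done")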


I dunno about Windows Explorer, but macOS's Finder seems to hash completed transfers over SMB (this must be something it can trigger the receiver to do in SMB itself; it doesn't seem slow enough for the sender to be doing it on a remote file) and remove transferred files that don't pass the check.

I could see that or other safety checks making one program slower than another that doesn’t bother. Or that sort of thing being an opportunity for a poor implementation that slows everything down a bunch.


A problem with Explorer, one it shares with macOS Finder[1], is that they are very much legacy applications with features piled on top. Explorer was never expected to be used for heavy I/O work and tends to do things the slowest way possible, including doing things in ways that are optimized for the "random first-time user of Windows 95 who will have maybe 50 files in a folder".

[1] Finder has parts that show continued use of code written for MacOS 9 :V


This blows my mind. $400B in annual revenue and they can't spare the few parts per million it would take to spruce up the foundation of their user experience.

This is speculation based on external observation, nothing internal other than rumours:

A big chunk of that, and one that has grown over the last decade, is fear of breaking compatibility, or otherwise a drop in shared knowledge. To the point that the more critical the part, the less anyone wants to touch it (I've heard that ntfs.sys is essentially untouchable these days, for example).

And various rules that used to be sacrosanct are no longer followed, like the "main" branch of the Windows source repository having to build cleanly every night (fun fact: Microsoft is one of the origins of nightly builds as a practice).


It's probably a vicious cycle.

Fewer people are trusted to touch ntfs.sys due to lack of experience, so they never gain it, which in turn means less work gets done on it, which in turn means even fewer people have proved themselves trustworthy enough to work on it.

Until nobody remains in the company that is trusted enough.


> to bash MS products.

Microsoft gives them a lot of ammo. While, as I said, Microsoft et al. have seen to it that SMB is indeed efficient, security has at the same time been neglected to the point of being farcical. You can see this in headlines as recent as last week: Microsoft is only now, in 2025, deprecating RC4 authentication, and this includes SMB.

So while one might leverage SMB for high throughput file service, it has always been the case that you can't take any exposure for granted: if it's not locked down by network policies and you don't regularly ensure all the knobs and switches are tweaked just so, it's an open wound, vulnerable to anything that can touch an endpoint or sniff a packet.


Agreed, but that used to be the difference between MS and Google.

MS would bend backwards to make sure those enterprise Windows 0.24 boxes will still be able to connect to networks because those run some 16bit drivers for CNC machines.

Meanwhile Google decided to kill a product the second whoever introduced it on stage walked off it.

Azure is a money-maker for MS, and it wouldn't be without those weird legacy enterprise deployments. The big question is whether continuing to tighten their security posture together with a "cloud" focus is actually in their best interest, or if retaining those legacy enterprises would have been smarter.


I have a cheap samsung from 5 years ago that pops up a dialog when it boots. I've never read it or agreed to it. It goes away after about 5 seconds. After that I stream using HDMI and all is well. It's also never been connected to a network.

Can't say what other TVs do, but this one works fine without TOS etc. If there is some feature or other that doesn't work due to this, I can say I've never missed it.


In Lansing, it was below freezing and windy most of the day. If I noticed 100 people standing around on the pavement for hours in that, I'd probably imagine they deserved at least some regard for their concerns. But then, I'm not a Michigan politician that needs to get gamer Johnny out of my basement and on to a cushy non-profit no-show kickback job, courtesy of whatever big tech outfit wants a data center.

> Quiet red areas rolling over to Flock

What is funding all those Flock reps jetting around BFE to dazzle and kickback the boomer city managers and county commissioners of deep red littleville America? Is it the 2 cameras in Big Rapids MI or the 2425[1] cameras in Detroit metro?

The "roll over" that mattered has already been secured.

[1] https://deflock.me


Do flock reps even need to fly out? They have massive contracts with the Walmarts of the world and the underlying commercial property owners. You don’t need to have a rep when it’s already in your area.


> Does it apply to completely novel tasks? No, that would be magic.

Are there novel tasks? Inside the limits of physics, tasks are finite, and most of them are pointless. One can certainly entertain tasks that transcend physics, but that isn't necessary if one merely wants an immortal and indomitable electronic god.


Within the context of this paper, novel just means anything that’s not a vision transformer.


"Perl6/Raku killed Perl."

Perl was effectively "dead" before Perl 6 existed. I was there. I bought the books, wrote the code, hung out in #perl and followed the progress. I remember when Perl 6 was announced. I remember barely caring by that time, and I perceived that I was hardly alone. Everyone had moved on by then. At best, Perl 6 was seen as maybe Perl making a "come back."

Java, and (by extension) Windows, killed Perl.

Java promised portability. Java had a workable cross-platform GUI story (Swing). Java had a web story with JSP, Tomcat, Java applets, etc. Java had a plausible embedded and mobile story. Java wasn't wedded to the UNIX model, and at the time, Java's Windows implementation was at least as good as its non-Windows implementations, if not better. Java also had a development budget, a marketing budget, and the explicit blessing of several big tech giants of the time.

In the late 90's and early 2000's, Java just sucked the life out of almost everything else that wasn't a "systems" or legacy big-iron language. Perl was just another casualty of Java. Many of the things that mattered back then either seem silly today or have been solved with things other than Java, but at the time they were very compelling.

Could Perl have been saved? Maybe. The claims that Perl is difficult to learn or "write only" aren't true: Perl isn't the least bit difficult. Nearly every Perl programmer on Earth is self-taught, the documentation is excellent, and Google has been able to answer any basic Perl question one might have for decades now. If Perl had somehow bent itself enough to make Windows a first-class platform, it would have helped a lot. If Perl had provided a low-friction, batteries-included de facto standard web template and server integration solution, it would have helped a lot as well. If Perl had a serious cross-platform GUI story, that would have helped a lot too.

To the extent that the Perl "community" was somehow incapable of these things, we can call the death of Perl a phenomenon of "culture." I, however, attribute the fall of Perl to the more mundane reason that Perl had no business model and no business advocates.


Excellent point in the last paragraph. Python, JavaScript, Rust, Swift, and C# all have/had business models and business advocates in a way that Perl never did.


Do you not think O'Reilly & Associates fits some of that role? It seemed like Perl had more commercial backing than the other scripting languages at that point, if anything. Python and JavaScript were picked up by Google, but later. Amazon was originally built out of Perl. Perl never converted its industry footprint into that kind of advocacy; I think some of that is also culture-driven.


Maybe until the 2001 O'Reilly layoffs. Tim hired Larry for about 5 years, but that was mostly working on the third edition of the Camel. A handful of other Perl luminaries worked there at the same time (Jon Orwant, Nat Torkington).

When I joined in 2002, there were only a couple of developers in general, and no one sponsored to work on or evangelize any specific technology full time. Sometimes I wonder if Sun had more paid people working on Tcl.

I don't mean to malign or sideline the work anyone at ORA or ActiveState did in those days. Certainly the latter did more work to make Perl a first-class language on Windows than anyone. Yet that's very different from a funded Python Software Foundation or Sun supporting Java or the entire web browser industry funding JavaScript or....


Thanks for the detailed reply. Yes, the marketing budget for Java was unmatched, but to my eye they were in retreat towards the enterprise datacentre by 2001. I don't think the Python foundation had launched until 2001. Amazon was migrating off Perl and Oracle. JavaScript only got interesting after Google Maps/Wave, I think; arguably the second browser war started when Apple launched Safari, late 2002.

So, I guess the counterfactual line of enquiry ought to be why Perl didn't, couldn't, or didn't want to pivot towards stronger commercial backing sooner.


"I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines"

Since k8s is very effective at running a bunch of containers across a few machines, it would appear to be exactly the correct thing to reach for. At this point, running a small k8s operation, with k3s or similar, has become so easy that I can't find a rational reason to look elsewhere for container "orchestration".


I can only speak for myself, but I considered a few options, including "simple k8s" like [Skate](https://skateco.github.io/), and ultimately decided to build on uncloud.

It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.

For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.


> For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run or do anything at scale, everywhere said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everywhere said I should just use kubernetes.

Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.

And now, here we are, again, lots of people saying "just use k8s for everything".

I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".

It's like the Javascript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or the me problem angle: I'm just an aged engineer moaning at having to learn new solutions to old problems.


It’s a nice portable target, with very well defined interfaces. It’s easy to start with and pretty easy to manage if you don’t try to abuse it.


I mean, the real answer is that it got easy to deploy k8s, so the justification for not using it kinda vanished.


How many flawless, painless major version upgrades have you had with literally any flavor of k8s? Because in my experience, that’s always a science experiment that results in such pain people end up just sticking at their original deployed version while praying they don’t hit any critical bugs or security vulnerabilities.


I’ve run Kubernetes since 2018 and I can count on one hand the times there were major issues with an upgrade. Have sensible change management and read the release notes for breaking changes. The amount of breaking changes has also gone way down in recent years.


Same. I think maybe twice in that time frame we've had a breaking change, and those did warn us for several versions. Typically the only "fix" we need to apply is changing the API version on objects that have matured beyond beta.


I applaud you for having a specific complaint. 'You might not need it', 'it's complex', and 'for some reason it bothers me' are all vibes-based whinges, and they're so abundant. But with nothing specific, there's nothing contestable.


My home lab has grown over the years, now consisting of a physical Proxmox cluster, and a handful of servers (RaspPi and micro hosts). A couple years back I got tired of failures related to host-level Docker issues, so I got a NAS and started using NAS storage for everything I could.

I also re-investigated containerization - weighing Docker Swarm vs K3s - and settled on Docker Swarm.

I’ve hated it ever since. Swarm is a PITA to use and has all kinds of failure modes that are different than regular old Docker Compose.

I’ve considered migrating again - either to Kubernetes, or just back to plain Docker - but haven’t done it. Maybe I should look at Uncloud?


100%. I’m really not sure why K8S has become the complexity boogeyman. I’ve seen CDK apps or docker compose files that are way more difficult to understand than the equivalent K8S manifests.


Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).

With k8s you write a bunch of manifests that are 70% repetitive boilerplate. But then there's something you need that cannot be achieved with pure manifests, so you reach for Kustomize. But Kustomize actually doesn't do what you want, so you need to convert the entire thing to Helm.

You also still need to spin up your k8s cluster, which itself consists of half a dozen pods just so you have something where you can run your service. Oh, you wanted your service to be accessible from outside the cluster? Well, you need to install an ingress controller in your cluster. Oh BTW, the nginx ingress controller is now deprecated, so you have to choose from a handful of alternatives, all of which have certain advantages and disadvantages, and none of which are ideal for all situations. Have fun choosing.


Literally got it in one, here. I’m not knocking Kubernetes, mind, and I don’t think anyone here is, not even the project author. Rather, we’re saying that the excess of K8s can sometimes get in the way of simpler deployments. Even streamlined Kubernetes (microk8s, k3s, etc) still ultimately bring all of Kubernetes to the table, and that invites complexity when the goal is simplicity.

That’s not bad, but I want to spend more time trying new things or enjoying the results of my efforts than maintaining the underlying substrates. For that purpose, K8s is consistently too complicated for my own ends - and Uncloud looks to do exactly what I want.


> Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).

And if you want to use more than one machine then you run `docker swarm init`, and you can keep using the Compose file you already have, almost unchanged.

It's not a K8s replacement, but I'm guessing for some people it would be enough and less effort than a full migration to Kubernetes (e.g. hobby projects).


This is some serious rose colored glasses happening here.

If you have a service with a simple compose file, you can have a simple k8s manifest to do the same thing. Plenty of tools convert right between the two (incl kompose, which k8s literally hands you: https://kubernetes.io/docs/tasks/configure-pod-container/tra...)

Frankly, you're messing up by including kustomize or helm at all in 80% of cases. Just write the yaml (agreed, it's tedious boilerplate - the manifest format is not my cup of tea) and be done with the problem.

And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).

You don't need to touch an ingress until you actually want external traffic using a specific hostname (and optionally tls), which is... the same as compose. And frankly - at that point you probably SHOULD be thinking about the actual tooling you're using to expose that, in the same way you would if you ran it manually in compose. And sure - arguably you could move to gateways now, but in no way is the ingress api deprecated. They very clearly state...

> "The Ingress API is generally available, and is subject to the stability guarantees for generally available APIs. The Kubernetes project has no plans to remove Ingress from Kubernetes."

https://kubernetes.io/docs/concepts/services-networking/ingr...

---

Plenty of valid complaints for K8s (yaml config boilerplate being a solid pick) but most of the rest of your comment is basically just FUD. The complexity scale for K8s CAN get a lot higher than docker. Some organizations convince themselves it should and make it very complex (debatably for sane reasons). For personal needs... Just run k3s (or minikube, or microk8s, or k3d, etc...) and write some yaml. It's at exactly the same complexity as docker compose, with a slightly more verbose syntax.

Honestly, it's not even as complex as configuring VMs in vsphere or citrix.
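To make that concrete, here's a hedged sketch using the official kubernetes Python client instead of raw YAML (names, image, replica count, and ports are all hypothetical) - one Deployment plus one NodePort Service, which is roughly the moral equivalent of a one-service compose file with a published port:

    # Sketch: a Deployment plus a NodePort Service via the official
    # kubernetes Python client. Names, image, and ports are hypothetical;
    # the same two objects are normally written as two short YAML manifests.
    from kubernetes import client, config

    config.load_kube_config()  # reuses your existing kubeconfig (e.g. from k3s)

    labels = {"app": "web"}

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.27",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]),
            ),
        ),
    )

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            type="NodePort",
            selector=labels,
            ports=[client.V1ServicePort(port=80, target_port=80, node_port=30080)],
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
    # The app is now reachable on port 30080 of any node in the cluster,
    # much like publishing a port in a compose file.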


> And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).

https://kubernetes.io/docs/concepts/services-networking/serv...

Might need to redefine the port range from 30000-32767. Actually, if you want to avoid the ingress abstraction and maybe want to run a regular web server container of your choice to act as it (maybe you just prefer a config file, maybe that's what your legacy software is built around, maybe you need/prefer Apache2, go figure), you'd probably want to be able to run it on 80 and 443. Or 3000 or 8080 for some other software, out of convenience and simplicity.

Depending on what kind of K8s distro you use, thankfully not insanely hard to change though: https://docs.k3s.io/cli/server#networking But again, that's kind of going against the grain.


If you just want to do development, honestly it's probably better to just use kubectl port-forward (ex - map 3000, or 8080, on your machine to any service/pod you'd like).

As for grabbing 443 or 80, most distros support specifying the port in the service spec directly, and I don't think it needs to be in the range of the reserved nodeports (I've done this on k3s, worked fine last I checked, which is admittedly a few years ago now).

As you grow to more than a small number of exposed services, I think an ingress generally does make sense, just because you want to be able to give things persistent names. But you can run a LONG way on just nodeports.

And even after going with an ingress - the tooling here is pretty straight forward. MetalLB (load balancer) and nginx (ingress, reverse proxy) don't take a ton of time or configuration.

As someone who was around when something like a LAMP stack wasn't "legacy", I think it's genuinely less complicated to set up than those old configurations. Especially because once you get it right in the yaml, recreating it is very, very easy.


It's not the manifests so much as the mountain of infra underlying it. k8s is an amazing abstraction over dynamic infra resources, but if your infra is fairly static then you're introducing a lot of infra complexity for not a ton of gain.

The network is complicated by the overlay network, so "normal" troubleshooting tools aren't super helpful. Storage is complicated by k8s wanting to fling pods around so you need networked storage (or to pin the pods, which removes almost all of k8s' value). Databases are annoying on k8s without networked storage, so you usually run them outside the cluster and now you have to manage bare metal and k8s resources.

The manifests are largely fine, outside of some of the more abnormal resources like setting up the nginx ingress with certs.


Managing hundreds or thousands of containers across hundreds or thousands of k8s nodes has a lot of operational challenges.

Especially in-house on bare metal.


But that's not what anyone is arguing here, nor what (to me it seems, at least) uncloud is about. It's about a simpler HA multinode setup with single- or low-double-digit numbers of containers.


> I’m really not sure why K8S has become the complexity boogeyman.

Was what i was responding to. It's not the app management that becomes a pain, it's the cluster management, lifecycle, platform API deprecations, etc.


Which is fine, because it absolutely matches the result.

You would not be able to operate hundreds or thousands of nodes of any kind without operational complexity, and k8s helps you a lot here.


Talos has made this super easy in my experience.


I don't think that argument matches with their "just need to run a bunch of containers across a few machines"


That’s awesome if k3s works for you, nothing wrong with this. You’re simply not the target user then.


Perhaps it feels so easy given your familiarity with it.

I have struggled to get things like this stood up and have hit many footguns along the way.


If you already know k8s, this is probably true. If you don't it's hard to know what bits you need, and need to learn about, to get something simple set up.


you could say that about anything…


I don't understand the point? You can say that about anything, and that's the whole reason why it's good that alternatives exist.

The clear target of this project is a k8s-like experience for people who are already familiar with Docker and docker compose but don't want to spend the energy to learn a whole new thing for low stakes deployments.


Uncloud is so far away from k8s, it's not k8s-like.

A normal person wouldn't think 'hey lets use k8s for the low stakes deployment over here'.


>A normal person wouldn't think 'hey lets use k8s for the low stakes deployment over here'.

I'm afraid I have to disappoint you


k3s makes it easy to deploy, not to debug any problems with it. It's still essentially adding a few hundred thousand lines of code to your infrastructure, and if it's a small app you need to deploy, also wasting a bit of RAM.


"not to debug any problems with it"

K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.

"It's still essentially adding few hundred thousand lines of code into your infrastructure"

Sure. And they're all there for a reason: it's what one needs to orchestrate containers via an API, as revealed by a vast horde of users and years of refinement.


> K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.

...the fact that it's still k8s, which is a mountain of complexity compared to nearly anything else out there?


Except it isn't just "a way to run a bunch of containers across a few machines".

It seems that way, but in reality "resource" is a generic concept in k8s. K8s is a management/collaboration platform for "resources" and everything is a resource. You can define your own resource types too. And who knows, maybe in the future these won't be containers or even Linux processes? Well, it would still work given this model.

But now, what if you really just want to run a bunch of containers across a few machines?

My point is, it's overcomplicated and abstracts too heavily. Too smart, even... I don't want my co-workers to define our own resource types; we're not a Google-scale company.


Indeed, it seems a knee-jerk response without justification. k3s is pretty damn minimal.


Merely an anecdote: I had one female house cat that clearly understood a number of words. She could easily and consistently pick out "catnip" in a sentence. "Cow", "get up", "tuna" and several other words and phrases were all understood.

This is unique in my personal experience. I haven't seen this in other cats.


"didn't support object tagging"

Thanks for pointing that out.

