
I've built a lot of API backends with Perl and FCGI::ProcManager, letting nginx (and Apache HTTPd in the past) front everything. For me it has been a pleasantly simple, incredibly robust and high-performing setup with no mess to speak of.
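
For anyone curious what that pattern looks like, here's a minimal sketch of a pre-forking FCGI::ProcManager worker listening on a Unix socket for nginx to proxy to. The socket path, worker count and response are placeholder assumptions, not taken from any real backend of mine.

    use strict;
    use warnings;
    use FCGI;
    use FCGI::ProcManager;

    # Pre-fork a small pool of workers; nginx talks to the shared Unix socket.
    my $pm     = FCGI::ProcManager->new({ n_processes => 4 });
    my $socket = FCGI::OpenSocket('/var/run/myapp.sock', 128);   # example path
    my $req    = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $socket);

    $pm->pm_manage();
    while ($req->Accept() >= 0) {
        $pm->pm_pre_dispatch();
        print "Content-Type: application/json\r\n\r\n";
        print '{"status":"ok"}';
        $pm->pm_post_dispatch();
    }

On the nginx side, a plain location block with fastcgi_pass unix:/var/run/myapp.sock; (plus the stock fastcgi_params include) is all it takes to front it.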

With the exception of the summed size of selected items, the Finder has all of that. Help yourself to the "View->Show Status Bar" menu option. Also, "View->Show View Options->Calculate All Sizes" to show storage sizes for directories.

Netatalk has been around for like 25 years: https://github.com/Netatalk/Netatalk

Relevant to the discussion is that the project comes with an AFP client as well. I have no experience with the client but I've used the Netatalk server for more than 15 years.


I've already built it: https://github.com/jamesyc/TimeCapsuleSMB

This runs Samba 4 on the Apple Time Capsule.


Something quite acidic after the meal works great for me. I prefer a small glass of water with two tablespoons of apple cider vinegar. It's not tasty but it's just one quick swig, and as a bonus fermented beverages/foods are very beneficial.

OP was making a sarcastic joke, but nobody bothers to read the second paragraph, which makes that clear.

What you're seeing are the speeds of the various tiers of cache (RAM, intermediate SLC, etc.). The drive cannot write to its main flash memory that fast. While to the user it looks like they just wrote 10 GiB in a single second, the SSD is internally still busy for another 10 seconds persisting that data. The actual sustained write speed of top-shelf consumer-grade SSDs these days is somewhere in the vicinity of 1.5 GiB/s, and most models top out at half of that or less.
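
Back-of-the-envelope, assuming the sustained figures mentioned above (illustrative numbers, not measurements):

    # Rough sketch: how long the drive keeps flushing after an "instant" burst write.
    my $burst_gib     = 10;     # what the benchmark appeared to write in ~1 second
    my $sustained_gib = 1.0;    # assumed true flash write speed in GiB/s
    printf "drive stays busy for roughly %.0f more seconds\n",
           $burst_gib / $sustained_gib;    # ~10 s at 1 GiB/s, ~7 s at 1.5, ~13 s at 0.75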

Those comments are about the 25-year-old RTL8139, one of the world's first highly affordable and fully integrated Fast Ethernet controllers, which ended up on pretty much every motherboard. Contrary to all of the aged complaints about the RTL8139, I ran several of them on OpenBSD (and Windows) for close to ten years with no problems at all.

> Tangentially related but regarding RSA and ECC... With RSA can't we just say: "Let's use 16 384 bit keys" and be safe for a long while?

That's correct. The quantum computer needs to be "sufficiently larger" than your RSA key.

> Basically: for RSA and ECC, is there anything preventing us from using keys 10x bigger?

For RSA things get very unwieldy (but not technically infeasible) beyond 8192 bits. For ECC there are different challenges, some of which have nothing to do with the underlying cryptography itself: one good example is how the OpenSSH team still haven't bothered supporting Ed448, because they consider it unnecessary.
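
To get a feel for the "unwieldy" part, a quick timing sketch like the one below (using Crypt::OpenSSL::RSA, purely illustrative) shows how key generation blows up with size; on ordinary hardware the 16384-bit case can take minutes, and every signature and handshake gets slower too.

    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use Crypt::OpenSSL::RSA;

    # Illustrative only: time RSA key generation at increasing sizes.
    for my $bits (2048, 4096, 8192, 16384) {
        my $t0  = time();
        my $rsa = Crypt::OpenSSL::RSA->generate_key($bits);
        printf "%5d bits: %.1f s\n", $bits, time() - $t0;
    }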


I wonder when the OpenSSH developers will change their stance on Ed448.

I'm not familiar with their stance, but bear in mind the cost that introducing a new key type imposes on the ecosystem and on the maintenance of SSH implementations.

Imagine if we had applied the same hesitant, cost-first reasoning to Ed25519, and then again to ML-KEM and SNTRUP.

I didn't suggest cost-first.

Do you suppose the OpenSSH maintainers considered the cost when implementing those algorithms? Perhaps they did, but decided the benefits were worth it.


What does ed448 mitigate against vs ed25519?

The simplified answer is: larger keys that demand a far larger effort to break, similar to RSA-4096 vs RSA-2048.

The predicted timelines for quantum computing advances (and the requirements for practical applications) have shrunk dramatically in the past 15 years. What used to be a no-later-than-2035 recommendation for getting off e.g. RSA-2048 in good time is today no-later-than-2030. The acceptance of 256-bit curves for ECDSA/ECDH was already supplanted by 384-bit curves years ago.

In the absolutely ground-shaking event that a future application of quantum computation somehow manages to cut Ed448's equivalent security of ~224 bits in half, exploring even a small portion of a 112-bit space would still cost more electrical energy than we can possibly provide.
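
For a sense of scale (rough numbers; the energy-per-guess and world-electricity figures below are assumptions, not measurements):

    # Back-of-the-envelope: energy to sweep a 112-bit keyspace.
    my $guesses        = 2 ** 112;     # ~5.2e33
    my $joules_per_try = 1e-9;         # assume 1 nJ per guess -- absurdly generous
    my $world_joules   = 1e20;         # annual world electricity, order of magnitude
    printf "full sweep: %.1e J, roughly %.0f years of world electricity output\n",
           $guesses * $joules_per_try,
           $guesses * $joules_per_try / $world_joules;

Even a thousandfold improvement in energy per guess still leaves you at roughly fifty years of the planet's entire electricity output.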


The whole point is that RSA and ECDH can't be made safe against quantum computers by making the keys bigger. The speedup is exponential, so breaking a 4096-bit key is only twice as hard as a 2048-bit key. "Cutting the key size in half" is true in principle and in general (but not in practice, as the article points out), but for some algorithms it's much worse.

Just to be clear, I'm not advocating for Ed448 for the KEX - we already have ML-KEM and SNTRUP in OpenSSH and everyone should start using those. I'm advocating for Ed448 DSA ("SSH pubkey").

Surely you must've noticed that pretty much all of their bare metal offerings ("dedicated" and the stuff on "auction") have multiple disks, allowing for various RAID configurations?

> Surely you must've noticed that pretty much all of their bare metal offerings ("dedicated" and the stuff on "auction") have multiple disks, allowing for various RAID configurations?

I don't know where to start with this comment. Do I really need to spell out the difference between cloud and bare metal?

A few examples...

    - Live migration? Cloud only.
    - Snapshots? Cloud only.
    - Want to increase disk space? Tick box in cloud vs. replace disks (or move to a different machine) and re-install/restore on bare metal.
    - Want to increase RAM? Tick box in cloud vs. shut down, pull out of the rack, install new chips (or move to a different machine and re-install/restore).
    - Want to upgrade to a beefier processor? Tick box in cloud vs. move to a completely different machine and re-install/restore.

You can get snapshots and live migrations working on-prem. The cloud isn't magic, it's just servers with hypervisors and software running on top of them. You can run that same software.

Also, with something like Hetzner you would not be going in and physically doing anything. You also just tick a box for a RAM upgrade, and then migrate over or do an active/passive switch.

The cloud does have advantages, mostly in how "easy" it is to do some specific workflows, but per-compute it's at least 10x the cost. Some will argue it's less than that, but they forget to factor in just how slow virtual disks and CPUs are. The cloud only makes sense for very small businesses, where the operational cost of colocation or on-prem hosting is too expensive.


cloud vs bare metal is:

are you a capable engineer or do you believe in magic?

the savings of a cheap engineer disappear on the cloud bill. get a badass well paid engineer who can do both and doesn't talk his way out of this financial madness


> get a badass well paid engineer who can do both

Well, fine, but it's abundantly clear that this blog post was not written by a "badass well paid engineer".

The person who wrote that blog post was clearly unaware of the trade-offs of the decisions he was making.


Well, you did say your data is lost when a disk fails, which is not true. The parent pointed that out for you.

Yeah you pay for and get additional stuff with cloud. Nobody disputed that.


> Well you did say your data is lost when a disk fails, which is not true.

Well, technically it's still a possibility.

I am old enough to have seen issues with RAID1 setups not being able to restore redundancy, as well as RAID controller failures and software RAID failures.

Also, frankly you are being somewhat pedantic. My broader point was regarding cloud. I gave HD Failure as one example, randomly selected by my brain ... I could have equally randomly chosen any of the other items ... but this time, my brain chose HD.

