
Copaganda is indeed a good book, recommend.


If you decide not to use a forwarder, the DNS server will be truly independent.

The DNS server will contact the root servers for the TLD nameservers of a domain, then the TLD nameservers, and then the actual authoritative nameserver for the particular domain.

No forwarder needed.

This means you bypass any DNS-based filtering a DNS ‘forwarder’ may have in place.
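
If you want to watch that walk happen, `dig +trace` does the same iteration from the root down (example.com is just a placeholder name):

    # Follow the delegation chain: root servers -> .com TLD servers -> example.com's nameservers
    dig +trace example.com A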


I've always felt it makes sense to either use a forwarder you trust or just operate the root zone yourself. Going to the root zone dynamically is certainly the most technically correct, but if your goals involve either "independence" or "retaining some measure of the performance of using forwarders while still resolving things directly yourself", then you can just pull the root zone daily and operate your own root server (https://www.iana.org/domains/root/files). Of course, IANA would rather you just use DNS in as technically correct a way as possible because, well, that's what they exist for, but they don't attempt to roadblock operating your own copy of the root.
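
As a rough sketch of what "pull the root zone daily" can look like (the URL, paths and reload command below are assumptions; use whatever file that IANA page currently points at and whatever your nameserver expects):

    # Daily cron entry: fetch a fresh copy of the root zone, then reload it in the nameserver
    0 4 * * *  curl -so /var/named/root.zone https://www.internic.net/domain/root.zone && rndc reload .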

It's hard to go much deeper than that in practice, as the zone files for TLDs are massively larger, massively more dynamic (i.e. syncing once a day isn't usually enough), and much harder to get ahold of (if at all, sometimes).

Regardless of how you go about not using a forwarder, if that's the path you choose then I also heavily recommend setting up some additional things like prefetching of cached entries, so recently used entries that are about to expire don't cause "hitches" in latency.
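
For example, with unbound as the resolver (just one possibility; the option names below are from its config format), prefetching is a couple of lines:

    server:
        prefetch: yes        # re-fetch popular cache entries before they expire
        prefetch-key: yes    # do the same for DNSKEYs used in DNSSEC validation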


There's an unofficial list of the ones that one can officially replicate.

* https://news.ycombinator.com/item?id=44318136

There are actually several additional subdomains of arpa., not on that list, that one can also replicate; they are largely invariant.

And really it's not about technical correctness. It has been known how to set up private roots since the 20th century. Some of us have had them for almost that long. Even the IETF has now, glacially slowly, come around to the view that the idea is a good one, and there is now an RFC on the subject.

The underlying problem for most of that time has been that they're difficult to do with BIND, or at least a lot more difficult than with other content DNS server software, if one clings, as exhibited even here in the headlined article, to a single server vainly wearing all of the hats at once.

All of the people commenting here that they use unbound and nsd, or dnscache and tinydns, or PowerDNS and the PowerDNS Recursor, have already overcome the main BIND Think obstacle that makes things difficult.


Fantastic all-in-one resource!

It's technically incorrect in that IANA would like your DNS server to use the DNS protocol's built-in system of record querying and expiry rather than pull a static file on your own interval (IIRC the root servers don't support AXFR, for performance reasons?), since there is no predefined fixed schedule for root zone updates. Practically, root zone changes are absolutely glacial and minuscule (the "real" root servers only get 1-2 updates per day anyway), so pulling the file once per day is effectively good enough that you'll never care it's not how DNS intends you to get the record updates.

Setting this up in bind should be no more difficult than adding a `zone "."` entry pointing to this file; the named.conf need not be more than ~a dozen lines long. It's easy to make a bind config complicated (much like this article), but I'm not sure that was the barrier, versus just being comfortable enough with DNS to be aware the endeavour is even something one could seek to do, let alone set out to do.
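
A minimal sketch of what that could look like (file names and paths are assumptions, and newer BIND versions also offer a "mirror" zone type for this instead of serving the file as a plain primary):

    options {
        directory "/var/named";
        recursion yes;        // still a normal recursive resolver below the root
    };

    zone "." {
        type master;          // serve the locally downloaded copy of the root zone
        file "root.zone";     // refreshed on your own schedule, e.g. via a daily cron job
    };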


The root servers themselves generally don't support AXFR, but if you want to AXFR the root zone, you can do so from lax.xfr.dns.icann.org or iad.xfr.dns.icann.org.
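
So getting a copy is a one-liner:

    # Transfer the full root zone from one of ICANN's public AXFR servers
    dig @lax.xfr.dns.icann.org . AXFR > root.zone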


Root hints are enough for most use cases. In 30 years of running my own DNS servers, I never once needed to replicate the root zone. Unless you have a totally crap internet connection, you're not going to notice those extra lookups.


I used to do that, but it has the downside of sending all your DNS queries unencrypted over the network. By using a forwarder you have the option to use DoT or DoH.
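
For example, with unbound as the resolver (one option among several; the upstream shown is only an illustration), forwarding everything over DoT looks roughly like this:

    server:
        tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt  # CA bundle used to verify the upstream
    forward-zone:
        name: "."
        forward-tls-upstream: yes                # send queries over TLS (DoT, port 853)
        forward-addr: 9.9.9.9@853#dns.quad9.net  # address@port#tls-auth-name of the upstream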


There is work coming at the IETF to help with this.

- Draft: DELEG (a new way of doing delegations, replacing the NS/DS records).

- A draft to follow: Using the extensible mechanisms of DELEG to allow you to specify alternative transports for those nameservers (eg: DoH/DoT/DoQ).

This would allow a recursive server to make encrypted connections to everything it talks to (that has those DELEG records and supports encrypted transports) as part of resolution.

Of course, traffic analysis still exists. If you are talking to the nameservers of bigtittygothgirls.com, and the only domains served by those name servers are bigtittygothgirls ...


After reading the article I come to the conclusion that this was never really about water. There wasn't even a water shortage, only a technical issue that would be resolved.

This was about some people on the water board not being able to manage angry, semi-aggressive people properly.

And now those people can irrigate their lawns while others can't even drink, wash or cook.


I agree, none of this was really necessary. The board/city was broadly within their rights not to sell (and down the road, as water supplies become more strained they may in fact not have extra water to sell).

But the board handled the situation very poorly. The job of being on a board like this is often to sit patiently while people complain and perhaps yell. Try to keep things calm and moving along, let everyone make their statement.

They should have just accepted the feedback and then scheduled discussion on various options for some future meeting, with a final vote even further out.



I read 'Nuclear Batteries' and the first thing I think about is the "Lia radiological accident", where three men were exposed and one died.

https://en.wikipedia.org/wiki/Lia_radiological_accident

This incident happened in the country of Georgia, which was part of the Soviet Union, which probably already hints at the root cause of this incident (they lost track of the devices).

Also: Just because you can, doesn't mean you should.


oh man i read that medical report [1] a while ago. nightmare fuel, with nightmare pictures included.

https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1660web-81...


There’s something especially bad about radiological burns. We don’t necessarily know severe damage is being done; there is no feedback loop to even tell us we should get away. And beyond the metaphysical and psychological aspects, for me they just look wrong.


Oh wow. What a read. Thank you.


The Wikipedia article links to the IAEA recovery video where you can more or less see everything: https://www.youtube.com/watch?v=BE5T0GkoKG8


I find the article a difficult read for someone not versed in “confidential computing”. It feels like it was written for insiders and/or people smarter than me.

However, I feel that “confidential computing” is some kind of story to justify something that’s not possible: keep data ‘secure’ while running code on hardware maintained by others.

Any kind of encryption means that there is a secret somewhere, and if you have control over the stack below the VM (hypervisor/hardware) you’ll be able to read that secret and defeat the encryption.

Maybe I’m missing something, though I believe that if the data is critical enough, it’s required to have 100% control over the hardware.

Now go buy an Oxide rack (no I didn’t invest in them)


The unique selling point here is that you don't need to trust the hypervisor or operator, as the separation and per-VM encryption is managed by the CPU itself.

The CPU itself can attest that it is running your code and that your dedicated slice of memory is encrypted using a key inaccessible to the hypervisor. Provided you still trust AMD/Intel to not put backdoors into their hardware, this allows you to run your code while the physical machine is in possession of a less-trusted party.

It's of course still not going to be enough for the truly paranoid, but I think it provides a neat solution for companies with security needs which can't be met via regular cloud hosting.


The difference between a backdoor and a bug is just intention.

AMD and Intel have both certainly had a bunch of serious security-relevant bugs, like Spectre.


How can I believe the software is running on the CPU and not with a shim in between that exfiltrates data?

The code running this validation itself runs on hardware I may not trust.

It doesn’t make any sense to me to trust this.


The CPU attests what it booted, and you verify that attestation on a device you trust. If someone boots a shim instead then the attestation will be different and verification will fail, and you refuse to give it data.


That creates a technical complexity I still don't trust, because I don't see how you can be sure data isn't being exfiltrated just because the boot image is correct.

If you don't control the hardware, you're trusting them blindly.


You're right, it is complex; but it's a 'chain of trust' where each stage is in theory fairly easy to verify. That chain starts with the firmware/keys in the CPU itself, so you have a chain from CPU -> CPU firmware -> vTPM -> guest BIOS -> guest OS (probably some other bits). Each one is measured or checked, and at the end you can check the whole chain. Now, if you can tamper with the actual CPU itself you've lost, but someone standing with an analyzer on the bus can't do anything, and no one with root or physical access to the storage can do anything. (There have been physical attacks on older versions of AMD's SEV, of which the most fun is a physical attack on its management processor, so it's still a battle between attackers and improved defences.)
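
As a toy illustration of the general shape (nothing AMD/Intel specific, just the "measure each stage before handing over control" idea, sketched in Python):

    import hashlib

    def extend(register: bytes, component: bytes) -> bytes:
        # Measurement registers are extended, never overwritten:
        # new_value = H(old_value || H(component))
        return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

    # The platform measures each boot stage in order...
    measured = bytes(32)
    for stage in [b"cpu-firmware", b"vtpm", b"guest-bios", b"guest-os"]:
        measured = extend(measured, stage)

    # ...and the verifier, on a machine it trusts, recomputes the chain from
    # known-good images. A swapped-in shim at any stage changes the final value.
    expected = bytes(32)
    for stage in [b"cpu-firmware", b"vtpm", b"guest-bios", b"guest-os"]:
        expected = extend(expected, stage)

    assert measured == expected  # only then hand over any secrets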

[edit: Took out the host bios, it's not part of the chain of trust, clarified it's only the host CPU firmware you care about]


Hasn't that been exploited several times?


Exploited in the wild? Difficult to say, but there have been numerous vulnerabilities reported in the underlying technologies used for confidential computing (Intel SGX, AMD SEV, and Intel TDX, for example), and quite a good amount of external research and publications on the topic.

The threat model for these technologies can also sometimes be sketchy (lack of side-channel protection for Intel SGX, lack of integrity verification for AMD SEV, for example).


I don't believe so? I have no doubt that there have been vulnerabilities, but the technology is quite new and barely used in practice, so I would be surprised if there have been significant exploits already - let alone ones applicable in the wild rather than a lab.


The technology is only new because the many previous attempts were so obviously failures that they never went anywhere. The history of "confidential computing" is littered with half baked attempts going back to the early 2000s in terms of hypervisors, with older attempts in the mainframe days completely forgotten.


I saw what I thought was a nice talk a couple of years ago at fosdem introducing the topic https://archive.fosdem.org/2024/schedule/event/fosdem-2024-1...

Even when running on bare metal, I think the concept of measurements and attestations that attempt to prove the system hasn't been tampered with is valuable, unless perhaps you also have direct physical control (e.g. it's in a server room in your own building).

Looking forward to public clouds maturing their support for Nvidia's confidential computing extensions as that seems like one of the bigger gaps remaining


I don't believe in the validity of the idea of 'confidential computing' on a fundamental level.

Yes, there are degrees of risk and you can pretend that the risks of third-parties running hardware for you are so reduced / mitigated due to 'confidential computing' it's 'secure enough'.

I understand things can be a trade-off. Yet I still feel 'confidential computing' is an elaborate justification that decision makers can point to, to keep the status quo and even do more things in the cloud.


I'm a relative layman in this area, but from my understanding, fundamentally there has to be some trust somewhere, and I think confidential computing aims both to distribute that trust (split the responsibility between the hardware manufacturer and the cloud provider, though I'm aware that already sounds like a losing prop if the cloud provider is also the hardware manufacturer) and to provide a way to verify it's intact.

Ultimately it's harder to get multiple independent parties to collude than a single entity, and for many threat models that's enough.

Whether today's solutions are particularly good at delivering this, I don't know (slides linked in another comment suggest not so good), but I'm glad people are dedicating effort to trying to figure it out


If you get it right (and damn, you really need to ask your cloud provider to prove they have...), you don't need to trust the cloud provider in this model at all. In reality, most of the provided systems do trust the provider somewhere, but only to the level of some key store or something in the back, not the people in the normal data centres.


Well, there have been some advances in the space of homomorphic encryption, which I find pretty cool: it's encryption that does not require the secret in order to work on the data. Sadly, the operations that are possible are limited and quite performance-intensive.


I’m not anxious about rapid technological change.

I care about the fact that technology is used to undermine democracy and destroy social cohesion.


Yeah but like 23 dudes can have more money than god, so this is a moral imperative.


They got $100M from USIT, which seems to be owned by Thomas Tull

https://en.wikipedia.org/wiki/Thomas_Tull


The iostat command has always been important for observing HDD/SSD latency numbers.

SSDs especially are treated like magic storage devices with infinite IOPS at Planck-scale latency.

Until you discover that SSDs that can do 10 GB/s don't do nearly so well (not even close) when you access them from a single thread with random I/O at a queue depth of 1.
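
A quick way to see this for yourself with fio (the flags are the usual ones for this kind of test; point --filename at a scratch file or a disposable device before running):

    # Single-threaded 4K random reads at queue depth 1, bypassing the page cache
    fio --name=qd1-randread --filename=/path/to/testfile --size=4G \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
        --runtime=30 --time_based --group_reporting

Compare the reported IOPS and latency percentiles with the headline sequential numbers and the gap is striking.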


That's where you start down the eBPF rabbit hole with bcc/biolatency and other block-device histogram tools. Further, the cache hit rate and block-size behavior of the SSD/NVMe drive can really affect things if, say, your autonomous-vehicle logging service uses MCAP with a chunk size much smaller than a drive block... Ask me how I know
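
For example (assuming the bcc tools are installed; the command is packaged as biolatency-bpfcc on some distros):

    # Print a histogram of block I/O latency every second, ten times
    sudo biolatency 1 10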


Thanks for sharing.

I came to a similar conclusion: TinyMiniMicro 1L PCs are in many ways a better option than Raspberry Pis. Or any mini PC with an Intel N-series CPU.

