
I will trust independent audits of local code and local hardware. There are still plenty of opportunities for someone to ship malicious patches, but the code that actually runs on my machine can (and probably will) be analysed by journalists looking for a scoop and security researchers looking for a bug bounty.

I have no idea what code is running on a server I can't access. I can't exactly SSH into siri.apple.com and match checksums. Knowing Apple's control-freak attitude, I very much doubt any researcher permitted to look at their servers will be very independent either.
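For local code, that check is at least possible. Here's a minimal sketch (in Python, with a hypothetical auditor-published digest standing in for a real one) of what "matching checksums" means for a binary you actually have:

    import hashlib

    # Hypothetical value an independent auditor would publish for a reviewed build.
    PUBLISHED_SHA256 = "<auditor-published hex digest>"

    def sha256_of(path: str) -> str:
        """Hash the binary you actually run, in 1 MiB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # The whole verification is one comparison:
    # sha256_of("/path/to/binary") == PUBLISHED_SHA256

No analogous command exists for someone else's server; all you ever see are its outputs.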

Apple is just as privacy-friendly as ChatGPT or Gemini. That's not necessarily a bad thing! AI requires feeding lots of data into the cloud; that's how it works. Trying to sell their service as anything more than that is disingenuous, though.



> I have no idea what code is running on a server I can't access.

That's like... the whole point? You have some kind of hardware-based measured boot that can provide a cryptographic attestation that the code the server is running is the same code that's been reviewed by an independent auditor. If the auditor confirms that the data isn't being stored, just processed and thrown away, that's almost as good as on-device compute for 99.999% of users. (On-device compute can also be backdoored, so you have to extend similar trust even when everything is local.)
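To make that concrete, here's a minimal sketch of the client-side check such a scheme implies, assuming the attestation is a hardware-signed (measurement, nonce) pair and that auditors publish the measurements of builds they've reviewed. All names and the message format are illustrative, not anything Apple has confirmed:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Measurements (hashes of the booted software stack) that independent
    # auditors have reviewed and vouched for. The value below is a placeholder.
    AUDITED_MEASUREMENTS = {bytes.fromhex("aa" * 32)}

    def verify_attestation(vendor_key: Ed25519PublicKey, measurement: bytes,
                           nonce: bytes, signature: bytes) -> bool:
        """True iff the attestation is signed by the hardware vendor's key
        and the reported measurement matches an audited build."""
        try:
            # The signed payload must bind the measurement to our fresh nonce,
            # otherwise the server could replay an old attestation from an
            # honest build while actually running something else.
            vendor_key.verify(signature, measurement + nonce)
        except InvalidSignature:
            return False
        return measurement in AUDITED_MEASUREMENTS

The nonce is what ties the attestation to the current session; the allowlist is where the independent auditor comes in, since the hardware signature only proves what is running, not that what is running is trustworthy.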

The presentation was fairly detail-light so I don't know if this is actually what they're doing, but it's nice to see some effort in this direction.

E: I roughly agree with this comment (https://news.ycombinator.com/item?id=40638740) further down the thread -- what exactly the auditors are verifying is the important bit.


I do like Apple's attempts to make this stuff better for privacy, but a pinky promise not to leak any information is still just that.

Apple has developed some of the strongest anti-tampering hardware in existence to prevent people from running code Apple doesn't want on the hardware it produces. However, that protection is useless when it comes to protection *from* Apple: they have the means to bypass any layer of protection they've built into their own hardware.

It all depends on what kind of auditing Apple will allow. If Apple allows anyone to run this stuff on any Mac, with source or at least symbols available, I'll give it the benefit of the doubt. If Apple comes up with NDAs and limited access, I won't trust them at all.


Exactly. Apple faces barely any oversight or accountability for its privacy claims. It's sad to see so many people taking those claims at face value.



