This is a great idea. I now exclusively use SSH keys on hardware security modules of some kind. I use "Secretive", a Mac app that does the same, plus a yubikey using yubikey-agent (https://github.com/FiloSottile/yubikey-agent; there are too many complicated ways to use SSH keys with a yubikey, and this is one of the friendliest ones). The sensitivity of the service and how often I access it determine whether I require presence confirmation, and whether I use Secretive or the yubikey.
I would be remiss not to mention that there are existing SSH TPM projects; I'm not sure how this one differentiates itself. It does seem to keep the user experience pretty simple, similar to yubikey-agent (and Secretive), unlike some of the existing solutions, which have quite a few extra steps:
https://github.com/tpm2-software/tpm2-pkcs11/blob/master/doc...
I really love it when projects like this acknowledge that there are "competitors" or other software in the space and provide a fair comparison. It would be great to see one of those here :) For example, you can easily use SSH FIDO keys, but they aren't supported by all servers; ECDSA keys often are (though not always!), etc. :)
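To make that trade-off concrete, here's roughly what the two options look like (file names are just placeholders; the -sk key types need OpenSSH 8.2+ on both ends):

    # FIDO/U2F-backed key: the private key stays on the security key,
    # but the server must understand the sk-* signature algorithms
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

    # Plain ECDSA key: supported almost everywhere server-side, but the
    # private key sits on disk unless you hide it behind an agent/HSM
    ssh-keygen -t ecdsa -b 256 -f ~/.ssh/id_ecdsa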
It's not written in the README.md, but `ssh-tpm-agent` is very much a copy-paste of `yubikey-agent`, with a bit better testing and the yubikey parts swapped out for TPM stuff.
ykcs11 allows you to use the native SSH agent (or even no agent at all for individual ssh invocations) with an SSH key on a yubikey, using their PKCS#11 provider.
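In practice that looks something like this (the library path is illustrative and varies by distro/packaging):

    # Load the YubiKey PKCS#11 module into the running ssh-agent
    ssh-add -s /usr/local/lib/libykcs11.so

    # Or skip the agent entirely for a single invocation
    ssh -I /usr/local/lib/libykcs11.so user@host

    # Or set it per host in ~/.ssh/config
    Host example
        PKCS11Provider /usr/local/lib/libykcs11.so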
When you see a guide containing a 7-step manual for something that should be simple, it's worth taking a step back and asking whether you can make this valuable security feature more easily accessible.
Side-loading shared objects into your security-critical apps is probably something you should be more critical of as well.
are you extending this to the usage of yubikey-agent and ssh-tpm-agent as well?
both variants, whether it's a PKCS11 provider behind a standardized interface or a completely custom SSH agent, will need to deal with secret material.
although I'm no expert on the inner workings of SSH, I'd expect there to not be much difference between having the OpenSSH agent interface with ykcs11 (which is also open source and can be reviewed) and using an alternative agent with piv capabilities that was found on github.
>both variants, whether it's a PKCS11 provider behind a standardized interface or a completely custom SSH agent, will need to deal with secret material.
`ssh-tpm-agent` is not dealing with secret material. That is delegated to the TPM, and we are only in the business of telling the TPM to sign stuff for us. Yubikey-agent does create a private key on the machine before inserting it into the yubikey.
I would not call these two things equal, and there is a difference in what the potential impact is.
>although I'm no expert on the inner workings of SSH, I'd expect there to not be much difference between having the OpenSSH agent interface with ykcs11 (which is also open source and can be reviewed) and using an alternative agent with piv capabilities that was found on github.
> No, as they never get loaded into the ssh binary and are external programs communicating over an interface.
my understanding is that the same would apply if you use ykcs11 in the OpenSSH agent instead of using it directly in `ssh`, which would make this a comparison between (OpenSSH agent + YKCS11) vs e.g., ssh-tpm-agent.
from the qualys report you linked:
> Note to the curious readers: for security reasons, and as explained in the "Background" section below, ssh-agent does not actually load such a shared library in its own address space (where private keys are stored), but in a separate, dedicated process, ssh-pkcs11-helper.
additionally, as I understand it, this basically boils down to use-after-free due to unsafe code, which could occur in either agent implementation, even without loading an extra .so, although the presence of .so loading in general certainly does increase the attack surface.
> `ssh-tpm-agent` is not dealing with secret material.
while the agent is certainly not accessing the private key directly, as long as you can access the agent and make it sign whatever you want, this will still be a very valuable vulnerability, with the only downside (compared to non-HSM keys) that you won't have persistent access to the private key, only temporary access for signing.
this can also be partially mitigated by requiring user interaction for every signing operation, but it's also not necessarily something that works for all use cases, such as when connecting to a few hundred destination hosts.
> There is a separation of concerns here though.
when you compare (OpenSSH agent + YKCS11) to ssh-tpm-agent, they're both separated from the main `ssh` process and communicate through the SSH agent API.
PKCS11 allows you to use `ssh -I /path/to/lib.so` directly, for which there doesn't seem to be a comparable alternative in ssh-tpm-agent, so I'll ignore that feature for now.
>additionally, as I understand it, this basically boils down to use-after-free due to unsafe code, which could occur in either agent implementation, even without loading an extra .so, although the presence of .so loading in general certainly does increase the attack surface.
You are not going to see similar exploits in a program written in Go; it would result in a crash, not an actual code-execution issue.
So you are removing attack surface by not loading shared libraries, and quite a bit more attack surface considering it's not written in C.
> this can also be partially mitigated by requiring user interaction for every signing operation, but it's also not necessarily something that works for all use cases, such as when connecting to a few hundred destination hosts.
Allowing better UX would also allow people to adopt better and more secure defaults. The PKCS11 stuff is generally unfriendly, and using it correctly gets harder as a result.
This isn't only about attack surface, but also enabling more user friendly tooling.
What’s the threat model of TPM? They claim it’s for “physical attacks”, but they can only enforce that when there is no software vulnerability or unauthorized privileged access anywhere, so it’s a very small area of the Venn diagram: an attacker whose capability is “physical access” but who also “does not possess any exploit or one-touch access”. This narrows the list down to:
• Malicious coworkers, family members, and housekeepers, against a computer that is NEVER left unattended (i.e. screen locked when you leave 100% of the time)
• Local government agents (i.e. the local police) who can confiscate your powered-on device but cannot afford to buy third-party cracking services that utilize exploits or more advanced extraction techniques (external RAM dumping)
• A device that is completely powered off and confiscated by a powerful nation-state agent, as long as it isn't later returned to an owner who forgot to wipe it in case of implants.
In any case, the design of TPM is completely flawed and suffers from “astronaut architects”. If you grep the three-volume, 1000+ page TPM 2.0 architecture documents, you’ll not find a single mention of “threat model”.
Specifically, TPM is a multi-million-dollar industry signed onto by many tech companies (including Microsoft, who uses it as an excuse to get you to buy a new PC, because TPM == more secure, right, but also because older computers don’t support it), because places like governments and banks require it, and they also don’t understand threat models.
TPM protection is fundamentally flawed because:
• It cannot protect against a compromise in the boot chain (e.g. a UEFI driver is exploited, and it lies to TPM about the subsequent stage of code that is loaded while running a malware implant)
• It cannot protect against RCE (remote code execution). This means if Windows ever has a vulnerability that can be exploited remotely, they can keylog -> steal your PIN -> replay it later to dump the key. Or just dump the key in memory if they have a PE (privilege escalation) as well.
• It cannot protect against a user voluntarily installing malware (Bonzi Buddy?)
• It cannot protect against an attacker who installs something on your unattended computer (USB Rubber Ducky, Flipper Zero, etc)
Basically the most common ways people get compromised see no protection from TPM, while esoteric attack scenarios that no attacker will realistically attempt are protected.
TPM can never protect against these cases because it is logically (fTPM) and/or physically (dTPM) separate from the CPU. That means it cannot perform any policy enforcement against a CPU whose execution is under the control of the attacker.
TPM is not designed to prevent intrusion from hackers; it's designed to turn your general-purpose computer into an appliance by preventing you, the owner, from modifying the OS on your computer as you see fit (while still interacting with third-party services, thanks to remote attestation).
It means that instead of _just patching_ the software on your computer to customize it, you now have to resort to using 0days to do it, like a criminal, which makes it considerably harder.
It does help against hackers, of course, and the same restrictions do secure you against some attacks (evil maid attacks), but that's not the intent.
The threat model TPM protects against is:
- You log in into Netflix (or whatever)
- Netflix sends your PC the movie so you can watch it.
- Your PC now has the movie in memory.
- You extract the movie from your PC's memory and you can now watch it forever without Netflix's permission.
What the "trusted" in Trusted Platform Module means is that with TPM they can trust your PC to not let you do that.
It's a double-edged sword, right? On the one hand, Windows 12 can be completely iOS-like and you can't do anything about it. On the other hand, someone with physical access to your machine can't replace your OS that sends them your disk encryption password as soon as you get a DHCP lease.
Chrome OS really got this one right. You can disable all the security, but there is hardware that tells you that happened. It can also tell your employer so you don't download their IP to a laptop running malware. That's all it's ever been used for; no matter how much people try to make DRM a thing, it's never once worked. Every Netflix-exclusive show is easily downloadable on Usenet.
The sword analogy is getting a bit awkward here, but the people making the sword only care about how well it protects media and software, because they want their marketplaces (app/music/movie/game stores) to be as attractive as possible to media conglomerates.
I'm uncertain. Microsoft isn't making any real money off of media. They sell licenses to use Outlook and rent you some computers, and that's their income.
Of course they make money that way, because Netflix and co want DRM and they don't want you to stop watching Netflix on Windows and have to do it on your phone. So they'll go and make TPMs happen.
Also, Microsoft makes a lot of money from videogames, and TPMs help enforce microtransactions in single-player games if nothing else.
That's just a game of chicken neither player wants to admit. "Netflix drops Windows 11, Microsoft launches new subscription service," ain't gonna be good for Netflix.
It means malware can’t exfiltrate the SSH key from your machine and keep using it. But yes, malware can potentially use the key while it’s still on your machine, depending on whether presence confirmation or re-entering a credential is required. But that still closes a big gap.
On a Mac, Secretive can also pop a notification, making it more likely you’ll passively observe such usage (not foolproof, though), and the key can’t (easily; maybe with some complex exploit) be used from an app not signed by the original developer. It can also require re-entering your password. That specific protection probably isn’t possible on Linux, though.
> It means malware can’t exfiltrate the SSH key from your machine and keep using it
Are you sure about that? Presumably the secret parts of the SSH key are being read into memory at some point, so an RCE could dump the key the same way ssh-tpm-agent does.
Don't rely on a TPM to store secrets. Use a secrets store that can be audited for use and have it generate dynamic, short-lived credentials. For SSH, use SSH CAs.
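As a sketch, short-lived SSH certificates can be minted with plain ssh-keygen; in practice the CA key would sit in something like a dedicated signing service, and all the names below are purely illustrative:

    # Sign a user's public key with the CA, valid for one hour only
    ssh-keygen -s ca_key -I alice -n alice -V +1h id_ed25519.pub

    # On the server, trust the CA instead of individual keys (sshd_config)
    TrustedUserCAKeys /etc/ssh/user_ca.pub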
>Are you sure about that? Presumably the secret parts of the SSH key are being read into memory at some point, so an RCE could dump the key the same way ssh-tpm-agent does.
This is not how ssh-tpm-agent works. It does the key signing inside the TPM so you do not have access to the key on the machine itself.
The private key never hits memory or the machine itself.
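If I'm reading the project right, the flow is roughly the following; the command names come from the ssh-tpm-agent README, but the socket path is an assumption and may differ per setup:

    # Create a TPM-wrapped key; only a sealed blob ever touches disk
    ssh-tpm-keygen

    # Run the agent and point SSH at it; signing requests are forwarded to the TPM
    ssh-tpm-agent &
    export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-tpm-agent.sock"  # assumed path
    ssh user@host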
> that can be exploited remotely, they can keylog -> steal your PIN -> replay it later to dump the key. Or just dump the key in memory if they have a PE (privilege escalation) as well.
That can't happen if a TPM is used correctly. If the TPM is coupled with an OPAL-enabled SSD, for example, the actual key used for data encryption is never loaded into main memory. Sure, disk content can be read by a malicious OS, but you do not gain access to any encryption keys. Additionally, you prevent MITM attacks by using challenge-response authentication, which some TPMs support, so again, your key is never revealed.
> It cannot protect against an attacker who installs something on your unattended computer
This is a straw man, since you won't ever be able to protect against user-space malware. Apart from that, TPMs can enforce mandatory access control if supported (I know the T2 on Macs does exactly that). So no, you cannot easily install a rootkit and expect it to work.
> It cannot protect against a compromise in the boot chain (e.g. a UEFI driver is exploited, and it lies to TPM about the subsequent stage of code that is loaded while running a malware implant)
At this point we are in the realm of an attacker stealing your device, desoldering a chip, and rewriting the firmware on that chip. We're talking about _considerable_ resources spent just to trick you into thinking your device hasn't been compromised.
I know that on non-Apple devices vendors will likely use some insecure off-the-shelf solution, but if done correctly these things are as close to unbreakable as possible.
> Basically the most common ways people get compromised
The TPM spec was written by the Trusted Computing Group, and its purpose is not to protect the common user but to offer advanced security features for business clients and governments. And there, a device that gets stolen is a common occurrence.
The reason you see it integrated in consumer devices is cost reduction. It getting abused for DRM purposes is entirely the media industry's fault.
You can edit UEFI drivers from the operating system's bootloader, and you can even flash the UEFI itself from the OS on most computers, even while Secure Boot is enabled. Failing that, you can shim a preloader between the bootloader and the UEFI and load arbitrary drivers despite Secure Boot, as is done here: https://github.com/ValdikSS/Super-UEFIinSecureBoot-Disk
Any sufficiently motivated attacker can make a UEFI rootkit happen, and they're in the wild right now. TPMs really offer no protection to users, either against userspace malware or rootkits. It's purely about DRM.
Secure Boot and TPM do offer tangible security benefits, and they are security features you can take ownership of.
Secure Boot allows your own key hierarchy, and TPM allows you to take ownership.
The linked boot disk isn't really proof that Secure Boot is useless. If you don't set a MokManager password (as you should) and you change the security state of the machine while present at the keyboard, then yes, you can boot things.
This is intended to make sure people can actually decide to trust things. And having insecure defaults makes this less useful. Not very surprising.
EDIT: The bootdisk won't work with a recent shim or a recent GRUB. The old shim it was using should also be revoked on any remotely up-to-date machine.
TPMs could also prevent attacks like this on your machine.
Incidentally, I've invested quite a bit of time in making user-friendly Secure Boot tooling as well: https://github.com/Foxboron/sbctl
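For anyone curious, the typical flow looks something like this (from memory, so subcommands and the kernel path may differ between releases and distros):

    # Check the current Secure Boot and key enrollment state
    sbctl status

    # Create and enroll your own key hierarchy
    sbctl create-keys
    sbctl enroll-keys -m   # -m keeps Microsoft's keys enrolled alongside yours

    # Sign your kernel/bootloader and keep it signed across updates
    sbctl sign -s /boot/vmlinuz-linux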
In the TPM threat model, the attacker is you, and the defender is your employer who wants to make sure that you don't exfiltrate keys from your work laptop to any other (i.e. unauthorized) device.
I mean, your complaints are that it doesn't make software more secure. That's true but that's an orthogonal effort. Imagine there was a world where software was secure (they rewrote Windows in Rust or whatever); now what tools do you need to ensure that someone didn't replace your secure software with their own compromised version? (For example, how do you know that LUKS is asking for your full-disk decryption key, and not some piece of malware that some random package maintainer added to /etc/systemd?) That's the gap that the TPM is designed to fill.
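Concretely, that gap is what TPM-bound disk encryption covers: the volume key is only released if the measured boot chain matches what was enrolled, e.g. with systemd (device path and PCR selection are illustrative):

    # Enroll the LUKS volume key against the TPM, bound to Secure Boot state (PCR 7)
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

    # If someone swaps the bootloader or initrd, the PCR values change, the TPM
    # refuses to unseal, and you fall back to typing the recovery passphrase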
Meanwhile, sure, it's not going to prevent you from clicking a link in your email to fakebank.com and typing in your SSN.
I'm surprised that people are surprised that a $0.20 chip doesn't eliminate software bugs.
How many keys can I store inside a laptop's TPM? Is there a limit? I have hundreds of SSH key pairs, one for each thing I connect to. My .ssh/config file defaults to a nonexistent key for all hosts, except for hosts configured elsewhere in the file, where each host gets its own IdentityFile entry.
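Roughly this shape, in case it helps anyone; host names and key paths are of course placeholders:

    # ~/.ssh/config (sketch)
    Host github.com
        IdentityFile ~/.ssh/id_github
        IdentitiesOnly yes

    Host backupserver
        IdentityFile ~/.ssh/id_backup
        IdentitiesOnly yes

    # Everything else falls back to a key that doesn't exist, so no
    # real identity is ever offered by default
    Host *
        IdentityFile ~/.ssh/nonexistent
        IdentitiesOnly yes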
It has 6.3 KB or something of memory. So not a lot.
What `ssh-tpm-agent` does is that it simply doesn't store the keys on the TPM at all. It creates an ephemeral SRK, then loads a sealed private key back into the TPM, which allows us to use it.
So you can have a couple hundred SSH key pairs this way.
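The underlying pattern is the standard TPM parent/child key scheme. This isn't literally what ssh-tpm-agent runs, but with tpm2-tools the idea looks roughly like this, which is why the number of keys isn't bounded by TPM storage:

    # Recreate the primary (storage root) key; it's derived from a seed, not stored
    tpm2_createprimary -C o -c primary.ctx

    # Create a child key; only the wrapped blobs (key.pub/key.priv) live on disk
    tpm2_create -C primary.ctx -u key.pub -r key.priv

    # Load the wrapped key back into the TPM whenever a signature is needed
    tpm2_load -C primary.ctx -u key.pub -r key.priv -c key.ctx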
It's not terribly complex, and it's all scripted. I don't like my identity being trackable across services, or across areas of my life. For instance, dickheads have created databases of public keys scraped from services like github, which can be used to identify the vast majority of github users whenever they ssh into some other service. By default, the ssh client will try to authenticate with every available keypair until it finds a match, which can further leak information about you(r machine).
Yes, I understand the attack, but what is the threat? You are already uniquely identifying yourself to each of these servers. What is the threat to you if your identity is correlated between them? You probably have the same username on many of them, no?
It seems like privacy cosplay to me, tbh. I'm genuinely curious what type of threat it actually protects against, if any. What happens when two different systems you ssh into both know it's elric?
Apparently you can use an ssh-agent for HostKeys, and by extension ssh-keysign.
So I think this should be trivial to implement actually.
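sshd already has the config surface for this, something along these lines; the socket path is a placeholder and, as I understand it, HostKey then points at the public halves:

    # /etc/ssh/sshd_config
    # Private host keys stay behind an agent (e.g. a TPM-backed one);
    # sshd only talks to it over the agent socket
    HostKeyAgent /run/ssh-host-agent.sock
    HostKey /etc/ssh/ssh_host_ecdsa_key.pub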
It might be cool to add some attestation feature so you can verify the boot of the machine before releasing the host keys. Might be practical in scenarios where you are SSHing into an initrd or a sensitive remote host.