> We use third-party services to send a registration code via SMS or voice call in order to verify that the person in possession of a given phone number actually intended to sign up for a Signal account. This is a critical step in helping to prevent spam accounts from signing up for the service and rendering it completely unusable—a non-trivial problem for any popular messaging app.
I'm not sure why you need to assume that it will be linked back to your real identity; I haven't seen anything that indicates any motivation to do something like that. I'm all for being cautious, but being overly cynical can lead to letting the perfect be the enemy of the good.
For the spam part, I commented below on how that doesn’t work and doesn’t even make sense for a messaging app.
> I'm not sure why you need to assume that it will be linked back to your real identity;
I’m not assuming. Only North America (edit: and some European countries) doesn’t require an ID for a phone number (1), and even there, you would use that number with other services linked to your real ID, like banks or paying the phone bill online. It simply boils down to this: as soon as you find an account’s phone number, it’s game over for that account’s privacy.
> It simply boils down to this: as soon as you find an account’s phone number, it’s game over for that account’s privacy
You completely misunderstand what kind of privacy Signal aims to achieve. Signal protects you from eavesdropping and data hoarding, two major privacy issues with solutions like Facebook Messenger for example.
They do not, and have never, claimed to offer a service where “privacy” means nobody knows who anyone is. It isn’t Tor, and I wouldn’t want it to be.
If you don’t like the goals and design choices of Signal, just use another service.
There are benefits to the choices they’ve made, namely ensuring that most users of the service are “real people”, which I think is great. It’s not a social network; it’s a messaging app between friends that solves issues presented by alternatives like SMS or Instagram. That’s it.
It's a lot less like data hoarding than keeping a separate copy of your social graph. What is an adversary going to do with a list of phone numbers that are known to have signal accounts and nothing else?
Hoarding =/= collecting the bare necessities. Signal needs one piece of data to distinguish users from each other, and collects that. Hoarding would be to collect (significantly) more pieces of identifying data, more than needed to distinguish users. Signal does not appear to be doing that.
Because they don’t know anything except the phone number, so all they have is a list of phone numbers that people may be using. Quite different from Facebook reading everything you send, for example.
And how do these black markets connect the phone numbers to names? I guess from data collected from more insecure sources. So I think Signal is being responsible with their data.
Also, you need some way to log in to your account, so you need an identifier and some way to validate that you are the owner of that identity. On top of that, you want to prevent spam. So the choice of a phone number as the identifier for a text-messaging app that is meant to be a secure replacement for SMS is not that weird.
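Phone-number verification of this kind is conceptually simple. Here is a toy sketch of the idea (a hypothetical illustration, not Signal's actual implementation — the function names, the 6-digit code, and the 10-minute expiry are all made up):

```python
import secrets
import time

# Hypothetical in-memory store of pending verifications: number -> (code, expiry)
pending = {}

def start_verification(phone_number):
    """Generate a one-time code for this number; in reality it would be sent via SMS/voice."""
    code = f"{secrets.randbelow(10**6):06d}"  # random 6-digit code
    pending[phone_number] = (code, time.time() + 600)  # valid for 10 minutes
    return code

def check_verification(phone_number, submitted_code):
    """True only if the submitted code matches and hasn't expired."""
    code, expires = pending.get(phone_number, (None, 0))
    return submitted_code == code and time.time() < expires
```

Possessing the code proves control of the number at signup time, which is what makes bulk spam registration expensive.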
But let's say they are data hoarding our phone numbers, and they can get other details about us through the black market because we use other more insecure services where we suddenly don't seem to care about privacy. Then what do you think Signal does with this data? They can't resell it because they don't have anything unique, they actually need to invest money to link their database of just phone numbers to something else. And then? What malicious things will they be able to do?
Ok, now you have a list of people's names and you know they have signal installed. Google and Apple also have this (presuming you installed it via a mobile app store). Your carrier has this (from the IP addresses on your messages).
What have you gained? What does the attack look like?
> On the opposite end of the spectrum, users who want to live on the edge can enable an optional setting that allows them to receive incoming “sealed sender” messages from non-contacts and people with whom they haven’t shared their profile or delivery token. This comes at the increased risk of abuse, but allows for every incoming message to be sent with “sealed sender,” without requiring any normal message traffic to first discover a profile key.
By default, the first message between someone and you clearly identifies who is communicating with whom. That's enough.
We don't know whether an intelligence agency is listening in on their servers and logging this data.
Assuming an eavesdropper that can defeat TLS or is listening via DMA attacks on the signal servers,
- you can log initial signup or login, which allows you to connect user id and phone number
- you can log the first time a chat is created, which allows you to build a social graph of which person is connected to which other people
- even with sealed sender, you still know the identity of the receiver and the IP address of the sender, which is often enough to figure out who is in contact with whom
This would be enough dragnet surveillance to automatically figure out the contacts of people you've already identified as threats. You'd also have enough evidence to get a sealed court order to do targeted surveillance on these people.
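The second and third bullets amount to a simple join between two logs. A toy sketch, with made-up phone numbers and IPs, of what that dragnet reconstruction could look like for such an eavesdropper:

```python
from collections import defaultdict

# Hypothetical log of sealed-sender traffic: the recipient is visible to the
# server, and the sender's IP is visible at the transport layer.
observed = [
    ("203.0.113.5", "+15551230001"),   # (sender IP, recipient number)
    ("203.0.113.5", "+15551230002"),
    ("198.51.100.7", "+15551230001"),
]

# Hypothetical signup/login log: which IP registered which phone number.
ip_to_number = {
    "203.0.113.5": "+15551230003",
    "198.51.100.7": "+15551230004",
}

# Joining the two logs yields an edge list, i.e. a social graph.
graph = defaultdict(set)
for ip, recipient in observed:
    sender = ip_to_number.get(ip)
    if sender:
        graph[sender].add(recipient)
```

After the join, `graph` maps each identified sender to the set of people they contact, which is exactly the contact-discovery capability described above.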
The news today is a step in the right direction for sure, but more needs to be done if they want more privacy and anonymity-focused people to use it. This section on what makes a good messaging platform still resonates: https://dessalines.github.io/essays/why_not_signal.html#what...
As long as you can’t host and use your own server, you should never assume that.
> There are benefits of the choices they’ve made, namely ensuring that most users of the service are “real people”
You communicate with your colleagues and clients over email and you know they are real; you probably play games and use Discord too, and you know those people are real. Meanwhile, you can be talking to a bot on Twitter that was registered with a “real” phone number.
Focus on the issue, not the person (Tucker). You might not trust a person, which is fair, but you are still trusting Signal’s server: you can NEVER know whether it has a memory-injection backdoor running on it. You can audit the code as much as you want, and it will still pass, yet the messages are compromised.
There are ways of getting messages without breaking Signal or using a backdoor. One of them is getting the messages from the other party(ies) involved. You can't protect yourself from this even if you self host. Something else that might happen is you ending up with your phone hacked because you're talking with someone close to Putin.
The only way to know for sure is for you to create an alternative service, write all code yourself, and host everything without ever leaving your server alone. And even then you can't be sure you haven't been hacked.
On a side note, if we're getting information from someone who lies a lot and often leaves out details that don't fit the narrative, then perhaps we should also look at the person, not just the issue.
> One of them is getting the messages from the other party(ies) involved. You can't protect yourself from this even if you self host.
You certainly can; disappearing messages are one of the ways. Sure, it’s not a complete solution, as you need to make sure the OS itself is secure too, but it definitely helps in that case: no messages stored at rest, and all of them encrypted in transit.
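The core of the disappearing-messages idea can be sketched in a few lines (a hypothetical toy model, not Signal's actual implementation — the class and method names are made up):

```python
import time

class DisappearingInbox:
    """Toy model: each message carries a TTL and is purged once it expires."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.messages = []  # list of (expiry_timestamp, text)

    def receive(self, text, now=None):
        now = time.time() if now is None else now
        self.messages.append((now + self.ttl, text))

    def read(self, now=None):
        """Drop expired messages, then return what's left."""
        now = time.time() if now is None else now
        self.messages = [(exp, t) for exp, t in self.messages if exp > now]
        return [t for _, t in self.messages]
```

The point of the argument above is the purge step: once the TTL passes, the other party's device no longer holds the plaintext at rest, so there is nothing for a later compromise of that device to recover.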
> Something else that might happen is you ending up with your phone hacked
Which is why it’s essential to have a messaging platform that allows multiple clients and cross-platform use: running the app on a hardened OS then becomes a real option, compared to being limited to, say, iOS plus a phone number.
> write all code yourself, and host everything without ever leaving your server alone.
You don’t need to write it yourself, as long as you can read it, and host it knowing no other services are spying on that server; that should be miles ahead of other apps like Signal. Sure, that server can still be breached, but an attacker first needs to know where the server is, or even that you are using this messaging app at all. With Signal, by contrast, all I need to check whether you use it is your phone number. Not to mention it would make spying harder if most people ran their own instances. That’s a bit of a dream, since the average person won’t, but at least the option should be provided.
Signal makes the app open source, and you can build it yourself and use it. The messages are E2EE, so we don't need to trust the server in the same way, because they aren't being decrypted there; the server can't have the key. They could be logging the messages and metadata, but that's a different argument, and it really would come down to the NSA being able to break AES with a quantum computer (though I don't think that was out at the time). So I have pretty good reason to trust Signal despite there still being some gray areas that I'd want more light on. It's just that, where the shadows are, I'm unconvinced they could undermine the whole system. You can't fit an elephant in the shadow of a mouse.
On the other hand, Tucker isn't even being consistent in his telling of the story. He says that he hasn't told anyone, and makes a big deal of even mentioning his wife, so not even his closest confidants. But then what message did he send over Signal that was extracted? The personal notes? There are also much more reasonable pathways for the NSA to get that information. If he's researching and just storing notes in Signal, he's still leaving breadcrumbs somewhere. He's a popular news host, so I'd be surprised if the NSA hasn't tried to compromise his whole phone, and Signal only protects your messages in transit. The only evidence we have is his word that someone from the NSA told him, which itself would be really weird because it'd completely undermine that capability. Imo a more likely explanation is that someone is lying. Gov does disinformation all the time, and convincing people a secure channel isn't secure seems pretty useful, since they'll turn to easier methods.
So I don't have to rely on my distrust of Tucker or his history of misinformation. If this were my first and only encounter, there would be more than enough in just his telling to make me suspicious.
Neither Signal nor Telegram allows you to pay a small amount in cryptocurrency to prove you are not a spammer. This suggests that they are really interested in knowing who their users are.
Sure, but that means your phone number is linked to your identity even without Signal. There's no additional data that Signal links to it, other than that you're a Signal user and when you sent your last message.
Your previous question was "I'm not sure why you need to assume that it will be linked back to your real identity?"
If it's not possible to buy a phone without a strong attestation of identity, as is the general case in at least one country, then the identity relationship is baked in.
It's probably possible to buy a burner phone even in South Korea. But for those who are using their standard-issue phone with Signal, the problem most certainly exists.
And even in countries where there isn't some national phone-as-identifier policy, effectively most people's phone numbers tie them to their real-space identity even if there's no explicit personal data association[1], and in most cases, phone number, IMEI, AAID, and/or billing data (credit card payment authorisation) provide far greater assurance.
The point remains that 33 bits will identify any given person among the 8 billion now living, and a phone number itself, plus ancillary leakage (activity patterns, location), is an exceptionally poor basis for an anonymous or pseudonymous identifier.
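The 33-bit figure is just the base-2 logarithm of the world population:

```python
import math

# ~33 bits of information are enough to single out one person among
# the ~8 billion now living, since 2**33 ≈ 8.6 billion.
world_population = 8_000_000_000
bits_needed = math.log2(world_population)  # ≈ 32.9
```

Every extra identifying attribute an observer learns (area code, activity hours, coarse location) contributes some of those bits, which is why a phone number plus ancillary leakage de-anonymizes so quickly.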
From https://signal.org/blog/signal-is-expensive/
> We use third-party services to send a registration code via SMS or voice call in order to verify that the person in possession of a given phone number actually intended to sign up for a Signal account. This is a critical step in helping to prevent spam accounts from signing up for the service and rendering it completely unusable—a non-trivial problem for any popular messaging app.