For sure, the second one is answered - it is possible to parallelize GNNs to billion-scale graphs while still using message passing. It requires rethinking how message passing is implemented, modifying objective functions so they work in parallel, and changing ML infrastructure. You're not going to get to large graphs with generic distributed TensorFlow (see the rough sketch below).
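To make the "rethinking message passing" point a bit more concrete: the usual trick (in the GraphSAGE/PinSage family) is to sample a fixed number of neighbors per node and run message passing on minibatches rather than the full graph, so each worker only ever touches a bounded subgraph. A minimal NumPy sketch of that idea, where the toy graph, fanout, and aggregation are all illustrative rather than any particular library's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: adjacency list mapping node -> neighbor ids (illustrative only).
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
feats = rng.normal(size=(4, 8))   # node features
W = rng.normal(size=(16, 4))      # one layer's weights (self + neighborhood -> output)

def sample_neighbors(node, fanout):
    """Sample up to `fanout` neighbors so per-node work stays bounded."""
    nbrs = adj[node]
    k = min(fanout, len(nbrs))
    return rng.choice(nbrs, size=k, replace=False)

def sage_layer(batch, fanout=2):
    """One GraphSAGE-style layer on a minibatch: sample, aggregate, transform."""
    out = []
    for v in batch:
        nbrs = sample_neighbors(v, fanout)
        agg = feats[nbrs].mean(axis=0)           # mean-aggregate neighbor messages
        h = np.concatenate([feats[v], agg]) @ W  # combine self and neighborhood
        out.append(np.maximum(h, 0.0))           # ReLU
    return np.stack(out)

# Each worker can process its own minibatch independently; only the sampled
# neighborhoods' features need to be fetched, never the whole graph.
print(sage_layer([0, 3]).shape)  # (2, 4)
```

The point of the sketch is just that per-example work is bounded by the fanout, which is what lets you shard the graph and scale out, rather than materializing full-graph adjacency on every worker.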
I don't know if the third question is fully answered, but there are many approaches to preserving locality, either by changing architectures or changing objective functions.
Also, erratum: PinSage was developed for Pinterest, not Etsy (hence, not EtsySage).