> Unfortunately, no matter how hard you try, there is a certain percentage of nodes for whom hole punching will never work. This is because their NAT behaves in an unpredictable way.
Or they are centrally/corporate-controlled and do not allow hole punching.
UDP-based protocols are well suited for P2P: hole punching is straightforward if you have predictable port mapping, and you cannot disallow it. In that spirit, we are currently exploring this with:
https://github.com/tbocek/qotp and https://github.com/qh-project/qh
The main idea is to have simple encryption (Ed25519 with ChaCha20-Poly1305) at the transport layer, and then qh on top of that, where certs are used for signing content.
With out-of-band key exchange, you can establish a connection after you have successfully punched a hole.
However, it's not QUIC-compatible in any way (https://xkcd.com/927)
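For readers unfamiliar with the mechanics being discussed, the hole-punching call pattern can be sketched in a few lines of Python. This is a loopback demo, so no real NAT is involved; the point is the shape of the exchange: each peer sends an outbound datagram to the other's out-of-band-exchanged address, which on a real NAT creates the mapping that lets the peer's packets back in.

```python
import socket

def punch(sock, peer_addr, payload=b"punch"):
    # The outbound datagram may be dropped by the peer's NAT, but sending
    # it opens the local NAT mapping so the peer's datagrams can get in.
    sock.sendto(payload, peer_addr)

# Two UDP sockets standing in for the two peers (loopback, no real NAT).
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(1.0)
b.settimeout(1.0)

# These addresses would normally be learned out of band (rendezvous server,
# manual exchange, etc.), not from getsockname().
addr_a, addr_b = a.getsockname(), b.getsockname()

punch(a, addr_b)  # A punches toward B
punch(b, addr_a)  # B punches toward A

# After both punches, traffic flows in both directions.
msg_at_b, _ = b.recvfrom(2048)
msg_at_a, _ = a.recvfrom(2048)
```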
The https://github.com/qh-project/qh link doesn't work, for what it's worth.
Isn't the concept of a TURN server from RFC 5766 a solution to this problem?
You can't disallow hole punching.
> You can't disallow hole punching.
Try doing it over a network that only allows connections through a SOCKS/Squid proxy, or on a network that uses CG-NAT (i.e., double-NAT).
See also:
> UDP hole punching will not work with symmetric NAT devices (also known as bi-directional NAT) which tend to be found in large corporate networks. In symmetric NAT, the NAT's mapping associated with the connection to the known STUN server is restricted to receiving data from the known server, and therefore the NAT mapping the known server sees is not useful information to the endpoint.
* https://en.wikipedia.org/wiki/UDP_hole_punching#Overview
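The quoted behavior boils down to a simple check, which can be sketched as follows (terminology loosely follows RFC 4787; the function name and data shapes are illustrative, not from any real STUN library): query two different servers from the same local socket and compare the external (ip, port) each one observed.

```python
def classify_nat_mapping(mapping_seen_by_server1, mapping_seen_by_server2):
    """Classify NAT mapping behavior from the external (ip, port) pairs
    that two different STUN-like servers observed for the same local socket.

    Endpoint-independent mapping: the same external address is reused for
    every destination, so telling a peer about it enables hole punching.
    Endpoint-dependent ("symmetric") mapping: a fresh mapping is made per
    destination, so the address a STUN server saw tells a peer nothing useful.
    """
    if mapping_seen_by_server1 == mapping_seen_by_server2:
        return "endpoint-independent"
    return "endpoint-dependent (symmetric)"
```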
From TFA:
"Unfortunately, no matter how hard you try, there is a certain percentage of nodes for whom hole punching will never work. This is because their NAT behaves in an unpredictable way. While most NATs are well-behaved, some aren’t. This is one of the sad facts of life that network engineers have to deal with."
In this scenario, the article goes on to describe a conventional relay-based approach.
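The relay fallback is conceptually tiny: a publicly reachable host remembers both peers' addresses and forwards each datagram to the other side. Here's a toy sketch of that idea (illustrative only, not the article's actual design, which is closer to TURN):

```python
import socket
import threading
import time

def start_relay(host="127.0.0.1"):
    """Start a toy UDP relay for exactly two peers.

    The first datagram from each new address registers that peer; once two
    peers are known, every datagram is forwarded to the *other* peer.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, 0))
    peers = []

    def loop():
        while True:
            data, addr = sock.recvfrom(2048)
            if addr not in peers and len(peers) < 2:
                peers.append(addr)
            if len(peers) == 2 and addr in peers:
                other = peers[1] if addr == peers[0] else peers[0]
                sock.sendto(data, other)

    threading.Thread(target=loop, daemon=True).start()
    return sock.getsockname()

# Demo: two clients that can only reach each other via the relay.
relay_addr = start_relay()
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.settimeout(2.0)
b.settimeout(2.0)

a.sendto(b"hello-from-a", relay_addr)  # registers A; nothing forwarded yet
time.sleep(0.2)                        # let A's registration land first
b.sendto(b"hello-from-b", relay_addr)  # registers B; forwarded on to A

msg_at_a, _ = a.recvfrom(2048)
a.sendto(b"ping", relay_addr)          # both peers known: forwarded to B
msg_at_b, _ = b.recvfrom(2048)
```

The obvious downside, and the reason relays are a fallback rather than the default, is that all traffic transits a third party, costing bandwidth and latency.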
I would guess that most consumer routers are very cooperative as far as hole punching because it's pretty critical functionality for bittorrent and many online games. Corporate firewalls wouldn't be as motivated to care about those use-cases or may want to actively block them.
I hope some day the browser's webtransport also gets p2p support.
It seemed like there was such a good, exciting start, but the spec has been dormant for years. https://github.com/w3c/p2p-webtransport
It is halfway there arguably, and libp2p does make use of it - https://docs.libp2p.io/concepts/transports/webtransport/
Unlike WebSockets, you can supply a "cert hash", which makes it possible for the browser to establish a TLS connection with a client that doesn't have a certificate signed by a traditional PKI provider, or even a domain name. This property is immensely useful because it lets browsers establish connections to any known non-browser node on the internet, including from secure contexts (i.e., from an https page, where you can't establish a ws:// connection; only wss:// is allowed, but you need a 'real' TLS cert for that).
Someone correct me if I'm wrong, but I think p2p-webtransport was superseded by "webtransport" (https://github.com/w3c/webtransport). Supposedly, the WebTransport design should be flexible enough to support P2P, even though the focus is traditional server<->client.
The story here is a bit complicated. WebTransport is, in some sense, an evolution of RTCQuicTransport API, which was originally meant to solve the issues people had with SCTP/DTLS stack used by RTCDataChannel. At some point, the focus switched to client-server use cases, with an agreement that we can come back to the P2P scenario after we solve the client-server one.
Superseded? No. WebTransport was already well on its way to approval when p2p-webtransport was created.
WebTransport as a protocol certainly could be used for P2P, but the browser APIs aren't there: hence p2p-webtransport was created, to allow its use beyond traditional server<->client.
Any UDP protocol can be made P2P if it can be bidirectionally authenticated.
For TCP-based protocols it's very hard, since there is no reliable way to hole punch NATs and stateful firewalls with TCP.
Maybe success rates are higher with UDP – I don't know. But it certainly works to hole punch with TCP as well. If you're lucky you can even run into a rare condition called "TCP simultaneous open", where both sides believe they are the dialer.
> where both sides believe they are the dialer.
First time I've heard about this, so I went looking for more. Came across https://news.ycombinator.com/item?id=5969030 (95 points, July 1, 2013, 49 comments), which had a bunch of background info and useful discussion.
It can be done, but it's less reliable and also requires the ability to forge packets, which is not allowed on all platforms. So it's hard to use in any production application if you want it to run in user space, on Windows, or on mobile.
Wait, how does that work? QUIC REQUIRES CA TLS for all endpoints. So you can do the discovery/router workarounds, but then the person trying to connect to you with QUIC won't be able to unless you have a signed corporate CA TLS cert. I guess you could integrate some Let's Encrypt ACME v2 periodic-renewal scheme into your P2P program, but that's getting pretty complex and fragile. And it also provides a centralized way for anyone who doesn't like your P2P tool to legally/socially pressure it to shut down.
> QUIC REQUIRES CA TLS for all endpoints
No. QUIC requires TLS. TLS just provides a way to move certificates, but doesn't care what a "certificate" actually is. A JPEG of your 10m swimming certificate from school? Sure, that's fine.
The endpoints get to decide which certificates to accept. In practice, in a web browser and many other modern programs, that'll be some sort of X.509 certificate more or less following PKIX; on the public internet it's usually the Web PKI, a PKI operated on behalf of the Relying Parties (literally everybody) by the Trust Stores (in practice the OS vendors, plus Mozilla for the free Unix systems). But none of that is defined by QUIC.
OK, so you need to trust each other's certs. What's the big deal? Presumably you already have some other channel to share addresses, so you can also share temporary self-signed certs for this purpose.
What prevents you from just using certificates not signed by a CA and verifying them based on the public key fingerprint?
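Nothing, in principle. The check can be sketched in a few lines (a simplification: this hashes the whole DER-encoded certificate, whereas real pinning schemes often hash just the SubjectPublicKeyInfo; the function names are illustrative):

```python
import hashlib
import hmac

def cert_fingerprint(cert_der: bytes) -> str:
    # SHA-256 over the DER-encoded certificate; peers exchange this value
    # out of band, alongside their addresses.
    return hashlib.sha256(cert_der).hexdigest()

def verify_peer_cert(cert_der: bytes, pinned_fingerprint: str) -> bool:
    # Constant-time comparison, so the check doesn't leak matching prefixes.
    return hmac.compare_digest(cert_fingerprint(cert_der), pinned_fingerprint)

# Demo with placeholder bytes standing in for a real DER certificate.
fake_cert = b"not-a-real-der-certificate"
pinned = cert_fingerprint(fake_cert)
ok = verify_peer_cert(fake_cert, pinned)
bad = verify_peer_cert(b"some-other-cert", pinned)
```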
A good time to mention that the P2P Yggdrasil network uses QUIC/TLS with self-signed certs, but then runs its own encryption over that. You can add as many peers as desired, and the network will automatically choose the best path (by latency). So no multi-pathing, but it gets around the issue of changing IP addresses/network locations. Plus, it's able to use multicast to find peers on your LAN without a centralized control server. I'm actually getting better speeds than WireGuard over my LAN - but this is a stable link. Once you start sending Yggdrasil packets over long, unstable links you may run into funky issues like TCP-in-TCP head-of-line blocking, but they try to mitigate this with huge MTU sizes and packet-dropping algorithms. (https://yggdrasil-network.github.io/2018/08/19/congestion-co...)
https://yggdrasil-network.github.io/documentation.html
I'm currently working on creating a managed Yggdrasil relay node service. A feature I hope they implement is QUIC multistream support.
The existing WebTransport API implemented in all browsers actually supports you providing the fingerprint of a certificate that can be self-signed.
I'm working with QUIC in a personal project. While you can roll your own QUIC library, the spec is large enough that it's quite a bit of work to implement it yourself. Most libraries allow you to pass in your own certificates. Realistically, you could just bake certs into your program and call it a day. Otherwise, yes, you can implement your own cert logic that ignores certs altogether. s2n-quic, for example, specifically allows for both, though the former is much easier to do.
I guess most if not all QUIC endpoints you come across on the internet will have encryption, as the specification requires it. But if you control both ends, say you're building a P2P application that happens to use QUIC, I don't think there is anything stopping you from using an implementation of QUIC that doesn't require it, or from using something other than TLS, even if the specification says you must have it.
Just as long as you statically build and ship your application, because I guarantee the QUIC libs in $distro are not going to be compiled with the experimental flags to make this possible. You'll be fighting QUIC all the way to get this to work. It's the wrong choice for the job. Google did not design QUIC for human use cases, and the protocol design reflects this.