> IPv6 restores globally routable addresses to every node, letting peers connect without contortions.
Global routability doesn't automatically mean global reachability.
Many consumer and professional routers block inbound TCP connections, and drop incoming UDP traffic unless similar outbound UDP traffic preceded it, so you will still need hole punching.
Hole punching does get significantly easier with v6, though, since there's really only one way to do "outbound connections only" firewalling (whereas there are several ways to port translate, some of them really hostile to hole punching).
Arguably, one thing that's missing is a very simple, implicit standard for signalling a willingness to accept an inbound TCP connection from a given IP/port, which such stateful firewalls could honor the same way they already implicitly do for UDP. But with HTTP/3 running over UDP, the point may well be moot soon.
That simple, implicit standard exists since RFC793:
Simultaneous initiation is only slightly more complex, as is shown in figure 8. Each TCP cycles from CLOSED to SYN-SENT to SYN-RECEIVED to ESTABLISHED.

      TCP A                                              TCP B
  1.  CLOSED                                             CLOSED
  2.  SYN-SENT     --> <SEQ=100><CTL=SYN>                ...
  3.  SYN-RECEIVED <-- <SEQ=300><CTL=SYN>                <-- SYN-SENT
  4.               ... <SEQ=100><CTL=SYN>                --> SYN-RECEIVED
  5.  SYN-RECEIVED --> <SEQ=100><ACK=301><CTL=SYN,ACK>   ...
  6.  ESTABLISHED  <-- <SEQ=300><ACK=101><CTL=SYN,ACK>   <-- SYN-RECEIVED
  7.               ... <SEQ=101><ACK=301><CTL=ACK>       --> ESTABLISHED

                Simultaneous Connection Synchronization

                               Figure 8.
Every stateful firewall supports this. All you need to communicate off-band is IP addresses and ports.
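For anyone who hasn't seen it done, here is a minimal sketch of what a simultaneous open looks like from userspace (addresses and ports are placeholders; retries and the out-of-band exchange are omitted). Both peers bind a fixed local port and connect() to each other at roughly the same time, so the SYNs cross and each firewall only ever sees an outbound connection attempt first:

    import socket

    # Minimal sketch of a TCP simultaneous open (RFC 793, figure 8).
    # Both peers learn each other's IP/port out of band (e.g. via a
    # rendezvous server) and run this at roughly the same time.
    LOCAL = ("2001:db8::1", 5000)    # placeholder local address/port
    REMOTE = ("2001:db8::2", 5000)   # placeholder peer address/port

    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(LOCAL)          # fixed source port, so the peer knows where to aim
    s.settimeout(10)
    s.connect(REMOTE)      # if the SYNs cross, both ends reach ESTABLISHED
    print("connected to", s.getpeername())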
Huh, TIL, thank you!
Are you sure all firewalls support this? RFC 5382 seems to specify it, but then again, middleboxes aren't exactly known for strict RFC compliance...
Absolutely agreed — and this is an important distinction.
IPv6 gives you global addressability, not guaranteed reachability. Stateful firewalls still exist and inbound-by-default is still rare on consumer networks.
The I6P design explicitly assumes that reality. The motivation for being IPv6-first is not “firewalls disappear”, but that the problem space collapses from many forms of NAT and address/port translation down to mostly predictable stateful filtering.
That’s also why the transport is QUIC/UDP: firewall behavior is far more consistent, hole punching is simpler, and path changes are survivable.
So IPv6 isn’t treated as magic — it’s treated as a cleaner substrate with fewer pathological cases than IPv4 NAT.
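To make the "hole punching is simpler" point concrete, here is a rough sketch of UDP hole punching through a plain stateful firewall, assuming both peers already learned each other's address and port (placeholders below) from some rendezvous step:

    import socket
    import time

    LOCAL_PORT = 40001                   # agreed out of band
    PEER = ("2001:db8::2", 40001)        # placeholder peer address/port

    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind(("::", LOCAL_PORT))
    sock.settimeout(1.0)

    # Send first so our own firewall records an outbound flow to the peer,
    # then keep probing until the peer's probes make it through to us.
    for _ in range(10):
        sock.sendto(b"punch", PEER)
        try:
            data, addr = sock.recvfrom(1500)
            print("hole punched:", data, "from", addr)
            break
        except socket.timeout:
            time.sleep(0.5)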
This is true, but the beauty of UDP is that it's basically just a raw socket with a tiny 8-byte header slapped on top: 2 bytes for the source port, 2 bytes for the destination port, 2 bytes for length, and 2 bytes for a checksum.
You could slap a UDP header on top of the TCP header and get the benefits of TCP with the hole-punching capabilities of UDP, provided you implemented some kind of keep-alive functionality and an out-of-band way of telling the "server" to establish an outbound connection with the "client". Or use QUIC, assuming it fits the use case.
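Just to illustrate how small that header is, a sketch that packs one by hand (field values are arbitrary; the checksum is left at zero here, and note that unlike IPv4, IPv6 requires a real UDP checksum):

    import struct

    def udp_header(src_port, dst_port, payload, checksum=0):
        # 8 bytes total: source port, destination port, length, checksum,
        # each a 16-bit big-endian field. Length covers header + payload.
        return struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), checksum)

    hdr = udp_header(40000, 443, b"hello")   # arbitrary example ports
    print(len(hdr), hdr.hex())               # 8 9c4001bb000d0000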
At least there's an explicit standard for signalling: RFC 6887 Port Control Protocol. Many routers also support it.
But it's often disabled for the same reason as having router-level firewalls in the first place.
> But it's often disabled for the same reason as having router-level firewalls in the first place.
Yeah, anything that allows hosts to signal that they want to accept connections, is likely the first thing a typical admin would want to turn off.
It’s interesting because nowadays it’s egress that is the real worry. The first thing malware does is phone home to its command-and-control (C&C) address, and that connection is what’s used to actually control nodes in a botnet. Disabling ingress doesn’t really net you all that much nowadays when it comes to restricting malware.
In an ideal world we’d have had IPv6 in the ’90s, and it would have been “normal” for firewalls to be things you have on your local machine rather than at the router level, with opening ports being something the OS prompts the user about (similar to how Windows does it today with the “do you want to allow this application to listen for connections” prompt). But even if that were the case, I’m sure we would have still added “block all ingress” as a best practice for firewalls along the way regardless.
Port forwarding and hole punching have different objectives and outcomes, and I believe PCP only caters to the former.
While the outcomes might be similar (some inbound connections are possible), the scope (one specific external IP/port vs. everybody) and the semantics ("endorsement of public hosting" vs allowing P2P connections that are understood to require at least some third-party mediation) differ.
I also don't think that port forwarding is possible through multiple levels of firewalls (similar to "double NAT").
Author here.
This article focuses on the transport-layer design, not a torrent client replacement. The goal is to provide a reusable IPv6-native P2P connection layer (QUIC-based, NAT-free) that existing clients or new applications can integrate without touching their higher-level logic.
Feedback on design trade-offs is very welcome.
https://github.com/TheusHen says you're 14 years old.
The project is very impressive, as is https://github.com/TheusHen/ternary-ibex and having papers: https://orcid.org/0009-0009-5055-5884
What's the education path for a 14-year-old who does this stuff?
I don’t really follow a “standard” education path. I’ve been interested in technology for as long as I can remember and had early access to computers and the internet. I’m about to turn 15 in a few days, and I’ve been programming for almost 6 years now — I started when I was 9.
Most of what I know comes from self-study, experimentation, reading documentation, breaking things, and rebuilding them. I usually learn by doing projects rather than following a fixed curriculum. As for papers, I mostly write by organizing ideas that come to my head and then grounding them with research and practical knowledge. Lately, I’ve been considering writing one about implantable RFID microchips, just to explore the topic more deeply.
Thanks for sharing. I want to ask you something: I understand that with IPv6 the idea is that every household receives a block of IPv6 addresses, so that every single IoT device has its own unique IPv6 address and no NAT is needed.
Would it be possible to use a dozen IPv6 addresses at the same time? Like sending one UDP packet from a certain IPv6 address, the next packet from another, and so on. If both the sending and receiving ends have access to multiple IPv6 addresses, I can see how this would significantly increase the complexity of tracking.
Could you split up the traffic across dozens or hundreds of IPv6 source addresses?
> Could you split up the traffic across dozens or hundreds of IPv6 source addresses?
Yes
> I can see how this significantly increases complexity for tracking
Not really. You just track at some prefix level. In general, the ISP will hand out a /64 per consumer so that's what you can track. From there, you can build more complex and more precise grouping rules for tracking.
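A quick sketch of what "track at some prefix level" means in practice, using Python's standard ipaddress module to collapse any address from a delegated /64 down to one key:

    import ipaddress

    def tracking_key(addr, prefix_len=64):
        # Reduce an address to the prefix an ISP typically delegates to one
        # customer; different host addresses in that block map to the same key.
        return str(ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False))

    print(tracking_key("2001:db8:1234:5678::dead:beef"))  # 2001:db8:1234:5678::/64
    print(tracking_key("2001:db8:1234:5678::cafe:1"))     # 2001:db8:1234:5678::/64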
If you assign a subnet to a host, or allow the host to claim multiple addresses via ND from the link subnet, then you can use as many addresses as you want. You could give every process on your machine its own IPv6 address for example.
The biggest tracking hurdle is to figure out if the ISP that handed out the block of addresses is handing out /64s, /56s, or /48s. The network provided to you is functionally the same as the IP address assigned to you with IPv4.
In theory I could rent an IPv4 /29 (of which 6 addresses are usable) for like 20 euros a month from my home ISP to cause the same confusion but I doubt it'd confuse trackers to use those.
IIRC you could still track because all those multiple IPv6 addresses will have the same prefix.
yes - this is also part of the privacy extensions spec: https://datatracker.ietf.org/doc/html/rfc4941
It is quite easy to do in 100 lines of Python; you can even send IP packets with a faked source address.
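For the non-spoofed variant, a rough sketch of rotating across several source addresses that are actually assigned to the host (the addresses below are placeholders; truly faked source addresses would need raw sockets and are typically dropped by ISPs doing ingress filtering):

    import itertools
    import socket

    # Placeholder addresses; in practice these would all be configured on the
    # host's interface, e.g. picked from the delegated /64.
    SOURCES = ["2001:db8::10", "2001:db8::11", "2001:db8::12"]
    DEST = ("2001:db8:ffff::1", 9999)    # placeholder destination

    for src, msg in zip(itertools.cycle(SOURCES), [b"one", b"two", b"three", b"four"]):
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.bind((src, 0))       # pick the source address for this datagram
        s.sendto(msg, DEST)
        s.close()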
Yes, but realistically the guy who is tracking you tracks the first 64 bits of the address, which identify the network.
> QUIC-based, NAT-free
I realize it's intended to be an unsupported edge case but I'm curious. What happens in the event a NAT is present along the IPv6 network path? Do you just forward a port the same as you would with the various IPv4 solutions and move on? Or does it break catastrophically? Something else?
If it weren't for Internet infrastructure hobbling SCTP (via firewalls), SCTP would provide the same thing QUIC does (session/stream multiplexing) within the same 5-tuple, with much lower packet overhead and a smaller code base too.
As with any network protocol design, the tradeoff is a slight gain in versatility against a loss of privacy. So it depends on your triage of needs: security, privacy, confidentiality.
Now, with the latest "quadage", there's a fourth: unobservability (plausible deniability).
From what I recall, one downside to SCTP is that things like resuming from a different IP address and arbitrarily changing the number of streams per association didn't work well in standard SCTP. Plus the TLS story isn't as easy. QUIC makes that stuff easier to work with from an application perspective.
Still a fascinating protocol, doomed to be used exclusively as a weird middle layer for WebRTC data channels and as a carrier protocol for internal telco networks.
Unfortunately most of the existing communication protocols that are standardized conform to a broken model of networking where security is not provided by the network layer.
Cryptography can't be thought of as an optional layer that people might want to turn on. That bad idea shows up in many software systems. It needs to be thought of as a tool to ensure that a behavior is provided reliably. In this case, that the packets are really coming from who you think they are coming from. There is no reason to believe that they are without cryptography. It's not optional; it's required to provide the quality of service that the user is expecting.
DTLS and QUIC both immediately secure the connection. QUIC then goes on to do its stream multiplexing. The important thing is that the connection is secured in (or just above) the network layer. Had OSI (or whoever else) gotten that part right, then all of these protocols, like SCTP, would actually be useful.
Tangentially related, but any feedback from devs using P2P? Is it usable for consumers, or are too many peers unable to connect? Are you using WebRTC, or something more high-level like PeerJS?
What's the landscape today?
I guess I don't really understand the niche being targeted. How is this different/better than just standard QUIC with TLS?
[Don't get me wrong, if you just wanted to make your own as a learning project or because it's fun, that's cool too]
Good question — this is probably the most important clarification.
I6P is not trying to replace QUIC or TLS, and it’s not a competing transport. QUIC is the transport.
What I6P provides is a reusable P2P connectivity and transport layer built on top of QUIC, so applications don’t need to re-solve the same problems over and over again:
- Cryptographic peer identity decoupled from IPs
- Explicit peer-to-peer session semantics (not client/server)
- Built-in chunking, Merkle verification, erasure coding, and resumable transfers
- Stream pooling and batching tuned for high-throughput P2P links
- Session resumption and 0-RTT specifically for peer reconnections
- A clean abstraction boundary so existing apps can integrate without rewriting their logic
You absolutely could build all of this directly on raw QUIC — and many projects do, each in slightly incompatible ways. I6P’s goal is to standardize that layer so P2P apps can focus on application logic instead of reimplementing transport mechanics.
So the niche isn’t “better QUIC”, it’s “QUIC-based P2P without bespoke transport stacks per project”.
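For the "chunking, Merkle verification" bullet above, this is the generic construction being referred to; a sketch of the idea only, not I6P's actual wire format or parameters. Hash each chunk, then hash pairs up to a single root, so a peer can verify any chunk it receives against that root:

    import hashlib

    def merkle_root(chunks):
        # Hash each chunk, then pairwise-hash levels until one root remains.
        level = [hashlib.sha256(c).digest() for c in chunks]
        while len(level) > 1:
            if len(level) % 2:                # duplicate the last node on odd levels
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    data = b"example payload " * 4096
    chunks = [data[i:i + 4096] for i in range(0, len(data), 4096)]
    print(merkle_root(chunks).hex())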
Yeah, but why would you implement erasure coding on top of a reliable transport like QUIC?
Many of the other things kind of sound like what you would get already with raw QUIC.
> globally routable addresses ... simpler security
I don't believe those are synonymous.
To me this all seems very much vibe-coded - take a look at the GitHub repo.
After closing three popups, I closed the page.
Easy & reliable p2p would upset so many apple carts. I get the feeling that if something really started taking root in that space, the big boys would push for a new obstruction--i.e. NATv6 or some such thing--to put the genie back in the bottle. And since so many people "fake it 'til they make it", they will swallow those worms like eager baby birds. Anyone who rejects the worms will be branded a heretic.
A number of existing P2P things already do this, though usually with UDP.