> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses. They also don't allow public IPv4 addresses to send packets to public IPv6 addresses. Public IPv6 addresses can only exchange packets with each other. The specifications could have defined a functionally equivalent public IPv6 address for each public IPv4 address, embedding the IPv4 address space into the IPv6 address space; but they didn't.
Why didn't they? What are the arguments against this solution?
You would have to upgrade the software and OS of IPv4-only systems to understand the IPv6 header. (Which is still much simpler than upgrading to fully support IPv6 AND assigning IPv6 addresses.)
I guess the bigger problem is that all routers in the path between an IPv6-only host and an IPv4-only host would have to support IPv6 to parse the destination IPv4/IPv6 address and make the proper routing decision.
This would make "IPv6 to IPv4 and vice-versa" traffic only work for some, depending on which ISPs have upgraded their equipment to support IPv6, and which IPv4-only hosts have upgraded their software and OS to support IPv6. This could give IPv6 a very bad reputation, further delaying adoption.
>> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses.
> Why didn't they? What are the arguments against this solution?
They did. See "Stateless IP/ICMP Translation Algorithm (SIIT)"
> This document specifies a transition mechanism algorithm in addition to the mechanisms already specified in [TRANS-MECH]. The algorithm translates between IPv4 and IPv6 packet headers (including ICMP headers) in separate translator "boxes" in the network without requiring any per-connection state in those "boxes". This new algorithm can be used as part of a solution that allows IPv6 hosts, which do not have a permanently assigned IPv4 address, to communicate with IPv4-only hosts. The document specifies neither address assignment nor routing to and from the IPv6 hosts when they communicate with the IPv4-only hosts.
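For what it's worth, the translation side of SIIT pairs with an address-mapping scheme: RFC 6052 defines how to embed an IPv4 address inside an IPv6 prefix, such as the well-known 64:ff9b::/96 used by NAT64. A minimal sketch of that embedding using Python's stdlib `ipaddress` module (the addresses here are illustrative):

```python
import ipaddress

# Well-known NAT64 prefix from RFC 6052; the IPv4 address goes in the low 32 bits.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the /96 translation prefix."""
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

def extract(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF)

mapped = embed("192.0.2.1")
print(mapped)            # 64:ff9b::c000:201
print(extract(mapped))   # 192.0.2.1
```

This is exactly the "functionally equivalent IPv6 address for each IPv4 address" idea from the top of the thread; the catch, as the sibling comments note, is that every box on the path still has to speak IPv6 to route it.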
I remember being able to overwhelm my first "home router" with the "Browse for servers" tab in Counter Strike 1.6!
It would fetch a list of all servers from Steam, and then connect to them individually, eventually killing my router.
Consider sending Aussie Broadband a link to my blog post. It should be a simple fix for them to raise the timeout, which should fix the problem for all their customers.
I describe those workarounds in my post as well. But that only solves the problem for me.
Making my ISP fix the underlying issue - that their TCP connection idle-timeout is too short - will make sure all their customers won't have to encounter this problem.
Please read the post. My ISP already confirmed the problem, and told me that they expect to roll out a fix this week.
I live in Denmark, and here it is fairly common that ISPs do Carrier-grade NAT.
What does their fix look like? I guess you can't change this limit for all connections otherwise they'd have to buy more IP addresses for their NAT routers, so maybe they only fix it for SSH connections, them being few?
I had the same problem and did the ~/.ssh/config trick years ago. Interested in contacting my ISP so that they fix the problem for all users (although it might be fixed now, idk).
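The ~/.ssh/config trick is presumably client-side keepalives (my assumption; the post itself isn't quoted here), which make the SSH client send traffic often enough that the NAT entry never sits idle long enough to be evicted:

```
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

`ServerAliveInterval 60` sends an encrypted keepalive probe after 60 seconds of inactivity, comfortably under any plausible CGNAT idle timeout; `ServerAliveCountMax 3` drops the connection after three unanswered probes.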
They will increase their "TCP established connection idle-timeout" from 1 hour to 2 hours and 4 minutes as I requested.
This shouldn't make much difference for them. Most connections are closed within a few seconds anyways. Long lived connections with no traffic are rare.
With no data whatsoever, I'm guessing less than a 1% increase in NAT table size.
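A hedged back-of-envelope to make that guess concrete. Every number below is made up for illustration, not taken from the thread or from any ISP:

```python
# Made-up inputs: how much does raising the idle timeout from 1 h to
# 2 h 4 min grow a CGNAT state table, if long-idle entries are rare?
active = 1_000_000          # hypothetical concurrent NAT entries
idle_fraction = 0.005       # hypothetical share of entries idle past 1 h
old_timeout_h = 1.0
new_timeout_h = 2.0 + 4 / 60  # 2 h 4 min

# Long-idle entries linger roughly in proportion to the timeout,
# so only that small fraction of the table grows at all.
extra = active * idle_fraction * (new_timeout_h / old_timeout_h - 1)
print(f"~{extra / active:.2%} table growth")
```

With these (invented) numbers the growth comes out around half a percent, which is consistent with the "less than 1%" guess above.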
When I lived in Denmark, 3 would often use carrier-grade NAT, but not always. Based on talks with colleagues back then, it seems quite common with mobile broadband.
Here in Finland, the situation is similar; when using mobile broadband you usually end up behind CGNAT.
Luckily, most ISPs will happily provide a static IPv4 address for you for a small fee.
I missed that part. I would not have expected that in Denmark. LSN is awful. You will be sharing source-port depletion limits with others on your network. That also means you can't host any servers unless you use port forwarding services or reverse VPNs like Hamachi. It also means you share a SNAT address with others on your network, so malicious traffic from them could be attributed to you. Glad they are fixing it for you. If they didn't, one would hope there were other ISP options.
Any ISP using LSN will have low NAT timeouts, because tracking sessions and state takes memory on their routers. I would be surprised if your ISP removed timeouts entirely, unless they let the table fall back to FIFO pruning on your segment. Did they tell you what they are changing?
It sounds like he's paid his ISP for a (dedicated) public IP, so it should be 1:1 NAT, which doesn't really need connection tracking.
For the rest of the customers that don't pay extra for a public IP, all the crappy things you mention do apply.
Hopefully, the ISP does native IPv6?
And while 60-minute timeouts violate the RFC (RFC 5382 requires an established-connection idle timeout of at least 2 hours 4 minutes), it's a whole lot better than I expected. CGN timeouts are usually around 15 minutes for the nice ones, and I've seen 10 seconds at the bottom end.
I wish the longer ones would probe both ends of the connection to see if it's still live a minute or so before they intend to kill it.
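Until NAT boxes probe for you, the endpoints can do it themselves. A sketch of enabling OS-level TCP keepalives from Python, so the kernel sends probes before a CGNAT idle timer expires (the TCP_KEEPIDLE/KEEPINTVL/KEEPCNT constants are Linux-specific; other platforms differ):

```python
import socket

# Ask the OS to probe an otherwise-idle TCP connection, keeping the
# NAT mapping alive and detecting a dead peer.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only knobs
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)  # idle secs before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)  # secs between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # unanswered probes before giving up
```

With a 600-second idle threshold, probes start well inside even a 15-minute CGN timeout; for a 10-second timeout at the bottom end, no sane keepalive setting will save you.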
That's bullshit: CGNAT is likely to cause all sorts of issues that average users won't realize are caused by their "I"SP (a frequent one: being unable to host video game sessions). They aren't getting real Internet, and are being treated as second-tier citizens.
Yeah, my ISP uses it. It does come with some of the downsides the previous poster mentioned: the inability to make myself reachable from $the_world can be annoying, and I get a captcha on Google every time because of "unusual traffic" (I mostly use DDG, but sometimes fall back to it). Also, ACM blocked me at some point because "my IP is infiltrated by SciHub" (their words).
In the end, it's an imperfect solution for a real problem that mostly works well enough.