IPv4 address utilization is incredibly low. For example, consider 44.0.0.0/8 - it's sitting around almost entirely unallocated. UCSD's CAIDA uses it for their network telescope (pretending to use it for amateur radio) and won't give it back.
If anything, the first organization to have netblocks yanked out from under them should be the US DoD. As I mentioned a few days ago [0]:
> There are also large portions of the 13 /8s (218 million IPs!) assigned to the US Department of Defense [5] that you wouldn't need to scan since there are no routes to them at all: the 11.0.0.0/8, 22.0.0.0/8, 26.0.0.0/8, 28.0.0.0/8, 29.0.0.0/8, 30.0.0.0/8, and 33.0.0.0/8 networks are, for all intents and purposes, "missing" from the public Internet.
> Additionally, there are only four /24s in 21.0.0.0/8 that are reachable from the public Internet. Out of the 16,777,216 IP addresses that make up 7.0.0.0/8, only 255 are reachable (7.7.7.0/24) [6].
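The arithmetic behind those numbers is easy to sanity-check; here's a quick sketch using Python's stdlib ipaddress module:

```python
import ipaddress

# Each /8 contains 2^24 = 16,777,216 addresses.
slash8 = ipaddress.ip_network("11.0.0.0/8")
print(slash8.num_addresses)    # 16777216

# The 13 DoD /8s together account for the ~218 million quoted above.
dod_total = 13 * slash8.num_addresses
print(dod_total)               # 218103808
```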
Just because we can't "see" them or get to them doesn't mean they aren't using them, though.
Yes, it sucks that we're out of IPv4 addresses but we've known this was coming for, what, 20+ years? The quicker we all get moved over to IPv6, the quicker we can forget about it.
(Disclosure: I've had a static IP from 44/8 for ~25 years and I don't wanna give it back.)
Technically that depends on the agreements with IANA. If IANA could say "HEY YOU, if you're not actively using 80% of your space by January after next, then we'll charge you an exponentially increasing fee," I suspect they'd get a lot more movement. I suspect the price of IP space on the market will also change that attitude, as people go, "Well, crap, that's worth a lot of money."
If the US military had to pay a staggering $100/IP per year it wouldn't even show up in a detailed audit. It's an utterly inconsequential amount compared to the military budget.
It's not a price thing, it's a "why should we" thing.
Well, see, compared to what most of us are used to nowadays, IP networking looked a little bit different back then.
In 1981, RFC791 [0] came about, describing a way of doing IP addressing on the ARPANET (based upon "classful networks" [1]). In accordance with this scheme, you would get either a /8 ("Class A"), a /16 ("Class B"), or a /24 ("Class C") -- depending on how many hosts you had (or thought you might reasonably have). This is the main reason why you see some organizations with a /8 today that you wouldn't expect.
Classless routing ("CIDR") [2] -- or, more specifically, variable length subnet masking (VLSM), what we're all familiar with today -- didn't officially exist until RFC1518 [3] and RFC1519 [4] (fall 1993). CIDR was introduced as a solution to a problem people were already starting to experience then: a shortage of IPv4 addresses!
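The difference between the two schemes can be sketched in a few lines (a toy illustration; the address examples are just picked from elsewhere in this thread):

```python
import ipaddress

def classful_prefix(addr: str) -> int:
    """Return the prefix length the pre-CIDR rules would force on you,
    determined solely by the first octet (RFC 791 classful addressing)."""
    first = int(addr.split(".")[0])
    if first < 128:
        return 8    # Class A
    if first < 192:
        return 16   # Class B
    if first < 224:
        return 24   # Class C
    raise ValueError("Class D/E: multicast or reserved")

print(classful_prefix("17.0.0.0"))     # 8  -- a Class A, like Apple's
print(classful_prefix("131.111.0.0"))  # 16 -- a Class B

# Under CIDR/VLSM any prefix length goes, so a Class B-sized block
# can be carved into right-sized pieces instead of being wasted:
net = ipaddress.ip_network("131.111.0.0/16")
print(list(net.subnets(new_prefix=20))[0])  # 131.111.0.0/20
```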
(Yes, 25 years ago, they realized they were running out of IPv4 addresses and started working on solutions. IPv6 [5] first emerged as a "draft standard" in late 1998 -- but only officially became an "Internet standard" last summer!)
How come IPv6 wasn't standardized before 2017? I suspect there's some kind of story there. Do you happen to know the details? Any links/pointers? Thanks!
It's just nonsense. In the IETF's nomenclature, the label "Internet Standard" is used to refer to, and I quote:
A specification for which significant implementation and successful
operational experience has been obtained may be elevated to the
Internet Standard level. An Internet Standard (which may simply be
referred to as a Standard) is characterized by a high degree of
technical maturity and by a generally held belief that the specified
protocol or service provides significant benefit to the Internet
community.
A specification that reaches the status of Standard is assigned a
number in the STD series while retaining its RFC number.
Most internet standards are not "Internet Standard"s, as can be seen by the fact that IPv6 got assigned the standard number 86, and up until last year, IPv6 wasn't either, but it still was a perfectly fine and well-documented standard, and has been for a long time.
I'm sort of familiar with how sacred these standards are. I'm interested in what was the deciding factor, when IETF and which WG decided now we have enough (significant) successful operational experience.
It's surprising, because IPv6 was set in stone at least around 2010 already, with a lot of operational experience behind it (the 6bone ended in 2006, and so on), but on the other hand the general rollout is still very much ongoing.
There was a time before NAT, where every machine had a publicly routable IP, and things were good. Any machine could talk directly to any port on any other machine (firewall willing). Until one day, IP exhaustion attacked.
Thankfully NAT doesn't exist in IPv6, because 2^128 addresses should be enough for everybody.
You need NAT whenever you can't change your peers' routing table, for whatever reason.
It's used extensively in IPv4 world for LAN=>WAN connectivity because your only peer (your ISP) won't agree to send all traffic for 192.168.0.0/24 to your home.
Its most popular use case disappearing doesn't make it any less useful for the narrow set of circumstances when it's basically your only option.
NAT64 is NAT in reverse, essentially. Your internal IPv6 network can use it to contact an IPv4 host by embedding the target address. This is arguably better than current NAT44. The NAT64 only needs to track which ports and IPs are used on the IPv4 side, which should become easier as more hosts don't require an IPv4 address anymore.
It's a last-stage transitional method IMO, when most but not all of the internet is IPv6.
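The address embedding itself is simple: with the RFC 6052 well-known prefix, the IPv4 target just occupies the low 32 bits. A sketch (assumes the network actually runs a NAT64 gateway for that prefix):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix: 64:ff9b::/96.
# A v6-only host forms a v6 address for any v4 destination by
# placing the v4 address in the low 32 bits.
NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

def embed(v4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(embed("192.0.2.33"))  # 64:ff9b::c000:221
```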
Routable != reachable, which is obvious once you've actually set up a firewall.
My IPv6 /48 is entirely routable. The reachable portions are some ports in the list of popular protocols like SSH, HTTPS and similar.
The good part is that the entire /48 is reachable from within and simplifies OpenVPN routing a lot compared to IPv4, which requires a bit of outbound NAT hackery so it doesn't have to traverse the firewall three times (OpenVPN->WAN->LAN instead of OpenVPN->LAN)
NAT of course is not the best solution ever, but vast majority of currently used network nodes do not need publicly routed IPs in a limited global space. If anything, many of them are much better without one. Yes, I know, firewalls. But why not just avoid the issue by not having publicly accessible address for non-public systems in the first place?
Because NAT is an abominable hack with significant technical and administrative overhead.
My home is not intended to be a public space. But, it has a unique address, just like any other bar or restaurant in the area. Being able to locate it with a unique identifier is valuable, even if I don't intend for my living room to be publicly accessible.
> Because NAT is an abominable hack with significant technical and administrative overhead.
Agreed. But you can use private networks without NAT. In fact, 99% of traffic in the private network doesn't need NAT. Only outside traffic does, and only that which can't use proxies etc. - which is pretty small amount of overall traffic, I think.
A particularly notable example of this is that the three ip6tables rules needed to get NAT-like "everything out, nothing in" behavior are exactly the same three rules that you need in v4 iptables for the NAT to have any security effect at all.
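For reference, a minimal sketch of those three rules on the FORWARD chain of a gateway (the interface name `eth-lan` is a placeholder; real setups vary):

```shell
# Allow return traffic for connections initiated from the LAN.
ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow anything originating on the LAN interface to go out.
ip6tables -A FORWARD -i eth-lan -j ACCEPT
# Drop everything else arriving from the outside.
ip6tables -A FORWARD -j DROP
```

Swap `ip6tables` for `iptables` and you have the v4 stateful filter that a NAT needs underneath it to provide any protection.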
And how is NAT not added security by default? By default it drops everything incoming, no? (I mean, theoretically NAT doesn't, but in [almost] every practical implementation there's no possible automatic internal-external address correspondence; otherwise you wouldn't need the NAT in the first place.)
A NAT doesn't drop anything. A NAT translates. A firewall drops.
If you only have a NAT, your ISP (or anyone who has compromised their router, or possibly simply your neighbour when ISPs occasionally fail to isolate their customers on layer 2) still can send you packets addressed directly to your "internal" addresses. The only thing that actually helps is a stateful firewall. And when you have that, the NAT does not add anything security-wise.
NAT and "internal addresses" is as much a security mechanism as not telling anyone that there is a room called "living room" in your house. If you want to prevent strangers from getting into your living room, you don't use internal names for your rooms, you install a lock on the door.
By default a NAT is unconfigured, so it doesn't know what to translate into what. The usual practical deployment consists of a dynamic outgoing NAT setup and UPnP. So outgoing TCP/UDP/etc protocols are snooped and the incoming packets are translated accordingly. (Hence the UDP NAT hole punching technique.) This setup works well for SoHo sites.
In this case without knowing my egress traffic, any incoming packet will be dropped by the NAT middlebox/facility/software/device/module/thing. So NATs do drop. And nice ones emit ICMP or TCP RST too.
> The only thing that actually helps is a stateful firewall. And when you have that, the NAT does not add anything security-wise.
Yes, indeed. That's a different threat model though. And I agree that hosts should have a default deny ingress policy.
Yet NATs do work wonderfully for SoHo networks. And of course a stateful firewall is just as easy to fool (circumvent) with a connect-back connection as a NAT is.
And a NAT as described above or a stateful firewall that enables local network functions such as CIFS/SMB were just as vulnerable to the usual Blaster-type worms/malware.
Again, of course, usually things go hand in hand. Firewalls usually have SNAT/DNAT capabilities, and SoHo shitboxes come with too much of every kind of NAT/firewall thingies already.
And, I know the misery of interconnecting two internal networks both using the same 192.168.0.0/24 or whatever prefix, so I can't wait to get rid of this and move to proper v6. But the fact of life is that the typical NAT setup was very convenient for ticking the local network ingress security checkbox for ISPs for years (decades).
> In this case without knowing my egress traffic, any incoming packet will be dropped by the NAT middlebox/facility/software/device/module/thing. So NATs do drop.
No, that is the firewall, or simply the IP stack of the device, not the NAT. A NAT only translates. In your typical home setup, it will track connections coming from the LAN side and translate the source address of the connection to the public address, and then match packets of the same connection in the opposite direction and translate them back.

If there is a packet coming in on the WAN side that does not match any known connection being translated, the packet is simply left unmodified by the NAT. If it happens to be addressed to the public address of the gateway, it will then be passed to the higher layers of the IP stack of the gateway, where it might be delivered to some TCP or UDP socket or whatever else is running on the device and using IP--and if the IP stack cannot find any applicable socket to deliver it to, it might respond with the appropriate ICMP error or TCP reset or whatever.

If the packet is not addressed to the gateway's public address, it will simply be forwarded to wherever the routing table says--if it's addressed to an address from the LAN range, it will be forwarded to the LAN.
A NAT without a firewall does not drop packets. A NAT only translates addresses of packets belonging to connections it is configured to translate, everything else is left untouched. If there is no firewall in addition to the NAT, it will not prevent inbound connections from the WAN side to the LAN side.
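A toy model of that behavior (entirely hypothetical, just to make the separation of concerns concrete -- translation is one table lookup, and "drop" appears nowhere):

```python
# Toy source NAT: tracks outbound flows and translates only those.
# Anything it doesn't recognize passes through untouched --
# dropping would be the job of a separate firewall.

PUBLIC_IP = "203.0.113.1"   # hypothetical gateway address

class ToyNat:
    def __init__(self):
        self.flows = {}      # (lan_ip, lan_port) -> public_port
        self.reverse = {}    # public_port -> (lan_ip, lan_port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.flows:
            self.flows[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (PUBLIC_IP, self.flows[key])

    def inbound(self, dst_ip, dst_port):
        # Translate only packets matching a tracked flow.
        if dst_ip == PUBLIC_IP and dst_port in self.reverse:
            return self.reverse[dst_port]
        # Unknown destination: left unmodified and forwarded, NOT dropped.
        return (dst_ip, dst_port)

nat = ToyNat()
print(nat.outbound("192.168.1.10", 5555))  # translated to the public address
print(nat.inbound("203.0.113.1", 40000))   # reply translated back to the LAN host
print(nat.inbound("192.168.1.10", 22))     # addressed straight to the LAN: passes through
```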
> But the fact of life is that the typical NAT setup was very convenient for ticking the local network ingress security checkbox for ISPs for years (decades).
Well, the real fact of life is that tons of supposed network professionals think that, and then deploy a pure NAT setup that allows unlimited access to the LAN from the ISP's router (and possibly from other customers, when the ISP fails to properly isolate them on layer 2), all because of the completely baseless assumption that a NAT prevents inbound connections.
This is largely theoretical now, as there's a single bit difference in definition (or implementation) of a NAT. (Leaves unknown untouched, or drops them.) But you are right, that thinking of a NAT as a pure transformer that cannot drop packets results in a better model.
> deploy a pure NAT setup
Is that even possible? I mean, sure, get a PC with 2 NICs and use raw sockets, but with a simple off the shelf network gadget?
> This is largely theoretical now, as there's a single bit difference in definition (or implementation) of a NAT. (Leaves unknown untouched, or drops them.)
So ... any distinction between two different things is a single-bit difference and therefore largely theoretical? I am not sure I follow ...
> But you are right, that thinking of a NAT as a pure transformer that cannot drop packets results in a better model.
Which is the important point, in particular in this context where the common argument essentially is "because I consider dropping packets a function of a NAT, NAT is good to have", where the whole argument only hinges on the definition, not on any real-world facts about network devices.
> Is that even possible? I mean, sure, get a PC with 2 NICs and use raw sockets, but with a simple off the shelf network gadget?
I don't know, I have never investigated that, as I don't tend to use off the shelf network gadgets. However, there is no need to use raw sockets, just install Linux and use the kernel's netfilter, which behaves in exactly that way: If you only configure NAT rules but no filtering rules to prevent inbound connections, your LAN is wide open. And that is not bad design, that is the obvious way to implement this as a matter of separating concerns.
Now, Linux is (was?) a popular OS for off-the-shelf home routers, and some of them at some point even use(d?) netfilter for their NAT. If you consider what kinds of completely moronic security holes have been and still are being found in those kinds of devices (shell injection in the web interface, trivial buffer overflows, backdoors with hard-coded passwords, ...), that does not make me particularly optimistic that they are careful when putting together the actual network setup on those devices.

Not configuring the filtering of inbound connections is exactly the kind of mistake that I would expect to happen in an environment that produces that kind of garbage: unless you are competent and invest in quality assurance, that is exactly the kind of problem that no one will notice, as it does not affect day-to-day operation--and no one in the target market will notice if you screw it up, because even the supposed experts think that the NAT implies that they are safe.
I suspect other network stacks or even ASICs have similar separation of concerns in order to be usable for a broader market than "home routers", so I suspect you can make the same mistake on other platforms used for implementing home routers.
So, do I know that there are devices out there that do NAT but don't filter? No. Would I be surprised if there were? No, definitely not.
However, if you do away with NAT, that makes it much more likely that a home router lacking a filter wouldn't go unnoticed for long, because it's trivial to check whether it does filter or not. So, it's not only that NAT does not logically imply a firewall and that a firewall does not necessitate NAT, but that not having a NAT makes it actually more likely that your device does in fact have a firewall.
I think the most productive way to look at it is that for NAT/NAPT/cone NAT to work, a necessary prerequisite is a default-deny inbound firewall policy and stateful connection tracking.
Once you have that, layering on NAT is possible. But the security implications were already addressed before you get to that point.
> NAT does nothing meaningful securitywise that a firewall cannot achive
One good thing about NAT is even if you screw up the firewall config, such as configure everything in "allow all" mode, your internal network is still secure, because private IPs are not routable at the Internet level.
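The "not routable" property is baked into the RFC 1918 ranges themselves, which is easy to check from the stdlib:

```python
import ipaddress

# RFC 1918 ranges are marked private; routers on the public
# internet are expected not to carry routes for them.
for addr in ("10.0.0.1", "172.16.5.4", "192.168.1.1", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private, ip.is_global)
```

Only the last address in that list is globally routable; the other three can never be reached from outside even with an "allow all" firewall, which is the failsafe described above.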
Using NAT and the IPv4 private address ranges sounds at first like a good solution but it creates some operational problems when two organizations want to connect their networks or when two organizations merge.
The fact something looks "dark" on that plot doesn't tell you anything about how much of the space is allocated.
The site linked by that many years old Reddit post says that in fact loads of this space is allocated, to specific people, in this case "Hams", I see alphanumeric designations which I seem to remember Hams call "handles" as well as geographic locations and some human names.
Is the Amateur Radio "Ham" community disorganised? Yup. Does that magically mean they're not using this address space and so it's "unallocated"? Nope.
The allocation is for equipment within the packet radio network and, if you announce BGP, for your gateway connecting that equipment to the wider internet. I'm happy for UCSD to keep this block but I may be biased due to my participation in the network and general vicinity (~10mi) to the school itself.
> The fact something looks "dark" on that plot doesn't tell you anything about how much of the space is allocated.
A lot will also depend on when the scan is done. For instance, the University of Cambridge has 131.111/16 and hands it out to student devices, so if you scan during a vacation a large portion of it will look empty.
I had this thought too, but do we realistically know anything about what types of devices are likely to respond to ping? What's the alternative? Try to initiate a TCP handshake to every port of every IP? How many servers are in UDP listen-only mode, or whitelist only the IPs on their own network?
The alternative is to switch to IPv6. There is no requirement to offer any particular publicly reachable service on an IP address, a machine or device might be completely firewalled off from directly communicating with the public internet, and it is still perfectly acceptable for it to have a globally unique IP address, so it is ultimately pointless to try and measure "the real utilization" using packets.
As far as I understand the situation, it would be pointless to chase “unused” address ranges and try to claw them back because:
1. It would be technically very difficult, and old equipment is frequently hardcoded.
2. With the rate the allocations are going, you’d get maybe an additional few months or a couple of years out of this enormous effort. After that, all the addresses really would be allocated, and now what do you do?
Considering the huge effort, technical and political, it would take to do such a thing, it is easier to just adopt IPv6.
Yes, this: we were blowing through a /8 every couple of months in the past; I haven't checked recently, but it might be even faster now. The parent argument comes up in every one of these threads.
Using ping is a really crude way of establishing the 'darkness' of a prefix.
Entire prefixes simply block all incoming ICMP traffic, as do many core internet infrastructure devices (although they may pass pings on, they won't answer ones directed to them).
No, but they probably have a bunch of probes along the lines of "if you see any traffic in these ranges on the public internet, raise all the alarms," which is arguably a kind of usage in its own way.
It is very much in use - it may not all be visible to the public internet however.
If you said it was underutilized, you'd be correct.
But before we go after the hams, what about the absolutely massive address space held by the US DoD and related agencies? It's literally 20x the size of the /8 reserved for the ham community.
I see this argument posted a lot. Even if every one of the underutilized /8 or non publicly announced (US DoD) /8s was given back and allocated in small /24 to /20 chunks to small to medium sized ISPs, it would not have a greatly measurable long term impact. It would postpone the absolute need to migrate to ipv6 by maybe... 3-4 years, globally.
The argument that we should expend a lot of effort (technical, legal, and otherwise) to reclaim a dozen /8s and start making them part of the global routing table has been pretty thoroughly debunked by ARIN, RIPE and APNIC. They are instead focused on getting people to use v6.
Due to the efforts of a small number of people in Apple engineering (notably Erik Fair), Apple was a pretty major internet presence in the early 90’s. Though, oddly, the exec level pretty much didn’t know it existed. Remember eWorld? (Probably not.)
The relevant bit: 'we renumbered the entire company from "picked out of the air" IP Addresses to net 17, assigned to us upon request by the Internet Assigned Numbers Authority (IANA) (R.I.P., Jon Postel).'
Originally Apple's MacTCP was actually a product they sold separately, it wasn't bundled into the OS until 1994. Although it was usually easy to get a hold of a copy since universities and such had site licenses, and ISPs often had redistribution licenses
Remember that Microsoft believed that the future was subscriptions to CDROMs like Encarta that would be delivered by mail.
It was late to the party on the internet, but was a pioneer in seeing the value of subscription computing, predating pretty much every SaaS company out there today.
>UCSD Caida uses it for their network telescope (pretending to use it for amateur radio) and won't give it back.
Why do we have to convince them to give it back? It's not like IP addresses are tangible and we have to break into their offices to steal them. Major ISPs could drop all routes to those addresses, and they could then be recovered and reassigned to other ASes.
I understand this is quite a big threshold to cross, but it feels necessary.
Even in developed countries IPv6 is barely deployed (my UK ISP - BT - pretends that they rolled it out, but half the time my modem tells me IPv6 is not available until I force it to reconnect, and there's no sign of IPv6 on mobile networks).
Was looking at whether it was more economical to buy a small address block vs rent it from a datacentre. A /24 address block seems to cost around $4,000 upfront, but then you need to pay a €2,000 signup fee plus €1,400 per year in RIPE membership fees. On the other hand, IPv4 is not going away, and the price is only going to go up given the failure of IPv6 to deploy. Is there any way around the RIPE membership? Like owning an IPv4 address block through a proxy / broker?
Comcast's IPv6 implementation is rock solid and faster than IPv4 most of the time. I've been running publicly accessible IPv6 HTTP hosts over it for several years now.
The last two Motorola Modems I've had came with IPv6 support and were provisioned by Comcast out of the box.
Access Points from D-Link will autoconfigure via SLAAC but DNS has to be hardcoded.
It's 2018 and yet Ubiquiti's UniFi enterprise networking gear only has alpha support.
IPv6 on Tomato just works and has for years, IPv6 on Raspbian not so much.
Windows just works, but seems to fuck with non-Windows clients, so enterprises disable it via Group Policy.
Android devices seem to forget IPv4 if they catch a whiff of IPv6 on the network.
It does but I can't say how frequently. My IPv4 address has only changed 7 times since 2012. My IPv6 record has changed once this year when I shut down my modem for an extended period of time.
I use http://freedns.afraid.org/ to manage my DNS. They support IPv4 and IPv6 with domain linking so you can update all your DNS records with 2 calls (one for IPv4 and one for IPv6). This can be done with wget via a cronjob on your web server or in most routers these days. If you use their premium tier you can use stealth records and wildcard DNS.
Just remember to set your TTLs on your A records to something low like 60 seconds. And don't host anything mission critical.
Thanks. I am very much interested in making a similar setup of my own on a Raspberry Pi. Do you recall what the IPv6 issues with Raspbian were? Also, do you have any concerns about potentially going against Comcast policies, which appear to prohibit running servers?
I guess I should rescind my comment about Raspbian, because I haven't had any major issues since the Jessie release, and I doubt anyone is even going to use that now that the Stretch release is out.
I had a lot of issues with Raspbian prior to Jessie just failing completely to get an IPv6 address from the gateway and not being able to resolve anything.
I've been running Pi-hole and OctoPi on Jessie for at least 6 months with only minor issues that aren't the distro's fault but the applications'. OctoPi doesn't even configure the HTTP server for IPv6. Pi-hole listens for HTTP requests and responds with errors.
I installed Stretch on a Pi and set up a Ubiquiti Controller a few weeks ago, and IPv6 works flawlessly with it.
Regarding Comcast's policies, I have never run into an issue with them, and I've been running servers for over a decade. That being said, mine are personal use, low traffic, and not business related.
> Windows just works seems to fuck with nonWindows clients so Enterprises disable it via Group Policy.
Well, I've seen most enterprise networks with no IPv6 support (mostly because admins think this beast is hard to work with).
Also, Kubernetes has limited IPv6 support (mostly on the documentation site). Of course it would be easier to support an IPv6-only cluster, because then you wouldn't really need BGP or some weird overlay network. Sadly, it's still not there yet.
In Germany, IPv6 with Deutsche Telekom is just good. But sadly, on the consumer side you still won't get a fixed IPv6 prefix (due to privacy concerns!).
You won't believe the number of crazy people in Germany who believe that static IP addresses are the root of all evil because of all the magical tracking and surveillance possibilities they fantasize about.
Other big German providers just fail at this. O2 for example only hands out IPv6 addresses very selectively based on whose lines they are renting in the area. I have no chance in hell of getting an IPv6 prefix from them.
Telekom has some IPv6 routing issues every now and then that I did not encounter with other ISPs. I believe it's because they peer more selectively at DE-CIX instead of having open peering.
In Belgium, my provider's (Telenet) IPv6 connection is better than IPv4. Every now and then some network applications and websites will stop working, and invariably they're the ones that depend on IPv4. Meanwhile the IPv6 internet just keeps working without a hitch. Takes a good 5 minutes for the IPv4 to come back on then. Luckily I've got a static /64 subnet from the ISP so most of the services I run for personal use are basically IPv6-only at this point.
You need to make the distinction between PA (provider allocated) and PI (provider independent) objects, as well as the distinction between LIRs (members of the RIPE NCC) and end-users.
PA space can only be held by a LIR that is a member of the RIPE NCC, for further assignment in their networks, and is received in one of two ways: a maximum of one /22 IPv4 allocation with proper justification, or acquisition through a merger with another LIR. The membership fees are as you mentioned.
PI resources are assigned to end-users in the region, who do not need to be members of the NCC. They do, however, need a valid end-user agreement with a LIR, and each object (unless they are allocated on the same date) entails a €50/yr fee, billed to the LIR. PI IPv4 is no longer being assigned by RIPE, but existing assignments are openly traded.
My guess would be that the nets you were looking at were PI networks, so no, you do not need to pay the membership fee.
It doesn't help the UK that one of the largest ISPs, Virgin Media (the only cable supplier), doesn't yet support IPv6. They're talking about going with DS-Lite, which carries its own problems.
Once they go live, expect to see the UK climb significantly.
Not quite true - EE (so one of the 'big four' providers rather than a small MVNO) has IPv6 support. Right now, my phone has an IP address assigned out of EE's 2a01:4c8::/29 netblock and I can connect to IPv6-only websites.
My phone isn't super modern either - a 2016 era OnePlus 3 running Android 8.0.
Ah, I thought I'd imagined the BT router saying it supported IPv6. By the time I'd enabled DHCPv6 on my MikroTik hardware, it had disappeared from the HomeHub dashboard.
OVH doesn't properly deploy IPv6 either. pfSense is also being weird (if you only get a /64, you need SLAAC or else you can't configure the LAN with it, for some bizarre reason, and OVH doesn't do SLAAC properly).
Though thanks to HE/tunnelbroker.net I deployed a /48 on my network and all is good now (and easier).
It's a brand-new BT router so I would be surprised if that was the case. I suspect the non IPv6 connectivity rather come from which node I connect to on the network.
If you're on VDSL, try grabbing an HG612 from eBay and using that instead, with a decent router behind it. I was running a dual stack setup like that for over a year without issue, before moving over to Virgin Media and having to rely on a HE.net tunnel.
"It's important to keep in mind that while 185/8 is finished, we still have around nine million recovered IPv4 addresses in our available pool. Under current policy and growth rates, we expect these to last a further two years."
They've really cracked down on IP allocation since the days of giving companies entire /8s, or even since 2012
"185/8 was allocated in just five and a half years...in comparison, the preceding /8 - allocated under the old needs-based policy - lasted only five months"
> They've really cracked down on IP allocation since the days of giving companies entire /8s [...]
You did not have to be a company; your estimates of how many of the requested IPs you planned to use right away, after one year, and after two years just had to be within some boundaries (RIPE form #160, if memory serves me correctly).
Well, looks like my memory got corrupted over the last two decades: apparently that's not the correct number. Anyway: you had to specify the purpose once and provide an allocation plan for the periods I mentioned above.
Many people, myself included, feel DHCPv6 is a silly idea and should go away. IPv6 was designed to not need it, and continuing to use DHCP is just inertia and a desire to keep doing things in the same old way.
As I recall originally everyone was supposed to use well-known addresses like ::1, 2, 3 on the current network or multicast DNS.
But nooo. Was that too hard? DHCP or nothing! And then DNS was added to SLAAC.
The rest of DHCP was always useless. Once you have DNS, it can be used for all other service discovery.
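The "no DHCP needed" design can be sketched: under SLAAC a host derives its own interface identifier and appends it to the router-advertised /64 prefix. Here's the classic modified EUI-64 form from RFC 4291 (many stacks now use random identifiers instead; prefix and MAC below are made up):

```python
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Modified EUI-64: flip the universal/local bit of the first
    MAC octet and insert ff:fe in the middle."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.ip_network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_eui64("2001:db8::/64", "00:11:22:33:44:55"))
# 2001:db8::211:22ff:fe33:4455
```

No server, no lease database: the prefix comes from a router advertisement and the rest is computed locally.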
Does anyone know the amount of IP space that's allocated but never used, or even allocations that have no routes to them? I know there is a market to lease IPv4 blocks, and there is a resale market. I just wondered how much is just being sat on.
That link looks great. I've bookmarked it to read later when I can give it the attention it deserves. I find networking at this scale fascinating and something I'd like to refine my knowledge of, perhaps through some sort of CCNA-style course.
Real life example: how do you isolate a wireless guest SSID from your main work network without different subnets? Those are still needed but said ISP will just hand you one network.
Many wireless APs have a "guest filter" mode which prevents devices on the guest SSID from communicating with the LAN outside of DHCP and optionally other approved services.
I use this on my home LAN, guests can communicate with the Chromecast and media server but can't reach the printer or others.
A subnet is simply a range of addresses assigned for a particular layer 2 network. Not having a subnet is equivalent to not having addresses ... that does not really make sense.
Well, if you have multiple layer 2 networks, you need multiple subnets, and you should assign a /64 per layer 2 network for stateless autoconfiguration to work, so splitting a /64 into smaller pieces is not a good idea, therefore an ISP should give you at least a /56, better a /48, that you can subnet for any layer 2 networks that you might have.
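With a /48 from the ISP, carving out one /64 per layer-2 network is trivial; a quick sketch (the prefix is a made-up documentation address):

```python
import ipaddress

# Hypothetical ISP-delegated prefix.
site = ipaddress.ip_network("2001:db8:1234::/48")

# A /48 contains 2^16 = 65,536 possible /64s -- one per layer-2 network.
lans = site.subnets(new_prefix=64)
print(next(lans))   # 2001:db8:1234::/64   -> e.g. the wired LAN
print(next(lans))   # 2001:db8:1234:1::/64 -> e.g. the guest SSID
```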
It's hard for an ISP to justify switching. It costs money now, so your competition who do not switch can use the money saved to outcompete you. Switching costs are also likely to fall the longer you wait. Customers don't demand it.
If I was CEO of an ISP, I would have ordered a plan, but not put it into action just yet.
I work at a small ISP and I consider this a misconception.
Any network equipment your ISP bought or gave you in the last 10-15 years supports IPv6. Equipment only has a certain lifespan anyways, so there really has been no "cost" to upgrade. Maybe a bit of software needed updating.
In my experience, the problem is customers don't care that much about IPv6.
When your customer is an IT professional who has been configuring networks the same old way for years, they expect to be sent the IPv4 subnet information. It would be a negative customer experience if we gave them an IPv6 subnet by default. Since they don't see the need and aren't used to it, it's bound to cause frustration for them.
We do support IPv6 for any customer that wants it.
I think IPv6 will happen when web sites can no longer get IPv4 addresses. Then people will start saying, "my favorite site is v6 only!", so the IT people hear about it and start to care.
> I think IPv6 will happen when web sites can no longer get IPv4 addresses
With vhosts and the proliferation of CDNs, will this ever happen? If my site is behind CloudFront, I don't need any IPv4 addresses of my own
The only thing that IPv6 really solves for end-users is peer-to-peer. If video games, VoIP, etc. have lower lag on IPv6 (due to not having to go through a mediating server), customers might demand it.
Is it just me, or does the excessive use of undefined acronyms make this a difficult read for everyone else too? At the very least, please annotate your first use of acronyms like LIR and RIR so us outsiders have some context.
I'd say that if you don't understand the acronyms, you won't find the rest of the article useful or informative, either, so I don't think it's much of a problem.
Speak for yourself. I know what an /8 is, know what it means to be running out of IPv4 addresses and having to ration the last ones available, and am interested in news about how that process is going, especially insofar as it affects the transition to IPv6. That was enough for me to grasp the majority of the current article, which I found informative. But I’ve never interacted with the IP address allocation process myself, nor is it all that central to my interests, so my knowledge is relatively skin deep. I could tell you what ICANN was and probably remembered that RIPE was one of its regional affiliates, but not much more than that. In particular, I had no idea what an RIR or LIR was. So, while I didn’t mind the use of jargon, neither would it have hurt to spell out the acronyms the first time.
Sure, but that doesn't explain why they show the USA jumping from 27.7% to 38.7% in one day (5/24 to 5/25) when Google's numbers don't show a similar rise.
You see the same jump at the same time in some other countries, such as Norway and Canada. I would guess that's down to a measurement adjustment on Akamai's part, enabling them to see adoption they had been missing.
You might be using IPv6 and not even know it (if you have a smartphone or residential internet access, chances are that you already are). And that's how it's supposed to be: it should never be a concern of the user.
I don’t mean killing IPv4 - I mean all infrastructure and ISPs supporting IPv6 and treating IPv4 as legacy. Like V12 gasoline engines. They exist and are even produced today but the world has pretty much abandoned them.
People are lazy and will wait until absolutely the last moment: "why should I spend time configuring my network when I can still purchase IPv4?" This was especially bad because vendors and ISPs had the same attitude, so even when someone was willing to do the work, those parties stood in the way.
I actually see great parallels with Python 2 vs 3. The changes between those versions aren't really big, and converting is not that hard. Other languages have already done it many times (Ruby did it when switching from 1.8 to 1.9, and people moved on quickly, because if they didn't, their apps and libraries would no longer work and no one would use them).
IMHO the real cause was that Python gave people a lot of time to do it (nearly 15 years!), but there was no real interest in Py3 until 2015, when Python 2.7 went into maintenance mode (no new features backported; BTW, all new features of Python 2.7 were backports from Python 3, fueling the "Python 3 doesn't have anything new, why should I switch?" attitude), and organizations will most likely wait until 2020 (EOL) to port their applications.
I was a Charter business fiber customer for 5 years (multiple sites in multiple states). I asked every few months and got pointed to the same web page saying "its rollout is planned in Q4" (for 5 years it said that; it never included the year), with instructions on how to set up against the single 6RD server they had in St. Louis. They merged with a few other cable companies who had working IPv6, but nobody could give me timelines. We had customers wanting to access our applications via IPv6. We actually had to set up a remote proxy on an IPv6 network to then proxy back over IPv4. I left that job a year ago, so perhaps Charter has had a huge change of heart, but I strongly doubt it.
There is an IPv6 address block reserved for NAT64 translation (the well-known prefix 64:ff9b::/96, with the IPv4 address embedded in the low 32 bits). In networks that support this, packets destined for these addresses are routed to a NAT64 box for introduction to the v4 internet.
Like all NAT-based solutions, this only provides support for outgoing connections out of the box, but that's unavoidable without assigning v4 addresses to hosts, and in the consumer ISP use case is all you need anyway.
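A minimal sketch of that address synthesis, following RFC 6052's well-known prefix (the function name here is my own, not a standard API):

```python
import ipaddress

# RFC 6052 synthesis with the well-known NAT64 prefix 64:ff9b::/96:
# the IPv4 address occupies the low 32 bits of the IPv6 address.
def synthesize_nat64(v4: str, prefix: str = "64:ff9b::") -> ipaddress.IPv6Address:
    v4_int = int(ipaddress.IPv4Address(v4))
    prefix_int = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(prefix_int | v4_int)

print(synthesize_nat64("192.0.2.1"))   # 64:ff9b::c000:201
print(synthesize_nat64("8.8.8.8"))     # 64:ff9b::808:808
```

In practice a DNS64 resolver performs this synthesis for clients automatically, returning AAAA records that point into the NAT64 prefix.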
Anybody know why 240.0.0.0/4 is not used? It is currently blocked by many routers/firewalls, but it would have been possible to start the "unreserving" process a long time ago and be using this net by now.
It's hardcoded to be broken in too many devices. NANOG has long and winding threads about reclaiming the class E space - there was consensus that it wasn't possible to get enough devices unbroken for the space to actually be usable.
>we still have around nine million recovered IPv4 addresses in our available pool. Under current policy and growth rates, we expect these to last a further two years.
And by then they will have recovered even more. The end of IPv4 is a lie, and how bad IPv6 is, plus the lack of good transition mechanisms, doesn't help.
We have more and more people in the world. Things like NATing can help get around it to some extent, but at the end of the day there aren't enough IPv4 addresses to give every human alive today even one. That doesn't strike me as a problem that can be solved by buybacks.
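The arithmetic behind that claim, as a quick sanity check (the population figure is a rough assumption):

```python
# IPv4's 32-bit space vs. world population: less than one address
# per person, even before subtracting reserved and special-use ranges.
ipv4_space = 2 ** 32
world_population = 7_700_000_000   # rough late-2010s figure (assumption)

print(ipv4_space)                       # 4294967296
print(ipv4_space < world_population)    # True
```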
Trouble is that NAT solved a huge chunk of the problem. The current setup is not quite painful enough for people to want to fix it. The good is the enemy of the great, as it were.
NAT was (and is) so destructive and painful that some of us gave up writing network software in the late '90s/early 2000s. I personally abandoned several network-focused projects in the early 2000s.
The current status quo only seems "not quite painful enough" if you accept that most people cannot use true network software and are limited to a client-server architecture, where party lines[1] communicate with each other only with the permission of a central, privileged imprimatur[2].
"if you accept..." I mean, isn't that a very reasonable assumption? Do you really disagree with the notion that more than half of global population don't care at all about the ability to "use true network software", wouldn't use it if they could, and, as long as they're not restricted too much, knowingly avoid solutions with more freedom and actually prefer centralized solutions as long as they're even just a bit more convenient? Heck, if we don't listen at what people are claiming and look at their actions, then even in the techie crowd the majority aren't ready to sacrifice minor conveniences to choose decentralized models over a client-server run and entirely controlled by someone else.
The Internet would be very different (better) today if not for NAT.
NAT restricts what is possible to do over the network. For example, we only use the TCP and UDP protocols, because those are the ones supported by most devices. Similarly, we have very few peer-to-peer applications. P2P is currently mostly popular for piracy, but it could be beneficial for other uses.
NAT + asymmetric speeds (which started because of DSL, but ISPs decided to keep things that way even though it is no longer necessary) are a big part of why so many of our services are centralized. IPv6 has a chance to fix this, and I am so glad NAT wasn't included in its design.
IPv6 solves the problem of one IP per device; with IPv6 you can have both local and public IPs on the same interface! Then your apps and services can choose whether to listen on local or public IPs.
Without NAT you can do so much more, like peer-to-peer (P2P) networking. Yes, you can do P2P with IPv4 behind NAT, but it's super complicated and brittle.
Also, bypassing the NAT is complicated: you have to fiddle with router settings, and often you have to call your ISP to give you a public IP. This makes it hard or impossible to sell "Internet of Things" (IoT) devices to regular people, since you can't just plug them in.
Networks today are very good, with high bandwidth and low latency, which enables some interesting use cases. For example, virtual reality (VR), where you just have a thin client plugged into the network and all the compute power located in a data center a few miles away, with sub-ms latency.
Another use case is apps with service-like functionality, like a decentralized Facebook or chat messengers.
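The "choose local or public addresses" idea from upthread can be sketched with the stdlib `ipaddress` scope flags (the addresses below are just illustrative examples):

```python
import ipaddress

# Three addresses of different scopes; an application can inspect
# these flags to decide which addresses to bind its listeners to.
link_local = ipaddress.IPv6Address("fe80::1")               # link-local
ula        = ipaddress.IPv6Address("fd12:3456:789a::1")     # unique local ("private")
public     = ipaddress.IPv6Address("2001:4860:4860::8888")  # globally routable

print(link_local.is_link_local)   # True
print(ula.is_private)             # True
print(public.is_global)           # True
```

A service could, for instance, bind its admin interface only to the ULA address while exposing the main service on the global one, all on the same interface.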
Do you have a redundant power supply at home, a redundant internet connection? Keeping your own server up and running at home is unreliable and annoying. Having animals or kids makes it even more difficult. If I had to rely on it being up while I am abroad, I would rather pay for a VPS.
Hiding insecurity is perfectly valid. It makes the attack surface smaller. I do not get pings of death, constant scanning, or login attempts all the time on my local machine, which is always behind NAT. Every server that has a public IP gets scanned or probed for vulnerabilities. I can connect a totally new PC to a router with NAT and not be owned in a matter of minutes by some botnet. My router might be exposed, but it is something I know. All machines behind the router are perfectly safe from remote vulnerabilities.
> Do you have a redundant power supply at home, a redundant internet connection?
Depends what you need. My last power outage was over a year ago, and Internet issues will generally resolve themselves in a relatively short period of time. That's reliable enough for a lot of use cases.
> Do you have a redundant power supply at home, a redundant internet connection? Keeping your own server up and running at home is unreliable and annoying.
That's all beside the point. When you want to share a file with someone while you are both working on it, say, there is no need for a "server". IP is perfectly fine for transferring a file from your machine to theirs. When you want to talk to someone over the net, there is no need for a "server". IP is perfectly fine for transmitting voice calls between your machine and theirs.
Your mistake is in your assumption that you even need a server in the first place. For some things, that might be useful. For other things, that is only needed as a workaround for NAT in the first place.
Also, reliably running a server at home isn't that hard either, even today. With hardware offerings that are a better fit, it could be even easier. There isn't really any reason why hosting your own "server" at home needs to be any more difficult than hosting your own vacuum cleaner.
> Hiding insecurity is perfectly valid. It makes the attack surface smaller.
No, it doesn't. It simply makes it harder for you to notice that you are not secure, that's all. This is not about whether firewalling insecure services off from public access makes the attack surface smaller. It does. But NAT doesn't, a firewall does. If you have a firewall, you don't need NAT. If you don't have a firewall, NAT won't protect you.
> I do not get pings of death, constant scanning, or login attempts all the time on my local machine, which is always behind NAT. Every server that has a public IP gets scanned or probed for vulnerabilities.
Which is just completely irrelevant. None of these things are a security risk. They are annoyances when trying to debug the network, that's all. And none of that is in any way fundamentally helped by even a firewall. You have a huge attack surface in your web browser that is completely unaffected by your firewall and by NAT as well, pretending that a service listening on a port is somehow a huge security problem, but executing untrusted code inside a massively complicated virtual machine is harmless is just completely focusing on the wrong problem. Also, all those pages that you load into your browser sort-of have access to your local network anyway, because your browser is inside your firewall and can connect to all those services that you pretend your NAT protects.
> I can connect a totally new PC to a router with NAT and not be owned in a matter of minutes by some botnet.
You are constantly confusing firewalls and NAT. That is done by a stateful firewall, not by a NAT.
> My router might be exposed, but it is something I know. All machines behind the router are perfectly safe from remote vulnerabilities.
We are talking about IPv6 and the possibility of directly accessing a machine where some vulnerable service might be exposed by misconfiguration. If you have a service listening with a remote code execution vulnerability, that is really bad. Even professionals forget to close off their databases on servers sometimes; I cannot imagine what weird stuff might be running on normal users' machines.
I did not even touch on users running untrusted code, because that is not in the scope of this discussion. It is insecure whatever the network configuration is.
I do not know how you can connect to a device behind NAT without setting up a tunnel to it. But I might be wrong; point me to some resource, please?
> We are talking about IPv6 and the possibility of directly accessing a machine where some vulnerable service might be exposed by misconfiguration.
That is no different than with IPv4. If you have a stateful firewall, that isn't possible. If you don't, it is.
> Even professionals forget to close off their databases on servers sometimes; I cannot imagine what weird stuff might be running on normal users' machines.
Which is why you should have a stateful firewall. A NAT does not add anything to that.
> I did not even touch on users running untrusted code, because that is not in the scope of this discussion. It is insecure whatever the network configuration is.
It is very much in scope of the discussion, as every single end user does it. No matter how great their firewall is, you just send them a link to a website, and that website now gets to execute Javascript code on the inside of the firewall, with more or less direct access to all the insecure services supposedly protected by the firewall. Including even stuff only listening on localhost, which wouldn't be reachable directly even without a firewall. If you want to do a mass-scale attack, you serve that code through an advertising network.
So, you actually have to secure the services anyway, even a firewall is insufficient to protect vulnerable services on end-user networks.
> I do not know how you can connect to a device behind NAT without setting up a tunnel to it. But I might be wrong; point me to some resource, please?
By sending a packet addressed directly to the internal address, which your ISP can do, anyone who compromises your ISP's edge router can do, and more often than not your neighbours can do when your ISP fails to properly isolate customers on layer 2.
What's wrong with IPv6? The only complaints I've seen are "the numbers are bigger", as if that's not the point, and that makes them harder to remember.
My "main issue" with it is that if people are used to being behind NAT, they now have to be a bit more careful about securing their computers (firewall etc.), because every computer is now publicly addressable. Most routers do not even seem to have an IPv6 firewall.
The 'residential gateway' for my attached fiber connection doesn't allow incoming SYNs to the IPv6 addresses it hands out, and I couldn't even find a way to tell it to let me actually use the internet as intended, other than bypassing it (which works fine).
Most endpoints these days don't have much if anything listening by default though. The reality is that even trusted local networks are hostile networks, and vendors have responded to that.
Ultimately we do need to secure our endpoint devices. They need to be secure by default. NAT and firewalls let us get away with insecure broken OSes and services for a while, but not forever, and they create the "soft underbelly problem" where once someone manages to hop your firewall everything is vulnerable.
NAT does not provide access protection. NAT only hides the lack of access protection when it isn't there. A stateful firewall provides access protection, and that works with both IPv4 and IPv6.
Also some implementations (including Windows) [0] expose the MAC address of your device to the Internet, creating a huge privacy problem. IPv6 is a mess.
One of these days y'all are going to see it my way... I think IPv6 NAT is important to use, despite everything you hear saying it should never be used (usually from people theorycrafting instead of being responsible for actual systems). Cue the "but NAT was never very secure" etc. comments.
That's approximately a /9 worth of IPv4 addresses, recovered over multiple years of going through all the low-hanging fruit (e.g. the original /8 networks).
9 million addresses over 2 years is a burn rate of ~375k/month. Another 2 million newly recovered addresses would last only about 5-6 more months at that rate.
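The burn-rate arithmetic above, spelled out:

```python
# Reproducing the parent comment's burn-rate estimate.
pool = 9_000_000              # recovered addresses currently in the pool
months = 24                   # expected to last "a further two years"
burn_rate = pool / months

print(burn_rate)              # 375000.0 addresses per month

newly_recovered = 2_000_000   # a further hypothetical recovery
print(newly_recovered / burn_rate)   # ~5.3 months
```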
It's already more economical; the price of IPv4 just isn't surfaced very well. People hide the cost of IPv4.
One of the IPv6 ISPs gave a talk about this: they realised that rather than hiding this cost they could surface it. Then, magically, instead of "try to persuade technical people to choose IPv6" the situation becomes "make technical people explain to their finance department why they're spending the extra money", and what do you know, "learn IPv6" is way more popular than "argue with accountants".
IP addresses accumulate reputations as well as background noise traffic bound for them. While "plenty" of these recovered addresses exist, there is something to be said for being able to get allocations of unused addresses.
Is it time to just admit IPv6 is a failure and move on to a new standard? Adoption is very slow even though the economic need is already here. Instead of complaining about users, maybe we should do something so they actually want to switch?
1. 128 bits is likely overkill. 64 bits would give roughly 2 billion addresses per human on Earth. Likely enough.
2. It's unreadable. Why the hell use hex?
3. It was proposed in 1998. It became a standard last year. Not exactly quick adoption, and that tells you something.
Yes, it will probably be deployed out of necessity, but it's weird that nobody seems to notice: if something doesn't get traction for 20 years, there is probably something wrong with it.
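For what it's worth, the per-capita arithmetic in point 1 does roughly check out (the population figure is an assumption):

```python
# 64 bits of address space divided among ~8 billion people.
space_64 = 2 ** 64                     # 18,446,744,073,709,551,616
world_population = 8_000_000_000       # rough figure (assumption)

print(space_64 // world_population)    # 2305843009, i.e. ~2.3 billion each
```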
1. Easier organization. Allows logical subnetting and more efficient routing.
2. It allows you to see subnetting. It makes managing IPv6 so much easier than IPv4.
3. It became an "Internet Standard" last year because the working group decided it was about time to combine all the RFCs, with the errata corrected, into a single RFC. Just a formality.
As for traction... blame humans. If there's something better to replace the old thing, but the improvements aren't dramatic enough (e.g. saving tons of money) and the old thing still works, then a huge portion of lazy people won't switch.
1. Perhaps it seems too much now, but if we're taking the time to migrate our global network to a new standard, we might as well get a ton of addresses out of it for many years to come.
2. Why are you reading it? Network addresses do NOT need some special UI/UX. Most users are never going to see them.
3. Neat, but that's not really a point. If you find yourself setting time limits on about everything in life, well let's just say you're boxing yourself in for failure.
Over the past 20 years we've seen an explosion of growth in devices, networking, and communication. It comes as no shock (to experts and experienced technical people alike) that it's taking a long time to switch over. There isn't some magical "press this button for everyone to be on IPv6", FYI.
IPv4 worked for a long time and works for the majority of installations now. It's hard to migrate when the advantages are not obvious. I'm not sure that IPv6 is very good (I'm always having problems with it), but any standard will have adoption problems when the current solution is good enough. I don't think there's a need for a new standard, and I'm sure a new standard wouldn't be adopted any better than IPv6.
NAT breaks the original design of the internet, where every endpoint should be reachable from anywhere. Try using protocols that rely on this (like SIP) and see how far you get.
Just look at how dark it is: https://benjojo.co.uk/internet-2018.png (from https://blog.benjojo.co.uk/post/scan-ping-the-internet-hilbe...)
Discussion on r/amateurradio - https://www.reddit.com/r/amateurradio/comments/ohi7j/did_you...