
Does it? If you pretend IPv6 doesn't exist, sure, but that's like pretending UDP doesn't exist because all of your applications use TCP, or only logging traffic going to port 80 because you don't have HTTPS yet.

Every firewall I've come across has a default deny rule for incoming IPv6 traffic, giving the network behind it the same properties as any IPv4 network. Host firewalls are the same; everything from Windows Firewall to UFW and firewalld has presets to block all traffic except for the applications you've whitelisted. Once you get to huge enterprise routers managing routable IPv4 and IPv6 addresses the situation may become different, but even there it's not that much overhead.

The biggest problem with securing IPv6 seems to be ignoring it and assuming that makes it disappear. If you configure your firewall to drop all IPv4 traffic not on a whitelist but somehow forget to add the same rule for IPv6, you should re-evaluate your networking knowledge and get up to speed with how the internet has changed since 2015.
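To make that concrete: a default-deny inbound IPv6 policy is only a handful of lines. A minimal ip6tables sketch (illustrative only; the SSH rule is just an example, and ICMPv6 has to stay open or neighbour discovery breaks):

  ip6tables -P INPUT DROP
  ip6tables -A INPUT -i lo -j ACCEPT
  ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
  # then whitelist the services you actually run, e.g.:
  ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT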



It's not just firewalls.

It's also all kinds of code that interacts with the internet in all kinds of ways. Extending all that code to handle two kinds of IP addresses, writing tests, setting up both address families in development, staging, and production, monitoring the real-life implications ... that would be a huge cost with no benefit at all.


If you're writing code, you'll either be manually specifying the IP address family (so there's no real IPv6 risk) or using middleware that does all the hard parts for you anyway. If anything, I'm annoyed at how hard it is to get a socket listening on both IPv4 and IPv6 in many low-level libraries. I just want a socket to receive data on; who cares what address family it came from.
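For illustration, a minimal dual-stack listener in Python (the port number is arbitrary; IPV6_V6ONLY defaults vary per OS, so it's set explicitly here):

  import socket

  # One AF_INET6 socket can accept IPv4 clients too, as long as
  # IPV6_V6ONLY is switched off. IPv4 peers then show up as
  # IPv4-mapped addresses like ::ffff:192.0.2.1.
  srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
  srv.bind(("::", 8080))
  srv.listen()
  conn, addr = srv.accept()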

In my experience, IPv6 Just Works (tm) with modern software. There are some mid-00s frameworks for blacklisting abusive hosts that can't parse IPv6 addresses, or don't understand that the /64 subnet needs to be treated as a single IP address, but that's all I've ever run into. If anything, that gave me an excuse to finally get rid of an old Perl network filter running on my server.
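Collapsing an IPv6 address to its /64 before blacklisting or rate limiting is a one-liner in most languages; a Python sketch (the function name is made up for the example):

  import ipaddress

  def ban_key(addr: str) -> str:
      # Ban/rate-limit on the /64 for IPv6, on the address itself for IPv4,
      # since a single subscriber typically gets at least a /64.
      ip = ipaddress.ip_address(addr)
      if ip.version == 6:
          return str(ipaddress.ip_network(f"{ip}/64", strict=False))
      return str(ip)

  # ban_key("2001:db8::1")  -> "2001:db8::/64"
  # ban_key("192.0.2.10")   -> "192.0.2.10"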

I'm not sure how many tests the average piece of software needs for the address family of incoming connections. I suppose it matters if you want to test your rate-limiting middleware or your logging library? That should only matter for vendored code, of course, because modern libraries all have those tests themselves already. It's not like you need to write and run every test twice; only one or two very specific subcomponents, if any.

If you're writing firewalls or kernels or router firmware then yeah you'll have your hands full with this stuff, but that's far from the standard developer experience. In those cases, IPv6 is a reality as much as TCP and UDP are.


> if you're writing code, you'll either be manually specifying the IP address family (so there's no real IPv6 risk)

AF_UNSPEC is a thing, and it's a best practice.

gethostbyname was the old interface, but nowadays getaddrinfo is essentially the default, and it supports AF_UNSPEC.
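In Python, which wraps the same C getaddrinfo, asking for both families looks roughly like this (hostname and port are placeholders):

  import socket

  # AF_UNSPEC asks the resolver for both A and AAAA records, so the
  # result list can contain AF_INET and AF_INET6 entries.
  for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
          "example.com", 443, family=socket.AF_UNSPEC, type=socket.SOCK_STREAM):
      print(family, sockaddr)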


To add a trivial example: if an application is coded well, it's ready to connect to hosts on both IPv4 and IPv6, in the sense that when resolving a DNS name it will ask for addresses of any kind (unless it supports being explicitly told to only use IPv4).

So now you're getting back multiple IP addresses, some of which are IPv6, but IPv6 is blocked… there you go with random connection delays and possibly timeouts.

IPv6 exists and it's getting more and more adoption, no matter how many people keep their heads in the sand…


> multiple IP addresses, some of which are IPv6, but IPv6 is blocked… there you go with random connection delays and possibly timeouts.

RFC 6555 “Happy Eyeballs” discusses this.
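For context: the naive behaviour is to try each resolved address in sequence, which means eating a full connect timeout on every black-holed IPv6 address before falling back to IPv4; Happy Eyeballs instead races the two families, with a small head start for IPv6. A rough Python sketch of the naive sequential fallback (the timeout value is arbitrary; this is roughly what socket.create_connection() does):

  import socket

  def connect_sequential(host, port, timeout=2.0):
      # Try every address getaddrinfo returns, in order. If IPv6 resolves
      # but is black-holed, each IPv6 attempt burns the full timeout before
      # an IPv4 address is even tried -- the delay described above.
      last_err = None
      for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
              host, port, type=socket.SOCK_STREAM):
          s = socket.socket(family, socktype, proto)
          s.settimeout(timeout)
          try:
              s.connect(sockaddr)
              return s
          except OSError as err:
              last_err = err
              s.close()
      raise last_err or OSError("no addresses to try")

(asyncio's loop.create_connection() grew a happy_eyeballs_delay parameter for exactly this reason.)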


A/AAAA records are a special sort of hell to debug remotely. "My browser can find it but I can't ping it! What do you mean ping6?"

In some environments that is maddening, and I don't blame people for deciding either not to deploy IPv6 at all or to only translate at the WAN.
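In practice, the usual first step when that comes up is to check which record the browser actually used and ping over the matching family, e.g.:

  dig A example.com
  dig AAAA example.com
  ping -4 example.com   # force IPv4
  ping -6 example.com   # force IPv6; older systems ship a separate ping6 binary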


For one, the BSD socket interface for IPv6 supports IPv4 as well via the ::FFFF: (IPv4-mapped) prefix: https://www.ibm.com/docs/it/i/7.1?topic=families-using-af-in...
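A quick illustration with Python's ipaddress module (the address is an arbitrary documentation-range example):

  import ipaddress

  mapped = ipaddress.ip_address("::ffff:192.0.2.1")   # IPv4-mapped IPv6 address
  print(mapped.ipv4_mapped)                           # 192.0.2.1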


> that would be a huge cost with no benefit at all.

Internet-facing IPv6 infrastructure is usually much cheaper than its IPv4-enabled equivalent.

So supporting IPv4 can be the huge cost with no benefit at all, if all your clients/peers can use IPv6.


> if all your clients/peers can use IPv6

Including the case where you have something else in the middle already - for example, if you're fronting a website through Cloudflare, your origin server can be IPv6-only and still offer dual-stack to clients :)


Careful, you'll also summon the people who block port 80/443 UDP


> but that's like pretending UDP doesn't exist because all of your applications use TCP, or only logging traffic going to port 80 because you don't have HTTPS yet.

In fairness, neither of those things would be unreasonable stances, given those conditions.


Why firewall it when you can straight up turn it off?

  net.ipv6.conf.all.disable_ipv6 = 1
  net.ipv6.conf.default.disable_ipv6 = 1
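To apply those at runtime and persist them (the file name below is just an example):

  sysctl -w net.ipv6.conf.all.disable_ipv6=1
  sysctl -w net.ipv6.conf.default.disable_ipv6=1
  # persist by putting the two settings in e.g. /etc/sysctl.d/99-disable-ipv6.conf
  # and reloading with: sysctl --system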


ipv6.disable=1 in kernel boot flags so the interfaces don't even show up
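On GRUB-based distros that usually means adding the flag to the kernel command line and regenerating the config, roughly:

  # /etc/default/grub -- append to the existing line, e.g.:
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1"
  # then: update-grub   (or grub2-mkconfig -o /boot/grub2/grub.cfg on RPM-based distros)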


I do both of these on Ubuntu just in case one of them doesn't work; too many apt hooks constantly changing the kernel...


Disable it via Kconfig so it's not even in the kernel.
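On a self-built kernel that's the CONFIG_IPV6 option; in the resulting .config it shows up as:

  # CONFIG_IPV6 is not set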


I do this on Gentoo but nobody likes to run Gentoo in production.



