Hacker News

> He's still going at it 100,000 ssh attempts later.

I got hit with >100,000 on my main desktop a few years ago when I was procrastinating on fixing my heavy-handed fail2ban config. I first noticed what was happening from the lag it was causing. It turns out >10 SSH password attempts/second can eat up a significant portion of my 3GHz "Yorkfield"[1] CPU. It wasn't hard to discover the problem: the logfile was rapidly filling with failed SSH password attempts. These attempts are particularly useless, as I have used

    PasswordAuthentication no
for many years. There is no chance that the script was going to gain access, but the system load from the rejections was terrible. So yes, I fixed fail2ban and added a few more "instant-ban" rules against anybody who tries password authentication, but the real fix was moving sshd to a random port. Invalid SSH connection attempts dropped to approximately zero immediately. It's trivial to find with a port scan, but in practice almost nobody has even bothered.
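For reference, the whole fix is a couple of lines in sshd_config; the port number below is an arbitrary example, not the one I actually use:

```
# /etc/ssh/sshd_config -- sketch; the port number is an arbitrary example
Port 49821                   # move sshd off 22 to shed the background scanner noise
PasswordAuthentication no    # reject password auth outright
PubkeyAuthentication yes     # key-based auth only
```

Remember to restart sshd (and open the new port in your firewall) before closing your existing session, or you can lock yourself out.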

It's probably like the old joke where two hikers see a grizzly bear and one stops to re-tie his shoes. "You can't outrun a grizzly!" "I only have to outrun you."

[1] Q9650 (E0); it still works great, even if it's starting to show its age



Out of curiosity, why is your desktop exposed directly to the internet (no NAT/firewall) at all?


NAT doesn't provide security, and an SSH server isn't useful if you filter that port at the firewall.


Masquerading private addresses behind your gateway's public address and limiting connections to outgoing ones is quite effective, provided ingress filtering is correctly done on your firewall/ISP side.

Using private addresses is just a convenient way to set up easy, invariant templates for FW rules. No more, no less.

If you add the fact that ISPs used not to route RFC 1918, it used to work quite well.
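As a sketch (nftables syntax; "wan0" and "lan0" are placeholder interface names, not from any particular setup), the masquerade-plus-ingress-filtering arrangement described above is roughly:

```
# Sketch in nftables syntax; interface names are placeholders
table inet fw {
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname "lan0" oifname "wan0" accept        # outgoing connections only
        ct state established,related accept          # plus replies to them
    }
}
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat;
        oifname "wan0" masquerade                    # hide RFC 1918 hosts behind the GW address
    }
}
```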


That "convenient way" has been incredibly damaging to the internet. The primary benefit of the internet was that every peer could publish without needing the permission of a 3rd party. IP masquerading / NAT removes that ability, and has caused a massive amount of centralization. These gatekeepers are necessary to work around the limitation of every host having to share a party line. Regular use of RFC 1918 for most hosts has prevented the development of real network software.

If you want the internet to continue to degrade into something closer to cable TV, then continue requiring central gatekeepers. If, instead, you care about the future of the internet, then please use globally routable addresses instead of the imprimatur we call NAT.


One of the benefits of NATing is that it makes it mentally easier to recognize inbound and outbound traffic in firewall rules.

It has been used by some rogue ISPs as a way to centralize traffic, and large-scale NATing is considered bad practice because holding that much state in memory costs ISPs money (plus FW redundancy/HA with NAT requires synchronizing state, using CARP or Cisco technologies).

But NATing behind the customer's POP, behind a public IP with the classical three-zone filtering (corporate net, DMZ, internet), still enables templates to be easily shared and understood.

It is not NAT that sucks; the problem is incompetent sysadmins.


> NATing that it is mentally easier

"Because it's easier" is a terrible reason to break the internet and limit the development of networking software such that proper direct connections (in either peer-to-peer or client-server style) are useless and a centralized 3rd party is required to negotiate the connection and/or manage the NAT hole-punching.

You're talking about convenience for a specific set of tools, when the problem is about freedom to publish without middlemen.

> a way to centralize traffic by some rogue ISP

ISPs have little[1] to do with this. I'm not talking about centralization by the ISPs; I'm talking about how network software such as VOIP should be making direct connections once the address is known, which is impossible due to NAT. Instead, we have Skype with Microsoft in a de facto position of control over a lot of the "voice chat" ecosystem.

> enables templates to be easily shared and understood

I'm sure the file-format for those templates can be extended to support a placeholder/variable/macro for local addresses.
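For what it's worth, nftables already supports exactly that kind of placeholder; a sketch with example network values (not from any real ruleset):

```
# Sketch: nftables variables as template placeholders (example values)
define LAN_NET = 192.168.1.0/24
define DMZ_NET = 172.16.0.0/24

table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ip saddr $LAN_NET ip daddr $DMZ_NET accept
    }
}
```

Swap the define lines per site and the rest of the template stays invariant, whether the addresses are RFC 1918 or globally routable.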

> NATing behind the POP of the customer

That's my entire point. This is how the internet was turned into a "two tier" system, where some hosts can use listen(2)/accept(2) usefully, but everyone else has to ask permission of the incumbent feudal lord for permission if they want to accept a connection.
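Concretely, the ability in question is just this: on a host with a routable address, a few system calls make you reachable by any peer; behind NAT the same code runs, but no outside peer can connect. A minimal Python sketch (using loopback here purely to show the calls):

```python
import socket
import threading

received = []

def serve_once(srv):
    # accept(2): block until a peer connects, then read its message to EOF
    conn, _addr = srv.accept()
    with conn:
        data = b""
        while chunk := conn.recv(64):
            data += chunk
        received.append(data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)                # listen(2): the host becomes reachable
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# The peer side: a direct connection, no middleman required
with socket.create_connection(("127.0.0.1", port)) as peer:
    peer.sendall(b"hello, direct connection")

t.join()
srv.close()
print(received[0].decode())
```

Behind NAT, the bind/listen succeed silently; the failure only shows up as peers that can never reach you, which is why centralized rendezvous servers became mandatory.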

You seem to prefer trading that ability for an internet that resembles the "cable tv" model instead of a network of equal peers (in the protocol). I hope having convenient firewall templates was worth it.

[1] other than dragging their feet on IPv6 for the last ~15 years, which removes the need for any type of NAT



