Seems like rate-limiting expensive pages would be much easier and less invasive. Also caching...
And I would argue Anubis does nothing to stop real DDoS attacks that just indiscriminately blast sites with tens of gbps of traffic at once from many different IPs.
In the last two months, ardour.org's instance of fail2ban has blocked more than 1.2M distinct IP addresses that were trawling our git repo using http instead of just fetching the goddam repository.
We shut down the website/http frontend to our git repo. There are still 20k distinct IP addresses per day hitting up a site that issues NOTHING but 404 errors.
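Not ardour.org's actual configuration, but a fail2ban setup for that "404-only host" situation might look roughly like this (the jail and filter names are made up for illustration; the option names are standard fail2ban ones):

```ini
# /etc/fail2ban/filter.d/nginx-404.conf -- custom filter, sketched
[Definition]
failregex = ^<HOST> .* "(GET|POST|HEAD) [^"]*" 404

# /etc/fail2ban/jail.d/nginx-404.local -- ban after 10 hits inside a minute
[nginx-404]
enabled  = true
port     = http,https
filter   = nginx-404
logpath  = /var/log/nginx/access.log
maxretry = 10
findtime = 60
bantime  = 86400
```

Since the dead vhost serves nothing but 404s, any client tripping this filter repeatedly is a bot by definition.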
Caching is already enabled, but this doesn’t work for the highly dynamic parts of the site like version history and looking for recent changes.
And yes, it doesn’t work for volumetric attacks with tens of gbps. At this point I don’t think it is a targeted attack, probably a crawler gone really wild. But for this pattern, it simply works.
There's a theory they didn't get through only because it's a new protection method and the bots don't run JavaScript. The bypass could be as simple as <script>document.cookie = "letmein=1"; location.reload();</script>
Rate limit according to destination URL (the expensive ones), not source IP.
If you have expensive URLs that you can't serve more than, say, 3 of at a time, or 100 of per minute, NOT rate limiting them will end up keeping real users out anyway, simply for lack of resources.
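A sketch of what per-destination limiting means in practice, using the "3 at a time / 100 per minute" figures above (paths and numbers are hypothetical): the budget is keyed on the URL path, so it holds no matter how many source IPs the traffic comes from.

```python
import time

EXPENSIVE = {"/log", "/blame", "/search"}  # hypothetical expensive endpoints
RATE = 100 / 60.0                          # refill: 100 requests per minute
BURST = 3                                  # at most 3 back-to-back

_tokens: dict[str, float] = {}
_last: dict[str, float] = {}

def allow(path: str) -> bool:
    """Token bucket keyed on the destination path, not the client IP."""
    if path not in EXPENSIVE:
        return True                        # cheap pages are never limited
    now = time.monotonic()
    tokens = _tokens.get(path, float(BURST))
    last = _last.get(path, now)
    tokens = min(BURST, tokens + (now - last) * RATE)
    _last[path] = now
    if tokens >= 1.0:
        _tokens[path] = tokens - 1.0
        return True
    _tokens[path] = tokens
    return False
```

Whether requests come from one IP or 35K, the fourth burst request to an expensive path gets rejected while the rest of the site stays untouched.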
Right - but if you have, say, 1000 real user requests for those endpoints daily, and thirty million bot requests for those endpoints, the practical upshot of this approach is that none of the real users get to access that endpoint.
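A back-of-the-envelope with those (hypothetical) figures shows how stark the effect is: under a shared cap, the fraction of served requests belonging to real users equals their share of the incoming traffic.

```python
# Hypothetical numbers from the thread: 30M bot requests and 1,000 real
# requests per day against an endpoint capped at 100 requests per minute.
bot_per_day = 30_000_000
real_per_day = 1_000
cap_per_min = 100

served_per_day = cap_per_min * 60 * 24                 # 144,000 get through
real_share = real_per_day / (bot_per_day + real_per_day)
real_served = served_per_day * real_share              # under 5 real requests/day

print(round(real_served, 1))
```

So of 1,000 real daily requests, fewer than five would be served; the endpoint is effectively down for humans.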
Yeah, at that point you might as well just turn off the servers. It's even cheaper at cutting off requests, and it'll serve just as many legitimate users.
No, it's not equal. These URLs might not be critical for users — they can still browse other parts of the site.
If rate limiting is implemented for, let’s say, 3% of URLs, then 97% of the website will still be usable during a DoS attack.
Right, but in terms of users' ability to access those 3%, you might as well disable those endpoints entirely instead of rate limiting them - much easier to implement, and it has essentially the same effect on the availability of the endpoints to users.
this feels like something /you can do on your servers/, and that other folks with resource constraints (like time, budget, or the hardware they have) find anubis valuable.
Sure, I didn't mean to imply Anubis isn't an option; I was just clearing up that there are alternatives beyond source-IP rate limiting. Several people seemed to think that was the only kind, based on comments about rate limiting not working when the traffic comes from 35K IP addresses.
Most of the "free" analytics tools for android/iOS are "funded" by running residential / "real user" proxies.
They wait until your phone is on wifi / battery, then make requests on behalf of whoever has paid the analytics firm for access to 'their' residential IP pool.
2. The US is currently broken, and they are not going to punish the only growth, however unsustainable, in their economy.
3. The Internet is global. Even if the EU wants to regulate, will they charge big-tech leaders and companies with information-tech crimes that pierce the corporate veil? That would ensure nobody invests in unsustainable AI growth in the EU. However, fucking up the economy and the planet is how the world operates now, and without infinite growth you lose buying power for everything. So everybody else will continue the fuckery.
4. What can a regulating body do? Force disconnects for large swaths of the internet? Then the Internet is no more.
I would go for making the AI companies pay. Identifying end users works for other kinds of abuse, but it runs into problems at state borders; for monetary remedies it should be easier.
> And I would argue Anubis does nothing to stop real DDoS attacks that just indiscriminately blast sites with tens of gbps of traffic at once from many different IPs.
Volumetric DDoS and application layer DDoS are both real, but volumetric DDoS doesn't have an opportunity for cute pictures. You really just need a big enough inbound connection and then typically drop inbound UDP and/or IP fragments and turn off http/3. If you're lucky, you can convince your upstream to filter out UDP for you, which gives you more effective bandwidth.
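At the host level, the "drop inbound UDP and fragments" part might look something like this (a sketch assuming iptables with conntrack; you'd still disable HTTP/3 in the web server config first, since QUIC rides on UDP/443):

```shell
# Drop non-first IP fragments (-f matches second and later fragments)
iptables -A INPUT -f -j DROP
# Drop unsolicited inbound UDP; replies to our own DNS queries are
# ESTABLISHED in conntrack and still pass
iptables -A INPUT -p udp -m conntrack --ctstate NEW -j DROP
```

Doing this upstream is strictly better, as the comment notes, because packets dropped at your own NIC have already consumed your inbound bandwidth.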