Hacker News

Wait, so every time someone makes a citation on wikipedia it causes Archive.org to archive that URL? This seems like it could be abused by a malicious agent, no?


Well, in my hypothetical world, the Wikipedia editor page would include a widget in which a user could submit a URL, and Wikipedia would generate the markup, including an archive URL. Deciding how to implement this efficiently will overlap with that second-hardest problem in computer science: cache invalidation.


Good idea. After that it's really just rate limiting, and perhaps only re-archiving the same URL every X days. Perhaps a per-domain limit as well. Nothing they probably haven't already solved.
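The two limits described above (a per-URL cooldown of X days plus a per-domain cap) could be combined into one gate. A minimal sketch in Python, assuming in-memory state and illustrative parameter values:

```python
import time
from urllib.parse import urlparse

class ArchiveRateLimiter:
    """Hypothetical limiter: re-archive a given URL at most once per
    `url_cooldown` seconds, and allow at most `domain_limit` archive
    requests per domain within a sliding `domain_window` seconds."""

    def __init__(self, url_cooldown=30 * 86400, domain_limit=100, domain_window=3600):
        self.url_cooldown = url_cooldown
        self.domain_limit = domain_limit
        self.domain_window = domain_window
        self.last_seen = {}    # url -> timestamp of last archive request
        self.domain_hits = {}  # domain -> timestamps of recent requests

    def allow(self, url, now=None):
        now = time.time() if now is None else now
        # Skip URLs archived within the cooldown window ("every X days").
        last = self.last_seen.get(url)
        if last is not None and now - last < self.url_cooldown:
            return False
        # Enforce the per-domain cap over a sliding time window.
        domain = urlparse(url).netloc
        hits = [t for t in self.domain_hits.get(domain, []) if now - t < self.domain_window]
        if len(hits) >= self.domain_limit:
            return False
        hits.append(now)
        self.domain_hits[domain] = hits
        self.last_seen[url] = now
        return True
```

A production version would need shared state (e.g. a cache backend) rather than per-process dictionaries, but the gating logic is the same.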


There's already the citation tool in the default editor. Shouldn't be too hard to just fire off an AJAX request to the Archive when the citation is added, right?
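The Archive does expose a public "Save Page Now" endpoint at web.archive.org/save/&lt;url&gt;, so the hook really could be a single request fired when a citation is added. A minimal sketch in Python (the function name is illustrative, and the request is built but not sent here):

```python
from urllib.request import Request

# Public Wayback Machine "Save Page Now" endpoint.
SAVE_ENDPOINT = "https://web.archive.org/save/"

def build_save_request(url: str) -> Request:
    """Build a request that asks archive.org to snapshot `url`.
    Sending it (e.g. with urllib.request.urlopen) triggers the archive."""
    return Request(SAVE_ENDPOINT + url, method="GET")
```

In the browser-side citation tool this would be the equivalent fetch/AJAX call, subject to whatever rate limiting the Archive applies to that endpoint.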


Easy enough, but what do we call it?

The Wikiarchivator?


You can already just tell archive.org to archive a URL. How would this be different?


You could have a "reverse DDoS" where archive.org blocks requests from Wikipedia, removing the ability to add Wikipedia citations.


There's no such thing as a "reverse DDoS", that's just a regular denial of service attack.


A theoretical reverse DDoS: You somehow prevent traffic from reaching their load balancer so it automatically scales their AWS instances to zero.


Your minimum instances would always be 1 in the autoscaling group associated with your ELBs (or now, ALBs).

Regardless, neither the Internet Archive nor Wikimedia use AWS or other cloud providers, as it would be prohibitively expensive. They both run their own infrastructure/ops.


Mostly a joke, just trying to think of what that term would mean.


I think a reverse DoS attack would be one that actually increases the capacity of the service. I'm not sure what scenario would allow that to happen though.


Some streaming sites (e.g. 4 on Demand, IIRC) have a peer-to-peer element; you could reverse-DDoS those by running a lot of computers "seeding" (perhaps by leaving them on the page, paused at the end of the video), distributed all over the globe.


What I meant was:

1. Get Wikipedia to send lots of requests to Archive.

2. Archive blocks requests from Wikipedia.

3. Wikipedia citations are disabled.

In other words, getting Wikipedia to DDoS the Archive, so that the Archive's defense hurts Wikipedia.

A very silly scenario of course, just coming up with one for why an attacker might want to indirectly DDoS Archive via Wikipedia.


What would probably happen between 1 and 2 is that archive.org notices a spike in traffic from Wikipedia, talks to Wikipedia's engineers, they find out who is generating all these requests, and block them from editing Wikipedia.

It's certainly possible to generate enough junk edit traffic to cause disruption on Wikipedia, but that's nothing new. It's the nature of Wikipedia as a resource: it trusts the internet community to be good on average. So far that has worked.


That's still not a DDoS, as it's missing the Distributed element. Almost by definition, if you can block all the requests at their source, it's not a DDoS; it's just a regular old DoS.



