
n^2 algorithms on _massive_ inputs seem a little far-fetched, no?

Around one to one hundred billion elements, things start getting difficult.



The challenge with big-O is that you can't translate an element count into a processing time without a baseline measurement for one element. For an n^2 algorithm, if processing 1 element takes 10 seconds, then 10 elements would take 1,000 seconds, about 16 minutes.

In practice, n^2 shows surprising slowdowns way before that: in the 10k-100k range you can already be spending minutes of processing time. (With a 10 ms baseline per single element, an n^2 algorithm needs only ~77 elements to take a full minute.)
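The back-of-the-envelope math above can be sketched as follows (a minimal illustration; the function names and the assumption that runtime scales as baseline × n^2 are mine):

```python
import math

def quadratic_runtime(baseline_s: float, n: int) -> float:
    """Extrapolate an O(n^2) runtime from the time for a single element,
    assuming total time = baseline * n^2."""
    return baseline_s * n * n

def elements_for_budget(baseline_s: float, budget_s: float) -> int:
    """Smallest n whose extrapolated O(n^2) runtime reaches the budget."""
    return math.ceil(math.sqrt(budget_s / baseline_s))

# 10 s for one element: 10 elements already take ~16.7 minutes.
print(quadratic_runtime(10.0, 10) / 60)    # ~16.7 minutes

# 10 ms for one element: ~78 elements hit a full minute.
print(elements_for_budget(0.010, 60.0))
```

The constant factor matters as much as the exponent here: the same n^2 shape goes from "unusable at n=10" to "unusable at n=80" just by changing the per-element baseline.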


I'd argue that one billion is fairly large for what most people work on. But yes, point taken.



