Hacker News

Believe it or not, sorting is one of the most common operations software performs. As a simple example, think of how many times somebody runs a query like `SELECT (...) FROM huge_table WHERE (...) ORDER BY (...)`. Obviously the ORDER BY means the data needs to be (at least partially) sorted before it can be returned. To be fair, that is a different case algorithmically, since DBs are almost never able to sort entirely in memory. But there are plenty of other examples where in-memory sorting is necessary or provides advantages for later computation steps (e.g. the ability to cut off elements larger than a certain threshold).
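A quick sketch of the "cut off at a threshold" point above (just an illustration in Python, not anything from the article): when you only need the k smallest elements, a partial sort can avoid ordering the whole input.

```python
import heapq
import random

data = [random.random() for _ in range(100_000)]

# Full sort: O(n log n) work to place every element.
smallest_full = sorted(data)[:10]

# Partial: heapq.nsmallest does roughly O(n log k) work for k results.
smallest_partial = heapq.nsmallest(10, data)

assert smallest_full == smallest_partial
```

Same idea applies when discarding everything above a threshold before a later computation step: you never pay to fully order the elements you throw away.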


Yeah, but it's already implemented in db software, so why would devs reinvent the wheel?


Because "do it once, never improve again" is a bizarre philosophy?


I think most db software is already quite well optimized.

I mean, unless you're a db software dev, and unless you're profiling it for each use case, I wonder if you can really find anything to optimize.

I just meant that it's a niche. I honestly have no idea how db software is programmed, but I doubt any dev can claim to do better.

I guess this algorithm would interest people who recompile their db software, or who don't use db software at all.

So here comes the question: what are the pros and cons of using db software? Why would some devs still use plain files to store data?


I think it's valid to ask questions like this. We get advice all the time warning us not to try to invent our own algorithms for certain things. Obviously, though, if everyone followed that advice, we would make no progress as programmers.

I'm not really an expert, but it looks like this algorithm does sorting in a way that doesn't require as much extra memory as others? I could be wrong about that, but the point is that this algorithm likely has certain situations where it performs better than others.
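For what the memory tradeoff means in practice (a minimal Python sketch, not the article's algorithm): a classic merge sort needs O(n) auxiliary space for its merge buffer, while an in-place sort gets by with O(1) extra. Python's built-ins at least show the interface difference, even though both use the same underlying Timsort:

```python
data = [3, 1, 2]

copy_sorted = sorted(data)  # allocates a brand-new list: O(n) extra memory
data.sort()                 # rearranges the existing list in place

assert data == copy_sorted == [1, 2, 3]
```

An algorithm that matches the speed of the O(n)-extra-memory sorts while using little or no auxiliary space would be exactly the kind of niche win being discussed.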



