You can have an intuitive sense of how many operations you're performing without knowing big-O notation.


Of course, but part of the benefit from studying CS is that you'll be able to recognize intuitively when there is or there should be a better solution.

Let's say you want a data structure that supports three operations: Insert, Delete, and Find (as in, 'is this in the database?'). The intuitive sense may come from saying, "A linked list would be horrible for this! Every operation would be a linear scan." (O(n)) The practiced programmer may say, "I can keep the data sorted in an array and binary-search it. That would probably be faster." (O(lg n) for Find, though inserts and deletes still have to shift elements.) But if you've learned a little more CS and know how hash tables work, you know they give expected constant time for all three operations. You never have to waste time agonizing over the choice, because you know how hashes are implemented and that they're built for exactly these operations.
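A minimal Python sketch of those three choices (the container names are just illustrative stand-ins, not anything from the original comment):

    import bisect

    data_list = []      # unsorted list: Insert is an O(1) append, Find/Delete are O(n) scans
    data_sorted = []    # sorted list + bisect: Find is O(log n), Insert/Delete still shift O(n) elements
    data_set = set()    # hash-based set: Insert, Delete, Find are all expected O(1)

    def find_unsorted(x):
        return x in data_list                      # linear scan

    def find_sorted(x):
        i = bisect.bisect_left(data_sorted, x)     # binary search
        return i < len(data_sorted) and data_sorted[i] == x

    def find_hashed(x):
        return x in data_set                       # hash lookup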

Besides, big-O notation takes no time at all to learn. I learned its theory as a high-school freshman in Algebra II, when we wanted to know which of two polynomials grew faster. Take the most significant part, rip off the constants, and that's its growth rate.
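A small Python sketch of that rule, using the polynomial mentioned further down the thread (the sample values are arbitrary, just to show the trend):

    # f(x) = 5x^3 + 4x + 8 is O(x^3): divide by the dominant term and the
    # ratio settles toward the constant 5, so the lower-order terms wash out.
    def f(x):
        return 5 * x**3 + 4 * x + 8

    for x in (10, 100, 1000, 10000):
        print(x, f(x) / x**3)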


I got into trouble once at an interview when I answered a question about joining two data sets by writing a SQL join. The interviewer didn't know how to reason about the performance of SQL; he was assuming I'd write something iterative so we could talk big-O. Which is to say, a lot of tools used in the real world don't submit easily to big-O analysis, and some academic types resist learning them for that reason.
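To make that concrete, here is a rough Python sketch of two join strategies a SQL engine might choose between (the tables and column names are made up for illustration; real planners also weigh merge joins, indexes, and statistics):

    # Hypothetical in-memory "tables", purely illustrative data.
    users  = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
    orders = [{"user_id": 1, "total": 10}, {"user_id": 1, "total": 5}]

    # Nested-loop join: compare every pair, O(n * m).
    nested = [(u, o) for u in users for o in orders if u["id"] == o["user_id"]]

    # Hash join: build a hash table on one side, probe with the other, roughly O(n + m).
    users_by_id = {u["id"]: u for u in users}
    hashed = [(users_by_id[o["user_id"]], o)
              for o in orders if o["user_id"] in users_by_id]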


I find it interesting that you think a programmer needs to know big-O to know when to use a hash-table vs a list; I don't find that to be the case at all.


"Take the most significant part, rip off the constants, and that's its growth rate."

Well, it's an upper bound on its growth rate. And only after some possibly-gigantic n.


Yeah, I understand. I was describing how I did it in Algebra II. They would give us f(x) = 5x^3 + 4x + 8 and we would know that it was O(x^3).

We also understood that it was as x -> infinity, which is why f(x) = 2x^4 has a larger growth rate than g(x) = x^2 + 5000 even though g is bigger for small x. When you're talking about scalability in programming, you're going to have to understand big-O, how constant factors come into play (why Floyd-Warshall at O(n^3) often beats asymptotically better all-pairs algorithms in practice), and how big-Theta works. If not, you're eventually going to make a mistake and slow everything down.
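For reference, a standard Python sketch of Floyd-Warshall (the textbook algorithm, not code from this thread; the example matrix is made up) shows why its constant factors are so small: the inner loop is just one add, one compare, and maybe one store over a dense matrix.

    INF = float("inf")

    def floyd_warshall(dist):
        """All-pairs shortest paths; dist is an n x n matrix of edge weights,
        with INF for missing edges and 0 on the diagonal. Runs in O(n^3)."""
        n = len(dist)
        for k in range(n):
            for i in range(n):
                dik = dist[i][k]
                for j in range(n):
                    # Tiny inner loop: this is where the small constants come from.
                    if dik + dist[k][j] < dist[i][j]:
                        dist[i][j] = dik + dist[k][j]
        return dist

    # Tiny usage example with made-up weights:
    m = [[0, 3, INF],
         [INF, 0, 1],
         [2, INF, 0]]
    print(floyd_warshall(m))   # shortest paths between every pair of nodes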



