
I should also point out that I agree with your statement that it's unrealistic to expect that we'll ever find a universal best practice. However, I also believe that is orthogonal to the question of whether productivity should be measured or not.

Imagine that productivity was as simple as bug-free function points per iteration. In this hypothetical case it's entirely possible that Alice is more productive in Ruby and Bob is more productive in Python, and we can measure the productivity.

However, the problem as given is that we don't even know that Alice produces more in Ruby or Bob in Python. All we have is their subjective assertions. That is the question I have: What are Alice and Bob talking about, and do we even know whether there is a correlation between their self-diagnosed productivity and anything useful we can measure like project success?

I don't think so. Consider Charles the Arc programmer and Debbie the PHP programmer. They work on identical Ruby projects for the same amount of time.

Charles reports that he was horribly unproductive using Ruby's brain-damaged implementation of half of Lisp. Debbie reports she was way more productive in Ruby than she ever was in PHP.

But in reality, Charles accomplished far more using his knowledge of Lisp, while Debbie spent most of her time learning how a truly dynamic language worked. Nevertheless, she felt "flow" and "freedom" while learning Ruby.

Can we trust their self-diagnosis? According to them, Debbie was more productive than Charles.



I think the reason finding good measures of productivity is so hard comes down to some inherent properties of math. If a programmer sits and thinks for a bit, they may find a more (time/speed/memory) efficient way to implement something in fewer lines of code. The complement to that is that they might just implement something really fast in a brain-dead way. Because these are mathematical algorithms, the various rules about provability apply.

Really, the only way to measure programmer productivity is how fast they can implement a perfectly detailed spec. If you have some unknowns in there, your measurements will tell you nothing.

Therefore, I might suggest a test whereby two programmers are told to implement a program from a completely known spec, in the same language, with the same set of libraries, and in their preferred (or an independently chosen) working environment. What this comes down to is just a race, and some people have good days and some have bad days.

If you run this test several times with different specs, you might be able to objectively tell who is better given those environmental characteristics. Of course, these measurements will be nearly useless in a real project environment for the reasons cited in the first paragraph.
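
To make the race concrete, here's a minimal sketch of how the repeated trials might be scored. The names, times, and class are purely hypothetical: record each programmer's completion time per fully-specified spec and compare medians, so a single good or bad day doesn't decide the result.

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of the "race" described above: same spec, same language, same
    // libraries, several trials. Names and times are made up for illustration.
    public class TrialComparison {

        // Median rather than mean, so one good or bad day doesn't dominate.
        static double median(double[] times) {
            double[] sorted = times.clone();
            Arrays.sort(sorted);
            int n = sorted.length;
            return (n % 2 == 1)
                    ? sorted[n / 2]
                    : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
        }

        public static void main(String[] args) {
            // Completion time in minutes for each fully-specified spec.
            Map<String, double[]> completionMinutes = new LinkedHashMap<>();
            completionMinutes.put("Alice", new double[] {92, 110, 85, 140, 98});
            completionMinutes.put("Bob",   new double[] {105, 95, 130, 120, 101});

            completionMinutes.forEach((name, times) ->
                System.out.printf("%s: median %.1f min over %d specs%n",
                        name, median(times), times.length));
        }
    }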


Implementing a perfectly detailed spec isn't really programming per se. It's more like mere translation, or codemonkeying. I think that having significant unknowns is an inherent part of programming, and the cases that don't have any are either rare or ridiculously trivial.

And it's pretty hard to quantify any unknowns. Somebody said that programmers do know when they're productive. I know when I am: it's basically whenever I write anything at all that I _know_ will contribute to the program's functional or algorithmic completion. That alone constitutes productivity!

That means I know that "if I keep doing this I will finish the program at some point". Writing boilerplate doesn't count. If I keep slamming getters, setters, and the like into my source file twelve hours a day, it won't ever make the actual program ready. All I'm typing in are effectively prerequisites.

Boilerplate doesn't count even if it's mandatory in the language I'm using: I'm not sure I could ever feel productive with Java, even though I know I would really, really have to write a hundred getters/setters in order to finish. The logical step that would make me productive on my own scale would be to scoop that power from a better language.
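
For a sense of scale, this is the mandatory boilerplate for a single field in Java (the Customer class is just a made-up example). Ruby gets the same effect with a one-line attr_accessor, which is the kind of power I mean by scooping it from a better language.

    // Illustrative only: the mandatory Java boilerplate for a single field.
    // Multiply by dozens of fields and none of it moves the program any
    // closer to functional completion -- it's a prerequisite, not progress.
    public class Customer {
        private String name;

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }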


However, I also believe that is orthogonal to the question of whether productivity should be measured or not.

You're correct, as taking that to its logical conclusion would imply it's impossible to improve (which I'm sure most would disagree with).

Perhaps the question "How do you measure improvement?" is the relevant one. Answering it might be a roundabout way of answering the original question about measuring productivity.



