Hacker News

It's an unsatisfying conclusion, but is it really all that surprising? To coin a metaphor, a programming language is a shoe; only you know if it fits right. It might stretch out and become more comfortable over time, but whether it's pinching your toes is not a debatable point.

Programming isn't a natural mode of thought, and programming languages aren't a natural mode of expression. I think it's unrealistic to expect that we'll ever find a universal best practice.



I should also point out that I agree with your statement that it's unrealistic to expect that we'll ever find a universal best practice. However, I also believe that is orthogonal to the question of whether productivity should be measured or not.

Imagine that productivity was as simple as bug-free function points per iteration. In this hypothetical case it's entirely possible that Alice is more productive in Ruby and Bob is more productive in Python, and we can measure the productivity.
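That hypothetical metric is simple enough to write down as arithmetic. A minimal sketch, with all names and numbers invented purely for illustration:

```java
// Sketch of the hypothetical "bug-free function points per iteration" metric.
// The class name, the signature, and every number are made up for this example.
public class ProductivityExample {
    // Bug-free function points delivered, averaged per iteration.
    static double productivity(int functionPoints, int buggyPoints, int iterations) {
        return (double) (functionPoints - buggyPoints) / iterations;
    }

    public static void main(String[] args) {
        // In this fiction, Alice measures higher in Ruby, Bob higher in Python.
        double aliceRuby   = productivity(40, 4, 6);
        double alicePython = productivity(30, 3, 6);
        double bobRuby     = productivity(25, 5, 6);
        double bobPython   = productivity(45, 3, 6);

        System.out.printf("Alice: Ruby %.1f vs Python %.1f%n", aliceRuby, alicePython);
        System.out.printf("Bob:   Ruby %.1f vs Python %.1f%n", bobRuby, bobPython);
    }
}
```

The point is only that *if* such a metric existed, per-person, per-language comparisons would be trivial to compute; the hard part is that nothing like it exists.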

However, the problem as given is that we don't even know that Alice produces more in Ruby or Bob in Python. All we have is their subjective assertions. That is the question I have: What are Alice and Bob talking about, and do we even know whether there is a correlation between their self-diagnosed productivity and anything useful we can measure like project success?

I don't think so. Consider Charles the Arc programmer and Debbie the PHP programmer. They work on identical Ruby projects for the same amount of time.

Charles reports that he was horribly unproductive using Ruby's brain-damaged implementation of half of Lisp. Debbie reports she was way more productive in Ruby than she ever was in PHP.

But in reality, Charles accomplished far more using his knowledge of Lisp, while Debbie spent most of her time learning how a truly dynamic language worked. Nevertheless, she felt "flow" and "freedom" while learning Ruby.

Can we trust their self-diagnosis? According to them, Debbie was more productive than Charles.


I think the reason finding good measures of productivity is so hard comes down to some inherent properties of math. If a programmer sits and thinks for a bit, they may find a more (time/speed/memory) efficient way to implement something in fewer lines of code. The complement is that they might just implement something really fast in a brain-dead way. Because these are mathematical algorithms, the various rules about provability apply.

Really, the only way to measure programmer productivity is how fast they can implement a perfectly detailed spec. If you have some unknowns in there, your measurements will tell you nothing.

Therefore, I might suggest a test whereby two programmers are told to implement a program from a completely known spec, in the same language, with the same set of libraries, in their preferred (or an independently chosen) working environment. What this comes down to is just a race, and some people have good days, and some have bad days.

If you run this test several times with different specs, you might be able to objectively tell who is better given those environmental characteristics. Of course, these measurements will be nearly useless in a real project environment for the reasons cited in the first paragraph.
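Running the race over several specs and averaging is the whole of the proposed measurement, so it can be sketched in a few lines. The data here is invented for illustration:

```java
import java.util.Arrays;

// Minimal sketch of the repeated spec-race: average each programmer's
// completion times over several runs so good days and bad days smooth out.
public class SpecRace {
    static double mean(double[] hours) {
        return Arrays.stream(hours).average().orElse(Double.NaN);
    }

    public static void main(String[] args) {
        double[] p1 = {5.0, 6.5, 4.5, 7.0};  // programmer 1, hours per spec (made up)
        double[] p2 = {6.0, 5.5, 6.5, 6.0};  // programmer 2

        // Lower mean => faster on average, under these environmental conditions only.
        System.out.printf("p1: %.2f h, p2: %.2f h%n", mean(p1), mean(p2));
    }
}
```

Even this toy version makes the caveat visible: the averages only rank the two programmers under the exact conditions of the race, not in a real project.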


Implementing a perfectly detailed spec isn't really programming per se. It's more like mere translation or code-monkeying. I think that having significant unknowns is an inherent part of programming, and the cases that don't involve them are either rare or ridiculously trivial.

And it's pretty hard to quantify any unknowns. Somebody said that programmers do know when they're productive. I know when I am: it's basically whenever I write anything at all that I _know_ will contribute to the program's functional or algorithmic completion. That alone constitutes productivity!

That means I know that "if I keep doing this I will finish the program at some point". Writing boilerplate doesn't count. If I keep slamming getters, setters, and the like into my source file twelve hours a day, it won't ever make the actual program ready. All I'm typing in are effectively prerequisites.

Boilerplate doesn't count even if it's mandatory in the language I'm using: I'm not sure I could ever feel productive with Java, even though I know I would really, really have to write a hundred setters/getters in order to finish. The logical step that would make me productive on my own scale would be to scoop power from a better language.
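For anyone who hasn't felt this pain, the boilerplate being complained about looks like this: in classic Java, every private field wants a hand-written getter/setter pair before the "real" program can touch it. A hypothetical three-field class already costs this much:

```java
// A made-up example class: three fields, six accessor methods, zero program logic.
// Scale this to dozens of classes and you get the "hundred setters/getters" above.
public class Customer {
    private String name;
    private String email;
    private int age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}
```

Modern Java has since added records (`record Customer(String name, String email, int age) {}` collapses all of this to one line for immutable data), and languages with native properties never had the problem, which is the "better language" point in a nutshell.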


However, I also believe that is orthogonal to the question of whether productivity should be measured or not.

You're correct, as taking that to its logical conclusion would imply it's impossible to improve (which I'm sure most would disagree with).

Perhaps the question "How do you measure improvement?" is relevant. This might be a roundabout way to yield the former.


Your simile is what disturbs me: Often when programmers describe their productivity they are really describing their comfort level while carrying out programming tasks.

However, when we use the word "productivity" in every other aspect of business, we are describing the production of output. We don't measure how the output is produced, or the subjective feelings of the producers.

So what output are we discussing when we talk about productivity? Or are we using the word in a special way that has nothing to do with productivity as it is used elsewhere?


Point taken on the simile. It's like that like never existed.

As to the rest, I dunno. Let's consider the following scenario: every time you wrote something down on paper, there was a chance the paper would spontaneously burst into flame. There are strict, reproducible rules governing when the paper would and would not catch on fire. People go to school to learn these rules, which are varied and subtle.

In this world I've described, is the best writer a person who can write the most essays that don't spontaneously combust? I feel like as programmers we think that because we can easily quantify whether something works, we can also quantify whether it's good. Or worse, that everything which doesn't burn away is essentially the same: just words on paper.

It's simply not true. The fact that writing a program feels more practical than writing a novel doesn't change that.


If it's possible to measure whole project success, at least it should be possible to see whether that is correlated with programmers' reported comfort levels (or some other reported emotional state, like excitement or fear).
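Testing that correlation is standard statistics. A minimal sketch, assuming we had paired scores per programmer (both the scoring scheme and every number below are invented for illustration):

```java
// Sketch: if whole-project success were measurable, a Pearson correlation
// against self-reported comfort would test whether the self-diagnosis
// tracks anything real. The data arrays are fabricated for this example.
public class ComfortCorrelation {
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        // Standard computational form: covariance over product of std deviations.
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n;
        double vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        double[] comfort = {7, 3, 8, 5, 9};   // self-reported comfort, 1-10
        double[] success = {6, 5, 7, 5, 8};   // some project-success score
        System.out.printf("r = %.2f%n", pearson(comfort, success));
    }
}
```

An r near +1 would mean the self-diagnosis is informative; an r near 0 would support the Charles-and-Debbie worry that it isn't.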


If there are two programmers and they both produce exactly the same output, but the differences are as follows:

The first programmer spent a lot more time doing it (about 5x the second programmer's), carefully verified each line to make sure the code and algorithm are correct both conceptually and in practice, and tested that the code won't fail in any of the imaginable corner cases.

The second programmer wrote it over the weekend and didn't test much beyond cross-checking the code against a few working test cases similar to what the program will initially be used with. He may have followed his intuition or let his creative flow dictate the details of the program, but he really hasn't had time to verify why it works.

There seems to be no clear notion of which programmer was more productive. It all depends on the perspective from which the project is viewed. On the other hand, I believe there's always some absolute value underneath, and productivity isn't an entirely subjective property.

(Mandatory car analogy: If you want a Trabant and I sell you a Mercedes-Benz for the price of a Trabant and you still treat the car like a Trabant, the Mercedes is still better built regardless of how you extract value out of it.)

When the project was deployed, both programs worked correctly, as they were identical. The cost per line/token of the first programmer is five times that of the second. On the other hand, had the second programmer undergone the same rigorous verification, it would have taken 2x the time of the first programmer.

Apparently the first programmer's code has a lot more value simply in being well thought out, and the programmer knows exactly why he arrived at the solutions he wrote in the code. If the code broke, the two programmers would be on very different levels with regard to how to start debugging.

On the other hand, the second programmer produced the same program in 1/5th of the time, which may greatly please the management. However, the second programmer might easily become a grand failure in the next project, since in this one he reached the same solution if not by accident then at least by luck.

Both programmers produced the same comments, written mainly for future maintainers, as usual, rather than as a record of the whole mental process they went through.



