If I look at my company's GitLab statistics, the best engineers land in the top third by number of PRs, not only in (perceived) quality but also in volume (number of open/merged PRs). I hear a lot about the mythical top-level engineer who thinks for 3 days and then bursts out a small PR that changes the world, but I haven't seen it.
Mostly because for good things to get pushed out in the real world, they are broken apart into several iterations, with extra PRs for Terraform changes, new tests, new monitors, etc. And those engineers will not only do their core work but also clean up a bug here and there during the week, while mostly never getting stuck on any single change. Curious if you have actually seen the top engineers in your organizations somehow being at the bottom by volume of PRs, because in my experience the difference is sometimes easily 3-5x the volume of PRs.
We do value small PRs and incremental changes that shouldn't take more than a day to develop before getting merged, so your mileage may vary if you let people create huge changes in one go (that has some disadvantages I personally don't like in terms of reliability).
I have seen it, but it really depends on the type of problems being solved and the overall team.
The most valuable team member generally fills in for whatever the team struggles with. Sometimes that's making thousands of minor UI changes; other times it's spending months writing 4 KB of highly optimized code to avoid spending tens of millions replacing existing hardware.
The difficult bit when looking for people who will adjust to the needs of a team is that, by definition, they aren't working on the same things in different environments.
The best engineers I see farm out the easier issues and even some of the hard work. But they supervise and mentor. Counting lines of code disincentivizes mentoring.
We have several criteria that we look at, including impact on and outside the team, expertise, etc., which includes feedback from peers. But after a few years (been here over 6), correlating the output of that process with the statistics, I came to the conclusion the correlation was significant enough to casually look at. I never found an outlier in the direction of very few PRs but very good impact on the team/company. For the performance criteria that affect promotions etc. we don't actually look at it; this is something I do because I like to see my own statistics, and after a while you remember who is usually where in the sorted list.
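For what it's worth, the kind of "sorted list" described above is trivial to produce once you have merge request records (e.g. fetched from GitLab's merge requests REST API). Here's a minimal sketch; the record shape and author names are illustrative, not GitLab's full JSON:

```python
from collections import Counter

def pr_volume_ranking(merge_requests):
    """Rank authors by number of merged MRs, descending.

    merge_requests: iterable of dicts with 'author' and 'state' keys,
    a simplified stand-in for GitLab's merge request JSON.
    """
    counts = Counter(
        mr["author"] for mr in merge_requests if mr["state"] == "merged"
    )
    return counts.most_common()

# Illustrative data only
mrs = [
    {"author": "alice", "state": "merged"},
    {"author": "alice", "state": "merged"},
    {"author": "alice", "state": "merged"},
    {"author": "bob", "state": "merged"},
    {"author": "carol", "state": "opened"},
]
print(pr_volume_ranking(mrs))  # [('alice', 3), ('bob', 1)]
```

As the thread notes, the number is only a casual signal, not a performance input; the point of the sketch is just how little machinery the statistic involves.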
Thanks for sharing. I was legitimately curious. This is very interesting. I still have a healthy dose of skepticism, but it’s not like you’re stack ranking based on PR frequency or size, and the fact it’s not an input to your function might be why it has the signal you see. Kind of a catch-22 for lazy managers.
Yeah, we've tried to be reeeeeaaally careful not to let this become important for evaluating performance, given all the pitfalls it has and how it can be gamed. At the end of the day nothing beats actually reviewing the PRs themselves and trusting the feedback from peers, in my opinion.