
Your definition is not actually the definition of objectivity, though it is one way to be relatively confident that you are being objective, so I won't argue the semantics too much.

All of these can be objectively measured to a degree if you actually care to take the time:

* Logical ordering of thought: identify and diagram the main ideas in the text. Identify transitions in the text. Identify explicitly named connections between pieces. Multiple people can do this and expect a high degree of similarity in their results.
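To make that "multiple people, compare results" check concrete, here's a rough sketch: each reviewer lists the main ideas they identified, and agreement is scored with pairwise Jaccard similarity. The reviewer names and idea sets are invented for illustration.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Size of the intersection over size of the union."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Each reviewer's independently identified main ideas (made-up data).
reviews = {
    "alice": {"caching", "retry logic", "rate limiting"},
    "bob":   {"caching", "retry logic", "logging"},
    "carol": {"caching", "rate limiting", "retry logic"},
}

# Pairwise agreement between every pair of reviewers.
scores = {(x, y): jaccard(reviews[x], reviews[y])
          for x, y in combinations(reviews, 2)}
for pair, score in scores.items():
    print(pair, round(score, 2))
```

High pairwise scores suggest the "logical ordering" judgment is reproducible rather than one reviewer's taste.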

* Separation of components: similarly, identify and diagram the components they list by name, the relationships they identify by name, and the responsibilities they identify by name.

* Technical vocabulary: list all of the technical terms. Compare their usage against a dictionary.

* Ability to code: run their code. Does it complete and produce the expected output? This is absolutely objective. You can add further constraints and retain absolute objectivity: does it complete within a certain time, stay within a certain memory budget, stay within a certain cyclomatic complexity threshold, have a certain percentage of test coverage, etc.
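A minimal sketch of that "run their code" check: execute a submission as a subprocess, then verify that it exits cleanly, prints the expected output, and finishes within a time budget. The submission, expected output, and budget here are placeholders.

```python
import subprocess
import sys
import tempfile
import textwrap
import time

# A stand-in candidate submission (in a real harness this is their file).
submission = textwrap.dedent("""
    print(sum(range(10)))
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(submission)
    path = f.name

start = time.monotonic()
result = subprocess.run([sys.executable, path], capture_output=True,
                        text=True, timeout=5)  # hard wall-clock cutoff
elapsed = time.monotonic() - start

passed = (result.returncode == 0              # completed without error
          and result.stdout.strip() == "45"   # produced expected output
          and elapsed < 5.0)                  # stayed within time budget
print("PASS" if passed else "FAIL")
```

Memory budgets, complexity thresholds, and coverage percentages can be bolted on the same way, each as another boolean check, which is what keeps the whole thing objective.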

* Familiarity with data structures and algorithms common to the problem domain: list the major constraints of the problem domain, list the data structures according to the features that address those constraints, and similarly list algorithms. Compare to the candidate's answers. How many of the major concerns did they address? How many of the applicable data structures/algorithms did they know? Did they volunteer anything new, and were they able to explain how it addressed the problem constraints?

* Understanding of the cross-cutting concerns: this could almost be a checklist, though I would make it a little more involved. As I mentioned, Q&A: see what solutions they present. But to have a quantifiable metric, we can identify the major components and the major concerns each of those addresses, see how many the candidate reached, and give bonus points for valid concerns they addressed that we didn't.
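One way to turn that checklist into a number: score the candidate's answers against a prepared list of concerns, with bonus credit for valid extras we hadn't listed. The concern names and the 0.1 bonus weight are invented examples, not a recommendation.

```python
# Concerns the interviewers prepared in advance (made-up examples).
expected = {"caching", "auth", "rate limiting", "monitoring"}
# Concerns the candidate actually raised during the interview.
candidate = {"caching", "auth", "monitoring", "backpressure"}

hit = expected & candidate     # prepared concerns the candidate reached
bonus = candidate - expected   # valid extras we hadn't thought of
score = len(hit) / len(expected) + 0.1 * len(bonus)
print(f"{len(hit)}/{len(expected)} expected concerns, "
      f"{len(bonus)} bonus -> score {score:.2f}")
```

The subjectivity lives in choosing `expected` and deciding which extras count as valid; the arithmetic after that point is objective.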

I'm sure if I spent more time I could expand both of these lists.

I will concede that this is still subjective in many ways, especially in the interviewer's choices of what is "correct" (what are the problem constraints, etc.) and which parts of the answers are important.

In that regard I will concede to you that there is an ultimately subjective nature to most of this, because deciding what is valuable has an element of subjectivity, but that is going to be true of pretty much any pursuit outside of pure mathematics (and I'm not convinced we have entirely objective values there either). However, once we have decided what we value it's possible to eliminate a lot of the subjectivity from measuring it. In most interview processes it's not a lack of ability to be objective, it's a lack of concern about being objective.

And actually, I'm not too bothered by that. A healthy meritocracy does not require absolute objectivity. What it requires is an explicit statement of what the values are and a transparent means of evaluating people against those values, and not according to any other values. The values can be subjective. The evaluation can be subjective. As long as the values are known and the evaluation process is transparent, it can function as intended. Even better, by clearly communicating the values of the system you send a strong signal to others so they can determine if your organization is something they want to be a part of.

Objectivity is a good tool to help maintain that transparency. But I'm not worried so much about the subjectivity of it as I am hidden values and opaque evaluations tied to things that should be irrelevant according to the stated values.


