Hacker News

>> Of course computer architecture, compiler knowledge etc is valuable and part of the interview process. I was referring to the general process of evaluating- - thus the question was focused on methods not content.

The general process is fucked up because the interviewers are so incompetent that they can't tell who's actually smart. This is why incompetent interviewers resort to binary decision-making on programming questions.

Here's a way to think about it. Let's take two candidates with equal communication skills:

1. This candidate solves all coding problems fast, with great syntax and good style, and discusses algorithmic complexity well.

2. This candidate is smart and can go deep in their specialty (because they've spent years developing complex expertise), but couldn't do a medium-level tree traversal on a whiteboard without hints, or has bad coding style there.
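For calibration, a "medium-level tree traversal" of the kind mentioned above might look something like the following Java sketch (class and method names are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal binary-tree node for the example.
class Node {
    int value;
    Node left, right;
    Node(int value, Node left, Node right) {
        this.value = value; this.left = left; this.right = right;
    }
}

public class Traversal {
    // In-order traversal: left subtree, node, right subtree.
    // For a binary search tree this yields the values in sorted order.
    static List<Integer> inorder(Node n) {
        List<Integer> out = new ArrayList<>();
        if (n != null) {
            out.addAll(inorder(n.left));
            out.add(n.value);
            out.addAll(inorder(n.right));
        }
        return out;
    }

    public static void main(String[] args) {
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        System.out.println(inorder(root)); // prints [1, 2, 3]
    }
}
```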

Who do you see getting picked (at Google)?

I remember a friend of mine at Google (a Java programmer) docked points from a candidate (a C programmer) because the candidate passed the length of an array as an argument. I was like, wtf? Seriously? You don't know C?
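For context on that anecdote: in C, an array parameter decays to a bare pointer with no length attached, so passing the length explicitly is idiomatic; in Java the length travels with the array object, which may be why the idiom looked wrong to a Java programmer. A minimal Java sketch of the contrast (names are mine):

```java
// In Java an array knows its own length, so passing it separately is redundant.
public class ArrayLength {
    static int sum(int[] xs) {
        int total = 0;
        for (int x : xs) total += x; // xs.length is implicit; no extra parameter needed
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3})); // prints 6
    }
}
// In C the equivalent parameter decays to a bare pointer, so the caller
// must pass the length explicitly: int sum(const int *xs, size_t n);
```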

The data-driven, committee-based process completely ignores that not all interviewers are equal. Their written feedback isn't equal either.

There's a reason scouting still exists in baseball/soccer/football: data-driven processes have given mediocre results at best. Interviewing is much like scouting. You cannot use mediocre engineers to identify who's a real talent.



The process is not designed to find "real talent."

"Real talent" is innate and thus no amount of outside valuation will ever do it justice. No one else, but the "real talent" holder, can effectively judge his "real talent" so it is up to him to make it known to others that he does, in-fact, hold "real talent."

The process is designed to find people that are good enough to do an array of jobs. "Real talent" gets scouted, because it's shown that it "has got the right stuff."

This argument boils down to something like the "smart but lazy" exasperation: "I'm talented, but no one realizes it!"

Candidate 1 is getting the job at my startup, and Candidate 2 will be getting a referral if he ever decides to take up a teaching position. Deep specialization is useless unless you're doing consulting (in-house or out-of-house) or teaching.


Like I said, I speak for myself, not for Google.

They're both valuable, perhaps for different types of work.

A balanced tree does not tell me much about a person's knowledge, but how they are able to apply and learn CS concepts does. Deep knowledge of a problem domain is valuable, but also highly perishable - the world changes fast and good engineers need to learn and adapt.

Furthermore, if a candidate speaks about a domain they have deep expertise in, we have two more problems: 1. The interviewer might not know enough about the field to assess the candidate. (Generic CS algs and data structures are a good common denominator.) 2. A candidate's knowledge about a field doesn't tell me how good they are at reasoning. They may have spent 5 years doing that one thing - that's enough time to learn almost anything in a limited domain - say OS schedulers.

How would you structure an interview process?


Good arguments, and we're getting there. You seem open-minded, unlike others, so I'd like you to answer the following:

>> Furthermore, if a candidate speaks about a domain they have deep expertise in, we have two more problems: 1. The interviewer might not know enough about the field to assess the candidate. (Generic CS algs and data structures are a good common denominator).

So the problem is with the interviewer and not the candidate, right? The interviewer is unqualified to interview this person.

>> 2. A candidate's knowledge about a field doesn't tell me how good they are at reasoning. They may have spent 5 years doing that one thing - that's enough time to learn almost anything in a limited domain - say OS schedulers.

Couldn't the same be said of candidates good at tiny whiteboard algorithm puzzles? They may have spent 5 years doing that one thing - algorithmic puzzle solving. The fact that an industry of books and coaching sessions exists should prove this, right? Your process doesn't give as much weight to the passion to go deep into something productive as it does to the passion for solving toy algo puzzles.


That's nice of you to say, though I'm not sure I earned your compliment.

Re experience - yes, if you're interviewing someone for deep Java or ML knowledge, certainly make sure the interviewer can assess that.

Logic puzzles are not perfect - I used to be really good at solving them, and if I ever had to interview on them again I'd definitely have to practice first to get back up to speed. But I do think that being able to solve them is a meta-skill: getting really good at them entails being good at reasoning about and solving complex problems, or at least a specific class of them. If you are good with trees, you'll likely be able to answer a trie question. It doesn't mean you're just good at that one thing, but that you can apply knowledge to new problems - which is exactly why you are hired.

An equivalent real question I got in Java was how to access a private member of a class. If I just knew the answer, it would mean I have some random bit of trivia in my head. If I solved it using Java knowledge, it would mean I can apply my domain expertise in a new way - that's compelling proof of problem-solving skills. And if I asked why you would want to do that, it would indicate reflective and critical thinking skills.
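For the curious, one reflection-based answer to that Java question might be sketched like this (the class and field names here are hypothetical, and setAccessible can be blocked by a SecurityManager or, on Java 9+, by module access rules):

```java
import java.lang.reflect.Field;

// Hypothetical class holding a private field we want to read from outside.
class Secret {
    private String token = "s3cret";
}

public class PeekPrivate {
    // Uses java.lang.reflect to bypass the compile-time access check.
    static String readToken(Secret s) throws Exception {
        Field f = Secret.class.getDeclaredField("token");
        f.setAccessible(true); // suppresses the Java access check for this Field
        return (String) f.get(s);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readToken(new Secret())); // prints s3cret
    }
}
```

Other routes (serialization tricks, bytecode manipulation) exist too, which is part of why the question rewards applied knowledge over memorized trivia.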

A seasoned engineer in ML or Java should, imho, also be able to solve puzzles. Puzzles are, in many ways, better proof of abstract thinking and problem-solving skills than specific knowledge. But I agree they are both important - if I want to build a search engine, I'd prefer to have a seasoned ElasticSearch lady lead the project over an equally smart person with no background.

Re your friend: if it helps, I interviewed at a lot of companies and probably failed about 70% of interviews in the first week or two. I got much better after a few weeks, which proves your point that interviewing is also a skill - and I mean for both tech and behavioral questions. That's normal, I think.

Re Google, while I don't represent the company, there are public statements that they'd rather have false negatives than false positives. It's better to say no to a great applicant (politely) than yes to a bad one. If you've ever had to let someone go, or deal with a bad team member, you know what I'm talking about.

Of course there's the occasional interviewer who asks a silly question or gets something wrong. Do you never make mistakes? It's impossible to ensure everything goes perfectly in such a large company, but I can tell you, as someone who works here, that people try really hard to do a good job at it and leave everyone with a good impression. It's the rational thing to do: if you treat people poorly, 1. you're an asshole and 2. they'll hate you. Nobody likes assholes, especially at Google, and I can share that I haven't met one yet. That's pretty amazing given the size. Thanks for engaging in this discussion, I really enjoy it.


Great reply. And I'd like to refute it :)

Let me preface this by saying that I am not against coding interviews in general. But I am against the interview process as it is implemented NOW. As implemented today, the process is overweighted toward coding the optimal solution with good coding style.

Translated to humanese: "It doesn't matter if you invented the Linux kernel at your last job. We can't hire you because you didn't solve this tree problem in time, or had bad coding style COMPARED TO OTHERS who don't know shit but can code puzzles on a whiteboard."

What I think an ideal interview score looks like:

Score = w0*(whiteboard coding skills) + w1*(algorithms) + w2*(area of expertise) + w3*(design) + w4*(let's work together to invent something new / passion for engineering)

As it stands, the interview process has the following weights: w0 = 49%, w1 = 49%, w2 = 1%, w3 = 1%, w4 = chump change.
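In code, the proposed score might be sketched as follows (the weights and ratings are purely illustrative, not any company's real rubric):

```java
public class InterviewScore {
    // Weighted sum of per-dimension ratings: whiteboard coding, algorithms,
    // area of expertise, design, passion/invention (in that order).
    static double score(double[] weights, double[] ratings) {
        if (weights.length != ratings.length)
            throw new IllegalArgumentException("one weight per rating");
        double total = 0;
        for (int i = 0; i < weights.length; i++)
            total += weights[i] * ratings[i];
        return total;
    }

    public static void main(String[] args) {
        // The weights the comment claims the process uses today.
        double[] claimedCurrent = {0.49, 0.49, 0.01, 0.01, 0.0};
        // A hypothetical deep specialist: weak on whiteboard, strong elsewhere.
        double[] specialist = {3, 3, 9, 7, 9};
        System.out.println(score(claimedCurrent, specialist));
    }
}
```

With these claimed weights, a specialist's strengths in expertise, design, and passion barely move the total, which is the comment's point.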

See what I did there? The way interviewers at these companies hire is by calibrating their favorite coding puzzle over a wide range of candidates. They simply don't account for the fact that the person might genuinely be good at something the interviewer doesn't even know exists.

>> A seasoned engineer in ML or Java should, imho also be able to solve puzzles. They are, in many ways, better proof of abstract thinking abilities and problem solving skills than specific knowledge.

You keep asking for proof that the candidate is awesome. Where's the proof that the interviewer can adequately judge along the axes of the equation I gave above? The candidate could be seriously good at ML, but if you send the average Java coder at Google to interview them, then gosh, how would they even recognize his genius? Now you'll say we have 5 rounds to eliminate biases. Buuutt, the way the hiring committee reads the interview packet is, and I quote you:

"It's better to say no to a great applicant (politely) than yes to a bad one"

Which means that even if two interviewers think this awesome ML guy is GREAT, the candidate won't be hired, because the other 3 interviewers were incompetent and marked the candidate no-hire.

What I'm trying to say is that the hiring-committee-based process actively encourages removing smart people and selecting the ones who solve whiteboard puzzles. The candidate could be a shit engineer in person, but as long as they solve code puzzles in those 5 hours, they get hired. Make sense?

Look, I'm not saying we need to eliminate coding rounds. All I'm saying is: balance it out. A person who can solve a tree problem in one round surely doesn't need to be asked another stupid coding problem (which is asked just to make sure some random crackpot didn't slip in through the cracks). I mean, if you don't have enough confidence in your own interviewers that you need to keep asking those problems, what does that say about the process? That you don't trust your own interviewers? Isn't that the real problem?

You can choose to ignore all feedback I give here but then, I'd specifically call Google out and say that they're not a great engineering company until they figure out that their hiring is broken and they need to stop having their mediocre engineers keep rejecting the good ones.

Or... if you really care about working with great peers, show this answer to the hiring committee and see what they think. They need someone to tell them, "Hey, why are you implementing a process that self-selects for mediocrity?"

P.S.: Why do I know it is mediocre?

1. Other companies have tried the same process and it didn't translate into anything.

2. You know there are enough mediocre engineers at your workplace :)


One more thing: the interview process is as much about assessing team dynamics as skill/ability. I wouldn't hire Linus at my own company because he's kind of an asshole. I could be wrong, but this is the perception I have from interviews, etc. I love his work but can't say I'd like to work on a team that shares his style. It's important to stress that I never talked to him; I just saw some interviews. His emails actually seem very reasonable and thoughtful, and he comes across as a regular person.


I agree with your general approach to a more holistic interview process. I haven't thought too much about the weights so can't comment there.

I'm pretty sure the weights vary depending on the role: interviewing a tech lead for ML requires deep experience in that field, and the interview will have more of those questions than basic CS algs - or at least a higher weight on them than it would for a fresh graduate.

It's not my place to judge the average ability of Google engineers. I'd guess it's a normal curve somewhat skewed to the right since the bar is really high. In my limited experience they're on par with top silicon valley engineers but have a different profile than startup ones due to self-selection.

The point of many interviews is to reduce false positives, both for skill and professionalism. It works amazingly well. Every single person I talk to has something interesting to say and is at the very least competent at what they are doing. Not everyone feels brilliant, but everyone does seem bright. And I'm not drinking the Kool-Aid.

Hiring good people is easy if you just test for skills. It's much harder to make sure someone is a genuine person, or at least not an asshole. Google does a tremendous job here and I worked at enough startups and a few large companies to know how incredibly difficult that is.

That other companies failed trying to do this can be because of many reasons, from bad engineers to start with, lack of resources, mistakes in assessing ability or rush to get devs on the team asap. Again, not my place to comment.

Every company asking its employees to interview prospects implicitly assumes they are good, or at least competent. If it doesn't, it has a bigger problem to start with.

If they hired someone, they already think that person is good, right? And if they ask them to interview someone else, they think they're able to judge - or they don't follow that logic (in which case they have even bigger problems).

How does a mediocre engineer fail a stellar one? If the candidate is really better than the interviewer, won't they be able to solve anything they ask them to?

I think the reason multiple interviewers ask CS questions is to avoid the issue you mentioned earlier: asking one question the candidate just so happens to know really well. Random sampling should solve it.

I interviewed at Palantir a long time ago and was asked super hard CS questions over the phone and in the first rounds and I killed it. The last round I was tired because I hadn't slept well and tanked one problem. Just one, everything else I had solved really well. Yes I thought it was silly to reject me and wish they had given me another shot. If you are at Palantir and are asked to interview someone, think they are bad, and they get hired, it's demoralizing - you just wasted your time and the company doesn't value your opinion. Sure, you could have a score and average things but again, firing is way way worse than saying no to a good candidate who just had one bad question. And everyone I interviewed with was legit. (I had another series with them later, for a different role and got an offer :-) )

I agree that weights could be adjusted, but 1. how do you know that would yield better results? (Show me the data - I don't think anyone has it.) And 2. it can also be a cultural choice to bias more toward CS skills vs design or whatever else. Perhaps the company has great design patterns and can teach those skills very effectively. I'm not saying it's true, just an example. If you had a company and ran the interview process your way, I promise there would be someone just like you who vehemently disagrees with your weights :-). But if you could prove that process A vs B yields better outcomes, most companies would definitely follow suit (if they pay attention, are good at adjusting their processes, etc.).

I don't actually know how they read packets, I just know they work really hard to avoid false positives, and I wholeheartedly agree.

Btw I was rejected by Google twice before - resilience is part of the game ;-)



