All of these supposed "flaws" of leetcode are by design. Big companies want people who are smart enough to do the work, but obedient enough to put up with all the bullshit that comes with working at a big company. What person better matches that than someone who's able and willing to study for and pass a tech version of the SAT? Every anti-leetcode article I read is some version of "leetcode is bad because it measures the wrong things." No, we all know it measures those things, and those are exactly the things the measurers want to measure.
You might ask, so why do startups do leetcode too? I heard startups are supposed to be, uh, innovating, developing new technology, and working on hard, meaningful problems? Shouldn't they want brilliant, super effective people, instead of smart-enough, obedient workers? Apparently not. Apparently they want the same workers bigcos want. The implication of this is left as an exercise to the reader.
The idea that companies are using leetcode tests for rational reasons goes against everything I've seen, experienced, or read about. What I have seen, working at startups for more than 20 years, is irrational, self-destructive behavior, over and over and over again. I've worked with many entrepreneurs who have a few million dollars in the bank and a great idea, but who self-destruct for two big reasons: ego and fear. I wrote about this in How To Destroy A Tech Startup In Three Easy Steps, where I talk about two cases that I saw with my own eyes. This kind of self-destruction is much more common than the kind of clever, rational behavior that you impute to these companies. And I think many other people are witnessing this kind of irrational, self-destructive behavior too; consider the reviews posted on the book's page, where quite a few people chimed in to say the book matched their own experience:
I think it's quite possible that both you and the parent comment are correct.
Based on my experience, you are most certainly correct that the companies using leetcode aren't following any line of reasoning, and certainly not one as sophisticated as what the parent is implying. Most decisions made by "leadership" are irrational and self-destructive, as you describe (your book looks very interesting, btw).
However, what the parent is claiming about the effects of the leetcode interview process is certainly true. I also think the parent is correct that these perhaps-unintended consequences of the leetcode interview, attracting a docile though technically competent work force, are beneficial to large organizations.
I also think the parent's second point, that these consequences are not beneficial to startups, ultimately aligns with your view: startups do great harm to themselves by mindlessly aping the behavior of big-name tech cos, mostly out of ego and fear.
> However, what the parent is claiming about the effects of the leetcode interview process is certainly true
And once you hit LeetCode employee critical mass your culture becomes LeetCode employees. Management may have started it, but employees amplify and make it ubiquitous (especially engineers promoted to management/hire/fire).
> in attracting a docile though technically competent work force
Bingo. IMO this hits the nail absolutely on the head. $BigCo wants docile people who will show up and jump when they're told to jump without asking questions or making a fuss. And they'll throw a big bag of money at you to do so.
Obviously this is an incredibly narrow/limiting employment experience. But different strokes for different folks I guess
Would love to talk about the dating app world. It's so funny how many founders come into it thinking their "revolutionary matching algorithm" or whatever is going to make the greatest dating app ever. Until you realize that the problem is human nature, and no one wants to be told who they should match with. People want a name, an age, and a face; that's it. And anything that gets in the way of that will lead to your app being ignored.
And a location/distance, that seems to be pretty important most of the time, from what I can remember, but otherwise yeah.
We had one of those people as a client; he thought his "special sauce" of Myers-Briggs, some neuroscience research test bullshit we had to build into the system and have people take, and some bullshit matching algorithm based on astrological sign would be the next big thing. Of course it wasn't, but my boss was happy to take the guy's money to build it anyway.
The client actually told us that it would have ten million users within three weeks of release, with basically no money spent on marketing. It did not. Nowhere close.
Somewhat relevant off-topic: ego and fear are also described in Eastern philosophy as the two biggest obstacles on your way. Ego is the illusion of separateness, the great seed of all human evils, that takes a monumental effort to get rid of. Next comes the fear of making the next step, "taking on more responsibility" as we would say here, because that new responsibility feels so overwhelming. Those who fail to make the leap of faith (or "to challenge themselves" in corpspeak) fall, and have to climb up again.
Thanks for recommending "How to Destroy a Tech Startup in 3 Easy Steps" - loved it!
edit: sorry, just realised you are the author!! (lack of sleep - not work related ))). Great reading, really enjoyed it. "Sital" gonna be used as a nickname )) Thanks mate! :praise
Every unicorny startup I've interviewed w/ that had a standard big-tech interview loop (1 LC style screener + final w/ 2-3 rounds of LC + 1 system design) was chock full of ex-big tech engineers and managers, replete with stories about wanting a faster paced / dynamic environment.
So yes, that absolutely is who they are recruiting. It probably comes down to no more than believing big tech companies have the very best in the industry (ie. like being in an Ivy League school) and standardizing the hiring bar w/ other companies they're actively in competition with. More than obedience, it probably is the aspect that they're smart and determined to succeed.
> More than obedience, it probably is the aspect that they're smart and determined to succeed.
Yup that's what it's about!
The problem with trying to test how good you are at the job is that you simply can't do it in a 1 hour (or even a 1 day) interview. You can however assess how smart somebody is, which is definitely correlated to job performance. It's also connected to growth potential - even if it were somehow possible to accurately measure how good a programmer a candidate is at the moment they are applying for the job, companies would still want to try to predict how good they are likely to be over the next few years.
Not OP, and I don't think it directly indicates how smart you are, but it does show you are at or above some smartness threshold (i.e. smart enough to get the answer).
If you don't get the answer, you might still be above that smartness threshold, but didn't get it for some other reason (didn't have time to study, didn't sleep enough the night before, interview anxiety, etc.).
Yes, being able to understand and apply algorithms and data structures requires some intelligence.
I know we try to stay away from the “we’re smarter than you” vibe here because it’s gross, but pretending leetcode style questions don’t require some degree of complex thought is absurd.
Personally, I need to practice leetcode style questions pretty regularly in order to perform reliably without reference materials. If more than a couple of months go by, the details of particular algorithms begin to fade.
I think it's unreasonable to expect people to spend their off time practicing these sorts of problems, in the same way we think it's unreasonable to require everyone interviewing to be working on personal development projects over the weekend. I suspect most working people simply do not have the time.
Lol yeah, apart from sanitation, medicine, education, wine and public order, what have the Romans ever done for us? In your view, that's not enough to make an informed eng hire?
Lol. So you can't give me any concrete examples. Got it.
Nothing tells me more that someone is full of it than their simply being dismissive of others. I asked for an example to support your argument; instead you chose to be defensive and give me attitude.
Exactly. If you have a candidate who can design good systems, solve complex algorithmic problems, and play nice with others, then I’d say that’s a pretty strong candidate.
IQ doesn't indicate determination or a hard-working attitude at all. In fact, many people with high IQ end up being nobody: high IQ makes learning relatively easy, most get used to that and just let the 'grit' go.
leetcode is the SAT for coding, not perfect, but at least it's close to fair play.
> IQ doesn't indicate determination or a hard-working attitude
this is what everyone said a college degree was for. Or experience. Neither of which matter when a senior dev with 10 years of experience and two kids still has to find an hour or two a day for three months in a row to grind out textbook algorithms to fake problems. Just to switch their fucking job. Thanks to cargo-cult insanity, god help those stuck in miserable jobs that just want out.
I mean, if the actual solution in a work environment is "use an existing solution from people who already did it," but you have to solve it from rote memory, then it's a fake problem.
The actual solution when you encounter a differential equation at work is Wolfram Alpha, not getting out pen and paper to apply heuristic knowledge of solving differential equations.
> leetcode is the SAT for coding, not perfect, but at least it's close to fair play.
It would be amazing if leetcode were like the SAT, and you could just get one good score and then never think about it again.
Anything like that would make it much lower-friction to switch between FAANGs (and friends), though, which I suspect is a big part of why they've settled on doing things this way.
That’s a fascinating idea. Get companies to accept some form of standardized testing and have it be transferable. That would greatly increase the motivation to study for it, I'd presume.
I, for one, will probably never subject myself to a FAANG-type interview, but absolutely would study for and take a similar standardized test if it unlocked the same kind of opportunities, and I didn't have to re-take it with every interview.
This is a startup idea, in fact. Wait, someone is doing that: https://codesignal.com is one I just learned about yesterday, but it's not 'standardized' the way the SAT is.
In fact, I believe software should have some qualification tests, e.g. general coding, databases, cloud computing, etc. Like the CPA for accountants. Each test should be valid for a few years in each category.
The hard part's not creating some kind of certification, it's getting desirable employers to accept it as a replacement for the most-painful parts of their interview processes. I suspect a lot of top companies don't want to make it easier to jump between them.
The "most painful" part of any interview process is just the part that you happen to be the worst at. Personally, I don't find leetcode to be painful at all and would be much more enthusiastic about something that eliminates any part of the interview where you have to talk about yourself.
Sure it's a great idea and hiring culture would be so much better if it happened, but I think the parent is correct that companies want friction at that point. None of them want to make it easier for talent to jump ship and transfer.
Certification is generally binary, though - you're either certified or you're not. This might be a good thing though, as it establishes a decent floor for technical competency and can save everyone from having to deal with at least some of the technical interview nonsense.
What some of the people in these comments seem to want though is some sort of standardized score/ranking system, which I suspect might lead to even more nightmarish outcomes than the current leetcode-y interview processes. (e.g. Employers setting absurdly high cutoff scores, choosing one applicant over another simply because they scored a couple points higher, applicants grinding unimaginable amounts of unpaid hours to bump up their scores a bit, etc.)
I've done first party certification and it was always a joke -- they need you to pass so you can get your employer to use their services. Are there trusted third party certification services that are willing to fail half or more of their customers?
For the SAT or GRE, my understanding is that they have a huge pool of problems, and before the test some are picked and assembled somehow. I hear no one complains about the SAT's or GRE's selection of problems, so it might be less of a concern.
I can't tell if you are being facetious or not. I hope you are being facetious. I really do.
As many other comments have indicated right here on Hacker News, professional licenses are commonplace for occupations such as plumbers, electricians, doctors, lawyers, and even many types of engineers.
I suppose that central governments (such as the US federal government) should offer licenses for myriad types of software engineers, hardware engineers, software architects, hardware architects, and so on.
Wouldn't it help companies if they could choose to interview only candidates who were licensed as, say, a level three penetration tester (intermediate penetration tester) or a level five database architect (expert database architect)?
The whole "let's reinvent the wheel" mentality surrounding software, The Internets, and hardware simultaneously bemuses and frightens me.
"The eye never has enough of seeing, nor the ear its fill of hearing. What has been will be again, what has been done will be done again; there is nothing new under the sun." King Solomon, Ecclesiastes.
and
"I only wish that wisdom were the kind of thing that flowed ... from the vessel that was full to the one that was empty." Plato, Symposium
Why don't we simply allow unqualified, blind, inebriated people to drive automobiles on public roads at whatever speed they would like? Why don't we let a guy who watched a bunch of YouTube videos call himself a brain surgeon, and perform brain surgery on people who don't even need brain surgery in the first place? Hey, wait, i've gotta gureaat idear: y botherr haviing aany ruuules at al! Sheesh.
Without rules, men simply return to a state of nature where life is nasty, brutish, and short (Hobbes).
I just found this on Google...
******
Origin of Life is Nasty, Brutish, and Short
This expression comes from the author Thomas Hobbes, in his work Leviathan, from the year 1651. He believed that without a central government, there would be no culture, no society, and it would seem like all men were at war with one another.
******
The “Wild West” mentality of folks who seem to believe that rugged individualists (not federal agencies such as the US Department of Defense) built Silicon Valley is perched atop the same type of popular yet nonsensical mythology (falsehood) as Horatio Alger's famous character, who was actually named Ragged Dick (really, Ragged Dick was the character's name, I am not being facetious), and who metaphorically pulled himself up by his bootstraps to rise from street urchin to CEO and wealthy industrialist.
Imagine a military, any military, anywhere, anytime in human history, that didn't have ranks, titles, and gasp... tests which members had to pass to move up the ranks. How well do you suppose a military without ranks and without tests would fare in combat? Obviously, such a military would be in a state of hopeless disarray.
Licenses are not a necessary evil; they are a good and proper way to identify and reward qualified professionals, while simultaneously enabling "the rest of us" to know, for example, who's a mere private, whom we can walk past without batting an eye, and who's a colonel, whom we must stop and salute.
Imagine a hiring manager saying, "Hey, this kid never went to college, but he's a freshly minted level one software engineer. I say we bring him in for an interview."
Why should every company need to create their own initial screening tests? Imagine a trucking company that needs to hire a truck driver with a particular type of commercial driver's license (CDL). In the employment advertisements they post, such companies almost invariably include verbiage such as, "Class A CDL required" or "Must have a Class B CDL." See? The candidate must have already passed an initial screening test, by acquiring a particular type of license, prior to being granted an interview.
This is obviously a huge benefit to both candidates and companies alike because it saves both sides a lot, and I mean, a lot, a lot, a lot... of time!
These days it is comically inane that tech candidates are normally expected to take an endless stream of initial screening tests to prove their mettle to each and every prospective employer (unless they were referred, famous, or for some other reason exempted from the requirement).
Imagine a CPA (certified public accountant) being required to pass a basic auditing test before he was granted an interview for a new job. Why would a company ask a CPA to take such a test? If a candidate is a CPA, then (unless, for example, he cheated on his CPA test, or suffered some sort of memory loss) he has already proven that he has a substantial amount of knowledge about auditing.
Yes, of course companies should administer their own tests. But professional licenses can, and do, enable both candidates and companies to avoid the sort of initial screening test which companies commonly require of software engineering candidates. Currently, initial screening tests, such as LeetCode, are a response by hiring companies that are typically deluged with a sea of unlicensed (and almost entirely unqualified) candidates, all of whom claim to be qualified.
Nice rant.
It’s full of logical holes, of course, as there’s no common definition of tech roles and their titles, let alone a common understanding of the included tasks.
There are React Boot Camp folks who can whip up a frontend in no time who are arguably more competent at this task than a PhD in Comp Sci, and there are PhDs in Comp Sci who never wrote a lick of code in their life.
Technology in software IS Political, and 30 years of development experience shows me various parallel worlds that are arguably better than our current one.
Someone who grew up coding and loves hacking as close to the bare metal as possible is worth more than a dozen of your certified software engineers.
Let me know when the world standardizes on a Comp “Sci” curriculum.
Actual computational science has very little to do with computers and software development, which is a thesis I will be happy to support if pressed. For now, I end my rant.
How is it fair? It tremendously favors young graduates and people with lots of time to prepare.
Would it seem sensible to you that every time a doctor applied for a job, even if he has 20 years of experience, he'd need to compete with recent medical school graduates on some first year medical school exam?
I was just entertaining OP; I don't think IQ correlates that well with SW engineering job performance. It's only one of many, many factors (albeit probably the easiest thing to test for: testing curiosity, emotional intelligence, motivation, work ethic and resilience is much, much harder).
But if we're saying Leetcode is simply testing for IQ and not really for software engineering ability, why not just test for IQ? IQ tests are designed in a way that after a certain threshold it's pretty hard to improve in them - they actually test some innate ability.
Leetcode tests how well you are prepared for Leetcode, and perhaps how well you do under stress. It correlates only slightly with intelligence and even less slightly with programming ability.
I think the fact that leetcode needs to be studied incorporates work ethic/motivation/resilience alongside IQ. Also, at some stage in the interview process you have to interact with people which will expose your social skills/emotional intelligence to some degree.
And algorithmic questions are a straightforward way to get a scalable, structured interview round. It's difficult to entirely replace algorithmic rounds with questions that are both scalable and structured.
Leetcode doesn't measure "smart and determined to succeed". It measures "has enough extra time and energy to devote to practicing pointless brainteasers for weeks."
In other words, whatever it's intended to do, one of its primary functions in practice is to screen out people who are bright, driven...and poor, working long hours and trying to keep themselves and/or their families going.
Weeks? There are stories on Blind and the LC forums of people taking upwards of a year with pretty hefty real-life constraints. And why wouldn't they, given the opportunity to essentially double comp over standard industry jobs? That's a life changing opportunity if you're in a somewhat unfortunate life situation.
That it's a bit of a sacrifice is the point. If you're naturally smart enough this stuff comes quickly, great. If not, you're going to have to work for it - that requires discipline most people don't have.
Whether you realize it or not, you're advocating for keeping people who aren't already mid-to-upper-middle-class (among others) out of these kinds of tech companies. Anything designed to make you work extra hard, outside of work, to learn separate skills just to pass interviews is guaranteed to make it disproportionately harder for people who are, for whatever reason, unable in practice to devote many hours of their free time to fairly complex technical studying.
No matter how "disciplined" they are, people already working 80 hours/week just to put food on the table don't have the luxury to be doing that. No matter how "smart and/or determined to succeed" they are, people raising 2 kids by themselves would be irresponsible to be doing that.
Now, maybe you think that kind of person should be denied an opportunity to join the super-1337 hackaz' club that is FAANG or whatever. Personally, I think that kind of classist gatekeeping is disgusting.
While I certainly wouldn't presume to claim that no programming jobs are remotely related to Leetcode—I'd be perfectly willing to believe that you, personally, have to engage mentally with "classic" algorithms on a daily basis—from everything I've read about them, I doubt very much that more than a minuscule fraction of programmers and other tech workers do anything on a day-to-day, or even month-to-month, basis that would be close enough to Leetcode that they could be considered similar skills.
I've been working as a programmer—largely PHP, Java, Objective-C, and Swift—for over 20 years now, and never, in my work, have I been asked to invert a binary tree, reverse a singly-linked list, or any similarly contrived scenario.
There's a huge difference between "thinking about algorithms" in the most general sense of that phrase (which means basically any programmer who does any of their own design work) and in the sense of the particular "algorithms" that we learned in CS classes and are featured in Leetcode problems.
Under no circumstances did I ever make algorithmic decisions in an informational vacuum and then cowboy-code my way forward; or rather, one seeks to avoid the mistakes of one's youth.
Search engine: read material about the heart of the problem from domain experts, try different approaches, and then decide.
I only took one CS class in college: Cybernetics with Huffman. I failed! So clearly, that was a strong predictor for my future 12-year career at Google (where I failed to do quicksort, or virtual-ants problems). Nowadays I take a lot of extra time to come up with questions that aren't in leetcode, are easier than leetcode, and give far more signal than any leetcode question would, for anything except a 10X Staff Engineer.
To be honest? Every programmer needs to be able to pass FizzBuzz.
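For anyone who hasn't run into it, FizzBuzz is about the lowest bar there is; a minimal Python version might look like this:

```python
# FizzBuzz: for 1..n, emit "Fizz" for multiples of 3, "Buzz" for
# multiples of 5, "FizzBuzz" for multiples of both, else the number.
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out

print("\n".join(fizzbuzz(15)))
```

The point of the question isn't cleverness; it just confirms the candidate can write a loop and a conditional at all.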
Beyond that, I add some more basic work around byte representations of data (I have 4 symbols... how many bits do I need to encode a symbol?).
I will typically also include one major bug in a piece of code and ask the candidate to identify it (with hints).
I once interviewed a guy- a CTO at a biotech- and he wouldn't answer the question "I have a million DNA sequences and want to count the number of occurrences of each sequence" (could be hash table, could be any number of other solutions; he just balked and exited the interview).
Only if the person can get all that right would I even consider asking something more complicated.
> I once interviewed a guy- a CTO at a biotech- and he wouldn't answer the question "I have a million DNA sequences and want to count the number of occurrences of each sequence" (could be hash table, could be any number of other solutions; he just balked and exited the interview).
Maybe he's just been around the block a sufficient number of times to have gotten tired of the standard SV "here's a hoop, jump through it, now do it again, and again and again" ritual that it likes to call "interviewing" for some reason.
Seriously - your questions are fine for junior or mid-level roles, but beyond that, you should have more important things to talk about. And both of you should be able to tell if the other is a bullshitter and/or a lightweight within about 30 minutes at most (and often far quicker than that) of a normal, focused technical conversation -- without having to specifically grill the other person, or otherwise put on the pretense that you're the one whose bona fides are established beyond question, and they're the one who needs to jump through a sequence of hoops to prove that they are worthy of your time and attention.
Just cut the crap, get down to brass tacks, and talk what needs to be done as if they're a peer. That's all you have to do.
I agree with asking simpler stuff first. But what about the more complicated stuff? My problem is that when we talk about questions having more "signal", how are we determining that those questions actually give us more signal?
My question is fairly open and allows me to continue asking significantly more complicated questions. So far, nobody has (for example) asked enough clarifying questions to determine that a bloom counting filter could potentially solve the problem- most people just store ASCII string keys in a hash table, which wastes tons of memory and requires a lookup for every string.
So usually I end up after 45 minutes finding that the candidate has more or less tapped themselves out at "make a hash table of string keys, use it to store counts" (OK for a very junior programmer) or "make a perfect minimal hash" (I help them get there if they don't know what those are) or "use a probabilistic counting filter". This is about all I need to make a determination.
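The baseline "hash table of string keys to counts" answer being described is a few lines in Python (the sample input below is made up for illustration):

```python
from collections import Counter

# Baseline answer: count occurrences of each DNA sequence
# with a hash table; Counter is a dict subclass that does this.
sequences = ["ACGT", "ACGT", "TTAG", "ACGT", "TTAG"]
counts = Counter(sequences)
print(counts["ACGT"])  # 3
```

Everything past this (packed 2-bit keys, probabilistic structures) is an optimization on top of the same idea, which is why it works as a tiered question.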
The problem with this question is that outside the junior people who won't understand the hints, many good programmers would pass or fail just based on whether they have seen a similar problem or know what a Bloom filter is. CS is so broad that you may have been working for many years and never come across one topic that the interviewer deems standard.
I don't fail people who don't suggest a bloom filter. What I meant was: knowledge of probabilistic data structures and knowing this is a situation where they could be used is a sign of a more experienced programmer (likely one who's worked in clickstream processing).
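For readers who haven't met one: a counting Bloom filter keeps an array of counters indexed by several hashes of each item, and the minimum of an item's counters is an upper bound on its true count (collisions can cause overcounting, never undercounting). A toy sketch, with arbitrary size and hash-scheme choices:

```python
import hashlib

# Toy counting Bloom filter: k hashed counter slots per item;
# min over an item's slots is an upper bound on its true count.
class CountingBloom:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.counts = [0] * size

    def _slots(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for j in self._slots(item):
            self.counts[j] += 1

    def count(self, item):
        # Never undercounts; may overcount due to collisions.
        return min(self.counts[j] for j in self._slots(item))

cbf = CountingBloom()
for seq in ["ACGT", "ACGT", "TTAG"]:
    cbf.add(seq)
print(cbf.count("ACGT"))  # at least 2
```

The appeal for the million-sequence problem is fixed memory regardless of how many distinct keys show up, at the cost of approximate answers.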
If all you do in my interview is know that a hash table or other associative data structure is good for maintaining counts, that using a full ASCII byte to store symbols from { A G T C } is wasteful, that you can compress the letters to 2 bits each and pack multiple 2-bit symbols into bytes, and that you can write a function that encodes and decodes such data, I'm happy and you get a pass. I'm even happy to sit there and help people through nearly all the steps of the codec. Not trying to trick anybody or select for obscure CS knowledge.
To me the real question is, how much hinting is reasonable for the coding and decoding function? Many programmers (including senior ones) struggle to implement:
x <<= 2 # left shift the int to make room for the next item
x |= pattern # or the new 2-bit item into the accumulator.
Since I never use bit munging in my day job (it's 90% python data science) should I really ding somebody for not knowing about left shift or or-assignment?
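A sketch of the sort of 2-bit codec being described, assuming the arbitrary mapping A=0, C=1, G=2, T=3 (any fixed mapping works):

```python
# Pack a DNA string into an int, 2 bits per base, then unpack it.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def encode(seq):
    x = 0
    for ch in seq:
        x <<= 2        # left shift to make room for the next 2-bit symbol
        x |= CODE[ch]  # or the new symbol into the accumulator
    return x

def decode(x, length):
    out = []
    for _ in range(length):
        out.append(BASES[x & 0b11])  # read the low 2 bits
        x >>= 2
    return "".join(reversed(out))

assert decode(encode("GATTACA"), 7) == "GATTACA"
```

Note the decoder needs the length passed separately, since leading A's (zeros) vanish in the integer representation; that wrinkle alone is a decent discussion point with candidates.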
What I really don't get is why people immediately jump to trying to implement Huffman coding, RLE, or lookbacks to reduce the size of the DNA string. I still wonder if how I present the question gives everybody a fair chance to shine.
I am 100% certain that your questions are not giving everybody a fair chance to shine. You have to start with the area of expertise they have already worked in and not the area that you happen to have experience in or be interested in. I could (for example) ask “simple” dynamic optimisation questions that are “easy” for me and I know that 95% of great experienced developers would fail. So obviously I don’t do that. I want to hire smart developers who can learn. Not developers who happens to have the same experience/interest that I do.
I hope you realised that your questions are extremely narrow and in no way filter for competent developers. Imagine yourself going to an interview and being asked similar narrow specific questions but in a different area you have no previous experience in. You would probably fail. For example: how would you represent a lambda closure when compiling a functional language to machine code? It is a super easy question for me but I know most people would fail. So I don’t ask questions like that when interviewing.
This sounds like a perfect question for a company that's hiring engineers to focus on optimizing DNA storage. If that's your goal, you should just put that in the hiring criteria; you'll get more candidates who know those types of algorithms. If your goal is to find out whether the candidate knows about Bloom filters, a better question might be "What do you know about Bloom filters?" You'd definitely get an answer in less than 45 minutes.
I think "key/value counts" is absolutely the correct starting answer. Anything after that is optimization. In the real world, optimization comes at length through research, learning, grokking, reading, benchmarking, sleeping, mulling things over. The only way to shortcut that is knowing about those optimizations in advance, so you might as well just ask about those algorithms specifically.
All that being said, I think it's an enlightening question. Thank you for sharing it! You've educated me. I've been coding a loooooooong time and didn't know about counting Bloom filters specifically.
But have you had a chance to look back and see if those questions worked out? Like, are they a good predictor for how the person actually performed in the job?
Exactly. I honestly think no one is more entitled than software developers. Not even that chick from Mean Girls. Hell, in which other similarly paying field can you double compensation in like three or four months? Six months if you are slow or have a demanding life. Lawyers? Hahahaha. Doctors? Hahahaha. At least with LC there is an end goal. Those 75 questions. Or, well, I think 150 questions now.
I hate LC personally but also am able to recognize that it is a silver platter handed to me. The article above mentions all the negatives. Ya bro who didn't know them?
yep. And you typically do change jobs every few years just to get a raise or promotion.
Which means... Always. Be. Leetcoding. Do the bare minimum at your job and then do LC the rest of the time. Because your job is just your job, but your career is LC. These companies don't yet realize they are optimizing for mercenaries that have no loyalty to the code nor the company.
On a slight tangent, some of the people over on Blind would sell their own mother for a tiny bump in TC. That mentality used to be limited to Wall St. or maybe Big 4 accounting firms, etc. But now it's the whole tech industry. I remember having a software job was a lot more fun, back around 2007ish. That culture is so foreign to me now.
It's what happens when an industry is awash in dumb money. Startups are seen as either half-baked schemes trying to get a cut of the VC funding pie, or cheap R&D labs for the big tech companies in search of inevitable acquisitions. The big tech companies, which originally made their money delivering goods or services of real value, squander that goodwill through oligopolistic rent-seeking, shady anti-competitive or anti-consumer behavior, and endlessly building derivative, short-lived products that users don't actually want, in hopes of inflating their moats. In a milieu such as this, engineers become mercenaries, because "making the world a better place" has become a tired old cliche, both false and naive. Just a lot of cynicism all around.
I'm quite certain that making people dread interviewing is part of the point. Otherwise, why keep making people who've already passed multiple times do it again? If the first FAANG started routinely letting in people who've already passed a couple of similar interviews at peer companies, without repeating the leetcode hazing, then the other companies would have to do it too, and suddenly the rate of increase of software developer pay would be even faster than it already is, which none of them want.
Even having to pass such a test on a very aggressive set schedule—say, every 5 years—but only at those set times, would be better—for developers. Not for the companies hiring them.
Coming up quickly with efficient algorithms for domain-relevant problems is not a pointless brainteaser.
Quickly inverting a binary tree and that kind of nonsense belongs in library code (at best). It's not something that the average developer should ever need to worry about the actual implementation of, let alone have to implement from scratch in under 30 minutes.
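For context, the much-memed exercise really is tiny once you know it; a sketch in Python (the Node shape here is the conventional one, not tied to any particular problem set):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def invert(root: Optional[Node]) -> Optional[Node]:
    """Mirror the tree: swap left and right subtrees, recursively."""
    if root is not None:
        root.left, root.right = invert(root.right), invert(root.left)
    return root
```

Which is arguably the point: the hard part of the interview is recall under pressure, not the code itself.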
Yes, I agree they're relying on post-hoc reasoning to justify their own place in the industry. All of the grade inflation stories about Ivy League and similar schools and personal experience in the industry with these types don't give me a lot of confidence in the present value of those school names if the dynamics of VC funding change a bit.
I wonder which came first? Did they hire those people because they implemented leetcode style interviews, or did they implement those interviews because they originally hired or were founded by ex-bigco folks that simply did what they knew?
Perhaps there's another collectively shared work experience that also created leetcode-style interviews: dealing with someone who has a degree and can speak well about what it takes to code, but who cannot code to save their life. I have known more than a dozen such "engineers". That's the reason you code during an interview. I think the emphasis on optimal O(n) solutions just comes from engineers who don't believe it should be "that easy", and it doesn't really produce any more signal.
My coding question, for example,
You have a music player that should be playing a 900-song list randomly. You notice that you keep hearing a song being repeated during your drive, and you are curious if it's truly random. You also keep hitting skip in the hopes a particular song comes up. Write a piece of code that simulates this.
You should tell me three things:
1) the number of songs played before a repeat occurs
2) how many songs needed to be played before you played them all
3) after you succeed in playing them all, how many times has the most common song been played.
You can use libraries if you know them, and you can use the built-in sort method in the language; I will google it for you if you don't remember the syntax.
But.. why? Does that reflect anything close to the job they would be doing? Handicapping someone in every way (you will google it for them), putting them on the spot in an already tense situation, and expecting them to code while you watch -- is that the way literally any software job works?
This is the insanity to me, "Here, do this contrived task that doesn't represent anything you will be doing... to prove that you can do the job"
I once had a whiteboard interview for a senior engineer position where they demanded that I write it in syntactically correct Python, indentations and all, on the whiteboard. I'm trying to talk about code at a high level with them, meanwhile they are deducting points because I assigned a dictionary key directly vs. using the dictionary's method. It turned me off to the company as a whole, and the entire interview went downhill from there.
> But.. why? Does that reflect anything close to the job they would be doing?
Yes? Yes. Figuring out how to make a computer solve problems is very much the job of a software developer. They will only encounter harder and less well-defined tasks in their actual job. If they can’t do this and you hire them, that is like hiring an opera singer who is mute, or a baker who is deathly allergic to flour.
> putting them on the spot in an already tense situation and expecting them to code while you watch
There are mitigating things one can do. We make sure our hiring managers let the candidates know that there will be a coding challenge. We ask the candidates if they prefer to chat while they work through the task or prefer to be left alone, and we accommodate what they choose. We let them know that whichever style they prefer, it won’t change anything.
> meanwhile they are deducting points because I assigned a dictionary key directly vs. using the dictonary's method
That sounds very unpleasant. Sorry to hear that. Interviews are a two-way street: you are interviewed, and at the same time you are interviewing them. I think you were right in judging them, and you dodged a bullet there.
By the sound of it you are a talented and capable developer. It might be that you can’t imagine it, but there are people who apply for developer jobs, have a really good ability to talk about the job, and seemingly have the right experience, yet somehow they can’t program even super simple tasks. Even after you give them every accommodation imaginable to humankind. If you haven’t seen this yet, you won’t believe it. If you have seen it, you want a filter against this particular kind of candidate.
I’m not saying that this filter always goes well. Every filter ever invented has had both false positives and false negatives. We might lose a brilliant developer because some quirk of the task throws them. It is sad. We are trying to minimise the chances of this, but it certainly happens.
I can certainly understand that there are unqualified people who apply. What I'm disputing is your assertion that there is a correlation between doing these contrived problems under unrealistic conditions and future job performance. It sounds like your goal is just to filter out the absolute worst of the worst, and I'm sure it's effective at that, but I believe you might be filtering out more of the top end than you realize.
> It sounds like your goal is just to filter out the absolute worst of the worst,
Yes.
> I'm disputing is your assertion that there is a correlation between doing the contrived problems under unrealistic conditions and future job performance.
You are projecting something here. You are saying, without any supporting evidence, that the problems are contrived. They are not. They are really the core of what my coworkers and I do day in and day out.
You are also saying that the conditions are unrealistic. What makes you think that?
If you and your coworkers are solving the problem you posted "day in and day out" just hire me and you can get rid of them :)
You are right, though, that I have no evidence they are contrived other than your example problem. That the working conditions are unrealistic, though, is indisputable: do you typically relay your Google searches for syntax through your boss? Do you often work under extreme time pressure on problems you have never seen before? Perhaps you do, but I can tell you with certainty that it's not the norm in the industry to work that way.
Not the guy you responded to, but is there a good way to avoid time pressure?
A limit has to be set as to the time spent per candidate from the company's pov.
> It sounds like your goal is just to filter out the absolute worst of the worst
Indeed, this is a holy grail of the applicant funnel. If a tech-for-interviewing company comes up with a better way to efficiently remove the worst 50-ish% of applicants, they’ll have no worries about their future. (The overall applicant pool skews significantly less skilled than the pool of people who have been, or will soon be, quickly hired.)
There is a class of engineer who have little performance anxiety by nature. They simply don’t care and are normally blunt types who love algorithms.
And they are, frankly, pretty bad at writing code, because they have little empathy for the reader and a greatly inflated sense of self-worth. They’re the kind to use complicated C++ features or algorithms for little reason. Essentially, smart idiots.
They also believe in silly things like LC being a fair and rational way to evaluate candidates and don’t see the bias at all. “Eugh she’s an ugly woman, I think I’ll give her the LC hard and little help.”
There is another class who realizes how stupid LC is, but are happy to play the game to quickly accumulate power and prestige. They usually have psychopathic tendencies and aren’t great coworkers.
LC is great at hiring these types. Feel free to stick to it if you enjoy having them as coworkers.
This fear of hiring inept people kind of surprises me. Can't they just let people go who interview very well but are poor performers on the job? If they don't care to follow up on the performance of new hires, I have to wonder if it really matters one way or the other.
I'm outside any major metro area; when we've needed to hire, there have only been a handful of people responding. When people come in for an interview, it's been pretty easy to tell if they really can't code at all without making them actually code. I haven't found this to be challenging.
I suspect the issue is interviewing a large number of people in a short period of time. If you don't have 45 minutes or so to spend on each person, using automated leetcode style problems probably starts to seem like a pretty attractive way to weed people out.
Lastly, usually one or two people from my team would take part in interviews. If there aren't any developers available at interview time (i.e., it's a manager and someone from HR), I can start to see how people without any real coding experience can make it through the process. Again, this is probably a place where automated testing looks like a reasonable solution.
> Can't they just let people go who interview very well but are poor performers on the job?
Eventually, yes. But it takes time for them to start, time and money to onboard them, evaluate how they’re ramping up, then if not acceptable, to follow whatever performance management process is indicated by the company and local law, then transition whatever work they were doing. This could be several months and tens of thousands of dollars just to get back to a worse state than when you walked into interview that candidate.
Interviewing even slightly better than last year can pay large dividends.
# You have a music player that should be playing a 900 song list randomly.
# You notice that you keep hearing a song being repeated during your drive and
# you are curious if it's truly random.
from collections import Counter
import random

r = random.Random()
songs = range(1, 901)  # 900 songs, numbered 1..900
unplayed = set(songs)

# You also keep hitting skip in the hopes a particular song comes up.
# Write a piece of code that simulates this. You should tell me three things:
# 1) the number of songs played before a repeat occurs
# 2) how many songs needed to be played before you played them all
# 3) after you succeed in playing them all, how many times has the most common song been played.
counter = Counter()
firstrepeat = None
totalplays = 0
while unplayed:
    song = r.choice(songs)
    unplayed.discard(song)
    if firstrepeat is None and song in counter:
        firstrepeat = len(counter)
    counter[song] += 1
    totalplays += 1
most_common_song, most_common_plays = counter.most_common(1)[0]
print(f"""
Number of songs played before a repeat: {firstrepeat}
Total number of songs played: {totalplays}
Most common song: {most_common_song}
How many times the most common song was played: {most_common_plays}
""")
Sample run
$ python3 foo.py
Number of songs played before a repeat: 50
Total number of songs played: 7263
Most common song: 375
How many times the most common song was played: 17
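For what it's worth, two of those three numbers have well-known expected values you can sanity-check a run against: the first repeat is the birthday problem (roughly sqrt(pi*N/2) draws for N songs), and hearing everything is the coupon collector problem (roughly N times the Nth harmonic number). A quick check, assuming those standard approximations:

```python
import math

N = 900  # songs in the playlist

# Birthday problem: expected draws until the first repeat
# is approximately sqrt(pi * N / 2) for large N.
expected_first_repeat = math.sqrt(math.pi * N / 2)

# Coupon collector: expected draws until every song has been
# played at least once is N * H_N (H_N = Nth harmonic number).
expected_to_hear_all = N * sum(1.0 / k for k in range(1, N + 1))

print(round(expected_first_repeat))  # roughly 38
print(round(expected_to_hear_all))   # roughly 6,600
```

The sample run above (50 plays before a repeat, 7263 total) is within normal variance of those expectations; both distributions have fairly heavy right tails.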
I'm wondering why the music player would implement sampling with replacement and not without replacement. Sampling without replacement would shuffle the list and then play the tracks in shuffled order, with no repeats until all songs have been played once.
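In code, sampling without replacement is just a shuffle; a minimal sketch (the function name is mine):

```python
import random

def shuffled_playlist(n=900):
    """Sampling without replacement: shuffle once, then play in
    order. No song can repeat until all n songs have been played."""
    playlist = list(range(1, n + 1))
    random.shuffle(playlist)  # Fisher-Yates shuffle internally
    return playlist

order = shuffled_playlist()
assert sorted(order) == list(range(1, 901))  # each song exactly once
```

Note that if a player re-shuffles at the end of each pass, the last song of one pass can still come up first in the next, which is one way repeats sneak in even without replacement.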
This is what I was thinking as I was driving in and heard the same song. It was an iPhone, and it was my impression that this is how the iPhone worked. Now, my iPhone did something else weird: it randomly dropped songs from being downloaded, so songs 'disappeared'.
One other thing: I just got a new car, and this was the first time I was using 'next' over Bluetooth. So, as I was driving in, I was trying to figure out in my head whether the phone now only had 100 songs downloaded, or whether this Bluetooth 'next' function on shuffle was picking a random song from the list, rather than 'next' on a shuffled list.
> I will google it for you if you don't remember the syntax.
You might as well just stand behind them and breathe down their neck for added effect.
Seriously, that's way too claustrophobic, not to mention a huge practical annoyance -- the added latency of having to ask you to google stuff for them and then receive the result somehow, instead of just letting them do it themselves, when all they want to do is get past this trite exercise and start having a real conversation.
I notice quite a few early stage startups that recruit w/ large initial funding rounds (ie. a $20-40M series A) almost invariably have a note about how their founders are ex-FAANG. I can't help but believe it helps a ton w/ funding, professional networks, team building, etc.
Again, no different than having an Ivy League education. There are big advantages to scope/reach/opportunities in the industry.
The predecessors to leetcode questions were being asked, for example, in PhD dissertation defenses, as well as some that were highly specific to the company in question (sort 2 MB of data in 1 MB of RAM). Many of the problems are basically late-undergrad, early-graduate CS student problems.
Do PhD defenses actually involve asking random tech questions instead of what's in the dissertation? Seems like if they're going to fail that, you already have bigger problems.
My PhD was in biophysics, and the questions ranged widely outside my dissertation. (In my case, you actually do a defense to proceed to work on your PhD, then write your dissertation and give a final talk; there's no way, unless your advisor rejects your dissertation, that your PhD wouldn't be awarded. Other programs rear-load the process and have a real "defense" at the end, which is crazy if you think about it.)
At one point in my thesis defense, I derived several equations I hadn't seen before, on the fly, such as "What is the time-resolved fluorescence of a fluorophore in 4-dimensional space?", and finally understood ergodicity (https://en.wikipedia.org/wiki/Ergodicity).
My defense wasn't about determining if I was an expert and qualified to write a dissertation in my field (my questioners already knew that), but to determine if I was a well-rounded general intelligence capable of out-of-task prediction.
In my phd experience in the USA we had oral qualifying examinations which involved whiteboard derivations. The thesis defense after writing was really mostly focused on probing the results in the thesis.
I don't know man. I think you're going against Hanlon's razor with this belief. LeetCode type problems may do the things you're saying, but I'm not sure that's the intent.
Personally, I believe the reason companies do LeetCode type questions is because everyone else is doing them and no one can agree on a better way. Large companies want a hiring process that scales and is relatively uniform and LeetCode questions meet that criteria. Is there a better way? Probably, but figuring out what that is takes time, effort, and investment and most companies don't want make that investment for something they may get wrong.
Basically, most everyone knows LeetCode is shit, but as long as everyone else is doing it, there's little incentive to change. If everyone is doing the same stupid thing, at least your company isn't falling behind. If you decide to do something different, there's a real possibility you could spend a bunch of time and effort only to make things worse.
It'd be a fun social experiment if one of the big tech companies replaced the leetcode-style rounds with something arbitrary. Let's say: The "jumping jacks" round. You have to turn your webcam on and do 100 jumping jacks in 60 seconds. Only then will you potentially advance to the next round.
The person watching on the other end can evaluate how far over 100 you got, whether or not your form matches best practice, and assess how you were breathing in case... you know... they hire you and then there's a business need for you to do 500 jumping jacks in five minutes.
They might not even end up hiring anyone different than they otherwise would.
First, it’s probably illegal in the USA due to ADA unless you could show that it tested something physically related to the job. Example: a big company is required to make accommodation for a, say, qualified analyst who is blind, but can reject a blind candidate for a job that required driving.
But after 45+ years of hiring programmers, the world hasn’t yet figured out which factors are germane and which are not.
I'm pretty sure that most of the benefit of FAANG-type interviews is narrowing the candidate pool to almost exclusively people who'd be OK to hire. IOW they could just hire randomly from the pool with no leetcoding at all, and likely do about as well.
The trouble is, they actually can't do that because then the pool would change, quickly.
Google actually kept statistics on how much they liked candidates and how well they did after hiring. This has the obvious problem of not tracking those they turned down, or those who turned them down, but they published a paper on the results and IIRC basically found no correlation at all.
But I think like you say, what they did do was create a filter that only very patient and technically qualified candidates could get through. Beyond passing the filter, their very standardized process did not derive any useful information about candidates. But I feel pretty certain if they made an offer to every candidate and not just those on the pass side of the filter line, they would have seen some differences.
My guess is that by the time the self-selection of people who think they're ready to apply and have a good enough shot to make it worth all the time the interviews take, happens, and they get to an in-person (having perhaps passed a less-harsh phone-screen leetcode question), most further leetcoding isn't doing much aside from keeping the self-selection effect strong.
The other parts of the interview might be doing something per se useful, I suppose.
I like that idea. It performs pretty much the same basic function that LC advocates say LC does -- "Okay, maybe it doesn't really measure useful skills, at least not in proportion to how they're actually used, but hey, it at least measures their determination to get the job and their willingness to put up with utter nonsense, which is of course hugely important to us" -- and is much quicker. And it at least gets you out of your chair, and your circulatory system refreshed and oxygenated.
I'm going through job interviews now (not with Amazon, but others). The whole thing feels like well-rehearsed regurgitation. Well, and a smidge of acting, obviously, because you don't want it to come off as well-rehearsed. But they know exactly what they want to hear and you're competing with other candidates who see the same writing on the wall and are doing their own rehearsals.
I thought it was just the leetcode, but it's the behavioral questions and system design exercises, too.
Eh, I'm coming across as cynical. I'll be fine--I just wish we (as an industry) had a better system.
No it’s totally right. Going in cold is basically impossible. You have to be ready to regurgitate leetcode answers, “approved” system designs, STAR stories about your career that show various BS story arcs. And of course pretend like you’re not just regurgitating for… reasons. It’s all a bizarre cargo cult acting session at this point.
Big companies even send you a study guide these days ffs
The LP style of interview optimizes even more for storytellers and a fair share of BS purveyors. The entire process is far too impersonal to drive any real and effective assessment.
I get the sense that, because of Amazon's roots as a bookseller, they still like to optimize for people who are a bit more well-spoken and well-read, compared to other FAANGs. At the executive level, PowerPoint is banned and all meetings start by reading a well-composed multipage memo.
Being able to tell a great story might not be highly valued at most orgs, but I think it's part of Amazon's culture and something they desire in their skilled positions.
This is ridiculous. You yourself admitted that this conspiracy theory doesn't explain startups. The answer is simple: leetcode interviews are stupid easy to give. There is no "nefarious conspiracy to keep workers obedient." It is simply the path of least effort.
I think leetcode-style interviews being easy for the interviewer / company is the most rational explanation for why they're so pervasive. It's easy for the company as an automated screener and as an on-site, because there is a clear pass / fail rubric that can filter people. If you have a bunch of people doing interviews who don't want to do it (Google and FB interviewers, cough cough), then this style of interview is perfect for an interviewer who just wants to get it over with.
I've noticed that Apple only uses simple leetcode-y questions for its phone screens, but for their on-sites they ask very domain-specific questions. It is interesting, because if you're actually good at the domain you're interviewing for, they are very easy; if you're not, they can be intractable. It's clear that each team puts a lot of thought into the interview process, and I imagine Apple teams get a very high SNR, at least compared to Google / FB.
You could use the ancient Microsoft logic questions. They are just as easy to give. Leetcode is sticky for a reason, even if it's not the intended reason.
This post is way too optimistic. You're implying that the people deciding to use leetcode tests are competent enough to know they suck but still use them for the soft factors you mention. I would bet essentially all of my money that instead they're simply incompetent and actually think these tests are good at testing ability.
That being said, everybody talks like leetcode tests for a week straight are the norm. I've recently been through the whole thing with several big tech companies and some smaller companies, and all of them only used leetcode for the phone screen.
In fact, with Microsoft, for various reasons they skipped my phone screen and I went straight to onsite, and there was no leetcoding involved at all. Still some whiteboard coding, but about highly specialized problems relevant to my field; none of this algorithm-puzzle bullshit.
I can boil this incompetence you describe down even further: employers and their employees don’t want to spend time making their own interview challenges.
It’s actually pretty hard and time consuming to come up with a mock scenario and evaluate it, especially when you also have your regular work to do.
Leetcode is seen as good enough and all the effort is on the interviewee. The current employees don’t see the pain and suckage. They already have a job and it’s not their problem.
Maybe that’s where the problem is. Interviewing skills should be part of the job's responsibilities. There could be a committee that sets and evaluates the process and questions. However, this would hurt the ego of a typical engineer so much. No one likes to be told what questions they can ask. Everyone likes to believe they are good interviewers; it’s just that they “hate interviewing”.
> I would bet essentially all of my money that instead they're simply incompetent and actually think these tests are good at testing ability.
Probably the same people who think JIRA is a great project management tool :>) - when in fact, it (JIRA/Leetcode) gets used a lot, because it gets used a lot by other people - and for no other good reason.
Agreed; from my experience this is largely a factor of asking developers who aren't used to interviewing to suddenly conduct an interview, often with little guidance from either their manager or the hiring team. With nothing else to go on, they have to rely on their own experience of how they were interviewed -- with leetcode -- and so they continue the cycle.
> Big companies want people who are smart enough to do the work, but obedient enough to put up with all the bullshit that comes with working at a big company.
But then you also have the companies who see other companies do it, so they cargo-cult. Those are companies you definitely want to avoid. Their entire stack is the way it is because of cargo culting, not because of an engineering need.
I think you also have a few of the Bro Culture companies who do this as a hazing ritual.
The problem with this theory is that I have met a ton of engineers who really have a lot of their identity wrapped up in how well they did on leetcode interviews [mostly by memorizing enough of them to pattern match] and heap scorn on candidates who don't do well on them as if they are subhumans.
The ego and status jockeying appeal of leetcode is very, very high and a lot of engineers just eat it up.
I think discussions of this topic which ignore the fact that there is a substantial population in our industry who fall into this trap are basically flawed.
A key clue about some of their reasons for doing it is that they still make you go through it even if you're, say, applying at Google, passed theirs 8 years ago, passed Facebook's 5 years ago, and passed Netflix's 2 years ago.
I think part of the purpose is to cool competition between them.
I agree: on the startup side, the only "brilliant" player is the founder(s), and they just need people to pump out code, do marketing, etc... non-stop, because "they believe in the mission". I don't like the leetcode interviews either, but what can we do? I have had the other types of interviews (take-home projects, trivia questions, code reviews, pair programming) and all of them suck as well. At least with leetcode you know if you "got it right"; there is less subjectivity in the process. Also, assuming someone is less smart because they do leetcode interviews is a bad take. All companies (not only startups) need obedient workers, and the interview method has very little to do with that. If you are a true rebel, would you work for somebody else?
Re: Thinking machines. If you are looking at moving on from your current company and are evaluating 6 companies, you have to bank up 6 weeks of PTO to 'try to work' at the various companies for a week each?
It seems like a huge inefficiency in the economy that thousands of people are studying and practicing for an entrance exam that has no value in the real work that the companies actually do.
But overall that may be a good thing for society. If Google fired all their leet coders and replaced them with real engineers, they would not need as many. And the biggest problem may be that there is a limited number of real engineers in the world. Companies have to figure out a way to make do with the people that are available.
Systemic problems being blamed on individuals is not going to help. Of course, society is essentially a shared-responsibility model, but that means even a highly skilled, technically competent, and qualified engineer in an inadequately structured system can regularly produce garbage work. Having to hire nothing but the best to achieve decent or viable output implies, IMO, that the processes or systems are lacking, more than it says anything about whether people are even capable of producing the desired feature set.
Culture is important at companies. Without it quality engineering can’t/won’t happen. People will rise to the bar set by their leadership unfortunately.
> Ingenieurinnen und Ingenieure sind alleine oder – bei arbeitsteiliger Zusammenarbeit – mitverantwortlich für die
Folgen ihrer beruflichen Arbeit sowie für die sorgfältige Wahrnehmung ihrer spezifischen Pflichten, die ihnen aufgrund ihrer Kompetenz und ihres Sachverstandes zukommen
Translated:
> Engineers are solely or -- in the case of collaborative work -- jointly responsible for the consequences of their professional work, as well as for the careful performance of their specific duties, which fall to them by virtue of their competence and expertise
Carrying this responsibility is an (indirect) requirement for being legally able to call yourself an engineer here.
Edit: a Netflix movie is a form of entertainment and not a valid source.
I understand this, but there’s a huge gulf of nuance between expecting hand-holding, and keeping your head down while bad management expects great output and results despite the problems caused by their own individual failures.
It's more about professionalism and stricter standards of engineering and less about being "good despite bad context".
You can hire the best artisans for your pot-making company and make some of the best pots. However, sooner or later you will encounter problems like disparate quality outcomes (due to different people having different standards of style, craft, and discipline), unpredictability, and unreliability.
An engineer's role is to standardize processes and turn things like quality into a predictable outcome. It's not just about being good and not being bad.
Sure, that's true. I haven't taken one in over 25 years. I don't remember enough of the fine details off the top of my head to solve these well. But I can count on one hand the number of times it's held me back in real life.
Instead in my day to day I'm able to recognize when a problem needs something beyond the braindead solution and a sense of roughly where the solution lies, and am able to then go do a bit of research on the topic.
Solving leetcode-hard-style problems in ~45 minutes off the top of one's head is just not a problem many developers face in their jobs. I agree that these tests convey some information, but whether a person will be a good developer is not part of that signal.
I generally agree, but in the case of algorithms courses you generally know what you're expected to know for tests, and for course work you have ample access to course materials.
Getting a random question from an infinite pool and completing it with fewer resources under a short time crunch is a different game.
The low-level/assembly course I took had tests that felt like puzzles, and those tests were a ton of fun to take. Can't say I've had the same feeling for leetcode puzzles.
What if my surgeon went to med school and was top of class, but had an attending who insisted on using the wrong tools? Or there was another surgeon on the team who created the "architecture" for a brain surgery, but for some reason in order to get to the brain they went through the chest. There can be many reasons for unreliable and buggy software that are completely unrelated to the developer.
I will take someone who has been programming since age 6 over a dozen alleged "software engineers" performing "best practices" theater.
The former can deliver good software with fewer bugs; the latter might be able to, but are more likely to bikeshed endlessly while being hamstrung by "best" practices that turn out to be inapplicable in a given domain, or even damaging.
Do wake me up when software engineering becomes a standardized practice that isn’t a mere cargo cult.
I swear, if I read one more comment about doctors and surgery and licensing, I'm going to have to launch into a rant exposing the multiple flaws in popular SE best practices.
Is it gatekeeping? They hire so many junior devs, do you not think it’d be possible to strip the team down by hiring far more capable engineers? Would Google crumble if they only hired senior devs like Netflix? I’m convinced they hire so many purely because they have the money and managers want bigger teams to go up the managerial track.
This comparison isn't totally valid. Behavioral and system design rounds become much more important in the typical Silicon Valley interview process as you get more senior.
Oh good! We finally moved past the doctor metaphor.
There is no standardized methodology in SE as of 2022 that has anything resembling actual science as its basis, or we would certainly have thrown out LC interview hazing by now.
Your comparison is inaccurate. You might as well rail against non-conservatory musicians who taught themselves to play.
For fields where software safety matters, THERE ARE ALREADY EXISTING LEGALLY ENFORCED CODING STANDARDS
I'm not saying that they should be ashamed for being leet coders. I'm just pointing out the inefficiency of the system. It isn't unique to software. Historically, lot of jobs have required irrelevant accomplishments or certifications before employment.
In your opinion, what's the difference between a leetcoder and a real engineer?
In my mind, a real engineer really shines in the non technical aspect of things, like coordination, communication, prioritization, and getting hard questions answered. But that's just me, I'm curious what everyone else's experiences are.
I agree with all of those things. A real engineer has soft skills and the ability to look beyond the immediate problem to see what is at the heart of the issue. They can see the effects of a solution and see the problems that might arise from it. They see connections.
Leetcoders just find the most efficient solution for the problem at hand. They don't see connections to other problems, whether current or potential. They're a scalpel when you really need a massage.
I've seen leetcoders throw away less-efficient, safer solutions just to implement something more efficient and clever. And when things fail, they're nowhere to be found. Too busy with whatever current Story they're on.
All of that non-technical stuff won't help you when the bridge you signed off on collapses because you had no idea what the blueprint was actually saying.
I think a bunch of this "engineers don't actually need to know stuff" comes from a generation that never had to deal with things that might kill people if it fails.
A lone idiot can certainly foul things up, but look at how catastrophes usually unfold.
The right information is usually somewhere in the organization, but it isn’t shared or acted upon appropriately. The Challenger report is pretty clear that it was “an accident rooted in history”. They had a decade of data about O-ring performance in the cold, but it was ignored/overruled in the decision to launch. It certainly wasn’t the case that The New Guy just grabbed some rubber from the wrong shelf and blew up a space shuttle. The Mars Climate Orbiter was lost because of a metric vs imperial mixup, but it wasn’t one dev who capriciously decided to work in inches; there were whole teams that weren’t synced up.
Engineers certainly need to know something, but I’d bet the farm that a team of B+ engineers with good coordination can run circles around “rockstars” that won’t work together.
In fact, I'd go so far as to say that if a lone idiot can wreak havoc, it's usually a coordination problem further up.
For example, suppose your team generates and shares some kind of data. You want to change the wire format. You could just go ahead and do it, letting the chips fall where they may, but I think that's bad engineering/engineering management.
It'd be better to coordinate with the folks consuming the data. Can you make their migration easier? While you're making a breaking change, are there others that would make sense to roll in? Are there times when it'd be better/worse to roll out the new version? Being proactive about this sort of stuff seems like good engineering to me.
All of those things are people management, and I agree people management is an important skill to have if you want to work at a big company in particular, but it isn't related to engineering. A guy who is completely non-verbal but makes a solar panel that is 5% more efficient is a great engineer; perhaps not liked by those around him, but still a great engineer.
Terry Davis was a great engineer, and he had the worst communication skills you could imagine.
HR does not do that. They don't prioritize nor coordinate - unless you count getting people to company dinner as coordination. They do communicate, but rarely about product or how things are done or should be done.
And I don't mean that as criticism; I mean it as "it is not their job".
Same. The biggest point that I didn't see in the comments so far is that leetcode exercises are really fast to evaluate, which accelerates the pipeline for filtering and hiring. Even a recruiter can "evaluate" them, just by looking at the code more or less and checking whether the automatic tests pass. That's not the case for real engineering exercises. A real engineering problem would mostly be about architecture, which is long and hard to evaluate. Or about how to break down a rotten program and apply SOLID and similar principles to redesign it. Same thing: long and hard to evaluate.
I mean, I'm an introvert and I prefer working with code, not people, but I've found my own productivity has skyrocketed now that I'm making an effort to reach out more and raise questions about bigger issues than just the immediate technical issue I'm facing.
I've worked with many people much smarter than me. And I've watched smart, very technical engineers silo themselves off. And then I've watched smart technical engineers who make an effort to communicate and lead initiatives to improve the codebase, make arguments to management for tackling tech debt, and speak up during technical grooming to propose better solutions. And while the siloed-off introvert might be able to write better code faster, the engineer who is looking at the whole picture makes an entire team move faster.
How many companies have written and still do write their very own IAM solution? Duplicating work and inefficiencies is at the core of modern IT production.
And just because, in the case of companies, that purpose is often unspoken doesn't mean it doesn't apply. That's why we have leetcode and inane interview processes.
I think you're overthinking it. I think very simple questions are appropriate just because you want to see if the person knows how to code. Not knows how to balance a binary tree, but literally knows how to code. You'd be surprised how many programmers I've interviewed that literally couldn't get anything to compile. To be fair, you see the same in the finance world where someone lists Excel and data analysis as skill sets and then doesn't know how to do a vlookup.
People write all sorts of stuff on their resume. They took a C course in college 10 years ago and write C as a skill. They could be programming managers and not know what a pull request is.
I think it's definitely gone too far, but basic coding proficiency is essential IMO.
Some startups are founded by people with limited technical experience who, as a crutch, just cargo-cult technology trends from bigcos. IME you're better off going full "cowboy" than trying to crib bigco practices.
I hadn't thought about the bigco tech interview process like this. But now that you put it that way, it makes perfect sense :) I tried whiteboard interviews about 10-12 years ago with big companies and failed miserably. Then I decided never to take a whiteboard coding interview again, which I haven't done since. However, after working for startups, and a few acquisitions later, I ended up at a Silicon Valley big company without doing a whiteboard coding interview. I'm not sure where I was going with the story; maybe to show that there are alternatives if you don't want to do whiteboard coding.
I think it's more along the lines of "well, this is how I got in and if it recognized my genius it must be pretty good".
Along with "if we all had to go through this shit, so should you".
IME its biggest defenders are people who were a little bit extra proud to have gotten a job at Google, and I reckon it's those types as much as upper management keeping it going.
Like frat hazing rituals, the fact that it intrinsically makes no sense is sort of beside the point. And like frat hazing rituals, it'll have to be rooted out like a weed to actually go away, because it's well and truly baked in.
I don't think a lot of thought is put into it. It's inertia and "what the other successful big companies do". There is also a subset that enjoys leetcode interviews and complains if you don't do them. Everyone does it because big daddy successful Google started it, and thus it self-perpetuates. 20 years ago it was bullshit Microsoft-style lateral thinking questions that were that generation's leetcode.
On top of that, the only people who can really get rid of leetcode are pretty much senior staff or principal engineers and high-level HR people. It also doesn't get you promoted or paid more in most companies to go on a long project to change the interview process. Thus it doesn't get changed.
I estimate that leetcode will probably go away in many big companies 5-15 years from now, as today's senior engineers become the next senior management and founders who actually make these decisions. Many startups today explicitly do not have leetcode interview loops, while 10 years ago many did. When that generation of startups has its Facebook, the one that defines the decade and thus defines the next common interview fad, leetcode will start going away.
It also allows you to have any engineer interview pretty much any other engineer, which makes interviews overall easier to run.
A point that I didn't see yet made. The truth is that lots (though of course not all) startup companies just have no idea what they are doing.
I had to deal with the complexities that imagined possible future scalability issues introduced into our stack when the whole customer base could be served by a raspberry pi, and it happened in multiple companies.
And it is not just infrastructure, but also code. I have to follow the twisted clean architecture rules and write tons of boilerplate code (that could make sense on the backend for a complex system) for a mobile app that is basically just a skin over a graphql API (basically no business logic on the client).
The same happens when hiring: we need top talent like FAANG companies do (but we can't pay for it), because we are awesome, where in reality anyone with a sceptical, product-focused mindset and a "get things done" attitude would do.
In my opinion this happens because we don't want to admit that our startups will likely not need the scalability for years, we won't need to follow fancy architecture patterns because our app is simple, and we could do well with developers who don't have top computer science skills.
I have seen an over-reliance on leetcode-style questions at startups, actually. Or maybe more fairly, at "we want to be Google"-style companies, which is every startup. In fact, some large companies have a more relaxed way of interviewing.
"companies want people who are smart enough to do the work, but obedient enough to put up with all the bullshit"
Which is why I love leetcode interviews: I get insight into how a company, especially tech management, operates. During the dog-and-pony-show interview I will put up with it, but it definitely casts the company in a poor light for me. I tell companies that I am interviewing with that I will not prepare with leetcode (some will suggest it). I'd rather spend my time reading about higher-level concepts. You know, the stuff that actually helps in modern software development.
I left out the word "big" in my quote. Companies of all sizes do leetcode interviews and want heads down and STFU type of developers.
"... However, employers do refer to the educational background to differentiate between high-quality workers and low-quality workers (Kasika, 2015). Consequently, individuals with higher-ability use their educational background to gain the "education signals" that allow them to move into high level and high wage positions"
In this case, the "educational background" is shown by solving leetcode-style algos.
> No, we all know it measures those things, and those are exactly the things the measurers want to measure.
I wouldn't be so sure everyone realizes it doesn't measure anything relevant to actual job performance, although it should be obvious (since nobody whiteboards algorithm puzzles all day in an actual job).
In all of these threads there will be many people saying they only want to hire the top few percent of developers (as if skill were a strict linear scale, rather than hundreds of intertwined skills), and they justify leetcoding as the way to do that, believing that a top leetcoder is somehow a top software engineer instead of realizing these are unrelated skill sets.
> Big companies want people who are smart enough to do the work, but obedient enough to put up with all the bullshit that comes with working at a big company. What person better matches that than someone who's able and willing to study for and pass a tech version of the SAT?
Leetcode rather measures whether you are into brutal cramming. Let's put it this way: the kind of person who fits that profile is, in my opinion, often "a little bit special", i.e. not the kind of employee that, in my experience, both startups and big companies prefer.
Those who will overwork, be on call 24/7, do 10 people's jobs at once, and tolerate abuse from managers, because you want your stock options to vest.
> You might ask, so why do startups do leetcode too?
Not all big companies do, not all startups do and not all companies of sizes in between do. It's also a little bit up to the candidates to just not put up with the bs and stop the interview process or not even start it in the first place.
> What person better matches that than someone who's able and willing to study for and pass a tech version of the SAT?
Is it actually? "Leetcode" is pretty much covered by 6.006 [0] or equivalent. You shouldn't have to "grind" or study for hours if you passed the class (your problem sets should more than cover what you'll do on the blackboard).
Anyone applying for a serious engineering job should have at least done one algorithm class. This is what leetcode filters for.
I don’t think companies are doing it consciously, but I do think that they do use eighty hours of interviews and coding challenges to select for compliant fungible cogs.
Someone who builds a truly novel technology solution involving hundreds of hours of effort gets filtered out of an interview involving contrived scenarios. You may have built the next generation X, but given an array of strings and a fixed width, can you format the text such that each line has exactly maxWidth characters and is fully justified -- in the next 30 minutes? Maybe you should have cultivated that skillset instead, because around here we value parlor tricks more than real world accomplishments.
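That "fully justified text" parlor trick matches LeetCode's "Text Justification" problem. As a sense of what the 30-minute exercise actually demands, here is a minimal Python sketch (the function name and structure are my own; greedy line packing, then leftover spaces distributed left-to-right):

```python
def full_justify(words, max_width):
    """Greedily pack words into lines, then pad each line to max_width.

    Middle lines distribute extra spaces left-to-right; the last line
    (and any single-word line) is left-justified instead.
    """
    lines, current, length = [], [], 0
    for w in words:
        # len(current) counts the minimum one-space gaps a new word needs
        if length + len(w) + len(current) > max_width:
            lines.append(current)
            current, length = [], 0
        current.append(w)
        length += len(w)
    lines.append(current)

    out = []
    for i, line in enumerate(lines):
        if i == len(lines) - 1 or len(line) == 1:
            # left-justify: single spaces between words, pad the right edge
            s = " ".join(line)
            out.append(s + " " * (max_width - len(s)))
        else:
            spaces = max_width - sum(len(w) for w in line)
            gaps = len(line) - 1
            q, r = divmod(spaces, gaps)  # base gap width, leftover spaces
            s = ""
            for j, w in enumerate(line[:-1]):
                s += w + " " * (q + (1 if j < r else 0))
            out.append(s + line[-1])
    return out
```

Fiddly edge cases (last line, single-word lines, uneven gaps) are most of the difficulty, which is exactly why it rewards recent practice rather than accomplishment.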
I have over a decade of experience, including driving big technical change at one organisation, and was filtered out by a timed leetcode test. I put together a repository of leetcode practice, as I had a feeling that I would have bad luck on the day. They didn't look at this.
The internal recruiter said it kept happening for seniors and people with a lot of experience, but his hands were tied, as the leetcode process was deemed important by the CTO.
If you think it's about flexibility or brilliance or intuition or experience, you couldn't be more wrong. Leetcode and the like are purely about practice, and recent practice at that: all questions follow similar patterns, and if you have them fresh in memory you can write a solution in 5 minutes. But working one out from scratch can take more time than allotted.
So are top leetcoders better programmers? Not really. A lot of them are college students who have the time to practice and move in the same competitive circles. I've interviewed many and couldn't hire even one.
Bingo. I was asked recently to find the maximum subarray of an array, in a live coding exercise. In TypeScript. So, JS, but with type annotations.
The interviewer themself said "this is probably more aimed at someone who just graduated."
It was for a senior data engineering role, where the odds of me implementing classic dynamic programming problems on the regular are slim to none.
They made me an offer anyway, but I just wonder what value they found in that. Oh, he knows about Big O? He's heard of memoisation?
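For reference, the maximum-subarray question above is conventionally answered with Kadane's algorithm, a one-pass scan. A minimal sketch (naming is my own):

```python
def max_subarray(nums):
    """Kadane's algorithm: O(n) scan tracking the best sum of a
    subarray that ends at the current position."""
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)  # extend the run, or restart at x
        best = max(best, current)
    return best
```

It is a neat trick, and also exactly the kind of memorizable pattern that tells you little about day-to-day data engineering.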
They were big on FP, apparently. But not Scala, there was too much FP in that for them, hence the TypeScript.
Mind you my first ever rejection was for a Python role back in 2011 when Python was still very niche in my country, and while I had a portfolio of, imo, pretty decent Python code, they weren't interested because I didn't have a degree, and people without degrees write unstructured code.
Which is a very long way of saying, every interview process ultimately devolves into people hiring people like them.
I was once asked "how do you sort a list of integers?" - the question was so ill defined that I thought it was a joke (in a database?, how long is the list?, in a flat file?, memory only?, on an embedded system?, a dataset in spark? ...).
Thinking, ok, this must be an ice breaking joke I responded, "I don't know, how _do you_ sort a list of integers?". In a condescending tone they responded "are you even a programmer?". At the time, I had been in the game for more than 10 years.
I am really not sure how one could successfully build several companies and have shipped several products without knowing "how to sort a list of integers".
Trying to find a good place to grow your career is difficult on many levels, but if you find leetcode questions silly, you might be too advanced for entry level jobs. ...and you probably don't want to work there.
People straight out of college will spend a month doing 100+ Leetcode problems targeted at the companies they're aiming for. Leetcode tracks problems used at interviews, so there's a very good chance they'll see a problem identical or similar to one they've solved already, and it's just pattern recognition.
After the 3rd or 4th time doing this, practicing the same problems just to pass interviews isn't very appealing, especially if you've saved money and can do something more interesting.
You could try to "wing it" and derive it on the spot, but you'll be outcompeted by someone doing a lookup of a solution + alternate solutions from cache.
Nah, it's just that LC problems take practice to get good at. As with weight lifting, you wouldn't expect to bench 250 lbs your first week, but after training and making progress you'll get there (or close).
If you’ve never done LC (or any competitive programming) you’ll struggle. Practice more and you’ll recognize the dozen or so patterns.
Also LC is not “novel,” maybe at the time when the algorithm was first devised but not when you have 20 minutes to solve one.
The tests aren't testing for competency at the job, and after a decade of experience writing software you have long ago realized that party tricks and cute algorithms are a fairly rare part of the job (generalizing here of course), so you stop thinking about them as much and get out of practice. When they do show up, you certainly don't have to do them in 10 minutes, and I think everyone would rather you didn't anyway, so that you write a robust solution rather than a clever one.
Students are often better at leetcode because school has been drilling this shit into them for the past three years, but it will probably be the last time they see such a compelling algorithmic challenge until their next leetcode exam.
But why must they be mutually exclusive? Are you implying all "robust" solutions, whatever that means, are dumb? Surely you put some thought into making them "robust"?
In general in this type of use, "clever" and "dumb" would better be called "tricky" and "obvious" respectively. Usually people describe very tricky solutions as "clever", and much more obvious solutions as "dumb" jokingly.
For example, storing some flag in the high bits of a pointer field of a struct is a "clever" solution, whereas having a separate bool field is a "dumb" solution. In most cases, the "dumb" solution is much more robust over time (less likely to cause bugs as the code changes and is modified by various people). Of course, the "clever" solution is necessary in some situations (very constrained environment, critical infrastructure such as an object header used for every type etc), but should often be avoided if possible.
What's important is that the way this is often presented is that more experienced people will prefer "dumber" solutions, as experience often shows that long-time maintainability trumps many small losses of efficiency. So using "clever" and "dumb" in this way is not at all intended to put down the engineer writing the more robust version.
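A toy Python analogue of that C trick may make the contrast concrete (real pointer tagging relies on allocator alignment guarantees; the class names here are made up for illustration):

```python
# "Dumb" solution: a separate, explicit flag field. Obvious, robust.
class NodeDumb:
    def __init__(self, offset, marked=False):
        self.offset = offset   # byte offset, assumed 8-byte aligned
        self.marked = marked

# "Clever" solution: aligned offsets always have their low bits zero,
# so smuggle the flag into bit 0 -- the Python-integer equivalent of
# tagging the low bits of a pointer in C.
class NodeClever:
    def __init__(self, offset, marked=False):
        self._packed = offset | (1 if marked else 0)

    @property
    def offset(self):
        return self._packed & ~1   # mask the flag back out

    @property
    def marked(self):
        return bool(self._packed & 1)
```

The packed version saves a word per node but silently breaks the moment someone stores an unaligned offset, which is the maintainability trade-off being described.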
When it comes to everyday software solutions you are usually aiming for obvious and clear, and often 'clever' is obtuse and opaque but definitely not always.
I think there might just be some vernacular nuance here though, maybe we can call it smart and robust, versus clever and opaque. Some problems are just difficult though, and if you get to work on that kind of problem regularly then that is pretty lucky.
Novel solutions aren't as helpful as you think. Pragmatic, simple, vanilla solutions are reliable.
You won't create your own linked list library, you'll use one from the standard library.
General runtime analysis can be helpful - but production, real-world benchmarks trump all theoretical performance values.
Code changes: how do I make a change to a production system with a million-line code base, keep test coverage good, and, when deployed, not bring the entire system down? That's an exercise that coding interviews completely ignore but that is most useful in the day-to-day professional setting.
> General runtime analysis can be helpful - but production, real-world benchmarks trump all theoretical performance values
This is what I always found funny. Most software development work these days is related to web apps. Optimizing that nested loop won't do anything if you have to wait 300ms for some shitty API to answer anyway. It literally doesn't matter; no one cares.
There was a related meme I saw on Reddit some time ago where the senior developer says "haha nested for loop go brrrrr".
I've forgotten most of the algorithm stuff I learned at university that I haven't used in my job. I know how dynamic programming works and I can recognise such a problem, but don't ask me to implement it in an hour. I know how graphs work and what the various algorithms are, but why would I implement any of them when I can just import networkx?
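For illustration, the sort of routine being imported, an unweighted shortest path, is a few lines of textbook BFS. This sketch (pure stdlib, names my own) is roughly what a call like networkx's shortest_path hides:

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Unweighted shortest path via breadth-first search over an
    undirected edge list. Returns the node sequence, or None."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)

    prev = {start: None}        # also serves as the visited set
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []           # walk the predecessor chain back
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None
```

Knowing that this is BFS, when it applies, and that a library already ships it, is exactly the "recognize it, then look it up" skill described above.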
More likely a lot of us don't feel we have to cram for problems suitable to screen entry-level candidates at most.
Personally I'm perfectly happy to be filtered out by such tests and refuse to practice for them, as companies that use them for senior level positions are companies I really don't want to work at.
> What do you suggest for the 99%+ other candidates?
What about (instead of forcing a months-long decision process upon the candidates and the company) bringing them into the company after a short interview (maybe 2 hrs), and making sure they can afford housing, food, and everything else they need?
If you like their work, they stay employed. If, say after one month, you do not like what you see, you can easily let them go. Of course you tell them upfront what the deal is.
We could call it, I don't know, maybe trial or probationary period.
That's a big overhead for both the candidate and the company. Only an unemployed candidate could do that, and even then they'd have to stop interviewing at other places to dedicate the month. No thanks.
I think we'd need to share more detail (and fwiw, I'm half from europe and half from Canada :).
Certainly companies have probation periods. And on paper, that reality and what's proposed in previous post are similar.
But I think there's a massive real world difference between "Default stay hired" and "Default not stay hired".
Probation, as it has currently been implemented in most companies I've worked in, exists, is formal, can and has been used, but is an exception. It's used when there's a massive, unanticipated, egregious problem in performance.
What is sometimes proposed in these threads is effectively replacing long/multiple interviews, with a probation period. While such probation period may look similar or same on paper, I think it's a completely different approach: "We're sure of you (though possibly wrong) so we're hiring you" vs "We're not sure of you so let's hire you and see!". I for one would have only touched the latter with a 100ft pole maybe once in my life. Certainly, I imagine anybody with current job and monthly obligations, would be quite wary in taking a "we don't know so let's try it!" approach to hiring. No, let's figure it out first please :)
I've only done the latter in the form of being brought on as a contractor at (high) contractor rates but with the understanding they'd prefer to have me join full time, at a time when I was already doing contracting and had other clients in parallel covering parts of my costs. In that situation I was not taking on any more risk than I had already chosen (and planned for) by contracting, so it was fine.
It's the only kind of context in which I'd ever consider the "we don't know so let's try it" approach.
> But then why do people with that kind of reputation still (at certain companies) have to jump through these hoops?
Your article points out that in this example: "I'm not allowed to check in code, no... I just haven't done it. I've so far found no need to.".
> If, say after one month, you do not like what you see, you can easily let them go. Of course you tell them upfront what the deal is.
> We could call it, I don't know, maybe trial or probationary period.
You make it sound like it's a better solution for candidates, but it's way worse for many of them and it has been explained by other commenters already.
How many companies actually do this? At which scale?
Some companies have made themselves harder to hire for by having aggressive PIP objectives. Likewise, having a "real" probationary period where you fire, say, 10%+ of employees is not going to make you competitive when candidates compare their offers.
My expectation with such an arrangement is that if you do decide to hire me, since I am now a known quantity you won't have an excuse to pay anything other than top of the market rates. "You said you hire only the best right, and you've seen me work, you want to hire me, looks like the best make $X."
It would be fair if the company had to pay you 5-11 months of salary if they decide no after the evaluation period. That would leave ample time to find another job. Also, in many jurisdictions this kind of arrangement isn't legal, for good reason, as the company has way more power over the individual worker.
We could make sure that a person only has to do that trial once in their career and then every other company should accept that they've done it because they've proven they've done it. We could call it an Apprenticeship or an Engineer-In-Training stage. (Where have I heard those before?~)
>This person should already have enough of a reputation to get a job at many companies, if their work is public enough.
You can't just hire someone based on their reputation at a company of any maturity. That's a legal and HR nightmare. There has to be a process with a semblance of objectivity, and that process has to demonstrably apply to everyone equally, always.
Which in practice means you put out a fake job posting where the qualifications uncannily mirror this person's resume to a tee, and you hand them the job formally after a week.
I've been in situations where I was already somewhat working and onboarding while the faux job ad was up for the two-week (or however long) mandatory posting period. I tallied the hours separately and got paid back after I was hired.
I can't at all imagine why this would be the case. Why would a company have any kind of liability for hiring biases such as reputation (except for systematically refusing candidates from protected groups, of course)?
Hiring based on reputation is the same as hiring based on resume. And it's extremely common in almost any company. Why would it be a legal or HR nightmare?
> What do you suggest for the 99%+ other candidates?
To apply to the 90% of tech companies out there that are normal. Roughly 10% of tech companies are FAANG or FAANG-like; the other 90% are normal tech companies (they'll care about your education and CV, and the interviews are usually just a chat. No IQ tests).
I think it would be exceptionally hard to build an individual reputation in the tech community in general, props to those who have, but building a reputation in a specific industry and local community is much more achievable for everyone.
In smaller industry niches this can be true for companies more than people. At this point in my career, the fact that I worked at Company X is evidence enough that I can do the job Company Y wants me for, since it's a tight industry they essentially know of the work I was doing, even though it wasn't a groundbreaking novel technology of my own.
Wasn't there famously the story of the guy who wrote a package manager (brew?) that became massively popular and was widely used at Google, and yet Google rejected him for a job because he couldn't invert a binary tree or something?
Don't know all the details so I could be missing something crucial, but if not it'd seem that reputation isn't enough
I don’t necessarily think leetcode should be the only litmus test for a candidate but Max is an outlier. Not every candidate getting rejected by a leetcode question is also capable of building out homebrew on their free time.
I had a great track record that was out in the open, actually displaying vast knowledge of algorithms and data structures and exactly the stuff that's being asked in those interviews. FAANG interviewers did not care one bit about it. They actually consider it bias to look at a person's prior work.
They 100% can get a job, and a decent one. I know because I fall into this category, BUT the FAANG salaries are 2 to 2.5 times what I make at this point, which pushes me toward the study-leetcode route.
You are factually wrong. Not only that, you're conflating several different processes and team rules.
Many googlers use brew to install applications on their laptops. This is not against policy. Other googlers work with code stored directly on their laptops. There may even be developers who are obtaining deps (for their own builds) from brew.
The problem with the brew author is that he had every opportunity to make himself look hirable at Google but instead chose to write an incorrect screed and publish it on the internet.
You need a full Santa exemption with business reason to use brew. The average person working on some server that deploys to borg does indeed use their Macbook as a thin client. Who's obtaining deps for their builds from brew? If you're building stuff on Mac, it's via bazel and all your deps are in source control.
I never said anybody is obtaining deps for builds- that's all UncleMeat.
At the time I used brew (5 years ago) it didn't require a Santa exception with business justification (and my justification would have been "I need this for my work"). Fortunately this wasn't really a problem for me anyway, as I don't look at Mac machines as anything other than thin clients.
It is true that this was the problem with the brew author. Homebrew being part of the typical Google workflow is entirely independent of the situation.
But, as usual for internet discussions, it is fun to rathole on side conversations.
The best part is, I was defending the use of Homebrew (which I absolutely hate) and local development (which is far inferior, IMHO, to blaze/forge/citc/piper). I had really hoped releasing abseil/bazel would help, but sadly it was too little, too late.
I’m sure there are a few open source developers who use brew or Xcode directly, but most Mac builds happen in a distributed build system wired up to use remote Macs. Yes, via Xcode. Not on local machines.
The Mac build system is wild. It is still all done remotely with a farm of Macs running Xcode. You still don't build code for running on Macs on your local machine.
You actually do: what Tulsi generates is an Xcode project that has shell-script build steps that call out to Bazel. Bazel, under the covers, will end up calling the clang that comes with Xcode.
Do you work for Google? I'm just doubting you a little. Like the guy from project zero is finding bugs in Windows via ssh to a Linux machine? Definitely going with Doubt here.
There are a few outliers, but yeah. Per policy, no code is allowed on laptops. And because Google's build tooling is very centralized basically everybody works on the same kind of machine.
The folks developing Chrome for Windows or iOS apps might have different workflows, but even then they aren't going to be using brew because of Google's third party code policies.
The first program you build in Noogler training takes more compute and I/O to build than the Linux kernel. The distributed build system laughs at such a trivial program and barely breaks a sweat at programs 10x that size.
Google has a giant monorepo. It is too big for git. (Virtually) everything is built from source. Building a binary that just runs InitGoogle() is going to crush a laptop.
I believe that there are also a bunch of IP reasons for this policy, but from a practical perspective doing everything with citc and blaze is really the only option.
Google lives and dies by its distributed build system. Most devs don’t even have code on their local workstation either; it comes via a remotely mounted file system.
But 99.9% of the builds happen remotely. So local vs remote code just isn’t that relevant.
It is also true that Google spends millions and millions on its dev environment every year, so this isn’t your average “no code on laptops” situation.
Why would I install homebrew on a work laptop if I cannot build or run code on my work laptop? Why would I install a dependency management system if company policy is that all third party code is checked into the repo as source and built using blaze?
Because you can also install applications via homebrew. I use it for a few things, including installing bat, delta and other Rust coreutil replacements that I prefer. I absolutely do not use homebrew for project dependencies.
When I worked for google, several of my projects were developed locally on laptops. My intern (who was developing tensorflow robotics computer vision stuff) used homebrew to install tools. Not everybody at Google used blaze.
what in the world does that have to do with whether he was qualified to work for google? the average developer at google is definitely a much much less proficient programmer than the creator of homebrew.
You shouldn't use Brew to obtain dependencies. This is how you end up with people complaining about a brew upgrade replacing the version of Postgres their project depends on.
You probably shouldn't be using dpkg or rpm for that, either, unless your CI and deployment targets are running the exact same version of Linux that you are, and even then—there are usually cleaner and more cross-platform/distro ways to do it, especially if you need to easily be able to build or run older versions of your own software (say, for debugging, for git-bisecting, whatever). I continue to wonder how TF people have been using typical Linux package managers, that they end up footgunning themselves with brew. "Incorrectly", I suspect is the answer, more often than not.
Where it excels is installing the tools that you use, that aren't dependencies of projects, but things you use to do your work.
Get your hammer from Brew. Get your lumber from... uh, the proverbial lumber yard, I suppose. Docker, environment-isolated language-specific package managers, vendored-in libs, that kind of thing.
I don't install project deps with Brew (it's a bad idea, but, again, so is doing that with dpkg or rpm or whatever directly on your local OS, a lot of the time) but I do install: wget, emacs, vscode, any non-Safari browsers I want, various xvm-type programs (nvm, pyenv, that stuff), spectacle, macdown, Slack, irssi, and so on.
That's fine, but almost nobody is running tools on their MBP. So for this sort of thing you'd be using the package manager distributed with gLinux. And Google is also a really weird island where tons of tools are custom. You can't use some open source tool for git bisecting because Google doesn't use git. You can't use some open source tool for debugging because borg is a weird custom mess and attaching debuggers requires specialized support.
Google uses git. I used to sit next to Junio Hamano, the primary developer of git, and lots of teams that used my team's services were using git. Lots and lots of teams. There was even an extension to use git with google3, which was really nice, but was replaced with a system that used hg instead.
I was very imprecise. Git is used both for OSS stuff as well as some other stuff. But the norm is development in google3 and even if you've got a layer of git commands on top of that, the actual source and change management is being done by citc/piper.
True, although I predated citc and piper and we definitely built apps locally on machines with source code (from perforce). I was strongly advocating that more people switch to abseil and build their code like open source in the cloud (the vast majority of compiled code doesn't have interesting secrets that could be used to game ranking, or make money from ads).
I don't do leetcodes because I am not really into programming puzzles (or parlor tricks :D ), but I roughly tried to do what you described. ~45 minutes it took, deciding in the middle to not worry about the algorithm. Guess I won't get the job ... shrugs.
You won't get the job not because you "didn't worry about the algorithm" but because you didn't ask any questions about the problem; just went straightforward to the implementation. In FAANG interviews that would be a red flag.
This is a myth. It doesn't matter if you ask "questions" but implement an n^2 solution. Unless you implement the optimal solution, usually using a top-down DP or some array trick, you aren't moving forward in the interview process.
In the FAANG interviews I've done you're never allowed to ask questions...? Maybe for clarification of the problem space, but not about the algorithm or the promise of a particular solution.
If I'm the interviewer and you don't ask questions I'm going to rate you very low on your communications skills. You may have the best algorithm in your head and write the most elegant code, but working at a company requires you to communicate your ideas and plans and code and everything else to your team. And no, communicating only at the end (code review time) is not enough. This is not a school assignment that you silently write and then turn in for a grade.
The best interviews I've experienced, both as an interviewer and an interviewee, are the ones that feel like two team members collaborating to narrow down requirements and solve a problem.
> about the algorithm or the promise of a particular solution
It's not about "the" algorithm or "a" solution. It's about you the candidate being able to propose multiple solutions, perhaps with space-time tradeoffs, to provide a recommendation based on your judgement, and to ask the interviewer what they think of your proposal.
I mean, that's great, and I feel the same way. Yet every time—since the introduction of leetcode questions, anyway—as the interviewee I've been asked not to ask questions about the algorithm or the solution, just clarification of the problem space. FWIW I have been employed at several of the FAANGs or whatever they're called now and have also been on the hiring side, where I certainly did not discourage asking questions of any sort.
Facebook interviews were very much interactive with my interviewer probing me on O(n) type questions and me refining it down to be more efficient. I was certainly allowed to answer questions, typically about scale. My code which was on a whiteboard certainly wouldn't compile and a good amount of the discussion was making sure that the interviewer could follow it and that he was satisfied with each of the steps.
My first round I passed with a less than optimally efficient solution, but he was satisfied every step of the way during my work.
While I was lukewarm on the prospect of working for Facebook, the interview process was very positive and reflected well on them.
On a personal level, self interest would have me like the leetcode style problems because I can get most of them right on the first try during a timed interview, without studying. If I were pursuing a job at a FAANG, I might actually study them and I'm sure it would go well for the testing portion of the interview.
However, when I interview this is not what I'm looking for. I'm typically looking for someone who knows the particular language that I'm hiring for. My questions run from the very simple to as deep as they can go on either language or implementation details. From the most junior to the most senior, they get the same starting questions and I expect the senior people to go deeper and explain why they choose something over something else. I'm also testing their ability to explain it to me (not just get it right) as that is part of their job working with juniors.
I really don't even care if they have the names of things right, and I don't count it against them if they get the names of two things backwards, for instance. For example, in Go you'd use slices over arrays a huge percentage of the time. Some people get the names backwards, but they can identify which one they actually use and they know that one can change size. They are correct in usage while misnaming things. I inform them of the name, encourage them a bit, and move on.
I've never liked the "look at this code, what's wrong with it" approach. There are too many contexts that I have to jump into at the same time. There is often an expectation that I find a specific problem with it. I'm lacking the usual tools like an IDE or compiler. What level am I looking at in the code? Does it compile? Are there off by one errors? Cache invalidation? Spelling errors? Logic errors? Business errors?
This guy has missing tests on code that needs to be refactored in order to write those tests. Maybe he has it figured out just right, but the "jump into my code" interviews I've been in on all seemed like they had secret gotchas that the interviewer expected specific answers about.
In short, I haven't seen a proper, repeatable process for interviewing for software development.
Well, good to know, if that were ever an opportunity for me. I'd probably have to invent questions, because it's hard to ask your way out of being dumbfounded on the spot.
The interview for my current job had something simpler: something like finding random permutations, then working out the algorithmic complexity of this randomized algorithm. (It was years ago; I forget the details.) I just talked through the solution. That was nicer than having to come up with questions. :)
Yes, because getting to the right answer is not the point of the interview. Apart from anything else, getting to the right answer may mean you memorised it and are incapable of doing anything else. Always show your working and thought process. Asking questions and showing that you understand tradeoffs and that users have different requirements is a good way to do that.
> given an array of strings and a fixed width, can you format the text such that each line has exactly maxWidth characters and is fully justified
So many questions. What character encoding is the string? What human language is it? Should I honor non-breaking spaces and other similar codepoints? Is the string full of 'simple' characters or 'complex' ones? Graphemes? Emoji? What are the min and max limits on width? How long (or short) can each string be, and how large could the array be? On what system am I running, and does the array fit in memory or is it paged off a disk? Does the font we're using support all of the graphemes present in the string?
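For reference, the classic greedy version of this problem (the one interviewers usually have in mind, sidestepping every one of the questions above) fits in a short sketch. This is my own illustration, assuming plain ASCII words and one column per character:

```python
def full_justify(words, max_width):
    """Greedy full justification: pack as many words per line as fit,
    then distribute the leftover spaces across the gaps (extra spaces
    go to the leftmost gaps). The last line is left-justified."""
    lines, line, length = [], [], 0
    for w in words:
        # len(line) accounts for one mandatory space between words
        if length + len(w) + len(line) > max_width:
            if len(line) == 1:
                lines.append(line[0].ljust(max_width))
            else:
                spaces = max_width - length
                gaps = len(line) - 1
                out = ""
                for i, word in enumerate(line[:-1]):
                    pad = spaces // gaps + (1 if i < spaces % gaps else 0)
                    out += word + " " * pad
                lines.append(out + line[-1])
            line, length = [], 0
        line.append(w)
        length += len(w)
    lines.append(" ".join(line).ljust(max_width))
    return lines
```

None of this survives contact with combining characters, CJK widths, or proportional fonts, which is exactly the point of the questions above.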
Amusingly, I had a variation of this problem as part of an Amazon L8 IC role interview, framed as a Prefix Tree. Solved the problem, didn't get the role. :(
Sadly enough, I'm literally scoping out a new feature at the startup I work at that involves rationally splitting text into lines with a max length and buffers with a max line count so we can interface with a legacy system from the late-80s/early-90s
I've done that in interviews: "oh hey, the most efficient known algorithm is this classic named X". That tends not to win interviews either, but in the real world knowing the name of the best algorithm (or even just that a best algorithm exists and a general idea of what you'd google to find it) is more useful than knowing the details of it. If I need to reimplement a well known algorithm I can often reimplement it from the Wikipedia description, that's trivial and boring make-work (and will always be trivial and boring make-work). But I need to know which well known algorithm sometimes and that's a far more useful practical skill.
Yeah, but in the real world, when this kind of problem needs to be solved, people will most probably sort (O(N log N)), use a priority queue (O(N log K)), or even go with something like O(N*K); almost no one will go with the O(N) algo. And because N and K are usually rather small and the code isn't called too often, time complexity can often be ignored. Still, in an interview any solution slower than O(N) will be called inefficient. And in the real world they will know N and K from the kind of problem they are solving; these won't be hidden in a mist of abstraction with the assumption that "the candidate should ask".
I mean, in the real world you probably use a library method. If I were an interviewer I would not be expecting the candidate know about median-of-medians (O(n) worst case). I wouldn't even expect they know a-priori about quickselect (O(n) avg). But I don't think it's unreasonable that given a few hints, a candidate could understand and implement quickselect in 30 mins. Most people know about quicksort already, and quickselect is not very different. You can even give them the partition and select_pivot function at the start and then if there's time have them fill those in. In the rare situation they haven't even heard of quicksort, you can even write the shell of the algorithm for them, and have them adapt it to quickselect.
Even then, all thats probably a bonus - a priority queue implementation, or many other possible solutions are probably good enough for me.
Moreover, with the micro-optimized SIMD quicksort algos that perennially crop up on this website... I would be willing to bet that "sort and take the first N" is objectively faster than my crappy Python implementation, even though mine is linear time.
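The quickselect idea described a couple of comments up really is small once you know quicksort; here is a sketch of my own (the three-way-partition list-comprehension variant, not anyone's actual interview code):

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs in
    expected O(n): same idea as quicksort, but recurse into only
    the one partition that can contain the answer."""
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))
```

The in-place Hoare-partition version (the one with the `partition` and `select_pivot` helpers mentioned above) is what you'd adapt in the 30-minute scenario, but this is the shortest way to show the recursion structure.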
FOMO is an atrocious hiring practice. The chances of your scenario playing out (you somehow find a person who can create something novel, they stick around long enough to actually produce the novel thing, and that novel thing then "makes it") are nonexistent.
In my team we do technical interviews in three steps:
- an algorithmic challenge. It's related to what we do day to day. I work in domain names, so we ask the candidate to parse a domain name. There are oddities with domain names, so we check multiple things: does the candidate know what basic string manipulation functions exist? Do they ask questions to get more info? How do they react when we give additional info that breaks the code they've written so far? What we don't check: whether the code compiles or actually works. We don't care. We explicitly tell the candidate they can write pseudocode or comments defining the steps of the algorithm. We're interested in their thought process.
- an architecture challenge. We ask the candidate how they would scale an API worldwide. There's no code; it's an open discussion. They can talk about whatever they want: asynchronicity, statelessness, load balancing, replication, anycast, whatever. We can also guide the candidate to find out whether they know specific concepts (for example, I can ask "what would you do if you have a GET REST endpoint that returns the same thing every time" and expect "cache its result"; even with this question I get different answers (which is great): some will talk about HTTP cache headers, others about Redis or in-memory caching, rarely do candidates talk about both)
- a refactoring challenge. We work with tons of legacy code. So we show the candidate a crappy piece of code with performance issues and no tests and ask them for what their strategy would be. No writing code here, just thinking and discussion.
So yeah, just a quick screening to check if the candidate can write basic code (you'd be surprised of the results), and open discussions on our day to day problems.
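For a rough idea of what the domain-name warm-up above might look like, here is a hedged sketch of my own, using RFC 1035-ish limits (labels of 1-63 characters, total name under 254); the real exercise presumably probes more oddities (punycode/IDNs, the public suffix list, and so on):

```python
def parse_domain(name):
    """Split a domain name into labels with basic sanity checks.
    Returns the list of labels, or raises ValueError."""
    # a single trailing dot marks the DNS root and is legal
    if name.endswith("."):
        name = name[:-1]
    if not name or len(name) > 253:
        raise ValueError("empty or too long")
    labels = name.split(".")
    for label in labels:
        if not 1 <= len(label) <= 63:
            raise ValueError(f"bad label length: {label!r}")
        if label.startswith("-") or label.endswith("-"):
            raise ValueError(f"label may not start/end with '-': {label!r}")
    return labels
```

Even this toy version surfaces the kind of "additional info that breaks your code" moments the challenge is designed to produce (trailing dots, empty labels, length limits).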
This is great, I would totally like to interview like this.
The best one I had was a task where you have to basically brute-force an API endpoint that uses a semi-known password (you have to generate all permutations of a string with alternate spellings, e.g. "pA$Sw0rD", and one of them will match). If you succeed, the endpoint returns a URL & token to upload your zipped solution. So you end up with a console application that has: network requests, string manipulation, concurrency/parallelism/throttling if you want (I did it to impress; it wasn't a junior role), file access (zip & attach & send async), some error handling, good console outputs/logging, and comments.
At the interview we basically discussed the code I had uploaded with one of their senior developers: what compromises I made, how I would improve it, and so on. I got the job back then but have since moved on. That was the best interview process I've seen in my career. No leetcode etc., just a basic application that tests whether you know how to do a bunch of different things, without it being a whole framework mess or a whole product, as in some cases. It took about two hours, so the balance between time spent and getting judged on my skill was good; it felt fair & realistic.
This kind of exercise is underrated. A similar one from a former job: the takehome assignment was something like "Here are some CSV files with sanitized real data from our business. It needs to be converted to XML. Use Java (the language our team mostly used then and there) and expect to be judged on your code readability" IIRC there may have been something along the lines of "Bonus: Now we want to be able to convert to/from JSON and perhaps other unknown as yet formats. Refactor your code accordingly"
Other than not using a library, nobody can complain it's unrealistic. Real enterprise devs spend a fair amount of time munging data from one format to another or otherwise "gluing" pieces together. And the kind of person I'd want to hire should be able to complete it too quickly to complain about a "time-consuming take-home assignment"
And it provides plenty enough opportunity to make sure you're hiring someone who writes code that is pleasant for their teammates to work with.
IIRC the process before that was just the usual 5min recruiter chat and after it was a single on site panel interview before offer.
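The core of that CSV-to-XML take-home is genuinely small, which is part of its appeal. A minimal Python sketch (tag names and the header-row assumption are my own; the actual assignment was in Java and judged on readability and refactorability):

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(csv_text, root_tag="rows", row_tag="row"):
    """Convert CSV text (first row = header) into an XML string,
    one child element per row, one sub-element per column.
    Assumes the column names are valid XML tag names."""
    root = ET.Element(root_tag)
    for record in csv.DictReader(io.StringIO(csv_text)):
        row = ET.SubElement(root, row_tag)
        for field, value in record.items():
            ET.SubElement(row, field).text = value
    return ET.tostring(root, encoding="unicode")
```

The "bonus" part of the assignment (adding JSON and future formats) is where the interesting signal lives: does the candidate pull the reading and writing apart behind an interface, or bolt on another copy-paste branch?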
"an architecture challenge" oooh, these are the worst :)
I can usually research a decent solution for most "architectural challenges" in 15-30 mins, but I find it so difficult to have on my brain rolodex a wide variety of possible answers and probe live which of them will not disappoint a specific interviewer :)
Again, this is mainly for screening and understanding what the candidate knows. There's no single right answers: some candidates will talk about assets caching, others will focus on software optimizations, others on infrastructure, or database...
High availability is one of our core problems, so a candidate must be familiar with at least some of the answers. And again, we still guide the candidate on the different points if they're stuck ("what about the db?", "what if the clients are in the US and in Europe?", etc.)
If an interviewer expects a single "right" answer, they're doing it wrong.
In my old job we used to have combined problems for job interviews.
First were some easy leetcode, little Big O, all that.
But then it got tricky: the problem turned into a "design" part where you could see how the applicant would set up a new system, whether they would overdo complexity, etc.
Finally we had tests for communication skills. Given an imprecisely formulated requirement, would the applicant ask us, or would they over-implement?
Usually there was also something that was not achievable, for which we subtracted points if it was attempted unsystematically or at all.
It was a good test, because the simple leetcode calmed the applicants, and for the rest you got to see something much more valuable: their approach to work, their thought process, their output, whether they would go for quantity or quality, and whether they could self-manage their workflow.
The problem with the "easy leetcode" problems is that a lot of them are "gotcha" type of problems: you either know the formula/puzzle-logic/solving-strategy or you'll fail.
I remember asking the typical "traverse an array in a spiral" or "traverse this array in diagonals" a long time ago. The problem is that if the candidate solved it, I just knew they knew how to play with array indexes. And if they didn't, I just knew they got nervous or weren't good with indexed arrays... it didn't give me anything.
That's why FizzBuzz is good: it has no false negatives. If a developer candidate can't do it, it means they cannot code.
This is pretty similar to what we did. We didn’t offer a refactor problem though. We just had a basic problem to create a calculator from a string, only supporting addition and subtraction. Then a high-level question on how would you architect and scale a simple API/service.
We didn’t reject people if they weren’t able to complete the algorithm, because it’s a lot of unrealistic pressure. For both questions, we mostly just want to see how they think things through, how they identify pain points, how open they are to feedback, and discuss their approach.
Even though we would tell people this, I think they still put a lot of pressure on themselves because of the status quo of leetcode interviews.
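The calculator screen described above is deliberately small; a sketch of one reasonable answer (my own take, assuming well-formed input of non-negative integers joined by '+' and '-', evaluated left to right):

```python
import re

def calc(expr):
    """Evaluate a string of integers separated by '+' and '-',
    strictly left to right. Whitespace is ignored."""
    tokens = re.findall(r"\d+|[+-]", expr)
    total = int(tokens[0])
    i = 1
    while i < len(tokens):
        op, num = tokens[i], int(tokens[i + 1])
        total = total + num if op == "+" else total - num
        i += 2
    return total
```

What the interviewers in this thread say they actually watch for is everything around this core: do you ask about negative numbers, malformed input, operator precedence if '*' gets added later, and so on.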
I really like the refactoring challenge idea. Any tips on what’s worked or not? Do you do domain specific code or try and keep it generic? I think having them not write code is particularly good way to handle this.
- Showing code is optional. A bit of storytelling to set the scene is enough. So you can keep it generic, or you can add some details, as you wish! Just make sure that the candidate understands the scene. If you present something generic and feel they don't understand, tell it again with more details. If you think you've lost your explanation in details, start over with less details.
- Be sure to know what answer you want. The number 1 thing we want is for the candidate to talk about adding (non-regression) tests. But the candidate can talk about many different things: profiling, tracing, A/B testing of the new implementation, etc. If they don't talk about the #1 thing you want, try to subtly bring them to that point ("how do you ensure that the new implem works as well as the previous one?")
Not specific to the refactoring challenge:
- Ask for feedback. After each challenge, we ask the candidate their honest opinion on the challenge.
- Grasp a feeling of whether or not you'd like to work with this candidate. Try to challenge them, correct them, ask them to explain things in more details and see how they react.
- We do it with 2 interviewers: main and observer (watcher?). Both from the technical team (so 2 devs). We encourage anyone from the tech team to do it if they want, even juniors.
- We do the 3 challenges in 1 hour but that's a bit short. 1h30 would be better, if the candidate is ok with that.
I would, because my company has been burned before by hiring people who would otherwise have been filtered out by your test. I'd say your test is the kind of responsible, yet reasonable test I'd like to see every company adopt some form of.
I just tell recruiters I simply won't. Turned down continuing with an interview process with a company just last week for this reason.
I'll do take-home programming, I'll do collaborative debugging and coding in a shared IDE with something that looks like a real project, I'll do system design etc. interviews, talk to you about programming, etc. and I'll show you my GitHub etc. projects and you can judge from that, and my 20 year long resume, whether I might be a fit.
Want me to write CS-class algorithm & data structure problems on a timed clock on a whiteboard or equivalent? You've just told me everything I need to know about your engineering culture.
I worked at Google for 10 years and I hated their interview process, it needs to stop spreading to the rest of the job market. If enough of us say to no to this process, it will end.
And to hiring managers: most of you are not Google (thankfully) and don't have a bottomless pit of talent to choose from. Stop pretending otherwise. You'll get better results. If you feel you have to do it, save it for new grads and stop using it for senior talent. All you're testing for is whether people practiced leetcode or whether they're straight out of a CS program.
>>I just tell recruiters I simply won't. Turned down continuing with an interview process with a company just last week for this reason.
I've done the same, and got hired anyway after I told the recruiter I was not interested in taking any tests. I just happened to have the right in-demand skills at the right time, I guess.
I doubt it actually told you anything about their engineering culture. Companies with all sorts of different cultures are copy and pasting processes, there's no reason to expect it to be super representative of anything.
Yeah, leetcode-type tests got popular because the hugely successful Microsoft and Google used them to pick the smartest geeks for the amazing technical challenges of their time. Now that leetcode is being abused by both companies and candidates, we really should find a different way to interview.
I interviewed hundreds of C++ developers as a freelance assessment interviewer. Most of the candidates I interviewed wanted to work in the automotive industry in Europe.
Many candidates (maybe half) are fancy talkers without any skill in writing code. I really don't know why they are applying for dev jobs. It is easy to filter out these people with a very simple coding test.
I agree with the article that 'leetcode' tests (find that complicated algorithm in 30 minutes while I am staring at you) are bad. But I think coding tests are good! Give the candidate a really simple coding task with stuff they normally do every day: create and delete objects, fill arrays, iterate over arrays, and so on. 50% of the candidates will fail! The rest are OK engineers.
Exactly. Give a simple coding task that the candidate does on their own time (but not one that's too complicated, it's rude to waste the time of people who aren't even working for you), and then, crucially, have them briefly explain their solution during the interview. The task can be so easy that reasonable solutions are just 10-50 lines of code – my experience has been that people's explanation of their own work very clearly tells you whether or not they're a decent coder.
What if that test was done once by an external company, and then the stamp of approval is good for a year of job interviews. That would save the candidate and interviewer a lot of hassle!
I agree. You would expect that from a bachelor or master degree. But nowadays you have a lot of people coming out of coding bootcamps, and their level can vary a lot.
> I agree with the article that 'leetcode' tests (find that complicated algorithm in 30 minutes while I am staring at you) are bad. But I think coding tests are good! Give the candidate a really simple coding task with stuff they normally do every day. Create and delete objects, fill arrays, iterate over arrays, and so on. 50% of the candidates will fail! The rest are OK engineers.
Do the candidates know that ahead of time?
Your approach seems similar to what I've done as a coding interviewer and to what my interviewers did when I last interviewed (at Google back in 2007). Assess people's coding with something relatively simple. Make sure they can gather requirements, describe why they chose this approach instead of a couple alternatives, and (if they make a mistake) that they can diagnose it if you describe the symptoms. If you have extra time, have them review some bad code. See if they spot the problems and how they gently help the author understand/fix them. (They also should be tested on system design, but that's a whole other interview slot.)
I'm studying to be a coding interviewee for the first time in 15 years. This process is stressful in part because they just tell you coding on hackerrank/leetcode/coderpad, which is so broad. If the problem they pick is as you describe, I should be fine. If it's for example some advanced dynamic programming problem...well, those haven't come up for me in the last 17 years, so I'm probably in trouble. It's a perfectly valid area of computer science, but it's not one that matches my experience or what I'd likely be doing if accepted. I'm not excited about taking the time to prepare for that, but I also don't want to make a fool of myself if they do. They probably won't pick this area...and if they do, it's probably a bad sign about the company or my understanding of the role...but the possibility is stressful nonetheless.
Don't get me wrong I hate LeetCode-style interviews as much as the next guy. In fact I really, really, really suck at them! Not sure if that says more about my ability then anything but c'est la vie
In the defence of LeetCode-style questions, I do think they work, and very well may I add - with the caveat you have the throughput of candidate to make it work well? Their ability to filter out 'those who can't code' in an efficient manor while sacrificing a small amount where it filters out 'those who can code' greatly out weighs the alternatives. The alternatives needing to fit into a 1 hour timebox, be objective while also favouring the positive cases (I think I got that the right way round).
My two cents would be more about the way in which they are conducted; in my experience I've found conflict with the interviewer more than with the process itself. Interviewers in my past have lacked... empathy (may not be the right word) for the person on the other end of the screen/table who is feeling flustered, nervous, or downright stupid that they're struggling to solve a simple fizz-buzz/reverse-string problem. That leads to a snowball effect, and piling on can affect the candidate in quite a spectacular way. Best interviewer I've had asked if I was alright and got me a glass of water, props to that guy!
I dunno - I've just come to terms with having to learn how to play the game, even if I find that part of the game really hard and in some ways unfair. Such is life.
There's an assumption you're making that these challenges only filter out a small amount of qualified candidates. I suspect the percentage is actually quite significant. In my experience it's usually bigger "looks good on a resume" companies that do these challenges, which suggests they're getting enough candidates in the funnel to be able to afford turning away a lot of more-than-qualified applicants.
I've failed more than my fair share of these challenges, but never (being subjective here) because I wasn't actually capable of (1) solving the problem or (2) doing the job. My take here is that I _may_ have been unqualified for these roles, but that the interview failed to actually uncover it, due to spending all the available time on low signal exercises.
> There's an assumption you're making that these challenges only filter out a small amount of qualified candidates. I suspect the percentage is actually quite significant.
Statistically that’s irrelevant, as they optimize for “do not let through rotten apple”, rather than “find good apple”.
Which is horseshit. I'm currently working at my first bigco, and I would consider the MAJORITY of people they have hired in the last year to be horrifically bad hires, because they have optimized for new hires who can leetcode but can't actually code.
We just fired a person on my team who didn't understand pass by value vs pass by reference, or how to debug in an ide, but she could manipulate strings in leetcode!
That assumption isn't being made at all. That's why it says with sufficient throughput of candidates. If you need to hire 100 people, have 10,000 candidates, 1,000 of whom are qualified, and a 90% false negative rate, you'll get the 100 true positives you need, while leaving 900 well-qualified people pissed off. The process is bad for most of the candidates, but works fine for the company doing the hiring.
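To make the arithmetic concrete, here is a quick sketch using the numbers from the comment above (the percentages are the comment's hypotheticals, not real data):

```python
# Funnel from the comment above: 10,000 applicants, 1,000 of them
# qualified, and a 90% false-negative rate on the coding filter.
applicants = 10_000
qualified = 1_000
false_negative_rate = 0.90

# Qualified candidates who survive the filter (true positives)...
true_positives = round(qualified * (1 - false_negative_rate))
# ...and well-qualified people the filter turns away.
rejected_qualified = qualified - true_positives

print(true_positives)      # 100 -- enough to fill the 100 openings
print(rejected_qualified)  # 900 -- qualified people left pissed off
```

So the process can be awful for 900 qualified candidates and still deliver exactly what the hiring company needs.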
The problem comes when smaller companies that don't have the same high rate of new applicants use the same process and then complain they can't find anyone.
> "Their ability to filter out 'those who can't code' in an efficient manner while sacrificing a small amount"
This is the assumption I was referring to, that the "sacrifice" is small. It's suggesting that the false-negative rate for LeetCode challenges is small, and I'd argue it's actually quite high -- as you also suggest (your rate is 90%).
Big companies also have the headache of standardizing across thousands of interviews. That said you can do that more with practical exercises (build this thing…) than leetcode
Any hiring process will necessarily filter out a large percentage of qualified candidates so it's not a major concern. If I'm hiring for X positions, and there are X + Y qualified candidates, then any hiring process whatsoever will necessarily have to filter out at a minimum Y candidates, and usually Y will be much much greater than X.
The problem is that the total number of candidates who apply for the position, Z, is significantly higher than X + Y by a very very large margin, and I mean orders and orders of magnitude. For every position I post I get on the order of 600-1000 applicants in a matter of a week, even though I'm only looking to hire maybe 2-3 people. Of those 1000 applicants, 80% of them are simply unqualified, and that's being really really generous just for the sake of argument (I'd wager the figure is closer to 90-95%). So once again just for the sake of argument that means 200 of them are qualified, and I'm hiring three, which means any process I choose whatsoever will filter out a minimum of 197 out of the 200 qualified people, no matter what I do.
Given that calculus, it's better for me to focus on making sure that I filter out the 800 people who are simply unqualified for the position even if that means I end up filtering some of the 200 good developers, because I have no choice but to filter out at least 197 of the good developers anyways no matter what, whereas I do have a choice about filtering out the 800 bad ones.
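The same point in code, using the comment's own (admittedly generous) estimates:

```python
# ~1,000 applicants per posting, ~80% of them unqualified
# (the comment's generous estimate), hiring 3 people.
applicants = 1_000
unqualified = int(applicants * 0.80)  # 800
qualified = applicants - unqualified  # 200
openings = 3

# Any hiring process at all must reject at least this many
# qualified candidates, no matter how good the process is:
min_qualified_rejected = qualified - openings
print(min_qualified_rejected)  # 197
```

Rejecting qualified people is unavoidable at this ratio; the only real choice is how reliably you reject the unqualified ones.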
I have had a great experience in a recent interview that used Byteboard. The format was two parts: First there was a design document in a Google Doc for a hypothetical system with three implementation options. All options were defendable, you just had to defend one in an essay-style answer. There were also various comments to respond to throughout the document.
The code part was a small existing codebase simulating the system in said design document. You are given three tasks, and you are explicitly told you are not expected to complete all of them. I ended up using virtually all of the time (70 minutes) completing the first two tasks, and using my remaining minutes writing comments about how I'd complete the third task. When that was complete, I was given 15 minutes or so to describe what I would do if I was given another 15 minutes of time to work on the project.
My only real complaints were that the time limit added some pressure (which I was able to manage reasonably) and that the grading process is opaque. I know a human grades it according to a rubric, but I don't see any of my results. The company I was interviewing for just said "Everything looks great, we're moving you forward to the next stage of the interview process".
The code didn't involve writing any fancy algorithms, but instead getting to know a (very small) existing codebase and understanding how to use it to add functionality. This is much more realistic a gauge of how good of an employee you are than how well you can implement a search algorithm from memory.
In a previous job of mine, we would show candidates a printout of some buggy code, and ask them to find the bugs. We would leave the room and let the candidate work through it on their own. The code in question was basic algorithms and data structure stuff in C++, such as inserting into a doubly-linked list. I always thought it was a good exercise. Suits slow-thinkers and nervous people, and it's a good test of coding ability since if you can spot a bug then you can clearly read and understand the code.
Man, I would rock that so hard and I wish more companies would do this. My current job did a live coding exercise in the interview, which went okay. The interviewer gave me a chance to iterate on the code and improve it after our call and send it by email an hour later. My solution by then was far better than what I'd had at the end of the interview, once I had a chance to gather my thoughts in peace and think everything through thoroughly.
I've had a couple of sysadmin-interviews which were pretty interactive.
You're presented with a laptop connected to a remote VM, and told "Fix MySQL", or "Rewrite the git history in this repository".
Usually these are simple problems, which have obvious solutions. Every now and again you might get a surprise like an immutable-bit set on a file, or SELinux blocking access to specific files/paths, but the good thing about these kind of "challenges" is that you'll usually also have full google access.
I guess l33tcoding isn't really a thing for sysadmins, but I do appreciate a (fair) simple test like that, especially being given the opportunity to talk through the process.
So you basically implemented an analog version of Stack Overflow? :) That sounds really nice, once you get over the idea that people will show deliberately broken code (most folks on SO just don't know better, or at least that's my hope). Thanks for sharing!
Yes, I do exactly the same with a contrived example in a take-home coding assessment (with some failing test cases to not waste their time). Always seems to be a good indicator, especially of their ability to communicate technical concepts.
Was the bug in the algorithm resulting in wrong results? Was it a kind of implementation bug that happens due to peculiarities in the language/compiler?
It was a bug in the algorithm, resulting in wrong results. For example, I recall that to fix the linked-list insertion code, you needed to add a line of code to increment a pointer.
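The exact code from that exercise isn't shown, so here is a minimal sketch (in Python rather than the original C++, with `Node` and `insert_after` as illustrative names) of the classic bug such exercises plant in doubly-linked-list insertion, shown here in its fixed form with the commonly-forgotten line marked:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def insert_after(node, new_node):
    """Insert new_node into a doubly-linked list right after node."""
    new_node.prev = node
    new_node.next = node.next
    if node.next is not None:
        node.next.prev = new_node  # the line candidates typically forget
    node.next = new_node
```

Leave out the marked line and forward traversal still works, so the list looks fine until something walks it backwards - which is what makes it a good bug-spotting exercise.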
Considering the normalization of spending hundreds of hours grinding LC questions and the industry built around whiteboard interview preparation, my (n=1) conclusion is that LC interviews are not about technical assessment at all.
It's an assessment that's designed to find people who are ready to submit to an endless grind with little to no skepticism. Developers who question the technical usefulness of LC interviews are simply not the target audience anymore. The target audience seems to be potential employees that are hungry and without leverage.
The average college graduate makes ~60k out of school. The median law school new grad makes ~80k. Yet a large group on Hacker News acts like anyone who studies enough to get into a FAANG company fell for some horrible scam.
I've studied less than 200 hours in my life and now make ~320k and expect to get ~450k when I switch jobs later this year (both remote)– I have only a high school diploma. Chalk me up as another victim of big tech... I guess I should've been more "skeptical".
For me, there's an element of jealousy here. I have probably 20k+ hours of programming experience, and I don't make anywhere close to 320k. It is hard to read comments like yours against the backdrop of my experience, despite the fact it was very privileged. I try to take it as understanding that I haven't dipped my toes in that side of the industry yet.
Hope that adds some perspective to things. (I'm not trying to justify the extent of some responses, just let you know my take.)
Imagine misattributing FAANG comp to any kind of personal competency.
My FB interview panel was the biggest clowncar of unwarrantedly self important people.
Anyway, pretty orthogonal to your comment, but kind of hilarious seeing what comp can do to people's ego, and perception of self
I made it rich through hustle and major contributions to a startup from the early days. Most big tech employees are a cog in the machine, along for the ride
Somewhere out there, there could be people making 300k/year, responsible for Youtube "continue watching?" modal not hiding when you press Space to unpause the video. It's been like that for years.
> Considering the normalization of spending hundreds of hours grinding LC questions and the industry built around whiteboard interview preparation, my (n=1) conclusion is that LC interviews are not about technical assessment at all.
The “hundreds of hours” grinding LC is largely for juniors without experience. I don’t know any senior engineers who had to grind LeetCode like that for their FAANG interviews.
Disagree. I’ve interviewed quite a few people who are prepping and are senior. Even for those who have been at FAANG before - it’s common to spend multiple months prepping nights and weekends. Taking a full two months off just to study is uncommon. But if you think just doing an hour a day is sufficient - you’re wrong. Most candidates (even senior ones) fail with that level of studying.
I know because I’ve talked to dozens if not hundreds of people who have tried to get into FAANG with that level of study. It definitely takes more for the average person. Some people get lucky and spend maybe two weeks studying for a couple hours a day. But they’re lucky and shouldn’t be considered the norm.
Also - LC and the whole process are very malleable. You can say a candidate did well if you happen to like them for some other reason, and you can knock them down if you don't. I've seen it go both ways: terrible candidates get offers and amazing ones get rejected. Ultimately, LC is still only part of the interview. If you are really handsome and charming, you might get an offer even if you're not very good at LC.
I knew someone who basically contributed nothing at my company and spent most of his days studying LC. He got an offer and left. I wonder if he is still playing the system or if they are getting any work out of him.
> there are multiple blog posts from senior engineers who took 1-2 months off to grind LC just to get into Google/Meta etc.
Don’t read too much into blog posts. This is engagement farming to capture search traffic and trending topics related to LeetCode.
Taking months off to grind LeetCode all day isn’t common and doesn’t even make sense. LeetCode can be done on a lunch break. Even one problem per day in the evenings is more than enough for a senior to prep for an interview.
Having done LeetCode, I’m not even sure how a senior could justify spending 2 months doing all of the problems full-time.
I think people like to exaggerate the difficulty of these interviews to excuse themselves for not having one of these jobs.
The lifestyle/compensation gap between top tier tech and everything else is enormous and it's a tough pill to swallow that the only thing keeping you out is some light studying every day for a few weeks/months.
> Don’t read too much into blog posts. This is engagement farming to capture search traffic and trending topics related to LeetCode.
If that's true, then it should be fairly simple to defect from this prisoner's dilemma- just publish articles saying, "I didn't have to take months off to grind LC, I completed it through light prep and you can too!" In the realm of blog self-help, simple advice, framed with this sort of counter-common wisdom contrarianism, can be as popular as the ones that follow trends. Often even more popular.
But you don't really see articles like that. You do see some pro-LC articles from interviewers' points of view, but none saying, "It's actually easy! Here's three simple tips," despite the potential for search traffic capture.
Having conducted some interviews myself, I can see some value in having the interviewee do a small coding exercise when they do not have any personal projects to show. I think it can filter out some false positives: candidates who can talk well but can't actually write code (you'd be surprised how many can't).
What surprises me most is the lack of flexibility in the process. If a candidate shows up with a broad portfolio, I'd rather talk about that than do some random coding problem. Yet our HR manager insists on the fixed program. This is worse when the candidate is interviewing for a senior role, where I don't really care about the coding exercise. Then I am mostly interested in their past experiences and knowledge of how to build things that don't fall apart after six months.
Again it is definitely process over people here… not sure if it is better in other places.
I would also say that these exercises are most effective when they are quite simple. They let you test 'can this person write a function'. The complicated ones often filter more for people who have studied those types of problems. Harder problems != better coder. At least not for the projects I work on, which are more about integrating existing services than inventing novel, highly efficient code.
Yeah; that's what I've preferred to do. Interestingly, I've had just as good results with the hires I've made regardless of the process used, from whiteboard coding that is little more than fizzbuzz, to actual pairing with an engineer on a real problem, to leetcode, across multiple different companies. Maybe the softer part of the interview, which was largely the same everywhere (and prioritized people who could talk intelligently about stuff on their resume, who were comfortable saying "I don't know" when led to places they had no experience, and who seemed interested in learning new things), was better for distinguishing mediocre from excellent (with the coding being helpful for distinguishing imposters from everyone else).
You know after going through like 10 or so interviews this year while trying to find a new job, I grew to appreciate leetcode tests as a candidate more.
The first thing you start to realize when interviewing is that every company has their own unique process for interviewing. The second thing you realize is that you're not going to ace every process every time, it's a roulette wheel you're spinning to see if this specific process and this specific day and this specific set of interviewers and this specific mood you're in are able to align in a way that they feel confident giving you an offer (leetcode or no leetcode, doesn't really play in to this factor).
The nice thing about leetcode tests as a candidate is that you can study for them, go through a few rounds with different companies, get better, and know how to improve for the next interview. When companies drop the leetcode tests, you end up getting judged on whatever arbitrary criteria and testing they devised in its place. If you come out of that interview not doing well, you can't really use it as practice for the next one, because the next company you interview at may have a process/test that doesn't overlap at all with the previous interview. Now you don't have to practice leetcode or spend an hour doing something that's not directly applicable to your day-to-day job, but instead you're subjected more to the whims of randomness and whether you were prepared to satisfy whatever process they came up with, which may not be shared by other companies.
Leetcode isn't great, but it's also not that bad. Some companies (i.e. Google, Facebook) you will have to get very in depth with practicing and knowing data structures and algorithms to do well in the interviews. A lot of other companies you can pass the leetcode with a lot less work, just need to have a basic refresher on graphs, trees, linked lists, etc. Other companies yet, they won't ask you leetcode at all (but that doesn't mean the job/company is good in other areas either, it's always a tradeoff).
It's quite true that companies that don't do Leetcode aren't necessarily easier to get into, but whether you get in is more random. And you often need to study specifically for that one company, which may ask you, a backend developer, to do a pair-coding exercise implementing a RESTful web API on the spot, even when hiring for a backend position.
I try not to be a part of the problem when I'm the interviewer, giving a problem that would be described as a Leetcode easy, with an optimal implementation that is below easy. But I want to see if the candidate can understand the problem, the big picture, talk about tradeoffs in the problem, properly analyze performance, properly test their code, etc. Most candidates struggle with this, many can't even code a working solution, and this is the talent pool for a "top-tier" software company.
> Deal with ambiguity, Reviewing code, understanding what it does, finding gaps, Testing, Code structure, Cleanliness, Learning new concepts
Yeah, no.
A lot of these are culture. I'm fairly confident I can teach someone smart and competent to write clean code, add tests, and properly modularise the project.
It's called training (progression from "junior" to "senior"), I think more companies need to invest in it.
What you can't teach someone is how to (1) be smart, (2) understand how computers work, and (3) be passionate about tech. That's what interviews are supposed to test, and leetcode (plus some deep discussion, e.g. "how does a hashtable work", leading deeper into the details of CPU, memory, instruction scheduling, optimisation, ...) does that.
Why not just use some proxy-IQ test, if a big concern is to hire smart people? Aptitude tests are perfectly legal - and much harder to rote memorize / "game" than LC questions.
People who call for Leet Code either never tried to hire actually high skill developers or fools themselves into thinking that they're the cool kid in town.
You might think it's better than any other alternative. I agree - it's been an amazing way to filter out companies that are, on average, quite crap to work for.
From my personal experience, they are overburdened with process to a degree that even if they hired the best devs out there, they wouldn't be able to deliver anything because of all the red tape.
The best jobs I had to date, I met the person leading the company/project/team, we had a chat, talked about what tech we like and dislike, how we'd structure a product, and what our preferences are for the process around everything. And that's the key thing - it was always a discussion, not a Q&A. The key is that the candidate is not the only one who needs to know their stuff - so does the lead.
As a side effect, all of those jobs paid way above the market. Again, personal experience, but the higher up you go, the less BS like "we need leetcode to hire" you get. Unless you're Facebook and you have a genuine problem of too many qualified engineers constantly applying, you should aim to only disqualify truly hopeless cases.
The company can't hide behind process and expect great hires. Early in my career, in the small city I was working in (where, in the dev community, you know roughly what other devs are doing), our company turned down so many devs who within a year or two were among the top performers, just because of the leadership's insistence on take-home tests, Q&A interviews, and gotcha-style questions...
So please - do continue using leetcode, it makes filtering your company out so much easier and I don't need to go through bullshit stages to know that the leadership has no balls to make the hard calls when it comes to hiring & firing.
This is a tired topic at this point. The mistakes people make are:
1. An interview process exists to fill a position. It doesn't exist to fairly assess an individual candidate. Candidates would like that, but that's not the point. If there are 10 candidates and the employer fills the role successfully, they've achieved their goal even if someone great was filtered out;
2. FizzBuzz came about because many people talked a great game but couldn't code a for loop. Giving a simple coding problem is an excellent negative filter. Doing great at the problem means nothing; and
3. Interviewers make the mistake of thinking FizzBuzz is too easy so they give harder problems. This is a mistake that defeats the entire purpose of the filter. Stop doing this.
These points remain constant in every such engineering hiring or interviewing thread.
And even FizzBuzz could end up challenging if you happen to have no familiarity with the modulus operator -- which isn't that implausible for junior programmers.
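For reference, the whole exercise hinges on that one operator; a minimal FizzBuzz sketch in Python:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(5))  # ['1', '2', 'Fizz', '4', 'Buzz']
```

The point of the filter is exactly that this is trivial for anyone who codes regularly, so failing it is a strong negative signal while passing it means almost nothing.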
That's really not how it works in Big Tech. The company is ravenous for anyone and everyone in the world who clears "the bar." Requisitions and headcount quotas are a way of apportioning bar-clearing candidates among teams/EMs, but the company never reaches some state of having filled the open positions and being done with hiring. And as an individual engineer, you're expected to help interview for sister teams (sometimes quite distantly related) so you never do either.
Even when hiring for my own team I have never seen a discussion like "okay we've seen 7 candidates for this role, let's pick the best one." It is always just take it or leave it for each candidate as they come. The goal is to get as many people as possible in the door. Making the interview easier would fit our goals quite well, it's just a social taboo, so instead we do things like cast a wider net and spend more hours interviewing.
It's much better than the alternative and ideally only one part of the interview process.
Ultimately the proof is in the pudding, I've had loads of candidates that could barely code and Leetcode-style problems are a great filter against that.
Otherwise you risk just getting PM-style bullshitters as engineers, who talk persuasively about projects that other people actually implemented.
Being able to do leetcode challenges is only roughly correlated with being a great engineer.
The more I learn about software development, the more I try to _not_ have leet-code-style parts in my code - only in very rare circumstances would I see something like that and say: yes, there is _no_ other library out there with a battle-tested algo to solve this particular problem, _and I have to code my own_.
Worse, people who excel at Leet Code could start to bring that sort of thing into your codebases, optimising small parts of your code to perfection and then leaving the whole thing a mess, or plainly refusing to work on solving business issues so they can concentrate on polishing their small bit of algorithm.
Of course there are positions where that sort of mindset is welcome and sought after, especially in very big companies. But for most developers out there, leet code is _just_ for the interview part, and they would (should) never use that kind of problem solving in their day-to-day work.
When I have to interview somebody, and there's leet code involved:
a) they are _really_ comfortable with those types of challenges - that tells me they have spent a lot of time preparing, either for this interview or just generally; either way, that wouldn't _really_ tell me much about their problem-solving skills.
b) they feel uncomfortable and fail to solve anything due to stress or anxiety - again, no real knowledge gained, as real-world work environments tend to minimise those, at least in the places where I work.
c) they feel uncomfortable but get the hang of it - now I _might_ have a glimpse of their problem solving skills, but I've put a human being in a very uncomfortable position just to _test_ them. There _has_ to be a better way.
The way I like to conduct interviews where possible is to ask people to walk you through some of the code they've written and ask various questions about decisions - much more relaxed and I still get the sense of how they organise their stuff and how they work.
If that's not possible - a take home task or something. Or even "hire fast fire fast" approach works too.
> c) but I've put a human being in a very uncomfortable position just to _test_ them. There _has_ to be a better way.
> ...
> Or even "hire fast fire fast" approach works too.
Comment "c" shows a high level of care/concern for the candidate. "hire fast fire fast" does not.
What if the candidate left an unsatisfying but steady job to join your company? And they get fired in the first few weeks. Now they are unemployed. This is much more uncomfortable than an awkward interview question.
I think it's fair to do two Leetcode Easys - ideally with problems that allow you to ask further questions about memory management (when are allocations triggered, etc.), recursion, stack frames, adjacent memory location for efficient reads and so on.
The problem is that a take-home task can be impossible for some people already in a full-time job with kids, etc. and so has its own bias. And the "hire fast fire fast" approach doesn't really exist here in Scandinavia, even though there is a probation period it's still frowned upon to use it for anything but extreme circumstances, and if done en masse would likely raise issues with the trade unions.
>memory management (when are allocations triggered, etc.), recursion, stack frames, adjacent memory location for efficient reads and so on.
This sounds incredibly specific. Most places requiring at most LC easy don't really need any of this, and I'd be worried about the interviewer looking for key words and specific answers over actually testing whether the candidate grasps things or can learn what is necessary.
Then part of the interview schedule should be allocated to the take-home. Say, a one-hour interview block: at minute 1 the candidate is sent an exercise, and at minute 45 a Zoom call starts to discuss things.
That way, some of the pressure is off of coding with someone looking over your shoulder, and you still have time allocated off already.
Leetcode should only be used for easy problems - FizzBuzz types - where you are checking that the candidate can code at all, probably as the very first round of an interview. This way, I think it is totally fair, including a relatively tight time limit. The harder the problem is, the more you focus on the special skill of leetcoding, and not much else.
It’s absolutely hilarious you think leetcode would give you the skills to design anything larger than a simple website. I don’t recall leetcode ever covering how to set up infrastructure with enterprise-level reliability, where seconds of downtime mean millions of dollars lost. But you’ll be really good at hash maps and quicksort, which makes for good worker drones, not necessarily good engineers. That requires outside-the-box thinking, the opposite of the memorization leetcode encourages.
> It’s absolutely hilarious you think leetcode would give you the skills to design anything larger than a simple website.
The point of leetcode is not to show what you can do, it is to show what you can't do.
To keep with your analogy, being able to design a simple website doesn't make you able to set up an enterprise-level infrastructure, but if someone isn't able to design a simple website, I don't want to hire them for my enterprise-level infrastructure.
As for "outside the box" thinking, it only works if you know where the box is, otherwise you are just being clueless.
And by the way, because it is an argument I see way too often: calling the "sort" function of your favorite library instead of doing the exercise "as intended" is not "outside the box" thinking, it is literally the most obvious thing to do. What could count as "outside the box" would be realizing that you can solve the problem more efficiently by not sorting anything at all, which requires you to know your sorting algorithms well enough to show that it is indeed more efficient.
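As a hypothetical illustration of that kind of insight (my example, not the commenter's): checking whether a list contains a pair summing to a target. The reflexive answer is sort-then-scan in O(n log n); knowing the alternatives lets you do it in O(n) with a set, no sorting at all.

```python
def has_pair_with_sum(nums, target):
    """Return True if any two distinct elements of nums sum to target.

    Single O(n) pass with a set of values seen so far -- no sorting,
    unlike the sort-then-two-pointers approach many candidates reach for.
    """
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_with_sum([3, 8, 1, 5], 9))  # True  (8 + 1)
print(has_pair_with_sum([3, 8, 1, 5], 7))  # False
```

Arguing why the set version beats sorting (and when it doesn't, e.g. under tight memory constraints) is exactly the kind of discussion that shows someone knows where the box is.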
“Absolutely hilarious” that you’re forgetting there is usually a system design round that tests things like “setting up infrastructure on an enterprise-level”.
The coding round is supposed to test that.. you can code.
For the record I’m also not a fan of the Leet Code style rounds, even though I actually find completing them (outside of an interview) quite fun.
There's quite the distance between FizzBuzz and LeetCode medium/hard questions. If you want to filter out candidates who can barely code, just have them walk you through a small code-base containing bugs.
What does the code do? Does it work? How would you get it working? How do you test it? What is the complexity? etc.
I really disagree here; I've been a successful engineer at a Fortune 500 company for a decade. I'm a front end engineer. I still have absolutely no idea why companies choose to test FE engineers with leetcode style questions (which I am terrible at). My job has almost NOTHING to do with leetcode, and yet it's a huge hurdle to go over for some unknown reason.
I'm a great engineer, I just suck at leetcode questions because...shocker! They have absolutely nothing to do with my day-to-day. It's similar to puzzles; I hate puzzles and doing puzzles/escape rooms/etc, but I'm a pretty smart guy.
Leetcode is merely a gatekeeping methodology that proves almost nothing and weeds out a lot of amazing engineers who aren't skilled at grinding leetcode or hate brain teaser shit.
> A first alternative is to look at some of the candidate’s code to begin with.
This is biased towards people who code for free in their free time. I'd call this terrible advice for most companies, since it filters out the vast majority of qualified candidates.
I have some open source contributions on my GH account, but none of them are representative of how I code since they are all bug fixes and/or small enhancements which are shoehorned into an existing codebase, since my goal is to add a feature with the smallest number of code changes (in order to increase the likelihood of a PR getting accepted).
Every time this comes up, lots of people start with the assumption that coding problems have a large number of false negatives.
Because of this perceived "truth", I had the same worry when we started to implement a coding problem. Rather than guess, we decided to measure it: for the first six months, we used a wide filter (50% pass rate, actually like 65-80% of people who didn't cheat).
What we found was that there were zero candidates in the bottom 66% who passed the rest of the interviews. The plagiarism detector also had no false positives (based on manual review). So at least on this sample, we found that we could screen out about 80% of applicants without having _any_ false negatives.
I'm sure there are some bad employers misusing coding tests, just like with any tool, but I have to imagine many others have done similar experiments and found their tests to be effective.
The amount of false negatives really depends on where the bar is for passing the interviews.
If the bar is "it looks alright and the person knows the language they're using," it's probably not generating a whole lot of false negatives. If the bar is "the candidate writes the optimal solution to a complex algorithmic problem in 45 minutes," then that's highly noisy and tends to filter for people who have done a very similar problem recently.
Unfortunately, too many interviewers use the latter bar for passing or failing candidates.
Last xmas I worked on a side project that ended up requiring a moderately substantial algorithm; much more involved than anything I've ever had to do in a LeetCode interview - there's absolutely no way I'd be able to produce it under those circumstances. Yet the algorithm wasn't the hard part.
The hard part was all the exploration I had to do around the problem to get to the point where I understood the constraints well enough to solve it. I had to rewrite my attempted solution a number of times as my understanding grew. In my opinion, this represents the real thing we should be trying to test - a candidate's ability to unearth the true definition of the problem. When it comes to writing real world algorithms, whether you can solve it in 30 minutes or 3 days is (mostly) irrelevant, because it makes up such a small part of the overall engineering time.
You could squint and make a case that this is what LeetCode challenges are doing, but I'd only agree if we removed the requirement/pressure to have working code passing all the tests within the time allocation; and to be honest, most of them just hand you all the constraints on a platter.
It was somewhere between a sudoku solver and wave function collapse. I had to work out which tiles to use to accurately reproduce the slopes when rendering a Mario Maker 2 level. Most of the level tiles are just X/Y/TileID, but for slopes all you know is the start/end nodes. So you have to look at all the adjacent tiles to work out the correct TileID to use for each X/Y pair within the bounding box of the slope. But there's also ambiguity where there are multiple possible tiles with the same edge properties, so I had to work out the right priority/weighting to favour certain tiles over others.
I had to derive and then write out all the edge rules by hand, which was many hundreds of lines.
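The tile-selection idea described above might be sketched roughly like this (the tile names, edge signatures, and weights are all invented; the real rule table was, as noted, hundreds of lines):

```python
# Every tile ID is described by which of its four edges
# (top, right, bottom, left) are solid, plus a weight used to break
# ties when several tiles share the same edge signature.
TILES = {
    "slope_up":   {"edges": ("air", "solid", "solid", "air"), "weight": 2},
    "slope_fill": {"edges": ("air", "solid", "solid", "air"), "weight": 1},
    "ground":     {"edges": ("solid", "solid", "solid", "solid"), "weight": 1},
}

def pick_tile(required_edges):
    """Return the tile ID matching the edge signature derived from the
    cell's neighbours, preferring higher weight to resolve ambiguity."""
    candidates = [(spec["weight"], name)
                  for name, spec in TILES.items()
                  if spec["edges"] == required_edges]
    if not candidates:
        return None
    return max(candidates)[1]  # highest weight wins the tie
```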
Just talk to the person: ask them about projects they’ve worked on, problems they’ve solved, etc. You’ll learn far more about them that way than by getting them to put on a dog and pony show at a whiteboard!
I've found that it's relatively easy to figure out if someone is bullshitting. It just takes some work on your end by researching the candidate's application a bit deeper, thinking about relevant questions beforehand, and then drilling down during the conversation. I've hired 'non-silver-tongued' candidates this way, as well as filtered out people who were bullshitting. The key is to actually have a conversation, instead of just letting them produce canned stories and responses.
But, yes, this is more work on the recruiting side.
As a side note, I totally get it why large organizations would use Leetcode though. It does help filter out false positives if you're willing and able to bear the cost of passing on many good candidates.
Yep. I've found that you can give someone enough rope to hang themselves. If you let them explain their processes and ways of working, most of the bullshitters will out themselves :D
If you hear bullshit, then that's easy: you filter out such people.
Not really silver-tongued candidates? Well, what's the problem? Unless you are looking explicitly for silver-tongued candidates, you should ignore this "trait".
I have to say this is how I've always interviewed people - I tend to be optimistic and assume people don't lie about their skills and experiences. So when I've been asked to interview people I mostly want to talk about projects, hobbies, and general things about developing, debugging, and the process by which people solve problems.
I don't want to know if you can do specific things, if you can't you'll learn, but I do want to know if I'm gonna find it easy to get along with you, if you seem like an interesting person, and if I can stand being in a small room with you for 7 hours a day.
I suspect this is just another kind of bias, but I've had good results.
Great to hear this. We use a similar approach, and we sometimes get flattery in the form of "this was the nicest interview I ever had". I believe in a human-centric approach (the golden and silver rules), and if candidates have the fundamentals right, they can learn the rest of the technical skills on the job, and quickly too. They also stay for many years: low turnover, more cost effective.
I thought that was one reason for the standard 90-day trial period. Instead, a lot of time is spent in interviews looking for the perfect candidate (who doesn't exist).
I think an extra hour to talk will give me more information than watching them balance a binary tree.
For many people the stress of the leetcode session is just awful, I don’t think it’s a nice thing to inflict on someone if you’re ultimately going to disregard the outcome anyway!
After hundreds of interviews for a FAANG I've become more and more wary of this format. In particular, I find the leetcode-style "problem solving" question to have the highest rate of false negatives. It's common for candidates to bomb that question and do well on the rest.
My favorite way to judge candidates now is by asking a “clean code” question. This doesn’t refer to Uncle Bob’s Clean Code, but to code that is simple, maintainable, and extensible. I give candidates a simple and slightly ambiguous problem statement, usually revolving around “write a library that does X”. I expect candidates to ask questions and clarify the ambiguities, then proceed to define the APIs and finally write the code. The implementation is straightforward, with no tricks or logical puzzles, using only simple structures such as lists, hashmaps, and loops. Then I ask one or two follow-up questions with more requirements, such that they need to modify or extend their code. Depending on how their code is organized, this might be trivial or very complicated.
I feel this format is the closest to on-the-job work and gives me a good feel for what it would be like to work with these people. It also has a lot of freedom and allows one to peek inside the candidate’s mindset. How do they deal with ambiguity? How do they approach API design? How do they handle incorrect values? Do they care about corner cases? It is also mostly devoid of what developers hate most, e.g. trick questions and obscure algorithms.
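As a rough illustration of this kind of question (the problem, class, and method names here are invented, not the commenter's actual exercise): "write a library that tracks word frequencies", followed by a "now return the N most frequent" extension.

```python
class WordCounter:
    """Toy 'library' using only a dict, a list, and loops."""

    def __init__(self):
        self._counts = {}

    def add(self, text):
        # Ambiguity the candidate should surface: case sensitivity?
        # punctuation? Here we assume lowercase, whitespace-split tokens.
        for word in text.lower().split():
            self._counts[word] = self._counts.get(word, 0) + 1

    def count(self, word):
        return self._counts.get(word, 0)

    # Follow-up requirement: "give me the N most frequent words".
    # Trivial because the per-word dict was kept; painful if add()
    # had only tracked a running total.
    def top(self, n):
        ordered = sorted(self._counts.items(), key=lambda kv: -kv[1])
        return [word for word, _ in ordered[:n]]
```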
Been an interviewer and interviewee recently, so being on both sides of the track has given me some perspective.
This is the current process that I think is fair and holistic:
1. meeting with the candidate, our manager, and some devs talking about their past exp., our company, our team, and their wants
2. Take-home coding task based on our day-to-day work: this is linear, with direct instructions for inputs and outputs; there is an optional part at the end for testing more tricky concepts. They are instructed to write clean and clear code, with no stress if they don’t finish, and to take their time: they have a week to do it (it’s a few hours of work).
3. Interview with them walking through their code on their machine and describing their thought process, field questions from them if any are left.
Then we decide by a team discussion afterwards.
Gives them space to think, with reduced pressure for candidates who find social settings stressful.
Thoughts?
I personally detest leet code as a recruiting tool.
> there is an optional part at the end for testing more tricky concepts.
If the candidates are anything like me then any optional or bonus features will be considered mandatory. I have no way of knowing what percentage of other candidates do the optional work and so I have no way of accurately assessing the risk of not doing the optional work myself. I will ignore your suggested timebox if the optional work will take longer and then I'll be a little pissed off at you for how long your take home assignment took me.
I used to work in a bank and wrote an interview script for my department (kdb+/q market data). It mainly consisted of having the candidate sit in front of an interpreter while I walked them through a script which looked something like:
1. Load a data file here
2. Tell me some facts about the data
3. Here's another dataset, can we use both to figure something out.
4. One of the executions for this order is missing, how can we find which one.
5. Here is a data feed, can you write a process to ingest the data and calculate something in real time.
I by far preferred this system to the alternative which was to ask trivia questions and see if the candidate memorised the docs. There is of course some value in asking basics, or to elaborate etc. But on the spot algo questions are usually only useful in filtering people who either like leetcode problems, or have grinded them for the last 6 months.
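The original script was in kdb+/q; a rough Python stand-in for steps 2-4, with invented column names and sample data, might look like:

```python
import csv
import io

# Invented sample data standing in for the loaded files (step 1).
orders_csv = "order_id,sym,qty\nO1,AAPL,300\nO2,MSFT,100\n"
execs_csv = "exec_id,order_id,qty\nE1,O1,200\nE2,O2,100\n"

orders = list(csv.DictReader(io.StringIO(orders_csv)))
execs = list(csv.DictReader(io.StringIO(execs_csv)))

# Step 2: some facts about the data.
total_ordered = sum(int(o["qty"]) for o in orders)

# Steps 3-4: combine the two datasets, then find the order whose
# executions don't add up to the ordered quantity.
filled = {}
for e in execs:
    filled[e["order_id"]] = filled.get(e["order_id"], 0) + int(e["qty"])

short_orders = [o["order_id"] for o in orders
                if filled.get(o["order_id"], 0) < int(o["qty"])]
# O1 ordered 300 but only 200 filled, so it shows up as short.
```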
Totally against using LeetCode for interviewing engineers.
However, when asking around about why people who use it do so, I found out it does have one irrefutable advantage: it stops people who can't code at all.
From an engineer's point of view, LeetCode is a complete waste of everyone's time because it measures things that aren't factors in successful engineering (as TFA says).
But from the non-technical manager's point of view it's awesome, because it gives a single, simple score for "how good is this engineer compared to the other ones?", and people who can't code at all can't complete it.
The non-technical manager's worry when interviewing is that they hire a really expensive employee who can't do the job. But because they don't understand the tech, and the tech is complicated and even expert engineers spend lots of time fighting it to no apparent end, it's really hard to understand if an engineer is incompetent and bullshitting them, or actually good but the problem is hard. Having a nice, easy metric that stops the complete bullshitters from getting in solves a problem for them.
What we need, obviously, is a professional association for software dev, that can then properly test us and verify that we can do the things we say we can do. But the industry has a lot of growing yet to do to get to a point where this is even possible.
That's ok. There will be a prep school for the best. People will study to the test, and we'll have "cracking the association".
People will always study to the test. So it is essential that when you interview, you test for what you are actually looking for. This is why using 3x leetcode interviews is a bit silly.
Testing if someone can code is 100% reasonable. If a company wants to waive their coding test for me, I tell them they shouldn't and that they should NEVER assume someone can code.
Yes, that means your company needs to have questions, and an interview design, that actually look for what your company truly wants. Amazingly, this CAN be done. Interview design is hard, but very possible.
But asking an EASY leetcode question is 100% reasonable.
I recently failed interviews at Google and Amazon, at exactly this sort of algorithmic problem. And that's quite far from my daily work on client applications. I knew it's not my strong side; otherwise I would have applied before they approached me. Now I can tell everyone: I knew quicker than Google that I'm not the right fit ;)
I can relate so hard with the part which says it does not favour slow thinkers. I've faced this countless times - person at work asks me something, I reply "I'll think about it and get back to you", which inevitably leads to their disappointment.
I am okay with disappointing people, but it can be unnerving when that disappointment means I miss out on a good opportunity.
To be honest, if I were an employer I would probably do leet code or something similar. The logic: yes, many engineers are shy or nervous around people, but losing those candidates is worth it, because hiring someone bad whom you have to fire (wasting some months of productive time at the company) and then replacing them is the bigger risk. Having made something impressive on GitHub is somewhat fakeable, since you can get a lot of help from the internet or even copy large chunks (that you may understand but didn't have to invent) from Stack Overflow. Someone who can do leet code at least has some hard-to-fake ability to reason and work under pressure.
That said, I still don't like doing leet code interviews and I'm pretty bad at them, but that's the logic I imagine goes on in an employer's mind (hence why the "they're missing out on some types of candidates" argument likely won't sway anyone, I suspect).
Yeah but it's somewhat hard to determine how difficult a challenge it actually was to produce that piece of code from the outside. It may be a signal of competence, or it may be one of those things that seems really impressive but they actually just followed a youtube tutorial that explained most of it step by step. (Which is a fine solution if you found that for a particular problem you had to solve, but it's not a reliable signal they'll be able to solve arbitrary things in the future - not everything has been explained by someone else, especially when trying to tackle very novel problems.)
The point is that code challenges are a pretty hard to fake test and the loss of some types of people is worth it for the employer (like me, who sucks at them).
Nice article. Leet code style interviews give minimal signal. The fact that a company uses the word Leetcode makes me think it is hiring for the masses and the job is going to be boring. I call such a company a Dinosaur.
I think companies should offer candidates a choice: submit code samples, do an at-home problem-solving task, or do an in-person exercise. This accommodates careful thinkers and adapts for anxiety during an interview.
I do appreciate when companies ask relevant questions that they have come across rather than mundane Sudoku questions.
I have interviewed with a few companies, and Stripe's interview style stands out. Coding questions are relevant day to day style questions.
I would say Google, Amazon and Facebook set this trend and have spoiled it for all.
Unfortunately, some companies cannot think on their feet to set a different approach. Maybe it's in your best interest to avoid these places.
I've decided I'm going to start interviewing with a set of PRs for each role we hire for. Each PR will have obvious mistakes, complicated logic problems, as well as code that could be "refactored" once the whole PR has been read.
This provides a few things: it gives us a look at how the interviewee problem-solves. Next, we can see how they respond to obvious fixes (would they be someone you'd want to send a PR to?). Finally, it tests their knowledge of the language and APIs, hopefully much better than Leet Code can. I would also like to see if the candidate can spot obvious bugs in the setup code (say, package.json, pyproject.toml, etc.).
I am going to make an example PR for:
- frontend (React/NextJS, TypeScript, CSS)
- backend (Django, Python)
- DevOps (potentially some Pulumi code for deploying to a Kubernetes cluster?)
Although this is better than leet code, PRs have been shown to be a poor way to prevent bugs. I find they are mostly useful for disseminating knowledge.
I think there are flaws with this approach too, but I wish this approach was more common so that there was industry wide rapid iteration to a more optimized version of this.
What I’ve seen are time trials, which don’t show you how well a candidate will adapt to a codebase, and won’t show you how they will handle tickets in your sprint. They only reinforce the employer's flawed idea that candidates won’t “hit the ground running”, despite that job opening having sat unfilled for 8 months.
What you described might not be a time trial; it's what I've seen though.
I once took a test with badly written code and gave my comments. A bot immediately asked me to implement those changes; I had assumed a human would respond. I made a YAGNI change, trying not to disturb the existing code too much. They responded saying the comments raised during the PR review were good, but that writing a big switch statement over all the subclasses violates the Liskov Substitution Principle.
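The reviewer's objection presumably looked something like this (an invented shapes example, not the actual test code): a switch over concrete subclasses versus dispatching through a method on each subclass.

```python
# Switch-style dispatch: every new subclass forces an edit here,
# and an unknown subclass blows up at runtime.
def area_switch(shape):
    if isinstance(shape, Circle):
        return 3.14159 * shape.r ** 2
    elif isinstance(shape, Square):
        return shape.side ** 2
    raise TypeError(shape)

# The usual refactor: move the behaviour onto the types themselves,
# so callers work against the base class alone and any Shape
# subclass can be substituted freely.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2
```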
Benefit #175 of having some kind of open source project in your company: you can open all types of different issues and user stories, and use them to discuss with candidates. You can even attach bounties to them so that candidates don't feel like they are doing free work, and it would still be cheaper than paying a recruiter that will likely just source candidates by spamming different sites.
I love this. Picking an issue from some well-organized dependency has worked for me (I wonder how I’d present this to AI/ML positions). I hadn’t thought of the positive signal of issuing a bounty.
I worked at a unicorn that did these types of interviews under the premise that they provided a way to reduce bias in the process. That seems reasonable if you believe that qualified engineers can walk in and complete these tasks without having practiced extensively.
I don’t happen to believe that and thus believe it biases in favor of candidates who can devote considerable quantities of free time to preparation.
Instead, I offer to share a private GitHub repository (if not some open source) with the prospective employer. It offers them the ability to see how my code changed over time, how it ended up, and the quality and calibre I may or may not devote to projects.
It invites a conversation, in depth, about software construction, quality, and decision making from the point of view of real work. It also tells me if a company wants to simply filter. If the organization is unwilling to invest in a candidate interview, as I do to be interviewed - I in turn learn a lot, and decline to pursue accordingly.
My experience is that employers refuse to do this (I have dozens of repos, with tens of thousands of lines of code, entire shipping applications, a decade of checkin history, dozens of Web articles, and hundreds of pages of documentation and project planning artifacts). I even had one indicate that “I probably faked it.” It was jaw-dropping.
In fact, I have so much stuff out there, in the public realm (I’ve been doing open-source software for decades), that employers could easily evaluate me technically, without ever contacting me, and all they’d need to find out, is whether or not I’d be a decent team fit.
I suspect that the main reason they ignore my portfolio, is that they have already decided that they aren’t going to hire me, and don’t want to waste their time, reviewing my work.
It also helps me filter out companies that are either too lazy, or too strict in their procedures, or both; there is no value in asking the same questions regardless of the candidate's resume.
As an example, a month or so ago I went through a couple of rounds of interviews with a company. I told them forthright that I wasn’t doing any code assessments, and that if they wanted, they could take a look at my GitHub account. They seemed willing to consider it. Then the third round comes and they ask me to write some sorting algorithm, which I refuse.
Ironically enough, one of my repos does have an implementation of an advanced data structure, so they could as well just take a look at it.
I once failed a leet code graph problem because I solved it with a genetic algorithm.
The problem wasn’t that I was incapable of solving the problem, it was the narrow view of possible solutions. The interviewer was looking for the CS101 solution.
My biggest gripe with leetcode is they tend to filter diversity of thought.
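As a sketch of that kind of unconventional approach (a toy example I made up, not the actual interview problem): a genetic algorithm applied to max-cut on a tiny graph, where the "CS101" answer would be a direct combinatorial search.

```python
import random

random.seed(42)

# Toy graph: the edges of a 4-cycle. The maximum cut crosses all 4 edges.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
N = 4

def cut_size(assign):
    """Fitness: number of edges crossing the 0/1 partition."""
    return sum(1 for u, v in EDGES if assign[u] != assign[v])

def evolve(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cut_size, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            point = random.randrange(1, N)       # one-point crossover
            child = a[:point] + b[point:]
            if random.random() < 0.2:            # bit-flip mutation
                i = random.randrange(N)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=cut_size)

best = evolve()  # a high-scoring partition of the vertices
```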
Yes, this is one of my big complaints- I've presented solutions as efficient as the interviewer's, but got dinged because it wasn't the one they were looking for.
Leet Code tests can be a great way to find good engineers - after you've told them what complex puzzle you want them to code, you can reject any that don't ask you why you need it! :)
My favorite style of interviewing is reviewing an existing codebase, then asking questions about it. I'll then ask candidates how they would improve that code (theirs or someone else's).
If possible, I'll let them code some small functions and ask them how they would unit test them.
To me, refactoring skills are a must, as most engineering work is refactoring.
I'm not sure.
Yep, I tanked several interviews because even when I was told at the beginning "we will look at your way of thinking and how you handle problems", at the end I heard "Yeah, it was OK, but there exists a better algorithm for it".
But still, for sure those interview questions come close to checking raw IQ/algorithms, and somebody who shines at them will probably be an OK employee.
And then it is a question of candidate pool size: if you are one of the sought-after employers like Google or Amazon, you can use this filter. You will lose a lot of good candidates who aren't good at algorithms, but those who do pass your interviews are still good, and you get a good number of them.
Smaller, less sexy companies may have a problem finding people with this approach, but the bigger and better-known ones probably don't.
This is crazy and needs to stop. Indeed, it's like you have:
- A degree (in which you've proven you can understand these algos and spent 4 years studying). I'm not going to redo all that in 1 week before your stupid puzzle!
- Experience (that has to be worth something; it's not like everyone is lying about it)
- Maybe open source contributions
But no, some companies will not even start to look at that, or not look at all, before they ask you this stupid puzzle.
Personally, I now filter those companies out. I mean, 20 years of experience, contributions in major open source projects; if you can't recognize that, why would I interview?
Where I work now we give code assignments. While they take longer to do, you can actually see structure, which I agree with the author is the top quality to look for. They are also less stressful for the candidate.
> A degree (In which you've proven you can understand these algos and spent 4 years studying)
Unfortunately this means next to nothing anymore. I have a bachelor's degree in CS and Math, and I found it very useful. However, I host a fairly large community and provide tutorials on coding projects. I've had several people working on a master's thesis in CS reach out to me because they're following one of my tutorials for their thesis. They then ask a very basic question that indicates they don't know how a package manager works, or how to look up documentation on a library. I think these two tasks are some of the most basic programming tasks there are, and if you can make it through 5-6 years of college in CS and still not have the most basic understanding of them, then a degree means absolutely nothing anymore.
Some more anecdata: I've had several friends who graduated with me and can hardly code. It's unfortunate, but I think degrees are just an expensive piece of paper that tells you absolutely nothing about an individual's abilities.
I think the LC interview measures how determined you are. Are you willing to spend hours and hours solving boring/meaningless coding exercises in order to pass the interview, or not? Because even if you are smart and a good coder, if you haven't seen the problem before it is extremely unlikely you'll produce the optimal solution to 2 LC medium or 1 hard problem within 45 minutes. Your only option is to practice.
So if you're willing to spend your free time mindlessly practicing, you will be a good ant at the company. Which is very desirable, since most business programming is boring and repetitive and does not require creativity.
Also, validating the solution is simple; it doesn't need much effort or creativity from the person conducting the interview.
That's an engineer's opinion about things beyond his understanding.
75% of fresh grads are below mediocre, to put it very mildly. 50% of candidates with a seemingly OK employment record or portfolio are too.
Leetcode filters them out right away. That's the purpose it serves. It's not there to get you good candidates; it's there to make sure that you only spend time interviewing potentially good candidates.
I would agree that it would be ideal to use coding challenges suited to the job you're hiring for, but those take a lot more time and effort to make and review.
That's a Microsoft engineering manager's opinion, formed over his past 10 years of hiring. Leetcode doesn't act as an efficient filter. If you want actual talent you need to put effort into it.
These articles always seem to have an underlying assumption that really great people are being denied jobs because they can't get through these interviews. But the implication is that the people who do get through are not also great; in reality they may be just as good, less hassle, and easier to measure. And so what if they have swotted up on LeetCode to help their application? That means they are driven, that they have learned things along the way, and that they are more likely to get something right the first time.
I have been on the receiving end of applying to an agency and being told I didn't make the grade technically. I was disappointed because I know I am a good engineer but I didn't expect them to magically know this. I can also see how my approach to the technical tests might have made me look less than what they were looking for, which is fine.
I am also not sure of any good alternatives because someone will always object to any alternative which they cannot achieve for some reason. A "take home" project is good for resembling real-life work, but some people cannot (or will not) invest the time even if they are paid for it; discussions can be great for helping nervous people, but that is not how work usually is (there are challenges, pressures, etc.), and the able people object that it is not fair that others get in too easily.
The alternative is to push companies to pay for longer interviews, be realistic with their requirements (stop asking LC for simple CRUD work), have some faith in schooling, start carrying some risk again, and use the probationary period for what it's designed for.
What's happening is companies are putting the burden of the risk on candidates more and more. Because they can. If candidates would put their foot down and stop accepting this, most of these shenanigans would stop. The junior market shows what happens when people are desperate for jobs and willing to bend over for any whim corporate has that might help their hiring process (even when most of it is completely unproven).
> use the probationary period for what it's designed
This is probably fine in an environment where the candidate has lots of other options, but think about this scenario: a person applies to multiple companies, possibly rejecting some offers, possibly relocating or otherwise changing their life, accepts one offer, and is then let go in their first weeks. Changing jobs can be emotionally difficult, and being let go even more so. If you have to restart your job search because you were let go during the probation period, all the other roles you applied to might have been filled. On top of this, contrary to when you were looking for a job last time, now you're actually unemployed and potentially under pressure to find something new.
So, in essence I don't think this is good for candidates.
You obviously can't take the above in a vacuum and think it's fine. You could say the same for the advice of "give people work which resembles what the company does" leading to a situation akin to having interns doing minimum wage work for free, putting pressure on others as a result.
But as things stand, nothing is preventing companies from doing the above anyway. If they think you're a bad fit, they will use the probationary period to cut ties with you. This is perfectly viable today. Your example assumes current filtering methods do in fact increase the ratio of true : false positives, and taking some of them out would decrease that ratio. This is not something that has been proven, and I'd even argue it's something that can't be reasonably proven within the next few years. This is even worse when considering a few interview rounds can only filter for the most obvious dummies, but can't decisively tell you the performance of that individual a few weeks down the line.
What your example does show is how much power employers have over employees. It just isn't healthy for individuals to have to carry this amount of risk while corporates continue to reap the benefits.
> I am also not sure of any good alternatives because someone will always object to any alternative which they cannot achieve for some reason.
When I was on the market, a couple of companies actually gave me a variety of options, which I appreciated. I don't want to spend hours on a take-home and I also don't want to do leetcode, but an open question/answer plus some code review and live debugging was an acceptable combination for me.
Providing the options definitely has the potential to take up developer time, but I think the tradeoffs are worthwhile for both the interviewee and the hiring team.
Whenever they ask me to do a take-home, I mention that I have about 150k lines of code on GitHub; this is usually ignored. Now, I somewhat understand why larger companies just "follow procedure, even if it doesn't strictly make much sense" for various reasons, but what baffles me is that a lot of smaller companies do this too, which seems odd since many of the places I interviewed at are kind of desperate for developers, and often can't match the salaries of larger companies either.
The exception is my current position (only started last week): "oh yeah, we looked at your GitHub already and it's actually similar to the take-home anyway, so little point in that". Instead, they prepared an "alternative" interview where they posed some scenarios with "what would you do? How would you handle this?", which was intended to test both some technical skills, but also social/attitude things. I wrote down some answers, which took me about 30 minutes, and then we discussed them, taking a further 30-45 minutes.
The questions were a bit clunky because they were looking for someone ASAP and there was only 2 days between the first and second interview, but I felt that was a much better approach for the company as well, because they got a lot more information this way: they could already verify basic coding skills themselves, and this way they got a new chunk of information they wouldn't have had otherwise.
>but what baffles me is that even a lot of smaller companies do this
You named the exact problem with the industry.
No one is going to complain if FAANG does this with top tier, life changing salaries. No one will object to learning and memorizing DS&A at a job where DS&A is used heavily.
What people are upset about is your average no-name company hiring individuals based on things they won't use on the job (knowledge which is itself incredibly varied[0]), while still complaining about not being able to find anyone and playing the "woe is me" card.
[0]: Dare I say it, DS&A is such a big topic that not everyone learns the same things. Thinking in terms of a tree is different from thinking in terms of a linked list, graph, heap, stack, queue, you name it. Most LC medium/hard problems require time you won't get and information you might not have. This knowledge isn't as universal among skilled graduates as people like to believe. And we all know that the moment candidates are able to just memorize questions about linked lists, hiring will jump to the next topic.
> what baffles me is that even a lot of smaller companies do this
Thoughtfully creating an interview process and the procedures/questions is a skill that has to be learned... and when people haven't or don't have the opportunity to learn that skill, it makes sense to turn to resources like books or articles, the most popular of which are often based on FAANG practices.
I also know from my own experience structuring and conducting interviews that there can be a lot of pressure from the top to eliminate candidates, following the belief that "the last one standing" is the best, rather than actually trying to evaluate the strengths and weaknesses of every candidate.
That way tends to lead toward interviews of 5-7 rounds that act as sieves.
This is all just my limited personal experience though and I'd love to hear from other people who've done interviewing at/for smaller companies!
> When I was on the market, a couple of companies actually gave me a variety of options, which I appreciated. I don't want to spend hours on a take-home and I also don't want to do leetcode, but an open question/answer plus some code review and live debugging was an acceptable combination for me.
That sounds excellent!
This would allow the company to determine what the applicant believes is their strongest suit.
Then, when it comes time to sit down as a team and review all the "top shelf" applicants, people can decide based on a number of criteria.
If an applicant avoided technical challenges but spoke well, some managers might like that, while others might not be comfortable. Maybe they would devise a technical challenge customized to the applicant, using real-world problems.
The main point is that the vetting has been done. The chaff has been filtered out, so more attention can be paid to the wheat.
At my old corporation, headcount was something I had to fight like crazy to get (I was a manager). We seldom hired, so it was well worth it to spend a great deal of time on each candidate, as they would be responsible for important work, and would have a great effect on the corporate bottom line.
That sounds like a lot of small startups, to me. BigCorp (MAANG, et al) hires thousands of engineers per year. They need to have a cookie cutter system. Smaller companies do it, simply because they want to be like the Big Boys.
Also, and this is neither here, nor there, but binary tree tests are a great "young-pass filter."
A realistic problem, similar to what they’ll be expected to do on the job, followed by a discussion to ensure they actually understand it, makes much more sense than leetcode. Ideally with time for them to complete some of it independently with full access to the internet, an editor, etc.
I don't think it will stop unless 1) it's illegal, 2) it's ineffective, or 3) it's not economical for the company. To stop the trend you have to achieve one of the three.
Similar to algorithms itself, interviewing is essentially searching for people that the company wants.
How a company interviews candidates define its effectiveness (hire the right person, less false positives, less false negatives) and cost (how much does the company spend on the interview process per hire).
Interviewing with LeetCode is acceptably effective: candidates.filter(leetCode) gives you a much smaller set of people good at algorithm brain teasers, and this set is an acceptable approximation of the set of ideal candidates.
In other words, it's lazy but effective enough. The majority of companies will only switch to alternatives when the cost is lower or the method is substantially more effective. It's broken from the candidates' perspective, but not the companies'. Most companies will stick to "if it ain't broke don't fix it" unless we offer them 10x solutions - which is yet to be seen.
But I can also see it the other way around: we can undermine the effectiveness, or raise the cost, for all companies. If we develop better courses and bootcamps, letting more people hack LeetCode problems quickly, then companies will naturally go for harder problems. In the end, the problems will be so ridiculously hard that companies can only filter for people who memorize LeetCode, and that set of people has virtually no correlation with good hires (good problem-solving skills). And of course, it's also a profitable business to teach people this.
Between #2 and #3 you missed the most important thing: it should be not only ineffective but ineffective enough to be measurable at C-level. Until then it is just another pointless KPI among many others.
You're right. But I also said much the same thing in a later paragraph:
> Most companies will stick to "if it ain't broke don't fix it" unless we offer them 10x solutions
It should also work if people undermine the 1x solution down to a 0.1x solution. If there are a lot of bad hires at the company level - people who can't finish ordinary tasks other than LeetCode problems and keep screwing things up - the team leads and managers will start to complain, and that will eventually propagate to the top level.
I think that a blanket rejection of the "Leetcode" style interview is too broad. These interviews are a tool, and like any tool, one should try to understand when they are appropriate for the job. If "the job" is to provide a fairly level playing field on which to assess candidates on raw problem-solving ability and programming intuition, they are useful tools. They have the same problems as any test of this kind (you can improve your results by learning the format of the test well), but there's still a lot of signal. If you're trying to hire based on the things these tests measure, you should use them. As the post points out, correctly, those are far from the only qualities required for many jobs, and anecdotally it seems like some companies are overusing this one type of interview, but the solution is for interviewers to assess what they are trying to select for, and make sure that their selection process measures it as well as possible. Part of that process may well still be puzzle-style interviews.
> I don’t think it’s an all or nothing situation. You should use questions that provide data as to whether the candidate will perform in the job they’re interviewing for. If that job is extremely algorithm-driven, e.g. in academia, or involves OS-level optimization, sure, maybe leetcode exercises are relevant
I didn’t think that part went far enough. Companies could reasonably want to hire people who can do well on programming puzzles, even outside of those narrow cases.
These threads are always so depressing. On one side you feel bad for the people that have to study leetcode so hard, but then again being good at leetcode offers you the ability to basically jump into a 6 figure software career that could very well change your life.
Without the ability to get hired by just "being good at leetcode," does that make it harder for people to break into the industry?
The best interview we've used was sharing a simple but very not idiomatic Python file with working (but slow) code and tests at the bottom. The task was to refactor and speed it up. This allows seeing the actual thought process and some basic skills, while at the same time being something any good dev could do in 20min without much pressure.
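For readers curious what such an exercise might look like, here is a minimal hypothetical sketch (the function names and the specific inefficiency are made up, not taken from the comment above): a deliberately quadratic function, and the linear refactor a candidate might produce.

```python
def find_duplicates_slow(items):
    # Deliberately quadratic: rescans the remainder of the list for every element.
    dupes = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b and a not in dupes:
                dupes.append(a)
    return dupes

def find_duplicates_fast(items):
    # Linear-time refactor: a single pass with set bookkeeping.
    seen, dupes = set(), set()
    for item in items:
        if item in seen:
            dupes.add(item)
        seen.add(item)
    return sorted(dupes)
```

The point of the format is that both versions pass the same tests, so the conversation naturally turns to readability and complexity rather than trivia.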
I tend toward this, especially with some canned performance report. Sometimes I take something relevant from their GitHub and introduce a glaring bug, like an impossible dependency or infinite loop.
As an alternative to Leetcode, I have had a good experience with Byteboard. It's sort of a "project"-based interview format that comes in 2 parts.
Part 1 is a design doc where a problem statement has been outlined with goals and notes from the development team. Your job is to respond to their questions and propose a high-level architecture to solve the problem. Part 2 is the coding portion: you're dropped into a codebase that relates to the previous portion and given a todo list of tasks to complete. It's basically feature implementation, ranging from trivial to somewhat involved.
I really like this approach. As someone who has interviewed a lot of candidates using Leetcode style questions, I would love if my org moved towards this format. Unfortunately, it's pretty hard getting FAANG companies to drastically change hiring practices, but if smaller companies start adopting it maybe it will get some traction.
Was contacted by Facebook recently for a data engineering position and was told I had to grind some leetcode to prepare for the interview process. I just could not be bothered, and I cannot be the only one who made that decision. My guess is that companies relying on leetcode-style interviews are also missing out on some very capable engineers who just don't want to grind leetcode outside working hours.
But that's not a problem. These companies pay so much that they have too many candidates applying all the time, so missing good engineers is not a problem, but filtering out false positives is.
Though, I don't agree leetcode is a good interview practice.
Yup! And they work well as a filter for candidates too. If a company leans heavily on these then you know it’s probably a dehumanising corporate hellscape.
> And they work well as a filter for candidates too
I agree. A couple of years ago I was asked by two companies to solve leet code problems even before there was a screener meet-and-greet interview. They were quickly crossed off my list.
There are many issues with this approach, starting with using the same tools to evaluate vastly different candidates, e.g. a college kid would likely outperform a senior guy in Leetcode questions, so that offers no proof of competency.
Has this ever been tested properly? Has anyone ever given a failed candidate the job and monitored how they did compared to candidates who passed the LC?
> All starts with showing some code, a class that does some stuff and its corresponding tests. The code is not glaringly bad, but it’s also not great on purpose. [And all that follows]
This is worse than LeetCode in my opinion. Because all it is, is a shallow copy of LeetCode. You've constructed a puzzle by laying out a picture and cutting out particular pieces. It's "find the differences" between what you've given them and the image in your mind.
> If I’m hiring a landscaper, I’m not gonna ask them to tell me about the classification of ficus in Fiji, or the specific reproduction period of Douglas Fir in the West Coast. I’m gonna ask them to trim a tree and see if the result suits me.
And the landscaper will walk. They will not work for free. They'll give you references, they'll show you pictures of work they've done before, but they won't do work for you.
> And the landscaper will walk. They will not work for free. They'll give you references, they'll show you pictures of work they've done before, but they won't do work for you.
I didn't make this one up. That's how one of my friends got interviewed for his current job as a landscaper; he actually worked with the company for a day.
I'd kill for the industry to progress in this area. Really enjoyed when I interviewed at Netflix that they didnt put me through it as a manager. It was all about cross partner collaboration, working with and coaching devs, technical design/vision and handling customer/vendor relationships.
A lot of comments theorize about what characteristic - for the most part other than coding skill - these kinds of interviews select for. Mostly it's to justify their use. OK, then: is there any evidence that this style of interview effectively or efficiently selects for any particular characteristic that matters? If these companies were really as data driven as they all claim to be, they'd rigorously analyze predictors of success (whatever that means) within the system they've built, and then design an interview process that can rationally be expected to select for those characteristics. Having worked at one, interviewed at another, and heard lots of stories about the rest, I don't get the impression that any of them have actually done that.
Of the formats you mention, only the first (a task to be solved in real time with both the interviewer and the candidate on the same video call) respects the candidate's time. The others do not. A take-home assignment can mean hours or days of effort for the candidate, while it may take at most an hour or so for the interviewer to review it. Totally asymmetric.
I think it depends on the perspective (and the state of the market): two scenarios a) "candidates are willing to work for companies, therefore companies have a bit more of control over the whole interview process", and b) "companies are looking for (scarce) good talent out there, therefore candidates have a bit more of control over the whole interview process".
When I was a junior developer I used to deal with a). Now that I have more years of experience I usually deal with b).
I don't really understand the point; almost all of the things they list as downsides of using Leet Code are actually benefits. If someone can't manage to code some simple questions during an interview, I can't imagine they'd ever make real contributions.
I haven't done many Leet Code-style interviews, but the ones I have are usually simple-ish problems I'd have no difficulty with under normal circumstances, and yet I fail them at least 50% of the time.
There are a couple of things at play here:
1. I'm regularly complimented on how sharp my mind is, but I can't reliably recruit that sharpness on-demand. If I get any kind of brain fog during a timed problem (with no opportunity for a break), it's game over.
2. Coding in an unfamiliar environment, like Coderpad.
The problems are hard enough to make them easy to fail if you can't spot a good approach immediately, but easy enough that a "good" answer gives very little signal relative to the time being dedicated to the problem. I've done interview processes where 50% of the total process is given over to these kinds of problems, and the process concluded with me feeling I didn't get any real opportunity to demonstrate my strengths.
And last but not least, they are geared towards puzzles of the type encountered during a CS education. Thus they discriminate by age, and against those without a CS background.
- interviews are inherently more stressful than even a very busy day at work,
- they cover topics unrelated to what you'd do as part of your job so if you're good at the job you might still fail the interview
- people with time (==money) to prepare for interviews will come out ahead of those who don't have that luxury, even though the latter may be far better prepared for the actual job.
So you get a bunch of false positives, a bunch of false negatives, and on top of that you're discriminating against people who are in a worse financial situation or have interview anxiety. I can hardly think of a worse outcome for a seemingly sensible hiring method.
What if they suffer from social anxiety and they just struggle under the pressure of an interview?
I’ve seen so many people over the years do terribly on these types of tests and then go on to be amazing contributors. We only found this out because we essentially ignored the outcome of these tests, which makes you wonder why you’re doing the tests at all!
If the "simple questions" are realistic problems they might face day to day, fair enough. But the classic example in these articles is an exercise like inverting a binary tree, which most people have never needed to do and never will, making it a poor test.
That doesn't mean that someone would be unable to program it. Inverting a binary tree tests if you know how to traverse a tree. I may not traverse trees all day at work, but I can easily do a tree traversal if I needed to and I expect that to be true of most people who can actually program and not just talk the talk.
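For reference, the inversion the thread keeps mentioning really is just a short recursive traversal; a minimal Python sketch (the `Node` class is my own, for illustration):

```python
from typing import Optional

class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def invert(root: Optional[Node]) -> Optional[Node]:
    # Pre-order traversal that swaps the children at every node.
    if root is None:
        return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root
```

Whether being able to produce this on a whiteboard under time pressure predicts job performance is, of course, exactly what the thread is arguing about.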
I understand what you mean, I could also probably pass the test. But what is the utility in an arcane test like this? If I'm hiring web developers, I'd want them to see how they apply their knowledge to a realistic scenario. There is a real chance that a potentially great hire with decent experience in a desirable technology will fail the inversion task.
Sure, I often have to do tree and graph traversals in my day-to-day work, but I will 100% fail to do one under pressure with people looking at me and expecting me to "talk through my thought process".
If you really really care about me traversing a graph, leave me alone with the task for a while, let me take my time, give me access to the internet even. Why do you care that I can write it on the spot? If anything that just proves that I memorised it just before the interview, not that I actually had to think much about it
I think the worst part is that half the candidates will never have seen the problem before, and half of them will. Which makes it really hard to make fair comparisons.
I'm not arguing that LC problems aren't a poor, degrading way to filter, but every single HN post about how "interviewing is broken" assumes the process is targeted at the applicant's experience. I don't hear HR departments making this complaint.
Leet code definitely discriminates against slow thinkers and those who might be more inclined to use libraries and interfaces. It is biased against certain types of developers, and the type it favors is usually not the best type of developer.
Have you ever seen corporate codebases? Leetcode emphasizes the old way of thinking: do what you are told, don't think, just do the task in the timebox allocated for the sprint; always reinvent the wheel, and each axle, multiple times, for each wheel, in the same codebase.
This "Jira waterfall, code now, don't think" approach might be fine for a unicycle, or even a bicycle, but it's bad for trucks and trains.
Hint: given enough time, and all the tech debt it brings, everything becomes a truck or a train.
If people who had to study for the LSAT, or MCAT, or countless other high-intensity selection processes realized that we sat around bitching about LeetCode interviews as the only obstacle to making more annually than some of them ever do, oy vey…
In the cases you mention, there are usually years of study (much longer than a CS degree), and those exams are meant to qualify a person to help save another person's life or defend their rights under law, not to optimize ad reach by an additional tenth of a percent.
What is the point of this comment? No process should be discussed or improved as long as some other comparable but worse process exists elsewhere in the world? We should hold a moment of silence for the lawyers at the start of any posting about hiring practices in tech?
Pointless. All they do is confirm that you can remember algorithms you’ll never use in real life. I won’t work with anyone who uses them… I’d hate to work with a group of people who got their jobs solving riddles rather than building software.
Now, IQ tests are of dubious legality, at least in the US, but algorithmic coding questions basically get you an IQ test crossed with a programming skill check: win-win.
All the ire about how you don't actually invert binary trees or whatnot in your real job rather misses the point.
Sorry, I've had both my kids go through the WISC-V "IQ" test in the last few years, and there's no single "intelligence" number that comes out of it that would be meaningful for employment. It measures all sorts of things, and you can be off the charts in one kind of mental/reasoning skill while being average or below average in others, and yet that still might tell you nothing about how well someone writes code at a job.
Most of us aren't doing intense mathematical reasoning stuff at work. Sometimes we have to, but most of the time the actual amount of novel algorithm stuff being done pales in comparison to "synthesize this knowledge from 10 different sources and evaluate the best way to integrate that into a reasonable solution."
I don't think coding tests are a good substitute for that, and an IQ test would not give you a clear answer there either.
Bingo! I like how all these arm-chair HNers who have never run a successful company or built a public-facing product used by millions suddenly are experts at how to build a successful company.
Yeah, you know how Search serves 4 Billion people or people created Chrome or gmail in their 20% free time?
Because they could leet code and not use Stack Overflow as a crutch.
Sure, not all companies need leet code. But someone who can leet code has demonstrated skills that have proven to be correlated with creating Trillion $$$ companies.
Now go back to developing your CRUD app used by 20 internal users (because they are forced to) and let companies who are successful continue to use their successful methods
Nah man, nice try, but you're wrong. I worked at Google on at least one of the codebases you're talking about, and I can tell you that the "skills" that most Google interviews test for are almost never applied there. Certainly not in an "on the spot" format either.
Most of the work @ Google, especially on an established codebase like Chromium (my experience), is about slow incremental engineering in very small bits and pieces doing mostly really mundane administrative things. And when there's 'core algorithm' type stuff to do, there's plenty of time and space to stop, consider, read the literature, and move on from there. Nobody is going to put a gun to your head and put you on a clock and then score your results like in an interview.
Most of the intractable difficult problems at Google are more "how to get there from here" and organizational; how can I pile up this series of code reviews over months to get to this final destination where this system is cleaned up or more efficient or this feature implemented/implementable.
Google's use of algorithm testing in the coding interview is simply a result of the fact that they have hundreds of thousands of applicants and need a way to filter in some repeatable and measurable fashion.
And it's worth pointing out that the interview process @ Google goes through a whole series of metrics & calibration towards that effort. It's not just "could solve this problem", it's "solve this problem according to interviewer X's satisfaction, but we've calibrated interviewer X's scores at level N, so adjust according to that" and so on. There's an attempt to be scientific about it.
Most other companies applying "leetcode" are not doing that, they often simply have a highly reductionist mental model of what "software engineering" is, which IMHO doesn't accord with the reality of the profession.
Are you lucky to work at a company that gets thousands of resumes from seemingly qualified people a day? Probably not.
But suffice to say, I think algorithmic interviews are a decent filtering tool for recruitment, and I haven't heard of anything better (though some other methods may be comparable for overall utility, but just measure different things).
I don't know, risky answer, but I will give it nonetheless. This advice is given to many candidates, and I agree that Leet Code interviewing is bad and the industry needs to work on something better. But in the meantime, if you are starting out, think of it this way (I have friends younger by 10 years or so, straight out of college, and I say the same thing to them): a few months of studying gives you a huge salary. It's a win-win. People will complain, and hopefully the industry will fix it, but do not "Skip Leet Code" just because it's the cause du jour. That's just my opinion.
I am not sure if you are agreeing or disagreeing. If you agree, I'll take it. If your point is that we should try to fix it and not succumb to "Stockholm Syndrome", I say precisely that in the second part of my comment. We can both work to fix it AND have folks get their money by studying. Oftentimes someone young enough will read this and conclude there is no point in studying, and that is not good career-wise, on average.
The issue I have with these arguments is that they assume that just because a certain thing has flaws there must be a better solution. What if predicting long-term effectiveness using a limited amount of time is inherently approximate at best?
I'm sure interviewing can be improved, but I think it's worth remembering that we are one of the few industries that actually tries to do skills-based interviewing. When people say "get rid of leetcode" or what not, are they really saying that the alternative that the rest of the world uses (resume screen plus vibes check) is preferable?
> you can pick a ticket and pair program. Have them review an actual PR. Etc.
This seems so obvious. If you pick a leetcode question, there’s always the risk that your candidate has memorized the answer to that particular question. But if you pick an actual bug/PR from your codebase, that problem disappears completely, and you get to see how they would perform on the actual job you’re hiring for.
Can anybody think of any negatives here? The only thing that comes to mind is that it might be seen as the employer trying to get free labor if they use a bug report that hasn’t been resolved yet.
I've never worked somewhere with a codebase simple enough for a new engineer to find and fix a non-trivial bug on their first day. Especially not with someone looking over their shoulder deciding whether to hire them.
There are always bugs that don't require extensive familiarity with a codebase. Even if it's easier than most day-to-day work, it still shows the candidate's approach to problem solving, how effective they are at reading code, allows them to show off some relevant domain knowledge, etc. And as a bonus, it easily works to weed out those people who don't actually know how to code.
> Especially not with someone looking over their shoulder deciding whether to hire them.
That would help in this case, since presumably the person looking over their shoulder is familiar with the codebase (or at least as much is relevant to the question), and can answer questions.
I think the signal-to-noise ratio on an exercise like that is going to be way better than with leetcode questions.
Which makes the idea of having someone solve a "trapping rainwater" problem (with an O(n) solution) in 20 minutes even more ridiculous. It is so far removed from actual work.
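For context, the standard O(n) answer to that problem is a two-pointer scan; a minimal Python sketch (variable names are mine):

```python
def trap(heights):
    # Two-pointer O(n) scan: the water held above each bar is bounded by
    # the smaller of the tallest walls seen so far from either side.
    left, right = 0, len(heights) - 1
    left_max = right_max = total = 0
    while left < right:
        if heights[left] < heights[right]:
            left_max = max(left_max, heights[left])
            total += left_max - heights[left]
            left += 1
        else:
            right_max = max(right_max, heights[right])
            total += right_max - heights[right]
            right -= 1
    return total
```

The trick is non-obvious enough that, as the comment says, producing it cold in 20 minutes mostly tests whether you have seen it before.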
The easy way to get around exploiting free labor is to pay for the labor.
Give the project a cost and time budget (e.g., "You've got six hours to try to resolve this issue that our [already trained and familiar with the codebase] engineers think will take ~2 hours to resolve, and we'll pay $100 an hour for the effort whether or not you succeed")
Please keep using Leet Code in interviews so I can continue to hire extremely talented developers with little competition from out of state companies that like to pay 60% more than local prevailing wages. Thanks.
Let's separate the idea from the implementation. It really depends on how such interviews are conducted. A good example is Google and Bolt. Both use it, but differently. And for me personally, in both companies it clearly showed all my shortcomings, even when I knew how to solve the task in the best way and wrote code on a board that you could compile. And to be honest, I have the same shortcomings when I solve bigger or harder problems. You know, unlike many others, this experience was clear and useful for me. It takes practice, but I like it.
I still don’t understand why more companies don’t do interviews that include solving real problems that their engineers have had to solve (or will solve), modified by some or more of the following factors:
1) domain specific knowledge removed so someone without company proprietary data can grasp it in an interview
2) code setup, library installation, all the time consuming fat removed
3) some level of repeatability and possible algorithm space, so different candidates' strengths can shine.
Leetcode tasks are just a starting point for discussion. Okay, you wrote that SQL query correctly (if you didn't - stop whining and go learn SQL, it was a pretty easy query). So, if those tables grow over time, where will the performance bottleneck be? How will you avoid it? What types of indexes do you know? What is the difference between them? When would you use which? What do you know about data partitioning? Sharding? How do you avoid resharding data on a growing cluster?
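As a concrete illustration of the index follow-up, here is a small sketch using Python's built-in sqlite3 (the table and column names are made up for the example): `EXPLAIN QUERY PLAN` shows the query switching from a full table scan to an index search once an index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)"
)

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is a human-readable
    # description: SCAN for a full scan, SEARCH ... USING INDEX otherwise.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE user_id = 42")  # full table scan
conn.execute("CREATE INDEX idx_orders_user ON orders (user_id)")
after = plan("SELECT * FROM orders WHERE user_id = 42")   # index search
```

Being able to read and explain that plan change is arguably closer to the day-to-day job than any brain teaser.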
On the flip side, I'm about to start interviewing some engineering candidates, and I have no idea what to do. I've been in the field for 8+ years now but interviewing for technical positions is hard and I've never seen a company really get it right. How do you have high standards without a ton of false negatives? How do you avoid reductive coding exercises without selecting for charming incompetence? Genuinely asking, are there any good resources on this?
Leetcode interviews are not perfect, but they're vastly superior to the previous system of throwing away resumes which did not come from the right pedigree.
> Alternatively, look at under-performing people and find what they are lacking
This is a great suggestion, while the "look at their github" one is a bad suggestion. Github polishing is theatre, more suited for theatre majors than for people actually working with integrity before coming to your company. It's very similar to the issue with leetcode interviews: it's geared towards people with time to optimize that instead of a day-to-day job.
I have 30+ years software dev and almost nothing on github. There are valid reasons why a person who writes lots of code, including on weekends/spare time, would not be on github.
For my personal projects I prefer bitbucket. For professional work, it is proprietary and thus cannot be shared (esp not in an interview!!).
Sure, but if you do have something meaningful on GitHub or where ever, and you share this in your CV, I would expect that the interviewers look at that. Or at least, I would not like to do some stupid coding exercises that are less complex, less authentic, and basically one-dimensional compared to my work on these open source projects.
Oh, you focused on the “polishing” and not the “there’s nothing to show” part.
People good at their jobs because they do their job: nothing on github.
People in the business of performance theatre because they don't have a job to be good at: plenty on github. It can be legit code they wrote; that was not the point at all.
I don't like the idea that every developer has to have code on github. If they want to code for work and nothing more, then that's fine and shouldn't disqualify them from any jobs. We shouldn't expect people to spend years of employment building up a portfolio for the next time they're looking for a job.
However, I don't see how looking at the code people write isn't informative. You can see many things, from small-scale code style decisions to how they structure an application just from looking through their github profile.
Writing code isn't a performance, it's literally what you want to pay them for.
Of the people with code on GitHub, a lot of it is forked projects with small contributions, and the rest is small convenience or learning projects that weren't meant for scrutiny. If it's curated, that's a performance, and it still tells you nothing about what they were like as an employee before you met them.
GitHub presence is just a non-signal. That's the only point.
We agree that if you have a different way to see how they'll actually structure code for you, then it's useful.
Okay, so you've addressed profiles with low-quality code and curated portfolios; but what about normal people who just put their projects on the platform? Their profiles will be a good representation of how they generally do things.
The stuff I have on GitHub is a mix of university projects (from about 30 years ago), forked stuff I keep in sync with the main repo (just in case it goes away), and a couple of half-baked projects that I occasionally touch on rainy days and that lack the rigor of what gets deployed into production at work.
But hey, maybe HR will be happy I have a GitHub account.
Depends on what they have on there though, my GitHub mostly has projects from ~2010 when I was a very different engineer to the one I am now (this was before my first dev job)
But commits are timestamped, though? It's easy to see whether a GitHub profile is active, and which projects likely represent someone's current level of skill.
> People good at their jobs because they do their job: nothing on github
Even better if they have both: they've worked on open-source projects or created useful open-source software used by other companies, and they're already working in their other job(s) or have personal real-world projects they can point to. Those are clear advantages and a simple, quick filter to use.
No need to ask about frivolous leetcode questions around re-implementing sorting algorithms or wasting more time asking the candidate to write proofs for those algorithms where realistically you're going to just import it from a library or look up the solution on StackOverflow.
Unless you're Google or another FAANG company, a university or general research-related position, or the position isn't for typical CRUD application development, there is little to no justification for wasting everyone's time on pointless leetcode puzzles, and this applies to the majority of companies.
With how often this topic shows up I'm surprised HN hasn't banned it.
There are lots of suggestions for better ways to evaluate a candidate, but they either don't hold up or require the person to dedicate an enormous amount of time to the interview, which most people would not agree to. Coding exercises are a "least-worst" option in terms of evaluation signal versus time spent interviewing, and companies know that.
I hear this all the time. Don’t use programming tests, don’t use LC or HackerRank, etc. What do other professions that demand high skill do? As an example - medicine. How are doctors interviewed when they switch jobs?
The beauty of LC-type interviews is that they require no validation by your existing employer and no public record or demonstration of work. In the absence of LC, I'm afraid we have to settle for some of that.
Doctors are an interesting comparison, because they also have gatekeeping in terms of how difficult it is to get the credential in the first place. That early filter basically guarantees anyone with an MD has a certain level of competence. SW engineering doesn't have anything like that - it's a complete free for all, and the number of inadequate applicants vastly dwarfs the competent ones.
Having said that, big tech interviews heavily bias against certain groups of people, especially older candidates and those with families. On the flip side, if you're young and have virtually no experience, you can still get a cushy big tech job simply by studying sets of questions. In that way, it's quite unique that "anyone" can get this great job simply by grinding exam style questions. No other high paying industry is like that.
We keep seeing posts like these on HN/Reddit, yet companies keep using leetcode. Including companies like FAANG (or is it MANGA now??), whose engineers I would think hang out on HN.
So it either means:
1. HN's influence is even less than we thought: even MANGA engineers / 10x Silicon Valley types don't hang out here
2. Everyone agrees it's a fair criticism, but no one cares. Like how everyone agrees we should care about the environment, etc.
> We keep seeing posts like these on HN/Reddit, yet companies keep using them.
Just because you see them doesn't mean they're correct. They're just a breeding ground for holy wars and discussions. Every post like that has tons of "I don't like them, we need something better, and no, I don't know what". But if you dig into the comments you usually find why those types of interviews are done.
You are learning that perceived consensus on an upvote/downvote site doesn't mean much: if 60% of people agree with something, the upvotes make it look like 99% do. Reddit rediscovers this every election.
People often downvote thoughtful discussions they disagree with. It’s tiresome to see your text fade with downvotes so often people don’t bother.
If you hang out on Hacker News you might also think every engineer thinks crypto is a scam and no engineers want to return to the office. Neither situation is the case but if you disagree with the majority on those topics you will get downvoted to oblivion, your comments won’t be visible after a point, so why bother?
I personally don't care about the endless leetcode debate, but it's a bummer that so many exciting technological advances in crypto can't be discussed on this site without an army of "crypto = scam" bros emerging from the woodwork.
MANGA engineers can't change shit. They are drones (not a criticism; I mean like worker bees in a big colony). Startup engineers can (but just for their startup).
Also, if you have been hazed, you won't vote to stop the hazing.
As someone who just went through this (finished up loop at fb and google) - I can honestly say at first I HATED the idea of leetcoding/having to learn this stuff, but after a while I started actually learning the real concepts behind the problems and started looking at them like puzzles. I enjoy puzzles so this approach made these problems more approachable and engaging.
When do/did you have time to study LeetCode? After a day’s work at my actual job, I’m done. My brain is fried. Writing more code is the last thing I want to do.
I've kind of had it up to here with the notion, echoed by many folks here, that leetcode interview hazing has anything to do with either software quality or the safety of safety-critical systems.
Safety critical systems already have legally defined coding standards.
No really.
The former notion is just not worth defending, but this thread will continue to grow regardless.
"you’re skewing the data with somebody’s ability to prepare for the interview"
And I love it, and use it to my advantage. It's so much easier to prepare for a round of interviews than it is to actually be good at your job. So this flaw makes it much easier to pass interviews, if you know it's there.
The hubris of interviewing companies is just unreal. Someone can talk until they're blue in the face about the problems they've solved and things they've built, but no, I'm not going to believe any of it until I see that, what, they can traverse a binary tree post order? Give me a fucking break.
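For what it's worth, the post-order traversal in question really is only a few lines once you've seen the pattern, which arguably is the complaint: it tests recall of a memorized shape, not the ability to build anything. A minimal Python sketch (the tree and names are purely illustrative):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def postorder(node):
    """Post-order: visit left subtree, then right subtree, then the node."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

# A tiny example tree:    1
#                        / \
#                       2   3
tree = Node(1, Node(2), Node(3))
print(postorder(tree))  # [2, 3, 1]
```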
Pretty frustrating when, two years ago, you were solving leetcode problems with GPT-3 and could demonstrate it inside a Facebook interview, but the interview process doesn't factor in the 10x programmer and doesn't recognise any TRULY out-of-the-box thinking. Leetcode was dead when GPT-3 came out.
If you get a never-before-seen problem to solve, going through the thought process and communicating the steps of solving it can display the skill of problem solving, which should be what interviewers are looking for. The solution will probably not be the most efficient one, but it can be refined later.
If you want to break the $200k TC barrier - leetcode seems to be the way to do it.
Does it suck? Yes.
Is it basically the only way to make real money in tech aside from toiling away at startup after startup? Also yes, from a person who spent five years thinking I was getting somewhere working for startups.
Whether someone gets a correct answer to a code challenge should only be a single factor in hiring. You should also look at how they attempt to solve the problem. Do they ask further questions? Do they talk it out?
There’s probably other factors I cannot think of too.
Such algorithmic tests are used by FAANG as a means to discriminate against candidates based on age:
For example, if they want to get rid of older candidates, it's easier to do it by asking them to implement a BST algorithm, which a fresh college graduate can do more easily.
I've done a few interviews like this recently. I always thought I was an OK programmer, judging from the feedback I get from colleagues. But after these interviews, I felt terrible. Doing my daily leetcode as we speak...
Since some orgs don't seem to have time to interview without LeetCode (or just suck at interviewing), why aren't there professional interviewers that get paid to vet candidates? Is this just an untapped market?
Isn’t preparing for an interview a skill? Seems like unprepared candidates who know there is a simple way to prepare should be cut. They don’t want it enough to go to the trouble of spending their time.
The process won't change until a new Google/Facebook/Amazon arrives on the scene that doesn't use leetcode. Then everyone will cargo cult whatever process they use.
I think the bias against Leetcode is not specific enough. My bias against leetcode: stop screening at leetcode easy/medium. Screen instead at Hard/Extra hard.
As a student that practiced LeetCode for getting summer internships, I'm getting stressed reading these comments! It's not an enjoyable process at all.
If you happen to find a landscaper that knows the reproduction period of Douglas Fir in the West Coast, odds are he is really into his job and very good at it.
My last leetcode interview was a while ago, when some firm, RealNetworks, had the bright idea that they would inject their own codecs into the Android OS kernel and perhaps be able to sell them to OEMs.
Needless to say, I stopped the interviewer when they started asking leetcode questions, and I have refused to do any leetcode interviews since.
Life, fun code, and fun design are way too short to waste on ineffective BS.
Was it actually useful for improving your skills, and did you end up with a good solution in the coding round? And how many hours of practice does it usually take to get OK with these types of problems/solutions? :D
It definitely helps to look at solutions explained in detail, and it probably has the best rate of return when tech company offers can differ by several hundred thousand dollars per year.
As far as hours of practice, it really depends on whether you grasp the concepts behind the types of questions. I got hired at a FAANG with 2 hard, 13 medium, and 20 easy questions done, but I'm for sure an outlier.
Definitely leetcode helped with that. Not sure if I could directly attribute that to the extras I got from paying. I paid for access to more problems and less waiting to submit solutions, mostly. The solutions to the problems can essentially always be found in the comments.
Leetcode style questions are not a bad backdrop as long as you do a few things.
1) Pick questions that are actually somewhat aligned with a problem that would come up in the role. Usually implementing a data structure of some sort is going to be way more predictive and relevant than a dynamic programming problem.
2) Ensure the question requires a fair amount and complexity of code to complete.
3) The question should just be a backdrop. Consider also how quickly and proficiently they can code, how intelligent they come across in conversation, and the things they call out as side notes: testing, quality, etc.
Many interviewers seem to have forgotten that the purpose of the interview is to be predictive of on-the-job success, not to invent some separate funnel and gauge how well the candidate did on that funnel.
In practice, at scale, you will likely find enough correlation between success on a contrived interview system and general competency, but you're going to get a lot of false negatives/positives using it as a yardstick.
My experience has been that leetcode "theory" is very weakly correlated with competency for most roles, and that quality and speed of coding are much more highly correlated. One of my best hires was a guy who couldn't implement a tree traversal in the interview.
_Most_ of the people who complain about the hiring bar don't put in the effort to pass it. The amount of effort required depends on intelligence: really smart people can quickly grasp the patterns of leetcode questions, but someone less intelligent needs a lot more studying and is therefore less likely to pass.
The writer forgot to say they discriminate against those who can't (or don't want to) practice leetcode questions in their free time. This includes those with families, caretakers, people with after-work hobbies, etc.
I presume your axis is hard-working on the left and laziness on the right. I don't know if LeetCode (or any HR heuristic trying to couple algorithmic study to engineering design) is great for that. If Ferris Bueller is the top right of that matrix, does he score well for talking his way out of LeetCode without any coding at all?
I love this idea for checking how quickly a candidate can learn: if they're blocked on some coding/refactoring question, show them how to do it, erase everything, and ask them to redo it alone.
This is very similar to my process and I've found it to be very effective. We hire engineers to work with a well known dynamic language's MVC-ish framework building a pretty standard fare web platform.
We tell candidates they can look things up on the condition they tell us when they are doing so (basically, "think out loud and walk us through the process -- knowing where to find answers is a valuable skill!") And in most cases they're permitted to use pseudocode if they want to, e.g., if the situation demands any kind of obscure syntax or boilerplate, they just have to note it.
Exercise 1
All candidates are shown some (poorly written) code and asked to pretend they're performing a code review for the author, who we describe as a novice programmer who is new to the language & framework. The code we use is a composite of real code pulled from many places in our system (basically, what the code would look like if all the mistakes we encounter were collected into one snippet). The functionality it implements is exactly the kind of functionality the candidate will be expected to implement on a daily basis.
We ask them to identify antipatterns, suggest edits to make the code more idiomatic, discover bugs, point out security or performance flaws, improve names, etc, and reassure them by telling them that no one spots all of the issues.
We're casual in demeanor and try really hard to remove stress from the situation, making jokes, etc. We help them if they get stuck.
Exercise 2
We share a 90% working piece of code that is missing a single method. Without getting too detailed, something like "This will setup a form based on this model, but the way it is written right now does not provide a mechanism for allowing the options of the select box to depend upon which user is signed in. What would you change to enable that?" They don't even have to write the code (though they often do), they just need to understand conceptually why it doesn't work and then talk through a solution.
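The parent deliberately keeps the stack vague, so here is a purely hypothetical, framework-free sketch of the shape such a fix usually takes (all names and data are invented): the select-box options must be computed with the signed-in user in hand, rather than as a fixed list.

```python
# Hypothetical data layer: each user may only assign work to their own team.
USERS = {
    "alice": {"team": ["bob", "carol"]},
    "dave":  {"team": ["erin"]},
}

# Broken shape: a static list computed once, so every user sees the same options.
STATIC_OPTIONS = [("bob", "Bob"), ("carol", "Carol"), ("erin", "Erin")]

def select_options(current_user):
    """Build (value, label) pairs for the select box, per signed-in user.

    The conceptual fix: defer the computation until the current user is
    known, instead of baking one list in at definition time.
    """
    return [(name, name.title()) for name in USERS[current_user]["team"]]

print(select_options("alice"))  # [('bob', 'Bob'), ('carol', 'Carol')]
print(select_options("dave"))   # [('erin', 'Erin')]
```

The candidate doesn't need any particular framework to articulate this; recognizing "the options are computed too early / without the request context" is the conceptual answer the exercise is after.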
Exercise 3
A very simple test of their ORM knowledge. They need to utilize a technique that they'll have used dozens of times if they are being honest about their experience but that they probably wouldn't learn in the most basic of tutorials.
..And so on.
For candidates applying for senior roles we have an additional live-coding exercise, but most of the same rules apply -- they can look up docs, we help them if they get stuck, etc. We give them a starting skeleton app and they have more than enough time to solve the problem. They can use their own editor, copy/paste sample code they find on stackoverflow or in docs, etc -- basically everything they do when they are actually coding.
The problem is a very realistic one -- a simplified version of a feature that had at one time been on our roadmap but which we eventually abandoned. We encourage them to add comments to indicate what they'd do if they had more time, and when they're done, we discuss the overall approach and ask questions about their decisions.
I've found the above approach to work far, far better than any "whiteboard coding" or "leetcode"-style unrealistic (for the places I've worked and the roles I hire for) interview problems. We rarely regret hires and people stick around for a (shockingly) long time.
I really wish other tech decision makers would adopt this style, for their own sake and for the sake of those seeking jobs.
None of them asked any leet code questions (I was actually looking forward to the silly puzzle questions MS was notorious for at the time because I love those puzzles, even if I think they're bullshit in an interview. Alas it turned out all those questions were basically for PM positions :( ).
The only question I got that seemed particularly bullshitty was at Google, where it was one of those questions where basically you're expected to work out "the trick", it seemed to me to be very gotcha like. But that was just one question among many, across many people.
I've also interviewed many people over the years, and no one has asked any of those stupid questions. They are completely and utterly useless - I see a few comments here saying we're testing for conformance and that has never been involved in any of it, because again it doesn't provide any knowledge of technical skill. As an interviewer you're also aware that the person on the other side of the table is often extremely stressed or nervous. So we understand that you might make mistakes, or stumble on answers, etc - failing to account for such issues simply means potentially discounting good candidates.
As far as whiteboard coding goes, for myself, and I believe many of my co-interviewers, a lot of what is actually being looked for is your thinking and problem solving. Seriously, I cannot emphasize enough how much you should talk through all your reasoning as you write. That allows us to know whether a logic error is a failure to understand/do the correct thing, or just a standard typo-style mistake that everyone makes from time to time (again, recall we know you're stressed). Also, by and large we aren't looking for /perfect/ code (OK, some do, but in reality it's a worthless metric; I only got this from the gotcha interviewer at G).
Personally, when I'm interviewing I often don't care about the language; I'm interested in the solution, and I generally accept pseudocode or your preferred language.
Just a few general tips as an interviewer:
* When asked a coding question, repeat back what you're being asked; you want to confirm it (I've had people try to solve the wrong problem before), and ask (where reasonable) some follow-up probe/clarification questions.
* Following on from the above: if you're answering on a whiteboard, remember that the interviewers know the nature of the format means you might make simple/silly/"stupid" mistakes. Listen for any feedback they give you while answering.
* A further follow-on, and another point that cannot be emphasized enough: write test cases for the problem you're solving, and do it before you start the solution. It demonstrates that you understand the need for them, and it provides another opportunity to ensure there's agreement on the problem being solved. It also lets you clarify things like the expected API (not part of the actual problem, but something needed for any implementation). Try to make your test cases cover "normal" and edge cases.
* Be aware that if you are nervous, stressed, or worried, the interviewers know that, and they know it can cause errors you wouldn't normally make.
* Try to have a reasonable awareness of the job that you're being interviewed for, some of the relevant things the company does/how it does [software dev, engineering, product management, etc], if at all possible. Either so you can ask questions that indicate you have some understanding, or so you can tie in what the company does as it relates to a particular question (if appropriate, don't just shoe horn things in)
* Be polite. This isn't a "be subservient" thing; it's just that if you act like an asshole, the interviewer won't like you, and that will impact what they report. I believe it's consistent across FAANGs that immediately after the interview, every interviewer sends an email that is basically "Yes/No. Reason: ..."
* Don't be sexist, racist, homo-/transphobic, or just generally a bigot - I am aware of one woman interviewer having a candidate assume she was an admin, and treated her as such. Another case where a woman was interviewing someone for a position reporting to her, where a candidate asked who his manager would be, found out it was her. Then told her to her face that he didn't think he could work for a woman. That indicates not just incredible sexism, but also just a complete lack of judgement and common sense. The latter alone would warrant a no hire.
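The test-cases-first tip above can be made concrete. For a hypothetical "merge two sorted lists" question (invented here purely for illustration), jotting the cases down before writing the solution both demonstrates the habit and pins down the expected API:

```python
def merge_sorted(a, b):
    """Merge two already-sorted lists into one new sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    # One of the two lists is exhausted; append the remainder of both.
    return out + a[i:] + b[j:]

# Cases written down *before* the implementation -- normal plus edge cases:
assert merge_sorted([1, 3], [2, 4]) == [1, 2, 3, 4]   # interleaved
assert merge_sorted([], [1, 2]) == [1, 2]             # one side empty
assert merge_sorted([], []) == []                     # both empty
assert merge_sorted([1, 1], [1]) == [1, 1, 1]         # duplicates survive
```

Writing the asserts first forces agreement on details (lists in, new list out, duplicates kept) that would otherwise surface as mid-solution confusion.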
As an interviewer I can tell you are absolutely missing the point.
Why would I ever want to hire a developer without seeing them perform? And since being able to program a small piece of code to specification is such a basic, important part of development, why would it be bad for me to verify if you can do it?
If your friends are such good developers, why would they have a problem reasoning about a relatively simple toy problem? Is it possible that your evaluation of your friends' prowess is biased?
Why do you think dealing with problems under pressure is not a valuable skill?
Why do you think leetcode questions are supposed to tell about quality of code that the candidate will produce? Is it possible that you just don't understand what leetcode is for?
It all seems to me like students complaining that the exam was hard. IT DOES NOT MATTER if the exam was hard. What matters is if you were better than other students. (And even that does not matter, because in the long run it only matters if you have learned something useful.)
So what is the point here? I think people just complain too much rather than focus on figuring out how to succeed.
Leetcode questions are supposed to tell me:
- Can the candidate understand the question? Can they think about the problem analytically? (Somehow there are a lot of people who can hold a nice conversation but fail when they are supposed to apply hygiene to their thinking.)
- Can they follow instructions? (I explain the rules of the task and am interested in seeing whether the person is able to follow basic instructions.)
- Can they program? (Over the years I've met a lot of people who are able to fake their way through the process EXCEPT when they have to actually write some code. For example, they learned the standard library by rote but lack the ability to use those functions when needed.)
- Can they plan? Are they organised? (A lot of people just do stuff at random, which might work for a very small change but will utterly fail for any larger task. Good developers inevitably have some kind of plan and organisation.)
- Can they work with somebody else on the problem? (Some people don't know how to work with others even when offered help.)
- Do they understand what the program they wrote is doing? (MOST people do not know whether their program works or what it does; they need to run it to be able to tell. All the best developers I ever worked with can tell what a program will do before they run it. Anyone who can't is destined to create a huge number of bugs as they mindlessly retry code until it works, leaving in every bug that did not stop it from working in their test environment.)
- Are they intelligent (enough)? (Leetcode is a sort of intelligence test. You typically need to be at a certain intelligence level to solve the problem.)
The problem with leetcode is all those interviewers who do not understand how to use it as a tool to learn things about the interviewee. And frequently that is because they want information you can't easily get from leetcode.
So what can't you learn from a leetcode question?
- Will they write nice code? You can't learn this, because writing nice code is the ability to adhere to the body of code you are already working with. Everybody's programming style is different, and even the best style might still be incomprehensible to a person who is not used to it.
- Knowledge. Do not ask stupid questions like how to transform a binary tree in a certain way, because the only thing you are testing for is whether the candidate is lucky enough to know the answer to your problem.
- Can they solve complex problems? Do not give complex problems in an interview; it is just too noisy and luck-driven. The perfect question is just complex enough to be novel and present some (but not too much) challenge, without running the risk of a reasonable candidate running out of time.
I think you make good points. I also think that leetcode problems are not good for this (at least for most companies), simply because the problems are too hard and require preparation or luck. Some of the algorithms being asked about took quite some time to initially develop, e.g. the maximum subarray problem. In my opinion it would be better to ask something a lot simpler; it should still give you this information.
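For context on how non-obvious these problems originally were: the maximum subarray problem circulated among researchers for a while before Kadane's linear-time solution, which is famously short once you know the trick. That asymmetry (years to discover, minutes to recite) is exactly the parent's point. A sketch:

```python
def max_subarray(nums):
    """Kadane's algorithm: largest sum over all contiguous subarrays.

    The trick: at each element, the best subarray ending here either
    extends the previous one or starts fresh at this element.
    """
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)  # extend or restart
        best = max(best, current)
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```

A candidate who has seen this pattern writes it in two minutes; one who hasn't may plausibly never derive it in a 45-minute interview, regardless of ability.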
In general, this is the mistake of interviewers asking questions they would not be able to answer themselves.
It is super easy to get this wrong. A problem seems much easier when you know the solution. Also, as an interviewer you don't want to make the mistake of comparing the candidate's knowledge to your own experiences; the candidate might be completely fine but just happened to have different experience from yours and did not meet the same problems you did.
I have a small set of problems which I honed over the years. Some of them are problems which I got when I applied, which makes it easier for me to understand how it is when you are on the other side. Every single problem I have solved in every way I can imagine so that when I interview the candidate I can focus on the other stuff that I care about. I have developed understanding of where the candidates get stuck and for what reasons and how to best provide hints so that I can keep the session productive. When the candidate is stuck for a long time I tend to view this as my failure (nothing is happening == I am not learning anything) unless the candidate is really bad.
What I am trying to say is that preparing good problems for coding interview is a hard task and badly prepared problems are probably why so many leetcode interviews are frustrating and inefficient.
I've dropped the compulsion to make everybody like me a long time ago.
Actually, this is by design. See, the best outcome is when people who would not be able to work together productively find it before signing the contract.
You may not see it this way, but I do a favour to every single interviewee like you by not wasting your time (or mine) on a fruitless endeavour.
You don't like doing leetcode? I am completely fine about it. We live in a free world and fortunately for you there is no shortage of places that hire people who SAY they can program.
I just want to warn you. Your coworkers will also be the ones who passed a similar sieve. Your future boss might be one of them. I have worked for one or two places which do not check candidates' ability to program, and these tend to be miserable places. Not all of them, but still. My first boss was a "Senior Dev" who literally did not know how to write a loop, and that was over 20 years ago. Since then I have refused to work for a couple of places based on their lacking hiring process, and I have never regretted my decision. I am not ashamed of my programming skills; I am happy to prove them to any reasonable interviewer on demand, and I am comfortable knowing that I will be working with other people who can do the same.
I keep seeing this kind of article arguing that leetcode is not the answer. Yes, it is not the answer, but it is the best strategy for filtering out dumb, low-IQ people who are going to panic when you give them a vague, open-ended problem. There are many people who call themselves engineers while trying to talk everything out instead of diving deep into the problem.
You know what else will filter out dumb people? A vague, open-ended problem that is relevant to the position you're hiring for. Have them write an abstraction layer for two different implementations of [insert relevant problem here], or refactor messy duplicated code into reusable [methods/components/modules]. Better yet, find an actual problem that your engineers had to solve in the past and see how they do with it.
> filter out not dumb, low IQ people who are going to panic when you give a vague, open ended problem
The article addresses this as well. Stage fright does not make someone “dumb”. Slow thinkers aren’t “low IQ”. And reversing a tree likely doesn’t make someone apt for a job.