Someone who builds a truly novel technology solution representing hundreds of hours of effort gets filtered out by an interview built on contrived scenarios. You may have built the next generation X, but given an array of strings and a fixed width, can you format the text such that each line has exactly maxWidth characters and is fully justified -- in the next 30 minutes? Maybe you should have cultivated that skillset instead, because around here we value parlor tricks more than real-world accomplishments.
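For reference, here is roughly what that question asks for. This is just a minimal TypeScript sketch under the usual assumptions for this problem (greedy line fill, extra spaces distributed left-heavy, last line and single-word lines left-justified), not anyone's interview answer:

```typescript
// Hedged sketch of the "full justification" question quoted above.
// Assumes the usual rules: greedy fill, even space distribution with
// extras going to the leftmost gaps, last/single-word lines left-justified.
function fullJustify(words: string[], maxWidth: number): string[] {
  const lines: string[] = [];
  let i = 0;
  while (i < words.length) {
    // Greedily take as many words as fit on this line (one space minimum between words).
    let j = i;
    let lineLen = 0; // total word characters, not counting spaces
    while (j < words.length && lineLen + words[j].length + (j - i) <= maxWidth) {
      lineLen += words[j].length;
      j++;
    }
    const wordCount = j - i;
    const isLastLine = j === words.length;
    if (isLastLine || wordCount === 1) {
      // Left-justify: single spaces, then pad the right edge.
      const line = words.slice(i, j).join(" ");
      lines.push(line + " ".repeat(maxWidth - line.length));
    } else {
      // Distribute spaces as evenly as possible; extra spaces go to the left gaps.
      const gaps = wordCount - 1;
      const totalSpaces = maxWidth - lineLen;
      const base = Math.floor(totalSpaces / gaps);
      const extra = totalSpaces % gaps;
      let line = "";
      for (let k = i; k < j - 1; k++) {
        line += words[k] + " ".repeat(base + (k - i < extra ? 1 : 0));
      }
      line += words[j - 1];
      lines.push(line);
    }
    i = j;
  }
  return lines;
}
```

Not hard once you've seen it, but it's exactly the kind of fiddly edge-case bookkeeping that eats 30 minutes when you haven't.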
I have over a decade of experience, including driving big technical change at one organisation, and was filtered out by a timed leetcode test. I put together a repository of leetcode practice, as I had a feeling that I would have bad luck on the day. They didn't look at this.
The internal recruiter said it kept happening for seniors and people with a lot of experience, but his hands were tied, as the leetcode process was deemed important by the CTO.
If you think it's about flexibility or brilliance or intuition or experience, you couldn't be more wrong. Leetcode and the like are purely about practice, and recent practice at that: all the questions follow similar patterns, and if you have them fresh in memory you can write a solution in 5 minutes. But working one out from scratch can take more time than allotted.
So are top leetcoders better programmers? Not really; a lot of them are college students who have time to practice and run in similar competitive circles. I've interviewed many and couldn't hire even one.
Bingo. I was asked recently to find the maximum subarray of an array, in a live coding exercise. In TypeScript. So, JS, but with type annotations.
The interviewer themself said "this is probably more aimed at someone who just graduated."
It was for a senior data engineering role, where the odds of me implementing classic dynamic programming problems on the regular are slim to none.
They made me an offer anyway, but I just wonder what value they found in that. Oh, he knows about Big O? He's heard of memoisation?
They were big on FP, apparently. But not Scala, there was too much FP in that for them, hence the TypeScript.
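For the curious, the "maximum subarray" question mentioned above is usually answered with the textbook Kadane scan. A minimal TypeScript sketch (not presented as what that interviewer wanted):

```typescript
// Kadane's algorithm: maximum sum of any contiguous subarray, O(n) time, O(1) space.
function maxSubarraySum(nums: number[]): number {
  if (nums.length === 0) throw new Error("empty array");
  let best = nums[0];    // best sum seen so far
  let current = nums[0]; // best sum of a subarray ending at the current index
  for (let i = 1; i < nums.length; i++) {
    // Either extend the previous subarray or start fresh at nums[i].
    current = Math.max(nums[i], current + nums[i]);
    best = Math.max(best, current);
  }
  return best;
}
```

Which is to say: it's a one-pass pattern you either remember or you don't, and remembering it says little about senior data engineering work.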
Mind you, my first-ever rejection was for a Python role back in 2011, when Python was still very niche in my country. While I had a portfolio of, imo, pretty decent Python code, they weren't interested because I didn't have a degree, and people without degrees supposedly write unstructured code.
Which is a very long way of saying, every interview process ultimately devolves into people hiring people like them.
I was once asked "how do you sort a list of integers?" - the question was so ill-defined that I thought it was a joke (in a database? how long is the list? in a flat file? memory only? on an embedded system? a dataset in Spark? ...).
Thinking, ok, this must be an ice-breaking joke, I responded, "I don't know, how _do you_ sort a list of integers?". In a condescending tone they responded, "are you even a programmer?". At the time, I had been in the game for more than 10 years.
I am really not sure how one could successfully build several companies and have shipped several products without knowing "how to sort a list of integers".
Trying to find a good place to grow your career is difficult on many levels, but if you find leetcode questions silly, you might be too advanced for entry level jobs. ...and you probably don't want to work there.
People straight out of college will spend a month doing 100+ Leetcode problems targeted at the companies they're aiming for. Leetcode tracks problems used at interviews, so there's a very good chance they'll see a problem identical or similar to one they've solved already, and it's just pattern recognition.
After the 3rd or 4th time doing this, practicing the same problems just to pass interviews isn't very appealing, especially if you've saved money and can do something more interesting.
You could try to "wing it" and derive it on the spot, but you'll be outcompeted by someone doing a lookup of a solution + alternate solutions from cache.
Nah, it's just that LC problems take practice to get good at. Similar to weight lifting: you wouldn't expect to bench 250 lbs your first week, but after training and making progress you'll get there (or close).
If you’ve never done LC (or any competitive programming) you’ll struggle. Practice more and you’ll recognize the dozen or so patterns.
Also, LC problems are not "novel" - maybe they were when the algorithms were first devised, but not when you have 20 minutes to solve one.
The tests aren't testing for competency at the job, and after a decade of experience writing software you have long ago realized that party tricks and cute algorithms are a fairly rare part of the job (generalizing here of course), so you stop thinking about them as much and get out of practice. When they do show up, you certainly don't have to do them in 10 minutes, and I think everyone would rather you didn't anyway, so that you write a robust solution rather than a clever one.
Students are often better at leetcode because school has been drilling this shit into them for the past three years, but it will probably be the last time they see such a compelling algorithmic challenge until their next leetcode exam.
But why must it be mutually exclusive? Are you implying all "robust" solutions, whatever that means, are dumb? Surely you put some thought into it to make it "robust"?
In general, in this type of usage, "clever" and "dumb" would better be called "tricky" and "obvious" respectively. People usually describe very tricky solutions as "clever" and, jokingly, much more obvious solutions as "dumb".
For example, storing some flag in the high bits of a pointer field of a struct is a "clever" solution, whereas having a separate bool field is a "dumb" solution. In most cases, the "dumb" solution is much more robust over time (less likely to cause bugs as the code changes and is modified by various people). Of course, the "clever" solution is necessary in some situations (very constrained environment, critical infrastructure such as an object header used for every type etc), but should often be avoided if possible.
What's important is that, in the way this is often framed, more experienced people will prefer "dumber" solutions, as experience often shows that long-term maintainability trumps many small losses of efficiency. So using "clever" and "dumb" in this way is not at all intended to put down the engineer writing the more robust version.
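As a rough illustration of that contrast (TypeScript has no raw pointers, so a flag packed into a spare high bit of a numeric id stands in for the pointer trick; the names here are made up):

```typescript
// "Clever": pack a boolean flag into a spare high bit of a numeric id.
// Saves a field, but every reader must know about the mask.
const DELETED_BIT = 1 << 30;

interface PackedRecord {
  idAndFlags: number; // low 30 bits: id, bit 30: "deleted" flag
}

const isDeletedClever = (r: PackedRecord) => (r.idAndFlags & DELETED_BIT) !== 0;
const idOfClever = (r: PackedRecord) => r.idAndFlags & ~DELETED_BIT;

// "Dumb": just store the flag. Costs a field, survives refactors.
interface PlainRecord {
  id: number;
  deleted: boolean;
}

const isDeletedDumb = (r: PlainRecord) => r.deleted;
```

The "dumb" version costs a field per record and survives refactors; the "clever" one breaks the moment someone forgets the mask.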
When it comes to everyday software solutions you are usually aiming for obvious and clear, and 'clever' is often obtuse and opaque, though definitely not always.
I think there might just be some vernacular nuance here though, maybe we can call it smart and robust, versus clever and opaque. Some problems are just difficult though, and if you get to work on that kind of problem regularly then that is pretty lucky.
Novel solutions aren't as helpful as you think. Pragmatic, simple, vanilla solutions are reliable.
You won't create your own linked list library, you'll use one from the standard library.
General runtime analysis can be helpful - but production, real-world benchmarks trump all theoretical performance values.
Code changes - how do I make a change to a production system in a million-line code base that has good test coverage and that, when deployed, won't bring the entire system down? That's an exercise that coding interviews completely ignore but that is most useful in the day-to-day professional setting.
> General runtime analysis can be helpful - but production, real-world benchmarks trump all theoretical performance values
This is what I always found funny. Most software development work these days is related to web apps. Optimizing that nested loop won't do anything if you have to wait 300ms on some shitty API to answer anyway. It literally doesn't matter; no one cares.
Related meme I saw on Reddit some time ago where a senior developer says 'haha nested for loop go brrrrr'.
I've forgotten most of the algorithm stuff I learned at university that I haven't used in my job. I know how dynamic programming works and I can recognise such a problem, but don't ask me to implement it in an hour. I know how graphs work and what the various algorithms are, but why would I implement any of them when I can just import networkx?
More likely, a lot of us don't feel we have to cram for problems that are, at best, suitable for screening entry-level candidates.
Personally I'm perfectly happy to be filtered out by such tests and refuse to practice for them, as companies that use them for senior level positions are companies I really don't want to work at.
> What do you suggest for the 99%+ other candidates?
What about (instead of forcing a months-long decision process upon the candidates and the company) bringing them into the company after a short interview (maybe 2hrs), and making sure they can afford housing, food and everything else they need?
If you like their work, they stay employed. If, say after one month, you do not like what you see, you can easily let them go. Of course you tell them upfront what the deal is.
We could call it, I don't know, maybe trial or probationary period.
That's a big overhead for both the candidate and the company. Only an unemployed candidate could do that, and even then they'd have to stop interviewing at other places to dedicate the month. No thanks.
I think we'd need to share more detail (and fwiw, I'm half from Europe and half from Canada :).
Certainly companies have probation periods. And on paper, that reality and what's proposed in the previous post are similar.
But I think there's a massive real world difference between "Default stay hired" and "Default not stay hired".
Probation, as it is currently implemented in most companies I've worked in, exists, is formal, and can be (and has been) used, but it is an exception. It's used when there's a massive, unanticipated, egregious problem with performance.
What is sometimes proposed in these threads is effectively replacing long/multiple interviews with a probation period. While such a probation period may look similar or the same on paper, I think it's a completely different approach: "We're sure of you (though possibly wrong), so we're hiring you" vs "We're not sure of you, so let's hire you and see!". I for one would have touched the latter with a 100ft pole maybe once in my life. Certainly, I imagine anybody with a current job and monthly obligations would be quite wary of taking a "we don't know, so let's try it!" approach to hiring. No, let's figure it out first please :)
I've only done the latter in the form of being brought on as a contractor at (high) contractor rates but with the understanding they'd prefer to have me join full time, at a time when I was already doing contracting and had other clients in parallel covering parts of my costs. In that situation I was not taking on any more risk than I had already chosen (and planned for) by contracting, so it was fine.
It's the only kind of context in which I'd ever consider the "we don't know so let's try it" approach.
> But then why do people with that kind of reputation still (at certain companies) have to jump through these hoops?
Your article points out that in this example: "I'm not allowed to check in code, no... I just haven't done it. I've so far found no need to."
> If, say after one month, you do not like what you see, you can easily let them go. Of course you tell them upfront what the deal is.
> We could call it, I don't know, maybe trial or probationary period.
You make it sound like it's a better solution for candidates, but it's way worse for many of them and it has been explained by other commenters already.
How many companies actually do this? At which scale?
Some companies have made hiring harder for themselves by having aggressive PIP objectives. Likewise, having a "real" probation period where you fire, say, 10%+ of employees is not gonna make you competitive when candidates compare their offers.
My expectation with such an arrangement is that if you do decide to hire me, since I am now a known quantity you won't have an excuse to pay anything other than top of the market rates. "You said you hire only the best right, and you've seen me work, you want to hire me, looks like the best make $X."
It would be fair if the company had to pay you 5-11 months of salary if they decide no after the evaluation period. That would leave ample time to find another job. Also, in many jurisdictions, this kind of arrangement isn't legal, for good reason, as the company has way more power over the individual worker.
We could make sure that a person only has to do that trial once in their career and then every other company should accept that they've done it because they've proven they've done it. We could call it an Apprenticeship or an Engineer-In-Training stage. (Where have I heard those before?~)
>This person should already have enough of a reputation to get a job at many companies, if their work is public enough.
You can't just hire someone based on their reputation at a company of any maturity. That's a legal and HR nightmare. There has to be a process with a semblance of objectivity, and that process has to demonstrably apply to everyone equally, always.
Which in practice means you put out a fake job posting where the qualifications uncannily mirror this person's resume to a tee, and you hand them the job formally after a week.
I've been in situations where I was already somewhat working and onboarding while the faux job ad was up for the two-week (or however long) mandatory posting period. I tallied the hours separately and got paid back after I was hired.
I can't at all imagine why this would be the case. Why would a company have any kind of liability for hiring biases such as reputation (except for systematically refusing candidates from protected groups, of course)?
Hiring based on reputation is the same as hiring based on resume. And it's extremely common in almost any company. Why would it be a legal or HR nightmare?
> What do you suggest for the 99%+ other candidates?
To apply to the other 90% of tech companies out there. Maybe 10% of all tech companies are FAANG or FAANG-like; the other 90% are normal tech companies (they'll care about your education and CV, and the interviews are usually just a chat. No IQ tests).
I think it would be exceptionally hard to build an individual reputation in the tech community in general, props to those who have, but building a reputation in a specific industry and local community is much more achievable for everyone.
In smaller industry niches this can be true for companies more than people. At this point in my career, the fact that I worked at Company X is evidence enough that I can do the job Company Y wants me for, since it's a tight industry they essentially know of the work I was doing, even though it wasn't a groundbreaking novel technology of my own.
Wasn't there famously the story of the guy who wrote a package manager (brew?) that became massively popular and was widely used at Google, and yet Google rejected him for a job because he couldn't invert a binary tree or something?
I don't know all the details so I could be missing something crucial, but if not, it'd seem that reputation isn't enough.
I don’t necessarily think leetcode should be the only litmus test for a candidate but Max is an outlier. Not every candidate getting rejected by a leetcode question is also capable of building out homebrew on their free time.
I had a great track record that was out in the open, actually displaying vast knowledge of algorithms and data structures and exactly the stuff that's being asked in those interviews. FAANG interviewers did not care one bit about it. They actually consider it bias to look at a person's prior work.
They 100% can get a job, and a decent one. I know because I fall into this category, BUT the FAANG salaries are 2 to 2.5 times what I make at this point, which pushes me down the study-leetcode path.
You are factually wrong. Not only that, you're conflating several different processes and team rules.
Many googlers use brew to install applications on their laptops. This is not against policy. Other googlers work with code stored directly on their laptop. There may even be developers who are obtaining deps (for their own builds) from brew.
The problem with the brew author is that he had every opportunity to make himself look hirable at Google but instead chose to write an incorrect screed and publish it on the internet.
You need a full Santa exemption with business reason to use brew. The average person working on some server that deploys to borg does indeed use their Macbook as a thin client. Who's obtaining deps for their builds from brew? If you're building stuff on Mac, it's via bazel and all your deps are in source control.
I never said anybody is obtaining deps for builds; that's all UncleMeat.
At the time I used brew (5 years ago) it didn't require a Santa exception with business justification (and my justification would have been "I need this for my work"). Fortunately this wasn't really a problem for me anyway, as I don't even look at Mac machines as anything other than a thin client.
It is true that this was the problem with the brew author. Homebrew being part of the typical Google workflow is entirely independent of the situation.
But, as usual for internet discussions, it is fun to rathole on side conversations.
The best part is, I was defending the use of homebrew (which I absolutely hate) and local development (which is far inferior, IMHO, to blaze/forge/citc/piper). I had really hoped releasing abseil/bazel would help but sadly, it was done too little, too late.
I’m sure there are a few open source developers who use brew or Xcode directly, but most Mac builds happen in a distributed build system wired up to use remote macs. Yes, via Xcode. Not on local machines.
The mac build system is wild. It is still all done remotely with a farm of macs running xcode. You still don't build code for running on macs on your local machine.
You actually do: what Tulsi generates is an Xcode project that has shell-script build steps that call out to bazel. Bazel, underneath the covers, will end up calling the clang that comes with Xcode.
Do you work for Google? I'm just doubting you a little. Like the guy from project zero is finding bugs in Windows via ssh to a Linux machine? Definitely going with Doubt here.
There are a few outliers, but yeah. Per policy, no code is allowed on laptops. And because Google's build tooling is very centralized basically everybody works on the same kind of machine.
The folks developing Chrome for Windows or iOS apps might have different workflows, but even then they aren't going to be using brew because of Google's third party code policies.
The first program you build in Noogler training takes more compute and I/O to build than the Linux kernel. The distributed build system laughs at such a trivial program and barely breaks a sweat at programs 10x that size.
Google has a giant monorepo. It is too big for git. (Virtually) everything is built from source. Building a binary that just runs InitGoogle() is going to crush a laptop.
I believe that there are also a bunch of IP reasons for this policy, but from a practical perspective doing everything with citc and blaze is really the only option.
Google lives and dies by its distributed build system. Most devs don't even exactly have code on their local workstation either; it comes via remotely mounting a file system.
But 99.9% of the builds happen remotely. So local vs remote code just isn’t that relevant.
It is also true that Google spends millions and millions on its dev environment every year, so this isn’t your average “no code on laptops” situation.
Why would I install homebrew on a work laptop if I cannot build or run code on my work laptop? Why would I install a dependency management system if company policy is that all third party code is checked into the repo as source and built using blaze?
Because you can also install applications via homebrew. I use it for a few things, including installing bat, delta and other Rust coreutil replacements that I prefer. I absolutely do not use homebrew for project dependencies.
When I worked for google, several of my projects were developed locally on laptops. My intern (who was developing tensorflow robotics computer vision stuff) used homebrew to install tools. Not everybody at Google used blaze.
What in the world does that have to do with whether he was qualified to work for Google? The average developer at Google is definitely a much, much less proficient programmer than the creator of homebrew.
You shouldn't use Brew to obtain dependencies. This is how you end up with people complaining about a brew upgrade replacing the version of Postgres their project depends on.
You probably shouldn't be using dpkg or rpm for that, either, unless your CI and deployment targets are running the exact same version of Linux that you are, and even then—there are usually cleaner and more cross-platform/distro ways to do it, especially if you need to easily be able to build or run older versions of your own software (say, for debugging, for git-bisecting, whatever). I continue to wonder how TF people have been using typical Linux package managers, that they end up footgunning themselves with brew. "Incorrectly", I suspect is the answer, more often than not.
Where it excels is installing the tools that you use, that aren't dependencies of projects, but things you use to do your work.
Get your hammer from Brew. Get your lumber from... uh, the proverbial lumber yard, I suppose. Docker, environment-isolated language-specific package managers, vendored-in libs, that kind of thing.
I don't install project deps with Brew (it's a bad idea, but, again, so is doing that with dpkg or rpm or whatever directly on your local OS, a lot of the time) but I do install: wget, emacs, vscode, any non-Safari browsers I want, various xvm-type programs (nvm, pyenv, that stuff), spectacle, macdown, Slack, irssi, and so on.
That's fine, but almost nobody is running tools on their MBP. So for this sort of thing you'd be using the package manager distributed with glinux. And Google is also a really weird island where tons of tools are custom. You cant use some open source tool for git bisecting because Google doesn't use git. You cant use some open source tool for debugging because borg is a weird custom mess and attaching debuggers requires specialized support.
Google uses git. I used to sit next to Junio Hamano, the primary developer of git, and lots of teams that used my team's services were using git. Lots and lots of teams. There was even an extension to use git with google3, which was really nice, but was replaced with a system that used hg instead.
I was very imprecise. Git is used both for OSS stuff as well as some other stuff. But the norm is development in google3 and even if you've got a layer of git commands on top of that, the actual source and change management is being done by citc/piper.
True, although I predated citc and piper and we definitely built apps locally on machines with source code (from perforce). I was strongly advocating that more people switch to abseil and build their code like open source in the cloud (the vast majority of compiled code doesn't have interesting secrets that could be used to game ranking, or make money from ads).
I don't do leetcodes because I am not really into programming puzzles (or parlor tricks :D ), but I roughly tried to do what you described. ~45 minutes it took, deciding in the middle to not worry about the algorithm. Guess I won't get the job ... shrugs.
You won't get the job not because you "didn't worry about the algorithm" but because you didn't ask any questions about the problem; you just went straight to the implementation. In FAANG interviews that would be a red flag.
This is a myth. It doesn't matter if you ask "questions" but implement an n^2 solution. Unless you implement the optimal solution, usually using a top-down DP or some array trick, you aren't moving forward in the interview process.
In the FAANG interviews I've done you're never allowed to ask questions...? Maybe for clarification of the problem space, but not about the algorithm or the promise of a particular solution.
If I'm the interviewer and you don't ask questions I'm going to rate you very low on your communications skills. You may have the best algorithm in your head and write the most elegant code, but working at a company requires you to communicate your ideas and plans and code and everything else to your team. And no, communicating only at the end (code review time) is not enough. This is not a school assignment that you silently write and then turn in for a grade.
The best interviews I've experienced, both as an interviewer and an interviewee, are the ones that feel like two team members collaborating to narrow down requirements and solve a problem.
> about the algorithm or the promise of a particular solution
It's not about "the" algorithm or "a" solution. It's about you the candidate being able to propose multiple solutions, perhaps with space-time tradeoffs, to provide a recommendation based on your judgement, and to ask the interviewer what they think of your proposal.
I mean, that's great, and I feel the same way. Yet every time—since the introduction of leetcode questions, anyway—as the interviewee I've been asked not to ask questions about the algorithm or the solution, just clarification of the problem space. FWIW I have been employed at several of the FAANGs or whatever they're called now and have also been on the hiring side, where I certainly did not discourage asking questions of any sort.
Facebook interviews were very much interactive, with my interviewer probing me on O(n)-type questions and me refining my solution to be more efficient. I was certainly allowed to ask questions, typically about scale. My code, which was on a whiteboard, certainly wouldn't compile, and a good amount of the discussion was making sure that the interviewer could follow it and that he was satisfied with each of the steps.
My first round I passed with a less than optimally efficient solution, but he was satisfied every step of the way during my work.
While I was lukewarm about the prospect of working for Facebook, the interview process was very positive and reflected well on them.
On a personal level, self interest would have me like the leetcode style problems because I can get most of them right on the first try during a timed interview, without studying. If I were pursuing a job at a FAANG, I might actually study them and I'm sure it would go well for the testing portion of the interview.
However, when I interview this is not what I'm looking for. I'm typically looking for someone who knows the particular language that I'm hiring for. My questions run from the very simple to as deep as they can go on either language or implementation details. From the most junior to the most senior, they get the same starting questions and I expect the senior people to go deeper and explain why they choose something over something else. I'm also testing their ability to explain it to me (not just get it right) as that is part of their job working with juniors.
I really don't even care if they have the names of things right, and I don't really count it against them if they get the names of two things backwards, for instance. For example, in Go, a huge percentage of the time you might use a slice over an array. Some people get the names backwards, but they can identify which one they actually use and they know that one can change size. They are correct in usage while misnaming things. I inform them of the name, encourage them a bit and move on.
I've never liked the "look at this code, what's wrong with it" approach. There are too many contexts that I have to jump into at the same time. There is often an expectation that I find a specific problem with it. I'm lacking the usual tools like an IDE or compiler. What level am I looking at in the code? Does it compile? Are there off by one errors? Cache invalidation? Spelling errors? Logic errors? Business errors?
This guy has missing tests on code that needs to be refactored in order to write those tests. Maybe he has it figured out just right, but the "jump into my code" interviews I've been in all seemed like they had secret gotchas the interviewer expected specific answers about.
In short, I haven't seen a proper, repeatable process for interviewing for software development.
Well, good to know, if that were ever an opportunity for me. I'd probably have to invent questions, because it's hard to ask your way out of being dumbfounded on the spot.
The interview for my current job had something simpler: something like finding random permutations, then working out the algorithmic complexity of that randomized algorithm. (It was years ago, I forget the details.) I just talked through the solution. That was nicer than having to come up with questions. :)
Yes, because getting to the right answer is not the point of the interview. Apart from anything else, getting to the right answer may mean you memorised it and are incapable of doing anything else. Always show your working and thought process. Asking questions and showing that you understand tradeoffs and that users have different requirements is a good way to do that.
> given an array of strings and a fixed width, can you format the text such that each line has exactly maxWidth characters and is fully justified
So many questions. What character encoding is the string? What is the human language? Should I honor non-breaking spaces and other similar codepoints? Is the string full of 'simple' characters or 'complex' characters? Graphemes? Emoji? What are the min and max limits on width? How long (or short) can each string be, and how large could the array be? On what system am I running, and does the array fit in memory or is it paged off a disk? Does the font we're using support all of the graphemes present in the string?
Amusingly, I had a variation of this problem as part of an Amazon L8 IC role interview, framed as a Prefix Tree. Solved the problem, didn't get the role. :(
Sadly enough, I'm literally scoping out a new feature at the startup I work at that involves rationally splitting text into lines with a max length and buffers with a max line count, so we can interface with a legacy system from the late-80s/early-90s.
I've done that in interviews: "oh hey, the most efficient known algorithm is this classic named X". That tends not to win interviews either, but in the real world knowing the name of the best algorithm (or even just that a best algorithm exists and a general idea of what you'd google to find it) is more useful than knowing the details of it. If I need to reimplement a well known algorithm I can often reimplement it from the Wikipedia description, that's trivial and boring make-work (and will always be trivial and boring make-work). But I need to know which well known algorithm sometimes and that's a far more useful practical skill.
Yeah, but in the real world, when this kind of problem needs to be solved, people will most probably sort (O(N log N)), use a priority queue (O(N log K)), or even go with something like O(N*K); almost no one will go with the O(N) algo, and because N and K are usually rather small and this code will not be called too often, time complexity may be ignored. Still, in an interview any solution slower than O(N) will be called inefficient. And in the real world they will know N and K from the kind of problem they are solving; it will not be hidden in a mist of abstraction with the assumption that "the candidate should ask".
I mean, in the real world you'd probably use a library method. If I were an interviewer I would not expect the candidate to know about median-of-medians (O(n) worst case). I wouldn't even expect them to know a priori about quickselect (O(n) avg). But I don't think it's unreasonable that, given a few hints, a candidate could understand and implement quickselect in 30 mins. Most people know about quicksort already, and quickselect is not very different. You can even give them the partition and select_pivot functions at the start and then, if there's time, have them fill those in. In the rare situation they haven't even heard of quicksort, you can even write the shell of the algorithm for them and have them adapt it to quickselect.
Even then, all that's probably a bonus - a priority queue implementation, or many other possible solutions, would probably be good enough for me.
Moreover, with the micro-optimized SIMD quicksort algos that are perennially cropping up on this website... I would be willing to bet that "sort and take first N" is objectively faster than my crappy Python implementation -- even if it is linear time.
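For reference, a sketch of the quickselect approach mentioned a couple of comments up, in TypeScript under the usual assumptions (randomized pivot, Lomuto partition, average O(n)); the names are just illustrative:

```typescript
// Quickselect: return the k-th smallest element (0-indexed) of nums.
// Average O(n); worst case O(n^2) with unlucky pivots (median-of-medians fixes that).
function quickselect(nums: number[], k: number): number {
  if (k < 0 || k >= nums.length) throw new Error("k out of range");
  const a = nums.slice(); // work on a copy so the caller's array is untouched
  let lo = 0;
  let hi = a.length - 1;
  while (true) {
    if (lo === hi) return a[lo];
    // Pick a random pivot and partition: elements smaller than the pivot go left.
    const pivotIdx = lo + Math.floor(Math.random() * (hi - lo + 1));
    const pivot = a[pivotIdx];
    [a[pivotIdx], a[hi]] = [a[hi], a[pivotIdx]];
    let store = lo;
    for (let i = lo; i < hi; i++) {
      if (a[i] < pivot) {
        [a[i], a[store]] = [a[store], a[i]];
        store++;
      }
    }
    [a[store], a[hi]] = [a[hi], a[store]];
    if (k === store) return a[store];
    if (k < store) hi = store - 1;
    else lo = store + 1;
  }
}

// The "sort and take" alternative from the thread, for comparison:
// [...nums].sort((x, y) => x - y)[k]
```

And in practice the one-line "sort and take" version is what most people would (reasonably) ship unless profiling says otherwise.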
FOMO is an atrocious hiring practice. The chances of your scenario, where you somehow find a person who can create something novel, that they then stick around to actually produce the novel thing, and that the novel thing then "makes it", are nonexistent.