Launch HN: Litebulb (YC W22) – Automating the coding interview (litebulb.io)
139 points by garyjlin on March 7, 2022 | 190 comments
Hi HN, I’m Gary from Litebulb (https://litebulb.io). We automate technical onsite interviews for remote teams. When I say “automate”, I should add “as much as possible”. Our software doesn’t decide who you should hire! But we set up dev environments for interviews, ask questions on real codebases, track candidates, run tests to verify correctness, and analyze the code submitted. On the roadmap are things like scheduling, tracking timing, and customizing questions.

I've been a software engineer at 11 companies and have gone through well over a hundred interviewing funnels. Tech interviews suck. Engineers grind LeetCode for months just so they can write the optimal quicksort solution in 15 minutes, but on the job you just import it from some library like you're supposed to. My friends and I memorized half of HackerRank just to stack up job offers, but none of these recruiting teams actually knew whether or not we were good fits for the roles. In some cases we weren't.

After I went to the other side of the interviewing table, it got worse. It takes days to create a good interview, and engineers hate running repetitive, multi-hour interviews for people they likely won't ever see again. They get pulled away from dev work to do interviews, then have to sync up with the rest of the team to hear what everyone thinks and come to an often arbitrary decision. At some point, HR comes back to eng and asks them to fix or upgrade a 2-year-old interview question, and nobody wants to or has the time. Having talked with hundreds of hiring managers, VPs of eng, heads of HR, and CTOs, I know how common this problem is. Common enough to warrant starting a startup, hence Litebulb.

We don’t do LeetCode—our interviews are like regular dev work. Candidates get access to an existing codebase on GitHub, complete with a DB, server, and client. Environments are Dockerized, and every interview's setup is boiled down to a single "make" command (DB init, migration, seed, server, client, tunnelling, etc.), so a candidate can start coding within minutes of accepting the invite. Candidates code on Codespaces (a browser-based VS Code IDE), but can choose to set up locally, though we don't guarantee there won't be package versioning conflicts or environment problems. Candidates are given a set of specs and Figma mockups (if it's a frontend/fullstack interview) and asked to build out a real feature on top of this existing codebase. When candidates submit their solution, it's in the form of a GitHub pull request. The experience is meant to feel the same as building a feature on the job. Right now, we support a few popular stacks: Node + Express, React, GraphQL, Golang, Ruby on Rails, Python (Django and Flask), and Bootstrap, and we’re growing support by popular demand.

We then take that PR, run a bunch of automated analysis on it, and produce a report for the employer. Of course there’s a limit to what automated analysis can reveal, but standardized metrics are useful. Metrics we collect include linter output, integration testing, visual regression testing, performance (via load testing), cyclomatic/Halstead complexity, identifier naming convention checks, event logs, edge case handling, and code coverage. And of course all our interview projects come with automated tests that verify the correctness of the candidate’s code (as much as unit and integration tests can, at least—we’re not into formal verification at this stage!)
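
For a rough idea of what one slice of that analysis could look like, here's a minimal sketch (my illustration, not Litebulb's actual pipeline) that collects two of those metrics for a Python submission, using the open-source flake8 and radon tools:

  # Hypothetical per-submission static analysis; not Litebulb's real pipeline.
  import subprocess
  from pathlib import Path

  from radon.complexity import cc_visit

  def analyze_submission(repo_dir: str) -> dict:
      # Linter output: run flake8 and count reported issues.
      lint = subprocess.run(["flake8", repo_dir], capture_output=True, text=True)
      lint_issues = len(lint.stdout.splitlines())

      # Cyclomatic complexity: average over every function/method in the repo.
      complexities = []
      for path in Path(repo_dir).rglob("*.py"):
          complexities.extend(b.complexity for b in cc_visit(path.read_text()))
      avg_cc = sum(complexities) / len(complexities) if complexities else 0.0

      return {"lint_issues": lint_issues, "avg_cyclomatic_complexity": avg_cc}

  print(analyze_submission("./candidate-pr"))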

Right now, Litebulb compiles the report, but we're building a way for employers to do it themselves using the data collected. Litebulb is still early, so we're manually verifying all results (24-hour turnaround policy).

There are a lot of interview service providers and automated screening platforms, but they tend to either not be automated (i.e. you still need engineers to do the interviews) or are early-funnel, meaning they test for basic programming or brainteasers, but not regular dev work. Litebulb is different because we're late-funnel and automated. We can get the depth of a service like Karat but at the scale and price point of a tool like HackerRank. Longer term, we're hoping to become something like Webflow for interviews.

Here's a Loom demo: https://www.loom.com/share/bdca5f77379140ecb69f7c1917663ae5. It's a bit informal but gets the idea across. There’s a trial mode too, which you can sign up for here: https://litebulb.typeform.com/to/J7mQ5KZI. Be warned that it’s still unpolished—we're probably going to be in beta for another 3 months at least. That said, the product is usable and people have been paying and getting substantial value out of it, which is why we thought an HN launch might be a good idea.

We’d love to hear your feedback, your interview experiences or ideas for building better tech interviews. If you have thoughts, want to try out Litebulb, or just want to chat, you can always reach me directly at gary@litebulb.io. Thanks everyone!



As a hiring manager and someone who has designed interview processes, I probably wouldn't use this product, but in fairness to Litebulb I wouldn't use any overly structured code assessment tool in interviews. You'll likely find people willing to give you a shot, but I think you'll discover over time that your data-driven customers conclude your product doesn't give better overall signal on a candidate long term. The coding aspect of an interview is already a very minor part of overall feedback...especially for more senior positions.

Something I would use is a tool to rethink and manage the virtual onsite experience. Virtual onsites are awful and rooted in legacy thinking. The people designing them likely never experienced it from the candidate's perspective. Why is it usually done in a single, long day? Why aren't interview transcripts a thing? Why does it all have to be done synchronously? These are just some of the top of mind questions that come up when looking at the hiring process these days.


That's perfectly fair; in fact I don't even recommend using Litebulb for senior positions. The problem for senior openings is always going to be sourcing (there just aren't that many good engineers who meet that bar to begin with). We've found that there's no discernible difference between the code of strong intermediates and seniors anyway. For seniors, you're looking for qualities like the ability to mentor, to give direction, to architect, to produce specs given vague client demands, etc.

I do however think there is a non-negligible boost in signal accuracy for intern and junior candidates' first 6 months on the job. Predicting long-term growth comes down to things like work ethic, willingness to learn and grow, and resourcefulness, but immediate production-readiness is an important attribute for many companies, especially earlier stage.

These virtual interviewing pain points all make sense, and if I'm not mistaken, https://www.dover.com might have solutions to some of these. Full disclosure: they are a client of ours.


I was pretty prepared to hate this product and, for the first time, leave negative feedback on a LaunchHN telling you not to do this. I'm pleasantly surprised to find that you're not amplifying metrics that (mostly) college attendees would succeed in, and that you intentionally talk down the algorithmic interview and replace it with a real-world-esque system with real-world tasks. I hope you succeed, and let us know when you get further along so I can recommend it where I work. I hate our interviews.


My thoughts exactly.

We don’t do LeetCode—our interviews are like regular dev work. Candidates get access to an existing codebase on GitHub, complete with a DB, server, and client. Environments are Dockerized, and every interview's setup is boiled down to a single "make" command (DB init, migration, seed, server, client, tunnelling, etc.)

This is what all employers should be doing, and it's the part that Litebulb should really focus their marketing on. Skills-based assessments that mimic real-world responsibilities are far more predictive of a successful hiring outcome. If you want to pass LeetCode-style interviews, you practice LeetCode-style problems. Have you then proved that you can do the job? Sure, if the job is to solve LeetCode challenges; however, it's far more likely your role will involve adding a feature to an existing codebase while keeping all the tests passing, adding appropriate test coverage, ensuring your solution is clean and maintainable, etc. Designing a problem set like this is hard, and there is overhead in setting the project up and maintaining it over time. I wish I had the time to design an interview like this.

Without knowing all the details or having gone through the experience myself, I can imagine it being positive for both employers and candidates. I assume a lot of the backlash I'm seeing is an allergic response from previous exposure to other tools that focus more on the automation part to the detriment of candidate experience, such as those selfish, impersonal and awkward as hell asynchronous video interviews.

I'd be willing to give this a chance.


I completely agree: our marketing message needs to hammer home the idea that a Litebulb interview is pretty much just picking up a ticket off the backlog. We've actually had candidates say that halfway through an interview they forgot they were in an interview.

Also, the biggest problem prepping for tech interviews is that it's a different vertical of skills you're optimizing on. As you said, getting better at LeetCode doesn't necessarily make you a better engineer for the specific job you're being hired on to do. One of our goals at Litebulb is that if you're a good dev, you shouldn't need to prep for any Litebulb interviews. If you do prep, all you're doing is making yourself a better developer in general.


Are there any employers using Litebulb now where I can apply?


Yes! Gumroad.com is not hiring until late April, but their initial coding challenge is async, untimed, and uses Litebulb: https://www.notion.so/Jobs-f43f816013b2405aa41ddefb663a4a38.

Some other companies are On Deck, Mashgin, Dover, SnapEDA, Evidence.dev, getatlas.io, and Okteto.


This is a damn good idea. Like almost everyone else, I don't really enjoy the leetcode grind, but it seemed like a necessary evil.

I'm currently tasked with hiring a senior data scientist and I'm figuring out the take-home case study right now. It would be great if you guys did something similar that wasn't for pure CRUD apps but included a normal data science case study, along with things like unit tests, an enforced PR workflow, and other MLOps tasks like quantization, A/B + regression testing, monitoring, etc. I've been asking candidates questions about those topics for a while and it just feels pretty tricky to come up with questions that are fair and reasonable, aren't too easy, and at the same time can be graded objectively.

Probably a lot to bite off for your team right now, but I'm just sitting here wishing I didn't have to come up with this case study from scratch atm, so there's my dream scenario :)


Please don’t give a take-home DS case study. It’s annoying, can take arbitrary amounts of time, and is very hard to judge well. You can come up with some good interview questions if you want (hmu if you want an example). Just give those and ask them to talk about their favorite ML model. You can easily tell the masses from the good ones.


Would gladly take any questions you'd suggest!

To be clear though, I'm not a big fan of take homes either and we're trying to limit ours to less than 2 hours total time spent and give folks a week to find those 2 hours, so it's not overly burdensome.


To be honest, if the employer gives proper consideration to the fact that personal time is being spent on a take-home task (sounds like you're doing that if it's a 2-hour task over a week), I much prefer take-homes to coding "live" or answering questions. The latter options just don't feel natural, and aren't at all reflective of my actual skills.


Thank you! Data science interviews are definitely on our roadmap, but unfortunately we won't be able to get there within the next quarter. Biggest reason for this? Nobody on our team has data science experience! I'm not going to pretend like we know how to interview data scientists when none of us actually have the experience, but we will be spending time collaborating with (if not straight up hiring) a few data scientists to help build out our first few DS interviews.


Well if you need some feedback, feel free to reach out. I'm not a manager about to suffer from interview burnout or anything, but I would be happy to see something like Litebulb lighten the load in the future for both sides of the hiring process.


Appreciate this tons, because we'll definitely need the help! Would actually love to connect over email (gary@litebulb.io) or directly in a chat (https://calendly.com/gary-j-lin/30min).


I don't know how much hiring you've done, but in my experience, everything you're asking for does not correlate to hiring quality employees. If you're hiring a senior role, there should be demonstrated experience in these areas you can discuss in a 30 minute interview. If they don't have that experience already and need a take home case study to demonstrate it, then they aren't fit for a senior role. Even junior candidates should be assessed by ML portfolio work and taught the rest.

Your job as an interviewer isn't to come up with questions that aren't too easy or to grade homework assignments. It's to assess a candidate's experience and ability to do the role you're interviewing for. Making the interview a grind means you'll drive away the best candidates, who have options and will find your process disrespectful of their time and experience.


Services that automate the interviewing or vetting process benefit the hiring manager, but feel impersonal and disrespectful to candidates. (Isn't my time valuable too?)

In today's market you will likely see a large percentage of candidates opt out of the interview process when you send them an automated technical interview.


Can confirm. I bail if any amount of time at all is asked to be put into a "weed out" exercise before even talking to a real person. If my resume's not enough to get you interested in a conversation, I'll look elsewhere. If you have that many candidates and find enough of them equally interesting that this is necessary, I reckon my odds are terrible anyway, so it's just a waste of time: doing work, not getting paid, and very likely not seeing any kind of reward whatsoever.

Moreover, if this is necessary, I kinda assume the pay is mediocre at best. If you knew how to get enough value out of me to be able to pay me really, really well, and if I look like a good fit for the job, then again, my resume ought to justify our talking. If any of that's not the case, then it seems like the pay's gonna suck or I'm just not a good fit, so I'm out either way.

I want someone who's fishing for me with a lure, not by dragging a huge net through the water. Seems like a better sign all around.


As a developer, I understand this perspective.

But, from an employer's perspective, resumes tell you very little about a candidate and are essentially worthless. The sole usable metric we get from them is years of relevant experience.

We respond with a questionnaire that gets us more uniform answers about experience and technologies used.

Assuming the above doesn't filter the candidate out, we move on to a 90 min max "take home" exercise. In our case, it's not a coding exercise, but that's not that important for this discussion.

Only if all of the above hasn't filtered out the candidate, do we move to a Zoom meeting. In the Zoom meeting, after an initial introduction, we do a few easy coding exercises. Like, reverse a string or sum the integers in a file easy.

A few notable things from this experience:

1. The number of candidates who struggle with the Zoom interview coding exercise is surprising. It's a lot higher than expected, and almost all of our candidates at that stage have 2+ years of professional development experience.

2. The number of candidates filtered out by the "take home" exercise is very high: 85+%.

We literally don't have enough time to talk to every candidate who applies with the right amount of experience. And even if we did, many/most candidates on the market aren't good developers. Our signal-to-noise ratio is already bad; if we talked to everyone, it would be terrible.

As a developer, I sympathize with your perspective. But you may be passing on some good opportunities by not being willing to "play the game" a bit. Some employers are indeed bad actors and/or terrible at hiring. But given the current market, even those making a valiant effort to do it well, due to sheer volume, are usually going to have an exercise of some kind.

We post our salary ranges up front; we don't want to waste anyone's time that way. We also put a ton of effort into a very detailed job description and application process. The hope is that the candidate can see the effort we put in and will be willing to reciprocate. If that doesn't end up being the case, we assume, like you, it probably wouldn't have been a good fit anyway.

FWIW.


If you sent me a 90m(!) take-home non-coding test I would absolutely nope out of that process. You're not respecting your applicants' time. At all.

Your funnel is steep because it has bullshit steps. And the only people coming through it are probably desperate, which is why your signal/noise ratio is "already terrible".


And, given that perspective, you'd probably be a bad fit, so no time lost on either side. :)

What kind of experience do you have hiring? If you have a better process that has been proven over time to work, please share.

I'm part of a small group of leaders/owners from other development shops. We meet monthly to collaborate and learn from each other. Hiring is a frequent topic, and I assure you that our steep funnel and our perspective on candidate ability are not a unique experience, despite divergent hiring processes.

It is true that our process could be filtering out candidates who might otherwise apply. In that case, it's most likely working as intended.


>In that case, it's most likely working as intended.

You are very casual in claiming that there is such an intentional correlation between developers who are not willing to spend 90+ minutes on a non-technical prescreen with a 15% success rate and competent developers who can translate business needs into actionable steps. It's reasonable to say you are willing to accept that there are developers who are unwilling to go through this process, but to say you are filtering them out intentionally is needlessly contemptuous.


Your comments are needlessly contemptuous? :)

But, seriously, it's intentional in the sense that if someone is a fantastic developer, but can't appreciate why a process like this is valuable to them and us, then it's a good thing they aren't applying. I need our developers to be competent at development but also reasonable in their perspective on the give and take between a business and their employees.

It's deliberate that I don't want people with your perspective/attitude to apply. I don't want people who can't appreciate the other party's constraints. It's not going to be a good match in the long run. And that's true even if said person can code circles around our best developers.


The problem is, unless you're paying candidates, I think you're selecting for a group of desperate people. Which is to say, I don't think your process respects candidates' time. You talk about give and take, but all I see is take: "submit to our process or F off." Or enlighten me, what are you giving in the exchange?

To me it sounds like you needed some way to thin the stack of resumes, and this is the method you chose. I'm just skeptical that you're going to get a lot of quality candidates. How are you going to pull developers who are comfortable in their existing jobs with such onerous requirements? These are people worth hiring: they're competent, skilled, and don't need to find another job.

Though looking at your profile a little, I think the answer is clear: you don't try to. Rather, I think maybe what you mean when you say fit is "we filter for people who need this job so badly that they'll go through our hoops to get it."

Perhaps it's that a web dev shop can't afford to hire the best talent, so that sort of filter is useful.


This is exactly the problem we're trying to solve for. You have too many candidates, you want to be fair to all of them, but you don't have the resources to manually talk to every one of them one by one. You could give them a LeetCode-style auto-test question, but that's not much better than flipping a coin on every candidate. You could also use a service like Karat to do the interviews for you, but before you know it you'll be out $200K. Give them a Litebulb interview, let them do it on their own time, everyone that pushes up a solid solution gets scheduled for a call.


> [W]e do a few easy coding exercises. ... The number of candidates who struggle with the Zoom interview coding exercise is surprising. It's a lot higher than expected and almost all of our candidates at that stage have 2+ years of professional development experience.

Indeed, this simple filter is the original purpose of FizzBuzz and other simple coding tests. Circa 2007: https://blog.codinghorror.com/why-cant-programmers-program/
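
For anyone who hasn't seen it, FizzBuzz is about as simple as a screening filter gets; the whole exercise is a handful of lines:

  # FizzBuzz: the canonical trivial screening exercise.
  for i in range(1, 101):
      if i % 15 == 0:
          print("FizzBuzz")
      elif i % 3 == 0:
          print("Fizz")
      elif i % 5 == 0:
          print("Buzz")
      else:
          print(i)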


>The number of candidates filtered out by the "take home" exercise is very high. 85+%

For a 90-minute take-home exercise that I have to complete before I am allowed to speak to a human being, that is on average about 10 hours of work your company is expecting me to do, for free. And then I have to do a coding assignment afterwards anyway.

I will say that the fact that you post the salary range is good. This is the sort of time investment that would only be worth it for a top end salary.


How did we go from 90 minutes to 10 hours?

Also, I wouldn't say it's "not allowed", just not the standard process. We have, for specific reasons at the candidate's request, done a Zoom meeting earlier in the process.

Candidates are also encouraged to email us with questions and we put a lot of effort into being responsive and good communicators with the applicants.


Just out of curiosity, what do you mean when you say many/most candidates on the market are poor developers? It seems crazy that a majority of people applying for any position lack the requisite ability to actually fulfill the position's requirements.


I mean exactly what I said. :)

And yes, I agree, it seems crazy. But it's our reality and seems validated by others I know in the industry and anecdotes here on HN.

There is just so much demand that it pulls people into the industry who can't really perform as professional developers.


Wow! That's crazy. As a relatively junior dev still early in my career, any recommendations for differentiating myself from those people on my resume/portfolio? I have a GitHub; is that enough, or are these people plagiarizing GitHub repos as well?


Not exactly what you asked for, but maybe helpful: https://www.level12.io/blog/advice-to-aspiring-developers/

I think the frustrating reality is that it's usually not possible to tell the good from the bad at the portfolio/resume level, some kind of skills/competency test must be given.

I personally don't look at GitHub repos because I don't know how long it took to write that code. If it's great code but took 4x longer to write than it should have, I wouldn't know. There are also a lot of applicants who just don't have much public on GH.

Finally, to be clear, I'm not saying they are all bad actors. I'm sure some are and the same for employers. Mostly a result of the current world economics.


"...we move on to a 90 min max "take home" exercise. In our case, it's not a coding exercise, but that's not that important for this discussion."

This is definitely the most important aspect of this discussion.


In the past, we hired devs who could code well when given well-defined problems, like you'd get in an interview process. But they struggled with the actual development work after we hired them. We put some effort into figuring out the disconnect. Turns out, one of the most important skills in our organization is being able to break down a business-level description of the work to be done and turn it into discrete development steps. You also need a good grasp of data modeling and how to change the model to support changes in the work requests.

So, those are the first two things we test. We give the candidate a description of the work to be done and ask them to interact with it. What questions would you ask the client and how would you break this down into stories/issues? Then, we give the schema of the existing database and ask them to modify it so the new schema supports the work requested.

Still very software development oriented, just not coding. We have more in-depth coding exercises later in the process.

I realize in some orgs, there would be a project manager or product owner of some type who would break that work down. And we have team leads who do the majority of that work. But for our org, devs who couldn't do this struggled to perform well in the development work too. So it became a skill that correlated to being able to succeed after hire.


I do understand that, but in my experience short coding tests give by far the best signal. We ran an experiment and found that skipping the pre-interview and going straight to the coding test gave us better results than doing the interview and then the coding test.

Why? Because people tend to hire people with qualities like themselves, and overlook many things. It also then caused people to overrate/underrate the analysis of coding tests based on the previous interview (as they (dis)liked the person).

This also massively increased the diversity of our team.


Yeah, we generally discourage users from using Litebulb as a "weed out" exercise, first because we just don't agree with that practice on principle, and second because Litebulb is a terrible pre-screening filter. As a pre-screener, the only people Litebulb would filter out are the good devs who have options. ie: if I have companies A, B, and C that I want to check out and C needs a multi-hour take-home before I even talk to someone, you can bet your ass I'm not applying to C.

That's not what Litebulb is for though, we're mid/late funnel. As a candidate, if I have to choose between a 4 hour onsite or a 5 hour take home, I'm probably going to do the take home.

In terms of pay, I know some really good companies with a healthy eng culture and high pay that do take-homes (not as a pre-screener, for sure).

Btw from my experience, resumes are a pretty weak indicator of skill. I have interviewed people with 15+ years of experience, some at MAANG, who couldn't comprehend basic systems infra scenarios, and I've interviewed interns who wrote production-grade code. You need to actually try people out, one way or another, to be sure.


I completely get what you're saying. Many candidates have expressed the concern of "if I'm going to put X units of time into this, you should also put X units of time into this". A good approach to running a Litebulb interview is to offer the choice of doing it in a sit-down session over a call with the interviewer, or doing it asynchronously. When we ran sit-down sessions, we offered candidates the option to just go off and do it throughout the day, whenever they could find the time. 100% of candidates chose to do it async. As it turns out, trying to code something with an interviewer breathing down your neck wasn't a good experience and made them very nervous.

Note that it's an option, not mandatory, to do it async. If the candidate wanted to do it in a sit down session, then we prioritize making time for it to be there the entire session. The reason we offer this option is because devs that have busy lives often can't easily find a single consecutive multi-hour session to do an interview. They have full time jobs, and might have family obligations at home and over the weekends. Basically, the only way to actually do a full onsite interview is to take a day off work to go do the interview. We're giving the flexibility for the candidate to find a chunk of time here, a chunk of time there to complete the interview.

We tested the above approach because, just as you said, we did see a huge dropoff when we simply emailed candidates the interview link. After we offered to hop on a call and be available for support, dropoff fell to less than 15%.

Also, a good chunk of candidates who drop out of interview processes are senior engineers who can't be bothered with lengthy processes, which is fair. I wouldn't give a Litebulb interview to a senior engineer to begin with. Debatably I wouldn't give any kind of technical interview, actually. If they have decades of experience leading teams at big or high-growth tech companies, those achievements speak for themselves. The interview process for seniors should be geared more towards finding a product-team-eng fit, rather than an evaluation.


> As it turns out, trying to code something with an interviewer breathing down your neck wasn't a good experience and made them very nervous.

That is the fundamental issue with technical interviews, because there is very little about that manufactured situation which is applicable to the candidate's ability to do the work. There are extremely few real-world coding situations that come anywhere close to the pressures felt while having someone hyper-analyze everything you do in real-time, particularly when that someone is responsible for deciding whether you're able to pay bills next month or feed your family next week. It's a needless, pointless and ultimately self-harming methodology that filters out exceptional candidates in favor of the most extroverted or arrogantly confident, which is a bizarre twist considering the most influential and game-changing coders are also famously some of the most introverted and "unusual" people.

On top of that, because we're all in dependency hell at this point, far more time is spent googling for framework errors that don't make any sense until you find that one undocumented command line argument that a single person posted deep inside a GitHub issue thread. If you want to test someone's ability to do the real work, then give them an obscure error code from a little-used utility and see how long it takes to find the fix.


> If you want to test someone's ability to do the real work, then give them an obscure error code from a little-used utility and see how long it takes to find the fix.

This is actually a type of interview we're trying to support! A blocker right now is that once it's leaked, you basically have to throw the interview away. The good part about the "feature building" type of interview question is that it almost doesn't matter if the prompt + codebase is leaked, because your solution is probably still not optimal. "Good code" is super subjective and takes years of experience and learning to master, but an obscure bug fix reduces the end result to a boolean state, which is easy to leak (and thus makes plagiarism hard to detect).


> The interview process for seniors should be more geared towards finding a product-team-eng fit, rather than an evaluation.

Please use your platform to spread this message to employers. I have increasingly been getting leads asking for some sort of automated testing, even though I bring 25 years of hands-on and management industry experience. The tests typically require multiple hours of time investment, with no real guarantees. That's asking a lot.


Yes. If I’m going to put a large time and effort investment into interviewing I want an equal investment by the hiring team. We both need “skin in the game” so I know that they are semi-serious about hiring me.


Yup, our clients have brought this up with us as well, which is why we recommend they offer 3 choices for the candidate to do the interview:

1. Hop on a call, do the interview in one sitting together and share thoughts/questions

2. Hop on a call, make sure set up is smooth, prompt is clear, questions are answered, then candidate goes off on their own. Hop on another call later in the day to demo the solution and talk over design decisions.

3. Do it completely async, submit the solution whenever you have time

From what we've seen, this flexibility has simultaneously reduced the dropoff rate and improved the candidate experience, since we're catering to the candidate's needs. Some would prefer to do it face-to-face; others prefer to work on their own.


If I were a hiring manager with the actual time to do this, I think what I would likely do is come up with a series of thought experiments and some actual technical tests for the candidate. I would probably try to pick things that I did not know the answer to (off the top of my head) or exactly how to do. And then I might go through them with the candidate to see their thought process the whole way, while also volunteering some of my own thought process, to try to solve it together.

The point would not even be to try to necessarily solve the problem itself - more just to see how I got along with them. Of course some people might freeze up when asked to do this, but I think that happens in any higher-pressure interview environment. And if I do not know the solution off the top of my head either, and am obviously even struggling in some parts myself, I think that takes a ton of the pressure off of them to be a wizard that can solve it in the moment.


This makes a ton of sense to me. Seeing how you solve a difficult problem with someone collaboratively is a great insight into how you'll continue working together, because that's exactly what you'll be doing. It's also very valuable to see how someone receives suggestions or criticism and how they respond. We'd actually like to build a lot of these concepts into Litebulb, e.g. responding to code review requests.

As an add-on, an interesting question one of our clients asks as a conversation starter is "what's the most technically complex project you've ever worked on?". This question sets up the space such that the candidate is the expert (since they've already worked on it), and it almost becomes a session where the candidate teaches the interviewer.


What matters most to me as a hiring manager is your ability to think critically. There's so much to know, and so many different permutations with how those things can present themselves that I'm not hiring you to know the answer every time. I'm hiring you to figure it out.


Sounds like the perfect tool to select for candidates willing to do lots of grunt work.


I wouldn't necessarily put it that way, primarily because some of our interviews require less than 20 lines of code! It's a bit more about selecting candidates with the right fundamentals. You don't need to write too much code, but the code that you do write, is it clean? Are you following best practices? Is it efficient? Have you thought about how your new service affects the rest of the system, and what the implications are?

I do agree though that some of our frontend interviews are a bit heavier on code + css and less on systems design thinking.

Btw just to clarify, we have a policy where we won't host any interviews that are just open dev tickets to build a feature that will be used in production. Like, we DO NOT tolerate companies using interviews to get free labor.


> we DO NOT tolerate companies using interviews to get free labor.

What does that change from a candidate's perspective? In both cases the candidate wastes their time. I'd offer a better solution: the company pays market rate for the time needed for the test, and can then ask for anything they want. In that case an actual ticket is the best solution, as it will evaluate the candidate's performance on the kinds of problems they'd actually be working on day-to-day.


Making it cheaper for companies to interview increases the potential opportunities for me as a candidate.

I do not take personal offense to being tested in an automated fashion.


Have you taken a bunch of these and still gotten the same ghosting, empty promises, and non-communication, having put in multiple hours of your time for free? I certainly have. The best interviews for me are where I can talk in depth about a topic and, to a lesser extent, about a hypothetical problem and how I might handle it.


I tried a few of these with different companies. Each one has failed due to technical difficulties of the platform (in my case prerecorded video interviews) and in every case I tried to file a ticket and never heard anything back. Black hole.

In one case I did a prerecorded interview and they later contacted everyone because something went wrong and they wanted everyone to do it again.

Done feeling like a number.


I would have no problem with this as a 45-min screen after having talked with a hiring manager. I’m going to invest that time anyway and it’s nicer to not have to “do it live”. There do need to be humans for the later rounds though, because I’m assessing the company as much as they’re assessing me!


Yup! That's why we recommend using Litebulb as a mid-funnel tool. Initial screening should be with a hiring manager (or a founder if it's an early stage startup), final meetings should be with team members/managers to get cultural fit. We just try to make the technical coding evaluation part easier.


I second this. What's worse is that, as a devops guy, most of the coding interviews I've done have been totally irrelevant to the roles I've had. In a good market I can avoid companies using services like this.


I built a very similar product 10 years ago and even back then we heard feedback like this. And I believe the situation may be more pronounced today.

There seem to be markets (or large enough organizations?) where coding job ads get a tonne of applicants, many of them inexperienced, and so automation is a good fit. But not when things are tipped the other way.

Our bet was for real-world tests too but it wasn't enough. A few things we missed that might help…

- Candidates don't want to be treated like cattle

- For many companies a good interview platform will be more beneficial than automation

- Companies say they care about the experience candidates receive; they'll say they're rigorous and try to be as objective as possible with the way they collate information and make decisions; they'll talk about how high their standards are, etc. Be wary

- There may be more gains to be made outside of the tech space

- The problem most companies complained about, and probably still the hardest one: going out and finding people

I like the way you've solved the "real-world challenge" problem.

Good luck and all the best!


With COVID and the rise of remote work, there are a whole lot of companies where the developers are the asset, the commodity; those companies are just the middleman collecting payment for the work the developers do and keeping the biggest part of the cut in the process.

Those are mostly the "sweatshops" that hire developers who will work for less, for several reasons. I can see that in this particular case this solution might be very welcome for assessing new hires.

But for more high-profile jobs and companies, a company that follows this path will probably end up with low-quality hires, unless of course it just builds CRUD apps anyway.

Because this is a self-fulfilling prophecy in the end. Once candidates know what they will face, they will train until they get good at that game, which most of the time doesn't mirror the qualities required for the job.


I am eagerly awaiting a YC startup that automates the coding interview from the candidate's side.


We have something for this in our roadmap.

Currently: candidate does a Litebulb interview, gets a report, that report can be shared with whoever they like, hopefully speeding up interviews at other companies.

Next up: candidate does a Litebulb interview, gets a list of actively hiring companies that use that stack and would like the candidate to be inserted mid-way into the interviewing funnel.


Empowering workers is not a selling point for the companies that want to use a service like this.


Why not? Hiring is a two-way contract; if they benefit workers, they will have more and more competent ones to fill their positions with.

And if they manage to find a way to identify underrated workers, they will make both sides extremely happy.


Not a bad idea - you could get "vetted" by the company doing the interview and then companies that want to hire you could just search by type.

So kind of LinkedIn badges but with actual meaning :-D


That’s how Triplebyte used to operate. It didn’t work out.


I think it did work out for them. Just not to the extent to justify the $100 million of VC money they took.


Do you know why it didn't work out? I'm curious.


I believe this already exists. However, my impression is that while they may ask for your github, no potential employer ever actually checks it.


Next level: a tool that rates quality and innovation in your github.


I've thought about this as well. The biggest problem here is that I have GitHub repos from 7 years ago with very, very poor code quality, and also just a bunch of incomplete test repos. I wouldn't want that to lower the perception of my current skillset.

If I could select which repos to include, that could be very interesting. Also interesting additions: open source contribution analyzer, Stack Overflow Q&A quality analyzer.


Next level: maintaining two separate github repos. One for the dystopian automated interviews and one for you.


Count the number of stars?


Someone could get a ton of stars through bots or someone could have an incredible project with a dozen stars.

Stars should be counted but can't be the only measure.


That's what I thought this was, until I saw the "YC W22" mark.


Curious how this will work with more complex cases? E.g. distributed systems, concurrency safety, etc. Also, how do you deal with solutions that don't compile but are 95% of the way there conceptually? I find that's a common occurrence in non-FAANG settings that generally would still mean a pass to the next round (sometimes even with strong perf ratings).

And how do you prevent someone from cheating if there's no engineer there on the interview?


Great set of questions!

1. Complex cases:

We can spin up isolated AWS or GCP environments with some resources pre-filled, and the rest is up to the candidate to build. Basically a systems design whiteboard challenge, except instead of a whiteboard, it's actually in a cloud environment. Since "input" will be defined and a specific "output" will be expected, we can then run tests against this environment to check for sufficient robustness. For example, if the prompt is to build a load-balanced CRUD API, we can run "create resource" a million times, then try to read, update, or delete, and measure total response time. For something like testing concurrency safety, we can run a bunch of parallel threads attempting to access the same resource, then test for timeouts (deadlocks) and inconsistencies.
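
As a rough sketch of that last check (my illustration, not Litebulb's actual harness), you can hammer one resource from many threads and then look for timeouts and lost updates. The /counter endpoints below are hypothetical:

  # Hypothetical concurrency probe against a candidate's service.
  # The /counter endpoints are made up for illustration.
  from concurrent.futures import ThreadPoolExecutor
  import requests

  BASE = "http://localhost:8080"
  N = 200

  def increment(_):
      try:
          return requests.post(f"{BASE}/counter/increment", timeout=5).ok
      except requests.Timeout:
          return False  # possible deadlock or heavy lock contention

  with ThreadPoolExecutor(max_workers=50) as pool:
      ok = sum(pool.map(increment, range(N)))

  final = requests.get(f"{BASE}/counter", timeout=5).json()["value"]
  # A concurrency-safe service reflects every acknowledged increment.
  print(f"{ok}/{N} requests acknowledged, counter = {final}")
  print("consistent" if final == ok else "lost updates detected")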

2. A solution that's code complete but has a minor bug that prevents compilation

For a solution like this, code completeness scores are unfortunately going to be 0, simply because there's no way to test for functionality, performance, or event logs if it doesn't compile. However, our static code analysis will still work, so they may still get high scores on code style, code complexity, naming, etc. Something we might be able to add is a way to flag to employers that a submission has high scores for everything except a specific metric, and that we'd recommend manually diving into the code.

3. Cheating

We can't prevent cheating if the candidate gets their roommate to do the interview for them. However, if someone just blatantly copies an old solution their friend sent or one they found on the internet, we can check for that. We can run a diff between the submitted git history and all previous submissions' git histories. Two legitimate final solutions might be close (ie a 60-70% match), but you'll never have two organic git histories with multiple commits be close (it'll pretty much be either a 0% or 100% match). Looking things up on the internet isn't considered cheating; I don't know of a single dev that could survive the day without SO :D
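
A minimal sketch of that git-history comparison, assuming two checked-out submissions on disk (again my illustration, not the production detector):

  # Hypothetical plagiarism check: compare the full patch history of two repos.
  import subprocess
  from difflib import SequenceMatcher

  def history(repo: str) -> str:
      # Patch text of every commit, oldest first.
      out = subprocess.run(
          ["git", "-C", repo, "log", "--reverse", "-p", "--no-color"],
          capture_output=True, text=True, check=True,
      )
      return out.stdout

  def similarity(repo_a: str, repo_b: str) -> float:
      return SequenceMatcher(None, history(repo_a), history(repo_b)).ratio()

  # Organic histories land near 0; a copied repo lands near 1.
  print(f"{similarity('./submission-a', './submission-b'):.0%} match")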


> unfortunately going to be 0, simply because there's no way to test for functionality, performance, event logs, if it doesn't compile.

You could always have someone look at their work. Or, better yet, walk through it with them to try and ascertain their thinking. This is what I do with my team, multiple times a week, in my actual job. It seems better than immediately excluding them.


If TC can run the code (not sure why not in this environment) - then it not compiling should probably be a no hire.


Strongly disagree if we are talking about a complex problem like coding a distributed system. It is very possible for an implementation to be 90% correct but not compile, or to be wrong for very subtle reasons. And maybe with another 30 minutes it would have been debugged correctly, but it didn't work out in the 45-minute interview period. There is signal contained in the 90%-right attempt that can show a lot of knowledge and skill, which you'd miss using autorejects.

We're not talking about some CRUD app where it is trivial to make each component work, or a LeetCode problem that you can spit out in 20 minutes.


I'd still argue it's a bad sign if the candidate is stuck with nothing even compiling after 45 minutes. That's not a sign of the problem being complicated, that's a sign of the developer either (a) struggling to figure out dev environment basics, or (b) having a tendency to go way down one solution path without pausing for any simple gut checks / sanity tests of their work in progress.

In the latter case, is it possible for a developer to operate that way and still produce a great solution given more time? I suppose so - but in my experience, the people who are not in the habit of incrementally checking their work are virtually never strong developers.

There's also a time management question here. If a person knows they only have 45 minutes, that should create extra pressure to start testing earlier. Someone who still waits until time's almost up before checking anything is probably a fairly extreme case of this antipattern.


Yeah, this is actually why we recommend liberally timed take-homes instead of a sit-down 45-minute block. It's super frustrating to get 95% of the way there just to be cut off by time.


love this question: especially about the distributed systems.

Something I've been thinking about, not necessarily in the interviewing world, would be to have a simulation of distributed systems for non-production code: in theory I think you totally could have a simulation layer for databases / server loads and test how your code performs in such environments.

I guess it exists in production, but I haven't seen it for either a) learning about distributed systems, or b) interviews
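
One cheap way to approximate that simulation layer, sketched here under my own assumptions rather than any existing tool: wrap I/O calls in a shim that injects latency and failures, then check that the code under test degrades gracefully:

  # Toy "simulation layer": inject latency and failures into calls to mimic
  # a flaky distributed dependency. Illustrative sketch only.
  import random
  import time
  from functools import wraps

  def flaky(p_fail=0.1, max_delay=0.5):
      def deco(fn):
          @wraps(fn)
          def wrapper(*args, **kwargs):
              time.sleep(random.uniform(0, max_delay))  # simulated network latency
              if random.random() < p_fail:
                  raise TimeoutError("simulated network partition")
              return fn(*args, **kwargs)
          return wrapper
      return deco

  @flaky(p_fail=0.2)
  def read_from_db(key):
      return {"key": key, "value": 42}  # stand-in for a real query

  # Code under test should retry or degrade gracefully under the shim.
  for attempt in range(3):
      try:
          print(read_from_db("a"))
          break
      except TimeoutError as e:
          print(f"attempt {attempt}: {e}")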


At every company where I've been part of the interview design process, I always insist on having practical tasks with real tools, because how quickly someone can parse documentation and code context is not a trivial aspect of the job. It does take a lot of time to set up sandbox projects, however, which a platform like this does away with. Looking forward to the day when no candidate sees LeetCode or HackerRank as part of a tech interview again.


Thank you, and love that you promote great interviewing practices! What were some examples of prompts you routinely used for interviewing? And any learnings on great (or not so great) interview design?


This was my process: built a sandbox/simplified version of the app and hosted it on a separate repo. Told the candidate to either set up their laptop with the libraries installed (friction) or to come onsite and use one of our computers (no longer possible with remote work). Built a 2-3 hour onsite task plus a half-day take-home extension, both of which were features already in the production app. Did this for hiring iOS, Unity, React, and Python devs and it worked pretty well for all these roles.


At the last company I worked for, the interview was amazing because it was tailored towards exercises that actually had to do with my job, but were obviously not going to be used in their product. Hope this becomes the standard, that + a genuine conversation with the team lead just works.


I certainly hope so too! And yeah getting to know the team is really important as well.


I hope this works out gangbusters for you!

Some feedback on the homepage. Its current form can be improved.

  1) To watch the video in the section, "Here's how it works", I need to load javascript from navattic.com.  JS, especially from an external website, is something that should not be required to view a video.  This tells me there is likely tracking of hiring managers who only want to watch the how-to video.

  2) When I enabled JS for navattic.com, the video's first frame was shown.  But, to actually watch the video I had to input a work email and my name.  I closed your webpage at this point.  I could have clicked the REQUEST DEMO button if I were ready to engage your service.  However, I wanted to learn about your offerings by watching your video.  Even clicking your about-us link didn't work, since it scrolled me right back down the page.
My suggestion is to not try to get every visitor's email account information before you tell them how your product works. It might help raise the quality of engagements to provide your information first and then let hiring managers click on REQUEST DEMO.


Ah, ok, so that flow needs to be sharpened a bit. It's actually not a video, it's an interactive demo where you can play around with a small subset of the app in a controlled environment. It's a series of guided DOM snapshots of the actual app itself, but ultimately I realize it's not clear at all.

I'm debating just not having that interactive demo at all, and instead being a bit clearer in the rest of the site about what we do and how we do it.


Either works for me! I don't have a current need, so I wanted to understand your offerings for when a future need develops.


> JS, especially from an external website, is something that should not be required to view a video

You expect everyone to self-host videos?


No, and that's not how the Internet works - JS is not required to access other computers in order to display videos. A video file can still be located on a CDN [1] in addition to being self-hosted. This is an extremely common operation that websites perform. You can look at the docs for any CDN. They all tell you how to do this.

[1] CDN: Content Delivery Network, e.g. YouTube, Cloudflare, or any one of the others. There are multiple CDNs used by this very article's webpage.


Nice. Let's see if I can restate this well enough to ward off further condescension from you:

Would you similarly be up in arms over the external JS were the site embedding a YouTube video? It also requires JS – as do most (all?) major video hosting platforms.


JS is not required to load a video over the Internet. When a CDN (or second origin) without the proper modern CORS headers is used, you are forcing your code to use JS or same-origin resources. Access-Control-Allow-Origin is the solution designed into browsers for this.

That means the YouTube CDN doesn't work with modern CORS for HTML video loading of resources by offsite webpages. Other major, more modern CDNs implement this.
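
For what it's worth, the header in question is easy to demonstrate. Here's a toy Python static file server (my sketch, not the commenter's setup) that emits Access-Control-Allow-Origin so a plain HTML page on another origin can fetch a hosted video file without any JS:

  # Toy static file server that adds the CORS header discussed above.
  # Illustrative only; a real CDN sets this via configuration.
  from http.server import HTTPServer, SimpleHTTPRequestHandler

  class CORSHandler(SimpleHTTPRequestHandler):
      def end_headers(self):
          # Allow any origin to fetch files (e.g. a video) from this server.
          self.send_header("Access-Control-Allow-Origin", "*")
          super().end_headers()

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8000), CORSHandler).serve_forever()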


I've been trying to show that your stance is at odds with how the majority of the web consumes videos – disable Javascript and attempt to load a video from YouTube, Vimeo, Dailymotion, Streamable, or any other major video hosting platform.

I'm not clear why you feel CORS factors into this.


For all of its faults, YT is a known quantity.


This looks really interesting! It seems like there are potentially 2 great companies in here:

1. Providing good programming tests with automatic env setup

2. Automatic PR reviewing and grading.

By this I mean that you could do less and still be useful, for example just run linting and tests on the PR and let me review it myself. That will match what I do at work anyway. That would let you focus on just providing great example projects and covering more languages, rather than on the automatic grading.


+1 on this. The env setup and test design, along with a way for the candidate to submit their solution, would save us a ton of work. I'd happily pay $30/interview just for that. I wouldn't want to rely on a 3rd party for grading. Nor would I want to assign a grade without having a chance to talk through the code with the candidate.


Three metrics you pointed out:

1. Linter

2. Cyclomatic/Halstead complexity

3. Identifier naming convention testing

Are those really things that candidates can get a negative ding for? Those are things that can/should be handled automatically by libraries or your CI/CD (e.g. RuboCop).

More importantly, how do you get more detail/nuance on things like naming or cyclomatic complexity?


I actually really value the naming metric. I'd value a dev who looks at other classes to see what the convention is, rather than just adding their own personal form of camel case. It shows a sense of collaboration.
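
A toy version of that naming check is a few lines with Python's ast module. Here the expected convention is hard-coded to snake_case; a real checker would first infer the codebase's dominant style (a sketch, not Litebulb's implementation):

  # Hypothetical identifier-naming check: flag function names that break
  # snake_case. Usage: python naming_check.py some_file.py
  import ast
  import re
  import sys

  SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

  source = open(sys.argv[1]).read()
  for node in ast.walk(ast.parse(source)):
      if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
          print(f"line {node.lineno}: '{node.name}' breaks snake_case")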


This looks pretty awesome! Congrats on the launch. It seems like a much needed alternative to Leetcode with more practical applications. As a developer, wondering how I can benchmark myself on your platform to see how I perform with a sample?


Thank you! And yeah, we've had that request a lot. Later this year we'd like to open a candidate-facing platform where devs can hone their skills on Litebulb interviews. Conceptually, getting better at Litebulb interviews should just make you a better dev, not better at interviewing.

Potential feature: get high enough scores on a specific interview, get recommended to companies currently hiring that use this stack and get inserted mid-way through the funnel?


Yes please! I have never even gotten a shot at interviews like these, it's such a black box to me, and I love tests.

I would honestly pay for this without even the hope of a job, just to be able to know where I am at, feel confident if I ever get past the first screen.


Honestly, I feel you. When I was interviewing, the number one thing I kept asking for was feedback, but I rarely ever got any. Currently, when a candidate does a Litebulb interview and submits it, we email the candidate the same report PDF that the employer gets. As in, feedback is built into the flow of the product.

The next step would be to open up that candidate-facing side, and hearing that you'd pay for just that alone is great feedback!


Being able to pass tests in advance which then pre-qualifies you for actual, human interviews for companies who choose that particular challenge (in the future, even if they weren't hiring at the time you took the test) would be a great feature.


> wondering how I can benchmark myself on your platform to see how I perform with a sample?

Reading this comment, it occurs to me that if just anyone can sign up for a trial, then candidates may be able to game the system and prepare for the specific work sample tests that this product provides. If so, then that's unfortunate, because I like the idea behind this product.


2 thoughts here:

1) This is actually why I was going back and forth on offering a trial version, and decided to use a form so that I can qualify employers who legitimately need this vs candidates just trying to game the system

2) It might not actually matter if candidates get preliminary access to Litebulb interview code bases. Our interviews aren't brainteasers that have a specific optimal solution, they're just regular feature requests, ie build a CRUD API for this new resource model, so it almost doesn't matter how long you have. From what we've seen with submissions so far, having extra time doesn't necessarily make your code quality better. Potential idea: open source all Litebulb interview codebases?
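
To make "regular feature request" concrete, the flavor of prompt I mean boils down to something like this hypothetical single-file Flask sketch (real interviews target a full existing codebase with a DB, tests, and specs):

  # Hypothetical flavor of a "build a CRUD API for this resource" prompt,
  # shrunk to one file. Real interviews extend an existing codebase.
  from flask import Flask, jsonify, request

  app = Flask(__name__)
  notes: dict[int, dict] = {}  # stand-in for the real DB layer
  next_id = 1

  @app.post("/notes")
  def create_note():
      global next_id
      note = {"id": next_id, "text": request.json["text"]}
      notes[next_id] = note
      next_id += 1
      return jsonify(note), 201

  @app.get("/notes/<int:note_id>")
  def read_note(note_id):
      note = notes.get(note_id)
      return (jsonify(note), 200) if note else ("not found", 404)

  @app.delete("/notes/<int:note_id>")
  def delete_note(note_id):
      return ("", 204) if notes.pop(note_id, None) else ("not found", 404)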


> open source all Litebulb interview codebases

This would be such a wonderful contribution to the industry. Imagine having a hundred JS front-end questions; it's not really game-able any more. I agree with your take. Keep qualifying your leads though, for the sake of your funnel.


It's "game-able" in the sense that someone could indeed learn all of those, but that's a win because it trains them for actual skills relevant to the job.


I appreciate that some people prefer this kind of interviewing, but in this case, instead of memorizing some algorithms, it all comes down to whether the dev knows the specific framework or not. I'm not saying one is better than the other, just different, and I think both are valid ways of interviewing.


We hire with projects derived from the specific frameworks we use. It's the best way to give candidates a taste of actual work.

We also give people time to learn the framework we're using as part of the hiring process. And we encourage them to do that if they want!

People with a tremendous amount of experience in our framework-of-choice have an advantage. But they're also more likely to be immediately valuable. It's imperfect, but it works well for us. And you'd be surprised how many people float right through the technical assessment learning as they go.


> Tech interviews suck. Engineers grind LeetCode for months just so they can write the optimal quicksort solution in 15 minutes, but on the job you just import it from some library like you're supposed to.

I was responding to this negative part of the introduction. Actually, all of the mentioned frameworks are disallowed at Google (where LeetCode-style interviews were introduced), because they can't serve billions of people within the latency budget of the search/ads stack.

If you rephrased it to say that LeetCode-style interviews don't make sense for most companies (which are probably your customers), I would agree, but the few where they do make sense are really important companies.


I think you make a good point. You're right that LeetCode-style interviews don't work for most companies, but they do work for some companies with specific needs. I recently spoke to a prospective client who mentioned they need a quick way to filter through hundreds of candidates pre-funnel, and they only want to know if candidates can write basic code. If you can write basic code, you pass and your resume gets a look. They specifically didn't want anything too deep or difficult. I actually recommended that they use a tool like HackerRank or LeetCode instead of us because it made more sense for their use case.


I've thought about this a lot too, which is why we heavily recommend using Litebulb for take-homes. Sit-down onsites on Litebulb are only good if the candidate knows the existing frameworks; if they don't, it's pretty tough to figure them out on the spot.

Take-homes are great because the candidate either knows the specific frameworks, or doesn't but can learn them quickly enough. Either way, if you complete it within the allocated time and get high enough scores, we'd recommend moving forward.

Actually @xiphias2 what type of interviews have you seen that were really good? Like even the in-person ones, what made them good?


I really liked LeetCode style interviews at big companies, and I loved that my colleagues also understood the importance of run time of complex algorithms. I also interviewed at startups where algorithmic knowledge is more of a disadvantage, as I may optimize things too early.

As an example of why some (especially big) companies need deep algorithmic knowledge: at Google I was working on adding new features to an ads prediction model, and even without changing the code base, the new (slightly more complex) model increased the ads 99th-percentile latency by 0.1ms (millions of dollars lost to RPC timeouts). Ramping the experiment up to total user traffic required a director-level exception (the usual thing people do is optimize something else in the code base to stay within the latency budget).

The small problems that LeetCode provides for interviews are a great filter for these companies, whose engineers face much more complex optimization problems. The problem is when a small company with lots of user-facing features uses the same type of interview (they shouldn't).


Could you set the tests up so that candidates have a choice of framework (and the hiring company can choose which ones)? We use React at work, but I'd probably be quite happy to accept solutions in Vue and Svelte (and maybe Angular 2+). Similarly, for backend work we use Node.js + Postgres, but I might be willing to accept solutions in Python, Ruby, Java, or C#, and MySQL or SQL Server, for example.


We can already automate LeetCode interviews much more easily, so why don't we do it? Because people cheat. You can ask your competitive-coder friend or search Google to get the answers. Similarly, here friends can hand over complete solutions even more easily. Heck, there are sites dedicated to these kinds of services. This leads to the following consequences:

1. In both automated LeetCode and automated "coding" interviews, most people get a full score. This even turns into a negative bias: a genuinely good candidate who did not cheat has a greater chance of getting rejected than the cheaters.

2. Interviewers use both automated and in-person LeetCode to first filter out candidates, then put them through the actual interview hoops. If your product succeeds, this means another layer of "interviewing" and more waste of engineers' time.

Sorry to say, this isn't solving any problem and has the potential to create more harm for both sides.


I think you missed the entire premise of Litebulb — they are avoiding Leetcode-style testing in favor of development tasks on the real business codebase, and quickly ramping up environments that are based on your business' proprietary dev stack.

Many software companies opt to avoid leetcode-like interviewing entirely. If leetcode and prepackaged, recycled, self-contained code tests are your workplace's primary process for hiring, then I don't think this service is for you at all.


I suspect cheating is not the primary problem. I suspect, like so many other decisions in software, the primary problem is unrealistic and unfounded expectations. This behavior typically comes from unmeasured assumptions that work well in one narrow circumstance being applied to an unrelated circumstance, and then blaming the result instead of the decision. This is bias.

Unless there is a solution that directly addresses hiring bias, this company will fail like all the others before it for exactly the same reason: their clientele will apply bias (the actual cause of the problem), be unhappy with the result, and then blame the service provider. In sales the client is never wrong, which means the service provider is always wrong. This is why software overspends on recruiters and then invents fantasies like "React developers with a year of experience are rare."


I don’t think cheating prevention is in scope for a tool like this, nor should it be.

Any decent employer puts the new hire on a trial period first. During the trial period, the employer should task them with real work. When the candidate doesn’t deliver, the employer can let them go. When they do deliver, the employer can keep them.

In both cases, does it really matter if they cheated their way in or not? No. Preventing cheating is the wrong place to optimize.

While we’re here, dumb memorization of LeetCode style Q&As instead of deep understanding of underlying principles can be seen as cheating just as well. Doing actual work is better in every way.

And many employers already do exactly this kind of coding work, just manually. They make up a problem themselves and give you a time limit to commit a solution, without checking whether you're cheating or not.

Litebulb just automates this, by taking over the creation of the problem and the analysis of the solution. There are lots of advantages to centralizing this, like creating more problems, fixing and updating existing ones, etc.

Provided the problems and the analysis are as good as advertised, I think Litebulb has a great value proposition. I wish them success and hope they can have a lasting impact on the hiring process for software developers.


> During the trial period, the employer should task them with real work. When the candidate doesn’t deliver, the employer can let them go.

That would mean you waste a few months for every bad trial hire, especially if you're looking to fill just one or two positions.


If your company takes months to figure out if someone does good work or not then it has some serious problems.

If you have expertise in a field you can normally guess pretty quickly when someone bullshits their way through or actually knows what they’re talking about.

But hiring staff usually don’t have any (or deep enough) expertise in the field they’re hiring for, which only gets worse the bigger and more advanced your company gets.

The employees the new hire will be working with, however, should have this expertise (or, again, your company has a problem). So they will find out pretty quickly whether the hire is a fraud or not.

Cheating surely creates costs, but not as much as you suggest. And as others have mentioned above, it is surely not as widespread as you suggest, limiting the cost further.


You have a point: maybe bad hires can be detected in weeks instead of months. But you still waste the same amount of time on the HR side, and if you only have one position on your team to fill that year, you may be cautious about how you assign it.


Well, you’re not forced to straight-up hire the candidate just from Litebulb, right? A score in a report should not be assumed to reflect the entirety of a person, which unfortunately is a common mistake (e.g. a school, college, or university diploma). You’d most likely still want to sit down and talk with your candidates and evaluate them individually, which you’d have to do anyway if you want to see whether they fit your particular company. Litebulb just replaces what would otherwise be stupid LeetCode Q&As or old, unmaintained, home-rolled coding problems, not the rest of the hiring process. Specifically, Litebulb does not offer “the magic button” to find the perfect hire for your specific company.


Really appreciate the support here and you make some excellent points! We're trying to make something that already exists just a bit better. :)


1. That point is actually one of the reasons behind Litebulb! For most LeetCode-style tools, getting 100% is easy if you just Google the question, copy the top answer, then change up a few variables. For Litebulb, a legitimate submission has multiple commits, so when we do git history matching, no two legitimate submissions will have the exact same commit history, which means submissions either have a <5% or >95% match. The >95% matches are plagiarism and will be flagged (rough sketch below).

2. Litebulb is meant to replace the initial LeetCode question + the mid-funnel technical screen. We still recommend doing a hiring manager screen at the beginning and team onsite at the end, but using Litebulb correctly should lessen an engineer's interviewing burden, not increase it.
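To make the git history matching concrete, here's a rough sketch of what it could look like (illustrative only: the patch-hashing and Jaccard-similarity approach, and every name in it, are assumptions rather than our actual pipeline):

    # Hypothetical sketch: hash each commit's patch, then compare two
    # submissions by Jaccard similarity over their patch-hash sets.
    import hashlib
    import subprocess

    def commit_patch_hashes(repo_path):
        # "git log -p" with a NUL separator yields one patch per commit.
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "-p", "--format=%x00"],
            capture_output=True, text=True, check=True,
        ).stdout
        patches = [p.strip() for p in log.split("\x00") if p.strip()]
        return {hashlib.sha256(p.encode()).hexdigest() for p in patches}

    def history_match(repo_a, repo_b):
        # 0.0 = nothing shared, 1.0 = identical commit histories.
        a, b = commit_patch_hashes(repo_a), commit_patch_hashes(repo_b)
        return len(a & b) / len(a | b) if a | b else 0.0

A legitimate pair of submissions should score near 0, while a copied one scores near 1, which is where the <5% / >95% split comes from.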


so you're gunning to replace the 30 minute leetcode easy over the phone??


It's idealistic and naive to say that we'll replace the entire onsite interview, and I wouldn't even recommend we go in that direction. What we're trying to do is clear out the technical evaluation part of an onsite so that there's more time for engineers to have insightful conversations with the candidate, and to get to know them more as humans and potential colleagues. If you're spending too much time with a candidate staring at a whiteboard going over code, you don't have enough time to gauge culture fit.

However, because we're still early, it's not fair to ask clients to chuck out the tech eval part of their onsites, since we haven't proven effectiveness to them yet. So instead, we'll replace whatever they use mid-funnel, and also early funnel auto-interviews. We still recommend someone talk to the candidate both before and after the Litebulb interview.


We counter this by giving applicants a small homework problem and asking them to solve it in the language of their choice. Then we have a tech interview centered around their solution. We'd know in about a minute if someone else did the thinking for them. It's also a good jumping-off point for all kinds of deeper questions: systems knowledge, engineering practices, wherever we need to go.


I routinely see 75% fail.


This is a very interesting approach; I’ve seen some competitors who do this in the learning/course space as well. I’ve also done a lot of interviews from both sides of the table.

My question is around the grading examples. How do you automatically determine that Functionality is "Senior - Weak" or Commit History is "Intermediate - Strong"?

I find these to be quite subjective and better understood as a whole for any given candidate. The grading mechanism provides a false sense of being unbiased, and further dehumanizes the process.

> Every metric we measure is numeric, and every test is deterministic. As a result, you can make hiring decisions based purely on merit.

I fully disagree with this for hiring, unless you’re looking for machines, not humans. Measuring things automatically does not purge bias from the process.


We're mid-way through building a V2 of the scorecard. This "Senior - Weak" grading model isn't working, since everyone has different ideas of what a senior engineer is.

Instead, we're going to just expose raw data about somebody's code submission, and let you decide what those numbers mean.

Some examples:

1. 7/10 unit tests passed

2. GET:/api/v1/vehicle/{id} OOM'd out at 1.5M records in the DB

3. Linter returned 10 warnings, 0 errors

4. Cyclomatic complexity counter returned max of 24

5. 5 new commits, shortest commit message 3 chars, longest commit message 45 chars
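As a sketch of how a number like #2 might be produced, here's a minimal load test using locust (the tool choice and test shape here are just for illustration, not a statement about our internal stack; the endpoint and record count come from the example above):

    # Minimal load-test sketch for metric 2.
    import random
    from locust import HttpUser, task, between

    class VehicleReader(HttpUser):
        wait_time = between(0.1, 0.5)  # think time between requests

        @task
        def get_vehicle(self):
            # With ~1.5M rows seeded, an unpaginated or unindexed
            # handler shows up as timeouts, 5xx errors, or an
            # OOM-killed server process.
            vehicle_id = random.randint(1, 1_500_000)
            self.client.get(f"/api/v1/vehicle/{vehicle_id}")

You'd point this at the candidate's server (e.g. locust -f loadtest.py --host http://localhost:8000) and record the failures rather than grading them by hand.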


It's good to hear you're working on this, it's a hard problem to solve for sure.

In my experience, the metrics you've sampled aren't particularly useful for hiring. Some of my thoughts:

> 1. 7/10 unit tests passed

Automated unit tests are table stakes on all platforms like HackerRank, LeetCode, etc. Might be useful to classify/clarify the failing ones? Or even add an option for the candidate to clarify their thought process for those.

> 2. GET:/api/v1/vehicle/{id} OOM'd out at 1.5M records in the DB

As an engineer, I see the appeal of OOM checking. But as a candidate, it would be important to have the same information available at test time, so I could fix it; it could be as simple as an accidental N+1 ORM mistake (see the sketch at the end of this comment). Otherwise, hidden performance testing would make the experience very stressful, as I'd then need to double-check everything and over-optimise for perf.

> 3. Linter returned 10 warnings, 0 errors

I would argue that linting should be out-of-the-box and automated for the candidate. Or even if not, there should be heavy filtering of linter warnings. Every company has its own style guide, and candidates shouldn't be expected to learn it for an interview.

> 4. Cyclomatic complexity counter returned max of 24

While I do understand the concept of Cyclomatic complexity, I have /never/ seen this used IRL in any company (startup, mid-size, FAANG) that I've worked at. I would much rather judge code complexity in terms of readability and decision making.

> 5. 5 new commits, shortest commit message 3 chars, longest commit message 45 chars

I appreciate clean git commits, but these are enforced by a company's internal policies during code reviews. Easy to learn once, then keep doing. Definitely not a deal-breaker in a time-constrained interview assignment.

Are the raw metrics you shared what constituted the grading in V1? I would definitely reevaluate most of these for V2, as they seem more nitpicky than insightful.
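On the N+1 point in item 2, here's the kind of accidental mistake I mean, sketched in Django (the models are invented purely for illustration):

    # Hypothetical Django models, purely for illustration.
    from django.db import models

    class Owner(models.Model):
        name = models.CharField(max_length=100)

    class Vehicle(models.Model):
        owner = models.ForeignKey(Owner, on_delete=models.CASCADE)

    # N+1: one query for the vehicles, then one extra query per vehicle
    # to fetch its owner -- invisible at 10 rows, brutal at 1.5M.
    names = [v.owner.name for v in Vehicle.objects.all()]

    # Fix: a single JOIN via select_related.
    names = [v.owner.name for v in Vehicle.objects.select_related("owner")]

Without seeing the performance results during the test, a candidate has no chance to catch this.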


Not only that, but this is an interview question. It isn't helpful at all to judge the quality of code in isolation; nobody's going to write perfect code in an interview setting, especially one in which the candidate can't have a conversation with the interviewer. When I conduct code interviews with candidates, we pair on projects. The candidate is asked to complete the task without regard to how clean the code is. We then have a conversation afterwards about how to clean things up, solve for complex cases, etc. Talking through these things to get the candidate's insight is much better signal than just getting a chunk of code at the end that may or may not meet my company's linting guidelines.

I've interviewed hundreds of engineers in the last twenty years. I've been (and am) a hiring manager for many of them. I wouldn't touch this with a ten foot pole, and I'd run away as a candidate if my git commits were being judged as part of a coding interview.

These "metrics" are for conversation, not for automated "judgement."


Very cool, Gary. Grabbed some time on your Calendly. Eager to see if your product can work for us and always interested in talking to a founder in this space.


Thank you, looking forward to chatting!


Here’s a radical idea: hire the top 4 candidates on a probationary period. Have them work remotely, with the same requirements, blind to each other.


Then you'd only ever get candidates who can afford to have a "maybe job" for a while before walking away with no paycheck. I'd definitely not work for a company with that arrangement, unless there were some guarantee they'd pay you until you landed your next thing (within a reasonable timeframe).


Nothing radical about it. People have reinvented this idea for decades. It's called "contract to hire," and it's a great way to hire a good line cook and a horrible way to hire a good software engineer.


Contractors are on a contract for a different time period. They’re treated as a temporary resource and are more likely to roll off than get hired. Filling an FTE role has a different motivation, so the probationary period would be very short, like several weeks.


You're confusing generalized contractors with "contract to hire." They aren't the same.


You’re confusing probationary period with contract to hire.


Have you considered adding a learning module that would help someone get to the level of passing said interview? That would make this super helpful to both parties. Even if someone can't clear the interview today, they may have the talent and motivation to learn and come back to clear it. Great work nonetheless. I am going to try to use it to hire for my team.


The homepage looks great, but IMO does a poor job of communicating the value proposition.

After the What, it goes straight to the How. It shows me what some hiring report looks like. But it misses completely on the Why. I don’t know what problem the tool would fix for me.

The HN post for example explains the problem much better:

- even in the best case of not doing stupid LeetCode, companies have only low-quality, unmaintained, unchanging coding problems, because maintaining them is valuable time not spent on the actual business. Outsourcing to a centralized service allows for high-quality, maintained, and frequently refreshed coding problems. A service means precisely that not every company must redo the work and roll their own.

- interviews can spend more time on the human and less on the work: getting to know the team and the company, culture fit, etc.

IMO this should be the headline, and only after that the nitty-gritty details about the How, like how the report looks, etc.


There are other sites that do this, and they generally just drop these exercises on candidates without warning, without even telling them what stack it is. I got asked to do a MERN coding exercise when I hadn't touched that stack for years. I expect this app will be used the same way. It's just a logical continuation of moronic recruiters doing blind keyword scanning and hiring managers who don't give a fuck. Then they wonder why they can't hire or retain anyone. It's not your fault they suck shit. This app looks useful when used as intended.

At least with this app, the company can't turn it into unpaid work on their own product that they then keep, or use the interview as a dumping ground for problems they themselves are stumped on (Amazon loves doing both of these things). At least not yet.


> We automate technical onsite interviews for remote teams.

This seems a bit contradictory!


I think they're using "onsite" here to indicate the second interviewing stage, after the initial "phone screen." Both terms are somewhat misnomers these days since both stages are often done over video.


This feels like a solution in search of a problem.

Fact: the LeetCode loop sucks. So couldn't any Director or VP in big tech just decide they're done giving that interview? They have the resources to spin up something like this in a week. Even outside big tech, companies can very well think of non-LeetCode-style interviews; some already do.

The problem is, many engineers want new engineers forced into this type of interview, because they had to do it, so you have to do it too. Not for any reason better than that.

I'm not sure a full-featured, well-polished product like this one can change toxic engineering culture when it comes to joining the club.


> They have the resources to spin up something like this in a week

That sentence right there is a pretty solid indicator that it’s a solid business to be in, no? What VP wants to take engineer time to create an internal tool when they could pay $40/user/month or whatever? How many SaaS startups do we see right now that occupy that exact sentiment area?

As far as this being more of a culture change than a process change — well, good products can change culture. Making something much easier is a huge step on the way to getting people to do it.


Completely agree! This is indeed a big problem, and we have so many CTOs, VPs, and founders willing to jump into six-figure contracts if it works. The biggest risk here is execution, not market size.


> What VP wants to take engineer time to create an internal tool when they could pay $40/user/month or whatever?

The cheapest pricing for this is $600/mo, the next plan up is $2,400/mo, the enterprise plans start at $60k/mo.


I mean, how many engineering interviews is a big enterprise company running per month? 1000 feels like a comfortable lower bound.

Most of those would use this tool at least twice during their interview process.

So you're paying... $60k/2000 = a cost of $30 per interview. If you save your company half an hour of engineer time (at $60/hr, another comfortable lower bound) you have already made back the cost.

And that's just strictly cost reduction. If this is genuinely a better interview process, if you get more applicants and qualified candidates as a result of using the tool, of course the value of said tool is gonna be even higher.

I'm pulling numbers out of a hat, obviously, but I don't think any of this is at all unreasonable. And once you start putting it like this, I think you can start to see the justification for that kind of pricing.

Edit: well, just looking at their Enterprise pricing in a little more detail, they're pricing at 60k/200 = $300/interview, then $30/interview after that. That's... a lot steeper, and a lot less friendly to mid-tier companies. But hey, frankly, I can still see the justification for it -- if it's really good.
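Putting that back-of-envelope math in one place (every number here is an assumption from this thread, not Litebulb's actual usage or pricing data):

    # Back-of-envelope cost model; all inputs are guesses from the
    # comment above, not official figures.
    monthly_price = 60_000    # enterprise plan, $/month
    interviews = 1_000 * 2    # ~1000 candidates/month, tool used twice each
    cost_per_interview = monthly_price / interviews        # 30.0 ($/interview)

    engineer_rate = 60        # $/hour, a comfortable lower bound
    break_even_hours = cost_per_interview / engineer_rate  # 0.5
    print(cost_per_interview, break_even_hours)            # 30.0 0.5

So, under these assumptions, saving half an hour of engineer time per interview already covers the cost.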


Right, the biggest pain point we're trying to solve for is engineers getting pulled away from their work to build, maintain, conduct, and evaluate interviews. I recently spoke to Stripe engineers who spend 2 hours a day, every day, on this. A Brex EM also told me every engineer on the team conducts 1-2 onsite interviews every week. Eng time is the biggest expense in the recruiting funnel, and it's particularly difficult to solve for. We're hoping that if Litebulb can take care of some of the more tedious code analysis work, then we can help teams reduce their eng spend, or use the newly available time to build deeper connections with candidates as humans.

When it comes to changing engineering culture, I completely agree that a single tool like this isn't going to cut it, but we're seeing a big shift in new startups focusing a lot more on take-homes relevant to real work and/or work trials. For example, Gumroad (disclaimer: a client) does a Litebulb interview as the entry point, then it's straight to a 4-8 week paid trial on flexible hours with a choice of which project to work on. We're hoping that we can be a part of, and accelerate, this culture shift.


Hmmm, maybe not a question for you, but for Gumroad: isn't there already an issue with candidates investing so much time in the process? (E.g. with any large company it can literally take 3+ months.)

For me, as someone who just finished interviewing, if you told me that I either have to work two jobs for 4-8 weeks or quit my current job for a limited contract that may or may not work out, I'd almost certainly say no. (I think most people would too; people already complain about take-home quizzes, etc.)


I think there’s (unfortunately) a need for a low-effort high-pass filter for candidates because there are so many utterly unqualified candidates chasing the dollars that a software job can bring. I don’t blame them, but I also think the bottom 10% of all potential programmers is way over-represented in the applicant pool and finding a way to filter them out effectively has value.

It doesn’t have to be LeetCode, but it does have to not be “have an engineer spend an hour with everyone”.


This is what we saw too when we were hiring our own engineers 2 months ago. A surprisingly good filter is "can you even get started?". The fact that our interviews required candidates to navigate GitHub, set up a dev env (even if it's just the make command), and understand the existing code and prompt alone filtered out a non-negligible percentage of the candidate pool.


That is simultaneously crazy and entirely believable from what I’ve seen: “experienced” Java devs who couldn’t install Java or set $JAVA_HOME, devs who couldn’t use git to check out code, and other “how did you possibly get to this point?” moments.


> The problem is, many engineers want new engineers forced into this type of interview, because they had to do it, so you have to do it too. Not for any reason better than that

I doubt HR people are on a coordinated mission to avenge their own suffering by making new candidates suffer the same horrible LeetCode fate. Instead, I rather think LeetCode is chosen because it’s the easiest way: the path of least effort, or laziness.


I also know a lot of Directors who like buying a solid product with solid support. That could be a big selling point on its own.


I’m only going to do an automated interview if it gives me access to a huge number of companies. Otherwise, the in-person interviewer is a good check on the amount of my time they’d be willing to waste.


Looks brilliant. Anyone who makes the LeetCode grind go away is already a positive in my book, but this seems to go much further and automate a lot of the steps companies need these days. Awesome idea!


Appreciate it, thank you!


Are you planning to create tech management interview cases? Managers don't seem to be as qualified as the people they hire. Seems like a problem in need of a better application screening process.


Engineering management skills are very, very hard to evaluate. From what I'm aware of, you can get a general idea of someone's competence as an engineering manager by seeing how their engineers perform under them vs. under someone else. For example, if a team is doing so-so, a new EM comes in, and a few months later the same team is suddenly all rockstar devs, that's a sign of a good EM.

Also, the best engineers aren't necessarily the best engineering managers. See https://en.wikipedia.org/wiki/Peter_principle.

We don't believe tech management interviews should at all be automated. You really want these to be as qualitative and human-driven as possible.


> We don't believe tech management interviews should at all be automated. You really want these to be as qualitative and human-driven as possible.

It is a convenient myth that people who build the actual systems can be put through an automated filtering process capable of sifting wheat from chaff, whereas those who manage them are doing subjective, qualitative work that cannot possibly be measured.


How would you measure the quality of a manager?

I do think everything can be measured given the data, and hence automated. The problem in the manager case seems to me to be that the data is not easily accessible.


Look to the GMAT for inspiration. The GMAT measures critical reasoning, analytical skills, comprehension skills, etc. Also, case interviews (think: management consultancies) can be applied to tech management applicants.


Congrats on the launch! I run a job board for companies that don't do LeetCode interviews (www.nowhiteboard.org) and would love to add any companies that use your platform for free & see if we can partner up somehow!

I think a big reason (in addition to many others..) that companies still use LeetCode is the lack of interviewing platform alternatives, so I'm glad to see Litebulb is looking to fill that niche. I'll email you directly after work if I don't hear anything on here!


I like the concept much better than LeetCode.

If you succeed in making this feel "native" to both the employer and the applicant, you'd be billionaires by tomorrow.


Appreciate it tons! And yes, making this experience feel "native" and familiar is one of our primary focal points. In the context of an interview, any slight deviation from an expected form factor can contribute to nervousness or a loss of candidate performance, which is why we're trying to make sure that dev env spin-up is quick (eventually <60s), the coding environment is familiar (VSCode or JetBrains), and submissions are familiar (probably just Git). For employers, we still have lots of work to do to ensure the signals collected integrate well with popular ATS's like Greenhouse or Lever.


Is that list of tech stacks complete?

Personally I am looking for such a tool but have a hard requirement for native compiled languages (C/C++/Rust).

I know that there is some transfer: a good developer who does a decent Python/Go exercise could likely also be good in other languages. But candidates might not want to make that transition.


Any time you say merit-based hiring and quantifiable hiring metrics, you should realize that your best case is that you don’t add any additional bias to an already (racist, sexist, etc) biased system. And any hiring process company that doesn’t address this in their product pitch doesn’t understand the real barriers to equity in hiring today.


I thought it would automate the process for the person being interviewed. No more leetcode crap to worry about... Too bad.


So how are mobile / hardware engineering technical interviews done? This seems specific to only a subset of engineers.


Yeah, for now it's primarily for frontend, fullstack, or backend cloud devs. We have an iOS Swift interview already built, but it's not polished, so it's in the backlog for now; DevOps and data science interviews are coming up next. Hardware is going to be a tough one though; not too sure how much support we can easily provide there.


Let me get it straight… Is the quality of the code still checked manually? You’re a startup setting up interviewing environments, then.

I thought there was some sort of machine learning involved.

“fake it till you make it”

Who can guarantee that your “reviews” are aligned to my goals and not just outsourced to the cheapest contractor available?


Yeah so this is a common concern we've seen. "How do I know your idea of a senior engineer is the same as my idea of a senior engineer?" And the answer is you don't. We're modifying the report to no longer say things like "Strong Junior dev" but rather to just give you a bunch of raw data, and then you decide how that data is interpreted.

Some examples:

1. 7/10 unit tests passed

2. GET:/api/v1/vehicle/{id} OOM'd out at 1.5M records in the DB

3. Linter returned 10 warnings, 0 errors

4. Cyclomatic complexity counter returned max of 24

5. 5 new commits, shortest commit message 3 chars, longest commit message 45 chars

Also, the concern with outsourcing to a cheap contractor is a very legitimate concern, because we ALMOST actually did that. Instead, we decided that it was a bad move and to double down on building product + surfacing clear, unambiguous data. That's also why we're intentionally staying away from ML (at least for now), because it's inherently unpredictable.
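As an illustration of how a number like metric 4 can be produced mechanically, here's a sketch using radon, a common Python complexity tool (the tool choice is an assumption for illustration, not a statement about what we actually run):

    # Sketch: computing a max cyclomatic complexity figure with radon.
    from radon.complexity import cc_visit

    source = """
    def classify(n):
        if n < 0:
            return "negative"
        for i in range(n):
            if i % 2 == 0:
                print(i)
        return "done"
    """

    blocks = cc_visit(source)  # one entry per function/method/class
    max_cc = max(block.complexity for block in blocks)
    print("Cyclomatic complexity counter returned max of", max_cc)

The point is that the number itself is deterministic and tool-generated; what it means for a hiring decision is left to the reader of the report.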


To be blunt, a lot of those data points are not useful in evaluating a candidate beyond entry level positions. I really don’t care that much about commit message length or some linter’s idea of complex code. To me, trying to evaluate candidates on these kinds of quantitative factors is just not useful and it feels like grading a standardized test (in a bad way).


I agree. But those are the only metrics you can use to "judge" some code automatically.

<sarcasm> I always forget number of lines of code. </sarcasm>


> We're modifying the report to no longer say things like "Strong Junior dev" but rather to just give you a bunch of raw data, and then you decide how that data is interpreted.

Nothing against you man, but that's something I (and maybe all the people looking for a dev) don't really need.

I can run linters and static analysis tools by myself if I really want. I never did that myself when I was interviewing people, and I've never seen any other interviewer do it.

And you're asking $600/month for this?


There is a good percentage of devs on the upper end of the experience spectrum who just won't take a coding challenge as part of the hiring process. You will miss out on a huge pool of talent that will automatically opt out before the process even starts.


This. I have a personal rule: I never interact with bots in a professional capacity (it's a sure-fire strategy for demoting yourself to a cog in a machine), and this includes Slack bots etc. I would certainly not interact with a bot for a company I don't even have a business relationship with yet.


OK, all I have to do now is find a way to automate answering the interview questions.


It looks like you're automating coding interviews. Not technical interviews.

I conduct technical interviews routinely and very rarely do I touch on anything coding-related. That's because I'm in security, compliance, and identity.

I'd strongly suggest changing 'technical interview', because it gave me a completely incorrect idea of your product.

Coding interviews are technical. But not all technical interviews are coding.


This is a really good point, thanks for bringing it up. @dang has already helped us update the HN post title, and we've made that copy change on our website as well. From now on, our messaging will be "coding interviews", not "technical interviews". That is, until we start supporting all the other technical specialties ;)


Seriously. I'm not an actual software developer; I do systems admin and DevOps config whacking. If I have to bring up an IDE to write something, I'm waaaaay out of my wheelhouse. I don't touch JS, React, Java, or any of that stuff. So this "technical" interview would tell people absolutely nothing about the skills needed for my job.

This is a software engineering test only, and very specific stacks at that. It would be a waste of a person's time unless that was the exact stack they used in the past, or was demanded by the job.


That is still very technical work, however, and I bet you're decent at scripting in a language or two. Still plenty to test there (theoretically).


Ok, we've s/technical/coding/'d the title above.


Very cool! I'd love to see game engine support added (mainly Unity & Unreal).


That would be super cool, and we'll be adding support for new stacks by popular demand. We've already had a few people mention support for Unity, so it's on our roadmap (but likely won't get to it until Q4)!


Now we need an automated interview-question solver, as well as an automated real-time deepfake for taking interviews.


If I never have to interview again, it’ll be too soon.

Hats off to you for trying to fix a broken system.


Do you happen to be hiring for customer support / support engineer roles?


FYI, the long Loom URL in your post is causing this page to be nearly unreadable on iPhone Safari. You have to squint to read tiny text or scroll horizontally.



