You're spot on – those moments when someone is clearly reading from a script, parroting marketing material, or backpedaling on their opinions are incredibly telling, whether they're using AI help or otherwise. Ultimately, you want to hire someone who is a good fit for the company culture and the role – and that requires authenticity, critical thinking, and a genuine desire to be part of the team.
You're absolutely right, the traditional interview process, especially in tech, often fails to accurately assess the skills and qualities that matter most for success in a role. Many interviews focus heavily on rote memorization of algorithms, data structures, and obscure technical details that are rarely used in day-to-day work. This can advantage those who are good at cramming for tests but doesn't necessarily reflect real-world problem-solving abilities.
There's no Chrome extension; the app just connects to the meeting tab via tab sharing. And yes, while I use Whisper in Prep, in Copilot I use Deepgram for speech-to-text streaming (Whisper doesn't support streaming, unfortunately).
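For anyone unfamiliar with the batch-vs-streaming distinction mentioned here, a toy Python sketch may help. These transcriber functions are hypothetical stand-ins, not the real Whisper or Deepgram APIs: a batch transcriber must see the whole recording before returning anything, while a streaming transcriber emits partial transcripts as audio chunks arrive.

```python
from typing import Iterator, List

def batch_transcribe(chunks: List[str]) -> str:
    # Whisper-style (hypothetical): needs the complete recording
    # before it can produce any output at all.
    return " ".join(chunks)

def streaming_transcribe(chunks: Iterator[str]) -> Iterator[str]:
    # Deepgram-style (hypothetical): yields a growing partial
    # transcript per chunk, so captions appear while the speaker
    # is still talking.
    partial: List[str] = []
    for chunk in chunks:
        partial.append(chunk)
        yield " ".join(partial)

if __name__ == "__main__":
    audio = ["hello", "world", "this", "is", "streaming"]
    print(batch_transcribe(audio))          # one result at the end
    for caption in streaming_transcribe(iter(audio)):
        print(caption)                      # incremental results
```

The latency difference is the whole point for live meeting copilots: with streaming you can show text a fraction of a second behind the speaker instead of waiting for the recording to finish.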
That's a cool idea, and it could be achievable with just hours of gameplay. It would essentially be a game where the first person to reach the end receives a job offer.
No, hire all of them. Ideally, design both the game and the company to blur the line between them. Each job title matches a monster in a dungeon, each manager an end boss. The product could be a dungeon-master-style task manager where you drag and drop minions onto tasks that have custom artwork to fit the client.
Just as writers use grammar checkers and artists use digital brushes, developers increasingly rely on AI tools in their daily work. In this view, using AI assistance is no different than using any other resource available to a developer on the job.
The purpose of an interview is to find the best person to do the job, so that should be the criterion when we judge something as "cheating", not tool use as such. Say you have a junior developer, fresh out of school, who's just studied all the theory and done every LeetCode problem out there. They're going to do better on the traditional interview than a senior developer, whose more recent experience is more practical. But the senior developer is more qualified to do the job. Is it really cheating for the senior developer to use an AI tool to assist with the theory? Or is it merely levelling the playing field and making interviews fairer?
Your counterargument here seems to be: "Interviews suck because they don't measure the right things anyway. Therefore it's okay for candidates to use an AI tool during the interview process, even though employers explicitly forbid them from doing so."
Except that it's a much more controversial technology than "digital brushes", at least for now. It could be widely accepted in the future, of course.
If a candidate is simply regurgitating AI-generated answers without genuine understanding, a skilled interviewer will see right through it. It's like trying to pass off a memorized speech as genuine conversation - the lack of depth and authenticity will be evident.
On the other hand, some candidates possess deep technical skills but lack the confidence to articulate them effectively, especially in high-pressure situations. AI assistance can level the playing field for candidates who might otherwise struggle to showcase their true abilities.
That sounds like a refreshing and practical approach to developer hiring! It's great to see companies moving away from traditional, often artificial, coding challenges and embracing assessments that reflect real-world workflows.
It is, until you're an adult and they want a tic-tac-toe rendition for a fintech. OK, sure, kid. A more serious fintech had a CoderPad that wouldn't compile a line of Swift; we laughed, switched to Xcode, and enjoyed our time together making actual software related to their SDK. The whole point is to understand how people think... so let them think with the tooling they think with.
You've hit the nail on the head! Your comparison to engineers refusing to use Google in the early 2000s is spot-on. It highlights how quickly our perception of "essential skills" can evolve with technology.
You raise a crucial point: instead of banning or penalizing AI use, why not embrace it as an opportunity to assess candidates on a deeper level?