interviewing.io is an anonymous mock interview platform and eng hiring marketplace. Engineers use us for mock interviews, and we use the data from those interviews to surface top performers, in a much fairer and more predictive way than a resume. If you’re a top performer on interviewing.io, we fast-track you at the world’s best companies.
We make money in two ways: engineers pay us for mock interviews, and employers pay us for access to the best performers. This means that we live and die by the quality of our interviewers in a way that no single employer does, no matter how much they say they care about people analytics or interviewer metrics or training. If we don’t have really well-calibrated interviewers, who also create great candidate experience, we don’t get paid.
In a recent post, we shared how, over time, we came up with two metrics that, together, tell a complete and compelling story about interviewer quality: the candidate experience metric and the calibration metric. In this post, we’ll talk about how to apply our learnings about interviewer quality to your own process. We’ve made a bunch of mistakes so you don’t have to!
But first, a pessimistic word. The more time I spend in this industry, the more I’m convinced that many top companies end up hiring great engineers not because of their process but despite it. The reality is that top companies are generally not incentivized to care about candidate experience or metrics. As long as you have a revolving door of candidates, both can be pretty meh, and if you’re just a little bit better on calibration than a coin flip and not bad enough to outright scare people away, you’ll be fine.
In the rare instances where I’ve seen companies really care about interviewer quality, it’s because an eng leader there has taken it upon themselves, as a labor of love. Otherwise, if we’re pragmatic about it, hiring is a cost center, and it’s never going to get the attention that a profit center will. At the same time, some of you reading this post will care enough about interviewer quality and candidate experience to make some changes within your organization.
On the back of that fervent hope, below is a punch list of things you can do to move the needle:
Look, the reality is that conducting interviews is a polarizing task — people are either *passionate* (so passionate that I put it in italics) about it, or they despise it. Speculating on what perfect storm of past experiences, heredity, and political leanings makes an interviewer fall into one camp or the other is outside the scope of this post. But after personally interviewing hundreds of interviewers, we are 100% convinced that conducting interviews is polarizing.
What does this mean for you? Choose the people who are passionate about interviewing to conduct your interviews. It’s easy to figure out who they are. Just ask them. Over the years, I’ve listened to a lot of interview replays. You can immediately identify when an interviewer is checked out. It’s painful. You’ll hear them typing. You’ll hear them go silent for a while. You certainly won’t hear them collaborating with their candidate or gently guiding them away from a perilous rabbit hole. Compare that to a good experience like this one.[1] We’ve all been on the receiving end of an interviewer’s callous indifference, but it’s absolutely preventable. If someone hates conducting interviews, don’t make them.
In our experience, a terrible question in the hands of a skilled, engaged interviewer can become great and carry lots of signal. A great question asked by an unskilled, disconnected interviewer will always be bad.
If you want to be deliberate about choosing your best interviewers, follow the best practices we've found after sifting through thousands of interview recordings. The best interviewers see every interview as a collaborative exercise with the goal of seeing if they can “be smart together”. The data shows that engagement really does matter.
First, please take a look at our post about the two metrics we track – the candidate experience score and the calibration score – if you haven’t already. Why track metrics in the first place? There’s an old adage that says you can’t fix what you can’t measure. I think this expression is overused — sometimes intuition is good enough.
That doesn’t work for interviews, though — most of the time, an interview is a private interaction between two people, so an independent observer never builds up the anecdotal intuition needed to form opinions. Interviews are happening all the time, but unless you measure them, they’re happening in a vacuum. Yes, you might do some reverse shadows when training a new interviewer, but once they’re on their own, things tend to go sideways.
The second reason intuition isn’t good enough here is that you’re often not privy to interview outcomes, either because there’s a latency to them or because the candidate isn’t interviewing for your team… or because if you reject someone at the phone screen stage, you don’t get to find out if they were a false negative.
Unfortunately, as I mentioned earlier, no single company can reproduce the kind of data we have, either about candidate experience or about outcomes. For candidate experience, candidates don’t usually give post-interview feedback in the wild, and when they do, because the interviews aren’t anonymous and the candidate wants to work there, they’re not going to say bad things. And for outcomes, though you’ll know if someone passed an onsite, it’s just one data point for that person (e.g., you don’t know how they did in their interviews at other companies, and you typically won’t know what ended up happening to people you rejected).
Here’s what you can do despite these limitations:
Once you do these three things (or even if you do the first one and just one of the other two), you’ll be able to create a “candidate experience score” and an “accuracy score” for each interviewer. Track these scores over time, and use them to inform interview scheduling frequency and assignment.
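Mechanically, tracking these two scores doesn’t require anything fancier than a running average per interviewer. Here’s a minimal sketch of what that bookkeeping could look like — the record shape, field names, and 1–5 rating scale are our own illustrative assumptions, not a prescription:

```python
from statistics import mean

# Hypothetical records: one per interview conducted.
# "experience" is the candidate's post-interview rating (1-5);
# "agreed" is whether the interviewer's verdict matched the eventual
# outcome, or None when the outcome is unknown (e.g., a rejected
# phone-screen candidate you never hear about again).
interviews = [
    {"interviewer": "alice", "experience": 5, "agreed": True},
    {"interviewer": "alice", "experience": 4, "agreed": None},
    {"interviewer": "bob",   "experience": 2, "agreed": False},
    {"interviewer": "bob",   "experience": 3, "agreed": True},
]

def scores(rows):
    """Return {interviewer: (candidate_experience, accuracy)} averages."""
    out = {}
    for name in {r["interviewer"] for r in rows}:
        mine = [r for r in rows if r["interviewer"] == name]
        experience = mean(r["experience"] for r in mine)
        # Accuracy only counts interviews whose outcome is known.
        known = [r["agreed"] for r in mine if r["agreed"] is not None]
        accuracy = mean(known) if known else None
        out[name] = (experience, accuracy)
    return out

print(scores(interviews))
```

Averaging over only the known outcomes is the key design choice here: it keeps the accuracy score honest about the false-negative blind spot described above, instead of silently treating unknown outcomes as correct calls.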
We just talked about metrics. The right incentive structure gives your metrics teeth.
Every month, we run an onboarding session for all of our new interviewers where we talk about our standards, our business model, the metrics we track, and why they matter.
During these onboarding sessions, we hear one thing over and over from new interviewers — how much better the experience of being an interviewer is when they’re paid for their time. If you read that sentence again, you might find it somewhat ironic. After all, these are people who conduct interviews day in and day out at their jobs, where they are literally being paid for their time.
Unfortunately, even though that’s technically true, engineers don’t really feel like they’re being paid to do interviews because the whole exercise is so perfunctory. There’s no reward for doing a good job and, usually, no punishment for doing a bad one — it’s just something you have to do. Worse, at most companies, conducting interviews is an unwelcome distraction from doing real work (i.e., shipping product), and the interviewers who are passionate about doing a good job do it despite their incentive structure, not because of it.
This is simple to fix. Outside of using the two aforementioned metrics to inform who gets booked and how often, you can take it a step further and actually reward being a good interviewer.
If you use OKRs (Objectives and Key Results), make staying above a certain candidate experience score one of those OKRs and achieving a certain accuracy score another one.
Alternatively (or additionally), institute a cash bonus for conducting interviews while maintaining good ratings.
When establishing criteria for promotions, ensure that no one can become a people manager without being a stellar interviewer — I don’t think I have to convince you that those skill sets overlap.
I know this is a big ask, but trust me, it’s the right thing to do.
Your legal department will probably tell you not to do it, but we did the research, and literally zero companies (at least in the US) have ever been sued by an engineer who received post-interview feedback.
In cases where the candidate performed well, our data shows that delivering feedback will increase their odds of ultimately accepting your offer. The hard part, of course, is delivering feedback when the interview goes poorly. No one wants to deal with an angry, defensive candidate, even if they’re not worried about the candidate’s litigiousness.
While post-interview feedback is baked into interviewing.io, it’s not something that really happens in the wild, so we had to invent our own best practices for how to do it. Below is a slide taken verbatim from our monthly interviewer onboarding sessions. I hope that sharing it encourages a few of you to try this out. The most important takeaway from this slide is to not focus on the outcome but rather to get specific right away — this will keep your candidate from getting defensive and will set them up to actually hear and internalize your feedback.
If you do decide to give post-interview feedback, you can also check out our detailed playbook for exactly how to do it (it’s a much more detailed version of the slide pictured above).
We have no delusions that you’ll do most of these things. But even if you do a few, you will stand out, your candidates will remember you, and you will feel great about having helped someone. Years ago, when I was head of recruiting at a startup, I was that rare breed of recruiter who had also been an engineer, so I conducted technical interviews myself. In some ways, this setup gave me more freedom because I was the one who had to deal with candidates, end to end – when I took risks, I was only creating problems for myself.
At some point, I started giving candidates verbal feedback after their interviews, especially in cases where it was close and where the candidate would benefit from a nudge in the right direction (“Hey, make sure you get comfortable with manipulating hash tables.”) Then I took it a step further and started sending them a book.
Years later, many of those candidates who had failed their technical interviews with me became interviewing.io’s first customers.
So, yeah, if you do any of these things, you generate a lot of goodwill, and accrued goodwill can, over time, make all the difference between a stellar employer brand and a mediocre one. It can also help you edge out companies that pay more or have flashier brands but don’t care about making interviewing better.
[1] We also have replays of shitty interviews. That’s what made us come up with our metric, but I won’t share them to protect the guilty.