Why giving feedback (whether it’s good or bad) will help you hire

By Aline Lerner | Published: February 13, 2023; Last updated: May 1, 2023

Note: This post originally appeared in TechCrunch on February 6, 2023.

One of the things that sucks most about technical interviews is that they’re a black box — candidates (usually) get told whether they made it to the next round, but they’re rarely told why they got the outcome that they did.

Lack of feedback isn’t just frustrating to candidates. It’s bad for business. We did a whole study on this. It turns out that 43% of all candidates consistently underrate their technical interview performance, and 25% of all candidates consistently think they failed when they actually passed.

What’s particularly important is that there’s a statistically significant relationship between whether people think they did well in an interview and whether they’d want to work with you. In other words, in every interview cycle, some portion of interviewees are losing interest in joining your company just because they don’t think they did well, even when they actually did.

Practically speaking, giving instant feedback to successful candidates can do wonders for increasing your close rate.

Giving feedback will not only make candidates you want today more likely to join your team, but it’s also crucial to hiring the candidates you might want down the road. Technical interview outcomes are erratic, and according to our data, only about 25% of candidates perform consistently from interview to interview. This means that the same candidate you reject today might be someone you want to hire in 6 months. It’s in your interest to forge a good relationship with them now.

But won’t we get sued?

I surveyed founders, hiring managers, recruiters, and labor lawyers to understand why anyone who’s ever gone through interviewer training has been told in no uncertain terms to not give feedback.

The main reason: companies are scared of getting sued.

As it turns out, literally zero companies (at least in the US) have ever been sued by an engineer who received constructive post-interview feedback. As some of my lawyer contacts pointed out, a lot of cases get settled out of court, and that data is much harder to get, but given what we know, the odds of getting sued after giving useful feedback are extremely low.

What about candidates getting defensive?

For every interviewer on our platform, we track two key metrics: the candidate experience score and the interviewer calibration score.

For our purposes here, all you need to know is that the candidate experience score measures how likely candidates are to come back after interviewing with a given interviewer, and the interviewer calibration score tells us whether a given interviewer is too strict or too lenient, based on how their candidates do in subsequent, REAL interviews (which we host on our platform as well). If an interviewer continually gives good scores to candidates who fail real interviews, they’re too lenient, and vice versa.

When you put the candidate experience score and the interviewer calibration score together, you can reason about the value of delivering honest feedback! To wit, below is a graph of the average candidate experience score as a function of interviewer accuracy, representing data from over 1,000 distinct interviewers (comprising ~100K interviews).

As you can see, the candidate experience score peaks right at the point where interviewers are neither too strict nor too lenient but are, in Goldilocks terms, just right. It drops off pretty dramatically on either side of that point.

In short, based on our data, we’re confident that, if you do it right, candidates won’t get defensive and that the benefits of delivering honest feedback greatly outweigh the risks.

The playbook for how to deliver honest (and sometimes harsh) feedback

The first and most important thing is to NOT focus on the outcome but rather to get specific right away — this will keep your candidate from getting defensive and will set them up to actually hear and internalize your feedback.

In other words, whether they did well or poorly, don’t tell them right away. Instead, dive into a constructive, detailed assessment of their performance. Reframing feedback in this way takes some practice, but your candidates won’t push you to give them the outcome. Instead, their attention will be redirected to the details, which will make the pass/fail part much more of an afterthought (and, in some cases, entirely moot). After all, why do people get defensive? It’s not because they failed! Rather, it’s because they don’t understand why and feel powerless.

To help start the conversation, here are some leading questions for you to consider:

  • Did they ask enough questions about constraints before getting into coding or before starting to design a system?
  • Were there specific code snippets or portions of their solution that could have been cleaner or more robust?
  • Could their solution have been more efficient?
  • Did they discuss and reason about tradeoffs?
  • Did they make mistakes when discussing time or space complexity? What were those specific mistakes?
  • Did they make any mistakes when trying to use their programming language of choice idiomatically (e.g., iterating in Python or JavaScript)?
  • For systems design questions, did they jump to suggesting a specific database, load balancer, tool, etc. without reasoning through why that tool is the right choice for the job?
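To make the language-idiom point above concrete, here’s a small, purely illustrative example (not from the post) of the kind of Python iteration feedback an interviewer might give. Both versions are correct; the second is what a reviewer would typically suggest:

```python
def pair_names_unidiomatic(names, scores):
    # A common candidate pattern: index-based loop over parallel lists.
    result = []
    for i in range(len(names)):
        result.append((names[i], scores[i]))
    return result

def pair_names_idiomatic(names, scores):
    # Idiomatic Python: zip pairs the lists directly, no manual indexing.
    return list(zip(names, scores))

names = ["ada", "grace"]
scores = [95, 98]
assert pair_names_unidiomatic(names, scores) == pair_names_idiomatic(names, scores)
```

Pointing at a specific loop in the candidate’s own code and showing the idiomatic rewrite is far more useful than a generic “your Python wasn’t idiomatic.”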

Note that to answer these questions well and to give specific, constructive feedback, it’s critical to take notes, ideally timestamped ones, during the interview. Then you can always go back to your notes and say, “Hey, you jumped into coding just 5 minutes into the interview. Typically, you’ll want to spend a few minutes asking questions.”

And, of course, specific feedback really does mean being specific. One of the kindest, albeit most labor-intensive, things you can do is walk through their code with them, point out places where they went astray, and note what they could have done better.

One other useful pattern for giving feedback is to share objective benchmarks for a given interview question, with respect to both timing and the number of hints given. If you’re a great interviewer, you probably do something called layering of complexity, where after a candidate successfully solves a question, you change up the constraints in real time. You may even do this 3-4 times during the interview if a candidate is blowing through your questions quickly.

This means that you know exactly how many constraint changes you’ll be able to go through with a low-performing candidate vs. a mediocre one vs. one who’s truly exceptional.

Your candidates don’t know this, though! In fact, candidates commonly overestimate their performance in interviews because they don’t realize how many layers of complexity a question has. In this scenario, a candidate will finish, say, the first layer successfully right before time is called. They walk away thinking they did well, when in reality, the interviewer is benchmarking them against people who can complete 3 layers in that amount of time.

How do you put all of this info to practical use? Let your candidates know what the benchmarks are for a top-performing candidate at the end of the interview. For instance, you could say something like, “In the 45 minutes we spent working this problem, the strongest performers usually complete the brute-force solution in about 20 minutes, optimize it until it runs in linear time (which takes another 10 minutes), and then, in the last 15 minutes, successfully complete an enhancement where, instead of an array, your input is a stream of integers.”
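As a concrete sketch of the kind of layered question that benchmark describes (the post doesn’t name a specific problem, so this is a hypothetical example): maximum sum of a contiguous subarray. Layer 1 is the brute-force O(n²) solution, layer 2 is the linear-time optimization (Kadane’s algorithm), and layer 3 follows naturally, since a single-pass solution also handles a stream of integers:

```python
def max_subarray_brute(nums):
    # Layer 1: try every (start, end) window; O(n^2) time.
    best = nums[0]
    for i in range(len(nums)):
        total = 0
        for j in range(i, len(nums)):
            total += nums[j]
            best = max(best, total)
    return best

def max_subarray_stream(stream):
    # Layers 2 and 3: Kadane's algorithm. Single pass, O(1) extra space,
    # so the input can be any iterable, including a stream of integers.
    it = iter(stream)
    best = current = next(it)
    for x in it:
        current = max(x, current + x)
        best = max(best, current)
    return best

nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
assert max_subarray_brute(nums) == max_subarray_stream(iter(nums)) == 6
```

The benchmark conversation then writes itself: “strong performers finish the quadratic version in X minutes, get to the single-pass version in Y, and handle the streaming variant in the time remaining.”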

Also, let them know exactly how many hints are par for the course. Just like with how much time should elapse for different parts of the interview, candidates have no idea what “normal” is when it comes to the number and detail level of hints. For instance, if a candidate needed a hint about which data structure to use, followed by a hint about what time complexity is associated with that data structure, followed by a hint about a common off-by-one error that comes up, you may want to tell them that the strongest performers usually need a hint about one of those things, but not all three.

The key to communicating these benchmarks constructively is, of course, to be as specific as possible with runtimes or space constraints or whatever success metric you’re using.

One final technique some of our interviewers employ is to ask their candidate to perform a detailed self-assessment at the end of the interview before giving feedback. This is an advanced maneuver, and if you’re completely new to giving synchronous feedback, I wouldn’t do it in your first few interviews. However, once you get comfortable, this approach can be a great way to immediately zero in on the areas where the candidate needs the most help.

If you do end up going the self-assessment route, it’s good to ask your candidate some leading questions. For instance, for algorithmic interviews, you can ask:

  • How well do you think you did at solving the problem and arriving at an optimized solution?
  • How clean was your code?
  • Where are some places that you struggled?

While the candidate is responding, take notes (perhaps even in your shared editor!), and then go through their points together, and speak to each point in detail. For instance, if a candidate rates themselves well on code quality but poorly on their ability to solve the problem, you can agree or disagree and give them benchmarks (as discussed above) for both.

Here’s the summary playbook:

  • Take detailed notes during the interview, ideally with timestamps, that you can refer to later.
  • DON’T lead with whether they passed or failed. Instead, get specific and constructive right away. This will divert the candidate’s attention away from the outcome and put them in the right headspace to receive feedback.
  • As much as possible, give objective benchmarks for performance. For instance, tell candidates that the strongest performers are usually able to finish part 1 within 20 minutes, part 2 within 10 minutes, and part 3 within 15 minutes, with at most 1 hint.
  • Once you get comfortable with giving feedback, you can try asking candidates to do a self-assessment and then use it as a rubric that you can go down, point by point.
