
If you've been in eng hiring for any length of time, you've probably noticed that the candidates who look best on paper don't always turn out to be the best engineers, and that some of the strongest people you've ever worked with took wildly unconventional paths to get there.
Despite that, most hiring pipelines are still built to filter for pedigree and brands. This insistence on brand names creates a massive talent arbitrage opportunity for recruiting teams willing to look beyond them. As everyone's AI-powered funnels converge on the same narrow set of profiles, a growing long tail of talented engineers — people who are smart, who can get things done, who will be easier to close and will stay longer — is being systematically overlooked. Whoever figures out how to identify these people is, ironically, going to win the talent war.
We analyzed ten years of hiring data and combed through our roster of non-traditional top performers to see what their resumes had in common. Here are the exact signals to look for to spot diamonds in the rough.

I've spent 15 years in technical recruiting, and I've spent most of that time trying to put into words everything that's wrong with how we hire engineers. Here it is.
Everyone is chasing the same candidates who look good on paper. Many of them aren't looking right now, and many of them aren't actually good. But there's no way to search for "good." LinkedIn doesn't have that filter. So recruiters rely on proxies. Where did someone work? FAANG? Top school? Specific VC-backed startup?
The new wave of AI sourcing tools was supposed to fix this. It didn't. The data about who's actually good and who's actually looking just doesn't exist in these tools. So instead of solving the problem, they let recruiters get way more specific about the wrong things. Now you can ask for a FAANG engineer who also worked at a startup backed by a specific investor and who previously owned a poodle (because some hiring manager told their recruiter that engineers with poodles write cleaner code).
The poodle thing is a joke. Sort of.
The result is what I call the technical recruiting death spiral. Criteria get narrower, sourcing takes longer, candidates still fail interviews, and a massive long tail of talented engineers who would be easier to close and would stay longer gets completely overlooked because they haven't owned a poodle.
In this post, I go deep on this death spiral and test one of the leading AI sourcing tools to see if it can actually find me good engineers. Spoiler: it can't.

In the last post in our hiring series, I talked about how, for six years, we ran the largest blind eng hiring experiment in history and placed thousands of people at top-tier companies. 46% of candidates who got offers at these companies didn't have top schools or top companies on their resumes. Despite that, these candidates performed as well as (or better than) their pedigreed counterparts, were 2X more likely to accept offers, and stayed at their companies 15% longer.
Of course, it's easy to say that you should hire non-traditional candidates. But how do you separate great ones from mediocre ones, when you can't look at brand names on their resumes for signal? The short answer is that it's really hard. We spent years figuring it out.
But we now have a predictive model that outperforms both human recruiters and LLMs and can reliably identify strong candidates, regardless of how they look on paper, from nothing more than an anonymized LinkedIn profile. Not only can it spot diamonds in the rough, but it can also flag candidates who look good on paper but aren't.
For years, hiring has relied on pedigree and optics because outcomes data was effectively inaccessible (especially data for candidates who don't pass resume screens). We think we've fixed that.
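The sketch below is a toy illustration only: interviewing.io hasn't published its model, and every feature name and weight here is invented. The point is the shape of the approach: score profiles against historical outcome labels (interview passes, on-the-job success) instead of pass/failing them on brand-name proxies.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    years_shipping: float    # hypothetical feature: years of hands-on delivery
    scope_growth: float      # hypothetical feature: widening ownership over time
    tenure_stability: float  # hypothetical feature: median tenure per role
    has_top_brand: bool      # the proxy most pipelines actually filter on

def proxy_screen(p: Profile) -> bool:
    """What most pipelines do: pass/fail on brand names alone."""
    return p.has_top_brand

def outcome_score(p: Profile) -> float:
    """In a real system, these weights would be fit against outcome labels.
    The numbers here are placeholders, not learned values."""
    return 0.5 * p.years_shipping + 0.3 * p.scope_growth + 0.2 * p.tenure_stability

candidates = [
    Profile(8.0, 0.9, 0.8, has_top_brand=False),  # diamond in the rough
    Profile(2.0, 0.2, 0.3, has_top_brand=True),   # looks good on paper
]

# The proxy filter rejects the stronger candidate; outcome scoring surfaces them.
scored = sorted(candidates, key=outcome_score, reverse=True)
print([proxy_screen(p) for p in candidates])  # [False, True]
print(scored[0].has_top_brand)                # False
```

The design choice worth noticing: the brand flag carries zero weight in the outcome score, so a candidate with no pedigree but strong delivery signals ranks first instead of being filtered out at the top of the funnel.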

In October 2025, Meta began piloting an AI-enabled coding interview that replaces one of the two coding rounds at the onsite stage. It’s 60 minutes in a specialized CoderPad environment with an AI assistant built in. It’s highly likely that this round will be rolled out for all back-end and ops-focused roles in 2026.
While Meta’s official prep materials will tell you that AI usage during this interview is optional and will have no bearing on the outcome, in practice, that’s not entirely true, and we believe that using AI properly will give you an edge. To wit, this post is a practical walkthrough of how AI fits into these interviews, using concrete examples of prompts, code, and AI outputs, and showing how to integrate them without sacrificing judgment.
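As a preview of the flavor of example this post walks through, here is a hypothetical sketch (the prompt and assistant behavior are invented, not Meta's actual CoderPad setup) of one sensible workflow: write the core logic yourself, delegate edge-case test generation to the assistant, and verify its suggestions by hand before trusting them.

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals. Written by hand: the core
    logic is where the interviewer evaluates your judgment, not the AI's."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# Prompt you might give the in-pad assistant (hypothetical):
#   "Suggest edge cases for merge_intervals. Don't rewrite the function."
# Plausible suggestions, each verified by hand before being accepted:
assert merge_intervals([]) == []                              # empty input
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]          # touching endpoints
assert merge_intervals([[5, 6], [1, 2]]) == [[1, 2], [5, 6]]  # unsorted input
assert merge_intervals([[1, 10], [2, 3]]) == [[1, 10]]        # nested interval
print("all edge cases pass")
```

The habit this models is the one interviewers reward: the AI widens your test coverage quickly, but you narrate why each suggested case matters and confirm the expected output yourself rather than pasting blindly.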

"Why don't you tell me about a time you received constructive feedback?"
Simple question. Staff-level candidate. Should be easy.
"I was leading development of a new service at Amazon. Tight deadlines, exciting technical challenges. My role included end-to-end delivery and then transition to the next project. I prioritized shipping the core functionality. Built it, tested it, launched it. The service worked technically. But during my next review cycle, my manager flagged it. The team struggled without proper docs. The handoff left gaps. I learned to treat documentation and handoff as first-class requirements, not afterthoughts. Now I add them as explicit tasks in the backlog from day one when planning projects."
Perfect CARL (or STAR) format. Clear context. Specific actions. Measurable results. Concrete learning.
Rejected on behavioral.
Why? Because at Senior+ levels, your story selection matters more than your story structure.

You’ve probably heard about the blind orchestra auditions described by Malcolm Gladwell in Blink. We did the same thing with eng hiring.
With our blind approach, over six years, we placed thousands of engineers at FAANG and FAANG-adjacent companies and top-tier startups.
46% (almost half!) of those engineers didn’t have either a top school or a top company on their resume. In a normal (not blind) hiring process, these candidates wouldn’t even have gotten an interview.

A year and a half ago, we predicted that advances in AI would force companies to abandon cookie-cutter LeetCode questions. Even so, we bet heavily that algorithmic interviews were here to stay, even if their content and format changed.
Now we're seeing the results. Despite clickbait headlines suggesting that Meta and other tech giants are ditching algorithmic interviews for AI-assisted ones, our survey of FAANG+ interviewers reveals a different reality: zero FAANG or FAANG-adjacent companies have moved away from algorithmic questions.
But what else is changing? Will we return to in-person interviews? Will questions get harder? How rampant is cheating, and what are companies doing about it? If candidates can now use AI in interviews, what will these new interview types look like? And how does all of this differ between FAANG & FAANG+ companies and startups?
Perhaps most importantly, have the advances in AI been a forcing function to change interviews (and interviewers) for the better?
Read on to find out!

I’m Shivam Anand, currently leading machine learning engineering (MLE) efforts at Meta, focused on integrity, recommendation, and search systems. Over the past decade, I’ve applied state-of-the-art ML to some of the toughest challenges in big tech—from scaling anti-abuse systems at Google Ads to rebuilding ML systems for Integrity enforcement at Facebook.
I’ve seen first-hand how the nature of ML work varies massively across team types and career paths. This guide is my attempt to map that space for others navigating (or considering) careers in ML—especially those targeting roles in big tech. I will cover different ML team types, the kinds of roles you’re likely to see on those teams, how interview processes vary for ML roles, and how to make the lateral move from a software engineering role to an MLE one.

Years ago, Steve Krug wrote a book about web design called Don’t Make Me Think. It’s a classic, and the main point is that good design should make everything painfully obvious to users without demanding anything of them.
Resumes are no different. Your resume shouldn’t make recruiters think. It should serve up the most important things about you on a platter, digestible in 30 seconds or less. We've said before that spending a lot of time on your resume is a fool's errand, but if you’re going to do something to it, let’s make sure that something is low-effort and high-return. Here's exactly what to do.

A lot of other platforms offer resume reviews or help with writing resumes for $$. We don't, despite a lot of our users asking for this feature. The reason I've refused to build it is that, simply put, resume writing is snake oil. Why? Because recruiters aren't reading resumes. If you don't have top brands, better wording won't help. If you do have top brands, the wording doesn't matter.
Interview prep and job hunting are chaos and pain. We can help. Really.