# Recursion Interview Questions & Tips

By Jai Pandya | Published: June 25, 2023

## What is Recursion?

Recursion is a strategy used in computer science where a function invokes itself to solve a problem. This self-referential nature of recursion helps to solve problems that can be broken down into simpler, similar problems. In other words, recursion is a strategy where the solution to a problem depends on solutions to smaller instances of the same problem.

Recursion is a potent tool when dealing with problems related to data structures, such as traversing trees or graphs, sorting arrays, or exploring permutations and combinations. Functional languages like Haskell and Erlang favor recursion for control flow since they lack the traditional looping constructs of imperative languages, and multi-paradigm languages like Scala often prefer recursion by convention.

### How Recursion Works

Let's understand recursion with an analogy. Suppose you're standing at the bottom of a staircase and want to reach the top. The staircase has many steps, and your task is to climb them all. A non-recursive way of thinking would be to count each step as you climb, one after the other, until you reach the top.

However, a recursive approach would be different. Instead of thinking about all the steps you need to climb, in a recursive way, you wouldn't consider each step separately. Instead, you would break down the problem. How do you reach the top? You climb one step, and then you are left with a staircase that is shorter by one step.

This is your recursive step: climbing to the top of a staircase is the same as climbing one step and then climbing to the top of a smaller staircase.

But we're missing an important part - what if there's only one step? Or what if there are no steps at all? This brings us to the concept of a base case. In recursion, a base case acts as a stopping signal, telling the function when to stop calling itself and start returning.

In our analogy, the base case is when there are no more steps to climb. If there's only one step, you climb it, and you're done. If there are no steps, you're already at the top!

```python
def climb_steps(n):
    # Base case: if there are no more steps, stop recursion
    if n == 0:
        print("You're at the top! All steps climbed.")
        return

    # Recursive step: climb one step
    print("Climb one step. Remaining steps:", n - 1)
    # Recursive call: continue climbing the remaining steps
    climb_steps(n - 1)
```

Let's see what the output will look like when we call `climb_steps(3)`:

```
Climb one step. Remaining steps: 2
Climb one step. Remaining steps: 1
Climb one step. Remaining steps: 0
You're at the top! All steps climbed.
```

This might seem like a mind-bender, but that's the nature of recursion! The process involves two key aspects: a recursive step, where the function calls itself to solve a smaller problem, and a base case, where the function knows to stop. Most looping operations can be expressed recursively, which is part of the reason why functional programming languages often favor recursion. These principles can be applied to a wide range of problems in computer science, which we'll explore further in the following sections.

### Call Stack

Before we move on, let's take a moment to understand how recursion works under the hood.

Think of each recursive call to `climb_steps` as sending a climber to ascend the staircase. When the function calls itself, it's like it's sending another climber to ascend a slightly smaller staircase. The original climber waits at their step until the climber they sent finishes the climb.

In terms of a call stack, each climber represents a function call placed on the stack. The call at the top of the stack is the current step being climbed, and the calls below it are the steps waiting to be completed. Each call waits for the calls above it (the steps yet to be climbed) to complete before it can finish.

So, when you call `climb_steps(n)`, up to `n + 1` calls (climbers) accumulate on the stack—one per step, plus the final call that hits the base case. As each call completes (each climber reaches the top of their staircase), its frame is popped off the stack. The process continues until the stack is empty—all steps are climbed, and all climbers have finished.

In essence, the call stack is crucial to managing the flow of execution in recursive calls, ensuring that each function call is addressed correctly and executed in the correct order, no matter how many recursive calls are made. It is also important to note that the call stack is finite, and every recursive call takes up space on the stack. If you have too many recursive calls, you will eventually run out of space on the stack, resulting in a stack overflow error.
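To make the stack-overflow risk concrete, here's a minimal Python sketch. Python enforces a recursion-depth cap (commonly 1000 frames) and raises a `RecursionError` when a program exceeds it, rather than letting the process crash:

```python
import sys

def countdown(n):
    """Recurse n levels deep; each call occupies one stack frame."""
    if n == 0:
        return "done"
    return countdown(n - 1)

# Python caps recursion depth to protect the call stack.
limit = sys.getrecursionlimit()
print("Recursion limit:", limit)

try:
    countdown(limit + 100)  # Deeper than the stack allows
except RecursionError as err:
    print("Stack overflow:", err)
```

Shallow calls like `countdown(10)` complete normally; it's only when the depth approaches the interpreter's limit that the error fires.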

### Tail Call Optimization

While we did tell you that recursion takes space on the call stack, some modern compilers are smart and can use a cheeky trick called tail call optimization (TCO) to reduce the space used by recursion. In essence, if the very last act of a function is to call itself, the compiler can skip adding a new climber and simply let the current one climb further. In other words, the compiler can reuse the current stack frame for the next recursive call instead of adding a new one.

Not all programming languages support this optimization, but when they do, it's like having a single, tireless climber who efficiently completes the climb without causing a queue on the staircase. Scheme, Erlang, and Scala (for self-recursive tail calls) are among the languages that support TCO, while Python and Java do not. JavaScript's specification also mandates proper tail calls starting with ES6, but most engines have never implemented them.

So what's the point? Recursive implementations are often more concise and readable than their iterative alternatives, but they may be undesirable in production workloads because of the potential for stack overflows. Tail call optimization unlocks the benefits while avoiding that downside.
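To make the distinction concrete, here's a sketch of the same function in tail-recursive and non-tail-recursive form. Python is used for consistency with the rest of this guide, but remember that Python does not actually perform TCO, so this only illustrates the *shape* a TCO-capable compiler looks for. The accumulator parameter `acc` is the standard trick for making the recursive call the function's final act:

```python
def factorial(n, acc=1):
    """Tail-recursive form: the recursive call is the last thing the
    function does, so a TCO-capable compiler could reuse the frame."""
    if n <= 1:
        return acc
    return factorial(n - 1, acc * n)  # Nothing left to do after this call

def factorial_plain(n):
    """Not tail-recursive: the multiplication happens *after* the call
    returns, so the current frame must be kept alive."""
    if n <= 1:
        return 1
    return n * factorial_plain(n - 1)
```

Both compute the same result; the difference is only in whether any work remains pending when the recursive call is made.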

## When to Use Recursion in Interviews

The beauty of recursion lies in its ability to express complex problems in a few lines of code. While iterating with loops can achieve the same results, the ability to decompose a problem into smaller instances of itself makes recursion a favorite technique in problem-solving. In interviews, you may use recursion when the problem fits into one of the following patterns.

### Divide and Conquer

In the divide and conquer approach, we break down a problem into smaller subproblems, solve each subproblem independently, and combine the solutions to answer the main problem. This approach works best when the subproblems are independent, meaning the solution to one does not depend on the solution to another.

For example, problems like Merge Sort and Quick Sort are quintessential divide-and-conquer problems where you continually divide the array into smaller pieces until you reach a trivially solvable size. We have written about Merge and Quick Sort in detail in our guide on sorting algorithms. Binary Search is another example of a divide-and-conquer problem where you divide the array into two halves and search for the target element in one of the halves.
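As a sketch, recursive Binary Search looks like this; the `lo` and `hi` parameters track the portion of the sorted array still under consideration:

```python
def binary_search(arr, target, lo=0, hi=None):
    """Return the index of target in sorted arr, or -1 if absent."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:               # Base case: empty search range
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:     # Recurse into the half that can hold target
        return binary_search(arr, target, mid + 1, hi)
    return binary_search(arr, target, lo, mid - 1)
```

Each call discards half of the remaining range, so the recursion depth is only O(log n).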

### Tree and Graph Traversal

Recursion naturally models the hierarchical structure of trees and graphs. It allows you to explore all possibilities from a given node by moving deeper into the structure until a base case is met, in other words, using Depth-First Search (DFS). In the context of a binary tree, the process often involves visiting the root, the left subtree, and finally, the right subtree. Depending on how you visit the root, you can perform pre-order, in-order, or post-order traversal in a binary tree.

Let's look at an example of pre-order traversal in a binary tree. In this case, we visit the root first, the left subtree, and finally the right subtree. The following code snippet shows how we can perform pre-order traversal using recursion.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def dfs(node):
    if node is None:
        return

    print(node.value)  # Visit the node
    dfs(node.left)     # Visit left subtree
    dfs(node.right)    # Visit right subtree
```

In this `dfs` function, we first visit the node (in this case, we just print its value). Then, we recursively visit the left and right subtrees. If a node is `None`, meaning we've reached a leaf node's child, we return and continue with the other nodes.

Another example is finding the leaves of a binary tree. You can see how we can use recursion to find the leaves of a binary tree in this detailed guide.

### Backtracking

Problems requiring exploring all possible configurations to find a solution can be tackled using recursion. Backtracking often involves a sequence of choices, where each choice leads you down a path, and if that path does not lead to a solution, you backtrack and explore another path. It is like navigating a maze; you start at a point and take a step in a chosen direction. If you reach a dead end, you backtrack to your previous position and try a different path.

However, it's essential to note that while backtracking can solve a wide variety of problems, it can also lead to performance issues, particularly in cases where the number of possible configurations is vast. Recursion-heavy backtracking can result in stack overflows (out-of-memory errors) or time-out issues due to the potentially massive number of recursive calls. In such cases, employing techniques such as caching or memoization can significantly improve performance by avoiding repetitive computation.

Backtracking-related problems appear quite frequently in technical interviews. Some famous examples include N-Queens, Sudoku Solver, Word Search, Permutations and Subsets.
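As an illustrative sketch, here is the classic choose-explore-unchoose pattern applied to generating all permutations of a list:

```python
def permutations(nums):
    """Generate every permutation of nums via backtracking."""
    result = []
    path = []
    used = [False] * len(nums)

    def backtrack():
        if len(path) == len(nums):  # Base case: a full permutation built
            result.append(path[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            used[i] = True          # Choose
            path.append(nums[i])
            backtrack()             # Explore
            path.pop()              # Un-choose (backtrack)
            used[i] = False

    backtrack()
    return result
```

The `path.pop()` / `used[i] = False` pair is the "backtrack" step: it restores the state so the next choice starts from a clean slate.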

### Dynamic Programming

Dynamic programming (DP) is a strategic approach employed for efficient problem-solving. Think of it as a well-organized toolkit that solves complex problems with ease. DP decomposes the main problem into simpler, smaller subproblems, which are solved only once. Their solutions are stored for future use. This unique approach is beneficial, especially when we encounter problems having overlapping subproblems and optimal substructure.

Overlapping subproblems imply that the same smaller problems reappear multiple times during the computation. On the other hand, optimal substructure suggests that we can construct an optimal solution to the overall problem from the optimal solutions to its subproblems. These two conditions make a problem well-suited for a DP solution.

Now, where does recursion fit into all this? Fundamentally, recursion helps us to break down the problem into manageable subproblems. However, unlike conventional recursion, where some calculations can be performed repeatedly, dynamic programming takes it a notch higher. DP stores the results of the subproblems by using a technique known as [memoization](https://interviewing.io/memoization-interview-questions), thus avoiding re-computation and increasing efficiency.

#### Recursion vs Dynamic Programming

Let's dive a little deeper into the distinction between dynamic programming and other recursion-based problems. When using dynamic programming with recursion, we introduce a memory function or a lookup table, known as memoization. This idea of "remembering" the results of solved subproblems separates DP from other recursive problems. By avoiding re-computation, we drastically reduce the time complexity, making DP a formidable tool to tackle complex problems efficiently.

Consider the famous Fibonacci sequence, where each number is the sum of the preceding two. The naive recursive function to calculate Fibonacci numbers might lead to many redundant calculations. For instance, to calculate the 5th Fibonacci number, you would need to calculate the 3rd Fibonacci number twice.

```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
```

But with dynamic programming and memoization, we store each result as we compute it, sidestepping the need to repeat our work:

```python
def fibonacci(n, memo={}):
    # Note: the mutable default dict persists across calls — intentional
    # here as a cache, but a well-known Python pitfall in other contexts.
    if n <= 1:
        return n
    elif n not in memo:
        memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]
```

The impact is evident. By remembering the results of previously computed Fibonacci numbers, we avoid the inefficiency of redundant computation, and our program runs faster and smoother. This is the essence and beauty of combining recursion with dynamic programming.

#### Top-down vs Bottom-up approaches

The recursive DP approach we have discussed here is also known as top-down dynamic programming. This method starts with the original problem and breaks it down into subproblems, storing the results of each along the way to avoid redundant computation. However, there's also a bottom-up approach, or tabulation, which solves all the subproblems first and uses their results to build up to the solution of the overall problem.

While both methods leverage the principles of DP, they differ in space usage. Top-down methods can sometimes use more space due to the recursive call stack, especially for deep recursion trees. In contrast, bottom-up methods typically use iterative structures and are more space-efficient. Therefore, while choosing an approach, it's essential to consider the space requirements and tailor your solution accordingly.
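For contrast, here is a sketch of the bottom-up (tabulation) version of Fibonacci: it fills a table from the smallest subproblems upward using a plain loop, so no recursive call stack is involved:

```python
def fibonacci_bottom_up(n):
    """Tabulation: solve subproblems 0..n iteratively."""
    if n <= 1:
        return n
    table = [0] * (n + 1)   # table[i] will hold the i-th Fibonacci number
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Since only the last two entries are ever read, the table could be reduced to two variables for O(1) space, which is the usual final optimization.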

In an interview, it's important to recognize when a problem fits into these patterns, as it strongly indicates that recursion could be a practical approach to finding the solution. As you gain more experience with recursion, you'll find it easier to spot these patterns and implement recursive solutions quickly and efficiently.

## Common Mistakes in Interviews Featuring Recursion

### Misunderstanding Recursion Flow

Understanding recursion flow is fundamental to writing and debugging recursive algorithms effectively. A crucial part of recursion is how it involves a function calling itself with a modified argument, progressing toward the base case. Many candidates can reason through the recursive flow but struggle with passing data between recursive calls, especially when they have to modify the data at each level or aggregate information from recursive calls.

Consider a simple task of computing the depth of a binary tree, where depth is the number of nodes along the longest path from the root node down to the farthest leaf node. A common mistake is to neglect to pass information from child nodes back up to parent nodes:

```python
def depth(node):
    if node is None:
        return 0
    depth(node.left)
    depth(node.right)
    return 1
```

In this example, the function correctly makes recursive calls to `node.left` and `node.right`, but it doesn't do anything with the results of these calls.

To avoid this mistake, always consider what information needs to be passed to each recursive call and what information needs to be returned from each recursive call. Make sure to aggregate or utilize the information returned from recursive calls properly. For the tree depth problem, the correct approach would be to use the depths of the left and right subtrees to compute the depth of the current node:

```python
def depth(node):
    if node is None:
        return 0
    left_depth = depth(node.left)
    right_depth = depth(node.right)
    return max(left_depth, right_depth) + 1
```

In this corrected version, the function properly uses the recursive calls' results to calculate the tree's depth.

### Not Setting Base/Stop Conditions Correctly

Base cases form the foundation of any recursive algorithm. They determine the conditions under which the recursion should terminate, preventing the program from entering an infinite loop. A common oversight, however, is the incorrect setting of these base cases, which might result in skipping certain conditions, leading to incomplete or incorrect results. Let's look at some examples.

#### Forgetting to Set a Base Condition

A common mistake is diving into a problem's recursive logic without first setting a base condition. Without it, the function can become an endless loop, consuming more and more memory until the system runs out of resources.

Consider the problem of finding the factorial of a number. If we forget to set a base condition:

```python
def factorial(n):
    # No base condition
    return n * factorial(n - 1)
```

Without a base condition, the function will keep calling itself until the system runs out of stack space (in Python, this surfaces as a `RecursionError`). To avoid this, always begin a recursive problem by establishing the base condition.
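The fix is a one-line base condition that answers the smallest input directly:

```python
def factorial(n):
    if n <= 1:          # Base case: 0! and 1! are both 1
        return 1
    return n * factorial(n - 1)
```

Using `n <= 1` rather than `n == 0` also stops the recursion for any input of 1 or below, which is a cheap extra safeguard.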

#### Missing Multiple Base Cases

In a problem like calculating the Fibonacci sequence, you might account for `n == 0` base condition but forget to account for `n == 1`:

```python
def fibonacci(n):
    if n == 0:
        return 0
    # Missing base case for n == 1
    return fibonacci(n - 1) + fibonacci(n - 2)
```

To stay away from such mistakes, think about all the possible base cases and make sure to account for each of them in your code.
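The corrected version handles both base cases explicitly:

```python
def fibonacci(n):
    if n == 0:
        return 0
    if n == 1:          # The base case the buggy version missed
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Without the `n == 1` case, `fibonacci(1)` would recurse into `fibonacci(0) + fibonacci(-1)`, and the negative branch would never terminate.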

In a problem where you're tasked with reversing a linked list using recursion, a common mistake is to set a base case that is too narrow. The recursion must stop not only on an empty list but also at the last node, so the standard base case is `if not head or not head.next: return head`. If you check only for an empty list, the recursion runs one call too far:

```python
def reverseList(head):
    if not head:                # Too narrow: misses the last-node case
        return None
    p = reverseList(head.next)  # On the last node, head.next is None, so p is None...
    head.next.next = head       # ...and this line crashes: head.next is None
    head.next = None
    return p
```

In this code, the call made on the last node recurses into `None`, and the pointer rewiring then fails with an attribute error. With the correct base case, `if not head or not head.next`, the last node is returned directly as the new head of the reversed list.

#### Incorrect Order of Base Cases

The order in which base cases are presented in a recursive function matters significantly. If one base case is checked before another that should take precedence, the earlier check can mask the later one, and the function returns incorrect results.

Let's illustrate this point with an example from a classic algorithmic problem: the subset sum problem. This problem requires determining whether a subset of a given set of numbers exists that sums up to a specific target value `k`.

```python
def subset_sum_incorrect(nums, target, i=0):
    # Incorrect order: exhaustion is checked before success
    if i == len(nums) or target < 0:
        return False
    if target == 0:
        return True
    return subset_sum_incorrect(nums, target - nums[i], i + 1) or subset_sum_incorrect(nums, target, i + 1)

def subset_sum_correct(nums, target, i=0):
    # Correct order: success (target == 0) is checked first
    if target == 0:
        return True
    if i == len(nums) or target < 0:
        return False
    return subset_sum_correct(nums, target - nums[i], i + 1) or subset_sum_correct(nums, target, i + 1)
```

The incorrect ordering misses any solution that uses every element: for `subset_sum_incorrect([5], 5)`, the call where `i == len(nums)` and `target == 0` returns `False` before the success check ever runs, while `subset_sum_correct([5], 5)` correctly returns `True`.

### Overlooking Space Complexity

One of the often overlooked aspects of recursion is its space complexity. Each recursive call adds a new layer to the system's call stack, which can lead to high space complexity for deeply recursive algorithms. This space complexity often manifests as a Stack Overflow error, especially in languages that don't optimize for tail recursion. Candidates often forget to account for this in their complexity analysis.

For instance, using recursion for Depth-First Search in graph traversal problems can lead to extensive use of the call stack, resulting in high space complexity. An iterative DFS with an explicit stack, or a queue-based Breadth-First Search (BFS), might be more suitable in some cases.
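One way to sidestep the call-stack limit is to make the stack explicit: the same pre-order DFS shown earlier can be rewritten iteratively, with a Python list acting as the stack so the bookkeeping lives on the heap rather than in call frames. This is a sketch; the `Node` class matches the one used earlier:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def dfs_iterative(root):
    """Pre-order traversal using an explicit stack instead of the call stack."""
    if root is None:
        return []
    visited = []
    stack = [root]
    while stack:
        node = stack.pop()
        visited.append(node.value)
        # Push right first so the left child is popped (visited) first
        if node.right:
            stack.append(node.right)
        if node.left:
            stack.append(node.left)
    return visited
```

The traversal order is identical to the recursive version, but the tree's depth is now limited only by available memory, not by the interpreter's recursion limit.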

To understand this better, let's return to the example of calculating the nth Fibonacci number.

```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
```

This function, while correct, has an exponential time complexity because of the repeated computations, and it also has a linear space complexity due to the maximum depth of the recursion, which is `n`. However, candidates often miss out on accounting for the space complexity and state that the space complexity is constant, which is incorrect.

To avoid this mistake, it's essential to consider the depth of recursion while analyzing space complexity. The recursion depth equals the maximum number of nested calls, and each call adds a new layer to the system's stack, contributing to the space complexity.

An iterative approach or memoization can help optimize time and space complexity in the Fibonacci example. Here's the Fibonacci function implemented iteratively, which has a constant space complexity:

```python
def fibonacci(n):
    current, nxt = 0, 1   # nxt avoids shadowing the built-in next()
    for _ in range(n):
        current, nxt = nxt, current + nxt
    return current
```

There are no recursive calls in this iterative version of the Fibonacci function, so the space complexity is constant, `O(1)`, which is more space-efficient.

### Failing to Apply Memoization

Memoization is crucial for recursive algorithms where the same subproblems are solved multiple times. Unfortunately, during interviews, candidates often forget to apply it, resulting in unnecessary computation and increased time complexity. You should always watch for cases where the same computation is performed multiple times, as these are opportunities for memoization to improve both performance and efficiency.
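In Python, the standard library makes memoization nearly free: `functools.lru_cache` wraps a function and caches its results by argument, as in this sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # Cache every n's result after its first computation
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# With the cache, even fibonacci(200) completes almost instantly;
# the uncached version would need an astronomically large number of calls.
```

Mentioning this decorator in an interview is an easy way to show you know the idiomatic tool, though interviewers may still ask you to implement the memo table by hand.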

## What to Say in Interviews to Show Mastery Over Recursion

Demonstrating mastery of recursion in interviews requires more than just solving the problem. You must also articulate your thought process, explain your problem-solving approach, and showcase your understanding of recursion. Let's look at some points to consider while dealing with problems related to recursion in a coding interview.

### Understanding Base Cases and the Recursive Step

The cornerstone of every recursive algorithm is its base case(s) and the recursive step. These two elements work together to solve a larger problem by systematically breaking it down into more manageable parts.

#### Discussing Base Cases

A base case is the simplest instance of a problem that can be answered directly without any further recursive calls. When solving recursive problems, always start by identifying the base case. This could sound like, "The base case for this problem is when we have an empty array, at which point we can directly return 0 as there are no elements to sum."

Once the base case is established, you can delve into the recursive step, the part of the function that breaks the problem into smaller subproblems. Express your thought process out loud: "Now, let's think about the recursive step. We want to divide our problem into smaller parts that look similar to the original problem. We could do this by taking one element from the array and recursively calling our function on the rest of the array."

By clearly explaining the base case(s) and the recursive step(s), you demonstrate a solid understanding of recursion and communicate your problem-solving approach effectively to your interviewer. This will set a strong foundation for the rest of your solution.

### Optimizing Recursive Solutions

While a brute-force recursive solution might be a good starting point, it's equally important to demonstrate how to optimize it. This often involves identifying and eliminating overlapping subproblems, a common issue in naive recursive solutions.

#### Starting with a Simple Recursive Solution

Begin by outlining a straightforward recursive solution. Articulate your approach like this: "The immediate approach that comes to mind is a simple recursion where we break down the problem into smaller subproblems. However, I notice that this approach might lead to repeated calculations, thereby increasing the time complexity." While discussing the basic recursive solution, this might also be a good time to state the input and output of the function clearly.

#### Identifying Optimizations

Identify potential inefficiencies in your solution. A common issue with recursive solutions is that they often end up solving the same subproblems multiple times. Share your observations with the interviewer: "I see that we're solving the same subproblems multiple times, which is inefficient."

#### Applying Optimizations

Next, suggest improvements to the solution. For example, if your solution involves solving overlapping subproblems, suggest using memoization or dynamic programming to store and reuse solutions to subproblems. This could be phrased as, "To make this solution more efficient, we could use memoization to store the results of subproblems. This way, we avoid redundant calculations, and if a subproblem needs to be solved again, we can just fetch its result from our memoization table."

By thinking critically about your initial recursive solution and applying optimization strategies like memoization or dynamic programming, you can show your interviewer that you're capable of developing efficient code, a critical skill for any software engineer.

### Handling Deep Recursion

Showing your awareness of recursion limits and how different languages handle them can demonstrate your depth of knowledge. Say something like, "Given the problem's constraints, the depth of the recursion might become an issue because it could lead to a stack overflow." You might want to suggest using an iterative approach involving a stack.

To show mastery over recursion, you can also discuss the idea of tail call optimization (explained above) and whether your language of choice supports it.

### Asking the Right Questions for Trees and Graphs

When it comes to implementing recursion in graph or tree structures, asking the right questions is crucial to fully grasp the problem and create an efficient solution.

#### Addressing the Nature of Nodes

Start by understanding the nature of the graph or tree. You could ask, "Are the edges directional or bidirectional? This affects whether we treat the graph as directed or undirected, which in turn shapes our recursive approach."

#### Considering Possible Edge Cases

Also, anticipate edge cases, which can drastically change how your recursion unfolds. Ask questions like, "Can there be loops in the graph? Is it possible to have a null or empty graph/tree? These scenarios need to be considered in our base case to avoid infinite recursion and null pointer exceptions."

By asking these critical questions upfront, you demonstrate your analytical thinking and attention to detail and ensure that your recursive solution is robust and well-rounded, capable of handling a wide range of scenarios.


Jai Pandya

Jai is a software engineer and a technical leader. In his professional career spanning over a decade, he has worked at several startups and companies such as SlideShare and LinkedIn. He is also a founder of a SaaS product used by over 10K companies across the globe. He loves teaching and mentoring software engineers. His mentees have landed jobs at companies such as Google, Facebook, and LinkedIn.