The Copy List With Random Pointers problem involves creating a deep copy of a linked list whose nodes point to random nodes within the list. Given this unique feature, the input list is challenging to traverse while building a new list in parallel. This problem demands careful use of pointers and can be solved with both a two-pass and a single-pass approach.
Given a linked list with nodes that have an additional pointer referring to another node in the list, return a deep copy of the list. Node definition:
class Node:
    def __init__(self, value: int, next: 'Node' = None, random: 'Node' = None):
        self.value = value
        self.next = next
        self.random = random
Input: head = [[5, null], [13, 0], [7, 2], [13, 2]]
Output: new_head = [[5, null], [13, 0], [7, 2], [13, 2]]
Note: Purely for the purposes of describing the list in text rather than a diagram, each node is represented by a tuple [value, index_of_random_node], where value is an integer and index_of_random_node is the index of the node referred to by the random pointer. Using the index allows us to distinguish between the two nodes with a value of 13, because the constraints do not stipulate that values are unique.
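To make this tuple representation concrete, here is a small sketch of a helper that builds a list from [value, index_of_random_node] pairs. The name build_list is illustrative and not part of the problem statement; null is represented as None.

```python
from typing import Optional

class Node:
    def __init__(self, value: int, next: 'Node' = None, random: 'Node' = None):
        self.value = value
        self.next = next
        self.random = random

def build_list(pairs: list) -> Optional[Node]:
    """Build a linked list from [value, index_of_random_node] pairs."""
    nodes = [Node(value) for value, _ in pairs]
    for i, (_, rand_idx) in enumerate(pairs):
        if i + 1 < len(nodes):
            nodes[i].next = nodes[i + 1]      # wire the next pointer
        if rand_idx is not None:
            nodes[i].random = nodes[rand_idx]  # wire the random pointer by index
    return nodes[0] if nodes else None

head = build_list([[5, None], [13, 0], [7, 2], [13, 2]])
# head.next.random points back to head (index 0),
# and the third node's random points to itself (index 2).
```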
Constraints
The number of nodes in the list is in the range [0, 10,000]
-10,000 <= Node.value <= 10,000
Without the random pointers, copying the linked list would be fairly straightforward. New nodes can be created and linked via the next pointers as the list is traversed. To traverse the list, a temporary pointer is used in a loop that advances to the next node until it reaches null.
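As a baseline, that simpler copy (next pointers only) might look like the following sketch. The function name copy_simple is illustrative, not part of the problem statement.

```python
from typing import Optional

class Node:
    def __init__(self, value: int, next: 'Node' = None, random: 'Node' = None):
        self.value = value
        self.next = next
        self.random = random

def copy_simple(head: Optional[Node]) -> Optional[Node]:
    """Copy a list that has only next pointers in a single traversal."""
    dummy = Node(0)   # placeholder that simplifies appending to the copy
    tail = dummy
    p = head
    while p:
        tail.next = Node(p.value)  # create the copy of the current node
        tail = tail.next
        p = p.next
    return dummy.next
```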
When random pointers are added and we traverse the list visiting both next and random pointers simultaneously, the destination node for a given random pointer might not exist yet in the copied list. We can work around this limitation by creating all the nodes required for our deep copy first and then linking them later.
In the first traversal, we create a new node for each original node using its value and save a mapping from the original node to the new node in a hash table. In the second pass, we link the nodes of the new list by connecting the next and random pointers, finding the destination nodes in the hash table as we go.
from typing import Optional

def deep_copy(head: Optional[Node]) -> Optional[Node]:
    if not head:
        return None

    hash_table: dict[Node, Node] = {}

    # 1st pass: copy the nodes and store the mapping
    p: Node = head
    while p:
        hash_table[p] = Node(p.value)
        p = p.next

    # 2nd pass: connect the newly created nodes
    p = head
    while p:
        p_new = hash_table[p]
        if p.next:
            p_new.next = hash_table[p.next]
        if p.random:
            p_new.random = hash_table[p.random]
        p = p.next

    return hash_table[head]
Time complexity: O(n). Traversing the list takes linear time.
Space complexity: O(n). Nodes are mapped from the original to the new list in a dictionary, which also scales linearly with list size.
In the first approach, we separated the creation and linking of the nodes because the random pointer may refer to a node that hasn't yet been created. Instead of waiting until all nodes are created for the replica list, we can create any missing nodes proactively and avoid a second traversal.
To achieve this, we initialize the hash table with the head node and its copy. Using a temporary pointer to iterate through the original list, we first check whether the next and random nodes are already mapped to their copies in the hash table. If they aren't, we create them immediately and add them to the hash table. Then the copied node can be linked to both its next and random nodes without waiting.
from typing import Optional

def deep_copy(head: Optional[Node]) -> Optional[Node]:
    if not head:
        return None

    hash_table: dict[Node, Node] = {head: Node(head.value)}

    p: Node = head
    while p:
        if p.next:
            if p.next not in hash_table:
                hash_table[p.next] = Node(p.next.value)
            hash_table[p].next = hash_table[p.next]
        if p.random:
            if p.random not in hash_table:
                hash_table[p.random] = Node(p.random.value)
            hash_table[p].random = hash_table[p.random]
        p = p.next

    return hash_table[head]
Time complexity: O(n). The list is traversed once.
Space complexity: O(n). The hash table contains a mapping from each node to its copy.
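As a quick sanity check, the sketch below builds the example list [[5, null], [13, 0], [7, 2], [13, 2]] by hand with the single-pass deep_copy and verifies that the copy consists of new node objects while preserving the random structure.

```python
from typing import Optional

class Node:
    def __init__(self, value: int, next: 'Node' = None, random: 'Node' = None):
        self.value = value
        self.next = next
        self.random = random

def deep_copy(head: Optional[Node]) -> Optional[Node]:
    if not head:
        return None
    hash_table: dict[Node, Node] = {head: Node(head.value)}
    p = head
    while p:
        if p.next:
            if p.next not in hash_table:
                hash_table[p.next] = Node(p.next.value)
            hash_table[p].next = hash_table[p.next]
        if p.random:
            if p.random not in hash_table:
                hash_table[p.random] = Node(p.random.value)
            hash_table[p].random = hash_table[p.random]
        p = p.next
    return hash_table[head]

# Build the example list [[5, null], [13, 0], [7, 2], [13, 2]] by hand.
n0, n1, n2, n3 = Node(5), Node(13), Node(7), Node(13)
n0.next, n1.next, n2.next = n1, n2, n3
n1.random, n2.random, n3.random = n0, n2, n2

copy = deep_copy(n0)
# The copy has the same values and random structure, but every node is a
# distinct object from its original counterpart.
```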