You don't need to derive time complexity from scratch in an interview. You need to know it — instantly, confidently, without hesitation. When you say "this is O(n log n)" the interviewer nods and moves on. When you fumble, they start worrying.
This is the reference sheet. Bookmark it.
Data Structure Operations
Array
| Operation | Average | Worst |
| --- | --- | --- |
| Access by index | O(1) | O(1) |
| Search | O(n) | O(n) |
| Insert (end) | O(1)* | O(n) |
| Insert (middle) | O(n) | O(n) |
| Delete (end) | O(1) | O(1) |
| Delete (middle) | O(n) | O(n) |
| Space | O(n) | O(n) |
*Amortized O(1) for dynamic arrays — occasionally O(n) when the array resizes, but averaged over all operations it's O(1).
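To see why the amortized cost is O(1), here's a minimal sketch (names are illustrative, not a real library) of a doubling dynamic array that counts how many elements get copied during resizes:

```python
class DynamicArray:
    """Minimal dynamic-array sketch: doubles capacity when full."""
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity
        self.copies = 0  # total elements moved during resizes

    def append(self, value):
        if self._size == self._capacity:
            self._resize(2 * self._capacity)
        self._data[self._size] = value
        self._size += 1

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self._size):
            new_data[i] = self._data[i]
            self.copies += 1
        self._data = new_data
        self._capacity = new_capacity

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
# Copies total 1 + 2 + 4 + ... + 512 = 1023 < 2n,
# so the cost per append averages out to O(1).
```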
Linked List
| Operation | Average | Worst |
| --- | --- | --- |
| Access by index | O(n) | O(n) |
| Search | O(n) | O(n) |
| Insert (at head) | O(1) | O(1) |
| Insert (at position) | O(n) | O(n) |
| Delete (at head) | O(1) | O(1) |
| Delete (at position) | O(n) | O(n) |
| Space | O(n) | O(n) |
The trade-off: O(1) insertion/deletion at known positions, but no random access. You give up indexing for flexible insertion.
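The trade-off is easy to see in code. A bare-bones singly linked list sketch:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_at_head(head, value):
    """O(1): no shifting, just relink the head pointer."""
    return Node(value, head)

def search(head, value):
    """O(n): must walk the chain; no random access."""
    while head is not None:
        if head.value == value:
            return True
        head = head.next
    return False

head = None
for v in [3, 2, 1]:
    head = insert_at_head(head, v)  # list is now 1 -> 2 -> 3
```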
Hash Table (Dictionary/HashMap)
| Operation | Average | Worst |
| --- | --- | --- |
| Search | O(1) | O(n) |
| Insert | O(1) | O(n) |
| Delete | O(1) | O(n) |
| Space | O(n) | O(n) |
The worst case happens with pathological hash collisions — every key maps to the same bucket. In practice, with a decent hash function, everything is O(1). This is why hash maps are the most-used data structure in interviews.
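You can provoke the worst case deliberately. This is a CPython-specific demonstration with a hypothetical key class whose hash always collides:

```python
class CollidingKey:
    """Hypothetical key whose hash always collides (for demonstration)."""
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 0                      # every key lands in the same bucket
    def __eq__(self, other):
        return self.v == other.v

# With a sane hash, n inserts are O(n) total. With this key, each
# insert probes all previous entries, degrading toward O(n^2) total.
d = {CollidingKey(i): i for i in range(100)}
```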
Binary Search Tree (BST)
| Operation | Average | Worst |
| --- | --- | --- |
| Search | O(log n) | O(n) |
| Insert | O(log n) | O(n) |
| Delete | O(log n) | O(n) |
| Space | O(n) | O(n) |
The worst case is a degenerate tree — basically a linked list. That's why balanced BSTs (AVL, Red-Black) guarantee O(log n). In interviews, assume balanced unless stated otherwise.
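A minimal unbalanced-BST sketch makes the degenerate case concrete: insert sorted input and every node goes right, giving a depth of n:

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """O(log n) average, O(n) worst (no rebalancing here)."""
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def search(root, value):
    """Follow one path from the root: cost is the tree's height."""
    while root is not None:
        if value == root.value:
            return True
        root = root.left if value < root.value else root.right
    return False

# Sorted input builds the degenerate (linked-list) shape:
root = None
for v in [1, 2, 3, 4, 5]:
    root = insert(root, v)  # every node becomes a right child
```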
Heap (Priority Queue)
| Operation | Time |
| --- | --- |
| Insert | O(log n) |
| Extract min/max | O(log n) |
| Peek min/max | O(1) |
| Build heap | O(n) |
| Space | O(n) |
Here's the thing people get wrong: building a heap from an array is O(n), not O(n log n). The math behind this (sift-down from the bottom) is a common interview discussion point.
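In Python these operations come from the standard-library heapq module, which implements a min-heap on a plain list:

```python
import heapq

data = [9, 4, 7, 1, 3, 8]
heapq.heapify(data)            # build heap: O(n), sift-down from the bottom
smallest = data[0]             # peek min: O(1)

heapq.heappush(data, 0)        # insert: O(log n)
popped = heapq.heappop(data)   # extract min: O(log n)
```

For a max-heap, the usual trick is to push negated values and negate again on the way out.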
Trie (Prefix Tree)
| Operation | Time |
| --- | --- |
| Insert | O(m) |
| Search | O(m) |
| Prefix search | O(m) |
| Space | O(n * m) |
Where m is the length of the key and n is the number of keys. Tries trade space for speed on string operations: autocomplete, spell checking, IP routing.
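A compact trie sketch where each node is a dict of children, so insert, exact search, and prefix search all walk at most m characters:

```python
class Trie:
    def __init__(self):
        self.children = {}   # char -> Trie
        self.is_word = False

    def insert(self, word):
        """O(m): one node per character."""
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def search(self, word):
        """O(m): exact match only."""
        node = self._walk(word)
        return node is not None and node.is_word

    def starts_with(self, prefix):
        """O(m): any word with this prefix?"""
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node
```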
Stack and Queue
| Operation | Time |
| --- | --- |
| Push/Enqueue | O(1) |
| Pop/Dequeue | O(1) |
| Peek | O(1) |
| Space | O(n) |
Both are O(1) for everything. The difference is order: LIFO vs FIFO.
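In Python, a list works as a stack, and collections.deque works as a queue:

```python
from collections import deque

stack = []
stack.append(1)          # push: O(1) amortized
stack.append(2)
top = stack.pop()        # pop: O(1), LIFO -> returns 2

queue = deque()
queue.append(1)          # enqueue: O(1)
queue.append(2)
front = queue.popleft()  # dequeue: O(1), FIFO -> returns 1
```

Note that a plain list is a poor queue: list.pop(0) shifts every remaining element, which is O(n).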
Sorting Algorithms
| Algorithm | Best | Average | Worst | Space | Stable? |
| --- | --- | --- | --- | --- | --- |
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) | Yes |
| Radix Sort | O(n * d) | O(n * d) | O(n * d) | O(n + k) | Yes |
Where k is the range of input values and d is the number of digits.
The ones that matter for interviews: Merge Sort (stable, guaranteed O(n log n), but needs O(n) space), Quick Sort (in-place, fastest in practice, but O(n²) worst case with bad pivots), and knowing that Python's sorted() uses Timsort, a hybrid of Merge Sort and Insertion Sort.
```python
# Quick Sort — the one you should be able to write from scratch
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```
This isn't the in-place version, but it shows the logic clearly. Partition around a pivot, recurse on each side.
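For completeness, a sketch of the in-place version using Lomuto partition (one common scheme; Hoare partition is another):

```python
def quicksort_inplace(arr, lo=0, hi=None):
    """In-place Quick Sort; O(log n) stack space on average."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = _partition(arr, lo, hi)
        quicksort_inplace(arr, lo, p - 1)
        quicksort_inplace(arr, p + 1, hi)
    return arr

def _partition(arr, lo, hi):
    """Lomuto partition: last element as pivot."""
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # pivot into its final position
    return i
```

Picking the last element as pivot is what makes already-sorted input the O(n²) worst case; randomized pivot selection avoids that in expectation.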
Graph Algorithms
| Algorithm | Time | Space | Use Case |
| --- | --- | --- | --- |
| BFS | O(V + E) | O(V) | Shortest path (unweighted) |
| DFS | O(V + E) | O(V) | Exhaustive search, cycle detection |
| Dijkstra | O((V + E) log V) | O(V) | Shortest path (weighted, non-negative) |
| Bellman-Ford | O(V * E) | O(V) | Shortest path (negative weights) |
| Floyd-Warshall | O(V³) | O(V²) | All-pairs shortest path |
| Topological Sort | O(V + E) | O(V) | Dependency ordering (DAGs) |
| Kruskal's MST | O(E log E) | O(V) | Minimum spanning tree |
| Prim's MST | O((V + E) log V) | O(V) | Minimum spanning tree |
Where V is the number of vertices and E the number of edges.
The critical insight: Dijkstra with a binary heap (priority queue) is O((V + E) log V). With a Fibonacci heap it's O(V log V + E), but nobody implements Fibonacci heaps in interviews. Know the binary heap version.
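A sketch of the binary-heap version with heapq, using lazy deletion to skip stale heap entries (the adjacency-list format here is an assumption for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a binary heap: O((V + E) log V).
    graph: {node: [(neighbor, weight), ...]}, non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry for a node already settled; skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The lazy-deletion trick is why this works with a plain binary heap: instead of decreasing a key in place, push a new entry and discard outdated ones on pop.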
Common Search and Select
| Algorithm | Time | Space |
| --- | --- | --- |
| Binary Search | O(log n) | O(1) |
| Two Pointer | O(n) | O(1) |
| Sliding Window | O(n) | O(k), k = window size |
| Quick Select (k-th element) | O(n) avg, O(n²) worst | O(1) |
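Binary Search is the one to have down cold. A standard iterative sketch:

```python
def binary_search(arr, target):
    """O(log n): halve the search space each step. arr must be sorted."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1
```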
The Mistakes Interviewers Catch
Forgetting amortized analysis. Dynamic array append is O(1) amortized, not O(n). Hash table insert is O(1) amortized. If you say "appending to an array is O(n)" you're technically describing the worst case of a single operation but missing the bigger picture.
Ignoring space complexity. "My solution is O(n log n)" — great, but if you're using O(n) extra space when the interviewer wants in-place, you haven't fully solved the problem. Always state both time and space.
Saying O(n) when it's O(n²). This happens with nested loops. If you loop through an array and for each element do a lookup in an unsorted array, that's O(n²). Using a hash set makes the inner lookup O(1), bringing total to O(n).
```python
# O(n^2) — nested linear search
def has_pair_sum_slow(arr, target):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] + arr[j] == target:
                return True
    return False

# O(n) — hash set lookup
def has_pair_sum_fast(arr, target):
    seen = set()
    for num in arr:
        if target - num in seen:
            return True
        seen.add(num)
    return False
```
Confusing log bases. O(log₂ n) and O(log₁₀ n) are the same complexity class because they differ by a constant factor. In Big O, the base doesn't matter. But when interviewers ask "why is binary search O(log n)?", say "because we halve the search space each step, so we do at most log₂(n) steps."
Not considering the input size. Sorting strings isn't O(n log n); it's O(n · m log n), where m is the average string length, because each comparison takes O(m). This matters for problems like Group Anagrams.
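A common Group Anagrams approach makes that cost concrete: each word of length m is sorted in O(m log m), done once per word:

```python
from collections import defaultdict

def group_anagrams(words):
    """O(n * m log m): the sorted characters of a word are its group key."""
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    return list(groups.values())
```

A character-count key (a 26-tuple for lowercase input) drops the per-word cost from O(m log m) to O(m).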
If you want to drill these complexities until they're automatic, CodeUp quizzes you on time and space complexity for every problem you solve — so you build the habit of analyzing complexity as part of your problem-solving process, not as an afterthought.