March 27, 2026 · 7 min read

Time and Space Complexity Cheat Sheet: Every Data Structure and Algorithm

Master reference for Big O complexities of every major data structure, sorting algorithm, and graph algorithm. Plus the common interview mistakes people make when analyzing complexity.

big-o time-complexity space-complexity data-structures algorithms

You don't need to derive time complexity from scratch in an interview. You need to know it — instantly, confidently, without hesitation. When you say "this is O(n log n)" the interviewer nods and moves on. When you fumble, they start worrying.

This is the reference sheet. Bookmark it.

Data Structure Operations

Array

| Operation | Average | Worst |
| --- | --- | --- |
| Access by index | O(1) | O(1) |
| Search | O(n) | O(n) |
| Insert (end) | O(1)* | O(n) |
| Insert (middle) | O(n) | O(n) |
| Delete (end) | O(1) | O(1) |
| Delete (middle) | O(n) | O(n) |
| Space | O(n) | O(n) |
*Amortized O(1) for dynamic arrays — occasionally O(n) when the array resizes, but averaged over all operations it's O(1).
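The doubling strategy behind that amortized bound can be sketched in a few lines. This is an illustrative toy class, not CPython's actual list internals (CPython over-allocates by a different growth factor), but the analysis is the same: resizes are O(n) yet rare enough that the average cost per append stays constant.

```python
# Toy dynamic array: doubles capacity when full, so append is
# amortized O(1) even though an individual resize costs O(n).
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:         # full: pay O(n) once...
            self.capacity *= 2                 # ...then double, so the next
            new_data = [None] * self.capacity  # n/2 appends are free of resizes
            new_data[:self.size] = self.data
            self.data = new_data
        self.data[self.size] = value           # the common case: O(1)
        self.size += 1
```

Each element is copied at most a constant number of times across all resizes, which is where the amortized O(1) comes from.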

Linked List

| Operation | Average | Worst |
| --- | --- | --- |
| Access by index | O(n) | O(n) |
| Search | O(n) | O(n) |
| Insert (at head) | O(1) | O(1) |
| Insert (at position) | O(n) | O(n) |
| Delete (at head) | O(1) | O(1) |
| Delete (at position) | O(n) | O(n) |
| Space | O(n) | O(n) |
The trade-off: O(1) insertion/deletion at known positions, but no random access. You give up indexing for flexible insertion.
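A minimal sketch makes the trade-off concrete (`Node`, `insert_at_head`, and `search` are illustrative names, not a standard library API): inserting at the head rewires one pointer, while finding anything means walking the chain.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_at_head(head, value):
    # O(1): no shifting of elements, just one pointer assignment
    return Node(value, head)

def search(head, target):
    # O(n): no random access — must traverse node by node
    while head is not None:
        if head.value == target:
            return True
        head = head.next
    return False
```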

Hash Table (Dictionary/HashMap)

| Operation | Average | Worst |
| --- | --- | --- |
| Search | O(1) | O(n) |
| Insert | O(1) | O(n) |
| Delete | O(1) | O(n) |
| Space | O(n) | O(n) |
The worst case happens with pathological hash collisions — every key maps to the same bucket. In practice, with a decent hash function, everything is O(1). This is why hash maps are the most-used data structure in interviews.
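A typical use of that O(1) average case, sketched with Python's built-in `set` (`first_duplicate` is an illustrative helper name): one linear scan where every membership check and insert is constant time on average.

```python
def first_duplicate(arr):
    # Each `in` check and `add` is O(1) average for a hash set,
    # so the whole scan is O(n). Using a plain list for `seen`
    # would make each check O(n) and the scan O(n^2).
    seen = set()
    for x in arr:
        if x in seen:
            return x
        seen.add(x)
    return None
```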

Binary Search Tree (BST)

| Operation | Average | Worst |
| --- | --- | --- |
| Search | O(log n) | O(n) |
| Insert | O(log n) | O(n) |
| Delete | O(log n) | O(n) |
| Space | O(n) | O(n) |
The worst case is a degenerate tree — basically a linked list. That's why balanced BSTs (AVL, Red-Black) guarantee O(log n). In interviews, assume balanced unless stated otherwise.
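Here's a minimal unbalanced BST sketch (illustrative, with no rebalancing — so it has exactly the degenerate worst case described above). Each comparison discards an entire subtree, which is why the balanced case is O(log n).

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(node, value):
    # O(log n) balanced, O(n) if insertions arrive in sorted order
    if node is None:
        return TreeNode(value)
    if value < node.value:
        node.left = bst_insert(node.left, value)
    else:
        node.right = bst_insert(node.right, value)
    return node

def bst_search(node, target):
    # Each step discards one subtree: O(height) total
    while node is not None:
        if target == node.value:
            return True
        node = node.left if target < node.value else node.right
    return False
```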

Heap (Priority Queue)

| Operation | Time |
| --- | --- |
| Insert | O(log n) |
| Extract min/max | O(log n) |
| Peek min/max | O(1) |
| Build heap | O(n) |
| Space | O(n) |
Here's the thing people get wrong: building a heap from an array is O(n), not O(n log n). The math behind this (sift-down from the bottom) is a common interview discussion point.
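In Python this is exactly what `heapq.heapify` does: it sift-downs from the last internal node upward, and because most nodes sit near the bottom and sift only a short distance, the total work sums to O(n).

```python
import heapq

nums = [9, 4, 7, 1, 3, 8]
heapq.heapify(nums)              # build heap in O(n), not O(n log n)
smallest = heapq.heappop(nums)   # extract-min in O(log n)
```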

Trie (Prefix Tree)

| Operation | Time |
| --- | --- |
| Insert | O(m) |
| Search | O(m) |
| Prefix search | O(m) |
| Space | O(n * m) |
Where m is the length of the key. Tries trade space for speed on string operations — autocomplete, spell checking, IP routing.
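A trie can be sketched with nested dicts (the function names and the `'$'` end-of-word marker are illustrative conventions, not a standard API). Every operation walks one node per character, which is where the O(m) bounds come from.

```python
def trie_insert(root, word):
    node = root
    for ch in word:                  # O(m) for a key of length m
        node = node.setdefault(ch, {})
    node['$'] = True                 # mark a complete word

def trie_search(root, word):
    node = root
    for ch in word:                  # O(m)
        if ch not in node:
            return False
        node = node[ch]
    return '$' in node

def trie_has_prefix(root, prefix):
    node = root
    for ch in prefix:                # O(m), regardless of how many words are stored
        if ch not in node:
            return False
        node = node[ch]
    return True

root = {}
trie_insert(root, "cat")
trie_insert(root, "car")
```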

Stack and Queue

| Operation | Time |
| --- | --- |
| Push/Enqueue | O(1) |
| Pop/Dequeue | O(1) |
| Peek | O(1) |
| Space | O(n) |
Both are O(1) for everything. The difference is order: LIFO vs FIFO.
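In Python the idiomatic choices are a plain list for a stack and `collections.deque` for a queue — popping from the front of a list is O(n) because everything shifts, while a deque gives O(1) at both ends.

```python
from collections import deque

stack = []                 # list as a stack: push/pop at the end, both O(1)
stack.append(1)
stack.append(2)
top = stack.pop()          # LIFO: 2 comes out first

queue = deque()            # deque: O(1) appends and pops at either end
queue.append(1)
queue.append(2)
front = queue.popleft()    # FIFO: 1 comes out first
```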

Sorting Algorithms

| Algorithm | Best | Average | Worst | Space | Stable? |
| --- | --- | --- | --- | --- | --- |
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) | Yes |
| Radix Sort | O(n · d) | O(n · d) | O(n · d) | O(n + k) | Yes |
Where k is the range of input values and d is the number of digits. The ones that matter for interviews: Merge Sort (stable, guaranteed O(n log n), but needs O(n) space), Quick Sort (in-place, fastest in practice, but O(n²) worst case with bad pivots), and knowing that Python's sorted() uses Timsort — a hybrid of Merge Sort and Insertion Sort.
```python
# Quick Sort — the one you should be able to write from scratch
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```

This isn't the in-place version, but it shows the logic clearly. Partition around a pivot, recurse on each side.

Graph Algorithms

| Algorithm | Time | Space | Use Case |
| --- | --- | --- | --- |
| BFS | O(V + E) | O(V) | Shortest path (unweighted) |
| DFS | O(V + E) | O(V) | Exhaustive search, cycle detection |
| Dijkstra | O((V + E) log V) | O(V) | Shortest path (weighted, non-negative) |
| Bellman-Ford | O(V · E) | O(V) | Shortest path (negative weights) |
| Floyd-Warshall | O(V³) | O(V²) | All-pairs shortest path |
| Topological Sort | O(V + E) | O(V) | Dependency ordering (DAGs) |
| Kruskal's MST | O(E log E) | O(V) | Minimum spanning tree |
| Prim's MST | O((V + E) log V) | O(V) | Minimum spanning tree |
Where V is vertices and E is edges. The critical insight: Dijkstra with a binary heap (priority queue) is O((V + E) log V). With a Fibonacci heap it's O(V log V + E), but nobody implements Fibonacci heaps in interviews. Know the binary heap version.
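A compact sketch of that binary-heap Dijkstra, using Python's `heapq` (the adjacency-list shape `{node: [(neighbor, weight), ...]}` is an assumption for the example). Each edge can push at most one heap entry and each pop costs O(log V), giving the O((V + E) log V) bound.

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)           # O(log V) per pop
        if d > dist.get(u, float('inf')):
            continue                          # stale entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # lazy decrease-key
    return dist
```

The "stale entry" check is the standard trick for simulating decrease-key with a binary heap: outdated tuples stay in the heap but are discarded when popped.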

Common Search and Select

| Algorithm | Time | Space |
| --- | --- | --- |
| Binary Search | O(log n) | O(1) |
| Two Pointer | O(n) | O(1) |
| Sliding Window | O(n) | O(k) |
| Quick Select (k-th element) | O(n) avg, O(n²) worst | O(1) |
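Quick Select deserves a sketch because it's Quick Sort's partition with only one recursive call per level — on average the subproblem halves, so the work sums to O(n). This version uses list comprehensions for clarity, so unlike the in-place variant in the table it costs O(n) extra space; a random pivot makes the O(n²) worst case vanishingly unlikely.

```python
import random

def quickselect(arr, k):
    # Returns the k-th smallest element (0-indexed).
    # Average O(n): n + n/2 + n/4 + ... = 2n work across levels.
    if len(arr) == 1:
        return arr[0]
    pivot = random.choice(arr)
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    if k < len(left):
        return quickselect(left, k)           # answer is in the left part
    if k < len(left) + len(mid):
        return pivot                          # answer is the pivot itself
    return quickselect(right, k - len(left) - len(mid))
```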

The Mistakes Interviewers Catch

Forgetting amortized analysis. Dynamic array append is O(1) amortized, not O(n). Hash table insert is O(1) amortized. If you say "appending to an array is O(n)" you're technically describing the worst case of a single operation but missing the bigger picture.

Ignoring space complexity. "My solution is O(n log n)" — great, but if you're using O(n) extra space when the interviewer wants in-place, you haven't fully solved the problem. Always state both time and space.

Saying O(n) when it's O(n²). This happens with nested loops. If you loop through an array and for each element do a lookup in an unsorted array, that's O(n²). Using a hash set makes the inner lookup O(1), bringing the total to O(n).
```python
# O(n^2) — nested linear search
def has_pair_sum_slow(arr, target):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] + arr[j] == target:
                return True
    return False

# O(n) — hash set lookup
def has_pair_sum_fast(arr, target):
    seen = set()
    for num in arr:
        if target - num in seen:
            return True
        seen.add(num)
    return False
```
Confusing log bases. O(log₂ n) and O(log₁₀ n) are the same complexity class because they differ only by a constant factor — in Big O, the base doesn't matter. But when interviewers ask "why is binary search O(log n)?", say "because we halve the search space each step, so we do at most log₂(n) steps."

Not considering the input size. Sorting a list of n strings isn't O(n log n) — it's O(n m log n) where m is the average string length, because each comparison takes O(m). This matters for problems like Group Anagrams.
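Group Anagrams is a good place to see the string-length factor in action: the standard sorted-key approach sorts each word individually, so the per-word cost is O(m log m) and the total is O(n · m log m), not O(n).

```python
from collections import defaultdict

def group_anagrams(words):
    # Sorting one word of length m costs O(m log m), so for n words
    # the total is O(n * m log m) — m can't be waved away as a constant.
    groups = defaultdict(list)
    for w in words:
        groups[''.join(sorted(w))].append(w)   # sorted word = anagram key
    return list(groups.values())
```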

If you want to drill these complexities until they're automatic, CodeUp quizzes you on time and space complexity for every problem you solve — so you build the habit of analyzing complexity as part of your problem-solving process, not as an afterthought.
