Graph Algorithms Cheat Sheet: BFS, DFS, Dijkstra and Every Pattern You Need
Complete reference for graph algorithms — representations, traversal, shortest path, MST, topological sort, Union-Find, and cycle detection. Code templates in Python for every pattern.
Graphs show up in a large share of coding interviews, and most candidates are shaky on anything beyond basic BFS/DFS. Here's the thing — there are only about eight graph patterns that actually matter, and each one has a code template you can memorize. Once you know them, graph problems stop being scary and start being formulaic.
This is the complete reference. Every algorithm, when to use it, the time complexity, and working Python code.
Graph Representations
Before any algorithm, you need to represent the graph. Two options, and choosing wrong can cost you the interview.
Adjacency List
Use this for sparse graphs (most interview problems). Space: O(V + E). Iterating over a node's neighbors: O(degree).
# Dictionary-based adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A'],
    'D': ['B']
}

# Building from an edge list
def build_graph(edges, directed=False):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        if not directed:
            graph.setdefault(v, []).append(u)
    return graph
Adjacency Matrix
Use this for dense graphs or when you need O(1) edge lookup. Space: O(V^2). Checking if edge exists: O(1).
# V x V matrix
def build_matrix(n, edges, directed=False):
    matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = 1
        if not directed:
            matrix[v][u] = 1
    return matrix
Rule of thumb: if the problem gives you an edge list or adjacency description, use a list. If it gives you a grid, the grid is the implicit adjacency matrix.
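To make the grid case concrete, here's a minimal sketch (the helper name `grid_neighbors` is mine, not a standard function): each cell is a node, and adjacency is implicit in the four compass directions.

```python
def grid_neighbors(grid, r, c):
    # Yield the in-bounds 4-directional neighbors of cell (r, c).
    rows, cols = len(grid), len(grid[0])
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield nr, nc

grid = [[1, 0],
        [1, 1]]
assert sorted(grid_neighbors(grid, 0, 0)) == [(0, 1), (1, 0)]
```

No matrix is ever built explicitly; the grid's bounds check plays the role of the O(1) edge lookup.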
Traversal
BFS (Breadth-First Search)
When to use: Shortest path in unweighted graphs, level-order processing, "minimum number of steps" problems. Time: O(V + E). Space: O(V).

from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order
BFS shortest path (unweighted):
def bfs_shortest_path(graph, start, end):
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == end:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None  # no path
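When you only need distances rather than an explicit path, a common variant tracks edge counts instead of carrying paths through the queue. A sketch using the sample graph from above (the name `bfs_distances` is mine):

```python
from collections import deque

def bfs_distances(graph, start):
    # Distance from start to every reachable node, in number of edges.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A'], 'D': ['B']}
assert bfs_distances(graph, 'A') == {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```

This avoids the `path + [neighbor]` list copies, so it's O(V + E) instead of O(V * E) in the worst case.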
DFS (Depth-First Search)
When to use: Connected components, cycle detection, topological sort, exhaustive path exploration, backtracking. Time: O(V + E). Space: O(V).

# Iterative DFS
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    order = []
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    stack.append(neighbor)
    return order

# Recursive DFS
def dfs_recursive(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)
    return visited
Connected components:
def count_components(n, edges):
    graph = build_graph(edges, directed=False)
    visited = set()
    count = 0
    for node in range(n):
        if node not in visited:
            dfs_recursive(graph, node, visited)
            count += 1
    return count
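As a sanity check, here is the same idea condensed into one standalone snippet, with the graph-building and DFS helpers inlined so it runs on its own:

```python
def count_components(n, edges):
    # Build an undirected adjacency list, then DFS from each unseen node.
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, []).append(u)
    visited = set()

    def dfs(node):
        visited.add(node)
        for nb in graph.get(node, []):
            if nb not in visited:
                dfs(nb)

    count = 0
    for node in range(n):
        if node not in visited:
            dfs(node)
            count += 1
    return count

# Nodes {0,1,2} form one component, {3,4} another.
assert count_components(5, [(0, 1), (1, 2), (3, 4)]) == 2
```

Note that iterating `range(n)` rather than `graph` is what makes isolated nodes count as their own components.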
Shortest Path Algorithms
Dijkstra's Algorithm
When to use: Shortest path with non-negative edge weights. The go-to for weighted graphs in interviews. Time: O((V + E) log V) with a binary heap. Space: O(V).

import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float('inf')):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist
The critical line is `if d > dist.get(node, float('inf')): continue`, which skips stale entries in the heap. Without it, the algorithm is still correct but processes nodes multiple times and runs slower.
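A quick worked check of the template on a three-node graph, restated here so the snippet runs on its own:

```python
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float('inf')):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

# Direct edge A->C costs 10; going through B costs 1 + 2 = 3.
graph = {'A': [('B', 1), ('C', 10)], 'B': [('C', 2)], 'C': []}
assert dijkstra(graph, 'A') == {'A': 0, 'B': 1, 'C': 3}
```

When C's stale (10, 'C') entry is finally popped, the guard line discards it because dist['C'] is already 3.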
Bellman-Ford Algorithm
When to use: Shortest path with negative weights or when you need to detect negative cycles. Time: O(V * E). Space: O(V).

def bellman_ford(n, edges, start):
    # edges: [(u, v, weight), ...]
    dist = [float('inf')] * n
    dist[start] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != float('inf') and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Detect negative cycle
    for u, v, w in edges:
        if dist[u] != float('inf') and dist[u] + w < dist[v]:
            return None  # negative cycle exists
    return dist
Relax all edges V-1 times. If you can still relax after V-1 iterations, there's a negative cycle.
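Both behaviors are easy to exercise on tiny inputs. A self-contained check (same template as above), covering a negative edge and a negative cycle:

```python
def bellman_ford(n, edges, start):
    INF = float('inf')
    dist = [INF] * n
    dist[start] = 0
    for _ in range(n - 1):          # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:           # one more pass: any relaxation = negative cycle
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

# A negative edge (but no negative cycle) is handled fine: 0 -> 1 -> 2 costs 4 - 2 = 2.
assert bellman_ford(3, [(0, 1, 4), (1, 2, -2), (0, 2, 5)], 0) == [0, 4, 2]
# The cycle 1 -> 2 -> 1 has total weight -3 + 2 = -1, so the result is None.
assert bellman_ford(3, [(0, 1, 1), (1, 2, -3), (2, 1, 2)], 0) is None
```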
Floyd-Warshall Algorithm
When to use: All-pairs shortest path. When you need the shortest distance between every pair of nodes. Time: O(V^3). Space: O(V^2).

def floyd_warshall(n, edges):
    dist = [[float('inf')] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = w
    for k in range(n):
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist
Three nested loops over vertices. The key insight: k is the "intermediate node" — can we get from i to j faster by going through k?
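A minimal self-contained check, where routing through the intermediate node beats the direct edge:

```python
def floyd_warshall(n, edges):
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = w
    for k in range(n):              # k = intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# 0 -> 1 -> 2 costs 1 + 2 = 3, beating the direct edge 0 -> 2 of weight 7.
dist = floyd_warshall(3, [(0, 1, 1), (1, 2, 2), (0, 2, 7)])
assert dist[0][2] == 3
```

Note the k loop must be outermost: once k's row and column are final, every path using intermediates up to k is accounted for.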
Minimum Spanning Tree
Kruskal's Algorithm (with Union-Find)
When to use: Find MST by processing edges from lightest to heaviest. Works well with edge lists. Time: O(E log E). Space: O(V).

def kruskal(n, edges):
    # edges: [(weight, u, v), ...]
    edges.sort()
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        px, py = find(x), find(y)
        if px == py:
            return False
        if rank[px] < rank[py]:
            px, py = py, px
        parent[py] = px
        if rank[px] == rank[py]:
            rank[px] += 1
        return True

    mst = []
    for w, u, v in edges:
        if union(u, v):
            mst.append((u, v, w))
            if len(mst) == n - 1:
                break
    return mst
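A quick standalone check (union by rank dropped for brevity, which only affects constants, not correctness):

```python
def kruskal_weight(n, edges):
    # Condensed Kruskal that returns only the total MST weight.
    edges = sorted(edges)           # (weight, u, v), lightest first
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = picked = 0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                # edge connects two different components
            parent[ru] = rv
            total += w
            picked += 1
            if picked == n - 1:
                break
    return total

# Triangle with weights 1, 2, 10: the MST keeps the two lightest edges.
assert kruskal_weight(3, [(1, 0, 1), (2, 1, 2), (10, 0, 2)]) == 3
```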
Prim's Algorithm
When to use: Find MST by growing from a single node. Works well with adjacency lists and dense graphs. Time: O((V + E) log V) with a binary heap. Space: O(V).

import heapq

def prim(graph, n):
    # graph: {node: [(neighbor, weight), ...]}
    visited = set()
    heap = [(0, 0)]  # (weight, node)
    total_weight = 0
    while heap and len(visited) < n:
        w, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        total_weight += w
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(heap, (weight, neighbor))
    return total_weight
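Running the same triangle through Prim's should agree with Kruskal's, since both compute a minimum spanning tree. Restated standalone:

```python
import heapq

def prim_weight(graph, n):
    visited = set()
    heap = [(0, 0)]                 # start growing from node 0, at cost 0
    total = 0
    while heap and len(visited) < n:
        w, node = heapq.heappop(heap)
        if node in visited:
            continue                # stale entry for an already-claimed node
        visited.add(node)
        total += w
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(heap, (weight, neighbor))
    return total

# Same triangle: edges 0-1 (1), 1-2 (2), 0-2 (10). MST weight = 1 + 2 = 3.
graph = {0: [(1, 1), (2, 10)], 1: [(0, 1), (2, 2)], 2: [(0, 10), (1, 2)]}
assert prim_weight(graph, 3) == 3
```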
Topological Sort
When to use: Order nodes in a DAG (directed acyclic graph) such that every edge goes from earlier to later. Dependency resolution, course scheduling, build systems. Time: O(V + E). Space: O(V).

Kahn's Algorithm (BFS-based)

from collections import deque

def topological_sort_bfs(graph, n):
    in_degree = [0] * n
    for node in graph:
        for neighbor in graph[node]:
            in_degree[neighbor] += 1
    queue = deque([i for i in range(n) if in_degree[i] == 0])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    return order if len(order) == n else []  # empty = cycle
Kahn's has a nice bonus: if the result has fewer than n nodes, the graph has a cycle. This makes it great for Course Schedule-type problems.
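For example, the classic Course Schedule problem reduces to exactly this check. A sketch, assuming prerequisites are given as (course, must_take_first) pairs:

```python
from collections import deque

def can_finish(n, prerequisites):
    # Feasible iff the prerequisite graph is acyclic, i.e. Kahn's
    # algorithm manages to order all n courses.
    graph = {}
    in_degree = [0] * n
    for course, prereq in prerequisites:
        graph.setdefault(prereq, []).append(course)
        in_degree[course] += 1
    queue = deque(i for i in range(n) if in_degree[i] == 0)
    taken = 0
    while queue:
        node = queue.popleft()
        taken += 1
        for nb in graph.get(node, []):
            in_degree[nb] -= 1
            if in_degree[nb] == 0:
                queue.append(nb)
    return taken == n

assert can_finish(2, [(1, 0)]) is True            # take 0, then 1
assert can_finish(2, [(1, 0), (0, 1)]) is False   # circular prerequisites
```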
DFS-based Topological Sort
def topological_sort_dfs(graph, n):
    visited = set()
    stack = []

    def dfs(node):
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                dfs(neighbor)
        stack.append(node)

    for i in range(n):
        if i not in visited:
            dfs(i)
    return stack[::-1]
Post-order DFS, then reverse. A node is appended only after everything it points to has finished, so reversing the stack puts prerequisites first. Unlike Kahn's, this version assumes the input is already a DAG; it won't detect cycles on its own.
Union-Find (Disjoint Set Union)
When to use: Dynamic connectivity queries — "are X and Y in the same group?" and "merge group X and group Y." Connected components, Kruskal's MST, redundant connections. Time: O(alpha(n)) per operation (effectively O(1)). Space: O(n).

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.components = n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        px, py = self.find(x), self.find(y)
        if px == py:
            return False
        if self.rank[px] < self.rank[py]:
            px, py = py, px
        self.parent[py] = px
        if self.rank[px] == self.rank[py]:
            self.rank[px] += 1
        self.components -= 1
        return True

    def connected(self, x, y):
        return self.find(x) == self.find(y)
Path compression + union by rank makes this nearly O(1) per operation. The components counter is a useful addition — many problems ask "how many connected components?"
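A condensed standalone check of the components-counting idea (rank bookkeeping omitted for brevity):

```python
def components_after_unions(n, pairs):
    # Start with n singleton components; each successful union merges two.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for x, y in pairs:
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
            components -= 1
    return components

# 5 nodes; unions merge {0,1,2} and {3,4}, leaving two components.
assert components_after_unions(5, [(0, 1), (1, 2), (3, 4)]) == 2
```

Only unions between different roots decrement the counter, so redundant edges (like adding (0, 2) above) leave it unchanged.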
Cycle Detection
Directed Graphs — DFS Coloring
def has_cycle_directed(graph, n):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n

    def dfs(node):
        color[node] = GRAY
        for neighbor in graph.get(node, []):
            if color[neighbor] == GRAY:
                return True  # back edge = cycle
            if color[neighbor] == WHITE and dfs(neighbor):
                return True
        color[node] = BLACK
        return False

    return any(color[i] == WHITE and dfs(i) for i in range(n))
Three colors: WHITE (unvisited), GRAY (in current DFS path), BLACK (fully processed). A back edge to a GRAY node means a cycle.
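A quick check of the template on a three-node cycle versus a simple chain, restated so the snippet runs on its own:

```python
def has_cycle_directed(graph, n):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n

    def dfs(node):
        color[node] = GRAY          # node is on the current DFS path
        for neighbor in graph.get(node, []):
            if color[neighbor] == GRAY:
                return True         # back edge into the current path
            if color[neighbor] == WHITE and dfs(neighbor):
                return True
        color[node] = BLACK         # fully processed
        return False

    return any(color[i] == WHITE and dfs(i) for i in range(n))

assert has_cycle_directed({0: [1], 1: [2], 2: [0]}, 3) is True   # 0 -> 1 -> 2 -> 0
assert has_cycle_directed({0: [1], 1: [2]}, 3) is False          # plain chain
```

Note that a plain `visited` set is not enough for directed graphs: two paths converging on the same BLACK node would be misreported as a cycle.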
Undirected Graphs — Union-Find
def has_cycle_undirected(n, edges):
    uf = UnionFind(n)
    for u, v in edges:
        if uf.connected(u, v):
            return True  # already connected = adding this edge creates a cycle
        uf.union(u, v)
    return False
If two nodes are already in the same component and you add an edge between them, that's a cycle.
Bipartite Check (Graph 2-Coloring)
When to use: Can nodes be split into two groups such that no edge connects nodes in the same group? Job assignment, "is this graph bipartite?"

from collections import deque

def is_bipartite(graph, n):
    color = [-1] * n
    for start in range(n):
        if color[start] != -1:
            continue
        queue = deque([start])
        color[start] = 0
        while queue:
            node = queue.popleft()
            for neighbor in graph.get(node, []):
                if color[neighbor] == -1:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return False
    return True
BFS coloring: assign alternating colors. If a neighbor already has the same color, the graph isn't bipartite.
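The classic litmus test: even cycles are bipartite, odd cycles are not. Restated standalone for a quick check:

```python
from collections import deque

def is_bipartite(graph, n):
    color = [-1] * n                    # -1 = uncolored
    for start in range(n):              # handle disconnected graphs
        if color[start] != -1:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph.get(node, []):
                if color[neighbor] == -1:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return False        # same-color edge: not bipartite
    return True

# A square (even cycle) is 2-colorable; a triangle (odd cycle) is not.
assert is_bipartite({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}, 4) is True
assert is_bipartite({0: [1, 2], 1: [0, 2], 2: [0, 1]}, 3) is False
```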
Quick Decision Table
| Need | Algorithm | Time |
|---|---|---|
| Shortest path (unweighted) | BFS | O(V + E) |
| Shortest path (weighted, non-negative) | Dijkstra | O((V+E) log V) |
| Shortest path (negative weights) | Bellman-Ford | O(VE) |
| All-pairs shortest path | Floyd-Warshall | O(V^3) |
| Minimum spanning tree | Kruskal's or Prim's | O(E log E) / O(E log V) |
| Dependency ordering | Topological Sort | O(V + E) |
| Dynamic connectivity | Union-Find | O(alpha(n)) |
| Cycle detection (directed) | DFS coloring | O(V + E) |
| Cycle detection (undirected) | Union-Find | O(E alpha(V)) |
| Bipartite check | BFS coloring | O(V + E) |
If you want to practice recognizing which graph algorithm to apply with guided hints, CodeUp has graph problems organized by pattern — so you build the instinct for matching problems to algorithms instead of just memorizing templates.