March 26, 2026 · 10 min read

Dijkstra's Algorithm: The Shortest Path Algorithm You'll Actually Use

How Dijkstra's shortest path algorithm works, why it needs non-negative weights, the priority queue implementation, and how to handle the problems interviewers love to twist it into.

dijkstra shortest-path graphs algorithms interviews

Dijkstra's algorithm finds the shortest path from a source node to every other node in a weighted graph. It's probably the most practical graph algorithm that exists — GPS navigation, network routing, game pathfinding, and about a hundred other real systems use some variant of it.

The core idea is greedy: always process the closest unvisited node next. Once you've processed a node, you've found the shortest path to it. This works because of one key constraint: all edge weights must be non-negative. Remove that constraint and the whole thing falls apart (more on that later).

The Algorithm

Here's the mental model. Imagine you're pouring water from the source node. Water flows outward, always filling the nearest reachable point first. The order in which nodes get filled is the order Dijkstra processes them.

More concretely:

  1. Set the distance to the source as 0, all others as infinity
  2. Put the source in a priority queue (min-heap)
  3. Pop the node with the smallest distance
  4. For each neighbor, check if going through this node gives a shorter path
  5. If yes, update the distance and add the neighbor to the priority queue
  6. Repeat until the queue is empty

import heapq
from collections import defaultdict

def dijkstra(n, edges, source):
    # build adjacency list
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))  # remove for directed graph

    dist = [float('inf')] * n
    dist[source] = 0
    heap = [(0, source)]  # (distance, node)

    while heap:
        d, u = heapq.heappop(heap)

        if d > dist[u]:
            continue  # stale entry, skip

        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v], v))

    return dist

That if d > dist[u]: continue line is important. Since we can't efficiently decrease a key in Python's heapq, we just push a new entry whenever we find a shorter path. The stale entries (with outdated larger distances) get skipped when we pop them. This is called "lazy deletion."
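To see lazy deletion in isolation, here's a toy sketch (not part of the template above) with one current and one stale heap entry for the same node:

```python
import heapq

# toy demonstration of lazy deletion: two heap entries for node "a",
# but only the one matching the current best distance is acted on
dist = {"a": 3}              # current best distance to node "a"
heap = [(5, "a"), (3, "a")]  # (5, "a") is a stale entry
heapq.heapify(heap)

processed = []
while heap:
    d, u = heapq.heappop(heap)
    if d > dist[u]:
        continue  # stale entry, skip
    processed.append((d, u))

print(processed)  # [(3, 'a')] -- the stale (5, 'a') entry was skipped
```

The stale entry costs one pop and one comparison, which is why the technique is cheap enough to replace a real decrease-key operation.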

Why It Works

The greedy choice: when we pop the node with the smallest distance from the heap, that distance is final. No future path can improve it.

Why? Because all edge weights are non-negative. If you've reached node X with distance d, any other path to X must go through some unprocessed node Y with distance >= d, then follow edges with non-negative weights. So the total is at least d. You can't do better.

This is the critical insight. If edges could be negative, a longer path through an unprocessed node could end up shorter after a negative edge. The greedy choice would be wrong.
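To make the failure concrete, here's a small sketch (the function name and graph are made up for illustration) of a Dijkstra variant that commits to each node's distance at pop time, run on a directed graph with one negative edge:

```python
import heapq

def dijkstra_finalized(n, edges, source):
    # Dijkstra variant that commits to a node's distance the moment
    # it is popped -- exactly the greedy choice under discussion
    graph = [[] for _ in range(n)]
    for u, v, w in edges:
        graph[u].append((v, w))  # directed
    dist = [float('inf')] * n
    dist[source] = 0
    final = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in final:
            continue
        final[u] = d  # greedy commitment: d is declared optimal for u
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return final

# one negative edge: 0->1 (2), 0->2 (5), 2->1 (-4)
# the true shortest path to node 1 is 0 -> 2 -> 1 with total cost 1
edges = [(0, 1, 2), (0, 2, 5), (2, 1, -4)]
print(dijkstra_finalized(3, edges, 0)[1])  # prints 2 -- node 1 was finalized too early
```

Node 1 is popped with distance 2 before the cheaper route through node 2 (cost 5 - 4 = 1) is ever explored, so the greedy commitment locks in the wrong answer.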

Tracing the Path

The algorithm above gives you shortest distances but not the actual paths. To reconstruct paths, track the predecessor of each node:

def dijkstra_with_path(n, edges, source, target):
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))

    dist = [float('inf')] * n
    dist[source] = 0
    prev = [-1] * n
    heap = [(0, source)]

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        if u == target:
            break  # found shortest path to target

        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))

    # reconstruct path
    if dist[target] == float('inf'):
        return float('inf'), []

    path = []
    node = target
    while node != -1:
        path.append(node)
        node = prev[node]
    path.reverse()

    return dist[target], path

Note the early termination: if you only need the shortest path to a specific target (not all nodes), you can stop as soon as that target is popped from the heap.
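To see the whole template end to end, here's a compact, self-contained version run on a hypothetical 4-node graph where the direct 0-2 edge (weight 4) loses to the detour 0-1-2 (weight 3):

```python
import heapq
from collections import defaultdict

def shortest_path(n, edges, source, target):
    # condensed version of the path-reconstruction template (undirected graph)
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))

    dist = [float('inf')] * n
    dist[source] = 0
    prev = [-1] * n
    heap = [(0, source)]

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        if u == target:
            break  # early exit: target's distance is final once popped
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))

    if dist[target] == float('inf'):
        return float('inf'), []
    path = []
    node = target
    while node != -1:
        path.append(node)
        node = prev[node]
    return dist[target], path[::-1]

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 4), (2, 3, 1)]
print(shortest_path(4, edges, 0, 3))  # (4, [0, 1, 2, 3])
```

The returned path goes through node 1 rather than taking the direct 0-2 edge, which is exactly what the predecessor array recovers.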

Complexity Analysis

With a binary heap (which Python's heapq is):


  • Each vertex is processed (popped with an up-to-date distance) at most once: O(V log V)
  • Each edge causes at most one push, and hence at most one stale pop: O(E log V)
  • Total: O((V + E) log V)

With a Fibonacci heap (theoretical, rarely implemented):

  • O(V log V + E)

For dense graphs (E close to V^2), the Fibonacci heap version is better asymptotically. In practice, binary heap with lazy deletion is what everyone uses because Fibonacci heaps have huge constant factors.

For very dense graphs, you could skip the heap entirely and use an O(V^2) implementation with a simple array:

def dijkstra_dense(n, adj_matrix, source):
    # adj_matrix[u][v] holds the edge weight; 0 means no edge
    dist = [float('inf')] * n
    dist[source] = 0
    visited = [False] * n

    for _ in range(n):
        # find the unvisited node with minimum distance
        u = -1
        for v in range(n):
            if not visited[v] and (u == -1 or dist[v] < dist[u]):
                u = v

        if dist[u] == float('inf'):
            break

        visited[u] = True
        for v in range(n):
            if adj_matrix[u][v] > 0 and dist[u] + adj_matrix[u][v] < dist[v]:
                dist[v] = dist[u] + adj_matrix[u][v]

    return dist

O(V^2) time, no heap needed. When E is close to V^2, this is actually faster than the heap version because the heap operations have a log V factor.

Classic Problems

Network Delay Time

Given a network of n nodes and weighted directed edges, find how long it takes for a signal from node k to reach all nodes. This is literally "run Dijkstra from k, return the maximum distance." If any node is unreachable, return -1.

def network_delay_time(times, n, k):
    graph = defaultdict(list)
    for u, v, w in times:
        graph[u].append((v, w))

    dist = {i: float('inf') for i in range(1, n + 1)}
    dist[k] = 0
    heap = [(0, k)]

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v], v))

    max_dist = max(dist.values())
    return max_dist if max_dist < float('inf') else -1

Cheapest Flights Within K Stops

Find the cheapest price from src to dst with at most k stops. Standard Dijkstra doesn't work directly because you need to track the number of stops. Modify the state: instead of just (distance, node), use (cost, node, stops_remaining).

def find_cheapest_price(n, flights, src, dst, k):
    graph = defaultdict(list)
    for u, v, w in flights:
        graph[u].append((v, w))

    # (cost, node, stops_remaining)
    heap = [(0, src, k + 1)]
    visited = {}  # node -> most stops_remaining seen when reaching it

    while heap:
        cost, node, stops = heapq.heappop(heap)

        if node == dst:
            return cost

        # skip if we already reached this node with at least as many stops left
        if node in visited and visited[node] >= stops:
            continue
        visited[node] = stops

        if stops > 0:
            for neighbor, price in graph[node]:
                heapq.heappush(heap, (cost + price, neighbor, stops - 1))

    return -1

This is a common pattern: when the problem adds constraints beyond distance, expand the state space.

Shortest Path in a Grid with Obstacles

Given a grid where each cell has a cost, find the shortest path from top-left to bottom-right. This is Dijkstra where the "graph" is implicit — neighbors are adjacent cells, and edge weights are cell costs.

def shortest_path_grid(grid):
    rows, cols = len(grid), len(grid[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    dist[0][0] = grid[0][0]
    heap = [(grid[0][0], 0, 0)]

    while heap:
        d, r, c = heapq.heappop(heap)
        if r == rows - 1 and c == cols - 1:
            return d
        if d > dist[r][c]:
            continue

        for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_dist = d + grid[nr][nc]
                if new_dist < dist[nr][nc]:
                    dist[nr][nc] = new_dist
                    heapq.heappush(heap, (new_dist, nr, nc))

    return dist[rows - 1][cols - 1]

Grid problems with varying costs are just Dijkstra in disguise. If all costs are equal (unweighted), use BFS instead — it's simpler and O(V + E).
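For comparison, here's what the plain BFS alternative might look like on an unweighted grid (assuming the common convention of 0 = open cell, 1 = wall):

```python
from collections import deque

def bfs_grid(grid):
    # minimum number of moves from top-left to bottom-right in an
    # unweighted grid; 0 = open, 1 = wall (assumed convention)
    rows, cols = len(grid), len(grid[0])
    if grid[0][0] == 1:
        return -1
    dist = [[-1] * cols for _ in range(rows)]  # -1 = unvisited
    dist[0][0] = 0
    dq = deque([(0, 0)])

    while dq:
        r, c = dq.popleft()
        if r == rows - 1 and c == cols - 1:
            return dist[r][c]
        for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] == -1):
                dist[nr][nc] = dist[r][c] + 1
                dq.append((nr, nc))
    return -1

grid = [[0, 0, 1],
        [1, 0, 0],
        [1, 1, 0]]
print(bfs_grid(grid))  # 4
```

Because every move costs the same, the first time BFS reaches a cell is already via a shortest route, so there's no relaxation step and no heap.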

0-1 BFS

A special case: edge weights are only 0 or 1. You can use a deque instead of a heap. Push weight-0 edges to the front, weight-1 edges to the back.

from collections import deque

def zero_one_bfs(n, edges, source):
    graph = defaultdict(list)
    for u, v, w in edges:  # w is 0 or 1
        graph[u].append((v, w))

    dist = [float('inf')] * n
    dist[source] = 0
    dq = deque([source])

    while dq:
        u = dq.popleft()
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if w == 0:
                    dq.appendleft(v)  # front
                else:
                    dq.append(v)  # back

    return dist

O(V + E), no log factor. This comes up in grid problems like "minimum cost to make all cells reachable" where some moves are free and others cost 1.

Dijkstra vs. Other Shortest Path Algorithms

BFS — works when all edges have equal weight. O(V + E). Use this when the graph is unweighted.

Dijkstra — works with non-negative weights. O((V + E) log V). The go-to for weighted graphs.

Bellman-Ford — works with negative weights (but no negative cycles). O(V * E). Slower, but handles what Dijkstra can't.

Floyd-Warshall — all pairs shortest paths. O(V^3). Use when you need the distance between every pair of nodes.

A* search — Dijkstra with a heuristic that guides the search toward the target. Same worst case, but much faster in practice for point-to-point queries in geometric graphs (like maps).

The decision tree is simple: unweighted graph -> BFS. Non-negative weights -> Dijkstra. Negative weights -> Bellman-Ford. All pairs -> Floyd-Warshall.

Common Mistakes

Using Dijkstra with negative edges. The greedy choice breaks. Once you "finalize" a node's distance, a negative edge could later provide a shorter path. Use Bellman-Ford instead.

Forgetting the stale entry check. Without if d > dist[u]: continue, you'll process nodes multiple times with outdated distances. The algorithm still gives correct results but runs much slower — potentially O(V E log V).

Using a visited set incorrectly. Some implementations use if u in visited: continue and then visited.add(u) after popping. This works but is subtly different from the distance-based check. The distance-based check (d > dist[u]) is safer because it handles the case where a node gets a shorter distance while it's already in the heap.

Not handling unreachable nodes. Always check whether dist[target] is still infinity before returning it as a valid answer.

Pushing (distance, node) tuples where nodes aren't comparable. In Python, if two tuples have the same distance, heapq compares the second element. If nodes aren't integers, this can crash. Either use integer IDs or add a tiebreaker: (dist, counter, node) where counter is a monotonically increasing integer.
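The counter tiebreaker can be sketched like this (Node is a hypothetical non-comparable type standing in for whatever your graph uses):

```python
import heapq
from itertools import count

class Node:
    """Hypothetical node type with no __lt__ defined."""
    pass

a, b = Node(), Node()
tiebreak = count()  # monotonically increasing counter

heap = []
for distance, node in [(3.5, a), (3.5, b)]:
    # the counter sits between distance and node, so equal distances
    # are resolved by insertion order and the Node objects are never compared
    heapq.heappush(heap, (distance, next(tiebreak), node))

d, _, first = heapq.heappop(heap)
print(d, first is a)  # 3.5 True
```

Without the counter, the second push would force heapq to compare the two Node objects directly and raise a TypeError.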

The Practical Takeaway

Dijkstra's algorithm is the single most useful graph algorithm for shortest path problems. The heap-based implementation with lazy deletion is clean, efficient, and works for the vast majority of interview problems involving weighted graphs.

Master the template: build the adjacency list, initialize distances, push source to heap, pop-process-push in a loop. Then recognize the variants: path reconstruction, early termination for specific targets, expanded state space for constrained problems, 0-1 BFS for binary weights.

Work through these patterns on CodeUp — start with the basic single-source shortest path, then tackle the constrained variations. Once you see how the same skeleton adapts to different problems, Dijkstra becomes one of the most reliable tools in your interview toolkit.
