Dijkstra’s Algorithm - Core Insights
The Core Intuition
Dijkstra’s algorithm finds the shortest path from a source node to all other nodes in a weighted graph. The fundamental insight is: once you’ve found the shortest path to a node, you never need to revisit it.
This works because the algorithm is greedy - it always explores the closest unvisited node next. Since all weights are non-negative, any path through a farther node would necessarily be longer.
What People Often Miss
1. It’s Not Just BFS with Weights
Many think Dijkstra is simply BFS adapted for weighted graphs. The key difference: BFS explores by distance in edges, Dijkstra explores by distance in weight.
In BFS, all edges have equal weight (1). Dijkstra generalizes this by using a priority queue instead of a regular queue.
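To make the "BFS is Dijkstra with unit weights" point concrete, here is a minimal sketch (graph `g` and both function names are made up for illustration) showing that on a unit-weight graph the two produce identical distances:

```python
import heapq
from collections import deque

def bfs_distances(graph, start):
    # BFS: plain FIFO queue, distance measured in edge count
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dijkstra_unit(graph, start):
    # Dijkstra with every edge weight forced to 1
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v in graph[u]:
            if d + 1 < dist.get(v, float('inf')):
                dist[v] = d + 1
                heapq.heappush(pq, (d + 1, v))
    return dist

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
assert bfs_distances(g, 'A') == dijkstra_unit(g, 'A')  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```

When all weights are 1, the priority queue pops nodes in exactly the order a FIFO queue would, which is why the regular queue suffices for BFS.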
2. The Relaxation Step is Critical
The “relaxation” operation is where the algorithm updates distance estimates:
if distance[current] + weight(current, neighbor) < distance[neighbor]:
    distance[neighbor] = distance[current] + weight(current, neighbor)
    previous[neighbor] = current
Each relaxation asks: “Is going through the current node a better path to this neighbor?” This repeated check is the mechanism that ensures optimality.
3. Negative Weights Break Everything
Dijkstra fails with negative weights because the core assumption (once visited, we have the shortest path) breaks down. A negative weight edge could create a shorter path through an already-visited node.
For negative weights, use Bellman-Ford Algorithm instead.
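For contrast, here is a minimal sketch of Bellman-Ford (the graph in the usage line is a made-up example where Dijkstra’s greedy choice would finalize B too early):

```python
def bellman_ford(graph, start):
    # graph: dict of dict, graph[u][v] = weight (negative weights allowed)
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    # Relax every edge V-1 times; a shortest path has at most V-1 edges
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle
    for u in graph:
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:
                raise ValueError("negative cycle detected")
    return dist

# A->B costs 1 directly, but A->C->B costs 2 + (-2) = 0
g = {'A': {'B': 1, 'C': 2}, 'B': {}, 'C': {'B': -2}}
print(bellman_ford(g, 'A'))  # {'A': 0, 'B': 0, 'C': 2}
```

Bellman-Ford gives up the greedy “finalize the closest node” step entirely, which is why it tolerates negative edges at the cost of O(VE) time.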
4. It’s Actually a Dynamic Programming Solution
Dijkstra exhibits optimal substructure: the shortest path from A to C through B consists of the shortest path from A to B plus the edge from B to C. This is why greedy works here.
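The optimal-substructure claim can be checked by brute force on a tiny made-up graph: every prefix of a shortest path is itself a shortest path.

```python
# Tiny hypothetical graph as an adjacency dict
g = {'A': {'B': 4, 'C': 2}, 'B': {'D': 5}, 'C': {'B': 1, 'D': 8}, 'D': {}}

def all_paths(g, u, t, seen=()):
    # Enumerate all simple paths u -> t (only feasible on tiny graphs)
    if u == t:
        yield (u,)
    for v in g[u]:
        if v not in seen:
            for rest in all_paths(g, v, t, seen + (u,)):
                yield (u,) + rest

def cost(g, path):
    return sum(g[a][b] for a, b in zip(path, path[1:]))

best_AD = min(all_paths(g, 'A', 'D'), key=lambda p: cost(g, p))
best_AB = min(all_paths(g, 'A', 'B'), key=lambda p: cost(g, p))
# Shortest A->D is A->C->B->D (cost 8); its prefix A->C->B is
# itself the shortest A->B path (cost 3 beats the direct edge's 4)
assert best_AD[:3] == best_AB == ('A', 'C', 'B')
```

This substructure is exactly what lets the greedy choice stand: once B is finalized, any shortest path through B can reuse that finalized prefix.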
Quick Implementation
import heapq

def dijkstra(graph, start):
    """
    graph: dict of dict, graph[u][v] = weight of edge u->v
    start: starting node
    returns: dict of shortest distances from start
    """
    # Distance to all nodes starts at infinity
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    # Priority queue: (distance, node)
    pq = [(0, start)]
    visited = set()
    while pq:
        current_dist, current = heapq.heappop(pq)
        # Skip if already visited (stale heap entry)
        if current in visited:
            continue
        visited.add(current)
        # Relaxation step for all neighbors
        for neighbor, weight in graph[current].items():
            distance = current_dist + weight
            # Only consider if this is a better path
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))
    return distances
# Example usage
graph = {
    'A': {'B': 4, 'C': 2},
    'B': {'C': 1, 'D': 5},
    'C': {'D': 8, 'E': 10},
    'D': {'E': 2},
    'E': {}
}
print(dijkstra(graph, 'A'))
# Output: {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 11}

Time Complexity Insight
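The implementation above returns only distances, while the relaxation pseudocode also recorded previous[neighbor]. A sketch of the variant that keeps that predecessor map and recovers the actual path (both function names here are made up):

```python
import heapq

def dijkstra_with_paths(graph, start):
    # Same lazy-deletion Dijkstra, but also records each node's predecessor
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    previous = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():
            if d + w < distances[v]:
                distances[v] = d + w
                previous[v] = u  # best-known way to reach v is via u
                heapq.heappush(pq, (d + w, v))
    return distances, previous

def reconstruct(previous, start, target):
    # Walk predecessor pointers backwards from target, then reverse
    path = [target]
    while path[-1] != start:
        path.append(previous[path[-1]])
    return path[::-1]

graph = {'A': {'B': 4, 'C': 2}, 'B': {'C': 1, 'D': 5},
         'C': {'D': 8, 'E': 10}, 'D': {'E': 2}, 'E': {}}
dist, prev = dijkstra_with_paths(graph, 'A')
print(reconstruct(prev, 'A', 'E'))  # ['A', 'B', 'D', 'E'], total weight 11
```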
The complexity depends on the priority queue implementation:
- With binary heap: O((V + E) log V)
- With Fibonacci heap: O(E + V log V)
The bottleneck is the priority queue operations. Each vertex is extracted once (V extractions), and each edge causes at most one decrease-key operation (E operations). Note that the lazy-deletion implementation above pushes a fresh entry instead of decreasing a key, so its heap can hold up to E entries; that still gives O(E log E) = O(E log V), since log E ≤ 2 log V.
When to Use
Use Dijkstra when:
- Single-source shortest path needed
- All edge weights are non-negative
- You need the shortest path to multiple destinations
Don’t use Dijkstra when:
- Negative weights exist (use Bellman-Ford Algorithm)
- All edges have equal weight (use BFS)
- You need all-pairs shortest paths (use Floyd-Warshall Algorithm)
Key Reconstruction Points
- The greedy choice: always pick the unvisited node with smallest known distance
- The relaxation operation updates path estimates
- Non-negative weights guarantee the greedy choice is optimal
- The priority queue is essential for efficient “pick smallest” operation
Related Concepts
- Graph Theory
- Greedy Algorithms
- Priority Queue
- Dynamic Programming
- Bellman-Ford Algorithm
- A* Search Algorithm (Dijkstra with a heuristic)
Source: This is synthesized from a first-principles understanding of shortest-path algorithms and computational complexity.