🕷️ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 32 (from laksa117)

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ℹ️ Skipped - page is already crawled

📄 INDEXABLE · CRAWLED (19 hours ago)
🤖 ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
| --- | --- | --- | --- |
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
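The five filter conditions above can be read as one combined indexability predicate. Here is a minimal Python sketch of that logic; the field names come from the conditions column, but the record layout, the function name, and the ~6-month cutoff in days are assumptions for illustration:

```python
from datetime import datetime, timedelta

def page_indexable(page: dict, now: datetime) -> bool:
    # Mirrors the five filters in the table above.
    # Field names are taken from the conditions column; the dict
    # layout itself is hypothetical.
    return (
        page["download_http_code"] == 200
        and page["download_stamp"] > now - timedelta(days=182)  # ~6 months
        and page.get("history_drop_reason") is None
        and page["fh_dont_index"] != 1
        and page["ml_spam_score"] == 0
        and page.get("meta_canonical") in (None, "", page["src_unparsed"])
    )

# Example record shaped like the Page Details below.
record = {
    "download_http_code": 200,
    "download_stamp": datetime(2026, 4, 5, 23, 23, 55),
    "history_drop_reason": None,
    "fh_dont_index": 0,
    "ml_spam_score": 0,
    "meta_canonical": None,
    "src_unparsed": "https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/",
}
print(page_indexable(record, datetime(2026, 4, 6)))
```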

Page Details

| Property | Value |
| --- | --- |
| URL | https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/ |
| Last Crawled | 2026-04-05 23:23:55 (19 hours ago) |
| First Indexed | 2025-09-04 00:34:44 (7 months ago) |
| HTTP Status Code | 200 |
| Meta Title | Graph Algorithms in Python: BFS, DFS, and Beyond |
| Meta Description | Have you ever wondered how Google Maps finds the fastest route or how Netflix recommends what to watch? Graph algorithms are behind these decisions. Graphs, made up of nodes (points) and edges (connections), are one of the most powerful data structur... |
| Meta Canonical | null |
Boilerpipe Text
Have you ever wondered how Google Maps finds the fastest route or how Netflix recommends what to watch? Graph algorithms are behind these decisions. Graphs, made up of nodes (points) and edges (connections), are one of the most powerful data structures in computer science. They help model relationships efficiently, from social networks to transportation systems. In this guide, we will explore two core traversal techniques: Breadth-First Search (BFS) and Depth-First Search (DFS). Moving on from there, we will cover advanced algorithms like Dijkstra's, A*, Kruskal's, Prim's, and Bellman-Ford.

### Table of Contents:

1. Understanding Graphs in Python
2. Ways to Represent Graphs in Python
3. Breadth-First Search (BFS)
4. Depth-First Search (DFS)
5. Dijkstra's Algorithm
6. A* Search
7. Kruskal's Algorithm
8. Prim's Algorithm
9. Bellman-Ford Algorithm
10. Optimizing Graph Algorithms in Python
11. Key Takeaways

## Understanding Graphs in Python

A graph consists of **nodes (vertices)** and **edges (relationships)**. For example, in a social network, people are nodes and friendships are edges. Or in a roadmap, cities are nodes and roads are edges.

There are a few different types of graphs:

- **Directed**: edges have direction (one-way streets, task scheduling).
- **Undirected**: edges go both ways (mutual friendships).
- **Weighted**: edges have values (distances, costs).
- **Unweighted**: edges are equal (basic subway routes).

Now that you know what graphs are, let's look at the different ways they can be represented in Python.

## Ways to Represent Graphs in Python

Before diving into traversal and pathfinding, it's important to know how graphs can be represented. Different problems call for different representations.

### Adjacency Matrix

An adjacency matrix is a 2D array where each cell `(i, j)` shows whether there is an edge from node `i` to node `j`.

- In an **unweighted graph**, `0` means no edge, and `1` means an edge exists.
- In a **weighted graph**, the cell holds the edge weight.
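The weighted case can be sketched in the same style; the node layout and weights below are made up purely for illustration:

```python
# Weighted adjacency matrix for a hypothetical 3-node graph.
# 0 means no edge; a positive value is the edge weight.
weighted = [
    [0, 7, 2],  # node 0: edge to 1 (weight 7), edge to 2 (weight 2)
    [7, 0, 3],  # node 1: edge to 0 (weight 7), edge to 2 (weight 3)
    [2, 3, 0],  # node 2: edge to 0 (weight 2), edge to 1 (weight 3)
]

print(weighted[0][2])  # weight of the edge 0 -> 2
```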
This makes it very quick to check if two nodes are directly connected (constant-time lookup), but it uses more memory for large graphs.

```
graph = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0]
]
```

Here, the matrix shows a fully connected graph of 3 nodes. For example, `graph[0][1] = 1` means there is an edge from node 0 to node 1.

### Adjacency List

An adjacency list represents each node along with the list of nodes it connects to. This is usually more efficient for sparse graphs (where not every node is connected to every other node). It saves memory because only actual edges are stored instead of an entire grid.

```
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'C'],
    'C': ['A', 'B']
}
```

Here, node `A` connects to `B` and `C`, and so on. Checking connections takes a little longer than with a matrix, but for large, sparse graphs, it's the better option.

### Using NetworkX

When working on real-world applications, writing your own adjacency lists and matrices can get tedious. That's where **NetworkX** comes in: a Python library that simplifies graph creation and analysis. With just a few lines of code, you can build graphs, visualize them, and run advanced algorithms without reinventing the wheel.

```
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'C')])
nx.draw(G, with_labels=True)
plt.show()
```

This builds a triangle-shaped graph with nodes A, B, and C. NetworkX also lets you easily run algorithms like shortest paths or spanning trees without manually coding them.

Now that we've seen different ways to represent graphs, let's move on to traversal methods, starting with Breadth-First Search (BFS).

## Breadth-First Search (BFS)

The basic idea behind BFS is to explore a graph one layer at a time. It looks at all the neighbors of a starting node before moving on to the next level. A queue is used to keep track of what comes next.
BFS is particularly useful for:

- Finding the shortest path in unweighted graphs
- Detecting connected components
- Crawling web pages

Here's an example:

```
from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        print(node, end=" ")
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

bfs(graph, 'A')
```

Here's what's going on in this code:

- `graph` is a dict where each node maps to a list of neighbors.
- `deque` is used as a FIFO queue so we visit nodes level-by-level.
- `visited` keeps track of nodes we've already processed so we don't loop forever on cycles.
- In the loop, we pop a node, print it, then for each unvisited neighbor, we mark it visited and enqueue it.

And here's the output:

```
A B C D E F
```

Now that we have seen how BFS works, let's turn to its counterpart: Depth-First Search (DFS).

## Depth-First Search (DFS)

DFS works differently from BFS. Instead of moving level by level, it follows one path as far as it can go before backtracking. Think of it as diving deep down a trail, then returning to explore the others.

We can implement DFS in two ways:

- **Recursive DFS**, which uses the function call stack
- **Iterative DFS**, which uses an explicit stack

DFS is especially useful for:

- Cycle detection
- Maze solving and puzzles
- Topological sorting

Here's an example of recursive DFS:

```
def dfs_recursive(graph, node, visited=None):
    if visited is None:
        visited = set()
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbor in graph[node]:
            dfs_recursive(graph, neighbor, visited)

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

dfs_recursive(graph, 'A')
```

- `visited` is a set that tracks nodes already processed so you don't loop forever on cycles.
- On each call, if `node` hasn't been seen, it's printed, marked visited, then the function recurses into each neighbor.

Traversal order:

```
A B D E F C
```

Explanation: DFS visits B after A, goes deeper into D, then backtracks to explore E and F, and finally visits C.

And here's an example of iterative DFS:

```
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            print(node, end=" ")
            visited.add(node)
            stack.extend(reversed(graph[node]))

dfs_iterative(graph, 'A')
```

- `visited` tracks nodes you've already processed so you don't loop on cycles.
- `stack` is LIFO (last in, first out) – you `pop()` the top node, process it, then push its neighbors.
- `reversed(graph[node])` pushes neighbors in reverse so they're visited in the original left-to-right order (mimicking the usual recursive DFS).

Here's the output:

```
A B D E F C
```

With BFS and DFS explained, we can now move on to algorithms that solve more complex problems, starting with Dijkstra's shortest path algorithm.

## Dijkstra's Algorithm

Dijkstra's algorithm is built on a simple rule: always visit the node with the smallest known distance first. By repeating this, it uncovers the shortest path from a starting node to all others in a weighted graph that doesn't have negative edges.

```
import heapq

def dijkstra(graph, start):
    heap = [(0, start)]
    shortest_path = {node: float('inf') for node in graph}
    shortest_path[start] = 0
    while heap:
        cost, node = heapq.heappop(heap)
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            if new_cost < shortest_path[neighbor]:
                shortest_path[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return shortest_path

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': [('B', 5), ('C', 1)]
}

print(dijkstra(graph, 'A'))
```

Here's what's going on in this code:

- `graph` is an adjacency list: each node maps to a list of `(neighbor, weight)` pairs.
- `shortest_path` stores the current best-known distance to each node (∞ initially, 0 for `start`).
- `heap` (priority queue) holds frontier nodes as `(cost, node)`, always popping the smallest cost first.
- For each popped `node`, it relaxes its edges: for each `(neighbor, weight)`, compute `new_cost`. If `new_cost` beats `shortest_path[neighbor]`, update it and push the neighbor with that cost.

And here's the output:

```
{'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Moving on, let's look at an extension of this algorithm: A* Search.

## A* Search

A* works like Dijkstra's but adds a heuristic function that estimates how close a node is to the goal. This makes it more efficient by guiding the search in the right direction.

```
import heapq

def heuristic(node, goal):
    heuristics = {'A': 4, 'B': 2, 'C': 1, 'D': 0}
    return heuristics.get(node, 0)

def a_star(graph, start, goal):
    g_costs = {node: float('inf') for node in graph}
    g_costs[start] = 0
    came_from = {}
    heap = [(heuristic(start, goal), start)]
    while heap:
        f, node = heapq.heappop(heap)
        if f > g_costs[node] + heuristic(node, goal):
            continue
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1], g_costs[path[0]]
        for neighbor, weight in graph[node]:
            new_g = g_costs[node] + weight
            if new_g < g_costs[neighbor]:
                g_costs[neighbor] = new_g
                came_from[neighbor] = node
                heapq.heappush(heap, (new_g + heuristic(neighbor, goal), neighbor))
    return None, float('inf')

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': []
}

print(a_star(graph, 'A', 'D'))
```

This one's a little more complex, so here's what's going on:

- `graph`: adjacency list – each node maps to `[(neighbor, weight), ...]`.
- `heuristic(node, goal)`: returns an estimate `h(node)` (lower is better). It's passed `goal` but in this demo uses a fixed dict.
- `g_costs`: best known cost from `start` to each node (∞ initially, 0 for start).
- `heap`: min-heap of `(priority, node)` where `priority = g + h`.
- `came_from`: backpointers to reconstruct the path once we pop the goal.

Then in the main loop:

- We pop the node with smallest priority.
- If it's the goal, we backtrack via `came_from` to build the path and return it with `g_costs[goal]`.
- Otherwise, we relax the edges: for each `(neighbor, weight)`, compute `new_g = g_costs[node] + weight`. If `new_g` improves `g_costs[neighbor]`, update it, set `came_from[neighbor] = node`, and push `(new_g + heuristic(neighbor, goal), neighbor)`.

Output:

```
(['A', 'B', 'C', 'D'], 4)
```

Next up, let's move from shortest paths to spanning trees. This is where Kruskal's algorithm comes in.

## Kruskal's Algorithm

Kruskal's algorithm builds a Minimum Spanning Tree (MST) by sorting all edges from smallest to largest and adding them one at a time, as long as they don't create a cycle. This makes it a greedy algorithm as it always picks the cheapest option available at each step. The implementation uses a Disjoint Set (Union-Find) data structure to efficiently check whether adding an edge would create a cycle.
Each node starts in its own set, and as edges are added, sets are merged.

```
class DisjointSet:
    def __init__(self, nodes):
        self.parent = {node: node for node in nodes}
        self.rank = {node: 0 for node in nodes}

    def find(self, node):
        if self.parent[node] != node:
            self.parent[node] = self.find(self.parent[node])
        return self.parent[node]

    def union(self, node1, node2):
        r1, r2 = self.find(node1), self.find(node2)
        if r1 != r2:
            if self.rank[r1] > self.rank[r2]:
                self.parent[r2] = r1
            else:
                self.parent[r1] = r2
                if self.rank[r1] == self.rank[r2]:
                    self.rank[r2] += 1

def kruskal(graph):
    edges = sorted(graph, key=lambda x: x[2])
    mst, ds = [], DisjointSet({u for e in graph for u in e[:2]})
    for u, v, w in edges:
        if ds.find(u) != ds.find(v):
            ds.union(u, v)
            mst.append((u, v, w))
    return mst

graph = [('A', 'B', 1), ('A', 'C', 4), ('B', 'C', 2), ('B', 'D', 5), ('C', 'D', 1)]
print(kruskal(graph))
```

Output:

```
[('A', 'B', 1), ('C', 'D', 1), ('B', 'C', 2)]
```

Here, the MST includes the smallest edges that connect all nodes without forming cycles. Now that we have seen Kruskal's, we can move further to analyze another algorithm.

## Prim's Algorithm

Prim's algorithm also finds an MST, but it grows the tree step by step. It starts with one node and repeatedly **adds the smallest edge** that connects the current tree to a new node. Think of it as expanding a connected "island" until all nodes are included. This implementation uses a **priority queue (heapq)** to always select the smallest available edge efficiently.

```
import heapq

def prim(graph, start):
    mst, visited = [], {start}
    edges = [(w, start, n) for n, w in graph[start]]
    heapq.heapify(edges)
    while edges:
        w, u, v = heapq.heappop(edges)
        if v not in visited:
            visited.add(v)
            mst.append((u, v, w))
            for n, w in graph[v]:
                if n not in visited:
                    heapq.heappush(edges, (w, v, n))
    return mst

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': [('B', 5), ('C', 1)]
}

print(prim(graph, 'A'))
```

Output:

```
[('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 1)]
```

Notice how the algorithm gradually expands from node `A`, always picking the lowest-weight edge that connects a new node. Let's now look at an algorithm that can handle graphs with negative edges: Bellman-Ford.

## Bellman-Ford Algorithm

Bellman-Ford is a shortest path algorithm that can handle negative edge weights, unlike Dijkstra's. It works by **relaxing all edges repeatedly**: if the current path to a node can be improved by going through another node, it updates the distance. After `V-1` iterations (where `V` is the number of vertices), all shortest paths are guaranteed to be found. This makes it slightly slower than Dijkstra's but more versatile. It can also detect negative weight cycles by checking for further improvements after the main loop.

```
def bellman_ford(graph, start):
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('C', -1), ('D', 2)],
    'C': [('D', 3)],
    'D': []
}

print(bellman_ford(graph, 'A'))
```

Output:

```
{'A': 0, 'B': 4, 'C': 2, 'D': 5}
```

Here, the shortest path to each node is found, even though there's a negative edge (`B → C` with weight -1). If there had been a negative cycle, Bellman-Ford would detect it by noticing that distances keep improving after `V-1` iterations.
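The negative-cycle check described above is not part of the snippet itself. A minimal sketch of how it could be added is one extra pass over all edges after the main loop; the function name and the choice to raise an exception are assumptions for illustration:

```python
def bellman_ford_with_check(graph, start):
    # Standard Bellman-Ford relaxation, as in the snippet above.
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    # Extra pass: if any edge can still be relaxed after V-1
    # iterations, a negative weight cycle is reachable from start.
    for u in graph:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                raise ValueError("negative weight cycle detected")
    return dist

demo_graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('C', -1), ('D', 2)],
    'C': [('D', 3)],
    'D': []
}

print(bellman_ford_with_check(demo_graph, 'A'))  # same distances as before
```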
With the main algorithms explained, let's move on to some practical tips for making these implementations more efficient in Python.

## Optimizing Graph Algorithms in Python

When graphs get bigger, little tweaks in how you write your code can make a big difference. Here are a few simple but powerful tricks to keep things running smoothly.

**1. Use `deque` for BFS**

If you use a regular Python list as a queue, popping items from the front takes longer the bigger the list gets. With `collections.deque`, you get instant (`O(1)`) pops from both ends. It's basically built for this kind of job.

```
from collections import deque
queue = deque([start])  # fast pops and appends
```

**2. Go Iterative with DFS**

Recursive DFS looks neat, but Python doesn't like going too deep – you'll hit a recursion limit if your graph is very large. The fix? Write DFS in an iterative style with a stack. Same idea, no recursion errors.

```
def dfs_iterative(graph, start):
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            stack.extend(graph[node])
```

**3. Let NetworkX Do the Heavy Lifting**

For practice and learning, writing your own graph code is great. But if you're working on a real-world problem – say analyzing a social network or planning routes – the NetworkX library saves tons of time. It comes with optimized versions of almost every common graph algorithm plus nice visualization tools.

```
import networkx as nx

G = nx.Graph()
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')])
print(nx.shortest_path(G, source='A', target='D'))
```

**Output:**

```
['A', 'B', 'D']
```

Instead of worrying about queues and stacks, you can let NetworkX handle the details and focus on what the results mean.

## Key Takeaways

- An adjacency matrix is fast for lookups but is memory-heavy.
- An adjacency list is space-efficient for sparse graphs.
- NetworkX makes graph analysis much easier for real-world projects.
- BFS explores layer by layer, DFS explores deeply before backtracking.
- Dijkstra's and A* handle shortest paths.
- Kruskal's and Prim's build spanning trees.
- Bellman-Ford works with negative weights.

## Conclusion

Graphs are everywhere, from maps to social networks, and the algorithms you have seen here are the building blocks for working with them. Whether it is finding paths, building spanning trees, or handling tricky weights, these tools open up a wide range of problems you can solve. Keep experimenting and try out libraries like NetworkX when you are ready to take on bigger projects.
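One DFS application named above, topological sorting, never got its own example. Here is a minimal DFS-based sketch for a DAG; the function name and the task graph are invented for illustration:

```python
def topological_sort(graph):
    # DFS-based topological sort for a DAG: a node is appended to
    # `order` only after everything it points to has been appended,
    # so reversing the list gives a valid ordering.
    visited, order = set(), []

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        for neighbor in graph[node]:
            visit(neighbor)
        order.append(node)

    for node in graph:
        visit(node)
    return order[::-1]  # reverse post-order

# Hypothetical task graph: an edge points from a task to the
# tasks that must happen after it.
tasks = {
    'wake': ['dress', 'eat'],
    'dress': ['leave'],
    'eat': ['leave'],
    'leave': []
}

print(topological_sort(tasks))
```

Note this sketch assumes the graph is acyclic; on a cyclic graph a real implementation would first run the cycle detection DFS is also used for.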
Markdown

September 3, 2025 / [#Python](https://www.freecodecamp.org/news/tag/python/)

# Graph Algorithms in Python: BFS, DFS, and Beyond

[Oyedele Tioluwani](https://www.freecodecamp.org/news/author/Tioluwani/)
- BFS explores layer by layer, DFS explores deeply before backtracking. - Dijkstra’s and A\* handle shortest paths. - Kruskal’s and Prim’s build spanning trees. - Bellman-Ford works with negative weights. ## Conclusion Graphs are everywhere, from maps to social networks, and the algorithms you have seen here are the building blocks for working with them. Whether it is finding paths, building spanning trees, or handling tricky weights, these tools open up a wide range of problems you can solve. Keep experimenting and try out libraries like NetworkX when you are ready to take on bigger projects. *** ![Oyedele Tioluwani](https://cdn.hashnode.com/res/hashnode/image/upload/v1725575384596/35e5eb92-a04c-49cd-b77e-84cc2c09a52b.jpeg?w=500&h=500&fit=crop&crop=entropy&auto=compress,format&format=webp) [Oyedele Tioluwani](https://www.freecodecamp.org/news/author/Tioluwani/) Python developer Studying Mechanical Engineering Machine Learning enthusiast *** If this article was helpful, share it. Learn to code for free. freeCodeCamp's open source curriculum has helped more than 40,000 people get jobs as developers. [Get started](https://www.freecodecamp.org/learn) ADVERTISEMENT freeCodeCamp is a donor-supported tax-exempt 501(c)(3) charity organization (United States Federal Tax Identification Number: 82-0779546) Our mission: to help people learn to code for free. We accomplish this by creating thousands of videos, articles, and interactive coding lessons - all freely available to the public. Donations to freeCodeCamp go toward our education initiatives, and help pay for servers, services, and staff. You can [make a tax-deductible donation here](https://www.freecodecamp.org/donate/). 
Readable Markdown
![Graph Algorithms in Python: BFS, DFS, and Beyond](https://cdn.hashnode.com/res/hashnode/image/upload/v1756916679855/9b173128-ed79-4ae0-8cc8-79fca17662dd.png)

Have you ever wondered how Google Maps finds the fastest route or how Netflix recommends what to watch? Graph algorithms are behind these decisions.

Graphs, made up of nodes (points) and edges (connections), are one of the most powerful data structures in computer science. They help model relationships efficiently, from social networks to transportation systems.

In this guide, we will explore two core traversal techniques: Breadth-First Search (BFS) and Depth-First Search (DFS). From there, we will cover advanced algorithms like Dijkstra’s, A\*, Kruskal’s, Prim’s, and Bellman-Ford.

### Table of Contents:

1. [Understanding Graphs in Python](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-understanding-graphs-in-python)
2. [Ways to Represent Graphs in Python](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-ways-to-represent-graphs-in-python)
3. [Breadth-First Search (BFS)](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-breadth-first-search-bfs)
4. [Depth-First Search (DFS)](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-depth-first-search-dfs)
5. [Dijkstra’s Algorithm](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-dijkstras-algorithm)
6. [A\* Search](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-a-search)
7. [Kruskal’s Algorithm](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-kruskals-algorithm)
8. [Prim’s Algorithm](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-prims-algorithm)
9. [Bellman-Ford Algorithm](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-bellman-ford-algorithm)
10. [Optimizing Graph Algorithms in Python](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-optimizing-graph-algorithms-in-python)
11. [Key Takeaways](https://www.freecodecamp.org/news/graph-algorithms-in-python-bfs-dfs-and-beyond/#heading-key-takeaways)

## Understanding Graphs in Python

A graph consists of **nodes (vertices)** and **edges (relationships)**. For example, in a social network, people are nodes and friendships are edges. Or in a roadmap, cities are nodes and roads are edges.

There are a few different types of graphs:

- **Directed**: edges have direction (one-way streets, task scheduling).
- **Undirected**: edges go both ways (mutual friendships).
- **Weighted**: edges have values (distances, costs).
- **Unweighted**: edges are equal (basic subway routes).

Now that you know what graphs are, let’s look at the different ways they can be represented in Python.

## Ways to Represent Graphs in Python

Before diving into traversal and pathfinding, it’s important to know how graphs can be represented. Different problems call for different representations.

### Adjacency Matrix

An adjacency matrix is a 2D array where each cell `(i, j)` shows whether there is an edge from node `i` to node `j`.

- In an **unweighted graph**, `0` means no edge, and `1` means an edge exists.
- In a **weighted graph**, the cell holds the edge weight.

This makes it very quick to check if two nodes are directly connected (constant-time lookup), but it uses more memory for large graphs.

```
graph = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0]
]
```

Here, the matrix shows a fully connected graph of 3 nodes. For example, `graph[0][1] = 1` means there is an edge from node 0 to node 1.

### Adjacency List

An adjacency list represents each node along with the list of nodes it connects to.
This is usually more efficient for sparse graphs (where not every node is connected to every other node). It saves memory because only actual edges are stored instead of an entire grid.

```
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'C'],
    'C': ['A', 'B']
}
```

Here, node `A` connects to `B` and `C`, and so on. Checking connections takes a little longer than with a matrix, but for large, sparse graphs, it’s the better option.

### Using NetworkX

When working on real-world applications, writing your own adjacency lists and matrices can get tedious. That’s where **NetworkX** comes in: a Python library that simplifies graph creation and analysis. With just a few lines of code, you can build graphs, visualize them, and run advanced algorithms without reinventing the wheel.

```
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'C')])
nx.draw(G, with_labels=True)
plt.show()
```

This builds a triangle-shaped graph with nodes A, B, and C. NetworkX also lets you easily run algorithms like shortest paths or spanning trees without manually coding them.

Now that we’ve seen different ways to represent graphs, let’s move on to traversal methods, starting with Breadth-First Search (BFS).

## Breadth-First Search (BFS)

The basic idea behind BFS is to explore a graph one layer at a time. It looks at all the neighbors of a starting node before moving on to the next level. A queue is used to keep track of what comes next.
BFS is particularly useful for:

- Finding the shortest path in unweighted graphs
- Detecting connected components
- Crawling web pages

Here’s an example:

```
from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        print(node, end=" ")
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

bfs(graph, 'A')
```

Here’s what’s going on in this code:

- `graph` is a dict where each node maps to a list of neighbors.
- `deque` is used as a FIFO queue so we visit nodes level by level.
- `visited` keeps track of nodes we’ve already processed so we don’t loop forever on cycles.
- In the loop, we pop a node, print it, then for each unvisited neighbor, we mark it visited and enqueue it.

And here’s the output:

```
A B C D E F
```

Now that we have seen how BFS works, let’s turn to its counterpart: Depth-First Search (DFS).

## Depth-First Search (DFS)

DFS works differently from BFS. Instead of moving level by level, it follows one path as far as it can go before backtracking. Think of it as diving deep down a trail, then returning to explore the others.
We can implement DFS in two ways:

- **Recursive DFS**, which uses the function call stack
- **Iterative DFS**, which uses an explicit stack

DFS is especially useful for:

- Cycle detection
- Maze solving and puzzles
- Topological sorting

Here’s an example of recursive DFS:

```
def dfs_recursive(graph, node, visited=None):
    if visited is None:
        visited = set()
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbor in graph[node]:
            dfs_recursive(graph, neighbor, visited)

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

dfs_recursive(graph, 'A')
```

- `visited` is a set that tracks nodes already processed so you don’t loop forever on cycles.
- On each call, if `node` hasn’t been seen, it’s printed, marked visited, then the function recurses into each neighbor.

Traversal order:

```
A B D E F C
```

Explanation: DFS visits B after A, goes deeper into D, then backtracks to explore E and F, and finally visits C.

And here’s an example of iterative DFS:

```
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            print(node, end=" ")
            visited.add(node)
            stack.extend(reversed(graph[node]))

dfs_iterative(graph, 'A')
```

- `visited` tracks nodes you’ve already processed so you don’t loop on cycles.
- `stack` is LIFO (last in, first out): you `pop()` the top node, process it, then push its neighbors.
- `reversed(graph[node])` pushes neighbors in reverse so they’re visited in the original left-to-right order (mimicking the usual recursive DFS).

Here’s the output:

```
A B D E F C
```

With BFS and DFS explained, we can now move on to algorithms that solve more complex problems, starting with Dijkstra’s shortest path algorithm.

## Dijkstra’s Algorithm

Dijkstra’s algorithm is built on a simple rule: always visit the node with the smallest known distance first.
By repeating this, it uncovers the shortest path from a starting node to all others in a weighted graph that doesn’t have negative edges.

```
import heapq

def dijkstra(graph, start):
    heap = [(0, start)]
    shortest_path = {node: float('inf') for node in graph}
    shortest_path[start] = 0
    while heap:
        cost, node = heapq.heappop(heap)
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            if new_cost < shortest_path[neighbor]:
                shortest_path[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return shortest_path

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': [('B', 5), ('C', 1)]
}

print(dijkstra(graph, 'A'))
```

Here’s what’s going on in this code:

- `graph` is an adjacency list: each node maps to a list of `(neighbor, weight)` pairs.
- `shortest_path` stores the current best-known distance to each node (∞ initially, 0 for `start`).
- `heap` (priority queue) holds frontier nodes as `(cost, node)`, always popping the smallest cost first.
- For each popped `node`, it relaxes its edges: for each `(neighbor, weight)`, compute `new_cost`. If `new_cost` beats `shortest_path[neighbor]`, update it and push the neighbor with that cost.

And here’s the output:

```
{'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Moving on, let’s look at an extension of this algorithm: A\* Search.

## A\* Search

A\* works like Dijkstra’s but adds a heuristic function that estimates how close a node is to the goal. This makes it more efficient by guiding the search in the right direction.
```
import heapq

def heuristic(node, goal):
    heuristics = {'A': 4, 'B': 2, 'C': 1, 'D': 0}
    return heuristics.get(node, 0)

def a_star(graph, start, goal):
    g_costs = {node: float('inf') for node in graph}
    g_costs[start] = 0
    came_from = {}
    heap = [(heuristic(start, goal), start)]
    while heap:
        f, node = heapq.heappop(heap)
        if f > g_costs[node] + heuristic(node, goal):
            continue
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1], g_costs[path[0]]
        for neighbor, weight in graph[node]:
            new_g = g_costs[node] + weight
            if new_g < g_costs[neighbor]:
                g_costs[neighbor] = new_g
                came_from[neighbor] = node
                heapq.heappush(heap, (new_g + heuristic(neighbor, goal), neighbor))
    return None, float('inf')

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': []
}

print(a_star(graph, 'A', 'D'))
```

This one’s a little more complex, so here’s what’s going on:

- `graph`: adjacency list, where each node maps to `[(neighbor, weight), ...]`.
- `heuristic(node, goal)`: returns an estimate `h(node)` (lower is better). It’s passed `goal` but in this demo uses a fixed dict.
- `g_costs`: best known cost from `start` to each node (∞ initially, 0 for start).
- `heap`: min-heap of `(priority, node)` where `priority = g + h`.
- `came_from`: backpointers to reconstruct the path once we pop the goal.

Then in the main loop:

- We pop the node with the smallest priority.
- If it’s the goal, we backtrack via `came_from` to build the path and return it with `g_costs[goal]`.
- Otherwise, we relax the edges: for each `(neighbor, weight)`, compute `new_g = g_costs[node] + weight`. If `new_g` improves `g_costs[neighbor]`, update it, set `came_from[neighbor] = node`, and push `(new_g + heuristic(neighbor, goal), neighbor)`.

Output:

```
(['A', 'B', 'C', 'D'], 4)
```

Next up, let’s move from shortest paths to spanning trees. This is where Kruskal’s algorithm comes in.
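Before we do, one caveat about the A\* heuristic worth keeping in mind: A\* only guarantees an optimal path when the heuristic never overestimates the true remaining cost (an *admissible* heuristic). Here’s a minimal sketch contrasting the admissible values used above with a deliberately inflated one; the `a_star_cost` helper and the inflated values are illustrative, not part of the original article.

```
import heapq

def a_star_cost(graph, start, goal, h):
    # Cost of the path an A*-style search finds under heuristic h
    g = {node: float('inf') for node in graph}
    g[start] = 0
    heap = [(h(start), start)]
    while heap:
        f, node = heapq.heappop(heap)
        if node == goal:
            return g[goal]  # first time the goal is popped
        for neighbor, weight in graph[node]:
            if g[node] + weight < g[neighbor]:
                g[neighbor] = g[node] + weight
                heapq.heappush(heap, (g[neighbor] + h(neighbor), neighbor))
    return float('inf')

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': []
}

h_ok  = lambda n: {'A': 4, 'B': 2, 'C': 1, 'D': 0}[n]   # never overestimates
h_bad = lambda n: {'A': 0, 'B': 99, 'C': 0, 'D': 0}[n]  # wildly overestimates at B

print(a_star_cost(graph, 'A', 'D', h_ok))   # 4 (the true shortest path, A-B-C-D)
print(a_star_cost(graph, 'A', 'D', h_bad))  # 5 (steers around B and settles for A-C-D)
```

With the inflated estimate at `B`, the goal is popped via the heavier `A-C-D` route first, so a non-admissible heuristic trades correctness for speed.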
## Kruskal’s Algorithm

Kruskal’s algorithm builds a Minimum Spanning Tree (MST) by sorting all edges from smallest to largest and adding them one at a time, as long as they don’t create a cycle. This makes it a greedy algorithm, as it always picks the cheapest option available at each step.

The implementation uses a Disjoint Set (Union-Find) data structure to efficiently check whether adding an edge would create a cycle. Each node starts in its own set, and as edges are added, sets are merged.

```
class DisjointSet:
    def __init__(self, nodes):
        self.parent = {node: node for node in nodes}
        self.rank = {node: 0 for node in nodes}

    def find(self, node):
        if self.parent[node] != node:
            self.parent[node] = self.find(self.parent[node])
        return self.parent[node]

    def union(self, node1, node2):
        r1, r2 = self.find(node1), self.find(node2)
        if r1 != r2:
            if self.rank[r1] > self.rank[r2]:
                self.parent[r2] = r1
            else:
                self.parent[r1] = r2
                if self.rank[r1] == self.rank[r2]:
                    self.rank[r2] += 1

def kruskal(graph):
    edges = sorted(graph, key=lambda x: x[2])
    mst, ds = [], DisjointSet({u for e in graph for u in e[:2]})
    for u, v, w in edges:
        if ds.find(u) != ds.find(v):
            ds.union(u, v)
            mst.append((u, v, w))
    return mst

graph = [('A', 'B', 1), ('A', 'C', 4), ('B', 'C', 2), ('B', 'D', 5), ('C', 'D', 1)]
print(kruskal(graph))
```

Output:

```
[('A', 'B', 1), ('C', 'D', 1), ('B', 'C', 2)]
```

Here, the MST includes the smallest edges that connect all nodes without forming cycles. Now that we have seen Kruskal’s, let’s look at another way to build an MST.

## Prim’s Algorithm

Prim’s algorithm also finds an MST, but it grows the tree step by step. It starts with one node and repeatedly **adds the smallest edge** that connects the current tree to a new node. Think of it as expanding a connected “island” until all nodes are included.

This implementation uses a **priority queue (heapq)** to always select the smallest available edge efficiently.
```
import heapq

def prim(graph, start):
    mst, visited = [], {start}
    edges = [(w, start, n) for n, w in graph[start]]
    heapq.heapify(edges)
    while edges:
        w, u, v = heapq.heappop(edges)
        if v not in visited:
            visited.add(v)
            mst.append((u, v, w))
            for n, w in graph[v]:
                if n not in visited:
                    heapq.heappush(edges, (w, v, n))
    return mst

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('A', 1), ('C', 2), ('D', 5)],
    'C': [('A', 4), ('B', 2), ('D', 1)],
    'D': [('B', 5), ('C', 1)]
}

print(prim(graph, 'A'))
```

Output:

```
[('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 1)]
```

Notice how the algorithm gradually expands from node `A`, always picking the lowest-weight edge that connects a new node. Let’s now look at an algorithm that can handle graphs with negative edges: Bellman-Ford.

## Bellman-Ford Algorithm

Bellman-Ford is a shortest path algorithm that, unlike Dijkstra’s, can handle negative edge weights. It works by **relaxing all edges repeatedly**: if the current path to a node can be improved by going through another node, it updates the distance. After `V-1` iterations (where `V` is the number of vertices), all shortest paths are guaranteed to be found.

This makes it slower than Dijkstra’s (its running time is O(V·E), since every edge is relaxed V-1 times) but more versatile. It can also detect negative weight cycles by checking for further improvements after the main loop.

```
def bellman_ford(graph, start):
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('C', -1), ('D', 2)],
    'C': [('D', 3)],
    'D': []
}

print(bellman_ford(graph, 'A'))
```

Output:

```
{'A': 0, 'B': 4, 'C': 2, 'D': 5}
```

Here, the shortest path to each node is found, even though there’s a negative edge (`B → C` with weight -1). If there had been a negative cycle, Bellman-Ford would detect it by noticing that distances keep improving after `V-1` iterations.
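That cycle check is just one more relaxation pass over the same loop. Here’s a sketch of how it could look; the `bellman_ford_with_check` name, the `ValueError`, and the `cyclic` example graph are my choices for illustration, not from the original article.

```
def bellman_ford_with_check(graph, start):
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    # Standard Bellman-Ford: relax every edge V-1 times
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    # One extra pass: any further improvement means a negative cycle
    for u in graph:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                raise ValueError("graph contains a negative weight cycle")
    return dist

graph = {
    'A': [('B', 4), ('C', 2)],
    'B': [('C', -1), ('D', 2)],
    'C': [('D', 3)],
    'D': []
}
print(bellman_ford_with_check(graph, 'A'))  # {'A': 0, 'B': 4, 'C': 2, 'D': 5}

# A -> B -> C -> A has total weight 1 - 2 - 2 = -3, a negative cycle
cyclic = {'A': [('B', 1)], 'B': [('C', -2)], 'C': [('A', -2)]}
try:
    bellman_ford_with_check(cyclic, 'A')
except ValueError as err:
    print(err)  # graph contains a negative weight cycle
```

On the well-behaved graph it returns the same distances as before; on the cyclic one, the extra pass still finds an improvement, which is the signal that no finite shortest paths exist.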
With the main algorithms explained, let’s move on to some practical tips for making these implementations more efficient in Python.

## Optimizing Graph Algorithms in Python

When graphs get bigger, little tweaks in how you write your code can make a big difference. Here are a few simple but powerful tricks to keep things running smoothly.

**1\. Use `deque` for BFS**

If you use a regular Python list as a queue, popping items from the front takes longer the bigger the list gets (`list.pop(0)` is `O(n)`). With `collections.deque`, you get constant-time (`O(1)`) pops from both ends. It’s basically built for this kind of job.

```
from collections import deque

queue = deque([start])  # fast pops and appends
```

**2\. Go Iterative with DFS**

Recursive DFS looks neat, but Python limits recursion depth, so you’ll hit a `RecursionError` if your graph is very large. The fix? Write DFS in an iterative style with a stack. Same idea, no recursion errors.

```
def dfs_iterative(graph, start):
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            stack.extend(graph[node])
```

**3\. Let NetworkX Do the Heavy Lifting**

For practice and learning, writing your own graph code is great. But if you’re working on a real-world problem, say analyzing a social network or planning routes, the NetworkX library saves tons of time. It comes with optimized versions of almost every common graph algorithm plus nice visualization tools.

```
import networkx as nx

G = nx.Graph()
G.add_edges_from([('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')])
print(nx.shortest_path(G, source='A', target='D'))
```

**Output:**

```
['A', 'B', 'D']
```

Instead of worrying about queues and stacks, you can let NetworkX handle the details and focus on what the results mean.

## Key Takeaways

- An adjacency matrix is fast for lookups but memory-heavy.
- An adjacency list is space-efficient for sparse graphs.
- NetworkX makes graph analysis much easier for real-world projects.
- BFS explores layer by layer, while DFS explores deeply before backtracking.
- Dijkstra’s and A\* handle shortest paths.
- Kruskal’s and Prim’s build spanning trees.
- Bellman-Ford works with negative weights.

## Conclusion

Graphs are everywhere, from maps to social networks, and the algorithms you have seen here are the building blocks for working with them. Whether it is finding paths, building spanning trees, or handling tricky weights, these tools open up a wide range of problems you can solve.

Keep experimenting, and try out libraries like NetworkX when you are ready to take on bigger projects.

***

Learn to code for free. freeCodeCamp's open source curriculum has helped more than 40,000 people get jobs as developers. [Get started](https://www.freecodecamp.org/learn)
Shard: 32 (laksa)
Root Hash: 13723046482134587832
Unparsed URL: org,freecodecamp!www,/news/graph-algorithms-in-python-bfs-dfs-and-beyond/ s443