ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.2 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://mshaeri.com/blog/multi-threading-multi-processing-and-async-event-loop-in-python/ |
| Last Crawled | 2026-04-04 21:03:59 (6 days ago) |
| First Indexed | 2025-03-28 22:44:43 (1 year ago) |
| HTTP Status Code | 200 |
| Meta Title | Multi-Threading, Multi-Processing, Async and Event Loop in Python – A Developer Bird Blog |
| Meta Description | multi-threading vs multi-processing vs async and event loops described |
| Meta Canonical | null |
| Boilerpipe Text | In Python, you’ve probably come across terms like multi-threading, multi-processing, async and event loops. They can be confusing at first. What should we use? When? Why does Python have multiple ways to do the same thing?
In this post, I’ll break it all down in a way that actually makes sense, and to wrap it up, I’ll show you real-world code examples that demonstrate how these tools can improve performance in your system.
Multi-Threading (Good for I/O-Bound Tasks)
Multi-threading is when you run multiple threads inside the same process. But because of Python’s Global Interpreter Lock (GIL), only one thread can execute Python bytecode at a time. This means multi-threading is NOT good for CPU-heavy tasks but can be useful for I/O-bound operations like web scraping, file I/O, and API calls.
Example: Multi-Threading for Downloading Web Pages
import threading
import time

def download_page(url):
    print(f"Downloading {url} ...")
    time.sleep(2)  # Simulate network delay
    print(f"Finished {url}")

urls = ["http://example.com/page1", "http://example.com/page2", "http://example.com/page3"]
threads = [threading.Thread(target=download_page, args=(url,)) for url in urls]

for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print("All downloads complete!")
We’re just waiting for the network, so using threads allows the OS to switch between them while a thread is waiting for an I/O task to finish.
Threads share memory, making them lightweight.
⛔ Downside: The GIL prevents true parallel execution for CPU-bound tasks. So, again, don’t use it for calculations or image/data processing tasks.
Multi-Processing 🖥️ (Best for CPU-Bound Tasks)
Multi-processing, on the other hand, spawns multiple processes, each with its own memory space. This means Python can actually run code in parallel on multiple CPU cores.
Example: Multi-Processing for CPU-Heavy Work
import multiprocessing

def compute_square(n):
    return n * n

if __name__ == "__main__":
    numbers = [1, 2, 3, 4, 5]
    with multiprocessing.Pool(processes=3) as pool:
        results = pool.map(compute_square, numbers)
    print("Squares:", results)
Each process runs independently, bypassing the GIL.
Ideal for CPU-heavy tasks like image processing, machine learning, and data analysis.
In multi-processing, processes don’t share memory, so communication between them requires extra effort. Each process is actually a new instance of the Python interpreter, and each one has its own private memory area. This is different from multi-threading, where threads share the same memory within a single process.
Async Event Loop ⚡ (Best for I/O-Heavy & High-Concurrency Tasks)
Async programming uses an event loop to handle thousands of tasks efficiently without blocking. Instead of waiting (like threads do), an event loop will switch to another task while a task is waiting for I/O. In other words, instead of relying on OS-level thread management as in multi-threading, an event loop switches between tasks cooperatively: a task voluntarily gives up control when it reaches an await statement, and the event loop can then run another task until the awaited result comes back. All of this happens in a single thread.
The event loop fetches a task from the queue and gives it the CPU until the task finishes or blocks on an I/O operation.
Example: Async Event Loop for Non-Blocking Tasks
import asyncio

async def task():
    print("Start Task")
    await asyncio.sleep(3)  # Non-blocking wait
    print("Task Complete")

async def main():
    print("Before Task")
    await task()
    print("After Task")

asyncio.run(main())
Since all tasks run in the same process and thread, they can access the same global variables or objects in memory. But, just like with regular Python code, if you want to safely share data between tasks, you need to manage synchronization with mechanisms like locks or other safe data structures.
It’s single-threaded but non-blocking.
Ideal for web scraping, API calls, database queries, and file I/O.
⛔ Like multi-threading, it’s not good for CPU-heavy tasks (multi-processing is better for that).
Running Multiple Async Tasks (Concurrency)
Here is an example of two tasks that asyncio runs concurrently, waiting for both to complete with the gather() method.
Example: Running Multiple Async Tasks in Parallel
import asyncio

async def task1():
    print("Task 1 Start")
    await asyncio.sleep(2)
    print("Task 1 Done")

async def task2():
    print("Task 2 Start")
    await asyncio.sleep(3)
    print("Task 2 Done")

async def main():
    await asyncio.gather(task1(), task2())  # Run both tasks concurrently

asyncio.run(main())
Task 1 takes 2 seconds. Task 2 takes 3 seconds. Total time: only 3 seconds instead of 5.
When to Use What?
| Use Multi-Threading 🧵 If: | Use Multi-Processing 🖥️ If: | Use Async ⚡ If: |
|---|---|---|
| You have I/O-bound tasks | You have CPU-bound tasks | You have high-concurrency I/O tasks |
| Need lightweight concurrency | Need true parallel execution | Need thousands of async operations |
| Examples: web scraping, file I/O, database queries | Examples: machine learning, image processing, data analysis | Examples: APIs, web scraping, real-time applications |
Multi-threading, multi-processing and async event-loop comparison
Real-World Example: Combining Multi-Processing and Async for Heavy I/O + CPU Tasks
To help you fully grasp the advantages of using these tools in real-world scenarios, let’s look at a practical example: fetching shopping cart data from an API (I/O-bound) and then calculating the total price of each cart (CPU-heavy).
Example: Web Scraping + CPU-Intensive Processing
import asyncio
import time
import aiohttp
import multiprocessing

async def fetch_cart(session, cart_id):
    url = f"https://dummyjson.com/carts/{cart_id}"
    await asyncio.sleep(cart_id)  # Simulate network delay for each cart
    async with session.get(url) as response:
        return await response.json()

def calculate_cart_total_price(cart):
    products = cart["products"]
    time.sleep(cart["id"])  # Simulate CPU-heavy work for each cart
    return cart["id"], sum(product["total"] for product in products)

async def main():
    start_time = time.time()
    cart_ids = [1, 2, 3, 4, 5]
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_cart(session, cart_id) for cart_id in cart_ids]
        responses = await asyncio.gather(*tasks)
    fetching_elapsed_time = time.time() - start_time
    print("All carts fetched in {} seconds, instead of ~{}".format(fetching_elapsed_time, sum(cart_ids)))

    # Use multi-processing for CPU-intensive processing
    with multiprocessing.Pool(processes=5) as pool:
        results = pool.map(calculate_cart_total_price, responses)
    print("Total price of all carts:", sum(result[1] for result in results))

    processing_elapsed_time = time.time() - start_time - fetching_elapsed_time
    print("Calculation done in {} seconds instead of ~{}".format(processing_elapsed_time, sum(cart_ids)))

    total_elapsed_time = time.time() - start_time
    print("Total elapsed time: {} seconds, instead of ~{}".format(total_elapsed_time, sum(cart_ids) * 2))

asyncio.run(main())
In this example, multiple shopping carts are fetched concurrently instead of waiting for each request to complete one by one. This significantly reduces the total time spent on I/O operations. Once all the data is retrieved, we use multi-processing to perform the CPU-heavy calculations in parallel across multiple processes, making full use of the available CPU cores.
If we fetched the carts sequentially without async, each request would block until it completed, for a total wait of approximately 15 seconds (1+2+3+4+5). Similarly, processing each cart’s total one after another without multiprocessing would add another 15 seconds, for an overall execution time of around 30 seconds. Thanks to async I/O and multi-processing, the optimized approach reduces this to roughly 10 seconds (about 5 for fetching and 5 for processing, each bounded by the slowest cart). You can try running this code on your machine to experience firsthand how async I/O and multi-processing work together to optimize performance.
Long story short
🧵 Multi-threading is great for I/O-bound tasks but is limited by the GIL.
🖥️ Multi-processing is best for CPU-bound tasks and bypasses the GIL.
⚡ Async programming is perfect for high-concurrency I/O (e.g., handling thousands of requests).
So next time you’re wondering “Which one should I use?”, just ask yourself:
Is it I/O-heavy? → Use multi-threading or async.
Is it CPU-heavy? → Use multi-processing.
Do you need to handle thousands of concurrent I/O-heavy tasks? → Use async, because it is more efficient than creating thousands of threads.
Happy coding! |
| Markdown |
# Multi-Threading, Multi-Processing, Async and Event Loop in Python
by [Bird](https://mshaeri.com/blog/author/mostafa_shaeri_tj/)
[March 29, 2025 (updated July 14, 2025)](https://mshaeri.com/blog/multi-threading-multi-processing-and-async-event-loop-in-python/)
In Python, you’ve probably come across terms like **multi-threading, multi-processing, async and event loops**. They can be confusing at first. What should we use? When? Why does Python have multiple ways to do the same thing?
In this post, I’ll break it all down in a way that actually makes sense, and to wrap it up, I’ll show you real-world code examples that demonstrate how these tools can improve performance in your system.
***
## **Multi-Threading (Good for I/O-Bound Tasks)**
Multi-threading is when you run **multiple threads inside the same process**. But because of Python’s **Global Interpreter Lock (GIL)**, only **one thread can execute Python bytecode at a time**. This means multi-threading is NOT good for CPU-heavy tasks but can be useful for I/O-bound operations like **web scraping, file I/O, and API calls**.
### **Example: Multi-Threading for Downloading Web Pages**
```
import threading
import time

def download_page(url):
    print(f"Downloading {url} ...")
    time.sleep(2)  # Simulate network delay
    print(f"Finished {url}")

urls = ["http://example.com/page1", "http://example.com/page2", "http://example.com/page3"]
threads = [threading.Thread(target=download_page, args=(url,)) for url in urls]

for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print("All downloads complete!")
```
- We’re just waiting for the network, so using threads allows the OS to switch between them while a thread is waiting for an I/O task to finish.
- Threads share memory, making them lightweight.
⛔ **Downside:** The GIL prevents true parallel execution for CPU-bound tasks. So, again, don’t use it for calculations or image/data processing tasks.
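You can see the GIL’s effect yourself with a minimal timing sketch (my own example, not from the original post): the same CPU-bound countdown is run twice sequentially and then on two threads. On a standard CPython build the threaded version is typically no faster:

```python
import threading
import time

N = 2_000_000

def count_down(n):
    # Pure-Python CPU work: the GIL serializes this across threads
    while n > 0:
        n -= 1

# Sequential: two countdowns back to back
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Threaded: two threads, same total work
start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"Sequential: {sequential:.2f}s, Threaded: {threaded:.2f}s")
```

On stock CPython the two timings come out close (the threaded run is often slightly slower due to lock contention). Replace the countdown with `time.sleep` and the threaded version wins, which is exactly the I/O-bound case described above.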
***
## **Multi-Processing 🖥️ (Best for CPU-Bound Tasks)**
Multi-processing, on the other hand, **spawns multiple processes**, each with its **own memory space**. This means Python can actually run code in **parallel** on multiple CPU cores.
### **Example: Multi-Processing for CPU-Heavy Work**
```
import multiprocessing

def compute_square(n):
    return n * n

if __name__ == "__main__":
    numbers = [1, 2, 3, 4, 5]
    with multiprocessing.Pool(processes=3) as pool:
        results = pool.map(compute_square, numbers)
    print("Squares:", results)
```
- Each process runs independently, **bypassing the GIL**.
- Ideal for CPU-heavy tasks like **image processing, machine learning, and data analysis**.
In multi-processing, processes don’t share memory, so communication between them requires extra effort. Each process is actually a new instance of the Python interpreter, and each one has its own **private memory area**. This is different from multi-threading, where threads share the same memory within a single process.
***
## **Async Event Loop ⚡ (Best for I/O-Heavy & High-Concurrency Tasks)**
Async programming uses an **event loop** to handle **thousands of tasks** efficiently **without blocking**. Instead of waiting (like threads do), an event loop will **switch to another task** while a task is waiting for I/O. In other words, instead of relying on OS-level thread management as in multi-threading, an event loop **switches between tasks cooperatively**: a task voluntarily gives up control when it reaches an `await` statement, and the event loop can then run another task until the awaited result comes back. All of this happens in a single thread.
![Event loop diagram](https://mshaeri.com/blog/wp-content/uploads/2025/03/image.png)
The event loop fetches a task from the queue and gives it the CPU until the task finishes or blocks on an I/O operation.
### **Example: Async Event Loop for Non-Blocking Tasks**
```
import asyncio

async def task():
    print("Start Task")
    await asyncio.sleep(3)  # Non-blocking wait
    print("Task Complete")

async def main():
    print("Before Task")
    await task()
    print("After Task")

asyncio.run(main())
```
Since all tasks run in the same process and thread, they can access the same global variables or objects in memory. But, just like with regular Python code, if you want to safely share data between tasks, you need to manage synchronization with mechanisms like locks or other safe data structures.
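As a sketch of that synchronization point (my own example, not from the post): a read-modify-write that awaits in the middle can lose updates, because another task may run during the `await`. An `asyncio.Lock` keeps the whole update atomic:

```python
import asyncio

counter = 0

async def increment(lock, times):
    global counter
    for _ in range(times):
        async with lock:
            current = counter
            await asyncio.sleep(0)   # Yields to the event loop mid-update
            counter = current + 1    # Safe: no other task entered the lock

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(increment(lock, 100), increment(lock, 100))

asyncio.run(main())
print("counter =", counter)  # counter = 200
```

Remove the `async with lock:` and both tasks can read the same `counter` value around the `await` and overwrite each other’s increments.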
- It’s **single-threaded but non-blocking**.
- Ideal for **web scraping, API calls, database queries, and file I/O**.
⛔ Like multi-threading, it’s not good for CPU-heavy tasks (multi-processing is better for that).
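If an async application does occasionally hit a CPU-heavy step, one common pattern (a sketch under my own assumptions, not from the post) is to hand that work to an executor via `loop.run_in_executor`, so the event loop stays free to serve other tasks. `None` uses the default thread pool; swapping in a `concurrent.futures.ProcessPoolExecutor` gives true parallelism for CPU-bound work:

```python
import asyncio

def cpu_heavy(n):
    # Stand-in for real CPU-bound work (e.g. image processing)
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    # None = default thread-pool executor; pass a ProcessPoolExecutor
    # here to bypass the GIL for genuinely CPU-bound work
    return await loop.run_in_executor(None, cpu_heavy, 10_000)

result = asyncio.run(main())
print("result:", result)
```

While `cpu_heavy` runs in the executor, the coroutine is suspended at the `await` and the loop can process other I/O.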
***
## **Running Multiple Async Tasks (Concurrency)**
Here is an example of two tasks that **asyncio** runs concurrently, waiting for both to complete with the `gather()` method.
### **Example: Running Multiple Async Tasks in Parallel**
```
import asyncio

async def task1():
    print("Task 1 Start")
    await asyncio.sleep(2)
    print("Task 1 Done")

async def task2():
    print("Task 2 Start")
    await asyncio.sleep(3)
    print("Task 2 Done")

async def main():
    await asyncio.gather(task1(), task2())  # Run both tasks concurrently

asyncio.run(main())
```
- Task 1 takes **2 seconds**.
- Task 2 takes **3 seconds**.
- Total time: **Only 3 seconds instead of 5**.
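The "max instead of sum" behaviour is easy to verify with smaller sleeps (a quick sketch of mine, not from the original post): two awaits of 0.2 s and 0.3 s gathered together finish in roughly 0.3 s, not 0.5 s:

```python
import asyncio
import time

async def main():
    start = time.perf_counter()
    # Both sleeps run concurrently, so total time ≈ the longer one
    await asyncio.gather(asyncio.sleep(0.2), asyncio.sleep(0.3))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"elapsed: {elapsed:.2f}s")
```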
***
## **When to Use What?**
| **Use Multi-Threading 🧵 If:** | **Use Multi-Processing 🖥️ If:** | **Use Async ⚡ If:** |
|---|---|---|
| You have **I/O-bound** tasks | You have **CPU-bound** tasks | You have **high concurrency I/O** tasks |
| Need **lightweight concurrency** | Need **true parallel execution** | Need **thousands of async operations** |
| Examples: **Web scraping, file I/O, database queries** | Examples: **Machine learning, image processing, data analysis** | Examples: **APIs, web scraping, real-time applications** |
Multi-threading, multi-processing and async event-loop comparison
***
## **Real-World Example: Combining Multi-Processing and Async for Heavy I/O + CPU Tasks**
To help you fully grasp the advantages of using these tools in real-world scenarios, let’s look at a practical example: fetching **shopping cart data** from an API (I/O-bound) and then **calculating the total price** of each cart (CPU-heavy).
### **Example: Web Scraping + CPU-Intensive Processing**
```
import asyncio
import time
import aiohttp
import multiprocessing

async def fetch_cart(session, cart_id):
    url = f"https://dummyjson.com/carts/{cart_id}"
    await asyncio.sleep(cart_id)  # Simulate network delay for each cart
    async with session.get(url) as response:
        return await response.json()

def calculate_cart_total_price(cart):
    products = cart["products"]
    time.sleep(cart["id"])  # Simulate CPU-heavy work for each cart
    return cart["id"], sum(product["total"] for product in products)

async def main():
    start_time = time.time()
    cart_ids = [1, 2, 3, 4, 5]
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_cart(session, cart_id) for cart_id in cart_ids]
        responses = await asyncio.gather(*tasks)
    fetching_elapsed_time = time.time() - start_time
    print("All carts fetched in {} seconds, instead of ~{}".format(fetching_elapsed_time, sum(cart_ids)))

    # Use multi-processing for CPU-intensive processing
    with multiprocessing.Pool(processes=5) as pool:
        results = pool.map(calculate_cart_total_price, responses)
    print("Total price of all carts:", sum(result[1] for result in results))

    processing_elapsed_time = time.time() - start_time - fetching_elapsed_time
    print("Calculation done in {} seconds instead of ~{}".format(processing_elapsed_time, sum(cart_ids)))

    total_elapsed_time = time.time() - start_time
    print("Total elapsed time: {} seconds, instead of ~{}".format(total_elapsed_time, sum(cart_ids) * 2))

asyncio.run(main())
```
In this example, multiple shopping carts are fetched concurrently instead of waiting for each request to complete one by one. This significantly reduces the total time spent on I/O operations. Once all the data is retrieved, we use multi-processing to perform the CPU-heavy calculations in parallel across multiple processes, making full use of the available CPU cores.
If we fetched the carts sequentially without async, each request would block execution until it completed, for a total wait of approximately 15 seconds (1+2+3+4+5). Similarly, processing each cart’s total price one after another without multiprocessing would add another 15 seconds, leading to an overall execution time of around **30 seconds**. Thanks to async I/O and multi-processing, the optimized approach reduces this to roughly **10 seconds** (about 5 for fetching and 5 for processing, each bounded by the slowest cart). You can try running this code on your machine to experience firsthand how async I/O and multi-processing work together to optimize performance.
## **Long story short**
- 🧵 **Multi-threading** is great for **I/O-bound** tasks but is **limited by the GIL**.
- 🖥️ **Multi-processing** is best for **CPU-bound** tasks and **bypasses the GIL**.
- ⚡ **Async programming** is perfect for **high-concurrency I/O** (e.g., handling thousands of requests).
So next time you’re wondering **“Which one should I use?”**, just ask yourself:
- Is it **I/O-heavy?** → Use **multi-threading** or **async**.
- Is it **CPU-heavy?** → Use **multi-processing**.
- Do you need to handle **thousands of concurrent I/O-heavy tasks?** → Use **async**, because it is more efficient than creating thousands of threads.
Happy coding!

#### Bird
[View all posts by Bird →](https://mshaeri.com/blog/author/mostafa_shaeri_tj/ "Bird")
### You might also like
### [Game theory model for every thing\!](https://mshaeri.com/blog/game-theory-model-for-every-thing/ "Game theory model for every thing!")
[April 12, 2014May 15, 2019](https://mshaeri.com/blog/game-theory-model-for-every-thing/)
[](https://mshaeri.com/blog/scanned-document-image-preprocessing-for-machine-learning-classification-feature-extraction/)
### [Scanned Document Preprocessing For Classification and Feature Extraction](https://mshaeri.com/blog/scanned-document-image-preprocessing-for-machine-learning-classification-feature-extraction/ "Scanned Document Preprocessing For Classification and Feature Extraction")
[June 7, 2022September 6, 2024](https://mshaeri.com/blog/scanned-document-image-preprocessing-for-machine-learning-classification-feature-extraction/)
[](https://mshaeri.com/blog/polymorphic-deserialization-in-spring-boot-api-with-jaksons-jsontypeinfo-and-jsonsubtypes/)
### [Polymorphic Deserialization In Spring Boot API With Jackson’s JsonTypeInfo And JsonSubTypes](https://mshaeri.com/blog/polymorphic-deserialization-in-spring-boot-api-with-jaksons-jsontypeinfo-and-jsonsubtypes/ "Polymorphic Deserialization In Spring Boot API With Jackson’s JsonTypeInfo And JsonSubTypes")
[March 24, 2024July 14, 2025](https://mshaeri.com/blog/polymorphic-deserialization-in-spring-boot-api-with-jaksons-jsontypeinfo-and-jsonsubtypes/)
## 2 thoughts on “Multi-Threading, Multi-Processing, Async and Event Loop in Python”
1.  **Sebastian** says:
[July 14, 2025 at 10:05 am](https://mshaeri.com/blog/multi-threading-multi-processing-and-async-event-loop-in-python/#comment-2074)
Why Python has threading library when GIL doesn’t allow running a block of code concurrently?
1.  **Bird** says:
[July 27, 2025 at 9:54 am](https://mshaeri.com/blog/multi-threading-multi-processing-and-async-event-loop-in-python/#comment-2113)
Yes, the GIL ensures that only one thread executes Python bytecode at a time. But threading is still useful because:
1\. Many programs are I/O-bound, not CPU-bound.
Threads can release the GIL when waiting for I/O (like file reads, network requests, database calls), so other threads can run.
2\. Threading makes it easier to model certain problems (like concurrent user requests, background tasks, or timers).
Threading is useful in Python for I/O-bound programs, despite the GIL, because the GIL only limits CPU-bound parallelism.
Copyright © 2026 [A Developer Bird Blog](https://mshaeri.com/blog/ "A Developer Bird Blog"). Powered by [WordPress](https://wordpress.org/) and [Bam](https://themezhut.com/themes/bam/). |
| Readable Markdown | In Python, you’ve probably come across terms like **multi-threading, multi-processing, async and event loops**. They can be confusing at first. What should we use? When? Why does Python have multiple ways to do the same thing?
In this post, I’ll break it all down in a way that actually makes sense, and to wrap it up, I’ll show you real-world code examples that demonstrate how these tools can improve performance in your system.
***
## **Multi-Threading (Good for I/O-Bound Tasks)**
Multi-threading is when you run **multiple threads inside the same process**. But because of Python’s **Global Interpreter Lock (GIL)**, only **one thread can execute Python bytecode at a time**. This means multi-threading is NOT good for CPU-heavy tasks but can be useful for I/O-bound operations like **web scraping, file I/O, and API calls**.
### **Example: Multi-Threading for Downloading Web Pages**
```
import threading
import time

def download_page(url):
    print(f"Downloading {url} ...")
    time.sleep(2)  # Simulate network delay
    print(f"Finished {url}")

urls = ["http://example.com/page1", "http://example.com/page2", "http://example.com/page3"]
threads = [threading.Thread(target=download_page, args=(url,)) for url in urls]

for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print("All downloads complete!")
```
- We’re just waiting for the network, so using threads allows the OS to switch between them while a thread is waiting for an I/O task to finish.
- Threads share memory, making them lightweight.
⛔ **Downside:** The GIL prevents true parallel execution for CPU-bound tasks. So, again, don’t use it for calculations or image/data processing tasks.
***
## **Multi-Processing 🖥️ (Best for CPU-Bound Tasks)**
Multi-processing, on the other hand, **spawns multiple processes**, each with its **own memory space**. This means Python can actually run code in **parallel** on multiple CPU cores.
### **Example: Multi-Processing for CPU-Heavy Work**
```
import multiprocessing

def compute_square(n):
    return n * n

if __name__ == "__main__":
    numbers = [1, 2, 3, 4, 5]
    with multiprocessing.Pool(processes=3) as pool:
        results = pool.map(compute_square, numbers)
    print("Squares:", results)
```
- Each process runs independently, **bypassing the GIL**.
- Ideal for CPU-heavy tasks like **image processing, machine learning, and data analysis**.
In multi-processing, processes don’t share memory, so communication between them requires extra effort. Each process is actually a new instance of the Python interpreter, and each one has its own **private memory area**. This is different from multi-threading, where threads share the same memory within a single process.
***
## **Async Event Loop ⚡ (Best for I/O-Heavy & High-Concurrency Tasks)**
Async programming uses an **event loop** to handle **thousands of tasks** efficiently **without blocking**. Instead of waiting (like threads do), an event loop will **switch to another task** while a task is waiting for I/O. In other words, instead of relying on OS-level thread management as in multi-threading, an event loop **switches between tasks cooperatively**: a task voluntarily gives up control when it reaches an `await` statement, and the event loop can then run another task until the awaited result comes back. All of this happens in a single thread.
![Event loop diagram](https://mshaeri.com/blog/wp-content/uploads/2025/03/image.png)
The event loop fetches a task from the queue and gives it the CPU until the task finishes or blocks on an I/O operation.
### **Example: Async Event Loop for Non-Blocking Tasks**
```
import asyncio

async def task():
    print("Start Task")
    await asyncio.sleep(3)  # Non-blocking wait
    print("Task Complete")

async def main():
    print("Before Task")
    await task()
    print("After Task")

asyncio.run(main())
```
Since all tasks run in the same process and thread, they can access the same global variables or objects in memory. But, just like with regular Python code, if you want to safely share data between tasks, you need to manage synchronization with mechanisms like locks or other safe data structures.
- It’s **single-threaded but non-blocking**.
- Ideal for **web scraping, API calls, database queries, and file I/O**.
⛔ Like multi-threading, it’s not good for CPU-heavy tasks (multi-processing is better for that).
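A minimal illustration of why (the `busy_work`/`ticker` names and loop size are assumptions for the sketch): a coroutine that does pure CPU work never reaches an `await`, so it holds the event loop's single thread and every other task is starved until it finishes.

```
import asyncio

events = []

def busy_work():
    # Pure CPU work with no await: the event loop cannot switch to other tasks
    total = 0
    for i in range(5_000_000):
        total += i
    return total

async def ticker():
    events.append("tick")  # scheduled first, but runs only when the loop regains control

async def main():
    task = asyncio.ensure_future(ticker())
    busy_work()                 # blocks the single event-loop thread
    events.append("cpu done")   # reached BEFORE ticker ever ran
    await task

asyncio.run(main())
print(events)  # ['cpu done', 'tick'] -- the CPU-bound call starved the other task
```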
***
## **Running Multiple Async Tasks (Concurrency)**
Here is an example of two tasks that **asyncio** runs concurrently, waiting for both to complete via the `gather()` method.
### **Example: Running Multiple Async Tasks Concurrently**
```
import asyncio

async def task1():
    print("Task 1 Start")
    await asyncio.sleep(2)
    print("Task 1 Done")

async def task2():
    print("Task 2 Start")
    await asyncio.sleep(3)
    print("Task 2 Done")

async def main():
    await asyncio.gather(task1(), task2())  # Run both tasks concurrently

asyncio.run(main())
```
- Task 1 takes **2 seconds**.
- Task 2 takes **3 seconds**.
- Total time: **Only 3 seconds instead of 5**.
***
## **When to Use What?**
| **Use Multi-Threading 🧵 If:** | **Use Multi-Processing 🖥️ If:** | **Use Async ⚡ If:** |
|---|---|---|
| You have **I/O-bound** tasks | You have **CPU-bound** tasks | You have **high concurrency I/O** tasks |
| Need **lightweight concurrency** | Need **true parallel execution** | Need **thousands of async operations** |
| Examples: **Web scraping, file I/O, database queries** | Examples: **Machine learning, image processing, data analysis** | Examples: **APIs, web scraping, real-time applications** |
Multi-threading, multi-processing and async event-loop comparison
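For the first two columns, the standard library's `concurrent.futures` offers a uniform interface, so the choice often reduces to picking the right executor. A quick sketch (the `io_task`/`cpu_task` bodies are stand-ins, not from the post):

```
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def io_task(n):
    time.sleep(0.2)  # stand-in for a network or disk wait
    return n

def cpu_task(n):
    return sum(i * i for i in range(n))  # stand-in for real computation

if __name__ == "__main__":
    # I/O-bound: threads overlap the waits despite the GIL
    with ThreadPoolExecutor(max_workers=4) as ex:
        print(list(ex.map(io_task, [1, 2, 3, 4])))

    # CPU-bound: separate processes sidestep the GIL entirely
    with ProcessPoolExecutor(max_workers=4) as ex:
        print(list(ex.map(cpu_task, [10_000] * 4)))
```

Swapping one executor class for the other is often the only change needed when a workload turns out to be CPU-bound rather than I/O-bound.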
***
## **Real World Example Combining Multi-Processing and Async for Heavy I/O + CPU Tasks**
To help you fully grasp the advantages of using these tools in real-world scenarios, let’s look at a practical example: fetching **shopping cart data** from an API (I/O-bound) and then **calculating the total price** of each cart (CPU-heavy).
### **Example: Web Scraping + CPU-Intensive Processing**
```
import asyncio
import time
import aiohttp
import multiprocessing

async def fetch_cart(session, cart_id):
    url = f"https://dummyjson.com/carts/{cart_id}"
    await asyncio.sleep(cart_id)  # Simulate network delay for each cart
    async with session.get(url) as response:
        return await response.json()

def calculate_cart_total_price(cart):
    products = cart["products"]
    time.sleep(cart["id"])  # Simulate CPU-heavy work for each cart
    return cart["id"], sum(product["total"] for product in products)

async def main():
    start_time = time.time()
    cart_ids = [1, 2, 3, 4, 5]

    # Fetch all carts concurrently (I/O-bound, handled by asyncio)
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_cart(session, cart_id) for cart_id in cart_ids]
        responses = await asyncio.gather(*tasks)
    fetching_elapsed_time = time.time() - start_time
    print("All carts fetched in {:.2f} seconds, instead of ~{}".format(fetching_elapsed_time, sum(cart_ids)))

    # Use multi-processing for the CPU-intensive processing
    with multiprocessing.Pool(processes=5) as pool:
        results = pool.map(calculate_cart_total_price, responses)
    print("Total price of all carts:", sum(result[1] for result in results))
    processing_elapsed_time = time.time() - start_time - fetching_elapsed_time
    print("Calculation done in {:.2f} seconds instead of ~{}".format(processing_elapsed_time, sum(cart_ids)))

    total_elapsed_time = time.time() - start_time
    print("Total elapsed time: {:.2f} seconds, instead of ~{}".format(total_elapsed_time, sum(cart_ids) * 2))

if __name__ == "__main__":
    asyncio.run(main())
```
In this example, multiple shopping carts are fetched concurrently instead of waiting for each request to complete one by one, which significantly reduces the total time spent on I/O. Once all the data is retrieved, multi-processing performs the CPU-heavy calculations in parallel across multiple processes, making full use of the available CPU cores.
If we fetched the carts sequentially without async, each request would block execution until it completed, for a total wait of roughly 15 seconds (1+2+3+4+5). Likewise, processing each cart’s total one after another without multiprocessing would add another 15 seconds, for an overall execution time of around **30 seconds**. Thanks to asyncio and multi-processing, the optimized approach brings this down to roughly **5-6 seconds**. Try running the code on your machine to see firsthand how async I/O and multi-processing work together to optimize performance.
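A variation worth knowing (a sketch under assumed names, not the post's code): `loop.run_in_executor()` lets the event loop hand each CPU-bound calculation to a process pool *without ever blocking*, so fetching and number-crunching can overlap instead of running in two separate phases.

```
import asyncio
from concurrent.futures import ProcessPoolExecutor

def heavy_total(prices):
    return sum(prices)  # stand-in for the CPU-heavy per-cart calculation

async def process_cart(pool, prices):
    loop = asyncio.get_running_loop()
    # Offload to a worker process; the event loop stays free for I/O meanwhile
    return await loop.run_in_executor(pool, heavy_total, prices)

async def main():
    with ProcessPoolExecutor(max_workers=3) as pool:
        carts = [[10, 20], [5, 5, 5], [99]]
        totals = await asyncio.gather(*(process_cart(pool, c) for c in carts))
        print(totals)  # [30, 15, 99]

if __name__ == "__main__":
    asyncio.run(main())
```

With this pattern, a cart that finishes downloading early can start its calculation while slower downloads are still in flight.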
## **Long story short**
- đź§µ **Multi-threading** is great for **I/O-bound** tasks but is **limited by GIL**.
- 🖥️ **Multi-processing** is best for **CPU-bound** tasks and **bypasses GIL**.
- ⚡ **Async programming** is perfect for **high concurrency I/O** (e.g., handling thousands of requests).
So next time you’re wondering **“Which one should I use?”**, just ask yourself:
- Is it **I/O-heavy?** → Use **multi-threading** or **async**.
- Is it **CPU-heavy?** → Use **multi-processing**.
- Do you need to handle **thousands of concurrent I/O-heavy tasks?** → Use **async**, because it is far more efficient than spawning thousands of threads.
Happy coding!