ℹ️ Skipped - page is already crawled

| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.9 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://livebook.manning.com/book/python-concurrency-with-asyncio/chapter-6 |
| Last Crawled | 2026-03-19 20:35:28 (26 days ago) |
| First Indexed | 2023-05-17 21:54:08 (2 years ago) |
| HTTP Status Code | 200 |
| Meta Title | 6 Handling CPU-bound work · Python Concurrency with asyncio |
| Meta Description | The multiprocessing library · Creating process pools to handle CPU-bound work · Using async and await to manage CPU-bound work · Using MapReduce to solve a CPU-intensive problem with asyncio · Handling shared data between multiple processes with locks · Improving the performance of work with both CPU- and I/O-bound operations |
| Meta Canonical | null |
| Boilerpipe Text | The multiprocessing library
Creating process pools to handle CPU-bound work
Using async and await to manage CPU-bound work
Using MapReduce to solve a CPU-intensive problem with asyncio
Handling shared data between multiple processes with locks
Improving the performance of work with both CPU- and I/O-bound operations
Until now, we’ve been focused on performance gains we can get with asyncio when running I/O-bound work concurrently. Running I/O-bound work is asyncio’s bread and butter, and with the way we’ve written code so far, we need to be careful not to run any CPU-bound code in our coroutines. This seems like it severely limits asyncio, but the library is more versatile than just handling I/O-bound work.
asyncio has an API for interoperating with Python’s multiprocessing library. This lets us use async and await syntax as well as asyncio APIs with multiple processes. Using this, we can get the benefits of the asyncio library even when using CPU-bound code. This allows us to achieve performance gains for CPU-intensive work, such as mathematical computations or data processing, letting us sidestep the global interpreter lock and take full advantage of a multicore machine.
6.1 Introducing the multiprocessing library
6.2 Using process pools
6.2.1 Using asynchronous results
6.3 Using process pool executors with asyncio
6.3.1 Introducing process pool executors
6.3.2 Process pool executors with the asyncio event loop
6.4 Solving a problem with MapReduce using asyncio
6.4.1 A simple MapReduce example
6.4.2 The Google Books Ngram dataset
6.4.3 Mapping and reducing with asyncio
6.5 Shared data and locks
6.5.1 Sharing data and race conditions
6.5.2 Synchronizing with locks
6.5.3 Sharing data with process pools
6.6 Multiple processes, multiple event loops
Summary |
| Markdown | chapter six
# 6 Handling CPU-bound work
### This chapter covers
- The multiprocessing library
- Creating process pools to handle CPU-bound work
- Using `async` and `await` to manage CPU-bound work
- Using MapReduce to solve a CPU-intensive problem with asyncio
- Handling shared data between multiple processes with locks
- Improving the performance of work with both CPU- and I/O-bound operations
Until now, we’ve been focused on performance gains we can get with asyncio when running I/O-bound work concurrently. Running I/O-bound work is asyncio’s bread and butter, and with the way we’ve written code so far, we need to be careful not to run any CPU-bound code in our coroutines. This seems like it severely limits asyncio, but the library is more versatile than just handling I/O-bound work.
asyncio has an API for interoperating with Python’s multiprocessing library. This lets us use `async` and `await` syntax as well as asyncio APIs with multiple processes. Using this, we can get the benefits of the asyncio library even when using CPU-bound code. This allows us to achieve performance gains for CPU-intensive work, such as mathematical computations or data processing, letting us sidestep the global interpreter lock and take full advantage of a multicore machine.
## 6.1 Introducing the multiprocessing library
## 6.2 Using process pools
### 6.2.1 Using asynchronous results
## 6.3 Using process pool executors with asyncio
### 6.3.1 Introducing process pool executors
### 6.3.2 Process pool executors with the asyncio event loop
## 6.4 Solving a problem with MapReduce using asyncio
### 6.4.1 A simple MapReduce example
### 6.4.2 The Google Books Ngram dataset
### 6.4.3 Mapping and reducing with asyncio
## 6.5 Shared data and locks
### 6.5.1 Sharing data and race conditions
### 6.5.2 Synchronizing with locks
### 6.5.3 Sharing data with process pools
## 6.6 Multiple processes, multiple event loops
## Summary
 |
| Readable Markdown | - The multiprocessing library
- Creating process pools to handle CPU-bound work
- Using `async` and `await` to manage CPU-bound work
- Using MapReduce to solve a CPU-intensive problem with asyncio
- Handling shared data between multiple processes with locks
- Improving the performance of work with both CPU- and I/O-bound operations
Until now, we’ve been focused on performance gains we can get with asyncio when running I/O-bound work concurrently. Running I/O-bound work is asyncio’s bread and butter, and with the way we’ve written code so far, we need to be careful not to run any CPU-bound code in our coroutines. This seems like it severely limits asyncio, but the library is more versatile than just handling I/O-bound work.
asyncio has an API for interoperating with Python’s multiprocessing library. This lets us use `async` and `await` syntax as well as asyncio APIs with multiple processes. Using this, we can get the benefits of the asyncio library even when using CPU-bound code. This allows us to achieve performance gains for CPU-intensive work, such as mathematical computations or data processing, letting us sidestep the global interpreter lock and take full advantage of a multicore machine.
6.1 Introducing the multiprocessing library
6.2 Using process pools
6.2.1 Using asynchronous results
6.3 Using process pool executors with asyncio
6.3.1 Introducing process pool executors
6.3.2 Process pool executors with the asyncio event loop
6.4 Solving a problem with MapReduce using asyncio
6.4.1 A simple MapReduce example
6.4.2 The Google Books Ngram dataset
6.4.3 Mapping and reducing with asyncio
6.5 Shared data and locks
6.5.1 Sharing data and race conditions
6.5.2 Synchronizing with locks
6.5.3 Sharing data with process pools
6.6 Multiple processes, multiple event loops
Summary |
| Shard | 20 (laksa) |
| Root Hash | 7867033290496853820 |
| Unparsed URL | com,manning!livebook,/book/python-concurrency-with-asyncio/chapter-6 s443 |