ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html |
| Last Crawled | 2026-04-10 05:22:02 (2 days ago) |
| First Indexed | 2025-01-16 19:29:21 (1 year ago) |
| HTTP Status Code | 200 |
| Meta Title | The GIL problem - Rust-Python interoperability |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | The GIL problem
Concurrent, yes, but not parallel
On the surface, our thread-based solution addresses all the issues we identified in the multiprocessing module:
from threading import Thread
from queue import Queue
def word_count(text: str, n_threads: int) -> int:
    result_queue = Queue()
    threads = []
    for chunk in split_into_chunks(text, n_threads):
        t = Thread(target=word_count_task, args=(chunk, result_queue))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    results = [result_queue.get() for _ in range(len(threads))]
    return sum(results)
When a thread is created, we are no longer cloning the text chunk nor incurring the overhead of inter-process communication:
t = Thread(target=word_count_task, args=(chunk, result_queue))
Since the spawned threads share the same memory space as the parent thread, they can access the chunk and result_queue directly.
Nonetheless, there's a major issue with this code: it won't actually use multiple CPU cores. It will run sequentially, even if we pass n_threads > 1 and multiple CPU cores are available.
Python concurrency
You guessed it: the infamous Global Interpreter Lock (GIL) is to blame. As we discussed in the GIL chapter, Python's GIL prevents multiple threads from executing Python code simultaneously.
As a result, thread-based parallelism has historically seen limited use in Python, as it doesn't provide the performance benefits one might expect from a multithreaded application.
That's why the multiprocessing module is so popular: it allows Python developers to bypass the GIL. Each process has its own Python interpreter, and thus its own GIL. The operating system schedules these processes independently, allowing them to run in parallel on multicore CPUs.
But, as we've seen, multiprocessing comes with its own set of challenges.
Native extensions
There's a third way to achieve parallelism in Python: native extensions.
We must be holding the GIL when we invoke a Rust function from Python, but pure Rust threads are not affected by the GIL, as long as they don't need to interact with Python objects.
Let's rewrite our word_count function again, this time in Rust!
Exercise
The exercise for this section is located in 03_concurrency/02_gil
This is the current state of Python's concurrency model. There are some exciting changes on the horizon, though! CPython's free-threading mode is an experimental feature that aims to remove the GIL entirely. It would allow multiple threads to execute Python code simultaneously, without forcing developers to rely on multiprocessing. We won't cover the new free-threading mode in this course, but it's worth keeping an eye on it as it matures out of the experimental phase. |
| Markdown | # Rust-Python interoperability
# [The GIL problem](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#the-gil-problem)
## [Concurrent, yes, but not parallel](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#concurrent-yes-but-not-parallel)
On the surface, our thread-based solution addresses all the issues we identified in the `multiprocessing` module:
```python
from threading import Thread
from queue import Queue

def word_count(text: str, n_threads: int) -> int:
    result_queue = Queue()
    threads = []
    for chunk in split_into_chunks(text, n_threads):
        t = Thread(target=word_count_task, args=(chunk, result_queue))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    results = [result_queue.get() for _ in range(len(threads))]
    return sum(results)
```
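The `split_into_chunks` and `word_count_task` helpers are not defined on this page; the sketch below shows one plausible implementation, inferred purely from their call sites above (the splitting strategy is an assumption):

```python
from queue import Queue

def split_into_chunks(text: str, n_chunks: int) -> list[str]:
    # Split on whitespace so no word straddles a chunk boundary.
    words = text.split()
    size = max(1, len(words) // n_chunks)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def word_count_task(chunk: str, result_queue: Queue) -> None:
    # Count the words in this chunk and report the result to the shared queue.
    result_queue.put(len(chunk.split()))
```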
When a thread is created, we are no longer cloning the text chunk nor incurring the overhead of inter-process communication:
```python
t = Thread(target=word_count_task, args=(chunk, result_queue))
```
Since the spawned threads share the same memory space as the parent thread, they can access the `chunk` and `result_queue` directly.
Nonetheless, there's a major issue with this code: **it won't actually use multiple CPU cores**.
It will run sequentially, even if we pass `n_threads > 1` and multiple CPU cores are available.
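You can verify this empirically by timing a pure-Python, CPU-bound loop with one thread and with several. This is a sketch; exact numbers vary by machine, but on a stock (GIL-enabled) CPython build the multi-threaded run is no faster:

```python
import time
from threading import Thread
from queue import Queue

def cpu_task(n: int, out: Queue) -> None:
    # A pure-Python, CPU-bound loop: the GIL serializes it across threads.
    total = 0
    for i in range(n):
        total += i
    out.put(total)

def timed_run(n_threads: int, work: int = 2_000_000) -> float:
    out = Queue()
    threads = [Thread(target=cpu_task, args=(work // n_threads, out))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

print(f"1 thread:  {timed_run(1):.3f}s")
print(f"4 threads: {timed_run(4):.3f}s")  # roughly the same wall-clock time
```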
## [Python concurrency](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#python-concurrency)
You guessed it: the infamous Global Interpreter Lock (GIL) is to blame. As we discussed in the [GIL chapter](https://rust-exercises.com/rust-python-interop/01_intro/05_gil), Python's GIL prevents multiple threads from executing Python code simultaneously[1](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#footnote-free-threading).
As a result, thread-based parallelism has historically seen limited use in Python, as it doesn't provide the performance benefits one might expect from a multithreaded application.
That's why the `multiprocessing` module is so popular: it allows Python developers to bypass the GIL. Each process has its own Python interpreter, and thus its own GIL. The operating system schedules these processes independently, allowing them to run in parallel on multicore CPUs.
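For comparison, here is a sketch of a process-based version of the same function (`count_words` and the whitespace chunking are assumptions, not code from this course): each chunk is pickled and copied into a worker process, and each worker has its own interpreter and its own GIL, so the work genuinely runs in parallel.

```python
from multiprocessing import Pool

def count_words(chunk: str) -> int:
    return len(chunk.split())

def word_count_mp(text: str, n_processes: int) -> int:
    # Chunks are serialized and sent to worker processes: this is the
    # copying and IPC overhead that the thread-based version avoids.
    words = text.split()
    size = max(1, len(words) // n_processes)
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    with Pool(n_processes) as pool:
        return sum(pool.map(count_words, chunks))

if __name__ == "__main__":
    print(word_count_mp("the quick brown fox jumps", 2))
```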
But, as we've seen, multiprocessing comes with its own set of challenges.
## [Native extensions](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#native-extensions)
There's a third way to achieve parallelism in Python: **native extensions**.
We must [be holding the GIL](https://rust-exercises.com/rust-python-interop/01_intro/05_gil#pythonpy) when we invoke a Rust function from Python, but pure Rust threads are not affected by the GIL, as long as they don't need to interact with Python objects.
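The same principle is already visible in CPython's standard library: many native extensions release the GIL while doing work that doesn't touch Python objects. For example, `hashlib` releases the GIL when hashing buffers larger than a couple of kilobytes, so threads hashing large inputs can run on separate cores. The sketch below illustrates the pattern; each thread still holds the GIL when entering and leaving the native call:

```python
import hashlib
from threading import Thread

def hash_chunk(data: bytes, results: list, index: int) -> None:
    # hashlib's C implementation drops the GIL for large buffers,
    # so these threads can hash in parallel on separate cores.
    results[index] = hashlib.sha256(data).hexdigest()

data = [bytes([i]) * 1_000_000 for i in range(4)]
results = [None] * 4
threads = [Thread(target=hash_chunk, args=(d, results, i))
           for i, d in enumerate(data)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```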
Let's rewrite our `word_count` function again, this time in Rust!
## [Exercise](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#exercise)
The exercise for this section is located in [`03_concurrency/02_gil`](https://github.com/mainmatter/rust-python-interoperability/tree/main/exercises/03_concurrency/02_gil).
***
1. This is the current state of Python's concurrency model. There are some exciting changes on the horizon, though! [`CPython`'s free-threading mode](https://docs.python.org/3/howto/free-threading-python.html) is an experimental feature that aims to remove the GIL entirely. It would allow multiple threads to execute Python code simultaneously, without forcing developers to rely on multiprocessing. We won't cover the new free-threading mode in this course, but it's worth keeping an eye on it as it matures out of the experimental phase. [↩](https://rust-exercises.com/rust-python-interop/03_concurrency/02_gil.html#fr-free-threading-1) |
| Readable Markdown | (same content as the Markdown field) |
| Shard | 189 (laksa) |
| Root Hash | 6762144529815807189 |
| Unparsed URL | com,rust-exercises!/rust-python-interop/03_concurrency/02_gil.html s443 |