ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://jaehong21.com/posts/python/python-gil/ |
| Last Crawled | 2026-04-09 06:32:19 (3 days ago) |
| First Indexed | 2024-05-11 01:01:13 (1 year ago) |
| HTTP Status Code | 200 |
| Meta Title | Python GIL (Global Interpreter Lock) · Home |
| Meta Description | Poor performance of python compare to other programming language is always being an issue, and this cannot be independent of GIL. |
| Meta Canonical | null |
| Boilerpipe Text | Welcome to jaehong21! :tada: / Posts / Python GIL (Global Interpreter Lock)
14 January 2023 · 515 words
Introduction
#
The poor performance of Python compared to other programming languages has always been an issue, and it cannot be considered independently of the GIL. The Global Interpreter Lock is one of the most important concepts to understand when using multithreading in Python.
sysctl hw.physicalcpu hw.logicalcpu
Test environment where the code below was executed:
hw.physicalcpu: 8
hw.logicalcpu: 8
Test code
#
import random
import threading
import time

# Finding max number in random generated array
def working():
    max([random.random() for i in range(500000000)])

# 1 Single thread
s_time = time.time()
working()
working()
e_time = time.time()
print(f'{e_time - s_time:.5f}')

# 2 Double threads
s_time = time.time()
threads = []
for i in range(2):
    threads.append(threading.Thread(target=working))
    threads[-1].start()
for t in threads:
    t.join()
e_time = time.time()
print(f'{e_time - s_time:.5f}')
Result:
Single thread:
70.46266
Double threads:
103.42579
Naively, we would expect multiple threads to run faster than a single thread. Ironically, in Python the double-threaded run performs worse than the single-threaded one. That is because of the GIL.
GIL
#
The Global Interpreter Lock is a lock that allows the Python interpreter to execute only one thread of bytecode at a time. It gives all resources to one thread and locks the others out, preventing them from running. It is similar to a mutex lock in concurrent programming.
Suppose we run three threads. In general, we would expect each thread to work in parallel, but because of the GIL they do not. Below is an example of three threads operating in Python.
Each thread acquires the GIL and works while all the other threads stop. In addition, context switching between threads is itself time-consuming compared to single-threaded operation.
Why Python uses the GIL
#
So why does Python use a GIL that slows it down so much? Because it makes reference counting much more efficient. Python manages its memory with garbage collection and reference counting.
In other words, Python counts how many references each object has. If multiple threads could touch a single object at once, a lock on every single object would be needed to keep its reference count correct. To avoid that, Python acquires and releases one global lock instead.
Multithreading in Python is not always slow
#
import random
import threading
import time

def working():
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)

# 1 Thread
s_time = time.time()
working()
working()
e_time = time.time()
print(f'{e_time - s_time:.5f}')

# 2 Threads
s_time = time.time()
threads = []
for i in range(2):
    threads.append(threading.Thread(target=working))
    threads[-1].start()
for t in threads:
    t.join()
e_time = time.time()
print(f'{e_time - s_time:.5f}')
Result:
Single thread:
6.93310
Double threads:
6.05917
This time, the double-threaded run is actually faster than the single-threaded one. The reason is the sleep() calls. While a single thread is in sleep, nothing else can run. In the multithreaded run, on the other hand, a context switch can happen during sleep.
In real-world code, where threads must wait on operations such as I/O rather than sleep, multithreading can deliver better performance even with the GIL.
Reference
#
https://ssungkang.tistory.com/entry/python-GIL-Global-interpreter-Lock%EC%9D%80-%EB%AC%B4%EC%97%87%EC%9D%BC%EA%B9%8C |
| Markdown | [Home](https://jaehong21.com/)
- [Posts](https://jaehong21.com/posts/ "Posts")
- [Projects](https://jaehong21.com/projects/ "Projects")
- [About Me](https://jaehong21.com/about/ "About me")
1. [Welcome to jaehong21! :tada:](https://jaehong21.com/)/
2. [Posts](https://jaehong21.com/posts/)/
3. [Python GIL (Global Interpreter Lock)](https://jaehong21.com/posts/python/python-gil/)/
# Python GIL (Global Interpreter Lock)
14 January 2023
·515 words
[Python](https://jaehong21.com/tags/python/) [Concurrency](https://jaehong21.com/tags/concurrency/)
Table of Contents
- [Introduction](https://jaehong21.com/posts/python/python-gil/#introduction)
- [Test code](https://jaehong21.com/posts/python/python-gil/#test-code)
- [GIL](https://jaehong21.com/posts/python/python-gil/#gil)
- [Why Python uses the GIL](https://jaehong21.com/posts/python/python-gil/#reason-why-using-gil)
- [Multithreading in Python is not always slow](https://jaehong21.com/posts/python/python-gil/#multi-thread-in-python-is-not-always-slow)
- [Reference](https://jaehong21.com/posts/python/python-gil/#reference)
## Introduction [\#](https://jaehong21.com/posts/python/python-gil/#introduction)
The poor performance of Python compared to other programming languages has always been an issue, and it cannot be considered independently of the GIL. The **Global Interpreter Lock** is one of the most important concepts to understand when using multithreading in Python.
```
sysctl hw.physicalcpu hw.logicalcpu
```
Test environment where the code below was executed:
`hw.physicalcpu: 8` `hw.logicalcpu: 8`
## Test code [\#](https://jaehong21.com/posts/python/python-gil/#test-code)
```
import random
import threading
import time

# Finding max number in random generated array
def working():
    max([random.random() for i in range(500000000)])

# 1 Single thread
s_time = time.time()
working()
working()
e_time = time.time()
print(f'{e_time - s_time:.5f}')

# 2 Double threads
s_time = time.time()
threads = []
for i in range(2):
    threads.append(threading.Thread(target=working))
    threads[-1].start()
for t in threads:
    t.join()
e_time = time.time()
print(f'{e_time - s_time:.5f}')
```
Result: Single thread: `70.46266` Double threads: `103.42579`
Naively, we would expect multiple threads to run faster than a single thread. Ironically, in Python the double-threaded run performs worse than the single-threaded one. That is because of the **GIL**.
## GIL [\#](https://jaehong21.com/posts/python/python-gil/#gil)
The Global Interpreter Lock is a lock that allows the Python interpreter to execute only one thread of bytecode at a time. It gives all resources to one thread and locks the others out, preventing them from running. It is similar to a mutex lock in concurrent programming.
Suppose we run three threads. In general, we would expect each thread to work in parallel, but because of the GIL they do not. Below is an example of three threads operating in Python.

Each thread acquires the GIL and works while all the other threads stop. In addition, [context switching](https://www.ibm.com/docs/en/zvm/7.2?topic=exits-context-switching) between threads is itself time-consuming compared to single-threaded operation.
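The cadence of this forced hand-off can be inspected directly. A small sketch using the standard `sys` API; the 0.005 s figure is CPython's documented default and may differ if it has been changed:

```python
import sys

# CPython lets the running thread hold the GIL for at most this many
# seconds before asking it to yield to other threads (default 0.005)
print(sys.getswitchinterval())
```

The interval can also be tuned with `sys.setswitchinterval`, trading switch latency against switching overhead.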
## Why Python uses the GIL [\#](https://jaehong21.com/posts/python/python-gil/#reason-why-using-gil)
So why does Python use a GIL that slows it down so much? Because it makes reference counting much more efficient. Python manages its memory with **garbage collection** and **reference counting**.
In other words, Python counts how many references each object has. If multiple threads could touch a single object at once, a **lock** on every single object would be needed to keep its reference count correct. To avoid that, Python acquires and releases one global lock instead.
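These reference counts can be observed with the standard `sys.getrefcount` function. A minimal sketch:

```python
import sys

data = []
alias = data  # a second reference to the same list object
# getrefcount reports at least 3 here: data, alias, and the
# temporary reference created by passing data into the call itself
count = sys.getrefcount(data)
print(count)
```

Every assignment, function call, and container insertion bumps a count like this one, which is why doing it without a global lock would require fine-grained locking everywhere.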
## Multithreading in Python is not always slow [\#](https://jaehong21.com/posts/python/python-gil/#multi-thread-in-python-is-not-always-slow)
```
import random
import threading
import time

def working():
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)

# 1 Thread
s_time = time.time()
working()
working()
e_time = time.time()
print(f'{e_time - s_time:.5f}')

# 2 Threads
s_time = time.time()
threads = []
for i in range(2):
    threads.append(threading.Thread(target=working))
    threads[-1].start()
for t in threads:
    t.join()
e_time = time.time()
print(f'{e_time - s_time:.5f}')
```
Result: Single thread: `6.93310` Double threads: `6.05917`
This time, the double-threaded run is actually faster than the single-threaded one. The reason is the `sleep()` calls. While a single thread is in `sleep`, nothing else can run. In the multithreaded run, on the other hand, a context switch can happen during `sleep`.
In real-world code, where threads must wait on operations such as **I/O** rather than `sleep`, multithreading can deliver better performance even with the GIL.
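For instance, fanning several I/O-style waits out across a thread pool finishes in roughly the time of one wait, because a sleeping thread releases the GIL. A minimal sketch with the standard `concurrent.futures` module; the 0.2 s sleeps are a stand-in for real I/O:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(_):
    time.sleep(0.2)  # stand-in for a network call or disk read
    return 1

s_time = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(io_task, range(4)))
e_time = time.time()
# roughly 0.2 s total rather than 0.8 s sequentially, since each
# sleeping thread releases the GIL for the others
print(f'{e_time - s_time:.5f}')
```

The same pattern applies to `requests` calls, socket reads, or file I/O: whenever a thread blocks outside the interpreter, the GIL is released.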
## Reference [\#](https://jaehong21.com/posts/python/python-gil/#reference)
1. [https://ssungkang.tistory.com/entry/python-GIL-Global-interpreter-Lock%EC%9D%80-%EB%AC%B4%EC%97%87%EC%9D%BC%EA%B9%8C](https://ssungkang.tistory.com/entry/python-GIL-Global-interpreter-Lock%EC%9D%80-%EB%AC%B4%EC%97%87%EC%9D%BC%EA%B9%8C)

Author
Jaehong Jung
Currently working as a DevOps engineer at ChannelTalk, and love programming as a hobby.
***
[← What is NAT instance? 28 September 2022](https://jaehong21.com/posts/aws/nat-instance/)
[Concurrent Database Connection and Query execution in Rust 5 March 2023 →](https://jaehong21.com/posts/concurrent-postgres-query/)
© 2026 Jaehong Jung
Powered by [Hugo](https://gohugo.io/) & [Congo](https://github.com/jpanther/congo)
EN
[English](https://jaehong21.com/posts/python/python-gil/)
[Korean](https://jaehong21.com/ko/posts/python/python-gil/) |
| Readable Markdown | 1. [Welcome to jaehong21! :tada:](https://jaehong21.com/)/
2. [Posts](https://jaehong21.com/posts/)/
3. [Python GIL (Global Interpreter Lock)](https://jaehong21.com/posts/python/python-gil/)/
14 January 2023·515 words
## Introduction [\#](https://jaehong21.com/posts/python/python-gil/#introduction)
The poor performance of Python compared to other programming languages has always been an issue, and it cannot be considered independently of the GIL. The **Global Interpreter Lock** is one of the most important concepts to understand when using multithreading in Python.
```
sysctl hw.physicalcpu hw.logicalcpu
```
Test environment where the code below was executed:
`hw.physicalcpu: 8` `hw.logicalcpu: 8`
## Test code [\#](https://jaehong21.com/posts/python/python-gil/#test-code)
```
import random
import threading
import time

# Finding max number in random generated array
def working():
    max([random.random() for i in range(500000000)])

# 1 Single thread
s_time = time.time()
working()
working()
e_time = time.time()
print(f'{e_time - s_time:.5f}')

# 2 Double threads
s_time = time.time()
threads = []
for i in range(2):
    threads.append(threading.Thread(target=working))
    threads[-1].start()
for t in threads:
    t.join()
e_time = time.time()
print(f'{e_time - s_time:.5f}')
```
Result: Single thread: `70.46266` Double threads: `103.42579`
Naively, we would expect multiple threads to run faster than a single thread. Ironically, in Python the double-threaded run performs worse than the single-threaded one. That is because of the **GIL**.
## GIL [\#](https://jaehong21.com/posts/python/python-gil/#gil)
The Global Interpreter Lock is a lock that allows the Python interpreter to execute only one thread of bytecode at a time. It gives all resources to one thread and locks the others out, preventing them from running. It is similar to a mutex lock in concurrent programming.
Suppose we run three threads. In general, we would expect each thread to work in parallel, but because of the GIL they do not. Below is an example of three threads operating in Python.

Each thread acquires the GIL and works while all the other threads stop. In addition, [context switching](https://www.ibm.com/docs/en/zvm/7.2?topic=exits-context-switching) between threads is itself time-consuming compared to single-threaded operation.
## Why Python uses the GIL [\#](https://jaehong21.com/posts/python/python-gil/#reason-why-using-gil)
So why does Python use a GIL that slows it down so much? Because it makes reference counting much more efficient. Python manages its memory with **garbage collection** and **reference counting**.
In other words, Python counts how many references each object has. If multiple threads could touch a single object at once, a **lock** on every single object would be needed to keep its reference count correct. To avoid that, Python acquires and releases one global lock instead.
## Multithreading in Python is not always slow [\#](https://jaehong21.com/posts/python/python-gil/#multi-thread-in-python-is-not-always-slow)
```
import random
import threading
import time

def working():
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)
    max([random.random() for i in range(10000000)])
    time.sleep(0.1)

# 1 Thread
s_time = time.time()
working()
working()
e_time = time.time()
print(f'{e_time - s_time:.5f}')

# 2 Threads
s_time = time.time()
threads = []
for i in range(2):
    threads.append(threading.Thread(target=working))
    threads[-1].start()
for t in threads:
    t.join()
e_time = time.time()
print(f'{e_time - s_time:.5f}')
```
Result: Single thread: `6.93310` Double threads: `6.05917`
This time, the double-threaded run is actually faster than the single-threaded one. The reason is the `sleep()` calls. While a single thread is in `sleep`, nothing else can run. In the multithreaded run, on the other hand, a context switch can happen during `sleep`.
In real-world code, where threads must wait on operations such as **I/O** rather than `sleep`, multithreading can deliver better performance even with the GIL.
## Reference [\#](https://jaehong21.com/posts/python/python-gil/#reference)
1. [https://ssungkang.tistory.com/entry/python-GIL-Global-interpreter-Lock%EC%9D%80-%EB%AC%B4%EC%97%87%EC%9D%BC%EA%B9%8C](https://ssungkang.tistory.com/entry/python-GIL-Global-interpreter-Lock%EC%9D%80-%EB%AC%B4%EC%97%87%EC%9D%BC%EA%B9%8C) |
| Shard | 4 (laksa) |
| Root Hash | 4087976041225119604 |
| Unparsed URL | com,jaehong21!/posts/python/python-gil/ s443 |