🕷️ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 194 (from laksa013)

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ℹ️ Skipped - page is already crawled

📄 INDEXABLE · ✅ CRAWLED (4 days ago) · 🤖 ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.2 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

Page Details

| Property | Value |
|---|---|
| URL | https://blog.dailydoseofds.com/p/what-is-was-gil-in-python |
| Last Crawled | 2026-04-02 15:07:28 (4 days ago) |
| First Indexed | 2024-10-17 13:44:56 (1 year ago) |
| HTTP Status Code | 200 |
| Meta Title | What is (was?) GIL in Python? - by Avi Chawla |
| Meta Description | Update from Python 3.13. |
| Meta Canonical | null |
Boilerpipe Text
Python 3.13 was released recently. Of the many interesting updates (which I intend to cover this week), the one getting the most attention is that you can now disable the GIL (global interpreter lock). However, before I can explain what this update means, it’s essential to understand what the GIL is in the first place and why Python has been using it so far. Let’s dive in!

A process is isolated from other processes and operates in its own memory space. This isolation means that if one process crashes, it typically does not affect other processes. Multi-threading occurs when a single process has multiple threads; these threads share the same resources, like memory.

Simply put, the GIL (global interpreter lock) restricts a process from running more than one thread at a time. In other words, while a process can have multiple threads, only one can run at a given time. Quite evidently, the process cannot use multiple CPU cores, so multi-threading delivers performance similar to (or even worse than) single-threading. Let me show you this real quick with a code demo!

First, we start with some imports and define a long-running function. For single-threading, we invoke the same function twice in sequence. With multi-threading, we instead create two threads, one per invocation; this performs hardly any better than single-threading. The reason? The GIL. As you might expect, though, we do get a run-time boost with multi-processing.

The three scenarios (single-threading, multi-threading, and multi-processing) can be summarized as follows:

- Single-threading: a single thread executes the same function twice, in order.
- Multi-threading: each thread is assigned the job of executing the function once.
Due to the GIL, however, only one thread can run at a time.

- Multi-processing: each function is executed under a different process.

If this is clear, you might have two questions now.

1) Why has Python been using the GIL even when it is clearly suboptimal? Thread safety. When multiple threads run in a process and share the same resources (such as memory), problems can arise when they access and modify the same data. For instance, say we have a Python list and we run two operations on it with two threads, t1 and t2. If t1 runs before t2, we get one output; if t2 runs before t1, we get a different one. More formally, this can lead to race conditions, where the outcome depends on the timing of the threads’ execution. If these are not carefully controlled, the program behaves unpredictably. This, along with a few more reasons, made it convenient to enforce that only one thread can execute at any given time. On a side note, the GIL mostly affects CPU-bound tasks, not I/O-bound tasks, where multi-threading can still be useful.

2) If multi-processing works, why not use it as a workaround? This is easier said than done. Unlike threads, which share the same memory space, processes are isolated. As a result, they cannot directly share data the way threads do. While there are inter-process communication (IPC) mechanisms, such as pipes, queues, or shared memory, to exchange information between processes, they add a ton of complexity.

Thankfully, Python 3.13 allows us to disable the GIL, which means a process can fully utilize all CPU cores. I have been testing Python 3.13 lately, so I intend to share these updates in a detailed newsletter issue this week.

👉 Over to you: what are some other reasons for enforcing the GIL in Python?

That said, if you want to get hands-on with actual GPU programming using CUDA, including how CUDA organizes a GPU’s threads, blocks, and grids (with visuals), we covered it here: Implementing (Massively) Parallelized CUDA Programs From Scratch Using CUDA Programming.

At the end of the day, all businesses care about impact. That’s it!
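The article's timing demo survives in the crawled page only as image links, so no code reached the index. A minimal reconstruction under assumed details (the function body, iteration count, and use of two workers are all guesses chosen to make the GIL effect visible):

```python
import time
from threading import Thread
from multiprocessing import Process

def crunch(n: int = 5_000_000) -> None:
    """A CPU-bound busy loop standing in for the article's 'long function'."""
    total = 0
    for i in range(n):
        total += i

def timed(label, fn):
    # Report wall-clock time for one scenario.
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

def single_threaded():
    # Invoke the same function twice, in order.
    crunch()
    crunch()

def multi_threaded():
    # One thread per invocation -- but the GIL serializes their bytecode.
    threads = [Thread(target=crunch) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def multi_process():
    # One process per invocation -- each gets its own interpreter and GIL.
    procs = [Process(target=crunch) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    timed("single-threading ", single_threaded)
    timed("multi-threading  ", multi_threaded)
    timed("multi-processing ", multi_process)
```

On a machine with two or more cores, the multi-threaded run takes about as long as the single-threaded one, while the multi-process run takes roughly half the time; exact numbers depend on the interpreter build and hardware.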
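The shared-list race-condition example above also survives only as images. A sketch with assumed element values and operations, serialized explicitly here to reproduce both order-dependent outputs (in a real race, the scheduler picks the order for you):

```python
from threading import Thread

def run(order: str) -> list:
    """Run two operations on a shared list in a forced order."""
    data = [1, 2, 3]

    def t1():
        # Operation 1: append a new element.
        data.append(4)

    def t2():
        # Operation 2: double every element currently in the list.
        for i in range(len(data)):
            data[i] *= 2

    first, second = (t1, t2) if order == "t1-first" else (t2, t1)
    for fn in (first, second):
        th = Thread(target=fn)
        th.start()
        th.join()  # join immediately to force this ordering
    return data

print(run("t1-first"))  # [2, 4, 6, 8]
print(run("t2-first"))  # [2, 4, 6, 4]
```

Same two operations, different final list depending solely on which thread ran first; that order dependence is exactly what the GIL (plus explicit synchronization) is meant to keep from corrupting interpreter state.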
Can you reduce costs? Drive revenue? Scale ML models? Predict trends before they happen? We have discussed several other topics (with implementations) that build exactly these skills. Develop "Industry ML" Skills. Here are some of them:

- Learn sophisticated graph architectures and how to train them on graph data: A Crash Course on Graph Neural Networks – Part 1
- Learn techniques to run large models on small devices: Quantization: Optimize ML Models to Run Them on Tiny Hardware
- Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust: Conformal Predictions: Build Confidence in Your ML Model’s Predictions
- Learn how to identify causal relationships and answer business questions: A Crash Course on Causality – Part 1
- Learn how to scale ML model training: A Practical Guide to Scaling ML Model Training
- Learn techniques to reliably roll out new models in production: 5 Must-Know Ways to Test ML Models in Production (Implementation Included)
- Learn how to build privacy-first ML systems: Federated Learning: A Critical Step Towards Privacy-Preserving Machine Learning
- Learn how to compress ML models and reduce costs: Model Compression: A Critical Step Towards Efficient Machine Learning

All these resources will help you cultivate the key skills that businesses and companies care about the most.

Get your product in front of 100,000 data scientists and other tech professionals. Our newsletter puts your products and services directly in front of an audience that matters: thousands of leaders, senior data scientists, machine learning engineers, data analysts, and others who influence significant tech decisions and big purchases. To ensure your product reaches this influential audience, reserve your space here or reply to this email.

Subscribe to Daily Dose of Data Science: a free newsletter for continuous learning about data science and ML, lesser-known techniques, and how to apply them in 2 minutes. We keep things no-fluff. Join 100,000+ data scientists from top companies like Google, NVIDIA, Microsoft, Uber, etc.
Readable Markdown
Python 3.13 was released recently. Of the many interesting updates (which I intend to cover this week), the update that you can disable GIL (global interpreter lock) is getting the most attention. [![](https://substackcdn.com/image/fetch/$s_!a596!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30758488-8461-46fc-a074-36dc1c030f3e_1810x425.png)](https://substackcdn.com/image/fetch/$s_!a596!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30758488-8461-46fc-a074-36dc1c030f3e_1810x425.png) However, even before I can explain what this update means, it’s essential to understand what GIL is in the first place and why Python had been using it so far. Let’s dive in\! - **A process** is isolated from other processes and operates in its own memory space. This isolation means that if one process crashes, it typically does not affect other processes. [![](https://substackcdn.com/image/fetch/$s_!OB-o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F080909c4-5b26-408c-b3d1-ed7b865af2df_2460x687.png)](https://substackcdn.com/image/fetch/$s_!OB-o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F080909c4-5b26-408c-b3d1-ed7b865af2df_2460x687.png) - **Multi-threading** occurs when a single process has multiple threads. These threads share the same resources, like memory. 
[![](https://substackcdn.com/image/fetch/$s_!liC2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3f1512a-4b70-45d6-9c46-0363db5d1400_2302x453.png)](https://substackcdn.com/image/fetch/$s_!liC2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3f1512a-4b70-45d6-9c46-0363db5d1400_2302x453.png) Simply put, GIL (global interpreter lock) restricts a process from running more than ONE thread at a time, as depicted below: [![](https://substackcdn.com/image/fetch/$s_!-4S5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe76c641f-6428-4247-8aad-6140949dc850_2352x792.png)](https://substackcdn.com/image/fetch/$s_!-4S5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe76c641f-6428-4247-8aad-6140949dc850_2352x792.png) In other words, while a process can have multiple threads, ONLY ONE can run at a given time. Quite evidently, the process cannot use multiple CPU cores for performance optimization, which means multi-threading leads to similar (or even poor) performance as single-threading. Let me show you this real quick with a code demo\! 
- First, we start with some imports and define a long function: [![](https://substackcdn.com/image/fetch/$s_!Kt6L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71b65949-840f-4571-9775-3887221e946b_2012x996.png)](https://substackcdn.com/image/fetch/$s_!Kt6L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71b65949-840f-4571-9775-3887221e946b_2012x996.png) - The code for single threading, wherein we invoke the same function twice, is demonstrated below: [![](https://substackcdn.com/image/fetch/$s_!EbY8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0172e62-ff7a-416a-93a8-fb670026cc0e_2208x1048.png)](https://substackcdn.com/image/fetch/$s_!EbY8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0172e62-ff7a-416a-93a8-fb670026cc0e_2208x1048.png) - With multi-threading, however, we can create two threads, one for each function. This is demonstrated below: [![](https://substackcdn.com/image/fetch/$s_!BQVj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ef2d395-6366-457a-a836-00b31d9ebd98_2364x1512.png)](https://substackcdn.com/image/fetch/$s_!BQVj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ef2d395-6366-457a-a836-00b31d9ebd98_2364x1512.png) As shown above, this performs hardly any better than single-threading. The reason? 
**GIL.**

By the way, as you might expect, we do experience a run-time boost with multi-processing:

![](https://substackcdn.com/image/fetch/$s_!Oa5l!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06b9912a-a881-4f15-afcf-009f415c0baa_2428x1548.png)

The above three scenarios (single-threading, multi-threading, and multi-processing) can be explained visually as follows:

- **Single-threading**: A single thread executes the same function twice, in order.

  ![](https://substackcdn.com/image/fetch/$s_!nIgx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbbbe15b-dfed-4d96-b633-8933db1e413d_2718x509.png)

- **Multi-threading**: Each thread is assigned the job of executing the function once. Due to GIL, however, only one thread can run at a time:

  ![](https://substackcdn.com/image/fetch/$s_!C7Vn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe5303ed-0663-4852-a561-e86a71cadb51_2718x972.png)

- **Multi-processing**: Each function call is executed under a different process:

  ![](https://substackcdn.com/image/fetch/$s_!ke28!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbaabab05-d6a5-4bf8-9beb-dcdea438ee22_2718x817.png)

If this is clear, you might have two questions now: why did Python enforce GIL in the first place, and why not just use multi-processing instead?

The answer to the first is **thread safety**. When multiple threads run in a process and share the same resources (such as memory), problems can arise when they try to access and modify the same data.
For instance, consider we have a Python list and we want to run two operations on it with two threads:

![](https://substackcdn.com/image/fetch/$s_!iMUr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0389b108-2905-4516-a47e-3f7991c2d4fe_2555x483.png)

- If t1 runs before t2, we get the following output:

  ![](https://substackcdn.com/image/fetch/$s_!-7ww!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774a21ac-355f-48c7-9191-62aaf3148a96_4250x279.png)

- If t2 runs before t1, we get the following output:

  ![](https://substackcdn.com/image/fetch/$s_!BxTJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ff5c8bc-a3ee-4bcf-be72-33b99479c9ce_4536x302.png)

Different outputs!

More formally, this can lead to **race conditions**, where the outcome depends on the timing of the threads' execution. If these are not carefully controlled, it can lead to unpredictable behavior. This, along with a few more reasons, just made it convenient to enforce that **only one thread** can execute at any given time.

> On a side note, GIL usually affects CPU-bound tasks and not I/O-bound tasks, where multi-threading can still be useful.
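The exact list operations in the screenshots aren't reproduced here, so the sketch below uses two hypothetical ones (`t1` appends to the shared list while `t2` sums it) to show the same order-dependence:

```python
import threading

data = [1, 2, 3]
results = {}

def t1_job():
    # Hypothetical operation 1: mutate the shared list.
    data.append(4)

def t2_job():
    # Hypothetical operation 2: read the shared list.
    results["sum"] = sum(data)

t1 = threading.Thread(target=t1_job)
t2 = threading.Thread(target=t2_job)

# Depending on which thread the scheduler runs first, t2 sees either
# [1, 2, 3] (sum = 6) or [1, 2, 3, 4] (sum = 10).
t1.start()
t2.start()
t1.join()
t2.join()

print(results["sum"])  # 6 or 10, depending on thread timing
```

Note that even with the GIL, the interleaving of the two threads is not deterministic; the GIL only guarantees that one bytecode-executing thread runs at a time, not any particular ordering.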
As for the second question, replacing threads with processes is easier said than done. Unlike threads, which share the same memory space, **processes are isolated**.

![](https://substackcdn.com/image/fetch/$s_!OB-o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F080909c4-5b26-408c-b3d1-ed7b865af2df_2460x687.png)

As a result, they cannot directly share data the way threads do. While there are inter-process communication (IPC) mechanisms like pipes, queues, or shared memory to exchange information between processes, they add a ton of complexity.

Thankfully, Python 3.13 (through an experimental free-threaded build) allows us to disable GIL, which means a process can fully utilize all CPU cores. I have been testing Python 3.13 lately, so I intend to share these updates in a detailed newsletter issue this week.

👉 Over to you: What are some other reasons for enforcing GIL in Python?

That said, if you want to get hands-on with actual GPU programming using CUDA and learn how CUDA organizes a GPU's threads, blocks, and grids (with visuals), we covered it here: **[Implementing (Massively) Parallelized CUDA Programs From Scratch Using CUDA Programming](https://www.dailydoseofds.com/implementing-massively-parallelized-cuda-programs-from-scratch-using-cuda-programming/)**.

![](https://substackcdn.com/image/fetch/$s_!cn8y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F939bede7-b0de-4770-a3e9-34d39488e776_2733x1020.png)

At the end of the day, all businesses care about *impact*. That's it!

- Can you reduce costs?
- Can you drive revenue?
- Can you scale ML models?
- Can you predict trends before they happen?

We have discussed several topics (with implementations) in the past that align with these goals.

[Develop "Industry ML" Skills](https://www.dailydoseofds.com/membership)

Here are some of them:

- Learn sophisticated graph architectures and how to train them on graph data: [A Crash Course on Graph Neural Networks – Part 1](https://www.dailydoseofds.com/a-crash-course-on-graph-neural-networks-implementation-included/)
- Learn techniques to run large models on small devices: [Quantization: Optimize ML Models to Run Them on Tiny Hardware](https://www.dailydoseofds.com/quantization-optimize-ml-models-to-run-them-on-tiny-hardware/)
- Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust: [Conformal Predictions: Build Confidence in Your ML Model’s Predictions](https://www.dailydoseofds.com/conformal-predictions-build-confidence-in-your-ml-models-predictions/)
- Learn how to identify causal relationships and answer business questions: [A Crash Course on Causality – Part 1](https://www.dailydoseofds.com/a-crash-course-on-causality-part-1/)
- Learn how to scale ML model training: [A Practical Guide to Scaling ML Model Training](https://www.dailydoseofds.com/how-to-scale-model-training/)
- Learn techniques to reliably roll out new models in production: [5 Must-Know Ways to Test ML Models in Production (Implementation Included)](https://www.dailydoseofds.com/5-must-know-ways-to-test-ml-models-in-production-implementation-included/)
- Learn how to build privacy-first ML systems: [Federated Learning: A Critical Step Towards Privacy-Preserving Machine Learning](https://www.dailydoseofds.com/federated-learning-a-critical-step-towards-privacy-preserving-machine-learning/)
- Learn how to compress ML models and reduce costs: [Model Compression: A Critical Step Towards Efficient Machine Learning](https://www.dailydoseofds.com/model-compression-a-critical-step-towards-efficient-machine-learning/)

All these resources will help you cultivate the key skills that businesses and companies care about the most.

Get your product in front of 100,000 data scientists and other tech professionals. Our newsletter puts your products and services directly in front of an audience that matters — thousands of leaders, senior data scientists, machine learning engineers, data analysts, etc., who have influence over significant tech decisions and big purchases. To reach this influential audience, reserve your space **[here](https://scorecard.dailydoseofds.com/sponsorship-assessment)** or reply to this email.

#### Subscribe to Daily Dose of Data Science

A free newsletter for continuous learning about data science and ML, lesser-known techniques, and how to apply them in 2 minutes. We keep things no-fluff. Join 100,000+ data scientists from top companies like Google, NVIDIA, Microsoft, Uber, etc.