🕷️ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 87 (from laksa163)

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ℹ️ Skipped - page is already crawled

📄 INDEXABLE · ✅ CRAWLED (1 day ago) · 🤖 ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | `download_http_code = 200` | HTTP 200 |
| Age cutoff | PASS | `download_stamp > now() - 6 MONTH` | 0 months ago |
| History drop | PASS | `isNull(history_drop_reason)` | No drop reason |
| Spam/ban | PASS | `fh_dont_index != 1 AND ml_spam_score = 0` | ml_spam_score=0 |
| Canonical | PASS | `meta_canonical IS NULL OR = '' OR = src_unparsed` | Not set |
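The filter chain above reads as a conjunction of predicates over the crawl record. A rough sketch of that evaluation, assuming field names taken directly from the table (the real pipeline and schema are not visible in this output):

```python
from datetime import datetime, timedelta

# Hypothetical predicate form of the filter table; field names mirror the
# conditions shown above, but the actual evaluation code is not part of
# this inspector output.
def is_indexable(page, now=None):
    now = now or datetime(2026, 4, 17)  # "now" pinned near the crawl date for determinism
    checks = {
        "http_status": page["download_http_code"] == 200,
        "age_cutoff": page["download_stamp"] > now - timedelta(days=183),  # ~6 months
        "history_drop": page.get("history_drop_reason") is None,
        "spam_ban": page.get("fh_dont_index") != 1 and page.get("ml_spam_score", 0) == 0,
        "canonical": page.get("meta_canonical") in (None, "", page["src_unparsed"]),
    }
    return all(checks.values()), checks

# Record built from the Page Details values shown in this report.
page = {
    "download_http_code": 200,
    "download_stamp": datetime(2026, 4, 16, 10, 29, 24),
    "history_drop_reason": None,
    "fh_dont_index": 0,
    "ml_spam_score": 0,
    "meta_canonical": None,
    "src_unparsed": "com,opensourceoptions!/asynchronous-parallel-programming-in-python-with-multiprocessing/ s443",
}
ok, detail = is_indexable(page)
print(ok)  # → True: all five filters pass, matching the table
```

With this page record every predicate holds, which is consistent with the five PASS rows above.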

Page Details

| Property | Value |
|---|---|
| URL | https://opensourceoptions.com/asynchronous-parallel-programming-in-python-with-multiprocessing/ |
| Last Crawled | 2026-04-16 10:29:24 (1 day ago) |
| First Indexed | 2023-09-16 10:55:37 (2 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Asynchronous Parallel Programming in Python with Multiprocessing – OpenSourceOptions |
| Meta Description | null |
| Meta Canonical | null |
Boilerpipe Text
A flexible method to speed up code on a personal computer

Do you wish your Python scripts could run faster? Maybe they can. And you (probably) won’t have to buy a new computer or use a supercomputer. Most modern computers contain multiple processing cores but, by default, Python scripts only use a single core. Writing code that can run on multiple processors can really decrease your processing time. This article will demonstrate how to use the multiprocessing module to write parallel code that uses all of your machine’s processors and gives your script a performance boost.

Synchronous vs Asynchronous Models

An asynchronous model starts tasks as soon as new resources become available, without waiting for previously running tasks to finish. By contrast, a synchronous model waits for task 1 to finish before starting task 2. For a more detailed explanation with examples, check out this article in The Startup. Asynchronous models often offer the greatest opportunity for performance improvement, if you can structure your code in the proper manner. That is, tasks can run independently of one another. For the sake of brevity, this article is going to focus solely on asynchronous parallelization, because that is the method that will likely boost performance the most. Also, if you structure code for asynchronous parallelization on your laptop, it is much easier to scale up to a supercomputer.

Installation

Since Python 2.6, multiprocessing has been included as a standard-library module, so no installation is required. Simply import multiprocessing. Since ‘multiprocessing’ takes a bit to type, I prefer to import multiprocessing as mp.

The Problem

We have an array of parameter values that we want to use in a sensitivity analysis. The function we’re running the analysis on is computationally expensive. We can cut down on processing time by running multiple parameters simultaneously in parallel.

Setup

Import multiprocessing, numpy, and time. Then define a function that takes a row number, i, and three parameters as inputs. The row number is necessary so results can later be linked to the input parameters. Remember, the asynchronous model does not preserve order.

```python
import multiprocessing as mp
import numpy as np
import time

def my_function(i, param1, param2, param3):
    result = param1 ** 2 * param2 + param3
    time.sleep(2)
    return (i, result)
```

For demonstrative purposes, this is a simple function that is not computationally expensive. I’ve added a line of code to pause the function for 2 seconds, simulating a long run time. The function output is going to be most sensitive to param1 and least sensitive to param3. In practice, you can replace this with any function. We need a function that can take the result of my_function and add it to a results list, which is creatively named results.

```python
def get_result(result):
    global results
    results.append(result)
```

Let’s run this code in serial (non-parallel) and see how long it takes. Set up an array with 3 columns of random numbers between 0 and 100. These are the parameters that will get passed to my_function. Then create the empty results list. Finally, loop through all the rows in params and add the result from my_function to results. Time this to see how long it takes (it should be about 20 seconds) and print out the results list.

```python
if __name__ == '__main__':
    params = np.random.random((10, 3)) * 100.0
    results = []
    ts = time.time()
    for i in range(0, params.shape[0]):
        get_result(my_function(i, params[i, 0], params[i, 1], params[i, 2]))
    print('Time in serial:', time.time() - ts)
    print(results)
```

As expected, this code took about 20 seconds to run. Also, notice how the results were returned in order.

```
Time in serial: 20.006245374679565
[(0, 452994.2250955602), (1, 12318.873058254741), (2, 310577.72144939064), (3, 210071.48540466625), (4, 100467.02727256044), (5, 46553.87276610058), (6, 11396.808561138329), (7, 543909.2528728382), (8, 79957.52205218966), (9, 47914.9078853125)]
```

Now use multiprocessing to run the same code in parallel. Simply add the following code directly below the serial code for comparison. A gist with the full Python script is included at the end of this article for clarity. Reset the results list so it is empty, and reset the starting time. We’ll need to specify how many CPU processes we want to use. multiprocessing.cpu_count() returns the total available processes for your machine. Then loop through each row of params and use multiprocessing.Pool.apply_async to call my_function and save the result. Parameters to my_function are passed using the args argument of apply_async, and the callback function is where the result of my_function is sent. This will start a new process as soon as one is available, and continue doing so until the loop is complete. Then close the process pool. multiprocessing.Pool.join() waits to execute any following code until all processes have completed. Now print the time this code took to run, and the results.

```python
results = []
ts = time.time()
pool = mp.Pool(mp.cpu_count())
for i in range(0, params.shape[0]):
    pool.apply_async(my_function, args=(i, params[i, 0], params[i, 1], params[i, 2]), callback=get_result)
pool.close()
pool.join()
print('Time in parallel:', time.time() - ts)
print(results)
```

Notice that using apply_async decreased the run time from 20 seconds to under 5 seconds. Also notice that the results were not returned in order. That is why the row index was passed and returned.

```
Time in parallel: 4.749683141708374
[(0, 452994.2250955602), (2, 310577.72144939064), (1, 12318.873058254741), (3, 210071.48540466625), (4, 100467.02727256044), (5, 46553.87276610058), (6, 11396.808561138329), (7, 543909.2528728382), (9, 47914.9078853125), (8, 79957.52205218966)]
```

Conclusion

Implementing asynchronous parallelization in your code can greatly decrease your run time. The multiprocessing module is a great option for parallelization on personal computers. As you’ve seen in this article, you can get dramatic speed increases, depending on your machine’s specs. Beware that multiprocessing has limitations if you eventually want to scale up to a supercomputer. If supercomputing is where you’re headed, you’ll want to use a parallelization model compatible with the Message Passing Interface (MPI).

https://gist.github.com/konradhafen/aa605c67bf798f07244bdc9d5d95ad12
Markdown
[Data Science](https://opensourceoptions.com/category/data-science/) \| [Open Source](https://opensourceoptions.com/category/open-source/) \| [Python](https://opensourceoptions.com/category/python/)

# Asynchronous Parallel Programming in Python with Multiprocessing

By [krad](https://opensourceoptions.com/) June 23, 2020 September 22, 2021

#### A flexible method to speed up code on a personal computer

Do you wish your Python scripts could run faster? Maybe they can. And you (probably) won’t have to buy a new computer or use a supercomputer. Most modern computers contain multiple processing cores but, by default, Python scripts only use a single core. Writing code that can run on multiple processors can really decrease your processing time. This article will demonstrate how to use the `multiprocessing` module to write parallel code that uses all of your machine’s processors and gives your script a performance boost.

## Synchronous vs Asynchronous Models

An asynchronous model starts tasks as soon as new resources become available, without waiting for previously running tasks to finish. By contrast, a synchronous model waits for task 1 to finish before starting task 2. For a more detailed explanation with examples, check out [this article](https://medium.com/swlh/understanding-sync-async-concurrency-and-parallelism-166686008fa4) in The Startup. Asynchronous models often offer the greatest opportunity for performance improvement, if you can structure your code in the proper manner.
That is, tasks can run independently of one another. For the sake of brevity, this article is going to focus solely on asynchronous parallelization, because that is the method that will likely boost performance the most. Also, if you structure code for asynchronous parallelization on your laptop, it is much easier to scale up to a supercomputer.

## Installation

Since Python 2.6, `multiprocessing` has been included as a standard-library module, so no installation is required. Simply `import multiprocessing`. Since ‘multiprocessing’ takes a bit to type, I prefer to `import multiprocessing as mp`.

## The Problem

We have an array of parameter values that we want to use in a sensitivity analysis. The function we’re running the analysis on is computationally expensive. We can cut down on processing time by running multiple parameters simultaneously in parallel.

## Setup

Import `multiprocessing`, `numpy`, and `time`. Then define a function that takes a row number, `i`, and three parameters as inputs. The row number is necessary so results can later be linked to the input parameters. Remember, the asynchronous model does not preserve order.

```python
import multiprocessing as mp
import numpy as np
import time

def my_function(i, param1, param2, param3):
    result = param1 ** 2 * param2 + param3
    time.sleep(2)
    return (i, result)
```

For demonstrative purposes, this is a simple function that is not computationally expensive. I’ve added a line of code to pause the function for 2 seconds, simulating a long run time. The function output is going to be most sensitive to `param1` and least sensitive to `param3`. In practice, you can replace this with any function. We need a function that can take the result of `my_function` and add it to a results list, which is creatively named `results`.

```python
def get_result(result):
    global results
    results.append(result)
```

Let’s run this code in serial (non-parallel) and see how long it takes. Set up an array with 3 columns of random numbers between 0 and 100.
These are the parameters that will get passed to `my_function`. Then create the empty `results` list. Finally, loop through all the rows in `params` and add the result from `my_function` to `results`. Time this to see how long it takes (it should be about 20 seconds) and print out the `results` list.

```python
if __name__ == '__main__':
    params = np.random.random((10, 3)) * 100.0
    results = []
    ts = time.time()
    for i in range(0, params.shape[0]):
        get_result(my_function(i, params[i, 0], params[i, 1], params[i, 2]))
    print('Time in serial:', time.time() - ts)
    print(results)
```

As expected, this code took about 20 seconds to run. Also, notice how the results were returned in order.

```
Time in serial: 20.006245374679565
[(0, 452994.2250955602), (1, 12318.873058254741), (2, 310577.72144939064), (3, 210071.48540466625), (4, 100467.02727256044), (5, 46553.87276610058), (6, 11396.808561138329), (7, 543909.2528728382), (8, 79957.52205218966), (9, 47914.9078853125)]
```

## Run in Parallel

Now use `multiprocessing` to run the same code in parallel. Simply add the following code directly below the serial code for comparison. A gist with the full Python script is included at the end of this article for clarity. Reset the `results` list so it is empty, and reset the starting time. We’ll need to specify how many CPU processes we want to use. `multiprocessing.cpu_count()` returns the total available processes for your machine. Then loop through each row of `params` and use `multiprocessing.Pool.apply_async` to call `my_function` and save the result. Parameters to `my_function` are passed using the `args` argument of `apply_async`, and the callback function is where the result of `my_function` is sent. This will start a new process as soon as one is available, and continue doing so until the loop is complete. Then close the process pool. `multiprocessing.Pool.join()` waits to execute any following code until all processes have completed.
Now print the time this code took to run, and the results.

```python
results = []
ts = time.time()
pool = mp.Pool(mp.cpu_count())
for i in range(0, params.shape[0]):
    pool.apply_async(my_function, args=(i, params[i, 0], params[i, 1], params[i, 2]), callback=get_result)
pool.close()
pool.join()
print('Time in parallel:', time.time() - ts)
print(results)
```

Notice that using `apply_async` decreased the run time from 20 seconds to under 5 seconds. Also notice that the results were not returned in order. That is why the row index was passed and returned.

```
Time in parallel: 4.749683141708374
[(0, 452994.2250955602), (2, 310577.72144939064), (1, 12318.873058254741), (3, 210071.48540466625), (4, 100467.02727256044), (5, 46553.87276610058), (6, 11396.808561138329), (7, 543909.2528728382), (9, 47914.9078853125), (8, 79957.52205218966)]
```

## Conclusion

Implementing asynchronous parallelization in your code can greatly decrease your run time. The `multiprocessing` module is a great option for parallelization on personal computers. As you’ve seen in this article, you can get dramatic speed increases, depending on your machine’s specs. Beware that `multiprocessing` has limitations if you eventually want to scale up to a supercomputer. If supercomputing is where you’re headed, you’ll want to use a parallelization model compatible with the Message Passing Interface (MPI).
<https://gist.github.com/konradhafen/aa605c67bf798f07244bdc9d5d95ad12>
| Property | Value |
|---|---|
| Shard | 87 (laksa) |
| Root Hash | 18410739763379425687 |
| Unparsed URL | com,opensourceoptions!/asynchronous-parallel-programming-in-python-with-multiprocessing/ s443 |
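The unparsed URL and shard shown above follow recognizable patterns: the unparsed form looks like a SURT-style key (host labels reversed and comma-joined, `!` before the path, an `s443` suffix marking HTTPS), and the reported shard happens to equal the root hash modulo 100. Both of the following are reconstructions inferred from the displayed values, not the crawler's documented behavior:

```python
from urllib.parse import urlsplit

def unparsed_key(url: str) -> str:
    # SURT-style key: reverse the host labels, then '!' + path, with a
    # ' s443' suffix for HTTPS (format inferred from the value shown above).
    parts = urlsplit(url)
    host = ",".join(reversed(parts.hostname.split(".")))
    suffix = " s443" if parts.scheme == "https" else ""
    return f"{host}!{parts.path}{suffix}"

def shard_for(root_hash: int, num_shards: int = 100) -> int:
    # num_shards=100 is only an assumption: 18410739763379425687 % 100 == 87,
    # which matches the reported shard, but the real shard count is not shown.
    return root_hash % num_shards

url = "https://opensourceoptions.com/asynchronous-parallel-programming-in-python-with-multiprocessing/"
print(unparsed_key(url))
# → com,opensourceoptions!/asynchronous-parallel-programming-in-python-with-multiprocessing/ s443
print(shard_for(18410739763379425687))  # → 87
```

Keys of this shape sort all pages of a domain together, which is why crawl stores commonly use them.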