🕷️ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 134 (from laksa097)
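The raw shard query is not shown above, so the exact mapping from URL to shard is unknown. One common scheme for this kind of lookup is a simple modulo over a stable hash of the URL's root, which can be sketched as follows (the shard count of 512 and the use of the Root Hash field are assumptions for illustration only, not the inspector's actual algorithm):

```python
def shard_for(root_hash: int, num_shards: int = 512) -> int:
    """Map a page's root hash to a shard index.

    Hypothetical sketch: a real system may hash different inputs
    (host, registered domain, full URL) and use a different shard count.
    """
    return root_hash % num_shards

# Using the Root Hash reported in Page Details:
shard = shard_for(6938430845472928934)
```

Whatever the real inputs are, the key property is stability: the same URL must always land on the same shard so that status rows for it are colocated.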

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ℹ️ Skipped - page is already crawled

📄
INDEXABLE
✅
CRAWLED
1 day ago
🤖
ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
| --- | --- | --- | --- |
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
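Taken together, these filters form a single indexability predicate: a page is indexable only if every row passes. A minimal Python sketch of that predicate, assuming the page record is a flat dict keyed by the column names shown in the Condition column (the 183-day cutoff approximates the 6-month window):

```python
from datetime import datetime, timedelta, timezone

def is_indexable(page: dict, now=None) -> bool:
    """Evaluate the Page Info Filters against a page record.

    Field names mirror the filter conditions above; the flat-dict
    record layout and the UTC timestamps are assumptions.
    """
    now = now or datetime.now(timezone.utc)
    return (
        page["download_http_code"] == 200                      # HTTP status
        and page["download_stamp"] > now - timedelta(days=183) # age cutoff, ~6 months
        and page.get("history_drop_reason") is None            # history drop
        and page["fh_dont_index"] != 1                         # not hard-banned
        and page["ml_spam_score"] == 0                         # not spam
        and page.get("meta_canonical") in (None, "", page["src_unparsed"])  # canonical
    )
```

For the page inspected here, every clause holds (HTTP 200, crawled 1 day ago, no drop reason, zero spam score, no canonical set), which is why the verdict above is INDEXABLE.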

Page Details

| Property | Value |
| --- | --- |
| URL | https://ipython.org/ipython-doc/3/parallel/asyncresult.html |
| Last Crawled | 2026-04-08 09:16:04 (1 day ago) |
| First Indexed | 2016-04-26 01:50:43 (9 years ago) |
| HTTP Status Code | 200 |
| Meta Title | The AsyncResult object — IPython 3.2.1 documentation |
| Meta Description | null |
| Meta Canonical | null |

Boilerpipe Text:
Warning This documentation is for an old version of IPython. You can find docs for newer versions here . In non-blocking mode, apply() submits the command to be executed and then returns a AsyncResult object immediately. The AsyncResult object gives you a way of getting a result at a later time through its get() method, but it also collects metadata on execution. Beyond multiprocessing’s AsyncResult ¶ Our AsyncResult objects add a number of convenient features for working with parallel results, beyond what is provided by the original AsyncResult. get_dict ¶ First, is AsyncResult.get_dict() , which pulls results as a dictionary keyed by engine_id, rather than a flat list. This is useful for quickly coordinating or distributing information about all of the engines. As an example, here is a quick call that gives every engine a dict showing the PID of every other engine: In [10]: ar = rc [:] . apply_async ( os . getpid ) In [11]: pids = ar . get_dict () In [12]: rc [:][ 'pid_map' ] = pids This trick is particularly useful when setting up inter-engine communication, as in IPython’s examples/parallel/interengine examples. Map results are iterable! ¶ When an AsyncResult object has multiple results (e.g. the AsyncMapResult object), you can actually iterate through results themselves, and act on them as they arrive: from __future__ import print_function import time from IPython import parallel # create client & view rc = parallel . Client () dv = rc [:] v = rc . load_balanced_view () # scatter 'id', so id=0,1,2 on engines 0,1,2 dv . scatter ( 'id' , rc . ids , flatten = True ) print ( "Engine IDs: " , dv [ 'id' ]) # create a Reference to `id`. This will be a different value on each engine ref = parallel . Reference ( 'id' ) print ( "sleeping for `id` seconds on each engine" ) tic = time . time () ar = dv . apply ( time . sleep , ref ) for i , r in enumerate ( ar ): print ( " %i : %.3f " % ( i , time . time () - tic )) def sleep_here ( t ): import time time . 
sleep ( t ) return id , t # one call per task print ( "running with one call per task" ) amr = v . map ( sleep_here , [ . 01 * t for t in range ( 100 )]) tic = time . time () for i , r in enumerate ( amr ): print ( "task %i on engine %i : %.3f " % ( i , r [ 0 ], time . time () - tic )) print ( "running with four calls per task" ) # with chunksize, we can have four calls per task amr = v . map ( sleep_here , [ . 01 * t for t in range ( 100 )], chunksize = 4 ) tic = time . time () for i , r in enumerate ( amr ): print ( "task %i on engine %i : %.3f " % ( i , r [ 0 ], time . time () - tic )) print ( "running with two calls per task, with unordered results" ) # We can even iterate through faster results first, with ordered=False amr = v . map ( sleep_here , [ . 01 * t for t in range ( 100 , 0 , - 1 )], ordered = False , chunksize = 2 ) tic = time . time () for i , r in enumerate ( amr ): print ( "slept %.2f s on engine %i : %.3f " % ( r [ 1 ], r [ 0 ], time . time () - tic )) That is to say, if you treat an AsyncMapResult as if it were a list of your actual results, it should behave as you would expect, with the only difference being that you can start iterating through the results before they have even been computed. This lets you do a dumb version of map/reduce with the builtin Python functions, and the only difference between doing this locally and doing it remotely in parallel is using the asynchronous view.map instead of the builtin map. Here is a simple one-line RMS (root-mean-square) implemented with Python’s builtin map/reduce. In [38]: X = np . linspace ( 0 , 100 ) In [39]: from math import sqrt In [40]: add = lambda a , b : a + b In [41]: sq = lambda x : x * x In [42]: sqrt ( reduce ( add , map ( sq , X )) / len ( X )) Out[42]: 58.028845747399714 In [43]: sqrt ( reduce ( add , view . 
map ( sq , X )) / len ( X )) Out[43]: 58.028845747399714 To break that down: map(sq, X) Compute the square of each element in the list (locally, or in parallel) reduce(add, sqX) / len(X) compute the mean by summing over the list (or AsyncMapResult) and dividing by the size take the square root of the resulting number See also When AsyncResult or the AsyncMapResult don’t provide what you need (for instance, handling individual results as they arrive, but with metadata), you can always just split the original result’s msg_ids attribute, and handle them as you like. For an example of this, see examples/parallel/customresult.py
Markdown:
[![IPython Documentation](https://ipython.org/ipython-doc/3/_static/logo.png)](https://ipython.org/) ### Navigation - [index](https://ipython.org/ipython-doc/3/genindex.html "General Index") - [modules](https://ipython.org/ipython-doc/3/py-modindex.html "Python Module Index") \| - [next](https://ipython.org/ipython-doc/3/parallel/parallel_mpi.html "Using MPI with IPython") \| - [previous](https://ipython.org/ipython-doc/3/parallel/parallel_task.html "The IPython task interface") \| - [home](https://ipython.org/)\| - [search](https://ipython.org/ipython-doc/3/search.html)\| - [documentation](https://ipython.org/ipython-doc/3/index.html) » - [Using IPython for parallel computing](https://ipython.org/ipython-doc/3/parallel/index.html) » ### [Table Of Contents](https://ipython.org/ipython-doc/3/index.html) - [The AsyncResult object](https://ipython.org/ipython-doc/3/parallel/asyncresult.html) - [Beyond multiprocessing’s AsyncResult](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#beyond-multiprocessing-s-asyncresult) - [get\_dict](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#get-dict) - [Metadata](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#metadata) - [Timing](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#timing) - [Map results are iterable\!](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#map-results-are-iterable) #### Previous topic [The IPython task interface](https://ipython.org/ipython-doc/3/parallel/parallel_task.html "previous chapter") #### Next topic [Using MPI with IPython](https://ipython.org/ipython-doc/3/parallel/parallel_mpi.html "next chapter") ### This Page - [Show Source](https://ipython.org/ipython-doc/3/_sources/parallel/asyncresult.txt) ### Quick search Enter search terms or a module, class or function name. > Warning > > This documentation is for an old version of IPython. You can find docs for newer versions [here](http://ipython.readthedocs.org/en/stable/). 
# The AsyncResult object[¶](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#the-asyncresult-object "Permalink to this headline") In non-blocking mode, `apply()` submits the command to be executed and then returns a [`AsyncResult`](https://ipython.org/ipython-doc/3/parallel/parallel_details.html#AsyncResult "AsyncResult") object immediately. The AsyncResult object gives you a way of getting a result at a later time through its `get()` method, but it also collects metadata on execution. ## Beyond multiprocessing’s AsyncResult[¶](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#beyond-multiprocessing-s-asyncresult "Permalink to this headline") Note The [`AsyncResult`](https://ipython.org/ipython-doc/3/parallel/parallel_details.html#AsyncResult "AsyncResult") object provides a superset of the interface in [`multiprocessing.pool.AsyncResult`](http://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult "(in Python v2.7)"). See the [official Python documentation](http://docs.python.org/library/multiprocessing#multiprocessing.pool.AsyncResult) for more on the basics of this interface. Our AsyncResult objects add a number of convenient features for working with parallel results, beyond what is provided by the original AsyncResult. ### get\_dict[¶](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#get-dict "Permalink to this headline") First, is [`AsyncResult.get_dict()`](https://ipython.org/ipython-doc/3/api/generated/IPython.parallel.client.asyncresult.html#IPython.parallel.client.asyncresult.AsyncResult.get_dict "IPython.parallel.client.asyncresult.AsyncResult.get_dict"), which pulls results as a dictionary keyed by engine\_id, rather than a flat list. This is useful for quickly coordinating or distributing information about all of the engines. 
As an example, here is a quick call that gives every engine a dict showing the PID of every other engine: ``` In [10]: ar = rc[:].apply_async(os.getpid) In [11]: pids = ar.get_dict() In [12]: rc[:]['pid_map'] = pids ``` This trick is particularly useful when setting up inter-engine communication, as in IPython’s `examples/parallel/interengine` examples. ## Metadata[¶](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#metadata "Permalink to this headline") IPython.parallel tracks some metadata about the tasks, which is stored in the `Client.metadata` dict. The AsyncResult object gives you an interface for this information as well, including timestamps stdout/err, and engine IDs. ### Timing[¶](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#timing "Permalink to this headline") IPython tracks various timestamps as `datetime` objects, and the AsyncResult object has a few properties that turn these into useful times (in seconds as floats). For use while the tasks are still pending: - `ar.elapsed` is just the elapsed seconds since submission, for use before the AsyncResult is complete. - `ar.progress` is the number of tasks that have completed. Fractional progress would be: ``` 1.0 * ar.progress / len(ar) ``` - `AsyncResult.wait_interactive()` will wait for the result to finish, but print out status updates on progress and elapsed time while it waits. For use after the tasks are done: - `ar.serial_time` is the sum of the computation time of all of the tasks done in parallel. - `ar.wall_time` is the time between the first task submitted and last result received. This is the actual cost of computation, including IPython overhead. Note wall\_time is only precise if the Client is waiting for results when the task finished, because the `received` timestamp is made when the result is unpacked by the Client, triggered by the `spin()` call. If you are doing work in the Client, and not waiting/spinning, then `received` might be artificially high. 
An often interesting metric is the time it actually cost to do the work in parallel relative to the serial computation, and this can be given simply with ``` speedup = ar.serial_time / ar.wall_time ``` ## Map results are iterable\![¶](https://ipython.org/ipython-doc/3/parallel/asyncresult.html#map-results-are-iterable "Permalink to this headline") When an AsyncResult object has multiple results (e.g. the `AsyncMapResult` object), you can actually iterate through results themselves, and act on them as they arrive: ``` from __future__ import print_function import time from IPython import parallel # create client & view rc = parallel.Client() dv = rc[:] v = rc.load_balanced_view() # scatter 'id', so id=0,1,2 on engines 0,1,2 dv.scatter('id', rc.ids, flatten=True) print("Engine IDs: ", dv['id']) # create a Reference to `id`. This will be a different value on each engine ref = parallel.Reference('id') print("sleeping for `id` seconds on each engine") tic = time.time() ar = dv.apply(time.sleep, ref) for i,r in enumerate(ar): print("%i: %.3f"%(i, time.time()-tic)) def sleep_here(t): import time time.sleep(t) return id,t # one call per task print("running with one call per task") amr = v.map(sleep_here, [.01*t for t in range(100)]) tic = time.time() for i,r in enumerate(amr): print("task %i on engine %i: %.3f" % (i, r[0], time.time()-tic)) print("running with four calls per task") # with chunksize, we can have four calls per task amr = v.map(sleep_here, [.01*t for t in range(100)], chunksize=4) tic = time.time() for i,r in enumerate(amr): print("task %i on engine %i: %.3f" % (i, r[0], time.time()-tic)) print("running with two calls per task, with unordered results") # We can even iterate through faster results first, with ordered=False amr = v.map(sleep_here, [.01*t for t in range(100,0,-1)], ordered=False, chunksize=2) tic = time.time() for i,r in enumerate(amr): print("slept %.2fs on engine %i: %.3f" % (r[1], r[0], time.time()-tic)) ``` That is to say, if you treat an 
AsyncMapResult as if it were a list of your actual results, it should behave as you would expect, with the only difference being that you can start iterating through the results before they have even been computed. This lets you do a dumb version of map/reduce with the builtin Python functions, and the only difference between doing this locally and doing it remotely in parallel is using the asynchronous view.map instead of the builtin map. Here is a simple one-line RMS (root-mean-square) implemented with Python’s builtin map/reduce. ``` In [38]: X = np.linspace(0,100) In [39]: from math import sqrt In [40]: add = lambda a,b: a+b In [41]: sq = lambda x: x*x In [42]: sqrt(reduce(add, map(sq, X)) / len(X)) Out[42]: 58.028845747399714 In [43]: sqrt(reduce(add, view.map(sq, X)) / len(X)) Out[43]: 58.028845747399714 ``` To break that down: 1. `map(sq, X)` Compute the square of each element in the list (locally, or in parallel) 2. `reduce(add, sqX) / len(X)` compute the mean by summing over the list (or AsyncMapResult) and dividing by the size 3. take the square root of the resulting number See also When AsyncResult or the AsyncMapResult don’t provide what you need (for instance, handling individual results as they arrive, but with metadata), you can always just split the original result’s `msg_ids` attribute, and handle them as you like. For an example of this, see `examples/parallel/customresult.py` ©The IPython Development Team. \| Powered by [Sphinx 1.3.1](http://sphinx-doc.org/) & [Alabaster 0.7.3](https://github.com/bitprophet/alabaster) \| [Page source](https://ipython.org/ipython-doc/3/_sources/parallel/asyncresult.txt)
| Property | Value |
| --- | --- |
| Readable Markdown | null |
| Shard | 134 (laksa) |
| Root Hash | 6938430845472928934 |
| Unparsed URL | org,ipython!/ipython-doc/3/parallel/asyncresult.html s443 |
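The Unparsed URL value looks like a SURT-style sort key: comma-reversed host, `!` before the path, and a trailing `s443` apparently marking HTTPS on port 443. A sketch of how such a key could be built, inferred from this single example (the exact grammar, including how non-default ports and query strings are encoded, is an assumption):

```python
from urllib.parse import urlsplit

def to_unparsed_key(url: str) -> str:
    """Build a reversed-host URL key resembling the Unparsed URL field.

    Hypothetical reconstruction from one observed example:
    'org,ipython!/ipython-doc/3/parallel/asyncresult.html s443'.
    """
    parts = urlsplit(url)
    host = ",".join(reversed(parts.hostname.split(".")))  # ipython.org -> org,ipython
    path = parts.path or "/"
    suffix = " s443" if parts.scheme == "https" else ""   # https marker, per the example
    return f"{host}!{path}{suffix}"

key = to_unparsed_key("https://ipython.org/ipython-doc/3/parallel/asyncresult.html")
# -> org,ipython!/ipython-doc/3/parallel/asyncresult.html s443
```

Reversing the host puts all pages of a domain (and its subdomains) adjacent in sorted storage, which is why crawl indexes commonly key on this form rather than the raw URL.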