multiprocess module documentation

connection module

Client(address, family=None, authkey=None)

Returns a connection to the address of a Listener

class Listener(address=None, family=None, backlog=1, authkey=None)

Bases: object

Returns a listener object.

This is a wrapper for a bound socket which is ‘listening’ for connections, or for a Windows named pipe.

__enter__()
__exit__(exc_type, exc_value, exc_tb)
accept()

Accept a connection on the bound socket or named pipe of self.

Returns a Connection object.

property address
close()

Close the bound socket or named pipe of self.

property last_accepted
Pipe(duplex=True)

Returns pair of connection objects at either end of a pipe

wait(object_list, timeout=None)

Wait till an object in object_list is ready/readable.

Returns list of those objects in object_list which are ready/readable.
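
Example: a minimal sketch of a Listener/Client pair over a local socket, polled with wait(). The address, port and authkey below are illustrative values, not defaults:

    from multiprocess.connection import Listener, Client, wait
    import threading

    address = ('localhost', 6000)      # illustrative address
    authkey = b'secret'                # illustrative key

    with Listener(address, authkey=authkey) as listener:   # socket is bound and listening here
        def serve():
            with listener.accept() as conn:                 # blocks until a client connects
                conn.send({'status': 'ok'})

        server = threading.Thread(target=serve)
        server.start()

        with Client(address, authkey=authkey) as conn:
            ready = wait([conn], timeout=5.0)               # objects that are ready to read
            if conn in ready:
                print(conn.recv())                          # {'status': 'ok'}
        server.join()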

context module

dummy module

class Barrier(parties, action=None, timeout=None)

Bases: object

Implements a Barrier.

Useful for synchronizing a fixed number of threads at known synchronization points. Threads block on ‘wait()’ and are simultaneously awoken once they have all made that call.

Create a barrier, initialised to ‘parties’ threads.

‘action’ is a callable which, when supplied, will be called by one of the threads after they have all entered the barrier and just prior to releasing them all. If a ‘timeout’ is provided, it is used as the default for all subsequent ‘wait()’ calls.

_break()
_enter()
_exit()
_release()
_wait(timeout)
abort()

Place the barrier into a ‘broken’ state.

Useful in case of error. Any currently waiting threads and threads attempting to ‘wait()’ will have BrokenBarrierError raised.

property broken

Return True if the barrier is in a broken state.

property n_waiting

Return the number of threads currently waiting at the barrier.

property parties

Return the number of threads required to trip the barrier.

reset()

Reset the barrier to the initial state.

Any threads currently waiting will get the BrokenBarrier exception raised.

wait(timeout=None)

Wait for the barrier.

When the specified number of threads have started waiting, they are all simultaneously awoken. If an ‘action’ was provided for the barrier, one of the threads will have executed that callback prior to returning. Returns an individual index number from 0 to ‘parties-1’.
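
Example: a minimal sketch of three threads meeting at a Barrier, using the thread-backed classes from this module; the party count and worker names are illustrative:

    from multiprocess.dummy import Barrier, Process

    barrier = Barrier(3, action=lambda: print('all three arrived'))

    def worker(name):
        index = barrier.wait()          # blocks until all 3 threads have called wait()
        print(name, 'released with index', index)

    threads = [Process(target=worker, args=('worker-%d' % i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()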

class BoundedSemaphore(value=1)

Bases: Semaphore

Implements a bounded semaphore.

A bounded semaphore checks to make sure its current value doesn’t exceed its initial value. If it does, ValueError is raised. In most situations semaphores are used to guard resources with limited capacity.

If the semaphore is released too many times it’s a sign of a bug.

Like regular semaphores, bounded semaphores manage a counter representing the number of release() calls minus the number of acquire() calls, plus an initial value. The acquire() method blocks if necessary until it can return without making the counter negative. If not given, value defaults to 1.

release(n=1)

Release a semaphore, incrementing the internal counter by one or more.

When the counter is zero on entry and another thread is waiting for it to become larger than zero again, wake up that thread.

If the number of releases exceeds the number of acquires, raise a ValueError.
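
Example: a minimal sketch of the over-release check; the slot count is illustrative:

    from multiprocess.dummy import BoundedSemaphore

    slots = BoundedSemaphore(2)         # guard a resource with 2 slots

    slots.acquire()
    slots.acquire()
    slots.release()
    slots.release()                     # counter is back at its initial value of 2
    try:
        slots.release()                 # one release too many
    except ValueError:
        print('over-release detected')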

class Condition(lock=None)

Bases: object

Class that implements a condition variable.

A condition variable allows one or more threads to wait until they are notified by another thread.

If the lock argument is given and not None, it must be a Lock or RLock object, and it is used as the underlying lock. Otherwise, a new RLock object is created and used as the underlying lock.

__enter__()
__exit__(*args)
__repr__()

Return repr(self).

_acquire_restore(x)
_at_fork_reinit()
_is_owned()
_release_save()
notify(n=1)

Wake up one or more threads waiting on this condition, if any.

If the calling thread has not acquired the lock when this method is called, a RuntimeError is raised.

This method wakes up at most n of the threads waiting for the condition variable; it is a no-op if no threads are waiting.

notifyAll()

Wake up all threads waiting on this condition.

This method is deprecated, use notify_all() instead.

notify_all()

Wake up all threads waiting on this condition.

If the calling thread has not acquired the lock when this method is called, a RuntimeError is raised.

wait(timeout=None)

Wait until notified or until a timeout occurs.

If the calling thread has not acquired the lock when this method is called, a RuntimeError is raised.

This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call for the same condition variable in another thread, or until the optional timeout occurs. Once awakened or timed out, it re-acquires the lock and returns.

When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof).

When the underlying lock is an RLock, it is not released using its release() method, since this may not actually unlock the lock when it was acquired multiple times recursively. Instead, an internal interface of the RLock class is used, which really unlocks it even when it has been recursively acquired several times. Another internal interface is then used to restore the recursion level when the lock is reacquired.

wait_for(predicate, timeout=None)

Wait until a condition evaluates to True.

predicate should be a callable whose result will be interpreted as a boolean value. A timeout may be provided giving the maximum time to wait.
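
Example: a minimal sketch of a producer/consumer handshake using wait_for() and notify(); the shared 'box' dict is an illustrative stand-in for real state:

    from multiprocess.dummy import Condition, Process
    import time

    cond = Condition()
    box = {'item': None}

    def consumer():
        with cond:                                          # acquire the underlying lock
            cond.wait_for(lambda: box['item'] is not None, timeout=5.0)
            print('consumed', box['item'])

    def producer():
        time.sleep(0.1)
        with cond:
            box['item'] = 42
            cond.notify()                                   # wake one waiting thread

    c, p = Process(target=consumer), Process(target=producer)
    c.start(); p.start()
    c.join(); p.join()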

class Event

Bases: object

Class implementing event objects.

Events manage a flag that can be set to true with the set() method and reset to false with the clear() method. The wait() method blocks until the flag is true. The flag is initially false.

_at_fork_reinit()
clear()

Reset the internal flag to false.

Subsequently, threads calling wait() will block until set() is called to set the internal flag to true again.

isSet()

Return true if and only if the internal flag is true.

This method is deprecated, use is_set() instead.

is_set()

Return true if and only if the internal flag is true.

set()

Set the internal flag to true.

All threads waiting for it to become true are awakened. Threads that call wait() once the flag is true will not block at all.

wait(timeout=None)

Block until the internal flag is true.

If the internal flag is true on entry, return immediately. Otherwise, block until another thread calls set() to set the flag to true, or until the optional timeout occurs.

When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof).

This method returns the internal flag on exit, so it will always return True unless a timeout is given and the operation times out.
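
Example: a minimal sketch of one thread blocking on wait() until another calls set():

    from multiprocess.dummy import Event, Process
    import time

    ready = Event()

    def waiter():
        if ready.wait(timeout=5.0):     # True once the flag is set
            print('flag was set')
        else:
            print('timed out')

    w = Process(target=waiter)
    w.start()
    time.sleep(0.1)
    ready.set()
    w.join()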

JoinableQueue

alias of Queue

Lock()

allocate_lock() -> lock object (allocate() is an obsolete synonym)

Create a new lock object. See help(type(threading.Lock())) for information about locks.

Manager()
Pipe(duplex=True)
Pool(processes=None, initializer=None, initargs=())
Process

alias of DummyProcess

class Queue(maxsize=0)

Bases: object

Create a queue object with a given maximum size.

If maxsize is <= 0, the queue size is infinite.

__class_getitem__ = <bound method GenericAlias of <class 'queue.Queue'>>
_get()
_init(maxsize)
_put(item)
_qsize()
empty()

Return True if the queue is empty, False otherwise (not reliable!).

This method is likely to be removed at some point. Use qsize() == 0 as a direct substitute, but be aware that either approach risks a race condition where a queue can grow before the result of empty() or qsize() can be used.

To create code that needs to wait for all queued tasks to be completed, the preferred technique is to use the join() method.

full()

Return True if the queue is full, False otherwise (not reliable!).

This method is likely to be removed at some point. Use qsize() >= n as a direct substitute, but be aware that either approach risks a race condition where a queue can shrink before the result of full() or qsize() can be used.

get(block=True, timeout=None)

Remove and return an item from the queue.

If the optional arg ‘block’ is true and ‘timeout’ is None (the default), block if necessary until an item is available. If ‘timeout’ is a non-negative number, it blocks at most ‘timeout’ seconds and raises the Empty exception if no item was available within that time. Otherwise (‘block’ is false), return an item if one is immediately available, else raise the Empty exception (‘timeout’ is ignored in that case).

get_nowait()

Remove and return an item from the queue without blocking.

Only get an item if one is immediately available. Otherwise raise the Empty exception.

join()

Blocks until all items in the Queue have been gotten and processed.

The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate the item was retrieved and all work on it is complete.

When the count of unfinished tasks drops to zero, join() unblocks.

put(item, block=True, timeout=None)

Put an item into the queue.

If the optional arg ‘block’ is true and ‘timeout’ is None (the default), block if necessary until a free slot is available. If ‘timeout’ is a non-negative number, it blocks at most ‘timeout’ seconds and raises the Full exception if no free slot was available within that time. Otherwise (‘block’ is false), put an item on the queue if a free slot is immediately available, else raise the Full exception (‘timeout’ is ignored in that case).

put_nowait(item)

Put an item into the queue without blocking.

Only enqueue the item if a free slot is immediately available. Otherwise raise the Full exception.

qsize()

Return the approximate size of the queue (not reliable!).

task_done()

Indicate that a formerly enqueued task is complete.

Used by Queue consumer threads. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.

If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).

Raises a ValueError if called more times than there were items placed in the queue.
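
Example: a minimal sketch of a worker thread draining a Queue, with task_done()/join() tracking completion; the None sentinel is an illustrative convention, not part of the API:

    from multiprocess.dummy import Process, Queue

    q = Queue()

    def worker():
        while True:
            item = q.get()
            if item is None:            # sentinel: stop the worker
                q.task_done()
                break
            print('processing', item)
            q.task_done()

    w = Process(target=worker)
    w.start()
    for item in range(3):
        q.put(item)
    q.put(None)
    q.join()                            # returns once every item has been marked done
    w.join()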

RLock(*args, **kwargs)

Factory function that returns a new reentrant lock.

A reentrant lock must be released by the thread that acquired it. Once a thread has acquired a reentrant lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it.

class Semaphore(value=1)

Bases: object

This class implements semaphore objects.

Semaphores manage a counter representing the number of release() calls minus the number of acquire() calls, plus an initial value. The acquire() method blocks if necessary until it can return without making the counter negative. If not given, value defaults to 1.

__enter__(blocking=True, timeout=None)

Acquire a semaphore, decrementing the internal counter by one.

When invoked without arguments: if the internal counter is larger than zero on entry, decrement it by one and return immediately. If it is zero on entry, block, waiting until some other thread has called release() to make it larger than zero. This is done with proper interlocking so that if multiple acquire() calls are blocked, release() will wake exactly one of them up. The implementation may pick one at random, so the order in which blocked threads are awakened should not be relied on. There is no return value in this case.

When invoked with blocking set to true, do the same thing as when called without arguments, and return true.

When invoked with blocking set to false, do not block. If a call without an argument would block, return false immediately; otherwise, do the same thing as when called without arguments, and return true.

When invoked with a timeout other than None, it will block for at most timeout seconds. If acquire does not complete successfully in that interval, return false. Return true otherwise.

__exit__(t, v, tb)
acquire(blocking=True, timeout=None)

Acquire a semaphore, decrementing the internal counter by one.

When invoked without arguments: if the internal counter is larger than zero on entry, decrement it by one and return immediately. If it is zero on entry, block, waiting until some other thread has called release() to make it larger than zero. This is done with proper interlocking so that if multiple acquire() calls are blocked, release() will wake exactly one of them up. The implementation may pick one at random, so the order in which blocked threads are awakened should not be relied on. There is no return value in this case.

When invoked with blocking set to true, do the same thing as when called without arguments, and return true.

When invoked with blocking set to false, do not block. If a call without an argument would block, return false immediately; otherwise, do the same thing as when called without arguments, and return true.

When invoked with a timeout other than None, it will block for at most timeout seconds. If acquire does not complete successfully in that interval, return false. Return true otherwise.

release(n=1)

Release a semaphore, incrementing the internal counter by one or more.

When the counter is zero on entry and another thread is waiting for it to become larger than zero again, wake up that thread.
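
Example: a minimal sketch of non-blocking and timed acquire() on a Semaphore with one slot:

    from multiprocess.dummy import Semaphore

    sem = Semaphore(1)

    print(sem.acquire())                   # True: counter goes 1 -> 0
    print(sem.acquire(blocking=False))     # False: would block, returns immediately
    print(sem.acquire(timeout=0.1))        # False: gave up after 0.1 seconds
    sem.release()                          # counter back to 1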

active_children()
current_process()

Return the current Thread object, corresponding to the caller’s thread of control.

If the caller’s thread of control was not created through the threading module, a dummy thread object with limited functionality is returned.

freeze_support()

forkserver module

connect_to_new_process(fds)

Request forkserver to create a child process.

Returns a pair of fds (status_r, data_w). The calling process can read the child process’s pid and (eventually) its returncode from status_r. The calling process should write to data_w the pickled preparation and process data.

ensure_running()

Make sure that a fork server is running.

This can be called from any process. Note that usually a child process will just reuse the forkserver started by its parent, so ensure_running() will do nothing.

get_inherited_fds()

Return list of fds inherited from parent process.

This returns None if the current process was not started by fork server.

set_forkserver_preload(modules_names)

Set list of module names to try to load in forkserver process.
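
These functions are normally driven through the public start-method API rather than called directly. A minimal sketch using a 'forkserver' context (POSIX only; the preloaded module name is illustrative):

    import multiprocess as mp

    def work(x):
        return x * x

    if __name__ == '__main__':
        ctx = mp.get_context('forkserver')
        ctx.set_forkserver_preload(['json'])      # hint: import these in the server up front
        p = ctx.Process(target=work, args=(3,))
        p.start()
        p.join()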

heap module

class BufferWrapper(size)

Bases: object

_heap = <multiprocess.heap.Heap object>
create_memoryview()

managers module

class BaseManager(address=None, authkey=None, serializer='pickle', ctx=None)

Bases: object

Base class for managers

_Server

alias of Server

__enter__()
__exit__(exc_type, exc_val, exc_tb)
_create(typeid, /, *args, **kwds)

Create a new shared object; return the token and exposed tuple

_debug_info()

Return some info about the server’s shared objects and connections

static _finalize_manager(process, address, authkey, state, _Client)

Shut down the manager process; will be registered as a finalizer

_number_of_objects()

Return the number of shared objects

_registry = {}
classmethod _run_server(registry, address, authkey, serializer, writer, initializer=None, initargs=())

Create a server, report its address and run it

property address
connect()

Connect manager object to the server process

get_server()

Return server object with serve_forever() method and address attribute

join(timeout=None)

Join the manager process (if it has been spawned)

classmethod register(typeid, callable=None, proxytype=None, exposed=None, method_to_typeid=None, create_method=True)

Register a typeid with the manager type

start(initializer=None, initargs=())

Spawn a server process for this manager object
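
Example: a minimal sketch of registering a plain class with a BaseManager subclass so that a server process hosts it and callers receive proxies; MathsManager and Maths are illustrative names, not part of the library:

    from multiprocess.managers import BaseManager

    class Maths:
        def add(self, x, y):
            return x + y

    class MathsManager(BaseManager):
        pass

    MathsManager.register('Maths', Maths)          # adds a Maths() create method

    if __name__ == '__main__':
        with MathsManager() as manager:            # enters a started manager
            proxy = manager.Maths()                # proxy to an instance in the server process
            print(proxy.add(2, 3))                 # 5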

class BaseProxy(token, serializer, manager=None, authkey=None, exposed=None, incref=True, manager_owned=False)

Bases: object

A base for proxies of shared objects

__deepcopy__(memo)
__reduce__()

Helper for pickle.

__repr__()

Return repr(self).

__str__()

Return representation of the referent (or a fall-back if that fails)

_address_to_local = {}
_after_fork()
_callmethod(methodname, args=(), kwds={})

Try to call a method of the referent and return a copy of the result

_connect()
static _decref(token, authkey, state, tls, idset, _Client)
_getvalue()

Get a copy of the value of the referent

_incref()
_mutex = <multiprocess.util.ForkAwareThreadLock object>
class SharedMemoryManager(*args, **kwargs)

Bases: BaseManager

Like SyncManager but uses SharedMemoryServer instead of Server.

It provides methods for creating and returning SharedMemory instances and for creating a list-like object (ShareableList) backed by shared memory. It also provides methods that create and return Proxy Objects that support synchronization across processes (i.e. multi-process-safe locks and semaphores).

ShareableList(sequence)

Returns a new ShareableList instance populated with the values from the input sequence, to be tracked by the manager.

SharedMemory(size)

Returns a new SharedMemory instance with the specified size in bytes, to be tracked by the manager.

_Server

alias of SharedMemoryServer

__del__()
get_server()

Better than monkeypatching for now; merge into Server ultimately
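
Example: a minimal sketch of a SharedMemoryManager handing out a raw block and a ShareableList, both released when the manager shuts down; the sizes and values are illustrative:

    from multiprocess.managers import SharedMemoryManager

    if __name__ == '__main__':
        with SharedMemoryManager() as smm:
            shm = smm.SharedMemory(size=64)            # raw block of 64 bytes
            shm.buf[:5] = b'hello'
            sl = smm.ShareableList(['a', 'b', 'c'])    # fixed-length, list-like
            sl[1] = 'B'
            print(bytes(shm.buf[:5]), list(sl))
        # on exit, every block created through smm has been released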

class SyncManager(address=None, authkey=None, serializer='pickle', ctx=None)

Bases: BaseManager

Subclass of BaseManager which supports a number of shared object types.

The types registered are those intended for the synchronization of threads, plus dict, list and Namespace.

The multiprocess.Manager() function creates started instances of this class.

Array(*args, **kwds)
Barrier(*args, **kwds)
BoundedSemaphore(*args, **kwds)
Condition(*args, **kwds)
Event(*args, **kwds)
JoinableQueue(*args, **kwds)
Lock(*args, **kwds)
Namespace(*args, **kwds)
Pool(*args, **kwds)
Queue(*args, **kwds)
RLock(*args, **kwds)
Semaphore(*args, **kwds)
Value(*args, **kwds)
_registry = {'Array': (<function Array>, ('__len__', '__getitem__', '__setitem__'), None, <class 'multiprocess.managers.ArrayProxy'>), 'AsyncResult': (None, None, None, <function AutoProxy>), 'Barrier': (<class 'threading.Barrier'>, ('__getattribute__', 'wait', 'abort', 'reset'), None, <class 'multiprocess.managers.BarrierProxy'>), 'BoundedSemaphore': (<class 'threading.BoundedSemaphore'>, ('acquire', 'release'), None, <class 'multiprocess.managers.AcquirerProxy'>), 'Condition': (<class 'threading.Condition'>, ('acquire', 'release', 'wait', 'notify', 'notify_all'), None, <class 'multiprocess.managers.ConditionProxy'>), 'Event': (<class 'threading.Event'>, ('is_set', 'set', 'clear', 'wait'), None, <class 'multiprocess.managers.EventProxy'>), 'Iterator': (None, ('__next__', 'send', 'throw', 'close'), None, <class 'multiprocess.managers.IteratorProxy'>), 'JoinableQueue': (<class 'queue.Queue'>, None, None, <function AutoProxy>), 'Lock': (<built-in function allocate_lock>, ('acquire', 'release'), None, <class 'multiprocess.managers.AcquirerProxy'>), 'Namespace': (<class 'multiprocess.managers.Namespace'>, ('__getattribute__', '__setattr__', '__delattr__'), None, <class 'multiprocess.managers.NamespaceProxy'>), 'Pool': (<class 'multiprocess.pool.Pool'>, ('apply', 'apply_async', 'close', 'imap', 'imap_unordered', 'join', 'map', 'map_async', 'starmap', 'starmap_async', 'terminate'), {'apply_async': 'AsyncResult', 'map_async': 'AsyncResult', 'starmap_async': 'AsyncResult', 'imap': 'Iterator', 'imap_unordered': 'Iterator'}, <class 'multiprocess.managers.PoolProxy'>), 'Queue': (<class 'queue.Queue'>, None, None, <function AutoProxy>), 'RLock': (<function RLock>, ('acquire', 'release'), None, <class 'multiprocess.managers.AcquirerProxy'>), 'Semaphore': (<class 'threading.Semaphore'>, ('acquire', 'release'), None, <class 'multiprocess.managers.AcquirerProxy'>), 'Value': (<class 'multiprocess.managers.Value'>, ('get', 'set'), None, <class 'multiprocess.managers.ValueProxy'>), 'dict': (<class 'dict'>, ('__contains__', '__delitem__', '__getitem__', '__iter__', '__len__', '__setitem__', 'clear', 'copy', 'get', 'items', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values'), {'__iter__': 'Iterator'}, <class 'multiprocess.managers.DictProxy'>), 'list': (<class 'list'>, ('__add__', '__contains__', '__delitem__', '__getitem__', '__len__', '__mul__', '__reversed__', '__rmul__', '__setitem__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort', '__imul__'), None, <class 'multiprocess.managers.ListProxy'>)}
dict(*args, **kwds)
list(*args, **kwds)
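
Example: a minimal sketch of shared dict and list proxies obtained from multiprocess.Manager() and mutated in a child process:

    import multiprocess as mp

    def work(d, l):
        d['count'] = d.get('count', 0) + 1
        l.append('done')

    if __name__ == '__main__':
        with mp.Manager() as manager:          # a started SyncManager
            d = manager.dict()
            l = manager.list()
            p = mp.Process(target=work, args=(d, l))
            p.start()
            p.join()
            print(dict(d), list(l))            # {'count': 1} ['done']
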
class Token(typeid, address, id)

Bases: object

Type to uniquely identify a shared object

__getstate__()
__repr__()

Return repr(self).

__setstate__(state)
address
id
typeid

pool module

class Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None, context=None)

Bases: object

Class which supports an async version of applying functions to arguments.

static Process(ctx, *args, **kwds)
__del__(_warn=<built-in function warn>, RUN='RUN')
__enter__()
__exit__(exc_type, exc_val, exc_tb)
__reduce__()

Helper for pickle.

__repr__()

Return repr(self).

_check_running()
_get_sentinels()
static _get_tasks(func, it, size)
static _get_worker_sentinels(workers)
_guarded_task_generation(result_job, func, iterable)

Provides a generator of tasks for imap and imap_unordered with appropriate handling for iterables which throw exceptions during iteration.

static _handle_results(outqueue, get, cache)
static _handle_tasks(taskqueue, put, outqueue, pool, cache)
classmethod _handle_workers(cache, taskqueue, ctx, Process, processes, pool, inqueue, outqueue, initializer, initargs, maxtasksperchild, wrap_exception, sentinels, change_notifier)
static _help_stuff_finish(inqueue, task_handler, size)
static _join_exited_workers(pool)

Cleanup after any worker processes which have exited due to reaching their specified lifetime. Returns True if any workers were cleaned up.

static _maintain_pool(ctx, Process, processes, pool, inqueue, outqueue, initializer, initargs, maxtasksperchild, wrap_exception)

Clean up any exited workers and start replacements for them.

_map_async(func, iterable, mapper, chunksize=None, callback=None, error_callback=None)

Helper function to implement map, starmap and their async counterparts.

_repopulate_pool()
static _repopulate_pool_static(ctx, Process, processes, pool, inqueue, outqueue, initializer, initargs, maxtasksperchild, wrap_exception)

Bring the number of pool processes up to the specified number, for use after reaping workers which have exited.

_setup_queues()
classmethod _terminate_pool(taskqueue, inqueue, outqueue, pool, change_notifier, worker_handler, task_handler, result_handler, cache)
static _wait_for_updates(sentinels, change_notifier, timeout=None)
_wrap_exception = True
apply(func, args=(), kwds={})

Equivalent of func(*args, **kwds). Pool must be running.

apply_async(func, args=(), kwds={}, callback=None, error_callback=None)

Asynchronous version of apply() method.

close()
imap(func, iterable, chunksize=1)

Equivalent of map() – can be MUCH slower than Pool.map().

imap_unordered(func, iterable, chunksize=1)

Like imap() method but ordering of results is arbitrary.

join()
map(func, iterable, chunksize=None)

Apply func to each element in iterable, collecting the results in a list that is returned.

map_async(func, iterable, chunksize=None, callback=None, error_callback=None)

Asynchronous version of map() method.

starmap(func, iterable, chunksize=None)

Like map() method but the elements of the iterable are expected to be iterables as well and will be unpacked as arguments. Hence func and (a, b) becomes func(a, b).

starmap_async(func, iterable, chunksize=None, callback=None, error_callback=None)

Asynchronous version of starmap() method.

terminate()
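
Example: a minimal sketch of the common Pool patterns, map(), apply_async() and imap_unordered(), over a trivial function; the worker count is illustrative:

    import multiprocess as mp

    def square(x):
        return x * x

    if __name__ == '__main__':
        with mp.Pool(processes=4) as pool:
            print(pool.map(square, range(8)))                  # ordered results
            result = pool.apply_async(square, (10,))
            print(result.get(timeout=5))                       # 100
            print(sorted(pool.imap_unordered(square, range(4))))
        # the context manager calls terminate() on exit
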
class ThreadPool(processes=None, initializer=None, initargs=())

Bases: Pool

static Process(ctx, *args, **kwds)
_get_sentinels()
static _get_worker_sentinels(workers)
static _help_stuff_finish(inqueue, task_handler, size)
_setup_queues()
_wait_for_updates(sentinels, change_notifier, timeout)
_wrap_exception = False

popen_fork module

class Popen(process_obj)

Bases: object

_launch(process_obj)
_send_signal(sig)
close()
duplicate_for_child(fd)
kill()
method = 'fork'
poll(flag=1)
terminate()
wait(timeout=None)

popen_forkserver module

class Popen(process_obj)

Bases: Popen

DupFd

alias of _DupFd

_launch(process_obj)
duplicate_for_child(fd)
method = 'forkserver'
poll(flag=1)

popen_spawn_posix module

class Popen(process_obj)

Bases: Popen

DupFd

alias of _DupFd

_launch(process_obj)
duplicate_for_child(fd)
method = 'spawn'

process module

class BaseProcess(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)

Bases: object

Process objects represent activity that is run in a separate process

The class is analogous to threading.Thread

_Popen()
__repr__()

Return repr(self).

static _after_fork()
_bootstrap(parent_sentinel=None)
_check_closed()
property authkey
close()

Close the Process object.

This method releases resources held by the Process object. It is an error to call this method if the child process is still running.

property daemon

Return whether process is a daemon

property exitcode

Return exit code of process or None if it has yet to stop

property ident

Return identifier (PID) of process or None if it has yet to start

is_alive()

Return whether process is alive

join(timeout=None)

Wait until child process terminates

kill()

Terminate process; sends SIGKILL signal or uses TerminateProcess()

property name
property pid

Return identifier (PID) of process or None if it has yet to start

run()

Method to be run in sub-process; can be overridden in sub-class

property sentinel

Return a file descriptor (Unix) or handle (Windows) suitable for waiting for process termination.

start()

Start child process

terminate()

Terminate process; sends SIGTERM signal or uses TerminateProcess()

active_children()

Return list of process objects corresponding to live child processes

current_process()

Return process object representing the current process

parent_process()

Return process object representing the parent process
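
Example: a minimal sketch of the Process lifecycle described above, using the top-level multiprocess.Process class:

    import multiprocess as mp
    import os

    def work(name):
        print(name, 'running in pid', os.getpid())

    if __name__ == '__main__':
        p = mp.Process(target=work, args=('child',), name='worker-1')
        p.start()
        print('child pid:', p.pid, 'alive:', p.is_alive())
        p.join(timeout=10)
        print('exit code:', p.exitcode)        # 0 on a clean exit
        p.close()                              # release the Process object's resources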

queues module

class JoinableQueue(maxsize=0, *, ctx)

Bases: Queue

__getstate__()
__setstate__(state)
join()
put(obj, block=True, timeout=None)
task_done()
class Queue(maxsize=0, *, ctx)

Bases: object

__getstate__()
__setstate__(state)
_after_fork()
static _feed(buffer, notempty, send_bytes, writelock, reader_close, writer_close, ignore_epipe, onerror, queue_sem)
static _finalize_close(buffer, notempty)
static _finalize_join(twr)
static _on_queue_feeder_error(e, obj)

Private API hook called when feeding data in the background thread raises an exception. For overriding by concurrent.futures.

_reset(after_fork=False)
_start_thread()
cancel_join_thread()
close()
empty()
full()
get(block=True, timeout=None)
get_nowait()
join_thread()
put(obj, block=True, timeout=None)
put_nowait(obj)
qsize()
class SimpleQueue(*, ctx)

Bases: object

__class_getitem__ = <bound method GenericAlias of <class 'multiprocess.queues.SimpleQueue'>>
__getstate__()
__setstate__(state)
close()
empty()
get()
put(obj)
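
Example: a minimal sketch of a JoinableQueue feeding a worker process and a Queue carrying results back; the None sentinel is an illustrative convention:

    import multiprocess as mp

    def worker(jobs, results):
        while True:
            item = jobs.get()
            if item is None:               # sentinel: stop the worker
                jobs.task_done()
                break
            results.put(item * 2)
            jobs.task_done()

    if __name__ == '__main__':
        jobs = mp.JoinableQueue()
        results = mp.Queue()
        p = mp.Process(target=worker, args=(jobs, results))
        p.start()
        for i in range(3):
            jobs.put(i)
        jobs.put(None)
        jobs.join()                        # returns once every item has been marked done
        for _ in range(3):
            print(results.get())           # 0, 2, 4
        p.join()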

reduction module

DupFd(fd)

Return a wrapper for an fd.

class ForkingPickler(*args, **kwds)

Bases: Pickler

Pickler subclass used by multiprocess.

This takes a binary file for writing a pickle data stream.

The optional protocol argument tells the pickler to use the given protocol; supported protocols are 0, 1, 2, 3, 4 and 5. The default protocol is 4. It was introduced in Python 3.4, and is incompatible with previous versions.

Specifying a negative protocol version selects the highest protocol version supported. The higher the protocol used, the more recent the version of Python needed to read the pickle produced.

The file argument must have a write() method that accepts a single bytes argument. It can thus be a file object opened for binary writing, an io.BytesIO instance, or any other custom object that meets this interface.

If fix_imports is True and protocol is less than 3, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2.

If buffer_callback is None (the default), buffer views are serialized into file as part of the pickle stream.

If buffer_callback is not None, then it can be called any number of times with a buffer view. If the callback returns a false value (such as None), the given buffer is out-of-band; otherwise the buffer is serialized in-band, i.e. inside the pickle stream.

It is an error if buffer_callback is not None and protocol is None or smaller than 5.

_copyreg_dispatch_table = {<class 'complex'>: <function pickle_complex>, <class 'types.UnionType'>: <function pickle_union>, <class 're.Pattern'>: <function _pickle>}
_extra_reducers = {<class 'method'>: <function _reduce_method>, <class 'method_descriptor'>: <function _reduce_method_descriptor>, <class 'wrapper_descriptor'>: <function _reduce_method_descriptor>, <class 'functools.partial'>: <function _reduce_partial>, <class 'socket.socket'>: <function _reduce_socket>, <class 'multiprocess.connection.Connection'>: <function reduce_connection>, <class 'multiprocess.heap.Arena'>: <function reduce_arena>, <class 'array.array'>: <function reduce_array>, <class 'dict_items'>: <function rebuild_as_list>, <class 'dict_keys'>: <function rebuild_as_list>, <class 'dict_values'>: <function rebuild_as_list>}
classmethod dumps(obj, protocol=None, *args, **kwds)
loads(ignore=None, **kwds)

Unpickle an object from a string.

If ignore=False then objects whose class is defined in the module __main__ are updated to reference the existing class in __main__, otherwise they are left to refer to the reconstructed type, which may be different.

Default values for keyword arguments can be set in dill.settings.

classmethod register(type, reduce)

Register a reduce function for a type.
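
Example: a minimal sketch of round-tripping an object that the standard pickler rejects, assuming loads() accepts the bytes produced by dumps() (multiprocess’s pickler is backed by dill):

    from multiprocess.reduction import ForkingPickler

    payload = ForkingPickler.dumps(lambda x: x + 1)   # bytes-like pickled data
    func = ForkingPickler.loads(bytes(payload))
    print(func(41))                                   # 42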

dump(obj, file, protocol=None, *args, **kwds)

Replacement for pickle.dump() using ForkingPickler.

recv_handle(conn)

Receive a handle over a local connection.

recvfds(sock, size)

Receive an array of fds over an AF_UNIX socket.

register(type, reduce)

Register a reduce function for a type.

send_handle(conn, handle, destination_pid)

Send a handle over a local connection.

sendfds(sock, fds)

Send an array of fds over an AF_UNIX socket.

resource_sharer module

class DupFd(fd)

Bases: object

Wrapper for fd which can be used at any time.

detach()

Get the fd. This should only be called once.

stop(timeout=None)

Stop the background thread and clear registered resources.

resource_tracker module

ensure_running()

Make sure that resource tracker process is running.

This can be run from any process. Usually a child process will use the resource created by its parent.

register(name, rtype)

Register name of resource with resource tracker.

unregister(name, rtype)

Unregister name of resource with resource tracker.

shared_memory module

Provides shared memory for direct access across processes.

The API of this package is currently provisional. Refer to the documentation for details.

class ShareableList(sequence=None, *, name=None)

Bases: object

Pattern for a mutable list-like object shareable via a shared memory block. It differs from the built-in list type in that these lists cannot change their overall length (i.e. no append, insert, etc.)

Because values are packed into a memoryview as bytes, the struct packing format for any storable value must require no more than 8 characters to describe its format.

__class_getitem__ = <bound method GenericAlias of <class 'multiprocess.shared_memory.ShareableList'>>
__getitem__(position)
__len__()
__reduce__()

Helper for pickle.

__repr__()

Return repr(self).

__setitem__(position, value)
_alignment = 8
_back_transforms_mapping = {0: <function ShareableList.<lambda>>, 1: <function ShareableList.<lambda>>, 2: <function ShareableList.<lambda>>, 3: <function ShareableList.<lambda>>}
static _extract_recreation_code(value)

Used in concert with _back_transforms_mapping to convert values into the appropriate Python objects when retrieving them from the list as well as when storing them.

property _format_back_transform_codes

The struct packing format used for the items’ back transforms.

property _format_packing_metainfo

The struct packing format used for the items’ packing formats.

property _format_size_metainfo

The struct packing format used for the items’ storage offsets.

_get_back_transform(position)

Gets the back transformation function for a single value.

_get_packing_format(position)

Gets the packing format for a single value stored in the list.

property _offset_back_transform_codes
property _offset_data_start
property _offset_packing_formats
_set_packing_format_and_transform(position, fmt_as_str, value)

Sets the packing format and back transformation code for a single value in the list at the specified position.

_types_mapping = {<class 'int'>: 'q', <class 'float'>: 'd', <class 'bool'>: 'xxxxxxx?', <class 'str'>: '%ds', <class 'bytes'>: '%ds', <class 'NoneType'>: 'xxxxxx?x'}
count(value) -> integer -- return number of occurrences of value.
property format

The struct packing format used by all currently stored items.

index(value) -> integer -- return first index of value.

Raises ValueError if the value is not present.

class SharedMemory(name=None, create=False, size=0)

Bases: object

Creates a new shared memory block or attaches to an existing shared memory block.

Every shared memory block is assigned a unique name. This enables one process to create a shared memory block with a particular name so that a different process can attach to that same shared memory block using that same name.

As a resource for sharing data across processes, shared memory blocks may outlive the original process that created them. When one process no longer needs access to a shared memory block that might still be needed by other processes, the close() method should be called. When a shared memory block is no longer needed by any process, the unlink() method should be called to ensure proper cleanup.

__del__()
__reduce__()

Helper for pickle.

__repr__()

Return repr(self).

_buf = None
_fd = -1
_flags = 2
_mmap = None
_mode = 384
_name = None
_prepend_leading_slash = True
property buf

A memoryview of contents of the shared memory block.

close()

Closes access to the shared memory from this instance but does not destroy the shared memory block.

property name

Unique name that identifies the shared memory block.

property size

Size in bytes.

unlink()

Requests that the underlying shared memory block be destroyed.

In order to ensure proper cleanup of resources, unlink should be called once (and only once) across all processes which have access to the shared memory block.
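
Example: a minimal sketch of creating a block, attaching to it by name from a child process, then calling close() in every process and unlink() exactly once; the size and payload are illustrative:

    import multiprocess as mp
    from multiprocess.shared_memory import SharedMemory

    def reader(name):
        shm = SharedMemory(name=name)      # attach to the existing block
        print(bytes(shm.buf[:5]))          # b'hello'
        shm.close()                        # this process is done with it

    if __name__ == '__main__':
        shm = SharedMemory(create=True, size=16)
        shm.buf[:5] = b'hello'
        p = mp.Process(target=reader, args=(shm.name,))
        p.start()
        p.join()
        shm.close()
        shm.unlink()                       # destroy the block once nobody needs it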

sharedctypes module

Array(typecode_or_type, size_or_initializer, *, lock=True, ctx=None)

Return a synchronization wrapper for a RawArray

RawArray(typecode_or_type, size_or_initializer)

Returns a ctypes array allocated from shared memory

RawValue(typecode_or_type, *args)

Returns a ctypes object allocated from shared memory

Value(typecode_or_type, *args, lock=True, ctx=None)

Return a synchronization wrapper for a Value

copy(obj)
synchronized(obj, lock=None, ctx=None)
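
Example: a minimal sketch of a shared Value and Array updated from a child process, using the equivalent top-level multiprocess.Value and multiprocess.Array wrappers; the typecodes 'i' (int) and 'd' (double) follow the ctypes/array conventions:

    import multiprocess as mp

    def work(counter, samples):
        with counter.get_lock():           # the synchronization wrapper carries a lock
            counter.value += 1
        for i in range(len(samples)):
            samples[i] = i * 0.5

    if __name__ == '__main__':
        counter = mp.Value('i', 0)
        samples = mp.Array('d', 4)
        p = mp.Process(target=work, args=(counter, samples))
        p.start()
        p.join()
        print(counter.value, list(samples))   # 1 [0.0, 0.5, 1.0, 1.5]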

spawn module

_main(fd, parent_sentinel)
freeze_support()

Run code for process object if this is not the main process

get_command_line(**kwds)

Returns prefix of command line used for spawning a child process

get_executable()
get_preparation_data(name)

Return info about parent needed by child to unpickle process object

import_main_path(main_path)

Set sys.modules[‘__main__’] to module at main_path

set_executable(exe)

synchronize module

class BoundedSemaphore(value=1, *, ctx)

Bases: Semaphore

__repr__()

Return repr(self).

class Condition(lock=None, *, ctx)

Bases: object

__enter__()
__exit__(*args)
__getstate__()
__repr__()

Return repr(self).

__setstate__(state)
_make_methods()
notify(n=1)
notify_all()
wait(timeout=None)
wait_for(predicate, timeout=None)
class Event(*, ctx)

Bases: object

clear()
is_set()
set()
wait(timeout=None)
class Lock(*, ctx)

Bases: SemLock

__repr__()

Return repr(self).

class RLock(*, ctx)

Bases: SemLock

__repr__()

Return repr(self).

class Semaphore(value=1, *, ctx)

Bases: SemLock

__repr__()

Return repr(self).

get_value()

util module

class Finalize(obj, callback, args=(), kwargs=None, exitpriority=None)

Bases: object

Class which supports object finalization using weakrefs

__call__(wr=None, _finalizer_registry={}, sub_debug=<function sub_debug>, getpid=<built-in function getpid>)

Run the callback unless it has already been called or cancelled

__repr__()

Return repr(self).

cancel()

Cancel finalization of the object

still_active()

Return whether this finalizer is still waiting to invoke callback

class ForkAwareLocal

Bases: _local

__reduce__()

Helper for pickle.

class ForkAwareThreadLock

Bases: object

__enter__()
__exit__(*args)
_at_fork_reinit()
close_all_fds_except(fds)
debug(msg, *args)
get_logger()

Returns logger used by multiprocess

get_temp_dir()
info(msg, *args)
is_exiting()

Returns true if the process is shutting down

log_to_stderr(level=None)

Turn on logging and add a handler which prints to stderr
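
Example: a minimal sketch of turning on multiprocess’s logging; the level is illustrative:

    import logging
    import multiprocess as mp
    from multiprocess import util

    def noop():
        pass

    if __name__ == '__main__':
        logger = util.log_to_stderr(logging.INFO)   # same logger that get_logger() returns
        logger.info('logging enabled')
        p = mp.Process(target=noop)
        p.start()
        p.join()                                    # child start/exit events are logged at this level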

register_after_fork(obj, func)
sub_debug(msg, *args)
sub_warning(msg, *args)