# Development
The `pympipool` package is developed based on the need to simplify the up-scaling of python functions over multiple
compute nodes. The project is under active development, so the difference between the individual interfaces might not
always be clearly defined. The `pympipool.Pool` interface is the oldest and consequently currently the most stable, but
at the same time also the most limited interface. The `pympipool.Executor` is the recommended interface for most
workflows, but it can be computationally less efficient than the `pympipool.PoolExecutor` interface for a large number
of serial python functions. Finally, the `pympipool.MPISpawnPool` is primarily a prototype of an alternative interface,
which is available for testing but typically not recommended, given the limitations of initiating new communicators.
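
As a rough illustration of how the recommended `pympipool.Executor` interface is meant to be used, the following sketch
submits a python function and collects the result through a `concurrent.futures.Future` object. The constructor
arguments used here (`max_workers`, `cores_per_worker`) and the `calc()` function are assumptions for this example and
may differ between `pympipool` versions; check the interface documentation for the exact signature.

```python
# Minimal sketch of submitting work through the Executor-style interface.
# NOTE: the constructor arguments (max_workers, cores_per_worker) are
# illustrative assumptions and may differ between pympipool versions.
from pympipool import Executor


def calc(i, j):
    # any serializable python function can be submitted
    return i + j


if __name__ == "__main__":
    with Executor(max_workers=1, cores_per_worker=1) as exe:
        future = exe.submit(calc, 1, 2)   # returns a concurrent.futures.Future
        print(future.result())            # blocks until the result is available
```

The `with` context together with the `submit()`/`result()` pattern follows the conventions of the standard library
`concurrent.futures` executors.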

## Contributions
Any feedback and contributions are welcome.

## Integration
The key functionality of the `pympipool` package is the up-scaling of python functions with thread-based parallelism,
MPI-based parallelism or by assigning GPUs to individual python functions. In the background this is realized using a
combination of the [zero message queue](https://zeromq.org) and [cloudpickle](https://github.com/cloudpipe/cloudpickle)
to communicate binary python objects. The `pympipool.communication.SocketInterface` is an abstraction of this interface,
which is used in the other classes inside `pympipool` and might also be helpful for other projects. It comes with a
series of utility functions (a communication sketch follows the list below):

* `pympipool.communication.interface_bootup()`: To initialize the interface
* `pympipool.communication.interface_connect()`: To connect the interface to another instance
* `pympipool.communication.interface_send()`: To send messages via this interface
* `pympipool.communication.interface_receive()`: To receive messages via this interface
* `pympipool.communication.interface_shutdown()`: To shut down the interface
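
To make the underlying mechanism concrete, the following sketch shows how a binary python object can be exchanged
between two sockets with `pyzmq` and `cloudpickle` directly, which is the pattern the utility functions above abstract.
The `PAIR` socket type, the random local port and the single-frame message layout are assumptions chosen for this
illustration, not the exact protocol used inside `pympipool.communication`.

```python
# Minimal sketch of exchanging a serialized python object over ZeroMQ.
# The PAIR socket type, the port and the single-frame message layout are
# illustrative assumptions, not the exact protocol used inside pympipool.
import cloudpickle
import zmq

context = zmq.Context()

# "server" side: bind a socket and wait for a pickled message
server = context.socket(zmq.PAIR)
port = server.bind_to_random_port("tcp://127.0.0.1")

# "client" side: connect to the same port and send a python object
client = context.socket(zmq.PAIR)
client.connect(f"tcp://127.0.0.1:{port}")
client.send(cloudpickle.dumps({"fn": sum, "args": ([1, 2, 3],)}))

# the receiving side restores the object and executes the function
task = cloudpickle.loads(server.recv())
print(task["fn"](*task["args"]))  # -> 6

# clean shutdown of sockets and context
client.close()
server.close()
context.term()
```

In `pympipool` this kind of exchange happens between the process that submits a function and the worker process on the
compute node, which is why `cloudpickle` is used: it can serialize functions and closures that the standard `pickle`
module cannot handle.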

## Alternative Projects
[dask](https://www.dask.org), [fireworks](https://materialsproject.github.io/fireworks/) and [parsl](http://parsl-project.org)
address similar challenges. On the one hand they are more restrictive when it comes to the assignment of resources to
a given worker for execution; on the other hand they provide support beyond the high performance computing (HPC)
environment.