Merged
6 changes: 6 additions & 0 deletions .ci_support/environment-docs.yml
@@ -0,0 +1,6 @@
channels:
- conda-forge
dependencies:
- nbsphinx
- sphinx
- myst-parser
28 changes: 28 additions & 0 deletions .readthedocs.yml
@@ -0,0 +1,28 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "mambaforge-4.10"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
formats: []

# Install pyiron from conda
conda:
  environment: .ci_support/environment-docs.yml

# Optionally set the version of Python and requirements required to build your docs
python:
  install:
    - method: pip
      path: .
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
35 changes: 35 additions & 0 deletions docs/make.bat
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
59 changes: 59 additions & 0 deletions docs/source/advanced.md
@@ -0,0 +1,59 @@
# Advanced Configuration
Initially, `pysqa` was designed to interact only with the local queuing system of an HPC cluster. This functionality has recently been extended to support remote HPC clusters in addition to local ones. These two developments, the support for remote HPC clusters and the support for multiple clusters in `pysqa`, are discussed in the following. Both features are under active development, so this part of the interface might change more frequently than the rest.

## Remote HPC Configuration
Remote clusters can be defined in the `queue.yaml` file by setting the `queue_type` to `REMOTE`:
```
queue_type: REMOTE
queue_primary: remote
ssh_host: hpc-cluster.university.edu
ssh_username: hpcuser
known_hosts: ~/.ssh/known_hosts
ssh_key: ~/.ssh/id_rsa
ssh_remote_config_dir: /u/share/pysqa/resources/queues/
ssh_remote_path: /u/hpcuser/remote/
ssh_local_path: /home/localuser/projects/
ssh_continous_connection: True
ssh_delete_file_on_remote: False
queues:
  remote: {cores_max: 100, cores_min: 10, run_time_max: 259200}
```
In addition to the `queue_type`, `queue_primary` and `queues` parameters, the configuration requires the following keywords:

* `ssh_host` the remote HPC login node to connect to
* `ssh_username` the username on the HPC login node
* `known_hosts` the local file of known hosts, which needs to contain the `ssh_host` defined above
* `ssh_key` the local key for the SSH connection
* `ssh_remote_config_dir` the `pysqa` configuration directory on the remote HPC cluster
* `ssh_remote_path` the remote directory on the HPC cluster to transfer calculations to
* `ssh_local_path` the local directory to transfer calculations from

And the following optional keywords:

* `ssh_delete_file_on_remote` specifies whether files on the remote HPC cluster should be deleted after they are transferred back to the local system - defaults to `True`
* `ssh_port` the port used for the SSH connection to the remote HPC cluster - defaults to `22`

A definition of the `queues` on the local system is required to enable the parameter checks locally. Still, it is sufficient to store the individual submission script templates only on the remote HPC cluster.
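As an illustration of these requirements, the following minimal sketch (not part of `pysqa`; the helper name `validate_remote_config` is hypothetical) checks that a remote configuration dictionary contains the required keywords and fills in the documented defaults for the optional ones:

```python
# Hypothetical validation helper for the REMOTE configuration keywords
# documented above -- a sketch, not the pysqa implementation.
REQUIRED_KEYS = {
    "queue_type", "queue_primary", "queues",
    "ssh_host", "ssh_username", "known_hosts", "ssh_key",
    "ssh_remote_config_dir", "ssh_remote_path", "ssh_local_path",
}
OPTIONAL_DEFAULTS = {"ssh_delete_file_on_remote": True, "ssh_port": 22}

def validate_remote_config(config: dict) -> dict:
    """Raise on missing required keywords, return config with defaults filled in."""
    missing = REQUIRED_KEYS - set(config)
    if missing:
        raise ValueError(f"missing required keywords: {sorted(missing)}")
    return {**OPTIONAL_DEFAULTS, **config}

config = {
    "queue_type": "REMOTE",
    "queue_primary": "remote",
    "ssh_host": "hpc-cluster.university.edu",
    "ssh_username": "hpcuser",
    "known_hosts": "~/.ssh/known_hosts",
    "ssh_key": "~/.ssh/id_rsa",
    "ssh_remote_config_dir": "/u/share/pysqa/resources/queues/",
    "ssh_remote_path": "/u/hpcuser/remote/",
    "ssh_local_path": "/home/localuser/projects/",
    "queues": {"remote": {"cores_max": 100, "cores_min": 10, "run_time_max": 259200}},
}
print(validate_remote_config(config)["ssh_port"])  # falls back to the default port 22
```

A real configuration would be loaded from the `queue.yaml` file rather than defined inline; the sketch only demonstrates which keywords are mandatory.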

## Access to Multiple HPCs
To support multiple remote HPC clusters, additional functionality was added to `pysqa`.

Namely, a `clusters.yaml` file can be defined in the configuration directory, which references multiple `queue.yaml` files for different clusters:
```
cluster_primary: local_slurm
cluster: {
  local_slurm: local_slurm_queues.yaml,
  remote_slurm: remote_queues.yaml
}
```
These `queue.yaml` files can again use all the functionality described previously, including the configuration of remote connections via SSH.

Furthermore, the `QueueAdapter` class was extended with the following two functions:
```
qa.list_clusters()
```
to list the clusters available in the configuration, and:
```
qa.switch_cluster(cluster_name)
```
to switch from one cluster to another, with `cluster_name` being the name of the cluster, like `local_slurm` or `remote_slurm` in the configuration above.
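The following self-contained sketch (a mock, not the actual `pysqa` implementation; the `ClusterConfig` class is hypothetical) models how such a `clusters.yaml` mapping can back the `list_clusters()` and `switch_cluster()` behaviour described above:

```python
# Hypothetical mock of the multi-cluster bookkeeping described above:
# a name -> queue.yaml mapping plus an "active cluster" pointer.
class ClusterConfig:
    def __init__(self, cluster_primary: str, cluster: dict):
        self._cluster = cluster
        self._active = cluster_primary  # start on the primary cluster

    def list_clusters(self) -> list:
        """Names of all clusters defined in the configuration."""
        return sorted(self._cluster)

    def switch_cluster(self, cluster_name: str) -> str:
        """Make cluster_name active and return its queue configuration file."""
        if cluster_name not in self._cluster:
            raise KeyError(f"unknown cluster: {cluster_name}")
        self._active = cluster_name
        return self._cluster[cluster_name]

cc = ClusterConfig(
    cluster_primary="local_slurm",
    cluster={
        "local_slurm": "local_slurm_queues.yaml",
        "remote_slurm": "remote_queues.yaml",
    },
)
print(cc.list_clusters())                 # ['local_slurm', 'remote_slurm']
print(cc.switch_cluster("remote_slurm"))  # remote_queues.yaml
```

In `pysqa` itself, switching additionally reloads the corresponding `queue.yaml` configuration; the sketch only captures the name-to-configuration dispatch.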
73 changes: 73 additions & 0 deletions docs/source/command.md
@@ -0,0 +1,73 @@
# Command Line Interface
The command line interface implements a subset of the functionality of the python interface. While it can be used locally to check the status of your calculations, the primary use case is accessing the `pysqa` installation on a remote HPC cluster from your local `pysqa` installation. Nevertheless, this page discusses the local execution of the commands.

The available options are:
* the submission of new jobs to the queuing system using the submit option `--submit`,
* enabling a reservation for an already submitted job using the reservation option `--reservation`,
* listing jobs on the queuing system using the status option `--status`,
* deleting a job from the queuing system using the delete option `--delete`,
* listing files in the working directory using the list option `--list`, and
* printing a summary of the available options using the help option `--help`.

## Submit job
Submitting jobs to the queuing system with the submit option `--submit` is similar to calling the submit job function `QueueAdapter().submit_job()`. Example call to submit the `hostname` command to the default queue:
```
python -m pysqa --submit --command hostname
```
The options used and their short forms are:
* `-p`, `--submit` the submit option enables the submission of a job to the queuing system
* `-c`, `--command` the command that is executed as part of the job

Additional options for the submission of the job with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.
* `-q`, `--queue` the queue the job is submitted to. If this option is not defined the `primary_queue` defined in the configuration is used.
* `-j`, `--job_name` the name of the job submitted to the queuing system.
* `-w`, `--working_directory` the working directory the job submitted to the queuing system is executed in.
* `-n`, `--cores` the number of cores used for the calculation. If the number of cores is not defined, the minimum number of cores defined for the selected queue is used.
* `-m`, `--memory` the memory used for the calculation.
* `-t`, `--run_time` the run time for the calculation. If the run time is not defined, the maximum run time defined for the selected queue is used.
* `-b`, `--dependency` other jobs the calculation depends on.
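The flags above can be combined into a single invocation. The following sketch (the helper `build_submit_call` is hypothetical, not part of `pysqa`) assembles such a command line programmatically, which is essentially what a wrapper calling the CLI on a remote cluster has to do:

```python
# Hypothetical helper assembling a `python -m pysqa --submit` invocation
# from the documented options; None means "use the queue's default".
import sys

def build_submit_call(command, queue=None, job_name=None,
                      working_directory=None, cores=None,
                      memory=None, run_time=None, dependency=None):
    call = [sys.executable, "-m", "pysqa", "--submit", "--command", command]
    for flag, value in [
        ("--queue", queue), ("--job_name", job_name),
        ("--working_directory", working_directory), ("--cores", cores),
        ("--memory", memory), ("--run_time", run_time),
        ("--dependency", dependency),
    ]:
        if value is not None:  # only emit flags that were explicitly set
            call += [flag, str(value)]
    return call

print(build_submit_call("hostname", cores=4))
```

Such a list can be handed directly to `subprocess.run()`; the sketch only shows how the documented flags map onto an argument vector.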

## Enable reservation
Enabling a reservation for a job already submitted to the queuing system using the reservation option `--reservation` is similar to calling the enable reservation function `QueueAdapter().enable_reservation()`. Example call to enable a reservation for the job with the id `123`:
```
python -m pysqa --reservation --id 123
```
The options used and their short forms are:
* `-r`, `--reservation` the reservation option enables a reservation for a specific job.
* `-i`, `--id` the id option specifies the job id of the job which should be added to the reservation.

Additional options for enabling the reservation with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.

## List jobs
The status option `--status` lists the calculations currently running and waiting on the queuing system for all users of the HPC cluster:
```
python -m pysqa --status
```
The options used and their short forms are:
* `-s`, `--status` the status option lists the status of all calculations currently running and waiting on the queuing system.

Additional options for listing jobs on the queuing system with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.

## Delete job
The delete job option `--delete` deletes a job from the queuing system:
```
python -m pysqa --delete --id 123
```
The options used and their short forms are:
* `-d`, `--delete` the delete option enables the deletion of a job from the queuing system.
* `-i`, `--id` the id option specifies the job id of the job which should be deleted.

Additional options for deleting jobs from the queuing system with their short forms are:
* `-f`, `--config_directory` the directory which contains the `pysqa` configuration, by default `~/.queues`.

## List files
The list files option `--list` lists the files in the working directory:
```
python -m pysqa --list --working_directory /path/on/remote/hpc
```
The options used and their short forms are:
* `-l`, `--list` the list files option lists the files in the working directory.
* `-w`, `--working_directory` the working directory defines the folder whose files are listed.

## Help
The help option `--help` prints a short version of this documentation page:
```
python -m pysqa --help
```
28 changes: 28 additions & 0 deletions docs/source/conf.py
@@ -0,0 +1,28 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = 'pysqa'
copyright = '2023, Jan Janssen'
author = 'Jan Janssen'
release = '0.0.22'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

extensions = ["myst_parser"]

templates_path = ['_templates']
exclude_patterns = []



# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

html_theme = 'alabaster'
html_static_path = ['_static']
9 changes: 9 additions & 0 deletions docs/source/debug.md
@@ -0,0 +1,9 @@
# Debugging
The configuration of a queuing system adapter, in particular in a remote setup where a local installation of `pysqa` communicates with a remote installation on your HPC cluster, can be tricky. To simplify the process, `pysqa` provides a series of utility functions:
* Log in to the remote HPC cluster and import `pysqa` in a Python shell.
* Validate the queue configuration by importing the queue adapter using `from pysqa import QueueAdapter`, then initialize the object from the configuration directory: `qa = QueueAdapter(directory="~/.queues")`. The current configuration can be printed using `qa.config`.
* Try to submit a calculation which prints the hostname from the Python shell on the remote HPC cluster using `qa.submit_job(command="hostname")`.
* If this works successfully, the next step is to try the same on the command line using `python -m pysqa --submit --command hostname`.

This is the same command the local `pysqa` instance calls on the `pysqa` instance on the remote HPC cluster. So if the steps above were executed successfully, the remote HPC configuration is likely correct. The final step is validating the local configuration to verify that the SSH connection can be successfully established and maintained.

42 changes: 42 additions & 0 deletions docs/source/index.rst
@@ -0,0 +1,42 @@
.. pysqa documentation master file, created by
sphinx-quickstart on Thu May 4 14:01:49 2023.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.

pysqa - a simple queue adapter for python
=========================================

High-performance computing (HPC) does not have to be hard. In this context, the aim of pysqa is to make the submission of calculations to an HPC cluster as easy as starting another subprocess locally. This is based on the assumption that even though modern HPC queuing systems offer a wide range of configuration options, most users submit the majority of their jobs with very similar parameters.

Therefore, in pysqa users define submission script templates once and reuse them to submit many different calculations or workflows. These templates are written in the jinja2 template language, so existing submission scripts can easily be converted into templates. In addition to the submission of new jobs to the queuing system, pysqa also allows users to track the progress of their jobs, delete them or enable reservations using the built-in functionality of the queuing system.

Features
--------
The core feature of pysqa is the communication with an HPC queuing system. This includes:

* Submission of new calculations to the queuing system.
* Listing of calculations currently waiting or running on the queuing system.
* Deletion of calculations which are currently waiting or running on the queuing system.
* Listing of the available queue templates created by the user.
* Restriction of templates to a specific number of cores, run time or other computing resources, with integrated checks whether a given calculation follows these restrictions.

In addition to these core features, pysqa is continuously extended to support more use cases for a larger group of users. These new features include the support for remote queuing systems:

* Remote connection via the secure shell (SSH) protocol to access remote HPC clusters.
* Transfer of files to and from remote HPC clusters, based on a predefined mapping of the remote file system onto the local file system.
* Support for both individual connections and continuous connections, depending on the network availability.

Finally, work is currently in progress to support a combination of multiple local and remote queuing systems from within pysqa, which are presented to the user as a single resource.

Documentation
-------------

.. toctree::
:maxdepth: 2

installation
queue
python
command
advanced
debug
24 changes: 24 additions & 0 deletions docs/source/installation.md
@@ -0,0 +1,24 @@
# Installation
The `pysqa` package can be installed either via `pip` or `conda`. While most HPC systems use Linux these days, the `pysqa` package can be installed on all major operating systems. In particular, for connections to remote HPC clusters it is required to install `pysqa` on both the local system and the remote HPC cluster. In this case it is highly recommended to use the same version of `pysqa` on both systems.

## pypi-based installation
`pysqa` can be installed from the Python Package Index (PyPI) using the following command:
```
pip install pysqa
```
On PyPI the `pysqa` package exists in three different variants:

* `pip install pysqa` - base version - with minimal requirements, only depends on `jinja2`, `pandas` and `pyyaml`.
* `pip install pysqa[sge]` - Sun Grid Engine (SGE) version - in addition to the base dependencies this installs `defusedxml`, which is required to parse the `xml` files from `qstat`.
* `pip install pysqa[remote]` - remote version - in addition to the base dependencies this installs `paramiko` and `tqdm` to connect to remote HPC clusters using SSH and report the progress of the data transfer visually.
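To check which of these optional dependency sets are importable in the current environment, a small sketch (the helper `available_extras` is hypothetical, not part of `pysqa`) can probe for them without importing anything:

```python
# Hypothetical check of the optional dependencies behind the pysqa extras
# listed above; find_spec() returns None when a module is not installed.
from importlib.util import find_spec

EXTRA_DEPENDENCIES = {
    "sge": ["defusedxml"],           # parsing the xml output of qstat
    "remote": ["paramiko", "tqdm"],  # SSH connection and transfer progress
}

def available_extras() -> dict:
    """Map each extra to whether all of its dependencies are importable."""
    return {
        extra: all(find_spec(mod) is not None for mod in modules)
        for extra, modules in EXTRA_DEPENDENCIES.items()
    }

print(available_extras())
```

The result depends on the environment, so no particular output is guaranteed; a `False` entry simply means the corresponding extra has not been installed.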

## conda-based installation
The `conda` package combines all dependencies in one package:
```
conda install -c conda-forge pysqa
```
If resolving the dependencies with `conda` gets slow, it is recommended to use `mamba` instead. In this case `pysqa` can be installed using:
```
mamba install -c conda-forge pysqa
```
