Plugin Reference

In this chapter we describe the different plugins bundled with Fastr (e.g. IOPlugins, ExecutionPlugins). The reference is built automatically from code, so after installing a new plugin the documentation has to be rebuilt for it to be included in the docs.

CollectorPlugin Reference

CollectorPlugins are used for finding and collecting the output data of the outputs that are part of a FastrInterface.


JsonCollector

The JsonCollector plugin allows a program to print out its result in a pre-defined JSON format, which fastr then uses as the output values.

It works as follows:

  1. The location of the output is taken

  2. If the location is None, go to step 5

  3. The substitutions are performed on the location field (see below)

  4. The location is used as a regular expression and matched to the stdout line by line

  5. The matched string (or the entire stdout if location is None) is loaded as JSON

  6. The data is parsed by set_result

The structure of the JSON has to follow a predefined format. For normal Nodes the format is of the form:

[value1, value2, value3]

where the multiple values represent the cardinality.

For FlowNodes the format is of the form:

{
  'sample_id1': [value1, value2, value3],
  'sample_id2': [value4, value5, value6]
}

This allows the tool to create multiple output samples in a single run.
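Steps 4 to 6 above can be sketched in plain Python. The tool output and the RESULT= location pattern below are made up for illustration; the real plugin works inside the FastrInterface machinery:

```python
import json
import re

# Illustrative stdout of a tool that prints its result in the
# pre-defined JSON format (not actual Fastr API).
stdout = """\
starting computation...
RESULT=[1.0, 2.0, 3.0]
done
"""

# The 'location' field is used as a regular expression that is matched
# against stdout line by line; the matched group is loaded as JSON.
location = r"RESULT=(\[.*\])"

values = None
for line in stdout.splitlines():
    match = re.search(location, line)
    if match:
        # The list represents the cardinality of the output.
        values = json.loads(match.group(1))
        break

print(values)  # [1.0, 2.0, 3.0]
```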

PathCollector

The PathCollector plugin for the FastrInterface uses the location field to find data on the filesystem. To use this plugin, the method of the output has to be set to path.

It works as follows:

  1. The location field is taken from the output

  2. The substitutions are performed on the location field (see below)

  3. The updated location field will be used as a regular expression filter

  4. The filesystem is scanned for all matching files/directory

The special substitutions performed on the location use the Format Specification Mini-Language. The predefined fields that can be used are:

  • inputs, an object with the input values (use like {inputs.image[0]}). The inputs contain the following attributes that you can access:

    • .directory for the directory name (use like {inputs.image[0].directory}). The directory is the same as the result of os.path.dirname

    • .filename is the result of os.path.basename on the path

    • .basename for the base name (use like {inputs.image[0].basename}). The basename is the same as the result of os.path.basename with the extension stripped. The extension is considered to be everything after the first dot in the filename.

    • .extension for the extension (use like {inputs.image[0].extension})

  • outputs, an object with the output values (use like {outputs.result[0]}). It contains the same attributes as the inputs

  • special, which has two subfields:

    • special.cardinality, the index of the current cardinality

    • special.extension, the extension for the output DataType

Example use:

<output ... method="path" location="{output.directory[0]}/TransformParameters.{special.cardinality}.{special.extension}"/>

Given the output directory ./nodeid/sampleid/result, the second sample in the output and a filetype with a txt extension, this would be translated into:

<output ... method="path" location="./nodeid/sampleid/result/TransformParameters.1.txt"/>
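The substitute-then-scan behaviour can be sketched in plain Python. The Special class and the directory layout are illustrative stand-ins for the objects Fastr supplies during execution:

```python
import os
import re
import tempfile

# Stand-in for the 'special' substitution object Fastr provides.
class Special:
    cardinality = 1
    extension = "txt"

# Create an example output directory containing a matching file.
output_dir = tempfile.mkdtemp()
with open(os.path.join(output_dir, "TransformParameters.1.txt"), "w"):
    pass

# Step 2: substitutions via the Format Specification Mini-Language.
location = "{directory}/TransformParameters.{special.cardinality}.{special.extension}"
pattern = location.format(directory=re.escape(output_dir), special=Special)

# Steps 3-4: use the substituted location as a regex filter over the filesystem.
matches = [os.path.join(output_dir, name)
           for name in os.listdir(output_dir)
           if re.fullmatch(pattern, os.path.join(output_dir, name))]
print(matches)
```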

StdoutCollector

The StdoutCollector can collect data from the stdout stream of a program. It filters the stdout line by line matching a predefined regular expression.

It works as follows:

  1. The location field is taken from the output

  2. The substitutions are performed on the location field (see below)

  3. The updated location field will be used as a regular expression filter

  4. The stdout is scanned line by line and the regular expression filter is applied

The special substitutions performed on the location use the Format Specification Mini-Language. The predefined fields that can be used are:

  • inputs, an object with the input values (use like {inputs.image[0]})

  • outputs, an object with the output values (use like {outputs.result[0]})

  • special which has two subfields:

    • special.cardinality, the index of the current cardinality

    • special.extension, is the extension for the output DataType

Note

Because the plugin scans line by line, it is impossible to capture multi-line output in a single value.
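The line-by-line matching can be sketched as follows; the stdout text and the location regex are made up for illustration:

```python
import re

# Illustrative stdout of a tool.
stdout = """\
iteration 1: metric = 0.42
iteration 2: metric = 0.21
final metric = 0.13
"""

# Hypothetical 'location' field after substitution: a regex applied per line.
location = r"final metric = (?P<value>[\d.]+)"

# Scan stdout line by line and collect the matched values.
values = [m.group("value")
          for line in stdout.splitlines()
          if (m := re.search(location, line))]
print(values)  # ['0.13']
```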

ExecutionPlugin Reference

This class is the base for all plugins that execute jobs somewhere. Many methods are already in place to take care of the common bookkeeping.

There are fall-backs for certain features, but if a system already implements those it is usually preferred to skip the fall-back and let the external system handle it. There are a few flags to enable/disable these features:

  • cls.SUPPORTS_CANCEL indicates that the plugin can cancel queued jobs

  • cls.SUPPORTS_HOLD_RELEASE indicates that the plugin can queue jobs in a hold state and can release them again (if not, the base plugin will create a hidden queue for held jobs). The plugin should respect the Job.status == JobState.hold when queueing jobs.

  • cls.SUPPORTS_DEPENDENCY indicates that the plugin can manage job dependencies; if not, the base plugin job dependency system will be used and jobs will only be submitted when all dependencies are met.

  • cls.CANCELS_DEPENDENCIES indicates that if a job is cancelled, it will automatically cancel all jobs depending on that job. If not, the base plugin will traverse the dependency graph and cancel each job manually.

    Note

    If a plugin supports dependencies it is assumed that when a job gets cancelled, the depending jobs also get cancelled automatically!

Most plugins should only need to redefine a few abstract methods:

  • __init__ the constructor

  • cleanup a clean up function that frees resources, closes connections, etc

  • _queue_job the method that queues the job for execution

Optionally an extra job finished callback could be added:

  • _job_finished extra callback for when a job finishes

If SUPPORTS_CANCEL is set to True, the plugin should also implement:

  • _cancel_job cancels a previously queued job

If SUPPORTS_HOLD_RELEASE is set to True, the plugin should also implement:

  • _hold_job holds a job that is currently queued

  • _release_job releases a job that is currently held

If SUPPORTS_DEPENDENCY is set to True, the plugin should:

  • Make sure to use the Job.hold_jobs as a list of its dependencies

Not all of these functions need to actually do anything for a plugin. There are examples of plugins that do not really need a cleanup, but for safety you need to implement it anyway; just using a pass for the method body is fine in such a case.
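A minimal sketch of what such a plugin could look like. The base class here is a stand-in that only mimics the hooks listed above, not the real fastr ExecutionPlugin, and the job is simplified to a plain command list:

```python
import subprocess
import sys

class ExecutionPluginBase:
    """Stand-in mimicking the hooks described above; the real fastr
    ExecutionPlugin does far more bookkeeping."""
    SUPPORTS_CANCEL = False
    SUPPORTS_HOLD_RELEASE = False
    SUPPORTS_DEPENDENCY = False
    CANCELS_DEPENDENCIES = False

class InlineExecution(ExecutionPluginBase):
    """Hypothetical plugin that executes each job inline (like BlockingExecution)."""

    def __init__(self):
        self.finished = []

    def cleanup(self):
        # Nothing to free for inline execution; a bare pass is acceptable here.
        pass

    def _queue_job(self, job):
        # 'job' is simplified to a command list for this sketch.
        result = subprocess.run(job, capture_output=True, text=True)
        self._job_finished(job, result.returncode)

    def _job_finished(self, job, returncode):
        # Optional extra callback for when a job finishes.
        self.finished.append((job, returncode))

plugin = InlineExecution()
plugin._queue_job([sys.executable, "-c", "print('ok')"])
print(plugin.finished[0][1])  # 0
```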

Warning

When overwriting other functions, extreme care must be taken not to break the plugin's working, as there is a lot of bookkeeping that can go wrong.


BlockingExecution

The blocking execution plugin is a special plugin which is meant for debug purposes. It will not queue jobs but immediately execute them inline, effectively blocking fastr until the Job is finished. It is the simplest execution plugin and can be used as a template for new plugins or for testing purposes.

DRMAAExecution

A DRMAA execution plugin to execute Jobs on a Grid Engine cluster. It uses a configuration option for selecting the queue to submit to. It uses the python drmaa package.

Note

To use this plugin, make sure the drmaa package is installed and that the execution is started on an SGE submit host with DRMAA libraries installed.

Note

This plugin is at the moment tailored to SGE, but it should be fairly easy to make different subclasses for different DRMAA supporting systems.

Configuration fields

The following configuration fields are added to the fastr config:

  • drmaa_queue (str, default: 'week'): The default queue to use for jobs sent to the scheduler

  • drmaa_max_jobs (int, default: 0): The maximum number of jobs that can be sent to the scheduler at the same time (0 for no limit)

  • drmaa_engine (str, default: 'grid_engine'): The engine to use (options: grid_engine, torque)

  • drmaa_job_check_interval (int, default: 900): The interval (in seconds) at which the job checker checks for stale jobs

  • drmaa_num_undetermined_to_fail (int, default: 3): Number of consecutive times a job state has to be undetermined before the job is considered to have failed

LinearExecution

An execution engine that has a background thread that executes the jobs in order. The queue is a simple FIFO queue and there is one worker thread that operates in the background. This plugin is meant as a fallback when other plugins do not function properly. It does not use multiprocessing, so it is safe to use in environments that do not support it.

ProcessPoolExecution

A local execution plugin that uses multiprocessing to create a pool of worker processes. This allows fastr to execute jobs in parallel with true concurrency. The number of workers can be specified in the fastr configuration, but the default amount is the number of cores - 1 with a minimum of 1.

Warning

The ProcessPoolExecution does not check memory requirements of jobs and running many workers might lead to memory starvation and thus an unresponsive system.
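The default worker count (cores - 1, minimum 1) can be sketched with the standard multiprocessing module; run_job is a placeholder, not Fastr API:

```python
import multiprocessing

def run_job(job_id):
    # Placeholder for executing a single fastr job.
    return job_id * job_id

# Default worker count described above: number of cores - 1, minimum 1.
workers = max(multiprocessing.cpu_count() - 1, 1)

# On platforms using the 'spawn' start method (Windows, macOS) this
# should be guarded with: if __name__ == "__main__":
with multiprocessing.Pool(processes=workers) as pool:
    results = pool.map(run_job, range(5))
print(results)  # [0, 1, 4, 9, 16]
```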

Configuration fields

The following configuration fields are added to the fastr config:

  • process_pool_worker_number (int, default: 3): Number of workers to use in a process pool

RQExecution

An execution plugin based on Redis Queue. Fastr will submit jobs to the redis queue and workers will pick the jobs from the queue and process them.

This system requires a running redis database and the database url has to be set in the fastr configuration.

Note

This execution plugin requires the redis and rq packages to be installed before it can be loaded properly.

Configuration fields

The following configuration fields are added to the fastr config:

  • rq_host (str, default: 'redis://localhost:6379/0'): The URL of the redis server serving the redis queue

  • rq_queue (str, default: 'default'): The redis queue to use

SlurmExecution

Configuration fields

The following configuration fields are added to the fastr config:

  • slurm_job_check_interval (int, default: 30): The interval (in seconds) at which the job checker checks for stale jobs

  • slurm_partition (str, default: ''): The slurm partition to use

StrongrExecution

An execution plugin based on Redis Queue. Fastr will submit jobs to the redis queue and workers will pick the jobs from the queue and process them.

This system requires a running redis database and the database url has to be set in the fastr configuration.

Note

This execution plugin requires the redis and rq packages to be installed before it can be loaded properly.

FlowPlugin Reference

Plugin that can manage an advanced data flow. The plugins override the execution of a node. The execution receives all data of a node in one go: not split per sample combination, but all data on all inputs in one large payload. The flow plugin can then re-order the data and create resulting samples as it sees fit. This can be used for all kinds of specialized data flows, e.g. cross validation.

To create a new FlowPlugin there is only one method that needs to be implemented: execute.


CrossValidation

Advanced flow plugin that generates a cross-validation data flow. The node needs an input with data and an input with the number of folds. Based on these, the outputs test and train will be supplied with a number of data sets.

IOPlugin Reference

IOPlugins are used for data import and export for the sources and sinks. The main use of the IOPlugins is during execution (see Execution). The IOPlugins can be accessed via fastr.ioplugins, but generally there should be no need for direct interaction with these objects. They are used mainly via the URLs that specify source and sink data.


CommaSeperatedValueFile

The CommaSeperatedValueFile is an expand-only type of IOPlugin: no URLs can actually be fetched, but it can expand a single URL into a larger number of URLs.

The csv:// URL is a vfs:// URL with a number of query variables available. The URL mount and path should point to a valid CSV file. The query variables then specify which column(s) of the file should be used.

The following variables can be set in the query:

  • value: the column containing the value of interest; can be an int for an index or a string for a column key

  • id: the column containing the sample id (optional)

  • header: indicates if the first row is considered the header, can be true or false (optional)

  • delimiter: the delimiter used in the csv file (optional)

  • quote: the quote character used in the csv file (optional)

  • reformat: a reformatting string so that value = reformat.format(value) (used before relative_path)

  • relative_path: indicates the entries are relative paths (for files), can be true or false (optional)

The header is by default false, unless the value or id is set as a string. If either of these is a string, the header is required to define the column names and it is automatically assumed to be true.

The delimiter and quote characters of the file should be detected automatically using the Sniffer, but can be forced by setting them in the URL.

Example of valid csv URLs:

# Use the first column in the file (no header row assumed)
csv://mount/some/dir/file.csv?value=0

# Use the images column in the file (first row is assumed header row)
csv://mount/some/dir/file.csv?value=images

# Use the segmentations column in the file (first row is assumed header row)
# and use the id column as the sample id
csv://mount/some/dir/file.csv?value=segmentations&id=id

# Use the first column as the id and the second column as the value
# and skip the first row (considered the header)
csv://mount/some/dir/file.csv?value=1&id=0&header=true

# Use the first column and force the delimiter to be a comma
csv://mount/some/dir/file.csv?value=0&delimiter=,
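A csv:// URL can be dissected with the standard urllib.parse module. This sketch only shows how the mount, path, and query variables are separated; the real plugin then reads the CSV file accordingly:

```python
from urllib.parse import urlparse, parse_qs

url = "csv://mount/some/dir/file.csv?value=segmentations&id=id"
parsed = urlparse(url)

# Flatten the single-valued query parameters.
query = {key: values[0] for key, values in parse_qs(parsed.query).items()}

print(parsed.netloc)  # the vfs mount: 'mount'
print(parsed.path)    # '/some/dir/file.csv'
print(query)          # {'value': 'segmentations', 'id': 'id'}
# Since 'value' is a string (a column key), the first row is assumed
# to be the header row, as described above.
```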

FileSystem

The FileSystem plugin is created to handle file:// type URLs. This is generally not good practice, as it is not portable between machines. However, for test purposes it might be useful.

The URL scheme is rather simple: file://host/path (see wikipedia for details)

We do not use the host part and at the moment only support localhost (just leave the host empty), leading to file:/// URLs.

Warning

This plugin ignores the hostname in the URL and only accepts drive letters on Windows in the form c:/

HTTPPlugin

Warning

This Plugin is still under development and has not been tested at all. example url: https://server.io/path/to/resource

Null

The Null plugin is created to handle null:// type URLs. These URLs indicate that the sink should not do anything: the data is not written anywhere. Besides the scheme, the rest of the URL is ignored.

Reference

The Reference plugin is created to handle ref:// type URLs. These URLs make the sink write a simple reference file for the data. The reference file contains the DataType and the value so the result can be reconstructed. For files it just leaves the data on disk by reference. This plugin is not useful for production, but is used for testing purposes.

S3Filesystem

Warning

As this IOPlugin is under development, it has not been thoroughly tested.

example url: s3://bucket.server/path/to/resource

VirtualFileSystem

The virtual file system class. This is an IOPlugin, but also heavily used internally in fastr for working with directories. The VirtualFileSystem uses the vfs:// url scheme.

A typical virtual filesystem url is formatted as vfs://mountpoint/relative/dir/from/mount.ext

Where the mountpoint is defined in the Config file. A list of the currently known mountpoints can be found in the fastr.config object

>>> fastr.config.mounts
{'example_data': '/home/username/fastr-feature-documentation/fastr/fastr/examples/data',
 'home': '/home/username/',
 'tmp': '/home/username/FastrTemp'}

This shows that a url with the mount home such as vfs://home/tempdir/testfile.txt would be translated into /home/username/tempdir/testfile.txt.
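The translation can be sketched as follows; vfs_to_path is a hypothetical helper, not the Fastr API:

```python
import os.path
from urllib.parse import urlparse

# Example mount table as it could appear in fastr.config.mounts.
mounts = {"home": "/home/username/"}

def vfs_to_path(url, mounts):
    """Translate a vfs:// URL into a local path using the mount table (sketch)."""
    parsed = urlparse(url)
    assert parsed.scheme == "vfs"
    # The netloc is the mountpoint; the path is relative to the mount.
    return os.path.join(mounts[parsed.netloc], parsed.path.lstrip("/"))

print(vfs_to_path("vfs://home/tempdir/testfile.txt", mounts))
# /home/username/tempdir/testfile.txt
```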

There are a few default mount points defined by Fastr (that can be changed via the config file).

  • home: the user's home directory (expanduser('~/'))

  • tmp: the fastr temporary dir, defaults to tempfile.gettempdir()

  • example_data: the fastr example data directory, defaults to $FASTRDIR/example/data

VirtualFileSystemRegularExpression

The VirtualFileSystemRegularExpression is an expand-only type of IOPlugin: no URLs can actually be fetched, but it can expand a single URL into a larger number of URLs.

A vfsregex:// URL is a vfs URL that can contain regular expressions on every level of the path. The regular expressions follow the re module definitions.

Examples of valid URLs would be:

vfsregex://tmp/network_dir/.*/.*/__fastr_result__.pickle.gz
vfsregex://tmp/network_dir/nodeX/(?P<id>.*)/__fastr_result__.pickle.gz

The first URL would result in all the __fastr_result__.pickle.gz files in the working directory of a Network. The second URL would only result in the file for a specific node (nodeX), but by adding the named group id using (?P<id>.*), the sample id of the data is automatically set to that group (see Regular Expression Syntax under the special characters for more info on named groups in regular expressions).

Concretely if we would have a directory vfs://mount/somedir containing:

image_1/Image.nii
image_2/image.nii
image_3/anotherimage.nii
image_5/inconsistentnamingftw.nii

we could match these files using vfsregex://mount/somedir/(?P<id>image_\d+)/.*\.nii which would result in the following source data after expanding the URL:

{'image_1': 'vfs://mount/somedir/image_1/Image.nii',
 'image_2': 'vfs://mount/somedir/image_2/image.nii',
 'image_3': 'vfs://mount/somedir/image_3/anotherimage.nii',
 'image_5': 'vfs://mount/somedir/image_5/inconsistentnamingftw.nii'}

This shows the power of regular expression filtering, and how the id group from the URL can be used to assign sensible sample ids.
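The matching with a named id group can be sketched with the standard re module; the paths are the example directory listing above:

```python
import re

# Relative paths under the example vfs://mount/somedir directory.
paths = [
    "image_1/Image.nii",
    "image_2/image.nii",
    "image_3/anotherimage.nii",
    "image_5/inconsistentnamingftw.nii",
]

# The pattern from the example, with a named group for the sample id.
pattern = re.compile(r"(?P<id>image_\d+)/.*\.nii")

# Build the source data mapping: sample id -> full vfs URL.
samples = {m.group("id"): "vfs://mount/somedir/" + m.group(0)
           for path in paths
           if (m := pattern.fullmatch(path))}
print(samples)
```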

Warning

Due to the nature of regular expressions on multiple levels, this method can be slow when there are many matches on the lower levels of the path (because the tree of potential matches grows) or when directories that are part of the path are very large.

VirtualFileSystemValueList

The VirtualFileSystemValueList is an expand-only type of IOPlugin: no URLs can actually be fetched, but it can expand a single URL into a larger number of URLs. A vfslist:// URL is basically a URL that points to a file using vfs. This file then contains a number of lines, each containing another URL.

If the contents of a file vfs://mount/some/path/contents would be:

vfs://mount/some/path/file1.txt
vfs://mount/some/path/file2.txt
vfs://mount/some/path/file3.txt
vfs://mount/some/path/file4.txt

Then using the URL vfslist://mount/some/path/contents as source data would result in the four files being pulled.

Note

The URLs in a vfslist file do not have to use the vfs scheme, but can use any scheme known to the Fastr system.
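The expansion itself is simple; a sketch using an in-memory file for illustration:

```python
import io

# Contents of the file the vfslist:// URL points to (one URL per line).
contents = io.StringIO("""\
vfs://mount/some/path/file1.txt
vfs://mount/some/path/file2.txt
""")

# Expanding the URL amounts to reading the non-empty lines.
urls = [line.strip() for line in contents if line.strip()]
print(urls)
```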

XNATStorage

Warning

As this IOPlugin is under development, it has not been thoroughly tested.

The XNATStorage plugin is an IOPlugin that can download data from and upload data to an XNAT server. It uses its own xnat:// URL scheme. This scheme is specific to this plugin and, though it looks somewhat like the XNAT REST interface, is a different type of URL.

Data resources can be accessed directly by a data URL:

xnat://xnat.example.com/data/archive/projects/sandbox/subjects/subject001/experiments/experiment001/scans/T1/resources/DICOM
xnat://xnat.example.com/data/archive/projects/sandbox/subjects/subject001/experiments/*_BRAIN/scans/T1/resources/DICOM

In the second URL you can see a wildcard being used. This is possible as long as it resolves to exactly one item.

The id query element will change the field from the default experiment to subject and the label query element sets the use of the label as the fastr id (instead of the XNAT id) to True (the default is False)

To disable https transport and use http instead, the query string can be modified to add insecure=true. This will make the plugin send requests over http:

xnat://xnat.example.com/data/archive/projects/sandbox/subjects/subject001/experiments/*_BRAIN/scans/T1/resources/DICOM?insecure=true

For sinks it is important to know where to save the data. Sometimes you want to save data in a new assessor/resource and it needs to be created. To allow the Fastr sink to create an object in XNAT, you have to supply the type as a query parameter:

xnat://xnat.bmia.nl/data/archive/projects/sandbox/subjects/S01/experiments/_BRAIN/assessors/test_assessor/resources/IMAGE/files/image.nii.gz?resource_type=xnat:resourceCatalog&assessor_type=xnat:qcAssessmentData

Valid options are: subject_type, experiment_type, assessor_type, scan_type, and resource_type.

If you want to do a search where multiple resources are returned, it is possible to use a search url:

xnat://xnat.example.com/search?projects=sandbox&subjects=subject[0-9][0-9][0-9]&experiments=*_BRAIN&scans=T1&resources=DICOM

This will return all DICOMs for the T1 scans of experiments that end with _BRAIN and belong to a subjectXXX, where XXX is a 3-digit number. By default the ID for the samples will be the experiment XNAT ID (e.g. XNAT_E00123). The wildcards that can be used are the same UNIX shell-style wildcards as provided by the fnmatch module.
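The shell-style matching can be illustrated with the standard fnmatch module; the experiment names below are made up:

```python
import fnmatch

# Hypothetical experiment labels on an XNAT server.
experiments = ["XNAT_BRAIN", "CT_BRAIN", "MR_SPINE", "subject001_BRAIN"]

# The same wildcard as in the search URL above.
matching = fnmatch.filter(experiments, "*_BRAIN")
print(matching)  # ['XNAT_BRAIN', 'CT_BRAIN', 'subject001_BRAIN']
```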

It is possible to change the id to a different field's id or label. Valid fields are project, subject, experiment, scan, and resource:

xnat://xnat.example.com/search?projects=sandbox&subjects=subject[0-9][0-9][0-9]&experiments=*_BRAIN&scans=T1&resources=DICOM&id=subject&label=true

The following variables can be set in the search query:

  • projects (default: *): The project(s) to select, can contain wildcards (see fnmatch)

  • subjects (default: *): The subject(s) to select, can contain wildcards (see fnmatch)

  • experiments (default: *): The experiment(s) to select, can contain wildcards (see fnmatch)

  • scans (default: *): The scan(s) to select, can contain wildcards (see fnmatch)

  • resources (default: *): The resource(s) to select, can contain wildcards (see fnmatch)

  • id (default: experiment): What field to use as the id; can be project, subject, experiment, scan, or resource

  • label (default: false): Indicates the XNAT label should be used as the fastr id, options true or false

  • insecure (default: false): Change the URL scheme used to http instead of https

  • verify (default: true): (Dis)able the verification of SSL certificates

  • regex (default: false): Change the search to use re.match() instead of fnmatch for matching

  • overwrite (default: false): Tell XNAT to overwrite existing files if a file with the same name is already present

For storing credentials the .netrc file can be used. This is a common way to store credentials on UNIX systems. It is required that the file is accessible by the owner only, or a NetrcParseError will be raised. A netrc file is really easy to create, as its entries look like:

machine xnat.example.com
        login username
        password secret123

See the netrc module or the GNU inet utils website for more information about the .netrc file.
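Reading such credentials can be sketched with the standard netrc module; the host and credentials are the example values above, written to a temporary file for illustration:

```python
import netrc
import os
import stat
import tempfile

# Write an example netrc file (the host and credentials are made up).
path = os.path.join(tempfile.mkdtemp(), "netrc")
with open(path, "w") as fh:
    fh.write("machine xnat.example.com\n"
             "        login username\n"
             "        password secret123\n")
# Restrict the file to the owner, as required for credential files.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# authenticators() returns a (login, account, password) tuple.
login, _, password = netrc.netrc(path).authenticators("xnat.example.com")
print(login, password)  # username secret123
```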

Note

On Windows the location of the netrc file is assumed to be os.path.expanduser('~/_netrc'). The leading underscore is used because Windows does not like filenames starting with a dot.

Note

For scans the label will be the scan type (this is initially the same as the series description, but can be updated manually or by the XNAT scan type cleanup).

Warning

Labels in XNAT are not guaranteed to be unique, so be careful when using them as the sample ID.

For background on XNAT, see the XNAT API DIRECTORY for the REST API of XNAT.

Interface Reference

Abstract base class of all Interfaces. Defines the minimal requirements for all Interface implementations.


FastrInterface

The default Interface for fastr, used for the command-line Tools in fastr. It builds a command-line call based on the input/output specification.

The fields that can be set in the interface:

  • id: The id of this Tool (used internally in fastr)

  • inputs[]: List of Inputs that are accepted by the Tool

    • id: ID of the Input

    • name: Longer name of the Input (more human readable)

    • datatype: The ID of the DataType of the Input [1]

    • enum[]: List of possible values for an EnumType (created on the fly by fastr) [1]

    • prefix: Commandline prefix of the Input (e.g. --in, -i)

    • cardinality: Cardinality of the Input

    • repeat_prefix: Flag indicating if the prefix is repeated for every value of the Input

    • required: Flag indicating if the Input is required

    • nospace: Flag indicating if there is no space between prefix and value (e.g. --in=val)

    • format: For DataTypes that have multiple representations, indicates which one to use

    • default: Default value for the Input

    • description: Long description for the Input

  • outputs[]: List of Outputs that are generated by the Tool (and accessible to fastr)

    • id: ID of the Output

    • name: Longer name of the Output (more human readable)

    • datatype: The ID of the DataType of the Output [1]

    • enum[]: List of possible values for an EnumType (created on the fly by fastr) [1]

    • prefix: Commandline prefix of the Output (e.g. --out, -o)

    • cardinality: Cardinality of the Output

    • repeat_prefix: Flag indicating if the prefix is repeated for every value of the Output

    • required: Flag indicating if the Output is required

    • nospace: Flag indicating if there is no space between prefix and value (e.g. --out=val)

    • format: For DataTypes that have multiple representations, indicates which one to use

    • description: Long description for the Output

    • action: Special action (defined per DataType) that needs to be performed before creating the output value (e.g. 'ensure' will make sure an output directory exists)

    • automatic: Indicates that the Output does not require a commandline argument, but is created automatically by the Tool [2]

    • method: The collector plugin to use for gathering the automatic output, see the Collector plugins

    • location: Definition of where to find the automatic output; how it is used depends on the method [2]

Footnotes

[1] datatype and enum are conflicting entries; if both are specified, datatype has precedence.

[2] More details on defining automatic output are given in [TODO]

FlowInterface

The Interface used by AdvancedFlowNodes to create advanced data flows that are not implemented in fastr itself. This allows nodes to implement new data flows using the plugin system.

The definition of a FlowInterface is very similar to that of the default FastrInterface.

Note

A FlowInterface should use a specific FlowPlugin

NipypeInterface

Experimental interface for using nipype interfaces directly in fastr tools, using only a simple reference.

To create a tool using a nipype interface just create an interface with the correct type and set the nipype argument to the correct class. For example in an xml tool this would become:

<interface class="NipypeInterface">
  <nipype_class>nipype.interfaces.elastix.Registration</nipype_class>
</interface>

Note

To use these interfaces nipype should be installed on the system.

Warning

This interface plugin is basically functional, but highly experimental!

ReportingPlugin Reference

Base class for all reporting plugins. The plugin has a number of methods that can be implemented that will be called on certain events. On these events the plugin can inspect the presented data and take reporting actions.


ElasticsearchReporter

NOT DOCUMENTED!

Configuration fields

The following configuration fields are added to the fastr config:

  • elasticsearch_host (str, default: ''): The elasticsearch host to report to

  • elasticsearch_index (str, default: 'fastr'): The elasticsearch index to store data in

  • elasticsearch_debug (bool, default: False): Set up elasticsearch debug mode to send stdout/stderr on job success

PimReporter

NOT DOCUMENTED!

Configuration fields

The following configuration fields are added to the fastr config:

  • pim_host (str, default: ''): The PIM host to report to

  • pim_username (str, default: username of the currently logged in user): Username to send to PIM

  • pim_update_interval (float, default: 2.5): The interval (in seconds) at which to send jobs to PIM

  • pim_batch_size (int, default: 100): Maximum number of jobs that can be sent to PIM in a single interval

  • pim_debug (bool, default: False): Set up PIM debug mode to send stdout/stderr on job success

  • pim_finished_timeout (int, default: 10): Maximum number of seconds after the network finished in which PIM tries to synchronize all remaining jobs

SimpleReport

NOT DOCUMENTED!

Target Reference

The abstract base class for all targets. Execution with a target should follow the following pattern:

>>> with Target() as target:
...     target.run_command(['sleep', '10'])

The Target context manager will set the correct paths/initialization. Within the context, commands can be run, and when leaving the context the target reverts to the previous state.
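The context-manager pattern can be sketched as follows; LocalTarget and its extra_path are hypothetical stand-ins, not the real fastr Target:

```python
import os
import subprocess

class LocalTarget:
    """Minimal sketch of the Target pattern: set up the environment on
    entry, run commands inside, restore the previous state on exit."""

    def __init__(self, extra_path="/opt/tool/bin"):
        self.extra_path = extra_path
        self._old_path = None

    def __enter__(self):
        # Set up: prepend the tool's bin directory to $PATH.
        self._old_path = os.environ.get("PATH", "")
        os.environ["PATH"] = self.extra_path + os.pathsep + self._old_path
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Revert to the state from before entering the context.
        os.environ["PATH"] = self._old_path

    def run_command(self, command):
        return subprocess.run(command, capture_output=True, text=True)

with LocalTarget() as target:
    result = target.run_command(["echo", "hello"])
print(result.stdout.strip())  # hello
```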


DockerTarget

A tool target that is located in a Docker image and can be run using docker-py. A docker target only needs two variables: the binary to call within the docker container, and the docker image to use.

{
  "arch": "*",
  "os": "*",
  "binary": "bin/test.py",
  "docker_image": "fastr/test"
}
<target os="*" arch="*" binary="bin/test.py" docker_image="fastr/test"/>

LocalBinaryTarget

A tool target that is a local binary on the system. It can be found using environmentmodules or a path on the executing machine. A local binary target has a number of fields that can be supplied:

  • binary (required): the name of the binary/script to call, can also be called bin for backwards compatibility.

  • modules: list of modules to load, this can be environmentmodules or lmod modules. If modules are given, the paths, environment_variables and initscripts are ignored.

  • paths: a list of paths to add, following the structure {"value": "/path/to/dir", "type": "bin"}. The type can be bin if it should be added to $PATH or lib if it should be added to the library path (e.g. $LD_LIBRARY_PATH on linux).

  • environment_variables: a dictionary of environment variables to set.

  • initscripts: a list of scripts to run before running the main tool

  • interpreter: the interpreter to use to call the binary e.g. python

The LocalBinaryTarget will first check if there are modules given and the module subsystem is loaded. If that is the case it will simply unload all current modules and load the given modules. If not it will try to set up the environment itself by using the following steps:

  1. Prepend the bin paths to $PATH

  2. Prepend the lib paths to the correct environment variable

  3. Set the other environment variables given ($PATH and the system library path are ignored and cannot be set that way)

  4. Call the initscripts one by one
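Steps 1 to 3 can be sketched in plain Python; the paths and variables below are example values, and the real plugin does this as part of its own bookkeeping:

```python
import os

# Hypothetical target definition fields as described in the steps above.
paths = [{"type": "bin", "value": "/apps/test/bin"},
         {"type": "lib", "value": "/apps/test/lib"}]
environment_variables = {"testvar": "value1"}

env = dict(os.environ)

# Step 1: prepend the bin paths to $PATH.
for entry in (p for p in paths if p["type"] == "bin"):
    env["PATH"] = entry["value"] + os.pathsep + env.get("PATH", "")

# Step 2: prepend the lib paths to the library path ($LD_LIBRARY_PATH on linux).
for entry in (p for p in paths if p["type"] == "lib"):
    env["LD_LIBRARY_PATH"] = entry["value"] + os.pathsep + env.get("LD_LIBRARY_PATH", "")

# Step 3: set the remaining environment variables.
env.update(environment_variables)

print(env["PATH"].split(os.pathsep)[0])             # /apps/test/bin
print(env["LD_LIBRARY_PATH"].split(os.pathsep)[0])  # /apps/test/lib
```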

The definition of the target in JSON is very straightforward:

{
  "binary": "bin/test.py",
  "interpreter": "python",
  "paths": [
    {
      "type": "bin",
      "value": "vfs://apps/test/bin"
    },
    {
      "type": "lib",
      "value": "./lib"
    }
  ],
  "environment_variables": {
    "othervar": 42,
    "short_var": 1,
    "testvar": "value1"
  },
  "initscripts": [
    "bin/init.sh"
  ],
  "modules": ["elastix/4.8"]
}

In XML the definition would be in the form of:

<target os="linux" arch="*" modules="elastix/4.8" bin="bin/test.py" interpreter="python">
  <paths>
    <path type="bin" value="vfs://apps/test/bin" />
    <path type="lib" value="./lib" />
  </paths>
  <environment_variables short_var="1">
    <testvar>value1</testvar>
    <othervar>42</othervar>
  </environment_variables>
  <initscripts>
    <initscript>bin/init.sh</initscript>
  </initscripts>
</target>

MacroTarget

A target for MacroNodes. This target cannot be executed, as the MacroNode handles execution differently, but it contains the information for the MacroNode to find the internal Network.

SingularityTarget

A tool target that is run using a singularity container, see the singularity website.

  • binary (required): the name of the binary/script to call, can also be called bin for backwards compatibility.

  • container (required): the singularity container to run; this can be in URL form for singularity pull or a path to a local container

  • interpreter: the interpreter to use to call the binary e.g. python