Scale up your experiments


Content:

1 - Setting up an Authentication
2 - Defining an execution environment
3 - Grouping
4 - Available environments


A key feature of OpenMOLE is the ability to delegate the workload to a remote execution environment. Tasks in OpenMOLE are designed so that delegating part of the workload to a remote environment is declarative.

Setting up an Authentication

You first need to declare the environments you want to use, along with the corresponding authentication credentials, in the OpenMOLE GUI (see the GUI guide for more information). Have a look here to set up an authentication in console mode.

Defining an execution environment

The delegation of a task is expressed with the keyword on followed by a defined Environment:

// Define the variables that are transmitted between the tasks
val i = Val[Double]
val res = Val[Double]

// Define the model, here it is a simple task executing "res = i * 2", but it can be your model
val model =
  ScalaTask("val res = i * 2") set (
    inputs += i,
    outputs += (i, res)
  )

// Declare a local environment using 10 cores of the local machine
val env = LocalEnvironment(10)

// Make the model run on the local environment
DirectSampling(
  evaluation = model on env hook display,
  sampling = i in (0.0 to 100.0 by 1.0)
)

You do not need to install anything or perform any kind of configuration on the target execution environment: OpenMOLE does all the work and uses the infrastructure in place. You will, however, need to provide authentication information so that OpenMOLE can access the remote environment (see here). If you face authentication problems when targeting an environment through SSH, please refer to the corresponding entry in the FAQ.
When no environment is specified for a task or a group of tasks, they are executed sequentially on your local machine.
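
To illustrate this default behaviour, here is a minimal sketch reusing the model and the variable i defined above: the on clause is simply omitted, so every execution runs sequentially on the local machine.

// Without an "on" clause, the executions run sequentially on the local machine
DirectSampling(
  evaluation = model hook display,
  sampling = i in (0.0 to 100.0 by 1.0)
)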

Grouping

Batch environments are generally not suited to short tasks (less than 1 minute on a cluster, or less than 1 hour on a grid). If your tasks are short, you can group several executions into a single job with the keyword by in your workflow. For instance, the workflow below groups the executions of model by 100 in each job submitted to the environment:

// Define the variables that are transmitted between the tasks
val i = Val[Double]
val res = Val[Double]

// Define the model, here it is a simple task executing "res = i * 2", but it can be your model
val model =
  ScalaTask("val res = i * 2") set (
    inputs += i,
    outputs += (i, res)
  )

// Declare a local environment using 10 cores of the local machine
val env = LocalEnvironment(10)

// Make the model run on the local environment
DirectSampling(
  evaluation = model on env by 100 hook display,
  sampling = i in (0.0 to 1000.0 by 1.0)
)

Available environments

Multiple environments are available to which you can delegate your workload, depending on the kind of resources you have at your disposal.
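
As a sketch of what such declarations look like, the fragment below declares two remote environments; the login and host names are placeholders, and the exact parameters of each environment are described on its dedicated documentation page.

// Delegate to a remote machine through SSH, using 10 cores
// ("login" and "machine.domain" are placeholders)
val sshEnv = SSHEnvironment("login", "machine.domain", 10)

// Delegate to a cluster managed by SLURM, accessed through SSH
val clusterEnv = SLURMEnvironment("login", "cluster.domain")

// Whatever the environment, the delegation is declared the same way
DirectSampling(
  evaluation = model on clusterEnv by 100 hook display,
  sampling = i in (0.0 to 1000.0 by 1.0)
)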