Frequently Asked Questions

Content:

1 - Which Java version should I use?
2 - Where do I find previous versions of OpenMOLE?
3 - Why is my SSH authentication not working?
4 - Is OpenMOLE doing something?
5 - I've reached my home folder size / file quota
6 - My sampling generates a type error
7 - I get an error related to files on Linux and there is 'too many open files' written somewhere in the error
8 - When shall I use Path over File?
9 - My problem is not listed here


Which Java version should I use?

OpenMOLE fully works with OpenJDK 21 and higher, which is therefore the recommended option. You can check which Java version you're running by typing java -version in a console.
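For instance, with a recent OpenJDK the output looks similar to the following (the exact version string depends on your distribution and vendor; this is only an illustration):

$ java -version
openjdk version "21.0.2" 2024-01-16
OpenJDK Runtime Environment (build 21.0.2+13)
OpenJDK 64-Bit Server VM (build 21.0.2+13, mixed mode, sharing)

The first line should report version 21 or higher.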

Where do I find previous versions of OpenMOLE?

Previous versions of the OpenMOLE application and documentation are available here. Due to a data loss in 2016, only versions from OpenMOLE 6 onwards are available.

Why is my SSH authentication not working?

When one of the SSH authentications you've added to OpenMOLE is marked as failed, try the following steps to identify the problem.

Console mode

If you are using OpenMOLE in console mode, try enabling the FINE level of logging in the console using: logger.level("FINE").

Password authentication

If you are using the LoginPassword authentication, you might want to double-check the user name and password you entered, since one of them is more than likely incorrect.

SSH Keypair Authentication

In such a case, we'll have to investigate multiple options, as SSH public key authentications are sensitive to several configuration parameters.
Public key authentication usually has a higher priority than password-based authentication when trying to connect to a remote server. Thus, when you attempt an SSH connection to the target environment, if your client asks you to enter a password (please note that a passphrase is different from a password), then your public key authentication is not taken into account. SSH will skip your public key in case of bad configuration. The most common cases of badly configured keypairs are the following:
  • You haven't created an SSH keypair yet (using ssh-keygen). Private keys are usually stored in /home/login/.ssh/id_rsa or /home/login/.ssh/id_dsa, and should have a matching /home/login/.ssh/id_[rd]sa.pub next to them. You can find additional info on how to create an SSH public key here.
  • Permissions of your /home/login/.ssh folder must be set to drwx------ (700 in octal). An overly permissive home directory (with write access granted to the whole group, for instance) might also prove problematic.
  • A /home/login/.ssh/authorized_keys file must be present on the remote system. It should at least contain a line matching the content of the /home/login/.ssh/id_[rd]sa.pub from your base system.
  • If you entered a passphrase when you generated your SSH keys and cannot remember it, it might be better to generate another keypair.
If you still cannot solve your SSH authentication problems, another option is to recreate a public/private keypair using the ssh-keygen shell command, as sketched below. Store it in a different file to avoid overwriting the already existing one. You might also want to try a simple LoginPassword authentication as explained in the SSH page.
Adding the -vvv flag to your ssh command will give a lot more details on the communication between your client and the remote server. This will allow you to find out which authentication is successful as well as the order in which the authentication modes are tried.
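If you want to check these points from a terminal, a minimal sequence could look like the following (the key file name, login and host name are placeholders, adapt them to your setup):

# fix overly permissive folders and files
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# generate a fresh keypair in a separate file to avoid overwriting the existing one
ssh-keygen -t ed25519 -f ~/.ssh/id_openmole
# install the new public key on the remote host
ssh-copy-id -i ~/.ssh/id_openmole.pub login@cluster.example.org
# verbose connection attempt, shows which authentication methods are tried and in which order
ssh -vvv -i ~/.ssh/id_openmole login@cluster.example.org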

Is OpenMOLE doing something?

If you think OpenMOLE has crashed or is stuck for some reason, here are a few things you can check to decide whether it's just a temporary slowdown or whether the platform actually crashed.

Using tools from the Java Development Kit

A simple call to jps from your command line will list all the instrumented JVMs on your system. If OpenMOLE is running, it will be among these processes. Once you know OpenMOLE's process ID, you can use jstack to print the stack traces collected from OpenMOLE's threads. It's a bit low level, but it can at least give you enough material to thoroughly document your problem in the issue list or the forum. The same procedure can be applied to the dbserver running alongside OpenMOLE to manage the replicas of the files copied to execution environments.
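For example (the process ID below is of course illustrative, use the one reported by jps on your machine):

# list instrumented JVMs and look for the line corresponding to OpenMOLE
jps -l
# dump the stack traces of all its threads into a file you can attach to a bug report
jstack 12345 > openmole-threads.txt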

Inspecting the temporary folders

OpenMOLE automatically creates temporary folders on the machine it's running on in order to handle various inputs and outputs. If you have access to the machine running OpenMOLE, navigate to the following path inside your OpenMOLE preferences folder: /home/user/.openmole/my_machine/.tmp. List the content of this directory and change to the most recently created directory.
If you're using a remote environment, it should contain the tar archives used to populate new jobs on your remote computing environment, along with the input data files required by the task. The presence of these files is a good indicator that OpenMOLE is functioning correctly and preparing the delegation of your workflow. Hardcore debuggers might want to go even deeper and extract the content of the tar archives to verify them, but this is out of scope. However, touching on temporary file creation in OpenMOLE seamlessly leads us to our next entry...
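Assuming your preferences folder is in the default location, the inspection could look like this (my_machine is a placeholder for your actual machine name, and the directory to change into is the most recent one reported by the first command):

# list the temporary execution folders, most recent first
ls -lt ~/.openmole/my_machine/.tmp
# change into the most recently created directory and inspect its content
cd ~/.openmole/my_machine/.tmp/<most_recent_directory>
ls -lh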

I've reached my home folder size / file quota

OpenMOLE generates a fair amount of temporary files in the .openmole/mymachine/.tmp folder associated with your machine. Although these are deleted at the end of an execution, they can lead to a significant increase in the space occupied by your .openmole folder and in the number of files it contains. Because some systems place stringent limitations on these two quotas, you might want to move your .openmole folder to a file system not restricted by quotas in order to run your OpenMOLE experiment successfully. The simplest way to do so is to create a destination folder in the unrestricted file system and then create a symbolic link named .openmole in your home directory that points to this newly created folder. On a UNIX system, this procedure translates into the following commands:
# assumes /data is not restricted by quotas
cp -r ~/.openmole /data/openmole_data
rm -rf ~/.openmole
ln -s /data/openmole_data ~/.openmole
In order for this procedure to work, you'll want to ensure the target folder (/data/openmole_data in the example) can be reached from all the machines running your OpenMOLE installation.
Moving your .openmole folder to a different location is also strongly advised on remote execution hosts (typically clusters) on which you own a personal account used with OpenMOLE. In the case of remote environments, the OpenMOLE runtime and the input files of your workflow are copied to the .openmole folder, which can again push these systems over quota. For this specific case, we recommend using the sharedDirectory option of the cluster environment to set the location where OpenMOLE should copy your files without hitting any quota restrictions.
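As a minimal sketch, this is what it could look like for a SLURM cluster (the login, host name and path are placeholders; the same option is available on the other cluster environments):

// tell OpenMOLE to stage its runtime and your files in a directory that is not subject to quotas
val env =
  SLURMEnvironment(
    "login",
    "cluster.example.org",
    sharedDirectory = "/data/login/openmole"
  )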

My sampling generates a type error

Combining samplings is straightforward in OpenMOLE, but it can sometimes result in syntax errors that are a bit cryptic to new users. Let's take the example of a combined sampling made of a file exploration sampling and an integer range exploration:
(input in (workDirectory / "../data/").files withName inputName) x
i in (1 to 10)
This combined sampling will generate the following error when compiling the workflow:
found   : org.openmole.core.workflow.data.Prototype[Int]
required: org.openmole.core.workflow.sampling.Sampling
OpenMOLE cannot identify the integer range as a valid sampling. Simply wrapping the expression in parentheses fixes the problem as shown in this correct version:
(input in (workDirectory / "../data/").files withName inputName) x
(i in (1 to 10))

I get an error related to files on Linux and there is 'too many open files' written somewhere in the error

On Linux servers, the number of files a user can open is generally limited to 1024. OpenMOLE increases this number to 4096 on launch, but if it doesn't seem to work on your system, you might want to understand why. To check the current state of your system limit, execute ulimit -a in a terminal:
reuillon@docker-host1:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 64040
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64040
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
In this example you can see that the max number of open files is 1024. This is generally a soft limitation that can be overridden by the user. To do so, execute ulimit -n 4096 before launching OpenMOLE in the same terminal. You can check that your command had the expected effect using ulimit -a. If nothing changed in the terminal output, it means that a hard limit has been set in the limits.conf file of your system. If you have root access, you can fix it by modifying the file /etc/security/limits.conf; otherwise, contact your system administrator and kindly ask them to modify it.
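If you do have root access, the relevant lines in /etc/security/limits.conf look roughly like this (the user name and values are illustrative; pick values suited to your machine):

# /etc/security/limits.conf
# <user>   <type>   <item>    <value>
login      soft     nofile    4096
login      hard     nofile    8192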

When shall I use Path over File?

OpenMOLE takes care of everything for you, from connecting to remote environments to submitting jobs and copying your files. However, most cluster installations take advantage of a shared file system between the nodes. If the file structure you're exploring is located on such a shared file system, you do not need OpenMOLE to duplicate the target files, as they are already available on the compute nodes directly. In case you're manipulating very large files, it might not even be possible to duplicate them. When you find yourself in such a use case, you might want to try the Path optimization for your scripts. If you replace Val[File] variables with Val[Path] in your scripts, OpenMOLE will store the file's location rather than its content, as it would with Val[File]. This optimization is only available for clusters and not for the EGI grid. You can find an example of using Path variables in the dataflow in the data processing page.
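As a minimal sketch (the variable and task names are hypothetical), the change boils down to the declaration of the variable; it is then used like any other input:

// only the file's location travels with the job, it is resolved on the compute node
// through the shared file system
val bigInput = Val[Path]
// val bigInput = Val[File]   // would instead copy the file's content to every job

val readSize =
  ScalaTask("val size = java.nio.file.Files.size(bigInput)") set (
    inputs += bigInput
  )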

My problem is not listed here

If you could not resolve your problem, feel free to post it on the forum, or ask us directly on our chat. If you think your problem is caused by a bug in OpenMOLE, please report the issue exhaustively on our GitHub page.