Book the testbed with OAR

Once you have accessed the testbed (i.e. connected to the CorteXlab server), you can reserve nodes for an experiment.

To avoid cross-interference between multiple experiments, only one person can use the whole CorteXlab testbed at a time. The OAR scheduler is used to book nodes on the platform. As soon as you book one or more nodes, the CorteXlab room is reserved for your usage during the requested time.

The reservation state of the CorteXlab testbed can be visualized here:

The role of OAR is to schedule node reservations. It manages jobs associated with users. A job has a start time, a duration (walltime), and uses some resources (CorteXlab nodes).


The principle of operation of CorteXlab is that users submit jobs to OAR. When the job starts, the user gets exclusive access to the platform, and inside an OAR job, the user can perform one or several experiments (interactively or in batch).

A basic example to submit an OAR interactive job requesting all available nodes, for the default duration (which is 2 hours):

$ oarsub -I -l nodes=BEST

The same example with a maximum duration of 4 hours:

$ oarsub -I -l nodes=BEST,walltime=4:00:00

This command will wait for the resources to be available (-I stands for interactive), and as soon as they are, a job is allocated and started, and a subshell is opened where you can work on experiments. As soon as the subshell is closed, the job ends. (It can be useful to work in a screen session to avoid losing jobs in case of network disconnection.)

Submissions can also be non-interactive: you provide the name of a script to execute when the job starts. This has the strong advantage that you do not need to wait for the job to start, which can take a long time if the platform is heavily used. But for this to work, you have to automate everything:

$ oarsub -l nodes=BEST '/path/to/script/to/execute/when/job/starts script args'

A particular case of this syntax is:

$ oarsub -l nodes=BEST 'sleep 1000000'

It allows you to have a job which is not tied to a terminal, but you still need to start your tasks manually once the job is running.
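To get back inside such a detached job, a common approach (a sketch, assuming OAR's standard tools) is to find the job id with oarstat, then connect a shell to the running job with oarsub -C. The oarstat output below is simulated for illustration; its exact column layout may differ:

```shell
# Simulated oarstat listing (illustrative, not the guaranteed format):
oarstat_output='Job id     Name   User   Submission Date     S Queue
---------- ------ ------ ------------------- - -----
1234       sleep  alice  2018-07-21 14:00:01 R default'

# Extract the id of the first listed job (skip the two header lines)
job_id=$(echo "$oarstat_output" | awk 'NR>2 {print $1; exit}')
echo "$job_id"    # -> 1234

# On the CorteXlab server you would then run, for example:
#   oarsub -C "$job_id"    # open a shell inside the running job
```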


By default OAR submissions are scheduled as soon as possible. It is also possible to ask for an OAR reservation where you choose the date at which the job will be scheduled.

This other simple example reserves all the nodes on September 18, 2015, from 10AM to 11AM:

$ oarsub -l nodes=BEST,walltime=1:00:00 -r "2015-09-18 10:00:00" 

Booking specific nodes

If you want to reserve specific nodes, there are several possible syntaxes.

To make a submission using two nodes:

$ oarsub -I -l nodes=2

But the nodes will be randomly chosen by OAR, so you'll have to adapt your task's scenario to the allocated nodes.
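One way to adapt a scenario to whatever nodes OAR picked (a sketch, assuming OAR's usual behavior) is to read the file pointed to by the $OAR_NODEFILE environment variable, which inside a job lists the allocated nodes, with entries possibly repeated. The file content below is simulated for illustration:

```shell
# Sketch: inside an OAR job, $OAR_NODEFILE points to a file listing the
# allocated nodes (entries may be repeated, so deduplicate them).
# Here we simulate its content for illustration:
nodefile=$(mktemp)
printf 'node4\nnode4\nnode6\n' > "$nodefile"
OAR_NODEFILE=${OAR_NODEFILE:-$nodefile}

# Unique list of allocated nodes, e.g. to drive a task scenario:
sort -u "$OAR_NODEFILE"
rm -f "$nodefile"
```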

It is possible to ask for explicit nodes with a less user-friendly syntax (especially in situations where you need lots of nodes). For example, to make a submission using specifically nodes 4 and 6, for a 30-minute job:

$ oarsub -l {"network_address in ('', '')"}/nodes=2,walltime=0:30:00

Advanced usage: sharing the platform

It is possible to share the platform, for specific situations such as tutorials, courses, challenges. In these situations, you want several users to be able to use the platform at the same time. For this, you need to follow these steps:

The organizer of the tutorial/course/challenge submits or reserves the whole platform (or part of it) for the duration of the event, with the -t container option:

$ oarsub -l nodes=BEST,walltime=4:00:00 -t container -r '2018-07-21 14:00:00'

This will reserve all available nodes for a 4-hour event, from 2PM to 6PM on July 21, 2018.

Then, participants can submit jobs inside the container job with this (example) syntax:

$ oarsub -t inner=<job_id of the container job> -l {"network_address in ('', '')"}/nodes=2,walltime=0:30:00 -I

or (another example):

$ oarsub -t inner=<job_id of the container job> -l nodes=2,walltime=0:30:00 'sleep 10000000'

OAR Documentation

The complete OAR documentation, with much more details and examples, is available here:

reserve.txt · Last modified: 2017/08/31 16:09 by mimbert