Scheduling your workloads on Viking
Viking uses a queuing system called Slurm to ensure that your jobs are scheduled fairly on the cluster.
What is Slurm and what is a scheduler?
Slurm is a job scheduling system for small and large clusters. As a cluster workload manager, Slurm has three key functions.
- Lets a user request resources on a compute node to run their workloads
- Provides a framework (commands) to start, cancel, and monitor a job
- Keeps track of all jobs to ensure everyone can use the computing resources efficiently without stepping on each other's toes
When a user submits a job, Slurm decides when to allow the job to run on a compute node. This is very important for shared machines such as the Viking cluster: it ensures that resources are shared fairly between users, so that no one person's jobs dominate.
Resource allocation
In order to interact with the job/batch system (Slurm), the user must first give some indication of the resources they require. At a minimum these include:
- how long the job needs to run for
- how many processors the job needs
The default resource allocation for jobs can be found at /wiki/spaces/RCS/pages/39159441.
Armed with this information, the scheduler is able to dispatch the job at some point in the future when the resources become available. A fair-share policy is in operation to guide the scheduler towards allocating resources fairly between users. A sketch of a job script requesting these resources is shown below.
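As an illustration, these resources are requested through #SBATCH directives at the top of a job script. The following is a minimal sketch, not a Viking-specific recipe: the job name, time limit, and memory request are placeholder values you should adapt to your own workload.

#!/bin/bash
#SBATCH --job-name=simple.job      # a name to identify the job in squeue (placeholder)
#SBATCH --time=00:10:00            # how long the job needs to run for (hh:mm:ss)
#SBATCH --ntasks=1                 # how many tasks (processes) to run
#SBATCH --cpus-per-task=1          # how many processors each task needs
#SBATCH --mem=1G                   # how much memory the job needs (placeholder)

echo "Running on $(hostname)"

Submitting this script with sbatch gives the scheduler everything it needs to queue the job.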
Exercise: Running Slurm commands on Viking
Slurm Command Summary
To interact with Slurm there are a number of commands you can use. The table below summarises the most common commands available on Viking. We will use some of these commands in the examples throughout this exercise.
Command | Description |
---|---|
squeue | reports the state of jobs (it has a variety of filtering, sorting, and formatting options), by default, reports the running jobs in priority order followed by the pending jobs in priority order |
srun | used to submit a job for execution in real time |
salloc | allocate resources for a job in real time (typically used to allocate resources and spawn a shell, in which the srun command is used to launch parallel tasks) |
sbatch | submit a job script for later execution (the script typically contains one or more srun commands to launch parallel tasks) |
sattach | attach standard input, output, and error to a currently running job or job step |
scancel | cancel a pending or running job |
sinfo | reports the state of partitions and nodes managed by Slurm (it has a variety of filtering, sorting, and formatting options) |
sacct | report job accounting information about active or completed jobs |
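As a quick illustration of srun, the command below asks for a single task for one minute and runs hostname on whichever compute node Slurm allocates. The resource values are an arbitrary small request chosen for demonstration purposes.

[abc123@login1(viking) ~]$ srun --ntasks=1 --time=00:01:00 hostname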
Squeue
The squeue command is one you will use often. To run it, first log in to Viking.
Run the following command. What do you see?
[abc123@login1(viking) ~]$ squeue
You should see a list of jobs, with each column giving information about the state of each job. The table below summarises each column.
Column | Description |
---|---|
JOBID | A number used to uniquely identify your job within Slurm |
PARTITION | The partition the job has been submitted to |
NAME | The job's name |
USER | The username of the job owner |
ST | Current job status: R (running), PD (pending - queued and waiting) |
TIME | The time the job has been running |
NODES | The number of nodes used by the job |
NODELIST (REASON) | The nodes used by the job, or the reason the job is not running |
When you start to run your own jobs, either of the following commands
[abc123@login1(viking) ~]$ squeue -u abc123
or
[abc123@login1(viking) ~]$ squeue -j JOBID
will provide information on only the jobs you have queued or running. Replace abc123 with your username and JOBID with the ID number Slurm reports when you submit a job.
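If you want to keep an eye on your jobs as they move through the queue, you can combine squeue with the standard Linux watch command, which reruns it at a regular interval (here every ten seconds):

[abc123@login1(viking) ~]$ watch -n 10 squeue -u abc123

Press Ctrl+C to stop watching.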
Other useful flags you can use with squeue are summarised here:
Flag | Description |
---|---|
-a | display all jobs |
-l | display more information |
-u | display only the specified user's jobs |
-p | display only jobs in a particular partition |
--usage | print a brief usage message |
-v | verbose listing |
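These flags can be combined. For example, to show a long listing of your own jobs in a particular partition (the partition name here is illustrative):

[abc123@login1(viking) ~]$ squeue -l -u abc123 -p nodes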
Sinfo
The sinfo [options] command displays node and partition (queue) information and state.
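Run it without any options to see a summary of every partition:

[abc123@login1(viking) ~]$ sinfo

The columns of the output are summarised below.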
Column | Description |
---|---|
PARTITION | The partition name; an asterisk after the name indicates the default partition |
AVAIL | Whether the partition is able to accept jobs |
TIMELIMIT | Maximum time a job can run for |
NODES | Number of available nodes in the partition |
STATE | down - not available, alloc - jobs being run, idle - waiting for jobs |
NODELIST | Nodes available in the partition |
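For a node-oriented rather than partition-oriented view, add the -N flag, optionally combined with -l for a long listing:

[abc123@login1(viking) ~]$ sinfo -N -l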
Completed jobs - sacct
To display a list of recently completed jobs use the sacct command.
[abc123@login1(viking) scratch]$ sacct -j 147874
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
147874       simple.job      nodes dept-proj+          1  COMPLETED      0:0
147874.batch      batch            dept-proj+          1  COMPLETED      0:0
Important switches to sacct are:
Switch | Action |
-a | display all users' jobs |
-b | display a brief listing |
-E | select jobs by end date/time |
-h | print help |
-j | display a specific job |
-l | display long format |
--name | display jobs with the given name |
-S | select jobs by start date/time |
-u | display only this user |
-v | verbose listing |
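Again, these switches can be combined. For example, to show a brief listing of your own jobs since a given date (the username and date are placeholders):

[abc123@login1(viking) scratch]$ sacct -u abc123 -S 2024-01-01 -b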
There are many more commands you can use to query Slurm. Please see the Slurm documentation for further details.