Introduction to High-Performance Computing

Why use a Cluster?

Overview

Teaching: 15 min
Exercises: 5 min
Questions
  • Why would I be interested in High Performance Computing (HPC)?

  • What can I expect to learn from this course?

Objectives
  • Describe what an HPC system is

  • Identify how an HPC system could benefit you.


Frequently, research problems that use computing can outgrow the capabilities of the desktop or laptop computer where they started:


...

Genomics

A genomics researcher has been using small datasets of sequence data, but soon will be receiving a new type of sequencing data that is 10 times as large. It's already challenging to open the datasets on a computer -- analyzing these larger datasets will probably crash it. In this research problem, the calculations required might be impossible to parallelize, but a computer with more memory would be required to analyze the much larger future data set.

...

Engineering

An engineer is using a fluid dynamics package that has an option to run in parallel. In this research problem, the calculations in each region of the simulation are largely independent of calculations in other regions of the simulation. It's possible to run each region's calculations simultaneously (in parallel), communicate selected results to adjacent regions as needed, and repeat the calculations to converge on a final set of results.

...

Humanities

A graduate student is using a named entity recognizer to identify named entities (important people, places, and things) in the works of Ralph Waldo Emerson. In this research problem, each of Emerson's works is independent of the others and can be analyzed simultaneously, in parallel. Results from this retrieval task can be aggregated for higher-level analyses such as knowledge graphing, mapping references, or social network analysis.

In all these cases, access to more (and larger) computers is needed. Those computers should be usable at the same time, solving many researchers’ problems in parallel.

[Figure: tips on parallelization]

Jargon Busting Presentation

Open the HPC Jargon Buster in a new tab. To present the content, press C to open a clone in a separate window, then press P to toggle presentation mode.

I’ve Never Used a Server, Have I?

Take a minute and think about which of your daily interactions with a computer may require a remote server or even cluster to provide you with results.

Some Ideas

  • Checking email: your computer (possibly in your pocket) contacts a remote machine, authenticates, and downloads a list of new messages; it also uploads changes to message status, such as whether you read, marked as junk, or deleted the message. Since yours is not the only account, the mail server is probably one of many in a data center.
  • Searching for a phrase online involves comparing your search term against a massive database of all known sites, looking for matches. This “query” operation can be straightforward, but building that database is a monumental task! Servers are involved at every step.
  • Searching for directions on a mapping website involves connecting your (A) starting and (B) end points by traversing a graph in search of the “shortest” path by distance, time, expense, or another metric. Converting a map into the right form is relatively simple, but calculating all the possible routes between A and B is expensive.

Checking email could be serial: your machine connects to one server and exchanges data. Searching by querying the database for your search term (or endpoints) could also be serial, in that one machine receives your query and returns the result. However, assembling and storing the full database is far beyond the capability of any one machine. Therefore, these functions are served in parallel by a large, “hyperscale” collection of servers working together.

Key Points

  • High Performance Computing (HPC) typically involves connecting to very large computing systems elsewhere in the world.

  • These other systems can be used to do work that would either be impossible or much slower on smaller systems.

  • HPC resources are shared by multiple users.

  • The standard method of interacting with such systems is via a command line interface.


Connecting to a remote HPC system

Overview

Teaching: 25 min
Exercises: 10 min
Questions
  • How do I log in to a remote HPC system?

Objectives
  • Configure secure access to a remote HPC system.

  • Connect to a remote HPC system.

Secure Connections

The first step in using a cluster is to establish a connection from our laptop to the cluster. When we are sitting at a computer, we have come to expect a visual display with icons, widgets, and perhaps some windows or applications: a graphical user interface, or GUI. Since computer clusters are remote resources that we connect to over slow or intermittent interfaces (WiFi and VPNs especially), it is more practical to use a command-line interface, or CLI, to send commands as plain text. If a command returns output, it is printed as plain text as well. The commands we run today will not open a window to show graphical results.

If you have ever opened the Windows Command Prompt or macOS Terminal, you have seen a CLI. If you have already taken The Carpentries’ courses on the UNIX Shell or Version Control, you have used the CLI on your local machine extensively. The only leap to be made here is to open a CLI on a remote machine, while taking some precautions so that other folks on the network can’t see (or change) the commands you’re running or the results the remote machine sends back. We will use the Secure SHell protocol (or SSH) to open an encrypted network connection between two machines, allowing you to send & receive text and data without having to worry about prying eyes.

[Figure: Connect to cluster]

SSH clients are usually command-line tools, where you provide the remote machine address as the only required argument. If your username on the remote system differs from what you use locally, you must provide that as well. If your SSH client has a graphical front-end, such as PuTTY or MobaXterm, you will set these arguments before clicking “connect.” From the terminal, you’ll write something like ssh userName@hostname, where the argument is just like an email address: the “@” symbol is used to separate the personal ID from the address of the remote machine.

When logging in to a laptop, tablet, or other personal device, a username, password, or pattern are normally required to prevent unauthorized access. In addition to your Stanford password, you will be required to use Duo Two-Factor Authentication.

Log In to the Cluster

Go ahead and open your terminal or graphical SSH client, then log in to the cluster. Replace SUNetID with your username or the one supplied by the instructors.

[you@laptop:~]$ ssh SUNetID@login.farmshare.stanford.edu

You may be asked for your password. Watch out: the characters you type after the password prompt are not displayed on the screen. Normal output will resume once you press Enter.

You may have noticed that the prompt changed when you logged into the remote system using the terminal (if you logged in using PuTTY this will not apply because it does not offer a local terminal). This change is important because it helps you distinguish which system will run the commands you type. It is also a small complication that we will need to navigate throughout the workshop. Exactly what is displayed as the prompt (which conventionally ends in $) when the terminal is connected to the local system or to the remote system will typically be different for every user. We still need to indicate which system we are entering commands on, though, so we will adopt the following convention: commands to be run on your local machine are shown with a [you@laptop:~]$ prompt, and commands to be run on the remote cluster are shown with a [SUNetID@rice-02:~]$ prompt.

Looking Around Your Remote Home

Very often, many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they’ve logged onto is the entire computing cluster. So what’s really happening? What computer have we logged on to? The name of the current computer we are logged onto can be checked with the hostname command. (You may also notice that the current hostname is also part of our prompt!)

[SUNetID@rice-02:~]$ hostname
rice-02

So, we’re definitely on the remote machine. Next, let’s find out where we are by running pwd to print the working directory.

[SUNetID@rice-02:~]$ pwd
/home/users/SUNetID

Great, we know where we are! Let’s see what’s in our current directory:

[SUNetID@rice-02:~]$ ls
  afs-home    go

The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. If they did not, your home directory may appear empty. To double-check, include hidden files in your directory listing:

[SUNetID@rice-02:~]$ ls -a
  .            .bashrc           afs-home
  ..           .ssh              go

In the first column, . is a reference to the current directory and .. a reference to its parent (/home/users/). You may or may not see the other files, or files like them: .bashrc is a shell configuration file, which you can edit with your preferences; and .ssh is a directory storing SSH keys and a record of authorized connections.

Key Points

  • An HPC system is a set of networked machines.

  • HPC systems typically provide login nodes and a set of worker nodes.

  • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).

  • Files saved on shared storage (such as your home directory) on one node are available on all nodes.


Exploring Remote Resources

Overview

Teaching: 25 min
Exercises: 10 min
Questions
  • How does my local computer compare to the remote systems?

  • How does the login node compare to the compute nodes?

  • Are all compute nodes alike?

Objectives
  • Survey system resources using nproc, free, and the queuing system

  • Compare & contrast resources on the local machine, login node, and worker nodes

  • Learn about the various filesystems on the cluster using df

  • Find out who else is logged in

  • Assess the number of idle and occupied nodes

Look Around the Remote System

If you have not already connected to FarmShare, please do so now:

[you@laptop:~]$ ssh SUNetID@login.farmshare.stanford.edu

Take a look at your home directory on the remote system:

[SUNetID@rice-02:~]$ ls

What’s different between your machine and the remote?

Open a second terminal window on your local computer and run the ls command (without logging in to FarmShare). What differences do you see?

Solution

You would likely see something more like this:

[you@laptop:~]$ ls
Applications Documents    Library      Music        Public
Desktop      Downloads    Movies       Pictures

The remote computer’s home directory shares almost nothing in common with the local computer: they are completely separate systems!

Most high-performance computing systems run the Linux operating system, which is built around the UNIX Filesystem Hierarchy Standard. Instead of having a separate root for each hard drive or storage medium, all files and devices are anchored to the “root” directory, which is /:

[SUNetID@rice-02:~]$ ls /
afs   etc        lib32       media  root     snap      sys
bin   farmshare  lib64       mnt    run      software  tmp
boot  home       libx32      opt    sbin     srv       usr
dev   lib        lost+found  proc   scratch  swap.img  var

The “home” directory is the one where we generally want to keep all of our files. Other folders on a UNIX OS contain system files and change as you install new software or upgrade your OS.

Using HPC filesystems

On HPC systems, you have a number of places where you can store your files. These differ in both the amount of space allocated and whether or not they are backed up.

  • Home ($HOME) – often a network filesystem; data stored here is available throughout the HPC system and is often backed up periodically. Files stored here are typically slower to access because the data is actually stored on another computer and is transmitted over the network.
  • Scratch ($SCRATCH) – typically faster than the networked Home directory, but not usually backed up; it should not be used for long-term storage.
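
To see where these point, you can print the variables; on FarmShare they typically resolve to paths like those shown below (a quick check; $SCRATCH is a site-specific convention and may not be defined on every cluster):

[SUNetID@rice-02:~]$ echo $HOME
/home/users/SUNetID
[SUNetID@rice-02:~]$ echo $SCRATCH
/scratch/users/SUNetID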

Nodes

Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the login node, head node, landing pad, or submit node. A login node serves as an access point to the cluster.

As a gateway, the login node should not be used for time-consuming or resource-intensive tasks. You should be alert to this, and check with your site’s operators or documentation for details of what is and isn’t allowed. It is well suited for uploading and downloading files, setting up software, and running tests. Generally speaking, in these lessons, we will avoid running jobs on the login node.

Who else is logged in to the login node?

[SUNetID@rice-02:~]$ who

This may show only your user ID, but there are likely several other people (including fellow learners) connected right now.

Dedicated Transfer Nodes

If you want to transfer larger amounts of data to or from the cluster, SRC offers dedicated nodes for data transfers only. The motivation for this is that large data transfers should not obstruct operation of the login node for anybody else. As a rule of thumb, consider any transfer larger than about 500 MB to 1 GB as large, although the exact threshold depends on factors such as the network connection between you and the cluster.

The real work on a cluster gets done by the compute (or worker) nodes. Compute nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.

All interaction with the compute nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is called Slurm). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the compute nodes.

For example, we can view all of the compute nodes by running the command sinfo.

[SUNetID@rice-02:~]$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal*      up 2-00:00:00      1    mix wheat-01
normal*      up 2-00:00:00     10   idle oat-[09-10],rye-02,wheat-[02-08]
bigmem       up 2-00:00:00      1   idle rye-02
gpu          up 2-00:00:00      4   idle oat-[01-02,09-10]

Here, wheat-01 is in the mix state, meaning it is already partly allocated to other users' jobs: we are not alone here!

There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features like ensuring our user account and files are available throughout the HPC system.

What’s in a Node?

All of the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside of it), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.

[Figure: Node anatomy]

Explore Your Computer

Try to find out the number of CPUs and amount of memory available on your personal computer.

Note that, if you’re logged in to the remote computer cluster, you need to log out first. To do so, type Ctrl+d or exit:

[SUNetID@rice-02:~]$ exit
[you@laptop:~]$

Solution

There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:

  • Run system utilities
    # Linux
    [you@laptop:~]$ nproc --all
    [you@laptop:~]$ free -m
    
    # MacOS
    [you@laptop:~]$ sysctl -n hw.ncpu
    
  • Read from /proc
    # Linux
    [you@laptop:~]$ cat /proc/cpuinfo
    [you@laptop:~]$ cat /proc/meminfo
    
    # MacOS
    [you@laptop:~]$ sysctl -a | grep machdep.cpu
    [you@laptop:~]$ vm_stat
    
  • Run system monitor
    # Linux, and can be installed on MacOS
    [you@laptop:~]$ htop
    

Explore the Login Node

Now compare the resources of your computer with those of the login node.

Solution

[you@laptop:~]$ ssh SUNetID@login.farmshare.stanford.edu
[SUNetID@rice-02:~]$ nproc --all
[SUNetID@rice-02:~]$ free -m

You can get more information about the processors using lscpu, and a lot of detail about the memory by reading the file /proc/meminfo:

[SUNetID@rice-02:~]$ less /proc/meminfo
# Use "q" to exit

You can also explore the available filesystems using df to show disk free space. The -h flag renders the sizes in a human-friendly format, i.e., GB instead of B. The type flag -T shows what kind of filesystem each resource is.

[SUNetID@rice-02:~]$ df -Th

Different results from df

  • The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on).
  • Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar – but may include SUNetID, depending on how it is mounted.

Shared Filesystems

This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!

Explore a Worker Node

Finally, let’s look at the resources available on the worker nodes where your jobs will actually run. Try running this command to see the name, CPUs and memory available on the worker nodes:

[SUNetID@rice-02:~]$ sinfo -o "%n %c %m" | column -t

Compare Your Computer, the Login Node and the Compute Node

Compare your laptop’s number of processors and memory with the numbers you see on the cluster login node and compute node. What implications do you think the differences might have on running your research work on the different systems and nodes?

Solution

Compute nodes are usually built with processors that have higher core-counts than the login node or personal computers in order to support highly parallel tasks. Compute nodes usually also have substantially more memory (RAM) installed than a personal computer. More cores tends to help jobs that depend on some work that is easy to perform in parallel, and more, faster memory is key for large or complex numerical tasks.

Differences Between Nodes

Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have larger amounts of memory, or specialized resources such as Graphics Processing Units (GPUs or “video cards”).

With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!

Key Points

  • An HPC system is a set of networked machines.

  • HPC systems typically provide login nodes and a set of compute nodes.

  • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).

  • Files saved on shared storage are available on all nodes.

  • The login node is a shared machine: be considerate of other users.


Scheduler Fundamentals

Overview

Teaching: 45 min
Exercises: 30 min
Questions
  • What is a scheduler and why does a cluster need one?

  • How do I launch a program to run on a compute node in the cluster?

  • How do I capture the output of a program that is run on a node in the cluster?

Objectives
  • Submit a simple script to the cluster.

  • Monitor the execution of jobs using command line tools.

  • Inspect the output and error files of your jobs.

  • Find the right place to put large datasets on the cluster.

Job Scheduler

An HPC system might have thousands of nodes and thousands of users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.

The following illustration compares these tasks of a job scheduler to a waiter in a restaurant. If you have ever had to wait a while in a queue to get into a popular restaurant, then you may now understand why your jobs sometimes do not start instantly, as they would on your laptop.

[Figure: Compare a job scheduler to a waiter in a restaurant]

The scheduler used in this lesson is Slurm. Although Slurm is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.

Running a Batch Job

The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.

In this case, the job we want to run is a shell script – essentially a text file containing a list of UNIX commands to be executed in a sequential manner. Our shell script will have three parts: the #!/bin/bash line at the very top, which tells the computer to interpret the script with bash; an echo command that prints a message; and the hostname command, which prints the name of the machine the script runs on.

[SUNetID@rice-02:~]$ nano example-job.sh
#!/bin/bash

echo -n "This script is running on "
hostname

Creating Our Test Job

Run the script. Does it execute on the cluster or just our login node?

Solution

[SUNetID@rice-02:~]$ bash example-job.sh
This script is running on rice-02

This script ran on the login node, but we want to take advantage of the compute nodes: we need the scheduler to queue up example-job.sh to run on a compute node.

To submit this task to the scheduler, we use the sbatch command. This creates a job which will run the script when dispatched to a compute node which the queuing system has identified as being available to perform the work.

[SUNetID@rice-02:~]$ sbatch example-job.sh
Submitted batch job 277317

And that’s all we need to do to submit a job. Our work is done – now the scheduler takes over and tries to run the job for us. While the job is waiting to run, it goes into a list of jobs called the queue. To check on our job’s status, we check the queue using the command squeue -u $USER.

[SUNetID@rice-02:~]$ squeue -u $USER
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
277317    normal example-  SUNetID  R       0:03      1 wheat-01

We can see all the details of our job, most importantly that it is in the R or RUNNING state. Sometimes our jobs might need to wait in a queue (PENDING) or have an error (E).

Where’s the Output?

On the login node, this script printed output to the terminal – but now, once the job has finished and disappeared from squeue, nothing was printed to the terminal.

Cluster job output is typically redirected to a file in the directory you launched it from. Use ls to find and cat to read the file.
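
For example, Slurm's default output file name is slurm-<jobid>.out, so using the job number from the submission above (yours will differ) you might run something like:

[SUNetID@rice-02:~]$ ls slurm-*.out
slurm-277317.out
[SUNetID@rice-02:~]$ cat slurm-277317.out
This script is running on wheat-01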

Customising a Job

The job we just ran used all of the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.

Comments in UNIX shell scripts (denoted by #) are typically ignored, but there are exceptions. For instance, the special #! comment at the beginning of scripts specifies what program should be used to run it (you'll typically see #!/usr/bin/env bash). Schedulers like Slurm also use a special comment to denote scheduler-specific options. Though these comments differ from scheduler to scheduler, Slurm's special comment is #SBATCH. Anything following the #SBATCH comment is interpreted as an instruction to the scheduler.

Let’s illustrate this by example. By default, a job’s name is the name of the script, but the -J option can be used to change the name of a job. Add an option to the script:

[SUNetID@rice-02:~]$ cat example-job.sh
#!/bin/bash
#SBATCH -J hello-world

echo -n "This script is running on "
hostname

Submit the job and monitor its status:

[SUNetID@rice-02:~]$ sbatch example-job.sh
[SUNetID@rice-02:~]$ squeue -u $USER
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
277340    normal hello-wo  SUNetID  R       0:02      1 wheat-01

Fantastic, we’ve successfully changed the name of our job!

Resource Requests

What about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.

The following are several key resource requests (these are Slurm options; most have both a short and a long form):

  • --ntasks=<ntasks> or -n <ntasks>: how many tasks (and, by default, CPU cores) your job needs in total.
  • --time <days-hours:minutes:seconds> or -t <days-hours:minutes:seconds>: how much real-world time (walltime) your job will take to run.
  • --mem=<megabytes>: how much memory on a node your job needs.
  • --nodes=<nnodes> or -N <nnodes>: how many separate machines your job needs to run on.

Note that just requesting these resources does not make your job run faster, nor does it necessarily mean that you will consume all of these resources. It only means that these are made available to you. Your job may end up using less memory, or less time, or fewer nodes than you have requested, and it will still run.

It’s best if your requests accurately reflect your job’s requirements. We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.

Submitting Resource Requests

Modify our hostname script so that it runs for a minute, then submit a job for it on the cluster.

Solution

[SUNetID@rice-02:~]$ cat example-job.sh
#!/bin/bash
#SBATCH -t 00:01 # timeout in HH:MM

echo -n "This script is running on "
sleep 20 # time in seconds
hostname
[SUNetID@rice-02:~]$ sbatch example-job.sh

Why are the Slurm runtime and sleep time not identical?

Resource requests are typically binding. If you exceed them, your job will be killed. Let’s use wall time as an example. We will request 1 minute of wall time, and attempt to run a job for two minutes.

[SUNetID@rice-02:~]$ cat example-job.sh
#!/bin/bash
#SBATCH -J long_job
#SBATCH -t 00:01 # timeout in HH:MM

echo "This script is running on ... "
sleep 240 # time in seconds
hostname

Submit the job and wait for it to finish. Once it has finished, check the log file.

[SUNetID@rice-02:~]$ sbatch example-job.sh
[SUNetID@rice-02:~]$ squeue -u $USER
[SUNetID@rice-02:~]$ cat slurm-277344.out
This script is running on
slurmstepd: error: *** JOB 277344 ON wheat-01 CANCELLED
AT 2024-10-17T10:39:31 DUE TO TIME LIMIT ***

Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, it is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they've been given. If another user messes up and accidentally attempts to use all of the cores or memory on a node, Slurm will either restrain their job to the requested resources or kill the job outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others; the only jobs affected by a mistake in scheduling will be their own.

Cancelling a Job

Sometimes we’ll make a mistake and need to cancel a job. This can be done with the scancel command. Let’s submit a job and then cancel it using its job number (remember to change the walltime so that it runs long enough for you to cancel it before it is killed!).

[SUNetID@rice-02:~]$ sbatch example-job.sh
Submitted batch job 277347
[SUNetID@rice-02:~]$ squeue -u $USER
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
277347    normal long_job  SUNetID  R       0:02      1 wheat-01

Now cancel the job with its job number (printed in your terminal). A clean return of your command prompt indicates that the request to cancel the job was successful.

[SUNetID@rice-02:~]$ scancel 277347
# It might take a minute for the job to disappear from the queue...
[SUNetID@rice-02:~]$ squeue -u $USER
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

Cancelling multiple jobs

We can also cancel all of our jobs at once using the -u option. This will delete all jobs for a specific user (in this case, yourself). Note that you can only delete your own jobs.

Try submitting multiple jobs and then cancelling them all.

Solution

First, submit a trio of jobs:

[SUNetID@rice-02:~]$ sbatch example-job.sh
[SUNetID@rice-02:~]$ sbatch example-job.sh
[SUNetID@rice-02:~]$ sbatch example-job.sh

Then, cancel them all:

[SUNetID@rice-02:~]$ scancel -u SUNetID

Other Types of Jobs

Up to this point, we’ve focused on running jobs in batch mode. Slurm also provides the ability to start an interactive session.

There are very frequently tasks that need to be done interactively. Creating an entire job script might be overkill, but the amount of resources required is too much for a login node to handle. A good example of this might be building a genome index for alignment with a tool like HISAT2. Fortunately, we can run these types of tasks as a one-off with srun.

srun runs a single command on the cluster and then exits. Let’s demonstrate this by running the hostname command with srun. (We can cancel an srun job with Ctrl-c.)

[SUNetID@rice-02:~]$ srun hostname
wheat-01

srun accepts all of the same options as sbatch. However, instead of specifying these in a script, these options are specified on the command-line when starting a job. To submit a job that uses 2 CPUs for instance, we could use the following command:

[SUNetID@rice-02:~]$ srun -n 2 echo "This job will use 2 CPUs."
This job will use 2 CPUs.
This job will use 2 CPUs.

Typically, the resulting shell environment will be the same as that for sbatch.

Interactive jobs

Sometimes, you will need a lot of resources for interactive use. Perhaps it’s our first time running an analysis or we are attempting to debug something that went wrong with a previous job. Fortunately, Slurm makes it easy to start an interactive job with srun:

[SUNetID@rice-02:~]$ srun --pty bash

You should be presented with a bash prompt. Note that the prompt will likely change to reflect your new location, in this case the compute node we are logged on. You can also verify this with hostname.
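
srun accepts the same resource options as sbatch, so if your interactive session needs more than the defaults you can request resources on the command line (a sketch; adjust the values to your needs):

[SUNetID@rice-02:~]$ srun -c 4 --mem=8G -t 01:00:00 --pty bash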

Creating remote graphics

To see graphical output inside your jobs, you need to use X11 forwarding. To connect with this feature enabled, use the -Y option when you login with the ssh command, e.g., ssh -Y SUNetID@login.farmshare.stanford.edu.

To demonstrate what happens when you create a graphics window on the remote node, use the xeyes command. A relatively adorable pair of eyes should pop up (press Ctrl-C to stop). If you are using a Mac, you must have installed XQuartz (and restarted your computer) for this to work.

If your cluster has the slurm-spank-x11 plugin installed, you can ensure X11 forwarding within interactive jobs by using the --x11 option for srun with the command srun --x11 --pty bash.

When you are done with the interactive job, type exit to quit your session.

Key Points

  • The scheduler handles how compute resources are shared between users.

  • A job is just a shell script.

  • Request slightly more resources than you will need.


Environment Variables

Overview

Teaching: 10 min
Exercises: 5 min
Questions
  • How are variables set and accessed in the Unix shell?

  • How can I use variables to change how a program runs?

Objectives
  • Understand how variables are implemented in the shell

  • Read the value of an existing variable

  • Create new variables and change their values

  • Change the behaviour of a program using an environment variable

  • Explain how the shell uses the PATH variable to search for executables

Episode provenance

This episode has been remixed from the Shell Extras episode on Shell Variables and the HPC Shell episode on scripts

The shell is just a program, and like other programs, it has variables. Those variables control its execution, so by changing their values you can change how the shell behaves (and with a little more effort how other programs behave).

Variables are a great way of saving information under a name you can access later. In programming languages like Python and R, variables can store pretty much anything you can think of. In the shell, they usually just store text. The best way to understand how they work is to see them in action.

Let’s start by running the command set and looking at some of the variables in a typical shell session:

$ set
BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:complete_fullquote:expand_aliases:extglob:extquote:force_fignore:globasciiranges:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=([0]="0")
BASH_ARGV=()
BASH_CMDS=()
BASH_COMPLETION_VERSINFO=([0]="2" [1]="11")
BASH_ENV=/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-12.3.0/lmod-8.7.24-zo2r3he7kqr2ohenyvha5mmsxh7t3x54/lmod/lmod/init/bash
...

As you can see, there are quite a few — in fact, four or five times more than what’s shown here. And yes, using set to show things might seem a little strange, even for Unix, but if you don’t give it any arguments, it might as well show you things you could set.

Every variable has a name. All shell variables’ values are strings, even those (like UID) that look like numbers. It’s up to programs to convert these strings to other types when necessary. For example, if a program wanted to find out how many processors the computer had, it would convert the value of the NUMBER_OF_PROCESSORS variable from a string to an integer.
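
You can see this string-to-number conversion at work in the shell itself: arithmetic expansion treats a variable's string value as a number only at the moment it is needed. A minimal sketch (assuming nproc reports 8 cores, so the result is 16):

$ CORES=$(nproc)        # CORES now holds a string such as "8"
$ echo $(( CORES * 2 )) # arithmetic expansion interprets it as a number
16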

Showing the Value of a Variable

Let’s show the value of the variable HOME:

$ echo HOME
HOME

That just prints “HOME”, which isn’t what we wanted (though it is what we actually asked for). Let’s try this instead:

$ echo $HOME
/home/users/SUNetID

The dollar sign tells the shell that we want the value of the variable rather than its name. This works just like wildcards: the shell does the replacement before running the program we’ve asked for. Thanks to this expansion, what we actually run is echo /home/users/SUNetID, which displays the right thing.

Creating and Changing Variables

Creating a variable is easy — we just assign a value to a name using “=” (we just have to remember that the syntax requires that there are no spaces around the =!):

$ SECRET_IDENTITY=Dracula
$ echo $SECRET_IDENTITY
Dracula

To change the value, just assign a new one:

$ SECRET_IDENTITY=Camilla
$ echo $SECRET_IDENTITY
Camilla

Environment variables

When we ran the set command we saw there were a lot of variables whose names were in upper case. That’s because, by convention, variables that are also available to use by other programs are given upper-case names. Such variables are called environment variables as they are shell variables that are defined for the current shell and are inherited by any child shells or processes.

To create an environment variable you need to export a shell variable. For example, to make our SECRET_IDENTITY available to other programs that we call from our shell we can do:

$ SECRET_IDENTITY=Camilla
$ export SECRET_IDENTITY

You can also create and export the variable in a single step:

$ export SECRET_IDENTITY=Camilla

Using environment variables to change program behaviour

Set a shell variable TIME_STYLE to have a value of iso and check this value using the echo command.

Now, run the command ls with the option -l (which gives a long format).

export the variable and rerun the ls -l command. Do you notice any difference?

Solution

The TIME_STYLE variable is not seen by ls until it is exported, at which point it is used by ls to decide what date format to use when presenting the timestamp of files.
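
A possible sequence of commands for this exercise (a sketch; the exact timestamps you see will depend on your files):

$ TIME_STYLE=iso
$ echo $TIME_STYLE
iso
$ ls -l               # dates still appear in the default format
$ export TIME_STYLE
$ ls -l               # dates now appear in the ISO style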

You can see the complete set of environment variables in your current shell session with the command env (which returns a subset of what the command set gave us). The complete set of environment variables is called your runtime environment and can affect the behaviour of the programs you run.

Job environment variables

When Slurm runs a job, it sets a number of environment variables for the job. One of these will let us check what directory our job script was submitted from. The SLURM_SUBMIT_DIR variable is set to the directory from which our job was submitted. Using the SLURM_SUBMIT_DIR variable, modify your job so that it prints out the location from which the job was submitted.

Solution

[SUNetID@rice-02:~]$ nano example-job.sh
[SUNetID@rice-02:~]$ cat example-job.sh
#!/bin/bash
#SBATCH -t 00:00:30

echo -n "This script is running on "
hostname

echo "This job was launched in the following directory:"
echo ${SLURM_SUBMIT_DIR}

To remove a variable or environment variable you can use the unset command, for example:

$ unset SECRET_IDENTITY

The PATH Environment Variable

Some environment variables (like PATH) store lists of values. In this case, the convention is to use a colon ‘:’ as a separator. If a program wants the individual elements of such a list, it’s the program’s responsibility to split the variable’s string value into pieces.

Let’s have a closer look at that PATH variable. Its value defines the shell’s search path for executables, i.e., the list of directories that the shell looks in for runnable programs when you type in a program name without specifying what directory it is in.

For example, when we type a command like analyze, the shell needs to decide whether to run ./analyze or /bin/analyze. The rule it uses is simple: the shell checks each directory in the PATH variable in turn, looking for a program with the requested name in that directory. As soon as it finds a match, it stops searching and runs the program.

To show how this works, here are the components of PATH listed one per line:

/home/users/SUNetID/bin
/home/users/SUNetID/.local/bin
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin
/usr/games
/usr/local/games
/snap/bin

On our computer, there are actually three programs called analyze in three different directories: /home/users/SUNetID/bin/analyze, /usr/local/bin/analyze, and /bin/analyze. Since the shell searches the directories in the order they’re listed in PATH, it finds /home/users/SUNetID/bin/analyze first and runs that. Notice that it will never find the program /scratch/users/SUNetID/analyze unless we type in the full path to the program, since the directory /scratch/users/SUNetID isn’t in PATH.

This means that I can have executables in lots of different places as long as I remember that I need to update my PATH so that my shell can find them.
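
For example, to make programs in a personal directory findable, you could prepend that directory to PATH (a sketch; the directory name is purely illustrative):

$ export PATH=$HOME/my-tools/bin:$PATH   # hypothetical directory containing your executables
$ echo $PATH | tr ':' '\n' | head -n 3   # confirm it now appears first in the list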

What if I want to run two different versions of the same program? Since they share the same name, if I add them both to my PATH the first one found will always win. In the next episode we’ll learn how to use helper tools to help us manage our runtime environment to make that possible without us needing to do a lot of bookkeeping on what the value of PATH (and other important environment variables) is or should be.

Key Points

  • Shell variables are by default treated as strings

  • Variables are assigned using “=” and recalled using the variable’s name prefixed by “$”

  • Use “export” to make a variable available to other programs

  • The PATH variable defines the shell’s search path


Accessing software via Modules

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How do we load and unload software packages?

Objectives
  • Load and use a software package.

  • Explain how the shell environment changes when the module mechanism loads or unloads packages.

On a high-performance computing system, it is seldom the case that the software we want to use is available when we log in. It is installed, but we will need to “load” it before it can run.

Before we start using individual software packages, however, we should understand the reasoning behind this approach. The three biggest factors are software incompatibility, versioning, and dependencies:

Software incompatibility is a major headache for programmers. Sometimes the presence (or absence) of a software package will break others that depend on it. Two well known examples are Python and C compiler versions. Python 3 famously provides a python command that conflicts with that provided by Python 2. Software compiled against a newer version of the C libraries and then run on a machine that has older C libraries installed will result in a nasty 'GLIBCXX_3.4.20' not found error.

Software versioning is another common issue. A team might depend on a certain package version for their research project - if the software version was to change (for instance, if a package was updated), it might affect their results. Having access to multiple software versions allows a set of researchers to prevent software versioning issues from affecting their results.

Dependencies are where a particular software package (or even a particular version) depends on having access to another software package (or even a particular version of another software package). For example, the VASP materials science software may depend on having a particular version of the FFTW (Fastest Fourier Transform in the West) software library available for it to work.

Environment Modules

Environment modules are the solution to these problems. A module is a self-contained description of a software package – it contains the settings required to run a software package and, usually, encodes required dependencies on other software packages.

There are a number of different environment module implementations commonly used on HPC systems: the two most common are TCL modules and Lmod. Both of these use similar syntax and the concepts are the same so learning to use one will allow you to use whichever is installed on the system you are using. In both implementations the module command is used to interact with environment modules. An additional subcommand is usually added to the command to specify what you want to do. For a list of subcommands you can use module -h or module help. As for all commands, you can access the full help on the man pages with man module.

On login you may start out with a default set of modules loaded or you may start out with an empty environment; this depends on the setup of the system you are using.

Listing Available Modules

To see available software modules, use module avail:

[SUNetID@rice-02:~]$ module avail
------------------ /software/modules/linux-ubuntu22.04-x86_64/Core -------------------
   apptainer/1.1.9                        libjpeg-turbo/2.1.5.1
   blast-plus/2.14.1                      libpng/1.5.30
   boost/1.85.0                           llvm/18.1.3
   bowtie2/2.5.2                          micromamba/1.4.2
   cuda/11.4.4                            mpich/4.2.1

[removed most of the output here for clarity]

  Where:
   D:  Default Module

If the avail list is too long consider trying:

"module --default avail" or "ml -d av" to just list the default modules.
"module overview" or "ml ov" to display the number of modules for each name.

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of
the "keys".

Listing Currently Loaded Modules

You can use the module list command to see which modules you currently have loaded in your environment. If you have no modules loaded, you will see a message telling you so.

[SUNetID@rice-02:~]$ module list
No modules loaded

Loading and Unloading Software

To load a software module, use module load. In this example we will use Python 3.

Initially, the Python module is not loaded. We can check which python3 is currently available by using the which command. which looks for programs the same way that Bash does, so we can use it to tell us where a particular piece of software is stored.

[SUNetID@rice-02:~]$ which python3

If the python3 command was unavailable, we would see output like

/usr/bin/which: no python3 in (/home/users/SUNetID/bin:/home/users/SUNetID/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin)

Note that this wall of text is really a list, with values separated by the : character. The output is telling us that the which command searched the following directories for python3, without success:

/home/users/SUNetID/bin
/home/users/SUNetID/.local/bin
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin
/usr/games
/usr/local/games
/snap/bin

However, in our case we do have an existing python3 available so we see

/usr/bin/python3

We need a different Python than the system provided one though, so let us load a module to access it.

We can load the python3 command with module load:

[SUNetID@rice-02:~]$ module load python
[SUNetID@rice-02:~]$ which python3
/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/python-3.11.7-pph34wf44o63tsszsra7m7ihjrmcniaj/bin/python3

So, what just happened?

To understand the output, first we need to understand the nature of the $PATH environment variable. $PATH is a special environment variable that controls where a UNIX system looks for software. Specifically $PATH is a list of directories (separated by :) that the OS searches through for a command before giving up and telling us it can’t find it. As with all environment variables we can print it out using echo.

[SUNetID@rice-02:~]$ echo $PATH
/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/python-3.11.7-pph34wf44o63tsszsra7m7ihjrmcniaj/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/util-linux-uuid-2.38.1-zaohlkc7x4n5d3fbxpfb672inndarvau/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/sqlite-3.43.2-4hpmcprlw5equdicrtcmacl5psvhhmxf/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/openssl-3.3.0-4gl4yy3vwevsukqvlffjeeyofzrqrsxy/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/gettext-0.22.5-yrjlrvvghvrkmemdgnymjytmqnjydwnf/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/tar-1.34-ddpzee5n4ckjguinf6mvzwdnmhjezjln/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/zstd-1.5.6-77bmnajavm5hebchfmfxhjq3xgp45w7r/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/pigz-2.8-gvcpolzhshratajggf3jptqfnqsufhhn/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/libxml2-2.10.3-pwcbmqyzxybnsdc65wpvi4szbxgs5ywx/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/xz-5.4.6-x7ef77ycuvfkealpqz7efodaehjj2xbm/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/libiconv-1.17-agtuexjs5f4hbr34gniwqgcza6wlsdh5/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/gdbm-1.23-rtgm7swq4xhs6uosx7kd2zbx2lgn4rsy/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/readline-8.2-yy655utp5k7pzjkxjpadu7lnbg2vq3bl/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/ncurses-6.5-l7iqip2kzaxff54gqpuxtqwse222qvea/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/expat-2.6.2-gugyyi4jfqm36v2pvpmz2ij34e77cokg/bin:/software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/bzip2-1.0.8-4z7zft5br5b6o2m7zr5oiqoxxsgv3gxf/bin:/home/users/SUNetID/bin:/home/users/SUNetID/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

You’ll notice a similarity to the output of the which command. The key difference is the set of new directories at the beginning of $PATH: when we ran the module load command, it added the Python module’s bin directory (along with those of its dependencies) to the front of our $PATH. Let’s examine what’s there:

[SUNetID@rice-02:~]$ ls /software/spack/opt/spack/linux-ubuntu22.04-x86_64_v3/gcc-13.2.0/python-3.11.7-pph34wf44o63tsszsra7m7ihjrmcniaj/bin
2to3       idle3.11   python         python3-config     python3.11-gdb.py
2to3-3.11  pydoc3     python-config  python3.11
idle3      pydoc3.11  python3        python3.11-config

Taking this to its conclusion, module load will add software to your $PATH. It “loads” software. A special note on this: depending on which version of the module program is installed at your site, module load will also load required software dependencies.

To demonstrate, let’s use module list. module list shows all loaded software modules.

[SUNetID@rice-02:~]$ module list
Currently Loaded Modules:
  1) glibc/2.35-hwm6jll          13) libxml2/2.10.3-pwcbmqy
  2) gcc-runtime/13.2.0-4b46r64  14) pigz/2.8-gvcpolz
  3) bzip2/1.0.8-4z7zft5         15) zstd/1.5.6-77bmnaj
  4) libmd/1.0.4-zgn4nm3         16) tar/1.34-ddpzee5
  5) libbsd/0.12.1-i7vok2f       17) gettext/0.22.5-yrjlrvv
  6) expat/2.6.2-gugyyi4         18) libffi/3.4.6-3p64pum
  7) ncurses/6.5-l7iqip2         19) libxcrypt/4.4.35-3ofajra
  8) readline/8.2-yy655ut        20) openssl/3.3.0-4gl4yy3
  9) gdbm/1.23-rtgm7sw           21) sqlite/3.43.2-4hpmcpr
 10) libiconv/1.17-agtuexj       22) util-linux-uuid/2.38.1-zaohlkc
 11) xz/5.4.6-x7ef77y            23) python/3.11.7
 12) zlib-ng/2.1.6-4xk6kiq
[SUNetID@rice-02:~]$ module load julia
[SUNetID@rice-02:~]$ module list
Currently Loaded Modules:
  1) glibc/2.35-hwm6jll              27) curl/8.7.1-dilktws
  2) gcc-runtime/13.2.0-4b46r64      28) dsfmt/2.2.5-yrb7poa
  3) bzip2/1.0.8-4z7zft5             29) gmp/6.2.1-brxkiho
  4) libmd/1.0.4-zgn4nm3             30) libblastrampoline/5.8.0-pajuz6u
  5) libbsd/0.12.1-i7vok2f           31) pcre/8.45-bsep6cd
  6) expat/2.6.2-gugyyi4             32) libgit2/1.6.4-6n6p7pm
  7) ncurses/6.5-l7iqip2             33) libunwind/1.6.2-bzhjldn
  8) readline/8.2-yy655ut            34) libuv-julia/1.44.3-mpcmz2j
  9) gdbm/1.23-rtgm7sw               35) binutils/2.42-pyz2his
 10) libiconv/1.17-agtuexj           36) pkgconf/2.2.0-euy2z2u
 11) xz/5.4.6-x7ef77y                37) elfutils/0.190-r6vhdnt
 12) zlib-ng/2.1.6-4xk6kiq           38) libpciaccess/0.17-pfgymna
 13) libxml2/2.10.3-pwcbmqy          39) hwloc/2.9.1-smlej6r
 14) pigz/2.8-gvcpolz                40) libedit/3.1-20230828-ls2cusj
 15) zstd/1.5.6-77bmnaj              41) unzip/6.0-w3hyu2g
 16) tar/1.34-ddpzee5                42) lua/5.3.6-bshf2me
 17) gettext/0.22.5-yrjlrvv          43) swig/4.0.2-fortran-irv7aqc
 18) libffi/3.4.6-3p64pum            44) llvm/15.0.7-aibkinw
 19) libxcrypt/4.4.35-3ofajra        45) mpfr/4.2.1-bpff5zj
 20) openssl/3.3.0-4gl4yy3           46) openlibm/0.8.1-vk6saea
 21) sqlite/3.43.2-4hpmcpr           47) p7zip/17.05-gtsuz3k
 22) util-linux-uuid/2.38.1-zaohlkc  48) pcre2/10.43-lfijy3h
 23) python/3.11.7                   49) metis/5.1.0-jfogols
 24) mbedtls/2.28.2-7husfdf          50) suite-sparse/7.2.1-ybtegdu
 25) libssh2/1.11.0-z7hjcm2          51) utf8proc/2.8.0-ib2ggng
 26) nghttp2/1.52.0-zz56qrn          52) julia/1.10.2

So in this case, loading the julia module (a high-level, high-performance dynamic programming language for numerical computing), also loaded many other dependencies as well. Let’s try unloading the julia package.

[SUNetID@rice-02:~]$ module unload julia
[SUNetID@rice-02:~]$ module list
Currently Loaded Modules:
  1) glibc/2.35-hwm6jll          13) libxml2/2.10.3-pwcbmqy
  2) gcc-runtime/13.2.0-4b46r64  14) pigz/2.8-gvcpolz
  3) bzip2/1.0.8-4z7zft5         15) zstd/1.5.6-77bmnaj
  4) libmd/1.0.4-zgn4nm3         16) tar/1.34-ddpzee5
  5) libbsd/0.12.1-i7vok2f       17) gettext/0.22.5-yrjlrvv
  6) expat/2.6.2-gugyyi4         18) libffi/3.4.6-3p64pum
  7) ncurses/6.5-l7iqip2         19) libxcrypt/4.4.35-3ofajra
  8) readline/8.2-yy655ut        20) openssl/3.3.0-4gl4yy3
  9) gdbm/1.23-rtgm7sw           21) sqlite/3.43.2-4hpmcpr
 10) libiconv/1.17-agtuexj       22) util-linux-uuid/2.38.1-zaohlkc
 11) xz/5.4.6-x7ef77y            23) python/3.11.7
 12) zlib-ng/2.1.6-4xk6kiq

So using module unload “un-loads” a module, and depending on how a site is configured it may also unload all of the dependencies (in our case it does). If we wanted to unload everything at once, we could run module purge (unloads everything).

[SUNetID@rice-02:~]$ module purge
[SUNetID@rice-02:~]$ module list
No modules loaded

Note that module purge is informative. It will also let us know if a default set of “sticky” packages cannot be unloaded (and how to actually unload these if we truly so desired).

Note that this module loading process happens principally through the manipulation of environment variables like $PATH. There is usually little or no data transfer involved.

The module loading process manipulates other special environment variables as well, including variables that influence where the system looks for software libraries, and sometimes variables which tell commercial software packages where to find license servers.

The module command also restores these shell environment variables to their previous state when a module is unloaded.
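
If you are curious exactly which variables a given module would change, you can usually inspect it without loading it, for example with the module show subcommand (output varies by site and module):

[SUNetID@rice-02:~]$ module show python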

Software Versioning

So far, we’ve learned how to load and unload software packages. This is very useful. However, we have not yet addressed the issue of software versioning. At some point or other, you will run into issues where only one particular version of some software will be suitable. Perhaps a key bugfix only happened in a certain version, or version X broke compatibility with a file format you use. In either of these example cases, it helps to be very specific about what software is loaded.

Let’s examine the output of module avail more closely.

[SUNetID@rice-02:~]$ module avail
------------------ /software/modules/linux-ubuntu22.04-x86_64/Core -------------------
   apptainer/1.1.9                        libjpeg-turbo/2.1.5.1
   blast-plus/2.14.1                      libpng/1.5.30
   boost/1.85.0                           llvm/18.1.3
   bowtie2/2.5.2                          micromamba/1.4.2
   cuda/11.4.4                            mpich/4.2.1

[removed most of the output here for clarity]

  Where:
   D:  Default Module

If the avail list is too long consider trying:

"module --default avail" or "ml -d av" to just list the default modules.
"module overview" or "ml ov" to display the number of modules for each name.

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of
the "keys".

Using Software Modules in Scripts

Create a job that is able to run python3 --version. Remember, no software is loaded by default! Running a job is just like logging on to the system (you should not assume a module loaded on the login node is loaded on a compute node).

Solution

[SUNetID@rice-02:~]$ nano python-module.sh
[SUNetID@rice-02:~]$ cat python-module.sh
#!/bin/bash
#SBATCH -t 00:00:30

module load python

python3 --version
[SUNetID@rice-02:~]$ sbatch python-module.sh

Key Points

  • Load software with module load softwareName.

  • Unload software with module unload

  • The module system handles software versioning and package conflicts for you automatically.


Transferring files with remote computers

Overview

Teaching: 15 min
Exercises: 15 min
Questions
  • How do I transfer files to (and from) the cluster?

Objectives
  • Transfer files to and from a computing cluster.

Performing work on a remote computer is not very useful if we cannot get files to or from the cluster. There are several options for transferring data between computing resources using CLI and GUI utilities, a few of which we will cover.

Download Files from the Internet with wget and git

One of the most straightforward ways to download files is to use either wget or git. These are usually installed in most Linux shells, on Mac OS terminal and in GitBash. Any file that can be downloaded in your web browser through a direct link can be downloaded using wget. This is a quick way to download datasets or source code. The basic syntax for wget is shown below.
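
A minimal sketch (the URL is a placeholder; the optional -O flag saves the download under a different name):

[you@laptop:~]$ wget -O new_name https://some/link/to/a/file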

git can be used to download files and code from a repository, such as GitHub. You can copy (“clone”) an entire repository, along with its history, using git clone followed by the repository’s URL.
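
For example (the repository URL here is purely illustrative):

[you@laptop:~]$ git clone https://github.com/some-user/some-repository.git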

Transferring Single Files with scp

To copy a single file to or from the cluster, we can use scp (“secure copy”). The syntax can be a little complex for new users, but we’ll break it down. The scp command is a relative of the ssh command we used to access the system, and can use the same public-key authentication mechanism.

To upload to another computer, the template command is

[you@laptop:~]$ scp local_file SUNetID@login.farmshare.stanford.edu:remote_destination

in which @ and : are field separators and remote_destination is a path relative to your remote home directory, or a new filename if you wish to change it, or both a relative path and a new filename. If you don’t have a specific folder in mind you can omit the remote_destination and the file will be copied to your home directory on the remote computer (with its original name). If you include a remote_destination, note that scp interprets this the same way cp does when making local copies: if it exists and is a folder, the file is copied inside the folder; if it exists and is a file, the file is overwritten with the contents of local_file; if it does not exist, it is assumed to be a destination filename for local_file.
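
For example, the following variations are both valid; analysis/ and renamed_file.txt are hypothetical names used only for illustration:

# copy into an existing remote folder, keeping the original filename
[you@laptop:~]$ scp local_file SUNetID@login.farmshare.stanford.edu:analysis/

# copy to your remote home directory under a new name
[you@laptop:~]$ scp local_file SUNetID@login.farmshare.stanford.edu:renamed_file.txt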

Transferring a Directory with scp

To transfer an entire directory, we add the -r flag for “recursive”: copy the item specified, and every item below it, and every item below those… until it reaches the bottom of the directory tree rooted at the folder name you provided.

[you@laptop:~]$ scp -r local_dir SUNetID@login.farmshare.stanford.edu:

Caution

For a large directory – either in size or number of files – copying with -r can take a long time to complete.

When using scp, you may have noticed that a : always follows the remote computer name. A string after the : specifies the remote directory you wish to transfer the file or folder to, including a new name if you wish to rename the remote material. If you leave this field blank, scp defaults to your home directory and the name of the local material to be transferred.

On Linux computers, / is the separator in file or directory paths. A path starting with a / is called absolute, since there can be nothing above the root /. A path that does not start with / is called relative, since it is not anchored to the root.

If you want to upload a file to a location inside your home directory – which is often the case – then you don’t need a leading /. After the :, you can type the destination path relative to your home directory. If your home directory is the destination, you can leave the destination field blank, or type ~ – the shorthand for your home directory – for completeness.

With scp, a trailing slash on the target directory is optional, and has no effect. A trailing slash on a source directory is important for other commands, like rsync.

Transferring Data with rsync

As you gain experience with transferring files, you may find the scp command limiting. The rsync utility provides advanced features for file transfer and is typically faster compared to both scp and sftp (see below). It is especially useful for transferring large and/or many files and for synchronizing folder contents between computers.

The syntax is similar to scp. To transfer to another computer with commonly used options:

[you@laptop:~]$ rsync -avP local_file SUNetID@login.farmshare.stanford.edu:

The options are:

  • -a (archive): preserve file timestamps, permissions, and links, and recurse into directories
  • -v (verbose): print verbose output so you can follow the transfer
  • -P (partial/progress): keep partially transferred files if the connection is interrupted, and display transfer progress

To recursively copy a directory, we can use the same options:

[you@laptop:~]$ rsync -avP local_dir SUNetID@login.farmshare.stanford.edu:~/

As written, this will place the local directory and its contents under your home directory on the remote system. If a trailing slash is added to the source, a new directory corresponding to the transferred directory will not be created, and the contents of the source directory will be copied directly into the destination directory.
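
To make the trailing-slash behaviour concrete, here is a sketch; dest is a hypothetical directory that already exists on the cluster:

# creates ~/dest/local_dir/ on the remote side, containing the directory's files
[you@laptop:~]$ rsync -avP local_dir SUNetID@login.farmshare.stanford.edu:~/dest/

# copies the contents of local_dir directly into ~/dest/ (no local_dir subdirectory)
[you@laptop:~]$ rsync -avP local_dir/ SUNetID@login.farmshare.stanford.edu:~/dest/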

To download a file or directory, we simply swap the source and destination:

[you@laptop:~]$ rsync -avP SUNetID@login.farmshare.stanford.edu:local_dir ./

Transferring Data with Globus

While scp and rsync are excellent tools for quick transfers from your local machine to HPC filesystems, sometimes you need to transfer many gigabytes or even terabytes of data. In this case, scp and rsync may not be robust enough to complete your data transfer; they require a constant connection. If there is an interruption in your connection then you will need to rerun the command, often from the beginning to ensure that files were not corrupted during the disconnect. This is where Globus comes in.

Globus is a not-for-profit service developed and operated by the University of Chicago. Globus allows you to move, share, and access data on any system where the Globus software is installed. Globus can be installed on your laptop or lab computer, and many universities and national labs have Globus installed for their core facilities and HPC systems.

Globus has three main advantages over scp and rsync:

  • Transfers are "fire and forget": once started, Globus manages the transfer in the background, so you do not need to keep your laptop connected or your terminal open.
  • Interrupted transfers restart automatically from where they left off.
  • File integrity is verified with checksums, so you can be confident your data arrived uncorrupted.

Installing Globus on Your Local Machine

To use Globus to transfer data between your local machine and FarmShare, you will first need to install Globus on your own computer.

  1. Go to https://www.globus.org/globus-connect-personal.
  2. Click the “INSTALL NOW” link for the Globus Connect Personal for your operating system (e.g. MacOS, Windows, Linux).
  3. Follow the installation instructions for your operating system.

How to Transfer Data from Your Local Machine to FarmShare

Once Globus Connect Personal is installed on your local machine, you can transfer data between your computer and FarmShare.

  1. In your local machine’s web browser, go to https://app.globus.org/. You may need to authenticate with your SUNetID or Cardinal Key first (select “Stanford University” from the dropdown menu that appears).
  2. In the top right-hand corner, click the Panels icon that has two panels (the middle one).
  3. Above the left panel, click the Search field next to “Collection”.
  4. In the screen that opens, click “Your Collections” and select the collection that represents your local machine.
  5. Navigate to the files/folders you want to transfer and select them.
  6. Above the right hand panel, click the Search field next to “Collection”.
  7. Start typing “FarmShare” and select that collection once it appears.
  8. Navigate to the desired destination for your files.
  9. Once your source files and desired destination are selected, you can begin the transfer. Click the “Start” button over the left panel.

Transferring Data to Your Local Machine

The above steps can also be used to transfer data from FarmShare to your local machine. Simply select the files you want to transfer from FarmShare and the desired destination on your local machine, and then click the “Start” button over the right-side “FarmShare” panel.

Transferring Data with FarmShare OnDemand

FarmShare OnDemand is a web interface to the FarmShare HPC system. OnDemand allows you to manage your files, access a shell session, and use interactive apps like JupyterLab, RStudio, and VS Code. We will only discuss OnDemand’s File Manager below, but we will cover the other features in-depth in the next section.

Managing Files with the FarmShare OnDemand File Manager

To create, edit or move files, click on the Files menu from the Dashboard page. A drop-down menu will appear, listing your most common storage locations on FarmShare: $HOME, Class Directories, Group Directories, and $SCRATCH.

Choosing one of the file spaces opens the File Explorer in a new browser tab. The files in the selected directory are listed.

There are two sets of buttons in the File Explorer.

[Figure: OnDemand File Explorer buttons next to each filename]

Those buttons allow you to View, Edit, Rename, Download, or Delete a file.

[Figure: OnDemand File Explorer buttons in the top-right menu]
The buttons in the top-right menu perform the following functions:

  • Open in Terminal: open a terminal window on FarmShare in a new browser tab
  • Refresh: refresh the list of directory contents
  • New File: create a new, empty file
  • New Directory: create a new subdirectory
  • Upload: copy a file from your local machine to FarmShare
  • Download: download selected files to your local machine
  • Copy/Move: copy or move selected files (after navigating to a different directory)
  • Delete: delete selected files
  • Change directory: change your current working directory
  • Copy path: copy the current working directory path to your clipboard
  • Show Dotfiles: toggle the display of dotfiles (files starting with a ., which are usually hidden)
  • Show Owner/Mode: toggle the display of owner and permission settings

Working with Windows

Transferring text files from a Windows system to a Unix system (Mac, Linux, BSD, Solaris, etc.) can cause problems. Windows encodes its files slightly differently from Unix and adds an extra character to every line.

On a Unix system, every line in a file ends with a \n (newline). On Windows, every line in a file ends with a \r\n (carriage return + newline). This causes problems sometimes.

Though most modern programming languages and software handle this correctly, in some rare instances you may run into an issue. The solution is to convert a file from Windows to Unix encoding with the dos2unix command.

You can identify whether a file has Windows line endings with cat -A filename. A file with Windows line endings will have ^M$ at the end of every line; a file with Unix line endings will end every line with $ only.

To convert the file, just run dos2unix filename. (Conversely, to convert back to Windows format, you can run unix2dos filename.)
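
A minimal check-and-convert sequence might look like the following; myscript.sh is a placeholder filename, and dos2unix may need to be loaded as a module on some systems:

[SUNetID@rice-02:~]$ cat -A myscript.sh    # Windows line endings show up as ^M$
[SUNetID@rice-02:~]$ dos2unix myscript.sh  # convert the file in place to Unix line endings
[SUNetID@rice-02:~]$ cat -A myscript.sh    # lines should now end with $ only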

Key Points

  • wget and git clone download files from the internet.

  • scp and rsync transfer files to and from your computer.

  • You can use Globus or FarmShare OnDemand to transfer data through a GUI.


Open OnDemand

Overview

Teaching: 10 min
Exercises: 10 min
Questions
  • How do we use Open OnDemand?

Objectives
  • Open a shell session in your web browser

  • Start an interactive Desktop session

  • Start an interactive JupyterLab session

Open OnDemand is an open-source web portal developed by Ohio Supercomputer Center that enables researchers to access HPC systems from their web browsers. In the previous section we showed how the FarmShare OnDemand File Manager can be used to access, edit, and transfer data on an HPC filesystem. FarmShare OnDemand also allows you to access a FarmShare shell, start an interactive desktop session, and use interactive apps like JupyterLab, RStudio, Matlab, and VS Code.

To access the FarmShare OnDemand Dashboard, open your web browser and go to http://ondemand.farmshare.stanford.edu.

Access a FarmShare Shell

You can access a FarmShare shell by selecting Clusters > FarmShare Shell Access from the top menu of the OnDemand Dashboard.

A new window will open in your browser, and you will automatically be logged onto a FarmShare login node. This will work just like the shell session in your local terminal app, but you don’t need to log in with SSH or authenticate with Duo Two-Factor authentication.

X11 Forwarding

X11 forwarding does not work with the OnDemand shell. If you need to use software with X11, you can continue to use your local terminal app or you can use an interactive desktop session.

Start an Interactive Desktop Session

The FarmShare Desktop session launches an interactive desktop on one or more compute nodes, granting full access to the resources those nodes provide. This is similar to an interactive job (srun --pty bash), with the added bonus of a graphical user interface (GUI).

  1. From the OnDemand Dashboard, select Interactive Apps > FarmShare Desktop.
  2. In the screen that opens, specify the Size (number of cores and memory), whether to Allocate a GPU, and the Number of hours that the session should run for.
  3. Click the Launch button.
  4. The My Interactive Sessions page will open. You will see a card for your pending FarmShare Desktop session. Once the resources have been allocated, you can click the Launch FarmShare Desktop button to open the desktop session.

Start an Interactive JupyterLab Session

FarmShare OnDemand allows users to launch common software GUIs in a web browser, powered by the compute resources that you request. Currently available interactive apps include JupyterLab, MATLAB, RStudio, and VS Code.

We will only cover how to start a JupyterLab session, but the process is largely the same for the other apps.

  1. From the OnDemand Dashboard, select Interactive Apps > JupyterLab.
  2. In the screen that opens, specify the Python version, Size (number of cores and memory), whether to Allocate a GPU, and the Number of hours that the session should run for.
  3. Click the Launch button.
  4. The My Interactive Sessions page will open. You will see a card for your pending JupyterLab session. Once the resources have been allocated, you can click the Connect to Jupyter button to open the session.

Key Points

  • Open OnDemand allows users to interface with an HPC system through a web browser.


Running a SLURM job array

Overview

Teaching: 30 min
Exercises: 60 min
Questions
  • How do we execute a task in parallel?

Objectives
  • Write a batch script for a single task.

  • Implement --array and $SLURM_ARRAY_TASK_ID in the batch script.

  • Submit and monitor the job script.

Often we need to run a script across many input files or samples, or we need to run a parameter sweep to determine the best values for a model. Rather than painstakingly submitting a batch job for every iteration, we can use a SLURM job array to simplify the task.

If you disconnected, log back in to the cluster.

[you@laptop:~]$ ssh SUNetID@login.farmshare.stanford.edu

An Illustrative Example

For our example job array, we are going to read from an input text file. Each row of the text file represents a unique sample. We are going to process the data in each row separately, each in its own job array step. The output for each array step will be written to its own output text file.

To begin, use nano to create a text file called input.txt and enter the table below (you can copy and paste):

[SUNetID@rice-02:~]$ nano input.txt
SampleID    SampleName    NCats    NDogs
     001         Henry        0        2
     002           Rob        1        1
     003       Harmony        3        0
     004         Nevin        0        0

Our task is to add the number of cats NCats and the number of dogs NDogs for each sample to determine the total number of pets per sample. For each sample, we will create an output text file called sample-<SampleID>.txt. The output text file will contain a line of text as follows:

<SampleName> has a total of <NCats + NDogs> pets.

1. Write and Test an SBATCH Script for a Single Sample

Write a batch script for a single iteration of your workflow. We want to make sure it runs as expected before submitting potentially thousands of copies of our job to the scheduler. This is often where the most work needs to be done, therefore this is the longest section of the example (even though there is no parallelization happening here!).

[SUNetID@rice-02:~]$ nano jobtest.sbatch
#!/bin/bash
#SBATCH --cpus-per-task=1
#SBATCH --mem=500m
#SBATCH --time=00:01:00

# Specify the input text file
input=$HOME/input.txt

# Select a single row to test
test_row=1

# Extract the SampleID
sample_id=$(awk -v i=$test_row '$1==i {print $1}' $input)

# Extract the SampleName
sample_name=$(awk -v i=$test_row '$1==i {print $2}' $input)

# Extract NCats
ncats=$(awk -v i=$test_row '$1==i {print $3}' $input)

# Extract NDogs
ndogs=$(awk -v i=$test_row '$1==i {print $4}' $input)

# Add ncats and ndogs to get npets
npets=$(expr $ncats + $ndogs)

# Specify output text filename
output=$HOME/sample-${sample_id}.txt

# Write to output file
echo "$sample_name has a total of $npets pets." >> $output

Now we can submit our test sbatch script.

[SUNetID@rice-02:~]$ sbatch jobtest.sbatch

We can check the status of our job with squeue.

[SUNetID@rice-02:~]$ squeue --me

And then when the job is complete, we can verify that we get the expected output.

[SUNetID@rice-02:~]$ cat sample-001.txt
Henry has a total of 2 pets.

2. Set the --array SBATCH Directive

The SBATCH directive --array tells the scheduler how many copies of your code should run, or rather, how many job array steps there should be. In the case of our example, we have four samples so --array=1-4.

  1. Create a copy of jobtest.sbatch and name it jobarray.sbatch.
  2. Add #SBATCH --array=1-4 to our list of SBATCH directives.
[SUNetID@rice-02:~]$ cp jobtest.sbatch jobarray.sbatch
[SUNetID@rice-02:~]$ nano jobarray.sbatch
#!/bin/bash
#SBATCH --cpus-per-task=1
#SBATCH --mem=500m
#SBATCH --time=00:01:00
#SBATCH --array=1-4

# Specify the input text file
input=$HOME/input.txt

# Select a single row to test
test_row=1
...

3. Use the $SLURM_ARRAY_TASK_ID Variable

Much like the iterator of a for loop, the $SLURM_ARRAY_TASK_ID variable is used to handle individual tasks or job array steps. In our example where #SBATCH --array=1-4, we will have four separate array tasks corresponding to our four samples. For the first array task where we process the first sample, $SLURM_ARRAY_TASK_ID will be set to 1; in the second array task, $SLURM_ARRAY_TASK_ID will equal 2, and so on.

In our original test of a single task, we created the variable test_row and set it to 1. We then used test_row to extract variables from a single row of input.txt. Using test_row in this way was sort of like setting SLURM_ARRAY_TASK_ID=1.

In our production job, we will remove the line creating the test_row variable. We will then replace all instances of $test_row with $SLURM_ARRAY_TASK_ID.

[SUNetID@rice-02:~]$ nano jobarray.sbatch
#!/bin/bash
#SBATCH --cpus-per-task=1
#SBATCH --mem=500m
#SBATCH --time=00:01:00
#SBATCH --array=1-4

# Specify the input text file
input=$HOME/input.txt

# Extract the SampleID
sample_id=$(awk -v i=$SLURM_ARRAY_TASK_ID '$1==i {print $1}' $input)

# Extract the SampleName
sample_name=$(awk -v i=$SLURM_ARRAY_TASK_ID '$1==i {print $2}' $input)

# Extract NCats
ncats=$(awk -v i=$SLURM_ARRAY_TASK_ID '$1==i {print $3}' $input)

# Extract NDogs
ndogs=$(awk -v i=$SLURM_ARRAY_TASK_ID '$1==i {print $4}' $input)

# Add ncats and ndogs to get npets
npets=$(expr $ncats + $ndogs)

# Specify output text filename
output=$HOME/sample-${sample_id}.txt

# Write to output file
echo "$sample_name has a total of $npets pets." >> $output

4. Submit the Job Array

Now we can submit our job array to the scheduler. We only have to run the sbatch command once, and SLURM will handle the creation of all the individual array tasks.

[SUNetID@rice-02:~]$ sbatch jobarray.sbatch
Submitted batch job 277394

When you submit the job array, you will receive a main job ID.

[SUNetID@rice-02:~]$ squeue --me
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          277394_1    normal jobarray  SUNetID  R       0:06      1 wheat-01
          277394_2    normal jobarray  SUNetID  R       0:06      1 wheat-01
          277394_3    normal jobarray  SUNetID  R       0:06      1 wheat-01
          277394_4    normal jobarray  SUNetID  R       0:06      1 wheat-01

Each individual array task also receives its own ID of the form <main job ID>_<task number>, as shown in the squeue output above (e.g. 277394_1).

Advanced Job Array Options

  • %N: By default, SLURM will try to run all your array tasks at once, as resources allow. If you are trying to run thousands of tasks, you will probably run into job submission limits. You can append %N (where N is a number) to the --array range to limit the number of simultaneous tasks. For example, #SBATCH --array=1-100%10 will submit 100 total array tasks, running at most 10 at a time.

  • Select array steps: You can specify particular array steps by changing the value of --array (see the example after this list).

    • --array=5 will only submit array task 5.
    • --array=1,6,9 will submit array tasks 1, 6, and 9.
    • --array=0-100:10 will submit array tasks 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100.
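
These options are handy for re-running only the tasks that need it. For example, if array task 3 of our earlier job had failed, we could resubmit just that task by overriding the directive on the command line (a sketch using the jobarray.sbatch script from above):

[SUNetID@rice-02:~]$ sbatch --array=3 jobarray.sbatch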

Key Points

  • Parallel programming allows applications to take advantage of parallel hardware.

  • The queuing system facilitates executing parallel tasks.


Using resources effectively

Overview

Teaching: 10 min
Exercises: 20 min
Questions
  • How can I review past jobs?

  • How can I use this knowledge to create a more accurate submission script?

Objectives
  • Look up job statistics.

  • Make more accurate resource requests in job scripts based on data describing past performance.

We’ve touched on all the skills you need to interact with an HPC cluster: logging in over SSH, loading software modules, submitting parallel jobs, and finding the output. Let’s learn about estimating resource usage and why it might matter.

Estimating Required Resources Using the Scheduler

Although we covered requesting resources from the scheduler earlier, how do we know what type of resources the software will need in the first place, and its demand for each? In general, unless the software documentation or user testimonials provide some idea, we won’t know how much memory or compute time a program will need.

Read the Documentation

Most HPC facilities maintain documentation as a wiki, a website, or a document sent along when you register for an account. Take a look at these resources, and search for the software you plan to use: somebody might have written up guidance for getting the most out of it.

Why estimate resources accurately when you can just ask for the maximum CPUs/GPUs/RAM/time?

The more you request, the longer your job typically waits in the queue, and whatever you reserve but do not use is blocked from other users of the shared system.

Will my code run faster if I use more than 1 CPU/GPU?

Only if your code can use more than one CPU/GPU. Please read your code's documentation! Look for the software's flags or options for CPUs/threads/cores and match these to your sbatch parameters (-c or -n).
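
As a sketch, a program with a hypothetical --threads option could be matched to the cores you request like this; $SLURM_CPUS_PER_TASK is set by SLURM to the value given with -c / --cpus-per-task:

#!/bin/bash
#SBATCH -c 4

# ask the program for exactly as many threads as SLURM allocated to this job
myprogram --threads "$SLURM_CPUS_PER_TASK" input.dat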

Method 1: Start an interactive job on a compute node and monitor performance with htop

This method can be done before you scale up and run your code with an sbatch script.

  1. srun --pty bash

  2. Load any modules you need, then run your code in the background:

     [SUNetID@wheat01:~]$ python3 mycode.py > /dev/null 2>&1 &
     [SUNetID@wheat01:~]$ htop -u $USER
    

By running the htop or top command on the compute node while your code runs in the background, you can see in real time how many CPUs and threads, and how much RAM, your code is using.

More info: htop, top

Note: > /dev/null 2>&1 & will redirect all code output away from the terminal and keep the command line prompt available for you to run htop/top.

htop example on a compute node: showing all 4 requested CPUs used

[SUNetID@rice-02:~]$ srun -c 4 --pty bash
[SUNetID@wheat01:~]$ ml load matlab
[SUNetID@wheat01:~]$ matlab -batch "pfor" > /dev/null 2>&1&
[SUNetID@wheat01:~]$ htop
[Figure: htop output on a compute node showing MATLAB using all 4 requested CPUs]

Method 2: Use seff to look at resource usage after a job completes

seff displays statistics related to the efficiency of resource usage by a completed job. These are approximations based on SLURM's job sampling rate of one sample every 5 minutes.

seff <jobid>

[SUNetID@rice-02:~]$ seff 66594168
Job ID: 66594168
Cluster: sherlock
User/Group: mpiercy/<PI_SUNETID>
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 12
CPU Utilized: 00:02:31
CPU Efficiency: 20.97% of 00:12:00 core-walltime
Job Wall-clock time: 00:01:00
Memory Utilized: 5.79 GB
Memory Efficiency: 12.35% of 46.88 GB

Example 1: Job 43498042

[SUNetID@rice-02:~]$ seff 43498042
Job ID: 43498042
Cluster: sherlock
User/Group: mpiercy/<PI_SUNETID>
State: TIMEOUT (exit code 0)
Nodes: 1
Cores per node: 2
CPU Utilized: 00:58:51
CPU Efficiency: 96.37% of 01:01:04 core-walltime
Job Wall-clock time: 00:30:32
Memory Utilized: 3.63 GB
Memory Efficiency: 90.84% of 4.00 GB

So, this job kept both CPUs busy for almost the entire time requested, 30 minutes. The core-walltime is therefore 2 × 30 minutes, or 1 hour, and a CPU efficiency of 96.37% is pretty efficient! Note that seff only reports on jobs that have finished running.

Example 2 (over-requested resources): Job 43507209

[SUNetID@rice-02:~]$ seff 43507209
Job ID: 43507209
Cluster: sherlock
User/Group: mpiercy/ruthm
State: TIMEOUT (exit code 0)
Nodes: 1
Cores per node: 2
CPU Utilized: 00:29:15
CPU Efficiency: 48.11% of 01:00:48 core-walltime
Job Wall-clock time: 00:30:24
Memory Utilized: 2.65 GB
Memory Efficiency: 66.17% of 4.00 GB

Because 2 CPUs were requested for 30 minutes (the Job Wall-clock time) but only one was used by the code (CPU Utilized), we get a CPU efficiency of 48.11%, essentially 50%. Since 2 CPUs were requested for 30 minutes each, we see 1 hour of total core-walltime requested, but only 30 minutes was used.

So in this case there was no logical reason to request 2 CPUs, 1 would have been sufficient.

The memory was sufficiently utilized and we did not get an out of memory error, so we probably don’t need to request any extra.

Method 3: Use sacct to look at resource usage after a job completes

We can also use sacct to estimate a job’s resource requirements. sacct provides much more detail about our job than seff does, and we can customize the output with the -o flag.

sacct -o reqmem,maxrss,averss,elapsed,alloccpu -j <jobid>

Let’s compare two jobs, 20292 and 426651.

[SUNetID@rice-02:~]$ sacct -o reqmem,maxrss,averss,elapsed,alloccpu -j 20292
    ReqMem     MaxRSS     AveRSS    Elapsed  AllocCPUS
---------- ---------- ---------- ---------- ----------
       4Gn                         00:08:53          1
       4Gn      3552K      5976K   00:08:57          1
       4Gn    921256K    921256K   00:08:49          1

Here, the job only used 0.92 GB but 4 GB was requested, so about 3 GB was needlessly reserved and the job waited in the queue longer than necessary before it ran. Note that SLURM only samples a job's resource usage every few minutes, so these values are approximate. Jobs with a MaxRSS close to ReqMem can still hit an out-of-memory (OOM) event and die; when this happens, request more memory in your sbatch script with the --mem= directive.

[SUNetID@rice-02:~]$ sacct -o reqmem,maxrss,averss,elapsed,alloccpu -j 426651
    ReqMem     MaxRSS     AveRSS    Elapsed  AllocCPUS
---------- ---------- ---------- ---------- ----------
       4Gn                         00:08:53          1
       4Gn      3552K      5976K   00:08:57          1
       4Gn   2921256K   2921256K   00:08:49          1

Here, the job came close to using the requested memory: 2.92 GB was used of the 4 GB requested. This was a pretty accurate request.

sacct accuracy and sampling rate

sacct memory values are based on sampling the application's memory usage at specific points in time.

Remember that sacct results for memory usage (MaxVMSize, AveRSS, MaxRSS) are often not accurate for Out Of Memory (OOM) jobs.

This is because the job is often terminated before the next sacct sample is taken, and before it reaches its full memory allocation.

Example 1: Job 43498042

Here we will use sacct to look at resource usage for job 43498042 (the same job that is in Example 1 of the seff section).

[SUNetID@rice-02:~]$ sacct --format=JobID,state,elapsed,MaxRss,AveRSS,MaxVMSize,TotalCPU,ReqCPUS,ReqMem -j 43498042
[Figure: sacct output for example 1]

So, sacct shows that this job used 3.8GB of memory (MaxRSS and AveRSS) and 4GB was requested, an accurate request!

MaxVMSize, the virtual memory size, is usually not representative of the memory your application actually used: it aggregates address space from shared libraries, memory that has been allocated but not used, swap, and so on. It is usually much greater than what your job's application actually consumed. MaxRSS and AveRSS are much better metrics.

There are times when a very large MaxVMSize does indeed indicate that insufficient memory was requested.

Example 2: Job 43507209

Here we will use sacct to look at resource usage for job 43507209 (the same job that is in Example 2 of the seff section).

[SUNetID@rice-02:~]$ sacct --format=JobID,state,elapsed,MaxRss,AveRSS,MaxVMSize,TotalCPU,ReqCPUS,ReqMem -j 43507209
[Figure: sacct output for example 2]

So MaxRSS, the maximum resident memory set size of all tasks in the job was 2.77 GB and AveRSS was also 2.77 GB. 4GB was requested so this was a pretty accurate request.

Key Points

  • Accurate job scripts help the queuing system efficiently allocate shared resources.


Using shared resources responsibly

Overview

Teaching: 15 min
Exercises: 5 min
Questions
  • How can I be a responsible user?

  • How can I protect my data?

  • How can I best get large amounts of data off an HPC system?

Objectives
  • Describe how the actions of a single user can affect the experience of others on a shared system.

  • Discuss the behaviour of a considerate shared system citizen.

  • Explain the importance of backing up critical data.

  • Describe the challenges with transferring large amounts of data off HPC systems.

  • Convert many files to a single archive file using tar.

One of the major differences between using remote HPC resources and your own system (e.g. your laptop) is that remote resources are shared. How many users the resource is shared between at any one time varies from system to system, but it is unlikely you will ever be the only user logged into or using such a system.

The widespread usage of scheduling systems where users submit jobs on HPC resources is a natural outcome of the shared nature of these resources. There are other things you, as an upstanding member of the community, need to consider.

Be Kind to the Login Nodes

The login node is often busy managing all of the logged in users, creating and editing files and compiling software. If the machine runs out of memory or processing capacity, it will become very slow and unusable for everyone. While the machine is meant to be used, be sure to do so responsibly – in ways that will not adversely impact other users’ experience.

Login nodes are always the right place to launch jobs. Cluster policies vary, but they may also be used for proving out workflows, and in some cases, may host advanced cluster-specific debugging or development tools. The cluster may have modules that need to be loaded, possibly in a certain order, and paths or library versions that differ from your laptop, and doing an interactive test run on the head node is a quick and reliable way to discover and fix these issues.

Login Nodes Are a Shared Resource

Remember, the login node is shared with all other users and your actions could cause issues for other people. Think carefully about the potential implications of issuing commands that may use large amounts of resource.

Unsure? Ask your friendly systems administrator (“sysadmin”) if the thing you’re contemplating is suitable for the login node, or if there’s another mechanism to get it done safely.

You can always use the commands top and ps ux to list the processes that are running on the login node along with the amount of CPU and memory they are using. If this check reveals that the login node is somewhat idle, you can safely use it for your non-routine processing task. If something goes wrong – the process takes too long, or doesn’t respond – you can use the kill command along with the PID to terminate the process.
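
For example, a quick check-and-cleanup on the login node might look like this; 12345 stands in for whatever PID ps reports for your runaway process:

[SUNetID@rice-02:~]$ ps ux          # list your processes along with their PIDs
[SUNetID@rice-02:~]$ kill 12345     # ask the process to terminate
[SUNetID@rice-02:~]$ kill -9 12345  # force-kill it if it ignores the first signal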

Login Node Etiquette

Which of these commands would be a routine task to run on the login node?

  1. python physics_sim.py
  2. make
  3. create_directories.sh
  4. molecular_dynamics_2
  5. tar -xzf R-3.3.0.tar.gz

Solution

Building software, creating directories, and unpacking software are common and acceptable tasks for the login node: options #2 (make), #3 (mkdir), and #5 (tar) are probably OK. Note that script names do not always reflect their contents: before launching #3, please less create_directories.sh and make sure it's not a Trojan horse.

Running resource-intensive applications is frowned upon. Unless you are sure it will not affect other users, do not run jobs like #1 (python) or #4 (custom MD code). If you’re unsure, ask your friendly sysadmin for advice.

If you experience performance issues with a login node you should report it to the system staff (usually via the helpdesk) for them to investigate.

Test Before Scaling

Remember that you are generally charged for usage on shared systems. A simple mistake in a job script can end up costing a large amount of resource budget. Imagine a job script with a mistake that makes it sit doing nothing for 24 hours on 1000 cores or one where you have requested 2000 cores by mistake and only use 100 of them! This problem can be compounded when people write scripts that automate job submission (for example, when running the same calculation or analysis over lots of different parameters or files). When this happens it hurts both you (as you waste lots of charged resource) and other users (who are blocked from accessing the idle compute nodes). On very busy resources you may wait many days in a queue for your job to fail within 10 seconds of starting due to a trivial typo in the job script. This is extremely frustrating!

Most systems provide dedicated resources for testing that have short wait times to help you avoid this issue.

Test Job Submission Scripts That Use Large Amounts of Resources

Before submitting a large run of jobs, submit one as a test first to make sure everything works as expected.

Before submitting a very large or very long job submit a short truncated test to ensure that the job starts as expected.

Have a Backup Plan

Although many HPC systems keep backups, these do not always cover all the file systems available and may only be intended for disaster recovery purposes (i.e. for restoring the whole file system if it is lost, rather than an individual file or directory you have deleted by mistake). Protecting critical data from corruption or deletion is primarily your responsibility: keep your own backup copies.

Version control systems (such as Git) often have free, cloud-based offerings (e.g., GitHub and GitLab) that are generally used for storing source code. Even if you are not writing your own programs, these can be very useful for storing job scripts, analysis scripts and small input files.

If you are building software, you may have a large amount of source code that you compile to build your executable. Since this data can generally be recovered by re-downloading the code, or re-running the checkout operation from the source code repository, this data is also less critical to protect.

For larger amounts of data, especially important results from your runs, which may be irreplaceable, you should make sure you have a robust system in place for taking copies of data off the HPC system wherever possible to backed-up storage. Tools such as rsync can be very useful for this.

Your access to the shared HPC system will generally be time-limited so you should ensure you have a plan for transferring your data off the system before your access finishes. The time required to transfer large amounts of data should not be underestimated and you should ensure you have planned for this early enough (ideally, before you even start using the system for your research).

In all these cases, the helpdesk of the system you are using should be able to provide useful guidance on your options for data transfer for the volumes of data you will be using.

Your Data Is Your Responsibility

Make sure you understand what the backup policy is on the file systems on the system you are using and what implications this has for your work if you lose your data on the system. Plan your backups of critical data and how you will transfer data off the system throughout the project.

Transferring Data

As mentioned above, many users run into the challenge of transferring large amounts of data off HPC systems at some point (this is more often in transferring data off than onto systems but the advice below applies in either case). Data transfer speed may be limited by many different factors so the best data transfer mechanism to use depends on the type of data being transferred and where the data is going.

The components between your data’s source and destination have varying levels of performance, and in particular, may have different capabilities with respect to bandwidth and latency.

Bandwidth is generally the raw amount of data per unit time a device is capable of transmitting or receiving. It’s a common and generally well-understood metric.

Latency is a bit more subtle. For data transfers, it may be thought of as the amount of time it takes to get data out of storage and into a transmittable form. Latency issues are the reason it’s advisable to execute data transfers by moving a small number of large files, rather than the converse.

Each component along the path, from your local disk and network connection, through the networks in between, to the HPC system's own network and file systems, contributes its own bandwidth and latency limits.

As mentioned above, if you have related data that consists of a large number of small files it is strongly recommended to pack the files into a larger archive file for long term storage and transfer. A single large file makes more efficient use of the file system and is easier to move, copy and transfer because significantly fewer metadata operations are required. Archive files can be created using tools like tar and zip. We have already met tar when we talked about data transfer earlier.
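
As a sketch, packing a results directory before transfer and unpacking it on the other side might look like this; results is a placeholder directory name:

[SUNetID@rice-02:~]$ tar -czvf results.tar.gz results    # pack and compress the directory
[you@laptop:~]$ tar -xzvf results.tar.gz                 # unpack after transferring it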

[Figure: Schematic of network bandwidth]
Schematic diagram of bandwidth and latency for disk and network I/O. Each of the components on the figure is connected by a blue line of width proportional to the interface bandwidth. The small mazes at the link points illustrate the latency of the link, with more tortuous mazes indicating higher latency.

Consider the Best Way to Transfer Data

If you are transferring large amounts of data you will need to think about what may affect your transfer performance. It is always useful to run some tests that you can use to extrapolate how long it will take to transfer your data.

Say you have a “data” folder containing 10,000 or so files, a healthy mix of small and large ASCII and binary data. Which of the following would be the best way to transfer them to FarmShare?

  1. [you@laptop:~]$ scp -r data SUNetID@login.farmshare.stanford.edu:~/
    
  2. [you@laptop:~]$ rsync -ra data SUNetID@login.farmshare.stanford.edu:~/
    
  3. [you@laptop:~]$ rsync -raz data SUNetID@login.farmshare.stanford.edu:~/
    
  4. [you@laptop:~]$ tar -cvf data.tar data
    [you@laptop:~]$ rsync -raz data.tar SUNetID@login.farmshare.stanford.edu:~/
    
  5. [you@laptop:~]$ tar -cvzf data.tar.gz data
    [you@laptop:~]$ rsync -ra data.tar.gz SUNetID@login.farmshare.stanford.edu:~/
    

Solution

  1. scp will recursively copy the directory. This works, but without compression.
  2. rsync -ra works like scp -r, but preserves file information such as modification times and permissions. This is marginally better.
  3. rsync -raz adds compression, which will save some bandwidth. If you have a strong CPU at both ends of the line, and you’re on a slow network, this is a good choice.
  4. This command first uses tar to merge everything into a single file, then rsync -z to transfer it with compression. With this large number of files, metadata overhead can hamper your transfer, so this is a good idea.
  5. This command uses tar -z to compress the archive, then rsync to transfer it. This may perform similarly to #4, but in most cases (for large datasets), it’s the best combination of high throughput and low latency (making the most of your time and network connection).

Key Points

  • Be careful how you use the login node.

  • Your data on the system is your responsibility.

  • Plan and test large data transfers.

  • It is often best to convert many files to a single archive file before transferring.