Rivanna HPC — Job Submission Guide

Quick Reference for Requesting & Managing Jobs on UVA's HPC Cluster
UVA Research Computing · March 2026

1. Web-Based Access (Open OnDemand)

Open OnDemand provides browser-based access to Rivanna. No VPN is required. The following walkthrough demonstrates how to launch a JupyterLab session with GPU resources.

Once a session is allocated, it continues running on the cluster independently of your local machine. You can safely close the browser and return to it later.

Step 1

Navigate to the UVA HPC login page and click “Launch Open OnDemand,” or go directly to ood.hpc.virginia.edu.

[Screenshot: RC website with "Launch Open OnDemand" button highlighted]

Step 2

You will land on the Open OnDemand dashboard. Click My Interactive Sessions in the top navigation bar.

[Screenshot: Open OnDemand dashboard with "My Interactive Sessions" highlighted]

Step 3

Select JupyterLab under Servers in the Interactive Apps dropdown.

[Screenshot: Interactive Apps dropdown with JupyterLab highlighted]

Other available apps include:

  • Code Server — VS Code in the browser
  • RStudio Server — for R workflows
  • MATLAB Desktop / Web — for MATLAB users
  • Desktop — full Linux desktop environment

JupyterLab is the most common choice for Python-based work.

Step 4

[Screenshot: JupyterLab configuration form]

Configure your session:

  • Rivanna/Afton Partition — select “GPU” for GPU-enabled jobs, or “Standard” for CPU-only jobs
  • Number of hours — maximum wall time for your session (up to 72 hours)
  • Number of cores — CPU cores allocated
  • Memory — RAM in GB
  • Work Directory — where your session’s files are stored (see table below)
  • Allocation — your group’s allocation
  • GPU type — choose a specific GPU or leave as “default”
  • Number of GPUs — how many GPUs to request

The screenshot shows one example configuration. Adjust values to match your workload.

Click Launch when ready.

Available GPU types: a6000, a40, v100, a100, h200. If unsure, leave GPU type as “default” and the scheduler will assign any available GPU.

Work Directory Reference

Option     Path                     Size     Notes
SCRATCH    /scratch/<computing_id>  10 TB    Fast, no backups; files purged after 90 days of inactivity
HOME       /home/<computing_id>     200 GB   Weekly snapshots, slower I/O
PROJECT    Leased group storage     Varies   Shared research group storage; must be requested
STANDARD   Leased group storage     Varies   Shared internal/public storage; must be requested

SCRATCH is the default and is recommended for most jobs. Back up important results to HOME afterward.

Step 5

Once the session is running, click Connect to Jupyter to open JupyterLab in a new tab.

[Screenshot: running session with "Connect to Jupyter" button highlighted]

2. SSH Login

To connect to Rivanna, open a terminal on your local computer and run:

$ ssh -Y <computing_id>@login.hpc.virginia.edu

The -Y flag enables X11 forwarding for GUI applications.

VPN Required Off-Grounds: If you are connecting from outside the UVA network, you must first connect to the UVA Anywhere VPN before SSH will work.

VS Code (Recommended)

Most users connect through VS Code with the Remote – SSH extension, which provides a full editor, file browser, and integrated terminal on the cluster.

  1. Install the Remote – SSH extension in VS Code.
  2. Open the Command Palette (Ctrl+Shift+P) → Remote-SSH: Connect to Host.
  3. Enter: <computing_id>@login.hpc.virginia.edu
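Whether you connect with plain ssh or VS Code, a host entry in ~/.ssh/config saves retyping. A minimal sketch (the alias rivanna is an arbitrary example name; the two ForwardX11 options mirror the -Y flag above):

```
Host rivanna
    HostName login.hpc.virginia.edu
    User <computing_id>
    ForwardX11 yes
    ForwardX11Trusted yes
```

With this entry in place, `ssh rivanna` replaces the full command, and the same alias appears as a saved host in the Remote – SSH extension.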

3. Submitting Jobs

Rivanna is a shared computing cluster. To use its resources (GPUs, CPUs, memory), you must submit a job request to the scheduler (SLURM), which allocates resources on your behalf. There are two types of jobs:

Interactive Job (ijob)

A live terminal session on a compute node. You run commands directly and see output in real time. Best suited for:

  • Debugging
  • Testing code
  • Short experiments

Ends when you disconnect or the time limit expires.

Batch Job (sbatch)

A script submitted to the scheduler that runs when resources become available. Output is written to a log file. Best suited for:

  • Long training runs
  • Reproducible experiments
  • Overnight computation

Keeps running even if you log out.

3.1 Interactive Jobs

Request an interactive session with one A6000 GPU:

$ ijob -A mylab -p gpu --gres=gpu:a6000:1 -c 8 --mem=64G -t 4:00:00
Flag                 Meaning
-A mylab             Your allocation account
-p gpu               GPU partition
--gres=gpu:a6000:1   Request one A6000 GPU
-c 8                 8 CPU cores
--mem=64G            64 GB of memory
-t 4:00:00           Time limit (4 hours)

Available GPU types: a6000, a40, v100, a100, h200. Replace a6000 in --gres=gpu:a6000:1 with the desired type.

To end the session:

$ exit

3.2 Batch Jobs

Create a SLURM script (e.g. job.slurm):

#!/bin/bash
#SBATCH -A mylab
#SBATCH -p gpu
#SBATCH --gres=gpu:a6000:1
#SBATCH -c 8
#SBATCH --mem=64G
#SBATCH -t 8:00:00
#SBATCH -o output_%j.log
#SBATCH -J my_job

module load cuda python
python my_script.py

Submit the script:

$ sbatch job.slurm

Directive            Purpose
-o output_%j.log     Log file (%j is replaced by the job ID)
-J my_job            Job name (appears in squeue)
-e error_%j.log      Separate error log (optional)
--mail-type=END      Email notification on completion (optional)
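When the same job must be run over several settings, scripts with these directives can be generated in bulk instead of edited by hand. A hedged sketch that writes one script per learning rate with a heredoc (the account, partition, and my_script.py names mirror the example script above and are assumptions to adapt):

```shell
# Sketch: generate one SLURM script per learning rate.
# mkdir -p is idempotent, so the loop can be re-run safely.
mkdir -p jobs
for lr in 0.1 0.01 0.001; do
  cat > "jobs/job_lr${lr}.slurm" <<EOF
#!/bin/bash
#SBATCH -A mylab
#SBATCH -p gpu
#SBATCH --gres=gpu:a6000:1
#SBATCH -c 8
#SBATCH --mem=64G
#SBATCH -t 8:00:00
#SBATCH -o output_%j.log
#SBATCH -J sweep_lr${lr}

module load cuda python
python my_script.py --lr ${lr}
EOF
done
ls jobs
```

Each generated file is then submitted as usual, e.g. sbatch jobs/job_lr0.01.slurm.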

4. Monitoring & Managing Jobs

Command                       Description
squeue -u <computing_id>      List all your running and pending jobs
scancel <job_id>              Cancel a job
sacct -j <job_id>             View job history and resource usage
scontrol show job <job_id>    View full details of a specific job
nvidia-smi                    Check GPU status (run from a compute node)

Quick check: After submitting a batch job, run squeue -u <computing_id> to confirm it is queued or running. The ST column shows the job status: PD = pending, R = running.
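When many jobs are queued, the ST column can be tallied instead of read line by line. A sketch using awk; the sample output below is fabricated for illustration, and on the cluster you would pipe `squeue -u <computing_id>` in directly:

```shell
# Fabricated squeue output (header row plus three jobs) used so the
# example is self-contained off the cluster.
squeue_sample='             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
           1234567       gpu   my_job   ab1cd  R    1:23:45      1 udc-an33-1
           1234568       gpu   my_job   ab1cd PD       0:00      1 (Priority)
           1234569  standard    prep    ab1cd PD       0:00      1 (Resources)'

# NR > 1 skips the header row; $5 is the ST (state) column.
echo "$squeue_sample" | awk 'NR > 1 { count[$5]++ } END { for (s in count) print s, count[s] }'
```

On a login node, replace the echo with the real squeue command and keep the awk stage unchanged.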

5. Common Pitfalls

Missing shebang line: The first line of every SLURM script must be #!/bin/bash. Without it, sbatch will reject the script with: "This does not look like a batch script."
Leading spaces before #SBATCH: All #SBATCH directives must start at the beginning of the line with no leading whitespace. Indented directives are silently ignored by SLURM.
VPN not connected: SSH connections from off-grounds will hang or time out if the UVA Anywhere VPN is not active.
Verify your GPU: After landing on a compute node, run nvidia-smi to confirm the GPU has been allocated correctly.
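The first two pitfalls are easy to catch locally before submitting. A sketch of a pre-flight check (bad.slurm is a deliberately broken example written here only to demonstrate the checks):

```shell
# A deliberately broken script exhibiting both pitfalls: no shebang,
# and an indented #SBATCH directive on the second line.
cat > bad.slurm <<'EOF'
#SBATCH -A mylab
  #SBATCH -p gpu
python my_script.py
EOF

# Check 1: the very first line must be the #!/bin/bash shebang.
head -n 1 bad.slurm | grep -q '^#!/bin/bash' || echo "missing shebang"

# Check 2: #SBATCH directives must start in column one.
grep -q '^[[:space:]]\{1,\}#SBATCH' bad.slurm && echo "indented #SBATCH found"
```

Run against the example above, both checks fire; against a correct script, both stay silent.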

6. References