Open OnDemand provides browser-based access to Rivanna. No VPN is required. The following walkthrough demonstrates how to launch a JupyterLab session with GPU resources.
Once a session is allocated, it continues running on the cluster independently of your local machine. You can safely close the browser and return to it later.
Navigate to the UVA HPC login page and click “Launch Open OnDemand,” or go directly to ood.hpc.virginia.edu.
You will land on the Open OnDemand dashboard. Click My Interactive Sessions in the top navigation bar.
Select JupyterLab under Servers.
Other interactive apps are available under Servers as well.
Configure your session using the form that appears. The screenshot shows one example configuration; adjust the values to match your workload. Click Launch when ready.

Available GPU types: a6000, a40, v100, a100, h200. If unsure, leave the GPU type as "default" and the scheduler will assign any available GPU.
| Option | Path | Size | Notes |
|---|---|---|---|
| SCRATCH | /scratch/<computing_id> | 10 TB | Fast, no backups, files purged after 90 days of inactivity |
| HOME | /home/<computing_id> | 200 GB | Weekly snapshots, slower I/O |
| PROJECT | Leased group storage | Varies | Shared research group storage, must be requested |
| STANDARD | Leased group storage | Varies | Shared internal/public storage, must be requested |
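Because scratch files are purged after 90 days of inactivity, it can be worth scanning for files at risk. A minimal sketch, not an official Rivanna tool; the helper names and the use of last-access time (st_atime) as the purge criterion are assumptions:

```python
import os
import time

PURGE_DAYS = 90  # scratch purge window from the table above

def days_since_access(path: str) -> float:
    """Days since `path` was last accessed (st_atime)."""
    return (time.time() - os.stat(path).st_atime) / 86400.0

def purge_risk(path: str, window: float = PURGE_DAYS) -> bool:
    """True when the file's last access is at or beyond the purge window."""
    return days_since_access(path) >= window
```

Walking /scratch/<computing_id> with os.walk and flagging files where purge_risk is True gives a quick inventory. Note that atime behavior depends on filesystem mount options, so treat this as a heuristic.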
Once the session is running, click Connect to Jupyter to open JupyterLab in a new tab.
To connect to Rivanna, open a terminal on your local computer and run:
$ ssh -Y <computing_id>@login.hpc.virginia.edu
The -Y flag enables X11 forwarding for GUI applications.
Most users connect through VS Code with the Remote – SSH extension, which provides a full editor, file browser, and integrated terminal on the cluster.
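Before using Remote – SSH, it helps to add a host entry to your local ~/.ssh/config so the cluster has a short alias (the alias rivanna here is an arbitrary choice):

```
Host rivanna
    HostName login.hpc.virginia.edu
    User <computing_id>
```

Remote-SSH reads this file, so rivanna will appear in its host list, and plain ssh rivanna works from any terminal.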
Open the Command Palette (Ctrl+Shift+P) → Remote-SSH: Connect to Host, and enter <computing_id>@login.hpc.virginia.edu.

Rivanna is a shared computing cluster. To use its resources (GPUs, CPUs, memory), you must submit a job request to the scheduler (SLURM), which allocates resources on your behalf. There are two types of jobs:
**Interactive job (ijob):** a live terminal session on a compute node. You run commands directly and see output in real time. Ends when you disconnect or time expires.

**Batch job (sbatch):** a script submitted to the scheduler. Runs when resources are available. Output is written to a log file. Keeps running even if you log out.

Request an interactive session with one A6000 GPU:
$ ijob -A mylab -p gpu --gres=gpu:a6000:1 -c 8 --mem=64G -t 4:00:00
| Flag | Meaning |
|---|---|
| -A mylab | Your allocation account |
| -p gpu | GPU partition |
| --gres=gpu:a6000:1 | Request 1 A6000 GPU |
| -c 8 | 8 CPU cores |
| --mem=64G | 64 GB of memory |
| -t 4:00:00 | Time limit (4 hours) |
Available GPU types: a6000, a40, v100, a100, h200. Replace a6000 in --gres=gpu:a6000:1 with the desired type.
To end the session:
$ exit
Create a SLURM script (e.g. job.slurm):
#!/bin/bash
#SBATCH -A mylab
#SBATCH -p gpu
#SBATCH --gres=gpu:a6000:1
#SBATCH -c 8
#SBATCH --mem=64G
#SBATCH -t 8:00:00
#SBATCH -o output_%j.log
#SBATCH -J my_job
module load cuda python
python my_script.py
Submit the script:
$ sbatch job.slurm
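As a sanity check, the script run by the job (my_script.py in the example above) can start by logging the environment SLURM gave it. The variables SLURM_JOB_ID, SLURM_CPUS_PER_TASK, and CUDA_VISIBLE_DEVICES are set by SLURM inside a job and are unset elsewhere; the helper below is an illustrative sketch, not part of the official workflow:

```python
import os

def slurm_summary(env=os.environ) -> dict:
    """Collect a few SLURM-provided variables; values are None outside a job."""
    keys = ["SLURM_JOB_ID", "SLURM_CPUS_PER_TASK", "CUDA_VISIBLE_DEVICES"]
    return {k: env.get(k) for k in keys}

if __name__ == "__main__":
    for key, value in slurm_summary().items():
        print(f"{key}={value}")
```

If CUDA_VISIBLE_DEVICES prints as None inside a GPU job, the GPU request likely did not take effect.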
| Directive | Purpose |
|---|---|
| -o output_%j.log | Log file (%j is replaced by the job ID) |
| -J my_job | Job name (appears in squeue) |
| -e error_%j.log | Separate error log (optional) |
| --mail-type=END | Email notification on completion (optional) |
| Command | Description |
|---|---|
| squeue -u <computing_id> | List all your running and pending jobs |
| scancel <job_id> | Cancel a job |
| sacct -j <job_id> | View job history and resource usage |
| scontrol show job <job_id> | View full details of a specific job |
| nvidia-smi | Check GPU status (run from a compute node) |
After submitting, run squeue -u <computing_id> to confirm the job is queued or running. The ST column shows the status: PD = pending, R = running.
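When scripting around squeue, the ST column can be tallied from its default output. A sketch, assuming squeue's default column order (JOBID, PARTITION, NAME, USER, ST, TIME, NODES, NODELIST(REASON)); the sample text below is fabricated for illustration:

```python
from collections import Counter

def count_states(squeue_output: str) -> Counter:
    """Tally the ST column (index 4) from default-format squeue output."""
    rows = squeue_output.strip().splitlines()[1:]  # drop the header line
    fields = (row.split() for row in rows)
    return Counter(parts[4] for parts in fields if len(parts) > 4)

sample = """\
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
101 gpu my_job ab1cd R 1:23 1 udc-an28-1
102 gpu my_job ab1cd PD 0:00 1 (Resources)
"""
print(count_states(sample))
```

On the cluster, the input would come from something like subprocess.run(["squeue", "-u", user], capture_output=True, text=True).stdout.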
Two common pitfalls:

- The script must begin with #!/bin/bash. Without it, sbatch rejects the script with the error "This does not look like a batch script."
- All #SBATCH directives must start at the beginning of the line with no leading whitespace. Indented directives are silently ignored by SLURM.
Once your session starts on a GPU node, run nvidia-smi to confirm the GPU has been allocated correctly.