1. Request an HPC login here.
   - Please put “Accessing Great Lakes for the course ‘Music and AI’ (PAT 463/5663, Fall 2025)” in the request.

2. Log into your account in the web portal.
   - You must connect to the U-M VPN to access Great Lakes if you’re not connected to MWireless.

3. Select the application.
   - You’ll likely want to select “JupyterLab” for the assignments.

4. Specify the configuration of the instance (see below for suggestions).
   - If you’re unsure, select “python3.11-anaconda/2024.02” as your Python distribution.
   - Also, make sure to select “pat463f25_class” as the Slurm account so that the cost is charged to the right account. A command-line equivalent of these choices is sketched after these steps.

5. Connect to the launched instance.
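If you ever work from a shell instead of the web form, the same two choices can be made roughly as below. This is a minimal sketch, assuming the module and account names listed in the steps above; sacctmgr is the standard Slurm accounting command.

```bash
# Load the same Anaconda Python distribution suggested above.
module load python3.11-anaconda/2024.02

# List the Slurm accounts associated with your login; pat463f25_class should appear.
sacctmgr show assoc user=$USER format=account
```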

There are three GPU partitions available. spgpu usually has the shortest wait time; it has 224 NVIDIA A40 GPUs (48 GB VRAM each). On spgpu, you can request 4 CPU cores and 48 GB of RAM at no extra cost for each GPU you request, so spgpu is usually the most cost-effective option.
For each GPU requested, Great Lakes provides some CPU cores and RAM at no additional charge. Please use the following configuration for each partition to maximize the freely provided CPUs and memory (as of Winter 2025; see the current configurations):
| Partition | CPU cores | Memory (RAM) | GPU | GPU speed | GPU memory | GPUs available |
|---|---|---|---|---|---|---|
| spgpu | 4 | 48 GB | A40 | faster | 48 GB | 224 |
| gpu_mig40 | 8 | 124 GB | A100 | fastest | 40 GB | 16 |
| gpu | 20 | 90 GB | V100 | fast | 16 GB | 52 |
| standard* | 1 | 7 GB | - | - | - | - |
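For example, an interactive shell with the recommended spgpu configuration can be requested as below. This is a hedged sketch: the two-hour time limit is arbitrary, and the account name is the class account used elsewhere in this guide.

```bash
# Request 1 A40 GPU on spgpu plus the freely provided 4 CPU cores and 48 GB of RAM.
srun --account=pat463f25_class --partition=spgpu \
     --gres=gpu:1 --cpus-per-task=4 --mem=48G \
     --time=02:00:00 --pty /bin/bash
```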
Here is the cost for each partition (as of Winter 2025; see the current rates):
| Partition | Hourly cost (USD) | CPU-hours equivalent |
|---|---|---|
| spgpu | 0.11 | 7.33 |
| gpu_mig40 | 0.16 | 10.66 |
| gpu | 0.16 | 10.66 |
| standard* | 0.015 | 1 |
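As a rough worked example (assuming the rates are in USD and apply per hour of a job using the corresponding configuration above): a 3-hour spgpu session costs about 3 × 0.11 ≈ $0.33, equivalent to roughly 22 standard CPU-hours (3 × 7.33).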
There are two storage locations:
- /home/{UNIQUENAME}: 80 GB limit per user
- /scratch/pat463f25_class_root/pat463f25_class/{UNIQUENAME}: 10 TB limit for the class as a whole (note that data in the scratch space may be removed if not used for 30 days)

(Copied from https://sled-group.github.io/compute-guide/great-lakes)
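To see how much of each limit you are currently using, a generic check with du works (this is not a Great-Lakes-specific quota tool, and du can be slow on large directory trees):

```bash
# Replace {UNIQUENAME} with your own uniqname before running.
du -sh /home/{UNIQUENAME}
du -sh /scratch/pat463f25_class_root/pat463f25_class/{UNIQUENAME}
```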
Sometimes you want to quickly launch a node and ssh into it instead of launching a whole JupyterLab session or remote desktop. In that case, you can put tail -f /dev/null as the last command of your job, which will prevent the job from exiting without eating up CPU cycles. For example, your job script might be something like:
#!/bin/bash
# Print the name of the compute node this job landed on (written to the job's output file).
echo $SLURMD_NODENAME
# Keep the job alive indefinitely without consuming CPU cycles.
tail -f /dev/null
Then, either use the web interface or check the job’s output for the printed $SLURMD_NODENAME value to figure out the node name, and simply ssh into that node from a login node.
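Putting it together, here is a hedged sketch of the whole workflow, assuming the script above is saved as hold_node.sh (a hypothetical file name) and using the spgpu configuration from this guide:

```bash
# Submit the hold-open job to the class account on spgpu.
sbatch --account=pat463f25_class --partition=spgpu \
       --gres=gpu:1 --cpus-per-task=4 --mem=48G --time=02:00:00 \
       hold_node.sh

# The NODELIST column shows which node the job landed on; ssh to it from a login node.
squeue -u $USER
# ssh <node-name>
```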
If you prefer to interact with Great Lakes using the command line, you might find this cheat sheet helpful.