Getting Started with Great Lakes


Important Notes


Creating an ARC account

  1. Request an HPC login here

    Please put “Accessing Great Lakes for the course ‘Music and AI’ (PAT 463/5663, Fall 2025)” in the request.

    Account Request

  2. Log into your account in the web portal

    You must connect to the U-M VPN to access Great Lakes if you’re not on MWireless.

    Dashboard


Launching a Session

  1. Navigate to “Interactive Apps” in the web portal
  2. Select the application

    You’ll likely want to select “JupyterLab” for the assignments.

    Navigation -- Interactive Apps

  3. Specify the configuration of the instance (see below for suggestions)

    If you’re unsure, select “python3.11-anaconda/2024.02” as your Python distribution.

    Also, make sure to select “pat463f25_class” as the Slurm account so that the cost is charged to the right account.

    Configuration

  4. Connect to the launched instance

    Interactive Sessions
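If you prefer a terminal over the web portal, a roughly equivalent interactive session can be requested with Slurm’s salloc from a login node. This is a sketch: the resource values mirror the spgpu suggestion in the Configuration section, and the two-hour time limit is an example to adjust.

```shell
# Sketch: request an interactive GPU session from a Great Lakes login node.
# Resource values follow the spgpu suggestion; the time limit is an example.
salloc --partition=spgpu \
       --gpus=1 --cpus-per-gpu=4 --mem-per-gpu=48g \
       --time=02:00:00 \
       --account=pat463f25_class
```

Once the allocation is granted, Slurm drops you into a shell on the compute node.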


Configuration

There are three partitions with GPUs. spgpu usually has the shortest wait time and offers 224 NVIDIA A40 GPUs (48 GB VRAM each). On spgpu, each GPU you request comes with 4 CPUs and 48 GB of RAM at no extra cost, so the following is the most cost-effective configuration.

Configuration
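The same spgpu request can be written as Slurm directives at the top of a batch job script. This is a sketch: the time limit is a placeholder, and the account is the class account mentioned above.

```shell
#!/bin/bash
# Sketch of a job-script header matching the suggested spgpu configuration.
# The time limit is a placeholder; adjust it to your job.
#SBATCH --partition=spgpu
#SBATCH --gpus=1
#SBATCH --cpus-per-gpu=4
#SBATCH --mem-per-gpu=48g
#SBATCH --time=02:00:00
#SBATCH --account=pat463f25_class
```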

Other configurations

For each GPU requested, Great Lakes provides some CPU cores and RAM at no additional charge. Please use the following configurations for each partition to maximize the freely provided CPUs and memory (as of Winter 2025; see the current configurations):

Partition   CPU cores   Memory (RAM)   GPU    GPU speed   GPU memory   #GPUs available
spgpu       4           48 GB          A40    faster      48 GB        224
gpu_mig40   8           124 GB         A100   fastest     40 GB        16
gpu         20          90 GB          V100   fast        16 GB        52
standard*   1           7 GB           -      -           -            -

Here are the hourly rates for each partition (as of Winter 2025; see the current rates):

Partition   Hourly cost   CPU-hour equivalent
spgpu       $0.11         7.33
gpu_mig40   $0.16         10.66
gpu         $0.16         10.66
standard*   $0.015        1
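As a worked example, the charge for a session is simply hours × hourly rate, so a 3-hour JupyterLab session on spgpu at the Winter 2025 rate comes to about $0.33. A quick sketch of the arithmetic:

```shell
# Estimate the charge for a 3-hour spgpu session at the $0.11/hour rate.
hours=3
rate=0.11
awk -v h="$hours" -v r="$rate" 'BEGIN { printf "Estimated charge: $%.2f\n", h * r }'
# prints: Estimated charge: $0.33
```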

Storage


Tips

Copied from https://sled-group.github.io/compute-guide/great-lakes

Sometimes you want to quickly launch a node and SSH into it rather than launching a whole JupyterLab session or remote desktop. In that case, put tail -f /dev/null as the last command of your job; it prevents the job from exiting without eating up CPU cycles. For example, your job script might look something like:

#!/bin/bash
# Print the allocated node's hostname to the job's output log
echo $SLURMD_NODENAME
# Keep the job alive without burning CPU cycles
tail -f /dev/null

Then, either use the web interface or check the job’s output log for the echoed $SLURMD_NODENAME value to figure out the node name, and simply SSH into it from a login node.
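Putting it together, the submit-and-SSH workflow might look like the following from a login node. This is a sketch: the script name and node name are placeholders, and the partition/account are examples.

```shell
# Submit the keep-alive script above (saved here as keepalive.sh -- a placeholder name)
sbatch --partition=standard --account=pat463f25_class keepalive.sh
# List your jobs and the nodes they landed on
squeue --me --format="%.10i %.9P %.8T %N"
# SSH into the allocated node (gl1234 is a placeholder node name)
ssh gl1234
```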

If you prefer to interact with Great Lakes using the command line, you might find this cheat sheet helpful.

