Getting Started with Great Lakes

Contents

Available Resources

You must connect to MWireless or U-M VPN to access Great Lakes.

Compute

Storage


Creating an ARC Account

  1. Request an HPC login here

    Please put “Accessing Great Lakes for the course ‘Generative AI for Music and Audio Creation’ (PAT 464/564, Winter 2026)” in the request.

    Account Request

  2. Log into your account in the web portal

    You must connect to MWireless or U-M VPN to access Great Lakes.

    Dashboard


Configuration


Great Lakes Configuration - CPU Only

There are three GPU partitions. spgpu usually has the shortest wait time; it offers 224 NVIDIA A40 GPUs (48 GB VRAM each). On spgpu, each GPU you request comes with 4 CPUs and 48 GB RAM at no extra cost, so the following is the most cost-effective option.


Great Lakes Configuration
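Wait times vary with demand. If you have terminal access to a login node, you can gauge the current load on a partition before choosing one. A minimal sketch using standard Slurm commands (these must be run on a Great Lakes login node, so they will not work on your own machine):

```shell
# Show node states (idle/mixed/allocated) for the spgpu partition.
sinfo -p spgpu -o "%P %a %D %T"

# Show pending jobs on spgpu; a long pending list suggests a longer wait.
squeue -p spgpu --state=PENDING | head -n 20
```

Many idle nodes and few pending jobs generally mean your job will start quickly.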

Other configurations

For each GPU requested, Great Lakes provides some CPU cores and RAM without additional charge. Please use the following configuration for each partition to maximize the freely provided CPUs and memory (as of Winter 2025; see the current configurations):

| Partition | CPU cores | Memory (RAM) | GPU | GPU speed | GPU memory | # GPUs available |
| --------- | --------- | ------------ | ---- | --------- | ---------- | ---------------- |
| spgpu | 4 | 48 GB | A40 | faster | 48 GB | 224 |
| gpu_mig40 | 8 | 124 GB | A100 | fastest | 40 GB | 16 |
| gpu | 20 | 90 GB | V100 | fast | 16 GB | 52 |
| standard* | 1 | 7 GB | - | - | - | - |

Here is the hourly cost for each partition (as of Winter 2025; see the current rates):

| Partition | Hourly cost (USD) | CPU-hours equivalent |
| --------- | ----------------- | -------------------- |
| spgpu | 0.11 | 7.33 |
| gpu_mig40 | 0.16 | 10.67 |
| gpu | 0.16 | 10.67 |
| standard* | 0.015 | 1 |
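As a quick sanity check on these rates, you can estimate the charge for a session yourself: a 2-hour job on spgpu costs 2 × $0.11 = $0.22. A one-line sketch of the arithmetic:

```shell
# Estimate the charge for a 2-hour spgpu session (Winter 2025 rate: $0.11/hour).
awk -v hours=2 -v rate=0.11 'BEGIN { printf "$%.2f\n", hours * rate }'
# prints $0.22
```

Swap in the rate and duration for other partitions to budget longer jobs.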

Launching a Session

Launching an Interactive Session

  1. Navigate to “Interactive Apps” on the web portal
  2. Select the application

    You’ll likely want to select “JupyterLab” for the assignments.

    Navigation -- Interactive Apps

  3. Specify the configuration of the instance (see above for suggestions)

    If you’re unsure, select “python3.11-anaconda/2024.02” as your Python distribution.

    Also, make sure to select “pat464564w26_class” as the Slurm account so that the cost is charged to the right account.

    Configuration

  4. Connect to the launched instance

    Interactive Sessions

  5. Make sure to delete the job if your workload is finished earlier

Launching an SSH Session

This will create a persistent compute node that keeps running until its time limit is reached, even if you disconnect. Please make sure to kill the job once your workload is finished so that the resources are released.

  1. Navigate to “Job Composer” on the web portal for Great Lakes

    Job Composer

  2. Click “New Job > From Default Template”

    New Job

  3. Click “Open Editor”

    Open Editor

  4. Replace everything in the editor with the following and click “Save”

    This will request 1 GPU with 4 CPUs and 48 GB RAM for 2 hours on Great Lakes. Adjust the time limit (Slurm accepts formats such as HH:MM:SS or D-HH:MM:SS) if the wait time is too long. See above for configuration suggestions.

    #!/bin/bash
    # Charge the class account and use the spgpu (A40) partition
    #SBATCH --account=pat464564w26_class
    #SBATCH --partition=spgpu
    # 4 CPUs and 48 GB RAM are provided at no extra cost per GPU on spgpu
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=48G
    # Request 1 GPU for 2 hours
    #SBATCH --gres=gpu:1
    #SBATCH --time=2:00:00
    # Print the assigned node name to the job's output file
    echo $SLURMD_NODENAME
    # Keep the job alive until the time limit so you can SSH into it
    tail -f /dev/null
    

    Script Editor

  5. In the “Jobs” tab, select the created job and click “Submit”

    Submit Job

  6. In the panel on the right, click the “slurm-{ID}.out” file

    Job Details

  7. Copy the node name

    Job Outputs

  8. To access the compute node, you will need to first SSH into the login node

    ssh {UNIQNAME}@greatlakes.arc-ts.umich.edu
    

    And then you can SSH into the compute node

    ssh {NODENAME}.arc-ts.umich.edu
    
    Alternatively, add the following lines to the SSH configuration file (usually at ~/.ssh/config) so that a single ssh command jumps through the login node:

    For Lighthouse:

    Host lighthouse-login
       HostName lighthouse.arc-ts.umich.edu
       User {UNIQNAME}
    
    Host lighthouse
       HostName lh2300.arc-ts.umich.edu
       User {UNIQNAME}
       ProxyJump lighthouse-login
    

    For Great Lakes:

    Host greatlakes-login
       HostName greatlakes.arc-ts.umich.edu
       User {UNIQNAME}
    
    Host gl*.arc-ts.umich.edu
       User {UNIQNAME}
       ProxyJump greatlakes-login
    
  9. Make sure to delete the job after your workload is finished

    Delete Job
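If you prefer a terminal over the Job Composer, the same submit/monitor/cancel cycle can be done with standard Slurm commands from a Great Lakes login node. A sketch (the script name and job ID below are illustrative; these commands only work on the cluster):

```shell
# Submit the job script created above; sbatch prints the assigned job ID.
sbatch main_job.sh

# List your queued and running jobs to find the job ID and node name.
squeue -u "$USER"

# Cancel the job once your workload is finished (replace 12345678 with your job ID).
scancel 12345678
```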

Creating a Job Template

You can create a template so that you don’t have to edit the job script every time.

  1. Navigate to the “Templates” tab Templates

  2. Click “New Template”

    Create Template

  3. Enter the template name and click “Save”

    New Template

  4. Select the created template and click “View Files”

    View Files

  5. Click the triple dots next to “main_job.sh” and click “Edit”

    Edit Job Script

  6. Replace everything in the editor with the following and click “Save”

    This will request 1 GPU with 4 CPUs and 48 GB RAM for 2 hours on Great Lakes. Adjust the time limit (Slurm accepts formats such as HH:MM:SS or D-HH:MM:SS) if the wait time is too long. See above for configuration suggestions.

    #!/bin/bash
    # Charge the class account and use the spgpu (A40) partition
    #SBATCH --account=pat464564w26_class
    #SBATCH --partition=spgpu
    # 4 CPUs and 48 GB RAM are provided at no extra cost per GPU on spgpu
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=48G
    # Request 1 GPU for 2 hours
    #SBATCH --gres=gpu:1
    #SBATCH --time=2:00:00
    # Print the assigned node name to the job's output file
    echo $SLURMD_NODENAME
    # Keep the job alive until the time limit so you can SSH into it
    tail -f /dev/null
    

    Script Editor

  7. Return to the previous page and click “Create New Job”

    Create New Job

Connecting via SSH on VS Code

  1. Add the following lines to the SSH configuration file (usually at ~/.ssh/config):

    Host greatlakes-login
       HostName greatlakes.arc-ts.umich.edu
       User {UNIQNAME}
    
    Host gl*.arc-ts.umich.edu
       User {UNIQNAME}
       ProxyJump greatlakes-login
    

    On Great Lakes, the node name will change when you launch a new job as you’ll be assigned to one of the many compute nodes on the cluster.

  2. Click “Open Remote Window” at the bottom left of the VS Code window

    VS Code - Open Remote Window

  3. Click “Connect to Host”

    VS Code - Connect to Host

  4. Click “greatlakes” or “lighthouse”

    VS Code - Great Lakes

  5. Click the highlighted “details” link in the pop-up box at the bottom right

    Ignore the pop-up password prompt at the top

    VS Code - Initialization

  6. Enter your password at the terminal tab and hit “Enter”

    Ignore the pop-up password prompt at the top

    VS Code - Initialization - Password Prompt

  7. Authenticate through Duo for two-factor authentication

    VS Code - Initialization - Complete

  8. Click the pop-up password prompt at the top and hit “Enter” 2-3 times until the pop-up prompt box at the bottom right changes from “Initializing VS Code Server” to “Setting up SSH tunnel”

  9. Click the highlighted “details” link in the pop-up box at the bottom right

    Ignore the pop-up password prompt at the top

    VS Code - SSH Tunnel Setup

  10. Enter your password at the terminal tab and hit “Enter”

    Ignore the pop-up password prompt at the top

    VS Code - SSH Tunnel Setup - Password Prompt

  11. Authenticate through Duo for two-factor authentication

    VS Code - SSH Tunnel Setup - Complete

  12. It should say “Connected to SSH Host - Please do not close this terminal”

    VS Code - Connected to SSH Host

  13. Click the pop-up password prompt at the top and hit “Enter” 2-3 times until the pop-up box at the bottom right disappears (you should also see “SSH: greatlakes” at the bottom left)

    VS Code - SSH Great Lakes

  14. Now, you may start coding in VS Code! You can click the “+” sign at the right to open a terminal to verify the connection.

    VS Code - New Terminal

    Once you click it, you should see that you’re connected to the compute node:

    VS Code - New Terminal Opened
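In that terminal, you can confirm you landed on a GPU compute node (assuming you requested a GPU partition such as spgpu; the node name shown is illustrative):

```shell
hostname      # should print the compute node name, e.g. gl1520
nvidia-smi    # should list the allocated GPU and its memory
```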


Setting Up Your Own Base Conda

Follow the instructions below if you want to use your own base Conda instead of the default one installed on the cluster.

  1. Follow the official instructions to install Conda
  2. Update the Slurm script as follows:

    #!/bin/bash
    #SBATCH --account=aimusic_project
    #SBATCH --partition=aimusic_project
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=62G
    #SBATCH --gres=gpu:1
    
    # Point the Conda variables at your own installation (replace {UNIQNAME})
    CONDA_EXE=/home/{UNIQNAME}/miniconda3/bin/conda
    CONDA_PREFIX=/home/{UNIQNAME}/miniconda3
    CONDA_PYTHON_EXE=/home/{UNIQNAME}/miniconda3/bin/python
    # Put your Miniconda first on PATH so its conda and python are found first
    export PATH="/home/{UNIQNAME}/miniconda3/bin:/home/{UNIQNAME}/miniconda3/condabin:$PATH"
    
    # Print the assigned node name, then keep the job alive until the time limit
    echo $SLURMD_NODENAME
    tail -f /dev/null
    
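After the job starts and you SSH into the node, you can check that your own Conda (assumed here to live at ~/miniconda3, matching the script above) is the one being picked up:

```shell
# Prepend your Miniconda to PATH for this shell, then check which conda wins.
export PATH="$HOME/miniconda3/bin:$HOME/miniconda3/condabin:$PATH"
which conda       # should print /home/{UNIQNAME}/miniconda3/bin/conda
conda --version
```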

Useful Resources

