Getting Started with Great Lakes

Important Notes


Creating an ARC account

  1. Request an HPC login here

    Please include “Accessing Great Lakes for research projects” in the request.

    [Screenshot: Account Request]

  2. Log in to your account on the web portal

    You must connect to the U-M VPN to access Great Lakes if you’re not connected to MWireless.

    [Screenshot: Dashboard]


Configuration

Recommended Configuration for CPU-only Workloads

[Screenshot: Great Lakes configuration, CPU only]
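
If you prefer the command line over the web portal form, a comparable request can be made with Slurm’s salloc. This is only a sketch: the account name is a placeholder, the walltime is arbitrary, and the 1 CPU / 7 GB values mirror the standard-partition row in the table under “Other Configurations” below.

    # Sketch: interactive CPU-only session on the standard partition
    # (replace example_account with your own Slurm account)
    salloc --account=example_account --partition=standard \
           --cpus-per-task=1 --mem=7G --time=01:00:00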

Recommended Configuration for GPU Workloads

There are three partitions that offer GPUs. spgpu, which has 224 NVIDIA A40 GPUs (48 GB VRAM each), usually has the shortest wait time. For each GPU requested on spgpu, you can also request 4 CPUs and 48 GB of RAM at no extra cost, so the following is the most cost-effective option.

[Screenshot: Great Lakes configuration for GPU workloads]
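
The same spgpu configuration can also be requested from a login-node shell. This is a sketch: the account name is a placeholder and the walltime is arbitrary.

    # Sketch: interactive session with 1 A40 GPU on spgpu, plus the
    # 4 CPUs and 48 GB of RAM included at no extra cost per GPU
    # (replace example_account with your own Slurm account)
    salloc --account=example_account --partition=spgpu \
           --gres=gpu:1 --cpus-per-task=4 --mem=48G --time=02:00:00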

Other Configurations

For each GPU requested, Great Lakes provides some CPU cores and RAM at no additional charge. Please use the following configurations to maximize the freely provided CPUs and memory on each partition (as of Winter 2025; see the current configurations):

Partition   CPU cores   Memory (RAM)   GPU    GPU speed   GPU memory   GPUs available
spgpu       4           48 GB          A40    faster      48 GB        224
gpu_mig40   8           124 GB         A100   fastest     40 GB        16
gpu         20          90 GB          V100   fast        16 GB        52
standard*   1           7 GB           -      -           -            -

Here is the hourly cost for each partition (as of Winter 2025; see the current rates). The last column gives the equivalent number of standard-partition CPU-hours: for example, one hour on spgpu (0.11) costs the same as 7.33 hours of a single standard CPU (0.015 each).

Partition   Hourly cost (USD)   CPU-hour equivalent
spgpu       0.11                7.33
gpu_mig40   0.16                10.66
gpu         0.16                10.66
standard*   0.015               1

Storage


Launching a Session

Launching an Interactive Session

  1. Navigate to “Interactive Apps” on the web portal for Great Lakes or Lighthouse
  2. Select the application (here we use JupyterLab as an example)

    [Screenshot: navigating to Interactive Apps]

  3. Specify the configuration of the instance (see below for suggestions)

    [Screenshot: example configuration for a CPU-only workload on Great Lakes]

  4. Connect to the launched instance

    [Screenshot: Interactive Sessions]

  5. Make sure to delete the job if your workload finishes before the session ends

Launching an SSH Session

This will create a persistent session on a compute node that won’t end automatically. Please make sure to kill the job after your workload finishes so that the resources are released.

  1. Navigate to “Job Composer” on the web portal for Great Lakes or Lighthouse

    [Screenshot: Job Composer]

  2. Click “New Job > From Default Template”

    [Screenshot: New Job]

  3. Click “Open Editor”

    [Screenshot: Open Editor]

  4. Replace everything in the editor with the following and click “Save”

    This defaults to requesting 1 GPU with 8 CPUs and 62 GB RAM on Lighthouse. See below for configuration suggestions.

    #!/bin/bash
    #SBATCH --account=aimusic_project
    #SBATCH --partition=aimusic_project
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=62G
    #SBATCH --gres=gpu:1
    # Print the compute node's hostname to the slurm-{ID}.out file
    echo $SLURMD_NODENAME
    # Keep the job alive until it is deleted manually
    tail -f /dev/null
    

    [Screenshot: Script Editor]

    Example configuration for GPU workloads on Great Lakes:

    #!/bin/bash
    #SBATCH --account=hwdong0
    #SBATCH --partition=spgpu
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=48G
    #SBATCH --gres=gpu:1
    # Print the compute node's hostname to the slurm-{ID}.out file
    echo $SLURMD_NODENAME
    # Keep the job alive until it is deleted manually
    tail -f /dev/null
    
  5. In the “Jobs” tab, select the created job and click “Submit”

    [Screenshot: Submit Job]

  6. At the right panel, click the “slurm-{ID}.out” file

    [Screenshot: Job Details]

  7. Copy the node name

    [Screenshot: Job Outputs]

  8. To access the compute node, first SSH into the login node (shown here for Lighthouse; for Great Lakes, use greatlakes.arc-ts.umich.edu)

    ssh {UNIQNAME}@lighthouse.arc-ts.umich.edu
    

    And then you can SSH into the compute node

    ssh {NODENAME}.arc-ts.umich.edu
    
  9. Make sure to delete the job after your workload is finished

    [Screenshot: Delete Job]
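
If you prefer the command line, the job can also be monitored and deleted from the login node with the standard Slurm commands (the job ID is the one shown in the “Jobs” tab or reported by squeue):

    # List your running and pending jobs
    squeue -u $USER

    # Cancel the persistent job once you are done so the node is released
    scancel {JOBID}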

Creating a Job Template

You can create a template so that you don’t have to edit the job script from scratch every time.

  1. Navigate to the “Templates” tab

    [Screenshot: Templates]

  2. Click “New Template”

    [Screenshot: Create Template]

  3. Enter the template name and click “Save”

    [Screenshot: New Template]

  4. Select the created template and click “View Files”

    [Screenshot: View Files]

  5. Click the three-dot menu next to “main_job.sh” and click “Edit”

    [Screenshot: Edit Job Script]

  6. Replace everything in the editor with the following and click “Save”

    This defaults to requesting 1 GPU with 8 CPUs and 62 GB RAM on Lighthouse. See below for configuration suggestions.

    #!/bin/bash
    #SBATCH --account=aimusic_project
    #SBATCH --partition=aimusic_project
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=62G
    #SBATCH --gres=gpu:1
    # Print the compute node's hostname to the slurm-{ID}.out file
    echo $SLURMD_NODENAME
    # Keep the job alive until it is deleted manually
    tail -f /dev/null
    

    [Screenshot: Script Editor]

    Example configuration for GPU workloads on Great Lakes:

    #!/bin/bash
    #SBATCH --account=hwdong0
    #SBATCH --partition=spgpu
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=48G
    #SBATCH --gres=gpu:1
    # Print the compute node's hostname to the slurm-{ID}.out file
    echo $SLURMD_NODENAME
    # Keep the job alive until it is deleted manually
    tail -f /dev/null
    
  7. Return to the previous page and click “Create New Job”

    [Screenshot: Create New Job]
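
Alternatively, if you keep a copy of the job script on the cluster, you can skip the web portal and submit it directly from a login-node shell with sbatch (shown here with the template’s main_job.sh):

    # Submit the saved job script to the scheduler
    sbatch main_job.sh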


Useful Resources

