SDSS-CC Resources: Overview

Sherlock

Overview:

The Sherlock HPC system is the University’s compute cluster, purchased and supported with seed funding from the Provost and available for use by all Stanford faculty and their research teams. Sherlock offers free compute cycles to Stanford researchers and also allows PIs to purchase dedicated resources. Sherlock is maintained by the Stanford Research Computing Center (SRCC); more information can be found at https://www.sherlock.stanford.edu. Sherlock’s collaborative, shared-resource approach enables scales of computing, varieties of available software, and levels of support that are not easily achieved by individual research groups or schools.

See also this onboarding slide deck for more information: SDSS-CfC_onboarding_20230419.pdf

Sherlock SERC Partition and Oak Storage:

In addition to general access to the public Sherlock compute resources (normal, gpu, dev, bigmem, and owners partitions), SDSS users may also submit jobs to the serc partition on Sherlock, and storage is available on SRCC’s Oak platform. More information on how to access Sherlock and the serc partition can be found in the Sherlock and Oak documentation.
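
As a brief, hedged example (the script name and PI SUNetID below are placeholders), a job can be directed to the serc partition and write its output to a group’s Oak directory like this:

    $ sbatch --partition=serc --output=/oak/stanford/schools/ees/<pi_sunetid>/%j.out my_job.sbatch
    $ squeue --user=$USER --partition=serc    # check the status of your jobs in the serc partition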

The Sherlock cluster includes a broad variety of computing resources. Its exact size and the specific resources available are constantly in flux as users subscribe to the system, nodes are added, and old nodes are swapped out for new ones. SDSS-CFC Sherlock resources include:

  • Traditional HPC “batch” computing, managed by SLURM (see the example submission script after this list)
  • Interactive sessions, including multi-core instances
  • serc partition:
    • 200 x 32 core (AMD Epyc 7502), 256 GB RAM
    • 8 x 128 core (AMD Epyc 7742), 1024 GB RAM
    • 24 x 24 core (Intel Skylake), 192/384 GB RAM
    • 10 x 8 NVIDIA A100 GPUs, 128 CPU cores (AMD Epyc 7662), 1024 GB RAM
    • 2 x 4 NVIDIA A100 GPUs, 64 CPU cores (AMD Epyc), 512 GB RAM
    • 2 x 4 NVIDIA Tesla V100 GPUs, 24 CPU cores (Intel Skylake), 192 GB RAM
  • Sherlock owners partition: Access to idle resources owned by other PI groups.
  • Public partitions: normal, gpu, bigmem, dev
  • Oak 1.35 PB storage: /oak/stanford/schools/ees/{PI SUNetID}
  • ssh (requires 2-factor auth):
    • $ ssh sherlock.stanford.edu
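
As a sketch of how these batch resources are typically requested, here is a minimal SLURM submission script; the partition, module, and file names are illustrative and should be adapted to your own group and software:

    #!/bin/bash
    #SBATCH --job-name=example          # job name shown in squeue output
    #SBATCH --partition=serc            # SDSS partition; normal, gpu, bigmem, or dev also work
    #SBATCH --ntasks=1                  # a single task
    #SBATCH --cpus-per-task=4           # four CPU cores for that task
    #SBATCH --mem=16G                   # memory for the job
    #SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
    #SBATCH --output=%x-%j.out          # output file named after the job name and ID

    module load python/3.9              # load whatever software the job needs; module names vary
    srun python my_analysis.py          # my_analysis.py stands in for your own code

Submit the script with "sbatch my_job.sbatch". For interactive, multi-core sessions, a shell on a compute node can be requested with, for example, salloc --partition=serc --cpus-per-task=4 --time=01:00:00 (Sherlock also provides the sh_dev helper for short sessions on the dev partition).
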
More Information:

SDSS-CC website: https://sdss-compute.stanford.edu