
ANSYS Lumerical

Overview

Lumerical (now part of Ansys) is commercial, licensed software.

Due to various system configurations, Lumerical – or at least some elements of it, such as the GUI components – may not run natively on Sherlock. The recommended approach to running Lumerical on Sherlock is therefore to create an Apptainer (formerly Singularity) container. In addition to the ANSYS software, VNC (or a similar GUI-friendly protocol) can be bundled into the container to facilitate a better GUI and visualization experience.

For details on how to configure Lumerical licenses, run Lumerical jobs, run Lumerical with MPI mode, and other operational details, see the ANSYS/Lumerical documentation.

Prerequisites

  • Active Lumerical/Ansys license: Lumerical is licensed software. It is always a good idea to check Stanford licensing and/or the Stanford Software store to see if licenses are available, but you will likely need to provide your own license.
  • License Server Access: For a floating network license (FNL), access to a license server is required (you will need the server hostname and port number)
  • Sherlock account
  • Lumerical installation media (optional, to build a new container)

Quick Start on Sherlock

One or more Lumerical containers are available on Sherlock in the directory,

/home/groups/sh_support/share/containers/lumerical/

This directory contains at least one container file, e.g. lumerical_v2025R24.sif – implying Lumerical version 2025-R2.4 – an Apptainer definition *.def file, and one or more test jobs. The implementation of Lumerical on Sherlock (availability of containers) is likely in flux, so the naming conventions and variety of container *.sif files, definition *.def files, and test jobs will vary.

The container(s) can be run directly from this location, but it may be necessary to copy the test job to a working directory, as Lumerical may interpret the input file location as a working directory. Workflows will vary, depending on the nature of the project and personal preferences; the basic sequence to run the container is:

  • Request compute resources – interactive (sh_dev or salloc), or submit a job to the scheduler (sbatch)
  • Create a working directory
  • Copy the (test) job to the working directory
  • Either:
    • Run the job using apptainer exec or apptainer run, depending on personal preference, job details, and container configuration.
    • Launch an apptainer shell, then run the job from the container shell

Both run options are essentially equivalent; the first can be scripted into a batch job, while the second (shell) option is more interactive.

QuickStep 1: Request resources

For this example, select a minimal, single-CPU job from the dev partition

salloc -p dev --ntasks=1 --cpus-per-task=1

QuickStep 2: Create a working directory

For this example, we assume the working directory does not yet exist. We create our working directory on $SCRATCH for fast I/O performance

mkdir -p $SCRATCH/lumerical_test
cd $SCRATCH/lumerical_test

QuickStep 3: Copy the test job to the working directory

cp -r /home/groups/sh_support/share/containers/lumerical/test_job/ ./
ls -lh test_job

total 1.6M
-rw-r--r-- 1 ****** ###### 1.6M Mar 24 07:18 solver_far_field.fsp
-rw-r--r-- 1 ****** ###### 3.6K Mar 24 07:18 solver_far_field_p0.log

QuickStep 4: Run the test job

Note that the license information must be passed to the container: either by using --bind to configure a license file (i.e., to “bind” a local license file over a file in the container), by configuring a local license file on the host machine that the container will read, or by setting an environment variable.

Due to conflicts with the standard environment configuration on Sherlock, it might be necessary to run the container in --cleanenv mode, so the container will not inherit the Sherlock environment.

apptainer shell --cleanenv --writable-tmpfs --env ANSYSLMD_LICENSE_FILE={port}@{License Server} /home/groups/sh_support/share/containers/lumerical/lumerical_v2025R24.sif

This will launch the container; note the Apptainer> prompt. Now, run the test job:

Apptainer> fdtd-engine test_job/solver_far_field.fsp 
24% initialized.
33% initialized.
57% initialized.
66% initialized.
90% initialized.
99% initialized.
100% initialized.
11.4394% complete. Elapsed simulation time: 9.15156e-15 secs. Max time remaining: 1 secs. Auto Shutoff: 1
26.692% complete. Elapsed simulation time: 2.13536e-14 secs. Max time remaining: 1 secs. Auto Shutoff: 1
57.1972% complete. Elapsed simulation time: 4.57578e-14 secs. Max time remaining: 1 secs. Auto Shutoff: 1
70.5433% complete. Elapsed simulation time: 5.64346e-14 secs. Max time remaining: 0 secs. Auto Shutoff: 8.73102e-08

Or in a single step:

apptainer exec --cleanenv --writable-tmpfs --env ANSYSLMD_LICENSE_FILE={port}@{License Server}  /home/groups/sh_support/share/containers/lumerical/lumerical_v2025R24.sif fdtd-engine test_job/solver_far_field.fsp

Notes about the Apptainer options:

  • These options should be more or less identical for the Apptainer shell, exec, and run sub-commands.
  • --cleanenv: Launch the container with a “clean” environment – the container does not inherit environment variables from the host
  • --writable-tmpfs: Allows the container to write to a small temporary filesystem. This may not be necessary, but gives the container added flexibility if programs write temporary files, etc. It can also be helpful for debugging a container in shell mode.
  • --env: Specifically, set the ANSYSLMD_LICENSE_FILE variable inside the container, to correctly define the license. Generally, this is a comma-separated list of {var}={val} pairs, e.g. --env var1=val1,var2=val2.
    • NOTE: Consult the Apptainer documentation for alternative methods to set environment variables – for example, --env-file, or setting APPTAINERENV_{var_name} variables on the host.
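As a concrete sketch of the two environment-variable approaches, the following sets the same license variable both ways. The license server address is a made-up placeholder, and the apptainer commands are printed rather than executed:

```shell
# Placeholder license server; substitute your own {port}@{server}.
LIC="27000@license.example.edu"

# Method 1: pass the variable explicitly at runtime with --env.
# (Command printed here rather than executed.)
echo "apptainer exec --cleanenv --env ANSYSLMD_LICENSE_FILE=${LIC} lumerical.sif fdtd-engine job.fsp"

# Method 2: set an APPTAINERENV_-prefixed variable on the host; Apptainer
# strips the prefix and injects ANSYSLMD_LICENSE_FILE into the container.
export APPTAINERENV_ANSYSLMD_LICENSE_FILE="${LIC}"
echo "apptainer exec --cleanenv lumerical.sif fdtd-engine job.fsp"
```

Either method yields the same variable inside the container; the APPTAINERENV_ form is convenient when the same license server is used for every run.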

QuickStep 5: A Batch script

Ideally, jobs – especially large jobs – will be scripted and submitted to the scheduler. Note that this is just a sample script; for a real script, directives like --ntasks, --cpus-per-task, --mem-per-cpu, --partition, and --time should be adjusted to the scope of the job and the resources available to the user.

#!/bin/bash
#
#SBATCH --job-name=lumerical_test
#SBATCH --output=lum_test_%j.out
#SBATCH --error=lum_test_%j.err
#SBATCH --partition=serc
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=4g
#SBATCH --time=01:00:00
#
module purge
#
LUM_DIR=/home/groups/sh_support/share/containers/lumerical/
LUM_SIF=${LUM_DIR}/lumerical_v2025R24.sif
LIC_FILE={lic_port_num}@{lic_server_address}

mkdir -p $SCRATCH/lumerical_test
cd $SCRATCH/lumerical_test
#
cp -r ${LUM_DIR}/test_job ./
rm test_job/*.log
#
apptainer exec --cleanenv --writable-tmpfs --env ANSYSLMD_LICENSE_FILE=${LIC_FILE}  ${LUM_SIF} fdtd-engine test_job/solver_far_field.fsp
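For batch scripts that run several inputs, the long apptainer invocation can be wrapped in a small helper function. This is a sketch: run_lum and DRY_RUN are our own names (not Lumerical or Apptainer features), and the license server is a placeholder.

```shell
# Helper to assemble and (optionally) run the container command.
DRY_RUN=1   # set to 0 to actually execute

run_lum() {
    local sif="$1" lic="$2" input="$3"
    local cmd="apptainer exec --cleanenv --writable-tmpfs --env ANSYSLMD_LICENSE_FILE=${lic} ${sif} fdtd-engine ${input}"
    if [ "${DRY_RUN}" = "1" ]; then
        echo "${cmd}"    # dry-run: print the command only
    else
        ${cmd}           # real run: execute it
    fi
}

run_lum lumerical.sif "27000@license.example.edu" test_job/solver_far_field.fsp
```

The dry-run mode is also handy for verifying the assembled command in the Slurm output file before committing to a long run.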

Build a container

It might be preferable to build a customized container, for example to

  • Embed licensing data, to simplify runtime syntax
  • Add additional SW to the container
  • Generally optimize the Lumerical installation (add or remove executables), possibly to reduce the size of a container
  • Start with a different OS base, eg. Rocky Linux vs Ubuntu
  • Use a different MPI, perhaps one that is more compatible with a given HPC environment
  • Some other reason we haven’t thought of
  • Personal challenge or punishment

The process to build a container is detailed below, but this should be interpreted as a (possibly long-winded) framework and guide. What follows is almost certainly not cut-and-paste ready – and really, it is not intended to be. Both ANSYS and Apptainer documentation should be consulted during this process.

Step 1: Obtain the Lumerical Installer

Download the appropriate Lumerical installer for Linux from the Ansys/Lumerical download portal. Transfer this file to your Sherlock home or group directory, e.g.

# Example transfer from local machine
scp LUMERICAL_2025R2.04_LINX64.tar <SUNetID>@dtn.sherlock.stanford.edu:/home/groups/{pi_sunet}/{my_sunetid}

Step 2: Obtain or Create an Apptainer Definition File

Lumerical provides Dockerfile container definitions for the Docker container system. While Docker is a very popular system, arguably pioneered the concept of modern containerization, and is nominally the industry standard in enterprise circles, it is effectively not compatible with HPC – at least not for regular users – because it requires root access to build and run containers. Most HPC systems today use Apptainer or Podman; either will work on Sherlock, but Apptainer is the preferred and better supported option of the two.

Three basic approaches to building an Apptainer container for your ANSYS software include,

  1. Build a Docker container, from one of the provided Dockerfiles, on a different system. Then build (or “compose”) an Apptainer container using the Docker container as a base or Bootstrap:, e.g.
    Bootstrap: localimage
    From: /path/to/docker_image
    
  2. Create an Apptainer definition file, e.g. lumerical.def, from scratch, by following the regular Lumerical installation instructions
  3. Convert a Dockerfile to an Apptainer definition file.

The third option is really a variation on the second, principally in the sense that there is not currently a “proper” way to dependably convert a Dockerfile to an Apptainer *.def file. That said, the syntax is not terribly difficult to follow, and modern LLM (“AI”) tools – e.g. Claude, Gemini, or ChatGPT – will do most of the heavy lifting if asked.
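For hand conversion, the main Dockerfile directives map onto Apptainer definition-file sections roughly as follows (a rule-of-thumb sketch; consult the Apptainer definition-file documentation for details):

```text
FROM <image>        ->  Bootstrap: docker / From: <image>
COPY <src> <dst>    ->  %files
RUN <command>       ->  %post
ENV VAR=value       ->  %environment  (export VAR=value)
ARG name=default    ->  %arguments   (name=default, referenced as {{ name }})
ENTRYPOINT / CMD    ->  %runscript
LABEL key=value     ->  %labels
```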

A Lumerical container will inevitably require numerous dependency libraries to be installed, so we recommend a combination of 2 and 3. A sample, and likely working, definition file (e.g., lumerical.def) for building the container:

Bootstrap: docker
From: ubuntu:latest
#From: ubuntu:22.04

# Build Notes:
# Since there is some nuance, like passing --build-arg, etc. I like to stash build-time notes
#  (an exact build command...) here.
# apptainer build --build-arg license_server=port@server lumerical.sif lumerical.def 

%setup
    mkdir ${APPTAINER_ROOTFS}/LUMERICAL

%arguments
    license_server=

%files
    #rpm_install_files/Lumerical*.rpm /LUMERICAL
    Lumerical-2025-R2.4-4336-b98875243d9 /LUMERICAL/

%post
    # Copyright (c) 2003-2025, Ansys, Inc. All rights reserved.
    #
    # Unauthorized use, distribution, or duplication is prohibited.
    # This product is subject to U.S. laws governing export and re-export.
    #
    # For full Legal Notice, see license.txt
    #
    ##########################
    # General container parts:
    ###########################
    # Define options here (maybe tie with %environment)
    INSTALL_VNC=1
    #
    # Get latest
    apt-get update
    apt-get upgrade -y
    
    export DEBIAN_FRONTEND=noninteractive 
    export TZ=America/Los_Angeles

    apt-get install -y vim nano
    apt-get install -y build-essential
# NOTE: Lumerical uses QT, but it packages its own version, so we do *not* want to install it here.
#    apt-get install -y qtcreator qtbase5-dev qt5-qmake cmake
#    apt-get install -y libqt5gui5

    
    # Install extra packages. Some of these are probably ANSYS/Lumerical specific, so our division of container
    #  parts is not very complete.
    #########
    # if we use their extract-rpm.sh, do we need alien? We do. Or at least some parts. Install alien, then remove it during cleanup.
    apt-get install -y alien
    #apt-get install -y rpm2cpio cpio  # eventually, we will figure out which alien libraries we actually need...
    #
    apt-get install -y freeglut3-dev libxslt-dev libxcursor1 wget sudo
    #apt-get install -y --allow-downgrades cpio=2.13+dfsg-7
    #
    # from lito (we will trim the redundant package)
    apt-get -y install libxcb-xinerama0 libxcb-cursor0 libxcursor-dev
    apt-get -y install libxcb-randr0-dev libxcb-xtest0-dev libxcb-xinerama0-dev libxcb-shape0-dev libxcb-xkb-dev

    #apt-get -y install libxcb-xinerama0 libxcb-cursor0 libxcursor-dev
    #apt-get -y  install freeglut3-dev
    #
    # then, hold to see if we really need to do this (we do...):
    #cd /usr/lib/x86_64-linux-gnu
    ln -s /usr/lib/x86_64-linux-gnu/libglut.so.3.12.0 /usr/lib/x86_64-linux-gnu/libglut.so.3 

    
    # Add lumerical user
    useradd --shell /bin/bash --create-home lumerical

    # Installation of Open MPI
    apt-get install -y openmpi-bin

    # Installation of Lumerical RPM
    # cd /tmp
    # yoder: move this to a more specialized location.
    #cd /LUMERICAL
    cd /LUMERICAL/Lumerical-2025-R2.4-4336-b98875243d9
    #
    # yoder: instead of alien, use their extract-rpm.sh method/script. Both should work.
    # But we still need to install alien -- or at least parts of that package?
    export LUMERICAL_DIR=/opt/lumerical/v252
    printf "LUMERICAL_DIR: ${LUMERICAL_DIR}\n"
    echo "export LUMERICAL_DIR=${LUMERICAL_DIR}" >> /envfile
    ./extract-rpm.sh
    mv opt/* /opt/
    
    #export LUMERICAL_DIR=$(eval rpm -qlp ./Lumerical*.rpm | grep VERSION$ | cut -f1-4 -d"/")
    #echo "export LUMERICAL_DIR=${LUMERICAL_DIR}" >> /envfile
    #
    #alien -i Lumerical*.rpm --scripts
    # yoder: for now, keep this...
    # looks like extract-rpm.sh cleans up (deletes the .rpm files)
    #rm -rf Lumerical*.rpm
    
    # Update PATH
    echo "export PATH=${LUMERICAL_DIR}/bin:${LUMERICAL_DIR}/python/bin:\$PATH" >> /home/lumerical/.bashrc
    
    # Set OpenMPI as default resource on startup scripts
    mkdir -p ${LUMERICAL_DIR}/Lumerical
    echo "setresource('FDTD',1,'job launching preset','Remote: OpenMPI');" >> ${LUMERICAL_DIR}/Lumerical/global_fd_ide_startup_script.lsf
    echo "setresource('EME',1,'job launching preset','Remote: OpenMPI');" >> ${LUMERICAL_DIR}/Lumerical/global_mfd_ide_startup_script.lsf
    echo "setresource('varFDTD',1,'job launching preset','Remote: OpenMPI');" >> ${LUMERICAL_DIR}/Lumerical/global_mfd_ide_startup_script.lsf
    #
######
#    # Add VNC to container:
    if [ ${INSTALL_VNC} -eq 1 ]; then
      apt-get install -y --no-install-recommends \
          xfce4 \
          xfce4-goodies \
          dbus-x11 \
          xterm
      apt-get install -y libcurl4-openssl-dev --fix-broken
      #
      # Set xfce4-terminal as default terminal emulator:
      update-alternatives --set x-terminal-emulator /usr/bin/xfce4-terminal.wrapper
      #
      # TurboVNC:
      apt-get -y install wget gpg
      apt-get -y install lsb-release
      #
      #
      #echo "deb bit done..."
      wget -q -O- "https://packagecloud.io/dcommander/turbovnc/gpgkey" | gpg --dearmor > "/etc/apt/trusted.gpg.d/TurboVNC.gpg"
      wget -q -O "/etc/apt/sources.list.d/TurboVNC.list" "https://raw.githubusercontent.com/TurboVNC/repo/main/TurboVNC.list"
      #
      wget -q -O- https://packagecloud.io/dcommander/libjpeg-turbo/gpgkey | gpg --dearmor >/etc/apt/trusted.gpg.d/libjpeg-turbo.gpg
      wget -q -O "/etc/apt/sources.list.d/libjpeg-turbo.list" "https://raw.githubusercontent.com/libjpeg-turbo/repo/main/libjpeg-turbo.list"
      #
      wget -q -O- https://packagecloud.io/dcommander/virtualgl/gpgkey | gpg --dearmor >/etc/apt/trusted.gpg.d/VirtualGL.gpg
      wget -q -O "/etc/apt/sources.list.d/VirtualGL.list" "https://raw.githubusercontent.com/VirtualGL/repo/main/VirtualGL.list"
      #
      apt-get update
      apt-get -y install turbovnc
      apt-get -y install virtualgl libjpeg-turbo-official
      #
      # Set VNC password (replace 'your_vnc_password' with a strong password)
      export PATH=/opt/TurboVNC/bin:${PATH}
      #######

      # There are ways to configure startup scripts, manage the desktop from the container, etc.,
      #  but I do not use them.
      #mkdir -p /root/.vnc
      #echo "monkey018" | vncpasswd -f > /root/.vnc/passwd
      #chmod 600 /root/.vnc/passwd

      ## Configure xstartup script for XFCE
      #echo "#!/bin/bash" > /root/.vnc/xstartup
      #echo "unset SESSION_MANAGER" >> /root/.vnc/xstartup
      #echo "unset DBUS_SESSION_BUS_ADDRESS" >> /root/.vnc/xstartup
      #echo "[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup" >> /root/.vnc/xstartup
      #echo "[ -r \$HOME/.Xresources ] && xrdb \$HOME/.Xresources" >> /root/.vnc/xstartup
      #echo "startxfce4 &" >> /root/.vnc/xstartup
      #chmod +x /root/.vnc/xstartup
  
      ## END VNC
    fi
   

    # Cleanup
    apt-get -y remove alien
    apt-get clean
    rm -rf /var/lib/apt/lists/*

%environment
    # Set offscreen for running in headless mode
    export QT_QPA_PLATFORM=offscreen
    
    # Set the licensing variable. Populated here from the license_server
    # build-arg (see %arguments and the build notes above).
    export ANSYSLMD_LICENSE_FILE={{ license_server }}
    #
    # this is not a smart way to set the Lumerical PATH, but it is what we are going to do for now...
    #  (ideally, we would get a better handle on variable scope, but sometimes we just do things the sloppy way)
    export PATH=/opt/lumerical/v252/bin:${PATH}
    export LD_LIBRARY_PATH=/opt/lumerical/v252/lib:${LD_LIBRARY_PATH}
    export LIBRARY_PATH=/opt/lumerical/v252/lib:${LIBRARY_PATH}
    #
    # VNC:
    export PATH=/opt/TurboVNC/bin:${PATH}

    # Source the environment file if it exists
    if [ -f /envfile ]; then
        . /envfile
    fi

%runscript
    # Source bashrc for lumerical user
    if [ -f /home/lumerical/.bashrc ]; then
        . /home/lumerical/.bashrc
    fi
    exec "$@"

%labels
    Author Ansys, Inc.
    Version 1.0

%help
    This container includes Lumerical software from Ansys.
    
    To build this container:
        apptainer build --build-arg license_server=<your_license_server> lumerical.sif lumerical.def
    
    To run:
        apptainer run lumerical.sif <command>

Step 3: Build the Container

Build the container on Sherlock, e.g.


# Build the container
apptainer build lumerical.sif lumerical.def

If I know my license data, I might be able to embed it into the container. For example, if I have a floating network license (FNL), the container script will use the license_server build-time variable to set the appropriate runtime environment variable.

apptainer build --build-arg license_server=port@server lumerical.sif lumerical.def 
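When iterating on a definition file, the standard Apptainer sandbox workflow can shorten the edit/build/test cycle. This is a generic Apptainer pattern, not specific to Lumerical:

```shell
# Build into a writable sandbox directory for quick iteration:
apptainer build --sandbox lumerical_sandbox/ lumerical.def
# Poke around and test interactively inside the sandbox:
apptainer shell --writable lumerical_sandbox/
# Once it works, convert the sandbox into a production .sif:
apptainer build lumerical.sif lumerical_sandbox/
```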

Step 4: Configure License Settings

See ANSYS documentation on how to set up a license. The precise implementation may vary with the version of ANSYS software, type of license (machine-locked, FNL, etc.), and other factors. Typically, the license file will be configured in your $HOME directory, then your container will be directed to look for and/or validate that license via environment variables. Specifically, Lumerical uses the ANSYSLMD_LICENSE_FILE variable to define various licensing elements.

The license data is then transferred to the container either

  • At build time, passed as a file or environment variable
  • From a file or environment variable on the host system
  • As an environment variable passed to the container at runtime

This documentation will, more or less, review all three of these approaches. Inevitably, trial and error will come into play when customizing a container.

Option 1: Environment variable

Under most circumstances, you can automatically pass environment variables from the host to a container, so the license server can be defined by setting the local (on your host machine) variable ANSYSLMD_LICENSE_FILE to point to the license server:

export ANSYSLMD_LICENSE_FILE=<port>@<license_server_hostname>

Lumerical, however, may conflict with some of Sherlock’s standard environment variable settings, so it may be preferable to build the container with the licensing variable defined (see above),

apptainer build --build-arg license_server=port@server lumerical.sif lumerical.def 

Alternatively, as discussed in the QuickStart section, the internal (container) variable ANSYSLMD_LICENSE_FILE can be set by running the container with the --env ANSYSLMD_LICENSE_FILE={port}@{license_server} option.

Option 2: Create a license file

echo "SERVER ANY " > /home/groups//lumerical/license.lic
echo "USE_SERVER" >> /home/groups//lumerical/license.lic

Again, consult the ANSYS documentation for details as to how to configure your license and license server.
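As a sketch of the --bind variant mentioned in the QuickStart, the license pointer file can be written on the host and overlaid onto a path inside the container. The server name and port are placeholders, and the in-container path is an assumption based on the /opt/lumerical/v252 install prefix used in the definition file; verify both against your setup. The apptainer command is printed rather than executed:

```shell
# Write a minimal license pointer file on the host
# (server name/port are placeholders; substitute your own).
LIC_DIR=$(mktemp -d)
printf 'SERVER license.example.edu ANY 27000\nUSE_SERVER\n' > "${LIC_DIR}/license.lic"

# Overlay it onto the (assumed) in-container license path via --bind.
# Command printed here rather than executed:
echo "apptainer exec --cleanenv --bind ${LIC_DIR}/license.lic:/opt/lumerical/v252/licenses/license.lic lumerical.sif fdtd-engine test_job/solver_far_field.fsp"
```

The --bind approach keeps the license data out of the container image itself, which is convenient when the same *.sif is shared between groups with different license servers.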

Step 5: Test the Container

Interactive test(s)

apptainer shell --cleanenv lumerical.sif
Apptainer> fdtd-solutions -v
Apptainer> exit

Run a simple test job to verify the installation. A test job is provided with the shared container,

mkdir -p $SCRATCH/lumerical_test
cd $SCRATCH/lumerical_test
#
cp -r /home/groups/sh_support/share/containers/lumerical/test_job/ ./
apptainer exec --cleanenv --writable-tmpfs --env ANSYSLMD_LICENSE_FILE={port}@{License Server}  lumerical.sif fdtd-engine test_job/solver_far_field.fsp