Using STAR-CCM+ on Nova

Introduction

STAR-CCM+ is used for solving problems in Computational Fluid Dynamics (CFD) and related multiphysics.  On Nova, users typically run STAR-CCM+ analyses as batch jobs, though it can also be used interactively through Nova OnDemand.

Full STAR-CCM+ User Guide

The complete STAR-CCM+ User Guide is available as a PDF file at the path:

/shared/hpc/starccm/starccm-userguide_19.04.009_en.pdf 

GPU Support

STAR-CCM+ has strong GPU support.  Many STAR-CCM+ jobs that use a GPU will run up to four times faster than the same job using only CPUs.

Note that running STAR-CCM+ with a GPU still requires CPUs for many operations, particularly post-processing.  We recommend requesting at least 4 CPU cores for each GPU requested.
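For example, a job requesting one GPU with the recommended CPU ratio might include Slurm directives like these (the GPU model is illustrative; other directives such as partition and time are omitted):

```shell
#SBATCH --gres=gpu:a100:1    # one A100 GPU
#SBATCH --ntasks=4           # at least 4 CPU cores per GPU requested
```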

STAR-CCM+ Versions

Use the module spider command to see the available versions:  

$ module spider starccm
----------------------------------------------------------------------------------------------------------------------------------------
 starccm:
----------------------------------------------------------------------------------------------------------------------------------------
    Versions:
       starccm/16.04.012-R8
       starccm/16.04.012
       starccm/17.04.008-R8
       starccm/17.04.008
       starccm/18.02.008-R8
       starccm/18.02.008
       starccm/19.04.007-R8
       starccm/19.04.007

 

Single Versus Double Precision

Note that the versions ending in -R8 are double-precision builds, which use double-precision floating-point numbers for all calculations.  Double precision is recommended for complex geometries.
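For example, to load the double-precision build of a release from the list above (these module commands are specific to the cluster's Lmod environment):

```shell
# Load the double-precision (-R8) build of STAR-CCM+
module load starccm/19.04.007-R8

# Or load the single-precision build of the same release:
# module load starccm/19.04.007
```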

Sample Batch Script for STAR-CCM+ Using a GPU

The following Slurm batch script shows a typical job that uses a GPU.   Some key points to note about how this job is launched:

  • The job requests 1 GPU from Slurm:   #SBATCH --gres=gpu:a100:1 
    You can request more than one GPU, of course.  We recommend starting with 1 GPU and expanding to additional GPUs as needed.
  • The option -batchsystem slurm tells STAR-CCM+ to query the Slurm scheduler to determine the hardware assigned to the job.
  • The option -gpgpu auto:1:nomps tells STAR-CCM+ not to use the Nvidia MPS system.
    (Nvidia MPS can conflict with the Slurm scheduler, so it is best not to use it.)
  • The job uses OpenMPI:  -mpi openmpi
    (In earlier versions of STAR-CCM+, Intel MPI was recommended over OpenMPI, but the latest versions of STAR-CCM+ work best with OpenMPI.)
#!/bin/bash
# Sample script that creates an sbatch script for running a STARCCM+ job and also submits the job to Slurm.
# This particular script is set up for GPU support.
# Instructions:
#   1.  Modify the items under "job parameters" below. Save the file. 
#   2.  Run this file  (chmod +x starccm-gpu.sh ; ./starccm-gpu.sh) 

# Set the job parameters below: (modify as needed)
JOBNAME=MyJob-1gpu             # Name you want for this job
ACCOUNT="GroupAccount"         # Account group name
WORKDIR=/work/group/username/starccm    # Work directory path
SIMFILE="MyJob1.sim"           # Simulation file name
STARCCM_VERSION="19.04.007"    # The desired STAR-CCM+ version (run "module spider starccm" for available versions)
PARTITION=instruction          # Partition (job queue) 
NUM_NODES=1                    # Number of compute nodes
NUM_PROCS_PER_NODE=6           # Number of CPU cores per node (Note: GPU jobs typically need far fewer CPU cores).
NUM_GPUS=1                     # Number of GPUs. (Set to 0 if no GPUs are required).
GPU_MODEL="a100"               # GPU model, such as "a100" or "v100".  Leave blank if no preference.
MAX_TIME=12:00:00              # Maximum wall clock time the job will take.
TOTAL_MEMORY=64G               # Total non-GPU memory required. (Note: GPUs have their own dedicated memory.)
EMAIL=yournetid@iastate.edu    # Your email address.
MAIL_TYPES=BEGIN,FAIL,END      # Receive e-mail when the job begins, ends, or fails.
# End of job parameters.   

TOTAL_PROCS=$((NUM_NODES*NUM_PROCS_PER_NODE))
ERROR_FILE=${JOBNAME}.%j.error
OUTPUT_FILE=${JOBNAME}.%j.output
SIMFILEPATH="${WORKDIR}/${SIMFILE}"


# Set up the --gres value for Slurm and the -gpgpu option for STAR-CCM+ for GPU jobs:
if [[ $NUM_GPUS -gt 0 ]]; then
   GPGPU_OPTION="-gpgpu auto:${NUM_GPUS}:nomps"
   if [[ $GPU_MODEL == "" ]]; then
      GRES="gpu:${NUM_GPUS}"
   else
      GRES="gpu:${GPU_MODEL}:${NUM_GPUS}"
   fi
else
   GRES="gpu:0"
   GPGPU_OPTION=""
fi

# Everything below until END_OF_SCRIPT gets passed to sbatch.  Edit carefully.
# Note that the regular shell variables (i.e.  $var,  ${var} ) are 
# filled in by bash when you run this script.
# Escaped variables (i.e.  \$var ) are left unexpanded here and are filled in when the job script runs.
cat <<END_OF_SCRIPT > ${JOBNAME}.sbatch
#!/bin/bash
#SBATCH -A  $ACCOUNT
#SBATCH -J $JOBNAME
#SBATCH -D $WORKDIR
#SBATCH -N $NUM_NODES
#SBATCH -n $TOTAL_PROCS
#SBATCH --partition=$PARTITION
#SBATCH --ntasks-per-node=$NUM_PROCS_PER_NODE
#SBATCH --mem=$TOTAL_MEMORY
#SBATCH --gres=${GRES}
#SBATCH --time=$MAX_TIME
#SBATCH --error=$ERROR_FILE
#SBATCH --output=$OUTPUT_FILE
#SBATCH --mail-type=$MAIL_TYPES
#SBATCH --mail-user=$EMAIL
#SBATCH --hint=nomultithread

cd $WORKDIR

SIMFILEPATH=${WORKDIR}/$SIMFILE
module load starccm/${STARCCM_VERSION}


# The option "-batchsystem slurm" tells STAR-CCM+ to get job details from Slurm.
starccm+ -batch -batchsystem slurm -mpi openmpi $GPGPU_OPTION -np $TOTAL_PROCS $SIMFILEPATH

END_OF_SCRIPT

# Submit the sbatch script created above.
echo "Created sbatch file: ./${JOBNAME}.sbatch.  Submitting with sbatch ..."
sbatch ./${JOBNAME}.sbatch
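If you want the wrapper to report the job ID for later monitoring (for example with squeue or scancel), you can parse the "Submitted batch job <id>" line that sbatch prints on success.  A minimal sketch, using a stand-in string in place of a real sbatch call:

```shell
# In the wrapper you would capture sbatch's output, e.g.:
#   submit_output=$(sbatch ./${JOBNAME}.sbatch)
# Here we use a stand-in string for illustration.
submit_output="Submitted batch job 123456"

# Strip everything up to the last space, keeping only the numeric job ID.
jobid=${submit_output##* }
echo "Job ID: $jobid"
```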