Using ANSYS RSM

Introduction

It is fairly straightforward to submit batch jobs to the cluster for ANSYS products such as Fluent and Mechanical APDL. However, submitting batch jobs from ANSYS Workbench is a bit more involved. To submit a batch job from Workbench, we need to configure a tool called the ANSYS Remote Solve Manager (RSM), which handles the interaction between Workbench and the Slurm job scheduler. Here is a video showing how ANSYS uses RSM to send jobs to a compute cluster:

 https://www.youtube.com/watch?v=lgGID55Lwzg&feature=youtu.be

For this guide, we will use Nova OnDemand to launch a graphical desktop, then use the RSM cluster configuration tool to set up how Workbench interacts with the Slurm scheduler. The remainder of this guide shows you how to configure RSM and submit jobs from Workbench.

Launch a Nova Desktop Session from Nova OnDemand

The first step is to log in to Nova OnDemand (https://nova-ondemand.its.iastate.edu). Once you're logged in, start a Nova Desktop session from the Interactive Apps menu.

NOTE: When specifying resources for the desktop session, keep in mind that the resources you choose for running the Workbench graphical interface are not the resources that will be used to run a simulation. When you submit a job from Workbench to the cluster through RSM, the batch request goes to Slurm, which may execute the job on completely different compute nodes than the one running Workbench. So for running ANSYS Workbench itself, something like 4 cores and 16 GB of memory is enough, but when you submit a job from Workbench, you could request dozens of cores and 100 GB of memory or more, depending on the size of the simulation.

ANSYS Remote Solve Manager (RSM) is used by ANSYS Workbench to submit computational jobs to the cluster. This allows you to take advantage of much more computing power than you likely have on your desktop or laptop computer. What's more, you don't have to write sbatch scripts on the cluster or manually transfer files between your desktop computer and the cluster; RSM takes care of those details automatically. In fact, some types of analysis, such as coupled mechanical FEA and fluid dynamics, are difficult to set up to run on the clusters without RSM.

Prerequisites

  1. You must have ANSYS version 2020 R2 or later installed. 
  2. You must be connected to the campus network over the ISU VPN, even if you are on campus. 
  3. You must have an account on one of the ISU Clusters.  See here for info on how to get an account:  https://www.hpc.iastate.edu/faq#Access

Launch the RSM Configuration Tool

The RSM configuration tools are installed automatically when you install ANSYS. To launch the RSM Configuration tool from your desktop session:

Open a Linux shell. Inside the shell, use the module command to load the version of ANSYS you require (use 'module spider ansys' to see which versions are installed), then launch the configuration tool. For example:
      $ module load ansys/24.2
      $ rsmclusterconfig

 

Alternatively, you can run the script <RSMInstall>/Config/tools/linux/rsmclusterconfig directly.

Configuring RSM

The main RSM Configuration window looks like this:

Main RSM Configuration window

Creating an HPC Resource

First, we need to create an HPC Resource that defines a cluster.  Click the blue plus sign to open the HPC Resource pane.  In the fields, enter these values:

  • In the Name field, enter the name you want to call this HPC resource.  For example, Nova.
  • In the field HPC type, select SLURM from the dropdown menu. The ISU clusters use the SLURM job scheduler.
  • For the RSM Submit Host, enter: localhost
  • In the field labeled SLURM job submission arguments (optional), enter:  -N 1 -n 16 -t 4:00:00   
    NOTE:  The values specified here are just for testing the submit setup.  The actual values for number of hosts, number of processors, and maximum time will likely be changed once you begin submitting actual jobs.
  • Leave the box checked that is labeled Use SSH protocol for inter and intra-node communication (Linux only).
  • Click the button labeled: Able to directly submit and monitor HPC jobs
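The submission arguments above map directly onto standard sbatch options. As a point of reference, here is a minimal sketch of what an equivalent standalone Slurm batch script would look like; the module version, solver command, and file names are illustrative assumptions, not part of the RSM setup:

```shell
#!/bin/bash
#SBATCH -N 1          # one node, matching the -N 1 argument above
#SBATCH -n 16         # 16 tasks/cores, matching -n 16
#SBATCH -t 4:00:00    # 4-hour wall-clock limit, matching -t 4:00:00

# Illustrative only: load ANSYS and run a solver in batch mode.
# The module version and input/output file names are assumptions.
module load ansys/24.2
fluent 3d -g -t16 -i input.jou > output.log
```

With RSM configured, you never have to write a script like this yourself; RSM generates and submits the equivalent request for you.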

 

After entering the fields, click the Apply button at the bottom of the page to save these settings.  The final settings should look something like this:

RSM HPC Resource Settings

Next, select the File Management tab. Note that in this configuration, all input files are used from the location where you uploaded them, so any input files you intend to use on the cluster must be copied into place under your work directory path on the cluster. Fill out the File Management settings as shown below, and be sure to press the Apply button to save this setup.
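Because RSM uses input files from the location where they already reside on the cluster, any files created on your workstation must be staged there first. A minimal sketch using scp (the login hostname, NetID, and work directory path here are placeholder assumptions; substitute your own):

```shell
# Copy a local Workbench project file to your work directory on the cluster.
# "netid", the hostname, and the /work path are placeholders -- use your own.
scp my_model.wbpj netid@nova.its.iastate.edu:/work/mygroup/netid/
```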

Configuring Queues

Now select the Queues tab at the top right of the RSM Configuration page. From this page, RSM will look up the list of available queues and add them to the queue list. (Since we selected localhost as the submit host, the configuration tool can talk directly to Slurm to get the list of queues.)
To import the available queues, click the first icon, which depicts two blue arrows in a circle.
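If the import fails, or you simply want to cross-check the result, you can list the Slurm partitions (queues) yourself from a shell on the same node. sinfo is a standard Slurm command:

```shell
# Print a one-line summary of each Slurm partition (queue).
sinfo -s
```

The partition names shown by sinfo should match the queues RSM imports.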

After importing the available queues, the list should look something like this:

Enable Desired RSM Queues

Enable the queues you wish to use; most users will use the Nova queue. De-select any queues you don't intend to use, then press Apply to save this configuration.

Testing the Queue

Next, we want to run a test of the enabled queues. In the table listing the queues, click the Submit button in the Test column for the queue you wish to use, then wait for the test to complete. When done, it should show a green check mark in the Status column, as shown below:
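While the test runs, you can also watch the underlying Slurm job from a shell. squeue and sacct are standard Slurm commands; the time argument shown is one of Slurm's accepted keywords:

```shell
# Show your currently queued and running jobs.
squeue -u $USER

# Show today's job history, including the completed RSM test job.
sacct -u $USER --starttime today
```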

Test RSM Queues


If you see a green check mark, you should be able to use RSM to submit jobs from Workbench to the cluster.