Converge and Converge Studio
Overview
Converge is a computational fluid dynamics (CFD) application used for simulating three-dimensional fluid flow. Converge Studio is the graphical user interface (GUI) front-end for Converge, which includes a simplified, customized version of ParaView as a built-in module. Converge Studio is not required to run Converge (Converge can be run in command-line mode), but may be useful for pre- and post-processing.
This document is intended to provide basic guidance on running Converge and Converge Studio on Matilda. Because Converge Studio is a GUI application, it is necessary to establish a forwarded X-windows session to Matilda. Please be aware that X-windows over SSH can perform very poorly, especially off-campus (on-campus performance is much more acceptable). This is not a flaw in the application itself, but rather a limitation of the method of access.
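If you must connect from off-campus, enabling SSH compression may modestly improve X-windows responsiveness. For example ("-C" is a standard OpenSSH option; results will vary with your connection):

ssh -XC hpc-login.oakland.edu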
Converge Studio
Usage Basics
Converge Studio may be accessed using the following:
ssh -X hpc-login.oakland.edu
[someuser@hpc-login-p01 ~]$ module load ConvergeStudio
[someuser@hpc-login-p01 ~]$ CONVERGE_Studio
Note that when running Converge Studio (especially from off-campus) you may encounter a dialog stating the application is not responding. This is generally NOT the case. What is usually happening is that slow graphics load times are running up against your workstation's timeouts. In those cases, click "Wait", and eventually the display will update.
Explanation
An X-windows SSH session must be established to the login node, after which the modulefile is loaded. At this point, Converge Studio may be started. Note that an X-windows window will pop up on your local workstation to display the Converge Studio interface.
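Before launching Converge Studio, you can confirm that X-forwarding is active by checking that the DISPLAY environment variable is set (a generic X-windows check; the exact value shown will vary):

[someuser@hpc-login-p01 ~]$ echo $DISPLAY
localhost:10.0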
Caution
Users are strongly advised NOT to run full simulations in Converge Studio directly on the login node. Runtime resources on the login node are limited, and your job may be killed if you attempt to use excessive resources or if your run continues for an excessive period of time. This can result in premature termination of your simulation. Converge Studio should only be used in this way for pre- and/or post-processing of data, or for short-term testing.
To use Converge Studio to run multi-cpu simulations, please refer to Simulations with Converge Studio outlined later in this document.
Converge
Usage Basics
Converge can be run in command-line mode by loading the appropriate modulefile. At present, versions built against OpenMPI, MPICH, IntelMPI, and HPC-X are installed on Matilda. The available modulefiles as of this writing are:
module load Converge/3.0.28-ompi       (default)
module load Converge/3.0.28-mpich
module load Converge/3.0.28-intelmpi
module load Converge/3.0.28-hpcx
Converge CFD currently recommends Converge OMPI (the Matilda default) for most users. Note that MPICH versions will generally underperform the others, as evidenced by MPICH testing on Matilda.
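To see the Converge modulefiles currently installed (versions may have changed since this writing), you can query the module system directly:

[someuser@hpc-login-p01 ~]$ module avail Converge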
SLURM Batch Jobs
The preferred way to run Converge in CLI mode is using a SLURM batch job script. Presented below is an example job script (see: Job Script Examples for more options):
#!/bin/bash
# ==== Sample Job Script ====
#SBATCH --job-name=myConverge
#SBATCH --nodes=2
#SBATCH --ntasks=80
#SBATCH --ntasks-per-node=40
#SBATCH --cpus-per-task=1
#SBATCH --time=0-04:00:00

cd ${SLURM_SUBMIT_DIR}
module load Converge/3.0.28-ompi
mpirun converge-ompi --super > runConverge.log 2>&1
Explanation
The job script above specifies 80 tasks across 2 nodes (40 cpus per node) with one (1) cpu per task. We make sure to load the desired Converge modulefile, and then launch the corresponding Converge executable (converge-ompi) using "mpirun". Make sure to include the "--super" flag after the Converge executable, otherwise your job will fail! Standard output and error will be directed, in this case, to the file "runConverge.log".
IMPORTANT
Note that in the example above, we do NOT use "-np <#>" to designate the number of cpus to use for the run. The SLURM resource manager works seamlessly with all of the MPI variants of Converge, and the correct number of cpus will be selected automatically. If you use "-np <#>" with the mpirun command, your job will significantly over-utilize the runtime compute nodes, and your job may fail or be killed without warning.
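Assuming the script above is saved as, for example, "runConverge.sh" (the filename is arbitrary), it can be submitted and monitored with standard SLURM commands:

[someuser@hpc-login-p01 ~]$ sbatch runConverge.sh
Submitted batch job 1234567
[someuser@hpc-login-p01 ~]$ squeue -u $USER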
Simulations with Converge Studio
Job Configuration with salloc
Converge may be run using the Converge Studio GUI front-end for convenience. Converge Studio connections over SSH will be somewhat slow, so it is generally recommended to run it from on-campus networks only. In order to run high-resource Converge jobs using Converge Studio, it is necessary to allocate resources in SLURM prior to launching Converge Studio. To begin, log in to Matilda using an X-windows session as discussed previously. Then preallocate resources in SLURM as illustrated below:
ssh -X hpc-login.oakland.edu
[someuser@hpc-login-p01 ~]$ salloc -N 2 -n 80 --ntasks-per-node=40 -c 1 -t 4:00:00 1> /dev/null
salloc: Granted job allocation 1234567
salloc: Nodes hpc-compute-p03,hpc-compute-p08 are ready for job
salloc: lua: Submitted: 1234567.4294967294 component: 0 using: salloc
[someuser@hpc-login-p01 ~]$
Explanation: salloc and Studio Start
Note that in the example above, we use the command "salloc" to allocate resources for our simulation. We specify the number of nodes (-N 2), the number of tasks (-n 80), the number of tasks per node (--ntasks-per-node=40), and the number of cpus-per-task (-c 1). The "1> /dev/null" is used to direct superfluous terminal output to the "trash". We then see that 2 nodes (hpc-compute-p03 and hpc-compute-p08) have been assigned to our job. We are then returned to a new login prompt.
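If you wish to confirm that the allocation is active before proceeding, standard SLURM commands may be used (the job ID shown is the illustrative one from above):

[someuser@hpc-login-p01 ~]$ squeue -u $USER
[someuser@hpc-login-p01 ~]$ scontrol show job 1234567 | grep NodeList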
Launch Converge Studio
At this point, you should be inside an interactive session on the login node that is attached to the SLURM allocation. You may now load the Converge Studio modulefile and launch Converge Studio:
[someuser@hpc-login-p01 ~]$ module load ConvergeStudio
[someuser@hpc-login-p01 ~]$ CONVERGE_Studio
You should now see the following X-windows popup on your local workstation:
First, ensure that your Converge Studio preferences are configured correctly (example shown below) by selecting Edit->Preferences:
Then fill-in the preferences to designate the "Converge Root", MPI Type, and command line options:
Explanation: Converge Studio Preferences
At a minimum, it is necessary to set the following parameters:
- Set the CONVERGE ROOT to "/cm/shared/apps/Converge/3.0.28/Convergent_Science"
- Select the correct MPI Type (here: OPEN MPI)
- After setting the parameters above, the Version dropdown box should be populated. Here we default to 3.0.28
- Set the appropriate number of processes (cpus) - here 80
- IMPORTANT: Enter "--super" in the Command Line Arguments box
When you're finished, click the OK button to save your choices. It is necessary to provide the correct values as shown above (vary processes and MPI Type as desired) or your job will fail. Make sure that the number of processes you select is equal to the total number of cpus assigned to the job by "salloc" (i.e. 80 tasks x 1 cpu-per-task in this case).
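As a sanity check, the total cpu count for your allocation can be confirmed from the salloc-attached prompt using the standard SLURM environment variables (the value shown corresponds to this example):

[someuser@hpc-login-p01 ~]$ echo $SLURM_NTASKS
80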
Start and Run the Simulation
After you load or create your project, hit the "Run CONVERGE" button to begin.
First, you will see an "Export Files" window, where you can export files to a different location if desired:
You will see a popup window that will list runtime messages:
If you wish to stop the simulation and "checkpoint" the run, select the "Normal Stop" button, otherwise let the run continue until completion.
When the simulation is stopped or complete, select the "Hide" button.
Terminating the Job
It is important to terminate your "salloc" allocation when you're finished. To do so, simply exit out of Converge Studio, and type "exit" at the command prompt:
[someuser@hpc-login-p01 ~]$ exit
salloc: Relinquishing job allocation 1234567
[someuser@hpc-login-p01 ~]$
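You can verify that the allocation has been released by checking that no jobs remain listed under your account:

[someuser@hpc-login-p01 ~]$ squeue -u $USER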
More Information
Converge User Resources - includes manuals and examples
CategoryHPC