OpenFOAM

OpenFOAM is a free, open-source CFD software package.

This entry provides basic information on how to run OpenFOAM from OpenCFD. NB this OpenFOAM installation is still in testing, and this guide is liable to change.

General OpenFOAM documentation

OpenFOAM is complex, and you should start by reading the official documentation at http://www.openfoam.org/docs/user

Setting up the environment

You can set up the environment variables necessary to run both OpenFOAM version 2.3.0 and ParaView version 4.1.0 by loading the openfoam environment module.

% module add openfoam

This module also sets up environment variables that define default locations for your OpenFOAM cases and binaries.

In the paths below, ~ means your home (login) directory.

$WM_PROJECT_USER_DIR  ~/OpenFOAM/2.3.0
$FOAM_RUN             ~/OpenFOAM/2.3.0/run
$FOAM_USER_APPBIN     ~/OpenFOAM/2.3.0/platforms/linux64Gcc47DPOpt/bin

The first time you use OpenFOAM, you should create these directories with the commands below and use them for your OpenFOAM cases.

mkdir -p $WM_PROJECT_USER_DIR
mkdir -p $FOAM_RUN
mkdir -p $FOAM_USER_APPBIN
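Equivalently, all three directories can be created with a single command. The fallbacks after ':-' below are assumptions mirroring the defaults listed above, and only apply in a session where the openfoam module has not been loaded:

```shell
# Create the standard OpenFOAM user directories in one call.
# The ':-' fallbacks are assumed defaults (see the list above) and are
# used only when the openfoam module has not set the variables.
mkdir -p "${WM_PROJECT_USER_DIR:-$HOME/OpenFOAM/2.3.0}" \
         "${FOAM_RUN:-$HOME/OpenFOAM/2.3.0/run}" \
         "${FOAM_USER_APPBIN:-$HOME/OpenFOAM/2.3.0/platforms/linux64Gcc47DPOpt/bin}"
```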

Getting Started

Create the project directories above.

Copy the tutorial examples directory in the OpenFOAM distribution to the run directory.

cp -r $FOAM_TUTORIALS $FOAM_RUN

Run the first example case of incompressible laminar flow in a cavity:

cd $FOAM_RUN/tutorials/incompressible/icoFoam/cavity
blockMesh    # generate the mesh from the case's blockMeshDict
icoFoam      # run the incompressible laminar solver
paraFoam     # visualise the results

Now refer to the OpenFOAM User Guide to get more information!
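The three applications above print their output straight to the terminal. For longer cases it is useful to keep each application's output in a log file and stop at the first failure; a minimal sketch (the run_step helper is hypothetical, not part of OpenFOAM):

```shell
#!/bin/bash
# Hypothetical helper: run one OpenFOAM application, keep its output in
# log.<application>, and report failure so a pipeline can stop early.
run_step() {
    local app="$1"; shift
    "$app" "$@" > "log.$app" 2>&1 || {
        echo "$app failed; see log.$app" >&2
        return 1
    }
}

# In a case directory the tutorial steps would then be:
#   run_step blockMesh && run_step icoFoam
```

The log files can be inspected with less, or followed with tail -f while the solver runs.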

Launching on the login node

Visualisation of OpenFOAM results can be done by running paraFoam at the command prompt:

% paraFoam

Warning

If you’re visualising large data sets this should NOT be done on the login nodes, since this can use a considerable amount of RAM and CPU. Instead, you should request an interactive job with UGE (SGE).

Using Univa Grid Engine

Univa Grid Engine (UGE) allows both interactive and batch jobs to be submitted, with exclusive access to the resources requested.

Running through an interactive shell

To launch paraFoam interactively, displaying the full GUI:

% qrsh -q eng-inf_parallel.q -cwd -V -l h_rt=<hh:mm:ss> paraFoam

In the above command, hh:mm:ss is the maximum wall-clock time the shell will exist for, -cwd means use the current working directory, and -V exports the current environment. For example, to run paraFoam for 1 hour:

% qrsh -q eng-inf_parallel.q  -cwd -V -l h_rt=1:00:00 paraFoam

This will run paraFoam within the terminal from which it was launched. You will need to be in an appropriate directory for paraFoam to find the correct files.

Batch Execution

To run OpenFOAM in batch mode, you first need to set up your case.

A script must then be created that requests resources from the queuing system and launches the desired OpenFOAM executables, e.g. the script runfoam.sh:

#!/bin/bash
# Use the current working directory
#$ -cwd
# Use the Engineering/Informatics parallel queue
#$ -q eng-inf_parallel.q
# Load OpenFOAM module
. /etc/profile.d/modules.sh
module add openfoam
# Run actual OpenFOAM commands
blockMesh
icoFoam

This can be submitted to the queuing system using:

% qsub runfoam.sh

The job's progress can then be monitored with qstat.

Parallel Execution

If you’ve configured your OpenFOAM job to be solved in parallel, you need to submit it differently. OpenFOAM uses its own private build of OpenMPI 1.6.5, so you do not need to explicitly load an openmpi module. Below is an example of a suitable submission script that reserves 2GB of RAM per slot.
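Before a parallel run, the case mesh must be split across the requested slots with decomposePar. A minimal system/decomposeParDict sketch, assuming 8 subdomains to match the -pe openmpi 8 request in the script below (the method and coefficients are illustrative; consult the User Guide for what suits your geometry):

```
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  8;

method          simple;

simpleCoeffs
{
    n               (2 2 2);   // 2 x 2 x 2 = 8 subdomains
    delta           0.001;
}
```

Run decomposePar in the case directory before submitting the job; after the run completes, reconstructPar merges the processor directories back into a single solution.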

NB there is a space between "." and "/etc/profile.d/modules.sh" in the script below!

Create the script runfoam-mpi.sh:

#!/bin/bash
#$ -l h_vmem=2G
#$ -pe openmpi 8
#$ -cwd -V
#$ -q eng-inf_parallel.q
export MPI_BUFFER_SIZE=8192
. /etc/profile.d/modules.sh
module purge
module add openfoam
mpirun -np $NSLOTS interFoam -parallel

Submit the script with:

% qsub runfoam-mpi.sh