Compiling and running Fortran and MPI code

Introduction

This tutorial is meant to be run on the Dahu machine. After connecting to Dahu with your own login/password, we begin by creating a Guix profile and installing the tools we need: a Fortran compiler and an MPI library. We then copy the source code and the Makefile, compile the program, and execute it.

Creating the Guix profile gnu-fortran-mpi-tuto and installing the software

First, we activate Guix to install the software environment we need:

$ source /applis/site/guix-start.sh

If you are running this command for the first time, it may take several minutes. If you have not run guix pull for a while, please do so before continuing with this tutorial.
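Updating the package definitions is a single command (it can itself take a few minutes):

$ guix pull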

Looking for and installing the Fortran compiler

Here we are interested in the GNU Fortran package. To look for it among the available Guix packages, run this command:

$ guix search fortran 
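The output is a list of recutils records, one per matching package. As a convenience (a hedged example, assuming standard grep), you can filter it down to the package names only:

$ guix search fortran | grep "^name:"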

This command lists several Guix packages. We choose the one whose name field is gfortran-toolchain, and we install it in a new Guix profile called gnu-fortran-mpi-tuto:

$ guix install -p $GUIX_USER_PROFILE_DIR/gnu-fortran-mpi-tuto gfortran-toolchain

Looking for and installing the MPI library

We also need an MPI library to compile our program and run it properly on several nodes:

$ guix search mpi

We choose the one whose name field is openmpi and we install it:

$ guix install -p $GUIX_USER_PROFILE_DIR/gnu-fortran-mpi-tuto openmpi

Finally, as indicated in the output of the previous installation step, we source the profile to activate our freshly installed packages (be sure to replace <your-login> with your login):

$ GUIX_PROFILE="/var/guix/profiles/per-user/<your-login>/gnu-fortran-mpi-tuto"
$ . "$GUIX_PROFILE/etc/profile"

We can check the packages installed in our profile with the command:

$ guix package -p $GUIX_USER_PROFILE_DIR/gnu-fortran-mpi-tuto -I

gfortran-toolchain	7.5.0	out	/gnu/store/0j35sgs9jfbr5w07plq0zjxfbmqw7w1i-gfortran-toolchain-7.5.0
openmpi	4.1.1	out	/gnu/store/8pzsa54chmgimrmvxxry6nxs4z00ld2f-openmpi-4.1.1

The Fortran compiler wrapper for MPI is mpif90:

$ which mpif90

/var/guix/profiles/per-user/<your-login>/gnu-fortran-mpi-tuto/bin/mpif90

Compiling the source code and running the program

We create a dedicated directory in $HOME:

$ mkdir $HOME/tuto-mpi
$ cd $HOME/tuto-mpi

and then we copy the code and the Makefile:

$ cp /bettik/bouttiep/tuto-mpi/* .
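The copied files contain the sources of a small MPI example called pair_impair ("even/odd" in French). For reference, here is a minimal sketch of what such a program might look like; this is a hypothetical reconstruction (assuming a source file pair_impair.f90 and the standard mpi module), not the actual code shipped in /bettik/bouttiep/tuto-mpi:

! pair_impair.f90 -- hypothetical sketch: each MPI rank reports
! whether its rank number is even (pair) or odd (impair)
program pair_impair
  use mpi
  implicit none
  integer :: ierr, rank, nprocs

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  if (mod(rank, 2) == 0) then
    print *, 'rank', rank, 'of', nprocs, 'is even (pair)'
  else
    print *, 'rank', rank, 'of', nprocs, 'is odd (impair)'
  end if

  call MPI_Finalize(ierr)
end program pair_impair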

Compilation

To compile the examples, each directory contains a Makefile. This Makefile performs both the compilation and the execution (a minimal sketch of such a Makefile is shown further below), so we advise you to submit an interactive job to the Dahu nodes through OAR:

To reserve one node interactively (i.e. OAR connects us to the node as soon as the resource is available), execute this command:

$ oarsub -t devel --project your_project -l /nodes=1,walltime=00:30:00 -I

The -t devel option means we want one of the devel nodes, which are easier to obtain and are meant for compilation and tests.
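When the resource is granted, OAR opens a shell directly on the reserved node and defines, among other environment variables, $OAR_NODE_FILE: a file listing the resources attributed to the job (one hostname per reserved core). You can inspect it with:

$ cat $OAR_NODE_FILE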

Once connected, we activate the software environment we have previously installed and then we compile:

# source the guix profile
$ GUIX_PROFILE="/var/guix/profiles/per-user/<your-login>/gnu-fortran-mpi-tuto"
$ . "$GUIX_PROFILE/etc/profile"
# We compile the code and launch it 
$ cd $HOME/tuto-mpi
$ make
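For reference, here is a minimal sketch of what such a Makefile might look like. It is hypothetical (it assumes the source file is named pair_impair.f90); the actual Makefile copied from /bettik/bouttiep/tuto-mpi may differ:

# Hypothetical Makefile sketch: build pair_impair with the MPI wrapper,
# then run it (recipe lines must be indented with a tab character)
FC  = mpif90
EXE = pair_impair

all: $(EXE) run

$(EXE): pair_impair.f90
	$(FC) -O2 -o $(EXE) pair_impair.f90

run: $(EXE)
	mpiexec -np 4 ./$(EXE)

clean:
	rm -f $(EXE)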

The make command automatically launches the execution. You can also run the application yourself:

$ cd $HOME/tuto-mpi
$ mpiexec -np 4 --machinefile $OAR_NODE_FILE ./pair_impair

Here we have run the program on 4 cores of the same node (in that case, the --machinefile option is not strictly needed; however, if you want to run your program on more than one node, you have to specify it).
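If the program resembles the hypothetical sketch shown earlier, the output would look something like the following (the exact formatting depends on the compiler, and the line order is not guaranteed since all ranks print concurrently):

rank 0 of 4 is even (pair)
rank 1 of 4 is odd (impair)
rank 2 of 4 is even (pair)
rank 3 of 4 is odd (impair)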

Now, we want to launch it on many more cores and more nodes.

Creating and launching an execution script

Before continuing, quit your interactive job and return to the Dahu head node (one way to do that is to run exit in your interactive job shell).

To run our program more efficiently, it is more convenient to write an OAR execution script, as described here. Let's create run.sh:

#!/bin/sh

#OAR -t devel
#OAR --project your_project
#OAR -l /nodes=4/core=1,walltime=00:30:00

source /applis/site/guix-start.sh

# Activate the Guix profile created earlier (replace <your-login> by your login)
GUIX_PROFILE="/var/guix/profiles/per-user/<your-login>/gnu-fortran-mpi-tuto"
. "$GUIX_PROFILE/etc/profile"

cd $HOME/tuto-mpi

mpirun --np 4 -npernode 1 --machinefile $OAR_NODE_FILE --mca plm_rsh_agent "oarsh" ./pair_impair

With this configuration, the program will run on 4 cores, each one on a different node. The --mca plm_rsh_agent "oarsh" option tells Open MPI to use oarsh, OAR's remote shell, to launch processes on the other nodes. Do not forget to give execution permission to the submission script:

$ chmod +x ./run.sh

And finally, we submit our job:

$ oarsub -S ./run.sh
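We can follow the job's state with oarstat. Once the job has finished, its standard output and error are written by default to OAR.<jobid>.stdout and OAR.<jobid>.stderr in the directory the job was submitted from:

$ oarstat -u <your-login>
$ cat OAR.<jobid>.stdout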

If we want to execute it on 4 cores of the same node, we can modify the script accordingly:

#!/bin/sh

#OAR -t devel
#OAR --project your_project
#OAR -l /nodes=1/core=4,walltime=00:30:00

source /applis/site/guix-start.sh

# Activate the Guix profile created earlier (replace <your-login> by your login)
GUIX_PROFILE="/var/guix/profiles/per-user/<your-login>/gnu-fortran-mpi-tuto"
. "$GUIX_PROFILE/etc/profile"

cd $HOME/tuto-mpi

mpirun --np 4 --machinefile $OAR_NODE_FILE --mca plm_rsh_agent "oarsh" ./pair_impair
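As before, we give the script execution permission and submit it:

$ chmod +x ./run.sh
$ oarsub -S ./run.sh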