There are currently modules for two Telemac versions. You can search for the names of modules with the "module spider ..." command:
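For example, to search for modules whose names contain "telemac" (the search term here is an assumption; adjust it to match the module naming on your system):

```
module spider telemac
```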
The version that seems to work the best is the v7 one. To get more information about this module you can use the "module show ..." command like:
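For the v7 module, that would look something like:

```
module show telemac/v7
```

The output lists the environment changes the module makes (PATH updates, dependent modules, and so on), which is where the details below come from.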
Useful information from this output:
- the Telemac software expects the old, obsolete Python 2.7
- the directory /opt/ohpc/pub/telemac/v7/scripts/python27 is added to the beginning of the PATH variable
- the Telemac configuration file is /opt/ohpc/pub/telemac/v7/configs/systel.cis-centos.cfg
- specific versions of MVAPICH2 (the MPI distributed-parallel libraries), METIS, and Anaconda (Python) are loaded automatically

The MVAPICH2 and Anaconda modules that are loaded may in turn load other modules.
As an example, when you first login to Katahdin, you will have a set of default modules loaded:
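A sketch of what that might look like (the exact module set and numbering are illustrative; only gnu8/8.3.0 and mvapich2/2.3.2 are known to be among the defaults):

```
$ module list

Currently Loaded Modules:
  1) gnu8/8.3.0   2) mvapich2/2.3.2
```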
Loading the Telemac module should pull in all of the dependencies needed to run Telemac; you do not need to set up Python or MVAPICH2 yourself.
To load the module run:
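```
module load telemac/v7
```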
Then, if you check to see what modules you have loaded you will see:
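A sketch of the resulting list (the ordering and numbering are illustrative; the module names are the ones discussed below):

```
$ module list

Currently Loaded Modules:
  1) gnu8/8.3.0                 4) legacy/1.0
  2) intel/2017.1.132           5) anaconda2/5.3.1
  3) mvapich2-intel/intel-2.2   6) telemac/v7
```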
The telemac/v7 module is there, but the anaconda2/5.3.1 module has also been added. You can also see that the MVAPICH2 module has changed from mvapich2/2.3.2 to mvapich2-intel/intel-2.2. In addition, the intel/2017.1.132 compiler module has been added, along with a module called legacy/1.0. The gnu8/8.3.0 module is still loaded because the Intel compilers require it.
Telemac differs from most programs run on the cluster in that, to submit a job, you simply run a Telemac command rather than writing and submitting a job script yourself. The general Telemac commands for running models are:
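In the standard Telemac distribution these are Python wrapper scripts named after each solver module, for example:

```
telemac2d.py <case file> [options]
telemac3d.py <case file> [options]
```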
A typical command to submit a job would be something like:
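For instance, a sketch using the 10-node, 6-tasks-per-node allocation discussed below (the case file name is a placeholder):

```
telemac2d.py model.cas --ncsize=60 --nctile=10 --ncnode=6
```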
To find out what options there are for these programs run:
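```
telemac2d.py --help
```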
In setting up how these parameters are translated to SLURM, I found that the descriptions are misleading. Here is what I ended up with in the configuration file:
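A sketch of the relevant hpc_stdin block, using Telemac's angle-bracket template variables and mapping <nctile> to SLURM nodes and <ncnode> to tasks per node, consistent with the swapped meanings described below (the exact directives and variable names in the installed systel.cis-centos.cfg may differ):

```
hpc_stdin: #!/bin/bash
  #SBATCH --job-name=<jobname>
  #SBATCH --nodes=<nctile>
  #SBATCH --ntasks-per-node=<ncnode>
  #SBATCH --partition=<queue>
  <py_runcode>
hpc_runcode: sbatch < <hpc_stdin>
```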
I found that the descriptions in --help for --ncnode and --nctile are backwards. That is, --nctile refers to the number of nodes to allocate and --ncnode refers to the number of tasks/cores per node to run. So in the example command above:
it will allocate 10 nodes and 6 tasks/cores per node for a total of 60 tasks/cores/processes to run the job with.
When this command is run, a number of things happen:
- a new directory is created, named from the input file and the current date/time
- a new program is compiled in that directory based on the settings in the input file
- a job script is created
- the job is submitted
Here is an example:
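A sketch of a session (the case file name, option values, and directory timestamp are all illustrative):

```
$ telemac2d.py model.cas --ncsize=60 --nctile=10 --ncnode=6
$ ls
model.cas  model.cas_2019-10-02-10h30min15s
```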
The directory that gets created has the following:
And the SLURM script that was made (called HPC_STDIN) looks like:
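A sketch of what a generated HPC_STDIN might contain for the 10-node, 6-tasks-per-node example, assuming the template above (the job name, partition, and run line are placeholders):

```
#!/bin/bash
#SBATCH --job-name=model
#SBATCH --nodes=10
#SBATCH --ntasks-per-node=6
#SBATCH --partition=haswell
mpirun <compiled executable>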