Environment Modules

Introduction

The JHPCE cluster uses modulefiles to allow users to configure their shell environments. Some applications will not run until you load the corresponding modulefile. A handful of widely used modulefiles are loaded by default when you log into the cluster (e.g., SGE, R, gcc, perl, etc.). To see which modules are loaded, you can enter the following command at the shell prompt:

module list

Modulefiles cure the age-old headaches associated with configuring paths, environment variables, and different software versions. For example, the gcc and open64 compilers need different libraries. Some users need python 2.6 while other users need python 2.7 or python 3. Some users want a standard stable R, while some want the latest and greatest development version of R that was compiled the night before.

We use the lmod modulefile system developed at the Texas Advanced Computing Center (TACC).

Basic module commands for users

A modulefile is a script that sets up the paths and environment variables that are needed for a particular application or development environment. Most users will just use our modulefiles, but no doubt some of you will want finer control over your shell environment, in which case you can start developing your own custom modulefiles. These are the basic commands that users should know:

module list                 # list your currently loaded modules
module load   <MODULEFILE>  # configures your environment according to modulefile 
module unload <MODULEFILE>  # rolls back the configuration performed by the associated load
module avail                # shows what modules are available for loading
module swap <OLD> <NEW>     # unloads <OLD> modulefile and loads <NEW> modulefile
module initadd <MODULEFILE> # configure a module to be loaded at every login
module spider               # lists all modules, including ones not in your MODULEPATH
module spider <NAME>        # search for a module whose name includes <NAME>

Please refer to the TACC documentation for more details.
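For example, a hypothetical session that swaps the default compiler might look like the following (the module names come from the defaults listed in the next section; the exact output format depends on the Lmod version):

```
$ module list
Currently Loaded Modules:
  1) sge/2011.11p1   2) gcc/4.4.7   3) R/all   4) perl/5.10.1
$ module swap gcc/4.4.7 gcc/4.8.1
$ module list
Currently Loaded Modules:
  1) sge/2011.11p1   2) R/all   3) perl/5.10.1   4) gcc/4.8.1
```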

Defaults

By default the following modules are loaded on all compute hosts and the login hosts when you log in

sge/2011.11p1  # allows access to grid engine commands
gcc/4.4.7      # environment for gcc 4.4.7
R/all          # environment for all versions of R. Visit our R page
perl/5.10.1    # perl 5.10.1 and libraries. Visit our perl page

By default the following modules are loaded on all compute hosts

matlab         # environment for matlab
stata          # environment for stata

By default the following modules are loaded only on the hosts of the appropriate queue

sas            # loaded on hosts in the sas.q queue
mathematica    # loaded on hosts in the math.q queue

Configuring your .bashrc

It is critical that your .bashrc file sources the system-wide bashrc file. Otherwise nothing will work! After you source the system-wide bashrc file you can tailor your environment variables for any applications or versions that you want. For example:

# Always source the global bashrc
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# If I prefer gcc/4.8.1 as my default compiler
module load gcc/4.8.1

Community maintained applications

We use a community-based model of application maintenance to support our diverse user base. Briefly, this means that we support essentially no applications as a service center. Instead we encourage and facilitate power users to maintain their tools in a manner that makes their tools available to all users. These users maintain their applications as well as the corresponding modulefiles. Below is a list of applications and application suites that are maintained by community maintainers. Please refer to their documentation for details. Also please be considerate. Maintaining software for you is not their day job. There is absolutely no point in getting bent out of shape if they can’t (or won’t) service your request.

Description        Maintainer       Documentation
R                  Kasper Hansen    Documentation
Perl               Jiong Yang       Documentation
Python             Alyssa Frazee    Documentation
ShortRead Tools    Kasper Hansen    Documentation

Frequently asked questions

How do I run array jobs on the JHPCE Cluster?

Array jobs allow multiple instances of a program to be run via a single qsub command.  This can often be more convenient than running numerous repetitive qsubs of the same program. The different instances of the job that get run are known as “tasks”.  These task values are numeric, and are specified by using the "-t START-END" option to qsub. The specific task is referenced within the qsub script via the $SGE_TASK_ID environment variable.

As an example, suppose you have 3 data files you want to run your program against:

$ ls data*
data1    data2    data3

In this simple example, the SGE script simply "cat"s each file.

$ more script1.sh
#!/bin/bash
#$ -cwd

# Each task cats the data file whose number matches its task ID
FILENAME="data$SGE_TASK_ID"
cat "$FILENAME"

exit 0

When the job is submitted, the "-t" option is used to specify the range of tasks to be run, so in our example, the command to submit 3 tasks, numbered 1, 2, and 3, would be "qsub -t 1-3 script1.sh". Within the script, the $SGE_TASK_ID variable will be set to 1, 2, and 3 in the 3 instances of the script that get run.

$ qsub -t 1-3 script1.sh
Your job-array 5204694.1-3:1 ("script1.sh") has been submitted
$ qstat
job-ID  prior   name        user    state submit/start at     queue            slots ja-task-ID  
----------------------------------------------------------------------------------------------
5204694 0.00000 script1.sh mmill116 qw    06/27/2018 18:12:56                      1 1-3:1
$ qstat
job-ID  prior   name        user    state submit/start at     queue            slots ja-task-ID  
----------------------------------------------------------------------------------------------
5204694 0.59661 script1.sh mmill116 r     06/27/2018 18:12:59 shared.q@compute-087 1 1
5204694 0.54831 script1.sh mmill116 r     06/27/2018 18:12:59 shared.q@compute-086 1 2
5204694 0.53220 script1.sh mmill116 r     06/27/2018 18:12:59 shared.q@compute-054 1 3
$ qstat
$ ls
data1   data3       script1.sh.e5204694.1  script1.sh.e5204694.3  script1.sh.o5204694.2
data2   script1.sh  script1.sh.e5204694.2  script1.sh.o5204694.1  script1.sh.o5204694.3

The result of running this qsub would be 3 output files, where each output file has the task ID appended to it.

Now consider a more complicated scenario where the file names are not neatly numbered. One way to handle this situation is to create a file that contains a list of the files, and then use the $SGE_TASK_ID number to refer to the line number of the entry in that file to get to the file name. For this example, let’s say we have 3 files:

$ ls
first   second   third      

We could create a file list using the “ls” command:

$ ls > file-list
$ cat file-list
first
second
third

We can now create an SGE script that uses the awk command to pull out the line from file-list whose line number matches the value of $SGE_TASK_ID (there are of course numerous other Unix tools that could be used instead of awk).

$ cat script2.sh
#!/bin/bash
#$ -cwd

# Print the line of file-list whose line number equals the task ID;
# -v passes $SGE_TASK_ID into awk safely, avoiding shell-quoting pitfalls
FILENAME=$(awk -v n="$SGE_TASK_ID" 'NR == n' file-list)
cat "$FILENAME"

exit 0
$ qsub -t 1-3 script2.sh

Submitting this array job runs 3 instances of the script2.sh script; each instance reads the filename from the line of file-list whose line number matches its $SGE_TASK_ID value. As in our previous example, the 3 tasks create 3 output files, and each output file contains the contents of the respective input file.
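You can sanity-check this line-lookup technique locally, without submitting anything to SGE, by setting $SGE_TASK_ID by hand. This is only an illustrative sketch; under SGE the scheduler sets the variable for each task:

```shell
#!/bin/bash
# Simulate one array task's file lookup locally (SGE_TASK_ID is set by
# hand here; under SGE the scheduler sets it for each task)
cd "$(mktemp -d)"                  # scratch directory for the demo
printf 'first\nsecond\nthird\n' > file-list
SGE_TASK_ID=2                      # pretend we are task 2
FILENAME=$(awk -v n="$SGE_TASK_ID" 'NR == n' file-list)
echo "$FILENAME"                   # prints "second"
```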

How do I get the Rstudio program to work on the cluster?

The RStudio program (https://www.rstudio.com/products/rstudio/) is a graphical development interface to the R statistical package.  Some people find the graphical RStudio program helpful in organizing R projects, and writing and debugging R programs.

To run rstudio on the JHPCE cluster, you can use the following steps:

  • First, make sure you have an X11 server installed on your laptop/desktop (either XQuartz for macOS, or MobaXterm for PCs).
  • Next, ssh into the JHPCE cluster, making sure that X11 forwarding is enabled for SSH.  To enable X11 forwarding from macOS, add the “-X” option to your ssh command.  For MobaXterm on Windows, X11 forwarding is enabled by default.
  • Once you are on the cluster, use “qrsh” to connect to a compute node.
  • Load the “rstudio” module by running “module load rstudio”.
  • Start the rstudio program by running “rstudio”.
  • Within a couple of seconds, the RStudio interface should be displayed.

An example session for user “bob” would look something like:

BobsMac$ ssh -X bob@jhpce01.jhsph.edu
Last login: Mon Dec 26 08:24:01 2016 from 10.11.12.13
---
Use of this system constitutes agreement to adhere to all applicable 
JHU and JHSPH network and computer use policies.
---
[jhpce01 /users/bob ]$ qrsh
[compute-072 /users/bob]$ module load rstudio
[compute-072 /users/bob]$ rstudio

[Screenshot: the RStudio interface displayed on the desktop via X11 forwarding]

Please be aware that rstudio is a very graphics-heavy program and uses a fair amount of network bandwidth.  You will need to make sure that you have a fairly fast network connection in order to use rstudio effectively.

Why do I get memory errors when running Java?

You may see errors such as the following when you try to run Java:

$ java
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

This is because the default maximum heap size for Java is 32GB, while the JHPCE cluster defaults for mem_free and h_vmem are only 2GB and 3GB respectively.  The default settings for qrsh are therefore too small to accommodate the memory required by Java.  You have 3 options to get this to work.

1) If you think you will really need 32GB of memory for your java program, you can increase your mem_free and h_vmem settings of your qrsh command:

jhpce01: qrsh -l mem_free=40G,h_vmem=40G
compute-085: java

2) More likely, you do not need 32GB for your Java program, so you can direct Java to use less memory by using the “-Xmx” and “-Xms” options to Java.  For instance, if you want to set the initial heap size to 1GB and the maximum heap size to 2GB you could use:

jhpce01: qrsh 
compute-085: java -Xms1g -Xmx2g

3) An alternative way to set the Java memory settings is to use the “_JAVA_OPTIONS” environment variable.  This is useful if the call to run java is embedded within a  script that cannot be altered.  For instance, if you want to set the initial heap size to 1GB and the maximum heap size to 2GB you could use:

jhpce01: qrsh
compute-085: export _JAVA_OPTIONS="-Xms1g -Xmx2g" 
compute-085: java

Why was RAM made a consumable resource on the cluster?

Previously, SGE on the JHPCE cluster was configured NOT to treat RAM as a “consumable” resource. A recent change to the SGE configuration made RAM “consumable”, so it now gets reserved for a job when it is requested.  What does this mean?

As an example, for simplicity's sake, let’s say you have a cluster with just 1 node, and that node has 20GB of RAM.  If you run a job that requests 8GB of RAM (-l mem_free=8G,h_vmem=9G) it will start to run on the node immediately. Now, it takes this job a few minutes to actually use all 8GB – let’s say it consumes 2GB/minute, so after 4 minutes all 8GB will be in use.  A minute in, when the running job is using 2GB of RAM, a second job comes along and requests 8GB of RAM.  SGE will see that there is still 18GB of RAM free on the node and start the second job.  A minute after that, a third job comes along, also requesting 8GB.  The first job is using 4GB, the second job is using 2GB, the node has 14GB free, so SGE, seeing that 8GB is available, starts the third job.  So now you have 3 jobs running that will eventually need 24GB of RAM in total, and there is only 20GB on the system, so at some point the node becomes RAM starved and the Linux oom-killer gets invoked to kill a process.  (For extra credit – at what time does the node run out of RAM? 🙂 )

The change made to the cluster alters the behavior of SGE so that RAM is “consumable”: when you request 8GB, SGE marks that 8GB as reserved.  In the above example, the first 2 jobs would have run, SGE would have marked 16GB of RAM as “consumed”, and the third job would not have started until one of the other jobs finished.  The biggest downside to this approach is that if people request much more RAM than their jobs need, jobs will have to wait longer to run, and resources may go unused.  If, in the above example, the first job had requested 15GB of RAM “to be safe”, that would have prevented the second job from starting until the first completed, even though the 2 jobs could have run concurrently.
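The timeline in the example above can be worked out with a few lines of bash arithmetic (this is just a sketch of the bookkeeping; nothing here talks to SGE):

```shell
#!/bin/bash
# Jobs start at minutes 0, 1 and 2; each one's usage grows 2GB/minute,
# capped at its 8GB request. The node has 20GB in total.
usage() {                  # RAM held by one job, given minutes since it started
    local m=$1
    (( m < 0 )) && m=0     # job has not started yet
    local u=$(( m * 2 ))
    (( u > 8 )) && u=8     # usage never exceeds the 8GB request
    echo "$u"
}
for t in 0 1 2 3 4 5; do
    total=$(( $(usage "$t") + $(usage $((t - 1))) + $(usage $((t - 2))) ))
    echo "minute $t: ${total}GB of 20GB in use"
done
```

The total reaches 18GB at minute 4 and 22GB at minute 5, so the node runs out of RAM at about the 4.5-minute mark – which answers the extra-credit question.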

My X11 forwarding stops working after 20 minutes.

X11 forwarding can be enabled in your ssh session using the -X option to the ssh command:

$ ssh -X username@jhpce01.jhsph.edu

This will allow you to run X-based programs on the JHPCE cluster and display them back to the X server running on your desktop (such as XQuartz on Mac computers).  On some Mac computers X11 forwarding will work for a while but may eventually time out, with the error message:

Xt error: Can't open display: localhost:15.0

This error comes from the “ForwardX11Timeout” setting, which defaults to 20 minutes.  To avoid this issue, a larger timeout can be supplied on the command line, for example 336 hours (2 weeks):

$ ssh -X username@jhpce01.jhsph.edu -o ForwardX11Timeout=336h

or it can be changed persistently on your desktop by adding the line:

ForwardX11Timeout 336h

to the end of the system-wide ssh_config file (e.g. /etc/ssh/ssh_config), or to your own ~/.ssh/config file.  Note: a value higher than 596h may cause the X window server on your desktop to fail, as it is greater than 2^31 milliseconds and will exceed the signed 32-bit size of the “ForwardX11Timeout” value.
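For example, a ~/.ssh/config entry combining these settings might look like the following (the “jhpce” host alias is just an illustrative name):

```
# connect with: ssh jhpce
Host jhpce
    HostName jhpce01.jhsph.edu
    ForwardX11 yes
    ForwardX11Timeout 336h
```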

How do I copy a large directory structure from one place to another?

As an example, to copy a directory tree from /home/bst/bob/src to /dcs01/bob/dst, first create a cluster script, let’s call it “copy-job”, that contains the line:

rsync -avzh /home/bst/bob/src/ /dcs01/bob/dst/

Next, submit a batch job to the cluster

qsub -cwd -m e -M bob@jhu.edu copy-job

This will submit the “copy-job” script to the cluster, which will run the job on one of the compute nodes, and send an email when it finishes.
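Putting it together, the complete “copy-job” script might look like this sketch (the #$ line is an optional embedded SGE directive; the -m e and -M options on the qsub command ask SGE to email bob@jhu.edu when the job ends):

```
#!/bin/bash
#$ -cwd
# -a: recurse and preserve permissions/times, -v: verbose,
# -z: compress in transit, -h: human-readable sizes
rsync -avzh /home/bst/bob/src/ /dcs01/bob/dst/
```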

My app is complaining that it can’t find a shared library (e.g. libgfortran.so.1). Could you please install it?

We would guess that 9 times out of 10, the allegedly missing library is there. The problem is that your application is looking for the version of the library that is compatible with the old system software. It will not help to point your application to the new libraries. They are more than likely to be incompatible with the new system and we won’t help you debug any problems  if you try to do this. The correct solution is to reinstall your software. If the problem persists after the reinstallation, then please contact us and we will install standard libraries that are actually missing.

My app claims it’s out of disk space, but I see there is plenty of space, what gives?

By default, every user should have a .sge_request file in their home directory.  This file contains a line like this:

-l h_fsize=10G

This limits the maximum size of any file your jobs create to 10GB.  If you plan on creating larger files you should increase this limit, either in the .sge_request file before you start your qrsh session, or in the batch script you submit via qsub. From the command line, you would start a qrsh session as follows:

qrsh  -l h_fsize=300G
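In a batch script submitted via qsub, the same limit can be requested with an embedded SGE directive, e.g. (the program name here is hypothetical):

```
#!/bin/bash
#$ -cwd
#$ -l h_fsize=300G

./my_big_output_program    # hypothetical program that writes files larger than 10GB
```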

ssh gave a scary warning: REMOTE HOST IDENTIFICATION HAS CHANGED!

Go into the ~/.ssh directory of your laptop/desktop and edit the known_hosts file.
Search for the line that starts with the host that you ssh’d to. Delete that line (it is probably a long line that wraps). Then try again.

Why aren’t SGE commands, or R, or matlab, or… available to my cron job?

cron jobs are not launched from a login shell, but the module commands and the JHPCE default environment are initialized automatically only when you log in. Consequently, in a cron job, you have to do the initialization yourself. Do this by wrapping your cron job in a bash script that initializes the module command and then loads the default SGE modules. Your bash shell script should start with the following lines:

#!/bin/bash

# Source the global bashrc
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
module load JHPCE_DEFAULT_ENV

This should allow your cron jobs to run within SGE.
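The wrapper script is then what your crontab invokes, rather than calling SGE commands or R directly; for example (the path and schedule here are illustrative):

```
# run the wrapper script every day at 2:00 AM
0 2 * * * /users/bob/my-cron-wrapper.sh
```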
