Environment Modules

Introduction

The JHPCE cluster uses modulefiles to allow users to configure their shell environments. Some applications will not run until you load the corresponding modulefile. A handful of widely used modulefiles are loaded by default when you log into the cluster (e.g., SLURM, R, gcc, and perl). To see which modules are loaded, enter the following command at the shell prompt:

module list

Modulefiles cure the age-old headaches associated with configuring paths, environment variables, and different software versions. For example, the gcc and open64 compilers need different libraries. Some users need python 2.6 while others need python 2.7 or python 3. Some users want a standard, stable R, while others want the latest and greatest development version of R that was compiled the night before.

We use the Lmod modulefile system developed at the Texas Advanced Computing Center (TACC).

Basic module commands for users

A modulefile is a script that sets up the paths and environment variables needed for a particular application or development environment. Most users will just use our modulefiles, but no doubt some of you will want finer control over your shell environment, in which case you can develop your own custom modulefiles. Here are the basic commands that users should know:

module list                 # list your currently loaded modules
module load   <MODULEFILE>  # configures your environment according to modulefile 
module unload <MODULEFILE>  # rolls back the configuration performed by the associated load
module avail                # shows what modules are available for loading
module swap <OLD> <NEW>     # unloads <OLD> modulefile and loads <NEW> modulefile
module initadd <MODULEFILE> # configure a module to be loaded at every login
module spider               # lists all modules, including ones not in your MODULEPATH
module spider <NAME>        # search for a module whose name includes <NAME>
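
For example, a common workflow is to check which versions of a package are installed and then load or swap to the one you need. The R versions shown below are only illustrations; run “module avail” to see what actually exists on the cluster:

module avail R        # list the R modulefiles that are installed
module load R         # load the default R version
module swap R R/4.3   # hypothetical example: switch to a specific version
module list           # confirm what is now loaded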

Please refer to the TACC documentation for more details.

Defaults

By default the following modules are loaded on all compute hosts and the login hosts when you log in:

JHPCE_ROCKY9_DEFAULT_ENV  # allows access to SLURM commands and the JHPCE default environment
JHPCE_tools/3.0           # A set of scripts and tools for JHPCE

By default the following modules are loaded only on the hosts of the appropriate queue:

sas            # loaded on the sas.q host
mathematica    # loaded on the math.q host

Configuring your .bashrc

It is critical that your .bashrc file sources the system-wide bashrc file. Otherwise nothing will work! After you source the system-wide bashrc file you can tailor your environment variables for any applications or versions that you want. For example:

# Always source the global bashrc
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# If I prefer gcc/4.8.1 as my default compiler
module load gcc/4.8.1

Community maintained applications

We use a community-based model of application maintenance to support our diverse user base. Briefly, this means that we support essentially no applications as a service center. Instead we encourage and facilitate power users to maintain their tools in a manner that makes their tools available to all users. These users maintain their applications as well as the corresponding modulefiles. Below is a list of applications and application suites that are maintained by community maintainers. Please refer to their documentation for details. Also please be considerate. Maintaining software for you is not their day job. There is absolutely no point in getting bent out of shape if they can’t (or won’t) service your request.

Description                            Maintainer      Documentation
R                                      Kasper Hansen   Documentation
Perl/Python                            Jiong Yang      Documentation
ShortRead Tools                        Kasper Hansen   Documentation
Numerous stats and Genomics packages   Lieber          Documentation

Frequently asked questions

I’m on a Mac, and the ~C command to interrupt an ssh session isn’t working. It used to, but I upgraded MacOS and now it does not work.

Some versions of MacOS disable, by default, the ability to send an SSH escape with “~C”.  To re-enable it on your Mac, you need to set the “EnableEscapeCommandline” option.  You can do this either by running “ssh -o EnableEscapeCommandline=yes . . .” or by editing your ~/.ssh/config file and adding the following line at the top of that file:

EnableEscapeCommandline=yes

This should now let you use “~C” to escape to the ssh command line.
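
If you prefer to keep this setting in your ~/.ssh/config file, a minimal sketch would look like the following (the “Host *” pattern applies it to every connection; narrow the pattern if you only want it for the cluster):

Host *
    EnableEscapeCommandline yes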

How do I get the RStudio program to work on the cluster?

The RStudio program (https://www.rstudio.com/products/rstudio/) is a graphical development interface to the R statistical package.  Some people find the graphical RStudio program helpful in organizing R projects, and writing and debugging R programs.

To run rstudio on the JHPCE cluster, you can use the following steps:

  • First, make sure you have an X11 server installed on your laptop/desktop (either XQuartz for MacOS, or MobaXterm for PCs).
  • Next, ssh into the JHPCE cluster, making sure the X11 forwarding option is used for SSH.  To enable X11 forwarding from MacOS, add the “-X” option to your ssh command.  For MobaXterm on Windows, X11 forwarding is enabled by default.
  • Once you are on the cluster, use “srun --pty --x11 bash” to connect to a compute node.
  • Load the “rstudio” module by running “module load rstudio”.
  • Start the rstudio program by running “rstudio”.
  • Within a couple of seconds, the RStudio interface should be displayed.

An example session for user “bob” would look something like:

BobsMac$ ssh -X bob@jhpce01.jhsph.edu
Last login: Mon Dec 26 08:24:01 2016 from 10.11.12.13
---
Use of this system constitutes agreement to adhere to all applicable 
JHU and JHSPH network and computer use policies.
---
[jhpce01 /users/bob ]$ srun --pty --x11 bash
[compute-072 /users/bob]$ module load rstudio
[compute-072 /users/bob]$ rstudio

[Screenshot: the RStudio window displayed on the local desktop via X11 forwarding]

Please be aware that rstudio is a very graphics-heavy program and uses a fair amount of network bandwidth.  You will need to make sure that you have a fairly fast network connection in order to use rstudio effectively.

Why do I get memory errors when running Java?

UPDATE – 2024-02-01 – The JHPCE Cluster has been recently migrated from SGE to SLURM, and this no longer appears to be an issue:

[user@login31 ~]$ srun --pty --x11 bash
[user@compute-127 ~]$ java -version
openjdk version "1.8.0_372"
OpenJDK Runtime Environment (build 1.8.0_372-b07)
OpenJDK 64-Bit Server VM (build 25.372-b07, mixed mode)
[user@compute-127 ~]$ module avail java
----------------------- /jhpce/shared/jhpce/modulefiles ------------------------
   java/19 (D)
------------------------ /jhpce/shared/libd/modulefiles ------------------------
   java/17    java/18
  Where:
   D:  Default Module
[user@compute-127 ~]$ module load java/19
[user@compute-127 ~]$ java -version
java version "19.0.1" 2022-10-18
Java(TM) SE Runtime Environment (build 19.0.1+10-21)
Java HotSpot(TM) 64-Bit Server VM (build 19.0.1+10-21, mixed mode, sharing)

The information below is no longer relevant in JHPCE, but is being retained for historical purposes. 

You may see errors such as the following when you try to run Java:

$ java
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

This is due to the default maximum heap size in Java being 32GB, while the JHPCE cluster defaults for mem_free and h_vmem are set to 2GB and 3GB respectively.  The default settings for qrsh are therefore too small to accommodate the memory required by Java.  You have 3 options to get this to work:

1) If you think you will really need 32GB of memory for your java program, you can increase your mem_free and h_vmem settings of your qrsh command:

jhpce01: qrsh -l mem_free=40G,h_vmem=40G
compute-085: java

2) More likely, you do not need 32GB for your Java program, so you can direct Java to use less memory by using the “-Xmx” and “-Xms” options to Java.  For instance, if you want to set the initial heap size to 1GB and the maximum heap size to 2GB you could use:

jhpce01: qrsh 
compute-085: java -Xms1g -Xmx2g

3) An alternative way to set the Java memory settings is to use the “_JAVA_OPTIONS” environment variable.  This is useful if the call to run java is embedded within a  script that cannot be altered.  For instance, if you want to set the initial heap size to 1GB and the maximum heap size to 2GB you could use:

jhpce01: qrsh
compute-085: export _JAVA_OPTIONS="-Xms1g -Xmx2g" 
compute-085: java

Why was RAM made a consumable resource on the cluster?

UPDATE – 2024-02-01 – The JHPCE Cluster has been recently migrated from SGE to SLURM. RAM is still considered a consumable resource in SLURM, and you can read the historical rationale for that below.

The way that SGE on the JHPCE cluster had been configured was to NOT treat RAM as a “consumable” resource.  The recent change to the SGE configuration made RAM “consumable”, so that it now gets reserved for a job when it is requested.  What does this mean?

As an example, for simplicity’s sake, let’s say you have a cluster with just 1 node, and that node has 20GB of RAM.  If you run a job that requests 8GB of RAM (mem_free=8GB,h_vmem=9GB) it will start to run on the node immediately.  Now, this job takes a few minutes for all 8GB to actually be used by the program – let’s say it consumes 2GB/minute, so after 4 minutes all 8GB will be in use.  One minute after the first job starts, when it is using 2GB of RAM, a second job comes along and requests 8GB of RAM.  SGE will see that there is still 18GB of RAM free on the node and start the second job.  A minute later, a third job comes along, also requesting 8GB.  The first job is using 4GB, the second job is using 2GB, and the node has 14GB free, so SGE, seeing that 8GB is available, starts the third job.  So now you have 3 jobs running that will eventually need 24GB of RAM in total, and there is only 20GB on the system, so at some point the node becomes RAM starved and the Linux oom-killer gets invoked to kill a process.  (For extra credit – at what time does the node run out of RAM? 🙂 )
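
For the curious, one way to work out the extra-credit answer, assuming each job ramps up at 2GB/minute and then holds at 8GB as described above:

t = 0 min:   job 1 starts                                ( 0GB in use)
t = 1 min:   job 1 at 2GB; job 2 starts                  ( 2GB in use)
t = 2 min:   job 1 at 4GB, job 2 at 2GB; job 3 starts    ( 6GB in use)
t = 4 min:   job 1 at 8GB, job 2 at 6GB, job 3 at 4GB    (18GB in use)
t = 4.5 min: job 1 at 8GB, job 2 at 7GB, job 3 at 5GB    (20GB in use)

So the node hits its 20GB of RAM roughly 4.5 minutes after the first job starts, and the oom-killer follows shortly after.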

The change made to the cluster alters the behavior of SGE so that RAM is “consumable”: when you request 8GB, SGE marks that 8GB as reserved.  In the above example, the first 2 jobs would have run, and SGE would have marked 16GB of RAM as “consumed”, so the third job would not have run until one of the other jobs finished.  The biggest downside to this approach, though, is that if people request much more RAM than their jobs need, then jobs will have to wait longer to run and resources may go unused.  If, in the above example, the first job had requested 15GB of RAM “to be safe”, that would have prevented the second job from starting until the first completed, even though the two jobs could have run concurrently.

My X11 forwarding stops working after 20 minutes.

X11 forwarding can be enabled in your ssh session using the -X option to the ssh command:

$ ssh -X username@jhpce01.jhsph.edu

This will allow you to run X-based programs from the JHPCE cluster back to the X server running on your desktop (such as XQuartz on Mac computers).  On some Mac computers X11 forwarding will work for a while but may eventually time out, with the error message:

Xt error: Can't open display: localhost:15.0

This error comes from the “ForwardX11Timeout” setting, which defaults to 20 minutes.  To avoid this issue, a larger timeout can be supplied on the command line, say 336 hours (2 weeks):

$ ssh -X username@jhpce01.jhsph.edu -o ForwardX11Timeout=336h

or it can be changed in the /etc/ssh/ssh_config file on your desktop by adding the line:

ForwardX11Timeout 336h

to the end of the /etc/ssh/ssh_config file, or to your own ~/.ssh/config file.  Note: a value higher than 596h may cause the X window server on your desktop to fail, as it is greater than 2^31 milliseconds and will overflow the signed 32-bit value of the “ForwardX11Timeout” setting.
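
Putting the X11 options together, a sketch of a ~/.ssh/config entry for the cluster might look like this (the “jhpce” alias and the “bob” username are placeholders; substitute your own):

Host jhpce
    HostName jhpce01.jhsph.edu
    User bob
    ForwardX11 yes
    ForwardX11Timeout 336h

With this in place, “ssh jhpce” enables X11 forwarding with the longer timeout, without any extra command-line options.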

How do I copy a large directory structure from one place to another?

As an example, to copy a directory tree from /home/bst/bob/src to /dcs01/bob/dst, first create a cluster script, let’s call it “copy-job”, that contains the line:

rsync -avzh /home/bst/bob/src/ /dcs01/bob/dst/

Next, submit a batch job to the cluster

sbatch --mail-type=FAIL,END --mail-user=bob@jhu.edu copy-job

This will submit the “copy-job” script to the cluster, which will run the job on one of the compute nodes, and send an email when it finishes.
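
A slightly fuller sketch of the “copy-job” script is shown below; the #SBATCH resource requests are illustrative assumptions, not required values, so adjust them to suit your copy:

#!/bin/bash
#SBATCH --job-name=copy-job
#SBATCH --mem=2G              # rsync needs very little memory
#SBATCH --time=24:00:00       # illustrative time limit for a large copy

rsync -avzh /home/bst/bob/src/ /dcs01/bob/dst/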

My app is complaining that it can’t find a shared library (e.g. libgfortran.so.1) – could you please install it?

We would guess that 9 times out of 10, the allegedly missing library is there.  The problem is that your application is looking for the version of the library that is compatible with the old system software.  It will not help to point your application to the new libraries; they are more than likely to be incompatible with the new system, and we won’t help you debug any problems if you try to do this.  The correct solution is to reinstall your software.  If the problem persists after the reinstallation, then please contact us and we will install standard libraries that are actually missing.

My app claims it’s out of disk space, but I see there is plenty of space, what gives?

UPDATE – 2024-02-01 – The JHPCE Cluster has been recently migrated from SGE to SLURM.  SLURM does not have a file size limit, so this information is no longer relevant, but it is being retained for historical purposes.  If you do get an “out of space” message, you can use the unix “df” command to look at the disk usage of your current filesystem (df -h .)

By default, every user should have a .sge_request file in their home directory.  This file contains a line like this:

-l h_fsize=10G

This limits the size of all created files to 10GB.  If you plan on creating larger files you should increase this limit, either in the .sge_request file before you start your qrsh session, or in the batch script you submit via qsub. From the command line,  you would start a qrsh session as follows:

qrsh -l h_fsize=300G

ssh gave a scary warning: REMOTE HOST IDENTIFICATION HAS CHANGED!

Go into the ~/.ssh directory of your laptop/desktop and edit the known_hosts file.
Search for the line that starts with the host that you ssh’d to. Delete that line (it is probably a long line that wraps). Then try again.
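
Alternatively, on most systems ssh-keygen can remove the stale entry for you; for example, if the warning came from connecting to jhpce01.jhsph.edu:

$ ssh-keygen -R jhpce01.jhsph.edu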

Why aren’t SLURM commands, or R, or matlab, or… available to my cron job?

cron jobs are not launched from a login shell, but the module commands and the JHPCE default environment are initialized automatically only when you log in. Consequently, in a cron job you have to do the initialization yourself. Do this by wrapping your cron job in a bash script that initializes the module command and then loads the default JHPCE environment module. Your bash shell script should start with the following lines:

#!/bin/bash

# Source the global bashrc
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
module load JHPCE_ROCKY9_DEFAULT_ENV

This should allow your cron jobs to run within SLURM.
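
As a hypothetical illustration, the crontab entry that calls such a wrapper script might look like this (the path and the schedule are placeholders):

# run the wrapper script (which sources /etc/bashrc and loads the modules) every night at 2:00 AM
0 2 * * * /users/bob/scripts/nightly-job.sh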
