Dear JHPCE community,
As part of our continuing efforts to use RAM on the JHPCE cluster more efficiently, we will be making a couple of changes to the current cluster configuration. Our analysis of jobs run over the last year has shown that jobs are often run that do not make efficient use of RAM, and this can unfairly impact overall utilization of the cluster. The changes we are making are intended to help you get the most out of the cluster and to make it more fairly available to all users. Please see our recent “Memory Usage and Good Citizenship” blog post at https://jhpce.jhu.edu/2017/05/17/memory_usage_analysis/ for more details.
1) First, we will be lowering the per-user RAM limit for jobs on the shared queue to 512 GB. Our previous limit was 1 TB per user; however, this higher limit at times led to utilization of the cluster being constrained by RAM. We expect that the reduced RAM limit will better balance core and RAM utilization on the cluster. This change will be made on Monday, June 26th at 5:00 PM. There will not be any downtime for this change, and running jobs will be completely unaffected; however, jobs or tasks that start running after the change on Monday will be governed by the new limit.
What this means for you is that if you submit many jobs that each use a lot of RAM, it may take longer for your jobs to complete. For example, if you have jobs that each use 10 GB, you will now only be able to have about 50 jobs running at once, whereas with the old setting you could have 100 running simultaneously. Our analysis shows, though, that the vast majority of users will not be impacted by this change.
We can, of course, increase your RAM limit for a short period if, say, you have a deadline for your jobs and the cluster is not too busy. In those cases, please email bitsupport to have your RAM limit temporarily increased.
2) In the past, we recommended that jobs be submitted with “h_vmem” set slightly higher than “mem_free” (either 1 GB higher or 10% higher). Going forward, you should set “h_vmem” equal to “mem_free”. This will help ensure that RAM on the compute nodes does not become oversubscribed.
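As a quick sketch, a job that needs 10 GB of RAM could be submitted with the two values matching, along these lines (the script name and the 10 GB figure are just placeholders; substitute your own):

```shell
# Hypothetical example of the new guidance: request memory with
# mem_free and set the hard limit h_vmem to the same value.
# Replace myjob.sh and 10G with your actual script and memory need.
qsub -l mem_free=10G,h_vmem=10G myjob.sh
```

Setting the two values equal means a job that exceeds its request is killed rather than quietly eating into RAM that the scheduler has promised to other jobs.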
3) Lastly, we will be performing a weekly analysis of jobs on the JHPCE cluster to identify jobs where RAM is not being utilized efficiently. Each week we will email those users who have run jobs where either a) actual RAM usage was far less than requested, or b) actual RAM usage was much more than requested. We hope these email messages will serve as a tool to help everyone tune their jobs to run more efficiently.
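If you would like to check on your own jobs in the meantime, SGE's accounting tool can report how much memory a finished job actually used, so you can compare it against what you requested (the job ID below is just a placeholder):

```shell
# Hypothetical example: inspect a completed job's actual memory usage.
# "maxvmem" in the qacct output is the peak memory the job used;
# compare it to the mem_free value you requested at submission time.
qacct -j 1234567 | grep -E 'jobname|maxvmem'
```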
Lastly, on a non-RAM-related note, we recently installed a graphical text editor called “gedit” on the JHPCE cluster. The “gedit” editor has an interface similar to “Notepad” on the PC or “TextEdit” on MacOS. Note that you will need a working X11 environment on your laptop/desktop to use “gedit”.
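As a sketch, one common way to get a working X11 session is to enable X11 forwarding when you connect, then launch the editor from your shell (the login hostname below is a placeholder; use the JHPCE login node you normally connect to, and note that MacOS users will need an X server such as XQuartz installed locally):

```shell
# Hypothetical example: connect with X11 forwarding enabled (-X),
# then open a file in gedit; the trailing & keeps your shell usable.
ssh -X username@<login-host>
gedit myscript.sh &
```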
Please let us know if you have any questions about these changes.