Dear JHPCE Community,
On September 14th, a change was made on the JHPCE cluster to the memory allocation algorithm used for scheduling jobs. This change was completed successfully and should have an overall beneficial impact on the performance of the cluster.
One side effect of this change is that memory must now be requested differently when using a multi-core “parallel environment”. Previously, when requesting a “parallel environment”, “mem_free” was set to the total amount of RAM the job needed, and “h_vmem” was set to the total RAM requested divided by the number of slots requested. With this weekend’s change, the “mem_free” value must now also be divided by the number of slots requested.
As an example, a job that needs 160GB of RAM across 8 cores would previously have been submitted as:
$ qsub -l mem_free=160G,h_vmem=20G -pe local 8 job.sh
This should now be submitted as:
$ qsub -l mem_free=20G,h_vmem=20G -pe local 8 job.sh
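In other words, both values are now the total RAM divided by the slot count. As a minimal sketch of that arithmetic (the variable names TOTAL_GB, SLOTS, and PER_SLOT_GB are ours for illustration, not a JHPCE-provided tool):

```shell
# Illustrative only: compute the per-slot memory value for a qsub request.
TOTAL_GB=160   # total RAM the job needs, in GB
SLOTS=8        # number of cores/slots requested with -pe
PER_SLOT_GB=$(( TOTAL_GB / SLOTS ))

# Under the new scheme, mem_free and h_vmem both take the per-slot value.
echo "qsub -l mem_free=${PER_SLOT_GB}G,h_vmem=${PER_SLOT_GB}G -pe local ${SLOTS} job.sh"
```

For the 160GB/8-core example above, this prints the same command shown in the second qsub line.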
Please let us know if you have any questions about this change or need further clarification.