Current Storage Offerings

There are two main categories of storage space available for purchase on the JHPCE cluster, both of which are listed in the chart below.

1) The blue and green lines represent “pay-as-you-go” space.  These spaces include home directory space, legacy storage space, and a couple of “leased” spaces.  For these spaces, one is charged only for the space actually used, and only for the time that data is stored there, at the rate shown in the chart.  As a specific example, using 10 TB of /dcl01/leased space for a year would cost $1200/year.

2) The yellow lines on the chart represent “project spaces” that one has to buy into.  These are the large storage arrays that we build (historically every 18 months or so), funded by the various labs that purchase allocations on them.  We stood up our most recent “project space” (/dcl02) in mid-2018, so no additional “project space” is available currently.  For these “project spaces”, one purchases a set allocation for an initial buy-in fee and then pays an annual storage management fee.  For /dcl02, the buy-in cost was $43/TB.  So, as a specific example, a 10TB allocation of /dcl02 would have had a buy-in cost of $430, plus a $300/year storage management fee.
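The arithmetic behind the two pricing models can be sketched in a few lines of shell. The per-TB figures below are only the example figures quoted above (actual rates fluctuate from quarter to quarter and vary by filesystem), and the $120/TB-year and $30/TB-year numbers are simply the quoted totals divided by the 10TB example allocation:

```shell
#!/bin/bash
# Cost sketch using the example figures from the text above; actual
# per-TB rates vary by quarter and by filesystem.

tb=10   # example allocation size in TB

# Pay-as-you-go (/dcl01/leased example): the quoted $1200/year for
# 10 TB implies $120/TB-year.
payg_annual=$(( tb * 120 ))          # $1200/year

# Project space (/dcl02 example): $43/TB one-time buy-in, plus the
# quoted $300/year on 10 TB, i.e. a $30/TB-year management fee.
buyin=$(( tb * 43 ))                 # $430 one-time
mgmt_annual=$(( tb * 30 ))           # $300/year

echo "pay-as-you-go: \$${payg_annual}/year"
echo "project: \$${buyin} buy-in + \$${mgmt_annual}/year"
```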

The “pay-as-you-go” spaces are more expensive than the “project spaces” because 1) they are built on smaller, more-expensive-per-TB devices, 2) the cost of the physical storage array is rolled into the annual fee instead of being charged as an up-front buy-in cost, and 3) they require more time and effort to maintain.

There are also two types of scratch storage space on the cluster. The first type is the SGE scratch space. The SGE scheduler creates a unique $TMPDIR directory for every job/task run on the cluster; this directory is created when the job starts on a compute node and is removed when the job completes. The $TMPDIR space lives on the compute node’s local internal disk drives, so the amount of scratch space varies from node to node, but will be between 500GB and 4TB. The SGE scratch space will typically be faster than other storage space, so it is useful for intermediate files that do not need to be stored long term, or for jobs that perform multiple reads/writes of the same files.
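A minimal job-script sketch of this stage-through-$TMPDIR pattern might look like the following. The input file and the sort step are placeholders for your own data and computation, and since $TMPDIR is only set by SGE inside a job, the sketch falls back to /tmp so it also runs outside the cluster:

```shell
#!/bin/bash
# Sketch of an SGE job that stages I/O through the per-job $TMPDIR.
# SGE sets $TMPDIR on the compute node; fall back to /tmp so the
# sketch also runs outside a job. "input.txt" and the sort step
# stand in for your own data and computation.
WORK="${TMPDIR:-/tmp}"

printf 'line b\nline a\n' > input.txt         # placeholder input on network storage

cp input.txt "$WORK/"                         # stage input onto fast local disk
sort "$WORK/input.txt" > "$WORK/output.txt"   # repeated I/O happens locally
cp "$WORK/output.txt" .                       # copy only the result back

cat output.txt
```

Because $TMPDIR is deleted when the job ends, anything you want to keep must be copied back to network storage before the script exits, as the last `cp` does here.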

The second type of scratch space is the Personal Scratch space, a network-based (NFS) filesystem with a backend NVMe storage array, which makes it faster than most other storage on the cluster. It is intended for short-term storage of large files, so users are limited to 1TB of space, and files older than 30 days will be purged. The Personal Scratch space can be accessed via the $MYSCRATCH environment variable from the compute and transfer nodes of the cluster.
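Using the Personal Scratch space is just a matter of reading and writing under $MYSCRATCH. A hedged sketch follows; "bigfile.dat" is a placeholder name, and since $MYSCRATCH is only set on the cluster's compute and transfer nodes, the sketch falls back to a throwaway directory so it runs anywhere:

```shell
#!/bin/bash
# Sketch: staging a large temporary file in the Personal Scratch space.
# $MYSCRATCH is set on JHPCE compute and transfer nodes; fall back to a
# throwaway directory so the sketch runs anywhere. "bigfile.dat" is a
# placeholder name.
SCRATCH="${MYSCRATCH:-$(mktemp -d)}"

# Write a 10 MiB placeholder file (your real data would go here).
head -c 10485760 /dev/zero > "$SCRATCH/bigfile.dat"

SIZE=$(wc -c < "$SCRATCH/bigfile.dat")
echo "wrote $SIZE bytes to $SCRATCH/bigfile.dat"

# Files older than 30 days are purged and the quota is 1TB, so copy
# anything worth keeping back to home or project storage, then clean up.
rm "$SCRATCH/bigfile.dat"
```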

Lastly, off-site backup space is available. The /users directory is currently backed up, and other directories can be backed up upon request for a modest fee.

Please email us if you would like to discuss purchasing storage on the JHPCE cluster.

| Type | Location | Env Var | Capacity | Quota | Lifetime | FY2019Q2 Rate | SU | Notes |
|---|---|---|---|---|---|---|---|---|
| temp scratch | /scratch/temp/<JQT> | $TMPDIR | 500GB – 4TB (varies by node) | none | transient | free | | [1] |
| personal scratch | /fastscratch/myscratch/<userid> | $MYSCRATCH | 22TB | 1TB | 30 days | free | | [1][2] |
| leased Lustre | /dcl01/leased, /dcl02/leased | | 70TB | as agreed | intermediate | $35/TB-yr | used TB | |
| leased ZFS | /legacy | | 100TB | from legacy | short | < $1350/TB-yr | used TB | [3] |
| leased ZFS | /starter/starter-02 | | 10TB | 10TB | short | $1,041/TB-yr | used TB | [3] |
| home ZFS | /users/<userid> | $HOME | 34TB | 100GB | long | $345/TB-yr | used TB | |
| project Lustre | /dcl01 | | 3,400TB | as purchased | long | $26–$29/TB-yr + buy-in | purchased TB | |
| project Lustre | /dcl02 | | 2,463TB | as purchased | long | $22/TB-yr + buy-in | purchased TB | [4] |
| project ZFS | /dcs01 | | 688TB | as purchased | long | $20/TB-yr + buy-in | purchased TB | |
| backup ZFS | varies | | 2,675TB | | permanent | $11/TB-yr | purchased TB | [5] |
[All] Rates fluctuate slightly from quarter to quarter based on actual JHPCE expenses and capacities. The rates in the table above are from the most recent quarter.
[1] Scratch space is only visible on compute and transfer nodes; it is not visible on the login node. <JQT> = ‘job’.‘queue’.‘task’
[2] Currently there is a 1TB quota on files in /fastscratch/myscratch and a 30-day file retention limit; however, we reserve the right to alter these limits if need be.
[3] Users with space in /legacy and /starter may want to consider moving to /dcl01/leased, as it is significantly less expensive.
[4] Estimated rate. System went into production in July 2018.
[5] Backups are done of the /users filesystem. Other filesystems may be backed up upon arrangement with PI.