
Joint HPC Exchange


  • About Us


    The Joint High Performance Computing Exchange (JHPCE) is a High-Performance Computing (HPC) facility in the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health. JHPCE began in 2008 as a collaborative effort between Biostatistics and the Computational Biology & Research Computing group in the Department of Molecular Microbiology and Immunology. Since then, the facility has grown to provide HPC services to over 100 labs and departments in the JHU Bloomberg School of Public Health, the JHU School of Medicine, the JHU Carey Business School, the Lieber Institute for Brain Development, the Kennedy Krieger Institute, and numerous departments on the JHU Homewood campus. The facility is open to all Johns Hopkins affiliated researchers.

  • Community


    The JHPCE operates as a formal Common Pool Resource (CPR) hierarchy, with rights to specific resources based on stakeholder ownership. All of the computing resources on the JHPCE cluster have been provided by its various stakeholders. To benefit the entire research community, excess computing capacity is made available to non-stakeholders on an as-available basis, in exchange for fees that defray the stakeholders' operating costs.

    Over the years, JHPCE has provided HPC services to over 3,000 researchers across JHU, with roughly 300 active users in any given quarter.

    The JHPCE cluster has over 100 statistical and genomics applications installed, including R, SAS, plink, and bowtie. Many of these applications were installed, and are maintained, by members of the community. Users may also install their own applications on the cluster, and we encourage sharing applications with other users as "modules".
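    Shared applications of this kind are typically accessed through the standard `module` command (Environment Modules/Lmod). A hypothetical session might look like the following; the module names and versions shown are illustrative, not a list of what is actually installed on JHPCE:

    ```shell
    # List the modules available on the cluster (output is site-specific)
    module avail

    # Load specific applications into your environment
    # (names/versions here are examples only)
    module load R/4.3
    module load plink

    # Show what is currently loaded, then unload everything
    module list
    module purge
    ```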

  • Cluster Details


    The computing and storage systems on the JHPCE cluster are optimized for genomics and biomedical research. The cluster has 85 compute nodes, providing over 4,000 cores, 53 TB of DRAM, and over 20 PB of low-cost networked mass storage (ZFS and Lustre-over-ZFS). The network fabric consists of a 10/40 Gbps Ethernet backbone, and the facility is connected via a 40 Gbps link to the University’s Science DMZ.

    The JHPCE cluster is optimized for the embarrassingly parallel applications that are the bread-and-butter of our stakeholders (e.g., genomics and statistical applications), rather than the tightly-coupled applications typical of traditional HPC fields such as physics, fluid dynamics, and quantum simulation. Job scheduling is performed with the Simple Linux Utility for Resource Management (SLURM).
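    Under SLURM, an embarrassingly parallel workload is commonly submitted as a job array, where each array task runs the same script on a different input. A minimal sketch is below; the resource limits, file names, and `analyze.R` script are illustrative assumptions, not JHPCE defaults:

    ```shell
    #!/bin/bash
    #SBATCH --job-name=sim-array
    #SBATCH --mem=4G                  # memory per task (illustrative)
    #SBATCH --cpus-per-task=1
    #SBATCH --array=1-100             # 100 independent tasks
    #SBATCH --output=logs/%A_%a.out   # %A = job ID, %a = array task index

    # Each task processes one input file, selected by its array index
    # (analyze.R and the input naming scheme are hypothetical)
    Rscript analyze.R "input_${SLURM_ARRAY_TASK_ID}.csv"
    ```

    A script like this would be submitted with `sbatch run_array.sh` and monitored with `squeue -u $USER`; the scheduler then farms the independent tasks out across the compute nodes as capacity allows.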

If your lab is interested in joining the JHPCE community, either as a stakeholder or as a non-stakeholder, please contact the directors (jhpce@jhu.edu) to determine whether we can accommodate your needs.

If your lab is already a member, and you need to add new users, then have the users fill out the JHPCE new user request form.

Mark Miller and Brian Caffo
Co-Directors, JHPCE