
National (Tier-1) Facilities

ARCHER2

ARCHER2, the UK national supercomputing service, offers a capability resource that allows researchers to run simulations and calculations requiring large numbers of processing cores working in a tightly coupled, parallel fashion. Its predecessor, ARCHER, was based around a 118,080-core Cray XC system. The service includes a helpdesk staffed by HPC experts from EPCC, with support from Cray. Access is free at the point of use for academic researchers working in the EPSRC and NERC domains; users can also purchase access at a variety of rates.

The full ARCHER2 system will be an HPE Cray EX supercomputing system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,860 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64-core processors at 2.2 GHz, giving 750,080 cores in total.

ARCHER2 should be capable, on average, of over eleven times the science throughput of its predecessor, ARCHER. This estimate is based on benchmarks of five of the most heavily used research software packages on ARCHER. As with all new systems, the relative speedups over ARCHER vary by software and problem size.



International Facilities

Some facilities around the world may also be accessible to UK users. The list below includes facilities or organisations that can provide access to users from the UK.

  • PRACE - the pan-European HPC infrastructure through which UK users can get access to some of the largest HPC systems in Europe.
  • DOE INCITE - The US Department of Energy makes access to its Leadership Computing facilities available to users worldwide through the INCITE programme.





Regional (Tier-2) Facilities

The Tier-2 layer of HPC forms a vital part of an integrated e-infrastructure landscape; it addresses the gulf in capability between the university Viking system and ARCHER.

Facilities

  • Bede is one of the EPSRC Tier-2 facilities. It primarily comprises 32 IBM Power 9 dual-CPU nodes, each with 4 NVIDIA V100 GPUs and a high performance interconnect. This is the same architecture as the US government’s SUMMIT and SIERRA supercomputers, which occupied the top two places in a recently published list of the world’s fastest supercomputers.

The university does not currently have RSE support for Bede. The HPC team will do their best to assist, but you may get better help on the Bede Slack channel.

  • JADE II: The University of York is a partner of the JADE II facility. Anyone who is involved in AI/ML research and related data science applications and in need of GPUs for scaling up experiments can apply for an account. JADE II harnesses the capabilities of the NVIDIA DGX MAX-Q Deep Learning System and comprises 63 servers, each containing 8 NVIDIA Tesla V100 GPUs linked by NVIDIA’s NVLink interconnect technology. JADE II uses environment modules and SLURM, similar to Viking (see the batch script sketch after this list).

            To apply for an account, please fill in this form:
            https://forms.gle/NEemtVf9yi7mWqpZA
            Once a user has completed the form, they are sent an email with instructions for registering for JADE 2 at STFC.
            There is also a jade2-users Slack channel in the UoY workspace for current users.

  • Cirrus at EPCC is one of the EPSRC Tier-2 HPC facilities. The main resource is a 10,080 core SGI/HPE ICE XA system. Free access is available to academic researchers working in the EPSRC domain and from some UK universities; academic users from other domains and institutions can purchase access. Industry access is available.
  • Isambard at GW4 is one of the EPSRC Tier-2 HPC facilities. Isambard provides multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of hardware platforms. Free access is available to academic researchers working in the EPSRC domain and from some UK universities; academic users from other domains and institutions can purchase access. Industry access is available.
  • Cambridge Service for Data Driven Discovery (CSD3) is one of the EPSRC Tier-2 HPC facilities. CSD3 is a multi-institution service underpinned by an innovative, petascale, data-centric HPC platform, designed specifically to drive data-intensive simulation and high-performance data analysis. Free access is available to academic researchers working in the EPSRC domain and from some UK universities; academic users from other domains and institutions can purchase access. Industry access is available.
  • Athena at HPC Midlands+ is one of the EPSRC Tier-2 HPC facilities. The main resource is a 14,336 core Huawei X6000 system supplied by Clustervision. The service also includes 5 POWER8 compute nodes with large amounts of RAM to support high performance data analysis. Free access is available to academic researchers working in the EPSRC domain and from some UK universities; academic users from other domains and institutions can purchase access. Industry access is available.
  • JADE (Joint Academic Data science Endeavour) is one of the EPSRC Tier-2 HPC facilities. The system design exploits the capabilities of NVIDIA's DGX-1 Deep Learning System, which has eight of its newest Tesla P100 GPUs tightly coupled by its high-speed NVLink interconnect. The DGX-1 runs optimized versions of many standard machine learning software packages such as Caffe, TensorFlow, Theano and Torch. Free access is available to academic researchers working in the EPSRC domain and from some UK universities; academic users from other domains and institutions can purchase access. Industry access is available.
  • MMM Hub (Materials and Molecular Modelling Hub) - The theory and simulation of materials is one of the most thriving and vibrant areas of modern scientific research today. Designed specifically for the materials and molecular modelling community, this Tier-2 supercomputing facility is available to HPC users all over the UK. The MMM Hub was established in 2016 with a £4m EPSRC grant awarded to the Thomas Young Centre (TYC) and the Science and Engineering South Consortium (SES). The MMM Hub is led by University College London on behalf of the eight collaborative partners who sit within the TYC and SES: Imperial, King’s, QMUL, Oxford, Southampton, Kent, Belfast and Cambridge.
  • DiRAC is the STFC HPC facility for particle physics and astronomy researchers. It is currently made up of five different systems with different architectures, including an extreme scaling IBM BG/Q system, a large SGI/HPE UV SMP system, and a number of Intel Xeon multicore HPC systems. Free access is available to academic researchers working in the STFC domain; academic researchers from other domains can purchase access. Industry access is also available.
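
Several of the facilities above, including JADE II and the university Viking system, schedule work through SLURM and manage software with environment modules. As a rough illustration, the sketch below shows what a minimal GPU batch script for such a system might look like; the partition name, module versions and resource requests are illustrative assumptions rather than real JADE II or Viking settings, so check the relevant facility documentation before adapting it.

    #!/bin/bash
    # Minimal SLURM GPU job sketch. The partition, module versions and resource
    # requests below are placeholders, not actual JADE II or Viking settings.
    #SBATCH --job-name=gpu-example
    #SBATCH --partition=gpu          # assumed name of a GPU partition
    #SBATCH --gres=gpu:1             # request a single GPU
    #SBATCH --time=01:00:00          # one hour of wall time
    #SBATCH --mem=16G

    # Load software through environment modules (module names/versions are assumptions)
    module load python/3.9
    module load cuda/11.2

    # Run the workload on the allocated GPU
    python train.py

A script of this form is typically submitted with sbatch and monitored with squeue, both standard SLURM commands.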

Tier-2 HPC is fundamental because it provides a diversity of computing architectures, driven by science needs that are not met by the national facilities or by individual universities.

The Tier-2 layer also trains skilled computational scientists and computational software developers, and provides users with easy, local access and training.