
MACS Major Upgrade to RHEL8

The GW4 Isambard Multi-Architecture Comparison System (MACS) will be unavailable for the week of the 19th April for planned upgrades & maintenance of the software stack. This is a major software upgrade to Red Hat Enterprise Linux 8, bringing the operating system major version in line with the A64FX service. It will provide a better base for software development and improve MACS compatibility with scientific software, including the Cray software stack. Some user software compatibility issues are to be expected due to changed/updated libraries, so recompilation may be required to continue running on MACS. XCI & A64FX remain available during this time.

Cray Compiler Environment (CCE) 9.0.0 installed

Cray CCE 9.0.0 has been installed on XCI; feel free to test it out by loading the cdt/19.06 module! This is a major revision to CCE, with the compilers now based on LLVM. Documentation can be found here: https://pubs.cray.com/content/S-5212/9.0/cray-compiling-environment-cce-release-overview/cce-900-release-overview-introduction
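A minimal sketch of trying out the new compilers, assuming the usual PrgEnv-cray environment and Cray compiler wrappers (everything beyond the cdt/19.06 module name is an assumption about the standard Cray setup, not a confirmed recipe):

    # Load the 19.06 Cray Developer Toolkit, which provides CCE 9.0.0
    module load cdt/19.06
    # Under PrgEnv-cray the cc/CC/ftn wrappers drive the Cray compilers;
    # the C/C++ version banner should now report the LLVM-based CCE 9.0.0
    cc --version
    # The Fortran compiler typically reports its version with -V
    ftn -V

See the release overview linked above for the authoritative list of changes.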

XCI: Huge page bug fixed

Cray has deployed the first monthly patchset on XCI, which includes a fix for the Out-Of-Memory errors that some jobs using huge pages have experienced.

XCI: System update

- Cray CDT/18.12 installed as module cdt/18.12; Cray CDT/18.11 remains available.
- Arm Compiler version 19 installed as module PrgEnv-allinea; Arm Compiler version 18.4.2 is also available.
- GCC 8.2.0 installed as module gcc/8.2.0; GCC 7.3.0 & GCC 6.1.0 are also available.

All of the new modules have been set as the default versions, which means they will be loaded if you omit the version number from the module name, as in the sketch below.
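A brief sketch of what the new defaults mean in practice (module names are as listed above; the swap form assumes the older module is currently loaded):

    # With no version given, the new defaults are loaded,
    # e.g. this now gives GCC 8.2.0
    module load gcc
    # To stay on an older release, name the version explicitly
    module swap gcc gcc/7.3.0
    # The same applies to the toolkit and Arm compiler modules
    module load cdt               # cdt/18.12
    module load PrgEnv-allinea    # Arm Compiler 19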

XC50 Approaches...

The single cabinet consists of approximately 164 compute nodes of 64 cores each, for a total of 10,496 cores of Cavium ThunderX2 ARMv8, backed by the same Aries interconnect. A 0.5 Petabyte Lustre filesystem is dedicated to the Isambard system. Discussions are underway on acceptance tests; we expect to run HPL (LINPACK), HPCG, STREAM, MPI & I/O benchmarks. Some practical codes will also be run for comparison against the numbers produced on the Early Access nodes (http://www.goingarm.com/slides/2017/SC17/GoingArm_SC17_Bristol_Isambard.pdf), including UM/NEMO, a chemistry code and an engineering code. The HPC group at the University of Bristol has recently published a paper examining these numbers in more depth: https://uob-hpc.github.io/assets/cug-2018.pdf