NOVA 26-Node Computing Cluster

NOVA is a 26-node high performance computing cluster with a total of 720 cores, 3.5 TB of memory, and 36 TB of storage capacity. Of the 26 compute nodes, 22 are populated with dual-socket, 16-core-per-socket Intel(R) Xeon(R) E5-2683 v4 processors and 128 GB of DDR4 memory per node (combined Rpeak 23.6 TF); the remaining 4 nodes have a single-socket, 4-core Intel(R) Xeon(R) Gold 5122 and 192 GB of DDR4 memory per node (combined Rpeak 1.8 TF). The interconnect is Intel Omni-Path (100 Gb/s).

Specifications

  • Computing Nodes (22+4):
    • CPU (22 Nodes): Intel(R) Xeon(R) E5-2683 v4, dual socket, 16 cores per socket, 2.1 GHz, 9.6 GT/s
    • Memory: 128 GB (8x16GB) 2400 MHz DDR4 - 4 GB per core
    • Storage: 2x1TB NL-SAS hot plug HDD in RAID-1
    • CPU (4 Nodes): Intel(R) Xeon(R) Gold 5122, single socket, 4 cores per socket, 3.6 GHz, 10.4 GT/s
    • Memory: 192 GB (6x32GB) 2666 MHz DDR4 - 48 GB per core
    • Storage: 3x4TB NL-SAS hot plug HDD in RAID-5
    • Interconnect: Intel Omni-Path (100 Gb/s)
    • Associated Storage: 12x4TB NL-SAS disks in RAID-5 (40 TB of available disk space)

Compilers/Interpreters

  • MPI: Intel Parallel Studio XE 2018, 2019 Cluster Edition (a build/run sketch follows this list)
  • C/C++: gcc/g++ 4.8.5, Intel C/C++ XE 19.0.4.243, 18.0.0
  • Fortran 77/90/95+: gfortran 4.8.5, Intel Fortran XE 19.0.4.243, 18.0.0 (includes the Fortran compiler and the Math Kernel Library)
  • Python: v2.7.5, v3.6.3
  • Perl: v5.16, with multi-thread support
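
For orientation, here is a minimal MPI "hello" program and the commands to build and smoke-test it with the toolchains listed above. The mpiicc/mpicc wrappers and mpirun ship with the Intel MPI runtime named in the list; the exact environment setup (modules, paths) on NOVA is an assumption here and may differ.

    /* hello.c - minimal MPI check program */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

    $ mpiicc -O2 -o hello hello.c    # Intel MPI C wrapper (or mpicc for the gcc toolchain)
    $ mpirun -np 4 ./hello           # small local test; production runs go through Slurm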

Job Scheduler

  • slurm 18.08.7
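
The following are standard Slurm commands for inspecting the queue and cluster state (generic Slurm usage, not NOVA-specific configuration):

    $ sinfo                    # list partitions and node states
    $ squeue -u $USER          # show your pending and running jobs
    $ scontrol show job <id>   # detailed information on one job
    $ scancel <id>             # cancel a job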

Visualization/Analysis Packages

  • IDL 8.6

The Account Request Form is available at: https://nova.iiap.res.in/samooh/register

Usage Help

Please contact hpcsupport AT iiap.res.in to log tickets or to request software installation/upgrades. All users are also part of the HPC mailing list, where relevant topics on HPC usage and assistance can be discussed. Announcements from the HPC management team will also be circulated on this mailing list.

Acknowledgement

  • Users are requested to acknowledge the computing facility/resources used for their research work by including the template below in their publication(s).

Submitting a Job Through the Scheduler
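
As a minimal sketch, a batch script for the MPI example above might look like the following. The #SBATCH directives are standard Slurm options; the resource values are placeholders, and NOVA's actual partition names, defaults, and limits should be confirmed with hpcsupport.

    #!/bin/bash
    #SBATCH --job-name=hello        # name shown in squeue
    #SBATCH --nodes=1               # number of nodes requested
    #SBATCH --ntasks=4              # total MPI ranks
    #SBATCH --time=00:10:00         # wall-time limit (HH:MM:SS)
    #SBATCH --output=hello_%j.out   # stdout/stderr file (%j expands to the job ID)

    # Launch the MPI program built earlier; $SLURM_NTASKS is set by Slurm.
    mpirun -np $SLURM_NTASKS ./hello

Save the script (e.g. as job.sh, a placeholder name) and submit it with:

    $ sbatch job.sh      # prints "Submitted batch job <id>"
    $ squeue -u $USER    # monitor the job until it starts and finishes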

Last updated on: February 20, 2024