IIA Bangalore
Computer Networks and IT Infrastructure Division : NOVA Computing Cluster

NOVA 26 Node Computing Cluster

NOVA is a 26-node high-performance computing cluster with 720 cores, 3.5 TB of memory, and 36 TB of storage in total. Of the 26 compute nodes, 22 are populated with dual-socket, 16-cores-per-socket Intel(R) Xeon(R) E5-2683 v4 processors (Rpeak of 23.6 TF) and 128 GB of DDR4 memory per node; the remaining 4 nodes have a single-socket, 4-core Intel(R) Xeon(R) Gold 5122 (Rpeak of 1.8 TF) and 192 GB of DDR4 memory per node. Nodes are interconnected via Intel Omni-Path (100 Gb/s).

Specifications

  • Computing Nodes (22+4):
    • CPU (22 Nodes): Intel(R) Xeon(R) E5-2683 v4, dual socket, 16 cores per socket, 2.1 GHz, 9.6 GT/s
    • Memory: 128 GB (8x16GB) 2400 MHz DDR4 - 4 GB per core
    • Storage: 2x1TB NL-SAS hot plug HDD in RAID-1
    • CPU (4 Nodes): Intel(R) Xeon(R) Gold 5122, single socket, 4 cores per socket, 3.6 GHz, 10.4 GT/s
    • Memory: 192 GB (6x32GB) 2666 MHz DDR4 - 48 GB per core
    • Storage: 3x4TB NL-SAS hot plug HDD in RAID-5
  • Interconnect: Intel Omni-Path (100 Gb/s)
  • Associated Storage: 12 x 4TB NL-SAS disks in RAID-5 (40TB of available disk space)

Compilers/Interpreters

  • MPI: Intel Parallel Studio XE 2018, 2019 Cluster Edition
  • C/C++: gcc/g++ 4.8.5; Intel C/C++ XE 19.0.4.243, 18.0.0
  • Fortran 77/90/95+: gfortran 4.8.5; Intel Fortran XE 19.0.4.243, 18.0.0 (includes the Fortran compiler and the Math Kernel Library)
  • Python: v2.7.5, v3.6.3
  • Perl: v5.16, with multi-thread support
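As a quick check of the MPI toolchain, a hello-world program can be compiled with the Intel MPI wrappers shipped with Parallel Studio. This is a sketch of a login-node session; the exact module name is an assumption and may differ on NOVA:

```shell
# Load the Intel toolchain first (module name is illustrative):
#   module load intel-parallel-studio/2019

# Write a minimal MPI hello-world:
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile with the Intel MPI C wrapper (mpiifort for Fortran):
mpiicc -O2 hello_mpi.c -o hello_mpi

# Small interactive test run with 4 ranks:
mpirun -np 4 ./hello_mpi
```

Production runs should go through the scheduler rather than `mpirun` on the login node; see the job submission section below.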

Job Scheduler

  • slurm 18.08.7

Visualization/Analysis Packages

  • IDL 8.6

Usage Help

Please contact hpcsupport AT iiap.res.in to log tickets or to request software installations/upgrades. All users are also added to the HPC mailing list, where they can discuss topics relevant to HPC usage and assistance. Announcements from the HPC management team are circulated on this mailing list as well.

Acknowledgement:

  • Users are requested to acknowledge the computing facility/resources used for their research by including the template below in their publication(s).

"This research has made use of the High Performance Computing (HPC) resources made available by the Computer Center of the Indian Institute of Astrophysics, Bangalore"

Submitting a job through the scheduler

Job submission on NOVA is facilitated through a GUI portal, which offers advanced features such as job templates and job accounting details (both per-user and cluster-wide).

Queue Details

PARTITION   Max. CORES       Max. TIME LIMIT
serial      1                20-00:00:00 (20 days)
small       32               5-00:00:00 (5 days)
medium      64               4-00:00:00 (4 days)
large       128              3-00:00:00 (3 days)
vlarge      256              2-00:00:00 (2 days)
test        No limitation    02:00:00 (2 hours)
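For command-line submission, a minimal Slurm batch script targeting, say, the small partition might look like the following sketch; the job name, module name, and executable are placeholders, and the resource requests must stay within the partition limits above:

```shell
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=small         # 32-core, 5-day limit per the table above
#SBATCH --ntasks=32               # at the partition's core limit
#SBATCH --time=2-00:00:00         # well under the 5-day cap
#SBATCH --output=demo_%j.out      # %j expands to the Slurm job ID

# Load the toolchain the job needs (module name is illustrative):
# module load intel-parallel-studio/2019

# Launch the MPI executable across the allocated tasks:
srun ./my_mpi_program
```

Submit the script with `sbatch job.sh` and monitor it with `squeue`.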

A few useful commands on the CLI:

  • squeue – list the status of your jobs
  • scancel <jobid> – cancel a job
  • module list – list the currently loaded modules
  • module av – see the available modules
  • module load <module> – load a module
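Put together, a typical interactive session might look like the following sketch; the module name and job ID are illustrative, not actual values from NOVA:

```shell
module av                   # browse the modules available on the cluster
module load idl             # load a module (exact name may differ)
module list                 # confirm which modules are loaded

sbatch job.sh               # submit a batch script to the scheduler
squeue -u $USER             # show only your own jobs
scancel 12345               # cancel the job with ID 12345 (illustrative)
```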

Last updated on December 16, 2024