LEO - 50 Node Computing Cluster
LEO is a 50-node high-performance computing cluster with 2512 cores, 11.7 TB of memory, and 492 TB of storage capacity in total. It is a complex mix of hardware, including three different generations of servers (32, 64, and 128 cores per node), CPUs from both Intel and AMD, and both SAN and NAS storage solutions.
Specifications
- 50 Computing Nodes (22+20+4+4):
- 22 Nodes:
- CPU: Intel(R) Xeon(R) E5-2683 v4, dual socket, 16 cores per socket, 2.1 GHz, 9.6 GT/s
- Memory: 128 GB (8x16GB) 2400 MT/s DDR4 - 4 GB per core
- Rpeak of 23.6 TF (see the worked formula after this list)
- 20 Nodes:
- CPU: Intel(R) Xeon(R) Gold 6338, dual socket, 32 cores per socket, 2.00 GHz
- Memory: 256 GB 3200 MT/s DDR4 - 4 GB per core
- Rpeak of 81.9 TF
- 4 Nodes:
- CPU: AMD EPYC 9534, dual socket, 64 cores per socket, 2.45 GHz
- Memory: 768 GB (24x32GB) 4800 MT/s DDR5 - 6 GB per core
- Rpeak of 29 TF
- 4 Nodes:
- CPU: Intel(R) Xeon(R) Gold 5122, single socket, 4 cores per socket, 3.60 GHz
- Memory: 192 GB (6x32GB) 2666 MT/s DDR4 - 48 GB per core
- Rpeak of 1.8 TF
- Interconnect: Intel Omni-Path (100 Gb/s)
- Associated Storage: The cluster is configured with two distinct storage solutions, each optimized for different data access patterns:
- High-Performance SAN Storage (Hot): This 354 TB (usable), high-speed storage is intended for application input and other frequently accessed data. User home directories (/home) are provisioned on this storage, which runs the Lustre parallel file system (PFS) for maximum I/O performance.
- High-Capacity NFS Storage (Cold): This 138 TB (usable), high-capacity NAS storage is designed for long-term retention of less active data. User data directories (/data) are accessible on all nodes via NFS using the Linux automount feature and should be used for storing processed data and final results, as sketched in the example below.
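As a minimal sketch of this hot/cold split, the program below writes a run's final output to the /data area while inputs and working files stay under /home. The directory layout (/data/<username>/run001/) is an assumption for illustration only, not an official path, and must exist before the program is run.

```c
/* data_placement.c - illustrative sketch of LEO's storage convention:
 * keep inputs and scratch files on /home (Lustre, hot) and move final
 * results to /data (NFS, cold). The /data/<user>/run001 layout below is
 * hypothetical; adapt it to your own account and project. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *user = getenv("USER");
    if (!user) {
        fprintf(stderr, "USER environment variable is not set\n");
        return 1;
    }

    /* Build the destination path on the high-capacity NFS storage */
    char result_path[512];
    snprintf(result_path, sizeof result_path,
             "/data/%s/run001/result.txt", user);

    FILE *out = fopen(result_path, "w");
    if (!out) {
        perror(result_path);
        return 1;
    }
    fprintf(out, "final results belong on /data; active I/O stays on /home\n");
    fclose(out);
    return 0;
}
```

Keeping finished results off /home in this way leaves the high-performance Lustre storage free for active I/O.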
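For reference, the Rpeak figures quoted for the node types above are consistent with the standard theoretical-peak estimate; the double-precision FLOPs-per-cycle values used here (16 for the AVX2 parts, 32 for the AVX-512 parts) are assumptions about each CPU's vector units and are not stated in the original specification:

$$R_{\text{peak}} = N_{\text{nodes}} \times \frac{\text{cores}}{\text{node}} \times f_{\text{clock}} \times \frac{\text{FLOPs}}{\text{cycle} \cdot \text{core}}$$

For example, the 22 E5-2683 v4 nodes give $22 \times 32 \times 2.1\,\text{GHz} \times 16 \approx 23.6$ TF, and the 20 Gold 6338 nodes give $20 \times 64 \times 2.0\,\text{GHz} \times 32 \approx 81.9$ TF.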
Compilers/Interpreters
- MPI: oneAPI 2023 and 2024 (see the example after this list)
- C/C++: GNU 12.2 and 14.2 (gcc, g++), oneAPI DPC++/C++ Compiler 2023.2.0 and 2024.0.0 (icx, icpx)
- Fortran: GNU 12.2 and 14.2 (gfortran), Intel Fortran Compiler 2023.2.0 and 2024.0.0 (ifx)
- Python: v3.9, v3.11
- Perl: v5.32
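As a quick sanity check of the MPI and C toolchains listed above, the sketch below is a standard MPI hello-world. The compiler wrapper names in the comments (mpicc for the GNU toolchain, mpiicx for the oneAPI/Intel MPI toolchain) are typical defaults and may differ on LEO depending on which MPI modules are loaded.

```c
/* hello_mpi.c - minimal MPI sanity check.
 * Typical builds (wrapper names may vary with the loaded modules):
 *   mpicc  -O2 -o hello_mpi hello_mpi.c    # GNU gcc backend
 *   mpiicx -O2 -o hello_mpi hello_mpi.c    # oneAPI icx backend
 * Example run:  mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, hostlen = 0;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &hostlen);

    printf("Hello from rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

A successful run prints one line per MPI rank along with the node it ran on, which makes this a convenient first test of a new compiler/MPI module combination.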
Job Scheduler
Visualization/Analysis Packages
Usage Help
Please contact hpcsupport AT iiap.res.in to log tickets or to request software installations/upgrades. All users are also added to the HPC mailing list, where they can discuss topics related to HPC usage and seek assistance. Announcements from the HPC management team are circulated on this mailing list as well.
Acknowledgement
- Users are requested to acknowledge the computing facility/resources used for their research work by including the template below in their publication(s).
"This research has made use of the High Performance Computing (HPC) resources (centers/main-campus/computing-and-i-t/) made available by the Computer Center of the Indian Institute of Astrophysics, Bangalore"
Submitting a job through the scheduler
Job submission on LEO is facilitated through a GUI portal, which offers advanced features such as job templates and job accounting details (for both individual users and the cluster as a whole). The portal is available at: https://leo.iiap.res.in