Hydra 20 Node Computing Cluster
Hydra is a 20-node high-performance computing cluster with a peak performance of around 3.2 TFlops. Based on Intel Xeon X5675 processors, each node provides 12 processing cores and 96 GB of DDR3 memory. Nodes are interconnected over QDR InfiniBand (40 Gb/s).
- Computing Nodes (20):
  - CPU: dual Intel Xeon X5675, 6 cores per processor, 3.06 GHz base clock (3.46 GHz Turbo), 6.4 GT/s QPI.
  - Memory: 96 GB (12 x 8 GB) 1333 MHz DDR3 (8 GB per core).
  - Storage: 2 x 1 TB SATA hot-plug hard disks in RAID-1.
  - Interconnect: QDR InfiniBand, 40 Gb/s.
- Associated Storage: 22 x 2 TB SATA disks in RAID-5 (40 TB of available disk space).
Compilers and Interpreters
- C/C++ : gcc/g++ 4.4.4, Intel C/C++ XE 2011.5.220
- Fortran 77/90/95+ : gfortran 4.4.4, Intel Fortran XE 2011.5.220 (includes the Fortran compiler and the Math Kernel Library)
- Python: v2.4.3
- Perl: v5.8.8, with multi-thread support
High Performance Computing Tools
- mvapich 1.2 / mvapich2 1.6: MPICH/MPICH2-based MPI implementations for InfiniBand networks (support GNU and Intel compilers)
- Open MPI 1.4.3 (supports GNU and Intel compilers)
- MPI for Python
- Maui Scheduler
- Torque Resource Manager
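Jobs are handed to Torque for resource allocation and to Maui for scheduling. The script below is a minimal sketch of a Torque/PBS job file for an MPI run over mvapich2; the job name, walltime, and program name are placeholders, and the exact mpirun invocation may differ with your chosen MPI stack:

```shell
#!/bin/bash
# Sketch of a Torque/PBS job script for an MPI job on Hydra.
#PBS -N mpi_test                  # job name (placeholder)
#PBS -l nodes=2:ppn=12            # 2 nodes x 12 cores = 24 MPI ranks
#PBS -l walltime=01:00:00         # requested runtime
#PBS -j oe                        # merge stdout and stderr

cd $PBS_O_WORKDIR                 # run from the submission directory
# $PBS_NODEFILE lists the cores Torque allocated to this job
mpirun -np 24 -machinefile $PBS_NODEFILE ./my_mpi_program
```

Submit it with `qsub job.sh`; Maui decides when the job starts, and `qstat` shows its state in the queue.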
This machine, while part of the cluster, is also directly accessible from the campus LAN. The available software is listed below:
- IDL 8.3
- Mathematica 10.2
- ROOT 5.28
- R 2.13
- Plotting Packages: Gnuplot 4.0, pgplot 5.2, plplot 5.7, xmgrace 5.1
Please contact hpcsupport AT iiap.res.in to log tickets or to request software installation/upgrades. All users are also added to the HPC mailing list, where they can discuss topics related to HPC usage and get assistance. Announcements from the HPC management team are circulated on this mailing list as well.
- Cluster Usage policies and guidelines
- Choosing your MPI compiler
- Submitting a job through scheduler
- External Links