Cluster Usage Guidelines

Usage Policies

  1. Getting Access: The Hydra computing cluster is available to all members (staff and students) of the Indian Institute of Astrophysics. Please fill out the form (link below) to obtain an account on the cluster.

    Users are responsible for protecting their accounts from unauthorized access and for the proper use of the resources allocated to them.

    Please contact hpcsupport AT iiap.res.in to log tickets or to request software installations and upgrades. All users will also be added to the HPC mailing list, where they can discuss
    topics relevant to HPC usage and seek assistance. Announcements from the HPC management team will be circulated on this mailing list as well.

  2. Job Submission: All computing jobs on the cluster must be submitted through the scheduler. Users may consult the scheduler help page for details on configuration and usage (a minimal job script for each queue is sketched after this list).

    Users must not specify hosts in their PBS scripts, since doing so interferes with the scheduler's ability
    to allocate resources. Any such jobs will be detected and killed by a vicious daemon.

    The master node of the cluster is to be used only for management and scheduling purposes. Computational tasks must never be run on the master node, as this can impair the operation of the entire system. Any user-owned process on this node will be down-prioritized to the lowest priority and eventually killed.

  3. There are two queues for jobs: serial and parallel.
    • The parallel queue has a limit of 4 running jobs per user, plus another 4 jobs waiting in the queue.
    • The serial queue has a limit of 24 running jobs, plus another 24 jobs waiting in the queue.
    • A job script that requests more than one processor is routed to the parallel queue.
  4. Node-21 is set aside for interactive usage and visualization software.
  5. NOTE: Users are requested to use the head node only for job submission. Please use node-21 for editing files and debugging code.
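
A minimal sketch of a job script for each queue is given below. The queue names (serial and parallel) come from the policy above; the resource-request line uses the common Torque/PBS nodes:ppn syntax, and the job names, walltime, and executables are placeholders, so adapt them to whatever the scheduler help page prescribes.

    #!/bin/bash
    ### serial.pbs -- requests a single processor, so it is routed to the serial queue
    #PBS -N my_serial_job          # job name (placeholder)
    #PBS -q serial
    #PBS -l nodes=1:ppn=1          # one processor; never name specific hosts here
    #PBS -l walltime=02:00:00      # requested run time (assumed format; check the help page)

    cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
    ./my_program input.dat         # placeholder executable

    #!/bin/bash
    ### parallel.pbs -- requests more than one processor, so it is routed to the parallel queue
    #PBS -N my_parallel_job        # job name (placeholder)
    #PBS -q parallel
    #PBS -l nodes=1:ppn=8          # 8 processors on one node; the scheduler picks the node

    cd $PBS_O_WORKDIR
    mpirun -np 8 ./my_mpi_program  # placeholder MPI executable

Submit the scripts from the head node with qsub (e.g. qsub serial.pbs) and monitor them with qstat.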

Disk Usage Guidelines

  • Home: Individual users will have a disk quota of 50 GB in the home area, which is backed up daily. This partition is mounted on all computing nodes of the cluster. However, excessive read/write operations to /home may slow down jobs, since the data must be transferred over the network.
  • Scratch: Each node has a 750 GB scratch space, accessible to all users. This area is not backed up and will be cleared at regular intervals. Users are encouraged to use this area as temporary storage for jobs running on the respective node. By choosing the local scratch over the globally mounted /home, you eliminate the bottlenecks of disk I/O and data transfer over the network (see the staging sketch after this list). Code speed!
  • Additional Storage: Extra disk space, if required, will be provided upon approval from the HPCN committee. Data backup will not be available for this extra disk space. These extra disks will be available on the master node and node-20.
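
The staging pattern recommended above can be sketched as follows, assuming the node-local scratch is mounted at /scratch (substitute the actual mount point on the nodes, and your own file names):

    #!/bin/bash
    #PBS -N scratch_job
    #PBS -q serial
    #PBS -l nodes=1:ppn=1

    # Stage in: copy inputs from the network-mounted home area to node-local scratch.
    WORKDIR=/scratch/$USER/$PBS_JOBID      # assumed layout under the scratch mount
    mkdir -p $WORKDIR
    cp $PBS_O_WORKDIR/input.dat $WORKDIR/

    # Run entirely on the local disk, so the job does no network I/O while computing.
    cd $WORKDIR
    ./my_program input.dat > output.dat    # placeholder executable

    # Stage out: copy results back to the backed-up home area, then clean up,
    # since the scratch area is cleared at regular intervals anyway.
    cp output.dat $PBS_O_WORKDIR/
    rm -rf $WORKDIR
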
Last updated on: February 20, 2024