Our Location
1st Floor, Sir J.C. Bose Annexe
IIT Kharagpur, Kharagpur 721302
The supercomputer PARAM Shakti is based on a heterogeneous, hybrid configuration of Intel Xeon Skylake processors and NVIDIA Tesla V100 GPUs. The system was designed and implemented by the HPC Technologies team of the Centre for Development of Advanced Computing (C-DAC). It consists of 2 master nodes, 8 login nodes, 10 service/management nodes, and 442 (CPU+GPU) compute nodes, with a total peak computing capacity of 1.66 PFLOPS. The compute nodes are connected by a Mellanox EDR InfiniBand interconnect network, and the system uses the Lustre parallel file system.
Master Nodes supervise and coordinate the cluster. They monitor hardware health, manage workloads, and track
utilization across all components.
2 × Intel Xeon SKL G-6148
Cores = 40, 2.4 GHz
Total Cores = 80 cores
Memory = 384 GB
Total Memory = 768 GB
HDD = 900 GB
Login Nodes act as user entry points. They support tasks like file transfers, editing scripts, and job
submissions, with time and memory limits.
2 × Intel Xeon SKL G-6148
Cores = 40, 2.4 GHz
Total Cores = 320 cores
Memory = 384 GB
Total Memory = 3,072 GB
HDD = 900 GB
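Typical use of a login node, as described above, could look like the following. This is an illustrative sketch only: the hostname and username are placeholders, not the cluster's actual values.

```bash
# Log in to the cluster (replace both placeholders)
ssh <username>@<login-node-hostname>

# Copy an input file from your machine to your home directory
scp input.dat <username>@<login-node-hostname>:~/

# On the login node: edit a job script, then submit it to the scheduler
vim job.sh
sbatch job.sh
```

Heavy computation should not be run on the login nodes themselves; they enforce time and memory limits, and jobs should be submitted to the compute nodes via the scheduler.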
Service Nodes handle job scheduling and cluster services. They maintain reliability and ensure smooth
day-to-day operation of PARAM Shakti.
2 × Intel Xeon SKL G-6148
Cores = 40, 2.4 GHz
Total Cores = 240 cores
Memory = 384 GB
Total Memory = 2,304 GB
HDD = 900 GB
CPU Compute Nodes are the workhorses of PARAM Shakti. They execute both interactive and batch jobs,
with local SSDs providing fast scratch storage.
2 × Intel Xeon SKL G-6148
Cores = 40, 2.4 GHz
Total Cores = 15,360 cores
Memory = 192 GB
Total Memory = 73,728 GB
SSD = 480 GB
High Memory Nodes provide extended RAM per node, enabling simulations and jobs with very large memory
requirements beyond standard compute nodes.
2 × Intel Xeon SKL G-6148
Cores = 40, 2.4 GHz
Total Cores = 1,440 cores
Memory = 768 GB
Total Memory = 27,648 GB
SSD = 480 GB
GPU Compute Nodes combine CPUs with NVIDIA V100 GPUs. CUDA- and OpenCL-optimized applications achieve
large speedups for AI and HPC workloads.
2 × Intel Xeon SKL G-6148
Cores = 40, 2.4 GHz
Total Cores = 880 cores
Memory = 192 GB
Total Memory = 4,224 GB
2 × NVIDIA V100 (16 GB each)
PARAM Shakti uses the Lustre parallel file system. It offers scalable and reliable storage with high
throughput, ideal for large scientific workloads.
Primary Storage = 2.1 PiB
Archival Storage = 500 TiB
Throughput = 50 GB/s
The cluster runs on Linux (CentOS 7.6), a stable operating system widely used in HPC. It provides
compatibility and reliable performance for users.
OS = Linux
Distribution = CentOS 7.6
Associate Professor | Department of Computer Science and Engineering
soumya@cse.iitkgp.ac.in
+91-3222-283344
Associate Professor | Mechanical Engineering
somnath.roy@mech.iitkgp.ac.in
+91-3222-282920
Member
Professor | Dept. of Computer Science & Technology
pabitra@cse.iitkgp.ac.in
+91-3222-282356
Member
Professor | Dept. of Chemistry
sanjoy@chem.iitkgp.ac.in
+91-3222-283344
Member
Professor | Dept. of Computer Science & Engineering
pralay@cse.iitkgp.ac.in
+91-3222-282344
Member
Professor | Physics
sonjoym@phy.iitkgp.ac.in
+91-3222-283808
Member
Assistant Professor | Centre for Computational and Data Sciences
skreddy@iitkgp.ac.in
Senior Software Engineer Grade-I | Centre for Computational and Data Sciences
devraj@adm.iitkgp.ac.in
+91-3222-283344
Jr. Technical Superintendent
Centre for Computational and Data Sciences
sbanerjee@iitkgp.ac.in
+91-3222-282095
Senior Assistant
Centre for Computational and Data Sciences
gopal1168@adm.iitkgp.ac.in
Science is a beautiful gift to humanity; we should not distort it but harness its power to transform our nation.
- Dr. A.P.J. Abdul Kalam
A Hub for Advanced Research and Analytical Facilities
PARAM Shakti is the name of the supercomputer hosted at IIT Kharagpur.
All IIT KGP faculty advisers and their research groups working in the HPC domain can get an account, provided the account is approved by the PS administration. You can apply HERE.
New users can start with the quick start guide.
At present, all users can run jobs on all partitions (shared, medium, large, gpu) using the free queue (default QoS: iitkgp_freeq) without any charge.
For high-priority job submissions, payment is required as per the approved cost calculation
and compute requirements. Such paid usage is applicable only to faculty or supervisor
accounts.
There is no provision for paid accounts for student users. However, students may submit
high-priority jobs by specifying their faculty/supervisor's approved account in the job
script using: #SBATCH -A <faculty_account_name>
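Putting the rules above together, a minimal batch script might look like the following sketch. The job name, resource values, and executable are illustrative assumptions; the partition names, default QoS, and the `#SBATCH -A` directive come from the text above.

```bash
#!/bin/bash
#SBATCH --job-name=demo            # illustrative job name
#SBATCH --partition=shared         # one of: shared, medium, large, gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40       # a full node has 40 cores
#SBATCH --time=01:00:00            # walltime (illustrative)
## Free runs use the default QoS (iitkgp_freeq); no account is needed.
## For a high-priority (paid) run, students uncomment the next line and
## specify their faculty/supervisor's approved account:
##SBATCH -A <faculty_account_name>

srun ./my_application              # replace with your executable
```

Submit with `sbatch job.sh` from a login node.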
Users are required to acknowledge the use of PS in all publications, presentations,
theses, webpages, etc., by including the following or a similar statement:
“This work used the Supercomputing facility of IIT Kharagpur established under National
Supercomputing Mission (NSM), Government of India and supported by Centre for Development of
Advanced Computing (CDAC), Pune”
Users are also requested to inform the PS administration of any such outcome for annual
reports, documentation, uploading on the website, etc.
Users can submit their DOI here.
Raise a support ticket here.
+91-3222-282229
hpc@iitkgp.ac.in
shaktisupport@iitkgp.ac.in