
Ph.D. Thesis Colloquium: CDS: 8th August 2022: “Communication Overlapping Krylov Subspace Methods for Distributed Memory Systems”

08 Aug @ 3:00 PM -- 4:00 PM


Ph.D. Thesis Colloquium

Speaker: Ms. Manasi Tiwari

S.R. Number: 06-18-02-10-12-17-1-14938

Title: “Communication Overlapping Krylov Subspace Methods for Distributed Memory Systems”

Date & Time: August 08, 2022 (Monday), 03:00 PM

Research Supervisor: Prof. Sathish Vadhiyar

Venue: #102, CDS Seminar Hall



Many high performance computing applications in computational fluid dynamics, electromagnetics, etc. need to solve a linear system of equations Ax = b. When A is large and sparse, Krylov Subspace Methods (KSMs) are used. In this thesis, we propose communication overlapping KSMs. We start with the Conjugate Gradient (CG) method, which is used when A is sparse and symmetric positive definite. Recent variants of CG include the Pipelined CG (PIPECG) method, which overlaps the allreduce in CG with independent computations, i.e., one Preconditioner application (PC) and one Sparse Matrix-Vector Product (SPMV).
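To make the communication pattern concrete, below is a minimal sequential sketch of the standard (unpipelined, unpreconditioned) CG iteration in NumPy. This is an illustration, not the thesis's PIPECG implementation; the comments mark the inner products that become allreduces when the vectors are partitioned across the nodes of a distributed memory system.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Plain Conjugate Gradient for a symmetric positive definite A.

    In a distributed memory setting, each dot product below requires a
    global reduction (MPI allreduce); the A @ p product is the SPMV.
    """
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()
    rs_old = r @ r             # dot product -> allreduce
    for _ in range(max_iter):
        Ap = A @ p             # sparse matrix-vector product (SPMV)
        alpha = rs_old / (p @ Ap)   # dot product -> allreduce
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r         # dot product -> allreduce (norm check)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system (hypothetical example data)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
```

In plain CG these reductions sit on the critical path; pipelined variants like PIPECG restructure the recurrences so that the allreduce can proceed concurrently with the PC and SPMV.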


As we move towards the exascale era, the time for global synchronization and communication in the allreduce increases with the large number of cores in exascale systems, and the allreduce time becomes the performance bottleneck, leading to poor scalability of CG. It therefore becomes necessary to reduce the number of allreduces in CG and to overlap the larger allreduce time with more independent computations than PIPECG provides. Towards this goal, we have developed PIPECG-OATI (PIPECG-One Allreduce per Two Iterations), which reduces the number of allreduces from three per iteration to one per two iterations and overlaps it with two PCs and two SPMVs. For better scalability with more overlapping, we also developed the Pipelined s-step CG method, which reduces the number of allreduces to one per s iterations and overlaps it with s PCs and s SPMVs. We compared our methods with state-of-the-art CG variants on a variety of platforms and demonstrated that our methods give 2.15x–3x speedups over existing methods.
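One ingredient of reducing allreduce counts is merging several inner products into a single reduction: each rank packs its local partial sums into one small vector, so one allreduce returns all the scalars at once instead of paying the latency of one allreduce per scalar. The sketch below simulates this with NumPy slices standing in for ranks and a column-wise sum standing in for MPI_Allreduce(SUM); it illustrates the general idea only, not the specific recurrences of PIPECG-OATI.

```python
import numpy as np

# Simulate P "ranks", each owning a contiguous slice of the vectors.
P = 4
n = 16
rng = np.random.default_rng(0)
u = rng.standard_normal(n)
v = rng.standard_normal(n)
w = rng.standard_normal(n)

slices = np.array_split(np.arange(n), P)

# Each rank computes its local partial dot products and packs them
# into one small vector of length 3.
partials = np.array([
    [u[s] @ v[s], u[s] @ w[s], v[s] @ w[s]]   # local work on rank p
    for s in slices
])

# A single allreduce (here: a column-wise sum) delivers all three
# global inner products at once, instead of three separate allreduces.
uv, uw, vw = partials.sum(axis=0)
```

Since allreduce cost at scale is dominated by latency rather than message size, sending three packed scalars in one reduction costs roughly the same as sending one, which is what makes "one allreduce per two (or s) iterations" pay off.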


We have also extended our research beyond CG on multi-node CPU systems in two directions. First, we have developed communication overlapping variants of KSMs other than CG, including the Conjugate Residual (CR), Minimum Residual (MinRes) and BiConjugate Gradient Stabilised (BiCGStab) methods, for matrices with different properties. Second, we developed communication overlapping CG variants for GPU-accelerated nodes, where we proposed and implemented three hybrid CPU-GPU execution strategies for the PIPECG method. The first two strategies achieve task parallelism and the third achieves data parallelism. Our experiments on GPUs showed that our methods give 1.45x–3x average speedups over existing CPU- and GPU-based implementations. The third method gives up to 6.8x speedup for problems that cannot fit in GPU memory.

