Ph.D. Thesis Colloquium, #102 CDS, 27 June 2024: “Improving the Efficiency of Variational PINNs and its applications to fluid flow problems”

When

27 Jun 24    
10:00 AM - 11:00 AM

Event Type

DEPARTMENT OF COMPUTATIONAL AND DATA SCIENCES
Ph.D. Thesis Colloquium

Speaker: Mr. Thivin Anandh D
S.R. Number: 06-18-01-10-12-18-1-15722
Title: “Improving the Efficiency of Variational PINNs and its applications to fluid flow problems”
Research Supervisor: Prof. Sashikumaar Ganesan
Date & Time: June 27, 2024 (Thursday), 10:00 AM
Venue: #102, CDS Seminar Hall

ABSTRACT

FastVPINNs: A Tensor-Driven Accelerated Framework for Variational Physics-Informed Neural Networks in Complex Domains: Variational Physics-Informed Neural Networks (VPINNs) use a variational loss function to solve partial differential equations, mirroring finite element analysis techniques. Traditional hp-VPINNs, while effective for high-frequency problems, are computationally intensive and scale poorly with increasing element counts, limiting their use in complex geometries. This work introduces FastVPINNs, a tensor-based reformulation that significantly reduces computational overhead and handles complex geometries. Using optimized tensor operations, FastVPINNs achieves a 100-fold reduction in the median training time per epoch compared to traditional hp-VPINNs. With a proper choice of hyperparameters, FastVPINNs can surpass conventional PINNs in both speed and accuracy, especially for problems with high-frequency solutions. We also demonstrate the solution of inverse problems (constant-parameter and domain inverse problems) for scalar PDEs.
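To illustrate the tensor-driven idea, here is a minimal NumPy sketch of assembling the variational residual for all elements at once with a single tensor contraction instead of a Python loop over elements. All names, shapes, and the Poisson-type residual are illustrative assumptions, not the library's actual API (which is built on TensorFlow and includes forcing and boundary terms):

```python
import numpy as np

# Illustrative sizes: n_elem elements, n_test test functions per element,
# n_quad quadrature points per element (all hypothetical).
n_elem, n_test, n_quad = 100, 5, 16
rng = np.random.default_rng(0)

# Precomputed once per mesh: test-function gradients at quadrature points,
# already scaled by quadrature weights and Jacobians -> (n_test, n_quad).
grad_test_x = rng.standard_normal((n_test, n_quad))
grad_test_y = rng.standard_normal((n_test, n_quad))

# Network-output derivatives at all quadrature points of all elements,
# obtained from one batched forward/autodiff pass -> (n_elem, n_quad).
u_x = rng.standard_normal((n_elem, n_quad))
u_y = rng.standard_normal((n_elem, n_quad))

# Variational residual of a Poisson-type weak form for every
# (element, test function) pair in one contraction:
#   residual[e, k] = sum_q grad_test[k, q] . grad_u[e, q]
residual = np.einsum('kq,eq->ek', grad_test_x, u_x) \
         + np.einsum('kq,eq->ek', grad_test_y, u_y)

loss = np.mean(residual ** 2)  # scalar variational loss to minimize
```

Because the per-element loop is replaced by a batched contraction, the cost of adding elements is absorbed by optimized BLAS/GPU kernels, which is the source of the reported speed-up.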

An Open-Source PyPI Package for FastVPINNs: This work presents the implementation details of the FastVPINNs library as a pip-installable Python package. Developed using TensorFlow 2.0, the package now supports 3D scalar problems, making it one of the first hp-VPINNs frameworks to support 3D problems on complex geometries. The library includes a comprehensive test suite with unit, integration, and compatibility tests, achieving over 96% code coverage, and features CI/CD workflows on GitHub for streamlined deployment. Documentation is available at https://cmgcds.github.io/fastvpinns.

FastVPINNs for Flow Problems (Navier-Stokes): The incompressible Navier-Stokes equations (NSE) are central to fluid dynamics. While PINNs have been used to solve NSE problems, there is no literature on solving them with VPINNs, owing to challenges such as the larger number of elements required for vector-valued problems and the complexity of implementing variational losses for all components of the equations; these issues also lead to infeasible training times with existing implementations. In this work, we implement the NSE using FastVPINNs and compare our results with PINNs in terms of accuracy and training time. We solve forward problems such as the lid-driven cavity, flow through a channel, the Falkner-Skan boundary layer, flow past a cylinder, flow past a backward-facing step, and Kovasznay flow for Reynolds numbers ranging from 1 to 200 in the laminar regime. Our experiments show that our FastVPINNs code runs twice as fast as PINNs while achieving accuracy comparable to results in the literature. Additionally, we solve inverse problems for the NSE, identifying the Reynolds number of the flow from sparse solution observations.
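For reference, the incompressible NSE in the standard steady, nondimensional form typically used for such laminar forward problems (assuming the viscosity is taken as $\nu = 1/Re$; the exact formulation used in the talk may differ):

```latex
% u: velocity field, p: pressure, nu = 1/Re: kinematic viscosity, f: body force
-\nu \Delta \mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} + \nabla p = \mathbf{f}
  \quad \text{in } \Omega,
\qquad
\nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega .
```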

FastVPINNs for Singularly Perturbed Problems: Singularly perturbed problems arise in convection-dominated regimes and are challenging test cases because conventional numerical methods can produce spurious oscillations. Stabilization schemes such as Streamline-Upwind Petrov-Galerkin (SUPG) and crosswind loss functionals enhance numerical stability. Since SUPG stabilization is formulated in the weak form of the PDE, variational PINNs are a natural candidate for these problems. In this work, we explore different stabilization schemes and their effects on singularly perturbed problems, comparing the accuracy of our results with the existing literature, and demonstrate that stabilized VPINNs outperform the PINNs proposed in the literature. Additionally, we propose a neural network model that predicts the SUPG stabilization parameter along with the solution, a challenging task for conventional methods. We also explore adaptive hard-constraint functions for boundary-layer problems, using neural networks to adjust the slope based on the diffusion coefficient, improving accuracy and reducing the need for hyperparameter tuning.
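As an illustration of the kind of formulation being stabilized, the standard SUPG weak form for the scalar convection-diffusion model problem $-\varepsilon \Delta u + \mathbf{b} \cdot \nabla u = f$ reads as follows (a textbook statement, with $\tau_K$ the element-wise stabilization parameter that the proposed network predicts; the talk's exact loss functional may differ):

```latex
% Find u_h such that, for all test functions v_h:
\varepsilon \, (\nabla u_h, \nabla v_h)
+ (\mathbf{b} \cdot \nabla u_h, \, v_h)
+ \sum_{K} \tau_K \left( -\varepsilon \Delta u_h + \mathbf{b} \cdot \nabla u_h - f, \;
                          \mathbf{b} \cdot \nabla v_h \right)_K
= (f, v_h),
```

where the summed term penalizes the strong residual in the streamline direction, which is what suppresses the spurious oscillations of the plain Galerkin (or plain variational-loss) formulation.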

Domain-Decomposition-Based Distributed Training for FastVPINNs: Variational Physics-Informed Neural Networks (VPINNs) can be computationally expensive to train, especially on larger domains with many elements. To address this, a domain-decomposition-based training approach, known as Finite Basis PINNs (FBPINNs), was proposed in the literature. We extend this approach to variational PINNs. In FBVPINNs (Finite Basis VPINNs), the domain is divided into subdomains, each assigned to a separate neural network; information exchange between subdomains is managed by aggregating gradients and solutions in overlapping regions using smooth, differentiable window functions. This transforms a complex global optimization into smaller local optimization problems, significantly reducing training times and mitigating spectral bias on higher-frequency problems. Additionally, we present an MPI-based implementation of FBVPINNs for distributed training on problems with lower-frequency solutions.
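The window-function idea above can be sketched in a few lines of NumPy: overlapping subdomains each get a smooth, compactly supported bump, and the bumps are normalized into a partition of unity so that subdomain-network outputs blend differentiably as $u(x) = \sum_j w_j(x)\, u_j(x)$. The specific bump shape, centers, and widths here are illustrative choices, not the ones used in the talk:

```python
import numpy as np

def window(x, center, width):
    """Smooth bump supported on [center - width, center + width]."""
    t = np.clip((x - center) / width, -1.0, 1.0)
    # exp(1 - 1/(1 - t^2)) is infinitely differentiable and vanishes at |t| = 1.
    return np.where(np.abs(t) < 1.0,
                    np.exp(1.0 - 1.0 / (1.0 - t**2 + 1e-12)),
                    0.0)

# Four overlapping subdomains covering [0, 1]; each window spans 0.5,
# so adjacent windows overlap by 0.25 (hypothetical layout).
centers = np.array([0.125, 0.375, 0.625, 0.875])
width = 0.25

x = np.linspace(0.0, 1.0, 201)
raw = np.stack([window(x, c, width) for c in centers])   # (n_sub, n_pts)

# Normalize so the windows sum to one at every point (partition of unity);
# each subdomain network's output is then weighted by its window.
weights = raw / raw.sum(axis=0, keepdims=True)
```

Because each window (and hence each blending weight) is smooth, gradients of the composed global solution flow cleanly into every subdomain network during training, which is what allows the global problem to be split into smaller, independently parameterized pieces.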


ALL ARE WELCOME