Minisymposium
MS6G - Modern PDE Discretization Methods and Solvers in a Non-Smooth World
Description
This minisymposium will explore the tension between high-order discretisation methods for PDEs and the fact that many physical phenomena are non-smooth. We will also investigate connections to machine learning, such as the use of reduced-precision arithmetic in both domains. High-order discretisations in space and time can make optimal use of FLOP-bound exascale hardware and have the potential to unlock additional parallelism. However, it is an open question how these methods can be applied to time-dependent PDEs with elliptic constraints. Off-the-shelf preconditioners are not sufficient, and multigrid methods are being developed to solve the resulting large sparse linear systems of equations. Implementing advanced, reliable and performance-portable PDE-based simulation tools requires the combined expertise of specialists from different domains. Real-life codes are starting to use novel discretisation techniques: the UK Met Office explores the solution of the equations of atmospheric fluid dynamics with hybridised finite elements, non-nested multigrid preconditioners and parallel-in-time methods. The ADER-DG ExaHyPE engine is being extended to include elliptic constraints and support implicit timestepping for astrophysics simulations. A discussion session will explore how the advantages of sophisticated PDE solvers and machine learning can be combined productively.
Presentations
While most scientific applications are still computed in double precision, mixed-precision algorithms are becoming more commonplace as a way to improve the performance of an algorithm without overly increasing the resulting error. However, the impact of numerical precision on the results and stability of an algorithm remains difficult to estimate.
We present a study on the impact of using mixed and variable numerical precision in the high-order ADER discontinuous Galerkin method for solving hyperbolic PDEs. As a baseline, the entire algorithm is computed in multiple precisions and the results are compared. We then measure the effects of changing the precision of individual kernels to estimate whether a mixed-precision approach could reduce the overall loss of accuracy. In addition, we simulate two stationary but numerically challenging scenarios, the isentropic vortex for the Euler equations and the resting-lake scenario for the shallow water equations, to see whether variable precision can be used to resolve local stability issues. Finally, we review the effects of numerical precision on the features of Lagrange interpolation, which is commonly used but is susceptible to small changes in the nodal values.
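The sensitivity of Lagrange interpolation to precision mentioned above can be illustrated with a minimal sketch (not taken from the ADER-DG code itself): the same degree-15 Chebyshev interpolant of exp(x) is built and evaluated in double and in single precision, and the maximum pointwise error is compared. All function names here are illustrative.

```python
import numpy as np

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolant through (nodes, values) at points x."""
    result = np.zeros_like(x)
    for i in range(len(nodes)):
        basis = np.ones_like(x)
        for j in range(len(nodes)):
            if j != i:
                basis *= (x - nodes[j]) / (nodes[i] - nodes[j])
        result += values[i] * basis
    return result

# Chebyshev nodes of a degree-15 interpolant for f(x) = exp(x) on [-1, 1]
n = 16
nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
x = np.linspace(-1.0, 1.0, 1001)

for dtype in (np.float64, np.float32):
    nd = nodes.astype(dtype)
    vals = np.exp(nd)
    err = np.max(np.abs(lagrange_eval(nd, vals, x.astype(dtype)) - np.exp(x)))
    print(dtype.__name__, "max interpolation error:", err)
```

In single precision the nodal values and basis products each carry rounding errors of order 1e-7, so the interpolant's accuracy collapses to that level, while the double-precision version stays near machine epsilon; in a mixed-precision scheme such kernels would be candidates to keep in higher precision.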
Parallel-in-time algorithms offer a route to continued parallel scaling for the simulation of space- and time-dependent partial differential equations once spatial parallelism is saturated. The equations relevant to geophysical fluid dynamics pose particular challenges for the convergence of parallel-in-time algorithms, due to the hyperbolicity of the equations and the range of timescales spanned by the physical processes being modelled. In particular, weather and climate models depend on subgrid-scale parameterisations that describe physical processes taking place on scales that are not resolved by the dynamical partial differential equations in space or in time. These parameterisations often contain fast timescales and discontinuities: for example, it is commonly assumed in cloud parameterisations that the amount of water vapour above the saturation concentration is instantaneously converted into cloud droplets. In this talk I will present several parallel-in-time algorithms in the context of the shallow water equations, a commonly used equation set for the testing of new numerical algorithms relevant to weather and climate prediction. I will show how the equation set can be extended to include clouds, focussing on the challenges that this poses to convergence of parallel-in-time algorithms.
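For readers unfamiliar with parallel-in-time methods, the basic predictor-corrector structure can be sketched with the Parareal algorithm on a toy scalar ODE y' = -λy. This is a minimal illustration under stated assumptions (explicit-Euler coarse and fine propagators, a linear test problem), not code from any weather or climate model; all names are illustrative.

```python
import numpy as np

lam = 2.0
T, N = 1.0, 10       # time interval [0, T] split into N coarse slices
y0 = 1.0
dt = T / N

def coarse(y, dt):
    """Cheap propagator: one explicit-Euler step over a whole slice."""
    return y * (1.0 - lam * dt)

def fine(y, dt, substeps=100):
    """Expensive propagator: many small explicit-Euler steps over a slice."""
    h = dt / substeps
    for _ in range(substeps):
        y = y * (1.0 - lam * h)
    return y

# Initial serial coarse sweep gives a first guess at the slice boundaries
U = np.zeros(N + 1)
U[0] = y0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

# Parareal iterations: the fine sweeps over each slice are independent
# and would run in parallel; the coarse correction is propagated serially
for k in range(5):
    F = np.array([fine(U[n], dt) for n in range(N)])  # parallelisable loop
    Unew = np.zeros_like(U)
    Unew[0] = y0
    for n in range(N):
        Unew[n + 1] = coarse(Unew[n], dt) + F[n] - coarse(U[n], dt)
    U = Unew

print("Parareal endpoint:", U[-1], " exact:", np.exp(-lam * T))
```

For this smooth linear problem the iteration converges rapidly to the sequential fine solution; the convergence difficulties described in the talk arise precisely when the right-hand side contains fast timescales or discontinuities such as the instantaneous cloud-droplet conversion.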
The solution of the Einstein--Euler equations in the vast majority of numerical codes is still based on traditional finite difference schemes for the Einstein sector, while it relies on conservative schemes for the matter part. Discontinuous Galerkin (DG) schemes, in spite of many potential advantages, have not yet reached a mature stage. I will compare the performance of a new class of finite difference schemes for the full Einstein--Euler equations with DG schemes, showing possible future directions of research that may promote DG schemes to become the dominant ones on future exascale hardware. In the last part of the talk I will discuss the impact of machine learning on the data analysis of gravitational-wave observations.
This session will be an open discussion of how the respective advantages of sophisticated traditional simulation techniques and ML algorithms can be combined productively.