
Minisymposium Presentation

IPPL: A Massively Parallel, Performance Portable C++ Library for Particle-Mesh Methods and Efficient Solvers

Monday, June 3, 2024, 12:30 - 13:00 CEST
Climate, Weather and Earth Sciences
Chemistry and Materials
Computer Science and Applied Mathematics
Humanities and Social Sciences
Engineering
Life Sciences
Physics

Presenter

Sonali Mayani - Paul Scherrer Institute

Sonali studied Physics at the École Polytechnique Fédérale de Lausanne (EPFL), completing her Bachelor's and Master's degrees with a year abroad at the National University of Singapore. After graduating, she worked as a Research Engineer at the Barcelona Supercomputing Center (BSC), where she carried out performance analysis of climate physics codes. Currently, Sonali is pursuing a PhD in computational physics at the Paul Scherrer Institut/ETH Zürich, focusing on efficient and massively parallel solvers for particle dynamics simulations in high-performance computing.

Description

We present the Independent Parallel Particle Layer (IPPL), a performance-portable C++ library for particle-in-cell methods. IPPL builds on Kokkos (a performance-portability abstraction layer), heFFTe (a library for large-scale distributed FFTs), and MPI (the Message Passing Interface) to deliver a portable, massively parallel toolkit for particle-mesh methods. Such a framework also serves as a test bed for new algorithms that aim to improve the runtime and efficiency of large-scale simulations, for example in the beam and plasma physics communities. Concretely, we have implemented an efficient and portable free-space solver for the Poisson equation based on the algorithm of Vico et al. (2016), sketched below. This fast solver converges spectrally, as opposed to the second-order convergence of the state-of-the-art Hockney-Eastwood method. Because the new solver reaches a given accuracy on coarser grids, it enables higher-resolution simulations with a lower memory footprint, which is especially important on GPUs. Finally, we show scaling studies on the Perlmutter machine at NERSC, on both CPUs and GPUs, with parallel efficiency remaining above 50% in the strong-scaling case.
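As a sketch of the key idea behind the Vico et al. (2016) solver (the notation here is ours, not IPPL's exact formulation): the free-space Green's function of the Laplacian is truncated at a radius L large enough to cover the interaction region, which leaves the solution unchanged inside the domain of interest but makes the kernel's Fourier transform an entire function, so the FFT-based convolution converges spectrally rather than at second order:

```latex
% Free-space Poisson problem solved by kernel convolution:
\nabla^2 \phi = -\frac{\rho}{\varepsilon_0},
\qquad
\phi(\mathbf{x}) = \frac{1}{\varepsilon_0}
  \int G(\mathbf{x}-\mathbf{y})\, \rho(\mathbf{y})\, \mathrm{d}\mathbf{y},
\qquad
G(\mathbf{r}) = \frac{1}{4\pi\,|\mathbf{r}|}.

% Truncating G at radius L gives a smooth (entire) Fourier transform,
% which is the source of the spectral convergence:
G_L(\mathbf{r}) = \frac{\mathbb{1}_{\{|\mathbf{r}|\le L\}}}{4\pi\,|\mathbf{r}|},
\qquad
\widehat{G_L}(\mathbf{k})
  = \frac{2\sin^2\!\left(L\,|\mathbf{k}|/2\right)}{|\mathbf{k}|^2}.
```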
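The performance-portability claim rests on Kokkos: a single C++ kernel body is compiled for whichever backend (OpenMP, CUDA, HIP, ...) the library is built against. The following is a minimal, self-contained sketch of that pattern for a particle push; the container names and the kick-drift update are illustrative and are not IPPL's actual API.

```cpp
// A minimal sketch (not IPPL's API) of a performance-portable particle
// push with Kokkos: the same parallel_for body runs on CPU or GPU,
// depending only on how Kokkos was configured at build time.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int nParticles = 1 << 20;
        const double dt = 1e-3;   // time step (illustrative units)
        const double qm = 1.0;    // charge-to-mass ratio (illustrative)

        // Structure-of-arrays particle storage; the Views live in the
        // memory space of the default execution space (host or device).
        Kokkos::View<double*[3]> pos("position", nParticles);
        Kokkos::View<double*[3]> vel("velocity", nParticles);
        Kokkos::View<double*[3]> E("field_at_particle", nParticles);

        // Kick-drift update, written once for all backends.
        Kokkos::parallel_for("push", nParticles, KOKKOS_LAMBDA(const int i) {
            for (int d = 0; d < 3; ++d) {
                vel(i, d) += qm * E(i, d) * dt;  // kick from the field
                pos(i, d) += vel(i, d) * dt;     // drift
            }
        });
        Kokkos::fence();  // make sure the device kernel has completed
    }
    Kokkos::finalize();
    return 0;
}
```

In IPPL the same idea is wrapped in particle and field containers, so solver and time-stepping code is written once and runs on both the CPU and GPU partitions of a machine such as Perlmutter.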

Authors