
Minisymposium

MS1G - Novel Algorithms and HPC Implementations for Exascale Particle-In-Cell Methods

Monday, June 3, 2024
11:30
-
13:30
CEST
HG F 26.3


Session Chair

Thorsten Kurth (NVIDIA Inc.)

Description

The primary aim of this minisymposium is to assemble a focused discussion on the challenges, emerging best practices, and innovative algorithms associated with Particle-in-Cell (PIC) methods on exascale architectures. The minisymposium specifically addresses demanding scientific problems that can be tackled with cutting-edge exascale techniques rooted in the PIC paradigm. Its focus is on the complexities encountered at the exascale level rather than on the success stories that have already been widely recounted. In this setting, we aim to foster active dialogue, encouraging participants to delve into unresolved issues and explore the potential of PIC at the exascale.

Presentations

11:30
-
12:00
CEST
A New Lagrangian-Based Approach to Studying Highly-Turbulent Fluid Flows

Despite their inherent diffusivity and the additional effort required for modelling turbulent eddies on the subgrid scale, Eulerian-based approaches are nowadays widely accepted tools for the simulation of turbulent fluid flows. Lagrangian methods, on the other hand, represent a viable alternative as they explicitly model the small-scale turbulence at the cost of higher computational complexity. In this talk I present a new Lagrangian approach based on deformable elliptical parcels, which has proven to be highly effective in resolving turbulence. The Elliptical Parcel-In-Cell (EPIC) method represents a fluid flow with a set of space-filling ellipsoids that deform due to the local strain (velocity gradient). Extremely elongated parcels are split, while very small parcels are merged with their nearest neighbouring parcel, resulting in a natural process of turbulent mixing. Performance benchmarks will demonstrate its scalability up to 16,384 cores.
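
As a rough illustration of the split/merge bookkeeping described in the abstract, the sketch below shows how over-elongated parcels might be split and undersized parcels absorbed by their nearest neighbour. The Parcel fields, thresholds, and one-dimensional placement are assumptions chosen for brevity, not the actual EPIC implementation.

```python
# Minimal sketch of an EPIC-style split/merge pass (illustrative only).
from dataclasses import dataclass, replace
import math

@dataclass
class Parcel:
    x: float             # parcel centre (1D here for brevity)
    volume: float
    aspect_ratio: float   # ratio of longest to shortest semi-axis

MAX_ASPECT = 4.0   # assumed threshold: split parcels stretched beyond this ratio
MIN_VOLUME = 1e-3  # assumed threshold: merge parcels smaller than this

def split(parcel: Parcel) -> list[Parcel]:
    """Replace an over-elongated parcel by two halves along its major axis."""
    half = replace(parcel, volume=parcel.volume / 2,
                   aspect_ratio=parcel.aspect_ratio / 2)
    offset = 0.1 * math.sqrt(parcel.volume)  # illustrative placement only
    return [replace(half, x=parcel.x - offset), replace(half, x=parcel.x + offset)]

def step(parcels: list[Parcel]) -> list[Parcel]:
    """One split/merge pass: split elongated parcels, merge tiny ones into neighbours."""
    out: list[Parcel] = []
    for p in parcels:
        out.extend(split(p) if p.aspect_ratio > MAX_ASPECT else [p])
    keep = [p for p in out if p.volume >= MIN_VOLUME]
    # absorb each tiny parcel into its nearest surviving neighbour, conserving volume
    for tiny in (p for p in out if p.volume < MIN_VOLUME):
        nearest = min(keep, key=lambda q: abs(q.x - tiny.x))  # assumes at least one survivor
        nearest.volume += tiny.volume
    return keep

if __name__ == "__main__":
    print(step([Parcel(0.0, 1.0, 6.0), Parcel(0.5, 5e-4, 1.0)]))
```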

Matthias Frey and David Dritschel (University of St Andrews) and Steven Böing (University of Leeds)
12:00
-
12:30
CEST
Particle-In-Cell (PIC) in Extreme Plasma Conditions: Strategies for HPC in QED-Dominated Interactions

To answer the needs of modelling strong-field interactions, PIC codes require tailored additional development. In particular, we have incorporated additional physics such that the in-house developed OSIRIS framework can be used to simulate plasmas in extreme conditions, from the classical to the fully QED-dominated interaction regime. The OSIRIS-QED module has an embedded Monte Carlo algorithm to account for quantum processes, which has been thoroughly benchmarked against known analytical results and against codes used in the particle physics community (e.g. GUINEA-PIG). Besides physics developments, simulating the extreme regime poses particular challenges in memory, load balance, and temporal discretization, which required code developments oriented towards performance. We have developed a macro-particle merging algorithm and tailored load-balance techniques to address the exponentially rising number of particles in QED cascades. This was followed by the development of a semi-analytical particle pusher for better accuracy and by coupling the OSIRIS quasi-3D geometry with the OSIRIS-QED module. More recent developments of my team include Bethe-Heitler pair production and cross-section evaluation based on machine learning. All these developments aim to incorporate new physics while minimizing the impact on performance and scalability.
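
The sketch below illustrates the general idea behind macro-particle merging in QED cascades: particles that are close in phase space are combined into fewer macro-particles while conserving total weight and momentum. The momentum-binning strategy, parameter values, and function name are assumptions for illustration and do not reflect the actual OSIRIS-QED merging scheme.

```python
# Minimal sketch of macro-particle merging by momentum binning (illustrative only).
import numpy as np

def merge_macroparticles(weight, momentum, n_bins=32, max_per_bin=8):
    """Merge particles that fall into the same |p| bin.

    weight   : (N,) macro-particle weights
    momentum : (N, 3) momenta
    Each crowded bin is collapsed to one macro-particle carrying the bin's total
    weight and weight-averaged momentum, so weight and momentum are conserved.
    """
    p_mag = np.linalg.norm(momentum, axis=1)
    bins = np.digitize(p_mag, np.linspace(p_mag.min(), p_mag.max(), n_bins))
    new_w, new_p = [], []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        if len(idx) <= max_per_bin:          # sparse bin: keep particles as they are
            new_w.extend(weight[idx])
            new_p.extend(momentum[idx])
        else:                                # crowded bin: merge into one macro-particle
            w_tot = weight[idx].sum()
            p_mean = (weight[idx, None] * momentum[idx]).sum(axis=0) / w_tot
            new_w.append(w_tot)
            new_p.append(p_mean)
    return np.array(new_w), np.array(new_p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, p = rng.random(10_000), rng.normal(size=(10_000, 3))
    w2, p2 = merge_macroparticles(w, p)
    print(len(w), "->", len(w2), "weight conserved:", np.isclose(w.sum(), w2.sum()))
```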

Marija Vranic (Instituto Superior Técnico)
12:30
-
13:00
CEST
IPPL: A Massively Parallel, Performance Portable C++ Library for Particle-Mesh Methods and Efficient Solvers

We present the Independent Parallel Particle Layer (IPPL), a performance-portable C++ library for particle-in-cell methods. IPPL makes use of Kokkos (a performance portability abstraction layer), HeFFTe (a library for large-scale FFTs), and MPI (Message Passing Interface) to deliver a portable, massively parallel toolkit for particle-mesh methods. One of the advantages of such a framework is its ability to serve as a test bed for new algorithms that seek to improve the runtime and efficiency of large-scale simulations, for example in the beam and plasma physics communities. Concretely, we have implemented an efficient and portable free-space solver for the Poisson equation based on the algorithm suggested by Vico et al. (2016). This fast solver has spectral convergence, as opposed to the second-order convergence of the state-of-the-art Hockney-Eastwood method. The ability to use coarser grids to achieve similar resolutions with the new solver allows for higher-resolution simulations with a lower memory footprint, which is especially important for GPU usage. Finally, we show scaling studies on the Perlmutter machine at NERSC, on both CPUs and GPUs, with efficiencies staying above 50% in the strong-scaling case.
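
The sketch below shows the zero-padded FFT free-space convolution that both the Hockney-Eastwood solver and the Vico et al. (2016) solver build on, here using the truncated Green's function G_hat(k) = 2*sin^2(k*L/2)/k^2 from Vico et al. The grid size, padding factor, truncation radius, normalization, and function name are illustrative assumptions and do not reflect the IPPL implementation.

```python
# Minimal sketch of a zero-padded FFT free-space Poisson solve with the truncated
# Green's function of Vico et al. (illustrative only, not the IPPL solver).
import numpy as np

def free_space_poisson(rho, h, pad=4):
    """Approximately solve -Laplacian(phi) = rho on an n^3 grid with open boundaries."""
    n = rho.shape[0]
    m = pad * n                              # padded grid; must accommodate the kernel support
    k = 2 * np.pi * np.fft.fftfreq(m, d=h)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    L = np.sqrt(3.0) * n * h                 # truncation radius covering the domain diagonal
    with np.errstate(divide="ignore", invalid="ignore"):
        g_hat = 2.0 * np.sin(0.5 * kmag * L) ** 2 / kmag**2
    g_hat[kmag == 0] = 0.5 * L**2            # analytic limit of the truncated kernel at k = 0
    rho_pad = np.zeros((m, m, m))
    rho_pad[:n, :n, :n] = rho
    phi_pad = np.fft.ifftn(np.fft.fftn(rho_pad) * g_hat).real
    return phi_pad[:n, :n, :n]

if __name__ == "__main__":
    n, h = 16, 1.0 / 16
    rho = np.zeros((n, n, n))
    rho[n // 2, n // 2, n // 2] = 1.0 / h**3   # point-like unit charge
    phi = free_space_poisson(rho, h)
    print(phi.shape, float(phi.max()))
```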

Sonali Mayani (Paul Scherrer Institute, ETH Zurich); Antoine Cerfon (Type One Energy, New York University); Matthias Frey (University of St Andrews); Veronica Montanaro (ETH Zurich); Sriramkrishnan Muralikrishnan (Forschungszentrum Jülich); Alessandro Vinciguerra (ETH Zurich); and Andreas Adelmann (Paul Scherrer Institute)
13:00
-
13:30
CEST
Porting Lattice QCD Simulations to Exascale Architectures: Opportunities and Challenges

In this talk, we explore the transition of lattice Quantum Chromodynamics (Lattice QCD) simulations to exascale computing architectures, highlighting the significant interdisciplinary opportunities and challenges inherent in this effort. Lattice QCD, a crucial tool for understanding the strong force within the Standard Model (SM) of particle physics, demands substantial computational resources at HPC centres worldwide. The era of exascale computing opens opportunities to obtain predictions of SM observables with greater accuracy and on larger volumes and finer lattices than currently possible. This advancement could lead to breakthroughs in understanding the properties of hadronic matter, the nature of the early universe, and new physics beyond the SM. The talk will delve into the scalability and portability hurdles that must be overcome to fully leverage exascale capabilities, focusing on the efficient use of heterogeneous computing resources, including GPUs, and on opportunities for cross-disciplinary collaboration in software development and optimization.

Marina Krstic Marinkovic (ETH Zurich, CERN)