Minisymposium

MS3G - High Performance Computing for Magnetic Fusion Applications - Part I

Tuesday, June 4, 2024
11:00 - 13:00 CEST
HG F 26.3

Session Chair

Thorsten Kurth (NVIDIA Inc.)

Description

This series of three minisymposia is dedicated to frontier challenges in magnetic fusion research.

(1) Machine Learning and Quantum Computing: the four speakers will cover various aspects of machine learning, from real-time control of tokamaks to turbulence simulations to HPC issues. One talk will be devoted to quantum computing and will examine opportunities for its application in fusion plasma physics.

(2) New developments for Edge and Scrape-Off Layer (SOL) simulations: this is recognized as a frontier domain involving significant challenges at various levels. Three talks will be devoted to progress on three different kinetic codes, while a fourth will present a generalization of gyrokinetic models to magnetized sheath conditions.

(3) Beyond gyrokinetic models: standard gyrokinetic theories have limitations that prevent them from being applied as-is in various situations, in particular in the presence of the steep gradients found in the outer plasma region. Advanced kinetic simulations that go beyond the standard gyrokinetic approach used in magnetic fusion will be presented, and the relation between fully kinetic, gyrokinetic, and drift-kinetic models and their MHD limit will be discussed.

In all three sessions, the latest HPC applications in the field will be emphasized.

Presentations

11:00 - 11:30 CEST
Towards Neural Green's Operators for Magnetic Fusion

Operator networks have emerged as promising machine learning tools for reduced-order modeling of a wide range of physical systems described by partial differential equations (PDEs). This work describes a new architecture for operator networks that approximates the Green's operator of a linear PDE. Such a "Neural Green's Operator" (NGO) acts as a surrogate for the PDE solution operator: it maps the PDE's input functions (e.g. forcings, boundary conditions, material parameters) to its solution. We apply NGOs to relevant canonical PDEs, examine whether the NGO architecture leads to significant computational benefits, and conclude the discussion with numerical examples.

Michael Abdelmalik (Eindhoven University of Technology); Jonathan Citrin (DeepMind); and Josefine Proll, Joost Prins, and Hugo Melchers (Eindhoven University of Technology)
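
As a rough illustration of the idea, the sketch below trains a small network G_theta(x, y) to play the role of a Neural Green's Operator for the 1D Poisson problem, so that the surrogate solution map is a quadrature sum over the learned kernel. This is a minimal PyTorch sketch under assumed details (network size, quadrature rule, training setup), not the architecture presented in the talk.

# A minimal PyTorch sketch of the NGO idea for the 1D Poisson problem
# -u'' = f on [0, 1] with homogeneous Dirichlet boundary conditions: a small
# network G_theta(x, y) approximates the Green's function, and the surrogate
# solution operator is the quadrature sum u(x) ~ sum_j w_j G_theta(x, y_j) f(y_j).
# Network size, quadrature rule, and training setup are illustrative assumptions.
import torch
import torch.nn as nn

class GreensNet(nn.Module):
    """MLP mapping a point pair (x, y) to a Green's-kernel value."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x, y):
        # x: (n_x, 1), y: (n_y, 1) -> kernel matrix of shape (n_x, n_y)
        xx = x.expand(-1, y.shape[0]).unsqueeze(-1)
        yy = y.T.expand(x.shape[0], -1).unsqueeze(-1)
        return self.net(torch.cat([xx, yy], dim=-1)).squeeze(-1)

def apply_ngo(G, x, y, w, f_vals):
    """Surrogate solution operator: u(x) = sum_j w_j G(x, y_j) f(y_j)."""
    return G(x, y) @ (w * f_vals)

# Reference data from the exact Green's function of this problem,
# G(x, y) = min(x, y) * (1 - max(x, y)); in practice the reference
# solutions would come from a numerical solver.
n_quad = 64
y = torch.linspace(0.0, 1.0, n_quad).unsqueeze(1)
x = y.clone()
w = torch.full((n_quad,), 1.0 / n_quad)          # uniform quadrature weights
G_exact = torch.minimum(x, y.T) * (1.0 - torch.maximum(x, y.T))

G = GreensNet()
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for step in range(2000):
    f_vals = torch.randn(n_quad)                 # random forcing sample
    u_ref = G_exact @ (w * f_vals)               # reference solution
    loss = ((apply_ngo(G, x, y, w, f_vals) - u_ref) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Once trained, the same network can be evaluated for any new forcing without re-solving the PDE, which is the source of the computational benefit the talk examines.
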
11:30 - 12:00 CEST
Artificial Intelligence/Machine Learning/HPC Acceleration of Progress in Fusion Energy R&D

The US goal (March 2022) to deliver a Fusion Pilot Plant [1] has underscored the urgency of accelerating the fusion energy development timeline. This will rely heavily on validated scientific and engineering advances driven by HPC together with advanced statistical methods featuring artificial intelligence/deep learning/machine learning (AI/DL/ML) that properly embrace verification, validation, and uncertainty quantification (VVUQ). Especially time-urgent is the need to predict and avoid large-scale "major disruptions" in tokamak systems. This keynote highlights the deployment of recurrent and convolutional neural networks in Princeton's deep learning code "FRNN", which enabled the first adaptable predictive DL model capable of efficient "transfer learning" while delivering validated predictions of disruptive events across prominent tokamak devices [2]. Moreover, this AI/DL capability can provide in real time not only a "disruption score", an indicator of the probability of an imminent disruption, but also a "sensitivity score" indicating the underlying reasons for the predicted disruption [3]. Real-time prediction and control has recently been significantly advanced with a novel surrogate model/HPC simulator ("SGTC") [4], a first-principles-based prediction and control surrogate necessary for projections to future experimental devices (e.g., ITER, FPPs) for which no "ground truth" observational data exist.

William Tang (Princeton University, Princeton Plasma Physics Lab)
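
As a hedged illustration of the kind of model described (not the actual FRNN code), the sketch below wires a 1D convolution and an LSTM into a per-timestep disruption-score predictor over multichannel diagnostic time series; the channel count, window length, and layer sizes are invented for the example.

# A minimal sketch of a recurrent disruption predictor: an LSTM consumes
# multichannel diagnostic time series and emits a per-timestep "disruption
# score" in [0, 1]. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_channels=14, hidden=128):
        super().__init__()
        # The 1D convolution extracts local features from each time window;
        # the LSTM integrates them into a running estimate of disruptivity.
        self.conv = nn.Conv1d(n_channels, 32, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, signals):
        # signals: (batch, time, channels) of normalized diagnostics
        h = torch.relu(self.conv(signals.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(h)                 # (batch, time, hidden)
        return torch.sigmoid(self.head(out))  # disruption score per timestep

# Usage: score a batch of 1000-step shots with 14 diagnostic channels.
model = DisruptionPredictor()
shots = torch.randn(8, 1000, 14)   # stand-in for real diagnostic data
scores = model(shots)              # (8, 1000, 1); an alarm would fire when
                                   # the score crosses a tuned threshold
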
12:00 - 12:30 CEST
Exploration of Quantum Computing for Fusion Energy Science Applications

Quantum computing promises to deliver large gains in computational power that can potentially benefit a number of Fusion Energy Science (FES) application areas. We will review our recent efforts [1] to develop and extend quantum algorithms for both classical and quantum FES-relevant calculations, as well as to perform calculations on present-day quantum hardware platforms. We have developed and explored quantum algorithms that can compute nonlinear and non-Hamiltonian dynamics by simulating the Koopman-von Neumann and Liouville equations; perform eigenvalue estimation for the generalized eigenvalue problems common in plasma physics and MHD theory; simulate nonlinear wave-wave interactions; and explore the chaotic dynamics of both quantum and classical systems. We have implemented toy models of these algorithms, including Grover's search, nonlinear three-wave interactions, and the chaotic dynamics of the quantum sawtooth map (a simple model for wave-particle interactions), on state-of-the-art quantum computing architectures to test the fidelity of emerging quantum hardware. The fidelity of the experimental results matches noise models that include decay and dephasing processes, and highlights key differences between state-of-the-art quantum computing hardware platforms.

[1] I. Joseph, Y. Shi, M. D. Porter, et al., Phys. Plasmas 30, 010501 (2023).

Ilon Joseph (Lawrence Livermore National Laboratory)
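
For context, the quantum sawtooth map mentioned above can be simulated classically in a few lines. The sketch below assumes the standard kicked form of the map (a kick diagonal in the angle basis and a free rotation diagonal in the momentum basis, with FFTs switching between them) and illustrative parameter values; on quantum hardware, this one-period unitary would instead be compiled into a qubit circuit.

# A minimal state-vector simulation of the quantum sawtooth map.
# One period applies the kick exp(i k (theta - pi)^2 / 2) in the angle
# basis and the rotation exp(-i T m^2 / 2) in the momentum basis.
# Parameter values below are illustrative, not from the talk.
import numpy as np

N = 2 ** 8                                   # Hilbert-space dimension (8 qubits)
theta = 2.0 * np.pi * np.arange(N) / N       # angle grid
m = np.fft.fftfreq(N, d=1.0 / N)             # integer momentum quantum numbers
K = 1.5                                      # classical chaos parameter K = k*T
T = 2.0 * np.pi / N                          # effective Planck constant
k = K / T

kick = np.exp(1j * k * (theta - np.pi) ** 2 / 2.0)   # diagonal in angle basis
rot = np.exp(-1j * T * m ** 2 / 2.0)                 # diagonal in momentum basis

psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                                 # start in the m = 0 momentum state
for _ in range(200):                         # iterate the map for 200 periods
    psi = np.fft.fft(kick * np.fft.ifft(psi)) * rot

prob = np.abs(psi) ** 2                      # final momentum distribution
print("<m^2> =", np.sum(m ** 2 * prob))      # spread: diffusion vs. localization

Comparing such an ideal state-vector result against hardware runs is how the fidelity of noisy quantum devices is assessed in practice.
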
12:30 - 13:00 CEST
Scientific Machine Learning to Optimize Plasma Turbulence Simulations

Controlled fusion offers the promise of sustainable and safe energy production on Earth. In magnetic fusion devices, the power gain increases nonlinearly with the energy confinement time, so the quality of the plasma energy confinement largely determines the size, and therefore the cost, of a fusion reactor. Unfortunately, small-scale turbulence limits the quality of confinement in most fusion devices, which makes modelling of turbulent transport mandatory to find routes towards improved confinement regimes. Numerical simulations are based on a kinetic description of the plasma that can only be performed on the most powerful supercomputers. The gyrokinetic GYSELA code runs efficiently on several hundred thousand CPU cores. With a consumption of 150 million CPU hours per year, the code makes massive use of petascale computing capacities and manipulates petabytes of data. However, many challenges remain before a full description of electron dynamics, which will require exascale simulations, can be achieved. In this context, scientific machine learning could play an important role in optimizing storage and the number of CPU hours consumed. We will present our ongoing work on integrating in-situ diagnostics that use artificial intelligence techniques for data compression and for automatic detection of anomalies or rare events.

Virginie Grandgirard (CEA); David Zarzoso (CNRS); Robin Varennes (National University of Singapore); and Feda Almuhisen, Kevin Obrejan, and Julien Bigot (CEA)
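
As a hedged sketch of the kind of in-situ AI diagnostic described (not the GYSELA pipeline itself), the example below compresses turbulence-like snapshots with a convolutional autoencoder and flags snapshots whose reconstruction error exceeds a threshold as candidate anomalies or rare events; the grid size, latent dimension, and threshold are illustrative assumptions.

# A minimal autoencoder sketch: snapshots are compressed to a small latent
# code (storage reduction), and a large reconstruction error flags a
# snapshot as a potential anomaly or rare event. Sizes are illustrative.
import torch
import torch.nn as nn

class SnapshotAutoencoder(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        # Encoder: 64x64 snapshot -> latent code (the compressed record)
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent),
        )
        # Decoder: latent code -> reconstructed snapshot
        self.dec = nn.Sequential(
            nn.Linear(latent, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)            # compressed representation to archive
        return self.dec(z), z

# In-situ style usage: compress a batch of snapshots and flag outliers.
model = SnapshotAutoencoder()
snapshots = torch.randn(16, 1, 64, 64)     # stand-in for simulation fields
recon, codes = model(snapshots)
err = ((recon - snapshots) ** 2).mean(dim=(1, 2, 3))
threshold = 1.0                            # tuned on nominal data in practice
anomalies = err > threshold                # candidate rare events to keep in full

Storing the latent codes instead of full fields, and keeping full resolution only for flagged snapshots, is one way such a diagnostic could reduce both storage and I/O costs.
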