Minisymposium

MS6H - Scalable Optimal Control and Learning Algorithms in High Performance Computing

Wednesday, June 5, 2024
11:30
-
13:30
CEST
HG F 26.5

Session Chair

Description

Large-scale scientific computing is increasingly pervasive in many research fields and industrial applications, such as biological modeling of coupled soft-tissue systems, computational fluid dynamics and turbulence, and tsunami inundation via the shallow water equations. Not only can forward simulation of these problems be prohibitively expensive, but the additional computation and storage of gradient or adjoint information further compounds the numerical burden. The situation becomes more complicated still for inverse or optimal control problems, which require many such forward simulations and can suffer from the curse of dimensionality. This burden can be alleviated somewhat via reduced-order surrogate models, randomized compression techniques, and neural network surrogates designed to imitate dynamics or operators. The goal of this minisymposium is to bring together researchers working on finite- and infinite-dimensional control and simulation to discuss new methodologies for analyzing and solving problems in the extreme computing regime. It focuses on new techniques in model reduction for high-fidelity physics simulations; surrogate modeling techniques based on learning; distributed and multilevel optimization on HPC systems; compression and storage with both deterministic and randomized methods; nonsmooth optimization; and adaptive discretizations.

Presentations

11:30
-
12:00
CEST
Adaptive ROM Methods in Optimal Design and Control

Reduced-order modeling (ROM) is already utilized successfully in optimization and control. Based on trust-region methods, new adaptive strategies for reduced basis schemes are introduced. Parameter estimation and optimization problems are considered as numerical examples. The presented results are joint work with B. Azmi, B. Kaltenbacher, M. Kartmann, T. Keil, M. Ohlberger, and A. Petrocchi.
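
The trust-region mechanism behind such adaptive strategies can be illustrated on a toy problem. The sketch below is not the speaker's reduced-basis algorithm: a cheap quadratic model stands in for the ROM surrogate, the objective and all names (`f`, `trust_region_minimize`) are our own, and the point is only the acceptance/adaptation logic — the ratio of actual to predicted decrease decides whether a step is accepted and how the trust radius grows or shrinks, the same logic that drives adaptive refinement of a reduced basis.

```python
import numpy as np

# Toy objective standing in for an expensive high-fidelity model
def f(x):
    return float(np.sum(x ** 2) + 0.1 * np.sum(np.cos(3 * x)))

def grad(x):
    return 2 * x - 0.3 * np.sin(3 * x)

def trust_region_minimize(x, delta=1.0, tol=1e-8, max_iter=100):
    # Classic trust-region loop: a cheap local quadratic model plays the
    # role of the reduced-order surrogate; rho = actual / predicted
    # decrease drives step acceptance and radius adaptation.
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = -delta * g / np.linalg.norm(g)    # Cauchy-type step
        pred = -(g @ step + 0.5 * step @ step)   # model decrease (B = I)
        rho = (f(x) - f(x + step)) / pred if pred > 0 else -1.0
        if rho > 0.1:                            # model was good enough here
            x = x + step
        # expand the radius after very good steps, shrink after poor ones
        delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x
```

In the adaptive ROM setting, a rejected step (small rho) additionally signals that the surrogate should be enriched before retrying.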

Stefan Volkwein (University of Konstanz)
With Thorsten Kurth (NVIDIA Inc.)
12:00
-
12:30
CEST
Hermite Kernel Surrogates for the Value Function of High-Dimensional Nonlinear Optimal Control Problems

Numerical methods for the optimal feedback control of high-dimensional dynamical systems typically suffer from the curse of dimensionality. We devise a mesh-free, data-based approximation method for the value function of high-dimensional optimal control problems, which partially mitigates this problem. The data come from open-loop control problems, which are solved via the first-order necessary conditions, namely Pontryagin's maximum principle. The most informative initial states for the open-loop solves are chosen by a greedy selection strategy. The approximation itself is based on a greedy Hermite-interpolation scheme and incorporates context knowledge through its structure: the value-function surrogate is enforced to vanish at the target state, to be non-negative, and is constructed as a correction of a linearized model. The algorithm is formulated in a matrix-free way, which avoids assembling a large system representing the interpolation conditions. For finite time horizons, convergence of the scheme can be proven both for the value-function surrogate and for the surrogate-controlled dynamical system relative to the optimally controlled one. Experiments support the effectiveness of the scheme, using among others a new academic toy model with an explicitly given value function that may be useful to the community.
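
As an illustration of the structural constraints described in the abstract — the surrogate vanishes at the target state, is non-negative, and corrects a linearized model — here is a minimal sketch. It is not the presented algorithm: the LQR value function of a hypothetical 2-D linear system plays the role of the linearized model, the kernel correction carries untrained (zero) weights standing in for coefficients fitted from open-loop data, and all names are our own.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linear-quadratic baseline: dx/dt = A x + B u
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P = solve_continuous_are(A, B, Q, R)   # linearized value: V_lin(x) = x' P x

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

# Toy kernel correction: centers and weights would be chosen greedily
# from open-loop trajectory data in the actual method (zeros here)
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
weights = np.zeros(len(centers))

def value_surrogate(x, x_target=np.zeros(2)):
    d = x - x_target
    v_lin = d @ P @ d
    corr = sum(w * gaussian_kernel(x, c) for w, c in zip(weights, centers))
    # (d @ d) prefactor forces V(x_target) = 0; clipping forces V >= 0
    return max(v_lin + (d @ d) * corr, 0.0)
```

The point of the sketch is that both constraints hold by construction, independent of the fitted correction coefficients.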

Tobias Ehring (University of Stuttgart)
12:30
-
13:00
CEST
Adaptive Randomized Sketching for Dynamic Nonsmooth Optimization

Dynamic optimization problems arise in many applications, such as optimal flow control, full waveform inversion, and medical imaging. Despite their ubiquity, such problems are plagued by significant computational challenges. For example, memory is often a limiting factor when determining if a problem is tractable, since the evaluation of derivatives requires the entire state trajectory. Many applications additionally employ nonsmooth regularizers such as the L1-norm or the total variation, as well as auxiliary constraints on the optimization variables. We introduce a novel trust-region algorithm for minimizing the sum of a smooth, nonconvex function and a nonsmooth, convex function that addresses these two challenges. Our algorithm employs randomized sketching to store a compressed version of the state trajectory for use in derivative computations. By allowing the trust-region algorithm to adaptively learn the rank of the state sketch, we arrive at a provably convergent method with near optimal memory requirements. We demonstrate the efficacy of our method on a few control problems in dynamic PDE-constrained optimization.
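
The memory-saving idea — storing a compressed sketch of the state trajectory instead of the trajectory itself — can be illustrated with a generic two-sided randomized sketch in the spirit of streaming low-rank approximation. This is not the authors' adaptive, provably convergent algorithm; all dimensions, sketch sizes, and names below are illustrative, and the synthetic low-rank trajectory stands in for PDE states.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, k, l = 200, 500, 12, 25   # state dim, time steps, sketch ranks

# Synthetic rank-5 "state trajectory" X = U @ Vt, never stored explicitly
U = rng.standard_normal((n, 5))
Vt = rng.standard_normal((5, T))
def state_at(t):
    return U @ Vt[:, t]          # column t of X

Omega = rng.standard_normal((T, k))   # column-space test matrix
Psi = rng.standard_normal((l, n))     # row-space test matrix
Y = np.zeros((n, k))                  # will hold X @ Omega
W = np.zeros((l, T))                  # will hold Psi @ X

# Single pass over the time steps: update both sketches as each state
# arrives; memory is O(n*k + l*T) instead of O(n*T) for the full X
for t in range(T):
    x = state_at(t)
    Y += np.outer(x, Omega[t])
    W[:, t] = Psi @ x

# Low-rank reconstruction for use in derivative computations:
# X ≈ Q @ lstsq(Psi @ Q, W), with Q an orthonormal basis of range(Y)
Q, _ = np.linalg.qr(Y)
X_hat = Q @ np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]

X = U @ Vt
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Since the trajectory here has exact rank 5 and the sketch sizes exceed it, the reconstruction is accurate to roundoff; the presented method goes further by letting the trust-region algorithm adapt the sketch rank.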

Robert Baraldi and Drew Kouri (Sandia National Laboratories) and Harbir Antil (George Mason University)
13:00
-
13:30
CEST
Low-Rank PINNs for Model Reduction of Nonlinear Hyperbolic Conservation Laws

Model reduction for hyperbolic PDEs using classical techniques is difficult due to the slow decay of the Kolmogorov n-width, making it necessary to explore new forms of approximation. We will discuss a new approach using deep neural networks endowed with a particular low-rank structure, which we call low-rank Physics-Informed Neural Networks (LR-PINNs). LR-PINNs are a form of implicit neural representation in which the weights and biases belong to linear spaces of small dimension. We will show that entropy solutions to scalar conservation laws can be represented efficiently by such a representation. Numerical examples illustrating the efficacy of the neural network will be shown, and we will also discuss applications of LR-PINNs regarding the so-called failure modes of PINNs.
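
A minimal sketch of the weight structure described above: a layer's weight matrix is constrained to fixed low-rank factors, with only a small coefficient vector trainable, so the full m×n matrix is never formed. This is our illustration of the general low-rank idea, not the speakers' LR-PINN implementation — no training or physics loss is shown, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical low-rank layer: W = U @ diag(s) @ V.T with small rank r.
# U and V are fixed orthonormal bases; only the r entries of s are
# trainable, versus m*n parameters for an unconstrained weight matrix.
m, n, r = 64, 64, 4
U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # fixed left basis
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # fixed right basis

def lr_layer(h, s, b):
    # apply W @ h without ever assembling the m x n matrix W
    return np.tanh(U @ (s * (V.T @ h)) + b)

# Implicit neural representation u(x, t): the network maps a space-time
# coordinate directly to the solution value at that point
s1, b1 = rng.standard_normal(r), np.zeros(m)       # s1 is trainable
W_in = rng.standard_normal((n, 2))                 # input lift
w_out = rng.standard_normal(m)                     # output projection

def u(x, t):
    h = np.tanh(W_in @ np.array([x, t]))
    h = lr_layer(h, s1, b1)                        # low-rank hidden layer
    return w_out @ h
```

Here the hidden layer carries 4 trainable coefficients instead of 64 × 64 = 4096, which is the kind of compression that makes the representation viable despite slow n-width decay.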

Randall LeVeque (University of Washington), Donsub Rim (Washington University in St. Louis), and Gerrit Welper (University of Central Florida)