Minisymposium

MS6E - Julia for HPC: Tools and Applications - Part II

Wednesday, June 5, 2024, 11:30 - 13:30 CEST, HG E 3

Session Chair

Thorsten Kurth (NVIDIA Inc.)

Description

Performance portability and scalability on large-scale heterogeneous hardware are crucial challenges for current scientific software development. Beyond software engineering considerations, workflows that use large datasets to constrain physical models are also emerging and are indispensable for developing, e.g., digital twins. GPU computing and differentiable programming are leading-edge tools that offer a promising way to combine physics-based simulations with novel machine learning and AI-based methods to address interdisciplinary problems in science. The Julia language leverages both, as it includes first-class support for various accelerator types and an advanced compiler interface with native automatic differentiation capabilities. Julia makes it possible to differentiate efficiently through both CPU and GPU code without a significant impact on performance. The goal of this minisymposium is to bring together scientists who work on or are interested in large-scale Julia HPC development, with a particular focus on the tool stack needed for automatic differentiation and machine learning in the Julia GPU ecosystem, and on applications built on top of it. The selection of speakers, with expertise spanning from computer to domain science, offers a unique opportunity to learn about the latest developments in Julia for HPC to drive discoveries in the natural sciences.
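
As a minimal illustration of the two ingredients highlighted above, accelerator support and native automatic differentiation, the following sketch uses CUDA.jl for GPU array broadcasting and Enzyme.jl for reverse-mode differentiation of a plain Julia function. It assumes both packages are installed and a CUDA-capable GPU is available, and it is not tied to any of the codes presented in this session.

```julia
# Minimal sketch: GPU array programming and automatic differentiation in Julia.
# Assumes CUDA.jl and Enzyme.jl are installed and a CUDA-capable GPU is present.
using CUDA, Enzyme

# GPU computing: array code runs on the device through broadcasting.
a = CUDA.rand(Float32, 1_000)      # array allocated on the GPU
b = 2f0 .* a .+ 1f0                # fused broadcast compiled to a GPU kernel

# Differentiable programming: reverse-mode AD of a plain Julia function.
f(x) = x^3 + 2x
dfdx, = autodiff(Reverse, f, Active, Active(1.5))[1]
println(dfdx)                      # 3 * 1.5^2 + 2 = 8.75
```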

Presentations

11:30 - 12:00 CEST
Adaptively Coupled Multiphysics Simulations with Trixi.jl

We extended the capabilities of the numerical simulation framework Trixi.jl to simulate adaptively coupled multiphysics systems. Coupling is performed through the boundary values of the systems, and the coupling functions can be freely defined depending on the physical nature of the interface. This allows us to couple any pair of systems, such as the Navier-Stokes equations with the magnetohydrodynamic equations. This is particularly useful for hierarchical systems found, e.g., in astrophysics, where we can use a complex model for a small part of the domain and a simplified model for a larger part, greatly reducing the computational cost and run time. To account for dynamic changes in the physics that needs to be solved at any given point in space, we support adaptively coupled domains. The criteria for changing the domain boundaries can be freely defined and tailored to the problem. One application is the propagation of magnetic fields in space, where we solve the magnetohydrodynamic equations only for the part of the domain with a significant magnetic field.
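
As a plain-Julia illustration of coupling through boundary values (it does not use or reproduce the Trixi.jl API), the sketch below advances two 1D heat equations with different diffusivities on abutting grids; at each step, the interface node of each domain uses the neighbouring domain's interface value as its missing neighbour. Grid size, diffusivities, and boundary values are arbitrary illustrative choices.

```julia
# Illustrative sketch of boundary-value coupling between two 1D diffusion
# domains (plain Julia, not the Trixi.jl interface).
function coupled_diffusion(; nx = 64, nsteps = 10_000, DL = 1.0, DR = 0.1)
    dx = 0.5 / nx                  # each domain covers half of the unit interval
    dt = 0.2 * dx^2 / max(DL, DR)  # explicit time step within the stability limit
    uL = zeros(nx); uL[1] = 1.0    # left domain, fixed outer boundary value
    uR = zeros(nx)                 # right domain, outer boundary fixed at 0
    for _ in 1:nsteps
        uLn, uRn = copy(uL), copy(uR)
        for i in 2:nx-1            # interior updates within each domain
            uLn[i] = uL[i] + dt * DL * (uL[i-1] - 2uL[i] + uL[i+1]) / dx^2
            uRn[i] = uR[i] + dt * DR * (uR[i-1] - 2uR[i] + uR[i+1]) / dx^2
        end
        # Coupling through boundary values: each interface node uses the
        # neighbouring domain's interface value as its missing neighbour.
        uLn[end] = uL[end] + dt * DL * (uL[end-1] - 2uL[end] + uR[1]) / dx^2
        uRn[1]   = uR[1]   + dt * DR * (uL[end]   - 2uR[1]  + uR[2]) / dx^2
        uL, uR = uLn, uRn
    end
    return uL, uR
end

uL, uR = coupled_diffusion()       # coupled temperature fields after nsteps
```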

Simon Candelaresi (University of Stuttgart, High-Performance Computing Center Stuttgart)
12:00 - 12:30 CEST
GPU4GEO: Frontier GPU Multiphysics Solvers Using Julia

The GPU4GEO project aims to develop new High-Performance Computing (HPC) tools for modelling geodynamics and ice sheet dynamics, written in the Julia language. This initiative responds to the practical demands of HPC, particularly the need for optimal performance in supercomputing environments that rely on GPU accelerators. We will present our flagship applications JustRelax.jl (geodynamics) and FastIce.jl (ice flow). These applications offer a high-level API for massively parallel thermo-mechanical Stokes solvers based on the highly scalable pseudo-transient iterative method. We will further discuss the software they are built upon: (i) portability to multi-GPU systems (ParallelStencil.jl and ImplicitGlobalGrid.jl); (ii) solver-agnostic material physics computations (GeoParams.jl); and (iii) particle-in-cell advection (JustPIC.jl). We also tackle the increasing demand for merging data-driven workflows with physics-based modelling, utilising Julia’s native support for differentiable programming. Leveraging automatic differentiation (AD), we efficiently compute model sensitivities, offering a unified framework for both inverse modelling and physics-informed machine learning. We will demonstrate Julia’s powerful AD capabilities via Enzyme.jl for computing adjoint sensitivities in our solvers, and present benchmarks showcasing multi-GPU performance and scalability.
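
The pseudo-transient method underlying these solvers integrates a damped pseudo-time evolution until the residual of the target equation vanishes. The following plain-Julia sketch applies this idea to 1D steady-state diffusion; it illustrates the iteration only, not the JustRelax.jl or ParallelStencil.jl implementations, and the pseudo-time step and damping factor are illustrative choices.

```julia
# Plain-Julia sketch of an accelerated pseudo-transient iteration for
# 1D steady-state diffusion d/dx(D dT/dx) = 0 with T(0) = 1, T(1) = 0.
function pt_diffusion_1d(; nx = 128, D = 1.0, tol = 1e-8, maxiter = 100_000)
    dx   = 1.0 / (nx - 1)
    T    = zeros(nx); T[1] = 1.0   # Dirichlet boundary values
    dTdt = zeros(nx)               # pseudo-velocity carrying the damping memory
    dτ   = dx^2 / D / 2.1          # pseudo-time step within the stability limit
    damp = 1.0 - 4.0 / nx          # damping factor accelerating convergence
    err, iter = Inf, 0
    while err > tol && iter < maxiter
        err = 0.0
        for i in 2:nx-1            # residual of the steady-state equation
            R       = D * (T[i-1] - 2T[i] + T[i+1]) / dx^2
            dTdt[i] = damp * dTdt[i] + R
            err     = max(err, abs(R))
        end
        for i in 2:nx-1            # damped pseudo-time update of the solution
            T[i] += dτ * dTdt[i]
        end
        iter += 1
    end
    return T, iter, err
end

T, iters, err = pt_diffusion_1d()  # converges to the linear steady profile
```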

Albert de Montserrat and Ivan Utkin (ETH Zurich), Ludovic Räss (University of Lausanne), and Boris Kaus (Johannes Gutenberg University Mainz)
12:30 - 13:00 CEST
Enhancing GPU-Accelerated Scientific Computing in Julia with Ginkgo.jl

Solving sparse linear systems efficiently on GPU-accelerated systems is a highly specialized and demanding task. Implementing efficient solvers requires not only deep insight into the problem but also an extensive understanding of the underlying hardware and the respective platform-specific languages. This interdisciplinary orchestration poses significant challenges in scientific software development.

This talk presents recent developments of Ginkgo.jl, a Julia wrapper package for the modern C++-based sparse linear algebra library Ginkgo. We demonstrate its performance through a series of benchmarks and showcase an example in which we solve a sparse linear system, assembled with the finite element toolbox Ferrite.jl, using a preconditioned iterative solver. We also highlight its interoperability with existing packages in the Julia package ecosystem.
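
For readers unfamiliar with this workflow, the sketch below shows the generic pattern of solving a sparse system with a preconditioned iterative method in Julia, here using SparseArrays and IterativeSolvers.jl with a simple Jacobi (diagonal) preconditioner as stand-ins; it does not reproduce the Ginkgo.jl or Ferrite.jl APIs used in the talk.

```julia
# Sketch only: preconditioned iterative solution of a sparse SPD system,
# using SparseArrays + IterativeSolvers.jl as stand-ins for the Ginkgo.jl /
# Ferrite.jl workflow described in the talk.
using SparseArrays, LinearAlgebra, IterativeSolvers

n = 1_000
# 1D Poisson matrix as a stand-in for a finite-element system matrix
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)

P = Diagonal(A)                    # Jacobi (diagonal) preconditioner
x = cg(A, b; Pl = P)               # preconditioned conjugate gradients
@show norm(A * x - b) / norm(b)    # relative residual of the computed solution
```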

You Wu (ETH Zurich) and Tobias Ribizel and Hartwig Anzt (Technical University of Munich)
13:00 - 13:30 CEST
Advanced HPC Workflows for Urgent and Interactive Computing Using Julia

Modern data-driven discovery algorithms and workflows require the tight integration of Simulation, Data Analysis, and AI. As a result, modern workflows all too often fail to mesh well with HPC environments, which are optimized for isolated applications over integrated workflows, and for high utilization over fast feedback.

An example of this is real-time data analysis for experiment steering: time at large scientific instruments (such as particle accelerators, electron microscopes, or telescopes) is a scarce resource. Yet modern instruments often produce data at a rate that outpaces local computing resources. Therefore, scientists are turning to live data processing at HPC centers in order to gain the necessary insight to effectively steer their experiments.

In this talk, we demonstrate how Julia’s unique language features make it a natural choice for developing tightly integrated Simulation + Analysis + AI workflows. We will also show an example of a workflow that flexibly grows its pool of compute nodes on an HPC system, thereby overcoming the constraints of the resource scheduler.
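
The sketch below illustrates the generic idea of growing a worker pool at runtime with Julia’s Distributed standard library; it is not the specific workflow presented in the talk. On an HPC system, the additional workers would typically be obtained through the batch scheduler (e.g. with ClusterManagers.jl) rather than spawned locally as done here.

```julia
# Generic sketch: a workflow that grows its worker pool while running.
# Local workers are used here; on an HPC system the addprocs calls would go
# through the scheduler (e.g. ClusterManagers.jl) instead.
using Distributed

addprocs(4)                                   # initial worker pool
@everywhere analyze(chunk) = sum(abs2, chunk) # placeholder analysis kernel

data    = [rand(10_000) for _ in 1:32]        # incoming "instrument" data chunks
results = pmap(analyze, data)                 # processed on the current pool

addprocs(4)                                   # grow the pool as the data rate increases
@everywhere analyze(chunk) = sum(abs2, chunk) # define the kernel on the new workers too
more    = pmap(analyze, [rand(10_000) for _ in 1:64])
```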

Johannes Blaschke (Lawrence Berkeley National Laboratory, NERSC)