Paper

AP1C - ACM Papers Session 1C

Monday, June 3, 2024
17:00–18:00 CEST
HG E 1.1


Presentations

17:00–17:30 CEST
Performance Analysis and Optimizations of ERO2.0 Fusion Code

In this paper, we present a thorough performance analysis of ERO2.0, a highly parallel Monte Carlo code for modeling global erosion and redeposition in fusion devices. The study shows that the main bottleneck preventing the code from using resources efficiently is load imbalance at several levels, an imbalance inherent to the problem being solved: particle transport and deposition. Based on the findings of the analysis, we also describe the optimizations implemented in the code to improve its performance on HPC clusters. The proposed optimizations rely on MPI and OpenMP features, making them portable across architectures, and achieve a 3.34x speedup.

Marta Garcia-Gasulla and Joan Vinyals-Ylla-Catala (Barcelona Supercomputing Center) and Juri Romazanov, Christoph Baumann, and Dmitry Matveev (Forschungszentrum Jülich)
With Thorsten Kurth (NVIDIA Inc.)
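The load-imbalance problem the abstract describes can be made concrete with a minimal sketch in plain Python (illustrative only, not the authors' ERO2.0 code): when per-particle work varies systematically, a static block partition overloads some workers, while handing particles out dynamically to the least-loaded worker evens things out. The cost model and worker counts below are hypothetical.

```python
def static_partition(costs, n_workers):
    """Contiguous block split: worker k gets an equal-sized slice of particles.
    Assumes len(costs) is divisible by n_workers, for brevity."""
    chunk = len(costs) // n_workers
    return [sum(costs[k * chunk:(k + 1) * chunk]) for k in range(n_workers)]

def dynamic_partition(costs, n_workers):
    """Greedy on-demand assignment: each particle goes to the currently
    least-loaded worker, mimicking a dynamic work queue."""
    loads = [0] * n_workers
    for c in costs:
        loads[loads.index(min(loads))] += c
    return loads

# Hypothetical per-particle cost that grows with launch index: a stand-in
# for systematically uneven Monte Carlo trajectory lengths.
costs = [i + 1 for i in range(10_000)]

for name, fn in [("static", static_partition), ("dynamic", dynamic_partition)]:
    loads = fn(costs, 8)
    imbalance = max(loads) / (sum(loads) / len(loads))  # max load over mean load
    print(f"{name:7s} imbalance = {imbalance:.3f}")
```

With these costs the static split ends up roughly 1.9x worse than the mean on its busiest worker, while the dynamic queue stays near 1.0, which is the kind of imbalance the paper's MPI/OpenMP optimizations target.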
17:30–18:00 CEST
libyt: A Tool for Parallel In Situ Analysis with yt, Python, and Jupyter

In the era of extreme-scale computing, large-scale data storage and analysis have become increasingly critical and challenging. For post-processing, a simulation must first dump snapshots to disk before any data can be analyzed, which becomes a bottleneck for simulations with high spatial and temporal resolution. In situ analysis provides a viable alternative for extreme-scale simulations by processing data in memory, skipping the step of storing it on disk. We present libyt, an open-source C library that allows researchers to analyze and visualize data in parallel with yt or other Python packages during simulation runtime. We describe how libyt connects simulation runtime data to Python, handles data transfer and redistribution between Python and simulation processes with minimal memory overhead, and supports an interactive Python prompt and Jupyter Notebook so that users can probe the ongoing simulation at the current time step. We demonstrate how it solves the problem of visualizing large-scale astrophysical simulations, improves disk usage efficiency, and enables close monitoring of simulations. We conclude with a discussion and a comparison of libyt to post-processing.

Shin-Rong Tsai (University of Illinois Urbana-Champaign, National Taiwan University); Hsi-Yu Schive (National Taiwan University, National Center for Theoretical Sciences); and Matthew Turk (University of Illinois Urbana-Champaign)
With Thorsten Kurth (NVIDIA Inc.)
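The in-situ pattern the libyt abstract describes can be sketched in a few lines of plain Python (the names here are illustrative, not libyt's actual API): the analysis routine runs inside the simulation loop on in-memory state at each time step, so no snapshot ever has to be written to disk as it would in a post-processing workflow.

```python
def simulation(n_steps):
    """Toy time-stepping loop that yields each step's in-memory state."""
    field = [float(i) for i in range(8)]
    for step in range(n_steps):
        field = [x * 0.5 for x in field]   # advance the "physics"
        yield step, field                  # hand data over without touching disk

def in_situ_analysis(step, field):
    """Runs inside the simulation process at every step (the in-situ idea);
    a post-processing workflow would instead reload a snapshot from disk."""
    return step, max(field)

for step, peak in (in_situ_analysis(s, f) for s, f in simulation(3)):
    print(f"step {step}: peak = {peak}")   # prints peaks 3.5, 1.75, 0.875
```

In libyt the analysis side is real yt/Python code and the data handoff spans many MPI processes, but the control flow is the same: analyze while the simulation is still live, at the current time step.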