Minisymposium

MS3F - Breaking the HPC Silos for Outstanding Results

Fully booked
Tuesday, June 4, 2024
11:00
-
13:00
CEST
HG D 1.2

Session Chair

Thorsten Kurth (NVIDIA Inc.)

Description

The minisymposium will be in two parts. First, there will be three scientific presentations from HPC experts with different backgrounds. These talks will offer a view of a variety of career paths, focusing on how inclusivity and diversity played a role in achieving the presented results. The first talk will take us on a tour of tools and best practices for enhancing the portability and performance of HPC codes. Next, we will hear how digital twins can serve as bridges between centers offering top-end capability and clouds, in order to facilitate merging AI and HPC, mainly for environmental science projects. In the third talk, the recipient of the 2023 PRACE Ada Lovelace Award for HPC will share her experience in pushing application performance to new horizons. Finally, we will hear from IDEAS4HPC about its activity plan for 2024-2026: mentorship and support for students and young scientists from underrepresented groups joining HPC programs in Switzerland. A round table with the speakers and the audience will conclude the minisymposium.

Presentations

11:00
-
11:30
CEST
Best Practices for Performance, Portability and Inclusivity

Application developers and scientists from different fields relying on HPC struggle to prepare their codes to run efficiently, even on current leadership computer systems. Some of the challenges they face are well known (e.g., parallelization inefficiencies caused by synchronizations, load imbalance, or communication patterns). Other constraints/inefficiencies arise from new computing paradigms (e.g., how to manage input/output when dealing with big data) and the heterogeneity of computing resources (e.g., when/how to efficiently exploit an accelerator).
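
To make the first of those challenges concrete, here is a minimal sketch (not taken from the talk) of how load imbalance can hide in an innocuous-looking OpenMP loop; the work() function and its triangular cost profile are illustrative assumptions.

/* Minimal sketch (illustrative, not from the talk): how the loop
 * scheduling choice exposes or hides load imbalance in OpenMP. */
#include <omp.h>
#include <stdio.h>

/* Hypothetical kernel whose cost grows with the iteration index,
 * so an equal split of iterations is not an equal split of work. */
static double work(int i) {
    double s = 0.0;
    for (int k = 0; k < i; ++k)
        s += (double)k / (i + 1);
    return s;
}

int main(void) {
    const int n = 20000;
    double sum = 0.0;

    double t0 = omp_get_wtime();
    /* schedule(static) hands out equal-sized blocks: the thread that
     * receives the last block does far more work than the first, and
     * the other threads idle at the implicit barrier. Switching to
     * schedule(dynamic) or schedule(guided) lets finished threads
     * pick up remaining iterations and rebalances the loop. */
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += work(i);
    double t1 = omp_get_wtime();

    printf("sum = %f, elapsed = %f s\n", sum, t1 - t0);
    return 0;
}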

We will share patterns frequently found in parallel codes that lead to performance or portability loss, along with a set of recommendations and best practices to avoid these pitfalls.

The proposed best practices aim to improve code performance while maintaining developer productivity and portability. These will be crucial in the exascale race to cope with the increasing complexity of applications and computer systems, to deal with the variety of architectures and paradigms, and ultimately to run efficiently on an exascale supercomputer.
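
As one example of the kind of pattern and remedy such best practices address, the sketch below (an assumption, not the speakers' code) shows a blocking halo exchange rewritten with nonblocking MPI calls so that communication overlaps with computation on the interior of the local domain.

/* Minimal sketch (assumption, not from the talk): posting the
 * exchange early so messages travel while the interior is computed. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local[N], halo = 0.0;
    for (int i = 0; i < N; ++i) local[i] = rank + i * 1e-3;

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    /* Post the exchange first ... */
    MPI_Request reqs[2];
    MPI_Irecv(&halo, 1, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&local[N - 1], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... and compute on points that do not need the halo value
     * while the messages are in flight. */
    double interior = 0.0;
    for (int i = 1; i < N - 1; ++i) interior += local[i];

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    /* Only now touch the boundary contribution that needed the halo. */
    double total = interior + local[0] + halo;

    printf("rank %d: total = %f\n", rank, total);
    MPI_Finalize();
    return 0;
}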

Marta Garcia-Gasulla (Barcelona Supercomputing Center)
11:30
-
12:00
CEST
The Digital Twin Initiative at CERN: New Innovative and Multi-Disciplinary Ways to Handle Large Amounts of Data

In the coming years, current- and next-generation scientific experiments will provide large amounts of observational data. This has great potential to improve our understanding of nature, but it also comes with intrinsic challenges. For example, handling these data will require powerful supercomputing infrastructures. One of the main challenges is defining tools and protocols to extract information from such large-scale datasets.

Digital twins, in particular, are fundamental tools to democratize science and bridge the gap between HPC centers, where such large models are implemented, and clouds, where researchers at all levels can inspect the data in a user-friendly fashion. The talk will illustrate two main projects: interTwin, which aims at developing an open-source digital twin engine for fundamental science, and EMP2, which builds on the interTwin project to develop a platform for environmental applications. Both projects are multi-disciplinary collaborations involving physicists and computer scientists from CERN and external partners with other backgrounds, such as astrophysicists or earth system scientists. The outcome of these projects will be a set of tools that researchers can use to analyse their data and solve new scientific challenges in the near future.

Ilaria Luise (CERN)
12:00
-
12:30
CEST
Breaking Limits: Scaling HPC Performance Engineering Horizons to Maximize Potential

In the ever-evolving realm of high-performance computing (HPC), and in an era where data-driven insights and complex simulations are essential, pushing the boundaries of performance is paramount to scientific discovery and technological advancement. This presentation will delve into innovative approaches to scale HPC performance engineering horizons and unlock the full potential of computational capabilities. Strategies for optimizing hardware and software utilization, identifying and mitigating performance bottlenecks, and implementing cutting-edge techniques to maximize scalability, efficiency, and productivity will be explored. Using the example of tracking the data path, this presentation will illustrate how monitoring the movement of data from computation to storage has the potential to break through performance barriers and propel research and development efforts to new heights.
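
As a toy illustration of tracking the data path (a sketch under assumptions, not the speaker's actual tooling), the snippet below timestamps each write so that the fraction of wall time spent moving data from computation to storage becomes visible.

/* Minimal sketch (illustrative assumption): instrumenting writes to
 * compare compute time with time spent on the data path to storage. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_s(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Hypothetical traced write: returns the elapsed time of the call
 * so I/O cost can be accumulated separately from computation. */
static double traced_write(FILE *f, const void *buf, size_t n) {
    double t0 = now_s();
    fwrite(buf, 1, n, f);
    return now_s() - t0;
}

int main(void) {
    enum { STEPS = 10, BYTES = 1 << 20 };   /* 1 MiB per step */
    char *buf = malloc(BYTES);
    FILE *f = fopen("checkpoint.bin", "wb");
    if (!buf || !f) return 1;

    double io_time = 0.0, t_start = now_s();
    for (int step = 0; step < STEPS; ++step) {
        for (int i = 0; i < BYTES; ++i)     /* stand-in "computation" */
            buf[i] = (char)(i ^ step);
        io_time += traced_write(f, buf, BYTES);
    }

    double total = now_s() - t_start;
    printf("total %.3f s, I/O %.3f s (%.1f%% of run)\n",
           total, io_time, 100.0 * io_time / total);
    fclose(f);
    free(buf);
    return 0;
}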

Sarah Neuwirth (Johannes Gutenberg University Mainz)
12:30
-
13:00
CEST
Transforming Science and Engineering Research through an Innovative High Performance AI+HPC Ecosystem at PSC

AI is transforming research through the analysis of voluminous datasets and by accelerating simulations by factors of up to a billion. Such acceleration far exceeds the speedups possible through improvements in CPU process technology or through other algorithmic advances. To continue exploring these possibilities, the research community requires an ecosystem that seamlessly and efficiently brings together scalable AI, HPC, and large-scale data management. The Pittsburgh Supercomputing Center (PSC) offers an innovative computational ecosystem for AI-driven research, bringing together carefully designed systems and groundbreaking technologies to provide, at no cost, a uniquely capable ecosystem composed of two major systems: Neocortex and Bridges-2. Neocortex embodies a revolutionary processor architecture to vastly shorten the time required for deep learning training, foster greater integration of deep learning with scientific workflows, and accelerate graph analytics. Bridges-2 integrates additional scalable AI, HPC, and high-performance parallel file systems for simulation, visualization, and Big Data as a Service. Neocortex and Bridges-2 are integrated to form a tightly coupled and flexible ecosystem for AI- and data-driven research. We will give a detailed description of the AI+HPC ecosystem at PSC and highlight a set of representative scientific research projects that are leveraging this advanced cyberinfrastructure.

Paola Buitrago (Pittsburgh Supercomputing Center)