Minisymposium

MS6A - Architectures for Hybrid Next-Generation Weather and Climate Models

Wednesday, June 5, 2024, 11:30–13:30 CEST
HG F 1


Session Chair

Thorsten Kurth (NVIDIA Inc.)

Description

Climate and weather models, traditionally written in low-level languages such as Fortran for performance, face sustainability challenges due to evolving hardware architectures and advances in machine learning (ML). ML models are rapidly approaching the effectiveness of physics-based models, suggesting a future shift towards hybrid systems that blend classic numerical methods with ML. This evolution necessitates new tools and methodologies to address performance portability and the integration of physics-based models with high-performance GPU and ML frameworks in Python.

Significant progress has been made with domain-specific languages and general-purpose software libraries, but integrating these with traditional model components remains a challenge. Next-generation weather and climate models must accommodate a range of tools, including numerical methods, performance-portable frameworks, automatic differentiation toolkits, and ML libraries. However, the architectural complexity of integrating these diverse tools is often overlooked in scientific software development. This minisymposium focuses on architectural design for scalable weather and climate models, addressing key topics such as automatic differentiation, optimization, integration of diverse model components, and efficient data handling for ML-enabled simulations.
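
The hybrid blend of classic numerics and ML described above can be sketched in a few lines. Everything here (the function names, the "network" reduced to a linear map) is hypothetical and only illustrates the split between a physics-based tendency and a learned correction; it is not taken from any actual model:

```python
import numpy as np

def dynamics_tendency(state, dx=1.0, c=1.0):
    # classic numerics: upwind advection of a 1-D periodic tracer
    return -c * (state - np.roll(state, 1)) / dx

def ml_correction(state, weights):
    # stand-in for a trained network: here just a linear map on the state
    return weights @ state

def hybrid_step(state, weights, dt=0.1):
    # one time step combining the physics-based and learned tendencies
    return state + dt * (dynamics_tendency(state) + ml_correction(state, weights))

n = 4
state = np.arange(n, dtype=float)
weights = np.zeros((n, n))   # "untrained": the correction contributes nothing
print(hybrid_step(state, weights))
```

Because both terms are ordinary array operations, the same structure is amenable to automatic differentiation and GPU execution, which is what makes frameworks like JAX or Julia attractive for such hybrid designs.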

Presentations

11:30
-
12:00
CEST
Design and Interfaces for CliMA’s Next-Generation Performance-Portable Earth System Model

The Climate Modeling Alliance (CliMA) is developing a new Earth System Model (ESM) written entirely in Julia. The CliMA model achieves performance portability by targeting both CPU and GPU architectures from a common codebase. In this talk, we will illustrate several of the package-level architectural designs, whose flexible interfaces provide core functionality and allow seamless coupling and composition of the ESM components.

The main solver architecture, data management, and composable discretization tools for solving the governing equations of the ESM component models are provided by the dynamical core (dycore) library, ClimaCore.jl. Its high-level API facilitates modularity and composition of differential operators, the definition of flexible discretizations, and reconciliation between different characteristics of the model, such as numerics and physical formulations. In the backend, low-level APIs support different data layouts, specialized implementations, and flexible threading models, to better address performance optimization, data storage, and scalability challenges on modern heterogeneous architectures.
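
The operator-composition idea behind a dycore API of this kind can be illustrated with a small Python analogue. The classes below are hypothetical and are not the ClimaCore.jl API (which is written in Julia); they only show how separately defined differential operators can be composed into higher-order ones:

```python
import numpy as np

class Gradient:
    """Centered-difference derivative on a 1-D grid with spacing dx."""
    def __init__(self, dx):
        self.dx = dx
    def __call__(self, f):
        return np.gradient(f, self.dx)

class Divergence:
    """In 1-D the divergence reduces to the same centered derivative."""
    def __init__(self, dx):
        self.dx = dx
    def __call__(self, f):
        return np.gradient(f, self.dx)

class Compose:
    """Apply operators right-to-left, e.g. a Laplacian as div(grad(f))."""
    def __init__(self, *ops):
        self.ops = ops
    def __call__(self, f):
        for op in reversed(self.ops):
            f = op(f)
        return f

dx = 0.1
x = np.arange(0.0, 1.0, dx)
laplacian = Compose(Divergence(dx), Gradient(dx))
f = x ** 2
# for f(x) = x^2 the Laplacian is ~2 away from the boundaries
print(laplacian(f)[2:-2])
```

The point of such an interface is that discretization details (data layout, backend, mesh) live inside the operator objects, while model code only expresses the mathematical composition.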

Distinct design patterns in the CliMA ecosystem, such as modularity, extensibility, and interoperability, carry across the different packages in the CliMA codebase. For example, they are evident in ClimaCoupler.jl, the package responsible for coupling the component models that comprise the ESM (e.g., atmosphere, ocean, and land).

Valeria Barra (San Diego State University); Simon Byrne (NVIDIA Inc.); and Akshay Sridhar, Shriharsha Kandala, Lenka Novak, Julia Sloan, Dennis Yatunin, Charles Kawczynski, Gabriele Bozzola, and Tapio Schneider (California Institute of Technology)
With Thorsten Kurth (NVIDIA Inc.)
12:00–12:30 CEST
NVIDIA and Earth-2's Contributions to Tools, Libraries, Data and Workflow Infrastructure in the Era of ML-Driven Weather and Climate Modeling

In this talk I will discuss contributions from NVIDIA and the Earth-2 initiative towards scalable, performance-portable, user-friendly tools, libraries, data, and workflow infrastructure in the era of ML-driven weather and climate modeling. I will delve into which approaches might move the needle on digital twinning at ultra-high resolutions (km and sub-km scales), a shared goal across Destination Earth, Earth-2, EVE, and other large international initiatives. The talk will not only present a few examples that are showing early success but also speculate on which approaches might be most impactful in this fast-changing landscape, especially as data compression and data re-generation, rather than data movement, may become the norm in just a few years.

Karthik Kashinath (NVIDIA Inc.)
12:30–13:00 CEST
Keeping Pace: Using DSLs to Create a Modeling Platform for Next Generation Models

As hardware architectures and algorithmic approaches diversify, flexibility becomes an ever greater virtue for weather and climate model developers. Porting models to a domain-specific language (DSL) can provide this flexibility: a Python frontend allows natural integration of ML components, and the compiler toolchain can optimize the code for target backends. However, not every task is trivial; challenges include training physics emulators, coupling and optimizing hybrid Fortran/DSL model configurations, and ensuring all algorithmic motifs are supported by the DSL. We discuss our experiences with these issues in the context of developing Pace, the GT4Py and DaCe implementation of the FV3GFS atmospheric model, and NDSL, the NOAA/NASA DSL middleware platform we use for model development. We explain which design decisions were made to address which obstacles, ongoing work on the modeling framework, and future development plans.
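
The separation between a user-written stencil and a backend-selecting toolchain can be sketched with a toy decorator. This is emphatically not the actual GT4Py or NDSL API; it only illustrates the pattern of declaring a pointwise update once and letting the framework decide how to apply it:

```python
import numpy as np

def stencil(backend="numpy"):
    """Hypothetical mini-DSL: wrap a pointwise update so a chosen
    backend applies it over the grid. Real frameworks such as GT4Py
    generate optimized CPU/GPU code here instead of a Python loop."""
    def wrap(update):
        def run(field):
            out = field.copy()
            # apply the three-point update on interior cells only
            for i in range(1, field.shape[0] - 1):
                out[i] = update(field[i - 1], field[i], field[i + 1])
            return out
        return run
    return wrap

@stencil(backend="numpy")
def diffuse(left, center, right, nu=0.1):
    # explicit diffusion step: c + nu * (l - 2c + r)
    return center + nu * (left - 2.0 * center + right)

f = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(diffuse(f))  # smooths the central spike
```

Because the numerics are written against an abstract update signature rather than a storage layout, swapping `backend="numpy"` for a hypothetical GPU backend would not touch the model code, which is the flexibility argument made above.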

Oliver Elbert (Geophysical Fluid Dynamics Laboratory); Florian Deconinck (NASA); and Frank Malatino, Rusty Benson, and Lucas Harris (Geophysical Fluid Dynamics Laboratory)
13:00–13:30 CEST
Can we Build Composable Atmospheric Models Without Sacrificing Performance?

Atmospheric models consist of a dynamical core, which integrates the equations of motion on a computational mesh, and physical parameterizations, which account for the bulk effect of subgrid-scale phenomena (e.g., radiative heat transfer, microphysics, turbulence). For ease of software development, dynamical cores and physics packages have historically been written in isolation, leading to model components based on inconsistent assumptions and featuring incompatible structures. We present recent efforts to devise model components with a common, expressive interface, which can be transferred between models more easily and favor modular code designs. We discuss the challenges and benefits of this approach, both from a software engineering perspective (e.g., maintainability, reusability, interoperability, readability) and from a scientific point of view (e.g., process coupling). Moreover, we address the integration of modern HPC tools (e.g., domain-specific languages) into composable code architectures and discuss the potential impact on performance compared to a monolithic code design.
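
A common tendency-returning interface of the kind described can be sketched as follows. The component classes and the `step` driver are hypothetical toy examples, not the authors' implementation; they only show how a host model can couple parameterizations without knowing their internals:

```python
import numpy as np

class Parameterization:
    """Hypothetical common interface: read a named state, return tendencies."""
    def tendencies(self, state):
        raise NotImplementedError

class NewtonianCooling(Parameterization):
    """Toy stand-in for radiative heat transfer."""
    def __init__(self, t_eq=280.0, rate=0.1):
        self.t_eq, self.rate = t_eq, rate
    def tendencies(self, state):
        return {"T": self.rate * (self.t_eq - state["T"])}

class LinearDrag(Parameterization):
    """Toy stand-in for turbulent momentum mixing."""
    def __init__(self, coeff=0.05):
        self.coeff = coeff
    def tendencies(self, state):
        return {"u": -self.coeff * state["u"]}

def step(state, components, dt):
    """Explicit process coupling: sum tendencies from all components."""
    total = {k: np.zeros_like(v) for k, v in state.items()}
    for comp in components:
        for k, dv in comp.tendencies(state).items():
            total[k] = total[k] + dv
    return {k: state[k] + dt * total[k] for k in state}

state = {"T": np.array([300.0]), "u": np.array([10.0])}
new = step(state, [NewtonianCooling(), LinearDrag()], dt=1.0)
print(new["T"], new["u"])
```

With every component exposing the same tendency contract, the coupling strategy (explicit sum here, but sequential or implicit coupling in practice) becomes a property of the driver rather than of each parameterization, which is what makes the components transferable between models.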

Stefano Ubbiali (ETH Zurich), Christian Kühnlein (ECMWF), Christoph Schär (ETH Zurich), Linda Schlemmer (DWD), Thomas C. Schulthess (ETH Zurich / CSCS), and Heini Wernli (ETH Zurich)