AP1A - ACM Papers Session 1A
Numerical models of the ocean and ice sheets are crucial for understanding and simulating the impact of greenhouse gases on the global climate. Oceanic processes affect phenomena such as hurricanes, extreme precipitation, and droughts. Ocean models rely on subgrid-scale parameterizations that require calibration and often significantly affect model skill. When model sensitivities to parameters can be computed, for example by automatic differentiation, they can drive such calibration by reducing the misfit between model output and data. Because the SOMA model code is challenging to differentiate, we have created neural network-based surrogates for estimating the sensitivity of the ocean model to model parameters. We first generated perturbed-parameter ensemble data for an idealized ocean model and trained three surrogate neural network models. The neural surrogates accurately predicted the one-step-forward ocean dynamics, from which we then computed the parametric sensitivities.
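The core idea above, differentiating a trained surrogate to obtain the sensitivity of the predicted next state to model parameters, can be sketched in miniature. The sketch below uses a hypothetical one-hidden-layer tanh network with random weights standing in for a trained surrogate, and computes the Jacobian with respect to the parameters by hand-applied chain rule (standing in for automatic differentiation), cross-checked against finite differences; all sizes and names are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical surrogate: s_next = W2 @ tanh(W1 @ [state; params] + b1) + b2.
# The parametric sensitivity is the Jacobian d s_next / d params.
rng = np.random.default_rng(0)
n_state, n_param, n_hidden = 4, 2, 8
W1 = rng.normal(size=(n_hidden, n_state + n_param)) * 0.3
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_state, n_hidden)) * 0.3
b2 = np.zeros(n_state)

def surrogate(state, params):
    x = np.concatenate([state, params])
    return W2 @ np.tanh(W1 @ x + b1) + b2

def sensitivity(state, params):
    """Jacobian of the surrogate output w.r.t. params (chain rule by hand)."""
    x = np.concatenate([state, params])
    h = np.tanh(W1 @ x + b1)
    # d tanh(z)/dz = 1 - tanh(z)^2; the last n_param columns of W1 act on params.
    return W2 @ (np.diag(1.0 - h**2) @ W1[:, n_state:])  # shape (n_state, n_param)

s, p = rng.normal(size=n_state), rng.normal(size=n_param)
J = sensitivity(s, p)

# Cross-check against central finite differences.
eps = 1e-6
J_fd = np.empty_like(J)
for j in range(n_param):
    dp = np.zeros(n_param); dp[j] = eps
    J_fd[:, j] = (surrogate(s, p + dp) - surrogate(s, p - dp)) / (2 * eps)
assert np.allclose(J, J_fd, atol=1e-6)
```

In practice the Jacobian would be obtained with an autodiff framework rather than hand-derived, and the surrogate would be the trained network rather than random weights; the cross-check pattern, however, is a common sanity test for surrogate-based sensitivities.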
We solve the 3D acoustic wave equation using the finite-difference time-domain (FDTD) formulation in both first and second order. The FDTD approach is expressed as a stencil-based computational scheme with a long-range discretization, i.e., 8th order in space and 2nd order in time, which is routinely used in the oil and gas industry and environmental geophysics for high-fidelity subsurface imaging. Absorbing Boundary Conditions (ABCs) are employed to attenuate reflections from artificial boundaries. The high-order discretization engenders extensive data movement across the memory subsystem and may consequently impact the kernel throughput due to the inherent memory-bound behavior of the stencil operator, especially on systems facing memory starvation. The first-order formulation of the 3D acoustic equation further exacerbates this phenomenon because it calculates both the pressure and velocity fields, which corresponds to 1.6X the memory footprint of the second-order formulation. To address this memory bottleneck, we design, implement, and deploy multicore wavefront diamond tiling with temporal blocking (MWD-TB) to boost the performance of seismic wavefield modeling by exploiting spatial and temporal data reuse. MWD-TB leverages the large last-level cache (LLC) of modern x86 systems and extracts high memory bandwidth from the underlying architecture. We demonstrate the numerical accuracy of MWD-TB on the Salt3D model from the Society of Exploration Geophysicists. Our MWD-TB implementations for the first- and second-order FDTD formulations achieve speedups of up to 3.5X and 3X, respectively, on a large grid size on AMD systems equipped with a large LLC, compared to the traditional spatial blocking method alone.
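To make the discretization concrete, here is a minimal 1D analogue of the second-order formulation described above: an 8th-order central stencil in space and a leapfrog (2nd-order) update in time. The paper's kernel is 3D with ABCs and tiling; this sketch uses a hypothetical grid, velocity, and source, holds the boundaries at zero instead of absorbing them, and is meant only to illustrate the stencil structure.

```python
import numpy as np

# Standard 8th-order central coefficients for the second spatial derivative.
C = np.array([-205/72, 8/5, -1/5, 8/315, -1/560])

def laplacian_1d(p, dx):
    """8th-order approximation of d^2p/dx^2 on interior points (4-cell halo)."""
    lap = C[0] * p[4:-4]
    for k in range(1, 5):
        lap += C[k] * (p[4 + k:len(p) - 4 + k] + p[4 - k:-4 - k])
    return lap / dx**2

nx, dx, c = 201, 10.0, 1500.0      # grid points, spacing (m), velocity (m/s) -- illustrative
dt = 0.5 * dx / c                  # time step comfortably inside the stability limit
p_prev = np.zeros(nx)
p_curr = np.zeros(nx)
p_curr[nx // 2] = 1.0              # impulsive source at the grid center

for _ in range(50):                # 2nd-order leapfrog time stepping
    p_next = np.zeros(nx)
    p_next[4:-4] = (2 * p_curr[4:-4] - p_prev[4:-4]
                    + (c * dt)**2 * laplacian_1d(p_curr, dx))
    p_prev, p_curr = p_curr, p_next  # boundaries stay zero (no ABC in this sketch)
```

The wide (4-cell) halo on each side is what drives the data-movement cost the abstract describes: every updated point reads nine values per axis, so in 3D each time step streams a large working set through the cache hierarchy, which is exactly the reuse opportunity temporal blocking schemes such as MWD-TB exploit.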