Minisymposium

MS1D - Next-Generation Computing for the Large Hadron Collider

Fully booked
Monday, June 3, 2024
11:30 - 13:30 CEST
HG E 1.2


Session Chair

Thorsten Kurth (NVIDIA Inc.)

Description

The Large Hadron Collider (LHC) is the "energy frontier" of high energy physics, but the accelerator itself has advanced fundamental physics only with extensive computing resources and R&D. Data collection from the detectors around the LHC's collision sites requires advanced GPU- and FPGA-based triggers to filter the incoming data. Simulating the creation of known particles from hypothetical physics requires GPU-accelerated quantum chromodynamics (QCD). Modeling those particles' interactions with the detectors to create simulated responses requires massive compute resources that are gradually being transitioned to GPUs. Finally, confirming new physics requires complex analysis frameworks that reconstruct experimental particle tracks and compare them with the simulated results to deduce the fundamental physics creating those particles. Our minisymposium will dive into the advanced computing necessary to enable these new discoveries: a big-picture overview of the LHC, the detector experiments, and their cumulative computing requirements; high-level descriptions of the R&D in GPU accelerators and AI/ML; "online" computing for filtering and capturing data from the experiments; "offline" computing for event generation and detector simulation; and "reconstruction" that determines new physics by comparing the experimental and computational results.
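As a rough illustration of the "online" filtering stage mentioned above, the sketch below applies a toy trigger-style selection that keeps only events whose summed transverse energy exceeds a threshold. The data layout and threshold are invented for illustration and do not correspond to any experiment's actual trigger menu.

```python
import numpy as np

# Toy "online" trigger filter (illustrative only): keep events whose summed
# transverse energy exceeds a threshold, mimicking the first data-reduction
# step performed by GPU/FPGA triggers at the LHC.
rng = np.random.default_rng(seed=0)

n_events, max_particles = 100_000, 8
# Hypothetical per-particle transverse energies (GeV), one row per event.
et = rng.exponential(scale=20.0, size=(n_events, max_particles))

THRESHOLD_GEV = 200.0            # invented threshold, not a real trigger menu
passed = et.sum(axis=1) > THRESHOLD_GEV

print(f"kept {passed.sum()} of {n_events} events "
      f"({100.0 * passed.mean():.1f}% pass rate)")
```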

Presentations

11:30 - 12:00 CEST
Analysis Model and Future Challenges: A Perspective from the LHC

The Large Hadron Collider (LHC) is entering an era in which the experiments collect hundreds of petabytes of data, from which a wide variety of science goals can be pursued by distilling the data to high-level observables. Current data analysis methods rely on a combination of experiment-specific, centrally provided code, community-supported tools, and analysis-specific software. In addition, several computing and storage resources are used to process the data in stages. In the coming years, upgrades to the LHC and the experiments are expected to result in a major increase in data volume, and the complexity of the analyses is increasing as well. This presentation will discuss the current methods used for data analysis in big-science experiments at the LHC, as well as ideas for the future to address challenges in areas such as analysis techniques for parallelized data access, the organization of workflows, and the incorporation of new technologies such as heterogeneous resources.

Verena Ingrid Martinez Outschoorn (University of Massachusetts Amherst)
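As a minimal sketch of the columnar analysis style this talk describes, the snippet below distills per-event muon kinematics into one high-level observable (the dimuon invariant mass) and histograms it. The column names, toy distributions, and cut values are assumptions made for illustration, not the experiments' actual analysis code.

```python
import numpy as np

# Minimal columnar-analysis sketch (illustrative only): distill per-event
# muon kinematic columns into one high-level observable, the dimuon
# invariant mass, then select and histogram it.
rng = np.random.default_rng(1)
n = 1_000_000

# Hypothetical columns for the two leading muons, one entry per event.
pt1, pt2 = rng.exponential(30.0, n), rng.exponential(25.0, n)      # GeV
eta1, eta2 = rng.normal(0.0, 1.2, n), rng.normal(0.0, 1.2, n)
phi1, phi2 = rng.uniform(-np.pi, np.pi, n), rng.uniform(-np.pi, np.pi, n)

# Massless approximation: m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi)).
mass = np.sqrt(2.0 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

# Selection and histogramming act on whole columns at once, which is what
# makes this style straightforward to parallelize and accelerate.
selected = mass[(pt1 > 25.0) & (pt2 > 20.0)]
counts, edges = np.histogram(selected, bins=60, range=(0.0, 300.0))
print(counts[:10])
```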
12:00 - 12:30 CEST
Hardware Acceleration for Hard Event Generation

The first step in the particle physics simulation chain is the evaluation of exact, analytic scattering amplitudes in a process called hard event generation. Although these amplitudes are given by explicit mathematical expressions, their complexity, alongside the sheer number of evaluations required, makes them a noteworthy bottleneck for LHC computing. Unlike other simulation bottlenecks, which often involve significantly branching control flow, scattering amplitude evaluations are computed identically across many different phase space points, so hard event generation is well suited to data-level parallelism. Over the last few years, a working group of data scientists and physicists across Europe and the US has been porting leading order (LO) hard event generation within the particle physics framework MadGraph5_aMC@NLO (MG5aMC) to SIMD/SIMT architectures such as vector CPUs and GPUs, and work has recently begun on additionally parallelising next-to-leading order (NLO) corrections to these amplitudes. This involves significant restructuring and rewriting of legacy code, but has proven fruitful: preliminary tests of LO event generation often show the maximal theoretical speedup over native MG5aMC, and work is ongoing to integrate this software into the LHC experiments' simulation chains.

Zenny Wettersten (CERN, TU Wien)
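The data-level parallelism described in this talk can be sketched with a textbook toy: the snippet below evaluates the same leading-order matrix element (e+e- -> mu+mu- in the massless limit) identically across a large batch of phase-space points, the branch-free access pattern that maps directly onto vector CPUs and GPUs. It is not the code MG5aMC generates for a real LHC process; the process, energy, and normalization are simplified assumptions.

```python
import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant

def me_squared(cos_theta, s):
    """Spin-summed/averaged |M|^2 for e+e- -> mu+mu- (massless toy)."""
    return (4.0 * np.pi * ALPHA) ** 2 * (1.0 + cos_theta ** 2) / s

rng = np.random.default_rng(2)
n_points = 5_000_000
s = 91.2 ** 2                                 # toy fixed centre-of-mass energy^2
cos_theta = rng.uniform(-1.0, 1.0, n_points)  # batch of phase-space points

# The same arithmetic is applied to every phase-space point with no
# branching, so the whole batch vectorizes (or maps onto GPU threads).
weights = me_squared(cos_theta, s)

# Monte Carlo estimate of the (un-normalized) integral over cos(theta).
print(weights.mean() * 2.0)
```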
12:30 - 13:00 CEST
Celeritas: Accelerating HEP Detector Simulation on GPUs

Celeritas is a new Monte Carlo (MC) detector simulation code designed to help meet the increasing computational demands of high energy physics (HEP) experiments by leveraging accelerator-based HPC architectures. The upcoming high luminosity upgrade of the Large Hadron Collider (LHC) and its four main detectors will increase the volume and complexity of the data from future particle physics experiments by an order of magnitude. This will in turn require a proportional increase in computational capacity for much of the software these experiments rely on, including detector simulation. Celeritas is designed to meet this challenge by leveraging the new generation of heterogeneous computing architectures to perform full-fidelity MC simulations of LHC detectors. This includes full electromagnetic (EM) physics in complex geometries in the presence of a magnetic field, with an interface that enables straightforward integration with existing Geant4 applications. This talk will provide an overview of current Celeritas capabilities, focusing in particular on the strategy for integration with experimental HEP computing workflows, and present early performance results on DOE's Leadership Computing Facilities (LCFs).

Amanda Lund (Argonne National Laboratory)
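As a toy illustration of the track-level data parallelism that GPU MC transport codes such as Celeritas exploit, the sketch below steps a whole batch of photons through a uniform slab in lockstep. The geometry, coefficients, and physics are invented for illustration and are not Celeritas' API or physics models.

```python
import numpy as np

# Toy track-parallel transport loop: advance every live track in the batch
# at once, then resolve escapes and absorptions with whole-array masks.
rng = np.random.default_rng(3)

n_tracks = 100_000
mu = 0.2             # invented interaction coefficient (1/cm)
p_absorb = 0.3       # invented absorption probability per interaction
thickness = 20.0     # invented slab thickness (cm)

depth = np.zeros(n_tracks)             # current depth of each photon (cm)
alive = np.ones(n_tracks, dtype=bool)  # tracks still being transported
escaped = np.zeros(n_tracks, dtype=bool)

while alive.any():
    # Sample a free path and advance all live tracks in one vector operation.
    step = rng.exponential(1.0 / mu, size=n_tracks)
    depth = np.where(alive, depth + step, depth)

    # Tracks that stepped past the slab escape; the rest interact and are
    # absorbed with probability p_absorb, otherwise they scatter forward.
    new_escapes = alive & (depth > thickness)
    escaped |= new_escapes
    alive &= ~new_escapes
    absorbed = alive & (rng.random(n_tracks) < p_absorb)
    alive &= ~absorbed

print(f"escaped {escaped.sum()} of {n_tracks} photons")
```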
13:00 - 13:30 CEST
Enhancing High Energy Physics Analysis: Advancements in Computing Infrastructure and Software for the LHC and Future

High Energy Physics (HEP) is fundamentally statistical, relying on the Standard Model (SM) hypothesis, which encapsulates entities such as the Higgs boson, quarks, leptons, and the force-mediating bosons. Despite its comprehensive framework, the SM has limitations and is unable to explain several observed phenomena. Particle accelerators such as the LHC serve as tools for investigating the SM's potential inadequacies, offering clues that might lead to beyond Standard Model (BSM) physics. A significant challenge in HEP is handling enormous data volumes while searching for new particles or scrutinizing exceptionally rare SM processes, where any enhancement in event rates may come from BSM physics. Since the beginning, the development of robust computing infrastructure and software has been crucial for effectively managing and analyzing this data. This includes leveraging heterogeneous computing, harnessing the power of GPUs and FPGAs, and integrating machine learning and AI into analysis workflows to handle data more efficiently. With the LHC set to evolve into the High Luminosity LHC, significantly increasing data volumes, it is essential to fortify our computational capabilities. This presentation will discuss current developments, highlighting the integration of innovative tools that empower physicists to analyze data more proficiently and pave the way for the future of HEP.

Phat Srimanobhas (Chulalongkorn University)
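As a minimal sketch of the ML-assisted analysis workflows mentioned in this talk, the snippet below trains a classifier on toy "signal" and "background" samples and selects events by cutting on its score rather than on hand-tuned rectangular cuts. The features, distributions, and score threshold are invented for illustration; real analyses would use the experiments' own simulation and tooling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
n = 20_000

# Hypothetical per-event features, e.g. leading-jet pT and missing ET (GeV).
bkg = rng.normal([60.0, 40.0], [25.0, 20.0], size=(n, 2))
sig = rng.normal([110.0, 90.0], [30.0, 25.0], size=(n, 2))

X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = background, 1 = signal

# Train a simple boosted-decision-tree classifier on the labeled toy samples.
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X, y)

# Apply the trained classifier to "data" (here: more toy background) and
# keep events with a high signal-like score.
data = rng.normal([60.0, 40.0], [25.0, 20.0], size=(1000, 2))
scores = clf.predict_proba(data)[:, 1]
print(f"events passing score > 0.9: {(scores > 0.9).sum()} / {len(data)}")
```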