Minisymposium Presentation

Challenges and Opportunities in Running Kubernetes Workloads on HPC

Wednesday, June 5, 2024
12:00 - 12:30 CEST
Climate, Weather and Earth Sciences
Chemistry and Materials
Computer Science and Applied Mathematics
Humanities and Social Sciences
Engineering
Life Sciences
Physics

Description

Cloud and HPC increasingly converge in hardware platform capabilities and specifications, yet they still differ largely in the software stack and in how it manages available resources. The HPC world typically favors Slurm for job scheduling, whereas Cloud deployments rely on Kubernetes to orchestrate container instances across nodes. Running hybrid workloads is possible through bridging mechanisms that submit jobs from one environment to the other. However, such solutions require costly data movements, while operating within the constraints set by each setup's network and access policies. In this presentation, we introduce a container-based design that enables running unmodified Kubernetes workloads directly on HPC systems: users deploy their own private Kubernetes mini Cloud, which internally translates container lifecycle management commands into jobs for the HPC system-level Slurm infrastructure, using Singularity/Apptainer as the container runtime. We consider this approach practical for deployment in HPC centers, as it requires minimal pre-configuration and retains existing resource management and accounting policies.
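To illustrate the general idea of mapping container lifecycle commands onto Slurm and Apptainer, the sketch below translates a minimal Kubernetes Pod manifest into an sbatch script. This is not the authors' implementation; the helper name pod_to_sbatch, the example manifest, and the chosen resource defaults are assumptions for illustration only (PyYAML is required for parsing).

```python
"""Hypothetical sketch: turn a minimal Kubernetes Pod manifest into a Slurm
batch script that runs the container with Apptainer. Not the presented
system; field handling and defaults are illustrative assumptions."""
import shlex
import yaml  # pip install pyyaml

POD_MANIFEST = """
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: worker
    image: python:3.11-slim
    command: ["python", "-c", "print('hello from HPC')"]
    resources:
      requests:
        cpu: "2"
        memory: "4Gi"
"""

def pod_to_sbatch(manifest: str) -> str:
    """Convert a single-container Pod spec into an sbatch script
    that uses Apptainer as the container runtime."""
    pod = yaml.safe_load(manifest)
    container = pod["spec"]["containers"][0]
    requests = container.get("resources", {}).get("requests", {})
    cpus = requests.get("cpu", "1")
    mem = requests.get("memory", "1G").replace("Gi", "G")  # Slurm uses G/M suffixes
    cmd = shlex.join(container.get("command", []))
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={pod['metadata']['name']}",
        f"#SBATCH --cpus-per-task={cpus}",
        f"#SBATCH --mem={mem}",
        # Pull the OCI image from a registry and execute the command in it.
        f"apptainer exec docker://{container['image']} {cmd}",
    ]) + "\n"

if __name__ == "__main__":
    print(pod_to_sbatch(POD_MANIFEST))
```

A real bridge would also have to handle multi-container pods, volumes, networking, and job status reporting back to the Kubernetes control plane, which this sketch deliberately omits.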

Authors