GPULab is a distributed system for running jobs in GPU-enabled Docker containers. It consists of a set of heterogeneous clusters, each with its own characteristics (GPU model, CPU speed, memory, bus speed, …), allowing you to select the most appropriate hardware. Each job runs isolated in a Docker container with dedicated CPUs, GPUs, and memory for maximum performance.

Some key numbers:

  • 150+ GPUs
  • 700,000 CUDA cores, 3 TB of GPU RAM
  • Job-based GPU processing
  • JupyterHub access
  • AI and data-processing research
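
Job-based processing means you describe the container image and the resources your job needs, and GPULab schedules it on a matching cluster. As a rough illustration only, a job description might look like the following sketch; the field names and structure here are hypothetical and the actual GPULab job format may differ:

```json
{
  "name": "example-training-job",
  "docker": {
    "image": "nvidia/cuda:12.2.0-base-ubuntu22.04",
    "command": "nvidia-smi"
  },
  "resources": {
    "gpus": 1,
    "cpus": 4,
    "memoryMb": 16384
  }
}
```

The key idea is that the resource section lets the scheduler pick a cluster whose hardware matches the request, while the container image keeps the job's software environment self-contained.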

Want to know more?
Contact us at IDLab, the research center for Internet technologies and Data science.
Copyright © 2024 IDLab. All rights reserved.