The Crick undertook a full HPC replacement covering storage, networking, and CPU compute, along with a GPU refresh. James Clements, Director of IT Operations & Deputy CIO at the Francis Crick Institute, surveyed the institute’s 120 labs and 15 science and technology platforms to understand their plans, needs, and what was or wasn’t working.
In the TRACERx EVO project alone, the team saw significant speed-ups for whole genome sequencing when testing Parabricks—including FastQ alignment and DeepVariant calling. “This will save nearly nine years of processing time relative to our current HPC service offering,” Clements explains.
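For readers unfamiliar with that workflow, the sketch below shows roughly what a GPU-accelerated run of those two steps looks like with the Parabricks `pbrun` command-line tool. The reference and sample file names are illustrative, and production runs at TRACERx EVO scale would typically be submitted through the cluster’s scheduler rather than invoked directly like this.

```python
import subprocess

# Illustrative paths and sample names; these are placeholders, not Crick data.
REF = "GRCh38.fa"
FASTQ_R1 = "sample_R1.fastq.gz"
FASTQ_R2 = "sample_R2.fastq.gz"
BAM = "sample.bam"
VCF = "sample.vcf"

# GPU-accelerated FASTQ alignment with Parabricks (fq2bam).
subprocess.run(
    ["pbrun", "fq2bam",
     "--ref", REF,
     "--in-fq", FASTQ_R1, FASTQ_R2,
     "--out-bam", BAM],
    check=True,
)

# GPU-accelerated DeepVariant calling on the aligned reads.
subprocess.run(
    ["pbrun", "deepvariant",
     "--ref", REF,
     "--in-bam", BAM,
     "--out-variants", VCF],
    check=True,
)
```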
In addition to the impressive time savings, the Crick team appreciated the hands-on approach with NVIDIA and the ability to provide feedback. As Clements states, “we’ve been able to work directly with the product team to test in-development functionality and contribute ideas for future development.”
As a result, the Crick’s implementation consists of three clusters, all connected through an NDR InfiniBand network:
- NVIDIA A100 GPUs for a cost-effective, space-efficient general-purpose cluster used for unoptimized workloads.
- NVIDIA L40 GPUs for structural biology and cryo-electron microscopy workloads, where lower-cost GPUs are sufficient.
- NVIDIA H100 GPUs for specific workloads, including GPU-optimized solutions such as Parabricks.
Both the A100 and H100 clusters run on Dell servers with 80GB SXM form-factor GPUs.
Clements summarizes that NVIDIA’s impact will “benefit the Crick with tens of thousands of hours of saved wait time every single year. It will also provide a hardware platform for future innovation.”