Training recap: AI Hackathon 2025
On 14–23 October 2025, the EuroCC AI Hackathon brought together AI engineers, data scientists, and researchers from across Europe for an intensive, hands-on training focused on accelerating AI workflows with high-performance computing (HPC).
The event was jointly organised by Austrian Scientific Computing (ASC), Leibniz Supercomputing Centre (LRZ), and Academic Computer Centre Cyfronet AGH, in collaboration with the OpenACC organisation and NVIDIA, as well as National Competence Centres EuroCC Austria, EuroCC@GCS, and EuroCC Poland.
A mentored approach to AI acceleration
The Open Hackathon format offered a mix of structured mentoring and self-directed technical work. Each participating team arrived with a concrete AI or ML challenge drawn from their real-world research projects and, over the course of several days, worked with mentors to optimise code, scale workloads, and improve performance on HPC systems equipped with the latest NVIDIA GPUs.
This time, ten teams from academia and industry joined the hackathon, supported by over 20 mentors and assistants who provided continuous guidance on GPU optimisation, performance profiling, and best practices for large-scale AI. The computations ran on several supercomputers: Helios (Cyfronet), LEONARDO (CINECA), the LRZ AI System (LRZ), VSC-5 (ASC), as well as NVIDIA DGX Cloud.
Hackathon highlights
1. Integration for running Dagster assets on Slurm HPC clusters
A team from the Complexity Science Hub (CSH) and ASCII (Austria) explored how Dagster-Slurm can modernise the way AI and HPC workflows are developed and executed. Their project tackled a common problem in HPC: the reliance on handwritten batch scripts, manual environment management, and limited observability.
Using Dagster-Slurm, the team demonstrated how reproducible, testable, and observable pipelines can seamlessly move from a laptop to a Tier-0 supercomputer without code changes – bridging the gap between modern data orchestration and high-performance computing.
Read more about the team’s success in integrating Dagster-Slurm.
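For context, the handwritten batch scripts that such an integration replaces typically look something like the fragment below (job name, partition, module, and paths are hypothetical, not taken from the team's setup):

```bash
#!/bin/bash
#SBATCH --job-name=train-model     # hypothetical job name
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00

# Environment assembled by hand, invisible to any orchestration layer
module load cuda
source "$HOME/envs/train/bin/activate"

python train.py --epochs 50
```

In the Dagster-Slurm approach, this submission and environment handling is expressed as pipeline definitions instead, so the same code can be tested locally and then dispatched to the cluster.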
2. Accelerating GAN training for biomedical applications
A team from the University of Silesia (Poland) focused on improving the efficiency of Generative Adversarial Network (GAN) training on the NVIDIA GH200 Grace Hopper™ Superchip on the Helios supercomputer.
By tuning their training pipeline, they reduced runtime from around 10 seconds per epoch to 0.4 seconds – achieving a 25× speed-up and over 98% GPU kernel utilisation.
3. AI safety through neural network verification
A team from TU Wien (Austria) developed a GPU-accelerated verification system for assessing the robustness of neural networks against adversarial attacks. Initially conceived as an academic prototype, the project evolved into a practical tool for verifying AI models used in aerospace logistics – after mentors encouraged the team to test real-world challenges.
The system scaled from 191,000 to 105.8 million parameters – a 550× increase – and successfully verified Airbus Beluga optimisation problems, achieving a 5× speed-up in verification time.
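The recap does not detail the team's verification method, but a common building block in neural-network robustness verification is interval bound propagation, sketched here in plain Python (function names and the layer model are illustrative, not taken from the team's tool):

```python
def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b.

    For each output, the lower bound pairs positive weights with input
    lower bounds and negative weights with input upper bounds (and vice
    versa for the upper bound), so the true output range is enclosed.
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi


def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]
```

If the propagated bounds for the correct class stay above those of every other class across the whole input box, the network is provably robust on that region – the kind of guarantee adversarial-robustness verification aims for.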
4. Improving predictions in the terrestrial carbon cycle
A team from the Barcelona Supercomputing Center (Spain) brought an application at the intersection of machine learning and weather and climate modelling. Aiming for an accurate, high-resolution representation of land-surface boundaries, the team focused on improving the parallelisation and speed of training during the hackathon.
Moving from a single-node, multi-GPU setup to a multi-node setup, the team achieved 3.3× faster data loading and a 29% overall speed-up in training.
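The core idea behind scaling data loading across nodes is that each worker reads only its own shard of the dataset. A minimal round-robin sketch of that idea (similar in spirit to PyTorch's DistributedSampler, though the team's exact setup is not described in the recap):

```python
def shard_indices(num_samples: int, world_size: int, rank: int) -> list[int]:
    """Round-robin sharding: rank r handles samples r, r + world_size, ...

    Every sample is assigned to exactly one rank, so the shards
    partition the dataset with no duplication and no gaps.
    """
    return list(range(rank, num_samples, world_size))
```

With 8 workers, each rank then loads only one eighth of the data per epoch, which is where multi-node data-loading speed-ups come from.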
5. Explaining large model behaviour for reliable multilingual LLMs
A team from LMU (Germany) investigated how training dynamics cause inconsistencies in LLMs’ representation of facts across languages.
The team dove deep into the training process, expanding and optimising their custom-built interpretability framework, ExPLAIND. They familiarised themselves with the PyTorch Profiler and gained insights into the performance bottlenecks of their framework. By the end of the hackathon, the team had implemented best practices for efficient data loading of the FineWebHQ dataset and was able to train models in a multi-GPU, multi-node setting using Accelerate and DeepSpeed.
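A multi-node Accelerate + DeepSpeed run is typically driven by a configuration file along these lines (machine counts, process counts, and the ZeRO stage below are illustrative, not the team's actual settings):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
num_machines: 2                     # illustrative: two nodes
num_processes: 8                    # e.g. 4 GPUs per node
mixed_precision: bf16
deepspeed_config:
  zero_stage: 2                     # ZeRO-2 shards optimiser state and gradients
  gradient_accumulation_steps: 1
```

Such a config is then passed to `accelerate launch --config_file <file> train.py` on each participating node, letting Accelerate set up the distributed process group.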
Through these optimisations, the team achieved a 20% lower memory footprint and more accurate scores within ExPLAIND.
6. Reconstructing functional brain activity with NeuroGraph inverse solver
A team from RAU/UBB (Romania/UK) brought to the hackathon the NeuroGraph Inverse Solver, which reconstructs functional brain connectivity from non-invasive EEG scalp recordings. Working to optimise a combination of graph neural networks, spiking neural network simulation, and end-to-end differentiable signal reconstruction, the team pivoted quickly, adjusting assumptions and working on the algorithm’s stability.
Through profiling and performance optimisation, the team enabled proper multi-GPU training and achieved a 9.7× speed-up, reducing epoch time from 12 minutes to 2 minutes.
7. Decoding the fibrosis to cancer transition with genomic LLMs
A collaborative project with members from IISGS and CESGA (Spain) worked on fine-tuning Evo 2, a Genomic Foundation Model, within the NVIDIA BioNeMo framework to study the fibrosis-to-cancer transition.
By the end of the hackathon, the team successfully fine-tuned the Evo 2 genomic language model with 1B, 7B, and 40B parameters by learning how to handle complex multi-node training setups, optimise GPU performance, and establish reproducible pipelines for biomedical LLM research.
8. AI models for faster weather forecasts on Madeira
A team from the Observatório Oceânico da Madeira (Portugal) entered the hackathon with the goal of prototyping faster and more accurate weather forecast models using neural networks for multivariate time series. Through rigorous testing of various AI models, the team arrived at two working prototypes, which will serve as a base for developing more refined weather predictions suited to the Madeira archipelago.
9. GPU-accelerated search for massive black hole binaries
A collaboration between the University of Milano-Bicocca, SISSA, and INAF (Italy) came to the hackathon to further develop GAMES, a GPU-Accelerated HaMiltonian nEsted Sampling algorithm. To allow fast and robust identification of periodicities in massive amounts of data, the team tackled the parallelisation of the original CPU-based algorithm and, in a matter of weeks, produced a working GPU version that runs 10× faster than the original code.
10. High performing models for airspace protection
A group of AI and computer-vision engineers (Turkey) set the goal of advancing their video analytics for swarm-drone re-identification. Through extensive experimentation with deep metric learning and generative AI, the team enhanced model accuracy and real-time performance, contributing to intelligent drone defence and airspace-security technologies.
Takeaways
The EuroCC AI Hackathon showed how focused mentoring and access to state-of-the-art HPC systems can significantly advance applied AI work. Teams arrived with existing models and research questions and used the hackathon to analyse performance bottlenecks, optimise algorithms, and adapt their workflows to GPU-accelerated and multi-node environments.
Across disciplines, participants achieved substantial improvements in scalability, efficiency, and robustness, while gaining practical experience in profiling, parallelisation, and best practices for large-scale AI. The close collaboration with mentors enabled rapid iteration and informed technical decisions, strengthening both individual projects and long-term expertise.
The organising team thanks all participants, mentors, and assistants for making this event possible. We are proud to see complex problems solved, new frontiers explored, and lasting collaborations formed during this hackathon!
For information about upcoming training events, please visit events.asc.ac.at.