Staying at the forefront of sustainable technological development requires timely and proactive action from public administrations, scientists, researchers, and innovators. Numerous initiatives in Austria take the lead and help people and organisations thrive in the digital decade by expanding skills and infrastructure and by addressing complex problems in science, research, the economy, and society with the help of advanced computing.
Below you will find an overview of ongoing projects in the field of HPC / Big Data / AI in Austria: projects that range from studying the universe to tackling societal challenges, creating innovative products and inventing breakthrough technologies; cooperations that foster scientific excellence, economic growth, and evidence-based policymaking; initiatives that meet challenges of rapid technological advancement, turn them into opportunities, and shape our digital future — today.
The projects are listed according to their funding programmes. The following information comes from official project descriptions that can be found on the respective websites.
EuroHPC JU - European High Performance Computing Joint Undertaking
EuroCC aims to build a European network of 33 national HPC competence centres to bridge the existing HPC skills gaps while promoting cooperation across Europe. To do so, each participating country is tasked with establishing a single National Competence Centre (NCC) for HPC. These NCCs will coordinate activities in all HPC-related fields at the national level and serve as a contact point for customers from industry, science, (future) HPC experts, and the general public alike. Each of the 33 national competence centres will act locally to map available HPC competencies and identify existing knowledge gaps. The competence centres will coordinate HPC expertise at the national level and ease access to European HPC opportunities for research and scientific users, public administration, and different industrial sectors, delivering tailored solutions for a wide variety of users.
HPCQS – High Performance Computer and Quantum Simulator hybrid
The HPCQS project aims to integrate two quantum simulators, each controlling more than 100 quantum bits (qubits), into two existing supercomputers.
In doing so, HPCQS will become an incubator for quantum-HPC hybrid computing that is unique in the world.
The seamless integration of quantum hardware with classical computing resources will enable research entities and industries to exploit new quantum technologies and find solutions to complex challenges in physics, chemistry and numerical optimisation with practical applications, for example, to materials and drug design, logistics and transportation.
HPCQS will develop the programming platform for the quantum simulator and offer cloud-based access to users and researchers. The project will build an open and evolutionary infrastructure that aims at expanding in the future by including a diversity of quantum computing platforms at different technology readiness levels in an HPC system and by allowing the integration of other European partners. The HPCQS infrastructure is a first step towards a European quantum computing infrastructure in synergy with the ongoing European efforts to establish a world-leading HPC infrastructure.
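The quantum-HPC hybrid workflow HPCQS targets can be pictured as a loop in which a classical optimiser steers a quantum device. Below is a minimal sketch that emulates the quantum part with a one-qubit statevector and uses parameter-shift gradient descent on the classical side; all names, parameters, and the toy objective are illustrative, not the HPCQS platform.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    """'Quantum' part: prepare RY(theta)|0> and measure <Z> (emulated classically)."""
    state = ry(theta) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)

def minimize(theta=0.3, lr=0.4, steps=100):
    """Classical part: parameter-shift gradient descent on <Z>."""
    for _ in range(steps):
        grad = 0.5 * (expectation_z(theta + np.pi / 2)
                      - expectation_z(theta - np.pi / 2))
        theta -= lr * grad
    return theta, expectation_z(theta)

theta, energy = minimize()
print(round(energy, 3))  # converges to -1, the minimum of <Z>, at theta ≈ pi
```

In a real hybrid deployment, `expectation_z` would dispatch jobs to the quantum simulator while the optimisation loop runs on the HPC side.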
LIGATE aims to integrate and co-design best-in-class European open-source components together with European intellectual property (whose development has already been co-funded by previous Horizon 2020 projects). It will help Europe keep worldwide leadership in Computer-Aided Drug Design (CADD) solutions, exploiting today's high-end supercomputers and tomorrow's exascale resources, while fostering European competitiveness in this field. The project will enhance the CADD technology of the drug discovery platform EXSCALATE.
The MICROCARD project will develop an exascale application platform for cardiac electrophysiology simulations that is usable for cell-by-cell simulations. The platform will be co-designed by HPC experts, numerical scientists, biomedical engineers, and biomedical scientists, from academia and industry. They will develop numerical schemes suitable for exascale parallelism, problem-tailored linear-system solvers and preconditioners, and a compiler to translate high-level model descriptions into optimized, energy-efficient system code for heterogeneous computing systems. The code will be resilient to hardware failures and will use an energy-aware task placement strategy.
With exascale systems almost at our door, we now need to turn our attention to making the most of these large investments for societal prosperity and economic growth. REGALE aspires to pave the way for next-generation HPC applications to exascale systems. To accomplish this, we define an open architecture, build a prototype system, and incorporate appropriate sophistication into this system to equip supercomputing systems with the mechanisms and policies for effective resource utilisation and the execution of complex applications.
REGALE brings together leading supercomputing stakeholders, prestigious academics, top European supercomputing centres, and end users from five critical target sectors, covering the entire value chain in system software and applications for extreme-scale technologies.
PRACE - Partnership for Advanced Computing in Europe
Gamma-ray binaries are among the most extreme astronomical objects. They consist of a giant star and a compact stellar companion, presumably a neutron star. The majority of the radiation detected from these systems is emitted at energies beyond the X-ray regime. Observation of this radiation indicates that it is produced by high-energy particles present in these systems. These particles can be accelerated in the region where a highly relativistic outflow emitted by the neutron star interacts with the stellar wind from the giant star. Since the origin of the radiation cannot be resolved observationally, understanding the physical processes in these systems requires modelling efforts that try to reproduce the observations. Because the observations show a pronounced energy dependence together with temporal variability, analytical models cannot reproduce them due to the simplifications they require. Modelling such a system is challenging even numerically: in a gamma-ray binary, orbital motion and (relativistic) outflows interact to produce a particle population that, through further interaction with this dynamical environment, emits the observed radiation. HPC simulations will help improve our understanding of these objects by modelling the dynamics and the particle transport in one of these systems with unprecedented complexity.
Turbulent friction causes a significant reduction of the flow rate and a consequent increase of the pumping cost in a wide range of applications involving the transport of highly viscous fluids. Among the drag reduction (DR) techniques developed in recent years, water-lubricated pipelining has emerged as one of the most promising. This technique takes advantage of the natural tendency of water to migrate towards the wall and thus to lubricate the flow. In this project, we want to investigate the performance of this technique numerically by performing large-scale simulations of a turbulent channel in which two near-wall layers of water lubricate the core flow of oil. The simulations will adopt an innovative approach based on direct numerical simulation of turbulence coupled with a phase-field method to describe the complex dynamics of the system, which is governed by the interplay of phenomena occurring on a wide range of spatial and temporal scales. The accurate description of these phenomena requires high-resolution grids, and thus large high-performance computing infrastructures are needed.
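The phase-field idea behind such simulations can be shown in one dimension: a field phi marks one fluid (phi = -1, say water) versus the other (phi = +1, oil), and a sharp jump relaxes into a smooth diffuse interface. The sketch below uses the simpler Allen-Cahn equation rather than the conservative formulation typically coupled to DNS, and all parameters are made up for illustration.

```python
import numpy as np

def relax(phi, dt=0.01, dx=0.2, eps=0.5, steps=5000):
    """Relax a phase field under the Allen-Cahn equation (periodic domain)."""
    for _ in range(steps):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        # gradient flow of the double-well free energy: phi -> +/-1 in the bulk
        phi = phi + dt * (eps**2 * lap - (phi**3 - phi))
    return phi

x = np.linspace(-5, 5, 51)    # grid spacing 0.2
phi = relax(np.sign(x))       # sharp initial interface at x = 0
# the jump smooths into the equilibrium tanh profile of width ~eps
print(round(float(phi[12]), 1), round(float(phi[38]), 1))
```

In the actual 3D problem, this field is advected by the turbulent velocity and feeds surface-tension forces back into the Navier-Stokes equations, which is what makes resolving all the coupled scales so expensive.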
In this project, a series of large-scale fully kinetic simulations will be performed to understand energy transfer physics in space plasmas. The space between planets, stars, and galaxies is filled with plasma, a collection of high-energy charged particles whose density is small enough to neglect particle collisions. In such a collisionless system, the boundary layer between regions with different plasma properties plays a central role in transferring energy and controlling the dynamics of the system. In the Earth's magnetosphere, a representative collisionless plasma system, the energy input from the solar wind is transferred through different physical processes at various boundary layers, which eventually control the global dynamics of the magnetosphere related to many space weather phenomena such as auroral substorms and geomagnetic storms. At the same time, plasma turbulence has been commonly observed in many locations in space, and understanding how energy cascades between different spatiotemporal scales in the turbulence is key to understanding energy transfer in collisionless plasmas. Indeed, the recently launched high-resolution Magnetospheric Multiscale (MMS) mission, the first mission to resolve electron scales in situ, very frequently observed turbulence at each boundary layer in the magnetosphere. This project will systematically investigate the realistic energy transfer processes across the turbulent boundary layers in the solar wind–magnetosphere system based on large-scale fully kinetic simulations and comparisons with the latest MMS observations. Since the sizes of the magnetospheric boundary layers are essentially on magnetohydrodynamic (MHD) scales (>10^4 km), quantitatively understanding the energy transfer processes in the magnetosphere requires handling MHD scales while resolving the smallest electron scales (10–100 km), where the energy is expected to be eventually dissipated.
The scientific focus of this project is to quantitatively investigate these MHD-scale energy transfer processes while resolving the electron scales, using large-scale fully kinetic simulations and comparisons with the MMS observations. This project is timely because the combination of our high-performance fully kinetic simulation code, our new techniques for direct, quantitative comparisons with MMS, and the requested Tier-0 resources allows us to perform such large-scale simulations and compare them quantitatively with real observations.
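Fully kinetic codes of this kind advance enormous numbers of charged particles through electromagnetic fields, almost universally with the Boris push, whose defining property is that it conserves particle energy exactly in a pure magnetic field. A single-particle sketch in normalised, illustrative units:

```python
import numpy as np

def boris_push(v, e, b, qm=1.0, dt=0.1):
    """Advance one particle's velocity through fields E and B by one step."""
    t = qm * b * dt / 2
    s = 2 * t / (1 + np.dot(t, t))
    v_minus = v + qm * e * dt / 2              # first half electric kick
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # exact-norm magnetic rotation
    return v_plus + qm * e * dt / 2            # second half electric kick

# uniform B along z, no E: the particle gyrates at constant speed
v = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0])
e = np.zeros(3)
for _ in range(1000):
    v = boris_push(v, e, b)
speed = float(np.linalg.norm(v))
print(round(speed, 6))  # stays 1.0: the rotation step preserves |v| exactly
```

Production particle-in-cell codes wrap this mover with field solvers and charge deposition on the grid; this energy-conserving property is what keeps long turbulence runs physically meaningful.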
SHAPE - SME HPC Adoption Programme in Europe
The Vienna Scientific Cluster is supporting Reintrieb in optimising their new side-by-side propulsion system with the help of High-Performance Computing (HPC).
The Vienna Scientific Cluster is supporting TAILSIT in porting and optimising their engineering simulation software tools to take advantage of modern High-Performance Computing (HPC) systems.
More information: https://vsc.ac.at/news/2021/news-records/shape-project-with-tailsit/
HORIZON 2020 - EU Programme for Research and Innovation
The DAPHNE project aims to define and build an open and extensible system infrastructure for integrated data analysis pipelines, including data management and processing, high-performance computing (HPC), and machine learning (ML) training and scoring. Key observations are that (1) systems of these areas share many compilation and runtime techniques, (2) there is a trend towards complex data analysis pipelines that combine these systems, and (3) the used, increasingly heterogeneous, hardware infrastructure converges as well. Yet, the programming paradigms, cluster resource management, as well as data formats and representations differ substantially. Therefore, this project aims – with a joint consortium of experts from the data management, ML systems, and HPC communities – at systematically investigating the necessary system infrastructure, language abstractions, compilation and runtime techniques, as well as systems and tools necessary to increase the productivity when building such data analysis pipelines, and eliminating unnecessary performance bottlenecks.
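The kind of integrated pipeline DAPHNE targets can be caricatured in a few lines: a data-management step and an ML training step composed as one program instead of two separate systems. The function names and data below are purely illustrative and not the DAPHNE API.

```python
def clean(rows):
    """Data-management step: drop records with missing values."""
    return [r for r in rows if None not in r]

def train(rows):
    """ML step: closed-form least squares for y = a*x + b."""
    n = len(rows)
    sx = sum(x for x, _ in rows)
    sy = sum(y for _, y in rows)
    sxx = sum(x * x for x, _ in rows)
    sxy = sum(x * y for x, y in rows)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# one pipeline: ingestion -> cleaning -> training
raw = [(0, 1.0), (1, 3.1), (2, None), (3, 6.9), (4, 9.2)]
slope, intercept = train(clean(raw))
print(round(slope, 2))  # ≈ 2 for this near-linear toy data
```

DAPHNE's point is that when such steps share one compiler and runtime, data formats, scheduling, and hardware placement can be optimised across the whole pipeline rather than at each system boundary.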
The CloudiFacturing innovative solution integrates software and hardware platforms to assist manufacturing SMEs and their demand for advanced cloud- or HPC-based ICT solutions. The EU-funded DIGITbrain project will extend CloudiFacturing with an augmented digital-twin concept called the digital product brain (DPB) and a smart business model called manufacturing as a service (MaaS). The DPB will allow the customisation and adaptation of on-demand data, models, algorithms and resources for industrial products according to individual conditions. MaaS will permit manufacturing SMEs to reach advanced manufacturing facilities within their territories and beyond. The project aims to support the development of advanced digital and manufacturing technologies through more than 20 highly innovative cross-border experiments, in addition to training and assisting digital innovation hubs in the implementation of the MaaS model, contributing to their long-term sustainability.
Most of Europe’s SMEs lag behind in data-driven innovation. To tackle this problem, the EU-funded EUHubs4Data project will build a European federation of Data Innovation Hubs based on existing key players in this area and connecting with data incubators and platforms, SME networks, AI communities, skills and training organisations and open data repositories. A European catalogue of data sources and federated data-driven services and solutions will be made accessible to European SMEs, start-ups and web entrepreneurs through the Data Innovation Hubs. Cross-border and cross-sector data-driven experimentation will be facilitated through data-sharing, as well as data- and service interoperability, becoming a reference instrument for growth in a global data economy and contributing to the creation of common European data spaces.
The EXPLORE project gathers experts from different science domains and technological fields to develop new tools that will enable and promote the exploitation of space science data.
Neuroscientists are set to benefit from a new consortium formed by five of the most significant supercomputing centres in Europe (Switzerland, Germany, Spain, Italy and France). Together, they formed the High Performance Analytics and Computing (HPAC) Platform of the Human Brain Project (HBP). The EU-funded ICEI project will set up cloud-like services compatible with the work cultures of scientific computing and data science. This will offer scientists a new quality of supercomputing that is both highly interactive and capable of extreme calculations. This elastic and scalable system will be adapted for neuroscientific research. ICEI is the first implementation project of the Fenix infrastructure (https://fenix-ri.eu).
Public administrations in all countries and science, technology and innovation (STI) stakeholders produce a vast amount of dynamic, multilingual and heterogeneous data. Understanding and analysing these data are instrumental to evidence-based policymaking. The EU-funded IntelComp project will establish an innovative cloud platform to assist public administrators and policymakers in the STI domain with AI-based services. The project will engage multidisciplinary groups to co-develop innovative analytics services, natural language processing pipelines and AI workflows and to exploit open data, services and computational resources from the European Open Science Cloud and high-performance computing environments as well as federated distributed operations. IntelComp will adopt living lab methodologies and involve public and private stakeholders to explore and assess STI policies.
The concept of digital twins has been around for some time, but the Internet of Things has enabled its cost-effective implementation. Digital twins refer to a virtual representation of a physical product or process. The EU-funded IoTwins project plans to build testbeds for digital twins in the manufacturing and facility management sectors. The digital models will integrate data from various sources such as data APIs, historical data, embedded sensors and open data. This will give manufacturers an unprecedented view into how their products are performing. In facility management, the technology will be instrumental in improving the way buildings and their systems operate and in preventing prospective problems.
Over the past 60+ years, CMOS-based digital computing has given rise to ever-greater computational performance, "big data"-based business models and the accelerating digital transformation of modern economies. However, the ever-growing amounts of data to be handled and the increasing complexity of today's tasks for high-performance computing (HPC) are becoming unmanageable as the data handling and energy consumption of HPC systems, server farms and cloud services grow to unsustainable levels. New concepts and technologies are needed. One such technology is quantum computing (QC). QC utilises so-called quantum bits (qubits) to perform complex calculations fundamentally faster than conventional digital-bit computers can. The first quantum computer prototypes have been created. Superconducting Josephson junctions (SJJs) have been shown to be extremely promising qubit candidates for achieving a significant nonlinear increase of computational power with the number of qubits. For novel materials, there is a great challenge yet opportunity in Europe to create a complete value chain for SJJs and QC. Such a complete value chain will contribute to Europe's technology sovereignty. The MATQu project aims at validating the technology options to produce SJJs on industrial 300 mm silicon-based process flows. It covers substrate technology, superconducting metals, resonators, through-wafer via holes, 3D integration, and variability characterisation. These will be assessed with respect to integration practices for qubits. Core substrate and process technologies with high quality factors, improved material deposition on large substrates, and increased critical temperature for superconducting operation will be developed and validated. The MATQu partners complement each other in an optimal way across the value chain to create a substantial competitive advantage, e.g. faster time-to-market and roll-out of technologies and materials for better Josephson junctions for quantum computing.
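The nonlinear growth of computational power with qubit count comes from the exponential state space: describing n qubits classically requires 2**n complex amplitudes. The toy statevector demonstration below (unrelated to MATQu's hardware work) applies a Hadamard gate to every qubit, producing a uniform superposition over all 2**n basis states.

```python
import numpy as np

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def all_hadamards(n):
    """Apply H to each of n qubits starting from |00...0>."""
    state = np.zeros(2**n)
    state[0] = 1.0
    op = np.array([[1.0]])
    for _ in range(n):
        op = np.kron(op, H)     # the full operator is already 2**n x 2**n
    return op @ state

state = all_hadamards(10)
print(len(state), round(float(state[0]) ** 2, 6))
# 1024 amplitudes; each basis state carries probability 1/1024
```

Even this naive simulator doubles its memory footprint with every added qubit, which is exactly why hardware qubits, and hence junction quality, matter.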
Predicting novel materials with specific desirable properties is a major aim of ab initio computational materials science (aiCMS) and an urgent requirement of basic and applied materials science, engineering and industry. Such materials can have immense impact on the environment and on society, e.g., on energy, transport, IT, medical-device sectors and much more. Currently, however, precisely predicting complex materials is computationally infeasible. NOMAD CoE will develop a new level of materials modelling, enabled by upcoming HPC exascale computing and extreme-scale data hardware.
The Partnership for Advanced Computing in Europe (PRACE) is the permanent pan-European high-performance computing (HPC) service offering world-class computing and data management resources and services. PRACE is concluding the transition to PRACE 2. The EU-funded PRACE-6IP project will build on and continue the achievements of PRACE, initiating new innovative and collaborative activities to assist the development of PRACE 2, strengthen the internationally recognised PRACE brand, maintain and expand advanced training, and prepare strategies and best practices towards exascale computing. The project will work on future software solutions, coordinate and increase the operation of the multi-tier HPC systems and services, and support users to take advantage of parallel systems and innovative architectures.
The overarching goal of the TRANSACT project is to develop a universal, distributed solution architecture for the transformation of safety-critical cyber-physical systems, from localised standalone systems into safe and secure distributed solutions leveraging edge and cloud computing.
TREX’s main focus will be the development of a user-friendly and open-source software suite in the domain of stochastic quantum chemistry simulations, which integrates TREX community codes within an interoperable, high-performance platform. In parallel, TREX will work on showcases to leverage this methodology for commercial applications as well as develop and implement software components and services that make it easier for commercial operators and user communities to use HPC resources for these applications.
The social sciences and humanities (SSH) – from sociology and economics to psychology, political science and cultural science – play an important role in improving our assessment of and response to complex societal issues. It is important to find new ways to conduct, connect and discover research and to support scientific, industrial and societal applications of SSH science. Thanks to a consortium of 18 partners, the EU-funded TRIPLE project will develop a multilingual and multicultural solution for the appropriation of SSH resources. It will make it easier for researchers to discover and reuse SSH research and to embark on interdisciplinary collaboration initiatives. It will make use of the online platform ISIDORE (created by France’s CNRS).
In recent years, the data market has been flourishing. A sharp downfall in trust towards platforms considered secure and privacy-aware, however, has hampered the market. This lack of trust has hit the data economy hard by limiting its resources to open data. This downfall is likely to continue if technical standards are not adopted. The EU-funded TRUSTS project aims to reinstate trust previously placed in the data market by developing a new platform using the experiences of two large national projects while also allowing the future introduction of newer platforms. The project plans to use this platform as a platform federator and start a thorough investigation into the ethics of the data market.
AI4DI's mission is to bring AI from the cloud to the edge and to make Europe a leader in silicon-born AI by advancing Moore's law and accelerating the adoption of edge processing in different industries through reference demonstrators.
The research project European Production Giganet for calamity-avoiding self-orchestration of value chain and learning ecosystems works on central questions related to the "smart and sovereign use of data in manufacturing" and demonstrates how a highly networked production ecosystem can orchestrate itself and be equipped with stabilising characteristics.