What is intelligence, and what is artificial intelligence all about? What are the opportunities and risks of AI, and how can startups and SMEs train their artificial neural networks on Austria's supercomputer? Martin Pfister from EuroCC Austria answers these and many other questions about machine learning, deep learning, and artificial intelligence – which he does not refer to as "intelligent."
Interview by Bettina Benesch
Companies that want to engage with machine learning, deep learning, and artificial neural networks turn to the EuroCC industry team. In an initial meeting, we assess the company's needs and expertise, and then we tailor the service individually for them. Later on, different experts from our team get involved, and we help implement the project on the Vienna Scientific Cluster (VSC), Austria's supercomputer. The company develops the project independently, and we help whenever technical support is needed.
Yes, and most of them have a certain level of know-how or an interest in engaging with deep learning or machine learning.
Deep learning or machine learning may not sound as fancy as artificial intelligence, but I find these terms more fitting because they better convey that the artificial neural network learns certain parameters within a given framework. Essentially, it’s nothing more and nothing less than very detailed statistics.
“
We tailor our services to the individual needs of companies and help them implement their projects on European supercomputers.
„
If you transcribe our interview later, it will probably be done by an artificial neural network trained on large amounts of transcribed audio samples. It has already heard interviews, speeches, or audiobooks and has been fed both the audio file and the text as letters. Or, when you ask your phone, "What will the weather be like tomorrow?" or say, "Call grandma," it will use these kinds of algorithms, at least in part.
Well, weather is still mostly forecasted using classical High-Performance Computing (HPC or supercomputing), though AI is starting to be used, too. But it's a good example to highlight the difference between HPC and artificial intelligence. When calculating weather models in the traditional way, the system operates with certain rules that are meant to represent physical laws. For example: If it's raining in the north and the wind is blowing south, it will soon rain in the south.
There are a vast number of other rules, and the computer calculates a result from this massive amount of data. It doesn’t learn by itself; we tell it, "These are the laws, this is the situation, now calculate what will happen."
Machine learning takes a different approach: The model isn’t given the physical relationships or any other rules, but instead, it is provided with information about what happened in the past. The model then tries to derive connections on its own.
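The contrast can be sketched in a few lines of Python. This is a toy illustration with invented data, not a real weather model: a hand-written rule versus a model that derives the connection purely from records of what happened in the past.

```python
from collections import defaultdict

# Rule-based (classical HPC style): we encode the "law" ourselves.
def rule_based_forecast(rain_north: bool, wind_southward: bool) -> bool:
    # If it's raining in the north and the wind blows south -> rain in the south.
    return rain_north and wind_southward

# Data-driven (machine learning style): the model only sees past
# observations and outcomes, never the rule itself. (Invented toy data.)
past_conditions = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 1)]
past_rain_south = [1, 0, 0, 0, 1, 0]  # did it rain in the south afterwards?

counts = defaultdict(lambda: [0, 0])  # condition -> [observations, rainy outcomes]
for condition, rained in zip(past_conditions, past_rain_south):
    counts[condition][0] += 1
    counts[condition][1] += rained

def learned_forecast(rain_north: bool, wind_southward: bool) -> float:
    # Estimated probability of rain, derived purely from the historical records.
    seen, rainy = counts[(int(rain_north), int(wind_southward))]
    return rainy / seen if seen else 0.5  # unseen situation: the model can only guess
```

Both forecasts agree on the data above, but they fail differently: the rule is only as good as the law we wrote down, while the learned model is only as good as the history it has seen.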
“
Essentially, AI is nothing more and nothing less than very detailed statistics.
„
Training is retrospective. But once it’s trained, it can be applied prospectively.
That’s always a challenge, and it’s impossible to predict the future with 100% certainty. Essentially, it's always about statistical considerations, which provide more or less likely scenarios.
I don’t see the danger in AI becoming too intelligent for us, because it’s not intelligent. My concern is rather that people will delegate decisions to AI that it is not capable of making. A few years ago, a self-driving car failed to recognise a pedestrian who was pushing a bicycle across an unlit rural road at night. The AI wasn’t trained to expect pedestrians pushing bicycles on unlit rural roads at night. A human would have instinctively known what to do right away, but the computer model was overwhelmed by the situation. Of course, one can argue that humans make mistakes too, but this is something that needs to be taken into account.
I would say that intelligence includes the ability to comprehend entirely new things. Artificial intelligence can learn new things, but only within a limited framework and always with algorithms predefined by humans. The versatility with which humans can learn new skills is just in a completely different category.
When AI hallucinates, it provides false information as if it were true, and in a very convincing way. This mainly concerns large language models (LLMs), which are text-based AI. Initially, LLMs are trained to complete texts: a sentence ends in the middle, and the model searches for the most likely continuation.
“
I don't see the danger in artificial intelligence becoming too intelligent for us. Because it is not intelligent.
„
Whether it makes logical sense or not is something the model learns over time. And sometimes it makes something up because it assumes that’s the most likely continuation.
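The idea of "searching for the most likely continuation" can be sketched with simple word counts. This toy example just tallies which word follows which in a tiny made-up corpus; real LLMs learn probabilities over subword tokens with neural networks, but the underlying objective is the same kind of prediction.

```python
from collections import Counter, defaultdict

# A tiny invented corpus for illustration.
corpus = ("the weather will be sunny . "
          "the weather will be rainy . "
          "the weather will be sunny .").split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def most_likely_continuation(word: str) -> str:
    # Pick the most frequent follower -- the "most likely continuation".
    return followers[word].most_common(1)[0][0]

print(most_likely_continuation("be"))  # "sunny" -- seen twice vs once for "rainy"
```

Note that the model answers "sunny" not because it is true, but because it was the most frequent continuation in its training data, which is exactly how a confident-sounding hallucination arises.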
There are strategies being explored on the training level, but as a user, there's little you can do. One approach would be to phrase the question in different ways and see if the result is the same each time. It also helps to ask follow-up questions to try to verify the result. And it’s definitely a good idea not to take everything the model outputs at face value.
I think we’ll learn a lot more about what artificial neural networks are good and aren’t good for. Some people or companies will have to pay the price for those lessons.
For example, a major airline employed a chatbot for customer support, and it gave a customer incorrect information. The customer insisted on the information they received, sued, and won the case.
Then there will be areas where it will work well. I can imagine that medicine is one field where AI can be very successful. Not to replace doctors, but to access and analyse the large amounts of historical patient data. I also believe that even outside the typical technical professions, there is huge potential, and people who aren’t very tech-savvy will discover new applications in their domains.
“
Medicine is one of the fields where AI can be very successful. Not to replace doctors, but to access and analyse the large amounts of historical patient data.
„
Yes, exactly. I believe machine learning is an area where many people from very different backgrounds can make valuable contributions. What’s really needed is an open mindset. Not a “This is how it is and how I always want it to be,” but more of a “Yay, there’s something new to try!”
Short bio
Martin Pfister has been working at EuroCC Austria since January 2024. Before that, he studied physics at TU Wien, completed his diploma thesis at the Austrian Institute of Technology (AIT), and moved to MedUni Wien for his dissertation in medical physics.
About the key concepts
Believe it or not, High-Performance Computing (HPC) is actually a relatively old concept: the word "supercomputing" was first used in 1929, and the first mainframe computers appeared in the 1950s. However, they had far less capacity than today's mobile phones. The technology really took off in the 1970s.
HPC systems are used whenever a personal computer's memory is too small, when simulations are too large to run on a personal system, or when what was previously calculated locally now needs to be calculated much more frequently.
The performance of supercomputers is measured in FLOPS (Floating Point Operations Per Second). In 1997, a supercomputer achieved 1.06 TeraFLOPS (1 TeraFLOPS = 10^12 FLOPS) for the first time; Austria's currently most powerful supercomputer, the VSC-5, reaches 2.31 PetaFLOPS or 2,310 TeraFLOPS (1 PetaFLOPS = 10^15 FLOPS). The era of exascale computers began in 2022, with performance measured in ExaFLOPS (1 ExaFLOPS = 10^18 FLOPS). An ExaFLOPS equals one quintillion floating-point operations per second.
As of June 2024, there were only two exascale systems in the TOP500 list of the world's most powerful supercomputers: Frontier at Oak Ridge National Laboratory and Aurora at Argonne National Laboratory, both in the USA. In Europe, there are currently three pre-exascale computers, which are precursors to exascale systems. Two European exascale systems will be operational shortly.
VSC (Vienna Scientific Cluster) is Austria's supercomputer, co-financed by several Austrian universities. The computers are located at TU Wien in Vienna. From 2025, the newest supercomputer, MUSICA (Multi-Site Computer Austria), will be in use at locations in Vienna, Linz, and Innsbruck.
Researchers from the participating universities can use the VSC for their simulations, and under the EuroCC programme, companies also have easy and free access to computing time on Austria's supercomputer. Additionally, the VSC team is an important source of know-how: in numerous workshops, future HPC users, regardless of their level, learn everything about supercomputing, AI and big data.
EuroHPC Joint Undertaking is a public-private partnership of the European Union aimed at building a Europe-wide high-performance computing infrastructure and keeping it internationally competitive.
EuroCC is an initiative of EuroHPC.
Each participating country (EU member states plus some associated states) has established a national competence centre for supercomputing, big data and artificial intelligence – EuroCC Austria is one of them. These centres are part of the EuroCC project, which brings the technology closer to future users and facilitates access to supercomputers. The goal of the project is to help industry, academia and the private sector adopt and leverage HPC, AI and high-performance data analytics. EuroHPC also supports the EUMaster4HPC project, an educational programme for future HPC experts.