What AI can and cannot do - Martin Pfister


03.11.2024

"Artificial Intelligence is Not Intelligent": What AI Can and Cannot Do



What is intelligence, and what is artificial intelligence all about? What are the opportunities and risks of AI, and how can startups and SMEs train their artificial neural networks on Austria's supercomputer? Martin Pfister from EuroCC Austria answers these and many other questions about machine learning, deep learning, and artificial intelligence – which he does not refer to as "intelligent."

Interview by Bettina Benesch

Martin, at EuroCC Austria, you support startups in working with artificial intelligence (AI). How exactly does this collaboration work?


Companies that want to engage with machine learning, deep learning, and artificial neural networks turn to the EuroCC industry team. In an initial meeting, we assess the company's needs and expertise, and then we tailor the service to them individually. Later on, different experts from our team get involved, and we support the implementation of the project on the Vienna Scientific Cluster (VSC), Austria's supercomputer. The company develops the project independently, and we step in whenever technical support is needed.


Does this mean that companies already need to have expertise in AI?
 

Yes, most of them have a certain level of know-how, or at least the interest to engage with deep learning or machine learning.
 

You just mentioned two terms that are currently ubiquitous: machine learning and deep learning. What are they about?


Deep learning or machine learning may not sound as fancy as artificial intelligence, but I find these terms more fitting because they better convey that the artificial neural network learns certain parameters within a given framework. Essentially, it’s nothing more and nothing less than very detailed statistics.

We tailor our services to the individual needs of companies and help implement their projects on European supercomputers.


What are the application areas of these detailed statistics?


If you transcribe our interview later, it will probably be done by an artificial neural network trained on large amounts of transcribed audio samples. It has already heard interviews, speeches, or audiobooks, having been fed both the audio and the corresponding written text. Or, when you ask your phone, "What will the weather be like tomorrow?" or say, "Call grandma," it will use these kinds of algorithms, at least in part.
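
To make this concrete, here is a minimal sketch of how such a transcription might look in code, assuming the Hugging Face transformers library and the openly available Whisper model; the file name is just a placeholder:

```python
# Minimal sketch: transcribing an audio file with a pretrained
# speech-recognition model via the Hugging Face "transformers" library.
# The model was trained on large amounts of paired audio and text,
# as described above. "interview.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
result = asr("interview.wav")  # the pipeline handles audio decoding
print(result["text"])          # the most likely transcription
```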
 

So, does the weather forecast come from AI?
 

Well, weather is still mostly forecasted using classical High-Performance Computing (HPC or supercomputing), though AI is starting to be used, too. But it's a good example to highlight the difference between HPC and artificial intelligence. When calculating weather models in the traditional way, the system operates with certain rules that are meant to represent physical laws. For example: If it's raining in the north and the wind is blowing south, it will soon rain in the south.

There is a vast number of other such rules, and the computer calculates a result from them together with a massive amount of data about the current situation. It doesn’t learn by itself; we tell it, "These are the laws, this is the situation, now calculate what will happen."

Machine learning takes a different approach: The model isn’t given the physical relationships or any other rules, but instead, it is provided with information about what happened in the past. The model then tries to derive connections on its own.
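
To illustrate the contrast (and, with its last two lines, the retrospective/prospective point discussed below), here is a toy sketch in Python with scikit-learn; the rule, the numbers, and the variable names are invented for illustration and are not real weather data:

```python
# Toy sketch of the two approaches described above (invented numbers).
import numpy as np
from sklearn.linear_model import LinearRegression

# --- Rule-based (classical HPC style): the human supplies the rule ---
def rule_based_forecast(rain_north_mm: float, wind_southward: bool) -> float:
    # "If it's raining in the north and the wind is blowing south,
    #  it will soon rain in the south."
    return rain_north_mm if wind_southward else 0.0

# --- Machine learning: the model derives the connection from the past ---
# Each row: [rain in the north (mm), wind blowing south (1) or not (0)]
X_past = np.array([[10, 1], [8, 1], [12, 0], [0, 1], [5, 0]])
y_past = np.array([9.0, 7.0, 0.0, 0.0, 0.0])  # later rain in the south (mm)

model = LinearRegression().fit(X_past, y_past)  # training: retrospective
print(model.predict([[9, 1]]))                  # prediction: prospective
```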

Essentially, AI is nothing more and nothing less than very detailed statistics.


The large datasets needed for this come from the past. Does that mean that the predictions are always retrospective?


Training is retrospective. But once the model is trained, it can be applied prospectively.


Now with climate change, we are dealing with something that hasn’t occurred before. How can predictions be derived in this case?


That’s always a challenge, and it’s impossible to predict the future with 100% certainty. Essentially, it's always about statistical considerations, which provide more or less likely scenarios.
 

If artificial intelligence isn’t actually that powerful, does that mean we don’t need to worry about it surpassing us one day?
 

I don’t see the danger in AI becoming too intelligent for us, because it’s not intelligent. My concern is rather that people will delegate decisions to AI that it is not capable of making. A few years ago, a self-driving car failed to recognise a pedestrian who was pushing a bicycle across an unlit rural road at night; the AI had simply never been trained for that situation. A human would have instinctively known what to do right away, but the computer model was overwhelmed. Of course, one can argue that humans make mistakes too, but this is something that needs to be taken into account.
 

What does intelligence mean to you, and what is the difference between artificial and human intelligence?
 

I would say that intelligence includes the ability to comprehend entirely new things. Artificial intelligence can learn new things, but only within a limited framework and always with algorithms predefined by humans. The versatility with which humans can learn new skills is just in a completely different category.


However, AI can also do something that humans do: hallucinate. Can you briefly explain what hallucinations in AI are?


When AI hallucinates, it provides false information as if it were true, and in a very convincing way. This mainly concerns large language models (LLMs), which are text-based AI. Initially, LLMs are trained to complete texts: a sentence breaks off in the middle, and the model searches for the most likely continuation.

I don't see the danger in artificial intelligence becoming too intelligent for us. Because it is not intelligent.

Whether it makes logical sense or not is something the model learns over time. And sometimes it makes something up because it assumes that’s the most likely continuation.
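
This completion behaviour can be sketched in a few lines, assuming the Hugging Face transformers library with the small open GPT-2 model (any text-generation model would behave analogously):

```python
# Minimal sketch: a language model continuing a sentence that breaks off
# in the middle. The objective is "most likely continuation", not truth.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The capital of Austria is"
out = generator(prompt, max_new_tokens=10, do_sample=False)  # greedy decoding
print(out[0]["generated_text"])  # plausible, but not guaranteed to be true
```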
 

Can users avoid such hallucinations when using tools like ChatGPT?
 

There are strategies being explored at the training level, but as a user, there's little you can do. One approach is to phrase the question in different ways and see whether the result is the same each time. It also helps to ask follow-up questions to verify the result. And it’s definitely a good idea not to take everything the model outputs at face value.
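
The "phrase it in different ways and compare" strategy can be sketched as follows; ask_model is a hypothetical placeholder for whatever chat API or local model is actually used:

```python
# Sketch of a user-level consistency check for hallucinations.
# `ask_model` is hypothetical: plug in your chat API or local model.
from collections import Counter

def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

def self_consistent_answer(phrasings: list[str]) -> tuple[str, float]:
    """Ask the same question in several phrasings; return the most
    common answer and the fraction of phrasings that agreed with it."""
    answers = [ask_model(p).strip().lower() for p in phrasings]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / len(answers)

# Usage idea: if the agreement ratio is below 1.0, treat the answer
# with extra caution and verify it with follow-up questions.
```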
 

How do you think artificial intelligence will develop in the coming years?


I think we’ll learn a lot more about what artificial neural networks are and aren’t good for. Some people or companies will have to pay the price for those lessons.

For example, a major airline employed a chatbot for customer support, and it gave a customer incorrect information. The customer insisted on what they had been told, sued, and won the case.

Then there will be areas where it works well. I can imagine that medicine is one field where AI can be very successful: not to replace doctors, but to access and analyse the large amounts of historical patient data. I also believe that even outside the typical technical professions there is huge potential, and people who aren’t very tech-savvy will discover new applications in their domains.

Medicine is one of the fields where AI can be very successful. Not to replace doctors, but to access and analyse the large amounts of historical patient data.


This is also a goal of EuroCC: to introduce other fields, like the humanities and social sciences, to artificial intelligence.
 

Yes, exactly. I believe machine learning is an area where many people from very different backgrounds can make valuable contributions. What’s really needed is an open mindset. Not a “This is how it is and how I always want it to be,” but more of a “Yay, there’s something new to try!”


Short bio

Martin Pfister has been working at EuroCC Austria since January 2024. Before that, he studied physics at TU Wien, completed his diploma thesis at the Austrian Institute of Technology (AIT), and moved to MedUni Wien for his dissertation in medical physics.


About the key concepts