How does generative AI think? Szeged researchers probe the secrets of chatbots
Artificial intelligence (AI) helps us with more and more everyday tasks, but what happens in the "head" of a chatbot when it answers a question or interprets an instruction? This is one of the questions that research starting in April at the University of Szeged's Artificial Intelligence Competence Center seeks to answer, in collaboration with Rutgers University in the US and Ludwig-Maximilians-Universität in Germany. The aim is to explore the workings of generative language models in more depth so that the technology can be applied more safely and effectively.
A look inside the heads of generative AI models
Artificial intelligence can imitate human thinking, but does it really understand its own decisions? Generative models, which use various algorithms and machine learning methods to create new content from given instructions, may be able to play chess, but do they really know the rules of the game, or are they merely following patterns without understanding how the game works?
One important topic of the research launched within the RAItHMA project is how generative AI models represent individual concepts and how those concepts relate to each other. In other words, if a chatbot judges a statement to be true, does it automatically consider its negation to be false? This is self-evident in human thinking, but it is not always the case with language models.
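The consistency question above can be pictured as a simple probe. The sketch below is purely illustrative: `model_judgment` is a hypothetical stand-in for querying a real language model (here a toy stub that deliberately exhibits the inconsistency described), and `negate` handles only trivially simple sentences.

```python
# Sketch of a negation-consistency probe for a language model.
# `model_judgment` is a hypothetical placeholder: in a real experiment
# it would query the model under study; here it is a toy stub.

def model_judgment(statement: str) -> bool:
    # Toy stub: it affirms one true statement, correctly affirms one
    # negation, and -- the failure mode described above -- also
    # affirms the negation of a statement it already holds true.
    affirmed = {
        "Paris is the capital of France.",
        "Paris is not the capital of France.",   # inconsistent!
        "Budapest is not the capital of France.",
    }
    return statement in affirmed

def negate(statement: str) -> str:
    # Naive negation for simple "X is Y." sentences (illustration only).
    return statement.replace(" is ", " is not ", 1)

def is_consistent(statement: str) -> bool:
    # A logically consistent judge cannot give the same verdict to a
    # statement and to its negation.
    return model_judgment(statement) != model_judgment(negate(statement))

print(is_consistent("Budapest is the capital of France."))  # True
print(is_consistent("Paris is the capital of France."))     # False
```

In an actual study the stub would be replaced by model queries over many statement/negation pairs, and the rate of inconsistent pairs would be the quantity of interest.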
The surprising limitations of chatbots
“Large language models do not acquire actual knowledge or an understanding of rules; they are based solely on the continuation of texts. As a result, chatbots sometimes make mistakes on basic questions that even a child could answer. For example, if we list the names of the seven dwarfs and then ask whether a given name was on the list, the model does not always know the correct answer. AI is capable of solving extremely complex mathematical problems, yet it struggles with the concept of a set and sometimes even with very simple puzzles. If we can uncover the background to this, we can make great strides towards a better understanding of artificial intelligence and its safer, more efficient use,”
– said Dr. Márk Jelasity, Head of the Artificial Intelligence Competence Center of the Interdisciplinary Center of Excellence for Research, Development and Innovation.
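What makes the dwarf-list failure striking is that the task is a trivial set-membership check. The sketch below states that task programmatically; the prompt format and the idea of comparing a chatbot's answer against it are illustrative assumptions, with the ground truth computed directly in Python.

```python
# The set-membership task described above, stated programmatically.
# A chatbot would be given the list in a prompt and asked whether a
# name occurs in it; the ground truth is a one-line set lookup.

dwarfs = {"Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"}

def ground_truth(name: str) -> bool:
    # The answer the model should reproduce.
    return name in dwarfs

def make_prompt(name: str) -> str:
    # Hypothetical prompt for probing a chatbot on the same question.
    return (f"Here is a list of names: {', '.join(sorted(dwarfs))}. "
            f"Is '{name}' on the list? Answer yes or no.")

print(ground_truth("Dopey"))    # True
print(ground_truth("Gandalf"))  # False
```

Because every probe has a mechanically verifiable answer, a model's error rate on such questions can be measured exactly, which is what makes membership tasks useful diagnostics.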
The researchers are also seeking the causes behind these contradictions: what internal knowledge a model actually holds, and how miscommunication between humans and machines can be reduced. Beyond improving the reliability of generative AI, this work may open new perspectives for applying the models in many areas.