How does generative AI think? Szeged researchers probe the secrets of chatbots
Artificial intelligence (AI) helps us with more and more everyday tasks, but what happens in the “head” of a chatbot when it answers a question or interprets an instruction? This is one of the questions addressed by a research project of the Artificial Intelligence Competence Center of the University of Szeged, starting in April in collaboration with Rutgers University in the US and Ludwig-Maximilians-Universität in Germany. The aim is to explore the workings of generative language models in more depth, so that the technology can be applied more safely and effectively.
A look inside the heads of generative AI models
Artificial intelligence can imitate human thinking, but does it really understand its own decisions? Generative models – which use machine learning to create new content from given instructions – may be able to play chess, but the question arises whether they really know the rules of the game or simply follow patterns without understanding how the game works.
One key topic of the research, launched within the RAItHMA project, is how generative AI models represent individual concepts and how these concepts relate to each other. In other words, if a chatbot judges a statement to be true, does it automatically consider its negation to be false? This is self-evident in human thinking, but it is not always the case with language models.
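The negation question can be phrased as a simple consistency test. The sketch below is illustrative only: `judge` is a hypothetical stand-in for asking a chatbot whether a statement is true (a real probe would query a model API), and the toy "knowledge" deliberately affirms both a statement and its negation, mimicking the kind of inconsistency the researchers describe.

```python
# Minimal sketch of a negation-consistency probe.
# `judge` is a hypothetical stand-in for a chatbot's true/false verdict.

def judge(statement: str) -> bool:
    # Toy judgments standing in for a model; inconsistent on purpose.
    verdicts = {
        "Paris is the capital of France": True,
        "Paris is not the capital of France": True,
    }
    return verdicts.get(statement, False)

def negation_consistent(statement: str, negation: str) -> bool:
    # A logically consistent judge gives opposite answers
    # to a statement and its negation.
    return judge(statement) != judge(negation)

print(negation_consistent(
    "Paris is the capital of France",
    "Paris is not the capital of France",
))  # prints False: the toy "model" affirms both
```

Running such probes over many statement/negation pairs would give a quantitative picture of how consistently a given model handles negation.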
The surprising limitations of chatbots
“Large language models do not acquire actual knowledge or an understanding of rules; they are trained solely to continue texts. As a result, chatbots sometimes make mistakes on basic questions that even a child can answer. For example, if we list the names of the seven dwarfs and then ask whether a given name was on the list, the model does not always know the correct answer. AI is capable of solving extremely complex mathematical problems, yet it struggles with the concept of a set and sometimes even with very simple puzzles. If we can uncover the background to this, we can make great strides towards a better understanding of artificial intelligence and its safer, more efficient use,”
– said Dr. Márk Jelasity, Head of the Artificial Intelligence Competence Center of the Interdisciplinary Center of Excellence for Research, Development and Innovation.
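The set-membership failure in the quote is striking because the same check is trivial for an ordinary program, which answers deterministically rather than by predicting a plausible continuation. A minimal sketch (the probe names are illustrative):

```python
# The membership question that sometimes trips up chatbots is a
# one-line deterministic check for a program.
dwarfs = {"Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"}

def was_on_the_list(name: str) -> bool:
    # Exact set membership: the same input always yields the same
    # answer, unlike a language model continuing a text.
    return name in dwarfs

print(was_on_the_list("Grumpy"))   # prints True
print(was_on_the_list("Gandalf"))  # prints False
```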
The researchers are also seeking to explain these contradictions: what internal knowledge a model actually has, and how miscommunication between humans and machines can be reduced. Beyond improving the reliability of generative AI, this work may open new perspectives for applying such models in many areas.