How does generative AI think? Szeged researchers are looking for answers to the secrets of chatbots
Artificial intelligence (AI) helps us with more and more everyday tasks, but what happens in the “head” of a chatbot when it answers a question or interprets an instruction? This is one of the questions the Artificial Intelligence Competence Center of the University of Szeged will seek to answer in research starting in April, in collaboration with Rutgers University in the US and Ludwig-Maximilians-Universität in Germany. The aim is to explore the functioning of generative language models in more depth, so that the technology can be applied more safely and effectively.
A look inside the heads of generative AI models
Artificial intelligence can imitate human thinking, but does it really understand its own decisions? Generative models – which use machine learning to create new content from given instructions – can play chess, but it is an open question whether they really know the rules of the game or simply follow patterns without understanding how the game works.
One of the central topics of the research launched within the RAItHMA project is how generative AI models represent individual concepts and how those concepts relate to each other. For example, if a chatbot judges a statement to be true, does it automatically consider its negation to be false? This is self-evident to humans, but it is not always the case for language models.
The surprising limitations of chatbots
“Large language models do not acquire actual knowledge or an understanding of rules; they are trained solely to continue text. As a result, chatbots sometimes fail at basic questions that even a child can answer. For example, if we list the names of the seven dwarfs and then ask whether a given name was on the list, the model does not always give the correct answer. AI can solve extremely complex mathematical problems, yet it struggles with the concept of a set and sometimes even with very simple puzzles. If we can uncover why this happens, we can make great strides towards a better understanding of artificial intelligence and its safer, more efficient use,”
– said Dr. Márk Jelasity, Head of the Artificial Intelligence Competence Center of the Interdisciplinary Center of Excellence for Research, Development and Innovation.
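The list-membership test Dr. Jelasity describes can be made precise as a simple consistency probe. Below is a minimal sketch of how such a probe might be set up; `ask_model` is a hypothetical stand-in for any chatbot API call, and the prompt wording is only an illustration, not the project's actual methodology.

```python
# A hypothetical consistency probe: ask a chatbot whether a name
# appears on a short list, and score its reply against ground truth.

DWARFS = ["Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"]

def ask_model(question: str) -> str:
    """Hypothetical chatbot call; replace with a real API client."""
    raise NotImplementedError

def membership_prompt(names: list[str], query: str) -> str:
    # Build the question posed to the model.
    return (f"Here is a list of names: {', '.join(names)}. "
            f"Is '{query}' on the list? Answer yes or no.")

def expected_answer(names: list[str], query: str) -> str:
    # Ground truth against which the model's reply is scored.
    return "yes" if query in names else "no"
```

A model that has genuinely represented the list as a set will agree with `expected_answer` on every query; the researchers' observation is that current chatbots sometimes do not.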
The researchers are also seeking to explain these contradictions: what internal knowledge a model actually has, and how miscommunication between humans and machines can be reduced. Beyond improving the reliability of generative AI, this work may open new perspectives for applying the models in many areas.