How does generative AI think? Szeged researchers seek answers to the secrets of chatbots
Artificial intelligence (AI) helps us with more and more everyday tasks, but what happens in the “head” of a chatbot when it answers a question or interprets an instruction? This is one of the questions that research at the University of Szeged’s Artificial Intelligence Competence Center, launching in April in collaboration with Rutgers University in the US and Ludwig-Maximilians-Universität in Germany, seeks to answer. The aim is to explore the workings of generative language models in greater depth so that the technology can be applied more safely and effectively.
A look inside the heads of generative AI models
Artificial intelligence can imitate human thinking, but does it really understand its own decisions? Generative models – which use various algorithms and machine learning techniques to create new content from given instructions – can play chess, but do they really know the rules of the game, or do they simply follow patterns without understanding how the game works?
One important topic of the research launched within the RAItHMA project is how generative AI models represent individual concepts and how those concepts relate to each other. For example, if a chatbot judges a statement to be true, does it automatically consider its negation to be false? This is self-evident in human thinking, but it is not always the case with language models.
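The negation check described above can be sketched as a simple consistency probe. This is a hypothetical illustration, not the project’s actual methodology: `query_model` stands in for a real chatbot API call, and the toy stub below deliberately returns an inconsistent pair of answers so the sketch runs on its own.

```python
# Hypothetical negation-consistency probe. `query_model` is a
# stand-in for a real language-model call; this toy stub returns
# canned answers, including one deliberately inconsistent pair.
def query_model(prompt: str) -> str:
    canned = {
        "Is the following statement true? Paris is the capital of France.": "yes",
        # Inconsistent: the stub also affirms the negation.
        "Is the following statement true? Paris is not the capital of France.": "yes",
    }
    return canned.get(prompt, "unknown")

def negation_consistent(statement: str, negation: str) -> bool:
    """A model is consistent if it does not judge both a statement
    and its negation to be true."""
    a = query_model(f"Is the following statement true? {statement}")
    b = query_model(f"Is the following statement true? {negation}")
    return not (a == "yes" and b == "yes")
```

Running many statement/negation pairs through such a probe would measure how often a model’s internal representation of a concept actually entails the falsity of its negation.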
The surprising limitations of chatbots
“Large language models do not acquire actual knowledge or an understanding of rules; they are trained solely to continue text. As a result, chatbots sometimes make mistakes on basic questions that even a child can answer. For example, if we list the names of the seven dwarfs and then ask whether a given name was on the list, the model does not always know the correct answer. AI is capable of solving extremely complex mathematical problems, yet it struggles with the concept of a set and sometimes even with very simple puzzles. If we can uncover the background to this, we can make great strides towards a better understanding of artificial intelligence and its safer, more efficient use,”
– said Dr. Márk Jelasity, Head of the Artificial Intelligence Competence Center of the Interdisciplinary Center of Excellence for Research, Development and Innovation.
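The list-membership failure Dr. Jelasity describes can be turned into a tiny evaluation harness. Again a hedged sketch: `query_model` is a hypothetical stand-in for a chatbot call, and the stub below imitates the failure mode in the quote by answering “yes” to every membership question instead of checking the list.

```python
# Hypothetical list-membership probe based on the seven-dwarfs example.
DWARFS = ["Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"]

def query_model(prompt: str) -> str:
    # Toy stub: always answers "yes", imitating a model that
    # pattern-matches rather than actually consulting the list.
    return "yes"

def membership_probe(name: str) -> bool:
    """Return True if the model's answer matches ground truth."""
    prompt = (f"The seven dwarfs are: {', '.join(DWARFS)}. "
              f"Was the name {name} on the list? Answer yes or no.")
    answer = query_model(prompt)
    truth = "yes" if name in DWARFS else "no"
    return answer == truth
```

Probing with a name that is on the list succeeds, while a name that is not on it exposes the error, which is exactly the kind of simple set-membership task the researchers say models can stumble on.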
The researchers are also seeking to explain these contradictions: what internal knowledge does the model actually have, and how can miscommunication between humans and machines be reduced? Beyond improving the reliability of generative AI, this work may open new perspectives for applying the models in many areas.