The dark side of AI – who will stop the spread of deepfakes?
Deepfakes, or fake images, videos, and audio generated by artificial intelligence, are a growing problem online. As technology advances, it becomes increasingly difficult to distinguish between real and artificial content, which undermines trust in online information. According to the Deloitte TMT Predictions 2025 study, curbing this phenomenon is a costly and complex task in which technology companies, content producers, advertisers, and users must all play a role. In order to maintain credibility and restore trust, all stakeholders must mobilize significant resources.
Deloitte’s “2024 Connected Consumer Study” highlights the severity of the problem. Half of respondents say that the trustworthiness of online content has deteriorated compared to the previous year. Two-thirds of people who are familiar with or use generative artificial intelligence fear that these technologies will be used for manipulation and deception. Many respondents also reported that it is increasingly difficult to tell real people apart from AI-generated content. The majority of respondents said that clearly labeling AI-generated photos, videos, and audio is essential, but on its own it is not enough to solve the problem.
The cost of fighting deepfakes
Efforts to detect and filter out fake content are increasingly expensive. According to expert estimates, major technology and social media platforms spent $5.5 billion in 2023 on filtering out deepfakes, and this amount is expected to reach $15.7 billion by 2026. These costs do not fall on companies alone: consumers, advertisers, and content creators must also make sacrifices if they want to stay ahead of the counterfeiters.
A key line of defense is recognizing AI-generated content. Technology companies are developing deep learning algorithms and machine vision solutions to identify signs of digital manipulation. Small details such as unnatural lip movements, voice fluctuations, or light reflection patterns can be telltale signs. However, such technologies are not perfect, as deepfake developers are constantly inventing new ways to circumvent filters.
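For illustration only, and not drawn from the Deloitte study: the sketch below shows how a classifier-based image detector of this kind might be used to score a single frame, assuming a ResNet-18 has already been fine-tuned elsewhere on real versus manipulated face crops. The checkpoint file deepfake_resnet18.pt, the image path, and the class ordering are hypothetical placeholders.

```python
# Illustrative sketch only: scoring one image with a hypothetical deepfake
# classifier. Assumes a ResNet-18 fine-tuned elsewhere on real-vs-manipulated
# face crops; "deepfake_resnet18.pt" is a placeholder checkpoint name.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a 224x224 input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Two output classes by our convention: index 0 = real, index 1 = manipulated.
model = models.resnet18(weights=None, num_classes=2)
model.load_state_dict(torch.load("deepfake_resnet18.pt"))  # hypothetical weights
model.eval()

def fake_probability(path: str) -> float:
    """Return the model's estimated probability that the image is manipulated."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1].item()

print(f"P(fake) = {fake_probability('suspect_frame.jpg'):.2f}")
```

Production systems are considerably more involved, combining face detection, temporal consistency checks across video frames, and audio analysis, which is part of why detection accuracy still has a ceiling.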
“The challenge is that the accuracy of today’s deepfake detection tools is at most 90 percent, and fraudsters have easy access to AI models that help them evade these filters, or simply overload them, with increasingly sophisticated methods,” said Csilla Gercsák, manager of Deloitte Hungary’s technology consulting business.