The dark side of AI – who will stop the spread of deepfake?
Deepfakes, or fake images, videos, and audio generated by artificial intelligence, are a growing problem online. As technology advances, it becomes increasingly difficult to distinguish between real and artificial content, which undermines trust in online information. According to the Deloitte TMT Predictions 2025 study, curbing this phenomenon is a costly and complex task in which technology companies, content producers, advertisers, and users must all play a role. In order to maintain credibility and restore trust, all stakeholders must mobilize significant resources.
Deloitte’s “2024 Connected Consumer Study” highlights the severity of the problem. Half of respondents say that the trustworthiness of online content has deteriorated compared to the previous year. Two-thirds of people who are familiar with or use generative artificial intelligence fear that these technologies will be used for manipulation and deception. Many respondents reported that it is becoming increasingly difficult to distinguish between real people and AI-generated content. The majority said that clearly labeling AI-generated photos, videos, and audio is essential, but that labeling alone is not enough to solve the problem.
The cost of fighting deepfakes
Efforts to detect and filter out fake content are increasingly expensive. According to expert estimates, major technology and social media platforms spent $5.5 billion in 2023 on filtering out deepfakes, and this amount is expected to reach $15.7 billion by 2026. These costs do not fall on companies alone: consumers, advertisers, and content creators must also make sacrifices if they are to stay ahead of the counterfeiters.
A key line of defense is recognizing AI-generated content. Technology companies are developing deep learning algorithms and machine vision solutions to identify signs of digital manipulation. Small details such as unnatural lip movements, voice fluctuations, or light reflection patterns can be telltale signs. However, such technologies are not perfect, as deepfake developers are constantly inventing new ways to circumvent filters.
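To make that detection approach more concrete, below is a minimal sketch, assuming a Python environment with PyTorch and torchvision, of how a per-frame deepfake classifier could be wired up: a standard image backbone with a two-class head that scores individual video frames as real or manipulated. The backbone choice, input size, and labels are illustrative assumptions, not a description of any vendor's actual detector, and the model would need training on labeled real/fake footage before its scores mean anything.

```python
# Illustrative sketch of a per-frame deepfake classifier (hypothetical setup).
# All choices (ResNet-18 backbone, 224x224 input, two classes) are assumptions
# for demonstration; production detectors are far more sophisticated and also
# analyze audio and temporal cues across frames.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a single video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Image backbone with a binary head: class 0 = real, class 1 = manipulated.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

def score_frame(frame: Image.Image) -> float:
    """Return the model's probability that a frame is manipulated (untrained here)."""
    batch = preprocess(frame).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()

if __name__ == "__main__":
    # A dummy gray frame stands in for a real video frame.
    dummy = Image.new("RGB", (640, 360), color=(128, 128, 128))
    print(f"P(manipulated) = {score_frame(dummy):.3f}")
```

In practice, systems of this kind aggregate scores over many frames and combine them with other signals (audio artifacts, metadata, provenance watermarks), which is part of why detection remains both expensive and imperfect.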
“The challenge is that the accuracy of today’s deepfake detection tools is at most 90 percent, and fraudsters have easy access to AI models that help them evade these filters, or simply overwhelm them, with increasingly sophisticated methods,” said Csilla Gercsák, manager of Deloitte Hungary’s technology consulting business.