The dark side of AI – who will stop the spread of deepfakes?
Deepfakes, or fake images, videos, and audio generated by artificial intelligence, are a growing problem online. As technology advances, it becomes increasingly difficult to distinguish between real and artificial content, which undermines trust in online information. According to the Deloitte TMT Predictions 2025 study, curbing this phenomenon is a costly and complex task in which technology companies, content producers, advertisers, and users must all play a role. In order to maintain credibility and restore trust, all stakeholders must mobilize significant resources.
Deloitte’s “2024 Connected Consumer Study” highlights the severity of the problem. Half of respondents say that the trustworthiness of online content has deteriorated compared with the previous year. Two-thirds of those who are familiar with or use generative artificial intelligence fear that these technologies will be used for manipulation and deception. Many respondents reported that it is becoming increasingly difficult to distinguish between real people and AI-generated content. The majority said that clearly labeling AI-generated photos, videos, and audio is essential, but not enough on its own to solve the problem.
The cost of fighting deepfakes
Efforts to detect and filter out fake content are becoming increasingly expensive. According to expert estimates, major technology and social media platforms spent $5.5 billion in 2023 on filtering out deepfakes, and this amount is expected to reach $15.7 billion by 2026. These costs do not fall on companies alone: consumers, advertisers, and content creators must also make sacrifices if they want to stay ahead of the counterfeiters.
A key tool for defense is recognizing AI-generated content. Technology companies are developing deep learning algorithms and machine vision solutions to identify signs of digital manipulation. Small details such as unnatural lip movements, voice fluctuations, or light-reflection patterns can be telltale signs. However, such technologies are not perfect, as deepfake developers are constantly inventing new ways to circumvent filters.
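To make this concrete, the sketch below shows in Python one common way such a frame-level detector can be structured: a pretrained image backbone with a binary real-versus-fake head that scores individual video frames. The model choice (ResNet-18), the file name, the fake_probability helper, and the 0.9 flagging threshold are all illustrative assumptions rather than details from the Deloitte study; a production system would be fine-tuned on labeled deepfake data and combined with audio and temporal cues.

```python
# Minimal sketch of frame-level deepfake detection: a pretrained CNN backbone
# with a single real-vs-fake output. All names and thresholds are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Reuse a pretrained ResNet-18 and replace its classification head with one
# logit (fake vs. real). In practice this head would be fine-tuned on a
# labeled dataset of genuine and manipulated frames before use.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

# Standard ImageNet-style preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(image_path: str) -> float:
    """Return the model's estimated probability that a frame is AI-generated."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

if __name__ == "__main__":
    # Hypothetical input frame; flag it for human review above a chosen threshold.
    score = fake_probability("suspect_frame.jpg")
    print(f"fake probability: {score:.2f}",
          "-> flag for review" if score > 0.9 else "")
```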
“The challenge is that the accuracy of today’s deepfake detection tools is at most 90 percent, and fraudsters have easy access to AI models that can help them evade or simply overload these filters with increasingly sophisticated methods,” said Csilla Gercsák, manager of Deloitte Hungary’s technology consulting business.