KPMG: How to use AI in credit risk analysis ethically and competitively
Financial institutions and financial service providers are expected to be among the biggest beneficiaries of the AI revolution, provided they do not sacrifice consumer and user trust for quick wins. The EU's AI regulation, which enters into force in stages through August 2026, is designed to preserve that trust while leaving room for innovation. KPMG Hungary recently held a business discussion on the details with Hungarian and international experts and relevant professionals from its client base.
It is safe to predict that artificial intelligence (AI) will significantly transform the operations of financial institutions worldwide. Its various forms – such as deep learning, machine automation and generative AI – offer opportunities in risk analysis, lending, customer service, cybersecurity and transaction processing that would be a shame to miss. At the same time, the technology also carries risks that call for appropriate regulation and responsible use.
Christoph Anders, a manager at KPMG Germany, said that AI can already help at several points in the loan application process. In the initial stage, contact with the applicant can be handled by language models refined through machine learning, complemented by simple robotic process automation for documents that exist only on paper or as scans. Checking the submitted data against creditworthiness criteria can also be automated, and over time deep learning can extract lessons from many similar cases, speeding up future applications. Customer communication can likewise be automated at many points, from filling information gaps to tracking the tasks remaining before disbursement.
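As an illustration only, the automated rule-based portion of such a creditworthiness check might be sketched as follows. All field names, thresholds, and the referral logic here are hypothetical examples for the sake of the sketch, not KPMG's or any bank's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    monthly_income: float      # applicant's verified monthly net income
    monthly_debt: float        # existing monthly debt obligations
    requested_payment: float   # estimated monthly payment of the new loan
    documents_complete: bool   # set earlier by the document-processing step

def screen_application(app: LoanApplication) -> tuple[bool, list[str]]:
    """Return (passes_automated_check, reasons_for_referral).

    Applications that fail any rule are referred to a human analyst
    rather than rejected outright.
    """
    reasons: list[str] = []
    if not app.documents_complete:
        reasons.append("missing documents")
    # Illustrative debt-service-to-income (DSTI) cap of 50%
    dsti = (app.monthly_debt + app.requested_payment) / app.monthly_income
    if dsti > 0.5:
        reasons.append(f"DSTI {dsti:.0%} exceeds 50% cap")
    return (not reasons, reasons)

ok, why = screen_application(
    LoanApplication(monthly_income=4000, monthly_debt=500,
                    requested_payment=900, documents_complete=True))
print(ok, why)  # this applicant's DSTI is 35%, so the check passes
```

In a real deployment, a rule layer like this would typically sit in front of a learned scoring model, so that hard regulatory constraints stay transparent and auditable while the model handles the gray zone.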
Based on how far each organization has come in adopting these tools, enterprises in the financial sector fall into three categories. The first group consists of "reluctant" organizations that remain skeptical about the technology and lack both concrete use cases and a strategy for using AI safely. At the other end, "early adopter" organizations already run a working technology platform, continuously develop their AI strategy and governance structure, and ensure that generative AI is used securely, for example by keeping data in-house. The middle tier is made up of "norm-following" organizations, where AI adoption has taken off but often remains unregulated, and the lack of security guardrails poses challenges. To use AI effectively, these organizations still need to build out a clear technology platform, although they typically already use secure generative AI applications, at least in financial planning.