KPMG: How to use AI in credit risk analysis ethically and competitively

By: Trademagazin | Date: 26 March 2025, 10:57

Financial institutions and financial service providers are expected to be among the biggest beneficiaries of the AI revolution, provided they do not sacrifice consumer and user trust on the altar of quick success. The EU's AI regulation, which comes into force gradually through August 2026, is designed to preserve that trust while protecting the capacity to innovate. KPMG Hungary recently held a business discussion on the details with Hungarian and international experts, as well as relevant professionals from its client base.

It takes no great risk to predict that artificial intelligence (AI) will significantly transform the operations of financial institutions worldwide. Various forms of AI, such as deep learning, machine automation and generative AI, hold opportunities in risk analysis, lending, customer service, cybersecurity and transaction processing that would be a shame to miss. At the same time, applying the technology also entails risks that call for appropriate regulation and responsible use.

Christoph Anders, manager at KPMG Germany, said that AI can already help at several points in the loan application process. In the initial stage, contact with the applicant can be handled by language models refined through machine learning, complemented by simple robotic automation to process documents that are only available on paper or as scans. Checking the submitted data against creditworthiness criteria can also be automated, and over time deep learning can draw further lessons from many similar cases, so that future loan applications are processed faster. Customer communication can likewise be automated at many points, for example when requesting missing information or listing the tasks that remain before disbursement.
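The automated creditworthiness check described above can be illustrated with a minimal sketch. The rule names, thresholds, and the `LoanApplication` fields below are illustrative assumptions for the sake of the example, not KPMG's actual criteria; a real system would combine many more rules with learned models.

```python
from dataclasses import dataclass


@dataclass
class LoanApplication:
    """Illustrative applicant data extracted from forms or scanned documents."""
    monthly_income: float
    monthly_debt_payments: float
    requested_amount: float
    documents_complete: bool


def assess(app: LoanApplication, max_dti: float = 0.4):
    """Apply simple, assumed creditworthiness rules.

    Returns a decision string and the list of reasons that triggered it.
    Any failed rule routes the case to a human analyst rather than
    rejecting it outright.
    """
    reasons = []
    if not app.documents_complete:
        reasons.append("missing documents")
    # Debt-to-income ratio; treat zero income as failing the check.
    dti = (app.monthly_debt_payments / app.monthly_income
           if app.monthly_income > 0 else 1.0)
    if dti > max_dti:
        reasons.append(f"debt-to-income ratio {dti:.2f} exceeds {max_dti}")
    decision = "refer to analyst" if reasons else "proceed"
    return decision, reasons
```

For example, an applicant with complete documents and a low debt burden proceeds automatically, while a high debt-to-income ratio is flagged for human review. Over time, the fixed thresholds in such a sketch are what a deep-learning model trained on past cases would replace.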

Based on where each organization stands in the use of these tools, enterprises in the financial sector fall into three categories. The first group comprises "reluctant" organizations that remain skeptical about the technology, lack concrete use cases, and have no strategy for using AI safely. In contrast, "early adopter" organizations already have a working technology platform, continuously develop their AI strategy and governance structure, and ensure that generative AI is used securely, for example by keeping data in-house. The middle tier consists of "norm-following" organizations, where AI adoption has taken off but often remains unregulated, and the lack of security guardrails poses challenges. To use AI effectively, such organizations need to develop a clear technology platform; they typically already use secure generative AI applications, at least in financial planning.
