KPMG: How to use AI in credit risk analysis ethically and competitively
Financial institutions and financial service providers are expected to be among the biggest beneficiaries of the AI revolution, provided they do not sacrifice consumer and user trust on the altar of quick wins. The EU's AI regulation, which will come into force in stages by August 2026, is designed to preserve that trust while keeping room for innovation. KPMG Hungary recently held a business discussion about the details with Hungarian and international experts, as well as relevant professionals from its client base.
It is not a bold prediction that artificial intelligence (AI) will significantly transform the operations of financial institutions worldwide. Various forms of AI – such as deep learning, machine automation or generative AI – offer opportunities in risk analysis, lending, customer service, cybersecurity and transaction processing that would be a mistake to miss. At the same time, applying the technology also carries risks that call for appropriate regulation and responsible use.
According to Christoph Anders, a manager at KPMG Germany, AI can already assist at several points in the loan application process. In the initial stage, contact with the applicant can be handled by language models refined through machine learning, complemented by simple robotic process automation that digitizes documents available only on paper or as scans. Checking the submitted data against creditworthiness criteria can also be automated, and over time deep learning can extract further lessons from many similar cases, speeding up future applications. Customer communication can likewise be automated at many points, from requesting missing information to tracking the tasks remaining before disbursement.
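The automated creditworthiness check described above can be sketched as a simple rule-based pre-screening step. The following is a minimal, hypothetical illustration, not any bank's or KPMG's actual method: the field names, the debt-service-to-income (DSTI) ratio and the 50% threshold are illustrative assumptions, and real systems would combine many more criteria with human review.

```python
from dataclasses import dataclass, field

@dataclass
class LoanApplication:
    monthly_income: float           # verified net monthly income
    monthly_debt: float             # existing monthly debt service
    requested_payment: float        # estimated payment on the new loan
    missing_documents: list = field(default_factory=list)

def screen_application(app: LoanApplication, max_dsti: float = 0.5) -> dict:
    """Rule-based pre-screening of a loan application.

    Computes a debt-service-to-income ratio and returns a decision:
    'refer' (documents missing, needs human follow-up), 'pass'
    (ratio within the illustrative threshold) or 'decline'.
    """
    dsti = (app.monthly_debt + app.requested_payment) / app.monthly_income
    if app.missing_documents:
        decision = "refer"
    elif dsti <= max_dsti:
        decision = "pass"
    else:
        decision = "decline"
    return {"dsti": dsti, "decision": decision,
            "missing": list(app.missing_documents)}
```

In practice, such deterministic rules would form only the first filter; the deep-learning component mentioned above would then refine thresholds and flag borderline cases based on outcomes of past applications.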
Based on how far each organization has come in adopting these tools, enterprises in the financial sector fall into three categories. The first group comprises "reluctant" organizations that remain skeptical about the technology and lack both concrete use cases and a strategy for using AI safely. At the other end, "frontrunner" organizations already have a working technology platform, continuously develop their AI strategy and governance structure, and ensure that generative AI is used securely, for example by keeping data in-house. The middle tier consists of "follower" organizations, where AI adoption has taken off but often remains unregulated, and the lack of security guardrails poses challenges; to use AI effectively, they still need to build out a clear technology platform, although many already run secure generative AI applications, at least in financial planning.