KPMG: How to use AI in credit risk analysis ethically and competitively
Financial institutions and financial service providers are expected to be among the biggest beneficiaries of the AI revolution, provided they do not sacrifice consumer and user trust on the altar of quick success. The EU's AI regulation, which will come into force gradually by August 2026, is designed to preserve that trust while safeguarding the capacity to innovate. KPMG Hungary recently held a business discussion on the details with Hungarian and international experts, as well as relevant professionals from its client base.
It takes no great risk to predict that artificial intelligence (AI) will significantly transform the operations of financial institutions worldwide. Various forms of AI, such as deep learning, machine automation, and generative AI, hold opportunities in risk analysis, lending, customer service, cybersecurity, and transaction processing that it would be a sin to miss. At the same time, applying the technology also entails risks that call for appropriate regulation and responsible use.
According to Christoph Anders, manager at KPMG Germany, AI can already help at several points in the loan application process. In the initial stage, contact with the applicant can be handled by language models refined through machine learning, complemented by simple robotic automation for processing documents that exist only on paper or as scans. Checking the submitted data against creditworthiness criteria can also be automated, and over time deep learning can draw further lessons from many similar cases, so that future loan applications are processed faster. Customer communication can likewise be automated at many points, both when filling gaps in an application and along the list of tasks leading up to disbursement.
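The article does not describe any concrete implementation, but the automated creditworthiness check mentioned above can be illustrated with a minimal, hypothetical rule-based pre-screen. All names, thresholds, and the zero-interest installment simplification below are illustrative assumptions, not KPMG's or any bank's actual scoring logic:

```python
from dataclasses import dataclass


@dataclass
class LoanApplication:
    """Hypothetical application record extracted from submitted documents."""
    monthly_income: float        # verified net monthly income
    monthly_debt_payments: float # existing loan installments per month
    requested_amount: float      # total loan amount requested
    term_months: int             # requested repayment term


def creditworthiness_check(app: LoanApplication,
                           max_dti: float = 0.40) -> tuple[bool, str]:
    """Rule-based pre-screen on the debt-to-income (DTI) ratio.

    The new loan's installment is approximated as principal / term
    (interest ignored for simplicity); a real system would use the
    bank's own pricing and regulatory limits.
    """
    if app.monthly_income <= 0:
        return False, "no verifiable income"
    new_installment = app.requested_amount / app.term_months
    dti = (app.monthly_debt_payments + new_installment) / app.monthly_income
    if dti > max_dti:
        return False, f"debt-to-income ratio {dti:.0%} exceeds {max_dti:.0%} limit"
    return True, "passed pre-screen"
```

In practice such a deterministic filter would only be the first gate; the deep-learning step Anders describes would sit behind it, re-scoring borderline cases against patterns learned from many earlier applications.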
Based on where each organization stands in using these tools, enterprises in the financial sector fall into three categories. The first group consists of "reluctant" organizations that are still skeptical about the technology, lack concrete use cases, and have no strategy for using AI safely. The middle tier consists of "norm-following" organizations, where AI adoption has taken off but often remains unregulated, and the absence of security guardrails poses challenges; to use AI effectively, they still need to build a clear technology platform, though many already run secure generative AI applications, at least in financial planning. Finally, "pioneering" organizations already have a working technology platform, continuously develop their AI strategy and governance structure, and ensure that generative AI is used securely, for example by keeping data in-house.