Data crucial for financial companies using AI
NEW YORK — The use of AI covers almost all areas of finance. From customer experience applications in banking to fraud detection in credit, financial organizations rely heavily on AI tools.
An important aspect of AI in finance is making the technology explainable. Organizations must ensure their models are unbiased and can be easily explained, given the compliance and regulatory oversight of AI and machine learning technology.
During the Ai4 Financial Summit 2022 on March 1, H2O.ai CEO Sri Ambati spoke to attendees about explainable AI, AI governance and data governance.
H2O.ai is an AI cloud provider that develops an open-source machine learning platform for businesses. Recently, the vendor has introduced a new deep learning tool.
In this Q&A with TechTarget at the conference, Ambati digs deeper into some of these topics, focusing on the impact of AI and data in finance.
You said “every company should be an AI company” and “every company should be a fintech company.” Why is that?
Sri Ambati: Every business should be an AI business because AI is transforming all software. The real superpower of being an AI company is creating new data. For example, Amazon sells books, and the new data it created was book reviews. Its reviews were better than anyone else's, so it could predict the future better. As an AI company, you can create new data that no one else has, which gives you first entry into your market.
Now the second piece: everyone has to be a fintech company. Many of my non-banking clients work in areas such as fast-moving consumer goods and telecommunications. They have data about their customers and consumer behavior. With “buy now, pay later” and all these new ways of banking, digital and even crypto, people have in effect deconstructed traditional banks. These retailers and telecom operators have a better signal for predicting who is creditworthy, because their predictions draw on purchase behavior that does not necessarily pass through the traditional banking system. That is why this disintermediation is happening: anyone with data could essentially become a banking company.
What trends do you notice on the use of AI in finance?
Ambati: The biggest thing we see is that AI is no longer just nice to have; it has become a must-have. Increasingly, our more mature customers treat AI as a service.
You can learn from how the data changes, and as the data drifts, you have to rebuild the models. Traditionally, you would promote models every six or 12 months. But with COVID, the economy has experienced more frequent events…than before. That means stress testing the models more frequently and updating them just as frequently. Stress testing and validation of AI models thus become more important.
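The drift-and-retrain loop described here can be sketched in a few lines. This is a minimal illustration, not H2O.ai's product: the two-sample Kolmogorov-Smirnov test and the 0.01 significance threshold are assumptions chosen for the example.

```python
# Minimal data-drift check: compare a feature's training-time distribution
# against recent production data with a two-sample Kolmogorov-Smirnov test.
# (Illustrative only; the test choice and alpha threshold are assumptions.)
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha=0.01):
    """Flag drift when the samples are unlikely to share one distribution."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature
shifted = rng.normal(loc=0.5, scale=1.0, size=5000)   # a COVID-style shift
same = rng.normal(loc=0.0, scale=1.0, size=5000)      # no real change

print(has_drifted(baseline, shifted))  # → True
print(has_drifted(baseline, same))
```

A drift flag like this would then trigger the stress testing and model rebuild Ambati describes, rather than waiting for a fixed six- or 12-month cycle.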
How will explainable AI affect the financial industry in the years to come?
Ambati: Finance is a highly regulated space. If the data is biased, the models are also biased. Therefore, we have to stress test these models, find their blind spots, and understand and deconstruct deep learning models or other hard-to-interpret models. Explainable models (a kind of post hoc explanation of models) can also be dangerous, because then you are trying to create a way to avoid classic mismatches [involving race or gender].
One of the biggest dangers we face in the financial industry today is that many people who have deep knowledge of the field are on the verge of retirement or leaving the industry itself.
This means that much institutional knowledge on good lending practices could be lost.
So we can essentially capture that learning, building a new generation of domain experts to replace those who are leaving the industry, and encoding into models and features how they would solve these problems.
On the compliance side of the house, we're seeing regulators who are more open to using machine learning and AI models than before, and that's a positive sign. A concerted set of best practices is emerging across Discover Card, Wells Fargo, Capital One and Citi, which will also begin to be reusable by other banks.
One of the things we've been working on is model validation methodologies that let you repeat benchmark tests or automatically validate underwriting models, the models used to make lending decisions.
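An automated validation gate of the kind mentioned above might look like the sketch below. This is a hedged illustration under assumed thresholds, not H2O.ai's actual methodology: a challenger underwriting model is promoted only if it beats the incumbent champion on a fixed holdout benchmark.

```python
# Assumed promotion workflow (illustrative, on synthetic data): only promote
# a challenger model if it beats the champion's AUC on a held-out benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_tr, X_hold, y_tr, y_hold = train_test_split(
    X, y, test_size=0.5, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
challenger = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

def holdout_auc(model):
    """Score a model on the fixed holdout benchmark."""
    return roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])

# The 0.005 improvement margin is an assumption for the example.
promote = holdout_auc(challenger) >= holdout_auc(champion) + 0.005
print("promote challenger:", promote)
```

Because the benchmark split is fixed, the test is repeatable: every candidate model is scored against the same holdout, which is the property that makes the validation automatable.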
How do explainable AI and governance intersect?
Ambati: When models are wrong, interpretable models make it easier to debug them and figure out where and how they went wrong. Companies need to build a very rich repertoire for understanding causation, not just correlation.
With AI, you can trace [the bias] down to the actual feature or calculation that led the model astray, so you can try to fix it. But of course the problem with AI is that you can apply it very quickly to a billion people, and that can lead to very strange results. This is why governance and restraints are important safeguards to ensure models don't go off a cliff.
How do you control the models? First, you could put a lot of rules in place, right? But of course, rules are brittle. You control models by using models: use AI to govern AI.
Editor’s note: This interview has been edited for clarity and conciseness.