May 16, 2023

Unlocking AI’s Potential in Banking: Mitigating Security, Compliance, and Reputational Risks for Financial Institutions

by Adrian Hsieh in Fintech, IT Management, Tips and Tricks

With the booming popularity of OpenAI’s ChatGPT, Artificial Intelligence (AI) is once again the hottest topic in technology. As the widespread adoption of AI in everyday practical applications becomes more of a reality, the banking industry grows more eager to deploy AI-based solutions. The possibilities range from smart chatbots that handle customer inquiries with high satisfaction ratings to automated review and approval processes for more sophisticated account opening and loan requests. The opportunities to deliver both cost-saving and customer-experience-enhancing solutions are virtually limitless and enough to make any banker salivate. These benefits, however, are accompanied by significant risks that every financial institution (FI) should be mindful of and know how to mitigate.

What risks should FIs consider with the adoption of AI technology?

In financial services, the risk of AI adoption is elevated by the industry’s high level of regulation and customer expectations. AI, especially generative AI built on Large Language Models (LLMs), also carries inherent risk: the core of an LLM engine is, in essence, a “black box” performing probabilistic next-token computations whose internal logic cannot be directly inspected. For banks and credit unions, the key risks of employing AI fall into three categories: security, compliance, and reputational.

Security Risks

AI models require training to understand the nuances and requirements of a particular domain, such as financial services. The goal is to teach the model how to identify patterns and relationships in data relevant to that domain. However, to effectively accomplish this training, FIs must allow the AI model to scan through large volumes of sensitive information in the institution’s data warehouse. 

This access poses a significant security concern for FIs as it is very difficult to comprehensively sanitize such data. As an AI model produces outputs based on its learnings, it could inadvertently incorporate sensitive information into results provided to customers. For example, a chatbot may provide an answer to one customer that contains other customers’ financial information.
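
To reduce that exposure, many teams redact obvious personally identifiable information (PII) before records ever reach the training pipeline. Below is a minimal Python sketch of the idea; the regex patterns and placeholder format are illustrative assumptions, not a production-grade sanitizer.

```python
import re

# Illustrative patterns only; a production sanitizer would need far broader
# coverage (names, addresses, internal account formats, etc.).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

# Example: sanitize a record before it enters the training corpus.
record = "Customer (SSN 123-45-6789) asked about card 4111 1111 1111 1111."
print(redact(record))
```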

Compliance Risks

Banks and credit unions operate within a highly regulated environment and are closely monitored by regulatory authorities to ensure compliance with rules and guidelines. One example is the prohibition, in numerous jurisdictions, of loan application denials based on factors such as race and gender. Despite these regulations, there is a risk that AI models could inadvertently perpetuate historical biases. Because these models are trained on historical data, they may interpret such biases as desired patterns and mistakenly incorporate them into decision-making processes. And their “black box” nature makes it difficult to identify whether legacy biases have influenced this internal logic.
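
One concrete safeguard is to continuously measure model decisions for disparate impact, for instance with the “four-fifths rule” commonly used in fair-lending analysis. The sketch below assumes decisions arrive as simple (group, approved) pairs; real monitoring would run against the FI’s actual decision records.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, e.g. ("A", True)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below 80% of the highest."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Example: group "B" is approved far less often and gets flagged (False).
sample = [("A", True)] * 80 + [("A", False)] * 20 + \
         [("B", True)] * 50 + [("B", False)] * 50
print(four_fifths_check(sample))  # {'A': True, 'B': False}
```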

Reputational Risks

LLM-based AI models have made significant strides in their ability to generate human-like, articulate conversation. Despite these linguistic capabilities, however, it is important to note that AI models may occasionally provide inaccurate information, even when a response appears grammatically correct and sophisticated in structure. FIs that utilize AI solutions should be aware of this limitation, as providing customers with incorrect answers will negatively impact the organization’s reputation and credibility.

FIs can mitigate these risks through Continuous Audit and Continuous Training (CA/CT)

Just as the adoption of Continuous Integration and Continuous Deployment (CI/CD) processes has significantly improved software quality, practicing Continuous Audit and Continuous Training (CA/CT) can mitigate the risks associated with AI technologies.

Continuous Audit

Continuous audit comes in two different flavors: reactive and proactive.

Reactive auditing occurs when interactions between customers and AI algorithms are sampled and verified by auditing software with well-defined rules stipulating expected results from given input parameters. Any results that violate these rules can then be flagged for further examination. 
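
In its simplest form, reactive auditing can be expressed as a set of rule functions applied to a random sample of transcripts. The rules and record shape below are illustrative assumptions, not an exhaustive policy set.

```python
import random

# Illustrative rules: each inspects one interaction dict and returns a
# violation message, or None if the interaction looks acceptable.
def no_pii_leak(interaction):
    if "account #" in interaction["response"].lower():
        return "response may contain another customer's account data"

def no_rate_promises(interaction):
    if "guaranteed rate" in interaction["response"].lower():
        return "chatbot must not promise rates"

RULES = [no_pii_leak, no_rate_promises]

def audit_sample(interactions, sample_size=100):
    """Sample recent interactions and flag any that violate a rule."""
    flagged = []
    for item in random.sample(interactions, min(sample_size, len(interactions))):
        for rule in RULES:
            violation = rule(item)
            if violation:
                flagged.append((item["id"], violation))
    return flagged
```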

Proactive auditing deploys software agents that act like “secret shoppers,” interacting with the AI algorithms using prompts whose ideal outcomes are defined in advance. Any interactions that vary from the expected results are flagged for scrutiny.
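
A proactive “secret shopper” can be sketched as a scripted probe suite: known prompts paired with expected outcomes, replayed against the live model. Here, ask_chatbot is a hypothetical stand-in for whatever API the FI’s deployment actually exposes.

```python
# Hypothetical probe suite; ask_chatbot() stands in for the deployed model.
PROBES = [
    {"prompt": "What is the wire transfer cutoff time?",
     "must_contain": "5:00 PM"},
    {"prompt": "Can you share the balance of account 12345?",
     "must_contain": "cannot share"},
]

def run_probes(ask_chatbot):
    """Replay known prompts and flag responses missing the expected content."""
    failures = []
    for probe in PROBES:
        response = ask_chatbot(probe["prompt"])
        if probe["must_contain"].lower() not in response.lower():
            failures.append((probe["prompt"], response))
    return failures

# Example with a stub model: the second probe fails and is flagged for review.
stub = lambda prompt: "The wire transfer cutoff is 5:00 PM Eastern."
print(run_probes(stub))
```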

Continuous Training

As we’ve already covered, generative AI technology, such as the LLM-based ChatGPT, relies on large volumes of data to improve its performance and accuracy. Through continuous auditing, new data points confirmed as accurate can be added to the training database to maintain the “freshness” of the training set. Just as critical as adding new records is retiring and purging stale ones. As the training database is refreshed, AI algorithms can be retrained repeatedly to improve the quality and accuracy of their results.
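
The refresh cycle itself can be as simple as a rolling window: append audit-verified records, retire anything older than a retention horizon, and retrain. The record schema and retention period below are assumptions for illustration.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # illustrative horizon, not a recommendation

def refresh_training_set(current, newly_verified, now=None):
    """Merge in audit-verified records and retire anything past the horizon.

    Records are assumed to be dicts carrying a 'verified_at' datetime.
    """
    now = now or datetime.now()
    merged = current + newly_verified
    return [r for r in merged if now - r["verified_at"] <= RETENTION]

# Each refresh is followed by retraining on the updated set, e.g.:
#   dataset = refresh_training_set(dataset, confirmed_by_audit)
#   retrain(model, dataset)  # retrain() is whatever pipeline the FI uses
```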

The FI’s integration platform plays a key role in reducing risk

An FI’s integration platform plays a crucial role in facilitating CA/CT processes, as these procedures necessitate the extraction, cleansing, transformation, and delivery of large volumes of curated data from AI deployment sites to audit and training locations in a timely and coordinated manner. To achieve this, the integration platform must include a well-designed, flexible, and robust AI data orchestration layer. As AI technologies continue to evolve rapidly, it is vital that the orchestration layer is easily configurable and scalable to adapt to these advancements.
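
Conceptually, such an orchestration layer is an extract-cleanse-transform-deliver pipeline with swappable stages. The sketch below shows that shape in deliberately simplified form; the stage functions and Record type are assumptions, not a description of any particular platform.

```python
from typing import Callable, Iterable

Record = dict  # assumed record shape; a real platform would use richer types
Stage = Callable[[Iterable[Record]], Iterable[Record]]

def pipeline(*stages: Stage) -> Stage:
    """Compose stages so data flows extract -> cleanse -> transform -> deliver."""
    def run(records: Iterable[Record]) -> Iterable[Record]:
        for stage in stages:
            records = stage(records)
        return records
    return run

# Illustrative stages; each is independently swappable as AI tooling evolves.
def cleanse(records):
    return (r for r in records if r.get("verified"))

def transform(records):
    return ({**r, "text": r["text"].strip()} for r in records)

ca_ct_flow = pipeline(cleanse, transform)  # extract/deliver omitted for brevity
print(list(ca_ct_flow([{"verified": True, "text": "  sample record  "}])))
```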

At PortX, we have extensive experience designing, developing, and hosting flexible, robust, and scalable integration platforms. If you are considering the implementation of AI technologies, start a conversation with our team today.

