June 20, 2024 | InBrief

How governance allows banks to realize the value of AI while mitigating risks

Banks must prioritize their investment in AI governance to take advantage of its full value—while minimizing exposure to emerging risks.

Artificial intelligence (AI) adoption is accelerating at an exceptional rate. Banks leading the adoption of AI have implemented use cases across business and operational functions to realize myriad benefits, ranging from enhanced data monetization and customer experiences to improved workforce productivity.

They’ve begun to look beyond traditional, narrowly defined use cases, evaluating entire value chains to uncover new opportunities for AI to optimize existing processes and create entirely new ones. Further down-market, banks that have not developed their own AI may still benefit, as leading service providers are actively embedding AI into software products and services.

While AI’s growing footprint brings great promise, it also introduces new risks and concerns that banks will be forced to contend with:

  • AI models must be transparent and explainable so that leaders and risk and compliance teams clearly understand how algorithms and data are used to generate results.
  • Decision-making processes must be fair and unbiased, requiring proactive measures throughout the AI lifecycle, including controls within training and deployment environments to identify and mitigate algorithmic bias (a minimal sketch of one such control follows this list).
  • Data should be secured from unauthorized access and exfiltration, and end users deserve to understand when and how their data is used.
  • Third-party risk management will need to be augmented to assess AI risks introduced by vendor products and services. 
  • Risk management frameworks will need to be reoriented to accommodate new processes, controls, and operating models.
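
To make the bias-control point concrete, here is a minimal, hypothetical sketch of a training-environment check a bank might run before a model advances through the lifecycle. The metric (a demographic parity gap) and the 0.10 threshold are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch only: a simple fairness control that flags potential
# algorithmic bias before a model is promoted. Metric and threshold are
# assumptions chosen for illustration.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates across groups."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred == 1 else 0)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

MAX_PARITY_GAP = 0.10  # hypothetical threshold set by the bank's risk appetite

def bias_control(predictions, groups):
    """Raise if the fairness control fails; return the gap as control evidence."""
    gap = demographic_parity_gap(predictions, groups)
    if gap > MAX_PARITY_GAP:
        raise ValueError(f"Bias control failed: parity gap {gap:.2f} > {MAX_PARITY_GAP}")
    return gap

# Example: loan-approval predictions for two applicant groups pass the check
print(bias_control([1, 0, 1, 0], ["A", "A", "B", "B"]))  # -> 0.0
```

A check like this can serve double duty: it blocks a biased model from progressing and produces an auditable metric that evidences the control was executed.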

To effectively address these concerns, banks must prioritize their investment in building robust AI governance capabilities. Regardless of the bank’s AI maturity, governance should be able to support future AI ambitions and complement the existing risk management framework.


New AI risks call for new controls

The first stage of building AI governance capabilities is understanding the new risks AI presents, then developing controls to mitigate them. The levels and types of risk introduced by AI will vary based on factors such as whether the AI is developed internally or by a vendor, supports revenue-generating or back-office functions, and uplifts an existing process or creates an entirely new one.

Once risks are identified, the control framework should be evaluated against those risks and against industry frameworks (e.g., the NIST AI Risk Management Framework) to determine whether to augment existing controls or create new ones. Controls should align with the bank’s technology platforms and tools and provide adequate methods both to implement the controls and to evidence compliance.
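
As a deliberately simplified illustration of this gap analysis, the sketch below represents identified risks and an existing control library as sets and surfaces any risk with no mitigating control. All risk and control names are hypothetical.

```python
# Minimal sketch, assuming a simple inventory: map identified AI risks to the
# controls that mitigate them, then surface uncovered risks.

identified_risks = {
    "model_explainability",
    "algorithmic_bias",
    "data_exfiltration",
    "vendor_embedded_ai",
}

# Existing control library and the risks each control mitigates
control_library = {
    "model_documentation_standard": {"model_explainability"},
    "pre_deployment_bias_scan": {"algorithmic_bias"},
    "data_loss_prevention_policy": {"data_exfiltration"},
}

covered = set().union(*control_library.values())
gaps = identified_risks - covered
print("Risks requiring new or augmented controls:", gaps)
# -> {'vendor_embedded_ai'}: e.g., extend third-party risk assessments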

For new controls with significant technology and process dependencies, it may be prudent to create a control maturity roadmap that enhances controls incrementally as dependencies are resolved.

Dedicate focus and oversight to potential AI risks

AI adoption will necessitate more focus from existing risk oversight committees on how risks are controlled, managed, and escalated across the bank. Responsibilities across business lines, lines of defense, and oversight committees should be clearly defined and well understood.

Initially, existing risk committees can expand their focus to include AI-specific topics across data, technology, and estimation risks. As adoption increases, there may be a need to spin out new committees dedicated to AI development and risk management. In either case, the oversight structure should reflect the bank’s AI maturity and effectively support coordinated risk management of AI across the bank, with engagement and representation across the three lines of defense and from all business lines.

Develop an architecture that enables AI security

Technology architecture should be a key enabler in both enforcing and observing compliance with firmwide controls and standards. Done correctly, software applications and tools will allow AI developers to integrate controls during the model development process with little friction or ambiguity. The most impactful architectural priorities include centralization, integration, and standardization. Applications used to develop and release AI models should be tightly integrated horizontally to enable seamless progression of AI use cases from experimentation to model training and business serving.

Additionally, integrations beyond the critical path of development, such as technology inventories, data catalogs, vendor monitoring systems, and model risk management systems, will allow for better alignment with downstream systems. Centralizing development applications and processes will help reinforce standardized processes and protocols. Where appropriate, controls should be embedded directly within the development tools themselves, allowing developers to comply with control requirements automatically when they onboard to the tools.
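
To make the idea of embedded controls concrete, here is a minimal, hypothetical sketch of a release gate a deployment pipeline might run automatically, so that developers comply simply by using the tool. The required metadata fields, fairness threshold, and function name are illustrative assumptions, not any specific bank’s implementation.

```python
# Hypothetical sketch of a control embedded in development tooling: a release
# gate run automatically by the deployment pipeline. Fields are assumptions.

REQUIRED_METADATA = {"model_owner", "intended_use", "training_data_source"}

def release_gate(metadata: dict, bias_gap: float, max_gap: float = 0.10) -> list:
    """Return control failures; an empty list means the release may proceed."""
    failures = []
    missing = REQUIRED_METADATA - metadata.keys()
    if missing:
        failures.append(f"missing inventory metadata: {sorted(missing)}")
    if bias_gap > max_gap:
        failures.append(f"fairness gap {bias_gap:.2f} exceeds limit {max_gap}")
    return failures

# Example: complete metadata and an acceptable fairness gap pass the gate
result = release_gate(
    {"model_owner": "credit-risk-team",
     "intended_use": "loan pre-screening",
     "training_data_source": "core-banking-2023"},
    bias_gap=0.04,
)
print("release blocked:" if result else "release approved", result)
```

Because the gate runs inside the pipeline, its output doubles as compliance evidence: every release attempt leaves a record of which controls were checked and whether they passed.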

Conclusion

In the not-so-distant future, banks will need to manage their risk exposure to AI, whether from models they develop themselves or from AI embedded in vendor services. Building AI governance capabilities that enable banks to identify, control, and monitor these risks will allow them to capture the value AI promises while successfully navigating its challenges.