From Manual to Intelligent

How AI Can Strengthen Operational Risk Governance in U.S. Financial Institutions


The Scalability Problem

In March 2023, the failure of Silicon Valley Bank exposed what practitioners had long understood: operational risk governance failures at individual institutions can cascade into systemic crises. The Federal Reserve’s post-mortem found that SVB had 31 unaddressed supervisory warnings at the time of its failure — triple the average of its peer institutions. The root causes were not exotic. They were failures of basic risk identification, control documentation, and management oversight.

Yet the primary tool for meeting heightened regulatory demands — the Risk and Control Self-Assessment (RCSA) — has not fundamentally evolved in decades. Having conducted RCSAs across equities trading, prime brokerage, foreign exchange, securities research, card fraud operations, and digital banking, I can attest that the process remains labor-intensive, dependent on subjective judgment, and difficult to scale. A typical RCSA for a single business unit at a large institution requires four to eight weeks. An institution with dozens of units faces a multi-year cycle to achieve coverage — by which time early assessments may already be outdated.

This is the structural problem AI is now being deployed to address. The framework presented here is forward-looking: it draws on my cross-institutional experience to propose how AI tools should be integrated into the RCSA lifecycle, building on the principle — well established in the BPM literature — that a structured process orientation is foundational to enterprise AI success in regulated, risk-averse sectors (1).

What the Market Is Already Building

Several enterprise vendors have begun productizing AI for RCSA. IBM’s OpenPages platform, augmented by watsonx generative AI, evaluates control descriptions against structured frameworks (the 5 Ws: who, what, when, where, why), flags incomplete or vague language, and suggests improvements aligned with regulatory standards (2). MetricStream, ServiceNow, and Archer offer adjacent capabilities in their GRC platforms, including AI-assisted risk scoring and control mapping. These products are real, deployed, and improving rapidly.

What they cannot do — and what most practitioners have learned the hard way — is compensate for governance gaps in the underlying process. AI tools that evaluate control descriptions assume those descriptions exist in a usable format. They assume a defined taxonomy. They assume that someone has mapped the process before mapping the controls. Where these foundations are absent, the AI does not fix the gap. It accelerates the production of polished documentation around it.

What Cross-Institutional Practice Reveals

Across my work at Wells Fargo Securities, the United Nations Federal Credit Union, the World Bank, and the Inter-American Development Bank, one pattern has consistently struck me: recurring risks travel across institutions, and we have no systematic way to capture them. A control weakness in trade settlement at the World Bank often mirrors vulnerabilities I have seen in prime brokerage at Wells Fargo. Card dispute patterns at UNFCU rhyme with consumer protection gaps documented in investment banking operations. Yet each RCSA starts largely from scratch, because the cross-institutional pattern recognition that experienced practitioners develop lives only in their heads. AI trained on historical RCSA outputs — with appropriate data governance controls — could surface these patterns and turn individual practitioner experience into transferable institutional knowledge. This is the dimension of AI augmentation least visible in current vendor offerings, and arguably the most valuable.

A Four-Stage Framework

The framework below integrates AI into the RCSA lifecycle while preserving human judgment in the areas where it is most critical. Each stage is anchored in a specific lesson from my own work.

Stage 1 — AI-Assisted Discovery. Before SME interviews, existing documentation (SOPs, prior RCSAs, audit reports, regulatory correspondence) is processed by an LLM to produce a draft process map and risk inventory. At the World Bank, where I built process models from a fragmented documentation base across HR, IT, and Accounting, I would have compressed weeks of pre-interview synthesis into hours had this capability existed. The practitioner reviews the draft critically; it is the starting point, not the finished product.
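The discovery stage can be sketched as a thin pipeline around any LLM. Everything below is an assumption for illustration — the prompt wording, the STEP:/RISK: output convention, and the `fake_llm` stand-in are not references to any specific product or model.

```python
def draft_risk_inventory(documents: dict[str, str], llm) -> tuple[list[str], list[str]]:
    """Synthesize a draft process map and risk inventory from raw documents.

    `llm` is any callable taking a prompt string and returning text;
    the STEP:/RISK: line convention is an assumed output contract.
    """
    corpus = "\n\n".join(f"## {name}\n{text}" for name, text in documents.items())
    prompt = (
        "From the documents below, extract the end-to-end process steps and "
        "candidate operational risks. Emit one item per line, prefixed "
        "'STEP:' or 'RISK:'.\n\n" + corpus
    )
    lines = llm(prompt).splitlines()
    steps = [ln[5:].strip() for ln in lines if ln.startswith("STEP:")]
    risks = [ln[5:].strip() for ln in lines if ln.startswith("RISK:")]
    return steps, risks

# Stand-in model for demonstration; a real deployment would wrap an LLM API.
def fake_llm(prompt: str) -> str:
    return "STEP: Receive trade file\nRISK: Settlement instructions keyed manually"

steps, risks = draft_risk_inventory({"SOP-settlement.txt": "..."}, fake_llm)
print(steps, risks)
```

The output is deliberately a draft: per Stage 1, the practitioner reviews every step and risk against the source documents before any interview begins.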

Stage 2 — Human-Led Assessment. SME interviews remain irreplaceable. The most consequential moments in any RCSA I have led occurred when an interviewee said some version of “that is what the procedure says, but here is what we actually do” — a gap no AI can detect from documentation alone. AI augments the practitioner here only by providing a sharper baseline against which interview findings are tested. Materiality judgment stays human.

Stage 3 — AI-Enhanced Documentation. AI enforces taxonomic consistency, maps controls to regulatory requirements, and cross-references findings against historical RCSA data. In my experience, this is where the most concrete value sits. At Wells Fargo Securities across six business units, taxonomy drift between desks was the single largest barrier to executive comparability. At UNFCU during the $10B threshold expansion, Regulation E and Regulation Z mapping was substantially redundant across processes. Both are textbook AI use cases; both are largely unautomated today.
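One Stage 3 check — flagging taxonomy drift, where different desks record what is likely the same risk under slightly different labels — can be sketched with nothing more than string similarity. The threshold, unit names, and risk labels below are illustrative assumptions, not data from any institution named above.

```python
from difflib import SequenceMatcher
from itertools import combinations

def taxonomy_drift(labels_by_unit: dict[str, list[str]], threshold: float = 0.8):
    """Flag risk labels from different business units that are likely
    the same risk recorded under divergent names."""
    entries = [(unit, label) for unit, labels in labels_by_unit.items()
               for label in labels]
    flags = []
    for (u1, a), (u2, b) in combinations(entries, 2):
        if u1 == u2:
            continue  # drift only matters across units
        score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if threshold <= score < 1.0:  # similar but not identical
            flags.append((u1, a, u2, b, round(score, 2)))
    return flags

rcsa = {
    "Equities":        ["Unauthorized trading", "Trade capture error"],
    "Prime Brokerage": ["Trade-capture errors", "Margin call failure"],
}
for flag in taxonomy_drift(rcsa):
    print(flag)
```

In practice an embedding model would replace the edit-distance ratio, but the governance output is the same: a candidate list for a human to reconcile into one taxonomy, which is what restores executive comparability.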

Stage 4 — Human-Led Governance. Final RCSA documentation must be defended in front of boards, audit committees, and regulators. At Wells Fargo, every RCSA I led passed board review on the first attempt — not because of tooling, but because the documentation reflected genuine institutional understanding. AI can accelerate the work product; it cannot translate complex operational realities into a defensible governance narrative. That responsibility rests with the practitioner, and should.

Implications

The financial institutions that will benefit most from AI-augmented operational risk governance are not those that deploy the most sophisticated tools. They are those that treat process discipline as a prerequisite, not a parallel workstream — and that preserve human accountability at the points where regulators will demand it.

AI in RCSA is real, and it is improving. Whether it strengthens or hides governance gaps depends entirely on the discipline of the institution deploying it.


References

1 Andrew Spanyi, “How a Process Orientation Contributes to Success with AI,” Cognitive World, November 24, 2024. Citation used with the author’s permission.

2 Jesus Olivera, “Automate RCSA and enhance risk management with generative AI,” IBM Think Insights, 2024.


Lino Lorenzon

About the Author

Lino Lorenzon is a Senior Business Process Engineer at Garanti BBVA International in Amsterdam. He has seven years of experience in operational risk governance and business process management at the World Bank Group, the Inter-American Development Bank, Wells Fargo Securities, and the United Nations Federal Credit Union, with thirty-six process governance engagements across systemically important U.S. banks, federally regulated credit unions, multilateral development institutions, and European banking institutions. linkedin.com/in/linolorenzon
