There’s no question that AI adoption is a top priority in the financial industry. Yet, while industry investment in AI is high, low deployment rates tell another story. What’s accounting for that lag? And what can finance leaders do to address it?
AI-Informed Financial Modelling, a new online course from MacEwan University’s School of Continuing Education, answers these questions, equipping finance professionals, risk managers, compliance officers and finance IT leads with the tools to move AI beyond experimental applications to real-world deployment that meets governance and regulatory standards.
Nadia Frolova, a financial risk manager who teaches the course, shares her insights on the challenges of AI for financial risk management and how finance leaders can transform AI from a hidden liability to a defensible strategic asset – even if they’re not coders.
The black box problem
In a 2024 report from the Office of the Superintendent of Financial Institutions and the Financial Consumer Agency of Canada, 75 per cent of financial institutions indicated they planned to invest in AI. Yet in PwC’s 2025 Global CEO Survey, only eight per cent of Canadian financial services respondents anticipated significant integration of AI into their core business strategies.
This disconnect is primarily attributed to the black box problem.
Black box AI is a complex system whose internal logic is opaque to users. While the system’s inputs and outputs are clear, how the system arrived at those outputs remains a mystery. This lack of transparency is the number one barrier to regulatory compliance. In a 2025 Institute of International Finance–Ernst & Young report on AI and machine learning use in financial services, the explainability/black box nature of some algorithms was the top issue cited by regulators. “In regulated finance, leaders cannot trust what they cannot explain,” says Frolova.
Advancing transparency
This conflict between opacity and accountability is the biggest hurdle leaders face in making AI for financial risk management a trusted, practical tool. To overcome it, organizations need to move from simply predicting AI outcomes to understanding how a model reached those outcomes and proving that the results are fair, unbiased and accurate. Frolova addresses this in the course by introducing explainable AI (XAI) in finance and AI model risk management.
XAI is a methodology that allows people to understand – and trust – AI’s output. It provides context for a model’s accuracy, fairness and transparency, explaining its logic in a way that makes its results verifiable and accountable. Explainable AI is critical to building organizational confidence.
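To make that concrete, here is a minimal sketch of what XAI output can look like, using the open-source SHAP library (one of the techniques the course names) on a toy credit-default model. The model, feature names and data below are hypothetical, invented purely for illustration; they are not drawn from the course.

```python
# A minimal sketch, assuming the open-source `shap` and scikit-learn
# packages. The model, features and data are hypothetical, invented
# purely to show what an XAI explanation looks like.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "credit_utilization": rng.uniform(0, 1, 500),
    "payment_delinquencies": rng.poisson(0.5, 500),
    "annual_income": rng.normal(70_000, 20_000, 500),
})
# Synthetic default label: risk rises with utilization and delinquencies.
y = ((X["credit_utilization"] + 0.3 * X["payment_delinquencies"]
      - X["annual_income"] / 200_000
      + rng.normal(0, 0.2, 500)) > 0.4).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer decomposes one prediction into per-feature contributions
# (SHAP values) relative to the model's average output, so a reviewer
# can see why this particular applicant scored the way they did.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
for name, value in zip(X.columns, np.ravel(contributions)):
    print(f"{name:>22}: {value:+.3f}")
```

Output like this is what lets a risk manager verify, and challenge, a model’s reasoning without reading its code, which is the practical point of XAI.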
Model risk management entails identifying, gauging and controlling model risk, including algorithmic bias and model drift, where a model’s predictive power fades as economic conditions change. As Frolova cautions, “Ignoring these risks creates significant financial and regulatory exposure, making active continuous monitoring a non-negotiable skill for modern risk professionals.”
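One common way such monitoring is done in practice is a population stability index (PSI) check, which compares a model’s score distribution at development time against its live distribution. The sketch below is illustrative only: the data is synthetic, and the 0.25 alert threshold is a widely used rule of thumb rather than a course or regulatory value.

```python
# A minimal sketch of a population stability index (PSI) drift check.
# PSI is one common monitoring metric; the 0.25 alert threshold is a
# widely used rule of thumb, not a regulatory requirement.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the score distribution at model development time
    (expected) against the live, in-production distribution (actual)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) / division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 5, 10_000)     # scores when the model was built
live_scores = rng.beta(2.5, 4, 10_000)  # scores after conditions shifted

value = psi(dev_scores, live_scores)
print(f"PSI = {value:.3f} -> {'investigate' if value > 0.25 else 'stable'}")
```

A check like this would typically run on a schedule, with its results logged for review, the kind of active, continuous monitoring Frolova describes as non-negotiable.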
In fact, model risk management in AI is the primary takeaway from the course. Frolova wants to give students the confidence to transition from static to dynamic risk management, so they are able to understand and implement an adaptive AI framework that automates processes with full transparency and ethical oversight. “It’s about becoming the indispensable leader who bridges the gap between data scientists and business strategy, ensuring AI is used responsibly and effectively across the enterprise.”
A human-enabled governance tool
Ultimately, a comprehensive governance framework will help users crack the black box mystery to meet regulatory requirements, even when they lack technical skills. For example, the Three Lines of Defence model reclaims human agency by enforcing accountability without requiring coding skills, says Frolova.
The three lines of defence are:
- First: Management is directly responsible for owning and managing the risks associated with day-to-day operations.
- Second: Risk management and compliance teams oversee emerging risks in a business’s daily operations. This line monitors the first line, setting policies, defining risk tolerances and ensuring they are met.
- Third: Internal auditors provide objective, independent assurance – usually to senior management, boards of directors and external auditors – that risk management and control processes are being appropriately managed by the first and second lines.
By mandating “human-in-the-loop” validation like this and using XAI outputs, organizations can effectively challenge model decisions, Frolova points out. “This ensures AI remains a tool subject to human oversight, allowing managers to confidently meet regulatory demands for transparency and fairness without personally writing the algorithms.”
New course: AI-Informed Financial Modelling
This seven-hour advanced online course explores the transformative impact of AI on financial risk management. You will learn how to use AI and machine learning techniques to enhance core risk management processes, analyze the inherent risks of AI models and identify the governance frameworks, ethical principles and organizational strategies required to responsibly deploy AI in the financial sector.
Through concrete use cases, practical frameworks for the entire model lifecycle and tangible XAI techniques like LIME and SHAP, you’ll discover how to transform the black box of AI into a transparent, auditable asset that satisfies both regulators and business stakeholders.
Register now for AI-Informed Financial Modelling.
Your first step to a professional development certificate
This AI-Informed Financial Modelling course is part of two professional development certificates: AI for Innovation Management and AI for Business Operations and Risk Management. Start with this course and begin the path to earning a PD certificate!