With the rapid adoption of artificial intelligence across all industries, there’s a new frontier of assurance out there, and it presents an unprecedented opportunity for auditors to evolve into strategic leaders.

According to the IBM Global AI Adoption Index 2023, 42 per cent of businesses have deployed AI in their operations, and another 40 per cent are exploring or experimenting with it. This widespread integration is creating demand for professionals who combine human insight with technical fluency in governance, risk and compliance controls. As adoption accelerates, so does the demand for skilled AI auditors. No longer a discretionary specialty, AI auditing is quickly becoming a core competency for a wide range of auditors.

To address this, MacEwan University’s School of Continuing Education is partnering with the Information Systems Audit and Control Association (ISACA) on a preparatory course for writing the Certified Information Systems Auditor exam, DGDR 0201 Certified Information Systems Auditor (CISA): Exam Prep.

“As always, the technology is moving faster than other areas, for example, legal, financial and social. The quantum of technological change that we will see in the next five years will be more than the quantum of change that we saw in the last 50 years,” says course instructor Sharan Khurana. “To keep pace with the changing technological landscape, auditors must keep upskilling.”

What is AI auditing?

An AI audit is the independent and systematic evaluation of an AI system to determine its compliance with security, legal and ethical standards. It assesses the system’s data inputs, algorithmic processing and outputs to identify and mitigate risks such as bias, privacy violations or performance failures. The ultimate goal of an AI audit is to provide human-centric assurance to stakeholders that an organization’s AI usage is safe, ethical and effective.

AI auditing methodology: A pathway to responsible innovation

While AI models present new risk factors, a structured methodology transforms anxiety into agency, creating a clear roadmap for auditors to navigate this landscape with confidence and enable responsible innovation.

This ISACA methodology helps bridge the gap between understanding the risks and taking the necessary steps to ensure the ethical deployment of AI:

  • Audit planning, risk and controls
    This stage identifies the AI systems under review and seeks to understand the purpose, data and potential impact of each. In this stage, an AI risk assessment that identifies and prioritizes potential threats is produced.
  • Deployment and monitoring
    Once a model is deployed, its performance can degrade. Continuous monitoring is therefore required and should include tracking key performance indicators and establishing thresholds that trigger alerts for model retraining or review, as the sketch after this list illustrates.
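
To make the monitoring stage concrete, here is a minimal Python sketch of threshold-based KPI tracking: it averages a model’s accuracy over its most recent batches and raises an alert when the rolling figure falls below a threshold. The 90 per cent threshold, five-batch window and alert wording are assumptions for the example, not values the methodology prescribes.

```python
# Minimal sketch of threshold-based model monitoring (illustrative only).
# Assumes the deployed model's accuracy is measured on each batch of
# labelled production data; the threshold, window and alert wording are
# examples, not prescribed values.

from statistics import mean

ACCURACY_THRESHOLD = 0.90   # illustrative trigger for retraining/review
WINDOW = 5                  # number of recent batches to average over

def check_model_health(recent_accuracies: list[float]) -> None:
    """Alert when the rolling KPI drops below the threshold."""
    rolling = mean(recent_accuracies[-WINDOW:])
    if rolling < ACCURACY_THRESHOLD:
        print(f"ALERT: rolling accuracy {rolling:.1%} is below "
              f"{ACCURACY_THRESHOLD:.0%}; flag the model for retraining or review")
    else:
        print(f"OK: rolling accuracy {rolling:.1%}")

# Example: a model whose performance degrades over successive batches
check_model_health([0.95, 0.94, 0.92, 0.88, 0.85, 0.82])
```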

When an issue arises, Khurana suggests first asking why it occurred. Once you have that answer, ask why that answer came about, and then ask why one more time. “We generally reach the root cause when we get the answer to the third ‘why.’”

Validating models and spotting red flags

An organization’s own assurances are not sufficient evidence of model validation. “The auditor needs trustworthy and valid evidence for their audit report,” Khurana notes. This is where an auditor’s durable skills of critical thinking and problem-solving become just as important as their technical knowledge. If your review of a model validation report raises red flags, observe the validation process or conduct the validation yourself.

Khurana advises watching out for these red flags:

  • The dataset used to train the model is not representative of the population it claims to represent, has inherent defects or was not sanitized before use (a simple representativeness check is sketched after this list).
  • The process to periodically update the data has not been defined.
  • The model was not tested for fairness.
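
The first of these red flags can often be surfaced with a simple check. The minimal Python sketch below compares the share of each demographic group in a hypothetical training set against known population shares and flags any large gap; the group names, shares and five-percentage-point tolerance are assumptions for the example.

```python
# Minimal sketch of a training-data representativeness check (illustrative
# only). The group names, population shares and five-point tolerance are
# assumptions for the example.

from collections import Counter

population_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

# Hypothetical demographic labels attached to the training records
training_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

counts = Counter(training_groups)
total = sum(counts.values())

for group, expected in population_shares.items():
    observed = counts.get(group, 0) / total
    gap = abs(observed - expected)
    status = "RED FLAG" if gap > 0.05 else "ok"
    print(f"{group}: observed {observed:.1%} vs expected {expected:.1%} -> {status}")
```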

“AI does wonders,” says Khurana, “but it should be used very carefully because the results depend on the data the tool was trained on.”

Conducting an algorithmic bias audit

Auditing for algorithmic bias is at the heart of a human-centric approach to AI. Since AI models can inherit biases from their training data, developing critical thinking skills alongside technical ones is crucial for identifying and mitigating unfairness in your work. To ensure an organization has performed a meaningful bias audit, Khurana suggests asking its development team these questions:

  • What process was adopted to test the model’s fairness? Was it followed?
  • Was the algorithm’s training adequate?
  • Was the data complete and accurate?
  • What was the population size? Sample size? Test data size?
  • Was the sample representative of the population?
  • Who designed the test data?
  • Was fairness tested by the development team or the test/quality assurance team?
  • Was the segregation of duties between the development team and the test/quality assurance team maintained?
  • What percentage of the results were fair?

In addition, a bias test should be done on the model test data the team created. “The result of this test will give conclusive evidence whether the model is ‘fair’ or not,” says Khurana.
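
As a hedged illustration of what such a bias test might compute, the Python sketch below measures each group’s selection rate in hypothetical test-data predictions and applies the common four-fifths (80 per cent) rule of thumb for disparate impact. The group names, outcomes and threshold are illustrative assumptions, not values drawn from ISACA guidance.

```python
# Minimal sketch of a demographic-parity check on model test data
# (illustrative only). Group names, predicted outcomes and the 80 per cent
# rule-of-thumb threshold are assumptions, not ISACA-prescribed values.

def selection_rate(predictions: list[int]) -> float:
    """Share of favourable (positive) model outcomes for a group."""
    return sum(predictions) / len(predictions)

# 1 = favourable outcome (e.g., application approved), 0 = unfavourable
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

rates = {group: selection_rate(preds) for group, preds in outcomes_by_group.items()}
ratio = min(rates.values()) / max(rates.values())

print("Selection rates:", {g: f"{r:.0%}" for g, r in rates.items()})
print(f"Disparate impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.80 else 'passes'} the four-fifths rule of thumb)")
```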

Take the Certified Information Systems Auditor Preparatory Course

Now that you have an overview of AI auditing, take a deep dive into it with the Certified Information Systems Auditor Preparatory Course, taught by ISACA instructor Sharan Khurana. In this course, you will do more than just learn about the frameworks — you will build the confidence and competence to lead. This hands-on, inquiry-based program applies the frameworks of AI governance, IT controls, bias testing and practical auditing steps to real-world case studies. It’s an accessible pathway to becoming a future-ready professional in an AI-driven world.