The conversation around AI in the workplace is shifting. We’re moving from fear that it will take our jobs and render us obsolete to confidence that, when used effectively, AI can work for us, enhancing our efficiency, innovation and decision-making. This evolution is at the heart of the human-centric approach to technology that guides the new AI-aligned programming offered by MacEwan University’s School of Continuing Education (SCE). It’s about moving beyond simply interacting with AI to engaging with it critically.
It starts with a shift in how you think about AI, says Diana Ionescu, P.Eng., instructor of the AI-Enabled Workforce course. Instead of thinking of yourself as a “skilled executor,” reframe your role as an “intelligent orchestrator.” The key is to lean on your competitive advantages: how quickly you learn, how well you direct AI tools to extend your capabilities and how effectively you apply judgment to AI outputs.
Here are three core areas where you can leverage those attributes to work effectively in AI-enhanced environments across various industries.
Data literacy: the foundation of human agency
AI doesn’t think – it processes patterns in data and transforms those patterns into output. You, on the other hand, are capable of creativity, critical thinking and connecting disparate ideas. These traits are all foundational to data literacy, which is where your strategic use of AI starts. “Think of it like learning to read before learning to use a library search system,” says Ionescu. “The search tool is powerful, but useless if you can’t interpret what you find.”
To bolster your team’s data literacy, Ionescu recommends this approach:
Immediate actions (weeks 1–2)
- Audit your team’s data touchpoints, including reports, dashboards and spreadsheets.
- Establish a “data question of the week”: ask your team to explain a metric they regularly use, what it measures and why it matters.
Foundation building (months 1–2)
- Teach the data lifecycle to understand where your data comes from and how it’s collected, cleaned and stored.
- Practice data storytelling by having team members use the framework “Here’s what the data shows → Here’s what it means → Here’s what we should consider doing.”
- Create psychological safety around “I don’t understand”: model asking basic questions about data yourself.
Skill development (ongoing)
- Pair data-confident and data-hesitant team members on projects.
- Analyze actual team challenges with data.
- Focus on critical thinking over technical skills.
Data literacy for AI isn’t some abstract concept. It’s an essential prerequisite for establishing your human agency in a data-driven world. Without it, human-AI collaboration becomes a matter of baseless trust – you can’t identify bias in data you don’t understand. With it, you can assess the quality of inputs, translate business questions into data-answerable formats and critically evaluate any AI recommendation.
Human-AI collaboration: designing the partnership
Human-AI collaboration is a partnership where humans and AI systems leverage their respective strengths. A successful collaboration starts with defining roles and responsibilities based on these strengths. To do this, Ionescu suggests the DECIDE framework:
- Define the decision type: identify which decisions can be owned entirely by humans or by AI, and which are a partnership between the two.
- Establish AI’s role: Generator (create content), Advisor (provide recommendations), Executor (handle routine tasks) or Analyst (identify patterns).
- Clarify human responsibilities: make judgments involving complex ethical decisions, provide nuance, organizational knowledge and priorities, validate AI outputs and manage edge cases AI wasn’t designed for.
- Identify handoff points: pinpoint when work moves from human to AI and when it returns from AI to human review.
- Document accountabilities: determine accountability if AI makes an error, how to resolve disputes between human and AI judgment, and metrics to measure collaboration effectiveness.
- Evaluate and iterate: conduct regular check-ins on what’s working and adjust the division of labour accordingly.
For example, in customer service, AI might handle the role of advisor, while a human maintains responsibility for judgment. The handoff point is a complaint involving legal or safety issues, which immediately goes to a human. The DECIDE framework provides the structure to ensure technology augments, rather than replaces, human expertise.
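The customer-service handoff described above can be sketched in a few lines of code. This is a minimal illustration, not a real system: the `Ticket` class, the keyword list and the role names are all hypothetical, standing in for whatever escalation rules an organization would actually document under DECIDE.

```python
from dataclasses import dataclass

# Hypothetical escalation triggers: legal or safety language goes straight to a human.
ESCALATION_KEYWORDS = {"lawsuit", "legal", "injury", "unsafe"}

@dataclass
class Ticket:
    text: str

def route(ticket: Ticket) -> str:
    """Return who owns the next step: 'human' or 'ai_advisor'."""
    lowered = ticket.text.lower()
    if any(keyword in lowered for keyword in ESCALATION_KEYWORDS):
        return "human"       # handoff point: a person takes over immediately
    return "ai_advisor"      # AI drafts a recommendation; a human still approves it

print(route(Ticket("Where is my package?")))             # ai_advisor
print(route(Ticket("This product caused an injury")))    # human
```

Even in this toy form, the structure mirrors the framework: the AI’s role (Advisor) and the handoff condition are explicit and documented in one place, so accountability for an error is easy to trace.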
AI ethics and governance: mitigating algorithmic bias
Algorithmic bias in AI can feel insurmountable, but it needn’t be. Building a human-centric system requires you to engage with AI critically and ensure it aligns with your values. “The simplest starting question is, ‘If this AI makes a mistake, who is most likely to be harmed?’” suggests Ionescu. “Then monitor specifically for harm to those groups.” From there, she recommends following this process to mitigate bias in AI:
Before adoption
- Ask the vendor what the data was trained on and to see examples.
- If relevant, request demographic performance breakdowns.
- Test with edge cases from your actual context.
During evaluation
- Run the tool on a diverse sample from your own data or scenarios.
- Compare outputs across different demographic groups, regions or categories.
- Identify systematic patterns: does it consistently favour certain groups or types?
- Check for “ghost work”: are underpaid workers labelling data in ways that embed their biases?
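The comparison step above can be made concrete with a small script. This is a sketch under stated assumptions: the record fields (`"group"`, `"approved"`) are illustrative, and a real evaluation would use your own data categories, not demographic labels invented here.

```python
from collections import defaultdict

# Illustrative evaluation sample: the tool's yes/no outputs, tagged by group.
sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Compute the tool's approval rate per group to surface systematic skew."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]
    return {group: approvals[group] / totals[group] for group in totals}

rates = approval_rates(sample)
print({group: round(rate, 2) for group, rate in rates.items()})  # {'A': 0.67, 'B': 0.33}
```

A gap like the one above doesn’t prove bias on its own, but it tells you exactly where to look, which is the point of the evaluation step.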
Implementation
- Establish a bias reporting mechanism that makes it easy to flag concerning outputs.
- Designate a monthly rotating bias monitor who watches for patterns.
- Create an override authority that ensures humans can reject AI decisions when bias is suspected.
Ongoing
- Review random samples of AI decisions through monthly audits.
- Track outcome disparities: are certain groups disproportionately affected?
- Document and investigate outliers: why did AI make unusual recommendations?
- Update the tool or your processes when bias is confirmed.
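The first two ongoing steps, random-sample audits and disparity tracking, lend themselves to a short script. This is a sketch under two stated assumptions: the decision-log format (`"group"`, `"adverse"`) and the 1.25× disparity threshold are both illustrative choices, not prescribed by the course.

```python
import random

# Illustrative decision log: 1 = adverse outcome for the person affected.
decision_log = [
    {"group": "A", "adverse": 1}, {"group": "A", "adverse": 0},
    {"group": "A", "adverse": 0}, {"group": "A", "adverse": 0},
    {"group": "B", "adverse": 1}, {"group": "B", "adverse": 1},
    {"group": "B", "adverse": 0}, {"group": "B", "adverse": 0},
]

def audit_sample(decisions, n=25, seed=None):
    """Draw a random subset of logged decisions for manual monthly review."""
    rng = random.Random(seed)
    return rng.sample(decisions, min(n, len(decisions)))

def flag_disparities(decisions, threshold=1.25):
    """Flag groups whose adverse-outcome rate exceeds the overall rate by threshold x."""
    overall = sum(d["adverse"] for d in decisions) / len(decisions)
    by_group = {}
    for d in decisions:
        by_group.setdefault(d["group"], []).append(d["adverse"])
    return sorted(group for group, outcomes in by_group.items()
                  if sum(outcomes) / len(outcomes) > threshold * overall)

print(flag_disparities(decision_log))  # ['B']
```

A flagged group is a prompt for the “document and investigate outliers” step, not a verdict; the human reviewers drawn by `audit_sample` decide what the pattern actually means.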
Cultural integration
- Include bias discussion in team meetings: normalize naming the problem.
- Reward team members who identify bias issues.
- Connect bias prevention to your organization’s values and commitments.
Master these three competencies and you will be set to evolve and succeed in this new era. As Ionescu says, “The professionals who thrive will be those who most effectively partner with AI while maintaining the irreplaceable human elements of wisdom, ethics and contextual judgment.” This is what the human-centric future of work could look like: one that moves professionals beyond the “awe-anxiety spectrum” and empowers them with the confidence and durable skills needed to lead in today’s workplace.
In this course, you’ll master the skills essential for navigating the AI-powered workplace. Topics include AI technologies and workplace applications, human-AI collaboration, data literacy, AI ethics and governance, AI productivity tools and adapting to AI-driven change in the workplace.
Eligible employers can receive partial reimbursement for professional development training like this course through the Canada-Alberta Productivity Grant.