AI Is the New Electricity, But Learning Is the Real Enabler
Driving past Stonehenge after Storm Goretti had swept through Cornwall, I was listening to the radio. The hedgerows were battered and the sky still unsettled, yet those ancient stones stood exactly where they always have: quiet, enduring and unconcerned with the weather.
It was in that moment that technology correspondent Rory Cellan-Jones made an observation that stopped me mid-thought: Saying your company “uses AI” is like saying it “uses electricity”.
Surrounded by something that has lasted thousands of years, I felt the comment strike a chord, because it perfectly described what I see every day working with organisations trying to make AI deliver real value: AI isn’t the enabler – learning is.
From AI Adoption to Organisational Learning
As AI becomes embedded across platforms, products, and processes, the conversation has to move on.
The question is no longer: “How do we deploy AI?”
It is now: “What do our people need to learn for AI to make a difference?”
When electricity transformed industry, organisations didn’t just install cables. They learned new ways of working, new safety standards, new roles and entirely new mental models of productivity – and then they created new products!
AI demands the same, and at speed.
Learn more about the essentials of AI for business in my recent AI Operating Models Insight.
Learning Is the Real Constraint on AI Value
In practice, most AI initiatives don’t stall because of technology. They stall because learning lags behind deployment.
People struggle to answer:
- What is this AI actually for in my role?
- When should I trust it and when should I challenge it?
- How does my judgement still matter?
- What do I do differently tomorrow?
The success of any technology initiative hinges on whether people actually adopt the new tool. If those questions aren’t answered, AI remains theoretical, and value stays locked in proofs of concept and backlogs.
This is why learning can no longer be treated as a phase, a programme, or a nice-to-have. It must become continuous, contextual, and embedded in work.
Learning Starts with Inclusion, Not Expertise
One question I often hear is: “How do we help older or less technical people with AI?”
That question isn’t really about age, but about how learning is designed.
If learning assumes technical fluency, confidence or a willingness to experiment publicly, adoption will stall. But if learning is human-centred, grounded in real work and real decisions, then AI becomes accessible to everyone.
In fact, AI can leverage the experience and deep domain knowledge of those ‘older’ colleagues, benefitting the whole organisation.
Learning Is the Engine of Business Change
“AI systems learn continuously, and organisations must do the same.”
At Afiniti, we see learning not as an afterthought but as the core mechanism of the business change needed to embed AI.
Our human-centred AI transformation approach treats learning as part of the system itself:
- learning embedded in workflows, not classrooms
- learning as a mindset, not a course
- learning focused on decisions, not tools
- learning that builds confidence and trust
- learning that evolves alongside the AI
AI systems learn continuously, and organisations must do the same.
Making Learning Practical Moves AI from Pilots to BAU
Electricity scaled because of standards, safety frameworks and shared understanding. Learning needs the same discipline.
To move AI from pilots into business-as-usual, organisations need learning frameworks, not just tools, training or communications. These frameworks provide clarity, confidence and consistency. Without them, learning stays informal and fragmented, adoption varies by team and AI value inevitably plateaus. Here’s what those frameworks must do and how leaders can make them real.
1. Clarify Decision Ownership
Why it matters:
AI often blurs accountability. When something goes wrong, people ask: Was it the model? The data? The human? If ownership is unclear, trust erodes and people either over-rely on AI or ignore it completely.
How to make it practical:
- Explicitly define which decisions are AI-informed, AI-recommended or human-owned
- Document who has the authority to accept, override or escalate AI recommendations
- Make this part of onboarding and role expectations, not buried in governance documents
Learning here is about helping people understand where responsibility sits so confidence replaces caution.
2. Explain How AI Recommendations Should Be Used
Why it matters:
Most AI systems fail at the moment of interpretation, not prediction. People don’t know:
- how much weight to give a recommendation
- when context should override it
- what signals should trigger caution
- how the recommendation has been validated
Without guidance, people either defer blindly or disregard insights altogether. This is nothing new – think risk-based monitoring.
How to make it practical:
- Provide decision playbooks with real examples (“If X, then Y; if not, consider Z”)
- Embed prompts or cues in workflows that guide interpretation
- Coach managers to reinforce judgement-plus-AI, not judgement versus AI
This turns AI from a “black box” into a decision partner.
3. Create Safe Feedback Loops
Why it matters:
AI improves through feedback; it needs to learn too! But people often hesitate to challenge or correct it, especially in regulated or performance-driven environments. If feedback feels risky, learning stops.
How to make it practical:
- Make it psychologically safe to question AI recommendations
- Capture feedback in simple, low-friction ways (not long forms or tickets)
- Regularly share examples where human insight improved outcomes
This signals that learning goes both ways: humans learn from AI, and AI learns from humans.
4. Support Responsible and Explainable Use
Why it matters:
Trust is fragile. Without understanding why an AI made a recommendation, people disengage or escalate everything. Therefore, responsible use is a learning issue as much as it is a compliance issue.
How to make it practical:
- Tailor explanations to the audience (frontline, manager, executive, risk)
- Train people on what explanations mean, not just how to access them
- Embed ethical and responsible-use scenarios into learning, not policy alone
When people understand the “why,” they’re far more likely to use new tools effectively – as this case study demonstrates.
5. Evolve as the Organisation Evolves
Why it matters:
AI systems change. Markets change. Teams change. Learning frameworks that stay static quickly become irrelevant. If learning doesn’t evolve, confidence erodes and adoption regresses.
How to make it practical:
- Treat learning content as a living asset, reviewed alongside AI performance
- Update learning when models, workflows or metrics change
- Create ownership for learning evolution, not just learning delivery
This ensures learning keeps pace with reality, not last quarter’s assumptions.
What This Means for Leaders
For leaders responsible for learning, transformation and performance, AI fundamentally changes the job. Learning is no longer about preparing people for the future but about enabling performance today, in an AI-enabled world. This requires a shift in mindset, and the organisations that succeed will be those that:
Treat learning as infrastructure
Learning becomes as critical and as reliable as the technology itself. It’s built in, maintained and continuously improved.
Design learning around real decisions
Learning starts with “What decisions matter most?”, not “What do people need to know about AI?”
Make inclusion a performance requirement
If AI only works for a subset of people, it won’t scale. Learning must be designed for diverse confidence levels, experience and roles.
Help people grow alongside AI, not sit in its shadow
The goal is not compliance or automation. It’s confidence, capability and better judgement, supported by AI.
AI can be deployed quickly, but learning takes intention. In an AI-enabled organisation, learning isn’t a support function; it is the mechanism through which value is created, sustained and scaled.
Forming a Learning Culture in Practice
One global life sciences company’s award-winning leadership programme offers a practical example of learning as infrastructure in action. Facing major strategic change, the company focused not on tools or technology, but on equipping leaders to understand why change was happening, how their judgement still mattered and what needed to change in everyday decisions.
Learning was embedded into work through reflection, immersive leadership experiences, reusable toolkits and strong feedback loops, building confidence and consistency across highly technical, global teams.
Crucially, this people-centred learning culture did more than support transformation at the time: it laid the foundations for the organisation to accelerate its AI initiatives in 2026, with value already created through a learning culture that helps people think, decide, and perform differently.
Read the full case study to see how learning design created lasting impact and readiness for change.
The Bottom Line
Driving past Stonehenge that day, I saw the contrast clearly: technologies come and go, but what endures is the people at the heart of it all, and how they learn.
If saying “we use AI” is now as unremarkable as saying “we use electricity”, then learning becomes the real foundation of meaningful, lasting business change.
AI creates capability, but it’s learning that creates value.
And the organisations that learn fastest, and most inclusively, won’t experience AI as a storm to be weathered, but as an opportunity for growth and high performance.
Get in touch
Contact the team today to discuss your AI, digital and business change goals, and how Afiniti’s expertise can help you realise them and maximise value.
To get the latest change tips, advice and guidance directly to your inbox, sign up to our monthly Business Change Digest.