1. Data Foundations: The Hard Truth
Before we talk about scaling AI, it’s worth addressing the foundations: AI success is not a model problem, it’s a data and platform discipline challenge.
The most sophisticated algorithm cannot compensate for fragmented data or poorly architected systems. Banks that treat AI as a purely technical exercise – focused on model sophistication over infrastructure – are setting themselves up for failure. The reality is that industrialising your data foundations matters far more than chasing the latest model innovations.
This has driven widespread industry demand for a Unified Engagement Layer. Without it, customer data remains siloed and AI applications lack the context to deliver meaningful insights. Equally critical is adopting “Data as a Product” principles, with decentralised ownership that follows Data Mesh thinking. This allows teams closest to the data to govern it effectively, rather than relying on a centralised function that cannot scale.
Add to this the regulatory requirements for explainability and auditability, and the bar becomes even higher. AI models in financial services must not only perform well, they must be defensible.
Then there are the technical realities: training-serving skew and real-time inference demand strong architectural controls. A model trained on batch data may behave differently when deployed in production, and without the right guardrails, this gap can undermine trust and performance.
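One common guardrail against training-serving skew is a distribution check comparing the features a model was trained on with the features it sees live. The sketch below uses the population stability index (PSI), a standard drift metric; the data, thresholds, and function name are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def population_stability_index(train_vals, serve_vals, bins=10):
    """Compare a feature's training vs. serving distribution.
    Rule of thumb (an assumption, tune per use case):
    PSI < 0.1 is stable; PSI > 0.25 signals a significant shift."""
    # Bin edges come from the training distribution
    edges = np.quantile(train_vals, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range serving values
    train_pct = np.histogram(train_vals, bins=edges)[0] / len(train_vals)
    serve_pct = np.histogram(serve_vals, bins=edges)[0] / len(serve_vals)
    # Clip to avoid log(0) in sparse bins
    train_pct = np.clip(train_pct, 1e-6, None)
    serve_pct = np.clip(serve_pct, 1e-6, None)
    return float(np.sum((serve_pct - train_pct) * np.log(serve_pct / train_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)          # batch training data
serve_ok = rng.normal(0, 1, 10_000)       # live data, same distribution
serve_drift = rng.normal(0.5, 1, 10_000)  # live data after a shift

print(population_stability_index(train, serve_ok))     # small: no skew detected
print(population_stability_index(train, serve_drift))  # large: flags the gap
```

Run routinely against production traffic, a check like this turns the batch-versus-live gap from a silent failure into an alert before it undermines trust.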
The core insight
At the heart of it, industrialising your infrastructure is more important than perfecting your models. Get the foundations right, and AI becomes scalable.
2. Engineering Excellence & Generative Coding
It’s time for a reality check in 2026. The AI hype cycle has met the hard edges of reality. Public AI failures and hallucinations have made headlines, raising questions about deployment readiness in financial services. And cloud resilience concerns continue to test the operational limits of institutions relying on third-party infrastructure.
What’s worked:
- Copilot tools have proven their worth through incremental daily gains. Nothing revolutionary, but compounding improvements in productivity that add up over time.
- Generative coding, when approached with discipline and iteration, has delivered major uplift; particularly when paired with rigorous testing and review cycles.
- The banks that have strategically leveraged global engineering capability have found ways to scale expertise without compromising quality.
What’s gone wrong:
- Outputs from AI tools have at times been patchy at best and reputationally damaging at worst. Even seemingly simple tasks like document generation and OCR remain limited by accuracy thresholds that fall short of business-critical standards.
- Agentic AI, despite the excitement, has proven complex and costly to implement at scale. The promise is there, but the operational reality lags behind.
The core insight
For those who have navigated this landscape successfully, senior engineering leadership, architecture discipline, and test-driven development remain as critical as ever. Methodology and design matter more than raw code generation. All attendees were aligned that AI can accelerate delivery, but it cannot replace the judgment and rigour that come from experienced teams building with intent.
3. ROI Debate: Efficiency vs Growth
The conversation around AI ROI often defaults to operational efficiency: cutting costs, automating processes, reducing headcount. But this framing misses the bigger opportunity.
ROI calculations around AI should prioritise scale and growth over efficiency alone. The institutions that will win are those using AI to unlock new revenue streams, accelerate time to market, and serve customers in ways that were previously uneconomical. Efficiency gains matter, but they are table stakes. The real value lies in doing things that could not be done before.
Yet there are uncomfortable questions around the long-term economics of LLM providers. Current pricing models may not reflect the true cost of delivering these services at scale. Many banks are building strategic capabilities on what could be subsidised AI pricing – a risk that carries structural dependency implications. If costs rise sharply or providers pivot their models, institutions could find themselves locked into unsustainable economics.
The core insight
Beyond the procurement issues, it is a board-level strategic concern. Leaders need to understand not only what AI enables today, but what happens if the commercial landscape shifts. The smartest banks are already stress-testing their AI investments against different pricing scenarios and building optionality into their architecture.
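Stress-testing of this kind can start very simply: model annual spend under a few pricing and volume scenarios. The figures below (token volumes, blended prices, growth multipliers) are entirely illustrative assumptions, not real vendor rates.

```python
# Toy stress test of LLM unit economics under hypothetical pricing scenarios.
# All numbers are illustrative assumptions, not actual provider pricing.

MONTHLY_TOKENS = 5_000_000_000  # assumed monthly token volume across use cases

scenarios = {
    "today":        1.00,  # current blended $ per million tokens (assumed)
    "subsidy_ends": 3.00,  # provider reprices to reflect true delivery cost
    "volume_2x":    1.00,  # price holds, but adoption doubles usage
}

for name, price_per_m in scenarios.items():
    tokens = MONTHLY_TOKENS * (2 if name == "volume_2x" else 1)
    annual_cost = tokens / 1_000_000 * price_per_m * 12
    print(f"{name:>12}: ${annual_cost:,.0f}/year")
```

Even a spreadsheet-grade model like this makes the structural dependency visible: a 3x repricing triples the bill overnight, which is exactly the kind of scenario a board should see before the architecture is locked in.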
4. Adoption, Experimentation, and Scaling at Speed
AI transformation is as much about people as it is about technology. The workforce "fear vs empowerment" dynamic is real, and how organisations navigate it will determine whether AI delivers value or creates resistance.
As roles evolve and traditional job boundaries blur, cross-discipline expertise has become more important than ever. AI literacy is becoming a core capability, not a nice-to-have, and senior sponsorship remains critical to signal commitment and sustain momentum. But operating model changes are required beyond deployment: new workflows, new accountabilities, new ways of working.
The Value of Sandbox Environments
Even with internal readiness, the pace of innovation can be stifled by outdated processes. Traditional vendor onboarding can take three to twelve months, a timeline that is incompatible with the speed at which AI is evolving. The answer lies in sandbox environments, which reduce onboarding to weeks rather than quarters. A structured approach – Educate, Validate, Scale – allows banks to assess new capabilities quickly without exposing production systems to risk. Secure, disconnected environments enable safe experimentation, giving teams the freedom to test, learn, and iterate.
Through sandboxing, firms can achieve rapid mobilisation, proving that speed and control are not mutually exclusive. The institutions that will lead are those that can experiment fast, fail safely, and scale what works. The key is pairing pace with discipline.