1. Anchor the change in purpose and value
GenAI must be more than a headline to please investors. It should be a strategic enabler, linked directly to outcomes that matter: improved client experience, faster processes, more informed decisions. McKinsey estimates that GenAI could add $200–$340 billion in annual value to banking alone, largely through productivity gains.
But the technology alone won't suffice. AI adoption must be framed around human benefit: better service, not just efficiency. Trust, judgment and relationships remain irreplaceable in finance, and GenAI should be used to amplify them, not replace them.
2. Rethink team structures and workflows
GenAI can automate around 30% of front-office workflows in financial services. The work should therefore be clearly delineated: which tasks are AI-assisted, and which remain human-owned. Common use cases include automating routine reporting, reviewing contracts and building pitchbooks.
Team structures must also adapt: less reliance on the traditional analyst-heavy model built for high volumes of manual work, and more strategic input from associates and above. Cross-functional pods – bringing together business, tech, compliance and data – are proving effective for our clients. Career paths also need clarifying: roles will evolve, and new opportunities will emerge within the workforce.
3. Create room to experiment
Create structured environments in which to test the new technology. To destigmatise early failure, give teams room to experiment and to build internal best practices.
Pilots in low-risk, common areas – like summarising internal meeting notes or synthesising non-confidential data – can build quick momentum within your workforce.
As teams grow more comfortable with the new technology, make sure you recognise and celebrate those creating the most value from AI. Experimentation can surface unexpected use cases and reveal new best practices that could give your business a competitive edge.
4. Prioritise upskilling your teams
Replacement anxiety over AI is widespread across the industry, especially in a cooling job market. Teams must therefore be trained specifically on how AI can augment their workflows and improve business outcomes. Upskilling is non-negotiable.
Regular seminars, tailored tutorials and clear, accessible internal documentation on best practices will keep your workforce engaged over the long term. The more skilled employees become with their new tooling, the more confident they will be in augmenting their workflows with AI, warding off replacement anxiety.
5. Be transparent about AI’s input
As workflows change, so should the performance metrics around them. Shift from activity-based metrics (such as billable hours) to value-based ones that reflect the quality of decision-making, innovation and client impact.
Track the influence of AI as transparently as possible, so your firm can measure accuracy, time saved and compliance outcomes precisely.
The benefits of transparency extend to client relationships. Given the value and impact of the work in financial and professional services, the risk is far greater for firms that get it wrong. Deloitte Australia, for example, was recently forced to partially refund its $290k fee on a government contract after AI-hallucinated research was found cited in its report.
Ensure AI tools are easy to explain and easy to audit. Both internally and externally, trust is built through clarity.
6. Set clear ethical guardrails
Ethical guardrails around AI have become a hot topic for many of our clients, given the volume of proprietary data and the regulatory pressure across the financial services sector. To futureproof your AI strategy, these policies need clear ownership – be it in Operations, Compliance or IT – so they are implemented uniformly across the business. Centrally led GenAI organisations are reaping the biggest rewards.
Moreover, these policies must always take the customer experience into account. AI that compromises customer data or gives customers inaccurate information poses a far greater risk than any efficiency lost to ethical guardrails.
Unlike traditional software, which might be reviewed annually, AI and the policies around it require far more regular attention. Annual reviews are obsolete; quarterly check-ins should be the industry standard.
7. Empower leadership at all levels
Change leadership can't be limited to the C-suite's brief. Middle managers are closest to the teams and must be activated as coaches and role models, reporting to the strategic owners within your business. Rely on their insights to inform and develop your strategy over time.
Senior management should also lead by example, communicating openly and consistently about their own AI usage – for example, using it to summarise meeting notes or to collate reports from middle management.
8. Encourage continuous feedback
GenAI adoption is a marathon, not a sprint, so check in regularly on employee sentiment and adoption barriers. Real-world usage should feed back into model improvements. Performance reviews must evolve to include adaptability, AI fluency and collaboration.
Change at this scale can spark anxiety throughout a workforce and put employee retention at risk. Confidence in the job market is declining across all sectors, with AI largely held responsible. The opportunity lies in reframing AI as a co-pilot, not a replacement.
To preserve psychological safety across the workforce, leaders must address the change, and the rationale behind it, openly. We recommend a regular eNPS survey with a specific focus on AI to gather actionable feedback from everyone affected by the new tooling.