1. The CIO Owns It
This is the most common answer, and on the face of it the most logical. The CIO controls the infrastructure, manages the vendor relationships, holds the budget. If AI is a technology, the technology leader should own it.
The difficulty is that AI does not sit neatly inside a technology function the way an ERP system or a document management platform does. Deploy a generative AI tool to fee earners and you are immediately touching client confidentiality, regulatory compliance, professional conduct, pricing, and risk – all at the same time. The Solicitors Regulation Authority (SRA) expects Compliance Officers for Legal Practice (COLPs) to take direct responsibility for compliance when new technology is introduced. The Information Commissioner’s Office (ICO) is developing an AI code of practice under the Data (Use and Access) Act. The Competition and Markets Authority (CMA) is bringing its competition powers to bear on AI deployment. These are live regulatory expectations running in parallel, and no single technology leader I have spoken to feels they have genuine line of sight across all of them.
In practice, this tends to leave CIOs in an uncomfortable position: accountable for delivery but without real authority over strategy, fielding questions from the management board about the firm's AI governance posture while still working out what that posture should look like – and doing so with frameworks that were built for infrastructure decisions, not for tools that can draft legal advice.
2. The Innovation Team Owns It
A lot of UK firms set up innovation functions or digital transformation teams specifically to handle this, and the logic made sense: AI needs experimentation and a different risk appetite from core IT, so give it a dedicated team with the space to move quickly.
The trouble tends to show up at scale. An innovation team can run pilots brilliantly, but it is not built to govern enterprise deployment. Once a firm moves from testing a contract review tool with ten lawyers to rolling it out across the partnership, the questions change completely – data security, integration with practice management systems, training, change management, ongoing vendor oversight. These are operational muscles that sit in IT, and most innovation teams were never designed to carry that weight.
There is also a budgetary dimension that I think gets underplayed. When IT, innovation, and data functions each hold their own budgets with no consolidated owner, nobody has a clear picture of whether the platforms being procured are complementary or duplicative. This matters more now than it did two years ago: as Microsoft Copilot matures and starts to overlap with purpose-built legal AI products, the question of what the firm is actually paying for gets harder and harder to duck. That is not really a finance issue – it is a strategy gap that AI spending is making visible.
So you end up with innovation teams that were created to accelerate AI adoption becoming bottlenecks once adoption actually succeeds. The handoff to operations is rarely designed in advance, and when it has to happen under pressure, governance is usually what falls through the gap.
3. A Committee Owns It
This has become the default answer in 2026, and it is easy to see why. Stand up a cross-functional AI governance board with representation from IT, innovation, risk, compliance, and the partnership, and you get shared ownership across the functions that are affected and collective accountability for how AI is deployed.
The structure mirrors how law firms tend to govern everything else, which is both the appeal and the limitation. Committees work well for oversight – reviewing a policy, approving a framework, running a quarterly review. Where they struggle is with ownership in the operational sense: making a deployment decision quickly enough for it to matter, working out in real time whether a particular AI tool should go live with a practice group that is under competitive pressure to adopt it, or pulling something back when the risk profile shifts.
There is also growing evidence that governance boards are being stood up faster than the substantive frameworks they are meant to enforce. Only 39% of UK firms report having strong AI policies and oversight in place; the majority are still working with partial guidance, ad hoc decisions, or nothing formal at all. A board without a framework to govern against is not really governing – it is convening, and those are not the same thing.
The practical risk is that shared ownership quietly becomes no ownership, where nobody wakes up on any given morning with personal accountability for whether AI governance is actually functioning. And when the SRA comes asking who is responsible, pointing to a committee structure is not the same as pointing to a person.