The Governance Gap in Law Firms

AI adoption is high, but how do law firms decide who owns the new technology?

Kate Brownson, Director at Qurated, comments on a question at the heart of legal sector transformation.

Who Owns AI in a Law Firm?

The adoption numbers are not in dispute. 96% of UK law firms have integrated AI into their operations, 62% of practitioners use it regularly, and most firms are planning significant technology investment for the year ahead. Whatever debate there once was about whether firms would adopt AI is over. The question that has not been settled – and that I keep coming back to in my work – is who, inside any given firm, is actually responsible for making it work.

I have spent the last few months talking to CIOs, CTOs, and COOs across UK and international firms, and a pattern has become hard to ignore. Ask who owns AI strategy and you will get one of three answers. All sound reasonable on the surface, and all start to unravel once you look at how decisions actually get made. This paper lays out those three default positions and where each one tends to come unstuck – not the last word on the subject, but the framing for an exclusive industry conversation we are hosting for legal sector leadership on the 16th June.


1. The CIO Owns It

This is the most common answer, and on the face of it the most logical. The CIO controls the infrastructure, manages the vendor relationships, holds the budget. If AI is a technology, the technology leader should own it.

The difficulty is that AI does not sit neatly inside a technology function the way an ERP system or a document management platform does. Deploy a generative AI tool to fee earners and you are immediately touching client confidentiality, regulatory compliance, professional conduct, pricing, and risk – all at the same time. The Solicitors Regulation Authority (SRA) expects Compliance Officers for Legal Practice (COLPs) to take direct responsibility for compliance when new technology is introduced. The Information Commissioner’s Office (ICO) is developing an AI code of practice under the Data (Use and Access) Act. The Competition and Markets Authority (CMA) is bringing its competition powers to bear on AI deployment. These are live regulatory expectations running in parallel, and no single technology leader I have spoken to feels they have genuine line of sight across all of them.

In practice, this tends to leave CIOs in an uncomfortable position: accountable for delivery but without real authority over strategy, fielding questions from the management board about the firm's AI governance posture while still working out what that posture should look like – and doing so with frameworks that were built for infrastructure decisions, not for tools that can draft legal advice.



2. The Innovation Team Owns It

A lot of UK firms set up innovation functions or digital transformation teams specifically to handle this, and the logic made sense: AI needs experimentation and a different risk appetite from core IT, so give it a dedicated team with the space to move quickly.

The trouble tends to show up at scale. An innovation team can run pilots brilliantly, but it is not built to govern enterprise deployment. Once a firm moves from testing a contract review tool with ten lawyers to rolling it out across the partnership, the questions change completely – data security, integration with practice management systems, training, change management, ongoing vendor oversight. These are operational muscles that sit in IT, and most innovation teams were never designed to carry that weight.

There is also a budgetary dimension that I think gets underplayed. When IT, innovation, and data functions each hold their own budgets with no consolidated owner, nobody has a clear picture of whether the platforms being procured are complementary or duplicative. This matters more now than it did two years ago: as Microsoft Copilot matures and starts to overlap with purpose-built legal AI products, the question of what the firm is actually paying for gets harder and harder to duck. That is not really a finance issue – it is a strategy gap that AI spending is making visible.

So you end up with innovation teams that were created to accelerate AI adoption becoming bottlenecks once adoption actually succeeds. The handoff to operations is rarely designed in advance, and when it has to happen under pressure, governance is usually the thing that falls through the gap.



3. A Committee Owns It

This has become the default answer in 2026, and it is easy to see why. Set up a cross-functional AI governance board with representation from IT, innovation, risk, compliance, and the partnership. The promise is shared ownership across the functions that are affected, and collective accountability for how AI is deployed.

The structure mirrors how law firms tend to govern everything else, which is both the appeal and the limitation. Committees work well for oversight – reviewing a policy, approving a framework, running a quarterly review. Where they struggle is with ownership in the operational sense: making a deployment decision quickly enough for it to matter, working out in real time whether a particular AI tool should go live with a practice group that is under competitive pressure to adopt it, or pulling something back when the risk profile shifts.

There is also growing evidence that governance boards are being stood up faster than the substantive frameworks they are meant to enforce. Only 39% of UK firms report having strong AI policies and oversight in place; the majority are still working with partial guidance, ad hoc decisions, or nothing formal at all. A board without a framework to govern against is not really governing – it is convening, and those are not the same thing.

The practical risk is that shared ownership quietly becomes no ownership, where nobody wakes up on any given morning with personal accountability for whether AI governance is actually functioning. And when the SRA comes asking who is responsible, pointing to a committee structure is not the same as pointing to a person.

The Question Beneath The Question

Each of these three answers is really a symptom of the same underlying tension. Law firms are partnership-based organisations trying to govern a technology that does not respect functional boundaries – it cuts across every practice group, every level of seniority, every part of the business at once. The governance models most firms have inherited – functional silos, committee oversight, consensus-driven decision-making – were built for a world where technology stayed inside IT, and AI simply does not.

What I notice about the firms navigating this with the most confidence is that they are not necessarily the ones with the best tools or the biggest AI budgets. They are the ones that have looked at the ownership question head-on and made a deliberate structural choice – even an imperfect one – rather than letting it drift into ambiguity. A named executive owner with shared accountability across the CDO, CISO, legal, and the wider business is starting to emerge as a recognisable pattern. But the law firm context – partnership governance, dispersed authority, the way COLP accountability actually works in practice – makes this genuinely harder than in a corporate setting, and most of the available playbooks were not written with partnerships in mind.

So the question I would leave you with is not whether your firm has an AI governance framework – most do, in some form. It is whether you could draw a clear line from any given deployment decision to a named individual who is accountable for its consequences – regulatory, commercial, reputational – and whether that person actually has enough authority to do something about it.

About the Author



Kate Brownson

Director of Legal Transformation

kate.brownson@qurated.network

Kate is a Director within our Adaptive Teams practice. At Qurated, Kate plays a key role in creating value for our legal services clients by leveraging people, processes, and technology.

After several years at PwC helping legal teams access cutting-edge AI expertise, Kate now focuses on enabling firms to scale their capability and accelerate modernisation.