Artificial Intelligence

As the world's leading firm on AI, we help responsible pioneers harness the potential and manage the risks of this transformational technology.

In February 2023 we became the first firm in the world to deploy generative AI at enterprise level. More than 3,000 of our lawyers in 43 jurisdictions now use GPT-4-based tools in their day-to-day work. We have undertaken a rigorous and extensive programme to deploy the generative AI tool Harvey across our entire business, with ongoing governance and risk management at its core.

We don’t just use AI-based tools, we build them. Our proprietary contract drafting tool, ContractMatrix, streamlines contract drafting, review and analysis. It has been developed in partnership with Microsoft and Harvey. ContractMatrix has been tested and perfected by over 1,000 A&O lawyers and, following launch at the end of 2023 with Financial Times coverage, is now being licensed to clients.

Our AI advisory practice, therefore, is grounded in deep expertise and experience. We understand all forms of the technology and the specific issues each raises from a risk management and contracting perspective. Our experience spans everything from helping nation states shape their AI policies to advising businesses across sectors on how to develop effective and responsible AI solutions, handle AI-focused transactions, and manage AI-related disputes.

AI at A&O

Harvey

Our deployment of Harvey, an OpenAI-backed tool based on GPT-4, began with a sandbox. In other words, we gave access to a limited number of lawyers in a ring-fenced environment. Sandboxes are crucial for any business looking to deploy generative AI because it’s hard to predict what the technology will do until you use it. We tested, adapted, and moved ahead – all in a safe and secure environment. We only rolled out Harvey to a wider group once we could mitigate its risks, and we continue to gather and act on feedback we receive.

We also established an AI steering committee and an AI brains trust to help our experts understand AI’s current and future capabilities and how it can be harnessed across every area of our business. Alongside this, all our existing governance structures, including our risk committee, now consider generative AI in their day-to-day decisions.

Clear governance and guardrails are critical to successfully deploying AI. We have specific rules of use in place and train our people how to use AI tools effectively and safely.

People are the common thread that runs through all our work with AI. We know that generative AI is an augmentative tool. Everything Harvey produces is rigorously checked, edited and finessed by our team. It enhances the work our lawyers do and helps us produce better results for our clients. In turn, it is governed and augmented by the gold-standard critical thinking and creativity for which A&O lawyers are known.

Our AI Group

Our multidisciplinary AI Group advises clients on the responsible development, deployment and use of AI.

We combine a sophisticated understanding and experience of technology with deep expertise in intellectual property, data privacy, regulation, technology transactions, litigation and change management.

We help clients to manage the risks associated with this powerful technology, which fall into two broad categories.

First, AI models make errors. Crucially, even those who build and train the models cannot fully explain or account for those errors. This so-called “black box” problem creates significant risks.

  • Hallucinations: These are incorrect outputs that could lead to, for example, tort liabilities, consumer harm or regulatory breaches. Hallucinations can be the result of incorrect or out-of-date data, inaccurate mathematical predictions based on weighting of sources or randomisation, or historical bias in the datasets used to train the models.
  • Unpredictability: A lack of explainability also creates a lack of predictability: you can’t be certain exactly what the model will say in response to a question. This can make it extremely difficult to check that it meets standards of quality and accountability.
  • Response divergence: By their very nature, AI models will give multiple answers to the same question. This could be evidentially relevant if, for example, an AI chatbot built to give financial advice delivers different responses to two individuals, leading to divergent outcomes.
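The response-divergence point above can be illustrated with a minimal sketch: generative models typically sample from a probability distribution over candidate outputs, so the same prompt can yield different answers. The "model" below is a toy stand-in with made-up scores and answers, purely for illustration; real systems work over tokens rather than whole answers.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores to a probability distribution; a higher
    temperature flattens the distribution, increasing randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers and the scores our toy "model"
# assigns them for one and the same prompt.
answers = ["invest in bonds", "invest in equities", "hold cash"]
logits = [2.0, 1.8, 0.5]

def sample_answer(rng, temperature):
    """Draw one answer at random, weighted by the model's scores."""
    probs = softmax(logits, temperature)
    return rng.choices(answers, weights=probs, k=1)[0]

# Two users ask the identical question; because the answer is
# sampled, their responses can legitimately differ.
user_a = sample_answer(random.Random(1), temperature=1.0)
user_b = sample_answer(random.Random(7), temperature=1.0)
print(user_a, "|", user_b)

# At a temperature near zero the distribution collapses and the
# highest-scoring answer is effectively always chosen.
greedy = sample_answer(random.Random(1), temperature=0.01)
print(greedy)
```

Lowering the temperature is one common way deployers reduce (but rarely eliminate) divergence between responses to identical prompts.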

Second, generative AI models take human content and account for it in a mathematical response. A user may therefore be working with someone else’s information without permission, credit, knowledge, or even awareness. This raises significant IP infringement questions: for example, can the user assert ownership over the model’s output? And is their own IP safe if they are using the model?

There are also consequential questions about data privacy and data protection, for example, where an AI model has been trained using personal data or a user inputs personal data as a prompt.

Our AI Group provides answers to these substantive legal questions on a syndicated basis. You can sign up to join a series of one-hour calls with other businesses, each in a controlled environment with an antitrust lawyer present. The calls deal with specific issues and are supplemented by minutes and additional written materials such as formal memos, policy guidelines, or comparative analyses.

So far, we have covered topics including a primer on AI, ChatGPT policy, IP infringement and data risks, licensing an LLM and change management, and have welcomed attendees from industries including financial services, pharma, technology and telecoms.

For more information, please get in touch with your usual A&O contact.

ContractMatrix

Harness AI. Free the lawyer.

ContractMatrix streamlines contract drafting, review and analysis using:

  • Generative AI-assisted interrogation and drafting
  • Real-time access to your gold-standard precedents and policies
  • Inbuilt risk management and governance designed by A&O lawyers

It has been developed in partnership with Microsoft and Harvey, which builds custom LLMs for lawyers. ContractMatrix has been tested and perfected by over 1,000 A&O lawyers and, following launch at the end of 2023 with Financial Times coverage, is now being licensed to clients.

Please get in touch to find out more, or request a demo.

Our Risk Management Pillars

Use Case + The sweeping abilities of large language models mean there is a high risk of mission creep. When deploying these tools, it’s vital that the use case is tightly defined. We call this the ‘+’: the strict governance controls required to keep the use of the AI within its original boundaries. This should be reinforced with playbooks, training, system settings and working practices.
Operational Our experience deploying generative AI means we know how important it is for legal departments to work in lockstep with information security teams, as well as those aligning the AI tools with existing technology infrastructure. In AI projects, the interdependencies between legal, operational and security stakeholders are greater than in non-AI rollouts. To take just one example, it’s not enough to put in place a contractual restriction designed to protect trade secrets if no practical steps are also taken to implement encryption measures or configure systems to support contractual terms.
Contractual Contract terms are vital in mitigating legal risk. This is true both in the contract between the AI user and the developer, and – where generative AI is used in customer-facing products – between the business and its customers. We have negotiated many of these contracts and are working on similar agreements with clients across sectors.

AI deployment risk is further complicated by the fact that there are often trade-offs between these three pillars, with some more important than others depending on the situation. A&O’s AI Group is helping clients to manage this careful calibration.

AI Insights

Keep informed of the latest developments in AI regulation, and the associated risks and opportunities, on our dedicated insights hub.

Meet the next big thing

Explore the fifteen technology companies joining Cohort 7 of Fuse, Allen & Overy’s tech innovation hub.