Artificial Intelligence, Real Risks: Legal Due Diligence Issues and Purchase Agreement Provisions Aimed at Managing AI Risk in M&A Transactions
There is broad consensus in Canada and around the world that artificial intelligence (“AI”) has been a transformative force in recent years, driving innovation and efficiency in companies across industries, and that it will continue to do so at an increasingly rapid pace. A 2023 study by PricewaterhouseCoopers found that 54% of companies are increasing their investment in AI research and development, with many focusing on creating proprietary AI models and tools.[1] Other companies have benefited from licensing and service arrangements with these developers, finding it more efficient and cost-effective to utilize AI technology developed by other companies in order to grow their own businesses. McKinsey & Company reported that, as of early 2024, approximately 65% of organizations reported regular use of generative AI technologies, nearly double the share that reported using them in a similar survey just 10 months earlier.[2]
Whether they are concerned with developing or utilizing AI technologies, both types of AI-related companies have found themselves at the heart of a rapidly developing mergers & acquisitions (“M&A”) landscape. In 2023, the AI sector witnessed 271 finalized M&A deals, and projections indicated a 20% year-over-year increase.[3] In our experience, these numbers dramatically understate the importance of addressing AI in M&A transactions since an ever-growing number of target entities (“Targets”) have adopted some element of AI in their businesses, an aspect that prospective acquiror entities (“Acquirors”) should carefully evaluate. Acquirors considering M&A transactions with Targets that are focused on AI as a core component of their business or are implementing some elements of AI in their business should, to varying degrees as applicable, consider the legal due diligence issues raised in this article and address AI-related matters in the purchase agreement.
In this rapidly developing AI landscape, characterized by the acquisition of novel technology and businesses based on or otherwise incorporating elements of such technology, it is no surprise that new legal considerations have emerged in M&A transactions.
In this article, we discuss legal due diligence issues arising out of a Target’s development or utilization of AI technology. On this basis, we then set out key provisions to be considered in a purchase agreement involving the acquisition of the business or assets of a Target that has developed its own AI technology or relies upon third-party AI solutions.
IP Legal Issues and Due Diligence Considerations Arising Out of the Development or Utilization of AI Technology
A number of novel legal risks arise when an Acquiror is looking to purchase the business or assets of a Target which has either developed its own AI technology or utilizes AI technology developed by a third party in order to provide its products and services or for internal operations. Although each circumstance is unique, some of the more common legal issues and questions to be asked during the diligence process are briefly discussed below:
- Proprietary Rights in AI Output. In many cases, a Target’s assets may consist of output/work product generated by an AI technology (“AI Generated Assets”). AI Generated Assets raise questions as to whether such work product can be validly owned by the Target and whether it can be the subject of protection under law.
For example, whether AI Generated Assets qualify for protection under existing intellectual property (“IP”) regimes is a question currently being addressed by courts, regulators and policymakers around the world. Consider, for example, protection under copyright, which is the most commonly relied upon IP right in the context of AI Generated Assets. Broadly speaking, in most jurisdictions, copyright protection requires at least some human authorship. For example, the U.S. Copyright Office recently issued guidance on the copyrightability of AI-generated works, clarifying that purely AI-generated works cannot be copyrighted, and that the copyrightability of AI-assisted works will depend on the level of human creative authorship integrated into the work. In 2023, a court in China held that an image created using Stable Diffusion (a generative AI model that produces unique photorealistic images from text and image prompts) was protected under China’s copyright law and that the person who used AI to create the image was the author. In Canada, there is no clear guidance on the copyrightability of AI-generated content or on whether an AI tool may be named as an “author.” Many have argued that the language of the Canadian Copyright Act, which only grants protection to original works that result from an author’s “skill and judgment,” implies that an author must be a natural person. However, in December 2021, the Canadian Intellectual Property Office allowed the registration of a copyright co-authored by an individual, Mr. Ankit Sahni, and the RAGHAV Artificial Intelligence Painting App. This registration, and specifically whether an AI tool can be named as an author under the Copyright Act, has since been challenged, and the matter is currently before the Federal Court of Canada.
Similarly, whether AI Generated Assets are eligible for patent protection is a question still being considered in many jurisdictions, including in Canada, where we continue to await guidance from the Federal Court. In contrast, in the United States, the United States Patent and Trademark Office has taken the position that AI-assisted inventions may be patentable provided that one or more natural persons made a significant contribution to the invention.
Questions as to a Target’s ability to protect or own AI Generated Assets also arise where a Target uses a third-party-owned AI tool to generate AI Generated Assets, as the terms and conditions applicable to the use of some AI tools may prohibit a user from claiming “ownership” over output generated.
- Proprietary Rights in and/or Rights to Use Training Data. AI models are trained on large datasets, which may include proprietary, user-generated or publicly available data. Where a Target owns proprietary AI technology, a key question is whether it legally obtained, and has the right to use, the content it used to train its AI models. If the Target lacks clear rights to the data used for training, the resulting AI models could be legally vulnerable.
One of the key issues in this respect is a Target’s compliance with privacy laws. For example, a Target may use data collected from customers/users well before it had contemplated developing its AI technology, such that the privacy policies in place at the time of collection may not have notified the user of the Target’s intention to use the data for that purpose. Any use by the Target of data for a purpose for which it has not received appropriate consent may lead to a violation of privacy laws. Further, in some jurisdictions, such as the United States, government authorities have the power to impose penalties including “algorithmic disgorgement” or “model destruction,” requiring the destruction and/or deletion of not only the training data but also any models and algorithms built using that data.
In some cases, however, a Target will not have its own internal body of training data to draw from and will therefore look to external (publicly available or proprietary) sources to serve as its training dataset. In such cases, it is important to consider whether the Target’s use of the data complied with the terms of any licence or other terms of use applicable to the dataset. For example, where a Target has engaged in “scraping” of data freely accessible online, the question is whether the owner of the website imposed any terms or conditions on users which prohibited the “scraping” of data for such purposes. Liability may arise where owners of data that is freely accessible online claim that their data was “scraped” without permission or compensation. For instance, in November 2024, a coalition of Canadian news publishers, including The Canadian Press, Torstar, the Globe and Mail, Postmedia, and CBC/Radio-Canada, filed a copyright infringement lawsuit against OpenAI, alleging that OpenAI’s ChatGPT system used their content without permission to train its AI models.
Likewise, where the Target licensed datasets from a third-party data supplier, questions should be asked, including whether the supplier provided any assurances that the data could validly be used for training purposes (for example, by providing reps and warranties in their supply or licence agreements).
- Risks Surrounding a Target’s Use of Third-Party AI Tools. Use of third-party-owned AI tools raises questions regarding the security and confidentiality of the Target’s material trade secrets and other confidential information. When a Target’s employees use AI tools in their jobs, the information supplied by an employee may be retained by the AI tool indefinitely and may be accessed by third parties, presenting a material risk to the Target’s continued ability to claim any proprietary rights in and to such trade secrets or other confidential information. An Acquiror should have a clear understanding of any policies which have been put in place by a Target regarding its employees’ use of third-party-owned AI tools to ensure that their use does not introduce unintended liability to the company.
In addition, an Acquiror should understand the diligence performed by the Target before the adoption of any third-party-owned AI tools and the licence or terms governing its use of the tool, which raise issues beyond those of the standard vendor diligence. For example, the terms of use for AI tools may require a user to indemnify the tool provider if the output infringes a third party’s IP.
As well, some jurisdictions have enacted (or are in the process of enacting) AI-specific laws and regulations to address ethical considerations such as bias, transparency and accountability. Under such regimes, liability may be imposed on users of an AI tool where, for example, the tool introduces bias into company processes (such as where the tool is relied upon to make hiring decisions).
Drafting Purchase Agreements
Once diligence is complete, or in parallel with that process, AI-related IP ownership and licensing risks should be considered in the definitive purchase agreement, whether that agreement is a share purchase agreement or an asset purchase agreement (each a “Purchase Agreement”).
As a matter of standard practice, Purchase Agreements include a number of IP-related representations and warranties that cover certain AI-related issues. Such representations include Target representations: (i) that it owns the IP it claims to own; (ii) that it has not infringed the IP rights of any third party; (iii) that it has available to it (through ownership or licensing) all of the IP it needs to operate its business; and (iv) that it complies with all applicable data privacy laws and maintains policies to ensure ongoing compliance with the same. For Targets that are implementing some elements of AI in their business, AI-related risks may be addressed with these customary representations and warranties. For other Targets, particularly those that are focused on AI as a core component of their business and/or where risks have been identified as a result of the Acquiror’s due diligence, further AI-specific representations and warranties that Acquirors may consider including:
- Representations specific to ownership of AI Generated Assets. Given the uncertainties surrounding the application of traditional IP regimes to AI Generated Assets (i.e., whether such materials may be the subject of copyright, patent or trade secret protection), a Target’s ownership over AI Generated Assets may result from the application of many different forms of IP. The Acquiror should ensure that the definition of IP rights in any AI Generated Assets is sufficiently broad to capture all possible IP rights and receive appropriate assurances that the Target has taken reasonable steps to protect each aspect.
- Representations specific to training datasets. Acquirors should seek assurances that the Target has: (i) obtained all required licences or consents to collect and use any data which has been utilized to train, refine or improve the AI technology in the course of its business; (ii) confirmed that its use of such data complies with applicable regulatory regimes; (iii) conducted appropriate diligence in respect of third-party-supplied datasets used as its training data; (iv) provided assurances that the training dataset will continue to be available to the Acquiror post-closing; and/or (v) maintained a record of all licences and consents received in respect of the training data.
- Representations specific to the allocation of liability as between the Target and its suppliers of AI tools or licensed datasets. Acquirors may also include specific representations to mitigate risk associated with liability arising out of the terms and conditions agreed to/accepted by the Target with any key suppliers and vendors of AI tools or licensed datasets. Specifically, Acquirors may require that the Target represent that it has complied with specific use restrictions, limitations and guidelines set out in the Target’s contracts with any such key suppliers or vendors and has put in place applicable mechanisms, policies and practices, as required, to ensure compliance with any such contractual terms.
- Representations specific to the Target’s control over the use of AI tools in its business. Acquirors should consider including representations that: (i) the Target has in place appropriate policies and procedures to ensure that no sensitive personal information, trade secrets or other confidential/proprietary information has been provided in prompts or inputs to tools that use such prompts or inputs to improve the Target’s AI technology or any third-party AI technology; and (ii) such policies have been followed and implemented, and the Target is not aware of any violations of its policies.
If AI-related IP risks are identified during diligence, Acquirors should consider negotiating indemnification provisions, including specific indemnities, as part of their indemnity package. These include indemnities for breaches of IP representations, open-source licence violations or unauthorized use of third-party data in AI training. Additionally, in cases where the Target retains certain AI-related assets or rights, the Purchase Agreement, or a transition services agreement, licence or other agreement between the Acquiror and the Target, should define clear boundaries for post-closing use. For example, if the Target retains rights to train AI models on specific datasets, the Acquiror may require non-compete or non-solicitation covenants to protect its competitive advantage.
Conclusion
Aird & Berlis LLP frequently assists public and private companies across various industries with completing M&A transactions that grow and enhance their businesses for the immediate and long-term benefit of their stakeholders. The Capital Markets Group and Intellectual Property Group at Aird & Berlis LLP will continue to monitor developments in the treatment of AI-related considerations in M&A transactions. Please contact the authors if you have any questions concerning any AI-related considerations in your contemplated or ongoing M&A transactions.
[1] PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution | PwC