AI-assisted agency creations – the challenges

Defining artificial intelligence is a vast undertaking that goes far beyond the scope of this article.

In fact, it is sometimes hard to tell when and for what purposes AI is used, as AI is increasingly integrated into everyday tools: office software (when the text editor “suggests” a stylistic change), search engines or image-editing filters on social networks, for example. This raises a number of ethical and other issues, which will be addressed in the final article of this series.

Visual-creation tools such as Dall-E™ or MidJourney™ are particularly tricky to handle in legal terms.

These AI-based tools use images drawn from countless publicly accessible websites (collections known as “data sets” or “training data”) to train their neural networks and return results. There is no case law (yet) determining whether this use is legally permissible (what common-law jurisdictions call “fair use”), but the question is currently controversial: on the one hand, databases are protected in their own right in some countries (in Switzerland, in the absence of ad hoc protection, the law against unfair competition applies); on the other, the images contained in these databases are very often themselves protected by copyright and not in the public domain.

In addition, databases very often contain annotations (metadata) alongside the images when the images have already been processed (in particular, a textual description added to each image). The rights in this processing belong to the database owner, but copyright in the image itself remains with the original creator of the image (or work). It is often necessary to dissect the contractual conditions accepted by the author of a work when uploading it to a database accessible on the Internet, in order to determine whether its use to train artificial intelligence algorithms was admissible (for example, vacation photos uploaded to the Flickr™ image platform).

This raises legal issues on two levels: first, the use of the training data itself, and second, the legal regime applicable to the images generated by these artificial intelligence tools, which are based on images created by third parties.

Several lawsuits have already been filed on the first issue. Microsoft, GitHub and OpenAI™, for example, have been accused in a class action of infringing copyright by allowing Copilot™, a code-generating AI system, to draw on billions of lines of code published as open source in freely accessible repositories and to reproduce code snippets without the attribution required by the applicable open-source licenses. Getty Images™ is also suing the publisher of Stable Diffusion™, alleging that the company copied its content to train its AI image generator. Twitter™ (now “X”) accuses Microsoft of “illegally training” its artificial intelligence (AI) technologies on Twitter data, which it allegedly ingested massively via public APIs.

In view of the above, agencies face a dilemma: how can they exploit, or draw inspiration from, the results obtained with AI tools without committing plagiarism? The output does not necessarily reproduce the formal elements of the original works, but collisions can sometimes occur, leaving a work recognizable.

The good news is that the applicable legal provisions exist and are not new. The question for the creative agency essentially concerns the distinction between a derivative work and an original work: a derivative work is a creation with individual character that has nonetheless been conceived on the basis of one or more pre-existing works that remain recognizable in the new work. To avoid plagiarism, the new creation need not completely obliterate the pre-existing work, but that work’s characteristic expressions must no longer be recognizable. If they are still recognizable, the consent of the author of the original work is required in order to use the derivative work, which is not without problems, particularly when it comes to identifying that author.

For example, imitating a style without adapting a pre-existing work is not infringing, since a style as such is not protected by copyright. Thus, an artist reproducing the Cubist style by painting three women does not infringe the copyright of Picasso’s heirs; if the style is unique to one artist, however, the creator could be accused of parasitism (for example, by taking up the idea of Andy Warhol’s famous negative portraits).

Tools are available online to check whether a given image strongly “resembles” an existing work, such as Google Images™ or TinEye™, to name but a few. Depending on the result, particularly the degree of similarity, the question of a risk of plagiarism may arise if the works are close, although the answer can in some cases be difficult to give. A mischievous mind would point out that these very tools also rely on artificial intelligence to analyze content and display results… Such a check is nevertheless recommended, as it already averts the risk of crude plagiarism.

It is therefore good practice for creative agencies to make clear to their clients that their deliverables are also the result of the use of AI tools, and to explain how they came about. In its contractual terms and conditions, the agency will no doubt have to refer to the terms granted by the AI tools themselves. Indeed, it is not always certain that the results (“output”) produced by artificial intelligence can be assigned in full to the client (should the client so require), so that sometimes only a license (exclusive or not) to the work can be granted.

For example, OpenAI states[1] that creators who wish to publish content created in part with the OpenAI API under their own name (for example, a book or collection of short stories) are authorized to do so on the following condition: the role of AI in formulating the content must be clearly disclosed in a way that no reader could possibly miss, and that a typical reader would find sufficiently easy to understand. OpenAI also provides model wording, which is unambiguous about shifting responsibility entirely onto the creator’s shoulders: “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.”

It is thus quite clear that responsibility for ensuring that the pre-existing rights of third parties are not infringed is transferred entirely to the agency, even if the validity of such a clause could still be challenged before a judicial authority.

 

P&TS will be happy to discuss the terms of use with you in greater detail, and to recommend any limitations that you may wish to include in your contractual terms.

 

[1] https://openai.com/policies/sharing-publication-policy