The future of AI

The arrival of AI tools is upending the business model of companies and agencies that produce creative content for their customers, particularly in terms of productivity. Content creation is becoming simpler, but paradoxically the rules of the game are becoming more complex, and they are evolving at a particularly rapid pace.

While open-source and freely accessible software will continue to coexist with commercial offerings, there is no doubt that AI tools require colossal resources: financial (for development), but also hardware and energy. All of this has a cost, and the obvious question is who will have to pay for it.

Generally speaking, a trend is already emerging: free tools offer limited functionality (either in technical terms or through the acceptance of unfavorable terms and conditions), while paid tools offer superior functionality or the possibility of retaining more rights over creations made with AI tools.

Regulatory frameworks are being put in place, and some lawmakers are already at work, notably in the European Union, which intends to play a pioneering role worldwide in this respect.

Indeed, ethical and legal issues arise, and the two are intimately linked. The fundamental issues at stake are combating the risk of bias, ensuring accountability (understood as responsibility) and guaranteeing transparency (how a system works, and who can explain its results). It is essential to set clear and binding ethical foundations to avoid undesirable developments, while recognizing the benefits AI can bring, such as better healthcare, safer and cleaner transport, more efficient production, and more sustainable and less expensive energy.

In 2020, the European Commission published a White Paper[1] on Artificial Intelligence, which sets out the main principles of a future EU regulatory framework for AI, based on the EU’s fundamental values, including respect for human rights (human dignity, privacy and data protection, equality and non-discrimination, access to justice and to social rights).

On June 14, 2023, the European Parliament adopted the text and its negotiating mandate, with a view to establishing a comprehensive regulatory framework applicable to all AI industry stakeholders active on the European market (suppliers, importers, distributors, users, etc.). Switzerland will obviously be affected as well.

The draft text also includes a framework for general-purpose artificial intelligence, partly influenced by the potential impact of certain generative AI tools that have received a great deal of media coverage. New requirements could include the obligation to disclose that content has been generated by AI and that the tool uses data protected by copyright. All systems considered to pose a clear threat to EU citizens will be banned: these could include social scoring by public authorities, or toys that use voice assistance to encourage dangerous behavior in children.
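
To make the disclosure idea concrete, the minimal sketch below shows one hypothetical way a content pipeline could attach such a label to AI-generated output; the `GeneratedContent` class, its fields and the wording of the label are illustrative assumptions, not terms prescribed by the draft regulation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GeneratedContent:
    """A piece of content produced in a creative pipeline (hypothetical model)."""
    body: str
    ai_generated: bool
    model_name: Optional[str] = None  # provenance field, assumed for illustration

    def with_disclosure(self) -> str:
        """Append a human-readable AI disclosure when the content is AI-generated."""
        if not self.ai_generated:
            return self.body
        label = (
            "[This content was generated with the assistance of AI"
            f" ({self.model_name or 'unspecified model'}).]"
        )
        return f"{self.body}\n\n{label}"


# Illustrative usage
post = GeneratedContent(
    body="Ten tips for greener commuting...",
    ai_generated=True,
    model_name="example-model",
)
print(post.with_disclosure())
```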

The Commission has proposed three interrelated legal initiatives that will contribute to the establishment of trustworthy AI[2]:

  • a European legal framework to address the fundamental rights and security risks inherent in AI systems (including risk classification; see the sketch after this list);
  • a civil liability framework, adapting liability rules to the digital age and AI (who is liable for decisions made using AI?);
  • revision of sector-specific safety legislation (e.g. Machinery Regulation, General Product Safety Directive).
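
As a rough illustration of the risk classification mentioned in the first initiative, the Commission's proposal is commonly summarized as four risk tiers (unacceptable, high, limited, minimal). The sketch below encodes those tiers and maps a few example use cases onto them; the mapping is a simplified, non-authoritative illustration, not a legal qualification.

```python
from enum import Enum


class AIRiskTier(Enum):
    """Risk tiers in the Commission's proposed framework (simplified)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring by public authorities
    HIGH = "high"                  # strict obligations: risk management, documentation, human oversight
    LIMITED = "limited"            # transparency obligations, e.g. disclosing AI-generated content
    MINIMAL = "minimal"            # no specific obligations beyond existing law


# Illustrative (non-authoritative) mapping of use cases to tiers
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": AIRiskTier.UNACCEPTABLE,
    "CV screening for recruitment": AIRiskTier.HIGH,
    "customer-service chatbot": AIRiskTier.LIMITED,
    "spam filtering": AIRiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value} risk")
```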

A parallel and related strategic issue concerns access to data, particularly the data collected by the Internet of Things (IoT), which taken together are a treasure trove for training AI: without data there is no AI, and the risk of monopolization by the GAFAM giants has already put governments on alert. As early as 2019, the EU adopted an Open Data Directive to maintain the broadest possible access.

The unknown is frightening, and it is human nature to fear what we do not understand, so education is essential. Yet AI can be seen as an opportunity for humanity, not just a threat. Knowing what is happening in the field of AI (transparency) and setting clear ethical and legal limits will therefore be decisive for its acceptance.

[1] https://op.europa.eu/fr/publication-detail/-/publication/ac957f13-53c6-11ea-aece-01aa75ed71a1

[2] https://digital-strategy.ec.europa.eu/fr/policies/european-approach-artificial-intelligence