The EU AI Act

The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems, and it takes a risk-based approach.

John Higgins joined the London Futurists Podcast to discuss the AI Act. He is the Chair of Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the UK’s IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.

Famously, the EU contains no tech giants, so cutting edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world’s most pro-active regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today.

Two Big Bangs in AI

When a new technology or service is launched, it is hard to know the eventual scale of its impact, but like most observers, Higgins thinks that large language models will turn out to have a deep and broad impact.

The AI systems generating so much noise at the moment use an architecture called the transformer. Transformers were introduced in a 2017 Google paper called “Attention Is All You Need”, whose title riffs on the Beatles’ 1967 song “All You Need Is Love”. The launch of transformer AIs was arguably the second Big Bang in AI.

The first Big Bang in AI came five years earlier, in 2012, when Geoff Hinton and colleagues demonstrated the power of deep learning, thereby reviving one of the oldest approaches in AI, known as neural networks. These systems discriminate, in the sense that they classify things into appropriate categories. They analyse.

The second Big Bang in AI, the arrival of transformers, is giving us generative AI: systems which produce text, images, and music. Their abilities are remarkable, and they will have enormous impacts.

Anglo-Saxon vs Franco-German regulation

Higgins describes the Franco-German approach to regulation as seeking to enshrine a framework of rules before a new technology, product, or service can be deployed. (He acknowledges this is an over-simplification.) The Anglo-Saxon approach, by contrast, is to allow the service to be launched, and to fix any market failures once they have become apparent.

Traditionally, the Franco-German approach has prevailed in the EU, and especially so since Brexit. In addition, the EU applies the precautionary principle, which says that if there is a possibility of harm being caused, then measures should be in place to mitigate that harm at the outset.

This is not the same as saying that a technology cannot be deployed until possible harms are identified and eliminated. The EU has not collectively signed up to the Future of Life Institute (FLI) letter calling for a six-month moratorium on the development of large language models, and there is no general appetite to follow the Italian government’s short-lived ban on ChatGPT. The idea is to get a regulatory framework in place at the outset, rather than to delay deployment.

Risk-based regulation

Regulators often face a choice between making rules about a technology and making rules about its applications. The preference in the EU is generally to address the applications, and in particular any applications which may affect safety or human rights. Higgins thinks this generally produces regulations which are not onerous, because they mostly oblige developers to take precautions that responsible ones would take anyway. For instance, they should ask themselves whether there is a likelihood of bias lurking in the data, and whether it is possible to be transparent about the algorithms.

When a large language model has been trained on a large percentage of all the data on the internet, it is obviously hard to avoid bias, so the developers of these models should be as open as possible about how they were trained. Organisations deploying AI need to consider the practices of all the other organisations in their supply chain, and what steps each of them has taken to mitigate possible harms. The level of scrutiny and precaution should vary according to the degree of possible harm: a healthcare provider should generally exercise greater diligence than a games developer, for instance.

To over-simplify again, the AI Act instructs developers to take appropriate steps (undefined) to ensure that no harms (undefined) are caused which rise above a particular threshold (also undefined). Put like this it may seem unfair, but what alternatives are there? It is not possible to define all the possible harms in advance, nor all the possible steps to mitigate them. It is unacceptable to most people to allow developers to let rip and deploy whatever systems they like without accountability, on the grounds (as argued recently by Eric Schmidt, former executive chairman of Google) that regulators are completely out of their depth and will generally cause more problems than they prevent, so the tech companies must be left to govern themselves. It is equally unacceptable to ban all new systems from being launched.

The EU process

The first step in the creation of EU legislation is that the Council of Ministers, which comprises ministers from the governments of the member states, asks the EU Commission to draft some legislation. The Commission is the EU’s civil service. When that draft is ready, it is reviewed by a committee of the Council of Ministers, which then passes its revised version to the EU Parliament, the collection of MEPs elected by EU citizens. MEPs review the draft in various committees, and the result of that process is brought to the full Parliament for a vote. In the case of the AI Act, that vote is expected in June.

Finally, the three institutions (Commission, Council and Parliament) engage in negotiations called “trilogue”. The current best guess is that an agreed text of the AI Act will be ready by the end of the year.

The quality of legislation and regulation produced by this process is controversial. Some people think the 2016 General Data Protection Regulation (GDPR) is eminently sensible, while others – including Higgins – think it is a sledgehammer to crack a nut.

The FLI open letter

Shortly after OpenAI launched GPT-4, the Massachusetts-based Future of Life Institute (FLI) published an open letter calling for a six-month moratorium on the development of large language models. BCS, the professional body of which Higgins was president, published a response arguing against it. He argues that it is a bad idea to call for actions which are impossible: good actors might observe a moratorium, but bad actors would not, so the effect would be to deny ourselves the enormous benefits these models can provide, without avoiding the risks that their further development will cause.

EU tech giants

Higgins argues that the prize for the EU in using large language models and other advanced AI systems is to create improved public services, and also to enhance the productivity of its private sector companies. Europe is home to many well-run companies producing extremely high-quality products and services – think of the German car industry and the legion of small manufacturing companies in northern Italy. He thinks this is more important than creating a European Google.

Europe’s single market is a work in progress. It works well for toilet rolls and tins of beans, and as prosaic as that sounds, it is an enormously beneficial system, created by the hard work of people from all over Europe – not least the UK, where Margaret Thatcher was one of its earliest and strongest proponents, before she turned Eurosceptic. The lies and exaggerations about the EU that were spread for many years by the Murdoch press, Boris Johnson and others have concealed from many Britons what an impressive and important achievement it is. As the Joni Mitchell song says, you don’t know what you’ve got till it’s gone, and the British are quickly realising how much they have lost by leaving. Unfortunately the malign hands of Murdoch, the Mail and the Telegraph continue to impede the obviously sensible step of re-joining the single market and the customs union, if not the EU itself.

Impressive as it is, the EU single market is far from complete, and Higgins believes that indigenous tech companies are more hindered by this than many other types of company. It is much easier to start and grow a tech giant in the genuinely single markets of the US and China than it is in Europe. Higgins argues that we cannot fix this, at least in the short term, so Europe should focus on the areas where it has great strengths. But it remains something of a mystery that the EU contains global champions in pharmaceuticals, energy, luxury goods, and financial services, to name a few, but seems unable to build any in technology.
