AI regulation in the EU: The risks of a human rights-based approach

Posted November 13, 2019
by Ben Hooper

European Commission President-elect Ursula von der Leyen has promised proposals for an AI law within her first 100 days in office. The EU will be adopting a human rights-based approach to this regulatory effort — focusing on impacts on individuals, and framing potential harms by reference to human rights concepts such as individual dignity and autonomy. Ben Hooper and Ying Wu explore three important consequences.

First, the policy discussion surrounding the drafting of the new AI law is likely to focus on what to prohibit or restrict, rather than on how to encourage the development and take-up of an important new technology that may boost productivity and deliver wider social benefits.

Second, a human rights approach tends to exclude certain types of trade-off from consideration. If possible harms are cast in human rights terms, then only weighty ‘public interests’ — such as state security, or the prevention and detection of serious crime — can justify exemptions from regulatory rules. Potential consumer benefits, or the prospect of greater dynamism in the economy, will generally not suffice.

Third, casting harms in human rights terms means that the regime will prioritise minimising, in advance, the risk of those harms arising. Given the difficulty of prediction, this almost inevitably leads to overinclusive rules, i.e. ones that — in the process of prohibiting innovations that would lead to harm — also prohibit innovations that could lead to good outcomes for consumers or society. A ‘wait and see’ approach, by contrast, would allow more experimentation in the market, and seek to correct harms in a targeted way if and when they arose.

Features, not bugs?

Advocates of the human rights approach likely consider these consequences to be features rather than bugs: together they should minimise the risk of a range of societal harms that could not straightforwardly be remedied after the event, such as AI-enabled voter manipulation, or AI-encoded bias in the criminal justice system. These represent real and serious threats, and a ‘wait and see’ approach to regulation might struggle to contain them before irreparable damage is done.

It could be argued that the importance of avoiding these types of harm justifies limiting the growth of AI. The EU’s position is, however, more ambitious: it wants to prevent these harms, but it also maintains that its preferred mode of regulation will simultaneously stimulate the growth of AI in the EU. How is the circle squared? By the claim that growth requires trust, and that trust is promoted by a human rights-based approach.

This claim is certainly a convenient one for the EU. But there is no real evidence for it. And the lukewarm response to the EU’s recent efforts to pilot its ‘Ethics guidelines for trustworthy AI’ suggests that companies remain unpersuaded.

The claim is also strikingly similar to the EU’s earlier claim that the GDPR would stimulate the digital economy by driving up consumer trust. But that claim is looking increasingly suspect. Although consumers report that they care about privacy, their behaviour suggests that privacy plays little part in their decisions online — a phenomenon known as the ‘privacy paradox’. In time, we may see the emergence of an ‘AI paradox’, in which consumers say they don’t want AI dictating their choices but in practice gravitate to the companies that lean most heavily on AI.

A risky bet

So the EU is in effect just hoping that trust will drive AI growth. Does this matter?

This depends on what the rest of the world does. The EU aspires to lead the global debate on AI regulation, and it may think that the GDPR is a promising precedent here: the GDPR has come to be regarded as the global gold standard for privacy regulation, and it has inspired other jurisdictions to update their privacy laws.

But other jurisdictions may view keeping the lead in AI as more important than keeping up with privacy standards. And the reality is that the other key players — China and the US — are unlikely to follow suit when it comes to AI regulation. China considers the collective good to be as important as individual rights, meaning that Chinese regulatory moves may be of a very different character. The US is closer to the EU in theory, but wide-ranging legislation to regulate AI at the federal level seems a distant prospect.

In a world where the EU becomes the regulatory outlier, the EU’s hope that trust will drive AI growth looks like a risky bet.

If trust turns out not to be a prime driver of growth, then the EU will have a regulatory regime that restricts innovation whilst conferring no competitive advantage. Other jurisdictions, with more permissive regimes, will be more likely to attract leading talent and, in time, to nurture the most successful AI-driven companies. It is not clear that the other measures the EU is currently contemplating to stimulate Europe’s digital sector will be enough to bridge the gap. Nor will the EU regime be able to flex in response by making different trade-offs for the benefit of consumers or to boost market dynamism: these types of trade-off will have been excluded at the outset.

In this future, EU consumers and businesses would come to prefer the AI-driven products of foreign companies to those of their more trustworthy EU rivals. This would make it increasingly hard for the EU to enforce its more restrictive regime. Competition, in the form of more powerful or effective AI, would be only ‘one click away’. Policing customer preferences in the digital space would require ever greater effort and resources.

But losing the trust bet is about much more than the practicalities of enforcement. AI looks set to be the most important technology of the next few decades. Given these stakes, losing the global AI race would have immense implications for the EU’s future competitiveness.