How will the new EU regime impact the UK AI sector?
John Buyers, partner and head of Artificial Intelligence at Osborne Clarke, shared his views with Business Leader on the future of AI in the UK.
The EU Commission is proposing a whole new regulatory framework and infrastructure for developers and providers of Artificial Intelligence (AI) systems.
This includes conformity assessments, labelling obligations, a new EU-level body, new national enforcement roles and a set of heavy penalties for non-compliance, potentially even higher than GDPR fines in some areas. The legislation will have a global impact on the AI sector, since it will create new regulatory barriers for supplying or developing AI systems for EU customers.
This new proposal follows the Data Governance Act published at the end of 2020, which similarly envisages a new regulatory framework for the EU data ecosystem. For businesses active in the EU in the AI field, with data as a key raw material, there will be a lot to get to grips with in the coming years, and a swathe of new regulatory risk to manage.
The Commission mentioned in its press conference the very significant engagement from all types and sizes of organisation with its consultation process leading up to this proposal – that’s likely to continue. We can expect significant lobbying on today’s proposals and the provisions are likely to shift and change as this draft makes its way through the legislative processes. This document has a long way to travel.
The regime provides for a comprehensive mechanism to enable new AI regulators to examine the underlying logic (algorithms) in AI systems, as well as far-reaching audit reviews of the data sets used to train systems (including training methodologies). Despite provisions (i.e. Article 70) which call for confidentiality and proprietary rights in such systems to be respected, this is likely to cause international providers of such systems to "blink" and weigh the merits of entering the EU market. Much of the protection for such systems (and their perceived competitive edge) is based around trade secrets, and businesses will inevitably be reluctant to open their valuable intellectual property up to such scrutiny.
INCREASE IN FINES
The scope for penalties has been massively extended compared to the leaked draft legislation. The most eye-watering fine proposed is set at the higher of 6 per cent of worldwide turnover or €30m, for supplying prohibited AI or breaching the data and data governance requirements for "high risk" AI systems.
The proposed new fines are not only higher than under the GDPR regime for non-compliant use of data but also extend far beyond personal data, covering any data used in training, validating and testing high-risk AI systems. As the understanding of the commercial value of data grows, so does the need for careful governance and compliance in making use of it.
Producers of AI that use personal data should also be mindful of the compounding risk of joint fines under both GDPR and this AI regime – it seems to us that the risk of infringing both regimes is heightened, as there is a high degree of overlap between the measures.
HIGH RISK AI
High risk AI systems will need to comply with detailed obligations, including around the data sets used to train, validate and test machine learning systems. These obligations are quite extensive, requiring a thorough understanding and scrutiny of the data put through an AI system. The requirement that data sets “take into account” the particularities of the geographic, behavioural or functional setting where the AI system is to be used offers a potential boost to the EU’s data ecosystem, potentially driving demand for home-grown training sets. This is in line with the EU’s overarching push for “tech sovereignty” and self-sufficiency.
There is already an AI skills shortage, and one question springs to mind: will there be sufficient appropriately skilled people to undertake the extensive assessment processes envisaged under this new regime? AI is a very complex field, and although much of the emphasis is on self-certification, regulators will need a sophisticated level of skill and understanding if they are to deliver effective scrutiny and cross-examination of providers' self-assessments.
WHAT WILL THE UK DO?
The post-Brexit question of whether the UK should align or diverge comes into sharp focus with this legislation. The regime will likely increase both time to market and cost for developing AI tools for the EU market. The UK could differentiate itself – and protect its leading position in AI development in Europe – by continuing to address particular issues through particular regulators or in particular verticals, rather than attempting "one size fits all" horizontal regulation across all applications.
On the other hand, technology markets are rarely national, so many British AI developers will have to grapple with this new legislation in any case.