The Future Lawyer Weekly Update – w/c 20th December
December 23, 2021
Article by Shivanii Arun
Artificial intelligence (AI) can be thought of as ‘a group of digital techniques used to perform tasks previously requiring cognitive intelligence’. Under this umbrella definition, the prevalence of AI in the modern world is obvious. Algorithms and classification devices feature in online dispute resolution systems like eBay’s, self-driving technology, digital voice assistants and biometric devices on social media, amongst multiple other facets of everyday life. Usage of AI isn’t limited to private companies, either – countries worldwide use algorithms in a public context to manage tasks like visa streaming, recidivism prediction, immigration planning and loan approvals. In China, for example, ‘internet courts’ have been deciding millions of legal cases since 2019 without requiring citizens to appear in person.
With developments like these, an international spotlight has been cast on the future of AI. The EU, home of the General Data Protection Regulation (GDPR), wants to be the one running the show from backstage.
EU legal attitudes towards AI
The GDPR, adopted in 2016, aimed to curtail the exploitation of European inhabitants’ personal data in response to the growing volume of it being harvested and sold by multinational corporations. This cemented the EU’s role as a stern – some say draconian – guardian of personal protections and privacies amid rapid technological evolution. Since then, prompted by the flood of digital development churned out by Silicon Valley in particular, Brussels has also released a ‘Digital Services Act Package’ aimed at U.S. platform giants. The Digital Services Act and Digital Markets Act both aim to protect the fundamental rights of digital users and to establish a level playing field for competition and innovation – in the European Single Market, and globally.
To round off this strong digital legislative agenda, in April this year the EU finally set its sights on AI with a draft AI Regulation. This was touted as the ‘first ever AI legal framework’, and contains obligations for providers, manufacturers, importers, distributors and users of AI. Furthermore, these restrictions apply not only to AI developed in Europe, but to any technology whose effects are felt in Europe. The EU aims to apply a tiered ‘risk’ system, scaling demands for accountability and transparency with an AI system’s growing influence in both public and private spheres. Of this extensive digital control, Margrethe Vestager (Executive Vice President for ‘A Europe fit for the Digital Age’) states that ‘on artificial intelligence, trust is a must, not a nice to have’. Whether or not the EU’s restrictive approach will engender such ‘trust’ remains to be seen.
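To make the tiered approach concrete, below is a minimal sketch of how a compliance team might triage AI systems against the draft Regulation’s risk categories. The tier names and the mapping of use cases to tiers are illustrative assumptions drawn from common summaries of the proposal, not an official taxonomy or legal advice.

```python
# Hypothetical triage of AI use cases against the draft Regulation's risk tiers.
# Tier names and mappings are illustrative assumptions, not an official taxonomy.

RISK_TIERS = {
    "unacceptable": ["social scoring by public authorities", "subliminal manipulation"],
    "high": ["recruitment screening", "credit scoring", "visa streaming"],
    "limited": ["chatbots", "deepfakes"],  # transparency duties only
}

OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment, logging, human oversight",
    "limited": "disclose that users are interacting with AI",
    "minimal": "no new obligations",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to minimal risk."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"

tier = classify("visa streaming")
print(f"visa streaming -> {tier}: {OBLIGATIONS[tier]}")
# visa streaming -> high: conformity assessment, logging, human oversight
```

The point of the sketch is the shape of the regime: the higher the tier, the heavier the compliance burden.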
The UK standpoint on AI development
Post-Brexit UK appears to have departed from the example of its European counterparts by attempting to disapply Article 22 of the GDPR. Article 22 currently guarantees that algorithmic decisions – an online decision to award a loan, say, or an automated recruitment aptitude test – can be double-checked by a human moderator to ensure fairness (illustrated in the sketch below). Yet a government task force led by Brexiteer Iain Duncan Smith found that this safeguard makes it ‘burdensome, costly and impractical’ for organisations to use AI for routine processes. Hence, in a move towards greater AI usage in the public sphere, the task force has suggested a total departure from mandatory human review – the antithesis of the EU’s ‘trust first’ approach. Oliver Dowden, the Culture Secretary, has vocally supported this measure to deliver a ‘data dividend’ for the UK economy and boost innovation. Indeed, an industrial white paper in 2020 suggested that the UK could become a world leader in AI if given greater freedom from the restrictive regulatory controls of the EU. Though clearly a speculative comment, it is true that some of the European Commission’s AI regulations – such as the ban on ‘unacceptable’ AI – have halted developments in areas such as facial biometrics in Europe, despite the prowess of other countries in this field.
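As a rough illustration of the Article 22 safeguard at the heart of this debate, the sketch below gates a purely automated loan refusal behind a human review step. The scoring model, threshold and reviewer behaviour are hypothetical placeholders; the point is only the shape of the human-in-the-loop check that the UK proposal would make optional.

```python
# Illustrative human-in-the-loop gate in the spirit of GDPR Article 22.
# The model, threshold and reviewer logic are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class LoanDecision:
    applicant_id: str
    score: float              # output of some credit-scoring model
    approved: bool
    human_reviewed: bool = False

def automated_decision(applicant_id: str, score: float) -> LoanDecision:
    """Purely algorithmic decision: approve only if the score clears a threshold."""
    return LoanDecision(applicant_id, score, approved=score >= 0.7)

def request_human_review(decision: LoanDecision) -> LoanDecision:
    """Article 22-style safeguard: a person re-examines the algorithmic outcome.
    The 'reviewer' here is simulated; in reality this is a manual step."""
    decision.human_reviewed = True
    if not decision.approved and decision.score >= 0.65:
        decision.approved = True   # a reviewer may overturn borderline refusals
    return decision

decision = automated_decision("applicant-42", score=0.68)
if not decision.approved:          # the data subject contests the outcome
    decision = request_human_review(decision)
print(decision)
```

Disapplying Article 22 would, in effect, delete the `request_human_review` step: the algorithmic refusal would stand, with no guaranteed route to a human.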
Conclusion
As technology and AI continue to develop at pace globally over the next decade, this friction between progress and security will likely continue to chafe EU-UK relations. Without further binding restrictions from the EU, the UK will certainly seize the chance to take bolder risks with AI, no longer slowed by seemingly bureaucratic checks on the power of technology. This could be the chance the UK needs to storm to the forefront of innovation as a true competitor with Silicon Valley, rather than playing catch-up with digital restrictions and regulations.
Yet, on the other hand, removing those restrictions could also unleash a lot of trouble for the UK. If AI is to make critical public decisions without human supervision – such as its proposed independent usage for citizenship requests – it arguably needs a slim-to-nonexistent margin of error to comply with human rights. So far, however, AI usage in the public sphere has not reached this level of accuracy. With historical failures such as the A-level grading fiasco of 2020, a racist visa-streaming algorithm and a faulty language-test checker that wound up deporting thousands of students, the error margin for AI in the UK appears rather high. When a single error can affect lives as profoundly as public decisions do, using the public as a laboratory for AI development is dangerous. It is, after all, difficult to put the genie back in the bottle.
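To see why the margin of error matters so much at public scale, a quick back-of-the-envelope calculation helps. Both figures below are illustrative assumptions, not official statistics.

```python
# Back-of-the-envelope: even a small error rate is large at public scale.
# Both figures are illustrative assumptions, not official statistics.

decisions_per_year = 3_000_000   # e.g. automated visa and citizenship decisions
accuracy = 0.99                  # an optimistic 99% accuracy for the system

wrong_decisions = decisions_per_year * (1 - accuracy)
print(f"{wrong_decisions:,.0f} people affected by erroneous decisions per year")
# 30,000 people affected by erroneous decisions per year
```

At that rate, a system that is right 99 times out of 100 would still wrongly decide tens of thousands of cases a year.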
Both the UK’s high-risk-high-reward approach and the EU’s propensity to err on the side of safety evidently have merit. The question is – which side will blink first?