March 20, 2024
Article by Samuel Page
Introduction:
The aim of this article is to compare the approaches of several jurisdictions to the regulation of the rapidly evolving field of artificial intelligence (AI). Across the jurisdictions examined, three key trends emerge: a so-called human-centric foundation to AI regulation; the adoption of a risk-based approach; and a growing momentum for international collaboration. The article also includes a brief overview of interesting developments across major jurisdictions.
Key trends:
A global benchmark
In 2019, the Group of 20 countries (G20) adopted the OECD AI Principles to assist governments and organisations in developing their approach to artificial intelligence in a sustainable and human-centric way. The principles include ensuring AI does not breach the vital safeguards of human rights and the rule of law, committing to the benefits of AI being distributed throughout society, and guaranteeing that organisations have appropriate procedures in place to provide accountability.
Emerging policy in the field of AI by these countries is consistent with these principles. For example, in Canada, pending legislation seeks to establish mandatory requirements for high-risk applications of AI, such as its use in hiring decisions or in policing and national security. These accord with the OECD principles of accountability and robustness in managing the potential dangers posed by AI. There are also close parallels between these principles and the eight principles and priorities adopted by President Biden’s Executive Order of 30 October 2023, which included a focus on safety and security, on investment in education and training and on a commitment to civil rights and workers’ rights.
Though the specific policies presently vary across jurisdictions, ranging from voluntary guidance (for example, in the UK, Japan and Singapore) to a mixture of guidance and mandatory rules (for example, in the EU's AI Act), the principles outlined by the OECD and adopted by these countries are at the centre of all thinking in the sector.
A risk-based approach
Building on the above, it is evident from various government press releases and formulated policy that a risk-based approach is favoured over a prescriptive rules-based methodology.
This approach involves identifying the perceived risks that specific activities in the field of AI pose and tailoring regulations to prevent those risks from harming the core principles outlined by the OECD, such as privacy, transparency, security and non-discrimination. The benefit of this methodology appears to be that it introduces an element of oversight without being overly prescriptive, with the aim of facilitating innovation. In theory, it also keeps the cost of compliance proportional to the negative impacts the regulations are intended to prevent.
The EU’s AI Act and Canada’s AI and Data Act are good examples of this approach. Both pieces of legislation would use risk and impact assessments to categorise distinct AI systems. Each category would then be assigned certain compliance obligations for industry operating in those areas to follow, with higher risk categories naturally involving more onerous obligations. It is thought that this method will prevent any rules introduced from quickly becoming outdated in this rapidly evolving environment.
However, there is a notable exception. The EU, whilst largely adopting the risk-based approach, has included within its AI Act mandatory prohibitions of certain uses of AI across the Union, deeming their threat too great even at this early stage of global regulation.
Growing momentum for consensus
As well as adopting the same core principles within the G20, there have been other notable instances of global cooperation in the past 12 months. For example, in October 2023 the G7 reached agreement on International Guiding Principles on Artificial Intelligence and on a Code of Conduct for AI Developers, which aim to provide guidance to industry and promote the safety and trustworthiness of AI systems. Outside the G7 and the G20, the United Nations has worked to bolster international cooperation, notably through the UNESCO Recommendation on the Ethics of AI, which has been adopted by countries including China, Egypt, India, Peru, Saudi Arabia, Singapore and South Korea. Further, in November 2023 the UK hosted an AI Safety Summit, attended by the US, China and many African countries, which led to the Bletchley Declaration on unlocking the potential of AI whilst restricting its use on safety grounds.
A brief overview of some interesting developments in regulation:
European Union
The most significant development in the regulation of AI has come from the EU. In December 2023, the EU AI Act completed the political trilogue stage, with agreement reached between the European Commission, Council and Parliament. This sweeping piece of legislation offers the most comprehensive regulation of AI seen to date. The EU appears intent on setting a global standard, akin to its success with the GDPR, which prompted a raft of copycat legislation falling in line with EU models.
In brief, the AI Act creates harmonised rules for placing AI on the EU market, applying to EU Member States and to any third-country providers and deployers operating AI systems within the EU market. The Act is 'sector agnostic', meaning it applies to the use of AI in general rather than adopting separate rules for its use in different industries.
As mentioned above, the Act categorises the use of AI based on risk and sets certain requirements depending on the category into which that usage falls. Among the most stringent are the requirements applying to general-purpose AI systems, including generative AI, that pose possible "systemic risk". These include human oversight, reliable documentation, risk and quality management systems and robust cybersecurity measures. For the minimal-risk categories, initial risk assessments and ongoing transparency requirements are the prescribed limit, though the Act envisages companies committing to voluntary codes of conduct for responsible use. In a global first, the Act also prohibits certain AI systems deemed manifestly injurious to vital rights, including the creation of facial recognition databases of the kind operated in the US by the controversial Clearview AI system.
A notable addition still under consideration is the EU's AI Liability Directive, which would give people harmed by AI technology recourse to financial compensation. Again, this seems set to establish a global benchmark, and other jurisdictions may implement similar compensation mechanisms in the wake of whatever the EU introduces.
China
China was one of the first countries to implement AI regulations and, as a G20 member, has endorsed the G20's OECD-based AI principles. It also participated in the UK's AI Safety Summit and has adopted the UNESCO Recommendation on the Ethics of AI.
However, its regulations are currently piecemeal, introduced to address concerns as and when they have arisen. The first piece of regulation, for example, addressed recommendation algorithms, which power everything from social media feeds to navigation apps, and targeted the promotion and dissemination of content online. More recent policies have covered autonomous vehicles, AI's use in medicine and its use for facial recognition.
Due to the opaque nature of much Chinese policy, it is difficult to ascertain exactly what is in place, though I understand that Chinese lawmakers are currently drafting a comprehensive piece of AI regulation that will bring together existing rules and implement additional policies. How prescriptive an approach it adopts is anyone’s guess, but it will be interesting to see whether they attempt alignment with Western developments in light of their commitments on the world stage or adopt a more individualistic approach.
Canada
The Canadian government is looking to adopt a stricter approach akin to the EU's, with its anticipated AI and Data Act (AIDA) as the chosen vehicle. AIDA's key aims are to ensure that high-impact AI systems meet existing safety and human rights expectations, and to prohibit reckless and malicious uses of AI. Alongside this, the government has published a code of practice for generative AI developers in anticipation of AIDA and to support compliance with it.
Additionally, an updated Directive on Automated Decision-Making was issued in 2023, imposing requirements on the federal government's use of automated decision-making systems in an effort to lead the private sector by example. How effective this will be, given the private sector's different concerns, remains to be seen.
Notably, the provincial government of Ontario is currently deciding whether to amend the Ontario Working for Workers Act to require employers to disclose the use of AI in their hiring processes. This appears to be the first measure of its kind in the employment field and could spark similar developments in other countries.
Japan
The Japanese approach, laid out in its National AI Strategy (2022), rests on the notion of 'agile governance': the government provides non-binding guidance for business and defers to the private sector's voluntary efforts to self-regulate. This has been developed through several publications, including the AI Utilisation Guidelines and, in 2023, the draft AI Operator Guidelines, which seek to clarify how operators should develop, provide and use AI.
Like the UK government's, this hands-off approach is an effort to foster innovation and to position Japan as a global AI hub.
United States
The nature of US regulation became apparent at the end of October 2023 with the issue of President Biden's executive order on AI, which called for more transparency and the implementation of new standards. The reliance, it seems, will be on individual agencies creating their own rules, with each sector of the economy regulated differently.
It is expected that a grading system that ranks types and uses of AI by how much risk they pose – similar to the framework in the EU’s AI Act – will be adopted. Indeed, the National Institute of Standards and Technology has already proposed a framework for each sector and agency to put into practice in the coming months.
While some progress is being made in the US, the national strategy ultimately depends on who wins the upcoming election, with the possibility of a second Trump administration treating this burgeoning sector as another front in its ongoing trade war with China.
United Kingdom
In February 2024, the UK Government revealed its response to the 2023 White Paper on regulating AI. The principles remain largely unchanged, which accords with the messaging in Government press releases of a ‘pro-innovation’ approach. It has adopted an outcome-based framework for regulating AI, underpinned by five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The intention is for existing regulators within each sector of the economy to apply existing laws and issue supplementary regulatory guidance, which is to be in place by 30 April 2024. The Government will also issue voluntary safety and transparency measures to supplement the individual changes made by regulators. While envisaging amendments to the law eventually, the Government deems them unnecessary at present.
The challenge for the UK is how its approach will meld with global industry trends. There is a risk that, in the desire not to restrict innovation, ambiguity in how policies are enacted will lead to uncertainty and an unwillingness among businesses to establish themselves in the UK. Further, while AI is obviously a digital tool with global impact, adopting an approach seemingly at odds with the UK's closest trading partner, the EU, may prove counterproductive, as companies wishing to trade across the 27 Member States will more readily adopt EU rules.
Conclusion:
Looking at these approaches in the round, it seems 2024 will be the year that rules and procedures on the regulation of AI are adopted in all major jurisdictions. Consensus is firmly on the agenda, with further AI Safety Summits planned in South Korea and France later this year. However, it remains to be seen how each jurisdiction will develop its own rules as many seek to be at the forefront of innovation in this potentially revolutionary technology. In my submission, the significance of global collaboration should not be underestimated: comprehensive and effective safeguards adopted across the world will be necessary to meet the complex challenge presented by the ethics and governance of AI in the coming years.