By Imran Chaudhri
Introduction
Artificial Intelligence (AI) is defined as computers that can “perform tasks commonly associated with intelligent beings”. After recent generative AI developments, such as ChatGPT, were released, the world rushed to predict whether they marked the beginning of the end for the job market or the first step towards individuals becoming truly enhanced creatively. These developments consist of “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” Regulators, particularly within the EU and UK, have had to grapple with these developments as they currently legislate safeguards for the development of AI. This article considers the potential impact of prospective AI legislation on the UK’s healthcare industry in light of AI’s rich history.
A Quick Recap
AI is a concept that many would associate with the existence of modern computers. However, Alan Turing is credited with pioneering AI developments as long ago as 1950. Since his renowned paper “Computing Machinery and Intelligence”, many other scientists developed programmes between the 1950s and 1960s that could perform basic tasks, such as playing noughts-and-crosses and generating love letters.
Further progress through the 1970s and 1980s shifted the eventual target of AI innovation towards “produc[ing] performance that would be certainly ascribed to intelligence if produced by a human” rather than producing “machines that think”. Thinkers such as Margaret Boden and Andy Clark helped to define this distinction, leading the field to pursue fundamentally different criteria when building these machines and programmes.
Interest in AI was reignited in the 1980s by John Hopfield and David Rumelhart’s popularisation of “deep learning techniques which allowed computers to learn using experience”, and by the rise of expert systems, which collect knowledge from human experts across a range of questions so that the programme can provide answers for non-experts. This exciting conceptualisation of intelligence in relation to AI programmes led to heavy investment from countries such as Japan and the USA.
Until the 1990s, the development of AI went through a period of stagnation, causing some funding sources to dwindle away. However, in 1997 IBM’s Deep Blue, a chess-playing computer programme, shocked the world by beating the reigning world chess champion Garry Kasparov. Dragon Systems’ speech recognition software was implemented on Windows soon after, and the robot Kismet conquered the ‘human emotion’ challenge, as it could “recognise and display emotions”, giving the Turing Test a substantial challenger (the Turing Test is a method of determining whether a machine can “demonstrate human intelligence” by observing whether it can “engage in a conversation with a human without being detected as a machine”).
Since the invention of the internet, we have reached the age of big data, where large volumes of information are available for consumption. Machine learning, a subset of AI that focuses on the use of algorithms to learn from data and make informed decisions (deep learning is, in turn, a subset of machine learning), propelled the AI world to think more ambitiously. Between 2011 and 2020, Apple released Siri, Germany developed an advanced traffic sign recognition neural network, generative adversarial networks were invented (allowing photos and deepfakes to be generated), facial recognition systems were established by the likes of Facebook and Apple, and self-driving cars were tested by Uber.
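For readers who have not seen what “learning from data” looks like in practice, the short Python sketch below is a purely illustrative example (not drawn from this article, and using made-up revision-hours data): instead of being given a hand-written rule, the programme estimates a simple straight-line relationship from example inputs and outputs and then uses it to make a prediction.

```python
# A minimal, illustrative sketch of machine learning: the rule is estimated
# from examples rather than programmed by hand. Data below are hypothetical.
import numpy as np

# Hypothetical training examples: hours of revision (input) and exam marks (output).
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
marks = np.array([52.0, 58.0, 65.0, 71.0, 78.0])

# "Learn" a straight-line model (marks ≈ slope * hours + intercept) by least squares.
X = np.column_stack([hours, np.ones_like(hours)])
slope, intercept = np.linalg.lstsq(X, marks, rcond=None)[0]

# Use the learned model to make an informed prediction for an unseen input.
print(f"Predicted mark after 6 hours of revision: {slope * 6 + intercept:.1f}")
```

Modern deep learning systems apply the same basic idea, but with models containing billions of parameters trained on vast datasets rather than a single fitted line.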
The latest, and arguably most significant, leap in AI has come through Large Language Models (LLMs). In 2018, OpenAI released GPT, which has subsequently been surpassed by improved LLMs, bringing us roughly up to the present day.
Why is ChatGPT so Important in the Healthcare Sector?
Considering that the last 70-80 years have hosted an immense wave of innovation, it is worth asking: what specifically has the world so excited about OpenAI’s LLM?
The UK’s NHS has faced several issues in recent years, ranging from bureaucracy-fuelled mismanagement to negative patient experiences. With over 7.4 million people now waiting for treatment from an institution that seemingly cannot afford to treat them, and which already anticipates overspending its 2024 budget by at least £7bn, the NHS’s best hope might be to find a genie’s lamp lying in one of its wards.
In seeking a technical overhaul of its system, the NHS should perhaps look to AI to solve its problems. Although previous attempts to utilise AI, such as Babylon Health’s digital-first service in 2018, were unsuccessful, the expansive variety of applications of LLMs, and of AI generally, should entice the NHS to continue evaluating their uses. Google recently launched a healthcare-focused LLM called MedLM, which aims to help applications like Augmedix assist physicians with tasks such as taking medical notes hands-free.
Technology such as this would arguably have major benefits for the daily activities of healthcare professionals in the NHS, with minimal operational downside. The true concern for supporters of AI within the NHS is the regulators. The EU recently drew up a wide-ranging AI Act that seeks to control AI companies making high-risk technologies, aiming to balance commercial, economic, and social considerations. That said, some of its restrictions could be argued to threaten the beneficial advancements that the AI industry has achieved in recent years.
Regulation Preventing Revelations?
Within the politically agreed EU AI Act, the scope for characterising an AI system’s risk as unacceptable seems justifiable, because many of the Act’s red flags concern processes such as the biometric categorisation of people, biometric facial recognition, and cognitive behavioural manipulation. These processes would harmfully infringe the public’s rights to privacy and freedom, and safeguards should therefore be in place to protect the public. However, the requirement for systems in specific areas, such as law enforcement, to be assessed before being brought to market could make companies in EU member states less competitive, as they would take longer to bring innovative products to consumers.
Since the UK has left the EU, this new AI Act will not be immediately binding on it. However, if there is a wider international shift towards protecting the public rather than promoting innovation, for example if the USA follows suit, then the UK may feel obliged to fall in line. The government is set to publish a series of tests that would need to be met before new laws on artificial intelligence are passed; the tests would act as an intervention if the UK’s new AI Safety Institute fails to identify risks around the technology. According to the Financial Times, the government is thought to take the view that, before a company’s AI system is restricted, there would need to be evidence that such a move would mitigate the risks of AI without stifling innovation.
What Should We Take from This?
Throughout this article, many different aspects of AI have been examined. The history of AI shows that the technology has developed drastically, from algorithms that could display impressive individual skills to models that can consume enormous volumes of data and produce strikingly accurate imitations of human skill sets, e.g. writing, painting, and analysing.
The sudden evolution of this industry has both excited and frightened the public, leaving the world divided on how to nurture, or obstruct, AI’s future growth. Either way, the EU AI Act is a step in the right direction towards commencing a much-needed conversation on the future of AI. Whether some of the provisions, such as the one requiring companies to have their AI systems assessed before going to market, are too restrictive remains to be seen. The difficulty with this technology is that we do not yet truly know what we are protecting. Although ChatGPT is impressive today, it may be completely overshadowed by another LLM in two years’ time, so it is arguably justifiable that the EU and UK stay on the back foot when it comes to regulating AI.
Sources:
B.J. Copeland, ‘Artificial Intelligence’, Encyclopaedia Britannica (Online Encyclopaedia, 12 Jan 2024) <https://www.britannica.com/technology/artificial-intelligence> accessed 12 Jan 2024
Kim Martineau, ‘What is generative AI?’, IBM (Web Page, 20 Apr 2023) <https://research.ibm.com/blog/what-is-generative-AI> accessed 12 Jan 2024
Committee on Standards in Public Life, Artificial Intelligence and Public Standards: report (Independent Report, 10 Feb 2020) 17-20
Rockwell Anyoha, ‘The History of Artificial Intelligence’, SITN Harvard (Blog, 28 Aug 2017) <https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/> accessed 13 Jan 2024
Stanford Encyclopedia of Philosophy, ‘The Turing Test’ (Web Page, 4 Oct 2021) <https://plato.stanford.edu/entries/turing-test/> accessed 13 Jan 2024
Jake Frankenfield, ‘The Turing Test: What Is It, What Can Pass It, and Limitations’, Investopedia (Web Page, 31 Jul 2023) <https://www.investopedia.com/terms/t/turing-test.asp#:~:text=by%20Melody%20Kazel-,What%20Is%20the%20Turing%20Test%3F,it%20has%20demonstrated%20human%20intelligence.> accessed 13 Jan 2024
Ron Karjian, ‘The history of artificial intelligence: Complete AI timeline’, TechTarget (Web Page, 16 Aug 2023) <https://www.techtarget.com/searchenterpriseai/tip/The-history-of-artificial-intelligence-Complete-AI-timeline> accessed 13 Jan 2024
Sam Freedman, ‘How bad does the NHS crisis need to get?’, Institute for Government (Web Page, 13 Jun 2023) <https://www.instituteforgovernment.org.uk/comment/how-bad-nhs-crisis#:~:text=Patient%20satisfaction%20with%20the%20NHS,record%20high%20of%202.55%20million.> accessed 13 Jan 2024
Toby Helm and D. Campbell, ‘NHS sinks into £7bn cash crisis as inflation and strikes bite’, The Guardian (Online News, 17 Sep 2023) <https://www.theguardian.com/society/2023/sep/17/nhs-sinks-into-7bn-cash-crisis-as-inflation-and-strikes-bite#:~:text=The%20NHS%20is%20heading%20into,according%20to%20leading%20independent%20experts.> accessed 13 Jan 2024
NHS, ‘GP at Hand – Fact Sheet’, NHS (Online Fact Sheet) <https://www.england.nhs.uk/london/our-work/gp-at-hand-fact-sheet/> accessed 13 Jan 2024
David Chou, ‘Google Launches A Healthcare-Focused LLM’, Forbes (Web Page, 17 Dec 2023) <https://www.forbes.com/sites/davidchou/2023/12/17/google-launches-a-healthcare-focused-llm/> accessed 13 Jan 2024
Christina Criddle and A. Gross, ‘UK government to publish ‘tests’ on whether to pass new AI laws’, Financial Times (Online News, 12 Jan 2024) <https://www.ft.com/content/61630015-faaa-4f16-a8aa-67787f46fafe> accessed 13 Jan 2024
European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (Web Page) <https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 13 Jan 2024