AI in Insurance
November 17, 2024
By Marko Laughlin
The rise of Artificial Intelligence (AI) has not been without controversy, as it raises significant ethical, security, privacy, and copyright concerns. While some argue that AI could create new problems, others believe it may offer solutions to existing challenges.
AI has had an increasing role in the commercial sector, particularly in insurance. Insurers have utilised AI to enhance cost-efficiency and accuracy in personalising offers and risk modelling, reflecting broader trends toward technology-driven operations focused on productivity and efficiency.
The Financial Conduct Authority (FCA) regulates and supervises banks, insurers, and investment firms, ensuring consumer protection and promoting competition. Its Chief Executive, Nikhil Rathi, emphasised the need for greater financial inclusivity to avoid further marginalising vulnerable customers as AI becomes more prevalent. He also stressed the importance of addressing the risks that could arise from an overreliance on AI in financial services.
The benefits and the risks
What are the benefits, and what are the issues? AI has already demonstrated its positive influence in the sector, delivering benefits for customers that would otherwise not have been available.
For example, AI-based services such as anonymous chatbots help mitigate the stigma associated with financial difficulties, allowing customers to engage more openly with financial institutions. As the FCA Chief noted, citing work by FinTech Scotland, customers are more likely to seek financial advice, particularly about debt and other sensitive economic problems, suggesting that AI could help overcome psychological or societal barriers.
AI has also introduced innovation in areas such as credit scoring. Finexos uses AI to generate alternative credit scores for individuals with limited credit histories, expanding access to financial services even for new customers with little financial history. The predictive accuracy of AI systems suggests that they could produce fairer and more consistent results than human workers, who may be prone to error or implicit bias.
Customers may benefit from AI’s use of live data, which could improve individual services. Hyper-personalisation enabled by AI provides tailored premiums and services and could reduce costs for many, allowing for more financially accessible services for customers and insurers to have a stable, dedicated customer base. However, this raises questions about the ethics and fairness of relying solely on AI-driven decision-making.
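As a minimal sketch of what hyper-personalised pricing might look like, the example below applies multipliers for individual risk factors to a base premium. The base rate, factor names, and multipliers are all invented for illustration and do not reflect any real insurer's model.

```python
# Illustrative sketch of AI-style premium personalisation.
# The base rate, risk factors, and multipliers below are invented
# for demonstration only.

BASE_ANNUAL_PREMIUM = 500.0  # assumed baseline, in GBP

# Hypothetical multipliers derived from "live" customer data.
RISK_MULTIPLIERS = {
    "low_mileage": 0.85,      # telematics shows below-average driving
    "urban_parking": 1.20,    # vehicle kept in a higher-claim area
    "safe_braking": 0.90,     # smooth braking profile from sensor data
}

def personalised_premium(base: float, factors: list[str]) -> float:
    """Apply each observed risk factor's multiplier to the base premium."""
    premium = base
    for factor in factors:
        premium *= RISK_MULTIPLIERS.get(factor, 1.0)  # unknown factors: neutral
    return round(premium, 2)

# A careful driver who parks in a city combines discounts and a loading:
quote = personalised_premium(BASE_ANNUAL_PREMIUM,
                             ["low_mileage", "urban_parking", "safe_braking"])
print(quote)  # 500 * 0.85 * 1.20 * 0.90 = 459.0
```

Even this toy version shows the tension the paragraph above describes: each factor shifts the price, so whoever chooses the factors and multipliers effectively decides who pays more.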
AI systems make decisions from historical data and patterns, so existing historical biases may be replicated, potentially leading AI to discriminate against vulnerable groups. For example, AI could render some groups effectively uninsurable or apply different standards and assessments based on certain factors: those with previous financial difficulties or poorer health may face discrimination. Because AI applies rigid, data-driven rules, these groups may face higher premiums or limited access to insurance, making such services less affordable or even inaccessible to them.
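How historical bias carries into an automated decision can be shown with a deliberately simple toy model. The data below is entirely invented: group "B" was historically approved less often, and a naive model that "learns" from those outcomes simply reproduces the disparity.

```python
# Toy illustration (invented data): a naive model trained on biased
# historical approval decisions reproduces the bias.

# Hypothetical records: (group, approved). Group "B" was historically
# approved far less often, regardless of actual risk.
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 40 + [("B", False)] * 60)

def learned_approval_rate(records, group):
    """'Train' by memorising the historical approval rate per group."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

def model_decision(records, group, threshold=0.5):
    """Approve only if the learned group rate clears the threshold."""
    return learned_approval_rate(records, group) >= threshold

print(model_decision(history, "A"))  # True  (rate 0.9)
print(model_decision(history, "B"))  # False (rate 0.4)
```

Real underwriting models are far more complex, but the failure mode is the same: if past decisions were skewed against a group, a model optimised to match those decisions inherits the skew.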
Similarly, there is a growing trend in the use of AI for fraudulent claims and scams, with fraudsters employing similar AI technologies to those used by insurers. Claim handlers have reported an increase in AI-generated fraudulent claims, making fraud harder to detect and prevent, and allowing fraudsters to generate claims faster and more efficiently.
Additionally, scams and AI-driven hoaxes have spread on social media, relaying disinformation and incorrectly influencing markets. These issues highlight the need for modern, developed fraud prevention systems as AI-enabled fraud becomes increasingly sophisticated and widespread, posing significant challenges for insurers, regulatory bodies, and law firms.
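The kind of screening such prevention systems build on can be sketched very simply. The rules and thresholds below are invented for illustration; real systems layer many more signals and statistical models on top of heuristics like these.

```python
# Minimal sketch (invented rules and thresholds) of heuristic claim
# screening that might sit in front of a fraud-detection pipeline.

def flag_claim(claim: dict) -> list[str]:
    """Return the reasons a claim warrants manual review."""
    flags = []
    if claim.get("amount", 0) > 10_000:                 # unusually large claim
        flags.append("high_value")
    if claim.get("days_since_policy_start", 365) < 30:  # very new policy
        flags.append("new_policy")
    if claim.get("prior_claims_this_year", 0) >= 3:     # repeat claimant
        flags.append("frequent_claims")
    return flags

suspicious = {"amount": 15_000, "days_since_policy_start": 12,
              "prior_claims_this_year": 0}
print(flag_claim(suspicious))  # ['high_value', 'new_policy']
```

Static rules like these are exactly what AI-generated fraud is learning to evade, which is why the paragraph above argues that prevention systems must keep developing alongside the fraud itself.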
Financial exclusion or financial inclusion?
AI’s use by insurers could have significant implications, particularly for vulnerable populations. The potential for AI-driven biases may disproportionately impact these groups, leading to further financial exclusion. Mental health issues could arise from poor financial situations, and there are established links between over-indebtedness, mental health challenges, and reduced productivity or an inability to work. This would lead to a domino effect where financial insecurity prevents access to insurance, worsens mental health, limits employment opportunities, and further entrenches financial instability, leaving individuals trapped in a difficult cycle.
The FCA Chief highlighted: “We often find ourselves dealing with the symptoms of financial exclusion, but we also need to confront the causes”. To mitigate the negative impacts of AI in the insurance industry, it is essential to recognise that financial inclusion and economic growth are not mutually exclusive. Enhancing financial and digital literacy would improve inclusion and access to services, fostering economic growth.
The European Insurance and Occupational Pensions Authority (EIOPA) also emphasised that companies must make “reasonable efforts to monitor and mitigate biases from data and AI systems.” By reducing AI bias and promoting greater financial inclusion, insurers can broaden their customer base and create more equal access to financial services. This approach helps ensure that AI systems do not create financial exclusion but instead contribute to inclusive growth.
The role of law firms
Why, and how, would law firms be involved? The FCA has already planned some changes, with Rathi stating that the regulator is “ready to rethink some of our rules and regulatory approaches”, and it has invited further market input on the role of Big Tech firms as gatekeepers of data and the “implications of the ensuing data-sharing asymmetry between Big Tech firms and financial services firms”. These statements suggest that changes to corporate and financial law regarding data and AI could soon be on the horizon, particularly around data asymmetry and anti-discrimination.
Law firms would be essential in interpreting these new regulations and advising insurance companies on compliance. They would assist in mitigating risks associated with AI and ensure that companies comply with evolving legal standards and ethical guidelines, particularly in data protection and financial conduct. This involvement would be crucial in helping companies implement corporate governance reforms, navigating the complexities of AI systems while staying aligned with regulatory standards, and minimising legal risk and societal backlash.
Law firms would also handle litigation and dispute resolution. Given the existing bias in AI systems, firms would have to handle disputes arising from claims of discrimination or denial of service as AI bias becomes a bigger legal issue. Similarly, cyber fraud, cyber attacks, and identity fraud have all become more prevalent and sophisticated, meaning law firms would have to defend insurers against fraudulent claims associated with or facilitated by AI.
Conclusion
With both the risks and the potential benefits of AI, a balanced approach is needed within the insurance sector. We should encourage safe and responsible use of AI that allows innovation to flourish whilst safeguarding consumer rights and recognising the associated risks. Generative AI can affect markets in new ways and at larger scale, and its adoption in insurance could amplify both the opportunities and the risks seen elsewhere. Therefore, to mitigate these risks, regulators, insurance companies, and law firms should collaborate to adopt AI in a responsible manner that benefits all.