In a groundbreaking move, Nationwide is set to deposit an unprecedented £340 million directly into customer accounts. With annual profits soaring by 40% due to increased deposits and higher interest rates, Nationwide is making waves in the financial industry.
But what about customers of other banks? Why aren’t they receiving a payment? In this article, we will explore the differences between banks and building societies to uncover the reasons behind this unique payment.
Understanding the difference between banks and building societies involves examining their ownership structures and relationships with shareholders.
Banks, typically owned by shareholders, rely on stock investments for dividends and capital appreciation. Shareholders wield decision-making power through voting rights to maximise profits and shareholder value.
In contrast, building societies operate under a distinct ownership model. Owned by their members, who are also their customers, these societies grant members voting rights and a stake in the organisation. Serving the interests of their members is the primary focus, offering financial services like mortgages and savings accounts.
The absence of external shareholders empowers building societies to prioritise member needs without the pressure to generate excessive profits. This often results in a more personalised and customer-centric approach to banking. It’s worth noting that some building societies have transitioned into banks in recent years, altering their ownership structures to become shareholder-owned entities.
Nevertheless, the traditional model of building societies, with members as owners, persists as an alternative banking option for individuals seeking a different experience.
In a significant move, Halifax Building Society, a prominent UK building society, converted into a bank in 1997, adopting the name Halifax plc. This conversion signalled a shift in their ownership structure. Subsequently, Halifax merged with the Bank of Scotland in 2001 to form HBOS, eventually becoming part of Lloyds Banking Group.
This conversion from a building society to a bank had implications for the distribution of profits. Unlike building societies, where members are the owners and beneficiaries, Halifax, as a bank, directs its profits to its shareholders rather than to its customers.
This shift highlights the contrasting financial arrangements between building societies and banks, emphasising the importance of understanding ownership structures when considering potential benefits or payouts.
Despite these distinct structures, lawyers play a vital role in offering guidance to both banks and building societies. While the two may operate differently, their core purpose remains broadly similar.
Both institutions cater to the financial needs of individuals and businesses by providing services such as mortgages, loans, overdrafts, and bank accounts. Consequently, lawyers provide valuable counsel to high street banks and building societies, employing various approaches tailored to their unique requirements.
Nationwide’s payment may tempt customers and potentially lead to a shift in their banking preferences, but only time will reveal the true impact. Nevertheless, it remains vital to understand the differences between various financial institutions (although Nationwide customers will just be pleased with their £100)!
According to scientists, ‘artificial general intelligence – that is, the ability of a computer to “think” at the same level as a human or higher – is only a couple of decades away, or even closer’.
AI applications permeate all sectors of society, from smart home appliances to online shopping and medical science. Even so, the rate at which AI has developed over the last couple of years is worrying for many and, as a result, is at the top of the agenda for governments and public bodies worldwide, with the threats and opportunities of AI being one of the main topics at this year’s G7 Hiroshima Summit.
On a personal level, the advancement of AI strikes an ominous tone owing to its ethical and safety implications; think I, Robot and the Terminator series.
Regulation in the UK lags behind the pace of AI advancements, especially considering the UK ranks third in AI investment behind the USA and China, with more than £2.5 billion spent on transformative and ‘ethical’ AI since 2015.
In October 2021, PwC published a UK government-commissioned report assessing the potential impact of AI on the UK job market. The upshot of this report is that AI is not expected to cause ‘mass technological unemployment’. However, it is set to expedite ‘significant changes in the structure of employment across occupations, sectors and regions of the UK’. While these changes will be relatively minor within five years of the report, the magnitude of the impact is expected to be far more pronounced over the following 10 to 20 years.
On the one hand, AI increases job productivity and efficiency, augments job performance through employee monitoring, and creates new jobs across all sectors, in turn maximising profits and contributing to economic growth. PwC states that ‘UK GDP will be up to 10.3% higher in 2030 as a result of AI – the equivalent of an additional £232 billion’.
Conversely, there is significant unease at AI’s advancements, leading to calls for monitoring and, in some cases, curtailment. One of these concerns is that the decision to use AI systems in workplaces is often made unilaterally by executives, with little involvement or consultation of those whom such data-driven technologies affect most.
Another is the alteration of employees’ working experiences, as AI systems are increasingly used to monitor performance and as automation takes over tasks traditionally reserved for humans.
A less visible but substantial threat is AI data laundering, which is ‘outsourcing the heavy lifting of data collection and model training to non-commercial entities’. These entities include academic settings such as universities and, in other cases, research organisations, many of which are funded by commercial, mostly tech, companies.
These companies use research data (often copyrighted material, likely against the wishes of its producers) to train their AI deep learning models, most of which are put to commercial use, as in the case of Alexa, Siri and Cortana.
This practice is enabled by the ‘research exception’ set out in the Intellectual Property Office’s Exceptions to Copyright for Non-Commercial Research and Private Study. As a result, consent from data subjects is absent, accountability is limited, and commercial companies successfully avoid legal liability.
The arts sector is no exception to the risks posed by AI. Paul Fleming of Equity, the performers’ union, voiced his concerns when he said that ‘if an AI can simply watch every performance from a given actor and create character models that move like them, that actor may never work again’.
AI also appears to threaten democracy, with the government seeking ‘to protect the next general election from interference or manipulation by artificial intelligence’ owing to the hazards of deepfakes and misinformation spread by bad actors, which AI’s current capabilities make easy to accomplish.
Therefore, despite the advantages presented by AI, the risks seem to outweigh them, so much so that pioneers of AI, such as Sam Altman, CEO of OpenAI (the company behind ChatGPT), and Sundar Pichai, CEO of Alphabet (Google’s parent company), are calling for urgent regulation of the sector.
The government has acknowledged the urgent need to remedy these inadequacies and responded by publishing a White Paper in March 2023 setting out its approach to regulating AI.
Even so, the government faces substantial difficulties in regulating AI and its dichotomies: AI is largely privately researched yet commercially funded and developed, and it offers considerable opportunities. How do you rigorously regulate and curtail a sector that draws on both commercial and private law while fostering economic growth?
Nonetheless, Labour MP Mick Whitley introduced the Private Member’s Artificial Intelligence (Regulation and Workers’ Rights) Bill in the House of Commons in May 2023. Whilst this certainly is a step in the right direction, there also needs to be a coordinated effort on a transnational level. Conventions and Charters may be necessary for regulatory measures to prove even more effective.
Any future regulatory controls will affect the legal sector; however, it is difficult to quantify the extent to which the industry will be affected. Consequently, it is a waiting game to see how governments respond to AI advancements and what precedents emerge in respective jurisdictions and within jurisprudence.