Embracing Socially Conscious AI in Financial Services: Navigating Risks and Unlocking Opportunities

In the rapidly evolving landscape of artificial intelligence (AI), financial services are now at the forefront of transformative change. As AI becomes ubiquitous across economies and sectors, dominating discourse and reshaping daily interactions, financial institutions are increasingly integrating AI across their operations. From risk assessments and payment processing to lending decisions and fraud detection, AI is revolutionising how financial services engage with customers, offering personalised content and innovative solutions.

Social Concerns and the Regulatory Landscape

However, with great innovation comes a responsibility to address the unique social concerns associated with AI. Financial organisations must proactively consider the potential impact on people's fundamental rights, ensuring that the benefits of AI do not compromise the rights of customers and broader communities.

The regulatory landscape is evolving in response to these concerns, as exemplified by the EU AI Act, a landmark agreement struck between the Council of the European Union and the European Parliament on 9 December 2023 regarding the regulation of AI. Whilst the precise details are yet to be announced, a core element of the law will be ensuring that businesses respect fundamental rights in their use of AI. This consideration will be especially salient for financial services organisations as the sector explores and expands its use of AI while balancing these changes against its high-impact role in the economy and broader society.

Challenges and Risks

AI introduces unprecedented challenges, particularly in mitigating risks related to biases and societal shifts. Several potential uses of AI by financial services will be categorised as “high-risk” under the EU AI Act. The law’s proposal lists the use of AI for the evaluation of credit scores or creditworthiness and for life or health insurance pricing as examples of “high-risk” AI usage. This categorisation is based on the potential of these activities to impact the fundamental rights of equality and non-discrimination, privacy, access to financial services and consumer protection. “High-risk” activities will then be subject to additional rules under the law, including requirements to conduct risk impact analyses and to establish risk management systems specific to the protection of fundamental rights.

Just as the regulatory landscape shifts in favour of business accountability for social harms, social norms are also changing – and perhaps more quickly than in decades and generations past. The answers to questions such as “What does a career look like?”, “What is a family?”, “What does our society need?”, and “What ‘values’ do we actually care about?” have changed and will continue to change. In contrast, AI models are built upon data reflecting historical social standards and structures. There is a risk that AI will reproduce existing, and potentially undetected, bias in financial activities such as lending decisions. If, for example, AI draws anachronistic inferences from gender inputs that result in unfair rejections or disadvantageous lending terms, issues such as the US$1.5 trillion credit gap for women-owned SMEs (as highlighted in a recent IFC study) may persist. Financial services organisations need to ensure that their use of AI is not replicating outdated norms whilst facilitating the opportunities that societal shifts can bring.
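
To make this risk tangible, the following is a minimal sketch, in Python, of one common monitoring check: comparing approval rates across demographic groups in historical lending decisions. The function name, column names, sample data, and the idea of flagging ratios well below 1.0 are illustrative assumptions for this example, not a method prescribed by the EU AI Act or the IFC study.

import pandas as pd

def approval_rate_disparity(decisions: pd.DataFrame,
                            group_col: str = "gender",
                            outcome_col: str = "approved") -> pd.Series:
    # Each group's approval rate divided by the best-served group's rate.
    # Ratios well below 1.0 flag groups that are approved disproportionately
    # less often and warrant human review of the underlying model and data.
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical decision log used purely to illustrate the check.
sample = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [1,   0,   0,   1,   1,   1,   0,   1],
})

print(approval_rate_disparity(sample))  # F ≈ 0.67, M = 1.00 in this toy data

A check like this is deliberately simple; its value lies in being rerun routinely so that drift in outcomes for any group is surfaced to humans rather than silently absorbed into future decisions.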

Adapting to Change and Ensuring Inclusivity

AI is trained on data which, depending on its inputs, can over- or under-represent certain demographics. These datasets can change significantly and unexpectedly. Take, for example, the rise of the gig economy in recent years. As more and more workers are engaged in non-traditional working relationships, will AI systems be able to respond appropriately to ensure these workers have equal access to financial services and a fair assessment of, say, their creditworthiness?

AI also cannot apply an informed, human lens to its programming without intervention. This carries a risk of relying on historical decisions about the family unit which could result in algorithmic biases against individuals in so-called “non-traditional” family constructs. How can financial institutions ensure that their AI can capture and adapt to changing customer bases? Likewise, AI may not be able to capture emerging topics that are influencing investment and lending behaviour, such as the growing importance of sustainability factors in financial decision-making.

Many customers may not fully understand how AI may be used by their financial institutions, which can contribute to issues with financial literacy and inclusion. They may be unsure when deciding whether to consent to the use of their data, unclear about how that data will be used, or indeed may not understand how AI influences the offers being made to them. Transparency and education are key considerations here to ensure that customers have the requisite knowledge to provide informed consent to the collection, use, and storage of their data.

The growing use of AI also implies a growing reliance on technology to conduct financial services activities. Will the use of AI by financial institutions consider how to include customers without access to these technologies or who prefer a non-digital approach to their banking?

The Power of Socially Conscious AI

In taking a more considered approach, financial services organisations have an opportunity to harness the power of socially conscious AI. This approach to the design, maintenance, and oversight of AI centres on the protection of individuals and their fundamental rights as a core principle of AI usage. Socially conscious AI can enable financial services organisations to protect themselves from risks and realise opportunities by applying a variety of lenses in considering the potential social impacts of their use of this technology.

A socially conscious AI programme would consider, for example, the multifaceted ways that AI can impact people, organisations, and the broader societal environment as a core element of AI design. It would also apply an impact lens to AI governance, examining the severity and likelihood of potential negative effects of AI use on the individual, groups, and society both before implementation of the system and on an ongoing, iterative basis. This effort will likely become mandatory for “high-risk” AI systems under the EU AI Act, which will also mandate the establishment of an ongoing, iterative risk management system to identify and address risks to fundamental rights of those who may be affected by “high-risk” systems.
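
As a purely illustrative aid, the sketch below (in Python) shows one simple way such an impact lens could be recorded: a severity-by-likelihood register of potential fundamental-rights impacts that is re-scored whenever the system or its data changes. The class and function names, scoring scales, example entries, and review threshold are assumptions made for this example and are not taken from the EU AI Act.

from dataclasses import dataclass

@dataclass
class RightsImpact:
    description: str   # a way the system could affect fundamental rights
    severity: int      # 1 (negligible) to 5 (severe) – assumed scale
    likelihood: int    # 1 (rare) to 5 (almost certain) – assumed scale

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

def impacts_needing_review(register: list[RightsImpact],
                           threshold: int = 12) -> list[RightsImpact]:
    # Impacts at or above the threshold are escalated for human review;
    # in practice the register would be re-scored at every model or data change.
    return sorted((i for i in register if i.score >= threshold),
                  key=lambda i: i.score, reverse=True)

register = [
    RightsImpact("Credit model under-serves gig-economy applicants", 4, 3),
    RightsImpact("Digital-only onboarding excludes offline customers", 3, 2),
]
for impact in impacts_needing_review(register):
    print(impact.score, impact.description)

Keeping the register as structured data rather than a static document makes the ongoing, iterative review that the EU AI Act anticipates easier to repeat and to evidence.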

These considerations will allow financial services organisations to unlock the full potential of their customer bases. A socially conscious approach to AI allows its users to evaluate an individual based on their unique profile, enabling wider access to services and increased revenue. The iterative nature of a socially conscious approach also creates the opportunity to capture social change as it happens, allowing a faster response to the changing market.

Socially conscious AI can also empower consumer trust. FIS’s Trust in Generative AI (2023) found that two of the most important factors for improving individuals’ trust in AI are transparency into how their data is being used and knowing that human beings are charged with AI oversight. As financial services organisations increasingly leverage AI for customer-facing interactions, understanding these perspectives will allow them to build, govern, and communicate about AI in ways that will resonate.

Opportunities for Financial Services

Socially conscious AI enables financial organisations to include diverse customer groups, monitor and detect biases effectively, and reduce the likelihood of discrimination. By applying a human lens to AI, businesses can proactively identify and rectify biased outcomes, fostering a more inclusive and equitable financial landscape. Moreover, a socially conscious approach can improve the customer experience, resulting in higher-quality interactions built on individual empowerment.

By embracing this approach, businesses can not only comply with evolving regulations but also proactively contribute to societal well-being. The power of socially conscious AI lies in its adaptability, ensuring financial institutions remain agile in the face of evolving social, economic, and technological landscapes.

As AI continues to shape the future, financial organisations have the choice to either meet regulatory requirements begrudgingly or embrace a socially conscious approach, unlocking the full potential of AI investments and capturing opportunities in an ever-changing world.

Further Reading

Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world (Council of the European Union)

EU AI Act Proposal (Council of the European Union)

This article was co-authored by Madeline Parkinson.

Contact Us

If you would like more information on how EY's team of experts can help, please reach out today.