
Why is the ethics of AI and data so important for financial services? 

What all artificial intelligence (AI) models have in common is that they operate probabilistically: they find patterns in the data they are trained on, and these patterns form the basis for the model's decisions and recommendations. This probabilistic nature means it is impossible to guarantee specific outcomes in advance.

Why is this an important consideration for financial services? A financial institution's operations rely on IT systems that are not only robust and resilient but also accurate and fair. For the vast majority of the time, this is achieved through excellence in software development practices accompanied by robust controls and guardrails. However, in exceptional cases, there is a risk of algorithmic bias appearing in AI models. Some biases are acceptable: lending decisions, for example, are made on the basis of well-trusted, proven data and customers' individual circumstances, which is considered a responsible approach to lending.

Other biases, however, are unfair, and in exceptional cases these can inadvertently influence AI models – often stemming from hidden biases in the model's training data – leading to flawed outputs.
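To make this concrete, here is a minimal, hypothetical Python sketch of how a pattern-based model reproduces bias hidden in its training data. All data and names here are invented for illustration – this is not real lending logic or any model the bank uses. If historical decisions were skewed against one group, a model that simply learns approval frequencies will replicate that skew.

```python
# Hypothetical illustration only: invented historical lending decisions,
# recorded as (area, approved). Area "B" was historically disadvantaged.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_approval_rate(area):
    """The 'pattern' a naive model learns: approval frequency per area.

    A model trained on this data inherits the historical skew, even
    though nothing in the code mentions any protected characteristic.
    """
    outcomes = [approved for a, approved in history if a == area]
    return sum(outcomes) / len(outcomes)

print(learned_approval_rate("A"))  # 0.75
print(learned_approval_rate("B"))  # 0.25
```

The point of the sketch is that the bias is never written into the model; it is absorbed silently from the data, which is why training data must be scrutinised as carefully as the model itself.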

As a bank we have to ensure that we don’t allow this to happen. The financial decisions we make – relating to everything from mortgage approvals to loan decisions – impact people’s lives and their futures. We therefore aim to take the necessary steps to ensure our AI models are making correct and fair decisions. Central to our approach is our AI and Data Ethics Code of Conduct¹. 

Why is it important to have a Code of Conduct?

The Code of Conduct is, in effect, a statement of our intent, outlining our fundamental principles regarding ethical use of AI and it reflects our aspirations to foster responsible and transparent practices. So, when we created our Code of Conduct, our aim was to align it to NatWest Group’s purpose, values and strategic priorities. The Code of Conduct contains principles that govern how we use and process customer data and how our AI systems are developed and deployed.

For me, this is hugely important. It's not an abstract, conceptual idea. The principles in our Code of Conduct are embedded into the way we design and build AI systems, and into the data used to train them.

Because AI models operate based on patterns that emerge from their training data, we need to demonstrate that the mechanisms and techniques we have put in place ensure the right decisions and fair outcomes for customers.

It is essential for our customers to be aware of our Code of Conduct, as they may have concerns about the risks associated with AI, as highlighted in the media. By knowing we are committed to mitigating these risks, our customers can hopefully feel more secure in our use of AI.

Another important point is that NatWest Group's AI strategy is underpinned by ethical considerations. The strategy contains several ambitious uses of AI aimed at helping to transform how we operate and serve our customers. Ensuring ethical use of AI, and of the data it draws on, is central to each of these, and I am lucky enough to run a bank-wide programme of work that engages various teams in this effort.


How is the Code of Conduct used in practice?

First and foremost, the Code of Conduct gives all our AI engineers a framework of high-level principles that they need to work within. Among other things, it helps ensure: that our AI systems are subject to human oversight, and that they respect and promote human agency; that they are technically robust, resilient and safe, so we prevent unintentional harm to individuals; that we comply with privacy and data protection laws; and that the decisions or predictions they produce can be explained to customers and are free from unfair bias or discrimination.

Every AI ‘use case’ (i.e. any significant deployment of AI, such as a credit lending model) undergoes an Ethical AI Impact Assessment to explore whether it could unintentionally pose any ethical risks. The assessment is reviewed by a dedicated AI & Data Ethics team. Use cases deemed high risk, referred to as ‘edge cases’, are formally discussed by the AI and Data Ethics Panel, which evaluates the ethical concerns against the principles in the Code of Conduct and makes recommendations on how potential ethical risks should be mitigated.

The Panel comprises colleagues drawn from right across the bank to provide the widest diversity of thought and opinion. Panel members are required to complete a set of training modules which qualifies them to sit on the Panel. For every Panel meeting, we then select from this diverse pool of trained people.

Each member of the Panel represents one of our stakeholder groups – customers, colleagues, communities, shareholders and investors, and future generations – and their individual role is to challenge the ‘edge case’ business owner from that specific perspective. The Panel offers guidance on the measures needed to minimise ethical risks.

We also only adopt technologies which have been rigorously tested in safe, ‘sandbox-type’ environments.

How do these guardrails help us succeed with our customers?

AI has incredible potential to give our customers relevant and personalised interactions with us. We have a rich archive of data about our customers, built up through long-term relationships, and I believe AI can leverage this data for their benefit. The question is: ‘How can we take AI technology, combine it with data, and use it to deepen and strengthen relationships with existing and future customers?’

Used in this way, I almost see it as an easier, more ‘always-on’ extension of the traditional banking experience, in which a customer formed a personal bond with their bank manager. If used ethically, I believe AI can do the same thing, except in a much more scalable and accessible way. AI could help build trust and deepen relationships by providing timely and intelligent responses to the matters that are most important to our customers.

But to be able to do that we need to be completely comfortable with the technology we are deploying. This is why having the Code of Conduct is so important. It gives us the vital reassurance and creative space required to truly focus on how we can use technology to succeed with our customers.

AI and Data Ethics Code of Conduct

Read more about our AI and Data Ethics Code of Conduct on our Innovation and Digitalisation pages and download the full document here.

¹ It is important to note that the Code of Conduct is not a comprehensive overview of every aspect of AI usage within our organisation. As we navigate the complexities of AI, we recognise that our practices will evolve as technology, societal norms, and legal and regulatory requirements develop, and that adaptation and a holistic approach will be essential in addressing the broader implications and challenges that arise.

A note about this article: the views and opinions expressed are those of the interviewee and do not necessarily represent the views of NatWest Group.

The material published on this page is for information purposes only and should not be regarded as providing any specific advice, or be used by consumers to make financial decisions. Terms and conditions apply to any products or services mentioned.
