As the integration of artificial intelligence (AI) continues to redefine the financial services landscape, the emphasis on developing responsible and trustworthy AI systems has never been more important. Responsible AI, encompassing data privacy, accuracy and the reduction of misinformation and bias, is central to the success of AI tools and services.
Recognizing the complexity and breadth of this field, the National Institute of Standards and Technology (NIST) has endeavored to standardize the terminology surrounding trustworthy AI, facilitating a common understanding that advances the practical implementation of these principles. The principles of trustworthy AI were also part of the DARPA AI Forward initiative, which described AI that operates competently, interacts appropriately with humans and behaves in a moral manner.
Within the financial industry, where the stakes of AI-driven decisions are exceptionally high, the principles of unbiased, fair and dependable AI are especially critical. However, the aspiration to fully embody these principles in deployed AI systems faces significant challenges, given the current limitations of AI in mirroring the moral reasoning capabilities of humans. This challenge is compounded by the rigorous regulatory scrutiny faced by financial institutions, making the journey toward responsible AI a key focus area for the industry.
Moreover, the development of AI solutions in the financial industry is a mix of in-house processes and partnerships with external vendors. A Deloitte survey from 2020 found that 56% of organizations use both in-house and external resources for AI development, while 23% rely primarily on in-house resources and 21% rely mainly on external ones. This indicates a significant trend toward a hybrid approach that combines in-house talent, outsourced resources and co-development with external partners. This approach allows financial institutions to leverage both their internal resources, including regulatory relations and compliance, and the specialized expertise of third-party AI providers.
As leaders within the financial services industry embark on the adoption and deployment of AI initiatives, it is imperative to engage in critical questioning. This dialogue ensures that both your internal and external teams are committed to the development of AI technologies that adhere to the highest standards of ethical responsibility and integrity. These questions can serve as valuable guidelines for discussions with AI vendors and internal teams alike.
1. What Are Your Data And Algorithm Audit Procedures?
Critical to ensuring the integrity and fairness of AI systems is the establishment of rigorous data and algorithm audit procedures. Financial institutions should inquire about the frequency, methodologies and outcomes of these audits. This is fundamental to identifying and rectifying biases and inaccuracies and ensuring that AI systems operate within the ethical boundaries set by regulatory standards and societal expectations.
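To make this concrete, below is a minimal sketch of one check such an audit might include: computing a disparate impact ratio on historical decisions. The column names, data and the 0.8 threshold (the widely cited "four-fifths" rule of thumb) are illustrative assumptions, not a prescribed methodology.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of favorable-outcome rates between the least- and most-favored groups.

    A value near 1.0 suggests similar approval rates across groups; values well
    below 1.0 (e.g., under the commonly cited 0.8 "four-fifths" rule of thumb)
    warrant closer review during an audit.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())

# Hypothetical audit data: each row is one decision (1 = approved, 0 = declined).
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0,   1,   0],
})

ratio = disparate_impact_ratio(decisions, "applicant_group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: approval rates differ substantially across groups.")
```

A real audit program would pair checks like this with documentation of methodology, remediation steps and a regular review cadence, which is what the questions above are meant to surface.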
2. How Do You Address Bias In Dataset Samples?
Given the pivotal role of data in shaping AI outputs, all stakeholders must rigorously assess whether dataset samples are representative of the populations they aim to serve. This involves a thorough examination of the measures in place to mitigate bias in data collection and to ensure ongoing representativeness, a crucial step in maintaining the fairness and reliability of AI systems.
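One lightweight starting point for that examination, assuming access to a reference population breakdown (such as census or customer-base figures), is a goodness-of-fit comparison between the training sample and that reference. The category names and counts below are placeholders, not real data.

```python
from scipy.stats import chisquare

# Hypothetical counts: observed mix in the training sample vs. the mix expected
# from the population the system is meant to serve.
sample_counts     = {"group_1": 620, "group_2": 280, "group_3": 100}
population_shares = {"group_1": 0.55, "group_2": 0.30, "group_3": 0.15}

total = sum(sample_counts.values())
observed = [sample_counts[g] for g in sample_counts]
expected = [population_shares[g] * total for g in sample_counts]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"Chi-square statistic: {stat:.1f}, p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Sample mix differs significantly from the reference population; investigate.")
```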
3. How Do You Integrate Responsible AI In Technology Development?
A holistic integration of responsible AI principles throughout the technology development process is essential. Financial institutions should seek to understand how ethical guidelines, awareness of AI ethics and frameworks guiding AI development are incorporated, ensuring that AI systems are designed with responsibility and ethical considerations at their core. They should also consider the limits of an algorithm's technical capabilities and, consequently, where human oversight is required.
4. What Was Your System’s Original Design And Intent?
Understanding the original design intentions of AI systems and their primary applications is vital. Stakeholders must evaluate the risk of concept drift when AI systems are applied to domains beyond their initial scope. When drift occurs, an algorithm's performance may not hold up in the new domain, so rigorous adaptability and quality assurance testing is needed to maintain the integrity of AI algorithms across varied applications.
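As an illustration, one common way to quantify such drift in credit and insurance modeling is the Population Stability Index (PSI), which compares a model's score distribution at development time with its distribution in a new domain. The sketch below is a generic implementation with simulated scores; the thresholds in the comments are rules of thumb, not regulatory standards.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a newly observed one.

    Rule-of-thumb interpretation: < 0.1 little shift, 0.1-0.25 moderate shift,
    > 0.25 major shift that warrants investigation before continued use.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero and log of zero for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical model scores at development time vs. in a new deployment domain.
rng = np.random.default_rng(42)
baseline_scores = rng.normal(0.60, 0.10, 5000)
new_domain_scores = rng.normal(0.50, 0.15, 5000)

print(f"PSI: {population_stability_index(baseline_scores, new_domain_scores):.3f}")
```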
5. How Do You Enable Organizational Culture And Human Oversight?
The organizational culture and human oversight practices of AI developers play a critical role in the development of responsible AI. Inquiries into policies, training programs, financial incentives and engagement in ethical practices can provide insight into how a culture of responsibility and transparency is fostered within the vendor's operations, a critical factor for long-term success in the fast-evolving AI landscape. This includes not only written policies and procedures but also the unwritten rules that actually drive behavior at the company.
Embracing The Future Responsibly
In the pursuit of integrating AI into financial services, the commitment to responsible AI is not just an ethical imperative but a strategic one, safeguarding against future ethical and legal challenges. By engaging in transparent and rigorous evaluation of AI partners, financial institutions can better navigate the complexities of this journey, while ensuring that the deployment of AI technologies aligns with both regulatory requirements and societal values.
Azish Filabi is the executive director of the American College Maguire Center for Ethics in Financial Services. In 2023, she was selected to participate in the DARPA AI Forward trustworthy AI for national security initiative. Neeraja Rasmussen is the founder and CEO of Spyglaz, and an Advisory Council member of the American College Center for Women in Financial Services.