Parijat Banerjee is Global Business Head for the Banking, Financial Services, and Insurance (BFSI) sector at LatentView Analytics.
In 1988, the Austrian-born roboticist Hans Moravec observed that artificial intelligence (AI) was adept at tasks that humans considered complex (chess, math, etc.), but that it struggled to match the perception and motor skills of a human infant, abilities honed by millions of years of evolution. Or, as the cognitive psychologist Steven Pinker later put it, “The main lesson of thirty-five years of AI research is that the hard problems are easy, and the easy problems are hard.”
Known as Moravec’s paradox, the concept is common in the field of robotics, but it could just as easily be applied to BFSI, a set of sectors always looking for a competitive edge. In BFSI, milliseconds can mean millions, and generative artificial intelligence (GAI) can help provide that transformative speed. But speed without direction gets you nowhere fast. Paradoxically, one could win and lose at the same time.
BFSI industry leaders have been at the vanguard of GAI for a while now, but the past year has seen an explosion of development since the introduction of tools like ChatGPT. As tech leaders, how do we talk about this progress? And how do we contextualize it among the potential risks posed by overlooking the easy for the hard?
Here are a few opportunities and threats that GAI presents for BFSI specifically, along with thoughts on how to move forward responsibly.
How GAI Can Strengthen And Accelerate BFSI Operations
• Less Overhead, Lower Costs And Larger Returns: According to McKinsey, current GAI technologies have the potential to automate work activities that absorb 60%-70% of employees’ time today. And in banking specifically, it could deliver value equal to an additional $200 billion-plus annually. So, to recap, that’s fewer people to pay and more profit. For anyone working in BFSI, it’s a mic-drop moment.
• Improved Risk Management And Fraud Protection: Stripe, a payment servicer, had been experimenting with GAI for some time, but the “game changer” occurred when Stripe started using GPT-4 (OpenAI’s latest large language model, or LLM). For payment service providers (PSPs) like Stripe, security is critical: one big breach of sensitive customer data could be ruinous.
To mitigate that threat, Stripe uses GPT-4 to analyze vast online communities (like Discord), flag potential bad actors and help scan inbound communications to prevent fraud. Another innovative use of GAI is its ability to create “synthetic data” that mimics fraudulent transactions, which can then be used to train other LLMs on how to spot malicious activity.
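To make the synthetic-data idea concrete, here is a minimal sketch in Python. The field names, value ranges, and “fraud looks like large amounts at odd hours” patterns are illustrative assumptions for this article, not any provider’s actual schema or method; real synthetic data for fraud training is typically produced by generative models rather than hand-written rules.

```python
import random

random.seed(42)

def synth_transaction(fraudulent: bool) -> dict:
    """Generate one synthetic card transaction.

    Fraudulent records mimic stereotypical fraud patterns
    (atypically large amounts, middle-of-the-night activity,
    unfamiliar merchants). All fields are illustrative.
    """
    if fraudulent:
        return {
            "amount": round(random.uniform(900, 5000), 2),  # unusually large
            "hour": random.choice([1, 2, 3, 4]),            # odd hours
            "merchant_seen_before": False,
            "label": "fraud",
        }
    return {
        "amount": round(random.uniform(5, 200), 2),
        "hour": random.randint(8, 22),
        "merchant_seen_before": random.random() < 0.9,
        "label": "legit",
    }

# Real fraud is rare, so a synthetic set can be deliberately balanced,
# giving a downstream model far more positive examples than production
# data would ever contain.
dataset = [synth_transaction(i % 2 == 0) for i in range(1000)]

fraud_share = sum(1 for t in dataset if t["label"] == "fraud") / len(dataset)
print(f"{len(dataset)} records, {fraud_share:.0%} labeled fraud")
```

A classifier trained on records like these can then be evaluated against real (held-out) transactions to check that the synthetic patterns actually transfer.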
• Increased Productivity And Better Customer Service: Earlier this year, Stanford and MIT released results from one of the first real-world studies of GAI in the workplace. They found that GAI tools like chatbots helped boost worker productivity at one tech company by 14% and that improvement was even more pronounced for “novice and low-skilled workers” who were able to get their work done 35% faster.
The experiment also revealed improved customer satisfaction, reduced requests for managerial intervention and better employee retention. The key here is that the true power and value of GAI is unlocked when it is used alongside human intelligence—not merely to replace it.
How GAI Can Weaken And Compromise BFSI Operations
• Potential For Bias: GAI systems are tools, not solutions, and they are only as good as the data they are given. Research from Deloitte suggests that incomplete or unrepresentative data sets can limit GAI’s objectivity, while biases in the development teams that train such systems can perpetuate that cycle of bias. This is hugely problematic, especially for BFSI companies that are relying on GAI for things like loan approvals. As systems learn, they may acquire new discriminatory behaviors that could deny applications based on race or gender, seriously undermining the trust between financial institutions and customers.
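One simple way institutions monitor for the discriminatory pattern described above is a demographic-parity check: compare approval rates across groups and flag large gaps. The sketch below uses made-up decisions and hypothetical group labels purely for illustration; real fairness audits use larger samples and several metrics, not this one number alone.

```python
from collections import defaultdict

# Hypothetical loan decisions as (group, approved) pairs.
# The groups and outcomes here are invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

rates = {g: approvals[g] / totals[g] for g in totals}

# Demographic-parity gap: the spread between the highest and lowest
# group approval rates. A large gap is a signal to investigate, not
# proof of discrimination on its own.
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
```

Running a check like this on every model release turns “trust us” into a number that compliance teams can track over time.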
• Flawed Logic: Often referred to as “model hallucinations” or “black box” thinking, the problem is that an LLM currently will “tend to produce authoritative-sounding answers to questions, even when it doesn’t know the answer.” If it’s impossible to verify the validity of the data or understand how GAI arrived at its findings, BFSI companies might be making important decisions on flawed logic.
• Spread Of Misinformation And Loss Of Intellectual Property: In early 2023, JPMorgan Chase & Co. made headlines when it banned the use of ChatGPT among its employees. The bank didn’t provide much detail on why the move was made, but it sparked a lot of debate and hand-wringing within BFSI and other heavily regulated industries. Axios speculated that big companies are worried about how the tools might disseminate inaccurate information to customers, and “want to make sure that employees aren’t sharing confidential or proprietary info with ChatGPT” and its operators. Many other companies are also preaching caution, but as we’ve seen in the past, it’s difficult to maintain principles over profits for very long.
We are very much still in the early days of GAI development, but this is a critical period and we must remain thoughtful. While other technologists or BFSI leaders might view caution as timidity, it’s the price of sustainable innovation. We need more governance, not less. It would be easy to “move fast and break things” like in the old days, and restraining that urge will be hard. But, as Moravec’s paradox suggests, the easy problems are often the hard ones.
The information provided here is not investment, tax or financial advice. You should consult with a licensed professional for advice concerning your specific situation.