AI Adoption in Banking

Get It Right with a Tech Formula (Part 1)

Anna Oleksyuk
Towards Data Science



You’ve heard the buzz: artificial intelligence (AI) is the hot new commodity in finance. But can you just sprinkle some “intelligence” atop your core banking systems and call it a win? Hardly.

Formalizing an AI use case and even running a successful pilot is the easy part. Deploying and scaling that AI algorithm is where things get complex.

Only 22% of businesses using machine learning (ML) have successfully deployed a model to the production environment.

Getting AI right is hard, but who needs it, anyway?

Well, most banks do if they want to stay competitive in the long term. According to a report by Temenos, 77% of banking executives say that successful implementation of AI will differentiate leaders from laggards in the banking space in the next several years.

Clearly, the first to solve the AI scaling and deployment challenge will gain the most market rewards.

But you can’t win the race without knowing what obstacles are ahead.

Why implementing AI in finance is hard

Regulations, compliance, privacy, and data bias — these are common concerns among financial institutions looking into AI. But let’s pretend for a second that these don’t exist and gauge the complexity factor from a purely technological standpoint.

Picture this: a group of executives gets sold on a new decision analytics engine. On paper, the project looks perfect:

  • Predictive analytics can right-size sales pricing
  • A next-best-action (NBA) component can improve upsells/cross-sells
  • The analytical component can reduce the volume of false positives in payments
  • The payback period can be as little as a year or two

“Cool beans,” says the leadership. “Let’s get that engine up and running! Get the ML team working on this ASAP.”

Now comes the fun part: the machine learning team gets elbow deep into the project only to get hit by one obstacle after another:

  • Enormous data silos and the lack of a unified data management process stall the preparation of the needed datasets.
  • Legacy infrastructure, where the key data rests, needs to be re-platformed or replaced altogether to avoid disrupting core systems.
  • New cloud infrastructure with loads of GPUs needs to be assembled and configured to support the testing and deployment of algorithms.
  • Too few people are around to actually get all of these things done.

That’s how a profitable pilot with crazy-high ROI on paper turns into a big-ticket investment that’s losing its attractiveness by the minute and eventually gets canceled after one pilot (and some publicity).

So why do so many financial companies get burned on their early AI investments and fail to scale beyond pilots?

There are several reasons:

  • Lack of production-ready data and the ability to access it fast enough due to outdated (or non-existent) data governance processes.
  • AI algorithms need attention, especially at the earliest stages of deployment, in the form of supervision, maintenance, compliance, and cybersecurity, among other things. Yes, AI can become self-learning, but it never fully becomes hands-free.
  • A weak or nonexistent integrated development environment creates testing and deployment bottlenecks.
  • AI deployments create a new architectural layer in core banking systems that needs to coexist with legacy systems without disrupting them.
  • Packing an algorithm into an attractive customer-facing solution that adds value requires extra time and expertise.

So are we suggesting you abandon your AI dreams altogether and let the digital-native competition win?

Not at all. Introducing AI is expensive to get wrong. But you can minimize the risks, costs, and adoption timeline by sticking with a bottom-up approach. One that can be summarized using this formula:

(Legacy + Frontend Transformations) × AI = Modern Bank

P.S. This is a two-part post. In this installment, we’ll focus on legacy systems. If you are tempted to get to front-end transformations first, go here (but you should really read this part too!).

Legacy transformations for AI in banking: 4 areas to address

Image by Intellias

AI in finance needs several key building blocks:

  • Data governance and management platform
  • IT governance framework and architecture optimization
  • Cloud computing and cloud GPUs
  • MLOps

“Two-thirds of banking executives (66%) said new technologies such as AI, cloud, and DevOps will continue to drive global banking transformation over the next five years.” (Temenos report)

Data governance and management platform

A unified data management platform, connecting and consolidating both internal and external data sources, is the backbone of every AI implementation.

Beyond providing you with a lineup of initial datasets for analysis, a data governance platform can help you:

  • improve data traceability and accountability
  • enhance data security and compliance
  • support scaling of AI use cases, as you’ll always have streamlined data for analysis.
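
To make the traceability point concrete, here is a minimal, illustrative sketch in Python of what a dataset registry with lineage and access logging might look like. The class and field names are hypothetical and not tied to any particular data governance product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical, minimal dataset registry: each dataset records where it
# came from (lineage), who owns it, and who accessed it (audit trail).
@dataclass
class DatasetRecord:
    name: str                    # e.g. "card_transactions_raw"
    source_system: str           # e.g. "core_banking", "crm"
    owner: str                   # accountable business or data owner
    contains_pii: bool           # drives masking and access policies
    lineage: list = field(default_factory=list)     # upstream dataset names
    access_log: list = field(default_factory=list)  # (timestamp, user, purpose)

    def log_access(self, user: str, purpose: str) -> None:
        """Record who accessed the dataset and why."""
        self.access_log.append(
            (datetime.now(timezone.utc).isoformat(), user, purpose)
        )

# A derived dataset keeps its chain of custody explicit via `lineage`.
raw = DatasetRecord("card_transactions_raw", "core_banking", "payments-team",
                    contains_pii=True)
features = DatasetRecord(
    "fraud_features_v1", "feature_store", "ml-team", contains_pii=False,
    lineage=[raw.name],
)
features.log_access("ml-team", "fraud model training")
```

A real platform adds schemas, quality checks, and policy enforcement on top, but the core idea is the same: every dataset an algorithm consumes is registered, owned, and auditable.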

The data management platform you assemble then needs to be backed by an IT governance framework.

IT governance framework and architecture optimization

The problem with most legacy banking systems is that they’re old, tangled, and rigid, leaving no space for new elements.

The goal of the IT governance process is to help you figure out where to place the AI piece into your current architectural puzzle.

In essence, such frameworks are designed to help you pick apart and probe your legacy software to see which system components can be decoupled and modernized without setting the core on fire (figuratively, that is).

So rather than attempting to replace the core in one fell swoop (at major risk), you can perform modernizations at the system level and evolve your platform one element at a time, as I wrote in another post on legacy modernization for banking.

Infographic by Intellias

Migrating to a more decoupled architecture will help you:

  • allocate the place where new AI services can sit
  • connect more data sources to your data management platform
  • figure out how the new algorithm can be integrated with other services.
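
As an illustration only (the interfaces, class names, and the legacy client below are hypothetical, not part of any specific core banking product), decoupling can be sketched as a thin adapter layer: the AI service depends on an abstract data interface, so the legacy core can later be modernized or replaced without touching the model code.

```python
from abc import ABC, abstractmethod

# Abstract contract the AI service depends on, instead of calling the
# legacy core directly.
class CustomerDataSource(ABC):
    @abstractmethod
    def get_recent_transactions(self, customer_id: str) -> list[dict]:
        ...

# Adapter around the legacy core; this is the only place that knows
# about the old system's quirks and data formats.
class LegacyCoreAdapter(CustomerDataSource):
    def __init__(self, legacy_client):
        self._client = legacy_client  # e.g. a mainframe or API gateway client

    def get_recent_transactions(self, customer_id: str) -> list[dict]:
        raw = self._client.fetch_txns(customer_id)  # hypothetical legacy call
        # Normalize legacy field names into a clean, stable schema.
        return [{"amount": r["AMT"], "merchant": r["MRCH"]} for r in raw]

# The AI/scoring service only sees the clean interface.
class NextBestActionService:
    def __init__(self, data_source: CustomerDataSource, model):
        self._data = data_source
        self._model = model

    def recommend(self, customer_id: str):
        features = self._data.get_recent_transactions(customer_id)
        return self._model.predict(features)  # placeholder model call
```

In this setup, the adapter is the only component that changes during a core migration; the data platform and the AI services keep working against the same interface.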

Cloud computing and cloud GPUs

Here’s a very simple explanation of AI:

AI = (code + data) × computing power

Considering that you’ve already organized your code and streamlined access to data (and perhaps migrated it to the cloud), you can now go searching for that computing power.

After all, deep neural networks and other sophisticated ML algorithms are power-hungry creatures, requiring a ton of computing power to operationalize the data they’re given and churn out predictions.

In essence, a neural network is the result of numerous matrix multiplications performed on network inputs (your data) during the training and prediction phases so that you can get a good output (prediction or insight).

Depending on the complexity of the task, a network can use anywhere from a few dozen to billions of parameters to identify patterns within the given data and produce results.

To run these operations, your algorithms need computing power, and that’s where GPUs (graphics processing units) come into play. These chips help you accelerate your calculations and get your hands on the results faster.
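
As a rough sketch (assuming PyTorch is installed; the matrix sizes are arbitrary and chosen only to make the difference visible), the snippet below times the same matrix multiplication on the CPU and, when one is available, on a GPU, which is essentially the workload that dominates neural network training.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Multiply two random n x n matrices on the given device and time it."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure async kernel launches don't skew timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
else:
    print("No GPU available; a cloud GPU instance would run the same code.")
```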

You can go ahead and stock up on a bunch of GPUs in-house, or you can rent cloud GPU capacity from service providers — something that will allow you to automate and scale larger algorithm deployments in the future (or avoid scrambling for resources when your training ends up being more complex than anticipated).

Considering that the GPU as a Service market is predicted to hit $7 billion by 2025, you can guess what most companies prefer to do. Now, apart from computing power, you’ll also need some extra resources for storing your data, test sets, experiments, version controls, and test results. Again, keeping all this in the cloud is the route most companies choose.

Which brings us to the next important point: keeping your AI projects organized and ready for deployment.

MLOps

Machine learning + DevOps = MLOps, which is a new movement towards creating a streamlined way to build, test, and deploy ML/AI models.

“The goal of MLOps is to help you adopt and automate continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning (ML) models.” (Google Cloud)

Greater automation adds predictability to your development process and reduces the chances of model failure due to minor errors.

Here’s what you get out of MLOps:

  • Reusable pipelines and repeatable workflows for launching new models
  • Improved data integration and unified data governance
  • Automated model setup, training, and testing (for similar projects)
  • One-click replication and automated version control
  • Ready access to all necessary libraries, frameworks, and integrations for your projects

So rather than wasting time on assembling all the bits and bobs for a new AI project, your ML team can get straight to the action and run new experiments faster, at a lower cost, and with fewer risks.
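
As a toy example of the reusable-pipeline idea (using scikit-learn and joblib; the model choice, artifact name, and version tag are made up for illustration), a preprocessing-plus-model pipeline can be defined once, retrained on fresh data, and saved as a versioned artifact.

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def build_pipeline() -> Pipeline:
    """One reusable definition of preprocessing + model."""
    return Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

def train_and_version(X, y, version: str) -> str:
    """Retrain the pipeline (the 'CT' part) and save a versioned artifact."""
    pipeline = build_pipeline()
    pipeline.fit(X, y)
    path = f"fraud_model_{version}.joblib"  # hypothetical artifact name
    joblib.dump(pipeline, path)
    return path

# Synthetic data stands in for real transaction features here.
X = np.random.randn(500, 8)
y = np.random.randint(0, 2, size=500)
print(train_and_version(X, y, version="2024-01"))
```

In a production MLOps setup, the same pipeline definition would be triggered by CI/CD, validated against a holdout set, and promoted automatically, rather than run by hand.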

To sum up the legacy transformation part of the equation:

  • Assess your systems to locate the optimal position for your AI.
  • Decouple data sources and connect them to a new data management platform.
  • Look into further opportunities for integration and modernization.
  • Allocate cloud infrastructure for new AI tests.
  • Get the GPU capacity you need.
  • Set a clear roadmap using MLOps principles.

Alright, so we are done with the first part of the equation — Legacy Transformations.

Since it’s been quite a ride already, we are keeping the second part of the equation — front-end and AI transformations — for Part 2 of this post.
