At first blush, you might think the kind of Artificial Intelligence applications that are deployed in industrial contexts would have nothing to do with financial services. Whether we’re talking manufacturing, process control, automotive, or any other industrial setting, what could they possibly have in common with banking? The answer: a surprisingly large amount. And there are lessons learned from industrial applications that could help to shape the development of FinTech solutions, particularly those focused on risk management.
This blog post results from a conversation between Nikunj Mehta, Founder and CEO of Falkonry, and Graham Seel, a 30-year banking veteran and Principal with BankTech Consulting. We think you’ll be surprised at the parallels and the promise.
Industrial Control and Banking Operations
What is similar between an engineer controlling an industrial facility and a bank operations manager controlling payment processing? Both deal with operational risks that require constant scrutiny and immediate action at the earliest sign of trouble.
But aside from the intuition that complex operations are the same everywhere, what do industrial operations and financial operations really have in common? This blog post highlights the similarities, with an eye to techniques that can cross over from one domain to the other. The similarities include:
- Data-Informed Operations
- Alarm Fatigue
- Trends And Human Processing
- Learning instead of Expert Systems
- Regulatory Oversight
- Trade Secrets
Financial institutions are primarily risk managers, handling a variety of financial risks — market, credit, operational, currency, liquidity, and others. Industrial companies likewise manage a variety of industrial risks — safety, quality, efficiency, asset, inventory, and others.
Here is our take on the similarities between financial and industrial operations, which suggest that data-informed technologies applied in one domain can be applied in the other, and vice versa.
1. Data-Informed Operations
Data-informed decision-making is the basis of day-to-day operations in both domains. No matter how sophisticated the data collection and processing systems are, a trained human is ultimately responsible for making critical decisions. In industrial settings, closed-loop control systems automatically carry out safety and basic efficiency measures. Similarly, bank operations managers carry out exception processing and error recovery based on system-generated communications. Still, any analytics application guiding their decisions submits its findings to a human before any changes to operations take effect. This is why all operations are run by sizable teams that are challenged to keep improving their effectiveness through better process and technology. Nothing at this point suggests that will change completely this decade.
2. Alarm Fatigue
One of the biggest hurdles to improving effectiveness is false alarms from the analytics technologies in use. A system that is not sensitive enough leaves risk exposure high; one that is too sensitive buries the operations team in busy work chasing false alarms.
Most operations teams use expert systems, primarily based on rules, to monitor operations and generate alerts. These expert systems are set up by, who else, experts in the domain! Tuning the rules, thresholds, and specific KPIs consumes much of the experts’ time in order to maintain the balance between risk exposure and team efficiency, and that tuning has its own consequences. In Anti-Money Laundering (AML) and anti-fraud applications, for example, traditional rules-based models must undergo periodic re-calibration and detailed testing, at significant cost.
Rules-based systems pose a major operational dilemma. Depending on the cost of missing a true exception, models may be tuned very conservatively. In the case of AML monitoring, bank fines have been so high that the probability of missing a money-laundering transaction must be driven as near to zero as possible. This results in large quantities of false positives (legitimate transactions flagged for operator review). Not only does this significantly increase operational cost, but it also creates “alarm fatigue,” in which operators come to expect false alarms to such an extent that they miss a true positive and allow an improper transaction.
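To make the dilemma concrete, here is a minimal sketch of a single-rule amount threshold applied to synthetic transactions. The data, field names, and threshold values are all invented for illustration, not taken from any real monitoring system. Lowering the threshold to avoid missing suspicious transactions inflates the false-positive count that operators must then clear:

```python
import random

random.seed(7)

# Hypothetical transactions: an amount plus a hidden "suspicious" label.
# In a real AML system the label is of course unknown at scoring time.
transactions = (
    [{"amount": random.uniform(100, 9_000), "suspicious": False} for _ in range(980)]
    + [{"amount": random.uniform(8_000, 50_000), "suspicious": True} for _ in range(20)]
)

def rule_based_alerts(txns, threshold):
    """Flag every transaction at or above the amount threshold."""
    return [t for t in txns if t["amount"] >= threshold]

for threshold in (25_000, 10_000, 5_000):
    alerts = rule_based_alerts(transactions, threshold)
    missed = sum(1 for t in transactions
                 if t["suspicious"] and t["amount"] < threshold)
    false_pos = sum(1 for t in alerts if not t["suspicious"])
    print(f"threshold={threshold:>6}: alerts={len(alerts):>4} "
          f"missed={missed:>2} false_positives={false_pos:>4}")
```

As the threshold drops, the missed count falls toward zero while the false-positive count, and with it the review workload, grows.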
Institutions in both areas are looking for innovative means of reducing alarm fatigue while, at the same time, improving effectiveness.
3. Trends and Human Processing
Whether dealing with sensor data or transaction-oriented data, there is an important temporal dimension to how decisions are made. In general, humans are good at interpreting simple time trends by looking at slopes and levels. Some of these trends can be encoded in expert systems, but complex trends cannot, mainly because of the limits of human ability to describe them. And where different pieces of information do not arrive at the same time or rate, incorporating their trends into an expert system tends to be hard.
This problem is exacerbated in financial services applications where trends are formed (and change) over periods of days, weeks or even years. Individual operators cannot expect to recognize long-term trends in customer behavior without computer assistance.
The result of such difficulties is that current operations systems are not programmed to recognize complex trends. Issues are therefore discovered well after their first signs appear, and people have to put more effort into confirming alarms by interpreting patterns themselves.
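As an illustrative sketch (the readings, alarm level, and window size below are made up, not an actual industrial signal), consider a slow drift that never crosses a static alarm threshold but is obvious from a rolling least-squares slope, the kind of simple trend computation a static rule base typically omits:

```python
def rolling_slope(values, window):
    """Least-squares slope over a sliding window of equally spaced samples."""
    n = window
    x_mean = (n - 1) / 2
    denom = sum((i - x_mean) ** 2 for i in range(n))
    slopes = []
    for start in range(len(values) - n + 1):
        w = values[start:start + n]
        y_mean = sum(w) / n
        num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(w))
        slopes.append(num / denom)
    return slopes

# A slow upward drift: every reading stays below a static alarm
# threshold of 100, so a simple level-based rule never fires...
readings = [50 + 0.5 * i for i in range(40)]
print(max(readings))                 # 69.5, well under 100

# ...but the persistent positive slope makes the drift unmistakable.
slopes = rolling_slope(readings, window=10)
print(all(s > 0.4 for s in slopes))  # True
```

A level threshold answers "is it bad now?"; the slope answers "is it heading somewhere bad?", which is exactly the earlier warning the text describes.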
4. Learning instead of Expert Systems
The next issue is that expert systems do not change by themselves; they have to be programmed by experts. Learning occurs in the minds of experts, who then apply its lessons in new versions of the rule base used by the expert system. In today’s fast-changing landscape, this means operational systems cannot evolve rapidly enough. In fraud detection, for example, monitoring rule bases need to be modified frequently enough to keep up with new criminal approaches to fraud, in addition to incorporating experts’ learning about signs of potential fraud.
It should ideally be possible to use a thumbs-up/thumbs-down or a photo-tagging type of approach to improve the quality of results continuously. That requires that operations management use a learning system, more akin to the AI we use now with voice assistants, for example.
Some work is being done today in financial services monitoring “after the fact” to incorporate AI machine learning techniques. This includes providing feedback to the model as to true and false positives, allowing automated tuning of the models that identify suspect transactions. However, this is still primarily a batch process, working on feeds of transactional data from core banking systems. Wire transfers undergo real-time monitoring (particularly for sanctions scanning), but their volumes are relatively low. As other payments move toward real-time and irreversible mechanisms across the world, the importance of real-time monitoring will greatly increase.
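A sketch of what such a feedback loop might look like in its simplest form. The update rule, step size, and labels here are invented for illustration; real systems typically retrain a statistical model on the labeled outcomes rather than nudging a single threshold:

```python
def updated_threshold(threshold, feedback, step=0.05):
    """Nudge an alert threshold using operator review feedback.

    feedback: iterable of "tp" (true positive) or "fp" (false positive)
    labels for reviewed alerts. A false positive nudges the threshold up
    (fewer alerts next cycle); a true positive nudges it down (cast a
    wider net), weighted asymmetrically because missing real cases is
    far more costly than an extra review.
    """
    for label in feedback:
        if label == "fp":
            threshold *= (1 + step)
        elif label == "tp":
            threshold *= (1 - 3 * step)
    return threshold

t = 10_000.0
# A review batch dominated by false positives slowly raises the bar,
# while the lone confirmed case pulls it partway back down.
t = updated_threshold(t, ["fp"] * 8 + ["tp"] * 1)
print(round(t))
```

The point is the loop, not the arithmetic: each thumbs-up/thumbs-down from an operator flows straight back into the next cycle’s model, with no expert rewriting rules by hand.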
Experience with high-volume real-time industrial monitoring may be transferable to financial services, particularly as AI technologies are deployed and operationally proven.
5. Regulatory Oversight
A critical requirement of systems in both domains is that regulators exercise oversight over methods used in operations. Therefore, it is necessary to keep the methods explainable and, potentially, easily provable. Techniques much more complex than high school algebra can hardly be defended.
This makes black box methods, especially those that include proprietary algorithms, harder to deploy in a scalable manner. Also, it means that vendors should be willing to disclose the algorithms used and not rely on secret sauces. Finally, it requires math to be simple enough to explain to various stakeholders.
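One pattern that satisfies this constraint is a point-based scorecard, long familiar from credit risk and straightforward to defend to a regulator. The rules, point values, and transaction fields below are hypothetical, purely to show how every alert decomposes into auditable contributions:

```python
# Hypothetical scorecard: each rule contributes a fixed, documented
# number of points, so any alert can be explained line by line.
SCORECARD = [
    ("amount over 10k",   lambda t: t["amount"] > 10_000,        30),
    ("new beneficiary",   lambda t: t["new_beneficiary"],        25),
    ("high-risk country", lambda t: t["country_risk"] == "high", 45),
]
ALERT_AT = 50  # alert when the total score reaches this level

def score(txn):
    """Return the total score and the list of (rule, points) that fired."""
    hits = [(name, pts) for name, rule, pts in SCORECARD if rule(txn)]
    return sum(p for _, p in hits), hits

txn = {"amount": 12_500, "new_beneficiary": True, "country_risk": "low"}
total, reasons = score(txn)
print(total, total >= ALERT_AT)  # 55 True
print(reasons)                   # the full explanation of the alert
```

Nothing here exceeds addition and comparison, so the method can be walked through with any stakeholder, which is precisely the bar the text sets.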
6. Trade Secrets
The last, but not the least important, challenge is maintaining complete control over the analytics models, both to derive a competitive advantage through superior operations and to protect against unwanted legal and criminal behavior by third parties.
Many analytics techniques are offered as SaaS under the full control of the vendor. Moreover, many vendors create data derivatives from their customers’ data and offer some of those derivatives to other customers. While operational methods should be open to scrutiny and best practices can be shared across an industry, the models themselves should remain under the control of the institutions.
Falkonry, through its AI for Live Operations, provides a learning system that automatically processes trends in the data to create an accurate and timely warning system for undesirable conditions, without requiring any trade secrets about the operation to be divulged to Falkonry or anyone else. Its methods can be explained to regulators, and it equips existing operations teams to use their operations data more effectively and efficiently.
While the linked video is specifically focused on the Internet of Things (IoT) and industrial applications, it is easy to see where there are analogous opportunities within Financial Services. A number of FinTech companies are looking at IoT for opportunities around, for example, customer experience. Banks have also traditionally looked at industrial process management methodologies to improve their own operational procedures (e.g. Six Sigma).
In summary, there is an opportunity to use industrial process exception management techniques to address some of the most challenging issues in bank operations today, with fraud detection and Anti-Money Laundering among the highest-value areas.
Graham Seel is an expert in commercial banking, and provides strategic insight and internal business cases to banks. He works as a fractional Customer Success Executive to Fintech firms, facilitating their partnership with banks.
This post was originally written June 06, 2016.