Most AI in use today follows the same paradigm: a data scientist creates an algorithm to "classify" something you care about versus something you don't. The model looks for a specific pattern and flags it for a user. Remembering that the A in AI means artificial, this seemingly human-like pattern recognition is devoid of any real context about what is actually going on in the business. It can tell you only what you told it to say: a "warning," a "novel event," or some similarly generic statement pointing to wherever you asked it to direct your attention. At its root, AI is just math. It trains on data that contains the type of event you want it to find. In most industrial settings, however, there are few examples of significant failures, because out of necessity industrial companies have employed many engineers to monitor operations and iron out the biggest issues. This leaves the model with a very hard job: finding whatever the humans didn't find, exclusively in an area that a human believes, without real evidence, is still a source of problems.
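To make the paradigm concrete, here is a deliberately tiny sketch of a "use-case driven" detector. This is a toy illustration, not Falkonry's code or any real product's algorithm; the readings, the three labeled failures, and the 3-sigma band are all invented for the example. The point is that a model trained this way can only recognize the one pattern it was shown.

```python
import statistics

# Toy "use-case driven" detector: the humans pick one condition they
# suspect (the hole in the wall) and supply the few failure examples
# they happen to have. Everything below is invented for illustration.
failure_examples = [92.1, 95.4, 93.8]           # scarce labeled failures
pattern_center = statistics.mean(failure_examples)
pattern_spread = statistics.stdev(failure_examples)

def classify(reading: float) -> str:
    """Flag a reading only if it resembles the known failure pattern."""
    if abs(reading - pattern_center) <= 3 * pattern_spread:
        return "warning"   # the generic label it was told to emit
    return "normal"

print(classify(94.0))  # resembles the trained pattern -> "warning"
print(classify(60.0))  # a novel problem elsewhere     -> "normal" (missed)
```

Note the failure mode in the last line: a reading that is wildly abnormal but unlike the trained examples sails through as "normal," because the model was never told to look at that hole.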
This spotlight, if you will, is pointed exclusively at the area where the problem is expected to occur, like a flashlight aimed at a hole in the wall, waiting for a mouse to come out. Often the humans get it wrong: the condition that used to be present is no longer visible after a setting was changed or a part was swapped out. Metaphorically, the humans still believe a mouse will emerge from a hole at some point, but now they want to point the flashlight at a different hole in the same room. That means finding new failure data to train the algorithm, and possibly changing the algorithm itself. It is a never-ending game of flashlight and mouse.
This is the state of most AI in the world today, especially in industrial settings. Model maintenance alone is a huge part of the effort to derive value. Built into this approach is also the conceit that the humans know, or can at least make an educated guess about, where the mouse will emerge. The deck is stacked against the algorithm before it even starts working. The subtext is, "I know where all the problems are around here, so if you're so smart, AI, find me something I didn't know in machines I have been studying for years." This approach is referred to colloquially as a use-case-driven approach. It amounts to, "I know what I want to find, so go find it." As you can no doubt imagine, this involves a ton of manual work by relatively expensive, highly trained people, which is why most AI is developed and deployed as an expensive services engagement. These services scale only when you add more people to do the work.
Falkonry, since its inception, has been driving a completely different approach, with the goal of democratizing applications that use AI. We first made models easy to create without code, so citizen data scientists could build and deploy models without experts. The Falkonry team then moved on to helping end users organize and label what the AI was telling them, so the AI would "learn" what the human cared about most. What was interesting here, however, was that humans can be unsure how the things the AI detects relate to what they should track and resolve. This is not a failing of humanity, but rather the difficulty of communicating with AI, which, while not sentient, could arguably be thought of as another life form, one that senses the world very differently than we do, but in no less valid ways.
What we decided to do next was take another giant leap in human interaction with AI and turn the whole approach on its head with our newest product, Falkonry Insights. If AI is meant to see things we can't see, to augment our abilities like a superpower, then we figured we had better let it do its thing. That meant taking all the data the factory could give us, not just the data the humans thought was important. We had to make quantum leaps in our ability to ingest terabytes of data on a near-continuous basis. We also had to find new ways to process and store that data that wouldn't cripple us financially and that we could improve over time to become a high-scale software business. This meant moving from CPU processing to GPU processing, an area with a lot of talk but few real breakthroughs or success stories. Additionally, if we were going to look at all the data all the time, there was no way labor could or should be deployed to create, train, tune, and deploy models. This meant developing a whole new engine capable of assessing all the data after training itself to understand what normal is. This approach is known as a "self-supervised model," and it is the highest level of difficulty and the current Holy Grail of AI. With some incredible people in our CTO's organization, from AI gurus to data processing wizards to backend software savants, we pulled it off.
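The core idea of normality-based, self-supervised anomaly detection can be sketched in miniature. This is a toy z-score model on invented readings, not the actual Falkonry Insights engine: it learns what "normal" looks like from unlabeled history, then scores every new point by how far it deviates, so it needs no failure examples and no pre-chosen use case.

```python
import statistics

# Toy self-supervised anomaly detector: no labels, no use cases.
# Step 1: "train" on raw unlabeled history by learning normal behavior.
# (All numbers invented for illustration.)
normal_history = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
mu = statistics.mean(normal_history)
sigma = statistics.stdev(normal_history)

def anomaly_score(reading: float) -> float:
    """Distance from learned normal, in standard deviations (z-score)."""
    return abs(reading - mu) / sigma

def is_anomaly(reading: float, threshold: float = 4.0) -> bool:
    """Flag anything far from normal, whichever 'hole' it came from."""
    return anomaly_score(reading) > threshold

print(is_anomaly(10.2))  # within normal variation -> False
print(is_anomaly(14.5))  # far outside normal      -> True
```

Unlike the use-case detector, this one flags any sufficiently abnormal reading, including failure modes nobody thought to look for; the trade-off is that a human still has to interpret what each anomaly means.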
We now have what we believe to be the world's only fully automated, explainable anomaly detection application that operates on high-speed automation and sensor time series data. What does this mean to you? When you think about the gremlins you are trying to find in your factory to keep it humming as close to 24×7 as possible, there are places you expect failure and poor performance to come from (use cases), and, from our experience with customers, there are places you don't expect it (plant-scale monitoring). We know you need both to reach peak performance. You need AI to look everywhere, with every data point from every sensor you have, to detect new and emerging hotspots in places you can't see, at data resolution levels you'll never reach on your own.
We also make it easy to annotate, in your language of choice, just what types of things are happening and where, so the AI can tell you clearly what is wrong and precisely where to look. We further augment this with a plant-wide signal manager that lets you pan and zoom intuitively through every signal you've sent our way. You can also share your findings and notes in a reporting environment that combines visualizations, where you can compare values and signals, with a rich text editor for explaining the details of a particular situation to those you need to loop in. When a broad application like Falkonry Insights helps you find these hotspots, you'll likely still want to point a model directly at them, so you can detect very nuanced behavior and tune that model over time to dial in precisely what is most concerning, using our Clue and Workbench applications. These applications become even more valuable once Insights helps you pinpoint the most important places to look.
By flipping the traditional script on AI, we believe you'll agree we are making AI work much harder for you and giving you many more ways of finding the concerning behavior that might lead to slowdowns or shutdowns. This is machine and condition monitoring on steroids, and we believe it will be a game-changer. It uses AI in the way it is most useful: not by asking it to "see" the way we do, but by leveraging its computational strengths to literally see it all. We are already onto our next innovation, where we will give you the power to see exactly what you are interested in without tuning models or altering datasets. With Falkonry Insights, we've made what we believe is a great leap forward, putting you in the driver's seat to find the emerging issues that are only visible by assessing ALL your operational data. We are eager to see where you'll drive.