If 2020 was all about an industry-wide acknowledgment of the importance of the Industrial Internet of Things (IIoT), and 2021 the year of recognizing the advantages of AI, then 2022 will be the year of smart manufacturing at scale. Analysts are unequivocally hailing 2022 as the year of the smart factory, and we believe time series AI will play a major role in realizing that vision. Time series AI addresses the largest subset of industrial AI needs, since most industrial operations cannot be visually monitored. It is especially pressing for process industries, which tend to have the most well-instrumented factories. Furthermore, after assessing the state of their business and the various technical approaches to industrial AI, enterprises have rightly concluded that they should adopt approaches that scale to the widest set of needs across their operations; time series AI is that approach.
With that said, here are some developments we believe we will see in the smart manufacturing space in 2022:
Highly automated, high-value industries with continuous operations, such as metals, oil and gas production, weapons system operations, automotive parts making, and semiconductor manufacturing, will go "all in" on digital smart manufacturing technologies, particularly time series AI. What is special about these industries (compared to discrete manufacturing) is that manufacturing capacity is limited and the cost of stoppages or defects is very high. In fact, some of these industries must aim for "zero defects" not because the cost of a recall is too high but because they are mission-critical in nature. To take on such a goal, these industries, which are generally well instrumented, will need to fully exploit the terabytes of time series data they generate.
While it may seem counterintuitive to apply AI at line scale rather than to individual use cases, after working with our customers over the years we can confidently say it is the more efficient way of deploying AI. Targeted approaches designed for specific use cases don't scale beyond POCs and science projects: the operating environment is constantly changing, so the goalposts keep moving, and too much problem-curation effort is needed before a precisely aimed approach delivers any real benefit. AI needs to fit into existing plant workflows, which operate at line scale rather than at a small number of problem spots. The realization that significant benefits begin to accrue only when AI is applied at scale is spreading through the industry, and 2022 will see significant line-scale adoption.
A development related to this move away from use cases is the concept of model-free AI. Use cases usually imply reliance on well-crafted models, which must go through a manual process of development, testing, and validation and, more importantly, must be constantly tuned and maintained. When applying AI at line scale, the number of interesting operating modes across a line far exceeds anyone's ability to create precise models for them; and even if one could, any step requiring human intervention makes costs prohibitive and, without automated workflows, creates bottlenecks.

Going forward, a layer of model-free AI will be applied to all the data, all the time. Its aim is to discover behaviors such as signals that are flat-lining, signals that are not transmitting, or signals reporting erroneous values that indicate a misconfigured sensor. This gives us a better picture of the input data we actually have to work with, and that is where model-free AI comes in. Model-free AI will, in a way, mimic the software development process: just as code committed to a repository triggers automated processes that build, test, report, and in some cases deploy it, tools in the AI world will automate the entire model develop-and-deploy process with an even broader automated pipeline. Model-free AI is hence a combination of AutoML, MLOps, and DataOps in which only inputs, outputs, and explanations matter; the rest is magic. It will produce a consistent way of working: there is input, there is a way to highlight periods of time within that input that need attention, there is supporting evidence for why attention is needed, there is human-provided feedback, and the whole process keeps improving. The rest happens behind the scenes.
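To make the data-health screening idea concrete, here is a minimal Python sketch of how flat-lining, non-transmitting, and out-of-range signals might be flagged. It assumes a datetime-indexed pandas Series per sensor; the thresholds, function name, and plausible-range parameters are illustrative assumptions, not a description of any product's implementation.

```python
import pandas as pd

# Illustrative thresholds; a real system would learn or configure these per signal.
FLATLINE_WINDOW = "10min"        # window over which zero variation means flat-lining
MAX_GAP = pd.Timedelta("5min")   # longest tolerable silence before "not transmitting"

def screen_signal(series: pd.Series, lo: float, hi: float) -> dict:
    """Flag basic data-health issues in one datetime-indexed sensor signal."""
    series = series.sort_index()

    # Flat-lining: no variation at all over some rolling time window.
    flat_lining = bool((series.rolling(FLATLINE_WINDOW).std() == 0).any())

    # Not transmitting: a gap between consecutive samples exceeds MAX_GAP.
    longest_gap = series.index.to_series().diff().max()
    not_transmitting = bool(pd.notna(longest_gap) and longest_gap > MAX_GAP)

    # Erroneous values: readings outside the physically plausible range [lo, hi],
    # which suggests a misconfigured or faulty sensor.
    out_of_range = bool(((series < lo) | (series > hi)).any())

    return {"flat_lining": flat_lining,
            "not_transmitting": not_transmitting,
            "out_of_range": out_of_range}

# Example: screen a temperature signal expected to stay within -20..150 °C.
# flags = screen_signal(df["temp_sensor_01"], lo=-20.0, hi=150.0)
```

Screens like this run continuously over every signal, with the results feeding the automated pipeline described above rather than requiring a hand-built model per sensor.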
It will finally be possible to obtain a complete, human-understandable digital chronology of events in a production environment from the operational data it produces. This would enable accurate prioritization of improvements as well as the ability to truly validate industrial AI performance. We have already been working toward providing this capability within our products to a great extent: you don't have to rely on human recordkeeping for a breakdown of what happened when; instead, it can be derived from the signal data. To illustrate, consider a prospect we were working with that needed to manage and maintain fleets of trucks. By analyzing the data coming from the engines around each scheduled maintenance, reliability teams would be able to tell, by observing system behavior right before and after, whether the maintenance positively impacted performance or, if no change was observed, that the scheduled maintenance was unnecessary. All of these inferences are constructed from the data and recorded. This is a far more consistent and reliable way of working than depending on a human to certify that an inspection or maintenance was performed and resulted in improvements. Instead, the digital chronology of events will reflect all such actions and the resulting changes in system behavior.
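As a hypothetical illustration of how such an inference could be derived from signal data, the sketch below compares an engine health signal in equal windows before and after a maintenance timestamp. The function name, window size, KS-test criterion, and the assumption that lower variance means smoother running are all choices made for this example, not the method our products use.

```python
import pandas as pd
from scipy import stats

def maintenance_effect(signal: pd.Series, maint_time: pd.Timestamp,
                       window: pd.Timedelta = pd.Timedelta("24h")) -> str:
    """Compare one engine signal in equal windows before and after maintenance."""
    before = signal[maint_time - window:maint_time].dropna()
    after = signal[maint_time:maint_time + window].dropna()

    # No statistically detectable change in behavior suggests the scheduled
    # maintenance was unnecessary (two-sample Kolmogorov-Smirnov test).
    _, p_value = stats.ks_2samp(before, after)
    if p_value > 0.05:
        return "no change: scheduled maintenance may have been unnecessary"

    # Behavior did change; here we assume lower variance means smoother running.
    return ("performance improved after maintenance"
            if after.var() < before.var()
            else "performance degraded after maintenance")
```

The verdict for each maintenance event then becomes one entry in the digital chronology, alongside the evidence that produced it.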
Early Industry 4.0 practitioners started by piecing together various IT/OT technologies, such as IIoT platforms, streaming analytics tools, data science and machine learning tools, and operational frameworks, to build makeshift analytics solutions in-house. But the realization is now dawning that companies spend inordinate amounts of time stitching these technologies together and end up creating insufficient value for themselves. Moreover, this approach of hand-constructing AI models and user interfaces while integrating assorted technologies is not conducive to scale. For that reason, smart factory platforms that have these capabilities built in, and are usable directly by factory staff in a variety of scenarios, will become popular going forward. IT effort in digital industrial AI projects will shrink to a tiny fraction of its current level, and projects will be managed directly by maintenance and reliability engineers.
Most IIoT platforms defer analytics development to a separate computation path designed for heavier lifting. In the Falkonry smart factory platform, by contrast, univariate analysis is already automated, very easy no-code mechanisms exist even at the multivariate level, and it is all integrated. You don't, for instance, need to understand the Azure architecture, since such plug-and-play capabilities can be run directly by reliability engineers. Rather than have IT teams develop and manage these integrations, it is better if the smart factory platform already has the smarts to do the analysis. Of course, you might add more analytics and other modular components when needed, but from the outset the platform gives reliability engineers an interface to manage the whole continuous monitoring, analysis, and improvement cycle on their own; essentially, it's DIY in the hands of reliability engineers. After all, a smart factory is not a factory with more IT people; it's a factory that outputs more with the same number of people.
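As a generic illustration of what automated univariate monitoring can look like (not a description of Falkonry's internals), here is a minimal rolling z-score monitor; the window, threshold, and function name are assumptions made for the example.

```python
import pandas as pd

def univariate_alerts(signal: pd.Series, window: str = "1h",
                      threshold: float = 4.0) -> pd.Series:
    """Flag samples that deviate sharply from the signal's recent behavior.

    Generic rolling z-score monitor over a datetime-indexed series; the
    window and threshold are illustrative defaults, not tuned values.
    """
    mean = signal.rolling(window).mean()   # recent baseline level
    std = signal.rolling(window).std()     # recent variability
    z = (signal - mean) / std              # distance from baseline, in std devs
    return z.abs() > threshold             # True where attention is needed
```

In a platform of the kind described above, this sort of monitor runs automatically on every signal, and the reliability engineer's job reduces to reviewing the flagged periods and feeding back which ones matter.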