You probably encounter predictive analytics every day in…
The basic idea is this: given some information about your behavior in the past, can I determine, within a limited context, what you’ll do in the future? While industrial applications for predictive analytics look different from those in the commercial space, the same basic idea applies: given some past information about equipment, a process or a plant, can I determine how it will behave in the future?
Commercial and social applications tend to revolve around making recommendations because suggesting what someone might want next increases prospect engagement and influences their buying behavior. Financial applications tend to revolve around risk assessment because understanding risk helps maximize investment returns in the face of uncertainty. Industrial applications tend to revolve around availability and quality because these metrics are key to ensuring that a plant delivers usable products, on schedule, at an acceptable cost.
Predictive analytics can be used to address a variety of problems and increase overall equipment effectiveness (OEE) for a plant.
There are two basic approaches to predicting the behavior of systems: physics-based and statistics-based. Each has advantages and disadvantages that determine when it is appropriate to apply one or the other.
Physics-based approaches rely on a deep understanding of either the entire system or of critical parts of the system being monitored. Broadly speaking, there are two ways to follow this path:
Statistics-based approaches rely on finding reliable relationships between some system measurement and a system state (e.g. between a sensor reading and a bearing failure).
Broadly speaking, there are three approaches describing these relationships:
For example, unsupervised ML might take sensor readings over time and tell a user both that a pump operated in four distinct modes and when each of those modes occurred. Supervised ML might take those same readings and classify the pump operation into three specific states of interest: normal operation, cavitation, and shutdown.
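The pump example above can be sketched in code. This is a hypothetical illustration, not Falkonry's method: it uses synthetic (vibration, pressure) readings, k-means clustering to discover operating modes without labels, and a random-forest classifier when labeled history is available. All numbers and state names are invented for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic (vibration, pressure) readings for three operating regimes.
normal = rng.normal([1.0, 5.0], 0.1, size=(100, 2))
cavitation = rng.normal([3.0, 2.0], 0.1, size=(100, 2))
shutdown = rng.normal([0.0, 0.0], 0.1, size=(100, 2))
X = np.vstack([normal, cavitation, shutdown])

# Unsupervised: discover how many distinct modes exist and which reading
# belongs to which mode, with no labels supplied.
modes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised: use labeled history to classify readings into named states.
y = np.array([0] * 100 + [1] * 100 + [2] * 100)  # 0=normal, 1=cavitation, 2=shutdown
clf = RandomForestClassifier(random_state=0).fit(X, y)
state = clf.predict([[2.9, 2.1]])[0]  # a new reading near the cavitation regime
```

The unsupervised step answers "what distinct behaviors occurred, and when?"; the supervised step answers "which known state is this?" — which is why it needs labeled examples that the unsupervised step does not.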
Machine-learning-based techniques can be very powerful, able to find very subtle relationships between sensor values and system states. However, unlike Falkonry Clue, ML tools typically require significant work by data scientists to define features, select algorithms, and tune parameters to achieve the desired results.
The saying “use the right tool for the job” applies in analytics just as it applies in maintenance. The approach you should take depends on the outcome you want to achieve. Broadly speaking, industrial predictive analytics applications fall into two categories: optimization and event detection/prediction.
Optimization means determining how inputs should be adjusted in order to achieve some desired output. Traditionally, this has been done through design of experiment (DOE) where parameters are systematically varied and the outcomes observed. As computers have become more powerful and less expensive, simulation is increasingly supplementing DOE in order to speed the time to results by reducing the parameter space which must be explored. Getting to an optimal output in less time means more time producing in an optimal way.
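The optimization idea above can be made concrete with a minimal sketch: a hypothetical process model (invented here; a real simulator or DOE would take its place) relates temperature and feed rate to yield, and a sweep over the candidate grid finds the best settings in software before any physical experiments are run.

```python
import itertools

def simulated_yield(temp_c: float, feed_rate: float) -> float:
    # Hypothetical stand-in for a process simulator: yield peaks near
    # 180 C and a feed rate of 2.0, falling off quadratically elsewhere.
    return 100.0 - 0.05 * (temp_c - 180.0) ** 2 - 10.0 * (feed_rate - 2.0) ** 2

temps = range(150, 211, 10)        # candidate temperatures, C
feeds = [1.0, 1.5, 2.0, 2.5, 3.0]  # candidate feed rates

# Exhaustive sweep of the parameter grid; simulation makes this cheap,
# so a physical DOE only needs to confirm the narrowed-down region.
best = max(itertools.product(temps, feeds),
           key=lambda p: simulated_yield(*p))
```

Evaluating 35 simulated combinations here replaces 35 physical trials; in practice the simulation narrows the space and a small confirming DOE validates the result.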
Event detection and prediction means finding and understanding system behaviors so that they can be avoided or fixed. Traditionally, this has been done through a combination of inspections and statistical process control (SPC), but these methods are increasingly being supplemented and replaced by predictive analytics. As cost pressures increase, it has become more important to find issues early so that their impact on OEE is minimized.
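The SPC approach mentioned above can be sketched with a simple 3-sigma control rule, the classic starting point for detecting events in a sensor stream. The baseline values and readings below are invented for illustration.

```python
import statistics

# Baseline readings from known-good operation define the control limits.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, mean - 3 * sigma

# Later readings outside the limits are flagged as events to investigate.
readings = [10.0, 10.1, 9.9, 11.5, 10.2]  # 11.5 simulates a developing fault
events = [(i, r) for i, r in enumerate(readings) if not (lower <= r <= upper)]
```

A rule this simple only catches gross excursions in a single signal; the predictive methods described in this article aim to find the subtler, multi-signal patterns that precede such excursions.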
Which approach to pursue depends on where your excess costs are coming from:
When evaluating new analytics tools, it is common to start by looking at historical data, on the assumption that if a tool works on an old dataset, it will work for current operations. However, experience suggests that this does not play out as expected in industrial operations, because operational record keeping is rarely amenable to advanced predictive analytics. And while there is frequently less organizational resistance to starting with a proof of concept (POC), understanding what the POC results actually tell you about the business impact of the technology is difficult: the operational experts needed to provide context and interpret analysis results don’t have the free time to revisit problems that happened a year or two ago. Without the support of these subject matter experts (SMEs), critical buy-in is never established. Likewise, because the problem being “solved” in the POC is old, investigating it has little practical impact on production today. Without SME support, and without a clear bright spot where a current production problem has been solved, analytics products get stuck in POC purgatory, withering from lack of interest and eventually dying.
Instead of starting with a POC, going straight to a limited production pilot often produces better results. Taking this path addresses the problems that cause projects to get stuck:
By following this Predictive Operational Excellence (POE) approach, teams are able to show value quickly and get a fast return on their predictive analytics investment.
There are a wide variety of predictive analytics packages available on the market today. However, few of them support a predictive operations approach – an approach which emphasizes working on today’s production challenges with current data, fully engaged with operational experts in a production pilot instead of a proof-of-concept. If your progress on the path to operational excellence is limited by your current event detection and prediction capabilities, we can help. Contact us to learn more about Falkonry and how Falkonry Clue can accelerate your journey to predictive operations.