In my last post, we defined predictive maintenance (PdM) and looked at how it stacks up against other strategies, such as condition-based maintenance. It’s a clear winner when it comes to preventing failure on critical assets, but buyer beware: not all PdM solutions are created equal. There are actually several methodologies grouped under the name “predictive maintenance,” and each has its pros and cons. Today, let’s double-click on this elusive term to break down the different methodologies behind it and find out which could be right for you.
The four approaches to predictive maintenance
First, let’s recap:
Predictive maintenance (PdM) is a technique that uses data analysis tools and techniques to detect anomalies in your operation and possible defects in equipment and processes so you can fix them before they result in failure.
Or, as I put it in the previous post, PdM attempts to see into the future so you can act to prevent failures before they happen. With this in mind, here are the four main methods currently used to implement predictive maintenance:
- Human/experiential
- Unsupervised machine learning
- Supervised machine learning
- Physics-based, a.k.a. first principles
We’ll explore each method in turn and compare them all at the end.
#1. Human/experiential PdM
Okay, so this one isn’t a software solution, but it’s worth mentioning. Human/experiential predictive maintenance occurs when workers manually collect and analyze machine or sensor data. For example, a technician may collect vibration data from a motor and compare it to a spec sheet to determine whether failure is likely and maintenance is needed. It could even be a seasoned maintenance technician predicting that the oil needs changing just from its smell or the feel of it between their fingers. As you can imagine, this method isn’t an exact science, and results can vary widely.
| Pros | Cons |
| --- | --- |
| No software to implement; low upfront cost | Not an exact science; results can vary widely |
| Leverages hard-won technician experience | Only catches problems when a technician happens to be there |
#2. Unsupervised machine learning PdM
Machine learning (ML) uses data and algorithms to teach computers to perform complex tasks, gradually improving in accuracy over time. Unsupervised machine learning accomplishes this without requiring any setup or input on the part of the user. Basically, you feed the system data, it learns the baseline, and it then detects statistically significant deviations. For this reason, this type of PdM algorithm is also referred to as “anomaly detection.” It also means the system is “blind”: it can tell you when a statistically significant deviation has occurred (which may indicate an impending failure), but this remains a probability; it can’t say for certain that an asset has failed or will fail. All it knows is that a deviation has occurred. In fact, even a change for the better, such as an upgrade that shifts a machine’s operational baseline, will trigger an alarm; in such cases, the system may need to be retrained to learn the new and improved baseline.
On the other hand, because this approach to PdM only requires operational data, it’s the most flexible PdM solution. It has universal applicability—it's compatible with any asset or sensor that generates data. It can even go beyond physical assets and be used to detect deviations in maintenance metrics (like OEE) or other data streams (more on that in a future post).
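To make “learn the baseline, flag significant deviations” concrete, here’s a minimal sketch of the idea in Python. The readings, the three-standard-deviation threshold, and the function names are all hypothetical illustrations, not any vendor’s actual algorithm:

```python
from statistics import mean, stdev

def learn_baseline(readings):
    """Learn 'normal' as the mean and spread of historical sensor readings."""
    return mean(readings), stdev(readings)

def detect_anomalies(readings, baseline, threshold=3.0):
    """Flag readings that deviate from the baseline mean by more than
    `threshold` standard deviations (a statistically significant deviation)."""
    mu, sigma = baseline
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mu) > threshold * sigma]

# Vibration readings (mm/s) from a healthy motor -- hypothetical values.
history = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1, 1.9, 2.0]
baseline = learn_baseline(history)

# New data: the spike at index 3 gets flagged; no labels were ever needed.
new_data = [2.0, 2.1, 2.2, 6.5, 2.0]
print(detect_anomalies(new_data, baseline))  # [(3, 6.5)]
```

Note that the check would trip just the same if the motor were upgraded and its baseline shifted legitimately, which is exactly the “blind” false-alarm behavior described above.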
| Pros | Cons |
| --- | --- |
| No setup, labelling, or failure examples required | Predictions are probabilistic; it can’t confirm that a failure has occurred or will occur |
| Universal applicability: works with any asset or sensor that generates data | May raise false alarms when a baseline legitimately changes, and may need retraining afterwards |
#3. Supervised machine learning PdM
This approach is similar to unsupervised ML, but in addition to automatically learning from the data it collects, it also requires feedback from the user to learn when certain events (especially failures) have occurred. After a lengthy training period in which the technician registers each and every failure, the system will eventually learn to predict when a failure is about to occur. And by lengthy, I mean that the training period can be as long as a year—or even longer—and requires many failure examples to teach the machine. For applications where quick time-to-value is important or failures are infrequent, this might be a prohibitive factor. That said, once the system is fully trained, it can offer more accurate predictions than an unsupervised system.
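As a toy illustration of the difference, in the sketch below the user’s failure labels are part of the training data. The nearest-centroid model and all values here are hypothetical (real systems use far richer models and features), but it shows why many labeled failure examples are needed before predictions become possible:

```python
from statistics import mean

def train(labeled_windows):
    """Learn a centroid (mean reading) per label from technician-labeled data.
    Each item is (mean_vibration, label), label being 'healthy' or 'pre-failure'."""
    by_label = {}
    for value, label in labeled_windows:
        by_label.setdefault(label, []).append(value)
    return {label: mean(values) for label, values in by_label.items()}

def predict(model, value):
    """Classify a new reading by its nearest learned centroid."""
    return min(model, key=lambda label: abs(value - model[label]))

# Hypothetical training data: every failure had to be registered by a
# technician -- in practice this takes many examples over a long period.
labeled = [(2.0, "healthy"), (2.1, "healthy"), (2.2, "healthy"),
           (5.8, "pre-failure"), (6.1, "pre-failure"), (5.5, "pre-failure")]
model = train(labeled)

print(predict(model, 2.3))  # healthy
print(predict(model, 5.0))  # pre-failure
```

Unlike the unsupervised approach, the output is a named condition rather than a bare deviation, which is where the extra accuracy comes from once training is complete.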
| Pros | Cons |
| --- | --- |
| More accurate predictions than an unsupervised system once fully trained | Lengthy training period (a year or longer) requiring many registered failure examples |
| | Requires regular user feedback during training; poor fit where failures are infrequent or quick time-to-value matters |
#4. Physics-based, a.k.a. first principles PdM
For some people, predictive maintenance has a reputation for being complex, costly, and hardly worth the hassle. Most of that reputation comes from early physics-based PdM solutions. This method is the heavy hitter of predictive maintenance: it requires data labelling, custom models, and a whole lot of expertise (and money!) to implement. But it comes with a huge benefit: once the system is set up, its power and accuracy are unmatched. Essentially, a model is created that captures every little detail about your asset and the way it’s supposed to operate. That’s why this method is sometimes also referred to as “first principles” predictive maintenance. From this model, the system can immediately identify when something isn’t right with your asset: no probabilities, statistics, or guesswork required.
Most physics-based PdM solutions cater to a specific category of assets. Some, for example, only work with variable-frequency drives. Basically, your asset needs to be in the system’s model database. If it’s not, you need to build a custom model (which requires a lot of time, money, and expertise). And if for some reason your asset is functioning fine but nevertheless behaves a little bit differently than the model says it should (perhaps due to the operating conditions or recipe), you may get some inaccurate predictions. That’s why these systems are increasingly being combined with machine learning to create hybrid solutions.
If you have a very specific problem to solve, you have enough time and money to solve it, and the problem is costly enough to justify the expense, you can get very granular and very specific outcomes with this approach.
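As a toy illustration of the first-principles idea, the sketch below checks a pump against the affinity law that shaft power scales with the cube of speed. The nameplate figures, tolerance, and function names are hypothetical, and a real solution models far more detail, but the key difference is visible: the verdict comes from a physical model, not from statistics:

```python
def expected_power_kw(speed_rpm, rated_power_kw=15.0, rated_speed_rpm=1800):
    """Pump affinity law: shaft power scales with the cube of speed.
    The rated values are hypothetical nameplate figures."""
    return rated_power_kw * (speed_rpm / rated_speed_rpm) ** 3

def check_asset(speed_rpm, measured_power_kw, tolerance=0.10):
    """Flag a fault if measured power deviates from the physics model
    by more than `tolerance` (as a fraction of the expected value)."""
    expected = expected_power_kw(speed_rpm)
    deviation = abs(measured_power_kw - expected) / expected
    return "fault" if deviation > tolerance else "ok"

print(check_asset(1800, 15.2))  # within 10% of the 15.0 kW model -> ok
print(check_asset(900, 5.0))    # model predicts ~1.9 kW -> fault
```

Note that the second reading is flagged immediately, with no training data at all; the flip side is that the model itself had to be built and parameterized for this specific pump.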
| Pros | Cons |
| --- | --- |
| Unmatched power and accuracy once set up; no probabilities or guesswork | Requires data labelling, custom models, and significant money and expertise |
| Very granular, specific outcomes | Limited to assets in the model database; can mispredict when operating conditions diverge from the model |
Weighing your options
Okay, now that we’ve covered each of the four types of predictive maintenance in some detail, let’s compare them. I’ve ranked each solution in four categories: prediction accuracy, cost, time-to-value, and user upkeep. This is how they stack up:
Prediction accuracy/granularity (best to worst)
1. Physics-based
2. Supervised ML
3. Unsupervised ML
4. Human/experiential

Cost (lowest to highest)
1. Human/experiential
2. Unsupervised ML
3. Supervised ML
4. Physics-based

Time-to-value (shortest to longest)
1. Unsupervised ML
2. Supervised ML
3. Physics-based

Disqualified: human/experiential (it only delivers value when the technicians happen to be there to catch the problems)

User upkeep required (least to most)
1. Unsupervised ML
2. Physics-based
3. Supervised ML
4. Human/experiential
For most maintenance managers, the question top of mind is likely to be, “What’s the cost/benefit analysis of each solution?” So, let’s summarize the results in terms of cost (time, money, expertise, etc.) vs. effectiveness (prediction accuracy):
- Unsurprisingly, human/experiential PdM doesn’t offer a great tradeoff between cost and effectiveness. It requires constant work by trained technicians to pay off, and it’s only as accurate as the meter readings and spec sheets it’s based upon.
- Unsupervised ML offers a substantial jump in effectiveness for a bit more cost. It has the quickest time-to-value among the software solutions, and while there’s usually an implementation cost up front, once the system is running it requires very little input to work. Its universal applicability also makes it flexible enough to be used with any asset and to switch assets when needed.
- Supervised ML offers slightly improved prediction accuracy, at the cost of a much longer training period and therefore a longer time-to-value. It also requires regular user intervention during training.
- Finally, physics-based solutions offer great accuracy, but require a substantial investment in time, money, and expertise. Time-to-value is generally longer and flexibility is limited, but when it works, it really works.
Making decisions
Keep in mind that predictive maintenance is rarely the only strategy to be used on an asset—especially a critical asset. When we’re talking about maintenance strategies, they’re rarely mutually exclusive. If you implement predictive maintenance on a critical asset, it creates a fantastic safety net, but it’s still wise to keep your scheduled maintenance and inspections going on that asset. For this reason, you need to ask yourself if the asset's criticality warrants the most accurate, most expensive solution, or if there’s a sweet spot between cost and effectiveness that will meet your needs without being excessive.
Here are some questions to ask yourself when deciding between these four predictive maintenance solutions:
- How much prediction accuracy do I require? Do I need instant, certain predictions of failure? Or can I benefit from a probabilistic approach?
- What time-to-value is acceptable to me? A week? A year?
- How much money am I willing to spend? Am I prepared to make new hires?
- How much user input am I willing to commit to training and upkeep of this system? Is it a substantial amount, or do I want to “set it and forget it”?
Ultimately, each team will have to consider where the line between cost and effectiveness falls for them. But for most teams, we think method #2 offers the best return on investment.
Fiix Asset Risk Predictor
The new predictive maintenance solution from Fiix, Asset Risk Predictor, is based on method #2: the unsupervised machine learning method. As we explain in our Help Center article on ARP, the “system works on the concept of ‘normal behavior’, which is learned from repeated observations.” This approach keeps costs lower and time-to-value shorter than supervised ML and physics-based approaches, and it has the flexibility to work with any asset, improving over time as it gathers more data.
We also think it’s the most accessible approach to predictive maintenance—no Fortune 500 budgets or data scientists required. To give Fiix Asset Risk Predictor a try, book a free demo using the link below.
Do you have a preferred approach to predictive maintenance? Let me know in the comments down below.