Predictive maintenance is transforming how industries monitor equipment performance and manage machine health. Instead of relying on static service schedules or reacting to unexpected failures, teams can now anticipate issues before they happen. This shift is powered by the Industrial Internet of Things (IIoT), which enables real-time data collection, continuous performance tracking, and advanced analytics to detect early signs of trouble.
At its core, predictive maintenance aims to answer a simple but crucial question: “When should we service this machine?” The goal is to maintain assets at exactly the right time—not too early, which wastes resources, and not too late, which leads to downtime, costly repairs, and potential safety risks.
While machine learning and artificial intelligence often come up in predictive maintenance discussions, it’s important to understand that not every use case requires complex algorithms. Many valuable outcomes can be achieved using straightforward techniques like runtime tracking, threshold-based alerts, and basic anomaly detection. For teams without a data science background, IIoT platforms like Ubidots provide the tools to build practical, cost-effective solutions with minimal complexity.
This guide focuses on how IoT engineers, system integrators, and operations teams can implement strategies to analyze data and predict equipment failures using IIoT tools—without needing to become machine learning experts. From collecting runtime data to deploying models in production, each section offers actionable guidance to help you move from reactive to proactive maintenance.
1. The Maintenance Dilemma
Every maintenance decision carries a tradeoff. Service too early, and you waste time, parts, and labor. Service too late, and you risk costly breakdowns, production losses, or even safety incidents. The challenge is finding the right moment—when maintenance is truly needed and adds measurable value.
Maintenance professionals typically face three scenarios:
1. Over-maintenance
This occurs when machines are serviced too frequently, often based on fixed schedules or manufacturer guidelines. While this approach minimizes risk, it introduces unnecessary costs. Each visit may involve travel, labor, and part replacements that weren’t yet needed. Across dozens or hundreds of machines, these costs add up quickly—especially when equipment usage varies significantly across the fleet.
2. Under-maintenance
Infrequent or delayed maintenance is the most dangerous path. It leads to unexpected equipment failure, production downtime, emergency repairs, and—depending on the environment—legal or safety risks. A failed compressor in a manufacturing line can halt output for hours. A malfunctioning actuator in an excavator can stop operations entirely. In critical sectors, a single incident can result in millions of dollars in losses.
3. Optimal maintenance
The ideal approach is to service assets only when needed—no sooner, no later—identifying and addressing problems before they lead to failures. This strategy is based on actual usage and condition, not assumptions. It reduces waste, extends asset life, and improves reliability. While it’s not possible to eliminate all maintenance costs, the goal is to make them predictable and justifiable.
The difference between these paths isn’t just operational—it’s financial. In a factory with hundreds of machines, avoiding unnecessary service visits and preventing just a few major failures can save millions over time. Predictive maintenance provides the framework to achieve this balance. But it requires the right data, the right tools, and a shift from reactive habits to proactive strategies.
2. Foundations of Predictive Maintenance: How to Predict Equipment Downtime
Predictive maintenance depends on one essential ingredient: data. Without reliable data on how machines perform—and how they fail—there’s no basis for prediction. This is where the Industrial Internet of Things (IIoT) plays a critical role, enabling live monitoring and real-time data collection across assets, environments, and processes.
The key concept underlying predictive maintenance is condition monitoring. In the past, this meant periodic inspections or manual logging of machine behavior. Technicians would jot down vibration levels, temperatures, or oil conditions in notebooks or spreadsheets. These records were often incomplete or inconsistent.
Today, IIoT has moved condition monitoring online. IoT sensors now transmit vibration, temperature, energy consumption, and runtime metrics continuously. That information is streamed to centralized platforms like Ubidots, where it can be processed, visualized, and analyzed in real time. As a result, operators can detect anomalies the moment they arise—not days or weeks later.
But real-time monitoring is only the first step. To move from preventive to predictive, historical data must include both normal behavior and failure events. This combination is what allows models to learn and identify patterns to anticipate problems. A dataset with 10,000 hours of operation is useful. A dataset with 10,000 hours that also includes labeled failures is powerful.
In practice, many organizations find that data quality is a barrier. Machines may not be instrumented. Failures might go undocumented. Even when devices are connected, not all relevant variables are captured. That’s why the first step toward predictive maintenance is often just starting to log the right information—systematically and consistently.
It’s also important to understand the ecosystem involved. Predictive models require several components working together:
Data acquisition from sensors or third-party systems
Central data storage and access in a time-series platform
Feature preparation to enrich raw data
A trained model to generate predictions
A deployment mechanism to apply the model to new data
A visualization layer for alerts, insights, and reporting
While this may sound complex, it’s increasingly accessible. Tools like synthetic variables, automated triggers, and serverless environments make it possible to build predictive capabilities without building everything from scratch. Predictive maintenance doesn’t start with machine learning. It starts with instrumentation, context, and asking the right questions.
3. Getting Started with Data Collection
Predictive maintenance begins with one essential task: capturing the right data, at the right time, from the right sources. Without high-quality, labeled historical data, even the most advanced algorithms won’t produce reliable predictions. The sooner data collection starts, the sooner valuable insights can be extracted.
Focus first on collecting time-series data that reflects both machine performance and operating conditions. Key variables often include:
Runtime status (ON/OFF)
Cycle counts or production totals
Vibration RMS or peak values
Temperature of motors, bearings, or enclosures
Energy consumption (current, voltage, power factor)
Environmental context (humidity, air quality, ambient temperature)
Start simple. Even a binary ON/OFF signal from a current or vibration sensor can produce valuable insights. For many use cases, this alone enables usage-based maintenance or anomaly detection.
Equally important is collecting event and failure logs. A dataset with 10,000 hours of sensor readings is useful—but it becomes powerful only when paired with a timestamped record of failures, maintenance activities, and known anomalies. These annotations provide the context that turns raw numbers into learnable patterns.
Use standardized naming conventions and maintain consistency in how variables are labeled, timestamped, and organized. Centralize all data in a platform like Ubidots, where it can be aggregated, visualized, and exported for data analysis or modeling.
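To make that concrete, here is a minimal sketch of what a consistently labeled, timestamped payload could look like when posted to Ubidots over HTTP. It assumes the standard Ubidots v1.6 devices endpoint; the token, device label, and variable labels are placeholders to adapt to your own naming convention.

```python
import time
import requests

UBIDOTS_TOKEN = "YOUR-ACCOUNT-TOKEN"   # placeholder, not a real token
DEVICE_LABEL = "compressor-01"         # hypothetical device label

# One reading with consistent variable labels and an explicit timestamp (ms)
payload = {
    "runtime_status": {"value": 1, "timestamp": int(time.time() * 1000)},
    "vibration_rms": {"value": 0.42},
    "motor_temperature": {"value": 61.3},
    "energy_current": {"value": 12.7},
}

# Send the reading to the Ubidots HTTP API (v1.6 devices endpoint)
response = requests.post(
    f"https://industrial.api.ubidots.com/api/v1.6/devices/{DEVICE_LABEL}",
    headers={"X-Auth-Token": UBIDOTS_TOKEN},
    json=payload,
)
response.raise_for_status()
```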
For environments without existing instrumentation, begin by deploying wireless sensors to the most failure-prone or high-impact assets. Devices that support vibration, temperature, or current sensing often cover 80% of the needed signals for predictive use cases.
If additional data is needed—from external APIs, ERP systems, or manual logs—Ubidots Plugins can ingest that information and combine it with sensor data. This allows maintenance records, weather conditions, or production volumes to become part of the same analytic environment.
Finally, make data quality a priority. Ensure sensors are calibrated, reporting intervals are consistent, and outliers are addressed early. Clean, consistent, well-labeled data is the foundation of every successful predictive maintenance program.
4. Practical Techniques for Predictive Maintenance
Predictive maintenance doesn’t always require complex algorithms or deep data science expertise. In many industrial environments, significant gains can be achieved through practical, accessible techniques. These methods help detect early signs of degradation, extend machine life, and reduce unnecessary interventions.
The following sections outline four effective strategies that can be deployed with IIoT platforms like Ubidots. Each method can operate independently or as part of a broader IoT predictive maintenance system. The goal is to start with what’s measurable, extract actionable insights, and scale from there.
4.1. Usage-Based Maintenance
Usage-based maintenance is one of the simplest and most impactful starting points for predictive analytics strategies. Rather than servicing equipment on a fixed calendar schedule, assets are maintained based on how much they’ve actually been used.
This approach is especially useful when the same maintenance schedule is applied across machines that operate under very different workloads. Servicing a lightly used machine every 60 days makes little sense if its twin on the factory floor runs 24/7.
To implement usage-based maintenance, the key is to track runtime—the total time a machine has been operating. This can be done with basic binary inputs:
A current sensor that detects whether the machine is drawing power
A vibration sensor that signals when the machine is in motion
A digital input triggered by machine state (e.g., ON/OFF)
Once collected, runtime signals can be transformed into numeric values using synthetic expressions. For example, a binary signal (1 = ON, 0 = OFF) sent every 10 minutes can be accumulated over time to estimate total operational hours. If a reading of 1 is received six times in an hour, the runtime increases by 1 hour.
Ubidots supports this with synthetic variables that use conditional logic and accumulation. A typical setup includes:
A raw signal from the sensor
A synthetic variable to convert that signal into 1s and 0s
A second synthetic variable to sum those values and produce total runtime
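The exact synthetic-variable syntax is configured inside Ubidots, but the underlying accumulation logic is simple. Here is a minimal Python sketch of that logic, with illustrative names, for a binary signal reported every 10 minutes:

```python
def update_runtime_hours(total_hours: float, on_off_reading: int,
                         reporting_interval_minutes: float = 10) -> float:
    """Accumulate runtime from a binary ON/OFF signal.

    Each reading of 1 adds one reporting interval to the running total,
    e.g. six readings of 1 within an hour add 1 hour of runtime.
    """
    if on_off_reading == 1:
        total_hours += reporting_interval_minutes / 60
    return total_hours

# Example: six ON readings at a 10-minute interval add exactly 1 hour
runtime = 0.0
for reading in [1, 1, 1, 1, 1, 1]:
    runtime = update_runtime_hours(runtime, reading)
print(runtime)  # 1.0
```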
With runtime tracking in place, it becomes easy to trigger alerts, schedule maintenance, or even automate replacement orders when a machine reaches a certain number of hours. The process is transparent, repeatable, and tailored to actual equipment usage.
This technique is widely applicable across smart manufacturing, construction, and utilities. It’s often the first IIoT use case companies deploy, thanks to its simplicity and fast ROI. It also serves as a foundation for more advanced strategies—feeding valuable historical data into future predictive models.
4.2. Vibration Monitoring
Vibration monitoring is a proven technique for identifying mechanical issues before they cause critical failures. Excessive or abnormal vibration is often one of the earliest signs of wear, imbalance, misalignment, or loose components in rotating machinery. By tracking vibration patterns over time, teams can spot deviations early and intervene before damage or downtime occurs.
Traditional vibration analysis has long relied on handheld devices. Technicians collect readings manually, analyze frequency spectrums, and issue condition reports. While effective, this method is reactive and labor-intensive. It also limits monitoring frequency, often to once per month or quarter.
With IIoT, vibration analysis becomes continuous and automated. Wireless sensors stream vibration data around the clock, enabling real-time condition monitoring without technician intervention. Instead of waiting for symptoms to escalate, the system can flag anomalies immediately.
There are two primary ways to approach vibration monitoring:
1. Time-domain monitoring (recommended for most use cases)
This method uses statistical summaries of raw sensor data—such as maximum, minimum, and RMS (root mean square) values—captured at fixed intervals. It’s bandwidth-efficient, easy to interpret, and ideal for ongoing machine health assessments.
For example, an IIoT vibration sensor might record acceleration data for 500 milliseconds every 10 minutes. It then transmits summary values for each axis (X, Y, Z), allowing you to compare them against normal operating ranges. If RMS values spike unexpectedly or consistently drift upward, the machine may require inspection.
Ubidots users can apply conditional expressions to trigger alerts when vibration exceeds defined thresholds. These thresholds are often based on standards (e.g., ISO 10816) or experience with similar equipment.
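As an illustration, the sketch below computes per-axis summary values and applies a threshold check. The threshold shown is an arbitrary placeholder; real limits should come from ISO 10816 severity tables or a measured baseline for the specific machine.

```python
import math

def summarize_axis(samples: list[float]) -> dict:
    """Summarize a short burst of acceleration samples for one axis."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return {"rms": rms, "max": max(samples), "min": min(samples)}

# Illustrative alert threshold; units depend on the sensor (e.g., mm/s or g).
RMS_ALERT_THRESHOLD = 4.5

def vibration_alert(samples: list[float]) -> bool:
    """Return True when the RMS of the burst exceeds the defined threshold."""
    return summarize_axis(samples)["rms"] > RMS_ALERT_THRESHOLD
```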
2. Frequency-domain analysis (advanced and data-intensive)
This method involves applying a Fast Fourier Transform (FFT) to raw acceleration data to identify frequency components. Specific vibration frequencies can indicate distinct failure modes, such as bearing faults or motor imbalance.
While powerful, frequency analysis requires higher sampling rates, increased bandwidth, and more complex processing. For most assets, this level of detail is only warranted when a precise diagnosis of the failure mode is required.
For teams looking to explore frequency analysis, Ubidots supports cloud-side processing through UbiFunctions. Python scripts can run FFTs on incoming data and report frequency amplitudes as new variables. This is ideal for pilot projects or diagnostics on critical assets.
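For reference, a minimal sketch of such cloud-side processing might look like the following, assuming the raw samples and the sensor's sampling rate are available to the function (names are illustrative):

```python
import numpy as np

def dominant_frequencies(samples, sampling_rate_hz, top_n=3):
    """Return the strongest frequency components of a vibration burst.

    samples: raw acceleration readings for one axis
    sampling_rate_hz: rate at which the sensor captured them
    """
    samples = np.asarray(samples, dtype=float)
    samples -= samples.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(samples))        # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sampling_rate_hz)
    top = np.argsort(spectrum)[-top_n:][::-1]      # indices of the strongest peaks
    return [(float(freqs[i]), float(spectrum[i])) for i in top]
```

The resulting frequency-amplitude pairs can then be written back to Ubidots as new variables for trending or alerting.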
When used effectively, vibration monitoring can act as both a preventive and predictive tool. It enables early fault detection with minimal setup and helps prioritize maintenance actions based on actual machine condition—not guesswork.
4.3. Anomaly Detection with Moving Averages
Not all failures happen suddenly. In many cases, equipment downtime begins with gradual performance degradation—just enough to escape notice during routine checks. Anomaly detection bridges that gap by flagging unexpected behavior based on historical patterns. One of the simplest and most effective techniques for this is using moving averages.
A moving average smooths out fluctuations in sensor data by calculating the average of the last N readings. It acts like a “slow follower” that highlights trends while filtering out short-term noise. When the live signal suddenly deviates from its moving average, it may indicate that something has changed—potentially a sign of early failure.
For example, consider a vibration sensor on a conveyor motor. During normal operation, the RMS values hover steadily around a baseline. But if a bearing starts to fail, the raw data might spike while the moving average lags behind. The growing difference (or delta) between the two becomes an indicator of abnormal behavior.
In Ubidots, anomaly detection with moving averages is easy to implement using synthetic variables. The typical setup includes:
A raw sensor variable (e.g., temperature, pressure, vibration RMS)
A synthetic variable that calculates the moving average of the last N values
An expression that computes the difference between the live signal and its moving average
From there, alerts can be triggered when the delta exceeds a defined threshold. This approach works well in time-series environments where data flows regularly, such as every minute or every 10 minutes.
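The same logic can be expressed in a few lines of Python. The sketch below is illustrative only; the window size and delta threshold are placeholders to tune per asset:

```python
from collections import deque

class MovingAverageDetector:
    """Flag readings that deviate too far from their recent moving average."""

    def __init__(self, window: int = 10, delta_threshold: float = 2.0):
        self.history = deque(maxlen=window)
        self.delta_threshold = delta_threshold

    def update(self, value: float) -> bool:
        """Return True if the new value is anomalous versus the moving average."""
        is_anomaly = False
        if len(self.history) == self.history.maxlen:
            moving_avg = sum(self.history) / len(self.history)
            is_anomaly = abs(value - moving_avg) > self.delta_threshold
        self.history.append(value)
        return is_anomaly

# Example: a steady signal followed by a sudden spike
detector = MovingAverageDetector(window=5, delta_threshold=1.0)
for v in [10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 13.5]:
    print(v, detector.update(v))   # only the final spike is flagged
```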
The sensitivity of the system can be tuned by adjusting the averaging window. A shorter window (e.g., 5 readings) reacts quickly but may produce false alarms. A longer window (e.g., 30 or more readings) provides stability but may miss rapid shifts. The right balance depends on the asset and the criticality of the parameter being monitored.
This method is lightweight, adaptable, and requires no historical failure data. It’s particularly useful when working with legacy systems or new installations that haven’t experienced many faults yet. While not a prediction in the strictest sense, anomaly detection gives teams a proactive edge—catching issues early and preventing costly surprises.
5. From Statistics to Machine Learning
As maintenance programs mature, teams often look beyond thresholds and anomaly detection toward true predictive modeling. This is where machine learning (ML) comes in—not as a replacement for simple techniques, but as a natural progression. ML allows you to uncover hidden patterns in data, anticipate failures before symptoms arise, and continuously refine predictions as new data flows in.
The transition doesn’t require a full data science team. With the right tools and a clear understanding of the problem, IoT engineers can train and deploy models using familiar technologies like Python and serverless environments.
5.1. Understanding the ML Landscape (Without the Jargon)
Machine learning is a branch of artificial intelligence in which algorithms learn patterns from data in order to make predictions or decisions. In predictive maintenance, ML models analyze sensor data to recognize patterns that tend to precede equipment failures. Trained on labeled historical data, they improve as more examples of normal operation and failure accumulate.
The machine learning ecosystem is broad—and at times overwhelming. There are hundreds of libraries, tools, and techniques. But for predictive maintenance, the path is often much narrower than it appears.
At its simplest, a machine learning model is a function that maps input data to a prediction. In the context of maintenance, that prediction might be:
Will this machine fail tomorrow?
Will vibration exceed a critical threshold within the next 72 hours?
Is this combination of sensor readings abnormal for this time of day?
To build such a model, historical data is required—specifically, data that includes both normal and failure conditions. From there, the process includes selecting features, training a model, evaluating its performance, and deploying it for live predictions.
ML models fall into two broad categories:
Classification models predict discrete outcomes (e.g., fail vs. no fail)
Regression models predict continuous values (e.g., time to failure in hours)
Popular algorithms include decision trees, random forests, and gradient boosting. These methods are favored for their balance of performance and interpretability. For many predictive maintenance tasks, they outperform more complex neural networks—especially when working with small or structured datasets.
It’s also important to distinguish between tools used for experimentation and those used for deployment. Training often happens offline, in Jupyter notebooks or similar environments on your laptop. Deployment, on the other hand, requires a lightweight, automated way to apply the model to incoming sensor data in real time.
The key takeaway is this: you don’t need to understand every ML library to succeed. Focus on solving one problem at a time. Choose a question worth answering. Start simple, iterate often, and build on what works.
5.2. A Real-World, Internet of Things-Based ML Example
To illustrate how accessible machine learning can be in an IIoT environment, consider the following example: predicting poor visibility conditions using weather data. While not a machine failure, this scenario mimics a typical predictive maintenance challenge—using time-series inputs to forecast an unwanted event.
The problem was framed as a classification task. The goal: predict whether visibility in Boston would fall below 5,000 meters on the following day. Input variables included wind speed, cloudiness, rain status, humidity, and time features such as hour, day, and month.
The dataset had over 300,000 records collected via Ubidots’ weather plugin. To simplify modeling, a synthetic variable was created to classify visibility as either “normal” (0) or “low” (1). This enabled the use of a classification algorithm, rather than a regression model.
Before training the model, additional features were added to improve accuracy:
Day of the week, to capture behavioral patterns (e.g., recurring Monday issues)
Month of the year, to account for seasonal effects
Hour of the day, which can impact visibility due to light and temperature
Rolling averages and standard deviations, to highlight trends and variability
Training was conducted offline using Python and the scikit-learn library. After testing several options, a Random Forest classifier delivered the best performance. With minimal tuning, it achieved an accuracy of approximately 79% on unseen data.
The trained model was then exported using joblib, producing a small .pkl binary file ready for deployment.
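A condensed sketch of that training-and-export flow is shown below. It assumes the prepared dataset has already been exported to a CSV file containing the engineered features and a binary low_visibility label; the file and column names are hypothetical.

```python
import pandas as pd
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical export of the prepared dataset: weather readings plus
# engineered time features and a binary "low_visibility" label.
df = pd.read_csv("boston_weather_features.csv")

# Rolling averages and standard deviations could be added to this list as well.
features = ["wind_speed", "cloudiness", "rain", "humidity",
            "hour", "day_of_week", "month"]
X = df[features]
y = df["low_visibility"]

# Keep chronological order so the test set represents "future" data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Export a small binary ready to be loaded from UbiFunctions
joblib.dump(model, "visibility_model.pkl")
```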
Once the model was validated, it was deployed using UbiFunctions—Ubidots’ serverless Python environment. A function was configured to run hourly, fetch the most recent weather data, apply the model, and write the prediction to a new Ubidots variable.
The result: a fully automated prediction pipeline, built without external ML platforms or MLOps tools. While the accuracy wasn’t perfect, it was good enough to support operational decisions. More importantly, it demonstrated that ML can be applied in practical IIoT settings with minimal overhead—especially when paired with structured data and clearly defined outcomes.
6. Deploying Your Predictive Model
Once a model is trained and tested, the next step is turning it into something useful—running in production, ingesting new data, and generating real-time predictions. Deployment is where predictive maintenance becomes actionable. Without it, even the most accurate model remains a spreadsheet experiment. This is where UbiFunctions comes into play.
6.1. UbiFunctions Overview
UbiFunctions is a serverless environment within Ubidots that allows you to run code in the cloud—no servers, no infrastructure. It comes preloaded with common Python libraries and is designed to handle tasks like data transformation, event routing, and, critically, model inference. By deploying your model to UbiFunctions, you can process data automatically, generate predictions on a schedule, and write those results directly to Ubidots variables. This enables seamless integration between analytics and action.
6.2. Running Predictions in Production
Running a model in production means applying it to live data—automatically, consistently, and with minimal overhead. In an IIoT environment, this typically involves fetching new sensor readings, processing them with the trained model, and storing the prediction as a new variable in your platform, where it can directly inform maintenance decisions.
With UbiFunctions, this workflow can be implemented as a scheduled job. For example, a function can be triggered every hour to:
Retrieve the latest values from relevant Ubidots variables.
Preprocess those values into the expected format (e.g., add time-based features).
Load the model file from a URL or Ubidots file storage.
Run the prediction using Python’s joblib or similar libraries.
Store the result (e.g., 0 or 1) in a designated Ubidots variable for visualization or alerts.
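A minimal sketch of such a function is shown below. It assumes the UbiFunctions main(args) entry point, the standard Ubidots v1.6 REST endpoints, and hypothetical device, variable, and model-file locations; the feature preparation must mirror whatever was used at training time.

```python
import io
from datetime import datetime

import joblib
import requests

TOKEN = "YOUR-UBIDOTS-TOKEN"                               # placeholder
DEVICE = "weather-station"                                 # hypothetical device label
BASE = "https://industrial.api.ubidots.com/api/v1.6"
MODEL_URL = "https://example.com/visibility_model.pkl"     # hypothetical model location

def get_last_value(variable_label):
    """Fetch the most recent value of a variable from Ubidots."""
    url = f"{BASE}/devices/{DEVICE}/{variable_label}/lv"
    response = requests.get(url, headers={"X-Auth-Token": TOKEN})
    response.raise_for_status()
    return float(response.text)

def main(args):
    # 1. Retrieve the latest sensor readings
    readings = [get_last_value(v) for v in ("wind_speed", "cloudiness", "rain", "humidity")]

    # 2. Add time-based features; order and engineering must match training exactly
    now = datetime.utcnow()
    features = readings + [now.hour, now.weekday(), now.month]

    # 3. Load the exported model
    model = joblib.load(io.BytesIO(requests.get(MODEL_URL).content))

    # 4. Run the prediction (0 = normal visibility, 1 = low visibility)
    prediction = int(model.predict([features])[0])

    # 5. Write the result back to a Ubidots variable for dashboards and alerts
    requests.post(f"{BASE}/devices/{DEVICE}",
                  headers={"X-Auth-Token": TOKEN},
                  json={"low_visibility_prediction": prediction})
    return {"prediction": prediction}
```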
This process allows the system to react in real time. If the model predicts an upcoming failure, dashboards can highlight it, SMS/email alerts can be triggered, or downstream actions can be automated via Ubidots Events.
The power of this approach lies in its simplicity. No need to set up external servers or manage infrastructure. Once deployed, the model can operate continuously in the background—delivering insight where and when it’s needed. And because everything runs inside Ubidots, all predictions are immediately available for use in dashboards, reports, or automated workflows.
7. Best Practices and Final Recommendations
Building an effective predictive maintenance strategy doesn’t require perfection—it requires consistency, clarity, and iteration. Connected sensors and cloud-based analytics give manufacturers the visibility to monitor equipment health, anticipate failures, and act before problems escalate. The most successful implementations start small, focus on measurable goals, and evolve over time as data accumulates and insights grow.
Here are key best practices to guide your journey:
Start logging data early
You can’t predict what you haven’t measured. Begin by collecting runtime, vibration, temperature, and fault events—even if you’re not yet ready to build models. The more history you have, the stronger your predictive foundation becomes.
Capture both normal and failure conditions
A million data points without a single failure event won’t train a useful model. Ensure your dataset includes examples of what doesn’t work—not just what does. This means logging downtimes, service events, and anomalies with as much context as possible.
Frame predictive questions clearly
Avoid vague objectives like “predict failure.” Instead, define specific questions: “Will the machine exceed 80°C in the next 24 hours?” or “Is failure likely within the next 7 days?” The precision of your question will shape the effectiveness of your model.
Keep models as simple as the problem allows
Use thresholds, usage counters, and statistical alerts before jumping into machine learning. When you do use ML, start with proven, interpretable models like decision trees or random forests. Complexity should be justified by the value it adds.
Iterate based on feedback and outcomes
No model is perfect on the first try. Evaluate its predictions, track false positives and false negatives, and refine your features. As new data arrives, retrain and redeploy to keep the model aligned with reality.
Balance accuracy with operational cost
A model with 70% accuracy may sound promising—but if your machine only fails 1% of the time, that accuracy can lead to more confusion than clarity. Always weigh prediction performance against the cost of acting on that prediction.
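As a rough, back-of-the-envelope illustration with assumed numbers (a 1% failure base rate and a model that is right 70% of the time on both classes):

```python
# Assumed numbers for illustration: 10,000 machine-days, a 1% failure
# base rate, and a model correct 70% of the time on both classes.
days = 10_000
failure_days = int(days * 0.01)             # 100 days with a real failure
healthy_days = days - failure_days          # 9,900 healthy days

true_positives = int(failure_days * 0.70)   # 70 real failures flagged
false_positives = int(healthy_days * 0.30)  # 2,970 false alarms

precision = true_positives / (true_positives + false_positives)
print(f"{precision:.1%} of alerts point to a real failure")   # roughly 2.3%
```

In other words, accuracy alone says little when failures are rare; weigh precision, recall, and the cost of acting on each false alarm.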
Embed predictions into workflows
Predictions alone are not enough. They need to trigger alerts, update dashboards, and guide maintenance actions. Integrate model outputs into your Ubidots Events and visualization tools to turn insight into impact.
Own the process end-to-end
Whenever possible, empower your IoT team to manage data acquisition, modeling, and deployment. With platforms like Ubidots, predictive maintenance can remain in the hands of engineers—without depending on separate data science or IT departments.
By focusing on the fundamentals and applying the right level of sophistication, predictive maintenance becomes both approachable and transformative. It’s not about replacing human judgment—it’s about giving maintenance teams better visibility, earlier warnings, and more time to respond.