In today's fast-paced and information-driven manufacturing environments, remote monitoring technologies are providing significant benefits to processors, helping them remain competitive. By leveraging data generated from a growing number of sensors embedded throughout industrial facilities, engineers and operators can gain real-time insight into critical process parameters, equipment performance and environmental conditions. This information empowers teams to optimize operations and reduce costs.
Furthermore, remote monitoring enables the early identification of potential safety hazards, prompting plant personnel to take swift action and prevent process disruptions. With efficient monitoring and centralized control from remote operations centers, subject matter experts (SMEs) — including process engineers and data scientists — are increasingly overseeing operations across entire enterprises around the clock, reducing the risk of unplanned downtime and equipment failures.
Successfully implementing enterprise-wide remote monitoring is not an easy undertaking, but innovative advanced analytics platforms are helping organizations address the challenges and drive digital transformation.
Legacy infrastructure poses challenges
Modern process manufacturers increasingly rely on sophisticated networks of sensors, equipment and other devices that generate large volumes of data. For instance, heavy industries frequently require collecting data from multiple locations — often hazardous environments — and combining the information for analysis at a central point to maintain efficient and safe operations.
From process historians to data lakes and other databases, operational data is often stored in numerous sources that vary by equipment type and manufacturer, making it difficult to standardize and deploy a centralized remote monitoring environment. This prevents engineering teams from efficiently identifying and addressing critical issues, which can impact production and create safety risks.
Despite significant investments in data generation and even storage, geographically dispersed operations often remain “data-rich, information poor” without the necessary software tools to analyze and leverage the information effectively. These challenges are further compounded by the complexities involved in integrating modern monitoring systems with existing legacy infrastructure, such as:
- Issues connecting the two systems, with legacy platforms relying on outdated communication protocols, many of which are no longer supported.
- The need to upgrade some legacy sensors for compatibility with newer monitoring software.
- High costs and time-consuming commissioning to align both systems.
Traditionally, gleaning significant value from remote monitoring systems required specialized skillsets and coordination between engineering and data science teams, expertise that not all plant personnel possess.
Advanced analytics answers
Advanced analytics platforms address this skillset gap and other challenges by automatically connecting to and interpreting data spread among various locations, regardless of format or structure. This is achieved through alignment algorithms and protocols that ensure data consistency and accuracy wherever it is used.
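To illustrate one common alignment step, the sketch below resamples two signals recorded at different rates onto a shared timeline using linear interpolation. The sample rates, timestamps and values are hypothetical, and production platforms handle far more formats and edge cases than this minimal Python example:

```python
# Illustrative sketch of time alignment: resampling two signals recorded
# at different timestamps onto a shared time grid via linear interpolation.
# Timestamps and values are hypothetical assumptions for this example.

def interpolate(times, values, t):
    """Linearly interpolate a signal at time t (clamped at the endpoints)."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            frac = (t - times[i - 1]) / (times[i] - times[i - 1])
            return values[i - 1] + frac * (values[i] - values[i - 1])

def align(signal_a, signal_b, grid):
    """Return both signals resampled onto the same time grid."""
    ta, va = zip(*signal_a)
    tb, vb = zip(*signal_b)
    return ([interpolate(ta, va, t) for t in grid],
            [interpolate(tb, vb, t) for t in grid])

# A historian sampling every 10 s and a sensor sampling every 15 s,
# aligned onto a common 15 s grid for side-by-side analysis.
a = [(0, 1.0), (10, 2.0), (20, 3.0), (30, 4.0)]
b = [(0, 5.0), (15, 6.5), (30, 8.0)]
print(align(a, b, grid=[0, 15, 30]))  # → ([1.0, 2.5, 4.0], [5.0, 6.5, 8.0])
```

Once both signals share a timeline, downstream calculations can combine them sample by sample without manual wrangling.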
By automating data alignment and reducing the need for manual intervention, advanced analytics platforms provide significant benefits for remote monitoring. This automation not only frees up valuable SME time, but it also minimizes the risk of human error when manually wrangling, importing and cleansing data.
These modern software platforms provide an array of point-and-click tools for data contextualization, calculation and asset scaling, making it easier to extract valuable process insights. With intuitive interfaces, users of all skill levels are empowered to quickly become proficient data analysts, even without coding expertise.
In addition to ease of use, these platforms provide advanced capabilities for complex analysis and calculations, including statistical modeling, predictive analytics and machine learning. Extensibility features enable users to implement custom calculations, providing historical data analysis, along with the ability to forecast future outcomes and optimize processes accordingly.
Results: Valve health monitoring
Valves are essential components in many industrial systems, regulating the flow of fluids or gases through pipes and process equipment. However, accurately predicting valve performance in real-time can be challenging, making effective maintenance difficult to achieve.
Regular upkeep is critical to ensure valves continue to function properly, and to avoid unplanned downtime. While scheduled preventive maintenance — which does not account for usage or condition assessment — can be expensive and time-consuming, it does help reduce unexpected failures that can disrupt production, create revenue losses or cause damage to surrounding equipment or the environment.
Seeking to bolster valve reliability while also eliminating unnecessary over-maintenance, one oil and gas company implemented Seeq, an advanced analytics platform, into its operational workflows for condition-based valve health analysis. To generate a health score and preemptively detect valve failures, the platform continuously monitors valve parameters, including cycle duration, apparent stiction, rate of travel and cycle count.
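While the platform's scoring logic is configured case by case, the general idea of a condition-based health score can be sketched in Python. All parameter names, nominal values, limits and the equal weighting below are illustrative assumptions, not the actual calculation:

```python
# Illustrative sketch of a condition-based valve health score.
# Thresholds, limits and equal weighting are hypothetical assumptions,
# not the platform's actual scoring logic.

def parameter_score(value, nominal, limit):
    """Score 1.0 at the nominal value, declining linearly to 0.0 at the limit."""
    if limit == nominal:
        return 1.0
    score = 1.0 - (value - nominal) / (limit - nominal)
    return max(0.0, min(1.0, score))

def valve_health(cycle_duration_s, stiction_pct, travel_rate_pct_s, cycle_count):
    # Each monitored parameter is scored against assumed nominal/limit values.
    # Rate of travel is negated because a *lower* rate indicates degradation.
    scores = {
        "cycle_duration": parameter_score(cycle_duration_s, nominal=4.0, limit=12.0),
        "stiction": parameter_score(stiction_pct, nominal=0.5, limit=5.0),
        "rate_of_travel": parameter_score(-travel_rate_pct_s, nominal=-10.0, limit=-2.0),
        "cycle_count": parameter_score(cycle_count, nominal=0, limit=250_000),
    }
    # Overall health: simple average; equal weights assumed here.
    overall = sum(scores.values()) / len(scores)
    return overall, scores

health, detail = valve_health(cycle_duration_s=6.5, stiction_pct=1.2,
                              travel_rate_pct_s=8.0, cycle_count=90_000)
print(f"health score: {health:.2f}")  # → health score: 0.73
```

A score near 1.0 indicates a healthy valve, while a declining score flags a candidate for preemptive maintenance.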
Within the software platform, these parameters are displayed using Treemaps, which provide a snapshot of overall valve health so maintenance teams can easily prioritize their efforts (Figure 1).
Additional dashboards provide detailed trends for all valve health parameters, facilitating pattern detection and visualization of factors that affect valve performance. This helps identify when corrective actions are needed and ensures maintenance resources are used efficiently.
In another case, a leading agriculture company used this same approach to reduce fugitive emissions by monitoring nitrogen blanket control valves. This saved the company an estimated $120k annually in wasted nitrogen for each valve that would have otherwise faulted.
Results: Distillation column efficiency monitoring
Distillation columns are commonly used in the refining and petrochemical industries to separate products and materials. Differential pressure (DP) offers valuable insight into a column's performance and operational efficiency, representing the pressure difference between two points in the column, typically measured across the trays or packing.
A significant increase in DP often indicates a potential flooding condition. Flooding occurs when excessive liquid accumulation disrupts vapor flow, reducing separation efficiency. This can lead to poor product quality and reduced overall production.
By monitoring DP closely, operators can detect early signs of flooding and take timely corrective actions. This proactive approach helps minimize damage when equipment malfunctions, feed composition changes or blockages occur. Additionally, it can reduce operational costs and plant downtime.
Implementing a DP monitoring dashboard can be difficult because readings are highly prone to noise or fluctuations, but developers can address this challenge by applying signal filtering techniques to differentiate between significant changes and random variations. While a sharp rise in DP typically indicates flooding, other factors, such as changes in feed composition, can also cause fluctuations. To differentiate between flooding and normal variations, both data analysis and process experience are needed.
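One simple version of this filtering approach can be sketched as follows: smooth the raw DP signal with a rolling average, then flag flooding only when the smoothed value stays above a limit for several consecutive samples. The window size, limit and persistence values are illustrative assumptions:

```python
# Sketch of DP signal filtering: a rolling average suppresses random
# variation, and a persistence check ensures only sustained exceedances
# are flagged as potential flooding. All tuning values are hypothetical.

def rolling_mean(signal, window):
    """Simple moving average; early points use a shorter partial window."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def flooding_alerts(dp_readings, limit, window=5, persistence=3):
    """Return indices where smoothed DP has exceeded `limit`
    for at least `persistence` consecutive samples."""
    smoothed = rolling_mean(dp_readings, window)
    alerts, run = [], 0
    for i, value in enumerate(smoothed):
        run = run + 1 if value > limit else 0
        if run >= persistence:
            alerts.append(i)
    return alerts

# The brief noise spike at index 3 is filtered out; the sustained
# rise at the end of the series is caught.
dp = [10, 11, 10, 25, 10, 11, 22, 24, 26, 27, 28]
print(flooding_alerts(dp, limit=18))  # → [9, 10]
```

Distinguishing genuine flooding from feed-composition effects still requires process knowledge, but filtering of this kind keeps the dashboard from alarming on every noisy sample.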
Process engineers at a global refining company’s remote monitoring center implemented Seeq to monitor the performance of multiple distillation columns. To ensure sufficient data cleansing, the team applied automated signal smoothing to remove random variation from noisy DP signals, excluding downtime data. Using an XY plot, the team charted the cleansed DP signal along with the reflux flow rate of the distillation column, as shown in Figure 2.
The team color-coded the plot based on fixed time conditions to easily visualize and compare the DP characteristics against the reflux profile over time. They also added a high limit to the plot to identify anomalies, and then expanded the analysis across multiple columns by developing asset trees and transferring assets from one column to another (Figure 3).
By scaling the DP monitoring across more than 100 columns, the company standardized the workflow at its extensive monitoring center with over 200 engineers.
Results: Decoking optimization
Decoking — the process of removing coke deposits from the internal surfaces of furnaces and reactors — is essential for ensuring efficient and safe operations. While the specifics can vary depending on the furnace type and organizational practices, key parameters commonly monitored during decoking include pressure, steam and gas flow rates, furnace temperature, decoking duration, coke removal rates, effluent composition and coke quality.
Ineffective decoking can lead to negative outcomes, such as reduced heat transfer efficiency, which in turn lowers furnace capacity and production rates. Underperforming furnaces also consume more energy, requiring increased fuel usage to achieve optimal temperatures and maintain desired production levels. Poor decoking also results in more frequent maintenance shutdowns, leading to unplanned downtime and disruptions to the production schedule.
One global oil and gas company deployed Seeq to closely monitor its decoking procedures, decreasing engineering time spent creating dashboards by 20% and improving furnace decoke performance by 10%. The solution enabled engineers to:
- Create conditions for each furnace decoke parameter and assign scores based on a stringent performance matrix.
- Calculate overall decoke quality by amalgamating all scores.
- Visualize performance using built-in tools, such as Seeq’s Histogram, which includes a customizable display range for viewing specific time periods of interest (Figure 4).
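The first two steps, scoring each parameter and amalgamating the scores, can be sketched as shown below. The parameters, target bands and weights are hypothetical stand-ins for the company's actual performance matrix:

```python
# Illustrative sketch of decoke quality scoring: each monitored parameter
# is scored against a target band, then the weighted scores are amalgamated
# into an overall quality value. Bands and weights are hypothetical.

PERFORMANCE_MATRIX = {
    # parameter: (low_ok, high_ok, weight) -- all values assumed
    "peak_tube_temp_C": (880.0, 920.0, 0.4),
    "decoke_duration_h": (18.0, 30.0, 0.3),
    "effluent_co2_pct": (0.0, 2.0, 0.3),
}

def score_parameter(value, low_ok, high_ok):
    """Full marks inside the target band, zero outside it."""
    return 1.0 if low_ok <= value <= high_ok else 0.0

def decoke_quality(measurements):
    """Weighted amalgamation of per-parameter scores, on a 0-100 scale."""
    total = 0.0
    for name, (low_ok, high_ok, weight) in PERFORMANCE_MATRIX.items():
        total += weight * score_parameter(measurements[name], low_ok, high_ok)
    return 100.0 * total

run = {"peak_tube_temp_C": 905.0, "decoke_duration_h": 34.0, "effluent_co2_pct": 1.4}
print(round(decoke_quality(run), 1))  # duration out of band → 70.0
```

The resulting quality values can then be trended per furnace, much like the histogram view described above.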
Streamline innovation with advanced analytics
Remote monitoring has become a critical component of manufacturing workflows, enabling SMEs to collaborate and innovate from virtually anywhere. However, challenges remain in deploying effective monitoring systems.
Advanced analytics platforms aid organizations throughout the process industries in automating and streamlining collaborative workflows, aggregating and cleansing data from disparate locations. These platforms also help personnel view process information in its broader operational context, generating the insights needed for effective operational decision-making.
These modern technologies are enhancing process safety and efficiency, helping manufacturers boost uptime, increase production and maximize profitability in competitive and quickly evolving markets.