In the hydrocarbon and other process
industries, engineering and operations teams have a fundamental responsibility
to maximize asset profitability. However, reporting asset performance clearly and
accurately can be challenging, and in the absence of fully quantified key
performance drivers, many organizations resign themselves to generating qualitative
responses.
In today’s landscape, the proliferation of
digital technologies is providing more comprehensive and effective solutions to
quantitatively assess asset performance. These software platforms are
increasingly incorporating machine learning and artificial intelligence (AI),
enabling agile development of first-principles, empirical, hybrid and black-box models
to depict asset performance more accurately.
Legacy
models present several challenges. Despite
this digital progress, teams often struggle to operationalize
AI-generated models, a step that is crucial to turning
insights into productivity gains. Furthermore, these models are sometimes
difficult for teams to understand. When models remain misunderstood or unproven,
poor decisions often follow, leading to negative and unintended consequences.
Even after verifying a model, there is additional
work to do. Frequently, models originate in a text-based coding application
(e.g., Python), so they must be integrated into operational spaces in a
user-friendly way for use on a regular basis. Even the best models provide no value
to an organization until engineering and operations teams gain near-real-time
access to dashboards, insights and results.
To successfully quantify asset performance,
organizations must not only deploy modern digital solutions for use by
technical operations teams, but also follow a framework to realize the full
impact of their investment in digitalization. This framework comprises five stages
(FIG. 1): (1) understand the process, (2) build a fit-for-purpose model, (3) operationalize
the model, (4) identify deviations from the model, and (5) optimize assets with informed decisions.
The following use cases demonstrate the ways
in which performance criteria can be established and insights can be
operationalized.
Enhanced
filter performance monitoring leads to increased profitability and reduced
maintenance. Many processes include filtration steps, and in
almost every case, the filters plug over time. Typically, operations teams have
a single operating limit, or performance criterion, for a filter (e.g., the
pressure drop across the filter cannot exceed a particular failure limit). Maintaining
this sort of limit prevents mechanical integrity issues within the system;
however, it does not provide insight into the process.
By monitoring a process value like pressure
alone, engineers and analysts—henceforth referred to as subject matter experts (SMEs)—cannot
tell whether the process filter is fouling at a normal, accelerated or slow
rate. Using more sophisticated metrics, operations teams can anticipate plugging
caused by dirty feed and adjust maintenance schedules to better suit filter
conditions, saving money and increasing throughput.
Understand the process. To quantify the performance of a filter as it plugs, SMEs can apply concepts from Bernoulli’s equation, which relates the pressure drop across a filter to the square of the flow through it (equivalently, flow is proportional to the square root of pressure drop). Using an XY graph, the flow can be plotted on the x-axis and the pressure drop on the y-axis. The data points can be colored based on time to observe how the parameters are changing over time.
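For SMEs who prototype in a text-based environment such as Python (referenced earlier), a minimal sketch of this XY visualization might look like the following; the file and column names are hypothetical:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical historian export with timestamped flow and pressure-drop readings
df = pd.read_csv("filter_data.csv", parse_dates=["timestamp"])

# Color each point by elapsed days so drift in the flow/pressure-drop
# relationship becomes visible as the filter ages
elapsed_days = (df["timestamp"] - df["timestamp"].min()).dt.total_seconds() / 86400

fig, ax = plt.subplots()
sc = ax.scatter(df["flow"], df["pressure_drop"], c=elapsed_days, cmap="viridis", s=10)
fig.colorbar(sc, label="Elapsed time (days)")
ax.set_xlabel("Flow")
ax.set_ylabel("Pressure drop")
ax.set_title("Filter pressure drop vs. flow, colored by time")
plt.show()
```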
Build a fit-for-purpose model. Advanced analytics software platforms can be used to effortlessly identify specific periods of interest. Continuing with the filter efficiency example, a performance data baseline can be established right after a filter changeout by plotting the clean filter behavior across a range of operating parameters.
Operationalize the model. As the filter ages, its pressure drop vs. flow performance can be superimposed on the performance baseline, so operations and maintenance personnel can monitor changes in performance over time. Using an advanced analytics application, users can visualize many data dimensions in a single plot in near-real time (FIG. 2).
Identify deviations from the model. By subtracting the baseline pressure drop from the measured pressure drop at a similar flowrate, a quantitative measure can be calculated to define filter plugging severity over time. When this measurement exceeds a threshold determined by SMEs, the software can transmit alerts to the team, signaling the need for maintenance.
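Under the same assumptions, the baseline fit and deviation check could be sketched as follows; the clean-period window and alert threshold are illustrative values an SME would choose:

```python
import pandas as pd

# Hypothetical historian export: timestamp, flow and pressure drop per sample
df = pd.read_csv("filter_data.csv", parse_dates=["timestamp"])

# Baseline: fit dP = k * Q^2 to the clean-filter period right after a changeout
clean = df[df["timestamp"] < "2024-01-08"]  # hypothetical post-changeout week
x = clean["flow"] ** 2
k = (x * clean["pressure_drop"]).sum() / (x * x).sum()  # least-squares slope through the origin

# Plugging severity: measured pressure drop minus the baseline prediction at the same flow
df["baseline_dp"] = k * df["flow"] ** 2
df["severity"] = df["pressure_drop"] - df["baseline_dp"]

# Alert when severity exceeds an SME-chosen threshold (value is illustrative)
THRESHOLD = 5.0
alerts = df[df["severity"] > THRESHOLD]
if not alerts.empty:
    print(f"Maintenance alert: severity first exceeded {THRESHOLD} "
          f"at {alerts['timestamp'].iloc[0]}")
```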
Optimize assets with informed decisions. These quantitative insights can help operators identify the ideal timeframe for filter replacement. By doing so, practitioners of this methodology attain two direct optimization benefits.
Initially, one major oil and gas company
changed its filters at a predetermined interval, or upon reaching the pressure
drop mechanical integrity limit. After implementing a leading advanced
analytics platform^a, the manufacturer gained greater situational awareness
of filter performance and was able to quantify production
losses related to increased pressure drop across each filter (FIG. 3).
Next, the team evaluated the cost of
maintenance vs. downtime required to shut down a vessel for filter replacement
at various points in time. By comparing these two costs, the manufacturer can
make informed decisions about the ideal time to replace filters, increasing
productivity and profitability.
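As a toy illustration of this trade-off (every figure below is invented, not the manufacturer's data), the ideal replacement interval balances the amortized changeout cost against production losses that grow as the filter plugs:

```python
# Toy cost comparison for filter replacement timing; all numbers are invented.
REPLACEMENT_COST = 20_000  # parts + labor + downtime per changeout ($)
DAILY_LOSS_SLOPE = 150     # production loss grows roughly linearly with plugging ($/day per day)

def cost_per_day(interval_days: int) -> float:
    # Average daily cost = changeout cost amortized over the interval, plus the
    # average production loss, which ramps from zero to slope * interval.
    return REPLACEMENT_COST / interval_days + DAILY_LOSS_SLOPE * interval_days / 2

best = min(range(1, 120), key=cost_per_day)
print(f"Lowest average cost at roughly a {best}-day replacement interval")
```

In this toy model the minimum falls where the two cost terms balance; in practice, the severity trend from the baseline model would replace the assumed linear loss slope.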
Near-real-time plant performance management and optimization. The framework for establishing filter performance criteria can also be applied to larger systems and units. The next example details the ways this mode of thought can be applied to an entire production facility.
The first step to understanding asset health is establishing well-defined performance criteria. These criteria go by various names depending on the industry, including operating envelopes, unit fingerprints, alert setpoints and alarm limits, among others. The best demonstrated rate methodology for establishing asset performance criteria is straightforward: compare an asset’s current performance to its highest performance in the last month. This concept has gained popularity due to its simplicity, and it empowers teams to measure asset performance quickly and quantitatively. However, when employing this methodology alone, teams frequently leave value on the table.
This approach does not consider external factors, such as feed quality, ambient
temperature, pressure drop and other aspects that can impact asset performance.
Production losses can go unnoticed when a plant is producing
standard yields under favorable external operating conditions in which it
could be producing more than usual. The best demonstrated rate methodology is
retrospective, and it cannot always capture these impactful periods of stealth
profitability loss, especially as they occur.
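For reference, the best demonstrated rate comparison reduces to a rolling-window maximum; a minimal pandas sketch follows (file and column names hypothetical, and the data must be time-sorted):

```python
import pandas as pd

# Hypothetical time-sorted production history indexed by timestamp
rates = pd.read_csv("production.csv", parse_dates=["timestamp"], index_col="timestamp")

# Best demonstrated rate: highest performance observed in the trailing 30 days
rates["bdr"] = rates["production_rate"].rolling("30D").max()

# Gap between the best demonstrated rate and current performance
rates["gap"] = rates["bdr"] - rates["production_rate"]
print(rates[["production_rate", "bdr", "gap"]].tail())
```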
Understand the process. Operations and technical teams must have a
strong conceptual understanding of the factors that drive high asset
performance. For instance, they know that when an input variable (e.g., feed temperature,
pressure or quality) changes, production yield increases or
decreases. However, quantifying the impact on profitability is frequently
overlooked, or impossible when teams are restricted by traditional software tools.
Build
a fit-for-purpose model. Often,
focusing on the top drivers of plant production is an ideal starting point for
building a model, and its complexity depends on specific plant operations and the
data available. Production drivers—such as plant feedrate, feed quality,
pressure, temperature and downtime—vary widely by plant type. Traditionally,
engineers advocate for first-principles models, while data scientists
often prefer purely data-driven approaches.
Each application should be considered
case by case, and the best models are those that provide the most value to
operators and end users. At one gas treating facility, an end user examined plant
influent flow, ambient temperature and influent pressure in a leading advanced
analytics platform to create an insightful linear regression model to increase
production efficiency (FIG.
4).
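A hedged sketch of that kind of regression is shown below; the variable names are hypothetical, and the facility's actual model was built inside the analytics platform rather than in code:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical plant history with the three drivers examined at the facility
data = pd.read_csv("plant_data.csv", parse_dates=["timestamp"])

X = data[["influent_flow", "ambient_temperature", "influent_pressure"]]
y = data["production"]

# Linear regression relating the drivers to production output
model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)), "intercept:", model.intercept_)
print("R^2:", model.score(X, y))
```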
Operationalize the model. Before implementing the advanced analytics platform, the team struggled to process results on a real-time basis. In many plant environments, teams must wait for backward-looking reports before they can take corrective, preventative or optimizing actions. However, with advanced analytics software, SMEs can compare plant performance data to baselines and models in near-real time. Additionally, teams can set dynamic operating envelopes or limits to assess performance based on varying operating conditions.
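One common way to construct such a dynamic envelope is to band the model prediction by the residual spread observed during known-good operation. Continuing the regression sketch above (reusing model, X, y and data):

```python
import numpy as np

# Dynamic operating envelope: model prediction +/- a band sized from residuals
predicted = model.predict(X)
residual_std = np.std(y - predicted)

data["upper_limit"] = predicted + 2 * residual_std  # ~95% band if residuals are roughly normal
data["lower_limit"] = predicted - 2 * residual_std

# Flag samples that fall outside the envelope for SME review
out_of_envelope = data[(y > data["upper_limit"]) | (y < data["lower_limit"])]
print(f"{len(out_of_envelope)} samples outside the dynamic envelope")
```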
Identify deviations from the model. After quantifying performance metrics, SMEs can identify previously unnoticed periods in which a plant produces subpar outputs. With advanced analytics tools, these suboptimal periods can be quantified in terms of lost revenue and production yield to highlight the most impactful opportunities for improvement (FIG. 5).
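Continuing the same sketch, the lost-opportunity quantification can be a simple summation of the gap between modeled potential and actual output, valued at an assumed unit price:

```python
# Quantify suboptimal periods: shortfall vs. the model, valued at a unit price.
# Assumes each row represents one unit of time; the price is illustrative.
UNIT_PRICE = 3.50  # $ per unit of production
shortfall = (predicted - y).clip(lower=0)  # count only periods below modeled potential

data["lost_revenue"] = shortfall * UNIT_PRICE
worst = data.nlargest(5, "lost_revenue")[["timestamp", "lost_revenue"]]
print("Most impactful periods:\n", worst)
```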
Optimize assets with informed decisions. A global energy producer applied this methodology and, equipped with the results, justified several new operating paradigms to reduce production impacts. By applying a linear regression model, the producer can now continually react to real-time events that impact its profitability.
Tapping
data-based insights to optimize productivity and profitability. Maximizing asset profitability will always be a fundamental
responsibility for business teams. By deploying advanced analytics platforms,
these individuals, along with operational groups, are empowered to improve
situational awareness over multiple processes to better quantify the levers of
profitability. Building and operationalizing fit-for-purpose models can expose untapped
potential in facilities across industrial sectors, providing decision-makers
with critical insights to optimize their operations. HP
NOTE
^a Seeq
RUPESH PARBHOO is a Principal Analytics Engineer at Seeq, where he helps connect people with the right advanced analytics solutions to maximize value from their time series data. Before joining Seeq, he served in engineering and leadership positions at ExxonMobil in multiple upstream oil and gas process manufacturing facilities. Parbhoo earned a BS degree in chemical engineering from the University of Southern California.