A. Buenemann, Seeq Corp., Seattle, Washington
Uncertainty in global hydrocarbon markets has led companies to tighten their belts on capital expenditures just as new capacity comes online, driving prices lower and pinching profit margins. Companies are forced to get creative, using data to better understand their processes and overcome constraints without the budget to increase throughput by upgrading plant equipment. However, extracting prescriptive process optimizations from time series data is arduous, and it often requires the expertise of expensive internal or external data science teams. Alternatively, by deploying self-service advanced analytics applications, companies’ internal engineers and frontline subject matter experts (SMEs) can apply data science concepts through point-and-click tools and user-friendly interfaces. Combined with the process knowledge of these internal experts, the applications transform data into insights, helping plant personnel optimize their operations.
Profit drivers in refining and petrochemicals production. Across the process industries, the two primary drivers for increasing profits with existing plant equipment are producing more product and improving product quality. In the commoditized world of petroleum refining, there is little room to create profit gains with higher-quality hydrocarbon products. Instead, refineries typically focus on alleviating rate constraints to ensure maximum throughput and minimal downtime.
However, venturing further downstream in the petrochemical value chain, there is a closer correlation between quality and profits through product differentiation. These hydrocarbon derivatives are frequently made to custom quality specifications and tailored to unique customers or product applications, resulting in various product grades.
While it is obvious that producing product outside of specification limits harms profits, improving product quality while inside specification limits is a less obvious source of profit increase. For this and other reasons, collaboration with product business teams can help identify opportunities where tightened product specification limits can differentiate a product from others on the market, providing significant value gains.
The data access bottleneck. Physical bottlenecks in hydrocarbon processes are not the only constraints on using data to drive process optimizations; getting the right data to the right people for analysis is the first hurdle.
Time series sensor data from process historians is a primary data type, but it can be enriched by overlaying data from other sources, such as maintenance work orders, local weather, quality results and manufacturing execution system data. Loading these different data types into a single analysis environment historically meant assembling a giant spreadsheet. Each data source required manual tasks like executing individual queries, aligning timestamps via dozens of VLOOKUP and MATCH functions in Excel, and incorporating column after column of conditional logic and filtering to find the data subset of interest. Additionally, when the event of interest did not happen to fall within the period of data extracted from each source, process experts were forced to repeat the steps from the beginning.
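For illustration, the following Python sketch shows the kind of timestamp alignment this work entails, using pandas and entirely hypothetical historian and lab datasets (the sensor names, sampling rates and values are assumptions); a self-service analytics application performs the equivalent joins behind its interface.

```python
import numpy as np
import pandas as pd

# Hypothetical 1-min historian samples for one process sensor
historian = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=240, freq="1min"),
    "reactor_temp_degC": 150 + np.cumsum(np.random.normal(0, 0.1, 240)),
})

# Hypothetical lab quality results reported every 4 hours
lab = pd.DataFrame({
    "sample_time": pd.date_range("2024-01-01", periods=2, freq="4h"),
    "melt_index": [2.1, 2.4],
})

# Align each lab result with the most recent historian sample --
# the programmatic equivalent of the manual VLOOKUP/timestamp-matching
# exercise described above
aligned = pd.merge_asof(
    lab.sort_values("sample_time"),
    historian.sort_values("timestamp"),
    left_on="sample_time",
    right_on="timestamp",
    direction="backward",
)
print(aligned[["sample_time", "reactor_temp_degC", "melt_index"]])
```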
While automated tools now assemble the large datasets required to solve process optimization problems, this only addresses the first portion of the battle. Proper visualization tools are also needed to spark analysis ideas and drive toward insight. Event-driven views—like stringing common events together side-by-side or overlaying event profiles—can provide significant insight without the heavy lift of advanced model building. When advanced modeling or machine learning is needed, those efforts are best led by process SMEs in collaboration with central data science teams.
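As a simple sketch of the overlay idea, each event below is re-referenced to elapsed time from its own start so that multiple occurrences can be compared on a common axis; the signal, event windows and units are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical continuous sensor data
signal = pd.Series(
    [50, 52, 60, 71, 80, 83, 55, 51, 62, 74, 82, 84],
    index=pd.date_range("2024-01-01", periods=12, freq="1h"),
    name="bed_temp_degC",
)

# Hypothetical event windows (e.g., two heat-up cycles) found by a condition search
events = [
    ("2024-01-01 01:00", "2024-01-01 05:00"),
    ("2024-01-01 07:00", "2024-01-01 11:00"),
]

# Re-reference each event to elapsed time from its own start, then overlay
for i, (start, end) in enumerate(events, 1):
    segment = signal[start:end]
    elapsed_hr = (segment.index - segment.index[0]).total_seconds() / 3600
    plt.plot(elapsed_hr, segment.values, label=f"Event {i}")

plt.xlabel("Hours from event start")
plt.ylabel("Bed temperature, degC")
plt.legend()
plt.show()
```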
New technologies bring process optimization to the frontline. Software-as-a-service (SaaS)-based advanced analytics applications alleviate the pain points of multiple source queries and static datasets by establishing a live connection to all relevant databases. Cloud-based resources alleviate the dreaded performance degradation that occurs when analyzing hundreds of thousands of samples per sensor in a desktop-based spreadsheet, while web browser interfaces ensure SMEs can access process optimization analyses from anywhere with an internet connection.
Advanced data science algorithms wrapped into point-and-click tools help democratize data science capabilities for more employees. These tools enable them to use time series-specific calculation functions and visualization options to investigate, model and monitor process variables, helping identify opportunities to mitigate constraints.
Connectivity to asset hierarchy data and embedded asset grouping functionality enables SMEs to tailor asset templates to meet the scaling needs of their analyses without bothering system administrators for access. In addition, organizing signals into asset groups using common nomenclature ensures process optimizations identified at one site can be templatized and scaled to other sites within the organization, regardless of the data historian infrastructure make or model.
KEY USE CASES HIGHLIGHT PRODUCTIVITY AND QUALITY ADVANCES
One of the pitfalls of process optimization efforts is thinking too big from the onset. Rather than drawing a box around the entire plant or refinery, consider which portions have the lowest-rated throughput capacity or the smallest capacity for surge volumes. Focusing on just one or two unit operations constraining the facility can guide ideation and lower the barrier to beginning a process optimization analysis.
By starting small and scaling out, process SMEs armed with SaaS-based advanced analytics applications deliver significant profitability improvements, as exemplified in the following three use cases.
Maximizing throughput rates in fouling service. Many heavy hydrocarbon and polymer production units experience significant rate constraints due to steady-state fouling or coking. Remedies for this include offline maintenance, which entails prolonged unit shutdown and a significant hit to production capacity, and online procedures, which are typically less effective but involve only a small rate hit or temporary quality upset.
Heat exchangers and furnaces are prime candidates for this analysis because the fixed-equipment internals foul or coke universally throughout the industry. Using an advanced analytics application connected to process historian data and historical maintenance events, engineering teams can easily differentiate between runtime and maintenance periods, calculate fouling rates and identify process variables with a significant impact on the fouling rate. By extrapolating conditions and planned operations, the application can calculate optimal maintenance windows, providing operations teams with significant lead time to coordinate resources and materials.
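A back-of-the-envelope version of this fouling-rate calculation is sketched below in Python; the differential-pressure signal, run flag and cleaning limit are all illustrative assumptions, with the rate fit on runtime data only and then extrapolated to the limit to estimate the next maintenance window.

```python
import numpy as np
import pandas as pd

# Hypothetical daily exchanger differential pressure and a run/maintenance flag
days = pd.date_range("2024-01-01", periods=60, freq="D")
dp_psi = 5.0 + 0.12 * np.arange(60) + np.random.normal(0, 0.2, 60)
in_run = pd.Series(True, index=days)   # in practice, derived from maintenance records

# Fit a fouling rate (psi/day) using runtime samples only
run_days = np.arange(60)[in_run.values]
slope, intercept = np.polyfit(run_days, dp_psi[in_run.values], 1)

# Extrapolate to the cleaning limit to estimate when maintenance is needed
dp_limit = 15.0                        # assumed maximum allowable differential pressure
days_to_limit = (dp_limit - (intercept + slope * run_days[-1])) / slope
print(f"Fouling rate: {slope:.2f} psi/day; "
      f"estimated {days_to_limit:.0f} days until cleaning is required")
```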
In ethylene production, for example, applying decoking procedures to mitigate rate limitations and optimizing the run time between decokes can ensure operation at a higher average throughput rate over time. In the past, decoke timing was frequently based on estimating and calculating an optimal coil outlet temperature, aligned with visual observation, but even the ideal decoke trigger point was singular and static.
As equipment ages and undergoes irreversible coking, degradation rates steepen and models must be adjusted. The best models are dynamic, using online process data in addition to production and scheduling targets to dynamically calculate the optimal decoke frequency to meet planned production volumes as quickly as possible.
An example of this is shown in FIG. 1, where an empirical model of coil outlet temperature degradation is optimized for the time between decokes; this optimal run length is overlaid against live operation as an ideal degradation and recovery profile.
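To make the trade-off concrete, the sketch below assumes a simple linear degradation of throughput with run time and a fixed decoke duration, then searches for the run length that maximizes average throughput over a run-plus-decoke cycle; the rates and durations are illustrative assumptions, not values from FIG. 1.

```python
import numpy as np

# Assumed (illustrative) furnace behavior: throughput degrades linearly as coke
# builds, and each decoke takes a fixed amount of offline time
R0 = 100.0       # clean-coil throughput, t/h
k = 0.05         # throughput lost per hour of run time, t/h per h
decoke_hr = 36   # hours offline per decoke

def average_rate(run_hr):
    """Average throughput over one run + decoke cycle."""
    produced = R0 * run_hr - 0.5 * k * run_hr**2   # integral of (R0 - k*t) over the run
    return produced / (run_hr + decoke_hr)

# Search candidate run lengths for the one that maximizes average throughput
run_lengths = np.arange(24, 2000, 1.0)
best = run_lengths[np.argmax([average_rate(t) for t in run_lengths])]
print(f"Optimal run length of about {best:.0f} h between decokes "
      f"({average_rate(best):.1f} t/h average)")
```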
Steady-state quality improvements through statistical process control (SPC) and process capability analysis (Cpk). Tighter process control is key to pushing closer to physical or theoretical operating limits. Statistical methods, including SPC and Cpk, are widely used in the refining and chemical industries to measure capability and flag when a process variable is statistically out of control. These statistical drifts can be subtle and are often imperceptible using other detection methods, such as simple limit monitoring.
Typically, the challenge with these statistical analyses is not doing the math for a single run, but applying the math at scale across changing operating modes or product types for every run. Historically, this led companies in one of two directions: maintaining manual, offline calculations for a limited subset of sensors and products, or commissioning costly custom-built monitoring solutions from central data science or IT teams.
SaaS-based advanced analytics applications provide a third option, addressing the pain points associated with the prior two thanks to online calculations, cloud scaling capabilities and cost efficiency.
Asset hierarchy capabilities can help with templatizing and scaling SPC and Cpk calculations across many sensors of the same type or across different product types. Statistical limit and run-rule violation calculations built for a single measurement point can be replicated across all the sensors an organization is interested in monitoring.
Templatized methods within asset hierarchies also enable “slicing and dicing” sensor and quality data into distinct operating modes or product types, which are necessary for eliminating these sources of known and intended variability from statistical calculations. FIG. 2 depicts some tabular and graphical visualizations achievable with a Cpk calculation built for a single product grade, then scaled across other products.
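The core calculation being templatized is straightforward, as the following Python sketch shows for a single, hypothetical product grade with assumed specification limits; the asset hierarchy is what allows the same logic to be stamped out across grades and sensors.

```python
import numpy as np

# Hypothetical quality measurements for one product grade
np.random.seed(0)
density = np.random.normal(loc=0.952, scale=0.0015, size=200)

LSL, USL = 0.948, 0.958          # assumed specification limits for this grade
mu, sigma = density.mean(), density.std(ddof=1)

# Process capability: Cpk = min(USL - mu, mu - LSL) / (3 * sigma)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

# Basic SPC check (Western Electric rule 1): any point beyond +/- 3 sigma
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma
out_of_control = np.where((density > ucl) | (density < lcl))[0]

print(f"Cpk = {cpk:.2f}; {len(out_of_control)} sample(s) outside control limits")
```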
Minimizing downgrades during transient operations. One of the largest sources of quality downgrades for polymer plants is the transition period between different types, or grades, of products. Process engineers responsible for minimizing quality downgrades associated with transitions have two main optimization levers: the sequence and timing of set-point adjustments made during each transition, and the production schedule that determines which transitions occur and how frequently.
The biggest challenge in adjusting either of these levers is understanding when similar transitions have occurred and focusing the analysis on data from only one transition type. Event data objects called capsules—with a clearly defined start and end time—and product transition type metadata can be calculated from process signals based on set-point parameters and identifiable step changes. Once identified, these capsules can focus the analysis solely on the transition datasets of interest.
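As a simplified illustration of how such capsules might be derived, the sketch below opens a capsule at every step change in a hypothetical grade set-point signal and assumes a fixed transition duration; in practice, the end of each capsule would be detected from the process data itself.

```python
import pandas as pd

# Hypothetical grade set-point signal sampled hourly
setpoint = pd.Series(
    ["A"] * 6 + ["B"] * 8 + ["A"] * 6,
    index=pd.date_range("2024-01-01", periods=20, freq="1h"),
    name="grade_setpoint",
)

# Open a capsule at every set-point step change; assume (for illustration)
# that each transition lasts a fixed 3 hours
prev = setpoint.shift()
changes = setpoint[setpoint != prev].iloc[1:]   # drop the initial value
capsules = [
    {
        "start": ts,
        "end": ts + pd.Timedelta(hours=3),
        "transition": f"{prev[ts]} -> {grade}",
    }
    for ts, grade in changes.items()
]
for c in capsules:
    print(c["transition"], c["start"], "->", c["end"])
```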
Segmenting continuous sensor data into discrete events is crucial for product transition analysis because certain transitions occur infrequently, requiring the examination of years of data to get a representative sample of transitions. Critical independent and dependent process parameters can be overlaid during the identified transitions, making it possible to outline an optimal set-point adjustment sequence and to profile the best-case transition (FIG. 3). An ideal profile can help guide live transitions, getting them as close as possible to the best-case scenario.
Values such as transition duration or the volume of off-spec product produced can be calculated for each capsule and aggregated over time to identify best-case scenarios to use as targets. Understanding the distribution of transition durations across transition types can help inform supply chain and scheduling decisions. It can also help processors adjust production cycles to limit particularly painful transitions or line up more closely with product demand and inventory needs.
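A minimal sketch of that aggregation step, using a hypothetical table of per-capsule results, might look like the following.

```python
import pandas as pd

# Hypothetical per-capsule results, one row per historical grade transition
capsules = pd.DataFrame({
    "transition": ["A -> B", "A -> B", "B -> A", "B -> A", "A -> C"],
    "duration_hr": [4.5, 3.2, 6.0, 5.1, 8.3],
    "offspec_tonnes": [12.0, 8.5, 20.1, 16.4, 30.2],
})

# Best-case (minimum) and typical (median) performance per transition type,
# which can serve as targets for live transitions and scheduling decisions
summary = capsules.groupby("transition").agg(
    best_duration_hr=("duration_hr", "min"),
    median_duration_hr=("duration_hr", "median"),
    best_offspec_tonnes=("offspec_tonnes", "min"),
)
print(summary)
```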
Getting ahead in tight markets. Data-driven process optimization efforts are integral to growing profit margins in a tight capital environment. These analyses are best handled by focusing on small components of a production process, delivering results, then deploying a “nail it and scale it” approach to expand the analysis to additional equipment or facilities.
Identifying optimization opportunities requires extensive data analysis, most efficiently served by SaaS-based advanced analytics applications with live data source connectivity. Combining these applications—which deliver data science concepts in digestible point-and-click interfaces—with SME process knowledge is a recipe for successful debottlenecking. By leveraging these tools and the insights they generate, organizations can optimize their operations and are empowered to scale the resulting analyses across many additional processes within their asset fleets, maximizing efficiency and return on investment. HP
ALLISON BUENEMANN is the Chemicals Industry Principal for Seeq Corp. She has a process engineering background with a BS degree in chemical engineering from Purdue University and an MBA from Louisiana State University. Buenemann has a decade of experience working for and with bulk and specialty chemical manufacturers like ExxonMobil Chemical and Eastman to solve high-value business problems leveraging time series data. In her current role, she monitors the rapidly changing trends surrounding digital transformation in the chemical industry, translating them into product requirements for Seeq.