DEEPWATER—Pollak (OneSubsea)

Reinventing the subsea production system: The first 20,000-psi SPS

A new 20,000-psi subsea production system has been developed and is now being manufactured and factory-tested for installation in the Gulf of Mexico. The system was rigorously qualified by an independent third party to ensure all previous technology gaps were addressed. This article describes the technology’s development.

Although an onshore production system rated for 20,000 psi exists, the subsea version had remained unrealized until a joint industry program was initiated in June 2015 between an operator in the Gulf of Mexico and OEM OneSubsea®, SLB’s subsea technologies, production and processing systems business.

Now commercially available, the 20,000-psi subsea production system (SPS) includes new technologies required to meet the demands of high-pressure, high-temperature (HPHT) conditions. Development included conducting rigorous qualification according to API Technical Report 17TR8 guidelines for equipment used in HPHT environments, which API defines as pressures greater than 15,000 psi or temperatures greater than 350°F.

In conjunction with HPHT qualification, an independent third party (I3P) was commissioned to perform an engineering review of the equipment design, qualification and manufacturing plans, as required before the U.S. Bureau of Safety and Environmental Enforcement (BSEE) can approve operator plans or permits using the technology. The result of this extensive, comprehensive process, shown in Fig. 1, is a safer, more sustainable approach for producing 20,000-psi subsea fields.

Fig. 1. Numerous major technology developments and regulatory approvals have been accomplished along the project’s expedited timeline.

Closing technology gaps with teamwork. Extracting hydrocarbons from an offshore deepwater well with shut-in pressures higher than 15,000 psi requires that every component of the entire subsea system can handle these conditions across drilling, installation, commissioning, production, intervention and transportation. This requirement has revealed many technology gaps to be addressed in developing a 20,000-psi SPS.

To tackle this broad challenge, a multidisciplinary global team was assembled. The decision was made to develop all the necessary technologies and equipment, as far as possible, within the OEM’s organization, ensuring that the development process and methods were consistent and easy to monitor and keeping progress on schedule. The team’s HPHT design strategy was to initially leverage existing lower-capacity designs by reinforcing the geometry and using stronger materials. This was supplemented by reducing overall loads through minimizing pressurized areas, pressure balancing and offsetting weight, which also supported manufacturing sustainability. Because the design team members were not necessarily located at the manufacturing and fabrication plants, a “design anywhere, manufacture anywhere” approach was also implemented, greatly facilitated by digitalization.

Highlights of the new technology development and qualification required for the 20,000-psi SPS are as follows:

Couplers. Traditional use of third-party technology for couplers was avoided through the development of interchangeable wet-mateable electric and hydraulic couplers with debris resistance. New robust seals also had to be developed, but overall, the number of parts in the couplers was minimized to keep tolerance stack-up low. These couplers enable more efficient footprint allocation, which allows fitting more downhole lines in the subsea tubing hanger.

Gate valves. Although OneSubsea’s more than 60 years of OEM experience included a 20,000-psi-rated gate valve for surface operations, the demands of the subsea environment required new gate and actuator designs. The full suite of HPHT gate valves developed for the 20,000-psi SPS includes 2-in. hydraulically actuated and 5-in. manually and hydraulically actuated valves for the tree system. The manifold uses 5-in. hydraulically actuated and 7-in. manual and hydraulic valves.

Connection systems. Two subsea tree-to-wellhead connector configurations were developed with the capacity to withstand extensive pressure, bending and tension loading conditions while efficiently interfacing with industry-recognized third-party lockdown profiles.

Four ROV-operable vertical clamp connection systems with dual metal gaskets were also developed with verified bending and torsional capacity at high pressures for use on the tree system, jumper and flowline connections and choke insert.

Multiphase flowmeter. Integrated into the main production flow path of the vertical monobore subsea tree system, the Vx Omni* subsea multiphase flowmeter enables HPHT subsea multiphase flow testing, with a venturi throat design that can measure the flowrates of gas, oil and water in any combination of proportions, from 0% to 100%. In addition to the reliability provided by having no moving parts and fully redundant electronics, 90% of the components are standardized, and their number is reduced by two-thirds from prior-generation designs, Fig. 2.

Fig. 2. Providing highly accurate measurement at the wellhead, the HPHT multiphase flowmeter forms the core of an interconnected, informed digital SPS.
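As general background on the measurement principle (this is the textbook venturi relation, shown only as a reference and not as the Vx Omni’s proprietary algorithm; the symbols are generic and not taken from the article), a venturi infers the flowrate through the throat from the differential pressure measured across it:

Q = C_d \, A_t \, \sqrt{ \frac{2\,\Delta p}{\rho_m \left( 1 - \beta^{4} \right)} }

where Q is the volumetric flowrate, C_d the discharge coefficient, A_t the throat area, \Delta p the inlet-to-throat pressure drop, \rho_m the density of the flowing mixture (which, in multiphase service, must be obtained from an independent phase-fraction measurement) and \beta the throat-to-inlet diameter ratio.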

Material and welding qualification program. As part of the design process, the multidisciplinary team collaborated to collect foundational data years before verification against industry and regulatory HPHT requirements was initiated. The qualified materials include low-alloy steels (LAS) and nickel alloys, with nickel alloy cladding over LAS also qualified for use with all production equipment for ASTM Class HH steel. Additional testing was conducted for fastener documentation.

The results of this testing program documented material performance within air, sour production fluid, seawater and completion brine environments. From this basis, the fatigue characteristics and equipment performance in various environments were explored as an essential part of HPHT verification.

Integrated multiphase pumping system and production manifold. Rated to 16,500 psi, this permanently-installed large subsea structure houses a complete manifold section that connects up to four wells and a boosting section that enables increased field production, with a retrievable multiphase pump module, Fig. 3. This gravity-based structure will be supported on one of the largest suction piles in the Gulf of Mexico.

Fig. 3. The manifold pump station assembly integrates numerous components in a highly efficient configuration, from which weight has been shaved off.

Manufacturing the necessary large, heavy-wall forgings, pipes and fittings required developing and qualifying new manufacturing methods. The process valves for operating the pump were similarly qualified for a 7 1/16-in. bore size, and they incorporate a nonreturn valve and fail-open and fail-close hydraulic spring-return gate valves.

Additional new subsystems include a flow mixer to suppress slugs in the multiphase flow and a pressure equalization circuit, for safer pressurization and depressurization of the pump process section when using the pump control system during intervention. At a system level, the new control logic approach reduces the number of components by using the production tree subsea control module to control manifold valves within the production manifolds and the integrated manifold pump station.

Pump module. Installed with the integrated manifold pump station, the pump module is a retrievable subsea structure featuring a helicoaxial multiphase pump with a maximum flow capacity of 120,000 bopd, 2,320 psi of design differential pressure and a gas volume fraction range from 0% to 100%, Fig. 4. The pump housing is joined to the motor housing through a new flange, with a secure, dual-metal gasket. Inside the housing is an electric motor with 4 MW of shaft power that drives a multistage impeller pump joined by a flexible coupling.

Fig. 4. Based on the rotodynamic pumping principle and using helicoaxial technology, the pump module is inherently robust and wear-resistant.

The pump’s dielectric hydraulic fluid system lubricates the pump, cools the motor and acts as a barrier to prevent process fluid from entering the motor by maintaining a constant overpressure during all operation and intervention modes of the pump. The fluid is supplied from a topside hydraulic power unit through the umbilical. Mechanical valves ensure that the overpressure is maintained throughout rapid increases or decreases of process pressure in the pump.

For flow control, the pump uses a single-phase flowmeter on the discharge side.

Vertical monobore tree. The subsea tree was designed to house many of these newly developed technologies in a compact package that ensures that the total weight and footprint are within the capabilities of the system’s handling and testing facilities, Fig. 5. Its other major innovation is the tubing hanger system, with lockdown and orientation packages in a slim annular design, which, in combination with the small-footprint couplers that were developed, allows space for 13 downhole lines.

Fig. 5. The 20,000-psi vertical monobore tree is another industry first, succeeding OneSubsea’s introduction of the first 15,000-psi subsea vertical monobore tree two decades ago.

 

Packaging the 20,000-psi subsystems and demonstrating standards compliance. Project execution during the detailed engineering phase had to meet several new challenges while staying on time and on cost. An additional constraint was accounting for the added weight of the components required for the vertical monobore tree system to control and direct fluid flow at the elevated pressures.

The design strategy to improve on lower-rated designs by increasing wall thickness and using stronger, denser materials had the potential to add significant weight. But the system still had to be transportable on water, land and air; conform to offshore rig deployment and testing capacity; and dimensionally fit the maximum height allowance and fatigue loads of cranes.

Our innovative annulus routing path shortened the tubing head spool by placing the primary annulus valves horizontally aligned with each other, instead of in the conventional vertical arrangement, reducing both height and weight. Establishing self-balancing layouts mitigated balance weight requirements, and the weight of the thermal insulation on the pipework of the manifold structural frame was optimized via thermal analysis studies.

It was during this project phase that compliance with BSEE regulatory guidance, issued as Notices to Lessees and Operators, was addressed. A team of SPS subject matter experts developed a foundational design philosophy for globally meeting HPHT design requirements, including prior alignment with API 17TR8, that would eliminate numerous interpretation cycles. Coordination and collaboration among the three entities (the I3P, operator and OEM) were essential in establishing the ground rules by which planning, execution and reviews would impact the schedule. Missing just a single element within a guideline or standard could delay equipment delivery by months, so maintaining this alignment has been critical to timely progress.

The operator also requested additional testing to reduce the risks associated with the first deployment of equipment that has not been field-tested. Because it would be highly inefficient to use the complete SPS assembly for testing parts, qualification test fixtures with equivalent configurations, forces, friction properties, debris introduction and other aspects were used.

Leveraging digitalization for optimization and sustainability. In line with the industry’s trend toward digitalization, emerging software solutions were employed to improve every phase of the project. For example, advanced finite element methods were used to analyze complex components and assemblies. HPHT conditions impose higher stresses and deflections on structures, and traditional, conservative analysis methods would have called for high-cost, high-yield-strength materials and unnecessarily thick cross-sections to support the pressures. The software enabled realistic modeling of load cases without the need to build a physical prototype.

Virtual reality (VR) technology was used from the early stages of the project to provide an immersive, interactive experience with the production equipment and to simulate assembly operations and subsea operations, such as ROV interfacing. VR also supported a live feed of the assembly, testing and manufacturing processes to engineers and other subject matter experts for remote, real-time witnessing. This was especially useful in connecting with the I3P and operator when travel restrictions were imposed by the COVID-19 pandemic.

Having digital tools for modeling and simulation also contributed to sustainability. This approach accelerated the ability to effect weight reductions, as previously mentioned, and shortened the prototype stage, both of which inherently reduced embodied carbon emissions. Similarly, a new tooling system to reduce operator installation time will help minimize carbon emissions from offshore facilities and vessel usage during those operations.

Meeting fabrication challenges with upgraded facilities and digital capabilities. The equipment-build phase included upgrading the OEM manufacturing and testing plants to state-of-the-art facilities, to address the size and complexity of equipment beyond their previous 15,000-psi applications. In particular, the test cells gained increased pressure capacity, footprint and depth, along with remote monitoring tools. New processes were developed, such as custom fastener preload equipment, ultrasonic inspection methods and automated remote monitoring and testing capabilities for the high-pressure test cells.

Virtual-build simulation was incorporated in the manufacturing process to first assemble products in a 3D virtual environment, once the top-level assembly models and procedure were released. This approach requires only a single engineer, rather than a full physical build with its attendant materials and personnel, to confirm sequence optimization, verify access to connections for validation, check for incompatibility between assembly components and tooling, and confirm that all areas for improvement are identified before starting the physical build.

All findings are extensively documented and available across the design, manufacture, assembly and test teams. The virtual build also helps reduce HSE risks, such as those associated with handling. Following a “virtual first” process has proactively mitigated sticking points in the physical process, reducing downtime during subsequent assembly and, in turn, nonconformance issues.

New capabilities introduced across the industry. The new 20,000-psi subsea production system is not just for Gulf of Mexico application. The components are already commercially available and meet most current regulatory requirements and design guidelines for other HPHT offshore developments. The recently published third edition of API 17TR8 provides additional design guidance, with which this system already proactively complies. WO

*Mark of Schlumberger

ACKNOWLEDGEMENT

This article contains elements from OTC paper 32024-MS, which was presented at the Offshore Technology Conference, May 2–5, 2022.

JOSHUA POLLAK joined SLB in 2012 as a mechanical engineer in Houston. His career has focused on the design, building and testing of subsea production system hardware, including trees, manifolds, wellheads, connectors, and associated tooling. Mr. Pollak has worked on projects devoted to HPHT technologies, vertical tree development, digitalization and sustainability efforts.

PARTH PATHAK is a sustainability champion for SLB subsea production systems. He has been with the company for more than 14 years and was a technical leader in development of these HPHT products. He also has been a key contributing member for the industry guidance documents related to this HPHT equipment.


Dallas Fed: Oil and gas expansion continues; cost pressures, supply-chain delays persist

The recovery of U.S. upstream activity is being slowed by high costs for equipment and services, personnel issues, and supply chain shortages, including spare parts.

Upstream oil and gas activity in the southwestern U.S. expanded at a strong pace in the third quarter, according to oil and gas executives responding to the Dallas Fed Energy Survey. The business activity index—the survey’s broadest measure of conditions facing Eleventh District energy firms—remained elevated at 46.0 but below the record-breaking 57.7 reading of the second quarter, Fig. 1. This suggests the pace of the expansion decelerated slightly but remains solid.

Fig. 1. Measurement of oil and gas activity in Federal Reserve District 11.

OVERVIEW

The Dallas Fed conducts the Dallas Fed Energy Survey quarterly, to obtain a timely assessment of energy activity among oil and gas firms located or headquartered in the Eleventh District (Texas, northern Louisiana, southern New Mexico).

Methodology. Firms are asked whether business activity, employment, capital expenditures and other indicators increased, decreased or remained unchanged, compared with the prior quarter and with the same quarter a year ago. Survey responses are used to calculate an index for each indicator. Each index is calculated by subtracting the percentage of respondents reporting a decrease from the percentage reporting an increase.

When the share of firms reporting an increase exceeds the share reporting a decrease, the index will be greater than zero, suggesting the indicator has increased over the previous quarter. If the share of firms reporting a decrease exceeds the share reporting an increase, the index will be below zero, suggesting the indicator has decreased over the previous quarter.
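As a worked illustration of this diffusion-index arithmetic, the short Python sketch below computes an index from a made-up set of responses (the function name and sample data are hypothetical and are not survey results):

# Illustrative diffusion-index arithmetic (hypothetical data, not survey results).
def diffusion_index(responses):
    """Percentage reporting an increase minus percentage reporting a decrease."""
    n = len(responses)
    pct_increase = 100.0 * sum(r == "increase" for r in responses) / n
    pct_decrease = 100.0 * sum(r == "decrease" for r in responses) / n
    return pct_increase - pct_decrease

# Example: of 100 firms, 60 report an increase, 20 a decrease and 20 no change.
sample = ["increase"] * 60 + ["decrease"] * 20 + ["unchanged"] * 20
print(diffusion_index(sample))  # 60.0 - 20.0 = 40.0, indicating expansion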

Data were collected Sept. 14–22, and 163 energy firms responded. Of the respondents, 105 were exploration and production firms, and 58 were oilfield services firms.

Special questions asked of executives this quarter focused on prospects for significant oil market tightening; whether the era of inexpensive U.S. natural gas will end; expectations for financial investors returning to the oil and gas sector; the impact of the increased 45Q tax credit on profitability of proposed carbon capture, utilization and storage projects; and the net firm-level impact of the methane tax in the 2022 Inflation Reduction Act.

OIL & GAS PRICES

On average, respondents expect a West Texas Intermediate (WTI) oil price of $89/bbl by year-end 2022. Responses ranged from $65/bbl to $122/bbl, Table 1. Survey participants expect Henry Hub natural gas prices of $7.97 per million British thermal units (MMBtu) at year-end, Table 2. For reference, WTI spot prices averaged $85.49/bbl during the survey collection period, while Henry Hub spot prices averaged $8.16/MMBtu.

COSTS/FACTORS

Costs increased for a seventh straight quarter, with the indexes near historical highs. Among oilfield service firms, the index for input costs remained elevated but slipped from its series high to 83.9. None of the 58 responding oilfield service firms reported lower input costs. Among E&P firms, the index for finding and development costs was 64.7, down slightly from last quarter’s high of 70.6. Additionally, the index for lease operating expenses was 70.2, easing slightly from last quarter’s high of 74.1.

It is taking longer for firms to receive materials and equipment. The supplier delivery time index remained elevated at 28.4 in the third quarter, down slightly from a series high of 31.9 in the second quarter. Among oilfield service firms, the measure of lag time in delivery of services declined from 36.0 to 21.1 but remained well above average.

One of the “special questions” asked of survey respondents was “Do you expect a significant tightening of the oil market by the end of 2024, given the current underinvestment in exploration?” Not surprisingly, 85% of executives said they expect a significant tightening of the oil market by the end of 2024, Fig. 2.

Fig. 2. Special Question on oil market tightening.

STRATEGY/REGULATIONS

Another “special question” posed to executives was “Do you expect financial investors to return to the oil and gas sector?” As shown in Fig. 3, a clear majority of the executives—79%—said they expect some financial investors to return to the oil and gas sector. Eleven percent expect many will return, while 10% expect investors not to return.

Fig. 3. Special Question on financial investors returning.

On the regulatory front, a “special question” asked of executives was “In your opinion, will proposed carbon capture, utilization and storage projects be profitable with the increase in the 45Q tax credit in the 2022 Inflation Reduction Act?” As depicted in Fig. 4, slightly over half—54%—of executives expect most proposed carbon capture, utilization and storage projects won’t be profitable, despite the increase in the 45Q tax credit in the Inflation Reduction Act. Forty-two percent expect some proposed projects to be profitable, and 4% expect most proposed projects to be profitable.

Fig. 4. Special Question on whether CCUS projects can be profitable.

Still on regulatory matters, another “special question” asked executives, “What net impact will the methane tax in the 2022 Inflation Reduction Act have on your firm?” Predictably, most executives expect a negative net impact on their firm from the methane tax in the 2022 Inflation Reduction Act, Fig. 5. Forty-eight percent of executives said they think the net impact will be slightly negative, and an additional 14% anticipate that it will be significantly negative. Thirty-one percent expect a neutral impact. Seven percent anticipate a positive impact.

Fig. 5. Special Question on net impact of a methane tax.

OIL & GAS PRODUCTION

Oil and natural gas production increased at a similar pace, compared with the prior quarter, according to executives at exploration and production (E&P) firms. The oil production index held fairly steady at 31.7 in the third quarter. The natural gas production index was essentially unchanged at 35.6.

One of the “special questions” posed to survey respondents was “Do you expect the age of inexpensive U.S. natural gas to come to an end, as liquefied natural gas exports to Europe expand?” In response, most executives (69%) expect the age of inexpensive U.S. natural gas to end by year-end 2025, Fig. 6. An additional 12% of executives think it will happen by year-end 2030, and 3% expect it to occur after 2030. The remaining 16% don’t expect the age of inexpensive U.S. natural gas to end.

Fig. 6. Special Question on whether inexpensive U.S. natural gas will end.

OFS SECTOR

Oilfield services firms reported broad-based improvement, with key indicators remaining in solidly positive territory. The equipment utilization index remained elevated but fell from 66.7 in the second quarter to 55.2 in the third. The operating margin index remained positive but declined from 32.7 to 25.4. The index of prices received for services inched higher, from 62.7 to 64.9—a record high.

EMPLOYMENT TRENDS

All labor market indexes in the third quarter remained elevated, pointing to strong growth in employment, hours and wages. The aggregate employment index posted a seventh consecutive positive reading, increasing from 22.6 in the second quarter to a record 30.0. The aggregate employee hours index was 33.3, close to its historical high. The aggregate wages and benefits index remained elevated and was largely unchanged at 47.3. WO

MICHAEL PLANTE joined the Federal Reserve Bank of Dallas in July 2010. Recent research has focused on such topics as the economic impact of the U.S. shale oil boom, structural changes in oil price differentials, and macroeconomic uncertainty. He also has been the project manager of the Dallas Fed Energy Survey since its inception in 2016. Mr. Plante received his PhD in economics from Indiana University in August 2009.

KUNAL PATEL is a senior business economist at the Federal Reserve Bank of Dallas. He analyzes and investigates developments and topics in the oil and gas sector. Mr. Patel is also heavily involved with production of the Dallas Fed Energy Survey. Before joining the Dallas Fed in 2017, he worked in a variety of energy-related positions at Luminant, McKinsey and Co., and Bank of America Merrill Lynch. Mr. Patel received a BBA degree from the Business Honors Program at the University of Texas at Austin and an MBA degree in finance from the University of Texas at Dallas.


Lifecycle performance of WET (subsea) and DRY Tree systems for big, complex reservoirs in ultra-deep waters

Part 2 of this three-part series examines the full lifecycle operations simulations performed using BMT’s SLOOP event domain software to compare subsea (wet tree) and dry tree concepts. This article argues for a permanently moored dry tree concept, supported by key performance statistics for DP MODUs and platform rigs.

To remind readers, Frontier Deepwater’s three World Oil articles in 2020 and 2021 covered the savings and value created by adopting a phased dry tree development concept, rather than a risky and expensive hub-spoke subsea (wet tree) development scheme, extrapolated from industry’s Miocene experience. Frontier presented research that used public domain information from the BSEE Gulf of Mexico database to clarify how, and why, the industry’s efforts to exploit massive discoveries in the once promising but deeply challenging Lower Tertiary Wilcox trend have failed commercially. While project organizations within the major players seem content with their hub-subsea development schemes, drilling and reservoir groups are well-aware of these failures and the documented problems.

The first part of this year’s three-part series (published in World Oil last month) presented the method, modeling assumptions, and high-level results of simulated operations for a ten-well ultra-deepwater field development. The simulations, covering a wide range of sensitivity cases, were generated by BMT’s SLOOP event domain software, using realistic system and metocean models to cover a 30-year project life. The results led to important conclusions about the differences in operating lifecycles:

The Dry Tree option is expected to provide at least $20 billion of higher value.

  1. The “DRY Tree” field phased approach (using two “FrPS” platforms) is expected to recover ~31% more of the reserves in the first 20 years of production from the modeled reservoir, yielding over $10 billion more in revenue (at $100/bbl) during that period, as compared to the analogous WET tree subsea/hub scheme; this is $6.3 billion more revenue in the first 10 years.
  2. While Frontier’s FrPS DRY tree development is generating at least $10 billion more revenue, the WET/subsea tree scheme is spending almost $10 billion more on operations.

UNDERSTANDING RESULTS FROM SIMULATING LIFECYCLE OPERATIONS

First, for convenient reference, we return to the comparison of the production profiles for the competing field development options, Fig. 1. The SLOOP software and case models were described in Part 1. The chart shows the mean, as well as the P10 and P90 percentiles of the average annual production rate. For the FrPS (Dry Tree) case, there are four distinct peaks in production rate, corresponding to completing all wells at each well center and, later, sidetrack/recompletion of the wells.

Fig. 1. Comparison of simulated production profiles.

For the subsea (Wet Tree) case, there are no such distinct peaks; the peak production is much lower and occurs much later in the field life. Also, there is more spread between the P10 and P90 production ranges. These shortcomings are seen, in part, because the many efforts to run and retrieve the drilling riser and subsea BOP are significantly affected over time by metocean conditions (especially loop currents and hurricanes).

While not directly incorporated into the dynamic simulation of the project, fuel consumption is an important part of operating costs and pollution. The information in Table 1 is used to estimate these costs for economic and environmental considerations.

Understanding the simulation results. The following discussion and charts are intended to help the reader more deeply understand the main conclusion of Part 1. Figure 3 in Part 1 showed the 20-Ksi DP MODU staying busy, once it arrives at the field; however, there was no period of time where both of the FrPS rigs would be fully utilized.

Except for the rig utilization chart discussed above, the results from simulations presented in the preceding article may appear somewhat simplified and smoothed, belying the detail of analysis behind what was reported. For example, it is worth considering how the statistics plotted in the following chart (Fig. 2) are compiled from 240 runs of the simulated 30-year project duration. The time history of production from one “realization” (time history) for the WET case is overlaid on the statistical curves produced by data from all 240 realizations. This indicates how the monthly average production rate from “one life history” can exhibit substantial variability.

Fig. 2. Monthly average production rate for one realization plotted over the statistics for all realizations.
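To illustrate how such percentile curves are typically compiled from event-domain simulation output, the Python sketch below computes month-by-month P10, P50 and P90 statistics across a set of realizations (a generic sketch with assumed array shapes and a placeholder random model, not BMT’s SLOOP code or its calibrated inputs):

import numpy as np

# Placeholder stand-in for simulator output: 240 realizations of monthly
# average production rate over a 30-year (360-month) project life.
rng = np.random.default_rng(0)
production = rng.gamma(shape=2.0, scale=50.0, size=(240, 360))  # illustrative values only

# Percentile curves are taken month by month across the 240 realizations.
p10 = np.percentile(production, 10, axis=0)
p50 = np.percentile(production, 50, axis=0)
p90 = np.percentile(production, 90, axis=0)
mean = production.mean(axis=0)

# Any single realization (one "life history") can swing well outside the band
# suggested by the smooth statistical curves, as Fig. 2 shows.
example = production[0]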

The P50 curve for 20-Ksi DP MODU utilization, shown in the preceding article, might cause the operator’s asset manager to expect that the MODU will be too busy to ever be allowed to leave the field. However, if the operator adopts a policy allowing the 20-Ksi DP MODU to leave the field, then situations can occur where production in this field drops, because a backlog of service requests builds for several wells before the 20-Ksi MODU returns. The model adopted for this study does allow the 20-Ksi DP MODU to leave the field, if no backlog of requests exists when it finishes a service request. Then, the MODU has “free time” to work away from the field until new service requests start popping up. Unfortunately, the MODU cannot return immediately, so a backlog of service requests begins to accumulate.

The big drops in production rate seen in the “Example” curve of Fig. 2 could be due to an extended absence of the MODU or, possibly, to hurricanes. The following charts provide insights into how various causes of downtime play into the overall history of well service delays and interruptions of production. All in all, even though it is expensive to have the rig stay in the field when idled, the operator’s asset manager would be tested severely in making a decision that would allow the highly specialized and expensive 20-Ksi DP MODU to leave the field.

Figure 3 shows how much time the 20-Ksi DP MODU is predicted to be away, if the operator’s policy allows temporary demobilizations. For about 11% of the realizations, it never gets away from the field. If it does demobilize, there is a 50% likelihood that a service request will pop up within ~32 days. The “time to return” adds the assumed “rig remobilization response time” distribution to the statistics for time until a first service request (the light blue curve in Fig. 3). So, the result is that there is a 50% likelihood that the rig will make it back to the field within ~224 days. This means that if the MODU does demobilize from the field, there is a good chance that several service requests will have piled up before the MODU makes it back into service at the field, and production may be dropping off.

Fig. 3. 20-Ksi DP MODU remobilization statistics.
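As a minimal sketch of how such a “time to return” statistic can be assembled, the snippet below samples a time until the first new service request, adds an assumed rig remobilization response time, and reads percentiles from the combined samples (the distributions and their parameters are placeholders for illustration, not the study’s calibrated inputs):

import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # number of sampled demobilization episodes

# Placeholder distributions, in days; the study's actual inputs differ.
time_to_first_request = rng.exponential(scale=46.0, size=n)
remob_response_time = rng.triangular(90.0, 180.0, 330.0, size=n)

# "Time to return" = wait for a new request + time to remobilize the rig.
time_to_return = time_to_first_request + remob_response_time

print(np.percentile(time_to_first_request, 50))  # median wait for a new request
print(np.percentile(time_to_return, 50))         # median time before the MODU is back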

Figures 4 and 5 show how the service request (workload) backlog builds while the MODU is away. Figure 4 indicates that while there is a ~17% chance that only one service request will be waiting, there is just over a 60% likelihood of three or more service requests and an almost 30% chance that more than four jobs will be waiting when the MODU returns to the field. Figure 5 reveals a few instances where more than six jobs ended up being backlogged.

Fig. 4. Cumulative probability of the 20-Ksi MODU’s work backlog.

Fig. 5. 20-Ksi DP MODU service request backlog statistics.

When asset managers have this kind of information at their fingertips, they will fight hard to keep the 20-Ksi DP MODU permanently located at the field—even if it could be “idle” for a month or two! While the WET Case model for this study does allow the 20-Ksi DP MODU to leave the field to work elsewhere, the model does not consider absences for periodic drydocking/maintenance. Such an absence would be extended for several months.

Thus, while preventing the MODU from leaving the field during idle times would tend to increase the cumulative lifetime production for the WET case by some amount, sending the MODU away for drydocking every five-or-so years would seriously decrease the cumulative production. These offsetting factors mean that the current WET case model presents a reasonably realistic picture of production/reserves recovery over the simulated time history of the project.

UNDERSTANDING “WELL DOWNTIME” AND “RIG NON-PRODUCTIVE TIME”

We now have a picture of how demobilizations affect the working performance of the 20-Ksi DP MODU, but what happens when it and the FrPS units are actively attempting to perform their intended operations at the field? Both the DRY tree platforms (the FrPS units) and the 20-Ksi DP MODU will experience metocean conditions and equipment failures that halt or delay operations. The following charts explore these influences and the differences between the WET case subsea well systems and DRY tree systems with direct surface access to the wells.

The time needed to complete any well operation accounts for interruptions due to unfavorable metocean conditions (including hurricanes as well as high winds, waves and currents) and BOP failures (as the only type of equipment failure modeled). Even though predicted and actual arrival of hurricanes completely shuts down well operations for both cases, a DP MODU—and its interface with the well it is servicing in ultra-deep water—is much more sensitive to troublesome metocean conditions than the permanently moored FrPS units with permanently attached, fully redundant tie-back risers. In addition, when a hurricane threat is past, it takes longer for the DP MODU to restart well operations. The following charts show how the modeled sensitivities have resulted in substantial differences in performance for the competing systems.

It is no surprise that there are many events that cause interruption of well servicing operations for both DRY and WET systems. Figure 6 confirms this expectation. In the early years, when the DRY scenario has a 15-Ksi DP MODU deployed to the field to perform pre-drilling at the two well centers, it is subject to a high degree of interruptions. Once a DP MODU is no longer needed, the number of interruptions drops dramatically—even though two FrPS units are working in the field. There are a couple of jumps in the number of interruptions when sensitive operations occur during the periods when the FrPS units are performing sidetracking and recompletions (years 13-14, 17-18).

Fig. 6. Mean number of interruptions each year: All causes.

As a result, one would expect that throughout the producing life of the field, the 20-Ksi DP MODU servicing the wells of the WET case will experience a lot more Non-Productive (rig) Time (the much hated NPT). Figure 7 confirms this expectation, which impacts the “Well Downtime.”

Fig. 7. Mean total annual rig non-productive time: All causes.

During the early years, the 15-Ksi DP MODU, working for the WET case, spends less time in the field, because it has less to accomplish. As a result, the WET case experiences less NPT in years 1-6. Still, subsea BOP failures feature prominently in NPT (as in the real world). When the 20-Ksi DP MODU arrives, and the 15-Ksi MODU moves to Well Center #2, BOP failures dominate NPT for the WET case while both rigs are working. Figure 8 has the WET case’s 15-Ksi MODU finishing up in year 7, but the D&C activities of the 20-Ksi MODU carry a high NPT, due to subsea BOP failures, through year 13. Deepsea loop currents and waves are always a big part of NPT for the WET case.

Fig. 8. Mean total annual NPT per cause: Core years.

It is worth looking more closely at the role that hurricanes play in causing NPT and, indirectly, well downtime and production delays. Figure 9 shows NPT days for both cases during the core years of project life. When the FrPS units of the DRY case are sidetracking and recompleting wells, hurricane-induced NPT rises dramatically. However, for most years, the WET case suffers much more hurricane-induced NPT. On average, the WET case experiences about 50% more hurricane-induced NPT than the DRY case over the core years. The main reason the WET case has so much more NPT, due to hurricanes, is that it takes much longer to retrieve and re-run the riser, reconnect the LMRP, and restart well operations when the 20-Ksi DP MODU returns from a hurricane-induced site abandonment.

Fig. 9. Hurricane-induced mean total annual NPT: Core years.

Of course, the DRY case suffers substantial hurricane-induced NPT in the early years, when a 15-Ksi DP MODU is performing pre-drilling operations at the two well centers—even more than the WET case during that time period. The DRY case development plan calls for much more pre-drilling activity by its 15-Ksi MODU and is thus exposed to more hurricane NPT in those early years. This is reflected in Fig. 10.

Fig. 10. Hurricane-induced Mean Total Annual NPT: Early years.

Having looked at all the causes of NPT, it is worth considering how the NPT is distributed across the various types of well operations modeled, to see which operations suffer the most NPT. Figure 11 presents the results from the full life simulations (not just core producing years). This chart helps us understand why the DRY case suffers so much NPT during the early years. The NPT of the pre-drilling program for the DRY case is a combination of “MODU pre-DRILL to TD” plus “MODU pre-DRILL/Set 14 in” (22.2 days + 10.5 days = 32.7 days) that all occurs within the first six years of the project. The NPT of the pre-drilling program for the WET case is just 24.5 days in those years because, in that case, the 15-Ksi DP MODU only drills three wells to Target Depth (TD) at Well Center #1.

Fig. 11. Mean well operations non-productive time (rig NPT): All causes.

For the DRY case, just over half of all NPT occurs during the pre-drilling program. For the WET case, only ~24% of all NPT occurs during pre-drilling operations. However, as a key factor in well downtime, the total of the “mean NPT” across all operations is important to keep in mind—with ~126 days totaled across the means for the WET case and only about half as much for the DRY case.

The two FrPS units will be able to perform more well interventions needed to keep wells producing as intended, which is a key reason for the recovery of an additional 31% of reserves. The following charts confirm that more tubing cleanouts and acid stimulation jobs are performed for the DRY case than for the WET (see Figs. 12 and 13). Every time a stim job or tubing clean out is performed, production from the affected well is interrupted, so it is important that such tasks are accomplished efficiently, as well as safely. Figure 11 did show that there is very little NPT associated with these operations, for either the DRY or the WET case, but it is still clear that the advantage goes to the DRY case.

Fig. 12. Probability distributions on the number of acid stimulation jobs (WET v. DRY).

Fig. 13. Probability distributions on the number of tubing clean out jobs (WET v. DRY).

Remembering that pending interventions have priority over drilling and completing a new well or starting a sidetrack/recompletion, we wrap up this section by considering how all the interventions and delays discussed above impacted full field development. Figure 14 shows the P10 and P90 range for “milestone” well completions and recompletions, with the DRY case completing all ten of its wells within ~8.5 years, while the WET case does not finish until five years later. The sidetracking results are even more disappointing for the WET case, as the tenth sidetrack has ~13% chance of being delayed outside the 30-year window.

Fig. 14. P10-P90 timing for well completion milestones (WET v. DRY).

CONCLUSIONS

Frontier decided to perform what we called a “Green Ops study,” to investigate whether a “Dry Tree” field development was any more efficient and less polluting than the subsea (“Wet Tree”) hub-spoke field development schemes, currently adopted and still favored by operators’ project departments, for deepwater fields in the U.S. Gulf of Mexico. As revealed in Part 1 of this series, we were somewhat surprised that Frontier’s DRY Tree field development plan (using two “FrPS” platforms) would be expected to provide at least a $20 billion increase in value to the asset owners during the core producing years. This second part presents more results of the SLOOP simulations to clarify and support that conclusion while confirming expected reductions in pollution.

The huge increase in value does not include the front-end savings on field appraisal or CAPEX reduction in field development—or even properly reflect the difference in Expected Net Present Value that comes with full consideration of the time value of money or the big improvement in risk management. It also does not include the increase in Expected Ultimate Recovery from the field that is provided by a dry tree field development.

When all the differences in value (and the tremendous reduction in emissions and pollution risk) are accounted for, it is clearly time for a step-change away from a perspective locked on what have proven to be failed and failing hub-subsea scenarios for the ultra-deep Lower Tertiary Wilcox trend. WO

Editor’s note: Part 3 of this article series will add analyses of field appraisal and abandonment to further highlight key financial and environmental impacts by reflecting on the concepts of ALARP and AHARP.

CHUCK WHITE, Frontier’s Executive V.P. and co-founder, is a naval architect (University of Michigan, 1975) with a master’s degree in mechanical engineering (University of Houston, 1983). He is a past chairman of SNAME Texas. Mr. White worked for IOCs for over 20 years as a project manager and deepwater technology leader. Since 2000, he has worked primarily on deepwater and natural gas industry projects and technology development. He has led several large joint industry projects, as well as the API global task forces in writing the FPS and riser design RPs. Mr. White also co-chaired creation of the first probabilistic riser design code. He holds multiple U.S. and international patents.

ROY SHILLING, Frontier’s President and co-founder, is an Ocean Engineer with a Mechanical BE (Vanderbilt, 1976) and Ocean Engineering MS (Texas A&M University, 1978). He has over 40 years of deepwater development experience, including 37 years with BP. Throughout his career, Mr. Shilling has worked as a project manager and deepwater technology leader, including Project 20K and BP’s Lower Tertiary projects, like Kaskida and Tiber.  During Macondo, he led BP’s containment effort by patenting the free-standing riser system, installed in 51 days and successfully operated with the Helix Producer 1.  Mr. Shilling holds multiple U.S. and international patents, and has authored several technical publications.

PAUL HYATT is Frontier’s V.P., drilling and completions, and managing director of TD Solutions. He is a specialist in all phases of well design and operations, from exploration to full field development. He has 41 years of global experience, including technical and project management roles in offshore, deep water, arctic operations, remote heli-rig exploration, HTHP completions and extended-reach design for various major operators and clients. Mr. Hyatt has a BS degree in petroleum engineering with honors from the University of Texas at Austin and is a life member of SPE.

WILLIAM BRENDLING has a PhD in applied mathematics and is a Fellow of the Institute of Mathematics and its Applications. He has been employed by BMT or predecessor companies for 40+ years. During that time, he has worked on many offshore projects looking at environmental loading and its impact on operational performance. Among other software projects, Dr. Brendling is the primary architect of BMT’s SLOOP software for quantitative simulation and assessment of the performance of offshore operations.


How intelligent automation is helping the industry drive sustainability goals

IA helps by driving innovation, filling some skills gaps for engineers, and enhancing data usage and operational efficiency.

The oil and gas (O&G) industry is in a state of flux. It’s facing all-time high energy prices and pressures for less carbon-intensive energy systems; demands for clean energy to address the climate crisis; tight margins; and talent shortages. The industry, well aware of these issues and the growing need to evolve for a viable future, is seeking ways to reduce CO2 and methane emissions, as well as source alternatives. But effective change will require substantial investment, innovation, and research and development, at a much faster pace than we currently see.

Intelligent automation (IA) has a role in making this happen, Fig. 1. An astonishing 81% of O&G executives agree that a digital-first workforce needs to be developed over the next ten years. Regulations and societal demands for reducing fossil fuel dependency will only continue. IA helps by driving innovation, filling some skills gaps for engineers, and enhancing data usage and operational efficiency. Much of this sector is process-driven, yet many of these tasks are still performed manually and with legacy systems. This creates value-obstructing data siloes, which IA can help break down, offering insights and analysis that inform progress-promoting decision-making. Together, these outcomes enhance cash flows while helping meet emerging sustainability objectives and requirements.

Fig. 1. Intelligent automation (IA) has a role in making clean energy happen.

THE O&G ROLE IN TODAY’S GLOBAL ECONOMY

This year, O&G will make up over half of the global energy supply, which shows the world’s continued dependence on these commodities. Alternative energy sources have not yet advanced to the point where they can effectively replace oil and gas in meeting the world’s energy demand. Developing economies will struggle to meet their industrial needs without oil and gas, as they have not had the capacity of developed countries to invest in clean energies and technologies.

Although the world is not at the point of saying goodbye to O&G, this does not absolve us of the responsibility to transition to a renewable energy supply. The O&G industry has a prominent role to play in bringing forward a global net-zero economy (Fig. 2), which is estimated to need $3.5 trillion in annual investment through 2050. Fossil fuels—coal, oil and gas—are reputedly the most significant contributors to climate change, accounting for more than 75% of global greenhouse gas emissions, according to some scientific sources.

Fig. 2. The O&G industry has a prominent role to play in bringing forward a global net-zero economy by bringing forth its considerable knowledge and capabilities.

The industry is working to make progress regarding environmental, social and governance (ESG) issues, as doing so has become an economic, political and reputational imperative. Several O&G companies have established net-zero-emissions targets and are persevering with decarbonization efforts, despite the economic volatility of the past few years. For example, Occidental Petroleum, one of the largest international oil companies in the U.S., has partnered with Carbon Engineering, a Canadian-based clean energy company, to build a plant intended to capture half a million metric tons of CO2 every year. Also, in January 2022, ExxonMobil announced its target to achieve net-zero greenhouse gas emissions by 2050.

HOW IA HELPS THE INDUSTRY REACH ESG TARGETS

The sector’s efforts to mitigate the effects of climate change center around three primary areas: reducing CO2 emissions, reducing methane emissions, and recycling CO2, Fig. 3. An IA approach can support forward movement on these goals, replacing manual tasks with automation and freeing people to unleash efficiency and focus on value-adding, transformative work.

Fig. 3. IA can help the industry reach ESG targets that include reducing CO2 emissions, reducing methane emissions, and recycling CO2.

Any movement in this space needs substantial research and development, combined with innovation. This requires effective and complete data usage, which is impossible with outdated IT infrastructures and siloed information. Intelligent automation can connect these legacy systems; consolidate the reams of structured and unstructured data commonplace in the industry; and then process it to provide insightful and innovation-driving analytics. Free-moving data facilitates IA and maximizes its use and impact across the organization. Actionable insights sourced from these data speed up decision-making, promote fast problem-solving and assist with meeting future compliance and regulatory standards.

Employing IA can free workers to focus on experimentation, innovation and processes for detecting, measuring and mitigating emissions levels. It can be used to measure progress and success in reaching emissions targets through impact analytics and help businesses ensure they’re operating in line with regulations, which can be costly and cumbersome.

We don’t know what the solutions will be in 20 years, but the faster we test and adapt different solutions—whether hydrogen, electric or something else entirely—the closer we’ll be to a long-term answer, Fig. 4.

Fig. 4. IA can assist in bringing forth electrically oriented solutions.

LOOKING TO THE FUTURE

As the oil and gas industry looks to the future, we’re seeing lines between this sector, the energy sector and the telecommunications sector begin to blur. The business model is evolving—O&G businesses are diversifying and expanding to adjacent areas, such as hydrogen and carbon capture, utilization and storage (CCUS). They’re working to establish their leadership and expertise in the renewable space for a greener future.

It is easy to scapegoat the O&G industry, but we all have a role to play—landfills, agriculture and transportation, among other factors, are all significant contributors to global emissions. For the most part, the oil and gas industry recognizes its role, but this transformation will take time. It takes time to uncover equally capable alternatives to a commodity integral to the global economy and the living standards we have come to view as essential. However, it is a necessary transformation, if we are to leave the world a good place for our children and grandchildren. There is still time for progress on this issue, and this is an empowering reality—innovation, research and development will be critical on this journey to clean energy, which is where IA can help. WO

SARANJIT SINGH is Vice President, Telecommunications and Utilities APAC, for SS&C Blue Prism. He has over three decades of diverse professional experience, employing his business understanding and expertise in the digitization and intelligent automation field. Over the course of his professional career, Mr. Singh has accumulated considerable knowledge in the information technology, telecommunications, media, utilities and energy industries. He has worked as a key member of the senior management team with global organizations like Microsoft, Oracle, SingTel Optus, Telstra and Lucent Technologies, expanding business and technology innovations. Mr. Singh earned a Master of Computer Science degree from Victoria University, Australia, where he was also an elected member of the university academic board.


Advertising sales offices

2 Greenway Plaza, Suite 1020, Houston, TX 77046 USA

WorldOil.com

Andy McDowell, Senior Vice President, Media / Publisher

Phone/Fax: +1 (713) 520-4463

Andy.McDowell@WorldOil.com

North America

North Houston, Fort Worth, Sugar Land, Upper Midwest

Jim Watkins

+1 (713) 525-4632

Jim.Watkins@GulfEnergyInfo.com

Dallas, North Texas, Midwest/Central U.S.

Josh Mayer

+1 (972) 816-6745

Josh.Mayer@GulfEnergyInfo.com

Western U.S., British Columbia

Rick Ayer

+1 (949) 366-9089

Rick.Ayer@GulfEnergyInfo.com

Southeast Houston, Gulf Coast, Southeast U.S.

Austin Milburn

+1 (713) 525-4626

Austin.Milburn@GulfEnergyInfo.com

Northeast U.S., Eastern Canada

Merrie Lynch

+1 (617) 594-4943

Merrie.Lynch@GulfEnergyInfo.com

OUTSIDE NORTH AMERICA

Africa

Dele Olaoye

+1 (713) 240-4447

Africa@GulfEnergyInfo.com

Brazil

Evan Sponagle

Phone: +55 (21) 2512-2741

Mobile: +55 (21) 99925-3398

Evan.Sponagle@GulfEnergyInfo.com

China, Hong Kong

Iris Yuen

China: +86 13802701367

Hong Kong: +852 69185500

China@GulfEnergyInfo.com

Western Europe

Hamilton Pearman

+33 608 310 575

Hamilton.Pearman@GulfEnergyInfo.com

India

Manav Kanwar

+91 (22) 2837 7070

India@GulfEnergyInfo.com

Italy, Greece & Turkey

Filippo Silvera

Office: +39 02 2846716

Mobile: +39 392 4431741

info@silvera.it

Japan

Yoshinori Ikeda

+81 (3) 3661-6138

Japan@GulfEnergyInfo.com

Korea

YB Jeon

+82 (2) 755-3774

Korea@GulfEnergyInfo.com

Russia, FSU

Lilia Fedotova

+7 (495) 628-10-33

Lilia.Fedotova@GulfEnergyInfo.com

UK, Scandinavia, Ireland, Middle East

Brenda Homewood

+44 (0) 1622 297123

Brenda.Homewood@GulfEnergyInfo.com

CLASSIFIED SALES

Cheryl Willis

+1 (713) 525-4633

Cheryl.Willis@GulfEnergyInfo.com


Path to net zero


It appears that the Biden administration has finally realized that successfully achieving the energy transition is a monumental challenge that will take the collective brainpower and cooperation of industry and the world’s political leaders. Since the launch of the ET initiative, the actual cost and technological complexities of achieving net-zero have become better defined.

Although Mr. Biden assumed his energy policy would force U.S. producers to capitulate and embrace green energy (like flipping a light switch), the actual transformation process and the role that hydrocarbons play in driving our economy were more than Joe’s brain could comprehend. Fortunately, energy producers and OFS companies are providing valuable leadership and are defining how ET will be implemented, along with realistic cost and timeframe estimates.

ADNOC’s plan produces win-win. UAE President His Highness Sheikh Mohamed bin Zayed Al Nahyan used the annual ADNOC Board of Directors meeting to outline the UAE’s Net Zero by 2050 strategic initiative. The plan will establish a new low-carbon solutions and international growth vertical, focused on new energies, gas, LNG and chemicals. During his presentation, bin Zayed said that ADNOC has taken steps to further reduce its carbon footprint while expanding operations to meet rising global energy demand. ADNOC’s comprehensive, multiple-pronged approach is testament to the UAE’s commitment to remaining a responsible global energy provider while enabling a more sustainable future.

H.H. Sheikh bin Zayed emphasized ADNOC’s role as a primary catalyst for the UAE’s growth and diversification, and recognized the company for maximizing value for the nation and creating new economic and industrial opportunities for the private sector. A key component of ADNOC’s effort is to drive industrial growth through its In-Country Value (ICV) program and support its “Make it in the Emirates” initiative. This year, ADNOC’s ICV program has driven $9.54 billion back into the nation’s economy and created employment for 2,000 UAE nationals in ADNOC’s supply chain. These achievements have infused $38 billion into the economy since the program was launched in 2018. In addition, a total of 5,000 UAE nationals have been employed in ADNOC’s supply chain since the initiative was launched.

The “Make it in the Emirates” initiative has signed agreements for local manufacturing opportunities worth $6.8 billion with UAE and international companies this year. The program is on target to locally manufacture 100 products in its procurement pipeline, worth $19 billion by 2030. At the meeting, the board also endorsed ADNOC’s plan to increase oil output to 5 MMbopd by 2027, brought forward from the previous target of 2030, as part of the accelerated growth strategy. ADNOC’s five-year business plan and capex of $150 billion for 2023-2027 were approved to enable the accelerated growth strategy. As part of this plan, ADNOC aims to drive $48 billion back into the UAE economy through its ICV program.

Reducing GHG. ADNOC’s net zero by 2050 ambition covers its operational Scope 1 and Scope 2 greenhouse gas emissions. The strategy is underpinned by a continued focus on key decarbonization levers of energy efficiency and operational excellence across the value chain, large-scale implementation of CCUS and the use of renewable energy sources.

Talos CCS plan. Talos Energy founder, President and CEO Tim Duncan is working on a CCS initiative in the GOM. “We are oil and gas guys with a significant amount of geological/geophysical capabilities, but we can also do carbon capture and sequestration,” Duncan explained. There has been a significant amount of discussion about what to do with free cash flow. The focus on ESG and safety leadership led Talos to launch an aggressive CCS initiative. “We expect CCUS technology will take 6-15 years to develop, but to reach net zero, CCS must be part of the solution,” Duncan continued. Talos has taken a leadership role in the developing market for global emissions reductions, but additional CCS capacity is required to meet global emissions-reduction and climate objectives.

CCS value chain proposition. Duncan outlined Talos’ CCS strategy, which involves capturing, transporting, injecting and permanently storing CO2 emissions from industrial sources underground in saline aquifers:

  1. Capture – CO2 emissions removal uses proven gathering, processing and compression technology.
  2. Transport – CO2 safely piped through midstream assets (potential to use existing pipelines).
  3. Sequester – CO2 safely stored underground in EPA Class VI injection wells (disposal type wells).

Complementary skill sets. Duncan then outlined how the company is applying its overlapping geological expertise and business development skills to build a large-scale decarbonization solutions company. The items below are core engineering E&P competencies the company possesses, which are applicable to building out Talos’ aggressive CCS initiative:

  1. Conventional reservoir expertise and G&G team
  2. Significant Gulf Coast / GOM presence
  3. Vast seismic database
  4. Established operator and project management capabilities
  5. Strong HSE track record
  6. Business development and commercially driven.

Two CCS project types. “The U.S. Gulf Coast is a world-class market opportunity for CO2 capture,” Duncan stated. The area contains the ingredients for a new business, because it’s rife with big industrial emitters that are financially motivated to capture, transport and store CO2. They require a dedicated technology partner with the geological expertise and practical business plan to accomplish their goals. Duncan then drilled down deeper into Talos’ CCS strategy, revealing the two types of projects underway on the U.S. Gulf Coast (Texas and Louisiana): regional hubs (clustered industrial base) and point-source projects (single facility/plant).

Exceptional value creation. Duncan concluded by outlining his vision to combine the company’s upstream and carbon capture segments to create a unique and specialized complementary economic framework. The CCS and upstream businesses will form the foundation for a successful energy company in the future. The upstream segment will execute high-return projects drawn from an ample inventory of high-quality geological prospects. The developing carbon capture segment will offer stable, long-term projects that deliver steady cash flow. These complementary business models will optimize value for the enterprise, providing growth optionality, risk allocation and unique investment opportunities relative to publicly traded U.S. E&Ps.

SLB defines challenge, launches new business ecosystem. As the gap between net-zero ambitions and actual cumulative CO2 emissions grows, more scenarios are pivoting toward carbon capture, utilization and sequestration (CCUS). But it doesn’t come without significant challenges. According to SLB, “there is no path to net zero without CCUS.” It is essential to reach net-zero greenhouse gas emissions. It’s one of the few decarbonization mechanisms that is technically viable using current technology; the challenge lies in feasibility. High costs, operational risks and the complexities of navigating the value chain often stall CCUS project development. To help solve these problems, SLB has launched an innovative market ecosystem and partnered with like-minded companies to help overcome the challenges.

Industry collaboration. To develop CCUS, SLB has entered into a strategic collaboration to accelerate decarbonization solutions across industrial and energy sectors. The new venture will combine decades of experience in CO2 capture and sequestration, innovative technology portfolios and execution expertise, in addition to engineering, procurement and construction capabilities. The collaboration will focus on hydrogen and ammonia production, where CO2 is a by-product, and on natural gas processing. CCUS abates the emissions from these energy-intensive industries, creating new low-carbon energy sources and products. “CCUS is vital in creating the decarbonized energy systems our planet needs to balance energy demand with climate objectives,” said Olivier Le Peuch, SLB CEO. “We are excited about this collaboration with Linde to develop CCUS projects and support the growth of low-carbon energy products from conventional energy sources.”

Reality check. However, CCUS technologies are being adopted too slowly to achieve the IPCC’s 2.0°C upper limit for global warming (McKinsey). The new study suggests that scaling the CCUS industry could help achieve net-zero emissions by decarbonizing 45% of remaining emissions from carbon-intensive industries. But CCUS adoption needs to grow 120 times by 2050 for the world to meet its existing net-zero commitments, at a cost of $130 billion per year—more than governments are willing or able to afford alone.

The industry needs to reduce the cost of CCUS through small-scale pilots, while collaborating to form cross-sector clusters to share large infrastructure like pipeline networks. Meanwhile, governments need to define the role of CCUS in their industrial strategies and create the regulatory, tax and reporting frameworks that will allow the industry to scale, while still using subsidies for early projects to stimulate future growth.

“CCUS is critical to delivering the world’s net-zero commitments and will need to play a material role in low-carbon hydrogen and decarbonizing tens of thousands of carbon-intensive industrial facilities worldwide,” said McKinsey Partner Luciano Di Fiori. “Close collaboration between the public and private sectors will be needed to scale and mobilize the industry.”

While governments need to create the right tax and legislative frameworks to incentivize and de-risk private investment in CCUS, the industry itself must develop innovative new business models and new sources of revenue, rather than relying on limited state subsidies. CCUS has the potential to decarbonize a significant share of the 25,000 carbon-emitting industrial facilities worldwide, but this will require major capital investment across many projects on a global scale, Di Fiori concluded.

World-class R&D scientists and engineers. Our industry has been defined by overcoming seemingly insurmountable engineering and operational challenges. Remember the issues of drilling multiple offshore wells from a single platform? No problem. Use advanced PDM/RSS and M/LWD to efficiently steer the wellbore to the reservoir. Water too deep? No problem. Use floating drillships with GPS navigation and positioning systems. Oh no, we are running out of oil; we need more sand, less shale. No problem. Launch the shale revolution, utilizing miraculous extended-reach horizontal drilling and high-tech staged fracing techniques. Need energy independence? No problem. U.S. oil output hit an all-time high of 12.24 MMbopd in May 2019, and the country began exporting crude.

Need to achieve net zero? Big problem. But as demonstrated by an unprecedented list of remarkable technological achievements and recent initiatives, it’s clear our industry is not the problem. On the contrary, we are integral to the solution. WO


Converting natural gas emissions to viable products

Two NETL-funded projects are working to develop efficient natural gas conversion technologies that produce marketable solids and liquids to mitigate methane emissions into the atmosphere.

The United States (U.S.) has more than 2 million active, abandoned or repurposed wells, along with an oil and natural gas pipeline network, compressor stations and other oil and natural gas infrastructure, that together emit approximately 8 million tons of methane (a greenhouse gas) annually—equivalent to 200 million tons of carbon dioxide (CO2), the amount of annual CO2 emissions from 400,000 vehicles. Significant progress has been made over the past decade in detecting and quantifying methane emissions at the source, using surface-based technologies like hand-held measurement devices and vehicle-based detection sensors, but these technologies cannot quickly assess large areas.

Other technologies, such as atmospheric sensing equipment attached to satellites or manned and unmanned aircraft, can better estimate the volume of methane emissions across wide areas. Even though there are a variety of emissions mitigation solutions available, flaring of produced methane, plus associated products, is the most common solution when takeaway capacity is insufficient, or economics are not favorable for transport and processing. However, due to the greenhouse gas implications of flaring, other options are being explored that would transform methane into marketable products through low-temperature, greenhouse-gas-free methods.

One of the main areas of interest of the U.S. Department of Energy’s (DOE) Methane Mitigation Technologies program, supported by the National Energy Technology Laboratory (NETL), is to develop process-intensified technologies for the conversion of flare/venting gas (mainly methane) into transportable, value-added liquid products. However, the current technologies for natural gas to liquids (GTL) are facing significant challenges, including:

  1. The deployment and intermittent operation at isolated sites often lack convenient access to electricity, make-up water and other required services; and
  2. The GTL technologies (e.g., indirect catalytic conversion of methane to liquid chemicals via synthesis gas) have proven to be complicated, inefficient and environmentally unfriendly (enormous CO2 emissions), requiring large economies of scale to compete in existing commodity markets and relying on extensive supporting infrastructure.

This means that indirect GTL technologies are presently impractical for meeting the program’s objectives. Thus, the research focus has shifted to converting methane that would otherwise be flared into liquid and solid products, with production criteria that include:

  1. Development of new catalysis materials
  2. Processes that can operate without convenient access to electricity and make-up water and with highly efficient conversion at a lower temperature
  3. Modular, compact, integrated and transportable reactor designs
  4. Technology platforms capable of producing a variety of products.

Below are two NETL-funded projects that align with the current directives to develop efficient natural gas conversion technologies that produce marketable solids and liquids to mitigate methane emissions into the atmosphere.

Process Intensification by a One-Step, Plasma-Assisted Synthesis of Liquid Chemicals from Light Hydrocarbons (DE-FE0031862). This project, led by the University of Notre Dame, aims to use plasma stimulation of a light hydrocarbon resource to synthesize value-added liquid chemicals. The work will evaluate the hypothesis that the plasma serves multiple roles in this transformative chemistry: activating carbon-hydrogen (C-H) bonds at low bulk gas temperature and pressure; providing a fast response for immediate startup and shutdown; enhancing catalyst lifetime through plasma-assisted removal of surface impurities; and providing a means to activate nitrogen (N2) to allow direct formation of chemicals containing nitrogen-carbon (N-C) bonds. In addition, the project will explore the potential for exploiting these processes more broadly, building on recent discoveries using plasma-assisted methods to convert hydrogen and N2 feeds, Fig. 1.

Fig. 1. A newly designed multimodal spectroscopic instrument combining polarization modulation infrared reflection-absorption spectroscopy (PM-IRAS), mass spectrometry, and optical emission spectroscopy (OES) for the investigation of plasma−surface interactions.

This project offers the opportunity to assess a potential mechanism to reduce quantities of flared gas at oil and natural gas production sites, where gas transport options are insufficient or do not exist, by converting the gas to energy-dense liquid products. In addition to providing a value-added pathway for use of the gas, the proposed technology will also offer environmental and economic benefits through the reduction of CO2 emissions caused by the flaring of light hydrocarbon feeds and through the potential use of CO2 as a soft oxidant. All of these factors offer the potential to meaningfully contribute to ensuring U.S. security and prosperity by addressing energy and environmental challenges through transformative science and technology solutions.

Project accomplishments, thus far, include completing construction of three plasma-stimulated reactor systems that are leak-free and demonstrate successful hydrocarbon-nitrogen coupling at lower bulk temperatures. Experimental procedures and data collection methods were developed to quantify the liquid products and capture operating and performance parameters for the reactors and materials. To fully understand the reaction mechanism, in-situ spectroscopy was implemented successfully in the plasma reactor to identify surface-adsorbed intermediates formed during plasma stimulation. The project team plans to further develop the approach with additional in-situ characterization and computer simulation. Additional experimental testing will also evaluate alternative plasmas, such as gliding arc.

For further information, please contact Robert Noll at NETL (robert.noll@netl.doe.gov).

Modular Processing of Flare Gas for Carbon Nanoproduct (DE-FE0031870). The goal of this project, led by the University of Colorado, is to develop a low-cost and easily scalable natural gas decarbonization process to form nanoparticles and nanofibers, to be used as a structural additive in concrete. This project will focus on the synthesis of the carbon nanoproducts through chemical vapor deposition, and on the impact of these fibers on the durability of the concrete. The process is conceptualized to be modular/mobile for manufacture on a skid, with easy transport between gas wells and a high turndown ratio to handle production rate changes.

Domestically produced carbon nanoproducts from natural gas are an attractive solution to our nation’s economic, environmental and energy concerns. This project focuses on natural gas decarbonization for flare gas reduction through conversion to carbon nanoproducts. The research and development in this project will result in a mobile unit that provides a high-quality carbon nanoproduct and addresses current challenges faced by flare gas processing in remote areas.

The carbon nanoproducts provide value to the concrete industry for their ability to mitigate cracking and improve infrastructure lifetimes. Current costs for ultra-high-performance concrete make it a novelty product; this innovation in chemical engineering will enable durable concrete infrastructure and allow for the growth of carbon fiber-reinforced products in additive manufacturing applications. Research developments in the processing approach will also enable larger-scale centralized gas decarbonization.

Project accomplishments, thus far, include completion of modifications to a laboratory reactor to integrate a proprietary fluidization system for the silica fume. This was operated successfully at 900°C, and the mechanical conceptual designs were completed to incorporate the components into a modular system. The concrete mix using the produced carbon product was optimized for compressive strength and fluidity, meeting the performance metrics set with mixes using commercially available carbon nanofibers.

An initial techno-economic analysis for the approach shows a 25% investor’s rate of return, assuming the carbon nanoproduct-coated silica that is produced can achieve a selling price of $2 to $4 per kilogram. The project team plans to continue with the lab-scale testing while constructing a skid-mounted modular version of the reactor, Fig. 2. A process also will be developed for a large-scale synthesis of the required catalyst.

Fig. 2. Process flow diagram for the proposed modular version of the reactor, utilizing chemical vapor deposition to create carbon nanoproducts from natural gas for use as a concrete additive.

For further information, please contact William Fincham at NETL (William.Fincham@netl.doe.gov).

A larger list of natural gas conversion projects supported by NETL includes:

Microwave Catalysis for Process Intensified Modular Production of Carbon Nanomaterials from Natural Gas (FE0031866)
West Virginia University

Oxidative Aromatization Catalysts for Single Step Liquefaction of Distributed Shale Gas (FE0031869)
North Carolina State University

Electrocatalytically Upgrading Methane to Benzene in a Highly Compacted Microchannel Protonic Ceramic Membrane Reactor (FE0031871)
Clemson University

One-Step Non-Oxidative Methane Upgrading to Hydrogen and Value-Added Hydrocarbons (FE0031877)
University of Maryland College Park

Methane Partial Oxidation Over Multifunctional 2-D Materials (FE0031878)
University of South Carolina

Production of Hydrogen and Carbon from Catalytic Flare Gas Pyrolysis (FWP-1022467)
National Energy Technology Laboratory

Commercialization Study of NETL Technology for Flare Gas to Olefins and Liquids (FWP-1022467)
National Energy Technology Laboratory

Microwave Enhanced Flare Gas Conversion to Value-Added Chemicals (FWP-1022467)
National Energy Technology Laboratory

Additional details about NETL can be found at www.netl.doe.gov. WO

DISCLAIMER

“This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.”

ACKNOWLEDGMENT

KeyLogic Systems, Inc.’s contributions to this work were funded by the National Energy Technology Laboratory under the Mission Execution and Strategic Analysis contract (DE-FE0025912) for support services.


How to digitalize sustainability in FPSO production

Digital technologies and a proactive design approach can help reduce containment-loss risks and improve safety in floating production vessels.

As the oil and gas industry has responded to increased pressure to improve its environmental performance, one segment that has faced unique challenges in its efforts to be more sustainable is floating production. The same factors that complicate production on floating production, storage, and offloading (FPSO) vessels can also hinder sustainability efforts. Sea conditions, weather events and the remoteness of the vessels all create potential risks for containment loss and worker safety. Also, while every aspect of a floating vessel’s operation can impact sustainability, the process used to design and build a vessel is typically more focused on a different priority—getting the vessel built and operating at the lowest possible cost.

Digital technologies can give you better insights into energy usage and emissions across your vessel’s processes. And they can help staff make better decisions to reduce containment and safety risks. But in order to make meaningful improvements that will help you achieve your sustainability goals, digitalization can’t be an afterthought. It should be built into your vessel’s design and address your vessel’s near- and long-term sustainability targets.

Addressing your risks. An OpenWater Energy analysis of a typical FPSO found that the top sources of CO2 emissions were gas compression (36%), general power (23%), water injection (20%), inert gas (12%), flared gas (5%) and fugitive emissions (4%). And even the less substantial emissions sources can have a significant impact on the environment and your bottom line. Flared gases are a prime example. According to Capterio, FPSOs today contribute 540 MMcfd of flared gas. That represents 4% of all global flaring and 21% of all offshore flaring—and nearly $1 billion of potential revenue per year.
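As a rough sanity check on those figures, the short Python sketch below converts 540 MMcfd of flared gas into an annual volume and values it at an assumed gas price and heat content; the price and heat content are illustrative assumptions, not figures from Capterio.

# Rough valuation of FPSO flared gas. The 540-MMcfd figure is from the
# article; the heat content and gas price are illustrative assumptions.

FLARED_MMCFD = 540            # million cubic feet per day (from the article)
HEAT_CONTENT = 1.035          # MMBtu per Mcf (typical pipeline-quality gas, assumed)
GAS_PRICE = 5.0               # USD per MMBtu (assumed for illustration)

annual_mmcf = FLARED_MMCFD * 365                   # ~197,000 MMcf/yr
annual_mmbtu = annual_mmcf * 1_000 * HEAT_CONTENT  # MMcf -> Mcf -> MMBtu
annual_value = annual_mmbtu * GAS_PRICE

print(f"Annual flared volume: {annual_mmcf/1_000:.0f} Bcf")
print(f"Potential revenue:    ${annual_value/1e9:.1f} billion/yr")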

REDUCING EMISSIONS

A digitalized connected vessel approach to risk assessment can help you better understand and improve your performance across emissions risk areas. Consider these three examples:

Energy measurement. Energy management is important on FPSOs, given how much hydrocarbon flows through the vessel and the various utilities required for the production and processing of the oil and gas streams, Fig. 1. Understanding your energy balance can give you insights into the sustainability and profitability of the vessel. But first, you need to be able to measure what’s going in versus coming out of your vessel. Only then can you begin to understand what’s happening within the process itself and start to identify opportunities for improvement.

Fig. 1. Understanding an FPSO’s energy requirements can improve profitability and provide insights into the vessel’s sustainability potential.  

Digital energy-management tools can help teams understand and optimize a vessel’s energy usage. The tools can provide trending data on the consumption and intensity of water, air, gas, electricity and steam (WAGES) on a vessel, and do so in the context of what’s happening with production.

Using pre-built and customizable dashboards, the tools can help make sure the right people get the right information. For instance, onboard staff may want to view energy consumption and energy intensity information for each utility to identify areas for optimization. On the other hand, onshore teams may need different information, like weekly WAGES trend reports or energy cost calculations and monitoring data. With the proper investment in instrumentation and digital infrastructure, raw process data can be transformed into useful information via these digital tools.
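As a minimal illustration of the kind of calculation such tools automate, the hedged Python sketch below computes a per-utility intensity trend, i.e., daily consumption normalized by daily production; the utilities and numbers are hypothetical placeholders, not data from any vessel.

# Minimal sketch: per-utility consumption intensity on an FPSO,
# i.e., daily WAGES consumption normalized by daily production.
# All figures are hypothetical placeholders.

daily_production_boe = [41_200, 40_800, 39_500]     # barrels of oil equivalent per day

daily_consumption = {
    "fuel_gas_MMscf": [11.8, 11.9, 12.4],
    "electricity_MWh": [310.0, 305.0, 322.0],
    "injection_water_bbl": [55_000, 54_200, 51_000],
}

def intensity_trend(consumption, production):
    """Return consumption per unit of production for each day."""
    return [c / p for c, p in zip(consumption, production)]

for utility, series in daily_consumption.items():
    trend = intensity_trend(series, daily_production_boe)
    print(utility, [round(x, 5) for x in trend])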

Carbon capture and storage processes can help you meet aggressive goals for reducing emissions. But as is true with energy management, better carbon capture and storage begins with better carbon accounting.

As some FPSO operators already know, getting detailed and accurate emissions data can be a complicated process. Flare analysis, for instance, can be elusive, because the precise oil and gas mixture coming out of a flare stack is difficult to measure. This is where an advanced emissions management solution can be helpful. It can combine artificial intelligence (AI), advanced analytics and physical property modeling to produce more accurate emissions measurements. These solutions can collect measurements for energy and emissions inventory and perform flare analysis to derive energy usage and predict emissions from the process. With these insights, carbon accounting can be improved, and staff can potentially make operational changes to reduce a vessel’s emissions and energy usage.

Insights from the tool can help staff uncover ways to improve or smooth out start-up operations. Analytics can help guide when operators should intervene or even trigger automatic optimization to help reduce energy and emissions in near-real time. And asset efficiency calculations can be used to implement predictive maintenance activities that can save you time, money, effort and energy.

Throughput. Keeping your vessel running at optimal conditions and achieving a process balance across multiple liquid and gas streams can help you better manage energy and emissions. An intelligent throughput optimization tool can help you achieve this by bringing real-time process engineering to your vessel’s control systems. Through a managed process relationship to the phase envelope, the tool can provide greater insights into process operation and help specify the minimum additive dosages needed to manage hazards such as hydrate formation and benzene freezing. And by giving you a process balance and a prediction of emissions, the tool can help staff adjust operations based on those results. For example, they can address cycle times and operate closer to control limits.

These three use cases and the technologies associated with them may not yet be able to eliminate all containment risk from your FPSO operations. But by helping you monitor, control and optimize vessel operations—or even reimagine what operations can be—they have the potential to drastically reduce the threat of containment loss and optimize vessel safety.

DIGITALIZATION STRATEGY

Whether you’re in the planning stages of a new floating production vessel or looking to improve the sustainability of an existing vessel, four elements should be at the core of your digitalization strategy.

First, accurate data. Your ability to monitor, report and ultimately optimize hydrocarbon containment on your vessel hinges on having timely and accurate data, Fig. 2. This means using smart instruments to track advanced vessel operations in real time. Too often, vessels are only instrumented for stable operations, not for advanced applications or operations. But adding smart instruments to a vessel at any point after the design or build phase can be cost-prohibitive, potentially jeopardizing your digital sustainability efforts. Along with good data, you also need data science capabilities in place to help discern data quality, such as by monitoring instrument health or comparing the before and after data points around an instrument.

Fig. 2. Using smart instruments to track vessel operations in real time enables engineers to monitor, report and ultimately optimize hydrocarbon containment.
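One simple form of the data-quality check described above is to compare an instrument's reading against a reconciliation of the measurements immediately upstream and downstream of it, and to flag the tag when the deviation exceeds a tolerance. The Python sketch below is a hypothetical illustration of that idea, not any product's actual logic.

# Hypothetical data-quality check: flag an instrument whose reading
# deviates from the balance of its neighboring measurements.

def flag_suspect_reading(upstream, instrument, downstream, tolerance=0.02):
    """Compare a reading against the mean of the adjacent meters.

    Returns True when the relative deviation exceeds the tolerance,
    suggesting drift, fouling, or a failed sensor rather than a real
    process change (which would normally move all three readings).
    """
    expected = (upstream + downstream) / 2.0
    deviation = abs(instrument - expected) / expected
    return deviation > tolerance

# Example: a meter reads low versus its neighbors.
print(flag_suspect_reading(upstream=1520.0, instrument=1395.0, downstream=1510.0))  # True
print(flag_suspect_reading(upstream=1520.0, instrument=1508.0, downstream=1510.0))  # False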

Second, a digital infrastructure. An open, digital infrastructure allows data to flow freely across your vessel’s systems without requiring customized connectors. And it allows you to use your preferred mix of application solutions instead of being locked into using one supplier’s tools.

Your infrastructure should also be flexible to support new technologies, as they become available. FPSOs are expected to operate for 20 to 25 years. It’s difficult to comprehend how digital technologies will further transform oil and gas operations in 2040 and beyond—but your vessel should be ready to use them. When it comes to what solutions should be built into your digital infrastructure today, advanced analytics and remote-connectivity capabilities are essential. They can help staff not only see data but also make sense of it, whether they’re in the vessel’s control room, at an onshore support center or accessing information from somewhere else as a subject matter expert.

Disruptive technologies like AI can also be a difference-maker, helping operators predict events like process upsets to help reduce flaring and other containment losses. And they can help you adopt predictive maintenance strategies to keep vessel assets running at optimal efficiency.

In addition, constructing newer vessels with modern modular automation standards in mind will better position vessel operators to upgrade future systems. Industry leaders are working to implement a set of modular automation system guidelines known as the Module Type Package (MTP) standard. The purpose of this standard is to help FPSO operators lower implementation and upgrade costs over a vessel's lifecycle, while also allowing for improved interoperability with OEM-supplied automation features in different modules. Having this flexibility will help FPSO operators upgrade vessels with sustainability goals in mind.

Third, the correct workflows. They’re sometimes overlooked, but accurate workflows should always be a critical aspect of your sustainability strategy. Why? Because data and insights only have value if they’re reaching the right people. Vessel workflows should align with your technology—and even be developed in tandem with your vessel’s digital applications. Make sure that staff is getting actionable, relevant insights, so that they can interpret quality data and apply their insights across vessel operations.

The current generation entering the workforce will likely welcome the blending of digital technology into their jobs, because it’s already woven into their daily lives. However, seasoned workers may be resistant to new technologies telling them how to operate equipment that they’ve been running for several years. Activities like change-management plans and executive sponsorship can help break down any barriers to change as you roll out new workflows.

TRAINING

Fourth, training. More than merely reading data, staff must be able to act on it, too. Training programs should be designed to help vessel staff know what to do with the information they’re getting, such as what changes they may need to make to process operations. Staff should also understand the information well enough to know if the changes they’re making are having a positive or negative impact on vessel operations and sustainability.

Of course, robust cybersecurity must be woven into all four of these elements. It can’t be an overlay or patched in—it must be addressed from the very beginning, at the conceptual phase. Similar to the suggested approach to digital technologies, “cybersecurity by design” is a critical concept to be adopted for all new floating production projects, taking into account the latest international standards and best practices.

OPERATIONAL BALANCE

An inevitable question when discussing digital sustainability is, how do you achieve your sustainability goals without a trade-off in your productivity goals? Often, the efforts required to minimize containment loss and safety risks are seen as conflicting with the efforts required to maximize production and reduce operating and maintenance costs.

But with a connected vessel strategy, you can address both your production and sustainability objectives. You don’t need to pick one over the other. There’s perhaps no better example of how you can support both sets of goals at once than with a connected workforce. A connected vessel empowers both onboard and onshore workers by connecting them to data-based insights, digital tools and each other. This can simultaneously benefit your vessel’s sustainability and productivity.

For instance, insights into machine performance from advanced analytics and AI applications can help operators diagnose problems before they become an issue—and even predict when maintenance or replacement will be required. This can help you maximize equipment uptime by catching and addressing issues earlier. At the same time, it can help you reduce the risk of containment loss through improved maintenance practices and asset-management strategies.

What’s more, a connected onshore operations center can reduce the number of staff needed onboard to operate the vessel. Subject matter experts also can support vessel operations remotely, without traveling to and from the vessel. These personnel can still be just as effective—perhaps even more effective—because they have access to richer data and instant connectivity to people and systems onboard. And by reducing the number of people onboard an FPSO, you can reduce potential exposure to safety incidents and lower your transportation costs, CO2 emissions and other operating expenses.

NEW PARTNERSHIP MODEL

To rethink sustainability on a floating production vessel, you first need to rethink how you design and construct that vessel, Fig. 3. Historically, the procurement process has favored a vessel’s near-term implementation over its long-term performance. As a result, opportunities to optimize sustainability using digital technologies typically aren’t addressed until after the vessel is built and in operation. But this can be a risky game. Design changes can be up to 10 times more expensive at the shipyard—and up to 100 times more expensive during commissioning and startup—compared to making them in the engineering phase. Additionally, when you holistically design a digital foundation into your vessel in the engineering phase, you can intentionally align it to your sustainability needs. And you can design the foundation to be open and standardized to ease the technology changes that will be inevitable across the vessel’s decades-long lifespan.

Fig. 3. To enhance an FPSO’s long-term performance, engineers need to implement a digital foundation into the vessel at the design phase to intentionally align it to future sustainability requirements before build-out starts.

New type of partner. A vessel automation and digitalization partner (VADP) can help you achieve this. The partner can still fill the role of a traditional main automation contractor (MAC), putting a focus on cost savings during the design and construction phases. But because this partner focuses on the operating life of the vessel, they can also help you realize long-term cost savings. They can manage the responsibility of your vessel’s automation, control, safety and digital processes not only during procurement but for the entire life of the vessel, serving as both an integrator and a permanent modernization advisor.

Using this new, more holistic partnership model can help you maximize production and meet your sustainability goals across the life of your vessel. The partner can help you understand the operational data you’ll need and then identify the smart instruments that can produce it. Then, they can help you put digital infrastructure in place that will support your advanced applications, even if you aren’t installing all of those applications on day one.

At the same time, a VADP can manage the coordination and integration of all OEM modules and packages into main vessel systems to reduce potential interface issues and schedule slippage. They can standardize the vessel on one control technology, which can help reduce training, simplify maintenance and consolidate parts. And because the partner is with you for the life of the vessel, they can help you deploy new technologies as they become available or as needs on the vessel evolve.

DUAL CHALLENGE

Trying to maximize a vessel’s production and meet sustainability goals at the same time can be a difficult balancing act. And it will likely only be more challenging in the years ahead, as the oil and gas industry moves toward goals like net-zero emissions. Digital technologies can provide valuable insights and capabilities to help manage these priorities together. And even more powerful technologies that require seamless connectivity and data sharing will no doubt be available in the future. The question is, will your vessel be ready and able to use them when the time comes? WO

GREG TROSTEL has 30 years of experience in the energy industry and is a global BDM for Rockwell Automation, leading the FPSO initiative for Rockwell and SENSIA. He started his career as a co-op student with MW Kellogg, based in Houston, and spent a combined 13 years with Kellogg/KBR in several engineering disciplines before moving into international sales. He also worked in technology and software development, including sales at Aspen, AVEVA, Meridium and PAS. Before joining Rockwell, he was involved in global project pursuit and strategic account management with Puffer/Emerson. Mr. Trostel has a degree in chemical engineering from Texas A&M University and an MBA from the University of Houston.


Managing by exception: Simplifying digital chemical management

Digital technology and smart devices have fundamentally changed the way that we manage our lives. These advances are now improving operational efficiency in the chemical management sector of the oil and gas industry.

Chemical injection on remote sites is becoming more common within the industry. Applications can include methanol injection to prevent “freeze-ups” or the injection of corrosion inhibitors to protect equipment and pipelines. Chemicals injected include methanol, demulsifiers, defoamers, foaming agents, biocides, wax inhibitors and clarifiers. Chemical injection management has traditionally been labor-intensive, requiring that staff make frequent trips to many distant sites to check pump calibration, change injection rates, check chemical tank levels and perform maintenance. A weekly visit, at minimum, is required to ensure that any period of pump downtime does not cause excessive equipment damage, environmental harm or lost production.

Forward-looking operators have embraced digitization of their fields and are “managing by exception” to improve productivity. A digital control and monitoring system eliminates the need for frequent site visits and provides immediate notification when problems arise. This ensures that staff can be dispatched quickly to correct issues before they further compound. Managing by exception also results in fewer overall trips to site and fewer hours on the road, with a reduced overall exposure to workplace and travel hazards.
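To make the "managing by exception" idea concrete, the hedged Python sketch below checks each site's reported injection rate and tank level against its targets and raises an alert only when something drifts out of bounds; the site names, rates and thresholds are illustrative assumptions.

# Hypothetical manage-by-exception check for remote chemical injection sites.
# An alert is raised only when a site drifts outside its acceptable bounds.

SITES = [
    {"name": "Well 12-07", "target_rate_lpd": 4.0, "actual_rate_lpd": 3.9, "tank_pct": 62},
    {"name": "Well 14-22", "target_rate_lpd": 6.0, "actual_rate_lpd": 4.1, "tank_pct": 58},
    {"name": "Well 09-18", "target_rate_lpd": 2.5, "actual_rate_lpd": 2.5, "tank_pct": 11},
]

RATE_TOLERANCE = 0.10   # +/-10% of target injection rate
MIN_TANK_PCT = 15       # refill threshold

def exceptions(site):
    alerts = []
    deviation = abs(site["actual_rate_lpd"] - site["target_rate_lpd"]) / site["target_rate_lpd"]
    if deviation > RATE_TOLERANCE:
        alerts.append(f"injection rate off target by {deviation:.0%}")
    if site["tank_pct"] < MIN_TANK_PCT:
        alerts.append(f"tank level low ({site['tank_pct']}%)")
    return alerts

for site in SITES:
    for alert in exceptions(site):
        print(f"ALERT {site['name']}: {alert}")  # only exception sites generate notifications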

The benefits of complete chemical management programs are well-proven and widely documented, and yet, many operators remain hesitant to adopt digital systems. “Complete” chemical management systems can be overwhelming, requiring a broad rollout of software, hardware, IT support, data security, smart phone, and wireless device connections. In addition, legacy injection equipment in the field is normally not capable of being digitally controlled, and these legacy pumps are not sufficiently reliable or accurate to be left unattended for longer durations. As a result, many operators feel trapped, leaving digital chemical management on the backburner.

Operators will be relieved to hear that enterprise-wide systems and wholesale organizational changes are no longer required to realize the benefits of digital chemical management. The first step is identifying the most essential components of a well-designed system, which include a smart pump with high reliability, a tank level measurement system and a remote monitoring system. Let’s take a closer look at these three technologies:

SMART PUMPS

The two fundamentals of every successful chemical program are selecting the right chemical and ensuring it is delivered to the process as planned. The basic design of low-cost chemical injection pumps has been unchanged for decades, pre-dating the digitization of the modern oil field. Design features such as manually adjustable stroke lengths or manually adjustable plunger packing do not lend themselves to digital control and unattended operation.

With the goal of digitization in mind, it cannot be stressed enough that a smart pump (pump/motor/controller) should be included as part of any new system design, Fig. 1. For existing fields, legacy pumps can be retrofitted with smart pump controllers and Smart Sight Glass technology (flow measurement and feedback control) that can make legacy pumps act like a “smart” pump; however, these retrofits do not address the pitfalls of legacy pump designs. Below is an outline of the critical features of a smart pump required for a successful chemical program.

Fig. 1. Schematic of smart pump, motor, and controller.

Autonomous local control (ALC) is the local monitoring of multiple parameters combined with intelligent internal control adjustments made without manual interaction, analysis, or human input. ALC can, for example, maintain injection rates by compensating for process pressure changes, over-tightening of packing, day/night voltage variations on solar-powered systems, and pump wear over time.
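A minimal sketch of the feedback idea behind ALC is shown below, under assumed numbers: each control cycle, the measured injection rate is compared with the target, and the speed command is nudged to compensate for disturbances such as rising discharge pressure. This illustrates the concept only and is not the vendor's control algorithm.

# Minimal sketch of autonomous local control (ALC): a simple integral-style
# correction that holds the chemical injection rate at target despite
# disturbances. The plant model and gain are hypothetical.

TARGET_LPD = 5.0        # desired injection rate, liters per day
GAIN = 0.4              # correction gain per control cycle (assumed)

def measured_rate(speed_cmd, pressure_factor):
    """Toy plant: delivered rate falls as discharge pressure rises."""
    return speed_cmd * pressure_factor

speed_cmd = 5.0
for cycle in range(8):
    # Disturbance: process pressure gradually reduces delivered flow per unit speed.
    pressure_factor = 1.0 - 0.03 * cycle
    rate = measured_rate(speed_cmd, pressure_factor)
    error = TARGET_LPD - rate
    speed_cmd += GAIN * error          # controller compensates without operator input
    print(f"cycle {cycle}: rate={rate:.2f} L/d, speed_cmd={speed_cmd:.2f}")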

Communication. The ability for the controller to pair with a control and monitoring system (SCADA, InSight Connect or other). Data monitored typically includes desired flowrate, actual flowrate, and system voltage. This can be expanded to include variables such as power draw, process pressure, temperature, as well as pump-specific performance indicators such as pump calibration factor and preventative maintenance data.

Automation. Legacy chemical pumps typically require a combination of manual stroke length adjustment and timer setting changes to adjust injection rates. These features seriously limit the systems’ potential for unattended operation. Today’s smart pumps are controlled with fully digital controllers and brushless DC (BLDC) motors, which enable broad turndown ratios and the ability to inject chemical continuously across a wide range of flowrates. This is especially important for applications where the desired or optimal injection rate varies as a function of local variables such as temperature, production rate or process pressure. Without a smart pump, these numerous rate changes require frequent site visits and time-consuming adjustments.

Accurate injection is essential for the success of a chemical program. The pumping system design must be capable of maintaining accuracy in the face of:

  1. Pressure fluctuations—for legacy pumps, motor load, and thus motor speed, is correlated with pump pressure.
  2. Voltage fluctuations—for solar-powered systems, natural day/night battery voltage fluctuations can have a direct impact on motor speed.
  3. Seal conditions—many legacy pumps use an adjustable packing that must be tightened regularly; this frequent wear and readjustment leads to large variations in friction on the plunger, which causes varying motor speeds and pump stroke rates. Furthermore, these seals are subject to dimensional changes due to temperature effects, such as day/night temperature swings or friction-induced heating. Modern pumps use self-adjusting, spring-energized seals that provide consistent performance with no need for regular adjustment.

Low maintenance. The “managing by exception” philosophy requires that equipment runs reliably to minimize unplanned site visits. For example, legacy pumps with adjustable packing rely on operators to regularly visit the pump for packing adjustment and pump calibration. Modern pump spring-energized seals, which are self-adjusting, can run unattended for long periods. Figures 2 and 3 show the difference in the seal designs.

Fig. 2. Cutaway rendering of a legacy packing gland.

Fig. 3. Innovative spring-energized seal design.

The performance of spring-energized seals used in chemical injection applications was documented over an eight-year period. On average, the seals operated for over four years before a service kit was required, Fig. 4. This service life far outlasts that of adjustable packing, which tends to last six months to a year, not including the frequent adjustments that are necessary to prevent leaks. Another high maintenance item for legacy pumps is the use of brushed DC motors. Modern, digital motor controls paired with brushless DC motors have been proven to provide a longer service life and lower cost of ownership.

Fig. 4. The performance of spring-energized seals is plotted over an eight-year period. The seals operated for an average of four years before a service kit was required.

TANK LEVEL MEASUREMENT

Traditionally, tank level measurements for chemical inventories are taken only when a chemical technician or pumper is on site. Any issue with the injection system, such as a tank running dry or a pump stopping, could take days or weeks to detect.

Remote tank level measurement is becoming more commonplace and is used to monitor chemical inventories, optimize chemical delivery scheduling and, in some cases, support invoicing. These measurements rely on watching large chemical tanks draw down at extremely slow rates over a long period of time and noting the difference week to week. These long periods are the main reason tank level measurement is not sufficiently accurate to determine injection rates.

Additional sources of error include natural tank level fluctuations due to daily temperature swings and chemical loss due to evaporation. As a result, it may take days or weeks to clearly establish long term trends in chemical consumption, defeating its use in any “management by exception” chemical program. New technology like the Smart Sight Glass addresses these limitations and combines both tank level measurement and instantaneous fluid rate measurement in one intelligent device, Fig. 5.

Fig. 5. Smart sight glass.
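The hedged arithmetic below illustrates why tank drawdown alone is a poor injection-rate meter: with an assumed tank size, injection rate and level-sensor resolution, days of drawdown are needed before a reading even clears the measurement noise floor.

# Why tank level alone is too slow for rate verification (hypothetical numbers).

TANK_VOLUME_L = 4_000        # bulk chemical tank, liters (assumed)
INJECTION_LPD = 5.0          # nominal injection rate, liters per day (assumed)
LEVEL_RESOLUTION = 0.005     # level sensor resolution, fraction of span (assumed)

resolvable_volume = TANK_VOLUME_L * LEVEL_RESOLUTION      # smallest detectable change, L
days_to_resolve = resolvable_volume / INJECTION_LPD

print(f"Smallest detectable volume change: {resolvable_volume:.0f} L")
print(f"Days of drawdown before a reading clears the noise floor: {days_to_resolve:.0f}")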

REMOTE MONITORING SYSTEM

To truly manage by exception without frequent site visits, a remote monitoring system is used to monitor system performance and chemical tank levels. Remote monitoring can be integrated easily into existing SCADA networks or deployed over a standalone IIoT cellular connection, such as the InSight Connect. These systems provide two-way control as well as graphical displays and dashboards, which can be configured with alarms to notify users when performance drifts outside acceptable ranges, Fig. 6. Operators have used these systems to manage and schedule chemical tank refilling, remotely change chemical injection rates, and even remotely diagnose chemical leaks or worn equipment that may soon require service.

Fig. 6. InSight Connect dashboard.

In many cases, data access and data security can be barriers to widespread adoption of these systems. For example, an oil and gas production company may monitor and control the complete injection system, while a third-party chemical provider requires access only to the chemical tank measurement to monitor inventory and manage deliveries; many producers will deny that third-party access for network security reasons. A solution to this issue is to split the data stream at the device level, connecting the device to a SCADA network for full control by the oil and gas operator, and then providing chemical tank monitoring to the chemical provider through a separate, independent IIoT gateway.
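A minimal sketch of that device-level split is shown below, using plain Python stand-ins for the two outbound connections: the full tag set goes to the operator's SCADA path, while only the tank level tag is copied to the independent gateway for the chemical supplier. The tag names and routing functions are hypothetical.

# Hypothetical device-level data split: full telemetry to the operator's
# SCADA path, tank-level-only telemetry to the chemical supplier's gateway.

FULL_TELEMETRY = {
    "target_rate_lpd": 5.0,
    "actual_rate_lpd": 4.9,
    "system_voltage": 12.6,
    "tank_level_pct": 63.0,
}

SUPPLIER_TAGS = {"tank_level_pct"}   # the only data the third party needs

def publish_to_scada(payload):
    print("SCADA path    <-", payload)   # stand-in for the operator's network

def publish_to_supplier_gateway(payload):
    print("IIoT gateway  <-", payload)   # stand-in for the independent cellular gateway

def route(telemetry):
    publish_to_scada(telemetry)
    publish_to_supplier_gateway({k: v for k, v in telemetry.items() if k in SUPPLIER_TAGS})

route(FULL_TELEMETRY)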

CONCLUSION

With today’s technology, oil and gas companies and their chemical suppliers will benefit from switching to smart pumps and digital chemical management solutions to improve efficiencies. New devices make the transition much less difficult and free users from many of the traditional barriers to automation, such as internal IT conflicts or the need to hire SCADA experts or software engineers. Many operators choose to first adopt these systems on high-value or remote sites to become comfortable with the technology and the methodology, which can then be deployed at scale throughout both new and existing fields. WO

REFERENCES

  1. Gould, P., “Digital technology enables intelligent, real-time chemical management,” AOGR October 2018.
  2. Moffatt, T., “Innovations in autonomous local control gain adoption for wellsite and pipeline chemical injection,” World Oil, November 2018.
  3. Moffatt, T., “Advances in seal technology drive down operating costs for chemical injection,” World Oil, November 2020.

Opening photo: A digital chemical management system reduces site visits and provides immediate notification when problems arise.

TERRY MOFFATT is president of Midland, Texas-based Sirius Controls Inc. He has played a key leadership role in founding PROMORE Engineering, Quality Wireline and Sirius Instrumentation and Controls. He has 17 years of experience in the design and application of chemical injection systems. Mr. Moffatt is a registered professional engineer and holds a BSc degree in mechanical engineering from the University of Alberta.

CURTIS JABUSCH is lead mechanical engineer at Sirius Instrumentation and Controls Inc. He has 14 years of experience in design and simulation of oilfield equipment, including pumps, valves and downhole drilling and production equipment. Mr. Jabusch holds a BSc in mechanical engineering from the University of Alberta.


Developing an F22, umbilical-compatible scale inhibitor for West Africa

Chemical scale inhibitors are an essential component in oilfield management. Selecting the appropriate chemical for the conditions and mode of application is critical to successful treatment. As operators encounter more challenging scaling environments, the need to develop synergistic scale inhibitor blends is more crucial.

A novel scale management program was needed in a field offshore Angola after barite and calcite scales were detected in separate flowlines, as well as calcite in the hydrocyclones on the FPSO vessel. Due to changes in the Angolan environmental regulatory framework, an environmentally improved solution was required to effectively treat the scale.

The presence of F22 low-alloy steel (which has a significantly lower chromium content than 316L stainless steel) within the well completion represented a further challenge. The required scale inhibitor not only had to be effective, but it also had to be benign to the materials of construction and compatible with the process fluids. Adding further complication, the solution had to be suitable for application via umbilical. If an inappropriate product is sent through an umbilical, the result can be significant downtime and exceptional remediation costs. Therefore, comprehensive umbilical compatibility testing would need to be performed on any proposed application.

ChampionX, a global leader in chemistry solutions and highly engineered equipment and technologies, was tasked by the operator with conducting development test work to select the optimal chemical solution that would meet all the required specifications, Fig. 1.

Fig. 1. An innovative scale management program was needed in a field offshore Angola that was compatible with low-alloy steel, could be safely applied via umbilical, and met Angolan environmental restrictions.

Customized, synergistic blending. Initially, several scale inhibitor chemistries were examined by dynamic scale loop (DSL) testing; however, none, on its own, achieved the scale inhibition performance needed for this particular set of challenges. It was then decided that the next step would be to evaluate blends of scale inhibitors to assess whether a synergy between two different chemistries could offer improved inhibition performance while still having low corrosivity toward F22 metallurgy, being fit for application via umbilical and meeting the Angolan environmental guidelines.

The chemistries selected for testing were known to have lower corrosivity toward metals. One used a polymer that had previously been applied successfully with F22 alloy; the other used a polymer widely deployed in the North Sea where low corrosivity is required along with strict environmental properties.

Since many traditional scale inhibitors are known to be corrosive against low-alloy steels, several additive chemistries were incorporated to further reduce the corrosivity of the blend. These final blends were successful in meeting the scale inhibition performance target and were found to offer good compatibility with F22 alloy under application conditions.

Since the product would be applied through an umbilical, sufficient concentrations of dispersants and surfactants were added to make stable formulations. Each candidate chemistry was tested for thermal stability, high-pressure viscosity, cold stress, hydrate resistance and particle count to understand the feasibility of application via umbilical. Furthermore, the candidate chemistries to be tested were compatible with the Angolan guidelines on environmental acceptability.

EXPERIMENTAL METHODS

Upon successful selection of the scale inhibitors to be used, a comprehensive testing program was carried out. These tests included:

  1. Static thermal stability: formulations are screened for thermal stability at field-appropriate temperatures.
  2. F22 alloy corrosivity: neat corrosivity toward F22 alloy is assessed at field-appropriate temperatures.
  3. Brine compatibility: formulations are injected at a range of concentrations into field-representative brine, which is then aged at the required temperature for 24 hours (hr) and assessed for signs of precipitation or haze.
  4. Dynamic scale loop: determines the minimum concentration of chemistry required to effectively inhibit scale formation.
  5. Viscosity: formulations are subjected to viscosity determination across a range of temperatures to ensure the final product could be successfully used in the field.

The water compositions used for testing were specified by the operator and closely represented the produced water composition of the field. This water composition (Brine 1) was the base case, and a second brine (Brine 2), representing a stressed condition with increased barium and sulphate concentrations, was also tested, ensuring the chosen product would be suitable, should field conditions change. Both brines represented a harsh barium sulphate and calcium carbonate scaling condition, presenting a challenging environment for effective scale inhibitor performance.

Materials compatibility testing. Materials compatibility testing is done to determine if the selected scale inhibitor is likely to pose a threat to the integrity of the materials of construction. To understand the corrosivity trends, each individual component of the selected chemistries was first evaluated for materials compatibility in a three-day static test at 120°C. To expedite screening of different additives in the formulations, each candidate formulation was then screened under the same conditions. Based on the test results over three days, different formulations were ranked and final corrosivity tests with the top candidates were conducted for 28 days.

Metal coupons were visually inspected, pre-weighed and then mounted on a tree assembly before being fully submerged in bottles of the test chemistries. The bottles were capped and sealed, and samples were placed in ovens for testing. After three and 28 days, the coupons were removed from the oven, cooled, cleaned, visually inspected, weighed and photographed. The general corrosion rate was estimated from the weight loss of individual coupons, using the standard weight-loss relationship:

Corrosion rate (mm/yr) = 87.6 × W / (D × A × T)

Where W = weight loss (mg)

D = metal density of coupon (g/cm3)

A = area of coupon (cm2)

T = period of exposure (hr)
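For illustration, the hedged Python sketch below applies this weight-loss relationship to hypothetical coupon data; the 87.6 constant assumes inputs in mg, g/cm3, cm2 and hours and returns the rate in mm/yr.

# Weight-loss corrosion rate for a coupon (hypothetical example values).
# With W in mg, D in g/cm3, A in cm2 and T in hours, the constant 87.6
# gives the rate in mm/yr.

def corrosion_rate_mm_per_yr(weight_loss_mg, density_g_cm3, area_cm2, exposure_hr):
    return 87.6 * weight_loss_mg / (density_g_cm3 * area_cm2 * exposure_hr)

# Hypothetical F22 coupon after a 28-day (672-hr) immersion test.
rate = corrosion_rate_mm_per_yr(weight_loss_mg=3.2,
                                density_g_cm3=7.85,
                                area_cm2=11.4,
                                exposure_hr=672)
print(f"General corrosion rate: {rate:.4f} mm/yr")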

Post-test coupons were further inspected for localized corrosion. Two coupons were used for screening purposes (3-day test) and three coupons were used for the final round of testing (28-day test). Localized corrosion assessment using vertical scanning interferometry (VSI) of the final recommended product was also conducted for the specific assets. A Bruker optical three-dimensional surface profilometer NPFlex-LA was used for characterizing metal coupon surface features. The surface-scanning technique based on VSI provides non-contact, quantitative measurements with a micron resolution in the vertical axis.

The NPFlex-LA determines the size and shape of the surface features over the whole scan area. Interferometry scans the entire coupon surface and quantitatively reports regions or features that deviate from the average surface baseline of the coupon. Not all surface features should be identified as a “pit” or “pitting corrosion,” as very minor changes in the surface structure may be labeled as a deviation from the average surface plane. These features may include inclusion complexes in the metal, grain structure lines, or imperfections, due to the surface finish technique. The scanning process identifies the topography of the surface, and post-scanning filtration of the data set allows for differentiation of scanning noise versus well defined, localized corrosion or pitting corrosion.

Describing surface features via these parameters is intended to give a more quantitative representation of overall surface quality after exposing the coupons to the different chemistries. However, no specific acceptance criteria were set for this project, so all features greater than 10 microns in depth on the post-test coupons were captured and processed accordingly. Notably, one of the scale inhibitors was intrinsically low in corrosivity, so emphasis was placed on including this specific scale inhibitor component in the final formulation.
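
As a simple illustration of this post-scan filtration step, the Python sketch below flags only those surface features deeper than the 10-micron reporting threshold used here; the feature data structure and coordinates are hypothetical, not profilometer output.

```python
# Minimal sketch: post-scan filtration of VSI surface features.
# Features shallower than the 10-micron reporting threshold are treated as
# scanning noise or surface-finish artifacts; deeper features are flagged for
# review as possible localized (pitting) corrosion. All values are hypothetical.

DEPTH_THRESHOLD_UM = 10.0

# Hypothetical feature list exported from the scan: (x_mm, y_mm, depth_um)
features = [
    (1.2, 0.8, 2.4),    # likely grain structure or surface finish
    (3.5, 2.1, 14.7),   # candidate pit
    (4.0, 0.3, 6.1),
    (5.8, 3.9, 22.3),   # candidate pit
]

candidate_pits = [f for f in features if f[2] > DEPTH_THRESHOLD_UM]

for x, y, depth in candidate_pits:
    print(f"Possible localized corrosion at ({x} mm, {y} mm), depth {depth} um")
```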

Brine compatibility testing. For brine compatibility testing, the scale inhibitors and brines were mixed at varying concentrations, including 0 ppm, 1,000 ppm, and 1%, 10%, 25%, 50%, 75% and 90% chemical in brine. The solutions were then shaken and placed in an oven heated to 110°C for the evaluation period. They were observed at 0, 2, 4 and 24 hr for precipitation, changes in clarity or phase separation.

Although an area of incompatibility was observed, initially comprising haze formation immediately after mixing at 10% and developing into precipitation between 1% and 75% after 24 hr, one of the proprietary scale inhibitor blends was fully compatible at the lower concentrations recommended for application, over the time periods that the chemical is likely to be present at application concentration.

Dynamic compatibility testing. Dynamic compatibility testing aims to replicate the application of scale inhibitor from neat chemical to the laboratory-determined minimum inhibitor concentration (MIC), which would determine the ideal application concentration, Fig. 2. This testing was performed by injecting the scale inhibitors into the brine at the application concentration. The samples were then heated to field temperatures, and the injection point was observed for any haze, precipitation, or gel formation. No precipitation or haze formation was seen at any point during or after injection, indicating full compatibility with the brine at the tested temperature.

Fig. 2. Dynamic compatibility testing in brine at 90°C (mid-injection).

Static jar testing. Static jar testing assesses scale inhibitor performance as crystal growth modifiers against barium sulphate and is an industry-standard method. For static jar testing, powder jars were prepared with anion brine. The candidate scale inhibitors were then dosed into these jars at the appropriate volumes for the required concentrations. In addition, a blank sample of anion brine with no scale inhibitor present was prepared, along with a control jar in which sulphate-free brine replaced the anion solution.

Another set of jars was dosed with cation brine. The jars were sealed and placed into a pre-heated oven. After one hour, the cations were mixed with the anions and placed back into the oven. Samples were taken after 2, 4 and 24 hr and were injected into a plastic test tube containing quenching solution to inhibit further scale precipitation. A cap was placed on the test tube and the solution was mixed. Each sample was analyzed for barium within 48 hr.

A chemistry is considered to effectively inhibit scale if, at the desired concentration, the percentage inhibition is greater than 80%. The scale inhibition efficiency was calculated as:

% inhibition = [(Ci − CB) / (CO − CB)] × 100

Where:

CB = cation concentration in the blank

Ci = cation concentration in the test (inhibited) samples

CO = cation concentration in the control
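
To make the efficiency calculation and the 80% pass criterion concrete, the Python sketch below evaluates a hypothetical dose-response data set and reports the lowest dose meeting the criterion; all concentrations in the example are illustrative only.

```python
# Minimal sketch: static jar scale-inhibition efficiency and MIC selection.
# All barium concentrations below are hypothetical, for illustration only.

def inhibition_efficiency(c_inhibited: float, c_blank: float, c_control: float) -> float:
    """% inhibition = (Ci - CB) / (CO - CB) * 100."""
    return (c_inhibited - c_blank) / (c_control - c_blank) * 100.0

C_BLANK = 40.0     # mg/L barium remaining with no inhibitor (scaled out)
C_CONTROL = 250.0  # mg/L barium in the sulphate-free control (no scaling)

# Hypothetical residual barium at each inhibitor dose after 24 hr
residual_ba = {2: 120.0, 5: 230.0, 10: 245.0, 20: 248.0}  # ppm dose -> mg/L Ba

mic = None
for dose in sorted(residual_ba):
    eff = inhibition_efficiency(residual_ba[dose], C_BLANK, C_CONTROL)
    print(f"{dose} ppm: {eff:.1f}% inhibition")
    if eff > 80.0 and mic is None:
        mic = dose

print(f"MIC (lowest dose with >80% inhibition): {mic} ppm")
```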

Figure 3 shows the results of static jar testing, which measured efficacy against barium sulphate scale formation. The results indicate that over a 24-hr test period, 5 ppm was sufficient to prevent scale for both scale inhibitors (SI-B and SI-D) being tested using Brine 1.

Fig. 3. SI-B and SI-D under static conditions, 90°C, 24 hr, Brine 1.

For Brine 2, with increased barium and sulphate concentration, the minimum inhibitor concentration (MIC) for SI-B increased to 10 ppm after two hr, 15 ppm after four hr and 20 ppm after 24 hr. For SI-D, the MIC increased to 15 ppm at two and four hr and 20 ppm after 24 hr, indicating comparable performance, Fig. 4.

Fig. 4. SI-D under static conditions, 90°C, 24 hr, Brine 2.

Dynamic scale loop performance testing. In line with standard DSL testing protocol, freshly prepared brines were filtered through a 0.45-µm membrane under vacuum. Both brines then passed into separate heating coils within the oven. At a T-junction, the brines mixed and passed into the scaling coil. The differential pressure is measured across the coil and increases once scale forms and adheres to the coil wall, causing a blockage. If scale formation within the coil causes the differential pressure to rise 1 psi above the baseline, the result is a “fail.”
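
As an illustration of this pass/fail criterion, combined with the chemical hold-time convention described in the results below (three times the blank scaling time), the Python sketch that follows evaluates a hypothetical differential-pressure trace; the function and data are illustrative, not part of the formal DSL protocol.

```python
# Minimal sketch: DSL pass/fail evaluation for a single inhibitor dose.
# A run "fails" if the differential pressure across the scaling coil rises
# 1 psi above baseline before the chemical hold time (3x the blank scaling
# time) has elapsed. The trace below is hypothetical.

FAIL_RISE_PSI = 1.0

def dsl_pass(trace, baseline_psi, blank_time_min):
    """trace: list of (time_min, differential_pressure_psi) samples."""
    hold_time_min = 3 * blank_time_min
    for t, dp in trace:
        if t > hold_time_min:
            break
        if dp - baseline_psi >= FAIL_RISE_PSI:
            return False  # scaled up before the hold time: fail
    return True  # held for 3x the blank time without a 1-psi rise: pass

# Hypothetical trace for Brine 1 (blank time ~12 min, hold time 36 min)
trace = [(t, 5.0 + 0.01 * t) for t in range(0, 40)]  # slow drift, no scaling event
print("Pass" if dsl_pass(trace, baseline_psi=5.0, blank_time_min=12) else "Fail")
```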

DSL testing assesses scale inhibitor performance as nucleation inhibitors against both barium sulphate and calcium carbonate. The method was selected to confirm which of the scale inhibitor blends would show equal, or superior, performance over multiple test methods. DSL testing was carried out on several scale inhibitors, Fig. 5.

Fig. 5. DSL testing using Brine 1, 1,000 psi, 110°C, pH 6.8.

The blank time, i.e., the time taken for scale formation to cause a differential pressure rise of 1 psi above the baseline, was recorded as around 12 min. for Brine 1. This results in a chemical hold time of 36 min. (3x the blank). The results show SI-D was the highest-performing scale inhibitor, with an MIC of 15 ppm, followed by SI-C and SI-B, both with an MIC of 30 ppm. The polymer SI-A had an MIC of 20-30 ppm.

For Brine 2, the blank time was recorded at about 8 min. with a chemical hold time of 24 min. (3x the blank). Again, the results show that SI-D was the best-performing chemical with an MIC of 15 ppm, followed by SI-B with an MIC of 20 ppm.

CONCLUSION

The testing performed by ChampionX detailed the process whereby different additives were evaluated alongside scale inhibitors to develop a formulation providing acceptable general, as well as localized, corrosion rates against F22 low-alloy steel. The final candidate formulation, SI-D, was compatible with the brine at the concentrations and over the time periods likely to be experienced in the field, and it proved effective against a mixed barium sulphate and calcium carbonate scaling regime.

Several conventional and widely applied scale inhibitor formulations were evaluated against the brine chemistry provided and were found to have higher MICs than the proprietary blend of the same scale inhibitors tested via DSL. In DSL testing, the lowest MIC for the blend was determined to be 15 ppm, versus 20-30 ppm for the scale inhibitors evaluated on their own. The DSL results suggest a degree of synergistic effect.

However, static jar testing indicated a similar level of performance between the different test chemistries against a purely barium sulphate scaling regime, indicating the synergistic effect may not be applicable to all scale types. Further laboratory testing to quantify the synergistic effect, or otherwise, of the scale inhibitor blend against a range of conditions and scales is ongoing. WO

ACKNOWLEDGEMENT

This article contains elements of SPE paper 209495-MS that was presented at the SPE International Oilfield Scale Conference and Exhibition, held May 25-26, 2022, Aberdeen, Scotland.

Opening photo: ChampionX provides chemistry, technology and engineering solutions to reduce risk, create value and help operators produce energy safely and more responsibly.

HELEN WILLIAMS is ChampionX senior RD&E group leader for scale, based in Aberdeen, Scotland. She is an experienced manager with more than 15 years of experience in the energy industry. Dr. Williams has a PhD in chemical and process engineering from Heriot Watt University.

BEN ADAMSON is a senior chemist with ChampionX, based in Aberdeen, Scotland. He previously held roles at Clariant Oil Services and Intertek before joining ChampionX in 2017. His work is focused on supporting scale management research, testing and development. Mr. Adamson has an MSc degree in chemical sciences from Aberdeen University.

SUBHASIS DE is a staff scientist at ChampionX, based in Sugar Land, Texas. He has more than 10 years of research experience in the oil and gas sector. Dr. De holds a PhD in organic chemistry from Wake Forest University and previously worked as a postdoctoral researcher in medicinal chemistry at University of Texas Medical Branch and University of Houston.

MANOJ BHANDARI is a staff scientist at ChampionX, based in Sugar Land, Texas, where he focuses on scale and corrosion management. Dr. Bhandari completed his PhD in organic chemistry from the University of Texas at Arlington, with a focus on total synthesis of natural products prior to joining ChampionX in 2011.

CLARE JOHNSTON is a senior staff scientist at ChampionX, based in Aberdeen, Scotland, with more than 22 years of experience in the energy industry. Her work supports the management of topside and downhole scale control programs and monitoring methods. Ms. Johnston is the author or co-author of over 25 papers on oilfield scale inhibition and holds a BSc degree in applied chemistry from Robert Gordon University.
