The General Bond Log Interpreter (GBLI) is capable of interfacing with all cement bond logging tools and performing automated interpretation at >90% accuracy.
KIRK HARRIS, GERGELY SEBESTYÉN and LASLO OLAH, Texas Institute of Science
Understanding and achieving integrity in cemented wellbores is critical for oil and gas operators, to ensure that freshwater aquifers are sealed and the surface environment is protected. Well integrity is also key to safely drilling and efficiently producing an oil and gas well. The cemented annulus is monitored and repaired as needed throughout the life of the well, up to and including the final plug and abandonment.
To help determine whether the annular space in an oil and gas well has been adequately sealed, an acoustic-type bond log is the most commonly used diagnostic tool, and it has been for the past 50 years. The basic objective of the logging tool’s operation is to determine cement bonding quality by transmitting acoustic signals and recording their attenuated reflections from the wellbore. The acoustic data received from the logging tools assist the bond log analyst in assessing the quality of the annular seal, determining any defects in the cement, and providing information to guide remediation efforts.
Acoustic tools used today include the basic Cement Bond Log (CBL), which uses one sonic-wave transmitter and two receivers; the Radial Bond Log (RBL), which contains up to eight receivers positioned to record signal reflections around the circumference of the wellbore; and the more advanced Ultrasonic Bond Logs (including USIT from SLB, CAST tools from Halliburton, URS from Weatherford, and ABI-43 from READ Cased Hole), which use a rotating transmitter/receiver that pitches and catches high-frequency signals to provide a more detailed view of the full circumference of the wellbore. Additional acoustic tools include Compensated and Segmented Bond, Shear Attenuation and Flexural Attenuation tools.
At the end of the logging operation, the oil and gas company expects to receive not only the acoustic data but also an interpretation of that data that draws a complete and descriptive picture of the wellbore’s condition.
The advancement of these acoustic logging technologies, and the efforts of cased-hole logging companies, should be applauded. Today’s logging runs can deliver excellent acoustic data quality and can provide a picture that aids greatly in cement bond log interpretation. However, increased industry efforts notwithstanding, bond log interpretation has stayed within the realm of “art form,” with humans at the center of the process.
In making the jump from acoustic waveforms to a correct interpretation, current data collection methodologies do not compensate for the limitations of human interpretation, causing misinterpretation of challenging or complex logs.
Such limitations arise from the capacity of logging tools, wellbore effects (more than 40 wellbore effects can skew log signals) and gaps in cementing knowledge. For a complete analysis, the interpreter’s objective is to determine not only what happened, but why it happened. It is well known that logging experts cannot rely exclusively on tool data, because logging tools do not take wellbore effects and cementing knowledge fully into account. Tool data provide limited information as to what happened and do not give the logging expert the data needed to reason about the problem.
A complete bond log interpretation must include the depth of the top of cement, the bond quality along the cemented intervals, and the condition of zonal isolation across those intervals.
Bond log interpretation also must describe the probable conclusion in categories of positive and negative outcomes. The former includes good cement bond to casing, good cement bond to formation, good formation/solids bond, and thin cement sheath, while the latter consists of free pipe (no cement), channel, micro-annulus (weak cement bond to casing), and no cement bond to formation.
To complete the full circle of accurate bond log analysis, a judgment should be made concerning any future actions, whether a remediation to regain wellbore integrity or a change in future well operations to improve the cementing result.
Unfortunately, most challenging cement bond logs are misinterpreted, and practically all of them are under-interpreted. The aforementioned limitations are further complicated by human limitations. Interpreting bond logs requires time; energy; a vast amount of prior experience; complex data interpretation and keen observation skills; both cementing and logging expertise; and an understanding of all well components and well operations. The interpreter also needs software to correlate several tracks of data.
Incorrect or inconclusive interpretations often result in significant negative financial consequences.
While the final interpretation consists of a finite number of bonding conditions, the interpreter must make hundreds, if not thousands, of real-time decisions, using various parameters until the interpretation is complete.
An interpreter must pay attention to many wellbore conditions affecting the interpretation’s final conclusions, including: casing quality (coated, non-coated, age), cement density and curing time, lithology column, condition of floats, pressure testing, swapping logging fluid, logging fluid parameters, thin cement sheath, washouts, lost circulation, shale creep, fast formation, casing-in-casing, cement settling, cement losses while cementing, fallback, compressive strength, centralizer placement, barite sag, oil-wet casing, cement shrinkage or expansion, thermal cycling, mud filter cake, formation factors, hole size and rugosity, fluid influx, mud channels, annular fluid levels, formation porosity, early drill-out, casing collars, and micro-annulus. The interpreter also must understand multiple tool types.
The complexity of considering all pertinent data and conditions in real time stretches the human mind’s capacity, hence the low percentage of correct interpretations. So far, technology improvements have focused mainly on bond log creation in an attempt to improve interpretation, Fig. 1. The basic structure, however, has remained the same: bond log tools and the software processing their signal streams have been upgraded continuously and have become more sophisticated, but their “product” (the visualized log) still ends up in the hands of a human interpreter.
The driving force behind GBLI’s development was to build a system that overcomes all the historical problems by adding the necessary computing power, thereby reducing the “art form” nature of human interpretation. The most desired objective was to develop a system that not only overcomes the shortcomings of human interpretation but also can be attached to any existing tool and technology, without requiring tool manufacturers’ and logging companies’ consent to use their software source code and/or confidential data.
The theory of GBLI’s developers was to use the Digital Log Interchange Standard (DLIS) file, because of its uniformity. DLIS is the file responsible for the log’s “visualization”; it drives the on-screen display and serves as the “print file.” GBLI’s developers targeted the digital analysis of DLIS. It was concluded that one of the best common denominators across the spectrum of all logging tools’ DLIS files is the Variable Density Log (VDL), which reliably shows the basic characteristics of the well’s bonding conditions.
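To make this starting point concrete, the minimal sketch below reads a DLIS file and extracts a VDL waveform channel, using the open-source dlisio library (v0.3+ API). The file name and the “VDL” mnemonic are assumptions for illustration; channel mnemonics vary by tool vendor, and this is not GBLI’s own code.

```python
# Minimal sketch: pull a VDL waveform channel out of a DLIS file with dlisio.
# "bond_log.dlis" and the "VDL" mnemonic are hypothetical placeholders.
from dlisio import dlis

with dlis.load("bond_log.dlis") as files:
    for f in files:                          # a physical DLIS file may hold several logical files
        for channel in f.channels:
            if channel.name == "VDL":        # vendor-specific mnemonic (assumed here)
                waveform = channel.curves()  # numpy array: one waveform per depth frame
                print(channel.name, channel.units, waveform.shape)
```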
SYSTEM ARCHITECTURE
GBLI is a highly sophisticated software package that operates a multi-level tree of analytics, with specific, proprietary mathematical algorithms built in at each level’s decision points. The software’s Central Analytical Tree (CAT) functions as the central processing entity. CAT is aided by various libraries, such as previous log interpretations within the same field; local geological data; density log waveforms; cement signature waveforms; historical data on how cementing affected logs; formation change; lithology/formation types; and tool data and tool limitations, Fig. 2.
These libraries are created and updated by the user of GBLI. Libraries can easily be added to the system or integrated with other users’ or manufacturers’ libraries. Users can modify the data within each library, and can activate and deactivate the libraries that CAT will utilize for a particular interpretation. GBLI’s architecture makes the system open-ended in accommodating each user’s specific needs.
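As a rough illustration of this plug-in behavior, the sketch below models a user-maintained library registry that can add, combine, and toggle libraries per interpretation run. All names are hypothetical; GBLI’s actual interfaces are proprietary.

```python
# Hypothetical model of a pluggable library registry (not GBLI's actual API).
from dataclasses import dataclass, field

@dataclass
class Library:
    name: str             # e.g. "field_interpretations", "lithology_types"
    records: dict         # user-maintained reference data
    active: bool = True   # libraries can be activated/deactivated per run

@dataclass
class LibraryRegistry:
    libraries: dict[str, Library] = field(default_factory=dict)

    def add(self, lib: Library) -> None:
        self.libraries[lib.name] = lib        # add a new library, or replace one

    def set_active(self, name: str, active: bool) -> None:
        self.libraries[name].active = active  # toggle whether CAT uses it

    def active_libraries(self) -> list[Library]:
        return [lib for lib in self.libraries.values() if lib.active]
```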
OPERATIONAL ARCHITECTURE
Over the course of the last 30 years, various bond logging technologies have been used by the industry. However, the advancement of these tools has stayed within each manufacturer’s domain and has served mainly to give the human interpreter better visualization. In addition, various manufacturers have modified tools built on the same technology, producing a spectrum of bond log data fields.
GBLI successfully standardizes the process by examining the VDL first, then using all other data fields on the log for further analysis, Fig. 3.
Figure 4 demonstrates how GBLI first reviews the entire log to identify suspicious areas (Phase 1). These areas are tagged as “NO GO.” At this point, GBLI does not establish the specific reason; the areas are simply tagged for further analysis.
In Phase 2, the system returns to the areas tagged in Phase 1. Using the log’s other fields as data, and aided by relevant information stored in the system’s libraries, GBLI analyzes each NO GO area to establish which ones could pose a problem for the final bond log analysis.
Phase 3 provides a comparative analysis between the areas tagged in Phase 2. By way of demonstration, assume that Phase 1 identified all NO GO areas, and Phase 2 identified two of them (at 1,350-ft and 1,825-ft depths) as having thin cement sheath or micro-annulus, each 4 ft long. However, the system detected 475 ft of perfect isolation between the two areas. After comparing these areas in Phase 3, the final log evaluation will not report them as a concern, Fig. 5.
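A minimal sketch of this Phase 3 comparative rule follows: short defects surrounded by long, well-isolated intervals are dropped from the list of concerns. The thresholds and field names are illustrative assumptions, not GBLI’s actual values.

```python
# Illustrative Phase 3 comparison: suppress short, well-isolated NO GO areas.
from dataclasses import dataclass

@dataclass
class NoGoArea:
    top_ft: float      # depth of top of suspect interval
    length_ft: float   # interval length
    condition: str     # e.g. "thin_sheath" or "micro_annulus"

def phase3_filter(areas: list[NoGoArea],
                  min_isolation_ft: float = 100.0,
                  max_defect_ft: float = 50.0) -> list[NoGoArea]:
    """Keep only areas that remain a concern after comparative analysis."""
    areas = sorted(areas, key=lambda a: a.top_ft)
    concerns = []
    for i, area in enumerate(areas):
        gap_above = (area.top_ft - (areas[i - 1].top_ft + areas[i - 1].length_ft)
                     if i > 0 else float("inf"))
        gap_below = (areas[i + 1].top_ft - (area.top_ft + area.length_ft)
                     if i + 1 < len(areas) else float("inf"))
        # A long defect, or one crowded by neighboring defects, stays a concern.
        if area.length_ft > max_defect_ft or min(gap_above, gap_below) < min_isolation_ft:
            concerns.append(area)
    return concerns

# The example from the text: two 4-ft areas with 475 ft of isolation between.
areas = [NoGoArea(1350.0, 4.0, "micro_annulus"), NoGoArea(1825.0, 4.0, "thin_sheath")]
print(phase3_filter(areas))   # [] -- not reported as a concern
```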
DATA PROCESSING
Phase 1 is crucial for setting priorities and directing further detailed analysis. Because channel data lack standardization, the data in DLIS files must be decoded with significant preprocessing effort. Part of CAT is an interpreter system that standardizes these data by resolution, units, and depth alignment across the different data fields in the log.
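A stand-in for the depth-alignment part of that standardization is sketched below: it resamples one channel onto another channel’s depth grid by linear interpolation. The sampling rates and curve names are illustrative assumptions.

```python
# Illustrative depth alignment: resample a channel onto a reference depth grid.
import numpy as np

def align_to_depth(depth_src: np.ndarray, values: np.ndarray,
                   depth_ref: np.ndarray) -> np.ndarray:
    """Linearly resample (depth_src, values) onto depth_ref."""
    return np.interp(depth_ref, depth_src, values)

# e.g. put a 0.5-ft gamma-ray curve onto a 0.25-ft amplitude depth grid
depth_gr = np.arange(1000.0, 2000.0, 0.5)    # ft
gr = np.random.rand(depth_gr.size)           # placeholder GR values
depth_amp = np.arange(1000.0, 2000.0, 0.25)  # ft
gr_aligned = align_to_depth(depth_gr, gr, depth_amp)
```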
With the data fully prepared, CAT advances to the feature processing stage (Phase 2). Here, specific proprietary filters are applied to each data channel. These filters are designed according to a set of predetermined rules aimed at isolating essential features that are indicative of the well’s structural and compositional characteristics.
GBLI uses various tools for data decoding and processing. Its Feature Selection and Application step applies filters to create a refined data channel, tailored to each rule’s input. The most frequently used filters include detection of ringing collars from the VDL; VDL attenuation analysis; evaluation of the stochastic characteristics of the VDL line; assessment of amplitude uniformity; correlation analysis between amplitude and gamma ray (GR) measurements; and detection of rapid fluctuations/changes in data.
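As one example of this filter family, the sketch below computes a sliding-window correlation between depth-aligned amplitude and GR curves, of the kind that can flag intervals where the bond signal merely tracks lithology. The window length is an illustrative assumption, not GBLI’s parameter.

```python
# Illustrative amplitude-vs-GR filter: Pearson correlation in a sliding window.
import numpy as np

def sliding_correlation(amp: np.ndarray, gr: np.ndarray, window: int = 101) -> np.ndarray:
    """Correlation of two depth-aligned curves in a centered sliding window."""
    half = window // 2
    out = np.full(amp.size, np.nan)
    for i in range(half, amp.size - half):
        a = amp[i - half:i + half + 1]
        g = gr[i - half:i + half + 1]
        if a.std() > 0 and g.std() > 0:   # avoid division by zero on flat segments
            out[i] = np.corrcoef(a, g)[0, 1]
    return out

amp = np.random.rand(2000)   # placeholder depth-aligned amplitude curve
gr = np.random.rand(2000)    # placeholder depth-aligned GR curve
corr = sliding_correlation(amp, gr)
```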
GBLI also applies the classifier across the entire depth range in a modular mode, which allows segment-specific analysis. The classifier’s output provides the best-fit class for each module, along with a detailed explanation of each class decision. These data are used both in the immediately subsequent phase and in the final evaluation of the log.
The Fuzzy Rule-Based Classification System (FRBC) is utilized in GBLI’s operations. However, while FRBC is a viable solution that meets the classifier’s requirements, GBLI is not dependent on a specific type of classifier.
The classifier plays a pivotal role in GBLI’s operation, leveraging the nuanced understanding of expert knowledge encoded in linguistic rules. This advanced classification system is designed to provide a robust framework for interpreting complex data from DLIS files. FRBC integrates fuzzy logic principles, which are particularly suitable for dealing with the inherent uncertainties and variability in well log data.
Unlike traditional binary classifiers, FRBC can handle degrees of membership and truth, allowing it to offer graded, nuanced decisions rather than binary outcomes. The operating principles of GBLI’s classifier are based on linguistic rules and algorithms utilizing geological formations, well characteristics and logging data. Figure 6 illustrates a segment of FRBC’s structure, showcasing how these linguistic rules are implemented within the system.
FRBC uses these rules to evaluate the data processed through various filters, such as those for detecting ringing collars or assessing amplitude uniformity. Each rule applies to specific features extracted and preprocessed from the raw DLIS data, ensuring that the inputs to the FRBC are both relevant and optimally conditioned for accurate classification.
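To show the flavor of such a rule, the sketch below encodes one linguistic rule (“IF amplitude is HIGH AND VDL ringing is HIGH THEN free pipe”) with simple ramp membership functions. The breakpoints and the rule itself are illustrative assumptions, not FRBC’s actual rule base; note that the rule also returns its rationale, mirroring the explanation output described next.

```python
# One illustrative fuzzy rule with ramp membership functions (not FRBC's rules).
import numpy as np

def high(x: float, lo: float = 0.5, hi: float = 0.8) -> float:
    """Membership in 'high' rises linearly from 0 to 1 between lo and hi."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def rule_free_pipe(amplitude_norm: float, vdl_ringing: float) -> tuple[float, str]:
    """IF amplitude is HIGH AND VDL ringing is HIGH THEN class is 'free pipe'."""
    degree = min(high(amplitude_norm), high(vdl_ringing))   # fuzzy AND = min
    rationale = (f"amplitude high: {high(amplitude_norm):.2f}, "
                 f"VDL ringing high: {high(vdl_ringing):.2f}")
    return degree, rationale

# Strong casing arrival plus strong VDL ringing both point toward free pipe.
degree, why = rule_free_pipe(0.75, 0.90)
print(f"free pipe membership {degree:.2f} ({why})")
```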
One of the distinguishing features of FRBC is its ability not only to classify data but also to provide explanations for each classification. The output of each classification decision includes a rationale, based on the specific rules that were triggered. This transparency is important for subsequent phases and for the final interpretation, because it ensures that the user can follow the logic and reasoning of the system.
It also allows for iterative improvement of the system, as operators and engineers can review and refine the rules and parameters, based on changing field conditions and parameter lists.
Figure 7 illustrates the effectiveness of the input filters, demonstrated with the ringing-collar filter applied to VDL data. The filter significantly enhances the detection of ringing collars, which is crucial for identifying structural irregularities within the well. The image uses a specific Sobel-filter-based algorithm to process the vertical gradient of the preprocessed VDL data. This method emphasizes the distinct patterns associated with ringing collars, highlighted in red on a relative scale from lower to higher ringing-collar values.
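A minimal version of that gradient step is sketched below, with SciPy’s Sobel operator applied along the depth axis of a VDL image; the normalization and threshold are illustrative assumptions.

```python
# Illustrative ringing-collar response: vertical Sobel gradient of a VDL image.
import numpy as np
from scipy.ndimage import sobel

def ringing_collar_response(vdl: np.ndarray) -> np.ndarray:
    """Absolute depth-axis Sobel gradient of the VDL image, scaled to 0..1."""
    grad = np.abs(sobel(vdl.astype(float), axis=0))   # axis 0 = depth
    return grad / grad.max() if grad.max() > 0 else grad

vdl = np.random.rand(500, 120)   # placeholder VDL: 500 depth frames x 120 samples
response = ringing_collar_response(vdl)
collar_rows = np.where(response.mean(axis=1) > 0.5)[0]   # candidate collar depths
```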
FUNCTION OF GBLI’S LIBRARY SYSTEM
GBLI’s core is its library system, designed to support and participate in the execution of the phase-based process. These libraries are flexible and allow the user to perform updates, add new information, or even add another library to the system, in order to enhance the ability to interpret data accurately. Three distinct types of libraries are deployed within GBLI to support CAT.
Static-Data Libraries. CBL interpretation resources holding essential well data, such as Casing Tally, Rig Operations, Logging Tools, Cement Jobs, Geology and Wellbore Information. Tailored for specific interpretations, these libraries hold the fundamental data necessary for analysis and serve as inputs for the decision-making processes. These data are mainly entered manually from reports created during drilling and cementing operations and include both qualitative and quantitative aspects. Systematic standardization is achieved through specific questions that establish the details necessary for accurate interpretations.
Decision-Making Libraries. Used in Phases 2 and 3, these libraries utilize decision trees to manage CAT’s automatic interpretation process. They organize decision-making based on the initial results from Phase 1. The framework within these libraries integrates decision trees that draw on inputs from the Phase 1 classifier, static-data libraries and system libraries, as shown in Fig. 8. Initially, Phase 1 provides a preliminary explanation of the results generated by the initial classifier output. The decision trees then examine these results further to confirm their accuracy and check for any inconsistencies, ensuring that interpretations are based on complete and correct data.
System Libraries. Constructed from past interpretation results, these libraries serve as a comparative basis to enhance the quality of new interpretations. They provide important historical data that aid the decision-making process, and their decision trees can reference past results.
GBLI libraries use a standardized interface protocol to export and import data.
Individual libraries can be added to the system, combined with others, removed, or enabled/disabled. Drilling and cementing reports frequently include manually entered information that is not consistently standardized, and it is crucial to standardize library data to ensure precise interpretations. To gather essential data in a standardized format, the automated interpreter uses forms with predefined questions and data fields, which helps maintain the consistency and reliability of the information. The decision-making library also adopts this standardized format, making it interchangeable and exportable for use with other automated interpreters.
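A hypothetical data-entry form of the kind described here might look like the sketch below; the fields, units and validation limits are illustrative assumptions, not GBLI’s actual forms.

```python
# Hypothetical standardized cement-job entry form with basic validation.
from dataclasses import dataclass

@dataclass
class CementJobForm:
    slurry_density_ppg: float   # e.g. 15.8 lb/gal
    curing_time_hr: float       # wait-on-cement time
    planned_toc_ft: float       # planned top of cement
    lost_circulation: bool      # reported during the job?
    casing_od_in: float         # casing outer diameter

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the entry is usable."""
        issues = []
        if not 8.0 <= self.slurry_density_ppg <= 22.0:
            issues.append("slurry density outside plausible range")
        if self.curing_time_hr < 0:
            issues.append("curing time cannot be negative")
        if self.casing_od_in <= 0:
            issues.append("casing OD must be positive")
        return issues
```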
LIMITATIONS OF ARTIFICIAL INTELLIGENCE
After a three-year study of most aspects of bond log interpretation, and investigation of multiple research avenues, the creators of GBLI concluded that automated bond log interpretation cannot be achieved by the sole application of artificial intelligence (AI). The core problem lies in the inherent divergence between AI’s required data fields (including data uniformity and quantity) and the stochastic dispersion of a bond log. The significant variance in bond log parameters (even within the same field) cannot supply the large amount of similar data (by which AI can be “taught” in a managed way) that AI’s operation requires.
A published article (“Automatic interpretation of cement evaluation logs from cased boreholes using supervised deep neural networks”) on the digital library SCRIBD (https://www.scribd.com) portrays the underlying problem well. The article’s abstract states in part: “…Cement evaluation logs must, therefore, be interpreted by trained professionals. To aid these interpreters, we propose a system for automatically interpreting cement evaluation logs, which they can use as a basis for their own interpretation. This system is based on deep convolutional neural networks, which we train in a supervised manner, using a dataset of around 60 km of interpreted well log data. Thus, the networks learn the connections between data and interpretations during training…”
(Ref: https://www.scribd.com/document/662519603/1-s2-0-S0920410520306100-main)
As stated above, manufacturers, service companies and their customers (rig owners) handle bond logs on a confidential basis, which negates successful “training in a supervised manner” of the neural network. Utilizing partial information (whatever the AI company was able to gather) could be highly dangerous at interpretation. Even if the “60 km of interpreted well log data” (approximately 37 miles) are available and represent the totality of a data package for a certain field (or type of field), comprising geophysical, tool and cementing data and a large (but fluctuating) list of other parameters, that data package will be slightly or significantly modified by the introduction of one new parameter. AI is unable to react to each variance of the data field, or it takes a wholly separate “teaching” system to make the necessary correction (whereby a separate AI system supervises and manages the AI that works on the interpretation).
CONCLUSION
Bond log interpretation is one of the most important components of a well’s life. It spans from drilling through inspection to well abandonment, because the structure must be securely sealed from the formation and all its potential effects. To this day, however, bond log interpretation has kept its original modus operandi: the software and hardware side produces the log, and the interpretation is created by humans.
This original set-up has kept bond log interpretation in the realm of art form, heavily dependent on the interpreter’s experience, knowledge, and even the interpreter’s momentary mental acuity. As overall industry experience has shown, incomplete or inaccurate interpretations almost always result in significant additional cost.
Based on the latest technology advancements, it is highly feasible to standardize the underlying conditions and various data stacks, opening the potential to develop standardized mathematical algorithms for standardized and automated bond log interpretation. This will fundamentally change the process and significantly increase the quality of bond log interpretation. GBLI, by Texas Institute of Science, is the first system in the industry that requires human contribution only at the data entry point and completes the interpretation based on standardized rules and data formats. WO
KIRK HARRIS is the founder and senior technical advisor for ThoroughBond LLC, based in Lafayette, La. The firm provides global technical support for cementing and bond log interpretation. He currently supports several operators in deep water cementing in the Gulf of Mexico, P & A operations in Africa, and global CO2/H2 storage projects. Formerly the global cementing advisor for Oxy, Talisman, and Repsol, Mr. Harris began his career with Halliburton, where he worked as a field and research engineer. He graduated from Purdue University with a bachelor’s degree in civil engineering.
GERGELY SEBESTYÉN, based in Budapest, Hungary, is a co-founder and R&D engineer at SensNet Kft., specializing in Industrial IoT and custom software development in data analytics and automation. He also serves as a lecturer at the Kandó Kálmán Faculty of Electrical Engineering at Óbuda University, focusing on hydrogen technologies and Industrial IoT. Mr. Sebestyén is pursuing his PhD at the Doctoral School of Applied Informatics and Applied Mathematics at Óbuda University, with research centered on sensor networks and data analytics. He holds a degree in electrical engineering from the Budapest University of Technology and Economics.
LASLO OLAH, a veteran of the high-technology sector, is the co-founder and CEO of Texas Institute of Science (TxIS). He is one of the original theoreticians of science globalization and the growing influence of Eastern science on the world’s high-tech economy. Mr. Olah has spent his 30-year career in the high-tech arena. Before TxIS, he spent ten years as CEO of the Gamma Group, which operated based upon his theory of sourcing R&D globally. Mr. Olah holds 34 U.S. patents, as well as EECSc and MSCSc degrees. He is chairman of the board of DB Funds, Inc. (a worldwide educational non-profit organization), as well as a Rotary International Paul Harris Fellow.