Fully automated 3D seismic surface detection on a laptop. 3D views of the input data volume and of the detected surface volumes are shown.
NORMAN MARK, Consultant
INTRODUCTION
This article extends my work that appeared in this publication about two years ago, which presented automated horizon picking from 2D seismic data. After accomplishing that, I had a long-enduring mental block on how to extend the method to 3D for the purpose of automating subsurface mapping.
Subsurface mapping is the most tedious, time-consuming, error-prone and least-appreciated process in all of oil and gas exploration but arguably the most important: no map, no drilling! Bad map—dry hole (usually). Good map—production (maybe). Maps based on artistic inclinations, office politics or sales purposes, rather than strictly honoring data, have led to legendary O&G financial failures.
Here, I present the output of code I wrote that actually does automate 3D seismic interpretation, fully honoring all the data, and does it fast, without any wishful-thinking component built into the algorithms.
My decades-long experience as a geophysicist working with oil and gas exploration and seismic service companies gave me the curiosity to pursue the goal of automating seismic survey mapping and immunized me against the self-punishment of learning how to code.
To map seismic surfaces, best-selling seismic interpretation software requires the user to pick a point within a positive or negative reflection, a seed point from which the software builds a surface by propagating through adjacent similar amplitudes in all directions. A surface is created once all similar touching amplitudes have been found.
My main contribution is that the decisions on where to pick are internal to the software. The software finds all touching points in all directions, and all surfaces, top to bottom, within the data volume, and does so very quickly.
DATA AND HARDWARE
The input data used here are from a good-quality, public-domain, 3D seismic data volume, Parihaka, offshore New Zealand, and are pushed through my code on a 10-year-old Lenovo notebook with an Intel Core i7-5600U CPU @ 2.60 GHz.
As the laptop has no graphics card, I tested its graphics capacity to learn how much data it could handle, in case I got lucky with my code. Surprisingly, the upper limit for drawing a random 3D point cloud in a browser is 30 million points, after about 5 min of waiting for the blank browser window to light up with points, Fig. 1. Forty million points will load, producing a similar image with 40 layers, but doing so completely freezes the CPU, requiring a reboot.
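For readers who want to reproduce this kind of stress test, the sketch below shows one way to do it (a hypothetical sketch, not the code behind Fig. 1): fill a BufferGeometry with random positions and render it as a single THREE.Points object, varying the point count until the browser gives up.

// Hypothetical point-cloud stress test; a minimal sketch of the idea above,
// not the code used for Fig. 1.
import * as THREE from 'https://cdn.skypack.dev/three@0.132.2';

function makeRandomCloud(n) {
  const positions = new Float32Array(n * 3); // x, y, z per point
  for (let i = 0; i < positions.length; i++) {
    positions[i] = (Math.random() - 0.5) * 100; // random coordinate in [-50, 50]
  }
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  const material = new THREE.PointsMaterial({ size: 0.05, color: 0x3366ff });
  return new THREE.Points(geometry, material); // add the returned object to a scene
}

const cloud = makeRandomCloud(30000000); // the 30-million-point limit found above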
An arbitrary data prism of 101 inlines x 101 crosslines (10,201 traces) contains 2 sec of better-than-average-quality seismic reflection data from the Parihaka cube; the sample rate is 3 ms. From this sub-volume, local-maxima reflection times were computed from the SEGY data and used in this study. No filters of any kind were applied to these input data. This 101-trace x 101-trace x 2-sec data prism contains more than 940,000 peak reflection times.
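The peak-picking step itself is not listed in this article, but the rule is simple; a minimal sketch with hypothetical names, assuming a sample is a peak when it is positive and exceeds both neighbors:

// Hypothetical sketch of peak picking along one trace (not the author's exact code).
// amplitudes: array of trace samples; sampleRateMs: 3 ms for this survey.
function peakTimesMs(amplitudes, sampleRateMs) {
  const peaks = [];
  for (let i = 1; i < amplitudes.length - 1; i++) {
    if (amplitudes[i] > 0 && amplitudes[i] > amplitudes[i - 1] && amplitudes[i] >= amplitudes[i + 1]) {
      peaks.push(i * sampleRateMs); // local maximum of a positive reflection
    }
  }
  return peaks;
}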
Figure 2 contains views of the data at three different depth ranges within my ThreeJS 3D viewer; they are computed within a few seconds. On the right, the full volume of reflection points is colored by depth.
Much can be learned about the geologic structure from these views of the local maxima data alone, without any further computing. This style of data viewing has not been available in the commercial seismic interpretation software I have seen.
These views, along with transparency options within the ThreeJS code, make it quick and easy to identify zones of interest/targets. Again, these data are raw peak times; there is no amplitude information, and no surfaces have been isolated into a text file yet.
WORKFLOW
The first step in finding surfaces is finding 2D horizons within each of the separate seismic inlines. Horizons are defined by touching 2D points within an inline or x-plane. By touching, I mean x coordinates change by one and y coordinates change by zero or one.
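In code, this touching rule is a one-line comparison. The sketch below (hypothetical names; each point is assumed to be an [x, y] pair of trace index and sample index) chains peaks into a 2D horizon exactly as defined above:

// Hypothetical sketch of the touching rule (not necessarily the author's code).
// p and q are [x, y] points within one inline: x is trace index, y is sample index.
function touching(p, q) {
  return Math.abs(q[0] - p[0]) === 1 && Math.abs(q[1] - p[1]) <= 1;
}

// Grow a horizon by appending any peak on the next trace that touches the chain's tail.
function growHorizon(chain, nextTracePeaks) {
  const tail = chain[chain.length - 1];
  const next = nextTracePeaks.find(q => touching(tail, q));
  if (next) chain.push(next);
  return chain;
}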
The images in Fig. 3 show the results of this process. Red points are local maxima, and the green lines are the horizons. The left side shows a full-length section of an inline with 2 sec of peak reflections. The right side is a detailed view of horizons indicated by their connectivity. Green lines connect reflection points, each separated by one crossline, typically 12.5 to 25 m in a modern seismic survey.
The three images in Fig. 4 show views of the entire data volume of touching (one-unit separation) positive reflection points connected by lines.
The entire volume of chains of points is computed in less than 4 min and output to a text file in the following x,y,z format:
1837,-92,4240 -> 1837,-93,4241
1837,-92,4244 -> 1837,-92,4245 -> 1837,-92,4246 -> 1837,-93,4247
1837,-92,4251 -> 1837,-92,4252 -> 1837,-92,4253 -> 1837,-92,4254 -> 1837,-92,4255
1837,-92,4257 -> 1837,-92,4258 -> 1837,-92,4259 -> 1837,-92,4260 -> 1837,-92,4261
-> 1837,-92,4262 -> 1837,-92,4263 -> 1837,-93,4264
Here, four chains of 3D points (four horizons) are printed as inline, reflection time, and crossline integers, with connections between points indicated by arrows. Time is negative, so that reflection time displays increasing downward within the 3D viewer.
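Assuming that arrow-delimited layout, a few lines of JavaScript recover the chains as arrays of [inline, time, crossline] triplets; a minimal, hypothetical sketch:

// Hypothetical parser for the chain format above: one chain per line,
// points joined by ' -> ', each point printed as 'inline,time,crossline'.
function parseChains(text) {
  return text.trim().split('\n').map(line =>
    line.split('->')
        .map(s => s.trim())
        .filter(s => s.length > 0)          // tolerate a leading '->' on wrapped lines
        .map(p => p.split(',').map(Number)) // [inline, -time, crossline]
  );
}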
Although surfaces composed of touching reflection points are seen in these images (Fig. 4), the 2D horizons have not been organized into surfaces yet. The chains of points are the input to the algorithm that computes the 3D surfaces. My definition of a surface for this purpose is an assembly of touching points, one point thick, completely surrounded by open space.
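The article does not list the surface-building code itself. One standard way to merge 2D horizon chains into 3D surfaces is a union-find (disjoint-set) pass: any two chains on adjacent inlines that contain touching points land in the same set, and each final set is one surface. The sketch below is that generic technique, with hypothetical names, not necessarily the author's implementation:

// Hypothetical union-find merge of horizon chains into surfaces;
// a generic technique, not necessarily the author's algorithm.
class DisjointSet {
  constructor(n) { this.parent = Array.from({ length: n }, (_, i) => i); }
  find(i) {
    while (this.parent[i] !== i) { this.parent[i] = this.parent[this.parent[i]]; i = this.parent[i]; }
    return i;
  }
  union(i, j) { this.parent[this.find(i)] = this.find(j); }
}

// Two chains belong to one surface if any of their points touch across inlines:
// inline differs by one; time and crossline each differ by at most one.
function touches3D(a, b) {
  return a.some(p => b.some(q =>
    Math.abs(p[0] - q[0]) === 1 && Math.abs(p[1] - q[1]) <= 1 && Math.abs(p[2] - q[2]) <= 1));
}

function buildSurfaces(chains) {
  const ds = new DisjointSet(chains.length);
  for (let i = 0; i < chains.length; i++)
    for (let j = i + 1; j < chains.length; j++)
      if (touches3D(chains[i], chains[j])) ds.union(i, j);
  const surfaces = new Map(); // set root -> merged point list
  chains.forEach((chain, i) => {
    const root = ds.find(i);
    if (!surfaces.has(root)) surfaces.set(root, []);
    surfaces.get(root).push(...chain);
  });
  return [...surfaces.values()]; // each entry is one surface's points
}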
The 3D viewers used to display the input data and surfaces were built using ThreeJS. To minimize the display of surface fragments, only surfaces larger than 5,000 points, out of a possible 10,201 points, are shown in Fig. 5. The surfaces were computed in about 15 min and are shown with unique colors to confirm they are, in fact, single connected entities.
The red axis indicates inline direction, and the blue axis indicates crossline direction. The green y axis shows increasing reflection time downward, 0 to 2 secs. In another viewer, Fig. 6, surfaces are colored by depth to show structural details.
The output format for the surfaces is nearly identical to the format for the 2D horizons, except that the crossline (z-direction) values vary, as one would expect with surfaces. The output file is a listing of touching/adjacent 2D horizons in text format, easily read by any mapping software that accepts ASCII files.
Computing hundreds of surfaces took 20 minutes on my 10-year-old laptop. Mapping these same surfaces using commercial O&G exploration mapping software would take months!
TUTORIAL ON DISPLAYING 3D DATA
The image below is a screenshot of the HTML page that follows it. All the images in this article are from HTML pages, using ThreeJS and JavaScript. ThreeJS is a JavaScript library of WebGL functions to which everyone has access. There is no need to download a language or install an IDE to be able to create a 3D image in a web page very quickly.
The HTML page that follows the image contains many comments about the purpose of the JavaScript syntax, but not everything is commented. However, one needs only limited fluency in HTML and JavaScript to get a movable image onscreen quickly.
threejs.org is the dedicated website, with many breathtaking examples of 3D graphics. It sold me! The views of seismic data I created in this article are trivial compared to what can be done in ThreeJS. Learning its full capability takes a while, and the online manual is cryptic; tedious searching is often necessary to understand a particular function parameter, though some are straightforward.
Copy and paste the text below the image into a text editor, save the file with an .html suffix, and load it into a browser (I use Firefox); you will have the image below on your screen and will be able to move the data volume and see point coordinates by clicking on them. Give your input data file a name with a .txt suffix. The file should contain space-separated points in the form x1,y1,z1 x2,y2,z2 x3,y3,z3…
<!DOCTYPE html>
<html>
<head>
<title>Read Text File and 3D Viewer</title>
<style>
/* Set viewer dimensions and border */
#viewer {
  width: 400px;
  height: 400px;
  margin-left: 100px;
  margin-bottom: 200px;
  border: 5px solid blue;
  position: absolute;
  box-sizing: border-box;
}
</style>
</head>
<body>
<h1>Read and Plot Data From File Tutorial</h1>
<h3>Select a text file of 3D points data you want to view.</h3>
<h3>Its contents will be printed, parsed into points and plotted in 3D.</h3>
<h3>Click on a cube to see its coordinates.</h3>
<input type="file" name="inputfile" id="inputfile">
<br>
<pre id="output"></pre> <!-- the input text is printed here, for QC purposes, after being parsed into points -->
<div id="viewer"></div>
<div id="tooltip" style="position: absolute; display: none; background: white; border: 1px solid black; padding: 5px; font-size: 12px; pointer-events: none;"></div>
<script type="module">
// Import ThreeJS and OrbitControls from the same CDN version so they are compatible
import * as THREE from 'https://cdn.skypack.dev/three@0.132.2';
import { OrbitControls } from 'https://cdn.skypack.dev/three@0.132.2/examples/jsm/controls/OrbitControls.js';

let scene, camera, renderer, controls, raycaster, mouse, tooltip;

// Initialize the 3D scene
function init() {
  scene = new THREE.Scene();
  renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(390, 390); // 10 less than the #viewer width and height
  renderer.setClearColor(0xdde5ff); // pale blue background
  // HTML color picker at https://www.w3schools.com/colors/colors_picker.asp
  const viewer = document.getElementById('viewer');
  viewer.appendChild(renderer.domElement);

  raycaster = new THREE.Raycaster();
  raycaster.near = 0.5;
  raycaster.params.Points.threshold = 0.01; // tolerance, adjust as needed
  mouse = new THREE.Vector2();
  tooltip = document.getElementById('tooltip');
  renderer.domElement.addEventListener('click', onMouseClick, false);

  // Camera setup
  camera = new THREE.PerspectiveCamera(45, viewer.offsetWidth / viewer.offsetHeight, 0.1, 1000);
  camera.updateProjectionMatrix();
  camera.position.set(40, 30, 40);
  camera.lookAt(scene.position);

  // OrbitControls for rotation, zoom and pan
  controls = new OrbitControls(camera, renderer.domElement);
  controls.enableDamping = true;
  controls.dampingFactor = 0.1;
  controls.enableZoom = true;
  controls.enablePan = true;

  // Axes helper
  scene.add(new THREE.AxesHelper(10));

  // Light sources, required for shading the cubes
  scene.add(new THREE.AmbientLight(0xa0a0a0)); // soft white ambient light
  const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
  directionalLight.position.set(15, 15, 15); // above and to the side of the scene
  scene.add(directionalLight);

  // Sprite labels near the end of each axis
  addSpriteLabel('X', 10, 0, 0);
  addSpriteLabel('Y', 0, 10, 0);
  addSpriteLabel('Z', 0, 0, 10);

  animate(); // start the render loop
}

// Display point coordinates in the tooltip on click
function onMouseClick(event) {
  const rect = renderer.domElement.getBoundingClientRect();
  mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  mouse.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(scene.children, true);
  if (intersects.length > 0 && intersects[0].object.geometry &&
      intersects[0].object.geometry.type === 'BoxGeometry') {
    const position = intersects[0].object.position;
    tooltip.style.left = `${event.clientX + 10}px`;
    tooltip.style.top = `${event.clientY + 10}px`;
    tooltip.style.display = 'block';
    tooltip.innerHTML = `X: ${position.x.toFixed(2)}, Y: ${position.y.toFixed(2)}, Z: ${position.z.toFixed(2)}`;
  } else {
    tooltip.style.display = 'none'; // hide the tooltip when clicking empty space
  }
}

// Draw a text label on a canvas and place it in the scene as a sprite
function addSpriteLabel(text, x, y, z) {
  const canvas = document.createElement('canvas');
  const context = canvas.getContext('2d');
  canvas.width = 256;
  canvas.height = 256;
  context.font = 'Bold 75px Arial';
  context.fillStyle = 'rgba(255,0,0,1)'; // red text
  context.textAlign = 'center';
  context.textBaseline = 'middle';
  context.fillText(text, canvas.width / 2, canvas.height / 2);
  const texture = new THREE.CanvasTexture(canvas);
  const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
  sprite.scale.set(10, 10, 1); // make the text larger and more readable
  sprite.position.set(x, y, z);
  scene.add(sprite);
}

// Animate the scene
function animate() {
  requestAnimationFrame(animate);
  controls.update(); // required each frame for damping to take effect
  renderer.render(scene, camera);
}

// Initialize the scene
init();

// Parse the selected file's text into points and add a unit cube at each point
document.getElementById('inputfile').addEventListener('change', function() {
  const fr = new FileReader();
  fr.onload = function() {
    const data = fr.result.trim();
    const points = data.split(/\s+/).map(coord => coord.split(',').map(Number));
    console.log(points); // print the parsed points to the console
    document.getElementById('output').textContent = fr.result; // display the raw data for QC
    points.forEach(point => {
      const cubeGeometry = new THREE.BoxGeometry(1, 1, 1);
      const cubeMaterial = new THREE.MeshStandardMaterial({ color: 0x00ff00, transparent: true, opacity: 0.75 }); // shaded sides
      const cube = new THREE.Mesh(cubeGeometry, cubeMaterial);
      cube.position.set(point[0], point[1], point[2]);
      scene.add(cube);
    });
  };
  fr.readAsText(this.files[0]);
});
</script>
</body>
</html>
NORM MARK is a Philadelphia native who earned a BS degree in geology from West Virginia University. He also holds an MS degree in geophysics from Florida State University and did some doctoral work at the University of Hawaii. Some of his early work was with the Army Corps of Engineers in Jacksonville, Fla. Mr. Mark worked many years with seismic service companies and major oil companies, some of which, as he says, no longer exist: Arco, Marathon, Unocal, Petronas, CGG, Nutech, PGS and Paradigm. He also spent a few years doing freelance consulting before retiring. Mr. Mark says that during one oil bust period around 9/11/2001, he worked for Sprint in Overland Park, Kans., as a Unix guru, where he “got a good grip on coding, especially building GUIs using Perl/Tk.” He believes the work detailed in this article can help many oil exploration companies by speeding up the drilling cycle and by providing much better drilling-target images. norm_mark@hotmail.com / +1 858 2848777