Digging Deep
Volume 31, Issue 9 (Sept. 2008)

For those accustomed to thinking in terms of realistic 3D models and animated movies, there would seem to be little connection between a graphics card and a deep-water oil well. Even less intuitive is the notion that the same technologies powering today’s most immersive video games might lead to increased energy independence tomorrow. But as oil and gas companies seek new ways to work with the ever-larger datasets involved in exploration and seismic analysis, the industry is being transformed through the processing power of general-purpose GPU computing, as well as the increasing graphics memory and GPU density now available for visualization. It’s a long way from the graphics industry’s roots, but for a geoscientist, an untapped energy resource can make for the prettiest picture of all.

There are several factors driving the increased volume of data used in oil exploration. For one, technological advances in recent years have made it feasible to perform higher-resolution surveys, which are used to identify the kinds of geological formations and seismic activity associated with new oil and gas reserves, and the areas being measured have grown as well.

At the same time, the accumulation of seismic data over the decades has made it possible to merge existing datasets into “megamerges,” 3D surveys spanning as much as 20,000 square kilometers. This broader perspective helps geoscientists understand features of the larger landscape, particularly offshore, which smaller, lower-resolution surveys might obscure. Finally, repeated surveys over older fields can reveal the flow of formations over time, providing even greater insight—while multiplying the amount of data being visualized.

This is a timely development; as higher oil prices have made it economical to pursue reserves that are harder to reach, oil and gas companies are re-sampling previously explored fields at higher resolution to discover deposits that may have been missed. They are also measuring ground shifts over time in fields under production to determine where pumping efforts are most effective. Similarly, new seismic tools and real-time imaging software offer the potential to greatly increase available global reserves. Meanwhile, as offshore exploration and development move into deeper water, the higher cost of drilling makes the accuracy of seismic analysis critically important.

Between new sensor technologies and the accumulation of data, the acquisition of these larger surveys poses less of a challenge than making sense of them once they’re in hand. Processing the vast amounts of seismic data being gathered—often 100gb or more—requires either horsepower on the order of a supercomputer or an unlimited amount of time for number crunching. Even then, empirical analysis of the highly complex results in numerical form would be virtually impossible. Instead, geophysicists rely heavily on data visualization to spot evidence of promising reserves.

In an ideal world, this would mean being able to visualize an entire regional dataset at high resolution in a single view. Geophysicists could perform detailed multi-attribute analysis with extreme precision while retaining the full context needed to identify large-scale trends. Better decisions could be made more quickly, and oil and gas companies could drill more productive wells.


Geophysicists require enormous computing power to visualize the large datasets generated in oil and gas exploration when examining seismic and other related information. Software tool maker Headwave has solutions that put GPU computing to work in this field.

Until recently, though, graphics processing power had failed to keep pace with the rapid growth in seismic datasets. As Dallas Dunlap, a research scientist associate at the University of Texas’s Bureau of Economic Geology, explains: “As you load these big surveys and analyze the data based on different attributes, you can display seismic volumes, such as coherency: How similar is the data to what’s directly adjacent to it? Where are the faults and channels in the formations? But three or four years ago, even a high-end Windows or Solaris desktop workstation would be limited to 2gb to 4gb of memory, so that’s all the data you could work with. Then, as computers got more processors and 64gb and 128gb of memory, you could load that 40gb megamerged seismic survey into the computer, but graphics memory became the bottleneck.”
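
Dunlap’s coherency example can be made concrete with a short sketch. The following is not Landmark, Headwave, or Bureau code; it is a minimal illustration written in today’s CUDA C, with assumed array layout, dimensions, and names, of how a per-sample similarity attribute might be computed in parallel by comparing each trace with its neighbor across a 3D seismic volume.

```cuda
// coherence_sketch.cu -- illustrative only, not production code.
// Computes a crude "similarity to the adjacent trace" attribute for every
// sample of a 3D seismic volume stored trace-major as amp[iy][ix][it].
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

__global__ void similarityKernel(const float *amp, float *coh,
                                 int nx, int ny, int nt, int halfWin)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;   // crossline index
    int iy = blockIdx.y * blockDim.y + threadIdx.y;   // inline index
    int it = blockIdx.z * blockDim.z + threadIdx.z;   // time/depth sample
    if (ix >= nx - 1 || iy >= ny || it >= nt) return; // last crossline has no right neighbor

    // Zero-lag normalized cross-correlation between a short vertical window
    // of this trace and the neighboring trace in the crossline direction.
    float num = 0.f, e0 = 0.f, e1 = 0.f;
    for (int k = -halfWin; k <= halfWin; ++k) {
        int t = min(max(it + k, 0), nt - 1);          // clamp at volume edges
        float a = amp[((size_t)iy * nx + ix)     * nt + t];
        float b = amp[((size_t)iy * nx + ix + 1) * nt + t];
        num += a * b;  e0 += a * a;  e1 += b * b;
    }
    float denom = sqrtf(e0 * e1) + 1e-12f;
    coh[((size_t)iy * nx + ix) * nt + it] = num / denom; // ~1 = similar; low = fault or channel edge
}

int main()
{
    const int nx = 64, ny = 64, nt = 256;             // toy survey dimensions
    const size_t n = (size_t)nx * ny * nt;
    float *amp = (float*)malloc(n * sizeof(float));
    float *coh = (float*)malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i) amp[i] = sinf(0.05f * i);  // synthetic data

    float *dAmp, *dCoh;
    cudaMalloc(&dAmp, n * sizeof(float));
    cudaMalloc(&dCoh, n * sizeof(float));
    cudaMemcpy(dAmp, amp, n * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(8, 8, 8);
    dim3 grid((nx + 7) / 8, (ny + 7) / 8, (nt + 7) / 8);
    similarityKernel<<<grid, block>>>(dAmp, dCoh, nx, ny, nt, 5);
    cudaMemcpy(coh, dCoh, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("similarity at (32,32,128) = %f\n", coh[((size_t)32 * nx + 32) * nt + 128]);
    cudaFree(dAmp); cudaFree(dCoh); free(amp); free(coh);
    return 0;
}
```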

Unable to process or visualize an entire dataset at high resolution, geophysicists were forced to work with smaller subsets of data or settle for lower-resolution views that could misrepresent an area’s true potential. This trade-off between detail and context both slowed analysis and reduced its precision, limiting the amount of data that could be explored and the scenarios that could be evaluated. That is, until advances in general-purpose GPU computing, in tandem with innovative new applications of graphics memory for visualization, provided the quantum leap the industry had been waiting for.

From Theory to Practicality
After several years of growing interest in the theoretical potential of graphics processing power for general-purpose applications, one of the first practical demonstrations of GPU computing came in the early 2000s with the development of a technique called Digital Breast Tomosynthesis (DBT) by the Breast Imaging Division in the Department of Radiology at Massachusetts General Hospital (MGH). By constructing a 3D map of the breast based on as many as 25 views, each taken from a different vantage point along an arc, DBT helps radiologists see tumors that might be obscured on 2D scans. Based on these views, a computer estimates the location of structures throughout the breast using Maximum Likelihood Expectation Maximization, an iterative reconstruction algorithm co-developed by Brandeis University and MGH.

Although tomosynthesis as a general concept dates back to the 1960s, the lack of adequate processing power rendered it impractical. Each of the images used in the typical DBT scan comprises an 1800x2304 array of pixels, each only 100 microns in size—all of which must be read out in a third of a second to minimize patient movement. MGH’s original attempts to synthesize DBT data with a standard PC took all night to process a single breast; even a parallel processing system of 34 PCs was unable to deliver the speed and efficiency needed for real clinical utility. To make DBT practical, Mercury Computer Systems, Inc., a data transformation specialist working on the project, first optimized the Maximum Likelihood Expectation Maximization algorithm, then tapped into Nvidia Quadro GPUs for the needed processing power.
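
For readers who want a feel for the algorithm itself, the sketch below shows one Maximum Likelihood Expectation Maximization iteration for a small, generic dense system. It is written in CUDA C purely for readability, a convenience the MGH and Mercury teams did not yet have, as described next, and it is not the project’s reconstruction code; the system sizes, array names, and random test data are all assumptions.

```cuda
// mlem_sketch.cu -- illustrative only; not the MGH/Mercury implementation.
// One MLEM iteration for a small dense system y ~ A*x, where x is the image
// and y the measurements:  x_j <- (x_j / sum_i A_ij) * sum_i A_ij * y_i / (A x)_i
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void forwardProject(const float *A, const float *x, float *Ax,
                               int nDet, int nVox)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // detector index
    if (i >= nDet) return;
    float s = 0.f;
    for (int j = 0; j < nVox; ++j) s += A[i * nVox + j] * x[j];
    Ax[i] = s + 1e-12f;                               // avoid divide-by-zero
}

__global__ void updateImage(const float *A, const float *y, const float *Ax,
                            float *x, int nDet, int nVox)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;    // voxel index
    if (j >= nVox) return;
    float corr = 0.f, sens = 0.f;
    for (int i = 0; i < nDet; ++i) {
        float a = A[i * nVox + j];
        corr += a * y[i] / Ax[i];                     // backproject the data/model ratio
        sens += a;                                    // sensitivity (column sum)
    }
    x[j] *= corr / (sens + 1e-12f);                   // multiplicative MLEM update
}

int main()
{
    const int nDet = 512, nVox = 256, iters = 20;     // toy problem sizes
    size_t szA = (size_t)nDet * nVox * sizeof(float);
    float *A = (float*)malloc(szA);
    float *y = (float*)malloc(nDet * sizeof(float));
    float *x = (float*)malloc(nVox * sizeof(float));
    for (int k = 0; k < nDet * nVox; ++k) A[k] = (float)rand() / RAND_MAX;
    for (int i = 0; i < nDet; ++i) y[i] = (float)rand() / RAND_MAX * nVox;
    for (int j = 0; j < nVox; ++j) x[j] = 1.f;        // uniform initial estimate

    float *dA, *dy, *dx, *dAx;
    cudaMalloc(&dA, szA);                   cudaMalloc(&dy, nDet * sizeof(float));
    cudaMalloc(&dx, nVox * sizeof(float));  cudaMalloc(&dAx, nDet * sizeof(float));
    cudaMemcpy(dA, A, szA, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, nDet * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, x, nVox * sizeof(float), cudaMemcpyHostToDevice);

    for (int it = 0; it < iters; ++it) {              // iterate: project, then update
        forwardProject<<<(nDet + 127) / 128, 128>>>(dA, dx, dAx, nDet, nVox);
        updateImage<<<(nVox + 127) / 128, 128>>>(dA, dy, dAx, dx, nDet, nVox);
    }
    cudaMemcpy(x, dx, nVox * sizeof(float), cudaMemcpyDeviceToHost);
    printf("x[0] after %d iterations: %f\n", iters, x[0]);

    cudaFree(dA); cudaFree(dy); cudaFree(dx); cudaFree(dAx);
    free(A); free(y); free(x);
    return 0;
}
```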

Mercury began by mapping the Maximum Likelihood Expectation Maximization algorithm to a GPU based on Nvidia Quadro professional graphics processor technology, designed for mission-critical enterprise applications, which provides a unique programmable rendering pipeline. At the time, the only way to harness the GPU was through the OpenGL cross-platform application programming interface, an approach that required quite a bit of skill. With no interface for mathematics, the GPU could be awakened only by a graphics command, forcing engineers to think in graphics analogies. Instead of simply multiplying two arrays, engineers would need to tell the GPU to draw a triangle in a given location, which would kick off the computation, providing a result in the form of a value stored in a pixel location.


GPU computing, along with Headwave software, creates a visual geological picture of inline, crossline, and time/depth information used in oil and gas exploration.

While there was no performance penalty associated with this indirect approach—the GPU had gigaFLOPS of power to spare—a new bottleneck was created on the human side. Not only did it take longer to write the program, but few programmers had both the graphics capabilities and the medical-industry expertise to undertake the project. Still, the initial results—a 60X acceleration of the original single-system solution, reducing the time needed by a single Nvidia GPU to process a DBT scan to under five minutes—were compelling enough to inspire further efforts to make the GPU’s power more broadly accessible.

During this time, growing interest in the promise of GPU computing had led to the introduction of high-level abstractions, such as BrookGPU from Stanford, StreamIt from MIT, and other methods for translating C code into graphics operations. For its part, Nvidia developed the Compute Unified Device Architecture (CUDA), a technology that enables programmers to code for the Nvidia GPU directly in C and gain unfettered access to its native instruction set and memory. This made the full potential of the GPU available not only to the relatively small number of people who understand OpenGL or graphics programming, but also to the far larger population of C programmers across every industry. Popular Science recognized Nvidia’s efforts by including the CUDA C compiler and software development kit (SDK) in its 2007 Top 100 Innovations of the Year and honoring it with a Best of What’s New award.
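
To make the contrast with the OpenGL-era workarounds concrete, the array multiplication described earlier reduces, in CUDA C, to a short kernel and a few memory transfers. This is a generic, textbook-style sketch rather than code from any of the projects discussed here.

```cuda
// multiply_sketch.cu -- generic CUDA C example, not project code.
// Element-wise multiplication of two arrays, expressed directly as math
// rather than as a triangle draw call and a pixel read-back.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void multiply(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per element
    if (i < n) c[i] = a[i] * b[i];
}

int main()
{
    const int n = 1 << 20;                            // one million elements
    size_t bytes = n * sizeof(float);
    float *a = (float*)malloc(bytes), *b = (float*)malloc(bytes), *c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 2.0f; b[i] = 0.5f * i; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    multiply<<<(n + 255) / 256, 256>>>(da, db, dc, n); // launch enough 256-thread blocks
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %f (expect %f)\n", c[10], 2.0f * 0.5f * 10);
    cudaFree(da); cudaFree(db); cudaFree(dc); free(a); free(b); free(c);
    return 0;
}
```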

Responding to customer requests for a product that could be more readily commercialized, in 2007 Nvidia unveiled the G80 Series Graphics Processors, the first line of GPUs designed explicitly to support CUDA, simplifying the customization process by allowing developers writing in C to tap directly into the massive parallelism of the GPU.

A New Breed in Oil and Gas
The shift from a traditional computing architecture to high-performance GPUs has the potential to fundamentally change workflows by speeding some operations by one or two orders of magnitude: Processes that once took 10 minutes could potentially be completed in one minute. New oil and gas industry solutions are now being built on Nvidia’s Tesla, a dedicated, high-performance GPU computing solution with the industry’s first massively multi-threaded architecture using a 128-processor computing core. Optimized for GPUs, the latest generation of software is also cluster-aware, making it possible for multiple work groups to focus on the same data space without stepping on each other’s toes.

Headwave, a Houston-based provider of software tools for seismic analysis, has developed solutions that put GPU computing to work on the front lines of exploration: in the field—or, more precisely, on the water. The company has developed compression algorithms for the processing of “pre-stack” information: the raw data collected by seismic sensors in individual traces, prior to being averaged out (or “stacked”) with additional traces to find common midpoints and suppress orthogonal information. “You wind up with a lot of data,” says Steve Briggs, Headwave’s vice president of integration and deployment. “For quality control, you have to perform exceedingly complicated computations to look for subtle things like pressure waves, shear waves, and other complex physics of rocks, gases, and porous materials; it all has to be accounted for prior to stacking.”
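
Stacking itself is conceptually simple, which makes the scale of the pre-stack quality-control workload all the more striking. The sketch below, with an assumed data layout and toy survey sizes rather than anything from Headwave’s products, shows how a GPU might average each common-midpoint gather into a single stacked trace.

```cuda
// stack_sketch.cu -- illustrative only; data layout and sizes are assumed.
// Stacks pre-stack traces: each common-midpoint (CMP) gather of `fold`
// traces is averaged sample-by-sample into one post-stack trace.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void stackGathers(const float *prestack, float *stacked,
                             int nGathers, int fold, int nt)
{
    int g = blockIdx.y;                                // gather (CMP) index
    int t = blockIdx.x * blockDim.x + threadIdx.x;     // time sample index
    if (g >= nGathers || t >= nt) return;

    float sum = 0.f;
    for (int k = 0; k < fold; ++k)                     // sum across the gather
        sum += prestack[((size_t)g * fold + k) * nt + t];
    stacked[(size_t)g * nt + t] = sum / fold;          // averaged (stacked) sample
}

int main()
{
    const int nGathers = 1024, fold = 48, nt = 1000;   // toy marine-survey sizes
    size_t nPre = (size_t)nGathers * fold * nt, nPost = (size_t)nGathers * nt;
    float *pre  = (float*)malloc(nPre  * sizeof(float));
    float *post = (float*)malloc(nPost * sizeof(float));
    for (size_t i = 0; i < nPre; ++i) pre[i] = (float)rand() / RAND_MAX - 0.5f;

    float *dPre, *dPost;
    cudaMalloc(&dPre,  nPre  * sizeof(float));
    cudaMalloc(&dPost, nPost * sizeof(float));
    cudaMemcpy(dPre, pre, nPre * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(256), grid((nt + 255) / 256, nGathers);
    stackGathers<<<grid, block>>>(dPre, dPost, nGathers, fold, nt);
    cudaMemcpy(post, dPost, nPost * sizeof(float), cudaMemcpyDeviceToHost);

    printf("stacked sample [0][0] = %f\n", post[0]);
    cudaFree(dPre); cudaFree(dPost); free(pre); free(post);
    return 0;
}
```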

Given that pre-stack information is typically 30 to 50 times the volume of post-stack information, or terabytes of data in all, the ability to put vastly more processing power—the equivalent of a supercomputer—onboard a survey boat makes a tremendous difference. “We are able to put serious supercomputing out there on the seas. We can quickly look through the data and find problems to see where we might need to re-shoot while we’re still out there, instead of finding the errors only after sailing from Indonesia back to Amsterdam,” says Briggs.

If a promising structure, such as a salt dome, is identified, the boat can sail outward in a spiral to better resolve the dome, or place hydrophones in a circle around it. Sensors can also be used to capture 4D data that shows how the contents of a volume are changing over time. Once a producing zone has been identified, GPU-based systems help determine whether gas is present and whether high pressures might exist—vital information for avoiding blowouts or fires on the rig, the worst thing that can happen in 7000 feet of water.


GPU computing has proven its mettle for processing extreme datasets, including raw seismic data such as this “pre-stacked” (non-averaged) information.

Data Comes Alive
With the rise of general-purpose GPU computing, the oil and gas industry had a solution for the first part of its challenge: speeding calculations to convert raw seismic data into X, Y, Z points with property values. Meanwhile, oil and gas industry technology providers, like Landmark, Paradigm, and Schlumberger, were tackling the second part of the challenge, moving forward with the development of a new breed of visualization systems with the graphics processing power to visualize large-scale surveys without compromising on context or detail.

One such system, a Landmark solution incorporating a Verari Systems E&P 7500 visualization server and Landmark GeoProbe software, can drive powerful displays, such as the Sony SXRD 4k projector, with a resolution of eight million pixels (four times the resolution of a standard HDTV projector) as well as the highest-resolution LCD monitors now available. By enabling higher resolution on larger screens than was previously possible, the system provides a wider field of view while making it possible to see the details in any exploration prospect more readily. Designed specifically to enable interactive interpretation using multi-attribute and multi-volume seismic data, well data, cultural data, and reservoir models, the Landmark visualization system enables geoscientists to drill deep into their datasets while maintaining a view of the big picture.

Built to handle the most demanding industrial applications, the Landmark visualization system comes with plenty of horsepower, powered by up to eight AMD Opteron processors and 128gb of memory. But the system’s key innovation lies in its graphics technology, which uses a unique approach to overcome the physical constraints that have limited the graphics processing power of traditional visualization systems.

As a dataset grows larger, a visualization system requires greater graphics processing power to translate its full depth and detail into pixels on a screen. But as graphics cards—such as those from AMD, Nvidia, and, soon, Intel—become more powerful, they consume more space and power and generate more heat, so they are constrained by the physical capacity of the server or workstation in which they are installed. Solutions such as the Nvidia Quadro Plex visual computing system (VCS) and the recently unveiled Boxx VizBoxx change this picture by literally thinking outside the box.

Housed within a stand-alone chassis rather than inside the workstation or server, the Quadro Plex, for instance, allows power, space, and heat to be managed more effectively. Plugged into existing PCs, workstations, or servers, the system acts as a supercharged graphics card to deliver the scalability needed for large-scale, high-definition visualization. Instead of being limited to two graphics cards, a single workstation can now draw on four to eight high-end cards, dramatically increasing pixel-processing power. This broad configurability enables the development of server-based solutions that incorporate multiple processors and memory modules to deliver super high-end performance and drive a large number of projectors at high resolution.

As part of a high-end visualization system, the Quadro Plex delivers a 20X increase in density when compared to traditional graphics processing solutions. It provides rendering power of up to 80 billion pixels per second and seven billion vertices per second, with resolutions as high as 148 megapixels across 16 synchronized digital-output channels or eight HD SDI channels, and it can be configured as a single system or added to a visualization cluster for immense scalability. A unified architecture dynamically allocates compute, geometry, shading, and pixel processing power to deliver optimized performance.


Today, the lower cost of 3D visualization tools makes it possible for smaller companies to use the viz technology.

Back at the University of Texas Bureau of Economic Geology, scientists like Dallas Dunlap are already using these systems to visualize large datasets in ways that once seemed impossible. “We can take seismic data and do an X, Y, Z calculation on every point in the volume to visualize it, then take out certain amplitudes so we can see where it goes from fast rock to slow rock, or vice versa,” he explains. “This lets us see the true internal architecture, the channel bodies floating in space, using 3D goggles to get the depth perception we need to understand what we’re seeing. Visualizing truly complex seismic geomorphologies across regional surveys of thousands of square kilometers, in real time, in 3D—it just wouldn’t have been possible before.”
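
The step Dunlap describes as taking out certain amplitudes amounts to culling low-energy voxels before rendering. The following is a minimal, assumed sketch, not Bureau of Economic Geology code: voxels whose amplitude falls inside a chosen quiet band are zeroed so that strong reflectors such as channel bodies stand out in the rendered volume.

```cuda
// cull_sketch.cu -- illustrative only; thresholds and names are assumptions.
// Amplitude culling for volume visualization: keep voxels whose amplitude
// lies outside a "quiet" band and zero the rest before rendering.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

__global__ void cullAmplitudes(const float *amp, float *visible,
                               size_t n, float lo, float hi)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float a = amp[i];
    // Keep strong positive or negative amplitudes; blank the low-energy band.
    visible[i] = (a <= lo || a >= hi) ? a : 0.0f;
}

int main()
{
    const size_t n = 1 << 22;                          // ~4 million voxels (toy volume)
    float *amp = (float*)malloc(n * sizeof(float));
    float *vis = (float*)malloc(n * sizeof(float));
    for (size_t i = 0; i < n; ++i) amp[i] = sinf(0.01f * i);  // synthetic amplitudes

    float *dAmp, *dVis;
    cudaMalloc(&dAmp, n * sizeof(float));
    cudaMalloc(&dVis, n * sizeof(float));
    cudaMemcpy(dAmp, amp, n * sizeof(float), cudaMemcpyHostToDevice);

    int blocks = (int)((n + 255) / 256);
    cullAmplitudes<<<blocks, 256>>>(dAmp, dVis, n, -0.8f, 0.8f);
    cudaMemcpy(vis, dVis, n * sizeof(float), cudaMemcpyDeviceToHost);

    size_t kept = 0;
    for (size_t i = 0; i < n; ++i) if (vis[i] != 0.0f) ++kept;
    printf("kept %zu of %zu voxels\n", kept, n);
    cudaFree(dAmp); cudaFree(dVis); free(amp); free(vis);
    return 0;
}
```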

Cheaper, Smarter Exploration
The low cost of a visualization system such as the Landmark implementation—comparable to the annual maintenance cost of an SGI Onyx system alone—makes it possible for oil and gas companies to rethink the way they deploy these capabilities within their organization. While the cost of a high-end display system remains significant for the time being, in principle, enough graphics processing power can be added to a given workstation to support a large number of projectors and a correspondingly higher number of pixels. By providing high-resolution visualization in a team room or collaboration room scenario, companies could free geophysicists from the need to schedule time in a shared visualization center. The system can also enable other applications to run in a high-resolution display environment, such as asset team software tools used to create field development plans.

In addition to incrementally lowering the cost of exploration, general-purpose GPU computing and advances in visualization technology are helping change the dynamics of the oil and gas industry for the better. “A $50,000 machine can now do what it took a $2 million system to do just a few years ago,” says Dunlap. “Having these capabilities at the desktop level lets more people look at these surveys and get a better understanding of geologic processes than in the past. Younger professionals can learn faster, so it’s not just a matter of better tools; it’s also helping us get better scientists.”

While the easiest resources to exploit have already been exhausted, two-thirds of the oil already identified is still in the ground, both in the deep continental shelves off the East and West Coasts and in existing producing fields too small for major companies to bother with. The latter are the province of the smaller independent producers that operate at a scale that can make these small pay areas profitable.

“It won’t be a fantastic amount of production, but for a small company, it’s a living,” says Briggs. “And now, even the smallest companies can afford the kind of advanced seismic processing technology that used to be only for the majors.” The impact of a single such company on the domestic oil supply is minimal—but there are 4500 such companies in Houston alone. “It all adds up, and it’s all new production. Over the next two years, you’re going to see that much more oil and gas produced in the US, which is one way of improving our energy independence,” he adds. “It’s a nice feeling.”

Dan Janzen is a freelance writer in the graphics industry. He can be reached at jdjanzen@panix.com.