HPC-Driven Visualization Market: The Billion-Dollar Opportunity
Rob Johnson
October 24, 2019

Our ability to see is our dominant form of information acquisition. The human brain allocates roughly 30 percent of its capacity to processing and interpreting the imagery around us. Our highly adapted minds breeze through the complex process of mentally rendering the objects we see. However, accurately simulating those images using high-performance computing (HPC) is a much more arduous task.

Top image: Powerful HPC clusters with the capacity to perform software-defined visualization in situ enable high-fidelity visualization with scalable CPU rendering. (Rendered by Corona with Embree. Property of Jeff Patton. Lamborghini mesh from Turbosquid.)

The original vector-based computer monitors of the 1960s weren’t much to look at – literally. However, compute advancements from the 1980s through the 2000s accelerated the delivery of more realistic simulations, more detailed scientific visualization, and seemingly realistic objects. As a result, visualization has increasingly carved out a role in furthering scientific discovery, bringing better products to market faster, and entertaining us, too.

Visualization also aids in making complex ideas or objects understandable. With it, scientists and engineers gain a deeper comprehension of a model. Visualization allows them to zoom in and out or rotate a simulated object. In doing so, researchers can spot anomalies in the model that may not be evident from numerical calculations alone.

Rasterization and Ray Tracing

Creating realistic-looking objects is a complicated process. A rendering engine must accommodate all the reflections, shadows, transparencies, and ambient subtleties a human eye perceives to convince us an object is authentic. As surface rendering of objects yields to next-generation three-dimensional volume rendering, computer-rendered imagery gains significant ground in its ability to convey realism.

Today, two major visualization approaches take center stage: rasterization and ray tracing. Rasterization, commonly implemented with OpenGL, is the process of converting vector-based geometry into colored pixels. While the rasterization approach can process information quickly, the level of detail it offers cannot compete with visualization based on ray tracing.
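
To make the difference concrete, here is a minimal, self-contained C++ sketch of the rasterization idea, independent of any specific API such as OpenGL: a triangle already projected to screen space is covered pixel by pixel using edge-function tests. The scene, resolution, and names are purely illustrative.

```cpp
#include <cstdio>
#include <vector>

// Minimal illustrative rasterizer: covers one 2D triangle with '#' characters.
// Real pipelines (e.g., OpenGL) also handle projection, depth, interpolation, and shading.
struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (a->b, a->c); the sign tells which side c lies on.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int W = 40, H = 20;
    std::vector<char> framebuffer(W * H, '.');

    // Screen-space triangle (a real renderer would project 3D vertices first).
    Vec2 v0{4, 2}, v1{36, 8}, v2{12, 18};

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};  // sample at the pixel center
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            // The pixel is inside the triangle when all edge functions share the same sign.
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0))
                framebuffer[y * W + x] = '#';
        }
    }

    for (int y = 0; y < H; ++y)
        printf("%.*s\n", W, &framebuffer[y * W]);
    return 0;
}
```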

As the name implies, ray tracing is the process of simulating the complex interaction between objects and light rays. It requires an understanding of optics, the branch of physics that studies the behavior and properties of light. For example, when we see a parked Bentley with the sun shining on it, our mind distinguishes many visual subtleties unconsciously. If the car is painted blue, shadowing can alter the color’s appearance on various parts of the vehicle. Reflections of the sun off the windshield or chrome accents may also create secondary reflections on other parts of the car. Light bouncing off the car generates naturally occurring visual complexities and nuances that reach our eyes. When our brain makes sense of all that information, we perceive an actual three-dimensional vehicle.
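
At its core, that process starts by shooting a ray from the eye into the scene and finding the nearest surface it strikes; shadows and reflections come from spawning further rays at the hit point. The sketch below shows the simplest building block of that process, a ray-sphere intersection test, with purely illustrative values.

```cpp
#include <cmath>
#include <cstdio>

// Minimal illustrative ray-sphere intersection, the basic building block of ray tracing.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Returns true and the distance t along the ray if the ray hits the sphere.
bool intersectSphere(const Vec3& origin, const Vec3& dir,
                     const Vec3& center, float radius, float& t) {
    Vec3 oc = origin - center;
    float b = 2.0f * oc.dot(dir);           // assumes dir is normalized
    float c = oc.dot(oc) - radius * radius;
    float disc = b * b - 4.0f * c;
    if (disc < 0.0f) return false;          // the ray misses the sphere
    t = (-b - std::sqrt(disc)) * 0.5f;      // nearest of the two intersections
    return t > 0.0f;
}

int main() {
    // One primary ray shot from the "eye" straight down the z axis.
    Vec3 eye{0, 0, 0}, dir{0, 0, 1};
    Vec3 center{0, 0, 5};
    float t;
    if (intersectSphere(eye, dir, center, 1.0f, t))
        printf("hit at distance %.2f\n", t);   // a full tracer would now shade and spawn more rays
    else
        printf("miss\n");
    return 0;
}
```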

Rendering of groundwater flow using Paraview OpenGL-based visualization (rasterization). (Dataset from Florida International University.)


Ray tracing using OSPRay delivers more detailed and useful visualization. (Dataset from Florida International University.)

OSPRay and Embree

Intel has a couple of solutions that help speed ray tracing workloads. The first, OSPRay, is an open-source, scalable, and portable ray tracing engine that enables high-fidelity visualization with scalable CPU rendering. As a result, photo-quality 3D objects render more quickly. The second, Embree, is Intel’s library of ray tracing kernels, which uses parallelized algorithms developed to optimize ray tracing performance on Intel Xeon processors.
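
As a rough illustration of how those kernels are used, the following sketch is based on the Embree 3 C API as documented around the time of writing (structure and field names may differ between releases): it builds a one-triangle scene and traces a single ray against it. The scene contents are purely illustrative.

```cpp
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main() {
    // Create a device and a scene with default settings.
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // One triangle: three vertices and one index triple in Embree-managed buffers.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* verts = (float*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    const float v[] = {-1, -1, 5,   1, -1, 5,   0, 1, 5};   // illustrative geometry
    for (int i = 0; i < 9; ++i) verts[i] = v[i];
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);   // builds the acceleration structure

    // A single primary ray from the origin along +z.
    RTCRayHit rayhit;
    rayhit.ray.org_x = 0; rayhit.ray.org_y = 0; rayhit.ray.org_z = 0;
    rayhit.ray.dir_x = 0; rayhit.ray.dir_y = 0; rayhit.ray.dir_z = 1;
    rayhit.ray.tnear = 0.0f;
    rayhit.ray.tfar = std::numeric_limits<float>::infinity();
    rayhit.ray.mask = 0xFFFFFFFF;
    rayhit.ray.time = 0.0f;
    rayhit.ray.flags = 0;
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rayhit.hit.instID[0] = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext context;
    rtcInitIntersectContext(&context);
    rtcIntersect1(scene, &context, &rayhit);

    if (rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID)
        printf("hit triangle at t = %f\n", rayhit.ray.tfar);  // tfar now holds the hit distance
    else
        printf("miss\n");

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
```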

Due to the heavy workload ray tracing imposes on a system, it may not be the quickest way to render graphics. However, it does enable the highest level of photorealism, which scientists and researchers need for greater insight from the models they create. For this reason, researchers rely on HPC infrastructure for complex simulation endeavors like visualizing the galaxies around us, creating 3D renderings of molecules, "seeing" underground structures based on seismic data, analyzing weather patterns, and much more. 


BMW image rendered by Corona with Embree. (Photo property of Jeff Patton. Geometry mesh created by RexFu @ Grabcad.)

Visualizing the Future

Industries find increasing opportunities to put visualization to work in furthering their business goals. For many years now, the film industry has employed computer-aided visualization technology to combine real people with realistic-looking fantasy creatures. New usage scenarios emerge and expand each year.

For example, large stores use visualization capabilities to create product images for their catalogs because the process is faster and easier than traditional photography, and it gives them control over the final look of the products. In the video gaming industry, augmented reality (AR) and virtual reality (VR) allow a player to immerse themselves in increasingly realistic settings, enhancing the user’s experience.

Major manufacturers also rely on visualization technologies for computer-aided design (CAD) and computer-aided engineering (CAE) to assist in developing new products. Because fewer physical prototypes are needed, designers can get their products to market faster than ever before, and they can provide their customers with photorealistic animations of how their products might behave in real-world conditions. 

Other fields such as architecture and construction benefit from visualization for 3D renderings of proposed building designs. Developing photorealistic animations of architectural layouts through AR and VR helps architectural firms win contracts faster with customers who want to see the architectural plans as close to reality as possible.

In geosciences, engineers collect massive amounts of data from geological surveys or sub-surface mapping. Visualization approaches, in turn, can use these vast datasets to form realistic 3D renderings of oil reserves deep underground. By pinpointing ideal locations for oil drilling, companies optimize placement of their oil rigs to extract the maximum amount of oil while minimizing the external environmental impact.

Benefits like these fuel Jon Peddie Research’s estimate of 2.1 million users of 3D rendering software and its projection that ray tracing usage will grow to almost a quarter of a billion dollars by 2023.

More than Meets the Eye

Companies and scientific institutions considering a move to HPC systems to power their visualizations should first consider their use case. Some usage scenarios require real-time insights; in other instances, speed is secondary to the quality of the rendered images. For instance, high-quality rendered images for movies can be processed at a slower pace, as available compute power permits, if the outcome is not time-sensitive. However, in use cases such as rapid prototyping, a manufacturing company wishing to remain competitive must iterate products quickly. In a situation like this, a substantial HPC system on the back end can accelerate design and iteration to get new products to market faster.

Past approaches to visualization involving HPC required a standalone cluster for modeling and simulation, followed by a second dedicated system to manage the visualization process. This supplemental system typically accelerated the applicable software using a “hardware visualization” approach involving multiple GPUs. With the images rendered, the third and final step in the process – viewing the resulting imagery through post-processing – was handled by a desktop computer or mobile device.

Today’s approach, however, removes the middleman. More powerful clusters have the extended capacity to perform software-defined visualization in situ on a single HPC system. By minimizing data transfer, inputs, and outputs, this approach allows the system to work with much larger datasets and avoid past GPU-based constraints. Visualization conducted this way also saves budget, reduces the time required for systems management, and lessens power requirements by centralizing the visualization process on a single system.
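
Conceptually, the in-situ pattern looks something like the sketch below: each simulation rank renders the data it already holds in memory, and partial images are composited across the ranks rather than shipping raw data to a separate visualization cluster. The MPI calls are standard; the simulation and rendering stand-ins are hypothetical placeholders, and a real pipeline would use a renderer such as OSPRay with depth-aware compositing.

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

// Illustrative in-situ visualization loop: simulate, render locally, composite.
static void simulateStep(std::vector<float>& field, int rank) {
    for (auto& v : field) v = 0.001f * rank;           // placeholder "physics"
}

static void renderLocal(const std::vector<float>& field, std::vector<float>& image) {
    for (size_t i = 0; i < image.size(); ++i)           // placeholder "shading" of
        image[i] = field[i % field.size()];             // the rank-local data
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int pixels = 640 * 480;
    std::vector<float> field(1024), local(pixels), composite(pixels);

    for (int step = 0; step < 10; ++step) {
        simulateStep(field, rank);                      // simulation on this node
        renderLocal(field, local);                      // render the local slice in place
        // Composite partial images on rank 0 instead of moving raw data off-node.
        // (Real sort-last compositing is depth-aware; a sum keeps the sketch simple.)
        MPI_Reduce(local.data(), composite.data(), pixels,
                   MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("step %d: composited frame from %d ranks\n", step, size);
    }

    MPI_Finalize();
    return 0;
}
```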

The emergence of exascale computers, like Argonne National Laboratory’s Aurora system scheduled for deployment in 2020, will offer unprecedented capability in one environment: the ability to mix simulation and artificial intelligence and to combine workloads in meaningful ways. Converged workloads such as AI, simulation, and 3D modeling will help scientists complete endeavors more quickly than ever before. With exascale, efforts like modeling neural interactions within the human brain or designing safe, magnetically confined fusion reactors become possible.

Writer’s note: This article was produced as part of Intel’s editorial program, with the goal of highlighting cutting-edge science, research, and innovation driven by the HPC and AI communities through advanced technology. The publisher of the content has final editing rights and determines what articles are published.

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC (www.finetuning.consulting), a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.