Volume 25, Issue 1 (January 2002)

Model Mammals



Where the tiger goes, so go its stripes. Unfortunately, this simple law of nature has no equally simple counterpart in computer graphics. In the digital realm, the tiger and its stripes (as with the giraffe and its patches and the leopard and its spots) are fundamentally independent of each other. A geometric model of a tiger will be mapped with a striped texture, but there is nothing natural about the union. "It is a typical divide-and-conquer idea, where a complex problem is divided into two simpler problems, but later we have to somehow integrate the two solutions," says Marcelo Walter, a computer graphics professor and researcher at Unisinos, a private university in Brazil.

"Texture mapping is a great idea, but in some cases it is hard to get the right result. The main problems are the distortions that can occur and the fact that we often do not have a good texture map to start with," says Walter. Moreover, he adds, "some care must be taken when animating texture-mapped deformable objects."

While effective for many applications, the two-step process breaks down when the geometry of the modeled object is highly complex, in which case it becomes difficult to map visual characteristics to every point on the surface. Also implicit in the two-step approach is the assumption that there is no interplay between the processes that define shape and those that define visual attributes. In fact, the visual appearance is often the direct result of interaction between these processes.
A new procedural texturing system defines the visual attributes of a giraffe's coat directly on the surface of the model. The rules-based system ensures that the giraffe and its spots will move in tandem in a physically plausible manner.




Patterned animals such as zebras, giraffes, and tigers are a case in point. "The pattern visible on the fur of an adult animal is the result of a much earlier process that took place while the animal was in the womb," says Walter. In such cases, he notes, "it is important to model not only the individual processes themselves, but also the interplay between the embryo growing and the pattern-formation process [in order to achieve a believable dynamic pattern]."

In an effort to integrate such pattern-formation processes with geometry, Walter, with the help of the late Alain Fournier at the University of British Columbia and Daniel Meneveaux of the Laboratoire SIC in France, has devised a method through which the visual attributes of an object are defined directly on the surface of that object while at the same time taking into account any dynamic change of shape due to growth or other causes. The proof-of-concept application shows how a mammalian coat pattern can be generated by a biologically plausible model simulated on the surface of a changing geometry.

At the heart of the system is a procedural texturing model targeted to animal coat patterns that generates the texture on the surface of the object, avoiding the mapping step. The idea dates back to the introduction of reaction-diffusion textures in 1991. Unique to this implementation, however, is that it not only synthesizes the texture on the surface of the object, but also takes into account the interplay between geometry and texture. "In other words, we can morph the shape while the pattern is being formed," says Walter. "This introduces a new paradigm where changes in shape can affect the final result, much as nature does." Using the technique, the researchers are able to simulate the complete growth of a patterned animal from embryo through adulthood.
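The reaction-diffusion approach Walter refers to simulates two interacting chemicals whose competing reaction and diffusion rates spontaneously produce spots or stripes. The sketch below is a minimal Gray-Scott example of that general class of technique on a flat, fixed-size grid; it is not the clonal mosaic model described next, and it does not morph the domain while the pattern forms. The grid size, feed and kill rates, and threshold are illustrative assumptions.

```python
# Minimal Gray-Scott reaction-diffusion sketch on a flat, fixed-size grid.
# This illustrates the 1991-era class of on-surface texture synthesis the
# article mentions, not the clonal mosaic model; all parameter values here
# are illustrative assumptions.
import numpy as np

N = 128                       # grid resolution
Du, Dv = 0.16, 0.08           # diffusion rates of the two chemicals
F, k = 0.037, 0.065           # feed/kill rates in a spot-forming regime
steps = 5000

U = np.ones((N, N))
V = np.zeros((N, N))
# seed a small square of the second chemical in the middle of the grid
U[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

def laplacian(Z):
    """Five-point Laplacian with wrap-around (toroidal) boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1.0 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a spotted pattern; thresholding it gives two "coat" colors
pattern = (V > 0.2).astype(np.uint8)
print("spot coverage:", pattern.mean())
```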

The texturing system is built on a clonal mosaic (CM) procedural model. The CM theory of mammalian coat-pattern formation suggests that the typical yellow-black striped and spotted patterns occurring in several species of mammals reflect a spatial arrangement, or mosaic, of cloned epithelial (outer layer of skin) cells. In this model, the patterns reflect an underlying cell arrangement, and different hair colors result from different types of underlying cells. The mechanism for pattern formation is based on cell-to-cell interactions and divisions. Its specific goal is to generate the repeating spotted patterns occurring in several species of mammals, especially the big cats, such as the leopard, tiger, and cheetah, as well as giraffes.
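To make the clonal mosaic mechanism concrete, the sketch below runs a heavily simplified, cell-based simulation on a flat two-dimensional patch rather than on the growing 3D surface used in the actual system. The cell types, division probabilities, and relaxation step are illustrative assumptions, not the published model's parameters. The key idea it demonstrates is that daughter cells appear next to their parents and inherit their type, so faster-dividing "dark" cells clump together into spots as the cell population grows.

```python
# A heavily simplified clonal-mosaic-style sketch on a flat 2-D patch
# (the system described in the article works on a growing 3-D surface).
# Cell types, division probabilities, and the relaxation step are
# illustrative assumptions, not the published model's parameters.
import math
import random

W, H = 1.0, 1.0               # size of the patch
N_INIT = 200                  # initial number of cells
P_FOREGROUND = 0.15           # fraction of "dark" (spot) cells at start
DIV_FG, DIV_BG = 0.04, 0.01   # per-step division probabilities by type
REPULSE_RADIUS = 0.03         # cells closer than this push each other apart
STEPS = 40

# each cell is [x, y, cell_type], where type 1 = foreground, 0 = background
cells = [[random.random() * W, random.random() * H,
          1 if random.random() < P_FOREGROUND else 0]
         for _ in range(N_INIT)]

def relax(cells, iters=2):
    """Push overlapping cells apart so they tile the patch as a mosaic."""
    for _ in range(iters):
        for i, a in enumerate(cells):
            fx = fy = 0.0
            for j, b in enumerate(cells):
                if i == j:
                    continue
                dx, dy = a[0] - b[0], a[1] - b[1]
                d = math.hypot(dx, dy)
                if 0.0 < d < REPULSE_RADIUS:
                    push = (REPULSE_RADIUS - d) / d
                    fx += dx * push
                    fy += dy * push
            a[0] = min(max(a[0] + 0.5 * fx, 0.0), W)
            a[1] = min(max(a[1] + 0.5 * fy, 0.0), H)

for _ in range(STEPS):
    # division: a daughter appears next to its parent and inherits its type,
    # so the faster-dividing foreground cells clump together into spots
    newborns = []
    for x, y, t in cells:
        if random.random() < (DIV_FG if t else DIV_BG):
            newborns.append([x + random.uniform(-0.01, 0.01),
                             y + random.uniform(-0.01, 0.01), t])
    cells.extend(newborns)
    relax(cells)

print(len(cells), "cells;", sum(t for _, _, t in cells), "foreground cells")
```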

This implementation, says Walter, "can synthesize a wide range of visually realistic coat patterns, and because the CM texture is generated on the object's surface, it is easier for animation tasks than texture mapping because the visual pattern follows whatever animation transformation is applied." Additional benefits include the integration of model and texture, which eliminates the need for a second pass and allows the geometry to be used to control texture effects.

On the down side, says Walter, "as with most procedural texturing models, it is hard to achieve a specific result because it is difficult to control and fine-tune the results."
The spatial arrangement of the cells that make up the outer layer of the giraffe's skin is based on a clonal mosaic procedural model, whereby cell-to-cell interactions and divisions drive pattern formation (right).




While the new technique is well suited for texturing complex shapes with intricate or detailed visual patterns, such as patterned animals, says Walter, "it is not appropriate for classic texturing tasks that are already well handled by current techniques."

The researchers are working to enhance the capabilities of the current system by investigating which geometry attributes are responsible for controlling which aspects of the texturing process, as well as what other visual effects can be affected by geometry. In addition, the researchers are looking at applying the technique in other domains, such as feather patterns and butterfly wing patterns, and to other procedural texturing models to build its commercial potential. They are also considering the possibility of controlling ordinary texture maps with geometry. Their long-term goal is to develop intelligent textures that adapt automatically to the environment.

Currently, says Walter, there is a dearth of work addressing integration as a task in itself. "I think our work is a step forward in this direction. In the long run, we should be able to model all aspects of pattern and body shape in one step." In fact, he says, "the modeling software of the future should allow the user to model any object as a whole." More information on the integration of pattern and shape using this technique can be found at http://inf.unisinos.br/~marcelow.




Diana Phillips Mahoney is chief technology editor of Computer Graphics World.