
A Random Walk in Real-Time Computer Graphics

Lighting, texture and displacement mapping in GLSL

Fig 1. A Remote Scene (2021) | Sean Zhai | created with ShaderToy

Modeling with Mathematics

Computer graphics begins with defining shapes mathematically. Modeling a sphere or a cube is easy, but defining real-world objects is challenging. The famous 3D teapot model was created in 1975 by Martin Newell at the University of Utah, and it was featured in many research papers because only very limited models were available in the early days of computer graphics.
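On ShaderToy, shapes are commonly defined as signed distance functions (SDFs): a function that returns how far a point is from a surface. A minimal sketch for the sphere and cube mentioned above (the function names follow common ShaderToy conventions, not code from this article):

```glsl
// Signed distance from point p to a sphere of radius r at the origin:
// negative inside, zero on the surface, positive outside.
float sdSphere( in vec3 p, in float r )
{
    return length(p) - r;
}

// Signed distance to an axis-aligned box with half-extents b.
float sdBox( in vec3 p, in vec3 b )
{
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}
```

A ray marcher can then step through space using these distances to find the surface, which is how many ShaderToy scenes are rendered without any polygons.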

The alternative to mathematical modeling is to use polygon meshes to approximate the shape. Combined with 3D digitizers and scanners, high-resolution polygon models are widely used, but they often require long rendering times. In 1995, Toy Story, the first CG feature film, was released; it required 800,000 machine hours to render.

Advances in hardware have greatly improved the situation; the modern GPU in particular has made real-time computer graphics a reality. Even today, however, polygon models still need to be highly optimized, and game engines often use low-poly models. Defining shapes with mathematics still has a clear advantage.


Depict Reality with Light

Light is recorded as the color of pixels in images. The wonder of the modern GPU has made it possible to create astonishing graphics in real time, and the language for interfacing with the GPU is the OpenGL Shading Language (GLSL). With GLSL, we define how the color of a pixel is rendered; the rest is carried out automatically and in parallel.

We have total freedom to set up a scene in any way, though it is helpful to follow the principles of the real world. As human beings, our emotions often resonate with objects bearing similarities to past experiences. Even when building an abstract composition, we often need a certain familiarity.

Fig 2. The Dimension of Lights | Image created by Sean Zhai

From observation, we know that how light reflects off a surface is determined by the surface normal, and that diffuse light is not sensitive to direction. In GLSL, a simple and effective digital lighting setup often follows the three-point studio lighting scheme. A rule of thumb is that the key light is roughly ten times as strong as the fill light.
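A minimal sketch of such a three-point setup, assuming the shader already has a surface normal n and a base color col (the light directions and the 10:1 key-to-fill ratio here are illustrative choices, not fixed rules):

```glsl
vec3 shade( in vec3 n, in vec3 col )
{
    // key light: the strong, directional diffuse term
    vec3  keyDir  = normalize(vec3( 0.6, 0.7, 0.3));
    float key     = max(dot(n, keyDir), 0.0);

    // fill light: roughly one tenth the strength of the key
    vec3  fillDir = normalize(vec3(-0.6, 0.2, 0.4));
    float fill    = 0.1 * max(dot(n, fillDir), 0.0);

    // back (rim) light: separates the object from the background
    float back    = 0.2 * max(dot(n, vec3(0.0, 0.0, -1.0)), 0.0);

    return col * (key + fill + back);
}
```

The diffuse terms depend only on the angle between the normal and each light direction, which matches the observation above that diffuse light is not sensitive to the viewing direction.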

In technical terms, computer graphics lighting largely rests on the bidirectional reflectance distribution function (BRDF), which describes how light behaves upon hitting a surface.


Texture, UV Mapping and Box Map

Fig 3. Tracy Reese Show 2008 | Image source: commons.wikimedia.org (CC BY 2.0)

To create a more realistic look, the most direct approach is to bring in a picture. It would make life easier if we had a picture of a rock to use when we want to create a rock in the digital world. An immediate question is how the image wraps around the surface of the object, which can be fairly complicated when the shape is complex. The typical solution in computer graphics is called UV mapping, where u and v are the coordinates that define the texture. This is very similar to how fabric is used in the fashion industry, and it is an art in itself to make the patterns look right where different surfaces meet.

The image below demonstrates a technique called box mapping. It was invented by Mitch Prater when he worked on RenderMan at Pixar. The basic idea is to project the texture from an imaginary cube that surrounds the object, and to blend the projections according to how the surface normal aligns with the x-, y-, and z-axes.

Fig 4. Texture Mapping Demonstration | Sean Zhai | Created with ShaderToy

The following is the boxmap function in GLSL, developed by Inigo Quilez at ShaderToy.

// "s" texture sampler
// "p" point being textured
// "n" surface normal at "p"
// "k" controls the sharpness of the blending in the transition areas
vec4 boxmap( in sampler2D s, in vec3 p, in vec3 n, in float k )
{
    // project the texture along the x, y, and z axes
    vec4 x = texture( s, p.yz );
    vec4 y = texture( s, p.zx );
    vec4 z = texture( s, p.xy );

    // blend factors derived from the surface normal
    vec3 w = pow( abs(n), vec3(k) );

    // blend the three projections and normalize
    return (x*w.x + y*w.y + z*w.z) / (w.x + w.y + w.z);
}
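A typical call site might look like the sketch below, assuming a renderer that has already computed the hit position pos and surface normal nor (the sampler iChannel0 and the sharpness value 8.0 are illustrative):

```glsl
// sample the texture from three axes and blend by the normal;
// a larger k gives sharper transitions between the projections
vec3 col = boxmap( iChannel0, pos, nor, 8.0 ).xyz;
```

Because the blending depends only on position and normal, box mapping needs no UV coordinates at all, which is why it pairs so well with mathematically defined shapes.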

Noise, a Realistic Touch

The difference between noise and randomness is that noise is not random. Even with its seemingly irregular look, noise must produce repeatable results. To be useful in computer graphics, a noise function also needs to create smooth, gradual changes. Perlin noise was developed by Ken Perlin after his work on the movie Tron (1982) to add a realistic touch. The algorithm was groundbreaking: it made it possible to create rich and nuanced images without heavy datasets or expensive computation, and it inspired other noise functions such as Voronoi, Worley, and simplex noise. Computer graphics has never looked back to the days without a noise function.
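One way to see both requirements, repeatability and smoothness, is a value-noise sketch: a hash fixes pseudo-random values at integer lattice points, and a smooth curve interpolates between them. The hash here is a common ShaderToy one-liner, not part of Perlin's original algorithm:

```glsl
// repeatable pseudo-random value in [0,1) for a lattice point:
// the same input always produces the same output
float hash( in vec2 p )
{
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

// 2D value noise: smooth interpolation of the lattice values
float valueNoise( in vec2 p )
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    // smoothstep-style curve: zero slope at the lattice points,
    // so the result changes gradually with no visible grid seams
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix( mix(hash(i + vec2(0,0)), hash(i + vec2(1,0)), u.x),
                mix(hash(i + vec2(0,1)), hash(i + vec2(1,1)), u.x), u.y );
}
```

Perlin noise refines this idea by interpolating gradients rather than values, which removes the blocky character that plain value noise can have.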

The development of Perlin Noise has allowed computer graphics artists to better represent the complexity of natural phenomena in visual effects for the motion picture industry.

This was the citation for Ken Perlin's Technical Achievement Award at the 1997 Oscar award ceremony.
Fig 5. Bloodie Writes an Anthem | Rebecca Xu, Sean Zhai (Generative animation created using the Perlin noise field, programmed in Processing)

Fractal Noise and Self-Similarity

Nearly all common patterns in nature are rough. They have aspects that are exquisitely irregular and fragmented – not merely more elaborate than the marvelous ancient geometry of Euclid but of massively greater complexity. For centuries, the very idea of measuring roughness was an idle dream. This is one of the dreams to which I have devoted my entire scientific life. – Benoit Mandelbrot

Fractional Brownian motion (fBm) is sometimes described as a "random walk process." It was Mandelbrot who revealed its most important feature: self-similarity. To illustrate, Mandelbrot showed that self-similarity can be observed in many places in nature, such as the structure of a cauliflower, where a portion of the plant has the same structure as the whole.

To see this in real time, let us implement fBm in GLSL. One example is shown below. Complexity is built gradually by summing octaves of noise sampled from a texture; each octave doubles the sampling frequency and halves the amplitude. By doing this, self-similarity is maintained.

// code by iq (Inigo Quilez) at shadertoy.com
float noise1f( sampler2D tex, in vec2 x )
{
    // sample a 64x64 noise texture; the 0.5 offset hits texel centers
    return texture(tex,(x+0.5)/64.0).x;
}
float fbm1f( sampler2D tex, in vec2 x )
{
    // four octaves: each doubles the frequency and halves the amplitude
    float f = 0.0;
    f += 0.5000*noise1f(tex,x); x*=2.01;
    f += 0.2500*noise1f(tex,x); x*=2.01;
    f += 0.1250*noise1f(tex,x); x*=2.01;
    f += 0.0625*noise1f(tex,x);
    // remap from [0, 0.9375] to roughly [-1, 1]
    f = 2.0*f-0.9375;
    return f;
}

As a side note for anyone who has stumbled upon ShaderToy: it is tempting to randomly change the parameters in a shader for a different look, but without understanding the underlying principles it can be difficult to master the art of constructing shaders, and frustrating when artifacts occur.

Displacement Mapping

Noise can also be used to change the actual shape of an object; this is known as displacement mapping. By adding fBm noise as a displacement map to Fig 4, I created the title image, Fig 1. The source code can be found below.
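A hedged sketch of the idea, assuming the fbm1f function from the previous section and a base scene SDF called sdScene exist in the shader (sdScene, the frequency 4.0, and the amplitude 0.05 are illustrative names and values, not the article's actual source):

```glsl
// displace the surface of a base shape with fBm noise
float sdDisplaced( in sampler2D tex, in vec3 p )
{
    float d = sdScene(p);              // distance to the undisplaced surface
    float h = fbm1f(tex, p.xz * 4.0);  // noise in roughly [-1, 1]
    return d + 0.05 * h;               // push the surface in and out
}
```

One caveat: displacement breaks the distance bound that ray marching relies on, so the marching step size usually has to be reduced to avoid artifacts.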

The image is very simple, and I hope it still creates a feeling of a remote atmosphere. I would like to share some notes from making it.

  • The composition: generally, the main object should not sit in the dead center; here the camera target position ta is moved slightly along the x-axis.
  • The "architectural structure" is modeled by joining two spheres, then subtracting a third sphere to create the concave shape (easily seen in Fig 4).
  • A touch of displacement mapping can completely change the mood of an image (compare Fig 4 and Fig 1).
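The joining and subtracting of spheres mentioned in the notes above are the standard set operations on distance fields, sketched here (the names follow common ShaderToy conventions):

```glsl
// union of two distance fields: keep whichever surface is closer
float opUnion( float d1, float d2 )    { return min(d1, d2); }

// subtraction: carve shape 1 out of shape 2
float opSubtract( float d1, float d2 ) { return max(-d1, d2); }
```

Because both operations are just min and max on distances, CSG-style modeling of the structure in Fig 4 costs almost nothing per pixel.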

You can visit ShaderToy to check out the real-time version.


Epilogue

Chances are that you have done graphics programming by drawing lines on a canvas, which can basically be explained as a "painter's model." What GLSL offers is something more powerful: it runs in real time, and it can be far more impressive. Give it a try using ShaderToy, or another GLSL composition tool. Cheers!

