Archive for April, 2008

Back-face artifacts

Posted on April 29, 2008. Filed under: Uncategorized

When the grid of the water surface is set in motion, alpha blending can cause new artifacts. The main problem is that overlapping triangles of the water surface get blended together, and depending on their position and number, the waves become stripy. Even the back sides of the triangles can be rendered, as visualized in the next figure.

Blending artifacts

The beam intersects a triangle facing away from the camera (the second red line), and these areas appear unrealistically darker than the surrounding triangles. Some solutions try to handle the effect by adjusting the normal vectors toward the camera, but after several futile attempts I decided to blend reflection and refraction in the shader code without using alpha blending.
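As an illustration, such an in-shader blend might look like the following HLSL sketch, assuming reflColor and refrColor have already been sampled from the reflection and refraction maps and fresnel is some blending factor in [0,1] (the exact blend used in the demo may differ):

// blend reflection and refraction inside the pixel shader instead of
// relying on the output-merger's alpha blending
float4 waterColor = lerp(refrColor, reflColor, fresnel);
// write the result fully opaque, so overlapping or back-facing
// triangles are no longer blended together
waterColor.a = 1.0f;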


Choppy waves

Posted on April 25, 2008. Filed under: Lake water shader, shader, Technical background

The general methods discussed in these pages use randomly generated or sinusoidal wave formations. They can be perfectly adequate for water scenes under normal conditions, but there are some cases when choppy waves are needed: for example, stormy weather, or shallow water where the so-called “plunging breaker” waves are formed. In the following paragraphs I will briefly introduce some of the approaches to getting choppier waves.

Analytical Deformation Model

[UVTDFRWR] describes an efficient method which disturbs the displaced vertex positions analytically in the vertex shader. Explosion effects, for example, are important for computer games. To create an explosion effect, they use a damped radial wave formula,

where t is the time, r is the distance from the explosion center in the water plane and b is a decimation (damping) constant. The values of I0, w and k are chosen according to the given explosion and its parameters.

For rendering, they displace the vertex positions according to the previous formula, which results in convincing explosion effects.
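As an illustration, such a disturbance could look like the following vertex-shader sketch, assuming the wave has the form I0 · e^(−b·r) · sin(k·r − w·t); the exact formula in [UVTDFRWR] may differ:

// explosion parameters, set by the application (illustrative)
float I0;      // initial intensity
float b;       // decimation (damping) constant
float k;       // wavenumber
float w;       // frequency
float2 center; // explosion center in the water plane
float t;       // current time

float3 disturbVertex(float3 pos)
{
    // r: distance from the explosion center in the water plane
    float r = distance(pos.xz, center);
    // damped radial wave displacing the vertex vertically
    pos.y += I0 * exp(-b * r) * sin(k * r - w * t);
    return pos;
}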

Dynamic Displacement Mapping

[UVTDFRWR] introduces another approach as well. The necessary vertex displacement can be rendered in a separate pass and later combined with the water height-field. This way, some calculations can be done before running the application to gain performance. Depending on the basis of the water rendering, the displacements can be computed by the above-mentioned analytical model or, for example, by the Navier-Stokes equations.

Although these techniques can result in realistic water formations, they need huge textures to describe the details. The available texture memory and the shader performance can limit the application of these approaches.

Direct displacement

In [DWAaR], the displacement vectors are computed with FFT. Instead of modifying the height-field directly, the vertices are horizontally displaced using the following equation:

X = X + λD(X,t)

where λ is a constant controlling the amount of displacement, and D is the displacement vector. D is computed with the following sum:

D(X, t) = Σ_K −i (K / k) h(K, t) e^(iK·X)

where K is the wave direction, t is the time, k is the magnitude of the vector K, and h(K, t) is a complex number representing both the amplitude and the phase of the wave.

The difference between the original and the displaced waves is visualized in the following figure. The displaced waves on the right are much sharper than the original ones:

choppy waves - deformation

The source of the image is [DWAaR].

Choppy Waves Using Gerstner Waves

If the rendered water surface is defined by the Gerstner equations, our task is easier, since Gerstner waves are able to describe choppy wave forms. The amplitudes need to be limited in size, otherwise the breaking waves can look unrealistic. A fine way to create choppy waves is the summation of Gerstner waves with different amplitudes and phases, which can be carried out through the following sum:

x = x0 − Σi (Ki / ki) Ai sin(Ki · x0 − ωi t + φi),    y = Σi Ai cos(Ki · x0 − ωi t + φi)

where Ki is the set of wavevectors, ki the set of their magnitudes, Ai the set of amplitudes, ωi the set of frequencies, φi the set of phases and N is the number of summed sine waves (i = 1 … N).
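As an illustration, this summation could be implemented in a vertex-shader helper along the following lines (the wave parameters are illustrative uniforms and x0 is the undisplaced grid position):

#define N 3
float2 K[N];     // wavevectors
float  A[N];     // amplitudes
float  omega[N]; // frequencies
float  phi[N];   // phases

float3 gerstner(float2 x0, float t)
{
    float3 p = float3(x0.x, 0, x0.y);
    for (int i = 0; i < N; i++)
    {
        float arg = dot(K[i], x0) - omega[i] * t + phi[i];
        // horizontal displacement toward the crests makes the waves choppy
        p.xz -= (K[i] / length(K[i])) * A[i] * sin(arg);
        // vertical Gerstner component
        p.y += A[i] * cos(arg);
    }
    return p;
}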

The sum of 3 Gerstner waves is visualized in the following figure:

Gerstner wave summation

The source of the image is [GW].

References

[UVTDFRWR] – Using Vertex Texture Displacement for Realistic Water Rendering

[DWAaR] – Lasse Staff Jensen, Robert Goliáš: Deep-Water Animation and Rendering

[IAoOW] – Damien Hinsinger, Fabrice Neyret, Marie-Paule Cani: Interactive Animation of Ocean Waves

[GW] – Jefrey Alcantara: Gerstner waves


Rendering caustics

Posted on April 21, 2008. Filed under: Lake water shader, shader, Uncategorized

Although environment mapping is supported by the graphics hardware, it is a good approximation only in the case where the reflecting/refracting object is small compared to its distance from the environment. This means that environment mapping can be used only when the objects are close to the water surface. Moreover, objects under a dynamic water surface need a frequently updated environment map, so its usability is limited.

Several approaches render accurate caustics through ray tracing, but they are generally too time-consuming for real-time applications (see [LWIuBBT]). Other techniques approximate textures of underwater caustics on a plane using wave theory. Although these moving textures can be rendered onto arbitrary receivers at interactive frame rates, the repeating texture patterns are usually disturbing.

Graphics hardware has made significant progress in performance recently, and many hardware-based approaches have been developed for rendering caustics. Real caustics calculation needs intersection tests between the objects and the viewing rays reflected at the water surface. Generally, the illumination distribution of the object surfaces needs to be computed, but this is really time-consuming and difficult. Although backward ray tracing, adaptive radiosity textures and curved reflectors are published methods for creating realistic images of caustics, they cannot run in real time because of the huge computational cost. For more details about these approaches, see [BRT], [ARTfBRT] and [IfCR].

[FRMfRaRCDtWS] describes a technique for rendering caustics fast. Their method takes three optical effects into account: reflective caustics, refractive caustics, and reflection/refraction on the water surface. It calculates the illumination distribution on the object surface through an efficient method using the GPU. In their texture-based volume rendering technique, objects are sliced and stored in two- or three-dimensional textures. By rendering the slices in back-to-front order, the final image is created, and the intensities of the caustics are approximated on the slices only, not on the entire object. The method is visualized in the next figure:

Rendering Caustics

The source of the image is: [FRMfRaRCDtWS].

Although this reduces computation time, it does not enable real-time caustics rendering: the caustics map cannot be refreshed for every frame using this method.

Caustics maps store the intensities of the caustics. They are generated by projecting the triangles of the water surface onto the objects in the water. The intersecting triangles influence the amount of light on the object: the intensity of a caustic triangle is proportional to the area of the water-surface triangle divided by the area of the caustic triangle. The more triangles intersect each other and the higher their intensity is at a given point, the lighter that point is. In the end, the caustics map and the original illumination map are merged, as in the next figure:

Caustics rendering 2

The source of the image is: [FRMfRaRCDtWS].
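As a sketch, the area ratio described above could be evaluated per triangle like this (the corner variables w0–w2 of the water-surface triangle and c0–c2 of the projected caustic triangle are illustrative):

// area of a triangle given its three corners
float triangleArea(float3 a, float3 b, float3 c)
{
    return 0.5f * length(cross(b - a, c - a));
}

// the light falling on the water-surface triangle is concentrated
// (or spread) over the projected caustic triangle
float causticIntensity = triangleArea(w0, w1, w2) / triangleArea(c0, c1, c2);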

[IISTfAC] introduces a faster approach to rendering caustics. The method emits particles from the light source and gathers their contributions as viewed from the eye. To gain efficiency, they emit photons in a regular pattern instead of along random paths; the pattern is defined by the image pixels in a rendering from the viewpoint of the light. Put another way: counting how many times the light source sees a particular region is equivalent to counting how many particles hit that region. For multiple light sources, multiple rendering passes are required. Several steps are approximated to reduce the required resources, for example interpolating among neighbouring pixels, omitting volumetric scattering, and restricting the method to point lights.

In [IRoCuIWV], a more accurate method is described. In the first pass, the positions of the receivers are rendered to a texture. In the second pass, a bounding volume is drawn for each caustic volume; for points inside the volume, the caustic intensity is computed and accumulated in the frame buffer. They also take warped caustic volumes into account, which the other caustics-rendering techniques skip. Their technique can produce real-time performance for general caustic computation, but it is not fast enough for entire ocean surfaces. For fully dynamic water surfaces with dynamic lighting, their method rendered the following image at 1280 x 500 pixels at 0.2 fps:

Caustics rendering example

For more details, see [IRoCuIWV].

In [DWAaR], they simplify the problem to reach real-time performance. They consider only first-order rays and assume that the receiving surface lies at a constant depth. The incoming light beams are refracted, and the refracted rays are then intersected with the given plane. The next figure illustrates the method:

Caustic triangles

To reduce the necessary calculations, only a small part of the caustics map is computed, and they show a method to tile it seamlessly over the entire image. Finally, the direction of the sun's rays and the positions of the triangles are used to calculate the texture coordinates by projection. For further discussion of this method, see [DWAaR].
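A minimal sketch of the per-vertex work this implies, assuming a horizontal receiving plane at depth d below the surface (the variable names are illustrative):

// refract the incoming sun ray at the water surface (air-to-water,
// index of refraction roughly 1.33); sunDir points down toward the surface
float3 refr = refract(sunDir, normalVector, 1.0f / 1.33f);
// intersect the refracted ray, starting at surfacePos, with the plane y = -d
float s = (-d - surfacePos.y) / refr.y;
float3 hit = surfacePos + s * refr;
// hit.xz is then projected to the texture coordinates of the caustics map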

The main ideas of caustics rendering have been briefly introduced. The accurate methods use ray tracing techniques, but they cannot produce real-time performance without cheating. The most frequently used approaches rely on pre-generated caustic textures and try to avoid visible repetition.

References

[FRMfRaRCDtWS] – Kei Iwasaki, Yoshinori Dobashi and Tomoyuki Nishita: A Fast Rendering Method for Refractive and Reflective Caustics Due to Water Surfaces

[BRT] – J. Arvo, “Backward Ray Tracing,” SIGGRAPH

[ARTfBRT] – P.S. Heckbert, “Adaptive Radiosity Textures for Bidirectional Ray Tracing,” Proc. SIGGRAPH

[IfCR] – D. Mitchell, P. Hanrahan, “Illumination from Curved Reflections,” Proc. SIGGRAPH

[IISTfAC] – Chris Wyman, Scott Davis: Interactive Image-Space Techniques for Approximating Caustics

[IRoCuIWV] – Manfred Ernst, Tomas Akenine-Möller, Henrik Wann Jensen: Interactive Rendering of Caustics using Interpolated Warped Volumes

[LWIuBBT] – Mark Watt: Light-Water Interaction using Backward Beam Tracing

[DWAaR] – Lasse Staff Jensen, Robert Goliáš: Deep-Water Animation and Rendering


Specular highlights

Posted on April 18, 2008. Filed under: Lake water shader, shader, Technical background


Specular highlights are approximated by adding some light color to specific areas, as the Phong illumination model describes. For computational reasons, the half-vector is used instead of the reflection vector; for more details, see the Water mathematics chapter. The half-vector is also approximated, and some perturbation is added from the values of the bump-map, scaled by specPerturb. In this demo, I used the following code for this:

float4 speccolor;
// direction of the light source (a fixed directional light in this demo)
float3 lightSourceDir = normalize(float3(0.1f,0.6f,0.5f));
// half-vector between the eye vector and the light direction,
// perturbed by the bump-map values scaled by specPerturb
float3 halfvec = normalize(eyeVector+lightSourceDir+float3(perturbation.x*specPerturb,perturbation.y*specPerturb,0));

The angle between the surface normal and the half-vector is calculated using their dot product. An input variable (specPower) adjusts the exponent, so that specular highlights appear only where the angle between the vectors is very small. Finally, the specular color is added to the original one.

float3 temp = 0;
// cosine of the angle between the half-vector and the normal,
// raised to specPower to narrow the highlight
temp.x = pow(dot(halfvec,normalVector),specPower);
// specular light color; the w component stores its intensity
speccolor = float4(0.98,0.97,0.7,0.6);
speccolor = speccolor*temp.x;
// scale the color by its intensity and clear the alpha component
speccolor = float4(speccolor.x*speccolor.w,speccolor.y*speccolor.w,speccolor.z*speccolor.w,0);
// add the specular contribution to the color computed so far
Output.Color = Output.Color + speccolor;

Some screenshots of lake water specular highlights:

screenshot_specular_4

screenshot_specular_2 screenshot_specular_3 screenshot_specular_4


UV Flipping Technique to Avoid Repetition

Posted on April 11, 2008. Filed under: Lake water shader, shader, Uncategorized

A common problem exists among many shaders that rely on scrolling two copies of the same texture in slightly different directions. No matter what angle or speed the textures are scrolling, they will eventually line up exactly and cause a visual hiccup. This scrolling technique is commonly used to gain the effect of having random patterns, like water caustics. With caustics, the pattern appears to stop every once in a while even though it is scrolling at a constant speed. This was also encountered when implementing the Ocean Water and Reflective and Refractive Water shaders when we scrolled two copies of the same bump map in opposite directions to produce high frequency waves (see “Rendering Ocean Water” and “Rippling Refractive and Reflective Water” later in this book). The method we used to solve this problem is to compute the two sets of texture coordinates like you normally would. Then immediately before using those texture coordinates to fetch from textures, choose one of the sets of texture coordinates and swap the U and V. This will effectively flip the texture along the diagonal. Using this method, the intermittent hiccup that appears due to the two copies being perfectly aligned is now impossible since one of the textures is reflected about the diagonal of the texture.

Alex Vlachos: UV Flipping Technique to Avoid Repetition [D3DShaderX]
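A minimal HLSL sketch of the trick (the sampler, scroll speeds and variable names are illustrative):

// two copies of the same bump map scrolled in different directions
float2 uv1 = Input.TexCoord + time * float2( 0.020f, 0.011f);
float2 uv2 = Input.TexCoord + time * float2(-0.013f, 0.032f);
// swapping U and V flips the second copy along the texture's diagonal,
// so the two copies can never line up exactly
float3 bump1 = tex2D(BumpSampler, uv1).xyz;
float3 bump2 = tex2D(BumpSampler, uv2.yx).xyz;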


Particle systems

Posted on April 10, 2008. Filed under: Lake water shader, shader, Technical background, Uncategorized

Physics-based approaches have become very popular recently, and the improving hardware performance makes real-time particle systems possible as well. Depending on the problem, both vertex-based and pixel-based solutions can be appropriate for making a huge amount of independent particles seem alive. Particle system techniques can be combined with other water animation approaches to get a more realistic result.

Particle system approaches need to answer two questions: how do the particles move, and what are the particles as objects? The whole system can have a velocity, as a vector, but this vector does not need to be constant across the entire flow. The next figure visualizes this:

particle system velocity vector

The answer to the second question is that our particles can be negligible in size and in mass as well. But they can carry further information to make other kinds of interaction possible, for example color, temperature and pressure, depending on the expected result.

The particles move according to the physical laws; their motion can be calculated in time steps with the help of the previously discussed velocity-vector map. To be able to perform these calculations on graphics hardware, the positions of the particles must be stored in a texture, so they are sampled into one. These textures are called particle maps:

particle map

To get the positions of the particles in the next timestep, we trace them as if they moved along the velocity-vector map. This approach is called forward mapping, and it is illustrated in the next figure:

forward mapping

The described technique suffers from some problems. First, if the velocity is too small, some particles can stay in the same grid cell forever: they are assumed to start from the center of the cell in each iteration, but they cannot leave the cell in one timestep, so they are placed back at the center again. Second, there might be cells which always stay empty for the same reason. Both effects result in stationary particles and permanent gaps.

To overcome these issues, backward mapping can be used instead of forward mapping. For each grid cell, we calculate which cell its particle could have originated from. Then we determine the color of the cell using the color of the original cell. If interpolation is used, the surrounding colors can also be taken into account, and we can avoid stationary particles and empty gaps as well:

backward mapping

Based on the previous considerations, the graphics hardware-based method for texture advection is as follows. The velocity map and the particle map are stored in separate two-component textures. A standard 2D map can be represented this way; a third dimension can be added by approximations to preserve performance. Offset textures are part of the hardware-supported pixel operations, so the movement along the velocity field can be implemented with them. Inflow and outflow (particle generation and removal) are outside the scope of this paper. More detailed explanations and source code can be found in [SHADERX].
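A pixel-shader sketch of one backward-mapping step, assuming the velocity components are stored in the texture remapped to the [0,1] range (the names and the timestep are illustrative):

sampler VelocitySampler; // velocity-vector map
sampler ParticleSampler; // particle map of the previous timestep
float timeStep;

float4 advectParticles(float2 uv : TEXCOORD0) : COLOR
{
    // read the cell's velocity and remap it from [0,1] back to [-1,1]
    float2 v = tex2D(VelocitySampler, uv).xy * 2.0f - 1.0f;
    // look backwards: which cell does this cell's particle originate from?
    float2 src = uv - v * timeStep;
    // bilinear filtering interpolates among the neighbouring cells,
    // avoiding stationary particles and empty gaps
    return tex2D(ParticleSampler, src);
}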

Particle systems can be a good solution for real-time interaction between external objects and the water surface. They can efficiently animate the moving surface as well, but they are usually applied together with other techniques. Flowing water, water drops, spray and waterfalls are just some of the possible water-related effects that can be implemented through particle systems.

Sprays are modeled as a separate subsystem in [DSoSF], as mentioned earlier in The Navier-Stokes Equations chapter. When an area of the surface has a high upward velocity, particles are distributed over that area. The particles don't interact with each other; they only fall back to the water surface because of gravity, and then they are removed from the system. This technique can be visually convincing for spray simulation.

The source of these illustrative figures is [SHADERX].


Perlin noise

Posted on April 9, 2008. Filed under: Lake water shader, Technical background

There are several cases when randomly generated noise is needed for realistic rendering. Ken Perlin published a method which gives continuous noise that is much more similar to the random noise found in nature than the output of simple random generators. The difference is visualized in the next figures:

Noncoherent noiseCoherent Perlin noise

The source of the images is [PNM].

The 2D random noise on the left is generated by a simple random generator. The Perlin noise on the right is much closer to random phenomena in nature.

The basic Perlin noise does not look very interesting in itself, but by layering multiple noise functions at different frequencies and amplitudes, a more interesting fractal noise can be created:

Perlin Noise a Perlin Noise b Perlin Noise c Perlin Noise d Perlin Noise e Perlin Noise f

Summing them gives the following result:

Perlin noise result

The frequency of each layer is double that of the previous one, which is why the layers are usually referred to as octaves. By making the noise three-dimensional, animated two-dimensional textures can be generated as well. More good explanations and illustrations can be found at [PN2].
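As an illustration, the layering could look like this in HLSL, assuming NoiseSampler returns the basic Perlin noise (the number of octaves is illustrative):

float fractalNoise(float2 uv)
{
    float sum = 0.0f;
    float amplitude = 0.5f;
    float frequency = 1.0f;
    for (int octave = 0; octave < 6; octave++)
    {
        sum += amplitude * tex2D(NoiseSampler, uv * frequency).x;
        frequency *= 2.0f; // each octave doubles the frequency...
        amplitude *= 0.5f; // ...and halves the amplitude
    }
    return sum;
}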

A detailed and easy-to-understand explanation of Perlin noise generation can be found in [PNM]. For more complex details, see [SHADERX].

References

[PN] – Perlin Noise

[PNM] – Matt Zucker: The Perlin noise math FAQ

[PN2] – Perlin Noise

[SHADERX] – Wolfgang F. Engel: Direct3D ShaderX


Reflection rendering into texture

Posted on April 9, 2008. Filed under: Lake water shader, Technical background

In the chapter describing water mathematics, I discussed a method to determine the reflected color for every point of the water surface. One of the most precise solutions is to create a virtual view on the other side of the water plane and render the same scene into a texture, which can later be used as a reflection map. This means that before rendering the final image, a pre-rendering phase has to be added. During this phase, the position of the camera and the view vector are mirrored onto the water plane, and every object of the virtual world which can be reflected by the water in the final image is rendered from this virtual view into a texture. Let me show the figure again:

Reflection map

To get the expected result, the position of point B must be calculated. For this, we have to determine how far the original camera position is from the water plane, that is, the distance k. If the water is horizontal, this distance has to be subtracted from the height of the water plane to find the height coordinate; the other coordinates of points A and B are the same. To avoid artifacts, the underwater objects can be removed from the world before rendering into the texture. When the final image is created, this texture can be used as a reflection map. The reflected color can be sampled with the help of the vector between the camera and the points of the water surface, and the shape of the waves can be taken into account as well. Smaller adjustments can be needed for better results; for instance, slightly modifying the height of the clipping plane or of point B can improve the realism by producing fewer artifacts.
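For a horizontal water plane, the mirrored camera of point B can be computed with a few lines (a sketch in HLSL-style notation; cameraPos, viewDir and waterHeight are assumed application-side variables):

// k: distance of the camera (point A) from the water plane
float k = cameraPos.y - waterHeight;
// point B: the virtual camera, mirrored onto the water plane;
// the other coordinates stay the same
float3 virtualCameraPos = float3(cameraPos.x, waterHeight - k, cameraPos.z);
// the view direction is mirrored by negating its height component
float3 virtualViewDir = float3(viewDir.x, -viewDir.y, viewDir.z);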


High Level Shader Language

Posted on April 7, 2008. Filed under: Technical background

High Level Shader Language, or HLSL, is a programming language for GPUs developed by Microsoft for use with the Microsoft Direct3D API, so it works only on Microsoft platforms and on the Xbox. Its syntax, expressions and functions are similar to those of the C programming language, and with the introduction of the Direct3D 10 API, the graphics pipeline is virtually 100% programmable using only HLSL; in fact, assembly is no longer needed to generate shader code with Direct3D 10.

HLSL has several advantages compared to using assembly: programmers do not need to think about hardware details, it is much easier to reuse code, readability has improved a lot as well, and the compiler optimizes the code. For more details about the improvements, see [MHLSLR].

HLSL code is used in the demo applications, so I briefly outline the basics of the language here.

Variable declaration

Variable definitions are similar to the ones in C:

float4x4 view_proj_matrix;
float4x4 texture_matrix0;

Here the type of both variables is float4x4. This means that 4 × 4 = 16 float numbers are stored together in each of them, and depending on the type of the operation, all of them participate in it. This way, matrix operations can be implemented with these variables.

Structures

C-like structures can be defined with the keyword struct, as in the following example:

struct MY_STRUCT
{
    float4 Pos : POSITION;
    float3 Pshade : TEXCOORD0;
};

The name of the structure is MY_STRUCT, and it has two fields (their names are Pos and Pshade, their types float4 and float3). For each field, the storing register is defined after the colon (:). I discussed the possible register types in the Shader chapter, although the available register names vary between shader versions. The two types in the example, float4 and float3, are compounds of four and three float numbers, respectively, which are handled together.

Functions

Functions can be also familiar after using C:

MY_STRUCT main (float4 vPosition : POSITION)
{
    MY_STRUCT Out = (MY_STRUCT) 0;

    // Transform position to clip space
    Out.Pos = mul (view_proj_matrix, vPosition);

    // Transform Pshade
    Out.Pshade = mul (texture_matrix0, vPosition);

    return Out;
}

The name of the function is main, and it returns a MY_STRUCT variable. The only input parameter is a float4 variable called vPosition, which is stored in the POSITION register. Two multiplications (the mul intrinsic) are also demonstrated in the example; both are matrix-vector multiplications. By changing only the operand types, it is possible to multiply a vector with another vector, or two matrices with each other as well.

Variable components

It is possible to access the components (x, y, z, w) of the compound variables, such as the vector and matrix components. It is important to mention that binary operations are also performed per component:

float4 c = a * b;

Assuming a and b are both of type float4, this is equivalent to:

float4 c;
c.x = a.x * b.x;
c.y = a.y * b.y;
c.z = a.z * b.z;
c.w = a.w * b.w;

Note that this is not a dot product between 4D vectors.

Samplers

Samplers are used to read values from textures. For each texture map you want to use, a sampler must be declared:

sampler NoiseSampler = sampler_state
{
    // the texture this sampler reads from
    Texture = (tVolumeNoise);
    // trilinear filtering
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    // repeat the texture outside the [0,1] coordinate range
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
    MaxAnisotropy = 16;
};

Effects

The Direct3D library helps developers with an encapsulation technique called effects. Effects are usually stored in a separate text file with the .fx or .fxl extension. They can encapsulate rendering states as well as shaders written in assembly or HLSL.

Techniques

An .fx or .fxl file can contain multiple versions of an effect, which are called techniques. For example, it is possible to support various hardware versions by including several techniques in one effect file. A technique can include multiple passes, and each pass defines which functions serve as the vertex and pixel shader functions:

technique my_technique
{
    pass P0
    {
        VertexShader = compile vs_2_0 vertexFunction();
        PixelShader = compile ps_2_0 pixelFunction();
    }
}

Conclusion

The previous paragraphs briefly introduced the main ideas of HLSL. The shader programs of the demo applications are written in HLSL; for more detailed information, see the corresponding chapter or one of the mentioned references ([IttD9HLSL] or [MHLSLR]).

References

[IttD9HLSL] – Craig Peeper, Jason L. Mitchell: Introduction to the DirectX® 9 High Level Shading Language

[MHLSLR] – Microsoft HLSL Reference


Reflections

Posted on April 7, 2008. Filed under: Lake water shader, Technical background

Static cube-map reflections

If the water does not need to reflect everything, it is possible to use a pre-generated cube map to calculate the reflected colors. Cube maps are a kind of hardware-accelerated texture map (other approaches are, for example, sphere mapping and dual paraboloid mapping). Just imagine a normal cube with six images on its sides. These images are taken like photos from the center point of the cube, and they show what is visible of the surrounding terrain through the points of the sides. An example is shown in the next figure:

Cube map sides

As shown in the following figure, the six sides of the cube are named after the three axes of the coordinate system, x, y and z, in the positive and negative directions:

Cube map center

So we have a cube map and the reflecting surface of the water. For every point of the water, we can calculate the vector that points in the direction of the reflected object. Using this three-dimensional vector (the red one in the last figure), the points of the cube texture can be addressed from the center of the cube. The vector aims at exactly one point of the cube, which has the same color as the reflected object in the original environment. These calculations are hardware-accelerated and efficient enough to match the real-time requirements, while calculating global illumination for every reflecting point would need much more time.

Using cube maps has one more advantage: the cube has sides which represent the parts of the environment that are not visible to the camera, so even points behind the camera can be reflected. On the other hand, cube maps need to be pre-rendered, so it is impossible to reflect a changing environment (for instance, one with moving objects) if we want to meet the real-time conditions. Using this technique, the sky can easily be reflected on the water surface, but a moving boat needs to be handled in another way. Additionally, artifacts can appear at the edges of the cube, which are really hard to avoid.
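In HLSL, addressing the cube map with the reflected vector is a single fetch (a sketch; the sampler and variable names are illustrative):

// reflect the view direction around the surface normal...
float3 R = reflect(-eyeVector, normalVector);
// ...and use the resulting 3D vector itself to address the cube map
float4 reflectedColor = texCUBE(EnvironmentSampler, R);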

