Lake water shader


There are many different types of water in our world, ranging from the surface in a mug to the endless expanse of the ocean. Typically, smaller bodies of water interact mainly with floating or falling objects, while larger surfaces react to the wind and form waves. This chapter describes how to render a medium-sized water area: lakes and rivers with moderate waves.

Preconditions

In this chapter I discuss creating water effects with the following preconditions:

  • Realistic, nice-looking water
  • Medium-sized, flat water surface
  • Moderate interaction with the wind
  • No need for breaking waves or foam
  • No need for underwater effects (the viewpoint is always above the water surface)
  • Real-time performance

Using height-fields or triangle strips can produce very nice-looking effects, but if the waves do not need to break and performance is an important factor, then simpler, shader-only solutions can be a good compromise. To gain efficiency, the water surface will be approximated by a single square.


The main steps of this water effect

  1. Add a plane which reflects everything above it
  2. Set the plane in motion with some ripples
  3. Set the ratio between reflection and refraction on the plane
  4. Add some dull color to make the water look murkier
  5. Ready 🙂


Before using the shader

The water surface will be a square, which means it is represented by only four vertices. This water plane intersects the virtual world at a certain height; wherever the landscape is lower than the water plane, the water is visible. Everywhere else the water is covered by the landscape.

Covered water surface

The landscape can be created, for example, from a height-map. I discuss a technique for this, and for creating a sky-dome, in the General terrain chapter. Those ideas are the basis for the following water effects.


HLSL code

In the HLSL code we define the technique that creates the water effect. It has only one pass, and both shaders can be compiled for shader model 2.0. The definition is the following:

technique Water
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 WaterVS();
        PixelShader = compile ps_2_0 WaterPS();
    }
}

The structure returned by the vertex shader defines the variables passed on to the pixel shader. First, we need to determine the sampling positions that are used later in the pixel shader. These sampling positions are returned (the variable names indicate their use), and Position3D is needed to calculate the accurate eye vector in the pixel shader:

struct WaterVertexToPixel
{
    float4 Position : POSITION;
    float4 ReflectionMapSamplingPos : TEXCOORD1;
    float2 BumpMapSamplingPos : TEXCOORD2;
    float4 RefractionMapSamplingPos : TEXCOORD3;
    float4 Position3D : TEXCOORD4;
};

The structure returned by the pixel shader is much simpler. Using the relayed information, we sample the different textures and calculate the final color of the pixel, which is written to the frame buffer. Only this color value is returned:

struct WaterPixelToFrame
{
    float4 Color : COLOR0;
};

The sampling coordinates for the reflection and refraction maps are calculated by building the necessary matrices. Multiplying the view matrix and the projection matrix gives the view-projection matrix; this is then multiplied with the world matrix to get the world-view-projection matrix, and the reflection view is handled the same way. We can get the sampling positions, for example, with these lines:

WaterVertexToPixel WaterVS(float4 inPos : POSITION, float2 inTex : TEXCOORD)
{
    WaterVertexToPixel Output = (WaterVertexToPixel)0;

    float4x4 preViewProjection = mul(xView, xProjection);
    float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
    float4x4 preReflectionViewProjection = mul(xReflectionView, xProjection);
    float4x4 preWorldReflectionViewProjection = mul(xWorld, preReflectionViewProjection);

    Output.Position = mul(inPos, preWorldViewProjection);
    Output.ReflectionMapSamplingPos = mul(inPos, preWorldReflectionViewProjection);
    Output.RefractionMapSamplingPos = mul(inPos, preWorldViewProjection);

    return Output;
}
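
The listing above leaves out two of the outputs declared in WaterVertexToPixel. As a minimal sketch, the world-space position needed for the eye-vector calculation can be filled in like this (BumpMapSamplingPos is covered in the “Waves” section):

// Sketch: world-space position, needed in the pixel shader to build the eye vector.
Output.Position3D = mul(inPos, xWorld);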

The reflection and refraction maps are sampled in the pixel shader using perturbed positions; the reason for the perturbation is described in the “Waves” section. The projected texture coordinates, to which the perturbation is later added, can be calculated as follows:

WaterPixelToFrame WaterPS(WaterVertexToPixel PSIn)
{
    WaterPixelToFrame Output = (WaterPixelToFrame)0;

    // project the reflection sampling position into [0..1] texture space
    float2 ProjectedTexCoords;
    ProjectedTexCoords.x = PSIn.ReflectionMapSamplingPos.x/PSIn.ReflectionMapSamplingPos.w/2.0f + 0.5f;
    ProjectedTexCoords.y = PSIn.ReflectionMapSamplingPos.y/PSIn.ReflectionMapSamplingPos.w/2.0f + 0.5f;
    // perturbatedTexCoords = ProjectedTexCoords + perturbation (see the "Waves" section)
    float4 reflectiveColor = tex2D(ReflectionSampler, perturbatedTexCoords);

    // the same projection for the refraction map
    float2 ProjectedRefrTexCoords;
    ProjectedRefrTexCoords.x = PSIn.RefractionMapSamplingPos.x/PSIn.RefractionMapSamplingPos.w/2.0f + 0.5f;
    ProjectedRefrTexCoords.y = -PSIn.RefractionMapSamplingPos.y/PSIn.RefractionMapSamplingPos.w/2.0f + 0.5f;
    float4 refractiveColor = tex2D(RefractionSampler, perturbatedRefrTexCoords);

    return Output;
}


Reflections

To reflect the objects above the surface, as described in the Water mathematics chapter, we need an image of the reflected objects that shows the reflected color for each pixel of the water. Before creating the final picture, this image can be rendered into a texture through a new render target and used later for the reflection effect.

In the C# code the new texture (the new render-target) needs to be defined and initialized first:

private RenderTarget2D reflectionRenderTarg;
private Texture2D reflectionMap;
reflectionRenderTarg = new RenderTarget2D(device, 512, 512, 1, SurfaceFormat.Color);

The original viewpoint and view direction need to be mirrored onto the plane of the water (see the Reflection paragraph).

We need to create the matrix of the virtual view by mirroring the original one onto the water plane:

private Matrix reflectionViewMatrix;
float reflectionCamZCoord = -cameraPosition.Z + 2*waterHeight;
Vector3 reflectionCamPosition = new Vector3(cameraPosition.X, cameraPosition.Y, reflectionCamZCoord);

float reflectionTargetZCoord = -targetPos.Z + 2 * waterHeight;
Vector3 reflectionCamTarget = new Vector3(targetPos.X, targetPos.Y, reflectionTargetZCoord);

Vector3 forwardVector = reflectionCamTarget - reflectionCamPosition;
Vector3 sideVector = Vector3.Transform(new Vector3(1, 0, 0), cameraRotation);
Vector3 reflectionCamUp = Vector3.Cross(sideVector, forwardVector);

reflectionViewMatrix = Matrix.CreateLookAt(reflectionCamPosition, reflectionCamTarget, reflectionCamUp);

The entire world (without the water) is then drawn from this virtual view into our temporary render target. To avoid ghost reflections and hidden reflected areas, a clipping plane can be used to discard the objects under the plane of the water. This step eliminates unnecessary rendering and avoids possible artifacts.

Riemer published a very good tutorial on his homepage [Riemer]; I used his solutions in my demo source code as well.

A clipping plane must be set to remove areas that cannot be reflected on the water surface but can hide reflections. The idea is visualized in the next figure:

clipping plane

If we want to get the possible reflections from point A, we have to render the reflection map from point B. But before rendering, we have to remove every underwater object, because they can hide real reflections, as the underwater terrain does in the figure. Although the arrow points to the reflected point when you look onto the water surface, the first intersection from point B is an underwater part of the scene. After removing everything under the water level with a clipping plane, the first intersection point is our desired target, which is reflected on the water.

The clipping plane can be set up as follows to remove the underwater objects:

Vector3 planeNormalDirection = new Vector3(0, 0, 1);
planeNormalDirection.Normalize();
Vector4 planeCoefficients = new Vector4(planeNormalDirection, -waterHeight);
Matrix camMatrix = reflectionViewMatrix * projectionMatrix;
Matrix invCamMatrix = Matrix.Invert(camMatrix);
invCamMatrix = Matrix.Transpose(invCamMatrix);
planeCoefficients = Vector4.Transform(planeCoefficients, invCamMatrix);
Plane reflectionClipPlane = new Plane(planeCoefficients);
device.ClipPlanes[0].Plane = reflectionClipPlane;

After this, the clipping plane and the view matrix can be used by the draw method:

private void DrawReflectionMap()
{

Vector3 planeNormalDirection = new Vector3(0, 0, 1);
planeNormalDirection.Normalize();
Vector4 planeCoefficients = new Vector4(planeNormalDirection, -waterHeight);
Matrix camMatrix = reflectionViewMatrix * projectionMatrix;
Matrix invCamMatrix = Matrix.Invert(camMatrix);
invCamMatrix = Matrix.Transpose(invCamMatrix);
planeCoefficients = Vector4.Transform(planeCoefficients, invCamMatrix);
Plane reflectionClipPlane = new Plane(planeCoefficients);
device.ClipPlanes[0].Plane = reflectionClipPlane;
device.ClipPlanes[0].IsEnabled = true;
device.SetRenderTarget(0, reflectionRenderTarg);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
DrawTerrain(reflectionViewMatrix);
DrawSkyDome(reflectionViewMatrix);
device.ResolveRenderTarget(0);
reflectionMap = reflectionRenderTarg.GetTexture();
device.SetRenderTarget(0, null);
device.ClipPlanes[0].IsEnabled = false;

}

Note that, to restore the original state, the clipping plane is disabled at the end of the draw method. The terrain and the sky-dome are drawn using the matrix of the virtual view (reflectionViewMatrix) for the reasons discussed earlier. At this point all the reflection data is stored in a texture.

In the next figure, the reflection map shown on the right was used to produce the image on the left:

Lake water scene and its reflection map

The reflection map is captured from underneath the water level, as discussed earlier, and from that point it is possible to see what is “behind the sky”. This results in the black area in the image.


Refractions

The method to produce a refraction map is similar to that of the reflection map. There is no need to change the viewpoint, since the virtual and the original views are the same, but the clipping plane needs to be inverted, as everything below (and not above) the water level is to be rendered.

RenderTarget2D refractionRenderTarg;
Texture2D refractionMap;
refractionRenderTarg = new RenderTarget2D(device, 512, 512, 1, SurfaceFormat.Color);

The clipping plane is applied on the graphics hardware, and therefore the vertices are already in camera space when they are compared to the plane. Because of this, the plane needs to be transformed with the transposed inverse of the camera matrix. This can be achieved by the following lines:

Vector3 planeNormalDirection = new Vector3(0, 0, -1);
planeNormalDirection.Normalize();
Vector4 planeCoefficients = new Vector4(planeNormalDirection, 5.0f);
Matrix camMatrix = viewMatrix * projectionMatrix;
Matrix invCamMatrix = Matrix.Invert(camMatrix);
invCamMatrix = Matrix.Transpose(invCamMatrix);
planeCoefficients = Vector4.Transform(planeCoefficients, invCamMatrix);
Plane refractionClipPlane = new Plane(planeCoefficients);

In the draw method the clipping plane needs to be created, applied, and finally the original state needs to be restored after drawing onto a texture. The source code for this is the following:

private void DrawRefractionMap()
{

Vector3 planeNormalDirection = new Vector3(0, 0, -1);
planeNormalDirection.Normalize();
Vector4 planeCoefficients = new Vector4(planeNormalDirection, 5.0f);
Matrix camMatrix = viewMatrix * projectionMatrix;
Matrix invCamMatrix = Matrix.Invert(camMatrix);
invCamMatrix = Matrix.Transpose(invCamMatrix);
planeCoefficients = Vector4.Transform(planeCoefficients, invCamMatrix);
Plane refractionClipPlane = new Plane(planeCoefficients);
device.ClipPlanes[0].Plane = refractionClipPlane;
device.ClipPlanes[0].IsEnabled = true;
device.SetRenderTarget(0, refractionRenderTarg);

device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

DrawTerrain();
device.ResolveRenderTarget(0);
refractionMap = refractionRenderTarg.GetTexture();
device.SetRenderTarget(0, null);
device.ClipPlanes[0].IsEnabled = false;

}

At this point of the source code the refraction data is also stored in a texture. In the next sections both maps are used to create the final image.

An example is given in the next figure. The refraction map shown on the right was used to produce the image on the left:

Lake water scene and its refraction map

Notice that the clipping plane is not set exactly at the water level but a little higher, to avoid artifacts at the edges. To gain performance, the refraction map is half the size of the original image, just like the reflection map.


Fresnel term

Calculating the Fresnel term exactly is quite complex. To blend the previously determined reflected and refracted colors, we need the proper ratio between them. In this demo application I use several solutions to approximate the Fresnel effect.

The reflection and refraction colors are blended depending on the cosine of the angle between the eye vector and the normal vector. As both vectors are one unit long, the cosine of the angle is given by their dot product.
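
In the snippets below, eyeVector and normalVector are assumed to be already available. A minimal sketch of how they can be built, assuming the camera position is passed in as an xCamPos effect parameter (the flat-surface normal points along +Z in this demo, and the bump-map perturbation can also be folded into it):

// Sketch: the unit-length vectors used by the Fresnel approximations.
// xCamPos is an assumed effect parameter holding the camera position.
float3 eyeVector = normalize(xCamPos - PSIn.Position3D.xyz);
// flat water points straight up in this Z-up world
float3 normalVector = float3(0, 0, 1);
float fresnelTerm;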

The first solution is the projection of the eye vector onto the normal vector, which approximates the Fresnel term reasonably well, but not accurately enough. The projection is calculated with the dot product and then adjusted slightly so that the result is comparable to the approaches described later:

if ( fresnelMode == 0 )
{
    fresnelTerm = dot(eyeVector, normalVector);
    // correction
    fresnelTerm = 1 - fresnelTerm*1.3f;
}

The next approach uses the formula of the Realistic compromise chapter. For more details, see [EMaRoTWoNT].

if ( fresnelMode == 1 )
{
    fresnelTerm = 0.02f + 0.97f*pow((1 - dot(eyeVector, normalVector)), 5);
}

The third approximation is discussed in [CgTOOLKIT]. It also calculates the dot product between the eye vector and the normal vector, adds 1 to it, and then divides 1 by the fifth power of this value:

if ( fresnelMode == 2 )
{
    float fangle = 1 + dot(eyeVector, normalVector);
    fangle = pow(fangle, 5);
    fresnelTerm = 1/fangle;
}

To make the effect adjustable, the xDrawMode input variable scales the Fresnel value. The result is then clamped between 0 and 1. Finally, the reflection and refraction colors are combined:

//Hardness factor – user input
fresnelTerm = fresnelTerm * xDrawMode;

//just to be sure that the value is between 0 and 1;
fresnelTerm = fresnelTerm < 0? 0 : fresnelTerm;
fresnelTerm = fresnelTerm > 1? 1 : fresnelTerm;

// creating the combined color
float4 combinedColor = refractiveColor*(1-fresnelTerm) + reflectiveColor*(fresnelTerm);

Some screenshots of water with the differently adjusted Fresnel value in the demo application:

Lake water with different Fresnel settings


Waves

To create an efficient wave effect for a larger area, the number of vertices must be limited. In the lake water shader I used the following optimizations:

  • The water effect is created in the pixel shader only, so the water surface can be made of a very limited number of vertices.
  • The wave motion is created only with a bump map.

In this water effect the ripples of the water are animated by a moving bump map; see [Riemer] for more details. From an original wave picture it is possible to create the gradient map of the image, which shows the perturbations of the surface. A gradient map is the same size as the original picture, and every pixel stores a vector in its RGB components. This vector defines the deviation from the original normal vector at every single point of the image. The original normal vector of an absolutely flat surface is (0;0;1). For the stored values, the range (-1;1) is scaled to the range of the color components: 1 maps to the maximum (255), 0 to the middle (128), and -1 to the minimum (0). For example, the vector (0;0;1) is stored as (128;128;255). As long as the perturbations are not very significant, every pixel of the image has values similar to (128;128;255), which means the blue component always has the highest value. This results in a predominantly blue gradient map:

Lake water gradient map
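
The mapping between a normal component in the (-1;1) range and a color channel is thus a simple linear scaling. As a sketch (the actual decode used by the demo appears at the end of this section), encoding and decoding look like this:

// Sketch: encoding a normal component into a color channel and back.
// encode (done offline when the bump map is generated): color = normal*0.5f + 0.5f;
// decode (done in the pixel shader):
float3 storedColor = float3(0.5f, 0.5f, 1.0f);   // a pixel of a flat area, i.e. (128;128;255)
float3 normal = 2.0f*(storedColor - 0.5f);       // gives back (0;0;1)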

Gradient maps are usually called bump maps in graphics development. In the XNA code we need to load the bump map and pass it as a parameter to the shaders, along with the elapsed time, to get the waves moving:

private Texture2D waterBumpMap;
waterBumpMap = content.Load<Texture2D>("waterbump");
effect.Parameters["xWaterBumpMap"].SetValue(waterBumpMap);
elapsedTime += (float)gameTime.ElapsedGameTime.Milliseconds / 100000.0f;
effect.Parameters["xTime"].SetValue(elapsedTime);

In the HLSL code, the input bump map and its sampler are defined:

Texture xWaterBumpMap;
sampler WaterBumpMapSampler = sampler_state { texture = <xWaterBumpMap>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; };

The vertex shader passes the texture coordinates to the pixel shader. Note that the inTex value is divided by the wave length so that the bump map is stretched over the entire surface. Adding a time-dependent move vector sets the waves in motion:

struct WaterVertexToPixel
{
    float4 Position : POSITION;
    float4 ReflectionMapSamplingPos : TEXCOORD1;
    float2 BumpMapSamplingPos : TEXCOORD2;
};


float2 moveVector = float2(0, 1);
Output.BumpMapSamplingPos = inTex/xWaveLength + xTime*xWindForce*moveVector;

At the beginning of the pixel shader code, the bump map is sampled, and the sampled values are shifted back around zero and scaled by the wave height variable. Finally, the perturbation is added to the projected coordinates:

float4 bumpColor = tex2D(WaterBumpMapSampler, PSIn.BumpMapSamplingPos);
float2 perturbation = xWaveHeight*(bumpColor.rg - 0.5f);
float2 perturbatedTexCoords = ProjectedTexCoords + perturbation;
// the refraction coordinates are perturbed the same way
float2 perturbatedRefrTexCoords = ProjectedRefrTexCoords + perturbation;

The result is obtained by sampling the reflection and refraction maps with the perturbed coordinates:

float4 reflectiveColor = tex2D(ReflectionSampler, perturbatedTexCoords);
float4 refractiveColor = tex2D(RefractionSampler, perturbatedRefrTexCoords);

To avoid incorrect edges at the border, the clipping plane can be set to a slightly higher point:

Vector4 planeCoefficients = new Vector4(planeNormalDirection, -waterHeight+1.0f);

In the final version of the source code, the wind direction is also a parameter, so the water can move along the river; the rotation matrices are generated in the XNA code to gain some performance.
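
A minimal sketch of the idea, assuming the XNA code passes the normalized wind direction in a float2 xWindDirection effect parameter (the actual demo precomputes a rotation matrix on the CPU instead):

// Sketch: letting the wind direction steer the bump-map animation in the vertex shader.
// xWindDirection is an assumed effect parameter set from the XNA code.
float2 moveVector = xWindDirection;
Output.BumpMapSamplingPos = inTex/xWaveLength + xTime*xWindForce*moveVector;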


Adding dull color

To get a more realistic result, some dark bluish color is added to the final water color. This can also be adjusted by the user:

float4 dullColor = float4(0.1f, 0.1f, 0.2f, 1.0f);
float dullBlendFactor = xdullBlendFactor;
Output.Color = (dullBlendFactor*dullColor + (1-dullBlendFactor)*combinedColor);


Specular highlights

Specular highlights are approximated by adding some light color to specific areas, as the Phong illumination model describes. For computational reasons, the half-vector is used instead of the reflection vector; for more details see the Water mathematics chapter. The half-vector is also approximated, and some perturbation from the bump map is added, scaled by specPerturb. In this demo I used the following code:

float4 speccolor;
float3 lightSourceDir = normalize(float3(0.1f,0.6f,0.5f));
float3 halfvec = normalize(eyeVector+lightSourceDir+float3(perturbation.x*specPerturb,perturbation.y*specPerturb,0));

The angle between the surface normal and the half-vector is calculated using the dot product between them. An input variable (specPower) adjusts the exponent, so that specular highlights appear only where the angle between the two vectors is very small. Finally, the specular color is added to the original one.

float3 temp = 0;
temp.x = pow(dot(halfvec,normalVector),specPower);
speccolor = float4(0.98,0.97,0.7,0.6);
speccolor = speccolor*temp.x;
speccolor = float4(speccolor.x*speccolor.w,speccolor.y*speccolor.w,speccolor.z*speccolor.w,0);
Output.Color = Output.Color + speccolor;

Some screenshots of lake water specular highlights:

Lake water specular highlights with different settings


References

[Riemer] – Riemer Grootjans http://www.riemers.net/

[CgTOOLKIT] – Cg Toolkit: A Developer’s Guide to Programmable Graphics

[EMaRoTWoNT] – Nathan Holmberg and Burkhard C. Wünsche: Efficient Modeling and Rendering of Turbulent Water over Natural Terrain

