Lake water shader
There are many different types of water in our world, ranging from the small surface in a mug to the endless ocean. Typically, small bodies of water interact with floating or falling objects, while larger surfaces react to the wind and form waves. This chapter describes an example of rendering a medium-sized water area: smaller lakes and rivers with moderate waves.
In this chapter I discuss creating water effects with the following preconditions:
- A realistic, nice-looking water surface
- A medium-sized, flat water surface
- Moderate interaction with the wind
- No need for breaking waves or foam
- No need for underwater effects (the viewpoint is always above the water surface)
- Real-time performance
Using height fields or triangle strips can produce very nice-looking effects, but if the waves do not need to break and performance is an important factor, then simpler, shader-only solutions can be a good compromise. To gain efficiency, the water surface will be approximated by a single square.
The main steps of this water effect are the following:
- Adding a plane which reflects everything above it
- Setting the plane in motion with ripples
- Setting the ratio between reflection and refraction on the plane
- Adding some dull color to make the water look dirtier
Before using the shader
The water surface will be a square, which means that it is represented by only four vertices. This water plane intersects the virtual world at a certain height; wherever the landscape is lower than the water plane, the water is visible. Everywhere else the water is covered by the landscape.
The landscape can be created, for example, from a height map. I discuss a technique for this, and for creating a sky dome, in the General terrain chapter. Those ideas form the basis of the following water effects.
In the HLSL code we define the technique that creates the water effect. It has only one pass, and both shaders in it can be set to version 2.0. The definition is the following:
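A minimal sketch of such a technique definition (the entry-point names WaterVS and WaterPS are illustrative):

```hlsl
technique Water
{
    pass Pass0
    {
        // Both shaders are compiled against shader model 2.0
        VertexShader = compile vs_2_0 WaterVS();
        PixelShader  = compile ps_2_0 WaterPS();
    }
}
```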
The structure returned by the vertex shader holds the following variables. First, we need to determine the sampling positions which are used later in the pixel shader. The sampling positions are returned (the names of the variables show their use), and Position3D is needed to be able to calculate the accurate eye vector in the pixel shader:
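A possible layout for this structure, following the naming conventions of [Riemer]'s tutorial (the exact semantics are an assumption):

```hlsl
struct WaterVertexToPixel
{
    float4 Position                 : POSITION;   // clip-space vertex position
    float4 ReflectionMapSamplingPos : TEXCOORD1;  // projective coordinates into the reflection map
    float4 RefractionMapSamplingPos : TEXCOORD2;  // projective coordinates into the refraction map
    float2 BumpMapSamplingPos       : TEXCOORD3;  // coordinates into the moving bump map
    float4 Position3D               : TEXCOORD4;  // world-space position for the eye-vector calculation
};
```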
The structure returned by the pixel shader is much simpler. Using the relayed information, we sample the different textures and calculate the final color of the pixel, which is written to the frame buffer. Only this color value is returned:
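A sketch of this structure:

```hlsl
struct WaterPixelToFrame
{
    float4 Color : COLOR0;  // the only value written to the frame buffer
};
```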
Calculating the sampling coordinates for the reflection and refraction maps requires creating the necessary matrices. Multiplying the view matrix and the projection matrix results in the view-projection matrix; this is then multiplied by the world matrix to get the world-view-projection matrix, and similarly for the reflected view. We can get the sampling positions, for example, with these lines:
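For example (xWorld, xView, xProjection and xReflectionView are matrices passed in from the application; the names are illustrative):

```hlsl
float4x4 preViewProjection = mul(xView, xProjection);
float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
float4x4 preReflectionViewProjection = mul(xReflectionView, xProjection);
float4x4 preWorldReflectionViewProjection = mul(xWorld, preReflectionViewProjection);

Output.Position = mul(inPos, preWorldViewProjection);
// The refraction map is rendered from the original camera,
// the reflection map from the mirrored one:
Output.RefractionMapSamplingPos = mul(inPos, preWorldViewProjection);
Output.ReflectionMapSamplingPos = mul(inPos, preWorldReflectionViewProjection);
Output.Position3D = mul(inPos, xWorld);
```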
The reflection and refraction maps are sampled in the pixel shader using the perturbed positions. The reason for using perturbations is described in the Waves section. The perturbed texture coordinates can be calculated as follows:
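A sketch of the pixel-shader fragment for the reflection map (the perturbation value comes from the bump map, as the Waves section explains; the same pattern applies to the refraction map):

```hlsl
// Homogeneous divide, then map from [-1,1] into [0,1] texture space:
float2 ProjectedTexCoords;
ProjectedTexCoords.x =  PSIn.ReflectionMapSamplingPos.x / PSIn.ReflectionMapSamplingPos.w / 2.0f + 0.5f;
ProjectedTexCoords.y = -PSIn.ReflectionMapSamplingPos.y / PSIn.ReflectionMapSamplingPos.w / 2.0f + 0.5f;
// Offset the coordinates with the bump-map-driven perturbation before sampling:
float2 perturbatedTexCoords = ProjectedTexCoords + perturbation;
```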
To be able to reflect the objects above the surface as described in the Water mathematics chapter, we need an image of the reflected objects which shows the reflected color for each pixel of the water. Before creating the final picture, this image can be rendered into a texture as a new render target and later used for the reflection effect.
In the C# code the new texture (the new render-target) needs to be defined and initialized first:
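An illustrative XNA 3.x sketch (field and variable names are assumptions):

```csharp
RenderTarget2D reflectionRenderTarget;
Texture2D reflectionMap;

// In LoadContent():
PresentationParameters pp = device.PresentationParameters;
reflectionRenderTarget = new RenderTarget2D(device,
    pp.BackBufferWidth / 2, pp.BackBufferHeight / 2,  // half size to gain performance
    1, device.DisplayMode.Format);
```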
The original viewpoint and view direction need to be mirrored onto the plane of the water (see the Reflection section).
We need to create the matrix of the virtual view by mirroring the original one onto the water plane:
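One way to build this matrix is to mirror both the camera position and its target across the water plane, in the style of [Riemer]'s tutorial (an XNA sketch; variable names are assumptions):

```csharp
// Mirror the camera position and target onto the plane y = waterHeight:
Vector3 reflCameraPosition = new Vector3(cameraPosition.X,
    -cameraPosition.Y + 2 * waterHeight, cameraPosition.Z);
Vector3 reflTargetPos = new Vector3(targetPos.X,
    -targetPos.Y + 2 * waterHeight, targetPos.Z);

// The side vector is unchanged; recompute the up vector from it:
Vector3 forwardVector = reflTargetPos - reflCameraPosition;
Vector3 sideVector = Vector3.Transform(new Vector3(1, 0, 0), cameraRotation);
Vector3 reflectionCamUp = Vector3.Cross(sideVector, forwardVector);

reflectionViewMatrix = Matrix.CreateLookAt(reflCameraPosition, reflTargetPos, reflectionCamUp);
```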
After the entire world (without the water) is drawn from the virtual view, the image needs to be rendered onto our temporary render target. To avoid ghost reflections and hidden reflected areas, a clipping plane can be used to discard the objects under the plane of the water. This step helps eliminate unnecessary rendering and avoid possible artifacts.
Riemer published a very good tutorial on his homepage [Riemer]. I used his solutions in my demo source code as well.
Clipping planes must be set to remove areas which cannot be reflected on the water surface but can hide reflections. The idea is visualized in the next figure:
If we want to get the possible reflections from point A, we have to render the reflection map from point B. But before rendering, we have to remove every underwater object, because they can hide real reflections, as the underwater terrain does in the figure. Although the arrow points to the reflected point when looking at the water surface, the first intersection from point B is an underwater part of the scene. After removing everything under the water level with a clipping plane, the first intersection point will be our desired target, which is reflected on the water.
The clipping planes can be set up as follows to remove the underwater objects:
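A sketch in the style of [Riemer]'s tutorial: the plane is built in world space and then transformed into the space the hardware clips in (names are assumptions):

```csharp
// Keep only geometry above the water: y - waterHeight >= 0
Vector4 planeCoefficients = new Vector4(0, 1, 0, -waterHeight);

// Clip planes are compared against transformed vertices, so the plane
// must be transformed with the inverse-transpose of view * projection:
Matrix inverseWVP = Matrix.Invert(reflectionViewMatrix * projectionMatrix);
planeCoefficients = Vector4.Transform(planeCoefficients, Matrix.Transpose(inverseWVP));

device.ClipPlanes[0].Plane = new Plane(planeCoefficients);
device.ClipPlanes[0].IsEnabled = true;
```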
After this, the clipping plane and the view matrix can be used by the draw method:
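A sketch of the reflection-map pass (XNA 3.x render-target API; the draw-method names are illustrative):

```csharp
device.SetRenderTarget(0, reflectionRenderTarget);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

// Everything except the water, drawn from the mirrored view:
DrawSkyDome(reflectionViewMatrix);
DrawTerrain(reflectionViewMatrix);

device.ClipPlanes[0].IsEnabled = false;       // restore the original state
device.SetRenderTarget(0, null);
reflectionMap = reflectionRenderTarget.GetTexture();
```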
Note that, to restore the original state, the clipping plane is disabled at the end of the draw method. The terrain and the sky dome are drawn using the matrix of the virtual view (reflectionViewMatrix) for the reasons discussed earlier. At this point all the reflection data is stored in a texture.
In the next figure, the reflection map is shown on the right; it was used to produce the image on the left:
The reflection map is captured from underneath the water level, as discussed earlier, and from that point it is possible to see what is “behind the sky”. This results in the black area in the image.
The method to produce a refraction map is similar to that of the reflection map. There is no need to change the viewpoint, as the virtual and original view vectors are the same, but the clipping plane needs to be inverted, since everything below, not above, the water level is to be rendered.
The clipping plane is applied by the graphics hardware, and the vertices are therefore already transformed into camera space when they are compared to the plane. Because of this, the plane needs to be transformed with the inverse of the camera matrix. This can be achieved by the following lines:
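For the refraction map the plane keeps everything below the water (with a small offset, as noted later); a sketch under the same assumptions as before:

```csharp
// Keep only geometry below the water: -y + (waterHeight + offset) >= 0
Vector4 planeCoefficients = new Vector4(0, -1, 0, waterHeight + 0.2f);

// Transform the world-space plane with the inverse-transpose of view * projection:
Matrix inverseWVP = Matrix.Invert(viewMatrix * projectionMatrix);
planeCoefficients = Vector4.Transform(planeCoefficients, Matrix.Transpose(inverseWVP));
Plane refractionClipPlane = new Plane(planeCoefficients);
```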
In the draw method the clipping plane needs to be created, applied, and finally the original state needs to be restored after drawing onto a texture. The source code for this is the following:
The refraction data is also stored in a texture at this point. In the next section both maps are used to create the final image.
An example is given on the next figure. The refraction map is shown on the right which was used to produce the image on the left:
Notice that the clipping plane is set not exactly at the water level but a little higher, to avoid artifacts at the edges. To gain performance, the refraction map is half the size of the original image, just like the reflection map.
Operations to calculate the Fresnel term correctly are very complex. To blend the previously determined reflected and refracted colors, we need the proper ratio between them. In this demo application I use various solutions to approximate the Fresnel effect.
Reflection and refraction colors need to be blended depending on the cosine of the angle between the eye vector and the normal vector. As both of these vectors are unit length, the cosine of the angle can be determined by their dot product.
The first solution is the projection of the eye vector onto the normal vector, which approximates the Fresnel term relatively well, but not accurately enough. The projection can be calculated by the dot product. It is only adjusted with some terms to get results similar to the approaches described later:
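A sketch of this approximation in the pixel shader (the 1.3 adjustment factor is illustrative):

```hlsl
// eyeVector and normalVector are normalized; the scaling factor only
// brings the result closer to the other two approximations.
float fresnelTerm = 1 - dot(eyeVector, normalVector) * 1.3f;
```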
The next approach uses the formula from the Realistic compromise chapter. For more details, see [EMaRoTWoNT].
The third approximation is discussed in [CgTOOLKIT]. It also calculates the dot product of the eye vector and the normal vector. After adding 1 to this, we divide 1 by the fifth power of the sum to get the result:
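In HLSL this can be sketched as:

```hlsl
// Approximation from [CgTOOLKIT]: 1 / (1 + cos(angle))^5
float dotProduct = dot(eyeVector, normalVector);
float fresnelTerm = 1.0f / pow(1.0f + dotProduct, 5);
```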
To be able to adjust the settings, the xDrawMode input variable influences the Fresnel value, which is then clamped between 0 and 1. Finally, the reflection and refraction values are combined:
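A sketch of the blending step (how exactly xDrawMode scales the term is an assumption):

```hlsl
fresnelTerm = saturate(fresnelTerm * xDrawMode);  // clamp to [0,1]
// A high Fresnel term means a more reflective surface:
float4 combinedColor = lerp(refractiveColor, reflectiveColor, fresnelTerm);
```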
Some screenshots of water with the differently adjusted Fresnel value in the demo application:
To create an efficient wave effect for a bigger area, the number of vertices must be limited. In the lake water shader I used the following optimization techniques:
- Creating the water effect only in the pixel shader; in this manner, the water can be made of a very limited number of vertices
- Creating the wave motion effect only with a bump map
In this water effect the ripples of the water are animated by a moving bump map; see [Riemer] for more details. From an original wave picture it is possible to create the gradient map of the image, which shows the perturbations of the surface. A gradient map is the same size as the original picture, and every pixel stores a vector in its RGB components. This vector defines the deviation from the original normal vector at every single point of the image. The original normal vector for an absolutely flat surface is (0;0;1). For the calculations, the value range (-1;1) is scaled to the values of the color components: 1 becomes the maximum (255), 0 is scaled to the middle (128), and -1 to the minimum (0). For example, the vector (0;0;1) is scaled to (128;128;255). As long as the perturbations are not very significant, every pixel of the image has values similar to (128;128;255), which means the blue component always has the highest value. This results in a predominantly blue gradient map:
Gradient maps are usually called bump maps in graphics development. In the XNA code we need to load the bump map and pass it as a parameter to the shaders, just like the elapsed time that keeps the waves in motion:
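An XNA sketch (the asset and parameter names are assumptions):

```csharp
// In LoadContent():
waterBumpMap = Content.Load<Texture2D>("waterbump");

// Each frame, before drawing the water:
effect.Parameters["xWaterBumpMap"].SetValue(waterBumpMap);
effect.Parameters["xTime"].SetValue(time);  // the elapsed time keeps the waves moving
```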
In the HLSL code, the input bump map and its sampler are defined:
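For example:

```hlsl
Texture xWaterBumpMap;
sampler WaterBumpMapSampler = sampler_state
{
    texture = <xWaterBumpMap>;
    magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR;
    AddressU = mirror; AddressV = mirror;
};
```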
The vertex shader passes the texture coordinates to the pixel shader. Note that the inTex value is divided by the wavelength to stretch the bump map over the entire surface. Adding a time-dependent move vector makes the waves move:
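A vertex-shader sketch (xWaveLength, xTime and xWindForce are illustrative input variables):

```hlsl
// Stretch the bump map over the surface, then move it over time:
float2 moveVector = float2(0, xTime * xWindForce);
Output.BumpMapSamplingPos = inTex / xWaveLength + moveVector;
```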
At the beginning of the pixel shader code, the bump map is sampled, and the values are scaled back and multiplied by the wave-height variable. Finally, the perturbation is added to the original coordinates:
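A sketch of this step:

```hlsl
float4 bumpColor = tex2D(WaterBumpMapSampler, PSIn.BumpMapSamplingPos);
// Scale the [0,1] color values back to the [-1,1] range and apply the wave height:
float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f) * 2.0f;
float2 perturbatedTexCoords = ProjectedTexCoords + perturbation;
```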
The result can be obtained by sampling the reflection and refraction maps with the perturbed coordinates:
To avoid incorrect edges at the border, the clipping plane can be set to a slightly higher point:
In the final version of the source code, the wind direction is also a parameter, to make the water move along the river, and the rotation matrices are generated in the XNA code to gain some performance.
Adding dull color
To get a more realistic result, some dark bluish color is added to the final water color. This can also be adjusted by the user:
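A sketch of the blending (the tint value and the xDullBlendFactor parameter are illustrative):

```hlsl
float4 dullColor = float4(0.1f, 0.1f, 0.2f, 1.0f);  // dark bluish tint
Output.Color = lerp(combinedColor, dullColor, xDullBlendFactor);
```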
Specular highlights are approximated by adding some light color to specific areas, as the Phong illumination model describes. For computational reasons, the half-vector is used instead of the reflection vector; for more details, see the Water mathematics chapter. The half-vector is also approximated, and some perturbation is added from the values of the bump map (specPerturb). In this demo, I used the following code for this:
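A sketch of this calculation (the light direction, camera position and the xSpecPerturb/xSpecPower parameters are illustrative):

```hlsl
float3 eyeVector = normalize(xCamPos - PSIn.Position3D.xyz);
// Approximated half-vector between the eye and light directions:
float3 halfVector = normalize(eyeVector + xLightDirection);
// Perturb it with the bump-map values:
halfVector = normalize(halfVector + xSpecPerturb * (bumpColor.rbg - 0.5f));
// A high power keeps the highlight only where the angle is very small:
float specular = pow(saturate(dot(halfVector, normalVector)), xSpecPower);
Output.Color.rgb += specular;
```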
The angle between the surface normal and the half-vector is calculated using their dot product. An input variable (specpower) adjusts the power, so that specular highlights appear only when the angle between the vectors is very small. Finally, the specular color is added to the original one.
Some screenshots of lake water specular highlights:
[Riemer] – Riemer Grootjans – http://www.riemers.net/
[CgTOOLKIT] – Cg Toolkit: A Developer’s Guide to Programmable Graphics