Shader (shading model)
The algorithm that produces the color across the surface of an object. Basically, the shader incorporates:
- The surface normal information.
The orientation of the object surface.
- The general surface attributes.
Such as transparency and color.
- The surface reflectance attributes.
How much the object reflects its surroundings.
- The lighting model.
How many lights, of what type and from what direction, illuminate a particular object.
Lambert
Calculates the surface by interpolating between the normals of adjacent polygons, resulting in a smoothly shaded object. Surface reflectance is not incorporated, which makes the Lambert shader suitable for matte surfaces with an unpolished, chalk-like look. This shader is based on Lambert's cosine law, named after Johann Heinrich Lambert, an eighteenth-century mathematician, physicist and astronomer. Lambert's cosine law simply states that the intensity of light on a surface is proportional to the cosine of the angle between the incoming light and the surface normal.
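A minimal sketch of the cosine law in code (plain Python; the function names here are purely illustrative):

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def lambert(normal, light_dir, surface_color, light_color):
    """Diffuse term only: intensity is proportional to the cosine of the
    angle between the surface normal and the direction toward the light."""
    n = normalize(normal)
    l = normalize(light_dir)
    cos_angle = max(0.0, dot(n, l))          # Lambert's cosine law
    return tuple(sc * lc * cos_angle for sc, lc in zip(surface_color, light_color))

# Light hitting the surface at 60 degrees gives half the intensity of a head-on light.
print(lambert((0, 0, 1), (0, 0, 1), (1, 1, 1), (1, 1, 1)))
print(lambert((0, 0, 1), (0, math.sin(math.radians(60)), math.cos(math.radians(60))), (1, 1, 1), (1, 1, 1)))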
Gouraud
Calculates the lighting at the polygon vertices and linearly interpolates the resulting colors across each polygon. The Gouraud shader is a simplified version of the Lambert shader and is especially suitable for real-time rendering on graphics hardware, i.e. hardware rendering. Developed by Henri Gouraud in 1971.
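A sketch of the interpolation step, assuming the three vertex colors have already been computed with a lighting model such as the Lambert term above:

def gouraud_shade(vertex_colors, barycentric):
    """Gouraud shading: lighting is evaluated once per vertex and the
    resulting colors are linearly interpolated across the polygon for
    every pixel."""
    w0, w1, w2 = barycentric                 # barycentric weights sum to 1
    c0, c1, c2 = vertex_colors
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

# A pixel at the centroid of the triangle blends the three vertex colors equally.
print(gouraud_shade([(1, 0, 0), (0, 1, 0), (0, 0, 1)], (1/3, 1/3, 1/3)))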
Phong
Calculates the normal separately for every pixel on the surface and also processes the relation between the normal, the direction of the light source and the direction of the camera's point of view. This method gives a much better representation of surface curvature. Phong is best suited to plastic-like materials. Though much more computationally expensive than Lambert or Gouraud, Phong is the most popular shading method today. Developed by Bui Tuong Phong in 1973.
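A minimal sketch of the Phong specular term (the shininess exponent and function names are illustrative):

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def reflect(l, n):
    """Reflect the light direction l about the normal n (both unit vectors)."""
    d = dot(n, l)
    return tuple(2 * d * nx - lx for nx, lx in zip(n, l))

def phong_specular(normal, light_dir, view_dir, shininess=32.0):
    """Phong specular term: the highlight depends on the angle between the
    reflected light direction and the direction toward the camera."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    r = reflect(l, n)
    return max(0.0, dot(r, v)) ** shininess

# The highlight is brightest when the camera sits exactly on the mirror direction.
print(phong_specular((0, 0, 1), (0, 1, 1), (0, -1, 1)))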
Blinn
Calculates the surface very much like Phong, except that the shape of the specular highlight reflects the actual lighting more accurately. Suitable for metallic materials. Developed by James (Jim) Blinn in 1977.
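A sketch of Blinn's half-vector variant of the specular term, under the same assumptions as the Phong sketch above:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def blinn_specular(normal, light_dir, view_dir, shininess=32.0):
    """Blinn's variant: instead of a reflection vector, use the 'half vector'
    halfway between the light and view directions; its angle to the normal
    controls the highlight, giving a slightly different highlight shape."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(tuple(a + b for a, b in zip(l, v)))   # half vector
    return max(0.0, dot(n, h)) ** shininess

print(blinn_specular((0, 0, 1), (0, 1, 1), (0, -1, 1)))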
Constant (Uniform)
The simplest possible shader: it includes only the surface color, which is constant across the surface, because surface normal, reflectance and lighting information are not calculated at all. Useful with texture-mapped surfaces when the texture image should be reproduced on the object as close to its original colors as possible.
Texture mapping
A method for wrapping a 2D pattern or image around a 3D surface. Texture mapping can add dramatic realism even to the most rudimentary geometry.
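A minimal sketch of the lookup step, assuming the surface already provides UV coordinates in the range 0 to 1; only nearest-neighbour sampling is shown, real renderers also filter the result:

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a 2D texture at UV coordinates in [0, 1].
    'texture' is a list of rows of RGB tuples."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checker texture; the surface's UV parameterization decides which
# texel ends up where on the 3D object.
checker = [[(1, 1, 1), (0, 0, 0)],
           [(0, 0, 0), (1, 1, 1)]]
print(sample_texture(checker, 0.1, 0.1))   # white texel
print(sample_texture(checker, 0.9, 0.1))   # black texel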
Texture sources are typically:
2D texture
Typically an image from various sources: painted, scanned, rendered, etc.
3D (procedural) texture
Random patterns (such as marble, wood and clouds) generated by mathematical algorithms.
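A toy sketch of a procedural texture evaluated directly from a 3D position; the exact formula is invented for illustration, real implementations typically build on fractal noise:

import math

def marble(x, y, z, stripes=5.0, turbulence=2.0):
    """A toy procedural 'marble' value in [0, 1]: a sine-wave stripe pattern
    distorted by a cheap deterministic wobble."""
    jitter = math.sin(12.9898 * x + 78.233 * y + 37.719 * z)
    return 0.5 + 0.5 * math.sin(stripes * x + turbulence * jitter)

# Because the pattern is a function of (x, y, z), it needs no UV coordinates
# and can be sampled anywhere inside the object (a 'solid' texture).
print(marble(0.0, 0.0, 0.0), marble(0.3, 0.7, 0.1))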
Mapping can be input to various shader parameters:
Transparency mapping
The map drives the object's transparency: white areas of the texture image become transparent, black areas stay opaque.
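A sketch of the idea, following the white-transparent/black-opaque convention described above (some packages use the opposite convention):

def transparency_from_map(texel):
    """Transparency mapping: the map's brightness drives opacity.
    White (1.0) becomes fully transparent, black (0.0) stays opaque."""
    r, g, b = texel
    luminance = (r + g + b) / 3.0
    return 1.0 - luminance          # returned value is opacity

print(transparency_from_map((1.0, 1.0, 1.0)))   # 0.0 -> fully transparent
print(transparency_from_map((0.0, 0.0, 0.0)))   # 1.0 -> opaque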
Bump mapping
The map perturbs the surface normals so that the object looks bumpy and rough. White areas in the texture image tilt the surface normal in the opposite direction from black areas. This is only a pseudo-effect: it applies only to the shading in rendered images, and the contour of the surface remains smooth.
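A rough sketch of the idea: the slope of a grey-scale height map tilts the shading normal while the geometry itself stays untouched; a real implementation works in tangent space:

def bump_normal(height_map, x, y, normal, strength=1.0):
    """Bump mapping sketch: the gradient of the height map tilts the shading
    normal, so lighting suggests bumps although the silhouette is unchanged."""
    h = height_map
    dx = h[y][x + 1] - h[y][x - 1]          # slope along u
    dy = h[y + 1][x] - h[y - 1][x]          # slope along v
    nx, ny, nz = normal
    perturbed = (nx - strength * dx, ny - strength * dy, nz)
    length = sum(c * c for c in perturbed) ** 0.5
    return tuple(c / length for c in perturbed)

heights = [[0.0, 0.0, 0.0],
           [0.0, 0.5, 1.0],
           [0.0, 1.0, 1.0]]
print(bump_normal(heights, 1, 1, (0.0, 0.0, 1.0)))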
Displacement mapping
Like bump mapping, except that the object surface is actually displaced. The displacement is normally applied only at render time; in some cases, however, it is possible to write out the 'carved' geometry.
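A sketch of the difference from bump mapping: the vertices are really moved along their normals by the sampled map value (one pre-sampled height per vertex is assumed here):

def displace_vertices(vertices, normals, heights, amount=0.1):
    """Displacement mapping sketch: each vertex is moved along its normal by
    the value sampled from the map, so the silhouette changes too."""
    displaced = []
    for (px, py, pz), (nx, ny, nz), h in zip(vertices, normals, heights):
        displaced.append((px + nx * h * amount,
                          py + ny * h * amount,
                          pz + nz * h * amount))
    return displaced

# A flat patch whose last vertex is pushed up by the map.
verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0)]
norms = [(0, 0, 1)] * 3
print(displace_vertices(verts, norms, heights=[0.0, 0.0, 1.0]))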
Reflection mapping
The map is shown as a reflection on the mapped object. An excellent method to reduce render time: ray tracing can be switched off and reflections are still achieved.
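A sketch of the lookup: the view direction is mirrored about the surface normal and the result indexes an environment map instead of tracing a reflection ray; the two-color 'environment' here is purely illustrative:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def reflection_lookup(view_dir, normal, environment):
    """Reflection mapping sketch: mirror the incoming view direction about the
    normal and use the reflected direction to pick a value from a map."""
    v = normalize(view_dir)
    n = normalize(normal)
    d = dot(n, v)
    r = tuple(vx - 2 * d * nx for vx, nx in zip(v, n))   # reflected view direction
    return environment['sky'] if r[2] > 0 else environment['ground']

env = {'sky': (0.4, 0.6, 1.0), 'ground': (0.2, 0.15, 0.1)}
print(reflection_lookup((0, 1, -1), (0, 0, 1), env))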
Refraction mapping
The map is shown as if refracted through the mapped object. A useful method to reduce render time on transparent objects, though rarely used.
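A sketch of the refracted direction via Snell's law, which a refraction map would use to index a texture instead of tracing the ray; the index-of-refraction ratio is chosen for illustration:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def refract(incident, normal, ior_ratio):
    """Snell's-law refraction of an incident direction through a surface with
    the given ratio of indices of refraction (e.g. 1.0 / 1.33 entering water).
    Returns None on total internal reflection."""
    i = normalize(incident)
    n = normalize(normal)
    cos_i = -dot(n, i)
    sin2_t = ior_ratio * ior_ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                                      # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(ior_ratio * ix + (ior_ratio * cos_i - cos_t) * nx
                 for ix, nx in zip(i, n))

# A ray entering water bends toward the surface normal.
print(refract((0, 1, -1), (0, 0, 1), 1.0 / 1.33))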