Layout's Window Drop Down Menu
Motion Mixer

Motion Mixer (Window > Motion Mixer) was designed to bring the same concepts found in non-linear video editing to computer animation. You can store, re-use, and adjust motions for entire hierarchies. You can even mix motions together.

Note: For more information about Motion Mixer, see the Motion Mixer section.

Presets

The Preset Shelf is a sizable floating window that holds a list of previews along with all of the associated settings in Layout. It can be used for settings with surfaces, volumetric lights, HyperVoxels, and so on. It is accessed using the Presets button (Window > Presets) or from some plug-in panels.

Note: For more information about Presets, see the Preset Shelf section.

Backdrop Options

LightWave lets you change the color of the backdrop behind all of the objects in a scene. The settings are on the Backdrop tab of the Effects panel (Window > Backdrop Options). You can elect to have a solid color using the Backdrop Color setting or a gradient color backdrop. By default, the backdrop is solid black.
Gradient Backdrops

LightWave's Gradient Backdrop settings essentially provide a quick earth-like environment background. Note that no shadows can be cast on the backdrop since it really isn't there. Although you'll likely not use it much for real-life imagery, it is great for faking a sky behind mountain objects, instructional animations, and logo animations. Use it when you need something to stick in the background.
HINT: If you want the Gradient Backdrop to blend everywhere with no sharp color change, make the Sky Color and Ground Color the same.

The camera is always positioned right in the center of the sphere. Thus, if you move the camera, the Gradient Backdrop will always look the same. However, if you rotate the camera, you will see the various colors.

The Sky Squeeze value determines how the Zenith and Sky colors are blended. The Ground Squeeze value determines how the Ground and Nadir colors are blended. The default value of 2.0 yields a nice spread between the various colors. A lower value will spread the change in colors over a greater area, and a higher value will compress the change.
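As a rough illustration of the squeeze behavior, here is a minimal sketch in Python. The power-curve blend is an assumption chosen only to show the idea that a higher squeeze value compresses the color transition into a narrower band; LightWave's actual blending function is not documented here.

```python
def squeeze_blend(color_a, color_b, t, squeeze=2.0):
    """Blend color_a toward color_b at position t (0..1).

    Assumption: a power curve stands in for the real falloff. A higher
    squeeze keeps the result near color_a longer, compressing the
    visible change into a narrower band near t = 1.
    """
    w = t ** squeeze
    return tuple(a + (b - a) * w for a, b in zip(color_a, color_b))

zenith = (40, 40, 120)    # deep blue
sky = (120, 180, 240)     # pale blue
# A low squeeze spreads the change; a high squeeze compresses it.
low = squeeze_blend(zenith, sky, 0.5, squeeze=1.0)
high = squeeze_blend(zenith, sky, 0.5, squeeze=4.0)
```

With squeeze at 4.0, the midpoint sample stays much closer to the zenith color than with squeeze at 1.0, matching the "compress the change" description above.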
Environments

Several plug-ins may be used to create exciting backgrounds for your animations. You access these plug-ins by adding them on the Add Environment pop-up menu on the Backdrop tab of the Effects panel.

Image World

The Image World environment lets you apply an image as a spherical environment wrap; this approach is perfect for high dynamic range images. You can offset the heading and pitch of the image, as well as adjust its brightness.
Textured Environment

The Textured Environment lets you apply a texture as a spherical environment wrap; this approach is also well suited to high dynamic range images. Clicking the Texture button launches the standard Texture Editor, giving you access to gradients, procedurals, and the use of images to create an endless variety of textures.
An interesting application would be to use a gradient and select the (camera) Heading or Pitch as the Input Parameter. This lets you vary a glorious sunset sky based on the rotation of the camera. You might also use this to add a cool nebula behind your starfield using a simple Turbulence or Fractal Noise procedural texture.

Note: For more information on the Texture Editor, see the Texture Editor section.

SkyTracer

The SkyTracer Environment was designed to create sophisticated atmospheric effects using real-world parameters.
SkyTracer2

The SkyTracer2 Environment was designed to create sophisticated atmospheric effects using real-world parameters. You can adjust the atmospheric, cloud, and sun settings within the interface to create a variety of beautifully rendered sky images. These skies can be rendered (volumetrically or as a 2D effect) within an existing scene, or saved as image files to be composited or used as texture maps.

Note: For more information about SkyTracer2, see the SkyTracer2 section.

Background Booster

The Background Booster tool (Window > Background Booster) gives you the ability to easily edit the background gradient settings. Use the Boost function to multiply each of the RGB values by the boost percentage.
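The Boost operation described above reduces to a per-channel multiply. This is a minimal sketch; the clamp to 255 is an assumption for 8-bit display values, not something the manual specifies.

```python
def boost_color(rgb, boost_percent):
    """Scale each RGB component by boost_percent (100 = unchanged).

    Assumption: channels are 8-bit values, so results clamp at 255.
    """
    factor = boost_percent / 100.0
    return tuple(min(255, round(c * factor)) for c in rgb)

boost_color((80, 120, 200), 150)  # each channel multiplied by 1.5
```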
Background

A very common example of volumetric lighting is the atmosphere: the color of the sky comes from the scattering and absorption of light in the different layers of the atmosphere. The combination of light scattering and absorption is the very core of volumetric lighting effects. Additionally, other parameters must also be taken into account, like volume size and shape, density distribution inside the volume, lighting conditions, and behavior of light inside the medium. Adjusting these parameters lets you create a wide range of natural effects.

The sky, for example, is usually blue because light attenuation depends on wavelength and distance. When the sun is at the zenith, its light crosses fewer layers of atmosphere than when the sun is on the horizon. Blue light is stronger at the zenith than at sunset because the thinner atmosphere does not interfere with its shorter wavelength. In red sunsets, the thicker layers of atmosphere attenuate blue light, but the longer wavelength of red light passes through the atmospheric layers.

Fog is another good example. It is a medium composed of vaporized water where density is distributed in a non-homogeneous manner. The water particles in the fog cause a dispersion/absorption phenomenon that creates the lighting effect, while the density distribution gives the fog its global appearance. If you want a thick fog lying on the ground and fading with altitude, you will have to use a density distribution that makes the density high at lower altitudes and low at higher altitudes. If you want to add turbulence to the fog (for a more cloudy appearance), you can add fractal noise, which creates a 3D density field.

Computational Issues

Volumetrics are calculated by integrating all the scattering/absorption contributions along a ray (which comes from the camera). When you use a 3D fractal density field, the integration must be made numerically with a limited number of sampling points.
In this case, the values will be calculated at each sampling point, which means that for 50 sampling points, the algorithm calculates 50 density field values, 50 lighting values, and 50 scattering/absorption values. All those values can take a lot of time to compute. Using fewer sampling points will result in a faster rendering but will introduce numerical errors: this is volumetric aliasing. Volumetric shadows can be obtained this way by measuring the lighting conditions at each sampling point along the ray. When you work with a normal density distribution, you can make the integration analytically, which gives a much faster rendering. But in this model (which we call the fast model), it is not possible to measure lighting conditions along the ray, and as a consequence it is not possible to get volumetric shadows.

Another important note about numerical issues is how to adjust values to get the desired effect. The intensity of the effect is always related to the length of the medium crossed by the ray. This is obvious if you compare cigarette smoke to smoke from a large fire: the size of the volume has a big influence on the result. The behavior of light may also change completely when the volume size changes, because absorption may overpower scattering, and vice versa.

A good example of this is clouds. When you look at clouds, you see that small thin clouds are bright and totally white, while big clouds have dark gray areas and a thin white border. The dark gray color comes from the absorption of light inside the cloud. Even the light scattered inside the cloud is absorbed from the point of scattering to the boundaries of the cloud. In this example, absorption takes precedence over scattering when thickness gets bigger. However, under other circumstances, the opposite could occur. When you use high absorption and scattering values, you can create explosion-like effects, where there is very high contrast between bright and dark areas.
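The sampling scheme described above can be sketched as a simple ray march: at each of N sample points we evaluate a density value and a lighting value, then accumulate scattering and absorption. The density and lighting functions here are illustrative stand-ins, not LightWave's internals.

```python
def march_ray(density_at, light_at, ray_length, samples=50):
    """Numerically integrate scattering/absorption along a camera ray.

    One density evaluation and one lighting evaluation per sample, as
    the text describes (50 samples -> 50 of each kind of value).
    """
    step = ray_length / samples
    transmittance = 1.0   # fraction of background light still visible
    scattered = 0.0       # light scattered toward the camera
    for i in range(samples):
        t = (i + 0.5) * step             # midpoint of this segment
        d = density_at(t)                # density field value
        scattered += transmittance * light_at(t) * d * step
        transmittance *= max(0.0, 1.0 - d * step)  # absorption
    return scattered, transmittance

# Uniform medium, uniform lighting: fewer samples means larger
# numerical error, the "volumetric aliasing" mentioned above.
coarse = march_ray(lambda t: 0.2, lambda t: 1.0, 10.0, samples=5)
fine = march_ray(lambda t: 0.2, lambda t: 1.0, 10.0, samples=200)
```

Comparing `coarse` and `fine` shows the integration converging as the sample count rises, at the cost of proportionally more work per ray.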
In conclusion, when adjusting parameters, you must be aware of the scale of the object you are working on.

About Particles

When you work with particles, a sphere of gas is associated with each particle. As a result, a particle cloud is really just a union of spheres. Computing the effect for every particle can be computationally intensive, particularly when their spheres overlap one another. The solution is the automatic particle sizing option, which evaluates a particle size so that each particle is close to another. The result is a dense cloud optimized for numerical integration and lower rendering times.

Normal Fog

LightWave can generate a quick fog effect that is useful for many special effects. Just as objects seem to blend into real fog with distance, the effect fades the objects away into the fog color you set. Fog surrounds the camera in all directions, as though the camera were in the middle of a huge spherical fog bank extending in all directions. The fog settings are on the Volumetrics tab of the Effects panel (Window > Volumetrics and Fog Options).

In setting up fog, you will set a minimum and maximum distance from the camera. Within and beyond this range, objects will take on some amount of the Fog Color. You can also specify the percent of fog color that objects take on at the minimum and maximum distances.
A negative Minimum Distance will start the fog behind the camera. You can even enter a Minimum Distance larger than the Maximum Distance, which results in an effect whereby objects render in more of the fog the closer they are to the camera.

By default, fog is applied linearly between the Minimum Distance and Maximum Distance. Non-linear options are also available, which apply the fog amount more rapidly as objects move away from the camera, then less so as they approach the maximum distance. The small graph to the left of the setting gives you an indication of the fog application over distance.

The Use Backdrop Color option causes an object to blend in with whatever backdrop you have set, including a background image. This can have the effect of making objects appear slightly transparent. Use this option to simulate the effects of an underwater environment or a hazy, foggy day with an appropriate Backdrop Color, like bluish green for underwater and grayish white for a foggy day. If you add the Textured Environment (Window > Backdrop Options) and also activate Use Backdrop Color, your fog (and backdrop) can use a texture.
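The linear fog blend described above can be sketched as a simple interpolation between the minimum and maximum distances, with separate fog percentages at each end. This is a minimal model of the behavior, not LightWave's implementation.

```python
def fog_amount(distance, min_dist, max_dist, min_amount=0.0, max_amount=1.0):
    """Fraction of Fog Color applied at a given distance from the camera."""
    if distance <= min_dist:
        return min_amount
    if distance >= max_dist:
        return max_amount
    # Linear interpolation between the two distances (the default mode).
    t = (distance - min_dist) / (max_dist - min_dist)
    return min_amount + (max_amount - min_amount) * t

def apply_fog(color, fog_color, amount):
    """Blend an object's color toward the fog color by the fog amount."""
    return tuple(c + (f - c) * amount for c, f in zip(color, fog_color))
```

The non-linear modes mentioned above would substitute a curve for the straight-line `t` here, ramping the fog in faster near the minimum distance.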
Volumetric Anti-Aliasing

The Volumetric Anti-aliasing option activates or deactivates the anti-aliasing of volumetric plug-ins and volumetric lights. If this option is off, the volumetric effects from the first rendering pass are stored and reused in later passes, instead of being re-rendered in each pass. Obviously, this can save rendering time, but it will require more memory and may cause problems when used in conjunction with motion blur or depth of field.
Volumetric plug-ins are added on the Volumetrics tab of the Effects panel.
Ground Fog

Use the Ground Fog volumetric plug-in to add three-dimensional fog to your scene. The fog has actual physical size so, for example, the camera can move in and out of the fog. This differs fundamentally from normal fog, discussed previously, where the camera is always within the fog.
When using Ray Marcher, you can throttle the amount of computation using the Quality pop-up menu at the bottom of the panel. Add a texture by activating the Use Texture option; clicking the Edit Texture button will bring up the standard Texture Editor. A fog texture will make your fog more interesting and less flat.

The Top and Bottom settings control the altitude of the fog, that is, where your fog starts and stops vertically. Falloff determines how the fog decreases to zero from the Bottom to the Top. The higher the value, the faster the fog density falls off. Note that the Ray Marcher mode tends to fall off quickly at the fog's edges, while Fast Fog has a uniform linear falloff.

Note: Make sure you know where your camera is when using Ground Fog. The effect is really best seen from outside of the fog. If your camera is inside the fog, changing settings may appear to have little effect. In fact, if your camera is always within the fog, you may want to just use the (faster) normal fog.

The Luminosity and Opacity values are the values where the fog is at its thickest. Nominal Distance is the distance at which the fog has a medium effect; it is not like standard fog's Minimum Distance. You will want to use small values for small-scale scenes. Large-scale scenes may require higher values to keep close objects from being affected too strongly by the fog. You can set the color of the fog with the Color setting, or you can just use the backdrop color.
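The Bottom/Top/Falloff relationship described above can be sketched as an altitude-dependent density function. The power curve here is an assumption standing in for whatever falloff curve the plug-in actually uses; it only illustrates that a higher Falloff value makes density die off faster toward the Top.

```python
def ground_fog_density(height, bottom, top, falloff=1.0, max_density=1.0):
    """Fog density at a given altitude.

    Full density at or below Bottom, zero at or above Top; the Falloff
    exponent (assumed power curve) controls how fast it dies off.
    """
    if height <= bottom:
        return max_density
    if height >= top:
        return 0.0
    t = (height - bottom) / (top - bottom)   # 0 at Bottom, 1 at Top
    return max_density * (1.0 - t) ** falloff
```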
HyperVoxels

Realistic effects like liquids, smoke, clouds, and fire are difficult, if not impossible, to achieve with ordinary polygonal objects. HyperVoxels simplifies the creation of volumetric rendering effects such as photo-realistic clouds, flames, explosions, dust, nebulas, contrails, fluids, smoke, ash, pyroclastics, gelatin, electro-microscopic images, rusted materials, detailed solid and rocky surfaces, and much, much more.

Note: For more information about HyperVoxels, see the HyperVoxels section.

Compositing Options

Background Image

The Background Image is similar to the gradient backdrop; however, it is always registered to the camera. That is, it will always appear in exactly the same position no matter which way you tilt or move the camera. You set this option on the Compositing tab of the Effects panel. Background images are considered infinitely distant from the camera. You can never have an object behind a background image, nor can you light a background image or cast shadows upon it.
You often use background images to merge live action and 3D elements, like creating the illusion that a UFO crashed into the Empire State Building. An image of the New York skyline would be the background image, and the UFO would be a LightWave object.

Note: Background images are visible only through the Camera View.

HINT: If you actually need your background image to interact with objects, use the image as a Surface Color Texture mapped on a flat plane object and place the plane at the back of your scene.

If you set Camera View Background on the Display Options tab of the Preferences panel (Display > Display Options) to Background Image, you will see the set Background Image in Layout's Camera View. Of course, actually seeing the background in the Layout window is optional. The background image will always appear in the rendered frame.
The whole idea behind traditional image compositing is quite simple: take two or more images, and merge them into a new composite image. LightWave lets you do this, but also takes it one step further by letting you throw objects into the mix. The images can also be a sequence of pictures, so you may use captured video footage as a background for your objects. A simple example would be a modeled UFO moving against a real sky and trees. Images can appear behind all objects, in front of all objects, or a combination of the two with objects showing in between.

Compositing can be a render time-saver. If you set up a scene with many objects, but only a few are moving, you could render one frame with only the inanimate objects, and then remove all of those objects and render the animated objects against the single frame saved earlier. This is especially useful when the still objects are complicated or involve ray-traced shadows, refraction, and/or reflection.

Foreground Images

Placing an image in front of everything may seem like a silly thing to do. However, some options let you cut parts of the image away so you can see through it. You can also make the image partially dissolved, or even envelope the dissolve. Dissolving in a black image in the foreground will produce the common fade-to-black effect; reverse the dissolve to fade in from black.

You can make holes in the foreground image based on a defined color range in the image. However, the biggest drawback to simply clipping out portions of the foreground image is that you will have hard edges. A color is either clipped or it isn't, so you see either the foreground image or the background image. Use the Foreground Image pop-up menu to set the foreground image.
You may also use a special alpha image to merge the background and foreground images. This type of alpha image is very different from the type you use to set up a transparency surface texture. LightWave composites the foreground image over any objects or background by adding their colors to the foreground image. How much background is added is determined by the alpha image: the darker the area of an alpha image, the more the background is added to the foreground. The pseudo-mathematical equation might look like:

Foreground + (1 - Alpha) * Background

If you used the same exact image for both the background and foreground images, plus a solid black image as the alpha image, you would receive a final rendered image where every pixel is twice the color value it was. This results from the background image being completely added to the foreground image.

Creating Alpha Images

Generally, alpha images will be generated when you render a scene to create the foreground images. When you select Save Alpha on the Render Options panel's Output Files tab (Render > Render Options), LightWave will generate and save an alpha image in addition to the normal RGB output. The alpha image will be composed of grayscale values representing the opacity of any objects that were rendered in the scene.
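The additive compositing rule quoted above can be written directly as code, per pixel and per channel. Values here are normalized to 0..1; with a solid black alpha (0.0) the whole background is added, reproducing the "twice the color value" example in the text.

```python
def composite(fg, bg, alpha):
    """LightWave-style additive composite for one pixel.

    result = Foreground + (1 - Alpha) * Background, per channel.
    """
    return tuple(f + (1.0 - alpha) * b for f, b in zip(fg, bg))

pixel = (0.3, 0.5, 0.2)
composite(pixel, pixel, 0.0)  # black alpha: every channel doubles
```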
Since glows and lens flares are additive effects and are assigned a value of black in an alpha image, glows and lens flares in the actual foreground image will simply have the background values added, so they will appear brighter where the background is a value other than black.

HINT: Generally, due to LightWave's additive compositing method, foreground images are created using a solid black backdrop. This allows the composited background to show through unaltered when it is mixed 100 percent with the foreground.

Foreground Fader Alpha

What happens if you want to composite a foreground element on top of objects or a background, but the foreground image was not rendered over black? In this case, LightWave provides a Foreground Fader Alpha button that can be selected. In this mode, LightWave will ignore any areas of the foreground image corresponding to black areas in the alpha image. These areas of the foreground image will be faded away to nothing, so you see 100 percent of the background instead.

When using Foreground Fader Alpha, glows and lens flares will not be added to the final rendered image (their corresponding alpha is black) unless you are using a different alpha image containing non-black areas in those locations. Additionally, antialiased object edges will most likely stand out because they contain bits of color from the original non-black background.

Foreground Key

Activate Foreground Key when you want to key out (i.e., not render or see) a color (or range of colors) from the foreground image. Use the same color value for both Low Clip Color and High Clip Color if you wish to key out one color only. Use different values to key out the Low Clip Color and High Clip Color values, and all those in between.
HINT: If you want to create a giant space battle and lack the RAM to hold all of the objects and image files, you could composite layers of ships and achieve the same results. This is, in fact, how some broadcast TV shots were done using machines with only 32MB of RAM in the early days of LightWave.

Image Processing Options

The Processing tab on the Effects panel contains functions that apply effects to the rendered image. Choose Window > Image Processing Options to bring up the Processing tab of the Effects panel directly.
Limit Dynamic Range

Limit Dynamic Range clips the pixel color components of each rendering pass at 1.0, improving the antialiasing of extremely bright areas. This option should not be used with filters or image savers that expect high dynamic range data.

Dither Intensity

Dithering blends two colors to simulate a third color between them, forming a more realistic blend. Dither Intensity lets you set the amount of color blending used by LightWave when rendering an image. Even with 24 bits of color data, it is possible to perceive color banding where distinct changes in color or brightness occur within otherwise smoothly ramped colors. Off removes all dithering, and you will probably experience some color banding. Normal, the default setting, reduces banding to the point where it nearly disappears. 2x Normal increases the dithering even further, which may be useful for high-end systems that still retain some appearance of banding in the final image. 4x Normal boosts dithering so that the resulting image looks more grainy, like film, which may be a desirable effect (especially when used with Animated Dither, below).

Animated Dither

Select Animated Dither to change the dithering pattern used from one frame to the next. This ensures that dithering is randomly placed, so there is no apparent pattern to the dither blend. With a 2x Normal or 4x Normal Dither Intensity, this can be used to approximate the randomness of film grain moving through an image.

Color Saturation

Color Saturation lets you control the amount of color in a scene (or in an animation, if using an envelope). Saturation at 100% is a normal, full-color setting, while saturation at 0% is a black-and-white setting.

Glow Settings

When Enable Glow is active, LightWave can add a glow around surfaces with a (surface) Glow Intensity above 0%. Use the controls below it to set the amount of glow you wish to add to all such surfaces.
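The Color Saturation control described above amounts to interpolating each pixel between its grayscale value and its full color. This sketch uses the common Rec. 601 luma weights, which are an assumption; the manual does not say which grayscale conversion LightWave uses.

```python
def saturate(rgb, saturation_percent):
    """Scale a pixel's saturation: 100 = full color, 0 = grayscale."""
    r, g, b = rgb
    # Assumed Rec. 601 luma weights for the grayscale value.
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    s = saturation_percent / 100.0
    return tuple(gray + (c - gray) * s for c in rgb)

saturate((1.0, 0.0, 0.0), 0)    # pure red collapses to its luma
saturate((1.0, 0.0, 0.0), 100)  # unchanged
```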
Image Processing: Pixel Filters

Pixel filters let external applications affect LightWave's rendering engine. Filters in this class can be affected by motion blur and other sub-frame operations during the render, rather than as a post-process, as image filters are.
In print, halftone screens are made up of dots that control how much ink is deposited at a specific location. Varying the resulting dots' size and proximity creates the illusion of variations of gray or continuous color.
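The size-for-tone trade described above can be sketched with a little geometry: within a fixed screen cell, the ink dot's radius is chosen so its area matches the ink coverage the gray level calls for. The area-based formula is the obvious geometric choice and an assumption here, not any specific filter's algorithm.

```python
import math

def dot_radius(gray, cell_size=8.0):
    """Radius of the ink dot reproducing a gray level in one screen cell.

    gray: 0.0 = black (full ink coverage), 1.0 = white (no ink).
    """
    ink_fraction = 1.0 - gray
    # Solve pi * r^2 = ink_fraction * cell_size^2 for r.
    return math.sqrt(ink_fraction * cell_size * cell_size / math.pi)
```

Darker grays get larger dots; pure white gets no dot at all.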
In the photography world, this type of effect is achieved with a (physical) screen that breaks up the image into dots. Think of the Screen Angle setting as the rotation of that screen; it controls the angle of the dots. You can control the overall amount of the effect by adjusting the Effect Strength setting. Settings below and above the default of 100% are allowed. You can change the pattern by changing the Spot Type. Specify and animate a Screen Tack Object to animate the pattern position.

The OK button stands for Ocular Kinesthetics. It lets the filter change pixels that will then be perceived by the ocular nerves.
LW_Hypervoxels and LW_Hypervoxels_Doubler

These two pixel filters are legacy tools for rendering older LightWave scene files that use older versions of HyperVoxels.

Math Filter
Sasquatch Lite

Sasquatch Lite allows you to quickly and easily create a variety of effects such as grass, fur, and hair on your objects. It uses its own rendering engine to render the hairs quickly and combine the results seamlessly with your LightWave objects. Sasquatch Lite is a very simplified version of the commercial plug-in Sasquatch from Worley Laboratories.
SkyTracer

The SkyTracer Environment was designed to create sophisticated atmospheric effects using real-world parameters.

Note: For more information, see the section on SkyTracer2.

Steamer and Steamer_Doubler

These two pixel filters are legacy tools for rendering older LightWave scene files that use Steamer.

Image Processing: Image Filters

Image filters, unlike the pixel filters in the previous section, act on an image as a whole. They can be used for a variety of things, from saving out a layered Photoshop file to achieving a light bloom effect on your render, or making it into a sepia or monochrome image. Some of the filters can be applied directly to images you use within LightWave; for instance, Full Precision Blur can be used to soften black-and-white images intended for a bump map.

Image filters are a post-process effect. This means that they are applied only once all rendering has been performed. Some of them, such as Digital Confusion, by their very nature need your picture to be rendered as one segment, otherwise nasty banding effects can occur.
Aura 2.5 Export

The Aura 2.5 Export image filter has access to all the different internal render buffers and can export them to Aura. Aura 2.5 can then take this information and manipulate it with its many features and options.
Black and White

Creates a black-and-white version of your render. Double-clicking on the item in the list will bring up the Black and White panel, where you can alter the gamma correction for the contrast levels.
Deluxe RLA

This image filter allows you to set the format for the color, matte, and depth channels of the RLA format.
Emboss

Emboss makes a render appear raised or stamped by converting its color to gray and tracing the edges with the original fill color. The embossed edges are the colors the objects originally had.
Emboss BW

Emboss BW makes a render appear raised or stamped by converting its color to gray and tracing the edges with the original fill color.
Flare2Alpha

Gamma

Anaglyph Stereo: Compose

Using the QuickTime Stereo animation type in Render Options will allow you to save a stereoscopic animation. You need to render at least two images to see the stereoscopic result.

Anaglyph Stereo: Simulate

Bloom

Sometimes light reflections in the real world are so bright that too much light will enter a camera lens, over-saturating areas and creating a glare or a glowing halo. Shiny metallic objects like cars and water often exhibit this phenomenon. Bloom will mimic this effect.
Note: Use the Bloom filter for a simple over-exposure effect. Use Corona if you need more control over your bloom.

Render Buffer View (Post-processing only)
Render with Chroma Depth Applied
Auto-Scale finds the actual near and far values for you when it renders; you can render once with this option on and the calculated values will appear in the fields.

Corona
Input Basis lets you choose which internal buffer to read for the bloom effect, so you can bloom on specularity, bloom on diffuse, and so on. The Input Basis acts like a trigger: when its value exceeds the Input Threshold, the effect is applied to that pixel.

Color uses raw pixel values; essentially any pixel on screen that is bright enough gets bloomed. Alpha uses the alpha's pixel values, 0 to 100% for the image. Specular Shading uses 0 to 100% of surface specularity as shaded during the rendering. This varies over a given surface and is different from the Specular surface channel, which is uniform over a surface. Diffuse Shading is similar, but uses the diffuse surface property. Geometry uses the normal of object surfaces, where 100% indicates that the normal points at the camera. Inverse Geometry is similar, but 100% indicates that the normal points perpendicular to the camera. These are easily demonstrated using a sphere: for Geometry, the center of the ball would trigger Corona, while Inverse Geometry would result in the effect along the edges. Special uses the surface Special Buffer feature on the Surface Editor. The value of the Special Buffer is compared against the threshold, and when it exceeds that value, the Corona filter is applied.

The input can also be masked to skip areas of the input altogether. Threshold Mask is basically an alpha channel: brighter areas will be susceptible to the mask, while darker areas will not.

Effect Settings

The Effect pop-up menu selects your blending mode. Additive yields very hot (white) results where triggering pixels are closely grouped. This is useful for, say, obtaining the look of metal being heated: the center of a block of metal becomes super hot while the edges do not. Normal is similar to Additive, except in the case of heated metal, the effect at the center and the edges tends to grow more evenly. Maximum takes the maximum of contributing pixels. This yields an effect like applying balls of cotton to brightly colored particles, whose effects start to merge as the particles get closer to each other.

The Falloff pop-up menu lets you select how the bloom brush falls off. The preview window will give you an idea of what the falloff will look like. Strength is the strength of the brush compared to the source pixel. Size is the radius in pixels of the brush at a 640 by 480 resolution. If the resolution is different, the brush will be adjusted so that the effect always looks the same at different resolutions.

The Edit Texture button can modulate the color of the bloom brush with a texture. When Texture Only is inactive and Edit Texture is off, the effect uses the color of the original image. If texture color is available, the effect uses the color of the original image plus the contribution of the texture. When Texture Only is active and there is also a texture color, the effect uses the value of the texture only.

Other Settings

The Corona filter will affect your alpha channel if you activate the Contribute to Alpha option. Use the Save Effect to File option to save just the corona effect to an image file when rendering. Note that you must also choose a file format and define a file spec.

When you use gradients with Corona, you will have additional options for the Input Parameter. These options let you customize how the corona effect is applied. For example, the size or intensity of the effect can grow or diminish based on an object's proximity to another object, the center of the image, and so on. You can use the standard Preset Shelf if you want to save and recall settings.

Digital Confusion (Post-processing only)
The four Diaphragm Shape settings, Hexagon, Octagon, Disc, and Cut Disc, determine which pattern Digital Confusion will use when "defocusing" the rendered image. These settings correspond to the actual shape of the camera lens used during this effect. The Aspect Ratio and Rotation angle of the camera lens can also be adjusted to create even more specialized effects.

When using LightWave's built-in depth of field controls, it is sometimes difficult to keep a moving object in focus. Instead of using envelopes to animate the focal distance, with Digital Confusion you can simply select a reference object from the Autofocus pop-up menu and the proper focal distance will be computed automatically. This reference object can be either the (target) geometry in the scene, or a null object used to dynamically adjust the focus. When an object is selected, the Focal Distance field becomes disabled.

The Focal Distance setting represents the distance from the camera to the point in space that is in focus. Objects that fall either in front of or behind this area will be rendered out of focus. Just how far out of focus is determined by adjusting Digital Confusion's Lens F-Stop setting. By changing this value, you are adjusting the radius of the area that is in focus: the smaller the Lens F-Stop value, the smaller the in-focus area will be.

Note: For more information on using the depth of field controls, refer to the Depth of Field section of the help file.

The Blur Size setting acts as a multiplier to Digital Confusion's "defocusing" effect. Adjusting this control is similar to adjusting the Lens F-Stop setting, but instead of changing the size of the in-focus area, it determines the amount of blur these pixels will receive. By entering a Blur Size value of 50%, the area out of focus, defined by the Focal Distance and Lens F-Stop settings, will receive only half the computed blur.
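The relationships described above can be sketched as a toy model: blur grows with distance from the Focal Distance, a smaller Lens F-Stop value widens the blur for the same depth offset, and Blur Size scales the final amount. The linear formula is purely illustrative, an assumption rather than Digital Confusion's actual optics.

```python
def blur_amount(depth, focal_distance, f_stop, blur_size=1.0):
    """Relative blur for a pixel at a given depth from the camera.

    Assumed toy model: defocus grows linearly with distance from the
    focal plane and shrinks as the f-stop value rises.
    """
    defocus = abs(depth - focal_distance) / max(f_stop, 1e-6)
    return defocus * blur_size

blur_amount(5.0, 5.0, 8.0)                 # at the focal plane: no blur
blur_amount(9.0, 5.0, 8.0, blur_size=0.5)  # half the computed blur
```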
If a background image is used in your scene, you may want to activate Blur Background to blur it. However, because the background image is at an infinite distance from the camera, it will always receive the maximum blur amount, resulting in much longer rendering times. A more efficient solution is simply to blur the background image in a paint program.

The Render Hidden Geometry feature forces LightWave to ray trace any geometry behind objects in case they become transparent when blurred. This gives a much more accurate representation of the depth of field effect, but can increase rendering times. In multi-pass rendering and compositing it may be acceptable to leave this feature off, but normally it should be activated.

Note: Double-sided geometry will not work correctly with Render Hidden Geometry activated.

Sometimes defocusing the rendered image can cause the effect of surface specularity or luminosity to diminish unacceptably. To offset this, you can adjust the Luminosity Boost or Specular Boost multipliers at the bottom of the panel. Any pixels rendered with these surface properties will have their intensity adjusted accordingly.

Depth-Of-Field Blur (Post-processing only)

This filter lets you add a depth of field effect based on a fast image filter, without requiring the multi-pass anti-aliasing that normal depth of field does (Camera panel). You can adjust the overall strength of the blur, as well as independently choose whether to blur the alpha channel and background.
Exposer

This filter normalizes high dynamic range (HDR) images for use as image maps, and HDR output for display in applications that are not HDR-savvy. The intensity mapping is non-linear, similar to the light response process in the human eye.
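A minimal sketch of this kind of non-linear normalization, assuming a simple gamma-style compression curve; the actual curve Exposer uses is not documented here, and the function name and default values are illustrative.

```python
def expose(pixel, white_point=1.0, black_point=1.0 / 255.0):
    """Map an HDR pixel intensity into the 0.0-1.0 display range.

    Values at or below the black point come out black; values above
    the white point clip to white; everything in between is compressed
    non-linearly so darker values get more of the output range.
    (Illustrative stand-in, not LightWave's internal code.)
    """
    if pixel <= black_point:
        return 0.0
    v = min(pixel, white_point) / white_point
    return v ** (1.0 / 2.2)  # gamma-like, eye-response-style curve
```

Note how an intensity of 0.5 comes out brighter than 0.5: the curve devotes more of the output range to darker details, as described above.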
Although you can add this filter on the Image Editor, it is of limited use there; it is more useful as an Image filter on the Processing tab of the Effects panel. This is mainly because most images you load are not HDR images, so pre-processing is not necessary and normal gamma should probably be used, if necessary. Moreover, if you do load an HDR image, it's probably because you want the extra data. (Using the HDR Exposure filter will eliminate some, if not all, of that extra data.)

The Input Dynamic Range is an informational display showing the High and Low pixel-intensity values encountered in the last processed image. Note that when the panel first appears, this information is not yet known.

If you do not want the filter applied to the Full Image, set the Exposure Apply pop-up menu to Foreground to apply it only to scene items, or Background to affect only the background (i.e., where your alpha would be black).

The White Point is the input intensity considered to be the hottest white. Anything above this will be the same white in the output. This control is overridden by the Auto-Iris option, which sets the white point based on the actual input image data. Adjusting the white point is similar to cranking down an iris on a film camera to limit how bright parts blow out in a photograph.

The Black Point, expressed as a percentage of the nominal black point (1/255), is the darkest non-black pixel level in the input image that will be preserved. Anything darker will come out black. The Auto-Exposure option overrides Black Point by using the actual image data to determine a black point in the incoming data. Lowering the black point is similar to increasing the exposure time for a photograph.

Once these values are set, the filter translates the incoming image intensity, in a process very similar to gamma correction, so that the darker colors get more of the output range than the brighter colors. In other words, the filter changes the spacing of intensity levels so more levels are devoted to low-intensity, darker details.

Extended RLA Export (Post-processing only)

This filter saves images in the Extended RLA format, popular for 2D/3D compositing work. The image includes optional depth buffers, as well as masks for which object or surface (material) a pixel came from. Enter the Image Base Name in the field or use the file requester button.
Extended RPF Export (Post-processing only)

This filter saves images in the Extended RPF format, popular for 2D/3D compositing work. The image includes a few more options than the Extended RLA format. Enter the Image Base Name in the field or use the file requester button.
This filter will soften an image by blurring it. Change the Size values to increase the amount of blur horizontally or vertically. The Strength settings determine the amount of the effect. You can also choose whether to affect the RGB (color) and/or alpha data.

You can use the Rendered Alpha (channel) or a Special Buffer as a mask using the Use Mask pop-up menu. If you want the mask computed prior to the blurring, check the Pre-Blur Mask option; otherwise the mask accounts for any blurring. To reverse the mask, check Invert Mask. The Special Buffer setting on the Advanced tab of the Surface Editor can have a value from 0 to 1, with 0 meaning no blur and 1 meaning full blur.
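The masking behavior described above can be sketched as a per-pixel blend between the original and blurred values, weighted by the mask. The names and the simple linear blend are illustrative assumptions, not the filter's actual code.

```python
def masked_blur(original, blurred, mask, invert=False):
    """Blend original and blurred pixel values by a 0.0-1.0 mask.

    A mask value of 0 keeps the original pixel (no blur); 1 takes the
    fully blurred pixel; Invert Mask flips the weighting.
    """
    if invert:
        mask = 1.0 - mask
    return original * (1.0 - mask) + blurred * mask
```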
Display devices have a non-linear relationship between pixel values and physical light intensity: they do not excite the display phosphors linearly. This non-linearity must be compensated for to correctly reproduce intensity. The Gamma Correction value is the exponent in the correction formula and determines how pixel values are converted to light intensity. The default, 2.2, is a common value used for images bound for video, but is not necessarily the optimum value.
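The correction itself is the familiar power-law formula. A minimal sketch follows; the function name is illustrative and this is not LightWave's internal code.

```python
def gamma_correct(pixel, gamma=2.2):
    """Gamma-encode a linear pixel value in the 0.0-1.0 range.

    Raising to 1/gamma compensates for the display's non-linear
    response, brightening mid-tones before display.
    """
    return pixel ** (1.0 / gamma)
```

With the default gamma of 2.2, a linear mid-gray of 0.5 encodes to roughly 0.73.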
Rather than using red/blue glasses to create a stereoscopic animation, you can use LCD shutter glasses, which work with the fielded nature of TV screens to work their magic. Field Stereo allows you to render separate images for each field, so you get an effect of depth from the separation between the fields. Don't forget to turn on Stereoscopic Rendering and Field Rendering in the Camera Properties panel.
HDR Exposure

This filter normalizes high dynamic range (HDR) images for use as image maps, and HDR output for display in applications that are not HDR-savvy. The intensity mapping is non-linear, similar to the light response process in the human eye.

This filter processes the HDR output created by radiosity renders into better-looking, brighter pictures. It does this without impacting the accuracy of the lighting simulation, which can happen if you add ambient light or crank up lights unrealistically. It is really an essential part of the camera simulation for a perfect digital camera. (The Virtual Darkroom filter is similar, but more complex. It simulates the two-stage process of film response to light, and print emulsion response to projection through the film negative.)

Although you can add this filter on the Image Editor, it is of limited use there; it is more useful as an Image filter on the Processing tab of the Effects panel. This is mainly because most images you load are not HDR images, so pre-processing is not necessary and normal gamma should probably be used, if necessary. Moreover, if you do load an HDR image, it's probably because you want the extra data. (Using the HDR Exposure filter will eliminate some, if not all, of that extra data.)
If you do not want the filter applied to the Full Image, set the Exposure Apply pop-up menu to Foreground to apply it only to scene items, or Background to affect only the background (i.e., where your alpha would be black).

The White Point is the input intensity considered to be the hottest white. Anything above this will be the same white in the output. This control is overridden by the Auto-Iris option, which sets the white point based on the actual input image data. Adjusting the white point is similar to cranking down an iris on a film camera to limit how bright parts blow out in a photograph.

The Black Point, expressed as a percentage of the nominal black point (1/255), is the darkest non-black pixel level in the input image that will be preserved. Anything darker will come out black. The Auto-Exposure option overrides Black Point by using the actual image data to determine a black point in the incoming data. Lowering the black point is similar to increasing the exposure time for a photograph.

Once these values are set, the filter translates the incoming image intensity, in a process very similar to gamma correction, so that the darker colors get more of the output range than the brighter colors. In other words, the filter changes the spacing of intensity levels so more levels are devoted to low-intensity, darker details.

LScript and LScript/RT

Photoshop PSD Export

The Photoshop PSD Export image filter is an image saver masquerading as an image filter. Being a filter allows it access to all the different internal render buffers. When you render a frame, a sequentially numbered PSD file is saved with each of the selected buffers placed into its own channel. (In Photoshop 6, select the Channels tab, between the Layers and Paths tabs.)
The Max field determines the maximum value allowed in a buffer, so the acceptable buffer values are always zero to Max. All values in the buffer are divided by Max to normalize them to the range zero to one.

The Sliding min/max range option will dynamically compute the minimum and maximum values of each buffer. The values are computed for every frame, so the output buffers will have the widest possible range of values. This is great for still images, but animations should not use this setting, due to the lack of "temporal coherence" (i.e., the images may "pop" from frame to frame).

The 16 bit component output option will save out 16 bits per channel/buffer. Normally, only eight bits are used.

Soften Reflections (Post-processing only)

This filter will blur reflections. The Blend control will Replace the reflection with a blurred version, Average them together (which is more subtle), or use the Maximum of the replace result and the original (which avoids a dark halo, at the cost of a lighter image). You can also blend based on the Alpha channel or the intensity of the reflection (LumaBlend).
Render Buffer Export (Post-processing only)

This filter lets you save images from one of LightWave's internal buffers (Source). The Surface Custom option on the Source pop-up menu will create a grayscale image where each object surface can have a unique grayscale value. This was designed to allow post-processing effects to be applied on a surface-by-surface basis. A surface's grayscale value (0-255) is assigned using the Special Buffer option on the Advanced tab of the Surface Editor.
With the Destination pop-up menu, you can save the selected buffer image as a separate Image File, or replace the Rendered RGB or Rendered Alpha image data. (If you choose Image File, set the Image Type and enter the full path and filename in the File Name input field.)

TextureFilter

Use TextureFilter to add the selected texture to the image before use. You could use this filter to add, say, an animated Fractal Noise pattern to a simple black image. Since textures are three-dimensional, particularly procedurals, use the Axis setting to select the X, Y, or Z axis of the texture. (Note: The differences between the Axis selections can be subtle.)
You can use TextureFilter to see procedural textures in your Layout viewport! Basically, you apply the procedural texture(s) to the image using TextureFilter and then map the image to the surface. Here's how you do it:

1. Load an image into the Image Editor. It really doesn't matter what image you use, since it will be obscured by the textures.

2. On the Image Editor's Processing tab, add Texture Filter. Double-click it in the list window to access its options, then click the Texture button to access the Texture Editor.

3. Change the default initial Layer Type to Gradient. This provides the underlying color. You can leave it white or change it.

4. Add one or more procedural texture layers and set them up as you would normally.

5. Load your object and open the Surface Editor.

6. Click the Color attribute's Texture button. Leave Layer Type set to Image Map on the Texture Editor that appears.

7. Set Projection as you would normally and select your image from the Image pop-up menu.

The procedural will now appear in your viewport. This operation requires a lot of computation, and Layout may seem sluggish as it computes the texture.

Note: The image preview window on the Image Editor will show the texture as well. Thus, you can double-click the window to bring up the Image Viewer, from which you can save this compiled image. The saved image can then be mapped back onto the surface without all of the calculations required by TextureFilter. If you use an image sequence instead of a still image, you can even see an animated texture! Note that if an animated texture is applied to a still image, it will not appear animated.

Vector Blur (Post-processing only)

LightWave's normal motion blur (Camera properties) may need long render times because of multi-pass anti-aliasing. Vector Blur, on the other hand, can achieve great-looking motion blur in a fraction of the render time.
The Blur mode should be used in conjunction with the normal motion blur; however, the Vector mode can be used by itself, and you don't even need to use antialiasing! The result can be drastically reduced rendering time. Below is a comparison between regular motion blur and the two modes of Vector Blur.
Overlapping Objects

Since objects become transparent with motion blur, the filter needs something in the background of the blur. When Compensate Background is active, the filter does not use the backdrop and attempts to compensate for its absence. This works in most cases, but may not give a very realistic motion blur. When Compensate Background is not active, the filter uses the backdrop; however, it will not show an object behind another object. If you have overlapping objects, you may want to do some test renders to see if Vector Blur will provide acceptable results. If not, use LightWave's normal motion blur.

High Quality Blur

If you uncheck Compensate Background, you can activate the High Quality Blur setting. This provides better quality, but takes longer to render. In this mode, you will only be able to set the Vector Length setting.

Limits

The important thing to understand about Vector Blur is that it is a post process. As such, hidden geometry can't be blurred, and you may see problems with motion blur on shadows and moving textures. However, it can be a great help when used in conjunction with normal motion blur, by giving you better quality with lower antialiasing settings (Camera panel).

Video Legalize

The Video Legalize filter might be more appropriately named Hot Video, since it assumes that pure black, RGB 0, 0, 0, is mapped to the proper pedestal by the encoding device (e.g., 7.5 IRE for NTSC). The encoding device may not, however, correct hot pixels, that is, colors that exceed certain video specifications. This is where Video Legalize steps in.

Pixel values are generally scaled into IRE units using the Black Point, White Point, and Pedestal settings as follows:

Level = Pedestal + (100 - Pedestal) * (pixel - Black) / (White - Black)

White is always 100 IRE and NTSC black is 7.5 IRE. Thus, for NTSC, the level would be computed as:

Level = 7.5 + 92.5 * (pixel - Black) / (White - Black)
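The scaling formula above translates directly into code. A small sketch, where the function name and the 0.0 to 1.0 pixel convention are assumptions for illustration:

```python
def ire_level(pixel, black=0.0, white=1.0, pedestal=7.5):
    """Scale a pixel value into IRE units.

    Level = Pedestal + (100 - Pedestal) * (pixel - Black) / (White - Black)
    """
    return pedestal + (100.0 - pedestal) * (pixel - black) / (white - black)
```

With the NTSC defaults, pure black maps to 7.5 IRE and full white to 100 IRE.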
Note: The actual computation is more complex than the above, since other operations, like gamma correction, can happen.

The settings default to NTSC encoding, but you may also select PAL from the Encoding pop-up menu. (Note that you also need to click the Default button after selecting a new Encoding item.) You may change the individual settings from their defaults, if desired. The Correct pop-up menu determines how the image is corrected to fall within the specified limits.

Note: It is highly recommended that you use Video Legalize (as a post-process image filter) if you plan to use your images for video.

Video Tap (Post-processing only)

The Video Tap filter will save a second image using different camera and render option settings. This is perfect for times when you render a film-resolution scene but want to preview a video-resolution version on your VT3 or other digital disk recorder.
Virtual Darkroom

The Virtual Darkroom filter simulates the photographic capture of images. It is based on "A Model for Simulating the Photographic Development Process on Digital Images," by Joe Geigel and F. Kenton Musgrave, in the SIGGRAPH '97 conference proceedings.
Basic Settings

Enable Scattering will activate the internal scattering effect. Negative Width and Negative Height are the width and height, respectively, in millimeters, of the virtual negative. These values are used in scattering and grain calculations. Enable Grain will activate the grain effect. Selwyn Granularity controls the intensity of the grain: increasing this value increases grain, and decreasing it decreases grain.

Advanced Settings: Spectral Sensitivity Tab

If Output Type is set to Color, there will be six sets of RGB percentage controls. Each RGB trio specifies the percentage of the black and white output plane that is contributed by the input image plane named in the control. For example, the RGB trio in the upper middle defines how the output of the negative pass's spectral sensitivity module creates the green plane.

If Output Type is set to Black & White, you specify what percentage of the red, green, and blue planes of the input image are used when they are combined into a single black and white image. This transition takes place in the negative (first) pass. During the printing (second) pass, there is only a single input plane, so spectral sensitivity is not used.
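The Black & White combination can be sketched as a simple weighted sum of the three input planes. The example weights below are ordinary luminance-style percentages chosen for illustration; they are not Virtual Darkroom's defaults.

```python
def to_black_and_white(r, g, b, pct_r=30.0, pct_g=59.0, pct_b=11.0):
    """Combine R, G, and B input planes into one black-and-white plane.

    Each percentage controls how much of that input plane contributes
    to the single output plane, as in the negative (first) pass.
    """
    return (pct_r * r + pct_g * g + pct_b * b) / 100.0
```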
LensCap

Math Filter
NTSC_Legalize scans the rendered output for pixels with unsafe values of chrominance signal or composite signal amplitude when encoded into an NTSC color signal. Such illegal pixels can be corrected by reducing their Luminance or Saturation. Alternatively, the pixels can be colored black by activating the Black Out Illegal Pixels option. When applied, the rendered output will have a maximum composite signal amplitude of the Max IRE setting (110 IRE by default) and a maximum chroma amplitude of 50 IRE, in compliance with the RS-170A video specification.
PAL_Legalize scans the rendered output for pixels with unsafe values of chrominance signal or composite signal amplitude when encoded into a PAL color signal. Such illegal pixels can be corrected by reducing their Luminance or Saturation. Alternatively, the pixels can be colored black by activating the Black Out Illegal Pixels option. When applied, the rendered output will have a maximum composite signal amplitude of the Max IRE setting (110 IRE by default) and a maximum chroma amplitude of 50 IRE.
The Sprite Edger image filter "unmixes" anti-aliased object edges from the background. It can remove background color fringing if the object is not rendered over black. It can also clip the alpha channel to form a 1-bit (0 or 1) mask. It can even use this mask to clip the RGB image's edges, setting pixels to the background color if they are outside the mask.
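The "unmixing" step can be sketched from the standard over-compositing equation, comp = fg * alpha + bg * (1 - alpha), solved for the foreground. This is an illustrative reconstruction, not the plug-in's actual source; the function name is hypothetical.

```python
def unmix_edge(comp, alpha, background):
    """Recover the foreground value of an anti-aliased edge pixel
    that was composited over a known background color."""
    if alpha == 0.0:
        return background  # pure background pixel; nothing to recover
    return (comp - background * (1.0 - alpha)) / alpha
```

Clipping the alpha channel to a 1-bit mask is then just a threshold test on each alpha value (e.g., 1 if alpha is at least 0.5, else 0).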
Vignette
The WaterMark filter embosses a selected image into the lower-right quarter of the rendered image. You can select any loaded image; however, grayscale images on a black background work best. The Light Angle field determines the direction of the implied light source.
Wave Filter Image

Note: See the Wave Filter Image section for more information.

Motion Options

Hide Floating Windows On/Off

Scene Statistics

Pressing the (w) key will display the Scene Statistics panel, which provides various information about the current scene.