Friday, 14 November 2014


After Effects 3D Layers

The basic objects that you manipulate in After Effects are flat, two-dimensional (2D) layers. When you make a layer a 3D layer, the layer itself remains flat, but it gains additional properties: Position (z), Anchor Point (z), Scale (z), Orientation, X Rotation, Y Rotation, Z Rotation, and Material Options properties. Material Options properties specify how the layer interacts with light and shadows. Only 3D layers interact with shadows, lights, and cameras.

Any layer can be a 3D layer, except an audio-only layer. Individual characters within text layers can optionally be 3D sublayers, each with their own 3D properties. A text layer with Enable Per-character 3D selected behaves just like a precomposition that consists of a 3D layer for each character. All camera and light layers have 3D properties.

By default, layers are at a depth (z-axis position) of 0. In After Effects, the origin of the coordinate system is at the upper-left corner; x (width) increases from left to right, y (height) increases from top to bottom, and z (depth) increases from near to far. Some video and 3D applications use a coordinate system that is rotated 180 degrees around the x axis; in these systems, y increases from bottom to top, and z increases from far to near.
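To make the difference between the two conventions concrete, here is a minimal Python sketch (purely illustrative, not an After Effects API) that flips a point between them. A 180-degree rotation around the x axis simply negates y and z; the sketch only converts axis directions and ignores any origin offset between the two systems.

```python
# Conceptual sketch (not an After Effects API): convert a point from
# After Effects' coordinate system (y increases downward, z increases
# toward the far plane) into a y-up, z-toward-viewer system, i.e. a
# 180-degree rotation around the x axis.

def ae_to_y_up(point):
    """Flip an (x, y, z) point from y-down/z-far to y-up/z-near."""
    x, y, z = point
    return (x, -y, -z)

print(ae_to_y_up((100.0, 50.0, 25.0)))   # -> (100.0, -50.0, -25.0)
```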

You can transform a 3D layer relative to the coordinate space of the composition, the coordinate space of the layer, or a custom space by selecting an axis mode.


You can add effects and masks to 3D layers, composite 3D layers with 2D layers, and create and animate camera and light layers to view or illuminate 3D layers from any angle. When rendering for final output, 3D layers are rendered from the perspective of the active camera.

Friday, 31 October 2014


Max Mental Ray BSP

BSP tree stands for Binary Space Partition tree.

The default BSP ray trace acceleration method is often used for small/medium Max scenes (i.e. fewer than one million triangles/polygons). Users often press the hotkey 7 to display the number of polygons/faces/triangles in the scene. It is worth noting that although the BSP parameters sit under the ray tracing group, they only affect the geometry, as opposed to reflections, etc.

This ray trace acceleration method essentially helps mental ray cast rays quickly by creating an imaginary bounding box around the entire scene and subdividing it. The subdivided patches/cells inside the bounding box are technically designated as voxels. Mental ray keeps splitting the scene's voxels along the three axes (X, Y, Z), aiming for a roughly equal number of triangles in each, until the maximum depth is reached.

The "Size" and "Depth" parameters help mental ray to determine the total number of triangles (i.e.leafs)to be processed for ray casting/testing. The higher the depth values, the fewer the voxels will be. Fewer voxels equals faster rendering times, as mental ray will use fewer voxels to test the rays against.The default Size value of 10 sets the minimum number of objects to be found in the scene before a voxel is split (in all three axes (i.e. X; Y; Z). Smaller values equates to more voxels and slower rendering times.

When shooting a ray there are two phases:

a) Testing triangles: while the ray traverses voxels, it is tested against the triangles (i.e. leaves) each voxel contains. If a voxel happens to hold 1,000 triangles (i.e. leaves), far more intersection tests are needed than if it holds only 10, and rendering slows down accordingly. With this in mind, the user's goal should be to reduce the average and maximum number of leaves per voxel in the BSP tree. The total rendering time is a combination of the time it takes to create the voxels and move down the tree depth (i.e. preprocessing/translation) and the time spent testing the triangles (i.e. leaves) during rendering.

b) Traversing the tree: moving down the BSP tree depth while checking each voxel the ray crosses along every axis. To see a visual representation of the BSP process, go to the mental ray Processing parameters rollout and, under the Diagnostics parameters, enable the Visual group:

This visual group consists of the following:

1- Sampling Rate

2- Coordinate Space

3- Photon

4- BSP

5- Final Gather

The BSP visual diagnostic is divided into three different colors: blue, green, and red.
Blue areas represent the lower areas of subdivision (i.e. less computation).
Green areas represent the middle areas of subdivision (i.e. intermediate computation).
Red areas represent the greater areas of subdivision (i.e. high computation).

Production companies prefer to see a mix of all three colors in their diagnostics, which indicates that mental ray is efficiently choosing which areas of the geometry to subdivide and which to leave alone. To fine-tune the BSP values, simply use a plain/simple texture or colour in the "material override" toggle and render at a small resolution (i.e. 500x500 pixels).
The "material override" function has been covered in detail in an earlier post.

Wednesday, 15 October 2014


Deflectors

Deflectors are used to deflect particles or to affect dynamics systems.
          
Topics in this section

POmniFlect Space Warp
PDynaFlect Space Warp
SOmniFlect Space Warp
SDynaFlect Space Warp
UOmniFlect Space Warp
UDynaFlect Space Warp
SDeflector Space Warp
UDeflector Space Warp
Deflector Space Warp

POmniFlect is a planar version of the omniflector type of space warp. It provides enhanced functionality over that found in the original Deflector space warp, including refraction and spawning capabilities.

PDynaFlect (planar dynamics deflector) is a planar version of the dynaflector, a special class of space warp that lets particles affect objects in a dynamics situation. For example, if you want a stream of particles to strike an object and knock it over, like the stream from a firehose striking a stack of boxes, use a dynaflector.

SOmniFlect is the spherical version of the omniflector type of space warp. It provides more options than the original SDeflector. Most settings are the same as those in POmniFlect. The difference is that this space warp provides a spherical deflection surface rather than the planar surface. The only settings that are different are in the Display Icon area, in which you set the Radius, instead of the Width and Height.

The SDynaFlect space warp is a spherical dynaflector. It’s like the PDynaFlect warp, except that it’s spherical, and its Display Icon spinner specifies the icon's Radius value.

UOmniFlect, the universal omniflector, provides more options than the original UDeflector. This space warp lets you use any other geometric object as a particle deflector. The deflections are face accurate, so the geometry can be static, animated, or even morphing or otherwise deforming over time.  

The UDynaFlect space warp is a universal dynaflector that lets you use the surface of any object as both the particle deflector and the surface that reacts dynamically to the particle impact.

The SDeflector space warp serves as a spherical deflector of particles.

The UDeflector is a universal deflector that lets you use any object as a particle deflector.

The Deflector space warp acts as a planar shield to repel the particles generated by a particle system. For example, you can use Deflector to simulate pavement being struck by rain. You can combine a Deflector space warp with a Gravity space warp to produce waterfall and fountain effects.
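Under the hood, every planar deflector boils down to reflecting a particle's velocity about the plane when the particle crosses it. The Python sketch below is a minimal, purely conceptual illustration of that idea; it is not 3ds Max code, and the bounce factor is just a stand-in for the kind of Bounce setting these space warps expose.

```python
# Conceptual planar-deflector step (not 3ds Max code): if a particle falls
# below the plane y = 0, push it back onto the plane and reflect the
# vertical component of its velocity, scaled by a bounce factor.

def deflect(position, velocity, bounce=0.6):
    x, y, z = position
    vx, vy, vz = velocity
    if y < 0.0 and vy < 0.0:          # particle has crossed the plane moving down
        y = 0.0                        # clamp back onto the deflector plane
        vy = -vy * bounce              # reflect and damp the vertical velocity
    return (x, y, z), (vx, vy, vz)

pos, vel = deflect((2.0, -0.1, 0.0), (1.0, -4.0, 0.0))
print(pos, vel)                        # -> (2.0, 0.0, 0.0) (1.0, 2.4, 0.0)
```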

Monday, 29 September 2014


Photo manipulation

Photo manipulation (also called photo shopping or—before the rise of Photoshop software—airbrushing) is the application of image editing techniques to photographs in order to create an illusion or deception (in contrast to mere enhancement or correction) after the original photographing took place.

Types of digital photo manipulation
In digital editing, photographs are usually taken with a digital camera and input directly into a computer. Transparencies, negatives or printed photographs may also be digitized using a scanner, or images may be obtained from stock photography databases. With the advent of computers, graphics tablets, and digital cameras, the term image editing encompasses everything that can be done to a photo, whether in a darkroom or on a computer. Photo manipulation is often much more explicit than subtle alterations to color balance or contrast and may involve overlaying a head onto a different body or changing a sign's text, for example. Image editing software can be used to apply effects and warp an image until the desired result is achieved. The resulting image may have little or no resemblance to the photo (or photos in the case of compositing) from which it originated. Today, photo manipulation is widely accepted as an art form.
There are several subtypes of digital image-retouching:

Technical retouching
Manipulation for photo restoration or enhancement: adjusting colors, contrast, and white balance (i.e. gradational retouching), sharpening, and removing elements or visible flaws on skin or materials.
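As a rough illustration of this kind of technical retouching, the sketch below uses the Pillow library to nudge saturation, contrast, and sharpness on a photograph. The file names are placeholders and the enhancement factors are arbitrary starting points, not recommended values.

```python
# Hedged example of basic technical retouching with Pillow.
# "input.jpg" / "retouched.jpg" are placeholder file names.
from PIL import Image, ImageEnhance

img = Image.open("input.jpg")

img = ImageEnhance.Color(img).enhance(1.05)      # slight saturation boost
img = ImageEnhance.Contrast(img).enhance(1.10)   # mild contrast lift
img = ImageEnhance.Sharpness(img).enhance(1.20)  # gentle sharpening

img.save("retouched.jpg")
```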

Creative retouching
Used as an art form or for commercial use to create more sleek and interesting images for advertisements. Creative retouching could be manipulation for fashion, beauty or advertising photography such as pack-shots (which could also be considered inherently technical retouching in regards to package dimensions and wrap-around factors). One of the most prominent disciplines in creative retouching is image compositing. Here, the digital artist uses multiple photos to create a single image. Today, 3D computer graphics are used more and more to add extra elements or even locations and backgrounds. This kind of image composition is widely used when conventional photography would be technically too difficult or impossible to shoot on location or in studio.

Use in glamour photography

The photo manipulation industry has often been accused of promoting or inciting a distorted and unrealistic image of self, most specifically in younger people. The world of glamour photography is one specific industry that has been heavily involved with the use of photo manipulation (an obviously concerning element, as many people look up to celebrities in search of embodying the 'ideal figure').

Photo shopping
Photo shopping is a neologism for the digital editing of photos. The term originates from Adobe Photoshop, the image editor most commonly used by professionals for this purpose; however, any image-editing program could be used, such as Paint Shop Pro, Corel Photo-Paint, Pixelmator, Paint.NET, or GIMP. Adobe Systems, the publisher of Adobe Photoshop, discourages use of the term "Photoshop" as a verb out of concern that it may become a generic trademark, undermining the company's trademark.

Monday, 15 September 2014


Color Grading

Color grading is the process of altering and enhancing the color of a motion picture, video image, or still image either electronically, photo-chemically or digitally. The chemical process is also referred to as color timing and is typically performed at a photographic laboratory. Modern color correction, whether for theatrical film, video distribution, or print is generally done digitally in a color suite.

Primary and secondary color correction
Primary color correction affects the whole image utilizing control over intensities of red, green, blue, gamma (mid tones), shadows (blacks) and highlights (whites) of the entire frame. Secondary correction is based on the same types of processing used for Chroma Keying to isolate a range of color, saturation and brightness values to bring about alterations in luminance, saturation and hue in only that range, while having a minimal or usually no effect on the remainder of the color spectrum. Using digital grading, objects and color ranges within the scene can be isolated with precision and adjusted. Color tints can be manipulated and visual treatments pushed to extremes not physically possible with laboratory processing. With these advancements, the color correction process became increasingly similar to well-established digital painting techniques and ushered forth a new era of digital cinematography. 
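A primary correction of this kind can be sketched as a simple lift/gamma/gain operation applied to every pixel of the frame. The NumPy snippet below is one common textbook formulation, not the algorithm of any particular grading suite; `frame` is assumed to hold RGB values normalized to the 0-1 range, and the parameters can also be per-channel triples to push red, green, and blue independently.

```python
import numpy as np

def primary_grade(frame, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a simple lift (shadows) / gamma (mid-tones) / gain (highlights)
    correction to an RGB image with values in the 0-1 range."""
    graded = frame * gain + lift              # gain scales highlights, lift raises blacks
    graded = np.clip(graded, 0.0, 1.0)
    return graded ** (1.0 / gamma)            # gamma bends the mid-tones

# Example: lift the blacks slightly, brighten mid-tones, pull highlights down a touch.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
graded = primary_grade(frame, lift=0.02, gamma=1.1, gain=0.95)
```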


Masks, Mattes, Power Windows
The evolution of digital color correction tools advanced to the point where the colorist could use geometric shapes (like mattes or masks in photo software such as Photoshop) to isolate color adjustments to specific areas of an image. These tools can highlight a wall in the background and color only that wall—leaving the rest of the frame alone—or color everything but that wall. Subsequent color correctors (typically software-based) have the ability to use spline-based shapes for even greater control over isolating color adjustments. Color keying is also used for isolating areas to adjust.

Inside and outside of area-based isolations, digital filtration can be applied to soften, sharpen or mimic the effects of traditional glass photographic filters in nearly infinite degrees.
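A "power window" can be thought of as nothing more than a soft-edged mask that blends a corrected version of the frame with the original. The sketch below is conceptual only (not the implementation of any grading system): it builds a feathered elliptical window with NumPy and uses it to confine a brightness correction to one region, leaving everything outside the window untouched.

```python
import numpy as np

def elliptical_window(height, width, center, radii, feather=0.2):
    """Return a 0-1 mask that is 1 inside an ellipse and falls off smoothly
    over a 'feather' band around its edge."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = center
    ry, rx = radii
    dist = np.sqrt(((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2)  # 1.0 on the ellipse edge
    return np.clip((1.0 + feather - dist) / feather, 0.0, 1.0)

frame = np.random.rand(540, 960, 3).astype(np.float32)
window = elliptical_window(540, 960, center=(270, 480), radii=(150, 250))[..., None]

brightened = np.clip(frame * 1.3, 0.0, 1.0)             # the correction applied "inside"
graded = window * brightened + (1.0 - window) * frame   # original kept outside the window
```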

Wednesday, 27 August 2014


Geometry Caching

You can save the deformations (skin and non-skin) of your polygon mesh, NURBS surface (including curves), and subdivision surface objects to a server or local hard drive by caching them to a geometry cache.

Geometry caches are special Maya files that store vertex transformation data. They are useful when you want to reduce the number of calculations Maya performs when playing back or rendering scenes that contain many deforming objects, and they allow you to easily mix and edit your object's deformations in an intuitive, nonlinear manner. With geometry caches, you can also exchange point data with other supported software packages through the Autodesk® FBX® plug-in. For example, you can cache blend shapes so that you can further modify their deformations by replacing or deleting geometry cache frames, or cache a character's high-resolution skin with many deformations to speed up the playback or rendering of its scene. You can create geometry caches for your objects from the Geometry Cache menu.
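As a sketch of how this can be scripted, the snippet below uses the `maya.cmds.cacheFile` command to write a cache for a deforming shape. The shape name, directory, and frame range are placeholders, and the available flags can vary between Maya versions, so treat it as a starting point rather than a drop-in recipe.

```python
# Hedged sketch: writing a geometry cache from Maya's Python (maya.cmds).
# "pSphereShape1", the directory, and the frame range are placeholder values.
import maya.cmds as cmds

cmds.cacheFile(
    fileName='mySphereCache',        # base name of the cache files
    directory='/projects/caches',    # where the cache files are written
    points='pSphereShape1',          # the deforming shape to cache
    startTime=1,
    endTime=120,
    format='OneFile',                # a single cache file for the whole range
)
# Attaching the resulting cache back onto the geometry is normally done
# through the Geometry Cache menu rather than in a single command.
```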

Wednesday, 13 August 2014


Redirect

Makes the current character set redirectable. When a character set is redirectable, it means that you can change the translation and orientation of already established (motion capture) animation.

Rotation and Translation

Creates a rotation and translation control for the current character set.

 Rotation Only

Creates a rotation redirection control for the current character set. The rotation redirection control appears at the origin of the current character. The rotation redirection control is useful if you want to change the orientation of your character set’s pivot. For example, you can manipulate a rotation redirection control to get a character to turn 90 degrees (around a corner perhaps) halfway through its walk cycle.

Translation Only

Creates a translation redirection control for the current character set. The translation redirection control appears at the origin of the current character.


The translation redirection control is useful if you want to change the translation of the point around which your object pivots. For example, you can manipulate the translation redirection control to change the place at which a character lands from a jump.
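Conceptually, a redirection control just applies an extra rotation (and/or translation) on top of existing animation from a chosen frame onward. The NumPy sketch below illustrates the 90-degree-turn example above on a bare list of root positions; it is a conceptual illustration only, not what Maya does internally.

```python
import numpy as np

def redirect_rotation(path, pivot_index, angle_degrees):
    """Rotate the remainder of a motion path (list of XZ ground positions)
    about the position reached at 'pivot_index'."""
    path = np.asarray(path, dtype=float)
    a = np.radians(angle_degrees)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    pivot = path[pivot_index]
    redirected = path.copy()
    # Everything from the pivot frame onward is rotated around the pivot position.
    redirected[pivot_index:] = (path[pivot_index:] - pivot) @ rot.T + pivot
    return redirected

# A straight walk along +X, turned 90 degrees halfway through.
walk = [(x, 0.0) for x in range(10)]
print(redirect_rotation(walk, pivot_index=5, angle_degrees=90))
```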