In
very basic terms, ISO is the level of sensitivity of your camera to available
light. The lower the ISO number, the less sensitive it is to light, while a
higher ISO number increases the sensitivity of your camera. The component
within your camera whose sensitivity can change is called the “image sensor” or simply
the “sensor”. It is the most important (and most expensive) part of a camera, and it
is responsible for gathering light and transforming it into an image. With
increased sensitivity, your camera sensor can capture images in low-light
environments without having to use a flash. But higher sensitivity comes at an
expense – it adds grain or “noise” to the pictures. Every camera has something
called “Base ISO”, which is typically the lowest ISO number at which the sensor
can produce the highest image quality without adding noise to the picture. On
most newer Nikon cameras, such as the Nikon D5100, the base ISO is typically
200, while most Canon digital cameras have a base ISO of 100. So, optimally,
you should always try to stick to the base ISO to get the highest image
quality. However, it is not always possible to do so, especially when working
in low-light conditions. Typically, ISO numbers start from 100-200 (Base ISO)
and increase in a geometric progression (powers of two). So, the ISO
sequence is: 100, 200, 400, 800, 1600, 3200, 6400, and so on. The important thing
to understand is that each step between the numbers effectively doubles the
sensitivity of the sensor. So, ISO 200 is twice as sensitive as ISO 100,
while ISO 400 is twice as sensitive as ISO 200. This makes ISO 400 four
times as sensitive to light as ISO 100, and ISO 1600 sixteen times as
sensitive to light as ISO 100, and so forth. What does it mean when a
sensor is sixteen times more sensitive to light? It means that it needs only
one-sixteenth of the exposure time to capture the same image!
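To make the arithmetic concrete, here is a tiny Python sketch of the doubling rule described above. It assumes a fixed aperture and constant scene brightness, and the base values are hypothetical, so treat it as an illustration rather than an exposure calculator.

```python
# Illustration of the ISO/shutter-speed trade-off: each doubling of ISO
# halves the exposure time needed. Base values below are hypothetical.

BASE_ISO = 100        # base ISO of an imaginary camera
BASE_SHUTTER = 1.0    # seconds for a correct exposure at the base ISO

def shutter_time(iso, base_iso=BASE_ISO, base_shutter=BASE_SHUTTER):
    """Required exposure time scales inversely with ISO."""
    return base_shutter * base_iso / iso

for iso in (100, 200, 400, 800, 1600, 3200, 6400):
    print(f"ISO {iso:>4}: {shutter_time(iso):.4f} s")
# ISO 1600 needs 1/16 of the time required at ISO 100, as noted above.
```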
Thursday, 11 December 2014
3D layer interactions, render order, and collapsed transformations
The positions of certain kinds of layers in the layer stacking
order in the Timeline panel prevent groups of 3D layers from being processed
together to determine intersections and shadows.
A shadow cast by a 3D layer does not affect a 2D layer or any
layer that is on the other side of the 2D layer in the layer stacking order.
Similarly, a 3D layer does not intersect with a 2D layer or any layer that is
on the other side of the 2D layer in the layer stacking order. No such
restriction exists for lights. Just like 2D layers, the following types of layers also
prevent 3D layers on either side from intersecting or casting shadows on one
another (a conceptual sketch of this grouping rule follows the list):
· An adjustment layer
· A 3D layer with a layer style applied
· A 3D precomposition layer to which an effect, closed mask (with mask mode other than None), or track matte has been applied
· A 3D precomposition layer without collapsed transformations
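The grouping rule above can be pictured as partitioning the layer stack into runs of 3D layers separated by group-breaking layers. The Python sketch below is purely conceptual; it is not the After Effects scripting API, and the layer-kind labels are made up for illustration.

```python
# Conceptual sketch (not the After Effects API): split a top-to-bottom layer
# stack into groups of 3D layers that can intersect and shadow one another.
# Per the rules above, 2D layers, adjustment layers, 3D layers with layer
# styles, and the listed precomposition cases all break a group; lights don't.

GROUP_BREAKERS = {"2d", "adjustment", "3d_with_style",
                  "3d_precomp_with_effect", "3d_precomp_uncollapsed"}

def render_groups(stack):
    """Return lists of consecutive 3D layers processed together."""
    groups, current = [], []
    for kind in stack:
        if kind in GROUP_BREAKERS:
            if current:
                groups.append(current)
            current = []
        elif kind == "3d":
            current.append(kind)
        # "light" and other non-breaking kinds fall through untouched
    if current:
        groups.append(current)
    return groups

stack = ["3d", "3d", "light", "2d", "3d", "adjustment", "3d"]
print(render_groups(stack))
# -> [['3d', '3d'], ['3d'], ['3d']]  (the 2D and adjustment layers split them)
```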
Saturday, 29 November 2014
Diwali Celebration
In the
midst of today's busy lifestyle, Diwali gives an opportunity to pause and be
grateful for what we have, to make special memories with family and friends, to
laugh and enjoy what life offers us. Though the festival of Dipavali has
undergone some changes in due course of time, it has continued to be
celebrated since time immemorial. Every year, the festive season of Diwali
comes back with all the excitement and merriment. Times may have undergone a
sea change but customs and traditions remain the same. Diwali is one of the
most colorful, sacred and lovely festivals of the Hindus. It is celebrated
every year with great joy and enthusiasm throughout the length and breadth of
the country. It is a festival of lights and festivities. It falls about
twenty days after Dussehra and marks the advent of winter. It is to the Hindus
what Christmas is to the Christians. It lends charm and delight to our life.
On this special occasion, Maac Preet Vihar organized dance, singing and best-dressing competitions. More than 100 students participated, and prizes were awarded for first place, second runner-up and third runner-up.
Wishing you all a very happy Diwali.
Friday, 14 November 2014
After Effects 3D Layers
The
basic objects that you manipulate in After Effects are flat, two-dimensional
(2D) layers. When you make a layer a 3D layer, the layer itself remains flat,
but it gains additional properties: Position (z), Anchor Point (z), Scale (z),
Orientation, X Rotation, Y Rotation, Z Rotation, and Material Options properties.
Material Options properties specify how the layer interacts with light and
shadows. Only 3D layers interact with shadows, lights and cameras.
Any layer can be a 3D layer, except an audio-only layer.
Individual characters within text layers can optionally be 3D sublayers, each
with their own 3D properties. A text layer with Enable Per-character 3D
selected behaves just like a precomposition that consists of a 3D layer for
each character. All camera and light layers have 3D properties.
By default, layers are at a depth (z-axis position) of 0. In
After Effects, the origin of the coordinate system is at the upper-left corner;
x (width) increases from left to right, y (height) increases from top to
bottom, and z (depth) increases from near to far. Some video and 3D
applications use a coordinate system that is rotated 180 degrees around the x
axis; in these systems, y increases from bottom to top, and z increases from
far to near.
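As a worked example of that 180-degree difference, the small Python function below converts a point from After Effects' y-down, z-away coordinate system into a y-up, z-toward-viewer system. The function name is made up for illustration; the math is just the rotation described above.

```python
# A 180-degree rotation around the x axis maps (x, y, z) -> (x, -y, -z):
# y flips from "down is positive" to "up is positive", and z flips from
# "far is positive" to "near is positive". The name below is illustrative.

def ae_to_y_up(x: float, y: float, z: float) -> tuple:
    """Convert an After Effects-style point to a y-up, z-toward-viewer system."""
    return (x, -y, -z)

print(ae_to_y_up(100.0, 50.0, 25.0))   # -> (100.0, -50.0, -25.0)
```

A real conversion would also account for the two systems' differing origins (After Effects places the origin at the upper-left corner); this sketch shows only the axis flip.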
You can transform a 3D layer relative to the coordinate space of
the composition, the coordinate space of the layer, or a custom space by
selecting an axis mode.
You
can add effects and masks to 3D layers, composite 3D layers with 2D layers, and
create and animate camera and light layers to view or illuminate 3D layers from
any angle. When rendering for final output, 3D layers are rendered from the
perspective of the active camera.
Friday, 31 October 2014
Max Mental Ray BSP
BSP tree stands for
Binary Space Partition tree.
The default BSP ray
trace acceleration method is often used for small/medium Max scenes (i.e. less
than one million triangles/polygons). Users often press the hotkey 7 to
determine the number of polygons/faces/triangles in the scene. It
is worth noting that although the BSP parameters are under the ray tracing
group, they only affect the geometry, as opposed to reflections, etc.
This ray trace acceleration method essentially helps mental ray to cast rays in
a speedy manner by creating an imaginary bounding box around the entire scene,
with subdivisions. These subdivided patches/cells inside the bounding box
are technically designated as voxels. Mental ray usually splits the scene's voxels
along the three axes (X, Y, Z) into almost equal numbers of triangles,
until the maximum depth is reached.
The "Size" and "Depth" parameters help mental ray to determine the total number of triangles (i.e.leafs)to be processed for ray casting/testing. The higher the depth values, the fewer the voxels will be. Fewer voxels equals faster rendering times, as mental ray will use fewer voxels to test the rays against.The default Size value of 10 sets the minimum number of objects to be found in the scene before a voxel is split (in all three axes (i.e. X; Y; Z). Smaller values equates to more voxels and slower rendering times.
When
shooting a ray, there are two phases:
a) Whilst checking/hitting voxels, it will touch triangles (i.e. leaves) in the process. If perchance there are 1000 triangles (i.e. leaves) in a voxel, each will be tested up to 40 times (the default Depth value), so the rendering times will be slow. If there are only 10 triangles (i.e. leaves), the process will be faster. With this in mind, the user's goal should be to reduce the average and maximum number of leaves in the BSP tree. The total rendering time is a combination of the time it takes to create the voxels and move down the tree depth (i.e. pre-processing/translation), and the final time to check/split the triangles (i.e. leaves) during rendering.
b) Moving down the BSP tree depth whilst checking/hitting
all axes of each voxel. To get a visual representation of the BSP
process, simply go to the mental ray Processing parameters rollout. Under the
"Diagnostics" parameters, enable the "Visual" group:
This Visual group consists of the following:
1 - Sampling Rate
2 - Coordinate Space
3 - Photon
4 - BSP
5 - Final Gather
The BSP visual diagnostics are divided into three different colors: blue, green and red.
Blue areas represent the lowest levels of subdivision (i.e. less computation).
Green areas represent the middle levels of subdivision (i.e. intermediate computation).
Red areas represent the greatest levels of subdivision (i.e. high computation).
Production companies prefer to see a mix of all three colors in their diagnostics, which is an indication that mental ray is efficiently choosing which areas of the geometry to subdivide. To fine-tune the BSP values, simply assign a nice/simple texture or colour with the "material override" toggle and render at a small resolution (e.g. 500x500 pixels).
The "material override" function has been covered in detail
This visual group consists of the following:
1-Sampling rate
2-coordinate space
3-Photon
4-BSP
5-Final Gather
The BSP visual diagnostics is divided by three different colors: Blue, Green and Red.Blue areas represent the lower areas of subdivision (i.e. less computation)
Green areas represent the middle areas of subdivision (i.e. intermediate computation)
Red areas represent greater areas of subdivision (i.e. high computation).
Production companies prefer to have a mix of all three colors in their diagnostics; which is an indication that mental ray is efficiently choosing the areas of the geometry to subdivide and otherwise. To fine-tune the BSP values, simply use a nice/simple texture or colour in "material override" toggle at a small resolution (i.e. 500x500 pixels).
The "material override" function has been covered in detail
Wednesday, 15 October 2014
Deflectors
Deflectors are used to deflect particles or to affect
dynamics systems.
Topics in this section
POmniFlect Space Warp
POmniFlect is a planar version of
the omniflector type of space warp. It provides enhanced functionality over
that found in the original Deflector space warp, including refraction and
spawning capabilities.
PDynaFlect (planar dynamics
deflector) is a planar version of the dynaflector, a special class of space warp that lets particles affect
objects in a dynamics situation. For example, if you want a stream of particles
to strike an object and knock it over, like the stream from a firehose striking
a stack of boxes, use a dynaflector.
SOmniFlect is the spherical version
of the omniflector type of space warp. It provides more options than the
original SDeflector. Most settings are the same as those in POmniFlect. The difference is that this space warp provides a spherical
deflection surface rather than a planar surface. The only settings that are
different are in the Display Icon area, in which you set the Radius, instead of
the Width and Height.
The SDynaFlect space warp is a
spherical dynaflector. It’s like the PDynaFlect warp, except that it’s spherical, and its Display Icon
spinner specifies the icon's Radius value.
UOmniFlect, the universal omniflector, provides more options than the original UDeflector. This
space warp lets you use any other geometric object as a particle deflector. The
deflections are face accurate, so the geometry can be static, animated, or even
morphing or otherwise deforming over time.
The UDynaFlect space warp is a
universal dynaflector that lets you use the surface of any object as both the
particle deflector and the surface that reacts dynamically to the particle
impact.
Monday, 29 September 2014
Photo manipulation
Photo
manipulation (also
called photo shopping or—before the rise of Photoshop software—airbrushing) is the
application of image editing techniques
to photographs in order to create an illusion or deception (in contrast to mere enhancement or correction)
after the original photographing took place.
Types
of digital photo manipulation
In digital editing, photographs are usually
taken with a digital camera and input directly into a computer. Transparencies,
negatives or printed photographs may also be digitized using a
scanner, or images may be obtained from stock photography databases. With the
advent of computers, graphics tablets, and digital cameras, the term image editing encompasses
everything that can be done to a photo, whether in a darkroom or on a
computer. Photo manipulation is often much more explicit than subtle
alterations to color balance or contrast and may involve overlaying a head onto
a different body or changing a sign's text, for example. Image editing software
can be used to apply effects and warp an image until the desired result is
achieved. The resulting image may have little or no resemblance to the photo
(or photos in the case of compositing) from which it originated. Today, photo
manipulation is widely accepted as an art form.
There are several subtypes of digital
image-retouching:
Technical retouching
Manipulation for photo restoration or
enhancement: adjusting colors, contrast and white balance (i.e. gradational
retouching), sharpening, and removing elements or visible flaws on skin or materials.
Creative retouching
Used as an art form, or commercially to
create sleeker and more interesting images for advertisements. Creative
retouching could be manipulation for fashion, beauty or advertising photography
such as pack-shots (which could also be considered inherently technical
retouching with regard to package dimensions and wrap-around factors). One of
the most prominent disciplines in creative retouching is image compositing.
Here, the digital artist uses multiple photos to create a single image.
Today, 3D computer graphics are used more and more to add extra
elements or even locations and backgrounds. This kind of image composition is
widely used when conventional photography would be technically too difficult or
impossible to shoot on location or in studio.
Use in glamour photography
The photo manipulation
industry has often been accused of promoting or inciting a distorted and
unrealistic image of self, most specifically in younger people. The world of glamour photography is one specific industry which has been
heavily involved with the use of photo manipulation (an obviously concerning
element, as many people look up to celebrities in search of embodying the 'ideal
figure').
Photo shopping
Photo
shopping is a neologism for the digital editing of
photos. The term originates from Adobe Photoshop, the image editor most commonly used by professionals for this purpose; however, any image-editing program could be used, such as Paint
Shop Pro, Corel
Photo-Paint, Pixelmator, Paint.NET,
or GIMP. Adobe Systems,
the publisher of Adobe Photoshop, discourages use of the term "Photoshop"
as a verb out of concern that it may become a generic
trademark, undermining the company's rights to the mark.