MOHIEDDIN MORADI
mohieddinmoradi@gmail.com
DREAM
IDEA
PLAN
IMPLEMENTATION
Agenda
Interest in 3DTV
Depth cues
Binocular information processing
Parallax
Stereo window
Floating window
Playing with the 3D images
The 3D Camera Rig
Showing 3D in the home (displaying 3D video)
Autostereoscopic displays
3D Image Processor
Variety of 3D/2D Display Functions
Near-term challenges for 3D
How to shoot 3D video?
Recording and post-production in 3D
3D generation from 2D
History of 3D (stereoscopic) content
1858: The first three-dimensional image (a still picture) is shown.
1895: Motion pictures are invented by the Lumière brothers of France (Auguste and Louis).
1922: Premiere of the first 3D movie, "The Power of Love", featuring the anaglyph process.
1952–1954: The first wave of 3D cinema: as the number of movie-goers declined with the growing popularity of television, studios turned to 3D to draw audiences back.
1981–1984: The second wave of 3D cinema: many directors start to actively produce 3D movies again when CATV channels begin to broadcast 3D programs.
2003–now: The modern 3D revival.
An Introduction to 3D TV
Interest in 3DTV
Interest in watching 3D at home: 16%
Interest in purchasing a 3D-capable TV in the next 3 years: 25%
Wearing glasses has no impact on purchase: 45%
Interest in 3D content:
Action / Adventure 65%
Nature / Wildlife 60%
Sports 45%
Providers have announced plans to launch 3D networks:
DirecTV: announced launch of 4 channels
Verizon FiOS: will carry the 2010 NBA All-Star Game
Discovery and Sony / IMAX: JV to establish a 24/7 dedicated 3D television network
ESPN / Sony: covering 85+ live events in 2010
Sky: 3D broadcast service to pubs and clubs
Asia Pacific region: launched 3D TV broadcast services
3D TV evolution
First generation (now): stereoscopic television
– left and right pictures are "fused" by the brain
– 3D glasses necessary
Second generation: autostereoscopic
– multiple viewpoints
– glasses-free
Third generation: integral imaging and holography ("true 3D")
– the entire object wave is recorded
– replicates the physical 3D light field
Depth Cues
 There are eight depth cues for estimating the relative distance of objects:
 focus (accommodation)
 perspective
 occlusion
 lighting and shading
 colour intensity and contrast
 relative movement
 vergence and stereopsis
The first five have long been used by artists, illustrators and designers; relative movement adds depth in film and video (depth in moving objects); vergence and stereopsis are the most powerful depth cues for estimating depth.
We can estimate depth by using any combination of the first six depth cues, which do not require two eyes.
Focus (accommodation)
 Focus distance (accommodation distance)
 1. We scan over the various objects in the scene and continually refocus on each object.
 2. Our brains remember how we focus and build up a memory of the relative distance of each object compared to all the others in the scene.
Out of focus
In focus
Perspective
 Vanishing point: the point, often on the horizon, where objects become so small they disappear altogether.
 Our brains are constantly searching for the vanishing point in every scene we look at.
 Straight lines and the relative size of objects help to build a map in our minds of the relative distance of the objects in the scene.
Vanishing
point
The brain understands that things get
smaller as they move away.
Occlusion
 Occlusion: objects at the front of a scene hide objects further back.
 When a shape appears broken by another object, we assume the broken object is further away, behind the object causing the breakage.
The car obscures the house, therefore the car must be in front.
Lighting and shading
1- Objects will appear brighter on the side facing the light source and darker on the
side facing away from the light source.
2- Objects also produce shadows which darken other objects.
 Our brains can build a map of the shape, and relative position of objects in a
scene from the way light falls on them and the pattern of the shadows caused.
Simple shapes
with no shading.
Same shapes
with shading
gives a powerful
sense of depth.
The light on the left and shadow on the right provides a very strong sense of 3D shape and form.
Colour intensity and contrast
1. Colour intensity is reduced in distant objects.
2. Contrast is reduced in distant objects.
 We can build a map in our minds of the relative distance of objects from their colour intensity and level of contrast.
Unsaturated colours (faded colour)
Saturated colours (intense colour)
Combining depth cues
• The first five (focus, perspective, occlusion, lighting and shading, colour intensity and contrast) have been used by artists, illustrators and designers for hundreds of years to simulate a 3D scene in paintings and drawings.
Relative movement
 As we walk through a scene, close objects appear to move faster than far objects.
 A very powerful cue: the relative movement of each object compared to the others.
Cartoonists have used this to give an impression of 3D space in animations.
Film and television use it to enhance the sense of depth in movies and television programmes.
fast
slow
An example:
• As the viewpoint moves side to side, the objects in the distance appear to move more slowly than the objects close to the camera.
Vergence
 Divergence and convergence.
 Our brains can calculate distances from divergence and convergence.
 Film and video producers can use divergence as a trick to give the illusion that objects are further away.
 This should be used sparingly because divergence is not a natural eye movement and may cause eye strain.
Stereopsis (IPD difference)
• We have two eyes.
• Mean Inter-Ocular Distance (IOD), sometimes called Inter-Pupillary Distance (IPD).
IOD (IPD): about 65 mm
Stereopsis (IPD difference)
 Stereopsis refers to the small differences between what the left and right eyes see in everything we look at.
 From these differences, our brains calculate which objects are close and which are further away.
The statue appears slightly to the right in the left-eye image when compared to the buildings behind it in the right-eye image. The brain interprets this as the statue being nearer than the buildings.
Left eye Right eye
How 3D Works
• Left eye sees one view
• Right eye sees a slightly different view
• Our brain puts them together and creates the perception of depth
• 3D movies use this method – using two masters
– Left Eye image
– Right Eye image
Left Eye Right Eye
• To make our brains think we’re seeing 3D,
– Our left eye must see the Left Eye image
– Our right eye must see the Right Eye image
• So the key to getting 3D to work is
– Make sure the left eye sees the left view
– Make sure the right eye sees the right view
– Sounds simple enough!
How 3D Works
Placing objects in a 3D space
Example: Anaglyph
1. In 3D, two images are projected onto the display.
2. When comparing the left- and right-eye images, every object in the scene is horizontally displaced by a small amount.
 The direction and amount of displacement define where each object is in the 3D space.
3. By wearing a special pair of glasses, the two images are split so that each eye only sees one of the two images.
4. The brain assumes these two displaced objects are actually one object, and tries to fuse them together.
5. The only way to fuse them is to assume the object is either in front of or behind the screen plane.
Binocular information processing
Right Left
Monocular information processing
Reception of light
Transduction of light to neural signals
Conveying the signals further in visual
processing
Compression of signal
version and vergence movements
 Any eye movement can be described as a sum of a version and vergence
movement.
 In other words, there are two neural areas, one controlling version and one
vergence.
Version movement
Eye rotation direction & amplitude
is equal in left and right eye
Vergence movement
Eye rotation direction is opposite &
amplitude is equal in left and right eye
Motor fusion
Motor fusion involves vergence eye movements to bifoveally fixate a target.
Activated when correlated scenes are shown for more than 200 ms.
Hysteresis: once the eyes have locked onto an object, they stay locked even as depth changes.
Sensory fusion
When the eyes have been directed to the same object, neurocognitive computation is still needed to fuse the images from the two eyes.
The more complex the image, the more time it takes to fuse. This is called postfusional latency.
Binocular visual direction
Oculocentric visual direction
However, the eyes are not equal, one
often dominates!
Binocular visual direction
Stereopsis
Compares the images of the left and right eye
and calculates the three-dimensional structure
based on the differences.
 Rivalry
 Depth
 Direction
 Double images
Rivalry: an alternate, uncontrollable change between the left- and right-eye views; annoying, irritating, and creates eyestrain.
Horizontal Parallax
Basis for stereopsis:
– Different parallax indicates different distance from the viewer.
– Different parallax provides a different view of objects to each eye.
Vertical Parallax
Unnatural in the real world.
The eyes concentrate on a small area of interest.
"Real world" parallax is always in the direction of separation of the two eyes
– Normally horizontal
Vertical fusion is limited to about ½ degree.
Vertical parallax gives headaches.
Positive parallax
 The object is displaced slightly to the left
for the left eye and slightly to the right
for the right eye.
 The brain assumes this is only one object
behind the screen.
left right
left right
Projected image
Zero parallax
 The objects for the left eye and the right eye are in the same position on the display.
 The brain sees this as one object on the
screen plane with no 3D displacement.
left right
left right
Projected image
Negative parallax
 The object is displaced slightly to the right for
the left eye and slightly to the left for the right
eye.
 The brain assumes this is one object in front of
the screen.
left right
leftright
Projected image
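The three parallax cases above can be quantified with similar triangles. A minimal sketch, assuming a 65 mm eye separation and a 2 m viewing distance (illustrative values, not standards):

```python
def perceived_depth(parallax_mm, eye_sep_mm=65.0, view_dist_mm=2000.0):
    """Perceived depth of a fused point relative to the screen plane.

    Positive screen parallax places the point behind the screen, negative
    parallax in front. Similar triangles give Z = d * p / (e - p), where
    d is viewing distance, p screen parallax, and e eye separation.
    """
    if parallax_mm >= eye_sep_mm:
        raise ValueError("parallax >= eye separation forces the eyes to diverge")
    return view_dist_mm * parallax_mm / (eye_sep_mm - parallax_mm)

print(perceived_depth(0.0))    # zero parallax: on the screen plane -> 0.0
print(perceived_depth(32.5))   # positive parallax: 2000.0 mm behind the screen
print(perceived_depth(-65.0))  # negative parallax: -1000.0 (1 m in front)
```

Note how the depth grows without bound as the parallax approaches the eye separation, which is exactly the divergence limit discussed next.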
Pushing the limits
 The stereographer moderates the excesses of 3D so that everyone can enjoy 3D movies, games and programmes that both look good and do not push the limits of our ability to see 3D.
leftright
Projected image
left right
Projected image
Pushing the limits
left right
Divergence, no matter how small the amount, is unnatural to humans. It will either break the 3D illusion or cause eye strain.
left right
Excessive convergence on the display causes the eyes to converge beyond their normal limit, which either breaks the 3D illusion or causes eye strain.
Authoring for the screen size
• A programme made for cinema plays back acceptably on television.
left right left right
cinema T.V.
Authoring for the screen size
• A programme made for television causes eye-strain in the cinema.
left right
left right
cinema T.V.
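Why the same material behaves so differently on the two screen sizes can be checked numerically. A sketch, assuming parallax is authored as a fraction of image width and a ~65 mm adult eye separation; the screen widths are hypothetical examples:

```python
def max_safe_parallax_fraction(screen_width_mm, eye_sep_mm=65.0):
    """Largest positive parallax, as a fraction of image width, that does not
    force the eyes to diverge on a screen of the given physical width."""
    return eye_sep_mm / screen_width_mm

# Hypothetical sizes: a 10 m wide cinema screen vs. a ~0.9 m wide living-room TV.
for name, width_mm in [("cinema", 10_000.0), ("tv", 900.0)]:
    frac = max_safe_parallax_fraction(width_mm)
    print(f"{name}: positive parallax must stay below {frac:.2%} of image width")
```

With these numbers, parallax of a few percent of image width is safe on the TV but exceeds the 65 mm limit on the cinema screen, which is why TV-authored material causes eye-strain in the cinema.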
Stereoscopic comfort zone
 Grey: invisible to the audience
 Red: danger zones (unfusable)
• Strong muscular activity
• Convergence vs accommodation conflict
• Do not stay too long
 Orange: no parking
• Retinal rivalry area
• Move in, out and fast
 Green: rest area (fusable depth)
• Close to the screen plane
• Striped: natural retinal rivalry zones
THE STEREO WINDOW
 The stereo window is the spatial boundary of the picture through which the stereoscopic image is seen.
 Where the two images converge is where the stereo window is.
 It is possible to set this window at any plane:
1. At the position of the closest object in the scene.
2. In front of the scene.
3. Somewhere inside the scene.
 In most stereo images, the closest object in the scene is at the stereo window plane.
 Sometimes the scene is deep inside, and sometimes parts of the scene protrude on purpose in front of the window.
 When this is done carefully and deliberately, interesting effects can be achieved; when it is done in error, the result is visually disturbing.
The stereo window is at the closest object in the scene
 a = the distance of the closest object in the left picture from the left edge of that picture.
 a′ = the distance of the closest object in the right picture from the left edge of that picture.
 If a = a′, the closest object in the picture is at the stereo window and nothing comes through.
 The tree and the front of the path are at the stereo window.
 The other elements are behind the stereo window.
stereo window
closest object in the picture
The stereo window is in front of the scene
 Here all objects are placed behind the stereo window.
 a < a′
 Objects touching the frame, such as the bottom of the tree, look natural.
stereo window
closest object in the picture
The stereo window is somewhere inside the scene
 a > a′
 The tree, the front of the path and the sign come through the stereo window.
 Objects touching the frame, such as the tree, will not look natural.
 Objects that don't touch the frame, such as the path and the sign, will float in space and create an effect that may sometimes be desirable, albeit not always logical.
stereo window
closest object in the picture
Window violation
Where the two images converge is where the stereo window is.
 Any object (even the image frame itself) that exists in both left and right images occupies a place in space at some distance from the viewer.
 In the most natural viewing experience, the scene is viewed behind this virtual window created by the image frame.
 A window violation exists when objects that come in front of the window are cut off by the window. Avoid this when you want scenes to look natural.
Breaking the frame with positive parallax
• The object appears behind the window.
• The right eye can no longer see the object.
• Like looking through a window, therefore OK.
left right
Breaking the frame with negative parallax
• The object should appear in front.
• The left eye can no longer see the object.
• Looks wrong; the 3D illusion is broken.
window violation
 Objects in negative-z space can be problematic if they intersect the edge of the screen.
 Reason: the edge of the screen (which usually acts like a window) now creates a retinal rivalry: you see part of the object with only one eye, floating over the area beyond the frame.
leftright
Z<0
Z>0
?
floating window
 For artistic reasons, objects sometimes need to cross the edge of the screen while in negative-z space.
 The theory: move the perceived edge of the screen into negative-z space by enough to allow objects in negative-z space to pass behind the edge of the screen.
leftright
Z<0
Z>0
 In practice: matte the right edge of the right eye (or the left edge of the left eye) by the same number of pixels as the width of the parallax difference of the negative-z object in question.
 This matte should be the same colour as the edge of the screen, which in most theatrical situations will be black.
leftright
Z<0
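The matting step can be sketched in a few lines. Assumptions: images are NumPy arrays in (height, width, channel) layout, and the negative-z object breaks the right edge of frame:

```python
import numpy as np

def apply_floating_window(right_eye, parallax_px, matte_value=0):
    """Matte the right edge of the right-eye image by the width of the
    negative-z object's parallax, moving the perceived screen edge into
    negative-z space. matte_value=0 gives a black matte."""
    out = right_eye.copy()
    if parallax_px > 0:
        out[:, -parallax_px:] = matte_value
    return out

# Toy 4x8 all-white right-eye frame; matte the last 2 pixel columns.
frame = np.full((4, 8, 3), 255, dtype=np.uint8)
matted = apply_floating_window(frame, parallax_px=2)
```

Mirroring the same matte on the left edge of the left eye would float the opposite window edge when an object breaks frame on that side.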
• The image for the right eye is re-framed.
• A floating or false frame is placed.
• This blocks the false object for the right eye.
• Looks OK.
left
right
leftright
right left
Floating window
• The image for the left eye is re-framed.
• A floating or false frame is placed.
• This blocks the false object for the left eye.
• Looks OK.
examples of setting the same images for different window placement
Stereo Window set so that everything is behind the window.
Projection Simulation
Projection simulation shows how the two images look overlapped, as if projected and viewed without polarized glasses.
Stereo Window set at infinity (everything in front of the window).
This is considered a "Window Violation".
The people and the tank are cut off and hanging in space.
Stereo Window set in front of the presenter (the guy with the gun).
This is considered a "Window Violation".
The people in front are cut off and are floating torsos.
examples of 'floating windows', or 'floating crops'
 If the object exits the screen quickly, so that the brain doesn't have time to register the issue, it probably isn't too much of a problem.
 If the object lingers or moves slowly out of frame, it will cause a problem.
Solution:
 The only real way to overcome this issue is either to change the convergence point, so that all objects are behind the screen window, or to manage it in post by 'zooming' into the image to remove the object from both the left and right eyes.
 A more dynamic way to deal with the problem is to dynamically crop just the eye that has more of the object, so that it matches the other eye.
A person breaking frame at the right side of the image (the chap in the black t-shirt).
Close each eye in turn while looking through a set of anaglyph glasses to see the differences.
A floating window is used to crop the extra information from the right-eye image so that the information contained in each eye is identical.
Conclusion
The floating window concept can be a very interesting way to manage issues with objects leaving the screen unevenly and causing stereoscopic failure.
The edges of the left and right eyes are cropped dynamically depending on the objects at the edge of the screen.
Playing with the 3D image
Standard 3D shot
 The cameras are placed a nominal
65mm apart, parallel to one another.
 This produces a reasonable 3D image,
with a pleasing amount of depth for this
scene.
 The whole scene will be in front of the
screen plane.
Screen plane
Increased inter-axial distance
1. The perceived depth increases.
2. The distance between objects in the scene appears to increase.
Too much increased inter-axial distance
 Close objects will appear closer, but will not grow any larger.
 The 3D illusion may break if the inter-axial distance is too large.
Screen plane
inter-axial distance (IAD)
Notes
1. Objects do not grow bigger; they just appear nearer.
2. Be careful! The tree now forces the eyes to diverge. This may cause eye-strain.
Altered toe-in angle
1. The perceived 3D image goes further into the distance.
2. The various objects appear to be separated by the same distance (the distances between objects in the scene do not change).
 Do not push the 3D scene back too far: objects in the distance may force the eyes to diverge, which may cause eye strain.
 Excessive toe-in angles will also introduce keystone errors which will need to be corrected later.
Screen plane
3D Blindness
It is estimated that about 5% of people cannot see 3D.
 If you think you cannot see 3D, or it gives you a headache, make sure you are watching it properly.
 If this does not work and you still cannot see 3D, seek medical advice.
Ophthalmic problems
Blindness, amblyopia (lazy eye), optic nerve hypoplasia (underdeveloped eye) and strabismus (squint).
 Those with a lazy, underdeveloped or squint eye will subconsciously compensate by using the first six depth cues.
Cerebral problems
 Our ability to calculate and distinguish 3D information is constructed in our brains in the first few months of our lives.
 In some milder cases, careful practice will allow such people to see 3D movies and video.
 In severe cases those people may never be able to perceive 3D movies and video.
The 3D Camera Rig
Rig configurations
The parallel rig
The opposing rig
The mirror rig
- Bottom mount (vertical camera underneath)
- Top mount (vertical camera on top)
The parallel rig
 The most compact dual-camera 3D rig.
 It is compact and does not rely on mirrors, which have an impact on image quality.
 Parallel rigs generally work better with more compact cameras and lens designs.
 It is difficult to achieve a small IAD with large cameras or lenses.
One camera, twin lens
Wider than 65mm?
The opposing rig
 A pair of mirrors placed between the cameras reflects the images for the left and right eyes into the cameras. Both images are horizontally flipped.
 This type of rig is bulky and is not generally used in modern rigs, but may return with new compact cameras.
 It was popular with film cameras because it allows accurate camera line-up by removing the film plates and mirrors.
light
The mirror rig
 A semi-transparent mirror reflects the scene into the vertical camera while also allowing the horizontal camera to see through the mirror.
1. Vertical camera on top.
2. Vertical camera underneath: has the advantage of a better centre of gravity and less spurious mirror reflection.
 A good-quality mirror is vital in this type of rig.
The semi-silvered mirror reflects light into one camera and allows light through into the other.
light
Comparison chart
Camera rig errors
 3D Tax: the loss in quality due to a mismatch between the left and right images.
 Stereographers try to keep the 3D Tax below 5%, and certainly below 10%.
Camera rig misalignment
 Both cameras must be set at the appropriate inter-axial distance for the scene.
 They must also be perfectly aligned with one another so that the two images can be mapped on top of one another on the display to provide a good 3D image.
 Vertical misalignment
 3D image breaks, or may cause eye-strain.
 Small errors can be corrected in post-production.
 Rotational misalignment
 Cameras must be rotationally aligned.
 Small errors may be corrected in post-production.
left right
left right
Lens pairing errors
 The same kind of lens should be used.
 The lenses should have the same optical and mechanical characteristics.
 Common lens-related errors include:
 badly coupled zoom controls
 badly coupled focus controls
 badly coupled iris controls
 Both lenses should track each other exactly through these three parameters, either by electrically coupling the lenses together, or by providing accurate remote control to both lenses at the same time.
 Focus (poor focus tracking)
 Lenses must focus together exactly.
 Errors cannot be corrected in post-production, therefore it is vital that focus is matched as accurately as possible in the rig.
Blurred anaglyph red component
Correct version
Iris (poor iris tracking)
 Zoom (poor zoom tracking)
 Small errors may be corrected in post-production.
 Excessive convergence
 Causes keystone errors.
 The 3D image is broken.
 Small errors may be corrected in post-production.
Lens misalignments
 The lens may be:
1. misaligned to the CCD sensor (optical axis error)
2. misaligned to its own mechanics (zoom wander or focus wander: a small deviation in the optical axis as the lens is zoomed or focused)
 Zoom and focus wander may follow a straight line or a simple curve, or may be a complex spiral if any rotating lens elements are slightly misaligned.
Centre of image
Actual centre
True zoom
centre line
True centre
Zoom wander
Wide Angle
Telephoto
 Solutions:
1. Remount the lens on the camera and adjust it mechanically.
2. Adjust electronically in post-production.
 It is better to perform this adjustment in the camera and lens, as this maintains the best image quality. However, this may be impossible, and the adjustment may need to be performed electronically.
Camera characteristics mismatch
 Ideally, both cameras in a 3D camera rig should be the same type:
 the same video formats, resolutions and video-processing characteristics
 colour matched and white balanced
Image alignment
The Stereographer
 Depth budget: the amount of perceived depth in front of and behind the screen plane, expressed in mm, pixels or a percentage. For example, stereographers work to a depth budget of about 2% for a 40" television screen.
 The stereographer will monitor material from one or more 3D camera rigs and check that the 3D image is correctly aligned and positioned in the 3D space.
 The stereographer will also ensure that the 3D image is kept within the allocated depth budget throughout post-production.
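The 2% figure translates directly into a pixel allowance. A sketch, interpreting the depth budget as a maximum parallax expressed as a fraction of image width (our reading of the slide, not a standard formula):

```python
def depth_budget_px(image_width_px, budget_fraction=0.02):
    """Maximum parallax, in pixels, allowed by a depth budget given as a
    fraction of image width (~2% for a 40-inch TV per the slides)."""
    return image_width_px * budget_fraction

print(depth_budget_px(1920))  # ~38 px of parallax to spend on a 1920-wide image
```

This is the total "spend" the stereographer distributes between depth in front of and behind the screen plane.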
The 3D Processor Box
Designed as the stereographer's dream.
 Stereographers can use it to monitor the feeds from 3D camera rigs and finely tune the two camera outputs to obtain the best-quality 3D image.
 It can also be used to modify the 3D image: to tune the 3D look, adapt the depth of the image, and keep the 3D field within a given depth budget.
Parallel or toed-in?
Parallel:
Theory recommends using the cameras in parallel: it avoids trapezoidal distortions and vertical parallax.
Drawbacks:
1. In practice it is complex to carry out, because of an increase in costs.
2. It increases time, due to intense subsequent stereo editing and the associated post-production (cropping problems and different final frame sizes).
3. There is no initial convergence point (convergence is at infinity).
4. During the shooting, the stereo perception is not the same as in the final result.
Parallel or toed-in?
Toed-in:
1. You get a similar sense of depth to the final product.
2. It makes the rest of the operations easier.
3. The stereo window is located easily.
Drawback:
Keystone deformations, which lead to unwanted vertical and horizontal parallax.
Showing 3D in the home
 The delivery channels are digital off-air, satellite and cable transmission, the internet and Blu-ray.
 At the moment there are no special compression standards designed for 3D, therefore existing standards must be adapted.
 The left and right signals need to be combined into one HD frame sequence and sent over a normal transmission system:
 Sequential 3D
 Line-by-line 3D
 Side-by-side 3D
 Top-over-bottom 3D
 Checkerboard 3D
 2D + depth map
Sequential 3D
 A sequence of alternating video frames where each successive frame is designed to be viewed by just one eye.
 Such a sequence is a natural fit for active shutter 3D glasses.
 Video must now be transmitted at 48 frames per second (24 for each eye) to maintain HD quality.
 Good for local connections, e.g. from a games console to a TV.
 Resolution is full HD, but the bandwidth and frame rate must be doubled to reduce flickering.
Line-by-Line 3D
The left and right views are interleaved on alternate lines, which results in a 50% loss of vertical resolution.
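The packing can be sketched with NumPy; which line parity carries which eye is an assumption, as the slides don't specify it:

```python
import numpy as np

def pack_line_by_line(left, right):
    """Interleave two equal-sized views row by row into one frame.
    Each eye contributes only every other line, hence the 50% vertical
    resolution loss."""
    out = np.empty_like(left)
    out[0::2] = left[0::2]   # even rows: left eye (assumed convention)
    out[1::2] = right[1::2]  # odd rows: right eye
    return out

left = np.zeros((4, 6), dtype=np.uint8)
right = np.ones((4, 6), dtype=np.uint8)
packed = pack_line_by_line(left, right)
print(packed[:, 0])  # -> [0 1 0 1]
```

This layout maps naturally onto row-interleaved (patterned retarder) displays, where alternate display lines are polarized oppositely.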
Side-By-Side 3D
 Bandwidth and frame rate are the same as normal HD (24 frames per second).
 Half horizontal HD resolution.
 Due to the horizontal upscaling, side-by-side 3D is not as sharp as sequential 3D.
 Existing set-top boxes (e.g. DirecTV) need only a firmware upgrade to enable 3D transmission.
Side-By-Side 3D
 When the 3D TV receives a side-by-side broadcast, it first separates the left and right images from each frame, then upscales the width of each by a factor of two.
 Lastly, it displays the left and right images sequentially, as required for viewing with active shutter glasses.
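The receiver-side steps (split, then double the width) can be sketched as follows; nearest-neighbour pixel repetition stands in for the TV's real interpolation filter:

```python
import numpy as np

def unpack_side_by_side(frame):
    """Split a side-by-side frame into left/right views and upscale each
    back to full width by a factor of two (nearest-neighbour here)."""
    w = frame.shape[1]
    left_half, right_half = frame[:, : w // 2], frame[:, w // 2 :]
    left = np.repeat(left_half, 2, axis=1)
    right = np.repeat(right_half, 2, axis=1)
    return left, right

frame = np.arange(16, dtype=np.uint8).reshape(2, 8)  # toy 2x8 packed frame
left, right = unpack_side_by_side(frame)
print(left.shape, right.shape)  # -> (2, 8) (2, 8)
```

The doubled pixels are why side-by-side 3D looks softer than sequential 3D at the same transmission bandwidth.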
Top-over-bottom 3D
 Good for normal broadcast transmission.
 Bandwidth and frame rate are the same as normal HD.
 Half vertical HD resolution.
Checkerboard 3D
 The left and right images are subsampled.
 The two views are then overlaid and appear as a left-and-right checkerboard pattern.
 This format preserves the horizontal and vertical resolution of the left and right views, providing the viewer with the highest-quality image possible with the available bandwidth.
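The overlay can be sketched with a pixel-parity mask; which eye takes the even diagonals is an assumption:

```python
import numpy as np

def pack_checkerboard(left, right):
    """Overlay two views in a checkerboard pattern: each view keeps half
    of its pixels, spread across both dimensions."""
    rows, cols = np.indices(left.shape[:2])
    mask = (rows + cols) % 2 == 0  # True -> take the left-eye pixel (assumed)
    return np.where(mask, left, right)

left = np.full((2, 2), 1)
right = np.full((2, 2), 0)
packed = pack_checkerboard(left, right)
print(packed)  # -> [[1 0]
               #     [0 1]]
```

Because the discarded pixels alternate in both directions, neither view loses a whole row or column of resolution, which is the advantage the slide describes.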
2D + depth map
A full-resolution 2D image (1920x1080) is transmitted together with a depth map.
Connecting 3D
Dual Link HD-SDI & 3G-SDI
Dual Link is a popular method of connecting 3D signals in professional equipment. It consists of two cables and connectors at 1.485 Gbps each, one for left and the other for right. However, it takes up two inputs or outputs, effectively halving equipment capacity.
3G-SDI is the new "stereo" connection for video, achieved by multiplexing the two links of Dual Link together into a single cable and connector at 2.97 Gbps.
"3G makes 3D easier"
HDMI
HDMI was originally developed for home use as a simple way of connecting high-definition video equipment.
Derived from DVI, it uses a 19-pin connector, with digital surround-sound audio, a command protocol and the HDCP (High-bandwidth Digital Content Protection) copy protection scheme.
HDMI version 1.4
Introduced in May 2009, v1.4 adds 3D display methods, including line, field and frame sequences, side-by-side and 2D+depth. Any 3D video equipment with HDMI should be v1.4.
Signalling is transition-minimized differential signalling (TMDS), a low-voltage differential signal (LVDS) that encodes 8-bit data as 10-bit symbols.
Anaglyph
 An anaglyph is created by taking the data from the red colour channel of the left image and combining it with the green and blue channels of the right stereo image.
 Compatible with existing displays.
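The channel mix the slide describes is one line of array code. A sketch assuming 8-bit RGB arrays with channels in R, G, B order:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: red channel from the left image, green and blue
    channels from the right image."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]  # index 0 assumed to be the red channel
    return out

left = np.full((1, 1, 3), (10, 20, 30), dtype=np.uint8)
right = np.full((1, 1, 3), (40, 50, 60), dtype=np.uint8)
print(make_anaglyph(left, right)[0, 0])  # -> [10 50 60]
```

Viewed through red/cyan glasses, each eye then receives only the channels that originated from its own camera view.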
Anaglyph glasses
 The oldest and most common way of showing 3D.
 There are several different anaglyph standards, all using two opposite colours.
 The most common has a red filter for the left eye and cyan for the right eye.
 The glasses split the combined image into two images, one for each eye.
Is anaglyph the wrong way round?
 The diagrams shown here may seem the wrong way round. Why? The left image is cyan filtered, but the left eye is red filtered.
• Anything that is cyan in the picture appears black in the left eye and white in the right eye.
• Anything that is red in the picture appears black in the right eye and white in the left eye.
• Therefore the colours appear to be the wrong way round, but are actually correct.
There is no 3D effect for objects whose colour matches the filters.
Advantages:
• Established system
• Cheap
• Easily reproduced on screen or printed material
• No special display needed
Disadvantages:
• Inefficient
• Poor colour reproduction
• Requires exact match between display and glasses
• No 3D effect for objects whose colour matches the filters
• Discomfort generated by light disparities between the two eyes
• Ghosting due to the pairing of glasses filters and screen (crosstalk)
Usage:
• Good for magazines, posters and other printed material
• Older cinema and video systems, largely replaced by newer, better systems
Trademarks: TrioScopics, ColorCode, NVIDIA (3DDiscover)
Getting the best from 3D
 3D movies and programs are an illusion.
 They are carefully recorded with both cameras exactly horizontal to one another.
 Therefore 3D movies must be viewed with your head exactly upright, the way the movie was recorded.
 Tilt your head and the illusion becomes strained and eventually snaps.
Light is a wave
Direction of travel
Transverse Wave
Longitudinal Wave
Electromagnetic waves (polarized light)
• Unpolarized light consists of waves with randomly directed electric fields. Here the waves are all traveling along the same axis, directly out of the page, and all have the same amplitude E.
• Light is polarized when its electric field oscillates in a single plane, rather than in any direction perpendicular to the direction of propagation.
EM waves are transverse waves; this example wave travels in the z direction and is polarized in the y direction.
Polarization
In an unpolarized transverse wave, oscillations may take place in any direction at right angles to the direction in which the wave travels.
Polarization restricts the direction in which the light waves vibrate.
Polarization is a characteristic of all transverse waves that describes the orientation of the oscillations.
Direction of
propagation
of wave
Directions of oscillations
Direction of oscillation
Direction of travel
of wave
Linear Polarization
• If the oscillation takes place in only one direction, the wave is said to be linearly polarized (or plane polarized) in that direction.
Direction of oscillation
Direction of travel of wave
Linear polarization
• Trace of electric field vector is linear
This wave is polarized
in y direction
Circular polarization
1. Two perpendicular EM plane waves of equal amplitude with 90° difference
in phase.
2. Electric vector rotates counterclockwise  right-hand circular polarization
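The two-component description above can be checked numerically: with equal amplitudes and a 90° phase difference, the tip of the electric-field vector traces a circle of constant magnitude. A minimal sketch (symbols `E0` and `omega` are illustrative, not from the slides):

```python
import math

def e_field(t: float, omega: float = 1.0, E0: float = 1.0):
    """Two perpendicular components of equal amplitude, 90 degrees out of
    phase: Ex = E0*cos(wt), Ey = E0*sin(wt). The tip of E traces a circle."""
    return E0 * math.cos(omega * t), E0 * math.sin(omega * t)

# The magnitude stays constant while the direction rotates:
for t in (0.0, 0.5, 1.0, 2.0):
    ex, ey = e_field(t)
    assert abs(math.hypot(ex, ey) - 1.0) < 1e-12
```

Changing the sign of the phase difference (Ey = −E0·sin(ωt)) reverses the rotation sense, giving the other circular polarization handedness.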
Circular polarization
(Diagrams: a clockwise circularly-polarized wave and an anti-clockwise circularly-polarized wave.)
Liquid crystals
The alignment of the liquid-crystal "stack" changes with applied voltage, which rotates the polarization of the light passing through it.
Linear polarized glasses
 Both images are linearly polarized on the display and shown together.
 The glasses have a linear polarizing filter for each eye, one vertical, the other
horizontal.
 Some glasses are set at +45° and −45° so that they can be used either way round.
Advantages:
•Better colour than anaglyph.
•Cheap glasses.
Disadvantages:
•Requires special display technology with polarizing filters.
•Darker image.
•Viewer's head must be exactly vertical.
Usage:
•Used in the early days of polarizing screens.
•Largely replaced by circular polarization because of the head-tilt problem.
Circular polarized glasses
 A 3D micro-polarizer filter is attached to the LCD panel, and the display is supplied with circular-polarizer glasses.
 The right and left images delivered from the LCD panel are circularly polarized in opposite rotations by the micro-polarizer filter and the patterned retarder.
 Each right and left image can then be viewed through the corresponding right or left circular-polarizer filter.
Advantages:
•Better colour than anaglyph.
•Cheap glasses.
•Viewer's head may be tilted.
•Easily adapted to existing display and screen technologies because high frame rates are not required.
Disadvantages:
•Requires special display technology with polarizing filters.
•Darker image.
•Prone to flickering if frame-sequential technology is used.
•Reduced angle of view, and requires a silvered screen in cinemas.
Usage:
•Popular in cinemas because the glasses are cheap and can easily be washed and reused.
•Good for professional monitors because the frame rate is not affected and these monitors are generally only used to view 3D material.
•Not so good for home use, where the screen is darker even for normal 2D viewing.
Trademarks:
•RealD.
•MasterImage.
•Zalman.
•Intel InTru3D.
4K Cinema Projector
• The 4K frame is divided into two 2K frames, one above the other.
• These are projected through a special lens with two lens turrets and two polarizing
filters onto the screen.
(Diagram: the 4K frame carries the left image above the right image; the two 2K halves are projected and combined on the screen.)
Shutter Glasses type 3D Display with IR
1. Each image is shown on the display separately, left, right, left, right, at a fast
enough rate to overcome flickering.
2. The display also extracts an infra-red synchronization signal which is sent to the
glasses to tell them which image is being displayed.
3. The glasses are active, and use an LCD shutter in each eye to sequentially shut
each eye, while opening the other.
 The lenses of 3D shutter glasses are made of LCD panels.
 While the image for the left eye is shown, the right eye is blocked by the right lens, and vice versa.
 Each lens can be turned off separately.
 The lenses turn off and on so quickly that the brain just sees one 3D image combining the two views (60 flashes per second).
Advantages:
•Good colour.
•Wide angle of view.
•Bright, clear image for both 2D and 3D.
Disadvantages:
•Quite inefficient.
•Requires a high-speed display or projector; prone to flickering if the frame rate is not high enough.
•Active and expensive glasses.
•Impractical for cinema use, where the glasses need cleaning and charging between movies. (Home-grade glasses have a battery life of about 80-100 hours; cinema-grade glasses about 250-350 hours.)
Usage:
•Good for home use because the screen is just as bright for normal 2D video as for 3D video, and the cost of the glasses is less of a problem where they are looked after.
•Manufacturers are working to standardize these glasses so that they can be used on any screen.
•Used in some cinemas, where medical wipes are handed out rather than washing the glasses.
•The ticket price either includes a deposit, or security is high because of the cost of the glasses.
Trademarks:
•Sony.
•XpanD.
•NVIDIA.
•Panasonic.
•Samsung.
Glasses with Wavelength Multiplex Visualization
 Each image is filtered to its primary colours with a narrow-band filter.
 The exact primary colours are slightly different for each image.
 The two images can therefore be combined on the display and still be differentiated.
 The glasses contain a narrow-band dichroic optical filter for each eye, exactly matched to the narrow-band filters used in the display. Thus each eye only sees the part of the combined images intended for it.
Projection
• Each of the three primary colours (red, green and blue) is split into two different wavelengths, one for the left eye and one for the right eye.
• An interference filter (Infitec) is rotated at high speed, displaying RGB for the left and right eyes alternately, while the viewer wears Infitec filter glasses for this "wavelength multiplex visualization system".
Advantages:
•Good separation.
•Wide angle of view.
•Can be used in cinemas on a normal matt white screen.
Disadvantages:
•Quite inefficient.
•Expensive glasses.
•"Thin" colour.
•Prone to flickering if frame-sequential displays are used.
Usage:
•Good for cinemas that cannot install the silvered screens required by the circular polarising system.
•The high cost of the glasses means either a deposit is paid on the glasses, or security is high.
Trademarks:
•Dolby.
•Infitec.
Active Glasses and stereo projector
 An infrared (IR) emitter is installed on the projector to create switching of the left
and right eye images using special electronically controlled glasses.
 The glasses required are expensive, but setup and installation can be cost-effective, since the system can be assembled from existing equipment such as two small projectors, an IR emitter and the active glasses.
 No specialized screen is required, which makes the number of viewing screens
more readily available to production and post.
Passive Glasses and stereo projector
 To enhance the light output, a silver screen is required. This is a challenge because not all content is 3D, and 2D content viewed on a silver screen can suffer unwanted effects such as colour shift and uneven brightness.
 Brightness is also a challenge for these systems, because optics or filters are installed in line with the light output of the projection system.
 Passive glasses are more comfortable to wear than active or anaglyph glasses and are
relatively inexpensive.
Autostereoscopic displays
 Autostereoscopic displays fool the brain so that a 2D medium can display a 3D
image by providing a stereo parallax view for the user.
 A filter is placed in front of the screen that separates images automatically, which
means viewers don’t need to wear glasses!
 Each eye sees a different image, having been calculated to appear from two eye
positions.
 Viewers need to hold their head in a certain position so that each eye sees a
different image.
 Viewers must be at the right distance and angle from the screen in order to
receive the right image on the right eye.
 These stereoscopic displays lack several other cues that are normally used to build
up a 3D image:
Movement parallax
An autostereoscopic display has only a single 3D view, which is calculated by the software. In reality, an infinite number of views can be seen as the viewing position moves around an object.
Convergence
On an autostereoscopic monitor the eyes converge in front of or behind the monitor, on a virtual object.
Focus
Focus is at a different distance on an autostereoscopic display, as the display and the virtual object are at different distances from the user.
Autostereoscopic displays
Method 1: Parallax Barrier
 In the parallax barrier a mask is placed
over the LCD display which directs light
from alternate pixel columns to each eye.
 Parallax-barrier displays allow instant switching between 2D and 3D modes: the barrier is constructed from a layer of liquid crystal that becomes completely transparent when a voltage is applied, allowing the LCD to function as a conventional 2D display.
133MOHIEDDIN MORADI
Method 2: Lenticular Autostereoscopic
 In the lenticular lens, an array of cylindrical
lenses directs light from alternate pixel
columns to a defined viewing zone, allowing
each eye to receive a different image at an
optimum distance.
Crosstalk
 Crosstalk is the amount of incorrect image information that reaches the wrong eye, caused by screen defects, filters and synchronization problems.
 Crosstalk is a problem of the 3D reproduction system, but it can be minimized during the production of the film.
 Inverse image: the image formed at each eye is the wrong way round.
 Blended zone (too far from the optimal distance): the user sees both images forming in each eye, causing a blurred and confusing image (crosstalk).
 Centre: within the correct viewing zone.
 If this were a monitor with head/eye tracking and the viewer were in an inverse-image zone, the monitor could switch the images being projected in each direction.
Multiscopic
Drawback: viewers need to hold their head in a certain position, at the right distance and angle from the screen, in order to receive the right image in the right eye.
• To overcome this drawback, "multiscopic" systems are starting to appear that show not just two but several different views of the same scene (five, eight, nine or even 25).
• This means you can see images in three dimensions even if you move around. Such systems, however, greatly reduce the definition of each image (number of pixels), which complicates the filming of natural scenes.
• Each screen has its own format and number of images that it displays.
• There is currently no standard for transmitting this data, so use is restricted to end-to-end proprietary systems.
• There is currently no equivalent technology for transposing this system to cinema.
Glass-free multi-viewers Full-HD 3D display using a triple liquid crystal barrier
• This idea relies upon existing technologies: 240 Hz LCD panels, and the face-recognition and tracking systems embedded in most modern cameras and camcorders.
• Each viewer occupies about 50 cm, and the average spacing between eyes is 7 cm.
• A video camera (not shown here, located on top of or below the monitor) detects and locates each viewer.
• Up to 4 simultaneous viewers, standing around 1.5 m away from the screen.
The second and third barriers, called "viewer barriers", filter each viewer separately; hence the need for a 240 Hz panel, so that each of the 4 viewers gets a refresh rate of 60 Hz.
The focus lines from each eye can be considered parallel (left eye P2 // left eye P4, right eye P1 // right eye P3), just as sun rays can be considered parallel when reaching the earth.
These viewer barriers adapt their size and position to the position of each viewer, creating a virtual tunnel only for the targeted viewer, fitting the rays from his two eyes.
A 1/3-pixel resolution and a double barrier appear to be enough to achieve this discrimination.
The first barrier ("stereo barrier") is achieved here with a 240 Hz liquid-crystal panel (it does not differ from a classic parallax barrier).
The advantages:
1. We can virtually shift the position of the stereo barrier by 1/3 pixel based on the position of each viewer, so that each eye actually receives a different pixel from the other eye.
2. We can alternately switch this vertical interlacement: on the first scan the left eye sees pixels P2 and P4, on the second it sees pixels P1 and P3 (and the opposite for the right eye).
(Diagram: viewers 1-4 in front of the screen.)
What are the potential risks associated with watching 3D TV?
 Viewing 3D TV may cause headache, motion sickness, perceptual after effects,
disorientation, eye strain, and dizziness.
 If your eyes show signs of fatigue or dryness or if you have any of the above
symptoms, immediately stop watching and rest.
 It is recommended that users take frequent breaks to lessen the potential of these
effects.
 In rare cases some viewers may experience an epileptic seizure or stroke when
exposed to certain flashing images or lights contained in certain television pictures
or video games.
 Children and teenagers may be more susceptible to health issues associated with
viewing in 3D and should be closely supervised when viewing these images.
Live 3D Broadcast (Sky)
Recording challenges
• Two video signals.
• Twice the recorded bandwidth.
• …or half the quality.
3D post production
Production problems
• Editors, routers, mixers, etc. must process two video signals.
• All mixes, fades, wipes, etc. must be frame accurate.
• Two mattes, keys, or alpha channels are required.
One idea: pair two channels together.
Better idea: design dual-channel equipment (similar to the introduction of stereo audio in the 1960s).
Professional: dual-stream-capable switchers
• Standard switcher with dual-stream capability.
Professional: editing
• Dual video-stream timelines (left and right).
Adding graphics and text
• 2D graphics do not appear properly when viewing 3D programs.
 If text is added correctly it will preserve the 3D image.
(Diagram: left and right frames with the "IRIB" caption, and the projected image.)
Adding graphics and text
• 2D graphics do not appear properly when viewing 3D programs.
 If text is added normally it will break the 3D image.
(Diagram: left and right frames with the "IRIB" caption, and the projected image.)
Mixes and fades can cause eye-strain
3D images are flattened for a few frames during a fade or wipe.
(Diagram: left and right frames during the transition.)
Commonly used cameras for 3D capture.
Variety of 3D/2D Display Functions
Flip H
When a half-mirror type of rig is used, either the left or right signal may be reversed
horizontally. The Flip H function turns the reversed image to the normal view. This is
helpful because the user can refer directly to the rig camera, achieving a simple and
cost-saving system.
Retinal Disparity
If both eyes are fixated on a point in space (f1):
– The image of f1 is focused at corresponding points in the centre of the fovea of each eye (zero disparity).
– A second point, f2, is imaged at points in each eye that may be at different distances from the centre of the fovea.
– This difference in distance is the retinal disparity.
(Diagram: points f1 and f2 imaged at distances d1 and d2 in the left and right eyes.)
Retinal disparity = d1 + d2
(Diagram: disparity scale from -10° to +10° for each eye, with point f2.)
Positive disparity: the point lies in front of the point of fixation.
Negative disparity: the point lies behind the fixation point.
150MOHIEDDIN MORADI
Disparity Simulation
Users can simulate the amount of 3D image parallax, and judge whether the camera rig should be adjusted on location or whether it would be better to adjust the parallax later during the post-production process.
Horopters
 Vieth-Müller circle: maps out which points appear at the same retinal disparity.
 Horopter: the locus of points in space that fall on corresponding points in the two retinas when the two eyes binocularly fixate on a given point in space with zero disparity.
 Points on the horopter appear at the same depth as the fixation point (stereopsis cannot separate them).
 What is the shape of a horopter? The Vieth-Müller circle: points on the horopter have zero disparity.
Horopter Check
 Helps users perceive the subtle difference in depth between different objects placed on the 3D screen surface. Either the left or right 3D image signal (or both) is displayed in a selected single colour: black, red, blue, or monotone.
 Example: the left image in monotone and the right image in red.
 If red parallax is seen on the left side, the object is placed in front.
 If red parallax is seen on the right side, the object is placed behind.
Checker Board [2D mode]
 Left and right input signals are displayed in a grid pattern on screen – divided into
9 blocks vertically and 16 blocks horizontally.
 By comparing adjacent images, users can recognize a difference in brightness and
colour setting of the left and right images, and thus easily adjust the camera's
white balance and iris settings.
L/R Switch [2D mode]
Left and right signals can be swapped in a moment, without inserting black frames. This enables users to compare the whole images and check for any lack of harmony or any unnatural images.
Screen size (diagonal) and viewing distance:
• 42″ : 7 feet
• 46″ : 8 feet
• 50″ : 9 feet
• 52″ : 9 to 10 feet
• 55″ : 10 to 11 feet
• 60″ : 12 to 13 feet
• 63″ : 13 to 14 feet
Why do we need 3D generation from 2D?
 3D stereo content is not yet plentiful, while large numbers of 2D videos exist in different compressed formats.
 The conversion process can run off-line to generate a high-quality 3D version of classic 2D content for redistribution in Blu-ray HDTV format (creating revenue from old content).
 The conversion can also be implemented in real time in a 3D TV set for stereoscopic display of 2D video sources (adding a 3D feature to the set-top box, DVD player, or TV set).
 CyberLink PowerDVD 10 Ultra 3D can convert 2D DVD movies to 3D.
 Most 3D-ready TVs embed a real-time automatic 2D-to-3D function for watching 2D video content with a 3D effect.
Block Diagram of 2D-to-3D Video Conversion
 The main purpose of 2D-to-3D video conversion is to generate the second-view video based on the content of the 2D video, which involves two processes:
(1) Depth Estimation
(2) Depth Image Based Rendering (DIBR)
What is Depth Map or Depth Image?
• Each depth image stores depth information as 8-bit grey values with the grey level
0 indicating the furthest value and the grey level 255 specifying the closest value.
2D Image Depth Image (Map)
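The 8-bit convention above (grey 0 = furthest, grey 255 = closest) can be sketched as a simple linear quantization of metric depth; the depth range used in the demo is hypothetical:

```python
import numpy as np

def depth_to_gray(z: np.ndarray, z_near: float, z_far: float) -> np.ndarray:
    """Quantize metric depth to 8 bits: grey 255 = closest (z_near),
    grey 0 = furthest (z_far), linear in between."""
    z = np.clip(z, z_near, z_far)
    gray = 255.0 * (z_far - z) / (z_far - z_near)
    return np.round(gray).astype(np.uint8)

z = np.array([[1.0, 5.5, 10.0]])        # depths in metres (hypothetical range)
print(depth_to_gray(z, 1.0, 10.0))      # -> [[255 128   0]]
```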
Depth Map Estimation Methods
Three commonly used depth estimation methods:
1. Blur: estimate depth from the amount of blur of the object.
2. Vanishing point: find the vanishing point, which is the farthest point of the whole image.
3. Motion parallax: objects with different motions usually have different depths (near objects move faster than far objects, so relative motion can be used to estimate the depth map).
Motion parallax is widely used for depth estimation in 2D-to-3D video conversion.
 The motion information can easily be obtained by a block-matching algorithm between two consecutive frames.
 The relative depth information is then calculated from the motion vectors:
• very easy to implement in hardware;
• very suitable for real time;
• the inputs of the block-matching-based depth estimation are motion vectors, which are easily extracted from the compressed video bit stream, such as MPEG-2 (DVD) and H.264/AVC (digital TV broadcasting).
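The slide's formula image is not reproduced here, so the sketch below assumes the common rule that relative depth is proportional to motion-vector magnitude (near objects move faster), normalized to 8 bits:

```python
import numpy as np

def depth_from_motion(mvx: np.ndarray, mvy: np.ndarray, d_max: int = 255) -> np.ndarray:
    """Hypothetical depth-from-motion rule (an assumption, not the slide's
    exact formula): larger per-block motion-vector magnitude means a closer
    object, so it maps to a larger 8-bit depth value."""
    mag = np.hypot(mvx, mvy)            # per-block motion magnitude
    peak = mag.max()
    if peak == 0:
        return np.zeros_like(mag, dtype=np.uint8)
    return np.round(d_max * mag / peak).astype(np.uint8)

# One static block column and one fast-moving block column
mvx = np.array([[0.0, 3.0], [0.0, 4.0]])
mvy = np.array([[0.0, 4.0], [0.0, 3.0]])
print(depth_from_motion(mvx, mvy))      # -> [[  0 255]
                                        #     [  0 255]]
```

In a real converter the motion vectors would come straight from the MPEG-2 or H.264/AVC bit stream rather than from a fresh block-matching pass.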
2D Video Sequence / Estimated Depth Map by Block-Matching Motion Estimation
Drawback: a serious staircase effect on the boundaries of objects.
Solution: colour-based region segmentation.
Depth Map Enhancement by Color Segmentation
(Diagram: depth fusion of the depth map estimated by block-matching motion estimation with a colour-segmented frame, producing an enhanced depth map.)
The colour-segmented frame can provide important information about the different regions that the block-based motion depth map lacks.
Smoothing the enhanced depth map can eliminate the blocking effect.
Depth Image Based Rendering (DIBR)
 To generate the 3D video, DIBR is used to synthesize the second-view video based on the estimated depth map and the 2D video input.
 DIBR consists of three processes.
3D Image Warping
The process includes two steps:
1. Original image pixels (m′(x′,y′)) from the real view are re-projected into the 3D world, based on the parameters of the camera configuration.
2. The 3D-space points (M(X,Y,Z)) are projected onto the image plane of the "virtual" view (e.g. m(x,y)) to generate the virtual view.
(Diagram: virtual camera and original camera.)
3D Image Warping
• Left-eye and right-eye images at virtual camera positions Cl and Cr can be generated for a specific camera distance tc, given the focal length f and the depth Z from the depth map.
• The geometrical relationship between them can be expressed in terms of f, tc and Z.
• Based on these equations, we can directly map the pixels of the right-eye view to the left-eye view in the 3D image warping process.
(Diagram: camera configuration for the generation of virtual stereoscopic images, showing the original and virtual cameras and a 3D object; xl and xr are the projections of the 3D point P on the left and right images.)
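The slide's equation image is not reproduced, but the commonly used shift-sensor relation is consistent with the description: each virtual view shifts a pixel horizontally by half the disparity f·tc/Z, so close objects (small Z) get a large disparity. A sketch under that assumption:

```python
def warp_x(x_center: float, f: float, tc: float, Z: float):
    """Commonly used shift-sensor DIBR relation (an assumption, since the
    slide's formula is not reproduced): the virtual left/right views shift
    each pixel horizontally by half the disparity f*tc/Z."""
    shift = 0.5 * f * tc / Z
    x_left = x_center + shift
    x_right = x_center - shift
    return x_left, x_right

# Close objects (small Z) get a large disparity, distant ones a small one
# (f in pixels, tc = 6 cm camera separation -- illustrative values):
xl_near, xr_near = warp_x(100.0, f=1000.0, tc=0.06, Z=2.0)   # 2 m away
xl_far,  xr_far  = warp_x(100.0, f=1000.0, tc=0.06, Z=20.0)  # 20 m away
print(xl_near - xr_near, xl_far - xr_far)  # -> 30.0 3.0
```

This inverse dependence on Z is exactly why a point at infinity lands at the same x in both views (zero disparity).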
Major Challenges in DIBR
Occlusion:
Two different points in the image plane of the real view can be warped to the same location in the virtual view.
To resolve this, the point that appears closer to the camera in the virtual view is used.
Disocclusion:
An area occluded in the real view may become visible in the virtual view.
No information is available to generate these pixels, so some empty pixels (holes) are created in the virtual view.
To resolve this we use hole-filling and depth-map pre-processing.
(Diagrams: original and virtual cameras, with a hole appearing in the virtual view.)
Holes Created in 3D Image Warping
(Example: a depth map and a right-view image go through 3D image warping; the resulting left-view image contains holes due to disocclusion.)
Hole-Filling by Interpolation
 Detect holes.
 Fill holes by averaging textures from neighbouring pixels.
 Linear interpolation technique.
Linear interpolation introduces stripe distortion in large holes.
Solution: pre-processing of the depth map with a smoothing filter.
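The interpolation step can be sketched on a single scanline: each run of hole pixels is filled linearly between its nearest valid neighbours (here np.interp also clamps holes that touch the border to the nearest valid value):

```python
import numpy as np

def fill_holes_row(row: np.ndarray, hole: int = -1) -> np.ndarray:
    """Fill hole pixels in one scanline by linear interpolation between the
    nearest valid neighbours on either side. A minimal sketch of DIBR
    hole-filling; real systems work on full images, not single rows."""
    out = row.astype(float).copy()
    xs = np.arange(len(row))
    valid = row != hole
    out[~valid] = np.interp(xs[~valid], xs[valid], row[valid].astype(float))
    return out

row = np.array([10, -1, -1, 40, 50])    # -1 marks disocclusion holes
print(fill_holes_row(row))              # -> [10. 20. 30. 40. 50.]
```

The stripe distortion the slide mentions follows directly from this scheme: a wide hole gets one long linear ramp per scanline, which shows up as vertical streaks.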
Pre-Processing of Depth Map by Smoothing Filter
Reduce disocclusion (holes) in the virtual views
 Less significant texture artifacts
Original Depth Map Depth Map after Smoothing Filter
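A simple box filter can stand in for the smoothing filter (an illustrative choice; practical systems often use Gaussian or asymmetric filters). Softening sharp depth edges shrinks the disocclusion holes that warping would otherwise create:

```python
import numpy as np

def smooth_depth(depth: np.ndarray, k: int = 3) -> np.ndarray:
    """k x k box-filter smoothing of a depth map, with edge replication at
    the borders. Softer depth edges mean smaller warping shifts between
    neighbouring pixels, and therefore fewer and smaller holes."""
    pad = k // 2
    padded = np.pad(depth.astype(float), pad, mode="edge")
    out = np.zeros_like(depth, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + depth.shape[0], dx:dx + depth.shape[1]]
    return out / (k * k)

step = np.array([[0, 0, 255, 255]] * 3)   # a hard depth edge
print(smooth_depth(step)[1])              # the edge becomes a ramp: 0, 85, 170, 255
```

The trade-off is the one the slides note: smoothing reduces holes and texture artifacts, but it also flattens genuine depth discontinuities.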
Pre-Processing of Depth Map
(Example: the left-view image created by 3D image warping using the smoothed depth map has far fewer holes; shown alongside the left-view image after hole-filling, the asymmetric smoothed depth map, and the right-view image.)
Conclusion
 It is possible to convert 2D video to 3D video automatically, with good 3D perception for some material, using depth-from-motion estimation and DIBR techniques.
 When you buy a 3D-ready TV, the quality of the 2D-to-3D conversion function should be one of your considerations, as different brands use different technologies for this conversion.
Converted 3D Video Sample
IF-2D3D1 3D Image Processor
Real-time 2D/3D conversion
• 2D is converted into 3D in real time.
• Separate L/R HD-SDI outputs enable you to convert existing 2D content to 3D; convenient for rough editing.
• You can adjust both parallax and 3D intensity.
The 3D mixer converts L/R dual signals to a 3D mixed format
Convenient for real-time monitoring when shooting in 3D or when shooting with 2D equipment.
• Waveform monitor and vectorscope for comparing L & R video streams on a display.
• Split function for comparing L & R video streams on one screen with movable boundary.
• Rotation function to facilitate a restricted rig setup for 2 cameras when shooting in 3D.
• HD-SDI frame synchronizer* for synchronizing a pair of cameras that lack external sync.
• Anaglyph and sequential viewing modes for enhanced convenience, providing multiple ways to
check 3D content
 Choice of 3D mixed formats
LbL: Line-by-line SbS: Side-by-side-half AB: Above-below CB: Checkerboard
Parallax adjustment
3D Intensity adjustment:
 This allows virtual, simultaneous adjustment of curvature and relief, to manipulate
the intensity of the 3D effect.
 As with Parallax adjustment, there are three viewing modes: Intensity 1 (natural),
Intensity 2 (anaglyph), and Intensity 3 (sequential).
 You can adjust curvature and relief simultaneously.
Near-term challenges for 3D
Capacity for new channels and VOD
Consumer adoption
Technology specifications
-Content encoding, MPEG-4 profile
Production experience
-Best creative practices by event type
-Graphics insertion
 In 2008, the University of Arizona demonstrated the first updatable (rewritable) holographic 3D display.
 Photorefractive polymers have the potential to offer colourful images and large sizes in an updatable display.
 The display they demonstrated was, at 4 in. × 4 in., the largest yet created.
 It could display a new image every 3 minutes, and images could be viewed for several hours without the need for refreshing.
Holograms, like photographs, are recordings of reflected light.
 Here, the researchers created a hologram based on a 3D model of an object on a computer, and no real
physical object was required.
 They generated 2D perspectives of the object on the computer, which were processed and combined to
create about 120 holographic pixels, or “hogels.”
 To create a single hogel, the researchers modulated a laser beam with that hogel's data, focused the beam into a thin vertical line, and made the beam interfere with a second, unmodulated laser beam.
 The entire hologram could be written by repeating this process with all 120 hogels and positioning them
next to each other.
 After all hogels were written, the researchers could illuminate the sample with a simple LED to make the
3D hologram viewable.
 The sensation of 3D is created due to parallax: each eye is seeing a different perspective of the object.
• Holograms created with the photorefractive polymer in an updatable holographic
3D display
Sony cameras & camcorder suitable for 3D
Format selection
Thomson cameras & camcorders suitable for 3D
SI-3D System Feature(Silicon Imaging company):
• 2K DCI & HD Raw Stereo Recording & Playback
• 3D Visualization (50/50, Side-by-Side, Anaglyph & Wiggle)
• Dual Outputs: Beam-Splitter LCD or Live Projection
• Virtual Parallax Adjustment: Flip, X-Y Shift and Rotation
• 12-Bit Uncompressed and CineformRAW Recording
• Single or Dual Drive independent Left/Right file
• Frame-Store: Grab/Save/Recall with 100%/50% Opacity
• Convergence Alignment Screen Guides (% of screen)
• Focus Tools: 4x Zoom, Edge Detect, Spot/Loupe Meter
• Exposure Tools: False-Color, Histograms & Spot Meter
• Ambient USB Timecode Interface
• Project & Metadata Management & AVID ALE Log File
• Auto File Sequencing – Scene/Shot/Take
• Iridas Speedgrade Embedded On-Set Grading
• Export Conversion: DNG, AVI, QuickTime (MOV)
• Workflows – Avid, Apple Final-Cut, Adobe & Others
• Includes: 2xSI-2K Mini, Sync/Network Cables & Software
Stereoscopic 3D Displays for Virtual Reality
• S3D Display Technology Based on VR
System and Size of Audience
• Monitor (Fish Tank VR)
– Active Stereo
– Anaglyphic Stereo
• Head Mounted Displays (HMD)
– Separate Left/Right Signals
– Active Stereo Converted to Separate
Signals
• Desks
– Active Stereo
• CAVE
– Active Stereo
– Passive Rarely
• Walls/ Curved Screen
– Active Stereo for Small Audiences
– Passive for Larger Audiences
Near-term challenges for 3D
Capacity for new channels and VOD
Consumer adoption
Technology specifications
-Content encoding, MPEG-4 profile
-Tru2Way Host Requirements
Production experience
-Best creative practices by event type
-Graphics insertion
Naming / Branding
Questions??
Discussion!!
Suggestions!!
Criticism!!
PPTX
CNS - Unit 1 (Introduction To Computer Networks) - PPT (2).pptx
CT Generations and Image Reconstruction methods
Environmental studies, Moudle 3-Environmental Pollution.pptx
Micro1New.ppt.pptx the main themes if micro
Unit I -OPERATING SYSTEMS_SRM_KATTANKULATHUR.pptx.pdf
ENVIRONMENTAL PROTECTION AND MANAGEMENT (18CVL756)
UNIT-I Machine Learning Essentials for 2nd years
Micro 4 New.ppt.pdf a servay of cells and microorganism
UEFA_Carbon_Footprint_Calculator_Methology_2.0.pdf
Mechanics of materials week 2 rajeshwari
Computer System Architecture 3rd Edition-M Morris Mano.pdf
chapter 1.pptx dotnet technology introduction
BBOC407 BIOLOGY FOR ENGINEERS (CS) - MODULE 1 PART 1.pptx
electrical machines course file-anna university
Cisco Network Behaviour dibuywvdsvdtdstydsdsa
Programmable Logic Controller PLC and Industrial Automation
MLpara ingenieira CIVIL, meca Y AMBIENTAL
Solar energy pdf of gitam songa hemant k
SEH5E Unveiled: Enhancements and Key Takeaways for Certification Success
LOW POWER CLASS AB SI POWER AMPLIFIER FOR WIRELESS MEDICAL SENSOR NETWORK
CNS - Unit 1 (Introduction To Computer Networks) - PPT (2).pptx

An Introduction to 3D TV

  • 2. Agenda: Interest in 3DTV; depth cues; binocular information processing; parallax; the stereo window; the floating window; playing with the 3D images; the 3D camera rig; showing 3D in the home (displaying 3D video); autostereoscopic displays; the 3D image processor; variety of 3D/2D display functions; near-term challenges for 3D; how to shoot 3D video; recording and post-production in 3D; 3D generation from 2D
  • 3. History of 3D (stereoscopic) content. 1858: the first three-dimensional image (a still picture) is shown. 1895: motion pictures are invented by the Lumière brothers of France (Auguste and Louis). 1922: premiere of the first 3D movie, “The Power of Love”, the first to use the anaglyph process. 1952–1954: the first wave of 3D cinema, as the number of movie-goers declines with the growing popularity of television. 1981–1984: the second wave of 3D cinema; many directors actively produce 3D movies again as CATV channels begin to broadcast 3D programs. 2003–now: the modern 3D revival.
  • 6. Interest in 3DTV. Interest in watching 3D at home: 16%. Interest in purchasing a 3D-capable TV in the next 3 years: 25%. Wearing glasses has no impact on purchase: 45%. Interest in 3D content: action/adventure 65%, nature/wildlife 60%, sports 45%.
  • 7. Providers have announced plans to launch a 3D network. DirecTV: announced the launch of 4 channels. Verizon FiOS: will carry the 2010 NBA All-Star Game. Discovery and Sony/IMAX: JV to establish a 24/7 dedicated 3D television network. ESPN/Sony: covering 85+ live events in 2010. Sky: 3D broadcast service to pubs and clubs. Asia Pacific region: launched 3D TV broadcast services.
  • 8. 3D TV evolution. First generation (now): stereoscopic television; left and right pictures are “fused” by the brain; 3D glasses necessary. Second generation: autostereoscopic; multiple viewpoints; glasses-free. Third generation: integral imaging and holography, “true 3D”; the entire object wave is recorded, replicating the physical 3D light field.
  • 9. Depth cues: eight cues for estimating the relative distance of objects: focus, perspective, occlusion, lighting and shading, colour intensity and contrast, relative movement, and vergence and stereopsis. The first six do not require two eyes, and we can estimate depth using any combination of them; they have long been used by artists, illustrators and designers, while film and video add depth through moving objects. Vergence and stereopsis are among the most powerful depth cues.
  • 10. Focus (accommodation): the focus distance (accommodation distance). 1) We scan over the various objects in the scene and continually refocus on each object. 2) Our brains remember how we focus and build up a memory of the relative distance of each object compared with all the others in the scene. (Figure: out of focus vs. in focus.)
  • 11. Perspective. Vanishing point: the point, often on the horizon, where objects become so small they disappear altogether. Our brains are constantly searching for the vanishing point in every scene we look at. Straight lines and the relative sizes of objects help to build a map in our minds of the relative distance of the objects in the scene. The brain understands that things get smaller as they move away.
  • 12. (Figures: vanishing points.) The brain understands that things get smaller as they move away.
  • 13. Occlusion: objects at the front of a scene hide objects further back. When a shape appears broken by another object, we assume the broken object is further away and behind the object causing the breakage. The car obscures the house, therefore the car must be in front.
  • 14. Lighting and shading. 1) Objects appear brighter on the side facing the light source and darker on the side facing away from it. 2) Objects also produce shadows which darken other objects. Our brains can build a map of the shape and relative position of objects in a scene from the way light falls on them and the pattern of the shadows. (Figure: simple shapes with no shading vs. the same shapes with shading, which gives a powerful sense of depth.)
  • 15. The light on the left and shadow on the right provide a very strong sense of 3D shape and form.
  • 16. Colour intensity and contrast. 1) Colour intensity is reduced in distant objects. 2) Contrast is reduced in distant objects. We can build a map in our minds of the relative distance of objects from their colour intensity and level of contrast. (Figure: unsaturated, faded colours vs. saturated, intense colours.)
  • 17. Combining depth cues: the first five (focus, perspective, occlusion, lighting and shading, colour intensity and contrast) have been used by artists, illustrators and designers for hundreds of years to simulate a 3D scene in paintings and drawings.
  • 18. Relative movement: as we walk through a scene, close objects appear to move faster than far objects. A very powerful cue: the relative movement of each object compared to the others. Cartoonists have used this to give an impression of 3D space in animations. In film and television it is used to enhance the sense of depth in movies and programmes.
  • 19. An example: as the viewpoint moves from side to side, objects in the distance appear to move more slowly than objects close to the camera.
  • 20. Vergence: divergence and convergence. Our brains can calculate distances from divergence and convergence. Film and video producers can use divergence as a trick to give the illusion that objects are further away, but it should be used sparingly because divergence is not a natural eye movement and may cause eye strain.
  • 21. Stereopsis (IPD difference). We have two eyes. The mean inter-ocular distance (IOD), sometimes called the inter-pupillary distance (IPD), is about 65 mm.
  • 22. Stereopsis (IPD difference): stereopsis is the set of small differences between what the left and right eyes see of everything we look at. Our brains calculate from these differences which objects are close and which are further away. The statue appears slightly to the right in the left-eye image when compared with the buildings behind it in the right-eye image; the brain interprets this as the statue being nearer than the buildings.
  • 23. How 3D works: the left eye sees one view, the right eye sees a slightly different view, and our brain puts them together to create the sense of depth. 3D movies use this method, using two masters: a left-eye image and a right-eye image.
  • 24. • To make our brains think we’re seeing 3D, – Our left eye must see the Left Eye image – Our right eye must see the Right Eye image • So the key to getting 3D to work is – Make sure the left eye sees the left view – Make sure the right eye sees the right view – Sounds simple enough! How 3D Works
  • 25. Placing objects in a 3D space (example: anaglyph). 1) In 3D, two images are projected onto the display. 2) Comparing the left- and right-eye images, every object in the scene is horizontally displaced by a small amount; the direction and amount of displacement define where each object is in the 3D space. 3) By wearing a special pair of glasses, the two images are split so that each eye sees only one of them. 4) The brain assumes the two displaced objects are actually one object and tries to fuse them together. 5) The only way to do so is to assume the object is either in front of or behind the screen plane.
  • 26. Binocular information processing.
  • 27. Monocular information processing: reception of light; transduction of light to neural signals; conveying the signals further in visual processing; compression of the signal.
  • 28. Version and vergence movements. Any eye movement can be described as the sum of a version and a vergence movement; in other words, there are two neural areas, one controlling version and one controlling vergence. Version movement: eye rotation direction and amplitude are equal in the left and right eyes. Vergence movement: eye rotation direction is opposite and amplitude is equal in the left and right eyes.
  • 29. Motor fusion involves vergence eye movements to bifoveally fixate a target. It is activated when correlated scenes are shown for more than 200 ms. Hysteresis: once the eyes have locked onto an object, they stay locked even as depth changes.
  • 30. Sensory fusion: when the eyes have been directed to the same object, neurocognitive computation is still needed to fuse the images from the two eyes. The more complex the image, the more time it takes to fuse; this is called postfusional latency.
  • 32. Binocular visual direction: however, the eyes are not equal; one often dominates!
  • 33. Stereopsis compares the images of the left and right eyes and calculates the three-dimensional structure from the differences.
  • 34. Rivalry, depth, direction, double images. Rivalry: an alternating, uncontrollable change between the left and right eyes' views; annoying, irritating, and a source of eyestrain.
  • 35. Horizontal parallax: the basis for stereopsis. Different parallax indicates a different distance from the viewer, and provides a different view of objects to each eye.
  • 36. Vertical parallax: unnatural in the real world. The eyes concentrate on a small area of interest, and “real world” parallax is always in the direction of separation of the two eyes, normally horizontal. Vertical fusion is limited to about ½ degree; vertical parallax gives headaches.
  • 37. Positive parallax: the object is displaced slightly to the left for the left eye and slightly to the right for the right eye. The brain assumes this is a single object behind the screen.
  • 38. Zero parallax: the object for the left eye and right eye is in the same position on the display. The brain sees this as one object on the screen plane with no 3D displacement.
  • 39. Negative parallax: the object is displaced slightly to the right for the left eye and slightly to the left for the right eye. The brain assumes this is one object in front of the screen.
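The three parallax cases above can be quantified. A minimal sketch of the underlying geometry (the function name and default values are illustrative, assuming an eye separation of 65 mm and a 2 m viewing distance): for screen parallax p, eye separation e and viewing distance V, similar triangles give a perceived distance Z = e·V/(e − p), so p = 0 lands on the screen plane, 0 < p < e pushes the point behind it, p < 0 pulls it in front, and p ≥ e forces the eyes to diverge.

```python
def perceived_depth(parallax_mm, eye_sep_mm=65.0, view_dist_mm=2000.0):
    """Perceived distance of a fused point from the viewer, given the
    on-screen parallax (positive = uncrossed, i.e. behind the screen).
    Z = e*V / (e - p); p >= e means the eyes diverge (treated as infinity)."""
    e, p, v = eye_sep_mm, parallax_mm, view_dist_mm
    if p >= e:
        return float("inf")  # divergence: unnatural, to be avoided
    return e * v / (e - p)
```

For example, at a 2 m viewing distance a parallax of half the eye separation doubles the perceived distance, while a parallax of −65 mm halves it.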
  • 40. Pushing the limits: the stereographer moderates the excesses of 3D so that everyone can enjoy 3D movies, games and programmes that both look good and do not push the limits of our ability to see 3D.
  • 41. Pushing the limits: divergence, no matter how small the amount, is unnatural to humans and will either break the 3D illusion or cause eye strain. Excessive convergence on the display causes the eyes to converge beyond their normal limit, which likewise either breaks the 3D illusion or causes eye strain.
  • 42. Authoring for the screen size: a programme made for cinema is OK on television.
  • 43. Authoring for the screen size: a programme made for television causes eye strain in the cinema.
  • 44. Stereoscopic comfort zone. Grey: invisible to the audience. Red: danger zones (unfusable); strong muscular activity and convergence vs. accommodation conflict; do not stay too long. Orange: no parking; retinal rivalry area; move in and out fast. Green: rest area (fusable depth), close to the screen plane. Striped: natural retinal rivalry zones.
  • 45. The stereo window: the spatial boundary of the picture through which the stereoscopic image is seen. Where the two images converge is where the stereo window is. It is possible to set this window at any plane: 1) at the position of the closest object in the scene; 2) in front of the scene; 3) somewhere inside the scene. In most stereo images the closest object in the scene is at the stereo window plane. Sometimes the scene sits deep inside, and sometimes parts of the scene protrude on purpose in front of the window. Done carefully and deliberately, interesting effects can be achieved; done in error, the result is visually disturbing.
  • 46. The stereo window at the closest object in the scene. Let a be the distance of the closest object in the left picture from the left edge of that picture, and a′ the distance of the closest object in the right picture from the left edge of that picture. If a = a′, the closest object in the picture is at the stereo window and nothing comes through: the tree and the front of the path are at the stereo window, and the other elements are behind it.
  • 47. The stereo window in front of the scene: here all objects are placed behind the stereo window (a < a′), so objects touching the frame, such as the bottom of the tree, look natural.
  • 48. The stereo window somewhere inside the scene (a > a′): the tree, the front of the path and the sign come through the stereo window. Objects touching the frame, such as the tree, will not look natural. Objects that don't touch the frame, such as the path and the sign, will float in space and create an effect that may sometimes be desirable, albeit not always logical.
  • 49. Window violation. Where the two images converge is where the stereo window is. Any object (even the image frame itself) that exists in both the left and right images occupies a place in space relative to the viewer. In the most natural viewing experience, the scene being viewed is behind this virtual window created by the image frame. A window violation exists when objects that come in front of the window are cut off by it, so avoid this when you want scenes to look natural.
  • 50. Breaking the frame with positive parallax: the object appears behind the screen, and the right eye can no longer see it. It is like looking through a window, and therefore OK.
  • 51. Breaking the frame with negative parallax: the object should appear in front, but the left eye can no longer see it. It looks wrong; the 3D image is broken (a window violation). Objects in negative-z space can be problematic when they intersect the edge of the screen, because the edge of the screen (which usually acts like a window) now creates a retinal rivalry: you see part of the object with one eye, floating over the area beyond the frame.
  • 52. Floating window: for artistic reasons, objects sometimes need to cross the edge of the screen while in negative-z space. In theory, the perceived edge of the screen is moved into negative-z space by enough to allow objects in negative-z space to pass behind it.
  • 53. In practice: matte the right edge of the right eye (or the left edge of the left eye) by the same number of pixels as the width of the parallax difference of the negative-z object in question. This matte should be the same colour as the edge of the screen, which in most theatrical situations is black.
  • 54. Floating window: the image for the right eye is re-framed and a floating (false) frame is placed. This blocks the false object for the right eye. Looks OK.
  • 55. • Image for the left eye is re-framed. • Floating or false frame is placed. • This blocks the false object for the left eye. • Looks OK.
  • 56. Examples of setting the same images for different window placements. Stereo window set so that everything is behind the window. The projection simulation shows how the overlapped images would look if projected and viewed without polarized glasses.
  • 57. Stereo window set at infinity (everything in front of the window). This is considered a “window violation”: the people and the tank are cut off and hanging in space. The projection simulation shows how the overlapped images would look if projected and viewed without polarized glasses.
  • 58. Stereo window set in front of the presenter (the guy with the gun). This is considered a “window violation”: the people in front are cut off and are floating torsos. The projection simulation shows how the overlapped images would look if projected and viewed without polarized glasses.
  • 59. Examples of “floating windows”, or “floating crops”. If the object exits the screen quickly enough that the brain has no time to register the issue, it probably isn't much of a problem; if the object lingers or moves slowly out of frame, it will cause one. Solution: the only real way to overcome this is either to change the convergence point so that all objects are behind the screen window, or to manage it in post by “zooming” into the image to remove the object from both the left and right eyes. A more dynamic approach is to crop just the eye that has more of the object so that it matches the other eye.
  • 60. A person breaking frame at the right side of the image (the chap in the black T-shirt). Close each eye in turn while looking through a set of anaglyph glasses to see the differences.
  • 61. A floating window is used to crop the extra information within the right eye images such that the information contained within each eye is identical.
  • 62. Conclusion: the floating-window concept can be a very interesting way to manage issues with objects leaving the screen unevenly and causing stereoscopic failure. The edges of the left and right eyes are cropped dynamically depending on the objects at the edge of the screen.
  • 63. Playing with the 3D image: the standard 3D shot. The cameras are placed a nominal 65 mm apart, parallel to one another. This produces a reasonable 3D image, with a pleasing amount of depth for this scene. The whole scene will be in front of the screen plane.
  • 64. Increased inter-axial distance (IAD). 1) The perceived depth increases. 2) The distance between objects in the scene appears to increase. Close objects will appear closer, but will not grow any larger; the 3D illusion may break if the inter-axial distance is too large. Notes: 1) objects do not grow bigger, they just appear nearer; 2) be careful! The tree now forces the eyes to diverge, which may cause eye strain.
  • 65. Altered toe-in angle. 1) The perceived 3D image goes further into the distance. 2) The various objects appear to be separated by the same distance (the distance between objects in the scene does not change). Do not push the 3D scene back too far: objects in the distance may force the eyes to diverge, which may cause eye strain. Excessive toe-in angles also introduce keystone errors which will need to be corrected later.
  • 67. 3D blindness: it is estimated that about 5% of people cannot see 3D. If you think you cannot see 3D or it gives you a headache, make sure you are watching it properly; if that does not work and you still cannot see 3D, seek medical assistance. Ophthalmic problems: blindness, amblyopia (lazy eye), optic nerve hypoplasia (underdeveloped eye), and strabismus (squint). Those with a lazy, underdeveloped or squint eye will subconsciously compensate by using the first six depth cues. Cerebral problems: our ability to calculate and distinguish 3D information is constructed in our brains in the first few months of our lives. In some milder cases, careful practice will allow such people to see 3D movies and video; in severe cases they may never be able to perceive 3D movies and video.
  • 68. The 3D camera rig. Rig configurations: the parallel rig; the opposing rig; the mirror rig, either bottom-mount (vertical camera underneath) or top-mount (vertical camera on top).
  • 69. The parallel rig: the most compact dual-camera 3D rig. It does not rely on mirrors, which have an impact on image quality, and generally works better with more compact camera and lens designs. It is difficult to achieve a small IAD with large cameras or lenses. (One camera, twin lens: wider than 65 mm?)
  • 70. The opposing rig: a pair of mirrors placed between the cameras reflects the left- and right-eye images into the cameras; both images are horizontally flipped. This type of rig is bulky and not generally used in modern rigs, but may return with new compact cameras. It was popular with film cameras because it allows accurate camera line-up by removing the film plates and mirrors.
  • 71. The mirror rig: a semi-transparent (semi-silvered) mirror reflects the scene into the vertical camera while also allowing the horizontal camera to see through the mirror. 1) Vertical camera on top. 2) Vertical camera underneath: has the advantage of a better centre of gravity and less spurious mirror reflection. A good-quality mirror is vital in this type of rig.
  • 73. Camera rig errors. “3D tax”: the loss in quality due to a mismatch between the left and right images; stereographers try to keep the 3D tax below 5%, and certainly below 10%. Camera rig misalignment: both cameras must be set at the appropriate inter-axial distance for the scene, and must be perfectly aligned with one another so that the two images can be mapped on top of each other on the display to provide a good 3D image.
  • 74. Vertical misalignment: the 3D image breaks, or it may cause eye strain; small errors can be corrected in post-production. Rotational misalignment: the cameras must be rotationally aligned; small errors may be corrected in post-production.
  • 75. Lens pairing errors: the same kind of lens should be used, and the lenses should have the same optical and mechanical characteristics. Common lens-related errors include badly coupled zoom controls, badly coupled focus controls and badly coupled iris controls. Both lenses should track each other exactly through these three parameters, either by electrically coupling the lenses together or by providing accurate remote control to both lenses at the same time.
  • 76.  Focus(Poor focus tracking)  Lenses must focus together exactly.  Errors cannot be corrected in post-production. Therefore it is vital that focus is matched as accurately as possible in the rig.
  • 77. (Figures: blurred anaglyph red component vs. correct version.)
  • 79.  Zoom(Poor zoom tracking)  Small errors may be corrected in post-production.
  • 80.  Excessive convergence  Causes keystone errors.  3D image is broken.  Small errors may be corrected in post production.
  • 81. Lens misalignments: the lens may be 1) misaligned to the CCD sensor (optical axis error).
  • 82. Lens misalignments: the lens may be 1) misaligned to the CCD sensor (optical axis error), or 2) misaligned to its own mechanics, causing zoom wander or focus wander (a small deviation of the optical axis as the lens is zoomed or focused). Zoom and focus wander may follow a straight line or a simple curve, or a complex spiral if any rotating lens elements are slightly misaligned.
  • 83. Solutions: 1) remount the lens to the camera and adjust it; 2) adjust electronically in post-production. It is better to perform this adjustment in the camera and lens, as this maintains the best image quality; however, this may be impossible, and the correction may need to be performed electronically.
  • 84. Camera characteristics mismatch  Ideally, both cameras in a 3D camera rig should be the same type.  same video formats and resolutions and video processing characteristics  colour matched, and white balanced
  • 86. The stereographer. Depth budget: the amount of perceived depth in front of and behind the screen plane, expressed in mm, pixels or as a percentage; for example, stereographers work to a depth budget of about 2% for a 40″ television screen. The stereographer monitors material from one or more 3D camera rigs and checks that the 3D image is correctly aligned and positioned in the 3D space, and also ensures that the 3D image is kept within the allocated depth budget throughout post-production.
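The 2% budget can be turned into concrete numbers. A hedged sketch (function names and the 16:9 aspect assumption are illustrative, not from the deck): 2% of a 1920-pixel-wide picture is about 38 pixels of maximum parallax, or roughly 18 mm on a 40-inch 16:9 screen.

```python
import math

def depth_budget_pixels(percent, h_resolution=1920):
    """Maximum parallax in pixels for a depth budget expressed
    as a percentage of the horizontal picture width."""
    return h_resolution * percent / 100.0

def depth_budget_mm(percent, diagonal_in, aspect=(16, 9)):
    """The same budget expressed in mm on the physical screen:
    derive screen width from the diagonal and aspect ratio."""
    aw, ah = aspect
    width_in = diagonal_in * aw / math.hypot(aw, ah)  # width in inches
    return width_in * 25.4 * percent / 100.0          # inches -> mm
```

This makes clear why material authored for one screen size needs re-grading for another: the same pixel parallax corresponds to very different physical distances on a phone, a TV and a cinema screen.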
  • 87. The 3D processor box is designed as the stereographer’s dream. Stereographers can use it to monitor the feeds from 3D camera rigs and finely tune the two camera outputs to obtain the best-quality 3D image. It can also be used to modify the 3D image: to tune the 3D look, adapt the depth of the image, and keep the 3D field within a given depth budget.
  • 88. Parallel or toed-in? Parallel: the theory recommends using the cameras in parallel, as it avoids trapezoidal distortions and vertical parallax. Drawbacks: 1) in practice it is very complex to carry out, because of an increase in costs; 2) it also increases time, due to the intense stereo editing and associated post-production work that follows (cropping problems and different final frame sizes); 3) there is no initial convergence point (it sits at infinity); 4) during the shoot, the perceived stereo is not the same as in the final result.
  • 89. Parallel or toed-in? Toe-in: 1) you get a sense of depth similar to the final product; 2) it makes the rest of the operations easier; 3) the stereo window is located easily. Drawback: keystone deformations, which lead to unwanted vertical and horizontal parallax.
  • 90. Showing 3D in the home: the delivery channels are digital off-air, satellite and cable transmission, the internet and Blu-ray. At the moment there are no special compression standards designed for 3D, so existing standards must be adapted: the left and right signals are combined into one HD frame sequence and sent over a normal transmission system. Formats: sequential 3D; line-by-line; side-by-side 3D; top-over-bottom 3D; checkerboard; 2D + depth map.
  • 91. Sequential 3D: a sequence of alternating video frames where each successive frame is designed to be viewed by just one eye; a natural fit for active-shutter 3D glasses. Video must now be transmitted at 48 frames per second (24 for each eye) to maintain HD quality. Good for local connections, e.g. from a PlayStation to a TV. Resolution is full HD, but the bandwidth and frame rate must be doubled to reduce flicker.
  • 92. Line-by-line 3D: results in a 50% loss of vertical resolution.
  • 93. Side-by-side 3D: bandwidth and frame rate are the same as normal HD (24 frames per second), at half the horizontal HD resolution. Due to the horizontal upscaling, side-by-side 3D is not as sharp as sequential 3D. Only a firmware upgrade is needed to enable 3D transmission from a DirecTV box.
  • 94. Side-by-side 3D: when the 3D TV receives a side-by-side broadcast, it first separates the left and right images from each frame, then upscales the width of each by a factor of two. Lastly, it displays the left and right images sequentially, as required for viewing with active-shutter glasses.
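The unpacking step the TV performs can be sketched in a few lines (the function name is illustrative, and nearest-neighbour column doubling stands in for the better interpolation real sets use):

```python
import numpy as np

def unpack_side_by_side(frame):
    """Split a side-by-side packed frame (H x W x 3) into full-width
    left and right views, upscaling each half horizontally by 2x."""
    h, w, _ = frame.shape
    left_half, right_half = frame[:, : w // 2], frame[:, w // 2 :]
    # nearest-neighbour horizontal upscale: repeat every column twice
    left = np.repeat(left_half, 2, axis=1)
    right = np.repeat(right_half, 2, axis=1)
    return left, right
```

Both returned views have the original frame dimensions, which is why side-by-side costs no extra bandwidth but halves the horizontal detail available to each eye.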
  • 95. Top-over-bottom 3D: good for normal broadcast transmission. Bandwidth and frame rate are the same as normal HD, at half the vertical HD resolution.
  • 96. Checkerboard 3D: the left and right images are sampled, then the two views are overlaid and appear as a left/right checkerboard pattern. This format preserves the horizontal and vertical resolution of the left and right views, providing the viewer with the highest-quality image possible within the available bandwidth.
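The checkerboard (quincunx) packing described above can be sketched as follows (the function name is illustrative, and real encoders filter before subsampling; left and right are H×W×3 arrays):

```python
import numpy as np

def pack_checkerboard(left, right):
    """Quincunx packing: pixels whose (row + column) parity is even
    come from the left view, odd-parity pixels from the right view."""
    h, w = left.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy + xx) % 2 == 0            # True -> take the left view
    return np.where(mask[..., None], left, right)
```

A display that understands the format reverses the mask to recover each eye's samples and interpolates the missing ones.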
  • 97. 2D + depth map: a 1920×1080 2D image plus a per-pixel depth map.
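With 2D + depth, the receiver synthesizes the second view by shifting each pixel horizontally in proportion to its depth. A naive sketch of that idea (the function name and the linear depth-to-disparity mapping are illustrative; real depth-image-based rendering also fills the disoccluded holes this loop leaves empty):

```python
import numpy as np

def synthesize_view(image, depth, max_disparity=16):
    """Naive view synthesis: shift each pixel of an H x W x 3 image
    horizontally by a disparity scaled from its 8-bit depth value
    (0 = far, 255 = near). Disoccluded pixels stay black."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]     # new horizontal position
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

The appeal of the format is that one depth map lets the display synthesize views for any parallax budget; the cost is exactly these disocclusion holes, which must be inpainted.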
  • 98. Connecting 3D Dual Link HDSDI & 3G-SDI Dual Link is a popular method of connecting 3D signals in professional equipment. It consists of two cables and connectors at 1.485Gbps each, one for left and the other for right. However it takes up two inputs or outputs, effectively halving equipment capacity. 3G-SDI is the new “stereo” connection for video, achieved by multiplexing together the two links of Dual Link into a single cable and connector at 2.97Gbps. “3G makes 3D easier” 98MOHIEDDIN MORADI
  • 99. HDMI HDMI was originally developed for home use as a simple way of connecting high-definition video equipment. Derived from DVI, it uses a 19-pin connector, with digital surround-sound audio, a command protocol and the HDCP (High-bandwidth Digital Content Protection) copy-protection scheme. HDMI version 1.4 Introduced in May 2009, v1.4 adds 3D display methods, including line, field and frame sequences, side-by-side and 2D+depth. Any 3D video equipment with HDMI should be v1.4. Electrically, HDMI uses transition-minimized differential signalling (TMDS), a low-voltage differential scheme that encodes each 8-bit value as a 10-bit symbol. 99MOHIEDDIN MORADI
  • 100. Anaglyph  An anaglyph is created by taking the data from the red colour channel of the left image and combining it with the green and blue channels of the right stereo image.  Compatible with existing displays.
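The channel combination described on this slide is straightforward to sketch with NumPy; `make_anaglyph` is a hypothetical name, but the channel assignments follow the slide directly.

```python
import numpy as np

# Minimal sketch of the anaglyph composition described above:
# red channel from the left image, green and blue from the right.
def make_anaglyph(left_rgb, right_rgb):
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # R from the left-eye image
    out[..., 1] = right_rgb[..., 1]   # G from the right-eye image
    out[..., 2] = right_rgb[..., 2]   # B from the right-eye image
    return out

left = np.full((2, 2, 3), 10, dtype=np.uint8)
right = np.full((2, 2, 3), 200, dtype=np.uint8)
ana = make_anaglyph(left, right)
assert (ana[..., 0] == 10).all() and (ana[..., 1] == 200).all()
```

Viewed through red/cyan glasses, the red filter passes only the left-eye data and the cyan filter only the right-eye data.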
  • 101. Anaglyph glasses  The oldest and most common way of showing 3D.  There are several different anaglyph standards, all using two opposite colours.  The most common has a red filter for the left eye and cyan for the right eye.  The glasses split the combined image into two images, one for each eye.
  • 102. Is anaglyph the wrong way round?  The diagrams shown here may seem the wrong way round. Why? The left image is cyan-filtered, but the left eye is red-filtered.
  • 103. • Anything that is cyan in the picture appears black to the left eye and white to the right eye. • Anything that is red in the picture appears black to the right eye and white to the left eye. • Therefore the colours appear to be the wrong way round but are actually correct. • There is no 3D effect for objects whose colour matches the filters.
  • 104. Advantages: established system; cheap; easily reproduced on screen or printed material; no special display needed. Disadvantages: inefficient; poor colour reproduction; requires an exact match between display and glasses; no 3D effect for objects whose colour matches the filters; discomfort generated by light disparities between the two eyes; ghosting due to the pairing of glass filters and screen (crosstalk). Usage: good for magazines, posters and other printed material; an older cinema and video system, largely replaced by newer, better systems. Trademarks: TrioScopics, ColorCode, NVIDIA (3D Discover).
  • 105. Getting the best from 3D  3D movies and programmes are an illusion.  They are carefully recorded with both cameras exactly horizontal to one another.  Therefore 3D movies must be viewed with your head exactly upright, just as the movie was recorded.  Tilt your head and the illusion becomes strained and eventually snaps.
  • 106. Light is a wave Direction of travel Transverse Wave Longitudinal Wave
  • 107. Electromagnetic waves (polarized light) • Unpolarized light consists of waves with randomly directed electric fields. Here the waves are all travelling along the same axis, directly out of the page, and all have the same amplitude E. • Light is polarized when its electric field oscillates in a single plane, rather than in any direction perpendicular to the direction of propagation. • EM waves are transverse waves; the wave illustrated is polarized in the y direction.
  • 108. Polarization In an unpolarized transverse wave, oscillations may take place in any direction at right angles to the direction in which the wave travels. Polarization restricts the direction in which the light waves vibrate. Polarization is a characteristic of all transverse waves that describes the orientation of the oscillations.
  • 110. Linear Polarization • If the oscillation takes place in only one direction, then the wave is said to be linearly polarized (or plane polarized) in that direction.
  • 111. Linear polarization • Trace of electric field vector is linear This wave is polarized in y direction
  • 112. Circular polarization 1. Two perpendicular EM plane waves of equal amplitude with a 90° difference in phase. 2. The electric vector rotates counterclockwise → right-hand circular polarization
  • 113. Circular polarization A clockwise circularly-polarized wave An anti-clockwise circularly-polarized wave
  • 116. Liquid crystals The alignment of the polarizer “stack” changes with voltage.
  • 117. Linear polarized glasses  Both images are linearly polarized on the display and shown together.  The glasses have a linear polarizing filter for each eye, one vertical, the other horizontal.  Some glasses are set at +45° and −45° so that they can be used either way round. Advantages: better colour than anaglyph; cheap glasses. Disadvantages: requires special display technology with polarizing filters; darker image; the viewer's head must be exactly vertical. Usage: used in the early days of polarizing screens; largely replaced by circular polarization due to the head-tilt problem.
  • 118. Circular polarized glasses  A 3D micro-polarizer filter attached to the LCD panel, and are supplied with circular-polarizer glasses.  The right and left images delivered from the LCD panel are circular-polarized in opposite rotations through the micro polarizer filter and the patterned retarder.  Each right and left image can then be viewed through a corresponding right and left circular-polarizer filter.
  • 119. Advantages: better colour than anaglyph; cheap glasses; the viewer's head may be tilted; easily adapted to existing display and screen technologies because high frame rates are not required. Disadvantages: requires special display technology with polarizing filters; darker image; prone to flickering if frame-sequential technology is used; reduced angle of view, and requires a silvered screen in cinemas. Usage: popular in cinemas because the glasses are cheap and can easily be washed and reused; good for professional monitors because the frame rate is not affected and these monitors are generally only used to view 3D material; not so good for home use, where the screen is darker even for normal 2D viewing. Trademarks: RealD, MasterImage, Zalman, Intel InTru3D.
  • 120. 4K Cinema Projector • The 4K frame is divided into two 2K frames, one above the other. • These are projected through a special lens with two lens turrets and two polarizing filters onto the screen. 4K frame Left Right Combined image on screen Left Right
  • 121. Shutter Glasses type 3D Display with IR 1. Each image is shown on the display separately, left, right, left, right, at a fast enough rate to overcome flickering. 2. The display also extracts an infra-red synchronization signal which is sent to the glasses to tell them which image is being displayed. 3. The glasses are active, and use an LCD shutter in each eye to sequentially shut each eye, while opening the other.
  • 122.  The lenses of 3D shutter glasses are made of LCD panels.  While the image for the left eye is shown, the right eye is blocked by the right lens, and vice versa.  Each lens can be turned off separately.  The lenses turn off and on so quickly that the brain just sees one 3D image combining the two (60 flashes per second).
  • 123. Advantages: good colour; wide angle of view; bright, clear image for both 2D and 3D. Disadvantages: quite inefficient; requires a high-speed display or projector; prone to flickering if the frame rate is not high enough; active and expensive glasses; impractical for cinema use, where the glasses need cleaning and charging between movies (home-grade glasses have a battery life of about 80–100 hours; cinema-grade glasses about 250–350 hours). Usage: good for home use because the screen is just as bright for normal 2D video as for 3D video, and the cost of glasses is less of a problem where equipment is well cared for; manufacturers are working to standardize these glasses so that they can be used on any screen; used in some cinemas, where medical wipes are handed out rather than washing the glasses, and the ticket price either includes a deposit or security is high due to the cost of the glasses. Trademarks: Sony, XpanD, NVIDIA, Panasonic, Samsung.
  • 124. Glasses with Wavelength Multiplex Visualization  Each image is filtered to its primary colours with a narrow band filter.  The exact primary colours are slightly different for each image.
  • 125.  The two images can be combined on the display, yet each can still be differentiated.  The glasses contain a narrow-band dichroic optical filter in each eye, exactly matched to the narrow-band filters used in the display. Thus each eye only sees the part of the combined image intended for it.
  • 126. projection • An interference filter (Infitec) is rotated at high speed, displaying RGB for left and right eyes alternately, while the viewer wears Infitec filter glasses for this "wavelength multiplex visualization system."
  • 127. • Each of the three primary colours (red, green and blue) is split into two different wavelengths, one for the left eye and one for the right eye.
  • 128. Advantages Disadvantages Usage Trademarks •Good separation. • Wide angle of view. •Can be used in cinemas on a normal matt white screen. •Quite inefficient. • Expensive glasses. •‘Thin’ colour. •Prone to flickering if frame sequential displays are used. •Good for cinemas that cannot install the silvered screens required by the circular polarising system. •high cost of glasses means either a deposit is paid on the glasses, or security is high. •Dolby. •Infitec.
  • 129. Active Glasses and stereo projector  An infrared (IR) emitter is installed on the projector to create switching of the left and right eye images using special electronically controlled glasses.  The glasses required are expensive, but the process to set up and install can be more cost effective to assemble with existing equipment such as two small projectors, an IR emitter and the active glasses.  No specialized screen is required, which makes the number of viewing screens more readily available to production and post.
  • 130. Passive Glasses and stereo projector  To enhance the light output, a silver screen is required. This is a challenge because not all content is 3D, and 2D content viewed on a silver screen can have unwanted effects such as color shift and brightness.  However, the brightness is a challenge for these systems whenever optics or filters are installed in line with the light output of a projection system.  Passive glasses are more comfortable to wear than active or anaglyph glasses and are relatively inexpensive.
  • 131. Autostereoscopic displays  Autostereoscopic displays fool the brain so that a 2D medium can display a 3D image by providing a stereo parallax view for the user.  A filter is placed in front of the screen that separates images automatically, which means viewers don’t need to wear glasses!  Each eye sees a different image, having been calculated to appear from two eye positions.  Viewers need to hold their head in a certain position so that each eye sees a different image.  Viewers must be at the right distance and angle from the screen in order to receive the right image on the right eye.
  • 132.  These stereoscopic displays lack several other cues that are normally used to build up a 3D image: Movement parallax: an autostereoscopic display has only a single 3D view, which is calculated by the software, whereas around a real object an infinite number of views can be seen as the viewing position moves. Convergence: on an autostereoscopic monitor the eyes converge in front of or behind the monitor on a virtual object. Focus: this is at a different distance on an autostereoscopic display, as the display and the virtual object are at different distances from the user.
  • 133. Autostereoscopic displays fool the brain so that a 2D medium can display a 3D image by providing a stereo parallax view for the user. Method 1: Parallax Barrier  In the parallax barrier a mask is placed over the LCD display which directs light from alternate pixel columns to each eye.  Parallax barrier displays allow instant switching between 2D and 3D modes as the light barrier is constructed from a layer of liquid crystal which can become completely transparent when current is passed across, allowing the LCD to function as a conventional 2D display. 133MOHIEDDIN MORADI
  • 134. Method 2: Lenticular Autostereoscopic  In the lenticular lens approach, an array of cylindrical lenses directs light from alternate pixel columns to a defined viewing zone, allowing each eye to receive a different image at an optimum distance. Crosstalk  refers to the amount of incorrect information that enters the wrong eye due to screen defects, filters and synchronization problems.  Crosstalk is a problem in 3D reproduction systems but can be minimized during production of the film. 134MOHIEDDIN MORADI
  • 135.  Inverse image: the image formed at each eye is the wrong way round.  Blended zone (too far from the optimal distance): the user sees both images forming in each eye, causing a blurred and confusing image (crosstalk).  Centre: within the correct viewing zone.  If this were a monitor with head/eye-tracking technology and the viewer were in an inverse-image zone, the monitor could switch the images being projected in each direction. 135MOHIEDDIN MORADI
  • 136. Multiscopic Drawback: viewers need to hold their head in a certain position, at the right distance and angle from the screen, in order to receive the right image in the right eye. • To overcome this drawback, "multiscopic" systems are starting to appear that show not only two but several different views of the same scene (five, eight, nine or even 25). • This means you can see images in three dimensions even if you move around. Such systems, however, greatly reduce the definition of each image (number of pixels), which complicates the filming of natural scenes. • Each screen has its own format and number of images that it displays. • There is currently no standard for transmitting this data, so use is restricted to end-to-end proprietary systems. • There is currently no equivalent technology for transposing this system to cinema.
  • 137. Glasses-free multi-viewer full-HD 3D display using a triple liquid-crystal barrier • This idea relies upon existing technologies: 240 Hz LCD panels and the face recognition and tracking systems embedded in most modern cameras and camcorders. • Each viewer occupies about 50 cm, and the average spacing between eyes is 7 cm. • A video camera (not shown here, located on top of or below the monitor) detects and locates each viewer. • Up to 4 simultaneous viewers, standing around 1.5 m away from the screen. The second and third barriers, called "viewer barriers", filter each viewer separately, hence the need for a 240 Hz panel, so that each of the 4 viewers has a refresh rate of 60 Hz. The focus lines from each eye can be considered parallel (left eye P2 // left eye P4, right eye P1 // right eye P3), just as sun rays can be considered parallel when reaching the earth. These viewer barriers adapt their size and position based on the position of each viewer, creating a virtual tunnel only for the targeted viewer, fitting the rays from his two eyes. A 1/3-pixel resolution and a double barrier appear to be enough for such discrimination. The first barrier ("stereo barrier") is achieved here through a 240 Hz liquid-crystal panel and does not differ from a classic parallax barrier. The advantages: 1. we can virtually shift the position of the stereo barrier by 1/3 pixel based on the position of each viewer, so that each eye actually receives a different pixel from the other eye; 2. we can alternately switch this vertical interlacing: on the first scan the left eye sees pixels P2 and P4, on the second it sees pixels P1 and P3 (the opposite for the right eye).
  • 138. What are the potential risks associated with watching 3D TV?  Viewing 3D TV may cause headache, motion sickness, perceptual after effects, disorientation, eye strain, and dizziness.  If your eyes show signs of fatigue or dryness or if you have any of the above symptoms, immediately stop watching and rest.  It is recommended that users take frequent breaks to lessen the potential of these effects.  In rare cases some viewers may experience an epileptic seizure or stroke when exposed to certain flashing images or lights contained in certain television pictures or video games.  Children and teenagers may be more susceptible to health issues associated with viewing in 3D and should be closely supervised when viewing these images.
  • 139. Live 3D Broadcast (Sky)
  • 140. Recording challenges • Two video signals. • Twice the recorded bandwidth… or half the quality. 140MOHIEDDIN MORADI
  • 141. 3D post production Production problems: editors, routers, mixers, etc. must process two video signals. −All mixes, fades, wipes, etc. must be frame accurate. −Two mattes, keys, or alpha channels are required. One idea: pair two channels together. Better idea: design dual-channel equipment (similar to the introduction of stereo audio in the 1960s). 141MOHIEDDIN MORADI
  • 142. • PROFESSIONAL: Dual-stream-capable switchers • A standard switcher with dual-stream capability. 142MOHIEDDIN MORADI
  • 143. • PROFESSIONAL: Editing • Dual video-stream timelines (Left, Right) 143MOHIEDDIN MORADI
  • 144. Adding graphics and text • 2D graphics do not appear properly when viewing 3D programs.  If text is added correctly, to both the left and right views, the 3D image is preserved.
  • 145. Adding graphics and text • 2D graphics do not appear properly when viewing 3D programs.  If text is added normally it will break the 3D image.
  • 146. Mixes and fades can cause eye-strain.
  • 147. 3D images can be flattened for a few frames during a fade or wipe.
  • 148. Commonly used cameras for 3D capture.
  • 149. Variety of 3D/2D Display Functions Flip H When a half-mirror type of rig is used, either the left or right signal may be reversed horizontally. The Flip H function turns the reversed image back to the normal view. This is helpful because the user can refer directly to the rig camera, achieving a simple and cost-saving system. 149MOHIEDDIN MORADI
  • 150. Retinal Disparity If both eyes are fixated on a point f1 in space: – The image of f1 is focused at corresponding points in the centre of the fovea of each eye (zero disparity). – Another point f2 is imaged at points in each eye that may be at different distances from the centre of the fovea. – This difference in distance is the retinal disparity: retinal disparity = d1 + d2. – Positive disparity: the point lies in front of the point of fixation; negative disparity: the point lies behind the fixation point. 150MOHIEDDIN MORADI
  • 151. Disparity Simulation Users can simulate the amount of 3D image parallax and judge whether the camera rig should be adjusted on location or whether it would be better to adjust the parallax later during the post-production process. 151MOHIEDDIN MORADI
  • 152. Horopters  Vieth-Muller circle: maps out which points appear at the same retinal disparity.  Horopter: the locus of points in space that fall on corresponding points in the two retinas when the two eyes binocularly fixate on a given point in space with zero disparity.  Points on the horopter appear at the same depth as the fixation point (stereopsis cannot distinguish them).  What is the shape of a horopter? The Vieth-Muller circle: points on the horopter have zero disparity. 152MOHIEDDIN MORADI
  • 153. Horopter Check  Helps users to perceive subtle differences of depth between objects placed on the 3D screen surface. Either the left or right 3D image signal (or both) is displayed in a selected single colour: black, red, blue, or monotone.  Example: the left image in monotone and the right image in red.  If red parallax is seen on the left side, an object is placed in front.  If red parallax is seen on the right side, an object is placed behind. 153MOHIEDDIN MORADI
  • 154. Checker Board [2D mode]  Left and right input signals are displayed in a grid pattern on screen – divided into 9 blocks vertically and 16 blocks horizontally.  By comparing adjacent images, users can recognize a difference in brightness and colour setting of the left and right images, and thus easily adjust the camera's white balance and iris settings. 154MOHIEDDIN MORADI
  • 155. L/R Switch [2D mode] Left and right signals can be swapped in a moment without inserting black frames. This enables users to compare whole images and check for any lack of harmony or for unnatural images. 155MOHIEDDIN MORADI
  • 156. Inches Viewing Distance • 42″ diagonally 7 feet in distance • 46″ diagonally 8 feet in distance • 50″ diagonally 9 feet in distance • 52″ diagonally 9 to 10 feet in distance • 55″ diagonally 10 to 11 feet in distance • 60″ diagonally 12 to 13 feet in distance • 63″ diagonally 13 to 14 feet in distance
  • 157. Why do we need 3D generation from 2D?  3D stereo content is not yet rich enough, but large numbers of 2D videos exist in different compressed formats.  The conversion process can be off-line, generating high-quality 3D versions of classic 2D content for redistribution in Blu-ray HDTV format (creating revenue from old content).  The conversion can also be implemented in real time in a 3D TV set for stereoscopic display of 2D video sources (adding a 3D feature to the set-top box, DVD player, or TV set).  CyberLink PowerDVD 10 Ultra 3D can convert 2D DVD movies to 3D.  Most 3D-ready TVs embed a real-time automatic 2D-to-3D function for watching 2D video content with a 3D effect. 157MOHIEDDIN MORADI
  • 158. Block Diagram of 2D-to-3D Video Conversion  The main purpose of 2D-to-3D video conversion is to generate the second-view video based on the content of the 2D video, which involves two processes: (1) Depth Estimation (2) Depth Image Based Rendering (DIBR) 158MOHIEDDIN MORADI
  • 159. What is Depth Map or Depth Image? • Each depth image stores depth information as 8-bit grey values with the grey level 0 indicating the furthest value and the grey level 255 specifying the closest value. 2D Image Depth Image (Map) 159MOHIEDDIN MORADI
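The 8-bit convention above (grey level 0 = furthest, 255 = closest) can be sketched as a mapping to metric depth; `grey_to_depth` is a hypothetical helper, and the near/far range values are illustrative assumptions, not from the slides.

```python
import numpy as np

# Sketch: map an 8-bit depth image (0 = furthest, 255 = closest) onto an
# assumed metric range [z_near, z_far]. The range values are illustrative.
def grey_to_depth(depth_map, z_near=1.0, z_far=10.0):
    d = depth_map.astype(np.float32) / 255.0   # 0..1, where 1 = closest
    return z_far - d * (z_far - z_near)        # 255 -> z_near, 0 -> z_far

dm = np.array([[0, 255]], dtype=np.uint8)
z = grey_to_depth(dm)
assert z[0, 0] == 10.0 and z[0, 1] == 1.0      # furthest / closest
```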
  • 160. Depth Map Estimation Methods three commonly used depth estimation methods 1- blur : to estimate the depth based on the amount of blur of the object. 2- Vanishing Point : to find out the vanishing point that is the farthest point of the whole image. 3- Motion Parallax : objects with different motions usually have different depths. (near objects move faster than far objects, and so relative motion can be used to estimate the depth map). Motion Parallax is widely used for the depth estimation in 2D-to-3D video conversion. 160MOHIEDDIN MORADI
  • 161.  The motion information can easily be obtained by a block-matching algorithm between two consecutive frames.  The relative depth information is then calculated from these motion vectors.  This is very easy to implement in hardware and very suitable for real time: the inputs of block-matching-based depth estimation are motion vectors, which are easily extracted from compressed video bitstreams such as MPEG-2 (DVD) and H.264/AVC (digital TV broadcasting). 161MOHIEDDIN MORADI
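The slides do not reproduce the exact depth formula, but a common motion-parallax choice consistent with "near objects move faster" is to map larger motion-vector magnitude to a larger (closer) grey value; `motion_to_depth` is a hypothetical sketch under that assumption.

```python
import numpy as np

# Hedged sketch of motion-parallax depth: larger motion magnitude is
# assumed closer (grey 255), zero motion furthest (grey 0).
def motion_to_depth(mvx, mvy):
    mag = np.hypot(mvx, mvy)               # per-block motion magnitude
    mag = mag / max(mag.max(), 1e-6)       # normalize to 0..1
    return (mag * 255).astype(np.uint8)    # 8-bit depth map convention

mvx = np.array([[0.0, 3.0]])
mvy = np.array([[0.0, 4.0]])
d = motion_to_depth(mvx, mvy)              # block with |MV|=5 maps to 255
```

In practice this per-block map shows the staircase effect mentioned on the next slide, which is why colour segmentation is used to refine it.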
  • 162. 2D Video Sequence Estimated Depth Map by Block-Matching Motion Estimation Drawback: serious staircase effect on the boundary of the objects. solution: color based region segmentation 162MOHIEDDIN MORADI
  • 163. Depth Map Enhancement by Color Segmentation Depth fusion combines the depth map estimated by block-matching motion estimation with a colour-segmented frame, which provides region information that the block-based motion depth map lacks. Smoothing the enhanced depth map can also eliminate the blocking effect. 163MOHIEDDIN MORADI
  • 164. Depth Image Based Rendering (DIBR)  To generate the 3D video, DIBR is used to synthesize the second-view video based on the estimated depth map and the 2D video input.  DIBR consists of three processes. 164MOHIEDDIN MORADI
  • 165. 3D Image Warping The process includes two steps: 1. Original image pixels (m’(x’,y’)) from the real view image are re-projected into the 3D world based on the parameters of camera configuration 2. The 3D space pixels (M(X,Y,Z)) are projected into the image plane of the “virtual” view (e.g. m(x,y)) for virtual view generation. virtual camera Original camera 165MOHIEDDIN MORADI
  • 166. 3D Image Warping • Left-eye and right-eye images at virtual camera positions Cl and Cr can be generated for a specific camera distance tc, given the focal length f and the depth Z from the depth map. • For a parallel camera configuration the geometrical relationship can be expressed as xl = x + (tc/2)·(f/Z) and xr = x − (tc/2)·(f/Z), so the disparity is xl − xr = f·tc/Z. • Based on these equations, we can directly map the pixels of the right-eye view to the left-eye view in the 3D image warping process. Camera configuration for generation of virtual stereoscopic images. xl, xr: the projections of the 3D point P on the left and right images. 166MOHIEDDIN MORADI
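The warping step can be sketched one scanline at a time, assuming the standard parallel-camera relation where the pixel disparity of a point at depth Z is f·tc/Z; `warp_row` is a hypothetical helper, and the values of f and tc are illustrative.

```python
import numpy as np

# Sketch: warp one scanline to the virtual left view. Each pixel is
# shifted by half the disparity f*tc/Z; unfilled target pixels remain 0,
# which are exactly the disocclusion "holes" discussed on the next slide.
def warp_row(row, depth_row, f=500.0, tc=0.06):
    left = np.zeros_like(row)
    w = row.shape[0]
    for x in range(w):
        disp = f * tc / depth_row[x]    # pixel disparity at this depth
        xl = int(x + disp / 2 + 0.5)    # round shift to nearest column
        if 0 <= xl < w:
            left[xl] = row[x]
    return left

row = np.arange(10.0)
depth = np.full(10, 15.0)               # constant depth -> disparity 2 px
left_view = warp_row(row, depth)
assert left_view[5] == 4.0              # each pixel shifted right by 1
```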
  • 167. Major Challenges in DIBR Occlusion: two different points in the image plane of the real view can be warped to the same location in the virtual view. To resolve this, the point that appears closer to the camera in the virtual view is used. Disocclusion: an area occluded in the real view may become visible in the virtual view. There is no information available to generate these pixels, so some empty pixels (holes) are created in the virtual view. To resolve this we apply hole-filling and depth-map pre-processing. 167MOHIEDDIN MORADI
  • 168. Holes Created in 3D Image Warping Depth Map Right View Image 3D Image Warping Left View Image created by 3D image warping, with holes due to disocclusion. 168MOHIEDDIN MORADI
  • 169. Hole-Filling by Interpolation Detect holes  Fill holes by averaging textures from neighbouring pixels  Linear interpolation technique. Linear interpolation will introduce stripe distortion in large holes. SOLUTION: pre-processing of the depth map by a smoothing filter. 169MOHIEDDIN MORADI
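The linear-interpolation hole filling above can be sketched per scanline; here holes are marked with NaN, and `fill_holes_row` is a hypothetical helper name.

```python
import numpy as np

# Sketch: fill hole pixels (marked NaN) in one scanline by linear
# interpolation between the nearest valid neighbours on each side.
def fill_holes_row(row):
    out = row.copy()
    idx = np.arange(len(row))
    valid = ~np.isnan(row)
    out[~valid] = np.interp(idx[~valid], idx[valid], row[valid])
    return out

row = np.array([10.0, np.nan, np.nan, 40.0])
filled = fill_holes_row(row)
assert np.allclose(filled, [10.0, 20.0, 30.0, 40.0])
```

For a two-pixel hole this gives a smooth ramp; across a wide hole the same ramp is what appears as the stripe distortion noted above.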
  • 170. Pre-Processing of Depth Map by Smoothing Filter Reduce disocclusion (holes) in the virtual views  Less significant texture artifacts Original Depth Map Depth Map after Smoothing Filter 170MOHIEDDIN MORADI
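The pre-smoothing step can be sketched with a simple box filter standing in for the (often asymmetric) Gaussian used in practice; a blurred depth map has smaller depth discontinuities, so warping it produces fewer and smaller holes. `smooth_depth` is a hypothetical helper.

```python
import numpy as np

# Sketch: box-filter smoothing of the depth map before warping.
# Smaller depth steps at object boundaries -> fewer disocclusion holes.
def smooth_depth(depth, k=3):
    pad = k // 2
    padded = np.pad(depth.astype(np.float32), pad, mode="edge")
    out = np.zeros(depth.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy : dy + depth.shape[0], dx : dx + depth.shape[1]]
    return out / (k * k)

depth = np.zeros((3, 3), dtype=np.float32)
depth[1, 1] = 9.0                      # a sharp depth spike
assert smooth_depth(depth)[1, 1] == 1.0  # spread over the 3x3 window
```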
  • 171. Pre-Processing of Depth Map Left View Image created by 3D Image Warping using the smoothed depth map with much fewer holes. Left View Image after hole-filling Asymmetric Smoothed Depth Map Right View Image 171MOHIEDDIN MORADI
  • 172. Conclusion  It is possible to convert 2D video to 3D video automatically, with good 3D perception for some video, using depth-from-motion estimation and DIBR techniques.  When you buy a 3D-ready TV, the quality of the 2D-to-3D conversion function should be one of your considerations, as different brands use different technologies for this conversion. Converted 3D Video Sample 172MOHIEDDIN MORADI
  • 173. IF-2D3D1 3D Image Processor Real-time 2D/3D conversion • 2D is converted into 3D in real time. • Separate L/R HD-SDI outputs enable you to convert existing 2D content to 3D. Convenient for rough editing. • You can adjust both parallax and 3D intensity. The 3D mixer converts L/R dual signals to a 3D mixed format. Convenient for real-time monitoring when shooting in 3D or when shooting with 2D equipment. • Waveform monitor and vectorscope for comparing L & R video streams on a display. • Split function for comparing L & R video streams on one screen with a movable boundary. • Rotation function to facilitate a restricted rig setup for two cameras when shooting in 3D. • HD-SDI frame synchronizer* for synchronizing a pair of cameras that lack external sync. • Anaglyph and sequential viewing modes for enhanced convenience, providing multiple ways to check 3D content
  • 174.  Choice of 3D mixed formats
  • 175. LbL: Line-by-line SbS: Side-by-side-half AB: Above-below CB: Checkerboard
  • 177. 3D Intensity adjustment:  This allows virtual, simultaneous adjustment of curvature and relief, to manipulate the intensity of the 3D effect.  As with Parallax adjustment, there are three viewing modes: Intensity 1 (natural), Intensity 2 (anaglyph), and Intensity 3 (sequential).  You can adjust curvature and relief simultaneously.
  • 178. Near-term challenges for 3D Capacity for new channels and VOD Consumer adoption Technology specifications -Content encoding, MPEG-4 profile Production experience -Best creative practices by event type -Graphics insertion
  • 179.  In 2008, researchers at the University of Arizona demonstrated the first updatable (rewritable) holographic 3D display.  Photorefractive polymers have the potential to offer colourful images and large sizes in an updatable display.  The display they demonstrated was, at 4 in. x 4 in., the largest yet created.  It could display new images every 3 minutes, and images could be viewed for several hours without the need for refreshing. Holograms, like photographs, are recordings of reflected light.  Here, the researchers created a hologram based on a 3D model of an object on a computer; no real physical object was required.  They generated 2D perspectives of the object on the computer, which were processed and combined to create about 120 holographic pixels, or "hogels."  To create a single hogel, the researchers modulated a laser beam with that hogel, focused the beam on a thin vertical line, and made the beam interfere with a second, unmodulated laser beam.  The entire hologram could be written by repeating this process with all 120 hogels, positioning them next to each other.  After all hogels were written, the researchers could illuminate the sample with a simple LED to make the 3D hologram viewable.  The sensation of 3D is created by parallax: each eye sees a different perspective of the object.
  • 180. • Holograms created with the photorefractive polymer in an updatable holographic 3D display
  • 183. Sony cameras & camcorders suitable for 3D
  • 185. Thomson cameras & camcorders suitable for 3D
  • 186. SI-3D System Features (Silicon Imaging company): • 2K DCI & HD Raw Stereo Recording & Playback • 3D Visualization (50/50, Side-by-Side, Anaglyph & Wiggle) • Dual Outputs: Beam-Splitter LCD or Live Projection • Virtual Parallax Adjustment: Flip, X-Y Shift and Rotation • 12-Bit Uncompressed and CineformRAW Recording • Single or Dual Drive independent Left/Right file • Frame-Store: Grab/Save/Recall with 100%/50% Opacity • Convergence Alignment Screen Guides (% of screen) • Focus Tools: 4x Zoom, Edge Detect, Spot/Loupe Meter • Exposure Tools: False-Color, Histograms & Spot Meter • Ambient USB Timecode Interface • Project & Metadata Management & AVID ALE Log File • Auto File Sequencing – Scene/Shot/Take • Iridas Speedgrade Embedded On-Set Grading • Export Conversion: DNG, AVI, QuickTime MOV • Workflows – Avid, Apple Final-Cut, Adobe & Others • Includes: 2xSI-2K Mini, Sync/Network Cables & Software
  • 188. Stereoscopic 3D Displays for Virtual Reality • S3D Display Technology Based on VR System and Size of Audience • Monitor (Fish Tank VR) – Active Stereo – Anaglyphic Stereo • Head Mounted Displays (HMD) – Separate Left/Right Signals – Active Stereo Converted to Separate Signals • Desks – Active Stereo • CAVE – Active Stereo – Passive Rarely • Walls/ Curved Screen – Active Stereo for Small Audiences – Passive for Larger Audiences
  • 189. Near-term challenges for 3D Capacity for new channels and VOD Consumer adoption Technology specifications -Content encoding, MPEG-4 profile -Tru2Way Host Requirements Production experience -Best creative practices by event type -Graphics insertion Naming / Branding