UNIT III
VISUAL REALISM
J.S.JAVITH SALEEM
ASSISTANT PROFESSOR
AL-AMEEN ENGINEERING COLLEGE
VISUALIZATION
 Visualization can be defined as a technique for
creating images, diagrams, or animations to
communicate ideas.
 Projection and shading are common methods for
visualizing geometric models.
 CAD uses isometric and perspective projection in
addition to orthographic projection for generating
rich visual images with complete design
information.
 To project 3D objects onto 2D views we need to remove
the ambiguities of the different views, which is
achieved by hidden line, hidden surface, and hidden
solid removal approaches.
 Shading, lighting, transparency, and coloring
approaches provide further visual realism.
Model clean-up process
 Generate orthographic views
 Eliminate hidden lines
 Change the necessary hidden lines to dashed lines, and
add dimensions and text to the different views.
Application of realism
 Robot Simulation: visualization of the movement of links, joints, and
end-effectors.
 CNC program verification: checking tool movement along the prescribed path and
estimating cusp height, surface finish, etc.
 Discrete Event Simulation: most DES packages allow the user to create a
shop-floor environment on the screen to visualize the layout of facilities, the
movement of material handling systems, and the performance of machines and tools.
 Scientific Computing: visualization of FEM results such as iso-stress
and iso-strain regions, deformed shapes and stress contours; temperature and
heat flux in heat-transfer analysis; display and animation of mode shapes in
vibration analysis.
 Flight Simulation: cockpit training for pilots is first carried out on flight
simulators, which virtually recreate the surroundings an actual flight will
pass through.
No Lines Removed
Hidden Lines Removed
Hidden Surfaces Removed
HIDDEN LINE REMOVAL
 “For a given three-dimensional scene, a given
viewing point and a given viewing direction, eliminate
from an appropriate two-dimensional projection
the edges and faces which the observer cannot
see.”
 Object space method
 Image space method
Two main types of algorithms:
– Object space: determines which parts of the object are visible. Also called
the world-coordinate approach; the object is described in its physical coordinate system.
– It compares objects, and parts of objects, to each other within the scene definition
to determine which surfaces are visible.
– Image space: determines, per pixel, which point of an object is visible.
Also called the screen-coordinate approach; visibility is decided point by point at
each pixel position on the view plane.
– Zooming does not degrade the quality.
(Figure: object-space and image-space approaches)
Two main hidden surface removal techniques:
Object space: hidden surface removal is applied to all
objects in world coordinates.
Image space: objects are 3D-clipped, transformed to
screen coordinates, and hidden surface removal is then
applied per pixel.
HIDDEN LINE ELIMINATION PROCESS
3D object data (geometry and topology)
→ Transformations (the projected data contains visible and invisible edges)
→ 2D object data
→ Sorting of 2D image data
→ Checks for overlapping (depth comparison is used to determine whether part or all of a polygon is hidden)
→ Application of visibility techniques
→ Elimination of hidden lines
→ Display of results
VISIBILITY TECHNIQUES
 MINIMAX TEST
 CONTAINMENT TEST
 SURFACE TEST
 COMPUTING SILHOUETTES
 EDGE INTERSECTION
 SEGMENT COMPARISONS
 HOMOGENEITY TEST
MINIMAX TEST
 The minimax test (also called the overlap or bounding-box test) checks whether two
polygons overlap. It provides a quick way to establish that two polygons do not
overlap.
 It surrounds each polygon with a box by finding its extents (minimum and
maximum x and y coordinates) and then checks whether the two boxes intersect in
both the X and Y directions.
 If the two boxes do not intersect, the corresponding polygons do not overlap (see
Figure 1). In that case, no further testing of the polygon edges is required.
 If the minimax test fails (the two boxes intersect), the two polygons may or may not
overlap, as shown in Figure 1. Each edge of one polygon is then compared against
all the edges of the other polygon to detect intersections. The minimax test can
also be applied first to pairs of edges to speed up this process.
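A minimal Python sketch of the minimax test is shown below; representing each polygon as a list of (x, y) vertex tuples is an illustrative assumption, not something specified in the slides.

```python
def bounding_box(polygon):
    """Return (xmin, ymin, xmax, ymax) of a polygon given as (x, y) vertex tuples."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def minimax_overlap(poly_a, poly_b):
    """Quick rejection test: False means the polygons certainly do not overlap.

    True only means the bounding boxes intersect; the polygons themselves may
    still be disjoint and then need edge-by-edge testing."""
    ax1, ay1, ax2, ay2 = bounding_box(poly_a)
    bx1, by1, bx2, by2 = bounding_box(poly_b)
    return not (ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1)
```

If minimax_overlap returns False, no edge intersection tests are required at all.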
CONTAINMENT TEST
 The containment test checks whether a given point lies inside a given polygon or
polyhedron. There are three methods for computing containment (surroundedness).
 For a convex polygon, one can substitute the X and Y coordinates of the point into
the line equation of each edge. If all substitutions give the same sign, the
point is on the same side of every edge and is therefore contained.
 For non-convex polygons, two other methods can be used. In the first method, we
draw a line from the point under test to infinity, as shown in Figure 2a. The semi-
infinite line is intersected with the polygon edges. If the intersection count is
even, the point is outside the polygon (as in Figure 2a); if it is odd, the point is
inside.
 If one of the polygon edges lies on the semi-infinite line, a singular case arises which
needs special treatment to guarantee consistent results.
 The second method for non-convex polygons (Figure 2b) computes the sum of the
angles subtended by each of the oriented edges as seen from the test point. If the
sum is zero, the point is outside the polygon; if the sum is -360° or +360°, the point is
inside. The sign depends on whether the polygon vertices are ordered clockwise or
counter-clockwise.
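The sketch below implements the first non-convex method (the semi-infinite ray with an even-odd crossing count) in Python; the polygon representation and the way the singular case is handled are illustrative assumptions.

```python
def point_in_polygon(point, polygon):
    """Even-odd (crossing count) containment test for a simple polygon.

    A semi-infinite ray is cast from 'point' in the +x direction and the
    edge crossings are counted: an odd count means the point is inside.
    Vertices lying exactly on the ray (the singular case in the text) are
    handled consistently by the half-open comparison below."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 <= py) != (y2 <= py):          # edge straddles the ray's line
            # x coordinate where the edge crosses the horizontal line y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                  # crossing lies to the right of the point
                inside = not inside
    return inside
```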
Computing silhouettes
 A set of edges that separates visible faces from invisible faces of an
object with respect to a given viewing direction is called silhouette
edges (or silhouettes).
 The signs of the components of normal vectors of the object faces
can be utilized to determine the silhouette.
 An edge that is part of the silhouette is characterized as the
intersection of one visible face and one invisible face.
 An edge that is the intersection of two visible faces is visible, but
does not contribute to the silhouette.
 The intersection of two invisible faces produces an invisible edge.
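A small Python sketch of silhouette detection from face normals; the mesh representation (a vertex array plus faces as counter-clockwise index lists) and the sign convention for visibility are assumptions made for illustration.

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Return the silhouette edges of a closed polyhedron for a view direction.

    vertices: sequence of 3D points; faces: lists of vertex indices ordered
    counter-clockwise as seen from outside; view_dir: vector pointing from the
    viewer into the scene.  A face is taken as visible when its outward normal
    satisfies N . view_dir < 0; a silhouette edge is shared by exactly one
    visible and one invisible face."""
    def outward_normal(face):
        a, b, c = (np.asarray(vertices[i], dtype=float) for i in face[:3])
        return np.cross(b - a, c - a)

    visible = [float(np.dot(outward_normal(f), view_dir)) < 0.0 for f in faces]

    # Map each undirected edge to the indices of the faces sharing it.
    edge_faces = {}
    for fi, face in enumerate(faces):
        for k in range(len(face)):
            edge = tuple(sorted((face[k], face[(k + 1) % len(face)])))
            edge_faces.setdefault(edge, []).append(fi)

    # Keep edges where exactly one of the two adjacent faces is visible.
    return [edge for edge, fs in edge_faces.items()
            if len(fs) == 2 and visible[fs[0]] != visible[fs[1]]]
```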
Edge intersection
 The hidden line algorithm first calculates the
edge intersections in 2D.
 These intersections are used to find partially visible lines.
 Two edges intersect at the point where their line
equations are simultaneously satisfied.
 Segment comparisons are then used to further
determine visibility.
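Since the slides do not give the intersection formula explicitly, the sketch below uses the standard parametric form for two 2D edges; it is an illustrative implementation, not the exact algorithm from the slides.

```python
def segment_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection of 2D segments p1-p2 and p3-p4, or None if they miss.

    Solves p1 + t*(p2 - p1) = p3 + s*(p4 - p3) with 0 <= t, s <= 1."""
    x1, y1 = p1; x2, y2 = p2
    x3, y3 = p3; x4, y4 = p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(denom) < eps:          # parallel or collinear segments
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    s = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= s <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```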
Segment comparison
 The image is computed scan line by scan line, i.e. in
segments, and displayed in the same order.
 Each scan line is divided into spans (shown dashed in the figure).
 Visibility is determined within each span by
comparing the depths of the edge segments that
lie in the span.
 Segments with the maximum depth value (closest to the
viewer in this convention) are visible throughout the span.
Homogeneity test
 Points are compared for visibility.
 Homogeneously visible
 A neighborhood of the point P can be projected bijectively
(one-to-one) onto a neighborhood of the projection of P,
i.e. Pr(N(P)) = N(Pr(P)), and P is visible.
 Homogeneously invisible
 A neighborhood of P cannot be projected bijectively
onto a neighborhood of the projection of P.
 Inhomogeneously visible (or invisible)
 Pr(N(P)) ≠ N(Pr(P)): the projection of the neighborhood does
not coincide with a neighborhood of the projection.
Surface test (back-face elimination)
We cannot see the back faces of solid objects, so they can be ignored.
With N the outward normal of a face and V the viewing (line-of-sight) vector:
V · N ≥ 0 ⇒ back face
V · N < 0 ⇒ front face
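A minimal sketch of this test in Python, assuming outward face normals and a view vector pointing from the viewer into the scene:

```python
import numpy as np

def is_back_face(face_normal, view_vector):
    """Back face when V . N >= 0, front face when V . N < 0 (N: outward
    normal, V: viewing vector from the eye into the scene)."""
    return float(np.dot(view_vector, face_normal)) >= 0.0

def cull_back_faces(faces_with_normals, view_vector):
    """Keep only the potentially visible (front) faces; for a convex solid
    this already gives the complete visibility solution."""
    return [face for face, normal in faces_with_normals
            if not is_back_face(normal, view_vector)]
```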
Back-face elimination
 Object-space method
 Works well for convex polyhedra: roughly 50% of the faces are removed
 Concave or overlapping polyhedra require
additional processing
 The interior of objects cannot be viewed
Partially visible front faces
 Hidden line removal algorithm
 Depth Algorithm or Z algorithm or Priority
algorithm
 Area oriented algorithms
 Overlay algorithm (for curved surfaces)
 Roberts algorithm
 Hidden surface removal algorithm
 Depth buffer algorithm or z-buffer algorithm
 Area coherence algorithm or Warnock’s
algorithm
 Scan-line algorithm or Watkin’s algorithm
 Hidden solid removal algorithm
 Ray tracing algorithm
Depth or priority algorithm
 This algorithm is also known as the depth, z, or priority algorithm. It is
based on sorting all the faces (polygons) in the scene according to the
largest z coordinate value of each.
 This step is sometimes known as the assignment of priorities.
 If a face intersects more than one face, other visibility tests besides the
z-depth are needed to resolve any ambiguities.
Depth or priority algorithm
 Painter’s algorithm
 The procedure follows the painter’s way of working: the
background is painted first, then the overlying layers, and
finally the nearest objects, in order of decreasing depth.
 When the scene is viewed along the z and x axes there is
no overlap between the views.
Painter’s Algorithm
 Assumption: later projected polygons overwrite earlier
projected polygons
(Figure: graphics pipeline drawing polygons 1, 2, 3 in order – oops! the red polygon should be obscured by the blue polygon)
Painter’s Algorithm
 Main Idea
 A painter creates a picture by drawing background
scene elements before foreground ones
 Requirements
 Draw polygons in back-to-front order
 Need to sort the polygons by depth order to get a
correct image
(figure from Shirley)
Painter’s Algorithm
 Sort by the depth of each polygon
(Figure: graphics pipeline with polygons 1, 2, 3 sorted by depth)
Painter’s Algorithm
 Compute zmin ranges for each polygon
 Project polygons with furthest zmin first
(Figure: zmin ranges of the polygons along the depth (z) axis)
Painter’s Algorithm
 Problem: Can you always get a total sorting?
(Figure: polygons whose zmin ranges overlap – is the resulting order correct?)
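A minimal Python sketch of the painter’s (depth-sort) idea, assuming polygons are lists of (x, y, z) vertices, that larger z means closer to the viewer, and that cyclic overlaps (the total-sorting problem above) are not handled:

```python
def painters_order(polygons):
    """Back-to-front drawing order, sorting on each polygon's farthest z
    value (its zmin), assuming larger z means closer to the viewer.
    Cyclically overlapping polygons would have to be split first."""
    return sorted(polygons, key=lambda poly: min(v[2] for v in poly))

def render_back_to_front(polygons, draw):
    """Later (nearer) polygons simply overwrite earlier (farther) ones."""
    for poly in painters_order(polygons):
        draw(poly)
```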
Area oriented algorithm
It is based on subdividing the given data set in a stepwise
fashion until all visible areas are determined and displayed.
 Identify silhouette polygons: silhouette edges are
recognized and connected into closed polygons by sorting all
edges for equal end points.
 Assign quantitative hiding (QH) values to the silhouette
polygons.
 This is achieved by intersecting the polygons; the intersection points
define the points where QH may change. The QH value is found using a depth
test: 0 means visible and 1 means hidden.
 Determine the visible silhouette segments.
 If a closed silhouette polygon is completely invisible it need not be
considered any further. If it is visible, the segments with the lowest QH
values are considered.
 Intersect the visible silhouette segments with the
partially visible faces.
 This determines the partially visible and the fully visible faces.
 Display the interior of the visible or partially
visible polygons.
 This uses a stack and simply enumerates all faces lying
inside a silhouette polygon.
 The stack is started with a visible face, and a loop then
pops faces (such as a face F2) from the stack for processing.
Overlay algorithm
 The curved surfaces are approximated by planar
surfaces.
 The u-v grid is used to create a grid surface
consisting of regions with straight edges.
 The curves in each region are approximated by
line segments.
 The first step is to evaluate the surface equation over
the grid so that the linear grid edges are created.
Hidden line removal for curved surfaces
 To compute the exact visibility, we introduce the
notion of visibility curves, obtained by projecting the
silhouette and boundary curves and decomposing the
surface into non-overlapping regions.
 The non-overlapping, visible portions of the
surface are represented as trimmed surfaces, using a
representation based on polygon trapezoidation algorithms.
 The curved surface is converted into a polygon mesh,
which is then tested for visibility.
Hidden surface removal algorithm
 The elimination of the parts of solid objects that are
covered by others is called hidden surface
removal.
 Depth buffer or Z-buffer Algorithm
 Area coherence or Warnock’s algorithm
 Scan-line algorithm or Watkin’s algorithm
Depth-Buffer Methods
(Figure: three surfaces S1, S2 and S3 overlapping pixel position (x, y) on the view plane; the visible surface, S1, has the smallest depth value.)
Depth buffer or z-buffer algorithm
Z-Buffer Algorithm
As we render each polygon, we compare the depth of each of its
pixels to the depth already stored in the z-buffer.
If the new pixel is nearer, we place its shade in the color buffer
and update the z-buffer.
Z-buffer: A Secondary Buffer
(Figure: the color buffer and the depth buffer)
 Two buffer areas are required
• Depth buffer
 Store depth values for each (x, y) position
 All positions are initialized to minimum depth
 Usually 0 – most distant depth from the viewplane
• Refresh buffer
 Stores the intensity values for each position
 All positions are initialized to the background
intensity
Z-BUFFER ALGORITHM:
• It is an extension of the frame buffer
• The display is always stored in the frame buffer
• The frame buffer stores information for each and every
pixel on the screen
• Bits (0, 1) decide whether a pixel will be ON or OFF
• The Z-buffer, in addition to the frame buffer, stores the depth
of each pixel
• After analyzing the data of the overlapping
polygons, the pixel closer to the eye is the one written
• For a resolution X × Y the buffer is an Array[X, Y]
Given a set of polygons in image space
Z-Buffer Algorithm:
1. Set the frame buffer and the Z-buffer to a background
value
(Z-BUFFER = Zmin, where Zmin is the minimum depth value).
For each polygon to be displayed, decide its color, intensity and depth.
2. Scan convert each polygon,
i.e. for each pixel, find the depth at that point:
If Z(X,Y) > Z-BUFFER(X,Y)
Update Z-BUFFER(X,Y) = Z(X,Y)
& FRAME BUFFER
This process is repeated for each pixel.
• In this way we remove hidden lines and display
the polygons that are closer to the eye
• X*Y space is required to maintain the Z-buffer, and X*Y
pixels are scanned
• Expensive in terms of time and space, since the space
required is very large
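A minimal Python sketch of the z-buffer update; it assumes the polygons have already been scan-converted into (x, y, z, color) fragments and follows the Z(X,Y) > Z-BUFFER(X,Y) convention used above (larger z is closer to the viewer):

```python
def z_buffer_render(scan_converted_polygons, width, height, background=0):
    """Minimal z-buffer sketch over pre-scan-converted polygon fragments."""
    depth = [[float("-inf")] * width for _ in range(height)]   # Z-buffer
    frame = [[background] * width for _ in range(height)]      # frame buffer

    for fragments in scan_converted_polygons:
        for x, y, z, color in fragments:
            if 0 <= x < width and 0 <= y < height and z > depth[y][x]:
                depth[y][x] = z        # remember the new closest depth
                frame[y][x] = color    # shade the pixel with this polygon
    return frame
```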
Area coherence or Warnock’s algorithm
Area Subdivision
 Exploits area coherence: Small areas of an
image are likely to be covered by only one
polygon
 Three easy cases for determining what’s in front
in a given region:
1. a polygon is completely in front of everything else in
that region
2. no surfaces project to the region
3. only one surface is completely inside the region,
overlaps the region, or surrounds the region
Identifying Tests
 Four possible relationships
 Surrounding surface
 Completely enclose the area
 Overlapping surface
 Partly inside and partly outside the area
 Inside surface
 Outside surface
 No further subdivisions are needed if one of the following conditions is
true
 All surfaces are outside surfaces with respect to the area
 Only one inside, overlapping, or surrounding surface is in the area
 A surrounding surface obscures all other surfaces within the area boundaries
(determined from depth sorting and the plane equations)
(Figure: surrounding, overlapping, inside and outside surfaces relative to an area)
Warnock’s Area Subdivision
(Image Precision)
 Start with whole image
 If one of the easy cases is satisfied (previous slide), draw
what’s in front
 Otherwise, subdivide the region and recurse
 If region is single pixel, choose surface with smallest depth
 Advantages:
 No over-rendering
 Anti-aliases well - just recurse deeper to get sub-pixel
information
 Disadvantage:
 Tests are quite complex and slow
Characteristics
 Takes advantage of area coherence
 Locating view areas that represent part of a single surface
 Successively dividing the total viewing area into smaller rectangles
 Until each small area is the projection of part of a single visible
surface or no surface
 Require tests
 Identify the area as part of a single surface
 Tell us that the area is too complex to analyze easily
 Similar to constructing a quadtree
Process
 Starting with the total view
 Apply the identifying tests
 If the tests indicate that the view is sufficiently
complex, subdivide
 Apply the tests to each of the smaller areas
 Repeat until each area belongs to a single surface
or reaches the size of a single pixel
 Example
 With a resolution of 1024 × 1024, the area needs to be
subdivided at most 10 times before it is reduced to a point.
Warnock’s Algorithm
 Regions labeled with case
used to classify them:
1) One polygon in front
2) Empty
3) One polygon inside,
surrounding or
intersecting
 Small regions not labeled
(Figure: subdivided image with regions labeled 1, 2 or 3 according to these cases)
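A simplified Python sketch of Warnock’s recursive subdivision, restricted to axis-aligned, constant-depth rectangles so that the classification tests stay trivial; real scenes need polygon-level tests, and the draw callback is an assumed parameter:

```python
def classify(surface, region):
    """Relationship of an axis-aligned rectangular surface to a region,
    both given as (x1, y1, x2, y2)."""
    sx1, sy1, sx2, sy2 = surface["rect"]
    rx1, ry1, rx2, ry2 = region
    if sx2 <= rx1 or sx1 >= rx2 or sy2 <= ry1 or sy1 >= ry2:
        return "outside"
    if sx1 <= rx1 and sy1 <= ry1 and sx2 >= rx2 and sy2 >= ry2:
        return "surrounding"
    if sx1 >= rx1 and sy1 >= ry1 and sx2 <= rx2 and sy2 <= ry2:
        return "inside"
    return "overlapping"

def warnock(region, surfaces, draw, min_size=1.0):
    """Recursive area subdivision.  'surfaces' are dicts with a 'rect' and a
    constant 'depth' (smaller depth = closer); draw(surface, region) is a
    caller-supplied output routine."""
    relevant = [s for s in surfaces if classify(s, region) != "outside"]
    if not relevant:
        return                                       # empty region: nothing to draw
    front = min(relevant, key=lambda s: s["depth"])  # closest candidate surface
    if (len(relevant) == 1                                # only one surface in the area
            or classify(front, region) == "surrounding"  # closest surface covers the area
            or (region[2] - region[0]) <= min_size):      # region is pixel sized
        draw(front, region)
        return
    rx1, ry1, rx2, ry2 = region
    mx, my = (rx1 + rx2) / 2.0, (ry1 + ry2) / 2.0
    for quadrant in ((rx1, ry1, mx, my), (mx, ry1, rx2, my),
                     (rx1, my, mx, ry2), (mx, my, rx2, ry2)):
        warnock(quadrant, relevant, draw, min_size)
```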
SCAN LINE Z-BUFFER ALGORITHM:
• An image space method for identifying visible
surfaces
• Computes and compares depth values along the
various scan-lines for a scene
Scan-Line Method Basic Example
 Scan Line 1:
 (A,B) to (B,C): only inside S1, so color from S1
 (E,H) to (F,G): only inside S2, so color from S2
 Scan Line 2:
 (A,D) to (E,H): only inside S1, so color from S1
 (E,H) to (B,C): inside both S1 and S2, so compute and test depths;
in this example we color from S1
 (B,C) to (F,G): only inside S2, so color from S2
(Figure: polygon S1 with vertices A, B, C, D and polygon S2 with vertices E, F, G, H, crossed by Scan Lines 1–3)
• Scanning takes place row by row
• To facilitate the search for surfaces crossing a given
scan line, an active list of edges is formed for each
scan line as it is processed
• The active list stores only those edges that cross the
scan line, in order of increasing x
• Pixel positions across each scan line are processed
from left to right
• Depth calculations are only needed where the scan line
passes through more than one surface
• In the scan-line method the buffer is only Z-Buffer(X) in size,
whereas for the full z-buffer method it was X*Y
Use the Z-buffer for only one scan line (one row of pixels)
• During scan conversion using the Active Edge
List (AEL), calculate Z(X,Y):
- only the pixel information between two active edges is stored
- the depth of the next pixel is obtained incrementally as z = z1 + ∆z
- ∆z is constant along a span but changes with the slope of the surface
• If Z(X,Y) > Z-BUFFER(X), update the buffer
• If two polygons are present, an Active Polygon List is
maintained along with the Active Edge List
• Active Polygon List: the list of polygons intersecting the
current scan line
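A small Python sketch of the incremental depth computation z = z1 + ∆z along one span, derived from the polygon’s plane equation Ax + By + Cz + D = 0 (describing the polygon by its plane coefficients is an assumption made for illustration):

```python
def span_depths(plane, x_start, x_end, y):
    """Depths along one scan-line span from the plane Ax + By + Cz + D = 0.

    The exact depth is computed once at the left end of the span; moving one
    pixel to the right at constant y changes the depth by the constant
    increment dz = -A/C, i.e. the z = z1 + dz update described above."""
    A, B, C, D = plane
    z = -(A * x_start + B * y + D) / C   # exact depth at the first pixel
    dz = -A / C                          # constant per-pixel change in depth
    depths = []
    for _ in range(x_start, x_end + 1):
        depths.append(z)
        z += dz
    return depths
```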
Hidden solid removal
 Hidden solid removal for B-rep models uses hidden
surface algorithms such as the z-buffer:
 Convert the CSG model to a B-rep.
 Render it with standard hidden surface removal
techniques.
 RAY TRACING
 The complex 3D solid/solid intersection problem
is converted into a 1D ray/solid intersection
calculation.
Ray tracing
 If we shoot a ray from the viewpoint through a
pixel, the first object the ray hits is the one that is
visible at that pixel.
 It can be used for both flat and curved surfaces.
 Shoot one ray from the eye through each pixel.
 Find the closest object blocking the path of the
ray.
 Since a scene has effectively infinitely many light rays,
the rays are traced backwards: a ray from the viewpoint is
traced through a pixel until it reaches a surface.
Ray casting
 If the resolution is x × y, there are xy pixels, so xy
rays are traced.
 Each ray is tested for intersections with each
object in the picture, including the non-clipping
plane.
 Since a ray may intersect many objects, the intersection
closest to the viewpoint is determined.
 The main advantage is that extremely realistic
renderings can be created by incorporating the
laws of optics for reflecting and transmitting light
rays.
 The major disadvantage is performance, since the
process starts anew for each eye ray and treats each
one separately.
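A minimal Python ray-casting sketch, with one primary ray per pixel tested against a list of spheres; the pin-hole camera at the origin looking down the -z axis and the sphere scene description are illustrative assumptions:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Parameter t of the nearest forward intersection of a ray with a
    sphere, or None if the ray misses (origin assumed outside the sphere)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

def ray_cast(width, height, spheres, background=(0, 0, 0)):
    """One primary ray per pixel: the closest hit decides the pixel colour.

    'spheres' is a list of (center, radius, colour) tuples."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Direction through the centre of pixel (i, j) on a unit image plane.
            x = (i + 0.5) / width - 0.5
            y = 0.5 - (j + 0.5) / height
            direction = (x, y, -1.0)
            nearest, colour = float("inf"), background
            for center, radius, col in spheres:
                t = ray_sphere((0.0, 0.0, 0.0), direction, center, radius)
                if t is not None and t < nearest:   # keep the closest intersection
                    nearest, colour = t, col
            row.append(colour)
        image.append(row)
    return image
```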