Deformable Shape Analysis
Deformable shape analysis refers to the process of analysing and understanding the
shapes of objects or regions in images or videos, especially when these shapes
undergo deformations or changes due to factors such as viewpoint, lighting
conditions, occlusions, or inherent object deformations.
• Shape Representation: Objects or regions in images are often represented using geometric
models such as contours, key points, or geometric primitives like lines, circles, or ellipses. These
representations capture the essential characteristics of the object's shape.
• Deformation Models: Deformable shape analysis typically involves the use of deformation
models to describe how objects or regions in images can deform or change shape under different
conditions. Deformation models can be parametric or non-parametric and can range from simple
geometric transformations to more complex statistical models.
• Feature Extraction: Feature extraction techniques are used to capture relevant information from
the shape representations. These features may include shape descriptors, such as curvature, area,
perimeter, moments, or higher-order statistics, which encode important shape characteristics.
• Matching and Registration: Matching and registration algorithms are employed to compare shapes
across different images or frames in a video sequence. These algorithms aim to find correspondences
between points or features on different shapes and estimate the transformations or deformations
required to align them.
• Statistical Shape Analysis: Statistical shape analysis techniques are often used to analyze the
variability of shapes within a dataset and to model the underlying shape variations using statistical
methods such as principal component analysis (PCA), shape context, active shape models (ASMs), or
active appearance models (AAMs); a PCA-based code sketch follows this list.
• Deformation Estimation and Tracking: Deformation estimation and tracking algorithms are used to
estimate the motion or deformation of objects or regions in images or videos over time. These
algorithms may rely on techniques such as optical flow, deformable models, or Kalman filtering to
track and predict shape changes.
• Applications: Deformable shape analysis has numerous applications in computer vision, including
object recognition, object tracking, gesture recognition, medical image analysis, facial expression
analysis, motion analysis, robotics, and more.
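To make the statistical shape analysis step concrete, here is a minimal Python sketch of a PCA shape model in the spirit of active shape models. It assumes the shapes are already aligned and supplied as an (N, K, 2) array of corresponding landmarks; the function names (build_pca_shape_model, synthesize_shape) and the toy ellipse data are illustrative, not from any particular library:

import numpy as np

def build_pca_shape_model(shapes, n_modes=5):
    """Return the mean shape and the first n_modes modes of variation."""
    N, K, _ = shapes.shape
    X = shapes.reshape(N, 2 * K)              # flatten each shape to a row vector
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                      # principal directions of variation
    variances = (S[:n_modes] ** 2) / (N - 1)  # variance captured by each mode
    return mean, modes, variances

def synthesize_shape(mean, modes, b):
    """Generate a shape from mode weights b (the shape parameters)."""
    return (mean + b @ modes).reshape(-1, 2)

# Toy data: 50 ellipses with randomly varying aspect ratio.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
shapes = np.stack([np.stack([np.cos(t),
                             (1 + 0.3 * rng.standard_normal()) * np.sin(t)], axis=1)
                   for _ in range(50)])
mean, modes, var = build_pca_shape_model(shapes, n_modes=2)
new_shape = synthesize_shape(mean, modes, np.array([2 * np.sqrt(var[0]), 0.0]))

Sweeping a single weight in b over a few standard deviations traces out one mode of shape variation; constraining the weights in this way is how ASMs keep deformations plausible.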
Centroidal Profiles
Centroidal profiles are a method of representing shapes or objects based on their centroidal (or centre-of-mass) properties. The centroidal profile provides
a concise and robust representation of the shape, which is useful in tasks such as object recognition, shape matching, and classification.
How centroidal profiles are typically computed and used (a code sketch follows this list):
• Centroid Computation: The centroid of a shape or object is the geometric centre or the average position of all its points. It can be computed as the
weighted average of the coordinates of all points in the shape, where each point's weight corresponds to its contribution to the overall mass or area of the
shape.
• Profile Construction: Once the centroid is computed, the centroidal profile is constructed by representing the shape as a set of profiles or vectors
originating from the centroid to the boundary points of the shape. These profiles encode information about the shape's geometry and structure relative to
its centroid.
• Normalization: Centroidal profiles are often normalized to make them invariant to translation, rotation, and scale changes. Normalization ensures that the
shape's representation remains consistent across different orientations and sizes, making it more suitable for comparison and matching tasks.
• Feature Extraction: Centroidal profiles can serve as features for shape analysis and recognition algorithms. Various shape descriptors can be extracted from
the centroidal profiles to capture important characteristics of the shape, such as its symmetry, compactness, and distribution of boundary points relative to
the centroid.
• Matching and Recognition: Centroidal profiles can be compared and matched with profiles extracted from other shapes using similarity measures such as
Euclidean distance, cosine similarity, or correlation coefficients. This matching process allows for shape recognition and classification based on the similarity
between centroidal profiles.
• Applications: Centroidal profiles find applications in a wide range of computer vision tasks, including object recognition, shape-based image retrieval,
gesture recognition, character recognition, and shape analysis in medical imaging.
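As a concrete illustration of the steps above, the following Python sketch computes a simple centroidal (r-θ) profile and a rotation-tolerant distance between profiles. It assumes the shape is given as an ordered array of boundary points and is star-shaped about its centroid, so the radial profile is single-valued; the function names are illustrative:

import numpy as np

def centroidal_profile(boundary, n_samples=64):
    """Centroid-to-boundary distance resampled at n_samples equally spaced
    polar angles, scale-normalized by the mean radius."""
    c = boundary.mean(axis=0)                 # centroid of the boundary points
    d = boundary - c
    r = np.hypot(d[:, 0], d[:, 1])            # radial distances
    theta = np.arctan2(d[:, 1], d[:, 0])      # polar angles in (-pi, pi]
    order = np.argsort(theta)
    grid = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    profile = np.interp(grid, theta[order], r[order], period=2 * np.pi)
    return profile / profile.mean()           # scale invariance

def profile_distance(p, q):
    """Rotation-tolerant match: best Euclidean distance over cyclic shifts."""
    return min(np.linalg.norm(p - np.roll(q, k)) for k in range(len(q)))

# A circle matches a rotated, scaled copy of itself far better than a lobed shape.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
lobed = np.stack([(1 + 0.2 * np.cos(4 * t)) * np.cos(t),
                  (1 + 0.2 * np.cos(4 * t)) * np.sin(t)], axis=1)
print(profile_distance(centroidal_profile(circle), centroidal_profile(3 * circle)))
print(profile_distance(centroidal_profile(circle), centroidal_profile(lobed)))

Dividing by the mean radius gives scale invariance, resampling on a fixed angular grid removes start-point dependence, and minimizing over cyclic shifts absorbs rotation, matching the normalization step described above.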
Handling Occlusions
Partial occlusion of objects in images complicates the analysis of their boundaries. Occlusions can alter the
perceived perimeter of an object, making it difficult to determine its scale accurately. Additionally, occluded sections of the
boundary may correspond to parts of other objects or to unpredictable damaged segments.
• Segmenting Relevant Sections: A challenging task is to distinguish between relevant (part of the object) and irrelevant
(occluded or damaged) sections of the boundary. The strategy involves attempting positive matches wherever possible and
ignoring irrelevant sections during the matching process.
• Matching Strategy: To match a template with the distorted boundary, a sliding window approach is used, but with the template
extended to twice its original length to accommodate potential breaks due to occlusions. The matching process involves
comparing the differences between the template and boundary along various orientations and displacements (a code sketch follows this list).
• Length of Matched Boundary: The length of the matched boundary segment (L) indicates the quality of the match. Longer
matches are preferred, but if the boundary is occluded in multiple places, the maximum length of the unoccluded boundary
segment is considered.
• Choosing Salient Features: To improve accuracy and speed, it may be beneficial to use only short sections of the boundary
template for matching. Salient features, such as corners, are prioritized over non-salient features.
• Reassembling Boundaries: After locating various segments, the boundary is reassembled as closely as possible. Techniques
such as the Hough transform and relational pattern matching are used for this purpose.
• Handling the Mean Value: The mean value of the boundary orientation cannot be reliably estimated in the presence of occlusions,
so a 2D search over orientation and displacement is necessary, although if small salient features are sought, a 1D search can suffice.
• Occlusions in boundary pattern analysis require a different approach compared to non-occluded scenarios. The method
described emphasizes matching salient features and reassembling boundaries to mitigate the effects of occlusions effectively.
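A minimal sketch of the sliding-window matching described above, assuming each boundary is encoded as a 1D orientation-versus-arc-length profile sampled at unit steps; the doubled template, the tolerance, and the function name are illustrative assumptions rather than the definitive method:

import numpy as np

def longest_match(template_psi, boundary_psi, n_orient=16, tol=0.2):
    """2D search over orientation offsets and displacements; returns the length
    L (in samples) of the longest contiguous run of agreeing orientations."""
    doubled = np.concatenate([template_psi, template_psi])  # tolerate breaks
    n = len(boundary_psi)
    best = 0
    for alpha in np.linspace(0.0, 2 * np.pi, n_orient, endpoint=False):
        for shift in range(len(doubled) - n + 1):           # displacement search
            d = np.abs(doubled[shift:shift + n] + alpha - boundary_psi)
            w = d % (2 * np.pi)
            d = np.minimum(w, 2 * np.pi - w)                # wrapped angle difference
            run = longest = 0
            for ok in d < tol:                              # longest agreeing run
                run = run + 1 if ok else 0
                longest = max(longest, run)
            best = max(best, longest)
    return best

# Example: a square's orientation profile with one stretch occluded by noise.
rng = np.random.default_rng(1)
square = np.repeat([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], 25)  # 100 samples
occluded = square.copy()
occluded[40:60] = rng.uniform(0, 2 * np.pi, 20)                 # occluded stretch
print(longest_match(square, occluded))                          # ~40

The returned run length plays the role of L above: the longer the contiguous agreement, the better and less occluded the match. Restricting the template to short salient stretches reduces the problem to the 1D search mentioned in the mean-value bullet.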
Accuracy of Boundary Length Measures
This topic concerns the accuracy of estimating boundary length in computer vision, particularly in scenarios where adjacent pixels on a curve are considered
separated by different distances depending on their orientation.
• Estimating Boundary Length: A common rule states that adjacent pixels on an 8-connected curve should be considered separated by 1
pixel if the vector joining them aligns with the major axes and by √2 pixels if the vector is in a diagonal orientation. However, this estimator tends to
overestimate the boundary length.
• Explanation of Error: Consider two scenarios where only the top of an object is analyzed. In the first scenario, the top is exactly aligned with a major
axis and the estimation rule gives its length correctly. In the second scenario, the presence of a single step increases the estimated length by √2 pixels, whereas
the true length of the underlying continuous boundary exceeds that of the first scenario by an amount that shrinks as the top of the object lengthens; the
estimate therefore carries a definite error of roughly √2 − 1 pixels.
• Variation in Error with Orientation: The error in estimating boundary length increases initially as the boundary orientation deviates from 0°. A similar
effect occurs as the orientation decreases from 45°, resulting in a maximum error between 0° and 45°. This systematic overestimation of boundary length
can be addressed by using an improved model with different lengths per pixel along major and diagonal axes.
• Improved Model Parameters: The improved model assigns lengths per pixel along the major and diagonal axes, denoted sm and sd respectively. These
values are found to be approximately 0.948 and 1.343, whose ratio is still close to √2. Despite the improvement, more detailed modeling
of the step pattern around the boundary may be needed to further reduce errors.
• Digitization Process: It's important to note that the basis of this work is to estimate the length of the original continuous boundary rather than that of the
digitized boundary. The digitization process loses information, so the aim is to obtain the best estimate of the original boundary length.
• Reduction of Errors: By using the improved model parameters instead of the original ones, the estimated errors in boundary length measurement can be
reduced from 6.6% to 2.3%, under certain assumptions about correlations between orientations at neighbouring boundary pixels.
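As a quick numerical check of these figures, the following Python sketch digitizes a circle and compares the naive (1, √2) per-step weights with the improved ones (sm ≈ 0.948, sd ≈ 1.343); the boundary tracing here is deliberately simplified and only meant to produce 8-connected single-pixel steps:

import numpy as np

def estimate_length(points, s_major=1.0, s_diag=np.sqrt(2)):
    """Sum per-step weights around a closed 8-connected boundary."""
    total = 0.0
    for p, q in zip(points, np.roll(points, -1, axis=0)):
        dx, dy = abs(q[0] - p[0]), abs(q[1] - p[1])
        total += s_diag if (dx == 1 and dy == 1) else s_major
    return total

# Digitize a circle of radius R densely, then drop consecutive duplicates so
# that successive points are single-pixel 8-connected steps.
R = 60.0
t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
pts = np.round(np.stack([R * np.cos(t), R * np.sin(t)], axis=1)).astype(int)
boundary = pts[np.any(pts != np.roll(pts, 1, axis=0), axis=1)]

print("true    :", 2 * np.pi * R)                                   # ~377.0
print("naive   :", estimate_length(boundary))
print("improved:", estimate_length(boundary, s_major=0.948, s_diag=1.343))

On this example the naive weights overestimate the circumference by several percent, while the improved weights land much closer, consistent with the reduction in error quoted above.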