Categories
research

Plane and Edge Detection by the Random Sample Consensus (RANSAC) Algorithm

By SGI Fellows Xinwen Ding, Josue Perez, and Elshadai Tegegn

During the second week of SGI, we worked under the guidance of Prof. Yusuf Sahillioğlu to detect planes and edges from a given point cloud. The main tool we applied to complete this task is the Random Sample Consensus (RANSAC) algorithm.

Definition

We define a plane \(P\) as the dominant plane of a 3D point cloud if the number of points in the point cloud that lie on \(P\) is no less than the number of points on any other plane. For example, consider the following object defined by a 3D point cloud. If we assume the number of points on a face is proportional to the area of the face, then \(P_2\) is more likely to be the dominant plane than \(P_1\) since the area of \(P_2\) is greater than that of \(P_1\). So, compared with \(P_1\), there are more points on \(P_2\), which makes \(P_2\) the potential dominant plane.

An object that is defined by a point cloud, where P2 is more likely to be the dominant plane than P1.

Problem Description

Given a 3D point cloud as the input model, we aim to detect all the planes in the model. To obtain a decent detection result, we divide the problem into two separate parts:

  • Part 1: Find the dominant plane \(P\) from a given 3D point cloud. This subproblem can be solved by the RANSAC Algorithm.
  • Part 2: Find all other planes contained in the 3D point cloud. This subproblem can be addressed by iteratively repeating two steps: (1) remove all the points on the plane \(P\) obtained from Part 1 from the point cloud, and (2) rerun the RANSAC algorithm on the remaining points.

The RANSAC Algorithm

An Introduction to RANSAC

The RANSAC algorithm is a learning technique that uses random sampling of observed data to best estimate the parameters of a model.
RANSAC divides data into inliers and outliers and yields estimates computed from a minimal set of inliers with the greatest support. Each data point is used to vote for one or more models; thus, RANSAC uses a voting scheme.

Workflow & Pseudocode

RANSAC achieves its goal by repeating the following steps:

  • Sample a small random subset of data points and treat them as inliers.
  • Take the potential inliers and compute the model parameters.
  • Score how many data points support the fitting. Points that fit the model well enough are known as the consensus set.
  • Improve the model accordingly.

This procedure repeats a fixed number of times, each time producing a model that is either rejected because of too few points in the consensus set or refined according to the corresponding consensus set size.

The pseudocode of the algorithm is given below:

function dominant_plane = ransac(PC)
    n := number of points defined in point cloud PC

    max_iter := max number of iterations

    eps := threshold value to determine data points that are 
           fit well by model (a 3D plane in our case)

    min_good_fit := number of nearby data points required to 
                    assert that a model (a 3D plane in our 
                    case) fits well to data

    iter := 0, the current iteration counter

    best_err := measure of how well a plane fits a set of 
                points. Initialized with a large number

    % store dominant plane information
    % planes are characterized as [a,b,c,d], where
    % ax + by + cz + d = 0
    dominant_plane = zeros(1,4);

    while iter < max_iter
        maybeInliers := 3 randomly selected points from PC
        maybePlane := a 3D plane fitted to maybeInliers
        alsoInliers := [] 
        for every point in PC not in maybeInliers
            if point_to_model_distance < eps
                add point to alsoInliers
            end if
        end for

        if number of points in alsoInliers is > min_good_fit
            % the model (3D plane) we find is accurate enough

            % this betterModel can be found by solving a least 
            % square problem.
            betterPlane := a better 3D plane fits to all 
                        points in maybeInliers and alsoInliers
            err := sum of distances from the points in 
                   maybeInliers and alsoInliers to betterPlane
            if err < best_err
                best_err := err
                dominant_plane := betterPlane
            end if
        end if

        iter = iter + 1;
    end while
end
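
Two of the steps above are left abstract in the pseudocode: fitting a plane to the three sampled points, and refining the plane against all inliers by least squares. A minimal MATLAB sketch of both subroutines might look like the following (the function names and conventions are ours, not part of the pseudocode):

% Sketch: fit a plane ax + by + cz + d = 0 through three sampled points.
% P3 is a 3x3 matrix whose rows are the three sampled points.
function coeff = fit_plane_3pts(P3)
    n = cross(P3(2,:) - P3(1,:), P3(3,:) - P3(1,:)); % plane normal
    n = n / norm(n);                                 % normalize
    d = -dot(n, P3(1,:));                            % offset so the first point lies on the plane
    coeff = [n, d];
end

% Sketch: refine the plane against all inliers (maybeInliers + alsoInliers)
% by least squares. P is an m x 3 matrix of inlier points.
function coeff = fit_plane_lsq(P)
    c = mean(P, 1);                  % centroid of the inliers
    [~, ~, V] = svd(P - c, 'econ');  % SVD of the centered points
    n = V(:, 3)';                    % direction of least variance = plane normal
    coeff = [n, -dot(n, c)];
end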

Detect More Planes

As mentioned before, to detect more planes, we may simply remove the points that are on the dominant plane \(P\) of the point cloud, according to what is suggested by the RANSAC algorithm. After that, we can reapply the RANSAC algorithm to the remaining points, and we get the second dominant plane, say \(P'\), of the given point cloud. We may repeat the process iteratively and detect the planes one by one until the remaining points cannot form a valid plane in 3D space.

The following code provides a more concrete description of the iterative process.

% read data from .off file
[PC, ~] = readOFF('data/off/330.off');

% A threshold eps is set to decide if a point is on a detected 
% plane. If the distance from a point to a plane is less than 
% this threshold, then we consider the point is on the plane.
eps = 2e-3;

while size(PC, 1) > 3
    % in the implementation of ransac, if the algorithm cannot 
    % detect a plane using the existing points, then the 
    % coefficient of the plane (ax + by + cz + d = 0) is a 
    % zero vector.
    curr_plane_coeff = ransac(PC);
    
    if norm(curr_plane_coeff) == 0
        % remaining points cannot lead to a meaningful plane.
        % exit loop and stop algorithm
        break;
    end

    % At this point, one may add some code to visualize the 
    % detected plane.

    % return the indices of points that are on the dominant
    % plane detected by RANSAC algorithm
    [~, idx] = pts_close_to_plane(PC, curr_plane_coeff, eps);

    % remove the points on the dominant plane.
    PC(idx,:) = [];
end
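
The helper pts_close_to_plane is not shown above; a sketch of what such a helper could look like, using the same [a,b,c,d] plane convention, is given below (the exact interface in our implementation may differ):

% Sketch of a pts_close_to_plane-style helper: return the points of PC whose
% unsigned distance to the plane ax + by + cz + d = 0 is below eps.
function [pts, idx] = pts_close_to_plane(PC, coeff, eps)
    n = coeff(1:3);                        % plane normal [a, b, c]
    d = coeff(4);                          % plane offset
    dist = abs(PC * n' + d) / norm(n);     % point-to-plane distances
    idx = find(dist < eps);                % indices of points on the plane
    pts = PC(idx, :);                      % the points themselves
end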

Results

The RANSAC algorithm gives good results on 3D point clouds whose planes are made up of a reasonably large number of points. The full implementation is available on SGI’s Github. This method is not able to detect curves or very small planes. The data we used to generate these examples are the Mechanical objects (Category 17) from this database.

Successful Detections

Mech. Data Model Number | Model Object | RANSAC Plane Detection Result
330 | (model image) | (detection result image)
331 | (model image) | (detection result image)
In the third column, red dots mark points on the most dominant plane, blue dots mark points on the second most dominant plane, and green dots mark points on the least dominant plane.

Unsuccessful Detections

Mech. Data Model Number | Model Object | RANSAC Plane Detection Result
332 | (model image) | (detection result image)
338 | (model image) | (detection result image)
In the third column, red dots mark points on the most dominant plane, blue dots mark points on the second most dominant plane, and green dots mark points on the least dominant plane. White dots are points that the RANSAC algorithm could not assign to any plane.

Limitations and Future Work

As one may observe from the successful and unsuccessful detections, the RANSAC algorithm is good at detecting planar surfaces. However, the algorithm usually fails to detect a curved surface, because we fit the three randomly selected points using a plane equation as our base model. If we changed this model to a curved surface, then theoretically speaking, the RANSAC algorithm should be able to detect curved surfaces as well. We may apply a similar fix if we want to perform line detection.

Moreover, we may notice that the majority of surfaces in our test data can be represented easily by an analytical formula. However, this pre-condition is, in general, not true for an arbitrary point cloud. In that case, the RANSAC algorithm is unlikely to output an accurate detection result, which calls for extra work.

Categories
research

Orienting Point Clouds

By Tiago Fernandes, Anna Krokhine, Daniel Scrivener, and Qi Zhang

During the first week of SGI, our group worked with Professor Misha Kazhdan on a virtually omnipresent topic in the field of geometry processing: algorithmically assigning normal orientations to point clouds.

Introduction

While point clouds are the typical output of 3D scanning equipment, triangle meshes with connectivity information are often required to perform geometry processing techniques. In order to convert a point cloud to a triangle mesh, we first need to orient the point cloud — that is, assign normals \(\{n_1, … n_n\}\) and signs \(\{\alpha_1, … \alpha_n\}\) to the initially unoriented points \(\{p_1, … p_n\}\).

To find \(n_i\), we can locally fit a surface to each point \(p_i\) using its neighborhood \(N(i)\) and use its uniquely determined normal. More difficult, however, is the process of assigning consistent signs to the normals. In order to evaluate the consistency of the signs, we have the following energy function, which we minimize over \(\alpha\):

\(E(\alpha)=\alpha^T\cdot \tilde{E}\cdot \alpha\)

Where \(\alpha \in \{-1, +1\}^n\) is the vector of signs and \(\tilde{E}_{ij} = \begin{cases} \frac{\langle N_i(p_i), N_i(p_j)\rangle \cdot \langle N_j(p_i), N_j(p_j) \rangle}{2} & i \sim j \\ 0 & \text{otherwise}\end{cases}\).

Two neighboring points, labelled pi and pj, have their normal orientations determined by the mapping of each normal onto the other point’s best-fit surface.

As shown in the diagram, \(N_i(p_j)\) is the normal assigned to \(p_j\) by the local surface fitted at \(p_i\). Greater values of \(\tilde{E}_{ij}\) indicate more consistent normal orientations, and thus we encourage neighboring points to have similar orientations.

Our goal is to obtain a set of signs minimizing this energy function, which is difficult to calculate on the full point set at once. To simplify the process, we looked at algorithms built on a hierarchical approach: we solve the problem first on small clusters of points, recursively building larger solutions until we’ve arrived at the initial problem.

Within this general algorithmic structure, we explored different possibilities as to the surfaces used to represent each cluster of points locally. Prof. Kazhdan had previously tried two approaches, one linear and the other quadratic. The linear approach assigns best-fit planes to the neighborhood around each point, whereas the quadratic approach uses general quadratic functions of best fit.

While these techniques provided a good foundation, we identified several failure cases upon experimenting with various point clouds. The linear approach works well on simple faces, but struggles to maintain consistent orientation around sharp features like edges. This is best demonstrated with the flower pointset, where the orientations of the normals are inverted in some clusters. The quadratic approach does better with fitting to distinctive features but overfits to what should be simpler surfaces, producing some undesirable outcomes such as the flower and vase seen below.

flower.xyz generated with linear fitting
Vase generated with quadratic fitting

Defining New Fits: First Attempt

Given that the linear and quadratic fits presented their own unique shortcomings, it became clear that we needed to experiment with different surface representations in order to find suitable fits for all clusters. Thanks to the abstract nature of our framework, there were few limitations on how the implicit surface needed to be defined; the energy function requires only a method for querying the gradient as well as a means of flipping the gradient’s sign, which leaves many options for implementation.

Our first geometric candidate was the sphere, a shape that approximates corners well due to the fact that it has curvature in all three directions. Additionally, fitting a set of points to a sphere is fairly straightforward: we used the least-squares approach described here, which, in implementation terms, is no more difficult than setting up a linear system and solving it. Unfortunately, the simplest solutions are not always the best solutions…
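
To give a rough idea of how simple this is, the following MATLAB-style sketch sets up and solves the linear system for the algebraic sphere \(x^2+y^2+z^2+Dx+Ey+Fz+G=0\) (this is our own illustration, not the project's C++ code; P is assumed to be an m x 3 matrix of cluster points):

% Sketch: algebraic least-squares sphere fit to an m x 3 point set P.
A = [P, ones(size(P, 1), 1)];          % unknowns are [D; E; F; G]
b = -sum(P.^2, 2);                     % move the quadratic term to the right-hand side
coeff = A \ b;                         % solve in the least-squares sense

center = -coeff(1:3)' / 2;                   % sphere center
radius = sqrt(sum(center.^2) - coeff(4));    % sphere radius
% The gradient of the implicit sphere at a point p is proportional to p - center,
% which is the kind of gradient query the orientation framework needs.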

flower.xyz generated with spherical fitting

The spherical fitting wasn’t able to deal with sharp edges as expected, incorrectly matching the normal orientations when merging clusters close to those edges. It also generated some noise across the surface. But the process of designing the spherical fit gave us a solid idea on how to model other surfaces (and also helped to demystify some esoteric C++ conventions).

Parabolic Fit

We’ve established that since the surface may curve, a linear fit can be unfaithful to the features of the original pointset. Conversely, the generic quadratic fit has the potential to produce undesirable surface representations such as hyperboloids or ellipsoids. To this end, we introduced a parabolic fit, which improves upon the linear fit via the addition of a parameter.

By constructing a local coordinate frame where the normal from the linear fit serves as our z-axis, we were able to fit \(z=a(x^2 + y^2)\) with energy \(E_{loss} = \sum\limits_{i} (z_i - a(x_i^2+y_i^2))^2\). From this we obtained \(a = \frac{\sum_i z_i (x_i^2+y_i^2 ) }{\sum_i(x_i^2+y_i^2 )^2}\) yielding an \(a\) that minimizes this energy. By observing that \(z_i=\langle \vec{p_0 p_i} ,n \rangle, x_i^2+y_i^2=||\vec {p_0 p_i} \times n||^2\), we derived a formula that is easier to represent programmatically: \(a = \frac{\sum_i \langle \vec{p_0 p_i} ,n \rangle ||\vec {p_0 p_i} \times n||^2 }{\sum_i||\vec {p_0 p_i} \times n||^4}\).

We calculated the gradient in a similar manner. Since locally \(\nabla ( z - a(x^2+y^2)) = (-2ax,-2ay,1) = -2a(x,y,0) + (0,0,1)\), then \(\nabla ( z - a(x^2+y^2)) = -2a\cdot \mbox{Proj}(\vec{p_0 p_i}) + \vec{n}\), where \(\mbox{Proj}( )\) projects vectors into the plane obtained in the original fit, which is written in a more code-friendly manner as \(\mbox{Proj}(\vec{x}) = \vec{x} - \langle \vec{x}, \vec{n} \rangle \vec{n}\). Finally, the gradient could be expressed as \(\nabla ( z - a(x^2+y^2)) = -2a(\vec{x} - \langle \vec{x}, \vec{n} \rangle \vec{n}) + \vec{n}\).
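
The two formulas above translate almost directly into code. A compact MATLAB-style sketch (again ours, with p0 the cluster base point, n the unit normal from the linear fit, P the m x 3 neighborhood, and q a query point) is:

% Sketch: parabolic fit z = a*(x^2 + y^2) in the local frame defined by (p0, n).
D  = P - p0;                                            % vectors from p0 to each p_i
z  = D * n';                                            % z_i = <p0p_i, n>
r2 = sum(cross(D, repmat(n, size(D, 1), 1), 2).^2, 2);  % x_i^2 + y_i^2 = ||p0p_i x n||^2
a  = sum(z .* r2) / sum(r2.^2);                         % closed-form minimizer of E_loss

% Gradient of z - a*(x^2 + y^2) at the query point q:
x    = q - p0;                   % vector from p0 to q
proj = x - (x * n') * n;         % projection onto the plane from the linear fit
grad = -2 * a * proj + n;        % implicit-surface gradient (unnormalized normal)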

Our implementation of the parabolic fit generally works better than the linear fit, correcting for some key errors. It works particularly well on smooth, curved point clouds.

As for the flower dataset, however, the performance was still sub-par. When we examined some of the parabolic fits using a tool to visualize the isosurface, we found the following problem with some of the normals:

The normal of the best-fit plane doesn’t align with the best-fit parabola.

In this example, \(\vec{p_0 \bar p}\) would serve as a better normal vector than \(\vec{n}\), so we decided to implement a “choice” between the two. To avoid swapping the normal on flat point clouds (where the original normal serves its purpose quite well) we determined a threshold to handle this. Intuitively, a point cloud encoding an edge will yield a larger \(||\vec{p_0 \bar p}||\):

Two cases: one in which the cluster encodes an edge (left), and one in which the cluster encodes a plane (right)

To obtain a good threshold value, we used the local radius \(d_{max} = \max _{i} ||\vec{p_0 p_i}||\) and then picked a \(c\) that seems to work with our dataset (related to the size of each neighborhood), such that when \(||\vec{p_0 \bar p}|| < d_{max}/c\), we can assume the cluster is flat, whereas when \(||\vec{p_0 \bar p}||\ge d_{max}/c\) we expect it to be sharp.

Another comparison of the two cluster cases (angular versus flat local geometry). Note the relative sizes of p_bar and dmax.
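
In code, the switch boils down to comparing a couple of norms; a sketch (with P the neighborhood, p0 its base point, n the current normal, c the hand-tuned parameter, and \(\bar p\) taken to be the neighborhood centroid) might read:

% Sketch: decide whether to replace the planar normal with the direction p0 -> p_bar.
p_bar = mean(P, 1);                          % neighborhood centroid
d_max = max(sqrt(sum((P - p0).^2, 2)));      % local radius max_i ||p0p_i||
if norm(p_bar - p0) >= d_max / c
    n = (p_bar - p0) / norm(p_bar - p0);     % sharp cluster: use p0 -> p_bar as the normal
end                                          % otherwise keep the normal from the linear fit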

In practice, this idea works well on flower.xyz when \(c \ge 12\) — the parameter is rather sensitive and can yield poor results on some point clouds if not tuned properly.

flower.xyz with parabolic fitting

Hybrid Methods: The Associative Fit

The parabolic fit provided a lot of hope as to how we could definitively address large variations in curvature. Recalling that the quadratic fit worked well for some types of clusters and that the linear fit worked well for others, the advantages of somehow being able to merge the two methods by leveraging both of their strengths had become apparent. This observation is what ultimately gave rise to the “associative” fit, which

  1. constructs two different fits for each cluster, and
  2. chooses the best fit by comparing the performance of the first fit to a user-specified threshold value.

The first fit’s performance is assessed according to a standard statistical measurement known as R-squared, which tells us, as a percentage, how much of the point cluster’s variation is explained by the chosen fit (or simply put, the correlation between the fit and the pointset). This was straightforward enough to implement with respect to our linear fit, which always serves as the first fit that AssociativeFit attempts to query. We can completely disable the linear fit when its R-squared value does not meet the threshold for inclusion, allowing us to switch to a better fit (quadratic, parabolic) for clusters where the observed variance is too great for a planar representation. As it turns out, clusters with high variance are likely to encode geometries with high curvature, which makes the use of a higher-degree fit appropriate.

Many of our best results were obtained using the associative fit, and we were pleased to see clear improvements over the original single-fit approaches. The inclusion of a threshold parameter does necessitate a lot of hands-on trial and error by the user — we’d eventually like to see this value determined automatically per-pointset, but the optimization problem involved in doing so is particularly challenging given that each point cloud has its own issues related to non-uniform sampling and density.

flower.xyz generated with associative (linear, quadratic) fitting

Quantitative Performance, Distance Weighting, and Other Touch-Ups

Aside from defining new implicit surfaces, we made a lot of other miscellaneous changes to the existing code base in order to improve our means of assessing each fit’s performance. We also wanted to provide the user with as much flexibility as possible when it came to manual refinement of the results.

At the beginning, when it came time to analyze our results, we relied heavily upon the disk-based visualization tool used to generate the images in this blog post. We also explored geometry processing techniques (mainly Poisson reconstruction) that make extensive use of the generated normals, assessing their performance as a proxy for the success of our method. These tools, however, provide only a qualitative means of assessment, essentially boiling down to whether a result “looks good.”

Fortunately, many of the point clouds in our dataset came with pre-assigned “true” normals that were then discarded by our method as it attempted to reconstruct them. We could easily use these normals as a point of comparison, allowing us to generate two metrics: similarity of normal lines (absolute value of the dot product, orientation-agnostic) and similarity of orientations (sign of the dot product). These statistics not only helped us with our debugging, but serve as natural candidates for any future optimization process that compares multiple passes of the algorithm.
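
For reference, both metrics take only a few lines; given the ground-truth normals N_true and our estimated normals N_est (both n x 3 with unit rows), a sketch of the idea, rather than the exact code we used, is:

% Sketch: compare estimated normals against the pre-assigned "true" normals.
d = sum(N_true .* N_est, 2);              % per-point dot products
line_similarity = mean(abs(d));           % orientation-agnostic agreement of the normal lines
orientation_agreement = mean(d > 0);      % fraction of points with a consistent sign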

The last main improvement to cover is the introduction of “weights” in the construction of each fit, or attenuating values that attempt to diminish the contribution of points farthest away from the mean. Despite having been implemented before we even touched the codebase, the weights were not used until we enabled them (previously, each fit received a weighting function that always returned a constant value rather than one of these distance-based weights). Improvements were modest in some cases and quite noticeable in others, as indicated by our results.

Pulley generated without distance weighting
Pulley generated with distance weighting

Conclusion

The problem of orienting point clouds is NP-hard in its simplest formulation, and a good approach for this task needs to take advantage of the geometric properties and structure of point clouds. By implementing more complex surface representations, we were able to significantly improve the results obtained across the 17 point clouds used for testing. The associative & parabolic fits in particular produced promising results: both were able to preserve the orientations of clusters across sharp edges and smooth surfaces alike.

Several obstacles remain with regard to our approach. The algorithm may perform poorly on especially dense point clouds or point clouds with non-manifold geometry. Additionally, our new surface representations are sensitive to manually-specified parameters, preventing this method from working automatically. Fortunately, the latter issue presents an opportunity for future work with optimization.

It was exciting to start off SGI working on a project with so many different applications in the field of geometry processing and beyond. The merit of this approach lies primarily with the fact that it provides an out-of-the box solution: once you compile the code, you can start experimenting with it right away (there are no neural networks to train, for example). We’re hopeful that this work will only continue to see improvements as adjustments are made to the various surface implementations (and there may be a part II on that front — stay tuned!)

Categories
research

Using Gray-Scott Reaction-Diffusion Model to Create Spots (and Stripes) on 3D Spot

By SGI Fellow Natalie Patten

The Gray-Scott reaction-diffusion model consists of two partial differential equations: $$\frac{\partial U}{\partial t}=D_u\nabla^2U-UV^2+F(1-U)$$ $$\frac{\partial V}{\partial t}=D_v\nabla^2V+UV^2-(F+k)V.$$ Using a semi-implicit approach, each \(U^{i+1}\) can be calculated as follows: \((I -{\Delta}tD_uL)U^{i+1}=U^i + {\Delta}t (F(1-U^i) - U^iV^iV^i)\). Here, \(I\) is the identity matrix, \({\Delta}t\) is the time step, \(D_u\) is the diffusion rate, \(L\) is the Laplacian of the mesh, \(F\) is the feed rate, and \(k\) is the degrading rate. Similarly, \(V^{i+1}\) can be calculated as follows: \((I -{\Delta}tD_vL)V^{i+1}=V^i + {\Delta}t (U^iV^iV^i-(F+k)V^i)\). Using Eigen's SimplicialLDLT solver, which uses Cholesky decomposition, we can solve for each:

Eigen::SparseMatrix<double> temp1 = (I - timeStep * du * L).eval();
Eigen::SparseMatrix<double> temp2 = (I - timeStep * dv * L).eval();

Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver1;
Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver2;

solver1.compute(temp1);
solver2.compute(temp2);

// Semi-implicit time stepping: diffusion is handled implicitly by the
// prefactored solvers, while the reaction terms are treated explicitly.
for (int i = 0; i < numSteps; i++) {
   Eigen::VectorXd UVV = U.array() * V.array() * V.array();
   Eigen::VectorXd FU = F.array() * U.array();
   U = solver1.solve((U + timeStep * (F - FU - UVV)).eval());
   Eigen::VectorXd kV = k.array() * V.array();
   Eigen::VectorXd FV = F.array() * V.array();
   V = solver2.solve((V + timeStep * (UVV - kV - FV)).eval());
}

U was initialized to be 1.0 everywhere, and V was initialized to be 0.0 everywhere except in one small area where it was initialized to be 1.0. Du and Dv were set to be 1.0 and 0.5, respectively. I then modelled this onto a 3D mesh of Spot the cow using Polyscope with a time step of 1 and various F and k values, based on parameters found in Pearson’s “Complex Patterns in a Simple System” (1993). Below is a summary of the initial results. You can see various patterns formed by the Gray-Scott model.

F = 0.024 and k = 0.06 at 10,000 steps:

F = 0.04 and k = 0.06 at 10,000 steps:

F = 0.05 and k = 0.06 at 10,000 steps:

F = 0.03 and k = 0.056 at 10,000 steps:

F = 0.044 and k = 0.063 at 10,000 steps:

Acknowledgments:

  • Thank you to my project partners Mariem Khlifi and Gimin Nam for help with (lots of!) debugging.
  • Thank you to my project mentor Etienne Vouga and project TA Erick Jimenez Berumen for guidance, debugging, and resources.
Categories
tutorial week

Redesigning the Playing Cards

Selection of Cards from MathWorks’ Playing Cards…. and the SGI 2022 Mug.

For my first SGI blog, I have decided to combine what we have learned during the second day of the tutorial week given by the amazing Silvia Sellán to get an alternative version of the playing cards that we, fellows, have received in the swag package. The code can be found on SGI’s GitHub.

The cards are nicely designed already, so the goal is not to get a superior design but to put the knowledge we have collected into practice.

If you are curious about how the design was made by Mathworks, you can consult this website where they provide an explanation of the code that went into building the patterns. To briefly summarize the concept, the patterns were made using a Penrose Tiling, an aperiodic tiling that covers a surface with shapes such as triangles and other polygons. The Penrose Tiling is more distinctive on larger surfaces, where the repetition of the tiles creates a recognizable pattern.

Penrose tiling as obtained after running the steps in the Playing Cards blog.

The tiling in the deck of playing cards is much simpler than what the Penrose Tiling usually ends up looking like because the number of tiles used to divide the shapes is very small. The aim is to recreate a simple pattern using triangulation principles that we have covered during the tutorial week.

The steps making up our approach, which we detail in this post, can be summarized in the pipeline below:

Pipeline

The idea used to recreate the cards is based on two commands, the drawing tool from gptoolbox and the triangulation tool:

[V,E,cid] = get_pencil_curves(1e-6); (1)

[U,G] = triangle(V,E,[],'Flags','-q30'); (2)

To recreate the style of the playing cards, we will need to transfer the drawing obtained in (1) to a polyline-based design. The drawing from get_pencil_curve is never straight, so we have to detect the existing segments and estimate their positions.

To achieve this polyline-based design, we estimate the positions of the segment endpoints and their connections. In other words, to recreate the polylines, we need to find the points that have corner properties and build the connections between these points.

To detect these corner points, we can use the script find_corners.m. The principle is simple: a corner point is a point \(x_{i}\) where the sign of the derivative changes when measured between \(x_{i}\)’s predecessor \(x_{i-1}\) and successor \(x_{i+1}\). The edge cases of vertical lines in the drawing, which yield ±Inf derivatives, and of horizontal lines, which have a zero derivative along the segment, are also accounted for.

\(x_{i}\) is a corner with a change in the slope’s sign.

Points that satisfy this change in the slope’s sign will be considered corner candidates. At the end of the corner detection process, a pruning occurs to discard points that are very close to each other. Once we have these corner points, the polylines can be immediately seen as the connection between two consecutive corners.
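
A stripped-down sketch of this corner test (ignoring the vertical and horizontal edge cases as well as the pruning step, both of which find_corners.m handles) could look like this, with V the n x 2 vertex list of one polyline:

% Sketch: flag vertices of a polyline V (n x 2) where the slope changes sign.
is_corner = false(size(V, 1), 1);
for i = 2:size(V, 1) - 1
    s_prev = (V(i,2)   - V(i-1,2)) / (V(i,1)   - V(i-1,1));   % slope to the predecessor
    s_next = (V(i+1,2) - V(i,2))   / (V(i+1,1) - V(i,1));     % slope to the successor
    if sign(s_prev) ~= sign(s_next)                           % sign change => corner candidate
        is_corner(i) = true;
    end
end
corners = V(is_corner, :);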

Curvatures also naturally trigger this change in the slope’s sign. We do, however, need to distinguish between an intentional curvature and an unintentional one: hand-made drawings produced with the get_pencil_curve tool inherently have lines with varying degrees of curvature. This is solved by fixing a derivative threshold that discards very small variations.

When the polyline-based design is ready, we can run the triangulation tool as in (2) to obtain our resulting shapes.

In the pictures below, we have collected the results from our algorithm which generated triangulated polyline versions of shapes created by users. The results are not perfect due to many irregularities in the original drawing. These irregularities stem from the low quality of the drawings when using a mouse instead of an actual pen.

A queen, a spade, a heart and a diamond before and after triangulation.

Some interesting observations can be made from the results of the heart and spade shapes: the algorithm converts their curved lines into two straight lines that meet at a point with corner properties, which validates that strong curvatures are properly detected while the unavoidable small ones are discarded.

And with that, we can finally say that we have our own “artsy” playing cards design or maybe this could already be Mathworks’ design in a parallel universe!

Categories
tutorial week

Robustness in Geometry Processing

By SGI fellows Alejandro Pereira and Vivien van Veldhuizen

On the last day of the tutorial week, Nicholas Sharp gave a talk about robustness and debugging in geometry processing. He explained that there are a lot of interesting software tools available for geometry processing, but that it is sometimes hard to make these tools robust to different types of data. This is especially true in geometry processing, where there are many complex data structures such as meshes. When software tools are not robust to different sorts of data, many of them become unusable or produce wrong results. It is therefore important for anyone working in geometry processing to know what kinds of robustness problems you might encounter and how they may be solved.

Five Common Problems

Nicholas mentioned five of these common problems:

  • Floating point precision: floating point numbers are not as precise as real mathematical numbers. This can lead to inaccurate data representations, and the errors only pile up as we keep computing with the less precise floating point numbers.
  • Choice of solver: we often use numerical solvers in geometry processing, but some types are much faster and more reliable (such as dense linear solvers) than others (for example quadratic or nonlinear solvers).
  • “Bad” meshes: preloaded meshes could, for example, have orientation problems, contain overlapping vertices, or be nonmanifold.
  • Algorithms and inputs not designed together: even if the meshes are all structured correctly, a given algorithm might work on one mesh but not the other when we run simulations on them. This is because in visual computing, algorithms and inputs (in this case our meshes) are not designed together. Instead, algorithms are just a part of a pipeline and are expected to work on any input data, which is of course not always feasible in practice.
  • Machine learning for geometry processing: this is often marketed as an alternative to classic geometry processing, but machine learning methods often still contain a lot of geometry processing operations under the hood, so we might still run into any of the problems mentioned above, which might also impact our machine learning results.

To show us what some of these problems might look like in practice, Nick gave us two Matlab “puzzles” that we had to solve. Each of these Matlab exercises had some sort of robustness problem and our goal was to find out what these problems were and solve them.

Puzzle 1

In the first exercise, we want to plot a simple mesh. Nothing too complicated right?
Once we have a matrix of vertices V and a matrix for the faces F, it’s just a simple case of running:

>> tsurf(F, V)

However, we get an error:

Error using patch
Value must be of numeric type and greater than 1.

Error in trisurf (line 95)
h = patch('faces',trids,'vertices',[x(:) y(:) z(:)],'facevertexcdata',c(:),...

Error in tsurf (line 132)
t_copy = trisurf(F,V(:,1),V(:,2),V(:,3));

So, what happened? MATLAB told us the error comes from the patch function, so we call patch directly to see what is happening:

>> patch('Faces', F, 'Vertices', V)
Error using patch
Value must be of numeric type and greater than 1.

Because patch only uses V and F, the problem must be in either one of those variables. What if we explicitly give it the X, Y, and Z coordinates of the vertices to plot?


>> patch('XData', V(:,1), 'YData', V(:,2), 'ZData', V(:,3), 'Facecolor', 'Blue')
A bad-looking duck

Yes, progress! We plotted something, and we can see that it is a duck. The vertex matrix seems to be correct, so perhaps it is the face matrix F that is the problem.

Remember the error in the patch function? “Value must be greater than 1.” Well, if you come from a different coding background such as Python, you know that arrays start at index=0. However, this is not the case in MATLAB, where indices start at 1. This means we just need to add 1 to F to fix the problem.

>> F= F+1
>> tsurf(F,V)
The correctly loaded mesh of a duck

Success! We have a lovely duck. Alternatively, we could have used tsurf(F+1,V) and it would also work.


Sometimes errors and warnings may be a little intimidating, but if we go step by step from the first error, we can find that the source is something as simple as changing a 0 to a 1.

Puzzle 2

For the next exercise, our objective was to compute the geodesic distances along the surface of a ‘meshified’ cow.

For that we will use the 'heat_geodesic' function. In this case, running the function results in the following error:

>> dists = heat_geodesic(V,F,1);
Warning: Matrix is singular, close to singular or badly scaled.
Results may be inaccurate. RCOND = NaN.
> In min_quad_with_fixed/solve (line 409)
In min_quad_with_fixed (line 103)
In heat_geodesic (line 199)

Here the problem is a little more complicated: in our quadratic solver, one of the matrices is singular. This means, among several other equivalent propositions, that the determinant of said matrix is 0.

For now, let’s try to visualize what the mesh looks like using tsurf(F,V):

Something is wrong with this mesh…

At first glance, there seems to be nothing wrong. The trick is to realize that some of the vertices in the above mesh are too close to each other. This results in the matrix becoming singular.
One way to solve this is to merge vertices that are closer than a certain threshold. In this case, a threshold of 0.0001 m is enough to remove 120 vertices. The result:

Correct rendering of Spot the cow

Finally, we get the desired result!
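
In code, one way to perform this merge is with gptoolbox's remove_duplicate_vertices; the sketch below shows the idea (we are recalling the interface from memory, so double-check the signature before using it):

% Sketch: merge vertices closer than the 1e-4 threshold, remap the faces, and retry.
[SV, ~, SVJ] = remove_duplicate_vertices(V, 1e-4);
SF = SVJ(F);                           % remap face indices to the merged vertex list
dists = heat_geodesic(SV, SF, 1);      % the solver no longer sees a singular matrix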

In conclusion: debugging is not only important but also fun! Spending time trying to understand the problem, creating our own hypothesis of what the output should be, and then testing them is what science is all about.

Categories
tutorial week

Fixing a bug in png2poly from gptoolbox

Many of the SGI fellows experienced an error using the png2poly function from the gptoolbox: when applied to the example picture, the function throws an indexing error and in some cases makes Matlab crash. So, I decided to investigate the code.

The function png2poly transforms a png image into closed polylines (polygons) and applies a Laplacian smoothing operation. I found out that this operation was returning degenerate polygons, which later in the code crashed Matlab.

But first, how does Laplacian smoothing work?

I had never seen how a smoothing operation works, and I was surprised by how simple the idea of Laplacian smoothing is. It’s an iterative operation: at each iteration, the positions of the vertices are updated using local information, namely the positions of neighboring vertices.

New position of vertices after one iteration (credit: Stanford CS 468).

\[p_i \gets p_i + \frac{p_{i+1} - p_i}{2} + \frac{p_{i-1} - p_i}{2}\]

In polylines, every vertex has 2 neighbors, apart from boundary vertices, to which the operation is not applied. For closed polylines, Laplacian smoothing converges to a single point after many iterations.

The Laplacian matrix \(L\) can be used to apply the smoothing for the whole polyline at once, and a lambda parameter \((0 \leq \lambda \leq 1)\) can be introduced to control how much the vertices are going to move in one iteration:

\[p \gets p + \lambda L p\]
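
For a closed polyline, this update takes only a few lines of MATLAB; the sketch below builds the neighbor averaging directly with cyclic indices (an illustration of the formula, not the png2poly code itself; num_iters is just a chosen iteration count):

% Sketch: Laplacian smoothing of a closed polyline P (n x 2 matrix of vertices).
n_pts     = size(P, 1);
next      = [2:n_pts, 1];                     % cyclic successor indices
prev      = [n_pts, 1:n_pts-1];               % cyclic predecessor indices
lambda    = 0.5;                              % step size, 0 < lambda <= 1
num_iters = 10;                               % number of smoothing passes

for iter = 1:num_iters
    Lp = 0.5 * (P(next, :) + P(prev, :)) - P; % discrete Laplacian applied to the positions
    P  = P + lambda * Lp;                     % p <- p + lambda * L * p
end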

The best thing about Laplacian smoothing is that the same idea pleasantly applies for meshes in 3D! The difference is that in meshes every vertex has a variable number of neighbors, but the same formula using the Laplacian matrix can be used to implement it.

(credit: Stanford CS 468)

For more details on smoothing, check out this slide from Stanford, from which the images in this post were taken. It also talks about ways to improve this technique using an average of the neighbors weighted by curvature.

What about the bug?

The bug was the following: For some reason, the function that converts the png into polygons was generating duplicate vertices. Later in the code, a triangulate function is used on these polygons, and the duplicate vertices by themselves make the function crash. But even worse, when smoothing is applied to a polygon with duplicate vertices, strange things happen. Here’s an example of a square with 5 vertices (1 duplicated); after 3 iterations it becomes a non-simple polygon:

You can try to simulate the algorithm to see that it happens.

Also, the lambda parameter used was 1.0, which was too high, making small polygons collapse or generating new duplicate points, so I proposed 0.5 as the new parameter. For closed curves, Laplacian smoothing will converge to a single point, making the polygon really small after many iterations, which is also a problem for the triangulation function. In most settings, these converged polygons can be erased.

Some other problems were also found, but less relevant to be discussed here. A pull request was merged into the gptoolbox repository removing duplicate points and fixing the other bugs, and now the example used in the tutorial week should work just fine. The changes I made don’t guarantee that the smoothing operation is not generating self-intersection anymore, but for most cases it does.

Fun fact: There’s a technique called sweep line for finding whether a polygon has self-intersections, which works in \(O(n\log n)\), more efficient than checking every pair of edges!

The things I learned

  • Laplacian smoothing is awesome.
  • It’s really hard to build software for geometry processing that works properly for all kinds of input. You need to have complete control over degenerate cases, corner cases, precision errors, … the list goes on. It gets even harder when you want to implement something that depends on other implementations, that may by themselves break with certain inputs. Now I recognize even more the effort of professor Alec Jacobson and collaborators for creating gptoolbox and making it accessible.

Big thanks to Lucas Valença, for encouraging me to try and solve the bug, and to Dimitry Kachkovski, for testing my code.

Categories
tutorial week

A Summary of the Shape Deformation Lecture with an Example

By SGI Fellows Xinwen Ding and Miles Silberling-Cook

The authors of this article appreciate Derek Liu and Eris Zhang for their awesome lectures on the third day of SGI. The rest of this summary is based on what we learned from their lectures.

Introduction

One of the motivations for shape deformation is the desire to reproduce the motion of a given initial shape as time evolves. Modeling every piece of information associated with the initial shape is hard and often unnecessary, so a simple but nice-looking deformation strategy is needed. During the lecture, SGI fellows were introduced to the two deformation strategies listed below.

  • Energy-based Deformation: This method turns the deformation problem into an optimization problem, where possible objective functions can be Dirichlet Energy (\(E = \int_{M} ||\nabla f||^2 dx\)), Laplacian Energy (\(E = \int_{M} ||\Delta f||^2 dx\)), or Conformal Energy. It is worth mentioning that the nonlinear optimization problems are, in general, difficult to solve.
  • Direct Method: We mainly refer to a method called Linear Blend Skinning (LBS). In this method, we predefine a sequence of movable handles. Furthermore, we assume the movement of every handle will contribute linearly to the final coordinate of our input vertex. Based on this assumption, we write down the equation for calculating the target position: \(u_i = \displaystyle\sum_{j = 1}^{\text{# of handles}} w_{ij} T_j v_i\), where \(v_i, u_i\) are the \(i^{th}\) input and output vertices, respectively, \(w_{ij}\) is the weight, and \(T_j\) is the transformation matrix of the \(j^{th}\) handle recording rotation and translation in terms of homogeneous coordinates.

The remaining part of this summary will focus on the Linear Blend Skinning (LBS) method.

Workflow

The process of a 2D LBS deformation can be divided into three steps:

  • Step 1: Define the handles.
  • Step 2: Compute the weights \(w_{ij}\) and store the transformation matrix \(T_j = \begin{bmatrix}\cos\theta & -\sin\theta & \Delta x \\ \sin\theta & \cos\theta & \Delta y\end{bmatrix}\), where the first two columns of \(T_j\) denote the rotation of the \(j^{th}\) handle, and the last column denotes the shift. Notice that in this case the \(i^{th}\) vertex should be \(v_i = \begin{bmatrix} v_x \\ v_y \\ 1\end{bmatrix}\).
  • Step 3: Put all parts together and compute \(u_i = \displaystyle\sum_{j = 1}^{\text{# of handles}} w_{ij} T_j v_i\).

A concrete example will be given in the final section.

Weight Calculation

Requirements on weights:

To minimize the artifacts in our deformation, we need to be smart when we generate the weights. Briefly speaking, we want our weights to satisfy the following criteria:

  • Smooth: the weight function should be \(C^{1}\) differentiable. This can be achieved by introducing the Laplacian operator.
  • Local: the effective area in a 2D domain should be within some neighborhood of the handle.
  • Shape aware: the effective area in a 2D domain should not cross the boundary of our domain of interest.
  • Sum up to 1.

Let \(\Omega\) be the domain of interest, and \(w\) be the weight. Based on the aforementioned requirements, we can use the following minimization problem to characterize the weights.

\begin{equation}
\begin{aligned}
&\min_{w} \int_{\Omega} || \Delta w||^2 dx \\
&\textrm{s.t.} w(j^{th} \text{handle}) = e_j \hspace{20pt} \text{($e_j$ is a standard basis vector)}
\end{aligned}
\end{equation}

Discretize the Laplacian Operator

One may notice that in the objective function of the above optimization problem, we introduced a Laplacian operator. Luckily, we can play some math tricks to discretize the Laplacian operator, namely \(\Delta = M^{-1}C\), where \(M\) is the diagonal mass matrix, and \(C\) is the cotangent formula for the Laplacian.

With this knowledge in mind and with some knowledge from Finite Element Method, we can rewrite the objective function into a quadratic form.

\[
\int_{\Omega} || \Delta w||^2 dx
= ||M^{-1}Cw||_{M}^2
= (M^{-1} C w)^{T} M (M^{-1} C w)
= w^{T} C^{T} M^{-1} C w
\]

Let \(B = C^{T} M^{-1} C\), then we have \(\int_{\Omega} || \Delta w||^2 dx = w^{T} C^{T} M^{-1} C w = w^{T} B w\). Moreover, we call the matrix \(B\) the bilaplacian or squared Laplacian. Hence, the optimization problem is rewritten as follows:

\begin{equation}
\begin{aligned}
&\min_{w} w^{T} B w \\
&\textrm{s.t.} w(j^{th} \text{handle}) = e_j \hspace{20pt} \text{($e_j$ is a standard basis vector)}
\end{aligned}
\end{equation}

Solution Derivation to the Optimization Problem

Recall that \(w_{ij}\) is the degree to which vertex \(i\) is affected by the motion of handle \(j\). To form our constraint, we pick a vertex for each handle that will only be affected by that handle. This constraint is much easier to express when the fixed weightings are separated from the free weightings, so we re-organize the weightings as \(w = [x, y]\). In this notation, \(x\) contains the free variables for the non-handle vertices, and \(y\) contains the fixed weighting for each handle. The boundary condition can now be concisely stated as \(y = \mathbb{I}\).

We now solve for the \(x\) that minimizes \([x \mid y]^T B [x \mid y]\). Following Derek Liu and Eris Zhang’s lead, we use subscripts to express the indices of \(B\) pertaining to either \(x\) or \(y\).

\begin{align}
E
&= w^T B w \\
&= [x \mid y]^T B [x \mid y] \\
&= [x \mid 0]^T \begin{bmatrix}B_{x,x} & 0 \\ 0 & 0 \end{bmatrix} [x \mid 0]
+ [x \mid 0]^T \begin{bmatrix}0 & B_{x,y} \\ 0 & 0 \end{bmatrix} [0 \mid y]
+ [0 \mid y]^T \begin{bmatrix}0 & 0 \\ B_{y,x} & 0 \end{bmatrix} [x \mid 0]
+ [0 \mid y]^T \begin{bmatrix}0 & 0 \\ 0 & B_{y,y} \end{bmatrix} [0 \mid y] \\
&= x^T B_{x,x} x + x^T B_{x,y} y + y^T B_{y,x} x + y^T B_{y,y} y \\
&=
\underbrace{ x^T B_{x,x} x }_\text{Quadratic}
+ \underbrace{ x^T \left(B_{x,y} + B_{y,x}^T \right) y }_\text{Linear}
+ \underbrace{ y^T B_{y,y} y }_\text{Constant}
\end{align}

As expected, this yields another quadratic equation. Differentiating gives:

\begin{equation}
\frac{\partial E}{\partial x}
= 2 B_{x,x} x + \left(B_{x,y} + B_{y,x}^T \right) y
\end{equation}

and the extremizer is therefore \(x = \left( -2B_{x,x}\right)^{-1}\left(B_{x,y} + B_{y,x}^T \right) y\).

Code Implementation and Acceleration

Implementation

To calculate the solution in Matlab, see the following.

function W = compute_skinning_weight(V,F,b)
% V: Vertices
% F: Triangles
% b: Handle vertex indices

% Compute the Laplacian using the cotangent-matrix.
L = -cotmatrix(V,F);

% Compute the mass matrix.
M = massmatrix(V,F);

% Compute the bilaplacian (stored as Q so it is not overwritten below)
Q = L*(M\L);

% Create a list of non-handle vertices
a = setdiff(1:size(V,1), b);

% Initialize the fixed handle constraint
y = eye(length(b));

% Compute the minimum
A = Q(a,a);
B = (Q(a,b) + Q(b,a).') * y;
x = (-2*A) \ B;

% Assemble the final weight matrix
W(a,:) = x;
W(b,:) = y;
end

Once the weightings are done, we can propagate the transformation of the handles to the mesh (using homogeneous coordinates) as follows:

function [U] = linear_blend_skinning(V, T, W)
% V: Vertices
% T: Handle transforms
% W: Weighting matrix

U = zeros(size(V));

for i=1:size(V,1)

for j=1:size(T,3)
U(i,:) = U(i,:) + ( W(i,j) * [V(i,:) 1] * T(:,:,j).' );
end

end

end

Acceleration

Although the solution above guarantees correctness, this is a fairly inefficient implementation. Some of this computation can be done offline, and the nested loops are not required. The blending weights and the vertex positions can be compiled into a single matrix ahead of time, like so:

function C = compile_blending_weights(V, N, W)
% V: Vertices
% N: Number of handles
% W: Weighting matrix

for i=1:size(V,1)
v = repmat( [V(i,:), 1], 1, N)';
wi = reshape(repmat(W(i, :), 3, 1), [], 1);
C(:,i) = v.*wi;
end
end

Then propagating the transform is as simple as composing it with the precomputed weightings.

function [U] = linear_blend_skinning(T, C)
% T: Handle transforms
% C: Compiled blending weights

B = reshape(T,[], size(T, 3)*size(T, 2));
U = (B*C)';
end
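
Putting the pieces together, a typical call sequence on a 2D mesh (V, F) might look like the following; the handle indices and the rotation angle are hypothetical values chosen for illustration:

% Sketch: end-to-end usage of the functions above.
b = [10; 250; 781];                            % hypothetical handle vertex indices
W = compute_skinning_weight(V, F, b);          % bilaplacian skinning weights

C = compile_blending_weights(V, numel(b), W);  % precompute the blended positions

theta = pi/8;                                  % same example rotation for every handle
T = repmat([cos(theta) -sin(theta) 0; ...
            sin(theta)  cos(theta) 0], 1, 1, numel(b));

U = linear_blend_skinning(T, C);               % deformed vertex positions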

Shout-out to SGI Fellow Dimitry Kachkovski for providing this extra-efficient implementation!

Example: 2D Gingerbread Man Deformation

In this example, we try to implement the whole method over a gingerbread man domain. Following the define handle – computing weights – combine and transform workflow, we provide a series of video examples.

Step 1: Define handles

By clicking randomly on the mesh, the handles are defined according to the position of the mouse.

Step 2: Computing Weights

The area where the weight function is effective is in some color other than blue.

Step 3: Combine and Transform

Drag the handles and deform the gingerbread man!
Categories
program information

Welcome to SGI 2022!

Welcome to the official blog of the Summer Geometry Initiative (SGI) 2022, taking place July 11-August 19! I’m Justin Solomon, director of SGI 2022 and PI of the MIT Geometric Data Processing Group.

First launched in 2021, SGI is a completely online program engaging a paid cohort of undergraduate and early master’s students in six weeks of training and research experiences related to applied geometry and geometry processing. SGI Fellows come from all over the globe and represent a wide variety of educational institutions, life/career paths, and fields of interest.

SGI aims to accomplish the following objectives:

  • spark collaboration among students and researchers in geometry processing,
  • launch inter-university research projects in geometry processing involving team members across broad levels of seniority (undergraduate, graduate, faculty, industrial researcher),
  • introduce students to geometry processing research and development, and
  • diversify the “pipeline” of students entering geometry processing research, in terms of gender, race, socioeconomic background, and home institution.

SGI aims to address a number of challenges and inequities in geometry processing. Not all universities host faculty whose work touches on this emerging field, reducing the cohort of students exposed to this discipline during their undergraduate careers. Moreover, as with many engineering and mathematical fields, geometry processing suffers from serious gender, racial, and socioeconomic imbalance; by giving a broad set of students access to geometry processing research experiences, over the longer term we hope to affect the composition of the geometry processing community.

SGI is supported by a worldwide network of volunteers, including faculty, graduate students, and research scientists in geometry and related disciplines. This team supports the SGI Fellows through mentorship, instruction, panel discussions, and several other means.

SGI 2022 is due to start in a few days! Each SGI Fellow has been mailed a box of swag from our many sponsors, a certificate, and a custom-made coffee mug designed by SGI 2022 Fellows Enjeck Cleopatra, Mariem Khlifi, and Andrew Rodriguez.

We’ll kick off next week with tutorials in geometry processing led by Oded Stein (MIT), Silvia Sellán (U of Toronto), Hsueh-Ti (Derek) Liu (U of Toronto), Michal Edelstein (Technion), and Nick Sharp (U of Toronto). Then, in the remaining 5 weeks, our Fellows will have the opportunity to participate in multiple short-term (1-2 week) research projects, intended to kick off collaborations that last over the longer term. Check out last year’s SGI blog for examples of the kinds of projects they’ll be working on.

Revisit this blog as the summer progresses for updates on SGI 2022 and to read about the exciting ideas our Fellows are developing in geometry!