Categories: Tutorial week

Ironing Out Wrinkles with Mesh Fairing

Picture your triangle mesh as that poor, crumpled T-shirt you just pulled out of the dryer. It’s full of sharp folds and uneven patches, and you wouldn’t be caught dead wearing it in public—or running a physics simulation! In geometry processing, these “wrinkles” manifest as skinny triangles, janky angles, and noisy curvature that spoil rendering, break solvers, or just look… ugly. We can fix these problems using something geometry nerds call surface fairing.

Figure 1. (Left) Clean mesh. (Middle) Result after applying surface fairing to the damaged mesh. (Right) Mesh damaged by adding Gaussian noise to each vertex.

In essence, surface fairing is an algorithm for smoothing a mesh, and it can be cast as energy minimization. We define an “energy” that measures how wrinkly our mesh is, then use optimization algorithms to slightly nudge each vertex so that the total energy drops.

  1. Spring Energy
    Think of each mesh edge \((i,j)\) as a spring with rest length 0. If a spring is too long, it costs energy. Formally:
    \(E_{\text{mem}} = \frac{1}{2} \sum_{(i,j)\in E} w_{ij} \|x_i - x_j\|^2\), with weights \(w_{ij}\) either uniform or derived from the entries of the discrete Laplacian matrix (using cotangents of opposite angles). We’ll call this term the Membrane Term.
  2. Bending Energy
    Sometimes we want to smooth the mesh further, even once it already looks less wrinkly. In this case, we penalize the curvature of the mesh, i.e., the rate at which the normals change at the vertices:
    \(E_{\text{bend}} = \frac{1}{2} \sum_i A_i \|\Delta x_i\|^2\), where \(\Delta x_i\) is the discrete Laplacian at vertex \(i\) and \(A_i\) is its “vertex area”. For the uninitiated, the idea of a vertex having an area might seem a little weird, but it represents how much surface area a vertex controls and is often computed as one-third of the summed areas of the triangles incident to vertex \(i\). We’ll call this energy term the Laplacian Term.
  3. Angle Energy
    As Nicholas has warned the fellows, long, skinny triangles can sometimes cause numerical instability, so we might like our triangles to be more equilateral. We can additionally penalize deviations from 60° within each triangle by adding another energy term: \(E_{\text{angle}} = \sum_{k=1}^{3} (\theta_k - 60°)^2\), summed over all triangles. We’ll call this the Angle Term. (A code sketch of all three terms follows Figure 2.)
Figure 2. (Left) Damaged mesh. (In order) Surface fairing using only 1) Spring Energy, 2) Bending Energy, 3) Angle Energy. Notice how the last two energies fail to smooth the mesh properly on their own.
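
Here is a minimal sketch of the three terms in PyTorch (chosen so we can later get gradients for free via autograd). All the names are ours, not from any library: we assume vertex positions V (an n×3 tensor), undirected edge indices E (m×2, each edge listed once), face indices F (f×3), and per-vertex areas A (length n). For simplicity it uses uniform weights and a uniform graph Laplacian instead of the cotangent versions.

    import torch

    def membrane_energy(V, E, w=None):
        # E_mem = 1/2 * sum over edges of w_ij * ||x_i - x_j||^2
        d = V[E[:, 0]] - V[E[:, 1]]
        w = torch.ones(len(E)) if w is None else w
        return 0.5 * (w * (d ** 2).sum(dim=1)).sum()

    def bending_energy(V, E, A):
        # E_bend = 1/2 * sum over vertices of A_i * ||Lap(x_i)||^2,
        # with the uniform Laplacian Lap(x_i) = (mean of neighbors) - x_i.
        n, ones = len(V), torch.ones(len(E))
        deg = (torch.zeros(n).index_add(0, E[:, 0], ones)
               + torch.zeros(n).index_add(0, E[:, 1], ones))
        nbr = (torch.zeros_like(V).index_add(0, E[:, 0], V[E[:, 1]])
               + torch.zeros_like(V).index_add(0, E[:, 1], V[E[:, 0]]))
        lap = nbr / deg.unsqueeze(1) - V
        return 0.5 * (A * (lap ** 2).sum(dim=1)).sum()

    def angle_energy(V, F):
        # E_angle = sum over triangles of sum_k (theta_k - 60 degrees)^2,
        # working in radians (60 degrees = pi/3).
        total = V.new_zeros(())
        for k in range(3):  # the angle at corner k of every face
            a, b, c = V[F[:, k]], V[F[:, (k + 1) % 3]], V[F[:, (k + 2) % 3]]
            u, v = b - a, c - a
            cos = (u * v).sum(dim=1) / (u.norm(dim=1) * v.norm(dim=1) + 1e-12)
            theta = torch.acos(cos.clamp(-1.0, 1.0))
            total = total + ((theta - torch.pi / 3) ** 2).sum()
        return total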

Note that in most cases the angle and bending terms are optional, but the spring term is important! (There may be a few special cases where you can make do without the spring energy, but don’t take my word for it.) Once we have computed these energies, we weight them appropriately with scaling factors and sum them up. The next step is to minimize the total. I love Machine Learning and hate numerical solvers; and so, it brings me immense joy to inform you that since the problem is non-convex, we can use gradient descent! At each vertex, compute the gradient \((\nabla_{x} E)_{i}\) and take a tiny step toward the minimum: \(x_{i} \leftarrow x_i - \lambda \cdot (\nabla_{x} E)_{i}\)
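
With the sketch above, one possible fairing loop looks like this; the weights and step size are made-up values you would tune for your mesh, and V_noisy stands for the damaged vertex positions:

    # Hypothetical weights for the three terms, plus the step size lambda.
    alpha, beta, gamma, lam = 1.0, 0.1, 0.01, 1e-3

    V = V_noisy.clone().requires_grad_(True)
    for step in range(500):
        energy = (alpha * membrane_energy(V, E)
                  + beta * bending_energy(V, E, A)
                  + gamma * angle_energy(V, F))
        (grad,) = torch.autograd.grad(energy, V)
        with torch.no_grad():
            V -= lam * grad  # x_i <- x_i - lambda * (grad E)_i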

Now, have a look at our final surface fairing algorithm in its fullest glory. Isn’t it beautiful? Maybe you and I are nerds after all 🙂

Figure 3. (Left) Clean mesh. (Middle) Resulting mesh after each step of gradient descent on the three energy terms combined. (Right) Damaged mesh. Note how the triangles in this case are more equilateral than in the red mesh of Figure 2, because we’ve combined all the energy terms.

So, the next time your mesh looks like you’ve tossed a paper airplane into a blender, remember: with a bit of math and a few iterations, you can make it runway-ready; and your mesh-processing algorithm might just thank you for sparing its life. You can find the code for the blog at github.com/ShreeSinghi/sgi_mesh_fairing_blog.

Happy geometry processing!

References

  • Desbrun, M., Meyer, M., Schröder, P., & Barr, A. H. (1999). Implicit fairing of irregular meshes using diffusion and curvature flow.
  • Pinkall, U., & Polthier, K. (1993). Computing discrete minimal surfaces and their conjugates.

Categories: Research

Topology Control: Training a DeepSDF (1/2)

SGI Fellows:
Stephanie Atherton, Marina Levay, Ualibyek Nurgulan, Shree Singhi, Erendiro Pedro

In the Topology Control project mentored by Professor Paul Kry and project assistants Daria Nogina and Yuanyuan Tao, we sought to explore preserving topological invariants of meshes within the framework of DeepSDFs. Deep Signed Distance Functions are a neural implicit representation used for shapes in geometry processing, but they don’t come with the promise of respecting topology. After finishing our ML pipeline, we explored various topology-preserving techniques through our simple, initial case of deforming a “donut” (torus) into a mug.

DeepSDFs

Signed Distance Field (SDF) representation of a 3D bunny. The network predicts the signed distance from each spatial point to the surface. Source: (Park et al., 2019).

Signed Distance Functions (SDFs) return the shortest distance from any point in 3D space to the surface of an object. Their sign indicates spatial relation: negative if the point lies inside, positive if outside. The surface itself is defined implicitly as the zero-level set: the locus where \(\text{SDF}(x) = 0 \).

In 2019, Park et al. introduced DeepSDF, the first method to learn a continuous SDF directly using a deep neural network (Park et al., 2019). Given a shape-specific latent code \( z \in \mathbb{R}^d \) and a 3D point \( x \in \mathbb{R}^3 \), the network learns a continuous mapping:

$$
f_\theta(z_i, x) \approx \text{SDF}^i(x),
$$

where \( f_\theta \) takes a latent code \( z_i \) and a 3D query point \( x \) and returns an approximate signed distance.

The training set is defined as:

$$
X := \{(x, s) : \text{SDF}(x) = s\}.
$$

Training minimizes the clamped L1 loss between predicted and true distances:

$$
\mathcal{L}\bigl(f_\theta(x), s\bigr)
= \bigl|\text{clamp}\bigl(f_\theta(x), \delta\bigr) - \text{clamp}(s, \delta)\bigr|
$$

with the clamping function:

$$
\text{clamp}(x, \delta) = \min\bigl(\delta, \max(-\delta, x)\bigr).
$$

Clamping focuses the loss near the surface, where accuracy matters most. The parameter \( \delta \) sets the active range.
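
In code, the clamped loss is nearly a one-liner. A minimal PyTorch sketch (the name clamped_l1 is ours; delta corresponds to the SDF Delta entry in the parameter table below):

    import torch

    def clamped_l1(pred, target, delta=1.0):
        # clamp(v, delta) = min(delta, max(-delta, v)), applied to both sides
        return (pred.clamp(-delta, delta)
                - target.clamp(-delta, delta)).abs().mean()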

This is trained on a dataset of 3D point samples and corresponding signed distances. Each shape in the training set is assigned a unique latent vector \( z_i \), allowing the model to generalize across multiple shapes.

Once trained, the network defines an implicit surface through its decision boundary, precisely where \( f_\theta(z, x) = 0 \). This continuous representation allows smooth shape interpolation, high-resolution reconstruction, and editing directly in latent space.
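
To make this concrete, here is a minimal decoder sketch in PyTorch, sized to match the architecture table below (latent dimension 16, 8 layers of width 124). The class name is ours, and for brevity it omits details of the original such as the latent skip connection, weight normalization, and dropout:

    import torch
    import torch.nn as nn

    class DeepSDFDecoder(nn.Module):
        def __init__(self, latent_dim=16, hidden=124, n_layers=8):
            super().__init__()
            dims = [latent_dim + 3] + [hidden] * (n_layers - 1) + [1]
            self.layers = nn.ModuleList(
                nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
            )

        def forward(self, z, x):
            h = torch.cat([z, x], dim=-1)  # condition the query point on the shape code
            for layer in self.layers[:-1]:
                h = torch.relu(layer(h))
            return self.layers[-1](h).squeeze(-1)  # approximate signed distance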

Training Field Notes

We sampled training data from two meshes, torus.obj and mug.obj, using a mix of blue-noise points near the surface and uniform samples within a unit cube. All shapes were volume-normalized to ensure consistent interpolation.

DeepSDF is intentionally designed to overfit, so validation is typically skipped. Effective training depends on a few factors: point-sample density, network size, shape complexity, and a sufficient number of epochs.

After training, the implicit surface can be extracted using Marching Cubes or Marching Tetrahedra to obtain a polygonal mesh from the zero-level set.
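
As a sketch of that extraction step, assuming the hypothetical DeepSDFDecoder above and scikit-image’s Marching Cubes implementation:

    import torch
    from skimage.measure import marching_cubes

    def extract_mesh(decoder, z, res=64, bound=1.0):
        # Sample the learned SDF on a dense res^3 grid over [-bound, bound]^3.
        lin = torch.linspace(-bound, bound, res)
        grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
        pts = grid.reshape(-1, 3)
        with torch.no_grad():
            sdf = decoder(z.expand(len(pts), -1), pts).reshape(res, res, res)
        # Polygonize the zero-level set.
        verts, faces, _, _ = marching_cubes(sdf.numpy(), level=0.0)
        verts = verts / (res - 1) * (2 * bound) - bound  # grid -> world coords
        return verts, faces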

Training Parameters

  • SDF Delta: 1.0
  • Latent Mean: 0.0
  • Latent SD: 0.01
  • Loss Function: Clamped L1
  • Optimizer: Adam
  • Network Learning Rate: 0.001
  • Latent Learning Rate: 0.01
  • Batch Size: 2
  • Epochs: 5000
  • Max Points per Shape: 3000

Network Architecture

  • Latent Dimension: 16
  • Hidden Layer Size: 124
  • Number of Layers: 8
  • Input Coordinate Dim: 3
  • Dropout: 0.0

Point Cloud Sampling

  • Radius: 0.02
  • Sigma: 0.02
  • Mu: 0.0
  • Number of Gaussians: 10
  • Uniform Samples: 5000

For higher shape complexity, increasing the latent dimension or training duration improves reconstruction fidelity.

Latent Space Interpolation

One compelling application is interpolation in latent space. By linearly blending between two shape codes \( z_a \) and \( z_b \), we generate new shapes along the path

$$
z(t) = (1 - t) \cdot z_a + t \cdot z_b,\quad t \in [0,1].
$$

Latent space interpolation between mug and torus.
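
With the sketches above, generating these in-between shapes is just a short loop; z_torus and z_mug stand for the two learned latent codes:

    import torch

    for t in torch.linspace(0.0, 1.0, steps=8):
        z_t = (1 - t) * z_torus + t * z_mug  # z(t) = (1 - t) z_a + t z_b
        verts, faces = extract_mesh(decoder, z_t)
        # ...save or render each intermediate mesh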

While DeepSDF enables smooth morphing between shapes, it exposes a core limitation: a lack of topological consistency. Even when the source and target shapes share the same genus, interpolated shapes can exhibit unintended holes, handles, or disconnected components. These are not mere rendering glitches; they reveal that the model has no built-in notion of topology.

However, this limitation also opens the door for deeper exploration. If neural fields like DeepSDF lack an inherent understanding of topology, how can we guide them toward preserving it? In the next post, we explore a fundamental topological property—genus—and how maintaining it during shape transitions could lead us toward more structurally meaningful interpolations.

References

  • Park, J. J., Florence, P., Straub, J., Newcombe, R., & Lovegrove, S. (2019). DeepSDF: Learning continuous signed distance functions for shape representation.

Categories: Tutorial week

Tutorial Week Day 5: Debugging and Robustness

On July 11th, Nicholas Sharp, the creator of the amazing Polyscope library, gave the SGI Community an insightful talk on debugging and robustness in geometry processing—insights that would later save several fellows hours of head-scratching and mental gymnastics. The talk was broadly organized into five parts, each corresponding to a paradigm where bugs commonly arise:

  1. Representing Numbers
  2. Representing Meshes
  3. Optimization
  4. Simulation and PDEs
  5. Geometric Machine Learning

Representing Numbers

The algorithms developed for dealing with geometry are primarily built to work with clean and pretty real numbers. However, computers deal with floats, and floats sometimes behave badly. Floats are computers’ approximations of real numbers: representing a single real number of arbitrary precision exactly can require an infinite amount of storage (unless you can get away with \(\pi=3.14\)). Each floating-point number is typically represented using either 32 bits (single precision) or 64 bits (double precision).

Figure 1. An example of the float32 representation

In the floating realm, we have quirks like:

  1. \((x+y)+z \neq x+(y+z)\)
  2. \( a>0, b>0\) but \(a+b=a\)
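
Both quirks are easy to reproduce in ordinary double precision, for example in Python:

    x, y, z = 1e16, -1e16, 0.5
    print((x + y) + z)  # 0.5
    print(x + (y + z))  # 0.0 -- y + z rounds back to y, so order matters

    a, b = 1e16, 0.5
    print(a + b == a)   # True -- b is smaller than the float spacing at a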

It is important to note that floating-point arithmetic is not random; it is simply misaligned with real arithmetic. That is, the same operation will consistently yield the same result, even if it deviates from the mathematically expected one. Possible alternatives to floats are integers and binary fraction representations; however, they come with their own obvious limitations.

Who’s your best friend? The NaN! NaN is a special floating-point value computers spit out when they’re asked to perform invalid operations like

  1. \(\frac{0}{0}\rightarrow \) NaN
  2. \(\sqrt{-1} \rightarrow\) NaN (not \(i\) haha)

Every operation against a NaN results in… NaN. Hence, one small slip-up in the code can throw the entire algorithm off course. Well then… why should I love NaNs? Because a screaming alarm is better than a silent error. If your algorithm is supposed to be dealing with positive numbers and it computes \(\sqrt{-42}\) somewhere, you would probably want to know that something went wrong.

Well, what are some good practices to minimize numerical error in your code? (Each is illustrated in the sketch after this list.)

  1. Don’t use equality for comparison, use thresholds like: \(\left| x – x^* \right| < \epsilon\) or \(\frac{\left| x – x^* \right|}{\left| x^* \right|} < \epsilon\)
  2. Avoid transcendental functions wherever possible. A really cool example of this is to avoid \(\theta = \arccos \left( \frac{ \mathbf{u} \cdot \mathbf{v} }{ \left| \mathbf{u} \right| \left| \mathbf{v} \right| } \right)\) and use \(\cot \theta = \frac{ \cos \theta }{ \sin \theta } = \frac{ \mathbf{u} \cdot \mathbf{v} }{ \left| \mathbf{u} \times \mathbf{v} \right| }\) instead.
  3. Clamp inputs to safe bounds, e.g. \(\sqrt{x} \rightarrow \sqrt{\max(x, 0)}\). (Note that while this keeps your code running smoothly, it might convert NaNs into silent errors!)
  4. Perturb inputs to singular functions to ensure numerical stability, e.g. \(\frac{1}{\left| x \right|} \to \frac{1}{\left| x \right| + \epsilon}\)
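
A quick NumPy illustration of those four practices (the values are arbitrary):

    import numpy as np

    eps = 1e-9

    # 1. Threshold comparison instead of equality.
    x, x_star = 0.1 + 0.2, 0.3
    assert abs(x - x_star) < eps        # x == x_star would be False!

    # 2. cot(theta) from dot and cross products, avoiding arccos.
    u, v = np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
    cot_theta = np.dot(u, v) / np.linalg.norm(np.cross(u, v))

    # 3. Clamp inputs to safe bounds.
    val = -1e-18                        # "should" be 0, but isn't quite
    root = np.sqrt(max(val, 0.0))       # avoids a NaN

    # 4. Perturb a singular function.
    inv = 1.0 / (abs(val) + eps)        # stays finite as val -> 0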

Some easy solutions include:

  1. Using a rational arithmetic system, as mentioned before. Caveats include: 1) no transcendental operations 2) some numbers might require very large integers to represent them, leading to performance issues, in terms of memory and/or speed.
  2. Use robust predicates for specific geometric tests (e.g., orientation and in-circle checks); their implementations are designed around the common floating-point problems

Representing Surface Meshes

Sometimes, the issue lies not with the algorithm but with the mesh itself. However, we still need to ensure that our algorithm works seamlessly on all meshes. Common problems (and solutions) include:

  1. Unreferenced vertices and repeated vertices: throw them out
  2. A mixture of quad and triangular meshes: subdivide, retriangulate, or delete
  3. Degenerate faces and spurious topology: either repair these corner cases or adjust the algorithm to handle these situations
  4. Non-manifold and non-orientable meshes: split the mesh into multiple manifolds or orientable patches
  5. Foldover faces, poorly tessellated meshes, and disconnected components: use remeshing algorithms like Instant Meshes or use Generalized Winding Numbers instead of meshes

Optimization

Several geometry processing algorithms involve optimization of an objective function. Generally speaking, linear and sparse linear solvers are well-behaved, whereas more advanced methods like gradient descent or non-linear solvers may fail. A few good practices include:

  1. Performing sanity checks at each stage of your code, e.g. before applying an algorithm that expects SPD (symmetric positive definite) matrices, check that the matrix you’re passing actually is SPD (see the sketch after this list)
  2. When working with gradient-descent-like solvers, check whether the gradient magnitudes are too large; overly large steps can destabilize convergence
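
For instance, a cheap SPD sanity check for a dense matrix; a Cholesky factorization succeeds exactly when the matrix is symmetric positive definite:

    import numpy as np

    def is_spd(M, tol=1e-10):
        # Check symmetry, then attempt a Cholesky factorization.
        if not np.allclose(M, M.T, atol=tol):
            return False
        try:
            np.linalg.cholesky(M)
            return True
        except np.linalg.LinAlgError:
            return False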

Simulations and PDEs

In engineering and scientific computing applications, input meshes and algorithms are generally co-designed, so simulations there run into relatively few problems. Visual computing algorithms, however, need to be robust to arbitrary inputs, as they are typically just one component of a larger pipeline. Algorithms often fail when presented with a “bad” mesh, even one that is perfect in terms of connectivity (manifold, consistently oriented, etc.). Well then, what qualifies as a bad mesh? Meshes with skinny or obtuse triangles are particularly problematic. The solution is to remesh them using more equilateral triangles.

Figure 2. (Left) An example of a “bad” mesh with skinny triangles. (Middle) A high-fidelity re-mesh that might be super-expensive to process. (Right) A low-fidelity re-mesh that trades fidelity for efficiency.

Geometric Machine Learning

Most geometric machine learning stands very directly atop geometry processing; hence, it’s important to get it right. The most common problems encountered in geometric machine learning are not so different from those encountered in standard machine learning. These problems include:

  1. Array shape errors: You can use the Python aargh library to monitor and validate tensor shapes
  2. NaNs and infs: Maybe your learning rate is too big? Maybe you’re passing bad inputs into singular functions? Use torch.autograd.set_detect_anomaly(mode, check_nan=True) to track these problematic numbers at inception (see the snippet after this list).
  3. “My trained model works on shape A, but not shape B.”: Is your normalization, orientation, and resolution consistent?
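
For example, anomaly detection turns a NaN produced during the backward pass into an immediate error that names the offending operation:

    import torch

    torch.autograd.set_detect_anomaly(True)  # check_nan=True is the default

    x = torch.tensor([-1.0], requires_grad=True)
    y = torch.sqrt(x)  # forward pass already yields NaN
    y.backward()       # raises, pointing at sqrt's backward function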

It is a good idea to overfit your ML model on a single shape and ensure that it works on simple objects like cubes and spheres before moving on to more complex examples.

And above all, the golden rule when your algorithm fails:
Visualize everything.