
Quasi-harmonic Geodesic Distance

I kicked off this project with my advisor, Yu Wang, over Zoom the weekend before our official start, checked in mid-week to ask questions and report progress, and wrapped up the week with a final Zoom review. Each meeting deepened my understanding of the underlying problem and of how to improve my approach in Python. I’m excited to share these quasi-harmonic ideas: a blend of PDE insight and mesh-based propagation that can yield both speed and exactness in geodesic computation.

Finding the shortest path on a curved surface—called the geodesic distance—is deceptively challenging. Exact methods track how a wavefront would travel across every triangle of the mesh, which is accurate but can be painfully slow on complex shapes. The Heat Method offers a clever shortcut: imagine dropping a bit of heat at your source point, let it diffuse for a moment, then “read” the resulting temperature field to infer distances. By solving two linear systems—one for the heat spread and one to recover a distance‐like potential—you get a fast, global approximation. It runs in near-linear time and parallelizes beautifully, but it can smooth over sharp creases and slightly misestimate in highly curved regions.
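To make the two-solve structure concrete, here is a minimal sketch of the Heat Method in Python using libigl’s bindings and SciPy. It is illustrative rather than the project’s actual code; the function name, the time-step choice, and the small regularization shift are my own assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import igl  # libigl Python bindings


def heat_geodesic_distance(V, F, source, t=None):
    """Approximate geodesic distances from vertex `source` to all vertices
    of the triangle mesh (V, F) via the Heat Method."""
    n = V.shape[0]

    # Discrete operators: cotangent Laplacian (negative semi-definite in
    # libigl's convention), lumped mass matrix, and per-face gradient.
    L = igl.cotmatrix(V, F)
    M = igl.massmatrix(V, F, igl.MASSMATRIX_TYPE_VORONOI)
    G = igl.grad(V, F)
    area = igl.doublearea(V, F) / 2.0

    if t is None:
        # Time step: squared mean edge length, a common default.
        E = igl.edges(F)
        h = np.mean(np.linalg.norm(V[E[:, 0]] - V[E[:, 1]], axis=1))
        t = h * h

    # Solve 1: diffuse a spike of heat from the source for a short time,
    # i.e. (M - t L) u = delta_source.
    delta = np.zeros(n)
    delta[source] = 1.0
    u = spla.spsolve((M - t * L).tocsc(), delta)

    # Normalize the gradient of u: the unit field -grad(u)/|grad(u)|
    # points along approximate geodesics away from the source.
    g = (G @ u).reshape(3, -1).T            # per-face gradients, (#F, 3)
    g_norm = np.linalg.norm(g, axis=1, keepdims=True)
    X = -g / np.maximum(g_norm, 1e-12)

    # Solve 2: recover a distance-like potential phi from the Poisson
    # problem L phi = div(X), with div built so that L = div o grad.
    A = sp.diags(np.tile(area, 3))
    div = -G.T @ (A @ X.T.flatten())
    # A tiny shift handles the constant nullspace of L; phi is only
    # defined up to a constant anyway.
    phi = spla.spsolve((L - 1e-8 * sp.identity(n)).tocsc(), div)

    return phi - phi.min()                  # source sits at distance ~0


# Usage (mesh file is hypothetical):
# V, F = igl.read_triangle_mesh("mesh.obj")
# d = heat_geodesic_distance(V, F, source=0)
```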

To sharpen accuracy where it matters, I adopted a hybrid strategy. First, detect “barrier” edges—those sharp creases where two faces meet at a steep dihedral angle—and temporarily slice the mesh along them. Then apply the Heat Method independently on each nearly‐flat patch, pinning the values along the cut edges to their true geodesic distances. Finally, stitch everything back together by running a precise propagation step only along those barrier edges. The result is a distance field that retains the Heat Method’s speed and scalability on smooth regions, yet achieves exactness along critical creases. It’s remarkable how something as seemingly simple as measuring distance on a surface can lead into rich territory—mixing partial-differential equations, sparse linear algebra, and discrete geometry in pursuit of both efficiency and precision.
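To give a flavor of the first step, detecting barrier edges, here is a small self-contained sketch using libigl’s Python bindings: it flags interior edges whose adjacent face normals differ by more than a threshold angle. The threshold value and function name are illustrative assumptions, not the project’s actual implementation.

```python
import numpy as np
import igl


def detect_crease_edges(V, F, angle_thresh_deg=60.0):
    """Return (vi, vj) vertex pairs for interior edges whose two incident
    faces fold by more than `angle_thresh_deg` (candidate barrier edges)."""
    N = igl.per_face_normals(V, F, np.array([0.0, 0.0, 1.0]))
    TT, _ = igl.triangle_triangle_adjacency(F)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))

    creases = []
    for f in range(F.shape[0]):
        for j in range(3):      # edge j runs from F[f, j] to F[f, (j + 1) % 3]
            g = TT[f, j]
            if g < f:           # boundary edge (-1) or already handled from the other side
                continue
            # A small dot product between face normals means the surface
            # folds sharply across this shared edge.
            if np.dot(N[f], N[g]) < cos_thresh:
                creases.append((int(F[f, j]), int(F[f, (j + 1) % 3])))
    return creases
```

The returned vertex pairs mark where the mesh would be temporarily cut before the patch-wise heat solves; the Dirichlet pinning along the cuts and the exact propagation step are not shown here.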


Week 2: Guest Lecture

On Wednesday, July 9th, SGI fellows were treated to a one-hour presentation by guest lecturer Aaron Hertzmann, Principal Scientist at Adobe Research in San Francisco, CA. Aaron was introduced by SGI fellow Amber Bajaj, who noted, among other accomplishments, that Aaron was recently recognized by the Association for Computing Machinery (ACM)’s SIGGRAPH professional society “for outstanding achievement in computer graphics and interactive techniques” and awarded the Computer Graphics Achievement Award. The talk, titled “Toward a Theory of Perspective: Perception in Pictures,” began on a personal note, with Aaron describing how he was often critical of the ways his own art differed from the photographs he would take of the scenes he was illustrating.

From that anecdotal example, Aaron expanded his talk to cover human perception, vision, the theory of perspective, and much more, weaving it all together to paint a compelling picture of what makes a two-dimensional depiction feel like an accurate representation of our three-dimensional reality. He argued convincingly that, while single-point perspective is typically how cameras capture scenes, and single-point linear perspective is a common tenet of formal art classes, multi-point perspective more faithfully represents how we remember our experiences. In a world of electronics, digital imagery, and automation, it was striking to hear the lecturer make clear that artists can still convey an image more faithful to the experience than digital cameras and rendered three-dimensional imagery can capture.

Key points from Aaron’s talk:

  • Only 3% of classical paintings strictly follow linear perspective
  • A multi-perspective theory more faithfully captures our experience
  • MaDCoW (Zhang et al., CVPR 2025) is a warping algorithm that works for a variety of input scenes
  • 99.9% of vision is peripheral, which leads to inattentional blindness (missing objects that lie outside the fovea’s focus)
  • We don’t store a consistent single 3-D representation in our heads; our perception is fragmentary across fixations
  • There are systematic differences between drawings, photographs, and experiments

Finally, the lecture came full circle with Aaron returning to the art piece he presented at the start and noting seven trends he has identified in his own work that merit further research: good agreement with linear perspective in many cases; distant object size; nearby object size; fit to canvas shape; reduced or eliminated foreshortening; removed objects and simplified shapes; and multi-perspective for complex compositions. Overall, the lecture was thought-provoking and motivational for the fellows currently engrossed in the 2025 Summer Geometry Initiative.