*By Mariem Khlifi, Abraham Kassahun Negash, Hongcheng Song, and Qi Zhang*

**Introduction**

For two weeks, we were involved in a project that looked at an interesting way of extracting lines from a line drawing raster image, mentored by Mikhail Bessmeltsev and Edward Chien and assisted by Zhehao Li.

Although it dates back to the 1990s, automatic line extraction from drawings is still an active research topic today. Due to the unreliability of current tools, artists and engineers often trace lines by hand rather than deal with the results of automatic vectorization tools.

Most past algorithms treat junctions incorrectly, producing incorrect topology that can cause problems when applying further tools to the vectorized drawing. Bessmeltsev and Solomon [1] addressed junction detection and vectorized line drawings using frame fields. A frame field assigns two directions to each point: one tangent to the curve, and another roughly aligned with neighboring frame field directions. This second direction matters at junctions, where it aligns with the tangent of another incoming curve.

We tried to build on this work by additionally looking at minimal currents. Currents are functionals over the space of differential forms on a smooth manifold. They can be thought of as integrations of differential forms over submanifolds. Wang and Chern [2] represented surfaces bounded by curves as differential forms, leading to a formulation of the Plateau problem whose optimization is convex. This representation is common in geometric measure theory, which studies the geometric properties of sets in some space using measure theory. We used this procedure to find curves in our 2D space with a given metric that minimized the distance between two points. The formulation for a flat 2D space is similar to that of the paper, but some modifications had to be made in order to introduce a narrow band metric or frame fields (i.e. to obtain a curve that follows the dark lines while trying to minimize distance).

**Theory**

**Vectorization using Frame Fields**

Frame fields help guide the vectorized line in the direction of the drawn line. First, a gradient of the drawing is defined based on the intensity of the pixel. One of the frame field directions is optimized to be as close as possible to the direction perpendicular to this gradient while being smooth. The second direction is determined by a compromise between a few rules we want to enforce. First, this direction wants to be perpendicular to the tangent direction. Second, it wants to be smooth. For example, at a junction where we have a non-perpendicular intersection, the second direction starts to deviate from the perpendicular before the intersection so that it aligns with the incoming curve at the intersection.

*Fig 1: Frame field example [1]*

Bessmeltsev and Solomon used these frame fields to trace curves along the tangent direction and bundle the extracted curves into strokes. The curves were obtained using Euler’s method by tracing along the tangent direction from each point up to a certain step size.
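A minimal sketch of this tracing step in Python with NumPy; the `tangent` callback here is a hypothetical stand-in for the frame field's tangent direction, not the actual field extracted from a drawing:

```python
import numpy as np

def trace_curve(tangent, p0, step=0.5, n_steps=200):
    """Trace a curve through a direction field with explicit Euler steps.

    `tangent(p)` is assumed to return the tangent direction of the frame
    field at point p; each step moves a fixed distance along it.
    """
    points = [np.asarray(p0, dtype=float)]
    for _ in range(n_steps):
        d = np.asarray(tangent(points[-1]), dtype=float)
        norm = np.linalg.norm(d)
        if norm < 1e-12:          # stop where the field vanishes
            break
        points.append(points[-1] + step * d / norm)
    return np.array(points)

# Toy field: tangents of concentric circles. Starting on the unit circle,
# the traced curve stays near radius 1 (with the usual Euler drift).
circle_tangent = lambda p: np.array([-p[1], p[0]])
curve = trace_curve(circle_tangent, [1.0, 0.0], step=0.05)
```

The drift outward is the expected first-order error of Euler's method; a smaller step size tightens the traced circle.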

We, on the other hand, looked at defining a metric at each frame that encodes the cost of moving in each direction. We then move through the frame field, trying to minimize the distance between our start and end points. In this manner the whole drawing can be vectorized, given the junction points and the frame field.

**What are differential forms?**

The main focus of our project was to understand and implement the algorithm proposed by Wang and Chern for the determination of minimal surfaces bounded by a given curve. We reduce the dimension to suit our application and add a metric so that the space is no longer Euclidean.

The core idea of their method is to represent surfaces using differential forms. Differential forms can be thought of as infinitesimal oriented lengths, areas or volumes defined at each point in the space. They can also be thought of as functions that take in vectors, or higher dimensional analog of vectors, and output scalars. In this interpretation, they can be thought of as measuring the vector or vector analog.

A k-form can be defined on an n-dimensional manifold, where k = 0, 1, 2, …, n. A 0-form is just a scalar-valued function defined over the space. 1-forms, also known as covectors (which, recalling linear algebra, live in the dual space of a vector space), map vectors to scalars. Generally, a k-form takes in a k-vector and outputs a scalar. These k-vectors are higher-dimensional extensions of vectors that can be described by a set of k vectors; their magnitude is the volume spanned by those vectors. They also carry an orientation, so in general they can be thought of as signed k-volumes.

k-forms can be visualized as the k-vectors they measure, and the measurement operation can be thought of as taking the projection of the k-vector onto the k-form. But it is important to realize these are two different concepts, even if they behave similarly; their difference is clearly seen in curved spaces. Musical isomorphisms relate k-forms and k-vectors by exploiting this similarity: \(\sharp\) takes a k-form to a k-vector, while \(\flat\) takes a k-vector to a k-form.

We can define the wedge product, denoted by \(\wedge\), to describe the spanning operation described earlier. In particular, a k-vector spanned by vectors \(x_1, x_2, \ldots, x_k\) can be expressed as \(x_1 \wedge x_2 \wedge \cdots \wedge x_k\).

An operation called the Hodge star denoted by \(\star\) can also be defined on forms. On an n-dimensional space, the Hodge star maps a k-form to an n-k-form. In this way, it gives a complementary form.
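As a concrete example in the flat 2D setting we use later, with the standard orientation, the Hodge star acts on the basis forms as a quarter-turn:

```latex
\star 1 = dx \wedge dy, \qquad \star\, dx = dy, \qquad \star\, dy = -dx, \qquad \star\,(dx \wedge dy) = 1
```

so applying \(\star\) twice to a 1-form in 2D gives minus the identity.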

**Reformulation of the Plateau problem**

Plateau’s problem can be stated as follows: given a closed curve \(\Gamma\) inside a 3-dimensional compact space \(M\), find the oriented surface \(\Sigma\) in \(M\) with minimal area.

Wang and Chern represent the surface \(\Sigma\) and curve \(\Gamma\) using Dirac-delta forms \(\delta_\Sigma\) and \(\delta_\Gamma\). These are a one-form and a two-form, respectively, with infinite magnitude on the surface (resp. curve) and zero everywhere else. They can be understood as the forms used to measure line and surface integrals of vector fields on the curve and surface. Wang and Chern then take the mass norm of \(\delta_\Sigma\), which equals the area of the surface, so minimizing this mass norm is the same problem as minimizing the area. The area can be expressed as:

$$Area(\Sigma) = \sup_{\omega \in \Omega^2(M), ||\omega||_{max} \leq 1} \int_M \omega \wedge \delta_\Sigma = ||\delta_\Sigma||_{mass}$$

Furthermore, the boundary constraint can also be written as a differential and hence linear relationship between forms:

$$\partial \Sigma = \Gamma \Leftrightarrow d\delta_\Sigma = \delta_\Gamma$$

Then the problem can be stated as:

$$\min_{\delta_\Sigma \in \Omega^1(M): d\delta_\Sigma = \delta_\Gamma} ||\delta_\Sigma||_{mass}$$

We may allow any 1-form \(\eta\) in place of \(\delta_\Sigma\):

$$\min_{\eta \in \Omega^1(M):d\eta = \delta_\Gamma} ||\eta||_{mass}$$

Now, solving a Poisson equation features in our solution algorithm, and a Fast Fourier Transform speeds this step up considerably. To use the FFT, however, we need periodic boundary conditions. These admit solutions we might not want, belonging to a different homology class than our desired solution. To rule these out, Wang and Chern impose an additional constraint that ties the cohomology class of the optimal 1-form to that of the initial 1-form.

$$\min_{\eta \in \Omega^1(M):\, d\eta = \delta_\Gamma,\ \int_{M} \vartheta_i \wedge \star \eta = \Lambda_i} ||\eta||_{mass}$$

Finally, the space of 1-forms can be decomposed into coexact, harmonic and exact parts by the Helmholtz-Hodge decomposition, which simplifies all the constraints into a single constraint. After some rearranging, we arrive at the following final form.

Given an initial guess \(\eta_0 \in \Omega^1 (M)\) satisfying \(d\eta_0 = \delta_\Gamma\) and \(\int_{M} \vartheta_i \wedge \star \eta_0 = \Lambda_i\), solve

$$\min_{\varphi \in \Omega^0(M)} ||\eta_0 + d\varphi||_{mass}$$

Even though we are working on a 2-dimensional space, the final form of the Plateau problem is identical in our case as well. Unlike Wang and Chern, who were working in 3D, we didn’t need to worry as much about the initial conditions as we can initialize plausible one-forms by defining an arbitrary curve between two points and taking the one form to be perpendicular to the curve at each vertex and oriented consistently.

**The Algorithm**

## Algorithm 1 (Main Algorithm)

Our goal is to solve the Plateau problem \(\min_{\eta \in \eta_0 + im(d^0)} ||\eta||_{mass}\). Any feasible \(\eta\) differs from an initial guess by an exact form, so solving the problem amounts to finding the right correction: the final solution is the initial guess plus a component \(d\varphi\):

$$\min_{\varphi \in \Omega^0(M)} ||\eta_0 + d\varphi||_{mass}$$

The algorithm then solves \(\min_{\varphi \in \mathbb{R}^V, X \in \mathbb{R}^{V \times 3}} ||X||_{L^1}\), subject to \(D\varphi - X = -X_0\), where \(X\) is the discrete form of the one-form \(\eta\) and \(V\) is the number of vertices in the discretized space.

Lagrange multipliers are introduced to enforce the constraints; they relate the gradient of the objective in \(X\) to the constraint gradients, which in general point in the same direction at the optimum. Usually the optimization is done in one go. However, since we are minimizing over two coupled variables \(X\) and \(\varphi\), we resort to the Alternating Direction Method of Multipliers (ADMM), which adopts a divide-and-conquer approach: it decouples the two variables and solves for each in an alternating fashion until \(X\) converges [3]:

$$\varphi = argmin_\varphi \langle\langle\hat{\lambda}, D\varphi\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-\hat{X}+X_0||_{L^2}^2$$

$$X = argmin_X ||X||_{L^1} - \langle\langle\hat{\lambda}, X\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-X+X_0||_{L^2}^2$$
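The alternating scheme above can be sketched as a short loop. Here `poisson_solve` and `shrink` are hypothetical stand-ins for the Poisson solver and shrinkage map described in the later algorithm sections; only the update structure and the dual ascent on the multiplier are meant literally:

```python
import numpy as np

def admm(X0, D, poisson_solve, shrink, tau=1.0, n_iters=200, tol=1e-8):
    """Alternating scheme for min ||X||_L1 subject to D phi - X = -X0."""
    X = np.asarray(X0, dtype=float).copy()
    lam = np.zeros_like(X)                 # Lagrange multiplier (dual variable)
    phi = None
    for _ in range(n_iters):
        # phi-update: quadratic in phi, reduces to a Poisson solve
        phi = poisson_solve(D.T @ (X - X0) - (1.0 / tau) * (D.T @ lam))
        # X-update: pointwise shrinkage with threshold 1/tau
        X_new = shrink(lam / tau + D @ phi + X0, 1.0 / tau)
        # dual ascent on the constraint residual D phi - X + X0
        lam = lam + tau * (D @ phi - X_new + X0)
        done = np.linalg.norm(X_new - X) < tol
        X = X_new
        if done:
            break
    return X, phi

# Toy 1D instance: 3 vertices on a line, 2 difference edges,
# least-squares Poisson solve and scalar shrinkage.
D = np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])
X0 = np.array([1.0, -2.0])
solve = lambda rhs: np.linalg.lstsq(D.T @ D, rhs, rcond=None)[0]
shr = lambda z, k: np.maximum(1.0 - k / np.maximum(np.abs(z), 1e-12), 0.0) * z
X, phi = admm(X0, D, solve, shr)
```

The least-squares solve handles the constant null space of \(D^TD\), mirroring the fact that \(\varphi\) is only determined up to a constant.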

For the initialization, we created our own 2D path initializer, which takes two points and creates an L-shaped path. The corresponding 1-form consists of normals perpendicular to the path, and \(X_0\) is obtained by integrating over the finite region surrounding each vertex. More complex paths can be created by adding the 1-forms of multiple L-shaped segments.
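As an illustration, a discrete 1-form supported on an L-shaped path might be assembled like this; the grid layout, component ordering, and orientation signs are our own hypothetical conventions, not the exact ones from our implementation:

```python
import numpy as np

def l_path_one_form(shape, p, q):
    """Discrete 1-form X0 supported on an L-shaped path from p to q.

    Hypothetical sketch: the path runs horizontally from p to (p_y, q_x),
    then vertically to q, on a (ny, nx) vertex grid. At each path vertex we
    store the unit normal to the path as (y-component, x-component), with
    signs chosen so the orientation is consistent along the path.
    """
    ny, nx = shape
    X = np.zeros((ny, nx, 2))
    (py, px), (qy, qx) = p, q
    sx = 1 if qx >= px else -1
    for x in range(px, qx, sx):            # horizontal leg: normal is vertical
        X[py, x, 0] = sx
    sy = 1 if qy >= py else -1
    for y in range(py, qy, sy):            # vertical leg: normal is horizontal
        X[y, qx, 1] = -sy
    return X

# L-shaped path from (row 2, col 2) to (row 10, col 12) on a 16x16 grid
X0 = l_path_one_form((16, 16), (2, 2), (10, 12))
```

Summing the outputs for several point pairs gives the initializer for more complex paths, as described above.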

Throughout the paper, the convention used for the integration that evaluates \(X\) from \(\eta\), and for the differentiation matrices of the Poisson solver, is the midpoint rule. We noticed, however, that forward or backward differences produced more accurate results in our case.

## Algorithm 2 (Poisson Solver)

To solve for \(\varphi\), we reduce the following equation from the ADMM optimization.

$$\varphi = argmin_\varphi \langle\langle\hat{\lambda}, D\varphi\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-\hat{X}+X_0||_{L^2}^2$$

After reducing the L2 norms and inner products and setting the derivative of the resulting expression to zero, we have:

$$h^3 D^T \hat{\lambda} + \tau h^3 D^T (D\varphi - \hat{X} + X_0) = 0$$

$$\varphi = \Delta^{-1}(D^T(\hat{X}-X_0)-\frac{1}{\tau}D^T\hat{\lambda})$$

where \(\Delta=D^TD\), and \(D\) is the central-difference differential operator. Because the Fast Fourier Transform (FFT) diagonalizes such constant-coefficient differential operators, giving derivatives a clean closed form in the frequency domain, we use it to solve the Poisson equation. Our FFT Poisson solver projects the right-hand side from the spatial domain onto a finite frequency basis; in the frequency domain the solution is a simple pointwise scaling, and the final solution is recovered by the inverse FFT.
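A minimal periodic FFT Poisson solver along these lines, written as a sketch using the eigenvalues of the 5-point finite-difference Laplacian and pinning the mean of the solution to zero:

```python
import numpy as np

def fft_poisson(f, h=1.0):
    """Solve the periodic Poisson equation Laplacian(phi) = f on a 2D grid.

    Uses the Fourier symbol of the discrete 5-point Laplacian, so the result
    matches a finite-difference discretization. Since a periodic solution is
    only defined up to a constant, the mean of phi is pinned to zero.
    """
    ny, nx = f.shape
    kx = np.fft.fftfreq(nx)                     # frequencies in cycles/sample
    ky = np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    # Eigenvalue of the 5-point stencil for mode (kx, ky)
    lam = (2.0 * np.cos(2 * np.pi * KX) + 2.0 * np.cos(2 * np.pi * KY) - 4.0) / h**2
    F = np.fft.fft2(f)
    F[0, 0] = 0.0                               # drop the constant (mean) mode
    lam[0, 0] = 1.0                             # avoid 0/0 for that mode
    return np.real(np.fft.ifft2(F / lam))
```

Feeding back the discrete Laplacian of a periodic grid function recovers that function up to a constant, which is the same kind of check as the validation described in the text.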

To test the validity of the FFT Poisson solver, we use \(z = \exp(x+\frac{y}{2})\) as the original function and compute its Laplacian as the right-hand side. The recovered solution is identical up to a constant (-2.1776), easy to read off at the point (0, 0), as expected for a periodic Poisson problem whose solution is determined only up to a constant.

*Fig 2: Validation of the Poisson solver*

## Algorithm 4 (Shrinkage)

Now, for the minimization of \(X\), we recall the formula obtained earlier.

$$X = argmin_X ||X||_{L^1} - \langle\langle\hat{\lambda}, X\rangle\rangle_{L^2} + \frac{\tau}{2}||D\varphi-X+X_0||_{L^2}^2$$

After reducing the norms, we obtain:

$$X = argmin_X \sum_{v \in V} h^3 (|X_v| - \hat{\lambda}_v \cdot X_v + \frac{\tau}{2}|(D\varphi)_v - X_v + (X_0)_v|^2)$$

This is equivalent to minimizing the terms corresponding to each individual vertex \(v\).

$$X_v = argmin_{X_v} |X_v| - \hat{\lambda}_v \cdot X_v + \frac{\tau}{2}|(D\varphi)_v - X_v + (X_0)_v|^2$$

This statement is equivalent to the pointwise shrinkage problem which has the following solution.

$$X_v = SHRINKAGE_{\frac{1}{\tau}}(z) = max(1-\frac{1}{\tau|z|},0)z$$

Where \(z = \frac{1}{\tau}\hat{\lambda}_v + (D\varphi)_v + (X_0)_v\), obtained by completing the square in the expression above.
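The shrinkage map itself is a one-liner; a sketch with a vector-valued \(z\) per vertex, where `kappa` plays the role of \(1/\tau\):

```python
import numpy as np

def shrink(z, kappa):
    """Pointwise shrinkage: argmin_x |x| + (1/(2*kappa)) * |x - z|^2.

    z has shape (..., d); each d-vector is shortened toward the origin by
    kappa, and mapped to zero if its length is below kappa. With
    kappa = 1/tau this is exactly max(1 - 1/(tau*|z|), 0) * z.
    """
    norm = np.linalg.norm(z, axis=-1, keepdims=True)
    return np.maximum(1.0 - kappa / np.maximum(norm, 1e-12), 0.0) * z
```

For example, a vector of length 5 with `kappa = 1` is shortened to length 4, while any vector shorter than `kappa` collapses to zero; this is what drives the sparsity of the minimizing one-form.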

This minimizes the Euclidean distance, which only yields straight lines. To adjust the formulation to our case, where the defined metric differs from the Euclidean one, we first update the minimization problem to include the new metric and then solve it to obtain a new formulation of the shrinkage map.

# Results

After we tinkered with the algorithm, we felt that it was time that we sent it to kindergarten to learn a few tricks.

## Drawing a line

For this example, we define two points as our boundary and take a non-optimal path (sequence of segments) between them as our initial guess. Once this path is determined, we find its equivalent 1-form which extracts the normals at every vertex of our grid with respect to the path. The metric used in this example is the Euclidean metric, i.e., it’s constant everywhere, so the optimal path will be a line connecting the boundary points.

*Fig 3: Initial one-form (top left), pseudo-inverse of the minimizing one-form (top right), level set obtained from the pseudo-inverse (bottom left), minimizing one-form (bottom right)*

Note that to obtain the curve, we take the pseudo-inverse of the final one-form \(X\). This gives a 0-form which can be visualized as a surface. This surface will have a big jump over the curve we are looking for.

## Tracing a circle

Instead of relying on a Euclidean metric, we can define our own metric to obtain the connecting line within a certain narrow band instead of the straight line that connects two points. A narrow band allows us to prioritize some pixels over others: it defines a penalty for passing through pixels outside the narrow band by making them more expensive than pixels inside it.
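A cost map of this kind can be as simple as a two-valued image; the inside/outside costs below are illustrative values, not the ones we tuned:

```python
import numpy as np

def narrow_band_cost(band_mask, inside=1.0, outside=50.0):
    """Per-pixel cost c: cheap inside the narrow band, expensive outside.

    The ratio outside/inside controls how strongly the minimizing curve is
    pushed to stay inside the band.
    """
    return np.where(band_mask, inside, outside)

# Example: an annular narrow band, as in the circle-tracing experiment
n = 64
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(xx - n / 2, yy - n / 2)
band = np.abs(r - 20) < 2.5          # thin ring of radius ~20
c = narrow_band_cost(band)
```

For a scanned drawing, the mask could instead come from thresholding pixel intensity, so dark strokes become the cheap region.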

We need to update our shrinkage problem. If \(c\) is the cost associated with each pixel, then the equation for \(X\) becomes:

$$X = argmin_{X_v} c|X_v| - \langle \hat{\lambda}_v, X_v\rangle + \frac{\tau}{2}||(D\varphi)_v - X_v + (X_0)_v||^2$$

$$X = argmin_{X_v} |X_v| - \langle \hat{\lambda}_v / c, X_v\rangle + \frac{\tau}{2c}||(D\varphi)_v - X_v + (X_0)_v||^2$$

Replacing \(\hat{\lambda}\) by \(\hat{\lambda}/c\) and \(\tau\) by \(\tau/c\), our shrinkage problem boils down to:

$$X_v = SHRINKAGE_{\frac{c}{\tau}}(\hat{\lambda}_v/\tau + (D\varphi)_v + (X_0)_v)$$
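Folding the per-pixel cost into the shrinkage step then only changes the threshold; a sketch consistent with the substitution above (names and array shapes are our own):

```python
import numpy as np

def weighted_shrink(lam, Dphi, X0, c, tau):
    """Shrinkage step with a per-pixel metric cost c.

    Sketch only: substituting lambda -> lambda/c and tau -> tau/c in the
    Euclidean shrinkage turns the threshold 1/tau into c/tau at each pixel.
    Arrays have shape (..., d); c is a scalar or a per-pixel array
    broadcastable against the leading axes.
    """
    z = lam / tau + Dphi + X0
    kappa = np.asarray(c)[..., None] / tau      # per-pixel threshold c/tau
    norm = np.linalg.norm(z, axis=-1, keepdims=True)
    return np.maximum(1.0 - kappa / np.maximum(norm, 1e-12), 0.0) * z
```

A large cost `c` raises the threshold, zeroing out the one-form at expensive pixels, which is exactly how the narrow band steers the curve.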

*Fig 4: Superimposition of the initial one-form (green normals), final one-form (red normals), and narrow band (yellow)*

## Same boundary, multiple paths

The problem has a unique solution, so different initializations with the same boundary should yield the same result. We verified this by defining two curves with the same boundary (in 2D, the boundary is two points).

*Fig 5: The initializing one-form as a superposition of two one-forms corresponding to two curves between the boundary points*

*Fig 6: Minimizing one-form (top left), pseudo-inverse of the minimizing one-form (top right), level set obtained from the pseudo-inverse (bottom left)*

## Crossing paths

We defined two intersecting paths between points on the circle. The obtained paths connected points that were not connected in the initialization, which shows the importance of running the optimization on each pair of points separately to obtain results that match the intended topology.

*Fig 7: Minimizing one-form (top left), pseudo-inverse of the minimizing one-form (top right), level set obtained from the pseudo-inverse (bottom left), initial one-form superimposed on the narrow band (bottom right)*

## Cross-Shaped Narrow Band

We define crossing paths similar to the example above, but change the shape of the narrow band to a cross. The result yields connections between the closest points in the narrow band, hence the hyperbolic appearance.

*Fig 8: Initial one-form (top left), initial one-form superimposed on the narrow band (top right), level set obtained from the pseudo-inverse (bottom left), minimizing one-form (bottom right)*

# Conclusion

We have successfully used an algorithm proposed by Wang and Chern [2] for curves instead of surfaces to find minimal connections between 2D boundaries while also incorporating metric information from narrow bands to obtain various shapes of these connections. We have enjoyed working on this project which combined many topics that were new to us and we hope that you have similarly enjoyed going through the blog post!

# References

[1] M. Bessmeltsev and J. Solomon, “Vectorization of Line Drawings via Polyvector Fields,” ACM Transactions on Graphics, 2019.

[2] S. Wang and A. Chern, “Computing Minimal Surfaces with Differential Forms,” ACM Transactions on Graphics, 2021.

[3] A. Aich, “A Brief Review of Alternating Direction Method of Multipliers (ADMM).”