Categories
Research

Hex Marks the Spot

How can we extend the 2-close interaction to a general n-close interaction?

That was the question at the end of the triangle’s hex blog post. This post goes through the thought process and tools used to solve for the sequence that extends the 2-close interaction to a general \(n\)-close interaction.

The approach here focused on analyzing the rotation of the dual structure and determining the precise amount of rotation required to arrive at the desired \(n\)-case configuration. This result was obtained by solving a relatively simple Euclidean geometry subproblem, of a kind sometimes encountered as a canonical geometry problem at the secondary or high-school level. Through this perspective, a closed-form starting step for the general sequence was successfully derived.

OPEN FOR SPOILER FOR ANSWER

The rotation angle, \(\theta\), required to obtain a part of the \(n\)-close interaction term in the dual is given by:

$$
\theta = 60 \ - \ \sin^{-1} \left( \frac{3 n}{2 \sqrt{3 (n^2 + n + 1)}} \right)
$$

As with many mathematical problems, there are multiple approaches to reach the same result. I really encourage you to explore doing it yourself but without further ado, here is the presentation of the thought process that led to this solution.

By the way, the working name for this article was
Hex and the City: Finding Your N-Closest Neighbors;
if you read all this way, I think you at least deserve to know this.


Constructing the Solution

We start off where we left off in the Auxiliary Insights section of the last blog post.

Recall that a rotation by \(60^\circ\) returns us to the original dual line due to the underlying sixfold symmetry. For the \(2\)-close interaction, we introduced a new dual by halving this angle to \(30^\circ\). Naturally, to explore the \(3\)-close case, one might attempt the naive approach of halving again to \(15^\circ\). However, as seen below, this approach does not yield the desired result, indicating that the simple halving strategy cannot be extended straightforwardly.

By manually guessing and checking the rotation values, we end up with a somewhat unexpected sequence, one that, for me at least, was not immediately obvious.

This hints at the need to uncover a more subtle structure that lets us analytically determine the values we seek.

Some prerequisite background is provided in the next section, but if you’re already familiar, feel free to skip ahead to the solution process that builds on these ideas.


Some useful underlying mathematics prerequisites that could help

Modular Arithmetic

A Quick Introduction to Modular Arithmetic

Modular arithmetic is a system of arithmetic for integers where numbers “wrap around” after reaching a certain value called the modulus. Rather than focusing on the full quotient of a division, modular arithmetic concerns itself with the remainder. It is a foundational concept in number theory and has wide applications in cryptography, computer science, and abstract algebra.

Motivation

Suppose you are working with a standard 12-hour clock. If the current time is 9 and 5 hours pass, the clock shows 2, not 14. This is because clocks operate “modulo 12”, meaning we consider two numbers equivalent if they differ by a multiple of 12. So:

$$
9 + 5 = 14 \equiv 2 \pmod{12}
$$

This is a basic example of modular arithmetic: rather than computing the full sum, we reduce it to its residue modulo 12.
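For the computationally inclined, the clock example is a one-liner in Python, whose `%` operator returns exactly the least non-negative residue:

```python
# The 12-hour clock example: times are reduced modulo 12.
current, elapsed = 9, 5
print((current + elapsed) % 12)  # → 2

# More generally, a % n gives the least non-negative residue of a modulo n
# (Python's % already behaves this way for negative a as well).
print(14 % 12)   # → 2
print(-10 % 12)  # → 2
```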

Definition

Let \(n \) be a fixed positive integer, called the modulus. For any integers \(a \) and \(b \), we say:

$$
a \equiv b \pmod{n}
$$

(read as “a is congruent to b modulo n”) if and only if \(n \) divides \(a - b \); that is, there exists an integer \(k \) such that:

$$
a - b = kn
$$

This defines an equivalence relation on the set of integers \(\mathbb{Z} \), which partitions \(\mathbb{Z} \) into \(n \) distinct congruence classes modulo \(n \). Each class contains all integers with the same remainder upon division by \(n \).

For example, with \(n = 5 \), the integers are grouped into the following classes:

$$
[0] = \{ \dots, -10, -5, 0, 5, 10, \dots \}
$$

$$
[1] = \{ \dots, -9, -4, 1, 6, 11, \dots \}
$$

$$
[2] = \{ \dots, -8, -3, 2, 7, 12, \dots \}
$$

$$
[3] = \{ \dots, -7, -2, 3, 8, 13, \dots \}
$$

$$
[4] = \{ \dots, -6, -1, 4, 9, 14, \dots \}
$$

Each integer belongs to exactly one of these classes. For computational purposes, we usually represent each class by its smallest non-negative element, i.e., by the residue \(r \in \{0, 1, \dots, n-1\} \).

Arithmetic in \(\mathbb{Z}/n\mathbb{Z} \)

The set of congruence classes modulo \(n \), denoted \(\mathbb{Z}/n\mathbb{Z} \), forms a ring under addition and multiplication defined by:

  • \([a] + [b] := [a + b] \)
  • \([a] \cdot [b] := [ab] \)

These operations are well-defined: if \(a \equiv a' \pmod{n} \) and \(b \equiv b' \pmod{n} \), then \(a + b \equiv a' + b' \pmod{n} \) and \(ab \equiv a'b' \pmod{n} \).

If \(n \) is prime, \(\mathbb{Z}/n\mathbb{Z} \) is a finite field, meaning every nonzero element has a multiplicative inverse.
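A quick Python sketch of arithmetic in \(\mathbb{Z}/n\mathbb{Z}\), using plain integer residues as class representatives (note that the modular inverse via `pow(a, -1, n)` requires Python 3.8+):

```python
# Arithmetic in Z/nZ: represent each class by its residue in {0, ..., n-1}.
n = 5
a, b = 7, 9          # representatives of [2] and [4] modulo 5
print((a + b) % n)   # → 1, since [7] + [9] = [16] = [1]
print((a * b) % n)   # → 3, since [63] = [3]

# When n is prime, every nonzero class is invertible.
# Python 3.8+ computes modular inverses directly:
inv = pow(3, -1, n)  # the class [x] with 3x ≡ 1 (mod 5)
print(inv)           # → 2, since 3 * 2 = 6 ≡ 1 (mod 5)
```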

Formal Summary

Modular arithmetic generalizes the idea of remainders after division. Formally, for a fixed modulus \(n \in \mathbb{Z}_{>0} \), we say:

$$
a \equiv b \pmod{n} \iff n \mid (a - b)
$$

This defines a congruence relation on \(\mathbb{Z} \), leading to a system where calculations “wrap around” modulo \(n \). It is a central structure in elementary number theory, forming the foundation for deeper topics such as finite fields, Diophantine equations, and cryptographic algorithms.

Law Of Sines

Derivation of the Law of Sines

The Law of Sines is a fundamental result in trigonometry that relates the sides of a triangle to the sines of their opposite angles. This elegant identity allows us to find unknown angles or sides in oblique triangles using simple trigonometric ratios.

Setup and Construction

Consider an arbitrary triangle \(\triangle ABC \) with:

  • \(a = BC \)
  • \(b = AC \)
  • \(c = AB \)

To derive the Law of Sines, we draw an altitude from vertex \(A \) perpendicular to side \(BC \), and denote this height as \(h \).

Now consider the two right triangles formed by this altitude:

  • Using angle \(B \) (in the right triangle with hypotenuse \(AB = c \)): $$
    \sin B = \frac{h}{c} \quad \Rightarrow \quad h = c \sin B
    $$
  • Using angle \(C \) (in the right triangle with hypotenuse \(AC = b \)): $$
    \sin C = \frac{h}{b} \quad \Rightarrow \quad h = b \sin C
    $$

Since both expressions equal \(h \), we equate them:

$$
c \sin B = b \sin C
$$

Rewriting this gives:

$$
\frac{b}{\sin B} = \frac{c}{\sin C}
$$

We can repeat this construction by drawing altitudes from the other vertices to derive the remaining relationship:

$$
\frac{a}{\sin A} = \frac{b}{\sin B}
$$

Combining all, we obtain the complete Law of Sines:

$$
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}
$$

Geometric Interpretation

The Law of Sines implies that in any triangle, the ratio of a side to the sine of its opposite angle is constant. In fact, this constant equals \(2R \), where \(R \) is the radius of the circumcircle of the triangle:

$$
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R
$$
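As a quick numerical sanity check, here is the Law of Sines verified on an arbitrary triangle (the vertices below are illustrative); all three ratios should agree and equal \(2R = \frac{abc}{2 \cdot \text{Area}}\):

```python
import math

# Verify the Law of Sines on an arbitrary (non-right) triangle.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c = dist(B, C), dist(A, C), dist(A, B)  # sides opposite A, B, C

def angle(p, q, r):
    """Interior angle at vertex p of triangle pqr."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    cosang = (v1[0]*v2[0] + v1[1]*v2[1]) / (dist(p, q) * dist(p, r))
    return math.acos(cosang)

alpha, beta, gamma = angle(A, B, C), angle(B, A, C), angle(C, A, B)

ratios = (a / math.sin(alpha), b / math.sin(beta), c / math.sin(gamma))
area = 0.5 * abs((B[0]-A[0])*(C[1]-A[1]) - (C[0]-A[0])*(B[1]-A[1]))
two_R = a * b * c / (2 * area)  # since R = abc / (4 * Area)

print(ratios, two_R)  # all four numbers agree
```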

This shows a deep geometric connection between triangles and circles, reflecting the underlying symmetry of some parts of Euclidean geometry.

Law Of Cosines (generalisation of Pythagorean theorem)

Derivation of the Cosine Angle Formula

The cosine angle formula, also known as the Law of Cosines, relates the lengths of the sides of a triangle to the cosine of one of its angles.

Motivation

Consider a triangle with sides of lengths \(a \), \(b \), and \(c \), and let \(\theta \) be the angle opposite side \(c \). When \(\theta \) is a right angle, the Pythagorean theorem applies:

$$
c^2 = a^2 + b^2
$$

For any angle \(\theta \), the Law of Cosines generalizes this relationship:

$$
c^2 = a^2 + b^2 - 2ab \cos \theta
$$

This formula allows us to find a missing side or angle in any triangle, not just right triangles.

Setup and Coordinate Placement

To derive the formula, place the triangle in the coordinate plane, using the labels \(a = BC \), \(b = AC \), and \(c = AB \):

  • Place vertex \(A \) at the origin \((0, 0) \).
  • Place vertex \(B \) on the positive x-axis at \((c, 0) \).
  • Place vertex \(C \) at \((x, y) \), such that the angle at \(A \) is \(\theta = \angle A \).

Since \(\theta \) is the angle at \(A \) and \(AC = b \), the coordinates of \(C \) can be expressed using polar coordinates relative to \(A \).

Point \(C \) has coordinates:

$$
C = (b \cos \theta, b \sin \theta)
$$

The length \(a = BC = \) distance between \(B(c, 0) \) and \(C(b \cos \theta, b \sin \theta) \):

$$
a^2 = (b \cos \theta - c)^2 + (b \sin \theta - 0)^2
$$

Expanding:

$$
a^2 = (b \cos \theta - c)^2 + (b \sin \theta)^2
= (b \cos \theta)^2 - 2bc \cos \theta + c^2 + b^2 \sin^2 \theta
$$

Simplify using the fundamental trigonometric identity \(\cos^2 \theta + \sin^2 \theta = 1 \):

$$
a^2 = b^2 \cos^2 \theta - 2bc \cos \theta + c^2 + b^2 \sin^2 \theta = b^2(\cos^2 \theta + \sin^2 \theta) - 2bc \cos \theta + c^2
$$

$$
a^2 = b^2 - 2bc \cos \theta + c^2
$$

This is the Law of Cosines: for any triangle with sides \(a, b, c \) and angle \(\theta \) opposite side \(a \):

$$
a^2 = b^2 + c^2 - 2bc \cos \theta
$$

This formula generalizes the Pythagorean theorem and allows computation of unknown sides or angles in any triangle, making it fundamental in trigonometry and geometry.
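As a quick sanity check, here is the coordinate derivation replayed numerically in Python (the side lengths and angle are arbitrary illustrative values):

```python
import math

# Numerical check of the Law of Cosines, mirroring the coordinate derivation:
# A at the origin, B at (c, 0), C at (b cos(theta), b sin(theta)).
b, c = 3.0, 5.0
theta = math.radians(70.0)   # the angle at A, opposite side a

C = (b * math.cos(theta), b * math.sin(theta))
B = (c, 0.0)

a_direct = math.hypot(C[0] - B[0], C[1] - B[1])            # distance B -> C
a_formula = math.sqrt(b**2 + c**2 - 2*b*c*math.cos(theta)) # Law of Cosines

print(a_direct, a_formula)  # the two values agree
# With theta = 90 degrees the cosine term vanishes and we recover Pythagoras.
```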

Height of a Regular Hexagon of Side Length, \(a\)

Find the height of a regular hexagon with side length \(a\)

Understand the structure of a regular hexagon

  • A regular hexagon can be divided into 6 equilateral triangles.
  • Each triangle has side length \(a\).
  • The height of the hexagon is the distance between two parallel sides, i.e., it’s twice the height of one equilateral triangle formed inside.

So:

$$
\text{height of hexagon} = 2 \times \text{height of equilateral triangle with side } a
$$

Use formula for height of an equilateral triangle

The height \(h\) of an equilateral triangle with side length \(a\) is:

$$
h = a \cdot \sin(60^\circ)
$$

We use this since the height splits the triangle into two right triangles with angles \(30^\circ, 60^\circ, 90^\circ\).

Substitute:

$$
\sin(60^\circ) = \frac{\sqrt{3}}{2}
$$

So:

$$
h = a \cdot \frac{\sqrt{3}}{2}
$$

Multiply by 2 to get full hexagon height

$$
\text{Height of hexagon} = 2 \cdot \left( a \cdot \frac{\sqrt{3}}{2} \right)
$$

Yielding Result:

$$
\text{Height of a regular hexagon with side length } a = a \sqrt{3}
$$
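A short numerical confirmation in Python:

```python
import math

# Height of a regular hexagon of side a: twice the height of one of the
# six equilateral triangles it decomposes into.
a = 2.0
triangle_height = a * math.sin(math.radians(60))  # a * sqrt(3) / 2
hexagon_height = 2 * triangle_height

print(hexagon_height, a * math.sqrt(3))  # both equal a * sqrt(3)
```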

Limits at infinity

Limits at Infinity: A Conceptual Overview

In mathematical analysis, the notion of a limit at infinity provides a rigorous framework for understanding the behavior of functions as their input grows arbitrarily large in magnitude. Specifically, we are interested in the asymptotic behavior of a function \(f(x) \) as \(x \to \infty \) (or \(x \to -\infty \)), that is, how \(f(x) \) behaves for sufficiently large values of \(x \).

Formally, we say that
$$
\lim_{x \to \infty} f(x) = L
$$
if for every \(\varepsilon > 0 \), there exists a real number \(N \) such that
$$
x > N \Rightarrow |f(x) - L| < \varepsilon.
$$
This definition captures the idea that the values of \(f(x) \) can be made arbitrarily close to \(L \) by taking \(x \) sufficiently large.

Limits at infinity play a crucial role in the study of convergence, asymptotics, and the global behavior of functions.


Continuing to derive the solution

Simplifying the focus

We begin by exploiting the inherent sixfold rotational symmetry, reducing the problem “modulo 6”-style and effectively restricting our attention to a single fundamental sector (i.e., one \(60^\circ\) sector). By doing so, we simplify the analysis to just one representative section within this equivalence class.

If we restrict our view along the direction of the blue arrow, we observe that the rotation lines align to form a straight “line” sequence. This linear arrangement reveals an underlying order that we can potentially exploit.

To proceed, we now seek an invariant: a quantity or structure that remains stable under the transformations under consideration. The natural approach here is to introduce an auxiliary red arrow, aligned with the reference direction. By connecting this to each rotation line, we form triangles with a common “base”.

Let us now regard each line as encoding an angular offset, specifically, a rotation relative to a fixed reference direction given by the dark blue line (the dual). We can think of the red arrow as a kind of basis vector corresponding to the blue line. Our aim is to compute the rotational offset of each line with respect to this reference frame, treating the red arrow as a canonical representative of the direction encoded by the dark blue line.

Finding the general formula for the offset angle between the red arrow (a representation of the original dual) AND the \(n\)-case triangle

Dotted lines here are the lines we saw in the image earlier

Given:

  • A triangle with two known sides:
    • Side 1: \(a \sqrt{3}\)
    • Side 2: \(a n \sqrt{3}\)
  • Included angle between them: \(120^\circ\)
  • Goal: Find the angle \(A\), opposite the side \(a n \sqrt{3}\)

Step 1: Use Law of Cosines to find the third side

$$
c^2 = \alpha^2 + \beta^2 - 2 \alpha \beta \cos(C)
$$

Assign:

  • \( \alpha = a \sqrt{3}\)
  • \(\beta = a n \sqrt{3}\)
  • \(C = 120^\circ\)

$$
c^2 = (a \sqrt{3})^2 + (a n \sqrt{3})^2 - 2 (a \sqrt{3})(a n \sqrt{3}) \cos(120^\circ)
$$

Evaluate:

  • \((a \sqrt{3})^2 = 3 a^2\)
  • \((a n \sqrt{3})^2 = 3 a^2 n^2\)
  • \(2 \alpha \beta = 6 a^2 n\)
  • \(\cos(120^\circ) = -\frac{1}{2}\)

$$
c^2 = 3 a^2 + 3 a^2 n^2 + 3 a^2 n
$$

$$
c^2 = 3 a^2 (n^2 + n + 1)
$$

$$
c = a \sqrt{3 (n^2 + n + 1)}
$$

Step 2: Use Law of Sines to find the angle opposite to \(a n \sqrt{3}\)

$$
\frac{\sin A}{a n \sqrt{3}} = \frac{\sin(120^\circ)}{c}
$$

Substitute:

  • \(\sin(120^\circ) = \frac{\sqrt{3}}{2}\)
  • \(c = a \sqrt{3 (n^2 + n + 1)}\)

$$
\sin A = \frac{a n \sqrt{3}}{a \sqrt{3 (n^2 + n + 1)}} \times \frac{\sqrt{3}}{2}
$$

Cancel \(a\):

$$
\sin A = \frac{n \sqrt{3}}{\sqrt{3 (n^2 + n + 1)}} \times \frac{\sqrt{3}}{2}
$$

$$
\sin A = \frac{3 n}{2 \sqrt{3 (n^2 + n + 1)}}
$$

Final Result for the ? angle:

$$
A = \sin^{-1} \left( \frac{3 n}{2 \sqrt{3 (n^2 + n + 1)}} \right)
$$
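To sanity-check the closed form, we can rebuild the \(n\)-case triangle in coordinates and measure the angle directly; the two computations should agree for every \(n\). A Python sketch (the unit side length \(a = 1\) is an arbitrary choice, since \(a\) cancels):

```python
import math

def A_closed_form(n):
    """Angle (degrees) opposite the side a*n*sqrt(3), from the derivation."""
    return math.degrees(math.asin(3*n / (2 * math.sqrt(3 * (n**2 + n + 1)))))

def A_coordinates(n, a=1.0):
    """Same angle measured directly from a coordinate construction:
    sides a*sqrt(3) and a*n*sqrt(3) with a 120-degree included angle."""
    P0 = (0.0, 0.0)
    P1 = (a * math.sqrt(3), 0.0)             # side of length a*sqrt(3)
    s2 = a * n * math.sqrt(3)
    P2 = (s2 * math.cos(math.radians(120)),  # side of length a*n*sqrt(3)
          s2 * math.sin(math.radians(120)))
    # The angle opposite the side P0->P2 is the interior angle at P1.
    v1 = (P0[0] - P1[0], P0[1] - P1[1])
    v2 = (P2[0] - P1[0], P2[1] - P1[1])
    dot = v1[0]*v2[0] + v1[1]*v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

for n in range(1, 5):
    print(n, A_closed_form(n), A_coordinates(n))  # the pair agrees for each n
```

For \(n = 1\) both give exactly \(30^\circ\), consistent with \(\sin A = \tfrac{3}{2\sqrt{9}} = \tfrac{1}{2}\).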

Adjusting the Sequence for clearer understanding of the Asymptotics

We can now utilize the function defined above to generate the desired terms in the sequence. This brings us close to our objective. Upon closer inspection, we observe that the sequence aligns particularly well when interpreted as an angular offset. Specifically, as illustrated in the image below, we begin with the blue line corresponding to the case \(n = 0\), and for each subsequent value of \(n\), we proceed in a clockwise direction. (These lines correspond to the dotted rays shown in the previous image.)


Drawing a link between the symmetry and this

Originally, by leveraging the underlying symmetry of the problem, we simplified the analysis by focusing on a single direction; in other words, we can intuitively infer that the sequence “resets” every 60 degrees.

Now instead, we can analyze the limiting behavior of the relevant angle in the \(n\)-th triangle considered earlier. As \(n\) grows large, the angle opposite the red line (the \(a \sqrt{3}\) side), which we denote by \(\omega\), decreases towards zero. Formally, this can be expressed as

$$
\lim_{n \to \infty} \omega = 0.
$$

Using the triangle angle sum formula, the angle of interest (the “?” angle, which we now call \(\phi\)) satisfies

$$
\phi = 180^\circ - 120^\circ - \omega.
$$

Therefore,

$$
\lim_{n \to \infty} \phi = 60^\circ,
$$

showing that the sequence converges to \(60^\circ\).

Final Adjustment of the formula to reflect the rotation of the original dual and the intuition above

Thus, from a mathematical (okay, maybe more so an aesthetic) standpoint, it is preferable to redefine the sequence so that it converges to zero. This aligns with the conceptual goal of determining ‘how much to rotate’ the original dual structure to obtain the \(n\)-th dual. In its current form, the sequence converges to a fixed offset (specifically, \(60^\circ\)), which, at first glance, is not immediately apparent from the formula.

To clarify what is meant by ‘converging to zero’: when we rotate the original dual by any multiple of \(60^\circ\) (i.e., an element of \(60\mathbb{Z}\)), we return to the original dual; similarly, a rotation by \(30\mathbb{Z}\) yields the 2-close interaction. In this context, we seek a rotation angle \(\theta\) such that \(\theta \cdot \mathbb{Z}\) captures the \(n\)-close interaction, and thus, it is natural to define \(\theta\) as a function of \(n\) that tends to zero as \(n \to \infty\). This reframing reveals the underlying structure more clearly and places the emphasis on the relative rotation required to reach the \(n\)-th configuration from the base case.

This sequence tending to zero carries a natural interpretation: it suggests diminishing deviation or adjustment over time. This aligns well with the intuition that as \(n\) increases, the system or pattern is “settling down” or tending to the reset point. In contrast, convergence to a nonzero constant may obscure this interpretation (unless you know a priori about the symmetries of this system), since the presence of a persistent offset implies a kind of residual asymmetry or imbalance. By ensuring that the sequence vanishes asymptotically, we emphasize convergence toward a limiting behavior centered at zero.

This was the justification for subtracting \(A\) from 60 earlier, so as to express the sequence in the original notion of rotation of the dual instead, yielding:

$$
\theta = 60 \ - \ A
$$

$$
\theta = 60 \ - \ \sin^{-1} \left( \frac{3 n}{2 \sqrt{3 (n^2 + n + 1)}} \right).
$$
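Evaluating this in Python reproduces the rotations discussed earlier (\(60^\circ\) for the original dual, \(30^\circ\) for the 2-close case) and makes the convergence to zero visible:

```python
import math

def theta(n):
    """Rotation (degrees) of the original dual for the n-th term."""
    return 60 - math.degrees(math.asin(3*n / (2 * math.sqrt(3 * (n**2 + n + 1)))))

for n in range(0, 7):
    print(n, round(theta(n), 4))
# n = 0 gives 60 (the original sixfold symmetry), n = 1 gives 30
# (the halved angle of the 2-close case), and theta(n) -> 0 as n grows.
```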

After this adjustment, hopefully it becomes apparent, even from a quick inspection of the formula, that the constant \(60\) plays a fundamental role in the limiting structure of the problem.

Thus we have

The Problem is not finished

Now, reintroducing the 6-fold symmetry and examining the superposition of plots corresponding to the 1-close through \(n\)-close interactions (for increasingly large values of \(n\)), we observe:

But wait … this does not cover the full grid. It’s clearly missing parts! You scammed me! Well, yes, but not quite, as I did mention at the start:

“…closed-form starting step

Srivathsan Amruth,
You really didn’t think I was going to give you the full answer right…

I intentionally stopped here, as I want you to explore the problem further and realize that there are, in fact, additional symmetries you can exploit to fully cover the hexagonal grid. I’ll leave you with the following hint to start:

Consider the 1-ray case and its associated \(60^\circ\) cone.

Now look at the 3-close case (green) and the 2-close case (purple). Ask yourself: can I flip something to fill in the missing part of the 3-close structure?

Then, move on to the 4-close case. Since (spoiler) there will be two distinct 3-close components involved, can I once again use flipping to complete the picture? And if so, how many times, and where, should these flips occur?

It’s getting quite recursive from here… but I’ll let you figure out the rest.

Also, I’ll leave you with one final thought: is there a way to avoid this recursive construction altogether, or simplify the approach, and instead generate the full structure more efficiently using some iterative or closed-form approach?

Wishing you a hex-tra special day ahead with every piece falling neatly into place.

Srivathsan Amruth,
i.e., the person guiding you through this symmetry-induced spiral.

The Triangle’s Hex

An Open Challenge

The goal of this post is to celebrate the question. History’s greatest intellectual leaps often began not with the answer itself, but with the journey of tackling a problem. Hilbert’s 23 problems charted the course of a good chunk of 20th-century mathematics. Erdős scattered challenges that led to breakthroughs in entire fields. The Millennium Prize Problems continue to inspire and frustrate problem solvers.

There’s magic in a well-posed problem; it ignites curiosity and forces innovation. Since I am nowhere near the level of Erdős or Hilbert, today, we embrace that spirit with a maybe less well-posed but deceptively simple yet intriguing geometric puzzle on a disordered surface.

To this end, we present a somewhat geometric-combinatorial problem that bridges aspects of computational geometry, bond percolation theory and graph theory.

Problem Construction

We begin with the hexagonal lattice in \(\mathbb{R}^2\) (the two-dimensional plane), specifically the regular hexagonal lattice (also known as the honeycomb lattice). For visualisation, a finite subregion of this lattice tessellation of the plane is illustrated below.

We then enrich the information of our space by constructing centroids associated with each hexagon in the lattice. The guiding idea is to represent each face of the lattice by its centroid, treating these points as abstract proxies for the faces themselves.

These centroids are connected precisely when their corresponding hexagons share a common edge in the original lattice. Due to the hexagonal tiling’s geometry, this adjacency pattern induces a regular network of connections between centroids, reflecting the staggered arrangement of the hexagons.

This process yields a new structure: the dual lattice. In this case, the resulting dual is the triangular lattice. This dual lattice encodes the connectivity of the original tiling’s faces through its vertices and edges.

As a brief preview of the questions to come, our goal is to identify triangles within this expansive lattice. We begin by requiring that the vertices of these triangles originate from the centroids of the hexagonal faces. To then encode such triangular structures, we leverage both the primal and dual lattices, combining them so that the edges of these triangles can exist on any of the black paths shown in the diagram.

What is particularly interesting here is that the current construction arises from 1-neighbor connections, which we interpret as close-range interactions. However, this 1-neighbor primal-dual framework restricts the set of possible triangles to only equilateral ones. (Personally, I feel this limits the complexity of potential problems.)

On this grid, every triangle’s got equal sides; I guess this side of math is tired of drama. Trying to spot a scalene here is like trying to find a unicorn in a math textbook.

To make the question more engaging and challenging, we seek to incorporate more “medium-to-further-range interactions“. This can be accomplished by connecting every second neighbor in an analogous fashion, thereby constructing a “triple space“. The resulting structure forms another dual lattice that captures these extended adjacency relations, allowing for a richer variety of triangular configurations beyond the equilateral case. For generality, I will coin the term for this as 2-close-range-interactions.

A lazy yet elegant approach to achieve this is by rotating the original dual lattice by \(90\) degrees, or, more generally, by any multiple of \(30\) degrees (i.e., \(30 \times n\) degrees; we will use this generality in the final section to extend the problem scope even further). We then combine this rotated lattice with the original construction.

We will illustrate the process below.

The 2-close-range interaction:

We begin, as before, with the original primal and dual lattices.

We now rotate the blue dual lattice by \(90\) degrees, as described previously.

Combining the rotated dual with the original, as before, yields the following structure; and, as the keen-eyed among you may notice, both dual constructions cover the entire original primal hexagonal lattice.

This finally yields the following existence space for the triangles.

And now, we witness the emergence of a richer and more interesting variety of triangle configurations.

Wow, so many triangles to choose from here! Great, just what I wanted, more problems (to select from)… wait, hold up…

The Random Interface

To introduce an interesting stochastic element to our problem, we draw inspiration from percolation theory (a framework in statistical physics that examines how randomness influences the formation of connected clusters within a network). Specifically, we focus on the concept of disorder, exploring how the presence of randomness or irregularities in a system affects the formation of connected clusters. In our context, this translates to the formation of triangles within the lattice.

Building upon the original hexagonal lattice, we assign to each edge a probability \(p\) of being “open” or “existent”, and consequently, a probability \(1 – p\) of being “closed” or “nonexistent”. This approach mirrors the bond percolation model, where edges are randomly assigned to be open or closed, independent of other edges. The open edges allow for the traversal of triangle edges, akin to movement through a porous medium, facilitating the formation of triangles only through the connected open paths.

We illustrate using the figure a sample configuration when \(p = 0.5\), where the red “exit” edges represent “open” edges that allow triangles to traverse through the medium.

Rules regarding traversal of this random interface:

As mentioned earlier, every red “open” gate permits traversal. To define this traversal more clearly in our problem setting, we can visualise using the following.

In the 2-close-range interaction case, this traversal is either in the tangential direction

Apologies for the somewhat messy hand-drawn centroids and wonky arrows…

or in the normal direction.

Once again, ignore the horribly hand-drawn centroids, please… Also note that in both cases, the arrows (specifically the arrowheads) are included purely for illustrative purposes. The traversal can be considered bidirectional, depending on the context; that is, each triangle may be treated either as an undirected or a directed graph, depending on the construction method being employed by the problem solver (i.e., you 😉).
An aside on Interaction Modeling

This model assumes that long-range interactions emerge from specified short-range interactions. We can see this in our construction of the 2-close case, which explicitly excludes any secondary dual lattice edges (2-close interactions) that can be generated from first-neighbor edges (1-close interactions). This ensures that the secondary dual lattice contains only truly second-neighbor interactions, none implied by (or passing through) the first-neighbor structure, preserving a clear hierarchical interaction scheme.

The Problems

An “Easy” starting question :

Given some finite subsection of this random interface, construct an algorithm to find the largest triangle (in terms of perimeter or area; the choice of definition depends on which variant of the problem you want to solve).

Attempt at formalising this
1) Definition of Lattices
Hexagon Lattice

Let the hexagonal lattice be denoted by:
$$
H = (C, E_h)
$$


where:

  • \(C = \{ c_i \} \subset \mathbb{R}^2\) is the set of convex hexagonal cells that partition the Euclidean plane \(\mathbb{R}^2\),
  • \(E_h = \{ (c_i, c_j) \in C \times C \mid c_i \text{ and } c_j \text{ share a common edge} \}\) is the adjacency relation between hexagons.

Each hexagon edge \(e_h \in E_h\) is activated independently with probability \(p \in [0,1]\):
$$
\Pr(\sigma_h(e_h) = 1) = p, \quad \Pr(\sigma_h(e_h) = 0) = 1-p
$$
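A minimal Python sketch of this edge activation, in the spirit of bond percolation; the edge list below is a toy placeholder, not an actual encoding of \(E_h\):

```python
import random

# Bond-percolation-style edge activation: each edge is open independently
# with probability p. Edge identifiers here are arbitrary labels.
def activate_edges(edges, p, seed=None):
    rng = random.Random(seed)
    return {e: 1 if rng.random() < p else 0 for e in edges}

edges = [(i, i + 1) for i in range(10)]   # toy stand-in for E_h
sigma_h = activate_edges(edges, p=0.5, seed=42)
open_edges = [e for e, s in sigma_h.items() if s == 1]
print(open_edges)
```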

Primary Dual Lattice \(T_1 = (V, E_t^{(1)})\)

The primary dual lattice connects the centroids of adjacent hexagons, where \(V = \{ v_i \}\) is the set of centroids and \(v_i\) denotes the centroid of cell \(c_i\):
$$
E_t^{(1)} = \left \{ (v_i, v_j) \in V \times V \,\middle|\, (c_i, c_j) \in E_h \right \}
$$

Secondary Dual Lattice \(T_2 = (V, E_t^{(2)})\)

The secondary dual lattice captures 2-close interactions. An edge \((v_i, v_j) \in E_t^{(2)}\) exists if and only if:

  1. \(c_i \neq c_j\) and \((v_i, v_j) \notin E_t^{(1)}\) (i.e., not first neighbors),
  2. There exists a common neighboring cell \(c_k \in C\) such that:
    $$
    (c_i, c_k) \in E_h \quad \text{and} \quad (c_j, c_k) \in E_h
    $$
  3. The straight-line segment \(\overline{v_i v_j}\) does not lie entirely within \(c_i \cup c_j \cup c_k\):
    $$
    \overline{v_i v_j} \not\subseteq c_i \cup c_j \cup c_k
    $$

Hence, the formal definition:
$$
E_t^{(2)} = \left\{ (v_i, v_j) \in V \times V \ \middle| \
\begin{aligned}
& c_i \ne c_j,\ (v_i, v_j) \notin E_t^{(1)}, \\
& \exists\, c_k \in C:\ (c_i, c_k), (c_j, c_k) \in E_h, \\
& \overline{v_i v_j} \not\subseteq c_i \cup c_j \cup c_k
\end{aligned}
\right\}
$$

Combined Dual Lattice

The complete dual lattice is defined as:
$$
T = (V, E_t), \quad \text{where} \quad E_t = E_t^{(1)} \cup E_t^{(2)}
$$

2) Edge Activation
Hexagon Edge Activation

Each hexagon edge \(e_h \in E_h\) is independently activated with probability \(p\) as:
$$
\sigma_h(e_h) \sim \text{Bernoulli}(p)
$$

Dual Edge Activation

Each dual edge \(e = (v_i, v_j) \in E_t\) has an activation state \(\sigma(e) \in \{0,1\}\) defined by the following condition:

$$
\sigma(e) =
\begin{cases}
1 & \text{if } \overline{v_i v_j} \text{ lies on, or is perpendicular to, some } e_h \in E_h \text{ with } \sigma_h(e_h) = 1, \\
0 & \text{otherwise}
\end{cases}
$$

In other words, a dual edge is activated only if it corresponds spatially to a hexagon edge that is itself activated, either by lying exactly on it or intersecting it at right angles.

3) Interaction Constraints

Local Traversal: Paths across the lattice are constrained to activated edges:
$$
P(v_a \rightarrow v_b) = \{\, e_1, e_2, \dots, e_k \mid \sigma(e_i) = 1,\ e_i \in E_t \,\}
$$

Cardinality:
$$
\deg_{T_1}(v_i) \leq 6, \quad \deg_{T_2}(v_i) \leq 6
$$
Each vertex connects to at most 6 first neighbors (in \(T_1\)) and at most 6 geometrically valid second neighbors (in \(T_2\)).

4) Triangle Definition

A valid triangle \(\Delta_{abc} = (v_a, v_b, v_c)\) must satisfy:

  1. Connectivity: All three edges are activated:
    $$
    \sigma(v_a v_b) = \sigma(v_b v_c) = \sigma(v_c v_a) = 1
    $$
  2. Embedding: The vertices \((v_a, v_b, v_c)\) form a non-degenerate triangle in \(\mathbb{R}^2\), preserving local lattice geometry.
  3. Perimeter: Defined as the total edge length:
    $$
    P(\Delta_{abc}) = |v_a - v_b| + |v_b - v_c| + |v_c - v_a|
    $$
  4. Area: Using the standard formula for triangle area from vertices:
    $$
    A(\Delta_{abc}) = \frac{1}{2} \left| (v_b - v_a) \times (v_c - v_a) \right|
    $$
    where \(\times\) denotes the 2D scalar cross product (i.e., determinant).

Triangles may include edges from either \(E_t^{(1)}\) or \(E_t^{(2)}\) as long as they meet the activation condition.
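The perimeter and area definitions translate directly into Python; a short sketch:

```python
import math

# Perimeter and area of a candidate triangle, as defined above.
def perimeter(va, vb, vc):
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return d(va, vb) + d(vb, vc) + d(vc, va)

def area(va, vb, vc):
    # 2D scalar cross product of (vb - va) and (vc - va)
    cross = (vb[0] - va[0]) * (vc[1] - va[1]) - (vc[0] - va[0]) * (vb[1] - va[1])
    return 0.5 * abs(cross)

va, vb, vc = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
print(perimeter(va, vb, vc))  # → 2 + sqrt(2)
print(area(va, vb, vc))       # → 0.5
```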

5) Optimization Objective

Given the dual lattice \(T = (V, E_t)\) and activation function \(\sigma: E_t \rightarrow \{0,1\}\), the goal is to find the valid triangle maximizing either:

  • Perimeter:
    $$
    \underset{\Delta_{abc}}{\mathrm{argmax}} \; P(\Delta_{abc})
    $$
  • Area:
    $$
    \underset{\Delta_{abc}}{\mathrm{argmax}} \; A(\Delta_{abc})
    $$

subject to \(\Delta_{abc}\) being a valid triangle.
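A naive \(O(|V|^3)\) brute-force sketch of this objective in Python, on a small hypothetical graph (the coordinates and activated edges below are illustrative placeholders, not a real dual lattice):

```python
import math
from itertools import combinations

# Enumerate all vertex triples whose three edges are activated and
# keep the one with the largest area.
def area(va, vb, vc):
    return 0.5 * abs((vb[0]-va[0])*(vc[1]-va[1]) - (vc[0]-va[0])*(vb[1]-va[1]))

def largest_triangle(coords, active_edges):
    """coords: {vertex_id: (x, y)}; active_edges: set of frozensets {i, j}
    with sigma = 1. Returns (best_triple, best_area)."""
    best, best_area = None, 0.0
    for a, b, c in combinations(coords, 3):
        if all(frozenset(e) in active_edges
               for e in [(a, b), (b, c), (c, a)]):
            ar = area(coords[a], coords[b], coords[c])
            if ar > best_area:
                best, best_area = (a, b, c), ar
    return best, best_area

coords = {0: (0, 0), 1: (2, 0), 2: (1, 2), 3: (3, 2)}
active = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2), (1, 3)]}
print(largest_triangle(coords, active))  # → ((0, 1, 2), 2.0)
```

Swapping `area` for a perimeter function gives the other variant of the objective; anything faster than cubic enumeration is left as part of the open problem.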

Example of a triangle we can find in this sample of random interface.

Harder Open Questions:

1) What are the scaling limit behaviors of the triangle structures that emerge as the lattice size grows?

In other words: As we consider larger and larger domains, does the global geometry of triangle configurations converge to some limiting structure or distribution?

Ideas to explore:

Do typical triangle shapes concentrate around certain forms?

Are there limiting ratios of triangle types (e.g., equilateral vs. non-equilateral)?

Does a macroscopic geometric object emerge (in the spirit of scaling limits in percolation, e.g., SLE curves on critical lattices)?

An attempt to formalise this would be as follows:

Let’s denote by \(T_p(L)\) the set of all triangles that can be formed inside a finite region of side length \(L\) on our probabilistic lattice, where each edge exists independently with probability \(p\).

Conjecture: As \(L\) becomes very large (tends to infinity), the distribution of triangle types (e.g. equilateral, isosceles, scalene, acute, obtuse, right-angled) converges to a well-defined limiting distribution that depends only on \(p\) (i.e. it converges in law to some limiting distribution \(\mu_p\)).

Interpretation of expected behaviours:

  • For high \(p\) (near 1), most triangles are close to equilateral.
  • For low \(p\) (near 0), triangles become rare or highly irregular.
  • This limiting distribution, \(\mu_p\), tells us the “global shape profile” of triangles formed under randomness.
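The triangle types appearing in the conjecture can be classified numerically. A sketch (the floating-point tolerance is my own choice, needed because lattice coordinates involve \(\sqrt{3}\)):

```python
import math

def classify(va, vb, vc, tol=1e-9):
    """Return (side_type, angle_type) for the triangle (va, vb, vc)."""
    a = math.dist(vb, vc)  # side opposite va
    b = math.dist(vc, va)
    c = math.dist(va, vb)
    # Side classification, comparing lengths pairwise
    eq = [math.isclose(x, y, abs_tol=tol) for x, y in ((a, b), (b, c), (c, a))]
    if all(eq):
        side = "equilateral"
    elif any(eq):
        side = "isosceles"
    else:
        side = "scalene"
    # Angle classification via the law-of-cosines sign on the largest angle
    a, b, c = sorted((a, b, c))          # c is the longest side
    d = a * a + b * b - c * c
    if math.isclose(d, 0.0, abs_tol=tol):
        angle = "right"
    elif d > 0:
        angle = "acute"
    else:
        angle = "obtuse"
    return side, angle

print(classify((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ('equilateral', 'acute')
print(classify((0, 0), (1, 0), (0, 1)))                   # ('isosceles', 'right')
```

Running such a classifier over all valid triangles in a sampled region and histogramming the results is one concrete way to probe the conjectured limiting distribution \(\mu_p\) empirically.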
2) How does the distribution of triangle types (e.g., equilateral, isosceles, scalene; or acute, right, obtuse) vary with the probability parameter \(p\)?

Why this matters: At low \(p\), triangle formation is rare, and any triangle that does form may be highly irregular. As \(p\) increases, more symmetric (or complete) structures may become common.

Possible observations:

Are equilateral triangles more likely to survive at higher \(p\) due to more consistent connectivity?

Do scalene or right-angled triangles dominate at intermediate values of \(p\) due to partial disorder?

Is there a characteristic “signature” of triangle types at criticality?

3) Can we define an interesting phase transition in this setting, marking a shift in the global geometry of the triangle space?

In other words: As we gradually increase the edge probability \(p\), at what point does the typical shape of the triangles we find change sharply?

An attempt to formalise this would be as follows:

There exist two critical probability values \(p_1\) and \(p_2\) (where \(0 < p_1 < p_2 < 1\)) that mark qualitative transitions in the kinds of triangles we typically observe.

Conjecture:

  • When \(p < p_1\), most triangles that do form are highly irregular (i.e. scalene).
  • When \(p_1 < p < p_2\), the system exhibits a diverse mix of triangle types (more equilateral and isosceles triangles, with less domination by scalene, plus variations of acute, right-angled, and obtuse).
  • When \(p > p_2\), the triangles become increasingly regular, and equilateral triangles dominate.

This suggests a “geometric phase transition” in the distribution of triangle shapes, driven purely by the underlying connectivity probability.

4) Is there a percolation threshold at which large-scale triangle connectivity suddenly emerges?

Let’s define a triangle cluster as a set of triangles that are connected via shared edges or vertices (depending on your construction). We’re interested in whether such triangle clusters can become macroscopically large.

Conjecture:
There exists a critical probability \(p_c\) such that:

  • If \(p < p_c\), then all triangle clusters are finite (they don’t span the system).
  • If \(p > p_c\), then with high probability (as the system size grows), a giant triangle cluster emerges that spans the entire domain.

This is analogous to the standard percolation transition, but now with triangles instead of edge or site clusters.
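The triangle-cluster notion can be made concrete with a union-find pass. A sketch, taking "connected" to mean sharing an edge (the source leaves the choice between edge- and vertex-sharing open, so this is one of the two possible conventions):

```python
def triangle_clusters(triangles):
    """Group triangles that share an edge, via union-find.

    triangles: list of 3-tuples of vertex ids.
    Returns a list of clusters, each a list of triangle indices.
    """
    parent = list(range(len(triangles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Map each undirected edge to the triangles containing it
    edge_to_tris = {}
    for idx, (a, b, c) in enumerate(triangles):
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
            edge_to_tris.setdefault(e, []).append(idx)
    # Merge all triangles sharing an edge
    for tris in edge_to_tris.values():
        for other in tris[1:]:
            union(tris[0], other)

    clusters = {}
    for idx in range(len(triangles)):
        clusters.setdefault(find(idx), []).append(idx)
    return list(clusters.values())

# Two triangles sharing edge (1, 2) form one cluster; a third is isolated.
tris = [(0, 1, 2), (1, 2, 3), (4, 5, 6)]
print(triangle_clusters(tris))  # [[0, 1], [2]]
```

Tracking the size of the largest cluster returned by such a routine as \(p\) varies is the natural numerical experiment for locating the conjectured \(p_c\).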

(The first attempt setting up this was whether a single giant triangle can engulf the whole space but in retrospect, this would not be possible and is not such an interesting question.)

5) Results on Triangle Density

As we grow the system larger and larger, the number of triangles per unit area should settle down to a predictable average, but this average depends heavily on the edge probability \(p\).

Let \(N_p(L)\) denote the total number of triangles formed within a square region of side length \(L\).

We define the triangle density as

$$
\rho_p(L) := \frac{N_p(L)}{L^2},
$$

which is the average number of triangles per unit area.

Conjecture: As \(L \to \infty\), the triangle density converges to a deterministic function \(\rho(p)\):
$$
\lim_{L \to \infty} \rho_p(L) = \rho(p).
$$
(Here, \(\rho(p)\) captures the asymptotic average density of triangles in the infinite lattice for edge probability \(p\).)

Furthermore, the variance of \(N_p(L)\) satisfies
$$
\mathrm{Var}[N_p(L)] = o(L^4),
$$
i.e. the variance of the number of triangles grows more slowly than the square of the area, which implies that the relative fluctuations vanish as \(L \to \infty\):
$$
\frac{\sqrt{\mathrm{Var}[N_p(L)]}}{L^2} \to 0.
$$
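A Monte Carlo estimate of \(\rho_p(L)\) can be sketched on a simplified model: a rhombic \(L \times L\) patch of the plain triangular lattice, counting only the smallest upward-pointing triangles, with each triangle's three edges treated as independent. This is a deliberate simplification of the dual construction in the post (there, edges are shared between neighboring triangles), chosen so the expected density is exactly \(p^3\) and the estimator can be sanity-checked:

```python
import random

def triangle_density(L, p, seed=0):
    """Estimate rho_p(L): up-triangles with all 3 edges open, per unit area.

    Simplified model: each up-triangle's three edges are sampled
    independently with probability p (edge sharing is ignored).
    """
    rng = random.Random(seed)
    count = 0
    for i in range(L):
        for j in range(L):
            # The up-triangle at cell (i, j) survives iff all 3 edges are open
            if all(rng.random() < p for _ in range(3)):
                count += 1
    return count / (L * L)

est = triangle_density(200, 0.5, seed=1)
print(est)  # ≈ 0.125 (= p**3) for large L
```

With shared edges reinstated, neighboring triangle events become correlated and \(\rho(p)\) deviates from \(p^3\); measuring that deviation is exactly what the conjecture above is about.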

The function \(\rho(p)\) is analogous to the percolation density in classical bond percolation, which measures the expected density of occupied edges or clusters. Just as percolation theory studies the emergence and scaling of connected clusters, here \(\rho(p)\) characterizes the density of geometric structures (triangles) emerging from random edge configurations.

The subquadratic variance scaling (relative to the area) reflects a law of large numbers effect in spatial random graphs: as the region grows, the relative randomness in triangle counts smooths out, enabling well-defined macroscopic statistics.

This parallels known results in random geometric graphs and continuum percolation, where densities of subgraph motifs stabilize in large limits. This behavior may also hint at connections to scaling exponents or universality classes from statistical physics. (With certain constraints, one could classify the triangles or regions of connected triangles as interfaces and make connections with KPZ-type problems too; possibly another conjecture, but I think five problems to think about are more than enough for now.)


What are some applications?

While I approach this problem primarily from a theoretical lens, with an interest in its mathematical structure and the questions it naturally raises, I also recognize that others may find motivation in more tangible or applied outcomes. To speak to that perspective, I include two detailed examples that illustrate how ideas from this setting can extend into meaningful real-world contexts. The aim is not to restrict the problem’s scope to practical ends, but to show how the underlying tools and insights may resonate across domains and disciplines.

—–//—–

Application 1

1) Retopology of a 3D Scan

Consider the setting of high-resolution 3D scanning, where millions of sample points capture the geometry of a complex surface. These points are typically distributed (or essentially clumped) within an \(\varepsilon\)-thick shell around the true surface geometry, often due to the scan exhibiting local noise, redundancy, or sampling irregularities. Within this context, our framework finds a natural interpretation.

One can view the scan as a point cloud embedded on a virtual, warped hexagonal lattice, where the underlying lattice geometry adapts to local sampling density. By introducing a probability field, governed by factors such as local curvature, scan confidence, or spatial regularity, we assign likelihoods to potential connections between neighboring points.

In this formulation, the task of finding large, connected configurations of triangles corresponds to the process of identifying coherent, semantically meaningful patches of the surface. These patches serve as candidates for retopologization (the construction of a low-resolution, structured mesh that preserves the essential geometric features of the scan while filtering out noise and redundancies). This resulting mesh is significantly more amenable to downstream applications such as animation, simulation, or interactive editing.

The nearest-neighbor locality constraint we impose reflects a real physical prior, i.e. mesh edges should not “jump” across unconnected regions of the surface. This constraint promotes local continuity and geometric surface fidelity.

Application 2

2) Fault Line Prediction in Material Fracture

In materials science, understanding how fractures initiate and propagate through heterogeneous media is a central problem. At microscopic scales, materials often have grain boundaries, impurities, or irregular internal structures that influence where and how cracks form. This environment can be abstracted as a disordered graph or lattice, where connections (or lack thereof) between elements model structural integrity or weakness.

We can reinterpret such a material as a random geometric graph laid over a deformed hexagonal lattice, with each connection representing a potential atomic or molecular bond. Assign a probability field to each edge, governed by local stress, strain, or material heterogeneity, capturing the likelihood that a bond remains intact under tension.

Triangles in this setting can be thought of as minimal stable configurations. As stress increases or defects accumulate, connected triangle structures begin to break down. By tracking the largest connected triangle regions, one may identify zones of high local integrity, while their complement reveals emergent fault lines, regions where fracture is likely to nucleate.

Thus, searching for maximal triangle clusters under stochastic constraints becomes a method for predicting failure zones before actual crack propagation occurs. Moreover, one can study how the geometry and size distribution of these triangle clusters vary with external stress or material disorder, giving rise to scaling laws or percolation-like transitions.

This formulation connects naturally with known models of fracture networks, but adds a new geometric lens: triangulated regions become proxies for mechanical stability, and their disintegration offers a pathway to understand crack emergence.

—–//—–

Again, these are just two specific lenses. At its core, the problem of finding large-scale, connected substructures under local stochastic constraints arises across many domains. It is easy to reformulate and adjust the constraints so that the tools developed here apply to other problems.


Auxiliary Insights

When constructing triangles, one can consider the set of paths accessible from a single vertex. To aid visualization, refer to the diagram below.

The blue paths are from the 1-close connections, while the pink paths correspond to 2-close connections. Together, they provide each vertex with a total of \(6 + 6 = 12\) possible directions or “degrees of freedom” along which it may propagate to form triangle edges.

Notably, the system exhibits a high degree of rotational symmetry, both the 1-close and 2-close path structures are invariant under rotations by \(60^\circ\). This symmetry can potentially be leveraged to design efficient search algorithms or classification schemes for triangle types within the lattice.
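The \(6 + 6 = 12\) propagation directions can be generated explicitly: the 1-close directions sit at angles \(k \cdot 60^\circ\) with unit length, and the 2-close directions at \(30^\circ + k \cdot 60^\circ\) with length \(\sqrt{3}\), the standard second-neighbor distance on the triangular lattice. A sketch:

```python
import math

def directions():
    """Return the 12 propagation directions: 6 first-neighbor (length 1)
    and 6 second-neighbor (length sqrt(3), offset by 30 degrees)."""
    dirs = []
    for k in range(6):
        a1 = math.radians(60 * k)        # 1-close: the blue paths
        dirs.append((math.cos(a1), math.sin(a1)))
        a2 = math.radians(30 + 60 * k)   # 2-close: the pink paths
        r = math.sqrt(3)
        dirs.append((r * math.cos(a2), r * math.sin(a2)))
    return dirs

d = directions()
print(len(d))  # 12
# Rotating the whole set by 60 degrees maps it onto itself (sixfold symmetry),
# which is the invariance mentioned above.
```

Checking that a \(60^\circ\) rotation permutes this set is a one-line verification of the sixfold symmetry, and the same set is the natural move table for any triangle-search algorithm on the lattice.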

Extension to the n-close case

So far, we’ve been using triangles as a proxy for a special class of simple closed-loop interaction pathways (i.e. structures formed by chaining together local movements across the lattice). Specifically, we restricted ourselves to the 1- and 2-close interaction setting.

Naturally, a question arises: can we generalize this further? Specifically, what sort of angular adjustment or geometric rule would be needed to define an n-close interaction, beyond the 1- and 2-close cases we’ve explored? (Looking above at the auxiliary insights and the construction of the 2-close case, we can see that a \(30^\circ\) rotation offset is used as the basis.)

This leads to a broader and richer family of questions. In my other blog post (will be linked soon after it is posted), I’ll pose precisely this: “How can we extend the 2-close interaction to a general n-close interaction?”. There, I’ll also provide a closed-form formula for generating any term in the sequence of such \(n\)-close interactions.

As always, I encourage you to attempt the question yourself before reading the solution; it may seem straightforward at first, but the correct sequence or geometric structure is not always immediately obvious.

Final Thoughts

Crafting this problem, presenting some drawings and formalising certain aspects was an interesting challenge for me. As always, I welcome any feedback on how to improve the presentation or to formalise certain sections more rigorously if you have any ideas. I’m especially looking forward to seeing and reading about any progress made on the questions posed here, attempts at their variations, or even presentation of potential applications.

Cheers and have an hex-traordinary day ahead!

Srivathsan Amruth
i.e. the writer of this masterpiece of a chaotic word salad, who apologizes for your lost minutes
Hey, why are you still reading? … Shouldn’t you be pretending to work? ; – )