You are currently browsing the category archive for the ‘Euclidean Geometry’ category.

A couple of weeks ago, my student Yan Mary He presented a nice proof of Liouville’s theorem to me during our weekly meeting. The proof was the one from Benedetti-Petronio’s Lectures on Hyperbolic Geometry, which in my book gets lots of points for giving careful and complete details, and for being self-contained and therefore accessible to beginning graduate students. Liouville’s Theorem is the fact that any conformal map between open subsets of Euclidean space of dimension at least 3 is a Mobius transformation, i.e. it looks locally like the restriction of a composition of Euclidean similarities and inversions in round spheres. This implies that the image of a piece of a plane or round sphere is a piece of a plane or round sphere, a highly rigid constraint. This sort of rigidity is in stark contrast to the case of conformal maps in dimension 2: any holomorphic (or antiholomorphic) map between open regions in the complex plane is a conformal map (and conversely). The proof given in Benedetti-Petronio is certainly clear and readable, and gives all the details; but Mary and I were a bit unsatisfied that it did not really provide any geometric insight into the meaning of the theorem. So the purpose of this blog post is to give a short sketch of a proof of Liouville’s theorem which is more geometric, and perhaps easier to remember.

Last week while in Tel Aviv I had an interesting conversation over lunch with Leonid Polterovich and Yaron Ostrover. I happened to mention the following gem from the remarkable book A=B by Wilf-Zeilberger. The book contains the following Theorem and “proof”:

Theorem 1.4.2. For every triangle ABC, the angle bisectors intersect at one point.

Proof. Verify this for the 64 triangles for which the angles at A and B are each one of 10, 20, 30, $\cdots$, 80 degrees. Since the theorem is true in these cases, it is always true.

We are asked the provocative question: is this proof acceptable? The philosophy of the W-Z method is illustrated by pointing out that this proof is acceptable if one adds for clarity the remark that the coordinates of the intersections of the pairs of angle bisectors are rational functions of degree at most 7 in the tangents of A/2 and B/2; hence if they agree at 64 points they agree everywhere.

Leonid countered with a personal anecdote. Recall that an altitude in a triangle is a line through one vertex which is perpendicular to the opposite edge. Leonid related that one day his geometry class (I forget the precise context) were given the problem of showing that the altitudes in a hyperbolic triangle (i.e. a triangle in the hyperbolic plane) meet at a single point — the orthocenter of the triangle. After the class had struggled with this for some time, the professor laconically informed them that the result obviously followed immediately from the corresponding fact for Euclidean triangles “by analytic continuation”. Philosophically speaking, this is not too far from the W-Z example, although the details are slightly more shaky; in particular, the class of Euclidean triangles is not Zariski dense in the class of triangles in constant curvature spaces, so a little more remains to be done.

Actually, one might even go back and rethink the W-Z example — how exactly are we to verify that the angle bisectors intersect at a point for the triangles in question without doing a calculation no less complicated than the general case? Let’s raise the stakes further. After some thought, we see that not only will the intersections of pairs of angle bisectors be given by rational functions of the tangents of A/2 and B/2, but the (algebraic) heights of the coefficients of these rational functions can be easily estimated, and one can therefore compute an effective lower bound on how far apart the intersections of the angle bisectors would be if they were not equal. We can then literally draw the triangles on a piece of physical paper using a protractor, and verify by eyesight that the angle bisectors appear to coincide to within the necessary accuracy. After rigorously estimating the experimental errors, we can write qed.
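Out of curiosity, the verification step at least can be carried out numerically. Here is a quick floating-point sketch (not the exact rational-function argument the W-Z philosophy actually calls for; all names are my own) which, for each of the 64 triangles, intersects two pairs of angle bisectors and checks the intersection points agree:

```python
import math

def intersect(p, d, q, e):
    """Intersection of the lines p + t*d and q + s*e in the plane."""
    det = d[0] * e[1] - d[1] * e[0]
    t = ((q[0] - p[0]) * e[1] - (q[1] - p[1]) * e[0]) / det
    return (p[0] + t * d[0], p[1] + t * d[1])

def check_bisectors(A_deg, B_deg):
    """Distance between the intersection of the bisectors at A, B and
    the intersection of the bisectors at A, C; zero iff concurrent."""
    a, b = math.radians(A_deg), math.radians(B_deg)
    c = math.pi - a - b                    # angle at C
    A, B = (0.0, 0.0), (1.0, 0.0)          # normalize so AB = 1
    AC = math.sin(b) / math.sin(c)         # law of sines
    C = (AC * math.cos(a), AC * math.sin(a))
    # interior bisector directions at A and B
    dA = (math.cos(a / 2), math.sin(a / 2))
    dB = (math.cos(math.pi - b / 2), math.sin(math.pi - b / 2))
    # bisector at C: sum of unit vectors from C toward A and toward B
    uA, uB = (A[0] - C[0], A[1] - C[1]), (B[0] - C[0], B[1] - C[1])
    nA, nB = math.hypot(*uA), math.hypot(*uB)
    dC = (uA[0] / nA + uB[0] / nB, uA[1] / nA + uB[1] / nB)
    p1 = intersect(A, dA, B, dB)
    p2 = intersect(A, dA, C, dC)
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

worst = max(check_bisectors(A, B) for A in range(10, 90, 10)
                                  for B in range(10, 90, 10))
assert worst < 1e-9
```

Of course, this only verifies concurrency up to floating-point accuracy; turning it into a proof is exactly where the height estimates above come in.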

The other day by chance I happened to look at Richard Kenyon’s web page, and was struck by a very beautiful animated image there. The image is of a region tiled by colored squares, which are slowly rotating. As the squares rotate, they change size in such a way that the new (skewed, resized) squares still tile the same region. I thought it might be fun to try to guess how the image was constructed, and to produce my own version of his image.

This morning I was playing trains with my son Felix. At the moment he is much more interested in laying the tracks than putting the trains on and moving them around, but he doesn’t tend to get concerned about whether the track closes up to make a loop. The pieces of track are all roughly the following shape:

In this post, I will cover triangles and area in spaces of constant (nonzero) curvature. The focus is on hyperbolic space, but we will also discuss spheres and the Gauss-Bonnet theorem.

1. Triangles in Hyperbolic Space

Suppose we are given 3 points in hyperbolic space ${\mathbb{H}^n}$. A triangle with these points as vertices is a set of three geodesic segments with these three points as endpoints. The fact that there is a unique such triangle requires a (brief) proof. Consider the hyperboloid model: three points on the hyperboloid, together with the origin, span a 3-dimensional real subspace of ${\mathbb{R}^{n+1}}$. Intersecting this subspace with the hyperboloid gives a copy of ${\mathbb{H}^2}$, so we only have to check there is a unique triangle in ${\mathbb{H}^2}$. For this, consider the Klein model: triangles are euclidean triangles, so there is only one with three given vertices.

In hyperbolic space, it is still true that knowing enough side lengths and/or angles of a triangle determines it. For example, knowing two side lengths and the angle between them determines the triangle. Similarly, knowing all the angles determines it. However, not every set of angles can be realized (in euclidean space, for example, the angles must sum to ${\pi}$), and the inequalities which must be satisfied are more complicated in hyperbolic space.

2. Ideal Triangles and Area Theorems

We can think about moving one (or more) of the vertices of a hyperbolic triangle off to infinity (the boundary of the disk). An ideal triangle is one with all three “vertices” on the boundary (the vertices do not actually exist in hyperbolic space). Using a conformal map of the disk (which is an isometry of hyperbolic space), we can move any three points on the boundary to any other three points, so up to isometry, there is only one ideal triangle. Since our metric is fixed, this triangle has a well-defined area. The logically consistent way to find it is with an integral, since we will use this fact in our proof sketch of Gauss-Bonnet; but as a remark, suppose we knew Gauss-Bonnet. Imagine a triangle very close to ideal. The curvature is ${-1}$, and the Euler characteristic is ${1}$. The sum of the exterior angles is just slightly under ${3\pi}$, so by Gauss-Bonnet the area is very close to ${\pi}$, and goes to ${\pi}$ as we push the vertices off to infinity.

One note: suppose we know what the geodesics are, and suppose we know the area of an ideal triangle (say we just defined it to be ${\pi}$ without knowing the curvature). Then by pasting together ideal triangles, as we will see, we could find the area of any triangle. That is, the key to understanding area is knowing the area of an ideal triangle.

As mentioned above, there is a single triangle, up to isometry, with given angles, so denote the triangle with angles ${\alpha, \beta, \gamma}$ by ${\Delta(\alpha, \beta, \gamma)}$.

2.1. Area

Knowing the area of an ideal triangle allows us to calculate the area of any triangle. In fact:

Theorem 1 (Gauss) ${\mathrm{area}(\Delta(\alpha, \beta, \gamma)) = \pi - (\alpha + \beta + \gamma)}$

This geometric proof relies on the fact that angles in the Poincaré model are the euclidean angles in the model. Consider the generic picture:

We have extended the sides of ${\Delta(\alpha, \beta, \gamma)}$ and drawn the ideal triangle containing these geodesics. Since the angles are what they look like, we know that the area of ${\Delta(\alpha,\beta,\gamma)}$ is the area of the ideal triangle (${\pi}$), minus the sum of the areas of the smaller triangles with two points at infinity:

$\displaystyle \mathrm{area}(\Delta(\alpha, \beta, \gamma)) = \pi - \mathrm{area}(\Delta(\pi-\alpha, 0,0)) - \mathrm{area}(\Delta(\pi-\beta, 0, 0)) - \mathrm{area}(\Delta(\pi-\gamma, 0, 0))$

Thus it suffices to show that ${\mathrm{area}(\Delta(\pi - \alpha, 0, 0)) = \alpha}$.

For this fact, we need another picture:

Define ${f(\alpha) = \mathrm{area}(\Delta(\pi-\alpha, 0, 0))}$. The picture shows that the area of the left triangle (with two vertices at infinity and one near the origin) plus the area of the right triangle is the area of the top triangle plus the area of the (ideal) bottom triangle:

$\displaystyle f(\alpha) + f(\beta) = f(\alpha+\beta-\pi) + \pi$

We also know some boundary conditions on ${f}$: we know ${f(0) = 0}$ (this is a degenerate triangle) and ${f(\pi) = \pi}$ (this is an ideal triangle). We therefore conclude that

$\displaystyle f(\frac{\pi}{2}) + f(\frac{\pi}{2}) = f(0) + \pi \qquad \Rightarrow \qquad f(\frac{\pi}{2}) = \frac{\pi}{2}$

Similarly,

$\displaystyle 2f(\frac{3\pi}{4}) = f(\frac{\pi}{2}) + \pi \qquad \Rightarrow \qquad f(\frac{3\pi}{4}) = \frac{3\pi}{4}$

And we can find ${f(\pi/4) = \pi/4}$ by observing that

$\displaystyle f(\frac{3\pi}{4}) + f(\frac{\pi}{2}) = f(\frac{\pi}{4}) + \pi$

Similarly, if we know ${f(\frac{k\pi}{2^n}) = \frac{k\pi}{2^n}}$, then

$\displaystyle f(\frac{(2^{n+1}-1)\pi}{2^{n+1}}) = \frac{(2^{n+1}-1)\pi}{2^{n+1}}$

And, applying the functional equation with ${\beta = \pi - \frac{\pi}{2^{n+1}}}$, we can subtract ${\pi/2^{n+1}}$ repeatedly, so ${f(\frac{k\pi}{2^{n+1}}) = \frac{k\pi}{2^{n+1}}}$ for every ${k}$. By induction, then, ${f(\alpha) =\alpha}$ whenever ${\alpha}$ is a dyadic rational multiple of ${\pi}$. This is a dense set, so we know ${f(\alpha) = \alpha}$ for all ${\alpha \in [0,\pi]}$ by continuity. This proves the theorem.

3. Triangles On Spheres

We can find a similar formula for triangles on spheres. A lune is a wedge of a sphere:

A lune.

Since the area of a lune is proportional to the angle at the peak, and the lune with angle ${2\pi}$ has area ${4\pi}$, the lune ${L(\alpha)}$ with angle ${\alpha}$ has area ${2\alpha}$. Now consider the following picture:

Notice that each corner of the triangle gives us two lunes (the lunes for ${\alpha}$ are shown) and that there is an identical triangle on the rear of the sphere. If we add up the area of all 6 lunes associated with the corners, we get the total area of the sphere, plus twice the area of both triangles since we have triple-counted them. In other words:

$\displaystyle 4\pi + 4\mathrm{area}(\Delta(\alpha, \beta,\gamma)) = 2L(\alpha) + 2L(\beta) + 2L(\gamma) = 4(\alpha + \beta + \gamma)$

Solving,

$\displaystyle \mathrm{area}(\Delta(\alpha, \beta,\gamma)) = \alpha + \beta + \gamma - \pi$
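This spherical formula is easy to check numerically: the sketch below (my own naming) computes the angle sum of a spherical triangle from tangent vectors, and compares the excess ${\alpha + \beta + \gamma - \pi}$ with the area computed by the Van Oosterom-Strackee solid-angle formula, an independent standard formula not used in the post:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(u):
    n = math.sqrt(dot(u, u))
    return tuple(a / n for a in u)

def corner_angle(u, v, w):
    """Interior angle at vertex u of the spherical triangle uvw (unit
    vectors): the angle between the tangent directions at u toward v, w."""
    tv = unit(tuple(v[i] - dot(u, v) * u[i] for i in range(3)))
    tw = unit(tuple(w[i] - dot(u, w) * u[i] for i in range(3)))
    return math.acos(max(-1.0, min(1.0, dot(tv, tw))))

def solid_angle(u, v, w):
    """Area on the unit sphere via Van Oosterom-Strackee:
    tan(area/2) = |det[u v w]| / (1 + u.v + v.w + u.w)."""
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(2.0 * math.atan2(det, 1 + dot(u, v) + dot(v, w) + dot(u, w)))

triangles = [((1, 0, 0), (0, 1, 0), (0, 0, 1)),   # octant: excess pi/2
             (unit((1, 1, 0)), unit((0, 1, 1)), unit((1, 0, 2)))]
for (u, v, w) in triangles:
    excess = (corner_angle(u, v, w) + corner_angle(v, w, u)
              + corner_angle(w, u, v) - math.pi)
    assert abs(excess - solid_angle(u, v, w)) < 1e-9
```

The octant triangle is the picture to keep in mind: three right angles, excess ${\pi/2}$, which is indeed one eighth of the sphere's area ${4\pi}$.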

4. Gauss-Bonnet

If we encounter a triangle ${\Delta}$ of constant curvature ${K(\Delta)}$, then we can rescale to reduce to one of the two formulas we just computed, so

$\displaystyle \mathrm{area}(\Delta) = \frac{\sum \mathrm{angles} - \pi}{K(\Delta)}$

This formula allows us to give a slightly handwavy, but accurate, proof of the Gauss-Bonnet theorem, which relates topological information (Euler characteristic) to geometric information (area and curvature). The proof will precede the statement, since this is really a discussion.

Suppose we have a closed Riemannian surface ${S}$ (closed, so for the time being it has no boundary); the curvature need not be constant. Triangulate it with very small triangles ${\Delta_i}$ such that ${\mathrm{area}(\Delta_i) \sim \epsilon^2}$ and ${\mathrm{diameter}(\Delta_i) \sim \epsilon}$. Then, since the curvature differs from its value ${K_\mathrm{midpoint}}$ at the midpoint of ${\Delta_i}$ by at most a constant times the distance from the midpoint,

$\displaystyle \int_{\Delta_i} K d\mathrm{area} = K_\mathrm{midpoint}\cdot \mathrm{area}(\Delta_i) + o(\epsilon^3)$

For each triangle ${\Delta_i}$, we can form a comparison triangle ${\Delta^c_i}$ with the same edge lengths and constant curvature ${K_\mathrm{midpoint}}$. Using the formula from the beginning of this section, we can rewrite the right hand side of the formula above, so

$\displaystyle \int_{\Delta_i} K d\mathrm{area} = \sum_{\Delta_i^c} \mathrm{angles} - \pi + o(\epsilon^3)$

Now since the curvature deviates by ${o(\epsilon^2)}$ times the distance from the midpoint, the angles in ${\Delta_i}$ deviate from those in ${\Delta_i^c}$ just slightly:

$\displaystyle \sum_{\Delta_i} \mathrm{angles} = \sum_{\Delta_i^c} \mathrm{angles} + o(\epsilon^3)$

So we have

$\displaystyle \int_{\Delta_i} K d\mathrm{area} = \sum_{\Delta_i} \mathrm{angles} - \pi + o(\epsilon^3)$

Therefore, summing over all triangles,

$\displaystyle \int_{S} K d\mathrm{area} = \sum_i \left[ \sum_{\Delta_i} \mathrm{angles} - \pi \right] + o(\epsilon)$

The right hand side is just the total angle sum. Since the angle sum around each vertex in the triangulation is ${2\pi}$,

$\displaystyle \sum_i \left[ \sum_{\Delta_i} \mathrm{angles} - \pi \right] = 2\pi V - \pi T$

where ${V}$ is the number of vertices, and ${T}$ is the number of triangles. The number of edges, ${E}$, can be calculated from the number of triangles: there are ${3}$ edges for each triangle, and each edge is counted twice, so ${E = \frac{3}{2} T}$. Rewriting the equation,

$\displaystyle \int_{S} K d\mathrm{area} = 2\pi (V - \frac{1}{2}T) + o(\epsilon) = 2\pi (V - E + T) + o(\epsilon) = 2\pi\chi(S) + o(\epsilon)$

Taking the mesh size ${\epsilon}$ to zero, we get the Gauss-Bonnet theorem ${\int_S K d\mathrm{area} = 2\pi\chi(S)}$.
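The bookkeeping above can be checked on a concrete example: the round sphere (${K = 1}$, so ${\int_S K\, d\mathrm{area}}$ is just the area ${4\pi}$) triangulated by the eight octant triangles with vertices at the six points ${\pm e_i}$. A small sketch (my own naming):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(u):
    n = math.sqrt(dot(u, u))
    return tuple(a / n for a in u)

def corner_angle(u, v, w):
    """Interior angle at u of the spherical triangle uvw (unit vectors)."""
    tv = unit(tuple(v[i] - dot(u, v) * u[i] for i in range(3)))
    tw = unit(tuple(w[i] - dot(u, w) * u[i] for i in range(3)))
    return math.acos(max(-1.0, min(1.0, dot(tv, tw))))

# the 8 octant triangles, with vertices at +-e_i: V = 6, E = 12, T = 8
faces = [((sx, 0, 0), (0, sy, 0), (0, 0, sz))
         for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
V, E, T = 6, 12, 8
chi = V - E + T
assert E == 3 * T // 2   # 3 edges per face, each shared by 2 faces

# sum over triangles of (angle sum - pi); for K = 1 this should equal
# the integral of K over the sphere, i.e. 2*pi*chi = 4*pi
total = sum(corner_angle(u, v, w) + corner_angle(v, w, u)
            + corner_angle(w, u, v) - math.pi for (u, v, w) in faces)
assert abs(total - 2 * math.pi * chi) < 1e-12
```

Here every angle is exactly ${\pi/2}$ (four triangles meet at each vertex, so the angle sum around a vertex is ${2\pi}$, as used in the proof), and the total is ${12\pi - 8\pi = 4\pi = 2\pi\chi(S)}$.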

4.1. Variants of Gauss-Bonnet

• If ${S}$ is compact with totally geodesic boundary, then the formula still holds; this can be shown by doubling the surface, applying the theorem to the doubled surface, and noting that the Euler characteristic also doubles.
• If ${S}$ has geodesic boundary with corners, then $\displaystyle \int_S K d\mathrm{area} + \sum_\mathrm{corners} \mathrm{turning angle} = 2\pi\chi(S)$, where the turning angle is the angle you would turn through when tracing the boundary from the outside. That is, it is ${\pi - \alpha}$, where ${\alpha}$ is the interior angle.

• Most generally, if ${S}$ has smooth boundary with corners, then we can approximate the boundary with totally geodesic segments; taking the length of these segments to zero gives us the geodesic curvature ${k_g}$: $\displaystyle \int_S K d\mathrm{area} + \sum_\mathrm{corners} \mathrm{turning angle} + \int_{\partial S} k_g d\mathrm{length} = 2\pi\chi(S)$

4.2. Examples

• The Euler characteristic of the round disk in the plane is ${1}$, and the disk has zero curvature, so ${\int_{\partial S} k_g d\mathrm{length} = 2\pi}$. The geodesic curvature is constant, and the circumference is ${2\pi r}$, so ${2\pi r k_g = 2\pi}$, so ${k_g = 1/r}$.
• A polygon in the plane has neither curvature nor geodesic curvature, so ${\sum_\mathrm{corners} \pi - \mathrm{angle} = 2\pi}$.

The Gauss-Bonnet theorem constrains the geometry in any space with nonzero curvature. This is the “reason” that similarities which don’t preserve length and/or area exist in euclidean space: it has curvature zero.

I am Alden, one of Danny’s students. Error/naivete that may (will) be found here is mine. In these posts, I will attempt to give notes from Danny’s class on hyperbolic geometry (157b). This first post covers some models for hyperbolic space.

1. Models

We have a very good natural geometric understanding of ${\mathbb{E}^3}$, i.e. 3-space with the euclidean metric. Pretty much all of our geometric and topological intuition about manifolds (Riemannian or not) comes from finding some reasonable way to embed or immerse them (perhaps locally) in ${\mathbb{E}^3}$. Let us look at some examples of 2-manifolds.

• Example (curvature = 1) ${S^2}$ with its standard metric embeds in ${\mathbb{E}^3}$; moreover, any isometry of ${S^2}$ is the restriction of (exactly one) isometry of the ambient space (this group of isometries being ${SO(3)}$). We could not ask for anything more from an embedding.
• Example (curvature = 0) Planes embed similarly.
• Example (curvature = -1) The pseudosphere gives an example of an isometric embedding of a manifold with constant curvature -1. Consider a person standing in the plane at the origin. The person holds a string attached to a rock at ${(0,1)}$, and they proceed to walk due east dragging the rock behind them. The movement of the rock is always straight towards the person, and its distance is always 1 (the string does not stretch). The curve traced out by the rock is a tractrix. Draw a right triangle whose hypotenuse is the tangent segment from the rock to the person (the string, of length 1) and whose vertical leg drops from the rock to the ${x}$-axis. The horizontal leg then has length ${\sqrt{1-y^2}}$, which shows that the tractrix is the solution to the differential equation $\displaystyle \frac{-y}{\sqrt{1-y^2}} = \frac{dy}{dx}$

The Tractrix

The surface of revolution about the ${x}$-axis is the pseudosphere, an isometric embedding of a surface of constant curvature -1. Like the sphere, there are some isometries of the pseudosphere that we can understand as isometries of ${\mathbb{E}^3}$, namely rotations about the ${x}$-axis. However, there are lots of isometries which do not extend, so this embedding does not serve us all that well.

• Example (hyperbolic space) By the Nash embedding theorem, there is a ${\mathcal{C}^1}$ immersion of ${\mathbb{H}^2}$ in ${\mathbb{E}^3}$, but by a theorem of Hilbert, there is no ${\mathcal{C}^2}$ immersion of any complete hyperbolic surface.

That last example is the important one to consider when thinking about hyperbolic spaces. Intuitively, manifolds with negative curvature have a hard time fitting in euclidean space because volume grows too fast: there is not enough room for them. The solution is to find (local, or global in the case of ${\mathbb{H}^2}$) models for hyperbolic manifolds such that the geometry is distorted from the usual euclidean geometry, but the isometries of the space are clear.
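Returning to the tractrix example above, the differential equation can be checked against the standard closed form for the curve (quoted here as a known fact, not derived in the post):

```python
import math

def tractrix_x(y):
    """Standard closed form for the tractrix: the x-coordinate of the
    rock when it is at height y, 0 < y <= 1, starting from (0, 1)."""
    return math.log((1 + math.sqrt(1 - y * y)) / y) - math.sqrt(1 - y * y)

assert abs(tractrix_x(1.0)) < 1e-12   # the rock starts at (0, 1)

# check dy/dx = -y / sqrt(1 - y^2), via dy/dx = 1 / (dx/dy) and a
# central finite difference in y
for y in (0.9, 0.5, 0.25, 0.1):
    h = 1e-6
    dxdy = (tractrix_x(y + h) - tractrix_x(y - h)) / (2 * h)
    assert abs(1.0 / dxdy - (-y / math.sqrt(1 - y * y))) < 1e-6
```

The slope is indeed ${-y/\sqrt{1-y^2}}$ at every sampled height; note also that ${x \rightarrow \infty}$ as ${y \rightarrow 0}$, so the rock approaches the ${x}$-axis but never reaches it.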

2. 1-Dimensional Models for Hyperbolic Space

While studying 1-dimensional hyperbolic space might seem simplistic, there are nice models for which higher dimensions are simple generalizations of the 1-dimensional case, and in dimension 1 our dimensional advantage makes the models relatively easy to understand.

2.1. Hyperboloid Model

Parameterizing ${H}$

Consider the quadratic form ${\langle \cdot, \cdot \rangle_H}$ on ${\mathbb{R}^2}$ defined by ${\langle v, w \rangle_A = \langle v, w \rangle_H = v^TAw}$, where ${A = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]}$. This doesn’t give a norm, since ${A}$ is not positive definite, but we can still ask for the set of points ${v}$ with ${\langle v, v \rangle_H = -1}$. This is (both sheets of) the hyperbola ${x^2-y^2 = -1}$. Let ${H}$ be the upper sheet of the hyperbola. This will be 1-dimensional hyperbolic space.

For any ${n\times n}$ matrix ${B}$, let ${O(B) = \{ M \in \mathrm{Mat}(n,\mathbb{R}) \, | \, \langle v, w \rangle_B = \langle Mv, Mw \rangle_B \}}$. That is, ${O(B)}$ consists of the matrices which preserve the form given by ${B}$; the condition is equivalent to requiring that ${M^TBM = B}$. Notice that if we let ${B}$ be the identity matrix, we would get the regular orthogonal group. We define ${O(p,q) = O(B)}$, where ${B}$ has ${p}$ positive eigenvalues and ${q}$ negative eigenvalues. Thus ${O(1,1) = O(A)}$. We similarly define ${SO(1,1)}$ to be the matrices of determinant 1 preserving ${A}$, and ${SO_0(1,1)}$ to be the connected component of the identity. ${SO_0(1,1)}$ is then the group of matrices preserving both orientation and the sheets of the hyperbola.

We can find an explicit form for the elements of ${SO_0(1,1)}$. Consider the matrix ${M = \left[ \begin{array}{cc} a & b \\ c& d \end{array} \right]}$. Writing down the equations ${M^TAM = A}$ and ${\det(M) = 1}$ gives us four equations, which we can solve to get the solutions

$\displaystyle \left[ \begin{array}{cc} \sqrt{b^2+1} & b \\ b & \sqrt{b^2+1} \end{array} \right] \textrm{ and } \left[ \begin{array}{cc} -\sqrt{b^2+1} & b \\ b & -\sqrt{b^2+1} \end{array} \right].$

Since we are interested in the connected component of the identity, we discard the solution on the right. It is useful to do a change of variables ${b = \sinh(t)}$ (recall that ${\cosh^2(t) - \sinh^2(t) = 1}$), so we have

$\displaystyle SO_0(1,1) = \left\{ \left[ \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{array} \right] \, | \, t \in \mathbb{R} \right\}$

These matrices take ${\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]}$ to ${\left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]}$. In other words, ${SO_0(1,1)}$ acts transitively on ${H}$ with trivial stabilizers, and in particular we have parameterizing maps

$\displaystyle \mathbb{R} \rightarrow SO_0(1,1) \rightarrow H \textrm{ defined by } t \mapsto \left[ \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{array} \right] \mapsto \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]$

The first map is actually a Lie group isomorphism (with the group action on ${\mathbb{R}}$ being ${+}$) in addition to a diffeomorphism, since

$\displaystyle \left[ \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{array} \right] \left[ \begin{array}{cc} \cosh(s) & \sinh(s) \\ \sinh(s) & \cosh(s) \end{array} \right] = \left[ \begin{array}{cc} \cosh(t+s) & \sinh(t+s) \\ \sinh(t+s) & \cosh(t+s) \end{array} \right]$

Metric

As mentioned above, ${\langle \cdot, \cdot \rangle_H}$ is not positive definite, but its restriction to the tangent space of ${H}$ is. We can see this in the following way: tangent vectors at a point ${p \in H}$ are characterized by the form ${\langle \cdot, \cdot \rangle_H}$. Specifically, ${v\in T_pH \Leftrightarrow \langle v, p \rangle_H = 0}$, since (by a calculation) ${\left.\frac{d}{dt}\right|_{t=0} \langle p+tv, p+tv \rangle_H = 0 \Leftrightarrow \langle v, p \rangle_H = 0}$. Therefore, ${SO_0(1,1)}$ takes tangent vectors to tangent vectors and preserves the form (and is transitive), so we only need to check that the form is positive definite on one tangent space. This is obvious on the tangent space to the point ${\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]}$. Thus, ${H}$ is a Riemannian manifold, and ${SO_0(1,1)}$ acts by isometries.

Let’s use the parameterization ${\phi: t \mapsto \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]}$. The unit (in the ${H}$ metric) tangent at ${\phi(t) = \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]}$ is ${\left[ \begin{array}{c} \cosh(t) \\ \sinh(t) \end{array} \right]}$. The distance between the points ${\phi(s)}$ and ${\phi(t)}$ is

$\displaystyle d_H(\phi(s), \phi(t)) = \left| \int_s^t\sqrt{\left\langle \left[ \begin{array}{c} \cosh(v) \\ \sinh(v) \end{array} \right], \left[ \begin{array}{c} \cosh(v) \\ \sinh(v) \end{array} \right] \right\rangle_H} \, dv \right| = \left|\int_s^t dv \right| = |t-s|$

In other words, ${\phi}$ is an isometry from ${\mathbb{E}^1}$ to ${H}$.
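All of the computations above are easy to confirm numerically. Here is a sketch (function names are my own) checking that boosts preserve the form ${\langle \cdot,\cdot \rangle_H}$, that they translate the parameter ${t}$, and that the arc length of ${\phi}$ between ${s}$ and ${t}$ is ${|t-s|}$:

```python
import math

def form(v, w):
    """The form <v, w>_H = v_1 w_1 - v_2 w_2 on R^2."""
    return v[0] * w[0] - v[1] * w[1]

def boost(t):
    return [[math.cosh(t), math.sinh(t)],
            [math.sinh(t), math.cosh(t)]]

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def phi(t):
    """The parameterization t -> (sinh t, cosh t) of H."""
    return (math.sinh(t), math.cosh(t))

# 1) phi(t) lies on H, and boosts preserve the form (so lie in O(1,1))
for t in (0.3, -1.2, 2.5):
    assert abs(form(phi(t), phi(t)) + 1.0) < 1e-9
    for v in ((1.0, 0.0), (0.7, -0.4)):
        for w in ((0.0, 1.0), (-1.1, 0.2)):
            assert abs(form(apply(boost(t), v), apply(boost(t), w))
                       - form(v, w)) < 1e-9

# 2) boosts translate the parameter: boost(s) phi(t) = phi(s + t)
p, q = apply(boost(0.4), phi(1.1)), phi(1.5)
assert max(abs(p[0] - q[0]), abs(p[1] - q[1])) < 1e-9

# 3) the arc length of phi between s and t, as a fine polygonal sum of
#    chord lengths measured with the form, is |t - s| (the integral above)
def arc_length(s, t, n=20000):
    total, prev = 0.0, phi(s)
    for i in range(1, n + 1):
        cur = phi(s + (t - s) * i / n)
        d = (cur[0] - prev[0], cur[1] - prev[1])
        total += math.sqrt(form(d, d))
        prev = cur
    return total

assert abs(arc_length(-0.5, 1.7) - 2.2) < 1e-6
```

Note that in step 3 the form applied to a chord between nearby points of ${H}$ is positive, as it must be for the square root to make sense; this is the positive-definiteness on tangent spaces discussed above.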

1-dimensional hyperbolic space. The hyperboloid model is shown in blue, and the projective model is shown in red. An example of the projection map identifying ${H}$ with ${(-1,1) \subseteq \mathbb{R}\mathrm{P}^1}$ is shown.

2.2. Projective Model

Parameterizing

Real projective space ${\mathbb{R}\mathrm{P}^1}$ is the set of lines through the origin in ${\mathbb{R}^2}$. We can think about ${\mathbb{R}\mathrm{P}^1}$ as ${\mathbb{R} \cup \{\infty\}}$, where ${x\in \mathbb{R}}$ is associated with the line (point in ${\mathbb{R}\mathrm{P}^1}$) intersecting ${\{y=1\}}$ in ${x}$, and ${\infty}$ is the horizontal line. There is a natural projection ${\mathbb{R}^2 \setminus \{0\} \rightarrow \mathbb{R}\mathrm{P}^1}$ by projecting a point to the line it is on. Under this projection, ${H}$ maps to ${(-1,1)\subseteq \mathbb{R} \subseteq \mathbb{R}\mathrm{P}^1}$.

Since ${SO_0(1,1)}$ acts on ${\mathbb{R}^2}$ preserving the lines ${y = \pm x}$, it gives a projective action on ${\mathbb{R}\mathrm{P}^1}$ fixing the points ${\pm 1}$. Now suppose we have any projective linear isomorphism of ${\mathbb{R}\mathrm{P}^1}$ fixing ${\pm 1}$. The isomorphism is represented by a matrix ${A \in \mathrm{PGL}(2,\mathbb{R})}$ with eigenvectors ${\left[ \begin{array}{c} 1 \\ \pm 1 \end{array} \right]}$. Since scaling ${A}$ preserves its projective class, we may assume it has determinant 1. Its eigenvalues are thus ${\lambda}$ and ${\lambda^{-1}}$. The determinant equation, plus the fact that

$\displaystyle A \left[ \begin{array}{c} 1 \\ \pm 1 \end{array} \right] = \left[ \begin{array}{c} \lambda^{\pm 1} \\ \pm \lambda^{\pm 1} \end{array} \right]$

implies that ${A}$ is of the form of a matrix in ${SO_0(1,1)}$. Therefore, the projective linear structure on ${(-1,1) \subseteq \mathbb{R}\mathrm{P}^1}$ is the “same” (has the same isometry (isomorphism) group) as the hyperbolic (Riemannian) structure on ${H}$.

Metric

Clearly, we’re going to use the pushforward metric under the projection of ${H}$ to ${(-1,1)}$, but it turns out that this metric is a natural choice for other reasons, and it has a nice expression.

The map taking ${H}$ to ${(-1,1) \subseteq \mathbb{R}\mathrm{P}^1}$ is ${\psi: \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right] \mapsto \frac{\sinh(t)}{\cosh(t)} = \tanh(t)}$. The hyperbolic distance between ${x}$ and ${y}$ in ${(-1,1)}$ is then ${d_H(x,y) = |\tanh^{-1}(x) - \tanh^{-1}(y)|}$ (by the fact from the previous section that ${\phi}$ is an isometry).

Recall the fact that ${\tanh(a\pm b) = \frac{\tanh(a) \pm \tanh(b)}{1 \pm \tanh(a)\tanh(b)}}$. Applying this to ${\tanh(d_H(x,y))}$ (say with ${y > x}$), we get the nice form

$\displaystyle \tanh(d_H(x,y)) = \frac{y-x}{1 - xy}$

We also recall the cross ratio, for which we fix notation as ${ (z_1, z_2; z_3, z_4) := \frac{(z_3 -z_1)(z_4-z_2)}{(z_2-z_1)(z_4-z_3)}}$. Then

$\displaystyle (-1, x;y,1 ) = \frac{(y+1)(1-x)}{(x+1)(1-y)} = \frac{1-xy + (y-x)}{1-xy + (x-y)}$

Call the numerator of that fraction ${N}$ and the denominator ${D}$. Then, recalling that ${\tanh(u) = \frac{e^{2u}-1}{e^{2u}+1}}$, we have

$\displaystyle \tanh(\frac{1}{2} \log(-1,x;y,1)) = \frac{\frac{N}{D} -1}{\frac{N}{D} +1} = \frac{N-D}{N+D} = \frac{2(y-x)}{2(1-xy)} = \tanh(d_H(x,y))$

Therefore, ${d_H(x,y) = \frac{1}{2}\log(-1,x;y,1)}$.
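This identity can be confirmed numerically with a few lines (a quick check, using the cross ratio exactly as defined above):

```python
import math

def cross_ratio(z1, z2, z3, z4):
    """(z1, z2; z3, z4), with the convention fixed above."""
    return ((z3 - z1) * (z4 - z2)) / ((z2 - z1) * (z4 - z3))

# d_H(x, y) = (1/2) log(-1, x; y, 1) should equal atanh(y) - atanh(x)
for (x, y) in [(-0.9, 0.3), (0.0, 0.99), (0.2, 0.7), (-0.5, -0.1)]:
    d_cr = 0.5 * math.log(cross_ratio(-1.0, x, y, 1.0))
    assert abs(d_cr - (math.atanh(y) - math.atanh(x))) < 1e-12
```

Note the cross ratio is greater than 1 whenever ${x < y}$, so the logarithm is positive, as a distance should be.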

3. Hilbert Metric

Notice that the expression on the right above has nothing, a priori, to do with the hyperbolic projection. In fact, for any open convex body ${C}$ in ${\mathbb{R}\mathrm{P}^n}$, we can define the Hilbert metric on ${C}$ by setting ${d_H(p,q) = \frac{1}{2}\log(a,p;q,b)}$, where ${a}$ and ${b}$ are the intersections of the line through ${p}$ and ${q}$ with the boundary of ${C}$. How is it possible to take the cross ratio, since ${a,p,q,b}$ are not numbers? The line containing all of them is projectively isomorphic to ${\mathbb{R}\mathrm{P}^1}$, which we can parameterize as ${\mathbb{R} \cup \{\infty\}}$. The cross ratio does not depend on the choice of parameterization, so it is well defined. Note that the Hilbert metric is not necessarily a Riemannian metric, but it does make any open convex set into a metric space.

Therefore, we see that any open convex body in ${\mathbb{R}\mathrm{P}^n}$ has a natural metric, and the hyperbolic metric on ${H = (-1,1)}$ agrees with this metric when ${(-1,1)}$ is thought of as an open convex set in ${\mathbb{R}\mathrm{P}^1}$.
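As a sanity check that the Hilbert metric really recovers hyperbolic distance, the following sketch computes the Hilbert metric on the open unit disk in ${\mathbb{R}^2}$ and compares it with the standard Beltrami-Klein distance formula ${\cosh d = (1 - p\cdot q)/\sqrt{(1-|p|^2)(1-|q|^2)}}$ (the latter is a standard fact, quoted here without proof; all function names are my own):

```python
import math

def hilbert_disk(p, q):
    """Hilbert metric on the open unit disk: intersect the line through
    p and q with the boundary circle, then take (1/2) log of the cross
    ratio of the four collinear points, computed in an affine parameter
    along the line."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    L = math.hypot(dx, dy)
    dx, dy = dx / L, dy / L            # unit direction; p is t=0, q is t=L
    # solve |p + t d|^2 = 1 for the boundary points a (t<0) and b (t>L)
    pd = p[0] * dx + p[1] * dy
    disc = math.sqrt(pd * pd - (p[0] ** 2 + p[1] ** 2 - 1))
    ta, tb = -pd - disc, -pd + disc
    # cross ratio (a, p; q, b) in the parameter t, as defined above
    cr = ((L - ta) * (tb - 0.0)) / ((0.0 - ta) * (tb - L))
    return 0.5 * math.log(cr)

def klein_distance(p, q):
    """Standard Beltrami-Klein model distance formula."""
    pq = p[0] * q[0] + p[1] * q[1]
    return math.acosh((1 - pq) / math.sqrt((1 - p[0] ** 2 - p[1] ** 2)
                                           * (1 - q[0] ** 2 - q[1] ** 2)))

for (p, q) in [((0.1, 0.2), (-0.3, 0.5)), ((0.0, 0.0), (0.7, 0.1)),
               ((-0.4, -0.4), (0.5, -0.2))]:
    assert abs(hilbert_disk(p, q) - klein_distance(p, q)) < 1e-10
```

For a point pair on a diameter through the origin this reduces to the 1-dimensional computation above, since then ${\frac{1}{2}\log \mathrm{cr} = \tanh^{-1}}$ of the euclidean distance from the origin.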

4. Higher-Dimensional Hyperbolic Space

4.1. Hyperboloid

The higher dimensional hyperbolic spaces are completely analogous to the 1-dimensional case. Consider ${\mathbb{R}^{n+1}}$ with the basis ${\{e_i\}_{i=1}^n \cup \{e\}}$ and the bilinear form ${\langle v, w \rangle_H = \sum_{i=1}^n v_iw_i - v_{n+1}w_{n+1}}$. This is the form defined by the matrix ${J = I \oplus (-1)}$. Define ${\mathbb{H}^n}$ to be the positive (positive in the ${e}$ direction) sheet of the hyperboloid ${\langle v,v\rangle_H = -1}$.

Let ${O(n,1)}$ be the linear transformations preserving the form, so ${O(n,1) = \{ A \, | \, A^TJA = J\}}$. This group is generated by ${O(1,1) \subseteq O(n,1)}$ as symmetries of the ${e_1, e}$ plane, together with ${O(n) \subseteq O(n,1)}$ as symmetries of the span of the ${e_i}$ (this subspace is euclidean). The group ${SO_0(n,1)}$ is the set of orientation preserving elements of ${O(n,1)}$ which preserve the positive sheet of the hyperboloid (${\mathbb{H}^n}$). This group acts transitively on ${\mathbb{H}^n}$ with point stabilizers ${SO(n)}$: this is easiest to see by considering the point ${(0,\cdots, 0, 1) \in \mathbb{H}^n}$. Here the stabilizer is clearly ${SO(n)}$, and because ${SO_0(n,1)}$ acts transitively, any stabilizer is a conjugate of this.

As in the 1-dimensional case, the metric on ${\mathbb{H}^n}$ is ${\langle \cdot , \cdot \rangle_H|_{T_p\mathbb{H}^n}}$, which is invariant under ${SO_0(n,1)}$.

Geodesics in ${\mathbb{H}^n}$ can be understood by considering the fixed point sets of isometries, which are always totally geodesic. Here, reflection in a vertical plane (one containing ${e}$) restricts to an (orientation-reversing, but that’s ok) isometry of ${\mathbb{H}^n}$, and its fixed point set is the intersection of this plane with ${\mathbb{H}^n}$. Now ${SO_0(n,1)}$ acts transitively on ${\mathbb{H}^n}$ and sends planes in ${\mathbb{R}^{n+1}}$ to planes, so we have a bijection

{totally geodesic subspaces of ${\mathbb{H}^n}$ through ${p}$} ${\leftrightarrow}$ {intersections of ${\mathbb{H}^n}$ with linear subspaces of ${\mathbb{R}^{n+1}}$ containing ${p}$}

By considering planes through ${e}$, we can see that these totally geodesic subspaces are isometric to lower dimensional hyperbolic spaces.

4.2. Projective

Analogously, we define the projective model as follows: consider the disk ${\{v \,|\, v_{n+1} = 1, \langle v,v \rangle_H < 0\}}$, i.e. the points in the plane ${\{v_{n+1} = 1\}}$ inside the cone ${\langle v,v \rangle_H = 0}$. We can think of ${\mathbb{R}\mathrm{P}^n}$ as ${\mathbb{R}^n \cup \mathbb{R}\mathrm{P}^{n-1}}$, so this disk is ${D^\circ \subseteq \mathbb{R}^n \subseteq \mathbb{R}\mathrm{P}^n}$. There is, as before, a natural projection of ${\mathbb{H}^n}$ to ${D^\circ}$, and the pushforward of the hyperbolic metric agrees with the Hilbert metric on ${D^\circ}$ as an open convex body in ${\mathbb{R}\mathrm{P}^n}$.

Geodesics in the projective model are the intersections of planes in ${\mathbb{R}^{n+1}}$ with ${D^\circ}$; that is, they are straight (euclidean) segments in ${D^\circ}$. One interesting consequence of this is that any theorem of euclidean geometry which does not rely on facts about angles is still true for hyperbolic space. For example, Pappus’ hexagon theorem, the proof of which does not use angles, is true.

4.3. Projective Model in Dimension 2

In the case that ${n=2}$, we can understand the projective isomorphisms of ${\mathbb{H}^2 = D \subseteq \mathbb{R}\mathrm{P}^2}$ by looking at their actions on the boundary ${\partial D}$. The set ${\partial D}$ is projectively isomorphic to ${\mathbb{R}\mathrm{P}^1}$ as an abstract manifold, but it should be noted that ${\partial D}$ is not a straight line in ${\mathbb{R}\mathrm{P}^2}$, which would be the most natural way to find copies of ${\mathbb{R}\mathrm{P}^1}$ embedded in ${\mathbb{R}\mathrm{P}^2}$.

In addition, any projective isomorphism of ${\mathbb{R}\mathrm{P}^1 \cong \partial D}$ can be extended to a real projective isomorphism of ${\mathbb{R}\mathrm{P}^2}$. In other words, we can understand isometries of 2-dimensional hyperbolic space by looking at the action on the boundary. Since ${\partial D}$ is not a straight line, the extension is not trivial. We now show how to do this.

The automorphisms of ${\partial D \cong \mathbb{R}\mathrm{P}^1}$ are ${\mathrm{PSL}(2,\mathbb{R})}$. We will consider ${\mathrm{SL}(2,\mathbb{R})}$. For any Lie group ${G}$, there is an Adjoint action ${\mathrm{Ad}: G \rightarrow \mathrm{Aut}(T_eG)}$ defined by (the derivative of) conjugation. We can similarly define an adjoint action ${\mathrm{ad}}$ of the Lie algebra on itself, as ${\mathrm{ad}(\gamma '(0)) := \left. \frac{d}{dt} \right|_{t=0} \mathrm{Ad}(\gamma(t))}$ for any path ${\gamma}$ with ${\gamma(0) = e}$. If the tangent vectors ${v}$ and ${w}$ are matrices, then ${\mathrm{ad}(v)(w) = [v,w] = vw-wv}$.

We can define the Killing form ${B}$ on the Lie algebra by ${B(v,w) = \mathrm{Tr}(\mathrm{ad}(v)\mathrm{ad}(w))}$. Note that ${\mathrm{ad}(v)}$ is a matrix, so this makes sense, and the Lie group acts on the tangent space (Lie algebra) preserving this form.

Now let’s look at ${\mathrm{SL}(2,\mathbb{R})}$ specifically. A basis for the tangent space (Lie algebra) is ${e_1 = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right]}$, ${e_2 = \left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right]}$, and ${e_3 = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]}$. We can check that ${[e_1,e_2] = e_3}$, ${[e_1,e_3] = -2e_1}$, and ${[e_2, e_3]=2e_2}$. Using these relations plus the antisymmetry of the Lie bracket, we know

$\displaystyle \mathrm{ad}(e_1) = \left[ \begin{array}{ccc} 0 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right] \qquad \mathrm{ad}(e_2) = \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 2 \\ -1 & 0 & 0 \end{array}\right] \qquad \mathrm{ad}(e_3) = \left[ \begin{array}{ccc} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{array}\right]$

Therefore, the matrix for the Killing form in this basis is

$\displaystyle B_{ij} = B(e_i,e_j) = \mathrm{Tr}(\mathrm{ad}(e_i)\mathrm{ad}(e_j)) = \left[ \begin{array}{ccc} 0 & 4 & 0 \\ 4 & 0 & 0 \\ 0 & 0 & 8 \end{array}\right]$

This matrix has two positive eigenvalues and one negative eigenvalue, so its signature is ${(2,1)}$. Since ${\mathrm{SL}(2,\mathbb{R})}$ acts on ${T_e(\mathrm{SL}(2,\mathbb{R}))}$ preserving this form, the Adjoint action gives a homomorphism ${\mathrm{SL}(2,\mathbb{R}) \rightarrow O(2,1)}$ with kernel ${\pm I}$, identifying ${\mathrm{PSL}(2,\mathbb{R})}$ with the identity component of ${O(2,1)}$, otherwise known as the group of (orientation-preserving) isometries of the disk in projective space ${\mathbb{R}\mathrm{P}^2}$, otherwise known as ${\mathbb{H}^2}$.
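These computations are mechanical enough to verify by machine. A quick sketch (mine, not part of the notes) rebuilds the ${\mathrm{ad}}$ matrices and the Killing form from the basis above and checks the signature:

```python
import numpy as np

# the basis of sl(2,R) used above
e1 = np.array([[0., 1.], [0., 0.]])
e2 = np.array([[0., 0.], [1., 0.]])
e3 = np.array([[1., 0.], [0., -1.]])
basis = [e1, e2, e3]

def bracket(v, w):
    return v @ w - w @ v

def coords(m):
    # a traceless 2x2 matrix a*e1 + b*e2 + c*e3 has entries [[c, a], [b, -c]]
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

# ad(e_i) as a 3x3 matrix: its j-th column is [e_i, e_j] in coordinates
ad = [np.column_stack([coords(bracket(ei, ej)) for ej in basis]) for ei in basis]

# Killing form B_ij = Tr(ad(e_i) ad(e_j))
B = np.array([[np.trace(ad[i] @ ad[j]) for j in range(3)] for i in range(3)])
print(B)                       # expect [[0 4 0], [4 0 0], [0 0 8]]
print(np.linalg.eigvalsh(B))   # two positive, one negative: signature (2,1)
```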

Any element of ${\mathrm{PSL}(2,\mathbb{R})}$ (which, recall, was acting on the boundary of projective hyperbolic space ${\partial D}$) therefore extends to an element of ${O(2,1)}$, the isometries of hyperbolic space, i.e. we can extend the action over the disk.

This means that we can classify isometries of 2-dimensional hyperbolic space by what they do to the boundary, which is generally determined by their eigenvectors (${\mathrm{PSL}(2,\mathbb{R})}$ acts on ${\mathbb{R}\mathrm{P}^1}$ by projecting the action on ${\mathbb{R}^2}$, so an eigenvector of a matrix corresponds to a fixed line in ${\mathbb{R}^2}$, and hence to a fixed point in ${\mathbb{R}\mathrm{P}^1 \cong \partial D}$). For a matrix ${A}$, we have the following:

• ${|\mathrm{Tr}(A)| < 2}$ (elliptic) In this case, there are no real eigenvalues, so no real eigenvectors. The action here is rotation, which extends to a rotation of the entire disk.
• ${|\mathrm{Tr}(A)| = 2}$ (parabolic) There is a single real eigenvector (up to scale), hence a single fixed point, to which all other points are attracted (from one direction) and repelled (from the other). For example, the action in projective coordinates sending ${[x:y]}$ to ${[x+y:y]}$ (i.e. ${t \mapsto t+1}$ in the affine coordinate ${t = x/y}$): infinity is such a fixed point.
• ${|\mathrm{Tr}(A)| > 2}$ (hyperbolic) There are two fixed points, one attracting and one repelling.
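The trichotomy above is easy to experiment with numerically. In the sketch below, the helper `classify` and the three sample matrices are my own illustrations; each sample lies in ${\mathrm{SL}(2,\mathbb{R})}$:

```python
import numpy as np

def classify(A):
    """Classify a matrix in SL(2,R) by the absolute value of its trace."""
    t = abs(np.trace(A))
    if t < 2:
        return "elliptic"
    if t == 2:
        return "parabolic"
    return "hyperbolic"

theta = 1.0
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])   # no real eigenvectors
shear    = np.array([[1., 1.],
                     [0., 1.]])                          # fixes only [1:0] (infinity)
stretch  = np.array([[2., 0.],
                     [0., .5]])                          # fixes [1:0] and [0:1]

for A in (rotation, shear, stretch):
    print(classify(A), np.linalg.eigvals(A))
```

The printed eigenvalues illustrate the fixed-point count: a complex-conjugate pair for the rotation, a repeated real eigenvalue for the shear, and two distinct real eigenvalues for the stretch.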

5. Complex Hyperbolic Space

We can do a construction analogous to real hyperbolic space over the complexes. Define a Hermitian form ${q}$ on ${\mathbb{C}^{n+1}}$ with coordinates ${\{z_1,\cdots, z_n\} \cup \{w\}}$ by ${q(z_1,\cdots, z_n, w) = |z_1|^2 + \cdots + |z_n|^2 - |w|^2}$. We will also refer to ${q}$ as ${\langle \cdot, \cdot \rangle_q}$. The (complex) matrix for this form is ${J = I \oplus (-1)}$, where ${q(u,v) = u^*Jv}$. Complex linear isomorphisms preserving this form are matrices ${A}$ such that ${A^*JA = J}$. This is our definition for ${\mathrm{U}(q) := \mathrm{U}(n,1)}$, and we define ${\mathrm{SU}(n,1)}$ to be those elements of ${\mathrm{U}(n,1)}$ with determinant ${1}$.

The set of points ${z}$ such that ${q(z) = -1}$ is not quite what we are looking for: first, it is a ${(2n+1)}$ real dimensional manifold (not ${2n}$, as we would like for whatever our definition of “complex hyperbolic ${n}$ space” is), but more importantly, ${q}$ does not restrict to a positive definite form on the tangent spaces. Denote the set of points ${z}$ where ${q(z) = -1}$ by ${\bar{H}}$. Consider a point ${p}$ in ${\bar{H}}$ and ${v}$ in ${T_p\bar{H}}$. As in the real case, since ${v}$ is in the tangent space,

$\displaystyle \left. \frac{d}{dt} \right|_{t=0} \langle p + tv, p+tv\rangle_q = 0 \quad \Rightarrow \quad \langle v, p \rangle_q + \langle p,v \rangle_q = 0$

Because ${q}$ is hermitian, the expression on the right does not mean that ${\langle v,p\rangle_q = 0}$, but it does mean that ${\langle v,p \rangle_q}$ is purely imaginary. Indeed, the tangent space at ${p}$ contains the vector ${ip}$ (which satisfies ${\langle ip,p \rangle_q + \langle p,ip \rangle_q = 0}$), and ${\langle ip, ip \rangle_q = \langle p,p \rangle_q = -1 < 0}$, i.e. ${q}$ is not positive definite on the tangent spaces.

However, we can get rid of this negative definite subspace. The circle ${S^1}$, thought of as the unit complex numbers (i.e. ${\mathrm{U}(1)}$), acts on ${\mathbb{C}^{n+1}}$ by multiplying all coordinates, and this action preserves ${q}$: any phase goes away when we apply the absolute value. The quotient of ${\bar{H}}$ by this action is ${\mathbb{C}\mathbb{H}^n}$. The isometry group of this space is still ${\mathrm{U}(n,1)}$, but now there are point stabilizers because of the action of ${\mathrm{U}(1)}$. We can think of ${\mathrm{U}(1)}$ inside ${\mathrm{U}(n,1)}$ as the scalar matrices ${\lambda I}$ with ${|\lambda| = 1}$, so we can write

$\displaystyle \mathrm{U}(n,1) \cong \left( \mathrm{SU}(n,1) \times \mathrm{U}(1) \right)/\mu_{n+1}$

where ${\mu_{n+1}}$, the group of ${(n+1)}$st roots of unity, sits in both factors via ${\lambda \mapsto (\lambda^{-1} I, \lambda)}$.

And the group of projectivized matrices ${\mathrm{PSU}(n,1)}$ is the group of isometries of ${\mathbb{C}\mathbb{H}^n \subseteq \mathbb{C}^n \subseteq \mathbb{C}\mathrm{P}^n}$, where the middle ${\mathbb{C}^n}$ is the set of vectors in ${\mathbb{C}^{n+1}}$ with ${w=1}$ (which we think of as part of complex projective space). We can also approach this group by projectivizing, since that gets rid of the unwanted point stabilizers too: we have ${\mathrm{PU}(n,1) \cong \mathrm{PSU}(n,1)}$.

5.1. Case ${n=1}$

In the case ${n=1}$, we can actually picture ${\mathbb{C}\mathbb{H}^1}$ inside ${\mathbb{C} \subseteq \mathbb{C}\mathrm{P}^1}$. We can’t picture the original ${\mathbb{C}^2}$ (which is ${\mathbb{R}^4}$), but we are looking at the set of ${(z,w)}$ such that ${|z|^2 - |w|^2 = -1}$. Notice that ${|w| \ge 1}$. After projectivizing, we may divide by ${w}$, so ${|z/w|^2 - 1 = -1/|w|^2}$. The set of points ${z/w}$ which satisfy this is the interior of the unit circle, so this is what we think of for ${\mathbb{C}\mathbb{H}^1}$. The group of complex projective isometries of the disk is ${\mathrm{PU}(1,1)}$. A straight diameter is a geodesic, and the complex isometries send circles to circles, so the geodesics in ${\mathbb{C}\mathbb{H}^1}$ are diameters and arcs of circles perpendicular to the unit circle ${S^1 \subseteq \mathbb{C}}$.

Imagine the real projective model as a disk sitting at height one, and the geodesics are the intersections of planes with the disk. Complex hyperbolic space is the upper hemisphere of a sphere of radius one with equator the boundary of real hyperbolic space. To get the geodesics in complex hyperbolic space, intersect a plane with this upper hemisphere and stereographically project it flat. This gives the familiar Poincare disk model.

5.2. Real ${\mathbb{H}^2}$‘s contained in ${\mathbb{C}\mathbb{H}^n}$

${\mathbb{C}\mathbb{H}^n}$ contains two kinds of real hyperbolic spaces. The subset of real points in ${\mathbb{C}\mathbb{H}^n}$ is (real) ${\mathbb{H}^n}$, so we have many ${\mathbb{H}^2 \subseteq \mathbb{H}^n \subseteq \mathbb{C}\mathbb{H}^n}$. In addition, we have copies of ${\mathbb{C}\mathbb{H}^1}$, which, as discussed above, has the same geometry (i.e. the same isometry group) as real ${\mathbb{H}^2}$. However, these two real hyperbolic spaces are not isometric: the complex hyperbolic space ${\mathbb{C}\mathbb{H}^1}$ has more negative curvature than the real hyperbolic spaces. If we scale the metric on ${\mathbb{C}\mathbb{H}^n}$ so that the real hyperbolic spaces have curvature ${-1}$, then the copies of ${\mathbb{C}\mathbb{H}^1}$ will have curvature ${-4}$.

In a similar vein, there is a symplectic structure on ${\mathbb{C}\mathbb{H}^n}$ such that the real ${\mathbb{H}^2}$’s are lagrangian subspaces (the flattest), and the ${\mathbb{C}\mathbb{H}^1}$’s are symplectic (the most negatively curved).

An important thing to mention is that complex hyperbolic space does not have constant curvature(!).

6. Poincare Disk Model and Upper Half Space Model

The projective models that we have been dealing with have many nice properties, especially the fact that geodesics in hyperbolic space are straight lines in projective space. However, the angles are wrong. There are models in which the geodesics are “curved” (i.e. curved in the euclidean metric), but the angles between them are accurate; since the isometries of such a model preserve angles, it is a conformal model. Dimension 2 is special, because complex geometry is real conformal geometry, but nevertheless, in every dimension there is a model of ${\mathbb{R}\mathbb{H}^n}$ in which the isometries of the space are conformal.

Consider the unit disk ${D^n}$ in ${n}$ dimensions. The conformal automorphisms are the maps taking the set of (straight) diameters and arcs of circles perpendicular to the boundary to itself. This model is abstractly isomorphic to the Klein model in projective space. Imagine the unit disk in a flat plane at height one with an upper hemisphere over it. The geodesics in the Klein model are the intersections of this flat plane with linear subspaces (so they are straight lines, for example, in dimension 2). Intersecting vertical planes with the upper hemisphere and stereographically projecting the result flat gives geodesics in the Poincare disk model. The fact that this model is the “same” (up to scaling the metric) as the example above of ${\mathbb{C}\mathbb{H}^1}$ is a (nice) coincidence.

The Klein model is the flat disk inside the sphere, and the Poincare disk model is the sphere. Geodesics in the Klein model are intersections of subspaces (the angled plane) with the flat plane at height 1. Geodesics in the Poincare model are intersections of vertical planes with the upper hemisphere. The two darkened geodesics, one in the Klein model and one in the Poincare, correspond under orthogonal projection. We get the usual Poincare disk model by stereographically projecting the upper hemisphere to the disk. The projection of the geodesic is shown as the curved line inside the disk

The Poincare disk model. A few geodesics are shown.

Now we have the Poincare disk model, where the geodesics are straight diameters and arcs of circles perpendicular to the boundary and the isometries are the conformal automorphisms of the unit disk. There is a conformal map from the disk to an open half space (we typically choose to conformally identify it with the upper half space). Conveniently, the hyperbolic metric on the upper half space ${d_H}$ can be expressed at a point ${(x,t)}$ (euclidean coordinates) as ${d_H = d_E/t}$. I.e. the hyperbolic metric is just a rescaling (at each point) of the euclidean metric.
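As a sanity check on the formula ${d_H = d_E/t}$, one can approximate hyperbolic lengths in the upper half-plane by Riemann sums; for instance, the vertical segment from ${(0,1)}$ to ${(0,e)}$ has length ${\int_1^e dt/t = 1}$. The following sketch is my own illustration, not from the original notes:

```python
import numpy as np

def hyperbolic_length(path, n=20000):
    """Approximate hyperbolic length of path: [0,1] -> (x, t), t > 0,
    using ds_H = ds_E / t and a midpoint Riemann sum."""
    s = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([path(si) for si in s])
    seg = np.diff(pts, axis=0)
    ds_E = np.hypot(seg[:, 0], seg[:, 1])       # euclidean segment lengths
    t_mid = 0.5 * (pts[:-1, 1] + pts[1:, 1])    # height at segment midpoints
    return float(np.sum(ds_E / t_mid))

# vertical segment from (0,1) to (0,e): hyperbolic length log(e) = 1
print(hyperbolic_length(lambda s: (0.0, np.exp(s))))
```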

One of the important things that we wanted in our models was the ability to realize isometries of the model as isometries of the ambient space. For a one-parameter family of isometries of hyperbolic space, this is possible. Suppose we have a one-parameter family of elliptic isometries (rotations about a common point). In the disk model, we can move that point to the origin and realize these isometries as euclidean rotations. Similarly, given a one-parameter family of parabolic isometries, in the upper half space model we can move the common fixed point to infinity and realize them as horizontal translations.

Hermann Amandus Schwarz (1843-1921) was a student of Kummer and Weierstrass, and made many significant contributions to geometry, especially to the fields of minimal surfaces and complex analysis. His mathematical creations are both highly abstract and flexible, and at the same time intimately tied to explicit and practical calculation.

I learned about Schwarz-Christoffel transformations, Schwarzian derivatives, and Schwarz’s minimal surface as three quite separate mathematical objects, and I was very surprised to discover firstly that they had all been discovered by the same person, and secondly that they form parts of a consistent mathematical narrative, which I will try to explain in this post to the best of my ability. There is an instructive lesson in this example (for me), that we tend to mine the past for nuggets, examples, tricks, formulae etc. while forgetting the points of view and organizing principles that made their discovery possible. Another teachable example is that of Dehn’s “invention” of combinatorial (infinite) group theory, as a natural branch of geometry; several generations of followers went about the task of reformulating Dehn’s insights and ideas in the language of algebra, “generalizing” them and stripping them of their context, before geometric and topological methods were reintroduced by Milnor, Schwarz (a different one this time), Stallings, Thurston, Gromov and others to spectacular effect (note: I have the second-hand impression that the geometric point of view in group theory (and every other subject) was never abandoned in the Soviet Union).

Schwarz’s minimal surface (also called “Schwarz’s D surface”, and sometimes “Schwarz’s H surface”) is an extraordinarily beautiful triply-periodic minimal surface of infinite genus that is properly embedded in $\mathbb{R}^3$. According to Nitsche’s excellent book (p.240), this minimal surface closely resembles the separating wall between inorganic and organic materials in the skeleton of a starfish. The basic building block of the surface can be described as follows. If the vertices of a cube are $2$-colored, the black vertices are the vertices of a regular tetrahedron. Let $Q$ denote the quadrilateral formed by four edges of this tetrahedron; then a fundamental piece $S$ of Schwarz’s surface is a minimal disk spanning $Q$:

The surface may be “analytically continued” by rotating $S$ through an angle $\pi$ around each boundary edge. Six copies of $S$ fit smoothly around each vertex, and the resulting surface extends (triply) periodically throughout space.

The symmetries of $Q$ enable us to give it several descriptions as a Riemann surface. Firstly, we could think of $Q$ as a polygon in the hyperbolic plane with four edges of equal length, and angles $\pi/3$. Twelve copies of $Q$ can be assembled to make a hyperbolic surface $\Sigma$ of genus $3$. Thinking of a surface of genus $3$ as the boundary of a genus $3$ handlebody defines a homomorphism from $\pi_1(\Sigma)$ to $\mathbb{Z}^3$, thought of as $H_1(\text{handlebody})$; the cover $\widetilde{\Sigma}$ associated to the kernel is (conformally) the triply periodic Schwarz surface, and the deck group acts on $\mathbb{R}^3$ as a lattice (of index $2$ in the face-centered cubic lattice).

Another description is as follows. Since the deck group acts by translation, the Gauss map from $\widetilde{\Sigma}$ to $S^2$ factors through a map $\Sigma \to S^2$. The map is injective at each point in the interior or on an edge of a copy of $Q$, but has an order $2$ branch point at each vertex. Thus, the map $\Sigma \to S^2$ is a double-branched cover, with one branch point of order $2$ at each vertex of a regular inscribed cube. This leads one to think (like a late 19th century mathematician) of $\Sigma$ as the Riemann surface on which a certain multi-valued function on $S^2 = \mathbb{C} \cup \infty$ is single-valued. Under stereographic projection, the vertices of the cube map to the eight points $\lbrace \alpha,i\alpha,-\alpha,-i\alpha,1/\alpha,i/\alpha,-1/\alpha,-i/\alpha \rbrace$ where $\alpha = (\sqrt{3}-1)/\sqrt{2}$. These eight points are the roots of the polynomial $w^8 - 14w^4 + 1$, so we may think of $\Sigma$ as the hyperelliptic Riemann surface defined by the equation $v^2 = w^8 - 14w^4 + 1$; equivalently, as the surface on which the multi-valued (on $\mathbb{C} \cup \infty$) function $R(w):= 1/v=1/\sqrt{w^8 - 14w^4 + 1}$ is single-valued.
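It is a pleasant exercise to check that these eight points really are the roots of $w^8 - 14w^4 + 1$; a quick numerical verification (my own, not from the original):

```python
import math

alpha = (math.sqrt(3) - 1) / math.sqrt(2)

def p(w):
    """the polynomial whose roots are the projected cube vertices"""
    return w ** 8 - 14 * w ** 4 + 1

# the stereographic images of the cube vertices:
# alpha, i*alpha, -alpha, -i*alpha, 1/alpha, i/alpha, -1/alpha, -i/alpha
roots = [alpha * 1j ** k for k in range(4)] + [1j ** k / alpha for k in range(4)]
print(max(abs(p(w)) for w in roots))  # should vanish up to roundoff
```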

The function $R(w)$ is known as the Weierstrass function associated to $\Sigma$, and an explicit formula for the coordinates of the embedding $\widetilde{\Sigma} \to \mathbb{R}^3$ was found by Enneper and Weierstrass. After picking a basepoint (say $0$) on the sphere, the coordinates are given by integration:

$x = \text{Re} \int_0^{w_0} \frac{1}{2}(1-w^2)R(w)dw$

$y = \text{Re} \int_0^{w_0} \frac{i}{2}(1+w^2)R(w)dw$

$z = \text{Re} \int_0^{w_0} wR(w)dw$

The integral in each case depends on the path, and lifts to a single-valued function precisely on $\widetilde{\Sigma}$.

Geometrically, the three coordinate functions $x,y,z$ are harmonic functions on $\widetilde{\Sigma}$. This corresponds to the fact that minimal surfaces are precisely those with vanishing mean curvature, and the fact that the Laplacian of the coordinate functions (in terms of isothermal parameters on the underlying Riemann surface) can be expressed as a nonzero multiple of the mean curvature vector. A harmonic function on a Riemann surface is the real part of a holomorphic function, unique up to a constant; the holomorphic derivative of the (complexified) coordinate functions are therefore well-defined, and give holomorphic $1$-forms $\phi_1,\phi_2,\phi_3$ which descend to $\Sigma$ (since the deck group acts by translations). These $1$-forms satisfy the identity $\sum_i \phi_i^2 = 0$ (this identity expresses the fact that the embedding of $\widetilde{\Sigma}$ into $\mathbb{R}^3$ via these functions is conformal). The (composition of the) Gauss map (with stereographic projection) can be read off from the $\phi_i$, and as a meromorphic function on $\Sigma$, it is given by the formula $w = \phi_3/(\phi_1 - i\phi_2)$. Define a function $f$ on $\Sigma$ by the formula $fdw = \phi_1 - i\phi_2$. Then $1/f,w$ are the coordinates of a rational map from $\Sigma$ into $\mathbb{C}^2$ which extends to a map into $\mathbb{CP}^2$, by sending each zero of $f$ to $wf = \phi_3/dw$ in the $\mathbb{CP}^1$ at infinity. Symmetry allows us to identify the image with the hyperelliptic embedding from before, and we deduce that $f=R(w)$. Solving for $\phi_1,\phi_2$ we obtain the integrands in the formulae above.

In fact, any holomorphic function $R(w)$ on a domain in $\mathbb{C}$ defines a (typically immersed with branch points) minimal surface, by the integral formulae of Enneper-Weierstrass above. Suppose we want to use this fact to produce an explicit description of a minimal surface bounded by some explicit polygonal loop in $\mathbb{R}^3$. Any minimal surface so obtained can be continued across the boundary edges by rotation; if the angles at the vertices are all of the form $\pi/n$ the resulting surface closes up smoothly around the vertices, and one obtains a compact abstract Riemann surface $\Sigma$ tiled by copies of the fundamental region, together with a holonomy representation of $\pi_1(\Sigma)$ into $\text{Isom}^+(\mathbb{R}^3)$. Sometimes the image of this representation in the rotational part of $\text{Isom}^+(\mathbb{R}^3)$ is finite, and one obtains an infinitely periodic minimal surface as in the case of Schwarz’s surface. A fundamental tile in $\Sigma$ can be uniformized as a hyperbolic polygon; equivalently, as a region in the upper half-plane bounded by arcs of semicircles perpendicular to the real axis. Since the edges of the loop are straight lines, the image of this hyperbolic polygon under the Gauss map is a region in $\mathbb{R}^3$ also bounded by arcs of round circles; thus Schwarz’s study of minimal surfaces naturally led him to the problem of how to explicitly describe conformal maps between regions in the plane bounded by circular arcs. This problem is solved by the Schwarz-Christoffel transformation, and its generalizations, with help from the Schwarzian derivative.

Note that if $P$ and $Q$ are two such regions, then a conformal map from $P$ to $Q$ can be factored as the product of a map uniformizing $P$ as the upper half-plane, followed by the inverse of a map uniformizing $Q$ as the upper half-plane. So it suffices to find a conformal map when the domain is the upper half plane, decomposed into intervals and rays that are mapped to the edges of a circular polygon $Q$. Near each vertex, $Q$ can be moved by a fractional linear transformation $z \to (az+b)/(cz+d)$ to (part of) a wedge, consisting of complex numbers with argument between $0$ and $\alpha$, where $\alpha$ is the angle of $Q$ at that vertex. The function $f(z) = z^{\alpha/\pi}$ uniformizes the upper half-plane as such a wedge; however, it is not clear how to combine the contributions from each vertex, because of the complicated interaction with the fractional linear transformations. The fundamental observation is that there are certain natural holomorphic differential operators which are insensitive to the composition of a holomorphic function with groups of fractional linear transformations, and the uniformizing map can be expressed much more simply in terms of such operators.

For example, two functions that differ by addition of a constant have the same derivative: $f' = (f+c)'$. Functions that differ by multiplication by a constant have the same logarithmic derivative: $(\log(f))' = (\log(cf))'$. Putting these two observations together suggests defining the nonlinearity of a function as the composition $N(f):= (\log(f'))' = f''/f'$. This has the property that $N(af+b) = N(f)$ for any constants $a,b$. Under inversion $z \to 1/z$ the nonlinearity transforms by $N(1/f) = N(f) - 2f'/f$. From this, and a simple calculation, one deduces that the operator $N' - N^2/2$ is invariant under inversion, and since it is also invariant under addition and multiplication by constants, it is invariant under the full group of fractional linear transformations. This combination is called the Schwarzian derivative; explicitly, it is given by the formula $S(f) = f'''/f' - (3/2)(f''/f')^2$. Given the Schwarzian derivative $S(f)$, one may recover the nonlinearity $N(f)$ by solving the Riccati equation $N' - N^2/2 - S = 0$. As explained in this post, solutions of the Riccati equation preserve the projective structure on the line; in this case, it is a complex projective structure on the complex line. Equivalently, different solutions differ by an element of $\text{PSL}(2,\mathbb{C})$, acting by fractional linear transformations, as we have just deduced. Once we know the nonlinearity, we can solve for $f$ by $f = \int e^{\int N}$, since $N = (\log(f'))'$. The Schwarzian of the function $z^{\alpha/\pi}$ is $(1-\alpha^2/\pi^2)/2z^2$.
The advantage of expressing things in these terms is that the Schwarzian of a uniformizing map for a circular polygon $Q$ with angles $\alpha_i$ at the vertices has the form of a rational function, with principal parts $a_i/(z-z_i)^2 + b_i/(z-z_i)$, where the $a_i = (1-\alpha_i^2/\pi^2)/2$ and the $b_i$ and $z_i$ depend (unfortunately in a very complicated way) on the edges of $Q$ (for the ugly truth, see Nehari, chapter 5). To see this, observe that $S(f)$ has an order two pole at each of finitely many points $z_i$ (the preimages of the vertices of $Q$ under the uniformizing map) but is otherwise holomorphic. Moreover, it can be analytically continued into the lower half plane across the interval between successive $z_i$, by reflecting the image across each circular edge. After reflecting twice, the image of $Q$ is transformed by a fractional linear transformation, so $S(f)$ has an analytic continuation which is single valued on the entire Riemann sphere, with finitely many isolated poles, and is therefore a rational function! When the edges of the polygon are straight, a simpler formula involving the nonlinearity specializes to the “familiar” Schwarz-Christoffel formula.
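The identities quoted above can be checked symbolically; in the sketch below (my own, not from the post), `N` and `S` implement the nonlinearity and the Schwarzian, and we verify both the formula for the Schwarzian of $z^{\alpha/\pi}$ (writing $a = \alpha/\pi$) and the invariance of $S$ under fractional linear transformations:

```python
import sympy as sp

z, a = sp.symbols('z a')

def N(f):
    """nonlinearity N(f) = (log f')' = f''/f'"""
    return sp.diff(f, z, 2) / sp.diff(f, z)

def S(f):
    """Schwarzian S(f) = N' - N^2/2 = f'''/f' - (3/2)(f''/f')^2"""
    n = N(f)
    return sp.simplify(sp.diff(n, z) - n ** 2 / 2)

# the Schwarzian of z^a is (1 - a^2)/(2 z^2); set a = alpha/pi to recover
# the formula quoted above
assert sp.simplify(S(z ** a) - (1 - a ** 2) / (2 * z ** 2)) == 0

# S is blind to postcomposition with a fractional linear transformation
f = z ** 3 + z
g = (2 * f + 1) / (f - 3)
assert sp.simplify(S(g) - S(f)) == 0
print("both identities check out")
```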

(Update 10/22): In fact, I went to the library to refresh myself on the contents of Nehari, chapter 5. The first thing I noticed — which I had forgotten — was that if $f$ is the uniformizing map from the upper half-plane to a polygon $Q$ with spherical arcs, then $S(f)$ is real-valued on the real axis. Since it is a rational function, this implies that its nonsingular part is actually a constant; i.e.

$S(f) = \sum_i \left( a_i/(z-z_i)^2 + b_i/(z-z_i) \right) + c$

where $a_i$ is as above, and $z_i,b_i,c$ are real constants (which satisfy some further conditions — really see Nehari this time for more details).

The other thing that struck me was the first paragraph of the preface, which touches on some of the issues I alluded to above:

In the preface to the first edition of Courant-Hilbert’s “Methoden der mathematischen Physik”, R. Courant warned against a trend discernible in modern mathematics in which he saw a menace to the future development of mathematical analysis. He was referring to the tendency of many workers in this field to lose sight of the roots of mathematical analysis in physical and geometric intuition and to concentrate their efforts on the refinement and the extreme generalization of existing concepts.

Instead of using a word like “menace”, I would rather take this as a lesson about the value of returning to the points of view that led to the creation of the mathematical objects we study every day; which was (to some approximation) the point I was trying to illustrate in this post.

A beautiful identity in Euclidean geometry is the Brianchon-Gram relation (also called the Gram-Sommerville formula, or Gram’s equation), which says the following: let $P$ be a convex polytope, and for each face $F$ of $P$, let $\omega(F)$ denote the solid angle along the face, as a fraction of the volume of a linking sphere. The relation then says:

Theorem (Brianchon-Gram relation): $\sum_{F \subset P} (-1)^{\text{dim}F} \omega(F)=0$. In other words, the alternating sum of the (solid) angles of all dimensions of a convex polytope is zero.

Sketch of Proof: we prove the theorem in the case that $P$ is a simplex $\Delta$; the more general case follows by generalizing to pyramids, and then decomposing any polytope into pyramids by coning to an interior point. This argument is due to Shephard.

Associated to each face $F$ is a spherical polyhedron $A(F)$ in $S^{n-1}$; if the span of $F$ is the intersection of a family of half-spaces bounded by hyperplanes $H_i$ with inward normals $n_i$, then $A(F)$ is the set of unit vectors $v \in S^{n-1}$ whose inner product with each $n_i$ is non-negative. Note further that for each $v \in S^{n-1}$ there is some $n_i$ that pairs non-negatively with $v$; consequently to each $v \in S^{n-1}$ one can assign a subset $I(v)$ of indices, so that $n_i$ pairs non-negatively with $v$ if and only if $i \in I(v)$. On the other hand, each subset $J \subset I(v)$ determines a unique face $F(J)$ of dimension $n - |J|$. By the inclusion-exclusion formula, we conclude that $\sum_{F} (-1)^{\text{dim}F}A(F)$ “equals” zero, thought of as a signed union of spherical polyhedra. Since $\omega(F) = \text{vol}(A(F))/\text{vol}(S^{n-1})$, the formula follows. qed.

Another well-known proof starts by approximating the polytope by a rational polytope (i.e. one with rational vertices). The proof then goes via Macdonald reciprocity, using generating functions.

Example: Let $T$ be a triangle, with angles $\alpha,\beta,\gamma$. The solid angle along the face itself (the interior) is $1$, the solid angle along each edge is $1/2$, and the solid angle at a vertex is the vertex angle divided by $2\pi$. Hence we get $(\alpha + \beta + \gamma)/2\pi - 3/2 + 1 = 0$, and therefore in this case Brianchon-Gram is equivalent to the familiar angle sum identity for a triangle: $\alpha + \beta + \gamma = \pi$.

Example: Next consider the example of a Euclidean $3$-simplex (tetrahedron) $S$. The contribution from the interior is $-1$, and the contribution from the four facets is $2$. There are six edges, with dihedral angles $\alpha_i$, that contribute $-\sum \alpha_i/2\pi$ (the sign coming from $(-1)^{\text{dim}}$). Each vertex contributes one spherical triangle, with (spherical) angles $\alpha_i,\alpha_j,\alpha_k$ for certain $i,j,k$, where each $\alpha_i$ appears as a spherical angle in exactly two spherical triangles. The Gauss-Bonnet theorem implies that the area of a spherical triangle is equal to its angle sum defect: $\text{area}_{ijk} = \alpha_i + \alpha_j + \alpha_k - \pi$, so the vertices contribute $(2\sum \alpha_i - 4 \pi)/4\pi$ and the identity is seen to follow in this case too.

Note in fact that the usual proof of Gauss-Bonnet for a spherical triangle is done by an inclusion-exclusion argument involving overlapping lunes, that is very similar to the proof of Brianchon-Gram given above.
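In dimension $3$ the relation can be tested numerically for a random tetrahedron, computing the vertex solid angles by the Van Oosterom-Strackee formula and the dihedral angles by projecting away each edge; the helpers below are my own sketch, not from the original:

```python
import numpy as np
from itertools import combinations

def vertex_solid_angle(p, q, r, s):
    """solid angle of tetrahedron pqrs at the vertex p (Van Oosterom-Strackee)"""
    a, b, c = q - p, r - p, s - p
    na, nb, nc = map(np.linalg.norm, (a, b, c))
    num = abs(np.linalg.det(np.stack([a, b, c])))
    den = na * nb * nc + np.dot(a, b) * nc + np.dot(a, c) * nb + np.dot(b, c) * na
    return 2.0 * np.arctan2(num, den)

def dihedral_angle(p, q, r, s):
    """interior dihedral angle of tetrahedron pqrs along the edge pq"""
    e = q - p
    u = (r - p) - np.dot(r - p, e) / np.dot(e, e) * e   # project away the edge
    v = (s - p) - np.dot(s - p, e) / np.dot(e, e) * e
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

V = np.random.default_rng(0).standard_normal((4, 3))  # a random tetrahedron

total = -1.0            # interior: dim 3, omega = 1
total += 4 * 0.5        # four facets: dim 2, omega = 1/2 each
for i, j in combinations(range(4), 2):                # six edges: dim 1
    k, l = (m for m in range(4) if m not in (i, j))
    total -= dihedral_angle(V[i], V[j], V[k], V[l]) / (2 * np.pi)
for i in range(4):                                    # four vertices: dim 0
    j, k, l = (m for m in range(4) if m != i)
    total += vertex_solid_angle(V[i], V[j], V[k], V[l]) / (4 * np.pi)
print(total)  # the alternating sum: should vanish up to roundoff
```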

The sketch of proof above just as easily proves an identity in the spherical scissors congruence group. For $X^n$ equal to spherical, Euclidean or hyperbolic space of dimension $n$, the scissors congruence group $\mathcal{P}(X^n)$ is the abelian group generated by formal symbols $(x_0,x_1,\cdots,x_n,\alpha)$ where $x_i \in X^n$ and $\alpha$ is a choice of orientation, modulo certain relations, namely:

1. $(x_0,x_1,\cdots,x_n,\alpha)=0$ if the $x_i$ are contained in a hyperplane
2. an odd permutation of the points induces multiplication by $-1$; changing the orientation induces multiplication by $-1$
3. if $g$ is an isometry of $X^n$, then $(x_0,\cdots,x_n,\alpha) = (gx_0,\cdots,gx_n,g_*\alpha)$
4. $\sum_i (-1)^i (x_0,\cdots,\widehat{x_i},\cdots,x_{n+1},\alpha) = 0$ for any set of $n+2$ points, and any orientation $\alpha$

(Note that this definition of scissors congruence is consistent with that of Goncharov, and differs slightly from another definition consistent with Sah; this difference has to do with orientations, and has as a consequence the vanishing of spherical scissors congruence in even dimensions; whereas with Sah’s definition, $\mathcal{P}(S^{2n}) = \mathcal{P}(S^{2n-1})$ for each $n$)

The argument we gave above shows that for any Euclidean simplex $\Delta$, we have $\sum_F(-1)^{\text{dim}F} A(F) = 0$ in $\mathcal{P}(S^{n-1})$.

Scissors congruence satisfies several fundamental properties:

1. $S^n = 0$ in $\mathcal{P}(S^n)$. To see this, “triangulate” the sphere as a pair of degenerate simplices, whose vertices lie entirely on a hyperplane.
2. There is a natural multiplication $\mathcal{P}(S^{a-1}) \otimes \mathcal{P}(S^{b-1}) \to \mathcal{P}(S^{a+b-1})$; to define it on simplices, think of $S^{a+b-1}$ as the unit sphere in $\mathbb{R}^{a+b}$. A complementary pair of subspaces $\mathbb{R}^a$ and $\mathbb{R}^b$ intersect $S^{a+b-1}$ in a linked pair of spheres of dimensions $a-1,b-1$; if $\Delta_a,\Delta_b$ are spherical simplices in these subspaces, the image of $\Delta_a \otimes \Delta_b$ is the join of these two simplices in $S^{a+b-1}$.

It follows that $A(F)=0$ in $\mathcal{P}(S^{n-1})$ whenever $F$ is a face of dimension at least $1$; for in this case, $A(F)$ is the join of a spherical simplex with a sphere of some dimension, and is therefore trivial in spherical scissors congruence. Hence the identity above simplifies to $\sum_v A(v)=0$ in $\mathcal{P}(S^{n-1})$.

One nice application is to extend the definition of Dehn invariants to ideal hyperbolic simplices. We recall the definition of the usual Dehn invariant. Given a simplex $P \subset X^n$, for each face $F$ we let $\angle(F)$ denote the spherical polyhedron equal to the intersection of $P$ with the link of $F$. Then $D(P) = \sum_F F\otimes \angle(F) \in \oplus_i \mathcal{P}(X^{n-i})\otimes \mathcal{P}(S^{i-1})$. Scissors congruence makes sense for ideal hyperbolic simplices, except in dimension one (where it is degenerate). For ideal hyperbolic simplices (i.e. those with some vertices at infinity), the formula above for the Dehn invariant is adequate, except for the $1$-dimensional faces (i.e. the edges) $e$. This problem is solved by the following “regularization” procedure due to Thurston: put a disjoint horoball at each ideal vertex of $P$, and replace each infinite edge $e$ by the finite edge $e'$ which is the intersection of $e$ with the complement of the union of horoballs; hence one obtains terms of the form $e' \otimes \angle(e)$ in $D(P)$. This definition apparently depends on the choice of horoballs. However, if $H,H'$ are two different horoballs, the difference is a sum of terms of the form $c \otimes \angle(e)$ where $c$ is constant, and $e$ ranges over the edges sharing the common ideal vertex. The intersection of $P$ with a horosphere is a Euclidean simplex $\Delta$, and the $\angle(e)$ are exactly the spherical polyhedra $A(v)$ as $v$ ranges over the vertices of $\Delta$. By what we have shown above, the sum $\sum_v A(v)$ is trivial in scissors congruence; it follows that $D(P)$ is well-defined.

For more general ideal polyhedra (and finite volume complete hyperbolic manifolds) one first decomposes into ideal simplices, then computes the Dehn invariant on each piece and adds. A minor variation of the usual argument on closed manifolds shows that the Dehn invariant of any complete finite-volume hyperbolic manifold vanishes.

Update(7/29/2009): It is perhaps worth remarking that the Brianchon-Gram relation can be thought of, not merely as an identity in spherical scissors congruence, but in the “bigger” spherical polytope group, in which one does not identify simplices that differ by an isometry. Incidentally, there is an interesting paper on this subject by Peter McMullen, in which he proves generalizations of Brianchon-Gram(-Sommerville), working explicitly in the spherical polytope group. He introduces what amounts to a generalization of the Dehn invariant, with domain the Euclidean translational scissors congruence group, and range a sum of tensor products of Euclidean translational scissors congruence (in lower dimensions) with spherical polytope groups. It appears, from the paper, that McMullen was aware of the classical Dehn invariant (in any case, he was aware of Sah’s book) but he does not refer to it explicitly.