You are currently browsing the category archive for the ‘Visualization’ category.

It’s been a while since I last blogged; the reason, of course, is that I felt that I couldn’t post anything new before completing my series of posts on Kähler groups; but I wasn’t quite ready to write my last post, because I wanted to get to the bottom of a few analytic details in the notorious Gromov-Schoen paper. I am not quite at the bottom yet, but maybe closer than I was; but I’m still pretty far from having collected my thoughts to the point where I can do them justice in a post. So I’ve finally decided to put Kähler groups on the back burner for now, and resume my usual very sporadic blogging habits.

So the purpose of this blog post is to advertise that I wrote a little piece of software called kleinian which uses the GLUT tools to visualize Kleinian groups (or, more accurately, interesting hyperbolic polyhedra invariant under such groups). The software can be downloaded from my github repository at

https://github.com/dannycalegari/kleinian

and then compiled from the command line with “make”. It should work out of the box on OS X; Alden Walker tells me he has successfully gotten it to compile on (Ubuntu) Linux, which required tinkering with the makefile a bit, and installing freeglut3-dev. There is a manual on the github page with a detailed description of file formats and so on.

Read the rest of this entry »

The purpose of this brief blog post is to advertise that I wrote a little piece of software called wireframe which can be used to quickly and easily produce .eps figures of surfaces for inclusion in papers. The main use is that one can specify a graph in an ASCII file, and the program will then render a nice 3d picture of a surface obtained as the boundary of a tubular neighborhood of the graph. The software can be downloaded from my github repository at

https://github.com/dannycalegari/wireframe 

and then compiled on any unix machine running X-windows (e.g. linux, mac OSX) with “make”.

The program is quite rudimentary, but I believe it should be useful even in its current state. Users are strenuously encouraged to tinker with it, modify it, improve it, etc. If you use the program and find it useful (or not), please let me know.

A couple of examples of output (which can be created in about 5 minutes) are:

braid_iso

and

punct

(added Feb. 20, 2013): I couldn’t resist; here’s another example:

hand

(update April 12, 2013:) Scott Taylor used wireframe to produce a nice figure of a handlebody (in 3-space) having the Kinoshita graph as a spine. He kindly let me post his figure here, as an example. Thanks Scott!

KinoshitaHandlebody

My eldest daughter Lisa recently brought home a note from her school from her computer class teacher. Apparently, the 5th grade kids have been learning to program in Logo, in the MicroWorlds programming environment. I have very pleasant memories of learning to program in Logo back when I was in middle school. If you’re not familiar with Logo, it’s a simple variant of Lisp designed by Seymour Papert, whereby the programmer directs a turtle cursor to move about the screen, moving forward some distance, turning left or right, etc. The turtle can also be directed to raise or lower a pen, and one can draw very pretty pictures in Logo as the track of the turtle’s motion.

Let’s restrict our turtle’s movements to alternating between taking a step of a fixed size S, and turning either left or right through some fixed angle A. Then a (compiled) “program” is just a finite string in the two letter alphabet L and R, indicating the direction of turning at each step. A “random turtle” is one for which the choice of L or R at each step is made randomly, say with equal probability, and choices made independently at each step. The motion of a Euclidean random turtle on a small scale is determined by its turning angle A, but on a large scale “looks like” Brownian motion. Here are two examples of Euclidean random turtles for A=45 degrees and A=60 degrees respectively.

turtle_Euclid
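A Euclidean random turtle is easy to simulate; here is a minimal sketch in Python (function and parameter names are my own):

```python
import math
import random

def random_turtle(n_steps, angle_deg, step=1.0, seed=0):
    """Simulate a Euclidean random turtle: repeatedly move forward by
    `step`, then turn left or right by `angle_deg` with equal probability.
    Returns the list of visited positions."""
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
        heading += math.radians(angle_deg) * rng.choice([-1, 1])
    return path
```

Plotting the returned path for a few thousand steps reproduces pictures like the ones above.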

The purpose of this blog post is to describe the behavior of a random turtle in the hyperbolic plane, and the appearance of an interesting phase transition at \sin(A/2) = \tanh(S/2). This example illustrates nicely some themes in probability and group dynamics, and lends itself easily to visualization.
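Here is a sketch of where a transition like this comes from (my own illustration, for the deterministic turtle that always turns the same way, not the random one): one period of the motion is a hyperbolic translation of length S followed by a rotation by A, represented in SL(2,R) by a matrix of trace 2 cosh(S/2) cos(A/2), and the type of the resulting isometry is read off from this trace:

```python
import math

def classify_turn_step(step, angle_deg, tol=1e-9):
    """Type of the isometry 'translate distance `step`, then rotate by
    `angle_deg`' of the hyperbolic plane, read off from the trace of a
    representative in SL(2,R): |tr| < 2 elliptic, = 2 parabolic,
    > 2 hyperbolic."""
    tr = 2.0 * math.cosh(step / 2.0) * math.cos(math.radians(angle_deg) / 2.0)
    if abs(tr) < 2.0 - tol:
        return "elliptic"
    if abs(tr) > 2.0 + tol:
        return "hyperbolic"
    return "parabolic"

# small turning angle: the turtle marches off to infinity (hyperbolic);
# large turning angle: it circles around a fixed point (elliptic)
```

The crossover happens where cos(A/2) cosh(S/2) = 1, which is equivalent to sin(A/2) = tanh(S/2).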

Read the rest of this entry »

I am spending a few months in Göttingen as a Courant Distinguished Visiting Professor, and talking a bit to Laurent Bartholdi about rational functions — i.e. holomorphic maps from the Riemann sphere \widehat{\mathbb C} to itself. A rational function is determined (up to multiplication by a constant) by its zeroes and poles, and can therefore generically be put in the form f:z \to P(z)/Q(z) where P and Q are polynomials of degree d. If d=1 then f is invertible, and is called a fractional linear transformation (or, sometimes, a Möbius transformation). The critical points are the zeroes of P'Q-Q'P; note that this is a polynomial of degree \le 2d-2 (not 2d-1) and the images of these points under f are the critical values. Again, generically, there will be 2d-2 critical values; let’s call them V. Precomposing f with a fractional linear transformation will not change the set of critical values.
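One can watch the top-degree terms of P'Q and Q'P cancel in examples; here is a small pure-Python sketch (helper names are mine, coefficients listed from the highest degree down):

```python
def polyder(p):
    """Derivative of a polynomial given by coefficients, highest degree first."""
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])] or [0]

def polymul(a, b):
    """Product of two coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polysub(a, b):
    """Difference of two coefficient lists (padded to equal length)."""
    n = max(len(a), len(b))
    a = [0] * (n - len(a)) + a
    b = [0] * (n - len(b)) + b
    return [x - y for x, y in zip(a, b)]

def wronskian(P, Q):
    """Coefficients of P'Q - Q'P, whose roots are the critical points
    of z -> P(z)/Q(z). The degree-(2d-1) terms always cancel."""
    return polysub(polymul(polyder(P), Q), polymul(polyder(Q), P))
```

For f(z) = (z^2+1)/(z^2-1), for instance, P'Q - Q'P = -4z, of degree 1 \le 2d-2 = 2.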

The map f cannot usually be recovered from V (even up to precomposition with a fractional linear transformation); one needs to specify some extra global topological information. If we let \overline{C} denote the preimage of V under f, and let C denote the subset consisting of critical points, then the restriction f:\widehat{\mathbb C} - \overline{C} \to \widehat{\mathbb C} - V is a covering map of degree d, and to specify the rational map we must specify both V and the topological data of this covering. Let’s assume for convenience that 0 is not a critical value. To specify the rational map is to give both V and a representation \rho:\pi_1(\widehat{\mathbb C} - V,0) \to S_d (here S_d denotes the group of permutations of the set \lbrace 1,2,\cdots,d\rbrace) which describes how the branches of f^{-1} are permuted by monodromy about V. Such a representation is not arbitrary, of course; first of all it must be irreducible (i.e. not conjugate into S_e \times S_{d-e} for any 1\le e \le d-1) so that the cover is connected. Second of all, the cover must be topologically a sphere. Let’s call the (branched) cover \Sigma for the moment, before we know what it is. The Riemann-Hurwitz formula lets one compute the Euler characteristic of \Sigma from the representation \rho. A nice presentation for \pi_1(\widehat{\mathbb C}-V,0) has generators e_i represented by small loops around the points v_i \in V, and the relation \prod_{i=1}^{|V|} e_i = 1. For each e_i define o_i to be the number of orbits of \rho(e_i) on the set \lbrace 1,2,\cdots,d\rbrace. Then

\chi(\Sigma) = d\chi(S^2) - \sum_i (d-o_i)

If each \rho(e_i) is a transposition (i.e. in the generic case), then o_i=d-1 and we recover the fact that |V|=2d-2.
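The Riemann-Hurwitz count above is easy to implement; a sketch, with permutations acting on \lbrace 0,\dots,d-1\rbrace as tuples:

```python
def num_orbits(p):
    """Number of orbits (cycles) of a permutation given as a tuple."""
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

def euler_char(perms, d):
    """chi(Sigma) = d*chi(S^2) - sum_i (d - o_i) for the branched cover with
    monodromy permutations `perms` (their product should be the identity)."""
    return 2 * d - sum(d - num_orbits(p) for p in perms)
```

For example, two transpositions in S_2 (the cover z \to z^2) give \chi = 2, a sphere; four transpositions in S_2 give \chi = 0, a torus.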

This raises the following natural question:

Basic Question: Given a set of points V in the Riemann sphere, and an irreducible representation \rho:\pi_1(\widehat{\mathbb C} - V,0) \to S_d satisfying \sum_i (d-o_i) = 2d-2, what are the coefficients of the rational function z \to P(z)/Q(z) that they determine (up to precomposition by a fractional linear transformation)?

Read the rest of this entry »

The purpose of this blog post is to try to give some insight into the “meaning” of the Hall-Witt identity in group theory. This identity can look quite mysterious in its algebraic form, but there are several ways of describing it geometrically which are more natural and easier to understand.

If G is a group, and a,b are elements of G, the commutator of a and b (denoted [a,b]) is the expression aba^{-1}b^{-1} (note: algebraists tend to use the convention that [a,b]=a^{-1}b^{-1}ab instead). Commutators (as their name suggests) measure the failure of a pair of elements to commute, in the sense that ab=[a,b]ba. Since [a,b]^c = [a^c,b^c], the property of being a commutator is invariant under conjugation (here the superscript c means conjugation by c; i.e. a^c:=cac^{-1}; again, the algebraists use the opposite convention).
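Both identities can be checked by machine on small permutation groups; here is a sketch using the conventions of the post ([a,b] = aba^{-1}b^{-1} and a^c = cac^{-1}):

```python
def compose(p, q):
    """Composition (p o q)(i) = p[q[i]]; permutations of {0,...,n-1} as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """Inverse permutation."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def commutator(a, b):
    """[a, b] = a b a^(-1) b^(-1)."""
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def conj(a, c):
    """a^c = c a c^(-1)."""
    return compose(compose(c, a), inverse(c))
```

Looping over all of S_3, one can verify ab = [a,b]ba and [a,b]^c = [a^c,b^c] for every choice of a, b, c.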

Read the rest of this entry »

I am Alden, one of Danny’s students. Error/naivete that may (will) be found here is mine. In these posts, I will attempt to give notes from Danny’s class on hyperbolic geometry (157b). This first post covers some models for hyperbolic space.

1. Models

We have a very good natural geometric understanding of {\mathbb{E}^3}, i.e. 3-space with the euclidean metric. Pretty much all of our geometric and topological intuition about manifolds (Riemannian or not) comes from finding some reasonable way to embed or immerse them (perhaps locally) in {\mathbb{E}^3}. Let us look at some examples of 2-manifolds.

  • Example (curvature = 1) {S^2} with its standard metric embeds in {\mathbb{E}^3}; moreover, any isometry of {S^2} is the restriction of (exactly one) isometry of the ambient space (this group of isometries being {O(3)}). We could not ask for anything more from an embedding.
  • Example (curvature = 0) Planes embed similarly.
  • Example (curvature = -1) The pseudosphere gives an example of an isometric embedding of a manifold with constant curvature -1. Consider a person standing in the plane at the origin. The person holds a string attached to a rock at {(0,1)}, and they proceed to walk due east dragging the rock behind them. The movement of the rock is always straight towards the person, and its distance from the person is always 1 (the string does not stretch). The line traced out by the rock is a tractrix. Draw the right triangle whose hypotenuse is the segment of the tangent line from the point {(x,y)} on the curve to the {x}-axis (this segment is the string, of length 1) and whose vertical side is the vertical segment of length {y} down to the {x}-axis. The horizontal side then has length {\sqrt{1-y^2}}, which shows that the tractrix is the solution to the differential equation

    \displaystyle \frac{dy}{dx} = \frac{-y}{\sqrt{1-y^2}}

    The Tractrix

    The surface of revolution about the {x}-axis is the pseudosphere, an isometric embedding of a surface of constant curvature -1. Like the sphere, there are some isometries of the pseudosphere that we can understand as isometries of {\mathbb{E}^3}, namely rotations about the {x}-axis. However, there are lots of isometries which do not extend, so this embedding does not serve us all that well.

     

  • Example (hyperbolic space) By the Nash embedding theorem, there is a {\mathcal{C}^1} immersion of {\mathbb{H}^2} in {\mathbb{E}^3}, but by Hilbert, there is no {\mathcal{C}^2} immersion of any complete hyperbolic surface.

    That last example is the important one to consider when thinking about hyperbolic spaces. Intuitively, manifolds with negative curvature have a hard time fitting in euclidean space because volume grows too fast — there is not enough room for them. The solution is to find (local, or global in the case of {\mathbb{H}^2}) models for hyperbolic manifolds such that the geometry is distorted from the usual euclidean geometry, but the isometries of the space are clear.
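The tractrix equation above is easy to integrate numerically; here is a sketch (the starting point just below the singular point (0,1) and the step size are arbitrary choices of mine):

```python
import math

def tractrix(x_max, y0=0.999, h=1e-4):
    """Integrate dy/dx = -y / sqrt(1 - y^2) with RK4, starting just below
    the singular point (0, 1) where the string is vertical. Revolving the
    resulting curve about the x-axis gives the pseudosphere."""
    def f(y):
        return -y / math.sqrt(1.0 - y * y)
    x, y, pts = 0.0, y0, [(0.0, y0)]
    while x < x_max:
        k1 = f(y)
        k2 = f(y + h * k1 / 2)
        k3 = f(y + h * k2 / 2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
        pts.append((x, y))
    return pts
```

A sanity check on the output: at every point, the tangent segment from the curve down to the {x}-axis (the "string") has length 1.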

    2. 1-Dimensional Models for Hyperbolic Space

    While studying 1-dimensional hyperbolic space might seem simplistic, there are nice models for which higher dimensions are simple generalizations of the 1-dimensional case, and in one dimension things are simple enough that we can understand them quite thoroughly.

    2.1. Hyperboloid Model

    Parameterizing {H}

    Consider the quadratic form {\langle \cdot, \cdot \rangle_H} on {\mathbb{R}^2} defined by {\langle v, w \rangle_A = \langle v, w \rangle_H = v^TAw}, where {A = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]}. This doesn’t give a norm, since {A} is not positive definite, but we can still ask for the set of points {v} with {\langle v, v \rangle_H = -1}. This is (both sheets of) the hyperbola {x^2-y^2 = -1}. Let {H} be the upper sheet of the hyperbola. This will be 1-dimensional hyperbolic space.

    For any {n\times n} matrix {B}, let {O(B) = \{ M \in \mathrm{Mat}(n,\mathbb{R}) \, | \, \langle v, w \rangle_B = \langle Mv, Mw \rangle_B \}}. That is, {O(B)} consists of the matrices which preserve the form given by {B}. The condition is equivalent to requiring that {M^TBM = B}. Notice that if we let {B} be the identity matrix, we would get the regular orthogonal group. We define {O(p,q) = O(B)}, where {B} has {p} positive eigenvalues and {q} negative eigenvalues. Thus {O(1,1) = O(A)}. We similarly define {SO(1,1)} to be the matrices of determinant 1 preserving {A}, and {SO_0(1,1)} to be the connected component of the identity. {SO_0(1,1)} is then the group of matrices preserving both orientation and the sheets of the hyperbola.

    We can find an explicit form for the elements of {SO_0(1,1)}. Consider the matrix {M = \left[ \begin{array}{cc} a & b \\ c& d \end{array} \right]}. Writing down the equations {M^TAM = A} and {\det(M) = 1} gives us four equations, which we can solve to get the solutions

    \displaystyle \left[ \begin{array}{cc} \sqrt{b^2+1} & b \\ b & \sqrt{b^2+1} \end{array} \right] \textrm{ and } \left[ \begin{array}{cc} -\sqrt{b^2+1} & b \\ b & -\sqrt{b^2+1} \end{array} \right].

    Since we are interested in the connected component of the identity, we discard the solution on the right. It is useful to do a change of variables {b = \sinh(t)} (recall that {\cosh^2(t) - \sinh^2(t) = 1}), so we have

    \displaystyle SO_0(1,1) = \left\{ \left[ \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{array} \right] \, | \, t \in \mathbb{R} \right\}

    These matrices take {\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]} to {\left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]}. In other words, {SO_0(1,1)} acts transitively on {H} with trivial stabilizers, and in particular we have parameterizing maps

    \displaystyle \mathbb{R} \rightarrow SO_0(1,1) \rightarrow H \textrm{ defined by } t \mapsto \left[ \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{array} \right] \mapsto \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]

    The first map is actually a Lie group isomorphism (with the group action on {\mathbb{R}} being {+}) in addition to a diffeomorphism, since

    \displaystyle \left[ \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{array} \right] \left[ \begin{array}{cc} \cosh(s) & \sinh(s) \\ \sinh(s) & \cosh(s) \end{array} \right] = \left[ \begin{array}{cc} \cosh(t+s) & \sinh(t+s) \\ \sinh(t+s) & \cosh(t+s) \end{array} \right]
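The defining property {M^TAM = A} and the one-parameter group law can be verified numerically; a minimal sketch:

```python
import math

def boost(t):
    """The element of SO_0(1,1) with parameter t."""
    c, s = math.cosh(t), math.sinh(t)
    return [[c, s], [s, c]]

def matmul(M, N):
    """Product of 2x2 matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Multiplying `boost(t)` by `boost(s)` reproduces `boost(t + s)`, which is the addition-formula identity displayed above.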

    Metric

    As mentioned above, {\langle \cdot, \cdot \rangle_H} is not positive definite, but its restriction to the tangent space of {H} is. We can see this in the following way: tangent vectors at a point {p \in H} are characterized by the form {\langle \cdot, \cdot \rangle_H}. Specifically, {v\in T_pH \Leftrightarrow \langle v, p \rangle_H = 0}, since (by a calculation) {\left. \frac{d}{dt} \right|_{t=0} \langle p+tv, p+tv \rangle_H = 0 \Leftrightarrow \langle v, p \rangle_H = 0}. Therefore, {SO_0(1,1)} takes tangent vectors to tangent vectors and preserves the form (and is transitive), so we only need to check that the form is positive definite on one tangent space. This is obvious on the tangent space to the point {\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]}. Thus, {H} is a Riemannian manifold, and {SO_0(1,1)} acts by isometries.

    Let’s use the parameterization {\phi: t \mapsto \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]}. The unit (in the {H} metric) tangent at {\phi(t) = \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right]} is {\left[ \begin{array}{c} \cosh(t) \\ \sinh(t) \end{array} \right]}. The distance between the points {\phi(s)} and {\phi(t)} is

    \displaystyle d_H(\phi(s), \phi(t)) = \left| \int_s^t\sqrt{\left\langle \left[ \begin{array}{c} \cosh(v) \\ \sinh(v) \end{array} \right], \left[ \begin{array}{c} \cosh(v) \\ \sinh(v) \end{array} \right] \right\rangle_H} \, dv \right| = \left|\int_s^t dv \right| = |t-s|

    In other words, {\phi} is an isometry from {\mathbb{E}^1} to {H}.

    1-dimensional hyperbolic space. The hyperboloid model is shown in blue, and the projective model is shown in red. An example of the projection map identifying {H} with {(-1,1) \subseteq \mathbb{R}\mathrm{P}^1} is shown.

    2.2. Projective Model

    Parameterizing

    Real projective space {\mathbb{R}\mathrm{P}^1} is the set of lines through the origin in {\mathbb{R}^2}. We can think about {\mathbb{R}\mathrm{P}^1} as {\mathbb{R} \cup \{\infty\}}, where {x\in \mathbb{R}} is associated with the line (point in {\mathbb{R}\mathrm{P}^1}) intersecting {\{y=1\}} at {(x,1)}, and {\infty} is the horizontal line. There is a natural projection {\mathbb{R}^2 \setminus \{0\} \rightarrow \mathbb{R}\mathrm{P}^1} sending a point to the line it is on. Under this projection, {H} maps to {(-1,1)\subseteq \mathbb{R} \subseteq \mathbb{R}\mathrm{P}^1}.

    Since {SO_0(1,1)} acts on {\mathbb{R}^2} preserving the lines {y = \pm x}, it gives a projective action on {\mathbb{R}\mathrm{P}^1} fixing the points {\pm 1}. Now suppose we have any projective linear isomorphism of {\mathbb{R}\mathrm{P}^1} fixing {\pm 1}. The isomorphism is represented by a matrix {A \in \mathrm{PGL}(2,\mathbb{R})} with eigenvectors {\left[ \begin{array}{c} 1 \\ \pm 1 \end{array} \right]}. Since scaling {A} preserves its projective class, we may assume it has determinant 1. Its eigenvalues are thus {\lambda} and {\lambda^{-1}}. The determinant equation, plus the fact that

    \displaystyle A \left[ \begin{array}{c} 1 \\ \pm 1 \end{array} \right] = \left[ \begin{array}{c} \lambda^{\pm 1} \\ \pm \lambda^{\pm 1} \end{array} \right]

    implies that {A} is of the form of a matrix in {SO_0(1,1)}. Therefore, the projective linear structure on {(-1,1) \subseteq \mathbb{R}\mathrm{P}^1} is the “same” (has the same isometry (isomorphism) group) as the hyperbolic (Riemannian) structure on {H}.

    Metric

    Clearly, we’re going to use the pushforward metric under the projection of {H} to {(-1,1)}, but it turns out that this metric is a natural choice for other reasons, and it has a nice expression.

    The map taking {H} to {(-1,1) \subseteq \mathbb{R}\mathrm{P}^1} is {\psi: \left[ \begin{array}{c} \sinh(t) \\ \cosh(t) \end{array} \right] \mapsto \frac{\sinh(t)}{\cosh(t)} = \tanh(t)}. The hyperbolic distance between {x} and {y} in {(-1,1)} is then {d_H(x,y) = |\tanh^{-1}(x) - \tanh^{-1}(y)|} (by the fact from the previous sections that {\phi} is an isometry).

    Recall the fact that {\tanh(a\pm b) = \frac{\tanh(a) \pm \tanh(b)}{1 \pm \tanh(a)\tanh(b)}}. Applying this with {a = \tanh^{-1}(y)} and {b = \tanh^{-1}(x)}, we get the nice form

    \displaystyle \tanh(d_H(x,y)) = \frac{y-x}{1 - xy}

    We also recall the cross ratio, for which we fix notation as { (z_1, z_2; z_3, z_4) := \frac{(z_3 -z_1)(z_4-z_2)}{(z_2-z_1)(z_4-z_3)}}. Then

    \displaystyle (-1, x;y,1 ) = \frac{(y+1)(1-x)}{(x+1)(1-y)} = \frac{1-xy + (y-x)}{1-xy + (x-y)}

    Call the numerator of that fraction {N} and the denominator {D}. Then, recalling that {\tanh(u) = \frac{e^{2u}-1}{e^{2u}+1}}, we have

    \displaystyle \tanh(\frac{1}{2} \log(-1,x;y,1)) = \frac{\frac{N}{D} -1}{\frac{N}{D} +1} = \frac{N-D}{N+D} = \frac{2(y-x)}{2(1-xy)} = \tanh(d_H(x,y))

    Therefore, {d_H(x,y) = \frac{1}{2}\log(-1,x;y,1)}.
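The agreement between the cross-ratio formula and the distance {|\tanh^{-1}(x) - \tanh^{-1}(y)|} can be checked numerically; a sketch:

```python
import math

def cross_ratio(z1, z2, z3, z4):
    """(z1, z2; z3, z4) = (z3 - z1)(z4 - z2) / ((z2 - z1)(z4 - z3))."""
    return (z3 - z1) * (z4 - z2) / ((z2 - z1) * (z4 - z3))

def d_projective(x, y):
    """Hyperbolic distance on (-1, 1) via the cross ratio with the endpoints."""
    return abs(0.5 * math.log(cross_ratio(-1.0, x, y, 1.0)))

def d_hyperboloid(x, y):
    """The same distance pulled back through the isometry t -> tanh(t)."""
    return abs(math.atanh(x) - math.atanh(y))
```

The two functions agree to floating-point precision on any pair of points in (-1, 1).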

    3. Hilbert Metric

    Notice that the expression on the right above has nothing, a priori, to do with the hyperbolic projection. In fact, for any open convex body {C} in {\mathbb{R}\mathrm{P}^n}, we can define the Hilbert metric on {C} by setting {d_H(p,q) = \frac{1}{2}\log(a,p;q,b)}, where {a} and {b} are the intersections of the line through {p} and {q} with the boundary of {C}. How is it possible to take the cross ratio, since {a,p,q,b} are not numbers? The line containing all of them is projectively isomorphic to {\mathbb{R}\mathrm{P}^1}, which we can parameterize as {\mathbb{R} \cup \{\infty\}}. The cross ratio does not depend on the choice of parameterization, so it is well defined. Note that the Hilbert metric is not necessarily a Riemannian metric, but it does make any open convex set into a metric space.

    Therefore, we see that any open convex body in {\mathbb{R}\mathrm{P}^n} has a natural metric, and the hyperbolic metric on {H = (-1,1)} agrees with this metric when {(-1,1)} is thought of as an open convex set in {\mathbb{R}\mathrm{P}^1}.

    4. Higher-Dimensional Hyperbolic Space

    4.1. Hyperboloid

    The higher dimensional hyperbolic spaces are completely analogous to the 1-dimensional case. Consider {\mathbb{R}^{n+1}} with the basis {\{e_i\}_{i=1}^n \cup \{e\}} and the symmetric bilinear form {\langle v, w \rangle_H = \sum_{i=1}^n v_iw_i - v_{n+1}w_{n+1}}. This is the form defined by the matrix {J = I \oplus (-1)}. Define {\mathbb{H}^n} to be the positive (positive in the {e} direction) sheet of the hyperboloid {\langle v,v\rangle_H = -1}.

    Let {O(n,1)} be the linear transformations preserving the form, so {O(n,1) = \{ A \, | \, A^TJA = J\}}. This group is generated by {O(1,1) \subseteq O(n,1)} as symmetries of the {e_1, e} plane, together with {O(n) \subseteq O(n,1)} as symmetries of the span of the {e_i} (this subspace is euclidean). The group {SO_0(n,1)} is the set of orientation preserving elements of {O(n,1)} which preserve the positive sheet of the hyperboloid ({\mathbb{H}^n}). This group acts transitively on {\mathbb{H}^n} with point stabilizers {SO(n)}: this is easiest to see by considering the point {(0,\cdots, 0, 1) \in \mathbb{H}^n}. Here the stabilizer is clearly {SO(n)}, and because {SO_0(n,1)} acts transitively, any stabilizer is a conjugate of this.

    As in the 1-dimensional case, the metric on {\mathbb{H}^n} is {\langle \cdot , \cdot \rangle_H|_{T_p\mathbb{H}^n}}, which is invariant under {SO_0(n,1)}.

    Geodesics in {\mathbb{H}^n} can be understood by considering the fixed point sets of isometries, which are always totally geodesic. Here, reflection in a vertical (containing {e}) plane restricts to an (orientation-reversing, but that’s ok) isometry of {\mathbb{H}^n}, and the fixed point set is obviously the intersection of this plane with {\mathbb{H}^n}. Now {SO_0(n,1)} is transitive on {\mathbb{H}^n}, and it sends planes to planes in {\mathbb{R}^{n+1}}, so we have a bijection

    {totally geodesic subspaces of {\mathbb{H}^n} through {p}} {\leftrightarrow} {intersections of {\mathbb{H}^n} with linear subspaces of {\mathbb{R}^{n+1}} through {p}}

    By considering planes through {e}, we can see that these totally geodesic subspaces are isometric to lower dimensional hyperbolic spaces.
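A quick numerical check that the generators described above preserve the form (the helper names and 0-based indexing are mine):

```python
import math

def J(n):
    """Matrix of <v, w>_H on R^(n+1): the identity with a -1 in the last slot."""
    M = [[1.0 if i == j else 0.0 for j in range(n + 1)] for i in range(n + 1)]
    M[n][n] = -1.0
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def preserves_form(M, n, tol=1e-9):
    """Check M^T J M = J, i.e. M lies in O(n, 1)."""
    MT = [list(r) for r in zip(*M)]
    P = matmul(MT, matmul(J(n), M))
    return all(abs(P[i][j] - J(n)[i][j]) < tol
               for i in range(n + 1) for j in range(n + 1))

def boost(n, t):
    """An O(1,1) symmetry of the (e_1, e) plane, extended by the identity."""
    M = [[1.0 if i == j else 0.0 for j in range(n + 1)] for i in range(n + 1)]
    M[0][0] = M[n][n] = math.cosh(t)
    M[0][n] = M[n][0] = math.sinh(t)
    return M

def rotation(n, i, j, theta):
    """An O(n) symmetry of the span of the e_i (indices i, j < n),
    extended by the identity."""
    M = [[1.0 if a == b else 0.0 for b in range(n + 1)] for a in range(n + 1)]
    M[i][i] = M[j][j] = math.cos(theta)
    M[i][j], M[j][i] = -math.sin(theta), math.sin(theta)
    return M
```

Products of boosts and rotations stay in the group, as they must.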

    4.2. Projective

    Analogously, we define the projective model as follows: consider the disk {\{v \,|\, v_{n+1} = 1, \langle v,v \rangle_H < 0\}}, i.e. the points in the plane {v_{n+1} = 1} inside the cone {\langle v,v \rangle_H = 0}. We can think of {\mathbb{R}\mathrm{P}^n} as {\mathbb{R}^n \cup \mathbb{R}\mathrm{P}^{n-1}}, so this disk is {D^\circ \subseteq \mathbb{R}^n \subseteq \mathbb{R}\mathrm{P}^n}. There is, as before, the natural projection of {\mathbb{H}^n} to {D^\circ}, and the pushforward of the hyperbolic metric agrees with the Hilbert metric on {D^\circ} as an open convex body in {\mathbb{R}\mathrm{P}^n}.

    Geodesics in the projective model are the intersections of planes in {\mathbb{R}^{n+1}} with {D^\circ}; that is, they are geodesics in the euclidean space spanned by the {e_i}. One interesting consequence of this is that any theorem which is true in euclidean geometry and does not rely on facts about angles is still true for hyperbolic space. For example, Pappus’ hexagon theorem, the proof of which does not use angles, is true.

    4.3. Projective Model in Dimension 2

    In the case that {n=2}, we can understand the projective isomorphisms of {\mathbb{H}^2 = D \subseteq \mathbb{R}\mathrm{P}^2} by looking at their actions on the boundary {\partial D}. The set {\partial D} is projectively isomorphic to {\mathbb{R}\mathrm{P}^1} as an abstract manifold, but it should be noted that {\partial D} is not a straight line in {\mathbb{R}\mathrm{P}^2}, which would be the most natural way to find {\mathbb{R}\mathrm{P}^1}‘s embedded in {\mathbb{R}\mathrm{P}^2}.

    In addition, any projective isomorphism of {\mathbb{R}\mathrm{P}^1 \cong \partial D} can be extended to a real projective isomorphism of {\mathbb{R}\mathrm{P}^2}. In other words, we can understand isometries of 2-dimensional hyperbolic space by looking at the action on the boundary. Since {\partial D} is not a straight line, the extension is not trivial. We now show how to do this.

    The automorphisms of {\partial D \cong \mathbb{R}\mathrm{P}^1} are {\mathrm{PSL}(2,\mathbb{R})}. We will consider {\mathrm{SL}(2,\mathbb{R})}. For any Lie group {G}, there is an adjoint action {\mathrm{Ad}: G \rightarrow \mathrm{Aut}(T_eG)} defined by (the derivative of) conjugation. We can similarly define an adjoint action {\mathrm{ad}} of the Lie algebra on itself, as {\mathrm{ad}(\gamma '(0)) := \left. \frac{d}{dt} \right|_{t=0} \mathrm{Ad}(\gamma(t))} for any path {\gamma} with {\gamma(0) = e}. If the tangent vectors {v} and {w} are matrices, then {\mathrm{ad}(v)(w) = [v,w] = vw-wv}.

    We can define the Killing form {B} on the Lie algebra by {B(v,w) = \mathrm{Tr}(\mathrm{ad}(v)\mathrm{ad}(w))}. Note that {\mathrm{ad}(v)} is a matrix, so this makes sense, and the Lie group acts on the tangent space (Lie algebra) preserving this form.

    Now let’s look at {\mathrm{SL}(2,\mathbb{R})} specifically. A basis for the tangent space (Lie algebra) is {e_1 = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right]}, {e_2 = \left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right]}, and {e_3 = \left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right]}. We can check that {[e_1,e_2] = e_3}, {[e_1,e_3] = -2e_1}, and {[e_2, e_3]=2e_2}. Using these relations plus the antisymmetry of the Lie bracket, we know

    \displaystyle \mathrm{ad}(e_1) = \left[ \begin{array}{ccc} 0 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right] \qquad \mathrm{ad}(e_2) = \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 2 \\ -1 & 0 & 0 \end{array}\right] \qquad \mathrm{ad}(e_3) = \left[ \begin{array}{ccc} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{array}\right]

    Therefore, the matrix for the Killing form in this basis is

    \displaystyle B_{ij} = B(e_i,e_j) = \mathrm{Tr}(\mathrm{ad}(e_i)\mathrm{ad}(e_j)) = \left[ \begin{array}{ccc} 0 & 4 & 0 \\ 4 & 0 & 0 \\ 0 & 0 & 8 \end{array}\right]

    This matrix has 2 positive eigenvalues and one negative eigenvalue, so its signature is {(2,1)}. Since {\mathrm{SL}(2,\mathbb{R})} acts on {T_e(\mathrm{SL}(2,\mathbb{R}))} preserving this form, we get a map {\mathrm{SL}(2,\mathbb{R}) \rightarrow O(2,1)} with kernel {\pm I}, identifying {\mathrm{PSL}(2,\mathbb{R})} with the connected component of {O(2,1)}, otherwise known as the group of orientation-preserving isometries of the disk in projective space {\mathbb{R}\mathrm{P}^2}, otherwise known as {\mathbb{H}^2}.
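The bracket relations, the ad matrices, and the Killing form can all be recomputed mechanically; a sketch in the basis above, writing a traceless 2x2 matrix [[a,b],[c,-a]] as b e_1 + c e_2 + a e_3:

```python
def matmul2(a, b):
    """Product of 2x2 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(v, w):
    """Lie bracket [v, w] = vw - wv."""
    p, q = matmul2(v, w), matmul2(w, v)
    return [[p[i][j] - q[i][j] for j in range(2)] for i in range(2)]

# the basis of sl(2, R) used in the text
e1 = [[0, 1], [0, 0]]
e2 = [[0, 0], [1, 0]]
e3 = [[1, 0], [0, -1]]
basis = [e1, e2, e3]

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the basis (e1, e2, e3)."""
    return [m[0][1], m[1][0], m[0][0]]

def ad(v):
    """ad(v) as a 3x3 matrix: its j-th column is coords([v, e_j])."""
    cols = [coords(bracket(v, e)) for e in basis]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def killing(v, w):
    """B(v, w) = Tr(ad(v) ad(w))."""
    A, B = ad(v), ad(w)
    return sum(A[i][k] * B[k][i] for i in range(3) for k in range(3))
```

Tabulating `killing` over the basis reproduces the matrix displayed above, with signature (2,1).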

    Any element of {\mathrm{PSL}(2,\mathbb{R})} (which, recall, was acting on the boundary of projective hyperbolic space {\partial D}) therefore extends to an element of {O(2,1)}, the isometries of hyperbolic space, i.e. we can extend the action over the disk.

    This means that we can classify isometries of 2-dimensional hyperbolic space by what they do to the boundary, which is determined generally by their eigenvectors ({\mathrm{PSL}(2,\mathbb{R})} acts on {\mathbb{R}\mathrm{P}^1} by projecting the action on {\mathbb{R}^2}, so an eigenvector of a matrix corresponds to a fixed line in {\mathbb{R}^2}, and hence to a fixed point in {\mathbb{R}\mathrm{P}^1 \cong \partial D}). For a matrix {A}, we have the following:

     

  • {|\mathrm{Tr}(A)| < 2} (elliptic) In this case, there are no real eigenvalues, so no real eigenvectors. The action here is rotation, which extends to a rotation of the entire disk.
  • {|\mathrm{Tr}(A)| = 2} (parabolic) There is a single real eigenvector. There is a single fixed point, to which all other points are attracted (in one direction) and repelled from (in the other). For example, the action in projective coordinates sending {[x:y]} to {[x+1:y]}: infinity is such a fixed point.
  • {|\mathrm{Tr}(A)| > 2} (hyperbolic) There are two fixed points, one attracting and one repelling.

    5. Complex Hyperbolic Space

    We can do a construction analogous to real hyperbolic space over the complexes. Define a Hermitian form {q} on {\mathbb{C}^{n+1}} with coordinates {\{z_1,\cdots, z_n\} \cup \{w\}} by {q(z_1,\cdots, z_n, w) = |z_1|^2 + \cdots + |z_n|^2 - |w|^2}. We will also refer to {q} as {\langle \cdot, \cdot \rangle_q}. The (complex) matrix for this form is {J = I \oplus (-1)}, where {q(v,w) = v^*Jw}. Complex linear isomorphisms preserving this form are matrices {A} such that {A^*JA = J}. This is our definition for {\mathrm{U}(q) := \mathrm{U}(n,1)}, and we define {\mathrm{SU}(n,1)} to be those elements of {\mathrm{U}(n,1)} with determinant 1.

    The set of points {z} such that {q(z) = -1} is not quite what we are looking for: first it is a {2n+1} real dimensional manifold (not {2n} as we would like for whatever our definition of “complex hyperbolic {n} space” is), but more importantly, {q} does not restrict to a positive definite form on the tangent spaces. Call the set of points {z} where {q(z) = -1} by {\bar{H}}. Consider a point {p} in {\bar{H}} and {v} in {T_p\bar{H}}. As with the real case, by the fact that {v} is in the tangent space,

    \displaystyle \left. \frac{d}{dt} \right|_{t=0} \langle p + tv, p+tv\rangle_q = 0 \quad \Rightarrow \quad \langle v, p \rangle_q + \langle p,v \rangle_q = 0

    Because {q} is hermitian, the expression on the right does not mean that {\langle v,p\rangle_q = 0}, but it does mean that {\langle v,p \rangle_q} is purely imaginary. In particular, {v = ip} is a tangent vector, and {\langle ip, ip \rangle_q = \langle p, p \rangle_q = -1 < 0}, i.e. {q} is not positive definite on the tangent spaces.

    However, we can get rid of this negative definite subspace. {S^1} as the complex numbers of unit length (or {\mathrm{U}(1)}, say) acts on {\mathbb{C}^{n+1}} by multiplying coordinates, and this action preserves {q}: any phase goes away when we apply the absolute value. The quotient of {\bar{H}} by this action is {\mathbb{C}\mathbb{H}^n}. The isometry group of this space is still {\mathrm{U}(n,1)}, but now there are point stabilizers because of the action of {\mathrm{U}(1)}. We can think of {\mathrm{U}(1)} inside {\mathrm{U}(n,1)} as the diagonal matrices, so we can write

    \displaystyle \mathrm{U}(n,1) = \mathrm{SU}(n,1) \cdot \mathrm{U}(1)

    And the projectivized group {\mathrm{PSU}(n,1)} is the group of isometries of {\mathbb{C}\mathbb{H}^n \subseteq \mathbb{C}^n \subseteq \mathbb{C}\mathrm{P}^n}, where the middle {\mathbb{C}^n} is the set of vectors in {\mathbb{C}^{n+1}} with {w=1} (which we think of as part of complex projective space). We can also approach this group by projectivizing, since that will get rid of the unwanted point stabilizers too: we have {\mathrm{PU}(n,1) \cong \mathrm{PSU}(n,1)}.

    5.1. Case {n=1}

    In the case {n=1}, we can actually picture {\mathbb{C}\mathrm{P}^1}. We can’t picture the original {\mathbb{C}^2} (which has 4 real dimensions), but we are looking at the set of {(z,w)} such that {|z|^2 - |w|^2 = -1}. Notice that {|w| \ge 1}. After projectivizing, we may divide by {w}, so {|z/w|^2 - 1 = -1/|w|^2}. The set of points {z/w} which satisfy this is the interior of the unit circle, so this is what we think of for {\mathbb{C}\mathbb{H}^1}. The group of complex projective isometries of the disk is {\mathrm{PU}(1,1)}. The horizontal diameter is a geodesic, and the complex isometries send circles to circles, so the geodesics in {\mathbb{C}\mathbb{H}^1} are arcs of circles perpendicular to the boundary circle {S^1 \subseteq \mathbb{C}}.

    Imagine the real projective model as a disk sitting at height one, and the geodesics are the intersections of planes with the disk. Complex hyperbolic space is the upper hemisphere of a sphere of radius one with equator the boundary of real hyperbolic space. To get the geodesics in complex hyperbolic space, intersect a plane with this upper hemisphere and stereographically project it flat. This gives the familiar Poincaré disk model.

    5.2. Real {\mathbb{H}^2}‘s contained in {\mathbb{C}\mathbb{H}^n}

    {\mathbb{C}\mathbb{H}^n} (for {n \ge 2}) contains two kinds of real hyperbolic planes. The subset of real points in {\mathbb{C}\mathbb{H}^n} is (real) {\mathbb{H}^n}, so we have many {\mathbb{H}^2 \subseteq \mathbb{H}^n \subseteq \mathbb{C}\mathbb{H}^n}. In addition, we have copies of {\mathbb{C}\mathbb{H}^1}, which, as discussed above, has the same geometry (i.e. the same isometry group) as real {\mathbb{H}^2}. However, these two kinds of real hyperbolic planes are not isometric to each other inside {\mathbb{C}\mathbb{H}^n}: the complex hyperbolic line {\mathbb{C}\mathbb{H}^1} has more negative curvature than the real hyperbolic planes. If we scale the metric on {\mathbb{C}\mathbb{H}^n} so that the real hyperbolic planes have curvature {-1}, then the copies of {\mathbb{C}\mathbb{H}^1} will have curvature {-4}.

    In a similar vein, there is a symplectic structure on {\mathbb{C}\mathbb{H}^n} (the Kähler form) for which the real {\mathbb{H}^2}'s are lagrangian subspaces (the flattest ones), and the {\mathbb{C}\mathbb{H}^1}'s are symplectic subspaces (the most negatively curved).

    An important thing to mention is that complex hyperbolic space does not have constant curvature (!).

    6. Poincare Disk Model and Upper Half Space Model

    The projective models that we have been dealing with have many nice properties, especially the fact that geodesics in hyperbolic space are straight lines in projective space. However, the angles are wrong. There are models in which the straight lines are “curved” (i.e. curved in the euclidean metric), but the angles between them are accurate. Here we are interested in a group of isometries which preserves angles, so we are looking for a conformal model. Dimension 2 is special, because complex geometry is real conformal geometry, but nevertheless, in every dimension there is a model of {\mathbb{R}\mathbb{H}^n} in which the isometries of the space are conformal.

    Consider the unit disk {D^n} in {n} dimensions. The conformal automorphisms of the disk are the maps taking the set of (straight) diameters and arcs of circles perpendicular to the boundary to itself. This model is abstractly isomorphic to the Klein model in projective space. Imagine the unit disk in a flat plane at height one, with an upper hemisphere over it. The geodesics in the Klein model are the intersections of this flat disk with linear subspaces (so they are straight chords, for example, in dimension 2). Intersecting vertical planes with the upper hemisphere and stereographically projecting flat gives the geodesics in the Poincare disk model. The fact that this model is the “same” (up to scaling the metric) as the example above of {\mathbb{C}\mathbb{H}^1} is a (nice) coincidence.
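    The hemisphere recipe can be turned into a few lines of code (the coordinates and sample point below are just for illustration): lift a Klein point vertically to the hemisphere, then stereographically project from the south pole, and check that this inverts the standard Poincare-to-Klein formula.

```python
import math

# lift a Klein-model point (x, y) in the unit disk vertically to the upper
# unit hemisphere, then stereographically project from the south pole
# (0, 0, -1) back to the disk; the result is the Poincare-model point
def klein_to_poincare(x, y):
    h = math.sqrt(1 - x * x - y * y)     # height of the lifted point
    return x / (1 + h), y / (1 + h)      # stereographic projection

def poincare_to_klein(x, y):
    d = 1 + x * x + y * y
    return 2 * x / d, 2 * y / d

u = (0.6, -0.3)
p = klein_to_poincare(*u)
# the two conversions are inverse to each other
assert all(math.isclose(a, b) for a, b in zip(poincare_to_klein(*p), u))
# the Poincare point sits strictly closer to the origin
assert math.hypot(*p) < math.hypot(*u) < 1
print("Klein", u, "-> Poincare", p)
```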

    The Klein model is the flat disk inside the sphere, and the Poincare disk model is the sphere. Geodesics in the Klein model are intersections of subspaces (the angled plane) with the flat plane at height 1. Geodesics in the Poincare model are intersections of vertical planes with the upper hemisphere. The two darkened geodesics, one in the Klein model and one in the Poincare, correspond under orthogonal projection. We get the usual Poincare disk model by stereographically projecting the upper hemisphere to the disk. The projection of the geodesic is shown as the curved line inside the disk.

    The Poincare disk model. A few geodesics are shown.

    Now we have the Poincare disk model, where the geodesics are straight diameters and arcs of circles perpendicular to the boundary, and the isometries are the conformal automorphisms of the unit disk. There is a conformal map from the disk to an open half space (we typically choose to conformally identify it with the upper half space). Conveniently, the hyperbolic metric {d_H} on the upper half space can be expressed at a point {(x,t)} (in euclidean coordinates, where {t} is the height) as {d_H = d_E/t}; i.e. the hyperbolic metric is just a rescaling (at each point) of the euclidean metric.
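    As a sanity check on the formula {d_H = d_E/t}, a crude Riemann sum (step count chosen arbitrarily) recovers the exact hyperbolic length {\log(b/a)} of a vertical segment from height {a} to height {b}:

```python
import math

# the hyperbolic length element rescales the euclidean one by 1/t, so a
# vertical segment from (0, a) to (0, b) has exact length log(b/a);
# approximate the integral of 1/t by a midpoint Riemann sum
def vertical_length(a, b, steps=100000):
    dt = (b - a) / steps
    return sum(dt / (a + (k + 0.5) * dt) for k in range(steps))

a, b = 1.0, math.e
assert math.isclose(vertical_length(a, b), math.log(b / a), rel_tol=1e-6)
print("numerical length:", vertical_length(a, b))
```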

    One of the important things that we wanted in our models was the ability to realize isometries of the model by isometries of the ambient euclidean space. For a one-parameter family of isometries of hyperbolic space, this is possible. Suppose that we have the one-parameter family of elliptic isometries fixing some interior point. Then in the disk model, we can move that point to the origin and realize the isometries by euclidean rotations. For a one-parameter family of parabolic isometries, in the upper half space model we can move the common fixed point to infinity, and realize them by euclidean translations.
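    Here is a small numerical illustration of the elliptic case: conjugating a euclidean rotation about the origin by a standard Mobius automorphism of the disk produces an isometry fixing an arbitrary interior point {a} (the sample values of {a} and the rotation angle are arbitrary).

```python
import math, cmath

# disk automorphism with phi(0) = a, and its inverse
def mobius_to(a):
    return lambda z: (z + a) / (1 + a.conjugate() * z)

def mobius_from(a):
    return lambda z: (z - a) / (1 - a.conjugate() * z)

a, theta = 0.4 + 0.2j, 1.3
phi, phi_inv = mobius_to(a), mobius_from(a)
# move a to the origin, rotate, move back: an elliptic isometry fixing a
g = lambda z: phi(cmath.exp(1j * theta) * phi_inv(z))

assert abs(g(a) - a) < 1e-12                 # g fixes a
for t in [0.0, 1.0, 2.5]:                    # g preserves the unit circle
    assert math.isclose(abs(g(cmath.exp(1j * t))), 1.0)
print("g is an elliptic isometry of the disk fixing", a)
```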

 

The other day at lunch, one of my colleagues — let’s call her “Wendy Hilton” to preserve her anonymity (OK, this is pretty bad, but perhaps not quite as bad as Clive James’s use of “Romaine Rand” as a pseudonym for “Germaine Greer” in Unreliable Memoirs . . .) — expressed some skepticism about a somewhat unusual assertion that I make at the start of my scl monograph. Since it is my monograph, I feel free to quote the offending paragraphs:

It is unfortunate in some ways that the standard way to refer to the plane emphasizes its product structure. This product structure is topologically unnatural, since it is defined in a way which breaks the natural topological symmetries of the object in question. This fact is thrown more sharply into focus when one discusses more rigid topologies.

At this point I give an example, namely that of the Zariski topology, pointing out that the product topology of two copies of the affine line with the Zariski topology is not the same as the Zariski topology on the affine plane. All well and good. I then go on to claim that part of the bias is biological in origin, citing the following example as evidence:

Example 1.2 (Primary visual cortex). The primary visual cortex of mammals (including humans), located at the posterior pole of the occipital cortex, contains neurons hardwired to fire when exposed to certain spatial and temporal patterns. Certain specific neurons are sensitive to stimulus along specific orientations, but in primates, more cortical machinery is devoted to representing vertical and horizontal than oblique orientations (see for example [58] for a discussion of this effect).

(Note: [58] is a reference to the paper “The distribution of oriented contours in the real world” by David Coppola, Harriett Purves, Allison McCoy, and Dale Purves, Proc. Natl. Acad. Sci. USA 95 (1998), no. 7, 4002–4006)

I think Wendy took this to be some kind of poetic license or conceit, and perhaps even felt that it was a bit out of place in a serious research monograph. On balance, I think I agree that it comes across as somewhat jarring and unexpected to the reader, and the tone and focus is somewhat inconsistent with that of the rest of the book. But I also think that in certain subjects in mathematics — and I would put low-dimensional geometry/topology in this category — we are often not aware of the extent to which our patterns of reasoning and imagination are shaped, limited, or (mis)directed by our psychological — and especially psychophysical — natures.

The particular question of how the mind conceives of, imagines, or perceives any mathematical object is complicated and multidimensional, and colored by historical, social, and psychological (not to mention mathematical) forces. It is generally a vain endeavor to find precise physical correlates of complicated mental objects, but in the case of the plane (or at least one cognitive surrogate, the subjective visual field) there is a natural candidate for such a correlate. Cells at the rear of the occipital lobe are arranged in a “map” in the region known as the “primary visual cortex”, or V1. There is a precise geometric relationship between the location of neurons in V1 and the points in the subjective visual field they correspond to. Further visual processing is done by other areas V2, V3, V4, V5 of the visual cortex. Information is fed forward from Vi to Vj with j>i, but also backward from Vj to Vi, so that visual information is processed at several levels of abstraction simultaneously, and the results of this processing are compared and refined in a complicated synthesis (this tends to make me think of the parallel terraced scan model of analogical reasoning put forward by Douglas Hofstadter and Melanie Mitchell; see Fluid concepts and creative analogies, Chapter 5).

The initial processing done by the V1 area is quite low-level; individual neurons are sensitive to certain kinds of stimuli, e.g. color, spatial periodicity (on various scales), motion, orientation, etc. As remarked earlier, more neurons are devoted to detecting horizontally or vertically aligned stimuli; in other words, our brains literally devote more hardware to perceiving or imagining vertical and horizontal lines than to lines with an oblique orientation. This is not to say that at some higher, more integrated level, our perception is not sensitive to other symmetries that our hardware does not respect, just as a random walk on a square lattice in the plane converges (after a parabolic rescaling) to Brownian motion (which is not just rotationally but conformally invariant). However, the fact is that humans perform statistically better on cognitive tasks that involve the perception of figures that are aligned along the horizontal and vertical axes than on similar tasks that differ only by a rotation of the figures.

It is perhaps interesting therefore that the earliest (?) mathematical conception of the plane, due to the Greeks, did not give a privileged place to the horizontal or vertical directions, but treated all orientations on an equal footing. In other words, in Greek (Euclidean) geometry, the definitions respect the underlying symmetries of the objects. Of course, from our modern perspective we would not say that the Greeks gave a definition of the plane at all, or at best we would say that the definition is woefully inadequate. According to one well-known translation, the plane is introduced as a special kind of surface as follows:

A surface is that which has length and breadth.

When a surface is such that the right line joining any two arbitrary points in it lies wholly in the surface, it is called a plane.

This definition of a surface looks as though it is introducing coordinates, but in fact one might just as well interpret it as defining a surface in terms of its dimension; having defined a surface (presumably thought of as being contained in some ambient undefined three-dimensional space) one defines a plane to be a certain kind of surface, namely one that is convex. Horizontal and vertical axes are never introduced. Perpendicularity is singled out as important, but the perpendicularity of two lines is a relative notion, whereas horizontality and verticality are absolute. In the end, Euclidean geometry is defined implicitly by its properties, most importantly isotropy (i.e. all right angles are equal to one another) and the parallel postulate, which singles it out from among several alternatives (elliptic geometry, hyperbolic geometry). In my opinion, Euclidean geometry is imprecise but natural (in the sense of category theory), because objects are defined in terms of the natural transformations they admit, and in a way that respects their underlying symmetries.

In the 15th century, the Italian artists of the Renaissance developed the precise geometric method of perspective painting (although the technique of representing more distant objects by smaller figures is extremely ancient). Its invention is typically credited to the architect and engineer Filippo Brunelleschi; one may speculate that the demands of architecture (i.e. the representation of precise 3-dimensional geometric objects in 2-dimensional diagrams) were among the stimuli that led to this invention (perhaps this suggestion is anachronistic?). Mathematically, this gives rise to the geometry of the projective plane, i.e. the space of lines through the origin (the “eye” of the viewer of a scene). In principle, one could develop projective geometry without introducing “special” directions or families of lines. However, in one, two, or three point perspective, families of lines parallel to one or several “special” coordinate axes (along which significant objects in the painting are aligned) appear to converge to one of the vanishing points of the painting. In his treatise “De pictura” (On Painting), Leon Battista Alberti (a friend of Brunelleschi) explicitly described the geometry of vision in terms of projections onto a (visual) plane. Amusingly (in the context of this blog post), he explicitly distinguishes between the mathematical and the visual plane:

In all this discussion, I beg you to consider me not as a mathematician but as a painter writing of these things.

Mathematicians measure with their minds alone the forms of things separated from all matter. Since we wish the object to be seen, we will use a more sensate wisdom.

I beg to differ: similar parts of the brain are used for imagining a triangle and for looking at a painting. Alberti’s claim sounds a bit too much like Gould’s “non-overlapping magisteria”, and in a way it is disheartening that it was made at a place and point in history at which mathematics and the visual arts were perhaps at their closest.

In the 17th century René Descartes introduced his coordinate system and thereby invented “analytic geometry”. To us it might not seem like such a big leap to go from a checkerboard floor in a perspective painting (or a grid of squares to break up the visual field) to the introduction of numerical coordinates to specify a geometrical figure, but Descartes’s ideas for the first time allowed mathematicians to prove theorems in geometry by algebraic methods. Analytic geometry is contrasted with “synthetic geometry”, in which theorems are deduced logically from primitive axioms and rules of inference. In some abstract sense, this is not a clear distinction, since algebra and analysis also rest on primitive axioms and rules of deduction. In my opinion, this terminology reflects a psychological distinction between “analytic methods”, in which one computes blindly and then thinks about what the results of the calculation mean afterwards, and “synthetic methods”, in which one has a mental model of the objects one is manipulating, and directly intuits the “meaning” of the operations one performs. Philosophically speaking, the first is formal, the second is platonic. Biologically speaking, the first does not make use of the primary visual cortex, the second does.

As significant as Descartes’s ideas were, mathematicians were slow to take real advantage of them. Complex numbers were invented by Cardano in the mid 16th century, but the idea of representing complex numbers geometrically, by taking the real and imaginary parts as Cartesian coordinates, had to wait until Argand in the early 19th century.

Incidentally, I have heard it said that the Greeks did not introduce coordinates because they drew their figures on the ground and looked at them from all sides, whereas Descartes and his contemporaries drew figures in books. Whether this has any truth to it or not, I do sometimes find it useful to rotate a mathematical figure I am looking at, in order to stimulate my imagination.

After Poincaré’s invention of topology in the late 19th century, there was a new kind of model of the plane to be (re)imagined, namely the plane as a topological space. One of the most interesting characterizations was obtained by the brilliantly original and idiosyncratic R. L. Moore in his paper, “On the foundations of plane analysis situs”. Let me first remark that the line can be characterized topologically in terms of its natural order structure; one might argue that this characterization more properly determines the oriented line, and this is a fair comment, but at least the object has been determined up to a finite ambiguity. Let me second of all remark that the characterization of the line in terms of order structures is useful; a (countable) group G is abstractly isomorphic to a group of (orientation-preserving) homeomorphisms of the line if and only if G admits an (abstract) left-invariant order.
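The orderability criterion admits a toy illustration (a finite spot check in Python, not a proof, and the translation action below is my own choice of example): the lexicographic order on Z^2 is left-invariant, so the theorem guarantees that Z^2 acts on the line; separately, the translation action (m,n): x -> x + m + n*sqrt(2) is a concrete faithful orientation-preserving action (one coming from a different, Archimedean, left-invariant order).

```python
import math
from itertools import product

pts = list(product(range(-2, 3), repeat=2))   # a finite window in Z^2

# spot-check left-invariance of the lexicographic order: a < b implies
# g + a < g + b (Python tuples compare lexicographically)
for g in pts:
    for a in pts:
        for b in pts:
            if a < b:
                assert (g[0] + a[0], g[1] + a[1]) < (g[0] + b[0], g[1] + b[1])

# the translation action x -> x + m + n*sqrt(2): since sqrt(2) is irrational,
# distinct (m, n) give distinct translation lengths, so the action is faithful
lengths = {m + n * math.sqrt(2) for m, n in pts}
assert len(lengths) == len(pts)
print("lexicographic order on Z^2 is left-invariant; Z^2 acts faithfully on R")
```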

Given points and the line, Moore proceeds to list a collection of axioms which serve to characterize the plane amongst topological spaces. The axioms are expressed in terms of separation properties of primitive undefined terms called points and regions (which correspond more or less to ordinary points and open sets homeomorphic to the interiors of closed disks respectively) and non-primitive objects called “simple closed curves” which are (eventually) defined in terms of simpler objects. Moore’s axioms are “natural” in the sense that they do not introduce new, unnecessary, unnatural structure (such as coordinates, a metric, special families of “straight” lines, etc.). The basic principle on which Moore’s axioms rest is that of separation — which continua separate which points from which others? If there is a psychophysical correlate of this mathematical intuition, perhaps it might be the proliferation of certain neurons in the primary visual cortex which are edge detectors — they are sensitive, not to absolute intensity, but to a spatial discontinuity in the intensity (associated with the “edge” of an object). The visual world is full of objects, and our eyes evolved to detect them, and to distinguish them from their surroundings (to distinguish figure from ground as it were). If I have an objection to Cartesian coordinates on biological grounds (I don’t, but for the sake of argument let’s suppose I do) then perhaps Moore should also be disqualified for similar reasons. Or rather, perhaps it is worth being explicitly aware, when we make use of a particular mathematical model or intellectual apparatus, of which aspects of it are necessary or useful because of their (abstract) applications to mathematics, and which are necessary or useful because we are built in such a way as to need or to be able to use them.

Let R be a polynomial in two variables; i.e. R(\lambda,\mu) = \sum_{i,j} a_{ij} \lambda^i\mu^j where each i,j is non-negative, and the coefficients a_{ij} are complex numbers which are nonzero for only finitely many pairs i,j. For a generic choice of coefficients, the equation R=0 determines a smooth complex curve \Sigma in \mathbb{C}^2 (i.e. a Riemann surface). How can one see the geometry of the curve directly in the expression for R? It turns out that there are several ways to do it, some very old, and some more recent.

The most important geometric invariant of the curve is the genus. To a topologist, this is the number of “handles”; to an algebraic geometer, this is the dimension of the space of holomorphic 1-forms. One well-known way to calculate the genus is by means of the Newton polygon. In the (real) plane \mathbb{R}^2, consider the finite set consisting of the points with integer coordinates (i,j) for which the coefficient a_{ij} of R is nonzero. The convex hull of this finite set is a convex integral polygon, called the Newton polygon of R. It turns out that the genus of \Sigma is the number of integer lattice points in the interior of the Newton polygon. In fact, one can find a basis for the space of holomorphic 1-forms directly from this formulation. Let R_\mu denote the partial derivative of R with respect to \mu. Then for each lattice point (i,j) in the interior of the Newton polygon, the 1-form (\lambda^i\mu^j/R_\mu) d\lambda is a holomorphic 1-form on \Sigma, and the set of all such forms is a basis for the space of all holomorphic 1-forms.

This is direct but a bit unsatisfying to a topologist, since the connection between the dimension of the space of 1-forms and the topological idea of handles is somewhat indirect. In some special cases, it is a bit easier to see things. Two important examples are:

  1. Hyperelliptic surfaces, i.e. equations of the form \lambda^2 = p(\mu) for some polynomial p(\cdot) of degree n. The Newton polygon in this case is the triangle with vertices (0,0), (2,0), (0,n), and it has \lfloor (n-1)/2 \rfloor interior lattice points. Geometrically one can “see” the surface by projecting to the \mu plane. For each generic value of \mu, the complex number p(\mu) has two distinct square roots, so the map is 2 to 1. However, at the n roots of p(\cdot), there is only one preimage. So the map is a double cover, branched over n points, and one can “see” the topology of the surface by cutting open two copies of the complex line along slits joining pairs of branch points, and gluing.
  2. A generic curve of degree d. The Newton polygon in this case is the triangle with vertices (0,0), (d,0), (0,d), and it has (d-1)(d-2)/2 interior lattice points. One way to “see” the surface in this case is to first imagine d lines in general position (a quite special degree d curve). Each pair of lines intersects in a point, so there are d(d-1)/2 points of intersection. After deforming the curve, these points of intersection are resolved into tubes, so one obtains d complex lines joined by d(d-1)/2 tubes. The first d-1 tubes are needed to tube the lines together into a (multiply)-punctured plane, and the remaining (d-1)(d-2)/2 tubes each add one to the genus.
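Both counts are easy to verify by computer, using Pick's theorem to count interior lattice points (the ranges tested below are arbitrary):

```python
from math import gcd

# count lattice points strictly inside a lattice triangle via Pick's theorem:
# Area = I + B/2 - 1, so I = (2*Area - B)/2 + 1, where B is the number of
# boundary lattice points (the gcd of each edge vector, summed over edges)
def interior_points(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    area2 = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))  # twice the area
    edges = [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
    B = sum(gcd(abs(ax - bx), abs(ay - by)) for (ax, ay), (bx, by) in edges)
    return (area2 - B) // 2 + 1

# hyperelliptic: triangle (0,0), (2,0), (0,n) has floor((n-1)/2) interior points
for n in range(2, 12):
    assert interior_points([(0, 0), (2, 0), (0, n)]) == (n - 1) // 2

# degree d curve: triangle (0,0), (d,0), (0,d) has (d-1)(d-2)/2 interior points
for d in range(1, 12):
    assert interior_points([(0, 0), (d, 0), (0, d)]) == (d - 1) * (d - 2) // 2
print("interior lattice point counts match the genus formulas")
```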

It turns out that there is a nice way to directly see the topology of \Sigma in the Newton polygon, via tropical geometry. I recently learned about this idea from Mohammed Abouzaid in one of his Clay lectures; this point of view was pioneered by Grisha Mikhalkin. The idea is as follows. First consider the restriction of \Sigma to the product \mathbb{C}^* \times \mathbb{C}^*; i.e. remove the intersection with the coordinate axes. For generic R, this amounts to removing a finite number of points from \Sigma, which will not change the genus. Then on this punctured curve \Sigma, consider the real valued function (\lambda,\mu) \to (\log(|\lambda|),\log(|\mu|)). The image is a subset of \mathbb{R}^2, called an amoeba. If one varies the (nonzero) coefficients of R generically, the complex geometry of the curve \Sigma will change, but its topology will not. Hence to see the topology of \Sigma one should deform the coefficients in such a way that the topology of the amoeba can be read off from combinatorial information, encoded in the Newton polygon. The terms in R corresponding to lattice points in a boundary edge of the Newton polygon sum to a polynomial which is homogeneous after a suitable change of coordinates. In the region in which these terms dominate, \Sigma looks more and more like a collection of cylinders, each asymptotic to a cone on some points at infinity. The image in the amoeba is a collection of asymptotically straight rays. If the polynomial were genuinely homogeneous, the preimage of each point in the amoeba would be a circle, parameterized by a choice of argument of (a certain root of) either \lambda or \mu. So the amoeba looks like a compact blob with a collection of spikes coming off. 
As one deforms the coefficients in a suitable way, the compact blob degenerates into a piecewise linear graph which can be read off from purely combinatorial data, and the topology of \Sigma can be recovered by taking the boundary of a thickened tubular neighborhood of this graph.
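One can see the spikes numerically in the simplest example, the “pair of pants” 1 + z + w = 0 (the sample points below are chosen arbitrarily): its amoeba has three spikes, in the directions (-1,0), (0,-1) and (1,1), dual to the edges of the Newton triangle with vertices (0,0), (1,0), (0,1).

```python
import cmath, math

# the amoeba of 1 + z + w = 0 in C* x C*: the image of the curve under
# (z, w) -> (log|z|, log|w|); solve the defining equation for w
def amoeba_point(z):
    w = -1 - z
    return math.log(abs(z)), math.log(abs(w))

# far out on the curve, w ~ -z, so the amoeba follows the ray of slope 1
x, y = amoeba_point(1e6 * cmath.exp(0.3j))
assert abs(x - y) < 1e-3

# for tiny z, w ~ -1: the spike going off in the direction (-1, 0)
x, y = amoeba_point(1e-6 * cmath.exp(0.3j))
assert x < -10 and abs(y) < 1e-3
print("amoeba spikes behave as predicted")
```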

More explicitly, one chooses a certain triangulation of the Newton polygon into triangles of area 1/2 and with vertices at integer lattice points (by Pick’s theorem this is equivalent to the condition that each triangle and each edge has no lattice points in the interior). This triangulation must satisfy an additional combinatorial condition, namely that there must exist a convex piecewise linear function on the Newton polygon whose domains of linearity are precisely the triangles. This convex function is used to deform the coefficients of R; roughly, if f is the function, choose the coefficient a_{ij} \sim e^{f(i,j)t} and take the limit as t gets very big. The convexity of f guarantees that in the preimage of each triangle of the Newton polygon, the terms of R that contribute the most are those corresponding to the vertices of the triangle. In particular, as t goes to infinity, the amoeba degenerates to the dual spine of the triangle (i.e. a tripod). The preimage of this tripod is a pair of pants; after a change of coordinates, any given triangle can be taken to have vertices (0,0), (1,0), (0,1) corresponding to a linear equation a\lambda + b\mu = c whose solution set in \mathbb{C}^* \times \mathbb{C}^* (for generic a,b,c) is a line minus two points — i.e. a pair of pants.

One therefore has a concrete combinatorial description of the degenerate amoeba: pick a triangulation of the Newton polygon satisfying the combinatorial conditions above. Let \Gamma be the graph dual to the triangulation, with edges dual to boundary edges of the triangulation extended indefinitely. The surface \Sigma is obtained by taking the boundary of a thickened neighborhood of \Gamma. The genus of \Sigma is equal to the rank of the first homology of the graph \Gamma; this is evidently equal to the number of lattice points in the interior of the polygon.

As a really concrete example, consider a polynomial like

R = 1 + 7z^3 - 23.6w^2 + e^\pi z^3w^2

(the exact coefficients are irrelevant; the only issue is to choose them generically enough that the resulting curve is smooth (actually I did not check in this case – please pretend that I did!)). The Newton polygon is a rectangle with vertices (0,0), (3,0), (0,2), (3,2). This can be subdivided into twelve triangles of area 1/2 as in the following figure:

Newton_polygon_1

The dual spine is then the following:

Newton_polygon_2

which evidently has first homology H_1 of rank 2, equal on the one hand to the number of interior lattice points in the Newton polygon, and on the other hand to the genus of \Sigma.
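In fact this count is forced by an Euler characteristic computation, which one can spell out for this example (assuming, as in the figure, a triangulation into triangles of area 1/2):

```python
# a unimodular triangulation (all triangles of area 1/2) of a lattice polygon
# with I interior and B boundary lattice points has T = 2I + B - 2 triangles
# (computing the area two ways via Pick's theorem) and E = (3T - B)/2 interior
# edges (each triangle has 3 edges; interior edges are shared by 2 triangles).
# The dual spine is connected with T vertices and E edges, so
# rank H_1 = E - T + 1, which simplifies to exactly I.
I = 2    # interior lattice points of the rectangle: (1,1) and (2,1)
B = 10   # boundary lattice points of the 3 x 2 rectangle
T = 2 * I + B - 2
E = (3 * T - B) // 2
assert T == 12                  # the twelve triangles in the figure
assert E - T + 1 == I == 2      # rank of H_1 of the dual spine = genus
print("rank H_1 of the dual spine:", E - T + 1)
```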
