You are currently browsing the monthly archive for September 2009.

Let $R$ be a polynomial in two variables; i.e. $R(\lambda,\mu) = \sum_{i,j} a_{ij} \lambda^i\mu^j$ where each $i,j$ is non-negative, and the coefficients $a_{ij}$ are complex numbers which are nonzero for only finitely many pairs $i,j$. For a generic choice of coefficients, the equation $R=0$ determines a smooth complex curve $\Sigma$ in $\mathbb{C}^2$ (i.e. a Riemann surface). How can one see the geometry of the curve directly in the expression for $R$? It turns out that there are several ways to do it, some very old, and some more recent.

The most important geometric invariant of the curve is the genus. To a topologist, this is the number of “handles”; to an algebraic geometer, this is the dimension of the space of holomorphic $1$-forms. One well-known way to calculate the genus is by means of the Newton polygon. In the (real) plane $\mathbb{R}^2$, consider the finite set consisting of the points with integer coordinates $(i,j)$ for which the coefficient $a_{ij}$ of $R$ is nonzero. The convex hull of this finite set is a convex integral polygon, called the Newton polygon of $R$. It turns out that the genus of $\Sigma$ is the number of integer lattice points in the interior of the Newton polygon. In fact, one can find a basis for the space of holomorphic $1$-forms directly from this formulation. Let $R_\mu$ denote the partial derivative of $R$ with respect to $\mu$. Then for each lattice point $(i,j)$ in the interior of the Newton polygon, the $1$-form $(\lambda^i\mu^j/R_\mu) d\lambda$ is a holomorphic $1$-form on $\Sigma$, and the set of all such forms is a basis for the space of all holomorphic $1$-forms.
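This lattice-point count is easy to experiment with. Here is a small sketch (my own illustration, not part of the original discussion) which takes the set of exponents $(i,j)$ with $a_{ij} \neq 0$, builds the Newton polygon as a convex hull, and counts interior lattice points via Pick's theorem $A = I + B/2 - 1$:

```python
from math import gcd

def cross(o, a, b):
    """2D cross product of the vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def interior_lattice_points(exponents):
    """Number of lattice points strictly inside the Newton polygon,
    via Pick's theorem: I = A - B/2 + 1."""
    hull = convex_hull(exponents)
    n = len(hull)
    # twice the area, by the shoelace formula
    A2 = abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                 - hull[(i + 1) % n][0] * hull[i][1] for i in range(n)))
    # boundary lattice points: gcd of edge vector coordinates, summed over edges
    B = sum(gcd(abs(hull[(i + 1) % n][0] - hull[i][0]),
                abs(hull[(i + 1) % n][1] - hull[i][1])) for i in range(n))
    return (A2 - B) // 2 + 1

# genus of a generic curve lambda^2 = p(mu) with deg p = 5: floor((5-1)/2) = 2
print(interior_lattice_points([(2, 0)] + [(0, j) for j in range(6)]))  # prints 2
```

(The function names here are my own; the point is just that the genus can be read off from the exponents alone, before one ever solves the equation.)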

This is direct but a bit unsatisfying to a topologist, since the connection between the dimension of the space of $1$-forms and the topological idea of handles is somewhat indirect. In some special cases, it is a bit easier to see things. Two important examples are:

1. Hyperelliptic surfaces, i.e. equations of the form $\lambda^2 = p(\mu)$ for some polynomial $p(\cdot)$ of degree $n$. The Newton polygon in this case is the triangle with vertices $(0,0), (2,0), (0,n)$ and it has $\lfloor (n-1)/2 \rfloor$ interior lattice points. Geometrically one can “see” the surface by projecting to the $\mu$ plane. For each generic value of $\mu$, the complex number $p(\mu)$ has two distinct square roots, so the map is 2 to 1. However, at the $n$ roots of $p(\cdot)$, there is only 1 preimage. So the map is a double cover, branched over $n$ points, and one can “see” the topology of the surface by cutting open two copies of the complex line along slits joining pairs of points, and gluing.
2. A generic surface of degree $d$. The Newton polygon in this case is the triangle with vertices $(0,0), (d,0), (0,d)$ and it has $(d-1)(d-2)/2$ interior lattice points. One way to “see” the surface in this case is to first imagine $d$ lines in general position (a quite special degree $d$ curve). Each pair of lines intersects in a point, so there are $d(d-1)/2$ points of intersection. After deforming the curve, these points of intersection are resolved into tubes, so one obtains $d$ complex lines joined by $d(d-1)/2$ tubes. The first $d-1$ tubes are needed to tube the lines together into a (multiply)-punctured plane, and the remaining $(d-1)(d-2)/2$ tubes each add one to the genus.

It turns out that there is a nice way to directly see the topology of $\Sigma$ in the Newton polygon, via tropical geometry. I recently learned about this idea from Mohammed Abouzaid in one of his Clay lectures; this point of view was pioneered by Grisha Mikhalkin. The idea is as follows. First consider the restriction of $\Sigma$ to the product $\mathbb{C}^* \times \mathbb{C}^*$; i.e. remove the intersection with the coordinate axes. For generic $R$, this amounts to removing a finite number of points from $\Sigma$, which will not change the genus. Then on this punctured curve $\Sigma$, consider the real valued function $(\lambda,\mu) \to (\log(|\lambda|),\log(|\mu|))$. The image is a subset of $\mathbb{R}^2$, called an amoeba. If one varies the (nonzero) coefficients of $R$ generically, the complex geometry of the curve $\Sigma$ will change, but its topology will not. Hence to see the topology of $\Sigma$ one should deform the coefficients in such a way that the topology of the amoeba can be read off from combinatorial information, encoded in the Newton polygon. The terms in $R$ corresponding to lattice points in a boundary edge of the Newton polygon sum to a polynomial which is homogeneous after a suitable change of coordinates. In the region in which these terms dominate, $\Sigma$ looks more and more like a collection of cylinders, each asymptotic to a cone on some points at infinity. The image in the amoeba is a collection of asymptotically straight rays. If the polynomial were genuinely homogeneous, the preimage of each point in the amoeba would be a circle, parameterized by a choice of argument of (a certain root of) either $\lambda$ or $\mu$. So the amoeba looks like a compact blob with a collection of spikes coming off. 
As one deforms the coefficients in a suitable way, the compact blob degenerates into a piecewise linear graph which can be read off from purely combinatorial data, and the topology of $\Sigma$ can be recovered by taking the boundary of a thickened tubular neighborhood of this graph.

More explicitly, one chooses a certain triangulation of the Newton polygon into triangles of area $1/2$ and with vertices at integer lattice points (by Pick’s theorem this is equivalent to the condition that each triangle and each edge has no lattice points in the interior). This triangulation must satisfy an additional combinatorial condition, namely that there must exist a convex piecewise linear function on the Newton polygon whose domains of linearity are precisely the triangles. This convex function is used to deform the coefficients of $R$; roughly, if $f$ is the function, choose the coefficient $a_{ij} \sim e^{f(i,j)t}$ and take the limit as $t$ gets very big. The convexity of $f$ guarantees that in the preimage of each triangle of the Newton polygon, the terms of $R$ that contribute the most are those corresponding to the vertices of the triangle. In particular, as $t$ goes to infinity, the amoeba degenerates to the dual spine of the triangle (i.e. a tripod). The preimage of this tripod is a pair of pants; after a change of coordinates, any given triangle can be taken to have vertices $(0,0), (1,0), (0,1)$ corresponding to a linear equation $a\lambda + b\mu = c$ whose solution set in $\mathbb{C}^* \times \mathbb{C}^*$ (for generic $a,b,c$) is a line minus two points — i.e. a pair of pants.

One therefore has a concrete combinatorial description of the degenerate amoeba: pick a triangulation of the Newton polygon satisfying the combinatorial conditions above. Let $\Gamma$ be the graph dual to the triangulation, with edges dual to boundary edges of the triangulation extended indefinitely. The surface $\Sigma$ is obtained by taking the boundary of a thickened neighborhood of $\Gamma$. The genus of $\Sigma$ is equal to the rank of the first homology of the graph $\Gamma$; this is evidently equal to the number of lattice points in the interior of the polygon.
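In fact the equality of the rank with the number of interior lattice points is a purely combinatorial consequence of Pick's theorem, and is worth making explicit. Suppose the Newton polygon has $I$ interior and $B$ boundary lattice points, so that its area is $A = I + B/2 - 1$. A triangulation into triangles of area $1/2$ therefore has $T = 2A = 2I + B - 2$ triangles, and counting edge incidences ($3T$ in all, with each of the $B$ boundary segments on one triangle and each interior edge on two) gives $E = (3T-B)/2$ interior edges. The bounded part of $\Gamma$ has one vertex for each triangle and one edge for each interior edge, and the unbounded rays do not affect $H_1$, so

$\text{rank}\,H_1(\Gamma) = E - T + 1 = \frac{3T-B}{2} - T + 1 = \frac{T-B}{2} + 1 = I$

as claimed.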

As a really concrete example, consider a polynomial like

$R = 1 + 7\lambda^3 - 23.6\mu^2 + e^\pi \lambda^3\mu^2$

(the exact coefficients are irrelevant; the only issue is to choose them generically enough that the resulting curve is smooth (actually I did not check in this case – please pretend that I did!)). The Newton polygon is a rectangle with vertices $(0,0), (3,0), (0,2), (3,2)$. This can be subdivided into twelve triangles of area $1/2$ as in the following figure:

The dual spine is then the following:

which evidently has rank of $H_1$ equal to $2$, equal on the one hand to the number of interior points in the Newton polygon, and on the other hand to the genus of $\Sigma$.

A geometric structure on a manifold is an atlas of charts with values in some kind of “model space”, and transition functions taken from some pseudogroup of transformations on the model space. If $X$ is the model space, and $G$ is the pseudo-group, one talks about a $(G,X)$-structure on a manifold $M$. One usually (but not always) wants $X$ to be homogeneous with respect to $G$. So, for instance, one talks about smooth structures, conformal structures, projective structures, bilipschitz structures, piecewise linear structures, symplectic structures, and so on, and so on. Riemannian geometry does not easily fit into this picture, because there are so few (germs of) isometries of a typical Riemannian metric, and so many local invariants; but Riemannian metrics modeled on a locally symmetric space, with $G$ a Lie group of symmetries of $X$, are a very significant example.

Sometimes the abstract details of a theory are hard to grasp before looking at some fundamental examples. The case of geometric structures on $1$-manifolds is a nice example, which is surprisingly rich in some ways.

One of the most important ways in which geometric structures arise is in the theory of ODEs. Consider a first order ODE in one variable, e.g. an equation like $y' = f(y,t)$. If we fix an “initial” value $y(t_0)=y_0$, then we are guaranteed short time existence and uniqueness of a solution (provided the function $f$ is nice enough). But if we do not fix an initial value, we can instead think of an ODE as a $1$-parameter family of (perhaps partially defined) maps from $\mathbb{R}$ to itself. For each fixed $t$, the function $f(y,t)$ defines a vector field on $\mathbb{R}$. We can think of the ODE as specifying a path in the Lie algebra of vector fields on $\mathbb{R}$; solving the ODE amounts to finding a path in the Lie group of diffeomorphisms of $\mathbb{R}$ (or some partially defined Lie pseudogroup of diffeomorphisms on some restricted subdomain) which is tangent to the given family of vector fields. It makes sense therefore to study special classes of equations, and ask when this family of maps is conjugate into an interesting pseudogroup; equivalently, to ask when the evolution of the solutions preserves an interesting geometric structure on $\mathbb{R}$. We consider some examples in turn.

1. Indefinite integral $y' = a(t)$. The group in this case is $\mathbb{R}$, acting on $\mathbb{R}$ by translation. The equation is solved by integrating: $y=\int a(t)dt + C$.
2. Linear homogeneous ODE $y' = a(t)y$. The group in this case is $\mathbb{R}^+$, acting on $\mathbb{R}$ by multiplication (notice that this group action is not transitive; the point $0 \in \mathbb{R}$ is preserved; this corresponds to the fact that $y = 0$ is always a solution of a homogeneous linear ODE). The Lie algebra is $\mathbb{R}$, and the ODE is “solved” by exponentiating the vector field, and integrating. Hence $y = C e^{\int a(t)dt}$ is the general solution. In fact, in the previous example, the Lie algebra of the group of translations is also identified with $\mathbb{R}$, and “exponentiating” is the identity map.
3. Linear inhomogeneous ODE $y' = a(t)y + b(t)$. The group in this case is the affine group $\mathbb{R}^+ \ltimes \mathbb{R}$ where the first factor acts by dilations and the second by translation. The affine group is not abelian, so one cannot “integrate” a vector field directly, but it is solvable: there is a short exact sequence $\mathbb{R} \to \mathbb{R}^+ \ltimes \mathbb{R} \to \mathbb{R}^+$. The image in the Lie algebra of the group of dilations is the term $a(t)y$, which can be integrated as before to give an integrating factor $e^{\int a(t)dt}$. Setting $z = ye^{-\int a(t)dt}$ gives $z' = y'e^{-\int a(t)dt} - a(t)ye^{-\int a(t)dt} = b(t)e^{-\int a(t)dt}$ which is an indefinite integral, and can be solved by a further integration. In other words, we do one integration to change the structure group from $\mathbb{R}^+ \ltimes \mathbb{R}$ to $\mathbb{R}$ (“integrating out” the group of dilations) and then what is left is an abelian structure group, in which we can do “ordinary” integration. This procedure works whenever the structure group is solvable; i.e. whenever there is a finite sequence $G=G_0,\cdots,G_n=0$ where each $G_i$ surjects onto an abelian group with kernel $G_{i+1}$, so that after finitely many steps, the last kernel is trivial.
4. Riccati equation $y' = a(t)y^2 + b(t)y + c(t)$. In this case, it is well-known that the equation can blow up in finite time, and one does not obtain a group of transformations of $\mathbb{R}$, but rather a group of transformations of the projective line $\mathbb{RP}^1 = \mathbb{R} \cup \infty$; another point of view says that one obtains a pseudogroup of transformations of subsets of $\mathbb{R}$. The group in this case is the projective group $\text{PSL}(2,\mathbb{R})$, acting by projective linear transformations. Let $A(t)$ be a $1$-parameter family of matrices in $\text{PSL}(2,\mathbb{R})$, say $A(t)=\left( \begin{smallmatrix} u(t) & v(t) \\ w(t) & x(t) \end{smallmatrix} \right)$, with $A(0)=\text{id}$. Matrices act on $\mathbb{R}$ by fractional linear maps; that is, $Az = (uz + v)/(wz+x)$ for $z \in \mathbb{R}$. Differentiating $A(t)z$ at $t=0$ one obtains $(Az)'(0) = (u'z+v') - z(w'z+x') = -w'z^2 + (u'-x')z + v'$ which is the general form of the Riccati equation. Since the group $\text{PSL}(2,\mathbb{R})$ is not solvable, the Riccati equation cannot be solved in terms of elementary functions and integrals. However, if one knows one solution $y=z(t)$, one can find all other solutions as follows. Do a change of co-ordinates, by sending the solution $z(t)$ “to infinity”; i.e. define $x = 1/(y-z)$. Then as a function of $x$, the Riccati equation reduces to a linear inhomogeneous ODE. In other words, the structure group reduces to the subgroup of $\text{PSL}(2,\mathbb{R})$ fixing the point at infinity (i.e. the solution $z(t)$), which is the affine group $\mathbb{R}^+ \ltimes \mathbb{R}$. One can therefore solve for $x$, and by substituting back, for $y$.
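Example 3 above can be checked numerically. The sketch below (my own; the function name is made up for illustration) implements the integrating-factor solution $y(t) = e^{\int_0^t a}\big(y_0 + \int_0^t b(s)e^{-\int_0^s a}\,ds\big)$ with the trapezoid rule, and compares it with the closed-form solution of $y' = y + 1$, $y(0)=0$, namely $y = e^t - 1$:

```python
import math

def solve_linear_ode(a, b, y0, t, n=100000):
    """Solve y' = a(t) y + b(t), y(0) = y0, via the integrating factor:
    y(t) = e^{A(t)} (y0 + int_0^t b(s) e^{-A(s)} ds), with A(t) = int_0^t a.
    Both integrals use the trapezoid rule on n subintervals."""
    h = t / n
    A = 0.0                        # running value of int_0^s a
    acc = 0.0                      # running value of int_0^s b e^{-A}
    prev = b(0.0) * math.exp(-A)
    for i in range(1, n + 1):
        s = i * h
        A += 0.5 * h * (a(s - h) + a(s))
        cur = b(s) * math.exp(-A)
        acc += 0.5 * h * (prev + cur)
        prev = cur
    return math.exp(A) * (y0 + acc)

# y' = y + 1 with y(0) = 0 has closed form y = e^t - 1
approx = solve_linear_ode(lambda s: 1.0, lambda s: 1.0, 0.0, 1.0)
exact = math.e - 1
```

Note the two-step structure: one integration computes the dilation part (the factor $e^A$), and a second integrates the translation part, exactly mirroring the solvability of the affine group.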

The Riccati equation is important for the solution of second order linear equations, since any second order linear equation $y'' = a(t)y' + b(t)y + c(t)$ can be transformed into a system of two first order linear equations in the variables $y$ and $y'$. A system of first order ODEs in $n$ variables can be described in terms of pseudogroups of transformations of (subsets of) $\mathbb{R}^n$. A system of linear equations corresponds to the structure group $\text{GL}(n,\mathbb{R})$, hence in the case of a $2\times 2$ system, to $\text{GL}(2,\mathbb{R})$. The determinant map is a homomorphism from $\text{GL}(2,\mathbb{R})$ to $\mathbb{R}^*$ with kernel $\text{SL}(2,\mathbb{R})$; hence, after multiplication by a suitable integrating factor, one can reduce to a system which is (equivalent to) the Riccati equation.

Having seen these examples, one naturally wonders whether there are any other interesting families of equations and corresponding Lie groups acting on $1$-manifolds. In fact, there are (essentially) no other examples: if one insists on (finite dimensional) simple Lie groups, then $\text{SL}(2,\mathbb{R})$ is more or less the only example. Perhaps this is one of the reasons why the theory of ODEs tends to appear to undergraduates (and others) as an unstructured collection of rules and tricks. Nevertheless, recasting the theory in terms of geometric structures has the effect of clearing the air to some extent.

Geometric structures on $1$-manifolds arise also in the theory of foliations, which may be seen as a geometric abstraction of certain kinds of PDE. Suppose $M$ is a manifold, and $\mathcal{F}$ is a codimension one foliation. The foliation determines local charts on the manifold in which the leaves of the foliation intersect the chart in the level sets of a co-ordinate function. In the overlap of two such local charts, the transitions between the local co-ordinate functions take values in some pseudogroup. For certain kinds of foliations, this pseudogroup might be analytically quite rigid. For example, if $\mathcal{F}$ is tangent to the kernel of a nonsingular closed $1$-form $\alpha$ on $M$, then integrating $\alpha$ determines a metric on the leaf space which is preserved by the co-ordinate transformations, and the pseudogroup is conjugate into the group of translations. There are also some interesting examples where the pseudogroup has no interesting local structure, but where structure emerges on a macroscopic scale, because of some special features of the topology of $M$ and $\mathcal{F}$. For example, suppose $M$ is a $3$-manifold, and $\mathcal{F}$ is a foliation in which every leaf is dense. One knows for topological reasons (i.e. theorems of Novikov and Palmeira) that the universal cover $\tilde{M}$ is homeomorphic to $\mathbb{R}^3$ in such a way that the pulled-back foliation $\tilde{\mathcal{F}}$ is topologically a foliation by planes. One important special case is when any two leaves of $\tilde{\mathcal{F}}$ are a finite Hausdorff distance apart in $\tilde{M}$. In this case, the foliation $\tilde{\mathcal{F}}$ is topologically conjugate to a product foliation, and $\pi_1(M)$ acts on the leaf space (which is $\mathbb{R}$) by a group of homeomorphisms. The condition that pairs of leaves are a finite Hausdorff distance away implies that there are intervals $I$ in the leaf space whose translates do not nest; i.e.
with the property that there is no $g \in \pi_1(M)$ for which $g(I)$ is properly contained in $I$. Let $I^\pm$ denote the two endpoints of the interval $I$. One defines a function $Z:\mathbb{R} \to \mathbb{R}$ by defining $Z(p)$ to be the supremum of the set of values $g(I^+)$ over all $g \in \pi_1(M)$ for which $g(I^-) \le p$. The non-nesting property, and the fact that every leaf of $\mathcal{F}$ is dense, together imply that $Z$ is a strictly increasing (i.e. fixed-point free) homeomorphism of $\mathbb{R}$ which commutes with the action of $\pi_1(M)$. In particular, the action of $\pi_1(M)$ is conjugate into the subgroup $\text{Homeo}^+(\mathbb{R})^{\mathbb{Z}}$ of homeomorphisms that commute with integer translation. One says in this case that the manifold $M$ slithers over a circle; it is possible to deduce a lot about the geometry and topology of $M$ and $\mathcal{F}$ from this structure. See for example Thurston’s paper, or my book.

A third significant way in which geometric structures arise on circles is in the theory of conformal welding. Let $\gamma:S^1 \to \mathbb{CP}^1$ be a Jordan curve in the Riemann sphere. The image of the curve decomposes the sphere into two regions homeomorphic to disks. Each open disk region can be uniformized by a holomorphic map from the open unit disk, which extends continuously to the boundary circle. These uniformizing maps are well-defined up to composition with an element of the Möbius group $\text{PSL}(2,\mathbb{R})$, and their difference is therefore a coset in $\text{Homeo}^+(S^1)/\text{PSL}(2,\mathbb{R})$ called the welding homeomorphism. Conversely, given a homeomorphism of the circle, one can ask when it arises from a Jordan curve in the Riemann sphere as above, and if it does, whether the curve is unique (up to conformal self-maps of the Riemann sphere). Neither existence nor uniqueness holds in great generality. For example, if the image $\gamma(S^1)$ has positive (Hausdorff) measure, any quasiconformal deformation of the complex structure on the Riemann sphere supported on the image of the curve will deform the curve but not the welding homeomorphism. One significant special case in which existence and uniqueness are assured is the case that $\gamma(S^1)$ is a quasicircle. This means that there is a constant $K$ with the property that if two points $p,q$ are contained in the quasicircle, and the spherical distance between the two points is $d(p,q)$, then at least one arc of the quasicircle joining $p$ to $q$ has spherical diameter at most $Kd(p,q)$. In other words, there are no bottlenecks where two points on the quasicircle come very close in the sphere without being close in the curve. Welding maps corresponding to quasicircles are precisely the quasisymmetric homeomorphisms.
A homeomorphism is quasisymmetric if for every sufficiently small interval in the circle, the image of the midpoint of the interval under the homeomorphism is not too far from being the midpoint of the image of the interval; i.e. it divides the image of the interval into two pieces whose lengths have a ratio which is bounded below and above by some fixed constant. Other classes of geometric structures can be detected by welding: smooth Jordan circles correspond to smooth welding maps, real analytic circles correspond to real analytic welding maps, round circles correspond to welding maps in $\text{PSL}(2,\mathbb{R})$, and so on. Recent work of  Eero Saksman and his collaborators has sought to find the correct idea of a “random” welding, which corresponds to the kinds of Jordan curves generated by stochastic processes such as SLE. In general, the precise correspondence between the analytic quality of $\gamma$ and of the welding map is given by the Hilbert transform.

This list of examples of geometric structures on $1$-manifolds is by no means exhaustive. There are many very special features of $1$-dimensional geometry: oriented $1$-manifolds have a natural causal structure, which may be seen as a special case of contact/symplectic geometry; (nonatomic) measures on $1$-manifolds can be integrated to metrics; connections on $1$-manifolds are automatically flat, and correspond to representations. It would be interesting to hear other examples, and how they arise in various mathematical fields.

I am in Kyoto right now, attending the twenty-first Nevanlinna colloquium (update: took a while to write this post – now I’m in Sydney for the Clay lectures). Yesterday, Junjiro Noguchi gave a plenary talk on Nevanlinna theory in higher dimensions and related Diophantine problems. The talk was quite technical, and I did not understand it very well; however, he said a few suggestive things early on which struck a chord.

The talk started quite accessibly, being concerned with the fundamental equation

$a + b = c$

where $a,b,c$ are coprime positive integers. The abc conjecture, formulated by Oesterlé and Masser, says that for any positive real number $\epsilon$, there is a constant $C_\epsilon$ so that

$\max(a,b,c) \le C_\epsilon\text{rad}(abc)^{1+\epsilon}$

where $\text{rad}(abc)$ is the product of the distinct primes appearing in the product $abc$. Informally, this conjecture says that for triples $a,b,c$ satisfying the fundamental equation, the numbers $a,b,c$ are not divisible by “too high” powers of a prime. The abc conjecture is known to imply many interesting number theoretic statements, including (famously) Fermat’s Last Theorem (for sufficiently large exponents), and Roth’s theorem on diophantine approximation (as observed by Bombieri).
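The radical is cheap to compute, so one can get a feel for the conjecture by measuring the “quality” $\log\max(a,b,c)/\log\text{rad}(abc)$ of triples; the conjecture says the quality exceeds $1+\epsilon$ only finitely often. A quick sketch (my own; the triple $1 + 4374 = 4375$ is a well-known high-quality example, since $4374 = 2\cdot 3^7$ and $4375 = 5^4\cdot 7$):

```python
from math import log

def rad(n):
    """Product of the distinct primes dividing n, by trial division."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def quality(a, b, c):
    """log max(a,b,c) / log rad(abc); the abc conjecture says this is
    rarely much bigger than 1."""
    return log(max(a, b, c)) / log(rad(a * b * c))

# rad(1 * 4374 * 4375) = rad(2 * 3^7 * 5^4 * 7) = 2*3*5*7 = 210
print(quality(1, 4374, 4375))  # about 1.5679
```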

Roth’s theorem is the following statement:

Theorem (Roth, 1955): Let $\alpha$ be a real algebraic number. Then for any $\epsilon>0$, the inequality $|\alpha - p/q| < q^{-(2+\epsilon)}$ has only finitely many solutions in coprime integers $p,q$.

This inequality is best possible, in the sense that every irrational number can be approximated by infinitely many rationals $p/q$ to within $1/2q^2$. In fact, the rationals appearing in the continued fraction approximation to $\alpha$ have this property. There is a very short and illuminating geometric proof of this fact.

In the plane, construct a circle packing with a circle of radius $1/2q^2$ centered at the point $(p/q, 1/2q^2)$ for each coprime pair $p,q$ of integers.

This circle packing nests down on the $x$-axis, and any vertical line (with irrational $x$-co-ordinate) intersects infinitely many circles. If the $x$ co-ordinate of a vertical line is $\alpha$, every circle the line intersects gives a rational $p/q$ which approximates $\alpha$ to within $1/2q^2$. qed.
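For a concrete $\alpha$, the continued fraction $\sqrt{2} = [1;2,2,2,\dots]$ makes this explicit: the convergents $p_k/q_k$ satisfy Pell's equation $|p^2 - 2q^2| = 1$, which forces $|\sqrt{2} - p/q| < 1/2q^2$ at every step. A sketch (my own check, using the standard convergent recurrence):

```python
from math import sqrt

def convergents_sqrt2(n):
    """First n continued fraction convergents of sqrt(2) = [1; 2, 2, 2, ...],
    via p_k = a_k p_{k-1} + p_{k-2} and likewise for q_k."""
    p_prev, q_prev = 1, 0        # formal convergent p_{-1}/q_{-1}
    p, q = 1, 1                  # p_0/q_0 = 1
    out = [(p, q)]
    for _ in range(n - 1):
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        out.append((p, q))
    return out

for p, q in convergents_sqrt2(12):
    assert abs(p * p - 2 * q * q) == 1             # Pell's equation
    assert abs(sqrt(2) - p / q) < 1 / (2 * q * q)  # approximation to within 1/2q^2
```

Geometrically, each convergent is a circle in the packing above that the vertical line through $\sqrt{2}$ passes through.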

On the other hand, consider the corresponding collection of circles with radius $1/2q^{2+\epsilon}$. Some “space” appears between neighboring circles, and they no longer pack tightly (the following picture shows $\epsilon = 0.2$).

The total cross-sectional width of these circles, restricted to pairs $p/q$ in the interval $[0,1)$, can be estimated as follows. Each $p/q$ contributes a width of $1/q^{2+\epsilon}$ (the diameter of the circle). Ignoring the coprime condition, there are $q$ fractions of the form $p/q$ in the interval $[0,1)$, so the total width is less than $\sum_q q^{-1-\epsilon}$, which converges for positive $\epsilon$. In other words, the total cross-sectional width of all circles is finite. It follows that almost every vertical line intersects only finitely many circles.

Some vertical lines do, in fact, intersect infinitely many circles; i.e. some real numbers are approximated by infinitely many rationals to better than quadratic accuracy; for example, a Liouville number like $\sum_{n=1}^\infty 10^{-n!}$.

Some special cases of Roth’s theorem are much easier than others. For instance, it is very easy to give a proof when $\alpha$ is a quadratic irrational; i.e. an element of $\mathbb{Q}(\sqrt{d})$ for some integer $d$. Quadratic irrationals are characterized by the fact that their continued fraction expansions are eventually periodic. One can think of this geometrically as follows. The group $\text{PSL}(2,\mathbb{Z})$ acts on the upper half-plane, which we think of now as the complex numbers with positive imaginary part, by fractional linear transformations $z \to (az+b)/(cz+d)$. The quotient is a hyperbolic triangle orbifold, with a cusp. A vertical line in the plane ending at a point $\alpha$ on the $x$-axis projects to a geodesic ray in the triangle orbifold. A rational number $p/q$ approximating $\alpha$ to within $1/2q^2$ is detected by the geodesic entering a horoball centered at the cusp. If $\alpha$ is a quadratic irrational, the corresponding geodesic ray eventually winds around a periodic geodesic (this is the periodicity of the continued fraction expansion), so it never gets too deep into the cusp, and the rational approximations to $\alpha$ never get better than $C/2q^2$ for some constant $C$ depending on $\alpha$, as required. A different vertical line intersecting the $x$-axis at some $\beta$ corresponds to a different geodesic ray; the existence of good rational approximations to $\beta$ corresponds to the condition that the corresponding geodesic goes deeper and deeper into the cusp infinitely often at a definite rate (i.e. at a distance which is at least some fixed (fractional) power of time).
A “random” geodesic on a cusped hyperbolic surface takes time $n$ to go distance $\log{n}$ out the cusp (this is a kind of equidistribution fact – the thickness of the cusp goes to zero like $e^{-t}$, so if one chooses a sequence of points in a hyperbolic surface at random with respect to the uniform (area) measure, it takes about $n$ points to find one that is distance $\log{n}$ out the cusp). If one expects that every geodesic ray corresponding to an algebraic number looks like a “typical” random geodesic, one would conjecture (and in fact, Lang did conjecture) that there are only finitely many $p/q$ for which $|p/q - \alpha| < q^{-2}(\log{q})^{-1-\epsilon}$ for any $\epsilon > 0$.

A slightly different (though related) geometric way to see the periodicity of the continued fraction expansion of a quadratic irrational is to use diophantine geometry. This is best illustrated with an example. Consider the golden number $\alpha = (1+\sqrt{5})/2$. The matrix $A=\left( \begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix} \right)$ has $\left( \begin{smallmatrix} \alpha \\ 1 \end{smallmatrix} \right)$ and $\left( \begin{smallmatrix} \bar{\alpha} \\ 1 \end{smallmatrix} \right)$ as eigenvectors (here $\bar{\alpha}$ denotes the “conjugate” $1-\alpha$), and thus preserves a “wedge” in $\mathbb{R}^2$ bounded by lines with slopes $\alpha$ and $\bar{\alpha}$. The set of integer lattice points in this wedge is permuted by $A$, and therefore so is the boundary of the convex hull of this set (the sail of the cone). Lattice points on the sail correspond to rational approximations to the boundary slopes; the fact that $A$ permutes this set corresponds to the periodicity of the continued fraction expansion of $\alpha$ (and certifies the fact that $\alpha$ cannot be approximated better than quadratically by rational numbers).
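One can watch $A$ march along the sail numerically: starting from the lattice point $(1,1)$, the iterates of $A$ are (every other) Fibonacci pair, and each is a quadratically good, but no better than quadratically good, approximation of $\alpha$. A sketch (mine; the constant $3$ in the lower bound is a crude choice, which works because the true error is comparable to $1/\sqrt{5}q^2$ and $\sqrt{5} < 3$):

```python
from math import sqrt

alpha = (1 + sqrt(5)) / 2

def apply_A(v):
    """Apply A = [[2, 1], [1, 1]], which preserves the wedge bounded by the
    eigendirections (alpha, 1) and (1 - alpha, 1)."""
    p, q = v
    return (2 * p + q, p + q)

v = (1, 1)
for _ in range(10):
    v = apply_A(v)
    p, q = v
    # quadratically good approximation of alpha ...
    assert abs(alpha - p / q) < 1 / q**2
    # ... but not better than quadratic, as the periodicity certifies
    assert abs(alpha - p / q) > 1 / (3 * q**2)
```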

There is an analogue of this construction in higher dimensions: let $A$ be an $n\times n$ integer matrix whose eigenvalues are all real, positive, irrational and distinct. A collection of $n$ suitable eigenvectors spans a polyhedral cone which is invariant under $A$. The  convex hull of the set of integer lattice points in this cone is a polyhedron, and the vertices of this polyhedron (the vertices on the sail) are  the “best” integral approximations to the eigenvectors. In fact, there is a $\mathbb{Z}^{n-1}$ subgroup of $\text{SL}(n,\mathbb{Z})$ consisting of matrices with the same set of eigenvectors (this is a consequence of Dirichlet’s theorem on the structure of the group of units in the integers in a number field). Hence there is a group that acts discretely and co-compactly on the vertices of the sail, and one gets a priori estimates on how well the eigenvectors can be approximated by integral vectors. It is interesting to ask whether one can give a proof of Roth’s theorem along these lines, at least for algebraic numbers in totally real fields, but I don’t know the answer.

I was in Stony Brook last week, visiting Moira Chas and Dennis Sullivan, and have been away from blogging for a while; this week I plan to write a few posts about some of the things I discussed with Moira and Dennis. This is an introductory post about the Goldman bracket, an extraordinary mathematical object made out of the combinatorics of immersed curves on surfaces. I don’t have anything original to say about this object, but for my own benefit I thought I would try to explain what it is, and why Goldman was interested in it.

In his study of symplectic structures on character varieties $\text{Hom}(\pi,G)/G$, where $\pi$ is the fundamental group of a closed oriented surface and $G$ is a Lie group satisfying certain (quite general) conditions, Bill Goldman discovered a remarkable Lie algebra structure on the free abelian group generated by conjugacy classes in $\pi$. Let $\hat{\pi}$ denote the set of homotopy classes of closed oriented curves on $S$, where $S$ is itself a compact oriented surface, and let $\mathbb{Z}\hat{\pi}$ denote the free abelian group with generating set $\hat{\pi}$. If $\alpha,\beta$ are immersed oriented closed curves which intersect transversely (i.e. in double points), define the formal sum

$[\alpha,\beta] = \sum_{p \in \alpha \cap \beta} \epsilon(p; \alpha,\beta) |\alpha_p\beta_p| \in \mathbb{Z}\hat{\pi}$

In this formula, $\alpha_p,\beta_p$ are $\alpha,\beta$ thought of as based loops at the point $p$, $\alpha_p\beta_p$ represents their product in $\pi_1(S,p)$, and $|\alpha_p\beta_p|$ represents the resulting conjugacy class in $\pi$. Moreover, $\epsilon(p;\alpha,\beta) = \pm 1$ is the oriented intersection number of $\alpha$ and $\beta$ at $p$.

This operation turns out to depend only on the free homotopy classes of $\alpha$ and $\beta$, and extends by linearity to a bilinear map $[\cdot,\cdot]:\mathbb{Z}\hat{\pi} \times \mathbb{Z}\hat{\pi} \to \mathbb{Z}\hat{\pi}$. Goldman shows that this bracket makes $\mathbb{Z}\hat{\pi}$ into a Lie algebra over $\mathbb{Z}$, and that there are natural Lie algebra homomorphisms from $\mathbb{Z}\hat{\pi}$ to the Lie algebra of functions on $\text{Hom}(\pi,G)/G$ with its Poisson bracket.
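The simplest case to compute with is the torus $S = T^2$: conjugacy classes in $\pi = \mathbb{Z}^2$ are lattice points, the classes $(a,b)$ and $(c,d)$ can be realized by geodesics meeting in $|ad-bc|$ points, all of the same sign, and the bracket reduces to $[(a,b),(c,d)] = (ad-bc)(a+c,b+d)$, extended bilinearly. Here is a sketch (my own) of this torus bracket, together with checks of antisymmetry and the Jacobi identity:

```python
from collections import defaultdict

def bracket(u, v):
    """Goldman bracket on Z[Z^2] (torus case): on generators,
    [(a, b), (c, d)] = (a d - b c) * (a + c, b + d), extended bilinearly.
    Elements are dicts mapping lattice points (m, n) to integer coefficients."""
    out = defaultdict(int)
    for (a, b), s in u.items():
        for (c, d), t in v.items():
            out[(a + c, b + d)] += s * t * (a * d - b * c)
    return {k: c for k, c in out.items() if c}

def add(*terms):
    """Sum of elements of Z[Z^2]."""
    out = defaultdict(int)
    for x in terms:
        for k, c in x.items():
            out[k] += c
    return {k: c for k, c in out.items() if c}

x, y, z = {(1, 0): 1}, {(0, 1): 1}, {(2, 3): 1, (1, -1): 2}
assert bracket(x, y) == {(1, 1): 1}
assert add(bracket(x, y), bracket(y, x)) == {}           # antisymmetry
assert add(bracket(bracket(x, y), z),
           bracket(bracket(y, z), x),
           bracket(bracket(z, x), y)) == {}              # Jacobi identity
```

On a higher genus surface the products $|\alpha_p\beta_p|$ genuinely depend on the intersection point $p$, and no such closed formula is available.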

The connection with character varieties can be summarized as follows. Let $f:G \to \mathbb{R}$ be a (smooth) class function (i.e. a function which is constant on conjugacy classes) on a Lie group $G$. Define the variation function $F:G \to \mathfrak{g}$ by the formula

$\langle F(A),X\rangle = \frac{d}{dt}\big|_{t=0} f(A\exp(tX))$

where $\langle \cdot,\cdot\rangle$ is some (fixed) $\text{Ad}$-invariant orthogonal structure on the Lie algebra $\mathfrak{g}$ (for example, if $G$ is reductive (e.g. if $G$ is semisimple), one can take $\langle X,Y\rangle = \text{tr}(XY)$). The tangent space to the character variety $\text{Hom}(\pi,G)/G$ at $\phi$ is the first cohomology group of $\pi$ with coefficients in $\mathfrak{g}$, thought of as a $G$ module with the $\text{Ad}$ action, and then as a $\pi$ module by the representation $\phi$. Cup product and the pairing $\langle\cdot,\cdot\rangle$ determine a pairing

$H^1(\pi,\mathfrak{g})\times H^1(\pi,\mathfrak{g}) \to H^2(\pi,\mathbb{R}) = \mathbb{R}$

where the last equality uses the fact that $\pi$ is a closed surface group; this pairing defines the symplectic structure on $\text{Hom}(\pi,G)/G$.

Every element $\alpha \in \pi$ determines a function $f_\alpha:\text{Hom}(\pi,G)/G \to \mathbb{R}$ by sending a (conjugacy class of) representation $[\phi]$ to $f(\phi(\alpha))$. Note that $f_\alpha$ only depends on the conjugacy class of $\alpha$ in $\pi$. It is natural to ask: what is the Hamiltonian flow on $\text{Hom}(\pi,G)/G$ generated by the function $f_\alpha$? It turns out that when $\alpha$ is a simple closed curve, it is very easy to describe this Hamiltonian flow. If $\alpha$ is nonseparating, then define a flow $\psi_t$ by $\psi_t\phi(\gamma)=\phi(\gamma)$ when $\gamma$ is represented by a curve disjoint from $\alpha$, and $\psi_t\phi(\gamma)= \exp(tF(\phi(\alpha)))\phi(\gamma)$ if $\gamma$ intersects $\alpha$ exactly once with a positive orientation (there is a similar formula when $\alpha$ is separating). In other words, the representation is constant on the fundamental group of the surface “cut open” along the curve $\alpha$, and only deforms in the way the two conjugacy classes of $\alpha$ in the cut open surface are identified in $\pi$.

In the important motivating case that $G = \text{PSL}(2,\mathbb{R})$, so that one component of $\text{Hom}(\pi,G)/G$ is the Teichmüller space of hyperbolic structures on the surface $S$, one can take $f = 2\cosh^{-1}(\text{tr}/2)$, and then $f_\alpha$ is just the length of the geodesic in the free homotopy class of $\alpha$, in the hyperbolic structure on $S$ associated to a representation. In this case, the symplectic structure on the character variety restricts to the Weil-Petersson symplectic structure on Teichmüller space, and the Hamiltonian flow associated to the length function $f_\alpha$ is a family of Fenchel-Nielsen twists, i.e. the deformations of the hyperbolic structure obtained by cutting along the geodesic $\alpha$, rotating through some angle, and regluing. This latter observation recovers a famous theorem of Wolpert, connected in an obvious way to his formula for the symplectic form $\omega = \sum dl_\alpha \wedge d\theta_\alpha$ where $\theta$ is angle and $l$ is length, and the sum is taken over a maximal system of disjoint essential simple curves $\alpha$ for the surface $S$.

The combinatorial nature of the Goldman bracket suggests that it might have applications in combinatorial group theory. Turaev discovered a Lie cobracket on $\mathbb{Z}\hat{\pi}$, and showed that together with the Goldman bracket, one obtains a Lie bialgebra. Motivated by Stallings’ reformulation of the Poincaré conjecture in terms of group theory, Turaev asked whether a free homotopy class contains a power of a simple curve if and only if the cobracket of the class is zero. The answer to this question is negative, as shown by Chas; on the other hand, Chas and Krongold showed that a class $\alpha$ is simple if and only if $[\alpha,\alpha^3]$ is zero. Nevertheless, the full geometric meaning of the Goldman bracket remains mysterious, and a topic worthy of investigation.