
The other day at lunch, one of my colleagues — let’s call her “Wendy Hilton” to preserve her anonymity (OK, this is pretty bad, but perhaps not quite as bad as Clive James’s use of “Romaine Rand” as a pseudonym for “Germaine Greer” in Unreliable Memoirs . . .) — expressed some skepticism about a somewhat unusual assertion that I make at the start of my scl monograph. Since it is my monograph, I feel free to quote the offending paragraphs:

It is unfortunate in some ways that the standard way to refer to the plane emphasizes its product structure. This product structure is topologically unnatural, since it is defined in a way which breaks the natural topological symmetries of the object in question. This fact is thrown more sharply into focus when one discusses more rigid topologies.

At this point I give an example, namely that of the Zariski topology, pointing out that the product topology of two copies of the affine line with the Zariski topology is not the same as the Zariski topology on the affine plane (for instance, the diagonal x = y is Zariski closed in the plane, being the zero set of a polynomial, whereas every proper closed set in the product topology is contained in a finite union of horizontal lines, vertical lines, and points, and therefore contains only finitely many points of the diagonal). All well and good. I then go on to claim that part of the bias is biological in origin, citing the following example as evidence:

Example 1.2 (Primary visual cortex). The primary visual cortex of mammals (including humans), located at the posterior pole of the occipital cortex, contains neurons hardwired to fire when exposed to certain spatial and temporal patterns. Certain specific neurons are sensitive to stimulus along specific orientations, but in primates, more cortical machinery is devoted to representing vertical and horizontal than oblique orientations (see for example [58] for a discussion of this effect).

(Note: [58] is a reference to the paper “The distribution of oriented contours in the real world” by David Coppola, Harriett Purves, Allison McCoy, and Dale Purves, Proc. Natl. Acad. Sci. USA 95 (1998), no. 7, 4002–4006.)

I think Wendy took this to be some kind of poetic license or conceit, and perhaps even felt that it was a bit out of place in a serious research monograph. On balance, I think I agree that it comes across as somewhat jarring and unexpected to the reader, and that its tone and focus are somewhat inconsistent with those of the rest of the book. But I also think that in certain subjects in mathematics — and I would put low-dimensional geometry/topology in this category — we are often not aware of the extent to which our patterns of reasoning and imagination are shaped, limited, or (mis)directed by our psychological — and especially psychophysical — natures.

The particular question of how the mind conceives of, imagines, or perceives any mathematical object is complicated and multidimensional, and colored by historical, social, and psychological (not to mention mathematical) forces. It is generally a vain endeavor to find precise physical correlates of complicated mental objects, but in the case of the plane (or at least one cognitive surrogate, the subjective visual field) there is a natural candidate for such a correlate. Cells at the posterior pole of the occipital lobe are arranged in a “map” in the region known as the “primary visual cortex”, or V1. There is a precise geometric relationship between the location of neurons in V1 and the points in the subjective visual field they correspond to. Further visual processing is done by other areas V2, V3, V4, V5 of the visual cortex. Information is fed forward from V_i to V_j with j > i, but also backward from V_j to V_i, so that visual information is processed at several levels of abstraction simultaneously, and the results of this processing are compared and refined in a complicated synthesis (this tends to make me think of the parallel terraced scan model of analogical reasoning put forward by Douglas Hofstadter and Melanie Mitchell; see Fluid Concepts and Creative Analogies, Chapter 5).

The initial processing done by the V1 area is quite low-level; individual neurons are sensitive to certain kinds of stimuli, e.g. color, spatial periodicity (on various scales), motion, orientation, etc. As remarked earlier, more neurons are devoted to detecting horizontally or vertically aligned stimuli; in other words, our brains literally devote more hardware to perceiving or imagining vertical and horizontal lines than to lines with an oblique orientation. This is not to say that at some higher, more integrated level, our perception is not sensitive to other symmetries that our hardware does not respect, just as a random walk on a square lattice in the plane converges (after a parabolic rescaling) to Brownian motion (which is not just rotationally but conformally invariant). However, the fact is that humans perform statistically better on cognitive tasks that involve the perception of figures aligned along the horizontal and vertical axes than on similar tasks that differ only by a rotation of the figures.
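The emergence of extra symmetry in the limit is easy to see numerically. Here is a minimal sketch in Python (the simulation and the particular second-moment test are my own choices, not taken from any reference): the cloud of endpoints of long walks on the square lattice has the same mean squared extent in every direction, oblique ones included.

```python
import random
import math

def walk_endpoint(n):
    # endpoint of an n-step simple random walk on the square lattice
    x = y = 0
    for _ in range(n):
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x += dx
        y += dy
    return x, y

def mean_squared_projection(points, theta):
    # mean squared projection onto the unit vector at angle theta;
    # for an isotropic cloud this does not depend on theta
    c, s = math.cos(theta), math.sin(theta)
    return sum((c * x + s * y) ** 2 for x, y in points) / len(points)

n_steps, n_walks = 10_000, 1_000
points = [walk_endpoint(n_steps) for _ in range(n_walks)]
for theta in (0.0, math.pi / 8, math.pi / 4):
    print(theta, mean_squared_projection(points, theta))
# all three printed values are close to n_steps / 2, whether theta
# is axis-aligned or oblique
```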

It is perhaps interesting therefore that the earliest (?) mathematical conception of the plane, due to the Greeks, did not give a privileged place to the horizontal or vertical directions, but treated all orientations on an equal footing. In other words, in Greek (Euclidean) geometry, the definitions respect the underlying symmetries of the objects. Of course, from our modern perspective we would say that the Greeks did not give a definition of the plane at all, or at best, that the definition is woefully inadequate. According to one well-known translation, the plane is introduced as a special kind of surface as follows:

A surface is that which has length and breadth.

When a surface is such that the right line joining any two arbitrary points in it lies wholly in the surface, it is called a plane.

This definition of a surface looks as though it is introducing coordinates, but in fact one might just as well interpret it as defining a surface in terms of its dimension; having defined a surface (presumably thought of as being contained in some ambient undefined three-dimensional space) one defines a plane to be a certain kind of surface, namely one that is convex. Horizontal and vertical axes are never introduced. Perpendicularity is singled out as important, but the perpendicularity of two lines is a relative notion, whereas horizontality and verticality are absolute. In the end, Euclidean geometry is defined implicitly by its properties, most importantly isotropy (i.e. all right angles are equal to one another) and the parallel postulate, which singles it out from among several alternatives (elliptic geometry, hyperbolic geometry). In my opinion, Euclidean geometry is imprecise but natural (in the sense of category theory), because objects are defined in terms of the natural transformations they admit, and in a way that respects their underlying symmetries.

In the 15th century, the Italian artists of the Renaissance developed the precise geometric method of perspective painting (although the technique of representing more distant objects by smaller figures is extremely ancient). Its invention is typically credited to the architect and engineer Filippo Brunelleschi; one may speculate that the demands of architecture (i.e. the representation of precise 3-dimensional geometric objects in 2-dimensional diagrams) were among the stimuli that led to this invention (perhaps this suggestion is anachronistic?). Mathematically, this gives rise to the geometry of the projective plane, i.e. the space of lines through the origin (the “eye” of the viewer of a scene). In principle, one could develop projective geometry without introducing “special” directions or families of lines. However, in one, two, or three point perspective, families of lines parallel to one or several “special” coordinate axes (along which significant objects in the painting are aligned) appear to converge to one of the vanishing points of the painting. (Concretely: with the eye at the origin and the picture plane at z = 1, the point (x,y,z) paints to (x/z, y/z), and the image of any line P + tv with direction v = (v_1,v_2,v_3) and v_3 \neq 0 converges as t \to \infty to (v_1/v_3, v_2/v_3), a point depending only on the direction v.) In his treatise “De pictura” (on painting), Leon Battista Alberti (a friend of Brunelleschi) explicitly described the geometry of vision in terms of projections onto a (visual) plane. Amusingly (in the context of this blog post), he explicitly distinguishes between the mathematical and the visual plane:

In all this discussion, I beg you to consider me not as a mathematician but as a painter writing of these things.

Mathematicians measure with their minds alone the forms of things separated from all matter. Since we wish the object to be seen, we will use a more sensate wisdom.

I beg to differ: similar parts of the brain are used for imagining a triangle and for looking at a painting. Alberti’s claim sounds a bit too much like Gould’s “non-overlapping magisteria”, and in a way it is disheartening that it was made at a place and point in history at which mathematics and the visual arts were perhaps at their closest.

In the 17th century René Descartes introduced his coordinate system and thereby invented “analytic geometry”. To us it might not seem like such a big leap to go from a checkerboard floor in a perspective painting (or a grid of squares to break up the visual field) to the introduction of numerical coordinates to specify a geometrical figure, but Descartes’s ideas for the first time allowed mathematicians to prove theorems in geometry by algebraic methods. Analytic geometry is contrasted with “synthetic geometry”, in which theorems are deduced logically from primitive axioms and rules of inference. In some abstract sense, this is not a clear distinction, since algebra and analysis also rest on primitive axioms and rules of deduction. In my opinion, this terminology reflects a psychological distinction between “analytic methods”, in which one computes blindly and then thinks about what the results of the calculation mean afterwards, and “synthetic methods”, in which one has a mental model of the objects one is manipulating, and directly intuits the “meaning” of the operations one performs. Philosophically speaking, the first is formal, the second is platonic. Biologically speaking, the first does not make use of the primary visual cortex, the second does.

As significant as Descartes’s ideas were, mathematicians were slow to take real advantage of them. Complex numbers were invented by Cardano in the mid 16th century, but the idea of representing complex numbers geometrically, by taking the real and imaginary parts as Cartesian coordinates, had to wait until Argand in the early 19th century.

Incidentally, I have heard it said that the Greeks did not introduce coordinates because they drew their figures on the ground and looked at them from all sides, whereas Descartes and his contemporaries drew figures in books. Whether this has any truth to it or not, I do sometimes find it useful to rotate a mathematical figure I am looking at, in order to stimulate my imagination.

After Poincaré’s invention of topology in the late 19th century, there was a new kind of model of the plane to be (re)imagined, namely the plane as a topological space. One of the most interesting characterizations was obtained by the brilliantly original and idiosyncratic R. L. Moore in his paper, “On the foundations of plane analysis situs”. Let me first remark that the line can be characterized topologically in terms of its natural order structure; one might argue that this characterization more properly determines the oriented line, and this is a fair comment, but at least the object has been determined up to a finite ambiguity. Let me second of all remark that the characterization of the line in terms of order structures is useful; a (countable) group G is abstractly isomorphic to a group of (orientation-preserving) homeomorphisms of the line if and only if G admits an (abstract) left-invariant order.
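The argument is worth sketching, since both directions are short. Given a left-invariant order on a countable group G, enumerate G = \lbrace g_1, g_2, \ldots \rbrace and build an order-preserving map t: G \to \mathbb{R} inductively, placing each t(g_{n+1}) on the correct side of the finitely many reals already chosen; G then acts on the image by left multiplication, and this action extends to an orientation-preserving action on all of \mathbb{R} (for instance, affinely across the complementary gaps). This is the so-called dynamical realization of the order. Conversely, given a faithful action by orientation-preserving homeomorphisms, fix a dense sequence x_1, x_2, \ldots and declare g < h whenever g(x_i) < h(x_i) at the first index i at which the two orbits differ; the order is total because a nontrivial homeomorphism moves some x_i, and it is left-invariant because every element of G acts by an order-preserving map.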

Given points and the line, Moore proceeds to list a collection of axioms which serve to characterize the plane amongst topological spaces. The axioms are expressed in terms of separation properties of primitive undefined terms called points and regions (which correspond more or less to ordinary points and open sets homeomorphic to the interiors of closed disks respectively) and non-primitive objects called “simple closed curves” which are (eventually) defined in terms of simpler objects. Moore’s axioms are “natural” in the sense that they do not introduce new, unnecessary, unnatural structure (such as coordinates, a metric, special families of “straight” lines, etc.). The basic principle on which Moore’s axioms rest is that of separation — which continua separate which points from which others? If there is a psychophysical correlate of this mathematical intuition, perhaps it might be the proliferation of certain neurons in the primary visual cortex which are edge detectors — they are sensitive, not to absolute intensity, but to a spatial discontinuity in the intensity (associated with the “edge” of an object). The visual world is full of objects, and our eyes evolved to detect them, and to distinguish them from their surroundings (to distinguish figure from ground as it were). If I have an objection to Cartesian coordinates on biological grounds (I don’t, but for the sake of argument let’s suppose I do) then perhaps Moore should also be disqualified for similar reasons. Or rather, perhaps it is worth being explicitly aware, when we make use of a particular mathematical model or intellectual apparatus, of which aspects of it are necessary or useful because of their (abstract) applications to mathematics, and which are necessary or useful because we are built in such a way as to need or to be able to use them.

The development and scope of modern biology is often held out as a fantastic opportunity for mathematicians. The accumulation of vast amounts of biological data, and the development of new tools for the manipulation of biological organisms at microscopic levels and with unprecedented accuracy, invite the development of new mathematical tools for their analysis and exploitation. I know of several examples of mathematicians who have dipped a toe, or sometimes some more substantial organ, into the water. But it has struck me that I know (personally) few mathematicians who believe they have something substantial to learn from the biologists, despite the existence of several famous historical examples. This strikes me as odd; my instinctive feeling has always been that intellectual ruts develop so easily, so deeply, and so invisibly, that continual cross-fertilization of ideas is essential to escape ossification (if I may mix biological metaphors . . .).

It is not necessarily easy to come up with profound examples of biological ideas or principles that can be easily translated into mathematical ones, but it is sometimes possible to come up with suggestive ones. Let me try to give a tentative example.

Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic blueprint for all known living things. This blueprint takes the form of a code — a molecule of DNA is a long polymer strand composed of simple units called nucleotides; such a molecule is typically imagined as a string in a four character alphabet \lbrace A,T,G,C \rbrace, which stand for the nucleotides Adenine, Thymine, Guanine, and Cytosine. These molecular strands like to arrange themselves in tightly bound, oppositely aligned pairs, matching up nucleotides in one string with complementary nucleotides in the other, so that A matches with T, and C with G.
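In the language of strings, the strand that binds to a given one is its reverse complement (the reversal accounts for the opposite alignment). A few lines of Python make the rule concrete (a toy sketch; the function name is my own):

```python
# complementary base pairs: A binds to T, C binds to G
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    # the oppositely aligned strand that binds to the given one
    return "".join(COMPLEMENT[base] for base in reversed(strand))

s = "ATGGCA"
t = reverse_complement(s)            # "TGCCAT"
assert reverse_complement(t) == s    # binding is symmetric
```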

The geometry of a strand of DNA is very complicated — strands can be tangled, knotted, or linked in complicated ways, and the fundamental interactions between strands (e.g. transcription, recombination) are facilitated or obstructed by mechanical processes depending on this geometry. Topology, especially knot theory, has been used in the study of some of these processes; the virtues of topological methods in this context include their robustness (fault-tolerance) and the discreteness of their invariants (similar virtues motivate some efforts to build topological quantum computers). A complete mathematical description of the salient biochemistry, mechanics, and semantic content of a configuration of DNA in a single cell is an unrealistic goal for the foreseeable future, and therefore attempts to model such systems depend on ignoring, or treating statistically, certain features of the system. One such framework ignores the ambient geometry entirely, and treats the system using symbolic or combinatorial methods which have some of the flavor of geometric group theory.

One interesting approach is to consider a mapping from the alphabet of nucleotides to a standard generating set for F_2, the free group on two generators; for example, one can take the mapping T \to a, A \to A, C \to b, G \to B, where a, b are free generators for F_2, and A, B denote their inverses. Then a pair of oppositely aligned strands of DNA translates into an edge of a van Kampen diagram — the “words” obtained by reading the letters along an edge on either side are inverse in F_2.
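In code (continuing the toy sketch above, with capital letters standing for inverse generators; the encoding and helper names are my own), the dictionary and the inverse-word property look like this:

```python
# T -> a, A -> a^{-1}, C -> b, G -> b^{-1}; capitals denote inverses
TO_F2 = {"T": "a", "A": "A", "C": "b", "G": "B"}
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    return "".join(COMPLEMENT[b] for b in reversed(strand))

def invert(letter):
    return letter.swapcase()

def free_reduce(word):
    # cancel adjacent inverse pairs until the word is reduced
    out = []
    for x in word:
        if out and out[-1] == invert(x):
            out.pop()
        else:
            out.append(x)
    return "".join(out)

def strand_to_word(strand):
    return free_reduce("".join(TO_F2[base] for base in strand))

def inverse_word(word):
    return "".join(invert(x) for x in reversed(word))

s = "TTCG"
t = reverse_complement(s)   # "CGAA", the strand that binds to s
# the two sides of a paired edge carry mutually inverse words in F_2:
assert strand_to_word(t) == inverse_word(strand_to_word(s))
```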

Strands of DNA in a configuration are not always paired along their lengths; sometimes junctions of three or more strands can form; certain mobile four-strand junctions, so-called “Holliday junctions”, perform important functions in the process of genetic recombination, and are found in a wide variety of organisms. A configuration of several strands with junctions of varying valences corresponds in the language of van Kampen diagrams to a fatgraph — i.e. a graph together with a choice of cyclic ordering of edges at each vertex — with edges labeled by inverse pairs of words in F_2 (note that this is quite different from the fatgraph model of proteins developed by Penner-Knudsen-Wiuf-Andersen). The energy landscape for branch migration (i.e. the process by which DNA strands separate or join along some segment) is very complicated, and it is challenging to model it thermodynamically. It is therefore not easy to predict in advance what kinds of fatgraphs are more or less likely to arise spontaneously in a prepared “soup” of free DNA strands.
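A fatgraph is conveniently encoded by two permutations on the set of half-edges: one whose cycles are the cyclic orders at the vertices, and an involution pairing the two halves of each edge; the boundary components of the thickened surface are then the cycles of the composition. Here is a minimal sketch (the encoding conventions are my own):

```python
def cycles(perm):
    # decompose a permutation, given as a dict {i: perm[i]}, into cycles
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

def euler_char_and_boundary(sigma, iota):
    # sigma: next half-edge in the cyclic order at each vertex
    # iota:  the involution pairing the two halves of each edge
    V = len(cycles(sigma))   # vertices
    E = len(cycles(iota))    # edges
    boundary = cycles({h: sigma[iota[h]] for h in sigma})
    return V - E, boundary

# one vertex with cyclic order (0 1 2 3); edges {0,2} and {1,3}:
# a wedge of two circles, which thickens to a once-punctured torus
sigma = {0: 1, 1: 2, 2: 3, 3: 0}
iota = {0: 2, 2: 0, 1: 3, 3: 1}
chi, boundary = euler_char_and_boundary(sigma, iota)
print(chi, len(boundary))   # -1 1
```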

As a thought experiment, consider the following “toy” model, which I do not suggest is physically realistic. We make the assumption that the energy cost of forming a junction of valence v is c(v-2) for some fixed constant c. Consequently, the energy of a configuration is proportional to -\chi, i.e. the negative of the Euler characteristic of the underlying graph (summing v - 2 over the vertices gives 2E - 2V = -2\chi). Let w be a reduced word, representing an element of F_2, and imagine a soup containing some large number of copies of the strand of DNA corresponding to the string \dot{w}:=\cdots www \cdots. In thermodynamic equilibrium, the partition function has the form Z = \sum_i e^{-E_i/k_B T} where k_B is Boltzmann’s constant, T is temperature, and E_i is the energy of a configuration (which by hypothesis is proportional to -\chi). At low temperature, minimal energy configurations tend to dominate; these are the configurations that minimize -\chi per unit “volume”. Topologically, a fatgraph corresponding to such a configuration can be thickened to a surface with boundary. The words along the edges determine a homotopy class of map from such a surface to a K(F_2,1) (e.g. a once-punctured torus) whose boundary components wrap multiply around the free homotopy class corresponding to the conjugacy class of w. The infimum of -\chi/2d, where d is the winding degree on the boundary, taken over all configurations, is precisely the stable commutator length of w; see e.g. here for a definition.
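As a sanity check on the formula, take w = aba^{-1}b^{-1}. The once-punctured torus realizes the infimum: the corresponding configuration is the wedge of two circles from the sketch above, with \chi = -1 and boundary wrapping once around the conjugacy class of w, so that -\chi/2d = 1/2; and indeed scl(aba^{-1}b^{-1}) = 1/2. In the toy model, this is the shape into which a soup of \cdots www \cdots strands should settle at low temperature.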

Anyway, this example is perhaps a bit strained (and maybe it owes more to thermodynamics than to biology), but already it suggests a new mathematical object of study, namely the partition function Z as above, and one is already inclined to look for examples for which the partition function obeys a symmetry like that enjoyed by the Riemann zeta function, or to specialize temperature to other values, as in random matrix theory. The introduction of new methods into the study of a classical object — for example, the decision to use thermodynamic methods to organize the study of van Kampen diagrams — bends the focus of the investigation towards those examples and contexts where the methods and tools are most informative. Phenomena familiar in one context (power laws, frequency locking, phase transitions etc.) suggest new questions and modes of enquiry in another. Uninspired or predictable research programs can benefit tremendously from such infusions, whether the new methods are borrowed from other intellectual disciplines (biology, physics), or depend on new technology (computers), or new methods of indexing (google) or collaboration (polymath).

One of my intellectual heroes — Wolfgang Haken — worked for eight years in R&D for Siemens in Munich after completing his PhD. I have a conceit (unsubstantiated as far as I know by biographical facts) that his experience working for a big engineering firm colored his approach to mathematics, and made it possible for him to imagine using industrial-scale “engineering” tools (e.g. integer programming, exhaustive computer search of combinatorial possibilities) to solve two of the most significant “pure” mathematical open problems in topology at the time — the knot recognition problem, and the four-color theorem. It is an interesting exercise to try to imagine (fantastic) variations. If I sit down and decide to try to prove (for example) Cannon’s conjecture, I am liable to try minor variations on things I have tried before, appeal for my intuition to examples that I understand well, read papers by others working in similar ways on the problem, etc. If I imagine that I have been given a billion dollars to prove the conjecture, I am almost certain to prioritize the task in different ways, and to entertain (and perhaps create) much more ambitious or innovative research programs to tackle the task. This is the way in which I understand the following quote by John Dewey, which I used as the colophon of my first book:

Every great advance in science has issued from a new audacity of the imagination.
