## Why Abstract Mathematics Matters

It often seems, to those with only a high school (or even basic university) education in the subject, that mathematics is largely a solved problem. Sure, the thinking goes, there are those arcane and abstruse little corners of irrelevancy with which academics busy themselves, but for the most part we know it all, right?

In fact there is a great deal that we simply do not know, and do not understand; and a lot of it is much simpler, and much closer to home than you might think. In many ways our understanding of the subject only really began very recently. Let’s take a brief tour of some of what we don’t know.

Certainly at the cutting face of research mathematics there is much to learn and much that is not understood. It can be argued that these topics are precisely those obscure irrelevancies, mathematical chimera, that were dismissed earlier; but they provide a point of entry and are, perhaps, not as irrelevant as they may seem at first glance. The difficulty in describing these areas lies in the layered scaffolding of definitions and intermediary results that mathematicians have constructed beneath them as they climb towards their goal. Such constructions are often daunting to the outsider, and serve to obfuscate what lies at their heart. This means that the depth of the underlying elegance may not be clear as we climb, but hopefully we will catch a glimpse as we climb down the other side.

The particular edifice I would like to climb for the purposes of this discussion is that of Group Theory and Representation Theory. Along the way I’ll try and point out some of the various directions where our understanding is less than perfect without necessarily getting into the details of those specific fields.

Let’s begin with the definition of a group, sacrificing a little rigour and precision for the sake of brevity: A group is a set of objects with a particular “algebraic structure”. By an algebraic structure I mean a set of operations that take one or more elements of the set as arguments and return a single element of the set. A natural concrete example is the algebraic structure provided to the set of natural numbers by addition and multiplication – operations that take two numbers and return a single number. The more such operations you have the greater the scope for complex interplay between them. For the particular algebraic structure that defines a group we ask for things to be (almost) as simple as possible. We require a single operation that takes exactly two arguments and is associative (the way we group successive applications doesn’t affect the end result: *f(f(x,y),z) = f(x,f(y,z))*). We also require the existence of certain set elements with regard to how they behave under this operation. First we require an “identity” element – an element *e* such that for any other element *x* we have *f(x,e) = f(e,x) = x* where *f* is our associative operation. Second we require the existence of “inverse” elements – for each *x* we require some element *x’* such that *f(x,x’) = f(x’,x) = e*. Any set of objects with an algebraic structure satisfying those constraints is a group. Adding more operations and different constraints leads to a profusion of other things to study: rings, domains, fields, modules, algebras, etc.
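The axioms above are concrete enough to check mechanically. Here is a minimal sketch (the names `f`, `e`, and `elements` are just illustrative) verifying that the integers {0, …, 4} under addition mod 5 satisfy all three group axioms:

```python
# A sketch: checking the group axioms for the integers mod 5 under
# addition. Nothing here comes from a library; it is a direct
# transcription of the axioms in the text.

N = 5
elements = list(range(N))

def f(x, y):
    """The group operation: addition modulo N."""
    return (x + y) % N

# Associativity: f(f(x, y), z) == f(x, f(y, z)) for all x, y, z.
assert all(f(f(x, y), z) == f(x, f(y, z))
           for x in elements for y in elements for z in elements)

# Identity: e = 0 satisfies f(x, e) == f(e, x) == x for every x.
e = 0
assert all(f(x, e) == x and f(e, x) == x for x in elements)

# Inverses: for each x there is some x' with f(x, x') == f(x', x) == e.
assert all(any(f(x, y) == e and f(y, x) == e for y in elements)
           for x in elements)
```

The same exhaustive check works for any small finite operation table, which is one reason finite groups feel like they *should* be easy to classify.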

Given just this abstract axiomatic definition, just the raw scaffolding, it is hard to see the motivation for studying such things – why abstract out from numbers in this way? Surprisingly groups, rings, fields, and other such structures crop up in all manner of different contexts – once you think to look for them you find them everywhere. It turns out that we require more study of such things, not less. For example programming multi-threaded and multi-process applications is considered very hard, in large part because it is hard to think and reason about many different things all happening at once in parallel. The study of communicating concurrent processes, which is to say multi-threaded and multi-process systems, can be reduced to a structure that is almost (but not quite – it lacks a couple of constraints) a ring. The subtle difference puts it just outside the reach of the vast array of tools and theorems mathematicians have developed. A better understanding of those structures could quickly make reasoning about and programming massively concurrent multithreaded systems exceptionally simple. For now it remains one of those areas about which we know far too little. Let us get back to groups however.

For something as surprisingly prevalent as it is, we know a lot less about groups than you might think. The designation of something as a group is very general – we require only some very general properties of the algebraic structure imposed upon the set; the specific properties of the set itself, and the algebraic structure imposed, can lead to a wide variety of different groups, each with their own unique character and properties. Perhaps the most immediate division is between groups with only finitely many elements, and infinite groups. One might expect that when restricted to just groups with finitely many elements it should be relatively simple to work out what all the possibilities are. Classifying finite groups is, however, a remarkably difficult problem and remains, to this day, unsolved. That is not to say that a lot of work has not been done. So-called Abelian groups, where the operation is commutative (that is *f(x,y) = f(y,x)* for any *x,y* in the group), are well understood and can essentially be characterised knowing little more than the size of the set on which the algebraic structure is defined. Similarly, after a vast effort spanning decades, there is a classification of all so-called finite simple groups. Among the best existing theorems on the structure of general finite groups are the Sylow Theorems, which relate a finite group to its p-groups – subgroups whose size is a power of a prime number *p*. The classification of finite p-groups remains an open problem on which much work is currently being done using methods and techniques from seemingly unrelated fields, including theorems about Lie algebras. It appears that the structure of finite groups is a very complex problem that relies on a much deeper understanding of algebraic structures other than just groups.

Let us turn our attention then to Representation Theory. For Representation Theory we will require a little more scaffolding. We begin with group actions. A group action is a way of thinking of a group (call it *G*) as acting on some other set (call it *A*): We can think of each element of *G* as a function from *A* to *A*, and to make sure it all makes sense we will require that for any *x,y* in *G* and *a* in *A* it follows that *x(y(a)) = f(x,y)(a)* where *f* is, as before, the group operation. That sounds rather obscure and technical, but essentially we are just asking that if we view elements of *G* as functions, those functions should still respect the algebraic structure of the group. We also require that the identity element of the group should act as an identity function on *A*. If *A* has some structure (be it algebraic or otherwise) of its own we can ask the inverse question: what of the set of all (bijective) functions from *A* to *A* that preserve the structure? Surprisingly that set of functions turns out to behave precisely as a group with a natural pre-defined action on *A*. It is in this sense that groups are often referred to as symmetries – at their heart symmetries are structure preserving functions – wherever you find symmetries you find a group that describes them all.
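A small concrete sketch may help (names here are illustrative): the cyclic group Z/4Z of quarter-turn rotations acting on the four vertices of a square. The compatibility condition *x(y(a)) = f(x,y)(a)* can be checked exhaustively:

```python
# A sketch of a group action: Z/4Z (quarter-turn rotations of a
# square) acting on the set A of vertex positions.

N = 4
G = list(range(N))      # group elements: number of quarter turns
A = list(range(N))      # the set being acted on: vertex positions

def f(x, y):
    """The group operation: composing two rotations."""
    return (x + y) % N

def act(x, a):
    """Element x of G viewed as a function from A to A."""
    return (a + x) % N

# Compatibility: x(y(a)) == f(x, y)(a) for all x, y in G and a in A.
assert all(act(x, act(y, a)) == act(f(x, y), a)
           for x in G for y in G for a in A)

# The identity element of G acts as the identity function on A.
assert all(act(0, a) == a for a in A)
```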

The next ingredient we require for Representation Theory is group homomorphisms. These are, simply put, maps or functions from one group to another. At first glance this is no different than functions on the underlying sets, with which most people should be familiar – what makes it a function between groups is the necessity that the function respect the structure of the groups involved. Specifically we require that a function *φ* from a group *G* to a group *H* obey *φ(f(x,y)) = g(φ(x),φ(y))* where *f* and *g* are the group operations for *G* and *H* respectively. This doesn’t seem particularly momentous, but a surprising amount of theory flows from this very simple definition. It turns out that we can learn everything there is to know about the internal structure of a group simply by looking at how it relates to other groups through homomorphisms: the external relationships define the internal structures – this is a surprisingly deep insight and is one of the founding observations for Category Theory, a language and theory that is having a revolutionary impact on both mathematics and computer science.
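The defining property is easy to test on a small example (with illustrative names): reduction mod 3 is a homomorphism from Z/6Z, with addition mod 6, to Z/3Z, with addition mod 3.

```python
# A sketch of a group homomorphism: "reduce mod 3" maps the group
# Z/6Z (addition mod 6) onto the group Z/3Z (addition mod 3).

G = list(range(6))
H = list(range(3))

def f(x, y):
    """Group operation in G."""
    return (x + y) % 6

def g(x, y):
    """Group operation in H."""
    return (x + y) % 3

def phi(x):
    """The homomorphism: reduce an element of G mod 3."""
    return x % 3

# The defining property: phi(f(x, y)) == g(phi(x), phi(y)).
assert all(phi(f(x, y)) == g(phi(x), phi(y))
           for x in G for y in G)
```

Note that *φ* collapses information (0 and 3 both map to 0); exactly what gets collapsed, and how, is what homomorphisms reveal about internal structure.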

We have another topic to cover, and that is vector spaces – I’ll try to leave out details that won’t concern us so as to keep things moving along. Probably the easiest way to think of a vector space is as an *n*-dimensional space – that is a space of points where each point requires *n* pieces of information to uniquely specify it. We then allow scaling of points, multiplying by a “scalar” which is just an ordinary number. In essence this is simply stretching the point away from (or toward, if the scalar is less than 1) the origin. We also need to be able to add points together, which works by the familiar parallelogram rule for vectors. Now, to say that each point can be uniquely specified by *n* pieces of information is to say that there is a set of *n* points such that each point can be uniquely expressed as a sum of multiples of those *n* points. We can ask if there is just one such set of *n* points that will let us do that – and the answer is no, there are lots! So what if we start with one set of *n* base points that work, and have a function that maps them to another set that also works? This is what is called a linear transformation, and as long as we use linear transformations the structure of the vector space (the ability to scale and add points and express any point as a combination of *n* base points) is always preserved. If the whole concept of preserving structure is sounding familiar, then you can guess where this is headed.
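Structure preservation here means exactly two equations: a linear transformation *T* satisfies *T(u + v) = T(u) + T(v)* and *T(cu) = cT(u)*. A sketch in the plane, with a matrix standing in for *T* and plain tuples standing in for points (all names illustrative):

```python
# A sketch of a linear transformation of the plane (a 2-dimensional
# vector space): a matrix acting on points, preserving both scaling
# and parallelogram addition.

def add(u, v):
    """Parallelogram addition of two points."""
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    """Scaling a point by the scalar c."""
    return (c * u[0], c * u[1])

# An invertible matrix, applied to a point (x, y).
M = ((2, 1),
     (1, 1))

def T(u):
    return (M[0][0] * u[0] + M[0][1] * u[1],
            M[1][0] * u[0] + M[1][1] * u[1])

# Linearity: T preserves addition and scaling, so any point written as
# a combination of base points maps to the same combination of the
# transformed base points.
u, v, c = (3, -1), (2, 5), 4
assert T(add(u, v)) == add(T(u), T(v))
assert T(scale(c, u)) == scale(c, T(u))
```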

Take a vector space and consider the set of all (invertible) linear transformations of the vector space. That set of linear transformations (since they all preserve the structure of the vector space), as noted when discussing group actions, forms a group. It is possible then, given some group *G*, to wonder about all the homomorphisms from *G* into the group of linear transformations of some vector space. Each such homomorphism is called a *representation of G*, and the study of such representations is called Representation Theory. A couple more facts we’ll need: the “degree” of a representation is the dimension of the vector space, and a representation is called “irreducible” if it cannot be broken down and expressed as a “sum” of simpler representations.
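Pulling the pieces together, here is a sketch (illustrative names throughout) of a degree-2 representation of Z/4Z: the homomorphism sending *k* to rotation of the plane by *k* quarter turns, with the matrices written out by hand.

```python
# A sketch of a representation: a homomorphism rho from the cyclic
# group Z/4Z into the group of invertible linear transformations of
# the plane, sending k to rotation by k quarter turns.

def matmul(A, B):
    """Compose two linear transformations (2x2 matrix product)."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

R = ((0, -1),
     (1,  0))       # rotation by one quarter turn
I = ((1, 0),
     (0, 1))        # the identity transformation

def rho(k):
    """The representation: k quarter turns as a matrix."""
    M = I
    for _ in range(k % 4):
        M = matmul(R, M)
    return M

# The homomorphism property: rho((x + y) mod 4) == rho(x) . rho(y),
# i.e. the group operation becomes matrix multiplication.
assert all(rho((x + y) % 4) == matmul(rho(x), rho(y))
           for x in range(4) for y in range(4))
assert rho(0) == I
```

The point of the definition is visible even here: questions about the abstract group turn into questions about matrices, which we have very good tools for.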

Again, this sounds like an unnecessarily abstract and futile exercise, but given that relationships to other groups can tell us a great deal about the potentially complex internal structure of a group, and that linear transformations of vector spaces are easy to deal with (they are just matrices!), it promises a way to better understand the internal structure of complicated groups. In practice Representation Theory provides an extremely rich and enlightening view of groups, the details proving to be far more intricate and beautiful (at least in an abstract mathematical sense) than can really be imagined without considerable study: new interesting structures called group-rings arise in a natural way, and prove to have their own intriguing and poorly understood complications; a new and barely explored expanse of open problems beckons; we discover just how much more we don’t know.

There is a great deal more climbing that can be done. Mathematicians continue to expand our understanding of these subjects, and there is much more that could be said before we truly arrive at the very cutting face. I would like to stop at this point though, in part because many readers are undoubtedly already feeling that we’ve climbed too far into the airy heights of irrelevancy, and in part because we’ve arrived at a convenient point to take a surprising step sideways.

So far we’ve been discussing the completely abstract. We’ve climbed up a path of increasingly theoretical constructions built one atop the other to reach a point of potentially perilous irrelevance. Now it’s time to step into the concrete world. Our universe can be thought of as a space with some structure – the laws of physics. We can ask what functions or transforms of space and time will preserve certain structures (like, say, Maxwell’s equations). This leads us to a group of transformations of space-time that preserve desired properties – as I said, groups, once you know to look for them, crop up everywhere. Now something intriguing happens: the internal structures of the group correspond in a natural way with physical conservation laws, such as the law of conservation of energy, or the law of conservation of momentum. Things get really interesting, however, when we realise that, given the group, each irreducible representation of the group very naturally corresponds to a different fundamental particle, with force-carrying and non-force-carrying particles neatly delineated from one another according to the degree of the irreducible representation! All of a sudden what had seemed like a little axiomatic game that mathematicians were off playing by themselves is having very real impacts on the world in which we live. All of a sudden we’ve come crashing back down into relevance. Which brings us to the start of the truly hard questions: why does it work?!

Mathematics works. That much is obvious to anybody – mathematics is essentially the language of much of science, and is seemingly unendingly applicable to the real world. The problem is that it is not at all obvious why this should actually be the case. Philosophers have struggled with this question since Pythagoras, all to no avail. It is all too common for mathematicians to embark on a purely theoretical exploration of the truly abstract and abstruse as a mental game, only to have, decades or centuries later, their work prove stunningly applicable to some very real world problem. The mathematics that laid the foundations for General Relativity began with mathematicians wondering, purely hypothetically, what would happen if they ignored Euclid’s fifth postulate; much of the work of G.H. Hardy, who famously claimed no discovery of his would ever make a practical difference in the world, has found considerable application in cryptology. To explain its effectiveness it seems we need first to explain what mathematics actually is.

If mathematics is simply the abstraction of our intuitions about the physical world, then why is it so universal? Surely each mathematician would then develop a personal mathematics from their own intuitions and claims of mathematical objectivity would become muddied. Equally one can ask why it is that a rejection of intuition on purely logical grounds, such as the rejection of Euclid’s fifth postulate, should lead to mathematics that is surprisingly more applicable rather than less.

Can it then be argued that mathematics is simply logic? This would explain, to some degree, its applicability: mathematics would simply be the expression of fundamental eternal universal truths. Unfortunately, despite truly heroic efforts by some of history’s greatest mathematicians and logicians (Frege in “Foundations of Arithmetic”, Russell and Whitehead in “Principia Mathematica”) the reduction of mathematics to pure logic failed. While Russell and Whitehead managed to construct a remarkable edifice out of pure logic, introducing along the way many new concepts such as Type Theory, which is now of great importance in computer science, they fell short of building mathematics as we know it: their system was too weak and required extra axioms (the axiom of infinity and the axiom of reducibility) which cannot reasonably be called purely logical principles.

Can we not, then, define our base set of rules and proceed by logic from there? This was the approach to mathematics favoured by Hilbert and his formalist school. While this approach did much to shore up the shaky foundations of pure mathematics before running afoul of Gödel’s Incompleteness Theorems, it simply sidesteps our initial question. Mathematics, in this line of thought, becomes reduced to a game played with symbols on paper and an arbitrary set of rules (we ask only for consistency). Why should such an arbitrary game apply to reality? That is not at all clear.

The debate as to what mathematics actually is continues, mostly in philosophical circles – no school of thought has yet given a satisfactory answer. Personally I lean towards a more recent view that mathematics is about structure and is best grounded in a structure-oriented language such as that of Category Theory. However, Category Theory does not yet provide a sufficiently rigorous foundation to make that claim truly defensible. Only time will tell. Thus we began by wondering what there is in mathematics that we don’t know, and conclude by realising that we don’t even know what mathematics is.

Further Reading:

A First Course in Abstract Algebra by John Fraleigh provides a gentle introduction to the elegant intricacies of Group Theory.

Conceptual Mathematics by William Lawvere and Stephen Schanuel is a very accessible (high school level) introduction to Category Theory.

Gauge Fields, Knots, and Gravity by John Baez and Javier Muniain is a (relatively) accessible book discussing the links between pure mathematics and physics in General Relativity and Quantum Field Theory.

Introduction to Mathematical Philosophy by Bertrand Russell provides an excellent and very readable introduction to the subject of mathematical philosophy.

Leland McInnes, August 22, 2009