Mathematics and the Real

I will argue that it is a category mistake to regard mathematics as “physically real”. I will quote D. E. Littlewood from his remarkable book “The Skeleton Key of Mathematics”:

“A trained sheepdog may perceive the significance of two, three or five sheep,  and may know that two sheep and three sheep make five sheep. But very likely the knowledge would tell him nothing concerning, say horses. A child learns that two fingers and three fingers make five fingers, that two beads and three beads make five beads. The irrelevance of the fingers or the beads or the exact nature of the things  that are counted becomes evident, and by the process of abstraction the universal truth that 2+3=5 becomes evident.”

I can just hear someone pouncing on Littlewood’s phrase “universal truth”, but that is not the reason I quote this paragraph.  I quote it to remind us that the essence of mathematics is never in the particular way it is represented, but in the concept that it brings forth, and the unification of particulars that it embodies.

One might hypothesize that any mathematical system will find natural realizations. This is not the same as saying that the mathematics itself is realized. The point of an abstraction is that it is not, as an abstraction, realized. The set { { }, { { } } } has 2 elements, but it is not the number 2. The number 2 is nowhere “in the world”.

I argue that one should understand from the outset that mathematics is distinct from the physical. Then it is possible to get on with the remarkable task of finding how mathematics fits with the physical, from the fact that we can represent numbers by rows of marks |, ||, |||, ||||, |||||, ||||||, …
(and note that whenever you do something concrete like this it only works for a while and then gets away from you: the abstraction lives on as clear as ever, while the marks get hard to organize and count) to the intricate relationships of the representations of the symmetric groups with particle physics (bringing us back ’round to Littlewood and the Littlewood-Richardson rule that appears to be the right abstraction behind elementary particle interactions).

Search for the right conceptual match between mathematics and phenomena, physical and computational. Understand that mathematics is distinct from its instances. And yet our understanding of the abstractions is utterly dependent on knowing more and more about their instantiations. The domains of the physical and the conceptual are distinct and they are mutually supporting.

It is very hard to take this point of view because we do not usually look carefully enough at our mathematics to distinguish the abstract part from the concrete or formal part.  The key is in the seeing of the pattern, not in the mechanical work of the computation. The work of the computation occurs in physicality. The seeing of the pattern, the understanding of its generality occurs in the conceptual domain.

Conceptual and physical domains are interlocked in our understanding.

Is Mathematics Real?

It has been proposed by Max Tegmark most recently, and earlier by others in other ways, that Reality is fundamentally Mathematics. I say Mathematics and not mathematical because that is apparently the proposal – that everything, nay Everything, is just mathematics! Well, let’s see what this could mean. We know that all the mathematics that can be articulated at the present time is built from sets and definitions about sets. I could ask for a further foundation for sets, but let’s postpone that. If I were teaching a class about foundations of mathematics, I would likely introduce them to the empty set { } (it has no members) and to the notion that if I have already constructed some sets, then I can form the set whose members are these formerly constructed sets. Thus I can form { { } }, the set whose member is the empty set. This is the first set born after the appearance of the empty set. Then I have the sets { }, {{}}, and we can now arrive at { { { } } } and { { } , {{}} }, the next two sets in the process of creation. I would also tell my students that two sets are equal if and only if they have the same members. Now we are finally on our way to making more and more in a big bang from the empty set into the vast set theoretic universe.
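If you like to see such constructions concretely, here is a little sketch of my own in Python (the names empty, first_born and pair are just labels for this illustration), using frozensets as stand-ins for sets; equality of frozensets is exactly equality of members.

# Model hereditarily finite sets with Python frozensets; frozenset equality is equality of members.
empty = frozenset()                    # { }
first_born = frozenset({empty})        # { { } }
pair = frozenset({empty, first_born})  # { { }, { { } } }
# Two separately written empty sets are the same set: they have exactly the same members (none).
assert frozenset() == frozenset()
assert empty != first_born             # one has no members, the other has one
print(len(empty), len(first_born), len(pair))   # prints: 0 1 2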

Before proceeding, it is good to be assured that the empty set is unique. It is so! For if there were two empty sets they would have exactly the same members — none!

And we are finding multiplicity: { } has no members, { { } } has one, and { { }, { { } } } has two distinct members. There is a recursion here. Let S(X) = X ∪ { X } be the set obtained from a set X by making X itself a member. This yields a new set S(X). For example S({ }) = {{}} and S({{}}) = {{},{{}}} and so on. The sequence of sets {}, S({}), S(S({})), … gives sets with 0, 1, 2, 3, … members, and the natural numbers are born. So it goes, all from nothing. Every mathematical structure that we know can be defined in a few lines as a creation from the empty set. And so we are brought inescapably to the conclusion that the Universe is constructed from nothing but emptiness.
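The recursion can be written out the same way. Here is a small sketch of my own, in which the helper successor implements S(X) = X ∪ {X}:

# S(X) = X U {X}: the set obtained from X by making X itself a member.
def successor(x: frozenset) -> frozenset:
    return x | frozenset({x})
number = frozenset()           # start from the empty set
for n in range(5):
    assert len(number) == n    # the n-th set in the sequence has exactly n members
    number = successor(number)
print("0, 1, 2, 3, ... all from nothing")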

The notion that the Universe comes from nothing is echoed in other traditions, but the sense of that emptiness, that Nothing, is quite different from the mathematical nothing of the empty set. This becomes apparent as soon as we inquire just how did we get that empty set at the beginning of the mathematical process?

Look again. The empty set was represented by us as a container with no contents {  }. I grant you that we used language and the definition of set equality to show that {  } is unique, and so we could have notated it any other convenient way, but this is to assume that mathematics is ‘just’ what comes from some set of rules. As soon as there are rules, we get to ask: how do these rules fit a context, where do they come from, how do we know how to follow them? In the case of the empty set we needed a concept of emptiness, and now we are in an open realm for discussion.

If you want to say that the Universe IS mathematics, you will have to explain what is mathematics and you will find that you cannot explain and understand even the simplest mathematics without looking into the context in which it occurs. If you believe that this context is itself more mathematics, then you believe that mathematics is fundamentally circular in nature, and wraps around itself to produce itself. I am willing to believe this! I can even demonstrate it with a bit of mathematics. Let G be the following operator. When G meets an entity X, it produces two copies of X and places them in brackets.

GX = {XX}

For example GA = {AA}. Pretty innocuous!

But GG = {GG}. So GG sits inside itself and by instruction will fall down the rabbit hole in an endless recursion:

GG = {GG} = {{GG}} = {{{GG}}} = {{{{GG}}}} ={{{{{GG}}}}}={{{{{{GG}}}}}} = …

The mathematical entity GG wraps right around itself.  Just so does our language and apparent existence wrap around itself and give us the possibility that we are ‘nothing more’ than our own description of our own description, a kind of illusion that generates its own illusion.
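You can watch GG fall down the rabbit hole with a few lines of symbolic rewriting. This is only an illustrative sketch of my own: the string "GG" and the curly brackets stand in for the operator and its bracketing, and apply_G_rule is a helper name I have made up for applying GX = {XX} once.

# GX = {XX}: wherever "GG" occurs, wrap it in one more layer of brackets.
def apply_G_rule(expression: str) -> str:
    return expression.replace("GG", "{GG}", 1)
expression = "GG"
for _ in range(5):
    expression = apply_G_rule(expression)
    print(expression)
# prints {GG}, {{GG}}, {{{GG}}}, {{{{GG}}}}, {{{{{GG}}}}}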

To see how we are propelled into the imaginary, define another operator R by RX = ~XX, the Russell Operator (after Bertrand Russell), and INTERPRET XY as “Y is a member of X”. Then the equation

RX = ~XX

is the statement that “X is a member of R exactly when X is not a member of X.” And we have the curious fixed point of negation

RR = ~RR.

In the Universe that expands from the Russell Operator, there are imaginary logical values, neither true nor false. RR is such a value and, being a mathematical construct, it exists in the Russell Universe. Perhaps you do not want to live in the Russell Universe; then, being a Form of Mathematics, you can devise rules to follow (Mathematics following the rules that it creates in order to follow its own rules) that will avoid the Jubjub bird and shun the frumious Bandersnatch.

In this note I have pointed out that if you take the Universe to be Mathematics, then you have to face that “it” is generated from emptiness and that this emptiness is very full in the capacity to wrap right round itself and seem to be inside and outside itself at the same time. If the Universe is Mathematics, then Mathematics may not be what she seemed to be. We are down the rabbit hole.

The Matrix

Blinker

Here is a natural iterant for you, blinking on and off in relation to the locus of your perception.

Let’s recall how we got from …+_+_+_+_… to the square root of minus one. We said ii = -1 implies that i = -1/i, and so if a value feeds back through -1, it will keep flipping its value: 1 -> -1 -> +1 -> … But then we were impertinent enough to ask if there was a way to get the algebraic statement ii = -1 out of that. The answer was to let i = [1,-1]\eta where \eta is a phase shifter or time shifter, so that [1,-1]\eta = \eta[-1,1]. Remember that

\eta\eta = 1.

Then i is temporally sensitive and we have that

ii = [1,-1]\eta[1,-1]\eta = [1,-1][-1,1]\eta\eta = [-1,-1]1 = -1

with the convention that [a,b][c,d] = [ac,bd]. Natural enough. You might say that

THE SQUARE ROOT OF MINUS ONE IS A CLOCK!
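Here is a minimal sketch of my own that carries out this clock arithmetic, storing an iterant term [a,b] times a power of \eta as a pair together with that power:

# An iterant term [a,b] eta^p is stored as ((a, b), p) with p in {0, 1}.
def iterant_multiply(term1, term2):
    (a, b), p = term1
    (c, d), q = term2
    if p == 1:              # eta passes across [c,d] and swaps it: eta[c,d] = [d,c]eta
        c, d = d, c
    return ((a * c, b * d), (p + q) % 2)
i = ((1, -1), 1)            # i = [1,-1]eta
print(iterant_multiply(i, i))   # ((-1, -1), 0), the constant iterant -1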

The strange and marvellous thing is that this is going to lead us straight into the Matrix, or rather into Matrix Algebra.

THE MATRIX

Matrix algebra has some strange wisdom built into its very bones. Consider a two dimensional periodic pattern or “waveform.”

\displaystyle ......................

\displaystyle ...abababababababab...

\displaystyle ...cdcdcdcdcdcdcdcd...

\displaystyle ...abababababababab...

\displaystyle ...cdcdcdcdcdcdcdcd...

\displaystyle ...abababababababab...

\displaystyle ......................

\displaystyle \left(\begin{array}{cc} a&b\\ c&d \end{array}\right), \left(\begin{array}{cc} b&a\\ d&c \end{array}\right), \left(\begin{array}{cc} c&d\\ a&b \end{array}\right), \left(\begin{array}{cc} d&c\\ b&a \end{array}\right)

Above are some of the matrices apparent in this array. Compare these matrices with the “two dimensional waveform” shown above. A given matrix freezes out a way to view the infinite waveform. In order to keep track of this patterning, let’s write
\displaystyle [a,b] + [c,d]\eta = \left(\begin{array}{cc} a&c\\ d&b \end{array}\right).

where

\displaystyle [x,y] = \left(\begin{array}{cc} x&0\\ 0&y \end{array}\right).

and

\displaystyle \eta = \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right).

Recall the definition of matrix multiplication.

\displaystyle \left(\begin{array}{cc} a&c\\ d&b \end{array}\right) \left(\begin{array}{cc} e&g\\ h&f \end{array}\right) = \left(\begin{array}{cc} ae+ch&ag+cf\\ de+bh&dg+bf \end{array}\right).

Compare this with the iterant multiplication.
\displaystyle ([a,b] + [c,d]\eta)([e,f]+[g,h]\eta) =

\displaystyle [a,b][e,f] + [c,d]\eta[g,h]\eta + [a,b][g,h]\eta + [c,d]\eta[e,f] =

\displaystyle [ae,bf] + [c,d][h,g] +( [ag, bh] + [c,d][f,e])\eta =

\displaystyle [ae,bf] +[ch,dg] + ( [ag, bh] + [cf,de])\eta =

\displaystyle [ae+ch, dg+bf] + [ag + cf, de+bh]\eta.
Thus iterant multiplication is the same as matrix multiplication. If you had not learned matrix multiplication first, it could be motivated by iterant multiplication.
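As a check of my own (not part of the argument above), here is a short sketch that verifies on random entries that the iterant product agrees with 2 x 2 matrix multiplication, under the correspondence that sends [a,b] + [c,d]\eta to the matrix with rows (a, c) and (d, b):

import random
def matrix_multiply(M, N):   # 2 x 2 product, matrices stored as ((row one), (row two))
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))
def iterant_to_matrix(A, B):  # [a,b] + [c,d]eta corresponds to rows (a, c) and (d, b)
    (a, b), (c, d) = A, B
    return ((a, c), (d, b))
def iterant_multiply(A, B, C, D):   # ([a,b] + [c,d]eta)([e,f] + [g,h]eta), as derived above
    (a, b), (c, d), (e, f), (g, h) = A, B, C, D
    return (a * e + c * h, b * f + d * g), (a * g + c * f, b * h + d * e)
for _ in range(100):
    A, B, C, D = [tuple(random.randint(-9, 9) for _ in range(2)) for _ in range(4)]
    E, F = iterant_multiply(A, B, C, D)
    assert iterant_to_matrix(E, F) == matrix_multiply(iterant_to_matrix(A, B), iterant_to_matrix(C, D))
print("iterant multiplication agrees with matrix multiplication")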

There is a parallel universe where the mathematicians studied discrete time, and in that universe they discovered matrix algebra from iterants and were later pleasantly surprised to discover that their temporal algebras had a geometric interpretation. Of course they discovered Julia and Mandelbrot sets long before they solved the quadratic equation by radicals, but that is all another story. We shall need a Tardis to get to their universe, and it is entirely possible that time runs backwards there relative to ours.

 
The four matrices that can be framed in the two-dimensional wave form are all obtained from the two iterants {[a,d]} and {[b,c]} via the shift operation {\eta [x,y] = [y,x] \eta} which we shall denote by an overbar as shown below
\displaystyle \overline{[x,y]} = [y,x].

Letting {A = [a,d]} and {B=[b,c]}, we see that the four matrices seen in the grid are
\displaystyle A + B \eta, B + A \eta, \overline{B} + \overline{A}\eta, \overline{A} + \overline{B}\eta.

The operator {\eta} has the effect of rotating an iterant by ninety degrees in the formal plane. Ordinary matrix multiplication can be written in a concise form using the following rules:

\displaystyle \eta \eta = 1

\displaystyle \eta Q = \overline{Q} \eta

where Q is any two element iterant. Note the correspondence
\displaystyle \left(\begin{array}{cc} a&b\\ c&d \end{array}\right) = \left(\begin{array}{cc} a&0\\ 0&d \end{array}\right) \left(\begin{array}{cc} 1&0\\ 0&1 \end{array}\right) + \left(\begin{array}{cc} b&0\\ 0&c \end{array}\right) \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right) = [a,d]1 + [b,c]\eta.

This means that {[a,d]} corresponds to a diagonal matrix.
\displaystyle [a,d] = \left(\begin{array}{cc} a&0\\ 0&d \end{array}\right),

{\eta} corresponds to the anti-diagonal permutation matrix.
\displaystyle \eta = \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right),

and {[b,c]\eta} corresponds to the product of a diagonal matrix and the permutation matrix.
\displaystyle [b,c]\eta = \left(\begin{array}{cc} b&0\\ 0&c \end{array}\right) \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right) = \left(\begin{array}{cc} 0&b\\ c&0 \end{array}\right).

The fact that the iterant expression { [a,d]1 + [b,c]\eta} captures the whole of {2 \times 2} matrix algebra corresponds to the fact that a two by two matrix is combinatorially the union of the identity pattern (the diagonal) and the interchange pattern (the antidiagonal) that correspond to the operators {1} and {\eta.}

\displaystyle \left(\begin{array}{cc} *&@\\ @ & *\\ \end{array}\right)

In the formal diagram for a matrix shown above, we indicate the diagonal by {*} and the anti-diagonal by {@.}
In the case of complex numbers we represent

\displaystyle \left(\begin{array}{cc} a&b\\ -b&a \end{array}\right) = [a,a] + [b,-b]\eta = a1 + b[1,-1]\eta = a + bi.

In this way, we see that all of {2 \times 2} matrix algebra is a hypercomplex number system based on the symmetric group {S_{2}.} We will now see how to generalize this point of view to arbitrary finite groups.

We have reconstructed the square root of minus one, with {\epsilon = [1,-1]}, in the form of the matrix
\displaystyle i = \epsilon \eta = [1,-1]\eta =\left(\begin{array}{cc} 0&1\\ -1&0 \end{array}\right).
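A quick numerical check of my own of this reconstruction, and of the complex numbers sitting inside the 2 x 2 matrices:

def matrix_multiply(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))
i_matrix = ((0, 1), (-1, 0))     # i = [1,-1]eta written as a matrix
assert matrix_multiply(i_matrix, i_matrix) == ((-1, 0), (0, -1))     # ii = -1
def as_matrix(a, b):             # a + bi corresponds to [a,a] + [b,-b]eta, rows (a, b) and (-b, a)
    return ((a, b), (-b, a))
# (2 + 3i)(4 + 5i) = -7 + 22i, and the matrices agree:
assert matrix_multiply(as_matrix(2, 3), as_matrix(4, 5)) == as_matrix(-7, 22)
print("complex arithmetic is 2 x 2 matrix arithmetic")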

More generally, we see that
\displaystyle (A + B\eta)(C+D\eta) = (AC+B\overline{D}) + (AD + B\overline{C})\eta

writing the {2 \times 2} matrix algebra as a system of hypercomplex numbers. Note that

\displaystyle (A+B\eta)(\overline{A}-B\eta) = A\overline{A} - B\overline{B}

The formula on the right corresponds to the determinant of the matrix. Thus we define the conjugate of {A+B\eta} by the formula
\displaystyle \overline{A+B\eta} = \overline{A} - B\eta.
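Here is a small sketch of my own that verifies, using the hypercomplex product above, that (A + B\eta) times its conjugate comes out as the determinant ad - bc in the form of a constant iterant:

def times(P, Q):        # componentwise iterant product [p1,p2][q1,q2] = [p1 q1, p2 q2]
    return (P[0] * Q[0], P[1] * Q[1])
def bar(P):             # the shift: overline of [x,y] is [y,x]
    return (P[1], P[0])
def hyper_multiply(A, B, C, D):   # (A + B eta)(C + D eta) = (AC + B Dbar) + (AD + B Cbar) eta
    plus = lambda P, Q: (P[0] + Q[0], P[1] + Q[1])
    return plus(times(A, C), times(B, bar(D))), plus(times(A, D), times(B, bar(C)))
A, B = (3, 7), (2, 5)             # the matrix with rows (3, 2) and (5, 7); determinant 3*7 - 2*5 = 11
print(hyper_multiply(A, B, bar(A), (-B[0], -B[1])))   # ((11, 11), (0, 0)): det times 1, no eta part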

These patterns generalize to higher dimensional matrix algebra. It is worth pointing out the first precursor to the quaternions (the so-called split quaternions): this precursor is the system
\displaystyle \{\pm{1}, \pm{\epsilon}, \pm{\eta}, \pm{i}\}.

Here {\epsilon\epsilon = 1 = \eta\eta} while {i=\epsilon \eta} so that {ii = -1}. The basic operations in this algebra are those of epsilon and eta. Eta is the delay shift operator that reverses the components of the iterant. Epsilon negates one of the components, and leaves the order unchanged. The quaternions arise directly from these two operations once we construct an extra square root of minus one that commutes with them. Call this extra root of minus one {\sqrt{-1}}. Then the quaternions are generated by
\displaystyle I=\sqrt{-1}\epsilon, J= \epsilon \eta, K= \sqrt{-1}\eta

with
\displaystyle I^{2} = J^{2}=K^{2}=IJK=-1.

The “right” way to generate the quaternions is to start at the bottom iterant level with boolean values of 0 and 1 and the operation EXOR (exclusive or). Build iterants on this, and matrix algebra from these iterants. This gives the square root of negation. Now take pairs of values from this new algebra and build {2 \times 2} matrices again. The coefficients include square roots of negation that commute with constructions at the next level and so quaternions appear in the third level of this hierarchy. We will return to the quaternions after discussing other examples that involve matrices of all sizes.
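Here is a sketch of my own that checks the quaternion relations, with Python’s complex unit 1j standing in for the extra commuting square root of minus one:

def matrix_multiply(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))
def scale(s, M):
    return tuple(tuple(s * entry for entry in row) for row in M)
root = 1j                        # the extra, commuting square root of minus one
epsilon = ((1, 0), (0, -1))      # epsilon = [1,-1]
eta = ((0, 1), (1, 0))           # the shift operator
I = scale(root, epsilon)         # I = sqrt(-1) epsilon
J = matrix_multiply(epsilon, eta)    # J = epsilon eta
K = scale(root, eta)             # K = sqrt(-1) eta
minus_one = ((-1, 0), (0, -1))
assert matrix_multiply(I, I) == minus_one
assert matrix_multiply(J, J) == minus_one
assert matrix_multiply(K, K) == minus_one
assert matrix_multiply(matrix_multiply(I, J), K) == minus_one
print("I^2 = J^2 = K^2 = IJK = -1")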

Later we will see that Latin Square Patterns such as the one below are the key to the relationships of Group Theory and Iterant Algebras.

\displaystyle \left(\begin{array}{cccccc} 1& \Delta&\Theta&\Psi& \Omega&\Sigma\\ \Theta&1& \Delta&\Sigma&\Psi& \Omega\\ \Delta&\Theta&1& \Omega&\Sigma&\Psi\\ \Psi&\Sigma& \Omega&1& \Theta&\Delta\\ \Omega&\Psi&\Sigma& \Delta&1&\Theta\\ \Sigma& \Omega&\Psi&\Theta& \Delta&1\\ \end{array}\right).

Iterants, Imaginaries and Matrices

Recall the square root of minus one, usually denoted by i, with i x i = -1. Since no real number has a negative square, this number was originally called imaginary, but it was used and explored from the 1500s onward, even though no one understood its meaning. Then around 1800, Gauss and others realized that this imaginary number had the very natural interpretation as a rotation of the Euclidean plane by 90 degrees around the origin! Thus i x 1 = i will be a point one unit above the real line, with a perpendicular drop to the origin of the real line. And as you can see, i x i will come back to the line at exactly -1, a 180 degree turn from 1 itself. This interpretation flowered into the whole subject of complex analysis and changed mathematics and her applications forever. And yet … there is another interpretation to come.

In 1969 George Spencer-Brown published his ground-breaking book ‘Laws of Form’ and in it suggested that there should be imaginary values in logic analogous to the square root of minus one in mathematics. He pointed out that the paradoxical logical value x such that x = ~x (x is its own negation: x is true if and only if x is false, and so it can be neither true nor false) could be interpreted as such an imaginary value. He further pointed out that the solution to the paradox inherent in x = ~x lies in the temporal dimension. Think of x as a little circuit with feedback. When it is on, the feedback turns it off. When it is off, the feedback turns it on. In time the little circuit oscillates between on and off. Its behaviour is

…off on off on off on off on off on off on off on off …
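A tiny sketch of my own of that little circuit: the state feeds back through negation, and in time it can only blink.

state = False                  # the circuit is off
history = []
for tick in range(8):
    history.append("on" if state else "off")
    state = not state          # the feedback negates the state at each tick
print(" ".join(history))       # off on off on off on off on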

I shall take this notion and find a temporal interpretation of the square root of minus one. (I found this interpretation around 1980, and have published it in various forms, but am returning to it here in a new light.) Then i x i = -1 can be rewritten as i = -1/i, and this is as paradoxical as the imaginary logical value x. If i = 1, then it is equal to -1/1 = -1. If i = -1, then it is equal to -1/-1 = +1. We can interpret i as an oscillation!

…+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+…

But if you do this, where do we get i x i = -1?  Read on.

When you observe the oscillation between plus one and minus one, you tend to see it as [+1,-1] repeated, or as [-1,+1] repeated. Let’s call these two views of the oscillation the ‘iterants’ associated with it. Now I shall introduce a ‘temporal shift operator’ S with the property that [a,b]S = S[b,a] and SS = 1. Putting S in an expression makes it temporally sensitive, so that when you interact with it, the time shifts by one tick of the clock, and where there was b in …ababab… there is now a, and where there was a in …ababab… there is now b. Now we can define

i = [-1,+1]S   and   -i = [+1, -1]S.

We assume that [a,b][c,d] = [ac,bd] so that [1,1] = 1 and [-1,-1] = -1.

Then

i x i = [-1,+1]S[-1,+1]S = [-1,+1][+1,-1] SS = [-1, -1] = -1.

We have recovered the square root of minus one and found a temporal interpretation for its behavior. When i interacts with itself it causes a minute temporal shift of the underlying waveform and it is this shift of phase that produces the negative one.

One can see that the collection of iterants of the form

[a,b] + [c,d]S

where a,b,c,d run over all real numbers, has exactly the structure of  two by two matrix algebra (next installment).

In the next installment we shall see how all of matrix algebra arises naturally from waveforms of arbitrary finite period.

To give a hint, consider sequences of period three.

… abcabcabcabcabcabcabcabcabcabcabcabcabcabcabc…

There are three types of iterant in relation to views of this waveform:

[a,b,c]   and   [b,c,a]   and   [c,a,b].

The appropriate operator T has order three, TTT=1.

And we have [x,y,z]T = T[y,z,x].  One tick of the clock shifts the wave by one third of its period.

Three by three matrix algebra corresponds to iterants of the type

[a,b,c] + [d,e,f]T + [g,h,k]TT.
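To make the hint concrete, here is a sketch of my own for period three, taking T to be the cyclic permutation matrix that satisfies TTT = 1 and [x,y,z]T = T[y,z,x]:

def matmul(M, N):
    n = len(M)
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)) for i in range(n))
def matadd(M, N):
    return tuple(tuple(m + x for m, x in zip(rm, rx)) for rm, rx in zip(M, N))
def diag(x, y, z):
    return ((x, 0, 0), (0, y, 0), (0, 0, z))
T = ((0, 0, 1), (1, 0, 0), (0, 1, 0))                  # the cyclic shift
assert matmul(matmul(T, T), T) == diag(1, 1, 1)        # TTT = 1
assert matmul(diag(2, 3, 5), T) == matmul(T, diag(3, 5, 2))    # [x,y,z]T = T[y,z,x]
a, b, c, d, e, f, g, h, k = range(1, 10)
M = matadd(diag(a, b, c), matadd(matmul(diag(d, e, f), T), matmul(diag(g, h, k), matmul(T, T))))
for row in M:
    print(row)      # every one of the nine matrix positions is filled exactly once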

Perhaps you hope for more than cyclic symmetries. That occurs as well, and all finite groups fit into this picture.