# The Matrix

Here is a natural iterant for you, blinking on and off in relation to the locus of your perception.

Let's recall how we got from $\ldots+-+-+-+-\ldots$ to the square root of minus one. We said that $ii = -1$ implies $i = -1/i$, so if a value feeds back through $-1$, it will keep flipping its value: $1 \rightarrow -1 \rightarrow +1 \rightarrow \cdots$ But then we were impertinent enough to ask if there was a way to get the algebraic statement $ii = -1$ out of that. The answer was to let $i = [1,-1]\eta$, where $\eta$ is a phase shifter or time shifter, so that $[1,-1]\eta = \eta[-1,1]$. Remember that $\eta\eta = 1$.

Then i is temporally sensitive and we have that

$\displaystyle ii = [1,-1]\eta[1,-1]\eta = [1,-1][-1,1]\eta\eta = [-1,-1]1 = -1$

with the convention that [a,b][c,d] = [ac,bd]. Natural enough. You might say that

THE SQUARE ROOT OF MINUS ONE IS A CLOCK!
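The clock can be checked mechanically. Here is a minimal Python sketch (the pair-plus-flag encoding of an iterant term is an illustrative choice, not notation from the post):

```python
# Iterant clock: a term is (pair, has_eta) -- e.g. i = [1,-1] eta.
# Rules used: eta [x, y] = [y, x] eta, and eta eta = 1.

def mult(t1, t2):
    (a, b), e1 = t1
    (c, d), e2 = t2
    if e1:                 # slide the trailing eta past [c, d], swapping it
        c, d = d, c
    return ((a * c, b * d), e1 != e2)   # two etas cancel

i = ((1, -1), True)        # i = [1, -1] eta
print(mult(i, i))          # -> ((-1, -1), False), i.e. the iterant -1
```

The boolean flag records whether an $\eta$ trails the pair; sliding an $\eta$ past a pair swaps the pair's entries, and two $\eta$'s cancel.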

The strange and marvellous thing is that this is going to lead us straight into the Matrix, or rather into Matrix Algebra.

THE MATRIX

Matrix algebra has some strange wisdom built into its very bones. Consider a two-dimensional periodic pattern or “waveform.”

$\displaystyle \begin{array}{c} ......................\\ ...abababababababab...\\ ...cdcdcdcdcdcdcdcd...\\ ...abababababababab...\\ ...cdcdcdcdcdcdcdcd...\\ ...abababababababab...\\ ...................... \end{array}$

$\displaystyle \left(\begin{array}{cc} a&b\\ c&d \end{array}\right), \left(\begin{array}{cc} b&a\\ d&c \end{array}\right), \left(\begin{array}{cc} c&d\\ a&b \end{array}\right), \left(\begin{array}{cc} d&c\\ b&a \end{array}\right)$

Above are some of the matrices apparent in this array. Compare each matrix with the “two-dimensional waveform” shown above: a given matrix freezes out one way to view the infinite waveform. In order to keep track of this patterning, let's write $\displaystyle [a,b] + [c,d]\eta = \left(\begin{array}{cc} a&c\\ d&b \end{array}\right),$

where $\displaystyle [x,y] = \left(\begin{array}{cc} x&0\\ 0&y \end{array}\right).$

and $\displaystyle \eta = \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right).$

Recall the definition of matrix multiplication. $\displaystyle \left(\begin{array}{cc} a&c\\ d&b \end{array}\right) \left(\begin{array}{cc} e&g\\ h&f \end{array}\right) = \left(\begin{array}{cc} ae+ch&ag+cf\\ de+bh&dg+bf \end{array}\right).$

Compare this with the iterant multiplication. $\displaystyle ([a,b] + [c,d]\eta)([e,f]+[g,h]\eta) =$ $\displaystyle [a,b][e,f] + [c,d]\eta[g,h]\eta + [a,b][g,h]\eta + [c,d]\eta[e,f] =$ $\displaystyle [ae,bf] + [c,d][h,g] +( [ag, bh] + [c,d][f,e])\eta =$ $\displaystyle [ae,bf] +[ch,dg] + ( [ag, bh] + [cf,de])\eta =$ $\displaystyle [ae+ch, dg+bf] + [ag + cf, de+bh]\eta.$
Thus iterant multiplication is the same as matrix multiplication. If you had not learned matrix multiplication first, it could be motivated by iterant multiplication.
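The agreement can be spot-checked numerically. A small Python sketch (all function names here are illustrative):

```python
# Spot-check: iterant multiplication of [a,b] + [c,d] eta agrees with
# 2x2 matrix multiplication under the correspondence in the post.

def to_matrix(A, C):
    """[a,b] + [c,d] eta  ->  [[a, c], [d, b]] (diagonal plus antidiagonal)."""
    (a, b), (c, d) = A, C
    return [[a, c], [d, b]]

def matmul(M, N):
    """Ordinary 2x2 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def iterant_mul(A, C, E, G):
    """([a,b]+[c,d]eta)([e,f]+[g,h]eta) = [ae+ch, dg+bf] + [ag+cf, de+bh] eta."""
    (a, b), (c, d), (e, f), (g, h) = A, C, E, G
    return (a * e + c * h, d * g + b * f), (a * g + c * f, d * e + b * h)

A, C, E, G = (2, 3), (5, 7), (11, 13), (17, 19)
P, Q = iterant_mul(A, C, E, G)
assert to_matrix(P, Q) == matmul(to_matrix(A, C), to_matrix(E, G))
```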

There is a parallel universe where the mathematicians studied discrete time, and in that universe they discovered matrix algebra from iterants and were later pleasantly surprised to discover that their temporal algebras had a geometric interpretation. Of course they discovered Julia and Mandelbrot sets long before they solved the quadratic equation by radicals, but that is all another story. We shall need a Tardis to get to their universe, and it is entirely possible that time runs backwards there relative to ours.

The four matrices that can be framed in the two-dimensional waveform are all obtained from the two iterants ${[a,d]}$ and ${[b,c]}$ via the shift operation ${\eta [x,y] = [y,x] \eta}$, which we shall denote by an overbar as shown below: $\displaystyle \overline{[x,y]} = [y,x].$

Letting ${A = [a,d]}$ and ${B=[b,c]}$, we see that the four matrices seen in the grid are $\displaystyle A + B \eta, B + A \eta, \overline{B} + \overline{A}\eta, \overline{A} + \overline{B}\eta.$

The operator ${\eta}$ has the effect of rotating an iterant by ninety degrees in the formal plane. Ordinary matrix multiplication can be written in a concise form using the following rules: $\displaystyle \eta \eta = 1$ $\displaystyle \eta Q = \overline{Q} \eta$

where Q is any two element iterant. Note the correspondence $\displaystyle \left(\begin{array}{cc} a&b\\ c&d \end{array}\right) = \left(\begin{array}{cc} a&0\\ 0&d \end{array}\right) \left(\begin{array}{cc} 1&0\\ 0&1 \end{array}\right) + \left(\begin{array}{cc} b&0\\ 0&c \end{array}\right) \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right) = [a,d]1 + [b,c]\eta.$

This means that ${[a,d]}$ corresponds to a diagonal matrix. $\displaystyle [a,d] = \left(\begin{array}{cc} a&0\\ 0&d \end{array}\right),$ ${\eta}$ corresponds to the anti-diagonal permutation matrix. $\displaystyle \eta = \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right),$

and ${[b,c]\eta}$ corresponds to the product of a diagonal matrix and the permutation matrix. $\displaystyle [b,c]\eta = \left(\begin{array}{cc} b&0\\ 0&c \end{array}\right) \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right) = \left(\begin{array}{cc} 0&b\\ c&0 \end{array}\right).$

The fact that the iterant expression ${ [a,d]1 + [b,c]\eta}$ captures the whole of ${2 \times 2}$ matrix algebra corresponds to the fact that a two by two matrix is combinatorially the union of the identity pattern (the diagonal) and the interchange pattern (the antidiagonal) that correspond to the operators ${1}$ and ${\eta.}$ $\displaystyle \left(\begin{array}{cc} *&@\\ @ & *\\ \end{array}\right)$

In the formal diagram for a matrix shown above, we indicate the diagonal by ${*}$ and the anti-diagonal by ${@.}$
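This decomposition is easy to exhibit in code. A minimal Python sketch (function names illustrative) splitting a ${2 \times 2}$ matrix into its diagonal and antidiagonal iterants:

```python
# Split a 2x2 matrix into its diagonal iterant [a,d] and its
# antidiagonal iterant [b,c], so that M = [a,d]*1 + [b,c]*eta.

def decompose(M):
    (a, b), (c, d) = M
    return (a, d), (b, c)

def recompose(D, A):
    """Rebuild [[a, b], [c, d]] from the iterant pair ([a,d], [b,c])."""
    (a, d), (b, c) = D, A
    return [[a, b], [c, d]]

M = [[1, 2], [3, 4]]
assert recompose(*decompose(M)) == M
print(decompose(M))   # the * (diagonal) part and the @ (antidiagonal) part
```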
In the case of complex numbers we represent $\displaystyle \left(\begin{array}{cc} a&b\\ -b&a \end{array}\right) = [a,a] + [b,-b]\eta = a1 + b[1,-1]\eta = a + bi.$
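This representation can be verified directly: the matrix product reproduces complex multiplication. A short Python check, assuming the $a + bi \mapsto$ matrix encoding just given:

```python
# a + b i  <->  [[a, b], [-b, a]]: matrix product reproduces complex product.

def as_matrix(z):
    a, b = z.real, z.imag
    return [[a, b], [-b, a]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, 1 - 4j
assert matmul(as_matrix(z), as_matrix(w)) == as_matrix(z * w)
```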

In this way, we see that all of ${2 \times 2}$ matrix algebra is a hypercomplex number system based on the symmetric group ${S_{2}.}$ We will now see how to generalize this point of view to arbitrary finite groups.

We have reconstructed the square root of minus one in the form of the matrix $\displaystyle i = \epsilon \eta = [1,-1]\eta =\left(\begin{array}{cc} 0&1\\ -1&0 \end{array}\right),$ where $\epsilon = [1,-1]$.

More generally, we see that $\displaystyle (A + B\eta)(C+D\eta) = (AC+B\overline{D}) + (AD + B\overline{C})\eta$

writing the ${2 \times 2}$ matrix algebra as a system of hypercomplex numbers. Note that $\displaystyle (A+B\eta)(\overline{A}-B\eta) = A\overline{A} - B\overline{B}$

The formula on the right corresponds to the determinant of the matrix. Thus we define the conjugate of ${A+B\eta}$ by the formula $\displaystyle \overline{A+B\eta} = \overline{A} - B\eta.$
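The identification of ${A\overline{A} - B\overline{B}}$ with the determinant can be checked on a concrete example. A short Python sketch (variable names illustrative):

```python
# Check that (A + B eta)(Abar - B eta) = A*Abar - B*Bbar is the scalar
# iterant [det M, det M] for M = [[a, b], [c, d]], A = [a,d], B = [b,c].

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

a, b, c, d = 2, 3, 5, 7
A, B = (a, d), (b, c)
Abar, Bbar = (d, a), (c, b)                # overbar swaps the components
norm = (A[0] * Abar[0] - B[0] * Bbar[0],
        A[1] * Abar[1] - B[1] * Bbar[1])   # componentwise A*Abar - B*Bbar
assert norm == (det([[a, b], [c, d]]),) * 2
print(norm)   # both components equal ad - bc
```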

These patterns generalize to higher-dimensional matrix algebra. It is worth pointing out the first precursor to the quaternions (the so-called split quaternions): This precursor is the system $\displaystyle \{\pm{1}, \pm{\epsilon}, \pm{\eta}, \pm{i}\}.$

Here ${\epsilon\epsilon = 1 = \eta\eta}$ while ${i=\epsilon \eta}$ so that ${ii = -1}$. The basic operations in this algebra are those of epsilon and eta. Eta is the delay shift operator that reverses the components of the iterant. Epsilon negates one of the components, and leaves the order unchanged. The quaternions arise directly from these two operations once we construct an extra square root of minus one that commutes with them. Call this extra root of minus one ${\sqrt{-1}}$. Then the quaternions are generated by $\displaystyle I=\sqrt{-1}\epsilon, \; J= \epsilon \eta, \; K= \sqrt{-1}\eta$

with $\displaystyle I^{2} = J^{2}=K^{2}=IJK=-1.$
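These relations can be verified concretely by modeling ${\epsilon}$ and ${\eta}$ as ${2 \times 2}$ matrices and ${\sqrt{-1}}$ as the commuting scalar `1j`. A short Python check:

```python
# Verify I^2 = J^2 = K^2 = IJK = -1 with epsilon = diag(1, -1),
# eta = the swap matrix, and sqrt(-1) = the commuting scalar 1j.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

eps = [[1, 0], [0, -1]]
eta = [[0, 1], [1, 0]]
I = [[1j * x for x in row] for row in eps]   # sqrt(-1) * epsilon
J = matmul(eps, eta)                         # epsilon * eta
K = [[1j * x for x in row] for row in eta]   # sqrt(-1) * eta

minus1 = [[-1, 0], [0, -1]]
for M in (matmul(I, I), matmul(J, J), matmul(K, K), matmul(matmul(I, J), K)):
    assert M == minus1
```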

The “right” way to generate the quaternions is to start at the bottom iterant level with boolean values of 0 and 1 and the operation EXOR (exclusive or). Build iterants on this, and matrix algebra from these iterants. This gives the square root of negation. Now take pairs of values from this new algebra and build ${2 \times 2}$ matrices again. The coefficients include square roots of negation that commute with constructions at the next level and so quaternions appear in the third level of this hierarchy. We will return to the quaternions after discussing other examples that involve matrices of all sizes.

Later we will see that Latin Square Patterns such as the one below are the key to the relationships of Group Theory and Iterant Algebras. $\displaystyle \left(\begin{array}{cccccc} 1& \Delta&\Theta&\Psi& \Omega&\Sigma\\ \Theta&1& \Delta&\Sigma&\Psi& \Omega\\ \Delta&\Theta&1& \Omega&\Sigma&\Psi\\ \Psi&\Sigma& \Omega&1& \Theta&\Delta\\ \Omega&\Psi&\Sigma& \Delta&1&\Theta\\ \Sigma& \Omega&\Psi&\Theta& \Delta&1\\ \end{array}\right).$

## 5 thoughts on “The Matrix”

1. chorasimilarity says:

Dear Louis, looks great! I have a question about the last matrix from the post, what is it about?

2. loukauffman says:

Dear Marius,
The last matrix will be explained in the next post. It shows how to write any 6 x 6 matrix as a sum of diagonal matrices multiplied by permutation matrices. In this case the pattern comes from the permutation group on 3 letters. It is part of a pattern that can unfold for any finite group.
Best,
Lou

1. mandrake2014 says:

Dear Lou
Just checking to see if I’m here or maybe not here, if I am here, thanks for the invite

3. mandrake2014 says:

Oh my gosh Lou, I'm on holiday on the Sunshine Coast in Australia & I've just found pen & paper & looked at your oscillating clock, & it's just shocked me, because it's so profound. Of course it's so obvious but so hidden. On the equator is where the stable/unstable attractor is & it oscillates. All logic is also linked to temporal horizons; this has been shown in experiments by Jaques. I'm sitting here looking at the Pacific Ocean in shock!!!! Best & kind regards, gavin

4. mandrake2014 says:

Lou, what is the frequency of the oscillation, or the angular momentum?