# Iterants, Imaginaries and Matrices

Recall the square root of minus one, usually denoted by i, with i x i = -1. Since no real number has a negative square, this number was originally called imaginary, but it was used and explored from the 1500s onward, even though no one understood its meaning. Then around 1800, Gauss and others realized that this imaginary number has a very natural interpretation as a rotation of the Euclidean plane by 90 degrees about the origin! Thus i x 1 = i is a point one unit above the real line, with a perpendicular drop to the origin of the real line. And as you can see, i x i comes back to the line at exactly -1, a 180 degree turn from 1 itself. This interpretation flowered into the whole subject of complex analysis and changed mathematics and her applications forever. And yet … there is another interpretation to come.

In 1969 George Spencer-Brown published his ground-breaking book ‘Laws of Form’ and in it suggested that there should be imaginary values in logic analogous to the square root of minus one in mathematics. He pointed out that the paradoxical logical value x such that x = ~x (x is its own negation: x is true if and only if x is false, and so can be neither true nor false) could be interpreted as such an imaginary. He further pointed out that the resolution of the paradox inherent in x = ~x lies in the temporal dimension. Think of x as a little circuit with feedback. When it is on, the feedback turns it off. When it is off, the feedback turns it on. In time the little circuit oscillates between on and off. Its behaviour is

…off on off on off on off on off on off on off on off …

I shall take this notion and find a temporal interpretation of the square root of minus one. (I found this interpretation around 1980, and have published it in various forms, but am returning to it here in a new light.) The equation i x i = -1 can be re-written as i = -1/i, and this is as paradoxical as the imaginary logical value x. If i = 1, then it is equal to -1/1 = -1. If i = -1, then it is equal to -1/-1 = +1. We can interpret i as an oscillation!

…+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+_+…

But if you do this, where do we get i x i = -1?  Read on.
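Before answering, the oscillation itself is easy to check. A minimal Python sketch (my own illustration, assuming only that we read i = -1/i as the replacement rule x → -1/x):

```python
# Iterating the replacement x -> -1/x from x = 1.
# The value never settles: it alternates ... +1, -1, +1, -1, ...
x = 1.0
seq = []
for _ in range(8):
    seq.append(x)
    x = -1.0 / x
print(seq)  # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```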

When you observe the oscillation between plus one and minus one, you tend to see it as [+1,-1] repeated, or as [-1,+1] repeated. Let’s call these two views of the oscillation the ‘iterants’ associated with it. Now I shall introduce a ‘temporal shift operator’ S with the property that [a,b]S = S[b,a] and SS = 1. Putting S in an expression makes it temporally sensitive, so that when you interact with it, the time shifts by one tick of the clock: where there was b in …ababab… there is now a, and where there was a, there is now b. Now we can define

i = [-1,+1]S   and   -i = [+1, -1]S.

We assume that [a,b][c,d] = [ac,bd], so that [1,1] = 1 and [-1,-1] = -1.

Then

i x i = [-1,+1]S[-1,+1]S = [-1,+1][+1,-1] SS = [-1, -1] = -1.

We have recovered the square root of minus one and found a temporal interpretation for its behavior. When i interacts with itself it causes a minute temporal shift of the underlying waveform and it is this shift of phase that produces the negative one.
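This calculation can be mechanized. Here is a small Python sketch of the iterant algebra (the representation and function names are my own, not notation from the text): an element v·S^m is stored as a pair (v, m), and the relation [a,b]S = S[b,a] becomes a cyclic shift of the second factor before multiplying componentwise.

```python
def shift(v, k):
    """Cyclically shift the iterant view v by k ticks of the clock."""
    k %= len(v)
    return v[k:] + v[:k]

def mul(x, y):
    """Multiply v*S^m by w*S^n.  Pushing S^m past w via
    S[w0,w1] = [w1,w0]S gives  (v * shift(w, m)) * S^(m+n)."""
    v, m = x
    w, n = y
    return (tuple(a * b for a, b in zip(v, shift(w, m))), (m + n) % len(v))

i = ((-1, 1), 1)          # i = [-1, +1] S
print(mul(i, i))          # ((-1, -1), 0), i.e. the scalar -1
```

The phase shift is visible in the code: `shift(w, m)` is exactly the "tick of the clock" that turns [-1,+1] into [+1,-1] before the pointwise product.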

One can see that the collection of iterants of the form

[a,b] + [c,d]S

where a,b,c,d run over all real numbers, has exactly the structure of  two by two matrix algebra (next installment).
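One explicit way to see this (a sketch; the particular matrix chosen for S is my own, fixed only by SS = 1 and the swap relation): send [a,b] to the diagonal matrix diag(a,b) and S to the swap matrix, so that [a,b] + [c,d]S becomes the general 2x2 matrix.

```python
import numpy as np

S = np.array([[0, 1],
              [1, 0]])        # swap matrix: S @ S = identity

def iterant_to_matrix(ab, cd):
    """Represent [a,b] + [c,d]S as a 2x2 matrix."""
    return np.diag(ab) + np.diag(cd) @ S

I2 = iterant_to_matrix((0, 0), (-1, 1))   # i = [-1, +1] S
print(I2)          # [[ 0 -1]
                   #  [ 1  0]]  -- rotation by 90 degrees
print(I2 @ I2)     # [[-1  0]
                   #  [ 0 -1]]  -- minus the identity
```

Note that the matrix for i is precisely the 90 degree rotation matrix, tying the temporal interpretation back to Gauss’s geometric one.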

In the next installment we shall see how all of matrix algebra arises naturally from waveforms of arbitrary finite period.

To give a hint, consider sequences of period three.

… abcabcabcabcabcabcabcabcabcabcabcabcabcabcabc…

There are three types of iterant in relation to views of this waveform:

[a,b,c]   and   [b,c,a]   and   [c,a,b].

The appropriate operator T has order three, TTT=1.

And we have [x,y,z]T = T[y,z,x].  One tick of the clock shifts the wave by one third of its period.

Three by three matrix algebra corresponds to iterants of the type

[a,b,c] + [d,e,f]T + [g,h,k]TT.
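A sketch of the period-three case (again the concrete matrix for T is my own choice, fixed only up to the relations TTT = 1 and [x,y,z]T = T[y,z,x]):

```python
import numpy as np

# A cyclic permutation matrix of order three plays the role of T.
T = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

# T has order three: T T T = 1.
assert np.array_equal(np.linalg.matrix_power(T, 3), np.eye(3))

# The defining relation [x,y,z] T = T [y,z,x]:
x, y, z = 2, 3, 5
assert np.array_equal(np.diag([x, y, z]) @ T, T @ np.diag([y, z, x]))

# [a,b,c] + [d,e,f]T + [g,h,k]TT fills out a general 3x3 matrix.
M = np.diag([1, 2, 3]) + np.diag([4, 5, 6]) @ T + np.diag([7, 8, 9]) @ T @ T
print(M)
```

Each of the nine entries of M comes from exactly one of the three iterant coefficients, which is why the count works out: 3 + 3 + 3 = 9 parameters for a general 3x3 matrix.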

Perhaps you hope for more than cyclic symmetries. That occurs as well, and all finite groups fit into this picture.

## 7 thoughts on “Iterants, Imaginaries and Matrices”

1. Jon Awbrey says:

You may find some ideas of use in the following paper.

2. loukauffman says:

Thank you, Jon. Do you analyze paradoxes in your paper? As you can see, I am working with the theme that Time and Paradox are mutually related. The logical precursor to i = sqrt(-1) is J = ~J, the ‘tertium non datur’. The temporal interpretation of J = ~J is an alternating temporal sequence of values of True and False.

3. Jon Awbrey says:

Lou,

Let me get some notational matters out of the way before continuing.

I use $\mathbb{B}$ for a generic 2-point set, usually $\{ 0, 1 \}$ and usually but not always interpreted for logic so that $0 = \text{false}$ and $1 = \text{true}.$ I use “teletype” parentheses $\texttt{(} \ldots \texttt{)}$ for negation, so that $\texttt{(} x \texttt{)} = \lnot x$ for $x ~\text{in}~ \mathbb{B}.$ Later on I’ll be using teletype format lists $\texttt{(} x_1 \texttt{,} \ldots \texttt{,} x_k \texttt{)}$ for minimal negation operators.

4. Jon Awbrey says:

Lou,

As long as we’re reading $x$ as a boolean variable $(x \in \mathbb{B})$ the equation $x = \texttt{(} x \texttt{)}$ is not paradoxical but simply false. As an algebraic structure $\mathbb{B}$ can be extended in many ways but that leaves open the question of whether those extensions have any application to logic.

On the other hand, the assignment statement $x := \texttt{(} x \texttt{)}$ makes perfect sense in computational contexts. The effect of the assignment operation on the value of the variable $x$ is commonly expressed in time series notation as $x' = \texttt{(} x \texttt{)}$ and the same change is expressed even more succinctly by defining $\mathrm{d}x = x' - x$ and writing $\mathrm{d}x = 1.$

Now suppose we are observing the time evolution of a system $X$ with a boolean state variable $x : X \to \mathbb{B}$ and what we observe is the following time series: $\begin{array}{c|c} t & x \\ \hline 0 & 0 \\ 1 & 1 \\ 2 & 0 \\ 3 & 1 \\ 4 & 0 \\ 5 & 1 \\ 6 & 0 \\ 7 & 1 \\ 8 & 0 \\ 9 & 1 \\ \ldots & \ldots \end{array}$

Computing the first differences we get: $\begin{array}{c|cc} t & x & \mathrm{d}x \\ \hline 0 & 0 & 1 \\ 1 & 1 & 1 \\ 2 & 0 & 1 \\ 3 & 1 & 1 \\ 4 & 0 & 1 \\ 5 & 1 & 1 \\ 6 & 0 & 1 \\ 7 & 1 & 1 \\ 8 & 0 & 1 \\ 9 & 1 & 1 \\ \ldots & \ldots & \ldots \end{array}$

Computing the second differences we get: $\begin{array}{c|cccc} t & x & \mathrm{d}x & \mathrm{d}^2 x & \ldots \\ \hline 0 & 0 & 1 & 0 & \ldots \\ 1 & 1 & 1 & 0 & \ldots \\ 2 & 0 & 1 & 0 & \ldots \\ 3 & 1 & 1 & 0 & \ldots \\ 4 & 0 & 1 & 0 & \ldots \\ 5 & 1 & 1 & 0 & \ldots \\ 6 & 0 & 1 & 0 & \ldots \\ 7 & 1 & 1 & 0 & \ldots \\ 8 & 0 & 1 & 0 & \ldots \\ 9 & 1 & 1 & 0 & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots \end{array}$

This leads to thinking of the system $X$ as having an extended state $(x, \mathrm{d}x, \mathrm{d}^2 x, \ldots, \mathrm{d}^k x),$ and this additional language gives us the facility of describing state transitions in terms of the various orders of differences. For example, the rule $x' = \texttt{(} x \texttt{)}$ can now be expressed by the rule $\mathrm{d}x = 1.$
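The tables above can be reproduced in a few lines (a Python sketch of the same computation, with differences taken mod 2 so that they stay in $\mathbb{B}$):

```python
# The oscillating boolean series x(t) and its first and second
# differences, with arithmetic mod 2 so values stay in {0, 1}.
xs  = [t % 2 for t in range(10)]
dx  = [(xs[t + 1] - xs[t]) % 2 for t in range(9)]
d2x = [(dx[t + 1] - dx[t]) % 2 for t in range(8)]
print(xs)   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(dx)   # [1, 1, 1, 1, 1, 1, 1, 1, 1]
print(d2x)  # [0, 0, 0, 0, 0, 0, 0, 0]
```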

The following article has a few more examples along these lines:

5. loukauffman says:

Dear Jon,
Thank you. I like to avoid the problem of any iteration being paradoxical by regarding it as a replacement rather than an equality. On the other hand, it is always of interest to see if there is also an interpretation (possibly by extending the language) to having the equality. Thus x = 1 + 1/x demands an extension from rational numbers to real numbers, and x = -1/x demands an extension from real numbers to complex numbers. In fact, iterating x = a + b/x for a and b real with a^2 + 4b < 0 is often interesting, sometimes chaotic, and ‘explained’ by the geometry of complex numbers.
Best,
Lou
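The two extensions Lou mentions can be watched numerically (a Python sketch, my own addition to the exchange): iterating the replacement x → 1 + 1/x from a rational seed converges to the irrational golden ratio, while x → -1/x never converges at all.

```python
# x -> 1 + 1/x converges to the golden ratio (1 + sqrt(5)) / 2,
# an irrational limit forced out of purely rational arithmetic.
x = 1.0
for _ in range(40):
    x = 1.0 + 1.0 / x
print(x)               # approximately 1.6180339887...

# x -> -1/x has no real fixed point: it just oscillates.
y = 1.0
history = [y]
for _ in range(4):
    y = -1.0 / y
    history.append(y)
print(history)         # [1.0, -1.0, 1.0, -1.0, 1.0]
```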

1. mandrake2014 says:

Oh my gosh, Lou, that comment “the sqrt of -1 is a clock” is the most profound comment.