Remarks on Turing and Spencer-Brown
Computation is holographic. Information processing is a formal operation made abstract only by a reduction in the number of free variables, a projective recording which analyzes from all angles the entropy or information contained in a space. Thus, basing my results partly on ’t Hooft’s holographic conjecture for physics (regarding the equivalence of string theory and quantum field theory) and extending Spencer-Brown’s work on algebras of distinction (developed in his Laws of Form), I will sketch the outlines of a new theory of universal computation, based not on system-cybernetic models but on holographic transformations (encoding and projection, or more precisely, fractal differentiation and homogeneous integration).
’t Hooft’s conjecture allows us to extend the Laws of Form with an “interface” model in which computation doesn’t require an observer, only the potentiality of being observed. In other words, all we need is the construction of an interface (a positive feedback system, i.e., an iterative calculation or mutual holographic projection) in order to process information. Light itself can be thought of as encoding information; in particular, electromagnetic waves form a necessary part of holographically recorded information. In other words, to operate in a formal system is to derive information only from interfaces, which are simpler than but in some way equivalent to the “real” objects.
This abstraction is at the heart of Spencer-Brown’s Laws of Form, which describes the fundamental features of any formal system. These formal operations are also representable as holographic operations. Furthermore, since a description of the holographic structure of a process is equivalent to a description of its original form, we ought to be able to understand computation exclusively in terms of holographic operations. We can represent a region of space by a projection onto a holographic surface. The key point is that we lose a dimension but, owing to a fractal mapping, we lose no information. This projective holographic process can continue until we reach a representation (reality?) with no dimensions at all, i.e., pure or manifest information itself (a holomorphic field). This stepwise or iterative movement towards pure information is holographic in essence.
Claude Shannon defined information processing as the conversion of latent, implicit information into manifest information; we will add: into its (dimension-zero) holographic representation. Information processing occurs through holographic projection and encoding: any presented multiplicity can be converted into pure information through some n-dimensional holographic “cascade”. An observer distinguishes spaces; an interface encodes these distinctions, thereby extruding the holographic sub-structure of the universe (the “virtual” information processing occurring in distinguished regions of space). It is in this context that Spencer-Brown provides a unique insight into cybernetics with his analysis of the operation of distinction. In combination with the holographic paradigm for the physical structure of the universe, a correspondingly extended algebra of distinction for the structure of formal systems provides the basis for our claim that information processing can be thought of, at the limit of abstraction, as consisting fundamentally of holographic operations.
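Shannon’s measure of the information made manifest by an observation can be stated concretely. A minimal sketch in Python; the function name and the example distributions are illustrative:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: the average information made manifest
    when one outcome of the distribution is observed."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # a fair coin toss manifests 1.0 bit
print(entropy([1.0]))         # a certain outcome manifests 0.0 bits
print(entropy([0.25] * 4))    # four equiprobable outcomes: 2.0 bits
```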
Holographic Space and Information Processing
(How to move from n to n-1 dimensions.)
All our knowledge is symbolic.
A holographic surface is a two-dimensional representation of a three-dimensional volume of space. It is composed of heterogeneous “perspectives”: any piece of the surface encodes a complete construction of the entire volume from its given vantage point. The importance of this structure is that it is fractal: a hologram reduces the number of dimensions while leaving the presented information intact. Some recent results in quantum physics and string theory suggest that the structure of a hologram, a system which can be extrapolated from its surface, is very much like the structure of the universe. For instance, the AdS/CFT correspondence suggests that string theory and quantum field theory are in fact equivalent languages for describing the same underlying reality. More precisely, a string theory on a given space is equivalent to a quantum field theory without gravity defined on the conformal boundary of that space. Perhaps not so surprisingly, the dimension of the quantum theory is lower than that of the string theory by one or more. For example, there is a duality between Type IIB string theory on a five-dimensional anti-de Sitter space and a supersymmetric Yang-Mills gauge theory on its four-dimensional boundary. The theories are equivalent, but one is simpler: it has fewer dimensions and doesn’t need to discuss gravity.
’t Hooft has shown even more explicitly that the limit of any gauge theory (with a large enough number of colors) is a version of string theory — despite the fact that the gauge theory doesn’t appear to be a theory of quantum gravity! ’t Hooft’s holographic principle states that all of the information contained in a volume of space can be represented by a theory which “lives” only on the (n-1)-dimensional boundary of that region. Information (entropy) is proportional to the surface area of a region, not to its volume. With a theory ‘just on the edge’ of a formal space, we can get every bit of the information contained within the entire deep volume of the space. A two-dimensional boundary is all we need; it is equivalent to what’s inside the three-dimensional region.
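The entropy-area proportionality can be written down explicitly as the Bekenstein–Hawking formula, sketched here in LaTeX with the standard constants (Boltzmann’s constant, Newton’s constant, the Planck length):

```latex
% Entropy of a region is bounded by (and for a black hole equals)
% one quarter of the area A of its boundary, in Planck units:
S \;=\; \frac{k_B\, c^3}{4\, G\, \hbar}\, A
  \;=\; \frac{k_B\, A}{4\, \ell_P^{2}},
\qquad \ell_P = \sqrt{\frac{G\hbar}{c^{3}}}
```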
What is a hologram? A hologram maps a volume onto a surface. A holographic surface ‘completely’ contains the volume it describes; the information it encodes is fractally distributed across its surface. Any piece of the hologram stores information about the entire scene, at a fidelity equal to its optical sensitivity. Essentially, each point on the holographic material records a photograph of the scene. In a hologram, one entire space (the scene or situation) is the projection of another space topologically equivalent to its (n-1)-dimensional surface (the holographic representation). Recording a hologram requires a coherent light source which is split by a mirror. The first beam bounces off the scene; the interference pattern of this signal beam with the second, reference beam is recorded onto the holographic plate. When the processed hologram is later illuminated by the reference beam, the diffraction pattern reconstructs the original signal beam.
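The recording and reconstruction just described can be sketched numerically. A toy one-dimensional model in Python, assuming a unit-amplitude reference beam; the names and sample values are illustrative, not a physical simulation:

```python
import cmath

def record(signal, reference):
    """Recording: the plate stores only the intensity of the superposed
    beams, |S + R|^2; the signal's phase survives as interference."""
    return [abs(s + r) ** 2 for s, r in zip(signal, reference)]

def illuminate(plate, reference):
    """Reconstruction: re-illuminating the plate with the reference beam
    gives I * R = (|S|^2 + |R|^2) R + S + S* R^2 when |R| = 1, so the
    diffracted field contains a term that is exactly the original signal."""
    return [i * r for i, r in zip(plate, reference)]

# Sample points of a 'scene' beam and a unit-amplitude reference beam.
signal = [0.3 + 0.4j, -0.2 + 0.1j, 0.5 - 0.5j]
reference = [cmath.exp(0.7j * k) for k in range(3)]

plate = record(signal, reference)
field = illuminate(plate, reference)

# Subtracting the zero-order and twin-image terms recovers the signal.
for s, r, f in zip(signal, reference, field):
    zero_order = (abs(s) ** 2 + 1) * r
    twin_image = s.conjugate() * r * r
    assert abs((f - zero_order - twin_image) - s) < 1e-9
```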
So there is at least one particular ‘theory-space’ operating over the surface of any given volume which is equivalent (a dual theory) to the ‘theory-space’ operating over the volume itself. The holographic ‘crust’ of a system is a complete mapping; the strong holographic principle suggests that the system itself is an illusion, a projection of the simpler system onto a ‘bigger’ space. They are equivalent descriptions of an underlying reality. ‘Truth’ is not in the hologram or its projection, but in the operation which maps between them. It is an immanent theorizing which allows these hidden dualisms to surface, through a logical revolt against structures of knowledge and power. Holograms are a model of the universe and consciousness only insofar as we recognize their status, like any structure, as metaphors. Nonetheless, it certainly seems true that some metaphors are remarkably more descriptive, apt and succinct than others. Some even capture essential structural and technical unities, tracing the intricate diachrony of machinic interaction. I think we still have new things to learn from holograms.
My question here concerns the genealogy of computation, the nature of information processing. My conjecture is that we can understand information processing in terms of the holographic paradigm in such a way as to ground a model, that is, to reduce a constellation of particular complex problems to simpler, equivalent problems. In particular, ordinary logical computation can easily be modeled by the Laws of Form, which can in turn be realized by holographic transformations. My point is that the holographic transformations themselves are a much simpler and “reduced” language for discussing the exact same series of theoretical problems. Specifically, computation can be understood using a single, unary operation: holographic transformation.
Therefore my main task here is to show that a holographic model for information processing is equivalent to a universal Turing machine. In other words, the capacity for holographic projection (which is inherent in any selected region of space, for all physical processes and relationships) embodies the essence of what constitutes an information-processing machine.
The second task is to show how the Laws of Form constitute a detailed logic of holographic transformation, the creation (projection) of (parts of) the universe by the division of space. (Interestingly, though we shall not consider this too deeply, the Laws of Form also exhibit an isomorphism to electrical circuits.) Our specific concern with the Laws of Form will be to show their unique applicability to holography, as an algebraic model of how holographic transformations could in fact embody the essential operation of computation.
What is Computation?
Astonishing! Everything is intelligent.
We will begin with a brief analysis of the holonomic model and sketch some key isomorphisms to models of computation. First, a hologram is nothing more than a flat map of a region of space (conforming informationally to the boundary of the space). Every ‘difference’ in that volume of space is conserved, recorded upon the holographic surface whose projection, when illuminated by the reference beam, is this volume.
A hologram is a fractal map of a region of regular space. It is a particularly interesting structure for us because we find in it two different scales persisting unresolved. On the one hand there are the micro-photographs which collectively constitute the surface; on the other, the macro-hologram which singularly represents the volume. In a holographic structure we subtract a dimension while conserving information: the operation of passage between spaces of different dimension is certainly transversal; the hologram results from a complex transduction.
To record a hologram is to transfer the information contained in a volume of space (a scene) onto a surface (the holographic material). Ontologically we are dealing with different kinds of information. This transference has only practical limits. Theoretically we can carry the process on indefinitely, packing a surface full of holograms, then micro-holograms, then micro-micro-holograms… This recursive operation is the metamathematical operator of abstraction (embodied as ‘transposition’ in the Laws of Form); it is a transformation which takes a given concrete space to the zero-dimensional ‘extrusion’ of the entire volume onto a single point. Here there is a turning point: the legitimately ontological transformation which connects us to Spencer-Brown. From the space to a single point; and the process can even be continued, from the single point (which maps an entire volume in n dimensions through a cascade of micro-holograms) to spaces of positive dimension less than one. From these inter-dimensional mappings it becomes clear that we are interested not in positioning but in topology: information is being written directly into the structure of the space. These mixed topological structures are not arbitrary, but they are also not regular or continuous. They conform to new kinds of spaces with alternate symmetries. There are an infinite number of these in-between spaces; any particular layer would be but another step in an infinite fractal recursion.
There are not really two inverse operations, recording a volume onto a surface and projecting a volume from a surface: holographic space is a generalization of both, allowing the operations to become continuous. This brings us to 1936, when Emil Post described a model of computation which is extremely interesting to me for several reasons. First, because it represents a move beyond Turing towards a simpler model which is still formally equivalent. Post’s system is extremely simple, but complex enough to be formally equivalent to recursion — that is, it describes a universal computer. Second, Post-Turing machines are structurally isomorphic to Spencer-Brown’s Laws of Form extended to n dimensions.
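Post’s formulation can be made concrete in a few lines. A minimal sketch in Python, assuming a two-way infinite binary tape and an atomic instruction set of mark, erase, move and conditional jump; the instruction names and the sample program are illustrative, not Post’s original notation:

```python
from collections import defaultdict

def run_post_machine(program, marked=(), steps=1000):
    """A Post-Turing machine: a two-way infinite binary tape plus a
    sequential program of atomic instructions. Supported instructions:
    ('MARK',), ('ERASE',), ('LEFT',), ('RIGHT',),
    ('JUMP_IF_MARKED', target), ('HALT',)."""
    tape = defaultdict(int)          # unmarked squares default to 0
    for pos in marked:
        tape[pos] = 1
    head, pc = 0, 0
    for _ in range(steps):
        op = program[pc]
        if op[0] == 'HALT':
            break
        if op[0] == 'MARK':
            tape[head] = 1
        elif op[0] == 'ERASE':
            tape[head] = 0
        elif op[0] == 'LEFT':
            head -= 1
        elif op[0] == 'RIGHT':
            head += 1
        elif op[0] == 'JUMP_IF_MARKED' and tape[head] == 1:
            pc = op[1]
            continue
        pc += 1
    return sorted(pos for pos, v in tape.items() if v)

# Illustrative program: starting at the left end of a block of marked
# squares, walk right past the block and mark one more square
# (the unary successor: n strokes in, n + 1 strokes out).
successor = [
    ('JUMP_IF_MARKED', 3),   # 0: standing on a mark? go scan right
    ('MARK',),               # 1: empty input: write the single stroke
    ('HALT',),               # 2
    ('RIGHT',),              # 3: step over a marked square
    ('JUMP_IF_MARKED', 3),   # 4: keep walking while squares are marked
    ('MARK',),               # 5: first blank after the block: mark it
    ('HALT',),               # 6
]

print(run_post_machine(successor, marked=(0, 1)))   # -> [0, 1, 2]
```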
However, while Post-Turing machines may be fundamental models in some senses, it is clear we need a second-order model of computation to account for emergent properties of distinction. In other words, we need to assume a user, programming the machine with methods and posing to it problem-spaces of various kinds. But if we presume the user, we leave his desires (enfolded within his programs) unexplained; we leave them as the musician leaves the composition: we perform it precisely as we are enjoined by the quasi-linguistic flow of instructions. A universal machine also performs without deviation or flourish. But how, then, are creative deviations from methods and problem-spaces generated? So far, we have not consciously conjoined cybernetics with psychoanalysis on this particular point. We have assumed that only the mysterious users with their magical organic brains can ‘outrun’ the infinite logical loop computation cannot overcome. Gödel’s general recursive functions — together with his method of representing formulas by numbers, a program by a series of instructions — appear to mark the cognitive limit, the asymptotic horizon of computational complexity. For any complete and formal deductive theory (symbolic logic), we would have to find an equivalent predicate in recursive form, and this is the key observation from which Gödel’s theorem immediately follows.
The existence of provably unprovable statements is difficult to reconcile, but Spencer-Brown’s Laws of Form do precisely this. By showing that to contain a space is to make a distinction, recursivity is introduced prior to symbolic reduction. Indeed, we can outline an equally fundamental (though considerably more complex) mode of computation in which creative responses arise through feedback and transformation. Interfaces themselves should be intelligently generated for a given problem space, by analyzing its holographic structure, ‘deducing’ the underlying program or technical schematic. How are the forms of programs generated? But after all, what is the shape of desire? How do we connect the forms we imagine to digital forms? Interfaces must become porous membranes; they must be designed to be broken through and overcome.
The interface itself must be the site of the transformation of the problem space and therefore of the underlying representation of the problem. Abstract computation is embodied by this process of generating new interfaces for problem spaces. In other words, we extrude from the surface/image of the problem information about its projected space. We move from a series of distinctions which bound the space of the problem to an interface which functions to transform the problem, if you like, from the Form to the anti-Form (quasi-distinctions, on the boundaries of distinct forms). Programs ultimately do nothing more than operate over a series of marked and unmarked spaces in order to simplify and transform them according to rules based on the state of the machine. Post’s machine is a formalization of this insight, representing an ‘atomization’ of Turing’s instructions; but is further reduction in the complexity of the machine possible?
The smallest candidate for a universal Turing machine was described by Stephen Wolfram, who conjectured that a particular 2-state, 3-symbol Turing machine was universal, which would make it the smallest possible universal machine. In 2007, a 20-year-old undergraduate, Alex Smith, proved that this machine is indeed universal. The machine is similar in its simplicity to a Post machine. However, the recursive step must still be made: the machine must be able to simulate itself, its entire field of operational decision-making. The program which would perform this would amount to a meta-operating system. Simply put, it is able to create virtual machines, each of which obviously contains a similar program capable of virtualizing another series of machines… However, we are getting ahead of ourselves. Again, our basic project here is simply to show that holographic transformations are equivalent to the operations of a universal computer. How do you build a holographic computer? Storing information with light is really a very old idea. But holography is quite different from photography, for enough information to reconstruct the entire scene is distributed throughout the entire surface of the holographic material — whereas in a photograph only a single light ray is recorded at any particular point, so cutting the photograph in half destroys half the information. Cutting holographic material, on the other hand, merely dulls the resolution of the encoded information, causing distinctions to become blurred.
Holography and Distinction
It’s been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information.
David Bohm has argued that the structure of the universe itself is holographic; I am saying the same thing about computation. The holographic paradigm has had a recent successful implementation in multidimensional associative memory. Interestingly, the model seems to naturally reproduce many characteristics of organic memory, such as dynamically localizable attention, making it effective for generalization and pattern recognition with a changeable focus. These results are compelling, but not enough to make our case. It is necessary to point out several further aspects of the holographic paradigm that are important to computation.
(1) A hologram is a complete map of a volume which fits on the conformal boundary of that volume. The surface is a fractal representation of this volume which reproduces the optical (electromagnetic) properties of the volume when decoded or projected. Thus a hologram is an encoded map of a complex region which it represents in its micro-structure (it cannot be reconstructed without the recording signal which produced it). The operations of recording and projection are not just analogies to the metamathematical operations of abstraction and instantiation, but in fact the pure model, wholly commensurate with the ontological split evinced between pure and computationally-oriented, recursive mathematics (as in Gödel, who closes his proof by noting that there might still be consistency proofs — only they cannot be stated in set theory or arithmetic).
(2) A hologram, then, is a complete system (of calculation). It is formed by the gathering and hardening of electrons into light or dark areas, into marked and unmarked spaces. This bonding of electrons is not without tensions, but they are relatively stable, allowing for the formation of the micro-images. A hologram is a formally operational space: every portion of the space reproduces the entire scene from a given perspective. A hologram is a functionally complete system, a calculus.
(3) Every point on a hologram is an optical algorithm (or lambda expression), encoding a functional mapping of a series of higher-dimensional points onto a single, lower-dimensional point. This fractal mapping binds parameters into expressions; each micro-scene is a non-linear function of the interference of optical signals, the excitation slowly hardening into regions of light and dark, visible and invisible.
The Laws of Form represent the horizon of metamathematical abstraction. In this simple calculus we find the fundaments of set theory, arithmetic and logic. (In particular, Bricken and Kauffman have shown there is a simple mapping from the Laws of Form to mathematical logic.) What is important to remember is that the Laws of Form are a reduced image of the more complex logical axiom-systems (which can still be derived from the simpler image). In fact, the more complex system is again a projection of the simpler one. The Laws of Form encode holographically the generic features of computation, of reasoning within the boundaries of a formal system. What is critical is that we are dealing with a meta-formalization (not wholly unlike the Gödel numbers) where transformations in the Laws of Form can be interpreted as systems of mathematics. The Laws of Form can also be seen as the logical basis for electronic circuits. Every circuit has a form, a pattern of decisions or distinctions it makes. A circuit is a recognition-machine, whose responses vary predictably on the basis of the information with which it is presented, trained to recognize information that appears in a certain form. All mathematical formulations are encoded in a logical language whose structure is not arborescent but holographic — characterized by the progressive abstraction of projective and integrative operations. Holograms represent not only the basis of formal computation but in many ways an apt paradigm for formal and informal processes of all kinds, for information processing at the most abstract limit.
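The primary arithmetic of the calculus has just two initials: calling, ()() = (), and crossing, (()) = void. As Bricken and Kauffman observe, it maps directly onto Boolean logic. A minimal sketch in Python, writing crosses as parentheses; the function names and the particular logical encoding are illustrative:

```python
def parse(expr):
    """Read a Laws-of-Form expression, written with parentheses as
    crosses, into a nested-list tree; the empty string is the void."""
    stack = [[]]
    for ch in expr:
        if ch == '(':
            stack.append([])
        elif ch == ')':
            cross = stack.pop()
            stack[-1].append(cross)
    return stack[0]

def marked(expr):
    """Primary arithmetic: a space is marked iff some cross in it has
    unmarked content (crossing), and juxtaposed marks condense into
    one (calling)."""
    tree = parse(expr) if isinstance(expr, str) else expr
    return any(not marked(cross) for cross in tree)

# The two arithmetic initials of the calculus:
assert marked('()()') == marked('()')    # calling:  () () = ()
assert marked('(())') == marked('')      # crossing: (()) = void

# A Bricken/Kauffman-style reading: mark = true, void = false,
# juxtaposition = or, enclosure = not (and is derived by De Morgan).
TRUE, FALSE = '()', ''
NOT = lambda a: '(' + a + ')'
OR = lambda a, b: a + b
AND = lambda a, b: NOT(OR(NOT(a), NOT(b)))

assert marked(OR(FALSE, TRUE)) and not marked(AND(TRUE, FALSE))
```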
A final key comparison to make here would be to the Einstein field equations, where particular solutions correspond to specific space-time topologies. A hologram models the concept of operation: not only formalization but projection. The recursive aspect that makes a holographic surface ‘coded’, and therefore the origin of computability, is that holographic representation involves a mapping across a dimensional break, accomplished through multiple perspectives, or fractal transpositions of the original space.
1. See E. Witten, Anti-de Sitter Space and Holography; S. Gubser, I. Klebanov and A. Polyakov, Gauge Theory Correlators from Non-Critical String Theory.
2. I. Grattan-Guinness, The manuscripts of Emil L. Post, History and Philosophy of Logic 11 (1) (1990), 77-83.
3. D. Bohm, Wholeness and the Implicate Order, Routledge, London, 1980.
4. K. I. Khan and D. Y. Yun, Characteristics of Multidimensional Holographic Associative Memory in Retrieval with Dynamically Localizable Attention, IEEE Transactions on Neural Networks 9 (3) (1998), 389-406.
C. E. Shannon, A Mathematical Theory of Communication, Bell System Technical Journal 27 (1948), 379-423, 623-656.
R. G. Taylor, Models of Computation and Formal Languages, Oxford University Press, New York, 1998.
G. Japaridze, The logic of interactive Turing reduction, Journal of Symbolic Logic 72 (1) (2007), 243-276.