**Kinematic Self-Replicating Machines**

**© 2004 Robert A. Freitas Jr. and Ralph C. Merkle. All Rights Reserved.**

Robert A. Freitas Jr., Ralph C. Merkle, *Kinematic Self-Replicating Machines*,
Landes Bioscience, Georgetown, TX, 2004.

**2.1.1 A Logical Organization of Self-Replication**

Von Neumann set for himself the goal of showing what the logical organization of a self-replicating machine might be. He had in mind a full range of self-replicating machine models which he intended to explore, including (a) the kinematic machine, (b) the cellular machine, (c) the neuron-type machine, (d) the continuous machine, and (e) the probabilistic machine. Von Neumann ultimately produced only a very informal description of the kinematic machine, and although he wrote a great deal on the cellular machine, his magnum opus on the subject was left in the form of unfinished notes at the time of his death in 1957 [3, 4]. His other three models were left largely unexplored and will not be discussed in great detail here.*

* According to Laing [2]: “In addition to his kinematic and cellular models, von Neumann planned to examine three other models of self-reproducing machines. These were to be a neuronal or “excitation-threshold-fatigue” model, a continuous model, and a probabilistic model. Von Neumann is not known to have left any completed work whatsoever on these models at the time of his death, so his intentions are almost entirely a matter of conjecture.”

“Following Burks’ speculations on this matter [3], we can guess that von Neumann’s **neuronal system** might have been a version of the cell-space model in which the individual cell automata in the space were to be constructed of neuron-like elements. This would have been a rather straightforward process, as it is well known that idealized neurons of the McCulloch-Pitts [307] variety can be employed to implement the kinds of logical gatings and delays called for in the 29-state cell automaton system (Section 2.1.3). The reason for employing neuron-like elements seems mainly an attempt to recast the model in a more “biological” vocabulary.” [2] Tempesti [308] adds that a careful analysis of von Neumann’s intentions suggests that this model “would have borne a fairly close relationship to today’s artificial neural networks, with the addition of some features which would have both increased the resemblance to biological neurons and introduced the possibility of self-replication.”

“Von Neumann’s postulated **continuous model** might have been an attempt to comprehend machine reproduction in an even more biological format. The usual mathematical tools for handling actual neuron activity are differential equations expressing the electrochemical flows through and along neuron soma and axons. Thus the actions of cell automata (implemented with neurons) could be expressed by sets of differential equations. In this way the more highly developed tools of mathematical analysis might be employed in representing the behavior of the machine system, in contrast to the use of combinatorics which von Neumann himself characterized as one of the most intractable of mathematical specialties.” [2]

“Finally, in his proposed **probabilistic model** von Neumann perhaps intended to consider using whole congeries of neuron-like elements in implementing the behaviors of what in the neuronal model could be carried out by single neurons. By employing redundancy techniques similar to those described in his classic paper on reliability [309], von Neumann may finally have hoped to design a reliable, biologically oriented, self-reproducing machine characterizable by differential equations.” [2] Again, Tempesti [308] adds: “We know that von Neumann intended to introduce a kind of automaton where the transitions between states were probabilistic rather than deterministic. Such an approach would allow the introduction of mechanisms such as mutation and thus of the phenomenon of evolution in artificial automata. Once again, we cannot be sure of how von Neumann would have realized such systems, but we can assume they would have exploited some of the same tools used today by genetic algorithms.”

Von Neumann [3] concluded that the following characteristics and capabilities were sufficient for machines to replicate without degeneracy of complexity:

- **Logical universality** – the ability to function as a general-purpose computing machine able to simulate a universal Turing machine (an abstract representation of a computing device, which itself is able to simulate any other Turing machine) [310, 311]. This was deemed necessary because a replicating machine must be able to read instructions to carry out complex computations.

- **Construction capability** – to self-replicate, a machine must be capable of manipulating information, energy, and materials of the same sort of which it itself is composed.

- **Constructional universality** – in parallel to logical universality, constructional universality implies the ability to manufacture any of the finitely sized machines which can be formed from specific kinds of parts, given a finite number of different kinds of parts but an indefinitely large supply of parts of each kind.

Self-replication follows immediately from the above, since the universal constructor* must be constructible from the set of manufacturable parts. If the original machine is made of these parts, and it is a constructible machine, and the universal constructor is given a description of itself, it ought to be able to make more copies of itself.

* McMullin [312, 313] suggests that the issue of “universal construction” is perhaps a bit more subtle and that closure engineering (Section 5.6) may be a challenging endeavor in some applications: “In brief, the definition of a ‘universal’ set of automata is highly context dependent (in contrast to the definition of a ‘universal’ set of computations). As a result, it is not trivial to ensure that any proposed, ‘universal’ constructor is, indeed, a member of the set of machines which it can construct. Von Neumann finessed (and obscured) this point by implicitly restricting attention to ‘initially quiescent’ automata in his particular cellular automaton formulation. Of course, in general, construction ‘universality’ is absurdly overkill if one simply wants to achieve self-replication. Much lesser capabilities would suffice for that.”

Von Neumann thus hit upon a deceptively simple architecture for machine replication [3]. The machine would have four parts – (1) a constructor “A” that can build a machine “X” when fed explicit blueprints of that machine; (2) a blueprint copier “B”; (3) a controller “C” that controls the actions of the constructor and the copier, actuating them alternately; and finally (4) a set of blueprints f(A + B + C) explicitly describing how to build a constructor, a controller, and a copier. The entire replicator may therefore be described as (A + B + C) + f(A + B + C). Now, for the machine “X” that is to be manufactured, let us choose X = (A + B + C). In this special case, Controller C first actuates copier B, which copies f(A + B + C) to produce a second copy f(A + B + C), then actuates constructor A to build a second constructor, copier, and controller – i.e., another (A + B + C) – and to tie them together with the second copy of the blueprints. Thus the automaton (A + B + C) + f(A + B + C) produces a second automaton (A + B + C) + f(A + B + C), and so self-replication has taken place. The fact that the description f(A + B + C) must be both copied (*uninterpreted* data) and translated (*interpreted* data) during replication is the key to avoiding the paradox of self-reference. Note that this replication takes place in an environment of stockroom parts (Section 2.1.2).
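The replication cycle just described can be illustrated in code. The following Python sketch is our own toy model, not von Neumann's formalism: a machine is represented as a tuple of part names, its blueprint f(A + B + C) as a plain string, and the controller's cycle simply actuates the copier and then the constructor.

```python
# Toy model of von Neumann's (A + B + C) + f(A + B + C) replication schema.
# The data structures and function names here are illustrative assumptions;
# von Neumann's actual model is an abstract automaton, not a program.

def construct(blueprint):
    """Constructor A: interpret the blueprint and build the machine it describes."""
    return tuple(blueprint.split("+"))

def copy_blueprint(blueprint):
    """Copier B: duplicate the blueprint verbatim, as uninterpreted data."""
    return str(blueprint)

def replicate(blueprint):
    """Controller C: actuate copier B, then constructor A, and tie the
    new machine to the new copy of the blueprint."""
    new_blueprint = copy_blueprint(blueprint)   # f is copied (uninterpreted)
    new_machine = construct(blueprint)          # f is translated (interpreted)
    return new_machine, new_blueprint

parent_machine = ("A", "B", "C")
parent_blueprint = "A+B+C"                      # stands in for f(A + B + C)

child_machine, child_blueprint = replicate(parent_blueprint)
assert child_machine == parent_machine          # the structure is reproduced
assert child_blueprint == parent_blueprint      # the description is reproduced
```

Because the blueprint is both copied as raw data and interpreted as construction instructions, the child automaton carries everything needed to replicate again, with no infinite regress of descriptions-of-descriptions.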

Von Neumann [3] also pointed out that if we let X = (A + B + C + D) where D is any arbitrary automaton, then (A + B + C) + f(A + B + C + D) produces (A + B + C + D) + f(A + B + C + D), and “our constructing automaton is now of such a nature that in its normal operation it produces another object D as well as making a copy of itself.” In other words, it can create useful non-self products in addition to replicating itself and has become a productive general-purpose manufacturing system. Alternatively, it has the potential to evolve by incorporating new features into itself.
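This extension can be sketched the same way. In the toy Python model below (again our own illustrative names, not von Neumann's notation), the constructor interprets the enlarged blueprint f(A + B + C + D) to build both a new replicator core and the extra automaton D, while the copier duplicates the whole blueprint verbatim:

```python
# Toy model of X = (A + B + C + D): one operating cycle yields a full copy
# of the replicator plus an arbitrary by-product D. Illustrative names only.

CORE_SIZE = 3  # constructor A, copier B, controller C

def run_cycle(blueprint):
    """Run one cycle of (A+B+C) on blueprint f(A+B+C+D...): copy the
    blueprint, construct everything it describes, and separate the
    replicator core from any additional products."""
    built = tuple(blueprint.split("+"))     # constructor A interprets f
    new_blueprint = str(blueprint)          # copier B duplicates f
    offspring = built[:CORE_SIZE]           # the new (A + B + C)
    products = built[CORE_SIZE:]            # any extra automata, e.g. D
    return offspring, products, new_blueprint

offspring, products, description = run_cycle("A+B+C+D")
assert offspring == ("A", "B", "C")     # a complete copy of the replicator...
assert products == ("D",)               # ...plus the useful non-self product D
assert description == "A+B+C+D"         # offspring inherits the full blueprint
```

With an empty D the cycle reduces to pure self-replication; with a nonempty D the same machinery doubles as a general-purpose manufacturing system, which is the point of von Neumann's observation.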

Observers [314-316] have noted that von Neumann’s early schema was later confirmed by subsequent research on the molecular biology of cellular reproduction, with von Neumann’s component “A” represented by the ribosomes (Section 4.2) and supporting cellular mechanisms, component “B” represented by DNA polymerase enzymes, component “C” represented by repressor and derepressor molecules and associated expression-control machinery in the cell, and finally component “f(A + B + C)” represented by the genetic material DNA that carries the organism’s genome. (The correspondence is not complete: cells include additional complexities.) More importantly, the dual use of information – both interpreted and uninterpreted, as in von Neumann’s machine schema – was also found to be true for the information contained in DNA.

Last updated on 1 August 2005