Kinematic Self-Replicating Machines
© 2004 Robert A. Freitas Jr. and Ralph C. Merkle. All Rights Reserved.
Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown, TX, 2004.
6.3.4 An Early Design Will Not Speed Development
A fourth argument against designing an assembler at this time is that such a design might not speed development. This argument holds that even if we possessed a fully detailed and validated design – a design that was in fact physically possible – it would neither advance our state of knowledge nor speed development, because our present comparatively primitive experimental abilities are insufficient to build it. Why, then, bother to create the design? In other words, exploratory engineering should not be done at all: the ability to build nanostructures experimentally ought to be demonstrated before any systems-level analysis is attempted.
This perspective suffers from the misconception of technological determinism: the idea that new technology is developed on a more-or-less fixed schedule, independent of the desires and efforts of those involved. According to this view, humans first traveled to the moon in 1969 because that was when the technology to do so was available. There are two related sub-arguments here: First, that the moon landing could not have been done before 1969, and second, that the moon landing would not have been significantly delayed after 1969 because once the technology was available, it was inevitable that someone would do it.
To the first sub-argument: Robert Goddard’s classic 1919 paper describing how a small payload could be sent to the lunar surface, together with the early “manned moon rocket” technical design proposals of the 1930s and 1940s (which included numerous features later adopted in the Apollo Program), established that such machines were plausible. This gave decisionmakers sufficient confidence to proceed with development once the political will and public resources became available in the early 1960s, even though these efforts probably could have begun decades earlier. “These developments involve many experimental difficulties, to be sure,” Goddard wrote of the proposed moon shot in his 1919 paper, “but they depend upon nothing that is really impossible.” He presciently concluded his classic work as follows: “Although the present paper is not the description of a working model, it is believed, nevertheless, that the theory and experiments, herein described, together settle all points that could seriously be questioned, and that it remains only to perform certain necessary preliminary experiments before an apparatus can be constructed that will carry recording instruments to any desired altitude.”
To the second sub-argument, we observe that human astronauts have not visited the Moon in 30 years, even though the technology remains eminently available. Thus the mere availability of a technology does not imply that its use is inevitable if there is no clear recognition of the benefits of using it.
The claim that a design is useless because we can’t immediately build it presupposes that the design will have no influence on our assessment of how easy it is to build. But it seems premature to draw broad conclusions about the difficulty of building something in the absence of a design. In research, it is not uncommon to view a problem as very difficult or even impossible – and yet when the solution is found, the problem, in retrospect, appears quite simple. In the present case, we need at least one worked example before we can begin to reasonably address the question of how hard it is to build an assembler. The answer to this question would seem to depend very much on the proposed designs and the volume of the design space that can be explored. The experience of the present authors is that the “difficult” and “fundamental” problems involved in the design of a molecular assembler succumb when systematic efforts are applied to their solution.
Another important historical example can shed some light on the issues involved. In 1821, Charles Babbage designed the Difference Engine (the first hands-off algorithm-executing machine), and by 1834, Babbage had conceived detailed plans for his even more complex Analytical Engine, intended as a general-purpose programmable computing machine but based entirely on 19th century mechanical technology [3060-3066]. The Analytical Engine was to have a random-access memory consisting of 1000 words of 50 decimal digits each (~175,000 bits), with separate memory and central processing unit (CPU), a special storage unit for the instructions or program, data entry via punched metal cards, and even an output printer [3061, 3062, 3065]. This ambitious device, though well specified, was never built. (A 2000-part working subsection of Babbage’s brass-geared Difference Engine was demonstrated in 1832, and an entire working Difference Engine was reconstructed by historians in 1991, proving that Babbage’s design was sound.)
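The memory-capacity conversion above can be checked with simple arithmetic. The information-theoretic minimum is log2(10) ≈ 3.32 bits per decimal digit, which would give roughly 166,000 bits; the ~175,000-bit figure corresponds exactly to a rougher conversion of 3.5 bits per digit, which we assume (the text does not say) is how the estimate was derived. A minimal sketch:

```python
import math

# Analytical Engine memory: 1000 words of 50 decimal digits each.
words = 1000
digits_per_word = 50
total_digits = words * digits_per_word  # 50,000 decimal digits

# Information-theoretic minimum: log2(10) ~ 3.32 bits per decimal digit.
exact_bits = total_digits * math.log2(10)

# A rougher 3.5 bits/digit conversion reproduces the ~175,000-bit figure
# quoted in the text (our assumption about how that estimate was derived).
rough_bits = total_digits * 3.5

print(round(exact_bits), int(rough_bits))  # 166096 175000
```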
But Babbage’s proposals were forgotten* and the development of the stored program computer was delayed by about a century. Was that delay primarily caused by our experimental limitations? Or would a better understanding of the theoretical issues have allowed us to build a stored program computer in the mid-1800s? In short, was the development of the stored program computer in the mid-20th century the result of technological determinism? Or was the timing of this most critical technological development influenced by the somewhat random interplay of ideas and events that took place during the preceding decades? We suspect the latter.
* The Report of the Committee of the British Association for the Advancement of Science, written in 1878 about halfway in time between Babbage and ENIAC, demonstrates that Babbage wasn’t entirely forgotten. These mainstream scientists knew exactly what the Analytical Engine could be good for (flexible digital program execution for universal calculation), and that it might be useful in a few special cases. But, they argued, the Engine was incompletely specified, and it was not clear whether practical problems would prevent it from working. Furthermore, it was unknown how much effort would be required or how much it would cost. The final official recommendation read as follows: “Having regard to all these considerations, we have come, not without reluctance, to the conclusion that we cannot advise the British Association to take any steps, either by way of recommendation or otherwise, to procure the construction of Mr. Babbage’s Analytical Engine and the printing tables by its means. We think it, however, a question for further consideration whether some specialized modification of the engine might not be worth construction, to serve as a simple multiplying machine.” Politicians were perhaps less insightful, according to a remark attributed to Babbage: “On two occasions, I have been asked [by members of Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question.”
With the advantage of hindsight, we can see that an electrical relay-based programmable digital computer could have been built in the 1850s, or earlier.* The first electromagnet was invented by William Sturgeon in 1825, Joseph Henry built the first electromagnetic signaling relay in 1831, and Samuel Morse exhibited the first compact relay-based telegraphy device in 1837 – just three years after Babbage’s Analytical Engine design work. Telegraphy was rapidly commercialized in the 1840s, with the mature consolidation phase of this 19th-century high-tech industry marked by the founding of Western Union in 1851. Thus by the 1850s, telegraphs were common, electromechanical relays were well known, and the major barrier to implementing a stored program computer was our collective failure to understand how easily it could have been implemented with readily available electromechanical technology. Indeed, the world’s first programmable digital computer, built by Konrad Zuse in 1941, used 1,408 electromechanical relays for its 1,408-bit random access memory and another 1,200 relays for its central processing unit. (Zuse’s Z-3 machine computed very slowly, taking 3 seconds to perform a single multiplication.) Acknowledging Babbage’s 110-year precedence, one computer historian described the much larger relay-based Mark I, the first American programmable digital computer completed in 1944 by Howard Aiken, as “an electromechanical Analytical Engine with IBM card handling.” Had Babbage undertaken a more systematic exploration of the design space for computational engines (e.g., including electromechanical options as well as purely mechanical options), and had he been successful in enlisting the aid of others** with a clearer exposition of the potential benefits, it seems quite clear in retrospect that the development of the stored program digital computer could have been accelerated by almost a century.
* An alternative history that might have resulted if Babbage’s invention had taken hold a century earlier, including giant computing machines transforming global politics, economics and culture in the 19th century, was examined fictionally by Gibson and Sterling in 1990.
** New and superior technologies can still be torpedoed by bureaucrats. For instance, in 1861, Giovanni Caselli patented the pantelegraph or Universal Telegraph, a machine system for sending and receiving images over long distances by telegraph, with images reproduced at the receiver using electrochemistry and signals carried by the electromagnetic relays that Babbage had ignored. The pantelegraph was the first prototype of a modern fax machine. Overland links were established between several pairs of European and British cities, with the line between Paris and Lyon handling 5000 faxes during the first year of operation in 1865. Despite the enthusiastic personal interest of Emperor Napoleon III and the formation of a commercial Pantelegraph Society in Paris to promote the device, Caselli “clashed with the French Telegraphs administration which, fearing competition with its ordinary telegraphic network, refused to lower the tariff for handwritten dispatches and even advised taxing such dispatches at a higher rate than ordinary ones. Although the pantelegraph, like today’s fax, was perfectly able to transmit written texts correctly, there was a general refusal to allow it any other role than the transmission of a banking signature or a trademark, since this was the only system capable of doing so, and the Telegraphs administration went on to ensure it was gently stifled out of existence.” The fax did not make a comeback until the 1980s. Babbage himself probably won few friends in the British establishment when he published an “unmannerly” pamphlet denouncing the Royal Society and alleging “that wealthy Tory amateurs had a stranglehold on science policy and were discriminating against socially less well positioned scientists, who were more deserving of support.”
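As a quick check on the Z-3 figures quoted above: standard accounts of Zuse’s machine give its memory as 64 words of 22 bits each (the word structure is our addition; the text quotes only the totals), with one electromechanical relay storing each bit:

```python
# Zuse Z-3 memory: 64 words x 22 bits (word structure taken from standard
# accounts of the machine; the text above quotes only the 1,408 totals).
words = 64
bits_per_word = 22

memory_bits = words * bits_per_word   # 1,408 bits of random access memory
relays_for_memory = memory_bits       # one electromechanical relay per bit

print(memory_bits, relays_for_memory)  # 1408 1408
```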
We expect that the development of the first molecular assembler can be similarly accelerated by a systematic exploration of its design space (Section 5.1.9) – an important motivation for the writing of this book (Section 6.5).
This leads to an important related point: the design of a molecular assembler will not represent a static object which is dropped upon an eagerly awaiting scientific community (the so-called “waterfall” or “trickle down” model of development). Instead, the design will lead to a proposal which will promptly be criticized from a variety of perspectives. These criticisms will then be used to evolve a second design better able to address the issues raised by the first design. A likely criticism of the first design might be: “We see how to build components A and B, but C is quite beyond us – change it!” The second proposal will itself attract further criticism, leading to further modifications. The design effort will be ongoing, with the objective of simplifying the design to the point that it can be manufactured using available technology. The people involved in the design effort will not, as Babbage did, work from familiarity with just a single design. Rather, they will become knowledgeable about the shape of the design space, the range of system designs that are feasible, and the tradeoffs that can be made to simplify or to change various aspects of the design.
Last updated on 1 August 2005