# Minds and Machines: The Limits of Turing-Complete Machines

While the invention of calculus by Newton and Leibniz in the 17th century set the stage for the industrial revolution and unleashed unparalleled analytical power to fast-track human civilization's technological progress, it was the idea of the Turing machine (Turing, 1937) and the lambda calculus (Church, 1936) that enabled the information and computational revolution of the present era. The von Neumann architecture of most modern digital computers, theoretically equivalent to a universal Turing machine, laid the foundations for a massive wave of algorithmic innovation to model complexity in physical, chemical, biological and, more recently, social, ecological, political, economic and management systems. The algorithms executed on such machines constitute the core of the information and computational revolution: they enable the simulation of heterogeneously interacting agents in any complex system, which was not always possible with the Newtonian/Leibnizian calculus.

While algorithms can be written that simulate the behavior of heterogeneous entities interacting at multiple scales, the phenomena of emergence, self-organization and adaptation are typically observed at *higher* scales. We acknowledge the prospects of new scientific insights provided by generative and pattern-oriented modeling. However, we propose that there are also unexplained factors *at work* here that could give an observer the false impression that the patterns generated by algorithms have captured the complexity of the system in its complete essence. Fundamental assumptions ingrained in each algorithm, about behavioral rules, intelligence, creative decision-making, learning, the treatment of uncertainty and so forth, constrain the modeling of emergence, self-organization and adaptation in complex systems. In this post, we focus on illuminating the limits of Turing-complete machines in simulating the creative decision-making observed in nature, from organisms such as fish, mammals and birds, up to evolutionary scales in the form of Darwinian pre-adaptations. With this post, our goal is to initiate a broader dialogue on the prospects and limits of Turing-complete machines in such simulation endeavors, starting with creative decision-making, and hopefully extending in future to the simulation of individual and organizational learning, intelligence and the lack thereof.

The connection between a Turing-complete machine and an algorithm pervades the computational-complexity paradigm. Because memory is stored electrically in Boolean/binary (0,1) form, Turing-complete machines can run any algorithm that can be encoded in Boolean/binary code. According to the Church-Turing (CT) thesis, any computer program/algorithm can be simulated on a Turing machine, and vice versa for any Turing-complete programming language. Further, the strong form of the CT thesis says that every physically realizable model of computation can be simulated by a Turing-complete machine *with polynomial overhead*. We would like to point out that a fundamental limit of Turing-complete machines in simulating creativity, which is widely observed among living creatures, arises from the "framing" or affordances problem. The framing and/or affordances problem has bedeviled programmers since the advent of Turing-complete machines.
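To make the discrete-state, discrete-time nature of such machines concrete, here is a minimal sketch (not from the original post) of a one-tape Turing machine simulator: a finite rule table over a binary alphabet, driving a read/write head. The example machine, which simply inverts a binary string, is hypothetical and chosen only for brevity.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules: dict mapping (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Runs until the halt state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table for a machine that flips every bit, then halts at the first blank.
invert_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(invert_rules, "10110"))  # → 01001
```

Everything the machine can ever do is fixed in advance by this finite rule table, which is precisely the propositional closure the post goes on to question.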

Here is an illustrative example: a man (Stuart Kauffman, one of the authors of this post) had his laptop computer on a coffee table, plugged into a floor socket beneath the table, with the cord running over the side of the table. His wife and adult son were walking about, and he feared that one of them would trip over the cord, with the worst consequences for his laptop. Now we describe the table: three wooden top boards; four chunky legs; four runners between the legs; two cracks starting at one end of one of the top boards; a table top painted in flaked red paint in a complex pattern; a flower pot standing on the table; one end of the table four feet from the fireplace and 3.5 feet from the nearest wall; another corner seven feet from the edge of the kitchen; the table 238,000 miles from the moon and six feet from the nearest door.

Note that there is no bounded set of features of the table alone, or of relational features of the table and other objects; nor are these features "orderable," i.e., first, second, third, etc.

To solve the problem, he jammed the power cord into one of the cracks in the table top, slid it to the narrow end of the smooth-sided crack (another feature) and pulled it taut so that no part of the cord extended beyond the table. He is extremely proud of his solution.

What is the point of this little drama, of the kind we humans creatively resolve all the time? Affordances must be given as a finite list of features and, for each feature, an "is a," "does a," "needs a," "has a" and so forth. The essence of the frame problem is that there is no guarantee that this finite list of features and affordances will logically entail the relevant use(s) needed to solve a given problem. Further, the set of features and relational features is neither bounded nor orderable, that is, it is not listable. Thus, there is no algorithmic way to find the relevant variables and their appropriate relations with respect to solving an arbitrary problem. Yet people solve such problems all the time.
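The propositional approach just described can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration (the feature names and the lookup scheme are ours, not from any real knowledge base): the table is represented as a finite list of "is a"/"has a"/"affords" facts, and a solver can only search among uses that were enumerated in advance.

```python
# Hypothetical finite feature/affordance list for the coffee table.
# Only pre-enumerated uses appear under "affords".
table_facts = {
    "is_a": ["table", "furniture"],
    "has_a": ["four legs", "three top boards", "two cracks", "flower pot"],
    "affords": ["supporting objects", "placing laptop"],
}

def solve(problem, facts):
    """Search the pre-enumerated affordances for one matching the problem."""
    for use in facts["affords"]:
        if problem in use:
            return use
    return None  # the relevant affordance was never listed

# The creative solution, using a crack as a cord anchor, is not logically
# entailed by the finite list, so the search fails:
print(solve("anchoring a power cord", table_facts))  # → None
```

The failure is not a bug in the search; it is the point. However many facts are added, the list remains finite and pre-ordered, while the relevant features of the real table are neither bounded nor listable.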

One difference between rule-driven Turing-complete machines and empirically observed biological systems could be that the adaptive agents in biological systems possess a certain "unity of consciousness" that enables them to solve the frame/affordances problem and to assign adaptive and creative decision rules to changing decision problems (Chalmers, 1997; Kauffman, 2010).

Consciousness is, of course, a contentious issue. The philosopher of mind Gilbert Ryle (1949/2008) argued that we are not conscious at all but merely have propensities to behave in certain ways given certain stimuli, a form of logical behaviorism. Moreover, much of computer science is still deeply wedded to strong artificial intelligence, according to which a sufficiently complex network of logic gates, whether electronic or made of bowls of water, will "emerge" into consciousness. However, there is no reason to suppose that such a network could solve the frame problem. Thus we argue that strong artificial intelligence is a non-starter.

A critical feature of Turing-complete machines is that they are inherently "propositional." This line of thought started with McCulloch and Pitts (1943), who demonstrated that a feed-forward network of binary devices, each computing a linearly separable Boolean function on its zero to many inputs, rank to rank, could compute arbitrary Boolean functions on the vector of (1,0) values on the input rank at time t = 0. (They also examined networks with feedback loops.) They then implicitly identified the 1 or 0 state of an input neuron with what the philosopher Bertrand Russell called a "sense data statement." We must be careful with Russell here. If you look about you, you experience a "unity of consciousness," say of your visual field. Russell invented, out of whole cloth, "sense data," such as "red here" and the tone "A-flat now." He did so because he had written, with Whitehead, the Principia Mathematica, hoping that all of mathematics was derivable from first-order predicate logic. With it, he hoped for maximally reliable knowledge of the external world.

Given sense data, a sense data statement is "for Kauffman, 'red here' is true." McCulloch and Pitts borrowed sense data statements and implicitly identified the truth or falsity of a set of N sense data statements with the on/off states of the N formal neurons in the first, input rank of their neural network. The network could then compute any logical function on those inputs. They titled their paper "A logical calculus of the ideas immanent in nervous activity." With this were born artificial intelligence and neural networks. And with this was born the unsupported assumption that one could identify with the on/off state of each formal neuron the truth or falsity of some "experienced" sense data statement.
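The McCulloch-Pitts construction itself is easy to sketch. Below is a small illustrative network of binary threshold units, rank to rank, computing XOR: a function no single linearly separable unit can compute, but which a two-rank network handles. The weights and thresholds are our own illustrative choices (classic McCulloch-Pitts units distinguish excitatory from inhibitory inputs; we use a negative weight for the inhibitory connection).

```python
def mp_unit(inputs, weights, threshold):
    """Binary threshold unit: fires (1) iff the weighted sum of its
    binary inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def xor_net(x1, x2):
    # Input rank: x1, x2 play the role of two "sense data statements."
    # Rank 1: OR and AND of the inputs, each linearly separable.
    h_or = mp_unit([x1, x2], [1, 1], 1)
    h_and = mp_unit([x1, x2], [1, 1], 2)
    # Rank 2: OR AND NOT(AND), the inhibitory link carried by weight -1.
    return mp_unit([h_or, h_and], [1, -1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# → 0 0 -> 0 / 0 1 -> 1 / 1 0 -> 1 / 1 1 -> 0
```

Note what the sketch takes for granted: the meaning of each input bit is assigned from outside the network. Nothing in the wiring guarantees that those bits encode the features relevant to any given problem, which is exactly the identification the next paragraph challenges.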

Clearly, there is no justification for this identification. We are in no way guaranteed that the features and affordances encoded in the formal neurons, or their analogues, correspond to the features, including relational features and affordances, that are relevant to solving a given problem.

An alternative approach could be to design non-algorithmic information-processing systems. An entirely new field of physics, and with it of information processing, is opening up. Part of the reason we may be "stuck" is that we have limited our thinking to classical physics. A Turing-complete machine is a subset of classical physics: a discrete-state, discrete-time system.

In contrast, we believe, with our colleague Gabor Vattay, that an entire "poised realm" exists for open quantum systems in an environment, hovering back and forth between quantum coherence, decohering to classicality FAPP (for all practical purposes), then recohering to quantum coherence. More specifically, consider an X,Y Cartesian coordinate system. Let the Y axis represent decoherence from quantum coherence to classicality FAPP "up" the Y axis from the origin, and recoherence "down" the Y axis. Let the X axis represent "order," "criticality" and "chaos," for example as tuned for classical systems by their Hamiltonians. This coordinate system represents the "poised realm." Within this realm, Niiranen, Vattay and Kauffman have invented "trans-Turing systems," for which a United States utility patent has been filed and which are described in "Answering Descartes: Beyond Turing" (Kauffman, 2012). Without going into the details, trans-Turing systems are partially non-determinate due to their quantum aspects, yet non-random due to their classical aspects. Thus, as non-determinate systems, they are not algorithmic. They are an entirely new class of information-processing systems, with quantum coherent, poised-realm, and classical inputs and outputs in one or among a set of trans-Turing systems. Indeed, the very concept of information requires modification, since information as normally defined applies only to classical physical systems over finite discrete alphabets.

There are preliminary grounds to believe that such systems can solve the frame problem. Strangely, the evidence lies in evolutionary biology, where, again, quantum and classical aspects mix: mutations are quantum random and indeterminate, yet evolution is not random, as seen in the convergent evolution of the octopus and mammalian camera eyes. Neither quantum mechanics alone nor classical physics alone is sufficient for evolution. We believe the same is true for trans-Turing systems, and that in general trans-Turing systems will be able to solve the frame problem non-propositionally. The example of trans-Turing systems, let alone evolution, frees us from the constraint of strong artificial intelligence that mind must be based on networks of classical-physics logic gates. These so-called trans-Turing systems could be informed by the growing fields of neuroscience and neurobiology on the one hand, and quantum physics and quantum biology on the other. What allows a creature like a jellyfish to devise creative strategies, a bird like a crane to devise creative navigation strategies, and a human brain to recognize complex patterns and creatively solve decision problems needs to be systematically investigated through the study of neural networks/brains within and across species.

*Asim Zia, the co-author of this post, is currently serving as an assistant professor in the Department of Community Development and Applied Economics at the University of Vermont. He has a Ph.D. in Public Policy from the Georgia Institute of Technology. His research is focused on the development of computational and complex-systems-based approaches for policy analysis, governance informatics and adaptive management.*