International Plenary Lecture II

August 6th, Friday, 15:20-16:20
"Managing Complexity and Uncertainty"
Prof. John C. Doyle
California Institute of Technology

Modern fields of science and engineering have evolved remarkably high degrees of specialization. The present division of intellectual labor is structured by the assumption that complex systems can be "vertically" decomposed into layers of materials and devices versus the systems they compose. A second assumption is that each layer is further "horizontally" divided into chemical, mechanical, and electrical materials/devices as well as processing, communication, computation, and control systems. A central cause of this fragmentation into isolated subdisciplines has traditionally been the inherent intractability of problems that require integration of, say, communication, computation, and control. This intractability has necessitated specialized and domain-specific assumptions and methods that can appear arbitrary and ad hoc to researchers in other subdomains. The power of this decomposition is that it has facilitated the massively parallel development of advanced technologies and the proliferation of sophisticated domain-specific theories, allowing each subdiscipline to function independently, with only higher-level system integrators required to be generalists. An increasingly troublesome side effect is a growing intellectual Tower of Babel in which experts within one subdiscipline rarely have meaningful contact with experts from other subdisciplines, and may even be largely unaware of their existence. For example, the term "information" is used by everyone, but often has not just different but almost opposite meanings in, say, communication, computing, or control systems, let alone between systems and devices.

Despite its enormous success, the reductionist program provides a poor foundation for many new technical challenges. For example, the ubiquitous connectivity and flexibility of the Internet as observed by the user is taken for granted, as are the wires, chips, and displays that make up the hardware, but it is rare for nonexperts to be aware of the complex layers of protocols and feedback regulation that make the Internet's flexibility and robustness possible. Until recently, there has been limited theoretical support for the study of the systems-level challenges in either internetworking or biology. Nevertheless, for some time there has been a widely shared vision that there could be universal features of complex systems that transcend these reductionist decompositions and provide a unifying integration. Sharp differences have arisen, however, with regard to exactly what those features are. We believe there is now a clear, compelling, and coherent path emerging from the striking convergence of three research themes: biology, technology, and mathematics.

First, biologists have provided a detailed description of the components of biological networks, and many organizational principles of these networks are becoming increasingly apparent. Even bacterial cells organize and integrate communication, computation, and feedback control subsystems into elaborate regulatory networks, and build them directly on top of molecules, with highly integrated nanoscale chemical, mechanical, and electrical materials. The most familiar multiscale challenges in biology involve the predictive modeling and analysis of complex multiscale dynamics "vertically" across time and space scales, connecting molecular interactions with higher-level network function. A less familiar and more abstract "horizontal" aspect involves the interconnection of modular components for sensing, signal processing, communication, computation, and actuation into vast regulatory networks with layers of feedback. This horizontal interconnection happens within every vertical level, from intra-macromolecular dynamics to intracellular regulation to organism and ecosystem homeostasis, although the complexity grows at higher and more aggregated scales. The most subtle and arguably most important challenge involves the discovery and characterization of higher-level organizational principles of complex networks, without which the multiscale complexity becomes overwhelming.

Second, advanced information technologies have enabled engineering systems to approach biology in their complexity. While the components are entirely different, there are remarkable similarities at the network-architecture level and in the role of protocols in structuring modularity, with layers of feedback and regulation. New theories elucidate these similarities and are comparable in depth and richness to those available for more traditional subdisciplines. While these new theories share with their traditional counterparts many of the domain-specific assumptions that overcome the intractability of more general formulations, this progress has sharpened the mathematical questions that are relevant to these important application domains. Thus we have the beginnings of the first coherent, complete theoretical foundation for the Internet, and have also been developing new theory and software infrastructure to support systems biology. We are making rigorous and precise the notion that this apparent network-level evolutionary convergence within and between biology and technology is not accidental, but follows necessarily from the universal requirements of efficiency and robustness.

While the full consequences of the claimed convergence emerging from these two areas will take years to be fully resolved, an important message is now clear. The method of decomposing complex systems into vertical layers of varying complexity and scale, wherein each layer is further decomposed horizontally into modules, appears to be not only ubiquitous but necessary. It is neither an accident of evolution nor merely an artificial construct imposed by humans to make biology and technology comprehensible, although that may be a wonderfully serendipitous side effect. Thus we do not advocate abandoning the reductionist program of decomposing complexity, but rather managing the process more consciously and systematically. The disciplinary decompositions that exist may indeed be historical artifacts, but the need for such decompositions is not. The key to creating an integrated approach to managing complexity is not to replace existing technologies so much as to augment them with more flexible and rigorous methods for decomposition and recomposition.

Finally, the mathematical foundation is being developed for a far more unified theory of complex systems that overcomes the intractability that forced the disciplinary fragmentation in the first place. It is in retrospect unsurprising that a genuinely new science of complexity would require equally new mathematics to answer basic universal questions such as: Is a model consistent with experimental data, which may come from extremely heterogeneous sources? If so, is it robust to additional perturbations that are plausible but untested? Are different models at multiple scales of resolution consistent? What is the most promising experiment to refute or refine a model? These questions are all naturally nonlinear, nonequilibrium, uncertain, hybrid, and so on, and their analysis has relied mainly on simulation. Unfortunately, simulation alone is inadequate. One computer simulation produces one example of one time history for one set of parameters and initial conditions. Thus simulations can only ever provide counterexamples to hypotheses about the behavior of a complex system, and can never provide proofs. (They can in principle provide satisfactory solutions to questions in NP, but not to questions in coNP.) Simulations can never prove that a given behavior or regularity is necessary and universal; they can at best show that a behavior is generic or typical. What is needed is an effective (and scalable) method for, in essence, systematically proving robustness properties of nonlinear dynamical systems. That such a thing could be possible (especially without P=NP=coNP in computational complexity theory) is profound and remarkable, and it is the foundation of our approach.
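
To make concrete what such a proof looks like, here is a minimal sketch, assuming only the standard textbook Lyapunov construction rather than anything specific to this program: for a system dx/dt = f(x), a single function V certifies a property of every trajectory at once, something no finite collection of simulations can establish.

    \[
      \dot{x} = f(x),\ f(0)=0:\qquad
      V(0)=0,\quad V(x)>0 \ (x\neq 0),\quad V(x)\to\infty \text{ as } \|x\|\to\infty,
    \]
    \[
      \nabla V(x)^{\top} f(x) < 0 \ (x\neq 0)
      \;\Longrightarrow\; x(t)\to 0 \text{ from every initial condition.}
    \]

Finding such a V for a nonlinear f is exactly the kind of robustness proof referred to above; the paragraphs below describe how the search for such certificates can be made systematic.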

The intrinsically "robust yet fragile" nature of complex systems has the computational counterpart of "dual complexity implies primal fragility. "Organisms, ecosystems, and successful advanced technologies are highly constrained in that they are not evolved/designed arbitrarily, but necessarily in ways that are robust to uncertainties in their environment and their component parts. These are extremely severe constraints, not present in other sciences but essential in both biology and engineering. The most obvious feature is that their macroscopic system properties can be both extremely robust to most microscopic details yet hyper-fragile to a few, and this must shape both modeling and analysis, and the experimental process that it interacts with. If most details don't matter, most experiments are relatively uninformative. If a few details are crucial, then this is where both modeling and experiments must focus, but neither a purely top-down nor bottom-up approach can reliably find them.

Thus failure to explicitly exploit the highly structured, organized, and "robust yet fragile" nature of such systems dooms any method to being overwhelmed by their sheer complexity. Technically speaking, we can now formulate a wide range of questions for very general dynamical systems under a common Lyapunov-type umbrella, converting them into statements involving semi-algebraic sets, polynomial (nonlinear) equations and inequalities. Proving such statements is still coNP-hard, but real algebraic geometry, semidefinite programming, and duality theory from optimization provide new methods to systematically exhaust coNP by searching for nested families of short proofs using convex relaxations. Not only can we search for short proofs systematically, but a lack of short proofs implies, by a generalization of duality, intrinsic fragilities in the question itself.
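
A minimal sketch of the convex-relaxation step follows, with the polynomial, the monomial basis, and the use of the CVXPY modeling library (with an SDP-capable solver) chosen purely as illustrative assumptions, not as the authors' software: the coNP-hard statement "p(x) >= 0 for all x" is replaced by the sufficient condition that p be a sum of squares, p(x) = z(x)^T Q z(x) with Q positive semidefinite, which is a semidefinite feasibility problem.

    # Certify p(x) = x^4 - 3x^2 + 5 >= 0 for all real x via a sum-of-squares
    # decomposition p(x) = z(x)^T Q z(x), with z(x) = [1, x, x^2] and Q PSD.
    import cvxpy as cp

    Q = cp.Variable((3, 3), symmetric=True)
    constraints = [
        Q >> 0,                        # Q positive semidefinite
        Q[0, 0] == 5,                  # constant term
        2 * Q[0, 1] == 0,              # coefficient of x
        2 * Q[0, 2] + Q[1, 1] == -3,   # coefficient of x^2
        2 * Q[1, 2] == 0,              # coefficient of x^3
        Q[2, 2] == 1,                  # coefficient of x^4
    ]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print(problem.status)  # "optimal": Q encodes a short proof of nonnegativity

If no certificate exists at a given degree, higher-degree relaxations give the nested families of short proofs described above, and, as noted, a persistent lack of short proofs is itself informative, pointing by duality to intrinsic fragility in the question.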

This feedback from computation to modeling does not imply P=NP=coNP, which is unlikely, but rather that inference problems within coNP lacking short proofs can be traced to specific and meaningful flaws in the models or data, whose resolution can then be systematically pursued. Note that this is a radical broadening of the numerical analyst's notion of ill-conditioning, and it involves mathematics from a variety of previously unrelated disciplines. Again, in retrospect, this should not be surprising, but it creates enormous challenges in both education and the review process.

This mathematical framework has already found substantial applications in networking, biology, physics, dynamical systems, controls, algorithms, and finance, and work on connections with communications theory is in progress. A side benefit of a deepening understanding of the fundamental nature of complexity is a new and more rigorous explanation of long-standing problems in physics associated with complex systems.
