Team:ETH Zurich/human/interviews/expert7


There are various definitions of complexity. Most of the time, the notion of system is mentioned in those definitions. Would you say that complexity is a property of a system?

I would say that complexity is not a property, but a characterization of a system. The next question is what the defining criterion for this characterization could be. At some point, people tried to come up with formal definitions of what complexity could actually be and how it can be measured quantitatively. Before that happened, for a long time, for decades actually, people were talking about complex systems in a colloquial sense. What you will find in the literature are criteria like, for instance: “The coupling of a system with its environment is important for the behavior of the system” (open systems), or “Many complex systems have a lot of constituents” (although we can also find complex systems with a small number of degrees of freedom, for example in deterministic chaos). What you need for complexity is non-linear behavior, non-linear feedback. Another criterion that some people use is that you are dealing with systems far from thermal equilibrium; in biology, you are typically far from thermal equilibrium. Another feature of complex systems is their intrinsic instability, which makes it difficult or impossible to treat their behavior as stationary. In many of these complex systems, it is not easy to find those domains of behavior in which they are stationary, structurally stable, dynamically stable. These stability islands are generally not easy to find. These are a number of terms and concepts which have been used for a long time.

Can one measure complexity quantitatively?

The first attempt to really define complexity in a more rigorous way already happened in the 1960s, and it was done not by physicists but by mathematicians. Kolmogorov and others had a definition of complexity that has later been called “algorithmic complexity”. When you have a pattern and want to measure its complexity, their approach was the following: if you construct an algorithm that you can run on a computer, then the length of the shortest algorithm that is capable of reproducing the pattern is the algorithmic complexity of the pattern. If you have a completely regular pattern, like a period-2 process, the algorithm is very short and the complexity is low. On the other hand, if you construct your pattern as a random sequence of black and white pixels, then the shortest algorithm to reproduce it is the sequence of pixels itself. That is the longest algorithm, relative to the pattern itself, that you can imagine, so it will be the pattern with the highest complexity score. A point of criticism was quickly raised: in this sense, complexity is nothing else than randomness. Then the question is: why do we need two different names for the same phenomenon? In the 1980s, people came up with a different viewpoint. The intuition was: if you have completely regular behavior, that is not complex anyway, but if you only have completely random behavior, this should also not be called complex. What should be called complex is an intricate mixture of random and regular elements in your pattern.
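The algorithmic complexity itself is not computable, but the contrast described above can be sketched with an off-the-shelf compressor used as a crude stand-in for the “shortest algorithm”. The following Python snippet is only a rough illustration of the idea, not a measure used in the literature: a period-2 pattern compresses to almost nothing, while a random pattern barely compresses at all.

```python
import random
import zlib

def description_length(data: bytes) -> int:
    # Size of the zlib-compressed data: a crude, computable proxy for
    # the (uncomputable) algorithmic / Kolmogorov complexity.
    return len(zlib.compress(data))

# Completely regular pattern (a "period 2 process"): tiny description.
regular = bytes([0, 255] * 5000)

# Random pattern: the shortest description is essentially the data itself.
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(10000))

print("regular pattern:", description_length(regular), "bytes")  # a few dozen bytes
print("random pattern: ", description_length(noise), "bytes")    # about 10000 bytes
```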

This is a basic distinction between two different general categories of complexity measures: one of them just being a measure of randomness, the other one characterizing the mixture between order and randomness.

In a review paper back in the 1990s, we reviewed all the complexity measures that existed at that time (more than 40). We tried to identify to which class they belong and how they behave on the basis of a very simple, artificial example: the so-called logistic map. The logistic map is a discrete recursive map. That means you have a starting value x, and the value of x at the next step is given by rx(1-x). It is a very simple map. It is a nice example because it is very simple on the one side, and on the other side it exhibits quite a lot of complicated, complex if you prefer, dynamics.
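As a minimal sketch of both faces of the map (the parameter values below are just illustrative choices): for r = 3.2 the orbit settles onto a regular period-2 cycle, while for r = 4 it wanders chaotically.

```python
def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=8):
    # Iterate x -> r*x*(1-x), discard a transient, return the last few values.
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# r = 3.2: the orbit settles onto a simple period-2 cycle (regular).
print(logistic_orbit(3.2))
# r = 4.0: fully developed chaos; successive values look irregular.
print(logistic_orbit(4.0))
```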

What were the results of your review?

First of all, we had a distinction between monotonic and convex measures. Then, within these categories, there are multiple different definitions of complexity and they all react to different features of the logistic map in different ways.
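A toy comparison can make this concrete. The two indicators below are not the measures from the review, only crude stand-ins invented for illustration: a compression ratio of the binarized orbit as a randomness-type indicator, and the detected period of that symbol sequence as an order-type indicator. They respond to different features of the logistic map at different parameter values.

```python
import zlib

def symbolic_orbit(r, n=4096, x0=0.2, burn=1000):
    # Iterate the logistic map, discard a transient, and binarize the orbit:
    # symbol 1 if x > 1/2, else 0.
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    symbols = []
    for _ in range(n):
        x = r * x * (1 - x)
        symbols.append(1 if x > 0.5 else 0)
    return symbols

def randomness_proxy(symbols):
    # Compression ratio as a crude "measure of randomness" type indicator.
    raw = bytes(symbols)
    return len(zlib.compress(raw)) / len(raw)

def detected_period(symbols, max_period=64):
    # Smallest p such that the symbol sequence repeats with period p,
    # as a crude "order" type indicator (None if no short period is found).
    for p in range(1, max_period + 1):
        if all(symbols[i] == symbols[i + p] for i in range(len(symbols) - p)):
            return p
    return None

for r in (3.2, 3.5, 4.0):
    s = symbolic_orbit(r)
    print(r, "period:", detected_period(s),
          "compression ratio:", round(randomness_proxy(s), 3))
```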

For instance, take the epsilon-machine complexity. That is a very sophisticated and powerful measure. That’s worth knowing. So far as I know, it is the only measure of complexity that is embedded in a very comprehensive theoretical background. The person who originally developed this was Jim Crutchfield. He was one of the pioneers in chaos theory in the 1980s.

In a way, all the other people, including ourselves, just tried to identify certain measures of complexity that we thought would be interesting for a particular purpose, but we did not care about a theoretical framework for them. In the end, if you are interested in identifying certain kinds of instabilities in a system, particular measures serve this purpose best, but they are maybe not very sensitive to other features, like identifying periods.

Is complexity linked to Emergence?

Yes, because that’s another one of these colloquial features of complexity. Complex systems often have a hierarchical structure, so you have levels of description. For instance, you can describe complex systems in terms of individual constituents, like individual neurons in the brain. Then you also have other levels of description: for instance, the level at which neural assemblies are formed. These neural assemblies often have properties, which some people call emergent properties, that you cannot simply derive from their constituents unless you know something about the collective level. In physics, a classic case is the relationship between the individual molecules in a box of gas and its thermodynamic behavior. Temperature is, of course, not a property of single molecules. In this sense, it is also an emergent property, and much has been written about that example.

Emergence is very intensely discussed in the context of complex systems. Another issue that is more and more discussed in the context of complex systems is the issue of reproducibility of certain results or experiments.

Why are complex systems most of the time not reproducible? Is it due to the subjectivity of the observer that has to be taken into account?

That is one issue, but I think even more basic is the intrinsic instability of complex systems. When you have unstable behavior, what usually happens is that systems search their sample space in such a way that they end up relaxing into stable attractors. But in complex systems, this can take an enormously long time. There are lots of studies, starting in the 1990s, about these super-transients. The behavior of your complex system can remain transient; this means that your complex system does not reach the stationary regime for an extremely long time. Whenever you are still in the transient phase and you try to reproduce something, you fail, because of the instability. If you know a little bit more about your system, then you may be able to calculate with certain tools the time it takes for the system to become stationary, and that helps you. Then you can say: “To achieve reproducible results, I have to wait that much time.” But if you don’t have this knowledge, then you are completely lost.
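A small sketch of the “wait that much time” point, again with the logistic map (the parameter values and tolerance are illustrative assumptions): the closer r sits to the instability at r = 3, the longer the orbit stays transient before it settles to within a fixed tolerance of its stable fixed point x* = 1 - 1/r.

```python
def transient_length(r, x0=0.2, tol=1e-8, max_iter=1_000_000):
    # Number of iterations of x -> r*x*(1-x) until the orbit is within `tol`
    # of the stable fixed point x* = 1 - 1/r (stable for 1 < r < 3).
    x_star = 1.0 - 1.0 / r
    x = x0
    for n in range(max_iter):
        if abs(x - x_star) < tol:
            return n
        x = r * x * (1 - x)
    return None  # still transient after max_iter steps

# Far from the instability at r = 3 the transient is short;
# close to it, reaching the same tolerance takes orders of magnitude longer.
for r in (2.5, 2.9, 2.99, 2.999):
    print(r, transient_length(r))
```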

Most of the research results we are talking about are a few decades old. Has there been an evolution in the field of complexity these past few years, or has research on complexity reached a bottleneck?

I think there was a very decisive point in time in the study of complexity. That was when people could buy high-powered computing systems for not so much money. The reason is obvious: you cannot analytically solve complex systems in most cases, and if you really want to study them, you have to run them in simulation studies. Everything that happened before the late 1970s was more or less heuristic: mathematicians had analytical examples of complex systems, and those examples were the simplest ones. After powerful computers came along, everything could be simulated. Then the whole field exploded.

You just talked about simple complex systems. Does an antagonistic notion to complexity, such as simplicity, exist?

Of course, but the notion of simplicity has not become such a buzzword in science. Using the complexity measures we talked about before, one possibility would be: if the complexity is low, then the system is simple. There is an interesting book on complexity whose final chapter says something about simplexity and complicity, as opposed to simplicity and complexity. This word game is meant to say that it is not that easy to tear complex behaviors and simple behaviors apart from one another.


You described diverse measures of complexity. Would it be possible to build a universal theory of complex systems?

There is the notion of universality classes on the way from regular to chaotic behavior, so I am not saying that complexity completely lacks any kind of universal behavior. But what we do not have is a compact set of equations that describes everything, like Maxwell’s equations. Maxwell’s equations resulted from the attempt of physicists to create a fundamental, universal law for electromagnetism. In complex systems research, something like this has simply never happened. My intuition is that this is a fundamental problem of complex systems theory, and not simply a matter of having to work harder or longer. Regarding universality as a methodological pillar of scientific work, Peter Grassberger had an intuitive argument about this issue. He brings in the issue of meaning. For him, complexity is nothing else than the “difficulty of a meaningful task”. Thus, meaning implies subjectivity, which implies uniqueness, which is opposed to universality. That created some real controversy at the time in the study of complex systems, because people realized that when you try to import meaning as an explicit object of study in physics, then you are really not doing physics anymore. At that time, a lot of people considered this a no-go in physics. But Grassberger was courageous; he did it. I think it is interesting because it opens up a whole new level of discussion and deliberation. My favorite notion in this kind of discussion is contextuality. I would not contrast universality with the subjective but with the contextual.

What do you mean by contextuality?

Measures of complexity, for instance, are not universal, but they have to be applied in a way that respects the context of the question you have. What do you want to know? What do you look for? If your answer were independent of the context, then it would be universal.

It seems to be a vain quest to have a global wrap-up of complexity. However, could meta-models give new insights into this issue?

I cannot rule this out. That would change the whole methodology of theory building. What you usually do is consider experimental results, facts or data, and then you try to find a model that more or less fits your data. With a meta-model, you would presuppose the data and the model that you have and try to see the relationship between them. It may be a possible path to come up with something more universal than present-day models of complex systems.



To learn more

http://www.sciencedirect.com/science/article/pii/096007799490023X

Wackerbauer, Renate, et al. "A comparative classification of complexity measures." Chaos, Solitons & Fractals 4.1 (1994): 133-173.


https://ejournals.library.ualberta.ca/index.php/complicity/article/viewFile/8764/7084

Stewart, Ian. "Complicity and Simplexity."


http://link.springer.com/article/10.1007/BF00668821

Grassberger, Peter. "Toward a quantitative theory of self-generated complexity." International Journal of Theoretical Physics 25.9 (1986): 907-938.