Karoline Wiesner describes a number of complex systems and notes that some, like a watch, are very complex, yet a single part can undermine the entire function of the watch. This is in contrast to other types of systems, such as a beehive or an ant colony, where individual elements cannot undermine the hive or colony.
Perhaps a watch is a complicated system while the hive is a complex system. Both systems enable a holistic function: a product of the system that is greater than, and in a sense independent of, the parts, the signature of a complex system.
Yet I would like to differentiate between the two types of systems. A system whose identity is tightly coupled to its parts is a complicated system, while a system whose identity is independent of its parts is a complex system.
I understand there to be levels, a hierarchy, in complex systems, where local connectivity among elements at a lower level provides for the emergence of a higher level, the whole that is greater than the parts.
The thing I am struggling with is that intuitively I know there must be a restriction on the information flow between levels, because an unrestricted flow of information between levels would collapse the system. As an analogy, I think of multi-level artificial neural networks with linear transition functions: they collapse to a single linear network.
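A minimal sketch of that collapse, assuming nothing beyond numpy: composing two linear transitions yields a single linear map, so the "levels" carry no independent structure.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # linear transition: level 1 -> level 2
W2 = rng.normal(size=(2, 4))   # linear transition: level 2 -> level 3
x = rng.normal(size=3)         # input at the lowest level

two_level = W2 @ (W1 @ x)      # information flows through both levels unrestricted
one_level = (W2 @ W1) @ x      # ...but this equals a single linear map

assert np.allclose(two_level, one_level)  # the hierarchy has collapsed
```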
Yet there should be some flow between the levels. The whole should be able to communicate its identity to the parts so that the parts can better define their local rules (this is not the same as the whole communicating to the parts their behavior).
I was thinking that a good example of this phenomenon is learning, as differentiated from memorization. Learning occurs when data perceived in a first domain is abstracted and stored as an abstraction under a new word (expanding the language). It is this abstraction that enables transfer of knowledge to a second domain, the generalization that tests the efficacy of the learning. This is in contrast to memorization, which faithfully stores the data perceived in the first domain yet cannot generalize and transfer that knowledge to a second domain.
Hence, in learning there are two levels. At the lower level, the data from the first domain is stored; these are the parts. After the abstraction occurs, a higher-level concept is stored and associated with a new word. This higher-level abstraction cannot utilize the same descriptive language as the lower level; that would undermine the abstraction and hence the learning.
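A toy illustration of that gap (my own sketch, not from Wiesner): a memorizer stores the first-domain pairs verbatim, while a learner compresses them into a higher-level description, here two fitted coefficients, stated in a different language than the data points themselves.

```python
import numpy as np

# First domain: five observed (x, y) pairs, the "parts".
xs = np.arange(5.0)
ys = 2.0 * xs + 1.0

# Memorization: faithful storage of the parts, in the parts' own language.
memorized = dict(zip(xs, ys))

# Learning: an abstraction ("slope 2, intercept 1") in a new, higher-level language.
slope, intercept = np.polyfit(xs, ys, deg=1)

x_new = 12.0                      # second domain, never seen before
print(memorized.get(x_new))       # None  -- memorization cannot transfer
print(slope * x_new + intercept)  # ~25.0 -- the abstraction generalizes
```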
Yet, once the higher-level abstraction exists, it can and should be utilized to enhance the functionality of the parts. But how can we do that without coercing the parts to fit into a predefined framework?
I think the secret is that we need multiple instances of the thing.
Let's suppose we have a system that is constructed of parts. We need multiple instantiations of this system, each by a different craftsman. Only then can we create second-order relationships between the abstractions at the higher level.
Thus, each craftsman will have created their own version of the abstraction, their own dialect to describe the system. When we look at the relationships between the different dialects, a new set of higher-order ideas will converge to create a new language (this time shared among the craftsmen).
Now comes the trick:
Since each craftsman employed the same parts (this is critical: the parts are shared among all the craftsmen, so they must be utilizing the same lower-level language), we can now back-project the second-order relationships between the craftsmen onto the parts!
Now the parts have two elements to their identity: they have their initial functionality, and they have an element that is back-projected.
So we are left with an information flow between the levels that is back-projected without any coercion, while retaining the emergent property and the disconnect between the forward projection of the low level (the memorized parts) and the higher-level abstraction. This enables learning (the gap between levels) while providing for communication between levels.
This is why recommendation engines are so powerful. If you want to know whether to invest in a resource, say purchase and read a book, you could check the contents, but that would be a low-level analysis of the parts. Since multiple people have read multiple books, we can construct higher, second-order relationships that provide a new language. This new language describes the readers' relationships, a social construct. When we back-project those relationships onto the books, we infuse the books with information from the higher level. Now the books are more than the sum of their words; they are also social constructs.
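A minimal sketch of this back-projection, with made-up data and numpy: from a reader-book matrix we derive reader-reader relationships (the higher level), then project them back so each book acquires a score that depends on the social structure, not on its contents.

```python
import numpy as np

# Hypothetical reader-book matrix: rows are readers, columns are books,
# 1 means "has read". The books' contents never appear here.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

# Higher level: second-order reader-reader relationships (cosine similarity).
unit = R / np.linalg.norm(R, axis=1, keepdims=True)
S = unit @ unit.T                 # readers x readers

# Back-projection: score every book for every reader through the social level.
# A book's score now reflects who its readers are related to, not its words.
scores = S @ R                    # readers x books
print(scores.round(2))
```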
Now, you might want to argue that external influence and coercion exist through the multiplicity of craftsmen. Yet you chose the craftsmen, either explicitly or through your choices. Following the previous example, by choosing to read a book (perhaps initially based on content), you have entered into a domain of experts (a subset of other readers).
Your choice drives the identity of the elements that you will meet. Hence, when I meet a book and you meet the same book, they are not the same. Even though they may contain the same content, because each book is infused with a second identity through the second-order relationships, they are different books. Simply said, the words in the book have a different meaning to me than to you due to the different social constructs that we live in, each providing a different context to the words and hence a different meaning.
> What types of emergent properties between levels could be non-causal?
How about a recommendation engine? At the lower level there are individuals who recommend things (let's say books), so there is a pattern classifier for each individual; no hierarchy yet. Now the question is which book I should read, so I need a higher-level answer.
A first method (1) for providing an answer might be some function of the lower level (a causal relationship): for example, take the book that the majority of the individuals read, and recommend that at the higher level.
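Method (1) as a few lines of Python (my illustration, with hypothetical data): a fixed function of the lower level that returns the same answer no matter who asks.

```python
from collections import Counter

# Hypothetical lower level: each individual's recommended books.
recommendations = {
    "alice": {"Dune", "Solaris"},
    "bob":   {"Dune", "Neuromancer"},
    "carol": {"Solaris", "Dune"},
}

# Method (1): majority vote, a direct function of the parts.
counts = Counter(b for books in recommendations.values() for b in books)
print(counts.most_common(1)[0][0])  # "Dune" -- the same answer for every asker
```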
But we can do something different, method (2). We can create relationships between the individuals based on something other than their favorite books. Let's say I chose to make the relationships based on movies, while you chose to make them based on favorite politicians. These relationships provide for a higher-level structure (here there is a hierarchy), e.g., this group of individuals prefers action movies or left/right-wing politicians.
Now we can utilize this second method (2) to provide an answer to which book I should read: we can go back and say, people who like action movies, or who share a favorite politician, prefer this book.
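A sketch of method (2), with hypothetical data and a secondary attribute I invented for illustration (favorite movie genre): the answer now routes through a higher-level grouping and differs depending on who asks.

```python
from collections import Counter

recommendations = {
    "alice": {"Dune", "Solaris"},
    "bob":   {"Dune", "Neuromancer"},
    "carol": {"Solaris", "Ulysses"},
    "dave":  {"Solaris"},
}
# The secondary dimension used to form higher-level relationships.
favorite_genre = {"alice": "action", "bob": "action",
                  "carol": "drama", "dave": "drama"}

def recommend(my_genre: str) -> str:
    # Map back from the higher-level grouping into the book question.
    peers = [p for p, g in favorite_genre.items() if g == my_genre]
    counts = Counter(b for p in peers for b in recommendations[p])
    return counts.most_common(1)[0][0]

print(recommend("action"))  # "Dune"
print(recommend("drama"))   # "Solaris" -- same classifiers, different answer
```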
Is there a difference between method (1) and method (2)? I want to argue that there is. In a pure causal relationship, I would not expect that, given the same set of individual pattern classifiers (book recommendations), they would come to different conclusions for me or for you. But they do, because we added a dimension of information that is specific to you or to me.
You could argue that once all is said and done, when we know which secondary parameters are employed to create the relationships between the individuals, then we have a causal system. But that is cheating. I agree that when you map an external system into your closed system, you create a new closed system (we had this conversation before).
So the emergent system exists for a brief period in time, while we each map the two systems differently. After we have mapped the systems, yes, they look causal, but the mapping itself is an emergent process.
This is true because when I decided to utilize movies as my method for relationship creation, I did not know how that would play out. The result of utilizing movies created the new external system. Remember the steps in the process: first I chose a metric for relationships; then I let the individuals work out their own relationships based on the metric I chose; then I utilized the groups that they created to map back into my closed system and create the meaning I needed to determine which book to read.
So, before I started, I did not know how it would end (because I did not know what relationships would exist in the other system and hence how they would map to my system). After all is said and done, of course it looks causal; that is the nature of emergent systems: in retrospect they look causal.
This is very different from method (1), where a priori we know the resolution to the problem, since we start and end within the SAME closed system.