Emergent behaviour is a characteristic of many AI systems. So what exactly is it and how important is it that we understand what's going on?
Emergence is a phenomenon that can occur in any system that is made up of component parts - such as a crystal that is composed of atoms, a multi-cellular organism or a colony of bees. It happens when the interactions of the parts cause the system as a whole to have additional and often unexpected properties or behaviours - beyond those associated with the individual components.
The idea that the whole is greater than the sum of its parts has been around for a long time, and it turns up in many areas. For example, emergence is used to account for how biology and even consciousness can arise from lifeless underlying physical processes.
Our perception of emergence as novel or surprising is of course entirely subjective. In many cases we simply cannot comprehend complex systems well enough to predict their behaviour - let alone their interactions with one another and their environment. This mismatch between predicted and observed behaviour can have positive and negative implications.
Systems can surprise us in all sorts of ways. For example, a system can do something it was not designed to do - or it can do what it is supposed to do but in a novel manner. It is entirely possible for the same system to exhibit multiple different forms of emergent behaviour - even simultaneously.
Emergent behaviour has a long association with software and particularly with AI. In conventional software circles emergence is nobody's friend. The idea that regular software should start to do something that wasn't intended is every developer's worst nightmare. It is like an irreproducible defect from hell.
On the other hand the AI community tends to regard emergence more positively. Emergent behaviour is treated like the secret sauce that takes otherwise conventional software to the next level. By analogy, if emergence can cause a bunch of biological neurones to think then it can do the same for electronic or software versions of neurones. Indeed, much of AI depends on technology that encourages emergence to a greater or lesser extent. Still, emergent behaviour can also be problematic in AI - as with conventional code.
Emergence doesn't just happen automatically. While ultimately it is a function of the components, by definition emergence is not something that can easily and predictably be designed into a system - otherwise it would simply be expected behaviour. Nevertheless it is possible to build systems that are likely to demonstrate emergent behaviour using techniques like feedback and recursion. Not surprisingly, examples of emergence also turn up where no one was looking for them.
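To make this concrete, here is a minimal Python sketch - an illustration of the general principle, not anything from a specific AI system - of a one-dimensional cellular automaton. Each cell applies the same trivial local rule, known as Rule 110, to itself and its two neighbours, yet the global pattern that unfolds is famously complex. Nothing in the code designs that complexity; it emerges from the interactions.

```python
# Minimal sketch: emergent complexity from a trivial local rule.
# Rule 110 is an elementary cellular automaton in which each cell's next
# state depends only on itself and its two immediate neighbours.

RULE = 110  # the rule number encodes an 8-entry lookup table in its bits

def step(cells):
    """Apply the local rule to every cell simultaneously (wrapping at the ends)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```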
Emergent behaviour can be difficult to recognise and understand, yet we must do both - at least to some extent - if we are to leverage it. The outdated view that emergent behaviour cannot be understood - even in principle - is unscientific and no longer widely held. However, a lot of AI technology currently in use already has an intrinsic problem with transparency. This makes it difficult to understand any kind of behaviour in such systems - emergent or otherwise. Ultimately, no one can ever quite trust a system that can't be understood. As a result AI that behaves like a black box is increasingly seen as a risk - if not a liability.
Nevertheless it is possible to build AI systems that exhibit emergent behaviour while also remaining explainable. For example, Zoea is a knowledge-based AI that transforms test cases directly into code. The components that make up Zoea correspond to different pieces of knowledge about computer programs. These components work together and self-assemble to create code fragments that represent partial and complete solutions. It is the ability of the components to interact with one another in countless unspecified ways that allows Zoea to produce complete programs from static test data. You could say that Zoea only operates as a result of emergent behaviour.
Each component in Zoea is fully explainable in terms of the information that triggered it, the rationale used, the hypotheses created and the code fragment produced. As a result every decision regarding each element in the generated code can be traced back through the complete reasoning process to the source data. This remains true regardless of whether the system is behaving as expected or not.
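As a purely hypothetical illustration - this is not Zoea's actual mechanism, syntax or reasoning process - the following Python sketch conveys the general flavour of test-case-driven synthesis. Small code fragments are composed and checked against the test cases, and the winning composition doubles as a trace that explains how the result was derived.

```python
# Hypothetical sketch of test-case-driven synthesis (NOT Zoea's actual
# mechanism): compose small code fragments and keep the first composition
# that satisfies every test case. The composition itself is the trace.

from itertools import product

# A pool of primitive fragments, each a named unary function.
FRAGMENTS = {
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
    "inc":    lambda x: x + 1,
}

def synthesise(tests, max_depth=3):
    """Search compositions of fragments that satisfy all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for names in product(FRAGMENTS, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = FRAGMENTS[name](x)
                return x
            if all(program(i) == o for i, o in tests):
                return names  # doubles as the derivation trace
    return None

# The test cases alone specify the behaviour: f(x) = (x + 1) ** 2
print(synthesise([(1, 4), (2, 9), (5, 36)]))  # -> ('inc', 'square')
```

Even in this toy setting the program is assembled through component interactions rather than written down, and every element of the result can be traced back to the fragments and test cases that produced it.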
Zoea has frequently demonstrated additional forms of emergent behaviour throughout its development. Very often during testing it will find a simpler or alternative version of the specified program. Also it will frequently use unexpected or sometimes bizarre programming techniques. Initially, such behaviour made testing more of a challenge as there can be many functionally equivalent variants of the same program. Once this kind of behaviour was recognised and understood it was easily addressed. Indeed, it would not have been possible to build Zoea if it were not itself explainable.
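As a hypothetical illustration of that testing challenge - again, not actual Zoea output - all three of the following Python functions satisfy the same test cases while using entirely different techniques:

```python
# Three functionally equivalent programs: a synthesiser free to explore
# its search space might plausibly return any of them.

def reverse_a(xs):
    return xs[::-1]            # slicing

def reverse_b(xs):
    return list(reversed(xs))  # built-in iterator

def reverse_c(xs):
    out = []
    for x in xs:
        out.insert(0, x)       # explicit construction
    return out

tests = [([1, 2, 3], [3, 2, 1]), ([], [])]
for f in (reverse_a, reverse_b, reverse_c):
    assert all(f(i) == o for i, o in tests)
```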
AI systems often have little or no definition of how a goal should be achieved. In this respect many existing AI systems already demonstrate a degree of emergent behaviour. It is commonly expected that future AI systems will employ even higher levels of emergent behaviour to improve their performance and versatility. If anything, this will make it even more important that we are able to understand how they operate.