Promise Theory's 20-year anniversary
Promise Theory and Semantic Spacetime as a Theory for Agent Systems: a personal review
Promise Theory was first presented to an academic audience at the DSOM Conference in Barcelona in 2005, then (less technically) to an industry audience at Google Santa Monica in 2008. It began as a proposal for overcoming the limitations of command-and-control system design in computer systems, using agents and their promises as a conceptual building block. From these beginnings, Promise Theory grew to touch on areas like sociology and management. This year, Promise Theory turned 20 years old. The IT industry has embraced promise-compatible ideas in SOA and microservices, and systems like Kubernetes famously employ them; academia, however, has all but ignored Promise Theory, clinging to traditional deontic logics. Now, with the sudden popularity of “AI”, the industry seems to be trying to reinvent Promise Theory, and my metaphorical phone has been ringing, so it seems right to recap.
What if we turned our world view upside down? During a time of religious dominion, we modelled the world as external forces compelling agents to behave. But how does it look if we accept that change is determined independently at each point? This is the short version of Promise Theory. Whether your agent/container is a spacetime point, a liver cell, a car, or a person, predictability starts from within. We call it “voluntary cooperation”, but voluntary only means determined autonomously.
Promise Theory goes back to my attempt to write a textbook suitable for university courses on scientific methods for computer management. At the time, no such book existed, but I had committed myself to teaching the course in order for the subject to qualify for a Master’s degree at the university in Oslo. I threw everything I could think of together into a volume called Analytical Network and System Administration. It was an interesting exercise, heavily influenced by my background in physics. But, on “finishing” the book, I realized that something crucial was missing. What are systems actually for?
The purpose of something is not a question one asks in natural science. Things don’t have purposes (at least that we know of), they simply exist. But systems do have functions and intended outcomes. There seemed to be no way to capture that in a meaningful language. In computer science, half wedded to electrical engineering and half wedded to mathematics, logic was the only tool in play. In particular, so-called modal logics, and deontic logic especially, were the approach of choice. Deontic logic is the modal logic of obligation: of what must be done.
System X MUST obey this law
System Y IS OBLIGED TO terminate on this condition
Without exception, everyone rushed to use these modal logics. So I looked into this and was horrified by what I saw. To a physicist, this was completely unnatural. Although Newton’s universe is somewhat modelled on this idea of an all-powerful hand of God moving objects around at will, science has since evolved beyond this to study phenomena without a master controller. Worse, in my view, there were obvious logical or philosophical flaws in these logics: short cuts used to make progress with the algebra. For instance, idempotence of the “necessity” operation was assumed (in modal notation, □□p ≡ □p). It must be necessary that it be necessary. I confess to rolling my eyes. That was just a fudge.
My own journey in the 1990s had led me to develop an agent-based solution to managing and healing faults in computers, called CFEngine. This had been purely intuitive, but it worked well, and it was the complete opposite of obligation logics. So my response was to reject modal logic and look for something based on the causal independence of agents, which I had built into the globally adopted CFEngine configuration system, and which was the source of its success in many ways.
So I went looking for a “physics of autonomous systems”, in order to understand why this approach was more successful than the traditional deontic “obligation logic” models used by academia. Autonomous agents became the natural “atoms” from which system processes are built.
Causal origins
The basic starting point of Promise Theory was to model autonomous agents, where autonomous meant “causally independent”, i.e. not receiving any instructions from without. Everything that they did was decided from within. This immediately led to confusion, because the pre-existing field of multi-agent systems defined autonomous agents differently: an autonomous agent in MAS simply meant an agent that worked alone (but still with a controller).
Even physics has a split brain on this issue. On the one hand, we speak of the “laws of physics” (as Newton’s generation did, implying something given by an authoritarian god); on the other hand, there is quantum theory, in which the basic building blocks behave very much autonomously, subject to various constraining interactions.
In computer science, it was simply understood that every agent had to be told what to do by a central command-and-planning controller, as in the Control Theory of analogue systems. But, in my world, I knew that agents couldn’t rely on communication with a boss: they needed to be able to resolve issues independently when there was no network communication or possibility of being instructed.
Moreover, agents couldn’t guarantee anything deterministically: no matter how hard we may stamp our feet and want it to be true, agents might be unable or even unwilling to comply with instructions from a controller, because they are usually out in the field, cut off from ideals and dealing with reality, not mere theoretical constructs with infinite capability. If a battery was running out of power, if information was dropped, or if there happened to be a fault in a vital instrument, how could they comply with an “obligation” imposed on them by a controller? It was just nonsense, but the telecom industry was used to being able to strong-arm reliability by brute force, and that model was ubiquitous.
And so, I began to formulate a theory of such agents. Only much later did I finally choose the term “promise” for a declaration of “intent” or purpose. It was an idea like momentum in physics, not an idea like conscious agency in philosophy, but either could apply depending on how elementary or sophisticated an agent was. So this proto-Promise Theory set about trying to describe how agents could communicate their actual states and intentions, possibly based on an original command instruction, but contingent on reality.
The academics at conferences were interested in CFEngine, because it had unwittingly become some of the most used and successful software on the planet. You could turn over practically any stone in a datacentre, and you’d find CFEngine running like moss underneath it. Even today, some of the largest cloud systems still have bits of legacy CFEngine code running. However, the haughty academics did their best to ignore the ideas, and still do to this day!
To oblige or not to oblige
It should be obvious that obligation-based reasoning is fraught with problems and inconsistencies. I used the example of “you say tomayto and I say tomahto” to explain this at Google in 2008. Suppose a child has one British and one American parent (obviously divorced). Each tells the child that it must speak properly, according to that parent’s own instruction. The child cannot resolve this inconsistency in pronunciation, because the “policy” originates from a place it doesn’t control.
Every agent can only promise its own behaviour, not the behaviours of other agents. This became the first axiom.
The multiple sources or originators can’t resolve the situation without knowing and “talking to” one another, which they (naturally) refuse to do. If the child decides for itself, there is no conflict. It can promise to cooperate with an outside suggestion voluntarily, but the entire system doesn’t hang or blow up if it receives conflicting advice. Remarkably, in the moral philosophy of promises, which a few authors had written about, promises were universally assumed to generate obligations to keep them! This was an extraordinary leap of subordination on the part of humans. Every author reduced promises to obligations in this way. Clearly, the religious overtones were still strong. But I was having none of that, for good reason.
In real-world systems, agents that base their actions on outside instructions are vulnerable to hacking, so CFEngine rejected that. As a result, it has an immaculate security record, because it refused to accept instructions imposed on it from outside over public channels. Each agent would decide if, when, and from where it would accept new information. Determinism comes only from within.
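To make the pull model concrete, here is a minimal sketch in Python. All names are hypothetical illustrations, not CFEngine’s actual API: an agent that never executes pushed commands, and only fetches policy from sources it has itself chosen to trust.

```python
# A minimal sketch of pull-based autonomy, loosely in the spirit of
# CFEngine's design. All names here are illustrative, not real APIs.

class AutonomousAgent:
    def __init__(self, trusted_sources):
        self.trusted_sources = set(trusted_sources)  # the agent's own choice
        self.policy = {}                             # last voluntarily accepted policy

    def receive_push(self, source, command):
        # Imposed commands are never executed. The agent's behaviour
        # is determined from within, so a push is at most noise.
        return False

    def pull_policy(self, source, fetch):
        # The agent initiates contact, and only with sources it trusts.
        if source not in self.trusted_sources:
            return
        proposal = fetch(source)
        if self.acceptable(proposal):
            self.policy = proposal  # voluntary cooperation

    def acceptable(self, proposal):
        # Local validation: the agent only adopts promises it can keep.
        return isinstance(proposal, dict)

    def converge(self, actual_state):
        # Repair drift toward the promised state. Running this twice is
        # harmless: convergent maintenance, not one-shot commands.
        for key, desired in self.policy.items():
            if actual_state.get(key) != desired:
                actual_state[key] = desired  # placeholder for a real repair
        return actual_state
```

The point of the sketch is only its shape: causation flows outward from the agent’s own decisions, so a hostile or conflicting instruction from outside has no channel through which to act.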
To a physicist, it was clear that (truly) autonomous agents were a fundamental building block of system cooperation. They were the “atoms” in a chemistry of cooperation. More importantly, these building blocks were not passive things (as physics assumed) but active processes (as physics was later forced to accept). I began to formulate a representation of this idea to formalise what I’d already implemented intuitively in CFEngine.
Early on, I met the Dutch computer scientist Jan Bergstra, a professor of logic and process algebra in Amsterdam. We quickly developed a mutual respect, became friends, and worked on the later formulation together for a number of years. He encouraged me to focus on the ideas before diving too quickly into algebra. This turned out to be an important lesson. His knowledge of logic was enormously helpful to me, because the ideas of Promise Theory defied ordinary logic: promises do not form a logic in any ordinary sense. I think this was both exciting and shocking to him. He remarked that in fifty years of modal logics (deontic logic in particular) almost nothing had been accomplished, and yet Promise Theory could quickly see through problems to a kind of natural solution. In one subsequent episode, I was embarrassed to resolve the central problem of a student’s PhD thesis in a ten-minute whiteboard discussion.
My initial thought was to expunge the notion of obligation altogether, but my chance encounter with Jan Bergstra set me straight. One had to acknowledge the concept, whether it was real or not. He suggested that we create a signal called an “imposition” to model the interaction of trying to oblige. By giving it a different name, we could distance ourselves from obligations but at the same time model them fully. And thus Promise Theory debunked the logic of obligations by introducing elementary promises. In another effort, we partially clarified the relationship between promises and trust (something I returned to later) and also the role of “assessment”, which is the autonomous observation of agents by other agents.
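As a rough illustration of these primitives (my own rendering here, not the formal notation Bergstra and I used), promises, impositions, and assessments can be seen as three kinds of data with the asymmetry built in: an imposition carries no force until the receiver voluntarily converts it into a promise of its own.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Promise:
    promiser: str   # the agent making the promise (about itself only)
    promisee: str   # the agent to whom the promise is made
    body: str       # what is promised, e.g. "serve DNS on port 53"

@dataclass(frozen=True)
class Imposition:
    imposer: str    # the agent attempting to oblige another
    imposee: str    # the target of the attempt
    body: str       # what is being demanded

@dataclass
class Agent:
    name: str
    promises: list = field(default_factory=list)

    def promise(self, promisee, body):
        # First axiom: an agent can only promise its own behaviour.
        p = Promise(self.name, promisee, body)
        self.promises.append(p)
        return p

    def receive(self, imposition, willing):
        # An imposition has no force by itself. The receiving agent may
        # voluntarily adopt it as a promise of its own, or ignore it.
        if willing(imposition.body):
            return self.promise(imposition.imposer, imposition.body)
        return None

    def assess(self, promise, observation):
        # Assessment: each agent judges for itself whether a promise
        # was kept. There is no global arbiter of the outcome.
        return observation == promise.body

# The divorced-parents example: conflicting impositions cause no
# deadlock, because the child's own policy decides the outcome.
child = Agent("child")
child.receive(Imposition("US parent", "child", "say tomayto"), lambda b: False)
child.receive(Imposition("UK parent", "child", "say tomahto"), lambda b: True)
```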
Outside of academia, people caught wind of Promise Theory quickly, in both the technical world and the social world of management. It was clear that its ideas were not confined to computing agents: they could apply to any agent. A person, or even an atom, would have to be described by Promise Theory, because it was based on the most basic principles of causality. It was a physics of agents.
A physics of agents, a chemistry of cooperation
There was always a grudging acceptance of Promise Theory when presented, but still academics rejected it and continued to use obligation logics over and over again. It boils down to the sociology of research. My own hope was that more people (much smarter than myself) would work on the problem and discover important results–that we could finally get away from dead-end modal logics.
We can speculate why that didn’t happen (apart from the obvious issue of professional jealousy or “not invented here” syndrome that infects most of the dynasties in academia). To me, it had been a difficult creation. I spent a year struggling to accept key ideas, giving up on features that I had hoped would be present. It still feels incomplete. Some would say it’s “not even wrong”, or not falsifiable. I think this is untrue, but the methods of falsification are not the usual kinds of experiments we do in science, so people struggle to understand what it is. To see the obvious in the unfamiliar is sometimes tricky.
These are my thoughts about why it hasn’t been adopted more widely:
- Computer scientists find it difficult, because it is not deterministic or based on logic or type theory. It describes a best-effort outcome, which is aesthetically distressing to some.
- Autonomous agents are not naturally homogeneous. Every agent might be different, so we can’t obviously create general rules or calculate statistical outcomes at scale. It’s more like molecular chemistry than bulk statistical mechanics.
- Physicists don’t know what to do with it, because it’s not based on differential calculus. It has no single equation of motion. In fact, it has some features in common with quantum mechanics, but that too is problematic, because physics deifies quantum theory as unique and its own property.
- Some scientists are offended by the use of notions like “intent” and “promises”, which is considered anthropomorphism, but it’s no more anthropomorphic than direction or momentum. It harks back to old prejudices about the superiority of Man before God.
- Sets are the basis of Promise Theory, and numbers only come from assessments (measurements). There is no simple way of calculating basic answers from a differential equation, in the way we can in arithmetic and statistical sciences. Indeed, figuring out how to extract quantitative answers proved quite difficult and time consuming for a long time. It is possible, but it requires additional steps that are not automated.
- Mathematically inclined researchers gravitated to more formal and provable theories like Category Theory or sheaf theory, which are much more abstruse and pretentious, and therefore have a greater cachet professionally, even though they have no actual practical results.
- Agents don’t have to do anything they are told — many people just don’t like that and cover their ears, mumbling as their world view collapses.
Why would anyone work on something when the method for getting answers is unfamiliar? More generally, we are biased in favour of quantitative results. Many conclusions from Promise Theory are not simple numbers, but semantics, interpretations, qualitative outcomes, and so on. Science itself is quite immature when it comes to judging such qualitative results in a fair-minded way. The cultural split between philosophy and natural science leads to much hostility, often with good reason. One is not excused from being precise just because an idea can’t be expressed in numbers. Promise Theory straddles both worlds and thus finds itself in a place where no one really wants to be its friend. It is too mathematical for philosophers and too reliant on semantics to appease physicists. One can decide this is either a strength or a weakness, but the prevailing orthodoxy will always prefer to attack it.
So what does PT have today?
- The core concepts are in place and are quite simple and self-contained.
- The reliance on assessments mirrors the “measurement problem” in quantum theory.
- The causal structure is well understood, but it is observationally non-deterministic.
- We can detail and “explain” the structure of a number of phenomena in terms of autonomous promises (e.g. why imposition cannot succeed in general, how to align cooperatively and avoid cooperative failure).
- The relationship between assessment and trust has been beautifully confirmed by a prediction about the probable size of groups formed in human cooperation (with Robin Dunbar).
- The analysis of spacetime semantics in Semantic Spacetime offers a compelling model of knowledge and process representation, but this is more of a philosophy than a prediction.
- Jan Bergstra has developed a spinoff Theory of Accusations from Promise Theory, which deconstructs the process of accusations in the social sphere.
My personal feeling is that these new approaches to social sciences are much more powerful and fundamental than the reasoning by analogy of moral philosophy that has dominated social sciences up to now. For the first time, we have a credible approach to emulating Hari Seldon!
Perhaps the closest thing to a solved “physics problem” is the Promise Theory of trust, used to calculate probable group sizes in human cooperation. It is a model of adaptive assessment rates by agents with finite resources. This is an indirect result, but satisfying nonetheless.
What questions does Promise Theory need to answer?
Computer scientists can design provable (if only simple) systems based on logic, where the assumptions may be wrong and the cases hopelessly complex. But if one prefers the appearance of orthodox certainty over a best-effort idealization (which is the realm of physics), then it’s easy to argue against Promise Theory. Meanwhile, people in tech have been using the ideas (though not always admitting it) to design network systems and cloud computing, and people in management and sociology have been writing about the ideas in a more inspired and welcoming way.
To engage PhD students, there need to be plenty of simple questions that just require steady, plodding effort to overcome. How could Promise Theory become more attractive to researchers? What are the key questions and problems that need to be worked on?
- Develop a more regimented approach to proof using Promise Theory axioms.
- Look for approximate continuum parameterizations of discrete PT systems, to enable physicists to apply their methods to them. Perhaps quantum chemistry can be an inspiration here; stochastic mechanics is a natural contender. Work out a collective theory of agents, e.g. an Ising-model-like generalization.
- Derive an equation or method to calculate the probability that a set of agents will succeed at keeping a promise, perhaps from a generating function like the action principle (a toy version is sketched after this list).
- Calculate the doubt or uncertainty of outcome for some collaborative promise to be kept.
- etc.
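As a flavour of what the last two items might look like, here is a toy sketch under a strong (and loudly flagged) independence assumption: if a cooperative promise is kept only when every agent in the set keeps its own part, and each agent’s keep-rate has been assessed from a finite number of observations, then the joint probability and its uncertainty follow from elementary error propagation. A serious treatment would have to justify or replace the independence assumption, which is precisely where the research problem lies.

```python
from math import prod, sqrt

def joint_keep_probability(p):
    """Probability that a conjunctive cooperative promise is kept,
    assuming each agent keeps its part independently."""
    return prod(p)

def keep_estimate_with_uncertainty(p, n_samples):
    """First-order error propagation: for a product of independent
    binomial estimates, the relative variances add."""
    joint = prod(p)
    rel_var = sum((1 - pi) / (pi * n) for pi, n in zip(p, n_samples))
    return joint, joint * sqrt(rel_var)

# Example: three agents assessed at keep-rates 0.99, 0.95, 0.90,
# each estimated from 100 observations.
joint, err = keep_estimate_with_uncertainty([0.99, 0.95, 0.90], [100, 100, 100])
print(f"joint = {joint:.3f} +/- {err:.3f}")  # about 0.846 +/- 0.035
```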
I’ve tried to develop the theory myself: applying it to define issues like trust quantitatively, to the semantics of ownership and economic transactions, to the semantics of spacetime and process notions (taken fully for granted in physics) that are compatible with agent axioms, and to describing the semantics of motion more carefully. Promise Theory can also be shown to be a more elementary underpinning for Game Theory. All these results feel quite satisfactory, but they don’t give an immediate lift: they only clear up previous ambiguities or oversights, so there is no obvious incentive for others to go beyond them. Meanwhile, Jan Bergstra’s spinoff theory of accusations, mentioned above, is a fascinating area. For the first time, there is an approach to dealing with hard problems in sociology without resorting to moral arguments.
Working on Promise Theory’s big questions can easily seem like working on the interpretation of quantum mechanics. It’s a fool’s errand, suitable only for older researchers who have overcome their need to belong. Part of the problem is that Promise Theory can’t immediately give people the answers they want, because it says that some of those answers are impossible to find. Some people still mumble about more established ideas like “control theory”, hoping again to recoup some kind of determinism from the age of analogue processes.
Promise Theory is more like engineering than mathematics. Its whole raison d’être is to answer questions about practical outcomes. It has already succeeded at that, but it hardly seems to have gotten started. As I approach my 60th birthday, CFEngine has passed 30, and Promise Theory hits 20, I still hope for more.
