Edge Computing and Microservice Event Horizons

It's all about space and time

From abacus to laptop, computation began in the hands of those who needed it, but lately it has become removed from that human context: concentrated in highly centralized, shared computational farms and accessed through thin-client services.

Edge Computing in Space and Time versus Centralization for Calibration as a Service

Distributed Systems

Despite what academic papers may suggest, truly distributed systems are not colocated in big datacentres; they are distributed broadly in both space and time, usually at “the edge” of the network, where user interactions take place. They have to deal with all the issues mentioned above. Leslie Lamport realized this as far back as the 1970s, yet many still don’t appreciate it.

Microservices and cooperative scope

The microservice pattern is a relatively modern extension of the age-old method of code reuse: procedural subroutine programming. This form of functional abstraction has passed through a number of incarnations: subroutines, Object Orientation, Aspect Orientation, callbacks, Service Oriented Architecture, even client-server architecture. We separate these concerns for semantic clarity, to symbolize or modularize different parts of a process, so that data flows tell a meaningful human narrative. This may actually work against performance, but it respects human cognitive capabilities for explaining the story, i.e. the semantics.
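To make the point concrete, here is a minimal sketch of the same functional abstraction at two scopes, first as a plain subroutine and then separated into a “service” object. The names (`price_with_tax`, `TaxService`) are hypothetical, purely for illustration; a real microservice would sit behind a network boundary, which is exactly where the performance cost enters.

```python
def price_with_tax(net: float, rate: float) -> float:
    """The oldest form of code reuse: a plain subroutine."""
    return net * (1.0 + rate)


class TaxService:
    """The same concern separated out as a 'service'. The caller's code
    now tells a narrative ('ask the tax service for a quote') instead of
    embedding a formula. Semantically clearer for humans, though a real
    network hop here would cost latency that a subroutine call does not."""

    def __init__(self, rate: float):
        self.rate = rate

    def quote(self, net: float) -> float:
        return price_with_tax(net, self.rate)


# Same answer either way; only the scope of the concern has changed.
print(price_with_tax(100.0, 0.25))   # 125.0
print(TaxService(0.25).quote(100.0))  # 125.0
```

The computation is identical; what the separation buys is a place in the story for each concern.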

Edge computing

The metaphorical edge of the network is where all the action is, so that’s where change is most rapid. This presents challenges when scaling updates and adding services on top of shared data.


Ten years ago the massively centralized cloud systems could just about manage services centrally, based on the implicit assumption of slowly varying edge data. Today, the growth in edge population and the increasing diversity of services and timescales for change are revealing cracks in those assumptions.

Better contextual databases

The whole world, in a sense, is a database. What we need for predictability is for databases to have useful coordinate systems, so that we can find both the latest changes and the archived past with an indexing system that can tell the difference. Voting on correctness is an inherently non-deterministic procedure; what we want is a single-valued and causal map. Explaining these issues has been my project over the past years, in my book Smart Spacetime and various articles.
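As a rough illustration of what such a coordinate system might look like, here is a toy store that stamps every write with a monotonic logical time, in the spirit of Lamport clocks. The class and method names are invented for this sketch; the point is only that both the latest value and the archived past become addressable by coordinate, rather than decided by a vote.

```python
import itertools


class CausalStore:
    """A toy single-valued, causally ordered map: each write gets a
    monotonic logical timestamp, so 'latest' and 'as of time t' are
    both deterministic lookups in the coordinate system."""

    def __init__(self):
        self._clock = itertools.count(1)   # the logical time coordinate
        self._history = {}                 # key -> list of (t, value)

    def put(self, key, value):
        t = next(self._clock)
        self._history.setdefault(key, []).append((t, value))
        return t                           # hand the coordinate back

    def get(self, key, at=None):
        """Latest value, or the value as of logical time 'at'."""
        versions = self._history.get(key, [])
        if at is not None:
            versions = [(t, v) for (t, v) in versions if t <= at]
        return versions[-1][1] if versions else None


store = CausalStore()
t1 = store.put("sensor", 20.5)
store.put("sensor", 21.0)
store.get("sensor")         # 21.0 -- the latest change
store.get("sensor", at=t1)  # 20.5 -- the archived past
```

Real systems need distributed clocks and conflict handling, but the single-valued map is the contract worth aiming for.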


As we move into an uncertain future, consuming ever more energy per answer, we need to reduce our energy waste to be more sustainable. Centralization of big and fast data is an energy sink. Memory is relatively cheap, computation somewhat more expensive, but the real killer is communication. Every bit flip costs energy and adds carbon to our atmosphere. A key approach to optimizing lookups is to minimize needless copying and transport, with a mixed strategy of caching, lazy evaluation, and just-in-time delivery. This approach already underpins content delivery networks (CDNs) and streamers like Netflix and Spotify, platforms that trade data access rates against spatial replication. Beyond that, we need to scope data properly in space and time: temporary data should not be replicated, and local data should not leave their compound.
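The caching-plus-lazy-evaluation trade can be sketched in a few lines. Here the expensive `fetch` stands in for a network transport; it runs only on demand and only once per key, so repeat requests are served from local memory, spending a little spatial replication to avoid repeated communication. The function name and payload are illustrative, not from any real API.

```python
from functools import lru_cache

FETCHES = 0  # count the costly transports


@lru_cache(maxsize=256)        # a small replica cache near the consumer
def fetch(key: str) -> str:
    """Lazily fetch a payload: evaluated only when asked for,
    and only once per key thanks to the cache."""
    global FETCHES
    FETCHES += 1
    return f"payload-for-{key}"


fetch("a")
fetch("a")   # served from cache: no second transport
fetch("b")
# Three requests, but only two transports: FETCHES == 2
```

The same shape recurs at every scale, from a CPU cache line to a CDN edge node: pay for a copy once, then answer locally.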



Mark Burgess



@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org