Why Semantic Spacetime (SST) is the answer to rescue property graphs

How we misrepresent reasoning and how to fix it

10 min read · Aug 17, 2025


Knowledge representation is in a state of confusion. The first step was to confuse data and information with knowledge. The next step was to believe that reasoning is a form of logic. The final step was to misunderstand the role of graphs in knowledge representations.

Thanks to “AI” (quote-unquote), knowledge graphs are back in fashion. They were introduced in the 1990s. We imagined them poorly then and we imagine them no better today. While it is true that anything can be reduced to a graphical form of nodes and simple links, it takes some discipline to do this, and it involves multi-scale thinking. Not all nodes and links should be considered equal, if you don’t want to end up in trouble. The standard approach today quickly gets into trouble.

It’s not about using graph databases versus other kinds of databases either. It’s about how we make models that reflect the causal behaviour that data represent, and how to use scale to represent abstraction. Too often, computer science thinks of data in terms of static snapshots over a single scale, rather than as multi-scalar dynamic processes. As we weave modern systems ever more tightly into our daily lives, that belief is crumbling and failing. Moreover, our fascination with logic itself is based on the fallacy that there are invariant truths about the world that can be determined by naive storytelling of the most simplistic kind.

Logic makes molecules, reasoning comes from soup

There are trivial graphs. They look like small molecules. For instance, a classic one is the central hub, pointing to a starburst of things around it, which is the basic model of networking on the Internet.


The star network is an ownership (or "property") model, which is simple and certain. There is a planetary owner surrounded by a number of satellite things in its orbit. The whole planetary system is an extension of the central mass. This is formally a graph, but it’s not a very interesting one. Orbital graphs are trivial because they don’t go anywhere. Any kind of database could represent ownership. There is nothing especially graphical about it. Schema or schemaless is just a convention. It boils down to a trivial query: take me to your leader X, and tell me who your leader X leads. This is simple and logical, but it takes us only one hop at a time.

A collection of such orbital systems, floating ad hoc in space, is what random access databases look like. To find some information, you want to teleport to a known place and then see what’s there, in their orbit.

The Great Unexplored Unknown, on the other hand, is a kind of formless soup, which might have logic deep down on a microscopic level, but it tells no interesting stories. Usually, we are searching for knowledge and we don’t know what we’re looking for until we find it and feel “satisfied” with the result. Reasoning is about emotionally satisfying outcomes. It isn’t something logical, unless you have very strict rules for every move. The problem is that, in order to have very strict rules, you have to know everything in advance with strict precision. There can be no inference.

The time we need a more involved graph is when we don’t know where we’re going. We start from "somewhere", and we are trying to find a path (or a story) that takes us to somewhere that we find "satisfactory". Graphs can be such maps that tell us where we might need to go. A random access datastore doesn’t do that.

How we arrange information in space is something of an art. The science of that art began in physical warehouses and shops, where (for user convenience) related items would be placed next to one another to encourage more sales. Later Amazon copied this in their online store: "users who bought X also bought these other things", etc. Graphs can map out those additional relationships by adding a new scale of meaning.

Knowledge is multi-scalar

A purely archival model of planetary records has no sense of distance between the different planets, no difference between planet or molecule, person or concept. When you go into a filing archive room, records are typically stored in order by name or date, and that ordering alone decides how we find what we’re looking for. The “primary keys” for lookup are all the same: they are just proper names, with no relationships between the keys. It's little more than a telephone directory. The data are large collections of ad hoc molecular graphs.

With molecular graphs, we can’t reason across records with relationships. Yet this is what we want to do — by following our noses, such as when we are looking up something on Wikipedia. Inside a topic, there are stories, which is where the relationships behind more interesting graphs lie.

A non-trivial graph is a map of a journey

The virtue and the problem with graphs as a representation is that they are structures that represent journeys. A collection of points is formally a graph, but it contains no clues. You have to inspect every single point yourself one by one, just as in a file archive or random access database. We invented indexing as a tool for creating secondary maps for such archives.

The point of a graph representation (stored in a database or just scribbled on paper) is to let the data be its own map. Every node's links are a local, private index of causal relationships. When you are at location X, it shows you what is close to you and what is presumably related (assuming someone made the connections in a manner designed to help rather than to hinder). Ciphers and mazes are graphs that are designed to lead you down blind alleys, to obscure and hinder.
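To make the "local index" idea concrete, here is a minimal sketch (my own illustration, with made-up node names, not any particular database's API): each node stores its outgoing links, and walking those local links recovers a path — a story — from one place to another.

```python
from collections import deque

# A toy graph: each node carries its own "local index" of outgoing links,
# so standing at a node tells you where you can go next.
graph = {
    "warehouse": ["loading_dock", "office"],
    "loading_dock": ["truck"],
    "office": ["archive"],
    "archive": [],
    "truck": [],
}

def reachable_story(graph, start, goal):
    """Breadth-first search: follow each node's local links until a path
    (a 'story') from start to goal emerges, or report that none exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(reachable_story(graph, "warehouse", "archive"))
# ['warehouse', 'office', 'archive']
print(reachable_story(graph, "truck", "office"))
# None -- no local links lead there; the map has no such story
```

A random-access datastore answers only "give me record X"; the graph's local links are what make "where can I go from here?" answerable at all.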

So we come to the simple purpose of graphs. We are trying to document coherences. If graphs have no coherence, you can’t use them to make predictions. Logical relationships are extremely short range. There is no continuity that allows you to make inferences that are “transitive”.

Transitivity is one such inference:

If A is the same colour as B and B is the same colour as C then A is the same colour as C.

Only very generic relations even potentially exhibit transitivity, and many fail it. For example:

If A is next to B and B is next to C, A and C may or may not be next to each other.

We like transitivity because it allows us to teleport across several steps to somewhere new. But the reason we can’t do it is that properties do not teleport. The confusion lies in the obsession with data as ownership properties. There are other relationships, like geometry: order and distance that do obey simple rules of inference, but most relationships don’t. We need more sophisticated ideas to represent inferences in general.

If A is close to B and B is close to C, …. what? Define “close”!

If A looks like B and B is an arm’s length from C, that tells us nothing.

If A looks like B and B looks like C, then A may look like C, but how alike is “like”?

But if we repeat this kind of game, like a game of Chinese whispers, then that similarity between A, B, C… can degrade over distance–which, of course, is the point of the game.
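The degradation can be made concrete with a toy calculation (my own sketch, not part of SST proper): give each "looks like" link a score between 0 and 1 and multiply the scores along a chain. Similarity decays with every hop, which is exactly why "looks like" cannot be chained as if it were transitive.

```python
# Each link carries a similarity score in [0, 1]; chaining multiplies them,
# so the whispered message degrades with distance.
chain = [("A", "B", 0.9), ("B", "C", 0.9), ("C", "D", 0.9)]

similarity = 1.0
for a, b, score in chain:
    similarity *= score
    print(f"{a} ~ {b}: cumulative similarity {similarity:.2f}")

# After three hops the A-D similarity is 0.9**3, roughly 0.73:
# each hop is plausible, yet the end-to-end claim is already weak.
```

The numbers here are arbitrary; the point is structural — any per-hop similarity below 1 compounds toward 0, so similarity inferences are short-range by nature.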

Semantic Spacetime is a simple approach

The Semantic Spacetime (SST) model is a simple method that explains graphically how spacetime processes (and thus knowledge) work. You won’t read about it in any industry standard manual yet, because the major houses all make money from perpetuating old-fashioned models that force you to need their help. With SST, the emphasis is on you making your own knowledge. It is an open method, building on what you already know how to do: make notes in natural language. SST helps you to formalize and add discipline to your notes, gradually transforming them from notes into a navigable graph. Moreover, the process of doing that also helps you to understand what you are writing.

The concept of space is something we now take for granted, because we are so familiar with it from everyday experience. But our ideas about space were shaped in a single image: they go back to the Greeks and Euclid’s famous book about geometrical ideas, and they have been etched into our thinking by modern schooling. Space is a simple abstraction that lets us organize things into ordered sequences. We reach for spaces whenever we can, because our visual senses (for those who are lucky enough to enjoy them) help us to visualize anything by painting a picture on some imaginary 3D canvas. Graphs are also “spaces”, but of a different kind, and we typically make the confusion deeper by embedding graphs in spaces and trying to convert economical graphs into vectorized spaces.

Semantic spacetime bridges these worlds, using relationships that we take for granted and stripping away the speculative embedding. If you make a map of the major cities in your country, you don’t need to draw every single point or blade of grass in between them to have a useful representation of the cities in relation to one another. Direction and distance suffice for most things. That gives us a graph, not a space–an embedding of the graph back into a space gives us nothing but empty wasted space.

From property graphs to SSToperty graphs

There are more relations of interest than “is a property of” (ownership). There are four main relationships:

  • A is before or after B (for events)
  • A is inside or outside of B (spatial containment, for things)
  • A is a property or aspect of B (property, for ideas)
  • A is similar to B (a kind of short cut or statement about interpreting distance)

There are three kinds of node: Events, Things, and Concepts. The precise labels we put on arrows are far less important than which of these types we intend. Types 1 and 2 are transitive: if A is after B and B is after C, then A is after C. Types 3 and 4 are not obviously transitive and require more care. So we see that the popular focus on property graphs makes inference very hard. Properties are not transitive, and there is almost no generic sense in which we can make rules for them.


The only way to enable reasoning and inference by logic alone is to force the modeller to obey strict rules in defining nodes and links, without any errors. Then a small number of (mostly trivial) inference rules can be either true or not.

The entire charade is about forcing humans to jump through hoops for computers, but in technology the tool is supposed to help us–not the other way around.

Following type 1 or 2 links immediately tells a basic kind of story, because events are generic, no matter what their properties. They are spacetime process markers. The story we find might not make obvious coherent sense, but it will be a true set of inferences. Once we arrive at a special event location we can inspect its properties, and that gives a characterization without inference. Inferences from type 3 are simple: tell me everything that has this property, or tell me all the properties of this place. Inferences from type 4 are generally one-to-one short cuts: are these two things the same or not? Don’t confuse things that merely appear similar based only on their properties or names.

In Semantic Spacetime,

  • Chapters and regions encapsulate intent. The same intentions may occur in several regions. Intent matters more than names. Graphs do not have to be simply connected like vector spaces, e.g. the same topic can occur in completely different chapters. This is not true of a vector space, only for local patches. So we can separate intentions without necessarily partitioning the atoms or nodes.
  • Contexts are scenario descriptors, describing virtual chapters, or changes of relevance.
  • Start with events. Nodes that are just proper names are not very informative. Forget things and think of events that describe actual knowledge. Events can be broken down into components to explain them.
  • Choose links that are aliases for one of the 4 basic types: order, containment, property, and similarity. The regularity of having only 4 types with specific process-reasoning semantics makes inference trivial.
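The points above can be sketched in a few lines of code. This is my own toy illustration, not the SSTorytime API: the link-type names and node labels are invented, but the inference rule is the one described — chains are followed only for the transitive types (order and containment), while property and similarity links are read off in a single hop.

```python
# Four SST link types (shorthand names are my own invention).
# Only order and containment links are chained transitively.
TRANSITIVE = {"LEADS_TO", "CONTAINS"}

links = [
    ("boil water", "LEADS_TO", "brew tea"),    # type 1: order (events)
    ("brew tea", "LEADS_TO", "drink tea"),
    ("kitchen", "CONTAINS", "kettle"),         # type 2: containment (things)
    ("tea", "PROPERTY", "hot"),                # type 3: property (ideas)
    ("tea", "SIMILAR", "coffee"),              # type 4: similarity (short cut)
]

def infer(links, start, link_type):
    """Collect everything reachable from start via one link type,
    chasing chains only when that type is transitive."""
    found, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for a, t, b in links:
            if a == node and t == link_type and b not in found:
                found.add(b)
                if t in TRANSITIVE:
                    frontier.append(b)   # safe to keep following the chain
    return found

print(infer(links, "boil water", "LEADS_TO"))  # {'brew tea', 'drink tea'}
print(infer(links, "tea", "PROPERTY"))         # {'hot'} -- one hop only
```

Because there are only four types with fixed semantics, the inference engine needs no per-relation special cases: the type alone decides whether a chain may be followed.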

It is an enormous failure of imagination on the part of computer scientists and engineers, if not the whole of science, that we always put convenience ahead of innovation. The recent flirtation with chat bots masquerading as “AI” is a classic example of this. If we can have a ready-made microwave dinner at the push of a button, we can always convince ourselves that it’s what we want–even that it’s healthy. So easily we forget to ask the obvious questions.

Start with events and then break them down…

You can try this for yourself. You should be able to replace a clumsy knowledge property graph quite quickly with something more natural, readable, and useful.

There is much more to read about knowledge in this series.

Get involved in SSTorytime

The open source SSTorytime project already covers generic graph database semantics and shows you how to build actual knowledge models. It’s free to use. You can now use SST technology on top of a Postgres datastore to model graphs and spaces.


Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org
