Universal Data Analytics as Semantic Spacetime

Mark Burgess
20 min read · Sep 6, 2021

Part 6. Applying reusable SST patterns for graph data modelling

With the basic tools and conceptual issues in place, we can begin to apply them more seriously to build database-backed models of the world, including (machine) learning structures. The purpose of storing data is then not just to build an archaeological record of events, but to structure observations in ways that offer future value to possibly related processes. This is where Semantic Spacetime helps, but there are other practical considerations too. In this installment, I’ll walk through the Semantic Spacetime library package for Go to explain what’s going on.

Semantic Spacetime treats every imaginable process realm as a knowledge representation in its own right. It could be in human infrastructure space, sales product space, linguistic word space, protein space, or all of these together. In part 5, I showed how certain data characteristics naturally belong at nodes (locations) while others belong to the links in between (transitions).

How could we decide this rationally? Promise Theory offers one approach, and explains quite explicitly how models can be a “simple” representation of process causality.

Choosing Node and Link data fields

When we observe the world and try to characterize it, we have to distinguish between modelling things and modelling their states. These are two different ideas. Trying to think about objects led to the troubles in Object Oriented Analysis mentioned in the previous post. If we think about state, some information is qualitative or symbolic in nature (definitive and descriptive, and based on fixed-point semantics), whereas other information measures quantitative characteristics, degrees, extents, and amplitudes of phenomena. The latter obey some algebra: they may combine additively or multiplicatively, for instance.

In data analysis, and machine learning in particular, probabilistic methods stand out as a popular way of modelling. Probabilities are an even mixture of qualitative and quantitative values. The probability of a qualitative outcome is a quantitative estimate. Using probability is partly a cultural issue — scientists traditionally feel more comfortable if there are quantitative degrees to work with, because it makes uncertainty straightforward to incorporate. Thus one tends to neglect the detailed role of the spanning sets of outcomes. In probabilistic networks, people might refer to such outcomes in terms of convergence points in a belief network. But it’s important to understand the sometimes hidden limitations of probabilities in modelling. Sometimes we need to separate qualitative and quantitative aspects in a rational way, without completely divorcing them!

In IT, there are databases galore, vying for our attention and trying to persuade users with impressive features. Yet you shouldn’t head straight for a particular database and start playing with its query language expecting to find answers. Wisdom lies in understanding how to model and capture processes with integrity, at whatever fidelity you need. My choice of tools for this series (ArangoDB and Go) was based on trial and error, after five other attempts.

Promise Theory, nodes and links

Our guide, behind the scenes, is Promise Theory. Promise Theory applies an extended form of labelled graph to model cooperative scenarios — with a few insights added. It was introduced originally for modelling distributed control structures, but has since been generalized into an independent tool for analyzing network processes of any kind. Promise Theory sees the world in terms of agents (which are nodes in a graph) and promises which are self-declarations by the agents. Promises don’t form links by themselves, but can combine, like chemical bonds, to lead to cooperative links. Promise Theory is a minimal model that unifies many ideas about interaction. It’s also the theoretical foundation for Semantic Spacetime.

Promise Theory tells us that, if some information (a property) comes from inside one agent alone, then that agent is its sole source, and thus the property can be promised by that agent alone — using data from within that node. Even if the data were passed to it by another agent, in a previous interaction, it’s only what is in this source agent at the moment of interaction that matters. This is true on a point to point basis, transaction by transaction.

Conversely, if a property is a characteristic of some process that’s on-going between two nodes (a kind of entanglement, or co-dependence) then such a property is shared information, and thus the agent it arises from is the combination (entangled superagent) of the two — so the information belongs between them. The obvious place is then the directed link between the agents, which is represented as an “edge” in a graph database. A graph link symbolizes several promises and captures the idea of a directional process entanglement. There can be many links of different types of edge between the same pair of nodes (idempotently accumulated so that there are no identical duplicates), so we can model qualitative issues by type and quantitative issues by embedded weight or value.

Object modelling sometimes fails to model such cases because it has no concept of what could be “between objects”. That can be worked around by modelling shared entanglements, but it may lead to an unwanted explosion of types.

Sorting out the mechanics of where to put data fields — of how to update and retain information in a learning structure — is the first order of business with a data model. Ultimately, we’re looking for a dimensional reduction of concepts, to span a large number of ad hoc data with a small number of modelled concepts. All the programming intricacies shouldn’t obscure that goal.

So — now that we have all the issues in place — let’s solve them using our Go and ArangoDB tools, building on their low level programming APIs to make something a little more user friendly for scientists.

Turning principles into a template: pkg/SST

Take a look at the pkg directory in the SST github repository. This is where you’ll find all the code for this blog series. Part of that code is a Go package or library. Follow the “destructions” in the README file (or in part 2 of this series) to connect it to your GOPATH and you’ll be able to start using and modifying this proto-SST library for your own projects. The remainder of this post gives a quick overview of the package and how to use it.

The example code below shows a template for what every data application could look like.
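As a rough sketch, such a template looks something like this (the setup calls match the library described below; the example node names and the exact CreateLink() argument list are illustrative placeholders rather than code from the repository):

package main

import (
        S "SST" // the pkg/SST library; the exact import path depends on your GOPATH setup
)

func main() {

        S.InitializeSmartSpaceTime()

        var dbname string = "SemanticSpacetime"
        var url string = "http://localhost:8529"
        var user string = "root"
        var pwd string = "mark"

        g := S.OpenAnalytics(dbname, url, user, pwd)

        // *********** do your own stuff below this line ***********

        // Illustrative nodes and a link between them (names and weights made up)

        n1 := S.CreateNode(g, "some_concept", "a longer description of the concept", 1.0)
        n2 := S.CreateNode(g, "another_concept", "a longer description of the other one", 1.0)

        // Indicative only: see CreateLink()/AddLink() in the library for the exact signature

        S.CreateLink(g, n1, "CONTAINS", n2, 1.0)
}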

The setting up of a graph model on top of the underlying database is relatively complicated, due to the potentially large number of options involved, so the OpenAnalytics() function conceals a lot of details to keep the overhead livable. It simplifies by creating a single graph with a number of fixed interior collections. The Analytics data type has three types of node, called Fragments, Nodes, and Hubs, which can be used to model three levels of node aggregation; we often want to identify clusters, clusters of clusters, and so on.

Later, if we want to add new sets of Nodes and Links for temporary use, we call the functions AddNodeCollection() and AddLinkCollection(), which are in the SST library. I’ll come back to that when talking about matrix multiplications in part 9.

In the code example above, everything at the beginning can be pasted into any program; after the “do your own stuff” line of demarcation, we get down to defining nodes and links (vertices and edges in graph parlance). There are functions for adding Nodes, Fragments, Hubs, and Links between them. The code is designed to be easy to understand, not to impress hard-nosed software developers, so there is deliberate repetition; you can decide to keep it or throw it out. I’ve added just a few data fields (descriptions and weighting values) to each kind of node and link. That’s enough to do a lot already, but you can add more (see below). Let’s run through the structure of what’s going on.

The code is packaged as a toolkit on github. You can browse the repository, download as a zip file, or use the “git” tool on Linux to copy the whole project using “git clone”.

First: opening everything up

As we’ve seen in past installments, we have to open a database as a service, and then create and open various collections, every time we run an application. The library bakes all of this into an “analytics” object. The OpenAnalytics() function then proceeds to:

  • Open/create Database Service connection
  • Open/create a specific database project by name
  • Open/create collections for nodes, hubs, and components (3 layers)
  • Open/create collections for edges in each of the four spacetime link types (Follows, Contains, Expresses, Near).
  • Make all of the above available through a single reference “g” for “graph”.

The code to set everything up is just this:

S.InitializeSmartSpaceTime()

var dbname string = "SemanticSpacetime"
var url string = "http://localhost:8529"
var user string = "root"
var pwd string = "mark"

g := S.OpenAnalytics(dbname, url, user, pwd)

Notice the use of the prefix “S.” to call functions in the library. Notice also that we assign the result to a variable “g” that we are now going to use to pass to graph management functions. It contains references to all the structures and their open handles. The full structure is defined in the pkg/SST.go file, and it looks like this:

type Analytics struct {

        // Container db
        S_db A.Database

        // Graph model
        S_graph A.Graph

        // 3 levels of nodes and supernodes
        S_frags A.Collection
        S_nodes A.Collection
        S_hubs  A.Collection

        // 4 primary link types
        S_Follows   A.Collection
        S_Contains  A.Collection
        S_Expresses A.Collection
        S_Near      A.Collection

        // Chain memory
        previous_event_key Node
}

This contains an open database handle S_db, an open graph handle S_graph, open node collections S_frags, S_nodes, and S_hubs which describe three levels of node nesting to capture multidimensional structures, and open collection handles for the four spacetime types of link. From time to time, we might want to access these open handles directly to extend the library code.

Finally, at the end of the data analytics structure, I’ve added a reference to a “previous” link so that the graph structure is stateful with respect to process chains too. This is used in the passport example below, as well as in the next post to automatically create an incremental “log” of activity with a proper timeline. An automated chain makes it easy to build narratives without carrying around a lot of bloat.

Go Note: when using the previous_event_key value and its interface functions like NextDataNode(), we have to pass the g variable by reference to enable updating. In Go, this means passing &g instead of g, and subsequent references in the child functions have to use *g rather than g.
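A minimal illustration of the Go mechanics (not the library’s own code):

package main

import "fmt"

// Chain stands in for the Analytics structure: it carries the
// "previous event" state that functions like NextDataNode() update.
type Chain struct {
        previous string
}

// Advance must receive a pointer (*Chain); a plain value would be a copy,
// and the caller's chain state would never change.
func Advance(c *Chain, key string) {
        c.previous = key
}

func main() {

        var c Chain

        Advance(&c, "event_1") // pass &c, not c, so the update is visible here
        fmt.Println(c.previous)
}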

Adding nodes and links

To create nodes, we need a model for the documents they’ll store at each location. Programmers would try to define a generic interface here for all possible kinds, but as researchers you can start just by coding your own use-case. For the example, I’ve chosen data in this Go format:

type Node struct {
        Key    string  `json:"_key"`  // mandatory field, short name
        Data   string  `json:"data"`  // bulk data
        Prefix string
        Weight float64 `json:"weight"`
}

In ArangoDB, every node needs a key, which maps to the “_key” field for its JSON shadow representation. We can use a short string for this. It should be something that makes sense to you. Then the node needs a prefix to say which collection it’s part of. We keep these two parts separate because sometimes you need the fully qualified “prefix/name” form of a node, and sometimes just the unqualified “name” form (when working inside the collection). For the actual data, I chose a string called “Data”. It could be a whole book or just a single sentence. I also added a floating point number to act as a “weight” or score. We’ll use this later when looking at eigenvector centrality in Part 9. Floating point numbers are useful for update learning.
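For instance, a node built from this struct ends up in ArangoDB as a JSON document along the following lines. This is just an illustration using encoding/json with made-up values; the struct mirrors the one above:

package main

import (
        "encoding/json"
        "fmt"
)

// Node has the same shape as the struct above; the values below are made up.
type Node struct {
        Key    string  `json:"_key"`
        Data   string  `json:"data"`
        Prefix string
        Weight float64 `json:"weight"`
}

func main() {

        n := Node{Key: "Professor_Burgess", Data: "A person in the examples", Prefix: "Nodes/", Weight: 1.0}

        b, _ := json.Marshal(n)
        fmt.Println(string(b))
        // {"_key":"Professor_Burgess","data":"A person in the examples","Prefix":"Nodes/","weight":1}
}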

To add a node we need AddNode(), but this assumes a fully formed Node structure. It’s convenient to package up this structure using a wrapper function, CreateNode(), as an interface, so that we don’t have to think about how to write structured types as constants (though other examples in the coming posts will show that too).

func CreateNode(g Analytics, short_description, vardescription string, weight float64) Node {

        var concept Node

        description := InvariantDescription(vardescription)

        concept.Data = description
        concept.Key = short_description
        concept.Prefix = "Nodes/"
        concept.Weight = weight

        AddNode(g, concept)

        return concept
}

In the library, I also call a string replace function to exchange any spaces in the key with underscores, since spaces are not allowed in ArangoDB keys. A corresponding function for links is also in the library. These functions just pass reformatted data structures to a new function AddNode() or AddLink(), which is where the database magic happens.

Data stability — using fixed points and idempotence

As with key value pairs, adding data to a store is fraught with pitfalls for the unwary. Without care, we can fill our container to overflowing and add multiple copies of things we only need once. Mostly, we don’t want data to be fragile or to grow out of control. A simple way to avoid that is to treat data as mathematical fixed points. A fixed point X of a function f(x) has the property that:

f(X) = X,

i.e. the function has no effect on it. The functions that update a database need to behave so that if we call them once, they update data values, and if we call the functions more than once, nothing happens. Every data addition wants to be a fixed point of our process to avoid repetition. This is a form of directed idempotence.

Accidentally calling a function twice on a repeated value shouldn’t mess up a data set, add an extra node to a graph, or add an extra link. This is taken care of by ensuring convergent additions. A second issue is that we want to avoid writing to a database at all if we can help it, because that’s an expensive operation, and one that’s fraught with contention and possible errors. In summary, the behaviour we want for AddNode() or AddAnything() can be summarized by:

f(new) = result
f(result) = result

The convergence principle (also sometimes referred to as data monotonicity, when it’s scaled to distributed data) adds a spacetime directionality to data additions. The idea of convergence indicates a desired outcome — or a desired end-state. The principle uses the mathematics of fixed-points to offer data stability.

Figure 1: To make inferences about the meaning of data, we need multiple data points with the same meaning to converge to a stable region of the process graph. This is the basis for representing stable qualitative outcomes.

The functions we’ve defined thus have idempotent and convergent semantics, which we also discussed under key value pairs in part 4, and this is quite important when collecting data that are supposed to have a definite value. Later we’ll look at algorithms where a new data value modifies running averages to accomplish machine learning, but let’s take the simpler case first.
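In terms of the ArangoDB Go driver, that simpler convergent behaviour can be sketched as follows. The driver calls DocumentExists(), CreateDocument(), ReadDocument(), and UpdateDocument() are the real API; the surrounding logic illustrates the principle rather than reproducing the library’s exact AddNode() code:

package sstsketch

import (
        "context"

        A "github.com/arangodb/go-driver"
)

// Node as defined above: only strings and a float64, so values are directly comparable.
type Node struct {
        Key    string  `json:"_key"`
        Data   string  `json:"data"`
        Prefix string
        Weight float64 `json:"weight"`
}

// AddNodeSketch only writes when something has actually changed, so calling it
// again with the same data is a no-op: f(result) = result.
func AddNodeSketch(col A.Collection, node Node) error {

        ctx := context.Background()

        exists, err := col.DocumentExists(ctx, node.Key)
        if err != nil {
                return err
        }

        if !exists {
                _, err = col.CreateDocument(ctx, node) // first sighting: create
                return err
        }

        var current Node

        if _, err = col.ReadDocument(ctx, node.Key, &current); err != nil {
                return err
        }

        if current != node {
                _, err = col.UpdateDocument(ctx, node.Key, node) // converge to the new value
        }

        return err // identical data: no write at all
}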

Unique names to avoid unwanted repetition

So far we’ve used the principle of unique naming in key-value stores, and now we need to recall it for graph databases too, owing to the semantics of ArangoDB. In the case of edges, ArangoDB has a subtlety. Its policy is to keep adding copies of the same link, with new keys, if we don’t define the key ourselves, even if all the other attributes are the same. I can’t think of a use-case for this except perhaps as a way of counting. To avoid this, the SST library contains an AddLink() function that makes its own unique key from the combination of the to and from nodes, plus the link alias. The Go package hash/fnv has a suitable hash function that turns a string into a practically unique key. The SST library contains the code:

description := link.From + link.SId + link.To
key := fnvhash([]byte(description))

This guarantees that identical links deliberately collide on the same key, so that repeated additions don’t duplicate data spuriously (a kind of convergence by construction). When using hashing functions, we always have to be careful about the uniqueness of the hashes: unwanted collisions can occur if we don’t use a good algorithm or handle them explicitly. In IT, cryptographic hashes like SHA-2 (or the older MD5) are designed to avoid collisions, but they are also expensive to compute and tend to be overused, so I haven’t handled this issue in the code. The hash/fnv package seems like a good compromise here, and I haven’t seen any unwanted collisions yet.
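A plausible version of that helper, using hash/fnv, might look like this (the function in the library may differ in detail):

package main

import (
        "fmt"
        "hash/fnv"
)

// fnvhash turns an arbitrary description string into a short, deterministic key,
// so that the same (from, linktype, to) triple always yields the same edge key.
func fnvhash(b []byte) string {
        h := fnv.New64a()
        h.Write(b)
        return fmt.Sprintf("key_%d", h.Sum64())
}

func main() {

        description := "Nodes/A" + "CONTAINS" + "Nodes/B"

        // Calling this twice produces the same key, which is the deliberate
        // "collision" that keeps repeated AddLink() calls idempotent.
        fmt.Println(fnvhash([]byte(description)))
        fmt.Println(fnvhash([]byte(description)))
}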

We should be cautious about giving everything a unique name. That’s not always what we want, because it leads to a meaningless data explosion. We only name things we want to keep or classify into buckets. If we want to count similar things, then we use updating of weights, strengths, frequencies, etc, to represent changes. This is the basic form of machine learning, and follows the principle of data convergence (figure 1).

How to use and modify the library code for your own purposes

Trying to make code that will work for every possible case is a hopeless task. What I’ve tried to do in the SST package and the series examples is offer enough detail to allow anyone to modify the existing code to fit a different case. You may not need all the functions, and you may have to change several functions to get exactly what you want. Either way, the structures provided should be a good guide to keep matters under control.

Let’s look at two approaches to using the library code for a different kind of data model. The examples consider a database to capture sequences of transport movements, with a passport and visa system for tracking access through border controls. In each case, we want to make it easy to track the movements of persons with passports in a natural way. We don’t want to be typing queries in a database language. We want an operator approach, like a series of chess commands or button presses, for moving pieces around. These can then automatically and transparently update the state of a database to record the state of things.

Passports and movement as an event log

We begin, of course, with the question: what process(es) are we trying to model? To understand this, we appeal to Promise Theory (figure 2) as a guide for deciding what kind of information is needed and where it should be stored. Which agents? How many kinds of nodes?

Promise analysis shows us simply and quickly how to tell the kind of story we want to tell using the data. There are plenty of ways of approaching this, but promise analysis quickly reveals the most separable model. A partial picture is shown in figure 2.

Figure 2: Promise Theory view of a passport system. It’s tempting to think that a passport is a property of a person, because it’s held by a person, but a passport is issued by a country for a person. Similarly, it’s tempting to assume that passports must be accepted by other countries, but data offered (+) need not be accepted (-). Moreover, the granting of a visa (which is the actual entry permit) is a separate conditional promise based on the possession of a passport. Countries may also explicitly block entry to named individuals.

A passport system is quite a complicated matter, if we were to go into all the details, as indeed all “semantic spacetime” processes belonging to the real world are. Let’s keep to a skeleton model. In the example code, I’ll idealize it in three stages.

  1. Focus on the movement graph.
  2. Focus on separation of data by concern, into several types of node.
  3. Show an example of blocking exceptions, where visas are generally allowed.

Each nationality keeps its own data about passport holders, and the rights they have in that country. The passport is no guarantee of right of entry to another country. It’s just a declaration (a promise) of citizenship or membership. This may or may not be accepted by another nation. A visa document is a promise to accept a passport for a limited time. As travellers move around, we can associate them with cities or countries. So we have persons, countries, and cities to deal with. But these are just the things, not the processes we are interested in. What semantic spacetime tells us is that, if we try to model in terms of things instead of processes, we’ll likely get into a muddle. To model processes, with a timeline, we need to model “events” in which persons and locations come together. This is part of the elementary technique of composition of state that eventually leads to processes like DNA recombination! When a person visits a location, this constitutes an event, of which the person and the location are members. Locations don’t form a sequence, nor do people; events do. So we can say that one event FOLLOWS another, using the NextDataEvent() function.

In the library, NextDataEvent() only works in a single-threaded environment, as written. Go has atomic and mutex-based update facilities that could generalize this, as sketched below; a database transaction would also work.
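For instance, one way to make the chain state safe for concurrent callers is to guard it with a mutex (an illustration only, not part of the SST library):

package main

import (
        "fmt"
        "sync"
)

// EventChain is a stand-in for the "previous event" state kept in Analytics.
type EventChain struct {
        mu       sync.Mutex
        previous string
}

// Next swaps in the new event key and returns the previous one, atomically,
// so several goroutines can extend the timeline without racing.
func (c *EventChain) Next(key string) string {
        c.mu.Lock()
        defer c.mu.Unlock()

        prev := c.previous
        c.previous = key
        return prev // the caller links prev -> key with a FOLLOWS edge
}

func main() {

        var chain EventChain

        fmt.Println(chain.Next("event_1")) // "" (no previous event yet)
        fmt.Println(chain.Next("event_2")) // "event_1"
}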

In the first version, I use the SST library without modification. This is easy but not completely natural as we have to shoehorn concepts like persons, countries, etc into the existing node types, which is a form of abuse. The Go code is in Passports1.go and the main section, consisting of “chess moves” looks like these excerpts:

CountryIssuedPassport(&g, "Professor Burgess", "UK", "Number 12345")
CountryIssuedVisa(&g, "Professor Burgess", "USA", "Visa Waiver")
PersonLocation(&g, "Professor Burgess", "USA")
PersonLocation(&g, "Professor Burgess", "UK")

Spacetime coincidence modelling

In modelling spacetime processes, we have to work with the sensory stream as we find it, not with derivative conceptualizations such as “is an employee of”. It’s hard to think at the level of invariant associations when constructing a database from data that are changing, so we need to transform the problem into a kind of sensor-stream monitoring, or event-logging interface. It’s the classic monitoring/observation problem. So, instead of working with query language statements like “INSERT LINK BETWEEN NODE X AND NODE Y”, which result in invariant states, we introduce hubs for events that tell stories about a changing configuration timeline, i.e. about combinations rather than invariant things: person and job are invariants, but “person has job” is not. What we observe is “does the person currently have the job?”, or an interaction whereby “person is (un)assigned job”.

We use the procedural model in Go to interface these abstractions into spacetime declarations. e.g.

PersonLocation(&g, "Professor Burgess", "UK")

A coincidence (things occurring together) becomes an event by virtue of happening at a particular spacetime coordinate. How we model space and time is unimportant; it’s the simultaneity or coincidence of things that defines an event. This function helps us to translate observations into invariants using the model:

func PersonLocation(g *S.Analytics, person, location string) {

        person_id := strings.ReplaceAll(person, " ", "_")

        S.CreateFragment(*g, person_id, person, 0)
        S.CreateHub(*g, location, "", 0)

        // Event: Darmok, Gillard at Tanagra

        var short, long string

        short = strings.ReplaceAll(person+" in "+location, " ", "_")
        long = person + " observed in " + location

        S.NextDataEvent(g, short, long)
}

We first create a (fragment) node for the person. Then we create a (hub) node for the location — this is how we adapt the generic model in this first version. Then we call the NextDataEvent function which creates an (event) node and makes the person and location members of it. See the full code in Passports1.go.

The order of calling these commands is the order of “proper time” in this process. Program Passports1.go is a simple application of the standard SST library (without modification). It clumsily shoehorns a special case into the general model, perhaps unnaturally. Version 2 below fixes this.

% go run Passports1.go
Timeline: UK granted passport Number 12345 to Professor Burgess
Timeline: USA grants visa Visa Waiver to Professor Burgess
Timeline: Professor_Burgess_in_USA
Timeline: Professor_Burgess_in_UK
Timeline: France grants visa Schengen work visa to Emily
Timeline: Emily_in_Paris
Timeline: USA grants visa Work Visa to Captain Evil
Timeline: Captain_Evil_in_UK
Timeline: Captain_Evil_in_USA

What this output (and the graph in figure 3) shows is that there is a unifying timeline, represented by the events, in which the process itself leads to the formation of the graph as a sensory stream. It’s a point of view captured in a graph of invariants with an embedded history timeline. Once formed, we are not bound to retell only that particular story: we can parse the graph in other ways to tell different stories.

In this first version, the story remains quite simple, which is good as an example, but it doesn’t capture the full potential of the method. After all, we want to use computers and analytics to help us with scenarios that are not easy to comprehend.

Figure 3: There is a global timeline in the black nodes, which remains quite separate from the other invariants. The cluster on the left shows relationships between persons and passports or visas. The cluster on the right shows Emily in Paris (an observation, e.g. spotted on CCTV, or Netflix!).

Note that we express ideas like CountryIssuedPassport() simply by extending the list of associations with a specially named EXPRESSES association. Since a passport is something static, granted by a nation state, it is a promise made by the country, so we use EXPRESSES to model this promise:

S.ASSOCIATIONS[pass_id] = S.Association{pass_id, S.GR_EXPRESSES,
        "grants passport to",
        "holds passport from",
        "did not grant passport to",
        "does not hold passport from"}

S.CreateLink(*g, country_hub, pass_id, person_node, time_limit)

The underlying CreateLink() function does the real work, after marshalling the parts of the configuration. In the second version, we build a more specific model by modifying the SST code, separating the concepts properly by defining node types:

// Node types

S_locations A.Collection
S_countries A.Collection
S_persons   A.Collection
S_events    A.Collection

To do this, we abandon importing the SST package directly, and instead copy the template code into the end of Passport2.go as a standalone program. Then we can alter it to adapt it sensibly and meaningfully to this special case.

We can now add proper type registrations for the kinds of node, with CreateTYPE() functions for each type, to annotate them as documents before linking into a graph. By decoupling the model, we can add this information at any time. The convergent semantics prevent us from duplicating nodes when receiving new information. So now, we can also add ad hoc links, such as person aliases for multiple identities:

mb1 := CreatePerson(g, "markburgess_osl", "Professor Mark Burgess", 123456, 0)
mb2 := CreatePerson(g, "Professor Burgess", "Professor Mark Burgess", 123456, 0)

CreateLink(g, mb1, "ALIAS", mb2, 0, 0)

CreateCountry(g, "USA", "United States of America")
CreateCountry(g, "UK", "United Kingdom")

CreateLocation(g, "London", "London, capital city in England")
CreateLocation(g, "Washington DC", "Washington, capital city in USA")
CreateLocation(g, "New York", "Capital of the World")

LocationCountry(g, "Washington DC", "USA")
LocationCountry(g, "New York", "USA")
LocationCountry(g, "London", "UK")

france := CreateCountry(g, "France", "France, country in Europe")
paris := CreateLocation(g, "Paris", "Paris, capital city in France")

CreateLink(g, paris, "PART_OF", france, 0, 0)

// Mark's journey as a sequential process

CountryIssuedPassport(&g, "markburgess_osl", "UK", "Number 12345")
CountryIssuedVisa(&g, "markburgess_osl", "USA", "Visa Waiver")
PersonLocation(&g, "markburgess_osl", "New York")
PersonLocation(&g, "markburgess_osl", "London")

// This could be a problem, because we haven't made a collection for cities
// Requires some additional logic

CountryIssuedVisa(&g, "Emily", "France", "Schengen work visa")
PersonLocation(&g, "Emily", "Paris")

// Captain Evil's journey as a sequential process

CountryIssuedVisa(&g, "Captain Evil", "USA", "Work Visa")
PersonLocation(&g, "Captain Evil", "London")
PersonLocation(&g, "Captain Evil", "Washington DC")

The code is found in Passport2.go. The resulting graph is shown in figure 4 below.

Figure 4: In this example, the graph is better annotated, with cleaner types, and more features. That makes it also more complicated. Strong types are helpful to separate node semantics: events, persons, locations, countries, etc, but they are still insufficient to represent all the relationships. The Arango browser struggles to show all the details, as it’s a fairly simplistic viewer. I’ve read that this is going to improve.

Although the model is now more sophisticated, the picture is harder to parse. We can no longer see the proper timeline so easily (though it’s still there, from right to left in the dark nodes), due to all the side connections. The side connections annotate countries and persons that participate in events, linking an ephemeral history to its participating process invariants. Because these are invariants, they don’t change in spacetime and so they must be fixed points, which is exactly what we captured with the idempotent semantics (see previous posts). The semantic spacetime model immediately helps here, because if we filter the graph on only FOLLOWS links, the timeline magically reappears without any knowledge of the specific processes or their semantics.

These timeline links are conveniently separated into their own ArangoDB collection by the model, so we don’t even have to perform complex filtering and joining to reproduce it. This is what makes ArangoDB stand out as my choice for this series. The same approach could be quite clumsy using other tools.
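For example, reading back the timeline can be as simple as walking the Follows edge collection. In this sketch the AQL string and the collection name "Follows" are assumptions about how the collections end up being named; the driver calls Query(), HasMore(), and ReadDocument() are the real ArangoDB Go driver API:

package sstsketch

import (
        "context"
        "fmt"

        A "github.com/arangodb/go-driver"
)

// ShowTimeline prints the FOLLOWS edges, which by themselves reproduce the
// event timeline without any filtering of the other link types.
func ShowTimeline(db A.Database) error {

        ctx := context.Background()

        cursor, err := db.Query(ctx, "FOR l IN Follows RETURN l", nil)
        if err != nil {
                return err
        }
        defer cursor.Close()

        for cursor.HasMore() {

                var link map[string]interface{}

                if _, err := cursor.ReadDocument(ctx, &link); err != nil {
                        return err
                }

                fmt.Println(link["_from"], "->", link["_to"])
        }

        return nil
}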

% go run Passports2.go
Location: Washington_DC is in USA
Location: New_York is in USA
Location: London is in UK
Timeline: UK granted passport Number 12345 to markburgess_osl
Timeline: USA grants visa Visa Waiver to markburgess_osl
Timeline: markburgess_osl_in_New_York
Timeline: markburgess_osl_in_London
Timeline: France grants visa Schengen work visa to Emily
Timeline: Emily_in_Paris
Timeline: USA grants visa Work Visa to Captain Evil
Timeline: Captain_Evil_in_London
Timeline: Captain_Evil_in_Washington_DC

The ArangoDB visualizer isn’t easy to work with (it’s still quite young and simplistic), but it should be clear from the model that we can show different relationships: time (what follows what), space (what contains what), and composition (what refers to what) by following the four different kinds of link, regardless of what their specific semantics are. This is the Semantic Spacetime principle in action, and we’ll be using it again and again for the remainder of the series.

Last but NOT least: negative semantic spacetime

In the final version 3, I’ve added a more realistic policy for visas — one of denying specific individuals, rather than just accepting named individuals, using the negative association types. Often, countries have general arrangements to allow visa free travel to most travellers, but then they have a blacklist of known troublemakers, whom they want to block. The appendix Passport3.go shows this by the addition of a visa blocking action. Unfortunately the current visualizer in Arango can’t show this distinction in the graph without some modifications.

You should now have all you need to begin to modify the code in the SST package and play with simple models of your own. In the next installment, I’ll show some more examples of how the functions can be used very unobtrusively so that database and storage details disappear into the background as we focus on applications.
