
Semantic Spacetime 2: Why you still can’t find what you’re looking for…

How to better know what you know “you already know”, you know?

Apr 8, 2025

In the first part of this essay, we saw how information takes on certain shapes, forming a graph. We can try to use this idea to our advantage when organizing information we would like to have as knowledge–but how? We need to make good maps. We need to learn to think in vectors! Putting information into boxes, into file archives, or just into your ailing brain is easily done, but getting it out again is a lot harder. Faced with mountains of boxes, stacked in rows of warehousing, real or metaphorical, we often give up. Why do we put ships into bottles?

When we’re working at first hand on some task, and we’re “in the zone”, we have all knowledge at our fingertips, seemingly without effort. Later, perhaps just minutes later, everything about the situation is different, and remembering it is another story.

Example note in N4L

Knowledge originates from past experience. It might be a direct sensory experience, or a second or third hand experience from a book. The closer we are to it, the better it sits.

We might remember a song we learned when we were children, because we sang that song many times either with foreboding or with happiness, but we don’t remember our French vocabulary from school just a year later, because we only used it for our homework and for a specific kind of question on an exam, and never in real life. Placed into the wrong kind of context, we neither engaged our intentionality, our motor cortex, nor received the reward of a successful outcome to ingrain it in our memories. We never intended to use the knowledge for anything, so it went out with the bathwater.

If we intend to learn something, whether for life or even for a short period, we have to revisit it and use it regularly. We can make and rehearse notes that tie it to our own thoughts and efforts, engaging our intent to know it, e.g. by promising ourselves that we mean it. But we have limited attention and we may quickly forget as our focus drifts. We help ourselves by writing things down, by looking them up again to remind ourselves of what we already know somewhere in our leaky heads. If we want to record something usefully, in a form where an assistant or a computer could help us to become our better selves, it helps to use a certain structure or "shape" in the way we write.

In this second part of the essay about the SSTorytime knowledge project, I'll discuss the method we can adopt to write down our learning for both purposes.

Example note in N4L

The key is simple enough (repeat after me): it’s not knowledge if you don’t know it! We need to realize what that means.

The SSTorytime project proposes that there are two parts to knowledge representation that software can help with:

  • Taking suitable notes to form a map
  • Querying the map with simple tools to help us think.

Separation of concerns vs map making (structuring data for retrieval)

Databases are not designed for knowledge, as such: they are data stores. We should never confuse raw data with what it means to know something. Nor should we confuse the strategy for archiving with a suitability for recall.

Imagine if cities were designed like databases. We would make everyone with blue eyes live in the Northwest of town, in one tower block. Green eyes, in the next block. If your eyes are halfway between green and blue, then you’re in trouble or you might be assigned two apartments, or none. Apples are located on Fruit street in the Food district, whereas Butter is in a small area of the Dairy Row which you can’t get to directly from Fruit street. Flour is located in the wheat warehouses, which are next to the Bread Bowl and the Pasta Pennetentiary. If you want to make an apple pie, you have to make several long journeys across town and JOIN the outcomes to assemble the data. Then you are still left with the task of processing them together.

Luckily, we plan cities more carefully than we plan data. Cities are planned as processes, not as taxonomies or ontologies. Neighbourhoods have key resources and services where they are needed. Well, there’s the exception of Brasilia, a new-town that tried to organize the city by function, more like a database than a living space, but it quickly unleashed a massive traffic problem as people were forced to drive across the city all at the same time for the simplest task. Recall Terry Gilliam’s movie Brazil? “Central Services!”

Example note in N4L

Informatics teaches us to “separate concerns” when designing systems, but we often tend to do so by categorizations based on what we think things are, rather than what they need to do. The endless propagandizing of the “is a” relation to justify the Object Orientation movement was one of the historic embarrassments for Computer Science. We were encouraged to separate types by identity rather than by activity, even when the primary clue was in the methods rather than the classes.

On the scale of a city, the physical distribution of shops and supermarkets is not designed “in 3rd normal form”, uniquely centralized in a single location like central services. However, inside each replicated shop, goods are more likely to be placed together by category, because the categories map more closely to the processes they are used in. This is not about an ontology of what things are, but about tailoring the layout to the stories of what people are looking for and how shoppers move around. Are you baking a cake or making dinner? Stocking the freezer or buying things for personal hygiene (in the bathroom, the bedroom, or the kitchen)? Things we typically forget are placed near the entrance/exit–which is usually where the payment cashiers are located–because they can’t be avoided.

In user-based design, designers and marketers speak about “user journeys” and “user stories”, because mapping out these timelines is the way one shapes trajectories toward successful interactions. Stories are repeatable patterns of intent that have a special significance. Sure enough, when there is money involved, designers of knowledge see every interaction as a process in space and time–just as we discussed in part 1 of this essay. Knowledge is built from such stories.

Every journey tells a story

Not everyone likes the word story; it conjures negative associations for many scientists and professionals. Stories are just nonsense we tell children before bed, right? We like to think that what we know and write is more than a mere fictional invention. That, of course, undervalues fiction writers, not to mention being slightly naive, but one can also sympathize with the sentiment: science doesn’t want its strenuous efforts for truth to be discussed as if they were on a par with something simply made up. The Norwegian word for story, “fortelling”, is somehow nicer. It simply means something that’s told, or literally something which is “for telling” to someone else. Nevertheless, science is about what has happened in the past, and every past history contains “story”. But, words don’t matter, right? Whatever shapes intent matters.

In SSTorytime notes, we signal the meaning in the parenthesized links between bits of text. The words can be chosen freely, but the software will know what to do with the information because every combination of words represents one of the four horsemen of meaning: leads to, contains, expresses property, or is near/similar to.
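For instance, a few lines of notes might each use an arrow from one of the four families. This is only a sketch: the items are invented, and the arrow names are chosen freely, exactly as described above:

    seed (leads to) plant
    recipe (contains) list of ingredients
    chili (expresses property) spicy
    lime (is similar to) lemon

The software only needs to recognize which of the four families each parenthesized name belongs to; the wording itself is ours to choose.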

Let’s use the word story to provoke some thought. Suppose we start with some material we've made notes on, or collected experiences about. What kinds of questions or intentions might we have for retrieving knowledge? Here are some examples of one-off queries:

  • Tell me the (his)story of…
  • Tell me a possible prognosis for…
  • How can I…?
  • Read me the destructions about …
  • Test me like a quiz on the subject of …
  • Trace the source of …
  • Track down the perpetrator of …
  • Who else can I ask about…
  • Where else can I get …
  • What might happen if …

One of the hardest things for someone to learn is ad hoc factual information, which is not part of our normal experience, because we don’t immediately know how to incorporate it into our lives. Indeed, we might have to change our lives to accommodate the knowledge! We need to work towards that goal, and taking and revising notes is part of that process.

Individual knowledge

A case that I find particularly compelling for knowledge representation is using the computer to help me practice my intent to learn, by reminding me of my notes each day, like going to the gym:

  • Show me notes systematically from the beginning, chapter by chapter.
  • Show me things at random, with the part I’m trying to learn as the focus.
  • Always show me how each factoid is used in practice.

Of course, even the desire to know might not be enough to overcome the work of learning. Learning is hard.

For example, suppose I am trying to learn a foreign language, like Mandarin Chinese. I can use items and links between them to map between written forms. For an English learner, Mandarin involves three forms of writing: the Chinese symbols (hanzi), the phonetic form (pinyin), and of course an English translation. Converting from one to the other can be marked with an arrow. There are six possible arrows between these forms: (eh), (he), (ep), (pe), (ph), (hp). As we write these down, our order might vary depending on how we learn the information. By building a map using arrows, a computer process can easily rearrange these to suit us when we’re in a different mood:

Example note in N4L
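The note itself is an image above, but a minimal sketch of the idea might look like the following, assuming the first letter of each arrow names the source form and the second the target (e for English, p for pinyin, h for hanzi):

    hello (eh) 你好
    你好 (hp) nǐ hǎo
    nǐ hǎo (pe) hello

A tool reading such a map can traverse the arrows in either direction, or reorder the triples, to quiz us in whichever direction we find hardest.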

Some parts are easier to repeat than others, and therefore easier to learn. Writing this reminder is, so far, a trivial matter. But, in order to make it relevant to our lives, we want to add observations and comments that help us to understand the raw facts, e.g. an example of how to use a word:

Example note in N4L
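Again sketching the pattern rather than reproducing the image, an annotated entry might read (the phrases are invented purely for illustration):

    thank you (ep) xièxie
    xièxie (ph) 谢谢
    谢谢 (e.g.) 谢谢你的帮助 means thanks for your help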

Now we’ve introduced a new kind of arrow, from facts to examples, called (e.g.). More importantly, we will want to annotate these basic translations, and even order them into chapters. We are now close to thinking in types, but where the types are vectors — not an ontology of things, but rather types of scenario.

Example note in N4L — notes can mix different types. It shouldn't be about types.

The deceptively tough question is: how do we choose the "right" arrows (or the best possible arrows) to annotate these factoids? Is there a correct shape to knowledge, and does it depend on the names we use? This is an issue that we’ll come back to.

With machine assistance, or even old fashioned human assistance, we could employ a translator to tell us what is being said. In doing so, we must be aware that we are trusting a potentially misleading and inaccurate channel of information. It's not the same as actually knowing it ourselves. Why do we trust this interloper to accurately represent the intent of the speaker? Mandarin Chinese is an excellent example, because the shape of its language, and the culture of expression it stems from, are notably different from English. When we rely on a third party, we face the Intermediate Agent Theorem in Promise Theory, which tells us that what we hear comes from the intermediary, not the original voice. It (coincidentally) encapsulates what we call “Chinese whispers” in English!

We all choose our battles, so there’s nothing wrong with trusting someone or something else, up to a point, especially if we have no intention to integrate certain topics into our lives. But that’s the opposite of knowledge management. If we want to learn, gladly or even begrudgingly, then we can still choose well.

Machine assistance: planning and speculating

We can get remarkably far just by coughing up what we already know, as long as our experiences are sufficient. If we ever doubted that, then Large Language Models have surely shown it beyond doubt. But that isn’t the end of knowledge, because what that enables is the ability to speculate and make predictions–as long as we know what we already know. This is how we make strategies for problem solving.

If our cognitive skills originate from the need for repeated navigation, then path solving is a natural case for our process-based thought. Suppose we want to determine whether there is a route from one place to another, perhaps through a maze. Labyrinths are designed to be hard to navigate (by “messing with” our senses and flouting what’s normal in the landscape), so we can try to solve a route through a maze of one-way streets (see figure).

A map of one-way streets, but how to find our way? SSTorytime to the rescue!

We collect data about the streets, piece by piece, but it's hard to really know the route, because seeing the route when we are down in the weeds is still difficult for us, and we don't get much chance to practice. However, knowing the process of the maze, and with the help of a tool, we can ask the data for the stories that take us from entrances to exits. Once the path is marked, it's obvious. A computer can keep the whole picture in mind in a way that a human can't. If we make it into a graph, then there is a simple algorithm for solving paths (quite like quantum transition matrices). This is an area where having some machine assistance is valuable and time saving. It doesn’t compete with our skills but amplifies them. The SSTorytime example search_maze shows how to do this.

Output of the maze solver

Notice that in this example, N4L is not the best way to enter the data. Sometimes we need help getting the data into the system too. Find the right tool, and always make your tool do the work!
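To make the path-solving idea concrete, here is a minimal sketch in Go, the language of the SSTorytime project. This is not the actual search_maze code: the street names and layout are invented, and the real tool works on a graph compiled into the database. The algorithm is ordinary breadth-first search:

    package main

    import "fmt"

    // findPath searches a directed graph of one-way streets for a
    // route from start to goal, using breadth-first search.
    func findPath(edges map[string][]string, start, goal string) []string {
        // Remember how we reached each node, so the path can be rebuilt.
        prev := map[string]string{start: ""}
        queue := []string{start}
        for len(queue) > 0 {
            node := queue[0]
            queue = queue[1:]
            if node == goal {
                // Walk backwards from the goal to reconstruct the route.
                var path []string
                for n := goal; n != ""; n = prev[n] {
                    path = append([]string{n}, path...)
                }
                return path
            }
            for _, next := range edges[node] {
                if _, seen := prev[next]; !seen {
                    prev[next] = node
                    queue = append(queue, next)
                }
            }
        }
        return nil // no route from entrance to exit
    }

    func main() {
        // One-way streets as adjacency lists (from -> to).
        streets := map[string][]string{
            "entrance": {"a", "b"},
            "a":        {"c"},
            "b":        {"a"},
            "c":        {"exit"},
        }
        fmt.Println(findPath(streets, "entrance", "exit")) // [entrance a c exit]
    }

Once the streets are in graph form, finding the story from entrance to exit is trivial for the machine, which is exactly the kind of assistance described above.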

Next, thinking more strategically now, Simon Wardley has long championed the idea of planning using spacetime maps for a kind of evolutionary process thinking. These have come to be called Wardley Maps. They are tools for brainstorming and discussion, but one way to read them is to look for pathways from one event, or from some influence, to a desired outcome. Here is an example map that I picked from one of Simon’s posts:

The value of Wardley maps may lie in the detailed brainstorming that leads to them, but being able to find a quick summary of the routes to success is another way that a machine algorithm can offer assistance, without competing on fundamental skills — especially if the maps become large and detailed. The structure of the map is to order value increasing Northwards, and evolutionary time or maturity going Eastwards. We can represent the two main processes by introducing arrow vectors: a north-east arrow for gradual maturity, and a north-west arrow for risk-taking developments:

Wardley maps build on two kinds of process, with LEADSTO vectors

These are proxies for proper time steps. In N4L:

Excerpt of map in N4L
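The excerpt itself is an image, but a hedged sketch of the pattern might look like this. The maturity chain uses Wardley's standard evolution stages, and the arrow names are invented stand-ins for the two LEADSTO vectors:

    genesis (matures into) custom built
    custom built (matures into) product
    product (matures into) commodity
    The Internet (enables) Sovereign Individual

The real map contains many more nodes between The Internet and the Sovereign Individual; the sketch only shows the two kinds of arrow.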

Using the path solver, to see how, say, The Internet enables the notion of the Sovereign Individual, is just a case of entering the boundary conditions and running the solver to see if there is a direct or indirect path:

Using the path solver, we see the imagined route from Internet as an enabler of Individual Sovereignty

Spanning knowledge with graphs, well and poorly

The first inventors of computer knowledge representations were computer scientists. They were taught to think about reasoning in terms of logic, so the models of knowledge ended up looking like a bunch of theoretical data types. Later came the idea of knowledge graphs, in which things and ideas were connected by arrow relationships or mere adjacencies. Knowledge graphs grew more out of the concept of taxonomies that had become popular in the 1800s. If we look at typical data for a knowledge graph, we see link labels like this:

  • Martin (is a friend of) Sally.
  • Sally (is a kind of) Person.
  • X (has a business relationship with) Y
  • Dog (belongs to) Martin.
  • Dog (eats) biscuits.

These typical examples reveal the persistent obsession with what we believe things are, i.e. how we imagine their generic roles. This can be contrasted with what we think they do, or how we might use them, in a causal model. The relationships are types, like “is a vegetable”, “is a wheat product”, rather than “can be used in cakes” or “goes well with Mexican food”. The representation is a bit too specific and makes asking those other questions later unnaturally difficult.
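As a sketch, the same style of notes annotated by use rather than identity might read:

    flour (can be used in) cakes
    chili (goes well with) Mexican food
    biscuits (are eaten by) Dog

The arrow names are invented, but the shift is from membership in a class to a role in a process.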

Our cognitive abilities are not obviously logical algebras or graphs. They are based around scenes, i.e. the use of our senses about space and time. So the way we think, unfolded as a more natural language, is to start with events or happenings:

  • Martin was walking his dog when he met Sally walking with Sarah.
  • Business X engaged in a contract/deal with Y.
  • Marit discovered that adding sour cream to gravy enhanced the flavour.

These sentences don’t have any obvious arrows, so what should we do? Their status in graphical terms is that they are hubs or meeting places where concepts come together to form meaningful events. The purpose of fragments of knowledge like these is to join several participant entities into observations by someone about an event that specifically took place. We say: I put on my shoes, not: an entity of type shoe AND a foot which is an attribute of a person. The nature of these statements is not logical, but causal. There is a middle ground, however. As humans, we have developed language in terms of events for a reason. Our language is not obviously designed to turn into computable propositions (which we would find difficult to understand), but in fact the distance between the two isn’t that far.

Language works by formulating scenes. Scenes are aggregations of separable entities that come together to describe a sequence of events. Each full sentence becomes a hub that connects several characters into a unifying happening. That event can be further elaborated in detail by breaking it down (and formally connecting it) into its components: who or what was involved, what was the intention, the text of the contract, and so on. What happened leading up to the event? Who are these people? What else did they do? We ask these questions because we want to know the stories behind things. Each point may come together from several stories, and may unfold into different ones.

If we break statements down they become easier to pass on and translate.

Our minds are navigators of these memories in space and time. There is a kind of hierarchy to this, which is similar to an ontology, but instead of trying to be a universal prescription of meaning, it’s actually just a “spanning tree” for a process of parsing the information. But wait! The component concepts are going to feature in several stories, not just one, as basic vocabulary. My shoes are my shoes, regardless of whether I’m talking about putting them on or having them repaired. This is why events are always new, ephemeral combinations. The shoes are a kind of abstract invariant or concept that pops up many times, correlated by the events they partake in. The shoes live at the bottom of this hierarchy, their events higher up. They are just a name for an actor in several plays. Eventually, the connections that “shoes” establishes to many events define what "shoes" means.

Our brains are very good at abstraction, so we rightly leave this classification of what things “are” to emerge over time empirically. We have more use for the recipes we can form from things. When something happens, we easily overlook details and jump straight to the next question: then what happened? We have an intent of our own that homes in on what we’re hoping to know. The way we label the intent of something in a graph is to use a specifically named link.

Example note in N4L

Figuring out how to express these chains of events meaningfully isn’t easy. That’s why not everyone becomes a teacher or a writer, why natural language captures our interest more soundly than logical steps. Yet, the essence of helping ourselves to simplify ideas remains the main way we “understand” and benefit from knowledge.

Consumer stories, private notes, and the role of intent

One way to lay out knowledge is by narrative. Narrative paints a scene in our minds, either to confirm what we already know, or, by dropping clues, to tell a story that hopefully carries us along to a punchline. This is natural to us, and has its origins going far back into the animal kingdom's history of navigating terrain. Turning information into journeys is what writers still do with books and articles. The order, pacing, and intent of the author lay out a story that we can absorb at our leisure. It’s rare that a few bullet points would provide enough context to hold our interest and activate the necessary learning effort.

With narrative, you are partly hostage to the teller’s journey: each thread has a viewpoint. The author could introduce irrelevancies to spice it up in their own mind, tell jokes, make analogies that are helpful or merely distracting. You can try to skip them to extract the relevant parts of interest, so you aren’t a completely helpless hostage. Ultimately, what will remain with you is the parts that you can use in something you intend yourself. Once again, your own intent plays a role in selecting information to remember. If you don’t intend to use the content, then it’s just passing entertainment–it never becomes your knowledge. But, when you make your own notes, your agenda may be more focused around intent.

Rather than trying to glue separable types of entity together as logical attributes, we can link them as participants in events. Compare these two alternative formulations:

Example notes in N4L, alternative formulations

When we search by Random Access Memory lookup, as with directories, archives, and databases, we want an index of pointers that take us as close as possible to the precise result we seek. We might look alphabetically, by timestamp, etc. What makes a good index key? Some writers make hopelessly inadequate word lists in their indices, as if they simply have to have an index but don’t really care to put in the work. The smart author plans ahead for every imaginable state of mind that readers will have when searching the book. Even the author him-or-herself will need this after a year or two.

This concept of the “better index” goes back to the writings of Vannevar Bush. Later came Hypertext, Topic Maps, RDF efforts, and so on. The designer of a good index tries to anticipate every question or context that the reader might have and leave keywords that point to where that discussion can be found. Even if the words don’t actually occur in the full text, references by inference can be lifesavers. Unfortunately, our knowledge representation technologies make this hard, not easy. But we can improve on them by using our knowledge of the process underpinnings.

When we strive to know by making notes, their function is something between a personalized narrative and an index for lookup. It’s always best when notes use our own words, because that engages our own intention, which is based in things we already know well, but most important of all is what we select as the key highlights and how we connect them, like events in a story. If we grab bullet points with wild abandon, stockpiling without intent, it becomes landfill. If any datastore merely treats information as a formless grey goo, navigating through it is impossibly difficult. But if we structure notes according to a simple and clear map, we’re basically making an index for our future selves.

Writing notes, because you intend something

There’s a huge number of tools for taking notes, from the simple notepad to advanced software products that allow you to manipulate multimedia objects and link them up into some kind of archive. Sorting things into baskets is how humans have organized items for different purposes since the beginning of human intelligence. But sorting is not the same as knowing or understanding. There are few tools that can look at a bunch of notes and extract meaningful predictions, hypotheses, or other forms of knowledge from them. The things we want to remember often arrive in snippets that are disorganized. We might write them down quickly, but we also need to form the habit of coming back to go over the notes later.

The direction of intent is our guide. Regardless of whether we use a machine to shore up our flagging cognitive skills, we need to capture information in a form that we understand and make it our own knowledge before building on it. For instance, imagine the following:

  • Emergency lockdowns and disaster procedures.
  • Security incident response.
  • Who to call in case of supernatural haunting.

Learning information that’s rarely used has its own risk strategy–think of the safety instructions aboard a flight. The safety demonstration is given too often for some and too abstractly for others to take seriously, but often is better than never. Fire drills only work if they are accompanied by an intent to obey–mostly we are annoyed by them. In both these cases, what we need to remember is so little that memorizing isn’t as important as being aware of the possibilities. On the other hand, if we intend to learn how to fly the plane, we would feel a need to keep notes. Luckily these emergency procedures are short enough to be intuitive. For longer and more involved knowledge, we need to make stories within stories:

  • A scientific paper might discuss several arguments and calculations, each of which is a story in its own right. They are embedded within a larger story to make a conclusive point.
  • Sometimes a single observation (as in the case of these two bullet points) is the length of your connected thought, and such fragments may remain floating in your notes for months before you deliberately harvest them to form a larger story.

As a student I took notes in lectures, simply copying from a blackboard. Every evening, I would rewrite these in my own words to make sure I understood them. Later, before an exam, I wrote them all again like my own textbook on the subject, filling in perceived gaps. A few subjects were impossible for me to digest. My notes were not searchable by computer, but I have a visual memory, so I could remember what the pages looked like where I would find something.

Example N4L note

Training good habits is the goal of the SSTorytime project. With good habits, we can help users to think for the long term–not just for instant gratification. If we encourage habits based on the processes and questions we hope to capture, rather than on mere archiving, we can make recall more powerful and useful.

Planning knowledge as events and maps

Simple, logical Random Access Memory recall suffices for many industrialized purposes. We’ve made do with SQL for half a century to solve our priority issues, because we can always hand the results to a human to select from–with a dialogue. But in the age of knowledge and complex information, we expect more than just looking up by an address.

We would like to be able to edit and play with assumptions, ask “what if” questions and otherwise harvest the fruit’n’veg of our intelligence. Today we’ve learnt to use Artificial Neural Networks to capture complex relationships through training rather than by direct input, but they still need intentional input to function. The economics are very different.

If the aim is to fetch a throwaway factoid from a dispensing machine, like a fizzy drink, then any kind of database will do the job. If, on the other hand, we want to actually learn and understand the material ourselves (on some level)–then we need to be active participants and the machinery must serve our learning needs, by feeding us compelling stories that motivate the knowledge.

As Steven Weinberg wrote (in the quote at the beginning of part 1), sometimes we can only see the truth by rummaging around in the weeds. Trying to fly too high above the fray just isolates us from the truth. We need to get our hands and noses dirty.

With the SSTorytime project, we are trying to take a route based on efficient representation. Instead of ingesting large amounts of data by machine learning, it tries to put control back in the hands of humans, by equipping them with a way to lay out their own semantic coordinates over the landscape of potential knowledge. Our notes must be breadcrumb trails to satisfactory answers.

What kinds of questions can we ask about concepts of whatever kind?

  • What other ideas or things are like it?
  • What does it lead to or where does it come from?
  • What is it part of or what does it consist of?
  • What attributes does it have, or what is it an attribute of?

Think about what you would want to say about something that mattered to you!

Some chain links in our storytelling

Notice that the type of “things” these relationships encompass goes beyond a kind of ontological identity, phenotype or object class model of property attributes, such as those we encounter in typical information models. We should be indexing possibilities, not birthmarks.

Example notes about physics in N4L

Obviously, we are now approaching specialist territory. We needn’t go into the specifics here. The key thing is that we start with the unifying event and break it up into smaller pieces by documenting its causal history. This is what doctors do with their patients, after all. An event is not a new type of composite thing. It might happen only once, or in one way. What happens next, and how it came about, are the next important things, and so on.

Trying to make information logical is primarily helpful to the person doing it, because it’s their intention that’s activated to organize it–not necessarily the reader’s. They might think with the same logic next time they need to recall it, but it’s a more risky assumption to imagine that anyone else will. Societal norms play a role in shaping this sort of groupthink, but in the information age we are much more individualistic than we were in bygone times, so we need to adapt.

If information doesn’t have a narrative timeline to draw us in, we may skip around and abandon it in no time, so the ability to dig deeper is a key signal for intent. Keeping your attention requires trust in two parts: an assessment of trustworthiness, and sufficient mistrust in knowing (or curiosity to know) the outcome to want to rummage. It’s the work of attention that makes it knowledge. We might accept or reject the knowledge, but positive and negative knowledge are both knowledge, and we can change our minds later. These are issues about ongoing context and state of mind. All of those things are in play when we interact with information.

Dealing with context

Ontology tries to deal with distinct cases by making logical data types, one for each case — like a menu of polymorphism, but this is generally too rigid. In natural language, names are names, and they overlap, sometimes accidentally, but often with purpose. Trying to turn names into rigid types throws that flexibility away.

Example note in N4L

For example, consider the earlier example of notes about unicode. It begins with a more rigid chapter heading as a container, to prevent spuriously crossing subject boundaries when uploading to storage. The colon markers, on the other hand, are soft “containers”. They suggest keywords for when we would want to use the information. These are like index terms, hoping to match what we might type in anger while looking for something. The context terms apply until the next change, or until we mark the end.

Example notes in N4L
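Since the notes themselves are shown as images, here is a rough, illustrative sketch of the pattern being described: a chapter line as the hard container, and double-colon markers opening soft contexts. The exact markers are a best guess at the notation, and the facts are placeholders:

    - notes on unicode

    :: character encodings, text formats ::

    UTF-8 (is a kind of) unicode encoding
    UTF-8 (e.g.) the default encoding for web pages

    :: history ::

    ASCII (leads to) UTF-8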

The recipe for recall…

Here’s how I imagine using the SSTorytime tools. The goal is not to write a book, but to create actionable summaries that prompt your own recall of things in the right context.

  • Be on the lookout for information you can use!
  • Cut and paste any short text you find (if you can) that you think is important, together with its URL, into your notes.
  • Better still, write down what you see in your own words.
  • Write down all your random thoughts about things before you forget them.
  • Go back and turn all these scraps into meaningful comments by breaking them up further, like bullet points in a storyline, and adding notes and examples.
  • Add contexts as keywords that you like, to index how you expect to use the knowledge in future.
  • Add your own research, from other sources, to try to make a meaningful resource from your notes. Don’t forget to add where you found the information.
  • Simply throw away things that no longer make sense to you. They are now beyond saving.
  • Run your notes through the N4L compiler to make sure they are parsable by the computer.
  • Upload the results to a database so that you can browse and search the notes in different ways.
  • Keep returning to organize and further annotate your knowledge.

The major challenge that we must return to later this year is the question of how we use context to find relevant information.

Give it a try!

Isn’t gamification the answer to learning?

There's a new kid in town. A generation of business owners reared on computer games has lately taken to trying to appeal to learners by hacking their addiction centres with the gamification of learning. Duolingo, the language learning app, is perhaps the best known example. These apps are extremely popular, but how effective are they?

Actually this isn’t a new idea. Like classroom and pop quizzes, and even examinations, games are supposed to motivate learning. But the risk with gamification of knowledge is that the game takes more of the attention than the material. Do people know what they’ve memorized, or are they like Pavlov’s Parrots, merely memorizing and repeating a phrase to fetch a reward? If you train your recall on a single key, you will become an expert at the game, and still know nothing about the subject. This was my experience when trying to learn a language. I got full scores on the app, but could never recall anything when I needed to.

It’s ongoing research, please help!

The tools being developed in SSTorytime are being designed for ease of use by as wide an audience as possible. This is a very hard task to fulfil, so we need your feedback. The only way to get experience and improve the tools is to use them! Please try them and let me know what you think.

Thanks!


Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org
