Entanglement of Software Agents
How some quantum weirdness may be explained by a software model
Last year, I made a model of quantum entanglement in software. Below, I'll try to explain how to make a model of virtual processes that is dynamically similar to the Einstein-Podolsky-Rosen-Bell-Aspect experiment, so that a series of independent trials of the experiment converges on the result predicted by quantum mechanics. Some would have it that this is impossible, but here's a proof by demonstration that similar weirdness is not particular to the quantum realm. It shows that entanglement and “spooky action at a distance” can exist in everyday software systems; they are not limited to quantum mechanical particles. This may help us to understand quantum mechanics too.
The common refrain about “quantum weirdness” seems to be a result of interpreting what happens in too rigidly "classical" a way, and of artificially restricting what classical can be in order to manufacture a dichotomy. Here, quantum-like effects can be understood purely as the result of interior deterministic processes that decouple from the usual linear motion of particles and give rise to a dynamical interference.
Intro
Bell's inequalities have been touted as a way to rule out (at least certain kinds of) hidden variable models in quantum mechanics. These arguments are known to overstate what's meant by hidden variables, as Bell himself pointed out over the years. Von Neumann’s classic proof that hidden variables couldn’t work is also known to be wrong. Moreover, there have been numerous papers showing how extremely prosaic classical systems can violate Bell's inequalities — which is not quite the same thing as exhibiting behaviour similar to quantum entanglement. So far, to my knowledge, no one seems to have simply tried to reproduce the quantum mechanical predictions explicitly. The world still seems too captivated by the thrall of quantum weirdness to try. So let's try. In a sense, this experiment does exactly what is claimed to be impossible: it shows how hidden variables can generate the quantum result.
I use the term "virtual motion" (or Motion of the Third Kind) to describe what happens when a process propagates across some kind of background of states, like a program running across a network of computers. Waves on water are one kind of virtual motion (the waves move on average, the water doesn't); software agents running in the computer cloud are another. I'm going to use this as a model for how quantum mechanics could actually work under the covers.
So, this is about a kind of hidden variable model using virtual motion rather than physical motion — but it’s not the usual kind of hidden variables referred to in quantum mechanics textbooks. Moreover, it rejects the need for some other assumptions that bind one to a flawed interpretation, namely the machinations of “wave function collapse”, so it also has no need for decoherence in the usual sense to explain measurement. Finally, it has no need for the equivalent of “faster than light communication” to correlate spins, as long as the lifetimes of interior states are sufficiently long or unaffected by their environments.
In a nutshell, the way we handle the measurement issue is how we overcome the usual arguments about quantum weirdness. Put another way, the assumptions about what a state is and how it gets measured are what I take issue with. Here, I take an extremely conservative view, using nothing more than the Schrödinger equation and standard transition rules to be compatible with information theory. There's no shock interpretation about consciousness or many worlds here. It's very pedestrian!
I should point out (as loudly as possible) that this is not a complete model of quantum mechanics, by any standard. All it really seeks to show is that quantum behaviour is not as special or unique as many claim. It does this by constructing a completely ordinary everyday model of “virtual motion” using computers. This is the kind of process we find in “the cloud” all the time. I show how such processes can exactly reproduce the statistical prediction of the EPR-Bell entanglement scenario.
I did this in response to a number of scientists who were trying to tell me that complex numbers and superposition are unique to quantum mechanics. This sounded like quantum mysticism to me, so I wanted to show how it could work using a kind of model that I’d worked on over the years, in something called Promise Theory. I’ll show how complex numbers are just one representation of an interior process with a phase.
The model, then, is only about how superposed interior spin states can measure boundary conditions that are determined far apart. What this model doesn’t show is how momentum states interfere in something like a double slit experiment. That’s a topic for a different occasion, perhaps one day when I’m sufficiently annoyed once again!
The software model
These notes are based on my original write up from last year. Consider a model of an Aspect type experiment (see diagram):
Consider a software model with three independently running programs: a left particle (software agent), a right particle (software agent), and an observer (software agent), which reads from a detector at each end: L and R.
- In the model, “particle” pairs (actually, pairs of software agents “A”) are generated anti-symmetrically by a master program called the observer. We can call the two agents left and right or L,R.
- There is no model of a wavefunction joining the two agents. We can choose to flip the spins randomly at any time, but we don't need to in order to generate consistent answers. What matters is that they remain in opposite relative phase for the duration of the experiment.
- In the particle version, the pair moves apart. In my model the software agents don’t need to move from one computer to another; they just split into two processes that evolve independently until they are measured in quick succession. Linear motion plays no role in the outcome in either case.
- The particles can express two states q and qbar (which are both superpositions of up and down) — but these are not scalar numbers, they are wavelike vectors (see below). The fact that they are vectors means they contain (at all times) both up and down states internally, with nothing to select either.
- When the “particles” reach the detectors, the detector can’t measure a superposition; it only measures a leading value of the vector, with three possible detector outcomes:
  * Up: +1
  * Down: -1
  * Nothing: 0 (pass through)
- We don’t assume that the detector measures the “true state” of the particles/agents faithfully, because the true state is a vector, not a simple eigenvalue. What the detector yields is the outcome of a measurement process. This means there is no need to talk about collapse. Only process outcomes.
- To generate data, some 3rd party observer collects and compares the results detected at each end, and fills out a table of counts for the outcomes (up,up), (up,down), (down,up), and (down,down) — see below, and the sketch following this list.
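To make that structure concrete, here is a minimal sketch in Python (my illustration, not the original software linked in the afternote). The names run_trials and measure_stub are mine, the 16-tick phase matches the digit-string resolution introduced later, and the stub merely stands in for the interference rule described in the representation section below.

```python
import random
from collections import Counter

def measure_stub(phase, detector_angle):
    # Placeholder detector rule: a stand-in for the interference
    # process sketched in the representation section below.
    return random.choice([1, -1, 0])   # up, down, or pass-through

def run_trials(n, measure, angle_L, angle_R, resolution=16):
    """Skeleton of the experiment: create an anti-correlated pair,
    measure each end independently, and let the third-party observer
    tally the joint outcomes over many independent trials."""
    counts = Counter()
    for _ in range(n):
        offset = random.randrange(resolution)            # random initial phase
        left = offset                                    # agent L's interior phase
        right = (offset + resolution // 2) % resolution  # agent R: opposite phase
        a = measure(left, angle_L)
        b = measure(right, angle_R)
        if a != 0 and b != 0:                            # keep only detected pairs
            counts[(a, b)] += 1
    return counts

# e.g. counts = run_trials(1000, measure_stub, angle_L=0, angle_R=4)
```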
Note that it’s the combination of assumptions (simultaneity of measurements at each end, the long term comparison of remote measurements from L and R, and the picture of a collapsing wavefunction) that seems to imply faster than allowed communication. In this model, the preservation of initial anti-correlated phases, together with the implicit rotational symmetry breaking by boundary conditions, is what allows the information to be inferred.
I designed the model to imitate the causal assumption implicit in the Schrödinger equation, but the software doesn’t use any quantum mechanical results to calculate the “right outcome”. It’s a purely independent statistical measurement on each trial.
- The “joint probabilities” for these combined events can only be calculated by a third party observer who collects the results from both ends over time. P(++) is the probability of getting (up,up) in a single experiment, etc. The probabilities for measuring up and down together at left and right respectively give rise to a value for the correlation function E(L,R) that we can compare to the predictions of quantum mechanics. This is what is used in the Bell inequalities.
- In the model, it’s also possible for the measurements to give zero (by destructive interference of the interior vectors). This means that no particle is measured. Without this possibility, the results deviate slightly from the quantum mechanical prediction with much greater error bars!
Classical or quantum?
Let’s take a moment to think about what we would expect to happen. If the outcomes up/down were exactly the state of the particles, they would be independent of the detector setting, and the two measurements for (up,up) and (down,down) would be impossible due to the assumed antisymmetry.
Because the outcome depends on a process of interaction with the detector, these counts can be non-zero, depending on the relative phase of the detector. The fact that they can be non-zero tells us that the state of the particles/agents cannot be a scalar number. It has to be a vector. This is the “quantum weirdness”.
Another weird assumption debunked: if the measurement of one end of the experiment led to a collapse of the wavefunction for both, with instantaneous action at a distance, it would be almost impossible to perform the experiment. How could the timings be made precisely with faster than light speeds at play? We know, from Einstein, that “simultaneous” is a basically meaningless concept. To make something appear simultaneous, we would have to have persistent states, coarse grained in time, so that the result was not too sensitive to when the outcomes were measured. But this can’t be if the wave function collapses instantaneously. The model here shows that all we need is for the spin phases to be persistent to give a stable result. Deterministically, this can be the case in the absence of a magnetic field.
Let's compare the usual supposed objections about entanglement with the explicit model.
The usual story of quantum spins
The standard story rests on assumptions something like these:
- One begins with a presupposed distinction between a classical and a quantum system. A classical system must be like a coin flip, either up or down. It can't be something in between. We think of a static coin outcome as lying still on a table, not as a coin continuously spinning, showing partly one face and partly another.
- A quantum system is supposed to be continuously in a superposition of up and down (but how/what this means is not explained). This is further assumed to be fundamentally different from what a classical system can do, but it’s easy to show that it’s not: it’s just about how we define time and measurement.
- Classically, the outcome of many experiments then has to be 50/50 because the detector just counts predetermined states that have only two options. There is no way to measure “no result” as in a dynamical model.
- Quantum mechanically, there are vectors involved, shifted by phases, that are not present for coins (like the Hilbert space vector). This is what makes QM different. The detector angle matters to the rate at which particles are measured to be in up or down states.
- In entanglement, the relative angle of far-separated detectors is measurable by the results of independent statistical trials–it parameterizes the answer deterministically (see the cosine expression in the figure below)! The fact that the joint probabilities depend on the relative angle is impossible for coins that have landed (collapsed), because the detector can’t reverse-causally bias the flip rates over independent trials: we think the coin just *is* heads or tails. That can’t be changed by the detector. But, actually, it’s not impossible if the coins are still flying.
Suppose we could measure heads or tails by taking a photo of the spinning coin instead of stopping (collapsing) the coin into a final state. The precise timing of the photo would matter to the outcome. You could even get a null result, halfway between heads and tails. The information about the state is not the same as the imagined physical makeup of the state.
If you can only take pictures every second, then over the timescale of a second, the state of the coin is literally in a superposition of states (interleaved) because the coin’s rate of evolutionary time is faster than the observer’s rate of measurement time. This is about sampling rates and information. This is called Time Division Multiplexing in technology.
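As a toy illustration of that sampling picture (my own sketch, not from the original write up), here is what photographing a spinning coin might look like in code. The width of the "edge-on" window is an arbitrary assumption:

```python
import random

def photograph_coin(phase):
    """Sample a continuously spinning coin at one instant.
    'phase' is the fraction of a full turn (0.0 to 1.0): near 0 or 1
    the photo shows heads, near 0.5 it shows tails, and near the
    quarter turns the coin is edge-on and shows neither face."""
    if abs(phase - 0.25) < 0.05 or abs(phase - 0.75) < 0.05:
        return 0                    # edge-on: a null result
    return 1 if phase < 0.25 or phase > 0.75 else -1

# The outcome depends on when the photo is taken, not on any
# pre-existing heads/tails value stored inside the coin:
print(photograph_coin(random.random()))
```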
This is the essence of what makes quantum measurement different from the classical picture. It seems that we deliberately compare apples with oranges to preserve a narrative of weirdness. If we think of detectors more like photographs of still-spinning coins, rather than the landing/collapse of coins, everything looks a lot more quantum mechanical!
An equivalent(?) story of virtual motion
- My “software particle” or agent is not like the classical model of a point-like body with a static label up or down; it’s more like a black box software program that contains an independent running process in its memory. It has interior structure. Every interaction to measure the state of an agent involves a computation that combines the state of the particle with the state of the detector, which is like an interference process.
- Detectors can take snapshots of an agent's state only by interacting with the agent/particle through a convolution with its own state.
When we talk about a superposition of states in entanglement, this doesn’t have to mean that the process is pointing both up and down “at the same time” on a typical wall clock, because we can’t observe the phase times inside the process. The concepts of interior and exterior time (evolution and measurement) are muddled in the equations of quantum mechanics. “At the same time” depends on how you count time. The vector could easily be both up and down at some time “during the same day” (interior time), but that is not the same as being actually both up and down simultaneously (as compared between different days). So a coarse graining of observable dynamics explains superposition. This is the Time Division Multiplexing, and it’s widely used in technology to make processes appear to happen in parallel.
- A state detector can see the interior processes of the particle/agent, but it can’t observe all the details about the process. It only returns one of the eigenvalues as predicted by quantum theory.
- What makes the detector give an up, down, or no answer is a process of interference between the particle process and the detector process (see explicitly below). If the processes interfere constructively, there’s a bias, strong or weak depending on the phase, to produce either an up or a down answer. If the interference is destructive, it can’t see the event at all.
- In this way, the detector’s phase, which is determined by its configuration “angle”, biases the production rates, i.e. the probabilities of producing up and down outcomes. There’s no need for action at a distance as long as the phases are persistently 180 degrees apart.
- Only a consistent deterministic bias can result in an outcome that depends on the detector angle over many independent trials. That comes from the persistent phases.
- The anti-correlation of the two particles in the entangled pair is assumed to be perfect as long as the initial phases from the moment of creation remain in sync.
- There is no collapse of the particle process required on detection.
- There is no communication between the particles required as long as the phases remain in sync. (This is interesting though. For the reader: why would/wouldn’t the states drift?)
The representation of internal states: interfere → outcome
For this model, each software agent has an interior string of digits, like an approximate sine wave but at very low digital resolution! This gives a simple way of representing an interference pattern and a phase with only elementary resources.
q = 0000uuuu0000dddd = 0000++++0000----
(squint and you see a square wave imitating a sine wave!)
The states q and qbar are vectors, so the total state is both u and d at the same time! States can be phase shifted:
q = 00uuuu0000dddd00
The opposite / complementary / anticorrelated phase is
qbar = 00dddd0000uuuu00
So an entangled pair has state (q,qbar). We choose this pair's phase randomly in every trial, keeping the two halves correlated, and repeat trials thousands of times, using a random number generator to determine the initial offset.
Detectors can’t see the phase directly in a static “classical” way. However, each detector has a similar string and phase, and the phase is equal to its “angle”, like a polarization filter. We do know the detector’s phase, and we can set up different angles on left and right. When a particle is detected, the phases interfere by convolution.
The result is still a mix of both u and d at the same time, but suppose the detector picks the leading edge. If the result starts with u, the final outcome is “up”. If it starts with d the outcome is called “down”. If the outcome is 0, we don’t detect it at all. This is the only “collapse” that needs to occur: the collapse of interior time or phase into a single exterior outcome by coarse selection.
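Here is a minimal sketch of that representation in code (my reconstruction, not the author's published program; see the link in the afternote). In particular, reading the “convolution” as a pointwise product and the “leading edge” as the first non-zero entry are my assumptions:

```python
N = 16  # digital resolution of the interior wave

def make_wave(phase):
    """Square-wave imitation of a sine, rotated left by 'phase' ticks.
    Encoding: u -> +1, d -> -1, 0 -> 0, so phase 0 is 0000uuuu0000dddd."""
    base = [0]*4 + [1]*4 + [0]*4 + [-1]*4
    p = phase % N
    return base[p:] + base[:p]          # cyclic phase shift

def conjugate(wave):
    """The anti-correlated partner qbar: u and d swapped."""
    return [-x for x in wave]

def measure(wave, detector_phase):
    """Interfere the particle's wave with the detector's own wave
    (pointwise product here) and pick the leading non-zero value:
    +1 means up, -1 means down, 0 means pass-through (no detection)."""
    detector = make_wave(detector_phase)
    result = [p * d for p, d in zip(wave, detector)]
    for x in result:
        if x != 0:
            return 1 if x > 0 else -1
    return 0

# An entangled pair with a random common offset, e.g. offset = 2:
# q    = make_wave(2)             -> 00uuuu0000dddd00
# qbar = conjugate(make_wave(2))  -> 00dddd0000uuuu00
```

In this sketch, equal detector phases always yield opposite leading values for q and qbar, while detectors 8 ticks (180 degrees) apart always yield equal ones; intermediate angles give a mixture, including null pass-throughs.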
Notice that the original state is unaffected. It doesn’t collapse, it just gets filtered by the detector. It’s unclear whether detection destroys the entanglement between the pair. For software, there’s no reason why it should. In physics, I don’t know.
The bigger picture is that this is a deterministic interaction, which provides a mechanism to bias outcomes over multiple experiments by persistently correlating the boundary conditions. The boundary conditions and phase (angle) of the detector provide that bias locally, thanks to the persistence of the spin agents.
The anticorrelation of the pairs of particles is enough to ensure that they will have opposite effects at each end, and this inference can effectively measure the angle between the detectors, even though the information is not transmitted. (But note that we’ve already assumed it makes sense to measure the angles relative to one another in the QM formalism, so we’ve broken the symmetry by choosing a preferred axis by the experiment boundary conditions.)
Showing the output violates Bell’s inequalities
We know the QM result violates Bell’s inequalities. To show that the software model for virtual motion does too, we run the model and collect the data points over thousands of individual random trials into an expression for E(L,R).
The E(L,R) function calculates the joint probability of seeing up/down +/- at L/R ends…
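For reference, these are the standard textbook definitions (not reproduced from the article's own figures): the correlation function estimated from the counts is

E(L,R) = P(++) + P(--) - P(+-) - P(-+)

and quantum mechanics predicts E(L,R) = -cos(theta_L - theta_R) for the spin singlet. Continuing the earlier Python sketch, the observer would compute:

```python
def correlation(counts):
    """E(L,R) estimated from the observer's tally of joint outcomes."""
    n = sum(counts.values())
    e = (counts[(1, 1)] + counts[(-1, -1)]
         - counts[(1, -1)] - counts[(-1, 1)])
    return e / n if n else 0.0
```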
These two expressions already show how it feels strange to be able to get an expression like cosine(theta) from a sum of four probabilities. However, we simply do it, and here are some results from one run of a thousand trials, compared to the calculated value from quantum mechanics.
These are plotted below to show the sinusoidal result visually:
Concluding remarks
It’s gratifying to find an explicit deterministic mechanism for something that is supposed to be impossible. There’s no cheating involved, no free-will nonsense, only a deliberate attempt to design a system that exhibits the same biases as quantum mechanics for this one special case.
- The formalism of quantum mechanics doesn’t try to explain how particles work; it only describes how biases influence average processes, like all potential/field theories. It appears to describe an entire system from the outside.
- QM claims to predict probabilities, but what kind of probabilities are they? Are they frequencies, Bayesian degrees of belief, or propensities? This model suggests that they are actually propensities or production rates, in the form of measurement acceptance rates over a fixed time.
- The probabilities refer to average information about a single particle per experiment, but can only be realized by repeated experiments over many trials and many particles. This should tell us that the interpretation is hiding a “causal bias” over multiple independent events.
- This model is a fully deterministic system. Only the initial phases of the (q,qbar) pair are chosen randomly.
- There are still two possibilities: if quantum mechanics is deterministic, the anti-correlation must persist until an interaction spoils it. If QM is non-deterministic, the entangled state could presumably decay after a certain time unless there is a faster than light signal to repair the phase relationship. The first is perhaps Occam's choice.
Quantum mechanics is interpreted to mean that particles entangled in pair states like (q(1).qbar(2) - q(2).qbar(1))/√2 are not in a single state but in a superposition of both, yet it doesn’t tell us how this can be interpreted. In fact, there's a clue: we distinguish pure states from mixed states. The difference is whether the mixtures are interior or exterior to the process. The Schrödinger equation doesn’t actually say that both states have to be active in equal amounts at all times, because it doesn’t tell us what “at the same time” means, only that either state could be measured with some probability at any time. But what is time here?
- Clock time
- Experiment iteration / outcome
By separating slow and fast variables as we do all the time in physics, we can separate interior (fast) and exterior (slow, i.e. experiment iteration) time to coarse grain and get an explanation.
The problem lies in the ambiguities of time.
Here, we see the outcome of a measurement is a projection/snapshot of the continuing phasing of the process state. Nothing collapses. No communication is needed from one end to the other as long as the particles maintain their phase relationship. Of course, why they might do this is another interesting question.
I hope this more detailed exposition helps to explain how the software entanglement works, and also sheds light on how we prejudice the narratives about classical and quantum to preserve an unnatural distinction between the two.
Afternote
(I’m grateful to Finn Ravndal and Paul Borrill for discussions over the years.)
I first wrote about this in a routine follow-up to an article about computing on Researchgate, because arXiv refused to allow it, prejudicially judging it to violate their "rules about publishability". Recently, I tweeted about it here.
The software generating the results is here.
My larger lecture notes on virtual motion are here and here.