# augmenting timestamp sorting with causality

Here is a simple technique for sorting a linear collection of events by timestamp from devices whose clocks might disagree. Importantly, this technique does not depend on a privileged node to resolve an ordering, and it produces plausible results for sparse data too. This makes the technique useful for offline use and conditions of partial connectivity.

Each message must have:

- an "id" that uniquely identifies the message
- a "links" array of the ids of the most recently seen messages (the "heads")
- a "time" stamp from the clock of the publishing device

A good id scheme might be a hash of the message contents, a unique device id with a sequence number, or a cryptographically random sequence. A hash or cryptographically random sequence has additional implicit causality properties that can help ensure more robust behavior, though those properties are outside the scope of this technique.

The array of links is a collection of the ids of the most recently seen "heads" of the messages each device has at the time of publishing. A "head" is like a branch head in a distributed version control system such as git: the commit hash (here we use ids) at the tip of each branch. In git, when a commit refers to a prior commit as its predecessor, it becomes the new "head". A single commit may supersede multiple heads at once, and new branches (each with their own head) are created when multiple commits refer to the same predecessor. Our "links" field works the same way as this branch/head model.

To calculate the "links" field (the heads) for each message, when it is time to generate a new message, find every id that is not linked to by any other message. Use the ids of that set of not-linked-to messages as the "links" field in the new message.
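That head calculation can be sketched as a single pass over the known messages (the helper name "heads" is just for illustration):

```javascript
// compute the "links" field for a new message: every id that no other
// known message links to is a current head.
function heads(messages) {
  let linked = new Set()
  for (let msg of messages) {
    for (let link of msg.links ?? []) linked.add(link)
  }
  return messages.map(msg => msg.id).filter(id => !linked.has(id))
}
```

For example, given "a0", "a1" (links "a0"), "b0" (links "a1"), and "a2" (links "a1"), the heads are "b0" and "a2" — exactly what a new message published at that moment would use as its "links" field.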

# example message data

Let's generate some linked data to use later. In this example, the id starts with a letter for the originating device followed by a device-local sequence number.

First, device "a" creates a new message linking to no previous messages.

{ id: 'a0', links: [], time: '2023-02-22 11:30' }

Then 45 minutes later, device "a" creates a second message, replacing "a0" as the head:

{ id: 'a1', links: ['a0'], time: '2023-02-22 12:15' }

Device "b" has seen the messages from "a" and publishes a new message at 12:55 (according to the clock on "b") that links to "a1" from device "a":

{ id: 'b0', links: ['a1'], time: '2023-02-22 12:55' }

Now "a" publishes 2 more messages:

{ id: 'a2', links: ['a1'], time: '2023-02-22 13:10' }
{ id: 'a3', links: ['a2'], time: '2023-02-22 13:00' }

Next, a new device "c" with a fairly out-of-sync clock has only received messages "a0" and "a2", not "a1" nor "b0". Its message could not have occurred before 13:00 on "a"'s clock, because it links to "a2", but according to the clock on "c" it is 12:25. Our sorting algorithm should place this message after "a2", despite the timestamp.

{ id: 'c0', links: ['a0','a2'], time: '2023-02-22 12:25' }

Meanwhile, device "d" has been creating some independent messages offline. The first message we have, "d1", links to "d0", but "d0" itself isn't available right now.

{ id: 'd1', links: ['d0'], time: '2023-02-22 13:37' }
{ id: 'd2', links: ['d1'], time: '2023-02-22 13:38' }

Then things got slightly confused on device "d" and it published a message "d3" that links back to "d0", not the latest head on device "d", "d2". There are many reasons why this might occur. Maybe there was data corruption on device "d" or perhaps device "d" doesn't have enough resources to track very many prior messages. Or perhaps according to the specific logic of the application, this is an intentional fork or thread. Whatever the reason, our sorting algorithm should handle this type of scenario too.

{ id: 'd3', links: ['d0'], time: '2023-02-22 13:39' }

By the time device "a" wants to publish a new message, it has received all the updates from device "d". Device "a" links against all known heads: its own ("a3"), as well as the heads from "d" ("d2" and "d3").

{ id: 'a4', links: ['a3','d2','d3'], time: '2023-02-22 14:10' }

Finally, device "b" publishes a new message, having seen updates from "a" and also device "c". However, the clocks on "a" and "b" must disagree, because the timestamp on "b1" (14:05) is earlier than one of the messages that "b1" links to ("a4" at "14:10"). So this message should be listed after "a4" in our sorting output despite the timestamp.

{ id: 'b1', links: ['a4','c0'], time: '2023-02-22 14:05' }

# sorting algorithm

This sorting algorithm takes data like the example above and sorts it by causality first and timestamps second.

First, build a data structure mapping message ids to message data. In the code below, this is the "messages" class property.

Then, build a data structure that maps each message id to an array of message ids that link to the given id. This is a kind of reverse link table called "rlinks" in the code below.

The algorithm for sorting (the "list" method below) uses a stack of messages and selects the message with the smallest timestamp each iteration of the main loop.

The stack is initialized with all of the messages that do not link to any other known messages. Those initial messages can still link to messages unknown to the device doing the sorting. This initial set of messages sits at the opposite end from the "heads": a kind of "lasts" (although a head may also be a "last").

Looping while the stack is not empty and ensuring not to visit the same message twice, the message with the smallest timestamp is selected each iteration, swapped to the end of the stack, and popped off. The selected message is skipped if any of its links hasn't been visited yet, so that it never appears before any of its ancestors.

If the selected message was not skipped, it is added to the results, and then every message that links to it (found via the reverse link table) and has not been visited yet is pushed onto the stack.

When the stack is empty, the results are complete. Here is some javascript that implements this causal sorting technique:

class Messages {
  constructor() {
    this.messages = new Map // id => msg
    this.rlinks = new Map // id => [ids that link to id]
  }
  add(msg) {
    if (this.messages.has(msg.id)) return
    this.messages.set(msg.id, msg)
    for (let link of msg.links ?? []) {
      let rl = this.rlinks.get(link)
      if (rl === undefined) {
        this.rlinks.set(link, [msg.id])
      } else {
        rl.push(msg.id)
      }
    }
  }
  list() {
    let stack = []
    // initialize the stack with messages that have no available links
    for (let msg of this.messages.values()) {
      let count = 0
      for (let link of msg.links ?? []) {
        count += this.messages.has(link) ? 1 : 0
        if (count > 0) break
      }
      if (count === 0) stack.push(msg)
    }

    let seen = new Set
    let results = []
    loop: while (stack.length > 0) {
      let msg = popNext(stack)
      for (let link of msg.links ?? []) {
        if (this.messages.has(link) && !seen.has(link)) {
          // this msg links to an available msg that has not yet been seen,
          // so it's too early to yield this msg.
          // don't worry, it will get pushed to the stack again when the
          // linked message appears on the stack because msg is reverse linked.
          continue loop
        }
      }
      if (seen.has(msg.id)) continue
      seen.add(msg.id)
      results.push(msg)
      for (let link of this.rlinks.get(msg.id) ?? []) {
        if (seen.has(link)) continue
        let m = this.messages.get(link)
        if (m !== undefined) stack.push(m)
      }
    }
    return results
  }
}

function popNext(stack) {
  let best = stack[0], besti = 0
  for (let i = 1; i < stack.length; i++) {
    let s = stack[i]
    if (s.time < best.time) {
      besti = i
      best = s
    }
  }
  // swap the found element to the end of the stack and pop it
  // this avoids an in-place splice that would copy or re-allocate the array
  let tmp = stack[stack.length-1]
  stack[besti] = tmp
  stack.pop()
  return best
}

Using the set of messages we created earlier:

let m = new Messages
m.add({ id: 'a1', links: ['a0'], time: '2023-02-22 12:15' })
m.add({ id: 'b0', links: ['a1'], time: '2023-02-22 12:55' })
m.add({ id: 'a2', links: ['a1'], time: '2023-02-22 13:10' })
m.add({ id: 'a0', links: [], time: '2023-02-22 11:30' })
m.add({ id: 'a3', links: ['a2'], time: '2023-02-22 13:00' })
m.add({ id: 'c0', links: ['a0','a2'], time: '2023-02-22 12:25' })
m.add({ id: 'a4', links: ['a3','d2','d3'], time: '2023-02-22 14:10' })
m.add({ id: 'b1', links: ['a4','c0'], time: '2023-02-22 14:05' })
m.add({ id: 'd1', links: ['d0'], time: '2023-02-22 13:37' })
m.add({ id: 'd2', links: ['d1'], time: '2023-02-22 13:38' })
m.add({ id: 'd3', links: ['d0'], time: '2023-02-22 13:39' })
console.log(m.list().map(msg => msg.id)) // ['a0','a1','b0','a2','c0','a3','d1','d2','d3','a4','b1']

Here we get the messages sorted by causality first and timestamp second.

Here are the sorted results in full instead of just the ids:

{ id: 'a0', links: [], time: '2023-02-22 11:30' }
{ id: 'a1', links: [ 'a0' ], time: '2023-02-22 12:15' }
{ id: 'b0', links: [ 'a1' ], time: '2023-02-22 12:55' }
{ id: 'a2', links: [ 'a1' ], time: '2023-02-22 13:10' }
{ id: 'c0', links: [ 'a0', 'a2' ], time: '2023-02-22 12:25' }
{ id: 'a3', links: [ 'a2' ], time: '2023-02-22 13:00' }
{ id: 'd1', links: [ 'd0' ], time: '2023-02-22 13:37' }
{ id: 'd2', links: [ 'd1' ], time: '2023-02-22 13:38' }
{ id: 'd3', links: [ 'd0' ], time: '2023-02-22 13:39' }
{ id: 'a4', links: [ 'a3', 'd2', 'd3' ], time: '2023-02-22 14:10' }
{ id: 'b1', links: [ 'a4', 'c0' ], time: '2023-02-22 14:05' }

This algorithm should scale well enough for a set of messages that would be displayed all at once on a screen (the intended use). If you do run into scaling issues, consider using a sorted collection for the stack instead of an array, so you avoid scanning the entire stack for the smallest timestamp on each iteration of the loop (at the expense of a more costly insert).
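One such sorted collection is a binary min-heap keyed on the timestamp string, which makes both insert and extract-min O(log n) instead of popNext's O(n) scan. This is only a sketch of the data structure (the class name and method names are illustrative), not a drop-in tested against the full algorithm above:

```javascript
// a binary min-heap over messages, ordered by their "time" string.
// push and popMin both sift in O(log n).
class TimeHeap {
  constructor() {
    this.a = []
  }
  get length() {
    return this.a.length
  }
  swap(i, j) {
    let t = this.a[i]
    this.a[i] = this.a[j]
    this.a[j] = t
  }
  push(msg) {
    let a = this.a
    a.push(msg)
    let i = a.length - 1
    // sift the new message up while it is earlier than its parent
    while (i > 0) {
      let p = (i - 1) >> 1
      if (a[p].time <= a[i].time) break
      this.swap(p, i)
      i = p
    }
  }
  popMin() {
    // remove and return the message with the smallest timestamp
    let a = this.a
    let top = a[0]
    let last = a.pop()
    if (a.length > 0) {
      a[0] = last
      // sift the moved message down to restore the heap property
      let i = 0
      while (true) {
        let l = 2 * i + 1, r = l + 1, m = i
        if (l < a.length && a[l].time < a[m].time) m = l
        if (r < a.length && a[r].time < a[m].time) m = r
        if (m === i) break
        this.swap(m, i)
        i = m
      }
    }
    return top
  }
}
```

Note that the main loop pushes messages onto the stack more than once (via the reverse link table), which is fine here: the duplicate pops are discarded by the seen check just as with the array version.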

If you pull these messages from a database, you can choose whether to sort and display only the messages within a time range, or whether to also resolve one or more levels of dangling references, in case some of those referenced messages should be sorted into the displayed set through implied causality. With the second approach, you can show messages that belong within a bounded time query despite having timestamps outside of it. For that approach, you also need to trim messages from the start and end of the sorted results that fall outside the query range; messages causally implied to be within the range will have messages with in-range timestamps on both sides (ancestors and descendants), so they survive the trim.
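The trimming step for that second approach might be sketched like this (assuming the input is already causally sorted by the list method above, and that the range bounds use the same timestamp string format; the helper name is illustrative):

```javascript
// trim a causally sorted result list down to a time query range.
// out-of-range messages survive only when they are sandwiched between
// in-range messages, which is what implied causality produces.
function trimToRange(sorted, start, end) {
  let inRange = m => m.time >= start && m.time <= end
  let first = sorted.findIndex(inRange)
  if (first === -1) return [] // nothing in range at all
  let last = sorted.length - 1
  while (!inRange(sorted[last])) last--
  return sorted.slice(first, last + 1)
}
```

With the example data, a query for 13:00 through 13:30 over the causally sorted list would keep "c0" (timestamp 12:25) because it sorts between "a2" and "a3", which are both inside the range.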