Time-Based Versioned Graphs

May 13th, 2014  |  Published in Cypher, Graph Databases, Neo4j by Ian Robinson

(Want to talk about this or any other data-modelling topic in person? Sign up for our GOTO Amsterdam tutorial, June 18.)

Many graph database applications need to version a graph so as to see what it looked like at a particular point in time. Neo4j doesn’t provide intrinsic support for versioning, either at the level of its labelled property graph model or in its Cypher query language. To version a graph, therefore, we need to make our application’s graph data model and queries version aware.

I’ll say this up front: versioning necessarily creates a lot more data – both more nodes and more relationships. In addition, queries will tend to be more complex, and slower, because every MATCH must take account of one or more versioned elements. Given these overheads, apply versioning with care. Perhaps not all of your graph needs to be versioned. If that’s the case, version only those portions of the graph that require it.

In this post I’m going to discuss what I consider to be the easiest form of versioning – time-based versioning. Time-based versioning gives us a universal versioning scheme: pick an instant, convert it to a long value, and the resulting version number applies to all parts of the graph. Version-number-based versioning, where each entity has its own independently varying version number (and scheme even), is more complex to implement. I’ll be dealing with that in a future post.

All graph versioning strategies with Neo4j must make tradeoffs between the cost of writes, the amount of on-disk storage required, the degree of fragmentation incurred, and the performance of reads for both current and historical queries. The design described here strikes a balance between these competing considerations. At the end of the post I describe some variations and optimisations.

The examples here were written using Neo4j Community 2.0.3. I’ve also included the examples in a GraphGist.

Separate Structure From State

The key to versioning a graph is separating structure from state. This allows us to version the graph’s structure independently of its state.

To help describe how to design a version-aware graph model, I’m going to introduce some new terminology: identity nodes, state nodes, structural relationships and state relationships.

Identity Nodes

Identity nodes are used to represent the positions of entities in a domain-meaningful graph structure. Each identity node contains one or more immutable properties, which together constitute an entity’s identity. In a version-free graph (the kind of graph we build normally) nodes tend to identify an entity, locate it in the network structure, and represent its state. Identity nodes in a version-aware graph, in contrast, serve only to identify an entity and locate it in a network structure.

Structural Relationships

Identity nodes are connected to one another using timestamped structural relationships. These structural relationships are similar to the domain relationships we’d include in a version-free graph, except they have two additional properties, from and to, both of which are timestamps.

State Nodes and Relationships

Connected to each identity node are one or more state nodes. Each state node represents an immutable snapshot of an entity’s state. State nodes are connected to identity nodes using timestamped state relationships.

Example

The following example describes a simple domain made up of shops, products and suppliers. A Shop SELLS one or more Products, which are SUPPLIED_BY Suppliers. Over time prices change, shops sell different products, and source them from different suppliers. With a versioned graph we’ll be able to see which products a shop was selling, what price they were being sold at, and who was supplying them, at any point in the graph’s history.

The diagram below shows a version-aware graph representation of this domain. To help you see how structural and state-based elements are combined in the same graph, I’ve drawn identity nodes and structural relationships in blue, state nodes and state relationships in red. In the data itself, I’ve labelled the identity nodes Shop, Product, and Supplier, and the state nodes ShopState, ProductState and SupplierState. Structural relationships are named SELLS and SUPPLIED_BY. All state relationships are named STATE.

[Figure: initial-state]

This graph represents the domain in its initial state on 1 January 2014. All the relationships, both structural and state, are current. Each relationship has a from property with a long millisecond value representing 1 January 2014, and a to property with a very large long value (End-Of-Time, or EOT, which is a magic number – in this case Long.MAX_VALUE) that effectively means there is no current upper bound to the period associated with the relationship.

Here’s the Cypher for creating this graph in its initial state:

CREATE
(s1:Shop{shop_id:1})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (ss1:ShopState{name:'General Store'}),
(p1:Product{product_id:1})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (ps1:ProductState{name:'Cheese',price:1.00}),
(p2:Product{product_id:2})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (ps2:ProductState{name:'Crisps',price:0.50}),
(p3:Product{product_id:3})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (ps3:ProductState{name:'Orange Juice',price:1.50}),
(u1:Supplier{supplier_id:1})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (us1:SupplierState{name:'International Imports'}),
(u2:Supplier{supplier_id:2})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (us2:SupplierState{name:'Local Markets'}),
(s2:Shop{shop_id:2})
  -[:STATE{from:1388534400000,to:9223372036854775807}]->
  (ss2:ShopState{name:'Cornershop'}),
(s1)-[:SELLS{from:1388534400000,to:9223372036854775807}]->(p1),
(s1)-[:SELLS{from:1388534400000,to:9223372036854775807}]->(p2),
(s2)-[:SELLS{from:1388534400000,to:9223372036854775807}]->(p3),
(p1)-[:SUPPLIED_BY{from:1388534400000,to:9223372036854775807}]->(u2),
(p2)-[:SUPPLIED_BY{from:1388534400000,to:9223372036854775807}]->(u1),
(p3)-[:SUPPLIED_BY{from:1388534400000,to:9223372036854775807}]->(u2)

Current Queries

To query the current state of a version-aware graph, we need to match current structural and state relationships, which we can do simply by matching the to property on relationships against our EOT value. Here’s a query for finding all products that currently retail for one pound or less, together with their suppliers, and the shops where they can be purchased:

MATCH (u:Supplier)<-[:SUPPLIED_BY{to:9223372036854775807}]-
      (p:Product)-[:STATE{to:9223372036854775807}]->(ps:ProductState)
WHERE ps.price <= 1.0
MATCH (p)<-[:SELLS{to:9223372036854775807}]-(s:Shop)
MATCH (u)-[:STATE{to:9223372036854775807}]->(us:SupplierState)
MATCH (s)-[:STATE{to:9223372036854775807}]->(ss:ShopState)
RETURN us.name AS supplier, 
       ps.name AS product, 
       ps.price AS price, 
       ss.name AS shop
ORDER BY price DESC

Results:

+--------------------------------------------------------------+
| supplier                | product  | price | shop            |
+--------------------------------------------------------------+
| "Local Markets"         | "Cheese" | 1.0   | "General Store" |
| "International Imports" | "Crisps" | 0.5   | "General Store" |
+--------------------------------------------------------------+

As you can see, because of the need to make all parts of the query version-aware, even simple queries can become quite complex.
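In practice you’ll probably want to supply the timestamps as query parameters rather than hard-coding them. Here’s a sketch of the same current-state query written that way; the {EOT} parameter name is my own, and is assumed to be bound to 9223372036854775807 (Long.MAX_VALUE) by the calling code:

// {EOT} is assumed to be bound to 9223372036854775807 (Long.MAX_VALUE)
MATCH (u:Supplier)<-[r1:SUPPLIED_BY]-(p:Product)-[r2:STATE]->(ps:ProductState)
WHERE r1.to = {EOT} AND r2.to = {EOT} AND ps.price <= 1.0
MATCH (p)<-[r3:SELLS]-(s:Shop)
WHERE r3.to = {EOT}
MATCH (u)-[r4:STATE]->(us:SupplierState), (s)-[r5:STATE]->(ss:ShopState)
WHERE r4.to = {EOT} AND r5.to = {EOT}
RETURN us.name AS supplier, 
       ps.name AS product, 
       ps.price AS price, 
       ss.name AS shop
ORDER BY price DESC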

Modifying the Graph

The kinds of changes we may need to track in a version-aware graph include changes to state and changes to structure. Changes to state involve adding, removing and modifying node properties. Structural changes involve adding and deleting relationships, as well as modifying the strength, weight or quality of relationships by changing one or more of their properties.

Modifying Structure

In the following query, we move product 1 from shop 1 to shop 2. This change will be effective from 1 February 2014 (1391212800000 milliseconds since 1970-01-01T00:00:00Z).

MATCH (p:Product{product_id:1})<-[r:SELLS]-(:Shop)
WHERE r.to = 9223372036854775807
MATCH (s:Shop{shop_id:2})
SET r.to = 1391212800000
CREATE (s)-[:SELLS{from:1391212800000,to:9223372036854775807}]->(p)

As part of this query, we archive the old SELLS relationship, setting its to property to the millisecond value representing 1 February 2014, and introduce a new SELLS relationship that connects product 1's identity node to shop 2's identity node. We set the from property of this new relationship to 1 February 2014, and its to property to EOT.

This is what the graph looks like after we’ve made the change. I’ve used a dashed blue arrow to show the archived SELLS relationship, and a thick blue arrow to show the new SELLS relationship.

[Figure: move-product]

Modifying State

In the following query, we update product 1's price. As with our structural change, this change will be effective from 1 February 2014.

MATCH (p:Product{product_id:1})
       -[r1:STATE{to:9223372036854775807}]->(currentState:ProductState)
SET r1.to = 1391212800000
CREATE (p)-[r2:STATE{from:1391212800000,to:9223372036854775807}]->
       (newState:ProductState)
SET newState = currentState
SET newState.price = 2.0

This time, we archive the product’s current STATE relationship, and then create a new state node and a new STATE relationship. We copy the product’s properties from the old state node to the new state node (SET newState = currentState), and then update the price.

[Figure: update-product]

Historical Queries

We’ve seen that to query the current state of the graph, we need to do an exact match on the to property of both structural and state relationships, matching against our EOT value. Historical queries are slightly different, in that we’re looking for relationships whose from-to period covers the millisecond instant we’re interested in; that is, relationships where the from value is less than or equal to the instant we’re interested in, and the to value is greater than that instant. These kinds of comparisons must be done in the WHERE clause.

The following query finds all the products sold by shop 1 on 5 January 2014 (1388880000000 milliseconds from the Java epoch):

MATCH (s:Shop{shop_id:1})-[r1:SELLS]->(p:Product)
WHERE (r1.from <= 1388880000000 AND r1.to > 1388880000000)
MATCH (p)-[r2:STATE]->(ps:ProductState)
WHERE (r2.from <= 1388880000000 AND r2.to > 1388880000000)
RETURN p.product_id AS productId, 
       ps.name AS product, 
       ps.price AS price
ORDER BY price DESC

Results:

+------------------------------+
| productId | product  | price |
+------------------------------+
| 1         | "Cheese" | 1.0   |
| 2         | "Crisps" | 0.5   |
+------------------------------+

And here’s the same query for 5 February 2014:

MATCH (s:Shop{shop_id:1})-[r1:SELLS]->(p:Product)
WHERE (r1.from <= 1391558400000 AND r1.to > 1391558400000)
MATCH (p)-[r2:STATE]->(ps:ProductState)
WHERE (r2.from <= 1391558400000 AND r2.to > 1391558400000)
RETURN p.product_id AS productId, 
       ps.name AS product, 
       ps.price AS price
ORDER BY price DESC

Results:

+------------------------------+
| productId | product  | price |
+------------------------------+
| 2         | "Crisps" | 0.5   |
+------------------------------+

Observations

Some further comments on this approach:

Additive

This design is purely additive. To delete a structural relationship we simply archive it; that is, we set its to property to the timestamp of the deletion. To delete a node we must archive both its current state relationship and any current structural relationships. Current relationships (both structural and state) are those whose to property holds the EOT value.
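As an illustration, here’s a sketch of how product 3 might be logically deleted on 1 March 2014 (1393632000000 milliseconds); the date is made up for the example:

// Archive product 3's current state relationship and all of its
// current structural relationships in one go
MATCH (p:Product{product_id:3})-[r]-()
WHERE r.to = 9223372036854775807
SET r.to = 1393632000000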

This additive model strikes a reasonable balance between the competing considerations mentioned at the beginning of this post: the cost of writes, the amount of on-disk storage required, the degree of fragmentation incurred, and the performance of reads for both current and historical queries. Changes in state require the addition of nothing more than a single new state relationship and one new state node, which keeps the cost of writes down. Nothing is rewritten; therefore, nothing is deleted, which in turn keeps store growth to a minimum. On the other hand, both current and historical queries require scanning timestamped relationships, and every state lookup requires traversing at least one state relationship.

Indexes and Constraints

Indexes and constraints are not version aware. When using indexes (or label scans), you will have to test the results of each lookup. For a node to be included in the results for a particular point in time, it must be reachable via a state relationship whose period includes that point. For entities that change frequently, or are (logically) deleted frequently, this check can impact the performance of index lookups.
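For example, assuming we’ve indexed the name property on ProductState (the index itself is an assumption for the sake of illustration), finding the product currently named 'Cheese' requires testing the state relationship as well as the property value; a historical lookup would replace the equality test with a from/to range test:

// Index lookup (or label scan) on ProductState, followed by a check that
// the matched state node is attached by a current STATE relationship
MATCH (ps:ProductState{name:'Cheese'})<-[r:STATE]-(p:Product)
WHERE r.to = 9223372036854775807
RETURN p.product_id AS productId, ps.price AS price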

Having multiple states for a particular entity poses problems for unique constraints. If a uniquely constrained property value doesn’t change between versions, the constraint will fail because there will now be duplicate property values. To get round this problem, we have to introduce a version property that increments with each state change, and a synthetic property that combines the current version number with the value of the property we want to constrain. We then apply the unique constraint to this synthetic property instead of the underlying domain property. I’ll write this up in more detail in the near future.
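As a rough sketch of the workaround, assume a hypothetical sku property on ProductState that we want to keep unique. The constraint is applied to a synthetic sku_version property instead:

CREATE CONSTRAINT ON (ps:ProductState) ASSERT ps.sku_version IS UNIQUE

Each state change then increments the version and recomputes the synthetic property (here for product 2, with a made-up date):

MATCH (p:Product{product_id:2})
       -[r1:STATE{to:9223372036854775807}]->(currentState:ProductState)
SET r1.to = 1393632000000
CREATE (p)-[:STATE{from:1393632000000,to:9223372036854775807}]->
       (newState:ProductState)
SET newState = currentState
SET newState.version = currentState.version + 1
SET newState.sku_version = newState.sku + '_' + str(newState.version)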

Variations and Optimisations

The design strategy described here can be modified in several ways, depending on your needs:

Linked Lists

I’ve attached state nodes to identity nodes in a radial fashion. This works well when there are no more than several tens of changes to an entity’s state over its lifetime. If an entity changes far more often than that, you might consider using a linked list.

Linked lists are also useful when you need to compare successive states: the ordering imposed by the linked list helps you identify next and previous states. In fact, you can combine radial relationships with linked lists:

[Figure: radial-and-linked-lists]
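Here’s a sketch of the earlier price update, extended to chain successive state nodes together; the PREV relationship name is my own choice:

// Archive the current state relationship, add the new state node, and
// chain it to the previous state node with a PREV relationship
MATCH (p:Product{product_id:1})
       -[r:STATE{to:9223372036854775807}]->(currentState:ProductState)
SET r.to = 1391212800000
CREATE (p)-[:STATE{from:1391212800000,to:9223372036854775807}]->
       (newState:ProductState)
CREATE (newState)-[:PREV]->(currentState)
SET newState = currentState
SET newState.price = 2.0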

CURRENT_STATE Relationships

For graphs with entities that change frequently, you might consider adding a CURRENT_STATE relationship that connects an entity’s identity node directly to its latest state node. This allows you to navigate directly to that entity’s current state, without having to iterate many radial state relationships, or traverse a linked list. The use of a CURRENT_STATE relationship, however, requires more housekeeping when updating state: you’ll have to delete the old CURRENT_STATE relationship, and then create a new one.
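Here’s a sketch of that housekeeping, again using the product price change as an example (the CURRENT_STATE relationship is an assumption based on the description above):

// Archive the old state, then repoint CURRENT_STATE at the new state node
MATCH (p:Product{product_id:1})-[r1:STATE{to:9223372036854775807}]->(currentState),
      (p)-[cs:CURRENT_STATE]->(currentState)
SET r1.to = 1391212800000
DELETE cs
CREATE (p)-[:STATE{from:1391212800000,to:9223372036854775807}]->
       (newState:ProductState),
       (p)-[:CURRENT_STATE]->(newState)
SET newState = currentState
SET newState.price = 2.0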

Overloading Identity Node with Initial State

For graphs where most entities change infrequently, or not at all, you might consider overloading each entity’s identity node with that entity’s initial state. You then add state nodes and relationships only with the first change to an entity’s state. If you adopt this strategy, you should continue to adjust structure at the level of the identity nodes, however, not the latest state nodes. Further, while it helps reduce the number of additional nodes and relationships in the graph, it does impose some additional cognitive overhead when visualising the resulting structure, because you now have to make sense of a structure in which current structural relationships are connected to historical initial state nodes.
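For example, assuming in this variation the Product identity nodes also carry name and price, one way of handling the first price change for product 1 is to leave the (now historical) initial state on the identity node and capture the new state on the entity’s first state node:

// The identity node keeps the initial state, which becomes historical;
// the new state is recorded on the entity's first state node
MATCH (p:Product{product_id:1})
CREATE (p)-[:STATE{from:1391212800000,to:9223372036854775807}]->
       (newState:ProductState{name:p.name,price:2.0})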

Overloading Identity Node with Current State

Following on from the idea of overloading the identity node with initial state, and as an alternative to using a CURRENT_STATE relationship, you might consider overloading the identity node with the current state of the entity, and building a linked list of previous states. The following diagram shows what our shop-product-supplier model would look like if we were to adopt this approach:

[Figure: alternate-solution]

At first glance, this looks simpler. There are fewer nodes, for starters. But it comes with its own compromises. Updating a node requires us not only to create a new node at the head of the linked list, but also to physically delete all the structural relationships attached to the old head of the list, before then recreating them on behalf of the new head of the list. The model is optimised for querying the current state of the graph, but at the expense of more costly writes and increased space on disk (because each time we recreate a batch of relationships, we add them to the end of the stores on disk). I’ll be looking at this model in more detail in a future post.
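In the meantime, here’s a very rough sketch of a price change for product 1 under this model. Cypher can’t copy relationships generically, so each structural relationship type needs its own clause; only SUPPLIED_BY is handled here, and the PREV relationship and the demotion of the old head are assumptions of mine:

MATCH (old:Product{product_id:1})
// the new head takes over the identity and the current state
CREATE (new:Product)-[:PREV]->(old)
SET new = old
SET new.price = 2.0
// the old head becomes a history node
REMOVE old:Product
WITH old, new
// move each structural relationship to the new head
// (incoming SELLS relationships would need the same treatment)
MATCH (old)-[supp:SUPPLIED_BY]->(u:Supplier)
CREATE (new)-[:SUPPLIED_BY]->(u)
DELETE supp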


Cypher: Calculating Shortest Weighted Path

June 24th, 2013  |  Published in Cypher, Neo4j by Ian Robinson

This example shows a simple solution for calculating the shortest weighted path between two nodes in a reasonably dense graph (say 30 relationships per node) using Cypher.

Consider the following graph:

[Figure: shortest-weighted-path]

Let’s say we want to calculate the shortest weighted path between A and I. All we have to do is sum the weights along each path. Doing this, we find that (A)->(B)->(C)->(I) is weight 15, (A)->(D)->(E)->(I) is weight 12, and (A)->(F)->(G)->(H)->(I) is weight 8. Therefore, (A)->(F)->(G)->(H)->(I) is the shortest weighted path, even though it has one more relationship than either of the other two paths.

Implementation

The Cypher to accomplish this is as follows:

START  startNode=node:node_auto_index(name={startNode}),
       endNode=node:node_auto_index(name={endNode})
MATCH  p=(startNode)-[:CONNECTED_TO*1..4]->(endNode)
RETURN p AS shortestPath, 
       reduce(weight=0, r in relationships(p) : weight+r.weight) AS totalWeight
       ORDER BY totalWeight ASC
       LIMIT 1

The summing of weights is accomplished using Cypher’s reduce function.

For modestly sized graphs with up to 30 relationships per node, this query performs reasonably well. For very large, extremely densely connected graphs, however, it’s not going to cut it: those types of graph require an implementation of a shortest weighted path algorithm such as Dijkstra or A*. Running on a MacBook Air against a graph comprising 1 million nodes, 12.5 million relationships, with between 4 and 20 outgoing CONNECTED_TO relationships per node (average 12), average execution time is 120 ms. Increasing the depth of the search and/or the density of the graph will increase query times.

This example has been tested against Neo4j 1.9. The code for the example can be downloaded from here.

Alternative implementation

Instead of using r in relationships(p) to iterate the relationships along each path, we could bind a path’s CONNECTED_TO relationships to an identifier instead, like this:

MATCH   p=(startNode)-[rels:CONNECTED_TO*1..4]->(endNode)
RETURN  p AS shortestPath, 
        reduce(weight=0, r in rels : weight+r.weight) AS totalWeight

My tests, however, show that using r in relationships(p) is faster than using identifiers.


Cypher: Adding a Node to a Linked List

June 20th, 2013  |  Published in Cypher, Neo4j by Ian Robinson

This example shows how to add a node to the head of a linked list using Cypher.

Let’s assume we have a graph containing an indexed node that can act as the list’s owner. What we want to do is build up a linked list, with the head of the list attached to the owner node using a HEAD relationship. When a new node is added to the list, it becomes the head node. The HEAD relationship is detached from the old head node and attached to this new head node. At the same time, the new head node is linked to the old head node using a PREV relationship.

The diagram below shows a linked list comprising three nodes. The list is attached to an owner node named ‘my-list’.

[Figure: linked-list]

Here’s the Cypher for adding a node to the head of a linked list:

START owner=node:node_auto_index(name={listName})
CREATE UNIQUE (owner)-[:HEAD]->(newHead{newElement})
WITH owner, newHead
MATCH (newHead)<-[:HEAD]-(owner)-[oldRel:HEAD]->(oldHead)
DELETE oldRel
CREATE (newHead)-[:PREV]->(oldHead)

To build the linked list shown above, we would execute this Cypher statement three times.
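If you want to try this in the Neo4j shell without binding parameters, you could first create the owner node (assuming node auto-indexing is configured for the name property):

CREATE (owner{name:'my-list'})

and then run a literal version of the statement once per element; the value property here is made up for the example:

START owner=node:node_auto_index(name='my-list')
CREATE UNIQUE (owner)-[:HEAD]->(newHead{value:'A'})
WITH owner, newHead
MATCH (newHead)<-[:HEAD]-(owner)-[oldRel:HEAD]->(oldHead)
DELETE oldRel
CREATE (newHead)-[:PREV]->(oldHead)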

How it works

The START clause looks up the node representing the list’s owner in an index. The CREATE UNIQUE clause then creates a new node and links it to the owner using a HEAD relationship. The {newElement} parameter here allows us to pass a map of the new node’s properties to the query.

[Figure: list-1]

Having created the new head node, we then pipe it and the owner node to a second query using WITH.

In this second query we try to match the list owner, the new head node and the old head node (if there is one):

[Figure: list-2]

If we find an old head node, we delete its HEAD relationship, because it is no longer valid. We then create a PREV relationship that links the new head node to this old head node:

[Figure: list-3]

The MATCH clause here is similar to a pattern matching clause in a functional language: if Cypher can match the (newHead)<-[:HEAD]-(owner)-[oldRel:HEAD]->(oldHead) pattern in the graph, it will invoke the subsequent DELETE and CREATE clauses; otherwise it will terminate without deleting or creating any further relationships.

This example has been tested against Neo4j 1.8.1 and 1.9. The code for the example can be downloaded from here.


Running the Graph Databases Book Examples

June 11th, 2013  |  Published in Graph Databases, Neo4j by Ian Robinson

The current version of Neo4j has two different types of index: named indexes and automatic indexes. The Cypher query examples throughout the Graph Databases book use named indexes. In this query, for example, venue, city and author are named node indexes:

START theater=node:venue(name='Theatre Royal'), 
      newcastle=node:city(name='Newcastle'), 
      bard=node:author(lastname='Shakespeare')
MATCH (newcastle)<-[:STREET|CITY*1..2]-(theater) 
      <-[:VENUE]-()-[:PERFORMANCE_OF]->()-[:PRODUCTION_OF]-> 
      (play)<-[:WROTE_PLAY]-(bard)
RETURN DISTINCT play.title AS play

There’s a problem, however: data created using a Cypher CREATE statement won’t be indexed in a named index. This has led to some confusion for anyone wanting to code along with the examples: if you use the CREATE statements as published, the query examples won’t work.

(You can find out more about Neo4j indexes in the documentation.)

If you want to code along with the examples, the easiest thing to do is create the data using Cypher and automatic indexes. You’ll need to modify the queries to use the automatic indexes, but the changes aren’t complicated.

Configure Automatic Indexes

The examples below show how to run the Shakespeare examples (pp.47-50) using automatic indexes.

Before you create any data, you’ll need to configure the automatic indexes. The approach you take will depend on whether you are running Neo4j in embedded or server mode.

Embedded mode

If running in embedded mode, configure the database at startup:

GraphDatabaseService graphDb = new GraphDatabaseFactory().
    newEmbeddedDatabaseBuilder( storeDirectory ).
    setConfig( GraphDatabaseSettings.node_keys_indexable, "name,lastname" ).
    setConfig( GraphDatabaseSettings.node_auto_indexing, "true" ).
    newGraphDatabase();

Server mode

If running in server mode and executing Cypher statements through the console or using the REST API, add the following to the conf/neo4j.properties file.

node_auto_indexing=true
node_keys_indexable=name,lastname
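
With auto-indexing configured, you’ll need some data to query. The following is a minimal sketch, not the book’s full data set, but it contains just enough (with made-up property values where the book’s aren’t shown here) for the modified queries below to return a result:

// Minimal Shakespeare data: one theatre, one play, one performance
CREATE (newcastle{name:'Newcastle'}),
       (theater{name:'Theatre Royal'}),
       (bard{lastname:'Shakespeare'}),
       (play{title:'The Tempest'}),
       (production{name:'The Tempest'}),
       (performance{date:20120729}),
       (theater)-[:CITY]->(newcastle),
       (performance)-[:VENUE]->(theater),
       (performance)-[:PERFORMANCE_OF]->(production),
       (production)-[:PRODUCTION_OF]->(play),
       (bard)-[:WROTE_PLAY{year:1610}]->(play)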

Modify the Queries

The Shakespeare queries should then be modified as follows. In each case we’ve replaced a named node index (e.g. node:author) with an automatic node index (node:node_auto_index).

p.47:

START theater=node:node_auto_index(name='Theatre Royal'), 
      newcastle=node:node_auto_index(name='Newcastle'), 
      bard=node:node_auto_index(lastname='Shakespeare')
MATCH (newcastle)<-[:STREET|CITY*1..2]-(theater)
      <-[:VENUE]-()-[:PERFORMANCE_OF]->()-[:PRODUCTION_OF]-> 
      (play)<-[:WROTE_PLAY]-(bard)
RETURN DISTINCT play.title AS play

p.48:

START theater=node:node_auto_index(name='Theatre Royal'),
      newcastle=node:node_auto_index(name='Newcastle'), 
      bard=node:node_auto_index(lastname='Shakespeare')
MATCH (newcastle)<-[:STREET|CITY*1..2]-(theater)
      <-[:VENUE]-()-[:PERFORMANCE_OF]->()-[:PRODUCTION_OF]-> 
      (play)<-[w:WROTE_PLAY]-(bard)
WHERE w.year > 1608
RETURN DISTINCT play.title AS play

p.49:

START theater=node:node_auto_index(name='Theatre Royal'), 
      newcastle=node:node_auto_index(name='Newcastle'), 
      bard=node:node_auto_index(lastname='Shakespeare')
MATCH (newcastle)<-[:STREET|CITY*1..2]-(theater)
      <-[:VENUE]-()-[p:PERFORMANCE_OF]->()-[:PRODUCTION_OF]-> 
      (play)<-[:WROTE_PLAY]-(bard)
RETURN play.title AS play, count(p) AS performance_count 
ORDER BY performance_count DESC

p.50:

START bard=node:node_auto_index(lastname='Shakespeare') 
MATCH (bard)-[w:WROTE_PLAY]->(play)
WITH play
ORDER BY w.year DESC
RETURN collect(play.title) AS plays

Download Working Examples

The code that we used to generate most of the examples for the book, including the use cases, can be found on my GitHub account.


Progressive .NET Tutorial

August 2nd, 2011  |  Published in Events, REST by Ian Robinson

On September 7th I’ll be running a half-day tutorial on building RESTful Web services as part of the Skills Matter and London .NET User Group’s Progressive .NET Tutorials.

This will be a very hands-on tutorial. Over the course of three and a half hours we’ll build a hypermedia-driven service and client in .NET using the new WCF Web APIs. The subject matter hearkens back to a favourite subject of mine, Hydras and Hypermedia, for the application we’ll build will be a simple text-based pick-your-path-to-adventure game.

This isn’t a tutorial on games design, however. Despite the non-enterprisey subject matter, the tutorial serves to illustrate how machine clients and services can cooperate to achieve a useful application goal. Business application analogies lurk around every corner and surprise you with every encounter.

The tutorial comprises three exercises. With each exercise you’ll have to fix a number of broken unit and functional tests. Bit by bit, you’ll build a working application:

  • Exercise 1: Here we implement the server resources that make up the dungeon – the chambers, tunnels and caves in which our daring client will venture.
  • Exercise 2: Next we build a crafty client that can discover a path through the dungeon. By the end of the exercise, the client ought to be able to navigate the dungeon from entrance to exit.
  • Exercise 3: Last, we add an element of danger, populating the dungeon with encounters (more resources) that the client must overcome before it achieves its application goal.

Along the way you’ll learn about:

  • The Atom Syndication Format
  • “What if” client-side intelligence
  • The client as arbiter of application state
  • Hypermedia controls: links, link relations and forms
  • DRY URIs

So come and join me for what promises to be an entertaining hack-n-slash workshop. Attendees should be familiar with C# and unit testing, and have some knowledge of HTTP.

Full details and registration for the entire event can be found here.
