Show HN: Simple-graph – a graph database in SQLite (github.com/dpapathanasiou)
237 points by dpapathanasiou on Dec 26, 2020 | 42 comments


Interesting. SQLite is awesome.

I did something similar recently: a block store for a Rust implementation of IPFS, which models a directed acyclic graph of content-addressed nodes.

https://github.com/actyx/ipfs-sqlite-block-store

I found that performance is pretty decent if you do almost everything inside SQLite using WITH RECURSIVE.

The documentation has some really great examples for WITH RECURSIVE. https://sqlite.org/lang_with.html
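
A minimal reachability sketch, assuming a hypothetical edges(source, target) table (using UNION rather than UNION ALL means identical rows are not re-enqueued, so this terminates even on cyclic graphs):

    WITH RECURSIVE reachable(id) AS (
      SELECT 1                             -- assumed start node
      UNION
      SELECT e.target
      FROM edges e JOIN reachable r ON e.source = r.id
    )
    SELECT id FROM reachable;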


The issue I found with WITH RECURSIVE queries is that they're incredibly inefficient for anything but trees. I've looked around and there doesn't seem to be any way to store a global list of visited nodes. This means that when performing a traversal of the graph, the recursive query will follow all paths between two nodes.


I'm pretty sure you could also maintain a temp table and use some kind of "insert...where...returning" construct to squeeze that into a recursive query.

At a moderate overhead you could also return all seen nodes, flagged as such, as part of your intermediate data at each recursive step.
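
As a rough illustration of that idea (assuming the same hypothetical edges(source, target) table), you can carry the path seen so far in each row and prune with instr(); note this only prevents revisits along a single path, not across paths:

    WITH RECURSIVE walk(id, path) AS (
      SELECT 1, '/1/'                                        -- assumed start node
      UNION ALL
      SELECT e.target, walk.path || e.target || '/'
      FROM edges e JOIN walk ON e.source = walk.id
      WHERE instr(walk.path, '/' || e.target || '/') = 0     -- skip ids already on this path
    )
    SELECT DISTINCT id FROM walk;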

The Postgres query optimizer struggles with recursive queries even when they're well suited to the problem, though. Are they actually efficient in SQLite, even for trees?


I would say they are reasonably efficient.

Of course it's many orders of magnitude slower than keeping it all in in-memory maps and doing the traversal there, but fast enough not to be a limiting factor.

Traversing a medium depth DAG with a million nodes to find orphaned nodes takes less than a second on average hardware.

One thing to be aware of is that SQLite has lots of tuning options, and they are all set to very conservative values by default.

E.g. the default synchronous setting is FULL, which means that SQLite flushes all the way to disk after each write. The default cache size is tiny.

With a bit of tuning you can get quite decent performance out of SQLite while still having full ACID guarantees, or very good performance for cases where you can compromise on the ACID stuff.
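
For reference, a typical tuning block might look something like this (a sketch; the right values depend on the workload and on how much durability you're willing to give up):

    PRAGMA journal_mode = WAL;      -- write-ahead log instead of the default rollback journal
    PRAGMA synchronous = NORMAL;    -- fsync less often than the default FULL
    PRAGMA cache_size = -64000;     -- roughly 64 MB of page cache (negative value = KiB)
    PRAGMA temp_store = MEMORY;     -- keep temp tables and indices in RAM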

I have not yet found a situation where nosql databases like leveldb offer an orders of magnitude advantage over SQLite, and SQLite is so much more powerful and robust...

https://danluu.com/file-consistency/


> Traversing a medium depth DAG with a million nodes to find orphaned nodes takes less than a second on average hardware.

Unless you have an abnormally high edge count, that sounds super slow to me. Even accounting for metadata overhead and disk page slop you're only reading and processing tens of megabytes, and every algorithm in sight is linear. I'd be surprised if you couldn't get a 2-5x speedup by reading the whole table into RAM in your favorite compiled/jitted language and just traversing it there.

> I have not yet found a situation where nosql databases like leveldb offer an orders of magnitude advantage over SQLite, and SQLite is so much more powerful and robust...

I have no skin in that game, but would some of the nosql solutions not perform significantly better under heavily concurrent insertions and the other workloads they were designed for?


> I'm pretty sure you could also maintain a temp table and use some kind of "insert...where...returning" construct to squeeze that into a recursive query.

I'm not sure if this is possible in SQLite; as far as I know, the WITH clause is limited to SELECT statements.

> Are they actually efficient in sqlite even for trees?

Recursive common table expressions work by adding returned rows to a queue and then performing the recursive select statement independently on each row in the queue until it's empty.

You can use WITH RECURSIVE to traverse a tree by adding the root node to the queue and recursively visiting adjacent rows until the queue is empty. This works correctly and quickly because a tree has only a single path between any two nodes. If you try the same query on a DAG, though, it will return every path to a given node, and you then have to perform a GROUP BY outside of the recursive query to find the shortest path. In the worst case, if you have a graph with many paths between nodes, this method is exponentially slower than a standard BFS.
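
To make that concrete, here's a sketch of the DAG case over a hypothetical edges(source, target) table: as soon as each row carries per-path data such as depth, rows for different paths are no longer identical, so every path gets enumerated and the deduplication has to happen afterwards:

    WITH RECURSIVE walk(id, depth) AS (
      SELECT 1, 0                            -- assumed root node
      UNION ALL
      SELECT e.target, walk.depth + 1
      FROM edges e JOIN walk ON e.source = walk.id
    )
    SELECT id, MIN(depth) AS shortest        -- dedup only after every path has been enumerated
    FROM walk
    GROUP BY id;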


Tangentially related to graph dbs, but if you're looking for more hierarchical support, SQLite does have a transitive closure extension[0] that might be of some assistance. I leveraged this back in 2014 to write our framework-agnostic result storage on AWS Device Farm.

[0] - https://www.sqlite.org/cgi/src/artifact/636024302cde41b2bf0c...

[1] - https://charlesleifer.com/blog/querying-tree-structures-in-s...
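
For anyone curious, usage of the closure extension looks roughly like this (a sketch based on the comments in closure.c; the table and column names here are made up):

    -- declare a closure virtual table over an existing parent/child table
    CREATE VIRTUAL TABLE node_closure USING transitive_closure(
      tablename    = 'nodes',
      idcolumn     = 'id',
      parentcolumn = 'parent_id'
    );

    -- all descendants of node 1, at most two levels deep
    SELECT n.*
    FROM nodes n JOIN node_closure c ON n.id = c.id
    WHERE c.root = 1 AND c.depth <= 2;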


I've been working on an extension to perform breadth-first search queries in SQLite on general graphs [0]. The extension is actually based on the transitive closure extension. You can use it on any existing SQLite database as long as you can wrangle your edges into a single table (real or virtual) and the node ids are integers (I'm planning on removing this constraint in the future).

[0]: https://github.com/abetlen/sqlite3-bfsvtab-ext



A more recent effort: https://cgsql.dev/


I just read through the docs of this; what an amazing project.

I'm considering doing a JS template string implementation for Node... a cql`...` type thing with an internal compilation cache.


Thanks


I really like this, OP. I'm a member of the clan "why are you creating your own XDR, just use sqlite!!" and have oft jumped to that in technical discussions, so I appreciate it.

However, what's lacking from something like this is a detailed bill of the cost. I'd love to see some benchmark, any benchmark, on a DB with > 10^6 edges to see how it goes. That's the other side of the equation "just use sqlite and be happy" -- the expectation that performance will actually be reasonable.



How does this perform compared to a “native” graph database like Neo4J?


It really depends on what you want to do with it.

I would benchmark the tasks "traversal", "aggregation" and "shortest past" for a 10k to 10M node graph. Anything under 10k would be good enough with most techs, and over 10M you need to consider more tasks (writes, backups, the precise fields queried), which become their own problems at larger scale.

The GitHub link implements "traversal" in Python instead of pure SQLite. I suspect it will be around 10x slower than it could be with the same tech stack, because it queries once per node from Python to SQLite. Shortest path is not implemented and would be too slow to be useful in an interactive environment. "Aggregation" is also not implemented, but it would perform admirably, because SQL is good at that.

Traditional relational OLTP databases such as Postgres are already faster than dedicated graph databases for certain graph related tasks, according to this benchmark: https://www.arangodb.com/2018/02/nosql-performance-benchmark...


> Traditional relational OLTP databases such as Postgres are already faster than dedicated graph databases for certain graph related tasks

It is indeed quite common that relational databases outperform graph databases on certain graph processing problems such as subgraph queries (a.k.a. graph pattern matching). There are two key reasons for this: (1) most graph pattern matching operations can be formulated using relational operations such as natural joins, antijoins, and outer joins; and (2) relational databases have been around longer and have well-optimized operators.
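
As a toy example of (1), a two-hop "friends of friends" pattern over a hypothetical edges(source, target) table is just a self-join plus an antijoin:

    SELECT DISTINCT e2.target
    FROM edges e1
    JOIN edges e2 ON e1.target = e2.source          -- two hops out from node 1
    WHERE e1.source = 1
      AND e2.target <> 1
      AND e2.target NOT IN (                        -- antijoin: exclude direct neighbours
        SELECT target FROM edges WHERE source = 1);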

A lot of the value that graph databases provide lies in their query languages which (for most systems) allow formulating path queries using a nice syntax (unlike SQL's WITH RECURSIVE which many people find difficult to read and write). Their property graph data model supports a schema-optional approach, which makes them better suited for storing semi-structured data. They also "provide efficient programmatic access to the graph, allowing one to write arbitrary algorithms against them if needed" [1].

With all that said, graph databases could be much faster on subgraph queries than relational databases, and there are recent research results on the topic (worst-case optimal joins, A+ indexes, etc.). But these are not available in any production system yet.

[1] http://wp.sigmod.org/?p=1497


> "shortest past"

shortest path typo, right?


The Open Shortest Past First protocol is used to resolve temporal paradoxes.


Neo4j has failed queries I have written, with "out of memory" errors. I have never, ever, ever gotten that from SQLite.


Performance issues are a very valid discussion. But to me, the availability of a graph-oriented query language on top of this graph variant of SQLite is the very first step to investigate. (RDF import/CSV import being next)


There has been a lot of progress on creating standardized query languages for graphs. The two most notable ones are [2]:

- SQL/PGQ, a property graph query extension to SQL, is planned to be released next year as part of SQL:2021.

- GQL, a standalone graph query language, will follow later.

While it is a lot of work to design these languages, both graph database vendors (e.g. Neo4j, TigerGraph) and traditional RDBMS companies (e.g. Oracle [2], PostgreSQL/2ndQuadrant [3]) seem serious about them. And with a well-defined query language, it should be possible to build a SQL/PGQ engine in (or on top of) SQLite as well.
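
For a flavour of the syntax, a SQL/PGQ pattern match is expected to look roughly like this (a sketch based on published drafts and early vendor previews; the graph name, labels and properties are invented):

    SELECT friend
    FROM GRAPH_TABLE (my_graph
      MATCH (a IS Person)-[IS knows]->(b IS Person)
      WHERE a.name = 'Alice'
      COLUMNS (b.name AS friend)
    );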

[1] https://www.linkedin.com/pulse/sql-now-gql-alastair-green/

[2] http://wiki.ldbcouncil.org/pages/viewpage.action?pageId=1062...

[3] https://www.linkedin.com/pulse/postgresql-oracle-graph-query...


Have SPARQL and Gremlin not seen adoption as standard graph traversal languages? They're the two names that spring to mind when I think "graph querying".


I second that. I have not followed the news about the Gremlin-to-SPARQL (or SPARQL-to-Cypher) bridge. But afaiu, making your graph system Gremlin-compatible is a first step in the right direction. (And yes, doing that on top of a SQL backend doesn't sound that natural.)


Both SPARQL and Gremlin have been adopted to some extent. SPARQL is a W3C standard and Gremlin is reasonably well-specified (it has good documentation and a reference implementation), so it's possible to implement a functionally correct SPARQL/Gremlin engine with a reasonable development effort.

Gremlin's main focus is defining traversal operations on property graphs. While it supports pattern matching [1], IMHO its syntax is not as clean as Cypher's. Gremlin queries are also difficult to optimize: while it is possible to define traversal rewrite rules, they are more involved than relational optimization rules. The fact that most open-source Gremlin implementations are focusing on distributed setups (e.g. a typical deployment of Titan/JanusGraph runs on top of Cassandra) has also implications on single-machine performance, which certainly did not help the adoption of Gremlin -- but this is not necessarily the problem of the query language. Overall, Gremlin is great for workloads where complex single-source traversal operations do the bulk of the work but it's less well-suited to global pattern matching queries such as the ones in the LDBC Social Network Benchmark's BI workload [2].

SPARQL focuses on the graph problems of the "semantic web" domain, which include not only pattern matching but semantic reasoning/inferencing. One can use it for pattern matching queries but with the following caveats:

- Its data model is based on triples so if one wants to return a node and its attributes (properties), one has to specify each of these attributes explicitly.

- On the execution side, returning these attributes might necessitate executing a number of self-join operations.

- Many SPARQL implementations also have performance limitations due to the extra complexity introduced by self-joins, lack of intra-query parallelism, etc.

The "RDF* and SRARQL* approach" is an initiative to amend the self-join problem by introducing nested triples in the data model. It's currently being worked on by a W3C community group [3]. SPARQL also has "property paths", which allows regular path queries, i.e. traversals where the node/edge labels confirm some regular expression (the "property" in "property paths" has nothing to do with "property graphs").

SQL/PGQ and GQL target the property graph data model and support an ASCII-art-like syntax for pattern matching queries (inspired by Cypher). They also offer some graph traversal operations (including shortest path and regular path queries). Additionally, GQL supports returning graphs, so its queries can be composed.

[1] https://en.wikipedia.org/wiki/Gremlin_(query_language)#Decla...

[2] https://ldbc.github.io/ldbc_snb_docs/workload-bi-reads.pdf

[3] https://blog.liu.se/olafhartig/2019/01/10/position-statement...


Isn't it like asking "how does SQLite perform compared to databases like PostgreSQL"?

SQLite is used a lot on the edge (mobile apps, ...); it sounds like this project provides a graph database for the very same use case (I probably won't run Neo4j on mobile).


IMO it’s a different question. SQLite and Postgres are both relational databases, it stands to reason that they’re at least doing things in similar ways. They’re two implementations of the same idea(ish). A graph database is something else altogether. Grafting that capability onto a relational database has the potential to perform horribly.

It’s a bad analogy, but SQLite to Postgres is like AMD vs Intel x86 CPUs, whereas a graph database is ARM. Can it be emulated? Yes. Is there a far greater potential for slowdown? Yes.


I think the big difference here is that when comparing SQLite you are at least using the same query language.

In the graph space you have Gremlin, Cypher, GQL and many other proprietary query engines (which also looks to be the case here).

Without that accessibility this feels a bit like pickling a NetworkX object.


Then maybe no such questions are necessary.


It depends on how the graph is stored in the database. In this project the node ids are TEXT, so it will likely not scale very well. I know because I've used a similar implementation with GUIDs as strings in SQLite in a project for a couple of years, and while it works fine for the graph I have (<1 million nodes, a few edges per node), it won't perform too well past that.


To some extent I think it depends on what data you're storing in the graph, i.e. if it's temporal data, using a ULID instead of a GUID speeds things up significantly (30x for large data sets) as your ids are not as fragmented.

https://github.com/schinckel/ulid-postgres/blob/master/ulid....


Thanks for the info. Do you happen to have some stats to share?


I wonder if there are ways, in SQLite, to build indices for s,p,o / s,p / p,o and maybe more subtle ones... That would be uber nice, given the fact that most graph databases have their own indexing strategies, and you cannot craft your own.
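
Declaring them by hand seems straightforward enough, e.g. over a hypothetical triples(s, p, o) table; note that an index on (s, p, o) also covers lookups on the (s) and (s, p) prefixes:

    CREATE TABLE IF NOT EXISTS triples (s TEXT, p TEXT, o TEXT);
    CREATE INDEX IF NOT EXISTS idx_spo ON triples (s, p, o);
    CREATE INDEX IF NOT EXISTS idx_pos ON triples (p, o, s);
    CREATE INDEX IF NOT EXISTS idx_osp ON triples (o, s, p);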


I saw this lecture some time back on the topic of implementation and tradeoffs https://www.youtube.com/watch?v=Dxwo9DYWV_c


rdflib-sqlalchemy is a SQLAlchemy rdflib graph store backend: https://github.com/RDFLib/rdflib-sqlalchemy

It also persists namespace mappings so that e.g. schema:Thing expands to http://schema.org/Thing

The table schema and indices are defined in rdflib_sqlalchemy/tables.py: https://github.com/RDFLib/rdflib-sqlalchemy/blob/develop/rdf...

You can execute SPARQL queries against SQL, but most native triplestores will have a better query plan and/or better performance.

Apache Rya, for example:

> indexes SPO, POS, and OSP.


Thanks for your comment. I use rdflib frequently but have never tried the SQLAlchemy back end. Now I will. That said, Jena or Fuseki, or the commercial RDF stores like GraphDB, Stardog, and Allegrograph are so much more efficient.


Isn't the whole point of graph databases that they can traverse graph edges efficiently by following pointers to nodes, which relational databases can't do? Then it seems a bit strange to implement a graph database on top of a relational database like SQLite?


Tongue in cheek answer, but: No. That is the whole point of "efficient" graph databases. The point of a "graph database" in the more general sense is simply to be a database that uses a graph paradigm.

This is a silly pedantic point to make, but it is not necessarily trivial. E.g. it may be the case that a particular use case does not require massive efficiency, and has a lot to gain from the simplicity of SQLite. In which case this kind of project is an amazing thing to exist.

And if there is a way to get a valid benchmark comparison against a more traditional "efficient" graph database, then informed decisions can be made.

As a personal anecdote, a friend and I based a graph-based project on neo4j and were very happy ... until it was time to deploy. We then realised the installations involved were highly complex, rarely supported on traditional webhosts, and costs involved for adopting 'formal' commercial solutions were highly prohibitive. Had we known about this project at the time we would have definitely used it instead (at least as a proof of concept; you can always switch to a more efficient database later if you really have to)


Just a quick side question: Why not deploy with Docker?

My latest API+multiple frontends application uses Neo4j as the only database and we deployed with Docker (compose) with great success. With the config in git we were able to do the traditional test-new-versions-on-a-branch-before-deploy and everything is solid.


It was more along the lines of a website powered by an online database, rather than anything to do with a locally installable client. The webhost provider did not support neo4j, the project was a hobby project so we did not want to buy a professional virtual server, and commercial neo4j-online solutions were prohibitive for what we wanted to do.

I haven't used docker much, but I don't know how it could help here (unless you misunderstood and were referring to locally installable software).


I can see the struggle if the requirement was to have it run on the same servers as the existing website, but any linux (or windows) server can run docker containers.

We just ran the backend and web frontend on a single Digital Ocean droplet, scaling as needed (we started with the $5/mo and it was fine)


Awesome! And you can write SQL queries on the data. Amazing, SQLite is the best database.



