New graph DBs implemented with the GraphBLAS linear algebra model will be orders of magnitude faster than previous-generation DB engines. RedisGraph 1.0 is the first publicly available GraphBLAS database implementation, and things are about to get even faster with the GraphBLAS GPU implementations in the works.
I'd never seen any advantage to graph DBs over relational models until I saw this talk. Raising graph analysis to the level of linear algebra is brilliant.
> Raising graph analysis to the level of linear algebra is brilliant.
Adjacency matrices are how graph problems were handled by APL programmers in the 1980s, mostly because there was no alternative before nested arrays. At the time there wasn't much vectorized hardware, and main memory sizes were too small for many problems, so the adjacency matrix representation was more of a problem than a good solution. It's really the advances in vectorized hardware and main memory sizes that have made this technique practical for large graphs.
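To make that concrete, here's a minimal sketch of BFS as linear algebra using the SuiteSparse:GraphBLAS C API (my own toy illustration, not code from the talk; the 4-vertex graph, the start vertex, and the descriptor/semiring names are all just assumptions for the example). The frontier is a sparse boolean vector q, the graph is its adjacency matrix A, and one BFS level is q = q*A over the (OR, AND) semiring, masked so already-visited vertices are dropped:

    #include <GraphBLAS.h>
    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        GrB_init(GrB_NONBLOCKING);

        // toy directed graph: 0->1, 0->2, 1->3, 2->3
        GrB_Index n = 4;
        GrB_Matrix A;
        GrB_Matrix_new(&A, GrB_BOOL, n, n);
        GrB_Index ri[] = {0, 0, 1, 2}, ci[] = {1, 2, 3, 3};
        bool vals[] = {true, true, true, true};
        GrB_Matrix_build_BOOL(A, ri, ci, vals, 4, GrB_LOR);

        GrB_Vector q, visited;                    // frontier, visited set
        GrB_Vector_new(&q, GrB_BOOL, n);
        GrB_Vector_new(&visited, GrB_BOOL, n);
        GrB_Vector_setElement_BOOL(q, true, 0);   // start BFS at vertex 0

        GrB_Index nvals = 1;
        while (nvals > 0) {
            // fold the frontier into the visited set: visited |= q
            GrB_Vector_eWiseAdd_BinaryOp(visited, GrB_NULL, GrB_NULL,
                                         GrB_LOR, visited, q, GrB_NULL);
            // one BFS level: q = q*A over the boolean (OR, AND) semiring;
            // complemented mask + replace keep only unvisited vertices
            GrB_vxm(q, visited, GrB_NULL, GxB_LOR_LAND_BOOL, q, A, GrB_DESC_RC);
            GrB_Vector_nvals(&nvals, q);
        }

        GrB_Index reached;
        GrB_Vector_nvals(&reached, visited);
        printf("reached %lu of %lu vertices\n",
               (unsigned long) reached, (unsigned long) n);

        GrB_Vector_free(&q);
        GrB_Vector_free(&visited);
        GrB_Matrix_free(&A);
        GrB_finalize();
    }

Each BFS level is one sparse vector-matrix product, which is exactly the operation that vectorized hardware and large memories are good at -- that's the connection to the hardware point above.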
Yes, a lot of innovation had to occur to make the GraphBLAS linear algebra model practical; it's been in the works for more than 10 years. It began with Jeremy Kepner and John Gilbert's work formalizing the linear algebra model over semirings [1], and continued with Intel, IBM, Nvidia, and the other hardware vendors defining and implementing a standard set of hardware primitives. But you could really see the stars align a few years ago when the deep-learning ML wave hit, because it paved the way with a surge in demand for GPU/TPU accelerators in the data center. A lot of things had to happen to make this all come together. It's been an industry-wide effort.
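The semiring formalization is the key trick: the same matrix product computes a different graph algorithm depending on which add/multiply pair you plug in. Swap the boolean (OR, AND) semiring for (min, +) and reachability becomes shortest paths. A hedged sketch of Bellman-Ford-style relaxation (again my own illustration with a made-up weighted graph, not code from the papers):

    #include <GraphBLAS.h>
    #include <stdio.h>

    int main(void) {
        GrB_init(GrB_NONBLOCKING);

        // weighted digraph: A(i,j) = weight of edge i->j
        GrB_Index n = 4;
        GrB_Matrix A;
        GrB_Matrix_new(&A, GrB_FP64, n, n);
        GrB_Index ri[] = {0, 0, 1, 2}, ci[] = {1, 2, 3, 3};
        double w[] = {1.0, 4.0, 2.0, 1.0};
        GrB_Matrix_build_FP64(A, ri, ci, w, 4, GrB_PLUS_FP64);

        // d(j) = best-known distance from source vertex 0 to j
        GrB_Vector d;
        GrB_Vector_new(&d, GrB_FP64, n);
        GrB_Vector_setElement_FP64(d, 0.0, 0);

        // relax n-1 times: d = min(d, d (min.+) A) -- the same vxm call
        // as BFS, only the semiring changed from (OR, AND) to (min, +)
        for (GrB_Index k = 1; k < n; k++)
            GrB_vxm(d, GrB_NULL, GrB_MIN_FP64, GxB_MIN_PLUS_FP64, d, A, GrB_NULL);

        for (GrB_Index j = 0; j < n; j++) {
            double dist;
            if (GrB_Vector_extractElement_FP64(&dist, d, j) == GrB_SUCCESS)
                printf("dist(0 -> %lu) = %g\n", (unsigned long) j, dist);
        }

        GrB_Vector_free(&d);
        GrB_Matrix_free(&A);
        GrB_finalize();
    }

That one-parameter swap is what the semiring abstraction buys you, and it's the same abstraction the standard hardware primitives target.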
Here's the Redis Day London Nov '18 launch video [1] of RedisStreams and RedisGraph 1.0 GA -- their benchmarks [2] show it to be up to 600X faster than previous-gen graph DBs (and that's before the coming parallel/distributed GraphBLAS GPU implementations that are in the works)...
And it's not just graphs... as referenced in the linked comments above, there are now linear algebra models that encode Datalog and even the entire typed lambda calculus as matrix operations (and, as shown in the Datalog paper [1], the linear algebra implementation is the fastest Datalog implementation to date).
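For a flavor of how the Datalog encoding works (my own toy reconstruction of the general idea, not code from the paper [1]): the classic recursive program for reachability is transitive closure, which in linear algebra is just a boolean matrix fixpoint:

    // Datalog:  path(x,y) :- edge(x,y).
    //           path(x,y) :- path(x,z), edge(z,y).
    // i.e. transitive closure: iterate T := T OR (T x E) to a fixpoint
    #include <GraphBLAS.h>
    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        GrB_init(GrB_NONBLOCKING);

        // edge relation as a boolean matrix: 0->1, 1->2, 2->3
        GrB_Index n = 4;
        GrB_Matrix E;
        GrB_Matrix_new(&E, GrB_BOOL, n, n);
        GrB_Index ri[] = {0, 1, 2}, ci[] = {1, 2, 3};
        bool vals[] = {true, true, true};
        GrB_Matrix_build_BOOL(E, ri, ci, vals, 3, GrB_LOR);

        GrB_Matrix T;                 // path relation
        GrB_Matrix_dup(&T, E);        // base case: path(x,y) :- edge(x,y)

        GrB_Index prev = 0, nvals;
        GrB_Matrix_nvals(&nvals, T);
        while (nvals != prev) {       // fixpoint: no new facts derived
            prev = nvals;
            // recursive rule: T |= T x E over the (OR, AND) semiring
            GrB_mxm(T, GrB_NULL, GrB_LOR, GxB_LOR_LAND_BOOL, T, E, GrB_NULL);
            GrB_Matrix_nvals(&nvals, T);
        }

        printf("path/2 has %lu facts\n", (unsigned long) nvals);

        GrB_Matrix_free(&T);
        GrB_Matrix_free(&E);
        GrB_finalize();
    }

The paper's actual encoding is more involved than this, but as I understand it, reducing recursive rule application to sparse matrix products is the core of the idea.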
But MIT and Sandia Labs are taking the linear algebra model to the next level, and are now working on encoding an entire operating system in the language of linear algebra...
Jeremy Kepner (head of the MIT Lincoln Laboratory Supercomputing Center and GraphBLAS lead) and his team just published a paper [2] in which they define an entire Unix operating system using the same linear algebra model they used for D4M/GraphBLAS. The linear-algebra OS model scales linearly well beyond the Linux limits, and since the entire OS kernel is represented as generic matrix transformations, it can run on any processor: CPUs, GPUs, or a cluster of TPUs.
IBM also has a GraphBLAS [1] implementation in the works [2], and Jenna Wise [3], a PhD candidate at CMU, spent this past summer at IBM working on a formal verification proof of the GraphBLAS code [4].
See previous discussions on GraphBLAS: https://hn.algolia.com/?query=GraphBLAS&sort=byPopularity&pr...