Co-founder of Quickwit here. Seeing our acquisition by Datadog on the HN front page feels like a truly full-circle moment.
HN has been interwoven with Quickwit's journey from the very beginning. Looking back, it's striking to see how our progress is literally chronicled in our HN front-page posts:
- Searching the web for under $1000/month [0]
- A Rust optimization story [1]
- Decentralized cluster membership in Rust [2]
- Filtering a vector with SIMD instructions (AVX-2 and AVX-512) [3]
- Efficient indexing with Quickwit Rust actor framework [4]
- A compressed indexable bitset [5]
- Show HN: Quickwit – OSS Alternative to Elasticsearch, Splunk, Datadog [6]
- Quickwit 0.8: Indexing and Search at Petabyte Scale [7]
- Tantivy – full-text search engine library inspired by Apache Lucene [8]
- Binance built a 100PB log service with Quickwit [9]
- Datadog acquires Quickwit [10]
Each of these front-page appearances was a milestone for us. We put our hearts into writing those engineering articles, hoping to contribute something valuable to our community.
I'm convinced HN played a key role in Quickwit's success by providing visibility, positive feedback, critical comments, and leads who contacted us directly after a front-page post. This community's authenticity and passion for technology are unparalleled, and we're incredibly grateful for it.
Thank you all :)
Anyway, tantivy is great! I love pg_search https://www.paradedb.com/blog/introducing_search (which appears to be built by another company on top of tantivy; that kind of reuse is one of the great things about open source).
Now, I am worried about development stalling after this acquisition. How does further developing tantivy in the open help Datadog's bottom line?
Congratulations! The fact that you and your team managed to build Tantivy is a huge contribution to open source.
As someone who never managed to build a fond relationship with Apache Lucene-based products (Solr, Elastic), I was extremely happy to see Tantivy in open source.
BM25 scoring, proper Asian language support, speed, memory footprint, etc. - amazing job! Thank you so much!
If Tantivy itself just stays permanently under the Apache 2.0 license and finds a sustainable path to coexist with the rest of the open-source community, it's all good, guys. You more than deserve commercial success.
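For anyone who hasn't played with it yet, the core index-and-search loop looks roughly like this (a sketch from memory, with made-up field names and sample text; exact method signatures shift a bit between tantivy releases, so treat it as illustrative rather than copy-paste ready):

```rust
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::{doc, Index, TantivyDocument};

fn main() -> tantivy::Result<()> {
    // Define the schema: an indexed + stored title and an indexed-only body.
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT | STORED);
    let body = schema_builder.add_text_field("body", TEXT);
    let schema = schema_builder.build();

    // An in-RAM index is enough for a quick experiment.
    let index = Index::create_in_ram(schema.clone());
    let mut writer = index.writer(50_000_000)?; // ~50 MB indexing budget
    writer.add_document(doc!(
        title => "Quickwit joins Datadog",
        body  => "Tantivy is a full-text search engine library written in Rust."
    ))?;
    writer.commit()?;

    // Search: parse a query over both fields, take the top 10 hits by BM25 score.
    let reader = index.reader()?;
    let searcher = reader.searcher();
    let query = QueryParser::for_index(&index, vec![title, body]).parse_query("search engine")?;
    for (score, addr) in searcher.search(&query, &TopDocs::with_limit(10))? {
        let retrieved: TantivyDocument = searcher.doc(addr)?;
        println!("{score}: {}", retrieved.to_json(&schema));
    }
    Ok(())
}
```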
Well, it looks like Quickwit was going to add an Enterprise license as of earlier this year (PR #5529), which I had been keeping an eye on, but this announcement says they're instead going to relicense as Apache 2.0 so the "community can continue on":
> We will be focused on building a new product with Datadog, and to ensure our open-source community can continue, we will soon release a major update of both Quickwit with a relicense to Apache License 2.0 and tantivy.
So, it looks like we'll get a more liberally licensed Quickwit, but reading between the lines suggests its development might otherwise be winding down? It has been pretty nice and stable in my experience, so I can't really complain much. But I was really looking forward to what else it could bring.
"So, it looks like we'll get a more liberally licensed Quickwit, but reading between the lines suggests its development might otherwise be winding down?"
They will stop full-time day-to-day work on it themselves, probably because they have been reassigned to building a similar service, closed-source and integrated into DD, but it seems they want to open-source the current product under an OSI-compliant license in the hope that the community picks it up from there.
I think that's a nice trade. Could have been much worse.
By the way, also note that DD is not a total stranger in the OSS space. They actually open-sourced their observability pipeline tooling for general use as Vector, which is a rock-solid product: https://vector.dev/
OrioleDB continues to be fully open source and liberally licensed. We're working with the OrioleDB team to provide an initial distribution channel so they can focus on the storage engine rather than hosting, while we provide lots of user feedback/bug reports. Our shared goal is to advance OrioleDB until it becomes the go-to storage engine for Postgres, both on Supabase and everywhere else.
I don't want to hijack Datadog's and Quickwit's post comment section with unrelated promotional-looking info. Quick summary below, but if you have any other questions please tag olirice in a Supabase GH discussion.
The OrioleDB storage engine for Postgres is a drop-in replacement for the default heap access method. It takes advantage of modern hardware (e.g. SSDs) and cloud infrastructure. The most basic benefit is that throughput at scale is > 5x higher than heap [1], but it is also architected for a bunch of other cool stuff [2]: copy-on-write unblocks branching, and row-level WAL enables an S3 backend and scale-to-zero compute. The combination of those two makes it a suitable target for multi-master.
So yes, given that it could greatly improve performance on the platform, it is a goal to release it in Supabase's primary image once everything is buttoned up. Note that an OrioleDB release doesn't take away any of your existing options. It's implemented as an extension, so users would be able to create all heap tables, all orioledb tables, or a mix of both.
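To make the "mix of both" point concrete, here is a rough sketch of what that could look like from application code in Rust with the postgres crate. The connection string, table names, and columns are hypothetical, and it assumes a Postgres build that ships the orioledb extension:

```rust
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Hypothetical connection string; adjust for your environment.
    let mut client = Client::connect("host=localhost user=postgres dbname=app", NoTls)?;

    client.batch_execute(
        "
        CREATE EXTENSION IF NOT EXISTS orioledb;

        -- A regular table on the default heap access method.
        CREATE TABLE IF NOT EXISTS audit_log (
            id   bigserial PRIMARY KEY,
            note text
        ) USING heap;

        -- A table on the OrioleDB access method, side by side with the heap one.
        CREATE TABLE IF NOT EXISTS events (
            id      bigserial PRIMARY KEY,
            payload text
        ) USING orioledb;
        ",
    )?;

    // Both tables are queried through plain SQL as usual.
    client.execute("INSERT INTO events (payload) VALUES ($1)", &[&"demo event"])?;
    Ok(())
}
```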
Makes sense, perhaps the previous commenter thought OrioleDB was itself a database rather than an alternative storage engine underneath an existing one. That's what I thought before I went to their site.
> Mezmo recently put in production Quickwit to serve thousands of customers and petabytes of logs, drastically reducing infrastructure cost and complexity while delivering the same user experience.
I can't imagine they feel great about Quickwit getting bought by a competitor after that.
I hate Datadog. We use their name as an epithet at our company for how not to sell/market. Their selling tactics circa 2015-2018 completely burned us out: endless calls and emails. The icing on the cake was an AWS re:Invent presentation on Lambda right when Lambda was first announced. We were pumped to get in on Lambda early. Got the whole crew to attend the talk. It turned out to be a rudimentary copy of a Barr "Lambda up and running" blog post wrapped in a stand-up comedy routine, hawked by a Datadog employee who made sure to tell us he was a Datadog employee. Get us all drunk and happy so we'd think Datadog is cool.
Genuine question: has the company changed enough in the interim to deserve a second look?
The product itself is very good, but the sales process is truly awful. Random calls with non-technical reps unable to answer basic questions like, "now that you've added this to my GCP account for 2 weeks, how much is this going to cost?" They'd say they're not sure, but they have a startup deal where a $xxxx minimum commit for 12 months gets you two months of extra trial, cancel anytime, no questions asked. It's not just bad, it's comically bad.
Different person with similar stance: Those specific examples? Whatever.
We did get absolutely burnt by other manifestations of the Datadog approach: the billing model was (is?) very much not good, transparent, or predictable, and staying on top of costs was close to a nightmare. The way surprise costs and contract changes (triggered by them) were handled did not feel honest.
The product itself is great, but from my perspective it's absolutely not worth having to deal with their business side of things and the associated risks, costs (money, time, attention), and stress.
If I were a Quickwit customer I'd start looking for alternatives.
It made me extremely distrustful of any and all interactions I would have with an employee. Is every email I send to my rep going to turn into an upsell? Are they being straight with me in answering my question?
For me, as much as it pains me to admit this, the sales and account relationship process is just as important of a factor now. I'm at a level where I'm not the end user of most of the infrastructure I purchase for the business, but I'm the one that has to deal with most of the vendor interaction.
Datadog is a pain in the ass. I've got two emails and a voicemail from them just this week. We are not an active customer.
Heroku/Salesforce is also a pain in the ass. It causes enough friction with legal that I'll spend whatever effort it takes to replatform our workload just to avoid those unending inbound calls.
NS1 was easy-peasy, but post-IBM I now receive a PDF invoice for $50 once per month with no credit-card billing option and have to remind finance to cut a paper check. I'll be rehoming our DNS as soon as we decide where to move it.
tl;dr: the business experience is part of the product
There are multiple possible outcomes from the merger with Datadog.
As my ex-manager once told me: there is no such thing as nice people in a P&L statement. Someone has to pay.
It's very easy to be anxious and see the path to the dark side here.
However, one possible outcome is a viable open-source competitor to the Grafana ecosystem, and that alone would secure the rest of the scene from relicensing. There is a chance it will all be win-win, with a clear, sustainable path and no money or power struggles for the founders.
I want to stay positive here. Time will tell.
Loki by Grafana Labs is nice (https://grafana.com/oss/loki/). There was a time (3+ years ago) when the product was changing pretty rapidly and much of the documentation lived only in the git repo, so we had a few headaches doing minor version bumps, but I believe it's much more mature now.
> Organizations in financial services, insurance, healthcare, and other regulated industries must meet stringent data residency, privacy, and regulatory requirements while maintaining full visibility into their systems. This becomes challenging when logs need to remain at rest in customers’ environments or specific regions, hindering teams’ ability to attain seamless observability and insight. To help our customers meet these requirements without sacrificing visibility or introducing multiple logging tools, we are pleased to announce that Quickwit—a popular open source distributed search engine—is joining Datadog.
We switched from Datadog to Grafana (do not recommend unless they got you over a barrel on pricing and you need to escape) and one nice thing Grafana gives you is the ability to self-host for local development so you can even run integration tests against your observability... an edge case need but if you need it you're glad it has it.
I’ve historically been a pretty big fan of Grafana, I’ve advocated for the cloud solution at more than one company.
But it seems like business development has utterly hijacked the experience.
The flow you want out of the box is Prometheus, Loki, and Tempo, with one button that drops you the config for grafana-agent (now Alloy, which seems good technically but brings a whole new config language with some truly insane discoverability problems) and puts graphs on screens; you build up from there.
But these days everything is some complicated co-sell, up-sell, click-farming hedge maze through 90 kinds of half-baked cloud-vendor rip-offs.
Graphs, logs, traces out of the box. Put all the "works with Snowflake" shit behind an icon. A small one.
Not OP, but I looked at doing self-hosted Grafana for similar reasons. The tooling is too spread out across different installables, the common golden signals and other monitoring metrics have a steep learning curve (a cliff, more like), and there isn't good enough documentation to take the user through "I want to run a synthetics test on a service to see if it is alive and show the results of that test in a graph"... a journey like that involves 4+ different tools.
DD was just easier to use for everybody, has lots of useful baked-in things we liked to use (apdex scores), and was intuitive enough that non-devs could design their own dashboards. We also found it way easier to collect metrics, traces, and other things.
FWIW none of these things are insurmountable and I suspect you'll eventually reach parity. Datadog lost our business for two reasons: 1) lack of billing transparency, and 2) an incompetent account rep who managed to piss off our finance department while also embarrassing our CTO. And to be clear, while we aren't a "whale", our spend with Datadog was over $6M/yr - and the CTO, along with the rest of eng leadership, were all huge DD fanboys, and yet they still managed to burn that bridge to the point where we'll never go back.
> it looks like they're hinting towards offering a 'self-hosted' model
That makes sense. Datadog has been pure SaaS the whole time, which is unusual. Buying a good db engine like Quickwit would be a smart head-start into the on-prem segment which is a natural expansion opportunity.
I've previously made the prediction that Datadog is the new Cisco - can expect lots of acquisitions to be made going forward.
I haven't kept super close tabs on it but last year we were hiring for a role to do tech lead stuff and OSS community building for Vector, and yes several of the original Vector employees still work here.
They are planning to let you keep your logs in your own datacenter/cloud, with something like a proxy there (or something built into Quickwit) so that your logs show up in the Datadog UI.
My guess is you will be billed per gig or something, but nowhere near the cost of shipping your logs to DD.
I integrated Quickwit into our o11y platform. It was great tech with lots of promise, and now it's dead. Yet another reason to never do business with Datahog.
> This summer, the wind started to turn. We witnessed stronger open-source traction, our revenue increased dramatically, and VCs became more insistent. It was time for us to open a new chapter for the company and raise a series A round.
Rhetorically, why was it time for this?
Practically, the answer is right there: the VCs wouldn't accept a mere rapidly-growing company with great tech. It's either an up round so they can mark up the value on their portfolio, or if the market isn't hot enough for a high-priced Series A, force an exit.
Do any engineers actually like DD? Execs/managers seem to love DD, get upsold on it all the time, and ask engineers to implement some of its half-baked features. It seems like it's good for alerts and dashboards for infrastructure teams. But as an engineer it's a pain, and using it for log analysis is much more annoying than just tracking down the actual relevant logs and grepping.
Easily the best product in its category. The value is in APM/tracing - if you're just using it for logs you're missing a lot.
If you're used to traditional Enterprise pricing it's fairly priced for the value you get but anybody coming from self-funded or VC it's very expensive. If you're already using Splunk you can afford it.
One of its best features is that it's priced by consumption, not by seats, so it's easy to open up to all of eng, including product/QA teams and not just devs, which is great for breaking down barriers.
It's a bit like AWS in that it's a platform - use of one product tends to encourage using more from their suite.
I used to log large apps using Kibana and Elasticsearch, and also the clusterfuck that is every AWS attempt at this (CloudWatch, Logs Insights, and whatnot).
Nothing compares to what DD gives you in observability.
Having said that, DD should really just be an AWS feature. They should buy them for a couple of billion and integrate it as a service across all of AWS infrastructure.
It's good, I like it. I don't use the logs product (we Splunk for that), but I'm a big fan of the automatic profiling and trace/span stuff. It's like 50% better UX compared to other tools I've tried. But it's expensive enough that we're always thinking about moving off it. That will be a very sad day for me.
[0] https://news.ycombinator.com/item?id=27074481
[1] https://news.ycombinator.com/item?id=28955461
[2] https://news.ycombinator.com/item?id=31190586
[3] https://news.ycombinator.com/item?id=32674040
[4] https://news.ycombinator.com/item?id=35785421
[5] https://news.ycombinator.com/item?id=36519467
[6] https://news.ycombinator.com/item?id=38902042
[7] https://news.ycombinator.com/item?id=39756367
[8] https://news.ycombinator.com/item?id=40492834
[9] https://news.ycombinator.com/item?id=40935701
[10] https://news.ycombinator.com/item?id=42648043