Why the indirection on the pricing page? Like, if a unit is 1,000 searches/month, why not just put that in the matrix?
Free = 10,000 searches/month.
Standard = $1 per 1,000 searches.
...
Pay as you go:
1,000 - 10,000 searches = Free
11,000 - 100,000 searches = $1.00 / 1,000 searches
The whole searches-to-units conversion really just adds a level of indirection that I don't understand. This was further confused by units having different colored dots depending on the plan, making me think there were three different kinds of units.
A slider would be nice, let me slide it to what my search volume for a given month would be, and tell me how much that would cost, factoring in volume discounts.
Additionally, this is a huge red flag:
> If you exceed your committed usage, there are overages that will be charged.
What are the overages?! Why does it not just slide back to pay-as-you-go pricing, the way reservations for, say, EC2 work?
----
As an aside, we use Algolia to power some search features at Discord. This new pricing structure looks to be an order of magnitude more expensive (we fall under the "contact sales" usage here...). Luckily we're grandfathered in, or we'd have to consider putting a Cloudflare Worker in front of this and leveraging it to cache common hot queries to reduce cost.
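For what it's worth, the kind of caching layer I have in mind is roughly the following; a minimal sketch assuming a generic upstream search endpoint (the URL, request shape, and TTL are illustrative, not Algolia's actual API):

```typescript
// Minimal sketch of a caching proxy in front of a search API (Cloudflare Worker).
// SEARCH_ORIGIN and the pass-through of the request body are illustrative assumptions.
const SEARCH_ORIGIN = "https://search.example.com"; // hypothetical upstream

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const body = await request.text();

    // Key the cache on the hashed query payload so identical "hot" queries hit the cache.
    const cacheKey = new Request(`${SEARCH_ORIGIN}/cache/${await sha256(body)}`, { method: "GET" });
    const cache = caches.default;

    const cached = await cache.match(cacheKey);
    if (cached) return cached;

    // Cache miss: forward to the real search backend.
    const upstream = await fetch(`${SEARCH_ORIGIN}${new URL(request.url).pathname}`, {
      method: "POST",
      headers: request.headers,
      body,
    });

    // Cache successful responses briefly; hot queries stop hitting the paid backend.
    const response = new Response(upstream.body, upstream);
    response.headers.set("Cache-Control", "public, max-age=60");
    if (upstream.ok) ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
};

// Hash the request body to build a stable cache key.
async function sha256(text: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}
```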
Thanks for your feedback, we have some more work to do to make our pricing page clearer. We need to add a simulator to this page.
The reason for this indirection is that we still have to deal with data/records. It is unfortunately not possible to pay only for searches; you can imagine a use case that pushes 100GB of data and performs only a few searches. The unit gives access to 1k search requests and 1k records. The majority of users will simply pay per search.
For SaaS use cases, we have a different pricing where we price per GB with a volume discount.
There is a volume discount, so the more units you consume, the cheaper they are. And if you commit to a year, the volume discount applies to your yearly capacity, which gives you a significant discount. This is how you can have overages. Of course, if you stay on a month-to-month plan, there are no overages.
I've been working on an open source alternative. It's dead simple to set up and run (including Raft-based clustering). It also integrates seamlessly with the InstantSearch.js library.
Hi there, I see you posted your Typesense link multiple times. Just wanted to point out that your demo website has been broken for some days now. I keep getting "ERR_CERT_DATE_INVALID".
We (Streak) are in the same boat. Looks like we'd be paying approx half a million dollars a month on their new pricing which would be ~100x more than we are paying now. Haven't heard from our enterprise rep but starting to get nervous...
Sounds like the new pricing is for their e-commerce customers, given how much value they provide them; it doesn't seem to make sense anymore for SaaS use cases.
No worries, it will not be 100x the price. We will add a pricing calculator to simplify the projection.
Btw, for your use case we designed a different pricing that we call OEM pricing, which is simply based on the GB used and not the number of searches/records.
Also, you can keep your existing plan; we're not forcing anyone to move to the new pricing.
I don't know what the original prices are, but as someone who might like to use Algolia search instead of MVP search, the idea that pricing scales with our usage (e.g. free tier -> lowest paid tier) is very appealing. There are too many SaaS products where, when your usage jumps by 1KB, suddenly they want $100/month instead of $0. Which is hard if you have a tight budget or want to keep a tight rein on your budget.
Not the GP, but I figure their point is as follows:
If I'm running an e-commerce website, I don't mind pay-per-search since those searches may turn into sales, so the cost is justified. My income scales with search count, and the Algolia price is part of user acquisition costs.
If I'm running a SaaS business, the search is a feature for customers who have already paid, so I don't see any further returns from the search being used. The more a client uses search, the less I'm profiting from having them as a client. They could potentially even cost me money to service them!
Interesting. This reminds me of Canada Post's address complete API. You can integrate it with your website or app to ensure the addresses users enter are valid within Canada and entered correctly. The pricing is around 5–10¢ per search [0]. At that price point, it only makes sense to use it for e-commerce if you are going to deliver physical goods to the customer and want to minimize the number of returned "address not found" packages. But if the pricing were lower, e.g. close to Google Maps' 0.3¢ per search [1], I imagine the list of potential uses would increase substantially.
Many local governments publish shapefiles (geojson) that have address information. You could parse out addresses and have a fairly simple free option. Since this data is based on property taxes, these data sets aren't going to be missing parcels (or the tax collector would be missing potential revenue).
That still leaves out a lot: unincorporated lands, First Nation reserves, small settlements where all the mail goes to a post office instead of individual buildings, etc.
Also, one of the main benefits of the Canada Post API is that it gives you the canonical address Canada Post uses to deliver packages. I don't think this always overlaps with the addresses municipalities use to assess taxes. For example, I can imagine that for an apartment building belonging to a real estate management company, there would be a single entry in the municipality database (because there is only one owner who needs to pay taxes), but one entry per unit in the Canada Post database.
Yeah, my initial suggestion likely misses some corner cases. One case I am fairly confident it covers is (not sure if this is the correct term) mutual-interest land ownership (aka condos). Each layer has the same shape, but they're stacked one on top of another; each layer is a billable portion of the interest (e.g., a single condo on the shared land).
A poor man's approach could be to query the publicly available data sets and use that data when there's a match. Then, only pay for queries that fall into the edge cases.
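A sketch of that approach, with hypothetical helpers standing in for the public-dataset index and the paid lookup (e.g. the Canada Post API):

```typescript
// Sketch of the fallback idea: check a locally loaded public address dataset first,
// and only spend a paid lookup when the free data has no match.
// publicIndex and lookupViaPaidApi are hypothetical, not a real library's API.

interface Address {
  street: string;
  city: string;
  postalCode: string;
}

async function validateAddress(
  query: string,
  publicIndex: Map<string, Address>,                     // built from municipal shapefiles/GeoJSON
  lookupViaPaidApi: (q: string) => Promise<Address | null> // stands in for the paid per-search API
): Promise<Address | null> {
  // Free path: normalized exact match against the public dataset.
  const normalized = query.trim().toLowerCase().replace(/\s+/g, " ");
  const hit = publicIndex.get(normalized);
  if (hit) return hit;

  // Edge cases (unincorporated land, PO-box settlements, per-unit addresses, ...):
  // only these queries incur the paid lookup.
  return lookupViaPaidApi(normalized);
}
```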
Does an e-commerce site search really need anything more than fuzzy matching on item names? It may not be perfect, but if it provides relevant results 90% of the time, you're paying a big premium for SaaS search that may only net you a small tail of additional sales.
In particular for e-commerce sites, it pays to invest in search. A large percentage of transactions is initiated by search queries. You are correct that fuzzy matching will likely get you decent results. However, finding the right product 90% of the time versus 98% of the time makes a huge difference if you have a high volume of transactions. Synonyms and a better spelling model can massively improve your results and the resulting conversion.
But search relevancy is only one part of the equation. Think about a physical store and how the milk is usually in the back and a few high margin items are close to the counter or strategically placed. The same can be true for an ecommerce store. If the search engine has the ability to take business metrics like revenue and margins, or customer data like loyalty programs or brand affinity into account, you can much better optimise for your desired business outcome.
We just recently switched one of the largest ecommerce retailers in Australia over from Algolia to Sajari by doing the above and increasing their conversion rate by 10%.
This pricing change from Algolia makes it easier for low-volume search users or someone who is just getting started to use their enterprise features, but at the same time as the parent mentions, makes it an order of magnitude more expensive for mid-to-high volume users. Imo, this is a mismatch as there is already a trial in place for someone who's just getting started to try those features out.
For anyone interested in creating a search at scale, I would recommend checking us out at appbase.io (founder here :wave:) - we provide relevant search, analytics, and access control for search and are built transparently on top of Elasticsearch. It also doesn't hurt that we author some of the most popular open-source search UI libraries: https://github.com/appbaseio/reactivesearch
It is also less expensive and more accessible for mid/large volumes. We have a built-in volume discount in the pricing that significantly reduces the price at scale.
> What are the overages?! Why does it not just slide back to pay-as-you-go pricing, the way reservations for, say, EC2 work?
I'd guess because they'd like to accurately model their compute requirements to try to get as low of a price as possible (e.g. via reserved EC2 instances).
If clients constantly and consistently shoot past what Algolia plans for, it might cause issues, or at the very least cause the company to spend more on on-demand instances than they intended to.
Interesting change. Disclaimer, I joined one of Algolia's competitors (http://sajari.com) 6 months ago and we are about to release a new product and change our pricing, so I've been thinking about this a lot.
Having worked at Atlassian before, I understand how important simple pricing can be. My personal (and probably biased) opinion is that this is a move in the wrong direction. It appears simpler on the surface, but the concept of a unit and understanding all the disclaimers associated with it make the pricing more complicated than before. If I have to read the FAQ to understand what I'm getting, it's too complicated IMHO. Also, it seems other features, like analytics retention and the crawler, have moved into add-ons, which requires you to contact sales to find out how much you'll pay.
More granular pricing does provide more flexibility, but it also reduces predictability, which can be important especially in small to mid-sized businesses. I'm curious to understand how people feel about "usage pricing" vs. "tiered pricing" where you know exactly what you are paying each month? Which ones do you prefer? We are still finalising our own pricing, any feedback would be very much welcome.
THIS! I feel many times businesses put the "business complexities" onto the client, in this case the search unit. Although the CTO explained why (I glossed over it), as a customer I don't need to understand what makes it difficult in your business, nor do I want to know all your internal jargon and limitations. I, the customer, understand searches and searches per month. Don't make me think!
Like going to a restaurant: I don't want to hear the whole nine yards about why you don't have lasagne today because one of the guys didn't show up for work and he was supposed to make the sauce, and another guy was over his government-allowed working hours. All of this might be true and adequately explain why there is no lasagne, but now I also need to understand employees, government working hours, and so on. Just give me lasagne or not :) It's almost always a red herring if you need to "explain" your pricing or use internal company jargon on your pricing pages.
I'd take a complex pricing page 1000x over having to talk to a sales rep to get a first sense of pricing.
Each industry has different aspects of how much value it gets from a product; the effort to encapsulate these aspects into a single pricing function is much appreciated.
We face the same challenge with Decimals.app - should be adding pricing soon as well
The previous Starter tier pricing was a lot cheaper. The most important point to note is that additional searches were closer to 10k/$1 vs the proposed 1k/$1 (additional 100k operations: $10/month). Yes, these were 'operations' not 'searches', but the mapping between operations and searches doesn't appear to be anywhere near 10:1.
This is not the case; the price per unit decreases with volume, and you can get a big discount if you commit to a yearly plan (similar to AWS/GCP/Azure).
I fail to see where you proved him wrong, based on my math and the calculator in the backend. The existing $29 is roughly equal to $240 on the new plan: 250,000 operations vs 250,000 searches. Obviously operations include adding/updating/deleting data, but if you don't do any of that, it's basically a difference of $220. If you do some operations, say half of them are operations, you are looking at ~$150 vs $29.
I see it as them chasing the lower end of the market, like single-dev projects that don't want to spend much, which I guess makes sense. But for us this would definitely be a lot more expensive.
Well, if so they've failed in my case. I've got a customer doing £1m annual turnover on e-commerce, and we were about to start an integration with Algolia. The new pricing has stopped that from going ahead. I've been asked to find an alternative.
With the COVID crisis we were no longer able to pay 3K per month for a basic search engine, and now we are using a solution based on this in production (>30M searches/month).
We've loved algolia for the 2-3 sites we've used it for. The speed is unbeatable.
I wish we could've used it for one of our HIPAA use cases (patient data). I could've rolled out powerful search for an internal site in a day, but instead I'm having to build in postgres full text search and looking longingly at them.
I wish some more effort would be put into Postgres full-text search; it seems to have stagnated somewhat. It does a lot, but is also lacking compared to, say, Lucene.
Not that it's going to be as good as Algolia, but it could be a whole lot better for many use cases, specifically TF/IDF and BM25 scoring.
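A minimal sketch of what the built-in Postgres route looks like from application code, assuming a hypothetical documents(title, body) table; ranking here is ts_rank, which is exactly where Postgres lags Lucene-style TF/IDF / BM25:

```typescript
import { Pool } from "pg"; // node-postgres

// Sketch of Postgres full-text search from Node; "documents" and its columns are
// hypothetical. Ranking is ts_rank (frequency-based), not BM25.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function searchDocuments(query: string) {
  const { rows } = await pool.query(
    `SELECT id, title,
            ts_rank(search_vector, websearch_to_tsquery('english', $1)) AS rank
       FROM documents
      WHERE search_vector @@ websearch_to_tsquery('english', $1)
      ORDER BY rank DESC
      LIMIT 20`,
    [query]
  );
  return rows;
}

// One-time setup (run as SQL migrations):
//   ALTER TABLE documents ADD COLUMN search_vector tsvector
//     GENERATED ALWAYS AS (to_tsvector('english', coalesce(title,'') || ' ' || coalesce(body,''))) STORED;
//   CREATE INDEX documents_search_idx ON documents USING GIN (search_vector);
```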
So you basically only pay based on the number of queries? Dataset size doesn't matter? We have an expensive Elasticsearch setup. It's expensive because of the amount of data it contains, but we don't run a lot of queries. So with Algolia this would basically be free, and potentially faster?
It's written in small print under the screenshot of the "units widget" here [1]:
"Number of indexed records is capped at roughly 10% of annual search volume. See FAQ for more information."
The FAQ says:
"When using Annual Commitment pricing (minimum one year commitment and 1,200 units), each unit contains 1,000 searches x 12 = 12,000 searches during the year, and 100 x 12 = 1,200 maximum of records at any point in the year."
Thanks for pointing that out. I read an (old) article on the engineering behind it and they keep a lot in memory. So it makes sense to cap this somehow.
1 million records doesn't seem like all that much. If each document is 1 KB, that's only 1 GB of indexed data. Does Algolia add a lot of value on top of that?
For a second I thought they had gotten rid of the # of record pricing and made this a hell of a lot simpler. But I guess now it's just all hidden in a FAQ at the bottom of the page, and it's actually worse than it used to be.
For what it's worth, we do around ~5.1k queries on App Search and pay something like $200.
Looking at Algolia (which we didn't go with because it was too expensive), the cost would be about even right now ($1.50 premium * 6). However, that doesn't include the elastic or kibana instance we get at the moment.
How come you're paying $200 for 5.1k queries on App Search? 5k doesn't seem like a big number so I imagine the lower tier plans/configs like the $20-30 plans should be enough right? Right now I have the lowest configuration and I pay $25/month.
These guys are probably planning to IPO next year. And when they file the S1 they can claim they had 100% year over year growth (mostly due to price increases).
(I am the CTO and co-founder.) I can reassure you this is not the case; I don't have visibility into when we will do an IPO, and we grandfather all existing customers. We always did that, and we still have customers today on 2013 pricing! It brings a lot of complexity internally.
We released this new pricing only because we are convinced it is better for customers.
I signed up for an account just to respond to this. I run a small, low-budget, lean startup and we were banking on the starter plan lasting us at least 6 months as we gradually scaled. You cited in the blog post that making this change was supposed to "reduce entry costs" but how has this happened?
Just the limit of the old free plan would cost you more than the old starter plan! The limit of the old starter plan would cost like 7x as much now. Can you give a single use case under which the new pricing is cheaper for entry level customers?
I understand your perception; pricing and perception are hard. I want to reassure you that this really is way cheaper for people starting to use the product, and as a company it was a huge project to launch this pricing, especially when you lower the price for such a large portion of users.
The previous pricing was based on indexing operations + search operations. The new pricing is only based on search requests, and in a lot of situations N search operations = 1 search request (disjunctive faceting, federated search, etc.). In the end, the big majority of free users (>99%) get as many or more search requests in the 10 free units than in the old plan.
For comparison, the old free plan used to be 10k records plus 100k searches. That would be $100/month now.
I think the new plan makes sense if you're selling to sites with purchase intent, but for searching knowledgebases it seems like it's way too expensive -- the value per search just isn't there. Especially when using instant searching and counting each slightly debounced keystroke as a search.
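For context on the keystroke point: with an instant-search UI, every pause in typing fires a billable query. A rough sketch (performSearch is a placeholder for whatever call actually hits the hosted search API):

```typescript
// Each invocation of performSearch is one billed search; debouncing only trims
// how often keystrokes turn into queries, it doesn't make them free.
async function performSearch(query: string): Promise<void> {
  // ...call the hosted search API here (placeholder).
  console.log("searching for:", query);
}

function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Typing "running shoes" with normal pauses still fires several queries,
// each of which counts against per-search pricing.
const searchInput = document.querySelector<HTMLInputElement>("#search")!;
const onKeystroke = debounce((q: string) => { void performSearch(q); }, 200);
searchInput.addEventListener("input", () => onKeystroke(searchInput.value));
```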
I've been working on an open source alternative. It's dead simple to set up and run (including Raft-based clustering). It also integrates seamlessly with the InstantSearch.js library.
Would love to hear your feedback on Typesense: https://github.com/typesense/typesense
Tried it out with about 10k short docs and it works pretty well as a potential Algolia replacement. I had to use 'sort_by': '_text_match:desc' to get good results.
I couldn't find a way to create synonyms; is this possible?
Is it also possible to control typo sensitivity like Algolia? E.g. min chars for 1 typo, min chars for 2 typos.
The curation features look handy, but I haven't tried them yet.
Install and config was a breeze which is appreciated.
Thanks for the feedback. Synonyms are on the immediate roadmap, but not available at the moment.
> Is it also possible to control typo sensitivity
You can control the overall number of typos with ?num_typos=1 -- there is no way to define min chars for a specific typo count. The engine does make some intelligent decisions to optimize; for example, for a 2-letter query it does not use num_typos=2 even if that is specified.
Glad to hear about the setup. Will continue to improve.
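To make those two knobs concrete, a search request combining them looks roughly like this (the host, collection, and fields are placeholders; the exact parameter set is best checked against the Typesense docs):

```typescript
// Rough sketch of a Typesense search using the two parameters discussed above:
// sort_by=_text_match:desc and num_typos. Collection/field names ("docs", "title",
// "body") and the local URL are placeholders.
const TYPESENSE_URL = "http://localhost:8108";
const API_KEY = process.env.TYPESENSE_API_KEY ?? "";

async function search(q: string) {
  const params = new URLSearchParams({
    q,
    query_by: "title,body",
    sort_by: "_text_match:desc", // rank purely by text relevance
    num_typos: "1",              // allow at most one typo per token
  });
  const res = await fetch(
    `${TYPESENSE_URL}/collections/docs/documents/search?${params}`,
    { headers: { "X-TYPESENSE-API-KEY": API_KEY } }
  );
  return res.json();
}
```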
(I am the CTO and co-founder)
This is not exactly true; the previous free plan was 10k records and 100k API calls (counting all indexing operations and all search API calls).
We now only count search requests, and we did a simulation on our existing free plan base; offering 10 units covers all of them.
Btw, we keep offering more quota for opensource projects.
Everyone seems really hyped about Algolia, before and after the pricing update, but the major drawback I've seen is the pricing model, which is still a big issue IMO.
Say you have 10K documents: if you want a "sort by something" option, you have to create a replica index with another ranking configuration.
So if you want 4 sort criteria, you'll be billed for 40K documents even though you only have 10K to start with.
I've looked at competitors like Swiftype, and even they point out this issue on their pricing page:
"Index once, sort all you want (No need to replicate engines to sort or filter your data in different ways. Once your data is indexed, it can be filtered, faceted and sorted at will.)"
Love the change. I really like the "annual commitments" pricing - I love it when I can pre-reserve a certain amount of performance and get a discount for it.
Well, that's no good. I will say that for the customers I've had who have suffered big problems with their search engines, Algolia has always been the solution. It just works, so it's worth paying for. I'm a little concerned that this change puts the price over the top.
1. You say the discounted cost for 1M queries per month with yearly commitment is $600 (= 1000 units, priced incrementally).
But according to your website (https://www.algolia.com/pricing/) 1000 units cost:
10 units free + (90 units x $0.83) + (400 units x $0.70) + (500 units x $0.61) = $659.70!
This would be $659.70 versus the $600 you stated. Am I understanding something wrong?
2. What would be the discounted cost for 10M and 100M queries per month with yearly commitment?
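For anyone following the arithmetic in question 1, the incremental calculation over the quoted tiers is just this (tier boundaries and per-unit prices are taken from the pricing-page numbers quoted above; anything past 1,000 units is omitted because it isn't quoted):

```typescript
// Incremental ("marginal") pricing over the tiers quoted above:
// first 10 units free, next 90 at $0.83, next 400 at $0.70, next 500 at $0.61.
const tiers: Array<{ upTo: number; pricePerUnit: number }> = [
  { upTo: 10, pricePerUnit: 0 },
  { upTo: 100, pricePerUnit: 0.83 },
  { upTo: 500, pricePerUnit: 0.7 },
  { upTo: 1000, pricePerUnit: 0.61 },
];

function monthlyCost(units: number): number {
  let cost = 0;
  let previousCap = 0;
  for (const { upTo, pricePerUnit } of tiers) {
    const unitsInTier = Math.min(units, upTo) - previousCap;
    if (unitsInTier <= 0) break;
    cost += unitsInTier * pricePerUnit;
    previousCap = upTo;
  }
  return cost;
}

// 1M searches/month = 1,000 units:
// 90 * 0.83 + 400 * 0.70 + 500 * 0.61 = 74.70 + 280 + 305 ≈ 659.70
console.log(monthlyCost(1000)); // ≈ 659.70
```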
For a quick search engine it's hard to beat Sphinx. I especially love that you can just point it at a database and give it a query and it will just work.
Why do people want this product? If I use the demo search on their homepage, there are several annoyances on mobile that made me nope out.
Is it because backend free-form search queries are hard? Or do people use this primarily for the frontend quality?
Edited for typo.
Because it gets customers to the product they want faster than anything else in e-commerce. Selling shoes or something online? Algolia will earn back its cost in conversion.
It was an honest question. I've read a bunch about it now and have no idea why you would want to pay so much for this. Is it the typo handling? Is there really no straightforward free search tech out there?