So I'm happy MongoDB is acquiring additional expertise in hosting...
... but I wish this announcement was a bit different. It says (numbers mine):
1. You will be given plenty of time to migrate, and nothing will be required of you for at least 4 months. We expect to have all customers migrated to Atlas within the next 12 months.
2. Once migrated, your Atlas database will be hosted on similar hardware and cost the same or less than your original plan on mLab.
3. We will provide tools that allow you to migrate with either minimal downtime or no downtime to your application.
I'm fine with the second point. I wish the first and third points instead read:
1. We will publish detailed migration documents by the end of 2018. No action will be required of any customer until a minimum of 9 months after that documentation has been published. We can guarantee all customers following current mLab best practices will be able to migrate with zero downtime should they so choose. As with many migrations, it may be much easier to migrate with a very small amount of downtime, rather than none. To those customers we will offer account credits for their trouble.
I think Mongo is well suited to pure PaaS. On a PaaS, worrying about whether the hardware or software is "similar" is really just worrying about details you're paying not to care about. If your needs are specific enough that those details matter, a pure PaaS isn't the right fit anyway; you'd want more of an IaaS-type solution.
I think most mLab users rely heavily on mLab's expertise, so if they trust them now it makes sense to trust them after being acquired by a company that's demonstrated interest in the type of product they're providing.
Can't say I'm exactly over the moon about this. Although Atlas does seem to be quite a bit cheaper and probably offers a better implementation of the product, mLab's key offering was always the incredibly good support they provide as standard. I haven't needed it often, but I know I can rely on it.
Atlas does not include this - it's something you pay (a lot) extra for and I'm pretty apprehensive about whether it's actually going to be great support or not. At this point my primary worry is that it's going to be more AWS-style support, i.e. expensive and total crap.
The fact that there's no mention of support in the email or blog post does not fill me with hope.
Any Atlas customers who can chime in with their anecdata?
Would you be open to sharing some details about your experience there? I'm building a competing service and want to learn how I can make it great. If you're not comfortable sharing publicly, my email is in my profile.
We also left Compose, moving to mLab. When Compose was still MongoHQ, we were really impressed by their support. In the last few years, we became incredibly unsatisfied. mLab has been awesome and responsive. I hope the move to Atlas doesn't cause their support to change.
What didn't you like about the support you got from Compose? I'm working on standing up a competing service and would like to learn from your experiences with them.
Couldn't agree more with the positive comments about mLab. We migrated to them from Compose ~2 years ago (after horrible support experiences) and have been happy since. At the time, we looked at Atlas, and their support offerings for the affordable plans were just bad. Also, mLab offered a lot of help with the migration while Atlas only sent us some sales guys.
I had a similar experience with Atlas. At least as of 1.5 years ago, Atlas felt like an enterprise company trying to do SaaS. They used the same salesperson-driven approach I associate with enterprise sales. mLab just felt far more developer focused.
Here's a blog post that outlines the best MongoDB hosting alternatives - ScaleGrid, Compose, and ObjectRocket. All three have free support, and ScaleGrid allows you to keep full MongoDB admin access and SSH access to your machines. https://scalegrid.io/blog/mongodb-acquires-mlab-what-are-the...
AWS is total crap? We received great support and advice for just $30/month extra with AWS. It wasn't emergency support, granted, but it helped us with various CloudWatch setup details.
I've had the same experience as you. The people behind the support desk have always been able to assist and find an answer for me. Granted, to get them to jump on it quickly I usually have to use the "phone me now" option, but the quality has been the same regardless of how I've approached them.
For context, it really depends on your budget. If you're running a solo side project where your database costs are $5-15 a month, paying more than double that just for basic support is a big pain. The cost of support for a single service might end up being more than the total cost of the rest of your stack (VPS/DB/domain). If you're at a business then sure, $30 a month is no big deal.
Basic support is free. You pay more for better response times and for more users being able to open cases. Having said that, it is expensive at the business level: 10% of your spend if you want the sort of response times you need in production.
Yeah imo. We have the biz support which is 10% of our spend on AWS resources. We only use it for emergencies or dealing with problems caused by AWS, not for help using the products. They simply aren’t that helpful and are never able to actually resolve anything - typically just provide substandard workarounds.
I hope this does not mean death to the free 500MB mLab tier. I have been using it for small personal projects with Heroku and have been extremely happy.
MongoDB has been deprecated and on the way out at $work for a while now, and we have gradually been rewriting and migrating services to more suitable/sane database tech.
I can confidently say that this migration would have gone a lot faster if it weren't for mLab.
They are expensive, but the quality and level of support have been amazing, and this has allowed us to balance building features and addressing other tech debt work, safe in the knowledge that when an overloaded Mongo cluster implodes at 3am because the query planner did something stupid, mLab will be there within minutes and will know exactly how to bail us out.
I've recently done a bunch of research on the best hosting options for MongoDB because we use it at our startup [1] and DB hosting is one of our biggest service expenses.
The major options I found worth considering were Atlas (owned by MongoDB), mLab, and Compose (owned by IBM). Pricing, performance, and versioning all seemed significantly better with Atlas. This narrows the options even more.
I wonder if they intend on jacking up prices once they own the majority of the market, and are one of the only reasonable solutions... I sure hope not.
If the DB is one of your biggest expenses... why not run your own DB? Paying someone to spin up instances is quite optional in the big picture, and running a DB is not that hard operationally. That said, Mongo is a poor choice for any app ever written. I would question why you would use a database that's best-in-class at nothing and has many operational issues.
Using Mongo as a DB adds more risk than using self-hosted Postgres or MySQL. You can use provider-hosted Postgres or MySQL for less money and get a better product.
McDonald's is very popular; that doesn't mean it's good food. Be wary of the metrics of success you use.
Bit of a disingenuous statistic. Its popularity is down to being the default choice for people learning the MEAN/MERN stack, which covers a large number of people currently teaching themselves web dev and goes hand in hand with the rise in popularity of JavaScript. It's got little to do with whether people need a document store or whether it's good at that job; 99% of the time it's being used to dump relational data.
1. Only people who have never run their own database at scale make flippant "just run your own DB" statements, because it is a difficult job to do right: testing of backup/restore, clustering, monitoring, etc. Especially if you are a growing startup.
2. MongoDB is a great database if you have a data model that suits it. The problem is that a lot of people have treated it like it was relational. And operationally it's far simpler than most systems out there.
MongoDB is a document store which has much better querying and update capabilities than PostgreSQL's JSONB/HStore. It's also probably an order of magnitude faster when it comes to updates.
Not sure why you brought up Cassandra; it is not a document store and is completely irrelevant to this discussion.
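To put a rough shape on the update side of that comparison, here is a minimal sketch of what a targeted nested update looks like on the Postgres/JSONB side; the table and column names are hypothetical, and the point is only that jsonb_set addresses elements by literal array index rather than by a match condition:
-- assumed: a table "events" with a jsonb column "data" shaped like
-- { "arr": [ { "a": [ { "b": 5, "c": true } ] } ] }
-- jsonb_set takes a literal path with array indices, not a predicate:
UPDATE events
SET data = jsonb_set(data, '{arr,0,a,0,b}', '6'::jsonb)
WHERE data @> '{"arr": [{"a": [{"b": 5}]}]}';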
Postgres used to benchmark faster as a document store than MongoDB running with a reasonable reliability setting. Not sure if that has changed in the last few years though - Mongo has improved a bit, and Postgres is getting faster with each release.
As much as I agree that the arguments are a bit tired, I think the criticism can still be useful.
There's a tendency in web development towards using well known stacks, and some of them are quite poor choices for most projects. MongoDB features heavily in these stacks despite relational databases being a better fit most of the time, possibly because there's less visibility of alternatives in the JS community.
Do you think the OP is storing documents? I'd put cash money on them doing something like the MEAN stack and storing relational or semi-relational data in Mongo. Not many startups spend the majority of their storage on documents.
Shouldn't it be illegal to buy a competitor that does exactly the same thing as you when your combined market share is greater than 50%? I know it depends on the definition of the relevant market, which could be "hosted MongoDB" (mLab + Atlas are maybe 70% of the market), "hosted document DB" (maybe 40%), or "hosted DB" (maybe 5%). What do you think?
The issue is whether it's bad for the customer, right?
There's the possibility that things could worsen, as there's the loss of competition. There's also the possibility things could improve, as there could be a concentration of effort on one product-suite rather than several.
I'm inclined to be optimistic in this case, as there's still competition from non-managed Mongo. If they go nuts with their price-point, people have the option to just dump them and manage their databases themselves.
They don't have proper lock-in, as the data can be exported with relatively little fuss. They earn their keep by delivering value to the customer month after month.
I think that in any case I'm better off when I can choose between mLab and Atlas than when I have no choice. The US has a history of allowing monopolies that I really can't understand, for example Facebook's acquisitions or, more recently, GitHub.
Maybe it's time to launch a competitor: start fresh leveraging Kubernetes or whatever new tools exist now, and benefit from the time they will spend merging products and teams.
ObjectRocket is still owned by Rackspace - but they don't seem to be very integrated. ObjectRocket are still doing their own thing, fairly independently of the main Rackspace service.
You don't have to be in the same zone or even the same cloud; you just need to be in roughly the same DC area and buy 10 Gig Direct Connect links from AWS. This is possible in places like London, Ashburn (VA), and Dublin, which is why many of these providers only offer their service in a subset of a cloud provider's locations. The latency is basically imperceptible, ~1-2ms.
You can get an awful lot of customers just by being in Ashburn, VA with direct connects to the big cloud providers. Most queries to a DB like Mongo are slow enough that 1ms of network latency is just noise.
> with Mlab you were able to specify the cloud provider and zone IIRC
That's correct: you specify a provider and region, but can also set up VPC peering so all traffic remains within AWS's network within a region. I've used this setup (mLab on AWS with VPC peering) and it worked great.
But specifying the zone won't help, since zone names are not consistent across different accounts; so while you can ensure you're in the same region, you may still end up talking across different data centers. Do they have any way to handle this?
While sometimes a longer timeframe to migrate is better (often, most times), there are edge cases where you want to migrate immediately after an announcement like this. For us, we'd like to get this done by next week. The sooner the better to avoid any schedule disruption thereafter.
I don't think the two are really comparable - completely different querying paradigms. Athena works by writing SQL queries against JSON, CSV, or other kinds of documents stored in S3. You pay per request and per GB scanned as part of your query.
MDB is a fully fledged database that can do aggregations, queries, etc.
I consider those two products addressing totally different problem sets but if you've found places where they're the same I'd be interested in learning more.
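For a concrete sense of that model, here's a minimal hypothetical sketch of using Athena; the bucket path, table, and fields are made up for illustration:
-- define an external table over JSON files already sitting in S3
CREATE EXTERNAL TABLE events (
  user_id string,
  action  string,
  ts      string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-bucket/events/';
-- each query scans the underlying files; billing is per query and per GB scanned
SELECT action, count(*) AS n FROM events GROUP BY action;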
Yeah sorry, I was pointing out that the original title was different than the posted one (due to the HN guideline as explained by ibotos). Also it included the name of the acquired company.
People love to hate on Mongo. Honestly these days it's really completely undeserved.
As long as you've bothered to actually read the docs on the things you're using before you use them, it will do just fine. Just because the drivers' default configurations are different from other DBs does not make it fundamentally bad.
IMO it's a fantastic DB and I constantly wish Postgres would get anything like the complex-object querying support that Mongo has.
-- create a table with data
CREATE TABLE arrs AS SELECT format('{ "arr": [{ "a": [{ "b": %s, "c": true }] }] }', g.i)::jsonb AS data FROM generate_series(1,1000) g(i);
-- index, this one only supports "rooted" paths, but you can create one that allows searches not starting from the root too
CREATE INDEX idx ON arrs USING gin (data jsonb_path_ops);
-- search
postgres[22708][1]=# SELECT data->'arr' FROM arrs WHERE data @> '{"arr": [{"a":[{"b":5}]}]}';
┌────────────────────────────────┐
│ ?column? │
├────────────────────────────────┤
│ [{"a": [{"b": 5, "c": true}]}] │
└────────────────────────────────┘
(1 row)
-- show index usage
postgres[22708][1]=# EXPLAIN ANALYZE SELECT data->'arr' FROM arrs WHERE data @> '{"arr": [{"a":[{"b":5}]}]}';
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN │
├─────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Bitmap Heap Scan on arrs (cost=20.01..24.02 rows=1 width=32) (actual time=0.131..0.132 rows=1 loops=1) │
│ Recheck Cond: (data @> '{"arr": [{"a": [{"b": 5}]}]}'::jsonb) │
│ Heap Blocks: exact=1 │
│ -> Bitmap Index Scan on idx (cost=0.00..20.01 rows=1 width=0) (actual time=0.107..0.108 rows=1 loops=1) │
│ Index Cond: (data @> '{"arr": [{"a": [{"b": 5}]}]}'::jsonb) │
│ Planning Time: 0.107 ms │
│ Execution Time: 0.186 ms │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
(7 rows)
You can make an argument that Mongo's path description is easier to write, but "you can't even get close to that with what Postgres currently supports" doesn't seem accurate.
I believe that only works if it's the first item in the array, right?
Having done more reading up, though, it sounds like composite types are actually a better solution for PG, though they come with their own set of caveats/limitations :\
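For what it's worth, a minimal sketch of that composite-type approach (all names hypothetical); one of the caveats shows up immediately, in that matching on a single field of an element means unnesting the array rather than a simple containment test:
-- a composite element type and an array column of it
CREATE TYPE item AS (b int, c boolean);
CREATE TABLE arrs2 (id serial PRIMARY KEY, items item[]);
INSERT INTO arrs2 (items) VALUES (ARRAY[ROW(5, true)::item]);
-- filtering on one field of the composite elements
SELECT * FROM arrs2
WHERE EXISTS (SELECT 1 FROM unnest(items) AS i WHERE i.b = 5);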