If you were paying attention, Postgres was always the better option, based on my experience starting around 2002 or so. MySQL had the better marketing in its name and was arguably a bit easier to get going with at the start, but Postgres was always more advanced, and IMHO, once you got past the most basic usage, its configuration made a lot more sense.
The fact that it was the M in the LAMP stack gave it a huge edge too. What was at one point an all-day, if not multi-day, setup and configuration process became a simple install script once LAMP became a thing.
MySQL was the "default" for a long time until Oracle muddied the waters just enough to make devs look around a bit.
So you had startups which created their MVP on some shared webhost, which had MySQL, and nearly none had pg. As those startups graduated into enterprise fame, they never left MySQL but instead fixed its problems along the way.
The mysql replication story was the big thing for me in the early/mid 2000s. Back then postgres replication seemed like every option was a brittle/incomplete 3rd party hack with different sets of tradeoffs.
Postgres also had a reputation for being correct but slower. That differential disappeared as a) Postgres got faster and b) MySQL got more reliable/robust.
> Postgres also had a reputation for being correct but slower
Even back then, I found that mysql was faster for “select * from table where primary_key = 42”, but with even the slightest complication (joins, functional queries, subqueries), postgres pulled ahead.
I guess a huge percentage of SQL queries are really just key-value lookups though, so I can't blame people too much for using mysql (this was before memcached, let alone redis et al)
Setting up multi-master replication with automatic failover was easy with MySQL a very, very long time ago. Yes, there were foot guns (MySQL let you do some extremely stupid things). Yes, you really had to have a decent grasp of how things worked under the hood to not lose transactions, and your metrics/logs/alerts story had to be good, and your ops staff needed to know what they were doing during an incident. But that's always the case when self-hosting databases. We looked at Postgres several times over that ~5-year period, and it always felt seriously lacking to me as an operator (not a real DBA or developer).
Galera kinda improved the MySQL replication story for certain specific use cases, at the expense of (a lot of) extra complexity and way more foot-guns. I couldn't recommend Galera even when the use case matches up. It took us years to get it operating relatively safely, and that was after a heap of near-misses with customer data. (Yeah, a bunch of us thought it was folly to use it, but we weren't decision makers at the time.)
I don't remember it like that. I think even PHP 2 had support for more databases. One I am sure it supported was mSQL. I think they always had MS SQL and Oracle too. They were, after all, always very quick to provide bindings for anything with a C library. One of the reasons PHP syntax is all over the place is that they tried to keep the C semantics of any given library.
Perhaps what you are remembering is a particular OS or installer's packaging.
It’s the same story as people building databases in Excel, while it’s small you want to look away from all the details so you choose a tool that doesn’t get in your way. Then when it grows and becomes unwieldy it’s difficult to migrate away because the data is a big mess and there is no model.
Postgres has the safety of the data as its utmost goal, while people will just rename genes because Excel automatically misinterprets them as dates.
I don’t agree. WAL/replication was complete junk for a long time. Not to mention needing stuff like pgpool and pgbouncer in more complicated deployments.
I've been running Postgres since about 2016. I used to run MySQL behind the same frontend app.
I barely know anything about Postgres beyond installation for our use case, backup, and recovery. I used to know loads about obscure MySQL optimization techniques, fixing broken tables, fiddling with scary parameters, and recovering from hair-raising situations.
One example of the sort of trivia that's burned into my brain: You never want to use the utf8 encoding. It's broken. What everyone else calls utf8 MySQL calls "utf8mb4". MySQL is filled to the brim with this sort of thing.
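To make that particular footgun concrete, here's a minimal sketch (table and column names are made up):

```sql
-- MySQL's "utf8" is an alias for utf8mb3: at most 3 bytes per character,
-- so 4-byte UTF-8 characters (emoji, rarer CJK characters) don't fit.
CREATE TABLE notes_bad  (body TEXT CHARACTER SET utf8);

-- What everyone else calls UTF-8:
CREATE TABLE notes_good (body TEXT CHARACTER SET utf8mb4);

-- Depending on sql_mode, inserting '🙂' into notes_bad either errors
-- or silently truncates the value at the first 4-byte character.
INSERT INTO notes_good (body) VALUES ('🙂');
```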
Isn't that just more about the abundance of hardware we have these days? While on the MySQL side you had to delve into tweaking the different cache sizes and picking MyISAM or InnoDB depending on the use case, on Postgres you had to deal with stuff like manually running VACUUM at the right time, or, once autovacuum arrived, tweaking its params.
These days even on my "underpowered" NAS I can just run a default docker image of my database and not worry about tweaking anything.
MariaDB doesn't have a commitment to binary compatibility with MySQL, though. For a time it was marketed as a drop-in replacement, but that has been increasingly untrue for over a decade now.
Aha, I didn't know that. Looks like the marketing worked too well on me! In any case, the lie of drop-in compatibility at least I think led some people to look for alternatives that wouldn't be hampered by Oracle-era MySQL.
I didn't either until it bit me in the ass a few months back - often it's shitty proprietary software being only compatible with "genuine" (oracle) MySQL.
This is the most direct explanation. The blog post avoids talking about this at all, but I'm guessing that's to keep the post informative and not let it degrade into a database flame war.
As a MySQL DBA for the past 20 years: it was practically the only choice until Oracle bought Sun, then it was instantly radioactive. A shame, because the fears people had were unfounded, and some of the best development work has been done since then.
Postgres already had cachet before Oracle bought MySQL, but it was seen more as a small-scale Oracle replacement for people who were serious about data integrity, or for lower-volume OLTP work in the web sphere. The MySQL acquisition coincided with performance gains by Postgres that made it more applicable to web-scale OLTP workloads, and JSONB support was really the nail in the coffin.
The only correct answer to this question is 2012-09-10, the PostgreSQL 9.2 release date. In that release they unveiled JSON datatypes, and developers were able to have their cake and eat it too.
It's funny because I was going to say circa 2012. Working at Linode, I definitely saw a shift away from MySQL to Postgres around that time, but I didn't have any overarching reason beyond "it was trendy."
I think CTEs and window functions in version 8.4 were big; combined with UDFs, you could write some very complicated queries for data analysis without an exponentially growing number of joins.
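As a sketch of what window functions buy you (table and column names here are hypothetical):

```sql
-- Running total per account, with no self-join or
-- correlated subquery needed:
SELECT account_id,
       created_at,
       amount,
       SUM(amount) OVER (PARTITION BY account_id
                         ORDER BY created_at) AS running_total
FROM transactions;
```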
When shopping for an open source SQL database, cool is not the first thing on your mind. You are looking for familiarity, compatibility, reliability, viability, etc.: lots of -ity words that are decidedly uncool but, ahem, really important.
What made postgres "cool" for me was the realisation that you get more than what you bargain for:
> More than "just a database", it's a data platform
This feeling of being relevant for the evolving world of data engineering comes from NoSQL functionality like JSON support, and also from extensions that allow graph operations.
So Postgres is cool because it is a reliable workhorse that won't let you down, but its codebase and community also have the DNA of a racehorse that can win an occasional race for you.
I remember going to a general community-run tech conference in Vienna in 2002 and there was a talk on PostgreSQL which I'd never heard of at that point (had been using MySQL and Oracle). The person giving the talk was a nerd and quite enthusiastic, and quite sad that PostgreSQL wasn't more popular. Come to think of it, I guess that person is happy now :)
So I would say it was already "cool" at that point (albeit not in wide use, at least in my circles).
We weren't too happy with Oracle (pricing), and we'd only moved to that after being unhappy with MySQL in 2000 (no transactions!). I think PostgreSQL would have been a good choice for us, and I did give it a try, but migrating all the data out of Oracle just didn't really seem possible. Oracle didn't provide great export tools, as you can imagine, and a `SELECT col1 || ',' || col2 ...` type of thing to produce a CSV would have taken hours per table, and we had a few dozen tables. So it would have meant either days of downtime, or some funky logic with a lookup table to say which database a user was in and moving them over one by one. But then what about FKs? What about a "messages" table where one user sends a message to another? Etc. So on Oracle we stayed until the end of the product, around 2012.
I'll argue that it was Heroku that made Postgres popular. It was easy and simple to get set up, and it pretty much just worked. It was the first time I'd used it, and the first place many of my friends and coworkers had used it. Heroku was "mediatemple for apps built in the 2010s" and Postgres being the default (and really only sql option outside the marketplace) was a huge win for Postgres.
Edit: I also think emoji becoming popular was a huge win for Postgres. Emoji didn't work in MySQL because "utf8" wasn't actually UTF-8 and "utf8mb4" made indexes super limited for dumb reasons. As people started realizing this, it hurt MySQL's reputation pretty badly and a lot of folks avoided it for new projects.
"utf8" doesn't support four-byte characters from UTF-8. If you use it and instead mean "utf8mb4" (which was historically not the default), data would be silently truncated (with a warning, but who queries for MySQL warnings?) and lead to all sorts of weird production issues. For many years this was the case until "utf8mb4" was made the default in 2017. Which is to say, Unicode was broken by default in MySQL until just over five years ago.
> Which is to say, Unicode was broken by default in MySQL until just over five years ago.
No, stock MySQL has never used a default character set of utf8 (utf8mb3). Prior to MySQL 8, the default character set was latin1, not utf8 (utf8mb3).
"utf8" being utf8mb3 is indeed a huge foot-gun, but it isn't correct to say that "Unicode was broken by default in MySQL". utf8mb4 has been available for use since 2010 (MySQL 5.5).
As for silent truncation, that bad default was fixed in 2015 (MySQL 5.7), but the option has been available since like 2004 or 2005.
That all said, some Linux distros may apply some non-standard defaults (like changing the default charset), and ditto for cloud DBaaS products (e.g. RDS disables strict sql_mode even today in all versions).
latin1 as an encoding also results in broken Unicode support (for obvious reasons). The point still stands that if you did nothing, you didn't have correct Unicode support, and if you tried to do the correct obvious thing, it was still broken. And a lot of people ran into this, enough that it literally made many folks avoid MySQL.
> latin1 as an encoding also results in broken Unicode support
That depends on whether your application actually relies on collation behaviors for case insensitivity or accent insensitivity of non-Latin characters.
From experience I can tell you that some of the largest companies in the world actually store unicode in MySQL tables using latin1 character set! It's not ideal and it's conceptually very gross. But in practice it actually works completely fine for these companies, because the relevant collation logic is handled in the application and/or in ancillary services.
Anyway, I agree with your overall point that MySQL should have changed the "utf8" alias to point to utf8mb4 much much earlier. Although I also understand the significant backwards-compatibility concerns (especially regarding logical dumps) which forced MySQL/Sun/Oracle to slow-walk this change.
I've been running Postgres since, I think, 2008. Mainly I don't like MSFT in general (although I admit VS Code and WSL are pretty cool). I tried MySQL on my first blog and a couple of projects and it just seemed to be a bit too chaotic (how many engines do you need?), then a bit too gross (post-acquisition). Some of the guys I work with use MariaDB, but at this point I've met Michael Stonebraker and some hardcore Postgres folks and it just feels like the right way. Plus, it seems like the cloud vendors tend to support Postgres-like features (e.g. Redshift), so that seems like a hella safe bet.
What you have to understand about WSL is that it relies on one of the three userspaces that the NT/VMS kernel was designed to offer: Win32, POSIX, and OS/2.
Cygwin and busybox performance is awful in code that calls fork() often, but I understand that WSL1 behavior is very different, because fork() isn't fighting through layers of Windows.
The reason that the POSIX layer exists in NT is that Microsoft was the largest UNIX vendor in the early 80s with their XENIX variant, where the largest market segment ran on the TRS-80 Model 2 (68k-based, 3 simultaneous users, two attached rs-232 terminals).
"Broad software compatibility was initially achieved with support for several API "personalities", including Windows API, POSIX, and OS/2 APIs – the latter two were phased out starting with Windows XP."
My understanding was that POSIX was added to qualify for government contracts that required a FIPS 151-2 compliant OS, and had little to do with their UNIX business, which was taken over by SCO in 1987.
My mental model is that they shed any UNIX business to focus on Windows, but then POSIX happened and they had to provide something in the market to meet the requirement.
Were this to be true, and they intended immediate death for this whole layer, then a) "Windows Services for UNIX" would never have existed, and b) the famous argument with David Korn over the quality of their port of his shell would never have happened.
In any case, a "userspace personality" such as NT exhibits is not added quickly. The NT design began in the late 80's, and I think that something like a POSIX layer existed from the beginning.
I didn’t say they intended death for the POSIX subsystem, but that it was included to satisfy requirements for government contracts. It had the obvious additional advantage of allowing UNIX software to be recompiled for Windows with minimal changes.
WSL2 is cool [for me] because it's boring - boring and stable is what I need on my workhorse. Other "cool" would be distrohopping and unixporn, but that's teenager-like coolness in reverse of the former.
PostgreSQL will become cool once they start thinking about operators - they still lack a simple `show slave status\G` statement for quickly checking replication status. I have to Google every time to figure out which tables need to be queried.
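For reference, the rough Postgres equivalents do exist, just spread across system views rather than one statement. A sketch (the views and functions below are from PostgreSQL 10+):

```sql
-- On the primary: one row per connected standby
SELECT client_addr, state, sent_lsn, replay_lsn,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- On a standby: are we replaying, and how far behind in time?
SELECT pg_is_in_recovery() AS in_recovery,
       now() - pg_last_xact_replay_timestamp() AS replay_delay;
```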
When I started using Django 10 years ago during the Django/Rails era, Postgres was by far the most recommended DB to use. Not sure if that was also true in Rails community.
I definitely recall being on the talk committee one year for DjangoCon, and there was some rough discussion. (Context: The conference was generally a two track conference but keynotes and a few other sessions were single track). One of the single track talks was about Postgres. The discussion was roughly "If we have a Postgres talk we should have another talk like Mongo or MySQL" and the response was roughly "Everyone in Django is using Postgres and if you're not you should be at the talk to learn why you should".
Way more Rails apps used MySQL or other databases, it was largely Heroku winning Rails that led to the strong adoption amongst that community.
Started using rails 10 years ago, myself. Postgres was by far the most common recommendation. In fact I learned a rule of thumb, early on, that I abide by still today: Always start with postgres and only migrate out once you understand your data well enough to consider the migration a clear win.
I don't know when, but I just came back to it after a stint of first MySQL and then MS SQL, and the first feeling I got is that here is a piece of software that seems to be made by people who just want to make my day better.
Just my 2 cents and I generally agree with the sentiment in the comments about Postgres just being rock solid reliable and having great integrations / vendor neutral.
One of the huge benefits I see with Postgres is its extension system. PostGIS is amazing. It can save any company that needs to do heavy geospatial processing $$$ by dodging ESRI. The only database I've found that handles geospatial stuff better is BigQuery, but considering the revolution that was Google Maps over the past decade, I don't think that would really surprise anyone.
I remember bringing a postgresql book to a sleepover when I was in high school around 2002. An IRC friend was on the pgsql advocacy mailing list and we were all obsessed.
Now were we cool? I have my doubts. We sure thought pgsql was cool though (still do!)
As with anything truly cool, it was cool long before most people noticed. I have been using Postgres on and off since ~7.2 and it has always been amazing. Being known as cool takes time.
I know I made the decision in 2012 or so. The startup I was working at used Postgres for the server, and I didn't know anything about databases, so, okay, fine, whatever. Then they started an analytics team, which went with MySQL ... and ended up discovering that MySQL only handled up to 3-byte UTF-8 characters, which was a problem since the users of our game used Chinese characters and lots of creative abuses of Unicode. My takeaway was that UTF-8 characters have always been up to at least 4 bytes (and I think the spec allowed up to 6 back then, which has since been walked back), and that any database which could not correctly implement the spec of something as fundamental as UTF-8 was completely out of the realm of consideration.
MySQL had utf8mb4 support in 2010, but to be fair it wasn’t the default, and also depending on the row format indexing cols using it wasn’t possible.
Like everything with MySQL, there were and are endless gotchas. Once you learn most of them it’s incredibly performant (and ProxySQL in front of it is amazing, and has way more capabilities than just connection pooling), but it’s definitely full of footguns.
I think it's this: a group of well-intentioned, collaborative experts spent 30 years on a project.
It doesn't need an expensive website or loads of devrel or non-grassroots event management - those are what you do to simulate success until maybe you make it. It needs smart people over a long period of time.
Postgres for a long time was the leader in doing geospatial calculations thanks to postgis. It became popular with the rise of apps like uber and doordash because it had better support for geospatial calculations required to make decisions on how to allocate drivers to users around town and deal with surges.
I remember in the early-mid aughts I evaluated Postgres vs MySQL. At the time the conventional wisdom seemed to be that Postgres was focusing more on robustness and MySQL more on functionality. And a lot of people seemed to prefer MySQL because of this.
When I looped back around several years later Postgres had started to overcome MySQL. Conventional wisdom then was it was roughly at feature parity with MySQL but more robust.
So it would seem that working on having a robust inner core first paid off, even if it cost some early reputation.
Postgres started in 1986. It was never less featureful than MySQL... in fact, MySQL tried to get by without _transactions_ for the longest time. The fact that MySQL had more market/mindshare at any point is more of a testament to crowd mentality than anything about either of the two databases.
I remember maybe circa 2004 debating Postgres and mysql with a colleague. I told him to unplug the machine that was hosting his mysql instance. He did and corrupted his database. He said it didn't matter, he had backups, speed was more important :p This was before mysql had the innodb storage engine, after that it wasn't so bad. I have always stood by Postgres though, it's a fantastic piece of open source software.
Sorry - I said functionality but meant performance. Doesn't look I can edit my post anymore. I don't know if that was even true, but that was what the wisdom of the crowds said at the time.
MySql did have better insert performance for a while, but this was due to unsafe defaults in conjunction with no transactions, which is only a good tradeoff if you're storing disposable data.
When Oracle bought MySQL, developers really started looking hard at Postgres.
MySQL developers peeking over the fence at Postgres realized their database lagged way behind Postgres when it came to transactional DDL statements, window functions, basically most modern SQL stuff. MySQL was just sloppy from a developer's perspective... weird nonstandard SQL RDBMS behavior in places.
I've heard that MySQL has made nice strides in recent years. Cool, I guess. Not really trying to tie my future to Oracle.
An interesting reason I became intrigued with Postgres in the early 2000s was the license it was/is distributed under. The situation involved some work for a customer that needed to distribute an RDBMS along with their software. The customer had more apprehension around the GPLv2-licensed MySQL than the BSD-esque PostgreSQL License [0], so we went with Postgres.
I later continued using Postgres in my career after having done quite a variety of benchmarks and came to the conclusion that it, in general, scaled more linearly across additional CPUs than MySQL's InnoDB engine did at the time.
When I started working with Rails in 2008 having come from PHP, both communities were pretty squarely in the MySQL camp by default. That changed quickly over the following couple of years however. As I recall, the free Heroku Postgres offering had a huge influence.
Nobody seems to have mentioned the MySQL licensing kerfuffle that got it pulled from most mainstream Linux distros. For some of us this was the nail in the MySQL coffin, at a time when Postgres was becoming easier to manage.
My first experience with Postgres was around 2009. I picked it out of curiosity and it ended up on the client's server doing its job and never bothering anyone.
I thought it was very cool doing fancy joins and reliable procedures but then came the NoSQL era and I nearly forgot about it.
Then everyone started to realize that any growing database ends up with complicated relations and lots of applications do not fit non-relational data structures well.
Suddenly postgres came up with the json and jsonb types. It was like a fairytale.
So when did it become cool? Around 2016/2017. Why? Good technical decisions and execution.
It's that moment just before something becomes uncool, lol.
Software developers are glorified teenagers. Once something goes from upstart to widely used, developers get bored with it or else frustrated with the issues that arise from actual professional use. Then they go looking for the next fashionable upstart.
Oddly, I feel like PostgreSQL is on its second cycle through this loop now.
Interesting thread/post on Heroku's choice to go with Postgres, and speculation that that decision was pivotal in Postgres's rise to its status as the relational database of choice for hackers: https://news.ycombinator.com/item?id=31425115
I picked PostgreSQL over MySQL for a project I did in 2005.
I picked it since the project would benefit from many advantages which PostgreSQL already had over MySQL, and it would not particularly benefit from MySQL’s only often-touted advantage, speed. This turned out to be the correct choice, and this has not changed since.
The project did not need to be particularly fast, which was the only touted advantage which MySQL had at the time. In all other respects, PostgreSQL was the clear winner. (I edited my original post to be more clear.)
Developers were afraid it would become a tool for Oracle to monetize and trap people in their ecosystem. That never really happened and they have actually done some really good development work on it.
I think Heroku played less of a role than it sounds like. Sure, they offered it for Rails, but working with Rails, especially back then, a typical Rails application made very little use of any of the features.
It typically was, and to a degree still is, tables rather than arrays, application-defined integers instead of Postgres enums, in general very little use of user-defined types, and Rails's validation system instead of Postgres constraints (even automatic querying for duplicates).
Sure it might have made people hear about Postgres, but I think few people using ORMs really could tell you anything about any of all the amazing features that Postgres has. I think Arrays might be the main one.
i used mysql for almost 20 years, since 3.23, and in the last 4 years i am exclusively using postgres (because of the hype and i thought i was missing out), and it's not my cup of tea.
i just love mysql's hackable storage engine (particularly the new lsmt engines), and i hate TOAST so much, not to mention the permission system, which is so convoluted and so easy to shoot yourself in the foot with. and of course it does not play nice with low-iops ebs.
it's too expensive to migrate from pg now, but i would avoid it in the future.
i usually don't use super sophisticated sql, so mysql is actually pretty good for me, considering i can tune the knobs on some tables pretty well.
it could be just that i am more comfortable with hacking mysql, or just that the early design principles of early mysql make it more hackable (it's easier to lose some characters from acid haha).
nothing against postgres, i just think it's a bit overhyped, and it's also harder to squeeze into the cloud's ridiculous iops prices.
I remember back in the day when Ingres was the cool kid (running on HP-UX at our shop, I think). Almost overnight there was a fork in the road and everyone went to either Oracle or SQL Server. Now I wouldn't touch anything other than Postgres. The more I use and learn about it, the more I appreciate all the work that has gone into it over the years.
At about the transition between Oracle 11 and 12, VMware was quickly becoming the next big thing, and Oracle licensing was causing concerns I don't precisely recall with virtual deployments. In conjunction with ORMs reaching good maturity and Postgres starting to catch up on benchmarks, it was the perfect storm.
In a license audit, Oracle will charge you a full CPU license (currently $47,500/core for Enterprise) for every CPU in your VMware cluster, regardless if the database is running on it or not.
One thing that made me slightly iffy, in a use case where Postgres would otherwise be a no-brainer, is the need to vacuum dead tuples. I was quite sketched out by it and instead opted for a traditional NoSQL stack. I wonder how people with large volumes of UPSERTs deal with vacuuming. Isn't it a huge operational burden?
For most folks autovacuum "Just Works", transparently and in the background. For people with large volumes of UPSERTs or similar workloads with lots of activity against existing rows, it may be necessary to tune autovacuum - usually to be more aggressive than the defaults. Aside from figuring out that this is a thing that can be done and then doing it (or paying someone to figure this out for you), there really isn't much operational burden involved.
I have seen cases where it made sense to run regularly scheduled vacuums outside of just leaving it all to autovacuum, but in my experience this is rare - when I see someone worrying about vacuums it's usually the case that they should just change an autovacuum knob and then forget about it.
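The knob-turning described above usually amounts to a one-off, per-table setting. A sketch of the common pattern (the table name and values are illustrative, not recommendations):

```sql
-- Make autovacuum kick in earlier and throttle itself less
-- on an UPSERT-heavy table:
ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.01,  -- trigger at ~1% dead tuples
  autovacuum_vacuum_cost_delay   = 2      -- milliseconds between cost-based pauses
);
```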
I first started noticing that apparently lots of people had begun to find it cool some five or eight years ago... So, given how incredibly hip and well-connected and "with it" I am, presumably at least some five (or perhaps ten) years before that.
in 02002–02004 i worked at a startup shipping an on-premises saas network management system (i.e., buy our appliance and install it) built on postgres, after a previous startup using mysql. even then postgres was a perfectly fine database; at the time there wasn't much reason to prefer postgres or mysql over each other unless you were running at massive scale, where even oracle couldn't hold a candle to mysql's readslaves
99% of the time i'm using a sql database it's because i don't care about cool features tho. i mean window functions and json and recursive ctes are definitely cool but my orm isn't gonna use them
the startup got bought out a couple years later, so i guess it was successful, but my options were so diluted they weren't worth much
the huge difference in my experience was going from no credible gratis sql rdbms in 01993 or so, to msql in 01994 (gratis but far from libre), to mysql in 01996 (gratis but not quite libre) and postgresql (libre!) in the fuzzy period 01996–02000. postgres didn't support sql until 01996 but for reasons i don't remember wasn't really a viable alternative to mysql until about 01999. i don't remember exactly when mysql started offering a real free-software license but i think it was maybe 02000 or 02001; the lawsuit over nusphere's infringement of mysql's gpl was 02002
maybe gumby can weigh in with his experience trying to sell people on an enhanced postgres fork supporting cross-data-center oltp at zembu (eventual-consistency-like multisite performance but without the eventual consistency, which is to say, inconsistency). other founders were ncm and ian (lance) taylor: http://web.archive.org/web/20010617143323/http://www.zembu.c...
nowadays i think sqlite is kind of the big competitor; lower write performance, higher read performance (except for very complex queries where its simple execution model is inadequate), and much lower hassle
probably all these non-column-store designs are obsolete; i'm curious whether there's a column-based sql database that offers the same degree of hassle-freeness as sqlite or even postgres
Postgres became cool when the scales fell from people's eyes and they realized that very few use cases necessitate, or even benefit from, "NoSQL" databases.
This is also my opinion and I think before that MySQL was known for fast and Postgres for features and correctness. I think the change went both ways, people realized that MySQL's casual approach only got them so far and at the same time Postgres focused more on performance without giving up its existing qualities. I think Postgres became cool when it became fast (too).
The last straw was when Oracle bought MySQL, and it was forked to MariaSQL. Postgres went from the nerdy awkward kid to the nerdy cool kid, while MySQL started drinking, getting split personalities, and living off his past fame.
I think even Sun buying MySQL already hurt MySQL and helped PostgreSQL.
Back in 2006, the startup I was with moved from PostgreSQL to MySQL because the support we were able to find was both expensive (300 USD/hr) and not satisfactory. Back then MySQL AB (this was before Sun) gave us a 10K two year deal on three servers and they had excellent response times and knowledgeable support.
For PostgreSQL the year 2010 when Salesforce bought Heroku (big plus) and Oracle bought MySQL (big minus for MySQL so indirect big plus to PostgreSQL) was the breakthrough, I would say.
That MySQL didn't get CTE and window functions until 2018 is a sad joke.
It was my mistake, I misremembered MariaDB, but can't edit my comment now. In my defense, I've barely thought of it since 2009, when it was forked.
On the other hand, I've worked with MySQL a few times in the same period, mostly because Oracle, whatever its failings, has kept MySQL alive. Not thriving, but alive. I'm so glad they didn't buy PostgreSQL.
I came from an MSSQL background and eventually worked on a MySQL project with somebody who had a TON of MySQL experience by way of Symfony.
I was explaining that not all of his non-aggregated columns were in his group by, so his query wasn't going to work.... but what do you know, MySQL will just return whatever it feels like from the group in that situation. I expected it not to run at all.
I think there are some flags etc you can use to enable to stricter behavior but it was one of the wildest footguns I've ever seen.
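For anyone who hasn't seen it, a sketch of the behavior (table and column names are made up):

```sql
-- With ONLY_FULL_GROUP_BY disabled (the historical MySQL default),
-- this runs, and order_date is chosen arbitrarily from each group:
SELECT customer_id, order_date, SUM(total)
FROM orders
GROUP BY customer_id;

-- The strict, standard behavior rejects that query instead:
SET SESSION sql_mode = CONCAT(@@sql_mode, ',ONLY_FULL_GROUP_BY');
```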
I think that also came late. It was when Oracle bought Sun. People who needed a drop-in replacement moved to MariaDB, but it was the last straw for people who were creating new projects and sort of hated MySQL anyway. Everybody remembered the other one.
It was excellent and recommended over MySQL before but I agree it became "cool" in the wake of the mongodb/nosql hype->trough of disillusionment transition.
You get JSONB with Postgres, which, to me, is a significant difference. It means you can index by key/value, access keys directly without parsing the whole JSON string, etc.
For my use case, JSONB support completely killed most NoSQL solutions.
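A sketch of the pattern (names are made up):

```sql
CREATE TABLE docs (id bigserial PRIMARY KEY, payload jsonb);

-- A GIN index makes containment queries on any key/value fast:
CREATE INDEX docs_payload_idx ON docs USING GIN (payload);

SELECT id FROM docs WHERE payload @> '{"status": "active"}';

-- Keys are read from the binary representation; the JSON text
-- is never re-parsed at query time:
SELECT payload->>'status' FROM docs WHERE id = 1;
```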
Sorry, I should have said JSONB. Do you still consider a table with a single JSONB column SQL or NoSQL? Sure, it may be contained within a RDBMS but you are treating it more like a document store than a relationship database.
NoSQL is as popular as it has ever been, though. Many of the most popular NoSQL DBs have outgrown the general database market for a decade, and it doesn't look to be slowing down.
That said, SQL databases are still the king by a wide margin. Postgres has grown more at other SQL DBs' expense than anything.
My experience says that PG wins in the SQL language feature department.
However IMHO MySQL still wins in operational efficiency and performance. These are things that only start to matter when you scale your database and most people never actually reach those sizes.
idk. I worked at a distributed database startup in the mid-2000s and we started with a PG base since that made sense. It became pretty clear that the market was still based around MySQL, and so we pivoted.
Maybe we were mistaken, but let's say the picture was still fuzzy.
You're right, the recommendation in the LAMP era was MySQL, but it ate some of my data once (the disk got full and it kept writing) and I never trusted it again.
What's the situation where minor version upgrades are giving you trouble?
Asking because it's not supposed to be troublesome. In theory (!), minor version upgrades don't require any change to the on-disk data, so you should be able to just upgrade your PG binaries then restart the database.
Ah, looks like Postgres switched to actually using minor and major versions properly during the last two years - I remember the dance from the 9.x versions and postgres-upgrade [1].
In any case it's way more straightforward with mysql.