I'd like to welcome the GitHub ops/DBAs to the club of people who've learned the hard way that automated database failover usually causes more downtime than it prevents.
Automated database failover is absolutely mandatory for HA environments (as in, there is no way to run a five-nines system without it), but, poorly done, it can actually reduce your uptime (which is a separate concept from HA).
I've been in a couple of environments in which developers have successfully rolled out automated database failover, and my takeaway is that it's usually not worth the cost - with very, very few exceptions, most organizations can take the several minutes of downtime needed to do a manual failover.
In general, when rolling out these operational environments, they are only ready once you've found and demonstrated 10-12 failure cases, and come up with a workaround for each.
In other words - if you can't demonstrate how your environment will fail, then it's not ready for an HA deployment.
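To make that concrete, here's a minimal sketch (the scenario names and structure are my own illustration, not a prescribed list) of treating those failure cases as an explicit readiness checklist: the setup doesn't count as HA-ready until every case has actually been reproduced and has a written workaround.

    # Readiness-checklist sketch: an environment is only "HA-ready" once every
    # failure scenario has been demonstrated and has a documented workaround.
    # Scenario names below are hypothetical examples, not an exhaustive list.
    FAILURE_SCENARIOS = {
        "master_power_loss":      {"demonstrated": True,  "workaround": "promote replica per runbook"},
        "replication_lag_spike":  {"demonstrated": True,  "workaround": "hold failover, page on-call"},
        "network_partition":      {"demonstrated": False, "workaround": None},
        "failover_manager_flaps": {"demonstrated": False, "workaround": None},
        # ... keep going until you've covered the 10-12 cases mentioned above
    }

    def ha_ready(scenarios):
        """True only when every scenario is demonstrated and has a workaround."""
        return all(s["demonstrated"] and s["workaround"] for s in scenarios.values())

    print("HA-ready:", ha_ready(FAILURE_SCENARIOS))  # stays False until the gaps are closed

The data structure is beside the point; the point is that "we think it'll fail over" doesn't count - only failures you've actually demonstrated do.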
With the possible exception of life-safety systems, credit card processing, stock exchanges, and other "high dollars per second" applications - I just don't see getting HA right on transactional databases as worth the effort. Properly rehearsed, a good Ops/DBA team (and, in the right environment, NOC team) can execute a decent failover in just a few minutes - and there aren't that many environments (beyond the exceptions listed above) that can't take two or three 5-minute outages a year.
The alternative is that your HA manager decides to act wacky on you, and your database downtime ends up longer than a manual failover would have taken.
For some reason - this rarely (almost never, in my practical experience) is a problem with HA systems in networking. With just a modicum of planning, HA Routers, Switches, and Load Balancers Just Seem to Work (tm).
Likewise, HA storage arrays are bulletproof to the point that a lot of reasonably conservative companies are comfortable picking up a single array/frame.
But HA transactional databases - still don't seem to be there.
automated failover in the case of too much load is usually not what you want to do. automated failover in the case of hw/network failure is usually what you want to do. differentiating the former from the latter is left as an exercise for the reader.
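One rough way to draw that line (my own sketch, not from the thread or from any particular tool): treat a refused or timed-out TCP connection as a candidate hard failure, and a server that still accepts connections but answers slowly as probable overload. The port, thresholds, and the probe_query_latency helper below are all assumptions you'd swap for your own checks.

    import socket

    MYSQL_PORT = 3306       # assumed default port
    CONNECT_TIMEOUT = 2.0   # seconds; tune for your network

    def classify_db_state(host, probe_query_latency=None):
        """Return 'hard_failure', 'overloaded', or 'healthy' (rough heuristic only)."""
        try:
            # A refused or timed-out TCP connect looks like a hw/network failure:
            # the case where automated failover usually *is* what you want.
            with socket.create_connection((host, MYSQL_PORT), timeout=CONNECT_TIMEOUT):
                pass
        except OSError:
            return "hard_failure"

        # TCP is up. If a trivial probe query is merely slow, the box is probably
        # overloaded -- failing over just moves the load instead of fixing it.
        if probe_query_latency is not None:
            latency = probe_query_latency(host)  # hypothetical helper: run SELECT 1, return seconds
            if latency is None or latency > 5.0:
                return "overloaded"
        return "healthy"

Even then you'd want several consecutive failed checks, from more than one vantage point, before acting - which is exactly where most automated setups get it wrong.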
Here's sort of the seminal post on the matter in the MySQL community: http://www.xaprb.com/blog/2009/08/30/failure-scenarios-and-s...
Though it turns into an MMM pile-on, the tool doesn't matter so much as the scenarios. Automated failover is simply unlikely to make things better, and likely to make things worse, in most scenarios.