These tools allow you to specify a state and then specify which servers should have that state. A 'state', loosely speaking, is a collection of definitions of 'X should be Y'.
For example:
- Package 'libapache2-mod-php' should be installed.
- File '/etc/apache2/sites-available/customersite' should have the contents of this file we have on the master server.
- File '/etc/apache2/sites-enabled/customersite' should be a symlink to the above.
- Directory '/var/www/customersite' should have the contents of this git repository.
- Command 'apache2ctl graceful' should be run if any of the above change.
With these five rules, you can turn a default install of Ubuntu into a webserver providing client files in a matter of seconds.
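A rough sketch of what those five rules can look like as a Salt state file (the 'customersite' paths follow the example above; the salt:// source path and the git URL are placeholders):

    libapache2-mod-php:
      pkg.installed: []

    /etc/apache2/sites-available/customersite:
      file.managed:
        - source: salt://customersite/apache.conf    # file kept on the master

    /etc/apache2/sites-enabled/customersite:
      file.symlink:
        - target: /etc/apache2/sites-available/customersite

    customersite-code:
      git.latest:
        - name: https://example.com/customersite.git    # placeholder repo URL
        - target: /var/www/customersite

    apache2ctl graceful:
      cmd.run:
        - onchanges:
          - pkg: libapache2-mod-php
          - file: /etc/apache2/sites-available/customersite
          - file: /etc/apache2/sites-enabled/customersite
          - git: customersite-code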
Once you have those rules written, you then apply them. You can wildcard, so that hosts matching 'www*.yourhostingco.com' get these rules.
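The targeting lives in the top file; a minimal sketch, assuming the state above is saved as webserver.sls:

    # /srv/salt/top.sls -- maps minions to the states they should receive
    base:
      'www*.yourhostingco.com':
        - webserver    # the state file sketched above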
Now, when you want to add a new webserver to your cluster, you install a new Ubuntu instance, you register it with the Salt master server, and then trigger a state update, and you're done. No SCP'ing files, no manual git checkouts, no copy-paste-edit configuration management.
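With a standard master/minion setup, that boils down to a couple of commands on the master once the new machine has a minion installed and pointed at it (the minion ID here is a placeholder):

    # accept the new minion's key on the master
    salt-key -a www03.yourhostingco.com

    # apply whatever states the top file assigns to it
    salt 'www03.yourhostingco.com' state.apply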
Then, once you've got all that, you can get into templating. You can do templating both on the contents of files (e.g. for memcached config, insert the machine's internal IP address instead of having one config per server) as well as the rules themselves to avoid having a ton of boilerplate (for module in 'list','of','python','modules', install the module via pip with these rules).
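Both kinds of templating are typically done with Jinja; a hedged sketch of each (the grain, module names, and paths are illustrative):

    # Fragment of a memcached.conf template, rendered with 'template: jinja'
    # in a file.managed state, so each machine binds to its own address:
    -l {{ grains['fqdn_ip4'][0] }}

    # Loop in the state file itself to avoid boilerplate:
    {% for module in ['requests', 'flask', 'gunicorn'] %}
    {{ module }}:
      pip.installed: []
    {% endfor %}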
One of my favourite benefits of something like Salt is that if you use Salt to do all of your configuration and back up your Salt config to GitHub or wherever, you always have a record of how you did things. You don't get the incomplete documentation or missing steps that happen with most other approaches.
As a sysadmin managing a cluster of systems, it can be life-changing.
But the problem that all of the configuration managers have, and the reason they all suck, is that the state you specify is not guaranteed to be the state you end up with. A much better alternative would be to manage state in transactional chunks and test each chunk: an 'http-server' chunk, for example, would require that the machine responds on port 80, and would fail or roll back if it doesn't. The rollback could be implemented with namespaces, jails, etc.
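None of these tools give you that transactionally out of the box, but the testing half can be approximated with a verification state; a rough sketch, assuming curl is available on the minion and using a placeholder command for whatever snapshot/jail rollback mechanism you'd actually use:

    verify-http:
      cmd.run:
        - name: curl --fail --silent http://localhost/

    rollback-webserver:
      cmd.run:
        - name: /usr/local/bin/rollback-webserver    # placeholder rollback script
        - onfail:
          - cmd: verify-http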
In combination with service.running you can require that, for example, package x is installed, and its service is running, before service y is started.
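A minimal sketch of that ordering, with x and y as placeholder names:

    x:
      pkg.installed: []
      service.running:
        - require:
          - pkg: x

    y:
      service.running:
        - require:
          - service: x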
I would not say that is a reason why all configuration managers suck. The feature you'd like to see would add significant overhead and complexity, along with potential drawbacks, for something not everyone needs.
Meanwhile, a disciplined system of designing and documenting tests and rollback procedures before committing changes will accomplish this without the need for additional software.
My guess is that if you are not disciplined enough to design tests and rollback procedures on your own, then you are probably going to just switch off or work around those features if they are built into the software.