Emphasis on the "aspirationally" -- it really appears not to be how most companies do things, despite overwhelming evidence that doing these things works.
Totally agreed. Incidentally, based on my experience working with savvier smaller companies, talking to one-man shops, and hanging out with startups, I think the community believes there is a body of "everybody does X" practices which are actually quite rare.
Examples I'm intimately familiar with include A/B testing, using usage metrics to drive decisions, customer development, lifecycle email marketing, etc. Or for more dev-focused stuff, unit tests, Selenium ("the best technology that I'll never use"), code reviews as a routine practice, reproducible server setup/deploys, etc.
> A/b testing, using usage metrics to drive decisions, customer development, lifecycle email marketing
Yes, but I've been pleasantly surprised at the clueyness of more than a few big-name clients recently.
> Or for more dev focused stuff, unit tests, Selenium ("the best technology that I'll never use"), code reviews as a routine practice, reproducible server setup/deploys, etc.
For deployments at least I've been blown away by the range of configurations I've seen. Everything from "git push heroku master" to "ssh into these three machines, run svn up, restart apache, merge dev couchdb to live, copy paste these 80 sql patches out of a file into the terminal, sacrifice a hamster to Amon Ra and cross your fingers."
> A/B testing, using usage metrics to drive decisions, customer development, lifecycle email marketing... unit tests, Selenium [testing], code reviews as a routine practice, reproducible server setup/deploys, etc.
Many companies favor the short-term win (money) over technical quality, because few people have the skill to quantify in an Excel spreadsheet, during the meeting between the business and tech sides (product manager and CEO vs. team lead/dev manager), how much money the company would have saved by doing things right the first time.
Basically it comes down to quantifying this quality work against developing new features.
I had the pleasure of working in a very small, tight-knit team where you need a bulldog to bark at people when they slack off on defense (i.e., writing unit tests, keeping the build script up to date).
Results: on time and within budget, with only 1 Saturday, 1 statutory holiday, and 2 days of overtime until 10:30 PM (all reimbursed afterwards) for a rather ambitious project that went from zero to shipped in 6 months and live 2 months afterwards.
Oh, and we had 1 performance bug after the release (an easy one to solve) and 1 requirements bug 3 months after release. The rest (it's been live for about a year) has been as smooth as a baby's skin. We rotate the BB phone (on-call), but it has never buzzed.
Our saving graces were:
0) Management Support
1) Continuous Integration
2) Unit-Testing
We use these unit tests as a REPL mostly, so we don't have to re-compile and re-deploy the _whole_ app to GlassFish.
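A minimal sketch of that "tests as REPL" loop: exercise a small component directly on the JVM instead of redeploying the whole app to the container. The class names here are hypothetical (not from the original project), and the check is written as a plain `main()` so the sketch stays self-contained; with JUnit you'd put the same assertion in a `@Test` method.

```java
import java.util.Locale;

// Hypothetical component: in the real app this might be a service
// packaged into the GlassFish deployment; here it is poked at
// directly, with no container, no redeploy.
class InvoiceFormatter {
    String format(double amount) {
        // Locale pinned so the output is stable across machines.
        return String.format(Locale.US, "$%,.2f", amount);
    }
}

public class InvoiceFormatterTest {
    public static void main(String[] args) {
        InvoiceFormatter f = new InvoiceFormatter();
        // Tweak the component, re-run this one test, see the result
        // in seconds -- that's the REPL-like feedback loop.
        String out = f.format(1234.5);
        if (!out.equals("$1,234.50")) {
            throw new AssertionError("unexpected output: " + out);
        }
        System.out.println("ok: " + out);
    }
}
```

The point is the turnaround time: a plain JVM run of one test is seconds, versus minutes for a full Maven build plus GlassFish redeploy.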
Basically, the people above the developers trusted our judgment to work on these non-feature tasks: fixing the build script (we use Maven, but when it was first set up it didn't handle deployment to multiple environments that well) and improving components (we were a newly created team, hence the bulldog that barks) by either rewriting them or adding more unit tests.
Management almost never questioned us, except in one or two situations.