> This type of innovation takes significant up-front design time, and working on components over longer than one week iterations. Because the projects have such simple external interfaces, and so much internal complexity, much of the work is not even visible to “customers”, so there is no way to write customer visible stories about it. This type of software takes 8–20 months to deliver the first working version to the customer.
At Yahoo, I designed and built a large project. This took about six months to deliver. At the time, the company was doing quarterly reviews that I think Marissa had imported from Google. Anyway, my manager had rated me as "meets expectations" for a few quarters because this project and the one I had previously worked on weren't visible to "customers" till they were delivered, so he didn't have any evidence to justify a higher rating against other managers' team members.
Little did my manager know that a few quarters of "meets expectations" had caused HR to drop me into the bottom 5% of the company and so I received a letter from HR that I was at risk of being terminated.
So I deliver the project and now it's visible and everyone loves it and I get an "exceeds expectations" rating and then a promotion and a raise.
From being warned I'd be terminated to a promotion in less than six months, with no change in my work, but simply it becoming visible. Whee big company fun.
> Little did my manager know that a few quarters of "meets expectations" had caused HR to drop me into the bottom 5% of the company and so I received a letter from HR that I was at risk of being terminated.
This is disturbing and, I think, a twisted form of grade inflation, mixed with the usual suitspeak where words don't necessarily mean what they mean. If an employee is as good as you think they should be, why would you put them at risk of termination?
For the record, I also work in a big company with long product cycles, meaning that the product I'm working on started more than one year ago, and the first public release is planned for next December, so I feel your pain. Luckily our process is a bit more sane and our manager follows our work closely, so I'm getting good ratings (but I wonder if the grade inflation will continue in such a way that "exceeds expectations" now means "subpar employee"...).
This is 100% caused by stack ranking, and the problem you identify is a very real one. Chopping off the bottom 5% of employees works if you have a LOT of dead weight, but realistically you can't really do it continuously or you start removing legitimate talent and, just as bad, fostering a toxic culture that values playing politics over delivering value.
Not only that, but chopping off the bottom 5% typically makes room for more hiring; these emptied jobs are likely to be filled with less-than-stellar candidates. It takes the new hires a few years before they too are dropped down into that lower 5% rank.
Stack ranking ensures that you have churn in your workforce, and not in a good way. You wonder why workers in the tech industry move around so much? Look at stack ranking as a significant contributing factor.
Less cynically, it also increases the odds of finding good employees. A "bottom 5%" worker is statistically less likely to suddenly rise to the top of the pack than a new hire is to produce at that level initially. And, more cynically, that remains true even if there's significant measurement error in identifying that 5%.
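To make that concrete, here's a toy simulation (all numbers made up: standardized "true ability", rating noise as large as the real spread) of what picking a "bottom 5%" through a noisy review process actually selects:

    import random, statistics

    random.seed(0)
    N = 100_000
    # Hypothetical model: true ability is standardized, and reviews
    # only observe it through noise of the same magnitude.
    true_ability = [random.gauss(0, 1) for _ in range(N)]
    rating = [t + random.gauss(0, 1) for t in true_ability]

    # Flag the bottom 5% by *rating*, not by true ability.
    cutoff = sorted(rating)[int(0.05 * N)]
    flagged = [t for t, r in zip(true_ability, rating) if r <= cutoff]

    print(f"mean true ability of the flagged 5%: {statistics.mean(flagged):.2f}")
    print("mean true ability of a fresh hire from the same pool: 0.00")

Even with rating noise as large as the spread in ability itself, the flagged group averages well below a random draw from the hiring pool, so the claim holds; plenty of perfectly average people still get swept into that 5%, though.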
That's true if the performance ratings are cost-free and fairly accurate. A company where firing is based on misconduct and promotion is based on achievement, rather than basing both on quarterly reviews, might actually have a better chance of finding good employees, and perhaps more importantly, retaining them.
Why would companies prefer cheap insurance over productivity? If they want to pay less, they can just pay less. People would rather get paid less than get fired.
It’s often easier to get that pay raise by hopping companies. You’re then replaced with a new hire who is probably getting more than you did. And needs a few months to get up to speed.
Treating your proven employees well to retain them would cost less.
I worked for a large global company and this matched exactly what I was seeing. I had projects with year-or-more development cycles before something was deliverable, so "meets expectations" was difficult when you had nothing to show for your work, as they only cared about final results. We even had an unwritten rule that everyone must have at a minimum two "needs improvement" because "everyone should be working on making themselves better." This led to a dark rabbit hole and a lot of engineers leaving.
With such a messed-up evaluation process, engineering demanded a direct, documented process for doing evaluations. Instead we had managers trying to gauge output based on defects fixed and lines of code committed. You could tell management had no clue.
A previous employer had had that rampant feedback inflation for years, and reset it one year. The Big Boss - quite reasonably in my mind - said “our expectations are high and you should feel proud of meeting them”. But the company culture was still set up around “strong exceeds” being the norm and “meets expectations” being code for “massively underperforming” so that was an uphill battle. And many of those who had gotten used to “strong exceeds” really were massive underperformers. Really to make this work they should have deleted the entire history from the HR database.
> If an employee is as good as you think they should be, why would you put them at risk of termination?
Because quality of the employee and the employee's work are not the only factors in hiring and firing decisions. Sometimes a smaller workforce is what makes sense, and if all your engineers are "meeting expectations", what are you going to do?
There are other factors but just one is under discussion: cutting the bottom five percent in performance reviews. And if consecutively "meeting expectations" puts you near the bottom, then the expectation must actually be to exceed expectations.
Nevermind that this policy necessarily means churn and undervaluing aspects of software like security or fault tolerance and a constant drain of institutional knowledge.
I'm not arguing for the policy. I wouldn't want to work at a place like that.
But I feel like this isn't very hard to understand and I don't know what you're not getting about it.
> And if consecutively "meeting expectations" puts you near the bottom, then the expectation must actually be to exceed expectations.
Uh.. no.. you seem to think "meets expectations" necessarily means "will not get fired". This policy exists, as dumb as it may be. Imagine you're a manager at a company like this and you're tasked with giving a performance review. You have two choices: you could do what you seem to be implying they should do and fudge the ratings, so that the bottom 5% always get "does not meet expectations." Or you could say "fuck that stupid rule, I'm going to rate this person honestly, and if they still fire him that's their choice."
A performance rating and a company policy about firing are two completely separate things. I think you must have a naive understanding of how staffing decisions are made in the real world.
Eh, the language makes a bit more sense than that. It's a bit too euphemistic for my taste, but it's reasonable.
"Meets expectations" — which expectations? The expectations for the role of for the employee? If we're talking about the expectations for the employee, then obviously it makes no sense for meeting expectations to be a bad thing. But if instead we're talking about expectations for the role, it's perfectly consistent to say that an employee who perpetually merely meets the expectations for their role is subpar, if employees are expected to continually improve.
Now what's less clear is why companies care about the derivative of their employee productivity in a labor market where employees stay with a company for just a few years.
A bus driver that “meets expectations” is the perfect bus driver, I don’t want him to “continually improve” thinking that he’s the new Ayrton Senna, I don’t want to be driven around by drivers who think that they’re Formula 1 drivers. How are programmers different from bus drivers?
> A bus driver that “meets expectations” is the perfect bus driver
By what standards are bus drivers evaluated? Near-accidents are OK as long as they're lucky enough to not be involved in a real accident? If so, I'd prefer they actually improve and decrease their odds of being in a future accident. Does efficiency matter? Maybe one that's on time more often is actually better? Maybe customer service matters, and they can get better at helping people who need to buy tickets for the first time?
> I don’t want to be driven around by drivers who think that they’re Formula 1 drivers
It seems like there are multiple dimensions that a bus driver can improve on that aren't "become a race car driver". Given that, it seems like the bus driver analogy already isn't a good one.
> How are programmers different from bus drivers?
It doesn't even seem like we need to answer this one. You started with the false assumption that there are no dimensions that a bus driver can improve on other than becoming a race car driver. One could argue that there are much wider ranges of value creation in programming between average and above average programmers, but that's irrelevant given your initial premise is obviously flawed.
No, I think this is a fine metaphor. If an engineer can understand a problem, identify a solution, and then implement, document, deploy, and support it in production for project after project, it’s completely legit to say they’re good engineers, and are not necessarily improving in any relevant way.
Similarly, a bus driver that consistently drives without causing accidents, doesn’t spill the passengers’ coffee, and doesn’t burn too much gas or wear out the brakes too quickly is a good driver.
What you say to either of these people is “Great job, we love you, keep doing what you’re doing!”
Encouraging growth and improvement as an end in itself can be destructive: How many projects amount to rewriting something that works in some new language or framework because an engineer wants to pad their resume or just learn something new?
Craig Mod recently wrote Fast Software, the Best Software; I’m hoping he follows that up with Software That Already Exists and Works Fine Using an Unsexy Suite of Technologies, Software that Doesn’t Need to Be Rewritten.
> No, I think this is a fine metaphor. If an engineer can understand a problem, identify a solution, and then implement, document, deploy, and support it in production for project after project, it’s completely legit to say they’re good engineers, and are not necessarily improving in any relevant way.
We agree. If someone has maxed out all the relevant dimensions then there's no room for improvement. The problem is that there's a difference between that and average. The other problem is that there's continual room for being even better at identifying solutions.
> How many projects amount to rewriting something that works in some new language or framework because an engineer wants to pad their resume or just learn something new?
Unneeded re-writes and getting better are orthogonal concepts. That you're confusing them here underlines the logical fallacy that you're making. If there's legitimate room for improvement (and there generally is in bus driving and in software engineering), and if that improvement makes you substantially better at your job, then it's worth it. In neither of those areas is the "average" employee completely topped off in where they can grow.
In that programmers are supposed to be creative, over-deliver, etc.
You'd worry if a bus driver got from A to B in half the time driving twice as fast suddenly.
But you'd be OK if a programmer you've asked to design system with features A, B, C and performance P, also delivers features D, E, F and performance 2*P without being asked!
> In that programmers are supposed to be creative, over-deliver, etc.
Like I said, I don't want the programmer in charge of the software that manages my financial or health records to be creative or over-deliver, quite the contrary. I'd add personal-data to the mix, and now I've covered a huge chunk of SV companies. I think that the era of "move fast and break things" should be over by now, unfortunately relatively paltry $5 billion fines won't stop the creativeness of some companies.
>Like I said, I don't want the programmer in charge of the software that manages my financial or health records to be creative or over-deliver, quite the contrary.
Sure, but most programmers don't program systems that manage financial or health records or write the software for the ISS.
For the majority working on enterprise backoffice stuff, CRUD, web services, mobile and desktop apps, websites, and so on, being creative and over-delivering is OK, and even encouraged.
You'd like your financial and health records to be managed by average-quality software? That is, software that's full of bugs, confuses its users into making lots of mistakes, and is insecure? I'd rather have them managed by file clerks on paper if that's what's on offer on the software side.
> From being warned I'd be terminated to a promotion in less than six months, with no change in my work, but simply it becoming visible. Whee big company fun.
They probably deduced from this that their warning system worked as expected.
Regression to the mean tells you that if you reward success and yell at under performers the successful people will get worse and the under performers will get better. Lesson learned: All yelling, all the time.
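A toy simulation of that effect (hypothetical model: each quarter's rating is fixed skill plus luck, and nobody changes their behavior between quarters):

    import random, statistics

    random.seed(1)
    skill = [random.gauss(0, 1) for _ in range(10_000)]
    q1 = [s + random.gauss(0, 1) for s in skill]  # quarter 1 ratings
    q2 = [s + random.gauss(0, 1) for s in skill]  # quarter 2: same skill, new luck

    praised = [b for a, b in zip(q1, q2) if a > 1.5]   # rewarded after q1
    yelled = [b for a, b in zip(q1, q2) if a < -1.5]   # yelled at after q1

    print(f"praised group, q2 mean: {statistics.mean(praised):.2f} (was above 1.5)")
    print(f"yelled-at group, q2 mean: {statistics.mean(yelled):.2f} (was below -1.5)")

With zero intervention, the praised group comes back "worse" and the yelled-at group "improves", which is exactly the illusion that teaches managers that yelling works.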
And did your manager go on to get another promotion? And maybe a few more team members had similar jumps in performance?
A new manager we had deliberately did this to us. A few of us got marked at "under-performer" levels at random times - at risk of being terminated under the bell-curve justification - and then within 6 months (not all at the same time) we were all at "exceeds expectations". The few of us figured out what the manager had done - after 2.5 years. By then she had gone from manager to senior director with a decent portfolio - and pretty quickly, too. Oh, why not! She created "stars" and "performers" out of under-performers.
I ended up "deliberately" underperforming for the next 1-1.5 years after they first tagged me. How hard can it be to NOT work? Well, it is hard. I invested in learning a lot of stuff. But management always figured out I was more than meeting expectations, and got me raises and promotions. Maybe forwarding emails did the trick. But then I got bored, frustrated and quit.
The engineers who I have seen game these systems inspire a mix of anger, jealousy and admiration. I can never decide if those of us working hard and trying to make things better are dumb or if the people exploiting company policies are.
I watched a new hire recently lynchpin himself into a critical part of the software, make larger-than-life promises about new “cutting edge” tech in presentations, constantly post articles in Slack, and finally stop showing up or being online much at all other than to do some lip-service and make the occasional PR. I am a little envious of how little he does and how much praise he receives for checking all the boxes with higher-ups.
Where I work this is an "Architect". I'm in the middle of a project designed by these same people to use these "new cutting edge technologies"; we are essentially replacing one legacy pile of shit with another new pile of shit that is even more complicated to maintain. It blows my mind the money that is being wasted.
In agile, or any other project management process, the "customer" is not necessarily just the final user but any stakeholder.
As a rule of thumb: anybody who needs regular progress updates on a project is a "customer".
The difference in agile vs other development processes is that progress reports are partial releases of working code instead of a percentage increase on a Gantt chart.
Sorry you had to go through such obviously bad management. Just to give some context about how these guidelines are supposed to work at Google: being managed out for staying at Meets Expectations is something that's supposed to be applicable only to entry-level roles (when I was there, this applied to the new-grad T3 rank as well as T4; I think this may have changed since I left to be just T3).
But at an entry-level rank, it is possible to get good ratings and even get promoted if your product hasn't shipped to customers yet (and I would do this with my directs routinely), as long as you are delivering milestones. Part of the definition of T3 is that you cannot yet independently work on a project, i.e. you need help from senior devs to get things done, and IMO if you stay at this level for more than 1.5 years (3 reviews), and there are no other extenuating circumstances, it's appropriate to consider if the company is a fit.
At more senior levels, you have to show business/customer impact to get promoted, but there is also no notion of being managed out for meeting expectations.
Of course, these guidelines aren't always implemented or communicated equally across the company, to say nothing of other companies that copy only parts of the process.
That termination threat is to me a signal that the company hates me and the business relationship is unrecoverable. I'd be polishing my CV and would be gone within a month of finding a better opportunity. Likely with a raise while I was at it.
Toxic environments like that are not worth working in. All sorts of other "interesting" corporate cultural aspects are usually also present. Their ethics are compromised so fraud and other criminal activity etc is probably going on as well. Even if not, the company is badly run and long term will fail in various ways for foolish reasons.
How is this even legal? Is there a movement to ban stack ranking? If we don't voice our concerns as employees, then nobody will.
Also the fact that "meets expectations" is concern for firing makes absolutely no sense. A plumber who "meets expectations" in fixing my toilet is a good plumber. I don't expect a plumber to go "above and beyond" and give me a Thai massage too.
Absolutely unreal some of the crap we put up with in this industry. Blows my mind that there are actually humans out there in management positions advocating for and implementing this totalitarian dystopian garbage, throwing away their employees for "meeting expectations" as if they're disposable "widgets".
This is a sign of a sick and abusive culture.
Once process becomes more important than people then the evil ones will work the system and climb the ladder hurting everyone in their path. Yuck.
Look on the bright side: Yahoo is an example suggesting that at least in tech if a company is managed badly enough this eventually results in its demise.
Huh, from what I've heard, good luck ever getting a promotion at Google with a track record of MEs and a single EE. People with SEE regularly get their promotions denied.
That's because it's uncorrelated, not a high bar. Until recently, the manager giving you EE and SEE was totally different from, and actively distrusted by, the promo committees.
And on top of that, the criteria are different. Working a little faster than your peers is EE, and is rewarded with raise and extra bonus, but that's different from performing in a higher level role with qualitatively different expectations.
Your post is implying a few things that compel me to ask:
1) There is a pure relative ranking firing cut-off at Yahoo regardless of how well the people at the bottom do?
2) Does Yahoo make this clear when they hire you?
3) Your manager was so incompetent that he could not judge your performance outside of customer facing product?
4) A company as big as Yahoo thinks it's going to survive alienating the personality that looks for big-company culture with that kind of "make the cut" performance review?
The problem, so to speak, is that companies are getting extremely data driven, and constantly need some form of metrics to evaluate their employees.
This again can turn into a culture where doing what's expected is devalued to the lowest score - because you have that system where there have to be bottom and top performers.
And you sadly end up with a situation where workers are (almost) embellishing their projects to get more visibility.
My company insists each ticket in Jira have a time logged on it now. This is on top of story points. Everyone I know has pushed back on it, but I hope this isn't becoming a standard elsewhere.
Product / project / scrum guy here. Logging time won't make sense unless each ticket has only one person working on it at a time. The moment it becomes teamwork (like asking someone else for their thoughts on an idea), the logged time is meaningless.
People are also useless at estimating time in advance, but pretty great with throwing a story-point number on a given story with the right guidance. Please feel free to schedule me in with your uneducated upper management. They're about to lose value.
Same in the company I work at, but only for IT employees. It's fine as long as you have over 80% of the monthly hours in your contract tracked.
Definitely not used for micromanagement. I myself find it useful for seeing inefficiencies in what I do and ensuring I actually go home.
It's handy for charging clients for development work specific to them.
My project just delivered MVP 1 which automated a certain infrastructure build process that was previously a mix of manual, semi-manual (people running scripts by hand) and automated. In parallel to that green field development I also fixed the issues with the previous solution. Happy path for the old tool is down from 20+ days to 4 and dropping, happy path for the new tool is hours.
Now that I've delivered those improvements my boss added reworking the software stack that feeds my current system. He also said to me "Oh, BTW RIFs are coming, take that how you will."
So I'm not certain if at this point I'll be RIFd right after delivery of a complete retooling of our infrastructure build process.
That sounds interesting. A few years ago I worked on a project (the back-end system that managed large chunks of the core UK internet infrastructure).
That had an extended 9-month design phase - but a short 12-week agile sprint phase that delivered two intermediate POC versions and then a final working project (we even delivered hardware when at the last minute the customer admitted they hadn't ordered a server) - we even took enough networking gear that we could have installed a network if that was missing too.
Ugh. If "meets expectations" puts you in the bottom 5%, this says to me that there's wild miscalibration in the review process; e.g. not everyone is using the same standard for "meets" and some teams have inflation going on. At my last employer we had a lot of cross-review between teams to make sure everyone applies the same standard, just to avoid this problem.
I have never heard of a tech company that doesn’t require that you explicitly give up ownership rights to software written on the job, so good luck with that
Speaking as someone who has been at Google for about 10 years, I would caution against trying to paint "developers at Google" using broad strokes. Google is a multifaceted megacorporation housing a plethora of software project categories. Through the years I've been deeply involved in a wide spectrum of them, ranging from "disconnect for 6 solid months and deliver a leapfrog for the masses" to "form an umbilical cord with a handful of enterprise customers." The techniques necessary to drive results in each case vary in accordance with the nature of the problem. This is where broad perspective and good judgement come into play, and dogmatic adherence to -- or dogmatic rejection of -- any particular development process presents a risk.
You missed the point by two miles. I still use Hangouts, but they’ve introduced 12 other apps in the last 10 years that do the exact same thing. All while making Hangouts / google chat / gTalk (whatever it was originally called in Labs) worse.
Google is good at a lot of things. Messaging strategy is not one of them. HN just hates Google doing well in anything, hence the expected harping on messaging.
The hardest part of working with really smart people is that they can rationalize pretty much anything. Worse still, if they have been on the project for years and you’re the new guy, you’re working from a deficit of knowledge and they’ve had plenty of time to cook up a “coherent but wrong” model of the universe. The only thing that saves a group like that is humility. Trying to see the system through the eyes of the new person helps loads. I call people out for complaining about “having to” train a new guy. It’s “getting to”, man. Take advantage of the opportunity.
The first time I worked with a stable group where everyone was as smart as me, even the old new guy with a massive inferiority complex who managed to sneak one really profound comment into every meeting, there was a lack of humility on everybody’s part and things did not go well. Planning meetings were exhausting because you could get them to bash their own old work but couldn’t get them to admit to needing to change process at all. I should have quit, but I had a string of experiences with persistence paying off and I had not been beaten yet. That was the first time, and my current employer the second (I should probably quit here, too).
Cognitive dissonance is at its absolute worst in the head of a scary-smart person.
You probably know this, but a lot of studies back you up. Being smarter doesn't necessarily make you more likely to be right, but it makes you a hell of a lot better at rationalizing your position and persuading others. Also, that comment about the "old guy with an inferiority complex who sneaks one really profound comment in every meeting" really hit home. We have that guy on my team too. You have to wonder what's going on when those guys are sitting in the corner and us blustery people are doing all of the talking.
I would say that being smart does make you more likely to be right, but your overall rightness is not just the percentage of cases in which you were right, because some bad decisions are weighted far more heavily (and unexpectedly so) than others when it comes to the overall outcome.
Where smart people fail is because they were "right" in a limited, seemingly appropriate context and it turns out that in a larger context, it didn't matter.
> Being smarter doesn't necessarily make you more likely to be right (...)
> (...), but it makes you a hell of a lot better at rationalizing your position and persuading others.
My reading has said the opposite: smart people are far more likely to be right, but it does no good because the important thing is to be able to communicate it.
So I included that "necessarily" on purpose. In general smarter people will solve problems more correctly. But there is also a whole class of problems where being smart is actually a detriment. Here is a good read on this topic: https://www.newyorker.com/tech/frontal-cortex/why-smart-peop...
There's a lag (potentially years) in judging peer competency (or cognitive dissonance) when you switch domains. Gatekeeping and poor new-hire integration are early warnings.
You might ask yourself if your current executives are aware of the issues you're seeing, then think through what you can control and re-prioritize.
I personally think that it is all a matter of risk. The risk of contract work (which is what the agile manifesto writers were facing for most of their careers - they are all consultants) is building something that the customer does not want or did not intend. The actual tech risk is very low (e.g. Ruby on Rails on Postgres) since most of them use mature technology. In this case the agile manifesto makes sense - i.e. we can build something really fast and literally ask the customer if this is what he wants every two weeks.
When you're building hard tech products (e.g. Borg or TensorFlow), the actual risk is technology and competition, and since this is new technology, there are not that many customers. In this case the risk is in the architecture (i.e. it would be hard to change the core architecture) and in the technology, since you do not know what kind of applications will be built on top of this new technology. In addition, you face general competition risk, since the market can usually absorb only one dominant architecture.
So in this case, you invest much more time in design and careful construction.
Even without mature technology, you want to shake down the new tech early to find out if it’s going to crumble around your shoulders at the worst possible moment.
While Agile is not perfect for this kind of work, it does have a lot of facility in this area.
Now if you could just get any two people to agree on what constitutes the last responsible moment...
And with new tech you often have to do it first or do it better. Oddly I find Agile seems to work better for “do it better” because if it’s easier for you to add features than your competitor, they will probably get burned out before you do. If you can survive that long you start to pull ahead.
So, just from observation, when developing new tech, there is always LAST to market advantage. MS was LAST to market. Google was LAST to market. Kubernetes was LAST to market. Apple was LAST to market (After MS and Palm).
I think that Agile is good after the market is captured. But when developing the core tech (usually with no customers beside product managers and based on research), it might be too constraining.
I’ve observed the same thing. I’ve also observed that the dogma in the startup world is exactly opposite. Everything is about version 1 and there is nothing left in the tank for version 2.
The whole MVP thing is more about selling demoware to investors. It just sounds like it’s being supportive and it gets them what they want. They don’t give one shit about what happens 18 months from now. That’s somebody else’s problem. There’s always someone with a higher pain tolerance they can hire (note: pain tolerance is negatively correlated with capacity to anticipate problems before they become emergencies). And there’s always advertising or mergers to mask the real problems.
Gates made this argument strongly, in public, before getting nailed a couple of times - the latest big smackdown being Android, as he's confessed recently. "Last to market" is often "first to market with the truly appropriate technology."
I agree 100%. You should be continuously delivering in a way that allows you to learn about the area of greatest risk.
For much of what Agile development is applied to, the greatest risk is market risk. You want to get something out to users as quickly as possible to see if your assumptions about your users and their needs are right.
For google style projects, there is primarily technology risk rather than market risk. If you are trying to build something that is obviously useful but is perhaps impossible, it makes sense to focus on delivering working systems that allow you to see if your assumptions about technological limitations are correct.
Suppose you are a Series <X> startup with a very large amount of technology to build before you surpass your competitors. In order to do a better job, you have to adjust things constantly in response to customer feedback.
Presupposing that there are years of feature work to be done, do you go with Agile, or waterfall?
There is unfortunately a large gulf between “agile” and “Agile” development these days. The former refers to development generally in line with the agile manifesto, with processes tailored to the needs of the team and the project. The latter is a buzzword used by consultants to convince management to put their favored bureaucratic method in place, usually some variant on the scrum system.
This is mostly what I see happen too: "Agile" is used as an excuse to introduce some type of bureaucratic process to fix what are actually pathological cultural / leadership issues.
Agile as practiced by nontechnical managers is essentially a hill climbing algorithm.
And I think we all learned in college that hill climbing rarely if ever finds the optimal solution. Often it doesn’t even find Good Enough. But it’s the best thing you can see in the vicinity so it gets labeled Good Enough even over the protests of people who have done it better elsewhere.
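For anyone who hasn’t seen it since college, a minimal sketch of the failure mode (a made-up one-dimensional fitness landscape; the names are just for illustration):

    import math
    import random

    def hill_climb(f, x, step=0.1, iters=10_000):
        # Greedy local search: only accept a neighboring point that
        # scores strictly better than where we are now.
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)
            if f(candidate) > f(x):
                x = candidate
        return x

    # Small local hump at x=0, much better peak at x=4.
    f = lambda x: math.exp(-x**2) + 2 * math.exp(-((x - 4) ** 2))

    random.seed(2)
    print(hill_climb(f, x=0.0))  # settles near 0 and never finds the peak at 4

Every individual step toward the better peak looks worse than staying put, so the search polishes the nearest bump and calls it Good Enough, which is what sprint-by-sprint local optimization does without a longer design horizon.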
This. Our team tends to take more of a Kanban approach and we've yet to miss a quarterly delivery and always have a good sense of how far along we are. Meanwhile, the higher ups think it's black magic we don't spend 4 hours in "sprint planning" every two weeks.
First of all, it's not like most engineers at Google are working on Bigtable or Borg, with a "very simple interface and tons of hidden internal complexity". Plenty of them are working on normal consumer-facing products, the "software with a simple core and lots of customer visible features that are incrementally useful", although maybe that's not the hype people want to believe.
But either way -- he insists "companies like Google write revolutionary software which has never been written before, and which doesn’t work until complex subcomponents are written." But putting aside the "revolutionary" hype, there's no reason subcomponents can't have agile philosophies applied.
A big part of agile is ensuring developers actually understand the requirements (consensus via point estimation), seek to define requirements where they're suddenly discovered to be vague/undefined, and have frequent check-ins to demonstrate that their software is making progress towards those requirements and raising any potential blockers early on, and not accidentally going down a rabbit hole of building the wrong thing (despite best intentions) for weeks at a time which nobody notices until too late, and the project is delayed by months.
That's just as important for a subcomponent of a subcomponent, as it is for a service dashboard. To assume agile is only for "consumer-facing" software is a deep and fundamental misunderstanding.
This is true. I'm a swe at Google. My team uses scrum methodology, and is responsible for a customer facing but thoroughly "normal" product. My work doesn't involve building heretofore unknown planet scale computational resources. It's just a normal product where I'm a full stack dev.
I'm sure there's ways we can improve our canonical agile-ness, but it's not lip service. We set quarterly goals sure, but they can change if situations change and the goals themselves are usually data-driven (based on prior quarters a/b testing and other research done by our UX team). We also have strong product and project managers, who own a lot of the scrum meta-work.
So, I guess #notallgooglers (which for context is an ironic, kind of self deprecating response here). There's tens of thousands of engineers at G. Some teams are agile, some aren't.
This Quora post is typical pro-Google propaganda. Things at current Google are far from what's described: modern Google is a deeply bureaucratic, hierarchical organization.
> First of all, it's not like most engineers at Google are working on Bigtable or Borg, with a "very simple interface and tons of hidden internal complexity".
I would say that most engineers are working on something like that. Those two examples are from google cloud/technical infrastructure, but other products such as search and ads have tremendous complexity behind them.
I feel like this hubris is why almost every google product I use is riddled with horrible bugs. The cursor move functionality on Gmail for iOS has been broken for TWO YEARS. It’s also why the tensorflow API is a horribly over-engineered abomination when Pytorch is almost at feature and performance parity with something exponentially simpler and more consistent/elegant. That “biggest baddest solution” culture makes sense for Bigtable; it’s god-awful for anything remotely customer facing.
> Plenty of them are working on normal consumer-facing products, the "software with a simple core and lots of customer visible features that are incrementally useful", although maybe that's not the hype people want to believe.
Honestly, it seems a bit elitist to me.
I work at a company that pioneered SOA for both technical and organizational flexibility reasons and I'd say it's worked out pretty well for us!
This article makes a common mistake: Scrum and Agile are not synonymous. Given that error, the conclusion is nonsense. And besides, even Scrum doesn’t require a delivered product each sprint. Just an updated status and increment completed.
If I’m building a new server and client architected system, the early sprints may be getting certain design details down. More documentation than code. Later on, I may have sprints that only establish the handshake between client and server and no real functionality. The point is taking milestones (big chunks of work) down to smaller chunks of work that can establish a clearer goal set for the near future.
And that isn’t even unique to Scrum. If you don’t establish small iterative goals it becomes difficult to measure progress. Saying, “We can’t measure success until this large number of lines of code are completed” is folly. If you have that design, you can measure your progress towards completion by creating inch-pebbles for the next few iterations. Like in my example, finalize design docs based on R&D efforts, make prototypes or simulations, create basic handshake code, then build up server and client features in tandem to ensure they make sense (maybe the design can be simplified when you realize several commands or messages are really specialized versions of a more generic one, or maybe something is more complex and requires a reexamination of the design). And yes, a lot of this is common sense to good engineers. Problem is, common sense isn’t common and you’ll get people pushing Waterfall again and again.
> This article makes a common mistake: Scrum and Agile are not synonymous
It does not. It merely acknowledges that Scrum is the most popular and widely practiced incarnation of Agile and is what most developers are familiar with.
From the article: "The simple high level Agile Manifesto is something I think is close to the way Google engineers think about software development...Google development style exemplifies the type of individual empowerment talked about in the principles."
And: "While the high level Agile Manifesto is flexible enough to work with these principles, these are very different than the short-iteration low documentation Agile/Scrum process which has become synonymous with the word Agile."
When you are at the edge of technology, a lot of the initial work is the smart people going off and thinking for a few weeks or months. Talking amongst themselves but not necessarily producing visible things. Just updating their mental state as they search solution space for the best achievable answer. Now, I have worked in a place where that time was accounted for and allowed, but in most places the value of it seems lost on management.
If you redefine "agile" as common sense and what currently works for you, and "waterfall" as failure, then obviously every good thing will be "agile", and wrong stuff will be "waterfall".
I feel like people who are opposed to agile and scrum often attribute things to them that are in no way prescribed by these methodologies. For example, I see people mentioning the idea that with scrum, you should release working software every sprint. This is not the case. The idea is to deliver a _potentially_ shippable product increment - a.k.a. not building 10 bridges in parallel if there's no need for it. Some might even call this common sense.
Most people who are pessimistic about agile development who I have spoken to are usually pessimistic because of poor implementations they've encountered. Any methodology is awful when misused or when hidden agendas are at play. No methodology will ever solve that.
That being said (probably unpopular opinion), I often also see engineers use agile to blame everything that is going wrong on "management" and use it as a way of avoiding responsibility. Complaining and arguing as if there are only absolutes is an immature and unproductive attitude. Calling agile "nonsense" is a prime example of that.
My biggest criticism of Agile and Scrum is that it assumes that you can just plan out how a feature will work by sitting in a meeting and talking about it. Then spend the next two weeks implementing that plan. Software development almost never works out so neatly.
What really happens is that you have a set of requirements and an idea of how you are going to implement the feature. You start working on it, and maybe you discover that there is a whole area of code that needs updating in order to allow for your feature to be implemented. This adds another week of work.
In a common-sense-based development team, what usually happens is the developer has a quick word with the project manager and maybe other devs who need to be involved and says "Hey, I've found out that we need another week to complete this feature because X needs refactoring, do you still want me to continue with this?" Then the PM either says "Yes, this feature is important, even if it will take another week" or "No, the feature is not important enough to be worth spending another week on". It's really not rocket science.
In SCRUM what usually happens is the SCRUM master will tell you that the rules say that we have already committed to a set amount of work this sprint and that extra week of work falls outside the scope so we should create another card for the extra work and consider this card blocked.
The whole idea of having a rule for what to do in that situation just seems absolutely crazy. Just do whatever makes sense! It's really not that difficult.
> My biggest criticism of Agile and Scrum is that it assumes that you can just plan out how a feature will work by sitting in a meeting and talking about it. Then spend the next two weeks implementing that plan.
They don’t; there are ‘spikes’ that are just time spent researching or exploring or designing rather than implementing.
If your scrum master is telling the team what they can or can’t do, or how they should do their jobs, they are doing it wrong. Their job is to help you plan and help you resolve impediments to your work. They aren’t your boss.
The first thing in the manifesto is people over process, there should never be a case where some rule tells you that you can’t do the right thing.
Formal Scrum is a nice way to organize ‘normal’ work, but there are lots of cases where you have to improvise, and the process should never get in the way of that.
> If your scrum master is telling the team what they can or can't do, or how they should do their jobs, they are doing it wrong
Unfortunately in my experience this does happen quite often. I've found the biggest issue with scrum is when people (usually the scrum master and PM) take hard-line, dogmatic views of scrum processes.
There's this course on Pluralsight that talks about the two orgs that promote scrum. One of them seems to be giving out the wrong ideas. I think it's this one (copied from Wikipedia):
> In 2002, Schwaber with others founded the Scrum Alliance[18] and set up the Certified Scrum accreditation series. Schwaber left the Scrum Alliance in late 2009 and founded Scrum.org which oversees the parallel Professional Scrum accreditation series.[19]
I think it was the Scrum Alliance that's giving out ideas like the burndown chart being required in scrum.
> My biggest criticism of Agile and Scrum is that it assumes that you can just plan out how a feature will work by sitting in a meeting and talking about it.
No, it doesn't. It wants the team to discuss features to help find preventable issues up front and to make it as clear as possible. The unknown unknowns remain unknown, which is why an estimate is an estimate, and a sprint is a best-effort attempt at getting a thing done. If and when it becomes clear that this feature is too complex, too big, or whatever, then it can be reestimated, or decomposed or reevaluated. It's supposed to be agile, not rigid and fragile.
What's the point of SCRUM then? If all it does is tell you "do what makes sense", why do we even need it? That's a rhetorical question; I already know the answer is that we need it in order to keep SCRUM consultants, certifications, training etc. a profitable business.
Whether or not you think SCRUM is good or bad, you can't deny that a sizeable number (if not the majority) of companies are "not doing it right" and that a sizeable number (if not the majority) of developers loathe working in SCRUM teams. So wouldn't we just be better off if the industry ditched SCRUM entirely?
The "do what makes sense" is limited because you have to ask questions to ascertain what makes sense.
Some questions are easy to answer, but many questions can only be answered by doing a thing that takes work. And Scrum gives you a framework that tries to put a limit on how much work goes by with no answers.
So it uses a simple schedule.
The team works for X weeks, working on implementing specific tasks. (Which in turn answer some questions.)
Then the team stops, reviews their progress on the larger product, and adjusts plans accordingly. Now "do what makes sense" has demonstrably advanced.
Working is not vacation. Sticking to a process is a hygiene factor which doesn't necessarily have to be fun, but it makes it possible for companies to be properly managed.
That being said, stop complaining. If you can find room to complain, you surely can find room for improvement right? It might just be me, but everywhere I've worked I've never encountered a manager who isn't open to solid ideas for improvement.
I never said I wanted software engineering to be fun. I have worked on 4 different teams that followed Scrum, and all four of those teams suffered from the same problems - low productivity and poor management. The most entertaining was a recent place I worked where we had a certified Scrum master who had been "trained" by one of the two founders of Scrum. When I joined I told him about some of the bad experiences I've had with Scrum and he said "yeah, that annoys me; those teams are not really following scrum, and so teams think that scrum is bad, but really they are not doing it right". Of course in the months to come I realised that we ended up with the exact same issues I saw in previous teams. In contrast, these problems were just non-existent in teams that did not follow a set methodology.
Scrum is just not a good way to manage a team. I find Scrum to be a lot like religion - it means different things to different people but its evangelists always have a pre-prepared answer to every possible criticism, usually containing lots of jargon, buzzwords and double speak.
It's to keep everyone in the loop on what's happening in the codebase, and give people a time to ask for help on whatever's blocking them (so you hopefully get it sorted in a 10 second conversation instead of having to break their flow later to get help).
In the traditional waterfall model, you don't do what makes sense. The Plan has been set, and you have to find a way to do even the parts that don't make sense, because it will be a huge disaster if The Plan isn't complete.
Sometimes it's even the right development model! If you're building the control system for a new power plant, you'd better follow the agreed-on spec even if you realize it's not the best way to do things.
> What really happens is that you have a set of requirements and an idea of how you are going to implement the feature.
The thing is, your story cards or whatever are meant to be a live document. You have the big meeting, the next day you realise that what you came up with doesn't work, you grab a coffee with your stakeholder and talk it through, come up with a new plan, and you change the card you're working on. Probably check with the tech lead first, but it's that flexible.
> In SCRUM what usually happens is the SCRUM master will tell you that the rules say [...]
This SCRUM thing sounds evil.
> Just do whatever makes sense! It's really not that difficult.
This is probably the best advice ever for any development team. Just do the thing!
I think the OP is missing the point of scrum. The scrum master isn't supposed to tell you what the rules are. The rules are whatever the team decides they are. The scrum master is not the boss, they're supposed to be a facilitator...
"This Guide contains the definition of Scrum. This definition consists of Scrum’s roles, events, artifacts, and the rules that bind them together."
Definitions, Rules, Roles, Artifacts etc. Sounds pretty authoritative to me. If it wasn't authoritative, how would you even know what SCRUM "really" is? You cannot say both that SCRUM is not authoritative and lets you do what you want, and then also say "you're not doing scrum correctly".
All of the rules say the team is allowed to change anything that doesn’t work for them. Scrum is a set of suggestions for organizing a team, it’s not Moses coming down with tablets from on high.
Why not try to find a perspective that will make things easier for you? You can look at any suggestion and shoot it down for reasons X, Y and Z - that's easy. Try to invest some time in finding out what works for you and focus on that.
You shouldn't actually ever change a story in progress if you can avoid it. The feature is the feature. How you build it doesn't go in the story. And if you aren't sure how to do it, you account for that in the estimate.
You never just write a story on the spot for a feature. It has to come out of completed design work that takes as long as it takes and ideally is funnelled through the same pipeline as you use for development.
The problem I've seen with agile is that it's too easy for it to turn into "as long as we complete something that was defined as value this sprint, the problem isn't what this team is doing", so many places end up having teams avoid any large or external issues completely while reporting everything is going great.
You may say it's not a methodology's fault when it's misused, but I'd say it IS a methodology's fault if the feedback cycle on the misuse is easily hidden by the process.
In the end I think for many places where agile is put in, more value would be had by putting that work into the business communication flow. If your biggest problem is that you don't know how to work on problems in smaller chunks and communicate when you've done something, and that's what you need to reform, you're probably doing pretty well and agile will be great. If you're implementing agile for another reason, it probably won't fix that problem and agile will be blamed.
I think Agile has the same strength and weakness of any methodology: The people implementing it. If you have good people running and working on an Agile project it's going to go well. Those same people would do similarly well with waterfall, or just sitting in a dark room with a carton of Coke each.
Where your methodology really matters is the how information flows between your dev team, the end users, and any other systems you need to interoperate with. Agile is great for providing development-as-a-service to a client who doesn't know what they want but will know it when they see it (eg. building enterprise software or web apps). Waterfall is great for providing known, fixed functionality (eg. building an ECU for an engine, or a back-end for a web bank).
I think the answer badly misconstrues the purpose of "short term planning" and "continuous integration" as though these concepts preclude long proof-of-concept projects. The point of short delivery cycles is not to deliver a viable product to an external customer every two weeks. Rather it simply ensures that code remains in-sync with its intended use-case and guards against individuals going off on untracked tangents.
Basically, developers who try to develop 10,000 lines of code on their own, without sharing it for a month, create massive risks and are an impediment to their team. Forcing regular code curation and sharing among the team makes the project stronger – that's what "short term planning" does.
Doing 1 month without integration is still fine. When agile was conceived, doing 6 sprints per year would have been considered agile in many fields. The goal is to be as responsive as reasonably possible, not simply to add thousands of mostly meaningless milestones.
It turns out that most projects can be broken down into absolutely tiny pieces, and that is generally a good idea. However, the core idea is you need to adjust the methodology to the problem at hand and not the other way.
> It turns out that most projects can be broken down into absolutely tiny pieces, and that is generally a good idea.
Generally I agree, but I have noticed that in doing things like that, no one ever seems to bother with refactoring, or tidying things up at a higher level.
Our designer complained about the CSS organization and spent a couple of days tidying it up. I have a feeling that breaking tasks into small pieces takes away focus from a more long-term commitment to code organization and architectural patterns.
Some of the most beautiful code I’ve ever seen has come of an engineer going off for several months and coming back with a 1MB patch.
That kind of time allows some actual deep thought to happen. Some kinds of outcomes won’t happen any other way. I agree with that point in the quora answer.
I'm sorry, but what's the point of a 1MB patch of beautiful code? Did this programmer actually deliver business value for the customer or not? I'm not saying that you shouldn't keep your code nice, decoupled and whatnot, but having "the code is beautiful" as a major KPI is nonsense to me.
It’s important that it’s beautiful since the bytecode of a VM gets used so much throughout some of the VM’s most complex parts. Beautiful code is easier to use and maintain.
Also, they don't seem to grasp the meaning of "customer" in the context of agile. If nobody ordered the software from you, the customer is you. So the product owner must be someone from the company; it can even be the scrum master assuming both roles.
> I think the answer badly misconstrues the purpose of "short term planning" ...
'short term planning' appears to be your construct, not found in TFA.
Short term implies reactive / tactical -- not the planned / strategic level that TFA is talking about.
It may well be that Agile 'precludes long proof-of-concept projects', given its essence is 'better ways of developing software', which clearly hides a wealth of complexity behind the heading of user-facing software.
Agile (and XP) realised that it's a folly to plan in detail the next 5 years when likely the project, the circumstances, the business landscape, the team and even the technology will significantly shift in that timespan.
Of course, this doesn't apply if you have a very concrete goal and you need to do a trial-and-error, iterative research and development project; there, there is no need to do "agile", because the goal is fixed, the things you can try are probably largely predetermined, and you already know one of them will work adequately. (Or not, but then at least you might have some form of benchmark target in mind.)
But even then, almost no endeavor starts in a vacuum, so it likely pays off to have some ability to quickly check progress (be it a CI system or just a testbed in the lab, or the aforementioned benchmark), and similarly other aspects of "agile", like try to break up the work and estimate it (think about it, its dependencies, its complexity, your competence for the particular work item, typical pitfalls), let the team know who works on what, interact with stakeholders from time to time, and so on.
I consider it nonsense because it becomes dogma at each organization. The problem with that is that they each have their own way of being “agile”, and questioning the status quo is futile at best and politically charged at worst.
Secondly, well meaning management often thinks that the reason they are having trouble delivering on time is because agile needs to be implemented. If the developers and business analysts don’t have deep understanding of the problem set then the project is at risk. All the agile in the world won’t fix that.
Hiring developers is not easy, and finding those that can deliver is also tough. Agile doesn’t solve that.
That said I do prefer an agile like approach to dev ops because it usually encourages teams to communicate and break work down into smaller pieces. When that happens system wide issues come to light much quicker than they would in a waterfall style delivery.
As it relates to the Quora question, I suspect that the person who asked it is pretty new to development. Google is huge; there are probably a million different styles of dev ops happening there. OK, that was hyperbole, but steelframes’ comment is exactly right. Google isn’t a monolith and Googlers aren’t all the same.
> questioning the status quo is futile at best to politically charged at worst.
Which is antithetical to the agile manifesto. The whole point of agile was to question the current methods and be agile enough to change them as needed.
The most toxic example of Agile/Scrum I've come across so far has been at a company that was non-technical at its core but badly wanted to pivot to a SaaS business model. The type of company where 50% of the employees have the word "manager" in their title yet manage no one (Digital Project Manager, Technical Project Manager, Product Manager, etc.). I think it's just part of a lot of companies' culture that if they can instill some process, any process really, then the results will follow, and that software engineers complaining about the demoralizing effects of being micro-managed by junior coworkers who never wrote a line of code in their life are just typical spoiled tech workers. Other companies I've worked for that used Agile methodology were not all that much better, just somewhat less absurd. I'm glad I've gotten out of that world; it's left such a bad taste in my mouth that I don't know if I can even look at Agile objectively anymore.
Both my best AND worst experiences with process have been 'agile' (although the best was NOT "Scrum").
And not just because all of my experiences have been agile (they have not been).
For agile to work well, you need a competent team and manager, AND to be embedded in an organization (i.e., whoever determines the appraisal of your manager/team) that is truly concerned with prioritizing value to the business/customer over, say, "looking good to other managers", "trying futilely to predict what the software will look like a year from now, because it makes me feel in control", or "weird power struggles with other parts of the organization". Everyone (or at least the majority of those with the power to make you miserable or break your career) has to be at least really trying to work together to maximize business/customer value.
If you have that, is your experience (and your software) going to be good whether you use agile or not? Probably. But agile is a really useful and effective mindset for making your software product _and_ your experience (stress-levels for instance) even better. It works, in my experience.
If you don't have that, is your experience (and your software) going to be terrible whether you use agile or not, in proportion to how much you lack it? Sure. And agile, and especially scrum, can probably make it even worse.
Agile as implemented at almost all companies fails to account for the possibility of software with simple observable interfaces requiring extremely complex and intrinsically non-modular internal design. Most high-end platform infrastructure work looks like this if you are doing it correctly. As a consequence, it becomes difficult-to-impossible to do any kind of hardcore platform software engineering work in most organizations because their model for what successful agile software development should look like doesn't allow for it. There is no appetite for software development where there will be no visible progress for many months and demonstrable functionality is extremely backloaded.
A related issue is software that cannot be meaningfully prototyped because any design that is reflective of the desired product has the same order of effort as the actual product, for many of the same reasons.
There are certain kinds of high-value software that cannot be built inside most tech companies now because it pattern matches so poorly to the way people think "agile" software development should look like.
At Google, there are too many complex systems and too many team dependencies to break tasks down into granular chunks, or to shoot from the hip in a pair programming session.
Google runs on design docs. You define the problem, dependencies, risks, end state, etc, in a 20-page doc, and then you drive its development for 1-2 quarters. Google doesn’t do extreme programming, we do extreme ownership.
I really prefer it to the time I spent working with Pivotal or Carbon5. They were great, but imo not a sustainable process for mature products. Agile is good for driving a new, greenfield MVP... and that’s about it.
There’s a lot of power in refactoring, but I’ll allow that some of it is counterintuitive. Anything counterintuitive requires state of mind and experience to line up and that’s quite hit or miss.
When that awful Design Patterns book had come off the hype train but new devs had heard about it and asked if they should read it, I would tell them no, read Refactoring twice instead, because it tells you why and when to use this stuff, which is much more valuable. Like feature work, if you know why you are doing something, you can often reason out the gaps in the “what” on your own.
We sort of take a bunch of XP processes as a given and forget that this was Agile for most people at the time of the manifesto. Scrum pretends it’s separate but is essentially useless without half of XP.
Google does refactoring, but you’re expected to sell the idea, plan its roll out, and measure its impact... and then bring the codebase into a consistent state. I would rather things be consistently “wrong” than inconsistently right.
You can assign work to teams to complete, but you must drive the effort and gain buy-in. Otherwise it’s not actually worth refactoring.
I don't want to get into the specifics of what makes Scrum good or bad. But one thing I have noticed working in software is that 99% of people who talk about Scrum actually have not spent any real time learning it. Not the product people, not the managers, and not the engineers.
There's a lot to it. As a software engineer, learning more about it was actually pretty exciting for me, because there are some good ideas in there for some of the standard headaches I see over and over in software companies.
> From tax-accounting software to computer games, some software is not suited to give to end customers when partially finished.
The original Extreme Programming project at Chrysler was a payroll system, so many strategies for handling things like this have been thoroughly discussed in literature and forums in that community.
The main gripe I have with this analysis is it weds the short-term sprint cycle with customer exposure & feedback. It then argues that, well, we can't let customers change their minds every two weeks and Big Serious Things take 20 months to get working so this is just for like web contractors and stuff.
I don't see agile as exclusively requiring you to expose everything you do to your end-users as you make it. I get that it can accommodate that if that's what you want, but the real essence of it is about de-risking a project by breaking things down into shorter milestones.
Do you remember those Microsoft Press software management books? So good. One point was "Beware of a Guy in a Room".[1] Which is to say, don't let developers shut their door and go dark for months at a time without surfacing with something working. It can take a bit more work to adapt a big project into smaller "working" milestones, and it can seem silly and inefficient, but in the end this helps avoid a lot of the risks that lead to deathmarches that can truly kill a project.
One way to adapt the "customer" aspect of agile is to not have an actual live customer but rather a "voice of the market" that the product manager represents. After all, you do want to empower them to course correct because the world can change over 8-20 months. This applies even to BigTable.
If your software development organization can't ship software without an agile process, then it's unlikely that it can ship software with an agile process either.
I'm not saying that agile processes are inferior or even no better than other software development processes. I'm just saying that there is some level of fundamental capability that a software organization must have before the process is even going to matter.
> much of the work is not even visible to “customers”, so there is no way to write customer visible stories about it
If your work doesn't have a customer, why are you even doing it?
The customer doesn't have to be the business's end user. They can be another team or even your own team. Someone must benefit from the work, or else the work should not be done.
I am sometimes amazed by our industry. Somebody comes up with a solution to a problem and then someone else starts applying it before they even have a problem. Sometimes people ignore what mistakes/challenges others have dealt with and win/lose their own battles. So many times, people just say X does/uses it as some kind of validation or guidance. Well, if your business is also like X, great, you are learning from other people doing the same thing. Otherwise why are you looking at X and not "your" challenges, resources and opportunities?
There is just too much religion here :)
If you need it, use it; if it works, great; if it doesn't, or you don't need it, move on. The only advice I would give people is that there is no rule book and no one true way. Use your brain and pick up the things that work.
The corollary is that for everyone else working on solving business problems in non-software industries (coincidentally there is a related discussion from today https://news.ycombinator.com/item?id=20599190) the principles are still perfectly sane.
In that kind of environment it doesn't lead to short-term thinking because a long-term business vision can exist independently of how a software solution is implemented.
Edit: removed the bit about Google being an exception.
"The word 'agile' has been subverted to the point where it is effectively meaningless, and what passes for an agile community seems to be largely an arena for consultants and vendors to hawk services and products." - Dave Thomas (Manifesto for Agile Software Development author)
> Developers should create a Google Design Document (a fairly minimal, but structured design doc), explaining the project, what goals it hopes to achieve, and explains why it can’t be done in other ways.
Does somebody have more on this? I'm never sure how to structure my design documents. What does a Google Design Document look like? Is there a template somewhere for how to write one? That would be super helpful.
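I haven't found an official public template either; the closest I've pieced together from public write-ups by current and former Googlers is an outline along these lines. Treat it as a sketch of the common shape, not the actual internal doc:

    Title, author(s), status, last updated
    Context and scope -- what exists today, and why this project
    Goals and non-goals -- explicit, so reviewers know what's out of scope
    Design -- the actual proposal: APIs, data flow, diagrams
    Alternatives considered -- and why they were rejected
    Cross-cutting concerns -- security, privacy, scalability, monitoring
    Open questions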
Anybody doing Agile/Scrum know how it deals with people issues? It seems like with these practices everyone is on a tight leash -- how does it deal with the fact that real people have a natural ebb and flow in their concentration levels at work, or interest in work? If you are scrumming all the time, how do you get slow periods in your work cycle so you can recharge (aka slack off :-))?
First, I disagree with the author's post that all software developed at Google is revolutionary low-level technical infrastructure. But that's not the only type of software development that benefits from avoiding the productivity tax that agile imposes.
There are teams at Google that look to use agile methods. E.g. I see standups, there are internal tools that track sprints, etc. I think these are mostly front end teams.
I've never worked on front end but was on a team that used scrum at a previous job. I hated it. It seems like a huge impedance mismatch for my kind of work.
I'm guessing some consultants sold management on the miraculous benefits of scrum at my previous job, they like the snakeoil they were sold and forced scrum on everyone. The amount of overhead process it brings is an enormous tax on productivity.
Some people call this "corporate scrum" to try to distinguish it from real scrum. But this feels like a No True Scotsman fallacy. Why is it so damn hard to do "real scrum" and actually get the productivity benefits it promises? I've never met any SWE in real life who has tried scrum and liked it.
At Google, the development process is not just "develop 10000 lines of code on their own" like some posts here have imagined. There is typically a design phase that gets wide buy in, and then code reviews, and extensive testing every step of the way during development. This feels far more realistic as a process.
And it is NOT some "waterfall" process, which only seems to exist as a strawman for agile consultants to knock down. There is no architect who designs the code and then hands it off to code monkeys to implement exactly as written. That's ridiculous, and not how software is written anywhere, as far as I can tell.
If I were to answer the question in the title myself, it's that Google has allowed engineers to build a process that works for them, and they almost never pick scrum or other agile methods.
Agile at its core, to me as an engineering manager, is:
- People over process/tools
- Plan-able execution and pivoting quickly in rapidly changing environments
- Create consistent, repeatable work delivery
- Retrospection, learning and iteration over time
- Tools (can) support the process to multiply efficiency but do not solve the core goals and problems
Like the author of the comment said, the details of how you execute "agile" is where things get bumpy for teams.
The problem with agile is that it is a greedy algorithm. It can lead you to local optima.
Usually this is not an issue if you are doing contract work.
However, if you try to create a product in a market with competition, a non-agile competitor can reach a global optimum (for example, Apple vs MS in mobile phones), and then you will lose the market (and sometimes your future).
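To make the analogy concrete, here's a toy sketch (the payoff numbers are invented, nothing more) of how a greedy step-by-step search settles on a local optimum while the global one sits elsewhere:

    # Toy payoff landscape over a row of design choices; the numbers
    # are made up purely to illustrate the greedy/local-optimum point.
    value = [1, 3, 5, 4, 2, 6, 9, 8]

    def greedy_hill_climb(start):
        """Keep taking the best adjacent step until nothing improves."""
        i = start
        while True:
            neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(value)]
            best = max(neighbors, key=lambda j: value[j])
            if value[best] <= value[i]:
                return i  # local optimum: no single short step improves
            i = best

    print(greedy_hill_climb(0))                             # 2: greedy stalls at value 5
    print(max(range(len(value)), key=lambda j: value[j]))   # 6: global optimum, value 9

Each sprint is a short step that must show improvement, so you end up at index 2 and never cross the dip at index 4 to reach index 6.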
I would speculate that this philosophy is so far removed from the customer that it's the reason they deliver so many consumer-facing duds and exhibit a completely schizophrenic product roadmap in so many verticals.
Amazon is relentlessly customer-focused and is now a veritable hit factory, mostly in industries that were already seen as mature.
The worst part about Agile is that non-technical technical managers I worked with convinced the non-technical business managers that using Agile would lead to more and better innovation. And that's how they got training and pseudo-experience to pad their resumes.
At most companies I’ve mostly seen “agile” used to justify a process that is clearly broken. The one exception is when I worked with Mark Herschberg. He is a talented CTO. He did Agile the right way. I wrote up his technique here:
Let's paraphrase:
1. Don't talk to customers.
2. Write down the design and never change it. Hold a lot of meetings.
3. Maximize bureaucracy.
4. Maximally slow down the project.
5. Don't finish anything earlier than planned.
There's a lot of irony in saying that about "developers at Google" and then immediately turning around and classifying everyone else as "the masses". Talk about broad strokes!
Beyond the literal fact that "everyone" is undeniably a giant mass of people, I suspect this term was intended to mean "to consumers", particularly given that the following example calls out enterprises.
Developing for a small but demanding set of enterprise customers with concrete (but sometimes arcane) requirements is a very different problem from developing for consumer markets.
That wasn't the implication. The issue is a hypocritical assertion. If treating a large group as a unified actor is improper, then giving another group another label and doing the same thing, just because that group falls into a different classification, is also improper. The fact that these views live in the same comment is ironic.
ie
> I would caution against trying to paint "developers at Google" using broad stroke
> If treating a large group as a unified actor is improper, then giving another group another label and doing the same thing...because it's that group is in a different classification, is also improper.
I think this is the logical leap that people aren't understanding. Generalizing means assuming things about individuals based on their group membership: OP wasn't saying it's a moral failing, but that it leads to inaccurate assumptions when applied to the heterogeneous group in question.
What's the analogy to using the term "the masses"?
The morality and intent are irrelevant to my core assertion. A criticism of contravariance (if that makes it clearer) followed by a usage of it is unexpected hypocrisy, which is ironic.
It's a phrase that tends to be used by people who consider themselves superior to 'the masses'. The phrase refers specifically to the lower classes, or those outside of elite circles[1]. Combined with Google's reputation for quiet arrogance[2], it's a phrase I would have avoided.
[2] Put into words nicely here https://medium.com/@steve.yegge/why-i-left-google-to-join-gr..., but you can feel it in many of Google's products where they make all the major decisions because they don't believe users are as capable of making good decisions as Googlers.
I read "the masses" as anyone but the development team. Who's to say that the masses doesn't mean Googlers? It wasn't categorising anyone beyond being "everyone else", as I read it.
Who said that Googlers are not part of those "masses"? Surely they enjoy products from their company as anybody else?
And then, of course, nobody said you're part of those masses anyway. Maybe you're not. Do you enjoy Google product X as other hundreds of million do? Then you're part of the mass that enjoys it. Otherwise you are part of the mass that doesn't.
'The masses' refers to the lower classes (look at any dictionary and at how the word has historically been used: 'opiate of the masses', 'the unwashed masses'). It would not typically be correct to say that a Googler is part of the lower classes. 'The masses' is definitely not the same as 'everyone'.
Saying "unwashed" before most words standing in for people would likely not be a nice thing to say. So of course "unwashed masses" isn't a good phrase. Nor is "unwashed people".
From the 2018 shareholder's letter, Google created $335 billion in economic activity in 2018.[0] From the 2019 Q2 financial results, Google has 107,646 employees. By raw division, the average Google employee creates $3,112,052 in global economic activity each year.
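A quick back-of-the-envelope check of that division (figures as quoted above, not independently verified):

    # Figures as quoted above, not independently verified.
    economic_activity = 335e9  # $335B, 2018 shareholder's letter
    employees = 107_646        # 2019 Q2 financial results
    print(economic_activity / employees)  # ~3,112,052 per employee per year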
I know a nurse; occasionally she brings people back to life, and it is just a part of her job. Is she contributing to humanity more or less than the average Googler?
Average seems like a weird measurement. Some Forbes article claims that by 2030 the world will need 68 million teachers (I had trouble finding how many currently exist). So the number of teachers and the number of engineers are hardly one to one.
Maybe you could say : is the average engineer contributing more to humanity than 1,000 teachers? Or you could go by pay: is the average engineer contributing more than 10 teachers? (guessing on numbers of course)
A one-to-one ratio, though? I'd argue: yeah, probably. The effects are indirect, but Android/Chromebook/Google Docs... these have all brought computing technology and the internet into the hands of children around the world. Internet access is clearly one of the great opportunities for education. It might be hard to measure, but it clearly has an outsized effect.
Maybe "it's natural" would have been more accurate than "it's quite common"; I was trying to word it to avoid it being read as claiming Google broadly is elitist.
But, no, I can't cite a study; it's a pattern I've observed at a range of places.
Can the teams at Google experience processes where they talk to their customers, iterate, talk to customers, iterate again and move fast and break stuff - releasing at each step of the way and measuring their intended A/B/C tests?
I suspect a company like that has committees and subcommittees and review processes and 100 layers of bureaucracy before anything even gets done.
I bet there are projects that reached a production version and never saw the light of day for this very reason.
I went from waiter -> self-educated SW engineer founding own startup -> sold startup -> Google. (Been on 3 teams, all of which shipped to match HW deadlines)
The only thing noteworthy about Google is _there is no process_.
Hell, just last week, a coworker and I finally resolved a year-long difference of opinion when it clicked for him that there wasn't a defined system; insomuch as there was one, I was defining it, and I was thrilled to have someone else help with that.
Overall picture: Google recognizes it's a large company that has an extreme diversity of teams with idiosyncratic goals and personalities, and doesn't prescribe _anything_
Dollars to doughnuts there are multiple teams doing extremely prescriptive Agile, but I've never seen one in 3 years in a 2,000-employee office.
By far the most organized team I've been on merely kept track of bugs using an internal web app that renders bugs from the internal bug tracker as post-it notes arranged in columns, with each column representing a bug status (ex. assigned/started work/code in review/done).
This alone caused a ton of grumbling; meanwhile, I'm sitting there gobsmacked that Scrum and Agile aren't proper nouns that everyone has highly detailed opinions on.
Nailed it, that's the name of the internal web app :) I always hesitate to call it Kanban because a relative of mine who's a university professor is obsessed with Proper Kanban, and it's nigh-unrecognizable to me beyond the post-it notes and status columns.
The teams I have worked on at Google have no process on a team level. We just copied the mandatory things like code reviews and automated testing, other than that people just figure out what to do on their own. The main motivation people have to work is so they have something nice to put in their biannual performance evaluation, so everyone is very eager to try to solve problems. Of course if you bite off more than you can chew and totally fail then you have nothing, so people try to pick sensible projects which will actually succeed.
GSuite does an incredible amount of talking to customers, because it’s easy to know who your target customer is. But who does a billion-user consumer app talk to? For that we rely on internal testing and reported feedback from family/friends.
People do read your in-product feedback, btw. But it takes quarters for anyone to triage and act on it.