Hacker News
Why I close pull requests (jeffgeerling.com)
292 points by geerlingguy on Dec 28, 2016 | hide | past | favorite | 181 comments


At Google, if you want to implement a new feature (or a large refactoring), you'll need to write a design doc, in which you answer the questions your reviewers are likely to ask (common ones: why do you want to do this, what are the alternatives, how do the components interact with each other before/after your change). This is something like Python's PEPs: you need a proposal to convince your reviewers that you have put thought into your change.

Real world examples of these design docs can be found at https://github.com/golang/proposal


Requiring permission to do work is the enemy of progress and engineering dignity. It creates a presumption of incompetence and an atmosphere of low trust that punishes people who want to push the envelope of what's possible.

Google's design document culture is bad. Google has succeeded in spite of it.

In my experience, having worked at many large tech companies, design documents obfuscate rather than enlighten. They grow increasingly out of date as the code evolves, becoming anti-documentation that makes the code take longer to understand. Yes, yes, people should update design documents as the code evolves. Everyone knows that in practice nobody updates old design documents.

Design documents make it too easy for other developers to shoot down ideas. Sometimes the worth of code isn't apparent until it's made. It's far too easy for someone to comment "this will never work" on a proposed change. It's much harder for someone to deny benchmarks attached to a proposed change. It's too easy for reviewers to knock out functionality.

Design documents turn every feature into a half-assed, lowest-common-denominator risk-minimized barely-adequate shell of itself.

The real reason everyone at Google writes design documents is that promotion committees demand documents as "evidence of complexity". No design document, no impact. No impact, no promotion.

Code is just code. Bad changes can be backed out. It's much better to move fast and iterate quickly than to create an illusion of care and add friction to every aspect of the development process. Up-front design of software just does not work. If it did, waterfall project planning would be successful.

These questions you highlight (Why are you making this change? What impact will it have?) can be asked during review of the actual code. There is no need to build a speed bump, not if you trust your people.

Developers should be able to choose to create design documents and solicit feedback. Most changes don't need this process. You should trust developers to know what changes require a more extensive discussion and which ones don't.

A culture that requires permissions and signoffs before work can begin is a culture that leaves products stagnant for years.


At least in the part of Google where I work (Technical Infrastructure), most of the obvious optimizations that can be made at a single level (within the scope of a single programmer or a single team) were made long ago. So most changes to make the system more efficient require coordinated changes across multiple teams and multiple pieces of software, and in some cases may impact more than one SRE team.

In that kind of situation, you betcha we need to have design docs! And as for making it easy for other developers to shoot down ideas: very often they know about some dependency or key assumption in some other piece of code that you didn't know about. It's better to find out during the design phase than to rework 50% of your change when you find out at code review time, or worse, after it gets deployed and you get angry notes from SREs who were woken up at 3am and you need to send them a bottle of whiskey to apologize for your f*ck up....


We write design docs at Google to communicate ideas with each other. They're useful for promo committees because effective communication between engineers is a prerequisite for effective engineering.

Of course a full design doc is not needed for every change. Many changes are small and straightforward. Only big things need one, where the team has to discuss and understand the options. Or bigger things still, where directors etc. need to sign off (not really design docs anymore, but the same basic idea).

The fact is that, on average, design doc+code takes less time than code without design (that is, without communication). Again, only for certain kinds of changes. Things go faster because problems are found, approaches are adjusted, or unmotivated features are axed.


> We write design docs at Google to communicate ideas with each other.

I have no problem with individual developers choosing to circulate documents in order to solicit feedback. My objection is to rigid processes that force engineers to write documents.

Mandatory design documents for "communication" invariably morph into checklists of required signoffs from people who have little incentive to say "yes". In this way, a culture of design documents breeds a culture of extreme risk avoidance.

It's a tragedy, really. Through numerous small steps, each apparently reasonable, a nimble organization becomes an ossified nightmare in which it takes six months to add a checkbox.

> The fact is that, on average, design doc+code takes less time than code without design

Prove it. Provide evidence. In my experience, your claim is not the case for most changes in most projects. For the changes where design documents facilitate development, my experience is that developers will choose to circulate documents even when not required to do so.

> Unmotivated features are axed.

One developer's "unmotivated feature" is another developer's essential use case.


> Prove it. Provide evidence. In my experience...

It's amusing that you demand solid evidence yet are willing to rely on your own anecdotes.

> Mandatory design documents for "communication" invariably morph into checklists of required signoffs from people who have little incentive to say "yes".

Or they make you think about things that are not obvious at first glance, especially at Google scale. For any customer-facing feature, you have to make sure that PII is taken care of, that security is implemented properly (SQL injection and XSS vulnerabilities, for example), that internationalization is handled (especially right-to-left languages), that UI fit and finish plays well with the design guidelines on both web and mobile, and that cross-browser compatibility is at least thought about, along with other issues.

> One developer's "unmotivated feature" is another developer's essential use case.

One developer's essential use case is another three dozen developers' backwards compatibility breaking change.

> Through numerous small steps, each apparently reasonable, a nimble organization becomes an ossified nightmare in which it takes six months to add a checkbox.

When you're serving up traffic at those volumes, with datacenters all over the world, accumulating revenue that quickly, yes, it's worth taking six months to add a checkbox, taking every step possible to ensure it doesn't leak a security vulnerability somewhere.

Just because your individual progress is slow doesn't mean that the progress of the team is slow. One breaking change in, say, Google AdWords can undo literally man-years of work.


> That's amusing, that you want solid evidence, yet you're willing to use your own anecdotes.

I'm not the one presenting my anecdotes as fact: "The fact is that, on average, design doc+code takes less time than code without design".

Anyway, you've very clearly articulated the conventional wisdom of big companies originating in a certain era of computing. Conventional wisdom isn't necessarily wrong, but it's not necessarily right for all time either. There are companies with data needs, user counts, and codebase sizes on par with Google that don't practice Google-style process, yet succeed anyway. That these companies have succeeded without Google's process is evidence that Google's process is unnecessary, at least in today's environment.

> SQL injection and XSS vulnerabilities for example

Code-level concerns. You're not going to stop SQL injection by looking at some high-level design document. The same goes for RTL text layout bugs.
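To make the "code-level" point concrete: the SQL injection fix lives in the individual query call, where no design doc reviewer will ever look. A minimal sketch using Python's stdlib sqlite3 (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    # Invisible at the design-doc level of abstraction.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Fixed: the parameter is bound by the driver, never parsed as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# The classic injection payload works against the unsafe version only.
payload = "' OR '1'='1"
assert find_user_unsafe(payload) == [("admin",)]  # leaks every row
assert find_user_safe(payload) == []              # matches nothing
```

The two functions are indistinguishable in any architecture diagram; only code review (or static analysis) catches the difference.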

> it's worth taking six months adding a checkbox to take every possible step possible to ensure that doesn't leak a security vulnerability somewhere.

Keep that in mind when smaller competitors surpass you. It's easy to say that Google's codebase represents 18 years of work. I strongly suspect that it wouldn't take so long to do starting today.

Look at self-driving cars: how long has Google been working on them? How long has Uber? Whose cars are serving real-world passengers today?

These days, we have 1) very good continuous integration systems, 2) good code review tools, 3) fast shipping vehicles, and 4) continually improving static analysis. These things were unavailable (at least at adequate quality levels) when Google started its design culture.

Maybe the conventional wisdom you articulate might have been an optimum some time ago. These days, I think it's far too process-heavy and that Google and similar companies haven't kept up with the times.

> Just because your individual progress is slow, doesn't mean that the progress of the team is slow

It means that the team is inefficient. Communication overhead goes as N^2, after all. Google's teams are notoriously huge. When you see that a startup (or even another > $1 billion company) can do the same damn thing Google does and put a quarter of the people on the task, maybe it's time to wonder whether Google is doing something wrong.
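The N^2 claim is just the count of pairwise communication channels, n(n-1)/2. Quartering a team cuts that overhead by roughly a factor of sixteen:

```python
def channels(n: int) -> int:
    # Number of pairwise communication channels in a team of n people.
    return n * (n - 1) // 2

print(channels(40))  # 780 channels
print(channels(10))  # 45 channels -- roughly 1/16th the overhead
```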

If developers feel like process is slowing them down, maybe you should listen to them.

> One breaking change in say Google adwords can undo literally man years of work.

It's easy to look at a few failures and conclude that you need to add process to fix whatever went wrong. It takes much more foresight and wisdom to see that this process probably costs more man years in overhead and inflexibility than you spend fixing the occasional mistake.


"There are companies with data needs, user counts, and codebase sizes on par with Google that don't practice Google-style process, yet succeed anyway."

Let's start with "how big do you think Google's codebase is"?

Because the last time we went looking, the number of companies even close was <10, and they all pretty much have the same as Google's process.

If you really have examples, I know the engineering productivity guys would love to hear about them and talk to those folks.


> There are companies with data needs, user counts, and codebase sizes on par with Google that don't practice Google-style process, yet succeed anyway.

Which? The ones I can think of are Apple and Microsoft, and I'm pretty sure they practice Google-style process. Amazon has its own flavour of process which is heavyweight in its own way. What are you thinking of?

> Code-level concerns. You're not going to stop SQL injection by looking at some high-level design document. The same goes for RTL text layout bugs.

I notice you left out PII and the other security implications, as well as design fit and finish. Those can easily be caught at design time, especially i18n bugs. For instance, the average East Asian phrase is shorter (graphically) than the same phrase in a Western language, and that all has to be translated and dealt with.

> It's easy to say that Google's codebase represents 18 years of work. I strongly suspect that it wouldn't take so long to do starting today

That's a strawman argument. Google has written reams of code for distributed computing (Borg), continuous integration (Tap + Blaze + Forge), code review tools (Mondrian, Critique).

That's like saying that although it took a decade to design the Boeing 737 (just a pretend example), it would take less time now. That's correct, but it's down to advancements in technology and materials; what's your point?

> Look at self-driving cars: how long has Google been working on them? How long has Uber? Whose cars are serving real-world passengers today?

That's a false comparison. Right now, Uber still has to have drivers behind the wheel, whereas Google self-driving cars strive for a higher level of autonomy. Also, Google has not wanted to get into a directly customer facing role, instead looking for partners to manufacture the cars.

> These things were unavailable (at least at adequate quality levels) when Google started its design culture.

Design culture evolves. Google wrote all of its own integration systems, code review tools, and many static analysis tools. Even though they have top class systems, they still stick to the same way of doing things. That's evidence that it works, and the process is roughly where it needs to be.

> When you see that a startup (or even another > $1 billion company) can do the same damn thing Google does

What's an example? Most startups/competitors to Google seem to do about 90% of the things that Google does for one business division, leaving aside the last 10%, which is naturally the hardest 10% to do.

> If developers feel like process is slowing them down, maybe you should listen to them.

Which developers? People looking in from the outside or actual Google engineers?

> It's easy to look at a few failures and conclude that you need to add process to fix whatever went wrong.

Interesting study on checklists. http://www.nature.com/news/hospital-checklists-are-meant-to-...

If you have institutional resistance towards checklists in hospitals (or process), introducing them doesn't help. But if you actually implement them correctly, they do eliminate many common mistakes.

A lot of companies use cargo-cult process, thinking that if they follow a magical recipe, they automatically get good results. I doubt Google is one of them.


It's funny that you mention Microsoft. A friend (he has ~100 reports, transitively) at Microsoft tells me that at least on his team, the old-fashioned three-specification (design, dev, test) document triplet, each with a multi-page checklist-laden Word template, has been supplanted by a lightweight scheme that boils down to one or two paragraphs. That's real progress. Microsoft even runs successful open source projects these days and takes external contributions.

Microsoft has not collapsed. In fact, it's doing better than ever. If Microsoft of all companies can reform itself, so can Google.

> PII and other security implications

When your developers are both smart and invested in the product's success, they learn about these things on their own. Sure, they can make mistakes, but so can some damned review committee.

It's interesting to see how people rise to challenges. If there's a security committee tasked with reviewing the security implications of various changes, developers won't take security as seriously. "That's the security committee's job", they might think. But if you entrust developers with their own security, and they're high quality developers, they'll take the responsibility seriously and do a better job.

(I know I'm making a "no true Scotsman" argument, but I think there's a real qualitative difference between developers you can trust with this sort of responsibility and developers who aren't as invested.)

I don't think you can look at security/PII/whatever problems that a committee catches and conclude that those problems would have made it to production absent the committee.

> That is correct, but based on advancements on technology and materials, what's your point?

The Brooklyn Bridge was designed to be six times stronger than it needed to be for its design load. Modern bridges are only about two times stronger than they need to be. The Brooklyn Bridge needed its large safety factor because suspension bridges were not well understood at the time. With modern technology and design tools, we don't need to pay for a safety factor of six.

Imposing Google-style process in 2016 is like building every modern suspension bridge like the Brooklyn Bridge because the Brooklyn Bridge is still standing. "That's evidence that it works, and the process is roughly where it needs to be."

> For instance, the average east asian phrase is shorter (graphically) than the same phrase in a western language, and that all has to be translated and dealt with.

Pseudolocalization and dogfooding help. I'd argue that rapid iteration helps most on UIs. A/B testing and metrics beat heavyweight up-front design any day of the week.
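For what it's worth, pseudolocalization is cheap enough to run continuously rather than gate behind a design review. A toy version (my own sketch, not any particular library's API) that accents the letters and pads strings so truncation and hard-coded-ASCII bugs surface immediately in a normal dogfood build:

```python
# Toy pseudolocalizer: swap in accented glyphs and pad ~40% to simulate
# longer translations; brackets make truncated strings obvious on screen.
ACCENTS = str.maketrans("aeiouAEIOU", "àéîöûÀÉÎÖÛ")

def pseudolocalize(s: str, expansion: float = 0.4) -> str:
    padding = "~" * max(1, int(len(s) * expansion))
    return f"[{s.translate(ACCENTS)}{padding}]"

print(pseudolocalize("Save changes"))  # → [Sàvé chàngés~~~~]
```

If "Save changes" renders fine but "[Sàvé chàngés~~~~]" clips or mojibakes, you've caught a layout or encoding bug without waiting for a single real translation.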

> Design culture evolves. Google wrote all of its own integration systems, code review tools, and many static analysis tools. Even though they have top class systems, they still stick to the same way of doing things. That's evidence that it works, and the process is roughly where it needs to be.

There's an ever-increasing morale cost. How do you expect developers who have experience in process-light environments to come to Google and be happy? "Yes", nobody says, "I want to go from experimenting rapidly on my ideas to writing internal documents to convince people to maybe let me try something."

> If you actually implement [checklists] correctly, they do eliminate many common mistakes.

It's a lot easier to back out a problem diff than to remove the staph you accidentally introduced into a patient's bloodstream.

Process should be proportional to the difficulty of undoing a mistake. If a mistake is easy to undo, it should be easy to do. If a mistake is very costly to undo, it's worth investing in not making the mistake in the first place.

The vast majority of programming errors are of the "easy to undo" variety.

And by the way:

> [Uber's self-driving cars is] a false comparison. Right now, Uber still has to have drivers behind the wheel, whereas Google self-driving cars strive for a higher level of autonomy. Also, Google has not wanted to get into a directly customer facing role, instead looking for partners to manufacture the cars.

Uber recently delivered beer fully autonomously. In a truck. They're definitely planning for L5 autonomy.


> Process should be proportional to the difficulty of undoing a mistake. If a mistake is easy to undo, it should be easy to do. If a mistake is very costly to undo, it's worth investing in not making the mistake in the first place.

> The vast majority of programming errors are of the "easy to undo" variety.

At Google's scale even small issues will have widespread impact on real people. Let's say you break the ability to reply to email in Gmail for ten minutes - cumulatively that could result in hundreds of hours of lost work across all their users.


What about all the hundreds of decades of work lost because extreme risk aversion makes it impossible to add productivity features to Gmail for fear of breaking what already works?

Some people like to cower behind "Google scale" as a reason never to change anything. Not me.


Uber's cars and Otto's trucks are different platforms I believe (for now at least.) The beer delivery was a publicity stunt, but definitely impressive. However it was largely made possible by a team who had spent years at Google figuring out how to do it :)

It's definitely possible for a team to come along and catch up/overtake the Google (now Waymo) project, but I agree with jimmywanger it's not a valid comparison for the sake of this discussion. Uber is following a different path than Google focused on, and is hugely benefitting (as is the whole industry) from the work done at Google.


> Microsoft has not collapsed. In fact, it's doing better than ever. If Microsoft of all companies can reform itself, so can Google.

I don't know what you think Google's process actually looks like. A design doc is needed for any large user-facing change or large infrastructure change, and it has standard sections; you skip the ones that are not applicable.

For instance, if you're not storing user information, you skip the PII section and so forth. If you're just making a change to adwords billing, you skip the entire I18N section.

Also, the areas in which Microsoft is revitalizing itself are greenfield projects like the cloud and some other interesting hardware/software integrations. You can play fast and loose with those, as opposed to Google, which doesn't really have any greenfield products and has to support all its existing users.

> When your developers are both smart and invested in the product's success, they learn about these things on their own. Sure, they can make mistakes, but so can some damned review committee.

The review committee does this for hours a day, and they see far more cases. That's like saying that it's better for you to assess the condition of the transmission of your car, because you care more and are more invested. I'd rather have the guy who rebuilds transmissions for a living, who has seen dozens of transmissions, and is familiar with common failure modes and pitfalls.

Specialized labor does help.

> The Brooklyn Bridge was designed to be six times stronger than it needed to be for its design load.

Well, heavier-than-air flight was impossible before the 1890s without investment in materials, engines, and construction techniques. Is it easier to build an airplane now? I still fail to see your point. We're talking about something where you have to invent the tools to make the tools to make what you want to make, vs. already having the tools available.

> Pseudolocalization and dogfooding help. I'd argue that rapid iteration helps most on UIs.

A/B testing on wireframes helps and catches most of the edge cases. After you roll out to production, things get hairy.

> It's a lot easier to back out a problem diff than to remove the staph you accidentally introduced into a patient's bloodstream.... The vast majority of programming errors are of the "easy to undo" variety.

Not really, when you're working on fundamental libraries that many products depend on. That can cause issues all up and down the product stack, and you're going to cause issues for developers who rely on your code who now have to throw away months of work.

> Uber recently delivered beer fully autonomously. In a truck. They're definitely planning for L5 autonomy.

That was a publicity stunt, and it's unknown whether or not they got paid. Also, Otto was founded by Google expats, one from Google Maps and one from the Google self-driving car project. Your point is?


> Also, the areas in which Microsoft is revitalizing itself are green field projects like the cloud and some other interesting hardware/software integration

The example I have in mind is in a big legacy product. I can't get more specific without outing myself, but it's very far from greenfield.

> Specialized labor does help.

Specialization of labor can also hurt. I've found myself frustrated with security people in the past because they spend so much time thinking about security threats that they start to veto massively useful functionality on very flimsy security grounds. Broad exposure helps too.

> AB testing on wireframes helps and gets most of the edge cases. After you roll out to production things get hairy.

Why? There's no rule that says that everyone needs to see the same UI in production.

> Otto was based from Google expats, one from Google maps and one from the Google self driving car company. Your point is?

It's telling that Google's autonomous driving experts had to leave the company in order to get their work into a real live product.


> Broad exposure helps too.

It seems as though you haven't really encountered Google process in person; you've just heard stories. Security people do have a day job; they just do security on the side because they've expressed interest/aptitude.

My point remains. I'd rather have a plumber fix my plumbing or check over plumbing designs rather than an enthusiastic amateur.

> Why? There's no rule that says that everyone needs to see the same UI in production.

They don't. Google constantly runs A/B testing. Once you get it to production you've already invested the time in productionizing it.

> It's telling that Google autonomous drivers experts had to leave the company in order to get their work into a real live product.

Or that it'd be far more lucrative to be acquired than to continue working on Google X. You can't ascribe motives to their actions.


> The review committee does this for hours a day, and they see far more cases. That's like saying that it's better for you to assess the condition of the transmission of your car, because you care more and are more invested. I'd rather have the guy who rebuilds transmissions for a living, who has seen dozens of transmissions, and is familiar with common failure modes and pitfalls.

That's a really bad analogy, borderline dishonest.

In this case it's a mechanic taking his vehicle to another mechanic for service; you'd better believe the first mechanic is both more invested and more familiar with the transmission.


> Through numerous small steps, each apparently reasonable...

I think I read somewhere that, on an online shop, each step you add to the checkout process halves the number of people who complete it. If true, that is a powerful argument for what you're saying.
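If that half-remembered rule of thumb were literally true, the compounding would be brutal: n extra steps leave (1/2)^n of shoppers. A two-line sanity check:

```python
def completion_rate(extra_steps: int, drop_per_step: float = 0.5) -> float:
    # Fraction of shoppers remaining if each added step sheds drop_per_step.
    return (1 - drop_per_step) ** extra_steps

for n in range(5):
    print(n, completion_rate(n))  # 0→1.0, 1→0.5, 2→0.25, 3→0.125, 4→0.0625
```

The same geometric decay is the argument against stacking approval gates on developers: four sign-offs at even a 20% abandonment rate each kills roughly 60% of would-be changes.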

Also reminds me of Bastiat's "What is Not Seen". We don't see the developers who chose not to go through the whole process because it was too tedious.


> I think I read somewhere that, on an online shop, each step you add to the checkout process halves the number of people who complete it.

I wonder how much of that is just "more steps bad", and how much is that any step added to a checkout process beyond what is necessary for the actual sale is (a) forcing you to do something you don't care about, and (b) making it more intrusive? E.g. forcing you to create an account, validate an email address, fill out a "quick survey", or unsubscribe from their marketing spam.

There have been plenty of times that I've wanted to buy an item from an online store but have decided against it because the store wants to "create a relationship" with me instead of just selling me the damn item.


To clarify here, what kinds of changes do you think require a design document?


I don't think any specific type of change should require a design document. That framing presupposes too much hierarchy. Design documents, like code reviews and tests, ought to be helpful tools for developers, not requirements imposed from above. I've worked at companies that didn't require design documents, but occasionally wrote them anyway because I wanted feedback.

Another trigger could be a reviewer commenting, "Hey, this diff stack is getting pretty tangled. Can you write something that describes how it all fits together?"

You should trust your developers' judgement.


(to be clear, this is all made up)

Well let's put it this way, let's say that I'm a googler, I work on the android team and I want to rewrite the launcher from the ground up because I have some whiz-bang idea. This falls under my general purview of stuff that my team works on, so I grab my coworker, and for a month we hack away at it and have a good MVP. It has great improvements over the existing homescreen.

We've been working on and reviewing each other's code, and so after a month we send the big change, which touches bunches of files, out for review by all of the owners of the various affected components. One of them shoots me an email a few minutes later and says that this work is all wasted: another team has been secretly working on an updated launcher for the release of Google's new flagship phone, the Pixel. It already has some of our features, plus many others, and has been in development for 6 months already.

So now my coworker and I have wasted 2 man-months of employee time. That's tens of thousands of dollars that go poof when someone closes the pull request. And those tens of thousands of dollars could have been saved with a 2-page document and a 1-hour meeting.

The goal of these things is to not waste developer time, because you or I can't be aware of everything going on in a company, so writing a design doc allows other people to see what you're doing and

1. provide insight and feedback from their experience with similar problems

2. provide prior art from within the company

3. remind you of things you might have forgotten about

4. give you insight into how these changes will affect others

5. most importantly, give you information about other activities the company is pursuing in this direction that are related to your proposal, so that you can avoid destructive interference, and potentially get constructive interference.

Otherwise you end up with repeated work, wasted effort, and fragmentation.


This has absolutely nothing to do with design documents. As a developer you are not supposed to start working on whatever you like; you need to follow the big plan. Is it really like this at Google, with developers wasting their time writing design documents for whatever they like, which will obviously be rejected, rather than doing the work that is expected of them? I don't think it's very productive to have people working on a whim instead of following a plan in a coordinated way.


You're exactly correct. Outside of 20% time, I can't see that example being realistic (at least not at Google for an Android launcher; there are other companies where I've heard of 10+ competing internal libraries for the same problem developed by different teams).

In the case that you're asked to implement a project from 'above', they're still useful, for practically the same set of reasons (maybe minus #5, but plus 'other people who you impact can provide feedback to reduce future friction')


Coordination is important. I think you can achieve it without a formal design document and approval process. Your team could post a quick, informal message to a mailing list saying that you're going to work on a new launcher. The other launcher team could then reply and suggest getting together to talk about common plans and avoiding duplicate work.

I'm not suggesting that communication is bad. I'm objecting strenuously (perhaps stridently) to the design document as a step in a rigid process and a requirement imposed from above.

Personally, I'd rather see two teams work toward our improved hypothetical launcher than to see zero teams do that work because process imposed too high an "activation energy" on the launcher experiment.

Also, why are both teams working in isolation for a month? If both teams check in code, the checkins themselves can provide an indication that other people are working in the same area. Incidentally, it's this effect that makes me strongly dislike feature branches. It's better to develop unstable code behind a feature flag, where the code is visible even if not active, than in a feature branch, where nobody can see it.

(It's true that not all changes can be gated behind a flag, but most can be.)
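The flag-vs-branch point can be sketched in a few lines: the new code path lives in the main tree, visible to every developer, but inert until the flag flips. (The flag registry and function names here are invented for illustration; real systems usually pull flag state from a config service rather than an environment variable.)

```python
import os

# Hypothetical flag registry: the new launcher ships in the main tree,
# visible to everyone, but runs only when the flag is enabled.
FLAGS = {"new_launcher": os.environ.get("ENABLE_NEW_LAUNCHER") == "1"}

def render_new_launcher() -> str:
    return "new launcher"       # unstable work-in-progress path

def render_classic_launcher() -> str:
    return "classic launcher"   # default path, unchanged for users

def render_launcher() -> str:
    if FLAGS["new_launcher"]:
        return render_new_launcher()
    return render_classic_launcher()

assert render_launcher() == "classic launcher"  # flag off by default
FLAGS["new_launcher"] = True                    # flipped for testers only
assert render_launcher() == "new launcher"
```

Because both paths are checked in, anyone grepping the tree sees the in-progress work; a feature branch hides exactly that signal.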


>Also, why are both teams working in isolation for a month? If both teams check in code, the checkins themselves can provide an indication that other people are working in the same area. Incidentally, it's this effect that makes me strongly dislike feature branches. It's better to develop unstable code behind a feature flag, where the code is visible even if not active, than in a feature branch, where nobody can see it.

I chose my example carefully: the 'new' launcher being developed officially was part of a new, unannounced product, and therefore likely secret prior to release. On the other hand, the new version developed by the two people was a top-to-bottom revamp, which might mean an entirely new application, or something that can't easily be done behind a feature flag.

>Personally, I'd rather see two teams work toward our improved hypothetical launcher than to see zero teams do that work because process imposed too high an "activation energy" on the launcher experiment.

And again, I chose my example carefully: one team would always have created this new launcher; it was being created as part of a much larger initiative.

>I'm not suggesting that communication is bad. I'm objecting strenuously (perhaps stridently) to the design document as a step in a rigid process and a requirement imposed from above.

Indeed, but if nothing else, a design document is a formalized process for this type of communication. To be clear, my understanding is that Google also has best practices in place for objections and blockages to things proposed through design documents.

The result is that when you are planning to create a feature that is large enough that it is, shall we say, statistically likely to break things or cause someone else serious aggravation, there is, if you are following policy, a formalized process for informing the people who your change might impact, a way for them to provide feedback, and a formalized way to resolve conflicting opinions when each engineer thinks that their workflow or feature is the most important.

This goes back to the other example I gave, which was a real one: my design-documented project could have, if implemented, created a situation where, until rectified, breaking changes would have gone undetected by automated tests and instead shown up in the tests for unrelated changes. That would have been an annoying bug to track down, and would have caused some other team a lot of stress and time. They would never have been notified until final review, and I conceivably could have wasted months of effort.


> secret

If your culture is one of secrecy (Google's generally is not), then you need process to effect coordination, since nothing else can. Fortunately, good (IMHO) software companies are internally open, so there's no secrecy forcing them into heavyweight process. Process is one of the many costs of secrecy.

> formalized process

The problem with formalized anything is that it involves writing down rules. When you write down rules, you have to distill complex and subtle human interactions into essentially an algorithm for people to follow. The loss of nuance, while creating clarity, introduces inefficiency, since it forces everyone to follow the same steps even when these steps are inappropriate.

You can't write down exceptions for all the inappropriate cases (but you can for some of them). If you tried to allow for a large number of exceptions, the resulting algorithm would be too complex to follow or it would be vague enough to allow anyone to skirt the rules.

I prefer to avoid formal rulesets for human processes. In my experience, the efficiency loss arising from formal rulemaking has outweighed the gain in clarity. Maybe others have different experiences, but in mine, the higher the quality of developer you have in an organization, the fewer formal rules you need.

Applying lots of formal rules to good developers just makes them unhappy.


> internally open,

Internally open does not mean that everyone can always see everything though.

>Applying lots of formal rules to good developers just makes them unhappy.

I'm not sure I agree with this. I'm a developer, and I'm happier when there are formal rules about human code review, code style, automated testing, and minimal test coverage.

I'm happy about those things because they are a small inconvenience (if one at all) that saves me time and effort down the line, and insulates me from laziness and mistakes on the part of developers (myself included!).

I see design documents as an extension of this. The price I pay is that I occasionally need to write a document outlining my thought process w.r.t. a new (large-ish) feature I'm implementing. In exchange, I know that I will not waste my time reinventing things and, more importantly, when other people intend to make changes that will potentially negatively impact me, I will be able to provide feedback before those changes go live.


> Internally open does not mean that everyone can always see everything though.

If secrecy is rare in an organization, secrecy-induced problems in that organization are also rare.

> I'm not sure I agree with this. I'm a developer, and I'm happier when there are formal rules about human code review, code style, automated testing, and minimal test coverage.

It takes all kinds, I guess. Personally, nothing galls me more than having to follow a rigid set of rules when I know the original motivation for these rules doesn't apply to my situation. I'd make a terrible soldier.

I understand your social contract argument, but IME, the benefit to me of other people writing design documents isn't big enough to justify my having to do it. I find that subscribing to actual code reviews targeting directories that interest me gives me enough ability to see and affect (and unfortunately, sometimes block or delay) changes before they go live.


While I agree with you in principle, I have my doubts that that approach works at "Google scale".


What makes "Google scale" unique? I've never understood the argument that because Google serves ultraziggabytes of data, Google needs complex engineering practices. A program is the same program whether it runs on ten machines or ten thousand. Complexity is what matters, and while Google does solve very complex problems, other companies solve them too.


I think the point is that every company that works at this scale does more or less the same thing out of necessity.


Saying that process is a necessary side effect of growth is like saying cancer is a necessary side effect of age.

I don't think you have the causation quite right --- it's not necessity that forces the adoption of process exactly. Process is what you get by default if you don't consciously counteract natural human tendencies in management. A lot of large companies stop consciously protecting their culture, so they get the default big company culture instead. The default big company culture is ever-increasing process.


Do they? Not even governments do that – instead, they manage billions of people and trillions of dollars by having a hierarchical structure, instead of using a long legislative process for every decision.


Why do you think those two things are impossible to do at the same time? Most companies also have hierarchical structures, and most governments use some kind of bureaucratic process to approve major new projects; I'd expect it to be more bureaucratic than at most businesses.


But that’s not even the case; that’s the point. Most governments let their agencies just do things, without unreasonable bureaucracies like these.


Really?

Everything I'm familiar with has approval and review on everything in government; bureaucracy is synonymous with government to many people for exactly this reason.


How do you write design docs? Did you have to go through a course establishing some fundamentals? Did you take a writing class? Do you use specific tools? I would like to try to adopt this style of communication, but everyone on my team, including me, is a writing illiterate, and I wouldn't even know where to start.


We don't really have a class: like any writing, you just need to be aware of your goal. For a design doc, you want to say three things: (1) what you are doing/planning to do, (2) why, and (3) what alternatives you've considered and why you think they're worse.

We just use Google Docs, and it's easy to share the doc and have people add comments or propose changes. There's a template somewhere, but I don't like it much.


I find the "five questions" approach helpful. The classic questions are "Who?", "What?", "Where?", "When?", and "Why?". If you answer those questions (probably best to start with "What?") and additionally address foreseeable questions and objections (e.g., "Why do you want to use Foo instead of Bar?" "Bar doesn't work on Spam data."), you have a good design document.


> A culture that requires permissions and signoffs before work can begin is a culture that leaves products stagnant for years.

I've also seen the exact opposite being the case. A culture where everyone does whatever they're in the mood for, without running ideas by teammates who have different experience in different areas of the codebase, can leave products stagnant for years. Tech debt builds up and the size of change that anyone is comfortable making gets smaller and smaller, until large new features become unfeasible.

If the culture on the team is to mostly work on your own, without spending time designing and brainstorming up front except in rare circumstances, then nobody wants to be the only person enforcing process on themselves. You might worry that you come across as a weak engineer if you ask for a lot of feedback when nobody else does, and it does slow you down some, so you'll get less work done on individual projects than your teammates. (Your solutions might be of higher quality, but that tends to show itself as the absence of problems, or only be visible to the next person who works on your code, and those things are easy to overlook.)

So IMO it's not a matter of a presumption of incompetence vs competence, the goal is aligning incentives so that collaboration and taking the time to come up with the best solutions become the most natural way to work.

(FWIW I've never worked at Google and can't speak at all to their implementation of a design document culture).


I call that "The inmates running the asylum".


> In my experience, having worked at many large tech companies, design documents obfuscate, not enlighten. They become increasingly out-of-date as the code evolves, creating anti-documentation that makes it take longer to understand code. Yes, yes, people should update design documents as the code evolves. Everyone knows that in practice, nobody updates old design documents.

My experience has been the reverse, precisely because, as you said, no one updates old design documents.

A piece of code with a design document at least has a historical record of what the original aims of the project were, and a written rationale for why they took certain approaches.

Often you'll find a piece of code with a seemingly inane architecture and wonder "why is this so inane? Were the developers on drugs?" By reading the design document, you find out that sadly no, the water fountains were not spiked with LSD in the 70s; rather, they were working around the performance characteristics of hardware that no longer exists, and thus those baked-in assumptions had reason and merit.

Understanding the original why often illuminates the entire architecture, even if it has undergone a lot of changes, because the original skeleton still remains.

I sort of look at it as Code Archeology, or maybe literary deconstruction as applied to code.


> A piece of code with a design document at least has a historical record of what the original aims of the project were, and a written rationale for why they took certain approaches.

That information is very useful, particularly as a comment or a commit message. Storing this information in a separate unversioned document off on some enterprise management system makes it harder to find. If you put the rationale for a change in the commit message, the rationale is right there when you run blame!


And what about when your commit is part of a 5-patch series, as part of a 3-series project to bring some internal framework code up to scratch in order to start implementing a new feature? Which commit do you stick your design rationale on, and how do you ensure people will see it?

Commit messages are really good for e.g. "patch that due to this bug" or "refactor this so it's decoupled from that feature", but not quite so good for describing architectures in development. On the other hand, you can reference an issue in every commit message, and that issue can lead to the design document.


I can't comment on Google's culture, but:

>Design documents turn every feature into a half-assed, lowest-common-denominator risk-minimized shell of itself.

Sounds like a sentence written by someone who is an engineer and not a support staffer or a user, i.e. the people who have to deal with the fallout of every feature change and every engineering decision. I could just as easily substitute "feature driven design" into most of your points above.

Requiring someone to put thought into the design of a product and to subject that thought to rigorous scrutiny is, on the whole, a good thing. One that leads to products end users want to use.


> Sounds like a sentence written by someone who is an engineer and not a support staff or a user

Wow. I feel terribly offended as an engineer.

Could we not confuse "engineering" with "pushing random changes at any times that may not even pass the tests by any bro-ninja that just felt like it".

The poster has clearly never written or maintained any critical software.


I thought design documents and reporting were mandatory for 'engineering'. As are things like ethics, organizational standards, and testability.

Move fast and break things is the domain of hacking.


If it produces results, does it matter whether you call it "engineering" or "hacking"?


It depends on the result. If the result is public embarrassment of the company and the loss of millions in market worth? If the result is someone dying?


Do you have any actual evidence that a culture of design documents and signoffs produces code with fewer vulnerabilities?


How to make flawless software that goes into space => A truckload of process and design documents. https://www.fastcompany.com/28121/they-write-right-stuff

Whatever software is made, the [lack of] efforts put in design and reviews will have high impact on quality [or lack thereof].


I didn't mean it to offend, just to emphasize that though the poster may feel it affronts his "engineering dignity" (his words) to write design documents, there are others in the chain who have to deal with the fallout from that.


Ouch. I don't think I've ever met a competent engineer who would agree with that sentence. The person who posted it is obviously very junior, and I don't think I'll ever call him/her an engineer if they don't manage to mature away from that kind of prattle.


Where are the mods? Does the rule against inflammatory ad-hominem attacks apply only to those holding unpopular opinions?

Believe me, I'm the furthest thing from junior you'll ever see. I don't particularly care what you call me, but I've produced tons of value.

There's a certain type of mid-career programmer who's obsessed with "best practices" and thinks that anyone who doesn't stuff a program full of design patterns is being incompetent and irresponsible. It's a kind of "sanctimony porn". The attitude is that "if programming is hard for me, it'd better be hard for you too". It's this kind of programmer that shames other programmers for having opinions that result in the creation of simpler code.

I hope you grow out of this phase.


> Sounds like a sentence written by someone who is an engineer and not a support staff or a user

Do support staff and users sign off on design documents? "No" is the universal answer. Are you claiming that engineers aren't reasonable human beings who can take support staff and user concerns into account on their own? What makes you think the people reviewing design documents can do that?

Is it that you just trust a subset of engineers to understand the big picture? Are most engineers just drones? You shouldn't hire people who don't give enough of a shit to take the big picture into account.

I am an engineer. I am also a user. I do support my software. I've been programming for twenty years. I find that "rigorous scrutiny" hurts more often than it helps. Maybe this rigid scrutiny was more appropriate in a world with release cycles measured in years, but we don't live in that world anymore. When you ship every week, you can easily undo mistakes, and you're better off erring toward iteration.


>You shouldn't hire people who don't give enough of a shit to take the big picture into account.

At a certain point you can't. I was recently asked to implement a feature for a use case for another engineer. It required a design doc and review by a few representatives from related teams. He and I wrote the doc, and the initial review was that any solution that would fix his use case would break the testing infrastructure for practically every other developer in the building. He could write a few fewer lines of code when testing, at the cost of tests failing due to unrelated changes.

Neither he nor I was aware of the impact, and realistically couldn't have been; it was neither of our responsibilities. And because I spent an hour filling out a template document, I didn't need to waste my time finding that out the hard way.


> break the testing infrastructure for practically every other developer in the building

Why wouldn't continuous integration have caught that bug? If a diff breaks the build, it shouldn't land. If a diff causes tests to fail, it shouldn't land. If a diff lands anyway and causes problems, any developer negatively affected should be able to insta-revert the diff.
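The gate being described can be sketched as a toy pre-merge check (the names and the diff representation here are made up, not any real CI system's API):

```python
def can_land(diff: dict) -> str:
    """Toy pre-merge gate: a diff lands only if the build and tests pass."""
    if not diff.get("build_ok", False):
        return "rejected: build broken"
    if not diff.get("tests_ok", False):
        return "rejected: tests failing"
    # Anything that lands despite this and causes problems can be reverted.
    return "landed"
```
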

How does a design document help?


It helps identify potential problems / conflicts across teams before a single line of code is written (or development time committed).


I don't think so. Who looks at design documents? Your own team. If you don't yourself catch that a change will cause problems with other teams, the existence of a design document won't magically alert that team. If you do suspect that there might be a bad interaction with another team, you can alert that team with or without a design document.

So again: how does a design document requirement help?


>Who looks at design documents? Your own team.

At least where I work, no: your team, your manager, and anyone you think will be affected, and depending on the scale of the change, you inform everyone who uses your tool or works on your product so that anyone can provide comments and feedback.


You're making the assumption that other teams and other leaders won't review your docs. In my experience, they do and always come back with more questions.

> If you do suspect ...

And that's exactly it. If I suspect something then of course I can communicate it directly or redesign, but we're trying to assess impact on areas I might have no awareness of.

So let's put it in another way:

Design docs are one of many formal methods of communication in any organisation. They preserve context and change history, and form a good part of risk management. They protect you and potentially save downstream rework.


I missed this earlier; for reference, with this specific problem I was intentionally modifying part of the continuous build pipeline: specifically, I was working on a tool to autogenerate certain tests in certain contexts. This was eventually possible, but the approach first requested for solving the problem would have been bad.

Feedback from a team that worked with another tool we used in the testing process informed me of the potential for the problems. To be clear, this would have surfaced eventually anyway; it's unlikely that a PR breaking the CI system in this way would have gotten approved, but there would have been much wasted effort on my part in the interim.


> You shouldn't hire people who don't give enough of a shit to take the big picture into account.

Even if you care, you can't know the entire story. I work for a small company, tiny compared to Google, but we often run into someone proposing a change that backtracks on a strategy decided two years ago. Luckily it was encoded in a design document; otherwise, how would anyone new find out about it?

New people want to know the big picture, but the best way to do that is to write about your decisions as you are making them, and give them some sort of searchable record.

Personally, I love Github because while you don't have formal design docs---you do have a history of the entire argument for why a feature should be implemented, followed by a history of all the breaking changes it caused, and the final reversion. It's great to plunge into a codebase and see how we got here before deciding to propose something.

Have you ever been on a team and proposed something, only to hear "yes, we tried that, it failed for X, Y, Z reasons... but no, we never noted this 6 month initiative down anywhere except my head"?


> backtracks on a strategy decided 2 years ago

Why should anyone be bound today by decisions made two years ago? Maybe that strategy no longer makes sense.

I've seen it go both ways. On one hand, sometimes a new developer in an old codebase does something "against the grain" of the system due to unfamiliarity or JavaScript-induced brain damage. On the other hand, sometimes circumstances change and even good old designs become obsolete.

A culture of good taste and transmission of institutional knowledge helps preserve the good aspects of design. I don't think design documents help: they're just bytes on a disk. Unless you have people to enforce them, these documents won't do a thing. If you do have people who know what the system is supposed to look like and who shepherd changes to work with the original design, you don't need the design document.


> Why should anyone be bound today by decisions made two years ago? Maybe that strategy no longer makes sense.

Then that'd be a great time to discuss that the strategy should be changed, and move forwards as a team---not a single developer deciding to go against the grain with side effects that may be unknown.

Here's an example:

At my company it's our strategy that all file operations happen atomically and asynchronously. Even if your function was the one to create a temporary file, and you are absolutely sure it's happening locally, deleting it must be an async task handled by another worker.

Why? Because historically, small tasks start off as local only operations, but get upgraded to handle remote instances. Remote operations can fail fairly often, and we don't want the entire task chain to die because you couldn't delete a temp file on another server.

Now, to any single developer writing a small script, this feels inane, because forcing it into an async op, using our message bus toolchain, etc., will slow down the entire function by an order of magnitude. But it saves 10x more integration work 6 months from now that they don't anticipate.
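A rough sketch of that convention, with an in-process queue standing in for the real message bus (all names here are hypothetical, not the company's actual toolchain):

```python
import os
import queue
import tempfile

# In-process stand-in for the message bus; a real system would enqueue
# a task for a separate worker process.
delete_queue: "queue.Queue[str]" = queue.Queue()

def schedule_delete(path: str) -> None:
    """Enqueue the deletion instead of doing it inline, so a failing
    (possibly remote) delete can't kill the caller's task chain."""
    delete_queue.put(path)

def drain_deletions() -> int:
    """Worker body: best-effort deletes; failures are swallowed here,
    but a real worker would retry or alert."""
    handled = 0
    while not delete_queue.empty():
        path = delete_queue.get()
        try:
            os.remove(path)
        except OSError:
            pass  # retry / page someone in a real system
        handled += 1
    return handled
```

The caller returns immediately after `schedule_delete()`; the worker does the risky I/O later, which is exactly why an unreliable remote delete can no longer take the whole task chain down with it.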


You don't know everything, and even if you do, your replacement 3 years from now won't.

Engineering is, at its core, a process for solving problems. Fundamentally, you cannot solve problems without knowing what the questions are.


> When you ship every week, you can easily undo mistakes, and you're better off erring toward iteration.

Or so they thought: http://pythonsweetness.tumblr.com/post/64740079543/how-to-lo...


At a large Internet Company that I worked at, my experience was (seemingly?) quite unlike yours.

While writing a CEP, my gut reaction was like yours, "this is a waste of time, code is art, and I am programmer Picasso," etc. But since my manager and my technical leads at the time were very good, I bit my tongue and did the work of writing up a CEP anyway. In that environment, it didn't take that long to write a CEP. It was about 2 pages. And since I was proposing to change how a critical piece of infrastructure worked, it was really important for the oncall people to be on board, and it had to make sense, and it was important to identify all the failure points in advance, etc. From the egocentric perspective, I'm certain that it was much less work for me to write those 2 pages than it would have been to explain to 20 different people what I'm doing and why, either at the water cooler or during code review.

Trust is often earned and not given. Your coworkers may not know you or the quality of your work. Under the right conditions, I don't think being asked to write a CEP is being asked to dilute your vision; it's merely being asked to define it, and describe how it fits in with how things already work. If that's antithetical to you, then you need to work at or found a small company, where everyone is most concerned (hopefully) with making something work, rather than trying to make something that is already working and already making money better.

I have some points of agreement with you, although I wholeheartedly disagree with the conclusions you make.

"Everyone knows that in practice, nobody updates design documents after the fact." Even if this is completely true (and it isn't -- I have written and read many documents that closely follow current practice) does that make it better to not try?

"Sometimes the worth of code isn't apparent until it's made." Does this mean that it is too hard to explain why it's worth doing?

"These questions you highlight --- Why are you making this change? What impact will it have? --- can be asked during review of actual code." You're right, these questions will certainly be asked -- in which case, you can link them to the CEP which will probably answer their questions plus other ones that they didn't even think of to ask.

I can't disagree more that code review can replace a CEP.


I think it really depends on the project, the community, etc. Some projects seem to be quite successful with design documents (Go, Python). Some make them the place where changes go to die unless you're part of the core team and can poke the right people personally to review them (OpenStack). The culture is everything.

But I think you're missing some very practical things here:

> Why are you making this change? What impact will it have? --- can be asked during review of actual code.

Yes, and they're going to be asked every single time. And if the reviewers disagree with the impact, you'll have to either drop the change or rewrite it. So why not ask first?

> There is no need to build a speedbump, not if you trust your people.

"your people" works on a small scale with people you work with continuously. It doesn't work on the internet when someone called "fdsfsaa" submits the code to your project and you hear of them for the first time.

> Most changes don't need this process.

Depends on the stage of development, goals, etc. Some projects will get bugfixes and maintenance; some will be constantly changing. You can't generalise.


It sounds like you have some experience dealing with serious bureaucratic red tape manifesting itself in design docs.

I don't know if design documents need to be a huge thing with stakeholder sign-off and approval processes necessarily -- maybe when you're Google size. At my (small) company, we use something similar (we simply call them specs), and it's more a way to express your concept than a way to get sign-off. Sometimes just spelling out an idea helps you validate it.

Really, you should be able to defend your work against a certain amount of criticism if you're writing code that someone (and probably not you, eventually) will have to maintain. Bad changes can be backed out if caught early, but if left, they find ways to creep into other areas and cement themselves when they are built upon.

I don't disagree with your main point though, but I do believe there is a balance.


> A culture that requires permissions and signoffs before work can begin is a culture that leaves products stagnant for years.

I don't see such a culture. I often create one or more prototypes as part of my design. One of those might become the final result, or I might throw away everything. As long as that's understood, all is well.

The way I think about it is this: don't do work you aren't willing to throw away until earlier steps have been reviewed. How much work you're willing to throw away is a personal preference. Your design reviewer(s) might say "did you consider this other way that has these advantages?" You shouldn't reply with "No, and I've invested too much time to consider other approaches now. Stop holding me up and lgtm already." No one wants to work with someone like that. Your argument should be based on what's best. What's already done should only be considered if neither approach is (believed to be) significantly better.


>Requiring permission to do work is the enemy of progress and engineering dignity. It creates a presumption of incompetence and an atmosphere of low trust that punishes people who want to push the envelope of what's possible.

It's not permission to do work; it's permission to merge the results of the work into master. Those are different.

>Google's design document culture is bad.

Then why do we have JEPs, PEPs, and RFCs in perhaps every other major project out there?


Counterexample: the Linux kernel.


Which leads to problems: see systemd.

Also, to be clear, Linux is relatively small compared to Google's or Microsoft's codebase, or indeed many corporations' codebases. Someone could conceivably read the entire Linux kernel codebase. That's not true for BigCorp.



Right, a single book has on the order of 400-500 pages; longer ones have more, but we'll take the average. An average book has around 30 lines of text per page, so 13,500 lines per book. That makes the kernel ~1,000 books, which is a lot of books, but avid readers do read 50+ books per year (my father probably does close to double this). That makes reading the entire source absolutely possible, on the order of tens of years, which is a long time, but then, the kernel has been around for what, 25 years now? So there are a number of maintainers who have been around long enough to have read through the entire kernel.

Compare that to google, where[1] there are almost as many unique source files as the kernel has lines (though to be fair, many are autogenerated).

[1]: http://cacm.acm.org/magazines/2016/7/204032-why-google-store...


Which has succeeded in spite of being a hot, churning mess.


Design, functionality, and test specifications are very important. But the process must be monitored and be very flexible.

I'm not familiar with Google's system, but in other big corporations the problem is that you end up with a system which does not allow exceptions -- so engineers end up having to write 10 different documents to make a small change that could be explained on one page in a way everybody would get: support, testing, doc writers, etc.

And these documents end up being written so that management is happy, not the actual intended audience: testers, support, maintenance, documentation writers, etc. So the documents end up being useless.

The worst result is that engineers who do not write good code but write good essays end up with promotions, eventually destroying the entire product (not their fault -- they're just not talented programmers).


I think the pathologies you mention are inevitable once it becomes acceptable to address technical problems by adding process like mandatory design document signoff. The only way to avoid these pathologies is to take a hard stand against process.


  It creates a presumption of incompetence 
No, it acknowledges the reality that people, including you and me, don't know what is good for them.

Being forced to come up with design documents is very similar to forcing doctors to use checklists. They complained, and still complain, that they are professionals and don't need the bureaucracy and 'assumption of incompetence'. But the numbers speak for themselves: there are much fewer medical errors when they are forced to use simple checklists.

My colleagues are all competent, yet they produce much better work if they are forced to first come up with a design document.


> Code is just code. Bad changes can be backed out. It's much better to move fast and iterate quickly than to create an illusion of care and add friction to every aspect of the development process. Up-front design of software just does not work. If it did, waterfall project planning would be successful.

Bad changes can be backed out, but in an infrastructure of any size, this isn't a trivial task. I've worked in a million-line codebase with multiple separate deployments, and in that case, design documents were much cheaper to create and maintain than a rollback or even writing working code and then discarding it. In an infrastructure of Google's age and size, the trade-off is even more clearly in favor of up-front design.

You probably think the "move fast and break things" ideology you're espousing is agile, but it's not. It's just another plan, and agile is about responding to change over following a plan. I hope that if you ever work on a project of Google's size that you can adapt.


I agree with what you're saying, but what you're saying is not how it works at Google. At least the teams I've been at. Design docs aren't a prerequisite to start working, and in many cases a design is nonsensical if you haven't actually at least prototyped what you want. It's just a tool to help you be comprehensive when you want to decide between alternatives, and let other eyeballs help you decide.

"this will never work", without an actual argument, isn't Googley :)

A big design doc is a liability for sure, in the same way as code is.


I can tell you've never dealt with one of those trivial backouts on a product that moves a hundred billion dollars a day and is a key component of the entire US economy. Or a product where a bad software deployment can actually kill people.

Process is the scar tissue of the enterprise. Those scars are there because the enterprise was wounded. Lots of process, many scars.


I can choose not to work in an environment riddled with "scar tissue". Instead, I can go work at a startup and eat that enterprise's lunch with a tenth of the budget. Unfortunately, thanks to inflexible and sanctimonious attitudes some programmers adopt about what is and is not "responsible" engineering, the only way to change practices is to beat the old practices in the marketplace.


You're welcome to try eating Google's lunch, since their process is your favorite example.

But try tackling banking, or insurance, or transportation logistics, or any other big-boy problems with that attitude. You won't last long.


Wait, you don't spec out project changes? You just start writing code and attempt to figure it out as you go along? That seems like a recipe for disaster, or at least a lot of wasted effort. How do you even quantify that you've done something useful if you haven't set a goal that everyone can agree on?


Talking about design documents, I think the internet would be a mess without RFCs (https://en.wikipedia.org/wiki/Request_for_Comments), which I think are formal design documents...


RFCs are usually protocol specifications. Specifications are usually intended to facilitate interoperability. They document protocols or grammars or some other artifact. Specifications need to be well-written and precise.

The kind of design document that I frequently find superfluous isn't a specification of some protocol, but a prose description of the code one intends to write.

In concrete terms, an RFC might describe TCP header flags and the TCP state machine, but it'd be silent on the Linux kernel's sk_buff structure. A design document would describe sk_buff in detail.


> "It's much better to move fast and iterate quickly than to create an illusion of care and add friction to every aspect of the development process."

This!


This attitude strikes me as very SV/HN and while I can appreciate certain elements here, the answer is the typical one - "It depends."

Rather than regurgitate what most people here already said, let me list a few programming projects, domains, and tasks where at least thinking about design if not writing design documents or spending days, weeks, or months figuring it all out is worthwhile.

* Programming Languages

* Databases

* Operating Systems

* Medical Devices

* Safety Equipment

* Streaming containers/formats

* Encryption

* Security

* Manufacturing/Robotics

* Aerospace / Space

* App Dev Frameworks

* Game Engines

I could go on.

The point here is that there are plenty of things where thinking about it up front is beneficial, if not required, especially if some combination of (but not limited to) the following is true:

* Lives are at stake

* Changing it later would be hard (programming languages are an egregious offender, I won't name names)

* Customer adoption will completely derail or forbid architectural changes

* Fixing it will require essentially doing it again from scratch

* Changes will force the creation of patches that will incrementally kill the project or slow future development

Frankly, I think we have too many things that are poorly designed. Most projects I see in nearly any domain are mostly set in stone once time and money are added to the mix. Everyone talks about redoing or fixing things, but it rarely happens except for minor changes. As projects scale up, few people can afford to constantly back out lots of changes and rearchitect everything. Those that do usually fail or don't get a good ROI, and those that don't change fail anyway.

I've worked with all kinds of people and though there are people I have great admiration for, I can safely say that 99% of them are idiots and have no business being programmers. I know it sounds harsh, but I've been doing this a long time and have worked with all kinds of people. Too often I see the programmer's equivalent of an illiterate child that gets pushed through high school. So no, I don't trust people to do the right thing, I merely trust most people I work with to not act maliciously. Most of all, I don't trust myself. As the progression goes as a programmer - your code sucks -> my code sucks -> all code sucks -> my code sucks but I'll live with it, hope it is better than most, and ask people smarter than me for help.

Most better developers I know do in fact write some form of design documents, even if it's just notes and justification for why X or Y won't work, but Z "might" work. Many also take a lot of time to think about something before writing any code, but once they do, they actually finish much quicker with fewer bugs than the young programmers who want to "move fast." Of course none of this is universal, and as I said, it all just "depends." What do I know?


> 99% of them are idiots and have no business being programmers

I feel the same way. If you have a group of idiots, you need process as a harm reduction measure. A very high contributor bar is a prerequisite for a process-light environment, because if you can't trust people to do the right thing, you need a system to force them into a conservative approximation of the right thing, and this system is called process.


So what you're proposing is that every company hire only the best?

That's what they all think they're doing. Math doesn't work that way.


Not sure what you are replying to, but I think at least parent is saying because people aren't as good as they should be, you put processes in place. I don't think anyone is saying what you stated. Most companies might believe they are hiring the best, but their coworkers are the ones that know otherwise and do things like use design documents to give some structure where needed.

In other words, you take measures to catch errors, mitigate failures, and protect yourself rather than say "go code and push to production everyone, we can just roll back!" That might work as stated in some domains, but not others.


> Code is just code. Bad changes can be backed out.

Not if they have already been shipped.


As a less formal version of PEP/design docs, I always open an issue on Github projects proposing the changes that I would submit in a PR before I do the work, and end by asking if there's interest in a PR that implements those changes. That requires very little effort and avoids a lot of wasted time on both sides.


This, unless my PR is addressing an already open issue. For larger ideas/changes that may be difficult to explain, I may open an initial PR, but make it clear that it's for illustrative purposes and that I don't expect it to be accepted or to remain open.

On occasion I've saved myself a whole lot of time when one of the maintainers agrees with the proposal and reveals they're working on something very similar already.


Google isn't an engineering paragon anymore, it's too large.

I speak from what I've seen in how they handle their development & releases for Angular 2, which is a fairly large project with an equally large community.

Take a look at this release candidate, RC.5, which sits in between other release candidates. It has over 100 breaking changes along with new features:

https://github.com/angular/angular/blob/master/CHANGELOG.md#...

This goes against practically all definitions of what a release candidate should be: https://en.wikipedia.org/wiki/Software_release_life_cycle#Re... .

So this neatly painted blanket statement that 'at Google we do x, y, z' for development, new features, and reviewers isn't true. There are very messy projects and development practices, practically like at any other large company.


Also, don’t send requests out of the blue. The original maintainer has to know that you’re working on something. One reason is that your changes might collide spectacularly with other planned changes you weren’t aware of. Another reason is that the maintainer might say “no” to the entire idea, much less the implementation, and save you time.

The mere creation of a fork isn’t a sufficient signal, either; the project maintainer isn’t going to treat that as a sign that you’re actually working on something. (There seem to be an insane number of forks out there that are created and never changed again, apparently used to pad résumés by having important-sounding projects listed on user profiles.)


I think a better way to put this is "don't send requests out of the blue if you care about them getting merged".

I often hack on software that I use to make it do something I need. I then try to upstream the code, if I think that it's something others may want too. I don't particularly care about it being merged -- my attitude to this is "Here's something I found useful, if other people think it's good please take it". I'm willing to put some effort into cleaning up the patch to make it submission-worthy, but if it doesn't get merged, it doesn't get merged. I'll keep it up in a fork, and that's about it.

I don't care about it getting upstreamed enough that I will open a dialogue before I start to work on it. It's a feature I want, and I will be working on it regardless of how the dialogue goes down. The only way a dialogue can help me is by giving me implementation advice, but quite often I've already figured out a way to do it which is sufficient for my purposes (hacky or otherwise).

If there's a feature that you actually care about existing in upstream, then you should totally open an issue first and discuss it.


Why not?

I see a problem that I can fix, I do that, and then send it back to the original maintainer, making sure that I'm not breaking tests if they exist, and providing a good description of the change. Then it's up to the maintainer to either merge it, or throw it away.

Of course, I don't have any problem with getting my PRs closed, I'm not the one who would end up maintaining the code.


> I see a problem that I can fix...

Remember, they said "out of the blue" which means "without prior contact to discuss doing so."

This exact reason was explained in the parent post: maybe the maintainers and project owners don't see it as a problem or are already working on something.

It's not just about being a machine that can crank out fixes to the individual issues in software wherever you see them. It's about participating in a social project intended to fix a larger scale problem.

> Of course, I don't have any problem with getting my PRs closed, I'm not the one who would end up maintaining the code.

A reason alone not to send PR's "out of the blue". If you're just looking to pump and dump, I'd rather not involve you in the process, only to have to do work to fix/rm code down the road.

But I mean, above all, it's your time to waste.


The post calls out major changes as deserving a chat first. My primary workflow that results in PRs is: 1) I find an interesting project, 2) I find something it doesn't do the way I want (a bug, or a config tweak), 3) I patch my fork to fix that, 4) I open up a PR in case the maintainer does think it's worth merging into upstream.

What's the upside of me opening an issue first to chat about it? The maintainer still has to burn the time to think about it, but without seeing the code that I'm proposing, which is already written because I already needed it to scratch my itch.

If they think it's worth of merging, woo, we merge it. If they want the problem fixed a different way, cool, one of us writes the PR that makes that happen. If they don't want the change, I keep using my fork.


The post in the comments was the one I was referring to. It says verbatim not to make PRs without a ping first. It says nothing of major changes.

You forget maintainers aren't computers. While your logic is correct, social norms for humans are different.

You have voices describing a human view and interest in a certain way. Whether you agree is irrelevant; the project maintainer has no obligation to listen.

It's not all about you and the workflow that makes most sense from your office chair.


> Remember, they said "out of the blue" which means "without prior contact to discuss doing so."

If I do contact the maintainer before I do any change, that means I'm making a commitment to do that change, which is not something I want to do.

> If you're just looking to pump and dump, I'd rather not involve you in the process, only to have to do work to fix/rm code down the road.

I think you should always assume with any PR, that it's a pump and dump, unless you have a history with whoever raised the PR.

> It's your time to waste.

I don't think it's a waste, it's an issue that I'm facing (or a feature that I want), so writing the code is for my own need, I'm sharing it back in case other people want it.


> If I do contact the maintainer before I do any change, that means I'm making a commitment to do that change, which is not something I want to do.

There doesn't have to be a commitment to do anything. You are opening an issue and starting a dialogue. If you then disappear, someone else can come along later and implement/fix it with the benefit of the original discussion.


Padding résumés is a bit cynical; there are many good reasons to make a fork for oneself, like ensuring you have the code you depend on in the event the project is deleted.


I've also seen people who don't understand how this stuff works, and fork projects before cloning them because they think that's the process. A relatively small number of clueless people could result in a lot of pointless forks.


The big button does say "fork me on github" after all.


Indeed, if you're not quite sure what to do, then that would be an obvious one to try. And it'll work in the end, so you won't necessarily change afterwards.


How are you supposed to do it otherwise? Where else are you going to push your commits to? It's silly to clone, fiddle with git config, make a new git repo on github/lab and then push to your new 'manual fork'. Just use the fork button.


99% of the time, when I clone an open source repository, I just want to use it, not make new commits to it.

Obviously, if you want your own copy to commit to, then use the fork button. That's what it's for. But lots of people use the fork button even when their use case is read-only.


I sometimes use the fork button just to get a copy I may or may not play with over the weekend.


How does that help? Are you just using the presence of the fork in your account as a sort of bookmark?


Yes, a complete and fully featured bookmark that won't disappear. It costs me absolutely nothing, and is ready to go when and if I need it. Why wouldn't I do that if I could?


Yeah, this is what I do, too. Also keep in mind, though, that you'll need to exfil your fork if you think it might disappear at all from Github. They yank the original repo and all of its forks for example if they get a DMCA notice. Years ago, I set up automation that pulls repos that I've forked, so for me, the act of forking ensures I get a copy of the code imported into my personal Gitlab instance on my home LAN.
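For what it's worth, the core of that kind of backup job is small. Here's a hedged sketch (USERNAME is a placeholder, and note that this GitHub API call lists all of a user's repositories; a real script would filter on "fork": true and handle pagination rather than naively grepping JSON):

```shell
# List a user's repos via the GitHub API, then mirror each one locally.
curl -s "https://api.github.com/users/USERNAME/repos?per_page=100" \
  | grep -o '"clone_url": *"[^"]*"' | cut -d '"' -f 4 \
  | while read -r url; do
      name=$(basename "$url" .git)
      if [ -d "$name.git" ]; then
        git -C "$name.git" remote update   # refresh an existing mirror
      else
        git clone --mirror "$url"          # first-time bare mirror clone
      fi
    done
```

A `--mirror` clone keeps all refs, so the backup survives even if the GitHub copy is yanked.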


It makes perfect sense. This isn't what I've seen some people do, though.


I use stars for this, but I guess one could use forks as kind of "super stars".


Yes, I'll almost certainly star a project if I forked it. I certainly star way more than I fork, I assume most people would do the same.


Presumably you mean: "don't send requests [for new features] out of the blue"?

I would add: if you do, don't expect the maintainer to merge it. Once it's merged, it becomes their technical debt.

Nonetheless, as a maintainer I always appreciate a pull-request, and as a developer I might fork/apply non-merged patches for projects where I needed that feature as well. In some cases, since they saved me a lot of time, I might fix their abandoned/closed patch and re-submit it upstream.


Why not send requests out of the blue? This happens to me all the time and I don't mind. It means someone cares enough to try to fix their problems instead of dumping it on me.


Right, I love it when people do work for me. It's a bit different in larger projects, but on small side projects, it's a lot better to get people more involved rather than less so.


The problem is on the other side. If you don't want the push, the sender just wasted some work.


Not necessarily. They obviously need that change so it wasn't wasted. However they have to maintain the change themselves if it isn't merged.


How do you know they obviously need the change by making a decision away from the project management discussions?

Perhaps a solution has been discussed and is incoming from a regular developer?

They could deprecate the module where the change is "obviously" needed in a larger fix.

Frame it a different way: You're basing the needs of a project on your view of it, which, minus discussing things with project mgmt beforehand, may be incomplete.


You completely misunderstood who was needing the fix. In this case, it was the person who spent time fixing it. They likely fixed it because they need the fix.

Doesn't matter if the entire module becomes deprecated. They'll stay back with their fix (likely). Doesn't matter if another solution is inbound, they needed the fix now and not when a patch lands.

And since they already took the time to fix it - little time is lost sending a PR, regardless if it gets merged or not. If it is merged? Sweet you helped the project out (most likely). If not? Well a small bummer that you'll need to maintain your own patch(es).


Actually, sometimes people push changes because they intellectually think it's a better architecture or public API, but they can't survive as a fork.

They need the frequent bug fixes and upgrades from the mainline, so they really do waste their time if their changeset is not approved.


This is an odd way of thinking about it. As a user of software, I like fixing it for my needs but I'd rather make the change in a way that the maintainer will accept, so that I don't have to maintain my patch. That means I'll tend to open an issue first, explain that I'm willing to submit a PR, and see if the maintainer wants to give me guidance.


I think it depends a lot on context. If I am looking to contribute for the sole purpose of improving the OSS project, then I follow your method. If I am using an OSS project in a larger commercial project and I need a bugfix or feature now, I just build it. If I think it may be useful to the OSS project, then I will contribute back with the understanding that it may not be merged.


Given the scenario that I am using open-source software X, and that I have made changes to the software to suit my requirements, and that I believe those changes might be useful to others, I can either:

1. Contribute those changes up-stream, by sending a pull request.

2. Publicise the changes, by keeping my fork, and/or talking about it in a blog post.

3. Keep quiet, and say nothing.

4. Send an email to the original developer, suggesting I may have some useful changes, and asking whether I should send a PR.

If I start working on changes without notifying the original maintainer, well, I might do work that's useless, that has no value, that isn't sustainable. But that's my loss.

If I send a PR, the net loss is the maintainer's time to review my PR (if they choose to).

Many open-source contributions stem from sending PRs out of the blue. You're right to say that it can be an inefficient mechanism in some circumstances; it's just that a lot of developers are OK with that.


If you've already made the changes, then by all means, just send a PR. The worst thing that can happen is that it won't get merged.

If you're considering doing some work, please talk to the maintainer first. Otherwise it might lead to unhappiness all around, because, as a maintainer, I don't like turning down all this work you've done for free any more than you do, but there's not much choice when it's a net negative (for any of the reasons in the article).


Exactly. Too often I get a large PR that I call a 'code dump'. At least give a couple paragraphs of explanation behind the changes. Sometimes a conversation can start in a PR, but that results in merged code more rarely than if there was an issue first.


Yeah, I semi-recently had to fork a library, make some big changes to get it working for us, then made an issue with the main repo basically saying "Hey, I needed this and don't have time to do it properly. Here is what I did, and I ham-fistedly ripped out everything I'm not using. Look at these few files for an example of the core change that's needed. I'm willing to help work on a real solution later, but I can't right now."

The maintainer saw what I did, and was able to easily make the change so it conformed to their style, their architecture, and they worked with me later so I could make the doc changes.

I felt like that worked magnitudes better than when someone makes a big PR with a potentially controversial change and lets the maintainer decide what to do with it.


  as a maintainer, I don't like turning down all this work you've done for free
I empathise with that perspective, and I appreciate any work that an open-source maintainer does to contribute to a project, so thank you.

I must confess, I often submit patches without prior warning or discussion. They almost always come out of client work: I've been asked to deliver a requirement, so I do what I have to do to deliver that.

When I have to modify upstream code, I know that my use-case might not be appropriate for everyone (so I try to explain that in my comments). But even if I think it's a patch that isn't going to be accepted, I generally post it anyway: partly for the few people who might share my use-case, but also for me, in my next project, where I may run into the same problem again.

Sometimes my PRs are nothing more than a note-to-self. A lot of the time that's been dictated by client policy: projects where I could only modify third-party code if there was a published upstream patch for the changes. So if I have to modify upstream code, I have to record a patch/PR upstream, to comply with client policy. But I do understand that the answer to the PR may well be "no".


That's pretty much the best scenario, I think. If you won't be offended by the fact that I may not consider the PR appropriate for the codebase, I appreciate the time you took to post it for consideration (and thank you!).


What on earth are you talking about? I pull a project into my codebase, find it has bugs or missing features, so I fork it, make my changes, and PR.

If I fork, write the code and follow the outlined contribution guidelines (which often do not include "ask for permission") then I'm doing what I should in the OSS world.


I work on many things for software projects for my own personal gain. I don't particularly care if someone is planning to do something similar in the future. I need this thing now, for whatever project I'm currently doing.

I implement it, and sometimes I may send a PR to see if the maintainer wants it. I don't particularly care if it gets accepted or not. However I think giving them the option to pull if they want it, is a good idea.


At the same time, sometimes, the best argument for a big new idea is _code_ showing that it works in reality. Obviously, if you're going to do a major refactor on your own, you're risking that work being thrown away. So, it's certainly a risk.


Wow, managing over 160 projects? I can imagine that he has to close quickly.

At GitLab we have a written down definition of done so people know what should be in their merge request, see https://gitlab.com/gitlab-org/gitlab-ce/blob/master/CONTRIBU...

And our merge request coaches try to get people over the finish line instead of closing. But those are full-time people on a single project. Maintaining 160 projects is a whole different ballgame.


I'll _usually_ leave it open if it looks interesting to me, passes tests, and I know I'll get back to it in my next round of reviews (every quarter or half year, I spend an evening or two of my dedicated OSS time reviewing each project's open issues and PRs to clean things up).

I try to at least indicate if it's close to ready for merge or if it will need a bit more work, and don't often immediately close a PR.


At GitLab I think our policy is to leave feedback too, even when we close immediately. Of course we have more resources than one volunteer running 160 projects.


I know it's not best practice, but I leave them open. For years.

They may be fixable, they may be useful to someone. I've no need to reject them unless I really think they're a bad idea.

I'm sure this can be frustrating to users and contributors, but I also see it as a way of encouraging forks. "I haven't had a chance to review this, but you might try PR #NN..."

The most useful ones get replaced by better versions by other people, and eventually merged.


That depends on whether you respond with a comment indicating it's not likely to be merged. Then it is understandable; otherwise it is just rude and confusing.


Well, I usually say that it needs tests, and in my crowd that's equivalent. :-/


This strategy is OK as long as you explicitly indicate in the issue that you might not get around to reviewing soon, and that people should go ahead and fork.

I've seen too many folks who just leave PRs hanging which leads to frustration. People will remember and think twice about contributing to anything with your name/id in it.


I have some PRs hanging around that are just translation fixes. How the heck can a maintainer not merge some translation fixes? For years...

About every quarter I write something like "is there something wrong with this PR?" "can I do something else to get this merged".

Usually without reaction.


I'm actually quite bad for this. I think this year I might make it a new years resolution to retroactively apologise and add comments about forking, and try to make sure any future PRs get an answer one way or another.


Yeah. But that's not the way PRs and Git are intended to work.


Git is intended to work over email.

How do you close an email PR?


You reply to the email saying 'sorry, but we can't accept your code because...'. (The shortest way to do this is to just reply saying 'NAK', but I agree with Rusty Russell that nak-mails are generally unfriendly and a bad idea: http://ozlabs.org/~rusty/index.cgi/tech/2007-05-04.html)
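For anyone who hasn't seen the email flow, a rough sketch of both sides (the addresses and file names are made up, and git-send-email is often packaged separately from core git):

```shell
# Contributor side: turn the last two commits into mail-ready patches
# and send them to the maintainer or list.
git format-patch -2 --cover-letter -o outgoing/
git send-email --to=maintainer@example.org outgoing/*.patch

# Maintainer side: review by replying inline to the mail; if accepted,
# apply the saved series with git-am.
git am series.mbox
```

The "close" is just a reply on the thread; nothing in the repository changes until the maintainer runs `git am`.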


I also maintain a great number of projects (perhaps more) and generally I give feedback on why a PR is unacceptable and leave it open until it's resolved. Sometimes I'll close it a year or two later.


Definitely a valid approach. Usually people digging for historic non-merged requests will still find the closed requests either in Google or GitHub search, so I close them by default just to keep open PRs more focused.


You know what would be cool? If I could create a fork of the project I was using. Then I write a feature I need in that project. It becomes a PR, but then the fork is also automatically (if possible) updated whenever the main branch is updated. If it can't be updated automatically, you are notified to update your fork against the upstream changes. This would have many benefits, including easy testing of PRs, forks that don't go stale, and overall helping close the loop on open PRs.
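A rough manual version of that loop, assuming your fork has a remote named "upstream" pointing at the original project (the remote and branch names are placeholders):

```shell
# Pull in upstream's latest commits and replay your work on top;
# the rebase stops on conflicts, which is effectively the "you are
# notified" step. Then update the published fork.
git fetch upstream
git rebase upstream/master
git push --force-with-lease origin master
```

The wish is for the hosting platform to run this (or the merge equivalent) for you, and only ping you when the rebase can't complete cleanly.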


I don't know about "automatically", since all the hashes would change, but I definitely do wish that Github had better tools for managing long term forks, especially web workflows for:

- rebase this branch on upstream

- resolve merge/rebase conflicts

- make my master be upstream's master with PRs X, Y, and Z applied to it in that order.

I find the last one in particular to be awful, where I'm iterating on multiple PRs with an upstream, and keeping them separate for the sake of review, but wanting to test as a group to validate the end-to-end functionality. It can be a huge hassle cherry picking and rebasing commits between branches.
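As a sketch, the manual version of "my master = upstream's master plus my in-flight PRs" looks something like this ("upstream", "integration", "feature-1", and "feature-2" are placeholder names):

```shell
# Rebuild a local integration branch as upstream's master plus two
# in-review feature branches, applied in order.
git fetch upstream
git checkout -B integration upstream/master
git merge --no-ff --no-edit feature-1
git merge --no-ff --no-edit feature-2
# Any time either feature branch is rewritten, the whole integration
# branch has to be rebuilt from scratch - the hassle described above.
```

A web workflow for this would only need the ordered list of PRs; the rebuild itself is mechanical.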


Have you considered just constantly rebasing as you pull? This works well for private forks, but it may also be doable for public forks if you explain what's going on in the Readme. Just an idea. I know it goes against everything we learn about Git, but perhaps it's not so bad in this particular situation.


Yeah, I mean that's basically what you do, but it becomes a huge pain if you're making more than one contribution upstream. You have feature-branch-1, feature-branch-2, and then the upstream master with those two branches merged into it. When you update either one of the two PRs, you need to rewind the upstream master and reapply the two merges onto it.

Maybe that doesn't sound so bad, but it feels super awkward to me.


Ah, I see. Makes more sense when I reread it now. I'm just not juggling multiple patches at a time like this, but it's a valid workflow.


> resolve merge/rebase conflicts

This was added to GitHub rather recently.


One thing that I find different about GitHub vs. previously (e.g. on SourceForge), when you had to sort of sign up to be part of the group to propose a change, is that people feel a lot more free to suggest out-of-the-blue, quite impactful but ultimately rather superficial changes to a project. They argue and argue to get these changes accepted, and then disappear.

On several projects I've been on, I get issues or pull requests proposing to change the entire build system of a project. As you know, for C/C++ projects, the build system can be non-trivial, and maybe many years have gone into getting it to work well. And as things change, we adapt it. But as soon as it's not the flavour of the week, you get github requests suggesting to change to a completely different one, to suit some or other system's needs.

A tweak here, a tweak there, or an entire overhaul being proposed from people who haven't contributed to the actual code base at all. These infrastructure "suggestions" from people who aren't invested in the project but love to play with scripts built up "around" the code get very annoying. I don't know what the difference is exactly but it didn't happen with such frequency when things were more oriented around mailing lists.

I've now got 3 projects that have at least two build systems each, because of random people's preferences. That is a lot of extra work to maintain, and it's orthogonal to the actual project source code. I've started closing PRs that make infrastructural changes that I don't want to be responsible for, unless I can get the submitter to promise he'll be around for a while to maintain it. I've also started forcing people to put such changes in subfolders so that it's clear which one is the "supported" system. And I haven't shied from "assigning" subsequent bugs back to the original PR submitter. But sometimes that doesn't even elicit a response.

People: if you are going to suggest switching a project to a completely different build system, and then disappear and not promise to maintain said system, please think twice about changing something just because it doesn't suit your preferences of the week.


Between the responses here and on the more recent Chrome for Business posting, I find myself wondering if there is an ever-widening split between the "push to prod" web dev mentality and the "classic software" mentality.


I think there's always been a "just ship it" and "let's make sure it's well tested" split. I find that there's also the cowboy coders, who tend to create nasty problems constantly pushing to production and breaking things for users, and the folks who push to production often but make sure what they've pushed is well covered by unit and functional tests, as well as having been properly designed and signed off before starting work at all. The former seem to have commented a lot on this article, which I find quite sad if it's a growing trend :(


I feel it is a growing trend as the net makes it easier and easier to go "release now, patch later".

Except that rather than patch later it gets left in place. And then rewritten from scratch in whatever is the buzzword language of the day some years down the road as the guard changes.


This article gives a nice view for someone like myself who hasn't had that much experience with OSS development. While I'm kinda familiar with CI systems and the concept of coverage, could someone explain to me what the author means by "happy path" in coverage? Is that the most used path in standard behaviour?


> Is that considered the most used path in standard behaviour?

Yes, basically. If you're building an application that allows people to submit a contact form, then the happy path would be something like:

  1. Load form
  2. Verify the correct fields are there
  3. Fill out form with valid data
  4. Submit form
  5. Verify submission went through correctly
The happy path is a minimum viable test to make sure things are working. For completeness, you'd also want to make sure that input is sanitized, invalid input (e.g. non-functional email addresses) causes a form submission to fail, and layout looks correct (e.g. CSS styles applied).
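The five steps above can be sketched as a minimal happy-path test. The form handler here is a hypothetical stand-in so the example is self-contained; a real test would drive the actual application (via a browser driver or HTTP client):

```python
def submit_contact_form(fields):
    """Hypothetical handler: check required fields, then accept the form."""
    required = {"name", "email", "message"}
    missing = required - fields.keys()
    if missing:
        return {"ok": False, "error": "missing: " + ", ".join(sorted(missing))}
    if "@" not in fields["email"]:
        return {"ok": False, "error": "invalid email"}
    return {"ok": True}

def test_happy_path():
    # Steps 1-3: "load" the form and fill it out with valid data
    data = {"name": "Ada", "email": "ada@example.com", "message": "Hello!"}
    # Step 4: submit the form
    result = submit_contact_form(data)
    # Step 5: verify the submission went through correctly
    assert result["ok"]

def test_invalid_email_rejected():
    # Beyond the happy path: invalid input should fail, not succeed
    result = submit_contact_form({"name": "Ada", "email": "nope", "message": "x"})
    assert not result["ok"]

test_happy_path()
test_invalid_email_rejected()
```

The second test is one of the "for completeness" cases: same flow, but deliberately off the happy path.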


If a given PR would be acceptable except for its maintenance burden, and you do not expect the PR submitter to provide sufficient help with maintenance to compensate, you could request the difference from them as payment for acceptance of the PR.


The article brings up a lot of great points. Though people do need to be mindful, lest they end up writing a follow-up: "Why my project got forked, and I'm now sidelined."


The fact that they have to state they won't accept pull requests that break the build astounds me.

Who is submitting pull requests that break the build? That is akin to trolling.


There are junior/newbie devs unfamiliar with the concept of CI etc, who will often submit working code without updated tests, and in that case it's up to the community/maintainer to point them in the right direction and help get their PR merged. An important first step to lay the foundation for future contributions :)


Just stopping by to point out that there is a typo in the first sentence of the article, I think: in "I maintain over many", the "over" should not be there.

Now I will read the article. :)


Also "the bus factor is high". Should be "low", as it is 1 for most projects.


I'm not sure if you're just being pedantic, but "the bus factor is high" in this instance means there is a high level of risk to the health of the OSS projects he maintains if he were to get hit by a bus.


I don't think he's being pedantic, it's the opposite of the general usage. The bus factor is the number of people that have to be hit by a bus to cause real problems for your project. Higher is better.


Indeed!

> The "bus factor" is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.

https://en.wikipedia.org/wiki/Bus_factor


You should report this to the blog author; not to HN readers.


My assumption is that the OP, "geerlingguy", and the post's actual author Jeff Geerling, are one and the same. Not unreasonable, I think. If he wants to post his work on HN and not read the discussion (which is obviously going to be relevant to his work), that's on him.


I'm here now... had to put my kids to bed; some nights it takes an hour or two (nights when all three act like the Tasmanian devil!).

It's fixed now :)


Grandparent's point wasn't that the author may not be reading, but that HN is better when comments benefit everyone here.

This isn't a hard and fast rule; I've certainly done my share of 1:1 conversations on HN. But for typos my usual approach is to go look at the author's profile, maybe click through to his website. Sometimes I'll even `git clone` a repo just to harvest an email address ;)

Just my 2 cents. You can keep reporting typos here as well. I'm sure authors would rather get bug reports somehow than not at all. Certainly not the most egregious thing one will see here on any given day.


https://news.ycombinator.com/user?id=geerlingguy

https://keybase.io/geerlingguy

And the assumption is accurate. You can read their HN profile and see the keybase.io connections to the website, HN account, and to github.


Submit a PR :)


[flagged]


You've posted quite a few uncivil and/or unsubstantive comments to HN. That's not what this site is for, so please stop doing that. You can read the following to get an idea of what we're looking for here:

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html


Upvoted for introducing me to the term 'bus factor.'

https://en.wikipedia.org/wiki/Bus_factor



