
The process is fine, but sometimes shit is broken and you need to fix it now and clean up the existing mess later.

Why not take this:

> Shirley (Code Review): It is now against company policy to have any hard-coded variables. You will have to make this a record in the Parameters file. Also, there are 2 old Debug commands, an unassigned variable warning message, and a hard-coded Employee ID that will all have to be fixed before this module can be moved to production.

and add at the end:

However, I'll allow the change for now because this is an important update that affects people's jobs. I want to see every issue fixed before the end of the week, and I will not approve any other changes until these issues are resolved.

There, win-win.




> The process is fine, but sometimes shit is broken and you need to fix it now and clean up the existing mess later.

Working in big enterprise shops, I've noticed that periodically that attitude does take hold, and the result is that the firefighting ends up producing more fires than it puts out.

If you could change most of the people, culture, and incentives in a non-software-centered enterprise all at once (and this typically goes far beyond the IT shop), sure, maybe more fluidity in software development and release process would work better. But that's usually not possible, nor is it usually possible to durably wall off a group within an enterprise to isolate it from the problems of the larger organization (though that does happen sometimes.)


Exactly this. I think dev shops are fairly stratified into quality levels. "Fix it now and clean it up later" works in a certain class of shop, and that's great.

But there's a huge swath of dev shops that never pay any technical debt back, and the compound interest on that technical debt consumes all productivity. Past a certain point, drawing a line in the sand and forcing better practices no matter how easy it is to rationalize 'one more hack' is the only way to break that cycle.


> But there's a huge swath of dev shops that never pay any technical debt back

Engineers often talk about technical debt as if the codebase is the currency of the realm. It's not. Business Value is the currency of the realm. The code is something that facilitates business value, when (and only when) it does what the business needs. The people working on the code have a customer, the business. The business is locked in to a single provider for that service. And the service provider has grown complacent, prioritizing its own needs and processes ahead of the needs of the customer. They're behaving like other entrenched and untouchable service providers do, in monopoly industries like government and cable companies. Deal with my service levels, or suck it.

There are many customer-obsessed engineers who know how to strike this balance, but this is one of the many issues where there's selection bias in who will take the time to participate in the discussion. People who think this is a powerful story of IT done right need to broaden their minds, stop playing around on hacker news, and start thinking a little broader about how software service providers can do a better job of learning how to say "yes" faster.


> Engineers often talk about technical debt as if the codebase is the currency of the realm. It's not. Business Value is the currency of the realm.

"Technical Debt" is a term that exists largely to communicate the idea that the thing it represents increases the dollar cost of both operating the software to deliver constant business value in a steady state, and of delivering changes to the software to continue to deliver the same value as business needs change or to realize additional opportunities to deliver new value.

It is, absolutely and centrally, about business value; ignoring it in decision-making is exactly like ignoring maintenance/operations costs in a capital purchase and considering only acquisition costs.


Not all tech debt needs to be paid back in full - thinking otherwise is indeed a fallacy, as you point out.

The catch here, though, is that the rate of business value production is inversely proportional to the amount of tech debt - meaning a ton of yeses today will mean a grinding halt in a couple of years. I have worked on too many (very successful) teams, and watched many others, that have gotten themselves into this situation.

After this point, you end up in a place where tech debt repayment efforts can take years - which also coincides with the 4-5 year mark, when the original developers are tired and leave. The end product: a giant legacy system operated by a ton of new folks who have no clue what's up. The end result: it gets 'deprecated' and replaced by a brand new one.

Ideally, I do think data is gold, but code needs to be incrementally ripped out and replaced every 3-5 years or so. Otherwise entropy is too crippling to the system.


> start thinking a little broader about how software service providers can do a better job of learning how to say "yes" faster.

We don't need more "yes men", we need more people able and willing to say 'Here are the trade-offs', and work with the business from there.

Yes, it absolutely is about Business Value, and the best Business Value is not always "yes" (of course, sometimes it is).

... speaking as someone currently dealing with and seeing the Business Value impact of technical debt.


But for a lot of things, this is generally acceptable. Back-office business apps, for example: because of internal politics and attrition, they generally end up being replaced wholesale every 5-10 years, which is approximately how long it takes technical-debt-ridden code to become so brittle it's unsupportable. A CIO can save millions by allowing this behavior. Thousands upon thousands of apathetic IT staff (including programmers) make decent livings in enterprises like this.


5-10 years? In a company of what size, in what industry? For most "back-office" apps you're talking 2 years for the re-write just to reach (almost) feature parity, and another year for the users to be trained and feel comfortable with it. If you were ripping and replacing a non-technical user's go-to app every 5 years you'd have a revolt.


We have 2 apps that are used for the same thing... the original one and the rewritten one. The rewritten one was coded up in C# over 3 years and isn't anywhere near as robust as the original one. The original one was written in VB3 circa 1995 and dragged to VB6, where it's stayed to this day.

There's so much technical debt on the old app that the first change I made to it, 5 years ago, caused it to stop compiling: the single line I added to a function made it too big to compile.

If you work in a shop that has been around long enough to lose its training wheels and get a driving licence, then you have technical debt that's core functionality everyone is scared of breaking.


Anecdotal example: The place I was working at had high software quality standards, but eventually created a separate team that wrote in a bizarre XML 'language' of sorts. This was, in effect, still part of the software, and the sins committed there were horrendous, but all was allowed in the name of "business value".

The end result of that was that you couldn't really change _anything_ without breaking one of the N client installs (each of them custom) - and you couldn't reason about which one touched a particular part of the code, or what assumptions it made.

That company was all but dead inside two years. There were other factors leading to its demise, but far and away the biggest issue was the crippling technical debt that was piled on with reckless abandon.

There are definitely trade-offs to be made, and sometimes it makes sense to take on some technical debt. Your example of back-office apps may be one of them, but there's certainly a subset of software development for which this is suicide at both the business and the technical level.


At least in this example, the entirety of the project involves fixing that "technical debt" of hard coding a variable, and it's being held up for not fixing _enough_ technical debt.
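
For the curious, the mechanics of that fix are about as small as fixes get. Here's a purely hypothetical sketch - the table, column, and parameter names are invented, not from the story - of swapping a hard-coded Employee ID for a Parameters lookup:

  # Hypothetical sketch only: replace a hard-coded value with a Parameters lookup.
  # The table name, column names, and fallback behaviour are assumptions, not
  # details from the story.
  import sqlite3

  DEFAULT_EMPLOYEE_ID = "E12345"  # the old hard-coded value, kept only as a fallback

  def get_parameter(conn: sqlite3.Connection, name: str, default: str) -> str:
      """Read a single named parameter from a Parameters table."""
      row = conn.execute(
          "SELECT value FROM parameters WHERE name = ?", (name,)
      ).fetchone()
      return row[0] if row else default

  def notification_recipient(conn: sqlite3.Connection) -> str:
      # Before: return "E12345"  (hard-coded Employee ID)
      return get_parameter(conn, "notification_employee_id", DEFAULT_EMPLOYEE_ID)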


> and the result is that the firefighting ends up producing more fires than it puts out.

I usually find that customers

- Prefer getting features early, even if they're unpolished, meaning ship something broken and fix it later rather than polish and ship something perfect.

- Prefer many, complex features over simplicity/performance in the product.

- Prefer having more bugs but a quick turnaround over having few bugs and a long turnaround.

In all the enterprise scenarios I have worked in, I have tried to stress the importance of simplicity, etc., but all the customers have really stressed is that they don't mind bugs, crashes, or poor performance as long as they can get their work done - and that might depend on that last feature or on putting out that last fire.


They say that now. But what happens when they list all the bugs against you when it comes to paying the invoice?


They say, thanks for fixing all those bugs. They know the bugs exist because they encountered them. Fixing bugs actually looks like work is getting done.

Whereas if you came back to them with a polished product with no bugs, they would be like, "why am I spending all this money for all this time?"


That’s related to the concept expressed by behavioural economist Dan Ariely: “consumers don’t value goods and services solely by their utility, … but also a sense of fairness relating to how much effort was exerted” [1]. In that quote, I’d probably qualify the word effort as “perceived effort”.

http://danariely.com/2010/12/15/locksmiths/


The customer doesn't know about or understand technical debt.

Any improvements you make down there just rack up time spent, which makes the bills higher - and that they will notice. They do not see your improvements, but they will notice it if your much betterer code now introduces a new bug.

Higher bills than your predecessor, slower fixes, and new bugs - that is the moment to worry about getting bills paid, not when you fix things fast.


The thing is, the existing customers are locked in (or we failed).

The customers we want are those that aren't customers yet. Those customers require features that we don't have yet in order for them to become customers.

It might be the right decision as a business to screw the existing customers by accumulating tech debt in order to grab new market share before the competition does (again, this is in a context where a customer has no chance of leaving once on board - which is more or less the case in all enterprise scenarios I have experience from, both web and desktop).

I hate this mentality and as a developer it's terrible - but I can't say that it's the wrong decision to make short term, business wise.


Guess I simply do not share your view. My customers are not locked in; in fact, I will go out of my way to make it easy for them, their employees, or even another consultant to continue any of the work I do and have done for them in the past.

In addition, I am not in favor of accumulating technical debt; I suppose I wasn't clear. The article talked about a 1-line fix, and that by itself should not trigger a rewrite of a feature, especially not when the fix has to be done now, not next week.


> But what happens when they list all the bugs against you when it comes to paying the invoice?

I have never seen that happen, but then I have never worked on invoiced/project work, only long term enterprise/product dev. Customers license the software, but I suppose there could be some SLA type agreement also for desktop software (which happens to be the case).


OT: I was about to post that women (Shirley) are bossy, but then I pretended that the named individual was Sam, and the entire tone of the comment shifted immensely. It was as if the big boss was making this comment. And big bosses never look into things in this much detail, which gave the task more importance.

There is something that isn't right here. I am not sure why I am feeling this.


Welcome to the scary world of recognizing your own biases!

It's a good thing.


It's not Shirley's job to make an executive decision, like whether to break policy or not.

If the company policy should be ignored, then the issue should have been escalated to someone with the authority to make that decision.

IMO the real problem is that the submitter didn't realize that was the case and so didn't escalate sooner. This could have all been avoided by asking for (and getting) carte blanche from Philip or David in the first place.


And yet, if everyone is aware not only of company policies, but the underlying principles, everyone also has the authority to deny something. Hardcoded values, for example, would violate a principle about maintainability.

Talk about policy and it's politics and escalation and such. Talk about principles and it becomes a moral issue.

Alternatively, think of it as breaking a specific law (such as the speed limit) vs being immoral in a more general sense of the word.


I think making this a moral issue slightly misses the point. Even if Shirley understands the underlying principle of maintainability and is motivated to do the best thing for the company, she is still not the right decision-maker because the relevant risks and rewards are not fully visible to her.

Decisions should be made at the lowest level where all the relevant information is available. In this case, the executive understands the pressing reasons for making the change, so is properly able to weigh the incremental risk (with the technical advice of Shirley et al) vs. the business priority, whereas Shirley can only see half of the picture.

Imagine if Shirley had gone ahead and broken policy, then something went badly wrong. She likely would've been blamed for the whole fiasco.


> Imagine if Shirley had gone ahead and broken policy, then something went badly wrong. She likely would've been blamed for the whole fiasco.

This is what I was looking for in this entire thread. It's one thing to push through a small change in a small internal program for something that isn't critical like a batch job that clears old data from a server. That's not what this is.

It changes the potential output of the entire company. Yes, it's a single LOC totaling out to a single byte. Yes, it passed all of the tests. However, tests are written by people, and tests are only as good as the people who write them. This is partially why there are human eyes in the QA/IT/UAT phases. Shirley would have been viewed as the single point of failure had something gone wrong, because she made the decision to break policy.


Yeah, if I was Shirley, I'd just have suggested "why don't you ask David to overrule this one?", but definitely not overruled it myself.


If I was Ed, I would have grabbed David and told him to tell Shirley to allow the change as is. Probably before submitting it for review.


Because everything is urgent, all of the time... and so nothing gets improved.


This is what escalation is for - the job of a VP or Director of Engineering is to decide what is actually urgent and approve fast-tracking it, and what should actually follow process. Having an efficient escalation process is critical.


Very much this: there should be a process for "emergency change" and that process has to be used for every actual emergency and not for anything else.


I've found that those in charge also have a budget to run you on... how do they justify you spending X hours updating code that changes exactly nothing as far as anyone is concerned, plus all the regression testing? They just don't... it gets left as it is.


The word "should"


"I know this is priority 1, but is it priority 1A, 1B, or 1C?"


Every numeric priority scoring system I've ever worked with at every company I've ever worked with has devolved into this: Start with P1 (high), P2 (medium), and P3 (low). Sooner or later, an exec will swoop in and declare something Even Higher Priority Than P1!!! So, "P0" gets invented, and after some time, you have tons of P0 issues, meaning P1 is the new "medium", P2 is the new "low" and P3 becomes "never". Then something comes in that is Even Higher Priority than P0!!! And now, you're stuck.

For this reason, I always advocate reversing the numbers. The higher the P number, the higher the priority. So you never run out of numbers to express how much more urgent and earth shattering your next bug is. Try it!
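
A toy sketch of why the inversion works (none of these names come from a real tracker): escalation becomes "pick a bigger number", and the existing buckets never have to be renamed.

  # Toy sketch of "bigger number = more urgent"; all names here are made up.
  from dataclasses import dataclass

  @dataclass
  class Ticket:
      title: str
      priority: int  # larger means more urgent

  def work_order(tickets: list[Ticket]) -> list[Ticket]:
      # Most urgent first; sorted() is stable, so ties keep their order.
      return sorted(tickets, key=lambda t: t.priority, reverse=True)

  def escalate_above_everything(tickets: list[Ticket], title: str) -> Ticket:
      # The "even higher than P1!!!" request is just max + 1; nothing below
      # ever needs to be renamed or renumbered.
      top = max((t.priority for t in tickets), default=0)
      return Ticket(title, top + 1)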


I worked at a big company that had a big data warehouse that you could submit jobs to: these jobs could be given one of five priorities from "lowest" to "highest". Naturally, everyone set their jobs to "highest", so jobs that generated some weekly email report that nobody really read were competing at the same level as jobs that were actually business critical (e.g., necessary for ad auctions to work correctly or whatever).

So they introduced a new priority level: "Critical". The pattern repeated itself in short order.

It was a tragedy of the commons thing, and eventually it was literally impossible for a job to ever finish if it was marked anything other than "Critical".

So they introduced "Omega Critical". Now you needed to be on a permissions list maintained by the database team to raise something to "Omega Critical". I got on the list (because I was scheduling actually high-priority stuff). Then everyone in my org would come to me and ask me to set random bullshit to Omega Critical, because they knew I had permissions and "one more job won't hurt"...

I don't work there anymore, but I believe they have since developed even more weird categories like "Q4 Omega Critical" and who knows what else.


I've seen 2 pretty good solutions to this.

1. VP bangs heads with the equivalent VP in the other department, complaining there are too many requests. Can work; depends on the individual.

2. Actually have a service level agreement. Requests are limited to X per hour/day/week/whatever the process needs, and billed accordingly. Build a bit of buffer into this, a little creative billing. Requests within the SLA are dealt with in the agreed turnaround time. Have a dashboard that clearly shows this to customers. Alert customers if they're consistently submitting outside the SLA, and they will receive a degradation of service unless they provide further funding (sketched below).

Everyone knows if you pour too much water into a cup it will overflow.
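
A rough sketch of what option 2 might look like mechanically, with placeholder numbers and wording (this is an illustration, not anyone's actual SLA tooling):

  # Purely illustrative SLA intake tracker: X requests per rolling window,
  # anything over the quota is flagged for degraded turnaround / extra billing.
  # The quota, window, and return strings are placeholders, not a real policy.
  from collections import deque
  from time import time

  class SlaIntake:
      def __init__(self, quota: int, window_seconds: float):
          self.quota = quota
          self.window = window_seconds
          self.recent: deque[float] = deque()  # timestamps of accepted requests

      def submit(self, now: float | None = None) -> str:
          now = time() if now is None else now
          # Drop requests that have aged out of the window.
          while self.recent and now - self.recent[0] > self.window:
              self.recent.popleft()
          self.recent.append(now)
          if len(self.recent) <= self.quota:
              return "within SLA: agreed turnaround applies"
          return "outside SLA: degraded turnaround, billed as overage"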


Having a budget for each department and a cost for running the queries is a great idea... I'll try to implement it the next time I'm in this situation. (I have also had a ton of VPs - plus the CEO and the owner - come to me with "very urgent now now now!" requests... it would have been great to tell them "the top priority for me is this query worth $1000, can you beat that?")


Sure. That's how external service providers work - being internal is the buffer: you can 'do a favour' when needed. If it is the owner making an ASAP!OMG! request, perhaps take the chance for a chat, to demonstrate that you have a system and that you're not only managing but taking a stance of leadership (and point out that as it is internally charged, they're not losing any cash).


Data warehouse queries at Amazon worked like this (5 levels of priority). Everyone set the priority for all of their queries to the highest possible level, every time.


This is what I like about kanban: there is a queue, and priority is not a status.


Remove all priority levels.

Put a checkbox: "This is an emergency". People are suddenly less likely to click it.

And anyway, if it really were an emergency, they'd have called, so it still doesn't matter - but it makes them happy :D


I worked at a place with such a button. Neat caveat: only VPs could uncheck it...

(And I managed to check it within my first month there. My boss kindly let me know the difference between a client saying emergency and a VP saying so....)


I've seen places where, through general abuse and the repeated early dismissal of bug reports, people became extremely reluctant to call anything an emergency... even actual emergencies.


I'll never forget the company I consulted at where the priorities were harder and harder to say with a straight face.

To get to the front of the queue, you had to mark your request "hot and urgent for the whole team"


I just got a sudden bitter taste in my mouth when reading "P0" and remembering the people who used to say it out loud; is that normal?

Another great thing about inverting the priority numbers is that nothing can easily go beneath a floor of 0, though ultimately all of this is about fixing the symptom and not the cause.


The numbering system possibly comes from ITIL, where priority is determined as the product of two factors: impact (scope) and urgency. Those definitions seem pretty consistent across most IT orgs I've seen, regardless of how poorly executed or immature the org is, and they often include gating factors like the ones other comments in this subtree have been discussing. If some of the most immature and byzantine IT processes can get this nonsense settled, engineers can avoid the bikeshedding and goalpost-moving too, if they actually put some effort into a sensible system and enforce it.
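
For anyone who hasn't seen it, the ITIL-style calculation is usually just a small lookup matrix. Here's an illustrative sketch (the particular matrix values are one common arrangement I've seen, not a standard):

  # Illustrative sketch of an ITIL-style priority matrix: priority is derived
  # from impact (scope) and urgency rather than chosen directly by the reporter.
  # The matrix values below are one common arrangement, not a standard.
  IMPACT = {"organization": 1, "department": 2, "user": 3}
  URGENCY = {"high": 1, "medium": 2, "low": 3}

  # (impact rank, urgency rank) -> priority, P1 highest
  PRIORITY_MATRIX = {
      (1, 1): "P1", (1, 2): "P2", (1, 3): "P3",
      (2, 1): "P2", (2, 2): "P3", (2, 3): "P4",
      (3, 1): "P3", (3, 2): "P4", (3, 3): "P5",
  }

  def priority(impact: str, urgency: str) -> str:
      return PRIORITY_MATRIX[(IMPACT[impact], URGENCY[urgency])]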


Order them, don't bucket them. That is, the highest priority is at the head of the queue, and the lowest is at the bottom.


I have a different issue: everything kind of always gets promoted to a must-have. So now I've made 2 priorities, need-to-have and nice-to-have. For any given collection of work, no more than 60% of the estimated work hours can be need-to-have (sketched below). When my team accepts a deadline, it means that all the need-to-haves are done at that deadline, and the nice-to-haves probably are, but no guarantees.

Every single fucking time the PO thinks the deadline is too late, she tries to remove nice-to-have issues - sorry, but that doesn't change the deadline.
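
That 60% cap is easy to make mechanical; a trivial sketch (only the 60% figure comes from the comment above, everything else is invented for illustration):

  # Trivial sketch of the "no more than 60% need-to-have" planning rule.
  def plan_is_acceptable(need_to_have_hours: float, nice_to_have_hours: float,
                         cap: float = 0.60) -> bool:
      total = need_to_have_hours + nice_to_have_hours
      return total > 0 and need_to_have_hours / total <= cap

  # Dropping nice-to-haves only shrinks the buffer; it never moves the deadline:
  # plan_is_acceptable(60, 40) -> True, plan_is_acceptable(60, 20) -> False.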


A former co-worker of mine told me about their priority-tracking software, where you could enter a number from 0-5 for each bug; 0 was the highest priority, and only certain senior managers could set a priority of 0.

However, people soon discovered that priorities were stored as a float in the database and that they could enter any number 0 < n <= 5 in the field. So bugs quickly started to get priorities like 0.5, 0.25, 0.1, 0.05 and so on...
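
A hypothetical guard that would have closed that loophole (to be clear, this is invented, not how that tracker actually worked):

  # Hypothetical guard: priorities must be whole numbers from 0 to 5, and 0
  # still needs senior-manager permission. Apparently no such check existed.
  def validate_priority(value: float, requester_is_senior_manager: bool) -> int:
      if value != int(value) or not 0 <= value <= 5:
          raise ValueError("priority must be an integer between 0 and 5")
      if value == 0 and not requester_is_senior_manager:
          raise PermissionError("only senior managers may set priority 0")
      return int(value)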


This is inevitably what happens when the people reporting the issues get to decide their priority (I've seen this even in a two-person dynamic). You have to have discipline, and you usually have to have an impartial outsider (to the issue at hand) who doesn't care about other people's panic to actually set priority levels.


I've seen systems that have two fields, e.g. priority and severity. Where severity is "how important the client thinks it is" and priority is "how important we think it is".


Our system has another one: who it impacts. Does it impact a user, a department, or the entire organization? The difference between "my crap's broken", "our crap's broken", and "everybody's crap's broken".


Heh. I like that distinction.


No, sooner or later an exec swoops in with "this is our largest customer!!!!". It's still a P1, it just gets bumped to the front of the line - because revenue. Nothing wrong with that, and no need for a different numbering system. Their P1 is no different than a smaller customer's P1. They just get to be first in line because they pay to play.


Or what is even more likely, in my experience, "this could be our largest customer!!!!"

Sales-driven development is lots and lots of fun.


I once had a priority list where a few items had a "low" priority and the rest were "high", "very high", and "extremely high". It gave me a depressing feeling, because we had to delay stuff that was supposedly "high" priority for someone.


The problem with priorities is that multiple issues can have the same priority. If you give every issue its own priority number, then you no longer have this problem.


Hire P4 interns or contractors :)

If possible.


And let me guess... everything priority 2 or lower means 'never' :-)


  UPDATE jira SET priority = NULL WHERE 1=1;
Here's your bugfix.


Or priority zero.


The best solution against "priority inflation" that I ever saw: support request form of the DreamHost web hosting provider had priority levels with vivid descriptions (examples) of their meaning, to the extent that the top priority level was called "people are dying". I doubt anyone dared to select that priority unless people were indeed dying.


Nah. Nothing is ever urgent.

Just take all tickets that are "urgent", remove the urgent and put them at the back of the queue.


Story of my last few jobs.


This is quite literally how technical debt is created. The story said that there was already a set of bugs being worked on. Which means that by adding this, the technical debt backlog would only continue to grow.

I worked at a company where the technical debt was so deep that I really wanted to say it was technically bankrupt, meaning the software should be entirely rewritten. That goes against every grain of my experience, and we ended up not declaring technical bankruptcy - but if you want to understand where it starts, it's "we'll do it later."

Each of those decisions needs to be accounted for just like a credit card purchase.


I think that's the key - let's not forget what's at stake in this story: laying off people. What's the price of extending the tech debt of this core code - no 'proper' unit tests, hardcoded values, employee IDs in code? Is that greater than laying people off?

It is easy to forget what the real world looks like when staring at a screen all day. We take out tech debt like a mortgage - sometimes that debt doesn't make sense and we end up having to foreclose on it, other times we're able to pay it off slowly over time and it all works out.

Where the analogy to debt falls down is that sometimes you luck out and don't have to repay tech debt. Maybe your legacy manufacturing system gets replaced with something modern and all of that horrible code magically vanishes - maybe your single threaded software from the 90s is finally outdated and you need something modern and would have had to rewrite it anyhow.


>Where the analogy to debt falls down is that sometimes you luck out and don't have to repay tech debt

Just call that "winning the tech debt lottery". Much like winning the real lottery, it definitely happens, but not often enough to rely on it.


It seems like the code review policy should accommodate scary legacy code.

What happens if the findings are fixed later in the week? Are they promoted, too? Are large swaths of the system regressed because of the changes? The process should allow the reviewer to say "oh, this is legacy code--the actual change looks good, and I won't worry about the code from ten years ago that doesn't conform to the standards we created a year ago."


Fixing code to meet more recent standards has a nontrivial chance of introducing a bug. If it is to be done at all, it should be done on its own account, and not piggybacked onto something with its own goals.

No matter how hard the process mavens try to reduce development to painting-by-numbers, good human judgement is the key to success.


Agreed, as that is the big issue. Anyone who requires that a patch to production include more than the absolute minimum code change has a broken process. You bring that code up to the new standards in the next release, when you can do much fuller testing.


Yes, totally agree. Far more risk is introduced by the cleanup than by the original change. The cleanup is still important, but shouldn't be fast-tracked.


I get the intention here, but sometimes code really needs to be touched. Otherwise you end up, in the modern day, with some-variant-of-Whitesmiths code that nobody can read, or ever wants to read. If you separate it out, business processes usually impede your ability to ever get to the spun-off task, and then it just gets forgotten.


With that attitude all code is legacy code the day it is created, because it will never be updated to conform to future standards.


That's a reasonable point, but how do you (as a product owner) justify the cost of making those fixes (and doing the regression testing necessary to ensure the system still works properly)?

My group supports several legacy applications (10+ years old). The product owners can barely be bothered to green-light bug fixes. I can't imagine they could find the justification to green-light tech debt fixes.


I think the language of technical debt can actually be very meaningful to business. It's fair to take on debt in the short term to make initial gains, but you need to have a reasonable perspective about the ongoing cost that debt has, and pay it off on an ongoing basis if you can't afford the debt service.

If it's a decade-old application that rarely even gets bug fixes, it's probably not worth doing major refactorings. On the other hand, if it's your core product, and your team is regularly making updates and fixes, it would be irresponsible to not incorporate your team's latest standards.

Also, it's worth investing in test automation, because the balance really shifts if you can minimize the cost of regression testing by automating it.


That's the worst thing you can do. The change will be forgotten and will never be properly worked out. Why should it? It just works.

I have seen a 270k LOC codebase where every change was made with this kind of thinking. Nobody ever went back to make the "temporary" changes properly.


It depends on the project. If this is critical code, 9/10 times it might not be an issue to let it slide in favor of getting the release out early, but that 10th time is a disaster. Weird things start happening and you don't know why and the team dealing with the fallout may not even realize which change caused the issue.

So now there's a lot more disruption and a ton of frustrated people throwing way too much energy into finding what ultimately ends up being a one line mistake.

After a few rounds of that you decide that if it's going to take 6 days, it's going to take 6 days.


Open a bug for the other changes and change only what's needed (unless they're literally in the same line and have no other secondary impacts)


Right - the idea that a QA person would happily jeopardize people's livelihoods over a box-ticking exercise is why that profession gets a bad reputation...


> However, I'll allow the change for now because this is an important update that affects peoples jobs. I want to see every issue fixed before the end of the week and I will not approve any other changes until these issues are resolved.

This goes in the backlog and gets pushed down further and further and further until it's backlog clutter that nobody even notices, until that particular chunk of code no longer exists.


>The process is fine, but sometimes shit is broken and you need to fix it now and clean up the existing mess later.

When did you ever get to this land of milk and honey called "later"? Whenever you have a JFDI because it's broken, the way it's coded always stays...

I've never seen a manager sign off on reworking code that does the job, just to make it "more manageable" or "standard" or "less of a bodge"


> The process is fine, but sometimes shit is broken and you need to fix it now and clean up the existing mess later.

A problem is that most changes which are minor in the big picture are thought of as critical things that need to bypass the process in the short term.

It's not just this one case that seems to make sense. It's that there are dozens or hundreds of other, similarly "critical" patches that need to bypass the process.


What is to keep Shirley from simply filing up to four new tickets? Isn't that better?


Shirley keeps finding problems everywhere, she's not a team player.


Can you explain how finding problems is the opposite of being a team player? How is a team supposed to get things done if there isn't someone to recognize problems?


No I can't. It's just an attitude that I often see from management.


Management is almost never incented to understand defects.


Not only is Shirley not a team player, she's also a hypocrite. I guarantee there is non-standards-based code touching the refactors she demanded, at some level.


Sigh


This smells. I've seen plenty of the former - fix it quick - but rarely any of the latter - fix it properly later. And the former stacks up rapidly as weeks turn to months turn to years.


>The process is fine, but sometimes shit is broken and you need to fix it now and clean up the existing mess later.

Because "later" might end up actually being in the middle of the night, or later might not ever come. And then the mess might be magnitudes bigger than what you were facing before. Just wait, now. You have the time. Always. Unless your platform/software is continually crashing, your business always has time. Changing code like that without review is begging for you to "not have time" and cause an undue amount of stress for everyone involved.

Be an engineer, not a monkey. We're all only human and mistakes will be made.


I find this attitude on code reviews to be extremely demotivating, especially on code bases where I am not the primary committer or lack overall responsibility for the system being changed. It completely distracts from the code under review, and it only requires a small amount of social/organisational ill health to be abused.

Even if you believe that the next committer should clean up, it can require quite a big context switch - either the clean up explodes into quite a big task and you have to question the trade-offs being made in doing it now, or it doesn't have full focus and becomes risky and/or no real improvement.

I don't happen to believe that the next committer should clean up in most of these situations. Minor things, sure, you should do it (but don't side-track the review with it!). In my experience, people who already couldn't be bothered to do it or clean up later use a new commit as an excuse to get their work done for them. I've often seen people criticising tangential code issues in review who were responsible for the original commit. The question becomes - if it's such a minor issue for me to clean up, why wasn't it a minor issue for you to clean up? If it's bigger, how did it pass review in the first place?

My view is that extraneous cleanup should be proportional to the original change at most - and not discussed in code review of unrelated changes. If a particular area of the code sees sufficient churn to accumulate more and more debt, everybody will know and talk about it, it will be clear that some specific refactoring is necessary, and more importantly it will be evidence that small cleanup is worthless (high churn) and a better architecture is valuable (high churn). A given team may have high tolerance for technical debt and thus never do this work. But then they are screwed anyway, why create noise at every review just hoping that someone else solves it?

Remember, there's also a good chance that the mess stays the same as it always was, sitting there passively. And there's a chance that the cleanup introduces unanticipated problems, especially if it's legacy code with no/less testing.


Are you addressing my statement or someone else's? I don't understand how going through with code reviews for any change is somehow distracting from "code under review."

And surely addressing the code you're touching isn't "extraneous" cleanup. You're modifying the code, you can do it bit by bit.


I think he's addressing this line from the original article, which sparked this whole sub-thread way up-thread:

"Also, there are 2 old Debug commands, an unassigned variable warning message, and a hard-coded Employee ID that will all have to be fixed before this module can be moved to production."

Basically, it's the idea that if you touch a piece of code, you have to fix everything else wrong with it, even if that involves touching other areas of the code that you weren't looking at initially and may not even have familiarity with.

This is "the perfect is the enemy of the good", applied to code refactoring. If your policies put people on the hook for fixing mistakes that were made by other people a long time ago every time they touch code, there is a very strong incentive to never touch code. You get a bunch of hyperactive software wizards that shit out a working product (with a bunch of crap code), put it on their resume, and then move on to another employer. The people hired after them have a strong incentive to never do anything (because touching code will require that they touch a bunch of other stuff that they don't understand and have no interest in dealing with), and so they browse Reddit and Hacker News all day long. The code remains crappy, but its crappiness can't be blamed on anyone except folks who are no longer with the company.

Actually, this is a pretty accurate description of the software industry.


'Cause when you say, "You can fix it later," later never comes.


"The CTO mandated that we reformat the TPS reports by Friday. We'll add the change next week."



