An alternative take on this article would be that the author wasted two days because he was reluctant to ask the person who filed the bug report more questions.
How often do you actually receive quality bug reports at work? My experience is that external or internal users almost never provide sufficient information, and you as a coder are always expected to drill down on what they reported with a barrage of questions. I.e. if you are not doing https://en.wikipedia.org/wiki/Five_whys then you might be doing it wrong and wasting time because of it.
I'm referring to this:
> Some developers would have immediately gone back to the person reporting the problem and required more information before investigating. I try and do as much as I can with the information provided.
Which seems like being stubborn and making a mistake because of it.
A couple of other parts also seem a bit like overdoing it:
> Because I investigated if there were other ways of getting to the same problem, not just the reported reproduction steps.
> Because I took the time to verify if there were other parts of the code that might be affected in similar ways.
These seem like taking a gamble. Maybe something turns up, but shouldn't this work be minimised until there is more evidence of "other ways of getting to the same problem"? Developer time is expensive; is this really the best way to use it? Would it make more sense to just fix the issue at hand and only put in more time if more bug reports come in after the fix, or if there is some other indication that this part of the code is more broken?
> How often do you actually receive quality bug reports at work?
Very, very often. I work as a QA engineer whose main responsibility is to go through the bugfixing queue and add needed info where necessary, and I have to spend a lot of time every day doing this. Sometimes it gets so bad I have to assign the ticket back to the reporter to add more info, because even I don't know where to look without it.
Interestingly enough, it's always the more senior people at our company who are guilty of writing crap bug reports.
I went from a huge shop to a medium shop, and I really miss that layer of QA engineers before the bugs got to us; it filtered out so much nonsense that I had tremendous respect for those guys.
Half of my current tickets barely have two sentences of so-so English.
Almost all the developers I work with never started working on a bug until I gave them the right steps to reproduce it, even if the bug was reported by a user. A very stupid example I can think of is this: the developer designed a login form for web and mobile with the password field as expected, but forgot to stop the first letter of the password entry from being capitalized. So if your password is abcdef, on a mobile keyboard (unless you are careful) it would be entered as 'Abcdef' and would not work. The issue was reported thrice, and he said there was no error and did not fix it. Only later did it strike me (I had tried it on Safari while he tried it on the mobile-responsive version of Chrome) that this was the issue. I'm not saying the developer should not start working right away, but the expectation is that if they had actually paid attention to what was reported, it would not have taken this much time to figure out what the issue was. There needs to be a middle ground, which varies from org to org depending on workload.
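For what it's worth, this class of bug is avoidable on the form side. A minimal sketch (the form#login selector and surrounding setup are made up for illustration; autocapitalize is a standard attribute, autocorrect a Safari-specific one):

```ts
// Sketch: configure a password field so mobile keyboards don't
// auto-capitalize the first character. Browsers generally do this
// for type="password" already; the extra attributes cover custom
// or misconfigured fields.
const input = document.createElement("input");
input.type = "password";                      // masked entry
input.setAttribute("autocapitalize", "none"); // no automatic capitalization
input.setAttribute("autocorrect", "off");     // Safari-only hint, ignored elsewhere
input.spellcheck = false;                     // no spellcheck UI on the field
document.querySelector("form#login")?.appendChild(input); // hypothetical form
```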
> is it more probable that this work should be minimised
I guess this is where automated tests can come in. You fix something and see if it passes the unit tests. But then everyone has their own approach. For him, fixing a similar bug twice is worse than finding all possible mistakes at once.
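A minimal sketch of such a regression test (Jest-style; checkPassword and hashPassword are hypothetical helpers standing in for whatever the auth module exposes):

```ts
import { test, expect } from "@jest/globals";
// Hypothetical auth module: hashPassword(plain) -> stored hash,
// checkPassword(plain, stored) -> boolean.
import { checkPassword, hashPassword } from "./auth";

// Pin down the failure mode from the login-form story elsewhere in the
// thread: the check must stay case-sensitive and must not normalize
// the first letter.
test("an auto-capitalized first letter must not pass the password check", () => {
  const stored = hashPassword("abcdef");
  expect(checkPassword("abcdef", stored)).toBe(true);
  expect(checkPassword("Abcdef", stored)).toBe(false);
});
```

Once that test is in the suite, a second occurrence of the same bug is caught for free.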
* a contained smoky fire is sufficient to hide the start of a wildfire
This means:
* keep your errors at 0. If they "can't be kept at 0", you're either too far gone or thinking about the issue incorrectly.
* user complaints are errors. Just because they aren't clear doesn't make them any less so.
There is a perception that users go out of their way to make unfounded complaints. In my experience, getting users to complain at all is the real issue.
There is also a perception that some errors aren't important. If you have a channel to receive an error, it's because it has business value. If a dev I was managing ignored a password-entry bug as a non-dupe / assumed user error, without significant digging and user interaction, I'd be livid. Most businesses will lose significant value if users perceive the act of logging in as difficult.
Keeping errors at zero does not mean errors don't happen; it just means you resolve them all and don't ignore any. Perhaps it should be 'keep errors at 0 or 1'.
So if you're small, a RAM issue is something you deal with manually and rarely. As you get bigger you'll transition to automated failovers that still get looked at individually. Then you'll scale up to the point where these aren't freak occurrences. Now it's important for you to have a strategy to identify the issue and its follow-on issues, and resolve them. It's also past time you think deeply enough about your setup to be able to contain them, so you can stop surfacing them as errors; they are now part of a normal business process. You still want "too many" to be surfaced as an error (and "too few" as well), and any effects you can't currently recover from automatically are also errors.
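To make that last sentence concrete, a minimal sketch with invented thresholds:

```ts
// Sketch: once failovers are routine, the error is no longer "a failover
// happened" but "the failover rate left its normal band". Both thresholds
// are made-up numbers for illustration.
const MIN_PER_DAY = 1;  // below this, detection itself may be broken
const MAX_PER_DAY = 20; // above this, something systemic is wrong

function auditFailoverRate(failoversToday: number): void {
  if (failoversToday > MAX_PER_DAY) {
    throw new Error(`failover storm: ${failoversToday} today, expected at most ${MAX_PER_DAY}`);
  }
  if (failoversToday < MIN_PER_DAY) {
    throw new Error(`suspiciously quiet: ${failoversToday} failovers today; is monitoring alive?`);
  }
  // Inside the band: a normal business process, not an error.
}
```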
It's perfectly possible to "keep errors at 0" without ignoring any output.
I agree with you on most parts, but not about keeping your errors at zero. In a fast-moving environment there will be mistakes or things that are missed. Ideally, bugs should not have any meaning attached: yes, there was an issue and now it is fixed; good thing we found it today rather than a few days later. That is it.
Getting complaints from users is a good thing; missing them, or not fixing them when you come across them, is the real problem. As a user, I would be overjoyed if I reported a small issue and the company fixed it quickly.
Being livid is natural, but I was pretty sure the developer himself knew he screwed up this time, so there was no point expressing it. Plus, as I explained in the earlier comment, this escaped for so long because we did not have many cross-device users, so it did not affect that many people. Just from experience, I now check this every single time I test a website before it goes live (and also check with keyboards other than just Gboard).
After your suggested fix, how does a user enter the first letter of their password in upper case if that's what they want?
Maybe you were expecting the dev to go through all the passwords and fix them so the first letter of every password is lower case? Oh, but if they were following best practices they don't know the password; they only know its salted hash.
Since the app was already shipping without the first letter being auto-lowercased, that suggests there were plenty of passwords with the first letter already upper-cased, which is also something you can't test for easily if all you have is salted hashes.
Sorry, it requires more context. The reason this issue wasn't highlighted before was: 1/ many of our users were on laptops and did not use the mobile site as much; 2/ there was rarely a switch for those who used the mobile site (as in, they rarely used desktop, else we would have caught it sooner). The fix was a longer one. We obviously had to have the same convention for a password on desktop and mobile web. For mobile users, after we made the fix, if they had trouble logging in we asked them to capitalize the first letter of the password and try again; when they logged in, we made them change the password, and if they could not, they reset it. At the point we found it, we were pushing the mobile site to users as an internal growth activity, and we were able to navigate it. It wasn't the best UX to be fair, but we potentially averted a bigger disaster at the time we did it.
We also had to take care of the few folks who signed up with the wrong password (in the sense that they never intended the password to start with a capital letter). Changing the field value was one part of it, applicable to all new users. The complication was the people who had already gone through the flow and would have trouble logging in now.
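A sketch of that login-time recovery flow, since all you have are salted hashes (the function names and scrypt usage are illustrative, not what we actually ran; this just automates the "capitalize and try again" step):

```ts
import { scryptSync, timingSafeEqual } from "node:crypto";

// Compare a candidate password against a stored salted hash.
function matches(candidate: string, salt: Buffer, storedHash: Buffer): boolean {
  const hash = scryptSync(candidate, salt, storedHash.length);
  return timingSafeEqual(hash, storedHash);
}

// Detect, one account at a time, the users who signed up through the buggy
// mobile form: their stored hash corresponds to a capitalized first letter.
function login(password: string, salt: Buffer, storedHash: Buffer): "ok" | "force-reset" | "fail" {
  if (matches(password, salt, storedHash)) return "ok";
  const capitalized = password.charAt(0).toUpperCase() + password.slice(1);
  if (capitalized !== password && matches(capitalized, salt, storedHash)) {
    return "force-reset"; // let them in, then require a password change
  }
  return "fail";
}
```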
"I try and do as much as I can with the information provided."
I know I'm guilty of this one. I've stayed away from fast-paced jobs and appreciate jobs where people are OK with my reluctance to bother a lot of people, even if that means it takes me longer to figure things out on my own.
This also means I build a much deeper understanding of the systems I work with, or at least I like to think so, and some people have confirmed that about me, indirectly, through praise of my insights.
Their praise is obviously great for knowing how you're doing, but it doesn't confirm the reason for your good insights. Plenty of people manage to be good at what they do without doing it the best way possible, so maybe you'd be even better if you changed your approach!
Of course I know nothing about you, so I'm not trying to give advice, just pointing out that the "confirmed" may be a cognitive bias.