This is what escalation is for - the job of a VP or Director of Engineering is to decide what is actually urgent and approve fast-tracking it, and what should actually follow process. Having an efficient escalation process is critical.
I've found that those in charge also have a budget they're running you on... how do they justify you spending X hours updating code that changes exactly nothing as far as anyone is concerned? ...plus all the regression testing?
They just don't... it gets left as it is
Every numeric priority scoring system at every company I've ever worked with has devolved into this: Start with P1 (high), P2 (medium), and P3 (low). Sooner or later, an exec will swoop in and declare something Even Higher Priority Than P1!!! So "P0" gets invented, and after some time you have tons of P0 issues, meaning P1 is the new "medium", P2 is the new "low", and P3 becomes "never". Then something comes in that is Even Higher Priority than P0!!! And now you're stuck.
For this reason, I always advocate reversing the numbers. The higher the P number, the higher the priority. That way you never run out of numbers to express how much more urgent and earth-shattering your next bug is. Try it!
I worked at a big company that had a big data warehouse that you could submit jobs to: these jobs could be given one of five priorities from "lowest" to "highest". Naturally, everyone set their jobs to "highest", so jobs that generated some weekly email report that nobody really read were competing at the same level as jobs that were actually business critical (e.g., necessary for ad auctions to work correctly or whatever).
So they introduced a new priority level: "Critical". The pattern repeated itself in short order.
It was a tragedy of the commons thing, and eventually it was literally impossible for a job to ever finish if it was marked anything other than "Critical".
So they introduced "Omega Critical". Now you needed to be on a permissions list maintained by the database team to raise something to "Omega Critical". I got on the list (because I was scheduling actually high-priority stuff). Then everyone in my org would come to me and ask me to set random bullshit to Omega Critical, because they knew I had permissions and "one more job won't hurt"...
I don't work there anymore, but I believe they have since developed even more weird categories like "Q4 Omega Critical" and who knows what else.
1. VP bangs heads with the equivalent VP in the other department, complaining there are too many requests. Can work, depends on the individuals.
2. Actually have a service level agreement. Requests are limited to X per hour/day/week/whatever the process needs, and billed accordingly. Have a bit of buffer in this, a little creative billing. Requests within the SLA are dealt with in the agreed turnaround time. Have a dashboard that clearly shows this to customers. Alert customers if they're consistently submitting beyond the SLA, and warn them they'll receive a degraded service unless they provide further funding.
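To make that concrete, here's a rough sketch of the quota part - the class name, numbers and window are all made up for illustration: each department gets an agreed number of requests per rolling window, and anything over the quota gets flagged so it can be billed or queued at a lower priority rather than silently absorbed.

    from collections import deque
    from datetime import datetime, timedelta

    class SlaQuota:
        def __init__(self, max_requests, window):
            self.max_requests = max_requests
            self.window = window
            self.timestamps = deque()

        def submit(self, now=None):
            """Record a request; return True if it falls within the agreed SLA."""
            now = now or datetime.now()
            # Drop requests that have aged out of the rolling window.
            while self.timestamps and now - self.timestamps[0] > self.window:
                self.timestamps.popleft()
            self.timestamps.append(now)
            return len(self.timestamps) <= self.max_requests

    # e.g. 20 requests per week agreed with one department
    marketing = SlaQuota(max_requests=20, window=timedelta(weeks=1))
    if not marketing.submit():
        print("Outside SLA: bill it or queue it at lower priority")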
Everyone knows that if you pour too much water into a cup, it will overflow.
Having a budget for each department and a cost for running the queries is a great idea... I'll try to implement it the next time I'm in this situation. (I have also had a ton of VPs - plus the CEO and the owner - come to me with "very urgent now now now!" requests... it would have been great to tell them "the top priority for me is this query worth $1000, can you beat that?")
Sure. That's how external service providers work - the advantage of being internal is in the buffer: you can 'do a favour' when needed. If it is the owner making an ASAP!OMG! request, perhaps take the chance for a chat and demonstrate that you have a system, and that you're not only managing but taking a stance of leadership (and point out that as it is internally charged, they're not losing any cash).
Data warehouse queries at Amazon worked like this (5 levels of priority). Everyone set the priority for all of their queries to the highest possible level, every time.
I worked at a place with such a button. Neat caveat: only VPs could uncheck it...
(And I managed to check it within my first month there. My boss kindly let me know the difference between a client saying "emergency" and a VP saying so....)
I've seen places where, through general abuse and repeated early dismissal of bug reports, people are extremely reluctant to call anything an emergency... even actual emergencies.
I just got a sudden bitter taste in my mouth when reading "P0" and remembering the people who used to say it out loud; is that normal?
Another great thing about inverting the priority numbers is that nothing can easily go beneath a floor of 0, though ultimately all of this is about fixing the symptom and not the cause.
The numbering system possibly comes from ITIL, where priority is determined as a product of two factors: impact (scope) and urgency. Those definitions seem to be pretty consistent across most IT orgs I've seen, regardless of how poorly executed or immature the org is, and they often include gating factors like the ones other comments in this subtree have been discussing. If some of the most immature and byzantine IT processes can get this nonsense settled, engineers can avoid the bikeshedding and goalpost-moving too, if they actually put some effort into a sensible system and enforce it.
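For what it's worth, that impact x urgency scheme usually ends up as a small lookup table rather than reporters picking a number directly. A minimal sketch - the labels and matrix values below are illustrative, not the official ITIL table:

    # Priority is derived from two gated inputs rather than set by the reporter.
    IMPACT = ("organization", "department", "single user")    # broad -> narrow
    URGENCY = ("work blocked", "work degraded", "annoyance")  # high -> low

    PRIORITY_MATRIX = [
        # urgency: blocked  degraded  annoyance
        [1, 2, 3],  # impact: organization
        [2, 3, 4],  # impact: department
        [3, 4, 5],  # impact: single user
    ]

    def priority(impact, urgency):
        """Look up priority (1 = highest) from impact and urgency."""
        return PRIORITY_MATRIX[IMPACT.index(impact)][URGENCY.index(urgency)]

    assert priority("organization", "work blocked") == 1  # drop everything
    assert priority("single user", "annoyance") == 5      # backlog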
I have a different issue: everything kind of always gets promoted to a must-have. So now I've made two priorities, need-to-have and nice-to-have. For any given collection of work, need-to-have items can make up no more than 60% of the estimated work hours. When the deadline is accepted by my team, it means that all need-to-haves are done at that deadline, and the nice-to-haves probably are, but no guarantees.
Every single fucking time the PO thinks the deadline is too late, she tries to remove nice-to-have issues - sorry, but that doesn't change the deadline.
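The nice thing about the 60% rule is that it's trivial to check mechanically when planning a batch of work. A rough sketch, with made-up field names:

    def plan_is_acceptable(items, cap=0.60):
        """items: iterable of (estimated_hours, is_need_to_have) tuples."""
        total = sum(hours for hours, _ in items)
        need = sum(hours for hours, must in items if must)
        # Need-to-have work may claim at most `cap` of the estimated hours,
        # so the remaining buffer absorbs overruns.
        return total > 0 and need / total <= cap

    work = [(16, True), (8, True), (10, False), (6, False)]
    print(plan_is_acceptable(work))  # True: 24 of 40 hours (60%) are need-to-have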
A former co-worker of mine told me about their priority tracking software, where you could enter a number from 0-5 for each bug, 0 was the highest priority, and only certain senior managers could set a priority of 0.
However, people soon discovered that priorities were stored as a float in the database, so they could enter any number 0 < n <= 5 in the field. Bugs quickly started to get priorities like 0.5, 0.25, 0.1, 0.05 and so on...
This is inevitably what happens when the people reporting the issues get to decide their priority (I've seen this even in a two-person dynamic). You have to have discipline, and you usually have to have an impartial outsider (to the issue at hand) who doesn't care about other people's panic to actually set priority levels.
I've seen systems that have two fields, e.g. priority and severity, where severity is "how important the client thinks it is" and priority is "how important we think it is".
Our system has another one: who it impacts. Does it impact a user, a department, or the entire organization? The difference between "my crap's broken", "our crap's broken", or "everybody's crap's broken".
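Putting those fields together (severity = what the reporter claims, priority = what we decide, impact = who is affected), a record might look something like this - the names, scales and sort order are just illustrative assumptions:

    from dataclasses import dataclass
    from enum import IntEnum

    class Impact(IntEnum):
        USER = 1          # "my crap's broken"
        DEPARTMENT = 2    # "our crap's broken"
        ORGANIZATION = 3  # "everybody's crap's broken"

    @dataclass
    class Issue:
        title: str
        severity: int   # 1 = most severe .. 5 = cosmetic, as reported by the client
        priority: int   # 1 = highest .. 5 = lowest, as triaged by us
        impact: Impact

        def triage_rank(self):
            """Sort key: our priority first, then scope, then the client's view."""
            return (self.priority, -self.impact, self.severity)

    issues = [
        Issue("Typo on login page", severity=1, priority=4, impact=Impact.USER),
        Issue("Payroll export fails", severity=2, priority=1, impact=Impact.ORGANIZATION),
    ]
    for issue in sorted(issues, key=Issue.triage_rank):
        print(issue.title)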
No, sooner or later an exec swoops in with "this is our largest customer!!!!". It's still a P1, it just gets bumped to the front of the line - because revenue. Nothing wrong with that, and no need for a different numbering system. Their P1 is no different from a smaller customer's P1. They just get to be first in line because they pay to play.
I once had a priority list where a few items had a "low" priority, the rest being "high", "very high" and "extremely high". It gave me a depressing feeling because we had to delay stuff that was supposedly "high" priority for someone.
The problem with priorities is that multiple issues can have the same priority. If you give every issue its own priority number, then you no longer have this problem.
The best solution against "priority inflation" that I ever saw: support request form of the DreamHost web hosting provider had priority levels with vivid descriptions (examples) of their meaning, to the extent that the top priority level was called "people are dying". I doubt anyone dared to select that priority unless people were indeed dying.