If you're burning out doing agile, you're doing agile wrong.
Don't do sprints. Have a continuous backlog. Don't do overtime. Don't make estimates. Always do the simplest thing. Only ever do the most important thing, as defined by the stakeholder.
I've written and talked about this at great length (links below). The fact that people suggest agile gives you burnout reinforces my experience that Scrum is largely misinterpreted, with people incorrectly focusing on sprint commitments. And if Scrum is so commonly misinterpreted, it is flawed.
https://www.linkedin.com/pulse/scrum-makes-you-dumb-daniel-j...
https://youtu.be/k9duArRuSjQ
The business wants to know how much feature X is going to cost and when they can expect it.
They need to know that, because they need to decide if it's worth it in the first place, or because they need to plan follow-up actions for when the feature will be done.
If the developer doesn't make estimates, you're just forcing other people to make their own estimates, which they'll hold you to.
> The business wants to know how much feature X is going to cost and when they can expect it.
Of course they do. We all want things that are impossible to have. I want to know the AAPL stock price in 6 months.
The traditional way to manage this impossibility is that engineering lies about it (they have to lie, because they can't know either), and once people are lying to each other, trust is unlikely to arise.
The agile concept of "velocity" is the best way I know of managing this. It's not very good, and it's often a victim of Goodhart's law.
It's not impossible to estimate roughly how long something will take.
If you consistently get it wrong, either:
1. You're not breaking down the work into small enough chunks to properly think about how long it will take
2. You are probably consistently under-estimating (or, more rarely, over-estimating) and should be able to correct for that. For me, I have to triple my estimates; it always takes three times longer than I think it will
To claim estimating is "impossible", when many of us do it absolutely fine, is ludicrous.
Separately, there's scope creep, but that's another matter and again can be managed (e.g. "if you want X extra functionality for the same cost, you're going to have to drop feature Y, which will take roughly the same time").
I've seen Agile™ estimation break down when the task is either
1. something nobody has done before (or has no close analog), or
2. is so interconnected that it can't be broken down.
The former is just a matter of hiring more experienced engineers _or_ allocating exploratory/prototyping time. The uncertainty stays high, but these kinds of tasks become rarer over time.
For the latter, the common refrain is "break it down", but there certainly exists a relatively common type of work that must be completed all at once. And I find it increases as the complexity or popularity of the product increases, so with time. Perhaps, then, the metaphor of building becomes less appropriate, and surgery paints a more accurate picture.
Builders can construct a house, then add a garage, go work on another house, then return and add a guest bedroom, then remodel the kitchen, all with relatively minimal pausing or switching cost. But once a patient is put under and opened up, the surgeon really should work on finishing up that one patient before moving on to the next one. And for some weird reason we tend to prefer one big surgery to multiple small "atomic" ones.
> Builders can construct a house, then add a garage, go work on another house, then return and add a guest bedroom, then remodel the kitchen, all with relatively minimal pausing or switching cost.
Have you worked with contractors? This is so so so so not true.
> 1. You're not breaking down the work into small enough chunks to properly think about how long it will take
This usually means prototyping, which means actual coding, which means you can't do it: you already had to give your estimate before any coding started. To put it another way, places that require estimates for planning want them during planning and don't allow time for this. On the upside, when they do let you take the time to prototype, they take your prototype and put it right into production, because then the estimate is 0.
> 2. You are probably consistently under-estimating (or, more rarely, over-estimating) and should be able to correct for that. For me, I have to triple my estimates; it always takes three times longer than I think it will
This is a common trick, but it really just means you are not estimating so much as guessing that the work will take less than some ceiling. This would be more apparent if you gave a range: from what you actually estimated up to that estimate times the 3x padding.
> 1. You're not breaking down the work into small enough chunks to properly think about how long it will take
This is exactly what waterfall was. It pre-planned all the small tasks beforehand and required stopping the world and replanning everything when something changed.
Agile was an attempt to move away from that and create a feedback loop where you do some limited work, learn something from it, and then do another thing after you've internalized what you learned. (Original "sprints" were a coordination mechanism between departments that was applied in the automotive industry: there was lots of dead space in-between them). The whole point was the iteration was small enough that it didn't actually matter if your estimate was correct or not.
This bastardization where Agile has become synonymous with estimation accuracy is completely against its original spirit. People have started to care about estimates because Software Project Managers wouldn't have a job if there wasn't a need for heavy-handed planning sessions.
I think estimation accuracy is not the main point nowadays, but instead the simple fact that the viability/profitability of every project matters. Even if there's an up-front pile of money to spend (a quarterly/yearly budget), there are probably multiple competing ideas on what to spend it on, and of course these usually consist of and involve software and its development.
Of course this is why a stable, empowered team (project ownership, refactors, etc.) can usually deliver changes faster, at lower cost, and with greater consistency than starting a new project every time (which might involve people who have never seen the stack, nor the business domain) to modify something on a system.
> And if you do, that time is essentially wasted - something that shouldn't be billable.
If I develop a feature for company A, then reuse it for company B, isn't A footing the bill for B? I mean, we all do that, but I think it's worth thinking about.
It might be billable to company B (they got the value add), but internally that should not have a high cost (in hours, resources) attached.
Edit: that is to say, the time is wasted, but you might very well be able to charge a premium on the experience. In my original comment I was mostly talking about making feature x for customer a, then making feature y for customer a, where x and y are pretty much the same.
That's the problem. In theory every bridge is the same. Yet you need to plan each one of them.
Similarly every run of the mill business-as-usual boring-as-fuck CRUD corporate internal "app" is the same, yet they still need a lot of work, and they are still hard to "estimate".
Initial estimation (over many sprints, with either a greenfield project or a new dev team) is always very hard, almost a fool's errand.
After the first few sprints the uncertainty reduces.
And after the team gains experience with the system (business domain + codebase + infrastructure, if applicable), they are much better at scoring tickets (Scrum poker), and those points can then be converted back into time.
Directly estimating time (asking programmers for time estimates) is just something that never works (or if it does, it means the programmers are making this adjustment internally), as humans get confused by constantly having to recalculate their intuitive feeling of required time based on how long it actually took the last time they had that feeling.
50% off is understandable? Where can I find such forgiving clients? My clients' lawyers would eat me alive if I started to bill them in such a manner. They don't want to hear about uncertainty; they are buying professionals, and this kind of estimation looks to them like we don't know what we are doing.
Sure we all know this, but for business people with money this looks fishy.
This xkcd sums up the problem perfectly:
https://xkcd.com/1425/
How can a non-technical person tell the difference between a task's inherent uncertainty and your incompetence? They can't, and that's why they will buy from your competition, who will claim there is no uncertainty.
> To claim estimating is "impossible", when many of us do it absolutely fine, is ludicrous.
How do you do it? I mean, how do you know when you've reached the necessary granularity? And even if you know, sometimes it just means more questions that the client might not be able to answer at that time. Do you come up with a worst and best case and carry that delta up the breakdown hierarchy of components? Are you able to do this for every kind of task that can come up in a project? (From frontend design to backend implementation, third-party system integration, infrastructure setup, and then product deployment.)
And velocity should be an internal team measurement, not shared with outsiders, because (Goodhart's law) when management says "Let's increase our velocity" and the team and developers are GRADED/PROMOTED on their velocity, you'll get point/estimate inflation, which will hurt you more in the long run, since the inflated estimates now allow the work to fill the time (Parkinson's law).
> The traditional way to manage this impossibility is that engineering lies about it (they have to lie, because they can't know either)
Honest estimates with uncertainty are not lies, and not having certainty doesn't prevent estimation.
But, yes, people seem to often decide that being uncertain justifies self-serving lies instead of honesty (and sometimes the environment encourages that by punishing honest estimates.)
What's the usual processing speed of a software engineer when it comes to reading through and understanding design & specification?
We found that usually either the documentation/specification/requirements are too fine-grained (and then they change all the time, but there's no real effort/bandwidth to do change management on the specs), or they are not detailed enough, which makes the estimation process a useless guessing game about what they might mean by this or that.
See, I would expect the engineers to pipe up and explain that. It takes collaboration to get it right, and it takes the engineers setting expectations properly for what they need to estimate accurately.
It’s a collaborative effort, not a one-way street, as with most things.
Another bit of wisdom I got from Kent Beck is that there are three basic controls to every project: schedule, scope, and resources. You can only control two of them. And resources are generally fixed at the beginning of the project(1), so most projects are either scope-bound (we must have all these features) or schedule-bound (we must hit this hard date). If you're scope-bound, estimation failure means you'll be late. If you're schedule-bound, estimation failure means you'll be incomplete.
This can be a very hard pill for the business to swallow. They want it all, and they want a predictable schedule. But Beck's triangle is akin to thermodynamics. Do you want the volume, or the pressure, or the temperature? When fixing devops-related and agile-related process problems, I often hear "But we're a schedule-driven company!" when they have a scope-driven problem like process transformation.
1. As The Mythical Man-Month pointed out almost 50 years ago, adding resources to a late project makes it even later.
> If you're schedule-bound, estimation failure means you'll be incomplete.
Schedule-bound development (or Deadline Driven Development) can actually work well with minimal estimation, as long as you are truly building an MVP (minimum viable product): you have to ruthlessly slash features and build only the core features that are going to deliver the most bang for the buck.
There is a fourth control, of course, which comes into play when nothing else will give room: quality. You can take many shortcuts to deliver on time, in full scope, with the resources available. But the result will be buggy and/or unmaintainable.
A surprising number of companies choose this path, I guess most without realizing it.
Quality is basically inherent in scope. Is testing the product in scope? Yes, then it will take up precious resources (time). Do you want to test it on every platform? Yes, even more time. (And of course the same goes for code review, writing unit tests, planning, writing and discussing a design document, etc.)
> They need to know that, because they need to decide if it's worth it in the first place
If this is the case, they should also be able to state the threshold above which the item would be "not worth it", and I would assume this is significantly easier to figure out than it is for developers to build a decent estimate.
From a developer point-of-view, the first high-level estimate is then much, much simpler: "Is it going to take more or less than $not_worth_it to build?"
If the answer is "more", then it gets dropped -- at least in its current state. It could be re-scoped later as a new item with a smaller MVP, for example.
If the answer is "less", then the developer can put in the time to get a proper estimate. I think there might also be an argument that if the backlog is properly ranked the detailed estimate also becomes pointless -- just work on it until it's done. No one should be doing detailed estimates on items that are more than a few weeks/sprints away in the first place.
Disclaimer: I've only recently been thinking about this abstractly, and have not yet tested this in practice, though I'd love to have this discussion with anyone that has.
Nobody should make estimates. They're always wrong.
Do your best to break tasks up so that all tasks are the same size. Then work on tasks. You'll find a stable average of tasks per amount of time and that will let you forecast how long things will take, how much they'll cost, etc.
That's how you figure out when things will be done.
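A minimal sketch of that forecasting style, with made-up numbers (the weekly counts and backlog size are hypothetical): track how many same-size tasks get done per week, take the stable average, and project the remaining backlog forward.

```python
from datetime import date, timedelta

# Same-size tasks completed per week, from the team's own history
# (hypothetical numbers for illustration).
tasks_done_per_week = [4, 6, 5, 5, 4, 6]

# The "stable average" described above.
throughput = sum(tasks_done_per_week) / len(tasks_done_per_week)

remaining_tasks = 37
weeks_left = remaining_tasks / throughput

forecast = date.today() + timedelta(weeks=weeks_left)
print(f"~{throughput:.1f} tasks/week -> done around {forecast}")
```

No per-task estimates anywhere; the forecast falls out of measured throughput.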
Some tasks have a high degree of uncertainty. Others don't.
Back when I was using a ruby-on-rails style framework (in PHP) I would frequently get 20 hours of work estimated properly down to 15 minutes when it came to adding simple features to a web application.
If on the other hand you are trying to figure out the gap between what the documentation says should work and what actually works, that is hard to estimate.
That act of breaking up and organizing tasks of the same size, that's what estimation is. This feature has these tasks, historically we complete these tasks in this time, so here's a hard minimum for a completion date (which implies cost). Slap on an appropriate fudge factor for dealing with other teams, testing and burn in, and general error bounds as needed.
You've described scrum; what you're doing is scrum.
Points are a team-internal measure of a task. You can convert points to time after the fact, and then after a few sprints (once you have a fairly stable average velocity) you can convert points to hours and do a forecast/estimation.
// In my mind you forecast the date of completion, but you estimate the amount of work. It's probably just mindless semantics, after all saying you can estimate the date of completion sounds just as natural, but saying you can forecast the amount of work sounds a bit unnatural.
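A sketch of that points-to-time conversion, assuming a few sprints of history (all numbers hypothetical):

```python
# Story points completed per finished sprint (hypothetical history).
sprint_velocities = [21, 18, 24, 20]

# Fairly stable average velocity after a few sprints.
avg_velocity = sum(sprint_velocities) / len(sprint_velocities)

backlog_points = 130                  # points left, as scored by the team
sprints_needed = backlog_points / avg_velocity

sprint_length_days = 10               # two working weeks per sprint
working_days = sprints_needed * sprint_length_days
print(f"velocity ~{avg_velocity:.0f} pts/sprint; "
      f"~{sprints_needed:.1f} sprints (~{working_days:.0f} working days)")
```

Per the semantics above: the points estimate the amount of work; the date that falls out of the velocity is the forecast.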
This is the perfect recipe for never delivering anything. Without release dates developers will continue building and gold plating and building and gold plating. Create your tasks, estimate your tasks, put a date out there, and hit the date. If the product has bugs, unfinished features, then so be it. Users understand flaws. They don't understand missed dates.
That's interesting, as in my experience it's exactly the other way about. It's the experienced developers that try to gold plate, to avoid the issues they had in the last project or two projects ago, whereas the juniors ship code quickly but unfortunately often incomplete, and certainly lacking a reasonable amount of test coverage.
I don't think the parent poster was saying that the learning itself was bad.
I think the gold plating they're referring to is a form of over-correcting. The learning itself is good, and correcting prior issues is good. But over-correcting and over-learning can be problematic and can lead to gold plating.
I don't think there's any easy indication of the line between the right-amount of correction and over-correction, but I don't think it's unreasonable to state that one can over-correct based on prior experiences.
The second-system effect is definitely real and is a trap most developers will fall into. I think gold plating introduces liability and it would be better to ship early. I am, however, conflating my own behavior and anecdotal observations to paint broad strokes.
Estimates are only "always wrong" if they are concrete point estimates, presented as if they were certain.
If, instead, they are 90% ranges (I am 90% sure that this will be done between x and y) then it is much easier to estimate accurately. It is also easier to spot bullshit (if the spread doesn't go up as your time to completion moves further from now, it is probably a bullshit estimate).
I really love the method from The Clean Coder, where you make best/median/worst estimates for tasks and then add them up to a mean and standard deviation. This helps capture the truth of “we don’t know how long it’s going to take, but it’s likely between x and y”.
Where this still falls apart, for me, is knowing how many hours per day I’ll be able to spend on each project. I have multiple clients, and things come up. This method has gotten me very good at the budget aspect of project estimation, but the scheduling aspect still slips some (it’s rare that more hours in a day become available)
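A minimal sketch of that roll-up, assuming the PERT-style trivariate formulas The Clean Coder describes (mu = (best + 4*nominal + worst)/6, sigma = (worst - best)/6; the task numbers here are made up):

```python
import math

# (best, nominal, worst) estimates in days per task (hypothetical numbers).
tasks = [
    (1.0, 2.0, 5.0),
    (2.0, 4.0, 10.0),
    (0.5, 1.0, 3.0),
]

mus = [(b + 4 * n + w) / 6 for b, n, w in tasks]     # expected duration per task
sigmas = [(w - b) / 6 for b, n, w in tasks]          # std deviation per task

mu_total = sum(mus)                                   # expected durations add
sigma_total = math.sqrt(sum(s * s for s in sigmas))   # variances add for independent tasks

print(f"likely between {mu_total - sigma_total:.1f} and "
      f"{mu_total + sigma_total:.1f} days (expected {mu_total:.1f})")
```

Note that the total spread grows as tasks accumulate, which matches the point above: a range whose spread doesn't widen with the horizon is suspect.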
I think this is totally reasonable at certain times. E.g. how long will it take to add feature X with scope Y to our existing product is a decent thing to estimate (and estimate only), so stakeholders can prioritise appropriately.
I think most problems people have with estimates come when they are applied to things too large, or too small, to reasonably estimate. E.g. how long will it take to build this product from scratch with a laundry list of features. Or, more commonly, to micromanage. Why does anyone care whether this task will take 2 days as opposed to 3? That's not meaningful information unless you're mistakenly expecting the individual user story estimates to add up to a reliable assembly line of work.
I’m happy to estimate when it has some business purpose. If not, it’s meaningless busy work or worse.
Theoretically. Practically, they don't do that kind of planning, and a high estimate leads to pressure to lower it (or to making you feel ashamed of it being too big, etc.), which most developers will give in to, because developers tend to suck at negotiation.
That's where it breaks down. If you strip the Scrum-Industrial Complex nonsense away, one of the basic concepts of Scrum planning is that the business gets to define the stories, and the developers get to assign the points (or whatever mechanism is being used to estimate). Business sets the scope, technical sets the resource requirements.
If developers can't hold the line on estimates, they're toast. Nothing will ever get done on time or under budget, because the organization is focused on basic dishonesty about what work can actually get done. Which means people are being rewarded for the wrong things. Measure by estimate accuracy rather than promises made, and you'll see a lot more honesty in estimation.
Only issue being that the estimates one gets from poorly planned sprints are probably less reliable than just asking the devs how long the project will take and going with it.
Good estimates come from watching people deliver over time and scientifically comparing empirical performance to subjective estimates, not asking them to predict. Joel Spolsky has a great blog post on Painless Software Schedules.
There is nothing scientific about this method in my opinion. You do not make consistent reproducible experiments, you do not control for anything, you haven't even formed a null hypothesis etc.
If it's optional, they need to know to make a decision to include or exclude it.
If it's not optional for the project, but the project itself is optional, they need to know to make a decision to kill or keep the project.
If it's not optional to the project, and the project is not optional to the business, they need to know to decide whether they keep or fold the business.
In any case, if they might choose to keep the business, project, and feature, they need to know because they have to budget for the cost if they do so.
So, essentially, business always needs to know the cost.
>Don't do sprints. Have a continuous backlog. Don't do overtime. Don't make estimates. Always do the simplest thing. Only ever do the most important thing, as defined by the stakeholder.
The problem with this is that the stakeholder might not understand what simple means. It happens here all the time.
We get "simple" requests for verbiage changes, but after reviewing the story, the verbiage request isn't universal. It only applies to certain offerings, and the stakeholders only want the verbiage the be applied after a certain step in the application. This is still a relatively simple change, but when factoring in all the other "simple" requests that involve complex logic, changing displayed text becomes relatively complex.
We do sprints because it's our time box to see how close we are to hitting the mark. A sprint isn't a hard deadline in which the team must kill themselves to get everything finished. It's an arbitrary passage of time for setting goal to keep on track with what is going to be released. If you have something that's releasing two months into the future, it's easy to say that you can still make time although your first two weeks were riddled with unexpected complications and stoppages. A sprint forces us to focus on what should have already been completed to re-prioritize if necessary. And you can't feasibly do that without an estimation.
The two biggest problems with estimations are underestimating and treating estimations as promises. It's hard to estimate. So the best course of action is to make stories as small as possible. Probably smaller than someone would consider rational. If not, at least have the stories divided into individual tasks or chunks. Then once you have estimations, treat them as goals rather than deadlines. Use your sprint review as a time to honestly reflect on why the estimation was missed. Then, and this is critical to successfully estimating in the future, use the notes of reflection to make better estimates.
> The problem with this is that the stakeholder might not understand what simple means. It happens here all the time.
"Do the simplest thing" means don't overengineer, not necessarily that the feature won't be complex. You only code up what helps fulfill that particular story's definition of "done."
As for complex features: when my stakeholder asks for some big complex change, it's almost always decomposable into much simpler stories. Maybe those have to be hidden behind feature flags until the whole epic is done, but they're shippable individually. Doing that decomposition up front helps demonstrate the complexity to the stakeholder and takes some pressure off of me. It also makes them feel secure because they have more granular insight into progress; that they're not sending me off on some Lewis & Clark expedition.
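A minimal sketch of that flag-gating, with hypothetical names (`flag_enabled` and the pricing functions are invented for illustration), just to show how a partial story can ship dark:

```python
import os

# Hypothetical flag check; a real project might use a config service
# or a feature-flag library instead of an environment variable.
def flag_enabled(name: str) -> bool:
    return os.environ.get(f"FLAG_{name.upper()}", "0") == "1"

def legacy_total(cart: list[float]) -> float:
    return sum(cart)

def new_total(cart: list[float]) -> float:
    # Part of an unfinished epic: merged and shippable, but dark
    # until the flag flips.
    return sum(cart) * 0.9

def checkout_total(cart: list[float]) -> float:
    if flag_enabled("new_pricing"):
        return new_total(cart)
    return legacy_total(cart)

print(checkout_total([10.0, 5.0]))  # legacy path unless FLAG_NEW_PRICING=1
```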
The number one reason agile projects fail, in my experience, has absolutely nothing to do with planning sessions, sprint cadence, or estimation. It's because the client was not properly prepared to accept iterative delivery or play their part as product owner. I see a lot of teams organize themselves around a well-groomed backlog and set their priorities, only to have clients come in and ask for deadlines and fixed scopes and all the other stuff that is anathema to agile. If your client is able to set priorities effectively and allow a slower, quality-driven model, then everything else becomes just so much easier.
I just read a marvelous book called Handmade, by a fine furniture woodworker, and something he says over and over is "Go slow to go fast". The sales pitch to the business for a well-controlled agile process is that it maximizes productivity. Shifting priorities and poor planning undermine the productivity of the development team.
Ironically, such clients also seem to expect that whatever additions/changes they dream up should be able to be folded in to the plan willy-nilly. Whereas if they accepted an iterative process that would come naturally, without constant re-negotiation or ill will.
They can! That's exactly the point of agile and is how you get them onboard. When clients want a feature that is weird or complicated or whatever, the answer is never "no" the answer is "sure, now tell us where it fits in the priority list".
And agility in this case being Business Agility: the business being able to change course, not being held hostage to a years-long plan which cannot be changed.
Agility has nothing to do with software development, and everything with business.
I tend to view scrum, kanban, XP, and also more traditional tools like the V-model as tool boxes, or maybe some kind of pre-configured framework. They combine agile or other project management tools in order to solve problems the team or the team's stakeholders have.
However, this doesn't mean you can't take out and exchange parts. Some teams' work profiles fit well with set sprints and a stable set of tasks. Others', like ops teams with few people, don't, due to incalculable factors like incident management. Prioritization might need different mechanics depending on the position and responsibilities of the team.
It's all a big grab bag of tools to create some working workflow for a team.
If you can accurately estimate how long a software task will take, you should have already made a reusable component or automation to generate that code.
Estimates are meaningless. I've seen PhDs waste endless hours faffing with estimate-calculating spreadsheets.
Fundamentally, the universe is unpredictable. Chaotic and complex systems require their starting conditions to be measured to an infinite level of precision to be predictable. The Heisenberg uncertainty principle means this is, as far as physics can tell, impossible.
On a more practical, macro level, a complex adaptive system becomes unpredictable once three feedback loops are present (the three-body problem is related).
Modern computer systems are unpredictable because we cannot predict the interplay between levels of abstraction.
It does not matter how smart you are. Unless you have perfect knowledge of all levels of abstraction in the system below you, even in a macroscopic sense you cannot predict the future. Precise estimates will be inaccurate.
Confidence ranges and superprobabilities are of some use. Discrete and precise estimates are an utter waste of time at best, and are dangerous and misleading at worst.
EDIT: Written on a mobile waiting for plane take-off, hence the lack of citations. For more on complex adaptive systems, I recommend the works of Murray Gell-Mann and the research output of the Santa Fe Institute, particularly Scott E. Page.
Another way to put it: If you know how long something will take in advance, you have a solution in mind. It is unlikely that this solution is (A) the best one and (B) the one you will actually implement. It would be stupid to ignore information you learned along the way. If you could actually predict the future you should invest in the lottery, not in software.
EDIT: Of course there are projects where you actually know exactly what to do. Happens a lot in consulting. That has nothing to do with Agile though.
I think it often comes from the other direction, the manager/PM side: they will be looking for a methodology that gives them estimates and team commitment, and Scrum will be an option.
There is an awful lot of PMs who will candidly explain that they don't really care about the methodology, they just need stuff to get done and know when.
That may technically be true, but I've worked at four shops that have implemented Agile methodologies and it hasn't been true for any of them, nor for most of the engineers I personally know but don't work with. I do personally know one person who works on a team where this is true, but he's the only one.
This may be doing Agile wrong, but if something can be done wrong so easily that it's common, I count that as a serious flaw in the methodology.
I'd push back on calling Agile a methodology. The Agile Manifesto (http://agilemanifesto.org) is a set of ideals, that's all. These ideals often run afoul of conventional wisdom in traditional management/business/sales circles, so we end up with a set of procedures masquerading as "Agile" in order to not upset prevailing sensibilities.
Said another way, estimates are not promises. Don't crunch to meet them. I think it should be phrased as "no deadlines".
The whole sprint structure is so you're constantly adjusting your plans and estimations at some predictable time. Without a sprint you can end up with randomization.
I've noticed some managers and project managers love to emphasize that sprint plans are "commitments". "OK, is this what we're committing to for this sprint? Is everyone comfortable committing to this?" LOL. OK. Maybe that guilts the young bloods into doing free overtime or something. But no.
It's like when car salesmen try to get you to name a number you'd definitely buy the car for and sign it on some not-at-all-binding-or-official piece of paper, like that means something, before they go back and "ask their boss if they can make it work". Psychological trickery bullshit.
It's a very similar psychological trick. I've even heard project managers I like, and who I think are generally very good, do it. I think it's just part of their language now, and some may not realize they're doing anything kinda shitty. But it's something straight out of Cialdini's Influence: The Psychology of Persuasion, and unsubtle enough that even I can tell what it is.
"Do you feel comfortable committing to these stories?" Always a question.
Committing. If you don't make it, for any reason not obviously caused by "outside blockers", you're morally responsible, and maybe even then. What are you, some kind of liar? Quite a step up from an estimate. Kind of harsher, even, than deadlines, which are oh-so-rarely as "deadly" as the name implies. But people close enough to Scrumish processes to get in the meetings, yet far enough away not to be writing code or testing things or putting out designs, sure seem to like that word. Commit.
I agree with most of it minus the "Don't make estimates" part. Without making some sort of estimate, things just don't work.
I guess maybe it could work assuming you fully control a single product. Everywhere I have worked we need the estimates simply for coordination of all the moving parts.
I think by estimates they might mean deadlines. It’s well worth asking, as an engineer, several questions like: “How complicated is this? What are all the moving parts? Who’s going to need to be involved to get this out the door?”
But it’s sometimes counterproductive to say “I think feature X will be completed by Y” and then have that estimate turn into a deadline.
Maybe you could rephrase it to "Don't make estimates before work is well underway". The main problem with estimates is that they are usually just wild guesses and pretty useless. But once you've got going on something, have had time to think it through and test your ideas, you can usually give at least a very rough estimate at that point.
>Don't do sprints. Have a continuous backlog. Don't do overtime. Don't make estimates. Always do the simplest thing. Only ever do the most important thing, as defined by the stakeholder.
you just described doing agile wrong...
You're advocating for a version of agile lite, not agile.
Or, alternatively, agile is different to everyone you talk to, and your agile isn't the same as someone else's agile.
Take your pick, but sprints, estimations, and not only doing the one most important thing are basic tenets of agile as most people see it.
Would you prefer the centralized planning model of waterfall? Of course none of these systems are perfect, but it's been a major step in a better direction for everyone involved.
What I would prefer is to take a step back and look at all the major interested parties who will deform the development process. If management puts too much pressure on, a good process would give tech and the customer more chances to counter said pressure, to avoid tech debt and badly implemented features.
I want a process that reacts to the situation, in favour of the product and of long-term goals, and that actively resists people who try to game it for whatever reason. Agile is not that.
"Agile" is a set of principles and values as defined in the Manifesto for Agile Software Development. If how you work contrary to those principles, then you simply are not "Agile", no matter if you call it that.
"Agile" will never fix bad management, nor will anything else for that matter.
You have to do estimates, it's how a business works. Not knowing when something will be delivered is too much to ask a company to deal with.
Also, "always do the simplest thing" is one of those phrases that can be twisted and morphed to support literally anything, which makes the phrase useless. It feels good to hear and say, but the reality is it doesn't help you out of a jam.
Sprints are exactly what you're describing, except with accountability built in. That shouldn't scare people, but it does, and that is what causes burnout. Fixing the fear around accountability is how you fix burnout, not eliminating the thing that your boss can use to justify your job.