> Not everything has to be designed for a super critical prod environment with >10 coders working non stop on it.
You don't need a super-critical prod environment to have decent code. Half of these hardcode environment configuration, others have hidden side effects that the caller of the function can't control at all, and others badly reimplement things that already exist (@redirect -> you want logging for this, @stacktrace -> use a debugger).
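For instance, a minimal sketch of what I mean by "already exist", assuming @redirect's job is roughly to send a function's print output to a file (my assumption, not the article's wording):

```python
import contextlib
import logging

# Plain stdlib logging: the caller decides the level, destination, and format.
logging.basicConfig(filename="run.log", level=logging.INFO)
log = logging.getLogger(__name__)

def train_model():
    log.info("starting training")  # instead of print() + a custom @redirect

# And if some function insists on printing, the stdlib already has redirection:
with open("captured.txt", "w") as f, contextlib.redirect_stdout(f):
    print("this line goes to captured.txt")
```

The point isn't that the stdlib versions are fancier; it's that the caller stays in control of where output goes instead of inheriting a hidden side effect from a decorator.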
A debugger isn't necessarily available in all environments. At my employer, data scientists do a significant amount of their work in Databricks, where, as far as I know, it's impossible to drop into a debugger to trace execution.
That said, I'm not really defending these specific decorators.
> Not everything has to be designed for a super critical prod environment with >10 coders working non stop on it.
And even when it does, cargo-culting rules of thumb is generally the wrong way to do that. Best practices are better treated as the Pirate Code than the Divine Writ.
I find them creative too, and I didn't say they are terrible. Actually, I'm also from a data background, and this is exactly the type of stuff I would come up with myself.
But as I've recently been trying to improve my software skills, I notice that while those are indeed useful in the short term, in the long term they are not worth the price. The @production one seems like a disaster waiting to happen.
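To make that concern concrete, here's a purely hypothetical sketch of the failure mode I mean (my reconstruction, not the article's code), assuming @production gates the wrapped function on a hardcoded environment check:

```python
import functools
import os

def production(func):
    # Hypothetical: only run the wrapped function when a hardcoded env flag says "production".
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if os.environ.get("ENV") != "production":
            return None  # silently does nothing; the caller has no way to tell
        return func(*args, **kwargs)
    return wrapper

@production
def write_to_warehouse(rows):
    print(f"writing {len(rows)} rows")

write_to_warehouse([1, 2, 3])  # in a dev shell this quietly returns None
```

The environment name is baked in, the skip is invisible to the caller, and the first time someone runs it with the wrong ENV set, data either silently doesn't get written or gets written somewhere it shouldn't.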
The case I've seen most frequently is developers falling into a bit of a trap because codebases from several years ago don't look like they were written yesterday. "Best practices" is often the justification for taking on work that doesn't have a clear benefit.
Some folks _really_ insist on changing everything to be "modern" and follow "best practices" using "up to date tooling" (invariably for a non-consensus-but-very-cool definition of "modern" and "up to date"). Often, it's switching to something that's only been around for 6 months over tooling that's _incredibly_ well supported and has been around for decades. I'm not opposed to using something new, but give me a reason beyond "it's new and everyone uses it now". That's doubly true when the new approach has the old tooling as a dependency and is basically a different interface to the same things (i.e. adding a dependency without taking one away).
There are lots of things that need to be updated to be more modern, sure. But there's also a trap of lots of tempting-but-relatively-low-value "best practice" updates that some folks will insist on spending 100% of their time on.
Another common example is some variant of this situation:
"Yes, X looks like a wart and is for many common use cases. It's there because of functionality Y needed by projects A,B,C. Downstream projects D,E,F,G already have workarounds in place where it matters. If you remove the wart, it breaks key functionality for projects A,B,C and means that D,E,F,G have to change the way they use this. Sure, you could handle this in a different way that could be a bit cleaner, but is it worth changing? Changing it is non-trivial and means a bunch of other people suddenly need to do extra work for no clear benefit. Oh, you really think it is, and want to devote the next 6 months to doing that and only that..."
Sometimes things really need some love and attention to get up to date. However, it's also important to avoid work that's tempting to do, but low-impact and high-risk (in terms of unintended consequences).
It would be more interesting to point out the parts you feel are so terrible.
Not everything has to be designed for a super critical prod environment with >10 coders working non stop on it.