I think they're saying that, because it builds on the previous result, having any one effort claim a record doesn't really make sense.
Like imagine there was a record for longest novel published, and what you did was take the previous longest novel and add the word "hello" to the end of it. Does the person who added "hello" get the record?
Is there a way to do non-scalar multiplication? E.g. if I want to say "what is the sum of three dice rolls" (ignoring the fact that that's not a normal distro), I want to do 1~6 * 3 = 1~6 + 1~6 + 1~6 = 6~15. But instead it does 1~6 * 3 = 3~18. That makes it really difficult to do something like "how long will it take to complete 1000 tasks that each take 10-100 days?"
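For what it's worth, the difference is easy to see with a plain Monte Carlo sketch in Python (this is not the calculator's actual code; N and ci95 are names I made up):

    import random

    N = 100_000

    # "1~6 * 3": one roll, scaled by 3 -> support {3, 6, 9, 12, 15, 18}
    scaled = [random.randint(1, 6) * 3 for _ in range(N)]

    # "1~6 + 1~6 + 1~6": three independent rolls -> support 3..18, peaked near 10.5
    summed = [sum(random.randint(1, 6) for _ in range(3)) for _ in range(N)]

    def ci95(xs):
        xs = sorted(xs)
        return xs[int(0.025 * len(xs))], xs[int(0.975 * len(xs))]

    print("scaled 95% interval:", ci95(scaled))  # ~(3, 18): scaling keeps the full spread
    print("summed 95% interval:", ci95(summed))  # ~(5, 16): summing concentrates the mass

Scaling one sample stretches the whole distribution, while summing three independent samples lets the highs and lows cancel, which is exactly what the 1000-task question needs.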
So the context of the quiz is software estimation, where I assume it's an intentional parable of estimating something you haven't seen before. It's trying to demonstrate that your "5-7 days" estimate probably represents far more certainty than you intended.
For some of these, your answer could span orders of magnitude. E.g. my answer for the heaviest blue whale would probably be 5-500 tons because I don't have a good concept of things that weigh 500 tons. The important point is that I'm right around 9 times in 10, not that I had a precise estimate.
I don't know, an estimate spanning three orders of magnitude doesn't seem useful.
To continue your example of 5-7 days, it would turn into an estimate of 5-700 days. So somewhere between a week and two years. And fair enough, whatever you're estimating will land somewhere in between. But how do I proceed from there with actual planning or budget?
> But how do I proceed from there with actual planning or budget?
You make up the number you wanted to hear in the first place that ostensibly works with the rest of the schedule. That’s why engineering estimates are so useless - it’s not that they’re inaccurate or unrealistic - it’s that if we insisted on giving them realistic estimates we’d get fired and replaced by someone else who is willing to appease management and just kick the can down the road a few more weeks.
Your question is akin to asking ‘how do I make the tail wag the dog?’
Your budget should be allocated at, say, 80% confidence (which the tool helpfully provides behind a switch), and your stakeholders must be on board with this. It shouldn’t be too hard to do, since everyone has some experience with missed engineering deadlines. (Bezos would probably say 70% or even less.)
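To make "allocate at 80% confidence" concrete, a minimal sketch: simulate whatever distribution your estimate implies and budget at its 80th percentile rather than its midpoint (the lognormal parameters here are invented for illustration, not taken from the tool):

    import random

    # Hypothetical long-tailed task-duration distribution (made-up parameters).
    samples = sorted(random.lognormvariate(3.0, 0.8) for _ in range(100_000))

    median    = samples[len(samples) // 2]         # the number people want to hear
    budget_80 = samples[int(0.80 * len(samples))]  # what you'd actually allocate

    print(f"median ~{median:.0f} days, 80%-confidence budget ~{budget_80:.0f} days")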
I mean it's no less useful than a more precise but less certain estimate. It means you either need to do some work to improve your certainty (e.g. in the case of this quiz, allow spending more than 10 minutes or allow research) or prepare for the possibility that it really is 700 days.
Edit: And by the way, given a large enough view, estimates like this can still be valuable, because when you add them together the resulting probability distribution narrows considerably. E.g. at just 10 tasks of this size, you get a 95% CI of 245~460 per task; at 20, 225~430 per task.
Note that this is obviously reductive: there's no way an estimate of 5-700 would imply a normal distribution centred at 352.5; it would be more like a lognormal distribution whose peak is around 10 days. Additionally, this treats each task as independent, i.e. one estimate coming in at the high end wouldn't mean another one would as well.
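Here's a minimal simulation of that narrowing, assuming (as the calculator appears to) that a~b denotes a normal distribution whose 95% interval is [a, b]; it lands close to the 10-task figure quoted above:

    import random

    # Treat "5~700" as a normal whose 95% interval is [5, 700] (an assumption;
    # the lognormal caveat above still applies - this only shows the narrowing).
    mean = (5 + 700) / 2          # 352.5
    sd = (700 - 5) / (2 * 1.96)   # ~177.3

    def per_task_ci95(n_tasks, trials=50_000):
        totals = sorted(sum(random.gauss(mean, sd) for _ in range(n_tasks))
                        for _ in range(trials))
        lo = totals[int(0.025 * trials)] / n_tasks
        hi = totals[int(0.975 * trials)] / n_tasks
        return round(lo), round(hi)

    for n in (1, 10, 20):
        print(n, "tasks -> per-task 95% CI:", per_task_ci95(n))
    # roughly: 1 -> (5, 700), 10 -> (243, 462), 20 -> (275, 430)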
Still not sure a bill will give you that stability, since Trump has used loopholes to sidestep ratified trade agreements that he himself negotiated. Anything signed by this President isn't worth the paper it's printed on.
It feels to me like they poached some high-level product executive from an intrusive ad company, trained in the art of dark patterns, and pointed them at their paying customers. It's a truly offensive way of looking at your user base, as solely engagement metrics to be optimized. It's what happens when an entire business is built around gamifying one KPI.
"Extraordinary", such as a fictional fentanyl emergency at the northern border (despite the fact that a trivial amount of fentanyl enters the U.S. from Canada, and more enters Canada from the U.S.).
Of course, you have Congress to keep the President in check and confirm that these are real emergencies. Which is why the House will definitely be bringing S.J.Res. 37 to the floor, right? Because it's their sworn duty to act as a check on executive power, right?
There's a reason the saying, "in America we have three coequal branches of government: the Supreme Court rules, the President crimes, and Congress is just for fun" has become popular since some time after his first inauguration.
He first announced tariffs on China over fentanyl something like 2 or 3 days after pardoning the biggest heroin-by-mail operator in world history (Ross Ulbricht).
On the point of air travel specifically, Canadians are being advised to use airports over land borders, because if you are denied entry at U.S. customs in a Canadian airport, they cannot detain you, as you are on Canadian soil.
So for Canadians still travelling to the U.S. it might actually increase their carbon footprint.
WHAT THE HELL WHY DID YOU COME IN YOU'RE NOT WELCOME HERE
...sorry but you said-
I DON'T CARE WHAT I SAID.
...OK fine I'll just be leaving.
OH YOU THINK YOU CAN JUST LEAVE? YOU CAN'T LEAVE! You'll be locked in my dank basement for the next five days while I fill out the paperwork for you to be dragged out of the house!
Admittedly, without knowing much about our dairy industry and supply management, isn't this flawed logic? If a 200% import tariff means Canadian dairy can also be sold at a higher price, then it's still the consumers bearing the cost--they are being denied access to cheaper dairy altogether.
The reality is when you get to another certain point (larger than the point you describe) you start negotiating directly with those cloud providers and bypass their standard pricing models entirely.
It's the time in between that's the most awkward: when the potential savings are big enough that hiring an engineering team to bring infrastructure in-house would give a good return (were current pricing to stay), but you're not so big that just threatening to leave will cause the provider to offer you low-margin pricing.
All I'd say is don't assume you're getting the best price you can get. Engineers are often terrible negotiators; we'd rather spend months solving a problem than have an awkward conversation. Before you commit to leaving, take that leverage into a conversation with your cloud sales rep.
> Engineers are often terrible negotiators; we'd rather spend months solving a problem than have an awkward conversation.
My experience is the opposite: lots of software developers ("engineers") would love to do "brutal" negotiations to fight against the "choking" done by the cloud vendors.
The reason you commonly don't let software developers do these negotiations is thus the complete opposite: they apply (for the reasons mentioned) an ultra-hardball negotiation style, lacking all the diplomatic and business customs of politeness, that leaves behind vast stretches of scorched earth. Many (company) customers of the cloud providers therefore fear that this hardball style destroys any future business relationship with the respective cloud service provider (and, for reputation reasons, perhaps with a lot of others).
Even with the discounts of volume pricing, cloud prices are still quite inflated unless you need to inherit specific controls like the P&E (physical and environmental) ones from FedRAMP High/GovCloud. The catch there is lock-in technologies that may require re-developing large swaths of your applications if you're heavily reliant on cloud-native tools.
Even going multi-region, hiring dedicated 24/7 data center staff, and purchasing your own hardware amortizes out pretty quickly and can give you a serious competitive advantage in pricing against others. This is especially true if you are a large consumer of bandwidth.
> The reality is when you get to another certain point (larger than the point you describe) you start negotiating directly with those cloud providers and bypass their standard pricing models entirely.
And even if you do, you still end up with pretty horrible pricing, still paying per GB of "premium" traffic for some outrageously stupid reason, instead of going the route of unmetered connections and actually planning your infrastructure.
This was more a response to the comment I replied to, which said cloud is always more expensive. And I'm saying it more for everyone else than for the OP.
It's almost always less expensive at the start, which is super important for the early stages of a company (your capital costs are basically zero when choosing, say, AWS).
Then after you're established, it's still cheaper when considering opportunity costs (minor improvements in margin aren't usually the thing that will 10x a company's value, and adding headcount has a real cost).
But then your uniqueness as a company will come into play and there will be some outsized expense that seems obscene for the value you get. For the article's author it was S3; for the OP it's bandwidth; for me it's Lambdas (and, bizarrely, CloudWatch alarms). That's when you need to have a hard look and negotiate. Sometimes the standard pricing model really doesn't consider how you're using a certain service; after all, it's configured to optimize revenue in the general case. That doesn't mean the provider isn't going to be willing to take a much lower margin on that service if you explain why the pricing model is an issue for you.
So obviously this is an extreme case, but I worked for a company that had long dismissed third-party cloud providers as too expensive (customers would be routing all of their network traffic through our data centers, so obviously the bandwidth costs would just be too dang high). Then that company got purchased by a certain mega corporation who then negotiated an exclusive deal with GCP, and the math flipped. It was now far too expensive to run our own set of data centers. Google was willing to take such a low margin on bandwidth that it made no sense not to.
So in this case, hundreds of billions. But the principle stands at lower company sizes, just with different numbers and amounts of leverage.
I don’t remember if our first enterprise agreement was at $1M or $2M, but it was low and in that neighborhood [also, this was 10 years ago, well before cloud was the default and had growth baked into it].
Cloud providers are looking for a multi-year term and a commitment to growth as much as, or more than, an exact spend level right now.
In my experience with GCP, go through a Google partner (one that aggregates multiple clients to get discounts) and you'll be able to get commitment discounts at $500K/year or even less. But don't save too much money during your commitment period: if you don't spend your full commitment, you'll pay for it anyway, and you might even lose some discounts.
Also, one trick to inflate your commitment spend is to ask your SaaS providers whether you can pay them through the AWS or GCP marketplaces: that spend often counts against your commitment minimum, so not everything has to be instances and storage.
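To make the "don't save too much" warning concrete, here's toy arithmetic for how under-spending a commit erodes the effective discount (the function and every number are hypothetical; real billing mechanics vary by provider and contract):

    # Toy model of a committed-spend deal (all numbers invented).
    def effective_cost(list_price_usage, commit, discount):
        discounted = list_price_usage * (1 - discount)
        return max(discounted, commit)  # any shortfall is billed anyway

    commit, discount = 600_000, 0.15  # hypothetical annual commit and discount

    for usage in (800_000, 700_000, 600_000):  # usage priced at list
        paid = effective_cost(usage, commit, discount)
        print(f"list ${usage:,} -> pay ${paid:,.0f} "
              f"(effective discount {1 - paid / usage:.0%})")

At $800K of list usage you keep the full 15%; at exactly $600K you pay list price anyway, so the discount has effectively evaporated.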
You can commit right there in the console; no need to work with a partner unless you want a “flex” commit, where the saving is less. Even with a 3-year commit it's still nowhere near cheap compared to buying servers and renting colo space, especially for bandwidth and storage.
It's not the same commitment. When doing a commitment through a partner, you're committing to a spend level (let's say, 600k in a year) across ALL your expenses. Well, except for the Google Maps API, it seems :P. So it's not tied to a specific product or type of instance, like the typical commitment, but to your whole GCP billing.
From this, you get a wide range of discounts on a bunch of products, not just instances. And I think those discounts go on top of some of the other discounts you regularly have, but I'm not sure and I'd have to check our billing.