
> Merely keeping up with the stream of issues found by fuzzing costs Google at least 0.25 full time software engineers

I like this way of measuring extra work. Is this standard at Google?



Yes, a SWE-year is a common unit of cost.

And there are internal calculators that tell you how much CPU, memory, network etc a SWE-year gets you. Same for other internal units, like the cost of a particular DB.

This allows you to make time/resource tradeoffs. Spending half an engineer's year to save 0.5 SWE-y of CPU is not a great ROI. But if you get 10 SWE-years out of it, it's probably a great idea.

I personally have used it to argue that we shouldn’t spend 2 weeks of engineering time to save a TB of DB disk space. The cost of the disk comes to less than a SWE-hour per year!
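That back-of-envelope comparison can be sketched as below. The dollar figures are made-up assumptions for illustration, not anyone's internal rates:

```python
# Hypothetical tradeoff calculator. Both constants are invented
# assumptions, not real Google figures.
SWE_YEAR_USD = 300_000   # assumed fully loaded cost of one SWE-year
DISK_TB_YEAR_USD = 25    # assumed cost of 1 TB of DB disk per year

def cost_in_swe_years(dollars: float) -> float:
    """Convert a dollar cost into the equivalent fraction of a SWE-year."""
    return dollars / SWE_YEAR_USD

# Two weeks of engineering vs. one TB of disk for a year:
effort = 2 / 52                               # ~0.04 SWE-years spent
saving = cost_in_swe_years(DISK_TB_YEAR_USD)  # ~0.00008 SWE-years saved
print(effort > saving)  # True: the optimization doesn't pay off
```

Under these assumed rates the disk saving is several hundred times smaller than the engineering cost, which matches the "less than a SWE-hour per year" intuition above.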


Note that this can lead to horrid economics for the user.

An example is Google unilaterally flipping on VP8/VP9 decoding, which at the time ran purely on the CPU, or only experimentally on the GPU.

It saved Google a few CPU cycles and some bandwidth but it nuked every user's CPU and battery. And the amount of energy YouTube consumed wholesale (so servers + clients) skyrocketed. Tragedy of the Commons.


There's definitely some nuance around how many resources to consume at the client vs the server. Video decoding and ML inference are probably at the extreme end of what you can make a client do.

On the whole, since clients are so constrained, it usually pays to be efficient there - make websites load quickly to increase revenue, require only weak hardware to get more game/app sales, etc. Clients are also untrusted, so there are many things you can only do on the backend.


How is that related to the tradeoff calculation at hand?

Implementing VP9, enabling it, transcoding videos and testing it COST SWE hours, it didn't save them. It also cost resources.

In what way did you envision the VP9 issue being related to SWE/resource hour computations here?


I think what they're suggesting is that yes, Google spent some SWE hours to implement it, but they have (or will, over time) saved more than that in equivalent compute (by pushing it onto users). So, from Google's point of view, they saved SWE hours on the company balance sheet, so it's a win.

But from the larger point of view of "everyone," this actually made the total cost worse. The cost/benefit analysis from Google's point-of-view was worth it, even though it made things overall worse in the larger perspective. Hence "tragedy of the commons," where the short-sighted behavior of some actors makes things worse overall.


I don't think that metric applies here - "SWE-years" is what you use to determine whether investing in a certain optimisation is worth doing.

In the case of VP9, there was no such tradeoff - it used more storage, more CPU time, and ate SWE hours. What it saved was a very clearly quantifiable financial cost of bandwidth; the SWE-hour tradeoff didn't have much to do with it.

It would be applicable if the debate were about, for example, investing engineering time to speed up the VP9 encoder.


SWE-years are actually used for two things: a) measuring ongoing cost and b) for calculating if an investment will pay off.

There are a few benefits over just using a dollar figure. It's a unit that naturally ties time to value, and it's a natural unit for thinking about the time of SWEs. It's also convertible to other units, and the conversion factors can change over time without everyone having to track the minutiae.

All these factors make it useful for both measuring ongoing cost and making investment tradeoffs.
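A minimal sketch of that "convertible to other units" idea, collapsing a multi-dimensional resource bill into SWE-years. All conversion rates here are invented for illustration:

```python
# Assumed conversion rates: SWE-years per resource-unit per year.
# These numbers are made up; only the structure matters.
RATES_SWE_Y = {
    "cpu_core": 1 / 2000,    # i.e. 2000 core-years ~ 1 SWE-year
    "ram_gib": 1 / 50_000,
    "disk_tb": 1 / 10_000,
}

def total_swe_years(usage: dict) -> float:
    """Collapse a multi-dimensional resource bill into one SWE-year figure."""
    return sum(RATES_SWE_Y[kind] * amount for kind, amount in usage.items())

bill = {"cpu_core": 500, "ram_gib": 10_000, "disk_tb": 20}
print(round(total_swe_years(bill), 3))  # 0.452
```

If the rates change, only the table is updated; every tradeoff computed on top of it stays in the same unit.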


It also saved licensing costs to MPEG-LA too I guess.


They still have to pay for H.264, and the cost of it is peanuts.


Do you have an article about this? What is the state now?


They did this during the period where Chrome was eating into Firefox usage, after telling Mozilla that they would drop H.264 in favor of open codecs but never keeping that promise. Here’s some period discussion:

https://www.osnews.com/story/24263/google-h264-stifles-innov...

What this meant in practice was that Firefox would play YouTube videos at close to 100% CPU and Chrome would use H.264 and play at like 5% CPU, and since it was VP8 the quality was noticeably worse than H.264 as well. Regular people talked about that a lot, and it helped establish Chrome’s reputation for being faster.


Unfortunately, some 10 years later, Google is still pretty much the same with regard to video codec understanding.

Never attribute to malice that which is adequately explained by stupidity. If there was something outside of stupidity, zealotry would be another one. Although I guess one could argue zealotry is also a form of stupidity.


Pretty much every device these days decodes VP9 in hardware.

It's been a long time since I've seen a VP8 video on YT, so I'd assume it's not even used anymore due to worse compression.


FTE. Full-time equivalent. Most costs are denominated in FTE - headcount as well as things like CPU/memory/storage/...

The main economic unit for most engineers is FTE not $.


I’m pretty sure FTE stands for full-time employee


When used to compare costs, “equivalent” is used instead of “employee.”

“The cost of one full-time employee is equivalent to X CPUs.” —> “The full-time equivalent is X CPUs.”


This is also extremely common in engineering services contracts, both for government and private-sector contracting. RFPs (or their equivalent) will specifically lay out what level of effort is expected, and will often denote effort in terms of FTE.


Technically a measure of power, work/time.


Yeah, kind of. It's preferred because whenever you propose or evaluate a change, you can get a rough idea of whether it was worth the time. For example, you work on some significant optimization, then measure it and justify it: the saving was 10 SWE-years, where you only put in 1 SWE-quarter.


Yep, often things are measured in FTE or FTE-equivalent units. It’s not precise of course but is a reasonable shorthand for the amount of work required.


Yes, all those well-paid C-level managers cannot handle multiple units, so they require everyone to use one "easy to understand" unit so that everything is easy to compare and micromanage.


If you want to decide which of several options is better, how do you propose doing that without using a single number? You can't, in general, compare multi-dimensional quantities.


I never said the idea of using a single number is bad in itself. What is bad is forcing this single number and its calculation on everyone else as if it were some biblical revelation, instead of calculating it ad hoc (and modifying the calculation as needs require). I can only hope C-suites can add and multiply.

As for whether you can compare multi-dimensional quantities - of course you can, and it is done every day with engineering and medical data. It's just a tad more complicated than adding and multiplying, so it's a no-go for the C-suite.

You cannot expect people earning six figures to understand actual math, can you?


It's quite the opposite. This is intended for engineers to make good trade-off decisions as a rule of thumb without financial micromanaging.


I think this means the engineer fuzzes 4 projects?


It means it costs them a quarter of one engineer's working time: three months per year. With 'n' engineers each carrying that load, n/4 years of man-power would be spent fixing issues found by fuzzing. As others have said, FTE (full-time equivalent) is the more common name.
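The arithmetic behind "0.25 FTE" can be sketched as follows, using the common (assumed) 40-hours-a-week convention:

```python
# 0.25 FTE means a quarter of one engineer's annual working time,
# however that time is actually split across people.
FTE_HOURS_PER_YEAR = 2080       # assumed: 40 h/week x 52 weeks

fuzz_triage_fte = 0.25
hours_per_year = fuzz_triage_fte * FTE_HOURS_PER_YEAR
print(hours_per_year)  # 520.0 hours/year, roughly three months of one engineer
```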



