One of the solutions they mention is underutilizing links. This is probably a good time to mention my thesis work, where we showed that streaming video traffic (which is the majority of the traffic on the internet) can pretty readily underutilize links on the internet today, without a downside to video QoE! https://sammy.brucespang.com
Packet switching won over circuit switching because the cost-per-capacity was so much lower; if you end up having to over-provision/under-utilize links anyway, why not use circuit switching?
TFA suggests 900% overcapacity, not a few percent. I just skimmed GP's article, but it seems to suggest ~100% overcapacity for streaming video specifically.
A physical circuit costs a lot, so much more that it's not even funny.
You can deploy a 24-fiber optical cable and run many thousands of virtual circuits on it in parallel using packet switching. Usually orders of magnitude more when they share bandwidth opportunistically, because the streams of packets are not of constant intensity.
Running thousands of separate fibers / wires would be much more expensive, and having thousands of narrow-band splitters / transceivers would also be massively expensive.
Phone networks have tried that all, and gladly jumped off the physical circuits ship as soon as they could.
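The opportunistic-sharing point can be illustrated with a toy simulation (a sketch with invented numbers, not real traffic data): each bursty flow sends at its peak rate only a small fraction of the time, so the capacity a packet-switched link actually needs is far below the sum of per-flow peaks that circuits would have to reserve.

```python
import random

random.seed(42)

N_FLOWS = 1000     # bursty flows sharing one link (illustrative)
TICKS = 2000       # time slots observed
PEAK_RATE = 1.0    # each flow's burst rate, arbitrary units
DUTY_CYCLE = 0.05  # fraction of time a flow is actually sending

# Circuit switching must reserve every flow's peak rate for the whole call.
circuit_capacity = N_FLOWS * PEAK_RATE

# Packet switching only needs to carry the aggregate instantaneous load.
peak_aggregate = 0.0
for _ in range(TICKS):
    load = sum(PEAK_RATE for _ in range(N_FLOWS) if random.random() < DUTY_CYCLE)
    peak_aggregate = max(peak_aggregate, load)

print(f"capacity reserved by circuits: {circuit_capacity:.0f}")
print(f"peak aggregate load (packets): {peak_aggregate:.0f}")
print(f"multiplexing gain:             {circuit_capacity / peak_aggregate:.1f}x")
```

With these made-up parameters the gain is roughly an order of magnitude, which is the "orders of magnitude more" effect described above.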
> Anyway, circuits are more expensive than just running a packet-switched network lightly loaded.
This was undoubtedly true (and not even close) 20 years ago. As technology changes, it can be worth revisiting some of these axioms to see if they still hold. Since virtual circuits require smart switches for the entire shared path, there are literal network effects making it hard to adopt.
The old and new standard ways to do virtual circuit switching are ATM (heavily optimized for low-latency voice - 53-byte cells!) and MPLS (seems to be a sort of flow-labeling extension to "host" protocols such as IP - clever!).
Both are technologies that one rarely has any contact with as an end user.
Sources: Things I've read a long time ago + Wikipedia for ATM, Wikipedia for MPLS.
> my thesis work, where we showed that streaming video traffic [...] can pretty readily underutilize links on the internet today, without a downside to video QoE!
i was slightly at a loss as to what exactly needed to be shown here until i clicked the link and came to the conclusion that you re-invented(?) pacing.
I would definitely not say that we re-invented pacing! One version of the question we looked at was: how low a pace rate can you pick for an ABR algorithm, without reducing video QoE? The part which takes work is this "without reducing video QoE" requirement. If you're interested, check out the paper!
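To make the tradeoff concrete, here is a toy buffer model (my own sketch, not the paper's method; all numbers invented): pacing at least as fast as the selected bitrate keeps the client buffer stable, while pacing below it eventually empties the buffer, and that rebuffering is exactly the QoE hit the pace-rate choice has to avoid.

```python
def buffer_after(pace_rate_mbps, bitrate_mbps, seconds=120, startup_buffer=10.0):
    """Seconds of video buffered after `seconds` of steady playback.

    Toy model: each wall-clock second downloads pace_rate/bitrate
    seconds of video while playback consumes exactly 1 second.
    """
    buf = startup_buffer
    for _ in range(seconds):
        buf += pace_rate_mbps / bitrate_mbps  # video-seconds downloaded
        buf -= 1.0                            # video-seconds played
        if buf < 0:
            return buf  # buffer emptied: a rebuffering event
    return buf

# Pacing at the bitrate keeps the buffer flat; pacing below it drains it.
print(buffer_after(pace_rate_mbps=5.0, bitrate_mbps=5.0))  # stable
print(buffer_after(pace_rate_mbps=4.0, bitrate_mbps=5.0))  # rebuffers
```

The hard part the paper tackles is everything this model ignores: the bitrate itself is chosen adaptively, and network capacity varies over time.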
> One version of the question we looked at was: how low a pace rate can you pick for an ABR algorithm, without reducing video QoE?
that is certainly an interesting economic optimization problem to reason about, though imho somewhat beyond technical merit, as simply letting the client choose quality and sending the data at full speed works well enough.
addition:
i totally agree that things have to look economical in order to work and that there are technical edge-cases that need to be handled for good ux, but i don't quite see how client-side buffer occupancy in the seconds range is in the user's interest.
i did not read that paper with a focus on adaptive bitrate selection for video streaming services, which came out 8 years after the pacing implementation hit the kernel. thx though
Can you comment on latency-sensitive video (Meet, Zoom) versus latency-insensitive video (YouTube, Netflix)? Is only the latter “streaming video traffic”?
We looked at latency-insensitive video like YouTube and Netflix (which together were a bit more than 50% of internet traffic last year [1]).
I'd bet you could do something similar with Meet and Zoom: my understanding is that video bitrates for those services are lower than for e.g. Netflix, whose bitrates we showed are much lower than network capacities. But it might be tricky because of the latency-sensitivity angle, and we did not look into it in our paper.
> Meet and Zoom–my understanding is video bitrates for those services are lower than for e.g. Netflix
For a given quality, bitrate will generally be higher in RTC apps (though quality may be lower in general depending on the context and network conditions obviously) because of tradeoffs between encoding latency and efficiency. However, RTC apps generally already try to underutilize links because queuing is bad for latency and latency matters a lot for the RTC case.
the term "streaming video" usually refers to the fact that the data is sent slower than the link capacity (but intermittently faster than the content bitrate)
op used the term presumably to describe "live content", i.e. where the source material is not available as a whole (because the recording is not finished); this can be considered a subset of "streaming video"
the sensitivity with regard to transport characteristics stems from the fact that "live content" places an upper bound on the time available for processing and transferring the content bits to the clients (for it to be considered "live").
I'm not even finished with it, but it has still introduced me to Lisp, functional programming, artificial life, and web servers, as well as completely changed the way I program.
It's to make what was done (which is important to the paper) seem important and make who did it (which is less important to the paper) seem unimportant.
The interesting thing to me is that in mathematics journals, the universal pronoun is "we." The reason (I've been told) is that "we" represents the collaboration of the author and the reader to understand the results and proofs in the paper. This makes sense to me, because reading and writing mathematics is a skill entirely apart from most other types of discourse. (Of course, when I say "mathematics," I mean to include fields like theoretical computer science and others in which discourse is of the "theorem, proof, discussion" form.)
That is interesting. I instinctively use "we" when commenting code or talking myself through performing a novel task, in both cases for the same reason.
For a photoshop-like web design tool, it would probably be a poor idea to have it output HTML + CSS, as it would be insanely difficult to generate remotely decent HTML + CSS (especially HTML + CSS that responds to interaction well). Even if you managed that, any decent front-end developer would probably rewrite it themselves to get the best interaction, "bulletproof-ness", and optimization possible. What would be far more useful is a design tool that incorporates HTML + CSS by using them for the layout of the image, but doesn't attempt to output production-quality code. That would make it easy to use CSS to quickly change something like the font, without the tool trying to write decent front-end code.
In addition, I'd love it if this tool had the following. Even if it only had one of these things, I'd probably buy it.
1. A way to easily deal with text. In Photoshop, text is a pain to deal with, especially for things like navigation. If there were a way to easily style the text of a file with CSS in addition to a GUI method, things would be much more pleasant and efficient.
2. Decent ways to include header, sidebar, and footer elements. Currently, a designer has to duplicate these across multiple files, which makes updating even a snippet of text an arduous process involving opening many files.
3. A common browser-element palette. Currently the only solution (that I know of) is to get a PSD with the elements pre-made and try to fit whatever you get into a design, which makes things like adding decent amounts of text to buttons rather bothersome.
Hey flapjack, I'm developing a tool to do this and would love to get your thoughts. If you stumble across this comment, shoot me an email: matthew.h.mazur@gmail.com.
Same goes for anyone else -- the app is still a ways away from prime time, but if you'd like to play a part in the development and testing process, let me know.
If a user searches for something like 27" iMac, there are no results, but if the user searches for 27 iMac, the 27" iMac is found. It might be good to strip any " characters from the add field before searching.
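One way to do that (a sketch; I don't know the app's actual field names or search stack) is to normalize both the stored text and the query by stripping straight and curly double quotes before matching:

```python
import re

# Characters to strip: straight quote, curly quotes, and the double-prime
# sometimes used for inches. (Assumption: search is plain text matching.)
QUOTE_CHARS = re.compile(r'["\u201c\u201d\u2033]')

def normalize(text: str) -> str:
    """Strip quote marks and collapse whitespace so 27" iMac matches 27 iMac."""
    return re.sub(r'\s+', ' ', QUOTE_CHARS.sub('', text)).strip()

# Apply the same normalization to the indexed field and to the query,
# so either form of the search finds the item.
assert normalize('27" iMac') == normalize('27 iMac') == '27 iMac'
```

Normalizing both sides is the important part: stripping quotes only from the query would still miss items whose stored names contain a ".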
I count calories and try to stick to a consistent limit, and bike around 8 miles Monday through Friday. It's probably the 8 miles of biking that provides the benefit.