> Wrong: you see 10KB/s download speed because you are not throttling the incoming packets but the outgoing packets!
Yep. tc's default is to police outgoing traffic, which in OP's example is essentially a bunch of TCP ACKs. Instead, they should be using the ingress keyword, something like described here:
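Something along these lines, a minimal ingress-policing sketch (the interface name and rate are placeholders, not taken from that guide):

# attach an ingress qdisc and police everything above the configured rate
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 1mbit burst 100k drop flowid :1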
Caveat emptor: ingress rate-limiting is hard. Long story short, it all boils down to what you do with non-conforming packets. There are two alternatives, and both are rather sub-optimal: you can either buffer/delay packets in kernel space (the default, which leads to bufferbloat and memory waste), or drop them (which the author linked above opted for, and which leads to excessive retransmits and bandwidth waste).
The drop-vs-buffer decision is no harder for incoming packets than for outgoing ones. In either case: if you're trying to simulate a different kind of network, do what that network does. If you're just trying to get good QoS on your gateway router, then use a smart AQM that will buffer only to the extent that is reasonable, and then drop or ECN-mark when buffering threatens to add too much latency.
What actually happens when Internet traffic makes the jump from my ISP's big-pipe backbone connection to my much slower last-mile connection? They clearly can't cache the world, and "excessive retransmits and bandwidth waste" doesn't seem to be the case?
It's common to find that the buffers in front of the bottleneck last-mile link are sized for the highest tier of service regardless of what speed you're actually subscribed for. DSLReports has the only browser-based speed test that measures bufferbloat, and they have extensive data from that testing: https://www.dslreports.com/speedtest/results/bufferbloat?up=...
Your modem and the box on the other end of that bottleneck link are probably buffering far more than is reasonable. There's simply no reason for a cable modem to ever have in excess of 1s worth of backlog.
The middle ground between dropping [1] and buffering is Active Queue Management, as wtallis pointed out above. The state-of-the-art AQM on Linux nowadays is CoDel [2].
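On a reasonably recent kernel you can try it with something like this sketch (the interface name is a placeholder; fq_codel is the flow-queueing variant of CoDel that ships with Linux):

# replace the default qdisc on the egress interface with fq_codel
tc qdisc replace dev eth0 root fq_codel
# inspect queue length and drop/mark statistics
tc -s qdisc show dev eth0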
Big network gear vendors that ship to ISPs adopted early forms of AQM (both RED [3] and proprietary algorithms) quite a while ago - they had to:
(a) backbone routers have a much smaller buffer-to-bandwidth ratio (compared to a PC or even a home router), so endless buffering is not an option;
(b) buffer tail drops (i.e. what the drop keyword does when added to tc filters/disciplines) interact really poorly with TCP bandwidth control algorithms, and rather badly with RTP streams too (so drop is not an option either - it ruins users' connections); and
(c) ISPs and carriers typically run links (on average) much closer to saturation than your typical home router, so the situation where the router has to make the buffer/drop decision is much, much more frequent.
[1] My point above was that you don't really want to drop packets on the receiver side, after the packet has already traversed the expensive part of the network.
If you don't need hierarchical classes [1], just use tbf (token bucket filter) instead of htb (hierarchical token bucket) - it's more efficient, more compact, and gives you access to delay in the same discipline as well. Compare:
# htb
tc qdisc add dev eth0 handle 1: root htb default 11
tc class add dev eth0 parent 1: classid 1:1 htb rate 1kbps
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 1kbps
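...and a tbf sketch of roughly the same single-rate limit (the latency value here is an assumption; burst 2k is the value mentioned in the EDIT below):

# tbf
tc qdisc add dev eth0 root tbf rate 1kbps burst 2k latency 50ms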
[1] i.e. stuff like "I would like to have tcp/80 limited to 10 mbit/s, tcp/443 limited to 15 mbit/s, while sum of above should never exceed 20mbit/s, and tcp/80 should get priority when competing for that shared 20mbit/s"
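A sketch of what that might look like with htb (interface name, guaranteed rates, and the u32 matches are assumptions):

# tcp/80 capped at 10mbit (higher priority), tcp/443 capped at 15mbit,
# both borrowing from a shared 20mbit parent class
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 20mbit ceil 20mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 10mbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 10mbit ceil 15mbit prio 1
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 443 0xffff flowid 1:20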
EDIT: changed to "burst 2k". Having burst lower than interface MTU will delay large packets essentially forever.
You have to be pretty careful about what conclusions you draw from this info. It means relatively little about what your actual peer has seen. It does tell you some info about the channel between you and your peer, though.
If you need detailed feedback, you need to implement a feature that closes the loop.
And that is exactly why open IM protocols and 3rd party clients are so important.
If both you and your correspondents use a 3rd party IM client ([1], [2], etc.), then just run OTR2 or OMEMO on top of the protocol, and let Google store whatever it pleases - it's not going to be of much use to them.
If this [1] Lenovo employee's answer is the real deal, and this was in fact sanctioned by MS, I wonder if their actual goal was to prevent people from installing Windows 8/7/whatever. MS did go on record saying they are willing to block people from installing old versions on new hardware. [2]
Given the quality of answers I'm used to seeing from computer manufacturers' "product experts" online... it's really unlikely that that's actually the case.
Have a look at the Lenovo forums [1] (also linked prominently from the original article), where the OP has the following to say:
I am attempting to install ubuntu 16.04 on my yoga 900.
The bios can see the 512 gb samsung hard drive and so can Windows.
The ubuntu installer can not see it at all.
...and the first person to reply adds:
I have the same issue with the 900S model.
I have tried the newest kernel 4.6 but linux doesn't
even list the pci express device in lspci.
So no, positively not the 3.2.x kernel to blame here.
Right. Thanks for that. I am reacting to the claims that it's "locked by Microsoft". This looks to be missing Linux support for an Intel device. Sad, but not unusual.
And for an example of the co-existence of competing languages, see JavaScript triggering Flash/ActionScript (e.g. for clipboard access) and vice versa (e.g. for DOM access).
Both are somewhat dated examples, that is, pre-HTML5.
The designed-for use case for an SD card is as a buffer: a photo/video camera saves footage to it sequentially, then you copy everything to your PC/laptop and reformat the card.
When used as generic storage with a complex write/delete/overwrite pattern, most cards would start corrupting data fairly quickly.
I understand its use case. Now imagine you're on location shooting, and you're fumbling around with tiny SD cards. It wouldn't be the first time I've dropped one. Even with CompactFlash, I try not to put TOO much work on one card before I switch them over.
To say that every consumer is going to use it only as a temporary medium, I think is overly optimistic.
I think part of the appeal of high capacity cards is that you spend less time fumbling with cards. A 1TB card would probably last an entire day of shooting HD video. As an amateur photographer, I generally keep 2 extra SD cards on hand but they're backups and not used. I don't think I've ever filled up the 64GB card that I shoot with and at the end of the day I move off what was shot. If I'm shooting on vacation I'll generally leave a copy of the images on the card so that I have 2 copies floating around but I've yet to fill up the card.
This card probably isn't targeting the consumer market. I think smartphone cameras have cannibalized much of the Point-and-Shoot camera and PVR market. That being said, I think most consumers purchase one SD card when they buy their camera and it's the only card that camera ever sees.
I get what you're saying, and it is a concern, but there is a decent argument (to me, anyway) that with 1TB you probably won't be taking the card out to be fumbling with it in the first place. Even with on-location shooting, "I filled up a 1TB card, I need to archive it" is probably a good reason for a lunch break.
No one in their right mind would fill a 1TB card before swapping. That's an insane amount of potential loss. I could see it as a secondary backup disk in a DSLR, but that's about it.
He is saying that people will fill a 1TB card before swapping exactly the same way that people often leave thousands of photos floating around without backup.
Not that you're wrong, but [citation needed]. Mainly because you use a microSD card to boot the Raspberry Pi. Also, in my Yearbook class in high school, we almost never formatted(?) our cards. Granted, they were only 32 GB or less to avoid SDXC, but still.
This was mostly from personal experience of running a couple dozen Raspberry Pis. To elaborate:
- Cheap cards would start corrupting right in the middle of the first "apt-get update && apt-get upgrade";
- SanDisk Extreme Pro 16GB fared much better, but we still had several failures after half a year;
- The failure mode in both cases was corruption in superblocks and inodes; the journal doesn't help much when recovering (we use ext4).
As a result, we settled on splitting each card into two root partitions - active and standby. To upgrade, overwrite the standby one with dd, switch active/standby by editing /boot/cmdline.txt, and reboot.
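A minimal sketch of that upgrade flow (the partition numbering and image name here are assumptions):

# assume mmcblk0p2 is the active root and mmcblk0p3 is the standby root
dd if=rootfs.img of=/dev/mmcblk0p3 bs=4M conv=fsync
# point the kernel at the standby partition, then reboot into it
sed -i 's|root=/dev/mmcblk0p2|root=/dev/mmcblk0p3|' /boot/cmdline.txt
reboot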
I had half a dozen SD cards fail (corrupted) on my RPi in a 4 month period - but it turned out to be caused by an anaemic power supply. I haven't had a corrupted SD card since I got a 2.5 amp adaptor - this was 18 months ago, with the very same RPi.
We did have a few cameras that supported SDXC, but the majority didn't. It was easier to just use SDHC and not have to worry about which card works in which camera. Besides, 32 GB was plenty because we almost never shot in RAW (high schoolers aren't professionals); I think only me and 3 others knew how to work with RAW.
So to answer your question: space wasn't at a premium, and we stuck with SDHC because of the technical and human limitations of SDHC vs SDXC. I'm sure some of those cameras had firmware updates that would add SDXC support, but we didn't need it.
Caddy is written in Go, and makes use of native fibers/green threads/goroutines.