>In France, French Canada and Romania, octet is used in common language instead of byte when the 8-bit sense is required, for example, a megabyte (MB) is termed a megaoctet (Mo).
French has everything to do with it: it got grandfathered into their system from earlier notations, while the rest of the world uses megabyte for memory as a matter of practice.
This is correct. I'm Romanian, and both schools and the government use octet. On the other hand, private enterprise has long used bit/byte, and "octet" nowadays mostly confuses people.
But it goes a bit deeper than that, and it's mostly due to French influence: language is considered to be at the heart of national identity [0]. This has been detrimental to developing a modern language; one of the effects is actually the slow death of languages that can't adapt to their surroundings. Instead we get English words crudely translated into Romanian or French, words that no one uses, since the English/American versions are not only more popular but also much more flexible.
The French (Romanians and others) are highly protective of their language, which is why they're losing the language battles.
Yes, but Mo is more technically correct, where "byte = 8 bits" is merely "de facto" correct. There were (and likely still are) designs with bytes of 6, 7, 9 and other numbers of bits, whereas an octet is always 8 bits.
Then again, if we cared about being "technically correct" as much as we claim to when correcting other people, we'd use KiB/MiB/... instead of KB/MB/... when working in 1024s rather than 1000s!
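To make the 1000-vs-1024 point concrete, here's a minimal sketch (the "512 MB" figure is just an illustrative example) showing how the same byte count comes out differently under SI (decimal) versus IEC (binary) prefixes:

```python
# Illustrative sketch: one byte count reported in decimal (SI) units
# versus binary (IEC) units -- the KB/KiB distinction above.

def to_decimal_mb(n_bytes):
    """Megabytes as SI defines them: 1 MB = 1000^2 bytes."""
    return n_bytes / 1000**2

def to_binary_mib(n_bytes):
    """Mebibytes per IEC 80000-13: 1 MiB = 1024^2 bytes."""
    return n_bytes / 1024**2

n = 512 * 1024**2          # a "512 MB" stick of RAM is really 512 MiB
print(to_decimal_mb(n))    # 536.870912 -- what a disk vendor would round to ~537 MB
print(to_binary_mib(n))    # 512.0
```

The ~5% gap between the two readings is exactly why disk capacities and OS-reported sizes never seem to agree.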
My understanding of byte is that it is defined as "the smallest unit other than a single bit that a processing unit naturally deals with".
Similarly "word" is the largest value the instruction set natively deals with (i.e. the architecture's register size).
Obviously these definitions are only general and are "broken" by many exceptions: for instance many CISC designs with 8-bit bytes have some instructions that work on data in 4-bit chunks (nibbles/nybbles), multiply instructions need to output twice the input size to be efficiently useful, and so on. Also "word" has in some places become synonymous with 32-bits rather than its more general definition, in much the same way "byte" became so with 8-bits.
Just a fancier/more exact way of referring to an MB (megabyte) as an Mo (megaoctet), since on more exotic systems a byte can have a number of bits other than 8 (there are/were 7-bit-byte and 6-bit-byte systems...), while an octet always means 8.
...if a programmer uses Mo, they are surely 100% French!
(In all the other countries that have Mo in their language, it's only used by non-IT people; programmers would always use MB. The word octet is in the English language too, btw, https://en.wikipedia.org/wiki/Octet_(computing) , but obviously nobody uses it, it's just too French :P)
I do like that it supports multiple GPUs. I usually have a tmux session up with my training progress (and Keras callbacks) in one split and the other side dedicated to htop/watch -n 0.5 nvidia-smi. Looks like I'll give it a run this week when I'm running models.
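If you ever want those numbers programmatically rather than eyeballing `watch nvidia-smi`, nvidia-smi's query mode emits machine-readable CSV. A minimal parser sketch, with a sample string standing in for live output (in practice you'd capture the command's stdout with subprocess):

```python
# Sketch: parsing per-GPU stats from nvidia-smi's CSV query mode.
# The sample below stands in for the output of something like:
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used \
#              --format=csv,noheader

sample = """0, 87 %, 10123 MiB
1, 12 %, 2048 MiB"""

def parse_smi(csv_text):
    """Turn nvidia-smi CSV lines into a list of per-GPU dicts."""
    gpus = []
    for line in csv_text.strip().splitlines():
        idx, util, mem = (field.strip() for field in line.split(","))
        gpus.append({
            "index": int(idx),
            "util_pct": int(util.rstrip(" %")),
            "mem_used_mib": int(mem.split()[0]),
        })
    return gpus

for gpu in parse_smi(sample):
    print(gpu)
```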
It's not quite the same, but Windows 10 shows some basic GPU utilisation in Task Manager next to each process, and some graphs in the next tab (where CPU, memory etc. are; there's a new tab for GPU).
Sysinternals Process Explorer also has this, and can show you per-"engine" GPU utilization, where engines represent things like shader cores, hardware video encoding cores, etc.
An awesome looking tool, however I'm a little wary of building software for an expensive multi-gpu server that isn't fully vetted. Seems like a ripe target for cryptojacking. On that note, people should probably avoid pre-built binaries of this for that reason.
What exactly is "Mo" in the memory column?