Some of this is wrong, and some of it is way too wordy.
> USB-C cables that support Power Delivery 2.0 and 3.0 can carry a minimum of 7.5 watts (1.5 amps at 5 volts) or 15W (3A at 5V), depending on the device, cable, and purpose.
Nope: all USB-C cables support at least 60 W. The connector on a device might support less, but the cables themselves are rated for a 60 W minimum (3 A at 20 V).
> USB-C Charge Cable—designed in the early days of USB-C
Nope, there's nothing wrong with a USB-C cable that doesn't have the high-speed lanes connected; that's a perfectly specification-conformant cable. "High-speed lanes" is the phrase I miss from this overly long article.
Here's a much shorter explanation of it all.
A USB-C cable always has separate wires for:

1) Power, either 60 W or 100 W capable; the latter needs an eMarker IC in the connector.

2) A pair for 480 Mbit/s USB data (aka USB 2.0).

3) Optionally, and in most cables, four high-speed lanes (each lane is formed from two wires as a differential pair).

These lanes can carry various data. By default, two lanes carry USB data at 5 Gbit/s at minimum, or 10 Gbit/s if the cable is short enough. (A 20 Gbit/s mode over four lanes is defined, but so few hosts support it that we don't need to care.) There are alternate modes besides USB, namely DisplayPort and Thunderbolt. In DisplayPort alternate mode, either two or four lanes carry DisplayPort; if two, the other two can still carry USB data as above. DisplayPort has its own versions: DisplayPort 1.2 can deliver 8.64 Gbit/s of video data over two lanes, while 1.3/1.4 can carry 12.96 Gbit/s, and both double that over four lanes. (DisplayPort 1.4 also introduces compression, which needs to be supported by the monitor, and very few do.) This is aggregate video bandwidth for PCs: they can drive multiple monitors over one connection using a DisplayPort technology called MST. For a Mac, you need Thunderbolt.
Thunderbolt. Unlike the previous modes, where the cable functions the same as a pre-USB-C cable and a simple (basically passive) converter can dole out USB and DisplayPort data, this one is a complex bus, and both ends need to have a Thunderbolt controller. The Thunderbolt 3 bus can carry PCI Express and DisplayPort data (USB is provided by the controller presenting a hot-plug root hub; this hot-plug nature brings a bevy of problems), while the Thunderbolt 4 bus can carry PCI Express, DisplayPort, and USB data. (Footnote: Thunderbolt 4 is the same as USB4 with some optional features made mandatory. It is somewhat unlikely we will see non-TB4 USB4 controllers, so this is mostly pedantry: you can treat USB4 and TB4 as being the same.) The bus bandwidth in one direction is 20 Gbit/s or 40 Gbit/s, depending on cable length. When allocating this bandwidth, DisplayPort has priority. How much DisplayPort is on the bus in total is a bit confusing due to some history: laptops with one Thunderbolt port will most often put only one DisplayPort 1.2 connection on the bus, while laptops with two ports always have full Thunderbolt and will put two DisplayPort connections on the bus (very rarely, single-port laptops will do this too, mostly early workstations). This confusion goes away with Thunderbolt 4, which is always full. But the bus is never faster than 40 Gbit/s, so even if it is fed by two DP 1.3/1.4 connections, it still can't deliver more than 40 Gbit/s of data, where two independent DisplayPort cables would be able to deliver slightly more than 50 Gbit/s.
Finally, power. If only 5 V is required, resistors are used to signal how much power is requested. For 9/15/20 V, power is negotiated: the devices figure out which one is the source and which one is the sink; once that's done, the source communicates how much power it is able to provide. At most 3 A can be used normally; at 20 V, provided the right cable is present, 5 A is also a possibility. There is a separate wire for this communication. Using the same communication process, the data roles are decided: one end becomes upstream (think host), the other downstream (think peripheral). There's a sensible default where the upstream data role takes on the source power role, but this can be changed using the same negotiation.
Footnote: these are fixed power levels. Today some devices support the PPS (Programmable Power Supply) feature, which allows the sink to rapidly adjust the requested voltage and current as needed. This also requires eMarker cables.

And yes, cables are a mess. https://people.kernel.org/bleung/how-many-kinds-of-usb-c-to-...
So when stuff is pasted in from Word or whatever other weird Rich Text Enterprise application - what's actually doing the heavy lifting of translating all that \\rtf\\section\\block\\blockend\\secondend garbage into HTML? Is it the `sanitize-html` library here? Is it the browser? My first intuition would be "pasting a bunch of formatted stuff from Wordpad into an HTML document surely doesn't result in anything reasonable, right?" But then seeing the direct jump from "alright, stuff was pasted in, now we only want to allow these HTML tags" makes me think somewhere in here some magic happened to convert it.
I think that's probably the most interesting part here. What happens for images? Do they get converted into an `<img src="data:image/png;base64,...">` type of thing?
There is no substitute for building one yourself, but The Craft of Text Editing book has a lot of accumulated wisdom. It is Emacs-centric, but the basics are the same.
Aside from issues of synchronization, leap seconds, data container limitations and such, it's important to choose the correct kind of time to store, and the right kind depends on what the purpose of recording the time is.
There are three main kinds of time:
*Absolute Time*
Absolute time is a time that is fixed relative to UTC (or relative to an offset from UTC). It is not affected by daylight savings time, nor will it ever change if an area's time zone changes for political reasons. Absolute time is best recorded in the UTC time zone, and is mostly useful for events in the past (because the time zone is now fixed at the time of the event, so it probably no longer matters what specific time zone was in effect).
*Fixed Time*
Fixed time is a time that is fixed to a particular place, and that place has a time zone associated with it (but the time zone rules might change in the future, for example with daylight saving time or for political reasons). If the venue changes, only the time zone data needs to be updated. An example would be an appointment in London this coming October 12th at 10:30.
*Floating Time*
Floating (or local) time is always relative to the time zone of the observer. If you travel and change time zones, floating time changes zones with you. If you and another observer are in different time zones and observe the same floating time value, the absolute times you calculate will be different. An example would be your 8:00 morning workout.
*When to Use Each Kind*
Use whichever kind of time most succinctly and completely handles your time needs. Don't depend on time zone information as a proxy for a location; that's depending on a side effect, which is always brittle. Always store location information separately if it's important.
Examples:
Recording an event: Absolute
Log entries: Absolute
An appointment: Fixed
Your daily schedule: Floating
Deadlines: Usually fixed time, but possibly absolute time.
Proving "classical" propositions in an intuitionistic system is trivial. Intuitionistic logic can be viewed as an extension of classical logic with new "constructive OR" and "constructive EXISTS" operators. The classical operators are recovered via negation: NOT (NOT a AND NOT b) is classical OR, whilst NOT FORALL x (NOT p) is a classical existential quantifier.
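This encoding can be checked mechanically; a minimal Lean 4 sketch (theorem names are my own) showing that classical OR, encoded as a double negation, follows intuitionistically from either disjunct, with no excluded middle needed:

```lean
-- Classical OR encoded as ¬(¬a ∧ ¬b). Given a proof of a and a pair of
-- refutations, applying the first refutation to a yields False.
theorem classicalOr_inl {a b : Prop} (ha : a) : ¬(¬a ∧ ¬b) :=
  fun h => h.1 ha

theorem classicalOr_inr {a b : Prop} (hb : b) : ¬(¬a ∧ ¬b) :=
  fun h => h.2 hb
```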
Long ago, someone was archiving magnetic tapes at MIT, containing Lisp Machine backups, from longer ago. Or maybe they were TOPS-20 backups... my memory has faded.
Archiving here means using an old 9-track tape machine with a custom driver, to copy data off now read-once 9 and 7 track tapes. Read-once tapes, because after the tape goes by the read head, the tape's plastic backing goes one way, and the tape's rust goes another. The backing is rewound, and the rust makes a scattered little pile. The original driver would backup and retry on error. Scrubbing back and forth, back and forth. Which here, would be bad. But back to our story.
On one such longer-ago backup, was a core dump file. A core dump, with a snapshot of the frame buffer. A frame buffer showing someone's screen, at that moment of the core dump, so long ago. And at that moment, that someone was being pranked. Pranked by a program which would draw, crawling across the screen, a little spider. A little bitmap sprite bug. A bug, trapped by chance in a core dump, and preserved in rust.
Rubbish. Boats in the multi-tonne weight class, for example, are secured by tying knots every day. Spearfishers. Mountain climbers. I could go on.
Get some 2 mm Dyneema rope, which has an average breaking strength of 300 kg, run it through some D-rings and tie it with an Alpine Butterfly. You will be able to do pullups off that artwork if you want.
This is awesome! I've always wanted to try this. The only real complaint I have is that "da" is not actually a copula in the strictest sense. It's a contraction of "de aru". Similarly "na" is not a modifier. It's a contraction of "ni aru". "aru" is the verb which is the closest you get to a copula in Japanese - it means "it exists" for non-animate noun-phrases.
So if you say "sakana da", it does mean "It is fish", but so does just "sakana". The copula is implied. The "da" is completely optional and is actually only added for emphasis -- the literal translation is kind of like "That it is fish exists". In literary Japanese you would say "sakana de aru", "de" being the particle that links a verb to the means with which the verb is executed. For example "basu de iku" means "will go by bus" -- bus is the means by which we will go. In "sakana de aru" or "sakana da" we are basically saying that "fish" is the means of its existence.
The "na" modifier is also interesting. It is really "ni aru" where "ni" is essentially the "direction" in which something exists. "Something like a fish" would be "sakana no you". If you want to say "It is a fragrance something like a fish" you could say "sakana no you na kaori". Although I'm not aware of any modern Japanese that would express it like this, this is equivalent to "sakana no you ni aru kaori" -- "It is a fragrance that exists in the direction like a fish". Hopefully you can understand.
The interesting part of this is that adding "ni aru" to the end of a noun phrase just turns it into a verb phrase. And the even more interesting bit is that the only thing that can modify a noun phrase is a verb phrase.
But, you may have heard of "i-adjectives" -- these are adjectives that end in i. In actuality, these are not adjectives! They are verb phrases! So the word "cute" is "kawaii". However, the actual word is "kawai" and the inflection is "i". That's why when you want to say "not cute" it becomes "kawaiku nai" -- the "i" turns into "ku" because you are inflecting a verb.
This in turn is why you modify nouns directly with "i-adjectives": "kawaii sakana", or "cute fish". Other adjectives are actually noun phrases in Japanese: "yumei na sakana", or "famous fish". This is, again, exactly the same as "yumei ni aru sakana" -- "The fish exists in the direction of fame".
So the rules are even simpler than presented in this blog post.
By the way, for anyone trying to learn Japanese and who wants to go beyond phrase-book level: learn plain form first and polite form later (if ever). Japanese makes absolutely no sense if you learn polite form first. It's incredibly logical (even the polite form extensions) if you start with plain form.
So here's a "folk theorem" that explains why you should care about such abstract structures.
Take a 2x2 matrix with real elements (a b)(c d). If I take the space of all such matrices and put a uniform probability distribution over it, what is the probability of getting a matrix that is not invertible? Not invertible means det M = 0, i.e. ad - bc = 0. I can solve for a in terms of b, c, and d, which shows that the non-invertible matrices form a 3-dimensional surface in the 4-dimensional space, which has zero volume. So a matrix chosen at random is invertible with probability 1.
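A quick numerical illustration (a sketch only: it samples uniformly from [-1, 1] in each entry, since a uniform distribution over all of R^4 doesn't exist, and tests exact singularity in floating point):

```python
import random

random.seed(0)
trials = 100_000
singular = 0
for _ in range(trials):
    # Draw the four entries a, b, c, d of a random 2x2 matrix.
    a, b, c, d = (random.uniform(-1, 1) for _ in range(4))
    if a * d - b * c == 0.0:  # determinant exactly zero
        singular += 1
print(singular)  # almost surely 0: the singular set has zero volume
```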
The folk theorem is in analogy with this: say I have a set of elements {a, b, c, ...}, and I describe mappings of various kinds that take some subset of tuples of the elements into other subsets. You can draw it as a directed multigraph. If you consider the space of all such graphs, how many of them generate "rich" or "regular" structure? For example, having a system where my mapping is defined over all elements is actually pretty restrictive. For a set of N elements, there are N-1 + N-2 + ... + 1 sets that are not defined over all elements. As N gets large, this dwarfs my one regular version. As I put in more operations and go to infinite sets of elements and add more regularity properties, this imbalance grows. So systems that have rich, regular structure are a zero volume subspace of the set of all such systems.
Given this, suddenly the interest in the few dozen algebraic structures that algebraists of various kinds explore makes a lot more sense. They're following infinitely thin paths of structure through space. You start from a raw set with no operations. In one direction you trace through monoids, groupoids, semigroups, groups, rings, rigs, tropical rings, fields, vector spaces, modules, etc. In another you go through pre-orders, partial orders, chain complete partial orders, semilattices, lattices, etc. In another you go through categories, topological spaces, natural transformations, monads, arrows, topoi...
Roughly, the groups, rings, fields path takes you through values that have regularity that looks vaguely like numbers. Orders and lattices take you through things that look like decomposition into pieces. Categories to topoi take you through things that look like sequences of operations and transformation. That the latter might be of interest to a programmer is fairly obvious from this point of view. So when someone says a monad is interesting, what they are trying to tell you is that it is the relevant data structure for describing an imperative program the way an array or list is the relevant data structure for describing an ordered sequence of numbers.
The reason you care, then, is that once you have traced out these paths, when you are looking at a problem your brain will automatically try to draw you back towards the thread of regular structures. Sometimes you don't go to familiar ones. I ended up building an odd variation of lattices to describe genome annotations because of this, and it was an exercise in finding what about the domain drew me out of strict regularity rather than trying to find my way into it, which is a lot easier.
Similarly, the entirety of the literature on eventual consistency in databases can be summarized as "make your merge operation the meet of a semi-lattice." If you've traced through that particular thread, then you can immediately think in a deep way about what kind of structure eventual consistency has and what the variations on it are.
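A minimal sketch of that summary (a hypothetical toy, not from any particular database): a grow-only set replica whose merge is set union, the join of the subset lattice. Commutativity, associativity, and idempotence of the join are exactly what makes replicas converge regardless of the order or duplication of merges:

```python
class GSet:
    """Grow-only set replica: merge = union, the lattice join."""
    def __init__(self, items=()):
        self.items = frozenset(items)

    def add(self, x):
        # Local update: states only ever move "up" the lattice.
        return GSet(self.items | {x})

    def merge(self, other):
        # Union is commutative, associative, and idempotent,
        # so any merge schedule reaches the same least upper bound.
        return GSet(self.items | other.items)

a = GSet({1, 2}).add(3)
b = GSet({2, 4})
print(a.merge(b).items == b.merge(a).items)  # True: order doesn't matter
```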
The word "mindfulness" itself is a poor translation of the Pali word 'Sati'. The Pali word actually means 'remembering to be aware of the objects that your mind is attending to'. There is no word in Western languages that captures that phrase, and hence the word 'mindfulness' was used, conveying what 'Sati' means in a confusing manner.
I have been meditating for 1 to 1.5 hours every day for several years, and I finally got into a stage called the 1st Jhana, where your mind becomes temporarily free of all 'wants' and is completely at peace. In that state, awareness becomes super sharp, breathing becomes very shallow (fewer than 5 breaths per minute), and the experience of time distorts. Your awareness can clearly watch thoughts coming up like 'lava bubbles' from your subconscious into your consciousness. It is at that point you get a glimpse into 'anatta' (non-self): the idea that there is no controller (or soul, or self) creating ideas. It is an automatic process happening due to your past Karma (conditioning due to repeated practice).
It takes a lifetime to develop the wisdom and compassion that Buddha talked about. It cannot be understood purely using logic. You have to get the experience of a calm unbiased mind.
People like her are well-intentioned, but they should stop to consider the possibility that they may not understand what they are talking about as well as they think they do.
To the computer hackers who read these things: biology is a complex system, so complex that the most complicated system you have ever worked on seems very simple compared to it. When you read articles like this, be aware that the scientists have extensive training in working with complex systems, and publish exciting-sounding coherent narratives that are, at best, incorrect but useful working models (https://en.wikipedia.org/wiki/Not_even_wrong).
It's fun to speculate, but proving your case and making a real dent in human health problems is a lot harder than coming in late and saying "But wait, why don't you just..."
I heartily recommend going back to the great textbooks of these fields and reading them, rather than trying to understand things by dropping into the state of the art research (which is usually wrong, and hard to understand in detail).
Some books I recommend:
The Biology of Cancer (Weinberg). After you read this book you'll have a better understanding of why doctors and scientists cringe when people say "cure cancer".
Molecular Biology of the Cell (Alberts, et al.). After you read this book you'll have a much better understanding of the full complexity that scientists have to deal with in complex cellular systems.
Molecular Biology of the Gene (Watson, et al.). Can't say much about this book except that it's a classic reference.
What makes these three books exceptional is that they support all their factual claims with direct links to the papers that established the facts. And they provide you with the skills to evaluate modern research. But nothing compares to actually going to grad school and participating in the research: once you see how the grants are made, the experiments are run, and the papers are written, you'll understand why trying to understand biology by press release is like trying to understand assembly language by watching a Steve Jobs product announcement.
I sort of dislike this kind of "philosophical" introduction to calculus. Maybe I don't have the spirit of an artist.
The best introduction to calculus is the one by Gilbert Strang, who explains everything on the first page of his book (and the rest of the book is "just" examples).
The first half of the first page shows a drawing of the speedometer and odometer of a car and explains what they are, and that they are not independent but related in a special way. The second half of the first page says that differential calculus is the task of computing the speed from the distance, and integral calculus is the task of computing the distance from the speed. Then it says a lovely sentence: this is not an analogy, this is the real deal, we have already started with the subject, and this is actually all there is to it. Then in the rest of the page it explains in a couple of sentences how you can compute speeds from distances and vice versa, why you need a constant of integration, and so on. It also proves the fundamental theorem of calculus. The rest of the book consists of concrete examples and a few more constructions, up to Taylor series.
I did a project during my undergraduate degree in physics which involved interfering three planar waves at 60 degrees from one another to create a hexagonal intensity pattern. The interesting thing was that at each of the 6 corners of the hexagon was a singularity (optical vortex) where the phase was undefined. At these points, the phase space was shaped like a spiral staircase (screw dislocation) and particles suspended there could actually be rotated. It was like an “optical wrench” if you will.
On a small scale, planar waves can be modeled like flat sheets of paper traveling through space without any angular momentum (no twisting motion). Yet when these sheets hit an object from multiple angles with the right timing, they can actually cause the object to twist.
I taught this concept in my course on Programming Languages at Stanford. If you’d like a more pedagogic introduction to both session types and their implementation, check out the course notes: http://cs242.stanford.edu/f18/lectures/07-2-session-types.ht...
In addition, multiple works[0][1] have discovered that grid cell representations arise from regularized recurrent networks when provided relative inputs to predict absolute outputs.
I think what you did is rolled back a layer of abstraction your brain created for you. Sure, your eyeballs are high-sampling-frequency tongues swimming in a sea of light. But the brain processes those raw sensory readings, fuses them with a bunch of other things, and presents it as a view of the world, and then to simplify things, it tells itself that this is the world.
I'm actually super-interested in ways of building those layers of abstractions. Consider tool use. If you've used any tool a lot, you know the feeling of it becoming an extension of yourself. Proficient drivers don't think about working the pedals or turning the wheel, they just sort of are the car and will themselves to move where they want to go. As I type this comment, I don't think about my fingers pounding on the keyboard, I just will the words to appear on screen. The brain is good at papering over the mechanics of interacting with the world, and I think it's entirely feasible to start incorporating new senses if our devices can be designed around this purpose, instead of relying on explicitly displaying data for our eyes.
Long-time meditators are familiar with this. When you drill attention down to smaller and smaller time resolution, eventually you notice that attention flickers.
The Buddhist Abhidhamma describes this phenomenon. The smallest flicker of consciousness is called a citta.
I think 250 ms is closer to a vithi (a kind of small molecule of mind-moments) than to a citta.
> We have seen that the Abhidharma’s analysis of sentient experience reveals that what we perceive as a temporally extended, uninterrupted flow of phenomena is, in fact, a rapidly occurring sequence of causally connected consciousness moments or cittas
...
> The Sarvāstivādins use the term “moment” (kṣaṇa) in a highly technical sense as the smallest, definite unit of time that cannot be subdivided, the length of which came to be equated with the duration of mental events as the briefest conceivable entities. There is no Sarvāstivādin consensus on the length of a moment, but the texts indicate figures between 0.13 and 13 milliseconds in modern terms
(My personal subjective estimate is something like 40 ms.)
First, why "d"? Well, "d" is for "difference". As in: as x changes from x_1 to x_2, the difference (x_2 - x_1) -- when it's very small.
But wait, there's more!
The commonly used symbol for a finite difference like that is the Greek letter Delta: Δ
For a list of values x_1, x_2, x_3, x_4,.. we write Δx_i = (x_i - x_{i-1}). That is, Δx_i is the i'th change. (Side note: an airline had a marketing slogan Change is Delta, which some nerd must have been immensely proud of).
Ok, bear with me for a bit more!
The symbol we use for finite sums is Σ: we write
Σy_i = y_1 + y_2 + ... + y_n
That is, summing up small successive changes gives you the total change. Simple?
Now apply this to the situation where the small changes in the quantity you are looking at are proportional to changes in another:
Δy_i = Δx_i * f(x_i)
Say, y is position, x is time; then f(x_i) is the speed at time x_i: as time increases a little, so does your position; the ratio of the changes is the speed. Δy_i is how much you moved from time x_{i-1} to time x_i, which is proportional to change in time
Δx_i.
Note that f(x_i) = Δy_i/Δx_i here (speed = change in position div. by change in time).
Now write:
y_n - y_0 = ΣΔy_i = Σf(x_i)Δx_i
Again, just summing up small changes to get the net change.
NOW, what you've been waiting for!
Imagine you took infinitely many measurements. The changes become infinitely small, and the sum becomes a sum of infinitely many things.
We need new notation for this.
But let's keep it similar. Instead of using Greek letters, let's use the same letters... in Latin.
Δx becomes dx
ΣΔy becomes S dy
And, with some sloppy handwriting of the letter S, the net change equation becomes:
y_final - y_initial = ∫ f(x)dx
where
f(x) = dy/dx.
You now see that Σ and the sloppy S -- ∫ -- stand for Sum.
And that, my friend, is pretty much all there is to Calculus and its symbols, fundamentally[1].
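The net-change equation is easy to check numerically; a sketch with f(x) = dy/dx = 2x, where y = x^2, so the exact net change over [0, 1] is 1:

```python
n = 100_000
x0, x1 = 0.0, 1.0
dx = (x1 - x0) / n  # the Δx of each step

total = 0.0
for i in range(n):
    x = x0 + i * dx
    total += 2 * x * dx  # f(x_i) * Δx_i, one small change in y

print(total)  # close to 1.0; the gap shrinks as n grows
```

This is just the finite sum Σ f(x_i) Δx_i from above; letting n grow without bound is exactly what the ∫ notation stands for.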
The closest thing I've found to this is Pico-8: not a computer, a piece of software, but it's very cool. It has a simple text editor for writing Lua code, a simple sprite editor, and a simple sound editor, all for creating games. It's a nice little sandbox for playing with code and creating things.
Try getting a math professor to admit that he doesn't actually have an intuitive understanding of any of the stuff he teaches. I believe the vast majority probably don't have any such understanding, but they are understandably scared to admit it.
Mathematicians are famous for embracing the fact that they have no intuition. For example, here's a famous quote from Geoff Hinton:
To deal with hyper-planes in a 14-dimensional space, visualize a 3-D space and say 'fourteen' to yourself very loudly. Everyone does it.
And here's a famous quote, usually attributed to John von Neumann:
In mathematics, you never understand things; you just get used to them.
The idea behind both of these quips is that there is no intuitive way to visualize these weird mathematical objects. The main reason people struggle with math (in my experience) is that they assume there must be an intuition. There's not. Math is weird. And all mathematicians feel that way about math.
And then you get the power of regexes too. re2c will match an arbitrary set of regexes (including constant strings) by walking through the string byte-by-byte, a single time.
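Not re2c itself, but a rough Python analogy of the idea of matching several patterns in one pass over the input (with the caveat that Python's `re` is a backtracking engine, not a generated DFA like re2c emits):

```python
import re

# One alternation of named patterns, compiled once; finditer walks the
# input a single time, and lastgroup reports which pattern matched.
token_re = re.compile(r"(?P<num>\d+)|(?P<word>[a-z]+)|(?P<skip>\s+)")

def tokenize(s):
    out = []
    for m in token_re.finditer(s):
        if m.lastgroup != "skip":  # drop whitespace tokens
            out.append((m.lastgroup, m.group()))
    return out

print(tokenize("abc 123 xy"))
```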