It doesn’t exist. The explanation above is the result of spending almost all of my time for the last three years immersed in ML.
gwern helped too. He has an intuition for ML that I’m still jealous of.
Your best bet is to just start building things and worry about explanations later. It’s not far from the truth to say that even the most detailed explanation is still a long-form way of saying “we don’t really know.” Some people get upset and refuse to accept that, but I’ve always been in it for the ride more than the destination.
It’s never been easier to dive in. I’ve always wanted to write detailed guides on how to start, and how to navigate the AI space, but somehow I wound up writing an ML fanfic instead: https://blog.gpt4.org/jaxtpu
(Fun fact: my blog runs on a TPU.)
I’m increasingly of the belief that all you need is a strong desire to create things, and some resources to play with. If you have both of those, it’s just a matter of time — especially putting in the time.
That link explains how to get the resources. But I can’t help with how to get a desire to create things with ML. Mine was just a fascination with how strange computers become when you wire them up with a small dose of calculus, calculus I didn’t bother trying to understand until two years after I started.
(If you mean contrastive loss specifically, https://openai.com/blog/clip/ is decent. But it’s just a droplet in the pond of all the wonderful things there are to learn about ML.)
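If a concrete sketch helps, the core of a CLIP-style contrastive loss is small enough to fit in a few lines. This is a minimal numpy version, not OpenAI’s actual implementation: it assumes you already have a batch of image embeddings and text embeddings where row i of each is a matched pair, and it computes the symmetric cross-entropy over cosine similarities that the CLIP blog post describes.

```python
import numpy as np

def clip_contrastive_loss(image_embs, text_embs, temperature=0.07):
    """CLIP-style symmetric contrastive loss (minimal sketch).

    image_embs, text_embs: (batch, dim) arrays; row i of each is a matched pair.
    temperature: softmax temperature; 0.07 is a common choice, not a rule.
    """
    # L2-normalize so dot products become cosine similarities.
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)

    # (batch, batch) similarity matrix, scaled by temperature.
    logits = image_embs @ text_embs.T / temperature

    # The matched pair sits on the diagonal; every other entry is a negative.
    labels = np.arange(logits.shape[0])

    def cross_entropy(logits, labels):
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

When the matched pairs are identical and orthogonal to everything else, the loss goes to zero; when the embeddings are unrelated, it hovers around log(batch_size). That’s the whole trick: pull the diagonal up, push everything else down.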