> every ML 101 course teaches the difference between single-layer and "multi"-layer (usually 2-layer) perceptrons: linear separability, the XOR problem, etc.
Yeah, that's the point! ML material tends to start with simple problems like linear separability and XOR, dive into some math, and then pull a magical piece of Python code out of nowhere that solves one problem (e.g. MNIST) and only that problem.
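For what it's worth, the "magic" for the XOR case boils down to surprisingly little code. Here's a minimal sketch in plain NumPy: a hand-rolled two-layer network learning XOR, which a single linear layer provably can't separate. The layer size, learning rate, and iteration count are arbitrary illustrative choices, not from any particular course:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 4 hidden units -> 1 output (sizes chosen arbitrarily)
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # predictions

    # backward pass: squared-error loss, chain rule written out by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```

Drop the hidden layer (i.e. fit `sigmoid(X @ W + b)` directly) and no amount of training will fit all four points, which is the linear-separability argument the courses open with.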