I'm not sure it is quite comparable, as the lottery ticket hypothesis postulates that for every neural net with n connections/weights, there is one with n-1 connections that retains its accuracy. So it doesn't learn by pruning; it just becomes more efficient by being smaller. That's at least how I understood it.
Your version would imply, by induction, that a neural net with zero connections could compete with GPT-3. I quote:
"The Lottery Ticket Hypothesis: A randomly-initialized, dense neural network contains a subnetwork that is initialized such that — when trained in isolation — it can match the test accuracy of the original network after training for at most the same number of iterations." - Frankle & Carbin (2019, p. 2)
Yeah, that's why it is merely a hypothesis. I think the Lottery Ticket Hypothesis goes further and states that this subnetwork also has a subnetwork that matches test accuracy, up to some degree. Otherwise it wouldn't be interesting. Of course there must be a limit...
The interesting bit is that you don't need to tweak the subnet, only remove the other noisy connections. If you remove capacity, at some point test accuracy has to go down. The hypothesis is about the idea that we maybe don't "train" the weights so much as "find" them, not about unlimited nesting (as far as I understand it, but with high confidence. Happy to be proven wrong, though.)
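To make the "find, don't train" idea concrete, here is a rough one-shot sketch in PyTorch. The train and find_ticket helpers, the generic data loader, and the single pruning round are my placeholders, not the paper's actual code: train the dense net, keep only the largest-magnitude weights per layer, reset the survivors to their original initialization, and retrain just that subnetwork.

```python
# Hypothetical sketch of one-shot "lottery ticket" pruning, not Frankle & Carbin's code:
# train dense, prune the smallest weights, rewind survivors to init, retrain the subnet.
import copy
import torch
import torch.nn as nn

def train(model, data_loader, epochs=1, lr=0.1, masks=None):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:                      # assumes (input, label) batches
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks is not None:                     # keep pruned weights at exactly zero
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p.mul_(masks[name])

def find_ticket(model, data_loader, prune_fraction=0.8):
    init_state = copy.deepcopy(model.state_dict())    # remember the "lottery draw"

    train(model, data_loader)                         # 1. train the dense network

    masks = {}                                        # 2. keep largest-magnitude weights per layer
    for name, p in model.named_parameters():
        if p.dim() < 2:                               # leave biases / norm params alone
            continue
        k = max(1, int(p.numel() * prune_fraction))   # number of weights to prune
        threshold = p.abs().flatten().kthvalue(k).values
        masks[name] = (p.abs() > threshold).float()

    model.load_state_dict(init_state)                 # 3. rewind survivors to their init values
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

    train(model, data_loader, masks=masks)            # 4. retrain only the sparse subnetwork
    return model, masks
```

Frankle & Carbin actually prune iteratively over several rounds rather than in one shot, but the reset-to-initialization step is the part that makes it "finding" a ticket rather than training a smaller net from scratch.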