I've read many times that an MoE model should be roughly comparable to a dense model whose parameter count equals the geometric mean of the MoE's total and active parameters.
In the case of gpt-oss 120B that would mean sqrt(5 × 120) = sqrt(600) ≈ 24.5, so roughly a 24B dense model.
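A quick sanity check of that arithmetic, using the rounded 120B total / 5B active figures above (Python, numbers in billions):

```python
import math

def dense_equivalent(total_params_b: float, active_params_b: float) -> float:
    """Geometric-mean rule of thumb: dense-equivalent size in billions of parameters."""
    return math.sqrt(total_params_b * active_params_b)

# gpt-oss 120B: ~120B total, ~5B active per token (rounded)
print(dense_equivalent(120, 5))  # ~24.5 -> roughly a 24B dense model
```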
Not sure there is one formula, because there are two different cases:
1) Performance-constrained, like the NVIDIA Spark with 128GB or the AGX with 64GB.
2) Memory-constrained, like consumer GPUs.
In the first case MoE is a clear win: the model fits and runs faster. In the second case dense models will produce better results, and if the performance in tokens/sec is acceptable, they are the better choice.
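To make the memory side of this concrete, here is a rough weight-footprint estimate (weights only, ignoring KV cache and activations; 4-bit quantization is an illustrative assumption, not a claim about any specific build):

```python
def weight_footprint_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights in GB: params * bits / 8."""
    return params_b * bits_per_weight / 8

# Illustrative comparison: a 120B-total MoE vs a ~24B dense model at 4-bit.
print(weight_footprint_gb(120, 4))  # ~60 GB  -> fits in 128GB unified memory, not on a 24GB consumer GPU
print(weight_footprint_gb(24, 4))   # ~12 GB  -> fits on a 24GB consumer GPU
```

Same rule-of-thumb quality as the geometric-mean formula, but it shows why the 128GB boxes and the consumer GPUs end up in different regimes.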