Hacker News

If you don't mind, could you explain the practical difference or the reasoning behind such a requirement? Are there situations where boundaries appear to be asymptotic but the exact solution shows this not to be the case?



Once again, I don't understand the Math Overflow poster's exact situation.

But roughly speaking, imagine you have two functions f(t) and g(t), which are described in completely different ways, and you want to prove that f(t) = g(t). If you try and fail, then you might instead aim for a proof that the difference between f and g is bounded, or that f(t) = O(g(t)) and vice versa, or that the limit of the ratio between f and g is 1, or something along these lines.
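A toy illustration of this (my own example, not the Math Overflow poster's situation): the harmonic sum H(n) and ln(n) are defined in completely different ways and are never equal, but their difference is bounded (it converges to the Euler–Mascheroni constant, ~0.5772) and their ratio tends to 1 — exactly the kind of "partial result" described above:

```python
import math

def harmonic(n):
    # H(n) = 1 + 1/2 + ... + 1/n, a discrete sum
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    h, l = harmonic(n), math.log(n)
    # The difference stays bounded (it tends to the
    # Euler-Mascheroni constant), while the ratio tends to 1.
    print(n, h - l, h / l)
```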

In many cases, such partial results are also of interest. In general, partial successes in math are considered to be successes.

But in some cases, partial results aren't really considered all that interesting -- or perhaps are known already or can be obtained very easily.


I re-implemented a quasi-polynomial algorithm. Experimentally, it shows exponential behaviour. A back-of-the-envelope calculation shows this behaviour can continue until the input size is >> 10^21 before the asymptotic bound asserts itself. (For comparison, input size 30 is already infeasible.)
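This is not the actual bound of the algorithm in question, but here is a sketch of the general effect for a hypothetical quasi-polynomial bound of the shape 2^((log2 n)^c): since (log2 n)^c dominates n for a long stretch, an observed 2^n growth curve can stay below the proven bound for every feasible input size, and the crossover point where the bound finally forces the exponential trend to stop moves out very quickly as c grows:

```python
import math

def crossover(c):
    # Smallest power-of-two input size n at which 2^n exceeds the
    # hypothetical quasi-polynomial bound 2^((log2 n)^c), i.e. where
    # (log2 n)^c < n. Until this point, exponential-looking behaviour
    # is consistent with the bound. (Illustrative bound shape only.)
    n = 16.0
    while math.log2(n) ** c >= n:
        n *= 2
    return n

for c in (3, 7, 11):
    print(c, crossover(c))
```

With c = 3 the crossover is already around n = 1000, but with c = 11 it is past 10^20 — so a large constant in the exponent is enough to make a quasi-polynomial bound invisible at all practical input sizes.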


Which algorithm?


Parys' quasi-polynomial algorithm for solving parity games, https://arxiv.org/pdf/1904.12446.pdf

Note that another implementation doesn't show this behaviour on the family of inputs I use. The difference is an implementation detail that has no effect on correctness, so a corresponding family of bad inputs should exist for the other implementation as well.


I mean, maybe they need to exponentiate the bound or something. E.g. 2^(4n) is much, much worse than 2^n.

The sentence did strike me as a little weird, though.
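To make the 2^(4n) vs 2^n comparison concrete (my arithmetic, not the commenter's): 2^(4n) = (2^4)^n = 16^n, so the two bounds differ not by a constant factor but by a factor of 2^(3n), which is already around 10^27 at n = 30:

```python
# 2^(4n) = (2^4)^n = 16^n, so the gap to 2^n is itself exponential.
for n in (10, 30):
    assert 2 ** (4 * n) == 16 ** n
    print(n, (2 ** (4 * n)) // (2 ** n))  # ratio is 2^(3n)
```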





