I'm a PhD student who studies stuff similar to this (I work in complexity theory), and one of my favorite "conversation starters" when I meet other researchers is whether they think integer factorization is actually a computationally difficult task.
Perhaps surprisingly, most people don't have strong feelings either way, and a good number of researchers suspect the task is actually efficiently computable (i.e., in polynomial time). At least from the standpoint of complexity theory there aren't really any compelling reasons to think factoring is hard, beyond the fact that we haven't discovered an efficient algorithm for it yet.
Doesn't cracking this problem boil down to finding a pattern in prime numbers, which doesn't seem to exist? For the researchers who think the task is efficiently computable (i.e., in polynomial time), what's the reasoning behind that view?
The intuition that the difficulty of factoring comes from the mysteries surrounding prime numbers is compelling, but it is also somewhat muddied by other known results. For example, a famous result [1] showed that testing whether a given number is prime actually CAN be done efficiently (i.e., in polynomial time), unlike integer factorization (as far as we know).
Some discussion of why some believe factoring could be easy can be found here: [2]
So in integer factorization there are two different quantities that one might naturally want to denote by N: the number you want to factor, and the number of bits required to specify that number (i.e., the input length).
The convention is to let N be the number to be factored, in which case log_2(N) is the input length. Hence, an algorithm that runs in time "polynomial (in the input length)" would have to run in time O(log(N)^k) for some constant k.
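To make the distinction concrete, here's a toy sketch (mine, not from the paper): trial division "only" costs about N^(1/2) steps, which sounds tame until you notice that for a b-bit input that is about 2^(b/2) steps, i.e. exponential in the input length.

    # Toy illustration: trial division up to sqrt(N).
    # The loop runs ~sqrt(N) times, i.e. ~2^(b/2) times for a b-bit N,
    # so it is exponential in the input length even though the bound
    # looks harmless when written in terms of N itself.
    def smallest_factor(N):
        d = 2
        while d * d <= N:
            if N % d == 0:
                return d      # smallest prime factor of N
            d += 1
        return N              # no divisor found, so N is prime (or 1)

A genuinely polynomial-time factoring algorithm would instead need (log N)^O(1) steps, which is the kind of bound the primality test mentioned upthread achieves.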
Note that what makes this notable is that the time bound is proven (according to the paper, anyway). N^(1/5) is much slower than we normally think of GNFS as being, but the point is that GNFS isn't actually proven to run that fast.
Subexponential algorithms in general have long had proven running times. Dixon's method [1], in particular, has had a proven running time since 1980. The number field sieve itself has had some recent progress on that front [2].
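For readers who haven't seen these methods: Dixon's method, the quadratic sieve, and the number field sieve all funnel their work into producing a congruence of squares, and the actual split is a one-line gcd. Here is a minimal sketch (mine) of just that last step; the relation collection and linear algebra that make the methods subexponential are omitted.

    from math import gcd

    # Given x, y with x^2 = y^2 (mod N) but x != +/-y (mod N),
    # N divides (x - y)(x + y) without dividing either factor,
    # so gcd(x - y, N) is a nontrivial divisor of N.
    def split_from_squares(N, x, y):
        assert (x * x - y * y) % N == 0
        assert (x - y) % N != 0 and (x + y) % N != 0
        return gcd(x - y, N)

    # 32^2 - 10^2 = 924 = 12 * 77, so 32^2 = 10^2 (mod 77)
    print(split_from_squares(77, 32, 10))   # -> 11

The randomness mentioned below enters in how that x, y pair is manufactured.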
However, those algorithms are _probabilistic_, not _deterministic_, which is what sets this linked method apart.
This algorithm, much like Pollard-Strassen before it, is pretty much irrelevant to integer factorization in practice. But it is nevertheless remarkable to see some progress on deterministic factorization after such a long time.
The abstract says "computes the prime factorisation of a positive integer N", so it's not your fault you were confused. The abstract should be corrected.
I love a day when I see/understand an idea that is new to me.
This is an extremely well-written paper. A motivated high-school student could understand all of it with about a week of work.
The key idea here is in Lemma 3.3, which is a simplified presentation of Lehman's original idea. All of the "good" algorithms in this class exploit this observation.
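For anyone who wants the flavor without opening the paper: as I understand it, Lehman's method is a speedup of Fermat's classical difference-of-squares search, so here is that classical search as a minimal sketch. This is the ancestor of the idea, not Lemma 3.3 itself.

    from math import isqrt

    # Fermat's difference-of-squares search: for odd composite N, walk x
    # upward from ceil(sqrt(N)) until x^2 - N is a perfect square y^2;
    # then N = (x - y)(x + y). Fast when N has two factors of similar
    # size, painfully slow otherwise -- Lehman's trick (roughly) is to run
    # this on small multiples kN so that some multiple has balanced factors.
    def fermat(N):
        x = isqrt(N)
        if x * x < N:
            x += 1
        while True:
            y2 = x * x - N
            y = isqrt(y2)
            if y * y == y2:
                return x - y, x + y
            x += 1

    print(fermat(8051))   # -> (83, 97): 8051 = 90^2 - 7^2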
Hittmeir adds a new ingredient, namely applying "baby-step giant-step" searching, and Harvey makes a slightly better parameter choice and strategy than Hittmeir to get his result.
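If "baby-step giant-step" is unfamiliar: it's a generic square-root-style time/space trade-off, usually taught for discrete logarithms. The textbook version is below purely to show the shape of the trick; Hittmeir applies the same style of search in a different setting, and this is not his actual subroutine.

    from math import isqrt

    # Textbook baby-step giant-step: solve g^e = h (mod p) for e in
    # O(sqrt(p)) time and space, instead of O(p) time by brute force.
    def bsgs(g, h, p):
        m = isqrt(p - 1) + 1
        baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
        giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
        x = h % p
        for i in range(m):                           # giant steps h * g^(-i*m)
            if x in baby:
                return i * m + baby[x]
            x = (x * giant) % p
        return None

    print(bsgs(2, 9, 11))   # -> 6, since 2^6 = 64 = 9 (mod 11)

The point is only the balancing of a stored table of ~sqrt(range) entries against a scan of ~sqrt(range) steps, which is, as far as I can tell, the flavor of trade-off Hittmeir exploits.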
The new idea, and the value to me, is the concise explanation in Lemma 3.3.
A lot of the feedback is of the form, "who cares, since it doesn't push the state of the art in practical factoring", but for me, there is art in ideas like this. Lehman, Hittmeir and Harvey have put forth a really elegant and beautiful line of thought here---that is accessible!---and I hope you can enjoy it without too much worry about how much it is worth.
Does this have practical implications? As in, is this an algorithm that you'd run in practice, or is it mainly a theoretical bound due to constants hidden in the Landau symbol?
The General Number Field Sieve [1], referenced in the paper, is believed to have better asymptotic runtime performance (and has a number of efficient implementations), but its runtime is not proven. ECM [2] (also referenced in the paper) has better proven, but probabilistic, performance.
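For concreteness, the heuristic running time usually quoted for GNFS (conjectured, not proven) is

    exp( ((64/9)^(1/3) + o(1)) * (ln N)^(1/3) * (ln ln N)^(2/3) )

which is subexponential in the bit length of N, whereas the roughly N^(1/5) bound discussed here is exponential in the bit length but holds unconditionally and deterministically.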
So most likely, no. This is primarily of theoretical importance (it is now the fastest deterministic algorithm with a proven bound), but it is not an immediate performance gain over existing algorithms.
Sounds like GNFS is still better if you believe its conjectured computational complexity, but this still seems like a good advance. The paper's over my head after 10 minutes of reading, though.