It would be interesting if they compared training speed with CatBoost [0]; a rough sketch of that kind of comparison is below.
I remember seeing a paper where they managed to avoid getting stuck in a local optimum in terms of the number of learners, so the more trees you add, the better the result.
The logloss results seem to confirm there's a superior tree algorithm at work in CatBoost.
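For what it's worth, here's a rough, non-rigorous way to get a feel for the speed difference (synthetic data, untuned hyperparameters, and the two libraries grow trees differently, so take the numbers loosely):

    # Rough timing sketch, not a proper benchmark: train each library on the
    # same synthetic data with loosely comparable settings and time it.
    import time

    import lightgbm as lgb
    from catboost import CatBoostClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=50_000, n_features=50, random_state=0)

    cb = CatBoostClassifier(iterations=200, depth=6, learning_rate=0.1, verbose=False)
    t0 = time.time()
    cb.fit(X, y)
    print(f"CatBoost: {time.time() - t0:.1f}s")

    lg = lgb.LGBMClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
    t0 = time.time()
    lg.fit(X, y)
    print(f"LightGBM: {time.time() - t0:.1f}s")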
CatBoost benefits from its categorical feature transform.
LightGBM is also working on better categorical feature support (https://github.com/Microsoft/LightGBM/issues/699). I think LightGBM's accuracy will be comparable with CatBoost's once that work is finished.
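As a minimal sketch (toy data and made-up column names), this is how each library is told which columns are categorical today; CatBoost consumes raw strings directly, while LightGBM wants pandas "category" dtype or integer codes:

    import lightgbm as lgb
    import pandas as pd
    from catboost import CatBoostClassifier

    df = pd.DataFrame({
        "city":   ["NY", "SF", "NY", "LA", "SF", "NY"],
        "device": ["ios", "android", "ios", "web", "ios", "web"],
        "clicks": [3, 7, 1, 4, 2, 5],
    })
    y = [1, 0, 1, 0, 1, 0]

    # CatBoost takes raw string categories; just name the categorical columns.
    cb = CatBoostClassifier(iterations=50, verbose=False)
    cb.fit(df, y, cat_features=["city", "device"])

    # LightGBM auto-detects pandas "category" dtype columns as categorical
    # (strings must be converted first; integer codes also work).
    df_lgb = df.copy()
    df_lgb["city"] = df_lgb["city"].astype("category")
    df_lgb["device"] = df_lgb["device"].astype("category")
    lg = lgb.LGBMClassifier(n_estimators=50, min_child_samples=1)
    lg.fit(df_lgb, y)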
> While gradient boosting algorithms are the workhorse of modern industrial machine learning and data science, all current implementations are susceptible to a non-trivial but damaging form of label leakage. It results in a systematic bias in pointwise gradient estimates that lead to reduced accuracy
I see a GitHub link in there (https://github.com/arogozhnikov/infiniteboost), but that technique does not seem to be what's used in CatBoost (as someone here pointed out, the better logloss has more to do with CatBoost's handling of categorical features).
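The leakage the quote describes is easiest to see with target statistics for categorical features: if a category is encoded with the mean label over all rows, each row's own label leaks into its own feature. Here's a toy sketch of the leakage-free "ordered" idea (encode each row using only rows earlier in a random permutation); this is a simplification for illustration, not CatBoost's actual implementation:

    # Toy sketch of leakage-free (ordered) target encoding: each row's category
    # is encoded using only the labels of rows that came before it in a random
    # permutation, so a row's own label never leaks into its own feature.
    # Simplified illustration only, not CatBoost's actual implementation.
    import numpy as np

    def ordered_target_encode(categories, labels, prior=0.5, seed=0):
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(categories))
        sums, counts = {}, {}
        encoded = np.empty(len(categories))
        for i in order:
            c = categories[i]
            # Encode using statistics accumulated from earlier rows only.
            encoded[i] = (sums.get(c, 0.0) + prior) / (counts.get(c, 0) + 1.0)
            # Then update the running statistics with this row's label.
            sums[c] = sums.get(c, 0.0) + labels[i]
            counts[c] = counts.get(c, 0) + 1
        return encoded

    cats = ["a", "b", "a", "a", "b", "a"]
    ys = [1, 0, 1, 0, 1, 1]
    print(ordered_target_encode(cats, ys))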
[0]: https://catboost.yandex/