There has been some discussion of it in the past, but it is not currently a high priority on our roadmap. If you can provide more details on what you're hoping to do with it and why Catboost is preferable to XGBoost/LightGBM for your use case, that would be very helpful for assessing if we should increase the priority.
Also, if other users have an interest in this, please upvote this issue to let us know that it's important to you.
Thank you for your reply.
CatBoost uses ordered target statistics and ordered boosting, which let it outperform other GBDT frameworks such as XGBoost, especially on categorical features.
According to Appendix A of the CatBoost paper, standard GBDT frameworks theoretically suffer from a prediction shift during boosting, whereas CatBoost does not.
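For context, here is a minimal sketch of the ordered target statistics idea: each row's category is encoded using only the target values of rows that precede it in a random permutation, so the encoding never leaks that row's own label. The function name, the smoothing prior, and its weight are illustrative assumptions, not CatBoost's actual implementation:

```python
import numpy as np

def ordered_target_statistics(categories, targets, prior=0.5, prior_weight=1.0):
    """Encode a categorical column with ordered target statistics.

    Each row only sees the targets of rows *preceding* it in a random
    permutation, which avoids the target leakage that causes prediction
    shift. `prior` and `prior_weight` smooth rare categories (assumed form).
    """
    rng = np.random.default_rng(42)
    perm = rng.permutation(len(categories))
    sums = {}    # running sum of targets seen so far, per category
    counts = {}  # running count of rows seen so far, per category
    encoded = np.empty(len(categories))
    for i in perm:
        c = categories[i]
        s, n = sums.get(c, 0.0), counts.get(c, 0)
        # smoothed target mean over the "history" only
        encoded[i] = (s + prior * prior_weight) / (n + prior_weight)
        sums[c] = s + targets[i]
        counts[c] = n + 1
    return encoded

cats = np.array(["a", "b", "a", "b", "a"])
y = np.array([1, 0, 1, 1, 0])
print(ordered_target_statistics(cats, y))
```

Because the running statistics depend on the permutation order, CatBoost averages over several permutations in practice; the single-permutation version above just shows the leakage-avoidance mechanism.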
Experimentally, the benchmarks published by CatBoost (https://catboost.ai/) show better results on several classification tasks.
My offline demo performs similarly: XGBoost AUC 0.9961 vs. CatBoost AUC 0.9980.
However, since CatBoost is only slightly better than XGBoost in my case, it may take interest from other users to raise the priority.
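For reference, here is a minimal sketch of the kind of comparison I ran. It uses synthetic data in place of my dataset (so the AUC numbers will differ), and the bucketing and hyperparameters are illustrative assumptions:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import xgboost as xgb
import catboost as cb

# Synthetic data; discretize a few columns into string buckets so
# CatBoost's native categorical handling has something to work with.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])
cat_cols = ["f0", "f1", "f2"]
for c in cat_cols:
    df[c] = pd.qcut(df[c], q=4, labels=False).astype(str)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, random_state=0)

# XGBoost needs numeric input, so one-hot encode the categorical columns
# and align the test columns to the training columns.
X_tr_x = pd.get_dummies(X_tr, columns=cat_cols)
X_te_x = pd.get_dummies(X_te, columns=cat_cols).reindex(
    columns=X_tr_x.columns, fill_value=0)
xgb_model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss")
xgb_model.fit(X_tr_x, y_tr)
print("XGBoost  AUC:", roc_auc_score(y_te, xgb_model.predict_proba(X_te_x)[:, 1]))

# CatBoost consumes the raw categorical columns directly.
cb_model = cb.CatBoostClassifier(iterations=200, verbose=False)
cb_model.fit(X_tr, y_tr, cat_features=cat_cols)
print("CatBoost AUC:", roc_auc_score(y_te, cb_model.predict_proba(X_te)[:, 1]))
```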