I'm using RandomForest and XGBoost for binary classification, and my task is to predict probabilities for each class. Since tree-based ensembles tend not to output well-calibrated probabilities, I used `CalibratedClassifierCV` from `sklearn.calibration`: I trained the RF on 40k samples, then fit the calibrator on a separate 10k samples (with the `cv="prefit"` option). My metric (area under the ROC curve) now shows a huge drop in performance. Is it normal for probability calibration to alter the base estimator's behavior?
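For reference, here is a minimal sketch of my setup. The synthetic data, split sizes, and RF hyperparameters are placeholders (my real dataset and parameters differ); the structure mirrors what I described above:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for my data: 40k to fit the forest,
# 10k to fit the calibrator, 10k held out for evaluation.
X, y = make_classification(n_samples=60_000, n_features=20, random_state=0)
X_fit, X_rest, y_fit, y_rest = train_test_split(
    X, y, train_size=40_000, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(
    X_rest, y_rest, train_size=10_000, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_fit, y_fit)
auc_raw = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])

# cv="prefit": the calibrator is fit on the held-out 10k only;
# the forest itself is not retrained.
cal = CalibratedClassifierCV(rf, cv="prefit", method="sigmoid").fit(X_cal, y_cal)
auc_cal = roc_auc_score(y_test, cal.predict_proba(X_test)[:, 1])

print(f"AUC raw: {auc_raw:.3f}, AUC calibrated: {auc_cal:.3f}")
```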
Edit: Since I'm minimizing log loss with my XGBClassifier, its output probabilities are already reasonably good compared to the RF's.