
I have tried for a while to figure out how to "shut up" LightGBM. In particular, I would like to suppress the output LightGBM produces during training (i.e. the feedback on the boosting steps).

My model:

params = {
    'objective': 'regression',
    'learning_rate': 0.9,
    'max_depth': 1,
    'metric': 'mean_squared_error',
    'seed': 7,
    'boosting_type': 'gbdt'
}

gbm = lgb.train(params, lgb_train, num_boost_round=100000, valid_sets=lgb_eval, early_stopping_rounds=100)

I tried adding verbose=0 as suggested in the docs (https://github.com/microsoft/LightGBM/blob/master/docs/Parameters.rst), but this does not work.

Does anyone know how to suppress LightGBM output during training?

Peter

5 Answers


As @Peter has suggested, setting verbose_eval = -1 suppresses most of LightGBM's output.

However, LightGBM may still emit other warnings, e.g. No further splits with positive gain. These can be suppressed as follows:

lgb_train = lgb.Dataset(X_train, y_train, params={'verbose': -1}, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, params={'verbose': -1}, free_raw_data=False)
gbm = lgb.train({'verbose': -1}, lgb_train, valid_sets=lgb_eval, verbose_eval=False)
bradS

Solution for the sklearn API (checked on LightGBM v3.3.0):

import lightgbm as lgb

param = {'objective': 'binary', 'is_unbalance': 'true', 'metric': 'average_precision'}
model_skl = lgb.sklearn.LGBMClassifier(**param)

# early stopping and verbosity
# it should be 0 or False, not -1/-100/etc
callbacks = [lgb.early_stopping(10, verbose=0), lgb.log_evaluation(period=0)]

# train
model_skl.fit(x_train, y_train,
              eval_set=[(x_train, y_train), (x_val, y_val)],
              eval_names=['train', 'valid'],
              eval_metric='average_precision',
              callbacks=callbacks)

banderlog013

To suppress (most of the) output from LightGBM, the following parameters can be set.

Suppress warnings: 'verbose': -1 must be specified in the params dict.

Suppress output of training iterations: verbose_eval=False must be passed to lgb.train().

Minimal example:

params = {
    'objective': 'regression',
    'learning_rate': 0.9,
    'max_depth': 1,
    'metric': 'mean_squared_error',
    'seed': 7,
    'verbose': -1,
    'boosting_type': 'gbdt'
}

gbm = lgb.train(params,
                lgb_train,
                num_boost_round=100000,
                valid_sets=lgb_eval,
                verbose_eval=False,
                early_stopping_rounds=100)
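
Note: in newer LightGBM releases (4.x), the verbose_eval and early_stopping_rounds arguments were removed from lgb.train(). A rough sketch of an equivalent call using callbacks (the same lgb.early_stopping / lgb.log_evaluation approach as in the sklearn answer above), assuming the params, lgb_train and lgb_eval objects from the snippet above:

# Sketch for LightGBM >= 4.0: early stopping and silencing via callbacks
gbm = lgb.train(params,                  # params still contains 'verbose': -1
                lgb_train,
                num_boost_round=100000,
                valid_sets=lgb_eval,
                callbacks=[lgb.early_stopping(100, verbose=False),  # replaces early_stopping_rounds=100
                           lgb.log_evaluation(period=0)])           # replaces verbose_eval=False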
Peter

Follow these points (a minimal sketch using these settings is shown after the list).

  1. Use verbose=False in the fit method.
  2. Use verbose=-100 when you instantiate the classifier.
  3. Keep silent=True (the default).
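
A minimal sketch of those three points, with a synthetic dataset for illustration. It assumes an older LightGBM sklearn API (roughly 3.x or earlier); in 4.x the fit(verbose=...) argument and the silent constructor argument were removed in favour of the callbacks shown in the answer above.

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=7)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=7)

# Point 2: verbose=-100 when constructing the classifier.
# Point 3: silent is left at its default (True), so it is not passed explicitly.
clf = lgb.LGBMClassifier(verbose=-100)

# Point 1: verbose=False in fit() silences the per-iteration evaluation output.
clf.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)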
Ethan

I read all the answers and issues and tried all these approaches, yet LightGBM still outputs some info (which drives me crazy). If you want to completely suppress any output during training, try this:

import contextlib
import os
with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
    gbm = lgb.cv(param, lgb_dataset)
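
If something still gets through, Python warnings are typically written to stderr rather than stdout, so (an assumption on my part, not part of the original answer) redirecting stderr as well may help:

# Redirect both stdout and stderr to os.devnull (may not catch output written
# directly by the underlying C++ library).
with open(os.devnull, "w") as f, contextlib.redirect_stdout(f), contextlib.redirect_stderr(f):
    gbm = lgb.cv(param, lgb_dataset)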