
How to get feature importance in xgboost?

Hello guys, how are you all? Hope you are all fine. Today we are going to learn how to get feature importance in XGBoost in Python. I will explain all the possible methods here.

Without wasting your time, let's start this article.

Table of Contents

How to get feature importance in xgboost?

  1. Method 1: the native Booster API — bst.get_score(importance_type='gain')
  2. Method 2: the sklearn API (XGBoost >= 0.81) — clf.get_booster().get_score(importance_type="gain")

Method 1

In your code you can get the importance of each feature as a dict:

bst.get_score(importance_type='gain')

>>{'ftr_col1': 77.21064539577829,
   'ftr_col2': 10.28690566363971,
   'ftr_col3': 24.225014841466294,
   'ftr_col4': 11.234086283060112}

Explanation: The Booster returned by the train() API has a get_score() method defined as:

get_score(fmap='', importance_type='weight')

  • fmap (str (optional)) – The name of feature map file.
  • importance_type
    • ‘weight’ – the number of times a feature is used to split the data across all trees.
    • ‘gain’ – the average gain across all splits the feature is used in.
    • ‘cover’ – the average coverage across all splits the feature is used in.
    • ‘total_gain’ – the total gain across all splits the feature is used in.
    • ‘total_cover’ – the total coverage across all splits the feature is used in.

Method 2

Using sklearn API and XGBoost >= 0.81:

clf.get_booster().get_score(importance_type="gain")

or

regr.get_booster().get_score(importance_type="gain")

For the returned dict to be keyed by your actual column names (rather than generic f0, f1, …), X must be a pandas.DataFrame when you call regr.fit (or clf.fit).

Summary

That's all about this issue. I hope these methods helped you. Comment below with your thoughts and queries, and let us know which method worked for you. Thank you.
