
[Solved] UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples

Hello guys, how are you all? Hope you are all fine. Today I got the following error in Python: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples. So here I am, explaining all the possible solutions.

Without wasting your time, let's start this article and solve this error.

How UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples Error Occurs?

I got this warning while computing an averaged F-score with scikit-learn (for example, metrics.f1_score with average='weighted'). It occurs when some label that appears in y_test is never predicted in y_pred: the F-score for that label is undefined, so scikit-learn sets it to 0.0 and warns you about it.
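Here is a minimal sketch that reproduces the warning (the y_test and y_pred arrays below are made up purely for illustration):

import numpy as np
from sklearn import metrics

y_test = np.array([0, 1, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 0, 1, 0])  # label 2 is never predicted

print(metrics.f1_score(y_test, y_pred, average='weighted'))
# UndefinedMetricWarning: F-score is ill-defined and being set to 0.0
# in labels with no predicted samples.

Label 2 appears in y_test but never in y_pred, so scikit-learn emits the warning while computing the weighted average.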

How To Solve UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples Error?

  1. Restrict the score to labels that were actually predicted

    Pass labels=np.unique(y_pred) to metrics.f1_score so that labels which were never predicted are left out of the average. See Solution 1 below.

  2. Mute the warning with precision_recall_fscore_support

    Use the warn_for argument of metrics.precision_recall_fscore_support to suppress the warning for a single call. See Solution 2 below.

Solution 1

As mentioned in the comments, some labels in y_test don’t appear in y_pred. Specifically in this case, label ‘2’ is never predicted:

>>> set(y_test) - set(y_pred)
{2}

This means that there is no F-score to calculate for this label, and thus the F-score for this label is considered to be 0.0. Since you requested an average of the scores, you must take into account that a score of 0 was included in the calculation, and this is why scikit-learn is showing you that warning.
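To see where that 0 comes from, you can ask for the per-label scores instead of an average (continuing the hypothetical arrays from the sketch above):

per_label = metrics.f1_score(y_test, y_pred, average=None)
print(per_label)  # [0.8 0.8 0. ] -- the 0.0 entry belongs to label 2

The weighted average then folds that 0.0 in: (0.8 * 2 + 0.8 * 2 + 0.0 * 2) / 6 ≈ 0.533, with each label weighted by its support of 2.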

This brings me to why you are not seeing the warning a second time. As I mentioned, this is a warning, which is treated differently from an error in Python. The default behavior in most environments is to show a specific warning only once. This behavior can be changed:

import warnings
warnings.filterwarnings('always')  # "error", "ignore", "always", "default", "module" or "once"

If you set this before importing the other modules, you will see the warning every time you run the code.
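If you only want the filter to apply to a specific block of code rather than globally, the standard-library catch_warnings context manager is an alternative (a sketch reusing the arrays from above):

import warnings

with warnings.catch_warnings():
    warnings.simplefilter('always')  # show the warning every time it triggers in this block
    score = metrics.f1_score(y_test, y_pred, average='weighted')

Once the with block exits, the previous warning filters are restored automatically.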

There is no way to avoid seeing this warning the first time, aside from setting warnings.filterwarnings('ignore'). What you can do is decide that you are not interested in the scores of labels that were not predicted, and then explicitly specify the labels you are interested in (that is, labels that were predicted at least once):

>>> metrics.f1_score(y_test, y_pred, average='weighted', labels=np.unique(y_pred))
0.91076923076923078

The warning is not shown in this case.

Solution 2

The accepted answer already explains well why the warning occurs. If you simply want to control the warnings, you can use precision_recall_fscore_support. It offers a (semi-official) argument warn_for that can be used to mute the warnings:

(_, _, f1, _) = metrics.precision_recall_fscore_support(y_test, y_pred,
                                                        average='weighted', 
                                                        warn_for=tuple())
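For context, here is a self-contained version of that call, reusing the hypothetical arrays from Solution 1. Note that when average is not None, the returned support is None, which is why it is discarded here:

from sklearn import metrics

precision, recall, f1, _ = metrics.precision_recall_fscore_support(
    y_test, y_pred, average='weighted', warn_for=tuple())
print(f1)  # same weighted F-score as before, but without the UndefinedMetricWarning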

Summary

That's all about this issue. I hope these solutions helped you a lot. Comment below with your thoughts and queries, and let us know which solution worked for you. Thank you!
