From a974079bd5b867303b7c77442be14c11ca312482 Mon Sep 17 00:00:00 2001
From: Unai Gurbindo <77396585+unai-gurbindo@users.noreply.github.com>
Date: Tue, 8 Oct 2024 12:32:30 +0200
Subject: [PATCH] Update metrics.py
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

## Possible error in the `ap_per_class` function when calculating the precision-recall curve

Hello,

I have identified a possible error in the `ap_per_class` function when calculating the metrics for the precision-recall curve. Below is an example to explain it more clearly.

Assume the `unique_classes` list is `[0, 1, 2]` and the model never predicted class 1. In this case, on line 583 of the `utils/metrics.py` file, the loop skips to the next class (class 2) because `n_p == 0` for class 1. However, before moving on to the next class, a zero-filled array corresponding to class 1 should be appended to the `prec_values` list. Otherwise, the precision-recall curve computed for class 2 is stored at the row index of class 1, so the plotted curve is attributed to class 1 and the legend fails to show the correct label for class 2.

To fix this behavior, the zero-filled array should be appended before the `continue` statement:

```python
if n_p == 0 or n_l == 0:
    prec_values.append(np.zeros(1000))
    continue
```

I hope my explanation is clear and that I have correctly identified the issue. While this is a rare case, since the model must never predict one of the classes in the dataset, it is still worth handling to avoid errors in the precision-recall curve visualizations.

Best regards.
---
 ultralytics/utils/metrics.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ultralytics/utils/metrics.py b/ultralytics/utils/metrics.py
index b5988110..ae98fb71 100644
--- a/ultralytics/utils/metrics.py
+++ b/ultralytics/utils/metrics.py
@@ -581,6 +581,7 @@ def ap_per_class(
         n_l = nt[ci]  # number of labels
         n_p = i.sum()  # number of predictions
         if n_p == 0 or n_l == 0:
+            prec_values.append(np.zeros(1000))
             continue
 
         # Accumulate FPs and TPs
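
For readers following along, here is a minimal standalone sketch (not part of the patch) of the misalignment described above. The class list, the per-class prediction counts, and the placeholder curve values are toy assumptions chosen only to show how a skipped class shifts every later row of `prec_values`; it does not reproduce the real `ap_per_class` internals.

```python
"""Toy illustration of the prec_values index misalignment (hypothetical values)."""
import numpy as np

unique_classes = [0, 1, 2]               # classes present in the labels (assumed)
n_preds_per_class = {0: 5, 1: 0, 2: 7}   # class 1 is never predicted (assumed)


def build_prec_values(pad_missing: bool) -> list:
    """Mimic the per-class loop: one precision curve per class that has predictions."""
    prec_values = []
    for c in unique_classes:
        if n_preds_per_class[c] == 0:
            if pad_missing:
                # Proposed fix: keep one row per class so indices stay aligned
                prec_values.append(np.zeros(1000))
            continue  # original behaviour: the row for this class is silently dropped
        # Placeholder curve standing in for np.interp(x, mrec, mpre)
        prec_values.append(np.full(1000, 0.1 * (c + 1)))
    return prec_values


without_fix = build_prec_values(pad_missing=False)
with_fix = build_prec_values(pad_missing=True)

# Without the fix, the curve computed for class 2 sits at row index 1, so a legend
# built from unique_classes labels it as class 1; with the fix the rows line up.
print(len(without_fix), "rows without fix:", [round(r[0], 1) for r in without_fix])
print(len(with_fix), "rows with fix:   ", [round(r[0], 1) for r in with_fix])
```

Running the sketch prints two rows without the fix but three with it, which is the same off-by-one-row effect that makes the plotted class 2 curve appear under the class 1 label.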