Unai Gurbindo a974079bd5
Update metrics.py
## Possible error in the `ap_per_class` function when calculating the precision-recall curve

Hello,

I have identified a possible error in the `ap_per_class` function when it computes the values for the precision-recall curve. Below is an example to explain it more clearly:

Let’s assume the `unique_classes` list is `[0, 1, 2]`, and the model never predicted class 1. In that case, at line 583 of `utils/metrics.py`, the loop skips straight to the next class (class 2) because `n_p == 0` for class 1.

However, before moving on to the next class, a zero-filled array for class 1 should be appended to the `prec_values` list. Otherwise `prec_values` and `unique_classes` fall out of alignment: the precision-recall curve for class 2 is plotted as if it belonged to class 1, and the legend never shows the correct label for class 2.
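
For illustration, here is a minimal, self-contained sketch of the misalignment. The variable names mirror the report, but the per-class prediction counts are invented and the curves are shortened to 4 points instead of the 1000-point interpolated curves used in `metrics.py`:

```python
import numpy as np

# Hypothetical setup mirroring the report: class 1 is never predicted.
unique_classes = [0, 1, 2]
n_predictions = {0: 5, 1: 0, 2: 3}

prec_values = []
for c in unique_classes:
    n_p = n_predictions[c]
    if n_p == 0:
        continue  # current behavior: nothing is appended for class 1
    # Stand-in for the real interpolated precision curve of class c.
    prec_values.append(np.full(4, float(c)))

# prec_values now holds 2 curves for 3 classes, so pairing
# prec_values[i] with unique_classes[i] mislabels class 2's curve:
for i, curve in enumerate(prec_values):
    print(f"legend: class {unique_classes[i]} -> curve for class {int(curve[0])}")
# legend: class 0 -> curve for class 0
# legend: class 1 -> curve for class 2
```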

To fix this behavior, the code should append such a placeholder array before the `continue` statement:

```python
if n_p == 0 or n_l == 0:
    # Append a placeholder curve so prec_values stays aligned with unique_classes
    prec_values.append(np.zeros(1000))
    continue
```
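
With this change, every class in `unique_classes` contributes exactly one entry to `prec_values` (a flat zero curve for classes with no predictions or no labels), so `prec_values[i]` always corresponds to `unique_classes[i]` and each curve is drawn under the correct legend label.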

I hope the explanation is clear and that I have identified the issue correctly. A model that never predicts one of the dataset's classes is a rare case, but it is still worth handling to keep the precision-recall curve visualizations correct.

Best regards.