Definition of the metrics that can be used to evaluate models

class mAP_Metric[source]

mAP_Metric(iou_thresholds, recall_thresholds=None, mpolicy='greedy', name='mAP', remove_background_class=True)

Metric to calculate mAP for different IoU thresholds

Function to create mAP metrics

create_mAP_metric[source]

create_mAP_metric(iou_tresh, recall_thresholds, mpolicy, metric_name, remove_background_class=True)

Creates a function to pass into the learner for measuring mAP.

iou_tresh: float or np.arange, e.g. np.arange(0.5, 1.0, 0.05)
recall_thresholds: None or np.arange, e.g. np.arange(0., 1.01, 0.01)
mpolicy: str, 'soft' or 'greedy'
metric_name: str, name to display in fastai's recorder
remove_background_class: True or False, removes the first index before evaluation, as it represents the background class in our dataloader

Metric examples:
COCO mAP: set recall_thresholds=np.arange(0., 1.01, 0.01), mpolicy="soft"
VOC PASCAL mAP: set recall_thresholds=np.arange(0., 1.1, 0.1), mpolicy="greedy"
VOC PASCAL mAP in all points: set recall_thresholds=None, mpolicy="greedy"
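The IoU thresholds above are applied to pairwise overlaps between predicted and ground-truth boxes: a detection can only count as a true positive at threshold t if it overlaps an unmatched target with IoU >= t. As a rough illustration of the quantity being thresholded, here is a minimal, self-contained sketch of pairwise IoU for [x1, y1, x2, y2] boxes (the helper box_iou below is illustrative and not part of this module):

```python
import torch

def box_iou(boxes1, boxes2):
    # Pairwise IoU between two sets of [x1, y1, x2, y2] boxes,
    # returned as a (len(boxes1), len(boxes2)) tensor.
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = torch.max(boxes1[:, None, :2], boxes2[:, :2])   # intersection top-left
    rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])   # intersection bottom-right
    wh = (rb - lt).clamp(min=0)                          # zero if boxes don't overlap
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area1[:, None] + area2 - inter)

a = torch.tensor([[0., 0., 10., 10.]])
b = torch.tensor([[0., 0., 10., 10.], [5., 0., 15., 10.]])
print(box_iou(a, b))  # identical box -> 1.0, half-overlapping box -> 1/3
```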

Custom mAP metrics

First we create some predictions and targets. Note that our dataloader contains a background class with index 0, and all metrics remove the background class by default, so the first foreground class has index 1 and the number of classes is 2.

num_classes = 2
boxes = torch.tensor([
    [439, 157, 556, 241],
    [437, 246, 518, 351],
    [515, 306, 595, 375],
    [407, 386, 531, 476],
    [544, 419, 621, 476],
    [609, 297, 636, 392]])
labels = torch.ones(6, dtype=torch.long)

targs = [{"boxes": boxes, "labels": labels}]
targs
[{'boxes': tensor([[439, 157, 556, 241],
          [437, 246, 518, 351],
          [515, 306, 595, 375],
          [407, 386, 531, 476],
          [544, 419, 621, 476],
          [609, 297, 636, 392]]),
  'labels': tensor([1, 1, 1, 1, 1, 1])}]
boxes = torch.tensor([
    [429, 219, 528, 247],
    [433, 260, 506, 336],
    [518, 314, 603, 369],
    [592, 310, 634, 388],
    [403, 384, 517, 461],
    [405, 429, 519, 470],
    [433, 272, 499, 341],
    [413, 390, 515, 459]])
labels = torch.ones(8, dtype=torch.long)
scores = torch.tensor([0.460851, 0.269833, 0.462608, 0.298196, 0.382881, 0.369369, 0.272826, 0.619459])

preds = [{"boxes": boxes, "labels": labels, "scores": scores}]
preds
[{'boxes': tensor([[429, 219, 528, 247],
          [433, 260, 506, 336],
          [518, 314, 603, 369],
          [592, 310, 634, 388],
          [403, 384, 517, 461],
          [405, 429, 519, 470],
          [433, 272, 499, 341],
          [413, 390, 515, 459]]),
  'labels': tensor([1, 1, 1, 1, 1, 1, 1, 1]),
  'scores': tensor([0.4609, 0.2698, 0.4626, 0.2982, 0.3829, 0.3694, 0.2728, 0.6195])}]

VOC PASCAL

voc_pascal = create_mAP_metric(0.5, np.arange(0., 1.1, 0.1), "greedy", "VOC PASCAL mAP", 
                               remove_background_class=True)
voc_pascal.func(preds, targs, num_classes=num_classes)
tensor(0.5000)
voc_pascal_all_pnts = create_mAP_metric(0.5, None, "greedy", "VOC PASCAL mAP all points", 
                                        remove_background_class=True)
voc_pascal_all_pnts.func(preds, targs, num_classes=num_classes)
tensor(0.5000)
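With recall_thresholds=np.arange(0., 1.1, 0.1), the per-class AP is the classic 11-point interpolated average: for each recall threshold, take the best precision achieved at any recall at least that high, then average over the thresholds. A minimal sketch of that interpolation (the helper interpolated_ap and the toy precision/recall values are illustrative, not part of this module):

```python
import numpy as np

def interpolated_ap(precision, recall, recall_thresholds):
    # For each recall threshold take the maximum precision achieved at
    # recall >= threshold ("interpolated" precision), then average.
    interp = [precision[recall >= t].max() if (recall >= t).any() else 0.0
              for t in recall_thresholds]
    return float(np.mean(interp))

# Toy precision/recall points from a ranked list of 4 detections
# (TP, FP, TP, FP) against 4 ground-truth boxes.
precision = np.array([1.0, 0.5, 0.67, 0.5])
recall = np.array([0.25, 0.25, 0.5, 0.5])

# 11-point VOC interpolation (same thresholds as np.arange(0., 1.1, 0.1))
print(interpolated_ap(precision, recall, np.linspace(0., 1., 11)))  # ≈ 0.4555
```

Averaging the interpolated precision over fixed recall thresholds smooths the zig-zag of the raw precision/recall curve, which is why the "all points" variant (recall_thresholds=None) can give a slightly different value.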

COCO mAP

coco_map_50 = create_mAP_metric(0.5, np.arange(0., 1.01, 0.01), "soft", "COCO mAP@0.5", 
                                remove_background_class=True)
coco_map_50.func(preds, targs, num_classes=num_classes)
tensor(0.5000)
coco_map_50_95 = create_mAP_metric(np.arange(0.5, 1, .05), np.arange(0., 1.01, 0.01), "soft", "COCO mAP@[0.5:0.95]", 
                                remove_background_class=True)
coco_map_50_95.func(preds, targs, num_classes=num_classes)
tensor(0.1573)
test_close(voc_pascal.func(preds, targs, num_classes=2), 0.5, eps=1e-03)
test_close(voc_pascal_all_pnts.func(preds, targs, num_classes=2), 0.5, eps=1e-03)
test_close(coco_map_50.func(preds, targs, num_classes=2), 0.5, eps=1e-03)
test_close(coco_map_50_95.func(preds, targs, num_classes=2), 0.157, eps=1e-03)

Prebuilt metrics

There are several prebuilt metrics that you can use directly: