
Bad results - Investigate reason #11

@PavlosMelissinos

| Metric | IoU | Area | maxDets | Result |
|---|---|---|---|---|
| Average Precision | 0.50:0.95 | all | 100 | 0.001 |
| Average Precision | 0.50 | all | 100 | 0.004 |
| Average Precision | 0.75 | all | 100 | 0.000 |
| Average Precision | 0.50:0.95 | small | 100 | 0.000 |
| Average Precision | 0.50:0.95 | medium | 100 | 0.000 |
| Average Precision | 0.50:0.95 | large | 100 | 0.004 |
| Average Recall | 0.50:0.95 | all | 1 | 0.005 |
| Average Recall | 0.50:0.95 | all | 10 | 0.005 |
| Average Recall | 0.50:0.95 | all | 100 | 0.005 |
| Average Recall | 0.50:0.95 | small | 100 | 0.000 |
| Average Recall | 0.50:0.95 | medium | 100 | 0.001 |
| Average Recall | 0.50:0.95 | large | 100 | 0.019 |

These numbers come from the official MS-COCO evaluation script.

Setup: the full image is the input, and each pixel is classified with a one-hot vector of size 81 (indices 0 to 80 inclusive) that maps to the actual MS-COCO category ids. More specifically, index 0 is background, ..., index 12 corresponds to category id 13 (stop sign), ..., and index 80 corresponds to category id 90 (toothbrush). The output is the full image, not a crop. A script is then used to separate the pixels of each detected object. No classes were used in the evalCOCO.py script (useCats = False).

These are really bad scores, and at the moment I have no idea why that is. I'll push the changes soon.
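The "separate the pixels of each detected object" step mentioned above could be sketched as connected-component grouping over the per-pixel class map (names here are hypothetical, not the actual script). One caveat worth noting when debugging low AP: this approach merges touching instances of the same class into a single detection.

```python
# Toy sketch: turn a per-pixel class map into per-instance pixel sets by
# flood-filling connected components of the same class (4-connectivity).
# Index 0 is treated as background and skipped.
from collections import deque

def split_instances(class_map):
    """Return a list of (class_index, set_of_(y, x)_pixels) components."""
    h, w = len(class_map), len(class_map[0])
    seen = [[False] * w for _ in range(h)]
    instances = []
    for y in range(h):
        for x in range(w):
            cls = class_map[y][x]
            if cls == 0 or seen[y][x]:
                continue
            # BFS flood fill over same-class neighbours
            pixels, queue = set(), deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                pixels.add((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and class_map[ny][nx] == cls):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            instances.append((cls, pixels))
    return instances

# Two disjoint blobs of class 13 become two separate instances.
grid = [[0, 13, 13, 0],
        [0, 13,  0, 0],
        [0,  0,  0, 13]]
print([(c, len(p)) for c, p in split_instances(grid)])  # [(13, 3), (13, 1)]
```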

Which script do you use for evaluation, @athundt? If you have a working version, maybe I should just replace mine with it. Does yours work on MS-COCO?
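For comparison, a class-agnostic run with the official pycocotools `COCOeval` (the `useCats = False` setting mentioned above) could look like the sketch below. The file paths and function name are assumptions, and `pycocotools` must be installed; the import is deferred so the sketch stands on its own.

```python
# Minimal sketch of class-agnostic segmentation evaluation with pycocotools.
def evaluate_coco(ann_file, results_file, use_cats=False):
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO(ann_file)                 # ground-truth annotations
    coco_dt = coco_gt.loadRes(results_file)  # detections in COCO results format
    coco_eval = COCOeval(coco_gt, coco_dt, iouType='segm')
    coco_eval.params.useCats = 1 if use_cats else 0  # 0 = ignore category ids
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                    # prints the AP/AR table format above
    return coco_eval.stats
```

With `use_cats=False`, a detection can match a ground-truth object of any class, so these scores should be an upper bound on the class-aware ones; if they are still near zero, the problem is likely in the mask/box extraction rather than the classification.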
