1. Detection Task

    For each of the three classes (“Car”, “Bus”, and “Truck”), predict the bounding boxes of all objects of that class (if any) in a test image. Each bounding box should be output with an associated real-valued confidence (between 0 and 1), so that a precision/recall curve can be drawn.
    The raw image set "ParallelEye_rgb_train.rar" and corresponding detection ground truth set "ParallelEye_motgt_train.rar" can be downloaded from the homepage.
    Figure 1 shows an example of detection results. The raw image is shown at the top and the detection bounding boxes are shown at the bottom.

Figure 1. Example of detection results. Top: raw image. Bottom: detection bounding boxes.

2. Evaluation

2.1 Data Supplied

    The test set ("detection_testing_set.rar" on the homepage) contains 10220 RGB images in the same PNG format as the training images.

2.2 Submission of Results

    The output from your system should be a single txt file in which each line describes one detection output by the detector, in the following format:
    <image id> <confidence> <left> <top> <right> <bottom> <class>
where (left,top)-(right,bottom) defines the bounding box position of the detected object. A greater confidence value signifies greater confidence that the detection is correct. An example file excerpt is shown below. Note that there can be multiple objects in a test image.
    151 0.702732 89 112 516 466 Car
    152 0.870849 373 168 488 229 Car
    153 0.852346 407 157 500 213 Bus
    153 0.914587 2 161 55 221 Car
    153 0.532489 175 184 232 201 Car
    154 0.657233 110 124 234 297 Truck
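    As a minimal sketch (not part of the official toolkit), the submission format above can be parsed as follows; the function name `parse_detections` is a hypothetical helper:

```python
def parse_detections(lines):
    """Parse detection lines in the format
    '<image id> <confidence> <left> <top> <right> <bottom> <class>'."""
    detections = []
    for line in lines:
        parts = line.split()
        if len(parts) != 7:
            continue  # skip blank or malformed lines
        image_id = int(parts[0])
        confidence = float(parts[1])
        left, top, right, bottom = map(int, parts[2:6])
        detections.append((image_id, confidence, left, top, right, bottom, parts[6]))
    return detections
```

    For example, parsing the first line of the excerpt above yields the tuple (151, 0.702732, 89, 112, 516, 466, "Car").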

2.3 Evaluation method

    The principal quantitative metric for the detection task is the mean average precision (mAP), the mean of the per-class average precisions (AP). AP is computed by sampling the monotonically decreasing (interpolated) precision curve at a fixed set of eleven uniformly spaced recall values 0, 0.1, 0.2, . . . , 1.
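    The 11-point sampling can be sketched as follows; this is an illustrative implementation under the assumption that `precisions` and `recalls` are parallel lists taken from the precision/recall curve, not the official scoring code:

```python
def average_precision(precisions, recalls):
    """11-point interpolated AP: at each recall threshold t in {0, 0.1, ..., 1},
    take the maximum precision achieved at any recall >= t, then average."""
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        p = max((p for p, r in zip(precisions, recalls) if r >= t), default=0.0)
        ap += p / 11
    return ap
```

    Taking the maximum precision at recall >= t is what makes the sampled curve monotonically decreasing.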
    Detections are considered true or false positives based on the area of overlap with ground truth bounding boxes. To be considered a true positive, the intersection-over-union (IoU) between the predicted bounding box Bp and ground truth bounding box Bgt must exceed 50% by the formula:

    IoU = area(Bp ∩ Bgt) / area(Bp ∪ Bgt) > 0.5

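    The IoU criterion can be computed as in the following sketch, assuming boxes are given as (left, top, right, bottom) tuples in pixel coordinates:

```python
def iou(bp, bgt):
    """Intersection-over-union of two boxes given as (left, top, right, bottom)."""
    ix = max(0, min(bp[2], bgt[2]) - max(bp[0], bgt[0]))  # intersection width
    iy = max(0, min(bp[3], bgt[3]) - max(bp[1], bgt[1]))  # intersection height
    inter = ix * iy
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_gt = (bgt[2] - bgt[0]) * (bgt[3] - bgt[1])
    union = area_p + area_gt - inter
    return inter / union if union > 0 else 0.0
```

    A detection counts as a true positive when iou(Bp, Bgt) > 0.5 for some unmatched ground truth box of the same class.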
    Multiple detections of the same object in an image are regarded as false detections, e.g., 5 detections of a single object will be counted as 1 correct detection and 4 false detections – it is the responsibility of the participant’s system to filter multiple detections from its output.
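    A standard way to filter duplicate detections before submission is non-maximum suppression; the sketch below is one common greedy variant (assumed here, not mandated by the challenge), operating on (confidence, left, top, right, bottom) tuples for a single class and image:

```python
def non_max_suppression(dets, iou_threshold=0.5):
    """Greedy NMS: keep detections in decreasing confidence order, dropping any
    detection whose IoU with an already-kept box exceeds the threshold.
    Each detection is a (confidence, left, top, right, bottom) tuple."""
    def iou(a, b):
        ix = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        iy = max(0, min(a[4], b[4]) - max(a[2], b[2]))
        inter = ix * iy
        area_a = (a[3] - a[1]) * (a[4] - a[2])
        area_b = (b[3] - b[1]) * (b[4] - b[2])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    kept = []
    for d in sorted(dets, key=lambda d: d[0], reverse=True):
        if all(iou(d, k) <= iou_threshold for k in kept):
            kept.append(d)
    return kept
```

    Applied to the 5-detections-of-one-object example above, only the highest-confidence box would survive, avoiding the 4 false detections.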
    Participants are expected to submit a single set of results per method employed; those who have investigated several algorithms may submit one result per method. Changes in algorithm parameters do not constitute a different method – all parameter tuning must be conducted using the training and validation data alone.