We employ computer graphics and virtual reality technologies to construct realistic virtual worlds for visual computing research. For ParallelEye Challenge 2018, the visual computing tasks include object detection, object tracking, semantic segmentation, and depth prediction. Our goal is to provide large-scale, diversified virtual image datasets, in place of real image datasets, for evaluating computer vision methods.
Figure 1. Virtual camera in virtual scene.
To organize the challenge, we propose a new virtual image dataset called “ParallelEye”. To build it, we present a dataset generation pipeline that uses open-source street maps, computer graphics, virtual reality, and rule modeling to construct a large-scale virtual urban traffic scene. The virtual scene matches the real world well in terms of fidelity and geographic information. Within the scene, we flexibly configure the objects, the camera (including its position, height, and orientation), and the environmental conditions to collect diversified images. Beyond the raw images, we can easily obtain ground-truth annotations for each visual challenge: every image comes with object bounding boxes, tracking identities, semantic segmentation masks, depth maps, etc. The ParallelEye dataset can be used to train and test your visual computing models. For each visual challenge, we provide an evaluation standard and rank the submitted results accordingly.
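Since every image ships with several aligned ground-truth annotations, it is convenient to resolve all of them from a single frame identifier. The sketch below is a minimal, hypothetical helper; the folder names are taken from the download archives above, but the per-sequence layout, file stems, and extensions are assumptions, not the dataset's documented structure.

```python
from pathlib import Path

# Hypothetical layout: each modality lives in its own top-level folder,
# and files for one frame share a zero-padded stem (an assumption).
MODALITIES = {
    "rgb": ("ParallelEye_rgb_train", ".png"),
    "depth": ("ParallelEye_depth_train", ".png"),
    "semantic": ("ParallelEye_semantic_train", ".png"),
    "tracking": ("ParallelEye_motgt_train", ".txt"),
}

def ground_truth_paths(root: str, sequence: str, frame: int) -> dict:
    """Return the expected path of each annotation for one frame."""
    stem = f"{frame:06d}"  # assumed 6-digit zero-padded frame naming
    return {
        name: Path(root) / folder / sequence / (stem + ext)
        for name, (folder, ext) in MODALITIES.items()
    }

paths = ground_truth_paths("/data/ParallelEye", "04", 123)
print(paths["rgb"])
```

A loader built this way can simply skip any path that does not exist, which also handles sequences where some modalities are unavailable.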
The training data can be downloaded below and may be used freely to train your visual models. The test data are also provided here: download the test images, generate results in our defined formats, and upload them to our website for evaluation.
Training data sets
- ParallelEye_rgb_train.rar
- ParallelEye_depth_train.rar
- ParallelEye_motgt_train.rar
- ParallelEye_semantic_train.rar
Note: Ground-truth annotations for object detection and tracking are provided for all folders in "ParallelEye_rgb_train.rar", but depth and semantic segmentation annotations are provided only for folders "04", "05", and "06".
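The availability rule in the note above can be encoded directly, so a training script knows which annotation types to expect per folder. This is a minimal sketch; the function and set names are illustrative, not part of the dataset tooling.

```python
# Rule from the note: detection/tracking ground truth covers every
# training folder; depth and semantic segmentation cover only "04"-"06".
DEPTH_SEMANTIC_FOLDERS = {"04", "05", "06"}

def available_annotations(folder: str) -> set:
    """Annotation types provided for a given training folder."""
    kinds = {"detection", "tracking"}
    if folder in DEPTH_SEMANTIC_FOLDERS:
        kinds |= {"depth", "semantic"}
    return kinds

print(available_annotations("05"))  # all four annotation types
print(available_annotations("01"))  # detection and tracking only
```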