Real urban road information is transformed into the corresponding virtual city road network. 3D models such as buildings, roads, trees, and lane lines are then inserted into the virtual city through rapid modeling.
To increase the diversity and fidelity of the artificial scenes, we control parameters in the script such as the material, the shader, and the simulated environmental conditions.
Ground-truth annotations are essential for the design and evaluation of computer vision algorithms. Unity3D is used to automatically generate accurate ground-truth labels for depth, optical flow, object tracking, object detection, instance segmentation, and semantic segmentation.
The ParallelEye dataset consists of 40251 frames of virtual images from 7 sequences taken by a virtual car moving through the city.
ParallelEye 2017 is a large-scale dataset intended for developing and evaluating a variety of computer vision models for object detection and tracking, semantic/instance segmentation, and related tasks.
PLEASE READ THESE TERMS CAREFULLY BEFORE DOWNLOADING THE PARALLELEYE DATASET. DOWNLOADING OR USING THE DATASET MEANS YOU ACCEPT THESE TERMS.
We provide one [.tar] archive per type of data, as described below. Indexes always start from 00001.
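As a minimal sketch of the frame-naming convention, the following assumes 5-digit zero-padded indexes starting at 00001 (the exact width and extension are assumptions, not confirmed by the dataset description):

```python
# Sketch: building per-frame file names for the sequences.
# ASSUMPTION: 5-digit zero-padded, 1-based indexes ("00001", "00002", ...).
def frame_name(index, ext="png"):
    """Return a zero-padded file name for a 1-based frame index."""
    return f"{index:05d}.{ext}"

print(frame_name(1))  # -> 00001.png
```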
The ground truth for each area consists of a CSV-like text file.
ParallelEye_rgb_2017: The compressed file contains the original images, with each area stored as a separate folder.
ParallelEye_motgt_2017: The compressed file contains the ground truth of Object Detection (2D) and Multi-Object Tracking.
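A parser for the CSV-like tracking ground truth might look like the sketch below. The column layout used here (frame, track id, box x/y/w/h, class name) is a hypothetical example for illustration only; check the files in ParallelEye_motgt_2017 for the actual format.

```python
import csv
import io

# ASSUMPTION: each line is "frame,track_id,x,y,w,h,class" -- this layout is
# illustrative, not the dataset's documented schema.
def parse_mot_gt(text):
    """Parse CSV-like tracking ground truth into a list of dicts."""
    rows = []
    for rec in csv.reader(io.StringIO(text)):
        frame, tid = int(rec[0]), int(rec[1])
        x, y, w, h = map(float, rec[2:6])
        rows.append({"frame": frame, "id": tid,
                     "bbox": (x, y, w, h),
                     "class": rec[6] if len(rec) > 6 else None})
    return rows

sample = "1,3,100.0,50.0,40.0,80.0,car\n2,3,104.0,52.0,40.0,80.0,car"
tracks = parse_mot_gt(sample)
```

Grouping the resulting records by `id` then yields per-object trajectories for tracking evaluation.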
ParallelEye_semantic_2017 & ParallelEye_inst_2017: The compressed file contains the ground truth of Semantic and Instance-level Segmentation. The per-pixel segmentation ground truth is encoded as per-frame .png files (standard 8-bit precision per channel).
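Since the segmentation ground truth is stored as 8-bit per-frame label images, per-class pixel statistics can be computed directly from the decoded arrays. The sketch below uses a tiny synthetic label map in place of a real file (which would be loaded with, e.g., `np.array(Image.open(path))`); the class ids shown are illustrative, not the dataset's actual encoding.

```python
import numpy as np

def class_histogram(label_map):
    """Return {class_id: pixel_count} for an integer per-pixel label map."""
    ids, counts = np.unique(label_map, return_counts=True)
    return dict(zip(ids.tolist(), counts.tolist()))

# Synthetic 8-bit label map standing in for a decoded .png mask.
# ASSUMPTION: the ids 0, 7, 26 are placeholders, not real class codes.
labels = np.array([[0, 0, 7],
                   [7, 7, 26]], dtype=np.uint8)
hist = class_histogram(labels)  # -> {0: 2, 7: 3, 26: 1}
```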
ParallelEye_flow_2017: The compressed file contains optical flow information, which is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene.
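Once decoded into a dense (H, W, 2) array of per-pixel (u, v) displacements, the flow field can be summarized with standard operations. How the dataset encodes flow on disk is not specified above, so loading is left out of this sketch.

```python
import numpy as np

def flow_magnitude(flow):
    """Per-pixel displacement magnitude of a dense (H, W, 2) flow field."""
    u, v = flow[..., 0], flow[..., 1]
    return np.sqrt(u ** 2 + v ** 2)

# Tiny synthetic flow field: one pixel moves 3 px right and 4 px down.
flow = np.zeros((2, 2, 2), dtype=np.float32)
flow[0, 0] = (3.0, 4.0)
mag = flow_magnitude(flow)  # mag[0, 0] == 5.0
```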
All rights of the ParallelEye Dataset are reserved by the Parallel Vision Technology Innovation Center. It is free for academic research, and your cooperation with us is appreciated. Feel free to contact us if you have any questions.
If the ParallelEye Dataset is used in your research, please cite the following papers:
1. Kunfeng Wang, Chao Gou, Nanning Zheng, James M. Rehg, and Fei-Yue Wang, "Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives," Artificial Intelligence Review, Oct. 2017, vol. 38, no. 3, pp. 299-329.
2. Xuan Li, Kunfeng Wang, Yonglin Tian, Lan Yan, and Fei-Yue Wang, "The ParallelEye Dataset: Constructing Large-Scale Artificial Scenes for Traffic Vision Research," IEEE ITSC 2017 Workshop on Transportation 5.0, Yokohama, Japan, October 2017.
3. Yonglin Tian, Xuan Li, Kunfeng Wang, and Fei-Yue Wang, "Training and Testing Object Detectors with Virtual Images," IEEE/CAA Journal of Automatica Sinica, March 2018, vol. 5, no. 2, pp. 539-546.