We present a systematic method for designing driving-scene tasks and generating virtual datasets for intelligent vehicle testing research.
To fully test the vision algorithms of intelligent vehicles, changing conditions should be treated as an important test dimension. In artificial scenes, every condition can be modified "on demand" by changing the materials or the shaders in the scripts.
Ground-truth annotations are essential for designing and evaluating computer vision algorithms. We use Unity3D to automatically generate accurate ground-truth labels for depth, optical flow, object tracking, object detection, instance segmentation, and semantic segmentation.
Currently, the ParallelEye-CS dataset consists of 17,450 frames taken from artificial scenes, covering virtual training data and normal, environmental, and difficult tasks.
ParallelEye-CS 2018 is a realistic and challenging virtual driving-scene dataset for testing the visual intelligence of intelligent vehicles.
PLEASE READ THESE TERMS CAREFULLY BEFORE DOWNLOADING THE PARALLELEYE-CS DATASET. DOWNLOADING OR USING THE DATASET MEANS YOU ACCEPT THESE TERMS.
We provide one .tar archive per type of data, as described below. Our indexes always start from 00001.

ParallelEye-CS_rgb_2018: Each area is simply a folder; the archive contains the original RGB images.

ParallelEye-CS_motgt_2018: The archive contains the ground truth for Object Detection (2D) and Multi-Object Tracking. The ground truth for each area is a CSV-like text file.
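Purely as an illustration, the following Python sketch parses such a CSV-like file, assuming MOT-style columns (frame index, track ID, then a 2D bounding box); the actual column layout is documented inside the archive and may differ.

import csv
from collections import defaultdict

def load_mot_ground_truth(path):
    """Parse a CSV-like tracking ground-truth file into per-frame records.

    Assumes hypothetical MOT-style columns: frame, track_id, x, y, w, h.
    Verify the real layout against the notes shipped with the archive.
    """
    frames = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue  # skip blank lines and comments
            frame, track_id = int(row[0]), int(row[1])
            x, y, w, h = map(float, row[2:6])
            frames[frame].append({"id": track_id, "bbox": (x, y, w, h)})
    return frames

# Example (hypothetical file name): boxes = load_mot_ground_truth("area_00001.txt")[1]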
ParallelEye-CS_semantic_2018 & ParallelEye-CS_inst_2018: These archives contain the ground truth for semantic and instance-level segmentation. The per-pixel segmentation ground truth is encoded as per-frame .png files (standard 8-bit precision per channel).
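Since the masks are ordinary 8-bit PNGs, they can be inspected with any image library. A minimal sketch using Pillow and NumPy (the file name is hypothetical; whether a value is a class ID or an instance ID depends on which archive the file comes from):

import numpy as np
from PIL import Image

def summarize_mask(png_path):
    """Print the distinct label values in a segmentation PNG and their pixel counts."""
    mask = np.array(Image.open(png_path))
    if mask.ndim == 3:  # color-encoded labels: treat each pixel as one RGB triple
        values, counts = np.unique(mask.reshape(-1, mask.shape[-1]), axis=0, return_counts=True)
    else:               # single-channel label IDs
        values, counts = np.unique(mask, return_counts=True)
    for v, c in zip(values, counts):
        print(f"label {v}: {c} pixels")

# summarize_mask("00001.png")  # hypothetical file name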
ParallelEye-CS_flow_2018: The archive contains the optical flow ground truth, i.e., the pattern of apparent motion of objects, surfaces, and edges in the scene caused by the relative motion between the camera and the scene.
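The exact flow encoding is defined by the archive itself. Purely as an illustration, the sketch below decodes a flow map assuming the KITTI convention (16-bit PNG, u/v stored as 64*value offset by 2^15, third channel a validity mask); this convention is an assumption, not a statement about ParallelEye-CS, so adjust it to the format notes in the archive.

import cv2
import numpy as np

def read_flow_kitti_style(png_path):
    """Decode a 16-bit flow PNG assuming the KITTI convention (an assumption).

    u = (raw_u - 2**15) / 64.0, v likewise; the third channel marks valid pixels.
    """
    raw = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)  # uint16, channels in BGR order
    if raw is None or raw.dtype != np.uint16 or raw.ndim != 3:
        raise ValueError(f"{png_path} is not a 16-bit three-channel PNG")
    valid = raw[:, :, 0] > 0  # BGR: channel 0 holds the validity mask
    flow = (raw[:, :, 2:0:-1].astype(np.float32) - 2**15) / 64.0  # (u, v)
    return flow, valid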
All rights to the ParallelEye-CS Dataset are reserved by the Parallel Vision Technology Innovation Center. The dataset is free for academic research, and your cooperation with us is appreciated. Feel free to contact us if you have any questions.
If the ParallelEye-CS Dataset is used in your research, please cite the following papers:
1. Kunfeng Wang, Chao Gou, Nanning Zheng, James M. Rehg, and Fei-Yue Wang, "Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives," Artificial Intelligence Review, vol. 48, no. 3, pp. 299-329, Oct. 2017.
2. Xuan Li, Yutong Wang, Kunfeng Wang, Lan Yan, and Fei-Yue Wang, "The ParallelEye-CS Dataset: Constructing Artificial Scenes for Evaluating the Visual Intelligence of Intelligent Vehicles," in Proc. IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 37-42.
3. Xuan Li, Kunfeng Wang, Yonglin Tian, Lan Yan, Fang Deng, and Fei-Yue Wang, "The ParallelEye Dataset: A Large Collection of Virtual Images for Traffic Vision Research," IEEE Transactions on Intelligent Transportation Systems, 2018, pp. 1-13 (early access).
4. Yonglin Tian, Xuan Li, Kunfeng Wang, and Fei-Yue Wang, "Training and Testing Object Detectors with Virtual Images," IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 2, pp. 539-546, Mar. 2018.