swMATH ID: 35060
Software Authors: Ros, G., Sellart, L., Materzynska, J., Vazquez, D., Lopez, A.M.
Description: The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning many parameters from raw images; thus, a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome human labour, which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for semantic segmentation - in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.
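The abstract above centres on pixel-level class annotations and on measuring segmentation performance when synthetic data is added to training. As a minimal sketch of how such performance is typically quantified, the snippet below computes per-class intersection-over-union (IoU) over flat label maps; the class ids and toy label maps are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: per-class IoU, the standard metric for semantic
# segmentation of urban scenes. The class ids (0 = road, 1 = building,
# 2 = vegetation) and the tiny label maps are hypothetical examples.

def per_class_iou(pred, target, num_classes):
    """Return a list of IoU scores, one per class, over flat label maps."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        ious.append(inter / union if union else float("nan"))
    return ious

# Toy pixel-level annotations (ground truth) and a model's predictions.
target = [0, 0, 1, 1, 2, 2]
pred   = [0, 1, 1, 1, 2, 0]

ious = per_class_iou(pred, target, num_classes=3)
# ious[0] = 1/3, ious[1] = 2/3, ious[2] = 1/2
```

In practice the same computation would run over full-resolution annotation maps from SYNTHIA and the real-world datasets, with mean IoU reported across classes.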
Homepage: https://ieeexplore.ieee.org/document/7780721
Related Software: ImageNet; Cityscapes; KITTI; DeepLab; Mapillary Vistas; U-Net; Adam; Python; PointNet; OctNet; SPLATNet; ScanNet; PASCAL VOC; ShapeNet; BDD100k; ApolloScape; AlexNet; NYU Depth; RefineNet; TartanAir
Cited in: 3 Publications