Research and Implementation of Multi-scene Image Semantic Segmentation based on Fully Convolutional Neural Network
- DOI
- 10.2991/icmeit-19.2019.27
- Keywords
- Computer vision; Deep learning; FCN; Semantic segmentation.
- Abstract
With the rapid development of deep neural networks, image recognition and segmentation have become important research topics in computer vision in recent years. This paper proposes an image semantic segmentation method based on Fully Convolutional Networks (FCN), which combines deconvolution layers with the convolutional layers converted from the fully connected layers of a traditional Convolutional Neural Network (CNN). A labeled multi-scene image dataset is used to train the model, and the trained model is applied to pixel-level segmentation of images containing different targets; a purpose-built test module visualizes the results by coloring the segmentation output for the test-set images. During training, two modes with different parameters are used to achieve faster and better convergence, and mini-batches are adopted to accommodate large datasets. Finally, a comparison between the test-set segmentation results and the ground-truth images shows that the trained fully convolutional network model segments targets in different scene images with high validity and robustness.
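For concreteness, the sketch below illustrates the architecture the abstract describes: the fully connected layers of a classification CNN are converted into convolutional layers, and a deconvolution (transposed convolution) layer upsamples the coarse score map back to input resolution for pixel-level prediction. This is a minimal, illustrative FCN-32s-style example in PyTorch, not the paper's released code; the VGG-16 backbone, the class count, and all layer names are assumptions.

```python
import torch
import torch.nn as nn
import torchvision

class SimpleFCN(nn.Module):
    """Illustrative FCN-32s-style network (assumed details, not the paper's code):
    the classifier's fully connected layers become convolutions, and a
    transposed convolution upsamples the coarse scores to input resolution."""
    def __init__(self, num_classes=21):
        super().__init__()
        vgg = torchvision.models.vgg16(weights=None)
        self.features = vgg.features  # conv backbone, downsamples by 32x
        # "Convolutionalized" classifier: fc6/fc7 re-expressed as conv layers
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=7, padding=3),
            nn.ReLU(inplace=True), nn.Dropout2d(),
            nn.Conv2d(4096, 4096, kernel_size=1),
            nn.ReLU(inplace=True), nn.Dropout2d(),
            nn.Conv2d(4096, num_classes, kernel_size=1),  # per-location class scores
        )
        # Deconvolution layer: undoes the 32x downsampling of the backbone
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=64, stride=32,
                                           padding=16, bias=False)

    def forward(self, x):
        h = self.features(x)     # (N, 512, H/32, W/32)
        h = self.classifier(h)   # (N, C, H/32, W/32) coarse score map
        return self.upsample(h)  # (N, C, H, W) pixel-level scores

model = SimpleFCN(num_classes=21)
scores = model(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 21, 224, 224])
```

A per-pixel argmax over the class dimension of `scores` then yields the label map that a test module, such as the one the abstract mentions, would color for visualization.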
- Copyright
- © 2019, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
- Cite this article
TY - CONF
AU - Fangzhou Yu
PY - 2019/04
DA - 2019/04
TI - Research and Implementation of Multi-scene Image Semantic Segmentation based on Fully Convolutional Neural Network
BT - Proceedings of the 3rd International Conference on Mechatronics Engineering and Information Technology (ICMEIT 2019)
PB - Atlantis Press
SP - 156
EP - 161
SN - 2352-538X
UR - https://doi.org/10.2991/icmeit-19.2019.27
DO - 10.2991/icmeit-19.2019.27
ID - Yu2019/04
ER -