Volume 5, Number 6 (2021)

Accelerating SegNet-Based Semantic Segmentation Using a Model Post-Pruning Strategy

Volume 5, Issue 6, December 2021  |  PP. 136-152  |  PDF (1352 K)  |  Pub. Date: December 21, 2021
DOI: 10.54647/isss12188

Author(s)
Wei Liu, School of Business Administration, Shandong University of Finance and Economics, Jinan 250014, China.

Abstract
Accelerating deep convolutional networks has recently attracted a great deal of attention, driven by the demands of real-time applications. SegNet is a representative deep convolutional network for semantic segmentation, and is also a comparatively small, memory- and time-efficient model. In this paper, we focus on accelerating SegNet-based semantic segmentation using a model post-pruning strategy. Although several methods have been proposed for accelerating deep models, including pruning and compressing the weights of each layer, irregular pruning may cause a loss of segmentation accuracy. To address this issue, we propose a post-pruning strategy for deep model compression, adapted from the technique commonly used to address over-fitting in decision trees. Unlike existing methods that employ irregular pruning, the proposed method can significantly improve the generalization ability of the compressed model. Inspired by the post-pruning method originally used in decision trees, our method minimizes a channel-pruning loss function. The compressed model is then retrained to further improve its semantic segmentation performance. Experimental results on two segmentation datasets show that our method obtains competitive results compared with existing methods in terms of reducing the computational burden and improving the generalization ability of the SegNet model.
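The abstract does not give the paper's exact channel-pruning loss, so as a rough illustration only, the sketch below shows the general structured channel-pruning idea that such methods build on: score each output channel of a convolutional layer by the L1 norm of its filter (as in Li et al. [19]) and keep only the highest-scoring fraction. The function names (`rank_channels`, `prune_channels`) and the L1 criterion are hypothetical stand-ins, not the method proposed in this paper.

```python
import numpy as np

def rank_channels(weights):
    # weights: conv kernel of shape (out_channels, in_channels, kH, kW).
    # Score each output channel by the L1 norm of its filter.
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def prune_channels(weights, prune_ratio):
    # Keep the (1 - prune_ratio) fraction of channels with the largest scores.
    scores = rank_channels(weights)
    n_keep = max(1, int(round(weights.shape[0] * (1.0 - prune_ratio))))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # indices of kept channels
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 3, 3))          # a toy 64-channel conv layer
pruned, kept = prune_channels(w, prune_ratio=0.5)
print(pruned.shape)  # (32, 3, 3, 3)
```

After pruning, the compressed model would be retrained (fine-tuned), as the abstract describes, to recover segmentation accuracy.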

Keywords
Deep learning, Convolutional neural networks, Semantic segmentation, Model pruning

Cite this paper
Wei Liu, Accelerating SegNet-Based Semantic Segmentation Using a Model Post-Pruning Strategy, SCIREA Journal of Information Science and Systems Science. Vol. 5, No. 6, 2021, pp. 136-152. https://doi.org/10.54647/isss12188

References

[1] J. Long, E. Shelhamer, T. Darrell, “Fully convolutional networks for semantic segmentation,” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
[2] V. Badrinarayanan, A. Kendall, R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
[3] O. Ronneberger, P. Fischer, T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, 2015.
[4] L.-C. Chen, Y. Zhu, G. Papandreou, et al., “Encoder-decoder with atrous separable convolution for semantic image segmentation,” In Proceedings of the European Conference on Computer Vision, pp. 801–818, 2018.
[5] G. Lin, A. Milan, C. Shen, “RefineNet: Multi-path refinement networks for high-resolution semantic segmentation,” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1925–1934, 2017.
[6] S. Anwar, K. Hwang, W. Sung, “Structured pruning of deep convolutional neural networks,” ACM Journal on Emerging Technologies in Computing Systems, vol. 13, no. 3, pp. 1–18, 2017.
[7] M. Jaderberg, A. Vedaldi, A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” In Proceedings of the British Machine Vision Conference, pp. 1–12, 2014.
[8] S. Han, J. Pool, J. Tran, et al., “Learning both weights and connections for efficient neural network,” In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.
[9] W. Wen, C. Wu, Y. Wang, et al., “Learning structured sparsity in deep neural networks,” In Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.
[10] J.M. Alvarez, M. Salzmann, “Learning the number of neurons in deep networks,” In Advances in Neural Information Processing Systems, pp. 2270–2278, 2016.
[11] Y. He, X. Zhang, J. Sun, “Channel pruning for accelerating very deep neural networks,” In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397, 2017.
[12] Z. Huang, N. Wang, “Data-driven sparse structure selection for deep neural networks,” In Proceedings of the European Conference on Computer Vision, pp. 304–320, 2018.
[13] G.J. Brostow, J. Shotton, J. Fauqueur, “Segmentation and recognition using structure from motion point clouds,” In Proceedings of the European Conference on Computer Vision, pp. 44–57, 2008.
[14] M. Denil, B. Shakibi, L. Dinh, “Predicting parameters in deep learning,” In Advances in Neural Information Processing Systems, pp. 2148–2156, 2013.
[15] E.L. Denton, W. Zaremba, J. Bruna, “Exploiting linear structure within convolutional networks for efficient evaluation,” In Advances in Neural Information Processing Systems, pp. 1269–1277, 2014.
[16] V. Lebedev, Y. Ganin, M. Rakhuba, “Speeding-up convolutional neural networks using fine-tuned CP-decomposition,” In Proceedings of the International Conference on Learning Representations, pp. 1–11, 2015.
[17] X. Zhang, J. Zou, X. Ming, “Efficient and accurate approximations of nonlinear convolutional networks,” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1984–1992, 2015.
[18] L. Breiman, J. Friedman, C.J. Stone, R.A. Olshen, CART: Classification and Regression Trees, Springer, 1984.
[19] H. Li, A. Kadav, I. Durdanovic, et al., “Pruning filters for efficient convnets,” In Proceedings of the International Conference on Learning Representations, pp. 1–11, 2016.
[20] S. Anwar, W. Sung, “Compact deep convolutional neural networks with coarse pruning,” In Proceedings of the International Conference on Learning Representations, pp. 1–10, 2016.
[21] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B, vol. 58, no. 1, pp. 267–288, 1996.
