Proceedings of the 2023 International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2023)

Optimization Model Performance through Pruning Techniques

Authors
Hanzhang Tang1, *
1American International School, Hong Kong, 999077, China
*Corresponding author. Email: 221115@ais.edu.hk
Available Online 14 February 2024.
DOI
10.2991/978-94-6463-370-2_6
Keywords
Pruning; Machine Learning; Optimization Method
Abstract

Due to their ability to automatically extract features, Deep Neural Networks (DNNs) have demonstrated unprecedented performance. Owing to this, over the past ten years a considerable number of DNN models have been combined with various Internet of Things (IoT) applications. However, deploying DNN models on resource-constrained IoT devices is often impractical because of their high computing, energy, and storage demands. Consequently, several pruning approaches have recently been put forward to reduce the storage and processing needs of DNN models. These pruning methods condense the DNN while limiting the loss of accuracy. This motivates us to present a thorough analysis of deep neural network compression methods. A thorough review of pruning techniques in the current literature, aimed at decreasing storage and computing requirements, is given. The currently used strategies are categorized into four groups: layer, channel, filter, and connection pruning. The difficulties that accompany each class of DNN pruning strategies are also covered. Finally, a brief summary of the ongoing work in each category is provided, along with a projection of network pruning's future evolution.
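As a concrete illustration of the connection-pruning family mentioned above, a minimal magnitude-based weight-pruning sketch in NumPy is shown below. This is a generic example of the technique, not the specific methods surveyed in the paper; the layer shape and sparsity target are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until roughly `sparsity`
    fraction of the weights are zero (unstructured/connection pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value across the whole tensor.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    # Keep only weights strictly above the threshold.
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative 64x64 weight matrix with a 90% sparsity target.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"achieved sparsity: {np.mean(pruned == 0):.2f}")
```

Channel and filter pruning follow the same idea at a coarser granularity: instead of ranking individual weights, whole filters or channels are ranked (e.g. by their L1 norm) and the lowest-scoring ones are removed, which shrinks the dense tensor shapes rather than just introducing zeros.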

Copyright
© 2024 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2023 International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2023)
Series
Advances in Intelligent Systems Research
Publication Date
14 February 2024
ISBN
978-94-6463-370-2
ISSN
1951-6851

Cite this article

TY  - CONF
AU  - Hanzhang Tang
PY  - 2024
DA  - 2024/02/14
TI  - Optimization Model Performance through Pruning Techniques
BT  - Proceedings of the 2023 International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2023)
PB  - Atlantis Press
SP  - 43
EP  - 54
SN  - 1951-6851
UR  - https://doi.org/10.2991/978-94-6463-370-2_6
DO  - 10.2991/978-94-6463-370-2_6
ID  - Tang2024
ER  -