
Variable selection of regularized stochastic gradient descent in logistic regression

Volume 7, Issue 2, April 2022 | PP. 38-44 | Pub. Date: May 18, 2022
DOI: 10.54647/mathematics11319

Author(s)
Ping Guo, College of Mathematics and Statistics, Guangxi Normal University, Guilin, Guangxi, China

Abstract
In the modern big-data environment, stochastic gradient descent (SGD) is a widely used method for training neural networks, processing large-scale data sets, and solving optimization problems. The existing literature on SGD mainly considers stopping conditions for the parameter iteration. In practice, however, some unimportant parameters do not take the value 0 throughout the iteration, so even when the stopping condition is reached it remains unclear which parameters are important. We study variable selection for SGD parameter iteration with an L1 penalty in the generalized linear model, taking logistic regression as an example. Monte Carlo simulations and a real-data application illustrate the consistency of the variable selection. The results show that models built from the selected variables achieve high accuracy.

Keywords
SGD; Lasso; Logistic regression; Variable selection

Cite this paper
Ping Guo, Variable selection of regularized stochastic gradient descent in logistic regression, SCIREA Journal of Mathematics, Vol. 7, No. 2, 2022, pp. 38-44. https://doi.org/10.54647/mathematics11319

