Journal «Современная Наука» (Modern Science)

COMPARISON OF THE CONVERGENCE RATE OF GRADIENT AND STOCHASTIC GRADIENT DESCENT IN TRAINING FULLY CONNECTED NEURAL NETWORKS

Verezubova N. (Candidate of Economic Sciences, Associate Professor, Moscow State Academy of Veterinary Medicine and Biotechnology named after K.I. Scriabin)

Sakovich N. (Doctor of Technical Sciences, Associate Professor, Bryansk State Agrarian University)

Chekulaev A. (Moscow State Academy of Veterinary Medicine and Biotechnology named after K.I. Scriabin)

This paper presents a comprehensive study of the relationship between the learning rate and the performance of optimization algorithms in machine learning problems. Particular attention is paid to a comparative analysis of classical and stochastic gradient descent, as well as modern modifications that use momentum, adaptive parameter tuning, and regularization. The study demonstrates non-trivial interactions between the learning rate and other hyperparameters, including the batch size. The practical value of the study is that the findings allow the training process of neural networks to be substantially optimized: they can inform the development of more efficient and adaptive methods for selecting an optimal hyperparameter configuration. This is especially relevant under limited computing resources, where optimization of the training process is a critical success factor. Understanding the relationship between the learning rate and other hyperparameters makes it possible to avoid a lengthy and costly search over configurations, reducing the time and resources required to train an effective machine learning model.
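To make the comparison concrete, below is a minimal Python/NumPy sketch (an illustration only, not the authors' experimental code) that trains a small fully connected network three ways: classical full-batch gradient descent, mini-batch stochastic gradient descent, and SGD with momentum. The synthetic data, layer sizes, learning rate, batch size, and momentum coefficient are all assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task (assumed for illustration): y = sin(x) plus noise.
X = rng.uniform(-3.0, 3.0, size=(512, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

def init_params(n_in=1, n_hidden=32, n_out=1):
    """A fully connected network with one tanh hidden layer."""
    return {
        "W1": 0.5 * rng.standard_normal((n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": 0.5 * rng.standard_normal((n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def loss_and_grads(p, X, y):
    """Mean squared error and its gradients via manual backpropagation."""
    h = np.tanh(X @ p["W1"] + p["b1"])
    pred = h @ p["W2"] + p["b2"]
    err = pred - y
    n = X.shape[0]
    d_pred = 2.0 * err / n                       # dL/d(pred)
    d_h = (d_pred @ p["W2"].T) * (1.0 - h ** 2)  # back through tanh
    grads = {
        "W2": h.T @ d_pred, "b2": d_pred.sum(axis=0),
        "W1": X.T @ d_h,    "b1": d_h.sum(axis=0),
    }
    return float(np.mean(err ** 2)), grads

def train(batch_size=None, lr=0.05, epochs=200, momentum=0.0):
    """batch_size=None -> classical full-batch gradient descent;
    otherwise mini-batch SGD. momentum=0 recovers the plain update."""
    p = init_params()
    v = {k: np.zeros_like(w) for k, w in p.items()}  # momentum buffers
    n = X.shape[0]
    history = []
    for _ in range(epochs):
        if batch_size is None:
            batches = [np.arange(n)]              # one exact gradient step
        else:
            idx = rng.permutation(n)              # many noisy steps
            batches = [idx[s:s + batch_size] for s in range(0, n, batch_size)]
        for b in batches:
            _, g = loss_and_grads(p, X[b], y[b])
            for k in p:
                v[k] = momentum * v[k] - lr * g[k]
                p[k] += v[k]
        loss, _ = loss_and_grads(p, X, y)         # full-data loss per epoch
        history.append(loss)
    return history

gd = train(batch_size=None)                  # gradient descent
sgd = train(batch_size=32)                   # stochastic gradient descent
sgdm = train(batch_size=32, momentum=0.9)    # SGD with momentum
print(f"final MSE  GD: {gd[-1]:.4f}  SGD: {sgd[-1]:.4f}  SGD+momentum: {sgdm[-1]:.4f}")

With batch_size=None the loop performs one exact gradient step per epoch, while a mini-batch setting performs many noisy steps per epoch; this is the convergence-rate trade-off the article examines. Varying lr and batch_size in the sketch is a simple way to observe the learning-rate/batch-size interaction mentioned in the abstract.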

Keywords: fully connected neural network, stochastic gradient descent, optimization, learning rate, local minima, loss function, model accuracy, influence of batch size.

 




Citation link:
Verezubova N., Sakovich N., Chekulaev A. COMPARISON OF THE CONVERGENCE RATE OF GRADIENT AND STOCHASTIC GRADIENT DESCENT IN TRAINING FULLY CONNECTED NEURAL NETWORKS // Современная наука: актуальные проблемы теории и практики. Серия: Естественные и Технические Науки. 2025. №08. С. 48–52. DOI: 10.37882/2223-2966.2025.08.05