THE MOMENTUM PARAMETER AND ITS ROLE IN TRAINING ARTIFICIAL NEURAL NETWORKS

Authors

  • Qutbiddinova Shahloxon Saydolimjon qizi, student, Farg‘ona davlat universiteti, qutbiddinovashahloxon@gmail.com
  • Tojimamatov Israil Nurmamatovich, senior lecturer, Department of Applied Mathematics and Informatics, Farg‘ona davlat universiteti, israiltojimamatov@gmail.com

Keywords:

Artificial intelligence, artificial neural networks, momentum parameter, gradient descent algorithm, optimization, deep learning.

Abstract

This paper analyzes the essence and significance of the momentum parameter used in artificial intelligence systems, specifically in the training of artificial neural networks. The efficiency of gradient descent algorithms is central to neural network optimization, and the momentum parameter plays an important role in accelerating their convergence and damping oscillations during training. The paper describes the working principle of the momentum parameter, how it differs from classical gradient descent, and its role in mitigating the local-minimum problem in deep neural networks. The results indicate that using momentum stabilizes the training process and improves model accuracy and generalization. This work is of current scientific relevance for applying effective optimization methods in artificial intelligence.
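To make the mechanism concrete, the classical (heavy-ball) momentum update discussed in the abstract is commonly written as

    v_{t+1} = μ·v_t − η·∇L(θ_t),    θ_{t+1} = θ_t + v_{t+1},

where η is the learning rate and μ ∈ [0, 1) is the momentum coefficient; setting μ = 0 recovers plain gradient descent. Below is a minimal Python sketch of this update on an ill-conditioned quadratic, a standard toy problem on which plain gradient descent oscillates along the steep direction; the function name and the test problem are illustrative and not taken from the paper itself.

import numpy as np

def sgd_momentum(grad, theta0, lr=0.02, mu=0.9, n_steps=200):
    """Gradient descent with classical (heavy-ball) momentum.

    grad   -- callable returning the gradient of the loss at theta
    theta0 -- initial parameter vector
    lr     -- learning rate (eta)
    mu     -- momentum coefficient, typically around 0.9
    """
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)           # velocity: decaying sum of past gradients
    for _ in range(n_steps):
        v = mu * v - lr * grad(theta)  # accumulate velocity
        theta = theta + v              # step along the accumulated direction
    return theta

# Ill-conditioned quadratic 0.5 * theta^T A theta; its gradient is A @ theta.
A = np.diag([1.0, 50.0])
theta_star = sgd_momentum(lambda th: A @ th, theta0=[5.0, 5.0])
print(theta_star)                      # approaches the minimum at the origin

On this example, with μ = 0.9 the iterates reach the minimum in far fewer steps than with μ = 0 at the same learning rate, illustrating the faster convergence and damped oscillation the abstract attributes to momentum.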

References

1. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

2. Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv:1609.04747.

3. Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR).

4. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533–536.

5. Sutskever, I., Martens, J., Dahl, G., & Hinton, G. (2013). On the importance of initialization and momentum in deep learning. International Conference on Machine Learning (ICML).

6. Nesterov, Y. (1983). A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27, 372–376.

7. Bottou, L. (2012). Stochastic gradient descent tricks. In G. B. Orr & K. Müller (Eds.), Neural Networks: Tricks of the Trade (2nd ed., pp. 421–436). Springer.

8. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.

9. Ruder, S. (2017). An overview of momentum and adaptive learning rate methods. arXiv:1710.03223.

10. Nielsen, M. (2015). Neural Networks and Deep Learning. Determination Press.

11. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

12. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., & Recht, B. (2017). The marginal value of adaptive gradient methods in machine learning. arXiv:1705.08292.

13. Smith, L. N. (2017). Cyclical learning rates for training neural networks. IEEE Winter Conference on Applications of Computer Vision (WACV).

14. Bengio, Y. (2012). Practical recommendations for gradient-based training of deep architectures. In G. Montavon, G. B. Orr, & K. Müller (Eds.), Neural Networks: Tricks of the Trade (2nd ed., pp. 437–478). Springer.

15. Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning (ICML).

16. Qutbiddinova, S. (2025). Sun’iy intellekt tizimlarida neyron tarmoqlar va optimallashtirish usullari [Article]. Uzbekistan: Najot Ta’lim, scientific publication.

Published

22-12-2025

How to Cite

MOMENT PARAMETRI VA UNING SUN’IY NEYRON TARMOQLARINI O‘QITISHDAGI AHAMIYATI. (2025). ZAMONAVIY ILM-FAN VA TADQIQOTLAR: MUAMMO VA YECHIMLAR, 2(5), 101-104. https://innoworld.net/index.php/ziftmy/article/view/1510