Whitened gradient descent, a new updating method for optimizers in deep neural networks
Optimizers are vital components of deep neural networks that perform the weight updates. This paper introduces a new updating method for optimizers based on gradient descent, called whitened gradient descent (WGD). The method is easy to implement, can be used with any optimizer based on the gradient descent algorithm, and does not significantly increase the training time of the network. It smooths the training curve and improves classification metrics. To evaluate the proposed algorithm, we performed 48 different tests on two datasets, CIFAR-100 and Animals-10, using three network architectures: DenseNet-121, ResNet-18, and ResNet-50. The experiments show that using WGD in gradient-descent-based optimizers improves classification results significantly. For example, integrating WGD into the RAdam optimizer increased the accuracy of DenseNet from 87.69% to 90.02% on the Animals-10 dataset.
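The abstract does not spell out the whitening step itself. As a rough illustration of how a gradient transformation of this kind can be plugged into any gradient-descent-based optimizer, the following PyTorch sketch assumes "whitening" means standardizing each parameter's gradient (zero mean, unit variance) before the optimizer step; the function name whiten_gradients and this standardization rule are illustrative assumptions, not the paper's published update.

import torch

def whiten_gradients(model: torch.nn.Module, eps: float = 1e-8) -> None:
    # Assumed whitening step: standardize each gradient tensor in place.
    # The actual WGD update rule is not given in this abstract.
    for p in model.parameters():
        if p.grad is None or p.grad.numel() < 2:
            continue
        g = p.grad
        p.grad = (g - g.mean()) / (g.std() + eps)

# Typical use with any gradient-descent-based optimizer, e.g. RAdam:
#   optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)
#   loss.backward()
#   whiten_gradients(model)   # transform gradients before the update
#   optimizer.step()
#   optimizer.zero_grad()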
-
Fire and Smoke Segmentation using FireNet Combined with UNet3+
A. Eskandari *, H. Khosravi
International Journal of Engineering, Oct 2025 -
A Transformation in Sentiment Analysis: A Novel Structure in Language Model Architecture
*, Tahoora Ramezani Moghaddam
Journal of New Technologies in Electrical and Computer Engineering, Spring 2025 -
A Deep Learning Model for Intrusion Detection Systems Using Convolutional Layers and Principal Component Analysis
Seyed Mohammad Javadi Moghaddam*, Tahoora Ramezani Moghaddam
Journal of Computational Sciences, Spring 2025 -
A novel approach for vehicle identification based on image registration and deep learning
R. Asgarian Dehkordi, H. Khosravi *, H. Asgarian Dehkordi, M. Sheyda
Scientia Iranica, Mar-Apr 2024