Performance Analysis of Optimization Algorithms on Stacked Autoencoder


Adem K., Kilicarslan S.

3rd International Symposium on Multidisciplinary Studies and Innovative Technologies, ISMSIT 2019, Ankara, Türkiye, 11 - 13 October 2019

  • Publication Type: Conference Paper / Full Text Conference Paper
  • DOI Number: 10.1109/ismsit.2019.8932880
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Keywords: Deep learning, Optimization, Stacked autoencoder
  • Sivas Cumhuriyet Üniversitesi Affiliated: No

Abstract

The stacked autoencoder (SAE) model, one of the deep learning methods, has been widely applied to one-dimensional data sets in recent years. In this study, a comparative performance analysis was carried out using the five most commonly used optimization techniques and two well-known activation functions in the SAE architecture. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RmsProp), Adaptive Moment Estimation (Adam), Adaptive Delta (Adadelta) and Nesterov-accelerated Adaptive Moment Estimation (Nadam) were used as optimization techniques, and Softmax and Sigmoid were used as activation functions. Two different data sets from the public UCI database were used. In order to verify the performance of the SAE model, experimental studies were performed on these data sets with each combination of optimization and activation techniques separately. As a result of the experimental studies, success rates of 88.89% and 85.19% were achieved on the Cryotherapy and Immunotherapy data sets, respectively, by using the Softmax activation function with the SGD optimization method on a three-layer SAE. Following the training phase, the adaptive optimization techniques Adam, Adadelta, Nadam and RmsProp were observed to have a weaker learning process than the stochastic method SGD.
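
The sketch below illustrates the kind of setup the abstract describes: a three-layer stacked autoencoder built by greedy layer-wise pretraining, finished with a Softmax output layer and fine-tuned with SGD. It is not the authors' implementation; the layer widths, learning rate, epoch counts, and the synthetic placeholder data are assumptions for illustration only, and the optimizer can be swapped for Adam, Adadelta, Nadam or RmsProp to reproduce the comparison.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.random((90, 7)).astype("float32")            # placeholder data; shapes are arbitrary
y = keras.utils.to_categorical(rng.integers(0, 2, 90), 2)

hidden_sizes = [16, 8, 4]                             # assumed widths of the three encoder layers
encoders, features = [], X
for width in hidden_sizes:
    # Greedy layer-wise pretraining: one autoencoder per layer,
    # trained to reconstruct the previous layer's features.
    inp = keras.Input(shape=(features.shape[1],))
    enc = layers.Dense(width, activation="sigmoid")(inp)
    dec = layers.Dense(features.shape[1], activation="sigmoid")(enc)
    ae = keras.Model(inp, dec)
    ae.compile(optimizer="sgd", loss="mse")
    ae.fit(features, features, epochs=20, batch_size=8, verbose=0)
    encoder = keras.Model(inp, enc)
    encoders.append(encoder)
    features = encoder.predict(features, verbose=0)

# Stack the pretrained encoders and add a Softmax output for the two classes.
clf_in = keras.Input(shape=(X.shape[1],))
h = clf_in
for encoder in encoders:
    h = encoder(h)
out = layers.Dense(2, activation="softmax")(h)
clf = keras.Model(clf_in, out)

# Fine-tune end to end with SGD (the best-performing optimizer in the study);
# the learning rate here is an assumption.
clf.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
            loss="categorical_crossentropy", metrics=["accuracy"])
clf.fit(X, y, epochs=50, batch_size=8, verbose=0)
```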