Neural Computing and Applications, vol. 34, no. 24, pp. 21729-21740, 2022 (SCI-Expanded)
Activation functions play an important role in producing the most appropriate output by processing the information entering the network in deep learning architectures. Deep learning architectures are widely used for analyzing large and complex data in areas such as image processing, time series, and disease classification. Choosing the appropriate architecture and activation function is an important factor in achieving successful learning and classification performance. Many studies have aimed to improve the performance of deep learning architectures and to overcome the vanishing gradient and negative-region problems of activation functions. A flexible and trainable fast exponential linear unit (P+FELU) activation function is proposed to overcome these problems. The proposed P+FELU activation function can achieve a higher success rate and faster computation time by incorporating the advantages of the fast exponential linear unit (FELU), exponential linear unit (ELU), and rectified linear unit (ReLU) activation functions. Performance evaluations of the proposed P+FELU activation function were carried out on the MNIST, CIFAR-10, and CIFAR-100 benchmark datasets. Experimental evaluations show that the proposed activation function outperforms the ReLU, ELU, SELU, MPELU, TReLU, and FELU activation functions and effectively improves the noise robustness of the network. Experimental results show that this activation function, with its “flexible and trainable” properties, can effectively prevent the vanishing gradient problem and make multilayer perceptron neural networks deeper.
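The abstract does not reproduce the activation's formula. As a rough illustration only, the sketch below implements a FELU-style activation (mathematically equivalent to ELU, but written with a base-2 exponential, which is cheaper to evaluate on most hardware) as a PyTorch module with trainable scalars; the parameter names `alpha` and `p` and the exact parameterization are assumptions made for this sketch, not the authors' P+FELU definition.

```python
import math

import torch
import torch.nn as nn


class PFELUSketch(nn.Module):
    """Illustrative FELU-style activation with trainable scalars.

    NOTE: a sketch, not the authors' exact P+FELU formulation.
    FELU is mathematically equivalent to ELU but uses a base-2 exponential:
        f(x) = x                          for x >= 0
        f(x) = alpha * (2^(x / ln 2) - 1) for x <  0
    The trainable shift `p` is a hypothetical stand-in for the paper's
    "flexible and trainable" parameter.
    """

    def __init__(self, alpha: float = 1.0, p: float = 0.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))  # learned negative-region scale
        self.p = nn.Parameter(torch.tensor(p))          # learned shift (assumed form)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp before the exponential so the unused branch cannot overflow.
        x_neg = torch.clamp(x, max=0.0)
        neg = self.alpha * (torch.pow(2.0, x_neg / math.log(2.0)) - 1.0)
        return torch.where(x >= 0, x, neg) + self.p


if __name__ == "__main__":
    act = PFELUSketch()
    x = torch.linspace(-3.0, 3.0, steps=7)
    print(act(x))  # negative inputs follow the ELU-like curve, positive inputs stay linear (plus p)
```

Because `alpha` and `p` are registered as `nn.Parameter`s, they are updated by backpropagation along with the network weights, which is one plausible reading of the “trainable” property described above.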