Training Artificial Neural Networks by Coordinate Search Algorithm

Ehsan Rokhsatyazdi, Shahryar Rahnamayan, Sevil Zanjani Miyandoab, Azam Asilian Bidgoli, H. R. Tizhoosh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Training Artificial Neural Networks (ANNs) poses a challenging and critical problem in machine learning. Despite the effectiveness of gradient-based learning methods, such as Stochastic Gradient Descent (SGD), in training neural networks, they have several limitations. For instance, they require differentiable activation functions, and they cannot simultaneously optimize a model with respect to several independent, non-differentiable loss functions; with a gradient-free optimizer, for example, the F1-score, which is normally only computed during testing, can be optimized directly during training. Furthermore, training (i.e., optimizing the weights of) a DNN should remain feasible even when only a small training dataset is available. To address these concerns, we propose an efficient version of the gradient-free Coordinate Search (CS) algorithm, an instance of General Pattern Search (GPS) methods, for training (i.e., optimizing) neural networks. The proposed algorithm can be used with non-differentiable activation functions and can be tailored to multi-objective/multi-loss problems. Finding the optimal values for the weights of an ANN is a large-scale optimization problem. Therefore, instead of finding the optimal value for each variable individually, which is the common technique in classical CS, we accelerate optimization and convergence by bundling the variables (i.e., weights). In effect, this strategy is a form of dimension reduction for optimization problems. Based on the experimental results, the proposed method is comparable with the SGD algorithm, and in some cases it outperforms the gradient-based approach. In particular, in situations with insufficient labeled training data, the proposed CS method performs better. The performance plots demonstrate a high convergence rate, highlighting the capability of our suggested method to find a reasonable solution with fewer function calls. At present, gradient-based algorithms such as SGD or Adam are the only practical and efficient way of training ANNs with hundreds of thousands of weights; in this paper, we introduce an alternative method for training ANNs.
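
A minimal sketch of the idea described above (not the authors' implementation): gradient-free coordinate search over a tiny network whose weights are perturbed in randomly chosen bundles rather than one coordinate at a time. The network shape, number of groups, step-size schedule, and the use of misclassification rate as the non-differentiable loss are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
SHAPE = (4, 8, 3)  # assumed toy network: 4 inputs, 8 hidden units, 3 classes

def forward(w, X):
    n_in, n_hid, n_out = SHAPE
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    # Binary-step activation: non-differentiable, so SGD could not be applied,
    # but a gradient-free optimizer does not need derivatives.
    h = (X @ W1 > 0).astype(float)
    return h @ W2

def loss(w, X, y):
    # Non-differentiable objective (misclassification rate); an F1-based
    # loss could be plugged in here just as easily.
    return np.mean(forward(w, X).argmax(axis=1) != y)

def coordinate_search(X, y, dim, n_groups=10, step=0.5, iters=200):
    # Classical CS probes one variable at a time; for large-scale problems
    # the paper instead perturbs bundles of weights per probe, reducing the
    # number of expensive loss evaluations (a form of dimension reduction).
    w = rng.normal(scale=0.1, size=dim)
    best = loss(w, X, y)
    for _ in range(iters):
        groups = np.array_split(rng.permutation(dim), n_groups)
        for g in groups:
            for direction in (1.0, -1.0):      # probe both directions
                cand = w.copy()
                cand[g] += direction * step
                c = loss(cand, X, y)
                if c < best:                   # greedy acceptance
                    w, best = cand, c
                    break
        step *= 0.98                           # shrink the pattern, as in GPS
    return w, best

Toy usage on random data (the paper's actual experiments use standard benchmarks, which are not reproduced here):

X = rng.normal(size=(200, 4))
y = rng.integers(0, 3, size=200)
w, err = coordinate_search(X, y, dim=4 * 8 + 8 * 3)
print("training error:", err)

Bundling trades per-coordinate precision for far fewer loss evaluations per sweep, which is the dimension-reduction argument made in the abstract.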

Original language: English (US)
Title of host publication: 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1540-1546
Number of pages: 7
ISBN (Electronic): 9781665430654
DOIs
State: Published - 2023
Event: 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023 - Mexico City, Mexico
Duration: Dec 5, 2023 - Dec 8, 2023

Publication series

Name: 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023

Conference

Conference: 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023
Country/Territory: Mexico
City: Mexico City
Period: 12/5/23 - 12/8/23

Keywords

  • Artificial Neural Network (ANN)
  • Coordinate Search
  • Expensive Optimization
  • Gradient-free
  • Large-Scale Optimization
  • Stochastic Gradient Descent (SGD)

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction
  • Decision Sciences (miscellaneous)
  • Safety, Risk, Reliability and Quality
