Authors
Sanele Makamo, Benguela Global Fund Managers, South Africa
Abstract
In this paper, we review several popular optimization algorithms for machine learning models and evaluate the performance and convergence rate of each optimizer using multilayer fully connected neural networks. Using a sequential dataset of index returns (time-series data) spanning 20 years, we demonstrate that the Adam and RMSprop optimizers can efficiently solve practical deep learning problems involving sequential data. We use the same parameter initialization when comparing the different optimization algorithms. The hyper-parameters, such as learning rate and momentum, are searched over a dense grid, and the results are reported using the best hyper-parameter setting.
Keywords
machine learning, deep learning, neural networks, optimization algorithms, loss function.