Implementation of MLP Networks Running Backpropagation on Various Parallel Computer Hardware Using MPI

Abstract

Multi-Layer Perceptrons (MLPs) trained with Backpropagation remain one of the most frequently used artificial neural network paradigms. Particularly when applied to very high-dimensional data sets, which lead to rather large networks, their training may take prohibitively long. Besides architectural or purely numerical modifications, a parallel implementation in particular is well suited to speeding up network training. This paper evaluates a Message Passing Interface (MPI) based parallel variant of Backpropagation, run on a number of different parallel computer architectures. A standard character recognition problem with a wavelet-transform-based feature data set of 262 input dimensions serves as the benchmark environment.
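
The abstract does not state how the parallelization is organized, so the following is only an illustrative sketch of one common MPI scheme for batch Backpropagation: each process computes a partial gradient over its shard of the training patterns, and the partial gradients are combined with MPI_Allreduce so that every process applies the identical weight update. The names N_WEIGHTS, local_gradient, and the learning rate eta are hypothetical placeholders, not taken from the paper.

    /*
     * Minimal sketch of data-parallel gradient synchronization with MPI.
     * Network details (weight count, the local_gradient() routine) are
     * hypothetical placeholders.
     */
    #include <mpi.h>

    #define N_WEIGHTS 1024            /* hypothetical total weight count */

    /* Placeholder: a real implementation would run forward and backward
     * passes over this rank's share of the training patterns. */
    static void local_gradient(double *grad, int rank, int size)
    {
        for (int i = 0; i < N_WEIGHTS; i++)
            grad[i] = 0.0;
        (void)rank; (void)size;
    }

    int main(int argc, char **argv)
    {
        double grad[N_WEIGHTS], grad_sum[N_WEIGHTS];
        double weights[N_WEIGHTS] = { 0.0 };
        const double eta = 0.1;       /* learning rate (assumed value) */
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int epoch = 0; epoch < 100; epoch++) {
            /* 1. Each process backpropagates over its pattern shard. */
            local_gradient(grad, rank, size);

            /* 2. Sum partial gradients across all processes; every rank
             *    receives the identical global gradient. */
            MPI_Allreduce(grad, grad_sum, N_WEIGHTS, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);

            /* 3. An identical weight update on every rank keeps the
             *    replicated networks in sync without extra broadcasts. */
            for (int i = 0; i < N_WEIGHTS; i++)
                weights[i] -= eta * grad_sum[i];
        }

        MPI_Finalize();
        return 0;
    }

In this scheme the communication cost per epoch is a single all-reduce of the weight-sized gradient vector, which is why such data-parallel variants scale well for large training sets on the kinds of parallel architectures the paper compares.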

Publication
Proceedings of the 5th International Conference on Recent Advances in Soft Computing (RASC2004)