Neural Networks: The Official Journal of the International Neural Network Society
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that, in contrast to the familiar model of a "multi-layer perceptron", the parallel perceptron that we consider here has only binary values as the outputs of the gates on its hidden layer. ⋯ Journal of Computer and System Sciences, 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE international joint conference on neural networks (Vol. 2, pp. 967-972)] that one can also prove quite satisfactory bounds for the generalization error of this new learning rule.
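The two readout modes described in the abstract are straightforward to make concrete. The following is a minimal sketch in Python with NumPy, not an implementation from the paper: the class name `ParallelPerceptron` and all parameter names are illustrative. Each of n perceptrons thresholds a weighted sum to a binary output; the Boolean output is the majority vote of those bits, and the analog output is the fraction of perceptrons that output 1.

```python
import numpy as np

class ParallelPerceptron:
    """Illustrative sketch of a parallel perceptron: a single layer of
    perceptrons whose binary outputs are combined either by a majority
    vote (Boolean output) or by averaging (analog output in [0, 1])."""

    def __init__(self, n_perceptrons, n_inputs, seed=None):
        rng = np.random.default_rng(seed)
        # One weight vector per perceptron; a bias term can be absorbed
        # by appending a constant-1 coordinate to every input vector.
        self.W = rng.standard_normal((n_perceptrons, n_inputs))

    def binary_outputs(self, x):
        # Each perceptron outputs 1 if its weighted sum is positive, else 0.
        return (self.W @ x > 0).astype(int)

    def boolean_output(self, x):
        # Majority of the binary perceptron outputs.
        outs = self.binary_outputs(x)
        return int(2 * outs.sum() > len(outs))

    def analog_output(self, x):
        # Fraction of perceptrons that output 1: a value in [0, 1].
        return self.binary_outputs(x).mean()


if __name__ == "__main__":
    pp = ParallelPerceptron(n_perceptrons=7, n_inputs=4, seed=0)
    x = np.array([0.5, -1.0, 0.2, 1.0])
    print(pp.boolean_output(x))  # 0 or 1 (majority vote)
    print(pp.analog_output(x))   # fraction in [0, 1]
```

Using an odd number of perceptrons, as in the usage example, ensures the majority vote can never tie. The learning rule itself, elided in the abstract above, is not sketched here.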