Neural networks : the official journal of the International Neural Network Society
-
In this paper, we first discuss the existence and uniqueness of the equilibrium point of interval general BAM neural networks with reaction-diffusion terms and multiple time-varying delays by means of degree theory. Then, by applying the existence result and constructing a Lyapunov functional, we establish global exponential stability for the above neural networks. In the last section, we give an example to demonstrate the validity of our global exponential stability result for these networks.
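For orientation, a representative form of such a BAM network with reaction-diffusion terms and multiple time-varying delays is sketched below; the notation is illustrative rather than taken from the paper, with u_i and v_j the two neuron layers, the D terms the diffusion coefficients, and tau, sigma the time-varying delays:

    \begin{aligned}
    \frac{\partial u_i(t,x)}{\partial t} &= \sum_{k=1}^{l} \frac{\partial}{\partial x_k}\Big(D_{ik}\,\frac{\partial u_i(t,x)}{\partial x_k}\Big) - a_i u_i(t,x) + \sum_{j=1}^{m} p_{ji}\, f_j\big(v_j(t-\tau_{ji}(t),x)\big) + I_i, \\
    \frac{\partial v_j(t,x)}{\partial t} &= \sum_{k=1}^{l} \frac{\partial}{\partial x_k}\Big(D_{jk}^{*}\,\frac{\partial v_j(t,x)}{\partial x_k}\Big) - b_j v_j(t,x) + \sum_{i=1}^{n} q_{ij}\, g_i\big(u_i(t-\sigma_{ij}(t),x)\big) + J_j,
    \end{aligned}

where i = 1, ..., n and j = 1, ..., m, and "interval" indicates that the coefficients are only known to lie in prescribed intervals.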
-
In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed-size 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs, could be used for such problems. However, a districted neural network can be used to reduce the training complexity. ⋯ We conjecture that the result is valid for all neural networks. This conjecture is supported by experiments on gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in the recognition or classification of 2-dimensional array patterns in highly noisy environments.
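A minimal sketch, in Python, of the distinction drawn above: an undistricted network consumes the whole 2-dimensional array at once, while a districted network trains one small sub-network per district and combines their scores. The district size, layer sizes, and averaging combiner are assumptions for illustration, not the paper's architecture.

    import numpy as np

    def make_mlp(n_in, n_hidden, rng):
        # Random weights stand in for a trained sub-network.
        return {"W1": rng.normal(size=(n_hidden, n_in)),
                "W2": rng.normal(size=(1, n_hidden))}

    def mlp_forward(net, x):
        h = np.tanh(net["W1"] @ x)
        z = net["W2"] @ h
        return float(1.0 / (1.0 + np.exp(-z[0])))        # score in (0, 1)

    def undistricted_output(net, image):
        # One network sees every cell of the 2-D array at once.
        return mlp_forward(net, image.ravel())

    def districted_output(nets, image, district_shape):
        # One sub-network per district; scores are averaged (assumed combiner).
        dr, dc = district_shape
        rows, cols = image.shape
        scores, k = [], 0
        for r in range(0, rows, dr):
            for c in range(0, cols, dc):
                district = image[r:r + dr, c:c + dc].ravel()
                scores.append(mlp_forward(nets[k], district))
                k += 1
        return float(np.mean(scores))

    rng = np.random.default_rng(0)
    image = rng.random((8, 8))                            # toy 8x8 "pattern"
    full_net = make_mlp(64, 16, rng)                      # undistricted: all 64 cells as inputs
    district_nets = [make_mlp(16, 8, rng) for _ in range(4)]  # four 4x4 districts
    print(undistricted_output(full_net, image))
    print(districted_output(district_nets, image, (4, 4)))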
-
Noise in electroencephalography (EEG) data is a ubiquitous problem that limits the performance of brain-computer interfaces (BCI). While typical EEG artifacts are usually removed by trial rejection or by filtering, noise induced in the data by the subject's failure to produce the required mental state is very harmful. ⋯ In this manner, our method effectively "cleans" the training data and thus allows better BCI classification. Preliminary results obtained on a data set of 43 naive subjects show a significant improvement for 74% of the subjects.
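As a generic illustration of the trial-rejection step mentioned above (not the paper's method, which targets trials where the subject failed to produce the required mental state), a variance-based outlier criterion in Python might look as follows; the array shapes and the threshold are assumptions.

    import numpy as np

    def reject_noisy_trials(trials, z_thresh=3.0):
        # trials: (n_trials, n_channels, n_samples); drop trials whose total
        # variance is an outlier relative to the other trials.
        variances = trials.var(axis=(1, 2))
        z = (variances - variances.mean()) / (variances.std() + 1e-12)
        keep = z < z_thresh
        return trials[keep], keep

    rng = np.random.default_rng(0)
    clean = rng.normal(size=(40, 8, 256))                 # simulated ordinary trials
    noisy = rng.normal(scale=10.0, size=(3, 8, 256))      # simulated high-amplitude artifact trials
    trials = np.concatenate([clean, noisy])
    kept, mask = reject_noisy_trials(trials)
    print(trials.shape, "->", kept.shape)                 # expect the high-variance trials to be flagged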
-
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that, in contrast to the familiar model of a "multi-layer perceptron", the parallel perceptron that we consider here has just binary values as outputs of gates on the hidden layer. ⋯ Journal of Computer and System Sciences, 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (Vol. 2, pp. 967-972)] that one can also prove quite satisfactory bounds for the generalization error of this new learning rule.
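A minimal Python sketch of the output conventions described above, under assumed sizes and random placeholder weights; the paper's learning rule itself is not reproduced here.

    import numpy as np

    def parallel_perceptron(weights, x):
        # weights: (n_perceptrons, n_features); x: (n_features,)
        fires = (weights @ x >= 0).astype(int)   # binary output of each perceptron (gate)
        analog = fires.mean()                    # analog output: fraction of gates that output 1
        binary = int(analog >= 0.5)              # binary output: majority vote over the gates
        return binary, analog

    rng = np.random.default_rng(1)
    W = rng.normal(size=(7, 4))                  # 7 parallel perceptrons on 4 inputs (assumed sizes)
    x = rng.normal(size=4)
    print(parallel_perceptron(W, x))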