Neural Networks: The Official Journal of the International Neural Network Society
-
Comparative Study
ARTSTREAM: a neural network model of auditory scene analysis and source segregation.
Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an 'auditory scene analysis' enables the brain to solve the cocktail party problem. ⋯ Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
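The scale-illusion regrouping described above can be illustrated with a toy frequency-proximity grouper. This is not the ARTSTREAM network itself, only the grouping principle it formalizes; the function name `group_by_proximity` and the `max_jump` threshold are illustrative choices.

```python
def group_by_proximity(tones, max_jump=3.0):
    """Assign each tone (in semitones) to the stream whose most recent
    tone is nearest in frequency; start a new stream if every existing
    stream is farther away than max_jump."""
    streams = []
    for t in tones:
        best, best_d = None, None
        for s in streams:
            d = abs(s[-1] - t)
            if best_d is None or d < best_d:
                best, best_d = s, d
        if best is not None and best_d <= max_jump:
            best.append(t)
        else:
            streams.append([t])
    return streams

# Interleave a descending and an ascending C-major scale (semitone
# offsets from the tonic), as in Deutsch's scale illusion: proximity
# grouping recovers two smooth contours, one of which descends and
# then returns -- the "bounce" percept.
down = [12, 11, 9, 7, 5, 4, 2, 0]
up = [0, 2, 4, 5, 7, 9, 11, 12]
mixed = [x for pair in zip(down, up) for x in pair]
print(group_by_proximity(mixed))
# → [[12, 11, 9, 7, 7, 9, 11, 12], [0, 2, 4, 5, 5, 4, 2, 0]]
```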
-
We suggest that any brain-like (artificial neural network based) learning system will need a sleep-like mechanism for consolidating newly learned information if it is to cope with the sequential, ongoing learning of significantly new information. We summarise and explore two possible candidates for a computational account of this consolidation process in Hopfield-type networks. The "pseudorehearsal" method is based on the relearning of randomly selected attractors in the network as the new information is added from some second system. ⋯ The "unlearning" method is based on the unlearning of randomly selected attractors in the network after new information has already been learned. This process is supposed to locate and remove unwanted associations between items of information that obscure the learned inputs. We suggest that, as a computational model of sleep consolidation, the pseudorehearsal approach is better supported by the psychological, evolutionary, and neurophysiological data (in particular, in accounting for the role of the hippocampus in consolidation).
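The pseudorehearsal idea can be sketched in a small Hebbian Hopfield net: probe the trained network with random states, harvest the attractors they settle into, and relearn those pseudo-items alongside the new information. This is a minimal sketch of the mechanism, not the paper's exact procedure; the network size and number of probes are arbitrary choices.

```python
import random

random.seed(0)
N = 32  # bipolar (+1/-1) units

def train(patterns, n):
    """Hebbian Hopfield weights (symmetric, zero diagonal)."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def settle(w, state):
    """Asynchronous updates until a fixed point, i.e. an attractor."""
    s = list(state)
    n = len(s)
    while True:
        changed = False
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            v = 1 if h >= 0 else -1
            if v != s[i]:
                s[i], changed = v, True
        if not changed:
            return s

def rand_pattern():
    return [random.choice([-1, 1]) for _ in range(N)]

# Learn a base set of memories.
old = [rand_pattern() for _ in range(3)]
w = train(old, N)

# Pseudorehearsal: instead of keeping the old training items, probe
# the trained net with random states and harvest the attractors they
# settle into ("pseudo-items"), which reflect the stored structure.
pseudo = [settle(w, rand_pattern()) for _ in range(20)]

# Consolidation step: relearn the pseudo-items alongside the new
# information, so the rebuilt weights preserve the old attractor
# structure without access to the original training items.
new = rand_pattern()
w2 = train(pseudo + [new], N)
```

The key point is that the second system never sees the original patterns: the pseudo-items stand in for them, which is what makes the mechanism plausible as a consolidation process.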
-
The number of required hidden units is statistically estimated for feedforward neural networks that are constructed by adding hidden units one by one. The output error decreases at an almost constant rate as hidden units are added, provided each hidden unit is selected from a large number of candidate units. The expected value of the maximum decrease per hidden unit is estimated theoretically as a function of the number of learning data sets in relation to the number of candidates obtained by random search. ⋯ The number of candidates can therefore be regarded as a parameter that represents the efficiency of the search. Computer simulation shows that estimating this parameter experimentally from the actual decrease in output error is useful for demonstrating the efficiency of the gradient search. It also shows how the nonlinearity of the hidden units influences the required number of hidden units.
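The constructive scheme described above can be sketched as greedy one-by-one unit addition with a best-of-k random candidate search. This is an illustration of the general scheme, not the paper's estimator: each step samples k random tanh units, gives each its least-squares output weight against the current residual, and keeps the candidate with the largest error decrease.

```python
import math, random

random.seed(1)
xs = [i / 50 for i in range(-50, 51)]          # inputs in [-1, 1]
ys = [math.sin(3 * x) for x in xs]             # target function
residual = list(ys)                            # error left to explain

def best_candidate(residual, k):
    """Try k random tanh units; return the one (with its least-squares
    output weight) that removes the most squared error."""
    best = None
    for _ in range(k):
        a = random.uniform(-5, 5)              # input weight
        b = random.uniform(-5, 5)              # bias
        h = [math.tanh(a * x + b) for x in xs]
        hh = sum(v * v for v in h)
        if hh == 0:
            continue
        c = sum(hv * rv for hv, rv in zip(h, residual)) / hh
        gain = c * c * hh                      # squared-error decrease
        if best is None or gain > best[0]:
            best = (gain, c, h)
    return best

errors = [sum(r * r for r in residual)]
for _ in range(8):                             # add 8 hidden units
    gain, c, h = best_candidate(residual, k=200)
    residual = [r - c * hv for r, hv in zip(residual, h)]
    errors.append(sum(r * r for r in residual))

print(errors)  # non-increasing by construction: each step subtracts gain
```

Raising k makes the expected decrease per unit larger, which is why the number of candidates can serve as a measure of search efficiency.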
-
This paper introduces an Associative List Memory (ALM) that has high recall fidelity with low memory and low processing requirements. This permits a simple implementation in software on a personal computer or a space-instrument microprocessor. ALM has performance comparable to Sparse Distributed Memory (SDM) but differs from SDM in that convergence occurs during learning rather than on recall, and in that the memory takes the form of a dynamic list rather than static, randomly distributed locations. ⋯ Its processing times on a personal computer are found to be practical for database applications. Implemented within a space-instrument processor, ALM would greatly reduce downlink data transmission rates. Copyright 1997 Elsevier Science Ltd.
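A list-based associative memory of this general kind can be sketched as follows. This is a toy illustration, not the published ALM algorithm: writes that land within a Hamming `radius` of an existing key merge into that entry, so the list converges during learning rather than at recall, and storage grows only with the number of distinct keys.

```python
def hamming(a, b):
    """Number of positions where two equal-length bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

class ListMemory:
    """Toy dynamic-list associative memory (illustrative only)."""

    def __init__(self, radius=2):
        self.radius = radius
        self.entries = []  # dynamic list of (key, value) pairs

    def write(self, key, value):
        # Convergence during learning: a near-duplicate key updates
        # an existing entry instead of appending a new one.
        for i, (k, _) in enumerate(self.entries):
            if hamming(k, key) <= self.radius:
                self.entries[i] = (k, value)
                return
        self.entries.append((key, value))

    def read(self, key):
        """Recall the value stored under the nearest key."""
        if not self.entries:
            return None
        k, v = min(self.entries, key=lambda e: hamming(e[0], key))
        return v

mem = ListMemory()
mem.write([0, 0, 1, 1, 0, 1, 0, 1], "A")
mem.write([1, 1, 1, 0, 0, 0, 1, 0], "B")
# A noisy copy of the first key (one bit flipped) still recalls "A".
print(mem.read([0, 1, 1, 1, 0, 1, 0, 1]))  # → A
```

Because recall is a single nearest-neighbour scan over a short list, the memory and processing cost stay low enough for a modest microprocessor, which is the property the abstract emphasizes.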
-
This article offers a new neural network framework for understanding both the transients and the asymptotes of operant (instrumental) learning. The theory shows that the interplay between simple short- and long-term memory mechanisms is sufficient to explain a large number of operant phenomena. It describes short- and long-term effects of reinforcement and how these effects modulate the operant response, how novel events are detected and processed, and how their consequences also modulate the operant response. ⋯ Implications of the present theory for other operant conditioning phenomena, classical conditioning, and avoidance behavior are suggested. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
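The interplay of short- and long-term reinforcement effects can be illustrated with two leaky integrators on different timescales. This is a generic two-timescale sketch, not the article's model; the rate constants and the equal weighting of the two traces are arbitrary illustrative choices.

```python
def run(reinforcers, fast=0.5, slow=0.02):
    """Toy two-timescale reinforcement trace: a fast short-term trace
    and a slow long-term trace, each a leaky integrator, jointly
    modulate response strength on each trial."""
    s = l = 0.0
    strengths = []
    for r in reinforcers:        # r = 1 on reinforced trials, else 0
        s += fast * (r - s)      # short-term: rises and decays quickly
        l += slow * (r - l)      # long-term: slow asymptotic learning
        strengths.append(0.5 * s + 0.5 * l)
    return strengths

# Acquisition (30 reinforced trials) then extinction (30 unreinforced):
# strength climbs during acquisition, drops sharply at first when
# reinforcement stops (the fast trace empties), then fades slowly as
# the long-term trace decays.
curve = run([1] * 30 + [0] * 30)
```

Qualitatively, the fast trace accounts for transients at reinforcement onset and offset, while the slow trace carries the asymptotic learned strength, matching the transient/asymptote distinction the abstract draws.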