Neural Networks: The Official Journal of the International Neural Network Society
-
Anomaly detection in hyperspectral images (HSIs) is difficult because of their high dimensionality, redundant information, and deteriorated bands. To address these problems, in this paper we propose a novel unsupervised feature representation approach that incorporates a spectral constraint strategy into adversarial autoencoders (AAE) without requiring any prior knowledge. Our approach, called SC_AAE (spectral constraint AAE), exploits the characteristics of HSIs to obtain a more discriminative representation at the hidden nodes. ⋯ Since each hidden node contributes to anomaly detection at a different rate, we fuse the hidden nodes individually with an adaptive weighting method. A bi-layer architecture is then designed to suppress the variational background (BKG) while preserving the features of anomalies. The experimental results demonstrate that our proposed method outperforms state-of-the-art methods.
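The abstract does not give implementation details for the adaptive weighting step; below is a minimal sketch of one plausible reading, in which each hidden node produces a per-pixel response map and the maps are fused with weights proportional to an estimated contribution score. The function name, the shape conventions, and the variance-based weighting rule are all assumptions for illustration, not the authors' code.

```python
import numpy as np

def adaptive_weighted_fusion(hidden_maps):
    """Fuse per-hidden-node anomaly maps with adaptive weights.

    hidden_maps: array of shape (k, H, W), one response map per hidden node.
    The weighting rule (normalized map variance as a proxy for each node's
    contribution to detection) is an illustrative assumption, not the
    paper's rule.
    """
    k = hidden_maps.shape[0]
    # Nodes whose responses vary more across pixels are assumed to carry
    # more discriminative information about anomalies.
    scores = hidden_maps.reshape(k, -1).var(axis=1)
    weights = scores / scores.sum()
    # Weighted sum over the node axis yields a single detection map (H, W).
    return np.tensordot(weights, hidden_maps, axes=1)
```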
-
This work focuses on the global asymptotic stability of Takagi-Sugeno fuzzy Cohen-Grossberg neural networks with multiple time delays. By using standard Lyapunov stability techniques and a nonsingular M-matrix condition, together with nonlinear Lipschitz activation functions, a new and easily verifiable sufficient criterion is obtained that guarantees global asymptotic stability of the Cohen-Grossberg neural network model represented by a Takagi-Sugeno fuzzy model. ⋯ This numerical example is also used to compare the global stability condition obtained in this study with some previously published global stability results. The comparison reveals that the proposed condition establishes a novel and alternative stability result for Takagi-Sugeno fuzzy Cohen-Grossberg neural networks of this class.
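The criterion itself is not reproduced in this excerpt. As background, stability conditions of this kind typically reduce to verifying that some matrix built from the network parameters is a nonsingular M-matrix; the sketch below checks that property using a standard characterization (nonpositive off-diagonal entries and positive leading principal minors). It illustrates the generic M-matrix test only, not the paper's specific condition.

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-10):
    """Check whether a square matrix is a nonsingular M-matrix.

    Standard characterization: all off-diagonal entries are nonpositive
    and every leading principal minor is strictly positive.
    """
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    off_diag = M - np.diag(np.diag(M))
    if np.any(off_diag > tol):
        return False  # a positive off-diagonal entry rules it out
    # Positive leading principal minors imply nonsingularity as well.
    return all(np.linalg.det(M[:k, :k]) > tol for k in range(1, n + 1))
```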
-
In this study, we systematically investigate the impact of class imbalance on the classification performance of convolutional neural networks (CNNs) and compare frequently used methods for addressing the issue. Class imbalance is a common problem that has been studied comprehensively in classical machine learning, yet very little systematic research is available in the context of deep learning. ⋯ Our main evaluation metric is the area under the receiver operating characteristic curve (ROC AUC), adjusted to multi-class tasks, since the overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on the results of our experiments, we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method for addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) unlike some classical machine learning models, CNNs do not overfit when oversampling is used; (v) thresholding should be applied to compensate for prior class probabilities when the overall number of properly classified cases is of interest.
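To make conclusion (v) concrete: one common form of thresholding rescales the network's softmax outputs by the empirical class priors of the imbalanced training set before taking the argmax, so that frequent classes no longer dominate the decision. The sketch below shows this generic prior correction; the exact variant used in the paper may differ, and the function and argument names are illustrative.

```python
import numpy as np

def prior_corrected_predictions(probs, class_counts):
    """Compensate classifier outputs for training-set class priors.

    probs:        (n_samples, n_classes) softmax outputs of a CNN.
    class_counts: per-class instance counts in the imbalanced training set.
    """
    priors = np.asarray(class_counts, dtype=float)
    priors /= priors.sum()
    corrected = probs / priors          # down-weight over-represented classes
    return corrected.argmax(axis=1)     # predicted class labels
```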
-
The class imbalance problem occurs when majority class instances outnumber minority class instances. A conventional extreme learning machine (ELM) treats all instances with the same importance, which biases prediction accuracy towards the majority class. To overcome this inherent drawback, many variants of ELM, such as weighted ELM and class-specific cost regulation ELM (CCR-ELM), have been proposed to handle the class imbalance problem effectively. ⋯ The proposed work has lower computational overhead than CCR-ELM. It is evaluated on benchmark real-world imbalanced datasets downloaded from the KEEL dataset repository. The results show that the proposed work outperforms weighted ELM, CCR-ELM, EFSVM, FSVM, and SVM for class imbalance learning.
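For context on the baselines being compared against, a weighted ELM gives each training instance a cost inversely proportional to its class frequency and solves a regularized weighted least-squares problem in closed form, beta = (I/C + H'WH)^{-1} H'WT. The sketch below follows that generic recipe for the binary case; it is a standard weighted ELM, not the class-specific cost variant the abstract proposes, and all names are illustrative.

```python
import numpy as np

def weighted_elm_train(X, y, n_hidden=100, C=1.0, rng=None):
    """Train a weighted ELM for binary imbalanced classification.

    Each instance is weighted by the inverse of its class count, so
    minority-class errors cost more. The hidden layer uses random input
    weights and a sigmoid, as in standard ELM.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W_in = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))        # hidden activations
    # Per-instance weights: inverse class frequency.
    classes, counts = np.unique(y, return_counts=True)
    w = np.array([1.0 / counts[np.searchsorted(classes, yi)] for yi in y])
    T = np.where(y == classes[1], 1.0, -1.0)         # bipolar targets
    WH = H * w[:, None]
    # Closed-form solution: beta = (I/C + H'WH)^{-1} H'WT
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ WH, H.T @ (w * T))
    return W_in, b, beta
```

Prediction then scores a sample as sign(sigmoid(x @ W_in + b) @ beta), with the same random input weights reused at test time.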
-
Parallel incremental learning is an effective approach for rapidly processing large-scale data streams, where parallel learning and incremental learning are often treated as two separate problems and solved one after the other. Incremental learning can be implemented by merging knowledge from incoming data, and parallel learning can be performed by merging knowledge from simultaneous learners. We propose to solve the two learning problems simultaneously with a single process of knowledge merging, and we introduce parallel incremental wESVM (weighted Extreme Support Vector Machine) for this purpose. ⋯ As such, the proposed algorithm is able to conduct parallel incremental learning by merging knowledge over the data slices arriving at each incremental stage. Both theoretical and experimental studies show that the proposed algorithm is equivalent to batch wESVM in terms of learning effectiveness. In particular, the algorithm demonstrates the desired scalability and clear speed advantages over batch retraining.
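The reason a single merging step can serve both purposes is that, for models with a closed-form weighted least-squares solution such as wESVM, the training data enters only through additive sufficient statistics: summing them over simultaneous learners gives parallelism, and summing them over successive data slices gives incrementality, with a result identical to batch training. The sketch below illustrates this idea under generic assumptions; the statistic names and the regularization form are illustrative, not taken from the paper.

```python
import numpy as np

def chunk_statistics(H, w, T):
    """Sufficient statistics of one data slice.

    H: feature/hidden-layer matrix of the slice, w: per-instance weights,
    T: targets. Returns A = H'WH and r = H'WT for this slice.
    """
    WH = H * w[:, None]
    return H.T @ WH, H.T @ (w * T)

def merge(stats_list):
    """Knowledge merging: statistics from simultaneous learners (parallel)
    or from successive slices (incremental) simply add together."""
    A = sum(s[0] for s in stats_list)
    r = sum(s[1] for s in stats_list)
    return A, r

def solve(A, r, C=1.0):
    """Recover output weights from merged statistics, matching what
    batch training on the concatenated data would produce."""
    return np.linalg.solve(np.eye(A.shape[0]) / C + A, r)
```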