Neural Networks: the official journal of the International Neural Network Society
-
We propose a new regularization method for deep learning based on manifold adversarial training (MAT). Unlike previous regularization and adversarial training methods, MAT further considers the local manifold of latent representations. Specifically, MAT builds an adversarial framework around how the worst-case perturbation affects the statistical manifold in the latent space rather than the output space. ⋯ The proposed MAT is notable in that it can be viewed as a superset of a recently proposed discriminative feature-learning approach called center loss. We conduct a series of experiments in both supervised and semi-supervised learning on four benchmark datasets, showing that MAT achieves remarkable performance, substantially better than the state-of-the-art approaches. In addition, we present a series of visualizations that offer further understanding of, and explanations for, adversarial examples.
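As a rough illustration of the idea only (not the authors' implementation), the sketch below perturbs the input so as to maximally displace the latent representation, in the spirit of virtual adversarial training moved into latent space; the encoder interface and the hyper-parameter names `eps` and `xi` are assumptions:

```python
import torch
import torch.nn.functional as F

def latent_adversarial_penalty(encoder, x, eps=1e-2, xi=1e-6):
    """Penalize the displacement of latent features under a worst-case
    input perturbation (one power-iteration step, VAT-style)."""
    with torch.no_grad():
        z_clean = encoder(x)                           # clean latent features
    shape = (-1,) + (1,) * (x.dim() - 1)               # for per-sample norms
    d = torch.randn_like(x)
    d = xi * d / d.flatten(1).norm(dim=1).view(shape)  # tiny random probe
    d.requires_grad_(True)
    dist = F.mse_loss(encoder(x + d), z_clean)         # latent displacement
    grad, = torch.autograd.grad(dist, d)               # steepest-growth direction
    r_adv = eps * grad / grad.flatten(1).norm(dim=1).view(shape).clamp_min(1e-12)
    return F.mse_loss(encoder(x + r_adv), z_clean)     # regularization term
```

In training, such a penalty would be added to the supervised loss with a weighting coefficient; in a semi-supervised setting, unlabeled batches could contribute the penalty term alone.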
-
Comparative Study
Comparative study using inverse ontology cogency and alternatives for concept recognition in the annotated National Library of Medicine database.
This paper introduces inverse ontology cogency, a biologically inspired concept-recognition process and distance function that is competitive with alternative methods. Inverse ontology cogency is a novel distance measure used to select the optimal mapping between ontology-specified concepts and phrases in free-form text. ⋯ Results indicate that using both inverse ontology cogency and corpora cogency improved concept-recognition precision by 20% over the best published MetaMap results, demonstrating a new, effective approach for identifying medical concepts in text. This is the first time cogency has been explicitly invoked for reasoning with ontologies, and the first time it has been applied to medical literature for which high-quality ground truth is available for quality assessment.
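The abstract does not reproduce the cogency formula itself, so the following is only a structural sketch of where such a distance measure plugs into concept recognition; the function names and the token-overlap stand-in distance are illustrative assumptions, not the paper's method:

```python
from typing import Callable, Dict, List, Tuple

def recognize_concepts(
    phrase: str,
    ontology: Dict[str, List[str]],         # concept id -> synonym strings
    distance: Callable[[str, str], float],  # smaller = better match
    top_k: int = 3,
) -> List[Tuple[str, float]]:
    """Rank ontology concepts by the best distance between the phrase
    and any of the concept's synonyms; return the top-k candidates."""
    scored = []
    for concept_id, synonyms in ontology.items():
        best = min(distance(phrase, s) for s in synonyms)
        scored.append((concept_id, best))
    scored.sort(key=lambda pair: pair[1])
    return scored[:top_k]

def token_distance(a: str, b: str) -> float:
    """Trivial Jaccard-style stand-in distance, NOT inverse ontology cogency."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)
```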
-
Transfer learning makes it possible to solve a specific task with limited data by using deep networks pre-trained on large-scale datasets. Typically, while transferring the learned knowledge from the source task to the target task, the last few layers are fine-tuned (re-trained) on the target dataset. However, these layers were originally designed for the source task and might not be suitable for the target task. ⋯ The proposed AutoTune method outperforms the standard transfer-learning baselines on all three datasets, achieving 95.92%, 86.54%, and 84.67% accuracy on CalTech-101, CalTech-256, and Stanford Dogs, respectively. The experimental results indicate that tuning the pre-trained CNN layers with knowledge from the target dataset yields better transfer-learning ability. The source code is available at https://github.com/JekyllAndHyde8999/AutoTune_CNN_TransferLearning.
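A minimal sketch of the baseline procedure the abstract describes, freezing a pre-trained network and fine-tuning only its last stages together with a new target-task head. It assumes torchvision's ResNet-50 as the backbone and does not perform AutoTune's search over layer configurations:

```python
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int, trainable_tail: int = 2) -> nn.Module:
    """Freeze a pre-trained ResNet-50, replace its head, and unfreeze
    the last `trainable_tail` residual stages for fine-tuning."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for param in model.parameters():           # freeze everything first
        param.requires_grad = False
    # New target-task head; its fresh parameters are trainable by default.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    # Unfreeze the deepest stages, which are most source-task specific.
    stages = [model.layer4, model.layer3, model.layer2, model.layer1]
    for stage in stages[:trainable_tail]:
        for param in stage.parameters():
            param.requires_grad = True
    return model
```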
-
This paper investigates the stability of a more general class of neutral-type Hopfield neural networks involving multiple time delays in the states of the neurons and multiple neutral delays in the time derivatives of those states. By constructing a new Lyapunov functional, an alternative, easily verifiable algebraic criterion for the global asymptotic stability of this class of Hopfield neural systems is derived. ⋯ Two instructive examples show that the result obtained here yields a new set of sufficient stability criteria relative to previously reported results. The proposed stability result therefore enlarges the application domain of Hopfield neural systems of neutral type.
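For orientation, neutral-type Hopfield networks of this kind are commonly written in the following state-equation form, with state delays and neutral delays entering separately (notation assumed here, not necessarily the paper's own symbols):

```latex
% State equation: decay term, instantaneous and delayed activations,
% neutral terms on delayed state derivatives, and a constant input u_i.
\begin{equation*}
\dot{x}_i(t) = -c_i x_i(t)
  + \sum_{j=1}^{n} a_{ij}\, f_j\!\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} b_{ij}\, f_j\!\bigl(x_j(t-\tau_j)\bigr)
  + \sum_{l=1}^{n} e_{il}\, \dot{x}_l(t-\zeta_l) + u_i,
\qquad i = 1,\dots,n,
\end{equation*}
% where \tau_j are the multiple state delays and \zeta_l the neutral delays.
```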
-
In this paper, we derive a new fixed-time stability theorem based on definite integrals, variable substitution, and several inequality techniques. The resulting fixed-time stability criterion and the upper-bound estimate for the settling time differ from those in existing fixed-time stability theorems. ⋯ Numerical simulations illustrate that the new upper-bound estimate for the settling time is much tighter than those of existing fixed-time stability theorems. Moreover, the plaintext signals can be recovered under the new fixed-time stability theorem, whereas they cannot be recovered under the existing theorems.
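For context, the classical fixed-time result that such theorems refine is the Polyakov-type condition below; the paper's own tighter settling-time estimate is not reproduced in this abstract:

```latex
% If a Lyapunov function V along the system trajectories satisfies
\begin{equation*}
\dot{V}(x) \le -a\,V(x)^{p} - b\,V(x)^{q},
\qquad a, b > 0,\; 0 < p < 1,\; q > 1,
\end{equation*}
% then the origin is fixed-time stable and the settling time admits the
% initial-condition-independent bound
\begin{equation*}
T(x_0) \le T_{\max} = \frac{1}{a\,(1-p)} + \frac{1}{b\,(q-1)}.
\end{equation*}
```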