- Research article, December 2024
Accelerated gradient descent using improved Selective Backpropagation
Expert Systems with Applications: An International Journal (EXWA), Volume 255, Issue PA. https://doi.org/10.1016/j.eswa.2024.124426
Abstract: An improved version of Selective Backpropagation (SBP+) is described which significantly reduces training time and enhances generalization. The algorithm eliminates correctly predicted instances from Backpropagation (BP) in an ...
Highlights:
- Improved Selective Backpropagation is described, which accelerates training.
- Acceleration is due to the elimination of accurate predictions from backpropagation.
- Improvement in accuracy is due to the inclusion of a random error term.
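The selective-backpropagation idea in this abstract can be sketched as follows: during each epoch, instances the model already predicts correctly are skipped, and the error of the remaining instances is perturbed by a small random term. This is a minimal illustration on a logistic model, not the authors' SBP+ implementation; the function name, learning rate, and noise scale are all assumptions.

```python
import numpy as np

def sbp_epoch(w, X, y, lr=0.1, noise_std=0.01, rng=None):
    """One epoch of selective backpropagation on a logistic model:
    correctly classified instances are skipped, and a small random
    term is added to the error of the rest (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_updates = 0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))       # predicted probability
        if (p >= 0.5) == bool(yi):               # correct -> skip backprop
            continue
        err = (p - yi) + rng.normal(0.0, noise_std)  # perturbed error
        w -= lr * err * xi                       # per-instance gradient step
        n_updates += 1
    return w, n_updates
```

On separable toy data the per-epoch update count drops to zero once every instance is classified correctly, which is the source of the claimed speed-up.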
- Research article, November 2024
Bitcoin price prediction using LSTM autoencoder regularized by false nearest neighbor loss
Soft Computing - A Fusion of Foundations, Methodologies and Applications (SOFC), Volume 28, Issue 21, Pages 12827–12834. https://doi.org/10.1007/s00500-024-10301-4
Abstract: We implement deep learning for predicting bitcoin closing prices. Identifying two new determiners, we propose a novel LSTM Autoencoder using Mean Squared Error (MSE) loss which is regularized by the False Nearest Neighbor (FNN) algorithm. The method ...
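The training objective described here has the shape of a composite loss: reconstruction MSE plus a weighted penalty computed from the latent code. The actual FNN statistic is more involved; the sketch below uses a caller-supplied penalty as a stand-in, and all names and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def regularized_mse(x, x_hat, latent, fnn_penalty, lam=0.1):
    """Composite loss in the spirit of the abstract: reconstruction
    MSE plus lam times a regularizer evaluated on the latent code
    (here a generic callable standing in for the FNN statistic)."""
    mse = np.mean((x - x_hat) ** 2)
    return mse + lam * fnn_penalty(latent)
```

For example, with an L2 activity penalty `lambda z: np.sum(z ** 2)` as the stand-in, the regularizer discourages large latent activations the way the FNN term discourages spurious latent dimensions.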
- Research article, September 2024
Improving few-shot named entity recognition via Semantics induced Optimal Transport
Abstract: Named entity recognition (NER) identifies and categorizes entities in unstructured text, a fundamental task for a variety of natural language processing (NLP) applications. In particular, emerging few-shot NER methods aim to ...
Highlights:
- This paper highlights the contribution of semantic distribution distance constraints on general knowledge transfer.
- The proposed approach is a common framework that adds no extra learning parameters.
- Extensive experiments ...
- Research article, October 2022
Convergence of Batch Gradient Method for Training of Pi-Sigma Neural Network with Regularizer and Adaptive Momentum Term
Neural Processing Letters (NPLE), Volume 55, Issue 4, Pages 4871–4888. https://doi.org/10.1007/s11063-022-11069-0
Abstract: The pi-sigma neural network (PSNN) is a class of high-order feed-forward neural networks with product units in the output layer, which gives it fast convergence and a high degree of nonlinear mapping capability. Inspired by the sparse ...
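The pi-sigma architecture mentioned above is simple enough to state directly: several linear "sigma" units feed a single multiplicative "pi" unit, whose product is passed through the output activation. A minimal forward pass, with shapes and names chosen for illustration:

```python
import numpy as np

def pi_sigma_forward(x, W, b, act=np.tanh):
    """Forward pass of one pi-sigma unit: k linear (sigma) units
    W @ x + b, whose outputs are multiplied together (the pi product
    unit) before the output activation. W has shape (k, n)."""
    sums = W @ x + b            # k weighted sums
    return act(np.prod(sums))   # product unit + activation
```

The product of linear terms is what gives the network its high-order (polynomial) mapping capability with relatively few trainable weights.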
- Article, October 2022
Prior-Guided Adversarial Initialization for Fast Adversarial Training
Abstract: Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust accuracy against adversarial attacks suddenly and dramatically ...
- Research article, December 2021
An improving sparse coding algorithm for wireless passive target positioning
Abstract: With the development of smart cities, the wireless passive localization (WPL) technique, which detects targets that carry no devices, has drawn a lot of research attention. Some machine learning methods, such as sparse coding and deep learning, have ...
- Research article, October 2021
Extensive framework based on novel convolutional and variational autoencoder based on maximization of mutual information for anomaly detection
Neural Computing and Applications (NCAA), Volume 33, Issue 20, Pages 13785–13807. https://doi.org/10.1007/s00521-021-06017-3
Abstract: In the present study, we propose a general framework based on a convolutional kernel and a variational autoencoder (CVAE) for anomaly detection on both complex image and vector datasets. The main idea is to maximize mutual information (MMI) through ...
- Article, September 2021
The KL-Divergence Between a Graph Model and its Fair I-Projection as a Fairness Regularizer
Machine Learning and Knowledge Discovery in Databases. Research Track, Pages 351–366. https://doi.org/10.1007/978-3-030-86520-7_22
Abstract: Learning and reasoning over graphs is increasingly done by means of probabilistic models, e.g. exponential random graph models, graph embedding models, and graph neural networks. When graphs model relations between people, however, they ...
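The regularizer in this entry's title is a KL divergence between a graph model's edge probabilities and their fair I-projection. Computing the I-projection itself is the paper's contribution; the sketch below shows only the KL term between two elementwise-Bernoulli edge-probability arrays, with the function name and clipping constant as assumptions.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between elementwise Bernoulli edge probabilities
    p (the graph model) and q (e.g. its fair I-projection), summed
    over node pairs. Probabilities are clipped for numerical safety."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))
```

Added to the training loss with a weight, this term pulls the model toward the fair distribution while the data term pulls it toward the observed graph.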
- Rapid communication, September 2021
Regularizer based on Euler characteristic for retinal blood vessel segmentation
Pattern Recognition Letters (PTRL), Volume 149, Issue C, Pages 83–90. https://doi.org/10.1016/j.patrec.2021.05.023
Highlights:
- Introduces a regularizer based on the number of isolated objects for small-vessel segmentation.
- The number of isolated objects is estimated from the topology of the segmentation result using the Euler characteristic.
- Euler characteristic-...
Segmentation of retinal blood vessels is important for the analysis of diabetic retinopathy (DR). Existing methods do not prioritize the small and disconnected vessels relevant to DR. With the aim of paying attention to the small and disconnected vessel ...
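The Euler characteristic of a binary segmentation can be computed directly from pixel counts, which is what makes it usable inside a regularizer: for a 2-D mask it equals vertices minus edges plus faces of the pixel complex, i.e. connected components minus holes. A minimal sketch (4-connectivity; the function name is an assumption, not the paper's code):

```python
import numpy as np

def euler_characteristic(mask):
    """Euler characteristic of a binary 2-D mask (4-connectivity),
    computed as vertices - edges + faces on the pixel complex.
    For masks without holes this counts isolated objects."""
    m = mask.astype(bool)
    v = m.sum()                                       # pixels (vertices)
    e = (m[:, :-1] & m[:, 1:]).sum() \
      + (m[:-1, :] & m[1:, :]).sum()                  # adjacent pixel pairs
    f = (m[:-1, :-1] & m[:-1, 1:]
       & m[1:, :-1] & m[1:, 1:]).sum()                # filled 2x2 blocks
    return int(v - e + f)
```

Because it is a simple polynomial in local pixel patterns, a soft version of this count can be differentiated and penalized during training to discourage fragmented vessel predictions.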
- Article, August 2020
Gabor Layers Enhance Network Robustness
- Juan C. Pérez,
- Motasem Alfarra,
- Guillaume Jeanneret,
- Adel Bibi,
- Ali Thabet,
- Bernard Ghanem,
- Pablo Arbeláez
Abstract: We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore the effect of replacing the first layers of various deep architectures with Gabor layers (i.e. convolutional layers with filters that ...
- Article, November 2019
Concept Factorization with Optimal Graph Learning for Data Representation
Abstract: In recent years, concept factorization methods have become a popular data representation technique in many real applications. However, conventional concept factorization methods cannot capture the intrinsic geometric structure embedded in data using ...
- Research article, October 2018
Boundedness and convergence of split complex gradient descent algorithm with momentum and regularizer for TSK fuzzy models
Neurocomputing (NEUROC), Volume 311, Issue C, Pages 270–278. https://doi.org/10.1016/j.neucom.2018.05.075
Abstract: This paper investigates the split complex gradient descent based neuro-fuzzy algorithm with self-adaptive momentum and an L2 regularizer for training TSK (Takagi–Sugeno–Kang) fuzzy inference models. The major threat in handling complex data with ...
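The building blocks named in this entry, a momentum term plus an L2 regularizer on top of gradient descent, can be sketched in the real-valued case (the paper's split-complex treatment applies the same update to real and imaginary parts separately). Names and hyperparameters are illustrative, not the paper's:

```python
import numpy as np

def gd_momentum_l2(grad, w, lr=0.1, beta=0.9, lam=0.01, steps=100):
    """Gradient descent with a momentum term and an L2 regularizer:
    the velocity v accumulates the regularized gradient, and the
    weights move along v. Sketch of the real-valued case only."""
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w) + lam * w    # loss gradient + L2 (weight decay) term
        v = beta * v - lr * g    # momentum accumulation
        w = w + v
    return w
```

On a quadratic loss this converges to the regularized stationary point, shifted slightly toward zero by the L2 term, which is the boundedness effect the paper analyzes.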
- Article, May 2018
An optimization model for collaborative recommendation using a covariance-based regularizer
Data Mining and Knowledge Discovery (DMKD), Volume 32, Issue 3, Pages 651–674. https://doi.org/10.1007/s10618-018-0552-3
Abstract: This paper suggests a convex regularized optimization model to produce recommendations which is adaptable, fast, and scalable, while remaining very competitive with state-of-the-art methods in terms of accuracy. We introduce a regularizer based on the ...
- Research article, January 2016
Local and global regularized sparse coding for data representation
Neurocomputing (NEUROC), Volume 175, Issue PA, Pages 188–197. https://doi.org/10.1016/j.neucom.2015.10.048
Abstract: Recently, sparse coding has been widely adopted for data representation in real-world applications. In order to account for the geometric structure of data, we propose a novel method, local and global regularized sparse coding (LGSC), for data ...
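For reference, the basic sparse-coding objective that LGSC extends, min over codes a of 0.5‖x − Da‖² + λ‖a‖₁, is typically solved by iterative soft-thresholding. The sketch below is plain ISTA without the paper's local/global graph regularizers; names and defaults are assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 penalty used in sparse coding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, x, lam=0.1, lr=None, iters=200):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1 over a dictionary
    D (columns = atoms). No geometric regularizer here; LGSC would add
    local and global graph terms to this objective."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(D, 2) ** 2   # step from the Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = soft_threshold(a - lr * D.T @ (D @ a - x), lr * lam)
    return a
```

With an orthonormal dictionary the solution reduces to soft-thresholding the coefficients directly, which is a convenient sanity check.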
- Article, February 2014
Weighted discriminative sparsity preserving embedding for face recognition
Abstract: Sparse representation (SR) based dimension reduction (DR) methods have aroused much interest in the field of face recognition. In this paper, we first propose a new sparse representation method called the weighted elastic net (WEN). Compared to the ...
- Article, October 2010
On a new notion of the solution to an ill-posed problem
Journal of Computational and Applied Mathematics (JCAM), Volume 234, Issue 12, Pages 3326–3331. https://doi.org/10.1016/j.cam.2010.04.032
Abstract: A new understanding of the notion of the stable solution to ill-posed problems is proposed. The new notion is more realistic than the old one and better fits practical computational needs. A method for constructing stable solutions in the new sense ...
- Article, January 2009
Enhancing the generalization ability of neural networks through controlling the hidden layers
Applied Soft Computing (APSC), Volume 9, Issue 1, Pages 404–414. https://doi.org/10.1016/j.asoc.2008.01.013
Abstract: In this paper we propose two new variants of the backpropagation algorithm. The common point of these two new algorithms is that the outputs of nodes in the hidden layers are controlled, with the aim of solving the moving target problem and the distributed ...