
Search Results (11)

Search Parameters:
Keywords = RSSC

26 pages, 6045 KiB  
Article
Q-A2NN: Quantized All-Adder Neural Networks for Onboard Remote Sensing Scene Classification
by Ning Zhang, He Chen, Liang Chen, Jue Wang, Guoqing Wang and Wenchao Liu
Remote Sens. 2024, 16(13), 2403; https://doi.org/10.3390/rs16132403 - 30 Jun 2024
Viewed by 650
Abstract
Performing remote sensing scene classification (RSSC) directly on satellites can alleviate data downlink burdens and reduce latency. Compared to convolutional neural networks (CNNs), the all-adder neural network (A2NN) is a novel basic neural network that is more suitable for onboard RSSC, enabling lower computational overhead by eliminating multiplication operations in convolutional layers. However, the extensive floating-point data and operations in A2NNs still lead to significant storage overhead and power consumption during hardware deployment. In this article, a shared scaling factor-based de-biasing quantization (SSDQ) method tailored to A2NNs is proposed to address this issue, comprising a powers-of-two (POT)-based shared scaling factor quantization scheme and a multi-dimensional de-biasing (MDD) quantization strategy. Specifically, the POT-based shared scaling factor quantization scheme converts the adder filters in A2NNs into quantized adder filters with hardware-friendly integer input activations, weights, and operations. Thus, quantized A2NNs (Q-A2NNs) composed of quantized adder filters have lower computational and memory overheads than A2NNs, increasing their utility in hardware deployment. Although low-bit-width Q-A2NNs exhibit significantly reduced RSSC accuracy compared to A2NNs, this can be alleviated by the proposed MDD quantization strategy, which combines a weight-debiasing (WD) strategy, reducing performance degradation caused by deviations in the quantized weights, with a feature-debiasing (FD) strategy, enhancing the classification performance of Q-A2NNs by minimizing deviations among the output features of each layer. Extensive experiments and analyses demonstrate that the proposed SSDQ method can efficiently quantize A2NNs into Q-A2NNs with low computational and memory overheads while maintaining performance comparable to A2NNs, and thus has high potential for onboard RSSC.
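The general idea of a shared powers-of-two scaling factor can be illustrated with a minimal NumPy sketch. This is a generic illustration of POT scaling-factor quantization, not the paper's SSDQ method; the bit width, symmetric integer range, and ceiling-rounded log2 scale are all assumptions:

```python
import numpy as np

def pot_quantize(x, bits=8):
    """Quantize an array with one shared powers-of-two (POT) scaling factor,
    so dequantization reduces to a cheap bit shift in hardware.

    Illustrative sketch only: `bits` and the rounding scheme are assumptions,
    not the SSDQ method from the paper.
    """
    qmax = 2 ** (bits - 1) - 1                            # e.g. 127 for 8-bit
    # Shared scale constrained to a power of two: 2^ceil(log2(max|x| / qmax)).
    scale = 2.0 ** np.ceil(np.log2(np.max(np.abs(x)) / qmax))
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

x = np.array([0.5, -1.2, 0.03, 0.9])
q, scale = pot_quantize(x)
x_hat = q * scale   # dequantize: integer value times a power-of-two scale
```

Because the scale is a power of two, multiplying or dividing by it in hardware is a shift rather than a floating-point multiply, which is the motivation for POT schemes in onboard deployment.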

22 pages, 1329 KiB  
Article
A Scene Classification Model Based on Global-Local Features and Attention in Lie Group Space
by Chengjun Xu, Jingqian Shu, Zhenghan Wang and Jialin Wang
Remote Sens. 2024, 16(13), 2323; https://doi.org/10.3390/rs16132323 - 25 Jun 2024
Viewed by 804
Abstract
The efficient fusion of global and local multi-scale features is quite important for remote sensing scene classification (RSSC). Scenes in high-resolution remote sensing images (HRRSI) contain complex backgrounds, intra-class diversity, and inter-class similarities. Many studies have shown that both global and local features are helpful for RSSC. The receptive field of a traditional convolution kernel is small and fixed, making it difficult to capture global features in the scene. The self-attention mechanism proposed in the transformer effectively alleviates this shortcoming; however, such models lack local inductive bias, and their large number of parameters makes computation expensive. To address these problems, we propose a classification model based on global-local features and attention in Lie group space. The model is composed of three independent branches, which effectively extract multi-scale features of the scene and fuse them through a fusion module. Channel attention and spatial attention are designed in the fusion module to enhance the crucial features in the crucial regions and thereby improve the accuracy of scene classification. The advantage of our model is that it extracts richer features, and the global-local features of the scene can be effectively extracted at different scales. Our model has been verified on publicly available and challenging datasets; on AID, for example, it reaches a classification accuracy of 97.31% with 12.216 M parameters. Compared with other state-of-the-art models, it has certain advantages in terms of classification accuracy and number of parameters.

18 pages, 2663 KiB  
Article
Breaking the ImageNet Pretraining Paradigm: A General Framework for Training Using Only Remote Sensing Scene Images
by Tao Xu, Zhicheng Zhao and Jun Wu
Appl. Sci. 2023, 13(20), 11374; https://doi.org/10.3390/app132011374 - 17 Oct 2023
Viewed by 1082
Abstract
Remote sensing scene classification (RSSC) is a very crucial subtask of remote sensing image understanding. With the rapid development of convolutional neural networks (CNNs) in the field of natural images, great progress has been made in RSSC. Compared with natural images, labeled remote sensing images are more difficult to acquire, and typical RSSC datasets are consequently smaller than natural image datasets. Due to the small scale of these labeled datasets, training a network using only remote sensing scene datasets is very difficult. Most current approaches rely on a paradigm consisting of ImageNet pretraining followed by model fine-tuning on RSSC datasets. However, there are considerable dissimilarities between remote sensing images and natural images, and as a result, the current paradigm may present some problems for new studies. In this paper, to break free of this paradigm, we propose a general framework for scene classification (GFSC) that can help to train various network architectures on limited labeled remote sensing scene images. Extensive experiments show that ImageNet pretraining is not only unnecessary but may be one of the causes of the limited performance of RSSC models. Our study provides a solution that not only replaces the ImageNet pretraining paradigm but also further improves the baseline for RSSC. Our proposed framework can help various CNNs achieve state-of-the-art performance using only remote sensing images and endow the trained models with a stronger ability to extract discriminative features from complex remote sensing images.
(This article belongs to the Special Issue Application of Artificial Intelligence in Visual Signal Processing)

24 pages, 1384 KiB  
Article
Characterization and Association of Rips Repertoire to Host Range of Novel Ralstonia solanacearum Strains by In Silico Approaches
by Juan Carlos Ariute, Andrei Giachetto Felice, Siomar Soares, Marco Aurélio Siqueira da Gama, Elineide Barbosa de Souza, Vasco Azevedo, Bertram Brenig, Flávia Aburjaile and Ana Maria Benko-Iseppon
Microorganisms 2023, 11(4), 954; https://doi.org/10.3390/microorganisms11040954 - 6 Apr 2023
Cited by 1 | Viewed by 1997
Abstract
The Ralstonia solanacearum species complex (RSSC) causes several phytobacterioses in many economically important crops around the globe, especially in the tropics. In Brazil, phylotypes I and II cause bacterial wilt (BW) and are indistinguishable by classical microbiological and phytopathological methods, while Moko disease is caused only by phylotype II strains. Type III effectors of the RSSC (Rips) are key molecular actors in pathogenesis and are associated with specificity to some hosts. In this study, we sequenced and characterized 14 new RSSC isolates from Brazil’s Northern and Northeastern regions, including BW and Moko ecotypes. Virulence and resistance sequences were annotated, and the Rips repertoire was predicted. Confirming previous studies, the RSSC pangenome is open, with α = 0.77. Genomic information for these isolates matches that of R. solanacearum in NCBI. All of them fit in phylotype II with a similarity above 96%, with five isolates in phylotype IIB and nine in phylotype IIA. Almost all R. solanacearum genomes in NCBI actually belong to other species in the RSSC. The Rips repertoire of Moko IIB was more homogeneous, except for isolate B4, which presented ten non-shared Rips. The Rips repertoire of phylotype IIA was more diverse in both Moko and BW, with 43 Rips shared among all 14 isolates. The new BW isolates shared more Rips with Moko IIA and Moko IIB than with other public BW genome isolates from Brazil. Rips not shared with other isolates might contribute to individual virulence, whereas commonly shared Rips are good avirulence candidates. The high number of Rips shared by the new Moko and BW isolates suggests that the latter are actually Moko isolates infecting solanaceous hosts. Finally, infection assays and Rips expression studies on different hosts are needed to better elucidate the association between Rips repertoires and host specificity.
(This article belongs to the Section Plant Microbe Interactions)

16 pages, 7225 KiB  
Article
HCFPN: Hierarchical Contextual Feature-Preserved Network for Remote Sensing Scene Classification
by Jingwen Yuan and Shugen Wang
Remote Sens. 2023, 15(3), 810; https://doi.org/10.3390/rs15030810 - 31 Jan 2023
Cited by 3 | Viewed by 1390
Abstract
Convolutional neural networks (CNNs) have made significant advances in remote sensing scene classification (RSSC) in recent years. Nevertheless, the limitations of the receptive field cause CNNs to suffer from a disadvantage in capturing contextual information. To address this issue, vision transformer (ViT), a novel model that has piqued the interest of academics, is used to extract latent contextual information in remote sensing scene classification. However, when confronted with the challenges of large-scale variations and high interclass similarity in scene classification images, the original ViT has the drawback of ignoring important local features, thereby degrading the model’s performance. Consequently, we propose the hierarchical contextual feature-preserved network (HCFPN) by combining the advantages of CNNs and ViT. First, a hierarchical feature extraction module based on ResNet-34 is utilized to acquire multilevel convolutional features and high-level semantic features. Second, a contextual feature-preserved module takes advantage of the first two multilevel features to capture abundant long-term contextual features. Then, the captured long-term contextual features are utilized for multiheaded cross-level attention computation to aggregate and explore the correlation of multilevel features. Finally, the multiheaded cross-level attention scores and high-level semantic features are classified, and a category score average module is proposed to fuse the classification results; a label smoothing approach is applied before calculating the loss to produce a discriminative scene representation. In addition, we conduct extensive experiments on two publicly available RSSC datasets. Our proposed HCFPN outperforms most state-of-the-art approaches.
(This article belongs to the Special Issue Pattern Recognition in Hyperspectral Remote Sensing)

18 pages, 1038 KiB  
Article
Adaptive Discriminative Regions Learning Network for Remote Sensing Scene Classification
by Chuan Tang, Xiao Zheng and Chang Tang
Sensors 2023, 23(2), 773; https://doi.org/10.3390/s23020773 - 10 Jan 2023
Cited by 2 | Viewed by 1594
Abstract
As an auxiliary means of intelligent remote sensing (RS) interpretation, remote sensing scene classification (RSSC) attracts considerable attention, and its performance has been improved significantly by popular deep convolutional neural networks (DCNNs). However, several challenges still hinder the practical applications of RSSC, such as the complex composition of land cover, scale variation of objects, and redundant and noisy areas. To mitigate the impact of these issues, we propose an adaptive discriminative regions learning network for RSSC, referred to as ADRL-Net, which effectively locates discriminative regions to boost RSSC performance by utilizing a novel self-supervision mechanism. ADRL-Net consists of three main modules: a discriminative region generator, a region discriminator, and a region scorer. Specifically, the discriminative region generator first generates candidate regions that could be informative for RSSC. Then, the region discriminator evaluates the regions generated by the generator and provides feedback for the generator to update the informative regions. Finally, the region scorer makes prediction scores for the whole image by using the discriminative regions. In this manner, the three modules of ADRL-Net cooperate to focus on the most informative regions of an image and reduce the interference of redundant regions in the final classification, which makes the network robust to complex scene composition, varying object scales, and irrelevant information. To validate the efficacy of the proposed network, we conduct experiments on four widely used benchmark datasets, and the experimental results demonstrate that ADRL-Net consistently outperforms other state-of-the-art RSSC methods.
(This article belongs to the Section Remote Sensors)

22 pages, 18409 KiB  
Article
RSCNet: An Efficient Remote Sensing Scene Classification Model Based on Lightweight Convolution Neural Networks
by Zhichao Chen, Jie Yang, Zhicheng Feng and Lifang Chen
Electronics 2022, 11(22), 3727; https://doi.org/10.3390/electronics11223727 - 14 Nov 2022
Cited by 10 | Viewed by 1953
Abstract
This study aims to improve the efficiency of remote sensing scene classification (RSSC) through lightweight neural networks and to make large-scale, intelligent, real-time RSSC possible on common devices. We propose a lightweight RSSC model named RSCNet. First, we use the lightweight ShuffleNet v2 network to extract abstract features from the images, which guarantees the efficiency of the model, and initialize the backbone weights using transfer learning, allowing the model to draw on knowledge from ImageNet. Second, to further improve the classification accuracy of the model, we combine ShuffleNet v2 with an efficient channel attention mechanism that weights the features fed to the classifier. Third, we apply label smoothing regularization in place of the original loss function during training. The experimental results show that the classification accuracy of RSCNet is 96.75% and 99.05% on the AID and UCMerced_LandUse datasets, respectively. The floating-point operations (FLOPs) of the proposed model are only 153.71 M, and a single inference on the CPU takes about 2.75 ms. Compared with existing RSSC methods, RSCNet achieves relatively high accuracy at a very small computational cost.
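Label smoothing regularization, which this abstract mentions as a replacement for the plain loss, can be sketched in a few lines of NumPy. This is the standard formulation (soften the one-hot target by eps); eps = 0.1 is a common default, not necessarily the paper's setting:

```python
import numpy as np

def label_smoothing_ce(logits, target, num_classes, eps=0.1):
    """Cross-entropy with label smoothing: the one-hot target gets
    (1 - eps) on the true class and eps spread uniformly over all classes.

    Generic sketch of the standard technique, not RSCNet's exact loss.
    """
    t = np.full(num_classes, eps / num_classes)
    t[target] += 1.0 - eps
    # Numerically stable log-softmax.
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return -(t * log_p).sum()
```

With eps = 0 this reduces exactly to ordinary cross-entropy; a small positive eps penalizes overconfident predictions and tends to improve generalization on small datasets.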
(This article belongs to the Section Artificial Intelligence)

16 pages, 3654 KiB  
Article
Generative Adversarial Networks for Zero-Shot Remote Sensing Scene Classification
by Zihao Li, Daobing Zhang, Yang Wang, Daoyu Lin and Jinghua Zhang
Appl. Sci. 2022, 12(8), 3760; https://doi.org/10.3390/app12083760 - 8 Apr 2022
Cited by 9 | Viewed by 2175
Abstract
Deep learning-based methods succeed in remote sensing scene classification (RSSC). However, current methods require training on a large dataset, and they do not work well for classes that do not appear in the training set. Zero-shot classification methods are designed to address the classification of unseen-category images, and the generative adversarial network (GAN) is a popular approach. Thus, our approach aims to achieve zero-shot RSSC based on a GAN. We employed the conditional Wasserstein generative adversarial network (WGAN) to generate image features. Since remote sensing images have inter-class similarity and intra-class diversity, we introduced a classification loss, a semantic regression module, and a class-prototype loss to constrain the generator. The classification loss preserves inter-class discrimination. The semantic regression module ensures that the image features generated by the generator can represent the semantic features. The class-prototype loss ensures the intra-class diversity of the synthesized image features and avoids generating overly homogeneous features. We also studied the effect of different semantic embeddings on zero-shot RSSC. Experiments on three datasets show that our method performs better than state-of-the-art methods for zero-shot RSSC in most cases.
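The semantic regression idea above can be sketched as a simple reconstruction penalty: a regressor maps generated image features back to their class semantic embeddings, and the error is penalized so synthetic features stay semantically consistent. The linear regressor `W` and the MSE form here are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

def semantic_regression_loss(gen_features, semantics, W):
    """Penalize generated features whose linear projection W fails to
    reconstruct the class semantic embeddings.

    gen_features: (n, d_feat) synthetic features from the generator
    semantics:    (n, d_sem) class semantic embeddings
    W:            (d_feat, d_sem) regressor weights (assumed linear here)
    """
    pred = gen_features @ W
    return np.mean((pred - semantics) ** 2)
```

In a full pipeline this term would be added to the generator's adversarial and classification losses, pulling synthesized features toward the semantic space that unseen classes are described in.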
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)

14 pages, 2401 KiB  
Article
The Separation of Carbonaceous Matter from Refractory Gold Ore Using Multi-Stage Flotation: A Case Study
by Sugyeong Lee, Charlotte E. Gibson and Ahmad Ghahreman
Minerals 2021, 11(12), 1430; https://doi.org/10.3390/min11121430 - 17 Dec 2021
Cited by 4 | Viewed by 3009
Abstract
As a pre-treatment method for refractory gold ore, carbonaceous matter (C-matter) flotation was investigated with multi-stage flotation comprising rougher, scavenger, and cleaner stages. Different dosages of kerosene and MIBC (4-methyl-2-pentanol) were applied, and the optimum dosage was selected by testing in each flotation stage. By combining the stages, four circuit designs were suggested: single-stage rougher flotation (R), rougher-scavenger flotation (R+S), rougher-scavenger-scavenger cleaner flotation (R+S+SC), and rougher-scavenger-rougher cleaner-scavenger cleaner flotation (R+S+RC+SC). The results indicated that scavenger flotation increased C-matter recovery but reduced C-matter grade compared with single-stage rougher flotation. Cleaning the scavenger flotation concentrate improved C-matter grade significantly but reduced recovery slightly. Cleaning the rougher flotation concentrate improved overall selectivity in flotation. A combination of rougher-scavenger flotation followed by cleaning of both concentrates (R+S+RC+SC) resulted in 73% C-matter recovery and a combined cleaner concentrate grade of 4%; the final tailings C-matter grade was 0.9%, where the C-matter remaining in the tailings was locked and fine-grained. The results demonstrate the need for multi-stage flotation of C-matter from refractory gold ore to achieve selective separation and suggest the potential of C-matter flotation as a pre-treatment for efficient gold production.
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)

28 pages, 17621 KiB  
Article
Object Tracking in Hyperspectral-Oriented Video with Fast Spatial-Spectral Features
by Lulu Chen, Yongqiang Zhao, Jiaxin Yao, Jiaxin Chen, Ning Li, Jonathan Cheung-Wai Chan and Seong G. Kong
Remote Sens. 2021, 13(10), 1922; https://doi.org/10.3390/rs13101922 - 14 May 2021
Cited by 24 | Viewed by 3509
Abstract
This paper presents a correlation filter object tracker based on fast spatial-spectral features (FSSF) to realize robust, real-time object tracking in hyperspectral surveillance video. Traditional object tracking in surveillance video based only on appearance information often fails in the presence of background clutter, low resolution, and appearance changes. Hyperspectral imaging uses unique spectral properties as well as spatial information to improve tracking accuracy in such challenging environments. However, the high dimensionality of hyperspectral images causes high computational costs and difficulties for discriminative feature extraction. In FSSF, the real-time spatial-spectral convolution (RSSC) kernel is updated in real time in the Fourier transform domain without offline training to quickly extract discriminative spatial-spectral features. The spatial-spectral features are integrated into correlation filters to complete the hyperspectral tracking. To validate the proposed scheme, we collected a hyperspectral surveillance video (HSSV) dataset consisting of 70 sequences in 25 bands. Extensive experiments confirm the advantages and efficiency of the proposed FSSF for object tracking in hyperspectral video under challenging conditions of background clutter, low resolution, and appearance changes.
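The speed of correlation-filter trackers like the one above comes from computing correlation in the Fourier domain, where a sliding spatial comparison becomes an element-wise product. A generic sketch of that trick (not the FSSF/RSSC kernel itself):

```python
import numpy as np

def correlate_fft(filt, patch):
    """Circular cross-correlation via the FFT: multiply the conjugate
    spectrum of the filter with the spectrum of the search patch, then
    transform back. The response map peaks at the target's displacement.
    """
    F = np.fft.fft2(filt)
    P = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(np.conj(F) * P))
```

If the patch is the filter shifted by some offset, the argmax of the response recovers that offset, which is exactly how a correlation filter localizes the target from frame to frame.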

24 pages, 1009 KiB  
Article
Genotyping by Sequencing Highlights a Polygenic Resistance to Ralstonia pseudosolanacearum in Eggplant (Solanum melongena L.)
by Sylvia Salgon, Morgane Raynal, Sylvain Lebon, Jean-Michel Baptiste, Marie-Christine Daunay, Jacques Dintinger and Cyril Jourda
Int. J. Mol. Sci. 2018, 19(2), 357; https://doi.org/10.3390/ijms19020357 - 25 Jan 2018
Cited by 31 | Viewed by 5399
Abstract
Eggplant cultivation is limited by numerous diseases, including the devastating bacterial wilt (BW) caused by the Ralstonia solanacearum species complex (RSSC). Within the RSSC, Ralstonia pseudosolanacearum (including phylotypes I and III) causes severe damage to all solanaceous crops, including eggplant. Therefore, the creation of cultivars resistant to R. pseudosolanacearum strains is a major goal for breeders. An intraspecific eggplant population segregating for resistance was created from the cross between the susceptible MM738 and the resistant EG203 lines. The population of 123 doubled haploid lines was challenged with two strains belonging to phylotypes I (PSS4) and III (R3598), which both bypass the published EBWR9 BW-resistance quantitative trait locus (QTL). Ten and three QTLs of resistance to PSS4 and to R3598, respectively, were detected and mapped. All were strongly influenced by environmental conditions. The most stable QTLs were found on chromosomes 3 and 6. Given their estimated physical positions, these newly detected QTLs are putatively syntenic with BW-resistance QTLs in tomato. In particular, the QTLs’ position on chromosome 6 overlaps with that of the major broad-spectrum tomato resistance QTL Bwr-6. The present study is a first step towards understanding the complex polygenic system that underlies the high level of BW resistance of the EG203 line.
(This article belongs to the Special Issue Plant Defense Genes Against Biotic Stresses)
