Search Results (482)

Search Parameters:
Keywords = spatio-temporal fusion

20 pages, 10820 KiB  
Article
Mapping Crop Evapotranspiration by Combining the Unmixing and Weight Image Fusion Methods
by Xiaochun Zhang, Hongsi Gao, Liangsheng Shi, Xiaolong Hu, Liao Zhong and Jiang Bian
Remote Sens. 2024, 16(13), 2414; https://doi.org/10.3390/rs16132414 - 1 Jul 2024
Abstract
The demand for freshwater is increasing with population growth and rapid socio-economic development. Crop evapotranspiration (ET) data with a high spatiotemporal resolution are increasingly important for refined irrigation water management in agricultural regions. We propose the unmixing–weight ET image fusion model (UWET), which integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction to produce daily ET maps with a high spatial resolution. The Landsat-ET and MODIS-ET datasets for the UWET fusion are retrieved from Landsat and MODIS images based on the surface energy balance model. The UWET model considers the effects of crop phenology, precipitation, and land cover in the ET image fusion process. The UWET results are evaluated against ET measured by eddy covariance at the Luancheng station, with an average MAE of 0.57 mm/day. The UWET images show fine spatial detail and capture dynamic ET changes. The seasonal ET of winter wheat from the ET maps mainly ranges from 350 to 660 mm in 2019–2020 and from 300 to 620 mm in 2020–2021; the average seasonal ET is 499.89 mm in 2019–2020 and 459.44 mm in 2020–2021. The performance of UWET is compared with two other fusion models: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Spatial and Temporal Reflectance Unmixing Model (STRUM). UWET reproduces spatial detail better than STARFM and temporal characteristics better than STRUM. The results indicate that UWET is suitable for generating ET products with a high spatial–temporal resolution in agricultural regions.
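The unmixing half of a model like UWET can be illustrated as a least-squares problem: each coarse pixel's ET is modeled as a land-cover-fraction-weighted mix of unknown per-class ET values, which are then assigned back to fine pixels. A minimal sketch under that assumption — the function names and synthetic data are illustrative, not the paper's implementation:

```python
import numpy as np

def unmix_coarse_et(fractions, coarse_et):
    """Solve coarse_et ≈ fractions @ class_et for per-class ET (least squares).

    fractions : (n_coarse_pixels, n_classes) land-cover fractions per coarse pixel
    coarse_et : (n_coarse_pixels,) ET observed at coarse resolution
    """
    class_et, *_ = np.linalg.lstsq(fractions, coarse_et, rcond=None)
    return class_et

def downscale(class_et, fine_class_map):
    """Assign each fine pixel the unmixed ET of its land-cover class."""
    return class_et[fine_class_map]

# Synthetic check: 3 classes with known ET (mm/day), 5 coarse pixels
rng = np.random.default_rng(0)
true_class_et = np.array([2.0, 4.5, 6.0])
fractions = rng.dirichlet(np.ones(3), size=5)   # each row sums to 1
coarse_et = fractions @ true_class_et
est = unmix_coarse_et(fractions, coarse_et)
print(np.round(est, 3))                          # ≈ [2.0, 4.5, 6.0]
```

With noise-free synthetic data the class ET values are recovered exactly; in practice the system is solved per moving window and regularized.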

19 pages, 4648 KiB  
Article
MSE-VGG: A Novel Deep Learning Approach Based on EEG for Rapid Ischemic Stroke Detection
by Wei Tong, Weiqi Yue, Fangni Chen, Wei Shi, Lei Zhang and Jian Wan
Sensors 2024, 24(13), 4234; https://doi.org/10.3390/s24134234 - 29 Jun 2024
Abstract
Ischemic stroke is a type of brain dysfunction caused by pathological changes in the blood vessels of the brain, which leads to brain tissue ischemia and hypoxia and ultimately results in cell necrosis. Without timely and effective treatment in the early time window, ischemic stroke can lead to long-term disability and even death. Therefore, rapid detection is crucial in patients with ischemic stroke. In this study, we developed a deep learning model based on fusion features extracted from electroencephalography (EEG) signals for the fast detection of ischemic stroke. Specifically, we recruited 20 ischemic stroke patients who underwent EEG examination during the acute phase of stroke and collected EEG signals from 19 adults with no history of stroke as a control group. Afterwards, we constructed the correlation-weighted Phase Lag Index (cwPLI), a novel feature, to explore the synchronization information and functional connectivity between EEG channels. Moreover, the spatio-temporal information from functional connectivity and the nonlinear information from complexity were fused by combining the cwPLI matrix and Sample Entropy (SaEn) to further improve the discriminative ability of the model. Finally, the novel MSE-VGG network was employed as a classifier to distinguish ischemic stroke from non-ischemic stroke data. Five-fold cross-validation experiments demonstrated that the proposed model possesses excellent performance, with accuracy, sensitivity, and specificity reaching 90.17%, 89.86%, and 90.44%, respectively. Experiments on time consumption verified that the proposed method is faster than other state-of-the-art examinations. This study contributes to the advancement of the rapid detection of ischemic stroke, shedding light on the untapped potential of EEG and demonstrating the efficacy of deep learning in ischemic stroke identification.
(This article belongs to the Section Biomedical Sensors)
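The standard Phase Lag Index underlying the cwPLI feature above is easy to state: the absolute mean sign of the instantaneous phase difference between two channels. A minimal sketch (the correlation weighting is specific to the paper and not reproduced here; synthetic phases stand in for Hilbert-transformed EEG):

```python
import numpy as np

def phase_lag_index(phase_a, phase_b):
    """PLI = |mean(sign(sin(Δφ)))|: 1 for a consistent non-zero phase lag,
    near 0 when the phase difference is symmetric around zero."""
    dphi = phase_a - phase_b
    return np.abs(np.mean(np.sign(np.sin(dphi))))

t = np.linspace(0.0, 1.0, 500, endpoint=False)
phase_a = 2 * np.pi * 10 * t                  # 10 Hz oscillation
phase_b = phase_a - np.pi / 4                 # consistently lags by 45 degrees
print(phase_lag_index(phase_a, phase_b))      # 1.0

rng = np.random.default_rng(1)
jittered = phase_a + rng.uniform(-np.pi, np.pi, 500)   # no consistent lag
print(phase_lag_index(phase_a, jittered))     # ≈ 0
```

Computing this for every channel pair yields the connectivity matrix that, after weighting, feeds the MSE-VGG classifier.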

14 pages, 1252 KiB  
Article
Multisensory Fusion for Unsupervised Spatiotemporal Speaker Diarization
by Paris Xylogiannis, Nikolaos Vryzas, Lazaros Vrysis and Charalampos Dimoulas
Sensors 2024, 24(13), 4229; https://doi.org/10.3390/s24134229 - 29 Jun 2024
Abstract
Speaker diarization consists of answering the question of “who spoke when” in audio recordings. In meeting scenarios, the task of labeling audio with the corresponding speaker identities can be further assisted by the exploitation of spatial features. This work proposes a framework designed to assess the effectiveness of combining speaker embeddings with Time Difference of Arrival (TDOA) values from available microphone sensor arrays in meetings. We extract speaker embeddings using two popular and robust pre-trained models, ECAPA-TDNN and X-vectors, and calculate the TDOA values via the Generalized Cross-Correlation (GCC) method with Phase Transform (PHAT) weighting. Although ECAPA-TDNN outperforms the X-vectors model, we utilize both speaker embedding models to explore the potential of employing a computationally lighter model when spatial information is exploited. Various techniques for combining the spatial–temporal information are examined in order to determine the best clustering method. The proposed framework is evaluated on two multichannel datasets: the AVLab Speaker Localization dataset and a multichannel dataset (SpeaD-M3C) enriched in the context of the present work with supplementary information from smartphone recordings. Our results strongly indicate that the integration of spatial information can significantly improve the performance of state-of-the-art deep learning diarization models, presenting a 2–3% reduction in DER compared to the baseline approach on the evaluated datasets.
(This article belongs to the Special Issue Multimodal Sensing Technologies for IoT and AI-Enabled Systems)
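GCC-PHAT, the TDOA estimator named in this abstract, whitens the cross-power spectrum so that only phase (i.e., delay) information remains. A self-contained sketch with a synthetic circularly delayed signal (real microphone pairs need windowing and sub-sample interpolation):

```python
import numpy as np

def gcc_phat(sig, ref):
    """Estimate the delay (in samples) of `sig` relative to `ref`
    using Generalized Cross-Correlation with PHAT weighting."""
    n = len(sig)
    R = np.fft.fft(sig) * np.conj(np.fft.fft(ref))
    R /= np.abs(R) + 1e-12                    # PHAT: keep phase, discard magnitude
    cc = np.fft.ifft(R).real                  # sharp peak at the true delay
    lag = int(np.argmax(cc))
    return lag if lag <= n // 2 else lag - n  # unwrap circular lag to a signed delay

rng = np.random.default_rng(0)
ref = rng.standard_normal(2048)               # signal at the reference microphone
sig = np.roll(ref, 12)                        # same signal arriving 12 samples later
print(gcc_phat(sig, ref))                     # 12
```

Dividing the delay by the sampling rate gives the TDOA in seconds, which (with array geometry) constrains the speaker's direction.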

20 pages, 10516 KiB  
Article
Improving the Spatiotemporal Resolution of Land Surface Temperature Using a Data Fusion Method in Haihe Basin, China
by Rencai Lin, Zheng Wei, He Chen, Congying Han, Baozhong Zhang and Maomao Jule
Remote Sens. 2024, 16(13), 2374; https://doi.org/10.3390/rs16132374 - 28 Jun 2024
Abstract
Land surface temperature (LST) serves as a pivotal component within the surface energy cycle, offering fundamental insights for the investigation of the agricultural water environment, the urban thermal environment, and land planning. However, LST monitoring at a point scale entails substantial costs and poses implementation challenges. Moreover, the existing LST products are constrained by their low spatiotemporal resolution, limiting their broader applicability. The fusion of multi-source remote sensing data offers a viable solution to enhance spatiotemporal resolution. In this study, the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) was used to estimate time series LST utilizing multi-temporal Landsat 8 (L8) and MOD21A2 within the Haihe basin in 2021. Validation of ESTARFM LST was conducted against L8 LST and in situ LST. The results can be summarized as follows: (1) ESTARFM was found to be effective in heterogeneous regions within the Haihe basin, yielding LST with a spatiotemporal resolution of 30 m and 8 d while retaining clear texture information; (2) the comparison between ESTARFM LST and L8 LST shows a coefficient of determination (R2) exceeding 0.59, a mean absolute error (MAE) lower than 2.43 K, and a root mean square error (RMSE) lower than 2.63 K for most dates; (3) the comparison between ESTARFM LST and in situ LST showcased high validation accuracy, revealing an R2 of 0.87, an MAE of 2.27 K, and an RMSE of 4.12 K. The estimated time series LST exhibited notable reliability and robustness. This study introduced ESTARFM for LST estimation, achieving satisfactory outcomes. The findings offer a valuable reference for other regions to generate LST data with a spatiotemporal resolution of 8 d and 30 m, thereby enhancing the application of data products in agricultural and hydrological contexts.
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
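ESTARFM and the wider STARFM family share one temporal-adjustment core: add the coarse-resolution change between the base and prediction dates to the fine-resolution base image. The sketch below shows only that single-pair core under the assumption that the coarse images are already resampled to the fine grid; ESTARFM itself adds a second base pair, conversion coefficients, and similar-pixel weighting:

```python
import numpy as np

def temporal_adjust(fine_base, coarse_base, coarse_pred):
    """Predict a fine-resolution image at the target date by adding the
    coarse-resolution change to the fine base image (STARFM-family core).
    All inputs are assumed co-registered on the same fine grid."""
    return fine_base + (coarse_pred - coarse_base)

fine_base = np.array([[300.0, 302.0],
                      [298.0, 301.0]])        # Landsat-like LST (K) at base date
coarse_base = np.full((2, 2), 300.0)          # MODIS-like LST, resampled
coarse_pred = np.full((2, 2), 305.0)          # MODIS-like LST at target date (+5 K)
print(temporal_adjust(fine_base, coarse_base, coarse_pred))
```

Because the coarse change here is spatially uniform (+5 K), the predicted fine image is simply the base image shifted by 5 K while keeping its 30 m texture, which is exactly the behavior the fusion relies on.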

24 pages, 17475 KiB  
Article
Spatio-Temporal Land-Use/Cover Change Dynamics Using Spatiotemporal Data Fusion Model and Google Earth Engine in Jilin Province, China
by Zhuxin Liu, Yang Han, Ruifei Zhu, Chunmei Qu, Peng Zhang, Yaping Xu, Jiani Zhang, Lijuan Zhuang, Feiyu Wang and Fang Huang
Land 2024, 13(7), 924; https://doi.org/10.3390/land13070924 - 25 Jun 2024
Abstract
Jilin Province, located in northeast China, has fragile ecosystems and a vulnerable environment. Large-scale, long time series, high-precision land-use/cover change (LU/CC) data are important for spatial planning and environmental protection in areas with high surface heterogeneity. In this paper, based on the high temporal and spatial fusion data of Landsat and MODIS and the Google Earth Engine (GEE), long time series LU/CC mapping and spatio-temporal analysis for the period 2000–2023 were realized using the random forest remote sensing image classification method, which integrates remote sensing indices. The prediction results using the OL-STARFM method were very close to the real images and better contained the spatial image information, allowing its application to the subsequent classification. The average overall accuracy and kappa coefficient of the random forest classification products obtained using the fused remote sensing indices were 95.11% and 0.9394, respectively. During the study period, the area of cultivated land and unused land decreased as a whole. The area of grassland, forest, and water fluctuated, while building land increased to 13,442.27 km2 in 2023. In terms of land transfer, cultivated land was the most important source of transfers, and its total area share decreased from 42.98% to 38.39%. Cultivated land was mainly transferred to grassland, forest land, and building land, with transfer areas of 7682.48 km2, 8374.11 km2, and 7244.52 km2, respectively. Grassland was the largest source of land transferred into cultivated land, and the land transfer among other feature types was relatively small, at less than 3300 km2. This study provides data support for the scientific management of land resources in Jilin Province, and the resulting LU/CC dataset is of great significance for regional sustainable development.
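The "remote sensing indices" fed to such a random forest classifier are simple band ratios computed per pixel and stacked with the raw bands into a feature matrix. A small sketch with two standard indices (NDVI, McFeeters NDWI); the band arrays are synthetic stand-ins:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-10)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters)."""
    return (green - nir) / (green + nir + 1e-10)

# Stack spectral bands and derived indices into a per-pixel feature cube,
# the kind of input a random forest classifier consumes pixel by pixel.
rng = np.random.default_rng(0)
green, red, nir = rng.uniform(0.05, 0.6, (3, 64, 64))   # surface reflectance
features = np.stack([green, red, nir, ndvi(nir, red), ndwi(green, nir)], axis=-1)
print(features.shape)   # (64, 64, 5)
```

Each (row, col) position then holds a 5-element feature vector; reshaping to (64*64, 5) gives the usual samples-by-features layout for training.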

18 pages, 9234 KiB  
Article
High-Density Polyethylene Pipe Butt-Fusion Joint Detection via Total Focusing Method and Spatiotemporal Singular Value Decomposition
by Haowen Zhang, Qiang Wang, Juan Zhou, Linlin Wu, Weirong Xu and Hong Wang
Processes 2024, 12(6), 1267; https://doi.org/10.3390/pr12061267 - 19 Jun 2024
Abstract
High-density polyethylene (HDPE) pipes are widely used for urban natural gas transportation. Pipes are usually welded using the technique of thermal butt fusion, which is prone to manufacturing defects that are detrimental to safe operation. This paper proposes a spatiotemporal singular value decomposition preprocessing improved total focusing method (STSVD-ITFM) imaging algorithm combined with ultrasonic phased array technology for non-destructive testing. The ultrasonic real-valued signal data are first processed using STSVD filtering, enhancing the spatiotemporal singular values corresponding to the defect signal components. The TFM algorithm is then improved by establishing a composite modification factor based on the directivity function and an energy attenuation factor corrected by adding an angle variable. Finally, the filtered signal data are utilized for imaging. Experiments are conducted on specimen blocks of HDPE materials with through-hole defects. The results show that the STSVD-ITFM algorithm proposed in this paper can better suppress static clutter in the near-field region, and its average signal-to-noise ratios are all higher than those of the TFM algorithm. Moreover, the STSVD-ITFM algorithm has the smallest average error among all defect depth quantification results.
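The SVD-filtering idea can be shown on a tiny Casorati-style matrix: static clutter dominates the largest singular component, so zeroing it leaves the weaker defect echo. The synthetic components below are constructed orthogonal so the separation is exact; real data only approximates this:

```python
import numpy as np

def svd_clutter_filter(M, n_remove=1):
    """Zero the largest singular components (static clutter) and rebuild."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[:n_remove] = 0.0
    return U @ np.diag(s) @ Vt

# Orthogonal rank-1 components: strong static clutter + weak "defect" echo
u_static = np.array([1.0, 1.0, 1.0, 1.0]) / 2
u_defect = np.array([1.0, -1.0, 1.0, -1.0]) / 2
v_static = np.array([1.0, 1.0, 1.0, 1.0]) / 2
v_defect = np.array([1.0, 1.0, -1.0, -1.0]) / 2
clutter = 10.0 * np.outer(u_static, v_static)   # singular value 10
defect = 1.0 * np.outer(u_defect, v_defect)     # singular value 1

filtered = svd_clutter_filter(clutter + defect)
print(np.allclose(filtered, defect))            # True: clutter removed, defect kept
```

In the paper's setting the filtered signals then feed the improved TFM delay-and-sum imaging step.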

22 pages, 7895 KiB  
Article
Spatiotemporal Fusion Prediction of Sea Surface Temperatures Based on the Graph Convolutional Neural and Long Short-Term Memory Networks
by Jingjing Liu, Lei Wang, Fengjun Hu, Ping Xu and Denghui Zhang
Water 2024, 16(12), 1725; https://doi.org/10.3390/w16121725 - 18 Jun 2024
Abstract
Sea surface temperature (SST) prediction plays an important role in scientific research, environmental protection, and other marine-related fields. However, most current prediction methods do not effectively exploit the spatial correlation of SSTs, which limits the improvement of SST prediction accuracy. Therefore, this paper first explores spatial correlation mining methods, including regular boundary division, convolutional sliding translation, and clustering neural networks. Then, spatial correlation mining through a graph convolutional neural network (GCN) is proposed, which solves the problem of the dependency on regular Euclidean space and the lack of spatial correlation around group boundaries in the above three methods. On this basis, this paper combines the spatial advantages of the GCN and the temporal advantages of the long short-term memory network (LSTM) and proposes a spatiotemporal fusion model (GCN-LSTM) for SST prediction. The proposed model can capture SST features in both the spatial and temporal dimensions more effectively and complete the SST prediction by spatiotemporal fusion. The experiments prove that the proposed model greatly improves the prediction accuracy and is an effective model for SST prediction.
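The GCN half of such a GCN-LSTM aggregates each grid point's features with those of its graph neighbours; the per-time-step outputs then form the sequence an LSTM consumes. A minimal numpy sketch of one graph-convolution layer (graph, weights, and sizes are illustrative):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^-1/2 from node degrees
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)          # ReLU

# 4 SST grid points on a small chain graph, 3 input features, 8 hidden units
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))     # per-node SST features at one time step
W = rng.standard_normal((3, 8))     # learnable layer weights
H = gcn_layer(A, X, W)
print(H.shape)                      # (4, 8)
```

Because the adjacency matrix is arbitrary, the same layer works on irregular coastal grids where convolutional sliding windows do not.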

18 pages, 7478 KiB  
Article
Estimation of Rice Leaf Area Index Utilizing a Kalman Filter Fusion Methodology Based on Multi-Spectral Data Obtained from Unmanned Aerial Vehicles (UAVs)
by Minglei Yu, Jiaoyang He, Wanyu Li, Hengbiao Zheng, Xue Wang, Xia Yao, Tao Cheng, Xiaohu Zhang, Yan Zhu, Weixing Cao and Yongchao Tian
Remote Sens. 2024, 16(12), 2073; https://doi.org/10.3390/rs16122073 - 7 Jun 2024
Abstract
The rapid and accurate estimation of leaf area index (LAI) through remote sensing holds significant importance for precise crop management. However, a vegetation index model constructed directly from multi-spectral data lacks robustness and spatiotemporal expansibility, making its direct application in practical production challenging. This study aimed to establish a simple and effective method for LAI estimation to address the poor accuracy and stability encountered by vegetation index models under varying conditions. Based on seven years of field plot trials with different varieties and nitrogen fertilizer treatments, the Kalman filter (KF) fusion method was employed to integrate the estimated outcomes of multiple vegetation index models, and the fusion process was investigated by comparing and analyzing the relationship between fixed and dynamic variances alongside the fusion accuracy of optimal combinations during different growth stages. A novel multi-model integration fusion method, KF-DGDV (Kalman Filtering with Different Growth Periods and Different Vegetation Index Models), which combines the growth characteristics and uncertainty of LAI, was designed for the precise monitoring of LAI across various growth phases of rice. The results indicated that the KF-DGDV technique exhibits superior accuracy in estimating LAI compared with statistical data fusion and the conventional vegetation index model method. Specifically, a high R2 value of 0.76 was achieved during the tillering to booting stage, and 0.66 at the heading to maturity stage. In contrast, within the framework of the traditional vegetation index models, the red-edge difference vegetation index (DVIREP) model demonstrated superior performance, with R2 values of 0.65 during the tillering to booting stage and 0.50 during the heading to maturity stage. The multi-model integration method (MME) yielded an R2 value of 0.67 for LAI estimation during the tillering to booting stage and 0.53 during the heading to maturity stage. Consequently, KF-DGDV provides an effective and stable real-time quantitative estimation method for LAI in rice.
(This article belongs to the Special Issue UAS Technology and Applications in Precision Agriculture)
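The core of Kalman-style fusion of several model estimates is inverse-variance weighting: each vegetation-index model's LAI estimate is weighted by its (fixed or dynamic) error variance, and the fused variance is always smaller than any input's. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance (Kalman-style) fusion of independent estimates:
    weight each model by 1/variance, normalize, and combine."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)     # smaller than every input variance
    return fused, fused_var

# Two hypothetical vegetation-index models disagree on LAI
lai, var = fuse(estimates=[3.0, 4.0], variances=[0.25, 0.75])
print(lai, var)   # 3.25 0.1875
```

The fused value sits closer to the lower-variance model (3.0), which is the behavior KF-DGDV exploits by switching variance settings across growth stages.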

27 pages, 35655 KiB  
Article
An Improved Gap-Filling Method for Reconstructing Dense Time-Series Images from LANDSAT 7 SLC-Off Data
by Yue Li, Qiang Liu, Shuang Chen and Xiaotong Zhang
Remote Sens. 2024, 16(12), 2064; https://doi.org/10.3390/rs16122064 - 7 Jun 2024
Abstract
Over recent decades, Landsat satellite data have evolved into a highly valuable resource across diverse fields. Long-term satellite data records with integrity and consistency, such as the Landsat series, provide indispensable data for many applications. However, the malfunction of the Scan Line Corrector (SLC) on the Landsat 7 satellite in 2003 resulted in striping in subsequent images, compromising the temporal consistency and data quality of Landsat time-series data. While various methods have been proposed to improve the quality of Landsat 7 SLC-off data, existing gap-filling methods fail to enhance the temporal resolution of reconstructed images, and spatiotemporal fusion methods encounter challenges in managing large-scale datasets. Therefore, we propose a method for reconstructing dense time series from SLC-off data. This method utilizes the Neighborhood Similar Pixel Interpolator to fill in missing values and leverages time-series information to reconstruct high-resolution images. Taking the blue band as an example, the surface reflectance verification results show that the Mean Absolute Error (MAE) and BIAS reach minimum values of 0.0069 and 0.0014, respectively, with the Correlation Coefficient (CC) and Structural Similarity Index Metric (SSIM) reaching 0.93 and 0.94. The proposed method exhibits advantages in repairing SLC-off data and reconstructing dense time-series data, enabling enhanced remote sensing applications and reliable reconstruction of Earth’s surface reflectance data.
(This article belongs to the Special Issue Quantitative Remote Sensing of Vegetation and Its Applications)
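The Neighborhood Similar Pixel Interpolator idea is to fill each gap pixel from nearby pixels that are spectrally similar in a gap-free reference image. A deliberately simplified single-band sketch (the real NSPI also weights by distance and combines multiple bands):

```python
import numpy as np

def fill_gaps(target, reference, mask, window=3, n_similar=4):
    """Fill masked pixels of `target` with the mean of the `n_similar`
    window neighbours whose `reference` values are closest to the gap
    pixel's reference value (a simplified NSPI-style rule)."""
    filled = target.copy()
    h, w = target.shape
    r = window // 2
    for i, j in zip(*np.nonzero(mask)):
        i0, i1 = max(0, i - r), min(h, i + r + 1)
        j0, j1 = max(0, j - r), min(w, j + r + 1)
        nb_t = target[i0:i1, j0:j1].ravel()
        nb_r = reference[i0:i1, j0:j1].ravel()
        valid = ~mask[i0:i1, j0:j1].ravel()          # exclude other gap pixels
        order = np.argsort(np.abs(nb_r[valid] - reference[i, j]))
        filled[i, j] = nb_t[valid][order[:n_similar]].mean()
    return filled

target = np.arange(25, dtype=float).reshape(5, 5)    # SLC-off image
reference = target + 0.1                             # nearby cloud-free date
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                                    # one gap pixel
target[2, 2] = np.nan
out = fill_gaps(target, reference, mask)
print(out[2, 2])                                     # 12.0 for this synthetic case
```

The filled value is the mean of the four most reference-similar neighbours, so smooth gradients are reproduced exactly.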

30 pages, 9825 KiB  
Article
DEEP-STA: Deep Learning-Based Detection and Localization of Various Types of Inter-Frame Video Tampering Using Spatiotemporal Analysis
by Naheed Akhtar, Muhammad Hussain and Zulfiqar Habib
Mathematics 2024, 12(12), 1778; https://doi.org/10.3390/math12121778 - 7 Jun 2024
Abstract
Inter-frame tampering in surveillance videos undermines the integrity of video evidence, potentially influencing law enforcement investigations and court decisions. This type of tampering is the most common tampering method, often imperceptible to the human eye. Until now, various algorithms based on handcrafted features have been proposed to identify such tampering. Automatically detecting and localizing tampering and determining its type, while maintaining accuracy and processing speed, remains a challenge. We propose a novel method for detecting inter-frame tampering that exploits a 2D convolutional neural network (2D-CNN) for deep automatic feature extraction from fused spatiotemporal information; employs an autoencoder to significantly reduce the computational overhead by reducing the dimensionality of the feature space; analyzes long-range dependencies within video frames using long short-term memory (LSTM) and gated recurrent units (GRU), which helps to detect tampering traces; and finally adds a fully connected (FC) layer with softmax activation for classification. The structural similarity index measure (SSIM) is utilized to localize tampering. We perform extensive experiments on datasets comprised of challenging videos with different complexity levels. The results demonstrate that the proposed method can identify and pinpoint tampering regions with more than 90% accuracy, irrespective of video frame rates, video formats, the number of tampered frames, and the compression quality factor.
(This article belongs to the Special Issue Deep Learning in Computer Vision: Theory and Applications)

20 pages, 6364 KiB  
Article
MST-DGCN: A Multi-Scale Spatio-Temporal and Dynamic Graph Convolution Fusion Network for Electroencephalogram Recognition of Motor Imagery
by Yuanling Chen, Peisen Liu and Duan Li
Electronics 2024, 13(11), 2174; https://doi.org/10.3390/electronics13112174 - 3 Jun 2024
Abstract
The motor imagery brain-computer interface (MI-BCI) has the ability to use electroencephalogram (EEG) signals to control and communicate with external devices. By leveraging the unique characteristics of task-related brain signals, this system facilitates enhanced communication with these devices. Such capabilities hold significant potential for advancing rehabilitation and the development of assistive technologies. In recent years, deep learning has received considerable attention in the MI-BCI field due to its powerful feature extraction and classification capabilities. However, two factors significantly impact the performance of deep-learning models. The size of the EEG datasets influences how effectively these models can learn. Similarly, the ability of classification models to extract features directly affects their accuracy in recognizing patterns. In this paper, we propose a Multi-Scale Spatio-Temporal and Dynamic Graph Convolution Fusion Network (MST-DGCN) to address these issues. In the data-preprocessing stage, we employ two strategies, data augmentation and transfer learning, to alleviate the problem of an insufficient data volume in deep learning. By using multi-scale convolution, spatial attention mechanisms, and dynamic graph neural networks, our model effectively extracts discriminative features. The MST-DGCN mainly consists of three parts: the multi-scale spatio-temporal module, which extracts multi-scale information and refines spatial attention; the dynamic graph convolution module, which extracts key connectivity information; and the classification module. We conduct experiments on real EEG datasets and achieve an accuracy of 77.89% and a Kappa value of 0.7052, demonstrating the effectiveness of the MST-DGCN in MI-BCI tasks. Our research provides new ideas and methods for the further development of MI-BCI systems.

33 pages, 10723 KiB  
Article
IONOLAB-Fusion: Fusion of Radio Occultation into Computerized Ionospheric Tomography
by Sinem Deniz Yenen and Feza Arikan
Atmosphere 2024, 15(6), 675; https://doi.org/10.3390/atmos15060675 - 31 May 2024
Abstract
In this study, a 4-D computerized ionospheric tomography algorithm, IONOLAB-Fusion, is developed to reconstruct electron density using both actual and virtual vertical and horizontal paths for all ionospheric states. The user-friendly algorithm only requires the coordinates of the region of interest and range with the desired spatio-temporal resolutions. The model ionosphere is formed using spherical voxels in a lexicographical order so that a 4-D ionosphere can be mapped to a 2-D matrix. The model matrix is formed automatically using a background ionospheric model in an optimized retrospective or near-real-time manner. The singular value decomposition is applied to extract a subset of significant singular values and corresponding signal subspace basis vectors. The measurement vector is filled automatically with the optimized number of ground-based and space-based paths. The reconstruction is obtained in closed form in the least squares sense. When the performance of IONOLAB-Fusion across Europe was compared with ionosonde profiles, a 26.51% and 32.33% improvement was observed over the background ionospheric model for quiet and disturbed days, respectively. When compared with GIM-TEC, the agreement of IONOLAB-Fusion was 37.89% and 31.58% better than that achieved with the background model for quiet and disturbed days, respectively.
(This article belongs to the Section Upper Atmosphere)
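The signal-subspace step described here, restricting a closed-form least-squares solve to the dominant singular components, can be sketched in a few lines. The synthetic system below is ill-conditioned by construction (a near-duplicate column), which is the situation truncation is meant to regularize; the matrix sizes are illustrative:

```python
import numpy as np

def truncated_svd_solve(A, b, k):
    """Least-squares solution restricted to the k dominant singular
    components of A (a signal-subspace / truncated-SVD regularization)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))                      # 20 ray paths, 3 voxels
A[:, 2] = A[:, 0] + 1e-9 * rng.standard_normal(20)    # near-duplicate column:
                                                      # third singular value ≈ 0
x_true = np.array([1.0, -2.0, 0.0])
b = A @ x_true                                        # simulated TEC measurements
x = truncated_svd_solve(A, b, k=2)                    # keep the 2-D signal subspace
print(np.round(A @ x - b, 6))                         # residual ≈ 0
```

Dropping the near-zero singular value prevents it from amplifying measurement noise, at the cost of losing the (unobservable) component along the discarded direction.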

19 pages, 4001 KiB  
Article
Advancing Spatiotemporal Pollutant Dispersion Forecasting with an Integrated Deep Learning Framework for Crucial Information Capture
by Yuchen Wang, Zhengshan Luo, Yulei Kong and Jihao Luo
Sustainability 2024, 16(11), 4531; https://doi.org/10.3390/su16114531 - 27 May 2024
Abstract
This study addressed the limitations of traditional methods in predicting air pollution dispersion, which include restrictions in handling spatiotemporal dynamics, unbalanced feature importance, and data scarcity. To overcome these challenges, this research introduces a novel deep learning-based model, SAResNet-TCN, which integrates the strengths of a Residual Neural Network (ResNet) and a Temporal Convolutional Network (TCN). This fusion is designed to effectively capture the spatiotemporal characteristics and temporal correlations within pollutant dispersion data. The incorporation of a sparse attention (SA) mechanism further refines the model’s focus on critical information, thereby improving efficiency. Furthermore, this study employed a Time-Series Generative Adversarial Network (TimeGAN) to augment the dataset, thereby improving the generalisability of the model. In rigorous ablation and comparison experiments, the SAResNet-TCN model demonstrated significant advances in predicting pollutant dispersion patterns, including accurate predictions of concentration peaks and trends. These results were enhanced by a global sensitivity analysis (GSA) and an additive-by-addition approach, which identified the optimal combination of input variables for different scenarios by examining their impact on the model’s performance. This study also included visual representations of the maximum downwind hazardous distance (MDH-distance) for pollutants, validated against the Prairie Grass Project Release 31, with the Protective Action Criteria (PAC) and Immediately Dangerous to Life or Health (IDLH) levels serving as hazard thresholds. This comprehensive approach to contaminant dispersion prediction aims to provide an innovative and practical solution for environmental hazard prediction and management.
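The TCN half of SAResNet-TCN is built from causal dilated convolutions: the output at time t depends only on inputs at t and earlier, with dilation widening the receptive field. A minimal sketch of one such filter (kernel and dilation chosen for illustration):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal dilated convolution: y[t] combines x[t], x[t-d], x[t-2d], ...
    (the building block of a Temporal Convolutional Network)."""
    k = len(w)
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i in range(k):
            tau = t - i * dilation
            if tau >= 0:            # implicit left zero-padding keeps causality
                y[t] += w[i] * x[tau]
    return y

x = np.arange(10.0)                                       # a toy time series
y = causal_dilated_conv(x, w=np.array([1.0, -1.0]), dilation=2)
print(y)   # dilated differences x[t] - x[t-2]: [0. 1. 2. 2. 2. 2. 2. 2. 2. 2.]
```

Stacking such layers with dilations 1, 2, 4, ... gives an exponentially growing history window while never leaking future values into the past, which is what makes TCNs usable for forecasting.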

15 pages, 2850 KiB  
Article
Residual Spatiotemporal Convolutional Neural Network Based on Multisource Fusion Data for Approaching Precipitation Forecasting
by Tianpeng Zhang, Donghai Wang, Lindong Huang, Yihao Chen and Enguang Li
Atmosphere 2024, 15(6), 628; https://doi.org/10.3390/atmos15060628 - 24 May 2024
Viewed by 383
Abstract
Approaching precipitation forecasting (nowcasting) refers to the prediction of precipitation on short time scales and is usually treated as a spatiotemporal sequence prediction problem based on radar echo maps. However, because it relies on single-image prediction, it struggles to capture sudden severe convective events and lacks physical constraints, which may lead to prediction ambiguity and to false or missed alarms. Therefore, this study dynamically combines meteorological elements from surface observations with upper-air reanalysis data to establish complex nonlinear relationships among meteorological variables based on multisource data. We design a Residual Spatiotemporal Convolutional Network (ResSTConvNet) specifically for this purpose. In this model, data fusion is achieved through a channel attention mechanism that assigns weights to different channels. Feature extraction is conducted through simultaneous three-dimensional and two-dimensional convolution operations in a purely convolutional structure, allowing the network to learn spatiotemporal features. Finally, feature fitting is accomplished through residual connections, enhancing the model’s predictive capability. Furthermore, we evaluate the performance of our model in 0–3 h forecasting. The results show that, compared with baseline methods, this network performs significantly better at predicting heavy rainfall. Moreover, as the forecast lead time increases, the spatial features of its forecasts are richer than those of the other baseline models, leading to more accurate predictions of precipitation intensity and coverage area. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
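The channel attention mechanism the abstract describes, assigning a learned weight to each input channel before fusion, follows the squeeze-and-excitation pattern. A minimal numpy sketch, with hypothetical gating weights `w1` and `w2` (not the paper's parameters):

```python
import numpy as np

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention: pool each channel
    of a (C, H, W) stack to a scalar, pass the scalars through a small
    gating network, and reweight the channels with the resulting gate."""
    squeeze = feature_maps.mean(axis=(1, 2))        # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)          # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid weights in (0, 1)
    return feature_maps * gate[:, None, None]       # reweight each channel
```

In a multisource setting, each channel holds one data source (radar, surface observation, reanalysis field), so the gate learns how much each source should contribute to the fused representation.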
17 pages, 13688 KiB  
Technical Note
Fast Fusion of Sentinel-2 and Sentinel-3 Time Series over Rangelands
by Paul Senty, Radoslaw Guzinski, Kenneth Grogan, Robert Buitenwerf, Jonas Ardö, Lars Eklundh, Alkiviadis Koukos, Torbern Tagesson and Michael Munk
Remote Sens. 2024, 16(11), 1833; https://doi.org/10.3390/rs16111833 - 21 May 2024
Viewed by 644
Abstract
Monitoring ecosystems at regional or continental scales is paramount for biodiversity conservation, climate change mitigation, and sustainable land management. Effective monitoring requires satellite imagery with both high spatial resolution and high temporal resolution. However, there is currently no single, freely available data source that fulfills these needs. A seamless fusion of data from the Sentinel-3 and Sentinel-2 optical sensors could meet these monitoring requirements as Sentinel-2 observes at the required spatial resolution (10 m) while Sentinel-3 observes at the required temporal resolution (daily). We introduce the Efficient Fusion Algorithm across Spatio-Temporal scales (EFAST), which interpolates Sentinel-2 data into smooth time series (both spatially and temporally). This interpolation is informed by Sentinel-3’s temporal profile such that the phenological changes occurring between two Sentinel-2 acquisitions at a 10 m resolution are assumed to mirror those observed at Sentinel-3’s resolution. The EFAST consists of a weighted sum of Sentinel-2 images (weighted by a distance-to-clouds score) coupled with a phenological correction derived from Sentinel-3. We validate the capacity of our method to reconstruct the phenological profile at a 10 m resolution over one rangeland area and one irrigated cropland area. The EFAST outperforms classical interpolation techniques over both rangeland (−72% in the mean absolute error, MAE) and agricultural areas (−43% MAE); it presents a performance comparable to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) (+5% MAE in both test areas) while being 140 times faster. The computational efficiency of our approach and its temporal smoothing enable the creation of seamless and high-resolution phenology products on a regional to continental scale. Full article
(This article belongs to the Section Ecological Remote Sensing)
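The fusion step the EFAST abstract outlines, a weighted sum of Sentinel-2 images (weighted by a distance-to-clouds score) plus a phenological correction derived from Sentinel-3, can be sketched as follows. The function and argument names are illustrative, not the EFAST API, and the cloud scoring and correction terms are placeholders for the paper's actual definitions.

```python
import numpy as np

def fuse_time_step(s2_images, clear_scores, s3_correction):
    """Blend Sentinel-2 acquisitions by a per-image clear-sky weight,
    then add a coarse-scale phenological correction from Sentinel-3."""
    w = np.asarray(clear_scores, dtype=float)
    w = w / w.sum()                                  # normalise the weights
    blended = sum(wi * np.asarray(img, dtype=float)  # weighted sum of images
                  for wi, img in zip(w, s2_images))
    return blended + s3_correction                   # phenological term
```

Because the per-pixel work is a weighted average plus one additive term, rather than a per-pixel regression as in STARFM, the overall cost stays low, which is consistent with the reported 140-fold speed-up.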
