Search Results (78)

Search Parameters:
Keywords = remote source coding

35 pages, 6364 KiB  
Article
Mapping the Influence of Olympic Games’ Urban Planning on the Land Surface Temperatures: An Estimation Using Landsat Series and Google Earth Engine
by Joan-Cristian Padró, Valerio Della Sala, Marc Castelló-Bueno and Rafael Vicente-Salar
Remote Sens. 2024, 16(18), 3405; https://doi.org/10.3390/rs16183405 - 13 Sep 2024
Viewed by 1170
Abstract
The Olympic Games are a sporting event and a catalyst for urban development in their host city. In this study, we utilized remote sensing and GIS techniques to examine the impact of the Olympic infrastructure on the surface temperature of urban areas. Using Landsat Series Collection 2 Tier 1 Level 2 data and cloud computing provided by Google Earth Engine (GEE), this study examines the effects of various forms of Olympic Games facility urban planning in different historical moments and location typologies, as follows: monocentric, polycentric, peripheral and clustered Olympic ring. The GEE code applies to the Olympic Games held from Montreal 1976 to Paris 2024. However, this paper focuses specifically on the representative cases of Paris 2024, Tokyo 2020, Rio 2016, Beijing 2008, Sydney 2000, Barcelona 1992, Seoul 1988, and Montreal 1976. The study is concerned not only with obtaining absolute land surface temperatures (LST) but also with the relative influence of mega-event infrastructures on mitigating or increasing urban heat; the locally normalized land surface temperature (NLST) was therefore utilized. In some cities (Paris, Tokyo, Beijing, and Barcelona), it has been determined that Olympic planning has resulted in the development of green spaces, creating “green spots” that contribute to lower-than-average temperatures. However, there is significant temperature variation within intensely built-up areas, such as Olympic villages and the surroundings of the Olympic stadium, which can become “hotspots.” Different planning typologies of Olympic infrastructure can therefore have varying impacts on city heat islands, with the polycentric and clustered Olympic ring typologies displaying a mitigating effect. This research contributes a cloud computing method that can be updated for future Olympic Games or adapted for other mega-events and utilizes a widely available remote sensing data source to study a specific urban planning context. Full article
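A minimal sketch of the locally normalized LST (NLST) used above, assuming a per-scene z-score normalization; the abstract does not spell out the exact formula, so the formulation and the toy temperature values here are illustrative only:

```python
import numpy as np

def normalized_lst(lst, mask=None):
    # Z-score each pixel against the statistics of the analysis extent,
    # so cool "green spots" fall below 0 and "hotspots" rise above 0.
    lst = np.asarray(lst, dtype=float)
    values = lst[mask] if mask is not None else lst
    return (lst - values.mean()) / values.std()

# Toy 2x2 scene (degrees C): one cool park pixel among built-up pixels.
scene = np.array([[30.0, 32.0],
                  [34.0, 32.0]])
nlst = normalized_lst(scene)  # the park pixel gets a negative score
```

Because the normalization is local, scores are comparable across cities and acquisition dates even when absolute temperatures differ.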
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology II)

20 pages, 8417 KiB  
Article
How to Circumvent and Beat the Ransomware in Android Operating System—A Case Study of Locker.CB!tr
by Kornel Drabent, Robert Janowski and Jordi Mongay Batalla
Electronics 2024, 13(11), 2212; https://doi.org/10.3390/electronics13112212 - 6 Jun 2024
Viewed by 1204
Abstract
Ransomware is one of the most widespread cyberattacks. It encrypts a user’s files or locks the smartphone in order to blackmail the victim. The attacking software on the infected device receives its orders from the attacker’s remote server, known as command and control. In this work, we propose a method to recover from a Locker.CB!tr ransomware attack after it has infected a smartphone. The novelty of our approach lies in exploiting the communication between the ransomware on the infected device and the attacker’s command and control server as a point from which to reverse disruptive actions like screen locking or file encryption. For this purpose, we carried out both a dynamic and a static analysis of the decompiled Locker.CB!tr ransomware source code to understand its operating principles, and exploited communication patterns from the IP layer to the application layer to fully impersonate the command and control server. This way, we gained full control over the Locker.CB!tr ransomware instance. From that moment, we were able to command the Locker.CB!tr ransomware instance on the infected device to unlock the smartphone or decrypt the files. The contributions of this work are a novel method to recover a mobile phone after a ransomware attack, based on analysis of the ransomware’s communication with the C&C server, and a mechanism for impersonating the ransomware C&C server and thus gaining full control over the ransomware instance. Full article
(This article belongs to the Special Issue Intelligent Solutions for Network and Cyber Security)

20 pages, 6722 KiB  
Article
An Artificial Neural Network-Based Data-Driven Embedded Controller Design for a Pneumatic Artificial Muscle-Actuated Pressing Unit
by Mustafa Engin, Okan Duymazlar and Dilşad Engin
Appl. Sci. 2024, 14(11), 4797; https://doi.org/10.3390/app14114797 - 1 Jun 2024
Viewed by 1235
Abstract
Obtaining mathematical models of nonlinear cyber–physical systems for use in controller design is both difficult and time consuming. In this paper, an ANN-based method is proposed to design a controller for a nonlinear system that does not require a mathematical model. The developed ANN-based control algorithm is implemented directly on a real-time field controller, and its performance is evaluated without the use of auxiliary devices, such as PCs or workstations. By executing machine learning algorithms on local devices or embedded systems, edge artificial intelligence (Edge AI) with transfer learning gives priority to processing data at the source, minimizing the necessity for continuous connectivity to remote servers. The control algorithm was developed in the MATLAB Simulink environment. Two ANNs were cascaded: the first computes the appropriate pressure signal for a given displacement, while the second predicts the force based on the pressure value from the first. Subsequently, the ANN-based control algorithm was converted to SCL code using the Simulink PLC Coder and deployed on the PLC for operation. The algorithm was tested in two different scenarios. The conducted tests demonstrated the successful prediction of pressure signals corresponding to the targeted displacement values and accurate estimation of force values. Experimental work was carried out on pneumatic artificial muscle (PAM) manipulators as a nonlinear model application, and the obtained results were discussed. Full article
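The cascade described above (displacement → pressure → force) can be sketched with two small feed-forward networks. The layer sizes and the random placeholder weights below are illustrative stand-ins for the trained networks, not the paper's actual architecture:

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # One-hidden-layer network with tanh activation.
    return np.tanh(x @ w1 + b1) @ w2 + b2

rng = np.random.default_rng(0)

def params(n_in, n_hidden, n_out):
    # Placeholder "trained" weights for this sketch.
    return (rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(size=(n_hidden, n_out)), np.zeros(n_out))

pressure_net = params(1, 8, 1)   # ANN 1: displacement -> pressure signal
force_net = params(1, 8, 1)      # ANN 2: pressure -> force

displacement = np.array([[0.5]])             # target displacement
pressure = mlp(displacement, *pressure_net)  # first stage
force = mlp(pressure, *force_net)            # second stage, fed by the first
```

The key structural point is that the second network's input is the first network's output, exactly as in the cascade the abstract describes.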

14 pages, 3691 KiB  
Article
Bagging Improves the Performance of Deep Learning-Based Semantic Segmentation with Limited Labeled Images: A Case Study of Crop Segmentation for High-Throughput Plant Phenotyping
by Yinglun Zhan, Yuzhen Zhou, Geng Bai and Yufeng Ge
Sensors 2024, 24(11), 3420; https://doi.org/10.3390/s24113420 - 26 May 2024
Viewed by 808
Abstract
Advancements in imaging, computer vision, and automation have revolutionized various fields, including field-based high-throughput plant phenotyping (FHTPP). This integration allows for the rapid and accurate measurement of plant traits. Deep Convolutional Neural Networks (DCNNs) have emerged as a powerful tool in FHTPP, particularly in crop segmentation—identifying crops from the background—crucial for trait analysis. However, the effectiveness of DCNNs often hinges on the availability of large, labeled datasets, which poses a challenge due to the high cost of labeling. In this study, a deep learning with bagging approach is introduced to enhance crop segmentation using high-resolution RGB images, tested on the NU-Spidercam dataset from maize plots. The proposed method outperforms traditional machine learning and deep learning models in prediction accuracy and speed. Remarkably, it achieves up to 40% higher Intersection-over-Union (IoU) than the threshold method and 11% over conventional machine learning, with significantly faster prediction times and manageable training duration. Crucially, it demonstrates that even small labeled datasets can yield high accuracy in semantic segmentation. This approach not only proves effective for FHTPP but also suggests potential for broader application in remote sensing, offering a scalable solution to semantic segmentation challenges. This paper is accompanied by publicly available source code. Full article
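A sketch of the bagging idea behind the paper: several segmentation models, each trained on a bootstrap sample of the limited labeled images, vote per pixel, and the ensemble mask is scored with IoU. The tiny masks below are made up for illustration:

```python
import numpy as np

def iou(pred, truth):
    # Intersection-over-Union for binary masks.
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def bagged_mask(member_masks):
    # Per-pixel majority vote over the ensemble members' predictions.
    return np.mean(member_masks, axis=0) >= 0.5

truth = np.array([[1, 1, 0, 0]], dtype=bool)
members = np.array([            # three hypothetical bootstrap-trained models
    [[1, 1, 0, 0]],
    [[1, 0, 0, 0]],
    [[1, 1, 1, 0]],
], dtype=bool)
vote = bagged_mask(members)     # the vote cancels individual errors
```

Here the second member misses a crop pixel and the third adds a spurious one, yet the majority vote recovers the correct mask, which is why bagging helps when each member is trained on little data.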
(This article belongs to the Section Sensing and Imaging)

17 pages, 6752 KiB  
Article
iBVP Dataset: RGB-Thermal rPPG Dataset with High Resolution Signal Quality Labels
by Jitesh Joshi and Youngjun Cho
Electronics 2024, 13(7), 1334; https://doi.org/10.3390/electronics13071334 - 2 Apr 2024
Cited by 1 | Viewed by 3091
Abstract
Remote photo-plethysmography (rPPG) has emerged as a non-intrusive and promising physiological sensing capability in human–computer interface (HCI) research, gradually extending its applications in health-monitoring and clinical care contexts. With advanced machine learning models, recent datasets collected in real-world conditions have gradually enhanced the performance of rPPG methods in recovering heart-rate and heart-rate-variability metrics. However, the signal quality of the reference ground-truth PPG data in existing datasets is by and large neglected, while poor-quality references negatively influence models. This work introduces, for the first time, an imaging blood volume pulse (iBVP) dataset of synchronized RGB and thermal infrared videos with ground-truth PPG signals from the ear, together with high-resolution signal-quality labels. Participants performed rhythmic breathing, head-movement, and stress-inducing tasks, which help reflect real-world variations in psycho-physiological states. This work conducts dense (per-sample) signal-quality assessment to discard noisy segments of the ground truth and the corresponding video frames. We further present a novel end-to-end machine learning framework, iBVPNet, that features efficient and effective spatio-temporal feature aggregation for the reliable estimation of BVP signals. Finally, this work examines the under-explored feasibility of extracting BVP signals from thermal video frames. The iBVP dataset and source codes are publicly available for research use. Full article
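The per-sample quality screening described above amounts to masking the ground truth and its video frames by a quality threshold. The 0-1 quality labels and the cutoff below are assumptions for illustration, not the dataset's actual label scale:

```python
import numpy as np

def keep_high_quality(ppg, quality, frames, threshold=0.8):
    # Drop noisy ground-truth samples and their corresponding video frames.
    keep = quality >= threshold
    return ppg[keep], frames[keep]

ppg = np.array([0.10, 0.40, 0.20, 0.90])      # reference BVP samples
quality = np.array([0.95, 0.30, 0.85, 0.90])  # dense signal-quality labels
frames = np.arange(4)                         # stand-in for video frames
clean_ppg, clean_frames = keep_high_quality(ppg, quality, frames)
```

Screening signal and frames with the same boolean mask keeps the video and the reference aligned sample-for-sample, which is what training an rPPG model requires.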
(This article belongs to the Special Issue Future Trends and Challenges in Human-Computer Interaction)

25 pages, 12816 KiB  
Technical Note
Ecosystem Integrity Remote Sensing—Modelling and Service Tool—ESIS/Imalys
by Peter Selsam, Jan Bumberger, Thilo Wellmann, Marion Pause, Ronny Gey, Erik Borg and Angela Lausch
Remote Sens. 2024, 16(7), 1139; https://doi.org/10.3390/rs16071139 - 25 Mar 2024
Cited by 1 | Viewed by 1384
Abstract
One of the greatest challenges of our time is monitoring the rapid environmental changes taking place worldwide at both local and global scales. This requires easy-to-use and ready-to-implement tools and services to monitor and quantify aspects of bio- and geodiversity change and the impact of land use intensification using freely available and global remotely sensed data, and to derive remotely sensed indicators. Currently, there are no services for quantifying both raster- and vector-based indicators in a “compact tool”. The main innovation of ESIS/Imalys is therefore a single remote sensing (RS) tool that combines RS data processing, data management, and the continuous and discrete quantification and derivation of RS indicators. With the ESIS/Imalys project (Ecosystem Integrity Remote Sensing—Modelling and Service Tool), we aim to present environmental indicators on a clearly defined and reproducible basis. The Imalys software library generates the RS indicators and remote sensing products defined for ESIS. This paper provides an overview of the functionality of the Imalys software library. An overview of the technical background of the implementation of the Imalys library, data formats and the user interfaces is given. Examples of RS-based indicators derived using the Imalys tool at pixel level and at zone level (vector level) are presented. Furthermore, the advantages and disadvantages of the Imalys tool are discussed in detail in order to better assess the value of Imalys for users and developers. The applicability of the indicators is demonstrated through three ecological applications, namely: (1) monitoring landscape diversity, (2) monitoring landscape structure and landscape fragmentation, and (3) monitoring land use intensity and its impact on ecosystem functions. Despite the integration of large amounts of data, Imalys can run on any PC, as the processing and derivation of indicators has been greatly optimised. 
The Imalys source code is freely available and is hosted and maintained under an open source license. Complete documentation of all methods, functions and derived indicators can be found in the freely available Imalys manual. The user-friendliness of Imalys, despite the integration of a large amount of RS data, makes it another important tool for ecological research, modelling and application for the monitoring and derivation of ecosystem indicators from local to global scale. Full article
(This article belongs to the Section Earth Observation for Emergency Management)

23 pages, 922 KiB  
Article
Incremental Coding for Real-Time Remote Control over Bandwidth-Limited Channels and Its Applications in Smart Grids
by Yiyu Qiu, Junjie Wu and Wei Chen
Entropy 2024, 26(2), 122; https://doi.org/10.3390/e26020122 - 30 Jan 2024
Viewed by 982
Abstract
Remote control over communication networks with bandwidth-constrained channels has attracted considerable recent attention because it holds the promise of enabling a large number of real-time applications, such as autonomous driving, smart grids, and the industrial internet of things (IIoT). However, due to the limited bandwidth, the sub-packets or even bits have to be transmitted successively, thereby experiencing non-negligible latency and inducing serious performance loss in remote control. To overcome this, we introduce an incremental coding method, in which the actuator acts in real time based on a partially received packet instead of waiting until the entire packet is decoded. On this basis, we applied incremental coding to a linear control system to obtain a remote-control scheme. Both its stability conditions and average linear-quadratic-Gaussian-(LQG) cost are presented. Then, we further investigated a multi-user remote-control method, with a particular focus on its applications in the demand response of smart grids over bandwidth-constrained communication networks. The utility loss due to the bandwidth constraint and communication latency are minimized by jointly optimizing the source coding and real-time demand response. The numerical results show that the incremental-coding-aided remote control performed well in both single-user and multi-user scenarios and outperformed the conventional zero-hold control scheme significantly under the LQG metric. Full article
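The core idea above, acting on a partially received packet, can be illustrated with a toy binary-refinement decoder: after every received bit the actuator already holds a usable estimate whose uncertainty interval halves, instead of idling until the whole packet is decoded. This stand-in is not the paper's actual code construction:

```python
def incremental_decode(bits):
    # Successively refine an estimate of a value quantized into [0, 1).
    lo, hi = 0.0, 1.0
    estimates = []
    for b in bits:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if b else (lo, mid)
        estimates.append((lo + hi) / 2)  # the actuator can act on this now
    return estimates

# Bits describing x = 0.7; each intermediate estimate is available bits
# earlier than a decode-after-the-whole-packet (zero-hold) scheme allows.
estimates = incremental_decode([1, 0, 1, 1])
```

The contrast with conventional zero-hold control is that the zero-hold actuator would keep applying a stale command for four bit-times, while the incremental decoder's applied command converges toward the true one throughout the transmission.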
(This article belongs to the Section Information Theory, Probability and Statistics)

22 pages, 564 KiB  
Article
AAHEG: Automatic Advanced Heap Exploit Generation Based on Abstract Syntax Tree
by Yu Wang, Yipeng Zhang and Zhoujun Li
Symmetry 2023, 15(12), 2197; https://doi.org/10.3390/sym15122197 - 14 Dec 2023
Cited by 1 | Viewed by 2052
Abstract
Automatic Exploit Generation (AEG) involves automatically discovering paths in a program that trigger vulnerabilities, thereby generating exploits. While there is considerable research on heap-related vulnerability detection, such as detecting Heap Overflow and Use After Free (UAF) vulnerabilities, among contemporary heap-automated exploit techniques, only certain ones can hijack program control flow to the shellcode. An important limitation of this approach is that it cannot effectively bypass Linux’s protection mechanisms. To solve this problem, we introduce Automatic Advanced Heap Exploit Generation (AAHEG). It first applies symbolic execution to analyze heap-related primitives in files and then detects potential heap-related vulnerabilities without source code. After identifying these vulnerabilities, AAHEG builds an exploit abstract syntax tree (AST) to identify one or more successful exploit strategies, such as fast bin attack and Safe-unlink. AAHEG then selects exploitable methods via the AST and performs final testing to produce the final exploit. AAHEG generates advanced heap-related exploits because such exploits can bypass Linux protections. In short, AAHEG can automatically detect heap-related vulnerabilities in binaries without source code, build an exploit AST, choose from a variety of advanced heap exploit methods, bypass all Linux protection mechanisms, and generate a final file-form exploit, based on pwntools, which can pass local and remote testing. Experimental results show that AAHEG successfully completed vulnerability detection and exploit generation for 20 Capture The Flag (CTF) binary files, 11 of which have all protection mechanisms enabled. Full article
(This article belongs to the Special Issue Advanced Studies of Symmetry/Asymmetry in Cybersecurity)

20 pages, 5228 KiB  
Article
Potential Impact of Using ChatGPT-3.5 in the Theoretical and Practical Multi-Level Approach to Open-Source Remote Sensing Archaeology, Preliminary Considerations
by Nicodemo Abate, Francesca Visone, Maria Sileo, Maria Danese, Antonio Minervino Amodio, Rosa Lasaponara and Nicola Masini
Heritage 2023, 6(12), 7640-7659; https://doi.org/10.3390/heritage6120402 - 12 Dec 2023
Cited by 1 | Viewed by 3611
Abstract
This study aimed to evaluate the impact of using an AI model, specifically ChatGPT-3.5, in remote sensing (RS) applied to archaeological research. It assessed the model’s abilities in several aspects, in accordance with a multi-level analysis of its usefulness: providing answers to both general and specific questions related to archaeological research; identifying and referencing the sources of information it uses; recommending appropriate tools based on the user’s desired outcome; assisting users in performing basic functions and processes in RS for archaeology (RSA); assisting users in carrying out complex processes for advanced RSA; and integrating with the tools and libraries commonly used in RSA. ChatGPT-3.5 was selected due to its availability as a free resource. The research also aimed to analyse the user’s prior skills, competencies, and language proficiency required to effectively utilise the model for achieving their research goals. Additionally, the study involved generating JavaScript code for interacting with the free Google Earth Engine tool as part of its research objectives. Using these free tools, it was possible to demonstrate the impact that ChatGPT-3.5 can have when embedded in an archaeological RS flowchart at different levels. In particular, it was shown to be useful both for the theoretical part and for the generation of simple and complex processes and elaborations. Full article
(This article belongs to the Special Issue XR and Artificial Intelligence for Heritage)

21 pages, 290958 KiB  
Article
Systematic Quantification and Assessment of Digital Image Correlation Performance for Landslide Monitoring
by Doris Hermle, Markus Keuschnig, Michael Krautblatter and Valentin Tertius Bickel
Geosciences 2023, 13(12), 371; https://doi.org/10.3390/geosciences13120371 - 3 Dec 2023
Viewed by 3098
Abstract
Accurate and reliable analyses of high-alpine landslide displacement magnitudes and rates are key requirements for current and future alpine early warnings. It has been shown that high spatiotemporal-resolution remote sensing data combined with digital image correlation (DIC) algorithms can accurately monitor ground displacements. DIC algorithms still rely on significant amounts of expert input; there is no general mathematical description of the type and spatiotemporal resolution of input data, nor of the DIC parameters, required for successful landslide detection, accurate characterisation of displacement magnitude and rate, and overall error estimation. This work provides generic formulas for estimating appropriate DIC input parameters, drastically reducing the time required for manual input parameter optimisation. We employed the open-source code DIC-FFT on optical remote sensing data acquired between 2014 and 2020 for two landslides in Switzerland to qualitatively and quantitatively show which spatial resolution is required to recognise slope displacements, from satellite images to aerial orthophotos, and how the spatial resolution affects the accuracy of the calculated displacement magnitude and rate. We verified our results by manually tracing geomorphic markers in orthophotos. Here, we show a first generic approach for designing and optimising future remote sensing-based landslide monitoring campaigns to support time-critical applications like early warning systems. Full article
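The core DIC operation, estimating the shift that best aligns two acquisitions of the same slope, can be sketched with FFT-based cross-correlation. This sketch handles whole-pixel shifts on synthetic data only; real DIC tools such as the one used above add windowing and subpixel refinement:

```python
import numpy as np

def fft_displacement(ref, cur):
    # Peak of the circular cross-correlation gives the (dy, dx) shift
    # of `cur` relative to `ref`.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.normal(size=(32, 32))                 # synthetic "orthophoto" patch
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))  # slope moved 3 px down, 2 px left
shift = fft_displacement(ref, cur)
```

Computing the correlation in the frequency domain costs O(N log N) per patch rather than O(N²), which is why FFT-based DIC scales to full orthophoto time series.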
(This article belongs to the Special Issue Landslide Monitoring and Mapping II)

22 pages, 2704 KiB  
Article
SAR-HUB: Pre-Training, Fine-Tuning, and Explaining
by Haodong Yang, Xinyue Kang, Long Liu, Yujiang Liu and Zhongling Huang
Remote Sens. 2023, 15(23), 5534; https://doi.org/10.3390/rs15235534 - 28 Nov 2023
Cited by 4 | Viewed by 2367
Abstract
Since the current remote sensing pre-trained models trained on optical images are not as effective when applied to SAR image tasks, it is crucial to create sensor-specific SAR models with generalized feature representations and to demonstrate with evidence the limitations of optical pre-trained models in downstream SAR tasks. The following aspects are the focus of this study: pre-training, fine-tuning, and explaining. First, we collect the current large-scale open-source SAR scene image classification datasets to pre-train a series of deep neural networks, including convolutional neural networks (CNNs) and vision transformers (ViT). A novel dynamic range adaptive enhancement method and a mini-batch class-balanced loss are proposed to tackle the challenges in SAR scene image classification. Second, the pre-trained models are transferred to various SAR downstream tasks compared with optical ones. Lastly, we propose a novel knowledge point interpretation method to reveal the benefits of the SAR pre-trained model with comprehensive and quantifiable explanations. This study is reproducible using open-source code and datasets, demonstrates generalization through extensive experiments on a variety of tasks, and is interpretable through qualitative and quantitative analyses. The codes and models are open source. Full article

15 pages, 4711 KiB  
Article
DFFA-Net: A Differential Convolutional Neural Network for Underwater Optical Image Dehazing
by Xujia Hou, Feihu Zhang, Zewen Wang, Guanglei Song, Zijun Huang and Jinpeng Wang
Electronics 2023, 12(18), 3876; https://doi.org/10.3390/electronics12183876 - 14 Sep 2023
Viewed by 1167
Abstract
This paper proposes DFFA-Net, a novel differential convolutional neural network designed for underwater optical image dehazing. DFFA-Net is obtained by deeply analyzing the factors that affect the quality of underwater images and by incorporating underwater light propagation characteristics. DFFA-Net introduces a channel differential module that captures the mutual information between the green and blue channels with respect to the red channel. Additionally, a loss function sensitive to RGB color channels is introduced. Experimental results demonstrate that DFFA-Net achieves state-of-the-art performance in terms of quantitative metrics for single-image dehazing among convolutional neural network-based dehazing models. On the widely used Underwater Image Enhancement Benchmark (UIEB) dehazing dataset, DFFA-Net achieves a peak signal-to-noise ratio (PSNR) of 24.2631 and a structural similarity index (SSIM) score of 0.9153. Further, we have deployed DFFA-Net on a self-developed Remotely Operated Vehicle (ROV). In a swimming pool environment, DFFA-Net can process hazy images in real time, providing better visual feedback to the operator. The source code has been open-sourced. Full article
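The PSNR figure quoted above is a simple function of mean squared error; a minimal version, evaluated on a made-up toy pair rather than UIEB images:

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    mse = np.mean((reference - restored) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

reference = np.zeros((4, 4))       # toy ground-truth image in [0, 1]
restored = reference + 0.01        # uniform per-pixel error of 0.01
value = psnr(reference, restored)
```

With a uniform error of 0.01 on a unit-range image, the MSE is 1e-4 and the PSNR is exactly 40 dB, which puts the paper's 24.26 dB on UIEB into perspective: real dehazing residuals are far from uniform.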

15 pages, 3534 KiB  
Communication
A Scalable Reduced-Complexity Compression of Hyperspectral Remote Sensing Images Using Deep Learning
by Sebastià Mijares i Verdú, Johannes Ballé, Valero Laparra, Joan Bartrina-Rapesta, Miguel Hernández-Cabronero and Joan Serra-Sagristà
Remote Sens. 2023, 15(18), 4422; https://doi.org/10.3390/rs15184422 - 8 Sep 2023
Cited by 2 | Viewed by 1663
Abstract
Two key hurdles to the adoption of Machine Learning (ML) techniques in hyperspectral data compression are computational complexity and scalability for large numbers of bands. These are due to the limited computing capacity available in remote sensing platforms and the high computational cost of compression algorithms for hyperspectral data, especially when the number of bands is large. To address these issues, a channel clusterisation strategy is proposed, which reduces the computational demands of learned compression methods for real scenarios and is scalable for different sources of data with varying numbers of bands. The proposed method is compatible with an embedded implementation for state-of-the-art on-board hardware, a first for an ML hyperspectral data compression method. In terms of coding performance, our proposal surpasses established lossy methods such as JPEG 2000 preceded by a spectral Karhunen-Loève Transform (KLT), in clusters of 3 to 7 bands, achieving a PSNR improvement of, on average, 9 dB for AVIRIS and 3 dB for Hyperion images. Full article
(This article belongs to the Special Issue Recent Progress in Hyperspectral Remote Sensing Data Processing)

20 pages, 889 KiB  
Article
Distributed and Lightweight Software Assurance in Cellular Broadcasting Handshake and Connection Establishment
by Sourav Purification, Jinoh Kim, Jonghyun Kim, Ikkyun Kim and Sang-Yoon Chang
Electronics 2023, 12(18), 3782; https://doi.org/10.3390/electronics12183782 - 7 Sep 2023
Cited by 1 | Viewed by 1035
Abstract
With developments in OpenRAN and software-defined radio (SDR), the mobile networking implementations for radio and security control are becoming increasingly software-based. We design and build a lightweight and distributed software assurance scheme, which ensures that a wireless user holds the correct software (version/code) for their wireless networking implementations. Our scheme is distributed (to support distributed and ad hoc networking that does not utilize the networking-backend infrastructure), lightweight (to support resource-constrained device operations), modular (to support compatibility with the existing mobile networking protocols), and supports broadcasting (as mobile and wireless networking has broadcasting applications). Our scheme is distinct from remote code attestation in trusted computing, which requires hardware-based security and real-time challenge-and-response communications with a centralized trusted server, making its deployment prohibitive in distributed and broadcasting-based mobile networking environments. We design our scheme to be prover-specific and incorporate the Merkle tree for verification efficiency, making it appropriate for a wireless-broadcasting medium with multiple receivers. In addition to the theoretical design and analysis, we implement our scheme to assure srsRAN (a popular open-source software for cellular technology, including 4G and 5G) and provide a concrete implementation and application instance to highlight our scheme’s modularity, backward compatibility with the existing 4G/5G standardized protocol, and broadcasting support. Our scheme implementation incorporates delivering the proof in the srsRAN-implemented 4G/5G cellular handshake and connection establishment in radio resource control (RRC). We conduct experiments using SDR and various processors to demonstrate the lightweight design and its appropriateness for wireless networking applications. Our results show that the number of hash computations for proof verification grows logarithmically with the number of software code files being assured, and that verification takes three orders of magnitude less time than proof generation, while the proof generation overhead itself is negligible compared to the software update period. Full article
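The logarithmic verification cost reported above is the standard Merkle-tree property: checking one file against the root takes log2(n) hashes for n files. A minimal sketch follows; the file names and contents are placeholders, not the actual srsRAN layout:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Build the tree bottom-up; a power-of-two leaf count keeps this short.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, index, proof, root):
    # Recompute the path to the root: log2(n) hashes for n files.
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

files = [b"rrc.cc", b"phy.cc", b"mac.cc", b"nas.cc"]  # placeholder contents
root = merkle_root(files)
# Proof for files[2]: its sibling leaf hash, then the opposite subtree hash.
proof = [h(files[3]), h(h(files[0]) + h(files[1]))]
```

Because the receiver only needs the root and a short sibling path, the proof can be broadcast once and checked independently by every receiver, which matches the scheme's broadcasting requirement.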
(This article belongs to the Special Issue 5G Mobile Telecommunication Systems and Recent Advances)

29 pages, 23338 KiB  
Article
Identifying Urban and Socio-Environmental Patterns of Brazilian Amazonian Cities by Remote Sensing and Machine Learning
by Bruno Dias dos Santos, Carolina Moutinho Duque de Pinho, Antonio Páez and Silvana Amaral
Remote Sens. 2023, 15(12), 3102; https://doi.org/10.3390/rs15123102 - 14 Jun 2023
Cited by 2 | Viewed by 1979
Abstract
Identifying urban patterns in the cities in the Brazilian Amazon can help to understand the impact of human actions on the environment, to protect local cultures, and secure the cultural heritage of the region. The objective of this study is to produce a classification of intra-urban patterns in Amazonian cities. Concretely, we produce a set of Urban and Socio-Environmental Patterns (USEPs) in the cities of Santarém and Cametá in Pará, Brazilian Amazon. The contributions of this study are as follows: (1) we use a reproducible research framework based on remote sensing data and machine learning techniques; (2) we integrate spatial data from various sources into a cellular grid, separating the variables into environmental, urban morphological, and socioeconomic dimensions; (3) we generate variables specific to the Amazonian context; and (4) we validate these variables by means of a field visit to Cametá and comparison with patterns described in other works. Machine learning-based clustering is useful to identify seven urban patterns in Santarém and eight urban patterns in Cametá. The urban patterns are semantically explainable and are consistent with the existing scientific literature. The paper provides reproducible and open research that uses only open software and publicly available data sources, making the data product and code available for modification and further contributions to spatial data science analysis. Full article
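The clustering step, grouping grid cells into a small number of urban patterns, can be sketched with a bare-bones k-means. The two features and their values below are invented; the paper's actual variable set (environmental, morphological, socioeconomic) is much richer:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Minimal Lloyd's algorithm: assign cells to the nearest center, recenter.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Each row is one grid cell: (vegetation fraction, built-up density).
cells = np.array([[0.9, 0.1], [0.8, 0.2],
                  [0.1, 0.9], [0.2, 0.8]])
labels = kmeans(cells, k=2)   # two "urban patterns" emerge
```

The cluster labels themselves are arbitrary integers; as in the paper, the semantic interpretation (e.g. "vegetated periphery" vs. "dense built-up core") comes from inspecting the per-cluster feature means afterwards.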