Search Results (479)

Search Parameters:
Keywords = container-based cloud

40 pages, 44470 KiB  
Article
A Decision Support System for Crop Recommendation Using Machine Learning Classification Algorithms
by Murali Krishna Senapaty, Abhishek Ray and Neelamadhab Padhy
Agriculture 2024, 14(8), 1256; https://doi.org/10.3390/agriculture14081256 - 30 Jul 2024
Abstract
Today, crop suggestions and necessary guidance have become a regular need for a farmer. Farmers generally depend on their local agriculture officers regarding this, and it may be difficult to obtain the right guidance at the right time. Nowadays, crop datasets are available on different websites in the agriculture sector, and they play a crucial role in suggesting suitable crops. So, a decision support system that analyzes the crop dataset using machine learning techniques can assist farmers in making better choices regarding crop selections. The main objective of this research is to provide quick guidance to farmers with more accurate and effective crop recommendations by utilizing machine learning methods, global positioning system coordinates, and crop cloud data. Here, the recommendation can be more personalized, which enables the farmers to predict crops in their specific geographical context, taking into account factors like climate, soil composition, water availability, and local conditions. In this regard, an existing historical crop dataset that contains the state, district, year, area-wise production rate, crop name, and season was collected for 246,091 sample records from the Dataworld website, which holds data on 37 different crops from different areas of India. Also, for better analysis, a dataset was collected from the agriculture offices of the Rayagada, Koraput, and Gajapati districts in Odisha state, India. Both of these datasets were combined and stored using a Firebase cloud service. Thirteen different machine learning algorithms have been applied to the dataset to identify dependencies within the data. To facilitate this process, an Android application was developed using Android Studio (Electric Eel | 2023.1.1) Emulator (Version 32.1.14), Software Development Kit (SDK, Android SDK 33), and Tools. 
A model has been proposed that implements the SMOTE (Synthetic Minority Oversampling Technique) to balance the dataset, and then it allows for the implementation of 13 different classifiers, such as logistic regression, decision tree (DT), K-Nearest Neighbor (KNN), SVC (Support Vector Classifier), random forest (RF), Gradient Boost (GB), Bagged Tree, extreme gradient boosting (XGB classifier), Ada Boost Classifier, Cat Boost, HGB (Histogram-based Gradient Boosting), SGDC (Stochastic Gradient Descent), and MNB (Multinomial Naive Bayes) on the cloud dataset. It is observed that the performance of the SGDC method is 1.00 in accuracy, precision, recall, F1-score, and ROC AUC (Receiver Operating Characteristics–Area Under the Curve) and is 0.91 in sensitivity and 0.54 in specificity after applying the SMOTE. Overall, SGDC has a better performance compared to all other classifiers implemented in the predictions. Full article
(This article belongs to the Section Digital Agriculture)
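The pipeline above hinges on SMOTE rebalancing before the classifiers are applied. As a rough illustration of the oversampling step only (not the authors' implementation, which runs imbalanced-learn-style SMOTE on their crop dataset), here is a naive pure-Python sketch that synthesizes minority samples by interpolating between a minority point and one of its nearest neighbours:

```python
import random

random.seed(42)

def smote(minority, n_new, k=3):
    """Naive SMOTE: synthesize a new point by interpolating between a
    random minority sample and one of its k nearest neighbours."""
    synthetic = []
    for _ in range(n_new):
        a = random.choice(minority)
        # k nearest neighbours of a (excluding a itself), by squared distance
        neigh = sorted((p for p in minority if p is not a),
                       key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = random.choice(neigh)
        t = random.random()
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# toy imbalanced 2-D dataset: 50 majority vs. 5 minority samples
majority = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
minority = [(random.gauss(3, 1), random.gauss(3, 1)) for _ in range(5)]
balanced_minority = minority + smote(minority, len(majority) - len(minority))
```

Because each synthetic point is a convex combination of two minority samples, the oversampled class stays inside the bounding box of the original minority data.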

25 pages, 14134 KiB  
Article
Fast Robust Point Cloud Registration Based on Compatibility Graph and Accelerated Guided Sampling
by Chengjun Wang, Zhen Zheng, Bingting Zha and Haojie Li
Remote Sens. 2024, 16(15), 2789; https://doi.org/10.3390/rs16152789 - 30 Jul 2024
Abstract
Point cloud registration is a crucial technique in photogrammetry, remote sensing, etc. A generalized 3D point cloud registration framework has been developed to estimate the optimal rigid transformation between two point clouds using 3D key point correspondences. However, challenges arise due to the uncertainty in 3D key point detection techniques and the similarity of local surface features. These factors often lead to feature descriptors establishing correspondences containing significant outliers. Current point cloud registration algorithms are typically hindered by these outliers, affecting both their efficiency and accuracy. In this paper, we propose a fast and robust point cloud registration method based on a compatibility graph and accelerated guided sampling. By constructing a compatible graph with correspondences, a minimum subset sampling method combining compatible edge sampling and compatible vertex sampling is proposed to reduce the influence of outliers on the estimation of the registration parameters. Additionally, an accelerated guided sampling strategy based on preference scores is presented, which effectively utilizes model parameters generated during the iterative process to guide the sampling toward inliers, thereby enhancing computational efficiency and the probability of estimating optimal parameters. Experiments are carried out on both synthetic and real-world data. The experimental results demonstrate that our proposed algorithm achieves a significant balance between registration accuracy and efficiency compared to state-of-the-art registration algorithms such as RANSIC and GROR. Even with up to 2000 initial correspondences and an outlier ratio of 99%, our algorithm achieves a minimum rotation error of 0.737° and a minimum translation error of 0.0201 m, completing the registration process within 1 s. Full article
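The "optimal rigid transformation from 3D key point correspondences" that this framework estimates has a closed-form core: the Kabsch/Umeyama SVD solution. A minimal NumPy sketch on synthetic, outlier-free correspondences (the paper's contribution, the compatibility-graph guided sampling, is not modeled here):

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the Kabsch/Umeyama SVD solution on (N, 3) correspondence arrays."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# recover a known rotation (30 degrees about z) and translation
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
R_est, t_est = rigid_transform(src, src @ R_true.T + t_true)
```

With clean correspondences the recovery is exact; the sampling strategies discussed in the abstract exist precisely because real correspondences carry heavy outlier contamination.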

11 pages, 957 KiB  
Perspective
Precision Medicine in Peritoneal Dialysis: An Expert Opinion on the Application of the Sharesource Platform for the Remote Management of Patients
by Loris Neri, Lorenzo Di Liberato, Gaetano Alfano, Valeria Allegrucci, Nicoletta Appio, Carla Bussi, Daniela Cecilia Cannarile, Ilaria De Palma, Silvio Di Stante, Rosa Pacifico, Vincenzo Panuccio, Silvia Porreca, Vincenzo Terlizzi, Silvia D’Alonzo and Giusto Viglino
J. Pers. Med. 2024, 14(8), 807; https://doi.org/10.3390/jpm14080807 - 30 Jul 2024
Abstract
The management of end-stage kidney disease (ESKD) has been constantly evolving over the last decade with the development of targeted approaches. In this field, telemedicine and remote monitoring are based on the availability of new cyclers that allow for bidirectional communication (between patient and physician) and for the application of the Sharesource cloud-based platform. These technologies allow patients with ESKD to undergo automated peritoneal dialysis (APD) at home. However, these approaches are not well standardized and largely applied yet. Therefore, this study aimed to elaborate a protocol for the utilization of the Sharesource platform to facilitate the practical management of patients treated with APD. A series of expert meetings were held between September 2022 and January 2023 in Italy. The participants (ten nephrologists and five nurses) from nine Italian public dialysis centers shared their opinions, examined the current scientific literature in the field, and reviewed the key characteristics of the Sharesource system to achieve a common position on this topic. A detailed and practical document containing experts’ opinions and suggestions on the use of the Sharesource platform for the management of patients treated with APD was produced. This expert opinion might represent a new useful instrument in clinical practice for managing patients undergoing home-based peritoneal dialysis (PD) through the Sharesource platform, which is valid not only for Italy. These recommendations pave the way to novel patient-centered and personalized therapeutic approaches for ESKD and highlight the advantages of telemedicine and remote monitoring in the management of patients with ESKD undergoing PD and its positive impact on their quality of life. Full article

19 pages, 43879 KiB  
Article
3D Data Processing and Entropy Reduction for Reconstruction from Low-Resolution Spatial Coordinate Clouds in a Technical Vision System
by Ivan Y. Alba Corpus, Wendy Flores-Fuentes, Oleg Sergiyenko, Julio C. Rodríguez-Quiñonez, Jesús E. Miranda-Vega, Wendy Garcia-González and José A. Núñez-López
Entropy 2024, 26(8), 646; https://doi.org/10.3390/e26080646 - 30 Jul 2024
Abstract
This paper proposes an advancement in the application of a Technical Vision System (TVS), which integrates a laser scanning mechanism with a single light sensor to measure 3D spatial coordinates. In this application, the system is used to scan and digitalize objects using a rotating table to explore the potential of the system for 3D scanning at reduced resolutions. The experiments undertaken searched for optimal scanning windows and used statistical data filtering techniques and regression models to find a method to generate a 3D scan that was still recognizable with the least amount of 3D points, balancing the number of points scanned and time, while at the same time reducing effects caused by the particularities of the TVS, such as noise and entropy in the form of natural distortion in the resulting scans. The evaluation of the experimentation results uses 3D point registration methods, joining multiple faces from the original volume scanned by the TVS and aligning it to the ground truth model point clouds, which are based on a commercial 3D camera to verify that the reconstructed 3D model retains substantial detail from the original object. This research finds it is possible to reconstruct sufficiently detailed 3D models obtained from the TVS, which contain coarsely scanned data or scans that initially lack high definition or are too noisy. Full article

23 pages, 1216 KiB  
Article
Towards Collaborative Edge Intelligence: Blockchain-Based Data Valuation and Scheduling for Improved Quality of Service
by Yao Du, Zehua Wang, Cyril Leung and Victor C. M. Leung
Future Internet 2024, 16(8), 267; https://doi.org/10.3390/fi16080267 - 28 Jul 2024
Abstract
Collaborative edge intelligence, a distributed computing paradigm, refers to a system where multiple edge devices work together to process data and perform distributed machine learning (DML) tasks locally. Decentralized Internet of Things (IoT) devices share knowledge and resources to improve the quality of service (QoS) of the system with reduced reliance on centralized cloud infrastructure. However, the paradigm is vulnerable to free-riding attacks, where some devices benefit from the collective intelligence without contributing their fair share, potentially disincentivizing collaboration and undermining the system’s effectiveness. Moreover, data collected from heterogeneous IoT devices may contain biased information that decreases the prediction accuracy of DML models. To address these challenges, we propose a novel incentive mechanism that relies on time-dependent blockchain records and multi-access edge computing (MEC). We formulate the QoS problem as an unbounded multiple knapsack problem at the network edge. Furthermore, a decentralized valuation protocol is introduced atop blockchain to incentivize contributors and disincentivize free-riders. To improve model prediction accuracy within latency requirements, a data scheduling algorithm is given based on a curriculum learning framework. Based on our computer simulations using heterogeneous datasets, we identify two critical factors for enhancing the QoS in collaborative edge intelligence systems: (1) mitigating the impact of information loss and free-riders via decentralized data valuation and (2) optimizing the marginal utility of individual data samples by adaptive data scheduling. Full article
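The QoS formulation as an unbounded multiple knapsack problem builds on the classic single-knapsack dynamic program, sketched below with illustrative weights and values (the paper's actual utilities and edge constraints are not reproduced here):

```python
def unbounded_knapsack(capacity, items):
    """Dynamic program for the unbounded knapsack: each (weight, value)
    item may be taken any number of times; returns the best total value."""
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in items:
            if w <= c:
                best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# illustrative instance: weights/values are made up, not from the paper
value = unbounded_knapsack(10, [(3, 5), (4, 7), (7, 13)])
print(value)  # best is (3, 5) + (7, 13) -> 18
```

The multiple-knapsack variant distributes items across several capacities (here, edge nodes), which is what makes the scheduling problem hard and motivates the heuristics in the paper.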
15 pages, 5072 KiB  
Technical Note
Reflection–Polarization Characteristics of Greenhouses Studied by Drone-Polarimetry Focusing on Polarized Light Pollution of Glass Surfaces
by Péter Takács, Adalbert Tibiássy, Balázs Bernáth, Viktor Gotthard and Gábor Horváth
Remote Sens. 2024, 16(14), 2568; https://doi.org/10.3390/rs16142568 - 13 Jul 2024
Abstract
Drone-based imaging polarimetry is a valuable new tool for the remote sensing of the polarization characteristics of the Earth’s surface. After briefly reviewing two earlier drone-polarimetric studies, we present here the results of our drone-polarimetric campaigns, in which we measured the reflection–polarization patterns of greenhouses. From the measured patterns of the degree and angle of linear polarization of reflected light, we calculated the measure (plp) of polarized light pollution of glass surfaces. The knowledge of polarized light pollution is important for aquatic insect ecology, since polarotactic aquatic insects are the endangered victims of artificial horizontally polarized light sources. We found that the so-called Palm House of a botanical garden has only a low polarized light pollution, 3.6% ≤ plp ≤ 13.7%, while the greenhouses with tilted roofs are strongly polarized-light-polluting, with 24.8% ≤ plp ≤ 40.4%. Similarly, other tilted-roofed greenhouses contain very high polarized light pollution, plp ≤ 76.7%. Under overcast skies, the polarization patterns and plp values of greenhouses practically only depend on the direction of view relative to the glass surfaces, as the rotationally invariant diffuse cloud light is the only light source. However, under cloudless skies, the polarization patterns of glass surfaces significantly depend on the azimuth direction of view and its angle relative to the solar meridian because, in this case, sunlight is the dominant light source, rather than the sky. In the case of a given direction of view, those glass surfaces are the strongest polarized-light-polluting, from which sunlight and/or skylight is reflected at or near Brewster’s angle in a nearly vertical plane, i.e., with directions of polarization close to horizontal. Therefore, the plp value is usually greatest when the sun shines directly or from behind. 
The plp value of greenhouses is always the smallest in the green spectral range due to the green plants under the glass. Full article
(This article belongs to the Special Issue Drone Remote Sensing II)
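The Brewster-angle condition the abstract invokes is easy to quantify: for light reflecting off glass (refractive index roughly 1.5) out of air, reflection at Brewster's angle is fully polarized parallel to the surface, which for tilted roofs yields the near-horizontal polarization that attracts polarotactic insects. A quick check of the angle itself:

```python
import math

# Brewster's angle theta_B = arctan(n2 / n1) for air -> glass;
# n_glass ~ 1.5 is a typical assumed value, not a figure from the paper
n_air, n_glass = 1.0, 1.5
theta_b = math.degrees(math.atan(n_glass / n_air))
print(f"Brewster's angle for air->glass: {theta_b:.1f} degrees")
```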

17 pages, 842 KiB  
Article
2D3D-DescNet: Jointly Learning 2D and 3D Local Feature Descriptors for Cross-Dimensional Matching
by Shuting Chen, Yanfei Su, Baiqi Lai, Luwei Cai, Chengxi Hong, Li Li, Xiuliang Qiu, Hong Jia and Weiquan Liu
Remote Sens. 2024, 16(13), 2493; https://doi.org/10.3390/rs16132493 - 8 Jul 2024
Abstract
The cross-dimensional matching of 2D images and 3D point clouds is an effective method by which to establish the spatial relationship between 2D and 3D space, which has potential applications in remote sensing and artificial intelligence (AI). In this paper, we propose a novel multi-task network, 2D3D-DescNet, to learn 2D and 3D local feature descriptors jointly and perform cross-dimensional matching of 2D image patches and 3D point cloud volumes. The 2D3D-DescNet contains two branches with which to learn 2D and 3D feature descriptors, respectively, and utilizes a shared decoder to generate the feature maps of 2D image patches and 3D point cloud volumes. Specifically, the generative adversarial network (GAN) strategy is embedded to distinguish the source of the generated feature maps, thereby facilitating the use of the learned 2D and 3D local feature descriptors for cross-dimensional retrieval. Meanwhile, a metric network is embedded to compute the similarity between the learned 2D and 3D local feature descriptors. Finally, we construct a 2D-3D consistent loss function to optimize the 2D3D-DescNet. In this paper, the cross-dimensional matching of 2D images and 3D point clouds is explored with the small object of the 3Dmatch dataset. Experimental results demonstrate that the 2D and 3D local feature descriptors jointly learned by 2D3D-DescNet are similar. In addition, in terms of 2D and 3D cross-dimensional retrieval and matching between 2D image patches and 3D point cloud volumes, the proposed 2D3D-DescNet significantly outperforms the current state-of-the-art approaches based on jointly learning 2D and 3D feature descriptors; the cross-dimensional retrieval at TOP1 on the 3DMatch dataset is improved by over 12%. Full article
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)

36 pages, 30845 KiB  
Article
Semantic Visual SLAM Algorithm Based on Improved DeepLabV3+ Model and LK Optical Flow
by Yiming Li, Yize Wang, Liuwei Lu, Yiran Guo and Qi An
Appl. Sci. 2024, 14(13), 5792; https://doi.org/10.3390/app14135792 - 2 Jul 2024
Abstract
Aiming at the problem that dynamic targets in indoor environments lead to low accuracy and large errors in the localization and position estimation of visual SLAM systems and the inability to build maps containing semantic information, a semantic visual SLAM algorithm based on the semantic segmentation network DeepLabV3+ and LK optical flow is proposed based on the ORB-SLAM2 system. First, the dynamic target feature points are detected and rejected based on the lightweight semantic segmentation network DeepLabV3+ and LK optical flow method. Second, the static environment occluded by the dynamic target is repaired using the time-weighted multi-frame fusion background repair technique. Lastly, the filtered static feature points are used for feature matching and position calculation. Meanwhile, the semantic labeling information of static objects obtained based on the lightweight semantic segmentation network DeepLabV3+ is fused with the static environment information after background repair to generate dense point cloud maps containing semantic information, and the semantic dense point cloud maps are transformed into semantic octree maps using the octree spatial segmentation data structure. The localization accuracy of the visual SLAM system and the construction of the semantic maps are verified using the widely used TUM RGB-D dataset and real scene data, respectively. The experimental results show that the proposed semantic visual SLAM algorithm can effectively reduce the influence of dynamic targets on the system, and compared with other advanced algorithms, such as DynaSLAM, it has the highest performance in indoor dynamic environments in terms of localization accuracy and time consumption. In addition, semantic maps can be constructed so that the robot can better understand and adapt to the indoor dynamic environment. Full article
(This article belongs to the Section Robotics and Automation)

18 pages, 4924 KiB  
Article
LOD2-Level+ Low-Rise Building Model Extraction Method for Oblique Photography Data Using U-NET and a Multi-Decision RANSAC Segmentation Algorithm
by Yufeng He, Xiaobian Wu, Weibin Pan, Hui Chen, Songshan Zhou, Shaohua Lei, Xiaoran Gong, Hanzeyu Xu and Yehua Sheng
Remote Sens. 2024, 16(13), 2404; https://doi.org/10.3390/rs16132404 - 30 Jun 2024
Abstract
Oblique photography is a regional digital surface model generation technique that can be widely used for building 3D model construction. However, due to the lack of geometric and semantic information about the building, these models make it difficult to differentiate more detailed components in the building, such as roofs and balconies. This paper proposes a deep learning-based method (U-NET) for constructing 3D models of low-rise buildings that address the issues. The method ensures complete geometric and semantic information and conforms to the LOD2 level. First, digital orthophotos are used to perform building extraction based on U-NET, and then a contour optimization method based on the main direction of the building and the center of gravity of the contour is used to obtain the regular building contour. Second, the pure building point cloud model representing a single building is extracted from the whole point cloud scene based on the acquired building contour. Finally, the multi-decision RANSAC algorithm is used to segment the building detail point cloud and construct a triangular mesh of building components, followed by a triangular mesh fusion and splicing method to achieve monolithic building components. The paper presents experimental evidence that the building contour extraction algorithm can achieve a 90.3% success rate and that the resulting single building 3D model contains LOD2 building components, which contain detailed geometric and semantic information. Full article
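The multi-decision RANSAC segmentation described above extends the basic RANSAC plane-hypothesis loop. A minimal single-plane version on synthetic data (illustrative only; the paper's multi-decision criteria and mesh construction are not modeled):

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, seed=0):
    """Basic RANSAC plane fit: hypothesize a plane from 3 random points,
    keep the hypothesis with the most inliers (|distance| <= tol)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:               # degenerate (collinear) sample
            continue
        normal /= norm
        inliers = np.abs((points - a) @ normal) <= tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, a @ normal)
    return best_plane, best_inliers

# synthetic "roof" patch: 100 points on z = 0 plus 20 elevated outliers
rng = np.random.default_rng(1)
roof = np.c_[rng.uniform(0, 10, (100, 2)), np.zeros(100)]
clutter = np.c_[rng.uniform(0, 10, (20, 2)), rng.uniform(1, 5, 20)]
plane, inliers = ransac_plane(np.vstack([roof, clutter]))
```

Segmenting several roof components amounts to running this loop repeatedly, removing each detected plane's inliers before the next pass.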

16 pages, 2176 KiB  
Article
A Pilot Detection and Associate Study of Gene Presence-Absence Variation in Holstein Cattle
by Clarissa Boschiero, Mahesh Neupane, Liu Yang, Steven G. Schroeder, Wenbin Tuo, Li Ma, Ransom L. Baldwin, Curtis P. Van Tassell and George E. Liu
Animals 2024, 14(13), 1921; https://doi.org/10.3390/ani14131921 - 28 Jun 2024
Abstract
Presence-absence variations (PAVs) are important structural variations, wherein a genomic segment containing one or more genes is present in some individuals but absent in others. While PAVs have been extensively studied in plants, research in cattle remains limited. This study identified PAVs in 173 Holstein bulls using whole-genome sequencing data and assessed their associations with 46 economically important traits. Out of 28,772 cattle genes (from the longest transcripts), a total of 26,979 (93.77%) core genes were identified (present in all individuals), while variable genes included 928 softcore (present in 95–99% of individuals), 494 shell (present in 5–94%), and 371 cloud genes (present in <5%). Cloud genes were enriched in functions associated with hormonal and antimicrobial activities, while shell genes were enriched in immune functions. PAV-based genome-wide association studies identified associations between gene PAVs and 16 traits including milk, fat, and protein yields, as well as traits related to health and reproduction. Associations were found on multiple chromosomes, illustrating important associations on cattle chromosomes 7 and 15, involving olfactory receptor and immune-related genes, respectively. By examining the PAVs at the population level, the results of this research provided crucial insights into the genetic structures underlying the complex traits of Holstein cattle. Full article
(This article belongs to the Collection Advances in Cattle Breeding, Genetics and Genomics)
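The core/softcore/shell/cloud partition quoted in the abstract is a frequency binning of gene presence across individuals. A direct encoding of those thresholds (the handling of fractions between 99% and 100% is an assumption, as the abstract defines core as "present in all"):

```python
def classify_gene(presence_fraction):
    """Bin a gene by the fraction of individuals it is present in,
    using the thresholds quoted in the abstract."""
    if presence_fraction >= 1.0:
        return "core"        # present in all individuals
    if presence_fraction >= 0.95:
        return "softcore"    # present in 95-99% of individuals
    if presence_fraction >= 0.05:
        return "shell"       # present in 5-94%
    return "cloud"           # present in <5%

# hypothetical gene frequencies over the 173 sequenced bulls
examples = {f: classify_gene(f) for f in (173 / 173, 0.97, 0.50, 0.02)}
```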

23 pages, 6836 KiB  
Article
Simulation Modeling of the Process of Danger Zone Formation in Case of Fire at an Industrial Facility
by Yuri Matveev, Fares Abu-Abed, Olga Zhironkina and Sergey Zhironkin
Fire 2024, 7(7), 221; https://doi.org/10.3390/fire7070221 - 28 Jun 2024
Abstract
Proactive prevention and fighting fire at industrial facilities, often located in urbanized clusters, should include the use of modern methods for modeling danger zones that appear during the spread of the harmful combustion products of various chemicals. Simulation modeling is a method that allows predicting the parameters of a danger zone, taking into account a number of technological, landscape, and natural-climatic factors that have a certain variability. The purpose of this research is to develop a mathematical simulation model of the formation process of a danger zone during an emergency at an industrial facility, including an explosion of a container with chemicals and fire, with the spread of an aerosol and smoke cloud near residential areas. The subject of this study was the development of a simulation model of a danger zone of combustion gases and its graphical interpretation as a starting point for timely decision making on evacuation by an official. The mathematical model of the process of danger zone formation during an explosion and fire at an industrial facility presented in this article is based on the creation of a GSL library from data on the mass of explosion and combustion products, verification using the Wald test, and the use of algorithms for calculating the starting and ending points of the danger zone for various factor values’ variables, constructing ellipses of the boundaries of the distribution of pollution spots. The developed model makes it possible to calculate the linear dimensions and area of the danger zone under optimistic and pessimistic scenarios, constructing a graphical diagram of the zones of toxic doses from the source of explosion and combustion. The results obtained from the modeling can serve as the basis for making quick decisions about evacuating residents from nearby areas. Full article
(This article belongs to the Special Issue Fire and Explosions Risk in Industrial Processes)

21 pages, 1402 KiB  
Article
Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing
by Urooba Shahid, Ghufran Ahmed, Shahbaz Siddiqui, Junaid Shuja and Abdullateef Oluwagbemiga Balogun
Sensors 2024, 24(13), 4195; https://doi.org/10.3390/s24134195 - 27 Jun 2024
Abstract
Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration in the digital landscape. FaaS has serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; the developers can focus on how to deploy and create code efficiently. Since FaaS aligns well with the IoT, it easily integrates with IoT devices, thereby making it possible to perform event-based actions and real-time computations. In our research, we offer an exclusive likelihood-based model of adaptive machine learning for identifying the right place of function. We employ the XGBoost regressor to estimate the execution time for each function and utilize the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection process of a placement. In replication, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. Thus, the primary objectives are to address deadlines and enhance the use of any resource, and from this, we can see that effective utilization of resources leads to enhanced deadline compliance. Full article
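Stripped of the learned regressors, the placement decision described above reduces to choosing the node whose predicted completion time meets the deadline at least cost. A toy sketch with hypothetical per-node predictions (in the paper these come from the trained XGBoost execution-time and decision-tree latency models, not hard-coded numbers):

```python
def place_function(nodes, deadline_ms):
    """Choose the node with the smallest predicted completion time
    (execution + network latency) that still meets the deadline.
    nodes: name -> (predicted_exec_ms, predicted_latency_ms)."""
    feasible = {name: ex + lat
                for name, (ex, lat) in nodes.items()
                if ex + lat <= deadline_ms}
    return min(feasible, key=feasible.get) if feasible else None

# hypothetical predictions for three node types in an edge-cloud topology
nodes = {"edge-a": (120.0, 5.0), "edge-b": (80.0, 60.0), "cloud": (40.0, 95.0)}
choice = place_function(nodes, deadline_ms=150.0)
```

Note how the fast-but-distant cloud node loses here: its low execution time is offset by network latency, which is the trade-off the abstract's model is built to capture.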

15 pages, 18719 KiB  
Article
Adaptive Weighted Data Fusion for Line Structured Light and Photometric Stereo Measurement System
by Jianxin Shi, Yuehua Li, Ziheng Zhang, Tiejun Li and Jingbo Zhou
Sensors 2024, 24(13), 4187; https://doi.org/10.3390/s24134187 - 27 Jun 2024
Abstract
Line structured light (LSL) measurement systems can obtain high accuracy profiles, but the overall clarity relies greatly on the sampling interval of the scanning process. Photometric stereo (PS), on the other hand, is sensitive to tiny features but has poor geometrical accuracy. Cooperative measurement with these two methods is an effective way to ensure precision and clarity results. In this paper, an LSL-PS cooperative measurement system is brought out. The calibration methods used in the LSL and PS measurement system are given. Then, a data fusion algorithm with adaptive weights is proposed, where an error function that contains the 3D point cloud matching error and normal vector error is established. The weights, which are based on the angles of adjacent normal vectors, are also added to the error function. Afterward, the fusion results can be obtained by solving linear equations. From the experimental results, it can be seen that the proposed method has the advantages of both the LSL and PS methods. The 3D reconstruction results have the merits of high accuracy and high clarity. Full article
(This article belongs to the Section Optical Sensors)

29 pages, 26734 KiB  
Article
Variational-Based Spatial–Temporal Approximation of Images in Remote Sensing
by Majid Amirfakhrian and Faramarz F. Samavati
Remote Sens. 2024, 16(13), 2349; https://doi.org/10.3390/rs16132349 - 27 Jun 2024
Abstract
Cloud cover and shadows often hinder the accurate analysis of satellite images, impacting various applications, such as digital farming, land monitoring, environmental assessment, and urban planning. This paper presents a new approach to enhancing cloud-contaminated satellite images using a novel variational model that approximates the combination of the temporal and spatial components of satellite imagery. Leveraging this model, we derive two spatial-temporal methods that compute the missing or contaminated data in cloudy images using seamless Poisson blending. In the first method, we extend Poisson blending to compute the spatial-temporal approximation, using the pixel-wise temporal approximation as the guiding vector field. The second method is more general: a variation-based approach uses the rate of change in the temporal domain to divide the missing region into low-variation and high-variation sub-regions, which better guides the Poisson blending and further refines the spatial-temporal approximation. The proposed methods have the same complexity as conventional methods, which is linear in the number of pixels in the region of interest. Our comprehensive evaluation demonstrates the effectiveness of the proposed methods through quantitative metrics, including the Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Metric (SSIM), revealing significant improvements over existing approaches. The evaluations also offer insight into choosing between the first and second methods for a specific scenario, taking into account the temporal and spatial resolutions as well as the scale and extent of the missing data. Full article
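The guided Poisson blending at the heart of both methods can be illustrated with a toy 1D time-series version. This is a sketch under stated assumptions, not the paper's implementation: inside the masked (cloudy) region the reconstruction's discrete Laplacian is forced to match that of a temporal guide, with Dirichlet boundary values taken from the observed signal; the hole is assumed not to touch the array ends.

```python
import numpy as np

def poisson_fill_1d(signal, mask, guide):
    """Fill masked samples so that the discrete Laplacian of the result
    matches the Laplacian of `guide` inside the hole, with observed
    samples as boundary conditions (1D seamless Poisson blending)."""
    out = signal.astype(float).copy()
    hole = np.flatnonzero(mask)
    pos = {p: k for k, p in enumerate(hole)}
    n = len(hole)
    A = np.zeros((n, n))
    b = np.empty(n)
    for k, p in enumerate(hole):
        A[k, k] = 2.0
        # Negative discrete Laplacian of the guide at p.
        b[k] = 2.0 * guide[p] - guide[p - 1] - guide[p + 1]
        for q in (p - 1, p + 1):
            if q in pos:
                A[k, pos[q]] = -1.0
            else:
                b[k] += out[q]  # known (cloud-free) boundary value
    out[hole] = np.linalg.solve(A, b)
    return out
```

With a flat guide this reduces to smooth interpolation across the hole; a guide built from the pixel-wise temporal approximation injects the temporal structure while the boundary keeps the result seamless against the observed data.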
(This article belongs to the Special Issue Remote Sensing in Environmental Modelling)

16 pages, 9644 KiB  
Article
FF3D: A Rapid and Accurate 3D Fruit Detector for Robotic Harvesting
by Tianhao Liu, Xing Wang, Kewei Hu, Hugh Zhou, Hanwen Kang and Chao Chen
Sensors 2024, 24(12), 3858; https://doi.org/10.3390/s24123858 - 14 Jun 2024
Abstract
This study presents the Fast Fruit 3D Detector (FF3D), a novel framework that combines a 3D neural network for fruit detection with an anisotropic Gaussian-based next-best-view estimator. The proposed one-stage 3D detector, which utilizes an end-to-end 3D detection network, shows superior accuracy and robustness compared to traditional 2D methods. The core of FF3D is a 3D object detection network based on a 3D convolutional neural network (3D CNN), followed by the anisotropic Gaussian-based next-best-view estimation module. This architecture combines point cloud feature extraction and object detection, achieving accurate real-time fruit localization. The model is trained on a large-scale 3D fruit dataset that contains data collected from an apple orchard. Additionally, the proposed next-best-view estimator improves accuracy and lowers the collision risk during grasping. Thorough assessments on the test set and in a simulated environment validate the efficacy of FF3D. The experimental results show an AP of 76.3%, an AR of 92.3%, and an average Euclidean distance error of less than 6.2 mm, highlighting the framework's potential to overcome challenges in orchard environments. Full article
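The anisotropic-Gaussian next-best-view idea can be sketched as follows. This is an illustrative reading, not the authors' algorithm: fit an anisotropic Gaussian (mean and full covariance) to a detected fruit's point cloud and propose observing along the axis of greatest positional uncertainty.

```python
import numpy as np

def next_best_view(points):
    """Fit an anisotropic 3D Gaussian to a fruit's point cloud and return
    (center, view_direction): the direction is the covariance eigenvector
    with the largest eigenvalue, i.e. the axis of greatest uncertainty."""
    center = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    view_direction = eigvecs[:, np.argmax(eigvals)]
    return center, view_direction
```

Viewing along the long axis of the fitted Gaussian shrinks the direction in which the fruit's position is least constrained, which is one plausible way such an estimator could reduce localization error before grasping.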
