
Search Results (497)

Search Parameters:
Keywords = outdoor mapping

30 pages, 13740 KiB  
Article
Accurate Tracking of Agile Trajectories for a Tail-Sitter UAV Under Wind Disturbances Environments
by Xu Zou, Zhenbao Liu, Zhen Jia and Baodong Wang
Drones 2025, 9(2), 83; https://doi.org/10.3390/drones9020083 - 22 Jan 2025
Viewed by 494
Abstract
To achieve more robust and accurate tracking control of high maneuvering trajectories for a tail-sitter fixed-wing unmanned aerial vehicle (UAV) operating within its full envelope in outdoor environments, a novel control approach is proposed. Firstly, the study rigorously demonstrates the differential flatness property of tail-sitter fixed-wing UAV dynamics using a comprehensive aerodynamics model, which incorporates wind effects without simplification. Then, utilizing the derived flatness functions and the treatments for singularity, the study presents a complete process of the differential flatness transform. This transformation maps the desired maneuver trajectory to a state-input trajectory, facilitating control design. Leveraging an existing controller from the reference literature, trajectory tracking is implemented. Subsequently, a low-cost wind estimation method operating during all flight phases is proposed to estimate the wind effects involved in the model. The wind estimation method involves generating a virtual wind measurement utilizing a low-fidelity tail-sitter model. The virtual wind measurement is integrated with real wind data obtained from the pitot tube and processed through fusion using an extended Kalman filter. Finally, the effectiveness of our methods is confirmed through comprehensive real-world experiments conducted in outdoor settings. The results demonstrate superior robustness and accuracy in controlling challenging agile maneuvering trajectories compared to the existing method. Additionally, the test results highlight the effectiveness of our method in wind estimation. Full article
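The wind-estimation step, which fuses a model-derived virtual wind measurement with pitot-tube data, can be sketched in simplified scalar form. The function below is an illustrative linear stand-in for the paper's extended Kalman filter; the function name, noise variances, and the random-walk wind model are assumptions, not the authors' code.

```python
def kf_wind_fusion(virtual_wind, pitot_wind, r_virtual=4.0, r_pitot=1.0,
                   q=0.01, x0=0.0, p0=10.0):
    """Fuse two noisy wind-speed measurement streams with a scalar Kalman filter.

    virtual_wind, pitot_wind: sequences of wind-speed measurements (m/s),
    one from a low-fidelity tail-sitter model, one from the pitot tube.
    r_virtual, r_pitot: assumed measurement noise variances.
    q: process noise variance (wind drifts slowly between steps).
    Returns the fused wind-speed estimate after the final step.
    """
    x, p = x0, p0
    for zv, zp in zip(virtual_wind, pitot_wind):
        # Predict: random-walk wind model.
        p += q
        # Update with each measurement in turn.
        for z, r in ((zv, r_virtual), (zp, r_pitot)):
            k = p / (p + r)          # Kalman gain
            x += k * (z - x)         # state update
            p *= (1.0 - k)           # covariance update
    return x
```

Giving the pitot stream the smaller variance makes the filter trust it more, while the virtual measurement keeps the estimate usable in flight phases where the pitot reading is unreliable.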

19 pages, 7788 KiB  
Article
Research on Outdoor Navigation of Intelligent Wheelchair Based on a Novel Layered Cost Map
by Jianwei Cui, Siji Yu, Yucheng Shang, Yuxiang Dai and Wenyi Zhang
Actuators 2025, 14(2), 46; https://doi.org/10.3390/act14020046 - 22 Jan 2025
Viewed by 386
Abstract
With the aging of the population and the increase in the number of people with disabilities, intelligent wheelchairs are essential to improving travel autonomy and quality of life. In this paper, we propose an autonomous outdoor navigation framework for intelligent wheelchairs based on hierarchical cost maps to address the challenges of wheelchair navigation in complex and dynamic outdoor environments. First, the framework integrates multiple sensors, such as RTK high-precision GPS, an IMU, and 3D LiDAR; fuses RTK, IMU, and odometer data to achieve high-precision positioning; and performs path planning and obstacle avoidance through dynamic hierarchical cost maps. Second, a drivable-area layer is added to the traditional hierarchical cost map, in which the drivable-area detection algorithm uses local plane fitting and elevation-difference analysis to achieve efficient ground point cloud segmentation and real-time updating, ensuring navigation safety in real time. The approach is validated in real outdoor scenes and simulation environments. The results show that drivable-area detection takes about 30 ms, outdoor positioning accuracy is better than 10 cm, and active obstacle avoidance is triggered at a distance of 1 m. This study provides an effective solution for autonomous navigation of intelligent wheelchairs in complex outdoor environments, with high robustness and application potential. Full article
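The elevation-difference idea behind the drivable-area layer can be sketched as a per-cell test on a gridded point cloud. This is a minimal illustration, not the paper's algorithm: the thresholds, grid size, and function name are assumptions.

```python
import numpy as np

def drivable_cells(points, cell=0.25, max_spread=0.05, max_height=0.15):
    """Label grid cells of a point cloud as drivable by elevation difference.

    points: (N, 3) array of x, y, z coordinates (z up).
    A cell is drivable when the spread between its highest and lowest
    point is small (flat) and its lowest point sits near the ground level.
    Returns a dict mapping (ix, iy) cell index -> bool (drivable).
    Thresholds are illustrative assumptions, not values from the paper.
    """
    labels = {}
    idx = np.floor(points[:, :2] / cell).astype(int)
    ground = points[:, 2].min()
    for key in map(tuple, np.unique(idx, axis=0)):
        z = points[(idx == key).all(axis=1), 2]
        spread = z.max() - z.min()
        labels[key] = bool(spread < max_spread and (z.min() - ground) < max_height)
    return labels
```

A real implementation would fit a local plane per cell rather than use raw min/max heights, which is what makes the paper's version robust on sloped ground.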
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)

21 pages, 9794 KiB  
Article
Research on a Density-Based Clustering Method for Eliminating Inter-Frame Feature Mismatches in Visual SLAM Under Dynamic Scenes
by Zhiyong Yang, Kun Zhao, Shengze Yang, Yuhong Xiong, Changjin Zhang, Lielei Deng and Daode Zhang
Sensors 2025, 25(3), 622; https://doi.org/10.3390/s25030622 - 22 Jan 2025
Viewed by 523
Abstract
Visual SLAM relies on the motion information of static feature points in keyframes for both localization and map construction. Dynamic feature points interfere with inter-frame motion pose estimation, thereby affecting the accuracy of map construction and the overall robustness of the visual SLAM system. To address this issue, this paper proposes a method for eliminating feature mismatches between frames in visual SLAM under dynamic scenes. First, a spatial clustering-based RANSAC method is introduced. This method eliminates mismatches by leveraging the distribution of dynamic and static feature points, clustering the points, and separating dynamic from static clusters, retaining only the static clusters to generate a high-quality dataset. Next, RANSAC is applied to fit the geometric model of feature matches, eliminating local mismatches in the high-quality dataset with fewer iterations. The accuracy of the DSSAC-RANSAC method in eliminating feature mismatches between frames is then tested on both indoor and outdoor dynamic datasets, and the robustness of the proposed algorithm is further verified on self-collected outdoor datasets. Experimental results demonstrate that the proposed algorithm reduces the average reprojection error by 58.5% and 49.2%, respectively, compared to traditional RANSAC and GMS-RANSAC methods. The reprojection error variance is reduced by 65.2% and 63.0%, while the processing time is reduced by 69.4% and 31.5%, respectively. Finally, the proposed algorithm is integrated into the initialization thread of ORB-SLAM2 and the tracking thread of ORB-SLAM3 to validate its effectiveness in eliminating feature mismatches between frames in visual SLAM. Full article
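The core idea of rejecting dynamic matches with RANSAC can be shown on inter-frame displacement vectors: assuming the static background induces one dominant displacement, matches far from it are treated as dynamic or mismatched. This toy one-point RANSAC is a simplified stand-in for DSSAC-RANSAC, not the paper's actual algorithm; all names and thresholds are assumptions.

```python
import numpy as np

def ransac_static_matches(disp, iters=200, tol=2.0, seed=0):
    """Toy RANSAC over inter-frame feature-match displacements.

    disp: (N, 2) pixel displacement of each feature match between frames.
    Repeatedly hypothesizes the background motion from one random match
    and keeps the hypothesis with the most inliers.
    Returns a boolean inlier (presumed-static) mask.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(disp), dtype=bool)
    for _ in range(iters):
        model = disp[rng.integers(len(disp))]            # 1-point hypothesis
        inliers = np.linalg.norm(disp - model, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Pre-filtering with density clustering, as the paper does, shrinks the sample pool so RANSAC needs fewer iterations to hit an all-static hypothesis.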

22 pages, 5549 KiB  
Article
A Proposal of In Situ Authoring Tool with Visual-Inertial Sensor Fusion for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Yohanes Yohanie Fridelin Panduman, Mustika Mentari, Yan Watequlis Syaifudin and Alfiandi Aulia Rahmadani
Electronics 2025, 14(2), 342; https://doi.org/10.3390/electronics14020342 - 17 Jan 2025
Viewed by 528
Abstract
In location-based augmented reality (LAR) applications, a simple and effective authoring tool is essential to create immersive AR experiences in real-world contexts. Unfortunately, most current tools are primarily desktop-based, requiring manual location acquisition, software development kits (SDKs), and strong programming skills, which poses significant challenges for novice developers and leads to imprecise LAR content alignment. In this paper, we propose an intuitive in situ authoring tool with visual-inertial sensor fusion to simplify creating and storing LAR content directly on a smartphone at the point of interest (POI). The tool localizes the user's position using smartphone sensors and maps it with the captured smartphone movement and the surrounding environment data in real time. Thus, an AR developer can place a virtual object on-site intuitively without complex programming. By leveraging the combined capabilities of Visual Simultaneous Localization and Mapping (VSLAM) and Google Street View (GSV), the tool enhances localization and mapping accuracy during AR object creation. For evaluation, we conducted extensive user testing with 15 participants, assessing the task success rate and completion time of the tool in practical pedestrian navigation scenarios. The Handheld Augmented Reality Usability Scale (HARUS) was used to evaluate overall user satisfaction. The results showed that all participants successfully completed the tasks, taking 16.76 s on average to create one AR object in a 50 m radius area, while common desktop-based methods in the literature need 1–8 min on average, depending on the user's expertise. Usability scores reached 89.44 for manipulability and 85.14 for comprehensibility, demonstrating high effectiveness in simplifying the outdoor LAR content creation process. Full article

23 pages, 12001 KiB  
Article
Enhancing Off-Road Topography Estimation by Fusing LIDAR and Stereo Camera Data with Interpolated Ground Plane
by Gustav Sten, Lei Feng and Björn Möller
Sensors 2025, 25(2), 509; https://doi.org/10.3390/s25020509 - 16 Jan 2025
Viewed by 492
Abstract
Topography estimation is essential for autonomous off-road navigation. Common methods rely on point cloud data from, e.g., Light Detection and Ranging sensors (LIDARs) and stereo cameras. Stereo cameras produce dense point clouds with larger coverage but lower accuracy. LIDARs, on the other hand, have higher accuracy and longer range but much less coverage, and they are more expensive. The research question examines whether incorporating LIDAR can significantly improve stereo camera accuracy. Current sensor fusion methods use LIDARs' raw measurements directly; thus, the improvement in estimation accuracy is limited to LIDAR-scanned locations. The main contribution of our new method is to construct a reference ground plane through interpolation of LIDAR data so that the interpolated maps have similar coverage to the stereo camera's point cloud. The interpolated maps are fused with the stereo camera point cloud via Kalman filters to improve a larger section of the topography map. The method is tested in three environments: controlled indoor, semi-controlled outdoor, and unstructured terrain. Compared to the existing method without LIDAR interpolation, the proposed approach reduces average error by 40% in the controlled environment and 67% in the semi-controlled environment, while maintaining large coverage. The unstructured environment evaluation confirms its corrective impact. Full article
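The per-cell fusion of the stereo height map with the interpolated-LiDAR map can be sketched as an independent scalar Kalman update per grid cell. This is a minimal illustration under the assumption of per-cell variance maps; the function name and numbers are not from the paper.

```python
import numpy as np

def fuse_height_maps(stereo_h, stereo_var, lidar_h, lidar_var):
    """Fuse a stereo height map with an interpolated-LiDAR height map.

    Each cell is treated as an independent scalar state: a one-step
    Kalman update folds the (more accurate) LiDAR-derived estimate into
    the stereo estimate, weighting by per-cell variances.
    Inputs are same-shape arrays; returns (fused_h, fused_var).
    """
    gain = stereo_var / (stereo_var + lidar_var)     # Kalman gain per cell
    fused_h = stereo_h + gain * (lidar_h - stereo_h)
    fused_var = (1.0 - gain) * stereo_var            # variance shrinks after fusion
    return fused_h, fused_var
```

Because interpolation gives the LiDAR map stereo-like coverage, this update can correct the whole map rather than only the scanned stripes.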

16 pages, 3567 KiB  
Article
Research on Lightweight Algorithm Model for Precise Recognition and Detection of Outdoor Strawberries Based on Improved YOLOv5n
by Xiaoman Cao, Peng Zhong, Yihao Huang, Mingtao Huang, Zhengyan Huang, Tianlong Zou and He Xing
Agriculture 2025, 15(1), 90; https://doi.org/10.3390/agriculture15010090 - 2 Jan 2025
Viewed by 624
Abstract
When picking strawberries outdoors, factors such as lighting changes, obstacle occlusion, and small detection targets lead to poor strawberry recognition accuracy and low recognition rates. An improved YOLOv5n high-precision strawberry recognition algorithm is proposed. The algorithm uses FasterNet to replace the original YOLOv5n backbone network, improving the detection rate. The MobileViT attention mechanism module is added to improve the feature extraction ability for small target objects, giving the model higher detection accuracy with a smaller module size. The CBAM hybrid attention module and C2f module are introduced to improve the feature expression ability of the neural network, enrich the gradient flow information, and improve the performance and accuracy of the model. The SPPELAN module is added as well to improve the model's detection efficiency for small objects. The experimental results show that the detection accuracy of the improved model is 98.94%, the recall rate is 99.12%, the model volume is 53.22 MB, and the mAP value is 99.43%. Compared with the original YOLOv5n, the detection accuracy increased by 14.68% and the recall rate by 11.37%. This technology achieves accurate detection and identification of strawberries under complex outdoor conditions and provides a theoretical basis for accurate outdoor identification and precise picking technology. Full article

23 pages, 14524 KiB  
Article
Everyday-Carry Equipment Mapping: A Portable and Low-Cost Method for 3D Digital Documentation of Architectural Heritage by Integrated iPhone and Microdrone
by Nan Zhang and Xijian Lan
Buildings 2025, 15(1), 89; https://doi.org/10.3390/buildings15010089 - 30 Dec 2024
Viewed by 650
Abstract
Mapping constitutes a critical component of architectural heritage research, providing the groundwork for both conservation and utilization efforts. Three-dimensional (3D) digital documentation represents a prominent form of mapping in the contemporary era, and its value is widely recognized. However, cost and portability constraints often limit its widespread use in routine research and conservation initiatives. This study proposes a cost-effective and portable approach to 3D digital documentation, employing everyday-carry (EDC) equipment, the iPhone 15 Pro and DJI Mini 4 Pro, for data acquisition in architectural heritage. The workflow was subsequently optimized, and the datasets from the iPhone-LiDAR and microdrone were seamlessly integrated, resulting in an integrated 3D digital model of both the indoor and outdoor spaces of the architectural heritage site. The model demonstrated an overall relative error of 4.93%, achieving centimeter-level accuracy, precise spatial alignment between indoor and outdoor sections, clear and smooth texture mapping, high visibility, and suitability for digital display applications. This optimized workflow leverages the strengths of both EDC equipment types while addressing the limitations identified in prior studies. Full article

13 pages, 5669 KiB  
Article
Optimization of Video Surveillance System Deployment Based on Space Syntax and Deep Reinforcement Learning
by Bingchan Li and Chunguo Li
Electronics 2025, 14(1), 38; https://doi.org/10.3390/electronics14010038 - 26 Dec 2024
Viewed by 362
Abstract
With the widespread deployment of video surveillance devices, a large number of indoor and outdoor places are under the coverage of cameras, which plays a significant role in enhancing regional safety management and hazard detection. However, a vast number of cameras lead to high installation, maintenance, and analysis costs. At the same time, low-quality images and potential blind spots in key areas prevent the video system from being used to full effect. This paper proposes an optimization method for video surveillance system deployment based on space syntax analysis and deep reinforcement learning. First, space syntax is used to calculate the connectivity value, control value, depth value, and integration of the surveillance area. Combined with visibility and axial analysis results, a weighted index grid map of the area's surveillance importance is constructed. This index describes the importance of video coverage at a given point in the area. Based on this index map, a deep reinforcement learning network based on DQN (Deep Q-Network) is proposed to optimize the placement positions and angles for a given number of cameras in the area. Experiments show that the proposed framework, integrating space syntax and deep reinforcement learning, effectively improves video system coverage efficiency and allows for quick adjustment and refinement of camera placement by manually setting parameters for specific areas. Compared to existing coverage-first or experience-based optimization, the proposed method demonstrates significant performance and efficiency advantages. Full article
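One way such a DQN environment might score a camera-placement action is by the importance-weighted area that becomes newly covered, using the space-syntax importance grid. The scoring below is an illustrative assumption, not the paper's exact reward function.

```python
import numpy as np

def coverage_reward(importance, covered_before, new_cam_mask):
    """Reward for placing one camera over a surveillance-importance grid.

    importance: 2-D grid of importance weights (e.g. from space-syntax
    connectivity, control, depth, and integration analysis).
    covered_before: boolean grid of cells already seen by placed cameras.
    new_cam_mask: boolean grid of cells the candidate camera would see.
    The reward is the importance-weighted newly covered area, pushing the
    agent toward important, still-uncovered regions.
    """
    newly_covered = new_cam_mask & ~covered_before
    return float((importance * newly_covered).sum())
```

Cells already covered contribute nothing, so the agent is implicitly discouraged from stacking cameras on the same hot spot.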
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence)

24 pages, 7396 KiB  
Article
Smoke Detection Transformer: An Improved Real-Time Detection Transformer Smoke Detection Model for Early Fire Warning
by Baoshan Sun and Xin Cheng
Fire 2024, 7(12), 488; https://doi.org/10.3390/fire7120488 - 23 Dec 2024
Viewed by 750
Abstract
As one of the important features in the early stage of a fire, smoke, when detected, can provide an earlier warning and thus help suppress the spread of the fire in time. However, smoke features are not apparent: the shape of smoke is not fixed, and it is easily confused with the outdoor background, which makes smoke difficult to detect. Therefore, this study proposes a model called Smoke Detection Transformer (Smoke-DETR) for smoke detection, based on the Real-Time Detection Transformer (RT-DETR). Considering the limited computational resources of smoke detection devices, Enhanced Channel-wise Partial Convolution (ECPConv) is introduced to reduce the number of parameters and the amount of computation. This approach improves Partial Convolution (PConv) with a selection strategy that picks the channels containing more information for each convolution, increasing the network's ability to learn smoke features. To cope with smoke images with inconspicuous features and irregular shapes, the Efficient Multi-Scale Attention (EMA) module is used to strengthen the feature extraction capability of the backbone network. Additionally, to overcome the problem of smoke being easily confused with the background, the Multi-Scale Foreground-Focus Fusion Pyramid Network (MFFPN) is designed to strengthen the model's attention to the foreground, improving detection accuracy when smoke is poorly differentiated from the background. Experimental results demonstrate that Smoke-DETR achieves significant improvements in smoke detection. On the self-built dataset, compared to RT-DETR, precision reaches 86.2% (up 3.6 percentage points), recall reaches 80% (up 3.6 percentage points), mAP50 reaches 86.2% (up 3.8 percentage points), and mAP50-95 reaches 53.9% (up 3.6 percentage points). Full article

20 pages, 6270 KiB  
Article
Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping
by Eun-Seok Park, Saba Arshad and Tae-Hyoung Park
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199 - 22 Dec 2024
Viewed by 645
Abstract
Handheld LiDAR scanners, which typically consist of a LiDAR sensor, Inertial Measurement Unit, and processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors' extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The research contributions are twofold. First, we present a robust plane detection method for LiDAR data. This plane detection method removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Second, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods. Full article
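The core fitting step inside any such plane-detection pipeline is a least-squares plane fit. The sketch below shows only that core via SVD; a robust detector like the paper's would wrap it in sampling and residual filtering. The function name is an assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit for LiDAR points via SVD.

    points: (N, 3) array. Returns (unit normal n, offset d) with
    n . p + d = 0 for points p on the plane. The normal is the direction
    of least variance of the centered point set.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # smallest-singular-value direction
    return normal, -normal @ centroid
```

Residuals `n . p + d` against this fit are what a detector thresholds to reject points blurred by scanner motion.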
(This article belongs to the Section Sensors and Robotics)

20 pages, 11605 KiB  
Article
GeometryFormer: Semi-Convolutional Transformer Integrated with Geometric Perception for Depth Completion in Autonomous Driving Scenes
by Siyuan Su and Jian Wu
Sensors 2024, 24(24), 8066; https://doi.org/10.3390/s24248066 - 18 Dec 2024
Viewed by 421
Abstract
Depth completion is widely employed in Simultaneous Localization and Mapping (SLAM) and Structure from Motion (SfM), which are of great significance to the development of autonomous driving. Recently, methods based on the fusion of vision transformers (ViT) and convolution have brought accuracy to a new level. However, two shortcomings remain. On the one hand, to address ViT's poor performance on fine details, this paper proposes a semi-convolutional vision transformer to optimize local continuity and designs a geometric perception module that learns the positional correlation and geometric features of sparse points in three-dimensional space, perceiving the geometric structures in depth maps to improve the recovery of edges and transparent areas. On the other hand, previous methods implement single-stage fusion that directly concatenates or adds the outputs of ViT and convolution, resulting in incomplete fusion of the two, especially in complex outdoor scenes, which generates many outliers and ripples. This paper proposes a novel double-stage fusion strategy, applying learnable confidence after self-attention to flexibly learn the weight of local features. Our network achieves state-of-the-art (SoTA) performance on the NYU-Depth-v2 Dataset and the KITTI Depth Completion Dataset. Notably, the root mean square error (RMSE) of our model on the NYU-Depth-v2 Dataset is 87.9 mm, currently the best among all algorithms. At the end of the article, we also verify the generalization ability in real road scenes. Full article
(This article belongs to the Section Remote Sensors)

26 pages, 2585 KiB  
Article
Depth Prediction Improvement for Near-Field iToF Lidar in Low-Speed Motion State
by Mena Nagiub, Thorsten Beuth, Ganesh Sistu, Heinrich Gotzig and Ciarán Eising
Sensors 2024, 24(24), 8020; https://doi.org/10.3390/s24248020 - 16 Dec 2024
Viewed by 640
Abstract
Current deep learning-based phase unwrapping techniques for iToF Lidar sensors focus mainly on static indoor scenarios, ignoring motion blur in dynamic outdoor scenarios. Our paper proposes a two-stage semi-supervised method to unwrap ambiguous depth maps affected by motion blur in dynamic outdoor scenes. The method trains on static datasets to learn unwrapped depth map prediction and then adapts to dynamic datasets using continuous learning methods. Additionally, blind deconvolution is introduced to mitigate the blur. The combined use of these methods produces high-quality depth maps with reduced blur noise. Full article
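The phase-ambiguity problem the paper's network resolves can be stated in closed form: an iToF sensor measures depth modulo an ambiguity distance, and unwrapping means choosing the integer wrap count. The sketch below shows the classic prior-based resolution, not the paper's learned method; names are assumptions.

```python
def unwrap_depth(wrapped, coarse, d_amb):
    """Resolve iToF wrapped depth with a coarse depth prior.

    wrapped: measured depth in [0, d_amb) recovered from the iToF phase.
    coarse: rough depth estimate (e.g. a network prediction).
    d_amb: ambiguity distance set by the modulation frequency.
    Picks the integer wrap count k that best matches the prior, so the
    returned depth is wrapped + k * d_amb.
    """
    k = round((coarse - wrapped) / d_amb)
    return wrapped + max(k, 0) * d_amb
```

Motion blur corrupts `wrapped` near object edges, which is why the paper pairs unwrapping with blind deconvolution rather than relying on this formula alone.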
(This article belongs to the Collection Navigation Systems and Sensors)

16 pages, 6772 KiB  
Article
Cartographic Visualisation of Light Pollution Measurements
by Mieczysław Kunz and Dominika Daab
Urban Sci. 2024, 8(4), 254; https://doi.org/10.3390/urbansci8040254 - 16 Dec 2024
Viewed by 655
Abstract
The light pollution of the night sky is already a widespread phenomenon, the spatial extent and magnitude of which are increasingly represented in the form of thematic maps and cartographic visualization. Its leading cause is incorrectly designed or improperly installed outdoor lighting. The problem of excessive artificial light emission at night, together with its adverse effects, has reached such a level that it has become necessary to develop usable and comprehensible methods for the cartographic representation of the distribution of the phenomenon. In practice, there are several ways to measure the intensity of this pollution. However, there are no uniform legal standards for the use of outdoor lighting and no guidance or guidelines for the visualization of measurement data. Such visualization should provide a consistent, reliable, and, above all, readable picture of the phenomenon, adapted to the needs of different audiences. Examples of the representation of the results of measurements of light pollution of the night sky can be found in the literature or in a few atlases. Still, they often differ in the color scales, value divisions, and measurement units used. This paper reviews the scales and units available in the literature to describe this phenomenon. The differences between the approaches of specialists from different branches and their influence on the final interpretation of the data are also presented. In addition, an original solution is proposed to standardize methods of cartographic visualization of the spatial distribution of light smog measurement results. The article draws attention to the importance of the graphical description of light smog, which will soon be the subject of increasing research and work on the unification of cartographic communication. Full article

12 pages, 3041 KiB  
Article
High-Spatial Resolution Maps of PM2.5 Using Mobile Sensors on Buses: A Case Study of Teltow City, Germany, in the Suburb of Berlin, 2023
by Jean-Baptiste Renard, Günter Becker, Marc Nodorft, Ehsan Tavakoli, Leroy Thiele, Eric Poincelet, Markus Scholz and Jérémy Surcin
Atmosphere 2024, 15(12), 1494; https://doi.org/10.3390/atmos15121494 - 15 Dec 2024
Viewed by 744
Abstract
Air quality monitoring networks regulated by law provide accurate but sparse measurements of PM2.5 mass concentrations. High-spatial-resolution maps of PM2.5 mass concentration values are necessary to better estimate citizens' exposure to outdoor air pollution and its sanitary consequences. To address this, a field campaign was conducted in Teltow, a midsize city southwest of Berlin, Germany, for the 2021–2023 period. A network of optical sensors deployed by Pollutrack included fixed monitoring stations as well as mobile sensors mounted on the roofs of buses and cars. This setup provides PM2.5 pollution maps with a spatial resolution down to 100 m on the main roads. The reliability of Pollutrack measurements was first established by comparison with measurements from the German Environment Agency (UBA) and modelling calculations based on high-resolution weather forecasts. Using these validated data, maps were generated for 2023, highlighting the mean PM2.5 mass concentrations and the number of days per year above 15 µg·m−3 (the daily maximum recommended by the World Health Organization (WHO) in 2021). The findings indicate that PM2.5 levels in Teltow are generally in the good-to-moderate range. The higher values (hot spots) are detected mainly along the highways and motorways, where traffic speeds are higher compared to inner-city roads. Also, the PM2.5 mass concentrations are higher on the street than on the sidewalks. The results were further compared to those for the city of Paris, France, obtained using the same methodology. The observed parallels between the two datasets underscore the strong correlation between traffic density and PM2.5 concentrations. Finally, the study discusses the advantages of integrating such high-resolution sensor networks with modelling approaches to enhance the understanding of localized PM2.5 variability and to better evaluate public exposure to air pollution. Full article
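The exceedance maps described above reduce to a simple per-cell count over daily means. A minimal sketch, assuming gridded daily-mean data (the array layout and function name are not from the paper):

```python
import numpy as np

def days_above_threshold(daily_pm25, threshold=15.0):
    """Count days per grid cell exceeding a daily PM2.5 guideline.

    daily_pm25: array of shape (days, rows, cols) of daily-mean PM2.5
    in micrograms per cubic meter; 15 is the WHO 2021 daily guideline
    used in the study. Returns an integer (rows, cols) exceedance map.
    """
    return (np.asarray(daily_pm25) > threshold).sum(axis=0)
```

Plotting this count instead of the annual mean highlights hot spots where pollution is episodic rather than chronic.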
(This article belongs to the Special Issue Cutting-Edge Developments in Air Quality and Health)

30 pages, 11752 KiB  
Article
Optimizing Outdoor Micro-Space Design for Prolonged Activity Duration: A Study Integrating Rough Set Theory and the PSO-SVR Algorithm
by Jingwen Tian, Zimo Chen, Lingling Yuan and Hongtao Zhou
Buildings 2024, 14(12), 3950; https://doi.org/10.3390/buildings14123950 - 12 Dec 2024
Viewed by 616
Abstract
This study proposes an optimization method based on Rough Set Theory (RST) and Particle Swarm Optimization–Support Vector Regression (PSO-SVR), aimed at enhancing the emotional dimension of outdoor micro-space (OMS) design, thereby improving users’ outdoor activity duration preferences and emotional experiences. OMS, as a key element in modern urban design, significantly enhances residents’ quality of life and promotes public health. Accurately understanding and predicting users’ emotional needs is the core challenge in optimizing OMS. In this study, the Kansei Engineering (KE) framework is applied, using fuzzy clustering to reduce the dimensionality of emotional descriptors, while RST is employed for attribute reduction to select five key design features that influence users’ emotions. Subsequently, the PSO-SVR model is applied to establish the nonlinear mapping relationship between these design features and users’ emotions, predicting the optimal configuration of OMS design. The results indicate that the optimized OMS design significantly enhances users’ intention to stay in the space, as reflected by higher ratings for emotional descriptors and increased preferences for longer outdoor activity duration, all exceeding the median score of the scale. Additionally, comparative analysis shows that the PSO-SVR model outperforms traditional methods (e.g., BPNN, RF, and SVR) in terms of accuracy and generalization for predictions. These findings demonstrate that the proposed method effectively improves the emotional performance of OMS design and offers a solid optimization framework along with practical guidance for future urban public space design. The innovative contribution of this study lies in the proposed data-driven optimization method that integrates machine learning and KE. This method not only offers a new theoretical perspective for OMS design but also establishes a scientific framework to accurately incorporate users’ emotional needs into the design process. The method contributes new knowledge to the field of urban design, promotes public health and well-being, and provides a solid foundation for future applications in different urban environments. Full article
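The PSO half of PSO-SVR searches the design-feature space for the configuration that maximizes the model-predicted rating. The sketch below is a minimal particle swarm with a pluggable score function standing in for the trained SVR; hyperparameters and names are common defaults, not the study's settings.

```python
import numpy as np

def pso_maximize(score, dim, bounds, n=20, iters=100, seed=0):
    """Minimal particle swarm search over a bounded feature space.

    score: callable mapping a feature vector to a predicted rating
    (in PSO-SVR this would be the trained SVR model).
    bounds: (low, high) arrays clipping each of the dim features.
    Returns the best feature vector found.
    """
    rng = np.random.default_rng(seed)
    low, high = bounds
    x = rng.uniform(low, high, (n, dim))          # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_val = x.copy(), np.array([score(p) for p in x])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # Inertia plus attraction to personal and global bests.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, low, high)
        vals = np.array([score(p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()]
    return gbest
```

Because the SVR surrogate is cheap to evaluate, the swarm can afford thousands of evaluations that would be impossible with real user studies.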
(This article belongs to the Special Issue Art and Design for Healing and Wellness in the Built Environment)
