Search Results (63)

Search Parameters:
Keywords = hardware optimisation

27 pages, 482 KiB  
Article
Domain Specific Abstractions for the Development of Fast-by-Construction Dataflow Codes on FPGAs
by Nick Brown
Chips 2024, 3(4), 334-360; https://doi.org/10.3390/chips3040017 - 4 Oct 2024
Viewed by 486
Abstract
FPGAs are popular in many fields but have yet to gain wide acceptance for accelerating HPC codes. A major cause is that whilst the growth of High-Level Synthesis (HLS), enabling the use of C or C++, has increased accessibility, without widespread algorithmic changes these tools only provide correct-by-construction rather than fast-by-construction programming. The fundamental issue is that HLS presents a Von Neumann-based execution model that is poorly suited to FPGAs, resulting in a significant disconnect between HLS’s language semantics and how experienced FPGA programmers structure dataflow algorithms to exploit hardware. We have developed the high-level language Lucent which builds on principles previously developed for programming general-purpose dataflow architectures. Using Lucent as a vehicle, in this paper we explore appropriate abstractions for developing application-specific dataflow machines on reconfigurable architectures. The result is an approach enabling fast-by-construction programming for FPGAs, delivering competitive performance against hand-optimised HLS codes whilst significantly enhancing programmer productivity. Full article

22 pages, 5463 KiB  
Article
A ROS2-Based Gateway for Modular Hardware Usage in Heterogeneous Environments
by Rúben Carreira, Nuno Costa, João Ramos, Luís Frazão and António Pereira
Sensors 2024, 24(19), 6341; https://doi.org/10.3390/s24196341 - 30 Sep 2024
Viewed by 631
Abstract
The rise of robotics and the Internet of Things (IoT) could potentially represent a significant shift towards a more integrated and automated future, where the physical and digital domains may merge. However, the integration of these technologies presents certain challenges, including compatibility issues with existing systems and the need for greater interoperability between different devices. It would seem that the rigidity of traditional robotic designs may inadvertently make these difficulties worse, which in turn highlights the potential benefits of modular solutions. Furthermore, the mastery of new technologies may introduce additional complexity due to the varying approaches taken by robot manufacturers. In order to address these issues, this research proposes a Robot Operating System (ROS2)-based middleware, called the “ROS2-based gateway”, which aims to simplify the integration of robots in different environments. By focusing on the payload layer and enabling external communication, this middleware has the potential to enhance modularity and interoperability, thus accelerating the integration process. It offers users the option of selecting payloads and communication methods via a shell interface, which the middleware then configures, ensuring adaptability. The solution proposed in this article, based on the gateway concept, offers users and programmers the flexibility to specify which payloads they want to activate depending on the task at hand and the high-level protocols they wish to use to interact with the activated payloads. This approach allows for the optimisation of hardware resources (only the necessary payloads are activated), as well as enabling the programmer/user to utilise high-level communication protocols (such as RESTful, Kafka, etc.) to interact with the activated payloads, rather than low-level programming. Full article
(This article belongs to the Section Sensors and Robotics)

28 pages, 2660 KiB  
Article
Development and Evaluation of Training Scenarios for the Use of Immersive Assistance Systems
by Maximilian Rosilius, Lukas Hügel, Benedikt Wirsing, Manuel Geuen, Ingo von Eitzen, Volker Bräutigam and Bernd Ludwig
Appl. Syst. Innov. 2024, 7(5), 73; https://doi.org/10.3390/asi7050073 - 26 Aug 2024
Viewed by 828
Abstract
Emerging assistance systems are designed to enable operators to perform tasks better, faster, and with a lower workload. However, in line with the productivity paradox, the full potential of automation and digitalisation is not being realised. One reason for this is insufficient training. In this study, the statistically significant differences among three different training scenarios on performance, acceptance, workload, and technostress during the execution of immersive measurement tasks are demonstrated. A between-subjects design was applied and analysed using ANOVAs involving 52 participants (with a statistical overall power of 0.92). The ANOVAs were related to three levels of the independent variable: quality training, manipulated as minimal, personal, and optimised training. The results show that the quality of training significantly influences immersive assistance systems. Hence, this article deduces tangible design guidelines for training, with consideration of the system-level hardware, operational system, and immersive application. Surprisingly, an appropriate mix of training approaches, rather than detailed, personalised training, appears to be more effective than e-learning or ‘getting started’ tools for immersive systems. In contrast to most studies in the related work, our article is not about learning with AR applications but about training scenarios for the use of immersive systems. Full article

33 pages, 1785 KiB  
Article
Sustainable Machine Vision for Industry 4.0: A Comprehensive Review of Convolutional Neural Networks and Hardware Accelerators in Computer Vision
by Muhammad Hussain
AI 2024, 5(3), 1324-1356; https://doi.org/10.3390/ai5030064 - 1 Aug 2024
Viewed by 1818
Abstract
As manifestations of Industry 4.0 become visible across various applications, one key and opportune area of development is quality inspection processes and defect detection. Over the last decade, computer vision architectures, in particular object detectors, have received increasing attention from the research community due to their localisation advantage over image classification. However, for these architectural advancements to provide tangible solutions, they must be optimised with respect to the target hardware along with the deployment environment. To this effect, this survey provides an in-depth review of the architectural progression of image classification and object detection architectures with a focus on advancements within Artificially Intelligent accelerator hardware. This will provide readers with an understanding of the present state of architecture–hardware integration within the computer vision discipline. The review also provides examples of the industrial implementation of computer vision architectures across various domains, from the detection of fabric defects to pallet racking inspection. The survey highlights the need for representative hardware-benchmarked datasets for providing better performance comparisons along with envisioning object detection as the primary domain where more research efforts would be focused over the next decade.
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)

15 pages, 2218 KiB  
Review
A Survey on Neuromorphic Architectures for Running Artificial Intelligence Algorithms
by Seham Al Abdul Wahid, Arghavan Asad and Farah Mohammadi
Electronics 2024, 13(15), 2963; https://doi.org/10.3390/electronics13152963 - 26 Jul 2024
Viewed by 1727
Abstract
Neuromorphic computing, a brain-inspired non-Von Neumann computing system, addresses the challenges posed by the Moore’s law memory wall phenomenon. It has the capability to enhance performance while maintaining power efficiency. Neuromorphic chip architecture requirements vary depending on the application and optimising it for large-scale applications remains a challenge. Neuromorphic chips are programmed using spiking neural networks which provide them with important properties such as parallelism, asynchronism, and on-device learning. Widely used spiking neuron models include the Hodgkin–Huxley Model, Izhikevich model, integrate-and-fire model, and spike response model. Hardware implementation platforms of the chip follow three approaches: analogue, digital, or a combination of both. Each platform can be implemented using various memory topologies which interconnect with the learning mechanism. Current neuromorphic computing systems typically use the unsupervised learning spike timing-dependent plasticity algorithms. However, algorithms such as voltage-dependent synaptic plasticity have the potential to enhance performance. This review summarises the potential neuromorphic chip architecture specifications and highlights which applications they are suitable for. Full article
(This article belongs to the Special Issue Neuromorphic Device, Circuits, and Systems)
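The spike timing-dependent plasticity (STDP) rule this survey refers to can be sketched in a few lines. The following is a generic pair-based textbook formulation; the amplitudes and the 20 ms time constant are illustrative defaults, not parameters of any particular neuromorphic chip:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pre-before-post (causal) pairs potentiate the synapse;
    post-before-pre (anti-causal) pairs depress it, with an
    exponential decay in the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:    # pre fired first: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fired first: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

assert stdp_dw(10.0, 15.0) > 0           # causal pair strengthens
assert stdp_dw(15.0, 10.0) < 0           # anti-causal pair weakens
assert abs(stdp_dw(10.0, 15.0)) < a_plus if (a_plus := 0.1) else True
```

Unsupervised learning emerges because the rule depends only on locally observable spike times, which is what makes it attractive for on-device learning in hardware.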

21 pages, 4849 KiB  
Article
Leak and Burst Detection in Water Distribution Network Using Logic- and Machine Learning-Based Approaches
by Kiran Joseph, Jyoti Shetty, Ashok K. Sharma, Rudi van Staden, P. L. P. Wasantha, Sharna Small and Nathan Bennett
Water 2024, 16(14), 1935; https://doi.org/10.3390/w16141935 - 9 Jul 2024
Viewed by 1494
Abstract
Urban water systems worldwide are confronted with the dual challenges of dwindling water resources and deteriorating infrastructure, emphasising the critical need to minimise water losses from leakage. Conventional methods for leak and burst detection often prove inadequate, leading to prolonged leak durations and heightened maintenance costs. This study investigates the efficacy of logic- and machine learning-based approaches in early leak detection and precise location identification within water distribution networks. By integrating hardware and software technologies, including sensor technology, data analysis, and study on the logic-based and machine learning algorithms, innovative solutions are proposed to optimise water distribution efficiency and minimise losses. In this research, we focus on a case study area in the Sunbury region of Victoria, Australia, evaluating a pumping main equipped with Supervisory Control and Data Acquisition (SCADA) sensor technology. We extract hydraulic characteristics from SCADA data and develop logic-based algorithms for leak and burst detection, alongside state-of-the-art machine learning techniques. These methodologies are applied to historical data initially and will be subsequently extended to live data, enabling the real-time detection of leaks and bursts. The findings underscore the complementary nature of logic-based and machine learning approaches. While logic-based algorithms excel in capturing straightforward anomalies based on predefined conditions, they may struggle with complex or evolving patterns. Machine learning algorithms enhance detection by learning from historical data, adapting to changing conditions, and capturing intricate patterns and outliers. The comparative analysis of machine learning models highlights the superiority of the local outlier factor (LOF) in anomaly detection, leading to its selection as the final model. 
Furthermore, a web-based platform has been developed for leak and burst detection using a selected machine learning model. The success of machine learning models over traditional logic-based approaches underscores the effectiveness of data-driven, probabilistic methods in handling complex data patterns and variations. Leveraging statistical and probabilistic techniques, machine learning models offer adaptability and superior performance in scenarios with intricate or dynamic relationships between variables. The findings demonstrate that the proposed methodology can significantly enhance the early detection of leaks and bursts, thereby minimising water loss and associated economic costs. The implications of this study are profound for the scientific community and stakeholders, as it provides a scalable and efficient solution for water pipeline monitoring. Implementing this approach can lead to more proactive maintenance strategies, ultimately contributing to the sustainability and resilience of urban water infrastructure systems. Full article
(This article belongs to the Special Issue Advances in Management of Urban Water Supply System)
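The local outlier factor (LOF) the study selects can be illustrated with a minimal pure-Python sketch. The toy flow readings, neighbourhood size, and score threshold below are hypothetical; the paper's actual pipeline runs on SCADA sensor data:

```python
def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] (excluding i)."""
    dists = sorted((abs(points[j] - points[i]), j)
                   for j in range(len(points)) if j != i)
    return [j for _, j in dists[:k]]

def k_distance(points, i, k):
    return abs(points[knn(points, i, k)[-1]] - points[i])

def reach_dist(points, i, j, k):
    # Reachability distance of point i from point j.
    return max(k_distance(points, j, k), abs(points[i] - points[j]))

def lrd(points, i, k):
    """Local reachability density of point i."""
    nbrs = knn(points, i, k)
    avg_reach = sum(reach_dist(points, i, j, k) for j in nbrs) / len(nbrs)
    return 1.0 / avg_reach if avg_reach > 0 else float("inf")

def lof(points, i, k=3):
    """LOF score: values well above 1 flag anomalies (possible bursts)."""
    nbrs = knn(points, i, k)
    own = lrd(points, i, k)
    return sum(lrd(points, j, k) for j in nbrs) / (len(nbrs) * own)

# Toy hourly flow readings: the last value is a burst-like spike.
flows = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 25.0]
scores = [lof(flows, i) for i in range(len(flows))]
assert scores[-1] == max(scores)  # the spike gets the highest LOF score
```

Because LOF compares each point's density to that of its neighbours, it adapts to locally varying baselines, which is plausibly why it outperformed fixed-threshold logic on evolving flow patterns.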

25 pages, 21044 KiB  
Article
Design and Implementation of a Hardware-in-the-Loop Air Load Simulation System for Testing Aerospace Actuators
by Alessandro Dell’Amico
Actuators 2024, 13(7), 238; https://doi.org/10.3390/act13070238 - 25 Jun 2024
Viewed by 1255
Abstract
This paper presents the design and implementation of the hardware and control strategies of an electrohydraulic air load simulation system for testing aerospace actuators. The system is part of an Iron Bird, which is an energy management research platform developed in collaboration between Saab AB and Linköping University. The purpose of the air load system is to provide realistic forces on the test object through the integration of a flight simulator for full mission evaluation. The challenge with electrohydraulic force control is tackled by increasing the hydraulic capacitance from increased load cylinder dead volumes, together with a feed-forward link based on accurate modelling of the test object and load system by adopting an optimisation routine to find model parameters. The system is implemented for both an electromechanical and servohydraulic actuator as test objects with different performance requirements. The control design is based on nonlinear and linear modelling of the system, and experimental test data are used to tune the models. Finally, test results of the air load system prove its force-tracking performance. Full article

27 pages, 10878 KiB  
Article
Reliability Assessment of Wireless Sensor Networks by Strain-Based Region Analysis for Redundancy Estimation in Measurements on the Example of an Aircraft Wing Box
by Sören Meyer zu Westerhausen, Gurubaran Raveendran, Thorben-Hendrik Lauth, Ole Meyer, Daniel Rosemann, Max Leo Wawer, Timo Stauß, Johanna Wurst and Roland Lachmayer
Sensors 2024, 24(13), 4107; https://doi.org/10.3390/s24134107 - 24 Jun 2024
Viewed by 674
Abstract
Wireless sensor networks (WSNs) are attracting increasing research interest due to their ability to monitor large areas independently. Their reliability is a crucial issue, as it is influenced by hardware, data, and energy-related factors such as loading conditions, signal attenuation, and battery lifetime. Proper selection of sensor node positions is essential to maximise system reliability during the development of products equipped with WSNs. For this purpose, this paper presents an approach to estimate WSN system reliability during the development phase based on the analysis of measurements, using strain measurements in finite element (FE) models as an example. The approach involves dividing the part under consideration into regions with similar strains using a region growing algorithm (RGA). The WSN configuration is then analysed for reliability based on data paths and measurement redundancy resulting from the sensor positions in the identified measuring regions. This methodology was tested on an exemplary WSN configuration at an aircraft wing box under bending load and found to effectively estimate the hardware perspective on system reliability. Therefore, the methodology and algorithm show potential for optimising sensor node positions to achieve better reliability results. Full article
(This article belongs to the Section Sensor Networks)
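The region growing step can be sketched as a flood fill over a strain field: starting from an unlabelled seed, neighbouring cells join the region while their strain stays within a tolerance of the seed value. The grid, strain values, and tolerance below are hypothetical illustrations, not the paper's FE wing-box model:

```python
def grow_regions(strain, tol=0.05):
    """Partition a 2-D strain field into regions of similar strain.

    strain: 2-D list of strain values on a regular grid.
    tol:    max allowed difference from the region's seed value.
    Returns a 2-D list of integer region labels.
    """
    rows, cols = len(strain), len(strain[0])
    labels = [[None] * cols for _ in range(rows)]
    region = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Grow from the unlabelled seed (r, c).
            seed, stack = strain[r][c], [(r, c)]
            labels[r][c] = region
            while stack:
                i, j = stack.pop()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and labels[ni][nj] is None
                            and abs(strain[ni][nj] - seed) <= tol):
                        labels[ni][nj] = region
                        stack.append((ni, nj))
            region += 1
    return labels

# Toy field: left half low strain, right half high strain.
field = [[0.10, 0.11, 0.30, 0.31],
         [0.10, 0.12, 0.29, 0.30]]
labels = grow_regions(field)
assert labels[0][0] == labels[1][1]   # low-strain cells share a region
assert labels[0][0] != labels[0][3]   # high-strain cells form another
```

Sensors placed in the same region then provide redundant measurements, which is what feeds the reliability estimate.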

16 pages, 4089 KiB  
Article
Simultaneous Velocity and Texture Classification from a Neuromorphic Tactile Sensor Using Spiking Neural Networks
by George Brayshaw, Benjamin Ward-Cherrier and Martin J. Pearson
Electronics 2024, 13(11), 2159; https://doi.org/10.3390/electronics13112159 - 1 Jun 2024
Viewed by 840
Abstract
The neuroTac, a neuromorphic visuo-tactile sensor that leverages the high temporal resolution of event-based cameras, is ideally suited to applications in robotic manipulators and prosthetic devices. In this paper, we pair the neuroTac with Spiking Neural Networks (SNNs) to achieve a movement-invariant neuromorphic tactile sensing method for robust texture classification. Alongside this, we demonstrate the ability of this approach to extract movement profiles from purely tactile data. Our systems achieve accuracies of 95% and 83% across their respective tasks (texture and movement classification). We then seek to reduce the size and spiking activity of our networks with the aim of deployment to edge neuromorphic hardware. This multi-objective optimisation investigation using Pareto frontiers highlights several design trade-offs, where high activity and large network sizes can both be reduced by up to 68% and 94% at the cost of slight decreases in accuracy (8%). Full article
(This article belongs to the Section Circuit and Signal Processing)

17 pages, 7510 KiB  
Article
Optimisation Challenge for a Superconducting Adiabatic Neural Network That Implements XOR and OR Boolean Functions
by Dmitrii S. Pashin, Marina V. Bastrakova, Dmitrii A. Rybin, Igor I. Soloviev, Nikolay V. Klenov and Andrey E. Schegolev
Nanomaterials 2024, 14(10), 854; https://doi.org/10.3390/nano14100854 - 14 May 2024
Cited by 1 | Viewed by 1271
Abstract
In this article, we consider designs of simple analog artificial neural networks based on adiabatic Josephson cells with a sigmoid activation function. A new approach based on the gradient descent method is developed to adjust the circuit parameters, allowing efficient signal transmission between the network layers. The proposed solution is demonstrated on the example of a system that implements XOR and OR logical operations.
(This article belongs to the Special Issue Neuromorphic Devices: Materials, Structures and Bionic Applications)

16 pages, 6667 KiB  
Article
A Temperature Prediction Model for Flexible Electronic Devices Based on GA-BP Neural Network and Experimental Verification
by Jin Nan, Jiayun Chen, Min Li, Yuhang Li, Yinji Ma and Xuanqing Fan
Micromachines 2024, 15(4), 430; https://doi.org/10.3390/mi15040430 - 23 Mar 2024
Cited by 2 | Viewed by 1374
Abstract
The problem that the thermal safety of flexible electronic devices is difficult to evaluate in real time is addressed in this study by establishing a BP neural network (GA-BPNN) temperature prediction model based on genetic algorithm optimisation. The model uses a BP neural network to fit the functional relationship between the input condition and the steady-state temperature of the equipment and uses a genetic algorithm to optimise the parameter initialisation problem of the BP neural network. To overcome the challenge of the high cost of obtaining experimental data, finite element analysis software is used to simulate the temperature results of the equipment under different working conditions. The prediction variance of the GA-BPNN model does not exceed 0.57 °C and has good robustness, as the model is trained according to the simulation data. The study conducted thermal validation experiments on the temperature prediction model for this flexible electronic device. The device reached steady state after 1200 s of operation at rated power. The error between the predicted and experimental results was less than 0.9 °C, verifying the validity of the model’s predictions. Compared with traditional thermal simulation and experimental methods, this model can quickly predict the temperature with a certain accuracy and has outstanding advantages in computational efficiency and integrated application of hardware and software. Full article
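The GA-BPNN idea — a genetic algorithm searching for weight initialisations that gradient descent then refines — can be sketched on a toy surrogate. Here a one-parameter-pair linear model fitting a hypothetical power-to-temperature relation stands in for the paper's BP network; the data, GA settings, and fitness are all illustrative assumptions:

```python
import random

random.seed(0)

# Toy data: steady-state temperature rises linearly with input power.
data = [(p, 20.0 + 1.5 * p) for p in range(1, 9)]

def train(w, b, steps=50, lr=0.002):
    """A few gradient-descent (the 'BP') steps on MSE; returns final loss."""
    for _ in range(steps):
        gw = gb = 0.0
        for p, t in data:
            err = (w * p + b) - t
            gw += 2 * err * p / len(data)
            gb += 2 * err / len(data)
        w, b = w - lr * gw, b - lr * gb
    return sum(((w * p + b) - t) ** 2 for p, t in data) / len(data)

def ga_init(pop=20, gens=30):
    """Evolve (w0, b0) initialisations whose trained loss is lowest."""
    pop_ = [(random.uniform(-5, 5), random.uniform(-5, 25))
            for _ in range(pop)]
    for _ in range(gens):
        pop_.sort(key=lambda g: train(*g))     # fitness = post-training loss
        elite = pop_[:pop // 2]                # keep the better half
        children = [(w + random.gauss(0, 0.5), b + random.gauss(0, 0.5))
                    for w, b in elite]         # mutate to explore
        pop_ = elite + children
    return pop_[0]

best = ga_init()
assert train(*best) < train(0.0, 0.0)  # evolved init beats a naive one
```

The point of the hybrid is that gradient descent converges slowly (or to poor minima) from arbitrary initialisations, while the GA's global search only needs to supply good starting points, not the final weights.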

25 pages, 5066 KiB  
Article
An Optimal Switching Sequence Model Predictive Control Scheme for the 3L-NPC Converter with Output LC Filter
by Felipe Herrera, Andrés Mora, Roberto Cárdenas, Matías Díaz, José Rodríguez and Marco Rivera
Processes 2024, 12(2), 348; https://doi.org/10.3390/pr12020348 - 6 Feb 2024
Viewed by 1239
Abstract
In some applications of microgrids and distributed generation, it is necessary to feed islanded or stand-alone loads with high-quality voltage to provide low total harmonic distortion (THD). To fulfil these demands, an LC filter is usually connected to the output terminals of power electronics converters. A cascaded voltage and current control loop with pulse-width modulation schemes is used to regulate the voltages and currents in these systems. However, these strategies have some drawbacks, particularly when multiple-input multiple-output (MIMO) plants are controlled using single-input single-output (SISO) design methods. This methodology usually produces a sluggish transient response and cross-coupling between different control loops and state variables. In this paper, a model predictive control (MPC) strategy based on the concept of optimal switching sequences (OSS) is designed to control voltage and current in an LC filter connected to a three-level neutral-point clamped converter. The strategy solves an optimisation problem to achieve control of the LC filter variables, i.e., currents and output voltages. Hardware-in-the-loop (HIL) results are obtained to validate the feasibility of the proposed strategy, using a PLECS-RT HIL platform and a dSPACE MicroLabBox controller. In addition to the good dynamic performance of the proposed OSS-MPC, it is demonstrated using HIL results that the control algorithm is capable of obtaining low total harmonic distortion (THD) in the output voltage for different operating conditions.
(This article belongs to the Special Issue Energy Process Systems Simulation, Modeling, Optimization and Design)

18 pages, 861 KiB  
Article
Optimal Selection of Switch Model Parameters for ADC-Based Power Converters
by Saif Alsarayreh and Zoltán Sütő
Energies 2024, 17(1), 56; https://doi.org/10.3390/en17010056 - 21 Dec 2023
Cited by 2 | Viewed by 1018
Abstract
Real-time hardware-in-the-loop (HIL) simulation is now a fundamental component of the power electronics control design cycle. This integration is required to test the efficacy of controller implementations. Even though HIL tools use FPGA devices whose computing power is rapidly evolving, developers constantly need to balance the ease of deploying models with acceptable accuracy. This study introduces a methodology for implementing a full-bridge inverter and buck converter utilising the associated discrete circuit (ADC) model, which is optimised for real-time simulator applications. Additionally, this work introduces a new approach for choosing ADC parameter values by using the artificial bee colony (ABC) algorithm, the firefly algorithm (FFA), and the genetic algorithm (GA). The implementation of the ADC-based model enables the development of a consistent architecture in simulation, regardless of the states of the switches. The simulation results demonstrate the efficacy of the proposed methodology in selecting optimal parameters for an ADC-switch-based full-bridge inverter and buck converter. These results indicate a reduction in overshoot and settling time observed in both the output voltage and current of the chosen topologies.

19 pages, 22398 KiB  
Article
Automated Age-Related Macular Degeneration Detector on Optical Coherence Tomography Images Using Slice-Sum Local Binary Patterns and Support Vector Machine
by Yao-Wen Yu, Cheng-Hung Lin, Cheng-Kai Lu, Jia-Kang Wang and Tzu-Lun Huang
Sensors 2023, 23(17), 7315; https://doi.org/10.3390/s23177315 - 22 Aug 2023
Cited by 2 | Viewed by 1477
Abstract
Artificial intelligence has revolutionised smart medicine, resulting in enhanced medical care. This study presents an automated detector chip for age-related macular degeneration (AMD) using a support vector machine (SVM) and three-dimensional (3D) optical coherence tomography (OCT) volume. The aim is to assist ophthalmologists by reducing the time-consuming AMD medical examination. Using the property of 3D OCT volume, a modified feature vector connected method called slice-sum is proposed, reducing computational complexity while maintaining high detection accuracy. Compared to previous methods, this method significantly reduces computational complexity by at least a hundredfold. Image adjustment and noise removal steps are excluded for classification accuracy, and the feature extraction algorithm of local binary patterns is determined based on hardware consumption considerations. Through optimisation of the feature vector connection method after feature extraction, the computational complexity of SVM detection is significantly reduced, making it applicable to similar 3D datasets. Additionally, the design supports model replacement, allowing users to train and update classification models as needed. Using TSMC 40 nm CMOS technology, the proposed detector achieves a core area of 0.12 mm2 while demonstrating a classification throughput of 8.87 decisions/s at a maximum operating frequency of 454.54 MHz. The detector achieves a final testing classification accuracy of 92.31%. Full article
(This article belongs to the Special Issue Integrated Circuit and System Design for Health Monitoring)
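The local binary pattern (LBP) feature extraction at the heart of the detector can be sketched with the basic 3x3 operator: each interior pixel is encoded by comparing its eight neighbours to the centre value, and the histogram of codes becomes the SVM's feature vector. The tiny image below is a hypothetical example, not OCT data, and the paper's hardware-oriented variant may differ in detail:

```python
def lbp_code(img, r, c):
    """8-bit LBP code: compare the 8 neighbours to the centre pixel."""
    centre = img[r][c]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels — the
    feature vector that would be fed to the SVM classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 30]]
assert lbp_code(img, 1, 1) == 0b00010000  # only bottom-right >= centre
assert sum(lbp_histogram(img)) == 1       # one interior pixel in a 3x3 image
```

LBP is attractive for hardware because it needs only comparisons and bit shifts, no multipliers, which fits the abstract's emphasis on hardware consumption.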

29 pages, 23352 KiB  
Article
GNSS-Based Driver Assistance for Charging Electric City Buses: Implementation and Lessons Learned from Field Testing
by Iman Esfandiyar, Krzysztof Ćwian, Michał R. Nowicki and Piotr Skrzypczyński
Remote Sens. 2023, 15(11), 2938; https://doi.org/10.3390/rs15112938 - 5 Jun 2023
Viewed by 1777
Abstract
Modern public transportation in urban areas increasingly relies on high-capacity buses. At the same time, the share of electric vehicles is increasing to meet environmental standards. This introduces problems when charging these vehicles from chargers at bus stops, as untrained drivers often find it difficult to execute docking manoeuvres on the charger. A practical solution to this problem requires a suitable advanced driver-assistance system (ADAS), which is a system used to automatise and make safer some of the tasks involved in driving a vehicle. In the considered case, ADAS supports docking to the electric charging station, and thus, it must solve two issues: precise positioning of the bus relative to the charger and motion planning in a constrained space. This paper addresses these issues by employing GNSS-based positioning and optimisation-based planning, resulting in an affordable solution to the ADAS for the docking of electric buses while recharging. We focus on the practical side of the system, showing how the necessary features were attained at a limited hardware and installation cost, also demonstrating an extensive evaluation of the fielded ADAS for an operator of public transportation in the city of Poznań in Poland. Full article
(This article belongs to the Special Issue GNSS for Urban Transport Applications II)
