Search Results (1,610)

Search Parameters:
Keywords = quantization

35 pages, 405 KiB  
Article
Deformation Quantization of Nonassociative Algebras
by Elisabeth Remm
Mathematics 2025, 13(1), 58; https://doi.org/10.3390/math13010058 - 27 Dec 2024
Viewed by 131
Abstract
We investigate formal deformations of certain classes of nonassociative algebras, including K[Σ3]-associative algebras, Lie-admissible algebras, and anti-associative algebras. In a process analogous to the Poisson-algebra construction in the associative case, we identify for each type of algebra (A,μ) a type of algebra (A,μ,ψ) such that formal deformations of (A,μ) appear as quantizations of (A,μ,ψ). The polarization/depolarization process associates to each nonassociative algebra a pair of algebras whose products are respectively commutative and skew-symmetric, and this pair is linked with the algebra obtained from the formal deformation. The anti-associative case is developed in connection with Jacobi–Jordan algebras. Full article
34 pages, 1240 KiB  
Article
Towards a Unitary Formulation of Quantum Field Theory in Curved Spacetime: The Case of de Sitter Spacetime
by K. Sravan Kumar and João Marto
Symmetry 2025, 17(1), 29; https://doi.org/10.3390/sym17010029 - 27 Dec 2024
Viewed by 221
Abstract
Before we ask what the quantum gravity theory is, there is a legitimate quest to formulate a robust quantum field theory in curved spacetime (QFTCS). Several conceptual problems, especially unitarity loss (pure states evolving into mixed states), have raised concerns over several decades. In this paper, acknowledging the fact that time is a parameter in quantum theory, which is different from its status in the context of General Relativity (GR), we start with a “quantum first approach” and propose a new formulation for QFTCS based on the discrete spacetime transformations which offer a way to achieve unitarity. We rewrite the QFT in Minkowski spacetime with a direct-sum Fock space structure based on the discrete spacetime transformations and geometric superselection rules. Applying this framework to QFTCS, in the context of de Sitter (dS) spacetime, we elucidate how this approach to quantization complies with unitarity and the observer complementarity principle. We then comment on understanding the scattering of states in de Sitter spacetime. Furthermore, we discuss briefly the implications of our QFTCS approach to future research in quantum gravity. Full article
(This article belongs to the Special Issue Quantum Gravity and Cosmology: Exploring the Astroparticle Interface)

17 pages, 305 KiB  
Article
Lie Bialgebra Structures and Quantization of Generalized Loop Planar Galilean Conformal Algebra
by Yu Yang and Xingtao Wang
Axioms 2025, 14(1), 7; https://doi.org/10.3390/axioms14010007 - 26 Dec 2024
Viewed by 257
Abstract
In this paper, we analyze the Lie bialgebra (LB) structures of the generalized loop planar Galilean conformal algebra (GLPGCA) W(Γ) and quantize it. We prove that all LB structures on W(Γ) are triangular coboundary. We also quantize W(Γ) using the Drinfeld-twist quantization technique and identify a class of noncommutative algebras and noncocommutative Hopf algebras. Full article
29 pages, 5099 KiB  
Article
Configurable Multi-Layer Perceptron-Based Soft Sensors on Embedded Field Programmable Gate Arrays: Targeting Diverse Deployment Goals in Fluid Flow Estimation
by Tianheng Ling, Chao Qian, Theodor Mario Klann, Julian Hoever, Lukas Einhaus and Gregor Schiele
Sensors 2025, 25(1), 83; https://doi.org/10.3390/s25010083 - 26 Dec 2024
Viewed by 240
Abstract
This study presents a comprehensive workflow for developing and deploying Multi-Layer Perceptron (MLP)-based soft sensors on embedded FPGAs, addressing diverse deployment objectives. The proposed workflow extends our prior research by introducing greater model adaptability. It supports various configurations—spanning layer counts, neuron counts, and quantization bitwidths—to accommodate the constraints and capabilities of different FPGA platforms. The workflow incorporates a custom-developed, open-source toolchain ElasticAI.Creator that facilitates quantization-aware training, integer-only inference, automated accelerator generation using VHDL templates, and synthesis alongside performance estimation. A case study on fluid flow estimation was conducted on two FPGA platforms: the AMD Spartan-7 XC7S15 and the Lattice iCE40UP5K. For precision-focused and latency-sensitive deployments, a six-layer, 60-neuron MLP accelerator quantized to 8 bits on the XC7S15 achieved an MSE of 56.56, an MAPE of 1.61%, and an inference latency of 23.87 μs. Moreover, for low-power and energy-constrained deployments, a five-layer, 30-neuron MLP accelerator quantized to 8 bits on the iCE40UP5K achieved an inference latency of 83.37 μs, a power consumption of 2.06 mW, and an energy consumption of just 0.172 μJ per inference. These results confirm the workflow’s ability to identify optimal FPGA accelerators tailored to specific deployment requirements, achieving a balanced trade-off between precision, inference latency, and energy efficiency. Full article
(This article belongs to the Section Intelligent Sensors)
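The quantization-aware training and integer-only inference mentioned in the abstract above rest on uniform affine quantization, in which a float tensor is mapped to integers through a scale and a zero point. A minimal illustrative sketch, not the ElasticAI.Creator implementation; the function names are hypothetical:

```python
import numpy as np

def quantize(x, bits=8):
    """Uniform affine quantization: map floats to integers in [0, 2**bits - 1]."""
    qmax = 2**bits - 1
    scale = (x.max() - x.min()) / qmax            # float step between integer levels
    zero_point = np.round(-x.min() / scale).astype(np.int64)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.int64)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float value from the integer code."""
    return scale * (q - zero_point)

x = np.linspace(-1.0, 1.0, 11)
q, s, z = quantize(x, bits=8)
x_hat = dequantize(q, s, z)
# round-trip error is bounded by half a quantization step
assert np.max(np.abs(x - x_hat)) <= s / 2 + 1e-9
```

On the accelerator only the integer codes `q` are stored and multiplied; the scale and zero point are folded into the surrounding arithmetic, which is what makes integer-only inference possible.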

22 pages, 5189 KiB  
Article
Autoencoder-Based DIFAR Sonobuoy Signal Transmission and Reception Method Incorporating Residual Vector Quantization and Compensation Module: Validation Through Air Channel Modeling
by Yeonjin Park and Jungpyo Hong
Appl. Sci. 2025, 15(1), 92; https://doi.org/10.3390/app15010092 - 26 Dec 2024
Viewed by 236
Abstract
This paper proposes a novel autoencoder-based neural network for compressing and reconstructing underwater acoustic signals collected by Directional Frequency Analysis and Recording sonobuoys. To improve both signal compression rates and reconstruction performance, we integrate Residual Vector Quantization and a Compensation Module into the decoding process to effectively compensate for quantization errors. Additionally, an unstructured pruning technique is applied to the encoder to minimize computational load and parameters, addressing the battery limitations of sonobuoys. Experimental results demonstrate that the proposed method reduces the data transmission size by approximately 31.25% compared to the conventional autoencoder-based method. Moreover, the spectral mean square errors are reduced by 60.58% for continuous wave signals and 55.25% for linear frequency modulation signals under realistic air channel simulations. Full article
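The Residual Vector Quantization step described above can be illustrated in miniature: each stage quantizes the residual left by the previous stage, so later codebooks refine earlier ones. A toy sketch with random, untrained codebooks (real RVQ codebooks are learned; including a zero codeword guarantees a stage can "pass" without growing the residual):

```python
import numpy as np

rng = np.random.default_rng(0)

def rvq_encode(x, codebooks):
    """Residual VQ: each stage quantizes the residual left by the previous one."""
    residual = x.copy()
    indices = []
    for cb in codebooks:
        i = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # nearest codeword
        indices.append(i)
        residual = residual - cb[i]
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected codewords, one per stage."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

dim, n_codes, n_stages = 8, 64, 3
codebooks = [rng.normal(scale=1.0 / 2**s, size=(n_codes, dim)) for s in range(n_stages)]
for cb in codebooks:
    cb[0] = 0.0   # zero codeword: a stage can leave the residual unchanged

x = rng.normal(size=dim)
idx = rvq_encode(x, codebooks)
x_hat = rvq_decode(idx, codebooks)
err1 = np.linalg.norm(x - rvq_decode(idx[:1], codebooks[:1]))
err3 = np.linalg.norm(x - x_hat)
assert err3 <= err1 + 1e-12   # extra stages never increase reconstruction error
```

Only the stage indices are transmitted, which is why RVQ compresses well; the paper's Compensation Module then corrects the remaining quantization error on the decoder side.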

13 pages, 1603 KiB  
Article
The Impact of 8- and 4-Bit Quantization on the Accuracy and Silicon Area Footprint of Tiny Neural Networks
by Paweł Tumialis, Marcel Skierkowski, Jakub Przychodny and Paweł Obszarski
Electronics 2025, 14(1), 14; https://doi.org/10.3390/electronics14010014 - 24 Dec 2024
Viewed by 267
Abstract
In the field of embedded and edge devices, efforts have been made to make deep neural network models smaller because of limited memory and low computational efficiency. Typical model footprints are under 100 KB. However, for some applications, models of this size are still too large: in low-voltage sensors, signals must be processed, classified, or predicted with an order of magnitude less memory. Models can be downsized by limiting the number of parameters or by quantizing their weights, but both operations reduce the accuracy of the deep network. This study tested the effect of such downscaling techniques on accuracy, with the goal of reducing neural network models to 3 k parameters or fewer. Tests were conducted on three different neural network architectures across three separate research problems that model realistic tasks for small networks. The accuracy loss depends mainly on the network's initial size: a network reduced from 40 k parameters lost 16 percentage points of accuracy, and a network reduced from 20 k parameters lost 8 points. To obtain the best results, knowledge distillation and quantization-aware training were used during training. As a result, the accuracy of the 4-bit networks did not differ significantly from that of the 8-bit ones, and both were approximately four percentage points worse than the full-precision networks. For the fully connected network, synthesis to an ASIC (application-specific integrated circuit) was also performed to demonstrate the reduction in silicon area occupied by the model: 4-bit quantization reduces the silicon area footprint by 90%. Full article
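The trade-off this study measures, coarser quantization grids producing larger rounding error, can be seen directly in a symmetric per-tensor weight quantizer. An illustrative sketch under stated assumptions, not the authors' code:

```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric per-tensor quantization to signed `bits`-bit integer levels."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8 bits, 7 for 4 bits
    scale = np.max(np.abs(w)) / qmax      # one step of the integer grid
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                      # dequantized weights, for error analysis

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=1000)      # stand-in for a trained weight tensor
errs = {bits: np.sqrt(np.mean((w - quantize_weights(w, bits)) ** 2))
        for bits in (8, 4)}
# a 4-bit grid has 16x fewer levels, so its rounding error is much larger
assert errs[4] > errs[8]
```

Quantization-aware training, as used in the study, lets the network adapt its weights to this coarser grid during training, which is why the 4-bit accuracy can stay close to 8-bit.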

14 pages, 2382 KiB  
Article
Edge-AI Enabled Wearable Device for Non-Invasive Type 1 Diabetes Detection Using ECG Signals
by Maria Gragnaniello, Vincenzo Romano Marrazzo, Alessandro Borghese, Luca Maresca, Giovanni Breglio and Michele Riccio
Bioengineering 2025, 12(1), 4; https://doi.org/10.3390/bioengineering12010004 - 24 Dec 2024
Viewed by 298
Abstract
Diabetes is a chronic condition, and traditional monitoring methods are invasive, significantly reducing the quality of life of the patients. This study proposes the design of an innovative system based on a microcontroller that performs real-time ECG acquisition and evaluates the presence of diabetes using an Edge-AI solution. A spectrogram-based preprocessing method is combined with a 1-Dimensional Convolutional Neural Network (1D-CNN) to analyze the ECG signals directly on the device. By applying quantization as an optimization technique, the model effectively balances memory usage and accuracy, achieving an accuracy of 89.52% with an average precision and recall of 0.91 and 0.90, respectively. These results were obtained with a minimal memory footprint of 347 kB flash and 23 kB RAM, showcasing the system’s suitability for wearable embedded devices. Furthermore, a custom PCB was developed to validate the system in a real-world scenario. The hardware integrates high-performance electronics with low power consumption, demonstrating the feasibility of deploying Edge-AI for non-invasive, real-time diabetes detection in resource-constrained environments. This design represents a significant step forward in improving the accessibility and practicality of diabetes monitoring. Full article
(This article belongs to the Special Issue Monitoring and Analysis of Human Biosignals, Volume II)

36 pages, 3222 KiB  
Article
A Deployment Method for Motor Fault Diagnosis Application Based on Edge Intelligence
by Zheng Zhou, Yusong Qiao, Xusheng Lin, Purui Li, Nan Wu and Dong Yu
Sensors 2025, 25(1), 9; https://doi.org/10.3390/s25010009 - 24 Dec 2024
Viewed by 296
Abstract
The rapid advancement of Industry 4.0 and intelligent manufacturing has elevated the demands for fault diagnosis in servo motors. Traditional diagnostic methods, which rely heavily on handcrafted features and expert knowledge, struggle to achieve efficient fault identification in complex industrial environments, particularly when faced with real-time performance and accuracy limitations. This paper proposes a novel fault diagnosis approach integrating multi-scale convolutional neural networks (MSCNNs), long short-term memory networks (LSTM), and attention mechanisms to address these challenges. Furthermore, the proposed method is optimized for deployment on resource-constrained edge devices through knowledge distillation and model quantization. This approach significantly reduces the computational complexity of the model while maintaining high diagnostic accuracy, making it well suited for edge nodes in industrial IoT scenarios. Experimental results demonstrate that the method achieves efficient and accurate servo motor fault diagnosis on edge devices with excellent accuracy and inference speed. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

21 pages, 2619 KiB  
Article
MIRA-ChatGLM: A Fine-Tuned Large Language Model for Intelligent Risk Assessment in Coal Mining
by Yi Sun, Chao Zhang, Chen Wang and Ying Han
Appl. Sci. 2024, 14(24), 12072; https://doi.org/10.3390/app142412072 - 23 Dec 2024
Viewed by 464
Abstract
Intelligent mining risk assessment (MIRA) is a vital approach for enhancing safety and operational efficiency in mining. In this study, we introduce MIRA-ChatGLM, which leverages pre-trained large language models (LLMs) for the domain of gas risk assessment in coal mines. We meticulously constructed a dataset specifically designed for mining risk analysis and performed parameter-efficient fine-tuning on the locally deployed GLM-4-9B-chat base model to develop MIRA-ChatGLM. Using consumer-grade GPUs and employing LoRA together with quantized fine-tuning at various precision levels, such as QLoRA, we investigated the impact of different data scales and instruction settings on model performance. The evaluation results show that MIRA-ChatGLM achieved BLEU-4, ROUGE-1, ROUGE-2, and ROUGE-L scores of 84.47, 90.63, 86.88, and 90.63, respectively, highlighting its strong performance in coal mine gas risk assessment. In comparative experiments with other large language models of similar size and in manual evaluation, MIRA-ChatGLM demonstrated superior performance across multiple key metrics, demonstrating its potential in intelligent mine risk assessment and decision support. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

15 pages, 12297 KiB  
Article
Enhancing Accessibility: Automated Tactile Graphics Generation for Individuals with Visual Impairments
by Yehor Dzhurynskyi, Volodymyr Mayik and Lyudmyla Mayik
Computation 2024, 12(12), 251; https://doi.org/10.3390/computation12120251 - 23 Dec 2024
Viewed by 241
Abstract
This study addresses the accessibility challenges faced by individuals with visual impairments due to limited access to graphic information, which significantly impacts their educational and social integration. Traditional methods for producing tactile graphics are labor-intensive and require specialized expertise, limiting their availability. Recent advancements in generative models, such as GANs, diffusion models, and VAEs, offer potential solutions to automate the creation of tactile images. In this work, we propose a novel generative model conditioned on text prompts, integrating a Bidirectional and Auto-Regressive Transformer (BART) and Vector Quantized Variational Auto-Encoder (VQ-VAE). This model transforms textual descriptions into tactile graphics, addressing key requirements for legibility and accessibility. The model’s performance was evaluated using cross-entropy, perplexity, mean square error, and CLIP Score metrics, demonstrating its ability to generate high-quality, customizable tactile images. Testing with educational and rehabilitation institutions confirmed the practicality and efficiency of the system, which significantly reduces production time and requires minimal operator expertise. The proposed approach enhances the production of inclusive educational materials, enabling improved access to quality education and fostering greater independence for individuals with visual impairments. Future research will focus on expanding the training dataset and refining the model for complex scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health)

19 pages, 31273 KiB  
Article
Binary Transformer Based on the Alignment and Correction of Distribution
by Kaili Wang, Mingtao Wang, Zixin Wan and Tao Shen
Sensors 2024, 24(24), 8190; https://doi.org/10.3390/s24248190 - 22 Dec 2024
Viewed by 265
Abstract
The Transformer is a powerful model widely used in artificial intelligence applications. It contains complex structures and has extremely high computational requirements, which make it unsuitable for embedded intelligent sensors with limited computational resources. Binary quantization takes up less memory and computes faster; however, it has seldom been studied for lightweight transformers. Compared with full-precision networks, the key bottleneck lies in the distribution-shift problem caused by existing binary quantization methods. To tackle this problem, the feature-distribution alignment operation in binarization is investigated: a median-shift-and-mean-restore operation is designed to ensure consistency between the binary feature distribution and that of the full-precision transformer. Then, a knowledge distillation architecture for distribution correction is developed, with a teacher–student structure comprising a full-precision and a binary transformer, to further rectify the feature distribution of the binary student network and ensure the completeness and accuracy of the data. Experimental results on the CIFAR10, CIFAR100, ImageNet-1k, and TinyImageNet datasets show the effectiveness of the proposed binary optimization model, which outperforms previous state-of-the-art binarization mechanisms while maintaining the same computational complexity. Full article
(This article belongs to the Section Intelligent Sensors)
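A hedged sketch of the kind of distribution alignment the abstract describes: shifting by the median balances the +1/-1 split even for skewed activations, and rescaling by the mean absolute deviation restores the original magnitude. This is one plausible reading of the "median shift and mean restore" idea, not the paper's exact operators:

```python
import numpy as np

def binarize_aligned(x):
    """Binarize with a median shift (balanced +1/-1 split) and a mean-magnitude
    restore; an illustrative reading of the alignment step, not the paper's code."""
    shift = np.median(x)
    b = np.where(x - shift >= 0, 1.0, -1.0)   # sign taken after the median shift
    alpha = np.mean(np.abs(x - shift))        # restore the mean magnitude
    return alpha * b + shift

rng = np.random.default_rng(2)
x = rng.lognormal(size=10_000)                # a skewed activation distribution
x_b = binarize_aligned(x)
# the median shift keeps the binary code balanced despite the skew
assert abs(np.mean(np.sign(x_b - np.median(x)))) < 0.01
```

Without the shift, a plain `sign(x)` on a skewed, all-positive distribution would collapse to a single value, which is exactly the distribution-shift failure mode the paper targets.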

19 pages, 2254 KiB  
Article
Hierarchical Reinforcement Learning-Based Adaptive Initial QP Selection and Rate Control for H.266/VVC
by Shuqian He, Biao Jin, Shangneng Tian, Jiayu Liu, Zhengjie Deng and Chun Shi
Electronics 2024, 13(24), 5028; https://doi.org/10.3390/electronics13245028 - 20 Dec 2024
Viewed by 289
Abstract
In video encoding rate control, adaptive selection of the initial quantization parameter (QP) is a critical factor affecting both encoding quality and rate control precision. Due to the diversity of video content and the dynamic nature of network conditions, accurately and efficiently determining the initial QP remains a significant challenge. The optimal setting of the initial QP not only influences bitrate allocation strategies but also impacts the encoding efficiency and output quality of the encoder. To address this issue in the H.266/VVC standard, this paper proposes a novel hierarchical reinforcement learning-based method for adaptive initial QP selection. The proposed method introduces a hierarchical reinforcement learning framework that decomposes the initial QP selection task into high-level and low-level strategies, handling coarse-grained and fine-grained QP decisions, respectively. The high-level strategy quickly determines a rough QP range based on global video features and network conditions, while the low-level strategy refines the specific QP value within this range to enhance decision accuracy. This framework integrates spatiotemporal video complexity, network conditions, and rate control objectives to form an optimized model for adaptive initial QP selection. Experimental results demonstrate that the proposed method significantly improves encoding quality and rate control accuracy compared to traditional methods, confirming its effectiveness in handling complex video content and dynamic network environments. Full article

8 pages, 279 KiB  
Article
Statistical Gravity Through Affine Quantization
by Riccardo Fantoni
Quantum Rep. 2024, 6(4), 706-713; https://doi.org/10.3390/quantum6040042 - 18 Dec 2024
Viewed by 316
Abstract
I propose a possible way to introduce the effect of temperature (defined through the virial theorem) into Einstein’s theory of general relativity. This requires the computation of a path integral on a ten-dimensional flat space in a four-dimensional spacetime lattice. Standard path integral Monte Carlo methods can be used to compute this. Full article
17 pages, 3417 KiB  
Article
TransSMPL: Efficient Human Pose Estimation with Pruned and Quantized Transformer Networks
by Yeonggwang Kim, Hyeongjun Yoo, Je-Ho Ryu, Seungjoo Lee, Jong Hun Lee and Jinsul Kim
Electronics 2024, 13(24), 4980; https://doi.org/10.3390/electronics13244980 - 18 Dec 2024
Viewed by 369
Abstract
Existing Transformer-based models for 3D human pose and shape estimation often struggle with computational complexity, particularly when handling high-resolution feature maps. These challenges limit their ability to efficiently utilize fine-grained features, leading to suboptimal performance in accurate body reconstruction. In this work, we propose TransSMPL, a novel Transformer framework built upon the SMPL model, specifically designed to address the challenges of computational complexity and inefficient utilization of high-resolution feature maps in 3D human pose and shape estimation. By replacing HRNet with MobileNetV3 for lightweight feature extraction, applying pruning and quantization techniques, and incorporating an early exit mechanism, TransSMPL significantly reduces both computational cost and memory usage. TransSMPL introduces two key innovations: (1) a multi-scale attention mechanism, reduced from four scales to two, allowing for more efficient global and local feature integration, and (2) a confidence-based early exit strategy, which enables the model to halt further computations when high-confidence predictions are achieved, further enhancing efficiency. Extensive pruning and dynamic quantization are also applied to reduce the model size while maintaining competitive performance. Quantitative and qualitative experiments on the Human3.6M dataset demonstrate the efficacy of TransSMPL. Our model achieves an MPJPE (Mean Per Joint Position Error) of 48.5 mm, reducing the model size by over 16% compared to existing methods while maintaining a similar level of accuracy. Full article
(This article belongs to the Special Issue Trustworthy Artificial Intelligence in Cyber-Physical Systems)

8 pages, 837 KiB  
Communication
Dental Loop Chatbot: A Prototype Large Language Model Framework for Dentistry
by Md Sahadul Hasan Arian, Faisal Ahmed Sifat, Saif Ahmed, Nabeel Mohammed, Taseef Hasan Farook and James Dudley
Software 2024, 3(4), 587-594; https://doi.org/10.3390/software3040029 - 17 Dec 2024
Viewed by 634
Abstract
The Dental Loop Chatbot was developed as a real-time, evidence-based guidance system for dental practitioners using a fine-tuned large language model (LLM) and Retrieval-Augmented Generation (RAG). This paper outlines the development and preliminary evaluation of the chatbot as a scalable clinical decision-support tool designed for resource-limited settings. The system’s architecture incorporates Quantized Low-Rank Adaptation (QLoRA) for efficient fine-tuning, while dynamic retrieval mechanisms ensure contextually accurate and relevant responses. This prototype lays the groundwork for future triaging and diagnostic support systems tailored specifically to the field of dentistry. Full article