Search Results (8,743)

Search Parameters:
Keywords = semantics

15 pages, 2372 KiB  
Article
PDeT: A Progressive Deformable Transformer for Photovoltaic Panel Defect Segmentation
by Peng Zhou, Hong Fang and Gaochang Wu
Sensors 2024, 24(21), 6908; https://doi.org/10.3390/s24216908 (registering DOI) - 28 Oct 2024
Abstract
Defects in photovoltaic (PV) panels can significantly reduce the power generation efficiency of the system and may cause localized overheating due to uneven current distribution. Therefore, adopting precise pixel-level defect detection, i.e., defect segmentation, technology is essential to ensuring stable operation. However, for effective defect segmentation, the feature extractor must adaptively determine the appropriate scale or receptive field for accurate defect localization, while the decoder must seamlessly fuse coarse-level semantics with fine-grained features to enhance high-level representations. In this paper, we propose a Progressive Deformable Transformer (PDeT) for defect segmentation in PV cells. This approach effectively learns spatial sampling offsets and refines features progressively through coarse-level semantic attention. Specifically, the network adaptively captures spatial offset positions and computes self-attention, expanding the model’s receptive field and enabling feature extraction across objects of various shapes. Furthermore, we introduce a semantic aggregation module to refine semantic information, converting the fused feature map into a scale space and balancing contextual information. Extensive experiments demonstrate the effectiveness of our method, achieving an mIoU of 88.41% on our solar cell dataset, outperforming other methods. Additionally, to validate the PDeT’s applicability across different domains, we trained and tested it on the MVTec-AD dataset. The experimental results demonstrate that the PDeT exhibits excellent recognition performance in various other scenarios as well. Full article
(This article belongs to the Special Issue Deep Learning for Perception and Recognition: Method and Applications)
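
A minimal PyTorch sketch of the learned spatial-sampling-offset idea the abstract describes (illustrative only; the layer sizes, number of sampling points, and offset scaling are assumptions, not the authors' PDeT implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableSampling(nn.Module):
    """Sample features at learned offset positions: the core idea behind deformable attention."""
    def __init__(self, channels, n_points=4):
        super().__init__()
        self.n_points = n_points
        # Predict a (dx, dy) offset per sampling point from the local feature.
        self.offset_pred = nn.Conv2d(channels, 2 * n_points, kernel_size=3, padding=1)
        self.proj = nn.Conv2d(channels * n_points, channels, kernel_size=1)

    def forward(self, x):                                    # x: (B, C, H, W)
        B, C, H, W = x.shape
        offsets = self.offset_pred(x)                        # (B, 2*P, H, W)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).to(x)           # regular grid in [-1, 1]
        sampled = []
        for p in range(self.n_points):
            off = offsets[:, 2 * p:2 * p + 2].permute(0, 2, 3, 1)       # (B, H, W, 2)
            grid = (base.unsqueeze(0) + 0.1 * torch.tanh(off)).clamp(-1, 1)
            sampled.append(F.grid_sample(x, grid, align_corners=True))  # deformed sampling
        return self.proj(torch.cat(sampled, dim=1))          # fuse the sampled maps

feats = torch.randn(1, 32, 16, 16)
print(DeformableSampling(32)(feats).shape)                   # torch.Size([1, 32, 16, 16])
```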

21 pages, 2250 KiB  
Article
Color Dominance-Based Polynomial Optimization Segmentation for Identifying Tomato Leaves and Fruits
by Juan Pablo Guerra Ibarra, Francisco Javier Cuevas de la Rosa and Alicia Linares Ramirez
Agriculture 2024, 14(11), 1911; https://doi.org/10.3390/agriculture14111911 (registering DOI) - 28 Oct 2024
Abstract
Optimization processes or methods play an essential role in the continuous improvement of various human activities, particularly in agriculture, given its vital role in food production. In precision agriculture, which utilizes technology to optimize food production, a primary goal is to minimize the consumption of resources such as water and fertilizers and to support the detection of pests and diseases. In the fertilization process, it is essential to identify any deficiencies or excesses of chemical elements. Deficiencies of nutrients that are essential for plant development are typically detected in the leaves of crops. This paper proposes a methodology for optimizing the color threshold dominance factors employed in the segmentation process for tomato crop leaves and fruits. The optimization is performed using an interpolation method to find the values that maximize the segmentation of leaves and fruits used by the color dominance segmentation method. A comparison of the interpolation method results with those obtained using a greedy algorithm, which iteratively finds the optimal segmentation values, shows nearly identical outcomes. Similarly, a UNet model is used for semantic segmentation, the results of which are inferior to those obtained by the proposed interpolation optimization method. The most significant contribution of the interpolation method is that it requires only a single iteration to generate the initial data, in contrast to the iterative search required by the greedy algorithm and the lengthy training process and video card dependency of the UNet model. This results in an 80% reduction in computation time. Full article
(This article belongs to the Section Digital Agriculture)
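
The contrast between the greedy search and the interpolation step can be illustrated with a toy example (the quality curve below is a made-up stand-in for a real segmentation score, not the paper's objective function):

```python
import numpy as np

def segmentation_quality(threshold):
    # Hypothetical smooth quality curve peaking near a threshold of 0.62.
    return -(threshold - 0.62) ** 2 + 0.9

# Greedy baseline: evaluate every candidate threshold.
candidates = np.linspace(0.0, 1.0, 1001)
greedy_best = candidates[np.argmax([segmentation_quality(t) for t in candidates])]

# Interpolation: evaluate a handful of points once, fit a polynomial,
# and take its analytic maximizer.
samples = np.linspace(0.0, 1.0, 5)
coeffs = np.polyfit(samples, [segmentation_quality(t) for t in samples], deg=2)
poly_best = -coeffs[1] / (2 * coeffs[0])          # vertex of the fitted parabola

print(f"greedy: {greedy_best:.3f}, interpolated: {poly_best:.3f}")
```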

15 pages, 6433 KiB  
Technical Note
RSPS-SAM: A Remote Sensing Image Panoptic Segmentation Method Based on SAM
by Zhuoran Liu, Zizhen Li, Ying Liang, Claudio Persello, Bo Sun, Guangjun He and Lei Ma
Remote Sens. 2024, 16(21), 4002; https://doi.org/10.3390/rs16214002 (registering DOI) - 28 Oct 2024
Abstract
Satellite remote sensing images contain complex and diverse ground object information, and the images exhibit spatial multi-scale characteristics, making the panoptic segmentation of satellite remote sensing images a highly challenging task. Due to the lack of large-scale annotated datasets for panoptic segmentation, existing methods still suffer from weak model generalization capabilities. To mitigate this issue, this paper leverages the advantages of the Segment Anything Model (SAM), which can segment any object in remote sensing images without requiring any annotations, and proposes a high-resolution remote sensing image panoptic segmentation method called Remote Sensing Panoptic Segmentation SAM (RSPS-SAM). Firstly, to address the problem of global information loss caused by cropping large remote sensing images for training, a Batch Attention Pyramid was designed to extract multi-scale features from remote sensing images and capture long-range contextual information between cropped patches, thereby enhancing the semantic understanding of remote sensing images. Secondly, we constructed a Mask Decoder to address the limitation of SAM requiring manual input prompts and its inability to output category information. This decoder utilized mask-based attention for mask segmentation, enabling automatic prompt generation and category prediction of segmented objects. Finally, the effectiveness of the proposed method was validated on the high-resolution remote sensing image airport scene dataset RSAPS-ASD. The results demonstrate that the proposed method achieves segmentation and recognition of foreground instances and background regions in high-resolution remote sensing images without the need for prompt input, while providing smooth segmentation boundaries with a panoptic segmentation quality (PQ) of 57.2, outperforming current mainstream methods. Full article
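
A schematic of the mask-based attention mechanism such a decoder relies on (generic mechanism only, with made-up tensor sizes; this is not the RSPS-SAM code):

```python
import torch

B, Q, N, D, K = 1, 8, 64, 32, 5            # batch, queries, pixels, feature dim, classes
queries = torch.randn(B, Q, D)              # learned object queries (automatic "prompts")
pixels  = torch.randn(B, N, D)              # flattened image features
masks   = torch.rand(B, Q, N) > 0.5         # per-query foreground mask from a prior layer
masks[..., :1] = True                       # ensure each query attends to at least one location

attn = torch.einsum("bqd,bnd->bqn", queries, pixels) / D ** 0.5
attn = attn.masked_fill(~masks, float("-inf"))        # attend only inside each query's mask
attn = torch.softmax(attn, dim=-1)
updated = torch.einsum("bqn,bnd->bqd", attn, pixels)  # mask-refined query features

class_logits = torch.nn.Linear(D, K)(updated)                # category prediction per query
new_masks = torch.einsum("bqd,bnd->bqn", updated, pixels)    # refined mask logits
print(class_logits.shape, new_masks.shape)                   # (1, 8, 5) (1, 8, 64)
```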

24 pages, 6467 KiB  
Article
YOLO-DHGC: Small Object Detection Using Two-Stream Structure with Dense Connections
by Lihua Chen, Lumei Su, Weihao Chen, Yuhan Chen, Haojie Chen and Tianyou Li
Sensors 2024, 24(21), 6902; https://doi.org/10.3390/s24216902 (registering DOI) - 28 Oct 2024
Abstract
Small object detection, which is frequently applied in defect detection, medical imaging, and security surveillance, often suffers from low accuracy due to limited feature information and blurred details. This paper proposes a small object detection method named YOLO-DHGC, which employs a two-stream structure with dense connections. Firstly, a novel backbone network, DenseHRNet, is introduced. It innovatively combines a dense connection mechanism with high-resolution feature map branches, effectively enhancing feature reuse and cross-layer fusion, thereby obtaining high-level semantic information from the image. Secondly, a two-stream structure based on an edge-gated branch is designed. It uses higher-level information from the regular detection stream to eliminate irrelevant interference remaining in the early processing stages of the edge-gated stream, allowing it to focus on processing information related to shape boundaries and accurately capture the morphological features of small objects. To assess the effectiveness of the proposed YOLO-DHGC method, we conducted experiments on several public datasets and a self-constructed dataset. Notably, a defect detection accuracy of 96.3% was achieved on the Market-PCB public dataset, demonstrating the effectiveness of our method in detecting small object defects for industrial applications. Full article
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection: 2nd Edition)
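
A toy sketch of the edge-gating idea: a low-level edge map is suppressed wherever the regular detection stream sees nothing relevant (the Sobel filter and the random "semantic" map are illustrative stand-ins, not the YOLO-DHGC components):

```python
import torch
import torch.nn.functional as F

def sobel_edges(gray):                          # gray: (B, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                     # y-direction Sobel kernel
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

img = torch.rand(1, 1, 64, 64)
edges = sobel_edges(img)                                     # low-level boundary evidence
semantic = torch.sigmoid(torch.randn(1, 1, 64, 64))          # stand-in for detection-stream features
gated_edges = edges * semantic                               # suppress edges far from relevant objects
print(gated_edges.shape)
```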

13 pages, 511 KiB  
Article
Addressing Semantic Variability in Clinical Outcome Reporting Using Large Language Models
by Fatemeh Shah-Mohammadi and Joseph Finkelstein
BioMedInformatics 2024, 4(4), 2173-2185; https://doi.org/10.3390/biomedinformatics4040116 (registering DOI) - 28 Oct 2024
Abstract
Background/Objectives: Clinical trials frequently employ diverse terminologies and definitions to describe similar outcomes, leading to ambiguity and inconsistency in data interpretation. Addressing the variability in clinical outcome reports and integrating semantically similar outcomes is important in healthcare and clinical research. Variability in outcome reporting not only hinders the comparability of clinical trial results but also poses significant challenges in evidence synthesis, meta-analysis, and evidence-based decision-making. Methods: This study investigates variability reduction in outcome measures reporting using rule-based and large language-based models, and aims to mitigate the challenges associated with variability in outcome reporting by comparing these two models. The first approach, which is rule-based, leverages well-known ontologies, while the second exploits Sentence Bidirectional Encoder Representations from Transformers (SBERT) to identify semantically similar outcomes, together with a Generative Pre-trained Transformer (GPT) to refine the results. Results: The results show that relatively low percentages of outcomes are linked to established rule-based ontologies. Analysis of outcomes by word count highlighted the absence of ontological linkage for three-word outcomes, which indicates potential gaps in semantic representation. Conclusions: This study demonstrates that large language models (LLMs) can identify similar outcomes, even those of more than three words, suggesting a crucial role for LLMs in outcome harmonization efforts, potentially reducing redundancy and enhancing data interoperability. Full article
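
A minimal sketch of the SBERT step described in the Methods (the model name, example outcomes, and similarity cutoff are illustrative assumptions, not the study's configuration):

```python
from sentence_transformers import SentenceTransformer, util

outcomes = [
    "all-cause mortality",
    "death from any cause",
    "6-minute walk distance",
    "six minute walking test distance",
]
model = SentenceTransformer("all-MiniLM-L6-v2")      # one common SBERT checkpoint
emb = model.encode(outcomes, convert_to_tensor=True)
sim = util.cos_sim(emb, emb)                         # pairwise cosine similarity

for i in range(len(outcomes)):
    for j in range(i + 1, len(outcomes)):
        if sim[i, j] > 0.6:                          # assumed similarity cutoff
            print(f"likely same outcome: {outcomes[i]!r} ~ {outcomes[j]!r}")
```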

16 pages, 2025 KiB  
Article
Pre- and Post-Operative Cognitive Assessment in Patients Undergoing Surgical Aortic Valve Replacement: Insights from the PEARL Project
by Valentina Fiolo, Enrico Giuseppe Bertoldo, Silvana Pagliuca, Sara Boveri, Sara Pugliese, Martina Anguissola, Francesca Gelpi, Beatrice Cairo, Vlasta Bari, Alberto Porta and Edward Callus
NeuroSci 2024, 5(4), 485-500; https://doi.org/10.3390/neurosci5040035 (registering DOI) - 28 Oct 2024
Abstract
Background: Aortic valve stenosis (AVS) is a common valvular heart disease affecting millions of people worldwide. It leads to significant neurocognitive and neuropsychological impairments, impacting patients’ quality of life. Objective: The objective of this article is to identify and discuss the potential neurocognitive effects on patients with aortic stenosis before and after undergoing surgical aortic valve replacement (SAVR). Method: Our study involved the assessment of 64 patients undergoing aortic valve replacement (SAVR) using a neurocognitive evaluation comprising a battery of 11 different cognitive tests. These tests were designed to analyze the patients’ overall cognitive functioning, executive abilities, short- and long-term memory, and attentional performance. The tests were administered to patients before the aortic valve surgery (T0) and after the surgery (T1). From a statistical perspective, numerical variables are presented as means (±standard deviation) and medians (IQR), while categorical variables are presented as counts and percentages. Normality was assessed using the Shapiro–Wilk test. T0 and T1 scores were compared with the Wilcoxon signed rank test, with p < 0.05 considered significant. Analyses were performed using SAS version 9.4. Results: Conducted as part of a fully financed Italian Ministry of Health project (RF-2016-02361069), the study found that most patients showed normal cognitive functioning at baseline. Cognitive assessments showed that executive functions, attention, language, and semantic knowledge were within the normal range for the majority of participants. After SAVR, cognitive outcomes remained stable or improved, particularly in executive functions and language. Notably, verbal episodic memory demonstrated significant improvement, with the percentage of patients scoring within the normal range on the BSRT increasing from 73.4% at T0 to 92.2% at T1 (p < 0.0001). However, visuospatial and visuoconstructive abilities showed stability or slight decline, while attentional skills remained relatively stable. The Clock Drawing Test indicated the maintenance of cognitive functions. Conclusions: The findings of our study indicate a global stability in cognitive status among patients after undergoing SAVR, with significant improvement noted in verbal episodic memory. While other cognitive domains did not demonstrate statistically significant changes, these insights are valuable for understanding the cognitive effects of SAVR and can guide future research and clinical practice in selecting the most effective surgical and rehabilitative options for patients. Monitoring cognitive outcomes in patients undergoing aortic valve replacement surgery remains crucial. Full article
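
A minimal example of the reported statistical comparison, using Python's SciPy in place of SAS 9.4 (the paired scores below are fabricated for illustration):

```python
import numpy as np
from scipy.stats import shapiro, wilcoxon

t0 = np.array([24, 26, 22, 25, 27, 23, 21, 28, 24, 26])   # hypothetical pre-op test scores
t1 = np.array([25, 27, 24, 26, 29, 25, 22, 29, 26, 27])   # hypothetical post-op test scores

print("Shapiro-Wilk on paired differences:", shapiro(t1 - t0))
stat, p = wilcoxon(t0, t1)                                 # paired, non-parametric comparison
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}  (significant if p < 0.05)")
```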

22 pages, 9696 KiB  
Article
Text-Enhanced Graph Attention Hashing for Cross-Modal Retrieval
by Qiang Zou, Shuli Cheng, Anyu Du and Jiayi Chen
Entropy 2024, 26(11), 911; https://doi.org/10.3390/e26110911 (registering DOI) - 27 Oct 2024
Abstract
Deep hashing technology, known for its low-cost storage and rapid retrieval, has become a focal point in cross-modal retrieval research as multimodal data continue to grow. However, existing supervised methods often overlook noisy labels and multiscale features in different modal datasets, leading to higher information entropy in the generated hash codes and features, which reduces retrieval performance. The variation in text annotation information across datasets further increases the information entropy during text feature extraction, resulting in suboptimal outcomes. Consequently, reducing the information entropy in text feature extraction, supplementing text feature information, and enhancing the retrieval efficiency of large-scale media data are critical challenges in cross-modal retrieval research. To tackle these, this paper introduces the Text-Enhanced Graph Attention Hashing for Cross-Modal Retrieval (TEGAH) framework. TEGAH incorporates a deep text feature extraction network and a multiscale label region fusion network to minimize information entropy and optimize feature extraction. Additionally, a Graph-Attention-based modal feature fusion network is designed to efficiently integrate multimodal information, enhance the affinity of the network for different modes, and retain more semantic information. Extensive experiments on three multilabel datasets demonstrate that the TEGAH framework significantly outperforms state-of-the-art cross-modal hashing methods. Full article
(This article belongs to the Section Multidisciplinary Applications)
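
The retrieval mechanism that hashing methods of this kind share can be sketched in a few lines (generic sign binarization and Hamming ranking on random features; not the TEGAH network):

```python
import numpy as np

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(1000, 64))                     # stand-in learned features (database)
query_feat = db_feats[42] + 0.1 * rng.normal(size=64)      # a near-duplicate query

def to_hash(x):
    return (x > 0).astype(np.uint8)                        # sign binarization -> 64-bit code

db_codes = to_hash(db_feats)
q_code = to_hash(query_feat)
hamming = (db_codes != q_code).sum(axis=1)                 # cheap bitwise distance
print("top-5 retrieved indices:", np.argsort(hamming)[:5])
```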

14 pages, 13514 KiB  
Article
A Nighttime Driving-Scene Segmentation Method Based on Light-Enhanced Network
by Lihua Bi, Wenjiao Zhang, Xiangfei Zhang and Canlin Li
World Electr. Veh. J. 2024, 15(11), 490; https://doi.org/10.3390/wevj15110490 (registering DOI) - 27 Oct 2024
Abstract
To solve the semantic segmentation problem of nighttime driving-scene images, which often have low brightness, low contrast, and uneven illumination, a nighttime driving-scene segmentation method based on a light-enhanced network is proposed. Firstly, we designed a light enhancement network, which comprises two parts: a color correction module and a parameter predictor. The color correction module mitigates the impact of illumination variations on the segmentation network by adjusting the color information of the image. Meanwhile, the parameter predictor accurately predicts the parameters of the image filter through the analysis of global content, including factors such as brightness, contrast, hue, and exposure level, thereby effectively enhancing the image quality. Subsequently, the output of the light enhancement network is fed into the segmentation network to obtain the final segmentation prediction. Experimental results show that the proposed method achieves a mean Intersection over Union (mIoU) of 59.4% on the Dark Zurich-test dataset, outperforming other segmentation algorithms for nighttime driving scenes. Full article
(This article belongs to the Special Issue Vehicle-Road Collaboration and Connected Automated Driving)
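
A sketch of the "parameter predictor plus image filter" idea: global filter parameters (here fixed numbers standing in for a network's predictions) are applied to the image before segmentation; the parameter values are illustrative:

```python
import torch

def apply_filters(img, brightness, contrast, gamma):
    # img in [0, 1], shape (B, 3, H, W); all operations are differentiable.
    out = img + brightness                       # brightness shift
    out = (out - 0.5) * contrast + 0.5           # contrast around mid-gray
    out = out.clamp(1e-4, 1.0) ** gamma          # exposure/gamma correction
    return out.clamp(0.0, 1.0)

night_img = torch.rand(1, 3, 128, 256) * 0.3     # stand-in dark driving scene
enhanced = apply_filters(night_img, brightness=0.15, contrast=1.4, gamma=0.7)
print(night_img.mean().item(), "->", enhanced.mean().item())
# `enhanced` would then be fed to the segmentation network.
```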

13 pages, 8080 KiB  
Article
Linguistic Secret Sharing via Ambiguous Token Selection for IoT Security
by Kai Gao, Ji-Hwei Horng, Ching-Chun Chang and Chin-Chen Chang
Electronics 2024, 13(21), 4216; https://doi.org/10.3390/electronics13214216 (registering DOI) - 27 Oct 2024
Abstract
The proliferation of Internet of Things (IoT) devices has introduced significant security challenges, including weak authentication, insufficient data protection, and firmware vulnerabilities. To address these issues, we propose a linguistic secret sharing scheme tailored for IoT applications. This scheme leverages neural networks to embed private data within texts transmitted by IoT devices, using an ambiguous token selection algorithm that maintains the textual integrity of the cover messages. Our approach eliminates the need to share additional information for accurate data extraction while also enhancing security through a secret sharing mechanism. Experimental results demonstrate that the proposed scheme achieves approximately 50% accuracy in detecting steganographic text across two steganalysis networks. Additionally, the generated steganographic text preserves the semantic information of the cover text, evidenced by a BERT score of 0.948. This indicates that the proposed scheme performs well in terms of security. Full article
(This article belongs to the Special Issue IoT Security in the Age of AI: Innovative Approaches and Technologies)
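
A minimal (2, 2) secret-sharing sketch showing the splitting and combining mechanism in isolation; the paper's neural, ambiguous-token embedding of shares into text is not reproduced here:

```python
import os

def split(secret: bytes):
    share1 = os.urandom(len(secret))                        # uniformly random share
    share2 = bytes(a ^ b for a, b in zip(secret, share1))   # XOR pad carrying the secret
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split(b"sensor reading: 23.5C")
assert combine(s1, s2) == b"sensor reading: 23.5C"
print("each share alone is indistinguishable from noise:", s1.hex()[:16], "...")
```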

12 pages, 7664 KiB  
Article
Semantic Segmentation of the Prostate Based on Onefold and Joint Multimodal Medical Images Using YOLOv4 and U-Net
by Estera Kot, Tomasz Les, Zuzanna Krawczyk-Borysiak, Andrey Vykhodtsev and Krzysztof Siwek
Appl. Sci. 2024, 14(21), 9814; https://doi.org/10.3390/app14219814 (registering DOI) - 27 Oct 2024
Abstract
Magnetic Resonance Imaging is increasing in importance in prostate cancer diagnosis due to the high accuracy and quality of the examination procedure. However, this process requires a time-consuming analysis of the results. Currently, machine vision is widely used in many areas, and it enables automation and support in radiological studies. Successful detection of primary prostate tumors depends on the effective segmentation of the prostate itself. In some cases a CT scan is performed; in others, MRI is the selected modality. In either case, analysis of the resulting data becomes the bottleneck. This paper presents the effective training of deep learning models to segment the prostate based on onefold and multimodal medical images. This approach supports the computer-aided diagnosis (CAD) system for radiologists as the first step in cancer exams. A comparison of two approaches designed for prostate segmentation is described. The first combines YOLOv4, the object detection neural network, and U-Net for semantic segmentation based on onefold-modality MRI images. The second presents the same method trained on multimodal images—a CT and MRI mixed dataset. The learning process was carried out in a cloud environment using GPU cards. The experiments are based on data from 120 patients who underwent MRI and CT examinations. The trained models were evaluated with several metrics. In the prostate semantic segmentation process, better results were achieved with the mixed MRI and CT dataset. The best model achieved a Sørensen–Dice coefficient of 0.9685 at a threshold value of 0.6. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
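
For reference, the reported Sørensen–Dice coefficient can be computed from a predicted probability map thresholded at 0.6 as follows (the arrays are synthetic placeholders for real model output and ground truth):

```python
import numpy as np

def dice(pred_prob, gt_mask, threshold=0.6):
    pred = pred_prob >= threshold
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)

rng = np.random.default_rng(1)
gt = np.zeros((256, 256), dtype=np.uint8)
gt[100:160, 90:170] = 1                              # synthetic prostate mask
prob = gt * 0.5 + rng.uniform(0, 0.5, gt.shape)      # a reasonably good, noisy prediction
print(f"Dice @ 0.6: {dice(prob, gt):.4f}")
```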

20 pages, 1971 KiB  
Article
A Patch-Level Region-Aware Module with a Multi-Label Framework for Remote Sensing Image Captioning
by Yunpeng Li, Xiangrong Zhang, Tianyang Zhang, Guanchun Wang, Xinlin Wang and Shuo Li
Remote Sens. 2024, 16(21), 3987; https://doi.org/10.3390/rs16213987 (registering DOI) - 27 Oct 2024
Abstract
Recent Transformer-based works can generate high-quality captions for remote sensing images (RSIs). However, these methods generally feed global or grid visual features to a Transformer-based captioning model for associating cross-modal information, which limits performance. In this work, we investigate unexplored ideas for a remote sensing image captioning task, using a novel patch-level region-aware module with a multi-label framework. Due to an overhead perspective and a significantly larger scale in RSIs, a patch-level region-aware module is designed to filter the redundant information in the RSI scene, which benefits the Transformer-based decoder by attaining improved image perception. Technically, the trainable multi-label classifier capitalizes on semantic features as supplementary to the region-aware features. Moreover, modeling the inner relations of inputs is essential for understanding the RSI. Thus, we introduce region-oriented attention, which associates region features and semantic labels, omits the irrelevant regions to highlight relevant regions, and learns related semantic information. Extensive qualitative and quantitative experimental results show the superiority of our approach on the RSICD, UCM-Captions, and Sydney-Captions. The code for our method will be publicly available. Full article
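
A rough sketch of the two ingredients described above, patch filtering and multi-label prediction (tensor sizes and the top-k rule are assumptions, not the authors' module):

```python
import torch
import torch.nn as nn

B, P, D, L = 1, 196, 256, 30        # batch, patches, feature dim, label vocabulary
patch_feats = torch.randn(B, P, D)

# Region-aware filtering: score each patch and keep the top 25%.
scores = nn.Linear(D, 1)(patch_feats).squeeze(-1)            # (B, P)
keep = scores.topk(k=P // 4, dim=1).indices
region_feats = torch.gather(patch_feats, 1,
                            keep.unsqueeze(-1).expand(-1, -1, D))   # (B, P//4, D)

# Multi-label classifier on the pooled regions (sigmoid, not softmax).
logits = nn.Linear(D, L)(region_feats.mean(dim=1))
predicted_labels = (torch.sigmoid(logits) > 0.5).nonzero()
print(region_feats.shape, logits.shape)
```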

16 pages, 3941 KiB  
Article
DecoupleCLIP: A Novel Cross-Modality Decouple Model for Painting Captioning
by Mingliang Zhang, Xia Hou, Yujing Yan and Meng Sun
Electronics 2024, 13(21), 4207; https://doi.org/10.3390/electronics13214207 (registering DOI) - 27 Oct 2024
Abstract
Image captioning aims to describe the content of an image and plays a critical role in image understanding. Existing methods tend to generate text for natural images with distinct content; they do not perform well for paintings, whose meaning is more abstract, because objective parsing alone lacks the related knowledge. To alleviate this, we propose a novel cross-modality decouple model that generates the objective and subjective parsing separately. Concretely, we propose to encode both the subjective semantics and the implied knowledge contained in the paintings. The key point of our framework is its decoupled CLIP-based branches (DecoupleCLIP). For the objective caption branch, we utilize the CLIP model as the global feature extractor and construct a feature fusion module for global clues. Based on the objective caption branch structure, we add a multimodal fusion module called the artistic conception branch. In this way, the objective captions can constrain the artistic conception content. We conduct extensive experiments to demonstrate DecoupleCLIP's superior ability on our new dataset. Our model achieves nearly a 2% improvement over other comparison models on CIDEr. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
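
A schematic of the decoupling structure: one shared global image feature feeds two caption branches, and the subjective branch is additionally conditioned on the objective branch's output (toy linear heads stand in for the CLIP encoder and Transformer decoders):

```python
import torch
import torch.nn as nn

D, V = 512, 1000                      # feature dim, toy vocabulary size
global_feat = torch.randn(1, D)       # stand-in for a CLIP global image embedding

objective_head = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, V))
subjective_head = nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, V))

objective_logits = objective_head(global_feat)               # literal ("objective") content
obj_summary = nn.Linear(V, D)(objective_logits)              # project back to feature space
fusion = torch.cat([global_feat, obj_summary], dim=-1)       # subjective branch sees both
subjective_logits = subjective_head(fusion)                  # implied meaning, constrained
print(objective_logits.shape, subjective_logits.shape)       # by the objective branch
```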

21 pages, 9024 KiB  
Article
Atypical Brain Connectivity During Pragmatic and Semantic Language Processing in Children with Autism
by Amparo V. Márquez-García, Vasily A. Vakorin, Nataliia Kozhemiako, Grace Iarocci, Sylvain Moreno and Sam M. Doesburg
Brain Sci. 2024, 14(11), 1066; https://doi.org/10.3390/brainsci14111066 (registering DOI) - 26 Oct 2024
Abstract
Background/Objectives: Children with Autism Spectrum Disorder (ASD) face challenges in social communication due to difficulties in considering context, processing information, and interpreting social cues. This study aims to explore the neural processes related to pragmatic language communication in children with ASD and address the research question of how functional brain connectivity operates during complex pragmatic language tasks. Methods: We examined differences in brain functional connectivity between children with ASD and typically developing peers while they engaged in video recordings of spoken language tasks. We focused on two types of speech acts: semantic and pragmatic. Results: Our results showed differences between groups during the pragmatic and semantic language processing, indicating more idiosyncratic connectivity in children with ASD in the Left Somatomotor and Left Limbic networks, suggesting that these networks play a role in task-dependent functional connectivity. Additionally, these functional differences were mainly localized to the left hemisphere. Full article
(This article belongs to the Collection on Neurobiology of Language)
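
A generic functional-connectivity computation of the kind such analyses build on, pairwise correlation between channel time series (synthetic data; this is not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 2000
asd_recording = rng.normal(size=(n_channels, n_samples))   # stand-in for one ASD participant
td_recording = rng.normal(size=(n_channels, n_samples))    # stand-in for one typically developing peer

asd_conn = np.corrcoef(asd_recording)        # (8, 8) functional connectivity matrix
td_conn = np.corrcoef(td_recording)

# "Idiosyncratic" connectivity could then be quantified, for example, as each
# participant's distance from a group-average matrix.
print(np.abs(asd_conn - td_conn).mean())
```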

18 pages, 802 KiB  
Article
Logical Spaces and Subjunctive Tenses
by Rui Marques
Languages 2024, 9(11), 334; https://doi.org/10.3390/languages9110334 (registering DOI) - 26 Oct 2024
Abstract
Apparently, subjunctive tenses express temporal location, and, in some constructions, the past subjunctive can also express modal values. A long-standing debate exists over whether—even in the latter case—verbal tenses are temporal operators or whether in some constructions they convey temporal meaning, and in others they have a modal value, maybe derived from their basic temporal meaning. The assumption that the basic meaning of subjunctive tenses is of a temporal nature is challenged by the fact that the future subjunctive, which exists in Portuguese, has the same temporal interpretation as the present subjunctive, with which it is in complementary distribution. Moreover, no clear modal difference is observed between the future and present subjunctive tenses. In this paper, I present arguments against the separation of the temporal and modal values of the subjunctive tenses. I posit, instead, that a semantic analysis of subjunctive morphemes must consider ordered pairs of times and possible worlds; only in this way can we adequately capture the observed data and allow a comprehensive view of the system of subjunctive tenses in Portuguese (which will be extendable to Romance languages in general). If we accept this proposal, then the modal as well as temporal information associated with subjunctive tenses follows naturally, including the systematic futurate reading of subjunctive temporal clauses. Full article
(This article belongs to the Special Issue Semantics and Meaning Representation)
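
The proposal that subjunctive morphemes are interpreted against ordered pairs of times and possible worlds can be made concrete with a toy denotation (purely illustrative; the times, worlds, and proposition are invented):

```python
from itertools import product

times = ["past", "present", "future"]
worlds = ["w_actual", "w_counterfactual"]

# A hypothetical proposition, "it rains", true at exactly these (time, world) pairs.
rains = {("future", "w_actual"), ("past", "w_counterfactual")}

def denotation(prop, time, world):
    # A proposition's denotation maps an ordered (time, world) pair to a truth value.
    return (time, world) in prop

for t, w in product(times, worlds):
    print(f"[[it rains]]({t}, {w}) = {denotation(rains, t, w)}")
```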

28 pages, 2887 KiB  
Article
Leveraging Large Language Models for Enhancing Literature-Based Discovery
by Ikbal Taleb, Alramzana Nujum Navaz and Mohamed Adel Serhani
Big Data Cogn. Comput. 2024, 8(11), 146; https://doi.org/10.3390/bdcc8110146 - 25 Oct 2024
Abstract
The exponential growth of biomedical literature necessitates advanced methods for Literature-Based Discovery (LBD) to uncover hidden, meaningful relationships and generate novel hypotheses. This research integrates Large Language Models (LLMs), particularly transformer-based models, to enhance LBD processes. Leveraging LLMs’ capabilities in natural language understanding, information extraction, and hypothesis generation, we propose a framework that improves the scalability and precision of traditional LBD methods. Our approach integrates LLMs with semantic enhancement tools, continuous learning, domain-specific fine-tuning, and robust data cleansing processes, enabling automated analysis of vast text and identification of subtle patterns. Empirical validations, including scenarios on the effects of garlic on blood pressure and nutritional supplements on health outcomes, demonstrate the effectiveness of our LLM-based LBD framework in generating testable hypotheses. This research advances LBD methodologies, fosters interdisciplinary research, and accelerates discovery in the biomedical domain. Additionally, we discuss the potential of LLMs in drug discovery, highlighting their ability to extract and present key information from the literature. Detailed comparisons with traditional methods, including Swanson’s ABC model, highlight our approach’s advantages. This comprehensive approach opens new avenues for knowledge discovery and has the potential to revolutionize research practices. Future work will refine LLM techniques, explore Retrieval-Augmented Generation (RAG), and expand the framework to other domains, with a focus on dehallucination. Full article
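
For comparison, Swanson's ABC model mentioned above can be sketched in a few lines: if term A co-occurs with B in some papers and B co-occurs with C in others, A–C is a candidate hidden link (the toy "papers" below are fabricated):

```python
papers = [
    {"garlic", "platelet aggregation"},
    {"platelet aggregation", "blood pressure"},
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "raynaud's syndrome"},
]

def abc_hypotheses(a_term):
    # B terms: everything that co-occurs with A.
    b_terms = {t for p in papers if a_term in p for t in p} - {a_term}
    # C terms: co-occur with some B in papers that never mention A.
    c_terms = {t for p in papers for t in p
               if p & b_terms and a_term not in p} - b_terms - {a_term}
    return c_terms

print(abc_hypotheses("garlic"))      # -> {'blood pressure'}: a testable A-C hypothesis
```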
