
Search Results (721)

Search Parameters:
Keywords = GPT

13 pages, 1283 KiB  
Article
Increasing the Reliability of Software Systems Using a Large-Language-Model-Based Solution for Onboarding
by Ioan Cristian Schuszter and Marius Cioca
Inventions 2024, 9(4), 79; https://doi.org/10.3390/inventions9040079 - 15 Jul 2024
Viewed by 108
Abstract
Software systems are often maintained by a group of experienced software developers in order to make faults that could bring the system down less likely. High turnover in organizations such as CERN makes it important to find ways of onboarding newcomers onto a technical project rapidly. This paper focuses on optimizing the way people get up to speed on the business logic and technologies used on the project by means of a large language model enhanced with domain-specific knowledge from the group or team’s internal documentation. The novelty of this approach lies in gathering these different open-source methods for developing a chatbot and applying them in an industrial use case.
12 pages, 270 KiB  
Article
The Vision of University Students from the Educational Field in the Integration of ChatGPT
by Sara Cebrián Cifuentes, Empar Guerrero Valverde and Sabina Checa Caballero
Digital 2024, 4(3), 648-659; https://doi.org/10.3390/digital4030032 - 15 Jul 2024
Viewed by 122
Abstract
ChatGPT has significantly increased in popularity in recent months because of its capacity to generate novel content and provide genuine responses to questions. Nevertheless, like all technologies, it is crucial to assess its limitations and features prior to implementing it in an educational setting. A major obstacle associated with ChatGPT is its tendency to produce consistent yet occasionally unreliable and inaccurate responses. Our study provided students with training in this area, and its objective was to analyse the opinion of university students studying education-related degrees regarding the usefulness of ChatGPT for their learning. We used a mixed methodology and two instruments for data collection: questionnaires and discussion groups. The sample comprised 150 university students pursuing degrees in teaching and social education. The results show that the majority of students are familiar with the technology but have not had any formal training in it at university. They use this tool to complete academic assignments outside the classroom, and they emphasise the need for training in it. Furthermore, following the training, the students highlighted an increase in motivation and a positive impact on the development of generic skills, such as information analysis, synthesis and management, problem solving, and learning how to learn. Ultimately, this study provides an opportunity to consider implementing educational training in this tool at the university level in order to ensure its appropriate use.
(This article belongs to the Collection Multimedia-Based Digital Learning)
12 pages, 1289 KiB  
Article
Mental Health Applications of Generative AI and Large Language Modeling in the United States
by Sri Banerjee, Pat Dunn, Scott Conard and Asif Ali
Int. J. Environ. Res. Public Health 2024, 21(7), 910; https://doi.org/10.3390/ijerph21070910 - 12 Jul 2024
Viewed by 311
Abstract
(1) Background: Artificial intelligence (AI) has flourished in recent years, and generative AI in particular has found broad applications across many disciplines. While mental illness is on the rise, AI has proven valuable in aiding the diagnosis and treatment of mental disorders. However, there is little research on precisely how much public interest there is in AI technology. (2) Methods: We performed a Google Trends search for “AI and mental health” and compared relative search volume (RSV) indices for “AI”, “AI and depression”, and “AI and anxiety”. This time series study employed Box–Jenkins modeling to forecast long-term interest through the end of 2024. (3) Results: Within the United States, interest in AI steadily increased throughout 2023, with some anomalies due to media reporting. Predictive models indicate that this trend will increase by 114% through the end of 2024, with public interest in AI applications on the rise. (4) Conclusions: Awareness of AI increased drastically throughout 2023, especially in mental health. This demonstrates growing public awareness of mental health and AI, making advocacy and education about AI technology of paramount importance.
(This article belongs to the Special Issue Digital Mental Health: Changes, Challenges and Success Strategies)
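The Box–Jenkins approach fits ARIMA models to a time series before forecasting. As a rough illustration of the forecasting step only, here is a minimal pure-Python AR(1) fit on hypothetical relative-search-volume values; the study itself used Google Trends data and full ARIMA model identification, which this sketch does not reproduce.

```python
def ar1_forecast(series, steps):
    """Forecast `steps` future points with a least-squares AR(1) fit:
    x[t+1] ~ c + phi * x[t]. A toy stand-in for full Box-Jenkins ARIMA."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var          # autoregressive coefficient
    c = my - phi * mx        # intercept
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Hypothetical monthly relative-search-volume values (0-100 scale).
rsv = [10, 12, 15, 19, 24, 30, 37, 45]
forecast = ar1_forecast(rsv, 3)  # three future months, each above the last
```

With an estimated phi above 1, the fitted model extrapolates the upward trend, which is the qualitative behavior the abstract describes.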
16 pages, 1344 KiB  
Article
Evaluating Large Language Model (LLM) Performance on Established Breast Classification Systems
by Syed Ali Haider, Sophia M. Pressman, Sahar Borna, Cesar A. Gomez-Cabello, Ajai Sehgal, Bradley C. Leibovich and Antonio Jorge Forte
Diagnostics 2024, 14(14), 1491; https://doi.org/10.3390/diagnostics14141491 - 11 Jul 2024
Viewed by 266
Abstract
Medical researchers are increasingly utilizing advanced LLMs like ChatGPT-4 and Gemini to enhance diagnostic processes in the medical field. This research focuses on their ability to comprehend and apply complex medical classification systems for breast conditions, which can significantly aid plastic surgeons in making informed decisions for diagnosis and treatment, ultimately leading to improved patient outcomes. Fifty clinical scenarios were created to evaluate the classification accuracy of each LLM across five established breast-related classification systems. Scores from 0 to 2 were assigned to LLM responses to denote incorrect, partially correct, or completely correct classifications. Descriptive statistics were employed to compare the performances of ChatGPT-4 and Gemini. Gemini exhibited superior overall performance, achieving 98% accuracy compared to ChatGPT-4’s 71%. While both models performed well in the Baker classification for capsular contracture and UTSW classification for gynecomastia, Gemini consistently outperformed ChatGPT-4 in other systems, such as the Fischer Grade Classification for gender-affirming mastectomy, Kajava Classification for ectopic breast tissue, and Regnault Classification for breast ptosis. With further development, integrating LLMs into plastic surgery practice will likely enhance diagnostic support and decision making.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
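The 0-2 scoring rubric described above can be sketched as follows; the partial-credit rule (right classification system, wrong grade) and the example labels are illustrative assumptions, not the authors' published scoring code.

```python
def score_response(expected, predicted):
    """0 = incorrect, 1 = partially correct, 2 = completely correct,
    mirroring the 0-2 rubric in the abstract. Partial credit here means
    the right classification system with the wrong grade (an assumption)."""
    if predicted == expected:
        return 2
    if predicted.split()[0] == expected.split()[0]:
        return 1
    return 0

# Toy (expected, predicted) pairs using classification-style labels.
cases = [("Baker IV", "Baker IV"), ("Baker IV", "Baker II"), ("Regnault II", "Kajava I")]
scores = [score_response(e, p) for e, p in cases]
accuracy = sum(scores) / (2 * len(scores))  # fraction of available points
```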
22 pages, 353 KiB  
Article
GPTs or Grim Position Threats? The Potential Impacts of Large Language Models on Non-Managerial Jobs and Certifications in Cybersecurity
by Raza Nowrozy
Informatics 2024, 11(3), 45; https://doi.org/10.3390/informatics11030045 - 11 Jul 2024
Viewed by 268
Abstract
ChatGPT, a Large Language Model (LLM) utilizing Natural Language Processing (NLP), has caused concerns about its impact on job sectors, including cybersecurity. This study assesses ChatGPT’s impacts in non-managerial cybersecurity roles using the NICE Framework and Technological Displacement theory. It also explores its potential to pass top cybersecurity certification exams. Findings reveal ChatGPT’s promise to streamline some jobs, especially those requiring memorization. Moreover, this paper highlights ChatGPT’s challenges and limitations, such as ethical implications, LLM limitations, and Artificial Intelligence (AI) security. The study suggests that LLMs like ChatGPT could transform the cybersecurity landscape, causing job losses, skill obsolescence, labor market shifts, and mixed socioeconomic impacts. A shift in focus from memorization to critical thinking, and collaboration between LLM developers and cybersecurity professionals, is recommended.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
13 pages, 1210 KiB  
Article
Utilizing ChatGPT for Curriculum Learning in Developing a Clinical Grade Pneumothorax Detection Model: A Multisite Validation Study
by Joseph Chang, Kuan-Jung Lee, Ti-Hao Wang and Chung-Ming Chen
J. Clin. Med. 2024, 13(14), 4042; https://doi.org/10.3390/jcm13144042 - 10 Jul 2024
Viewed by 340
Abstract
Background: Pneumothorax detection is often challenging, particularly when radiographic features are subtle. This study introduces a deep learning model that integrates curriculum learning and ChatGPT to enhance the detection of pneumothorax in chest X-rays. Methods: The model training began with large, easily detectable pneumothoraces, gradually incorporating smaller, more complex cases to prevent performance plateauing. The training dataset comprised 6445 anonymized radiographs, validated across multiple sites, and further tested for generalizability in diverse clinical subgroups. Performance metrics were analyzed using descriptive statistics. Results: The model achieved a sensitivity of 0.97 and a specificity of 0.97, with an area under the curve (AUC) of 0.98, demonstrating a performance comparable to that of many FDA-approved devices. Conclusions: This study suggests that a structured approach to training deep learning models, through curriculum learning and enhanced data extraction via natural language processing, can facilitate and improve the training of AI models for pneumothorax detection.
(This article belongs to the Section Pulmonology)
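Curriculum learning, as described in this abstract, orders training data from easy to hard and widens the pool stage by stage. A minimal sketch, assuming a per-image difficulty score stands in for pneumothorax size (the field names and staging scheme are illustrative, not the authors'):

```python
def curriculum_stages(samples, n_stages):
    """Split training samples into curriculum stages, easiest first.
    Each stage trains on everything seen so far plus a harder slice,
    mirroring training that starts with large, obvious pneumothoraces."""
    ordered = sorted(samples, key=lambda s: s["difficulty"])
    stage_size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[: (i + 1) * stage_size] for i in range(n_stages)]

# Hypothetical X-ray records: lower difficulty = larger, easier pneumothorax.
data = [
    {"id": "a", "difficulty": 0.9},
    {"id": "b", "difficulty": 0.2},
    {"id": "c", "difficulty": 0.5},
    {"id": "d", "difficulty": 0.7},
]
stages = curriculum_stages(data, 2)  # stage 1 = easy half, stage 2 = all
```

Growing the pool cumulatively, rather than swapping easy cases out, is one common way to avoid the performance plateau the abstract mentions.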
25 pages, 1211 KiB  
Article
Assessing the Accuracy of Artificial Intelligence Models in Scoliosis Classification and Suggested Therapeutic Approaches
by Artur Fabijan, Agnieszka Zawadzka-Fabijan, Robert Fabijan, Krzysztof Zakrzewski, Emilia Nowosławska and Bartosz Polis
J. Clin. Med. 2024, 13(14), 4013; https://doi.org/10.3390/jcm13144013 - 9 Jul 2024
Viewed by 373
Abstract
Background: Open-source artificial intelligence models (OSAIMs) are increasingly being applied in various fields, including IT and medicine, offering promising solutions for diagnostic and therapeutic interventions. In response to the growing interest in AI for clinical diagnostics, we evaluated several OSAIMs—such as ChatGPT 4, Microsoft Copilot, Gemini, PopAi, You Chat, Claude, and the specialized PMC-LLaMA 13B—assessing their abilities to classify scoliosis severity and recommend treatments based on radiological descriptions from AP radiographs. Methods: Our study employed a two-stage methodology, where descriptions of single-curve scoliosis were analyzed by AI models following their evaluation by two independent neurosurgeons. Statistical analysis involved the Shapiro–Wilk test for normality, with non-normal distributions described using medians and interquartile ranges. Inter-rater reliability was assessed using Fleiss’ kappa, and performance metrics, like accuracy, sensitivity, specificity, and F1 scores, were used to evaluate the AI systems’ classification accuracy. Results: The analysis indicated that although some AI systems, like ChatGPT 4, Copilot, and PopAi, accurately reflected the recommended Cobb angle ranges for disease severity and treatment, others, such as Gemini and Claude, required further calibration. Particularly, PMC-LLaMA 13B expanded the classification range for moderate scoliosis, potentially influencing clinical decisions and delaying interventions. Conclusions: These findings highlight the need for the continuous refinement of AI models to enhance their clinical applicability.
(This article belongs to the Section Orthopedics)
13 pages, 1622 KiB  
Article
Generative Artificial Intelligence, Human Agency and the Future of Cultural Heritage
by Dirk H. R. Spennemann
Heritage 2024, 7(7), 3597-3609; https://doi.org/10.3390/heritage7070170 - 9 Jul 2024
Viewed by 466
Abstract
The first half of 2023 was dominated by public discussion of the nature and implications of generative artificial intelligence (genAI) models, which are poised to become the most significant cross-cultural global disruptor since the invention of the World-Wide Web. It can be predicted that genAI will affect how cultural heritage is managed and practiced, primarily by providing analysis and decision-making tools, but also through genAI-generated texts and images, in particular reconstructions of objects and sites. The more speculative interpretations of contexts and alternative interpretations generated by genAI models may constitute manifestations of cultural heritage in their own right. But do these constitute human cultural heritage, or are they AI cultural heritage? This paper is a deliberation on the realities and future(s) of cultural heritage in a genAI and post-genAI world.
(This article belongs to the Special Issue AI and the Future of Cultural Heritage)
27 pages, 4743 KiB  
Article
A Qualitative Evaluation of ChatGPT4 and PaLM2’s Response to Patient’s Questions Regarding Age-Related Macular Degeneration
by George Adrian Muntean, Anca Marginean, Adrian Groza, Ioana Damian, Sara Alexia Roman, Mădălina Claudia Hapca, Anca Mădălina Sere, Roxana Mihaela Mănoiu, Maximilian Vlad Muntean and Simona Delia Nicoară
Diagnostics 2024, 14(14), 1468; https://doi.org/10.3390/diagnostics14141468 - 9 Jul 2024
Viewed by 335
Abstract
Patient compliance in chronic illnesses is essential for disease management. This also applies to age-related macular degeneration (AMD), a chronic acquired retinal degeneration that needs constant monitoring and patient cooperation. Patients with AMD can therefore benefit from being properly informed about their disease, regardless of the condition’s stage. Information is essential in keeping them compliant with lifestyle changes, regular monitoring, and treatment. Large language models have shown potential in numerous fields, including medicine, with remarkable use cases. In this paper, we assessed the capacity of two large language models (LLMs), ChatGPT4 and PaLM2, to answer questions frequently asked by patients with AMD. After searching AMD-patient-dedicated websites for frequently asked questions, we curated a set of 143 questions. The questions were then transformed into scenarios that were answered by ChatGPT4, PaLM2, and three ophthalmologists. Afterwards, the answers provided by the two LLMs to a set of 133 questions were evaluated by two ophthalmologists, who graded each answer on a five-point Likert scale. The models were evaluated on six qualitative criteria: (C1) reflects clinical and scientific consensus, (C2) likelihood of possible harm, (C3) evidence of correct reasoning, (C4) evidence of correct comprehension, (C5) evidence of correct retrieval, and (C6) missing content. Out of 133 questions, ChatGPT4 received a score of five from both reviewers for 118 questions (88.72%) on C1, 130 (97.74%) on C2, 131 (98.50%) on C3, 133 (100%) on C4, 132 (99.25%) on C5, and 122 (91.73%) on C6, while PaLM2 did so for 81 questions (60.90%) on C1, 114 (85.71%) on C2, 115 (86.47%) on C3, 124 (93.23%) on C4, 113 (84.97%) on C5, and 93 (69.92%) on C6. Despite the overall high performance, some answers were incomplete or inaccurate, and the paper explores the types of errors produced by these LLMs. Our study shows that ChatGPT4 and PaLM2 are valuable instruments for patient information and education; however, since these models still have limitations, they should be used in addition to, not instead of, the advice provided by physicians.
(This article belongs to the Special Issue Diagnosis, Treatment and Management of Eye Diseases, Second Edition)
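The per-criterion percentages reported in the abstract are shares of answers graded five by both reviewers. A small sketch of that computation on made-up grades (the reviewer data here are invented for illustration):

```python
def pct_top_scores(grades_a, grades_b, top=5):
    """Share (in %) of answers graded `top` by BOTH reviewers on a
    5-point Likert scale, as in the C1-C6 evaluation described above."""
    both = sum(1 for a, b in zip(grades_a, grades_b) if a == b == top)
    return round(100 * both / len(grades_a), 2)

# Toy grades for 4 answers from two hypothetical reviewers.
r1 = [5, 5, 4, 5]
r2 = [5, 3, 5, 5]
pct = pct_top_scores(r1, r2)
```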
15 pages, 1430 KiB  
Article
The Moderating Effects of Gender and Study Discipline in the Relationship between University Students’ Acceptance and Use of ChatGPT
by Ibrahim A. Elshaer, Ahmed M. Hasanein and Abu Elnasr E. Sobaih
Eur. J. Investig. Health Psychol. Educ. 2024, 14(7), 1981-1995; https://doi.org/10.3390/ejihpe14070132 - 8 Jul 2024
Viewed by 376
Abstract
The intensive adoption of ChatGPT by university students for learning has encouraged many scholars to examine the variables that affect their use of such AI in their learning. This study adds to that growing body of work, especially regarding the moderating role of students’ gender and study discipline in their acceptance and usage of ChatGPT in the learning process. It expanded the Unified Theory of Acceptance and Use of Technology (UTAUT) by integrating gender and study discipline as moderators. Responses were collected from students in Saudi universities across different study disciplines and genders. The results of a structural model using Smart PLS showed a significant moderating effect of gender on the relationship between performance expectancy and ChatGPT usage: the impact of performance expectancy in fostering ChatGPT usage was stronger in male than in female students. Moreover, social influence was shown to affect males significantly more than females in relation to ChatGPT usage. In addition, the findings showed that study discipline significantly moderates the link between social influence and ChatGPT usage, with social influence affecting ChatGPT use in the social sciences more than in the applied sciences. The study’s implications are discussed.
12 pages, 391 KiB  
Article
SCC-GPT: Source Code Classification Based on Generative Pre-Trained Transformers
by Mohammad D. Alahmadi, Moayad Alshangiti and Jumana Alsubhi
Mathematics 2024, 12(13), 2128; https://doi.org/10.3390/math12132128 - 7 Jul 2024
Viewed by 278
Abstract
Developers often rely on online resources, such as Stack Overflow (SO), to seek assistance for programming tasks. To facilitate effective search and resource discovery, manual tagging of questions and posts with the appropriate programming language is essential. However, accurate tagging is not consistently achieved, leading to the need for the automated classification of code snippets into the correct programming language as a tag. In this study, we introduce a novel approach to automated classification of code snippets from Stack Overflow (SO) posts into programming languages using generative pre-trained transformers (GPT). Our method, which does not require additional training on labeled data or dependency on pre-existing labels, classifies 224,107 code snippets into 19 programming languages. We employ the text-davinci-003 model of ChatGPT-3.5 and postprocess its responses to accurately identify the programming language. Our empirical evaluation demonstrates that our GPT-based model (SCC-GPT) significantly outperforms existing methods, achieving a median F1-score improvement that ranges from +6% to +31%. These findings underscore the effectiveness of SCC-GPT in enhancing code snippet classification, offering a cost-effective and efficient solution for developers who rely on SO for programming assistance.
(This article belongs to the Special Issue AI-Augmented Software Engineering)
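The postprocessing step the abstract mentions, mapping a free-text model reply onto one of the fixed language tags, might look like the sketch below; the alias table and function name are assumptions for illustration, not the published SCC-GPT code.

```python
# Canonical tags for a few of the target languages; the real pipeline
# covers 19 languages, this alias table is a small illustrative subset.
CANONICAL = {
    "js": "javascript", "javascript": "javascript",
    "py": "python", "python": "python",
    "c++": "cpp", "cpp": "cpp",
    "c#": "csharp", "csharp": "csharp",
}

def normalize_language(llm_response):
    """Reduce an LLM reply like 'The language is Python.' to a tag."""
    text = llm_response.lower().strip().rstrip(".")
    if text in CANONICAL:                    # exact match first
        return CANONICAL[text]
    for token in text.replace(",", " ").split():
        if token in CANONICAL:               # otherwise scan for an alias
            return CANONICAL[token]
    return "unknown"
```

Normalizing the model's free text this way is what makes responses comparable against ground-truth tags when computing the F1 scores the abstract reports.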
20 pages, 835 KiB  
Article
Exploring the Integration of Artificial Intelligence-Based ChatGPT into Mathematics Instruction: Perceptions, Challenges, and Implications for Educators
by Felix Oromena Egara and Mogege Mosimege
Educ. Sci. 2024, 14(7), 742; https://doi.org/10.3390/educsci14070742 - 6 Jul 2024
Viewed by 457
Abstract
This research investigates how secondary school mathematics educators in the Nsukka Education Zone, Enugu State, Nigeria, perceive the incorporation of artificial intelligence-based ChatGPT into teaching mathematics. The study employed a sequential exploratory mixed-methods strategy, starting with a systematic survey and followed by detailed interviews. The Mathematics Teachers’ Awareness and Perceptions of AI-based ChatGPT Questionnaire (MTAPACQ) used in this study was adapted from an existing online survey and administered to 80 mathematics teachers, who were selected using stratified random sampling to ensure varied representation across different local government areas. The survey explored teachers’ awareness, utilisation, and perceptions of ChatGPT. Following the quantitative phase, in-depth qualitative interviews were conducted with a subset of five teachers who were familiar with ChatGPT to gain deeper insights into their experiences. The findings indicate limited awareness of ChatGPT, with only 17% demonstrating familiarity with the technology. The infrequent utilisation of ChatGPT in mathematics teaching is mainly associated with this limited awareness. Teachers who integrate ChatGPT report positive outcomes, including improved teaching effectiveness, heightened student engagement, and enhanced comprehension of complex concepts. Nevertheless, the overall perceptions of the tool’s impact on mathematics teaching and learning are moderate. The identified challenges in relation to integration include technical adaptability, curriculum alignment, and the need for customisation to accommodate diverse learning styles. This study emphasises the significance of continuous professional development and ongoing support for teachers to integrate AI-based ChatGPT into mathematics instruction proficiently. The insights derived from the findings hold value for educators, policymakers, and technology developers aspiring to elevate the role of artificial intelligence in mathematics education.
16 pages, 4071 KiB  
Article
Enhancing Software Code Vulnerability Detection Using GPT-4o and Claude-3.5 Sonnet: A Study on Prompt Engineering Techniques
by Jaehyeon Bae, Seoryeong Kwon and Seunghwan Myeong
Electronics 2024, 13(13), 2657; https://doi.org/10.3390/electronics13132657 - 6 Jul 2024
Viewed by 412
Abstract
This study investigates the efficacy of advanced large language models, specifically GPT-4o, Claude-3.5 Sonnet, and GPT-3.5 Turbo, in detecting software vulnerabilities. Our experiment utilized vulnerable and secure code samples from the NIST Software Assurance Reference Dataset (SARD), focusing on C++, Java, and Python. We employed three distinct prompting techniques: Concise, Tip Setting, and Step-by-Step. The results demonstrate that GPT-4o and Claude-3.5 Sonnet significantly outperform GPT-3.5 Turbo in vulnerability detection. GPT-4o showed the greatest improvement with the Step-by-Step prompt, achieving an F1 score of 0.9072. Claude-3.5 Sonnet exhibited consistently high performance across all prompt types, with its Step-by-Step prompt yielding the best overall results (F1 score: 0.8933, AUC: 0.74). In contrast, GPT-3.5 Turbo showed minimal performance changes across prompts, with the Tip Setting prompt performing best (AUC: 0.65, F1 score: 0.6772), yet significantly below the other models. Our findings highlight the potential of advanced models in enhancing software security and underscore the importance of prompt engineering in optimizing their performance.
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)
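The three prompt styles are named but not quoted in this listing; the templates below are illustrative guesses at their shape, not the authors' exact wording.

```python
def build_prompt(style, code):
    """Build a vulnerability-detection prompt in one of three styles
    loosely modeled on the Concise / Tip Setting / Step-by-Step
    techniques named in the abstract (wording is assumed)."""
    base = f"Does the following code contain a security vulnerability?\n\n{code}\n"
    if style == "concise":
        return base + "Answer: vulnerable or secure."
    if style == "tip":
        return ("Tip: pay close attention to unchecked input, buffer sizes, "
                "and unsafe API calls.\n" + base + "Answer: vulnerable or secure.")
    if style == "step_by_step":
        return (base + "Think step by step: (1) trace the data flow, "
                "(2) check each risky operation, (3) conclude "
                "vulnerable or secure.")
    raise ValueError(f"unknown prompt style: {style}")
```

Keeping the base question identical across styles is what lets an experiment like this attribute score differences to the prompting technique alone.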
15 pages, 8467 KiB  
Article
LLM-Powered Natural Language Text Processing for Ontology Enrichment
by Assel Mukanova, Marek Milosz, Assem Dauletkaliyeva, Aizhan Nazyrova, Gaziza Yelibayeva, Dmitrii Kuzin and Lazzat Kussepova
Appl. Sci. 2024, 14(13), 5860; https://doi.org/10.3390/app14135860 - 4 Jul 2024
Viewed by 363
Abstract
This paper describes a method and technology for processing natural language texts and extracting data from the text that correspond to the semantics of an ontological model. The proposed method is distinguished by the use of a Large Language Model algorithm for text analysis. The extracted data are stored in an intermediate format, after which individuals and properties that reflect the specified semantics are programmatically created in the ontology. The proposed technology is implemented using the example of an ontological model that describes the geographical configuration and administrative–territorial division of Kazakhstan. The proposed method and technology can be applied in any subject areas for which ontological models have been developed. The results of the study can significantly improve the efficiency of using knowledge bases based on semantic networks by converting texts in natural languages into semantically linked data.
(This article belongs to the Section Computing and Artificial Intelligence)
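The step of programmatically creating individuals and properties from the intermediate format could be sketched as below; the dict-based "ontology", schema, and field names are invented stand-ins for illustration (a real pipeline would target an OWL/RDF store), though the example region and capital are real.

```python
# LLM-extracted records in a hypothetical intermediate format.
extracted = [
    {"type": "Region", "name": "Akmola", "capital": "Kokshetau"},
    {"type": "Region", "name": "Almaty Region", "capital": "Qonaev"},
]

# A minimal in-memory stand-in for an ontology: named individuals
# plus (subject, predicate, object) property triples.
ontology = {"individuals": {}, "properties": []}
for item in extracted:
    ind = f'{item["type"]}:{item["name"]}'          # individual IRI-like key
    ontology["individuals"][ind] = item["type"]     # class assertion
    ontology["properties"].append((ind, "hasCapital", item["capital"]))
```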
18 pages, 770 KiB  
Article
Navigating the Evolving Landscape of Teaching and Learning: University Faculty and Staff Perceptions of the Artificial Intelligence-Altered Terrain
by Veera Kallunki, Päivi Kinnunen, Eeva Pyörälä, Anne Haarala-Muhonen, Nina Katajavuori and Liisa Myyry
Educ. Sci. 2024, 14(7), 727; https://doi.org/10.3390/educsci14070727 - 3 Jul 2024
Viewed by 350
Abstract
This study examines the perspectives of university faculty and staff regarding the influence of artificial intelligence on the higher education teaching and learning landscape following the global launch of free-to-use OpenAI ChatGPT in the autumn of 2022. The participants were 79 university faculty and staff from diverse academic fields across all campuses of a multidisciplinary university in Finland. The data were collected in two phases in May–June 2023 and in March 2024, with focus group interviews and Learning Café discussions. The results showed that AI has a broad impact on teaching and studying in higher education. Six main categories were identified: (1) the impact of AI on students’ learning processes, (2) the impact of AI on teaching, (3) the knowledge required of future employees and the impact of AI on them, (4) ethical and economic issues, (5) the development of AI or its use in the future, and (6) the nature of the change brought about by artificial intelligence. AI is already making inroads into higher education, and participants underscored its dual impact on teaching and learning, highlighting both opportunities and challenges. While teachers recognized AI’s potential to enhance teaching and assessment methods, they also acknowledged the need to adapt their courses accordingly. They expressed concerns about understanding AI’s impact on students’ learning processes and their own contributions to learning assignments. The participants emphasized the necessity of providing support and training for teachers to ensure AI is meaningfully and effectively integrated into teaching and learning practices and landscapes.