Search Results (14)

Search Parameters:
Keywords = embodied AI agent

13 pages, 270 KiB  
Perspective
Active Inference for Learning and Development in Embodied Neuromorphic Agents
by Sarah Hamburg, Alejandro Jimenez Rodriguez, Aung Htet and Alessandro Di Nuovo
Entropy 2024, 26(7), 582; https://doi.org/10.3390/e26070582 - 9 Jul 2024
Viewed by 881
Abstract
Taking inspiration from humans can help catalyse embodied AI solutions for important real-world applications. Current human-inspired tools include neuromorphic systems and the developmental approach to learning. However, this developmental neurorobotics approach is currently lacking important frameworks for human-like computation and learning. We propose that human-like computation is inherently embodied, with its interface to the world being neuromorphic, and its learning processes operating across different timescales. These constraints necessitate a unified framework: active inference, underpinned by the free energy principle (FEP). Herein, we describe theoretical and empirical support for leveraging this framework in embodied neuromorphic agents with autonomous mental development. We additionally outline current implementation approaches (including toolboxes) and challenges, and we provide suggestions for next steps to catalyse this important field. Full article
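As a rough illustration of the free-energy objective this framework rests on, the sketch below computes variational free energy (complexity minus accuracy) for a discrete belief. The toy distributions and function names are illustrative assumptions, not the authors' frameworks or toolboxes:

```python
import math

def free_energy(q, prior, likelihood):
    """Variational free energy of a discrete belief q over hidden states.

    F = KL(q || prior) - E_q[log p(o | s)]   (complexity - accuracy)
    q, prior: probability vectors; likelihood: p(observation | state) per state.
    """
    # Complexity: how far the posterior belief has moved from the prior.
    complexity = sum(qs * math.log(qs / ps)
                     for qs, ps in zip(q, prior) if qs > 0)
    # Accuracy: expected log-likelihood of the observation under the belief.
    accuracy = sum(qs * math.log(ls)
                   for qs, ls in zip(q, likelihood) if qs > 0)
    return complexity - accuracy
```

An active-inference agent would update its beliefs (or select actions) to drive this quantity down; beliefs that diverge from the prior without improving predictive accuracy incur a higher cost.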
35 pages, 2478 KiB  
Article
Attention-Based Variational Autoencoder Models for Human–Human Interaction Recognition via Generation
by Bonny Banerjee and Murchana Baruah
Sensors 2024, 24(12), 3922; https://doi.org/10.3390/s24123922 - 17 Jun 2024
Cited by 2 | Viewed by 627
Abstract
The remarkable human ability to predict others’ intent during physical interactions develops at a very early age and is crucial for development. Intent prediction, defined as the simultaneous recognition and generation of human–human interactions, has many applications such as in assistive robotics, human–robot interaction, video and robotic surveillance, and autonomous driving. However, models for solving the problem are scarce. This paper proposes two attention-based agent models to predict the intent of interacting 3D skeletons by sampling them via a sequence of glimpses. The novelty of these agent models is that they are inherently multimodal, consisting of perceptual and proprioceptive pathways. The action (attention) is driven by the agent’s generation error, and not by reinforcement. At each sampling instant, the agent completes the partially observed skeletal motion and infers the interaction class. It learns where and what to sample by minimizing the generation and classification errors. Extensive evaluation of our models is carried out on benchmark datasets and in comparison to a state-of-the-art model for intent prediction, which reveals that classification and generation accuracies of one of the proposed models are comparable to those of the state of the art even though our model contains fewer trainable parameters. The insights gained from our model designs can inform the development of efficient agents, the future of artificial intelligence (AI). Full article
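A minimal sketch of the error-driven attention described above: the next glimpse is chosen where the agent's generation (reconstruction) error is largest, rather than by a reinforcement signal. The 1-D signal and window size are toy assumptions, not the paper's skeletal-motion model:

```python
def select_glimpse(observed, generated, window=3):
    """Return the start index of the window where generation error is highest.

    observed:  the true signal values seen so far
    generated: the agent's current reconstruction of that signal
    """
    errors = [abs(o - g) for o, g in zip(observed, generated)]
    # Slide a fixed-size window and attend where summed error peaks.
    return max(range(len(errors) - window + 1),
               key=lambda i: sum(errors[i:i + window]))
```

Each sampling step would then refine the reconstruction at the attended region, so attention keeps moving to wherever the agent's internal model predicts worst.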
22 pages, 377 KiB  
Article
Illusory Arguments by Artificial Agents: Pernicious Legacy of the Sophists
by Micah H. Clark and Selmer Bringsjord
Humanities 2024, 13(3), 82; https://doi.org/10.3390/h13030082 - 29 May 2024
Viewed by 456
Abstract
To diagnose someone’s reasoning today as “sophistry” is to say that this reasoning is at once persuasive (at least to a significant degree) and logically invalid. We begin by explaining that, despite some recent scholarly arguments to the contrary, the understanding of ‘sophistry’ and ‘sophistic’ underlying such a lay diagnosis is in fact firmly in line with the hallmarks of reasoning proffered by the ancient sophists themselves. Next, we supply a rigorous but readable definition of what constitutes sophistic reasoning (=sophistry). We then discuss “artificial” sophistry: the articulation of sophistic reasoning facilitated by artificial intelligence (AI) and promulgated in our increasingly digital world. Next, we present, economically, a particular kind of artificial sophistry, one embodied by an artificial agent: the lying machine. Afterward, we respond to some anticipated objections. We end with a few speculative thoughts about the limits (or lack thereof) of artificial sophistry, and what may be a rather dark future. Full article
(This article belongs to the Special Issue Ancient Greek Sophistry and Its Legacy)
17 pages, 2830 KiB  
Review
Exploring Embodied Intelligence in Soft Robotics: A Review
by Zikai Zhao, Qiuxuan Wu, Jian Wang, Botao Zhang, Chaoliang Zhong and Anton A. Zhilenkov
Biomimetics 2024, 9(4), 248; https://doi.org/10.3390/biomimetics9040248 - 19 Apr 2024
Viewed by 2092
Abstract
Soft robotics is closely related to embodied intelligence in the joint exploration of the means to achieve more natural and effective robotic behaviors via physical forms and intelligent interactions. Embodied intelligence emphasizes that intelligence is affected by the synergy of the brain, body, and environment, focusing on the interaction between agents and the environment. Under this framework, the design and control strategies of soft robotics depend on their physical forms and material properties, as well as algorithms and data processing, which enable them to interact with the environment in a natural and adaptable manner. At present, embodied intelligence has comprehensively integrated related research results on the evolution, learning, perception, decision making in the field of intelligent algorithms, as well as on the behaviors and controls in the field of robotics. From this perspective, the relevant branches of the embodied intelligence in the context of soft robotics were studied, covering the computation of embodied morphology; the evolution of embodied AI; and the perception, control, and decision making of soft robotics. Moreover, on this basis, important research progress was summarized, and related scientific problems were discussed. This study can provide a reference for the research of embodied intelligence in the context of soft robotics. Full article
(This article belongs to the Special Issue Bio-Inspired and Biomimetic Intelligence in Robotics)
20 pages, 8979 KiB  
Article
Modeling Theory of Mind in Dyadic Games Using Adaptive Feedback Control
by Ismael T. Freire, Xerxes D. Arsiwalla, Jordi-Ysard Puigbò and Paul Verschure
Information 2023, 14(8), 441; https://doi.org/10.3390/info14080441 - 4 Aug 2023
Cited by 1 | Viewed by 1601
Abstract
A major challenge in cognitive science and AI has been to understand how intelligent autonomous agents might acquire and predict the behavioral and mental states of other agents in the course of complex social interactions. How does such an agent model the goals, beliefs, and actions of other agents it interacts with? What are the computational principles to model a Theory of Mind (ToM)? Deep learning approaches to address these questions fall short of a better understanding of the problem. In part, this is due to the black-box nature of deep networks, wherein computational mechanisms of ToM are not readily revealed. Here, we consider alternative hypotheses seeking to model how the brain might realize a ToM. In particular, we propose embodied and situated agent models based on distributed adaptive control theory to predict the actions of other agents in five different game-theoretic tasks (Harmony Game, Hawk-Dove, Stag Hunt, Prisoner’s Dilemma, and Battle of the Exes). Our multi-layer control models implement top-down predictions from adaptive to reactive layers of control and bottom-up error feedback from reactive to adaptive layers. We test cooperative and competitive strategies among seven different agent models (cooperative, greedy, tit-for-tat, reinforcement-based, rational, predictive, and internal agents). We show that, compared to pure reinforcement-based strategies, probabilistic learning agents modeled on rational, predictive, and internal phenotypes perform better in game-theoretic metrics across tasks. The outlined autonomous multi-agent models might capture systems-level processes underlying a ToM and suggest architectural principles of ToM from a control-theoretic perspective. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)
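To make the game-theoretic setting concrete, here is a hedged sketch of one of the five tasks (the Prisoner's Dilemma) played by two illustrative strategies. The payoff values are the textbook ones and the strategies are simple stand-ins, not the paper's distributed-adaptive-control agents:

```python
# Row/column payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=5):
    """Iterate the game; each strategy sees history from its own perspective."""
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = strat_a(history)
        b = strat_b([(y, x) for x, y in history])  # mirrored view for player b
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b
```

Against an unconditional defector, tit-for-tat loses only the first round and then matches defection, which is the kind of behavioral signature the paper's predictive agents would need to model.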
14 pages, 6698 KiB  
Article
Skill Fusion in Hybrid Robotic Framework for Visual Object Goal Navigation
by Aleksei Staroverov, Kirill Muravyev, Konstantin Yakovlev and Aleksandr I. Panov
Robotics 2023, 12(4), 104; https://doi.org/10.3390/robotics12040104 - 16 Jul 2023
Cited by 2 | Viewed by 2438
Abstract
In recent years, Embodied AI has become one of the main topics in robotics. For the agent to operate in human-centric environments, it needs the ability to explore previously unseen areas and to navigate to objects that humans want the agent to interact with. This task, which can be formulated as ObjectGoal Navigation (ObjectNav), is the main focus of this work. To solve this challenging problem, we suggest a hybrid framework consisting of both not-learnable and learnable modules and a switcher between them—SkillFusion. The former are more accurate, while the latter are more robust to sensors’ noise. To mitigate the sim-to-real gap, which often arises with learnable methods, we suggest training them in such a way that they are less environment-dependent. As a result, our method showed top results in both the Habitat simulator and during the evaluations on a real robot. Full article
(This article belongs to the Topic Artificial Intelligence in Navigation)
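A minimal sketch of the switching idea behind such a hybrid framework: prefer the classical (map-based) module while sensing is reliable, and fall back to the learned policy otherwise. The threshold, noise measure, and action names are illustrative assumptions, not SkillFusion's actual switching criterion:

```python
def skill_fusion_step(classical_action, learned_action, sensor_noise,
                      threshold=0.2):
    """Pick the classical module's action when sensor noise is low (it is
    more accurate), otherwise the learned module's (it is more robust)."""
    return classical_action if sensor_noise < threshold else learned_action
```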
15 pages, 11331 KiB  
Article
Outdoor Vision-and-Language Navigation Needs Object-Level Alignment
by Yanjun Sun, Yue Qiu, Yoshimitsu Aoki and Hirokatsu Kataoka
Sensors 2023, 23(13), 6028; https://doi.org/10.3390/s23136028 - 29 Jun 2023
Cited by 1 | Viewed by 1654
Abstract
In the field of embodied AI, vision-and-language navigation (VLN) is a crucial and challenging multi-modal task. Specifically, outdoor VLN involves an agent navigating within a graph-based environment, while simultaneously interpreting information from real-world urban environments and natural language instructions. Existing outdoor VLN models predict actions using a combination of panorama and instruction features. However, these methods may leave the agent struggling to understand complicated outdoor environments and overlooking environmental details, causing navigation failures. Human navigation often involves the use of specific objects as reference landmarks when navigating to unfamiliar places, providing a more rational and efficient approach to navigation. Inspired by this natural human behavior, we propose an object-level alignment module (OAlM), which guides the agent to focus more on object tokens mentioned in the instructions and recognize these landmarks during navigation. By treating these landmarks as sub-goals, our method effectively decomposes a long-range path into a series of shorter paths, ultimately improving the agent’s overall performance. In addition to enabling better object recognition and alignment, our proposed OAlM also fosters a more robust and adaptable agent capable of navigating complex environments. This adaptability is particularly crucial for real-world applications where environmental conditions can be unpredictable and varied. Experimental results show our OAlM is a more object-focused model, and our approach outperforms the baseline on all metrics of the challenging outdoor VLN Touchdown dataset, exceeding it by 3.19% on task completion (TC). These results highlight the potential of leveraging object-level information in the form of sub-goals to improve navigation performance in embodied AI systems, paving the way for more advanced and efficient outdoor navigation. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine-Learning-Based Localization)
20 pages, 1353 KiB  
Article
An Empirical Study of A Smart Education Model Enabled by the Edu-Metaverse to Enhance Better Learning Outcomes for Students
by Xiaoyang Shu and Xiaoqing Gu
Systems 2023, 11(2), 75; https://doi.org/10.3390/systems11020075 - 1 Feb 2023
Cited by 42 | Viewed by 7168
Abstract
The Edu-Metaverse, a vast ensemble of different technologies, has initiated a great and unprecedented change in the field of education. This change has been effected through the following Edu-Metaverse characteristics: embodied and multimodal interaction; immersive teaching scenarios, which can accelerate learning and skill acquisition; and the emergence of AI-enabled agents. In comparison to traditional classroom teaching models, smart education is a collaborative and visual model that adopts the latest AI technologies to reach a learning outcome. However, a problem that should be considered is how a smart education model, enabled by the Edu-Metaverse, can be developed to enhance better learning outcomes for students. Such a model should highlight smart pedagogy in the context of the Edu-Metaverse, together with a smart teaching environment, multimodal teaching resources, and AI-enabled assessment. In this study, we focused on the teaching of college English to 60 students from Zhejiang Open University. We investigated the effectiveness of a smart education model, which was empowered by the Edu-Metaverse, in enhancing better learning outcomes for the students, using a combination of qualitative and quantitative research. After the one-semester-long experiment, questionnaires were sent out to complement the interview findings. It was found that the students who engaged in the smart education model in the Edu-Metaverse yielded higher scores in oral English, vocabulary and grammar, reading comprehension, English-to-Chinese translation, and writing than those who engaged in traditional instruction. 
Therefore, this study suggests that a smart education model enabled by the Edu-Metaverse, which is characterized by a highly immersive experience, multimodal interaction, and a high degree of freedom for resource sharing and creation can help learners to realize deep learning, develop their capacity for high-order thinking, and help them to become intelligent individuals in an online learning space. In order to facilitate this smart learning, we make the following suggestions for educational institutions: (1) teachers should improve the design of teaching scenarios, (2) teachers should focus on learning assessment that is based on core literacy, and (3) teachers’ knowledge of the architecture of the Edu-Metaverse should be enhanced. Full article
29 pages, 2156 KiB  
Concept Paper
Biology, Buddhism, and AI: Care as the Driver of Intelligence
by Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane and Michael Levin
Entropy 2022, 24(5), 710; https://doi.org/10.3390/e24050710 - 16 May 2022
Cited by 11 | Viewed by 21051
Abstract
Intelligence is a central feature of human beings’ primary and interpersonal experience. Understanding how intelligence originated and scaled during evolution is a key challenge for modern biology. Some of the most important approaches to understanding intelligence are the ongoing efforts to build new intelligences in computer science (AI) and bioengineering. However, progress has been stymied by a lack of multidisciplinary consensus on what is central about intelligence regardless of the details of its material composition or origin (evolved vs. engineered). We show that Buddhist concepts offer a unique perspective and facilitate a consilience of biology, cognitive science, and computer science toward understanding intelligence in truly diverse embodiments. In coming decades, chimeric and bioengineering technologies will produce a wide variety of novel beings that look nothing like familiar natural life forms; how shall we gauge their moral responsibility and our own moral obligations toward them, without the familiar touchstones of standard evolved forms as comparison? Such decisions cannot be based on what the agent is made of or how much design vs. natural evolution was involved in their origin. We propose that the scope of our potential relationship with, and so also our moral duty toward, any being can be considered in the light of Care—a robust, practical, and dynamic lynchpin that formalizes the concepts of goal-directedness, stress, and the scaling of intelligence; it provides a rubric that, unlike other current concepts, is likely to not only survive but thrive in the coming advances of AI and bioengineering. We review relevant concepts in basal cognition and Buddhist thought, focusing on the size of an agent’s goal space (its cognitive light cone) as an invariant that tightly links intelligence and compassion. Implications range across interpersonal psychology, regenerative medicine, and machine learning. 
The Bodhisattva’s vow (“for the sake of all sentient life, I shall achieve awakening”) is a practical design principle for advancing intelligence in our novel creations and in ourselves. Full article
4 pages, 178 KiB  
Proceeding Paper
Ontology and AI Paradigms
by Roman Krzanowski and Pawel Polak
Proceedings 2022, 81(1), 119; https://doi.org/10.3390/proceedings2022081119 - 29 Mar 2022
Cited by 1 | Viewed by 1892
Abstract
Ontologies of the real world that are realized, internally, by AI systems and human agents are different. We call this difference an ontological gap. The paper posits that this ontological gap is one of the reasons responsible for the failures of AI, in realizing Artificial General Intelligence (AGI) capacities. Moreover, the authors postulate that the implementation of the biosemiotics perspective and a subjective judgment in synthetic agents would seem to be a necessary precondition for a synthetic system to realize human-like cognition and intelligence. The paper concludes with general remarks on the state of AI technology and its conceptual underpinnings. Full article
20 pages, 2419 KiB  
Systematic Review
A Systematic Review on Healthcare Artificial Intelligent Conversational Agents for Chronic Conditions
by Abdullah Bin Sawad, Bhuva Narayan, Ahlam Alnefaie, Ashwaq Maqbool, Indra Mckie, Jemma Smith, Berkan Yuksel, Deepak Puthal, Mukesh Prasad and A. Baki Kocaballi
Sensors 2022, 22(7), 2625; https://doi.org/10.3390/s22072625 - 29 Mar 2022
Cited by 35 | Viewed by 7276
Abstract
This paper reviews different types of conversational agents used in health care for chronic conditions, examining their underlying communication technology, evaluation measures, and AI methods. A systematic search was performed in February 2021 on PubMed Medline, EMBASE, PsycINFO, CINAHL, Web of Science, and ACM Digital Library. Studies were included if they focused on consumers, caregivers, or healthcare professionals in the prevention, treatment, or rehabilitation of chronic diseases, involved conversational agents, and tested the system with human users. The search retrieved 1087 articles. Twenty-six studies met the inclusion criteria. Out of 26 conversational agents (CAs), 16 were chatbots, seven were embodied conversational agents (ECA), one was a conversational agent in a robot, and another was a relational agent. One agent was not specified. Based on this review, the overall acceptance of CAs by users for the self-management of their chronic conditions is promising. Users’ feedback shows helpfulness, satisfaction, and ease of use in more than half of included studies. Although many users in the studies appear to feel more comfortable with CAs, there is still a lack of reliable and comparable evidence to determine the efficacy of AI-enabled CAs for chronic health conditions due to the insufficient reporting of technical implementation details. Full article
17 pages, 958 KiB  
Article
User Experience Sensor for Man–Machine Interaction Modeled as an Analogy to the Tower of Hanoi
by Arkadiusz Gardecki, Michal Podpora, Ryszard Beniak, Bartlomiej Klin and Sławomir Pochwała
Sensors 2020, 20(15), 4074; https://doi.org/10.3390/s20154074 - 22 Jul 2020
Cited by 1 | Viewed by 2438
Abstract
This paper presents a novel user experience optimization concept and method, named User Experience Sensor, applied within the Hybrid Intelligence System (HINT). The HINT system, defined as a combination of an extensive AI system and the possibility of attaching a human expert, is designed to be used by relational agents, which may have a physical form, such as a robot, a kiosk, be embodied in an avatar, or may also exist as only software. The proposed method focuses on automatic process evaluation as a common sensor for optimization of the user experience for every process stage and the indicator for human-expert automatic session activation. This functionality is realized by the User Experience Sensor, which constitutes one of main elements of the self-optimizing interaction system. The authors present the optimization mechanism of the HINT system as an analogy to the process of building a Tower of Hanoi. The proposed sensor evaluates the user experience and measures the user/employee efficiency at every stage of a given process, offering the user to choose other forms of information, interaction, or expert support. The designed HINT system is able to learn and self-optimize, making the entire process more intuitive and easy for each and every user individually. The HINT system with the proposed sensor, implemented in a window assembly facility, successfully reduced assembly time, increased employees’ satisfaction, and assembly quality. The proposed approach can be implemented in numerous man–machine interaction applications. Full article
(This article belongs to the Section Internet of Things)
5 pages, 245 KiB  
Proceeding Paper
Morphological Computing in Cognitive Systems, Connecting Data to Intelligent Agency
by Gordana Dodig-Crnkovic
Proceedings 2020, 47(1), 41; https://doi.org/10.3390/proceedings2020047041 - 15 May 2020
Cited by 1 | Viewed by 1468
Abstract
This paper addresses some of the major controversies underlying the theme of the IS4SI 2019 Berkeley summit: “Where is the I in AI and the meaning of Information?”. It analyzes the relationship between cognition and intelligence in the light of the difference between old, abstract and the new embodied, embedded, enactive computationalism. It is questioning presuppositions of old computationalism which described the abstract ability of humans to construct knowledge as a symbol system, comparing it to the modern view of cognition found in various degrees in all living beings, with morphological/physical computational processes emerging at a variety of levels of organization. Cognitive computing based on natural/ physical/ morphological computation is used to explain the goal-directed behavior of an agent acting on its own behalf (the “I” as self-referential awareness) applicable to both living beings and machines with varying degrees of intelligence. Full article
(This article belongs to the Proceedings of IS4SI 2019 Summit)
10 pages, 2129 KiB  
Article
Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots
by Minoru Asada
Philosophies 2019, 4(3), 38; https://doi.org/10.3390/philosophies4030038 - 13 Jul 2019
Cited by 16 | Viewed by 7418
Abstract
In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots (artificial systems). In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others) scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in a broad sense is shown, followed by the limitation of the current progress of artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain, and later, in the sharing stage of the pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts—physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing studies of CDR are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed. Full article