Abstract
Social companion robots are receiving increasing attention as a way to help elderly people stay independent at home and to reduce their social isolation. When developing such solutions, one remaining challenge is to design the right applications, usable by elderly people. For this purpose, co-creation methodologies involving multiple stakeholders and a multidisciplinary researcher team (e.g., elderly people, medical professionals, and computer scientists such as roboticists or IoT engineers) are designed within the ACCRA (Agile Co-Creation of Robots for Ageing) project. This paper addresses the following research question: How can Internet of Robotic Things (IoRT) technology and co-creation methodologies help to design emotional-based robotic applications? The work is supported by the ACCRA project, which develops advanced social robots to support active and healthy ageing, co-created by various stakeholders such as ageing people and physicians. We demonstrate this with three robots, Buddy, ASTRO, and RoboHon, used for daily life, mobility, and conversation. The three robots understand and convey emotions in real time using Internet of Things and Artificial Intelligence technologies (e.g., knowledge-based reasoning).
1 Introduction
Service robots [86] are in high demand to assist human beings, typically by performing jobs that are dirty, dull, distant, dangerous, or repetitive. Indeed, according to the International Federation of RoboticsFootnote 1 “by 2019, sales forecasts indicate another rapid increase of service robots up to an accumulated value of US$ 23 billion for the period 2016-2019”. “Investments under the joint SPARC initiative (the largest research and innovation program in robotics in the world) are expected to reach 2.8 billion euro with 700 million euro in financial investments coming from the European Commission under Horizon 2020 over 7 years.”
Service robots are becoming an important technology to invest in. Sophia [95], the humanoid robot, is one of the latest demonstrations of the advancements in this area. The key remaining challenge when developing robots is to make them interact well with humans and their environment.Footnote 2 Over 70% of people aged 80 and over experience problems with personal mobility, while they want to remain independent in their lives.Footnote 3 Others, those who are able to maintain their full mobility, struggle with isolation and lack of social interaction. People can become socially isolated for a variety of reasons, such as getting older or weaker, no longer being the hub of their family, leaving the workplace, the deaths of spouses and friends, or through disability or illness. Whatever the cause, it is shockingly easy to be left feeling alone and vulnerable, which can lead to depression and a serious decline in physical health and wellbeing.Footnote 4 Social robots are autonomous robots that interact and communicate with humans or other autonomous physical agents by following social behaviors and rules attached to their role. Social robots can assist elderly people to live secure, independent, and sociable lives. These robots also lighten the workload of caregivers, doctors, and nurses. Social companion robots [42] play the role of a companion, helping persons enjoy being with someone and reducing loneliness. They are getting more and more attention,Footnote 5 for example Pepper, which dances and jokes, or Buddy,Footnote 6 which conveys emotions.
Social robots need to detect emotions and behave accordingly [80]: innovative social robots should have personalized behaviors based on users’ emotional states to fit better into their activities and to improve human-robot interaction. Consequently, robots are expected to follow mental models based on human-human interaction because they are perceived as social actors. The human is in the loop: the robot detects and analyzes the emotion as a first step, plans a reaction as a second step, and closes the loop with the human being as the third step.
Further, social robots need to interact better with humans and understand and/or convey emotions [85]. Technology can help. Wearable devices can be used for this purpose: indeed, these devices produce physiological signals that can be analyzed by machines to understand humans’ emotions and their physical state (see Table 7). Internet of Things (IoT) technology can connect wearables to AI-based capabilities on the web, leveraging approaches such as the W3C Web of Things.Footnote 7 The integration of Robotics, IoT, Web, and Cloud technologies paves the way to the Internet of Robotic Things (IoRT)Footnote 8 [94, 98, 106, 115], i.e. an environment where unlimited and continuously improving knowledge reasoning and data processing is available, thereby endowing robots with capabilities for context awareness and for interactions with humans by reacting to their emotions, thus enhancing the quality of human-robot interactions.
ACCRAFootnote 9 is a joint Europe-Japan project focusing on the application of a co-creation methodology to build robot applications for ageing, co-created by various stakeholders such as ageing people and physicians. ACCRA has developed three social companion robot applications for older adults (daily life, conversation, and mobility applications) using three robots, Buddy, RoboHon, and ASTRO. Buddy is an emotional robot conveying specific feelings and emotions to humans on demand, the RoboHonFootnote 10 robot can understand emotions by using face recognition, and ASTROFootnote 11 helps with mobility.
ACCRA designed an engineering framework consisting of three components as shown in Fig. 1:
1. IoT capabilities for robotics, i.e. connecting robots to other IoT devices and to cloud processing capabilities.
2. Knowledge base for emotional well-being, i.e. accessing capabilities for context awareness, human interaction learning, and emotion management.
3. Co-creation methodologies, i.e. integrating multidisciplinary expertise and involving humans in the design of applications.
In this light, our main research question is: How can IoRT technology and co-creation methodologies help to design emotional-based robotic applications?
We address the following sub-research questions (RQ):
- RQ1: What is the architecture of social companion robot systems for smart healthcare and emotional well-being?
- RQ2: What kind of robots do we need to enhance the elderly’s mobility?
- RQ3: What kind of robots do we need to help the elderly to socialize and reduce their loneliness?
- RQ4: How do we ensure that implemented applications meet the needs of the elderly to stay independent at home?
- RQ5: What kind of robots do we need to recognize emotions? How can we integrate IoT-based emotional understanding into applications?
The main contributions of this paper are:
- C1: The design of the robotic-based IoT architecture of the ACCRA project; it addresses RQ1 in Sect. 3.2.
- C2: The ASTRO robot for the mobility application, designed with co-creation; it addresses RQ2 in Sect. 5.1.1.
- C3: The Buddy robot for the daily life and conversational applications, designed with co-creation; it addresses RQ4 in Sect. 5.1.2 and RQ3 in Sect. 5.1.3.
- C4: The design of robotic applications is supported by the Listen, Innovate, Field-test, Evaluate (LIFE) co-creation methodology for IoT, integrating human factors; it addresses RQ1 in Sect. 6.1.
- C5: The knowledge API to enrich and reason on data generated by sensors to build emotional and ontology-based applications; it addresses RQ5 in Sects. 4 and 5.2.
- C6: The RoboHon robot recognizes emotions with face recognition; it addresses RQ5 in Sect. 3.1.
Structure of the paper: Section 2 introduces the concepts for robotics and emotional well-being as well as information on technological solutions that may be used in this context. Section 3 describes the ACCRA system, the robots, and their applications. Section 4 focuses on the Knowledge API. Section 5 explains the engineering components for robotics in smart health and emotional well-being and their evaluation within the ACCRA project. Section 6 evaluates our proposed solutions with stakeholders. Section 7 recalls the key contributions and summarizes lessons learnt. Section 8 concludes the paper and provides suggestions for future work.
2 Related Work: Internet of Robotic Things and Emotional Care
The three sections hereafter introduce the related work for robotics and smart health emotional care. Some of the technologies mentioned hereafter are employed in the engineering framework described in Sect. 5. The literature study’s purpose is to scope the state of the art and its historic development. The literature has been collected over the last 10 years by searching for specific keywords on corpora such as Google Scholar. We synthesize the related work in Tables 2, 3, 7 and 8 for a quick overview and comparison. We deliberately keep mentioning the oldest pioneering work to acknowledge it rather than ignore it.
2.1 Robotics Technologies and Internet of Robotic Things
We selected papers with keywords: “Internet of Robotics Things”, “IoT and Robotics”, “Cloud Robotics”, “Service Robotics”, “Robotic-as-a-Service”, “Robots Web”, “Robotics Ontologies”. Survey papers have been read first to find key references. A World Wide Web for Robots is introduced by the RoboEarth project [117]. RoboEarth provides a cloud-based knowledge repository for robots learning to solve tasks. “Internet of Robotic Things” (IoRT) integrates robotics, Internet of Things (IoT), and artificial intelligence (AI) to design and develop new frontiers in human-robot interaction (HRI), collaborative robotics, and cognitive robotics [94]. Thanks to the pervasiveness of IoT solutions, robots can expand their sensing abilities [5] whereas cloud computing resources and artificial intelligence can expand robots’ cognitive abilities [60]. Cloud Robotics is another term to describe the benefit of computational power for robots [57, 60, 99].
Kamilaris et al. [59] address six research questions when surveying the connection between robotics and the principles of IoT and WoT: (1) Which concepts, characteristics, architectures, platforms, software, hardware, and communication standards of IoT are used by existing robotic systems and services? (2) Which sensors and actuators are incorporated in IoT-based robots? (3) In which application areas is IoRT applied? (4) Which technologies of IoT are used in robotics till today? (5) Is Web of Things (WoT) used in robotics? By employing which technologies? (6) Which is the overall potential of WoT in combination with robotics, towards a WoRT? Tools supporting those IoRT technologies are still missing.
Tiddi et al. identify the main characteristics, core themes, and research challenges of social robots for cities [113] and encourage knowledge-based environments for robots, but do not focus on emotional care. Nocentini et al. [80] surveyed Human-Robot Interaction (HRI) approaches that provide robots with cognitive and affective capabilities to establish empathic relationships with users, focusing on the following architectural aspects: (1) the development of adaptive behavioral models, (2) the design of cognitive architectures, and (3) the ability to establish empathy with the user. They encourage a holistic approach to design empathic robots.
Paulius et al. [86] give advice for designing an effective knowledge representation (e.g., distinguish representation from learning, provide a cloud computing interface for robots, share knowledge and task experience between robots to avoid redeveloping from scratch). They conclude on the need for standards for service robotics and briefly mention the need for ontologies [90] without describing the IEEE Autonomous Robotics ontologies and the numerous existing ontologies for robotics.
Surveys about robotics using IoT, Cloud, and AI technologies without a focus on semantic web technologies are summarized in Table 2. We reviewed ontology-based robotics projects, with a focus on robot autonomy, in [83], published in 2019. We designed Table 3 for this new paper to give a quick overview.
Zander et al. [122] surveyed semantic-based robotics projects and compare them according to six dimensions that are relevant for describing robotic components and capabilities: (i) domain and scope of an approach, (ii) system design and architecture, (iii) ontology scope and extensibility, (iv) reasoning features, (v) technological foundation of the reasoning techniques, and (vi) additional relevant technological features.
Shortcomings of the Literature Study: Previous work focuses on technological aspects such as integration or bringing AI (semantic web, IoT, Cloud) capabilities, not on bringing emotional care. There is a lack of a survey covering all models already designed for the robotic domain (this is the purpose of our survey [83]). There is also a need to understand why new models are constantly designed without reusing existing ones; this clearly demonstrates the need for tools to support developers willing to design new applications. The Internet of Robotic Things (IoRT) study published in 2017 [115] cites only one ontology for robotics, designed by Prestes et al. (the IEEE standardization) [89].
2.2 Emotion Capture and Reproduction
Emotional understanding is widely investigated: (1) emotion recognition is investigated by Ekman [32], a pioneer who defined the six basic emotions (anger, happiness, surprise, disgust, sadness, and fear) focusing on face recognition, (2) positive psychology and research about happiness is investigated by Seligman [104], and (3) affective computing [88] combines affective sciences with computing capabilities. Blue Frog Robotics designed the Buddy companion robot, with a cute and smiley face. Buddy’s strength is to assist, entertain, educate, and make everybody smile. We selected papers with the following keywords: “IoT and emotion”, “IoT Well-Being”, “well-being recommender system”, “wellness sensor”, “happiness IoT”, “emotion ontologies”, as well as solutions to remedy related disorders: “depression sensor”, “stress sensor”, “sleep disorder sensor”. Survey papers were read first to find key references. The main goal was to classify key sensors used in such systems and the reasoning mechanisms used to interpret sensor data. The classification of emotional well-being projects using IoT devices (Table 7) or ontologies (Table 8) is provided in Sect. 5.2.
In Table 7, the first column references the authors, the second column gives the date of publication, the third column provides a short description of the research, the fourth column describes the sensors mentioned, and the last column provides the reasoning mechanisms employed to infer meaningful information from sensor data.
Conclusion: Emotional robots are still in their infancy. There is a need to classify sensors that can help to understand emotions better (not just with a camera for face recognition) [33], and to better understand the relationships between physiological data and emotions [12, 17]. Table 7 is a first step towards bringing together IoT devices to enhance mood and emotions by managing disorders such as stress, sleep, mental health, and depression, enhancing quality of life with empowered IoT-based well-being solutions.
2.3 System Design and Lifecycle Process: Agile Programming, DevOps and Co-creation
Agile Product DevelopmentFootnote 12 fosters a rapid and flexible response to change; it is carried out in short periods called sprints. Sprint objectives are agreed by the members of the project team: the agile programming and engineering team, the product manager, and the product owner. Sprint results are demonstrated and assessed by the product owners and managers. Product backlogs list future work to be integrated and developed, such as new features, changes, bug fixes, infrastructure changes, etc.
DevOps is a ‘release early, release often’ software development methodology where engineers are involved in planning, designing, and developing product releases, with varied activities and skills required at each stage. DevOps emphasizes the creation of tight feedback loops between engineers and testers or end-users and reduces the risk of creating software that no one will use. Development teams are encouraged to implement smaller sets of features, with minimal capabilities, to reshape future product planning and to decide which features are worth developing.
The co-creation approach actively involves multi-disciplinary partners as users in development, as opposed to being the subject of development [34]. Users co-develop solutions using tools and techniques that involve them in making ’things’ themselves, such as collages, mock-ups, and prototypes. Co-creation methods, tools, and techniques access the tacit and latent knowledge of people, which may be very difficult to express, as opposed to what people say they want (explicit knowledge) or what we can see people wanting (observable knowledge). Such methods can uncover people’s dreams, needs, and wants, giving access to their true feelings [34]. This is relevant when developing IoT robotics applied to emotional well-being, because people’s emotions can be accessed through co-creation; this may not be the case when using methods such as interviews or observations that focus on uncovering explicit or observable knowledge.
Conclusion: Agile, DevOps, and co-creation methodologies are used internally within the ACCRA engineering team to develop applications (mobility, daily life, and socialization) and externally with elderly people and multidisciplinary stakeholders to adapt applications to their needs (through the LIFE methodology [1] explained in Sect. 6.1).
3 System Used in ACCRA: Robots and Architecture
The robots employed within ACCRA are described in Sect. 3.1, and the ACCRA project architecture is explained in Sect. 3.2.
3.1 Buddy, ASTRO, and RoboHon Robots
Buddy is a companion robot that helps and entertains people every day. It offers several services: daily company, protection and security, communication and social connection, well-being and entertainment, and conversation.
- Companionship: The robot is a playful and endearing companion that keeps company and entertains. It hums a tune, tells jokes, gives a warm smile when its head or nose is stroked, answers questions about the weather, the date, and the time, etc.
- Protection and Security: The robot assists the elderly in case of falling or getting stuck when moving. It can call for help upon request. It reminds users of medical appointments, medication to take, and good practices to follow (e.g., drink regularly in hot weather).
- Communication and Social Connection: The robot enables close ones (family, grandchildren, friends, etc.) to interact with the elderly by phone or video call, send photos, videos, and drawings, and display them on its screen.
- Well-being and Entertainment: The robot accompanies the elderly with well-being or relaxation activities, by playing videos, to unwind (relaxation sessions, sophrology, etc.) or to keep fit (physical exercise, gentle gymnastics, yoga, etc.). It can also play games for amusement or to exercise memory, read an audiobook, or play favorite music.
- Conversation: The robot entertains by talking to the elderly about a variety of topics to stimulate the mind and can retrieve useful information for them.
ASTRO is a socially assistive robot designed with elderly people to support their indoor mobility, manage exercises, support caregivers at work, and provide communication and telepresence.
- Walking support: ASTRO supports the walking of elderly users through a robotic handle, acting like a rollator. Older persons can easily drive ASTRO by changing their handgrip strength; the robot automatically adapts to the user and their velocity over time (using a customized machine learning algorithm).
- Exercises: ASTRO helps users with physical exercises that can be selected from a list of videos managed by the caregiver. Users can see themselves on various ASTRO interfaces while performing the exercise, and later perform self-assessments.
- Support caregivers at work: ASTRO supports the work of caregivers: (1) Managing exercises: the caregiver can select the proper exercises for each user. (2) Monitoring: the caregiver can get an overview of the satisfaction with the performance of the exercises and walking support services. They can access the overview of the usage of the service, the total traveled time, and the mean velocity. The data are clustered and organized in a bar graph to enhance the overview. (3) Communication and Telepresence: the caregiver can call the elderly through this service and can use ASTRO to remotely visit the elderly; using the ad hoc interface, they can guide ASTRO and see what it sees.
RoBoHoN [38] is a robot that provides therapy for blind elderly people to induce a positive emotional state, reduce stress, and reduce social isolation. RoBoHoN is a personal mobile robot whose main interface is voice. As a humanoid robot, it is easy to talk to, and it expresses itself through movement. Through the voice UI, applications such as telephone, mail, and camera, and content such as weather, fortune-telling, and news become special experiences, and various apps can be added to help the owner. RoBoHoN provides the following functionalities:
- Emotional Face Recognition: RoBoHoN [102] recognizes emotions (true face, joy, surprise, anger, and sadness) from the face using OMRON’s Human Vision Components (HVC-P2) camera embedded within the robot’s eyes.
- Emotional Conversation: RoBoHoN senses which sentence affected the emotions of the elderly and responds appropriately.
- Person’s Location from Voice Recognition: RoBoHoN is a humanoid robot capable of voice communication that identifies where the person is located in the room using the speech recognition engine (Rospeex). To move around and understand its environment, RoBoHoN relies on several sensors: a three-axis accelerometer, a three-axis geo-magnetometer, a three-axis gyroscope, a light sensor, a temperature sensor, a humidity sensor, a heart rate sensor, and a camera for facial recognition. RoBoHoN can navigate its environment with ease, walking and knowing its exact location.
The RoBoHoN operating system is Google Android (TM), and the standard Android API is also available. Developers with experience in Android application development can reuse the knowledge and technology they have already acquired. In addition, apps can be developed that provide a user experience different from smartphones by taking advantage of RoBoHoN’s characteristic functions (voice dialogue, motion, etc.). RoBoHoN uses the FIWARE infrastructure software and MQTT as the communication protocol for sensor data.
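As a minimal sketch of how such sensor data could be published over MQTT (the broker address, topic name, and payload fields below are illustrative assumptions, not the actual ACCRA configuration):

# Sketch: publishing a sensor reading over MQTT with the paho-mqtt client.
# Broker host, topic, and payload fields are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.org", 1883)  # hypothetical MQTT broker

reading = {
    "sensor": "heartbeat",
    "value": 92,                             # beats per minute
    "unit": "bpm",
    "timestamp": "2020-06-15T10:30:00Z",
}
# Hypothetical topic naming scheme: <project>/<robot>/<sensor>
client.publish("accra/robohon/heartbeat", json.dumps(reading), qos=1)
client.disconnect()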
3.2 ACCRA Architecture
The ACCRA overview architecture, depicted in Fig. 2, has three layers:
- The IoT and Robot Layer (see Sect. 3.1) comprises the robots used for the mobility, daily life, and conversation applications, and sensors such as room humidity and thermometer, heartbeat, and a camera to detect facial expressions.
- The Core Platform Layer comprises the following functionalities: (1) The FIWARE platform, which exchanges sensor data with the services that use them. The server receives information from the sensors and sends it to the services running on the different robots (ASTRO, Buddy, and RoboHon); a minimal publishing sketch is given at the end of this subsection. (2) ASTRO Support, which communicates with and receives data from the FIWARE platform, (3) Buddy Support, (4) RoboHon Support with the Rospeex speech communication platform, which is used for conversational applications and which can recognize facial emotions (true face, joy, surprise, anger, and sadness) with the camera, (5) the Application Stakeholder Cooperation Portal (Marketplace), and (6) Quality of Service (QoS) to prioritize network traffic. The ACCRA Knowledge API (see Sect. 4) provides the knowledge to the platform to decide which messages and orders are sent to the services, depending on the data received from the different sensors. The reasoning engine interprets the data meaning to build well-being and emotional applications.
- The Application Layer provides the three main applications (mobility, daily life, and conversation), developed and experimented within the ACCRA methodology. The applications are detailed in Sect. 5.1.
More information regarding the ACCRA architecture and relationships between core components are detailed in ACCRA D5.3 Platform Environment for Marketplace [25].
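To give a concrete flavour of this data exchange, the following sketch pushes a temperature and humidity observation to a FIWARE Orion Context Broker using the NGSI v2 API; the broker URL, entity id, and attribute names are illustrative assumptions rather than the actual ACCRA deployment:

# Sketch: creating/updating a context entity in a FIWARE Orion Context Broker (NGSI v2).
# The broker URL, entity id, and attribute names are illustrative assumptions.
import requests

ORION_URL = "http://orion.example.org:1026"    # hypothetical broker endpoint

entity = {
    "id": "urn:ngsi-ld:Room:livingRoom01",     # hypothetical entity id
    "type": "Room",
    "temperature": {"value": 28.5, "type": "Number"},
    "humidity": {"value": 45, "type": "Number"},
}

# Upsert so that repeated runs update the same context entity
response = requests.post(
    f"{ORION_URL}/v2/entities?options=upsert",
    json=entity,
    headers={"Content-Type": "application/json"},
)
response.raise_for_status()

Robot-side services can then subscribe to changes in such entities to trigger their behaviors.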
4 ACCRA Knowledge API for Robotics Emotional Well-Being
4.1 Inferring Meaningful Knowledge from Data
The end-to-end architecture provided in Fig. 3 uses data generated by devices (e.g., heartbeat sensor, temperature, humidity) available within the ACCRA project. The data are stored and managed within the FIWARE middleware. The Semantic Annotator API component explicitly annotates the data (e.g., unit of measurement, context such as body temperature or room temperature) and unifies the data when needed. The semantic annotation uses ontologies that can be found through ontology catalogs (e.g., LOV4IoT [49, 83], Sect. 4.2). The ontology chosen must be compliant with a set of rules to infer additional information. The Reasoning Engine API (inspired by [48, 50] and supported by the AIOTI group on semantic interoperability [79]) deduces additional knowledge from the data (e.g., abnormal heartbeat) using rule-based reasoning. The IF-THEN-ELSE rules executed by the reasoning engine add new data to the FIWARE data storage. Finally, the enriched data can be exploited within end-user applications (e.g., calling the family or doctor when the heartbeat is abnormal, since it might be an emergency).
Listing 1 shows a rule compliant with the Jena framework and the W3C Sensor, Observation, Sample, and Actuator (SOSA)/Semantic Sensor Network (SSN) ontology and its extension, the Machine-to-Machine Measurement (M3) ontology, which classifies sensor types, measurement types, units, etc. to perform knowledge-based reasoning.
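Listing 1 itself is not reproduced here; the fragment below is only a sketch of what such a Jena forward-chaining rule can look like, where the m3: terms, the namespace URLs, and the 100 bpm threshold are hypothetical stand-ins for the project’s actual vocabulary:

# Sketch of a Jena rule; m3:HeartBeat, m3:suggestsCondition, m3:Tachycardia and
# the 100 bpm threshold are illustrative assumptions, not the actual Listing 1.
@prefix sosa: <http://www.w3.org/ns/sosa/>.
@prefix m3:   <http://example.org/m3#>.

[DetectTachycardia:
    (?obs rdf:type sosa:Observation)
    (?obs sosa:observedProperty m3:HeartBeat)
    (?obs sosa:hasSimpleResult ?bpm)
    greaterThan(?bpm, 100)
    ->
    (?obs m3:suggestsCondition m3:Tachycardia)
]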
Table 1 provides examples of the web services used by the GUI (Fig. 4 ). Such web services are employed to later code rules as depicted in Listing 1 for the scenarios according to the applications’ needs.
Various rules (compliant with the Jena framework, such as Listing 1) can be provided by the Sensor-based Linked Open Rules (S-LOR) tool, which classifies rules per domain and per sensor. In ACCRA, we focus on the healthcare domain, since we have the heart beat sensor, and on the smart home domain for Ambient Assisted Living, with the temperature and humidity sensors. Figures 4 and 5 show a demonstrator of the rule-based engine for IoT, called Sensor-based Linked Open Rules (S-LOR). They show a drop-down list with a set of IoT sub-domains, such as smart home, that we are interested in. Once the domain is selected, the list of sensors relevant for this domain (e.g., presence detector, temperature, light sensor) is displayed. The developer clicks on the “Get Project” button to retrieve existing projects already using such sensors, or the “Get rule” button to find existing rules relevant for this sensor to deduce meaningful information from sensor data. For instance, for a temperature sensor within a smart home, a smart home application integrating a rule-based reasoner understands that when the temperature is too cold or too hot, it can automatically switch on the heater or the air-conditioning.
Several reasoning examples are provided to demonstrate the full scenarios from a simple measurement:
- Skin conductance to infer a potential emotion such as anxiety (Fig. 6).
- Blood pressure to infer a potential disease such as hypertension (Fig. 7).
- Blood glucose to infer a potential disease such as hyperglycemia (Fig. 8).
- Heart beat to infer diseases such as tachycardia (Fig. 9). Another scenario could be to detect emotions such as fear if the heart beat is elevated (e.g., 155 beats per minute); see the sketch after this list.
The demos provide tooltips documenting the veracity of the facts. The facts come, most of the time, from scientific publications referenced within the LOV4IoT catalogs and the Sensor-based Linked Open Rules (S-LOR) tool. Such technical AI solutions can later be embedded within the robots to better understand users’ emotions and diseases from sensor devices.
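As a minimal sketch of this kind of threshold-based deduction (the thresholds and labels below are illustrative assumptions, not the exact rules shipped with S-LOR):

# Sketch of threshold-based deduction from a heart-rate measurement.
# Thresholds and labels are illustrative assumptions, not the actual S-LOR rules.
def deduce_from_heart_rate(bpm):
    deductions = []
    if bpm > 100:
        deductions.append("Tachycardia")   # resting heart rate above ~100 bpm
    if bpm > 150:
        deductions.append("PossibleFear")  # e.g., the 155 bpm scenario above
    if bpm < 60:
        deductions.append("Bradycardia")
    return deductions

print(deduce_from_heart_rate(155))  # ['Tachycardia', 'PossibleFear']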
4.2 Reusing Knowledge Expertise
The LOV4IoT ontology catalog is generic enough to be extended to any domain. In this paper, we are mainly focused on the following domains: emotion, health, robotics, and home.
LOV4IoT-Emotion knowledge catalog (see Table 2 for URLs). We have collected 22 ontology-based emotion projects in Table 8 (an enrichment of [48]), from 2003 to 2018, and have classified them into: (1) open-source ontologies that define domain knowledge (column “ontology availability” with a green check mark), and (2) ontologies not openly accessible (column “ontology availability” with a red cross mark). We released the catalog as an open-source dataset, web service, and dump (see Table 2 for URLs) to easily retrieve the list of projects, their ontology URLs, and scientific publications for further automatic knowledge extraction. Those ontologies classify the numerous emotions to be automatically recognized with tasks such as text analysis (e.g., robots understanding conversations) or face recognition. Table 8 shows that ontologies are frequently not accessible online, which demonstrates the need to further disseminate the Semantic Web community best practice guidelinesFootnote 13 within the Affective Computing community, to encourage the reuse and publication of ontologies. Ontology-based emotional projects miss the information about which devices are required to recognize emotions (except microphones for voice data and cameras for face detection). For this reason, we also classified well-being projects (e.g., recommendation systems) using IoT devices to interpret emotions in Table 7. This table helped us to build the reasoning engine for emotions explained in [48].
LOV4IoT-Robotics provides an HTML interface (Fig. 10) for humans; the RDF dataset is also employed for statistics (e.g., the number of ontologies for the robotics domain and the quality of the ontologies, such as whether they are published online) and other automatic tasks (e.g., the Perfecto project, which links the ontology URL with semantic web tools and analyzes ontology best practices, or automatic visualization). In the same way, a focus on Ambient Assisted Living (AAL) ontologies (e.g., UniversAAL ontologies) could be made, or even on the intersection of robotic ontologies for AAL. We also released dump files with the ontology code for a quick analysis of common concepts used within present or past robotics projects.
Similarly, we have built LOV4IoT-Health and LOV4IoT-Home (see Table 2 for URLs). All of those ontology datasets have been released for AI4EU Knowledge Extraction for the Web of Things (KE4WoT) Challenge.Footnote 14
4.3 Knowledge Catalog Methodology
We define the following survey Inclusion Criteria:
- Open-source projects. Ideally, projects have a project website, documentation, ontologies released online, etc.
- Impactful projects (e.g., European projects with numerous partners, projects in partnership with industrial companies, etc.).
- Projects supported by standards (e.g., ETSI, W3C, ISO, IEC, oneM2M).
- Reuse of the ontologies within other projects.
- Size of the ontology or knowledge base.
- Maintenance of the projects.
As explained in Sect. 2, performing a Systematic Literature Review such as PRISMA, PICO, or [18, 62, 96] is out of the scope of the paper; however, we continuously investigated keyphrases on search engines such as “robotic ontologies” or “robotic ontology healthcare”; the set of keywords has been regularly refined. We provide this overview to corroborate the discussion and the context of our work. We persistently explored references included within the bibliographies of the papers studied and added any new papers that needed to be investigated. When looking for papers on search engines, the most cited papers are usually referenced first. However, we also have to consider the latest publications, which are not cited yet. For this reason, we use Google Scholar to explicitly request publications per year (e.g., 2018, 2019), to include the latest publications. We prioritize open-source projects that share their ontologies online. Another criterion is the maintenance of the project and/or the ontology.
Discussion: more than a survey, an ontology catalog tool. On top of this survey paper, we created an ontology catalog tool accessible online, called LOV4IoT, to ease the task of retrieving ontology URLs when available, links to scientific publications, etc. The ontology catalog provides a table view for humans but is also encoded in a machine-processable format (RDF). We also provide a web service using the RDF dataset to be used by developers (e.g., robotics or knowledge representation experts). A tutorial helps in reusing those ontology-based projects by providing: (1) easy use of the web service, (2) the opportunity to download the zip file with the ontology code when available, and (3) a table sheet with ontology URLs when available and potential issues encountered (e.g., versioning).
The generic SPARQL query to automatically retrieve all ontologies for a specific domain from the LOV4IoT ontology catalog is displayed in Listing 2. The generic variable ?domainURL makes this possible: as shown in Listing 3, when we query the robotic domain, ?domainURL is replaced by m3:Robotic. The same mechanism works for the other domains covered in this paper (e.g., IoT, emotion, health).
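Listings 2-4 are not reproduced here; the query below is only a sketch of the pattern described above, where the lov4iot: predicate names and both namespace URLs are hypothetical stand-ins for the actual dataset vocabulary:

# Sketch of the generic query pattern; predicates and namespaces are illustrative
# assumptions, not the actual Listing 2.
PREFIX m3:      <http://example.org/m3#>
PREFIX lov4iot: <http://example.org/lov4iot#>

SELECT DISTINCT ?project ?ontologyURL WHERE {
    ?project lov4iot:hasDomain      ?domainURL .   # e.g., bound to m3:Robotic
    ?project lov4iot:hasOntologyURL ?ontologyURL .
}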
Figure 11 demonstrates the web service providing all robotic ontologies. The web service is publicly accessible.Footnote 15 Note that the ontologyURL matches the one described within the RDF dataset (Listing 4). The figure shows the result returned by the SPARQL query when executed.
4.4 Summary of Knowledge API Demonstrators
We retrieve knowledge expertise from existing projects by classifying the sensors, the ontologies used to model data and applications, and the reasoning mechanisms used to interpret sensor data. We integrate knowledge from several communities (Affective Computing, designing new AI algorithms to understand IoT data; Affective Sciences, modeling emotions within ontologies; the Robotics community, designing ontologies; the IoT community, designing ontologies; etc.). The collected knowledge can be automatically retrieved within the online Knowledge API demonstrators referenced in Table 2.
5 Engineering Framework for Robotics in Smart Health and Emotional Care
ACCRA uses a framework consisting of three multi-disciplinary engineering components (Fig. 1); their maturity is illustrated in Table 3:
1. IoT capabilities for robotics (Sect. 5.1),
2. Knowledge for emotional well-being (Sect. 5.2), and
3. Co-creation (Sect. 6.1).
5.1 Engineering IoT Capabilities for Robotics
The ACCRA platform is evaluated with the mobility, daily life, and conversational applications detailed hereafter.
5.1.1 Mobility Application
The ASTRO robot acts as a walker to support frail persons during indoor walking tasks [26]. Data collected from pressure sensors located on the ASTRO handles are aggregated in real time with the AI algorithm stored on the robot to tune the physical human-robot interaction. ASTRO provides a personalized list of physical exercises and monitors the performance of the patient over time. ASTRO can also be used by caregivers to support their work, through a customized web interface stored in FIWARE: they can monitor the performance of a user over a certain period of time (i.e. statistics on the exercises performed over a day and related satisfaction with the walking service) or monitor parameters related to sarcopenia and frailty, namely handgrip strength and walking velocity [36]. Caregivers can personalize the list of exercises according to the clinical profile.
5.1.2 Daily Life Application (e.g., Well-Being)
IoT technologies are also investigated in healthcare applications (e.g., to provide an affordable way to unobtrusively monitor patients’ lifestyles in terms of the amount of physical activity suggested by international guidelines for patients with chronic conditions like diabetes [6]). We designed IAMHAPPY [48], an innovative IoT-based well-being recommendation system (ontology and rule-based) to encourage people’s happiness daily. The system helps people deal with day-to-day discomforts (e.g., minor symptoms such as headache or fever) by using home remedies and related alternative medicines (e.g., naturopathy, aromatherapy), activities to reduce stress, etc. The recommendation system queries the web-based knowledge repository for emotions that references expertise from past emotional ontology-based projects (Table 8) and well-being IoT-based applications (Table 7), reused to build applications as explained below in Sect. 5.2. The knowledge repository helps to analyze the data produced by IoT devices to understand users’ emotions and health. The knowledge repository is integrated with a rule-based engine to suggest recommendations to enhance people’s well-being every day. The recommendation system supports a naturopathy application scenario and has also been enriched with foods that boost the immune system and related scientific facts, useful to face the COVID-19 pandemic.
5.1.3 Conversational Application
Buddy challenges people with intellectual exercises to stimulate their curiosity on different topics. The results of the needs analysis (see the “Listen” phase of the LIFE methodology in Sect. 6.1) indicate differences between the Italian and the Japanese culture [31]. Japanese elderly are more interested in conversations on fashion, golf, and travel, modulated by their preferences and psychological profile, in order to improve functioning in the everyday context, thus encouraging patients’ autonomy and reducing the risk of isolation. Italian elderly use the robot as a cognitive stimulus to remember appointments and things to do during the day, to listen to music tracks, to catch up on daily news, and to show pictures in order to support patients with cognitive impairment during hospitalization. Furthermore, the robot could improve users’ comfort when using the device by automatically adapting to users’ behaviors and their personal histories.
5.2 Engineering Knowledge for Robotics Emotional Well-Being
Robots can either convey emotions via their cute interface (e.g. Buddy), or robots can understand human emotions by using: (1) cameras with face recognition (e.g. RoboHon), (2) speech with voice recognition, and (3) IoT devices (e.g., heartbeat, ECG). We designed a knowledge-based repository for emotion, health, robotics, and home (as explained in detail in Sect. 4). For instance, we classify and analyze emotional-based projects either using IoT technologies to understand human emotions (Table 7) or ontologies to model emotions (Table 8).
As an example, Fig. 12 shows reasoning from three IoT devices (thermometer, heart beat sensor, and humidity) embedded within the ASTRO mobility robots to assist stakeholders such as physicians when emergency actions must be taken.
6 Evaluation
This section evaluates our proposed solutions with stakeholders: (1) elderly people and caregivers, through the Listen, Innovate, Field-test, Evaluate (LIFE) co-creation methodology, in Sect. 6.1, and (2) researchers and IT experts, via a user form in Sect. 6.2 and via Google Analytics in Sect. 6.3.
6.1 Engineering Co-Creation for IoT Integrating Human Factors
Our approach is based on Listen, Innovate, Field-test, Evaluate (LIFE),Footnote 16 a co-creation methodology for IoT integrating human factors and focusing on: (1) user-centric design involving elderly people and caregivers, and (2) technology design involving robotics features. The LIFE methodology identifies the needs of elderly people experiencing a loss of autonomy, co-creates robotic solutions that meet these needs, field-tests their daily use, and evaluates the solutions’ sustainability.
The methodology consists of four phases, each ending with a checkpoint to ensure that the project team only passes to the next phase if the scope is clear, the goals are met, and the necessary conditions in terms of resources are in place. The phases are: (1) Needs Analysis, (2) Agile co-creation, (3) Agile pre-experiment, and (4) Final evaluations. Table 4 details the activities and checkpoints in each of the phases. The agile co-creation stage is the heart of the LIFE methodology. It is based on iterative cycles comprising four sub-steps: Co-design, Test, Develop, and Quality check meeting.
Experiments: The goal of the experimentation phase is to test the robotic solution under real conditions with a large group of end users. The experimentation only took place in Europe. For each use case, a methodology for the experimentation has been designed, taking into account the robot’s functionalities and what would (given the time constraints of the project) be the optimal duration of use. The design was either a before-and-after design or a cross-sectional design without a control group. We used mixed-method data collection consisting of surveys, interviews and, for the conversation use case, video recording. An extensive amount of data was collected, giving insight into, among other things, the perception of the robot, usability, and the quality of life of the user. The target sample per country for each use case was to recruit the following stakeholders: 20 elderly people, around 6 formal caregivers, and as many informal caregivers as available and willing to participate. In total, 100 people participated in the experiment. The results reached in the ACCRA project are beyond the scope of this paper, but will be reported in other publications.
The application usability is measured with the System Usability Scale (SUS) [77]. SUS asks specific questions such as: (1) I think that I would like to use this system frequently. (2) I found the system unnecessarily complex. (3) I thought the system was easy to use. (4) I think that I would need the support of a technical person to be able to use this system. (5) I found the various functions in this system were well integrated. (6) I thought there was too much inconsistency in this system. (7) I would imagine that most people would learn to use this system very quickly. (8) I found the system very cumbersome to use. (9) I felt very confident using the system. (10) I needed to learn a lot of things before I could get going with this system.
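For reference, the standard SUS scoring rule converts the ten 1-5 responses into a single 0-100 score (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5); a small sketch:

# Standard SUS scoring: ten responses on a 1-5 scale mapped to a 0-100 score.
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items: r-1, even items: 5-r
    return total * 2.5

print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # 80.0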
Conclusion: Agile and DevOps (introduced in Sect. 2.3) are used to co-create applications (mentioned above) with elderly and medical professionals, amongst other stakeholders. Internal sprints are done within the engineering team. Once an application is ready, it is presented to the previously mentioned multidisciplinary stakeholders during co-creation workshops for validation, refinement or further development (within external sprints), depicted in Fig. 13. Technical limitations of the study are described in Sect. 7.
6.2 Experiments of Knowledge API Demonstrators with Users Through Google Forms
To evaluate LOV4IoT, an evaluation formFootnote 17 has been set up, available on the LOV4IoT user interface. It has been filled in by 50 volunteers who used the LOV4IoT web site and found the evaluation form. We did not select their profiles but requested the information “Who are you?” in the form. We keep the evaluation form open to get additional volunteers who can still provide their feedback; for this reason, the statistics can evolve. This form demonstrates that the synthesis and classification work on numerous ontology-based projects is useful for other developers and researchers, and not only designed for the IoT research field. It helps users with their state of the art or with finding and reusing existing ontologies. Sometimes the results do not add up to 100%, when the question was not mandatory or when we later added a new question to get more information. The LOV4IoT evaluation form contains the following questions and results.
- Who are you? Users are either: 37% semantic-based IoT developers, 37% IoT developers, 11% ontology matching tool experts, and 9% domain experts. It means that the dataset is mainly used by the IoT community.
- Which domain ontologies are you looking for? 40% of users are interested in smart home ontologies (the highest score), 28% in health ontologies, 10% in emotion. It means that users are interested in most of the domains that we cover.
- How did you find this tool? 23% found the LOV4IoT tool thanks to search engines, 23% thanks to emails that we sent to ask people to share their domain knowledge or to fill in this form, 19% thanks to research articles, and 13% thanks to people who recommended this tool. Everybody can find and use this tool, not necessarily researchers.
- Do you trust the results since we reference research articles? 57% of users trust the LOV4IoT tool since we reference research articles, 40% are partially convinced. It means they consider this dataset as a reliable source.
- In which information are you interested? 72% of users are interested in the ontology URLs referenced, 66% of users are interested in research articles, 37% in technologies, 27% in rules, and 27% in sensors used. The classification and description of each project is beneficial for our users.
- Do you use this web page for your state of the art? 40% of users answered yes, frequently, 32% yes, and 28% no. Thanks to this work, users save time by doing the state of the art on our dataset.
- In your further IoT application developments, do you think you will use this web page again? 59% of users answered yes, frequently, 36% yes, and 4% no. This result is really encouraging to maintain the dataset for domain and IoT experts.
- In general, do you think this web page is useful? 69% of users answered yes, frequently, 28% yes, and 2% no. This result is really encouraging to maintain the dataset and add new functionalities.
- Would you recommend this web page to other colleagues involved in ontology-based IoT development projects? 82% of users answered yes, frequently, 16% yes, and 2% no. This result is really encouraging to maintain the dataset and add new functionalities.
This evaluation shows that LOV4IoT is highly relevant for the IoT community. The results are encouraging for updating the dataset with additional domains and ontologies. LOV4IoT led to the AIOTI (Alliance for the Internet of Things Innovation) IoT ontology landscape survey formFootnote 18 and analysis result,Footnote 19 executed by the WG03 Standards - Semantic Interoperability Expert Group. It aims to help industrial practitioners and non-experts to answer the following questions: Which ontologies are relevant in a certain domain? Where to find them? How to choose the most suitable? Who is maintaining and taking care of their evolution?
6.3 Experiments of Knowledge API Demonstrators with Google Analytics
Google Analytics has been set up since August 2014. By January 2021, more than 955 visits had been made to the ontology pages (all ontologies), with 794 unique visits. Since we realized that the LOV4IoT ontology page was one of the most visited (after the home web page) of the entire “Semantic Web of Things” web site, we decided to create a dedicated web site for LOV4IoT. In April 2018, Google Analytics was set up on the dedicated LOV4IoT web site, as depicted in Fig. 14. We can see a constant increase in the number of visits. Because of such encouraging results, we also started to split the web page per domain to clearly observe the IoT domains that interest the audience (Fig. 15). It shows that the robotics, emotion, and home domains referencing existing ontologies are the most visited: robotics with more than 6200 page views, emotion with more than 4800 page views, and home with more than 4700 page views. This means that visitors return several times to this dataset, which demonstrates its usefulness. However, we are aware of the need for front-end developers to enhance the GUI user experience. As mentioned previously, LOV4IoT led to the AIOTI IoT ontology landscape survey.
In the same way, we have recently refactored the S-LOR rule discovery web page into a dedicated web site and then split it per domain; the Google Analytics results are not impressive yet.
7 Key Contributions and Lessons Learnt
Internet of Robotic Things (IoRT): the Scuola Superiore Sant’Anna ACCRA partner is a pioneer of the IoRT field [115]; several partners have an interest in contributing more to the convergence of the IoT and robotics communities and technologies [94, 106]. Cloud Robotics [57, 60, 99]: the reasoning engine component is available on the Cloud to benefit from its computational power. The intelligence of the robot can be downloaded from the Cloud according to the application needs. The reasoning engine is flexible enough to be used either on the Cloud or embedded on devices, and has already been tested on Android-powered devices in previous projects, which means that any robot platform compatible with Android should be able to run the reasoning engine and the Knowledge API.
Unifying Models: A common language allowing machines and devices to talk with each other is necessary; we found that robots need a common language to exchange information with each other (more sophisticated than RoboEarth [124] and RobotML [28]). Our findings show that this can be achieved with semantic web technologies (e.g., ontologies). The IEEE RAS Ontologies for Robotics and Automation Working Group [89] is becoming known by the IoRT community [115]. There is no emotion ontology endorsed by standards such as IEEE, ISO, or W3C yet. The reuse and integration of existing ontologies is a technical challenge to face; there is a need to disseminate semantic web best practices [52] more widely to communities such as IoT, Robotics, Affective Sciences, etc.
Autonomous robots: The IEEE 1872.2 ontology for autonomous robotics standardization designs ontological patterns to be reused among heterogeneous robots. The LOV4IoT-Robotics catalog has been refined with knowledge from the IEEE robotics experts (see the survey of ontology-based approaches to robot autonomy [83]). LOV4IoT-Robotics classifies 55 ontology-based projects (as of August 2020), many more projects than those covered by the survey.
Trust (e.g., Security and Privacy): Cloud technologies bring new AI capabilities; however, we are aware of trust issues (including security and privacy) to be addressed in order to be compliant with the European Commission vision conveyed within its white paper [24].
8 Conclusion and Future Work
Three social robots (Buddy, ASTRO, and RoboHon) assist elderly people to stay independent at home and improve their socialization in the context of the ACCRA project. The robots embed three kinds of emotional-based applications (mobility, daily life and conversational), conceived with the Listen, Innovate, Field-test, Evaluate (LIFE) co-creation methodology for IoT, integrating human factors to fit the elderly’s needs. We also have detailed the knowledge API to reuse expertise from cross-domain communities (e.g., robotics, IoT, health, and affective sciences).
Future work is to contribute to the IEEE 1872.2 ontology for autonomous robotics and to apply it to deploy innovative cross-domain applications. Applications will use more wearables with elderly people through co-creation methodologies and will integrate ontologies from various domains (e.g., robotics, IoT, Ambient Assisted Living). Another way to enhance the evaluation of the usability of applications is through the World Health Organisation Quality of Life assessment (WHOQOL) [47], which asks questions about overall quality of life and general health. We will continue the work on the Personalized Health Knowledge Graph [51]. Automatic knowledge extraction from ontologies and from the scientific publications describing their purpose is challenging (see our AI4EU Knowledge Extraction for the Web of Things challenge); the aim is to reuse the expertise designed by robotics and domain experts (e.g., physicians) and to make it usable by machines.
References
ACCRA D1.3 Methodology Handbook and Instruction Videos
Abaalkhail R, Guthier B, Alharthi R, El Saddik A (2018) Survey on ontologies for affective states and their influences. Semantic web 9(4):441–458
Afzal M, Ali SI, Ali R, Hussain M, Ali T, Khan WA, Amin MB, Kang BH, Lee S (2018) Personalization of wellness recommendations using contextual interpretation. Expert Syst Appl 96:506–521
Ahmed F (2017) An internet of things (IoT) application for predicting the quantity of future heart attack patients. Int J Comput Appl 164(6):36–40
Al-Taee MA, Al-Nuaimy W, Muhsin ZJ, Al-Ataby A (2016) Robot assistant in management of diabetes in children based on the internet of things. IEEE Internet Things J 4(2):437–445
American Diabetes Association (2019) Standards of medical care in diabetes-2019, abridged for primary care providers
Angelidou R (2015) Development of a portable system for collecting and processing bio-signals and sounds to support the diagnosis of sleep Apnea. Master’s thesis
Arguedas M, Xhafa F, Daradoumis T, Caballe S (2015) An ontology about emotion awareness and affective feedback in elearning. In: Proceedings of the 2015 international conference on intelligent networking and collaborative systems, IEEE, pp 156–163
Azkune G, Orduna P, Laiseca X, Castillejo E, López-de Ipiña D, Loitxate M, Azpiazu J (2013) Semantic framework for social robot self-configuration. Sensors 13(6):7004–7020
Balakirsky S, Kootbally Z, Schlenoff C, Kramer T, Gupta S (2012) An industrial robotic knowledge representation for kit building applications. In: Proceedings of the 2012 IEEE/RSJ international conference on intelligent robots and systems, IEEE, pp 1365–1370
Baldoni M, Baroglio C, Patti V, Rena P (2012) From tags to emotions: ontology-driven sentiment analysis in the social semantic web. Intelligenza Artificiale 6(1):41–54
Barrett LF (2017) How emotions are made: the secret life of the brain. Houghton Mifflin Harcourt, Boston
Bauer M, Baqa H, Bilbao S, Corchero A, Daniele L, Esnaola I, Fernandez I, Franberg O, Garcia-Castro R, Girod-Genet M, Guillemin P, Gyrard A, Kaed CE, Kung A, Lee J, Lefrançois M, Li W, Raggett D, Wetterwald M (2019) Semantic IoT solutions: a developer perspective (semantic interoperability white paper part I)
Benta KI, Rarău A, Cremene M (2007) Ontology based affective context representation. In: Proceedings of the 2007 Euro American conference on telematics and information systems, pp 1–9
Bermejo-Alonso J, Sanz R, Rodríguez M, Hernández C (2010) An ontological framework for autonomous systems modelling. Int J Adv Intel Syst 3(3):4
Berthelon F, Sander P (2013) Emotion ontology for context awareness. In: Proceedings of the 2013 IEEE 4th international conference on cognitive infocommunications (CogInfoCom), IEEE, pp 59–64
Breuning LG (2015) Habits of a happy brain: retrain your brain to boost your serotonin, dopamine, oxytocin, and endorphin levels. Simon and Schuster, New York
Budgen D, Brereton P (2006) Performing systematic literature reviews in software engineering. In: Proceedings of the 28th international conference on Software engineering, pp 1051–1052
Budner P, Eirich J, Gloor PA (2017) Making you happy makes me happy-measuring individual mood with smartwatches. arXiv preprint arXiv:1711.06134
Chang KH, Fisher D, Canny J, Hartmann B (2011) Hows my mood and stress? An efficient speech analysis library for unobtrusive monitoring on mobile phones. In: Proceedings of the 6th international conference on body area networks, pp 71–77
Chatterjee R, Matsuno F (2005) Robot description ontology and disaster scene description ontology: analysis of necessity and scope in rescue infrastructure context. Adv Robot 19(8):839–859
Chella A, Cossentino M, Pirrone R, Ruisi A (2002) Modeling ontologies for robotic environments. In: Proceedings of the 14th international conference on Software engineering and knowledge engineering, pp 77–80
Church K, Hoggan E, Oliver N (2010) A study of mobile mood awareness and communication through mobimood. In: Proceedings of the 6th Nordic conference on human-computer interaction: extending boundaries, pp 128–137
European Commission (2020) White paper on artificial intelligence: a European approach to excellence and trust
ACCRA Consortium (2020) D5.3 Platform environment for marketplace
Coviello L, Cavallo F, Limosani R, Rovini E, Fiorini L (2019) Machine learning based physical human-robot interaction for walking support of frail people. In: Proceedings of the 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC), IEEE, pp 3404–3407
Dhouib S, Du Lac N, Farges JL, Gerard S, Hemaissia-Jeannin M, Lahera-Perez J, Millet S, Patin B, Stinckwich S (2011) Control architecture concepts and properties of an ontology devoted to exchanges in mobile robotics. In: Proceedings of the 6th national conference on control architectures of robots, p 24
Dhouib S, Kchir S, Stinckwich S, Ziadi T, Ziane M (2012) RobotML, a domain-specific language to design, simulate and deploy robotic applications. International conference on simulation, modeling, and programming for autonomous robots. Springer, New York, pp 149–160
Dogmus Z, Papantoniou A, Kilinc M, Yildirim SA, Erdem E, Patoglu V (2013) Rehabilitation robotics ontology on the cloud. In: Proceedings of the 2013 IEEE 13th international conference on rehabilitation robotics (ICORR), IEEE, pp 1–6
Dogmus Z, Erdem E, Patoglu V (2015) Rehabrobo-onto: design, development and maintenance of a rehabilitation robotics ontology on the cloud. Robot Comput Integ Manuf 33:100–109
D'Onofrio G, Fiorini L, Hoshino H, Matsumori A, Okabe Y, Tsukamoto M, Limosani R, Vitanza A, Greco F, Greco A et al (2019) Assistive robots for socialization in elderly people: results pertaining to the needs of the users. Aging Clin Exp Res 31(9):1313–1329
Ekman P, Davidson RJ (1994) The nature of emotion. Oxford University Press, New York
Ekman P, Yamey G (2004) Emotions revealed: recognising facial expressions. Stud BMJ 12:140–142
Sanders EBN, Stappers PJ (2012) Convivial toolbox: generative research for the front end of design. BIS Publishers, Amsterdam
Eyharabide V, Amandi A, Courgeon M, Clavel C, Zakaria C, Martin JC (2011) An ontology for predicting students’ emotions during a quiz. Comparison with self-reported emotions. In: Proceedings of the 2011 IEEE workshop on affective computational intelligence (WACI), IEEE, pp 1–8
Fiorini L, D'Onofrio G, Rovini E, Sorrentino A, Coviello L, Limosani R, Sancarlo D, Cavallo F (2019) A robot-mediated assessment of Tinetti balance scale for sarcopenia evaluation in frail elderly. In: Proceedings of the 2019 28th IEEE international conference on robot and human interactive communication (RO-MAN), IEEE, pp 1–6
Francisco V, Gervás P, Peinado F (2007) Ontological reasoning to configure emotional voice synthesis. International conference on web reasoning and rule systems. Springer, New York, pp 88–102
Futami K, Yanagisawa Y, Hoshino H, Matsumori A, Tsukamoto M, Kotani D, Okabe Y (2019) Data distribution infrastructure and applications for robotic therapy for blind elderly. In: Adjunct proceedings of the 2019 ACM international joint conference on pervasive and ubiquitous computing and proceedings of the 2019 ACM international symposium on wearable computers, pp 61–64
Garcia-Ceja E, Riegler M, Nordgreen T, Jakobsen P, Oedegaard KJ, Tørresen J (2018) Mental health monitoring with multimodal sensing and machine learning: a survey. Pervasive Mob Comput 51:1–26
Garcia-Ceja E, et al (2016) Automatic stress detection in working environments from smartphones accelerometer data: a first step. IEEE J Biomed Health Inform
García-Rojas A, et al (2006) Emotional body expression parameters in virtual human ontology
Ghafurian M, Ellard C, Dautenhahn K (2020) Social companion robots to reduce isolation: a perception change due to covid-19. arXiv preprint arXiv:2008.05382
Gil R, Virgili-Gomá J, García R, Mason C (2015) Emotions ontology for collaborative modelling and learning of emotional responses. Comput Hum Behav 51:610–617
Gonçalves PJ (2016) Ontologies applied to surgical robotics. Robot 2015: second Iberian robotics conference. Springer, New York, pp 479–489
Grassi M (2009) Developing HEO human emotions ontology. European workshop on biometrics and identity management. Springer, New York, pp 244–251
Grea A, Saraydaryan J, Jumel F (XXXX) A robotic and automation services ontology
The WHOQOL Group (1998) The World Health Organization quality of life assessment (WHOQOL): development and general psychometric properties. Soc Sci Med 46(12):1569–1585
Gyrard A, Sheth A (2019) IAMHAPPY: towards an IoT knowledge-based cross-domain well-being recommendation system for everyday happiness
Gyrard A, Bonnet C, Boudaoud K, Serrano M (2016) LOV4IoT: a second life for ontology-based domain knowledge to build semantic web of things applications. In: IEEE international conference on future internet of things and cloud
Gyrard A, Serrano M, Datta S, Jares J, Intizar A (2017) Sensor-based linked open rules (S-LOR): an automated rule discovery approach for IoT applications and its use in smart cities. In: Smart City Workshop (AW4city) in conjunction with WWW, ACM
Gyrard A, Gaur M, Thirunarayan K, Sheth A, Shekarpour S (2018) Personalized health knowledge graph. In: Proceedings of the 1st workshop on contextualized knowledge graph (CKG) co-located with international semantic web conference (ISWC), 8–12 October 2018, Monterey, USA
Gyrard A, Atemezing G, Serrano M (2021) PerfectO: an online toolkit for improving quality, accessibility, and classification of domain-based ontologies. Springer, New York
Haidegger T, Barreto M, Gonçalves P, Habib MK, Ragavan SKV, Li H, Vaccarella A, Perrone R, Prestes E (2013) Applied ontologies and standards for service robots. Robot Auton Syst 61(11):1215–1223
Hastings J, Ceusters W, Smith B, Mulligan K (2011) The emotion ontology: enabling interdisciplinary research in the affective sciences. International and interdisciplinary conference on modeling and using context. Springer, New York, pp 119–123
Honold F, Schüssel F, Panayotova K, Weber M (2012) The nonverbal toolkit: towards a framework for automatic integration of nonverbal communication into virtual environments. In: Proceedings of the 2012 eighth international conference on intelligent environments, IEEE, pp 243–250
Hotz L, Neumann B, Von Riegen S, Worch N (2012) Using ontology-based experiences for supporting robot tasks-position paper. Machine learning for interactive systems: bridging the gap between language, motor p 17
Hu G, Tay WP, Wen Y (2012) Cloud robotics: architecture, challenges and applications. IEEE Netw 26(3):21–28
Jangid N, Sharma B (2016) Cloud computing and robotics for disaster management. In: Proceedings of the 2016 7th international conference on intelligent systems, modelling and simulation (ISMS), IEEE, pp 20–24
Kamilaris A, Botteghi N (2020) The penetration of internet of things in robotics: towards a web of robotic things. J Amb Intel Smart Environ (Preprint) 1–22
Kehoe B, Patil S, Abbeel P, Goldberg K (2015) A survey of research on cloud robotics and automation. IEEE Trans Autom Sci Eng 12(2):398–409
Kim JY, Liu N, Tan HX, Chu CH (2017) Unobtrusive monitoring to detect depression for elderly with chronic illnesses. IEEE Sens J 17(17):5694–5704
Kitchenham B, Pretorius R, Budgen D, Brereton OP, Turner M, Niazi M, Linkman S (2010) Systematic literature reviews in software engineering: a tertiary study. Inform Softw Technol
Koelstra S, et al (2012) DEAP: a database for emotion analysis using physiological signals. IEEE Trans Affect Comput
Koubaa A (2015) ROS as a service: web services for Robot Operating System. J Softw Eng Robot 6(1):1–14
Lane ND, Mohammod M, Lin M, Yang X, Lu H, Ali S, Doryab A, Berke E, Choudhury T, Campbell A (2011) BeWell: a smartphone application to monitor, model and promote wellbeing. In: Proceedings of the 5th international ICST conference on pervasive computing technologies for healthcare, pp 23–26
Laxminarayan P (2004) Exploratory analysis of human sleep data. PhD thesis, Worcester Polytechnic Institute
Lemaignan S, Ros R, Mösenlechner L, Alami R, Beetz M (2010) Oro, a knowledge management platform for cognitive architectures in robotics. In: Proceedings of the 2010 IEEE/RSJ international conference on intelligent robots and systems, IEEE, pp 3548–3553
Li X, Bilbao S, Martín-Wanton T, Bastos J, Rodriguez J (2017) Swarms ontology: a common information model for the cooperation of underwater robots. Sensors 17(3):569
LiKamWa R, Liu Y, Lane ND, Zhong L (2013) MoodScope: building a mood sensor from smartphone usage patterns. In: Proceedings of the 11th annual international conference on mobile systems, applications, and services, pp 389–402
Lim GH, Suh IH, Suh H (2011) Ontology-based unified robot knowledge for service robots in indoor environments. IEEE Trans Syst Man Cybern A Syst Hum
Lim TP, Husain W, Zakaria N (2013) Recommender system for personalised wellness therapy. Int J Adv Comput Sci Appl 4
Lin R, Liang C, Duan R, Chen Y, Tao C et al (2018) Visualized emotion ontology: a model for representing visual cues of emotions. BMC Med Inform Decis Mak 18(2):101–113
Lin Y, Jessurun J, De Vries B, Timmermans H (2011) Motivate: towards context-aware recommendation mobile system for healthy living. In: Proceedings of the 2011 5th international conference on pervasive computing technologies for healthcare (PervasiveHealth) and workshops, IEEE, pp 250–253
López JM, Gil R, García R, Cearreta I, Garay N (2008) Towards an ontology for describing emotions. World summit on knowledge society. Springer, New York, pp 96–104
Lortal G, Dhouib S, Gérard S (2010) Integrating ontological domain knowledge into a robotic DSL. International conference on model driven engineering languages and systems. Springer, New York, pp 401–414
Lu H, Frauendorfer D, Rabbi M, Mast MS, Chittaranjan GT, Campbell AT, Gatica-Perez D, Choudhury T (2012) StressSense: detecting stress in unconstrained acoustic environments using smartphones. In: Proceedings of the 2012 ACM conference on ubiquitous computing, pp 351–360
Martins AI, Rosa AF, Queirós A, Silva A, Rocha NP (2015) European Portuguese validation of the System Usability Scale (SUS). Proc Comput Sci 67:293–300
Mouradian C, Yangui S, Glitho RH (2018) Robots as-a-service in cloud computing: Search and rescue in large-scale disasters case study. In: Proceedings of the 2018 15th IEEE Annual consumer communications and networking conference (CCNC), IEEE, pp 1–7
Murdock P, Bassbouss L, Bauer M, Alaya MB, Bhowmik R, Brett P, Chakraborty RN, Dadas M, Davies J, Diab W, et al (2016) Semantic interoperability for the web of things. White paper
Nocentini O, Fiorini L, Acerbi G, Sorrentino A, Mancioppi G, Cavallo F (2019) A survey of behavioral models for social robots. Robotics 8(3):54
Nouh RM, Lee HH, Lee WJ, Lee JD (2019) A smart recommender based on hybrid learning methods for personal well-being services. Sensors 19(2):431
Obrenovic Z, Garay N, López JM, Fajardo I, Cearreta I (2005) An ontology for description of emotional cues. International conference on affective computing and intelligent interaction. Springer, New York, pp 505–512
Olivares-Alarcos A, Beßler D, Khamis A, Goncalves P, Habib MK, Bermejo J, Barreto M, Diab M, Rosell J, Quintas J, Olszewska J, Nakawala H, Pignaton E, Gyrard A, Borgo S, Alenya G, Beetz M, Li H (2019) A review and comparison of ontology-based approaches to robot autonomy
Olszewska JI, Barreto M, Bermejo-Alonso J, Carbonera J, Chibani A, Fiorini S, Goncalves P, Habib M, Khamis A, Olivares A, et al (2017) Ontology for autonomous robotics. In: Proceedings of the 2017 26th IEEE international symposium on robot and human interactive communication (RO-MAN), IEEE, pp 189–194
Onyeulo EB, Gandhi V (2020) What makes a social robot good at interacting with humans? Information 11(1):43
Paulius D, Sun Y (2019) A survey of knowledge representation in service robotics. Robot Auton Syst 118:13–30
Paull L, Severac G, Raffo GV, Angel JM, Boley H, Durst PJ, Gray W, Habib M, Nguyen B, Ragavan SV, et al (2012) Towards an ontology for autonomous robots. In: Proceedings of the 2012 IEEE/RSJ international conference on intelligent robots and systems, IEEE, pp 1359–1364
Picard RW (2000) Affective computing. MIT Press, Cambridge
Prestes E, Carbonera JL, Fiorini SR, Jorge VA, Abel M, Madhavan R, Locoro A, Goncalves P, Barreto ME, Habib M et al (2013) Towards a core ontology for robotics and automation. Robot Auton Syst 61(11):1193–1204
Prestes E, Fiorini SR, Carbonera J (2014) Core ontology for robotics and automation. In: Proceedings of the 18th workshop on knowledge representation and ontologies for robotics and automation, p 7
Ptaszynski M, Rzepka R, Araki K, Momouchi Y (2012) A robust ontology of emotion objects. In: Proceedings of the eighteenth annual meeting of the association for natural language processing (NLP-2012), pp 719–722
Rabbi M, Ali S, Choudhury T, Berke E (2011) Passive and in-situ assessment of mental and physical well-being using mobile sensors. In: Proceedings of the 13th international conference on Ubiquitous computing, pp 385–394
Radulovic F, Milikic N (2009) Smiley ontology. In: Proceedings of the 1st international workshop on social networks interoperability
Ray PP (2016) Internet of robotic things: concept, technologies, and challenges. IEEE Access 4:9489–9500
Retto J (2017) Sophia, first citizen robot of the world
Rizzo G, Tomassetti F, Vetro A, Ardito L, Torchiano M, Morisio M, Troncy R (2017) Semantic enrichment for recommendation of primary studies in a systematic literature review. Dig Scholar Hum 32(1):195–208
Roy Chowdhury A (2017) IoT and robotics: a synergy. PeerJ Preprints 5:e2760v1
Sabri L, Bouznad S, Rama Fiorini S, Chibani A, Prestes E, Amirat Y (2018) An integrated semantic framework for designing context-aware internet of robotic things systems. Integ Comput Aided Eng 25(2):137–156
Saha O, Dasgupta P (2018) A comprehensive survey of recent trends in cloud robotics architectures and applications. Robotics 7(3):47
Sánchez-Rada JF, Iglesias CA (2016) Onyx: a linked data approach to emotion representation. Inform Process Manag 52(1):99–114
Saraydaryan J, Jumel F, Guenard A (2014) Astro: architecture of services toward robotic objects. Int J Comput Sci Issues (IJCSI) 11(4):1
Saxena A, Jain A, Sener O, Jami A, Misra DK, Koppula HS (2014) RoboBrain: large-scale knowledge engine for robots. arXiv preprint arXiv:1412.0691
Schlenoff C, Messina E (2005) A robot ontology for urban search and rescue. In: Proceedings of the 2005 ACM workshop on Research in knowledge representation for autonomous systems, pp 27–34
Seligman M (2012) Flourish: a visionary new understanding of happiness and well-being. Simon and Schuster, New York
Sener O (2016) Learning from large-scale visual data for robots. Cornell University, New York
Simoens P, Dragone M, Saffiotti A (2018) The internet of robotic things: a review of concept, added value and applications. Int J Adv Rob Syst 15(1):1729881418759424
Sykora M, Jackson T, O’Brien A, Elayan S (2013) Emotive ontology: extracting fine-grained emotions from terse, informal messages
Tabassum H, Ahmed S (2016) Emotion: an ontology for emotion analysis. In: Proceedings of the 1st national conference on emerging trends and innovations in computing and technology, Karachi, Pakistan
Tapia SAA, Gomez AHF, Corbacho JB, Ratte S, Torres-Diaz J, Torres-Carrion PV, Garcia JM (2014) A contribution to the method of automatic identification of human emotions by using semantic structures. In: Proceedings of the 2014 international conference on interactive collaborative learning (ICL), IEEE, pp 60–70
Tenorth M, Beetz M (2013) KnowRob: A knowledge processing infrastructure for cognition-enabled robots. Int J Robot Res
Tenorth M, Beetz M (2017) Representations for robot knowledge in the KnowRob framework. Artif Intell 247:151–169
Tiddi I, Bastianelli E, Bardaro G, d'Aquin M, Motta E (2017) An ontology-based approach to improve the accessibility of ROS-based robotic systems. In: Proceedings of the knowledge capture conference, pp 1–8
Tiddi I, Bastianelli E, Daga E, d'Aquin M, Motta E (2020) Robot-city interaction: mapping the research landscape – a survey of the interactions between robots and modern cities. Int J Soc Robot 12(2):299–324
Tosello E, Fan Z, Castro AG, Pagello E (2018) RTASK: a cloud-based knowledge engine for robot task and motion planning
Vermesan O, Bröring A, Tragos E, Serrano M, Bacciu D, Chessa S, Gallicchio C, Micheli A, Dragone M, Saffiotti A, et al (2017) Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms
Vorobieva H, Soury M, Hède P, Leroux C, Morignot P (2010) Object recognition and ontology for manipulation with an assistant robot. International conference on smart homes and health telematics. Springer, New York, pp 178–185
Waibel M, Beetz M, Civera J, D'Andrea R, Elfring J, Galvez-Lopez D, Häussermann K, Janssen R, Montiel J, Perzylo A et al (2011) A world wide web for robots. IEEE Robot Autom Mag 18(2):69–82
Wang E, Kim YS, Kim HS, Son JH, Lee S, Suh IH (2005) Ontology modeling and storage system for robot context understanding. International conference on knowledge-based and intelligent information and engineering systems. Springer, New York, pp 922–929
Yacchirema DC, Sarabia-Jácome D, Palau CE, Esteve M (2018) A smart system for sleep monitoring by integrating IoT with big data analytics. IEEE Access 6:35988–36001
Yan J, Bracewell DB, Ren F, Kuroiwa S (2008) The creation of a Chinese emotion ontology based on HowNet. Eng Lett 16:1
Yoon S, Sim JK, Cho YH (2016) A flexible and wearable human stress monitoring patch. Sci Rep 6(1):1–11
Zander S, Ahmed N, Frank MT (2016) A survey about the usage of semantic technologies for the description of robotic components and capabilities. In: SAMI@iKNOW
Zhou D, Luo J, Silenzio VM, Zhou Y, Hu J, Currier G, Kautz H (2015) Tackling mental health by integrating unobtrusive multimodal sensing. In: Twenty-ninth AAAI conference on artificial intelligence
Zweigle O, van de Molengraft R, d'Andrea R, Häussermann K (2009) RoboEarth: connecting robots worldwide. In: Proceedings of the 2nd international conference on interaction sciences: information technology, culture and human, pp 184–191
Acknowledgements
This work has partially received funding from the European Union's Horizon 2020 research and innovation program (ACCRA) under grant agreement No. 738251, the National Institute of Information and Communications Technology (NICT) of Japan, and AI4EU under grant agreement No. 825619. We would like to thank the ACCRA partners for their valuable comments. The opinions expressed are those of the authors and do not reflect those of the sponsors.