The candidate’s thesis falls within the emerging topic of mulsemedia systems, which encompass audiovisual content associated with multisensory effects, users’ quality of experience, and human-computer interaction. It presents a conceptual architecture and a framework that take into account the challenges and requirements for mulsemedia delivery systems, identified from the gaps and shortcomings in related work. Furthermore, the work delivers multiple valuable contributions, grounded in a solid and comprehensive experimental setup, which have been shared and published in relevant journals and conferences.
The annual ACM Multimedia Conference was held in Nice, France, from October 21st to 25th, 2019. The 27th in its series, it attracted approximately 800 participants from all over the world. Among them were the student volunteers who supported the smooth organization of the conference.
Multisensory experiences have been increasingly undertaken in the digital world. With the emerging interest in immersive applications (e.g. 360° videos and virtual reality), more and more researchers and practitioners are pursuing ways to take these experiences to the next level, adding sensations that go beyond seeing and hearing. This one-day workshop aims at identifying the current practices, challenges, opportunities, and limitations to be overcome in the quest to transcend the overwhelmingly bisensorial nature of digital multimedia into a truly multisensory one, by fostering discussions on the use of olfactory, gustatory, and tactile effects in digital experiences.
Multisensory effect devices are increasingly being developed by research groups and companies aiming to improve users’ Quality of Experience (QoE) by creating sensations of vibration, touch, smell, wind, and so on. This short paper summarizes recent efforts on tactile, olfactory, and gustatory devices to create more immersive experiences for human-computer interaction.
Understanding users’ Quality of Experience (QoE) through objective metrics has been increasingly explored in multisensory research. However, capturing physiological data adds a degree of difficulty to an already complex environment composed of software to reproduce content and actuators to deliver sensory effects. In this paper, we introduce the potential use of remote patient monitoring (RPM) systems to monitor users’ QoE through a specific tool named HealthDash. We aim to raise discussion around their role in digital multisensory experiences: their application, advantages and disadvantages, and the challenges and opportunities they present.
Mulsemedia applications have become increasingly popular, and many efforts have been made to increase users’ Quality of Experience (QoE) with them. From the users’ perspective, it is crucial that systems produce high levels of enjoyment and utility. Thus, many experimental tools have been developed and applied for different purposes such as entertainment, health, and culture. Despite that, little attention has been paid to the evaluation of mulsemedia tools and platforms. In this paper, we present a time evaluation of the integration between a distributed mulsemedia platform called PlaySEM and an interactive application in which users interact by gestures, in order to discover how long this process takes. We describe the test scenario and our approach for measuring this integration. Then, we discuss the results and point out implications to be taken into account for future similar solutions. The results showed values in the range of 27 ms to 67 ms on average spent throughout the process before the effective activation of the sensory effect devices on a wired network.
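The kind of measurement described above (timing the span between an interactive event and the effective activation of sensory effect devices) can be sketched as follows. All names here are hypothetical and the real PlaySEM instrumentation differs; the dispatch callable stands in for a networked round trip:

```python
import time

def measure_roundtrip(dispatch, n_trials=10):
    """Average elapsed time (ms) between sending an event and
    receiving its acknowledgement, over n_trials runs."""
    samples = []
    for _ in range(n_trials):
        start = time.perf_counter()
        dispatch()  # send event, block until the renderer acknowledges
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

# Stand-in for a real dispatch (e.g. a gesture event sent to a
# mulsemedia renderer); here it just simulates a ~2 ms delay.
avg = measure_roundtrip(lambda: time.sleep(0.002))
```

Averaging over several trials, as above, smooths out scheduler jitter, which matters when the quantity of interest is in the tens of milliseconds.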
Agricultural greenhouses have improved productivity in the cultivation of specific crops. Efforts towards the automation of these environments have been carried out with the application of different technologies. However, there is still no solution that covers the complete chain of the automation process. The Internet of Things (IoT) can offer solutions for the modernization of agricultural environments, making it possible to automate processes, predict situations, and improve production activities. Moreover, IoT solutions and business processes should be integrated into a common framework to provide more efficient production process control, since business rules are often dynamic and may vary according to agricultural practices. Accordingly, we propose an architecture for sensing and actuating in controlled agricultural environments that uses business rules modeling as the central artifact for the automation of production chains. To validate the proposed architecture, we present a case study in which we describe the implementation and tools to support vegetable production. The study showed that the architecture is feasible for monitoring crops, helping to maximize yield, and controlling the use of inputs and agrochemicals. Furthermore, it caters for crop monitoring in real time, offering information to producers to aid decision making.
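The rule-driven sensing/actuating idea described above can be sketched minimally as condition-action pairs evaluated over sensor readings. The sensor names, thresholds, and actuator commands below are hypothetical illustrations, not the architecture's actual business-rule model:

```python
# Each rule maps a condition over sensor readings to an actuator command.
# In the proposed architecture this role is played by modeled business
# rules; here a plain lambda stands in for the condition.
rules = [
    {"if": lambda s: s["temperature"] > 30.0, "then": ("fan", "on")},
    {"if": lambda s: s["soil_moisture"] < 0.2, "then": ("irrigation", "on")},
]

def evaluate(rules, readings):
    """Return the actuator commands triggered by the current readings."""
    return [rule["then"] for rule in rules if rule["if"](readings)]

# Hot greenhouse, moist soil: only the temperature rule fires.
commands = evaluate(rules, {"temperature": 32.5, "soil_moisture": 0.35})
```

Keeping rules as data rather than hard-coded branches is what allows them to vary with agricultural practices without changing the sensing/actuating core.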
MulSeMedia refers to the combination of traditional media (e.g. text, image, and video) with other objects that aim to stimulate other human senses, such as mechanoreceptors, chemoreceptors, and thermoreceptors. Existing solutions embed the control of actuators in the applications, thus limiting their reuse in other types of applications or with different media players. This work presents PlaySEM, a platform that brings a new approach for simulating and rendering sensory effects: it operates independently of any media player, is compatible with the MPEG-V standard, and takes the reuse requirement into account. Conjectures regarding this architecture are tested, focusing on the decoupled operation of the renderer.
In order to create immersive experiences in virtual worlds, we need to explore different human senses (sight, hearing, smell, taste, and touch). Many different devices have been developed by both industry and academia towards this aim. In this paper, we focus on thermal and wind devices that deliver sensations of heat and cold against people’s skin, and on their application to human-computer interaction (HCI). First, we present a review of the devices and features identified as relevant. Then, we discuss users’ experience with thermal and wind devices, highlighting limitations either found or inferred by the authors and the studies selected for this survey. Accordingly, from the current literature, we can infer that, in wind and temperature-based haptic systems, (i) users experience wind effects produced by fans that move air molecules at room temperature, and (ii) there is no integration of thermal components to devices intended for t...
The advent of the Kinect triggered the growth of applications aimed at natural interaction, gesture recognition, and interactive environments. Over time, we realized that interface solutions based on such types of interaction grew in a rapid and disorderly way, without any concern for the formalization of development stages. In addition, issues related to the way these interactions are represented, the context in which they occur, and the environment's behavior in response to these interactions became relevant. In this sense, this paper contributes a practical approach to a Where-What-Why-How model for the development of interactive environments. The proposal focuses on three main points: (i) the actions that must be performed by the interactive environment; (ii) the situations that trigger the execution of those actions by the environment; and (iii) the expected behavior once the situations have been recognized. The proposal is illustrated through a complete ca...
Quality of Experience (QoE) is indelibly linked to the human side of the multimedia experience. Surprisingly, however, there is a paucity of research exploring the impact that human factors have in determining QoE. Whilst this is true of multimedia, it is even more starkly so for mulsemedia, that is, applications that involve media engaging three or more human senses. Hence, in the study reported in this paper, we focus on an exciting subset of mulsemedia applications, 360° mulsemedia, particularly important given that the upcoming 5G technology is foreseen to be a key enabler for the proliferation of immersive Virtual Reality (VR) applications. Accordingly, we study the impact that human factors such as gender, age, prior computing experience, and smell sensitivity have on 360° mulsemedia QoE. Results offered insight into the potential of 360° mulsemedia to inspire and to enrich experiences for Generation Z, a generation empowered by rapidly advancing technology. Patterns of prior media usage and smell sensitivity also play an important role in influencing the QoE evaluation: users who prefer dynamic videos enjoy 360° mulsemedia experiences and find them realistic.
Using olfactory media to enhance traditional multimedia content opens up novel opportunities for user interaction. Whilst the influence of olfaction on user Quality of Experience (QoE) in mulsemedia (multiple sensorial media) environments has been previously studied, the impact of the fundamental dimensions of scent intensity and valence (pleasantness) has been largely unexplored. This is precisely what we target in this paper, which reports the results of an empirical investigation examining how scent intensity and valence impact mulsemedia QoE. Accordingly, 54 participants were exposed to different odor valences and scent intensity levels while viewing three short multimedia clips. In particular, we examine both subjective (self-reported) and objective QoE metrics, as evidenced by user heart rates and eye gaze patterns. Results show that whilst eye gaze patterns are largely unaffected by the experimental conditions, valence does have a statistically significant impact upon user heart rates, as does intensity for two of the three clips employed in our study. In terms of subjective QoE, results indicate that valence impacts the sense of reality and enjoyment; however, varying intensity levels do not seem to differentially impact QoE, bringing into question the need for strong scent intensities.
A great deal of research effort has been put into exploring crossmodal correspondences in the field of cognitive science, which refer to the systematic associations frequently made between different sensory modalities (e.g. high pitch is matched with angular shapes). However, the possibilities cross-modality opens in the digital world have been relatively unexplored. Therefore, we consider that studying the plasticity and effects of crossmodal correspondences in a mulsemedia setup can bring novel insights into improving the human-computer dialogue and experience. Mulsemedia refers to the combination of three or more senses to create immersive experiences. In our experiments, users were shown six video clips associated with certain visual features based on color, brightness, and shape. We examined whether pairing them with crossmodally matching sound, a corresponding auto-generated haptic effect, and smell would lead to an enhanced user QoE. For this, we used an eye-tracking device as well as a heart rate monitor wristband to capture users’ eye gaze and heart rate whilst they were experiencing mulsemedia. After each video clip, we asked the users to complete an on-screen questionnaire with a set of questions related to smell, sound, and haptic effects targeting their enjoyment and perception of the experiment. Accordingly, the eye gaze and heart rate results showed a significant influence of the cross-modally mapped multisensory effects on the users’ QoE. Our results highlight that when the olfactory content is crossmodally congruent with the visual content, the users’ visual attention seems to shift towards the corresponding visual feature. Crossmodally matched media is also shown to result in an enhanced QoE compared to a video-only condition.
Designing a mulsemedia (multiple sensorial media) system entails first and foremost comprehending what it is, beyond the ordinary understanding that it engages users in digital multisensory experiences that stimulate senses in addition to sight and hearing, such as smell, touch, and taste. A myriad of programs that comprise a software system, several output devices to deliver sensory effects, computer media, among others, dwell deep in the realm of mulsemedia systems, making it a complex task for newcomers to get acquainted with their concepts and terms. Although there have been many technological advances in this field, especially for multisensory devices, there is a shortage of work that tries to establish common ground in terms of a formal and explicit representation of what mulsemedia systems encompass. Such a representation might be useful to avoid the design of feeble mulsemedia systems that can barely be reused owing to misconception. In this paper, we extend our previous work by proposing to establish a common conceptualization of mulsemedia systems through a domain reference ontology named MulseOnto to aid their design. We applied ontology verification and validation techniques to evaluate it, including assessment by humans and a data-driven approach, whereby the outcome is three successful instantiations of MulseOnto for distinct cases, making evident its ability to accommodate heterogeneous mulsemedia scenarios.
One of the main challenges in current multimedia networking environments is to find solutions that help accommodate the next generation of mobile application classes with stringent Quality of Service (QoS) requirements whilst enabling Quality of Experience (QoE) provisioning for users. One such application class, featured in this paper, is 360° mulsemedia (multiple sensorial media), which enriches 360° video by adding sensory effects that stimulate human senses beyond sight and hearing, such as the tactile and olfactory ones. In this paper, we present a conceptual framework for 360° mulsemedia delivery and a 360° mulsemedia-based prototype that enables users to experience 360° mulsemedia content. User evaluations revealed that higher video resolutions do not necessarily lead to the highest QoE levels in our experimental setup. Therefore, bandwidth savings can be leveraged with no detrimental impact on QoE.
Previous research has shown that adding multisensory media, or mulsemedia, to traditional audiovisual content has a positive effect on user Quality of Experience (QoE). However, the QoE impact of employing mulsemedia in 360° videos has remained unexplored. Accordingly, in this paper, a QoE study of watching a 360° video, with and without multisensory effects, in a full free-viewpoint VR setting is presented. The parametric space we considered to influence QoE consists of the encoding quality and the motion level of the transmitted media. To achieve our research aim, we propose a wearable VR system that provides multisensory enhancement of 360° videos. Then, we utilise its capabilities to systematically evaluate the effects of multisensory stimulation on perceived quality degradation for videos with different motion levels and encoding qualities. Our results make a strong case for the inclusion of multisensory effects in 360° videos, as they reveal that both user-perceived quality and enjoyment are significantly higher when mulsemedia (as opposed to traditional multimedia) is employed in this context. Moreover, these observations hold true independently of the underlying 360° video encoding quality; thus, QoE can be significantly enhanced with minimal impact on networking resources.
Sensory studies have emerged as a significant influence upon Human-Computer Interaction and traditional multimedia. Mulsemedia is an area that extends multimedia by addressing multisensorial response through the combination of at least three media, typically non-traditional media combined with traditional audiovisual content. In this paper, we explore the concepts of Quality of Experience and crossmodal correspondences through a case study of different types of mulsemedia setups. The content is designed following principles of crossmodal correspondence between different sensory dimensions and delivered through olfactory, auditory, and vibrotactile displays. Quality of Experience is evaluated through both subjective (questionnaire) and objective (eye gaze and heart rate) means. Results show that the auditory experience influences the olfactory sensorial responses and lessens the perception of lingering odor. Heat maps of the eye gazes suggest that the crossmodality between olfactory and visual content leads to increased visual attention on the factors of the employed crossmodal correspondence (e.g., color, brightness, shape).
Technological advances in computing have allowed multimedia systems to create more immersive experiences for users. Beyond the traditional senses of sight and hearing, researchers have observed that the use of smell, taste, and touch in such systems is becoming increasingly well received, leading to a new category of multimedia systems called mulsemedia (multiple sensorial media) systems. In parallel, these systems introduce heterogeneous technologies to deliver different sensory effects such as lighting, wind, vibration, and smell, under varied conditions and restrictions. This paradigm shift poses many challenges, mainly related to mulsemedia integration, delay, responsiveness, sensory effect intensities, wearable and other heterogeneous devices for delivering sensory effects, and remote delivery of mulsemedia components. In addition, new approaches to interacting with multimedia applications have emerged, such as multi-touch interfaces, voice processing, and brain-computer interfaces, giving rise to new kinds of complex interactive systems. In this article, we pinpoint fundamental challenges for delivering multisensory effects to heterogeneous systems. We propose an interoperable mulsemedia framework for coping with these challenges and meeting the emerging requirements, achieved through the evolution of an open distributed mulsemedia system. We changed its core following architectural and design patterns to accommodate different profiles of communication, connectivity, and sensory effects metadata standards according to the needs of the mulsemedia applications and devices available in the user's environment. The results include case studies in which the framework has been duly applied.
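The idea of accommodating different communication profiles behind a stable renderer core can be sketched with a simple strategy pattern. The class and method names below are hypothetical illustrations, not the framework's actual API:

```python
# Each profile encapsulates one way of delivering an effect command;
# the renderer core calls `send` without knowing which protocol is used.
class CommProfile:
    def send(self, effect: str) -> str:
        raise NotImplementedError

class HttpProfile(CommProfile):
    def send(self, effect: str) -> str:
        return f"POST /effects {effect}"

class MqttProfile(CommProfile):
    def send(self, effect: str) -> str:
        return f"PUBLISH effects/{effect}"

def deliver(profile: CommProfile, effect: str) -> str:
    # The core logic stays unchanged; only the injected profile varies,
    # which is what makes the system interoperable across devices.
    return profile.send(effect)
```

The profile to use can then be chosen at runtime from the capabilities of the devices present in the user's environment, without touching the delivery logic.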
Multisensory experiences have been increasingly applied in Human-Computer Interaction (HCI). In recent years, the development of haptic, olfactory, and even gustatory displays to create more immersive experiences has become commonplace. Companies are proposing new additions to the multisensory world and unveiling products that promise amazing experiences exploiting mulsemedia (multiple sensorial media), where users can perceive odors, tastes, and the sensation of wind blowing against their face. Whilst researchers, practitioners, and users alike are faced with a wide range of such new devices, relatively little work has been undertaken to summarize efforts and initiatives in this area. The current paper addresses this shortcoming in two ways: firstly, by presenting a survey of devices targeting senses beyond sight and hearing; secondly, by describing an approach to guide newcomers and experienced practitioners alike in building their own mulsemedia environment, both in a desktop setting and in an immersive 360° environment.
Human perception is inherently multisensorial, involving the five traditional senses: sight, hearing, touch, taste, and smell. In contrast to traditional multimedia, based on audio and visual stimuli, mulsemedia seeks to stimulate all the human senses. One way to produce multisensorial content is authoring videos with sensory effects. These effects are represented as metadata attached to the video content, which are processed and rendered through physical devices in the user’s environment. However, creating sensory effect metadata is not a trivial activity, because authors have to carefully identify different details in a scene, such as the exact points where each effect starts and finishes, as well as its presentation features such as intensity, direction, etc. It is a subjective task that requires accurate human perception and time. In this article, we aim to find out whether a crowdsourcing approach is suitable for authoring coherent sensory effects associated with video content. Our belief is that combining collective common sense, to indicate the time intervals of sensory effects, with expert fine-tuning is a viable way to generate sensory effects from the point of view of users. To carry out the experiment, we selected three videos from a public mulsemedia dataset and sent them to the crowd through a cascading microtask approach. The results showed that the crowd can indicate intervals in which users agree that there should be insertions of sensory effects, revealing a way of sharing authoring between the author and the crowd.
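One plausible way to turn crowd-marked intervals into points of agreement, before expert fine-tuning, can be sketched as follows. The voting rule shown is an assumption for illustration only, not the paper's actual cascading-microtask aggregation:

```python
def agreement(intervals, threshold, step=1.0, duration=None):
    """Given crowd-marked (start, end) intervals in seconds, return the
    time points where at least `threshold` workers agree an effect belongs."""
    if duration is None:
        duration = max(end for _, end in intervals)
    points = []
    t = 0.0
    while t <= duration:
        votes = sum(1 for start, end in intervals if start <= t <= end)
        if votes >= threshold:
            points.append(t)
        t += step
    return points

# Three hypothetical workers mark a wind effect; two of them
# overlap between seconds 2 and 4.
marks = [(1.0, 4.0), (2.0, 5.0), (8.0, 9.0)]
```

An expert could then refine only the agreed region (here, seconds 2 to 4) with intensity and direction, rather than scrubbing the whole video.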
Multiple Sensorial Media (MulSeMedia) systems transcend the traditional senses of sight and hearing, adding smell, touch, and taste into multimedia applications to create more immersive experiences for the users. Here, we provide a picture of the challenges and requirements for mulsemedia delivery and present a solution to cope with software and hardware heterogeneity.
The use of multiple senses in interactive applications has become increasingly feasible due to the upsurge of commercial, off-the-shelf devices for producing sensory effects. Creating Multiple Sensorial Media (MulSeMedia) immersive systems requires understanding their digital ecosystem. Mulsemedia systems encompass a set of applications and devices of different types assembled to communicate or express feelings from the virtual world to the real world. Despite existing standards, tools, and recent research devoted to them, there is still a lack of a formal and explicit representation of what mulsemedia is. Misconceptions could eventually lead to the construction of solutions that do not take into account reuse, integration, standardization, and other design features. In this paper, we propose to establish a common conceptualization of mulsemedia systems through a reference ontology, named MulseOnto, covering their main notions. To evaluate it, we applied ontology verification and validation techniques, including assessment by humans and a data-driven approach. The results showed that MulseOnto can be used as a consensual conceptual model for exploring knowledge about the whole chain of mulsemedia systems.
Human perception is inherently multisensory, involving sight, hearing, smell, touch, and taste. Mulsemedia systems combine traditional media (text, image, video, and audio) with non-traditional ones that stimulate other senses beyond sight and hearing. Whilst work has been done on some user-centred aspects that the distribution of mulsemedia data raises, such as synchronisation and jitter, this paper tackles complementary issues that temporality constraints pose on the distribution of mulsemedia effects. It aims at improving the response time interval in networked event-based mulsemedia systems based upon prior findings in this context. Thus, we reshaped the communication strategy of an open distributed mulsemedia platform called PlaySEM to work more efficiently with other event-based applications, such as games, VR/AR software, and interactive applications, that wish to stimulate other senses to increase users' immersion. Moreover, we added lightweight communication protocols to its interface to analyse whether they reduce network overhead. To carry out the experiment, we developed mock applications for different protocols to simulate an interactive application working with PlaySEM, measuring the delay between them. The results showed that by pre-processing sensory effects metadata before real-time communication, and by selecting the appropriate protocol, the response time interval in networked event-based mulsemedia systems can decrease remarkably.
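The pre-processing idea in the abstract above can be illustrated with a small sketch: instead of parsing verbose sensory effects metadata (SEM) during playback, an application compiles it once into compact binary commands, so the real-time path only has to ship a few bytes per event over a lightweight protocol. The effect codes and the command layout below are illustrative assumptions, not the actual PlaySEM wire format.

```python
# Sketch: compile SEM-like entries into fixed-size binary commands ahead
# of time, minimizing per-event work on the real-time path.
import struct

# Hypothetical effect-type codes (illustrative, not from any standard).
EFFECT_CODES = {"wind": 1, "vibration": 2, "scent": 3}

def precompile(sem_entries):
    """Turn (effect, intensity 0-100, duration_ms) tuples into 8-byte
    network-order commands: type (1B), intensity (1B), reserved (2B),
    duration in ms (4B)."""
    commands = []
    for effect, intensity, duration_ms in sem_entries:
        commands.append(struct.pack(
            "!BBHI", EFFECT_CODES[effect], intensity, 0, duration_ms))
    return commands

cmds = precompile([("wind", 80, 2000), ("vibration", 50, 500)])
print(len(cmds), len(cmds[0]))  # → 2 8
```

At event time, the application would just transmit the pre-built bytes, avoiding any metadata parsing inside the latency-sensitive loop.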
Mulsemedia applications have become increasingly popular, and there have been many efforts to increase users' Quality of Experience (QoE) with them. From the users' perspective, it is crucial that systems produce high levels of enjoyment and utility. Thus, many experimental tools have been developed and applied to different purposes such as entertainment, health, and culture. Despite that, little attention has been paid to the evaluation of mulsemedia tools and platforms. In this paper, we present a time evaluation of the integration between a distributed mulsemedia platform called PlaySEM and an interactive application in which users interact by gestures, in order to discover how long this process takes. We describe the test scenario and our approach for measuring this integration. Then, we discuss the results and point out implications to be taken into account for future similar solutions. The results showed values in the range of 27 ms to 67 ms on average spent throughout the process before the effective activation of the sensory effect devices on a wired network.
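The kind of time evaluation described above can be reduced to a simple pattern: timestamp an event when the interactive application emits it and again just before the (here simulated) renderer would activate a device, then report the elapsed interval. The function names are assumptions for the sketch; a real measurement would bracket the actual network round trip rather than a local call.

```python
# Sketch: measure the interval between event emission and the point
# where a sensory effect would be activated.
import time

def render_effect(event):
    # Stand-in for metadata lookup plus device command dispatch.
    return f"activate:{event}"

def measure_dispatch(event):
    """Return the renderer's result and the elapsed time in milliseconds."""
    t0 = time.perf_counter()
    result = render_effect(event)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    return result, elapsed_ms

result, elapsed_ms = measure_dispatch("gesture_wave")
print(result, f"{elapsed_ms:.3f} ms")
```

With mock applications on each end of a wired network, averaging many such intervals would yield figures comparable to the 27 ms to 67 ms range reported in the abstract.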
The advent of Kinect triggered the growth of applications aimed at natural interaction, gesture recognition, and interactive environments. Over time, we realized that interface solutions based on such types of interaction grew in a rapid and disorderly way, without any concern for the formalization of development stages. In addition, issues related to the way these interactions are represented, the context in which they occur, and the environmental behavior in response to these interactions became relevant. In this sense, this paper contributes a practical approach to a Where-What-Why-How model for the development of interactive environments. The proposal focuses on three main points: (i) the actions that must be performed by the interactive environment; (ii) the situations that trigger the execution of those actions by the environment; and (iii) the expected behavior once the situations have been recognized. The proposal is illustrated through a complete case study that covers all development stages and the physical implementation of a remote control that triggers actions in the real world (a TV) and in the virtual world (a graphical representation of the remote control), which together compose the interactive environment.
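The three points above map naturally onto situation-action rules: predicates over the interaction context (the situations), the actions they trigger, and the resulting environment state (the behavior). The gestures, rule structure, and TV example below are a toy reduction assumed for illustration, not the paper's actual notation.

```python
# Sketch: situations as predicates over a context, paired with actions
# that mutate the environment when the situation is recognized.
rules = [
    # Swipe right -> next TV channel.
    (lambda ctx: ctx.get("gesture") == "swipe_right",
     lambda env: env.update(tv_channel=env["tv_channel"] + 1)),
    # Open palm -> toggle TV power.
    (lambda ctx: ctx.get("gesture") == "palm_open",
     lambda env: env.update(tv_power=not env["tv_power"])),
]

def step(ctx, env):
    """Evaluate every situation against the context; fire matching actions."""
    for situation, action in rules:
        if situation(ctx):
            action(env)
    return env

env = {"tv_power": False, "tv_channel": 1}
step({"gesture": "palm_open"}, env)    # power on
step({"gesture": "swipe_right"}, env)  # channel up
print(env)  # → {'tv_power': True, 'tv_channel': 2}
```

In the case study's terms, the same rules could drive both the real TV and its graphical counterpart, since the behavior is expressed over an abstract environment state.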

The increasing interest in digital immersive experiences has drawn the attention of researchers to understanding human perception whilst adding sensory effects to multimedia systems such as VR (Virtual Reality) and AR (Augmented Reality) applications, multimedia players, and games. These so-called mulsemedia (multiple sensorial media) systems are capable of delivering wind, smell, and vibration, among others, along with audiovisual content, with the aim of enhancing users' Quality of Experience (QoE) in areas such as entertainment, healthcare, education, culture, and marketing. To support researchers' investigations, many standalone software solutions and incipient architectural proposals have been developed to bind these applications to sensory effect devices, such as wind fans, scent emitters, and vibration chairs. These devices, in turn, are constantly evolving, making it difficult to keep applications compatible with them. There is little or no interoperability between software and hardware in this realm, hindering reuse in other contexts. Every time a mulsemedia application is needed, new software is built mostly from scratch. This model has proven to be demanding, time-consuming, and costly, mainly because it requires researchers and developers alike to gain knowledge about new devices, connectivity, communication protocols, and other particulars. The fact is that building such systems imposes a number of challenges and requirements (which are discussed in this thesis), owing mainly to their ever-evolving and heterogeneous traits. As a result, few mulsemedia systems have remained reusable enough to be applied to different research purposes, in contrast to the use of open mulsemedia datasets.
Therefore, the main contribution of this thesis is a decoupled conceptual architecture to deal with the variability of scenarios in mulsemedia delivery systems, which includes recommendations to cope with the variation of end-user applications and sensory effect devices through the support and reuse of even unforeseen communication and connectivity protocols and sensory effects metadata (SEM). To evaluate it, an open-source and robust mulsemedia framework was developed. Then, a performance assessment was carried out on communication protocols for the integration between the framework and event-based applications, in which temporal restrictions play a role. Results indicated statistically significant differences in response time, providing directions for optimized integrations. Finally, a subjective user QoE evaluation comparing a monolithic mulsemedia system with this framework was undertaken, with results suggesting no statistically significant differences in user-perceived QoE between the systems across different aspects. Ultimately, it is hoped that this work fosters the areas of mulsemedia and HCI (Human-Computer Interaction) in the sense that researchers can leverage either the conceptual architecture to design mulsemedia delivery systems or the framework to carry out their experiments.
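The decoupling the thesis argues for can be illustrated in miniature: the delivery core depends only on an abstract connector, so new transports or devices can be plugged in without changing it. The class and method names below are assumptions made for this sketch, not the framework's actual API.

```python
# Sketch: a renderer core decoupled from concrete device transports via
# an abstract connector interface.
from abc import ABC, abstractmethod

class DeviceConnector(ABC):
    """Abstraction over a transport (e.g., serial, MQTT, WebSocket)."""
    @abstractmethod
    def send(self, command: str) -> str: ...

class LoopbackConnector(DeviceConnector):
    """Test double standing in for a real transport; records commands."""
    def __init__(self):
        self.log = []
    def send(self, command):
        self.log.append(command)
        return "ok"

class MulsemediaRenderer:
    def __init__(self, connector: DeviceConnector):
        self.connector = connector  # core depends only on the abstraction
    def play_effect(self, effect: str, intensity: int) -> str:
        return self.connector.send(f"{effect}@{intensity}")

connector = LoopbackConnector()
renderer = MulsemediaRenderer(connector)
print(renderer.play_effect("wind", 75))  # → ok
```

Supporting an unforeseen protocol then amounts to writing one new `DeviceConnector` subclass, leaving end-user applications and the renderer core untouched.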
Multisensory effect devices have been increasingly developed by research groups and companies with the aim of creating more immersive experiences for users by generating tactile, olfactory, and gustatory sensations. These technologies have been applied to areas such as entertainment, healthcare, education, culture, and marketing. This article surveys recent efforts on haptic, olfactory, and gustatory displays integrated for Human-Computer Interaction (HCI). Additionally, it gets down to the nitty-gritty of how to create a multisensory environment for a desktop and a 360° VR application and how to evaluate it from the users' viewpoint.