Papers by Nadia Thalmann

This paper presents an overview and the state of the art in the applications of 'affect' recognition in serious games for the support of patients in behavioral and mental disorder treatments and chronic pain rehabilitation, within the framework of the European project PlayMancer. ...
We have developed a 3-D graphics model for the representation of qualitative and quantitative variables for industry analysis. We present some of its key features, and we give an example of its application to an analysis of the Canadian Financial Services sector. The ...
Visemes are the visual counterparts of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach for speech animation, using visyllables, the visual counterparts of syllables. The approach results in a concatenative visyllable-based speech animation system. The key contribution of this paper lies in two main areas. Firstly, we define a set of visyllable units for spoken English along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that allows segmentation of a given phoneme stream into syllables and subsequently visyllables. Secondly, we have recorded the database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of t...
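The abstract above describes the syllabification step only at a high level. As a rough, hypothetical sketch of how a phoneme stream might be segmented into syllables with a maximal-onset rule (the vowel inventory, the set of legal onsets and the function below are illustrative assumptions, not the authors' implementation):

    # Hypothetical sketch of phoneme-stream syllabification (not the paper's code).
    # The vowel inventory and legal onset clusters are assumed for illustration only.
    VOWELS = {"AA", "AE", "AH", "EH", "ER", "IH", "IY", "OW", "UW"}
    VALID_ONSETS = {(), ("K",), ("T",), ("S",), ("S", "T"), ("P", "L")}

    def syllabify(phonemes):
        """Split a phoneme list into (onset, nucleus, coda) syllables,
        giving each following syllable the longest legal onset."""
        nuclei = [i for i, p in enumerate(phonemes) if p in VOWELS]
        syllables, prev_end = [], 0
        for k, n in enumerate(nuclei):
            next_n = nuclei[k + 1] if k + 1 < len(nuclei) else len(phonemes)
            cluster = phonemes[n + 1:next_n]           # consonants up to the next nucleus
            split = len(cluster)
            for j in range(len(cluster) + 1):          # maximal onset for the next syllable
                if tuple(cluster[j:]) in VALID_ONSETS:
                    split = j
                    break
            syllables.append((tuple(phonemes[prev_end:n]), phonemes[n], tuple(cluster[:split])))
            prev_end = n + 1 + split
        return syllables

    # e.g. syllabify(["S", "T", "IH", "K", "ER"]) -> [(("S", "T"), "IH", ()), (("K",), "ER", ())]

Each resulting syllable would then be mapped to the corresponding visyllable unit from the recorded database.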
This issue contains six papers. The first four are extended versions of papers presented at CASA 2011 in Chengdu, China; the last two are regular papers.

In the first paper, Min Meng, Lubin Fan and Ligang Liu from Zhejiang University, in Hangzhou, China, present a novel sketch-based tool, called iCutter (short for Intelligent Cutter), for cutting out semantic parts of 3D shapes. When performing a cutting task, the user only needs to draw a freehand stroke to roughly specify where the cut should be made, without much attention to precision. iCutter then intelligently returns the best cut that meets the user’s intention and expectation. The authors demonstrate various examples to illustrate the flexibility and applicability of their iCutter tool.

In the second paper, Inmaculada Rodriguez and Anna Puig from the University of Barcelona and Marc Esteva from the Autonomous University of Barcelona, in Spain, propose a generic interaction framework that controls intelligent objects’ actions in different virtual world platforms. These actions are based on the state of external, platform-independent AI-based systems such as multiagent and rule-based systems. The authors have evaluated the proposed framework by means of two intelligent objects, a door and a notice board, incorporated in the Second Life and OpenWonderland virtual worlds. These objects make it possible to work on three advanced aspects of a serious virtual environment.

In the third paper, Gengdai Liu from Xidian University in Xi’an, China, Mingliang Xu from Zhengzhou University in China, Zhigeng Pan from Hangzhou Normal University in China, and Abdennour El Rhalibi from Liverpool John Moores University in the UK propose a novel human motion model for generating and editing motions with multiple factors. A set of motions performed by several actors in various styles is captured to construct a well-structured motion database. A multilinear independent component analysis model, which combines independent component analysis with a conventional multilinear framework, is then adopted to construct a multifactor model. With this model, new motions can be synthesised by interpolation and by solving optimization problems for the specific factors.

The fourth paper, by Elena Kokkinara, Oyewole Oyekoya and Anthony Steed from University College London in the UK, models automatic attention behaviour using a saliency model that generates plausible targets for combined gaze and head motions. The model was compared with the default behaviour of the Second Life system in an object-observation scenario and with real actors’ behaviour in a conversation scenario. Results from a study run within the Second Life system demonstrate a promising attention model that is not only believable and realistic but also adaptable to varying tasks, without any prior knowledge of the virtual scene.

In the fifth paper, Jean-Paul Gourret from Université de La Rochelle in France and Amir Hariri and Philippe Liverneaux, both from Strasbourg University Hospitals in France, describe a work designed to control the movement of hand structural agents under external action using the Implicit Animation driven by the Explicit Animation technique.
Starting from the configuration of a hand at rest obtained with a 3D scanner, and after meshing the structural agents, they seek the configuration of the rigid agents under the orthopaedic surgeon’s external action and the mutual reliance of deformable and rigid agents. They developed a model and software tools to support this interactive application with adaptive execution. The resulting technique is applied to the bone structure consistency of a specific human hand in the context of virtual hand orthopaedic surgery.

The last paper, by Marcos Paulo Berteli Slomp, Matthew W. Johnson, Toru Tamaki and Kazufumi Kaneda from Hiroshima University in Japan, describes a straightforward approach for rendering raindrops under temporal conditions such as slow-motion or paused simulations. The proposed technique consists of a preprocessing stage, which generates a raindrop mask, and a run-time stage that renders raindrops as screen-aligned billboards. The mask’s contents are adjusted based on the viewpoint, viewing direction and raindrop position. The proposed method renders millions of raindrops at real-time rates on current graphics hardware, making it suitable for applications that require high visual quality without compromising performance.
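As a rough illustration of the run-time stage summarised above, the sketch below (a hypothetical Python/NumPy version; the paper targets GPU rendering, and the function name, drop dimensions and mask-selection rule are assumed here) builds screen-aligned billboard corners for a set of raindrop positions from the camera’s right and up vectors:

    # Hypothetical CPU-side sketch (not the authors' GPU implementation): expand each
    # raindrop position into a screen-aligned quad spanned by the camera basis vectors.
    import numpy as np

    def raindrop_billboards(positions, cam_right, cam_up, width=0.01, height=0.05):
        """positions: (N, 3) raindrop centres in world space.
        cam_right, cam_up: unit vectors spanning the image plane.
        Returns (N, 4, 3) billboard corners, one quad per drop."""
        r = cam_right * (width / 2.0)
        u = cam_up * (height / 2.0)
        corners = np.array([-r - u, r - u, r + u, -r + u])     # (4, 3) corner offsets
        return positions[:, None, :] + corners[None, :, :]     # broadcast to (N, 4, 3)

    # Each quad would then be textured with the precomputed raindrop mask; which part of
    # the mask is used for a given drop depends on the viewpoint, viewing direction and
    # drop position, as stated above (the exact selection rule is not given in this summary).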