GAMES creatively take place in imaginative worlds informed by, but often not limited by, real-world challenges, which advantageously provides an accelerated environment for innovation, where concepts and ideas can be explored unencumbered by physical or conventional restrictions. This editorial considers the role of artificial intelligence (AI) in GAMES in the context of emerging systems seemingly beginning to exhibit artificial general intelligence (AGI), and argues that synthetic, constrained worlds offer much fertile ground in which to explore, understand, and prepare for its increasing presence in our lives.
Games Futures collect short opinion pieces by industry and research veterans and new voices envisioning possible and desirable futures and needs for games and playable media. This inaugural series features eight of over thirty pieces.
We present several mixed-reality-based remote collaboration settings using consumer head-mounted displays, and investigated how two people are able to work together in these settings. We found that the person in the augmented reality (AR) system is regarded as the "leader" (i.e., they provide a greater contribution to the collaboration), whereas no similar "leader" emerges in AR-to-AR and AR-to-VRBody settings. We also found that these patterns of leadership emerged only for 3D interactions and not for 2D interactions. Results about the participants' experience of leadership, collaboration, embodiment, presence, and copresence shed further light on these findings.
We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deformations. To achieve interactive digital pose design, we train a shallow, fully connected neural network (KSNN) on input motor activations to solve for the simulated mesh vertex positions. Our fully automatic synthetic training algorithm enables a first-of-its-kind active learning framework (GEN-LAL) for generative modeling of facial pose simulations. With adaptive selection, we reduce training time to less than half that of the unmodified training approach for each new Audio-Animatronic® figure.
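As a hedged illustration of the regression at the heart of this pipeline, the sketch below shows the shape of a shallow, fully connected network mapping motor activations to mesh vertex positions. The `PoseRegressor` name, layer sizes, activation choice, and use of PyTorch are all assumptions for illustration; the abstract does not specify the KSNN architecture.

```python
# Minimal sketch, assuming a shallow fully connected regressor from
# n_motors activations to n_vertices * 3 vertex coordinates.
# Architecture details are illustrative, not the paper's KSNN spec.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self, n_motors: int, n_vertices: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_motors, hidden),
            nn.Tanh(),                         # smooth response suits quasi-static deformation
            nn.Linear(hidden, n_vertices * 3)  # flattened (x, y, z) per vertex
        )
        self.n_vertices = n_vertices

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # activations: (batch, n_motors) -> (batch, n_vertices, 3)
        return self.net(activations).view(-1, self.n_vertices, 3)

# Usage: fit against offline simulation snapshots (motors -> vertices),
# then query interactively. Dimensions below are hypothetical.
model = PoseRegressor(n_motors=20, n_vertices=5000)
pred = model(torch.rand(8, 20))  # eight sample pose queries
```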
In games and playable media, almost nothing is as it was at the turn of the millennium. Digital and analog games have exploded in reach, diversity, and relevance. Digital platforms and globalisation have shifted and fragmented their centres of gravity and how they are made and played. Games are converging with other media, technologies, and arts into a wide field of playable media. Games research has similarly exploded in volume and fragmented into disciplinary specialisms. All this can be deeply disorienting. The journal Games: Research and Practice wants to offer a lighthouse that helps readers orient themselves in this new, ever-shifting reality of games industry and games research.
We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system first uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibration. We then capture fine-scale surface details using a patch-based Shape from Shading (SfS) approach. We pre-compute the patch-wise constant Moore–Penrose inverse matrix of the resulting linear system to achieve real-time performance. Our method achieves high interactive frame-rates, and experiments show that our new approach is capable of reconstructing high-fidelity geometry with results comparable to off-line techniques. We illustrate this through comparisons with off-line and on-line related works, and include demonstrations of novel face detail shader effects.
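A minimal numpy sketch of the precomputation trick the abstract describes: when each patch's linear system has a fixed matrix, its Moore–Penrose inverse can be computed once offline, so the per-frame least-squares solve collapses to a single matrix multiply. The matrix `A` and vector `b` here are random stand-ins, not the actual SfS operators.

```python
# Sketch: precomputed pseudoinverse turns a per-frame least-squares
# solve into one matrix multiply. A is a stand-in system matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))   # fixed per-patch system matrix (offline)
A_pinv = np.linalg.pinv(A)          # Moore–Penrose inverse, computed once

def solve_patch(b: np.ndarray) -> np.ndarray:
    # Per-frame: minimum-norm least-squares solution via one multiply.
    return A_pinv @ b

b = rng.standard_normal(64)         # per-frame shading constraints (stand-in)
x = solve_patch(b)
# Matches the conventional least-squares solve, without per-frame factorization.
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```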
2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), 2017
Stop motion animation evolved in the early days of cinema with the aim of creating an illusion of movement with static puppets posed manually each frame. Modern stop motion movies have introduced 3D printing processes in order to acquire animations more accurately and rapidly. However, due to the nature of this technique, every frame needs to be computer-generated, 3D printed and post-processed before it can be recorded. Therefore, a typical stop motion film could require many thousands of props to be created, resulting in a laborious and expensive production. We address this with a real-time interactive Augmented Reality system which generates virtual in-between poses according to a reduced number of key frame physical props. We perform deformation of the surface camera samples to accomplish smooth animations with retained visual appearance and incorporate a diminished reality method to allow virtual deformations that would, otherwise, reveal undesired background behind the animated mesh...
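A toy sketch of the in-betweening concept, assuming two key poses are available as corresponding vertex arrays. The paper's system deforms camera surface samples and handles revealed background via diminished reality; this shows only a naive linear blend between printed key poses.

```python
# Hedged sketch: synthesize virtual in-between poses between two
# physical key-frame props, represented here as (V, 3) vertex arrays.
import numpy as np

def inbetween(key_a: np.ndarray, key_b: np.ndarray, t: float) -> np.ndarray:
    # Linear blend of corresponding vertices; t in [0, 1].
    return (1.0 - t) * key_a + t * key_b

pose0 = np.random.rand(1000, 3)   # stand-in scan of printed prop A
pose1 = np.random.rand(1000, 3)   # stand-in scan of printed prop B
frames = [inbetween(pose0, pose1, t) for t in np.linspace(0.0, 1.0, 12)]
```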
Authoring virtual terrains can be a challenging task. Procedural and stochastic methods for automated terrain generation produce plausible results but lack intuitive control of the terrain features, while data driven methods offer more creative control at the cost of a limited feature set, higher storage requirements and blending artefacts. Moreover, artists often prefer a workflow involving varied reference material such as photographs, concept art, elevation maps and satellite images, for the incorporation of which there is little support from commercial content-creation tools. We present a sketch-based toolset for asset-guided creation and intuitive editing of virtual terrains, allowing the manipulation of both elevation maps and 3D meshes, and exploiting a layer-based interface. We employ a frequency-band subdivision of elevation maps to allow using the appropriate editing tool for each level of detail. Using our system, we show that a user can start from various input types: st...
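One possible reading of the frequency-band subdivision, sketched with scipy Gaussian filtering: the heightmap is split into detail bands plus a base band so each scale can be edited independently and summed back exactly. The band count and sigma values are illustrative assumptions, not the paper's parameters.

```python
# Sketch: Laplacian-pyramid-style band split of an elevation map,
# enabling per-scale editing. Sigmas are invented for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(heightmap: np.ndarray, sigmas=(1, 4, 16)):
    bands, residual = [], heightmap.astype(np.float64)
    for s in sigmas:                       # fine -> coarse
        low = gaussian_filter(residual, sigma=s)
        bands.append(residual - low)       # detail at this scale
        residual = low                     # pass coarser content down
    bands.append(residual)                 # base (lowest-frequency) band
    return bands

def merge_bands(bands):
    return np.sum(bands, axis=0)           # telescoping sum reconstructs exactly

terrain = np.random.rand(256, 256)         # stand-in elevation map
bands = split_bands(terrain)
bands[0] *= 0.5                            # e.g., soften only the finest detail
edited = merge_bands(bands)
```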
We introduce Shadow Retargeting, which maps real shadow appearance to virtual shadows given a corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real shadow samples observed in the camera frame, we recover the deformed shadow appearance efficiently. Our method uses geometry priors for the shadow casting object and a planar receiver surface. Inspired by image retargeting approaches [VTP*10], we describe a novel local search strategy, steered by importance-based deformed shadow estimation. Results are presented on a range of objects, deformations and illumination conditions in real-time Augmented Reality (AR) on a mobile device. We demonstrate the practical application of the method in generating otherwise laborious in-betweening frames for 3D printed stop motion animation.
We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive representation, we display the content in real-time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) using a sparse set of cameras placed in the vicinity of the potential viewer locations. The cameras are placed with an optimization process so that the rendered data maximise coverage with minimum redundancy, depending on the lighting environment complexity. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored to high performance on GPUs for Virtual Reality applications. We detail a real-time rendering algorithm using multi-view ray casting and view dependent decompression. Compression rates of 150:1 and greater are ...
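The abstract does not give the placement objective in detail, so the sketch below substitutes a simple greedy set-cover heuristic: repeatedly pick the candidate camera that newly covers the most viewer-region samples. It illustrates "maximise coverage with minimum redundancy" only, ignoring the lighting-complexity term; all names and radii are assumptions.

```python
# Illustrative greedy stand-in for the camera placement optimization.
import numpy as np

def greedy_cameras(candidates, samples, radius, k):
    # candidates: (C, 3) possible camera positions
    # samples:    (S, 3) points in the potential viewer region
    d = np.linalg.norm(candidates[:, None] - samples[None], axis=2)  # (C, S)
    reach = d < radius                     # which samples each camera covers
    covered = np.zeros(len(samples), dtype=bool)
    chosen = []
    for _ in range(k):
        gain = (reach & ~covered).sum(axis=1)  # newly covered samples per camera
        best = int(gain.argmax())
        if gain[best] == 0:
            break                          # nothing new to cover: stop early
        chosen.append(best)
        covered |= reach[best]
    return chosen, covered

rng = np.random.default_rng(1)
cams, covered = greedy_cameras(rng.random((50, 3)), rng.random((500, 3)),
                               radius=0.3, k=4)
```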
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2020
Augmented reality devices enable new approaches for character animation; e.g., given that character posing is three dimensional in nature, it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present PoseMMR, allowing multiple users to animate characters in a Mixed Reality environment, much as a stop-motion animator would manipulate a physical puppet, frame-by-frame, to create the scene. We explore how PoseMMR can facilitate immersive posing, animation editing, version control and collaboration, and provide a set of guidelines to foster the development of immersive technologies as tools for collaborative authoring of character animation.
We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media [Rose 2012] aim is to allow guests to physically enter rendered movies with novel non-linear storytelling capability. With the ability to change the outcome of the story through touch and physical movement, we enable the agency of guests to make choices with consequences in immersive movies. We extend IRIDiuM [Koniaris et al. 2016, 2017] to allow branching streams of full-motion light field video depending on user actions in real time. The interactive narrative guides guests through the immersive story with lighting and spatial audio design and integrates both walkable and air haptic actuators.
In this paper we present a novel approach to author vegetation cover of large natural scenes. Unlike stochastic scatter-instancing tools for plant placement (such as multi-class blue noise generators), we use a simulation based on ecological processes to produce layouts of plant distributions. In contrast to previous work on ecosystem simulation, however, we propose a framework of global and local editing operators that can be used to interact directly with the live simulation. The result facilitates an artist-directed workflow with both spatially- and temporally-varying control over the simulation's output. We compare our result against random-scatter solutions, also employing such approaches as a seed to our algorithm. We demonstrate the versatility of our approach within an iterative authoring workflow, comparing it to typical artistic methods.
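To make the ecological-simulation idea concrete, here is a deliberately tiny growth/seeding/competition step; all rates and radii are invented for illustration, and the paper's editing operators and plant model are far richer than this.

```python
# Toy sketch of an ecologically driven plant layout step: plants grow,
# occasionally seed nearby, and die under competition from larger
# close neighbours. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
plants = rng.random((100, 3))              # columns: x, y, size

def step(plants, spread=0.05, compete=0.03):
    plants[:, 2] += 0.01                   # growth
    # seeding: ~10% of plants spawn one seedling near the parent
    parents = plants[rng.random(len(plants)) < 0.1]
    seeds = parents + rng.normal(0.0, spread, parents.shape)
    seeds[:, 2] = 0.01                     # seedlings start small
    plants = np.vstack([plants, seeds])
    # competition: remove plants shadowed by a larger close neighbour
    d = np.linalg.norm(plants[None, :, :2] - plants[:, None, :2], axis=2)
    bigger = plants[None, :, 2] > plants[:, None, 2]
    dead = ((d < compete) & bigger).any(axis=1)
    return plants[~dead]

for _ in range(20):                        # advance the live simulation
    plants = step(plants)
```

Editing operators in the paper's spirit would then act on this live state, e.g., locally culling or reseeding a painted region between steps.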
Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer's viewpoint, but at the cost of distorted viewing for other viewers. We present a method for constructing non-linear projections as a combination of anamorphic rendering of selected objects whilst reverting to normal perspective rendering of the rest of the scene. Our study defines a scene consisting of five characters, with one of these characters selectively rendered in anamorphic perspective. We conducted an evaluation experiment and demonstrate that the tracked viewer-centric imagery for the selected character results in improved gaze and engagement estimation. Critically, this is performed without sacrificing the other viewers' viewing experience. In addition, we present findings on the perception of gaze direction for regularly viewed characters located off-center to the origin, where...
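A hedged sketch of the viewer-centric projection machinery such a system plausibly builds on, following Kooima's well-known generalized (off-axis) perspective projection: given a tracked eye position and the physical screen corners, build the frustum that makes geometry read correctly from that viewpoint. The screen corners and eye position below are made up, and the accompanying view rotation/translation of the full method is omitted for brevity.

```python
# Off-axis frustum from tracked eye + screen corners (Kooima-style).
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near=0.1, far=100.0):
    # pa, pb, pc: screen lower-left, lower-right, upper-left corners
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye         # eye -> corners
    d = -vn @ va                                      # eye-to-screen distance
    l, r = (vr @ va) * near / d, (vr @ vb) * near / d
    b, t = (vu @ va) * near / d, (vu @ vc) * near / d
    m = np.zeros((4, 4))                              # glFrustum-style matrix
    m[0, 0], m[0, 2] = 2 * near / (r - l), (r + l) / (r - l)
    m[1, 1], m[1, 2] = 2 * near / (t - b), (t + b) / (t - b)
    m[2, 2], m[2, 3] = -(far + near) / (far - near), -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# Hypothetical setup: 2m x 1.2m screen at z=0, viewer tracked off-centre.
pa = np.array([-1.0, -0.6, 0.0]); pb = np.array([1.0, -0.6, 0.0])
pc = np.array([-1.0, 0.6, 0.0])
P = off_axis_projection(np.array([0.4, 0.1, 1.5]), pa, pb, pc)
```

Selective anamorphosis would apply such a viewer-tracked projection only to the chosen character while the rest of the scene keeps its normal perspective projection.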
This is a position paper concerning the embodied dance learning objectives of the CAROUSEL+ project, which aims to impact how online immersive technologies influence multi-user interaction and communication, with a focus on dancing and learning dance together online. We aim to enable shared online social immersive experiences across the reality-virtuality continuum, synchronizing audio, visual and haptic rendering. In teaching and learning to dance remotely, our models should support accessibility, style transfer and adaptation of multi-modal feedback according to observations of skill, strength and flexibility.